diff --git "a/stack_exchange/AI/AIQ&A 2019.csv" "b/stack_exchange/AI/AIQ&A 2019.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/AI/AIQ&A 2019.csv" @@ -0,0 +1,60736 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense +9775,2,,9751,1/1/2019 7:01,,1,,"

If you have a gray scale image, that means you are getting data from one sensor. +If you have an RGB image, that means you are getting data from three sensors. +If you have a CMYK image, that means you are getting data from four sensors.

+ +

So, channels can be considered as the same information seen from different perspectives (here, color).

+ +

If you see how the kernel (for example 5*5*3) moves, it moves only in XY direction and not in the channel direction. So, you are trying to learn features in XY direction from all the channels together.

+ +

But, if you exchange the dimensions as you mentioned, your XY dimensions become 200*3 or 3*200 and your channel dimension becomes 200. In this case, you are no longer moving the kernel in the actual XY spatial space of the image, so, in my opinion, it doesn't make sense. You would be contradicting the basic concept of a CNN by doing so.

+ +

The concept of a CNN is that you want to learn features from the spatial domain of the image, which is the XY plane. So, you should not swap the dimensions as you described.
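
+ +

To make this concrete, here is a small sketch (PyTorch is used purely for illustration, and the sizes are arbitrary) showing that a convolutional kernel spans all input channels at once and only slides over the XY plane:

+ +

    import torch
    import torch.nn as nn

    # A 3-channel (RGB) image batch: (batch, channels, height, width)
    x = torch.randn(1, 3, 200, 200)

    # One 5x5 kernel: it covers all 3 channels at once
    conv = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=5)

    print(conv.weight.shape)  # torch.Size([1, 3, 5, 5]) -> spans channels, slides over XY
    print(conv(x).shape)      # torch.Size([1, 1, 196, 196]) -> output varies only in XY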

+",20760,,,,,1/1/2019 7:01,,,,0,,,,CC BY-SA 4.0 +9778,2,,9442,1/1/2019 12:01,,0,,"

To generate high-level programming language code in the context of genetic programming, the Grammatical Evolution technique could be a good start. It allows you to generate syntactically correct samples according to a grammar, so there will be no (syntactic) garbage in the population.

+ +

In the original implementation it has (very simple and) quite destructive mutation and crossover operators. This could be changed by making the operators more sophisticated, so they respect the actual tree-like structure of the samples and the grammar constraints, but effectively it will result in implementing the classical tree-based Genetic Programming system (which isn't bad).

+ +

The evaluation of such samples should be done by executing them in an appropriate environment (the actual VM and the desired map or whatever).
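
+ +

As a rough sketch of the core Grammatical Evolution idea, the standard genotype-to-program mapping can be written in a few lines; the toy grammar below is purely hypothetical, and each integer codon selects a production rule modulo the number of options, so every genome maps to a syntactically valid program:

+ +

    import random

    # Toy grammar: each non-terminal maps to a list of possible expansions (hypothetical example)
    GRAMMAR = {
        '<expr>': [['<expr>', '<op>', '<expr>'], ['<var>']],
        '<op>':   [['+'], ['-'], ['*']],
        '<var>':  [['x'], ['y']],
    }

    def map_genome(genome, start='<expr>', max_wraps=2):
        # Standard GE mapping: each codon picks a production rule modulo the rule count.
        symbols, out, i = [start], [], 0
        limit = len(genome) * (max_wraps + 1)
        while symbols:
            sym = symbols.pop(0)
            if sym not in GRAMMAR:          # terminal symbol: emit it
                out.append(sym)
                continue
            if i >= limit:                  # ran out of codons (even after wrapping): give up
                return None
            rules = GRAMMAR[sym]
            choice = rules[genome[i % len(genome)] % len(rules)]
            symbols = list(choice) + symbols
            i += 1
        return ' '.join(out)

    genome = [random.randint(0, 255) for _ in range(10)]
    print(map_genome(genome))   # e.g. 'x * y' -- always syntactically valid (or None if mapping ran out)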

+",15881,,,,,1/1/2019 12:01,,,,0,,,,CC BY-SA 4.0 +9779,2,,9354,1/1/2019 18:42,,0,,"

In the Large-Scale Study of Curiosity-Driven Learning paper (the prequel to the Random Network Distillation work), in their discussion of Random Features, they reference 3 papers that discuss this:

+ +
  1. K. Jarrett, K. Kavukcuoglu, Y. LeCun, et al. What is the Best Multi-Stage Architecture for Object Recognition?
  2. A. M. Saxe, P. W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On Random Weights and Unsupervised Feature Learning
  3. Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep Fried Convnets
+ +

I just briefly glanced over these. For now, one interesting idea from [2] is to use randomly initialized networks for architecture search. To evaluate the architecture for the task, you don't have to train it; you can just randomly initialize it and measure its performance.
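
+ +

To make that idea concrete, here is a small, hypothetical sketch of such an evaluation: the candidate network is left at its random initialization and only a simple linear readout is fit on top, so the resulting score reflects the architecture rather than learned weights (scikit-learn is used purely for illustration):

+ +

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def score_random_architecture(width, rng):
        # Random, untrained hidden layer; only the linear readout is fit
        W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], width))
        f = lambda A: np.maximum(A @ W, 0.0)   # ReLU features from random weights
        clf = LogisticRegression(max_iter=1000).fit(f(X_tr), y_tr)
        return clf.score(f(X_te), y_te)

    rng = np.random.default_rng(0)
    for width in (16, 64, 256):
        print(width, score_random_architecture(width, rng))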

+",10985,,,,,1/1/2019 18:42,,,,0,,,,CC BY-SA 4.0 +9781,1,,,1/1/2019 22:07,,0,100,"

I want to create an NHL game predictor and have already trained one neural network on game data.

+ +

What I would like to do is train another model on player seasonal/game data and combine the two models to achieve better accuracy.

+ +

Is this approach feasible? If it is, how do I go about doing it?

+ +

EDIT:

+ +

I have currently trained a neural network to classify the probability of the home team winning a game on a dataset that looks like this:

+ +
h_Won/Lost  h_metric2 h_metric3 h_metric4 a_metric2 a_metric3 a_metric4 h_team1 h_team2 h_team3 h_team4 a_team1 a_team2 a_team3 a_team4
+ 1            10       10         10        10         10        10      1       0        0      0         0      1        0      0
+ 1            10       10         10        10         10        10      1       0        0      0         0      1        0      0
+ 1            10       10         10        10         10        10      1       0        0      0         0      1        0      0
+
+ +

and so on.

+ +

I am preparing a dataset of player-data for each game that will have the shape of this:

+ +
Player     PlayerID    Won/Lost     team      opponent     metric1     metric2   
+ Henke         1           1          NY          CAP         10          10
+
+ +

Hopefully, this new player-level dataset will contain some good, recognisable features that are predictive of whether a team is going to win.

+ +

Now, say I have these two trained neural networks and they both have an accuracy of 70% by themselves. I want to combine them both in the hope of achieving better predictive power. How is this achieved? How will the test dataset be structured?

+",21077,,21077,,1/1/2019 23:22,1/1/2019 23:22,is it possible to train several Neural Networks on different types of data and combine them?,,1,0,,,,CC BY-SA 4.0 +9782,2,,9781,1/1/2019 23:08,,1,,"

The term you need is “model ensembles”; that’s how models are combined. It is hard to be more specific since you don’t give a language or any other details.
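
+ +

For a concrete (if toy) illustration of a simple averaging ensemble in Python, assuming you train one model per dataset and average their predicted win probabilities (scikit-learn and random stand-in data are used only for illustration):

+ +

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-ins for the two datasets (game-level and player-level features)
    rng = np.random.default_rng(0)
    X_game, X_player = rng.normal(size=(200, 8)), rng.normal(size=(200, 6))
    y = (X_game[:, 0] + X_player[:, 0] > 0).astype(int)   # 1 = home team wins

    game_model = LogisticRegression().fit(X_game, y)
    player_model = RandomForestClassifier(random_state=0).fit(X_player, y)

    # Average the two models' predicted win probabilities, then threshold
    p = 0.5 * game_model.predict_proba(X_game)[:, 1] + 0.5 * player_model.predict_proba(X_player)[:, 1]
    prediction = (p > 0.5).astype(int)
    print((prediction == y).mean())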

+",21079,,,,,1/1/2019 23:08,,,,1,,,,CC BY-SA 4.0 +9783,2,,9725,1/2/2019 0:45,,3,,"

The Project Summarized

+ +

The project goal appears to be a common one: Routing correspondence in an efficient manner to maintain good but low cost customer and public relations. A few features of the project were mentioned.

+ +
    +
  • Neural network project
  • +
  • Received some design and project history from predecessor
  • +
  • Classifies messages for telcos
  • +
  • Sends results to support groups at appropriate locales
  • +
  • Uses 2 relu layers, ending with a softmax
  • +
  • Word2Vec embedding
  • +
  • Trained with a clean language file
  • +
  • All special characters and numbers removed
  • +
+ +

The requirements for current development were indicated. The current work is to develop an artificial network that places incoming messages into one of two categories accurately and reliably.

+ +
    +
  • Moderated — insulting, fraudulent in purpose (spam), trivial routine
  • +
  • Operative — relevant question requiring internal human attention
  • +
+ +

Research and development is beginning along reasonable lines.

+ +
    +
  • Trained with 300,000 messages
  • +
  • Word2Vec used
  • +
  • 40% of classified as moderated
  • +
  • Permuted cycles and epochs
  • +
  • Achieved 90% accuracy
  • +
  • Loss stays near 0.5
  • +
  • In test, operative accuracy 0.9, moderated accuracy max of 0.6
  • +
+ +

First Obstacle and Feasibility

+ +

The first obstacle encountered is that, in QA using production environment data, 90% of the messages were left unclassified, 5% of the classifications were accurate, and the remaining 5% were inaccurately classified.

+ +

It is correct that the even split of 5% accuracy and 5% inaccuracy indicates that information learned is not yet transferable to the quality assurance test phase using real production environment messages. In information theory phraseology, no bits of usable information were transferred and entropy remained unchanged on this first experiment.

+ +

These kinds of disappointments are not uncommon when first approaching the use of AI in an existing business environment, so this initial outcome should not be taken as a sign that the idea won't work. The approach will likely work, especially with foul language, which is not dependent on cultural references, analogies, or other semantic complexity.

+ +

Recognizing notices that are for audit purposes only, such as social network notifications or purchase confirmations, can be handled through rules. The rule creation and maintenance can theoretically be automated too, and some proprietary systems exist that do exactly that. Such automation can be learned using the appropriate training data, but real time feedback is usually employed, and those systems are usually model based. That is an option for further down the R&D road.

+ +

The estimated scope of the project is probably too small, but that's not a big surprise either. Most projects suffer from early overoptimism. A pertinent quote from Redford's The Milagro Beanfield War illuminates the practical purpose of optimism.

+ +
+

        APPARITION

+ +

I don't know if your friend knows what he's in for.

+ +

        AMARANTE

+ +

Nobody would do anything if they knew what they were in for.

+
+ +

Initial Comments

+ +

It is not necessary to reduce the number of message categories to two, but there is nothing wrong with starting R&D by refining approach and high level design with the simplest case.

+ +

The last layer may train more efficiently if a binary threshold activation is used instead of softmax, since only one bit of output is needed when there are only two categories. This also forces the network training objective to be the definitive selection of a category, which may benefit the overall rate of R&D progress.

+ +

There may be ways of improving outcomes by adding more metrics in the code beyond just 'accuracy'. Others who work with such details every day may have more domain specific knowledge in this regard.

+ +

Culture and Pattern Detection

+ +

Insults and curse words are entirely different kinds of things. Foul language is a linguistic symbol or phrase that fits into a broadcasting or publishing category of prohibition. The rules of prohibition are well established in most languages and could be held in a configuration file along with the permutations of each symbol or phrase. In the case of sh*t, related forms include sh*tty, sh*thead, and so on.
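
+ +

As a rough sketch of what such a configuration-driven check could look like (the terms and the configuration format below are purely hypothetical), the base symbols and their permutations can be expanded into a single compiled regular expression per locale:

+ +

    import re

    # Hypothetical configuration: base term -> known permutations/derived forms
    PROHIBITED = {
        'badword': ['badword', 'badwordy', 'badwordhead'],
        'darn':    ['darn', 'darned'],
    }

    def compile_prohibition_pattern(config):
        # Longer variants first so they win over their prefixes; \b keeps whole-word matching
        variants = sorted({v for forms in config.values() for v in forms}, key=len, reverse=True)
        return re.compile(r'\b(' + '|'.join(map(re.escape, variants)) + r')\b', re.IGNORECASE)

    pattern = compile_prohibition_pattern(PROHIBITED)
    print(bool(pattern.search('That was a DARNED good game')))  # True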

+ +

It is also useful to distinguish the sub-sets of foul language.

+ +
    +
  • Cursing (expressing the wish for calamity to befall the recipient)
  • +
  • Swearing (considered blasphemy by some)
  • +
  • Exclamations that are considered foul by publishers and broadcasters
  • +
  • Additional items parents don't want their children to hear
  • +
  • Edge cases like crap
  • +
+ +

The term foul language is a super-set of these.

+ +

Distribution Alignment

+ +

Learning algorithms and theory are based on probabilistic alignment of feature distributions between training and use. The distribution of training data must closely resemble the distribution found when the trained AI component is later used. If not, the convergence of learning processes on some optimal behavior defined by gain or loss functions may succeed, but the execution of that behavior in the business or industry may fail.

+ +

Internationalization

+ +

Multilingual AI should usually be fully internationalized. Training on one dialect and then relying on the result in a distinct dialect will almost always perform poorly. That creates a data acquisition challenge.

+ +

As stated above, classification and learning depend on the alignment of statistical distributions between data used in training and data processing relying on the use of what was learned. This is also true of human learning, so this requirement will not likely be overcome any time soon.

+ +

All these forms of foul language must be programmed flexibly across these cultural dimensions.

+ +
    +
  • Character set
  • +
  • Collation order
  • +
  • Language
  • +
  • Dialect
  • +
  • Other locale related determinants
  • +
  • Education level
  • +
  • Economic strata
  • +
+ +

Once one of these is included in the model (which will be imperative) then there is no reason why the others cannot be included at little cost, so it is wise to begin with standard dimensions of flexibility. The alternative will likely lead to costly branching complexity to represent specific rules, which could have been made more maintainable by generalizing for international use up front.

+ +

Insult Recognition

+ +

Insults require comprehension beyond the current state of technology. Cognitive science may change that in the future, but projections are mere conjecture.

+ +

Use of a regular expression engine with a fuzzy logic comparator is achievable and may appease the stakeholders of the project, but identifying insults may be infeasible at this time, and the expectations should be set with stakeholders to avoid later surprises. Consider these examples.

+ +
    +
  • The nose on your face looks like a camel.
  • +
  • Kiss the darkest part of my lily white. (From the Avatar screenplay)
  • +
+ +

The word combinations in these are not likely to be in some data set you can use for training, so Word2Vec will not help in these types of cases. Additional layers may assist with proper handling of at least some of the semantic and referential complexity of insults, but only some.
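
+ +

For the achievable part, a regular expression engine or fuzzy comparator over known phrasings, a minimal sketch using only the Python standard library might look like the following; the fragment list is purely hypothetical, and, as the examples above show, novel insults will still slip through:

+ +

    from difflib import SequenceMatcher

    KNOWN_INSULT_FRAGMENTS = [            # hypothetical examples
        'looks like a camel',
        'kiss the darkest part',
    ]

    def insult_score(message):
        # Slide a window the length of each fragment over the message and keep the best fuzzy match
        msg = message.lower()
        best = 0.0
        for frag in KNOWN_INSULT_FRAGMENTS:
            for i in range(max(1, len(msg) - len(frag) + 1)):
                window = msg[i:i + len(frag)]
                best = max(best, SequenceMatcher(None, window, frag).ratio())
        return best

    print(insult_score('The nose on your face looks like a camel.'))  # ~1.0
    print(insult_score('Lovely weather today, is it not?'))           # noticeably lower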

+ +

Explicit Answers to Explicit Questions

+ +
+

Is it possible to accomplish this task with a neural network?

+
+ +

Yes, in combination with excellence in higher level system design and best practices for internationalization.

+ +
+

Is the structure of this neural network correct for this task?

+
+ +

The initial experiments look like a reasonable beginning toward what would later be correct enough. Do not be discouraged, but don't expect the first pass at something like this to look much like what passes user acceptance testing a year from now. Experts can't pull that rate of R&D progress off, unless they hack and cobble something together from previous work.

+ +
+

Are 300k messages enough to train the neural network?

+
+ +

Probably not. In fact, 300m messages will not catch all combinations of cultural references, analogies, colloquialisms, variations in dialect, plays on words, and games that spammers play to avoid detection.

+ +

What would really help is a feedback mechanism so that production outcomes are driving the training rather than a necessarily limited data set. Canned data sets are usually restricted in the accuracy of their probabilistic representation of social phenomena. None will likely infer dialect and other locale features to better detect insults. A Parisian insult may have nothing in common with a Creole insult.

+ +

The feedback mechanism must be based on impressions in some way to become and remain accurate. The impressions must be labelled with all the locale data that is reasonably easy to collect and possibly correlated to the impression.

+ +

This implies the use of rules acquisition, fuzzy logic control, reinforcement learning, or the application of naive Bayesian approaches somewhere appropriate within the system architecture.

+ +
+

Do I need to clean up the data from uppercase, special characters, numbers etc?

+
+ +

Numbers can be relevant. Because of historical events and religious texts, 13 and 666 might be indications of something offensive, respectively. One can also use numbers and punctuation to convey word content. Here are some examples of spam detection resistant click bait.

+ +
    +
  • I've got a 6ex opportunity 4u.
  • +
  • Wanna 69?
  • +
  • Values are rising 50%! We have 9 investment choices 4 you to check out.
  • +
+ +

The meaning of the term special character is vague and ambiguous. Any character in UTF-8 is legitimate for almost all Internet communications today. HTML5 provides additional entities beginning with an ampersand and ending with a semicolon. (See https://dev.w3.org/html5/html-author/charref.)

+ +

Filtering these out is a mistake. Spammers leverage these standards to penetrate spam detection. In this example, the stroke similarities of a capital ell (L) and those of the British pound symbol can be exploited to produce spam detection resistant click bait.

+ +
    +
  • Do you like hot £egs?
  • +
+ +

Removing special characters that fit within the Internet standards of UTF-8 and HTML entities will likely lead to disaster. It is recommended not to follow that part of the predecessor's design.

+ +

Regarding emoticons and other ideograms, these are linguistic elements that may represent in text encoding the volume, pitch, or tone modulation of phonetics, or they may represent face or body language. In many languages ideograms are used in place of words. For a global system running in parallel with the blogosphere, emoticons are part of linguistic expression.

+ +

For that reason, they are not significantly different than word roots, prefixes, suffixes, conjugations, or word pairs as linguistic elements which can also express emotion as well as logical reasoning. For the learning algorithm to learn categorization behavior in the presence of ideograms, the ideograms must remain in training features and later in real time processing of those features using the results of training.

+ +

Additional Information

+ +

Some additional information is covered in this existing post: Spam Detection using Recurrent Neural Networks.

+ +

Since spam detection is closely related to fraud detection, the spammer fraudulently acting like a relationship already exists with their recipients, this existing post may be of assistance too: Can we implement GAN (Generative adversarial neural networks) for classication problem like Fraud detecion?

+ +

Another resource that may help is this: https://www.tensorflow.org/tutorials/representation/word2vec

+",4302,,4302,,1/4/2019 0:04,1/4/2019 0:04,,,,0,,,,CC BY-SA 4.0 +9784,2,,9677,1/2/2019 5:28,,1,,"

I've discovered Doc2Vec which does something similar to what I am trying to accomplish. This doesn't exactly answer my question of why the network I was trying to build doesn't work, but at least it shows how indexed outputs can be pulled from a network, with open source to show how it is built.

+ +

https://datascience.stackexchange.com/questions/23969/sentence-similarity-prediction

+",20930,,,,,1/2/2019 5:28,,,,0,,,,CC BY-SA 4.0 +9786,1,9788,,1/2/2019 8:58,,2,475,"

I'm currently working on a regression problem and I have 10 inputs/attributes.

+ +

What should I do if there are correlations between different features of the input data? Does the correlation between inputs affect the performance (e.g. accuracy) of the model?

+",21084,,2444,,8/19/2019 22:43,8/19/2019 22:43,Does the correlation between inputs affect the model performance?,,1,0,,,,CC BY-SA 4.0 +9788,2,,9786,1/2/2019 11:30,,5,,"

Non-correlation does not imply independence, that is, if two features are not correlated (i.e. zero correlation), it does not mean that they are independent. But (non-zero) correlation implies dependence (see https://stats.stackexchange.com/q/113417/82135 for more details). So, if you have non-zero correlation between two features, it means they are dependent. If they are dependent, then one feature gives you information about the other and vice-versa: in a certain way, one of the two is, at least partially, redundant.

+ +

Unnecessary features might not affect the performance (e.g. the accuracy) of a model. However, if you reduce the number of features, the learning process might actually be faster.

+ +

You may want to try some dimensionality reduction technique, in order to reduce the number of features.
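
+ +

As a minimal illustration (scikit-learn is used purely for illustration, and the synthetic data and the number of retained components are arbitrary choices), PCA can remove the linear redundancy among correlated inputs before fitting a regressor:

+ +

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    X[:, 5:] = X[:, :5] + 0.01 * rng.normal(size=(500, 5))   # last 5 features nearly duplicate the first 5
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)

    model = make_pipeline(PCA(n_components=5), LinearRegression())
    model.fit(X, y)
    print(model.score(X, y))  # R^2 stays high despite dropping half the dimensions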

+",2444,,2444,,6/4/2019 0:46,6/4/2019 0:46,,,,0,,,,CC BY-SA 4.0 +9790,2,,5308,1/2/2019 17:46,,0,,"

Algorithms can learn to cheat:

+
+

"A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.”

+

"...a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new." +
Source: This clever AI hid data from its creators to cheat at its appointed task (TechCrunch)

+
+",1671,,-1,,6/17/2020 9:57,1/2/2019 17:46,,,,0,,,,CC BY-SA 4.0 +9794,1,9801,,1/2/2019 20:05,,0,94,"

I’m a researcher and I’m currently conducting a research project. I will conduct a study where I would like to trigger different emotions using chatbots on a smartphone (e.g. on Facebook Messenger).

+ +

Are there any existing chatbots which are able to trigger different emotions intentionally (also negative ones)?

+",21103,,,,,1/2/2019 23:30,Chatbots triggering emotions,,1,0,,6/7/2022 15:14,,CC BY-SA 4.0 +9795,2,,8407,1/2/2019 20:09,,1,,"

Check out the source code of Deep Image Prior; it does a remarkable job of guessing what's missing in order to repair images with a variety of damage.

+",3370,,,,,1/2/2019 20:09,,,,0,,,,CC BY-SA 4.0 +9797,2,,7222,1/2/2019 20:33,,0,,"

This is a huge growth area in the impact of AI on HR -- see all the companies we've found that do candidate matching for instance (disclaimer I work for CognitionX). Under the hood, there are techniques that don't rely on vocabulary such as Facebook's FastText but need more training data.

+ +

Here are some other resources +Job matching using unsupervised learning (k-nearest neighbour) + see paper

+",3370,,,,,1/2/2019 20:33,,,,0,,,,CC BY-SA 4.0 +9799,5,,,1/2/2019 21:44,,0,,"

-Artificial chemistry and the origins of life
-Self-assembly, growth, and development
-Self-replication and self-repair
-Systems and synthetic biology
-Perception, cognition, and behavior
-Embodiment and enactivism
-Collective behaviors of swarms
-Evolutionary and ecological dynamics
-Open-endedness and creativity
-Social organization and cultural evolution
-Societal and technological implications
-Philosophy and aesthetics
-Applications to biology, medicine, business, education, or entertainment

+ +

See: Artificial Life Forum (MIT Press) | International Society for Artificial Life

+",1671,,1671,,1/2/2019 21:44,1/2/2019 21:44,,,,0,,,,CC BY-SA 4.0 +9800,4,,,1/2/2019 21:44,,0,,For question about artificial systems that exhibit the behavioral characteristics of natural living systems.,1671,,1671,,1/2/2019 21:44,1/2/2019 21:44,,,,0,,,,CC BY-SA 4.0 +9801,2,,9794,1/2/2019 23:30,,1,,"

Emotions can of course be triggered by lots of different things. I think the most rich source could well be socialbots like Mitsuku.com and Zo.ai -- Steve Worswick is the owner of Mitsuku and may be interested in helping you by doing (appropriately filtered) chat log queries. You can get him on Twitter at @Mitsuku.

+",3370,,,,,1/2/2019 23:30,,,,5,,,,CC BY-SA 4.0 +9802,2,,9766,1/3/2019 0:17,,0,,"

The simple answer is: tweaking an image in ways that are unnoticeable to humans but completely fool the software. E.g., a cat that is identified as 99% likely ""to be guacamole"": https://mashable.com/2017/11/02/mit-researchers-fool-google-ai-program/#CU7dSAfQ5sqY

+",3370,,,,,1/3/2019 0:17,,,,0,,,,CC BY-SA 4.0 +9808,1,,,1/3/2019 14:20,,7,2542,"

I understand the minimax algorithm, but I am unable to understand deeply the minimax algorithm with alpha-beta pruning, even after having looked up several sources (on the web) and having tried to read the algorithm and understand how it works.

+ +

Do you have a good source that explains alpha-beta pruning clearly, or can you help me to understand the alpha-beta pruning (with a simple explanation)?

+",21125,,2444,,2/26/2019 9:40,2/26/2019 9:40,Can someone help me to understand the alpha-beta pruning algorithm?,,2,2,0,,,CC BY-SA 4.0 +9812,1,9830,,1/3/2019 16:42,,6,2805,"

In the DQN paper, it is written that the state-space is high dimensional. I am a little bit confused about this terminology.

+

Suppose my state is a high dimensional vector of length $N$, where $N$ is a huge number. Let's say I solve this task using $Q$-learning and I fix the state space to $10$ vectors, each of $N$ dimensions. $Q$-learning can easily work with these settings as we need only a table of dimensions $10$ x number of actions.

+

Let's say my state space can have an infinite number of vectors each of $N$ dimensions. In these settings, Q-learning would fail as we cannot store Q-values in a table for each of these infinite vectors. On the other hand, DQN would easily work, as neural networks can generalize for other vectors in the state-space.

+

Let's also say I have a state space of infinite vectors, but each vector is now of length $2$, i.e., small dimensional vectors. Would it make sense to use DQN in these settings? Should this state-space be called high dimensional or low dimensional?

+",21131,,2444,,2/3/2021 19:09,2/4/2021 11:00,What is a high dimensional state in reinforcement learning?,,3,0,,,,CC BY-SA 4.0 +9813,1,,,1/3/2019 17:45,,4,1139,"

This is a question related to Neural network to detect "spam"?. +I'm wondering how it would be possible to handle the emotion conveyed in text. In informal writing, especially among a juvenile audience, it's usual to find emotion expressed as repetition of characters. For example, ""Hi"" doesn't mean the same as ""Hiiiiiiiiiiiiiii"" but ""hiiiiii"", ""hiiiiiiiii"", and ""hiiiiiiiiii"" do.

+ +

A naive solution would be to preprocess the input and remove the repeating characters after a certain threshold, say, 4. This would probably reduce most long ""hiiiii"" to 4 ""hiiii"", giving a separate meaning (weight in a context?) to ""hi"" vs ""long hi"".

+ +

The naivete of this solution appears when there are combinations. For example, +haha vs hahahahaha or lol vs lololololol. Again, we could write a regex to reduce lolol[ol]+ to lolol. But then we run into the issue of hahahaahhaaha where a typo broke the sequence.

+ +

There is also the whole issue of Emoji. Emoji may seem daunting at first since they are special characters. But once understood, emoji may actually become helpful in this situation. For example, 😂 may mean a very different thing than 😂😂😂😂😂, but 😂😂😂😂😂 may mean the same as 😂😂😂😂 and 😂😂😂😂😂😂.

+ +

The trick with emojis, to me, is that they might actually be easier to parse. Simply add spaces between 😂 to convert 😂😂😂😂 to 😂 😂 😂 😂 in the text analysis. I would guess that repetition would play a role in training, but unlike ""hi"", and ""hiiii"", Word2Vec won't try to categorize 😂 and 😂😂 as different words (as I've now forced to be separate words, relying in frequency to detect the emotion of the phrase).

+ +

Even more, this would help the detection of ""playful"" language such as 😠😂😂😂, where the 😠 emoji might imply there is anger, but alongside 😂 and especially when repeating 😂 multiple times, it would be easier for a neural network to understand that the person isn't really angry.

+ +

Does any of this make sense or I'm going in the wrong direction?

+",17272,,,,,2/28/2020 1:02,Handling emotion in informal text (Hi vs HIIIIII!!!!)?,,2,0,,,,CC BY-SA 4.0 +9815,2,,9812,1/3/2019 18:01,,2,,"

Yes, it makes sense to use DQN in state space with small number of dimensions as well. It doesn't really matter how big your state dimension is, but if you have state with 2 dimensions for instance you wouldn't use convolutional layers in your neural net like its used in the paper you mentioned, you can use ordinary fully connected layers, it depends on the problem.

+",20339,,,,,1/3/2019 18:01,,,,0,,,,CC BY-SA 4.0 +9816,1,,,1/3/2019 20:09,,0,869,"

I am trying to implement a Deep Q Network to play Asteroids. Unfortunately, I am not sure how to calculate the Q value exactly, if I am exploring. For example, the agent is exploring for 1 second (otherwise makes no sense; I cannot let it just explore one step). Unfortunately, it makes a mistake at 0.99s, and the reward collapses.

+ +

At the moment, I am using the following formula to evaluate or update the Q value:

+ +

$$Q_{new,t} = reward + \gamma Q_{max,t+1}$$

+ +

But how do I know the max Q value of the next step? I could consider the best Q value the network says, but this is not necessarily true.

+ +

You can see the current implementation at the following URL: +https://github.com/SuchtyTV/RLearningBird/blob/master/src/main/java/rlgame/Brain.java.

+",19062,,2444,,2/16/2019 0:14,2/16/2019 0:14,How do I update the Q values of a Deep Q Network when exploring?,,1,0,,,,CC BY-SA 4.0 +9819,1,,,1/3/2019 23:19,,1,205,"

Background

+ +

My understanding is that the input neurons compute a weighted sum when moving from one layer to another.

+ +

+$$ \sum_i a_i w_i = a'_{k} $$

+ +

But to compute this weighted sum, the sum must be discrete. Is there any known method to compute the sum when the activation is a continuous function? Is the formula below of any consequence to problems in artificial intelligence? Can anyone give a specific problem where it might be useful?

+ +

My Method

+ +

Let $b_r = \sum_{d \mid r} a_d\mu(\frac{r}{d})$. We prove that if the $b_r$'s are small enough, the result is true (where $\mu$ is the Möbius function).

+ +
+

Claim: If $\lim_{n \to \infty} \frac{\log^2(n)}{n}\sum_{r=1}^n |b_r| = 0$ and $f$ is smooth, then $$\lim_{k \to \infty} \lim_{n \to \infty} \sum_{r=1}^n a_rf\left(\frac{kr}{n}\right)\frac{k}{n} = \left(\lim_{s \to 1} \frac{1}{\zeta(s)}\sum_{r=1}^\infty \frac{a_r}{r^s}\right)\int_0^\infty f(x)dx.$$

+
+ +

I will not go over the proof here, but for those who are interested, see https://math.stackexchange.com/questions/2888976/a-rough-proof-for-infinitesimals Here I will merely state what the formula means:

+ +

Consider a curve $f(x)$, and suppose one wishes to perform a weighted sum in the limiting case of this function.

+ +

+ +

Consider the curve $f(x)$. Split it into intervals of width $h = k/n$, then add the first strip $d_1$ times: $f(h) \cdot d_1$; then the second strip $d_2$ times: $f(2h) \cdot d_2$; and so on. Hence, $d_r$ can be thought of as the weight at $f(rh)$.

+",21136,,21136,,3/25/2020 16:52,3/25/2020 16:52,Method to compute the sum when the activation is a continuous function?,,1,0,,,,CC BY-SA 4.0 +9820,2,,9813,1/3/2019 23:50,,1,,"

These kinds of repetitions in text can place recurrence demands on learning algorithms that may or may not be handled without special encoding.

+ +
    +
  • Hi.
  • +
  • Hiiii!
  • +
  • HIIIIIIIII
  • +
  • Hi!!!!!!!!!!!!!!
  • +
+ +

These have the same meaning on one level, but different emotional content and therefore different correlations to categories when detecting the value of an email, which in the simplest case is the placement of a message in one of two categories.

+ +
    +
  • Pass to a recipient
  • +
  • Archive only
  • +
+ +

This is colloquially called spam detection, although not all useless emails are spam and some messages sent by organizations that broadcast spam may be useful, so technically the term spam is not particularly useful. The determinant should usually be the return on investment to the recipient or the organization receiving and categorizing the message.

+ +
+

Is reading the message and potentially responding likely of greater value than the cost of reading it?

+
+ +

That is a high level paraphrase of what the value or cost function must represent when AI components are employed to learn about or track close to (in continuous learning) some business or personal optimality.

+ +

The question proposes a normalization scheme that truncates long repetitions of short patterns in characters, but truncation is necessarily destructive. Compression of some type that will both preserve nuance and work with the author's use of Word2Vec is a more flexible and comprehensive approach.

+ +

In the case of playful sequences of characters, it is anthropomorphic to imagine that an artificial network will understand playfulness or anger; however, existing learning systems can certainly learn to use character sequences that humans would call playful or angry in the function that emerges to categorize the message containing them. Just remember that model-free learning is not at all like cognition, so the term understanding places an expectation on the mental capacities of the AI component that it may not possess.

+ +

Since there is no indication that a recurrent or recursive network will be used, and the entire message is instead represented in a fixed-width vector, the question becomes which of these two approaches will produce the best outcomes after learning.

+ +
    +
  • Leaving the text uncompressed so that an 'H' character followed by ten 'i' characters is distinct as a word from an 'H' character followed by five 'i' characters
  • +
  • Compressing the text to ""Hi [9xi]"" and ""Hi [4xi]"" respectively or some such word bifurcation.
  • +
+ +

This second approach also produces reasonable behavior in the other cases mentioned, such as ""😠😂😂😂"" pre-processed into ""😠😂 [2x😂]"" (a rough sketch of this compression step is given after the list below). What the algorithm in Word2Vec will do with each of these two choices, and how its handling of them will affect outcomes, is difficult to predict. Experiments must be run. Three courses of action are advisable.

+ +
    +
  • Build a test fixture to allow quick evaluation of outcomes for various trials.
  • +
  • Experiment with diligence. Don't leave any potentially interesting case untried.
  • +
  • Label as much production data as reasonably possible and use that as well as the canned data so that the above options can be evaluated in permuted combinations with the differences in pattern distribution between the canned and live data.
  • +
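
+ +

A minimal sketch of the compression idea follows; the token format Hi [9xi] is just the convention used in this answer, not a standard, and the run-length threshold is an arbitrary choice:

+ +

    import re

    def compress_repeats(text):
        # Replace a run of 3 or more identical characters (letters, punctuation, or emoji)
        # with a single copy followed by a count token, e.g. Hiiiiiiiiii -> Hi [9xi]
        def repl(match):
            ch = match.group(1)
            return f'{ch} [{len(match.group(0)) - 1}x{ch}]'
        return re.sub(r'(.)\1{2,}', repl, text)

    print(compress_repeats('Hiiiiiiiiii'))    # Hi [9xi]
    print(compress_repeats('😠😂😂😂'))        # 😠😂 [2x😂]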
+",4302,,,,,1/3/2019 23:50,,,,0,,,,CC BY-SA 4.0 +9821,2,,9816,1/4/2019 5:07,,3,,"

For tabular Q-learning, the q-values for state s and action a are updated according to

+ +

$$Q(s, a) \gets Q(s, a) + \alpha [(r + \gamma max_{a'} Q(s', a')) - Q(s,a)]$$

+ +

where $\alpha$ is the learning rate, $\gamma$ is the discount factor, and $(r + \gamma max_{a'} Q(s', a')) - Q(s,a)$ is the difference between the current estimate of the q-value, $Q(s,a)$, and the target, $r + \gamma max_{a'} Q(s', a')$.

+ +

The target q-value is based on the greedy policy, not the exploratory policy. Q-learning is theoretically guaranteed to converge to the optimal policy for any behavior policy (like $\epsilon$-greedy) that is guaranteed to visit every state and action pair an infinite number of times. See Section 6.5 of the Sutton and Barto book for more details.
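
+ +

For concreteness, a minimal sketch of that tabular update rule (the state and action counts are arbitrary, and how the transition tuple is obtained is left out):

+ +

    import numpy as np

    n_states, n_actions = 16, 4                # arbitrary sizes for illustration
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.99

    def q_update(s, a, r, s_next, done):
        # The target uses the greedy max over next-state actions,
        # no matter how the behavior policy actually chose `a`.
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])

    q_update(s=3, a=1, r=0.0, s_next=4, done=False)
    print(Q[3, 1])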

+ +

In contrast to Q-learning, the target q-value for SARSA is $r + Q(s', a')$, where $a'$ is chosen from an exploratory behavior policy like $\epsilon$-greedy. For SARSA the learned q-values are dependent on the behavior policy and therefore not guaranteed to converge to the optimal policy. A behavior policy that intentionally acted randomly for multiple consecutive actions, as in your example Asteroids exploratory policy, would likely lead to learning different q-values than would be learned for an $\epsilon$-greedy behavior policy.

+ +

Unfortunately Q-learning's theoretical guarantees of convergence to an optimal policy go out the window when nonlinear function approximation is introduced, as is the case for deep neural networks. Nevertheless, in the Deep Q-Networks paper, the q-value function is updated using a target value based on the maximum q-value for the next state. Specifically, if $Q(s, a, w)$ is a q-value function parameterized by weights $w$, then the weights are updated by

+ +

$$w \gets w + \alpha [(r + \gamma max_{a'} Q(s', a', w^-)) - Q(s, a, w)] \nabla_w Q(s,a,w)$$

+ +

where $w^-$ are the parameters of the target network used to stabilize training (see the paper for more details). This update rule is chosen to minimize the loss function

+ +

$$L(w) = E\left[\left(r + \gamma max_{a'} Q(s', a', w^-) - Q(s, a, w)\right)^2\right]$$

+ +

For your own implementation, it may be helpful to see a code example of the Deep Q-Networks parameter updates. A tensorflow implementation is available in the function build_train in the OpenAI Baselines DeepQ code.

+",15444,,,,,1/4/2019 5:07,,,,1,,,,CC BY-SA 4.0 +9824,1,,,1/4/2019 9:28,,0,333,"

I am looking for a non-ML method for two chat bots to communicate with each other about a specific topic. I am looking for an ""explainable AI"" method, as opposed to a ""black-box"" one (like a neural network).

+",20378,,2444,,5/1/2019 17:03,5/1/2019 17:03,How do I create chatbots without machine learning?,,1,0,,,,CC BY-SA 4.0 +9825,2,,9824,1/4/2019 11:15,,1,,"

The easiest non-ML way would be to use a finite state machine. You could model various states of your conversation topics, and certain utterances of your bots could advance the bot's internal model along different paths. The complexity depends on the complexity of the topic.

+ +

You can then enhance the transitions with probabilities, and later move on towards ML by transforming it into an HMM.

+ +

However, even simple topics will probably lead to fairly complex state machines. But you should be able to keep track of what is going on in your conversation nevertheless.

+ +

Update: just to make it a bit clearer, I was thinking along the lines of having states for particular stages in the conversation. You could either have one model for the whole conversation, or one per participant.

+ +

Initially, there would be a state 'greeting'. Possible transitions would be to a further state 'greeting' (the response of the person who has been greeted), or that could be skipped to states such as 'statement', 'question', etc. 'Question' would have transitions to 'answer', 'ignore question', 'counter/clarification question' etc. The level of detail depends on your application.
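
+ +

A minimal sketch of such a state machine follows; the states and the transition table are hypothetical and would need to be adapted to your topic:

+ +

    # Conversation states and allowed transitions (hypothetical example)
    TRANSITIONS = {
        'start':         ['greeting'],
        'greeting':      ['greeting', 'statement', 'question'],
        'statement':     ['statement', 'question', 'goodbye'],
        'question':      ['answer', 'ignore', 'clarification'],
        'answer':        ['statement', 'question', 'goodbye'],
        'ignore':        ['statement', 'goodbye'],
        'clarification': ['answer'],
        'goodbye':       [],
    }

    class ConversationFSM:
        def __init__(self):
            self.state = 'start'

        def advance(self, utterance_type):
            # Reject utterances that are not allowed from the current conversation stage
            if utterance_type not in TRANSITIONS[self.state]:
                raise ValueError(f'{utterance_type!r} not allowed from {self.state!r}')
            self.state = utterance_type
            return self.state

    bot = ConversationFSM()
    for step in ['greeting', 'greeting', 'question', 'answer', 'goodbye']:
        print(bot.advance(step))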

+",2193,,2193,,1/7/2019 9:26,1/7/2019 9:26,,,,2,,,,CC BY-SA 4.0 +9826,2,,9808,1/4/2019 12:43,,2,,"

Suppose that you have already searched a part of the complete search tree, for example the complete left half. This may not yet give you the true game-theoretic value for the root node, but it can already give you some bounds on the game-theoretic value that the player to play in the root node (let's say, the max player) can guarantee by moving into that part of the search tree. Those bounds / guarantees are:

+ +
    +
  • $\alpha$: the minimum score that the maximizing player already knows it can guarantee if we move into the part of the search tree searched so far. Maybe it can still do better (get higher values) by moving into the unsearched part, but it can already definitely get this value.
  • +
  • $\beta$: the maximum score that the minimizing player already knows it can guarantee if we move into the part of the search tree searched so far. Maybe it can still do better (get lower values) by moving into the unsearched part, but it can already definitely get this value.
  • +
+ +

The intuitive idea behind alpha-beta pruning is to prune chunks of the search tree that become uninteresting for either player because they already know they can guarantee better based on the $\alpha$ or $\beta$ bounds.

+ +
+ +

For a simple example, suppose $\alpha = 1$, which means that the maximizing player already has explored a part of the search tree such that it can guarantee at least a value of $1$ by playing inside that part (the minimizing player has no options inside that entire tree to reduce the value below $1$, if the maximizing player plays optimally in that part).

+ +

Suppose that, in the current search process, we have arrived at a node where the minimizing player is to play, and it has a long list of child nodes. We evaluate the first of those children, and find a value of $0$. This means that, under the assumption that we reach this node, the minimizing player can already guarantee a value of $0$ (and possibly get even lower, we didn't evaluate the other children yet). But this is worse (for the maximizing player) than the $\alpha = 1$ bound we already had. Without evaluating any of the other children, we can already tell that this part of the search tree is uninteresting, that the maximizing player would make sure that we never end up here, so we can prune the remaining children (which could each have large subtrees below them).
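
+ +

To tie this back to the algorithm, here is a minimal sketch of minimax with alpha-beta pruning over a toy game tree (leaves are numbers, internal nodes are lists of children); the example tree reproduces the pruning situation described above:

+ +

    import math

    def alphabeta(node, alpha, beta, maximizing):
        # A node is either a number (leaf value) or a list of child nodes.
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            value = -math.inf
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:      # the min player above already has a better (lower) option
                    break              # prune the remaining children
            return value
        else:
            value = math.inf
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:      # the max player above already has a better (higher) option
                    break
            return value

    # Root is a max node; the second min-subtree is pruned after its first child (value 0 < alpha = 1)
    tree = [[1, 2], [0, 5], [3]]
    print(alphabeta(tree, -math.inf, math.inf, True))  # 3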

+",1641,,,,,1/4/2019 12:43,,,,2,,,,CC BY-SA 4.0 +9828,1,9942,,1/4/2019 13:39,,9,3467,"

There are several activation functions, such as ReLU, sigmoid or $\tanh$. What happens when I mix activation functions?

+ +

I recently found that Google has developed the Swish activation function, which is $x \cdot \text{sigmoid}(x)$. Can altering the activation function increase accuracy on a small neural network problem, such as the XOR problem?

+",21143,,2444,,5/15/2019 15:16,12/15/2022 20:40,What happens when I mix activation functions?,,1,0,,,,CC BY-SA 4.0 +9829,1,9843,,1/4/2019 14:11,,2,296,"

I am not sure if I understood the Q-learning algorithm correctly. Therefore, I will give a concrete example and ask if someone can tell me how to update the Q-value correctly.

+ +

First I initialized a Neural Network with random weights. It shall henceforth evaluate the Q Value for all possible actions(4) given a State S.

+ +

Then the following happens. The agent is playing and is exploring. +For 3 steps the Q Values evaluated were: +(0,-1,-5,0), (0,-1,0,0), (0,-.6,0,0)

+ +

The rewards given were: 0, 0, 1. The actions taken were: (1, 1, 1). In the random walk example (same rewards given), they were: (1, 2, 3).

+ +

So what are the new Q - Values, assuming a discount factor of 0.99 and the learning rate 0.1?

+ +

For simplicity, the states are single numbers: 1, 1.3, 2.4, where 2.4 is the state that ends the game...

+ +

The same example holds for exploiting. Is the algorithm the same here?

+ +

Here you see my last implementation:

+ +
    public void rlearn(ArrayList<Tuple> tupels, double learningrate, double discountfactor) {
+
+    //newQ = sum of all rewards you have got through
+    for(int i = tupels.size()-1; i > 0; i--) {
+        MLData in = new BasicMLData(45);
+        MLData out = new BasicMLData(5);
+
+        //Add State as in
+        int index = 0;
+        for(double w : tupels.get(i).statefirst.elements) {
+            in.add(index++, w);
+        }
+
+        //Now start updating Q - Values 
+        double qnew = 0;
+        if(i <= tupels.size()-2){
+            qnew = tupels.get(i).rewardafter + discountfactor*qMax(tupels.get(i+1));
+        } else {
+            qnew = tupels.get(i).rewardafter;
+        }
+
+        tupels.get(i).qactions.elements[tupels.get(i).actionTaken] = qnew;
+        //Add Q Values as out
+        index = 0;
+        for(double w : tupels.get(i).qactions.elements) {
+            out.add(index++, w);
+        }
+         bigset.add(in, out);
+    }
+}
+
+ +

Edit: This is the qMax - function:

+ +
    private double qMax(Tuple tuple) {
+    double max = Double.MIN_VALUE;
+    for(double w : tuple.qactions.elements) {
+        if(w > max) {
+            max = w;
+        }
+    }
+    return max;
+}
+
+",19062,,19062,,1/5/2019 10:58,1/7/2019 21:44,Concrete Example for Q Learning,,1,9,,,,CC BY-SA 4.0 +9830,2,,9812,1/4/2019 16:20,,4,,"

Usually when people write about having a high-dimensional state space, they are referring to the state space actually used by the algorithm.

+ +
+

Suppose my state is a high dimensional vector of $N$ length where $N$ is a huge number. Let's say I solve this task using $Q$-learning and I fix my state space to $10$ vectors each of $N$ dimensions. $Q$-learning can easily work with these settings as we need only a table of dimensions $10$ x number of actions.

+
+ +

In this case, I'd argue that the ""feature vectors"" of length $N$ are quite useless. If there are effectively only $10$ unique states (which may each have a very long feature vector of length $N$)... well, it seems like a bad idea to make use of those long feature vectors, just using the states as identity (i.e. a tabular RL algorithm) is much more efficient. If you end up using a tabular approach, I wouldn't call that a high-dimensional space. If you end up using function approximation with the feature vectors instead, that would be a high-dimensional space (for large $N$).

+ +
+

Let's also say I have a state space of infinite vectors but each vector is now of length $2$ i.e. very small dimensional vectors. Would it make sense to use DQN in these settings ? Should this state-space be called high dimensional or low dimensional ?

+
+ +

This would typically be referred to as having a low-dimensional state space. Note that I'm saying low-dimensional. The dimensionality of your state space / input space is low, because it's $2$ and that's typically considered to be a low value when talking about dimensionality of input spaces. The state space may still have a large size (that's a different word from dimensionality).

+ +

As for whether DQN would make sense in such a setting.. maybe. With such low dimensionality, I'd guess that a linear function approximator would often work just as well (and be much less of a pain to train). But yes, you can use DQN with just 2 input nodes.

+",1641,,,,,1/4/2019 16:20,,,,2,,,,CC BY-SA 4.0 +9832,2,,9819,1/4/2019 16:40,,2,,"

In terms of the normal use cases for machine learning, the equation does not have much utility, because:

+ +
+

Consider we have a curve $f(x)$ now if one wishes to . . .

+
+ +

In most AI problems, we don't usually have such a curve as input that can be treated analytically. For instance, there is no such input curve to describe a natural image received by a sensor.

+ +

In the vast majority of cases in AI problems, the form of inputs - whether it is images, text, mapping data, robotic telemetry, is going to be multi-dimensional discrete samples from highly complex functions where we don't know the analytical form and can only construct approximations from a set of basis functions. The resulting combination of basis functions could be treated as continuous, integrated etc, but as it would have been constructed from discrete data, the end result would be a lot of computation to end up with something probably less accurate than working direct with the discrete samples. In a lot of cases, the raw data is discrete by definition (e.g. whether someone clicked on a link or replied to a message), so the form of $f(x)$ would be discrete by definition of the problem, and calculus not really applicable.

+ +

There might be some interesting use cases in analog signal processing. Using your formula or a variation of it for instance it should be possible to create an analog neural network learning system - a robotic brain that worked with continuous signals and only contained analog components. Such a system would have some interesting advantages - speed of processing, probably low power consumption compared to digital approach, but the accuracy and precision might be lower. E.g. imagine a bot that could steer towards/away from light sources (compared to a digital one that might recognise the faces of people and steer towards/away from them). I would not be at all surprised to find that someone had done just that already, although I am not sure how to search for it.

+",1847,,1847,,1/4/2019 16:45,1/4/2019 16:45,,,,0,,,,CC BY-SA 4.0 +9833,2,,3879,1/4/2019 17:02,,2,,"

The error in the code is simply having a $+$ rather than a $-$ sign. Line 4 of the algorithm says:

+ +

$$E\left[ g^2 \right]_t = \rho E\left[ g^2 \right]_{t - 1} + (1 - \rho) g_t^2,$$

+ +

but your code implements (note the $+$ inside the brackets at the end):

+ +

$$E\left[ g^2 \right]_t = \rho E\left[ g^2 \right]_{t - 1} + (1 + \rho) g_t^2.$$

+ +

A correct implementation, with only that minor change, would be:

+ +
import math
+
+Eg = Ex = 0
+p = 0.95
+e = 1e-6
+x = 1
+history = [x]
+
+for t in range(100):
+    g = 2*x
+    Eg = p*Eg + (1-p)*g*g
+    Dx = -(math.sqrt(Ex + e) / math.sqrt(Eg + e)) * g
+    Ex = p*Ex + (1-p)*Dx*Dx
+    x = x + Dx
+    history.append(x)
+
+print(history)
+
+ +

On my end, that code leads to a value of approximately $0.597$. It looks like, with these hyperparameters, you'll need more like 400 or 500 iterations to get really close to $0$, but it steadily gets there.

+ +
+ +
+

For example, the paper claims that the update step Δx will have the same unit as x, if x has some hypothetical unit. While this is probably a desireable property, it is as far as I'm concerned not true, since the premise that RMS[Δx] has the same unit as x is incorrect to begin with, since RMS[Δx]_0 = sqrt(E[Δx]_0 + ϵ) = sqrt(0 + ϵ) which is a unitless constant, so all Δx become unitless rather than having the same unit as x. (Correct me if I'm wrong.)

+
+ +

Suppose that we use the symbol $u$ for this hypothetical unit that $x$ has. Line 5 of the algorithm says:

+ +

$$\begin{aligned} \Delta x_t &= - \frac{\text{RMS}\left[ \Delta x \right]_{t-1}}{\text{RMS} \left[ g \right]_t} g_t\\ &= - \frac{\sqrt{E\left[ \Delta x^2 \right]_{t-1} + \epsilon}}{\sqrt{E\left[ g^2 \right]_{t} + \epsilon}} g_t. \end{aligned}$$

+ +

We can get rid of the $\epsilon$ terms, their addition does not change the unit of whatever they are added to:

+ +

$$- \frac{\sqrt{E\left[ \Delta x^2 \right]_{t-1}}}{\sqrt{E\left[ g^2 \right]_{t}}} g_t.$$

+ +

As you stated correctly, for the very first iteration, we have $E\left[ \Delta x^2 \right]_{t-1} = 0$. Technically in the very first iteration it could have any unit we like (or no unit at all), based on whatever unit we choose to assign to the $0$ constant it is initially set to. Let's just say we assign it the unit $u^2$ (by saying that that is the unit of the $0^2$ constant we initialize it to). This is convenient because it allows us to immediately figure out the unit in all cases rather than just the $t = 0$ case, this is the unit that it has to have if we also still want things to work out for $t > 0$.

+ +

The gradient $g_t$ has a unit $\frac{1}{u}$, which means that $E[g^2]_t$ has a unit $\frac{1}{u^2}$, and $\sqrt{E[g^2]_t}$ then again has the unit $\frac{1}{u}$. If we replace all the quantities by their units, we then get:

+ +

$$\frac{u}{\frac{1}{u}} \times \frac{1}{u} = u.$$

+",1641,,1641,,1/6/2019 19:06,1/6/2019 19:06,,,,4,,,,CC BY-SA 4.0 +9834,1,,,1/4/2019 18:46,,3,302,"

What are the strengths of the Hierarchical Temporal Memory model compared to competing models such as 'traditional' Neural Networks as used in deep learning? And for those strengths are there other available models that aren't as bogged down by patents?

+",21155,,2444,,7/28/2019 10:05,7/28/2019 14:28,What are the strengths of the Hierarchical Temporal Memory model compared to competing models?,,2,0,,,,CC BY-SA 4.0 +9838,1,,,1/4/2019 23:45,,6,815,"

After reading an excellent BLOG post Deep Reinforcement Learning: Pong from Pixels and playing with the code a little, I've tried to do something simple: use the same code to train a logical XOR gate.

+ +

But no matter how I've tuned hyperparameters, the reinforced version does not converge (gets stuck around -10). What am I doing wrong? Isn't it possible to use Policy Gradients, in this case, for some reason?

+ +

The setup is simple:

+ +
    +
  • 3 inputs (1 for bias=1, x, and y), 3 neurons in the hidden layer and 1 output.
  • +
  • The game is passing all 4 combinations of x,y to the RNN step-by-step, and after 4 steps giving a reward of +1 if all 4 answers were correct, and -1 if at least one was wrong.
  • +
  • The episode is 20 games
  • +
+ +

The code (forked from original and with minimal modifications) is here: https://gist.github.com/Dimagog/de9d2b2489f377eba6aa8da141f09bc2

+ +

P.S. Almost the same code trains XOR gate with supervised learning in no time (2 sec).

+",20941,,20941,,12/3/2019 21:43,12/3/2019 21:43,How to train a logical XOR with reinforcement learning?,,3,0,,,,CC BY-SA 4.0 +9842,1,,,1/5/2019 4:51,,1,138,"

I'm learning machine learning by looking through other people's kernels on Kaggle, specifically this Mushroom Classification kernel.

+

The author first applied PCA to the transformed indicator matrix. He only used 2 principal components for visualization later. Then I checked how much variance it has maintained, and found out that only 16% variance is maintained.

+
in [18]: pca.explained_variance_ratio_.cumsum()
+out[18]: array([0.09412961, 0.16600686])
+
+

But the test result with 90% accuracy suggests it works well.

+

If variance stands for information, then how can the ML model work well when so much information is lost?

+",21166,,2444,,4/1/2021 11:15,8/19/2023 19:05,Why does PCA work well while the total variance retained is small?,,1,0,,,,CC BY-SA 4.0 +9843,2,,9829,1/5/2019 5:26,,3,,"

Most Deep Q-learning implementations I have read are based on Deep Q-Networks (DQN). In DQN, the q-value network maps an input state to a vector of q-values, one for each action:

+ +

$$ +Q(s, \mathbf{w}) \to \mathbf{v} +$$

+ +

where $s$ is the input state from the environment, $\mathbf{w}$ are the parameters of the neural network, and $\mathbf{v}$ is a vector of q-values, where $v_i$ is the estimated q-value of the ith action. In the Sutton and Barto book, the q-value function is written as $Q(s, a, \mathbf{w})$, which corresponds to the network output for action $a$.

+ +

Unlike tabular Q-learning, Deep Q-learning updates the parameters of the neural network according to the gradients of the loss function with respect to the parameters. DQN uses the loss function

+ +

$$ +L(\mathbf{w}) = [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})]^2 +$$

+ +

where $\gamma$ is the discount rate, $a$ is the selected action (either greedily or randomly for an $\epsilon$-greedy behavior policy), $s'$ is the next state, $a'$ is the argmax action for the next state, and $\mathbf{w^-}$ is an older version of the network weights $\mathbf{w}$ that is used to help stabilize training.

+ +

In deep Q-learning, training directly updates parameters, not q-values. Parameters are updated by taking a small step in the direction of the gradient of the loss function

+ +

$$ +\mathbf{w} \gets \mathbf{w} + \alpha [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})] \nabla_w Q(s, a, \mathbf{w}) +$$

+ +

where $\alpha$ is the learning rate.

+ +

In frameworks like tensorflow or pytorch the derivative is calculated automatically by giving the loss function and model parameters directly to an optimizer class which uses some variation of mini-batch gradient descent. In eagerly executed tensorflow updating the parameters for a mini-batch might look something like

+ +
batch = buffer.sample(batch_size)
+observations, actions, rewards, next_observations = batch
+
+with tf.GradientTape() as tape:
+    qvalues = model(observations, training=True)
+    next_qvalues = target_model(next_observations)
+    # r + max_{a'} Q(s', a') for the batch
+    target_qvalues = rewards + gamma * tf.reduce_max(next_qvalues, axis=-1)
+    # Q(s, a) for the batch
+    selected_qvalues = tf.reduce_sum(tf.one_hot(actions, depth=qvalues.shape[-1]) * qvalues, axis=-1)
+    loss = tf.reduce_mean((target_qvalues - selected_qvalues)**2)
+
+grads = tape.gradient(loss, model.variables)
+optimizer.apply_gradients(zip(grads, model.variables))
+
+ +

Though I am not familiar with the Encog neural network framework you are using, based on the example Brain.java file from your Github repo and Chapter 5 of the Encog User Manual and the Encog neural network examples on Github it looks like weights are updated as follows:

+ +
  1. A training set is constructed from pairs of input and target output.
  2. A Propagation instance, train, is constructed with a network and training set. Different subclasses of Propagation use different loss functions to update the network parameters.
  3. The method train.iterate() is called to run the network on the inputs, calculate the loss between the network outputs and target outputs, and update the weights according to the loss.
+ +

For DQN, a training set is constructed from a random sample from the experience replay buffer to help stabilize training. A training set could also be the trajectory of an episode, which is what the tupels argument in the example code of the question appears to be.

+ +

The input would be the statefirst member of each element of tupels. Since the network produces a vector of q-values, the target output must also be a vector of q-values.

+ +

The target output element for the selected action is $r + \gamma max_{a'} Q(s', a', \mathbf{w^-})$. In the example code of the question, this is

+ +
double qnew = 0;
+if(i <= tupels.size()-2){
+    qnew = tupels.get(i).rewardafter + discountfactor*qMax(tupels.get(i+1));
+} else {
+    qnew = tupels.get(i).rewardafter;
+}
+tupels.get(i).qactions.elements[tupels.get(i).actionTaken] = qnew
+
+ +

The target output elements for actions that were not selected should be $Q(s, b, \mathbf{w})$, where $b$ is one of the non-selected actions. This should have the effect of ignoring the q-values of non-selected actions by making the network output equal to the target output.

+ +

So what are the new Q - Values, assuming a discount factor of 0.99 and the learning rate 0.1?

+ +

Assuming you mean target outputs by the new Q - Values, and given the trajectory of actions, (1, 1, 1), and q-value vectors from the question, the concrete target outputs are (0, 0 + 0.99 * 0, -5, 0), (0, 0 + 0.99 * 0, 0, 0), and (0, 1 + 0, 0, 0).

+",15444,,,,,1/5/2019 5:26,,,,4,,,,CC BY-SA 4.0 +9847,2,,9842,1/5/2019 10:36,,0,,"

Because it selects both Xtrain and Xtest from the space of two selected principal components. Hence, the 90% accuracy is in that 2-D selected space.

+ +

Also, the idea that the explained-variance ratio in PCA represents the amount of information depends on the distribution of the data, and it is not true in general.

+",4446,,4446,,1/5/2019 10:43,1/5/2019 10:43,,,,0,,,,CC BY-SA 4.0 +9848,1,9851,,1/5/2019 10:56,,1,266,"

Let's say I have a $2 \times 2$-pixel grayscale picture, where there is one edge such that the left pixels contain the value 30 and the right pixels contain the value 0 (in red below). For edge detection, I have zero-padded the input image, then used the Sobel vertical filter to find the vertical edges, and applied ReLU to the output. The output is a $2 \times 2$ matrix with all pixel values $0$. So that should mean there is no edge in the picture, whereas in the actual case it has one. Where am I going wrong?

+ +

+",21172,,2444,,6/5/2019 21:40,6/5/2019 21:42,Understanding the application of Sobel kernel followed by ReLU to a zero-padded image,,1,0,,,,CC BY-SA 4.0 +9849,1,,,1/5/2019 11:08,,1,60,"

Consider the following loss function

+ +

$$ +L(\mathbf{w}) = [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})]^2 +$$

+ +

where $Q(s, a, \mathbf{w^-})$ and $Q(s, a, \mathbf{w})$ are represented as neural networks, with $\mathbf{w}^-$ and $\mathbf{w}$ being the corresponding weights.

+ +

But how do you calculate $max_{a'} Q(s', a', \mathbf{w^-})$? Do you really need to always keep an older version of the network? If yes, why, and how old should it be?

+",19062,,2444,,2/15/2019 20:21,2/15/2019 20:21,"How do I calculate $max_{a′}Q(s′,a′,w−)$ when it is represented as a neural network?",,1,0,,,,CC BY-SA 4.0 +9851,2,,9848,1/5/2019 12:35,,1,,"

I assume that you would like to use convolution with padding such that the output matrix has the same size as the picture. You mixed up the calculations for convolution with ""full padding"". If we imagine these two matrices as windows that slide over each other, you can see that the filter is mirrored (flipped). I used a slightly different filter to show you more clearly how it works (I changed the last row to [-3, 0, 3]).

+ +

Assuming these matrices:

+ +

+ +

You should add to your picture matrix two rows and columns of zeros:

+ +

+ +

Then you can start the multiply-and-sum steps, but notice that the filter is mirrored. The result shown as a $4 \times 4$ matrix is the convolution with ""full padding"". The $2 \times 2$ matrix in the middle is the result of convolution with ""same padding"", which is what you requested.

+ +

+ +

Next step:

+ +

+ +

Some iterations later:

+ +

+ +

And later:

+ +

+ +

And finally:

+ +

+ +

For convolution with the same size, the result will be the small $2 \times 2$ matrix in the middle.

+ +

+ +

After using the ReLU function, the result will be exactly the same.

+ +

So, using your filter, the result would look like [[0, 90], [0, 90]].
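
If it helps to check the numbers, here is a small sketch of my own (using SciPy, which is not mentioned in the original answer) that reproduces this result with true convolution, i.e. with the kernel flipped:

    import numpy as np
    from scipy.signal import convolve2d

    image = np.array([[30, 0],
                      [30, 0]])                  # the left column is the edge
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])

    # 'same' keeps the 2x2 output size; zero padding is used at the borders
    out = convolve2d(image, sobel_x, mode='same', boundary='fill', fillvalue=0)
    print(out)                                   # [[ 0 90]
                                                 #  [ 0 90]]
    relu_out = np.maximum(out, 0)                # ReLU keeps the positive responses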

+",21171,,2444,,6/5/2019 21:42,6/5/2019 21:42,,,,2,,,,CC BY-SA 4.0 +9852,2,,9849,1/5/2019 18:32,,1,,"

You calculate the max by calculating your estimates for all possible actions for the next state, and taking the highest value.

+ +

The details depend a little on your neural network architecture:

+ +
  • If you have a network that takes the state vector as input and outputs all possible action values $(\hat{q}(s,a_0), \hat{q}(s,a_1), \hat{q}(s,a_2) ...)$, then you can run it once forward with the next_state as input to get the array $(\hat{q}(s',a_0), \hat{q}(s',a_1), \hat{q}(s',a_2) ...)$, and take the maximum element (you don't need to care which action $a'$ caused it). You will then have the problem that you now have a loss for $Q(s, a, \mathbf{w})$ for the single action $a$ just taken, but no data for any of the alternative actions. The loss for these alternative actions needs to be set to zero - if you are training the NN using a normal supervised learning approach, that means you need to keep the full output of the network that you ran forward to calculate $Q(s, a, \mathbf{w})$, then substitute in this new estimated value against the action $a$ and train using this modified vector.

  • If you have a network that takes the state vector and action combined as input and outputs a single estimate $\hat{q}(s,a)$, then you have to run that network once for each possible action from the next state and take the maximum value. You would typically do this as a small batch prediction for better performance. In this case your training data is simple to construct, as you only have the loss for, and train the network against, one state/action combination.

Overall the first option (all action values at once) is usually a lot more efficient, but the training routine is slightly more complex to code.

+ +
+

do you really need to always keep an older version of the network? If yes, why, and how old should it be?

+
+ +

You don't have to, but it is highly advisable to have this target network (so called because it helps generate your TD Target values), because Q learning using neural networks is often unstable. This is due to the bootstrap calculations where estimates are based on other estimates plus a little bit of observed data at each step. There is a strong possibility for runaway feedback due to training a neural network on something that includes its own output.

+ +

How old should it be? That's sadly a hyper-parameter of the architecture that you will need to establish through experiment on each new problem. I have worked with maximum age values from 100 to 10000 in my own simple experiments. Note this is not usually a rolling age - you don't keep 1000 copies of the network weights. Just keep one frozen copy, and after N steps replace it with a copy of the most recent one.

+ +

One alternative to this copy/freeze/copy approach is to update the target network towards the learning network on every step by a small factor. E.g. $\mathbf{w}^{\_} = (1 - \beta)\mathbf{w}^{\_} + \beta \mathbf{w}$ where $\beta$ might be $0.001$
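
As a minimal sketch of the two update schemes (my own illustration, treating the parameters as plain NumPy arrays rather than real network weights):

    import numpy as np

    beta = 0.001
    target_update_period = 1000

    w = np.random.randn(10)       # stand-in for the learning network's parameters
    w_target = w.copy()           # frozen copy used to compute the TD targets

    for step in range(1, 5001):
        w += 0.01 * np.random.randn(10)          # placeholder for a gradient step

        # variant 1: hard update - replace the frozen copy every N steps
        if step % target_update_period == 0:
            w_target = w.copy()

        # variant 2 (alternative): soft update towards the learning network every step
        # w_target = (1 - beta) * w_target + beta * w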

+ +

In addition, you should be using experience replay for training data, and not training directly online. The combination of experience replay and using a frozen or slowly adapting target network makes a large difference to the stability of deep Q learning in practice.

+",1847,,1847,,1/5/2019 19:24,1/5/2019 19:24,,,,5,,,,CC BY-SA 4.0 +9854,1,9865,,1/5/2019 21:24,,1,1116,"

Consider this game state: +

+

d5 captures c6

+

Quiescence search returns about 8.0 as evaluation because after dxc6 and bxc6 Qxd6 would be played (then Qxd6 by black). A normal player would not play this move but quiescence search includes it in the evaluation and it would result in this end state: +which would result in a huge advantage for black.

+

Is my interpretation of quiescence search wrong?

+",19783,,-1,,6/17/2020 9:57,5/13/2020 20:49,Does quiescence search even improve the minimax algorithm?,,1,0,,,,CC BY-SA 4.0 +9860,1,,,1/6/2019 17:12,,2,216,"

Why is the actor-critic algorithm limited to using on-policy data? Or can we use the actor-critic algorithm with off-policy data?

+",21180,,2444,,2/15/2019 19:38,2/15/2019 19:43,Why is the actor-critic algorithm limited to using on-policy data?,,1,0,,,,CC BY-SA 4.0 +9862,1,9872,,1/6/2019 21:54,,1,487,"

Is there a ReLU-like activation function that concatenates positive and negative values? What is its name? Apparently, it doubles the output dimension.

+",21203,,2444,,6/4/2020 15:24,6/4/2020 15:24,Is there a ReLU-like activation function that concatenates positive and negative values?,,1,0,,,,CC BY-SA 4.0 +9865,2,,9854,1/7/2019 1:54,,1,,"

Your logic is flawed because you neglected ""stand pat"" (i.e. doing nothing) and alpha-beta. Let's take a look at the pseudocode (https://www.chessprogramming.org/Quiescence_Search#Pseudo_Code):

+ +
int Quiesce( int alpha, int beta ) {
+    int stand_pat = Evaluate();
+    if( stand_pat >= beta )
+        return beta;
+    if( alpha < stand_pat )
+        alpha = stand_pat;
+
+    until( every_capture_has_been_examined )  {
+        MakeCapture();
+        score = -Quiesce( -beta, -alpha );
+        TakeBackMove();
+
+        if( score >= beta )
+            return beta;
+        if( score > alpha )
+           alpha = score;
+    }
+    return alpha;
+}
+
+ +

Your Qxd6 capture will return a score far below alpha. The line:

+ +
+

if( score > alpha )

+
+ +

will prevent your blunder from being reported. Instead, the engine would report either stand_pat (do nothing) or something like Nf3, Nc3, etc.

+",6014,,,,,1/7/2019 1:54,,,,0,,,,CC BY-SA 4.0 +9870,1,9891,,1/7/2019 11:54,,0,545,"

I'm trying to understand how the dimensions of the feature maps produced by the convolution are determined in a ConvNet.

+

Let's take, for instance, the VGG-16 architecture. How do I get from 224x224x3 to 112x112x64? (The 112 is understandable, it's the last part I don't get)

+

I thought the CNN was to apply filters/convolutions to layers (for instance, 10 different filters to channel red, 10 to green: are they the same filters between channels ?), but, obviously, 64 is not divisible by 3.

+

And then, how do we get from 64 to 128? Do we apply new filters to the outputs of the previous filters? (in this case, we only have 2 filters applied to previous outputs) Or is it something different?

+

+",19094,,2444,,6/19/2021 12:19,6/19/2021 12:19,How are the dimensions of the feature maps produced by the convolutional layer determined in VGG-16?,,3,0,,,,CC BY-SA 4.0 +9871,1,,,1/7/2019 11:59,,1,65,"

I have time series data where I use a sliding window to detect anomalies in those windows. A sliding window is an interval of the dataset that steps forward one datapoint per iteration. In this way, each datapoint is seen multiple times, a number of times equal to the size of the window.

+ +

In short, the algorithm works like this:

+ +
  1. Choose window length: wl
  2. Learn normal data with a sliding window
  3. Try to detect anomalies on test data with a sliding window (see the sketch below)
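
For concreteness, here is a minimal sketch of the sliding window I mean (the names are just for illustration):

    def sliding_windows(series, wl):
        # each step moves the window by one datapoint, so every point
        # (away from the edges) appears in wl different windows
        return [series[i:i + wl] for i in range(len(series) - wl + 1)]

    windows = sliding_windows(list(range(10)), wl=3)
    # windows[0] == [0, 1, 2], windows[1] == [1, 2, 3], ...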
+ +

I want to keep the sliding window method since it is necessary for the performance of the algorithm.

+ +

However, one anomaly occurs in multiple sliding windows. When the anomaly appears in a window for the first time, it is on the 'right' side of that window.

+ +

How do we measure accuracy of anomaly detection in this case?

+ +

We could say that detecting the anomaly once across the windows is enough, or we could require detecting it wl times. What's best practice?

+",21223,,16229,,6/5/2019 19:34,6/5/2019 19:34,Performance measure on windowed time series data,,0,0,,,,CC BY-SA 4.0 +9872,2,,9862,1/7/2019 13:13,,0,,"

It seems I have found it. It is called concatenated ReLU (CReLU).

+ +
+

Concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together. In other words, for positive x it produces [x, 0], and for negative x it produces [0, x]. Because it has two outputs, CReLU doubles the output dimension.

+
+ + + +

There is also Negative CReLU. It seems that the difference is only the sign.

+ +

$$\text{NCReLU}(x) = (\rho(x) , −\rho(−x) )$$
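
For reference, here is a minimal NumPy sketch of CReLU (my own snippet; TensorFlow also ships a built-in version as tf.nn.crelu):

    import numpy as np

    def crelu(x, axis=-1):
        # concatenate ReLU of x and ReLU of -x, doubling the size along `axis`
        return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=axis)

    x = np.array([[-2.0, 3.0]])
    print(crelu(x))   # [[0. 3. 2. 0.]]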

+",21203,,2444,,6/4/2020 15:11,6/4/2020 15:11,,,,0,,,,CC BY-SA 4.0 +9874,2,,9870,1/7/2019 16:34,,1,,"

The 64 here is the number of filters that are used. +The picture is kind of misleading in that it leaves out the max-pooling transitions between the blocks. +

+ +

Below is a text description of the size of the features as they go through the network with the number of filters in bold.

+ +
  1. The first 2 layers in the diagram you posted contain 64 3x3 convs, resulting in a 224x224x64 matrix of features.
  2. This is then fed into a maxpool which reduces the size to a 112x112x64 matrix.
  3. This is then fed to 2 layers of 128 3x3 convs, resulting in a 112x112x128 matrix.
  4. Then another maxpool gives a 56x56x128 matrix.
  5. Feeding that to 3 layers of 256 3x3 convs results in a 56x56x256 matrix.
  6. This is then fed into another maxpool, giving a 28x28x256 matrix,
  7. which is then fed into 3 layers of 512 3x3 convs, resulting in a 28x28x512 matrix.
  8. Another maxpool gives a 14x14x512 matrix, which is fed to 3 layers of 512 3x3 convs, giving a matrix of 14x14x512 features.
  9. A final maxpool reduces this to 7x7x512, which is then given to two fully connected layers of 4096 units each and a final 1000-unit fully connected layer before being sent to a softmax.
+",4398,,,,,1/7/2019 16:34,,,,0,,,,CC BY-SA 4.0 +9875,2,,9870,1/7/2019 16:45,,1,,"

For learning image features with CNNs, we use 2D Convolutions. Here 2D does not refer to the input of the operation, but the output.

+ +

Consider you have an input tensor of size 224 x 224 x 3. Say, for example, you have 64 different convolution kernels. These kernels are also 3-dimensional. Each kernel will produce a 2D matrix as output. Since you have 64 different kernels/filters, you will have 64 different 2D matrices. In other words, you get a tensor with depth 64 as output.
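
A quick way to see this in code (a sketch of my own using Keras, not part of the original answer):

    import tensorflow as tf

    x = tf.zeros((1, 224, 224, 3))                    # one RGB image
    conv = tf.keras.layers.Conv2D(64, kernel_size=3, padding='same')
    print(conv(x).shape)                              # (1, 224, 224, 64): one 2D map per filter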

+ +

+ +

I would suggest you to go through this question:

+ +

Understanding 1D, 2D and 3D convolutions

+",21229,,,,,1/7/2019 16:45,,,,0,,,,CC BY-SA 4.0 +9876,2,,9838,1/7/2019 17:30,,3,,"

Reinforcement learning is used when we know the outcome we want, but not how to get there which is why you won't see a lot of people using it for classification (because we already know the optimal policy, which is just to output the class label). You knew that already, just getting it out of the way for future readers!

+ +

As you say, your policy model is fine - a fully connected model that is just deep enough to learn XOR. I think the reward gradient is a little shallow - when I give a reward of +1 for ""3 out 4"" correct and +2 for ""4 out of 4"", then convergence happens (but very slowly).

+",17770,,17770,,1/8/2019 18:03,1/8/2019 18:03,,,,0,,,,CC BY-SA 4.0 +9883,2,,9860,1/8/2019 2:33,,1,,"

It's because, in the actor-critic algorithm, the objective function is an expectation under the $\tau$ of the policy. If we want to use off-policy data, we have to resort to importance sampling relative to the other policy.

+",21180,,2444,,2/15/2019 19:43,2/15/2019 19:43,,,,1,,,,CC BY-SA 4.0 +9887,1,9888,,1/8/2019 10:43,,2,185,"

Which algorithms, between ant colony or classical routing algorithms, have a better time complexity for the shortest path problem?

+ +

In general, can we compare efficiency of these two types of algorithm for the shortest path problem in a graph?

+",19910,,2444,,5/27/2019 22:06,5/27/2019 22:06,"Which algorithms, between ant colony or classical routing algorithms, have a better time complexity for the shortest path problem?",,1,0,,,,CC BY-SA 4.0 +9888,2,,9887,1/8/2019 11:16,,1,,"

No. In general, you can't find a tight bound for evolutionary algorithms, and this is one of the main differences between these algorithms and the classical ones.

+ +

Note that this does not mean you can't tell when an evolutionary algorithm has finished! Rather, you can't find a tight bound on the algorithm's time complexity to reach the optimal solution, or on how close its solution is to the optimal one (in contrast to approximation algorithms).

+",4446,,4446,,1/8/2019 11:56,1/8/2019 11:56,,,,4,,,,CC BY-SA 4.0 +9890,1,14031,,1/8/2019 11:57,,3,1443,"

On recommendation of Kanak on stackoverflow I am posting this question here:

+ +

Currently I am experimenting with various loss functions and optimizers for my binary image segmentation problem. The loss functions that I use in my Unet however give different output segmentation maps.

+ +

I have a highly imbalanced dataset, thus I am trying dice loss for which the customized function is given below.

+ +
    def dice_coef(y_true, y_pred, smooth=1):
+        """"""
+        Dice = (2*|X & Y|)/ (|X|+ |Y|)
+             =  2*sum(|A*B|)/(sum(A^2)+sum(B^2))
+        ref: https://arxiv.org/pdf/1606.04797v1.pdf
+        """"""
+        intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
+        return (2. * intersection + smooth) / (K.sum(K.square(y_true), -1) + K.sum(K.square(y_pred), -1) + smooth)
+
+    def dice_coef_loss(y_true, y_pred):
+        return 1 - dice_coef(y_true, y_pred)
+
+ +

Binary cross entropy results in a probability output map, where each pixel has a color intensity that represents the chance of that pixel being the positive or negative class. However, when I use the dice loss function, the output is not a probability map but the pixels are classed as either 0 or 1.

+ +

My questions are:

+ +

1. How is it possible that these different loss functions have such vastly different results?

2. Is there a way to customize the dice loss function so that the output segmentation map is a probability map, similar to the one of binary cross-entropy loss?
+",21257,,,,,8/22/2019 20:04,Dice loss gives binary output whereas binary crossentropy produces probability output map,,1,0,,,,CC BY-SA 4.0 +9891,2,,9870,1/8/2019 12:15,,1,,"

Both responses I got are correct but do not answer exactly what I was looking for.

+ +

The answer to my question is: each filter is a 2D convolution. It is applied to every channel from the previous layer (so we get N 2D matrices). Then all of these matrices are added up to make a final matrix (1 matrix for 1 filter). Finally, the output is all the filters' matrices in parallel (like channels).
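
A small sketch of that 'sum over channels' idea (my own illustration, using SciPy's 2D correlation for each channel):

    import numpy as np
    from scipy.signal import correlate2d

    x = np.random.rand(5, 5, 3)            # H x W x channels
    kernel = np.random.rand(3, 3, 3)       # one filter: a 2D kernel per input channel

    # apply the 2D kernel of each channel, then add the per-channel results up
    feature_map = sum(correlate2d(x[:, :, c], kernel[:, :, c], mode='same')
                      for c in range(x.shape[2]))
    print(feature_map.shape)               # (5, 5): one 2D map for this single filter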

+ +

The hard part was to find the ""sum up"" step, since many websites speak of it as a 3D convolution (which it is not!).

+",19094,,,,,1/8/2019 12:15,,,,0,,,,CC BY-SA 4.0 +9894,2,,4889,1/8/2019 16:18,,3,,"

If you are using a softmax distribution for your classification, then you could determine what your baseline max probability is for correctly classified samples, and then infer if a new sample doesn't belong to any of your known classes if its max probability is below some kind of threshold.
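
A rough sketch of the idea (my own illustration; in practice the threshold would be chosen from your validation data):

    import numpy as np

    def softmax(logits):
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    def is_out_of_distribution(logits, threshold=0.7):
        # flag the sample as 'none of the known classes' if the network is not confident
        return np.max(softmax(logits)) < threshold

    print(is_out_of_distribution(np.array([0.10, 0.20, 0.15])))   # True: nearly uniform output
    print(is_out_of_distribution(np.array([8.00, 0.20, 0.15])))   # False: confident prediction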

+ +

This idea comes from a research paper that does a much better job of explaining the process than what I just said: A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks

+",21265,,,,,1/8/2019 16:18,,,,0,,,,CC BY-SA 4.0 +9895,1,,,1/8/2019 16:48,,1,116,"

I have this question in my head: does the current level of AI development allow us to spot faked or photoshopped images (e.g. forged ID cards or personal documents)?

+ +

If it is possible, what process should be followed in order to build an AI that achieves this task?

+",19059,,19059,,1/24/2019 10:25,4/8/2019 7:46,Is it possible to spot photoshoped or edited photos using AI?,,1,0,,,,CC BY-SA 4.0 +9897,1,,,1/8/2019 22:27,,3,189,"

Is ""emotion"" ever used in AI?

+ +

Psychologists have a lot to say about emotion and its functional utility for survival - but I've never seen any AI research that uses something resembling ""emotion"" inside an algorithm. (Yes, there's some work done on trying to classify human emotions, called ""emotional intelligence"", but that's extremely different from /using/ emotions within an algorithm) For example, you could imagine that a robot might need fuel and be ""very thirsty"" - causing it to prioritize different tasks (seeking fuel). Emotions also sometimes don't just focus on objectives/priorities - but categorize how much certain classifications are ""projected"" into particular emotions.
+For example, maybe a robot that needs fuel might be very ""afraid"" of going towards cars because it's been hit in the past - while it might be ""frustrated"" at a container that doesn't open properly. +It seems very natural that these things are helpful for survival - and they are likely ""hardcoded"" in our genes (since some emotions - like sexual attraction - seem to be mostly unchangeable by ""nurture"") - so I would think they would have a lot of general utility in AI.

+",20685,,,,,1/9/2019 9:54,"Is ""emotion"" ever used in AI?",,2,2,,,,CC BY-SA 4.0 +9899,2,,9897,1/8/2019 23:35,,0,,"

Not a bad question but we can solve this with a little thought experiment. Consider what it means to be ""afraid"", or to even ""feel"". It's a DESIRE for something. That something is what pushes us towards general survival. It forces us to focus on what is important right now. And it's relative to our immediate environment & generalized to our abstract conceptualization.

+ +

The difference with modern ai paradigms is that they are very structured/rigid in their objectives. There's no general sense of ""okayness"" or generalized sense of guidance on what it should do. This would require a radically different approach to AI design & infrastructure.

+ +

Being that most companies are trying to make money, there's not a lot to be gained by experimenting with ""feeling"" machines.

+",1720,,,,,1/8/2019 23:35,,,,0,,,,CC BY-SA 4.0 +9900,2,,9897,1/9/2019 1:20,,2,,"

Current Simulation of Emotional Behavior

+ +

Emotion is used in AI in very limited ways in leading edge natural language systems. For instance, advanced natural language systems can calculate the probability that a particular segment of speech originates from an angry human. This recognition can be trained using labels from bio-monitors. However, the mental features of a human with soft skills tuned from years of experience with people is not nearly simulated in computers as of this writing.

+ +

We will not see computers becoming counselors (as once believed) or directors of movies or courtroom judges or customs officials any time soon. Nonetheless, the processes behind emotion are not entirely undiscovered, and there is definite interest in simulating them in computers. Much of that work is company confidential.

+ +

The emergence of emotional sophistication in computers is likely to begin in the context of sexuality, primarily because flirtation is a powerful and primordial emotional expression and will probably be easier to simulate in natural language than higher emotional expressions such as love or chaotic ones like rage. Sexy AI will likely be exploited by what businesses might consider legitimate marketing activity.

+ +

It is also going to be exploited by the sex industry. The ethical and moral analysis of sexy AI is beyond the scope of the question but will probably gain the attention of public media as it unfolds, and that has already begun on FaceBook, originating from third party attacks using fictitious identities.

+ +

The Science of Emotion

+ +

Emotion isn't a scientific quantity. From an AI perspective, emotion is a quality an individual might recognize through visual and audio cues, specifically through the natural language and affect of another individual. (Affect is a visual clue about a person's emotional and general mental state.)

+ +

An individual can also learn to recognize those clues in her or his self. They can be detected by replaying one's own speech as heard through the ear, by linguistic analysis of thoughts not spoken, or through the detection of muscle tension or vital signs. Those skilled in meditation can detect emotional predecessors closer to their causal centers in the brain and control them more directly before emotions even arise.

+ +

In the brain, emotion is not in a single geometric location. We cannot say, ""That emotion of compassion comes from this group of neurons in Jerome's brain."" We cannot say, ""Sheri is angry at this 3D coordinate in her cerebral cortex."" Emotions are also not strictly system wide either. An individual can be annoyed without going into rage, leaving most of the brain chemistry and electrical signaling unaffected.

+ +

Emotions are not entirely electrical and not entirely chemical. On the electric side, emotional states can occur simultaneously through separate circuit pathway connecting distinct and only distantly related regions of the brain. On the chemical side, there is the synaptic chemistry that is part of the electrical signal pathways. There are also many regional signaling systems using specialized pathways that are neither circulatory (blood) nor primary electrical (neuron) pathways. Serotonin is one of dozens of chemical signaling compounds that operate regionally in this way.

+ +

Emotions, being a largely social set of phenomena, should not be characterized as purely Darwinian. Although related to survival, emotional processing and communications impact mate selection and, more generally, social patterns within a community, including altruistic and collaborative activity.

+ +

Emotions don't always lead to survival. In some cases, emotional states may lead to death prior to reproduction. One could say that emotional balance and the ability to interact on emotional levels may improve odds of having offspring. Imbalance to the degree of any of hundreds of emotional extremes can lead to childlessness.

+ +

Emotional intelligence is different than using emotions within an algorithm, but not extremely so.

+ +

Discussion of emotional intelligence is one of many advancements in the concept of intelligence since the formation of one-dimensional conceptions of intelligence. Those nineteenth century conceptions, such as IQ and G-factor are poorly supported by genetic evidence and anthropological theory. Mathematically unproven and naive concepts like general intelligence rest on those one-dimensional concepts.

+ +

Emotional intelligence is a form of mental capability related to emotional balance. If a person's cognitive skills are honed with respect to their emotions and the assessment of the emotional states of others, then they have greater emotional intelligence than someone who cannot read the affect and linguistic clues of another and cannot integrate cognitive and emotional skill to balance of their own emotions.

+ +

Cybernetic Analysis

+ +

The interface between natural emotion and artificial emotion fits within the realm of cybernetics, the conceptual study of the interface between humans and machines. Such interaction is clearly related to both algorithms and topology, two important concepts in AI research and development.

+ +

Emotion has an algorithmic context because there is clearly some combination of neurons and chemistry that produce this algorithmic difference between a reactive person and one who has developed emotional intelligence.

+ +
    emotion[person] = recognize_emotion[person]
+    if emotion[person] = anger
+        be_in_responses(angry)
+
+    emotion[person] = recognize_emotion[person]
+    if emotion[person] = anger
+        be_in_responses(extra_calm)
+
+ +

The former is reactive and the latter exhibits emotional intelligence. The acquisition of the latter skill may be cognitive and conscious or it may be intuitive and unconscious. In either case, the actual algorithms at a lower level may be entirely different than those shown above, yet the external behavior of the person as marshaled by the brain is essentially one of those shown.

+ +

The plural, algorithms, is used rather than the singular, algorithm, because it is unlikely that a single synchronous algorithm is involved. The brain is a massively parallel processor. Emotional processing is likely best expressed in artificial form as hundreds of thousands of algorithms operating in parallel and forming millions of balances within the system — multidimensional and highly parallel stasis.

+ +

This is why emotional recognition and emotional responses are not very sophisticated in computer systems as of this writing. The balances have much social nuance. It may be easier to simulate rational thought than emotional thought.

+ +

Desire as a Systemic Behavior

+ +

Hunger and thirst may sometimes be called feelings, but they are not strictly emotional. The detection of the need for air, energy, nutrients, and water may stimulate emotional states if the needs are unmet and other emotional states if met. A person may become frustrated and irritable when lacking something essential and confronted with another person's less important agenda. A robot may someday do the same. A person may become elated and generous when all such essentials have recently been made available in surplus. A robot may someday do the same. These relationships are expressed in the question this way.

+ +
+

Emotions also sometimes don't just focus on objectives/priorities — but categorize how much certain classifications are ""projected"" into a particular emotions.

+
+ +

That statement in the question and its explanation is true in some respects. A robot that needs fuel but is afraid of passing in front of a moving vehicle because it has been hit in the past can be seen in more than one way.

+ +
  • Probabilistic risk management based on past experiences
  • Fuzzy logic that produces control behavior
  • Feeling fear because of past experience
+ +

In AI design, these three would be handled in different ways.

+ +
  • Development of a function that ties visual and auditory information to a model of collision and produces a projected likelihood of injury based on each of a number of paths to obtaining fuel
  • Rules that relate to travel risk with learned probabilities for each, along with rules that relate to energy depletion risk with learned probabilities for each, and a fuzzy logic rules engine
  • Artificial networks that have no audit trail but simulate reptilian emotional circuitry and simulate base instinct
+ +

Maintaining Scientific Perspective

+ +

Emotion is not hard coded into the brain circuitry or DNA. The reality is significantly more complex.

+ +

The DNA provides parameters to a genetic expression system that leads to protein synthesis that leads to brain structure and function that leads to the ability to learn emotional responses that lead to improved social behavior that may lead to higher probabilities of gene pool survival.

+ +

Applying digital system traditions to biological process can be counter productive, like anthropomorphic views of programs. Artificial networks don't actually learn; they converge. Nothing is hard coded into biology because the term code applied to DNA isn't anything like a page of Java or Python code.

+ +

It is true that some behavioral predispositions are strong forms of stasis within the course of a species. An organism will normally exhibit a strong desire to acquire resources from the biosphere, such as oxygen, proteins, nutrients, carbs, fats, and water. A robot might replace those with a voltage to use for a charge and lubricants for moving parts. An organism will normally exhibit a strong desire to reproduce. A robot might be given a simulation of that recursive process and wish to build another like itself.

+ +

These are not hard coded in biology. They form a kind of stasis within a population. Some humans don't want children. Some are hospitalized for anorexia nervosa. Some commit suicide by asphyxiation. The statistical mean produces the behavior of the species, not a fixed behavior identical across individuals within the species.

+ +

Nature and Nurture

+ +

Nature and nurture are useful umbrella terms for general categories of causality in biology and may have equivalents in future robotic products, but they are broad generalities. There are no nature algorithms or nurture algorithms or algorithms that balance nature and nurture. That is where topology is of paramount conceptual importance.

+ +

Topology of Algorithmic Components

+ +

There is massive interaction between many systems operating independently in multiple dimensions. The visualization of such interactive structure would look more like the topology of all the web sites in a country than a machine learning block diagram. If somehow coded into one algorithm it is possible that all the silicon from all the sand on earth converted to random access memory (RAM) might be insufficient to hold the code expressing the algorithm. Perhaps not. Perhaps a simplicity underlies the interactive system design of life. Perhaps we'll someday know. Perhaps not.

+ +

The elegance in the design of life on earth is that multiple independent processes are tuned by billions of years of trial and error to inter-operate and support complex organic processes with billions of moving parts at a molecular level.

+ +

Veins of Interdisciplinary Research

+ +

Study of these are important for biology, for bioinformatics, for cognitive science, and for artificial intelligence. Emotional recognition and integration of emotional reaction and control into natural communications is part of this research and development.

+",4302,,4302,,1/9/2019 9:54,1/9/2019 9:54,,,,0,,,,CC BY-SA 4.0 +9903,1,9923,,1/9/2019 8:59,,6,1812,"

I'm new to machine learning, and AI in general (but with 20+ years for programming). I'm wondering if machine learning is a good general approach to find the seed of a random number generator.

+

Suppose I have a list of 2000 numbers. Is there a machine learning algorithm to correctly guess the next number?

+

Just to be clear, as there are many random number generator algorithms, I'm talking about rand and srand from the stdlib.

+",21284,,2444,,6/14/2021 1:12,6/14/2021 1:12,Would machine learning be suitable for finding the seed of a random number generator?,,1,0,,,,CC BY-SA 4.0 +9904,1,,,1/9/2019 9:12,,2,124,"

I'm reading the book ""Reinforcement Learning: An Introduction"" (by Andrew Barto and Richard S. Sutton).

+ +

The authors provide the pseudocode of the prioritized sweeping algorithm, but I do not know what the meaning of Model(s, a) is. Does it mean that Model(s, a) is the history of rewards gained when we are in state s and the action a is taken?

+ +

Does R, S_new = Model(s, a) mean that we should take a random sample from the rewards gained when we are in state s and action a is taken?

+",10191,,2444,,2/15/2019 19:22,2/15/2019 19:22,"What is the meaning of Model(s, a) in the prioritized sweeping algorithm?",,1,0,,,,CC BY-SA 4.0 +9905,1,10206,,1/9/2019 10:37,,3,169,"

In my view, intelligence begins once thoughts/actions are logical rather than purely random. The learning environments can be random, but the logic seems to obey some elusive rules. There is also the aspect of parenting that guides one through some really bad decisions by using collective knowledge. All of this seems to hint that intelligence needs other intelligence to coexist with, and a shared communication network for validation/rejection.

+ +

Personally, I believe that we must keep human intelligence in a parental role at least until the AI has fully assimilated our values. The actual danger is to leave an artificial intelligence parenting another AI and lose control of it. This step is not necessary from our perspective, but whether we can resist the temptation to try it eventually, only time will tell.

+ +

Above all we must remember the purpose of AI. I think the purpose should always be to help humans achieve mastery of the environment while ensuring our collective preservation.

+ +

AI should not be left unsupervised, just as we would not give guns to kids.

+ +

To sum it all up, AI needs an environment and supervision in which to learn and grow. The environment can vary, but the supervision must stay in place.

+ +

Are thoughts/actions initiated by means of guidance and supervision considered random?

+ +

Lastly, I believe that the sensible thing to do is to only develop artificial intelligence that is limited by our own beliefs and values, rather than searching for something greater than us.

+ +

It seems impossible to create something of greater intelligence than our own without letting it go exploring! Exploring gives greater access to random actions and can go against the intended purpose.

+",21285,,,,,1/25/2019 22:17,Is learning possible without random thoughts and actions?,,1,2,,,,CC BY-SA 4.0 +9906,2,,9904,1/9/2019 10:46,,2,,"

I think the pseudocode was written for the tabular case, with the assumption of a deterministic environment. $Model(s, a)$ would then be a table with the next state and reward observed after taking action $a$ from state $s$. The size of that table would be the same as the size of the Q table. Because the environment is deterministic, you wouldn't take a random sample: there is only one possible transition, so you would use the transition remembered in the model table.
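
For example, a minimal sketch of such a deterministic model table in Python (my own illustration, not from the book):

    # Model(s, a) stores the single observed outcome for each state-action pair
    model = {}

    def update_model(s, a, r, s_next):
        model[(s, a)] = (r, s_next)

    def sample_model(s, a):
        # no randomness needed: there is exactly one remembered transition
        return model[(s, a)]

    update_model('s0', 'a1', 1.0, 's1')
    print(sample_model('s0', 'a1'))   # (1.0, 's1')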

+",20339,,,,,1/9/2019 10:46,,,,5,,,,CC BY-SA 4.0 +9908,1,9915,,1/9/2019 11:11,,3,278,"

When trying to map artificial neuron models to biological facts, I was not able to find an answer regarding the biological justification of randomly initializing the weights.

+ +

Perhaps this is not yet known from our current understanding of biological neurons?

+",21269,,,,,1/10/2019 8:00,How do biological neurons weights get initialized?,,2,0,,,,CC BY-SA 4.0 +9909,1,,,1/9/2019 12:25,,3,1716,"

I'm doing a research on a finite-horizon Markov decision process with $t=1, \dots, 40$ periods. In every time step $t$, the (only) agent has to chose an action $a(t) \in A(t)$, while the agent is in state $s(t) \in S(t)$. The chosen action $a(t)$ in state $s(t)$ affects the transition to the following state $s(t+1)$.

+ +

In my case, the following holds true: $A(t)=A$ and $S(t)=S$, while the size of $A$ is $6 000 000$ (6 million) and the size of $S$ is $10^8$. Furthermore, the transition function is stochastic.

+ +

Would Monte Carlo Tree Search (MCTS) be an appropriate method for my problem (in particular, due to the large size of $A$ and $S$ and the stochastic transition function)?

+ +

I have already read a lot of papers about MCTS (e.g. progressive widening and double progressive widening, which sound quite promising), but maybe someone can tell me about his experiences applying MCTS to similar problems or about appropriate methods for this problem (with large state/action space and a stochastic transition function).

+",21287,,2444,,2/15/2019 19:34,3/9/2021 2:04,Is Monte Carlo Tree Search appropriate for problems with large state and action spaces?,,2,6,,,,CC BY-SA 4.0 +9910,2,,9908,1/9/2019 14:02,,0,,"

I am not a DL expert, but these are my short thoughts on it:

+ +

I think this is because it is believed (from an information-theoretic point of view) to be a good way to avoid the network falling into some weird state right from the beginning. Remember: DNNs are nonlinear approximators of continuous functions. So they have some storage capacity to learn a number of functions that map from input to output. When you look at topics like data leakage, you will see that NNs quickly try to cheat you if they can :D. The optimization applied during training will be heavily affected by the initial state. So starting with a random initialization at least avoids all of your neurons doing the same thing at the beginning, etc.

+ +

Biological reasoning: +From the viewpoint of a neurobiologist, I can recommend reading about the Hebbian rule and how neural systems work in general (e.g. google how neurons find targets), and then comparing it to what is known about how the dendrites of cells in the cerebrum develop their interconnections in the first 3 years after birth. In summary, there are behavioral patterns in nature which can look similar, inspiring and even reasonable. But I would say the reason why this random initialization is recommended is backed by mathematical and information-theoretic assumptions rather than purely biological arguments.

+",21290,,21290,,1/10/2019 8:00,1/10/2019 8:00,,,,0,,,,CC BY-SA 4.0 +9911,2,,9909,1/9/2019 16:57,,1,,"

MCTS is often said to be a good choice for problems with large branching factors... but the context where that sentiment comes from is that it originally became popular for playing the game of Go, as an alternative to older game-playing approaches such as alpha-beta pruning. The branching factor of Go is more like 250-300 though, which is often viewed as a large branching factor for board games. It's not such an impressive branching factor anymore when compared to your branching factor of $6,000,000$...

+ +

I don't see MCTS working well out of the box when you have 6 million choices at every step. Maybe it could do well if you have an extremely efficient implementation of your MDP (e.g. if you can simulate millions of roll-outs per second), and if you have a large amount of ""thinking time"" (probably in the order of hours or days) available.

+ +
+ +

To have any chance of doing better with such a massive branching factor, you really need generalization across actions. Are your 6 million actions really all entirely different actions? Or are many of them somehow related to each other? If you gather some experience (a simulation in MCTS, or just a trajectory with Reinforcement Learning approaches), can you generalize the outcome to other actions for which you did not yet collect experience?

+ +

If there is some way of treating different actions as being ""similar"" (in a given state), you can use a single observation to update statistics for multiple different actions at once. The most obvious way would be if you can define meaningful features for actions (or state-action pairs). Standard Reinforcement Learning approaches (with function approximation, maybe linear or maybe Deep Neural Networks) can then relatively ""easily"" generalize in a meaningful way across lots of actions. They can also be combined with MCTS in various ways (see for example AlphaGo Zero / Alpha Zero).

+ +

Even with all that, a branching factor of 6 million still remains massive... but generalization across actions is probably your best bet (which may be done inside MCTS, but really does need a significant number of bells and whistles on top of the standard approach).

+",1641,,,,,1/9/2019 16:57,,,,0,,,,CC BY-SA 4.0 +9912,1,9913,,1/9/2019 17:28,,6,562,"

In the book Reinforcement Learning: An Introduction (2nd edition) Sutton and Barto define at page 104 (p. 126 of the pdf), equation (5.3), the importance sampling ratio, $\rho _{t:T-1}$, as follows:

+

$$\rho _{t:T-1}=\prod_{k=t}^{T-1}\frac{\pi(A_k|S_k)}{b(A_k|S_k)}$$

+

for a target policy $\pi$ and a behavior policy $b$.

+

However, on page 103, they state:

+
+

The target policy $\pi$ [...] may be deterministic [...].

+
+

When $\pi$ is deterministic and greedy it gives $1$ for the greedy action and 0 for all other possible actions.

+

So, how can the above formula give something else than zero, except for the case where policy $b$ takes a path that $\pi$ would have taken as well? If any selected action of $b$ is different from $\pi$'s choice, then the whole numerator is zero and thus the whole result.

+",21299,,2444,,11/5/2020 22:01,11/5/2020 22:08,How can the importance sampling ratio be different than zero when the target policy is deterministic?,,2,0,,,,CC BY-SA 4.0 +9913,2,,9912,1/9/2019 17:58,,4,,"

You're correct, when the target policy $\pi$ is deterministic, the importance sampling ratio will be $\geq 1$ along the trajectory where the behaviour policy $b$ happened to have taken the same actions that $\pi$ would have taken, and turns to $0$ as soon as $b$ makes one ""mistake"" (selects an action that $\pi$ would not have selected).
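
To make this concrete, here is a small sketch (my own illustration, not from the book) of how the ratio behaves with a deterministic target policy:

    def importance_ratio(taken_actions, greedy_actions, behaviour_probs):
        # taken_actions: actions the behaviour policy b actually selected
        # greedy_actions: actions the deterministic target policy pi would select
        # behaviour_probs: probability b assigned to each taken action
        rho = 1.0
        for a, a_greedy, p_b in zip(taken_actions, greedy_actions, behaviour_probs):
            pi_prob = 1.0 if a == a_greedy else 0.0   # deterministic target policy
            rho *= pi_prob / p_b
            if rho == 0.0:
                break   # the rest of the trajectory contributes nothing
        return rho

    # b agreed with pi for two steps, then deviated: the ratio collapses to 0
    print(importance_ratio([0, 1, 2], [0, 1, 0], [0.5, 0.5, 0.5]))   # 0.0
    print(importance_ratio([0, 1], [0, 1], [0.5, 0.5]))              # 4.0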

+ +

Before importance sampling is introduced in the book, I believe the only off-policy method you will have seen is one-step $Q$-learning, which can only propagate observations back along exactly one step. With the importance sampling ratio, you can often do a bit better. You're right, there is a risk that it turns to $0$ rather quickly (especially when $\pi$ and $b$ are very different from each other), at which point it essentially ""truncates"" your trajectory and ignores all subsequent experience... but that still can be better than one-step, there is a chance that the ratio will remain $1$ for at least a few steps. It will occasionally still only permit $1$-step returns, but also sometimes $2$-step returns, sometimes $3$-step returns, etc., which is often better than only having $1$-step returns.

+ +

Whenever the importance sampling ratio is not $0$, it can also give more emphasis to the observations resulting from trajectories that would be common under $\pi$, but are uncommon under $b$. Such trajectories will have a ratio $> 1$. Emphasizing such trajectories more can be beneficial, because they don't get experienced often under $b$, so without the extra emphasis it can be difficult to properly learn what would have happened under $\pi$.

+ +
+ +

Of course, it is also worth noting that your quote says (emphasis mine):

+ +
+

The target policy $\pi$ [...] may be deterministic [...]

+
+ +

It says that $\pi$ may be deterministic (and in practice it very often is, because we very often take $\pi$ to be the greedy policy)... but sometimes it won't be. The entire approach using the importance sampling ratio is well-defined also for cases where we choose $\pi$ not to be deterministic. In such situations, we'll often be able to propagate observations over significantly longer trajectories (although there is also a risk of excessive variance and/or numeric instability when $b$ selects actions that are highly unlikely according to $b$, but highly likely according to $\pi$).

+",1641,,1641,,1/10/2019 8:40,1/10/2019 8:40,,,,10,,,,CC BY-SA 4.0 +9914,1,,,1/10/2019 3:38,,2,1701,"

In Deep Learning by Goodfellow et al., I came across the following line on the chapter on Stochastic Gradient Descent (pg. 287):

+
+

The main question is how to set $\epsilon_0$. If it is too large, the +learning curve will show violent oscillations, with the cost function +often increasing significantly.

+
+

I'm slightly confused why the loss function would increase at all. My understanding of gradient descent is that given parameters $\theta$ and a loss function $\ell (\vec{\theta})$, the gradient update is performed as follows:

+

$$\vec{\theta}_{t+1} = \vec{\theta}_{t} - \epsilon \nabla_{\vec{\theta}}\ell (\vec{\theta})$$

+

The loss function is guaranteed to monotonically decrease because the parameters are updated in the negative direction of the gradient. I would assume the same holds for SGD, but clearly it doesn't. With a high learning rate $\epsilon$, how would the loss function increase in its value? Is my interpretation incorrect, or does SGD have different theoretical guarantees than vanilla gradient descent?

+",19403,,2444,,1/8/2022 16:50,1/10/2022 10:39,Why can the learning rate make the loss increase in stochastic gradient descent?,,1,0,,,,CC BY-SA 4.0 +9915,2,,9908,1/10/2019 4:04,,4,,"

In short

+ +

I mentioned in another post, how the Artificial Neural Network (ANN) weights are a relatively crude abstraction of connections between neurons in the brain. Similarly, the random weight initialization step in ANNs is a simple procedure that abstracts the complexity of central nervous system development and synaptogenesis.

+ +

A bit more detail (with the most relevant parts italicized below)

+ +

The neocortex (one of its columns, more specifically) is a region of the brain that somewhat resembles an ANN. It has a laminar structure with layers that receive and send axons from other brain regions. Those layers can be viewed as ""input"" and ""output"" layers of an ANN (axons ""send"" signals, dendrites ""receive""). Other layers are intermediate-processing layers and can be viewed as the ANN ""hidden"" layers.

+ +

When building an ANN, the programmer can set the number of layers and the number of units in each layer. In the neocortex, the number of layers and layer cell counts are determined mostly by genes (however, see: Human echolocation for an example of post-birth brain plasticity). Chemical cues guide the positions of the cell bodies and create the laminar structure. They also seem to guide long term axonal connections between distant brain regions. The cells then sprout dendrites in certain characteristic ""tree-like"" patterns (see: NeuroMorpho.org for examples). The dendrites will then form synapses with axons or other cell bodies they encounter along the way, generally based on the encountered cell type.

+ +

This last phase is probably the most analogous to the idea of random weight initialization in ANNs. Based on where the cell is positioned and its type, the encountered other neurons will be somewhat random and so will the connections to them. These connections are probably not going to be very strong initially but will have room to get stronger during learning (probably analogous to initial random weights between 0 and ~0.1, with 1 being the strongest possible connection). Furthermore, most cells are either inhibitory or excitatory (analogous to negative and positive weights).

+ +

Keep in mind this randomization process has a heavy spatial component in real brains. The neurons are small and so they will make these connections to nearby neurons that are 10-200 microns away. The long-distance connections between brain regions are mostly ""programmed-in"" via genes. In most ANNs, there is generally no distance-based aspect to the initialization of connection weights (although convolutional ANNs implicitly perform something like distance-based wiring by using the sliding window).

+ +

There is also the synaptic pruning phenomenon, which might be analogous to creating many low weight connections in an ANN initially (birth), training it for some number of epochs (adolescence), and then removing most low-weight connections (consolidation in adulthood).

+",21307,,,,,1/10/2019 4:04,,,,1,,,,CC BY-SA 4.0 +9918,1,9928,,1/10/2019 8:29,,2,1843,"

I have a non-smooth loss function $f(x) = \min(x, 0.5)$.

+

Can gradient descent be used for training neural networks with such functions? Can gradient descent be used for fairly general, mathematically not-nice functions?

+

PyTorch or TensorFlow can numerically calculate gradients of almost any function, but is it acceptable practice to use general, mathematically not-nice loss functions?

+",8332,,2444,,10/14/2021 14:59,10/14/2021 14:59,Can gradient descent training be used for non-smooth loss functions?,,1,0,,,,CC BY-SA 4.0 +9919,1,10789,,1/10/2019 10:01,,2,466,"

In the paper Markov games as a framework for multi-agent reinforcement learning (which introduces the minimax Q Learning algorithm), at the bottom left of page 3, my understanding is that the author suggests, for a simultaneous 1v1 zero-sum game, to do Bellman iterations with $$V(s)=\min_{o}\sum_{a}\pi_{a}Q(s,a,o)$$ with $\pi_{a}$ the probability of playing action $a$ for the maximizing player in his best mixed strategy to play in state $s$.

+ +

If my understanding is correct, why does the opponent in this equation play a pure strategy ($\min_{o}$) rather than his best mixed strategy in state $s$. This would instead give $$V(s)=\sum_{o}\sum_{a}\pi_{a}\pi_{o}Q(s,a,o)$$ with $\pi_{o}$ the opponent's best mixed strategy in state $s$. Which of these two formulations is correct and why? Are they somehow equivalent?

+ +

The context of this question is that I am trying to use minimax Q learning with a Neural Network outputting the matrix $Q(s,a,o)$ for a simultaneous zero-sum game. I have tried both methods and so far have seen seemingly equally bad results, quite possibly due to bugs or other errors in my method.

+",21311,,2444,,2/21/2019 14:43,2/21/2019 14:43,Using the opponent's mixed strategy in estimating the state value in minimax Q learning,,1,0,,,,CC BY-SA 4.0 +9921,1,9926,,1/10/2019 12:45,,2,67,"

The spectrum of human sensory inputs seems to fall within certain ranges, suggesting that normalization is built into biological NNs?

+ +

It also adapts to circumstantial conditions, e.g. people living in a city with a certain factory smell eventually don't perceive the smell anymore, at least not consciously (within working memory); it adapts to a new baseline?

+",21269,,,,,1/11/2019 7:59,Is input normalization built-in into mammals sensory neurons?,,1,0,,,,CC BY-SA 4.0 +9922,2,,9838,1/10/2019 12:57,,-1,,"

Maybe Deep Reinforcement Learning?

+ +

I am not sure, but an AND gate could be solved by your implementation. I have a different feeling about XOR gates. Just think: first we need information about the two conditions, and only then can we check for more complex combinations. +First of all, I thought about a neural network with one hidden layer. Sounds perfect.

+ +

I think you will understand when you check this Tensorflow-Keras code:

+ +
import numpy as np
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense
+from tensorflow.keras.optimizers import Adam
+
+iterations = 50
+
+model = Sequential()
+model.add(Dense(16, input_shape=(None, 2), activation='relu')) # our hidden layer for the XOR problem
+model.add(Dense(2, activation='sigmoid'))
+model.summary()
+opt = Adam(0.01)
+model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['acc'])
+# mean_squared_error categorical_crossentropy binary_crossentropy
+
+for iteration in range(iterations):
+    x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) # table of inputs
+    y_train = np.array([[1, 0], [0, 1], [0, 1], [1, 0]]) # outputs in categorical (first index is 0, second is 1)
+
+    r = np.random.randint(0, len(x_train)) # random input
+    r_x = x_train[r]
+    r_x = np.array([[r_x]])
+    result = model.predict(r_x)[0] # predict
+    best_id = np.argmax(result) # get of index of ""better"" output
+
+    input_vector = np.array([[x_train[r]]])
+    isWon = False
+    if (best_id == np.argmax(y_train[r])):
+        isWon = True # everything is good
+    else:
+        # answer is bad!
+        output = np.zeros((2))
+        output[best_id] = -1
+        output = np.array([[output]])
+        loss = model.train_on_batch(input_vector, output)
+
+    print(""iteration"", iteration, ""; has won?"", isWon)
+
+ +

When ""answer"" of agent is good - we are not changing anything (but we could train network with best action as 1 for stability).

+ +

When the answer is bad, we mark that action as bad, so the other actions get a higher probability of being chosen.

+ +

Sometimes learning needs more than 50 iterations, but this is only my suggestion. Play with the hidden layer neuron count, learning rate, and number of iterations.

+ +

Hope this helps you :)

+",9101,,,,,1/10/2019 12:57,,,,2,,,,CC BY-SA 4.0 +9923,2,,9903,1/10/2019 13:40,,3,,"

Machine Learning is a bad fit to this problem.

+ +

Even simple PRNGs that are not suitable for use in simulators (such as rand()) are varied enough that it is very hard to reverse engineer them statistically using generic techniques - essentially what 90% of ML does is fit a generic model to data statistically by altering parameters. The remaining 10% might do things in specialist manner, such as saving all the data and picking best option.

+ +

In theory most ML approaches would eventually solve a PRNG, however that would typically involve iterating through the entire state space of the PRNG multiple times. The statistical relationship between internal state, next state and output of a PRNG is complex by design, so that this is the only ""black box"" statistical approach, and this is clearly not feasible for any real implementation of a random number generator, which is going to have at least $2^{31}$ states on modern machines. Perhaps older 16-bit PRNGs, with a single value for state might be tractable.

+ +

An AI advanced enough to attempt to reverse engineer the output logically based on purely the data and researching how RNGs work is too advanced for current ML techniques to consider.

+ +

That leaves approaches that might try to construct a similar RNG, such as Genetic Programming (where the genome is converted to executable code). The trouble with this approach is there is no heuristic for a RNG that measures how close its output is to a target. A single bit of state difference or any tiny but meaningful change in generated RNG design will produce output that has no similarities with the target output whatsoever. Without such a measure you have no fitness function, and no way to attempt a guided search using the many discrete optimisation tools from AI.

+ +

Instead the usual approach to ""breaking"" a PRNG is to analyse the algorithm. Knowing the algorithm of many non-cryptographic PRNGs can allow predicting the internal state of the generator, sometimes in very few steps (for really simple Linear Congruential Generators that might be just a single step!).

+",1847,,1847,,1/10/2019 14:30,1/10/2019 14:30,,,,0,,,,CC BY-SA 4.0 +9924,1,,,1/10/2019 13:44,,3,1383,"

An artificial intelligence (AI) is often defined as something that can learn over time and can imitate human behaviors.

+

If an expert system (e.g. MYCIN) that only involves if-then-else statements qualifies to be an AI, then every program we write in our daily lives that involves some condition-based question answering should be an AI. Right? If not, then what would be an exact and universal definition of AI? How can software qualify to be called AI?

+",21316,,2444,,11/17/2021 14:19,11/17/2021 14:19,"If expert systems are a bunch of if-then-else statements, then how are they termed as AI?",,1,1,,,,CC BY-SA 4.0 +9925,1,,,1/10/2019 14:42,,6,879,"

Disclaimer: I'm not a student in computer science and most of my knowledge about ML/NN comes from YouTube, so please bear with me!

+ +
+ +

Let's say we have a classification neural network, that takes some input data $w, x, y, z$, and has some number of output neurons. I like to think about a classifier that decides how expensive a house would be, so its output neurons are bins of the approximate price of the house.

+ +

Determining house prices is something humans have done for a while, so let's say we know a priori that data $x, y, z$ are important to the price of the house (square footage, number of bedrooms, number of bathrooms, for example), and datum $w$ has no strong effect on the price of the house (color of the front door, for example). As an experimentalist, I might determine this by finding sets of houses with the same $x, y, z$ and varying $w$, and show that the house prices do not differ significantly.

+ +

Now, let's say our neural network has been trained for a little while on some random houses. Later on in the data set, it will encounter sets of houses whose $x, y, z$ and price are all the same, but whose $w$ are different. I would naively expect that at the end of the training session, the weights from $w$ to the first layer of neurons would go to zero, effectively decoupling the input datum $w$ from the output neuron. I have two questions:

+ +
    +
  1. Is it certain, or even likely, that $w$ will become decoupled from the layer of output neurons?
  2. +
  3. Where, mathematically, would this happen? What in the backpropagation step would govern this effect happening, and how quickly would it happen?
  4. +
+ +

For a classical neural network, the network has no ""memory,"" so it might be very difficult for the network to realize that $w$ is a worthless input parameter.

+ +

Any information is much appreciated, and if there are any papers that might give me insight into this topic, I'd be happy to read them.

+",21319,,,,,4/17/2019 9:36,Can neural networks learn to ignore an input datum?,,2,0,,,,CC BY-SA 4.0 +9926,2,,9921,1/10/2019 15:07,,2,,"

Yes, for many sensory inputs there is indeed something similar to normalization. But it's not really the same as in classical data analytics, compared to what e.g. min/max normalization or other techniques do.

+ +

Let's look at some examples and considerations:

+ +
    +
  • mammals don't perceive heat or loudness in a linear way. This is because many sensory receptors already have chemical / physical limits. Double the decibels will not be perceived as double the intensity. Inside your ear, the small hammer and anvil will brace to protect you. --> it's like normalization with logarithmic effects applied.

  • +
  • heat perception is more like an integration of differences than an absolute temperature measurement. It's measured via H+ ion flow in the mitochondria of the cell (if I recall correctly)

  • +
  • On the neuronal side, graded signals in the dendrites (analog signals) sum up gradually to later form a spike at the axon hillock, where in turn a firing frequency is encoded - the maximum of this frequency serves as a natural upper limit. I remember that grasshoppers increase axon firing frequency when objects start covering more ommatidial area on their ""eye"". The more of their ""eyes"" are covered by the shadow, the more input on the neuron --> higher firing rate.

  • +
  • a lot of sensory input is post-processed in higher cerebral areas, e.g. compared to what is expected, and heuristics are applied to compare a signal with former events.

  • +
  • when doing computational data analysis we may want to go for accuracy and maximum comparability, ideally across all data that could be available --> e.g. with respect to the properties of a standard normal distribution. Hence we put some effort into being accurate, knowing the true parameters, removing outliers and so on --> big data comes into play here. Nature, in contrast, often strives for efficiency, i.e. reaching the minimum required with minimal resources.

  • +
+ +

Summary: Compared to normalization in an analytical sense (e.g. mean, min-max or other feature normalization techniques), nature is often only interested in the current difference between stimuli, and only within some relevant range. Other information is not integrated. Normalization with the goal of making measurement points comparable only happens within the range of the mapping function provided by the sensor/neuron/receptor.
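
As a rough sketch of such a bounded, logarithmic mapping (the threshold and saturation values here are made up, purely for illustration):

    import numpy as np

    def perceived_intensity(stimulus, threshold=1.0, saturation=1000.0):
        # Weber-Fechner style mapping: nothing is sensed below the threshold,
        # the response grows logarithmically above it, and it clips at saturation.
        s = np.clip(stimulus, threshold, saturation)
        return np.log(s / threshold)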

+ +

So this should also answer your question about why you no longer smell something in the city after a while. This for sure happens in higher cerebral regions (it might also be that your smell receptors saturate), but it's the same principle. Your consciousness just saves energy by not concentrating on something that isn't changing anyway.

+ +

If you want to read more have a look here: https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law

+",21290,,21290,,1/11/2019 7:59,1/11/2019 7:59,,,,3,,,,CC BY-SA 4.0 +9928,2,,9918,1/10/2019 17:37,,4,,"

Gradient descent and stochastic gradient descent can be applied to any differentiable loss function irrespective of whether it is convex or non-convex. The ""differentiable"" requirement ensures that trainable parameters receive gradients that point in a direction that decreases the loss over time.

+ +

In the absence of a differentiable loss function, the true gradient must be approximated through other methods. For example, in classification problems, the 0-1 loss function is considered the ""true"" loss, but it is non-convex and difficult to optimize. Instead, surrogate loss functions act as tractable proxies for true loss functions. They are not necessarily worse; negative log-likelihood loss gives a softmax distribution over $k$ classes rather than just the classification boundary.

+ +

For your problem specifically, $f(x,a)=min(x,a)$ is not a differentiable loss function. It is not differentiable at $x=0.5$, but the gradient could be estimated through the subgradient. In practice, this works because neural networks often don't achieve the local/global minima of a loss function but instead asymptotically decreasing values that achieve good generalization error. Tensorflow and PyTorch use subgradients when fed non-differentiable loss functions. You could also use a smooth approximation of the $min$ function (see this thread) to get better gradients.
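
For example, here is a minimal sketch of one possible smooth approximation of $min(x, a)$ (a log-sum-exp soft-min; the temperature parameter tau is only illustrative), together with its gradient with respect to $x$:

    import numpy as np

    def soft_min(x, a, tau=0.1):
        # smooth approximation of min(x, a); approaches the exact min as tau -> 0
        return -tau * np.logaddexp(-x / tau, -a / tau)

    def soft_min_grad_x(x, a, tau=0.1):
        # d/dx of soft_min: moves smoothly from 1 (x << a) to 0 (x >> a),
        # instead of jumping at x = a like the subgradient does
        return 1.0 / (1.0 + np.exp((x - a) / tau))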

+",19403,,,,,1/10/2019 17:37,,,,0,,,,CC BY-SA 4.0 +9929,2,,5774,1/11/2019 3:20,,1,,"

I have just the same problem, and I was trying to derive the backpropagation for the convolutional layer with stride, but it doesn't work.

+

When you do the striding in the forward propagation, you choose elements next to each other to convolve with the kernel, then take a step $>1$. In the backpropagation, in the reverse operation, the delta matrix elements are multiplied by the kernel elements (with the rotation), but not in a strided fashion: you end up picking elements that are not next to each other, something like $DY_{11} * K_{11} + DY_{13} * K_{12} + DY_{31} * K_{21} + DY_{33} * K_{22}$, which is NOT equivalent to a convolution with a stride $>1$.

+

So, as far as I can tell, if I want to implement the ConvNet myself to get a better grasp of the concept, I have to implement a different method for the backprop if I allow strides.
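
For reference, one standard way to handle this (a minimal 1-D sketch, assuming a cross-correlation in the forward pass) is to dilate the output gradient by inserting zeros according to the stride and then apply a full convolution with the kernel:

    import numpy as np

    def conv1d_forward(x, w, stride):
        # strided cross-correlation, no padding
        k = len(w)
        out_len = (len(x) - k) // stride + 1
        return np.array([np.dot(x[j * stride:j * stride + k], w) for j in range(out_len)])

    def conv1d_input_grad(dy, w, stride, in_len):
        # insert (stride - 1) zeros between the output-gradient elements,
        # then a full convolution with the kernel yields dL/dx
        dy_dilated = np.zeros((len(dy) - 1) * stride + 1)
        dy_dilated[::stride] = dy
        dx = np.convolve(dy_dilated, w, mode='full')
        return np.pad(dx, (0, in_len - len(dx)))  # inputs the window never reached get zero gradient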

+",21330,,2444,,12/30/2021 13:38,12/30/2021 13:38,,,,0,,,,CC BY-SA 4.0 +9933,1,,,1/11/2019 9:20,,3,156,"

I understand why deep generative models like DBNs (deep belief nets) or DBMs (deep Boltzmann machines) are able to capture underlying structures in data and use them for various tasks (classification, regression, multimodal representations, etc.).

+ +

But for the classification tasks like in Learning deep generative models, I was wondering why the network is fine-tuned on labeled-data like a feed-forward network and why only the last hidden layer is used for classification?

+ +

During the fine-tuning and since we are updating the weights for a classification task ( not the same goal as the generative task ), could the network lose some of its ability to regenerate proper data? ( and thus to be used for different classification tasks ? )

+ +

Instead of using only the last layer, could it be possible to use a partition of the hidden units of different layers to perform the classifications task and without modifying the weights? For example, by taking a subset of hidden units of the last two layers ( sub-set of abstract representations ) and using a simple classifier like an SVM?

+ +

Thank you in advance!

+",21335,,21335,,1/16/2019 15:59,1/16/2019 15:59,Why is the last layer of a DBN or DBM used for classification task?,,1,0,,,,CC BY-SA 4.0 +9934,1,9997,,1/11/2019 15:00,,11,14993,"

I have checked out many methods and papers, like YOLO, SSD, etc., with good results in detecting a rectangular box around an object. However, I could not find any paper that shows a method that learns a rotated bounding box.

+

Is it difficult to learn the rotated bounding box for a (rotated) object?

+

Here's a diagram that illustrates the problem.

+

+

For example, for this object (see this), its bounding box should be of the same shape (the rotated rectangle is shown in the 2nd right image), but the prediction result for YOLO will be like the 1st right image.

+

Is there any research paper that tackles this problem?

+",16313,,2444,,1/28/2021 23:38,1/28/2021 23:41,Is it difficult to learn the rotated bounding box for a (rotated) object?,,3,0,,,,CC BY-SA 4.0 +9935,1,9961,,1/11/2019 16:48,,4,570,"

I'm using an object detection neural network and I employ data augmentation to slightly enlarge my small dataset. More specifically, I do rotation, translation, mirroring and rescaling.

+ +

I notice that rotating an image (and thus its bounding box) changes the box's shape. This implies an erroneous box for elongated objects; for instance, on the augmented image (right image below) the box is not tightly packed around the left player as it was on the original image.

+ +

The problem is that this kind of data augmentation seems (in theory) to hamper the network from gaining precision on bounding box locations, as it loosens the frame.

+ +

Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?

+ +

Thank you in advance!

+ +

(Obviously, it seems advisable to use small rotation angles)

+ +

+",19859,,21337,,1/12/2019 10:25,1/13/2019 11:56,How data augmentation like rotation affects the quality of detection?,,1,0,,,,CC BY-SA 4.0 +9936,1,,,1/11/2019 18:21,,1,40,"

Let's consider a classic feedforward neural network $F$ with input dimension $d$, output dimension $k$, $L$ layers $l_i$ with $m$ neurons each. ReLu activation.

+ +

This means that, given a point $x \in R^d$, its image is $F(x) \in R^k$. Let's now assume I add some Gaussian noise $\eta_i$ in EVERY hidden layer $l_i(x)$ at the same time, where the norm of this noise is 5% of the norm of its layer computed on the point $x$. Let's call this new neural network $F_*$.

+ +

I know that, empirically, neural networks are resistant to this kind of noise, especially in the first layers. How can I show this theoretically?

+ +

The question i'm trying to answer is the following:

+ +

After having injected this noise $\eta_i$ in every layer $l_i(x)$, how far will the output $F_{*}(x)$ be from the output of the original neural network $F(x)$?

+",21338,,21338,,1/12/2019 13:23,1/12/2019 13:23,Are Neural Network layers resistent to noise?,,0,0,,,,CC BY-SA 4.0 +9937,1,,,1/11/2019 22:31,,3,445,"

If I'm performing a text classification task using a model built in Keras, and, for example, I am attempting to predict the appropriate tag for a given Stack Overflow question:

+
+

How do I subtract 1 from an integer?

+
+

And the ground-truth tag for this question is:

+
+

objective-c

+
+

But my model is predicting:

+
+

c#

+
+

If I were to retrain my model, but this time add the above question and tag in both the training and testing data, would the model be guaranteed to predict the correct tag for this question in the test data?

+

I suppose the tl;dr is: Are neural networks deterministic if they encounter identical data during training and testing?

+

I'm aware it's not a good idea to use the same data in both training and testing, but I'm interested from a hypothetical perspective, and for gaining more insight into how neural networks actually learn. My intuition for this question is "no", but I'd really be interested in being pointed to some relevant literature that expands/explains that intuition.

+",21347,,2444,,1/17/2021 17:23,1/17/2021 17:23,Will a neural network always predict the correct label if it sees the exact same input during training and testing?,,2,0,,,,CC BY-SA 4.0 +9938,2,,9934,1/11/2019 22:57,,3,,"

Here's a recent paper that does what you're looking for. It looks like they achieve this simply by adding a couple rotated prior boxes and regressing the angles in between. This is similar to what standard object detectors do in terms of creating a bunch of prior box shapes and regressing the actual sizes.

+",17408,,,,,1/11/2019 22:57,,,,0,,,,CC BY-SA 4.0 +9939,1,,,1/12/2019 0:29,,1,64,"

The Markov property is the dependence of a system's future state probability distribution solely on the present state, excluding any dependence on past system history.

+ +

The presence of the Markov property saves computing resource requirements in terms of memory and processing in AI implementations, since no indexing, retrieval, or calculations involving past states is required.

+ +

However, the Markov property is often an unrealistic and too strong assumption.

+ +

Precisely, what limitations does the Markov property place on real-time learning?

+",4302,,2444,,2/13/2019 2:35,2/13/2019 2:35,What limitations does the Markov property place on real time learning?,,0,0,,,,CC BY-SA 4.0 +9942,2,,9828,1/12/2019 1:34,,3,,"

The general answer to the behavior of combining common activation functions is that the laws of calculus must be applied, specifically differential calculus, the results must be obtained through experiment to be sure of the qualities of the assembled function, and the additional complexity is likely to increase computation time. The exception to such increase will be when the computational burden of the combination is small compared to the convergence advantages the combination provides.

+

This appears to be true of Swish, the name given to the activation function defined as

+

$$f(x) = x \, \mathbb{S}(\beta x) \; \text{,}$$

+

where $f()$ is the Swish activation function and $\mathbb{S}$ is the sigmoid function. Note that Swish is not strictly a combination of activation functions. It is formed through the addition of a hyper-parameter $\beta$ inside the sigmoid function and a multiplication of the input to the sigmoid function result.
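
A minimal sketch of the function itself (with $\beta$ exposed as a parameter; $\beta = 1$ gives the commonly used variant):

    import numpy as np

    def swish(x, beta=1.0):
        # x * sigmoid(beta * x)
        return x / (1.0 + np.exp(-beta * x))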

+

It does not appear to have been developed by Google. The originally anonymously submitted paper (for double-blind review as an ICLR 2018 paper), Searching for Activation Functions, was authored by Prajit Ramachandran, Barret Zoph, and Quoc V. Le around 2017. This is their claim.

+
+

Our experiments show that the best discovered activation function, ... Swish, ... tends to work better than ReLU on deeper models across a number of challenging datasets.

+
+

Any change in activation function to any one layer will, except in the astronomically rare case, impact accuracy, reliability, and computational efficiency. Whether the change is significant cannot be generalized. That's why new ideas are tested against data sets traditionally used to gauge usefulness1.

+

Combining activation functions to form new activation functions is not common. For instance, AlexNet does not combine them.2. It is, however, very common to use different activation functions in different layers of a single, effective network design.

+
+

Footnotes

+

[1] Whether these traditions create a bias is another question. Those who follow the theory of use case analysis pioneered by Swedish computer scientist Ivar Hjalmar Jacobson or 6 Sigma ideas would say that these tests are unit test, not functional tests against real world use cases, and they have a point.

+

[2] To correct any misconceptions that may arise from another answer, AlexNet, the name given to the approach outlined in ImageNet Classification with Deep Convolutional Neural Networks (2012) by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton from the University of Toronto, does not involve combining activation functions to form new ones. They write this.

+
+

The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels.

+

...

+

The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer. The internal layers are pure ReLU and the output layer is Softmax.

+
+

There are also convolution kernels and pooling layers in the AlexNet approach's series of layers used by them, and the design has entered common use since their winning of the ImageNet competition in 2012. Other approaches have won subsequent competitions.

+",4302,,-1,,6/17/2020 9:57,1/12/2019 1:34,,,,0,,,,CC BY-SA 4.0 +9944,1,,,1/12/2019 6:27,,2,394,"

How does DARTS compare to ENAS? Which one is better or what advantages does they each have?

+ +

Links:

+ + +",21351,,1671,,1/12/2019 23:29,1/12/2019 23:29,How does DARTS compare to ENAS?,,0,0,,,,CC BY-SA 4.0 +9945,2,,9937,1/12/2019 8:51,,1,,"

After training, all standard models are deterministic (the process each input goes through is fixed).

+ +

In essence, during training the model attempts to learn the distribution of the training dataset. Whether it is able to depends on the size of the model, if it is big enough, it can simply ""memorize"" all the training samples and result in perfect accuracy on the training set.

+ +

Normally this is considered to be terrible (it is called overfitting) and many regularization techniques attempt to prevent it. Ultimately, when training a model, you are giving it the training distribution as an example, but you hope that it will be able to estimate the real distribution from it.

+",20399,,,,,1/12/2019 8:51,,,,0,,,,CC BY-SA 4.0 +9946,2,,2841,1/12/2019 9:16,,-1,,"

This may not be what you were looking for, but technically yes - although not for speed and strength. You could randomly guess new mathematical/physical/chemical solutions to become more efficient at random guessing (basically anything that allows the machine to compute faster and perhaps to simulate the effect of those findings), thus technically achieving something similar to a singularity without needing any intelligence at all (or just a human level of it), since you could just brute-force everything.

+ +

Is this efficient? No, not even close to being in any way feasible. Does it work? Technically, yes.

+ +

It would be a singularity of sorts, since it improves itself continuously, but it wouldn't need to improve its own intelligence.

+ +

Of course, some findings might make it possible to become more intelligent, but let's just assume it doesn't apply those findings to itself.

+",21191,,,,,1/12/2019 9:16,,,,0,,,,CC BY-SA 4.0 +9947,2,,9937,1/12/2019 10:13,,3,,"

No, Neural Networks do not have such a guarantee. In fact, I don't believe any kind of classifier in the entire field of Machine Learning has such a guarantee, though some may be slipping my mind...

+ +

For an easy counterexample, consider what happens if you have two instances with precisely identical inputs, but different output labels. If your classifier is deterministic (in the sense that there is no stochasticity in the procedure going from input to output after training), which a Neural Network is (unless, for example, you mess up a Dropout implementation and accidentally also apply dropout after training), it cannot possibly generate the correct output for both of those instances, even if they were presented as examples thousands of times during training.

+ +

Of course the above is an extreme example, but similar intuition applies to more realistic cases. There can be cases where getting the correct prediction on one instance would reduce the quality of predictions on many other instances if they have somewhat similar inputs. Normally, the training procedure would then prefer getting better predictions on the larger number of instances, and settle for failure on another instance.

+",1641,,,,,1/12/2019 10:13,,,,2,,,,CC BY-SA 4.0 +9949,2,,9933,1/12/2019 11:20,,1,,"

One of the big realizations that deep learning models brought in recent years was that we can train the feature extractors and classifiers simultaneously. In fact, most people have stopped separating the two tasks and simply refer to the whole process as training the model.

+ +

However, if you dive into almost any model architecture, it is constructed from a first part, the feature extractor, which outputs the embedding (basically the encoded features of the input), and a second part consisting of the final layer of the model - the classifier, which uses the embedding to predict the class of the input.

+ +

The goal of the first part is to reduce the dimensionality of the input to just the most important features for the final task. The goal of the classifier is to use those features to output the final score/class etc.

+ +

This is why usually only this layer is fine-tuned, because we don't want to damage the trained feature extractor, just update the classifier to fit a slightly different distribution.

+ +

I'm pretty sure that in the case you mention, the classification layer is not used for generation, so updating it shouldn't have any effect on the model's generative abilities.

+ +

Regarding your last question: yes, it is possible. Once you have extracted the features with the model, you can use any kind of classifier on them.
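
For instance, a minimal sketch of that idea (the tiny model, layer sizes and dummy data below are purely illustrative):

    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    # dummy data standing in for a real dataset
    x_train = np.random.rand(200, 32).astype('float32')
    y_train = np.random.randint(0, 2, size=200)

    # everything up to 'embedding' is the feature extractor; the last Dense layer is the classifier head
    inputs = tf.keras.Input(shape=(32,))
    h = tf.keras.layers.Dense(64, activation='relu')(inputs)
    embedding = tf.keras.layers.Dense(16, activation='relu', name='embedding')(h)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(embedding)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit(x_train, y_train, epochs=3, verbose=0)

    # reuse the trained feature extractor and put any other classifier (here an SVM) on top
    feature_extractor = tf.keras.Model(inputs, embedding)
    features = feature_extractor.predict(x_train, verbose=0)
    svm = SVC(kernel='rbf').fit(features, y_train)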

+",20399,,,,,1/12/2019 11:20,,,,0,,,,CC BY-SA 4.0 +9954,1,9970,,1/12/2019 14:55,,3,458,"

Some examples of low-variance machine learning algorithms include linear regression, linear discriminant analysis, and logistic regression.

+

Examples of high-variance machine learning algorithms include decision trees, k-nearest neighbors, and support vector machines.

+

Source:

+

+

What makes a machine learning algorithm a low variance one or a high variance one? For example, why do decision trees, k-NNs and SVMs have high variance?

+",15368,,2444,,6/23/2020 21:24,6/24/2020 14:36,What makes a machine learning algorithm a low variance one or a high variance one?,,2,0,,,,CC BY-SA 4.0 +9958,1,,,1/12/2019 22:28,,1,22,"

For a neural turing machine, there is an attention distribution over the memory cells. A read operation consists of multiplying the memory cell's value by its respective probability, and adding these results for all memory cells.
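
For concreteness, the soft read I am describing looks roughly like this (sizes are illustrative):

    import numpy as np

    memory = np.random.rand(8, 4)      # 8 cells of 4 values each
    weights = np.random.rand(8)
    weights /= weights.sum()           # attention distribution over the cells

    # every memory cell contributes, weighted by its attention probability
    read_vector = (weights[:, None] * memory).sum(axis=0)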

+ +

Suppose we only did the above operation for memory cells with a probability greater than 0.5, or suppose we concatenated the results instead of adding them. Can this be implemented/ trained with stochastic gradient descent? Or would it not be differentiable?

+ +

Thanks!

+",21375,,,,,1/12/2019 22:28,Is discrete reading in neural turing machines differentiable?,,0,0,,,,CC BY-SA 4.0 +9961,2,,9935,1/13/2019 11:56,,1,,"
+

The problem is that this kind of data augmentation seems (in theory) to hamper the network to gain precision on bounding boxes location as it loosens the frame.

+
+ +

Yes, it is clear from your examples that the bounding boxes become wider. Generally, including large amounts of data like this in your training data will mean that your network will also have a tendency to learn slightly larger bounding boxes. Of course, if the majority of your training data still has tight boxes, it should stell tend towards learning those... but likely slightly wider ones than if the training data did not include these kinds of rotations.

+ +
+

Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?

+ +

(Obviously, it seems advisable to use small rotation angles)

+
+ +

I do not personally work directly in the area of computer vision really, so I'm not sufficiently familiar with the literature to point you to any references on this particular issue. Based on my own intuition, I can recommend:

+ +
    +
  1. Using relatively small rotation angles, as you also already suggested yourself. The bounding boxes will become a little bit wider than in the original dataset, but not by too much.
  2. +
  3. Using rotation angles that are a multiple of $90^\circ$. Note that if you rotate a bounding box by a multiple of $90^\circ$, the rotated bounding boxes become axis-aligned and your problem disappears again, they'll become just as tight as the bounding boxes in the unrotated image. Of course, you can also combine this suggestion with the previous one, and use rotation angles in, for example, $[85^\circ, 95^\circ]$.
  4. +
  5. Apply larger rotations primarily in images that only have bounding boxes that are approximately ""square"". From looking at your image, I get the impression that the problem of bounding boxes becoming wider after rotations is much more severe when you have extremely wide or thin bounding boxes (with one dimension much greater than the other). When the original bounding box is square, there still will be some widening after rotation, but not nearly as much, so the problem may be more acceptable in such cases.
  6. +
+",1641,,,,,1/13/2019 11:56,,,,0,,,,CC BY-SA 4.0 +9962,1,9964,,1/13/2019 13:41,,0,64,"

Why is the exponential function used to decide whether to accept a worse solution or not? To be more specific: why was $e$ chosen as the base?

+ +

The probability of accepting a worse solution is given by $p=e^{-\frac{E(y)-E(x)}{kT}}$

+ +

$E(y)$ is the energy of the old solution, $E(x)$ is the energy of the new solution, and $T$ is the temperature, which decreases by a constant factor $k$ in every iteration.

+",19413,,21157,,1/14/2019 9:20,1/14/2019 9:20,Simulated Annealing: Why is e-function used as propability function to decide to accept a worse solution,,1,0,,,,CC BY-SA 4.0 +9964,2,,9962,1/13/2019 17:53,,0,,"

You can find the explanation by asking some questions about the function. Suppose the value of $\frac{E(y)-E(x)}{kT}$ is much greater than zero. What does that mean? It means the value of $E(y)$ is much greater than $E(x)$ relative to $kT$, which acts as a measure of the decreasing temperature. In this situation, you want a probability that is near zero. Hence, $e^{-\frac{E(y)-E(x)}{kT}}$ is a good value for the probability of selecting worse solutions!

+ +

Why $e$ instead of $2$ or other values greater than $1$? Because it is convenient in optimization problems, as its derivative is simpler than the alternatives!
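
A minimal sketch of the resulting acceptance rule (here delta_energy stands for the energy difference in the exponent):

    import math
    import random

    def accept(delta_energy, temperature):
        # always accept improvements; accept worse solutions with probability
        # exp(-delta_energy / temperature), which shrinks as the temperature drops
        if delta_energy <= 0:
            return True
        return random.random() < math.exp(-delta_energy / temperature)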

+",4446,,,,,1/13/2019 17:53,,,,0,,,,CC BY-SA 4.0 +9965,1,,,1/13/2019 18:10,,1,93,"

I am currently working with classical roboticists who insist on inverse kinematics, and on what I (perhaps mistakenly) call the old way of thinking about robots accomplishing tasks. Much of the relatively recent research focuses on robots using brain models, such as multiple-timescale (artificial intelligence) models that segment sequences and reproduce them, having learned them. The problem I face is that this group of roboticists insists that a robot already knows the sequence, and that training it to be reproduced is redundant, since a robot can already reproduce the sequence anyway. How accurate would you rate this assessment of using AI in robotics? Are there any advantages of using AI to learn sequences for robot control?

+",21397,,,,,1/13/2019 18:10,Using Artificial Intelligence for Robot movement instead of regular Inverse Kinematics,,0,1,,,,CC BY-SA 4.0 +9966,1,,,1/13/2019 18:38,,1,156,"

I would appreciate your help with this (naive) question of mine.

+ +

Given a set of points located on a circle, $x_{i}, y_{i}$, as the input data, can a deep/machine learning algorithm infer that the radius of the circle is constant? In other words, given the data $x_{i}, y_{i}$, is there a way for the algorithm to discover the constraint $x_{i}^2 + y_{i}^2 = \text{constant}$?

+ +

I would also appreciate any related reference on the subject.

+",21399,,,,,10/6/2021 21:06,Extracting algebraic constraints from the input data,,1,1,,,,CC BY-SA 4.0 +9968,1,,,1/13/2019 20:34,,1,34,"

I have data that are the result of rules that are exceptionless. I want my program to 'look' at my data and figure out those rules. However, the data might contain what might look like an exception (a rule within a rule) but that is likewise true for all occasions, e.g.

+ +

All men of the dataset with x common characteristics go out for a beer on Thursday after work. That is true for all men with those characteristics. However, they will cancel their plans if their wife is sick. That last condition might initially look like an exception to the rule (go out for beer on Thursdays), but it is not, as long as it is true for all men with those x characteristics.

+ +

So the question is: Which approach/method would be suitable for this?

+",19393,,,,,1/13/2019 20:34,How can I model regularity?,,0,0,,,,CC BY-SA 4.0 +9970,2,,9954,1/14/2019 1:42,,3,,"

What this is talking about is how good a machine learning algorithm is at ""memorizing"" the data. Decision trees, by their nature, tend to overfit very easily, because they can separate the space along very non-linear curves, especially if you get a very deep tree. Simpler algorithms, on the other hand, tend to separate the space along linear hypersurfaces, and therefore tend to under-fit the data; they may not give very good predictions, but they may behave better on new unseen data that is very different from the training data.
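
As a rough illustration of this effect (a sketch on synthetic data, not a rigorous experiment), an unconstrained decision tree typically shows a much larger gap between training and test accuracy than logistic regression:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for model in (DecisionTreeClassifier(random_state=0), LogisticRegression(max_iter=1000)):
        model.fit(X_tr, y_tr)
        # the tree usually memorizes the training set (score 1.0) but generalizes worse
        print(type(model).__name__, 'train:', model.score(X_tr, y_tr), 'test:', model.score(X_te, y_te))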

+",177,,,,,1/14/2019 1:42,,,,0,,,,CC BY-SA 4.0 +9973,1,9995,,1/14/2019 5:38,,10,820,"

I was wondering if machine learning algorithms (CNNs?) can be used/trained to differentiate between small differences in details between images (such as slight differences in shades of red or other colours, or the presence of small objects between otherwise very similar images?)? And then classify images based on these differences? If this is a difficult endeavour with our current machine learning algorithms, how can it be solved? Would using more data (more images) help?

+ +

I would also appreciate it if people could please provide references to research that has focused on this, if possible.

+ +

I've only just begun learning machine learning, and this is something that I've been wondering from my research.

+",16521,,2444,,5/30/2020 12:40,5/30/2020 12:46,Can machine learning algorithms be used to differentiate between small differences in details between images?,,2,0,,,,CC BY-SA 4.0 +9975,1,10197,,1/14/2019 9:33,,1,1592,"

How do you distinguish between a complex and a simple model in machine learning? Which parameters control the complexity or simplicity of a model? Is it the number of inputs, or maybe the number of layers?

+

Moreover, when should a simple model be used instead of a complex one, and vice-versa?

+",7681,,2444,,9/20/2020 10:44,9/20/2020 10:44,How do you distinguish between a complex and a simple model in machine learning?,,4,0,,,,CC BY-SA 4.0 +9976,2,,9975,1/14/2019 9:42,,1,,"

If you want to find a proper architecture for your model, you can use NAS (neural architecture search) methods, instead of running some naive models and then having to decide which model is more complex or simpler. Some methods used in NAS to find a proper architecture are:

+ +
    +
  1. NAS with Reinforcement Learning
  2. +
  3. NAS with Evolution
  4. +
  5. NAS with Hill-climbing
  6. +
  7. Multi-objective Neural architecture search
  8. +
+",4446,,4446,,1/14/2019 9:56,1/14/2019 9:56,,,,1,,,,CC BY-SA 4.0 +9978,1,,,1/14/2019 9:47,,2,74,"

I am reading about CANN. However, I do not seem to grasp what it is. Maybe someone who has worked with it can explain it? I found out about it while reading about RatSLAM. I understand that it helps to keep long/short term memory.

+",14863,,2444,,4/12/2022 8:39,4/12/2022 8:39,What is a continuous-attractor neural network?,,0,0,,,,CC BY-SA 4.0 +9982,1,9986,,1/14/2019 11:59,,7,2612,"

What are the current NLP/NLU techniques that can extract metaphors from texts?

+

For example

+
+

His words cut deeper than a knife.

+
+

Or a simpler form like:

+
+

Life is a journey that must be travelled no matter how bad the roads and accommodations.

+
+",21415,,2444,,1/15/2021 0:30,7/22/2021 22:15,How to recognise metaphors in texts using NLP/NLU?,,1,1,,,,CC BY-SA 4.0 +9983,1,,,1/14/2019 12:43,,7,5648,"

I was thinking of something of the sort:

+ +
    +
  1. Build a program (call this one fake user) that generates lots and lots and lots of data based on the usage of another program (call this one target) using stimuli and response. For example, if the target is a minesweeper, the fake user would play the game a carl sagan number of times, as well as try to click all buttons on all sorts of different situations, etc...

  2. +
  3. run a machine learning program (call this one the copier) designed to evolve a code that works as similar as possible to the target.

  4. +
  5. kablam, you have a ""sufficiently nice"" open source copy of the target.

  6. +
+ +

Is this possible?

+ +

Is something else possible to achieve the same result, namely, to obtain a ""sufficiently nice"" open source copy of the original target program?

+",20976,,2444,,6/28/2019 16:54,2/10/2023 21:42,Is it possible to use AI to reverse engineer software?,,3,1,,,,CC BY-SA 4.0 +9984,2,,9983,1/14/2019 13:10,,1,,"

Remarkably, more or less the scenario you describe is not only feasible but has already been demonstrated (detailed explanation and fascinating videos at the link).

+ +

However, the fidelity of the copy is currently quite limited:

+ +

So for now, your copy will be quite low quality. However, there is a big exception to this rule: if the software you are copying is itself based on machine learning, then you can probably make a high-quality copy quite cheaply and easily, as I and my co-authors explain in this short article.

+ +

Interesting question and I'm quite sure that the correct answer will change rapidly over the next few years.

+",17770,,,,,1/14/2019 13:10,,,,10,,,,CC BY-SA 4.0 +9986,2,,9982,1/14/2019 13:58,,3,,"

This is still a research topic in linguistics. A quick google search brings up a couple of papers that might be useful:

+ +

However, you probably won't get an off-the-shelf tool that recognises metaphors for you.

+

To add more details, the problem with metaphors is that you cannot detect them by surface structure alone. Any sentence could (in theory) be a metaphor. This is different from a simile, which can usually be spotted easily through the word like, as in she runs like the wind. Obviously, like on its own is not sufficient, but it's a good starting point to identify possible candidates.

+

However, his words cut deeper than a knife is -- on the surface -- a normal sentence. Only the semantic incongruence between words as the subject and cut as the main verb creates a clash. In order to detect this automatically, you need to identify possible semantic features of the verbal roles and look for violations of the expected pattern.

+

The verb cut would generally expect an animate object, preferably human, or an instrument with a blade (the knife cuts through the butter) as its actor or subject. But it also can include (water)ways: the canal cuts through the landscape, the road cuts through the field. The more closely you look, the more exceptions/extensions you will find for your initial assumption.

+

And every extension/exception will water down the accuracy of your metaphor detection algorithm.

+

The second example is similar: Life is a journey. You could perhaps use a thesaurus and see what the hyperonyms of life are. Then you could do the same with journey, and see if they are compatible. A car is a vehicle is not a metaphor, because vehicle is a hyperonym of car. But journey is not a hyperonym of life, so could be a metaphor. But I would think that this is still very tricky to get right. In this case, the absence of a determiner might be a hint, as it's not a life is a journey -- you might restrict yourself to bare nouns for this type of metaphor. But this is also not a firm rule.
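
As a very rough sketch of that hyperonym check (using NLTK's WordNet interface; it assumes the WordNet corpus is installed, and it only covers this narrow pattern):

    from nltk.corpus import wordnet as wn   # assumes the WordNet corpus has been downloaded

    def is_hyperonym_of(general, specific):
        # True if any noun sense of 'general' appears among the hypernym ancestors of 'specific'
        general_synsets = set(wn.synsets(general, pos=wn.NOUN))
        for synset in wn.synsets(specific, pos=wn.NOUN):
            if general_synsets & set(synset.closure(lambda s: s.hypernyms())):
                return True
        return False

    print(is_hyperonym_of('vehicle', 'car'))    # True  -> literal statement
    print(is_hyperonym_of('journey', 'life'))   # False -> candidate metaphor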

+

In short, it is a hard problem, as you need to look at the meaning, rather than just the structure or word choice. And meaning is not easy to deal with in NLP, despite decades of work on it.

+",2193,,2444,,1/15/2021 0:27,1/15/2021 0:27,,,,0,,,,CC BY-SA 4.0 +9987,1,9988,,1/14/2019 16:03,,3,883,"

I want to start a project for my artificial intelligence class about speaker recognition. Basically, I want to train my AI to detect if it's me who's speaking or somebody else. I would like some suggestions or libraries to work with.

+",21421,,16229,,1/21/2019 20:58,10/18/2019 23:01,Training an AI to recognize my voice (or any voice),,1,0,,,,CC BY-SA 4.0 +9988,2,,9987,1/14/2019 17:16,,3,,"

The human voice is based on the neural muscular control of vocal apparatus made up of many parts.

+ +
    +
  • Diaphragm
  • +
  • Vocal cords
  • +
  • Throat (constrictors and anti-constrictors)
  • +
  • Nasal cavity
  • +
  • Cheek
  • +
  • Jaw
  • +
  • Tongue
  • +
+ +

These coordinated muscular manipulations produce controlling envelopes of audio that can be characterized by periodic and transient wave forms.

+ +
    +
  • Volume
  • +
  • Pitch
  • +
  • Tone (relative volume of harmonics)
  • +
  • Consonant transients
  • +
+ +

Voices are unique to the learning state of neural activity and to anatomic attributes, which is a way of saying that vocal habits and the physical attributes of the voice support the distinguishing of vocal identity.

+ +
    +
  • Strength of vocal muscles
  • +
  • Connectivity of muscles to bone, tendons, and cartilage
  • +
  • Shape of inner surface of vocal pathways
  • +
  • Neural coordination of those muscles
  • +
  • Neural production of phonetic control to produce linguistic elements
  • +
  • Neural serialization of semantic structures (ideas)
  • +
+ +

The detection of distinguishing features of voices by the ear is equally complex. In a room full of people talking, the brain can learn to track a single voice.

+ +

It is important to note that performing voice recognition to determine the identity of the human source is significantly different from performing voice recognition to produce text. To produce text accurately, the NLP must determine language elements and construct a semantic network that represents the vocal content, or a text derived from that representation, to be accurate in the case of like-sounding words. Fortunately, the identification of the speaker is easier in some ways than accurate voice-to-text. Unfortunately, the identification of the speaker has general limitations, discussed below.

+ +

The first stage of hearing in the ear is mechanical, involving the length of hairs along the cochlear surface, which is like a radio tuner that discriminates all frequencies within a range simultaneously. The software equivalent is a spectrum derived by applying a root mean square to the result of an FFT (fast Fourier transform) to provide magnitudes.

+ +

$$ m_f := \sqrt{t_f^2 + {(it_f)}^2} $$

+ +

The phase component of the FFT results ($\, \arctan(t, it) \,)$ can be discarded, since it is not correlated with neural control of voice.

+ +

The application of the FFT to speech (as with any changing audio) requires windowing over the audio samples using one of the windowing tapers, such as the Hann window or Blackman window. The input is the audio stream or file contents as a sequence of pressure samples, the audio. The output is a sequence of spectra, each containing the volume of each frequency in the vocal range, from about 30 Hertz to 15 K Hertz.
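
A minimal sketch of that windowed-FFT front end (frame length and hop size are illustrative; audio is assumed to be a 1-D array of pressure samples):

    import numpy as np

    def magnitude_spectra(audio, frame_len=1024, hop=256):
        # slide a Hann-tapered window over the samples and keep only the FFT magnitudes
        # (the phase is discarded, as described above)
        window = np.hanning(frame_len)
        frames = []
        for start in range(0, len(audio) - frame_len, hop):
            spectrum = np.fft.rfft(audio[start:start + frame_len] * window)
            frames.append(np.abs(spectrum))
        return np.array(frames)   # shape: (num_frames, frame_len // 2 + 1)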

+ +

This series of spectra can be fed into the initial layer of one of the more advanced RNNs (recurrent neural networks), such as the LSTM (long short term memory) networks, its bidirectional version, the B-LSTM, or a GRU (gated recurrent network), which is touted as training equally well with less time or computing resource consumption.

+ +

The identity of the speaker is the label. The series of spectra are the features.

+ +

Using the PAC (probably approximately correct) learning framework, it may be possible to estimate, in advance of experimentation, the minimum number of words the speaker must speak to produce a particular accuracy and reliability in use of the learned parameters from the network training.

+ +

It will take some study to set up the hyper-parameters and design the layers of the network in terms of depth (number of layers) and width sequence (number of cells per layer, which may vary from layer to layer).

+ +

The use case limitation of this system is that each speaker must read some text that provides adequate training example sequences of adequate length, so that there is a sufficient number of overlapping windows for the FFT to transform into spectra and the training converges reasonably.

+ +

There is no way around the individual user training as there is with recognition of linguistic content, which can be trained across a large set of speakers to recognize content somewhat independent of the speaker. The system can be adjusted and improved to minimize the amount of speech required, but information theory constraints keep that quantity from ever approaching zero.

+ +

No network, whether artificial or biological, can learn something from nothing. Claude Shannon and John von Neumann realized decades ago that there is a kind of conservation of information, just as there is a conservation of matter and energy in space below nuclear reaction thresholds. This led to the definition of a bit and the formulation of information as a quantity of bits corresponding to a narrowing of probability that the information provides.

+ +

$$ b_i = - \log_2 {\frac {P(x|i)} {P(x)}} $$

+",4302,,4302,,1/14/2019 18:46,1/14/2019 18:46,,,,0,,,,CC BY-SA 4.0 +9990,1,,,1/14/2019 18:11,,2,2417,"

I was trying to understand the loss function of GANs, but I found a little mismatch between different papers.

+

This is taken from the original GAN paper:

+
+

The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons. To learn the generator's distribution $p_{g}$ over data $\boldsymbol{x}$, we define a prior on input noise variables $p_{\boldsymbol{z}}(\boldsymbol{z})$, then represent a mapping to data space as $G\left(\boldsymbol{z} ; \theta_{g}\right)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta_{g} .$ We also define a second multilayer perceptron $D\left(\boldsymbol{x} ; \theta_{d}\right)$ that outputs a single scalar. $D(\boldsymbol{x})$ represents the probability that $\boldsymbol{x}$ came from the data rather than $p_{g}$. We train $D$ to maximize the probability of assigning the correct label to both training examples and samples from $G$. We simultaneously train $G$ to minimize $\log (1-D(G(\boldsymbol{z})))$ :

+

In other words, $D$ and $G$ play the following two-player minimax game with value function $V(G, D)$ :

+
+

$$ +\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))] +$$

+

Equation (1) in this version of the pix2pix paper

+
+

The objective of a conditional GAN can be expressed as +$$ +\begin{aligned} +\mathcal{L}_{c G A N}(G, D)=& \mathbb{E}_{x, y}[\log D(x, y)]+\\ +& \mathbb{E}_{x, z}[\log (1-D(x, G(x, z))], +\end{aligned} +$$ +where $G$ tries to minimize this objective against an adversarial $D$ that tries to maximize it, i.e. $G^{*}=$ $\arg \min _{G} \max _{D} \mathcal{L}_{c G A N}(G, D)$.

+

To test the importance of conditioning the discriminator, we also compare to an unconditional variant in which the discriminator does not observe $x$ : +$$ +\begin{aligned} +\mathcal{L}_{G A N}(G, D)=& \mathbb{E}_{y}[\log D(y)]+\\ +& \mathbb{E}_{x, z}[\log (1-D(G(x, z))] . +\end{aligned} +$$

+
+

Putting aside the fact that pix2pix uses a conditional GAN, which introduces a second term $y$, the two formulas look quite similar, except that in the pix2pix paper they take the minimax of ${\cal{L}}_{cGAN}(G, D)$, which is defined to be $E_{x,y}[...] + E_{x,z}[...]$, whereas in the original paper they define $\min\max V(G, D) = E[...] + E[...]$.

+

I don't come from a strong math background, so I am quite confused. I'm not sure where the mistake is, but assuming that $E$ is the expectation (correct me if I'm wrong), the version in pix2pix makes more sense to me, although I think it's quite unlikely that Goodfellow would make this mistake in his amazing paper. Maybe there's no mistake at all and it's me who does not understand them correctly.

+",3098,,2444,,12/9/2021 9:25,12/9/2021 9:25,Mismatch between the definition of the GAN loss function in two papers,,3,1,,12/9/2021 9:29,,CC BY-SA 4.0 +9993,2,,9990,1/15/2019 0:46,,0,,"

What is meant by both papers is that we have two agents (generator and discriminator) playing a game with the value function V defined as a sum of the expectations (i.e. an expectation of the outcome value defined as a sum of two terms, or actually a logarithm of a product...). The generator uses a strategy G encoded in the parameters of its neural network (θg), the discriminator uses a strategy D encoded in the parameters of its neural network (θd). Our goal is to (hopefully) find such a pair of strategies (a pair of parameter sets θgmin and θdmax) that produce the minimax value.

+ +

While trying to find the (θgmin, θdmax) pair using gradient descent, we actually have two loss functions: one is the loss function for G, parameterized by θg, the other is the loss function for D, parameterized by θd, and we train them alternately on minibatches.

+ +

If you look at Algorithm 1 in the original paper, the loss function for the discriminator is -log(D(x; θd)) - log(1 - D(G(z); θd)), and the loss function for the generator is log(1 - D(G(z); θg)) (in both cases, in the original paper, x is sampled from the reference data distribution and z is sampled from noise).

+ +

The ideal value for the loss function of the discriminator is 0, otherwise it's greater than 0. The ""loss"" function of the generator is actually negative, but, for better gradient descent behavior, can be replaced with -log(D(G(z; θg))), which also has the ideal value for the generator at 0. It is impossible to reach zero loss for both generator and discriminator in the same GAN at the same time. However, the idea of the GAN is not to reach zero loss for any of the game agents (this is actually counterproductive), but to use that ""double gradient descent"" to ""converge"" the distribution of G(z) to the distribution of x.
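
In code form, a minimal sketch of these two per-sample losses (the non-saturating generator loss is the replacement mentioned above):

    import numpy as np

    def discriminator_loss(d_real, d_fake):
        # d_real and d_fake are the discriminator outputs on real and generated samples;
        # the ideal value is 0 (reached as d_real -> 1 and d_fake -> 0)
        return -np.log(d_real) - np.log(1.0 - d_fake)

    def generator_loss_non_saturating(d_fake):
        # the replacement -log(D(G(z))), which also has its ideal value at 0
        return -np.log(d_fake)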

+",21426,,21426,,1/21/2019 9:29,1/21/2019 9:29,,,,4,,,,CC BY-SA 4.0 +9995,2,,9973,1/15/2019 3:28,,5,,"

Attentive Recurrent Comparators (2017) by Pranav Shyam et al. is an interesting paper that helps to answer the question you're wondering, along with a blog post that helps to describe it in easier terms.

+ +

The way it's implemented is actually rather intuitive. If you have ever played a ""what is different"" game with two images usually what you'd do is look back and forth between the images to see what the difference is. The network that the researchers created does just that! It looks at one image and then remembers important features about that images and looks at the other image and goes back and forth.

+",17408,,2444,,5/30/2020 12:32,5/30/2020 12:32,,,,0,,,,CC BY-SA 4.0 +9996,1,,,1/15/2019 4:16,,2,98,"

I am trying to generate a model that uses several physicochemical properties of a molecule (including number of atoms, number of rings, volume, etc.) to predict a numeric value $Y$. I would like to use PLS Regression, and I understand that standardization is very important here. I am programming in Python, using scikit-learn.

+ +

The type and range for the features varies. Some are int64 while others are floating point numbers. Some features generally have small (positive or negative) values, while others have a very large value. I have tried using various scalers (e.g. standard scaler, normalize, min-max scaler, etc.). Yet, the R2/Q2 are still low.

+ +

I have a few questions:

+ +
    +
  1. Is it possible that by scaling, some of the very important features lose their significance, and thus contribute less to explaining the variance of the response variable?

  2. +
  3. If yes, if I identify some important features (by expert knowledge), is it OK to scale other features but those? Or scale the important features only?

  4. +
  5. Some of the features, although not always correlated, have values that are in a similar range (e.g. 100-400), compared to others (e.g. -1 to 10). Is it possible to scale only a specific group of features that are within the same range?

  6. +
+",21431,,2444,,6/28/2019 11:22,6/28/2019 11:24,What is the impact of scaling the features on the performance of the model?,,1,0,,,,CC BY-SA 4.0 +9997,2,,9934,1/15/2019 5:58,,3,,"

Cartesian Bias and Pipeline Efficiency

+ +

You are experiencing a techno-cultural artifact of Cartesian-centric imaging running all the way back to the dawn of coordinate systems. It is the momentum of practice as a consequence of applying Cartesian 2D coordinates to rasterize images appearing at the focal planes of lenses from the dawn of television and the earliest standards of raster based capture and display.

+ +

Although some work was done toward adding tilt to bounding rectangles in the late 1990s and since, from a time and computing resource conservation perspective, it is computationally and programmatically less costly to include the four useless triangles of pixels and keep the bounding box orthogonal with the pixel grid.

+ +

Adding a tilt angle to the bounding boxes is marginally competitive when detecting ships from a satellite only because two conditions offset the inefficiencies in that narrow domain. The ship appears as an oblong rectangle with rounded corners from a satellite positioned in geosynchronous orbit. In the general case, adding a tilt angle can slow recognition significantly.

+ +

Biology Less Biased

+ +

An interesting side note is that the neural networks of animal and human vision systems do not have that Cartesian-centricity, but that doesn't help this question's solution, since non-orthogonal hardware and software is virtually nonexistent.

+ +

Early Non-Cartesian Research and Today's Rasterization

+ +

Gerber Scientific Technology research and development in the 1980s (South Windsor, Connecticut, U.S.) investigated vector capture, storage, and display, but the R&D was not financially sustainable for a mid-size technology corporation for the reasons above.

+ +

What remains, because it is economically viable and necessary from an animation point of view, is rasterization at the end of the system that converts vector models into frames of pixels. We see this in the rendering of SVG and VRML, and in the original intent of CUDA cores and other hardware rendering acceleration strategies and architectures.

+ +

On the object and action recognition side, the support of vector models directly from imaging is much less developed. This has not been a major stumbling block for computer vision because the wasted pixels at one tilt angle may be of central importance at another tilt angle, so there are no actual wasted input pixels if the centering of key scene elements is widely distributed in translation and tilt, which is often the case in real life (although not so much in hygienically pre-processed datasets).

+ +

Conventions Around Object Minus Camera Tilt and Skew from Parallax

+ +

Once edge detection, interior-versus-exterior, and 3D solid recognition come into play, the design of CNN pipelines and the way kernels can do radial transformation without actually requiring $\; \sin, \, \cos, \, \text{and} \, \arctan \;$ functions evaporate the computational burden of the Cartesian nature of pixel tensors. The end result is that the bounding box being orthogonal to the image frame is not as problematic as it initially appears. Efforts to conserve the four triangles of pixels and pre-process orientation are often wasted by a gross margin.

+ +

Summary

+ +

The bottom line is that efforts to produce vector recognition from raster inputs have been significantly inferior in terms of resource and wait time burden, with the exception of insignificant gains in the narrow domain of naval reconnaissance satellite images. Trigonometry is expensive, but convolution kernels, especially now that they are moving from software into hardware accelerated computing paths in VLSI, are computable at lower cost.

+ +

Past and Current Work

+ +

Below is some work that deals with tilting with regard to objects and the effects of parallax in relation to the Cartesian coordinate system of the raster representation. Most of the work has to do with recognizing 3D objects in a 3D coordinate system to project trajectories and pilot or drive vehicles rationally on the basis of Newtonian mechanics.

+ +

Efficient Collision Detection Using Bounding Volume Hierarchies of k-DOPs, James T. Klosowski, Martin Held, Joseph S.B. Mitchell, Henry Sowizral, and Karel Zikan, 1998

+ +

Sliding Shapes for 3D Object Detection in Depth Images, Shuran Song and Jianxiong Xiao, 2014

+ +

Amodal Completion and Size Constancy in Natural Scenes, Abhishek Kar, Shubham Tulsiani, Joao Carreira and Jitendra Malik, 2015

+ +

HMD Vision-based Teleoperating UGV and UAV for Hostile +Environment using Deep Learning, Abhishek Sawarkar1, Vishal Chaudhari, Rahul Chavan, Varun Zope, Akshay Budale and Faruk Kazi, 2016

+ +

Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds, Z Liu, H Wang, L Weng, Y Yang, 2016

+ +

Amodal Detection of 3D Objects: +Inferring 3D Bounding Boxes from 2D Ones in RGB-Depth Images, Zhuo Deng, 2017

+ +

3D Pose Regression using Convolutional Neural Networks, Siddharth Mahendran, 2017

+ +

Aerial Target Tracking Algorithm Based on Faster R-CNN Combined with Frame Differencing, Yurong Yang, Huajun Gong, Xinhua Wang and Peng Sun, 2017

+ +

A Semi-Automatic 2D solution for Vehicle Speed Estimation from Monocular Videos, Amit Kumar, Pirazh Khorramshahi, Wei-An Lin, Prithviraj Dhar, Jun-Cheng Chen, Rama Chellappa, 2018

+",4302,,4302,,1/16/2019 9:10,1/16/2019 9:10,,,,0,,,,CC BY-SA 4.0 +9998,2,,8885,1/15/2019 6:23,,4,,"

The key is that a VAE usually uses a small latent dimension, so the information in the input has a hard time passing through this bottleneck; meanwhile, the model tries to minimize the loss over the batch of input data. The result is that the VAE can only produce an averaged, blurry output.

+ +

If you increase the bandwidth of the bottleneck, i.e. the size of the latent vector, a VAE can achieve high reconstruction quality, e.g. Spatial-Z-VAE.

+",21409,,,,,1/15/2019 6:23,,,,1,,,,CC BY-SA 4.0 +9999,2,,7215,1/15/2019 6:52,,3,,"

Principles of Computational Modelling in Neuroscience by David Sterratt, Bruce Graham, Andrew Gillies and David Willshaw discusses this in Chapter 7 (The synapse) and also in Chapter 8 (Simplified models of neurons). In Chapter 8 especially, they discuss how to add excitatory or inhibitory synapses to an integrate-and-fire neuron.

+ +

There are various ways to add an inhibitory synapse: for example, subtracting from the membrane voltage or injecting a negative current.

+",21436,,2444,,5/23/2020 18:28,5/23/2020 18:28,,,,0,,,,CC BY-SA 4.0 +10000,2,,1987,1/15/2019 7:59,,5,,"

By cheating... theta is $\arctan(y,x)$, $r$ is $\sqrt{(x^2 + y^2)}$.

+ +

In theory, $x^2$ and $y^2$ should work, but, in practice, they somehow failed, even though they occasionally work.

+ +

+",21439,,2444,,2/26/2019 17:24,2/26/2019 17:24,,,,2,,,,CC BY-SA 4.0 +10001,2,,9838,1/15/2019 8:09,,2,,"

There is some confusion between reinforcement and convergence in this question.

+ +

The XOR problem is of interest in a historical context because the reliability of gradient descent is identity (no advantage over an ideal coin toss) for a single layer perceptron when the data set consists of the permutations representing the Boolean XOR operation. This is an information theory way of saying a single layer perceptron can't be used to learn arbitrary Boolean binary operations, with XOR and XAND as counterexamples where convergence is not only not guaranteed but productive of functional behavior only by virtue of luck. That is why the MLP was an important extension of the perceptron design. It can be reliably taught an XOR operation.

+ +

Search results for images related to deep reinforced learning provide a survey of design diagrams representing the principles involved. We can note that the use case for a reinforcement learning application is distinctly different from that of MLPs and their derivatives.

+ +

Parsing the term and recombining to produce the conceptual frameworks that were originally combined to produce DRL, we have deep learning and reinforcement learning. Deep learning is really a set of techniques and algorithmic refinements for combining artificial network layers into more successful topologies that perform useful data center tasks. Reinforcement learning is harder to pin down, as the definitions discussed below show.

+ +

Sutton states in his slides for the University of Texas (possibly there to get away from the Alberta winters), ""RL is learning to control data."" That is an overly broad definition, since MLPs, CNNs, and GRU networks all learn a function that controls data processing when the learned parameters are later leveraged in their intended use cases. This is where the perspective of the question may be based on the misleading nature of these excessively broad definitions.

+ +

The distinction of reinforced learning is the idea that a behavior can be reinforced during use. There may be actual parallel reinforcement of beneficial behavior (as in more neurologically inspired architectures) or learning may occur in a time slicing operating system and share the processing hardware with processes that use what is learned (as in Q-learning algorithms and their derivatives).

+ +

Some define RL as a machine learning technique that directs the selection of actions along a path of behavior such that some cumulative value of the consequences of the actions taken is maximized. That may be an excessively narrow definition, biased by the popularity of Markov processes and Q-learning.

+ +

This is the problem with the perspective expressed in the question. An XOR operation is not an environment through which a path can be blazed.

+ +

If one were to construct an XOR maze, where the initial state is undefined and the one available action is to fall into either quadrant 10 or quadrant 01, it still does not represent an XOR, because the input is not a Boolean vector

+ +

$\vec{B} \in \mathbb{B}^2 \; \text{,}$

+ +

and the output is not a 1 or 0 resulting from the XOR operation, as would be the case for a multilayer perceptron learning the XOR operation. There is no cumulative reward. If there were no input and the move were to choose either 10 or 01 because their reward is higher than that of 00 or 11, then that might be considered a reinforcement learning scenario, but it would be an odd one.

+ +

That the described setup leads to, ""Getting stuck,"" is no surprise when the tool is a wrench for the turning of a screw.

+ +

If the design loses the reinforcement and the artificial network is reduced to a two-layer perceptron, convergence will be guaranteed, given a labeled data set of sufficient size or an unsupervised arrangement where the loss function is simply an evaluation of whether the result is XOR.
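
As a small illustration of that last point (my addition, not part of the original answer), a two-layer perceptron can be trained on XOR as an ordinary supervised problem, e.g. with scikit-learn:

from sklearn.neural_network import MLPClassifier

# XOR as a supervised learning problem for a small multilayer perceptron.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
clf = MLPClassifier(hidden_layer_sizes=(4,), activation='tanh',
                    solver='lbfgs', max_iter=5000, random_state=0)
clf.fit(X, y)
print(clf.predict(X))  # should recover [0 1 1 0]; re-seed if a poor optimum is hit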

+ +

To experiment with reinforced learning, the agent must interact with the environment and make choices that have value consequences that direct subsequent behavior. Boolean expressions are not of this nature, no matter how complex.

+",4302,,,,,1/15/2019 8:09,,,,0,,,,CC BY-SA 4.0 +10003,1,,,1/15/2019 10:08,,2,138,"

Is there any way to control the extraction of features? How do I determine which features have been learned during training, i.e. whether relevant information has been learned or not?

+",21441,,2444,,5/18/2020 10:24,5/19/2020 4:05,How do I determine which relevant features have been learned during training in a CNN?,,3,0,,,,CC BY-SA 4.0 +10005,2,,9973,1/15/2019 13:31,,3,,"

There exist networks built to learn how to differentiate between classes even if they look quite similar. Usually, a triplet loss is used in those networks to learn the difference between the target (anchor), a positive sample, and a negative one.

+ +

For example, those networks are used to perform identity checks with face images: the algorithm learns the differences between people instead of recognizing each person individually.

+ +

Here are some keywords that are possibly relevant: discriminative function, triplet loss, siamese network, one-shot learning.

+ +
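To make the triplet loss concrete, here is a minimal PyTorch-style sketch (my addition, not part of the original answer; the margin value is just an illustrative assumption):

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor towards the positive sample and push it away from
    # the negative one by at least `margin` (distances in embedding space).
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
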

These papers are interesting:

+ + +",19859,,2444,,5/30/2020 12:46,5/30/2020 12:46,,,,0,,,,CC BY-SA 4.0 +10007,1,,,1/15/2019 16:04,,1,41,"

Let's say I want to model purchase data (i.e. purchase records of many households across time). For simplicity, let's assume each household only picks one alternative at the time. A simple starting point is a multinomial logit model. Then, more flexible network architectures could be used. People have applied NN to this, but kept the number of alternatives (K) constant. In reality, the number of available options changes over time. Also, it would be interesting to predict how choices change when the number of alternatives is changed.

+ +

In bullet points:

+ +
    +
  • there are N households
  • +
  • t_n purchases for each household
  • +
  • There are K_t alternatives at time t
  • +
  • Dependent variable Y=k indicates that alternative k was bought
  • +
  • Each alternative is characterized by features, so x_kt is a vector of those features (including brand name, price, ...). The number of features is constant across time.
  • +
+ +

Any guidance or ideas?

+",21451,,,,,1/15/2019 16:04,NN: Predicting choices when number of alternatives changes,,0,0,,,,CC BY-SA 4.0 +10009,1,,,1/15/2019 19:28,,1,179,"

+ +

The image is one of many similar exam questions. Can anyone please help me understand it fully?

+ +

'Internal node': This is simply every node except A?

+ +

Move choices: His only options are B, C and D for this move?

+ +

Focusing on B: E=8 F=4 and G are all opponent responses, therefore they will pick the minimum value.

+ +

Now my confusion: are M, N and P your known responses in the case the opponent picks G? If so, you should pick M=0 (the highest value), so G gets passed the value 0, which the opponent should then choose, so B has a heuristic value of 0?

+ +

Are the correct values then B=0, C=1 and D=2, so you pick D as the next move?

+",21459,,,,,1/15/2019 19:28,Can't grasp MiniMax diagram (no alpha beta pruning),,0,1,,,,CC BY-SA 4.0 +10010,1,10089,,1/15/2019 19:58,,2,480,"

Recurrent Neural Networks (RNNs) with an attention mechanism are generally used for machine translation and natural language processing. In Python, implementations of RNNs with an attention mechanism are abundant in machine translation (e.g. https://talbaumel.github.io/blog/attention/); however, what I would like to do is to use an RNN with an attention mechanism on a temporal data file (not any textual/sentence-based data). I have a CSV file with dimensions 21000 x 1936, which I have converted to a DataFrame using Pandas. The first column is in datetime format and the last column consists of target classes like ""Class1"", ""Class2"", ""Class3"" etc. which I would like to identify. So, in total, there are 21000 rows (instances of data in 10-minute time-steps) and 1935 features. The last (1936th) column is the label column.

+ +

It is evident from the existing literature that an attention mechanism works quite well when coupled with an RNN. However, I am unable to locate any such implementation of an RNN with an attention mechanism for this kind of data, ideally one that can also provide a visualisation. Any help in this regard would be highly appreciated. Cheers!

+",21460,,21460,,1/15/2019 20:10,4/25/2019 17:46,How to use RNN With Attention Mechanism on Non Textual Data?,,1,0,,,,CC BY-SA 4.0 +10011,2,,7525,1/15/2019 20:41,,4,,"

In addition to the points already listed in John's answer, some factors that can help to reduce / mitigate the risk of overfitting to commonly-used benchmarks as a research community are:

+ +
    +
  1. Competitions with instances of problems hidden from entrants: as far as I'm aware this is particularly popular in game AI (see the General Game Playing competition and General Video Game Playing competitions). The basic idea is that submissions should be able to tackle a relatively broad class of problems (playing any game defined in a specified format, or generating levels for any video game with rules described in a specific format, etc.). To some extent, using a large suite of problems as a standard benchmark (such as the large collection of Atari games supported by ALE) also fits in with this idea, though there is value in hiding the problems that are ultimately used for testing from the people writing submissions. Of course, the idea is that entries submitted to these kinds of competitions will involve new research which may be published.

  2. +
  3. Using very simple toy problems: With simple I do not necessarily mean that they are simple to solve, but simple to describe / understand (it may still, for example, have a large state space and be difficult for current techniques to solve). Simple toy problems often help to test for a very specific ""skill"", and can more easily give insight into specifically why/when an algorithm may be expected to fail or succeed. Of course, large non-toy problems are also important to demonstrate ""real-world"" usefulness of algorithms, but they may often give less understanding / insight into an algorithm.

  4. +
  5. Theoretical work: Theoretical work can also give more insight and understanding of new algorithms. Algorithms with strong theoretical foundations are often more likely to generalize to a multitude of problem domains, assuming that the initial assumptions hold (big assumption here - there are plenty of cases where assumptions required for strong proofs do not hold!). This is not always possible / ""needed"", sometimes new research based purely on intuition and with relatively little theoretical foundations still turn out to work well (or theory is only developed after promising empirical results)... but it can certainly help. Theoretical work can take many different forms, such proofs of convergence (often under strict conditions), proofs for upper or lower bounds on important measures (such as regret, or probability of making a ""wrong"" choice, etc.), proofs that an algorithm or a problem is a more general or more specific case of an existing, well-understood algorithm or problem, proofs that a model has or does not have a certain representational capacity, proofs of algorithmic equivalence (that an algorithm computes exactly the same quantities as another well-understood algorithm, typically with lower computation and/or memory requirements), etc.

  6. +
+",1641,,,,,1/15/2019 20:41,,,,0,,,,CC BY-SA 4.0 +10013,1,10088,,1/16/2019 2:44,,3,1581,"

What is ""bad local minima""?

+ +

The following papers all mention this expression.

+ +
    +
  1. Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit
  2. +
  3. Elimination of All Bad Local Minima in Deep Learning
  4. +
  5. Adding One Neuron Can Eliminate All Bad Local Minima
  6. +
+",18443,,2444,,2/20/2019 16:20,2/20/2019 16:20,What is a bad local minimum in machine learning?,,2,0,,,,CC BY-SA 4.0 +10014,2,,10003,1/16/2019 4:48,,3,,"

There are methods called ""scoring systems"" where you give a image scores such as ""0.9 stripes, 0.0 red, 0.8 hair, ..."" and use those scores to classify objects. It's an older idea, not used to determine if the network is learning. It's not in a standard CNN.

+ +

To determine if relevant information is being learned or not, it's standard to use the testing accuracy, training accuracy, confusion matrix, or AUC.
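
For example, with scikit-learn (a minimal sketch, my addition; the arrays are dummy stand-ins for your model's outputs on a held-out test set):

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true  = np.array([0, 1, 1, 0, 1])            # ground-truth labels
y_score = np.array([0.2, 0.9, 0.6, 0.4, 0.3])  # predicted P(class = 1)
y_pred  = (y_score >= 0.5).astype(int)         # hard predictions

print(accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
print(roc_auc_score(y_true, y_score))          # binary classification only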

+ +

Determining what exactly a CNN is learning is a complicated research problem that's ongoing. In short - you can't really know. For a basic network, you can tell that it is learning something but not what it's actually using to make determinations.

+",21471,,21471,,2/4/2019 19:08,2/4/2019 19:08,,,,0,,,,CC BY-SA 4.0 +10017,2,,10013,1/16/2019 7:46,,0,,"

As mentioned in the abstract of on of these papers, bad local minima is a suboptimal local minimum which means a local minimum that is near to a global minimum.

+",4446,,,,,1/16/2019 7:46,,,,0,,,,CC BY-SA 4.0 +10019,1,,,1/16/2019 9:20,,1,520,"

I have this problem where I need to get information out of a PDF document sent from a scanner. The program needs to be learnable in some way to recognize what different figures mean. Most of this should happen without human intervention, so it could just give a result after scanning the file. Does anyone know if this is possible with a machine learning program or in any alternative way?

+",21476,,,,,1/16/2019 10:36,"Could it be possible to detect text, symbols, and components directly in a scanned PDF file with a program like Tensorflow or another program?",,1,0,,,,CC BY-SA 4.0 +10020,2,,10019,1/16/2019 10:36,,1,,"

Yes, that's possible. I am working on a project in which I have to detect text in images. I did a quick search and found these two algorithms:

+ +

1. EAST: (Efficient and Accurate Scene Text Detector)
+EAST is a deep-learning-based detector. Here are some links link1 link2 explaining how to use it with an example, and using Tesseract to extract the detected text.

+ +

2. CTPN: (Connectionist Text Proposal Network)
+This algorithm is based on machine learning. Here is its link on GitHub. In the description, you will find a link to a pre-trained model that you can use. Or you can simply prepare your own data and train your own model.

+ +

I tried both of them, and the CTPN model gave better results, especially when the image contains large text.
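
As a rough sketch of such a pipeline (my addition, not part of the original answer): OpenCV's dnn module can load the frozen EAST graph, and Tesseract (via pytesseract) can then OCR the detected regions. The file names and the output layer names below are the ones used in the common EAST examples and are assumptions here; the box-decoding step is omitted:

import cv2
import pytesseract

net = cv2.dnn.readNet("frozen_east_text_detection.pb")   # assumed model file
image = cv2.imread("scanned_page.png")                   # assumed input image
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# ...decode scores/geometry into boxes (see link1/link2), then OCR each crop:
# text = pytesseract.image_to_string(image[y1:y2, x1:x2])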

+",19059,,,,,1/16/2019 10:36,,,,4,,,,CC BY-SA 4.0 +10021,1,,,1/16/2019 11:28,,0,63,"

For example, I have the following CSV: training.csv
+I want to know how I can determine which columns will be the best features for predicting the output before I start training a model.
+Please do share your responses.

+",9126,,,,,1/16/2019 14:29,How to analyze data before going for machine learning training?,,2,0,,,,CC BY-SA 4.0 +10022,2,,10021,1/16/2019 11:41,,0,,"

You should know your data 100%. That means knowing what each of your columns and rows represents (e.g. temperature column, humidity, rows representing days), the value units (e.g. Celsius or Fahrenheit?), accuracy, value format (strings or numbers). You may need to clean and reorganize the data if necessary to bring them to your desired form (e.g. change the structure, units, aggregating, etc).

+ +

Then use your logic and experience to decide what columns are necessary. This is in general. I hope someone will give you a more specific answer.

+",21480,,,,,1/16/2019 11:41,,,,1,,,,CC BY-SA 4.0 +10025,1,10029,,1/16/2019 13:36,,1,127,"

I have a Deep Feedforward Neural Network $F: W \times \mathbb{R}^d \rightarrow \mathbb{R}^k$ (where $W$ is the space of the weights) with $L$ hidden layers, $m$ neurons per layer and ReLU activation. The output layer has a softmax activation function.

+ +

I can consider two different loss functions:

+ +

$L_1 = \frac{1}{2} \sum_i \| F(W,x_i) - y_i\|^2 \quad$ and $\quad L_2 = -\sum_i \log(F(W,x_i)_{y_i})$

+ +

where the first one is the classic quadratic loss and the second one is cross entropy loss.

+ +

I'd like to study the norm of the derivative of the loss function and see how the two are related, which means:

+ +

1) Let's assume I know that $|| \frac{\partial L_2(W, x_i)}{\partial W}|| > r$, where $r$ is a small constant. What can I assume about $|| \frac{\partial L_1(W, x_i)}{\partial W}||$ ?

+ +

2) Are there any result which tell you that, under some hypothesis (even strict ones) such as a specific random initialisation, $|| \frac{\partial L_1(W, x_i)}{\partial W}||$ doesn't go to zero during training?

+ +

Thank you

+",21338,,21338,,1/16/2019 14:13,1/17/2019 4:00,Comparing and studying Loss Functions,,1,0,,,,CC BY-SA 4.0 +10026,2,,10021,1/16/2019 14:29,,2,,"

Though there is no universal method that can be blindly applied to all datasets, here is what I usually do (a rough pandas sketch follows the list):

+ +
    +
  • Fill missing values using interpolation or the mean if the missing values are less than 10-15 percent of the number of rows; otherwise drop the column.
  • +
  • Encode categorical data using some kind of encoding, e.g. one hot, etc.
  • +
  • Then normalize/rescale columns.
  • +
  • Now look at the variance in each feature. Usually, features with more variance are more important.

  • +
  • Next, look at the correlation among columns. If two columns are highly correlated, you need to keep only one.

  • +
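A rough pandas sketch of these steps (my addition, not part of the original answer; training.csv is the file from the question and the steps are simplified):

import pandas as pd

df = pd.read_csv("training.csv")

# 1. Fill (or drop) missing values
df = df.fillna(df.mean(numeric_only=True))

# 2. One-hot encode categorical columns
df = pd.get_dummies(df)

# 3. Rescale to [0, 1] (guard against constant columns in real use)
df = (df - df.min()) / (df.max() - df.min())

# 4. Variance of each feature
print(df.var().sort_values(ascending=False))

# 5. Correlation matrix (drop one of any pair of highly correlated columns)
print(df.corr())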
+",19722,,,,,1/16/2019 14:29,,,,0,,,,CC BY-SA 4.0 +10027,1,10030,,1/16/2019 14:33,,3,1330,"

There are many people trying to show how neural networks are still very different from humans, but I fail to see in what way human brains are different from neural models in anything but complexity.

+ +

The way we learn is similar, the way we process information is similar, the ways we predict outcomes and generate outputs are similar. Give a model enough processing power, enough training samples, and enough time and you can train a human.

+ +

So, what is the difference between human (brains) and neural networks?

+",20399,,2444,,5/17/2020 11:28,5/17/2020 11:28,What is the difference between human brains and neural networks?,,2,1,,1/20/2021 1:16,,CC BY-SA 4.0 +10029,2,,10025,1/16/2019 15:49,,-1,,"

Let's first express a network of arbitrary topology and heterogeneous or homogeneous cell type arrangements as

+ +

$$ N(T, H, s) := \, \big[\, \mathcal{Y} = F(P_s, \, \mathcal{X}) \,\big] \\ s \in \mathbb{N} \; \land \; s \le S \; \text{,} $$

+ +

where $S$ is the number of learning states or rounds, and $N$ is the network of $T$ topology and $H$ hyper-parameter structure and values that at stage $s$ produces a $P_s$-parameterized function $F$ of $\mathcal{X}$ resulting in $\mathcal{Y}$. In supervised learning, the goal is that $F(P_s)$ approaches a conceptually ideal function $F_i$ as $s \rightarrow S$.

+ +

The popular loss aggregation norms are not quite as the question defines them. The below more canonically expresses the level 1 and 2 norms, which systematically aggregate multidimensional disparity between an intermediate result at some stage (epoch and example index) of training and the conceptual ideal toward which the network in training is intended to converge.

+ +

$$ {||F-\mathcal{Y}||}_1 = \sum{|F_i - y_i|} \\ {||F-\mathcal{Y}||}_2 = \sqrt{\sum{(F_i - y_i)}^2} $$

+ +

These equations have been mutated by various authors to make various points, but those mutations have obscured the obviousness of their original relationship. The first is where distance can be aggregated through only orthogonal vector displacements. The second is where aggregation uses the minimum Cartesian distance by extending the Pythagorean theorem.

+ +

Note that quadratic loss is a term with some ambiguity. These are all broadly describable as quadratic expressions of loss.

+ +
    +
  • Distance as described in the second expression above
  • +
  • RMS, where the sum is divided by the number of dimensions $\text{count}(i)$
  • +
  • Just the sum of squared difference by itself
  • +
+ +

Cross entropy is an extension of Claude Shannon's information theory concepts based on the work of Bohr, Boltzmann, Gibbs, Maxwell, von Neumann, Frisch, Fermi, and others who were interested in quanta and the thermodynamic concept of entropy as a universal principle running through matter, energy, and knowledge.

+ +

$$ S = k_B \log{\Omega} \\ H(X) = - \sum_i p(x_i) \, \log_2{\, p(x_i)} \\ H(p, \, q) = -\sum_{x \in \mathcal{X}} \, p(x) \, \log_2{\, q(x)} $$

+ +

In this progression of theory, we begin with a fundamental postulate in quantum physics, where $k_B$ is Boltzmann's constant and $\Omega$ is the number of microstates for the quanta. The next relation is Shannon's adaptation for information, where $H$ is the entropy in bits, hence the $\log_2$ instead of a natural logarithm. The third relation expresses cross-entropy in bits for features $\mathcal{X}$; it is related to the Kullback-Leibler divergence and can be read as the $p$-attenuated sum of bits of $q$-information.

+ +

Notice that $p$ and $q$ are probabilities, not $F$ or $\mathcal{Y}$ values, so one cannot substitute labels and outputs of a network into them and retain the meaning of cross-entropy. The level 1 and 2 norms are therefore closely related to each other, but cross-entropy is not a norm; it is not a Cartesian distance aggregation scheme like them. Cross-entropy is only remotely related to them and is statistically more sophisticated. To produce a cross-entropy loss function of the form
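
To make the distinction concrete, here is a small numerical illustration (my addition, not part of the original answer), assuming a softmax output $q$ and a one-hot label distribution $p$ for a single example:

import numpy as np

q = np.array([0.7, 0.2, 0.1])   # predicted class probabilities (softmax output)
p = np.array([1.0, 0.0, 0.0])   # one-hot target distribution

l1 = np.sum(np.abs(q - p))              # level-1 norm of the disparity
l2 = np.sqrt(np.sum((q - p) ** 2))      # level-2 (Euclidean) norm
ce = -np.sum(p * np.log2(q + 1e-12))    # cross-entropy in bits

print(l1, l2, ce)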

+ +

$$ {||F-\mathcal{Y}||}_H = \mathcal{P}(F, y) \; \text{,} $$

+ +

one must derive the probabilistic function $\mathcal{P}$ that represents the cross entropy for two distributions in some way that is theoretically sound on the basis of both information theory and convergence resource conservation. It is not clear that the interpretation of cross entropy in the context of gradient descent and back propagation has caught up with the concepts of entropy in quantum theory. That's an area needing further research and deeper theoretical consideration.

+ +

In the question, the cross-entropy expression is not properly characterized, most evident in the fact that the expression is independent of the labels $\mathcal{Y}$, which would be fine for unsupervised learning, except that no other basis for evaluation is represented in the expression. For the term cross-entropy to be valid, the basis for evaluation must include two distributions, a target one and one that represents the current state of learning.

+ +

The derivatives of the three norms (assuming the cross entropy is properly characterized) can be studied for the case of $\ell$ ReLU layers by generalizing the chain rule (from differential calculus) as applied to ReLU and the loss function developed by applying each of the three norms to aggregate measures of disparity from optimal.

+ +

Regarding the inference in sub-question (1) nothing of particular value can be assumed about the Jacobians of level 2 norms from level 1 norms, both with respect to parameters $P$ or vice versa, except the retention of sign. This is because we cannot determine much about the correlation between the output channels of the network.

+ +

There is no doubt, regarding sub-question (2), that some constraint, set of constraints, stochastic distribution applied to initialization, hyper-parameter settings, or data set features, labels, or number of examples have implications for the reliability and accuracy of convergence. The PAC (probably approximately correct) learning framework is one system of theory that approaches this question with mathematical rigor. One of its practical uses, among others, is to derive inequalities that predict feasibility in some cases and produce more lucid approaches to learning system projects.

+",4302,,4302,,1/17/2019 4:00,1/17/2019 4:00,,,,0,,,,CC BY-SA 4.0 +10030,2,,10027,1/16/2019 16:04,,3,,"

One incredibly important difference between humans and NNs is that the human brain is the result of billions of years of evolution whereas NNs were partially inspired by looking at the result and thinking ""... we could do that"" (utmost respect for Hubel and Wiesel).

+ +

Human brains (and in fact anything biological really) have an embedded structure to them within the DNA of the animal. DNA on the order of hundreds of megabytes of data (the human genome is roughly 3 billion base pairs) incredibly contains the information of where arms go, where to put sensors and in what density, how to initialize neural structures, the chemical balances that drive neural activation, memory architecture, and learning mechanisms, among many many other things. This is phenomenal. Note that the placement of neurons and their connections isn't encoded in DNA; rather, the rules dictating how these connections form are. This is fundamentally different from simply saying ""there are 3 conv layers then 2 fully connected layers..."". There has been some progress in neuroevolution that I highly recommend checking out, which is promising though.

+ +

Another important difference is that during ""runtime"" (lol), human brains (and other biological neural nets) have a multitude of functions beyond the neurons. Things like glial cells. There are about 3.7 glial cells for every neuron in your body. They are supportive cells in the central nervous system that surround neurons, provide support for and insulation between them, and trim dead neurons. This maintenance continuously updates the neural structures and allows resources to be utilized most effectively. With fMRI, neurologists are only beginning to understand how these small changes affect brains.

+ +

This isn't to say that it's impossible to have an artificial NN that can have the same high-level capabilities as a human. It's just that there is a lot that is missing from our current models. It's like we are trying to replicate the sun with a campfire but heck, they are both warm.

+",4398,,,,,1/16/2019 16:04,,,,1,,,,CC BY-SA 4.0 +10032,1,10033,,1/16/2019 17:03,,2,785,"

I tried to build a Q-learning agent which you can play tic tac toe against after training.

+

Unfortunately, the agent performs pretty poorly. He tries to win but does not try to stop me from winning, which ends up with me beating the agent no matter how many training loops I give him. I added a reward of 1 for winning the episode, and the agent gets a reward of -0.1 when it tries to put its label on a non-empty square (after the attempt we have s = s'). I also start with epsilon=1, which decreases in every loop, to add some more randomness at the beginning, because I noticed that some (important, in my opinion) states did not get updated. Since I spent some hours debugging without noticeable progress, I'd like to know what you think.

+

PS: Don't mind the print statements and count variables. Those were for debugging.

+

Code here or on Github

+
import numpy as np
+import collections
+import time
+
+Gamma = 0.9
+Alpha = 0.2
+
+
+class Environment:
+    def __init__(self):
+        self.board = np.zeros((3, 3))
+        self.x = -1  # player with an x
+        self.o = 1  # player with an o
+        self.winner = None
+        self.ended = False
+        self.actions = {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (1, 0), 4: (1, 1),
+                        5: (1, 2), 6: (2, 0), 7: (2, 1), 8: (2, 2)}
+
+    def reset_env(self):
+        self.board = np.zeros((3, 3))
+        self.winner = None
+        self.ended = False
+
+    def reward(self, sym):
+        if not self.game_over():
+            return 0
+        if self.winner == sym:
+            return 10
+        else:
+            return 0
+
+    def get_state(self,):
+        k = 0
+        h = 0
+        for i in range(3):
+            for j in range(3):
+                if self.board[i, j] == 0:
+                    v = 0
+                elif self.board[i, j] == self.x:
+                    v = 1
+                elif self.board[i, j] == self.o:
+                    v = 2
+                h += (3**k) * v
+                k += 1
+        return h
+
+    def random_action(self):
+        # helper to pick a uniformly random action index
+        return np.random.choice(list(self.actions.keys()))
+
+    def make_move(self, player, action):
+        i, j = self.actions[action]
+        if self.board[i, j] == 0:
+            self.board[i, j] = player
+
+    def game_over(self, force_recalculate=False):
+        # returns true if game over (a player has won or it's a draw)
+        # otherwise returns false
+        # also sets 'winner' instance variable and 'ended' instance variable
+        if not force_recalculate and self.ended:
+            return self.ended
+
+        # check rows
+        for i in range(3):
+            for player in (self.x, self.o):
+                if self.board[i].sum() == player*3:
+                    self.winner = player
+                    self.ended = True
+                    return True
+
+        # check columns
+        for j in range(3):
+            for player in (self.x, self.o):
+                if self.board[:, j].sum() == player*3:
+                    self.winner = player
+                    self.ended = True
+                    return True
+
+        # check diagonals
+        for player in (self.x, self.o):
+            # top-left -> bottom-right diagonal
+            if self.board.trace() == player*3:
+                self.winner = player
+                self.ended = True
+                return True
+            # top-right -> bottom-left diagonal
+            if np.fliplr(self.board).trace() == player*3:
+                self.winner = player
+                self.ended = True
+                return True
+
+        # check if draw
+        if np.all((self.board == 0) == False):
+            # winner stays None
+            self.winner = None
+            self.ended = True
+            return True
+
+        # game is not over
+        self.winner = None
+        return False
+
+    def draw_board(self):
+        for i in range(3):
+            print("-------------")
+            for j in range(3):
+                print("  ", end="")
+                if self.board[i, j] == self.x:
+                    print("x ", end="")
+                elif self.board[i, j] == self.o:
+                    print("o ", end="")
+                else:
+                    print("  ", end="")
+            print("")
+        print("-------------")
+
+
+
+
+class Agent:
+    def __init__(self, Environment, sym):
+        self.q_table = collections.defaultdict(float)
+        self.env = Environment
+        self.epsylon = 1.0
+        self.sym = sym
+        self.ai = True
+
+    def best_value_and_action(self, state):
+        best_val, best_act = None, None
+        for action in self.env.actions.keys():
+            action_value = self.q_table[(state, action)]
+            if best_val is None or best_val < action_value:
+                best_val = action_value
+                best_act = action
+        return best_val, best_act
+
+    def value_update(self, s, a, r, next_s):
+        best_v, _ = self.best_value_and_action(next_s)
+        new_val = r + Gamma * best_v
+        old_val = self.q_table[(s, a)]
+        self.q_table[(s, a)] = old_val * (1-Alpha) + new_val * Alpha
+
+    def play_step(self, state, random=True):
+        if random == False:
+            epsylon = 0
+        cap = np.random.rand()
+        if cap > self.epsylon:
+            _, action = self.best_value_and_action(state)
+        else:
+            action = np.random.choice(list(self.env.actions.keys()))
+            self.epsylon *= 0.99998
+        self.env.make_move(self.sym, action)
+        new_state = self.env.get_state()
+        if new_state == state and not self.env.ended:
+            reward = -5
+        else:
+            reward = self.env.reward(self.sym)
+        self.value_update(state, action, reward, new_state)
+
+
+class Human:
+    def __init__(self, env, sym):
+        self.sym = sym
+        self.env = env
+        self.ai = False
+
+    def play_step(self):
+        while True:
+            move = int(input('enter position like: \n0|1|2\n------\n3|4|5\n------\n6|7|8'))
+            if move in list(self.env.actions.keys()):
+                break
+        self.env.make_move(self.sym, move)
+
+
+
+def main():
+    env = Environment()
+    p1 = Agent(env, env.x)
+    p2 = Agent(env, env.o)
+    draw = 1
+    for t in range(1000005):
+
+        current_player = None
+        episode_length = 0
+        while not env.game_over():
+            # alternate between players
+            # p1 always starts first
+            if current_player == p1:
+                current_player = p2
+            else:
+                current_player = p1
+
+            # current player makes a move
+            current_player.play_step(env.get_state())
+
+        env.reset_env()
+
+        if t % 1000 == 0:
+            print(t)
+            print(p1.q_table[(0, 0)])
+            print(p1.q_table[(0, 1)])
+            print(p1.q_table[(0, 2)])
+            print(p1.q_table[(0, 3)])
+            print(p1.q_table[(0, 4)])
+            print(p1.q_table[(0, 5)])
+            print(p1.q_table[(0, 6)])
+            print(p1.q_table[(0, 7)])
+            print(p1.q_table[(0, 8)])
+            print(p1.epsylon)
+
+    env.reset_env()
+    # p1.sym = env.x
+
+    while True:
+        while True:
+            first_move = input("Do you want to make the first move? y/n :")
+            if first_move.lower() == 'y':
+                first_player = Human(env, env.x)
+                second_player = p2
+                break
+            else:
+                first_player = p1
+                second_player = Human(env, env.o)
+                break
+        current_player = None
+
+        while not env.game_over():
+            # alternate between players
+            # p1 always starts first
+            if current_player == first_player:
+                current_player = second_player
+            else:
+                current_player = first_player
+            # draw the board before the user who wants to see it makes a move
+
+            if current_player.ai == True:
+                current_player.play_step(env.get_state(), random=False)
+            if current_player.ai == False:
+                current_player.play_step()
+            env.draw_board()
+        env.draw_board()
+        play_again = input('Play again? y/n: ')
+        env.reset_env()
+        # if play_again.lower != 'y':
+        #     break
+
+
+if __name__ == "__main__":
+    main()
+
+",21487,,2444,,10/31/2020 17:20,10/31/2020 17:20,Why isn't my Q-Learning agent able to play tic-tac-toe?,,1,0,,,,CC BY-SA 4.0 +10033,2,,10032,1/16/2019 18:10,,2,,"

The $Q$-learning rule that you have implemented updates $Q(S_t, A_t)$ estimates as follows, after executing an action $A_t$ in a state $S_t$, observing a reward $R_t$, and reaching a state $S_{t+1}$ as a result:

+ +

$$Q(S_t, A_t) \gets (1 - \alpha) Q(S_t, A_t) + \alpha (R_t + \gamma \max_a Q(S_{t+1}, a))$$

+ +

The implementation seems to be correct for the traditional setting for which $Q$-learning is normally described; single-agent MDPs. The problem is that you have a multi-agent setting, in which $Q$-learning is not always directly applicable.

+ +

Now, as far as I can see from a very quick glance at your code, it seems like you actually already have taken some important steps towards allowing it to work, and I think it should be quite close to almost working (at least for a simple game like Tic-Tac-Toe) already. Important things that you appear to already be doing correctly:

+ +
    +
  • Self-play training against an opponent who is hopefully gradually improving, as opposed to training against a uniform-at-random agent or a fixed-strategy agent.
  • +
  • Add randomization during training to ensure sufficient diversity in generated experience.
  • +
+ +
+ +

I think the major issue that remains to be solved is in how you define the subsequent state $S_{t+1}$ after making a move in a state $S_t$.

+ +

The update target that the $Q$-learning update rule moves its $Q$-value estimates towards consists of two components:

+ +
    +
  1. The observed immediate reward $R_t$.
  2. +
  3. The discounted predicted future returns $\gamma \max_a Q(S_{t+1}, a)$ for the greedy policy.
  4. +
+ +

The problem is that, in your implementation, $S_{t+1}$ is a state in which the opponent is allowed to make the next move $a$, rather than the RL agent. This means that $\max_a Q(S_{t+1}, a)$ is an incredibly optimistic, naive, unrealistic estimate of future returns. In fact, $\min_a Q(S_{t+1}, a)$ would be a much more realistic estimate (against an optimally-playing opponent), because the opponent gets to pick the next action $a$.

+ +

I think switching in $\min_a Q(S_{t+1}, a)$ rather than the $\max$ may have a good chance of working in this scenario, but I'm not 100% sure. It wouldn't be a ""pretty"" solution though, since you'd no longer be doing $Q$-learning, but something else altogether.

+ +

The proper $Q$-learning update may work well if you only present states to agents in which they're actually allowed to make the next move in the update rule. Essentially, you'd be plugging $\max_a Q(S_{t + 2}, a)$ into the update rule, replacing $S_{t+1}$ with $S_{t+2}$. Well... that's what you'd be doing in most cases. The only exception to be aware of would be terminal states. If an agent makes a move that leads to a terminal state, you should make sure to also run an additional update for that agent with the terminal game state $S_{t+1}$ (where $Q(S_{t+1}, a)$ will always be $0$ for any action $a$ if $S_{t+1}$ is terminal).
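
As a rough, incomplete sketch of that bookkeeping (my addition; choose_action and last_sa are hypothetical additions to the question's Agent class, and the case where the opponent's move ends the game would still need its own final update):

def play_step_delayed(agent, env):
    s = env.get_state()
    # 1) the state the agent sees now is S_{t+2} for its previous move,
    #    so finish that earlier update first
    if agent.last_sa is not None:
        prev_s, prev_a = agent.last_sa
        agent.value_update(prev_s, prev_a, 0, s)
    # 2) act as before (epsilon-greedy choice; hypothetical helper name)
    a = agent.choose_action(s)
    env.make_move(agent.sym, a)
    # 3) if this move ended the game, update immediately with the terminal state,
    #    where all Q(S_{t+1}, .) are 0
    if env.game_over():
        agent.value_update(s, a, env.reward(agent.sym), env.get_state())
        agent.last_sa = None
    else:
        agent.last_sa = (s, a)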

+ +

For a very closely related question, where I essentially provided an answer in the same spirit, see: How to see terminal reward in self-play reinforcement learning?

+",1641,,1641,,1/16/2019 18:18,1/16/2019 18:18,,,,6,,,,CC BY-SA 4.0 +10034,1,,,1/16/2019 18:14,,1,60,"

I'm a programmer with a background in mathematics, but I have no experience whatsoever with artificial intelligence/neural networks. I'd like to study it as a hobby, and my goal for now is to solve the following simple poker game, by letting the program play against itself:

+ +

We have two players, each with a certain number of chips. At the start of the game, they are obligated to put a certain amount of chips in the pot. Then they each get a random real number between 0 and 10. They know their own number, but not their opponent's. Then we have one round of betting. The first player puts additional chips in the pot (some number between 0 and their stack size). The second player can either fold (put no additional chips in the pot, 1st player gets the entire pot), call (put the same number of chips in the pot, player with highest number gets the pot) or raise (put even more chips in the pot, action back on 1st player). There is no limit to the number of times a player can raise, as long as he still has chips behind to raise.

+ +

I have several questions:
- Is this indeed a problem that can be solved with neural networks?
- What do you recommend I study in order to solve this problem?
- Is it feasible to solve this game when allowing for continuous bet/raise sizes? Or should I limit it to a few options as a percentage of the pot?
- Do you expect it to be possible to get close to an equilibrium with one nightly run on an 'average' laptop?

+",21488,,1671,,1/16/2019 21:27,1/16/2019 21:27,What to study for this simple poker game?,,0,1,,,,CC BY-SA 4.0 +10036,1,10046,,1/17/2019 1:04,,0,71,"

I have source data that can be represented as a 2D image of many similar curves. They may oftentimes cross over one another, so regions of interest will overlap.

+ +

My goal is to implement a neural network solution to identify each instance of the curves and the pixels that are associated with each instance.

+ +

(Each image is simple in its representation of the data. A pixel in the image is either a point on one of these curves or it is empty. So the image is represented by one or zero at each pixel. For training purposes, I have labels for every pixel, and I have about 150,000 images. The information in the images can be noisy in that there may be omissions of points and point locations are quantized due to measurement limitations and preprocessing for the image preparation.)

+ +

I started looking into what semantic segmentation can do for me, but since all of the instances are of the same class, distinguished mainly by their location in the image, I don't think semantic segmentation is the type of processing I would want to perform. (Am I wrong?)

+ +

I am very interested in seeing how a neural network will work on this problem to separate each instance.

+ +

My question is this: what is the terminology that describes the process I'm looking for? (How can I effectively research this problem?) Is this an extension of semantic segmentation, or is it referred to in some other way?

+",8439,,,,,1/17/2019 15:43,Pixel-Level Detection of Each Object of the Same Class In an Image,,1,0,,,,CC BY-SA 4.0 +10037,1,,,1/17/2019 5:29,,1,58,"

At slide 17 of the David Silver's series, the soft-max policy is defined as follows

+ +

$$ \pi_\theta(s, a) \propto e^{\phi(s, a)^T \theta} $$

+ +

that is, the probability of an action $a$ (in state $s$) is proportional to the ""exponentiated weight"".

+ +

The score function is then defined as follows

+ +

$$ \nabla_\theta \log \pi_\theta (s, a) = \phi(s, a) - \mathbb{E}_{\pi_{\theta}}[\phi(s, \cdot)] $$

+ +

Where does the expectation term $\mathbb{E}_{\pi_{\theta}}[\phi(s, \cdot)]$ come from?

+",16313,,2444,,2/15/2019 18:59,2/15/2019 18:59,Where does the expectation term in the derivative of the soft-max policy come from?,,0,3,,,,CC BY-SA 4.0 +10039,2,,9996,1/17/2019 7:37,,1,,"

In general, algorithms that exploit distances or similarities (e.g. in the form of a scalar product) between data samples, such as k-NN and SVM, are sensitive to feature transformations. We do feature scaling to make our model robust to outliers and to make the initial impact of every feature on the model roughly similar.

+ +

Graphical-model based classifiers, such as Fisher LDA or Naive Bayes, as well as Decision trees and Tree-based ensemble methods (RF, XGB) are invariant to feature scaling, but, still, it might be a good idea to rescale/standardize your data.

+ +
    +
  1. You should explore your data more carefully, find the outliers, apply transformation if needed.

  2. +
  3. Not sure if it is a good idea

  4. +
  5. You can apply different preprocessing techniques like MinMaxScaler, rank transform, log transform, square root, StandardScaler, etc. (a minimal sketch follows this list).

  6. +
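A minimal scikit-learn sketch of those preprocessing techniques (my addition, not part of the original answer; the data values are dummies):

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

X_minmax = MinMaxScaler().fit_transform(X)     # rescales each feature to [0, 1]
X_std    = StandardScaler().fit_transform(X)   # zero mean, unit variance
X_log    = np.log1p(X)                         # log transform (for skewed positive data)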
+",17925,,2444,,6/28/2019 11:24,6/28/2019 11:24,,,,0,,,,CC BY-SA 4.0 +10044,1,,,1/17/2019 12:23,,2,112,"

How do I check which algorithm solves my problem best?

+ +

Given an optimization problem, I apply different well-known optimization algorithms (genetic algorithm, simulated annealing, ant colony, etc.) to solve my problem. However, how do I know if my implementation (e.g. the cost function) is working for every case? How can I compare the algorithms or their goodness in the context of my problem?

+",19413,,,,,1/24/2019 20:36,Method to check goodness of combinatorial optimization algorithm implementation,,1,0,,,,CC BY-SA 4.0 +10046,2,,10036,1/17/2019 15:43,,2,,"

What you want to do is instance segmentation at the pixel level. I can point you in two different directions:

+ + +",6019,,,,,1/17/2019 15:43,,,,2,,,,CC BY-SA 4.0 +10048,1,10069,,1/17/2019 19:06,,2,455,"

Can NEAT produce neural networks where inputs are directly (without intermediate hidden neurons) connected to outputs?

+",21517,,2444,,12/12/2021 18:13,12/12/2021 18:13,Can NEAT produce neural networks where inputs are directly connected to outputs?,,1,0,,,,CC BY-SA 4.0 +10049,1,10061,,1/17/2019 19:27,,8,1733,"

I've seen the Monte Carlo return $G_{t}$ being used in REINFORCE and the TD($0$) target $r_t + \gamma Q(s', a')$ in vanilla actor-critic. However, I've never seen someone use the lambda return $G^{\lambda}_{t}$ in these situations, nor in any other algorithms.

+

Is there a specific reason for this? Could there be performance improvements if we used $G^{\lambda}_{t}$?

+",21518,,2444,,1/13/2022 12:03,1/13/2022 12:03,Why are lambda returns so rarely used in policy gradients?,,2,0,,,,CC BY-SA 4.0 +10050,1,,,1/17/2019 19:45,,5,136,"

Suppose there is an evaluation policy called $\pi_{e}$ and there are two behavior policies $\pi_{b1}$ and $\pi_{b2}$. I know that it is possible to estimate the return of policy $\pi_{e}$ through behavior policies via importance sampling, which is unbiased. But I do not know about the variance of return estimated through two behavior policies $\pi_{b1}$ and $\pi_{b2}$. Does anybody know about the variance or any bound on the variance of estimated return?

+ +

Let $G_{0}^{b1}=\sum_{t=1}^{T}\gamma^{t-1}r_{t}^{b1}$ represent the total return for an episode through behavior policy $\pi_{b1}$ and $G_{0}^{b2}=\sum_{t=1}^{T}\gamma^{t-1}r_{t}^{b2}$ represent the total return for an episode through behavior policy $\pi_{b2}$. It is possible to estimate the return of policy $\pi_{e}$ as follows:

+ +

$$G_{0}^{(e,b1)}=\prod_{t=1}^{T}\frac{\pi_{e}(a_{t}|s_{t})}{\pi_{b1}(a_{t}|s_{t})}*G_{0}^{b1}$$

+ +

$$G_{0}^{(e,b2)}=\prod_{t=1}^{T}\frac{\pi_{e}(a_{t}|s_{t})}{\pi_{b2}(a_{t}|s_{t})}*G_{0}^{b2}$$

+ +

I want to compare the variance of $G_{0}^{(e,b1)}$ and $G_{0}^{(e,b2)}$. Is there any formulation to compute the variance $G_{0}^{(e,b1)}$ and $G_{0}^{(e,b2)}$?

+",10191,,2444,,3/6/2019 12:13,3/30/2020 16:02,How do I compute the variance of the return of an evaluation policy using two behaviour policies?,,1,1,,,,CC BY-SA 4.0 +10051,1,,,1/17/2019 22:59,,2,406,"

Attention has been used widely in recurrent networks to weight feature representations learned by the model. This is not a trivial task since recurrent networks have a hidden state that captures sequence information. The hidden state can be fed into a small MLP that produces a context vector summarizing the salient features of the hidden state.

+

In the context of NLP, convolutional networks are not as straightforward. They have the notion of channels that are different feature representations of the input, but are channels equivalent to hidden states? In particular, this raises two questions for me:

+
    +
  • Why use attention in convolutional networks at all? Convolutions have been shown to be adept feature detectors––for example, it is known that lower layers learn small features such as edges while higher layers learn more abstract representations. Would attention be used to sort through and weigh these features?

    +
  • +
  • In practice, how would attention be applied to convolutional networks? The output of these networks is usually (batch, channels, input_size) (at least in PyTorch), so how would the attention operations in recurrent networks be applied to the output of convolutional networks?

    +
  • +
+
+

References

+

Convolutional Sequence to Sequence Learning, Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin, 2017

+",19403,,2444,,9/27/2021 16:41,9/27/2021 16:41,"Why would we use attention in convolutional neural networks, and how would we apply it?",,1,0,,,,CC BY-SA 4.0 +10053,1,,,1/18/2019 1:52,,2,423,"

For the purposes of this question I am asking about training the generator, assume that training the discriminator is another topic.

+ +

My understanding of generative adversarial networks is that you feed random input data to the generator and it generates images. Out of those images, the ones which the discriminator thinks are real are used to train the generator.

+ +

For example, I have the random inputs $i_1$, $i_2$, $i_3$, $i_4$... from which the generator produces $o_1$, $o_2$, $o_3$, $o_4$. Say for example, the discriminator thinks that $o_1$ and $o_2$ are real but $o_3$ and $o_4$ are fake, I then throw away input output pairs 3 and 4, but keep 1 and 2, and run back propagation on the generator to tell it that $i_1$ should produce $o_1$, and $i_2$ should produce $o_2$ since these were ""correct"" according to the discriminator.

+ +

The contradiction seems to come from the fact that the generator already generates those outputs from those inputs, so nothing will be gained by running backprop on those input output pairs.

+ +

Where is the flaw in my logic here? I seem to have something wrong in my reasoning, or a misunderstanding of how the generator is trained.

+",21524,,16565,,2/1/2019 13:34,6/10/2023 22:07,Training the generator in a GAN pair with back propagation,,1,0,,,,CC BY-SA 4.0 +10054,2,,4282,1/18/2019 3:38,,2,,"

I think this paper can help you out: 3D Bounding Box Estimation Using Deep Learning and Geometry

+ +

The author used a VGG-19 (pretrained on ImageNet) to learn the size of cars.

+",21528,,1671,,1/18/2019 19:58,1/18/2019 19:58,,,,2,,,,CC BY-SA 4.0 +10056,2,,10053,1/18/2019 5:13,,0,,"

Let's take a look at the images that fooled the D-network. When this happens the binary cross entropy loss of the D-network is important to look at. The network said a fake image was real. As usual, you back propagate to the very beginning of the D-network. What is the back prop doing? It is telling the weights in the D-network to change such that you lessen the loss (remember you are minimizing the loss function by going down hill --> using gradient descent). So you changed the weights of the D-network, but you really don't have to stop there.

+ +

The complex nonlinear function the D-network is calculating is really f(Image, weights of D-network). So the output of the D-network is really dependent on the Image fed in and the weights of the D-network. The cost is a function of the output f. So we have c = g(f(Image, weights of D-network)) where g is really the binary cross entropy. Looking at this, we see that the cost is also a function of the image!

+ +

We can take the partials with respect to the image just like we take the partials with respect to the weights to update the weights. And so after doing backprop to the image, we don't stop -> we take the partial of the cost function with respect to the image that is fed into the D-network and we update the image. And essentially we do not stop at the image: we continue to backpropagate through it to the beginning of the G-network. Let me know if you'd like more clarification. Again, this is my understanding and I haven't coded a GAN yet, so someone please confirm. Also, I am open to others' interpretations.

+",15428,,,,,1/18/2019 5:13,,,,0,,,,CC BY-SA 4.0 +10061,2,,10049,1/18/2019 14:29,,11,,"

That can be done. For example, Chapter 13 of the 2nd edition of Sutton and Barto's Reinforcement Learning book (page 332) has pseudocode for ""Actor Critic with Eligibility Traces"". It's using $G_t^{\lambda}$ returns for the critic (value function estimator), but also for the actor's policy gradients.

+ +

Note that you do not explicitly see the $G_t^{\lambda}$ returns mentioned in the pseudocode. They are being used implicitly through eligibility traces, which allow for an efficient online implementation (the ""backward view"").

+ +
+ +

I do indeed have the impression that such uses are fairly rare in recent research though. I haven't personally played around with policy gradient methods to tell from personal experience why that would be. My guess would be that it is because policy gradient methods are almost always combined with Deep Neural Networks, and variance is already a big enough problem in training these things without starting to involve long-trajectory returns.

+ +

If you use large $\lambda$ with $\lambda$-returns, you get low bias, but high variance. For $\lambda = 1$, you basically get REINFORCE again, which isn't really used much in practice, and has very high variance. For $\lambda = 0$, you just get one-step returns again. Higher values for $\lambda$ (such as $\lambda = 0.8$) tend to work very well in my experience with tabular methods or linear function approximation, but I suspect the variance may simply be too much when using DNNs.
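
For reference, here is a small sketch (my addition, not part of the original answer) of how $G_t^\lambda$ can be computed backwards over a recorded trajectory, using the recursion $G_t^\lambda = r_{t+1} + \gamma[(1-\lambda)V(s_{t+1}) + \lambda G_{t+1}^\lambda]$:

import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.8):
    # values[t] is V(s_t); values must have one more entry than rewards,
    # holding the bootstrap value of the state after the last reward
    # (0 for a terminal state).
    G = np.zeros(len(rewards))
    g = values[-1]
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * g)
        G[t] = g
    return G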

+ +

Note that it is quite popular to use $n$-step returns with a fixed, generally fairly small, $n$ in Deep RL approaches. For instance, I believe the original A3C paper used $5$-step returns, and Rainbow uses $3$-step returns. These often work better in practice than $1$-step returns, but still have reasonably low variance due to using small $n$.

+",1641,,,,,1/18/2019 14:29,,,,1,,,,CC BY-SA 4.0 +10064,1,,,1/18/2019 15:00,,-1,280,"

I am working on building a reinforcement learning agent with DQN. The agent would be able to place buy and sell orders for day trading purposes. I am facing a little problem with that project. The question is ""how to tell the agent to maximize the profit and avoid transactions where the profit is less than 100$"".

+ +

I want to maximize the profit inside a trading day and avoid placing the pair (limit buy order, limit sell order) if the profit on that transaction is less than 100$. The idea here is to avoid small noisy movements. Instead, I prefer long, profitable movements. Be aware that I thought of using the ""Profit & Loss"" as the reward.

+ +

""I want the minimal profit per transaction to be 100$"" ==> It seems this is not something that is enforceable. I can train the agent to maximize profit per transaction, but how that profit is cannot be ensured.

+ +

At the beginning, I wanted to tell the agent: if the profit of a transaction is 50 dollars, I will subtract 100 dollars, so it becomes a penalty of 50 dollars for the agent. I thought it was a great way to tell the agent not to place a limit buy order if it is not sure it will give us a minimal profit of 100$. It seems that all I would be doing there is simply shifting the value of the reward; the agent only cares about maximizing the sum of rewards and does not take care of individual transactions.

+ +

How do I tell the agent to maximize the profit and avoid transactions where the profit is less than 100$? With that strategy, what guarantees that the agent will never make a buy/sell decision that results in less than 100 dollars of profit? Could the quantity (sum of rewards) - (number of transactions) * 100 be a solution?

+",21539,,21539,,1/19/2019 21:29,2/27/2019 12:02,"For some reasons, a reward becomes a penalty if",,2,2,,,,CC BY-SA 4.0 +10068,2,,9479,1/18/2019 20:22,,-1,,"

Simulated annealing algorithms are generally better at solving mazes, because they are less likely to get stuck in a local minimum thanks to their probabilistic ""mutation"" method. See here. Genetic algorithms are better at training neural networks, because of their genetically inspired training algorithm. This makes them more versatile and efficient in more complex situations.

+",4744,,,,,1/18/2019 20:22,,,,0,,,,CC BY-SA 4.0 +10069,2,,10048,1/18/2019 20:37,,0,,"

Yes, it is possible (depending on the nature of your problem), using the four types of standard NEAT mutation, but it is improbable.

+ +

When the NEAT algorithm begins, it operates on a blank canvas. After each generation, the algorithm will either:

+ +
    +
  1. Construct a new axon

  2. +
  3. Construct a new node on an existing axon

  4. +
  5. Update existing weights/bias

  6. +
  7. Remove a node or axon from the network

  8. +
+ +

(However, in general, NEAT does not produce neural networks where two input (or output) nodes are connected.)

+",4744,,2444,,10/13/2019 2:03,10/13/2019 2:03,,,,4,,,,CC BY-SA 4.0 +10070,1,10076,,1/18/2019 21:43,,0,115,"

I have seen a few articles about neural nets. Mostly they went along these lines: we tried these architectures, these meta parameters, we trained it for $x$ hours on $y$ CPUs, and it gave us these results that are 0.1% better than state of the art.

+

What I am interested in is whether there exists (at least as a work in progress) a framework that gives some explanation of why one architecture is better than another, what makes one activation function more suitable for image recognition than another, etc.

+

Do you have some tips about where to start? I would prefer something more systematic than a Google search (a book, a list of key articles is ideal).

+",11359,,2444,,12/12/2021 18:17,12/12/2021 18:17,How can I systematically learn about the theory of neural networks?,,1,0,,,,CC BY-SA 4.0 +10071,1,,,1/19/2019 0:25,,1,114,"

I have an electromagnetic sensor and an electromagnetic field emitter. The sensor will read power from the emitter. I want to predict the position of the sensor using the reading.

+ +

Let me simplify the problem: suppose the sensor and the emitter are in a one-dimensional world where there is only a position X (not X, Y, Z), and the emitter emits power as a function of the distance squared.

+ +

From the painted image below, you will see that the emitter is drawn as a circle and the sensor is drawn as a cross.

+ +

+ +

E.g. if the sensor is 5 meter away from the emitter, the reading you get on the sensor will be 5^2 = 25. So the correct position will be either 0 or 10, because the emitter is at position 5.

+ +

So, with one emitter, I cannot know the exact position of the sensor. I only know that there are 50% chance it's at 0, and 50% chance it's at 10.

+ +

So if I have two emitters like the following image:

+ +

+ +

I will get two readings. And I can know exactly where the sensor is. If the reading is 25 and 16, I know the sensor is at 10.

+ +

So from this fact, I want to use 2 emitters to locate the sensor.

+ +

Now that I've explained the situation, my problems are as follows:

+ +
    +
  1. The emitter has a more complicated function of the distance. It's not just distance squared, and it also has noise, so I'm trying to model it using machine learning.
  2. +
  3. In some areas, the emitter doesn't work so well. E.g. if you are between 3 and 4 meters away, the emitter will always give you a fixed reading of 9 instead of going from 9 to 16.

  4. +
  5. When I train the machine learning model with 2 inputs, the prediction is very accurate. E.g. if the input is 25,36, the output will be position 0. But it means that after training, I cannot move the emitters at all. If I move one of the emitters further apart, the prediction breaks immediately, because the reading will be something like 25,49 when the right emitter moves 1 meter to the right. The prediction can then be anything, because the model has not seen this input pair before. And I cannot afford to train the model on all possible distances between the 2 emitters.

  6. +
  7. The emitters may not be perfectly identical. The difference will be in scale. E.g. one of the emitters may give a 10% bigger reading. But you can ignore this problem for now.

  8. +
+ +

My question is: how do I make the model work when the emitters are allowed to move? Please give me some ideas.

+ +

Some of my ideas:

+ +
    +
  1. I think that I have to figure out the position of both +emitters relative to each other dynamically. But after knowing the +position of both emitters, how do I tell that to the model?
  2. +
  3. I have tried training each emitter separately instead of pairing +them as input. But that means there are many positions that cause +conflict like when you get reading=25, the model will predict the +average of 0 and 10 because both are valid position of reading=25. +You might suggest training to predict distance instead of position, +that's possible if there is no problem number 2. But because +there is problem number 2, the prediction between 3 to 4 meters away +will be wrong. The model will get input as 9, and the output will be +the average distance 3.5 meters or somewhere between 3 to 4 meters.
  4. +
  5. Use the model to predict position +probability density function instead of predicting the position. +E.g. when the reading is 9, the model should predict a uniform +density function from 3 to 4 meters. And then you can combine the 2 +density functions from the 2 readings somehow. But I think it's not +going to be that accurate compared to modeling 2 emitters together +because the density function can be quite complicated. We cannot +assume normal distribution or even uniform distribution.
  6. +
  7. Use some kind of optimizer to predict the position separately for each +emitter based on the assumption that both predictions must be the same. If +the predictions are not the same, the optimizer must try to move the +predictions so that they are exactly at the same point. Maybe reinforcement +learning where the actions are ""move left"", ""move right"", etc.
  8. +
+ +

I've shared my ideas in the hope that they evoke some ideas in you, because this is my best so far and it doesn't solve the issue elegantly yet.

+ +

So, ideally, I would want an end-to-end model that is fed the 2 readings and gives me the position even when the emitters have been moved. How would I go about that?

+ +

PS. The emitters are only allowed to move before usage. During usage or prediction, the model can assume that the emitter will not be moved anymore. This allows you time to run an emitter-position calibration algorithm before usage. Maybe this will be a helpful thing for you to know.

+",20819,,2444,,12/13/2021 8:45,12/13/2021 8:45,How do I combine two electromagnetic readings to predict the position of a sensor?,,2,1,,,,CC BY-SA 4.0 +10075,2,,10071,1/19/2019 8:50,,1,,"

Model input:

+ +
    +
  • 1 mean scaled input for each emitter
  • +
  • 1 distance value for each unique pair of emitters
  • +
+ +

Multiple input

+ +

You mentioned there is noise. If the noise is constant, i.e. you test the sensor in place A and the values returned are always the same, then handling it just means training in different places. If you place it somewhere and the first reading differs from the second, then you need to take a lot of readings and use their mean or median. A common rule of thumb (related to the central limit theorem) is at least 30 readings. This would be the easiest approach. Alternatively, you could use each sample as a separate input, which allows the NN to learn to filter out the noise itself. This makes training longer but is probably better than just taking an average.

+ +

Scaled input

+ +

I know you said it is not important for now, and that the emitters come in different scales. I would scale the output of each emitter in proportion to its scale, so that two emitters of different size produce the same output relative to the sensor no matter how far the sensor is from the emitter. This function might be a simple linear function or something more complex, depending on whether the output of a smaller emitter drops off faster than that of a larger one.

+ +

Distance value

+ +

You mentioned dynamically calculating the position of an emitter, but it was placed there so you must know where it is.

+ +

This means you could use one of two solutions: a co-ordinate system, which is a little more complex, or a simpler solution, a distance vector. There must be a maximum distance at which the emitters could be placed. Let's assume this distance is 25. You could normalise the data as any distance / maximum distance. This should be repeated for all unique combinations of emitters, i.e. if you have 2 emitters (A, B) there is only one distance value; if you have 3 emitters (A, B, C), there are 3 distance values, namely A-B, A-C, and B-C.

+ +

A co-ordinate system is more complex because using a number to denote a position implicitly assigns importance to it; for example, on a grid from 1 to 10 across x and y, an emitter at position 10,10 will have a greater importance than an emitter at 1,1, and if it is at 0,0, without a bias input your result will be 0.

+ +
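As a rough sketch of the kind of input vector this suggests (numpy is assumed, as are the 30 raw readings per emitter and the maximum separation of 25):

import numpy as np

MAX_SEPARATION = 25.0  # assumed maximum possible distance between emitters

def build_input(readings_a, readings_b, separation_ab):
    # Average many noisy readings per emitter (mean or median), then append
    # the normalised emitter separation as the distance value.
    mean_a = np.mean(readings_a)
    mean_b = np.mean(readings_b)
    d_ab = separation_ab / MAX_SEPARATION
    return np.array([mean_a, mean_b, d_ab])

x = build_input(np.random.normal(25, 1, 30), np.random.normal(16, 1, 30), 5.0)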

Structure of the NN

+ +

Of course the structure of the NN, the data samples, what you use for validation etc will all play a role.

+ +

Perhaps also do some research on previous work in sensor localisation with neural networks.

+ + +",20508,,20508,,1/21/2019 6:52,1/21/2019 6:52,,,,0,,,,CC BY-SA 4.0 +10076,2,,10070,1/19/2019 9:09,,1,,"

Good question; there is a lot of work in that field. The first step, before saying which machine learning algorithm is better and why, is defining the problem. Is it an optimisation, classification, or anomaly detection problem? That determines which machine learning algorithms are appropriate. Let's assume it's an optimisation problem.

+ +

Is this problem dynamic or static? Is it time series data? We need to understand the problem first.

+ +

Each optimisation problem has a landscape or a fitness landscape. Computer science has some nice toy landscapes.

+ +

There is a lot of work in determining the nature of the problem landscape. Have a look at K Malan's work.

+ +

Once you can identify the problem space and understand its characteristics, then you can start to identify which machine learning methods work well on which kinds of landscapes. This is a totally different field of research.

+ +

For example some researchers are working on how different evolutionary algorithms handle different landscapes, or neural networks handle different landscapes.

+ +

Start by exploring the types of machine learning problems. +Understand the complexity of the problem, followed by classification of machine learning algorithms for specific problem spaces.

+",20508,,,,,1/19/2019 9:09,,,,0,,,,CC BY-SA 4.0 +10078,2,,10064,1/19/2019 14:30,,1,,"
+

I want to maximize the profit inside a trading day and avoid to place the pair (limit buy order, limit sell order) if the profit on that transaction is less than 100$. Be aware that I thought using the ""Profit & Loss"" as the reward.

+
+ +

To me this implies that your profit per transaction is not the true reward function that you should be using. You don't say directly in the question, but presumably there is some per-transaction cost, tax or other issue which means that these low gain transactions are not desirable.

+ +

The answer is to find a more accurate reward function. You have suggested a quick fix of subtracting a fixed offset from the reward. This should have a desirable effect of limiting borderline buy/sell arrangements and meeting your constraint, so I would suggest trying it. The main issue I see with it is that you may have changed the reward function so that it doesn't truly reflect your goals.

+ +

A better approach is to look more carefully at the problem and your goals at a higher level. There must be a reason for wanting to avoid these smaller profit transactions. What is it, and can it be expressed itself as a reward? For instance, if there will be a transaction fee, include that. If the fee structure or reasoning is complicated or appears delayed or aggregated over many actions, then this makes optimisation harder, but RL is actually designed to cope with this. You could for instance only reward the agent with profit/loss after a group of actions covering a whole day's trade. You would then rely on the learning algorithm to figure out which states and actions combined to generate the observed reward.

+ +
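As a minimal illustration of folding such a cost into the reward (the commission value and the function shape are assumptions, not a recommendation):

# Sketch: reward per completed (buy, sell) pair is realised profit minus an
# assumed fixed commission on each leg, so the agent itself learns that
# marginal trades are not worth placing.
def transaction_reward(buy_price, sell_price, quantity, commission=10.0):
    gross = (sell_price - buy_price) * quantity
    return gross - 2 * commission  # commission paid on both the buy and the sell

print(transaction_reward(100.0, 101.0, 50))  # 50 - 20 = 30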

If there is an unknown or random difference between predicted and actual financial gain then no amount of juggling the reward function gets around that. For RL, learning about and predicting expected gains is ""built in"", but that does not mean that this task becomes easier - in fact if prediction in the problem is hard, you may be better off focusing on that and forgetting RL at least initially. Your question is not clear on that, but if you want to avoid risk whilst learning, you should bear in mind that a simple hard rule would likely interfere with the ability of RL to work. At its heart RL is a trial-and-error learning system. Errors should be expected, and are required for the system to learn where best the balance point is between risk and reward. Of course that doesn't help you if the learning system would deliberately make you bankrupt exploring what happens when you sell things at a loss - there are likely to be ways to avoid that and still achieve your goals, but you will need to explain more about your system in a different question.

+",1847,,1847,,1/20/2019 15:46,1/20/2019 15:46,,,,8,,,,CC BY-SA 4.0 +10082,1,,,1/20/2019 0:44,,8,5789,"

I am working to build a deep reinforcement learning agent which can place orders (i.e. limit buy and limit sell orders). The actions are {"Buy": 0 , "Do Nothing": 1, "Sell": 2}.

+

Suppose that all the features are well suited for this task. I wanted to use just the standard "Profit & Loss" as a reward, but I hardly expect it to produce something similar to the above image. The standard P&L will simply place the pair (limit buy order, limit sell order) on every up movement. I don't want that, because very often it won't cover the commission, and it is not a good indicator for trading manually. I would like the agent to maximize the profit and give me a minimum profit of $100 on every pair (limit buy order, limit sell order).

+

I would be interested in something similar to the picture below.

+

+

Is there a reward function that could allow me to get such a result? If so, what is it?

+

UPDATE

+

Can the following utility function work for the purpose of this question?

+

$$U(x) = \max(\$100, x)$$

+

That seems correct, but I don't know how the agent will be penalized if it covers a wrong transaction, i.e. the pair (limit buy order, limit sell order) creates a loss of money.

+",21539,,2444,,12/18/2021 23:30,12/18/2021 23:30,Suitable reward function for trading buy and sell orders,,1,7,,,,CC BY-SA 4.0 +10083,2,,10071,1/20/2019 5:50,,2,,"

Position Detection

+

In a traditional data acquisition and control scenario, with some assumptions, the relation between sensors signals $s_i$, emitters drive $\epsilon_j$, distances $x_{ij}$, and calibration factors is modelled as follows.

+

$$ \forall \, (i, j) \text{,} \quad \frac {s_i} {v_i} = \frac {\epsilon_j} {v_j \, x_{ij}^2} $$

+

The assumptions include these.

+
    +
  • Linear acquisition of magnetic flux signal strength
  • +
  • Linear control of magnetic flux signal strength
  • +
  • Independent readings either by sequential reading or by use of two distinct emitter frequencies
  • +
  • Dismissal of relativistic phenomena
  • +
  • Single point emission
  • +
  • Single point detection
  • +
+

It is correct that, with only a single emitter, position of the sensor cannot be accurately determined because the direction from which the signal originates cannot be disambiguated. Two emitters are necessary for reliability. In two dimensional space, three are necessary, thus the term triangulation. In three dimensional space, four are necessary.

+
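As a small numerical illustration of that disambiguation argument under the idealised inverse-square model above (numpy only; the emitter positions, unit drive level, noise-free readings, and omitted calibration factors are assumptions):

import numpy as np

def predict_position(readings, emitter_positions, emitter_power=1.0):
    # Grid-search the 1-D sensor position that best explains all readings
    # under an idealised, noise-free inverse-square model.
    candidates = np.linspace(-20.0, 20.0, 4001)
    err = np.zeros_like(candidates)
    for r, p in zip(readings, emitter_positions):
        d2 = (candidates - p) ** 2 + 1e-9   # avoid division by zero
        err += (emitter_power / d2 - r) ** 2
    return candidates[np.argmin(err)]

# With one emitter the error curve has two symmetric minima (ambiguous);
# with two emitters at different positions the minimum is unique.
print(predict_position([1.0 / 25.0, 1.0 / 4.0], [5.0, 8.0]))  # sensor near 10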

Less Known Function with Motion and Noise

+

The more complicated function of distance was not specified, whether the sample rate is tightly controlled is not indicated, and the nature and magnitude of the noise relative to the signal was not provided. It also appears there is a low digital accuracy in the readings.

+

To model these contingencies, for motion, $j$ shall be the sample index, and $i$ remains the detector number. The data acquisition vector is now a tuple of the reading $r_{ij}$ and time $t_j$. The function $f$ may differ from the inverse squared function due to flux conduction curvature, non-point emission and detection, and other secondary effects. The combination of this function and noise $n$, a function of sample time, $s_j$, is made discrete, according to the question, rounding or truncating to the nearest integer (indicated by $\mathcal{I}$).

+

$$ \forall \, (i, j) \text{,} \quad (r_{ij}, \; t_j) = \bigg(\mathcal{I}\big(f(\epsilon_j, x_{ij}) + n(s_j)\big), \; s_j\bigg) $$

+

There are other benefits to the additional emitter than disambiguation of direction. The impact of noise is reduced as redundancy is added, and there is the calibration issue.

+

Calibration

+

High volume, low cost, electronic parts are not usually calibrated in the factory. Even when they are, the calibration cannot be trusted. Even if calibrated in the factory and then in the lab, the phenomenon of temperature and pressure drift complicates acquisition for passive emitters and transducers. Carefully designed instrumentation and measurement strategy can compensate for component behavioral variance, and redundant emitters and detectors can be used in such designs.

+

Assuming accuracy above that of a mass produced part is required, the calibration voltages $v_i$ and $v_j$ must be determined simultaneously and be consistently either relative to magnetic flux levels at some point or to each other. If the environment cannot be controlled, re-calibration may be periodically necessary so that the calibration will remain representative.

+
+

The emitters can be slightly not identical. The difference will be on the scale. E.g. one of the emitters can be giving 10% bigger reading. But you can ignore this problem for now.

+
+

Calibration issues should not be dismissed until later. They must be built into the model tuned by the parameters converged to an optimal during training. Fortunately, since $f$ is unknown and encapsulates calibration factors, the addressing up front of calibration will not likely frustrate proper analysis.

+

Drawing of Samples and Aligning Training and Use Distributions

+

It is important, however, to understand that, when training, the distribution of the training samples must match the distribution of the samples encountered when the trained model is expected to work. This applies directly to the calibration issue and determines the frequency of re-calibration. In essence, training is calibration. This is not new to the recent machine learning craze. Such was the case with self-adjusting PID controllers in the 1990s.

+

Addressing Questions in the Ideas Section of the Question

+
+

When I train the machine learning model with 2 inputs, the prediction is very accurate ... but it means that after training, I cannot move the emitters at all. ... I cannot afford to train the model on all possible distance of the 2 emitters.

+
+

That is the case if the training samples are not representative or insufficient in number or the model $f$ is entirely unknown or not used in the convergence strategy.

+
+

I have to figure out the position of both emitters relative to each other dynamically. But after knowing the position of both emitters, how do I tell that to the model?

+
+

A model does not know the position of emitters or detectors. A model generalizes these. What you tell the model is what is known for sure about $f$ and $\mathcal{I}$.

+
+

I have tried training each emitter separately instead of pairing them as input.

+
+

That defies the rule that the training distribution must match the usage distribution. Reliability, accuracy, and speed of convergence will all be damaged by doing that.

+
+

Use the model to predict position probability density function instead of predicting the position. ... We cannot assume normal distribution or even uniform distribution.

+
+

Because of the noise function $n$, the function to be learned is necessarily stochastic, but that is not unusual, and that does not mean that convergence during learning will not occur. It merely means that the loss function cannot be expected to reach zero. It can nonetheless be minimized, even with motion.

+

Because the objects attached to detectors and sensors are physical, have mass, and the forces involved are not nuclear or supernatural, acceleration cannot be either $\infty$ or $- \infty$; thus the sample vectors do not have the Markov property.

+

If the preparation of training data allows the labeling of the readings and time stamps with reference positions derived from a test fixture using digital encoders with high accuracy, then this project is much more feasible. In such a case, it is the patterns in the time series and their relationship to actual position that are being learned. Then a B-LSTM or GRU type cell for network layers may be the best choice.

+
+

Maybe reinforcement learning where the actions are "move left", "move right", ...

+
+

Unless the system being designed is required to produce motion control, reinforcement learning or other adaptive control strategies are not indicated. In either case, since the Markov property is not present in a system that involves physical momentum, a form of learning that requires that property may not be the best control strategy.

+
+

The emitters are only allowed to move before usage. During usage or prediction, the model can assume that the emitter will not be [moved]. This allows you to have time to run emitters position calibration algorithm before usage.

+
+

It is recommended to design the math and fixture used for training as flexibly as possible and then binding variables only after there is no doubt the system is working and various degrees of freedom are superfluous.

+",4302,,-1,,6/17/2020 9:57,1/21/2019 5:15,,,,0,,,,CC BY-SA 4.0 +10086,2,,9966,1/20/2019 12:53,,1,,"

You can try it with ACE (alternating conditional expectations) - an algorithm that searches for transformations

+ +

$$\theta(y) = f_1(x_1)+f_2(x_2)+...+f_n(x_n)$$ +The functions $\theta$, and $f_i$ are estimated from data. I'll give an example in R here. There is also a package in Python that does ACE.

+ +

Let's generate some data

+ +
np <- 100 # number of points
+R <- runif(np, min = 0.9, max = 1.1)
+alpha <- runif(np, min = 0.0, max = 2*pi)
+x <- R*cos(alpha)
+y <- R*sin(alpha)
+par(mfrow=c(1,1))
+plot(x,y)
+
+ +

+ +

ACE to estimate $\theta(y) = f(x)$:

+ +
library(acepack)
+a <- ace(x,y, delrsq = 0.0001)
+
+ +

See the transforms, $\theta$, and $f$

+ +
par(mfrow=c(1,2))
+plot(a$x,a$tx)
+plot(a$y,a$ty)
+
+ +

+ +

They look like parabolas, so let's fit them.

+ +
xx<-drop(a$x)
+yy<-drop(a$tx)
+
+plot(xx,yy)
+m.x <- lm(yy ~ xx+I(xx^2))
+xnew=sort(xx)
+lines(xnew, predict(m.x, list(xx=xnew)),col="red",lwd=2)
+
+xx<-drop(a$y)
+yy<-drop(a$ty)
+
+plot(xx,yy)
+m.y <- lm(yy ~ xx+I(xx^2))
+m.y$coefficients
+xnew=sort(xx)
+lines(xnew, predict(m.y, list(xx=xnew)),col="red",lwd=2)
+
+ +

+ +

The red parabolas don't go through the transforms exactly, but we don't need them to. We use them only as hints to find the exact relations. We can tweak the parameters later. The parameters of our approximate fits are

+ +
m.x$coefficients
+
+(Intercept)          xx     I(xx^2) 
+ 1.31459869  0.07527254 -2.55098259
+
+ +

which means $f(x) \approx 1.3-2.56 x^2$

+ +
m.y$coefficients
+ (Intercept)           xx      I(xx^2) 
+-1.342572495  0.001216412  2.791683219
+
+ +

which means $\theta(y) \approx -1.3 + 2.8 y^2$

+ +

So, we have +$$-1.3 + 2.8 y^2 \approx 1.3-2.56 x^2$$ +or +$$2.56 x^2 + 2.8 y^2 \approx 2.6 $$

+ +

From here you can recover $x^2+y^2 = 1$

+",15524,,,,,1/20/2019 12:53,,,,0,,,,CC BY-SA 4.0 +10088,2,,10013,1/20/2019 14:49,,2,,"

The adjective bad isn't mathematically descriptive. A better term is sub-optimal, which implies the state of learning might appear optimal based on current information but the optimal solution from among all possibilities is not yet located.

+ +

Consider a graph representing a loss function, one of the names for a measure of the disparity between the current learning state and the optimal one. Some papers use the term error. In all learning and adaptive control cases, it is the quantification of disparity between current and optimal states. This is a 3D plot, so it only visualizes the loss as a function of two real number parameters. There can be thousands, but the two will suffice to visualize the meaning of local versus global minima. Some may recall this phenomenon from pre-calculus or analytic geometry.

+ +

If pure gradient descent is used with an initial value to the far left, the local minimum in the loss surface would be located first. Climbing the slope to test the loss value at the global minimum further to the right would not generally occur. The gradient will in most cases cause the next iteration in learning algorithms to travel down hill, thus the term gradient descent.

+ +

This is a simple case; beyond what can be visualized with two parameters, there can be thousands of local minima in the loss surface.

+ +

+ +

There are many strategic approaches to improve the speed and reliability in the search for the global minimum loss. These are just a few.

+ +
    +
  • Descent using gradient, or more exactly, the Jacobian
  • +
  • Possibly further terms of the multidimensional Taylor series, the next in line being related to curvature, the Hessian
  • +
  • Injection of noise, which at an intuitive (and somewhat oversimplified) level is like shaking the graph so that a ball representing the current learning state might bounce over a ridge or peak and land, essentially by trial and error, in the global minimum. Simulated annealing is a materials science analogy and involves simulating the injection of Brownian (thermal) motion (a small sketch of this idea follows the list)
  • +
  • Searches from more than one starting position
  • +
  • Parallelism, with or without an intelligent control strategy, to try multiple initial learning states and hyper-parameter settings
  • +
  • Models of the surface based on past learning or theoretically known principles so that the global minimum can be projected as trials are used to tune the parameters of the model
  • +
+ +
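Here is a minimal sketch of that noise-injection idea (numpy; the toy one-dimensional loss with two basins and the decaying noise schedule are assumptions made purely for illustration):

import numpy as np

def loss(w):
    # Toy 1-D loss: a shallow basin near w = -1 and a deeper one near w = 2.
    return 0.5 * (w + 1) ** 2 * (w - 2) ** 2 - 0.3 * w

def grad(w, eps=1e-5):
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

rng = np.random.default_rng(0)
w, lr, steps = -1.5, 0.01, 2000           # start near the shallower basin
for step in range(steps):
    temperature = 1.0 - step / steps       # "cooling" schedule
    w -= lr * grad(w) + temperature * rng.normal(0.0, 0.2)

# Plain gradient descent from w = -1.5 stays near w = -1; the decaying noise
# gives the state a chance (not a guarantee) of hopping into the deeper basin.
print(w, loss(w))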

Interestingly, the only way to prove that the optimal state is found is to try every possibility by checking each one, which is not nearly feasible in most cases, or by relying on a model from which the global optimal may be determined. Most theoretical frameworks target a particular accuracy, reliability, speed, and minimal input information quantity as part of an AI project, so that no such exhaustive search or model perfection is required.

+ +

In practical terms, for example, an automated vehicle control system is adequate when the unit testing, alpha functional testing, and eventually beta functional testing all indicate a lower incidence of both injury and loss of property than when humans drive. It is a statistical quality assurance, as in the case of most service and manufacturing businesses.

+ +

The graph above was developed for another answer, which has additional information for those interested.

+ + +",4302,,,,,1/20/2019 14:49,,,,0,,,,CC BY-SA 4.0 +10089,2,,10010,1/20/2019 15:32,,0,,"

Project Definition

+
    +
  • Labelled data set contains 21 K rows; 1,936 features; and 1 textual label
  • +
  • Label can be 1 of 14 possible categories
  • +
  • The first feature is a time stamp reflecting exact or approximate 10 minute sampling period
  • +
  • Data content not primarily natural language
  • +
  • The intention is to learn the function mapping the features to the label
  • +
  • Visualization to observe training intermediate and final results
  • +
  • Hoping to simplify implementation using already implemented algorithms and development support
  • +
+

Use of Recurrent Artificial Network Learning

+

It is correct that recurrent networks are designed for temporally related data. The later variants of the original RNN design are most apt to produce favorable results. One of the most effective of these variants is the GRU network cell, which is well represented in all the main machine learning libraries, and visualization hooks in those libraries are well documented.

+
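As a minimal sketch of such a GRU-based classifier (PyTorch is assumed here; the window length, hidden size, and the plain 14-way output head are illustrative choices, and the 4-bit output coding discussed later in this answer would be an alternative head):

import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, n_features=1936, hidden=128, n_classes=14):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time_steps, n_features)
        _, h = self.gru(x)           # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])      # logits over the 14 label categories

model = GRUClassifier()
logits = model(torch.randn(4, 16, 1936))  # e.g. windows of 16 consecutive 10-minute rows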

Various Meanings of Attention Mechanisms

+

The belief that an attention mechanism beyond those built into the RNN design is needed to emphasize important features may be over-complicating the problem. The parameters of the GRU and the other RNN variants already focus attention on particular features during learning. Even a basic feed forward network does that, but the MLP (multilayer perceptron) does not recognize feature trends temporally, so the use of RNN variants is smart.

+

There are other kinds of attention mechanisms that are not inside each cell of a network layer. Research into advanced attention based designs that involve oversight, various forms of feedback from the environment, recursion, or generative designs is ongoing. As the question indicates, those are targeted for natural language work. There is also attention based design for motion and facial recognition and automated walking, driving, and piloting systems. They are designed, tested, optimized, and evolving for the purpose of natural language processing or robotics, not 1,936 feature rows. It is unlikely that those systems can be morphed into something any more effective than a GRU network for this project without considerable further R&D.

+

Output Layer and Class Encoding

+

The 14 labels should be coded as 14 of the 16 permutations of a 4 bit output prior to training. And the loss function should dissuade the two illegal permutations.

+
+

Response to Comments

+
+

[Of the] 1936 features, one of them [is] date-time timestamps and [the] rest [are] numeric. ... Can you please suggest the format of the input? Should I convert each column of feature to a list and create a list of lists or some other way around?

+
+

Regardless of what types the library you use expects as inputs, the theory is clear. Features with a finite set of fixed discrete values are ordinals. The magnitude of their information content is given in bits $b$ as follows, where $p$ is the total number of possible discrete values for the feature.

+

$$ b = \log_2 p $$

+

This is also true of the timestamp, which has a specific possible range and time resolution, where $t_{\emptyset}$ is the initial timestamp where the project or its data began and $t_{res}$ is the time of one resolution step.

+

$$ b_{timestamp} = \log_2 \frac {t_{max} - t_\emptyset} {t_{res}} $$

+

The label also has a range. If the range is a fixed set of permutations, then assign an integer to each, starting with zero, to encode them. If the range of the text is unknown, use a library or utility that converts words or phrases to numbers. One popular one is word2vec.

+

Integrating the features to reduce the number of input bits actually wastes a layer, so don't do that. The total information is given as this.

+

$$ b_{total} = \sum_{i = 1}^{1,936} b_i $$

+

The features, if they are real numbers, can remain so. The input layer of an artificial network expects a number entering the data flow for each cell. One can change the data type of the numbers to reduce computational complexity if no overflow or other mapping issue will occur. This is where the above information content can be useful in understanding how far one can go in collapsing the information into a smaller computational footprint.

+",4302,,-1,,6/17/2020 9:57,4/25/2019 17:46,,,,1,,,,CC BY-SA 4.0 +10090,1,,,1/20/2019 16:00,,3,553,"

How would I go about designing a (relatively) simple AI that discovers and invents random more complex concepts on its own?

+ +

For example, say I had a robot car. It doesn't know it's a car. It has several inputs and outputs, such as a light sensor and the drive motors. If it stays in the dark, its score drops (bad), and if it moves into the light, its score rises (good). It'd have to discover that its motor outputs cause the light input to change (because it's moving closer or farther away from a light source), and that brighter light means higher score.

+ +

Of course, it'd be easier to design an AI that does specifically that, but I want its behaviour discovery system to be more generic, if that makes any sense. Like later on, it could discover a way to fight or cooperate with other robots to increase its score (maybe other robots destroy light sources when they drive over them, and they can be disabled by driving into them), but it'd have to discover this without initially knowing that another robot could possibly exist, how to identify one, what they do, and how to interact with one.

+ +

Also, I want it to be creative instead of following a 'do whatever is best to increase your score' rule. Like maybe one day it could decide that cooperating with other robots is another way to increase its score (it finds out what love is), but if it's unable to do that, it becomes depressed and stops trying to increase its score and just sits there and dies. Or it could invent any other completely random and possibly useless behavior.

+ +

How hard would it be to make something like this, that essentially builds itself up from a very basic system, provided I give it lots of different kinds of inputs and outputs that it can discover how to use and apply to its own evolving behavior?

+",,user21586,2444,,10/3/2019 3:07,10/3/2019 3:07,How to design an AI that discovers more complex concepts on its own?,,1,1,,,,CC BY-SA 4.0 +10092,2,,10044,1/20/2019 17:05,,3,,"

This is a very large question that could be answered in a variety of ways depending on the context.

+ +

For some optimization problems operating under specific conditions you can make theoretical guarantees that your optimization will solve your problem. A specific example of this is running the gradient descent algorithm on a convex function. If the function being optimized is convex, then gradient descent (with a suitably chosen step size) is guaranteed to converge to the correct solution. Given the nice properties of convex functions, there are many types of optimization where you can make these theoretical guarantees.

+ +

When evolutionary algorithms (and many non-derivative based optimizations) are run, it's much harder to make these types of guarantees. Often in the literature I've seen people try their functions on several baseline optimization functions and report back the minimum or maximum found. Many of these are included in Facebook's nevergrad optimization library to be used for evaluation. An example of one of these functions is included below,

+ +

+ +

Even if you can prove that your algorithm minimizes several objective functions you could be testing on, there could be other properties that should be considered. Some of these include the following: the speed at which the algorithm converges; the number of evaluations of the objective function (if not closed form), and even the amount of space in memory which is required for your optimization. All of these (and many more) factors could come into play when evaluating your algorithm.

+ +

Scientists may use functions like this one to evaluate their optimization and compare it to other algorithms that are similar. To answer your final question though, ""How can I see if my optimization works for every problem"", I would say that's a bit of a loaded question. Most of the time these optimizations need some sort of constraint to make a generalized guarantee. I would look at the constraints of your problem and go from there.

+ +

If you're interested, here's a good paper that goes much more in depth than I did here.

+",17408,,17408,,1/24/2019 20:36,1/24/2019 20:36,,,,0,,,,CC BY-SA 4.0 +10093,2,,9909,1/20/2019 17:35,,-2,,"

The question is whether MCTS is an appropriate method given these conditions.

+ +
    +
  • Action $a_t \in A_t$

  • +
  • State $s_t \in S_t$

  • +
  • $[a_t, s_t] \Rightarrow s_{t+1}$

  • +
  • $t_i \land i \in [1, 40]$

  • +
  • $A(t) \in \text{set} \, A \; \land \; \text{size} (A) = 6 \times 10^6$

  • +
  • $S(t) \in \text{set} \, S \; \land \; \text{size} (S) = 1 \times 10^8$

  • +
  • Transition function is stochastic

  • +
+ +

MCTS focuses on the most promising sub-trees and is therefore asymmetric and well suited for some systems with a high branching factor, but the set sizes can be prohibitive unless some symmetries are exploited.

+ +

The original MCTS algorithm does not converge quickly. This is why alpha–beta pruning and other search strategies that focus on the minimization of the search space are often used, but they have limitations with regard to branching breadth too. Pruning heuristics can help with reducing the combinatoric explosion but may lead to the most advantageous action options being missed.

+ +

Some variants of MCTS, such as those among the articles below and in the case of AlphaZero used to learn Go from only the game rules, show excellence in search speed, but perhaps not to the degree needed in this case without parallelism in the form of a computing cluster or significant hardware acceleration.

+ +

An excellent characteristic of MCTS is that highly promising actions found early may be selected without exhausting all the sub-tree evaluations, but again, that may not be enough.

+ +

A possibly constraining consideration is whether the Markov property will be upheld, how, and whether it should be. Another consideration is whether the system involves an opposing intelligent participant. All sampling or pruning search strategies involve the risk of not reliably identifying a single branch in a sub-tree that leads readily to an irreversible (or difficult to reverse) loss.

+ +

These are some excellent papers that discuss these considerations and provide a high level of experience in the form of research results. The computational challenge of tree searches with high branching is covered by the articles discussing approximate solvers, an area of high research intensity.

+ + +",4302,,4302,,1/20/2019 19:17,1/20/2019 19:17,,,,1,,,,CC BY-SA 4.0 +10096,2,,9808,1/20/2019 19:09,,0,,"

Minimax

+ +

As the question author may already understand, minimax is an approach before it is an algorithm. The goal is to minimize the advantage of a challenger and maximize the advantage of the participant presently applying the minimax approach. This is a subset of a wider strategy of summing benefits and subtracting the sum of costs to derive a net value.

+ +

Alpha-beta Pruning

+ +

The strategic goal of alpha beta pruning is to produce uncompromized decision making with less work. This goal is usually driven by the cost of computing resources, the impatience of the person waiting on results, or a missed deadline penalty.

+ +

The principle involved is that the net gain that can accumulate from traversing any subtree in a search has a finite range. It can be proven, using the algebra of inequalities, that no knowledge is lost in the search if a subtree is skipped when it cannot, under any conditions, show any net advantage over the other options.

+ +

Some Graph Theory to Clarify Pruning

+ +

Vertices are the graph theory name for what are often called tree nodes. They are represented as $\nu$ below, and they correspond to states. The connections between vertices are called edges in graph theory. They are represented as $\epsilon$ below, and they correspond to actions.

+ +

The minimax net gain thus far accumulated at a vertex can be calculated as the tree is traversed.

+ +

As the limitations to net gain at further depths are applied to these intermediate net gain values, it can be inferred that one edge (action) will, in every case, be advantageous over another edge. If an algorithm reveals that one edge leads to disadvantage in every possible case, then there is no need to traverse that edge. Such dismissal of never-advantageous edges saves computing resources and likely reduces search time. The term pruning is used because edges are like branches in a fruit orchard.

+ +

The Condition of Pruning and the Algorithm To Accomplish It

+ +

Initialization for the algorithm begins by setting the two Greek letter variables to their worst case values.

+ +

$$ \alpha = - \infty \\ + \beta = \infty $$

+ +

The pruning rule for edge $\epsilon$ leading to a vertex $\nu$ is simply this.

+ +

$$ \alpha_\nu \geq \beta_\nu \Rightarrow \text{prune}(\epsilon_\nu) $$

+ +

Source code doesn't usually directly implement this declaration, but rather produces its effect in the algorithm, which can be represented in simplified pseudo-code. The algorithm to do so is simple once recursion is understood. That is a conceptual prerequisite, so learn about recursion first if it is not already known.

+ +
ab_minimax ν, α, β
+
+    if ν.depth = search.max_depth or ν.is_leaf
+        return ν.value
+
+    if ν.is_maximizing
+        for ε in ν.edges
+            α = max(α, ab_minimax(ε.child, α, β))
+            if α ≥ β
+                ν.prune ε      # β cut-off: the minimizer will never allow this branch
+                return α
+        return α
+    else
+        for ε in ν.edges
+            β = min(β, ab_minimax(ε.child, α, β))
+            if β ≤ α
+                ν.prune ε      # α cut-off: the maximizer will never allow this branch
+                return β
+        return β
+
+",4302,,4302,,1/27/2019 4:17,1/27/2019 4:17,,,,0,,,,CC BY-SA 4.0 +10100,2,,4917,1/21/2019 0:51,,0,,"

It's been done (essentially). This guy at the following link has used a series of FPGAs to emulate hundreds of 8080s, using them to train a neural network to play Gameboy games. +https://towardsdatascience.com/a-gameboy-supercomputer-33a6955a79a4

+ +

IBM's True North being used in Darpa's SyNAPSE program is also very close to what you suggest. https://en.wikipedia.org/wiki/TrueNorth

+ +

Also of interest may be SpiNNaker and Intel Loihi.

+",18787,,,,,1/21/2019 0:51,,,,1,,,,CC BY-SA 4.0 +10102,1,,,1/21/2019 3:38,,3,37,"

Does it help to "pre-classify" natural language inputs using labeled input fields? E.g., "Who," "What," "Where," "When," "Why," "How," and "How much?" Or is a single, monolithic, free-form, long-text input field equally effective and efficient for model training purposes?

+Scenario 1: Without input labels +
+

We are three research fellows, Alice, Bob and Charlie at the University of Copenhagen. We want to understand the development of the human visual system. This knowledge will help in the prevention and treatment of certain vision problems in children. Further, the rules that guide development in the visual system can be applied to other systems within the brain. Our work, therefore, has wide application to other developmental disorders affecting the nervous system. We will conduct this research in 2019 under a budget of $15,000.

+
+Scenario 2: With input labels +
+

Who: We are three research fellows, Alice, Bob and Charlie.

+

What: We want to understand the development of the human visual system.

+

Where: At the University of Copenhagen.

+

When: During the calendar year of 2019.

+

Why: This knowledge will help in the prevention and treatment of certain vision problems in children.

+

How: Further, the rules that guide development in the visual system can be applied to other systems within the brain.

+

How Much: The research will cost $15,000.

+
+

Use Case:

+

I am building an AI/ML recommendation system. Users subscribe to the system to get recommendations of research projects they might like to participate in or fund. There will be many projects from all over the globe. Far too many for a human to sort through and filter. So AI will sort and filter automatically.

+

Will pre-classifying input fields using labels help the training algorithm be more efficient or effective?

+",21598,,-1,,6/17/2020 9:57,1/27/2019 11:45,Natural language recommendation system: to pre-classify inputs or not?,,1,0,,,,CC BY-SA 4.0 +10109,2,,9745,1/21/2019 10:25,,1,,"

This is what I understood so far from the paper in arxiv named ""Genetic Stereo Matching Algorithm with Fuzzy Fitness"":

+ +

Let's say we have 2 images (left and right taken by a stereo camera) of size 28x28. +The images are in grey scale.

+ +

The individual becomes a disparity map. Let's say, we look at the pixel at row 2, column 3 and it has a number 5. That means that the pixel at (2,3) on the left image (reference image) corresponds to the pixel at (2, 8) on the right image (target image).

+ +

The author of the paper created three pixel classes, black, average, and white, which take the values 0, 127.5, and 255 respectively on the grey image scale.

+ +

Then the author calculates the likelihood of each pixel on both the left and right images belonging to the same class, using the mean and standard deviation of the class in consideration.

+ +

The matching possibility metric is calculated by choosing the maximum likelihood of both images belonging to the same class.

+ +

The author also added Sobel gradient normalization to the fitness value.

+ +

Genetic operators work in a simple way: crossover just swaps some parts of one individual (disparity map) with another disparity map to create new individuals, and mutation just randomly changes numbers in the disparity map.
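A minimal sketch of those two operators on disparity maps stored as 2-D integer arrays (numpy; the row-wise crossover point and the mutation rate are arbitrary illustrative choices, not the paper's exact scheme):

import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    # Swap the lower block of rows of one disparity map with the other's.
    child = parent_a.copy()
    r = rng.integers(1, parent_a.shape[0])
    child[r:, :] = parent_b[r:, :]
    return child

def mutate(disparity, max_disp=20, rate=0.01):
    # Randomly re-draw a small fraction of the disparity values.
    mask = rng.random(disparity.shape) < rate
    noise = rng.integers(0, max_disp + 1, disparity.shape)
    return np.where(mask, noise, disparity)

a = rng.integers(0, 21, (28, 28))   # two parent disparity maps for 28x28 images
b = rng.integers(0, 21, (28, 28))
child = mutate(crossover(a, b))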

+",14863,,,,,1/21/2019 10:25,,,,0,,,,CC BY-SA 4.0 +10110,1,,,1/21/2019 10:39,,1,69,"

Let's say there's a ball with features position, velocity, acceleration.

+ +

These three are all concatenated as inputs to my neural network.

+ +

However, I have prior knowledge that position is way more predictive than the other features.

+ +

How do I weight the position feature much more strongly than the others? Would just applying a large scalar coefficient to it as preprocessing work? Seems unprincipled...

+",21158,,,,,1/21/2019 11:11,How to weight important features,,1,0,,,,CC BY-SA 4.0 +10111,2,,10110,1/21/2019 11:06,,4,,"

If you are training a neural network, it should learn correct weights to use the most predictive feature without any interference. For a strongly predictive feature matched with weaker ones, most NNs will learn the stronger association very quickly. I'm not sure whether there is an easy way to add this as a prior. You could do things such as inject the most influential feature in one or more hidden layers, but that does seem ugly, and only really helps if the predictive feature has certain types of relationship to output (it would probably help most if there is a strong linear relationship).

+ +

Training problems are usually in the opposite direction - how to extract the small amount of information from more noisy and less influential features that may still add up and sometimes contradict the main predictor.

+ +

The only thing that springs to mind for NNs that may help in this situation (mostly about fixing the problem of using the less reliable features) is the architecture of residual neural networks. The use of skip connections builds the neural network in such a way that a default ""do nothing"" identity function is encouraged as layers are added, and this allows for combining weak but complex relationships with stronger simple ones. A residual NN can do this with less compromise than changing network hyper parameters to find best fit on other NNs.

+ +

However, residual neural networks are good for managing different degrees of complexity that manifest within a problem, which is not necessarily the same as different degrees of predictive power between features.

+ +
+

Would just applying a large scalar coefficient to it as preprocessing work?

+
+ +

No, this would usually be counter-productive for training a neural network. You should instead be normalising all inputs - a common and effective technique for neural networks is to ensure that each feature is scaled and offset so it has mean 0, standard deviation 1.
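For example (numpy; a column-wise standardisation of a feature matrix that is assumed to have one feature per column):

import numpy as np

def standardise(X):
    # Scale and offset each feature (column) to mean 0, standard deviation 1.
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

X = np.array([[10.0, 0.5, 100.0],   # rows of (position, velocity, acceleration)
              [12.0, 0.7,  90.0],
              [ 9.0, 0.4, 110.0]])
X_norm = standardise(X)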

+",1847,,1847,,1/21/2019 11:11,1/21/2019 11:11,,,,0,,,,CC BY-SA 4.0 +10114,1,10116,,1/21/2019 13:54,,1,1268,"

If ""image captioning"" is utilized to make a commercial product, what application fields will need this technique? And what is the level of required performance for this technique to be usable?

+",21613,,21130,,1/31/2019 20:54,1/31/2019 20:54,"What's the commercial usage of ""image captioning""?",,1,0,,,,CC BY-SA 4.0 +10115,2,,10102,1/21/2019 16:50,,1,,"

Likely yes! When you split out the inputs like this, you are adding information. How much this helps is an open question till you build your system, get some data and start training.

+ +

Of course, it would be marvelous to have a machine do all the work straight from unstructured text - but you want a functional, easy-to-use website, not a research project of your own. To that end, do everything that you can to constrain the scope of the problem, and maximize the information available to your model. For example, you might want to see if you can add researchers using Google Scholar (so you can link to their profiles, and perhaps mine some information that way).

+ +

You'll be somewhat 'data constrained' till you've got a decent number of research proposals and user interactions to learn from. Tools like our NLP architect may help you get more out of your text (there are some other really cool new-generation ML-for-NLP packages you should also evaluate).

+",17770,,4302,,1/27/2019 11:45,1/27/2019 11:45,,,,0,,,,CC BY-SA 4.0 +10116,2,,10114,1/21/2019 16:52,,0,,"
+

If "image captioning" is utilized to make a commercial product, what application fields will need this technique?

+
+

There are several important use case categories for image captioning, but most are components in larger systems, web traffic control strategies, SaaS, IaaS, IoT, and virtual reality systems, not as much for inclusion in downloadable applications or software sold as a product. These are a few examples.

+
    +
  • Provision of captions to reduce the number of user operations (keying and selection) required to post a movie or image to improve posting volume of sticky media and therefore improve the position of a web site or page in terms of human interest
  • +
  • Provision of captions to provide HTML header and alt attribute content to improve search engine scoring of page for search terms related to the content of the movie or image
  • +
  • Testing to see if the content of an image submitted for posting matches the criteria intended by the owner or stake holder for the posting space on the basis of caption rather than directly through CNN categorization
  • +
+

The first two are usually monetized in ways such as these.

+
    +
  • Improvements in online purchase daily volume

    +
  • +
  • Acquisition of contact lists with consumer or business interests as fields in the list for marketing purposes

    +
  • +
  • Additional draw of web traffic to enhance ad impression revenue

  • +
+

What is the level of required performance for this technique to be usable?

+

Performance is in terms of these system qualities.

+
    +
  • Cost of computing resources per caption generated
  • +
  • Accuracy of caption generated (which may be quantified in more than one way)
  • +
  • Reliability of generation (since the AI component should produce a rating of confidence with the caption string or produce a null or empty string based on a hyper-parameter confidence threshold
  • +
+

The quantification of cost, accuracy, and reliability is business dependent. Some use cases may suffer a nasty negative effect on the business if the caption is wrong or missing. Other use cases may not.

+

In some cases the average revenue generated by the caption's presence is already known and known to be small, which requires that the CNN run time and computing resource requirements must be kept below that. In other cases, a Fortune 500 company CTO said, "Do it and send me the budget needed." In such cases the budget may be, for all practical purposes, unconstrained, not that a system that wastes time and resources is ever desirable, even if only for energy conservation reasons.

+",4302,,-1,,6/17/2020 9:57,1/21/2019 16:52,,,,0,,,,CC BY-SA 4.0 +10117,2,,9990,1/21/2019 17:36,,-1,,"

I'm not sure I understand your question. However, responding to your question in the comments, the difference between the two objectives is this:

+ +

In an ordinary GAN, we want to push $p(G)$ to be as close as possible to $p(data)$

+ +

In a conditional GAN, we have a context $c$. If we imagine, for ease of understanding, that $c$ is a discrete variable taking values in $\{1,2,3\}$, so that all the data can be categorised under one of these $c$ values, then we want to push $p(G|c=1)$ as close as possible to $p(data|c=1)$, push $p(G|c=2)$ as close as possible to $p(data|c=2)$, and push $p(G|c=3)$ as close as possible to $p(data|c=3)$.

+",19895,,,,,1/21/2019 17:36,,,,0,,,,CC BY-SA 4.0 +10119,1,10120,,1/21/2019 18:59,,3,212,"

This corresponds to Exercise 1.1 of Sutton & Barto's book (2nd edition), and a discussion followed from this answer.

+

Consider the following two reward functions

+
    +
  • Win = +1, Draw = 0, Loss = -1
  • +
  • Win = +1, Draw or Loss = 0
  • +
+

Can we say something about the optimal Q-values?

+",21509,,2444,,4/8/2022 10:06,4/8/2022 10:06,"Given these two reward functions, what can we say about the optimal Q-values, in self-play tic-tac-toe?",,1,1,,,,CC BY-SA 4.0 +10120,2,,10119,1/21/2019 19:54,,1,,"

Chapter 1 of Sutton & Barto doesn't introduce the full version of Q-learning, and you are probably not expected to explain the full distribution of values at that stage.

+ +

Probably what you are expected to notice is that the maximum Q values out of possible next states - after training/convergence - should represent the agent's best choice of move. What the actual optimal values are depends on how the opponent plays. In self-play it is possible to find optimal play for both players in the game, and thus the Q values represent true optimal play. However what ""optimal play"" means is dependent on the goals you have set the agent implied by the reward values.

+ +

Any move which leads to a guarantee that a player can force a win regardless of what the opponent does, would have a Q value of +1. If the agent or opponent can force a draw at best, then it will have the Q value of the draw, and if the opponent can force a win (i.e. the current agent would lose), then the move will have the Q value of a loss. This happens because the learning process copies best case values backwards from the end game states to earlier game states that lead to them.

+ +

In a game with two perfect players and a +1, 0, -1 reward system, each player will on its turn only see the 0 and -1 moves available. That is because there is no way to force a win in tic-tac-toe, and the perfect opponent will always act to block winning moves. The best choice out of 0 or -1 is 0: each player, when acting under its value estimates, will force a draw. There will be states defined that have a value of +1, but they will never appear as a choice to either player.

+ +

What happens if you don't make a difference in rewards between drawing and losing? In the extreme case of having win +1, lose or draw 0 against a perfect opponent, then all of the agent's available Q values will always be 0. The agent would then be faced with no way to choose between defensive plays that force a draw and mistake plays that allow the opponent to win. In turn that means some chance that the opponent will win, even when the agent had learned optimal play.

+ +

When two agents learn through self-play using the +1, 0, 0 reward scheme it gets more complicated. That is because the opponent's behaviour is part of the Q value system. Some positions will have more opportunities for the opponent to make mistakes, and score more highly. A mistake that allows an opponent to force a win will actually score worse, because the opponent will not make mistakes once it has a sure route to a +1 score. So even though the agent apparently cannot tell the difference between a loss and a draw, it should still at least partially learn to avoid losses. In fact, without running the experiment, I am not sure whether this would be enough to still learn optimal play.

+ +

Intuitively, I think it would be possible for the +1, 0, 0 agent to still learn optimal play, although maybe more slowly than the +1, 0, -1 system, because any situation that gave an opponent a chance of winning would allow it to pick the move with the best score, reducing the first agent's score for a move that arrived there - and this difference will be backed up to earlier positions. However, the learning would become unstable as described above, as against a perfect opponent the difference disappears as all the best options are draws or losses, and the agent will start to make mistakes again.

+",1847,,1847,,1/25/2019 11:33,1/25/2019 11:33,,,,0,,,,CC BY-SA 4.0 +10122,2,,7680,1/22/2019 0:48,,3,,"

It's a subtle issue.

+ +

If you look at the A3C algorithm in the original paper (p.4 and appendix S3 for pseudo-code), their actor-critic algorithm (the same algorithm for both episodic and continuing problems) is off by a factor of gamma relative to the actor-critic pseudo-code for episodic problems in the Sutton and Barto book (p.332 of January 2019 edition of http://incompleteideas.net/book/the-book.html). The Sutton and Barto book has the extra ""first"" gamma as labeled in your picture. So, either the book or the A3C paper is wrong? Not really.

+ +

The key is on p. 199 of the Sutton and Barto book:

+ +
+

If there is discounting (gamma < 1) it should be treated as a form of termination, which can be done simply by including a factor of $\gamma$ in the second term of (9.2).

+
+ +

The subtle issue is that there are two interpretations to the discounting factor gamma:

+ +
    +
  1. A multiplicative factor that puts less weight on distant future rewards.
  2. +
  3. A probability, 1 - gamma, that a simulated trajectory spuriously terminates, at any time step. This interpretation only makes sense for episodic cases, and not continuing cases.
  4. +
+ +

Literal implementations:

+ +
    +
  1. Just multiply the future rewards and related quantities (V or Q) in the future by gamma.
  2. +
  3. Simulate some trajectories and randomly terminate (1 - gamma) of them at each time step. Terminated trajectories give no immediate or future rewards.
  4. +
+ +

The two interpretations of gamma are valid. But choosing one or the other means you are tackling a different problem. The math is slightly different and you end up with an extra gamma multiplying $G \nabla\ln\pi(a|s)$ with the second interpretation.

+ +

For example, if you are at step t=2 and gamma = 0.9, the algorithm for the second interpretation is that the policy gradient is $\gamma^2 G \nabla\ln\pi(a|s)$ or $0.81 G \nabla\ln\pi(a|s)$. This term has 19% less gradient power than the t=0 term for the simple reason that 19% of simulated trajectories have died off by t=2.

+ +

With the first interpretation of gamma, there is no such 19% decay. The policy gradient is just $G \nabla\ln\pi(a|s)$ at t=2. But gamma is still present within $G$ to discount the future rewards.

+ +
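A small numerical sketch of the difference (numpy; the reward sequence and the log-probability gradients are made-up placeholders, and the gamma^t weighting corresponds to interpretation 2):

import numpy as np

gamma = 0.9
rewards = np.array([0.0, 0.0, 1.0, 0.0, 2.0])   # placeholder per-step rewards

# Discounted return G_t (gamma is inside G under both interpretations).
G = np.zeros_like(rewards)
running = 0.0
for t in reversed(range(len(rewards))):
    running = rewards[t] + gamma * running
    G[t] = running

grad_logpi = np.ones_like(rewards)               # placeholder for grad ln pi(a_t|s_t)

terms_1 = G * grad_logpi                                       # interpretation 1 (A3C-style)
terms_2 = (gamma ** np.arange(len(rewards))) * G * grad_logpi  # interpretation 2 (extra gamma^t)

print(terms_1)
print(terms_2)   # at t=2 the second version is scaled by 0.9^2 = 0.81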

You can choose whichever interpretation of gamma, but you have to be mindful of the consequences to the algorithm. I personally prefer to stick with interpretation 1 just because it's simpler. So I use the algorithm in the A3C paper, not the Sutton and Barto book.

+ +

Your question was about the REINFORCE algorithm, but I have been discussing actor-critic. You have the exact same issue related to the two gamma interpretations and the extra gamma in REINFORCE.

+",21625,,21625,,1/22/2019 0:57,1/22/2019 0:57,,,,0,,,,CC BY-SA 4.0 +10124,2,,9479,1/22/2019 2:25,,4,,"
+

Simulated Annealing vs genetic algorithm?

+
+ +

Simulated annealing is a materials science analogy and involves the introduction of noise to avoid search failure due to local minima. See images below. To improve the odds of finding the global minimum rather than a sub-optimal local one, a stochastic element is introduced by simulating Brownian (thermal) motion. Variants of simulated annealing include injection of random numbers with various distributions and the averaging effect of mini-batching (dividing a batch into segments and performing network parameter adjustment after each).
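
+ +

A minimal sketch of the idea in Python (the cost and neighbor functions are assumed to be supplied by the user; names are illustrative):

import math, random

def simulated_annealing(x, cost, neighbor, temperature=1.0, cooling=0.995, steps=10000):
    best = x
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Worse candidates are accepted with a temperature-dependent probability,
        # which is the stochastic element that helps escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
            if cost(x) < cost(best):
                best = x
        temperature *= cooling
    return best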

+ +

Genetic algorithms are search methods based on principles of mutation, meiosis, symbiosis, testing, elimination of inadequacy, and recursion. The advantage of such approaches is the simulation of sexual reproduction, where dominant genetic features from two individuals can produce a child individual containing the best of both. In a population of children, the probability of such a hybrid emerging is higher. Over several generations, it is higher still.
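
+ +

A minimal sketch of those principles in Python, assuming bit-string individuals (illustrative only, not a complete genetic algorithm):

import random

def crossover(parent_a, parent_b):
    # One-point crossover: the child may inherit the best gene segments of both parents.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.01):
    # Flip each bit with a small probability.
    return [1 - gene if random.random() < rate else gene for gene in individual]

child = mutate(crossover([0, 1, 1, 0, 1], [1, 1, 0, 0, 0]))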

+ +
+

What kind of problems does simulated annealing perform better than genetic algorithms if any?

+
+ +

This question can only be answered well if comparing original, pure versions of both, not the variants that have developed since their introduction.

+ +

Simulated annealing or other stochastic gradient descent methods usually work better with continuous function approximation requiring high accuracy, since pure genetic algorithms can only select one of two genes at any given position.

+ +
+

From my experience, genetic algorithm seems to perform better than simulated annealing for most problems.

+
+ +

Those performance results would be of value to others and should be published as a paper, presented as an open source project, or published creative commons in the appropriate AI venue. At the very minimum, the results could be placed in the above question or an answer to it.

+ +

+ +

+ +
+ +

References

+ +

[1] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, Optimization by Simulated Annealing, Science, New Series, Vol. 220, No. 4598, May 1983, pp. 671-680

+ +

[2] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller, Equation of State Calculations by Fast Computing Machines, J. Chem. Phys. 21, 1087 (1953)

+ + + +

Images from Other Answers

+ + +",4302,,,,,1/22/2019 2:25,,,,0,,,,CC BY-SA 4.0 +10125,2,,9745,1/22/2019 3:59,,2,,"
+

Do not understand how genetic algorithms are used in stereo matching.

+
+ +

The first paper referenced in the question is Stereo matching using genetic algorithm with adaptive chromosomes, Kyu-Phil Han, Kun-Woen Song, Eui-Yoon Chung, Seok-Je Cho, Yeong-HoHac, 2000. It summarizes an approach this way.

+ +
    +
  1. An individual is a disparity set,

  2. +
  3. A chromosome has a 2D structure for handling image signals efficiently, and

  4. +
  5. A fitness function is composed of certain constraints which are commonly used in stereo matching.

  6. +
+ +

The paper also states, ""Genetic algorithms are efficient search methods based on principles of population genetics, i.e. mating, chromosome crossover, gene mutation, and natural selection."" The extension to genetic algorithm purity is contained in this statement in the paper.

+ +
+

To improve the convergence, an informed generation, that a higher selected possibility is assigned to the chromosome including a smaller intensity difference, is adopted. The informed generation uses several random generations plus a selection. In a random generation, only a value is randomly generated. Yet, the informed generation of the proposed algorithm selects the gene which has the minimum intensity difference among the randomly generated genes.

+
+ +

The question inquires along a few lines.

+ +
+

Does it mean that an individual is a disparity map with random numbers?

+
+ +

The goal is to produce order from chaos through successive mutations, combinations, and tests, arriving at genetic code that represents distances of objects that correspond to a left region of pixels and a right region of pixels. Those two regions are identified through alignment of brightness categories, with some permissible sprinkling of outliers.

+ +

An individual in this context is a simulation of a sample from a population with a particular aggregation of genetic codes. The codes indicate disparity between matching image features in the left and right images, measured in horizontal pixels.
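
+ +

A minimal sketch (not the encoding used in the paper) of what such an individual and a simple intensity-difference fitness could look like for a pair of rectified grayscale images:

import numpy as np

def random_individual(height, width, max_disparity):
    # One candidate disparity, in horizontal pixels, per pixel position.
    return np.random.randint(0, max_disparity, size=(height, width))

def fitness(disparity, left, right):
    # Reward small intensity differences between each left pixel and the
    # right pixel it is matched to; penalize matches outside the image.
    height, width = disparity.shape
    cost = 0.0
    for y in range(height):
        for x in range(width):
            d = int(disparity[y, x])
            if x - d >= 0:
                cost += abs(float(left[y, x]) - float(right[y, x - d]))
            else:
                cost += 255.0
    return -cost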

+ +
+

[Don't] understand how this works and even WHY you should use genetic algorithm on an algorithm that at its simplest form uses 5 for loops.

+
+ +

The five loops referenced in the question, in the source at github.com/davechristian/Simple-SSD-Stereo, are as follows.

+ +
for y in range(kernel_half, h - kernel_half):
+    for x in range(kernel_half, w - kernel_half):
+        for offset in range(max_offset):
+            for v in range(-kernel_half, kernel_half):
+                for u in range(-kernel_half, kernel_half):
+
+ +

This exhaustive search has gross shortcomings in relation to the genetic algorithm with informed generation.

+ +
    +
  • The simulation of meiosis in genetic algorithms is essentially a search for hybrids where the best genetic sequences of two parents are preserved in the offspring.

  • +
  • Genetic algorithms are easy to scale across computer clusters without developing specialized hardware acceleration.

  • +
  • Components of failed compositions are not eliminated in an exhaustive search, so there is considerable redundancy in trials.

  • +
+ +

The second paper referenced in the question is Genetic Stereo Matching Algorithm with Fuzzy Fitness, Haythem Ghazouani, 2014. It proposes, ""To get around [convergence speed] limitations, we propose a new encoding for individuals, more compact than binary encoding and requiring much less space."" This approach and the use of three lightness classes are shared with the methodology of the first paper.

+ +

This second paper introduces fuzzy matching and the use of Sobel gradient norms to penalize (dismiss) pixels which project onto uniform regions and are therefore less significant. This paper also provides some comparison results with Han's, Dong's, and Nguyen's algorithms as defined in the second paper's bibliography.
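
+ +

A small sketch of the Sobel-based idea, assuming a grayscale NumPy image (the threshold and helper name are illustrative):

import numpy as np
from scipy import ndimage

def significant_pixels(image, threshold=20.0):
    # Pixels in uniform regions have a small gradient norm and are dismissed.
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    gradient_norm = np.hypot(gx, gy)
    return gradient_norm > threshold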

+",4302,,,,,1/22/2019 3:59,,,,0,,,,CC BY-SA 4.0 +10131,2,,10027,1/22/2019 9:22,,1,,"

Comparing Unlike Objects

+ +

The comparison between a person and an artificial network cannot be made on an equal basis. The former is a composition of many things that the later is not.

+ +

Unlike an artificial network sitting in computer memory on a laptop or server, a human being is an organism, from head to toe, living in the biosphere and interacting with other human beings from birth.

+ +

Human Training

+ +

We have latent intelligence in the zygotes that met to form us and solidified as our genetic code during meiosis, but it is not yet trained. It cannot be until the brain grows from its first cells, directed by the genetic expressions of the brain's metabolic, sensory, cognitive, motor control, and immune structure and function. After nine months of growth, a newborn baby's intelligence is not yet exhibited in motion, language, or behavior other than to suck liquid food.

+ +

Our intelligence begins to emerge after initial basic behavioral training and does not reach the ability to pass a test indicating academic abilities until the corresponding stages of development in a family structure and components of education are complete. These are all observations well studied and documented by those in the field of developmental psychology.

+ +

Artificial Networks are Not Particularly Neural

+ +

An artificial network is a distant and distorted conceptual offspring of a now obsolete model of how neurons behave in networks. Even when the perceptron was first conceived, it was known that neurons reacted to activation from electrical pulses transmitted across synapses from other neurons arranged in complex micro-structures, not by applying an activation function to a vector-matrix product. The parameter matrices at the inputs of artificial neurons sum attenuated signals; they do not electro-chemically react to pulses that may only be roughly aligned in time.

+ +

Since then, imaging and in vitro study of neurons have been revealing the complexities of neuro-plasticity (genetically directed morphing of the network topology of neurons), the many varieties of cell types, the geometric grouping of cells to form function, and the involvement of energy metabolism in the axon.

+ +

In the human brain, the chemical pathways of dozens of compounds regulate function and comprise global and regional states; the secretion, transmission, agonist and antagonist reception, interaction, and uptake of those compounds are under study. There is barely, if at all, an equivalent in the environment of the artificial networks deployed today, although nothing stops us from designing such regulation systems, and some of the most recent work has pushed the envelope in that direction.

+ +

Sexual Reproduction

+ +

Artificial networks are also not brains inside individuals produced by sexual reproduction, therefore potentially exhibiting in neurological capacity the best of two parents, or the worst. We do not yet spawn artificial networks from genetic algorithms, although that has been thought of and it is likely to be researched again.

+ +

Adjusting the Basis for Comparison

+ +

In short, the basis for comparison renders it meaningless, however, with some adjustment based on the above, another similar comparison can be considered that is meaningful and on a more equal basis.

+ +
+

What is the difference between a college student and an artificial network that has billions of artificial neurons, well configured and attached to five senses and motor control, integrated inside a humanoid robot that has been nurtured and educated like a member of a family and a community for eighteen years since its initial deployment?

+
+ +

We don't know. We can't even simulate such a robotic experience of eighteen years or properly project what might happen with scientific confidence. Many of the AI components of the above are not yet well developed. When they are — and there is no particularly compelling reason to think they cannot — then we will find out together.

+ +

Research that May Provide an Answer

+ +

From further cognitive science development, real-time neuron-level imaging, and work on the genetic expressions out of which brains grow, artificial neuron designs will likely progress beyond perceptrons and the more temporally aware LSTM, B-LSTM, and GRU varieties, and the topologies of neuron arrangements may break from their current Cartesian structural limitations.

+ +

The neurons in a brain are not arranged in orthogonal rows and columns. They form clusters that exhibit closed loop feedback at low structural levels. This can be simulated by a B-LSTM type artificial network cell, but any electrical engineer schooled in digital circuit design understands that simulation and realization are miles apart in efficiency. A signal processor can run thousands of times faster than its simulation.

+ +

From development of computer vision, hearing, tactile-motor coordination, olfactory sensing, materials science support, robotic packaging, and miniature power sources far beyond what lithium batteries can produce may come humanoid robots that can learn while interacting. At that time it would probably be easy to find a family that cannot have children that would adopt an artificial child.

+ +

Scientific Rigor

+ +

Progress in these areas is necessary for such a comparison to be made on a scientific basis and for confidence in the comparison results to be published and pass peer review by serious researchers not interested in media hype, making the right career moves, or hiking their company's stock prices.

+",4302,,,,,1/22/2019 9:22,,,,0,,,,CC BY-SA 4.0 +10133,1,10630,,1/22/2019 11:01,,11,10773,"

+ +

They only reference in the paper that the position embeddings are learned, which is different from what was done in ELMo.

+ +

ELMo paper - https://arxiv.org/pdf/1802.05365.pdf

+ +

BERT paper - https://arxiv.org/pdf/1810.04805.pdf

+",17451,,2444,,11/1/2019 2:40,7/10/2021 5:29,What are the segment embeddings and position embeddings in BERT?,,2,0,,,,CC BY-SA 4.0 +10134,1,,,1/22/2019 12:02,,1,52,"

When dealing with continuous action spaces, a common choice when designing a policy in policy gradient methods is to learn mean and variance of actions for a specific state and then simply sample from the normal distribution defined by the learned mean and variance to get an action.

+ +

My first question is: is an explicit exploration strategy even needed in such cases, given that the dose of randomness in the actions already comes from the sampling itself? On the other hand, there could be cases where we would get stuck in a local optimum just by sampling.
My second question is: in case explicit exploration is needed, how would one approach the problem of exploration for this specific setup?

+",20339,,,,,1/22/2019 12:02,How to include exploration in Gaussian policy,,0,0,,,,CC BY-SA 4.0 +10136,1,10146,,1/22/2019 15:32,,3,2147,"

I'm trying to understand how the size of the hidden state affects the GRU.

+ +

For example, suppose I want to make a GRU count. I'm gonna feed it with three numbers, and I expect it to predict the fourth.

+ +

How should I choose the size of the hidden state of a GRU?

+",20430,,2444,,4/12/2020 18:30,4/12/2020 18:30,How do I choose the size of the hidden state of a GRU?,,1,0,,,,CC BY-SA 4.0 +10143,1,,,1/22/2019 22:16,,1,96,"

In model-based reinforcement learning algorithms, a model of the environment is constructed so that samples are used efficiently, as in methods such as Dyna and Prioritized Sweeping. Moreover, eligibility traces help the agent learn (action) value functions faster.

+ +

Can I know if it is possible to combine learning, planning, and eligibility traces in a model to increase its convergence rate? If yes, how is it possible to use eligibility traces in the planning part, as in Prioritized Sweeping?

+",10191,,2444,,2/16/2019 19:05,2/16/2019 19:05,Eligibility trace In Model-based Reinforcement Learning,,0,0,,,,CC BY-SA 4.0 +10144,1,10153,,1/22/2019 22:51,,2,941,"

What is the difference between meta-learning and zero-shot learning? Are they synonymous?

+ +

I have seen articles where they seem to imply that they are at least very similar concepts.

+",19895,,2444,,3/9/2020 14:53,3/9/2020 14:53,What is the difference between meta-learning and zero-shot learning?,,1,0,,,,CC BY-SA 4.0 +10145,1,10156,,1/23/2019 4:49,,1,114,"

In an attempt at designing a neural network more closely modeled on the human brain, I wrote code before doing the reading. The neuron I have modeled operates on the following method.

+ +
    +
  • Parameters: potential, threshold, activation.
  • +
  • [activation] = 0.0
  • +
  • Receive inputs, added to [potential].
  • +
  • If ([potential] >= [threshold]) + +
      +
    • [activation] = [potential]
    • +
    • [potential] = 0.0
    • +
  • +
  • Else + +
      +
    • [potential] *= 0.5
    • +
  • +
+ +

In short, the neuron receives inputs and ""fires"" if the threshold is met. If not, the input sum, or potential, decreases. Inputs are applied by adding their values to the potentials of the input neurons, and connections multiply neuron activation values by weights before applying them to their destination potentials. The only difference between this and a spiking network is the activation model.

+ +

I am, however, beginning to learn that Spiking Neural Networks (SNNs), the actual biologically-inspired model, operate quite differently. Forgive me if my understanding is terribly flawed. I seem to have the understanding that signals in these networks are sharp sinusoidal wave-forms with between 100 and 300 ""spikes"" in a subdivision of ""time,"" given for 1 ""second."" These signals are sampled for the ""1 second"" by the neuron, and processed by a differential equation that determines the activation state of the neuron. Synapses seem to function in the same manner -> multiplying the signal by a weight, but increasing or decreasing the period of the graph.

+ +

However, I wish to know what form of neuron activation model I created. I have been unable to find papers that describe a method like this.

+ +

EDIT. The ""learnable"" parameters of this model are [threshold] of the neuron and [weight] of the connections/synapses.

+",,user12941,,user12941,1/23/2019 11:26,1/24/2019 0:11,How does one characterize a neural network with threshold-based activation functions?,,1,1,,,,CC BY-SA 4.0 +10146,2,,10136,1/23/2019 5:42,,3,,"

Yes, your understanding of the hidden state is correct. But the size of the hidden state is a hyperparameter that needs to be found by trial and error. There is no closed-form formula or solution that links the size of the hidden state to the problem at hand. There are, however, some rules of thumb, like starting out with a hidden state size that is a power of 2. Keep tuning the hyperparameter until you get very good predictions.
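
+ +

For instance, a minimal PyTorch sketch of the counting setup you describe, where the hidden size is just a constructor argument to tune:

import torch
import torch.nn as nn

hidden_size = 64  # try 32, 64, 128, ... and compare validation error
gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
head = nn.Linear(hidden_size, 1)

x = torch.randn(8, 3, 1)        # a batch of 8 sequences of 3 numbers
_, h_n = gru(x)                 # final hidden state: (1, 8, hidden_size)
prediction = head(h_n[-1])      # predict the fourth number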

+",9062,,2444,,4/12/2020 18:30,4/12/2020 18:30,,,,0,,,,CC BY-SA 4.0 +10147,2,,9983,1/23/2019 8:29,,1,,"

This is the proposed way to reverse engineer software using AI.

+ +
    +
  • Program fake_user operates program target_prog in diverse ways to generate a huge and comprehensive data set.
  • +
  • The parameters of an artificial network are trained to produce within specified accuracy and reliability criteria a behavioral equivalent of target_prog.
  • +
+ +

Not only is this possible, but it is becoming standard practice for AI projects other than reverse engineering games.
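
+ +

A minimal sketch of the two steps above in Python (target_prog is a hypothetical callable standing in for the program being operated; scikit-learn is used only for brevity):

import numpy as np
from sklearn.neural_network import MLPRegressor

def build_dataset(target_prog, n_samples=10000, n_inputs=4):
    # The "fake user" drives the target over many inputs to record its behaviour.
    X = np.random.uniform(-1.0, 1.0, size=(n_samples, n_inputs))
    y = np.array([target_prog(x) for x in X])
    return X, y

def clone_behaviour(target_prog):
    X, y = build_dataset(target_prog)
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
    model.fit(X, y)
    return model  # a behavioural equivalent, within some accuracy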

+ +

There are caveats.

+ +
    +
  • Program target_prog may be of sufficient complexity to exceed the capabilities of existing network designs and convergence techniques.
  • +
  • The project may lack access to funds and computing resources to complete the generation and training required to achieve reasonable accuracy, with sufficient reliability, in the time allotted.
  • +
  • The expertise of those involved may not be sufficient to produce satisfactory results.
  • +
Although the source code is not copied and the parameter state achieved through learning merely contains equivalent functionality, there is no guarantee that civil liability may not result. Copyright law in one or more jurisdictions may be interpreted as a protection against this kind of copying even though the text of the source code was not copied verbatim.
  • +
+",4302,,,,,1/23/2019 8:29,,,,0,,,,CC BY-SA 4.0 +10148,1,,,1/23/2019 11:21,,2,817,"

I have some documents containing some text (machine writing text) that I intend to apply OCR on them in order to extract the text.

+

The problem is that these documents contain a lot of noise, but in different ways (some documents have noise in the middle, others at the top, etc.), which means that I can't apply simple thresholding in order to remove the noise (i.e. applying a simple threshold not only removes the noise, but also removes some parts of the text).

+

For these reasons, I thought about using AI to de-noise the documents.

+

Does anyone know if it is possible to do that with AI or any alternative way?

+",19059,,2444,,12/31/2021 9:51,12/31/2021 9:51,Is it possible to use AI to denoise noisy documents?,,1,0,,,,CC BY-SA 4.0 +10149,1,10152,,1/23/2019 15:33,,5,974,"

DeepMind's paper ""Mastering the game of Go without human knowledge"" states in its ""Methods"" section on its ""Neural network architecture"" that the output layer of AlphaGo Zero's policy head is ""A fully connected linear layer that outputs a vector of size 19^2+1=362, corresponding to the logit probabilities for all intersections and the pass move"" (emphasis mine). I am self-trained regarding neural networks, and I have never heard of a ""logit probability"" before this paper. I have not been able by searching and reading to figure out what it means. In fact, the Wikipedia page on logit seems to make the term a contradiction. A logit can be converted into a probability using the equation $p=\frac{e^l}{e^l+1}$, and a probability can be converted into a logit using the equation $l=\ln{\frac{p}{1-p}}$, so the two cannot be the same. The neural network configuration for Leela Zero, which is supposed to have a nearly identical architecture to that described in the paper, seems to indicate that the fully connected layer described in the above quote needs to be followed with a softmax layer to generate probabilities (though I am absolutely new to Caffe and might not be interpreting the definitions of ""p_ip1"" and ""loss_move"" correctly). The AlphaGo Zero cheat sheet, which is otherwise very helpful, simply echoes the phrase ""logit probability"" as though this is a well-known concept. I have seen several websites that refer to ""logits"" on their own (such as this one), but this is not enough to satisfy me that ""logit probability"" must mean ""a probability generated by passing a logit vector through the softmax function"".

+ +

What is a logit probability? What sources can I read to help me understand this concept better?

+",21674,,18758,,6/29/2022 1:19,6/29/2022 1:19,"What is a ""logit probability""?",,1,0,,,,CC BY-SA 4.0 +10150,1,,,1/23/2019 15:54,,2,183,"

There are many methods and algorithms dealing with planning problems.

+ +

If I understand correctly, according to Wikipedia, there are classical planning problems, with:

+ +
    +
  • a unique known initial state,
  • +
  • duration-less actions,
  • +
  • deterministic actions,
  • +
  • which can be taken only one at a time,
  • +
  • and a single agent.
  • +
+ +

Classical planning problems can be solved using classical planning algorithms. The STRIPS framework for problem description and solution (using backward chaining) or the GraphPlan algorithm can be mentioned here.

+ +

If actions are non-deterministic, according to Wikipedia, we have a Markov Decision Process (MDP), with:

+ +
    +
  • duration-less actions,

  • +
  • nondeterministic actions with probabilities,

  • +
  • full observability, or partial observability for POMDP

  • +
  • maximization of a reward function,

  • +
  • and a single agent.

  • +
+ +

MDPs are mostly solved by Reinforcement Learning.

+ +

Obviously, classical planning problems can also be formulated as MDPs (with state transition probabilities of 1, i.e. deterministic actions), and there are many examples (e.g. some OpenAI Gyms), where these are successfully solved by RL methods.

+ +

Two questions:

+ +
    +
  1. Are there some characteristics of a classical planning problem, which makes MDP formulation and Reinforcement Learning a better suiting solution method? Better suiting in the sense that it finds a solution faster or it finds the (near)optimal solution faster.

  2. +
  3. How do graph search methods like A* perform with classical planning problems? Does STRIPS with backward chaining or GraphPlan always outperform A*? Outperform in the sense of finding the optimal solution faster.

  4. +
+",2585,,2585,,1/24/2019 8:35,6/23/2019 9:02,How to choose method for solving planning problems?,,0,3,,2/6/2021 23:35,,CC BY-SA 4.0 +10151,2,,10148,1/23/2019 16:52,,4,,"

This is also the topic of image processing (which uses analytical solutions instead of learning), mostly through predesigned filters. The filter depends on the type of noise (salt & pepper, Gaussian, etc.); e.g., for salt & pepper noise, a common choice is taking the median in a window. There is a lot of denoising research in the literature. There are also more recent learning-based denoising applications, but they require data so that you can train.
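
+ +

For example, a minimal classical (non-learning) sketch using SciPy, assuming a grayscale NumPy image:

from scipy import ndimage

def denoise(image, noise_type="salt_pepper"):
    if noise_type == "salt_pepper":
        return ndimage.median_filter(image, size=3)   # median of a 3x3 window
    return ndimage.gaussian_filter(image, sigma=1.0)  # e.g. for Gaussian noise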

+",21679,,,,,1/23/2019 16:52,,,,5,,,,CC BY-SA 4.0 +10152,2,,10149,1/23/2019 16:54,,6,,"

Indeed I haven't seen the term ""logit probability"" used in many places other than that specific paper. So, I cannot really comment on why they're using that term / where it comes from / if anyone else uses it, but I can confirm that what they mean by ""logit probability"" is basically the same thing that is more commonly referred to simply as ""logits"": they are the raw, unbounded scores, a vector of which we generally push through a softmax function to generate a discrete probability distribution that nicely adds up to $1$.

+ +

This definition fits the one you linked from wikipedia (although that link only covers the binary case, and AlphaGo Zero would have multinomial logits since it has more than two outputs for the policy head).

+ +

In the AlphaGo Zero paper, the described architecture has a ""linear output layer"" (i.e. no activation function for the outputs, or the identity function as activation function for the outputs, or however you like to describe it) for the policy head. This means that these outputs are essentially unbounded, they could be any real number. We know for sure that these outputs cannot directly be interpreted as probabilities, even if this isn't stated quite explicitly in the paper.

+ +

By calling them logits (or logit probabilities for reasons unknown to me), they are essentially implying that these outputs will still be post-processed by a softmax to convert them into a vector that can be interpreted as a discrete probability distribution over the actions, even if they do not explicitly describe a softmax layer as being a part of the network.
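
+ +

For example, a small sketch of that post-processing step, turning the 362 raw policy-head outputs into a probability distribution:

import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probabilities = softmax(np.random.randn(362))
assert abs(probabilities.sum() - 1.0) < 1e-6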

+ +

It is indeed possible that in Leela Zero they decided to make the softmax operation explicitly a part of the Neural Network architecture. Mathematically they end up doing the same thing... the AlphaGo Zero paper implies (by using the word ""logit"") that a softmax is used as a ""post-processing"" step, and in Leela Zero they explicitly make it a part of the Neural Network.

+ +

Here are a couple more sources for the reasoning that usage of the word ""logit"" basically implies usage of a softmax, though indeed they do not cover the term ""logit probability"":

+ + +",1641,,,,,1/23/2019 16:54,,,,2,,,,CC BY-SA 4.0 +10153,2,,10144,1/23/2019 16:58,,2,,"

First see the definition of meta-learning:

+ +
+

Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017 the term had not found a standard interpretation, however the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.

+
+ +

and zero-shot learning:

+ +
+

Zero-shot learning is being able to solve a task despite not having received any training examples of that task. For a concrete example, imagine recognizing a category of object in photos without ever having seen a photo of that kind of object before. If you've read a very detailed description of a cat, you might be able to tell what a cat is in a photograph the first time you see it.

+
+ +

As you can see, these are different concepts, but meta-learning can be used in zero-shot learning to make it work better. See this article, for instance.

+",4446,,,,,1/23/2019 16:58,,,,0,,,,CC BY-SA 4.0 +10155,1,10161,,1/23/2019 21:24,,2,138,"

It is said that the essence of https://www.springer.com/us/book/9780817639495 ""Neural Networks and Analog Computation. Beyond the Turing Limit"" is that continuous/physical/real-valued weights for neural networks can induce super-Turing capabilities. Current digital processors cannot implement real-valued neural networks; they can only approximate them. There is very little effort to build analog classical computers. But it is quite possible that quantum computers will be analogue. So, is there a research trend that investigates true real-valued neural networks on analog quantum computers?

+ +

Google is of no use for my efforts, because it does not understand the true meaning of ""true real-valued neural network""; it just gives articles on real-valued vs complex-valued neural networks, which are not relevant to my question.

+",8332,,,,,1/24/2019 13:32,Can analog quantum computer implement real-valued neural networks and hence do hypercomputation?,,1,0,,,,CC BY-SA 4.0 +10156,2,,10145,1/24/2019 0:01,,0,,"

The model you describe is a kind of a leaky integrate-and-fire (LIF) neuron (see p. 7). It is leaky because the membrane potential decreases steadily in the absence of input. In contrast, in the simple integrate-and-fire (IF) model the membrane potential is retained indefinitely until the neuron spikes, at which point it is reset to 0. However, LIF neurons are usually modelled with exponential decay of the membrane potential, where you have a time constant $\tau$ and you compute the potential $P_{t}$ at time $t$ based on the potential $P_{t_{last}}$ at time when the last input arrived as

+ +

$P_{t} = P_{t_{last}} exp(- \frac{t - t_{last}}{\tau})$

+ +

This is the same formula as radioactive decay (see here for more details). The idea is that this model is inherently 'aware' of time, whereas the IF model (and your design above) do not factor in the timing of the spikes, so they act like a classical neural network activation. In any case, whether or not a neuron would fire does depend on the firing threshold, so I think that treating the threshold as a learnable parameter is justified - you just have to decide what rules to use for updating it.
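
+ +

For illustration, a minimal sketch of a LIF neuron that uses this exponential decay (parameter values and names are illustrative):

import math

class LIFNeuron:
    def __init__(self, threshold=1.0, tau=20.0):
        self.threshold = threshold
        self.tau = tau
        self.potential = 0.0
        self.t_last = 0.0

    def receive(self, t, weight):
        # Decay the membrane potential since the last input, then integrate the new input.
        self.potential *= math.exp(-(t - self.t_last) / self.tau)
        self.t_last = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True   # the neuron spikes
        return False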

+ +

Based on what you describe as your understanding of spiking neural networks, it seems that you have been reading about the Hodgkin-Huxley (HH) model (also in that paper I linked to). (Please correct me if I'm wrong.) You are correct in thinking that spikes in the brain are not infinitely narrow like a delta function but more like a very sharp sinusoidal signal, and the HH model faithfully reproduces that. However, the reason why the HH model is not actually used for simulations is that it is computationally very taxing. In practice, in most cases we do not actually care about the state of the neuron between inputs, as long as your model accurately describes the neuron state and what happens to it when an input arrives.

+ +

There are other models that approximate the HH model very closely but are much faster to simulate (like the Izhikevich model). However, the LIF model is very fast and sufficient in most cases.

+ +

Hope this helps!

+",16101,,16101,,1/24/2019 0:11,1/24/2019 0:11,,,,0,,,,CC BY-SA 4.0 +10158,1,10160,,1/24/2019 4:22,,5,986,"

Imagine that I have an artificial neural network with a single hidden layer and that I am using ReLU as my activation function. If by chance I initialize my bias and my weights in such a form that
$$X * W + B < 0$$
for every input $x$ in $X$, then the partial derivative of the loss function with respect to $W$ will always be 0!

+ +

In a setup like the above, where the derivative is 0, is it true that an NN won't learn anything?

+ +

If true (the NN won't learn anything), can I also assume that, once the gradient reaches the value 0 for a given weight, that weight won't ever be updated?

+",21688,,,,,1/28/2019 17:13,How can a neural network learn when the derivative of the activation function is 0?,,3,0,,,,CC BY-SA 4.0 +10160,2,,10158,1/24/2019 7:59,,3,,"
+

In a setup like the above where the derivat[iv]e is 0 is it true that an NN won´t learn anything?

+
+ +

There are a couple of adjustments to gradients that might apply if you do this in a standard framework:

+ +
    +
  • Momentum may cause weights to continue changing if any recent ones were non-zero. This is typically implemented as a rolling mean of recent gradients.

  • +
  • Weight decay (aka L2 weight regularisation) is often implemented as an additional gradient term and may adjust weights down even in the absence of signals from prediction errors.

  • +
+ +

If either of these extensions to basic gradient descent are active, or anything similar, then it is possible for the neural network to move out of the stationary zone that you have created after a few steps and then continue learning.
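
+ +

A tiny numerical sketch (values are illustrative) of the momentum case: the weight keeps moving for a while even after the gradient becomes exactly zero.

velocity = 0.0
weight = 0.5
learning_rate, beta = 0.1, 0.9

for gradient in [1.0, 1.0, 0.0, 0.0, 0.0]:  # gradient is zero from step 3 onward
    velocity = beta * velocity + (1 - beta) * gradient
    weight -= learning_rate * velocity
    print(round(weight, 4))  # the weight still changes on the zero-gradient steps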

+ +

Otherwise, yes, it is correct that the neural network's weights would not change at all through gradient descent, and the NN would remain unchanged for any of your input values. Your careful initialisation of biases and weights will have created a system that is unable to learn from the given data. This is a known problem with ReLU activation, and it can happen to some percentage of artificial neurons during training with normal start conditions. Other activation functions such as sigmoid have similar problems - although the gradient is never exactly zero for many of these, it can be arbitrarily low, so it is possible for parameters to get into a state where learning is so slow that the NN, whilst technically learning something on each iteration, is effectively stuck. It is not always easy to tell the difference between these unwanted states of a NN and the goal of finding a useful minimum error.

+",1847,,1847,,1/24/2019 8:21,1/24/2019 8:21,,,,3,,,,CC BY-SA 4.0 +10161,2,,10155,1/24/2019 8:09,,1,,"

Digital and Analog

+ +

The question about analog computing is important.

+ +

Digital circuitry gained popularity as a replacement for analog circuitry during the four decades between 1975 to 2015 due to three compelling qualities.

+ +
    +
  • Greater noise immunity
  • +
  • Greater drift immunity (accuracy)
  • +
  • No leakage of stored values
  • +
+ +

This quickly led to digital signaling standards, the architecture of general purpose computing, and central processing units on a chip. The latter, combined with an array of registers to perform elementary operations, is the meaning of the word microprocessor.

+ +

Quanta and Computing

+ +

Regarding quantum computing, there have been some interesting proposals to pack digital gates into much smaller volumes, but the notion that a computer can be made of transistors the size of electrons might be a bit fantastic. That's what the term quantum computing implies. That degree of miniaturization would have to defy principles of particle physics that are very strongly supported by amassed empirical evidence. Among them is Heisenberg's uncertainty principle.

+ +

All computing involves quanta, but statistically. For a transistor in a digital circuit to be statistically stable, there must be a sufficient number of Si atoms with at least 0.1% molar concentration of the atoms used to dope the Si to create a PN junction. Otherwise the transistor will not switch reliably.

+ +

The lithographic limit of most mass produced VLSI chips is around 7 nm as of this writing. Crystalline Si, nucleus to nucleus, is about .2 nm, so the miniaturization of a stable transistor is near its quantum limit already. Exceeding that limit by a considerable amount destabilizes the digital circuitry. That's a quantum physics limitation, not a lithographic limitation.

+ +

Projections, Models, and Techniques to Push Limits

+ +

Moore's law was simply an approximate model for the chip industry during the period between the invention of the integrated circuit to the physical limitation of the atomic composition of transistors, which we are now approaching.

+ +

Field effect transistors (FETs) can take the miniaturization only slightly further than the mechanics of PN junctions. 3-D circuitry has theoretical promise, but no repeatable nanotechnology mechanisms have yet been developed to form complex circuitry in the third dimension.

+ +

Returning to the Primary Question

+ +

Placing aside the magical idea that quantum computing will somehow completely revolutionize circuitry, we have a question that is both feasible and predictable.

+ +
+

Can an analog computer implement real-valued neural networks and hence do artificial network computation better?

+
+ +

If we define better in this context as cheaper and faster, while maintaining reliability and accuracy, the answer is straightforward.

+ +

It definitely takes fewer transistors to create the feed forward part of an artificial network using an analog approximation of the closed forms resulting from the calculus of artificial networks than a digital one. Both are approximations. Analog circuitry has noise, and drift and digital circuitry has rounding error. Beyond rounding, digital multiplication is much more complex in terms of circuitry than analog, and multiplication is used quite a bit in artificial network implementations.

+ +

Limitation Interplay of Gödel and Turing

+ +

The idea from the tail end of the title of the book this question referenced, ""Beyond the Turing Limit,"" is also a little fantastic. The thought experiment of Alan Turing leading to the Turing machine and the associated computability theory (including Turing completeness) was not developed to be a limit. Quite the opposite. It was an answer to Gödel's incompleteness theorems. People in Turing's time saw the work of Gödel's genius as an annoying but undeniable limit threatening the centuries-old vision of using machines to automatically expand human knowledge. To summarize this work, we can state the following with assurance.

+ +
+

The theory limiting what computers can do is not related to how the numbers are represented in electronic circuit implementations. It is a limitation of information mechanics.

+
+ +

These principles are important but have little to do with the limitation.

+ +
    +
  • Analog computing
  • +
  • Miniaturization
  • +
  • Parallel computing
  • +
  • Ways that stochastic injection can help
  • +
  • How algorithms can be defined in programming languages
  • +
+ +

The above has to do with the feasibility of a project for which some person or corporation must pay and the intellectual capacities required to completing it, not the hard limit on what is possible.

+ +

Defining what a super-Turing capability might be would be a dismissal or a dismantling of what mathematicians consider to be well constructed theory. Dismantling or shifting the contextual frame of some computability theory is plausible. Dismissing the work that has been done would be naive and counterproductive.

+ +

Real Numbers are Now Less Real Than Integers

+ +

The compelling idea contained in the question is the reference to continuity, physicality, and the real valued nature of parameters that acquire a learned state during the training of artificial networks.

+ +

To multiply a vector of digital signals by a digital parameter matrix requires many transistors and can require a significant number of clock cycles, even when dedicated hardware is used. Only a few transistors per signal path are required to perform the analog equivalent, and the throughput potential is very high.

+ +

To say that real values cannot be represented in digital circuits is inaccurate. The IEEE standards for floating point numbers, when processed in a time series, represent real valued signals well. Analog circuits suffer from noise and drift as stated above. Both analog and digital signals only appear to be comprised of real number values. Real numbers are not real except in the world of mathematical models. What we call quantities in the laboratory are essentially measurements of means of distributions. Solidifying and integrating the probabilistic nature of reality into science and technology may be the primary triumph of the twentieth century.

+ +

For instance, when dealing in milli-Amps (mA), electric current seems to be a continuous phenomenon, but when dealing with nano-Amps (nA), the quantum nature of electric current begins to appear. This is much like what happens with the miniaturization of the transistor. Real numbers can only be represented in analog circuits through the flow of discrete electrons. The key to the advantage of an analog forward feed in artificial networks is solely that the density of network cells can be considerably higher, reducing the cost of the network in its VLSI space.

+ +

In summary, real numbers received the name for their type prior to the emergence of quantum physics. The idea that quantities formerly considered real and continuous were actually statistical averages of discrete activities at a quantum level revolutionized the fields of thermodynamics and microelectronics. This is something that disturbed Einstein in his later years. In essence, mathematics using real numbers is effective in engineering because it simplifies what physicists now believe are distributions of large numbers of quantum phenomena occurring in concert.

+ +

Summarizing the Probable Future of Analog Computing

+ +

This phrase from the question is not precisely scientific, even though it points to a strong likelihood.

+ +
+

It is quite possible that quantum computers will be analogue.

+
+ +

This modified version is more consistent with scientific fact in the way it is phrased, and is also factual.

+ +
+

It is possible that computers dealing with signals at a near quantum level of miniaturization will have a higher proportion of analog circuitry.

+
+ +

This question and its answers have many links to papers and research work regarding analog computing: If digital values are mere estimates, why not return to analog for AI?.

+",4302,,4302,,1/24/2019 13:32,1/24/2019 13:32,,,,1,,,,CC BY-SA 4.0 +10163,1,,,1/24/2019 11:47,,2,1994,"

I am curious if it is possible to do so.

+

For example, if I supply

+
    +
  • $[0, 1, 2, 3, 4, 5]$, the model should return "natural number sequence",

    +
  • +
  • $[1,3,5,7,9,11]$, it should return "natural number with step of $2$",

    +
  • +
  • $[1,1,2,3,5]$, it should return "Fibonacci numbers",

    +
  • +
  • $[1,4,9,16,25]$, it should return "square natural number"

    +
  • +
+

and so on.

+",21696,,2444,,11/13/2020 23:56,11/13/2020 23:56,Can a machine learning model predict the pattern of given sequence?,,2,1,,,,CC BY-SA 4.0 +10164,1,,,1/24/2019 13:28,,2,63,"

I am trying to write an AI for a game where there is no real adversary. This means that only the AI player has choices in which move to perform; its opponent may or may not react to the move the AI player made, but when it reacts, it will always perform the one and only move that it is able to do. The goal of this AI would be to find a solution to the situation which results in the fewest monster activations.

+ +

To explain this a bit further, I will describe the game in a few words: there is a 3x3 board, on which there are some monsters. These monsters have a prewritten AI and activate based on prewritten rules, i.e., they do not have to make any decisions at all. This is done by an enrage mechanic, meaning that when a monster hits its enrage limit, it activates and performs its single move action.

+ +

The AI should control the other side of this board, the hero players. Each hero player has a different number of possible moves, each move dealing an amount of damage to a monster and increasing its enrage value, thus getting it closer to its enrage limit.

+ +

What I want to achieve is to write an AI that will perform this fight in the fewest monster activations possible.

+ +

For now, I've written a minimax algorithm for this, without the min player. I've done this, by calculating the negative effect of the monsters move, in the maximizing and only players move.

+ +

The AI works in the following way: he draws the game tree for a set amount of depth of moves, calculates the bottom move with a heuristic function, selects the highest value from the given depth, and returns the value of this function up one level, then repeat. When he reaches the top of the tree, he performs the move, with the highest quantification value.

+ +

This works, somewhat, but I have a big problem: as there is no randomness in the game, I was expecting that the greater the depth he can search forward, the better the moves he will find, but this is not always the case; sometimes a greater depth returns a worse solution than a smaller depth.

+ +

My questions are as follows:

+ +
    +
  • what could cause the above error? My quantification function? The weights that I use in the function? Or something else?
  • +
  • is minimax the correct algorithm to use for a game where there is no real adversary, or is there an algorithm that will perform better for a game like this?
  • +
+",21700,,,,,1/24/2019 14:56,What kind of decision rule algorithm is usable in this situation?,,0,0,,,,CC BY-SA 4.0 +10196,1,10204,,1/24/2019 13:56,,14,11208,"

In Open AI's actor-critic and in Open AI's REINFORCE, the rewards are being normalized like so

+
rewards = (rewards - rewards.mean()) / (rewards.std() + eps)
+
+

on every episode individually.

+

This is probably the baseline reduction, but I'm not entirely sure why they divide by the standard deviation of the rewards?

+

Assuming this is the baseline reduction, why is this done per episode?

+

What if one episode yields rewards in the (absolute, not normalized) range of $[0, 1]$, and the next episode yields rewards in the range of $[100, 200]$?

+

This method seems to ignore the absolute difference between the episodes' rewards.

+",21645,Gulzar,2444,,11/22/2020 0:35,11/22/2020 0:35,Why does is make sense to normalize rewards per episode in reinforcement learning?,,3,1,,,,CC BY-SA 4.0 +10165,2,,10158,1/24/2019 14:42,,1,,"

Learning and Zero Derivatives

+ +

Artificial networks are designed so that even when the partial derivative of a single activation function is zero they can learn. They can also be designed to continue learning when the derivative of the loss1 function is zero too. This resilience to a vanishing feedback signal amplitude, by design, determines some of how calculus results are employed in the learning algorithm, hardware acceleration circuitry, or both. By learning behavior is meant the behavior of the changes to the parameters of the network as learning occurs.

+ +

For many of the activation functions used today, the derivative of the activation function is never exactly 0, but there are such cases. These are examples of when the evaluation of the derivative of the activation function is effectively zero.

+ +
    +
  • All the time for a binary step activation function, which is why they are usually only used for the very last layer of a network to discretize the output.
  • +
  • When the input of a ReLU activation function is negative, which is the case given in the question
  • +
  • When the granularity of the IEEE representation of the number can no longer support the smallness of the absolute value of the number upon evaluation
  • +
  • When the loss is zero
  • +
+ +

Nearing Zero

+ +

This last condition can easily occur if the output of the loss function is so close to zero that the digital products of that number, during propagation, round to zero in the floating point hardware. Even if not zero, the number can be so small that learning slows to an untenable speed. The learning process either oscillates, in many cases chaotically, because of rounding phenomena, or finds a static state and remains there. Again, this does not necessarily require a zero partial in the Jacobian.

+ +

A Familiar Analogy

+ +

The cognitive equivalent that helps intuition in understanding this (although it is not an across-the-board accurate analogy) is the mental concept of doubt. The advantages of the various directions of change, or of action to produce change, are no longer clear. This is a rough analogy that some can connect to when considering what it means when the gradient is vanishing. When looking at gradient in its historical context, where gradient is the slope of a surface in a location where gravity defines which direction is down, a vanishing gradient is a place where no direction seems to be downhill.

+ +

Flat in One Dimension by Design

+ +

In the question, where an inner layer2 is a ReLU activation function, the evaluation of the partial derivative of the loss function with respect to the parameter being adjusted will always be zero if its input is negative. However, this is by design and is one of the reasons ReLU trains fast. When the signal is negative going into the layer at that particular cell, it is thrifty to ignore it. The other cells upstream are then altered through other paths around the deactivated cell with the zero partial. A neuroscientist might smile at the oversimplification, but this is like a missing synapse between two adjacent cells in the brain.

+ +
+

In a setup like the above where the derivative is 0, is it true that a [network] won't learn anything?

+
+ +

It is false. Learning will stop if all the derivatives in a layer are zero and no other device, such as curvature, momentum, regularization, and other devices controlled by hyper-parameters, is employed. Even so, zeros across the layer would only affect the adjustment of upstream layers, layers closer to the input. Downstream, convergence activity may continue in such a case.

+ +

Zero and close to zero values (as well as near overflows) are kinds of saturation conditions, and these are studied carefully in artificial network research, but a single cell with a zero partial will not stop learning and may, in specific cases, ensure its completion and the adequacy of its result.

+ +

Some Calculus

+ +

In mathematical terms, if the Jacobian has a zero in one position, the others may remain active indicators of proper adjustment magnitude and direction for the individual parameters. If the Hessian is used or various types of momentum, regularization, or other techniques are employed, zeros across the Jacobian will probably not block upstream learning, which is part of the reason why they are used.

+ +

An Analogy for Momentum

+ +

The analogy can again be employed to clarify momentum as a principle, with the caveat that it is again an oversimplification. Beliefs have momentum, so when a belief exists and all other indications of the direction for the next step are unclear, most will base their next step upon their beliefs.

+ +

This is how all organisms with a brain tend to work and why mouse traps and spider webs can catch. Without viable feedback from which learning can occur, the organism will act based on the momentum in its DNA or networks of its brain. This is usually beneficial, but can lead to loss in specific cases.

+ +

Gradients are not fool proof either. The problem of local minima can render pure gradient descent dysfunctional as well.

+ +

An Analogy for Curvature

+ +

Curvature (as when the Hessian is employed) requires a different analogy.

+ +

If a smart, blind person is on a dry, flat surface, thirsty and needing water, the gradient may be flat, but they may feel with their foot or cane for some indication of curvature. If some downward-curving feature of the surface is found, that may guide the person to water in more cases than a random step would.

+ +

As hardware acceleration improves, the Hessian, which is too computationally heavy for CPU execution in most cases, may emerge as standard practice.

+ +

Mathematically, this is simply moving from two terms of the Taylor series expansion in multivariate space to three terms. From a mechanics perspective, it is the inclusion of acceleration to velocity.

+ +
+ +

Footnotes

+ +

[1] Loss or any of these functions that drive behavior in AI systems: Error, disparity, value, benefit, optimality, and others of similar evaluative nature.

+ +

[2] Inner layers in artificial networks are often called hidden layers, but that is not technically correct. Even though the encapsulation of a neural network may hide signals in inner layers from the programmer, it is a low level software design feature to do that, and a bad one. One can usually and definitely should be able to monitor those signals and produce statistics on them. They are not hidden in the mathematics, as in some kind of mathematical mystery, and the only difference between them and the output layer is that the output layer is intended to drive or influence something outside the network.

+",4302,,4302,,1/28/2019 17:13,1/28/2019 17:13,,,,0,,,,CC BY-SA 4.0 +10170,2,,10163,1/24/2019 21:07,,1,,"

Those all fit into a single quadratic, auto-correlated model.

+ +

$$ x_0 = a \\ x_i = b i^2 + c x_{i-1} + d i + e $$

+ +

The sequences can be curve fitted producing a set of $n$ perfect fits of the form $(a, b, c, d, e)$ given the above model. A rules engine given the correct parameterized rules can produce the most desirable verbal description from among the $n$ fits in the set. The rules can also be prioritized by a simple feed forward network trained to simulate the most natural selection of string descriptions from any set of fits where $n > 1$.
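
+ +

A minimal sketch of such a fit with ordinary least squares (only a sanity check, not a full description-selection system):

import numpy as np

def fit(sequence):
    # Fit x_i = b*i^2 + c*x_{i-1} + d*i + e for i >= 1; a is simply x_0.
    x = np.asarray(sequence, dtype=float)
    i = np.arange(1, len(x), dtype=float)
    A = np.column_stack([i ** 2, x[:-1], i, np.ones_like(i)])
    (b, c, d, e), *_ = np.linalg.lstsq(A, x[1:], rcond=None)
    return x[0], b, c, d, e

print(fit([1, 4, 9, 16, 25]))  # one exact fit for the squares sequence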

+ +

This will work well for the examples in the question and many more, however, if the sequence $\{1, 4, 1, 5, 9\}$ is fed into the system, it will produce some weird description based on the quadratic, auto-correlated model it was given rather than, ""digits of $\pi$ to the right of the decimal place.""

+ +

The only way to produce the most common response a university freshman math student would produce would be to extend the boundaries of AI engineering first. For example, once an AI system is developed that can handle natural language and cognition like a child, several of them can be separately trained in a simulation of primary and secondary school mathematics. The median response for each sequence given to the class of AI students (class made up of artificial students studying math, not class of humans studying AI) will then be a reasonable prediction of what human university freshmen would produce as a median response.

+",4302,,,,,1/24/2019 21:07,,,,0,,,,CC BY-SA 4.0 +10171,2,,10163,1/25/2019 0:18,,-1,,"

This can be framed as a classification problem where a model is supervised on a dataset containing finite-length number sequences $x^{(i)}_1, \cdots, x^{(i)}_n$ and the sequence name $y_i$. For example, the dataset could look like this:

+ +
    +
  • ([0,1,2,3,4], 0)
  • +
  • ([1,3,5,7,9], 1)
  • +
  • ([1,1,2,3,5], 2)
  • +
  • ([1,4,9,16,25], 3)
  • +
+ +

where the numbers on the right are integer representations of the sequence name. Given intuitive sequences, plentiful training data, and training examples of reasonable length, this problem would not be that difficult to solve. Sequence models from deep learning, such as recurrent networks (LSTM, GRU) or temporal convolutional networks, are well-suited for tasks such as this one.
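
+ +

For example, a minimal PyTorch sketch of a recurrent classifier for such fixed-length sequences (4 classes as in the toy dataset above; sizes are illustrative):

import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, hidden_size=32, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):              # x: (batch, sequence_length, 1)
        _, h_n = self.rnn(x)
        return self.head(h_n[-1])      # class logits

model = SequenceClassifier()
x = torch.tensor([[0.0, 1.0, 2.0, 3.0, 4.0]]).unsqueeze(-1)  # shape (1, 5, 1)
logits = model(x)                                            # shape (1, 4)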

+ +

Of course, this is only possible within certain constraints. The models are only good at what they are trained to do, so it would be impossible to use them to infer whether a sequence skips by 2 or 3 without having that information explicitly present in the training data. It would be interesting to see whether unsupervised models could detect this sort of information, although I don't think this work has been done in the present.

+",19403,,,,,1/25/2019 0:18,,,,0,,,,CC BY-SA 4.0 +10172,1,,,1/25/2019 0:51,,3,55,"

Typical Feed Forward Neural Networks require a fixed sized input and output. So when you have variable sized input, it seems to be common practice to pad the input with zero vectors.

+ +

Why does it not seem to be common practice to have a ""is_padding"" attribute? That way the network can easily distinguish between padding and actual data? Especially considering input is commonly centered around 0 by subtracting the mean and using unit variance.

+",20338,,,,,2/1/2019 23:46,"Using a ""is_padding"" attribute in your padding instead of simply zero vectors",,0,1,,,,CC BY-SA 4.0 +10173,5,,,1/25/2019 0:53,,0,,"

Experimental tag, under the ""social"" umbrella. Related to the ""mythology-of-ai"",

+",1671,,1671,,1/25/2019 1:01,1/25/2019 1:01,,,,0,,,,CC BY-SA 4.0 +10174,4,,,1/25/2019 0:53,,0,,"Use this tag for discussions of press coverage of Artificial Intelligence. This primarily oriented toward how AI is covered in the media, but may include include fact checking publications (articles, blogs and popular non-fiction.) Intended to be used in conjunction with the social impacts and industry of AI. ",1671,,1671,,1/25/2019 1:01,1/25/2019 1:01,,,,0,,,,CC BY-SA 4.0 +10175,1,,,1/25/2019 3:53,,2,502,"

I do understand that there are plenty of mobile apps available for body measurement (e.g. MTailor) or creating 3D model (3dlook).

+ +

What I would like to find out is how we can use deep learning to achieve an accurate body measurement/3D model with just a smartphone camera.

+ +

For example, MTailor can predict one's body measurements quite accurately given the camera angle, the camera's distance from the human, and the human's height. Can we do the same using deep learning with some labeled images to achieve the same accurate body measurement prediction?

+ +

Thanks

+ +

Regards, +Han

+",21710,,,,,1/25/2019 3:53,"Deep Learning on how to find out the body measurement (e.g. shoulder length, waist, hips, legs length etc) from mobile camera captured images?",,0,3,,,,CC BY-SA 4.0 +10177,1,10221,,1/25/2019 6:39,,6,2079,"

 I'm quite new to the field of computer vision and was wondering what the purpose of having bounding boxes in object detection is. 

+

 Obviously, it shows where the detected object is, and using a classifier alone one can only classify one object per image, but my questions are the following: 

+
    +
   1. If I don't need to know 'where' an object is (or objects are) and am just interested in their existence and how many there are, is it possible to just get rid of the bounding boxes? 

    +
  2. +
   3. If not, how do bounding boxes help detect objects? From what I have figured out, a network (if using neural network architectures) predicts the coordinates of the bounding boxes if there is something in the feature map. Doesn't this mean that the detector already knows where the object is (at least roughly)? So, continuing from question 1, if I'm not interested in the exact location, would training for bounding boxes be irrelevant? 

    +
  4. +
  5. Finally, in architectures like YOLO, it seems that they predict the probability of each class on each grid (e.g. 7 x 7 for YOLO v1). What would be the purpose of bounding boxes in this architecture other than that it shows exactly where the object is? Obviously, the class has already been predicted, so I'm guessing that it doesn't help classify better.

    +
  6. +
+",21712,,2444,,1/28/2021 23:52,1/28/2021 23:52,What's the role of bounding boxes in object detection?,,2,0,,,,CC BY-SA 4.0 +10180,1,,,1/25/2019 10:34,,8,5890,"

I'm reading the book Artificial Intelligence: A Modern Approach (by Stuart Russell and Peter Norvig).

+

 However, I don't understand the difference between search and planning. I was even more confused when I saw that some search problems can be solved with a planning approach. My professor explained to me, in a confusing way, that the real difference is that search uses a heuristic function, but my book says that planning uses a heuristic too (in chapter 10.2.3). 

+

 I read this Stack Overflow post, which says more or less what I'm saying. 

+

So, what is the difference between search and planning?

+",21719,,2444,,12/28/2021 12:57,12/28/2021 12:57,What is the difference between search and planning?,,2,0,,,,CC BY-SA 4.0 +10184,1,10187,,1/25/2019 11:51,,2,65,"

 Suppose that I can define the state of a board in a board game with 234 neurons. In theory, could I train a neural network with 468 inputs (two game boards) and 1 output to tell me which board state is 'better'? The output should give me ~-1 if the second board is better than the first, ~0 if they are equal, and ~1 if the first board is better than the second. 

+ +

 If yes, what could be the ideal number of neurons in the hidden layers? What could be the ideal number of hidden layers? 

+",21700,,,,,1/25/2019 13:09,Could a neural network be capable to diferentiate between two boards of a game?,,1,1,,,,CC BY-SA 4.0 +10185,2,,10158,1/25/2019 12:31,,1,,"

People often place a batchnorm layer before ReLU. That effectively prevents the problem you have described.
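
      For instance, a hidden block ordered as linear, then batch norm, then ReLU might look like this (a minimal PyTorch sketch; the layer widths are arbitrary):

      
      import torch.nn as nn
      
      block = nn.Sequential(
          nn.Linear(128, 64),
          nn.BatchNorm1d(64),   # normalizes the pre-activations before the ReLU
          nn.ReLU(),
      )
      
      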

+",21726,,,,,1/25/2019 12:31,,,,0,,,,CC BY-SA 4.0 +10186,2,,10177,1/25/2019 12:52,,-1,,"

 In principle, you could train the model to output a sigmoid map of coarse object positions (0 -> no object, 1 -> an object center is located here). The map could be subjected to non-maximum suppression, and such a model could be trained end-to-end. That would be possible, if that's what you are asking. 
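
      A rough sketch of how objects could then be counted from such a map, without ever producing boxes (assuming NumPy and SciPy; the threshold and window size are arbitrary):

      
      import numpy as np
      from scipy.ndimage import maximum_filter
      
      def count_objects(heatmap, threshold=0.5):
          # keep only cells that are local maxima in a 3x3 window and above the threshold
          peaks = (heatmap == maximum_filter(heatmap, size=3)) & (heatmap > threshold)
          return int(peaks.sum())
      
      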

+",21726,,,,,1/25/2019 12:52,,,,0,,,,CC BY-SA 4.0 +10187,2,,10184,1/25/2019 13:09,,1,,"

For optimal performance, the network complexity should fit the complexity of the game. Since we do not know the latter, your question is not answerable.

+",21726,,,,,1/25/2019 13:09,,,,3,,,,CC BY-SA 4.0 +10189,1,,,1/25/2019 14:03,,1,151,"

 Currently, we can build Artificial Intelligence (AI) approaches that explain their actions with the use of goal trees 1. By moving up and down the tree, the system keeps track of the last and next moves, thereby giving the machine the ability to ""explain"" its actions. 

+ +

 Explainability at a human level requires some cognitive effort, such as self-awareness, memory retrieval, a theory of mind, and so on 2. Humans are adept at selecting several causes from an infinite number of causes to be the explanation. However, this selection is influenced by certain cognitive biases. The idea of explanation selection is not new in eXplainable Artificial Intelligence (XAI) [3, 4]. But, as far as we are aware, there are currently no studies that look at the cognitive biases of humans as a way to select explanations from a set of causes. 

+ +

 Despite a clear definition and description of the XAI field, several questions remain open. They can be summarized in a single sentence, as follows. 

+ +

That said, our question is:

+ +
+

How can we create and build XAI?

+
+ +

References

+ +

1 Hadoux, Emmanuel, and Anthony Hunter. Strategic Sequences of Arguments for Persuasion Using Decision Trees. AAAI. 2017.

+ +

2 Miller, T., 2018. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence.

+ +

3 Gunning, D., 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA), nd Web.

+ +

4 Samek, W., Wiegand, T. and Müller, K.R., 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296.

+",21729,,2444,,9/14/2019 12:56,9/14/2019 12:56,How can we create eXplainable Artificial Intelligence?,,0,5,,12/27/2020 14:51,,CC BY-SA 4.0 +10191,1,,,1/25/2019 14:37,,2,68,"

I am a member of a robotics team that is measuring the amount of reflected IR light to determine the lightness/darkness of a given material. We eventually hope to be able to use this to follow a line using a pre-set algorithm, but the first step is determining whether the material is one of the binary options: light or dark.

+ +

Given a large population of values between 0 and 1023, probably in two distinct groupings, how can I best go about classifying a given point as light or dark?

+",20818,,2444,,2/25/2019 15:46,3/27/2019 16:20,How do I classify measurements into only two classes?,,1,0,,,,CC BY-SA 4.0 +10192,2,,9990,1/25/2019 14:40,,1,,"

The question is about a mismatch between the loss function in two papers on GANs. The first paper is Generative Adversarial Nets Ian J. Goodfellow et. al., 2014, and the excerpt image in the question is this.

+
+

The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons. To learn the generator’s distribution $p_g$ over data $x$, we define a prior on input noise variables $p_z (z)$, then represent a mapping to data space as $G (z; \theta_g)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta_g$. We also define a second multilayer perceptron $D (x; \theta_d)$ that outputs a single scalar. $D (x)$ represents the probability that $x$ came from the data rather than pg. We train $D$ to maximize the probability of assigning the correct label to both training examples and samples from $G$. We simultaneously train $G$ to minimize $\log (1 − D(G(z)))$:

+

In other words, $D$ and $G$ play the following two-player minimax game with value function $V (G, D)$:

+

 $$ \min_G \, \max_D V (D, G) = \mathbb{E}_{x∼p_{data}(x)} \, [\log \, D(x)] \\ \quad\quad\quad\quad\quad\quad\quad + \, \mathbb{E}_{z∼p_z(z)} \, [\log \, (1 − D(G(z)))] \, \text{.} \quad \text{(1)} $$ 

+
+

The second paper is Image-to-Image Translation with Conditional Adversarial Networks, Phillip Isola Jun-Yan Zhu Tinghui Zhou Alexei A. Efros, 2018, and the excerpt image in the question is this.

+
+

The objective of a conditional GAN can be expressed as

+

 $$ \mathcal{L}_{cGAN} (G, D) = \mathbb{E}_{x, y} \, [\log D(x, y)] \\ \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}_{x, z} \, [\log \, (1 − D(x, G(x, z)))], \quad \text{(1)} $$ 

+

where $G$ tries to minimize this objective against an adversarial $D$ that tries to maximize it, i.e.

+

$$ G^{∗} = \arg \, \min_G \, \max_D \mathcal{L}_{cGAN} (G, D) \, \text{.} $$

+

To test the importance of conditioning the discriminator, we also compare to an unconditional variant in which the discriminator does not observe $x$:

+

 $$ \mathcal{L}_{GAN} (G, D) = \mathbb{E}_y \, [\log \, D(y)] \\ \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}_{x, z} \, [\log \, (1 − D(G(x, z)))] \, \text{.} \quad \text{(2)} $$ 

+
+

In the above $G$ refers to the generative network, $D$ refers to the discriminative network, and $G^{*}$ refers to the minimum with respect to $G$ of the maximum with respect to $D$. As the question author tentatively put forward, $\mathbb{E}$ is the expectation with respect to its subscripts.

+

The question of discrepancy is that the right hand sides do not match between the first paper's equation (1) and the second paper's equation (2) which is absent of the condition involving $y$.

+

First paper:

+

 $$ \mathbb{E}_{x∼p_{data}(x)} \, [\log \, D(x)] \\ \quad\quad\quad\quad\quad\quad\quad + \, \mathbb{E}_{z∼p_z(z)} \, [\log \, (1 − D(G(z)))] \, \text{.} \quad \text{(1)} $$ 

+

Second paper:

+

 $$ \mathbb{E}_y \, [\log \, D(y)] \\ \quad\quad\quad\quad\quad\quad\quad + \mathbb{E}_{x, z} \, [\log \, (1 − D(G(x, z)))] \, \text{.} \quad \text{(2)} $$ 

+

The second later paper further states this.

+
+

GANs are generative models that learn a mapping from random noise vector $z$ to output image $y, G : z \rightarrow y$. In contrast, conditional GANs learn a mapping from observed image $x$ and random noise vector $z$, to $y, G : {x, z} \rightarrow y$.

+
+

Notice that there is no $y$ in the first paper and the removal of the condition in the second paper corresponds to the removal of $x$ as the first parameter of $D$. This is one of the causes of confusion when comparing the right hand sides. The others are use of variables and degree of explicitness in notation.

+

 The tilde $\sim$ means drawn according to. The right hand side in the first paper indicates that the expectation involving $x$ is based on a drawing according to the probability distribution of the data with respect to $x$, and the expectation involving $z$ is based on a drawing according to the probability distribution of $z$ with respect to $z$. 

+

 The removal of the observation of $x$ from the second right-hand term of the second paper's equation (2), which is the first parameter of $G$, the replacement of that equation's $y$ variable with the now freed up $x$ variable, and the acceptance of the abbreviation of the tilde notation used in the first paper then bring both papers into exact agreement. 
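
      As a side note, the value function in the first paper's equation (1) maps directly onto code. A minimal sketch (assuming PyTorch, with D outputting probabilities, and with G, D, x, and z defined elsewhere):

      
      import torch
      
      # Monte-Carlo estimate of  E_x[log D(x)] + E_z[log(1 - D(G(z)))]
      def value_estimate(D, G, x, z):
          eps = 1e-8                                       # numerical safety for the logs
          real_term = torch.log(D(x) + eps).mean()         # expectation over data samples x
          fake_term = torch.log(1 - D(G(z)) + eps).mean()  # expectation over noise samples z
          return real_term + fake_term                     # D maximizes this, G minimizes it
      
      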

+",4302,,-1,,6/17/2020 9:57,1/25/2019 14:40,,,,1,,,,CC BY-SA 4.0 +10197,2,,9975,1/25/2019 16:10,,3,,"

Consider a continuum of complexity in models.

+ +
    +
  • Trivial: $y = x + a$

  • +
  • Simple: $y = x \, \log \, (a x + b) + c$

  • +
  • Moderately complex: A wind turbine under constant wind velocity

  • +
  • Very complex: Ray tracing of lit 3-D motion scenes to pixels

  • +
  • Astronomically complex: The weather

  • +
+ +

Now consider a continuum regarding the generality or specificity of models.

+ +
    +
  • Very specific: The robot for the Mars mission has an exact mechanical topology, materials call-out, and set of mechanical coordinates contained in the CAD files used to machine the robot's parts.

  • +
  • Somewhat specific: The formulas guiding the design of an internal combustion engine, which are well known.

  • +
  • Somewhat general: The phenomenon is deterministic and the variables and their domains are known.

  • +
  • Very general: There's probably some model because it works in nature but we know little more.

  • +
+ +

There are twenty permutations at the above level of granularity. Every one has purpose in mathematical analysis, applied research, engineering, and monetization.

+ +

Here are some general correlations between input, output, and layer counts.

+ +
    +
  • Higher complexity often corresponds to larger layer count.

  • +
  • Higher i/o dimensionality corresponds to higher width to the corresponding i/o layers.

  • +
  • Mapping generality to or from specificity generally requires complexity.

  • +
+ +

 Now, to make this answer even less appealing to those who want a formula answer they can memorize, ... 

+ +
    +
  • Each artificial network is a model of an arbitrary function before training and a model of a specific function afterward.

  • +
  • Loss functions are models of disparity.

  • +
  • An algorithm is a model of a process created by spreading a recursive definition out in time to map into a model of centralized computation called a CPU.

  • +
  • The recursive definition is a model too.

  • +
+ +

 There is almost nothing in science that is not a model, except ideas or data that are not yet modeled. 

+",4302,,4302,,1/25/2019 18:44,1/25/2019 18:44,,,,0,,,,CC BY-SA 4.0 +10198,1,,,1/25/2019 16:39,,0,347,"

 Following PyTorch's actor critic, I understand that the critic is a function mapping from the state space to the reward space, meaning the critic approximates the state-value function. 

+ +

 However, according to this paper (you don't need to read it, just a glance at the nice picture on page 2 is enough), the critic is a function mapping from the action space to the reward, meaning it approximates the action-value function. 

+ +

I am confused.

+ +

When people say ""actor critic"" - what do they mean by ""critic""?

+ +

Is the term ""Critic"" ambiguous in RL?

+",21645,,21645,,2/7/2019 18:54,2/7/2019 18:54,What does the Critic network evaluate in Actor Critic?,,1,0,,,,CC BY-SA 4.0 +10199,2,,10198,1/25/2019 17:14,,2,,"

 The same argument for a state-value function in Reinforcement Learning: An Introduction, Section 13.5, can be applied to state-action values. The main takeaway is that a critic's state-value (or action-value) function is used for bootstrapping. 

+ +
+

Although the REINFORCE-with-baseline method learns both a policy and a state-value function, we do not consider it to be an actor–critic method because its state-value function is used only as a baseline, not as a critic. That is, it is not used for bootstrapping (updating the value estimate for a state from the estimated values of subsequent states), but only as a baseline for the state whose estimate is being updated. This is a useful distinction, for only through bootstrapping do we introduce bias and an asymptotic dependence on the quality of the function approximation. As we have seen, the bias introduced through bootstrapping and reliance on the state representation is often beneficial because it reduces variance and accelerates learning.

+
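
      In code, the bootstrapping that the quote refers to is just the use of the critic's own estimate of the next state when forming the learning target. A minimal sketch for a state-value critic (assuming PyTorch; V, gamma, reward, state, next_state, and done are placeholders defined elsewhere):

      
      import torch
      
      # TD(0) target: the critic evaluates next_state to build its own learning target
      with torch.no_grad():
          td_target = reward + gamma * V(next_state) * (1 - done)
      advantage = td_target - V(state)          # also the critic's feedback signal to the actor
      critic_loss = advantage.pow(2).mean()
      
      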
+",4398,,,,,1/25/2019 17:14,,,,1,,,,CC BY-SA 4.0 +10200,2,,10196,1/25/2019 17:33,,4,,"

 We subtract the mean from the values and divide by the standard deviation to get data with a mean of zero and a variance of one. The range of values per episode does not matter; the result will always have zero mean and unit variance. If the range is bigger ([100, 200]), then the standard deviation will be bigger as well than for a smaller range ([0, 1]), so we will end up dividing by a bigger number. 
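
      In code, this standardization is simply the following (a minimal NumPy sketch; returns is a placeholder for the per-episode list of discounted returns):

      
      import numpy as np
      
      returns = np.asarray(returns, dtype=np.float64)
      returns = (returns - returns.mean()) / (returns.std() + 1e-8)   # zero mean, unit variance regardless of the original range
      
      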

+",20339,,,,,1/25/2019 17:33,,,,5,,,,CC BY-SA 4.0 +10201,1,10887,,1/25/2019 17:45,,3,2027,"

If one examines the SSD: Single Shot MultiBox Detector code from this GitHub repository, it can be seen that, for a testing phase (evaluating network on test data set), there is a parameter test batch size. It is not mentioned in the paper.

+

 I am not familiar with using batches during network evaluation. Can someone explain what the reason behind using it is, and what the advantages and disadvantages are? 

+",21737,,2444,,1/2/2022 9:19,1/2/2022 9:19,What is the reason behind using a test batch size?,,1,1,,,,CC BY-SA 4.0 +10202,2,,9975,1/25/2019 17:50,,1,,"

In what context are you asking this? It is totally different if you want to perform object detection, regression or, for example, reinforcement learning.

+ +

 For the first case, I would say that the main point in choosing a simple vs. a complex model is the size of the training data. If you have 1000 training samples, you can't expect a large network to perform better than a simple one. 

+",21737,,,,,1/25/2019 17:50,,,,0,,,,CC BY-SA 4.0 +10203,1,,,1/25/2019 19:40,,4,2404,"

I am in the process of implementing the DQN model from scratch in PyTorch with the target environment of Atari Pong. After a while of tweaking hyper-parameters, I cannot seem to get the model to achieve the performance that is reported in most publications (~ +21 reward; meaning that the agent wins almost every volley).

+ +

My most recent results are shown in the following figure. Note that the x axis is episodes (full games to 21), but the total training iterations is ~6.7 million.

+ +

+ +

The specifics of my setup are as follows:

+ +

Model

+ +
 import torch.nn as nn import torch.nn.functional as F class DQN(nn.Module): 
+    def __init__(self, in_channels, outputs):
+        super(DQN, self).__init__()
+        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=32, kernel_size=8, stride=4)
+        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2)
+        self.conv3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1)
+        self.fc1 = nn.Linear(in_features=64*7*7 , out_features=512)
+        self.fc2 = nn.Linear(in_features=512, out_features=outputs)
+
+    def forward(self, x):
+        x = F.relu(self.conv1(x))
+        x = F.relu(self.conv2(x))
+        x = F.relu(self.conv3(x))
+        x = x.view(-1, 64 * 7 * 7)
+        x = F.relu(self.fc1(x))
+        x = self.fc2(x)
+        return x    # return Q values of each action
+
+ +

Hyperparameters

+ +
    +
  • batch size: 32
  • +
  • replay memory size: 100000
  • +
  • initial epsilon: 1.0
  • +
  • epsilon anneals linearly to 0.02 over 100000 steps
  • +
  • random warmstart episodes: ~50000
  • +
  • update target model every: 1000 steps
  • +
  • optimizer = optim.RMSprop(policy_net.parameters(), lr=0.0025, alpha=0.9, eps=1e-02, momentum=0.0)
  • +
+ +

Additional info

+ +
    +
  • OpenAI gym Pong-v0 environment
  • +
  • Feeding model stacks of 4 last observed frames, scaled and cropped to 84x84 such that only the ""playing area"" is visible.
  • +
  • Treat losing a volley (end-of-life) as a terminal state in the replay buffer.
  • +
  • Using smooth_l1_loss, which acts as Huber loss
  • +
  • Clipping gradients between -1 and 1 before optimizing
  • +
  • I offset the beginning of each episode with 4-30 no-op steps as the papers suggest
  • +
+ +

Has anyone had a similar experience of getting stuck around 6 - 9 average reward per episode like this?

+ +

Any suggestions for changes to hyperparameters or algorithmic nuances would be greatly appreciated!

+",19789,,2444,,4/4/2022 12:50,4/4/2022 12:50,DQN stuck at suboptimal policy in Atari Pong task,,1,7,,,,CC BY-SA 4.0 +10204,2,,10196,1/25/2019 19:40,,8,,"

The ""trick"" of subtracting a (state-dependent) baseline from the $Q(s, a)$ term in policy gradients to reduce variants (which is what is described in your ""baseline reduction"" link) is a different trick from the modifications to the rewards that you are asking about. The baseline subtraction trick for variance reduction does not appear to be present in the code you linked to.

+ +

The thing that your question is about appears to be standardization of rewards, as described in Brale_'s answer, to put all the observed rewards in a similar range of values. Such a standardization procedure inherently requires division by the standard deviation, so... that answers that part of your question.

+ +

As for why they are doing this on a per-episode-basis... I think you're right, in the general case this seems like a bad idea. If there are rare events with extremely high rewards that only occur in some episodes, and the majority of episodes only experience common events with lower-scale rewards... yes, this trick will likely mess up training.

+ +

In the specific case of the CartPole environment (which is the only environment used in these two examples), this is not a concern. In this implementation of the CartPole environment, the agent simply receives a reward with a value of exactly $1$ for every single time step in which it manages to ""survive"". The rewards list in the example code is in my opinion poorly-named, because it actually contains discounted returns for the different time-steps, which look like: $G_T = \sum_{t=0}^{T} \gamma^t R_t$, where all the individual $R_t$ values are equal to $1$ in this particular environment.
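
      For reference, the computation that fills that (poorly-named) list is essentially the following (a minimal sketch; rewards holds the per-step rewards of one episode, all equal to 1 in CartPole, and gamma is the discount factor):

      
      # discounted return seen from each time step, built backwards through the episode
      returns, G = [], 0.0
      for r in reversed(rewards):
          G = r + gamma * G
          returns.insert(0, G)
      
      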

+ +

These kinds of values tend to be in a fairly consistent range (especially if the policy used to generate them also only moves slowly), so the standardization that they do may be relatively safe, and may improve learning stability and/or speed (by making sure there are always roughly as many actions for which the probability gets increased as there are actions for which the probability gets decreased, and possibly by making hyperparameters easier to tune).

+ +

It does not seem to me like this trick would generalize well to many other environments, and personally I think it shouldn't be included in such a tutorial / example.

+ +
+ +

Note: I'm quite sure that a per-episode subtraction of the mean returns would be a valid, albeit possibly unusual, baseline for variance reduction. It's the subsequent division by standard deviation that seems particularly problematic to me in terms of generalizing to many different environments.

+",1641,,1641,,1/26/2019 12:55,1/26/2019 12:55,,,,9,,,,CC BY-SA 4.0 +10206,2,,9905,1/25/2019 22:17,,2,,"

Learning is possible without random thoughts and actions. Knowledge can be encapsulated in predetermined forms and passed through predetermined knowledge transfer mechanisms. Much of civilization is based on these predeterminations. Without them, humanity would be thrown back possibly 120,000 years.

+ +

 However, initial discovery requires trials and review of their outcome. Purely deterministic identification of trials is necessarily systematic, and the system used may interplay with the phenomena under study in such a way as to miss important cases. Furthermore, when the complexity of the phenomenon is high, the number of trials is often too numerous to check entirely. In this second scenario, random selection of trials is wise for a similar reason illustrated by this simple example. 

+ +

 The phenomenon has one behavior for even numbers and another for odd. The system for determining trials is to check every multiple of 100 to cover the range from 1 to 10,000 in 100 trials. The odd behavior will be inadvertently overlooked. 

+ +
+

Intelligence begins once the thoughts/actions are logical rather than purely random based.

+
+ +

For the above reasons, intelligence begins with logic only in determining the domain of trials but, when one of the above two cases apply, is often quickly followed by lack of logic in the selection from that domain. Once models have formed as a result of these initial discovery activities, logical inference is useful again, to combine them in various ways. Through this process, engineering, business, resource planning, and other intelligence related disciplines have improved the living conditions for the species, albeit inconsistently.

+ +

So there are two limitations of logic.

+ +
    +
  • Piercingly deep searches require some temporary dismissal of logic.
  • +
  • Logic applied by large populations produces inconsistently logical results.
  • +
+ +

 It is naive to assume that there were logical faults that produced the inconsistency in this second limitation. There is no logical proof that logic necessarily improves conditions logically across the population that uses it. There may be qualities of the goal sets of many individuals that thwart the vision of logic producing peace and prosperity for all. That belief is not new, and no person or political group has been able to make it work. 

+ +
+

Intelligence needs intelligence to coexist and a sharing communication network for validation/rejection. ... I believe that we must keep the human intelligence in a parental role for long enough time until at least the AI had fully assimilated our values.

+
+ +

That assumes that those values are best. Some might want to agree on those values before letting the AI assimilate them, which, given the insanity evident in human history, could lead to a war. If we give the AI the below four objectives, we could probably live without the remainder of human values.

+ +
    +
  • Survival of the biosphere
  • +
  • Human freedom of expression
  • +
  • Evenly distributed, high prosperity across the human population
  • +
  • Encouragement of the social rule, ""Be with others how you would want them to be with you,"" between humans, between AI robots, and between AI robots and humans.
  • +
+ +

This statement dismisses the human record.

+ +
+

The actual danger is to leave the artificial intelligence parenting another AI and loose control of it. ... the purpose should always be to help humans achieve mastery of the environment while ensuring our collective preservation. ... AI should not be left unsupervised as we would not give guns to kids, do we? ... To resume it all AI needs an environment and supervision where to learn and grow. The environment can vary but the supervision must stay in place.

+
+ +

It is impossible to supervise all adults and it will later become impossible to supervise all AI in data centers and robots. There are already more processes running in computers than there are people in the world. This is why risk management must be applied proactively, during the research and development of the AI. Some of the questions on this site address that very challenge. If it is not done proactively, it will become untenable and control will already have been passed for a random future regarding our species.

+ +
+

Only develop artificial intelligence that is limited by our own beliefs and values rather than searching for something greater than us.

+
+ +

There is no universal set of beliefs among humans. There will inescapably be a wide range in AI developments with a wide range of intended purposes. The key is to encourage researchers to think beyond their obsession with getting programs to do amusing and impressive things. AI research must continue to keep one foot in technology but another foot firmly placed in the practical elements of ethics, social science, environmental science, economics, and risk management.

+",4302,,,,,1/25/2019 22:17,,,,0,,,,CC BY-SA 4.0 +10208,1,,,1/26/2019 3:14,,2,85,"

 I am trying to make a personal ML project where my objective is, using a photo of an invoice (for instance, a Walmart invoice), to classify it as being a Walmart invoice and extract the total amount spent. I would then save this information in a relational database and infer some statistics about my spending. The goal would be to classify invoices not only from Walmart but from the most frequent shops where I spend money, and then extract the total amount spent. I already do this process manually: I insert my spendings in a relational database. I have a bunch of photos from different invoices that I have recorded over the past year for this purpose (training a model). 

+ +

What algorithms would you guys recommend? From my point of view, I think that I need some natural language processing to extract the total amount spent and maybe a convolutional neural network to classify the invoice as being from a specific store?

+ +

Thanks!

+",21688,,,,,1/27/2019 2:20,Approach to classify a photo and extract text from it,,2,0,,,,CC BY-SA 4.0 +10209,2,,9975,1/26/2019 8:06,,1,,"

 In a nutshell, if you already have a number of models, you usually should be able to distinguish (intuitively, if you will) between simpler and more complex ones, e.g. based on the number of inputs and number of layers, as you have already indicated in the question. Then, if a simpler model and a more complex model perform the same task, and the complex model does not perform significantly better than the simpler one, you should use the simpler model. It's your role to decide what difference in performance would be significant, usually based on your use case. It's Occam's razor in practice (https://en.m.wikipedia.org/wiki/Occam%27s_razor). You might learn more practical aspects as part of this free course https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about 

+",6053,,,,,1/26/2019 8:06,,,,0,,,,CC BY-SA 4.0 +10210,1,,,1/26/2019 12:28,,4,340,"

 I am using OpenAI's code to do an RL task on an environment that I built myself. 

+ +

I tried some network architectures, and they all converge, faster or slower on CartPole.

+ +

On my environment, the reward seems not to converge, and keeps flickering forever.

+ +

I suspect the neural network is too small, but I want to confirm my belief before going the route of researching the architecture.

+ +

How can I confirm that the architecture is the problem and not anything else in a neural network reinforcement learning task?

+",21645,,,,,3/3/2019 6:24,How to identify too small network in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 +10212,2,,10191,1/26/2019 14:04,,3,,"

The way to categorize measurements into two separate populations is through what ML people currently term unsupervised learning. Such a process is part of the AI tool chest. Statistics is part, but not all, of the mathematics involved in the theory that leads to algorithms that learn without labeling the array of points light or dark in advance.

+ +

 Reflected IR light can be specular or mirror in nature, and significant levels of IR emission will also occur without raising the surface temperature of materials under most common circumstances. There is also refraction and transmission to be considered in the robotics space. 

+ +

Materials will not exhibit lightness. The material composition, coating, and surface texture will exhibit various reflectivities, emission dependent on the absolute temperature raised to the fourth power1, and transmission of light entering the material from various directions, possibly diffused internally.

+ +
+

We eventually hope to be able to use this to follow a line using a pre-set algorithm.

+
+ +

That is one of the primary objectives of computer vision and has been since the first digital signal processing in the mid twentieth century.

+ +
+

The first step is determining whether the material is one of the binary options, light or dark.

+
+ +

 That may be an experimental first step, but the loss of information from reducing a number that discretely represents a cube in horizontal-vertical-frame space, each of which has $\tau$ channels of $\beta$ bits, down to a single light/dark bit is not a good computer vision strategy. The total number of bits of IR information is given by 

+ +

$$ \tau \beta \; \text{,} $$

+ +

so the data loss will be, in percent,

+ +

$$ \dfrac {100 \, (\tau \beta - 1)} {\tau \beta} \; \text{.} $$

+ +

Object recognition and thus collision avoidance will be frustrated by that much data loss, even if only one IR spectral range is contained in the cube ($\tau = 1$).

+ +
+

Given a large population of values between 0 and 1023, probably in two distinct groupings, how can I best go about classifying a given point as light or dark?

+
+ +

It appears that $\tau$ does equal one and $\beta = \log_{2} (1023 - 0 + 1) = 10$, thus the information in bits per cube (pixel in a frame) is 10. If the project, for some reason, requires that 9 bits be discarded in a way to conserve as much of the original 10 bits as possible (perhaps to maintain a particular throughput through a transmission or processing bottleneck), then the way to do it is through a one bit feature extraction, meaning that the output of the artificial network should be one bit. These are a few unsupervised learning options to investigate.

+ +
    +
  • RBMs (restricted Boltzmann machine)

  • +
  • K-means clustering

  • +
  • Gaussian mixture models

  • +
+ +
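
      As one concrete illustration, the simplest of the three options above can be sketched in a few lines (assuming scikit-learn; readings is a placeholder array of raw 0-1023 sensor values):

      
      import numpy as np
      from sklearn.cluster import KMeans
      
      X = np.asarray(readings, dtype=float).reshape(-1, 1)   # one feature: the reflected-IR reading
      km = KMeans(n_clusters=2, n_init=10).fit(X)            # learn the two groupings without labels
      labels = km.labels_                                    # 0/1 per reading: the light/dark bit
      
      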

Most AI and artificial network frameworks and libraries will have examples of these in the example directory. It is advisable to approach the project like this.

+ +
    +
  • Find web pages or articles using the above as search terms until an explanation that is clear to the team is found.

  • +
  • Study all three for at least a few hours each as a team.

  • +
  • Find a library or framework that contains all three.

  • +
  • Install the necessary prerequisites for all three on the lab computer(s).

  • +
   • Produce a working example of the three types so that selection is not a function of which one is easiest to get running (something which would qualify as a research anti-pattern). 

  • +
  • Make a choice.

  • +
  • Change the working code, one verifiable step at a time into the desired unsupervised learning program.

  • +
  • Eventually, when thoroughly ready, make the jump from the example data set to the robotic IR light frame set.

  • +
+ +
+ +

Footnotes

+ +

[1] Stefan-Boltzmann law

+",4302,,,,,1/26/2019 14:04,,,,2,,,,CC BY-SA 4.0 +10213,1,10215,,1/26/2019 14:11,,2,472,"

 It is known that shaping with any potential function won't alter the optimal policy [1]. I don't understand why that is. 

+

The definition:

+

$$R' = R + F,$$ with $$F = \gamma\Phi(s') - \Phi(s),$$

+

where, let's suppose, $\gamma = 0.9$.

+

If I have the following setup:

+
    +
  • on the left is my $R$.
  • +
  • on the right my potential function $\Phi(s)$
  • +
  • the top left is the start state, the top right is the goal state
  • +
+

+

The reward for the red route is: $(0 + (0.9 * 100 - 0)) + (1 + (0.9 * 0 - 100)) = -9$.

+

And the reward for the blue route is: $(-1 + 0) + (1 + 0) = 0$.

+

So, for me, it seems like the blue route is better than the optimal red route and thus the optimal policy changed. Do I have erroneous thoughts here?

+",21685,,2444,,11/9/2020 17:25,11/9/2020 17:25,Why does potential-based reward shaping seem to alter the optimal policy in this case?,,1,0,,,,CC BY-SA 4.0 +10214,1,10229,,1/26/2019 14:24,,1,82,"

I’m looking for advice regarding my ML project.

+ +

 Using a special wristband, I am able to collect a bunch of physiological data from human subjects. I want to develop an application to recognize when these physiological signals change in a meaningful way and only then ask the user how he/she is feeling. This data will later be used for machine learning testing. The problem is that I am struggling to find appropriate ways to classify the current data input as meaningful and to ask for information only when relevant user input is to be gathered, not more and not less. 

+ +

For me, this seems to be a novelty detection problem, combined with a binary classification problem. I have to recognize what values coming from the data stream are to be considered normal, and therefore not bother the user with unnecessary input requests. I would also use novelty detection to recognize the data coming out of the normal zone and ask the user about it. This new data is then not considered novelty anymore, and binary classification will tell if the user is to be asked about his emotions when getting the same data in the future.

+ +

So, these are my questions:
+- What do you think about my reasoning of the problem? Do you have other perspectives on how to handle these problems? I have been told this could also be considered an anomaly detection problem, for example.

+ +

- What algorithms would you use to separate normal from more meaningful physiological data? Support Vector Machines perhaps? Maybe some decision theory?

+ +

 - Do you know any books or papers on similar matters? Although I have found some after hours and hours of research, you may be able to point me to something different from those I have. 

+ +

It is worth noting that data collection is supposed to be done when no other factors are messing with signal readings, such as sport.

+ +

Any help would be much appreciated.

+ +

Best regards,
+Augusto

+",21767,,,,,1/27/2019 5:55,Help with Novelty Recognition and Binary Classification for Emotion Recognition,,1,0,,,,CC BY-SA 4.0 +10215,2,,10213,1/26/2019 14:30,,2,,"

The same $\gamma = 0.9$ that you use in the definition $F \doteq \gamma \Phi(s') - \Phi(s)$ should also be used as the discount factor in computing returns for multi-step trajectories. So, rather than simply adding up all the rewards for your different time-steps for the different trajectories, you should discount them by $\gamma$ for every time step that expires.

+ +

Therefore, the returns of the blue route are:

+ +

$$0 + (0.9 \times -1) + (0.9^2 \times 0) + (0.9^3 \times 1) = -0.9 + 0.729 = -0.171,$$

+ +

and the returns of the red route are:

+ +

$$(0 + 0.9 \times 100 - 0) + 0.9 \times (1 + 0.9 \times 0 - 100) = 90 - 89.1 = 0.9.$$
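
      As a quick numerical check of these two results (a minimal sketch; each list holds the shaped per-step rewards $R' = R + F$ along a route, with $\gamma = 0.9$):

      
      gamma = 0.9
      
      def discounted_return(rewards):
          return sum(gamma**t * r for t, r in enumerate(rewards))
      
      blue = discounted_return([0, -1, 0, 1])                         # -0.171
      red = discounted_return([0 + 0.9*100 - 0, 1 + 0.9*0 - 100])     #  0.9
      
      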

+",1641,,,,,1/26/2019 14:30,,,,1,,,,CC BY-SA 4.0 +10216,2,,10208,1/26/2019 15:48,,1,,"

There are three functional aspects to this project, each of which could be a sub-project, even though they are interrelated. The decoupling of these aspects in the associated R&D will likely improve the rate of development.

+ +
    +
  1. Classify images — in this case, the vendor brand
  2. +
  3. Locate and orient the text
  4. +
  5. OCR the text — in this case, the total
  6. +
+ +

These are certainly feasible in the particular case of invoices from vendors, so the entire project certainly is.

+ +

The goal is to produce two fields of information from input images.

+ +
    +
  • Vendor code
  • +
  • Total in some specific monetary unit
  • +
+ +

That past invoices, the names of the vendors, and the totals of each invoice are available as training data may be helpful. Whether the volume of training data is sufficient can only be determined through a very complicated application of theory to some of the data metrics or by trial. It is recommended to do both.

+ +
+

What algorithms would you guys recommend?

+
+ +

It would be irresponsible to define an AI design in terms of what people recommend for algorithm. The algorithms that guys recommend are already covered by the path names for files in the algorithms directory paths, the example's directory paths, and the sections of the documentation of any good AI framework. Let's first talk about the design and then the algorithm options, so there is more of a basis for algorithm selection than the largely random reading selections of the members of a site.

+ +

The models involved and which can be parameterized such that they can be tuned through training are partly defined by the three sub-projects above.

+ +

Analysis

+ +

For this project, there is no reason for the AI to recognize which text on the invoice is the vendor name. If there are a hundred vendor invoice types, the speed in which a single person can identify the key boundaries of each invoice type is orders of magnitude faster than the work necessary to develop a completely general algorithmic approach for automating that work. It would only be resource thrifty to develop such automation if there were thousands of vendors and constantly varying templates for invoices, which is likely in the case of personal finance.

+ +

These are the key bounds of the form quadrilaterals (four sided polygons) within which the following objects lie.

+ +
    +
  • Vendor name text box
  • +
  • Total amount text box
  • +
  • Rectangle around the entire form
  • +
+ +

Digitizing the twelve points through one of the labelling programs is a model much more likely to produce a reliable system than the representation of three rectangular bounding boxes and tilts. This is because tilt angle requires at least two adjacent points anyway and the aspect ratio cannot be held constant in real scanning scenarios when the scanner might be replaced or produce different ratios with wear or the smoothness or wear characteristics of the paper invoice.

+ +

Doing all three things using a network would require more than one person's invoices for a decade, unless the person is a billionaire compulsive buyer with a team of secretaries doing scanning and data entry.

+ +

The models are then these.

+ +
    +
  • Digitization of a quadrilateral document with varying contrast, brightness, tilt, horizontal pixels per inch, vertical pixels per inch, relative location and size of vendor name, and relative location and size of invoice total
  • +
  • Numeric characters and other monetary characters in a rectangular box
  • +
  • Brand image for the vendor, which may include a logo and type of arbitrary and possibly unique font
  • +
+ +

Now we can talk about artificial network approaches.

+ +
    +
  • Adjusting for contrast, brightness, tilt, resolution, locations, and sizes will require a customized input, loss function, and layer arrangement based on the geometry involved. A GRU network may gain some advantages if the documents can be ordered chronologically, since the collection then becomes a time series, the trends of which can be exploited.
  • +
  • Monetary values can best be done using OCR libraries.
  • +
  • Recognizing the brand is probably best done with CNN as a categorization machine.
  • +
+ +

The system must indicate if the vendor and total are not found, in which case a new template is indicated, and the twelve points must be digitized for this new type.

+ +

It is possible to do this with one single deep convolutional network, but, again, the data set would need to be augmented. The one other way to do this is to create a filled-out-invoice generator to produce the volume of stochastically variable data across the various variability dimensions listed above to train the deep CNN.

+",4302,,,,,1/26/2019 15:48,,,,0,,,,CC BY-SA 4.0 +10217,2,,1544,1/26/2019 15:56,,0,,"

Curiosity is used successfully with Random Network Distillation (RND). OpenAI has published a detailed article about their approach using this method, which was especially successful with previously unsolved games like Montezuma’s Revenge.

+ +

While this does not fully answer your question about curiosity being required to build a true AI, it shows that previously unsolved problems became solvable introducing curiosity in the reward system.

+",9161,,,,,1/26/2019 15:56,,,,0,,,,CC BY-SA 4.0 +10220,2,,10208,1/26/2019 18:10,,1,,"

Since many of these problems have been tackled earlier and we have quite a few good tools to handle images and text, the task does not seem to be so very difficult. But then you would only find out after actually trying out the solutions suggested.

+ +

I suggest the following approach:

+ +
    +
  1. Use Tesseract and OpenCV to extract text from the images you have saved. You can refer to a good example of using tesseract with python here - https://www.pyimagesearch.com/2018/09/17/opencv-ocr-and-text-recognition-with-tesseract/

  2. +
  3. Evaluate the results to plan and consider your next steps. Refer to the publications linked to the tutorials such as the EAST algorithm which describes the challenges and point to other studies on this topic. This will help you evaluate the other approaches which have been studied for different cases by other researchers.

  4. +
  5. Assuming the text detection has worked well, proceed forward by detecting parts of the document by using row detection (find continuous white-space extending horizontally) and tab stop detection (continuous white-space vertically). Use the Tesseract/openCV detected text blocks for this purpose.

  6. +
  7. Re-organize the detected text according to the layout identified by rows/columns. Next, recognize and discard the text you may not required, e.g. the logo, other associated details such as phone no of the store, tax breakup, change given, etc. These may be easily picked up by carefully crafted regular expressions.

  8. +
  9. Run a spell checker to remove errors in recognition (aspell, pyenchant, etc.)

  10. +
  11. Finally, from the results obtained, you could evaluate the remaining errors and fix them by either writing a set of regular expressions for easily identified OCR mistakes, or, else build a text model to learn and fix similar errors.

  12. +
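
      As a minimal sketch of the first step above, extracting raw text from a saved invoice photo (assuming pytesseract and OpenCV are installed; the file name is a placeholder):

      
      import cv2
      import pytesseract
      
      img = cv2.imread('invoice.jpg')
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # simple preprocessing before OCR
      text = pytesseract.image_to_string(gray)       # raw text, to be parsed with regular expressions afterwards
      
      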
+",2106,,2106,,1/27/2019 2:20,1/27/2019 2:20,,,,0,,,,CC BY-SA 4.0 +10221,2,,10177,1/26/2019 18:23,,3,,"

 A bounding box is a rectangle superimposed over an image within which all important features of a particular object are expected to reside. Its purpose is to reduce the range of search for those object features and thereby conserve computing resources: allocation of memory, processors, cores, processing time, some other resource, or a combination of them. For instance, when a convolution kernel is used, the bounding box can significantly limit the range of travel for the kernel in relation to the input frame. 

+ +

When an object is in the forefront of a scene and a surface of that object is faced front with respect to the camera, edge detection leads directly to that surface's outlines, which lead to object extent in the optical focal plane. When edges of object surfaces are partly obscured, the potential visual recognition value of modelling the object, depth of field, stereoscopy, or extrapolation of spin and trajectory increases to make up for the obscurity.

+ +
+

A classifier can only classify one object per image

+
+ +

A collection of objects is an object, and the objects in the collection or statistics about them can be characterized mathematically as attributes of the collection object. A classifier dealing with such a case can produce a multidimensional classification of that collection object, the dimensions of which can correspond to the objects in the collection. Because of that case, the statement is false.

+ +
+

1) If I don't need to know 'where' an object is (or objects are) and just interested in the existence of them and how many there are, is it possible to just get rid of the boundary boxes?

+
+ +

If you have sufficient resources or patience to process portions of the frame that don't contain the objects, yes.

+ +

Questions (2) and (3) are already addressed above, but let's look at them in that context.

+ +
+

2.a) If not, how does bounding boxes help detect objects?

+
+ +

It helps by fulfilling its purpose, to reduce the range of the search. If by thrifty method a bounding shape of any type can be created, then the narrowing of focus can be used to reduce the computing burden on the less thrifty method by eliminating pixels that are not necessary to peruse with more resource-consuming-per-pixel methods. These less thrifty methods may be necessary to recognize surfaces, motion, and obscured edges and reflections so that the detection of object trajectory can be obtained with reliability.

+ +

That these thrifty mechanisms to find the region of focus and these less thrifty mechanisms to use that information and then determine activity at higher levels of abstraction are artificial networks of this type or that or use algorithms of this type or that is not relevant yet. First understand the need to reduce computing cost in AI, which is a pervasive concept for anything more complex than tic-tac-toe, and then consider how bounding boxes help the AI engineer and the stakeholders of the engineering project to procure technology that is viable in the market.

+ +
+

2.b) From what I have figured is that a network (if using neural network architectures) predicts the coordinates of the bounding boxes if there is something in the feature map. Doesn't this mean that the detector already knows where the object is (at least briefly)?

+ +

2.c) So continuing from question 1, if I'm not interested in the exact location, would training for bounding boxes be irrelevant?

+
+ +

Cognition is something AI seeks to simulate, and many hope to have robots like in the movies that can help out and be invaluable friends, like TARS in the Nolan brothers 2014 film Interstellar. We're not there. The network knows nothing. It can train a complex connection between an input signal through a series of attenuation matrices and activation functions to produce an output signal statistically consistent with its loss function, value function, or some other criteria.

+ +

The inner layers of an artificial net may, if not directed to do so, produce something equivalent to a bounding region only if velocity of convergence is present as a factor in its loss or value function. Otherwise there is nothing in the Jacobian leading convergence to reduce its own time to completion. Therefore, the process may complete, but not as well as if cognition steps in and decides that the bounding region will be found first and then used to reduce the total burden of mechanical (arithmetic) operations to find the desired output signal as a function of input signal.

+ +
+

3) Finally, in architectures like YOLO, it seems that they predict the probability of each class on each grid (e.g. 7 x 7 for YOLO v1). What would be the purpose of bounding boxes in this architecture other than that it shows exactly where the object is? Obviously, the class has already been predicted so I'm guessing that it doesn't help classify better.

+
+ +

Reading the section in A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2, J Zhang, M Huang, X Jin, X Li, 2017, may help further comprehension of these principles and their almost universal role in AI, especially the text around their statement, ""The Network Architecture of YOLO v2 YOLO employs a single neural network to predict bounding boxes and class probabilities directly from full images in one inference. It divides the input image into S × S grids."" This way you can see the use of these principles in the achievement of specific research goals.

+ +

Other such applications can be found by simply reading the article full text available on the right side of an academic search for yolo algorithm and using ctrl-f to find the word bound.

+",4302,,4302,,1/28/2019 11:16,1/28/2019 11:16,,,,2,,,,CC BY-SA 4.0 +10222,2,,9234,1/26/2019 18:29,,1,,"

Either (a) re-recognize it in subsequent frames and then train a network employing a trajectory model to the changing object model parameters or (b) recognize the object and its motion in a single object-trajectory parameterized model from the sequence of sets of detected edges indicating edge movement in the sequence of frames.

+",4302,,,,,1/26/2019 18:29,,,,0,,,,CC BY-SA 4.0 +10223,2,,1544,1/26/2019 18:34,,1,,"

Curiosity by itself does not improve intelligence.

+ +

It increases the chances of better understanding a given subject, given that curiosity is coupled with actions in that direction.

+ +

 For example: I am curious about how to make pancakes and decide to find a recipe, but stop at the first instance of an answer with steps to follow. 

+ +

 Curiosity needs to be coupled with the desire to improve a given understanding and be followed by a review of current knowledge with the aim of updating the previously reached conclusions, provided that the logic used to judge improvements is correct. 

+ +

 Curiosity will not necessarily be beneficial for an improved intelligence. But to allow for an improved intelligence, curiosity is a mandatory prerequisite. 

+ +

Curiosity is a symptom of an evolving intelligence.

+",21285,,21285,,1/26/2019 18:53,1/26/2019 18:53,,,,0,,,,CC BY-SA 4.0 +10228,1,,,1/27/2019 5:54,,4,2087,"

In an MLP with ReLU activation functions after each hidden layer (except the final),

+ +

Let's say the final layer should output positive and negative values.

+ +

With ReLU intermediary activations, this is still possible because the final layer, despite taking in positive inputs only, can combine them to be negative.

+ +

 However, would using leaky ReLU allow faster convergence, because you can pass in negative values as input to the final layer instead of waiting until the final layer to make things negative? 

+",21158,,,,,4/2/2019 7:48,Does leaky relu help learning if the final output needs negative values?,,3,0,,,,CC BY-SA 4.0 +10229,2,,10214,1/27/2019 5:55,,1,,"

The Project

+ +

It appears from the question that emotional detection and response is the longer term goal of the project and that recognizing potential emotional manifestations in easily detectable physiological metrics is an initial R&D objective.

+ +

Mobile device applications are already available to do this, but biometric monitoring via a wristband, excellence in AI design, and marketing excellence could overtake these apps in the marketplace and provide emotional regulation to improve productivity and reduce health and wellness risks.

+ +

The (at least initial) goal seems to be basic binary classification, which simplifies the output, but not the detection, which seems to have the following two criteria.

+ +
    +
  • Acquire biological information related to emotion

  • +
  • Avoid drawing the user's attention to perform unnecessary tasks

  • +
+ +

The Biometric Device Challenge

+ +

There is definitely a challenge to classifying biometric trends as meaningful in this context.

+ +
+

What do you think about my reasoning of the problem? Do you have other perspectives on how to handle these problems? I have been told this could also be considered an anomaly detection problem, for example.

+
+ +

Novelty detection is not the first milestone in this challenge. Detection of useful features is the prerequisite. Novelty comes into play once changes in the organism can be characterized with some reliability and accuracy (few false positives and false negatives) and the particular user has provided subjective reporting to correspond with some change in the organism.

+ +

Biological systems have forms of stasis that can be indirectly sensed, including these.

+ +
    +
  • Blood pressure
  • +
  • Metabolic rate
  • +
  • Multi-ionic salinity
  • +
  • Cognitive attention
  • +
  • Blood sugar levels
  • +
+ +

These affect the dermis at the wrists through perturbations in externally detectable metrics, the inclusion of which depends on the capabilities of the wrist device.

+ +
    +
  • Circumference
  • +
  • Coefficient of electrical resistance
  • +
  • Surface temperature
  • +
  • Multi-ionic salinity
  • +
  • Surface moisture
  • +
+ +

With blood and urine lab tests, conditions can be controlled, however sampling of those fluids are usually infrequent, leading to the use of ranges to detect anomalies that may be indicative of disease, conditions, or other health risks. There are major challenges to using ranges of metrics at the wrist.

+ +
    +
  • Sensitivity to the general and ongoing physical condition of the individual
  • +
  • Unique individual responses to emotional states
  • +
  • Interference from changes in the temperature, pressure, air movement, and humidity of the environment
  • +
+ +

One inroad to detection of the internal metrics of the organisms is the fact that biological stasis is not the same thing as perfect regulation. Biological systems are chaotic. Normal signals from time series sampling of biological metrics are neither constant nor periodic. When changes in the attractors and spectra characterizing the chaotic fluctuation of direct metrics (like electrical resistance) or indirect metrics (like systolic blood pressure) occur, a request for subjective information may be asked from the wearer.

+ +

These are some chaotic principles to consider.

+ +
    +
  • Spectra can provide much information about the intensity of the force driving the chaotic fluctuation in the time series, which is likely to correlate with stress.
  • +
  • The various ways to evaluate exponents, including the Lyapunov exponent, may be of significant value in correlating patterns in the external, indirect time series to emotional states.
  • +
  • Autocorrelation at the primary frequencies involved may reveal changes indicative of significant mental transitions.
  • +
+ +

Once a correlation is established, then the novelty detection makes sense, except for an occasional re-acquisition of the correlation, since the wearer may become more internally connected to their physical and emotional states as they use the app, and the answers to inquiries may mature correspondingly.

+ +

Additional Questions Within the Question

+ +

These additional questions were also asked.

+ +
+

What algorithms would you use to separate normal from more meaningful physiological data? Support Vector Machines perhaps? Maybe some decision theory?

+
+ +

Recurrent neural networks such as B-LSTM or GRU are commonly excellent at characterizing time series, but there may be a need for more than one network; a rough sketch of such a feature extractor follows the list below.

+ +
    +
  • One to extract the features of the chaotic patterns in the raw data
  • +
  • Another to train to correlate changes in the features of the chaos with subjective reporting
  • +
+ +
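
A rough sketch of the first of these two networks, assuming PyTorch and invented sizes (5 sensor channels, windows of 200 samples, 16 chaos features); this is only an illustration of the idea, not a tested design.

import torch
+import torch.nn as nn
+
+# Hedged sketch with invented sizes: a GRU that maps a window of wrist-sensor
+# readings to a feature vector; a second model would map features to reports.
+class ChaosFeatureNet(nn.Module):
+    def __init__(self, n_sensors=5, n_features=16):
+        super().__init__()
+        self.gru = nn.GRU(input_size=n_sensors, hidden_size=n_features, batch_first=True)
+
+    def forward(self, x):          # x: (batch, time, n_sensors)
+        _, h = self.gru(x)         # h: (1, batch, n_features)
+        return h.squeeze(0)
+
+features = ChaosFeatureNet()(torch.randn(8, 200, 5))
+print(features.shape)              # torch.Size([8, 16])
+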

This second training must be done during in situ field use of the first, already trained network. Since training is not generally viable on a mobile device, there will need to be a client-server arrangement where the training data and its resulting trained network can be communicated through secure RESTful services so the training processes can be asynchronous and leverage GPUs or other hardware acceleration.

+ +
+

Do you know any books or papers on similar matters? Even if I have found some after hours and hours of research, you may be able to point me to something different than those I have.

+
+ +

The terms above in a scholarly search will bring up all kinds of study materials. Specifically, these:

+ +
    +
  • Lyapunov exponent
  • +
  • Strange attractors
  • +
  • Spectral analysis of biometrics
  • +
  • Chaos autocorrelation
  • +
  • Phase space
  • +
+ +

One good book for all but the middle item is Chaos Theory Tamed, 1997 +by Garnett P. Williams.

+ +
+

It is worth noting that data collection is supposed to be done when no other factors are messing with signal readings, such as sport.

+
+ +

This is where higher level detection involving chaotic analysis is much more reliable than ranges. The chaotic features of athletic activity will be distinctly different from those of fear states or sexual arousal.

+",4302,,,,,1/27/2019 5:55,,,,2,,,,CC BY-SA 4.0 +10232,2,,10228,1/27/2019 9:50,,2,,"

In short, yes, Leaky ReLU helps with faster convergence if your output requires both positive and negative values. The catch is that you need to tune the negative slope of the Leaky ReLU, which is a hyperparameter, to get better accuracy.
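
+ +

For illustration, a minimal NumPy sketch of Leaky ReLU with a tunable negative slope (the 0.01 here is just a common default, not a recommendation):

import numpy as np
+
+def leaky_relu(x, negative_slope=0.01):
+    # negative inputs are scaled by the (tunable) slope instead of being zeroed
+    return np.where(x > 0, x, negative_slope * x)
+
+print(leaky_relu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [-0.02  -0.005  0.  1.5]
+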

+",9062,,,,,1/27/2019 9:50,,,,3,,,,CC BY-SA 4.0 +10236,2,,10228,1/27/2019 11:38,,0,,"

Both the output and the gradient at the negative part of leaky ReLU are 100 times lower than at the positive part (assuming the common negative slope of 0.01). I doubt that they have any significant impact on training direction and/or on the final output of a trained model unless the model is severely underfitting.

+",21426,,,,,1/27/2019 11:38,,,,0,,,,CC BY-SA 4.0 +10239,1,10245,,1/27/2019 14:31,,0,96,"

In my input tensor, I would like to use both integer values as well as booleans. For example, if there is a spelling difference between 2 texts, I want to set the value to true, and otherwise false. In the same tensor, I would like to assign a value to, for example, the maximum number of consecutive messages, which will be an integer.

+ +

Am I allowed to use 0's and 1's for the booleans together with integers, or will it have any negative impact on the working of the network? The ANN won't see any difference between the binary and nonbinary values, but is it a problem?

+",21788,,1641,,2/26/2019 20:56,2/26/2019 20:56,Nonbinary and binary values in input tensor,,1,2,,,,CC BY-SA 4.0 +10244,1,10246,,1/27/2019 18:26,,0,46,"

Say I want to train a NN that generates outputs of some sort (say, even numbers). +Note that the network does not classify outputs, but, rather GENERATES the output.

+ +

I want to let it run forward and generate some number, then give it a reward of +1 for an even number and a reward of -1 for an odd number, to make it output only even numbers over time.

+ +

What would be an architecture for such a NN?

+ +

I am getting caught in the part where there is actually no input, and I can't really start with a hidden layer, can I?

+ +

I am quite confused and would appreciate guidance.

+",21645,,,,,1/27/2019 19:24,How to (theorically) build a neural network with input of size 0?,,1,0,,,,CC BY-SA 4.0 +10245,2,,10239,1/27/2019 18:34,,-1,,"

If the two inputs for the binary values are held to the domain set $\{0, 1\}$ during training, testing, and use, it will not break the network functionality, although the question is a good one. Why use so many bits to hold one?

+ +

Theory of Holors: A Generalization of Tensors, by Parry Moon and Domina Spencer (ISBNs 978-0521019002 and 0521019001), proposes holors precisely because of this kind of limitation on the natural heterogeneity of numerical structure. Gibbs, Ricci, Einstein, and others used holors in their mathematical expressions but without the name holor. Holors are conceptually related to objects in ontology and classes in object oriented design, but they are numbers, so they fit into mathematical expressions as scalars, vectors, matrices, vector fields, and other tensors do.

+ +

This is the basis for Coplien's operator overload in C++ and one of the reasons the language is too mutable to be mastered by average applications programmers. Support for what Moon and Spencer called holors is also the basis of decoupled type safety in some of the early LISP object oriented frameworks.

+ +

The current programming paradigm we see in Java, Python, and other popular languages was set in FORTRAN, which was more attainable for average programmers, so we are stuck with homogeneous structure, even in AI libraries and GPUs. Therefore, once one number requires an IEEE 64-bit float, bytes and bits will use all 64.

+ +

In artificial networks, this only matters in the very first multiplication with a parameter.

+",4302,,,,,1/27/2019 18:34,,,,1,,,,CC BY-SA 4.0 +10246,2,,10244,1/27/2019 19:24,,2,,"

Whether a neural network has learned anything or not, it is a function that maps some input to an output. Training is the process of tweaking the weights so that the output is something that we want. Thus there is always in input of some sorts.

+ +

The problem you have presented, of generating even numbers, is much like a Generative Adversarial Network (GAN). In a GAN there are 2 networks: a Generator that tries to generate a sample from a target distribution and a Discriminator that tries to tell real samples from fake samples. The classic analogy is a criminal making counterfeit money and a cop trying to tell real money from counterfeit.

+ +

The generator input is usually a random number (or a matrix of random numbers). The generator then learns to transform a particular random input to a particular point in the target space.

+ +

So to answer your question, no, there can't be a neural network with 0 inputs, as there must always be an input of some kind. Even if the network were to generate a sequence instead of one instance, it would still need something to start with.

+ +

For your example, there would have to be some input for the network to start with. A really simple NN that could solve your problem might look like:

+ +
import random
+
+# Runnable version of the sketch: with a single weight of 2, any integer input maps to an even output.
+_input = random.randint(0, 100)             # the network still needs *some* input
+neural_network_weights = 2                  # the single learned weight
+result = _input * neural_network_weights    # result is always even
+print(result)
+
+",4398,,,,,1/27/2019 19:24,,,,4,,,,CC BY-SA 4.0 +10247,1,10276,,1/27/2019 20:02,,2,338,"

I am trying to implement the NEAT algorithm in Python from scratch. However, I am stuck. When a new innovation number is created, it has two nodes which represent the connection. Also, this innovation number has a weight.

+ +

However, I know that innovation numbers are global variables; in other words, when an innovation number is created,

+ +
ex. Innovation ID:1 - Node:1 to Node:4 - weight: 0.5
+
+ +

it will have an ID which will be used by other connections to represent the connection between Node:1 and Node:4.

+ +

When this innovation is used by another neural network, will it also use the weight of the innovation 1, which is 0.5 in this example?

+",21517,,2444,,2/28/2019 22:49,3/30/2019 23:00,Are innovation weights shared in the NEAT algorithm?,,1,0,,,,CC BY-SA 4.0 +10248,1,,,1/27/2019 20:21,,2,48,"

I'm interested in creating a convolutional neural network or LSTM to locate text in an image. I don't want to OCR the text yet, just find the text regions. Yes, I know Tesseract and other systems can do this, but I want to learn how it works by building my own. All of the tutorials and articles I've seen so far have the CNN output a classification - ""image contains a cat"", ""image contains a dog"". Okay, that's nice, but it doesn't say anything about where it was found.

+ +

Can anyone point me to some information that describes the output layer of a NN that can give location information? Like, x-y co-ordinates of text boxes?

+",21796,,,,,1/27/2019 20:21,How does a neural network output text box location data?,,0,0,,,,CC BY-SA 4.0 +10259,1,10260,,1/28/2019 4:52,,5,122,"

I need to make a system for recognizing people based on hundreds of texts by finding grammatical similarities in their written text or similarities between the words they choose for writing. I don't need it to be very accurate, but I wanted to know if it is possible.

+

For example, finding one person with two or more accounts on a forum, or something like that (the texts are already gathered). I'm just wondering if it's possible and what field I should research.

+",21811,,2444,,12/21/2021 0:29,12/21/2021 0:29,Is it possible to recognise a person based on what they have written?,,1,0,,,,CC BY-SA 4.0 +10260,2,,10259,1/28/2019 5:18,,3,,"

The term you are looking for is stylometry, which is related to a technique in forensic linguistics called writeprint analysis. There are many different techniques to perform stylometric analysis, from the very basic 5-feature analysis classifying features such as the lexicon and idiosyncrasies unique to a person to more complex analysis utilizing neural networks and machine learning. Searching online for research papers focusing on stylometry should assist you in finding the best technique for the job.
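
+ +

As one very rough, hedged starting point (not a full stylometric system), character n-gram TF-IDF vectors plus cosine similarity can already surface accounts with similar writing habits; the texts below are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+
+# Placeholder texts standing in for posts gathered from different accounts.
+texts = [
+    'i realy think its fine, u know what i mean',
+    'i realy think u should try it, its fine imo',
+    'One should, of course, consider the alternative hypothesis first.',
+]
+vectors = TfidfVectorizer(analyzer='char_wb', ngram_range=(3, 4)).fit_transform(texts)
+print(cosine_similarity(vectors))   # accounts 0 and 1 score much closer to each other than to 2
+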

+",20322,,20322,,3/7/2021 8:56,3/7/2021 8:56,,,,0,,,,CC BY-SA 4.0 +10261,2,,6274,1/28/2019 5:29,,1,,"

As you want to perform image segmentation, you can use U-Net, which does not have fully connected layers, but it is a fully convolutional network, which makes it able to handle inputs of any dimension. You should read the linked papers for more info.

+",21812,,2444,,6/13/2020 20:21,6/13/2020 20:21,,,,0,,,,CC BY-SA 4.0 +10263,2,,6699,1/28/2019 6:33,,0,,"

Face detection evaluation: A new approach based on the golden ratio Φ

+ +

Abstract:

+ +
+

Face detection is a fundamental research area in computer vision field. Most of the face-related applications such as face recognition and face tracking assume that the face region is perfectly detected. To adopt a certain face detection algorithm in these applications, evaluation of its performance is needed. Unfortunately, it is difficult to evaluate the performance of face detection algorithms due to the lack of universal criteria in the literature. In this paper, we propose a new evaluation measure for face detection algorithms by exploiting a biological property called Golden Ratio of the perfect human face. The new evaluation measure is more realistic and accurate compared to the existing one. Using the proposed measure, five haar-cascade classifiers provided by Intel©OpenCV have been quantitatively evaluated on three common databases to show their robustness and weakness as these classifiers have never been compared among each other on same databases under a specific evaluation measure. A thoughtful comparison between the best haar-classifier and two other face detection algorithms is presented. Moreover, we introduce a new challenging dataset, where the subjects wear the headscarf. The new dataset is used as a testbed for evaluating the current state of face detection algorithms under the headscarf occlusion

+
+",21815,,1671,,1/28/2019 20:35,1/28/2019 20:35,,,,0,,,,CC BY-SA 4.0 +10264,1,10269,,1/28/2019 9:27,,2,103,"

I have the following program for my neural network:

+ +
n_steps = 9
+n_inputs = 36
+n_neurons = 50
+n_outputs = 1
+n_layers = 2
+learning_rate = 0.0001
+batch_size =100
+n_epochs = 1000#200 
+train_set_size = 1000
+test_set_size = 1000
+tf.reset_default_graph()
+X = tf.placeholder(tf.float32, [None, n_steps, n_inputs],name=""input"")
+y = tf.placeholder(tf.float32, [None, n_outputs],name=""output"")
+layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, activation=tf.nn.relu6, use_peepholes=True, name=""layer""+str(layer))
+          for layer in range(n_layers)]
+multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
+rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
+stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons]) 
+stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
+outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
+outputs = outputs[:,n_steps-1,:]
+
+ +

I want to know whether my network is fully connected or not?
+When I try to see the variables, I see:

+ +
multi_layer_cell.weights
+
+ +

The output is:

+ +
[<tf.Variable 'rnn/multi_rnn_cell/cell_0/layer0/kernel:0' shape=(86, 200) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_0/layer0/bias:0' shape=(200,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_0/layer0/w_f_diag:0' shape=(50,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_0/layer0/w_i_diag:0' shape=(50,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_0/layer0/w_o_diag:0' shape=(50,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_1/layer1/kernel:0' shape=(100, 200) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_1/layer1/bias:0' shape=(200,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_1/layer1/w_f_diag:0' shape=(50,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_1/layer1/w_i_diag:0' shape=(50,) dtype=float32_ref>,
+ <tf.Variable 'rnn/multi_rnn_cell/cell_1/layer1/w_o_diag:0' shape=(50,) dtype=float32_ref>]
+
+ +

I didn't understood whether each layer is getting the complete inputs or not.
+I want to know whether the following figure is correct for the above code:
+

+ +

If this is not correct, then what is the figure for the network? Please let me know.

+",9126,,9126,,1/28/2019 9:47,1/28/2019 18:23,Is my Neural Network program fully connected?,,1,2,,,,CC BY-SA 4.0 +10265,1,10297,,1/28/2019 10:16,,2,105,"

Is there a connection between the approximator network sizes in a RL task and the speed of convergence to an (near) optimal policy or value function?

+ +

When thinking about this, I came across the following thoughts:

+ +
    +
  1. If the network were too small, the problem wouldn't get enough representation and would never be solved, and the network would converge to its final state quickly.

  2. +
  3. If the network were infinitely big (assuming no vanishing gradients and the like), the network would converge to some (desirable) over-fitting, and would converge to its final state very slowly, if at all.

  4. +
  5. This probably means there is some golden middle ground.

  6. +
+ +

Which leads me to the interesting question:

+ +

4. Assuming training time is insignificant relative to running the environment (as in real-life environments), then if a network of size M converges to an optimal policy on average after N episodes, would changing M make a predictable change in N?

+ +

Is there any research, or known answer to this?

+ +

How to know that there is no more need to increase the network size?

+ +

How to know if the current network is too large?

+ +

Note: please regard question 4 as the main question here.

+",21645,,2444,,2/20/2019 16:42,2/20/2019 16:42,Is there a relation between the size of the neural networks and speed of convergence in deep reinforcement learning?,,1,3,,,,CC BY-SA 4.0 +10269,2,,10264,1/28/2019 17:56,,2,,"
    +
  1. Is figure = code?

    + +

    No. +Your figure shows a fully connected feed forward network (MLP). But in your code you are using a two layer LSTM with peepholes. For the visualization of LSTMs, blocks are usually used for each layer.

    + +

    Here is a figure of the LSTM with peepholes which is the base of the tensorflow implementation (Source: Paper, fig. 1).

  2. +
+ +

+ +
    +
  1. Why size 86?

    + +

    The input is concatenated with the hidden state: +n_inputs + n_neurons = 36 + 50 = 86.

  2. +
  3. Why size 100 in the second layer?

    + +

    The second LSTM layer gets input of size 50 by the first layer (n_neurons) which is concatenated with the hidden state of the second layer (of size n_neurons = 50). +Therefore you get 50 + 50 = 100.

  4. +
  5. Why size 200?

    + +

    There are four weight matrices of size $86 \times 50$ (fig.: colored circles and the g circle), which seem to be combined into one matrix of size $86 \times 200$ ($4 \cdot 50 = 200$ columns, layer0/kernel); a quick sketch of this shape arithmetic follows the list.

  6. +
  7. Why size 50?

    + +

    The three variables w_f_diag, w_i_diag and w_o_diag are for the peephole connections (fig: dashed lines) and they have the size of n_neurons = 50.

  8. +
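
A quick sketch of the shape arithmetic above, using the values from the question:

# Values taken from the question.
+n_inputs, n_neurons = 36, 50
+
+# The input (or previous layer output) is concatenated with the hidden state,
+# and the four gate matrices are stored as one combined kernel.
+layer0_kernel = (n_inputs + n_neurons, 4 * n_neurons)    # (86, 200)
+layer1_kernel = (n_neurons + n_neurons, 4 * n_neurons)   # (100, 200)
+peephole_shape = (n_neurons,)                            # (50,) for each of w_f/w_i/w_o_diag
+print(layer0_kernel, layer1_kernel, peephole_shape)
+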
+",8035,,8035,,1/28/2019 18:23,1/28/2019 18:23,,,,2,,,,CC BY-SA 4.0 +10272,1,,,1/28/2019 20:27,,6,162,"

Can someone explain what the process of learning is? What does it mean to learn something?

+",21832,,2444,,12/12/2021 18:15,12/12/2021 18:15,What does learning mean?,,2,1,,12/12/2021 11:08,,CC BY-SA 4.0 +10276,2,,10247,1/28/2019 21:30,,3,,"

Quoting Evolving neural networks through augmenting topologies, p. 10 (emphasis mine):

+ +
+

When crossing over, the genes in both genomes with the same innovation numbers are lined up. These genes are called matching genes. Genes that do not match are either disjoint or excess, depending on whether they occur within or outside the range of the other parent’s innovation numbers. They represent structure that is not present in the other genome. In composing the offspring, genes are randomly chosen from either parent at matching genes, whereas all excess or disjoint genes are always included from the more fit parent.

+
+ +

Innovation numbers are used to line up genes in different genomes so you can perform crossover on networks with different topologies. Each network can optimise the weights in matching genes in a different way, so the weights are not shared. If they were, crossover would have nothing to contribute towards diversifying the population.
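
+ +

A hedged sketch of that crossover step (the genome representation, keyed by innovation number, is invented for illustration):

import random
+
+# Each genome maps innovation number -> connection weight; weights are NOT shared.
+parent1 = {1: 0.5, 2: -0.3, 4: 0.8}
+parent2 = {1: 0.1, 3: 0.9}
+more_fit = parent1                                  # assume parent1 has higher fitness
+
+child = {}
+for inn in set(parent1) | set(parent2):
+    if inn in parent1 and inn in parent2:           # matching gene: choose from either parent
+        child[inn] = random.choice([parent1[inn], parent2[inn]])
+    elif inn in more_fit:                           # disjoint/excess: only from the fitter parent
+        child[inn] = more_fit[inn]
+print(child)
+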

+",16101,,,,,1/28/2019 21:30,,,,0,,,,CC BY-SA 4.0 +10278,2,,10272,1/28/2019 22:30,,2,,"

That is a wonderfully fundamental question.

+ +
+

Learning is the use of a system to change another system so that, instead of doing what it did before, which may have been nothing, it does something else.

+
+ +

In the human brain, the system is the way that genetic expression caused the directed mutability of that brain so that human intentions and responses to external stimuli would alter. The direction of alteration is based on incentives and deterrents. In classes of children, there are things a teacher may do or a culture in the school that motivate learning. Misbehavior is deterred through other mechanisms. In that case, the first system is the educational process and the second system is the brain's ability to use what it was taught.

+ +

Even without classes, children learn things that create comfort, so comfort is the internal incentive. It is wired into us as a kind of teacher to dislike groin moisture, so we learn to use bathroom receptacles. It is wired into us to like praise from parents and teachers, so we tend to perform to get it. Things like that are complex losses and rewards and primary systems to incentivize learning.

+ +

Learning in artificial networks is not nearly as complex as in human brains. In some cases the artificial results are better. In other cases the abilities of the human brain cannot be approached by artificial networks yet.

+ +

The functioning of an artificial network often begins as arbitrary and completely useless, but it is a parametric function, meaning the function can be changed by changing numbers called parameters. Each time the function is used to process an example drawn from a data set, the result is evaluated and the evaluation is used to modify the parameters. Care is taken to not over-modify the parameters, otherwise confusion can occur. Mathematicians call this type of system confusion chaos.

+ +

Repeating this carefully directed and moderated process ideally leads to something called convergence. The learning system is set up so that the result of convergence is a set of parameters that minimize losses, maximize gains, or both.

+ +

Sometimes initial learning is not enough and there are other related things to do later.

+ +
    +
  • Adjusting learned behavior to adapt to new conditions or information
  • +
  • Unlearning things that no longer produce benefit so that new learning can replace them
  • +
  • Relearning things that had faded from memory
  • +
  • Developing greater confidence in what was learned so that unlearning requires greater impetus
  • +
+ +

There are additional technical terms for the above concepts, more details that can be grasped, many categories of learning system types, varieties within those, and the mathematics that was used as a foundation to construct all this and make it work. Because the question was fundamental, those details and technicalities were omitted.

+",4302,,,,,1/28/2019 22:30,,,,0,,,,CC BY-SA 4.0 +10279,2,,10051,1/28/2019 23:15,,1,,"

It seems that in the reference you provided they use attention to compute weights for the encoder representations based on the context provided by the decoder (Fig. 1). As far as I can tell, attention is applied after convolution (in fact, after the GLU step), so it does not affect the feature maps directly. Rather, attention is used to select the target words in the decoder. In other papers (e.g., this one), attention is applied directly to the feature maps in a way that is more similar to what you described.

+ +

Regarding your second question, the paper you reference actually provides a link to the source code they used. It is written in Lua (using Torch) instead of PyTorch, probably because PyTorch development was just starting when the paper was published. At any rate, you should be able to follow the Lua code and translate it into PyTorch.

+",16101,,,,,1/28/2019 23:15,,,,0,,,,CC BY-SA 4.0 +10281,1,10283,,1/29/2019 1:57,,2,413,"

I'm doing transfer learning using Inception on Tensorflow. The code that I used for training is https://raw.githubusercontent.com/tensorflow/hub/master/examples/image_retraining/retrain.py

+

If you take a look at the Argument Parser section at the bottom of the code, you will find these parameters :

+
    +
  • testing_percentage
  • +
  • validation_percentage
  • +
  • test_batch_size
  • +
  • validation_batch_size
  • +
+

So far, I understand that testing and validation percentage is the number of images that we want to train at 1 time. But I don't really understand the use of test batch size and validation batch size. What is the difference between percentage and batch size?

+",20612,,2444,,12/22/2021 23:54,12/22/2021 23:54,What is the difference between validation percentage and batch size?,,1,0,,,,CC BY-SA 4.0 +10283,2,,10281,1/29/2019 8:14,,2,,"

The percentages refer to the number of samples to use (out of the full dataset) as the validation and test datasets. So if you pass a dataset that consists of 100,000 samples to the model and set the validation and testing percentages to be 10% each, your model will train on 80,000 samples, validate on 10,000, and save an additional 10,000 samples for the final test.

+ +

The batch sizes refer to the number of samples in each batch during the test and validation evaluations. +Your model probably can't process 10,000 samples in a single run (due to memory limitations) so during evaluation it breaks the dataset into batches, which are processed sequentially and the result is the mean of all batches.
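
+ +

A small sketch of both points with made-up numbers (how the percentages split the data, and how many sequential batches a validation run takes):

n_samples = 100_000
+validation_percentage = testing_percentage = 10
+n_val = n_samples * validation_percentage // 100        # 10,000
+n_test = n_samples * testing_percentage // 100          # 10,000
+n_train = n_samples - n_val - n_test                    # 80,000
+
+validation_batch_size = 100
+n_batches = n_val // validation_batch_size              # evaluated in 100 sequential batches
+print(n_train, n_val, n_test, n_batches)
+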

+ +

When you are training, the batch size is an important hyper-parameter which has an effect on the properties and final results of the training process. During test/validation it has no effect and only needs to be small enough for your model to be able to run it (evaluation with different batch sizes will produce the same results).

+",20399,,,,,1/29/2019 8:14,,,,0,,,,CC BY-SA 4.0 +10288,1,,,1/29/2019 10:26,,1,54,"

As an experiment, I want to teach an ANN to play the game of Nim.

+ +
+

The normal game is between two players and played with three heaps of any number of objects. The two players alternate taking any number of objects from any single one of the heaps. The goal is to be the last to take an object.

+
+ +

The game is easily solvable and I already wrote a small bot that can play Nim perfectly to provide data sets for supervised learning.

+ +

Now I am struggling with the design question: how should I output the solution for a specific board state? The answer always consists of two components:

+ +
    +
  • How many stones to take (a more or less arbitrary integer value)
  • +
  • Which heap to take the stones from (the index of the heap)
  • +
+ +

What are available design choices in this regard and is there a state-of-the-art design for this type of problem?

+",9161,,,,,8/19/2021 4:06,How to design an ANN to give an answer that includes two different components?,,1,0,,,,CC BY-SA 4.0 +10289,1,12354,,1/29/2019 11:08,,6,6218,"

By reading the abstract of Neural Networks and Statistical Models paper it would seem that ANNs are statistical models.

+ +

In contrast Machine Learning is not just glorified Statistics.

+ +

I am looking for a more concise/summarized answer with focus on ANNs.

+",21269,,2444,user9947,7/1/2021 15:23,10/26/2022 5:17,Are neural networks statistical models?,,3,0,,,,CC BY-SA 4.0 +10294,1,,,1/29/2019 14:23,,1,82,"

I am doing neural machine translation task from language S to language T via interlingua L. So - there is the structure:

+ +
S ->
+encoding of S (crisp) ->
+S-L encoder -> S-L decoder ->
+encoding of L (non-crisp, coming from decoder) ->
+L ->
+encoding of L (crisp) ->
+L-T encoder -> L-T decoder ->
+encoding of T (non-crisp, coming from decoder) ->
+T
+
+ +

All of this can be implemented in PyTorch by more or less adapting the usual encoder-decoder NMT. So, the layer of interlingua L acts as a somewhat symbolic/discrete layer inside the whole S-L-T neural network. My question is - how can such a system be trained in an end-to-end (S-T) manner? The gradient propagates from the T to the L, and at the L one should do some kind of symbolic gradient? I.e. one should be able to compute the difference L1-L2?

+ +

I am somewhat confused by this setting. My question is - are there similar networks which contain a symbolic representation as the intermediate layer, and how can one train such a system? I have heard about policy gradients, but are they relevant to my setting?

+ +

Essentially - if I denote some neural network by symbols x(Wi)y, then the training of this network means that I change Wi and x stays intact. I.e. the last member of the backpropagation equation has the form d.../dw1. But if I combine (chain!) 2 neural networks x(Wi)y-y(Wj)z, then the last backpropagation term for the y(Wj)z has the form (d.../dw1+d.../dy) and hence both w1 and y should be changed/updated by the gradient descent too. So, doesn't some ambiguity arise here? Is such chaining of neural networks possible? Is it possible to train end-to-end chains of neural networks?

+ +

I am also thinking about the use of evolutionary training.

+",8332,,8332,,1/29/2019 14:44,1/29/2019 15:41,Neural network with logical hidden layer - how to train it? Is it policy gradient problem? Chaining NNs?,,1,0,,,,CC BY-SA 4.0 +10295,2,,10294,1/29/2019 15:41,,1,,"

The chain rule applies here as usual, and the term symbolic gradient is interesting.

+ +

How such might apply will depend much on the nature of the policy layer representation and the connections of that layer or layers to other non-policy layers.

+ +
    +
  • A binary threshold layer may feed a logical inference layer.
  • +
  • A sigmoid function might feed a fuzzy logic layer.
  • +
  • The logical layer could be a production system adapted to look like a layer.
  • +
  • The logical layer might be more like a PLD (programmable logic device).
  • +
  • The output of the logical layer might be mapped one to one with a perceptron or an LSTM layer
  • +
+ +

In the example in the question, we can't, from the information given in the question, assume that discrete means a discrete approximation of a curve, since the term used is discrete/symbolic. Since the nature of logic cannot be generalized in a differentiable closed form, it may be that it has to be probabilistically represented in closed form so a derivative may be obtained, if that is possible in the specific case under study.

+ +

Back propagation is a feedback mechanism to provide corrective signaling to the portions of the system that must be corrected. The hope is that the direction of the correction during back propagation leads to convergence on the global minimum of loss without severe delay, which is not always easy to make happen, thus Gaussian noise injection explicitly or indirectly through mini-batching, momentum, multiple initialization states, parallel searches with different hyper-parameters, and other techniques.

+ +

One technique used quite a bit in analog systems and entering use in digital systems in recent years is the idea of multiple corrective mechanisms. Control theory for instrumentation and countermeasure aeronautics is thick with this concept. For the case given in this question, it can look like this, with $f$ representing feed forward layer with some activation function, $\ell$ representing some logical inference container, $p$ is the parameter set for the function of the same subscript, $\epsilon$ is the evaluation function (error, loss, value, benefit), $b$ representing the corrective feedback using the back propagation, the Jacobian, and the chain rule.

+ +

$$ f_1 \quad \rightarrow \;\quad \ell \quad \rightarrow \quad\ f_2 \quad \rightarrow \quad\ \epsilon \\ + p_1 \leftarrow \, b \leftarrow \ell\, ; \quad\quad\quad \; p_2 \, \leftarrow b \leftarrow \epsilon \\ +\quad\quad\quad\quad\quad \ell \quad\quad \leftarrow \quad\quad\quad\quad\quad \epsilon + $$

+ +

In the last line $\epsilon$ is simply a fact (piece of information) passed into a rules engine session or a floating point representation of a feedback signal assigned as a probability to a rule in a fuzzy logical container. In the first half of the second line, the logical unit is responsible for providing the evaluation function output.

+ +

In Norbert Wiener's Cybernetics, he states the following [1] with regard to automating the steering of ships via rudder control, from which cybernetics got its name.

+ +
+

In the important book of MacColl [2], we have an example of a complicated system which can be stabilized by two feedbacks but not by one.

+
+ +

He then derives the differential equations and proves MacColl's conclusion. That conclusion is used in nearly every instrumentation amplifier circuit on a chip sold today, if not all of them. There is no way to get the desired stability without the multiple feedback paths.

+ +

Similarly, it may not be possible to stabilize the convergence of $f_1$, $\ell$, and $f_2$ via a single mechanism of back-propagation using $\epsilon$ and the Jacobians to descend a gradient.

+ +
+ +

References

+ +

[1] Norbert Wiener's *Cybernetics*, 1948, MIT, p. 106 in the 1996 printing

+ +

[2] LeRoy A. MacColl. Fundamental Theory of Servomechanisms. Van Nostrand, New York, 1945

+",4302,,,,,1/29/2019 15:41,,,,0,,,,CC BY-SA 4.0 +10297,2,,10265,1/29/2019 16:49,,0,,"

Speed and size, no. That's because speed is dependent on processor clock periods and the effective parallel nature of the particular deep RL design in the environment, which is also dependent on cores and host clustering. Size is not really quantifiable in a way that is meaningful in this relationship because there is a broader and more complex structure of a network that might be trained to produce a value to use in the RL algorithm chosen.

+ +
    +
  • Width for each layer
  • +
  • Cell type and possibly activation function for each layer
  • +
  • Hyper-parameters
  • +
+ +

One can say that number of clock cycles times number of effective parallel processors is correlated to all the above, the overhead of the RL algorithm used, and the size of each data tensor.

+ +

There are a few inequalities developed in the PAC (probably approximately correct) framework, so it would not be surprising if there were some bounding rules for the relationship between clock cycles, effective parallelism, data widths, activation functions, and RL algorithms for deep RL.

+ +

Study of the algorithm, perhaps in conjunction with experimentation, may reveal the primary factors, essentially the processing bottlenecks. Further study of the factors involved in controlling loop iteration counts, which cause the bottleneck, could permit the quantification of computing effort required to maintain a particular maximum allowable response time, but that would be specific to a particular design.

+ +

Such might produce a metric that is effectively a count of clock cycles across the system's potentially parallel architecture for a worst case or mean RL action selection. That could then be used to determine the response time for a given system with all of the factors mentioned above fixed, including the priority and scheduling of the processes in each operating system involved.

+ +

Here's a guess. Feel free to critique, since such a formulation is a project far beyond the effort that should be put into answering a question online.

+ +

$$ t_r \propto e^{k_v} \, n_v \, \eta_v \, t_v \, \mu_v \, \sum c_v + e^{k_d} \, n_d \, \eta_d \, t_d \, \mu_d \, \sum c_d \; \text{,} $$

+ +

where $t$ is time, $k$ is tensor complexity, $n$ is number of cores, $\eta$ is the effective efficiency of the parallel processing, $\mu$ is the effective overhead cost of the glue code, $c$ is the cycles require of the significant (and probably repetitive) elements in the algorithms, $t_r$ is the total time to response, and the subscripts $v$ and $d$ designate the variable subscripted as either RL value determination metrics or deep network metrics respectively.

+",4302,,,,,1/29/2019 16:49,,,,3,,,,CC BY-SA 4.0 +10300,1,10330,,1/29/2019 22:09,,1,203,"

I'm trying to train a neural network to evaluate chess positions, i.e. whether white (0.0) or black (1.0) would win.

+ +

Currently the input consists of 4 bits per chess field (piece id 0 - 12, equals 64*4). Factors like castling are being ignored for now. Also, all training sets are random positions from popular games where it's white's turn and the desired output is the outcome of the game (0.0, 0.5, 1.0).

+ +

Are my input values the right choice? +How many hidden layers / neurons for each layer should be used and what's the best learning rate? +What type of NN's and which activation function would you recommend for this project?

+",19783,,,,,1/31/2019 15:23,Choosing the right neural network settings,,2,0,,4/17/2022 4:45,,CC BY-SA 4.0 +10303,1,,,1/30/2019 8:20,,6,1999,"

From what I understand, the Monte Carlo Tree Search algorithm is a solution algorithm for model-free reinforcement learning (RL).

+ +

Model-free RL means the agent doesn't know the transition and reward models. Thus, the only way for it to know which next state it will observe and which reward it will get is to actually perform an action.

+ +

My question is: +if that is the case, then how come the agent knows which state it will observe during the rollout, since the rollout is just a simulation, and the agent never actually performs that action? (It never really interacts with the environment: e.g. it never really moves a piece in a Go game during a rollout or look-ahead, and thus cannot observe anything.)

+ +

As I understand it, it can only assume what it would observe when not actually interacting with the environment (during simulation) if it knows the transition model. The same argument goes for the rewards during a rollout/simulation.

+ +

In this case, doesn't the rollout in the Monte Carlo Tree Search algorithm suggest that the agent knows the transition model and reward model, and is thus a solution algorithm for model-based reinforcement learning and not model-free reinforcement learning?

+ +

** It makes sense in AlphaGo, since the agent is trained to estimate what it would observe. But the MCTS method (without the policy and value networks) assumes that the agent knows what it would observe even though no additional training is included.

+",21872,,21872,,1/30/2019 9:47,2/8/2019 21:37,Rollout algorithm like Monte Carlo search suggest model based reinforcement learning?,,2,4,,,,CC BY-SA 4.0 +10305,2,,3518,1/30/2019 10:50,,0,,"

Look up tournament selection. +Tournament selection is a method of selecting an individual from a population of individuals in a genetic algorithm. Tournament selection involves running several ""tournaments"" among a few individuals (or ""chromosomes"") chosen at random from the population. The winner of each tournament (the one with the best fitness) is selected for crossover. Selection pressure, a probabilistic measure of a chromosome's likelihood of participation in the tournament based on the participant selection pool size, is easily adjusted by changing the tournament size. If the tournament size is larger, weak individuals have a smaller chance to be selected, because, if a weak individual is selected to be in a tournament, there is a higher probability that a stronger individual is also in that tournament.
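
+ +

A minimal sketch of the idea (toy individuals and a toy fitness function):

import random
+
+def tournament_select(population, fitness, k=3):
+    # pick k random contestants and return the fittest one
+    contestants = random.sample(population, k)
+    return max(contestants, key=fitness)
+
+population = list(range(20))                          # toy individuals
+winner = tournament_select(population, fitness=lambda x: x)
+print(winner)                                         # biased towards larger (fitter) values
+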

+",21876,,,,,1/30/2019 10:50,,,,0,,,,CC BY-SA 4.0 +10306,1,10481,,1/30/2019 11:39,,4,2343,"

Status:

+ +

For a few weeks now, I have been working on a Double DQN agent for the PongDeterministic-v4 environment, which you can find here.

+ +

A single training run lasts for about 7-8 million timesteps (about 7000 episodes) and takes me about 2 days, on Google Colab (K80 Tesla GPU and 13 GB RAM). At first, I thought this was normal because I saw a lot of posts talking about how DQNs take a long time to train for Atari games.

+ +

Revelation:

+ +

But then after cloning the OpenAI baselines repo, I tried running python -m baselines.run --alg=deepq --env=PongNoFrameskip-v4 and this took about 500 episodes and an hour or 2 to converge to a nice score of +18, without breaking a sweat. Now I'm convinced that I'm doing something terribly wrong but I don't know what exactly.

+ +

Investigation:

+ +

After going through the DQN baseline code by OpenAI, I was able to note a few differences:

+ +
    +
  • I use the PongDeterministic-v4 environment but they use the PongNoFrameskip-v4 environment
  • +
  • I thought a larger replay buffer size was important, so I struggled (with the memory optimization) to ensure it was set to 70000 but they set it to a mere 10000, and still got amazing results.
  • +
  • I am using a normal Double DQN, but they seem to be using a Dueling Double DQN.
  • +
+ +

Results/Conclusion

+ +

I have my doubts about such a huge increase in performance with just these few changes. So I know there is probably something wrong with my existing implementation. Can someone point me in the right direction?

+ +

Any sort of help will be appreciated. Thanks!

+",21513,,,,,2/10/2019 3:56,"Each training run for DDQN agent takes 2 days, and still ends up with -13 avg score, but OpenAi baseline DQN needs only an hour to converge to +18?",,2,0,,,,CC BY-SA 4.0 +10308,1,10326,,1/30/2019 13:29,,2,115,"

In LMS (least mean squares), we use a quadratic error function, and quadratic functions are generally parabolas (some convex-like shape). I wonder whether that is the reason why we use the least squares error metric. If that is not the case (it is not ALWAYS convex, or that is not the reason WHY we use LMS), what is the reason then? Why does this metric change for deep learning/neural networks but work for regression problems?

+ +

[EDIT]: Will this always be a convex function or is there any possibility that it will not be convex?

+",18956,,18956,,1/31/2019 17:18,1/31/2019 17:18,"Will LMS always be convex function? If yes, then why do we change it for neural networks?",,1,0,,,,CC BY-SA 4.0 +10311,2,,10306,1/30/2019 15:17,,3,,"

Dueling architectures create bigger differences in the values of actions in the state space. This is because the state-value V(s) function is estimated separately from the state-action value Q(s, a). A new quantity, the advantage of an action, can then be defined as A(s, a) = Q(s, a) - V(s).

+ +
+

The Q function, however, measures the value + of choosing a particular action when in this state. The advantage + function subtracts the value of the state from the Q + function to obtain a relative measure of the importance of + each action.

+ +

Dueling Network Architectures for Deep Reinforcement Learning

+
+ +
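
A hedged PyTorch sketch of how a dueling head combines V(s) and A(s, a) into Q(s, a); the layer sizes are placeholders, and the mean subtraction follows the common formulation from the paper.

import torch
+import torch.nn as nn
+
+class DuelingHead(nn.Module):
+    def __init__(self, n_features=128, n_actions=6):
+        super().__init__()
+        self.value = nn.Linear(n_features, 1)               # V(s)
+        self.advantage = nn.Linear(n_features, n_actions)   # A(s, a)
+
+    def forward(self, x):
+        v, a = self.value(x), self.advantage(x)
+        return v + a - a.mean(dim=1, keepdim=True)           # Q(s, a)
+
+q_values = DuelingHead()(torch.randn(32, 128))
+print(q_values.shape)                                        # torch.Size([32, 6])
+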
+ +

To better direct you, here are 2 resources that could really help you understand why those differences are important.

+ +

Speeding up DQN on PyTorch: how to solve Pong in 30 minutes

+ +

The main points of the blog are:

+ +
    +
  1. Use a larger batch size and play several steps before updating
  2. +
  3. Play and train in a separate process
  4. +
  5. Use asynchronous CUDA transfers
  6. +
+ +

RL Adventure

+ +

This GitHub library has easy-to-follow Jupyter notebooks and links to all of the papers. +It includes:

+ +
    +
  • DQN
  • +
  • Double DQN
  • +
  • Dueling DQN
  • +
  • Prioritized Experience Replay
  • +
  • Noisy Networks for Exploration
  • +
  • Distributional RL
  • +
  • Rainbow (That network that deepmind made that had so many things in it they couldn't find a good name)
  • +
  • Distributional RL with Quantile Regression
  • +
  • Hierarchical Deep RL
  • +
+",4398,,,,,1/30/2019 15:17,,,,2,,,,CC BY-SA 4.0 +10312,2,,10303,1/30/2019 17:27,,4,,"
+

From what I understand, the Monte Carlo Tree Search algorithm is a solution algorithm for model-free reinforcement learning (RL).

+
+ +

Monte Carlo Tree Search is a planning algorithm. It can be considered part of RL, in a similar way to e.g. Dyna-Q.

+ +

As a planning algorithm MCTS does need access to a model of the environment. Specifically it requires a sampling model, i.e. a model that can accept a state and action, then return a single next state and reward with the same probabilities as the target system. The alternative model, used by other RL techniques such as Value Iteration, is a distribution model which provides the full distribution of probabilities for rewards and next states.
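
+ +

A toy illustration of the distinction (all numbers invented): a sampling model returns one outcome drawn with the right probabilities, while a distribution model returns the whole distribution.

import random
+
+def sampling_model(state, action):
+    # one next state and reward, drawn with the environment's probabilities (what MCTS needs)
+    if random.random() < 0.7:
+        return state + 1, 1.0
+    return state, 0.0
+
+def distribution_model(state, action):
+    # the full outcome distribution (what e.g. Value Iteration needs)
+    return {(state + 1, 1.0): 0.7, (state, 0.0): 0.3}
+
+print(sampling_model(0, 'a'), distribution_model(0, 'a'))
+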

+ +
+

if that is the case, then how come the agent knows which state it will observe during the rollout, since the rollout is just a simulation, and the agent never actually performs that action?

+
+ +

It is not the case. The agent knows what it will observe during the simulation because the simulation is a sampling model.

+ +
+

In this case, doesn't the rollout in the Monte Carlo Tree Search algorithm suggest that the agent knows the transition model and reward model, and is thus a solution algorithm for model-based reinforcement learning and not model-free reinforcement learning?

+
+ +

Yes. This is how most planning algorithms work.

+ +

The simulation can be driven purely by sampling from previous experience, which is how Dyna-Q works*. I would still consider that a model-based approach, and its success depends a lot on how well such a model can be learned. In many cases, variance due to errors in the learned model adversely affects learning. So MCTS works best in environments that can be accurately sampled, because they are strongly rules-driven. For example, board games.

+ +
+ +

* Functionally DynaQ is almost identical to experience replay. So much so, that whether you consider it a planning algorithm or experience replay added to basic Q learning is more a matter of how you present the design of the learning agent - e.g. perhaps a designer wants to focus on the model-learning side more, so has code that explicitly represents the learned model.

+",1847,,,,,1/30/2019 17:27,,,,2,,,,CC BY-SA 4.0 +10313,1,,,1/30/2019 17:56,,0,81,"

Gradient training indiscriminately changes all the weights and nodes of the neural network. But one can imagine situations where the training should be shaped, e.g.:

+ +
    +
  • One can put constraints on some of the weights. E.g. the human brain contains regions whose inner connections are more dense than external connections with other regions. One can try to mimic this region-shaped structure in neural networks as well, and hence one can require that inter-regional weights (as opposed to intra-regional weights) are close to zero (except, possibly, for some channels among regions);
  • +
  • One can put constraints on some of the weights in such a manner that some layer of neurons has a specific structure. E.g. consider the popular encoder-decoder architecture of neural machine translation, e.g. https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html We can see that the whole output of the encoder is expressed as a single layer of neurons which is forwarded to the input of the decoder. So - one can require that the set of all the possible outputs of the encoder (e.g. the possible values of this single layer of neurons) forms some kind of structure, e.g. some grammar of some interlingua. This example is for illustration only; I have in mind a more complex neural network which has one layer of neurons which indeed should output the encoded words of some interlingua. So, one is required to guide all the weights of the encoder in such a manner that this single layer has only allowable values.
  • +
+ +

So - my question is - are there methods that guide the gradient descent training with additional information about the weights or about the nodes (i.e. about whole subsets of weights that have some impact on a specific layer of nodes)? E.g. methods that impose the region structure on the neural network or that constrain the values of some nodes to be in a specific range only?

+ +

Of course, it is quite easy to include such constraints in evolutionary neural networks - one can simply reject the neural networks with weights that violates the constraints. But is it possible to do this in gradient-like training?

+",8332,,,,,6/26/2019 18:30,How to shape the weights or nodes during gradient training of neural network? Training with constraints?,,1,0,,,,CC BY-SA 4.0 +10314,2,,10303,1/30/2019 17:59,,5,,"

Whether or not MCTS is even a Reinforcement Learning algorithm at all may be up for debate, but let's assume that we view it as an RL algorithm here.

+ +

For practical purposes, MCTS really should be considered to be a Model-Based method. Below, I'm going to describe how you could view it as a Model-Free RL approach in some way... and then wrap back to why that viewpoint isn't really often useful in practice.

+ +
+ +

More specifically, following this paper, we'll think of an MCTS search process as a value-based RL algorithm (it learns estimates of a value function, very much like Sarsa, $Q$-learning, etc.), which limits itself to learning values for the states that it chooses to represent by nodes in the search tree (this set of states that it chooses to represent gradually grows over time during the search process).

+ +

Unlike traditional RL approaches, such an MCTS process doesn't really result in a policy or an exhaustive / generalizable value function estimator that can be extracted after the ""training"" process and re-used in many different states afterwards. We normally play a move after running MCTS, and then discard everything and start over again for the next move (maybe we'll keep a relevant part of the search tree and reuse that, but that's a minor detail... we certainly won't be able to re-use our search results in another match/game/episode).

+ +

The MCTS search process itself can be viewed as a Model-Free RL approach; every iteration of the search can be viewed as an actual episode of an ""agent"" that is collecting experience in a model-free manner in a ""real"" environment (but not as real as the game for which we're running the complete search process), where this ""internal agent"" first follows the Selection Policy for a while (e.g. UCB1), and then a Play-out policy for the remainder of the episode (e.g. uniform random).

+ +

This ""internal"" agent ""inside"" the MCTS iterations could be viewed as learning from a model-free RL process. The main problem with this view in practice is that, because MCTS ""decides"" to have a laser-like focus on a relatively small subset of states (around the root node), this process really only leads to something useful being learned for that state in the root node (and possibly some of the closest children/grandchildren/etc.). We don't really learn something that can easily be re-used in the future in MCTS. What this means in practice is that we have to be able to re-run the complete ""Reinforcement Learning process"" (or search) whenever we need to make a decision (i.e. every turn in a turn-based game).

+ +

That is feasible if you have a simulator, or model of the environment, in which you can do the learning... but then we really get back to actually have a model-based approach.

+ +
+ +

Fun fact: if you like to take the viewpoint of MCTS as a Model-Free RL approach, you could also turn that into a Model-Based approach again by incorporating additional forms of planning/search ""inside"" the MCTS iterations. For example, you can run little instances of MiniMax inside every MCTS iteration, and I suppose that would turn the approach into a Model-Based approach again even in this viewpoint.

+",1641,,,,,1/30/2019 17:59,,,,1,,,,CC BY-SA 4.0 +10315,1,,,1/30/2019 18:22,,1,226,"

I am creating a VAE for time series data using CNNs. The data has 4800 timesteps and 4 features. It is standardized and normalized. The network I am using is implemented in Keras as follows. I have used a MSE reconstruction error:

+ + + +
# network parameters
+(_, seq_len, feat_init) = X_train.shape
+input_shape = (seq_len, feat_init)
+intermediate_dim = 512
+batch_size = 128
+latent_dim = 10
+epochs = 10
+img_chns = 3
+filters = 32
+num_conv = (2, 2)
+epsilon_std = 1
+
+inputs = Input(shape=input_shape)
+conv1 = Conv1D(16, 3, 2, padding='same', activation = 'relu', data_format = 'channels_last')(inputs)
+conv2 = Conv1D(32, 2, 2, padding='same', activation = 'relu', data_format = 'channels_last')(conv1)
+conv3 = Conv1D(64, 2, 2, padding='same', activation = 'relu', data_format = 'channels_last')(conv2)
+flat = Flatten()(conv3)
+hidden = Dense(intermediate_dim, activation='relu')(flat)
+z_mean = Dense(latent_dim, name = 'z_mean')(hidden)
+z_log_var = Dense(latent_dim, name = 'z_log_var')(hidden)
+
+def sampling(args):
+    z_mean, z_log_var = args
+    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
+                              mean=0., stddev=epsilon_std)
+    return z_mean + K.exp(z_log_var) * epsilon
+
+z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
+
+decoder_hid = Dense(intermediate_dim, activation='relu')(z)
+decoder_upsample = Dense(38400, activation='relu')(decoder_hid)
+decoder_reshape = Reshape((600,64))(decoder_upsample)
+
+deconv1 = Conv1D(filters=32, kernel_size=2, strides=1,
+             activation=""relu"", padding='same', name='conv-decode1')(decoder_reshape)
+upsample1 = UpSampling1D(size=2, name='upsampling1')(deconv1)
+deconv2 = Conv1D(filters=16, kernel_size=2, strides=1,
+             activation=""relu"", padding='same', name='conv-decode2')(upsample1)
+upsample2 = UpSampling1D(size=2, name='upsampling2')(deconv2)
+deconv3 = Conv1D(filters=8
+                 , kernel_size=2, strides=1,
+             activation=""relu"", padding='same', name='conv-decode3')(upsample2)
+upsample3 = UpSampling1D(size=2, name='upsampling3')(deconv3)
+x_decoded_mean_squash = Conv1D(filters=4
+                 , kernel_size=4, strides=1,
+             activation=""relu"", padding='same', name='conv-decode4')(upsample3)
+
+class CustomVariationalLayer(Layer):
+    def __init__(self, **kwargs):
+        self.is_placeholder = True
+        super(CustomVariationalLayer, self).__init__(**kwargs)
+
+    def vae_loss(self, x, x_decoded_mean_squash):
+        x = K.flatten(x)
+        x_decoded_mean_squash = K.flatten(x_decoded_mean_squash)
+        xent_loss = mse(x, x_decoded_mean_squash)
+        kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
+        return K.mean(xent_loss + kl_loss)
+
+    def call(self, inputs):
+        x = inputs[0]
+        x_decoded_mean_squash = inputs[1]
+        loss = self.vae_loss(x, x_decoded_mean_squash)
+        self.add_loss(loss, inputs=inputs)
+        return x
+
+outputs = CustomVariationalLayer()([inputs, x_decoded_mean_squash])
+
+# entire model
+vae = Model(inputs, outputs)
+vae.compile(optimizer='adadelta', loss=None)
+vae.summary()
+
+ +

I wanted to ask whether it is possible for the network to nearly perfectly reconstruct the test timeseries when passed through the entire VAE network, but still output junk when using a random Normal input. For further details, here is one of the inputs and outputs when passing a test signal through the network.

+ +

+ +

Here is a reconstruction generated purely from a random sample.

+ +

+ +

How can this be? Even if there was a posterior collapse, the VAE should still be able to generate a good output sample with a random input. To further test this I decided to split the network into two parts (encoder and decoder), and then pass the test image through it. The encoder and decoder networks were made by simply splitting the trained VAE network as follows:

+ + + +
idx = 9 
+input_shape = vae.layers[idx].get_input_shape_at(0)
+
+layer_input = Input(shape=(input_shape[1],)) 
+
+x = layer_input
+for layer in vae.layers[idx:-1]:
+    x = layer(x)
+
+decoder = Model(layer_input, x)
+decoder.summary()
+
+ + + +
idx = 0
+input_shape = vae.layers[idx].get_input_shape_at(0)
+
+layer_input = Input(shape=input_shape)
+
+x = layer_input
+for layer in vae.layers[idx + 1:7]:
+     x = layer(x)
+
+encoder = Model(layer_input, x)
+encoder.summary()
+
+ +

Interestingly, I also got junk output here. I'm not sure how it is possible. If the model itself is getting a near perfect reconstruction, surely just passing an image through the encoder, extracting the latent mean, and then passing that latent mean through the decoder should also create a near perfect image?

+ +

Is there something I am missing here?

+",21883,,,,,1/30/2019 18:22,How can VAE have near perfect reconstruction but still output junk when using random noise input,,0,0,,,,CC BY-SA 4.0 +10318,1,,,1/31/2019 1:33,,3,133,"

Can I treat a stochastic policy (over a finite action space of size $n$) as a deterministic policy (in the set of probability distribution in $\mathbb{R}^n$)?

+ +

It seems to me that nothing is broken by making this mental translation, except that the ""induced environment"" now has to take a stochastic action and spit out the next state, which is not hard using on the original environment. Is this legit? If yes, how does this ""deterministify then DDPG"" approach compare to, for example, A2C?

+",21892,,2444,,4/4/2019 16:37,4/4/2019 16:37,Can I use deterministic policy gradient methods for stochastic policy learning?,,0,1,,,,CC BY-SA 4.0 +10321,2,,10300,1/31/2019 9:29,,1,,"

Guessing the winner from a chess position is difficult for classification. In chess, even if you start from the same position, it can give you different results depending on the players' actions. So, I recommend you use Temporal Difference (TD) Learning, the driving concept behind Reinforcement Learning.

+ +

Some methods in Reinforcement Learning still use a neural net but not for predicting the winner. The prediction in Q-Learning, a popular Reinforcement Learning algorithm, predicts the ""value"" of choosing a certain action while in a certain position for a player. From those values, a player can choose the best action for the current position.
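
+ +

As a hedged illustration of the TD idea (tabular Q-learning with made-up states and moves, not practical for full chess, but it shows the kind of value that gets learned):

from collections import defaultdict
+
+Q = defaultdict(float)                       # (state, action) -> estimated value
+alpha, gamma = 0.1, 0.99
+
+def td_update(state, action, reward, next_state, next_actions):
+    # one Q-learning step: nudge Q(s, a) towards reward + gamma * max over a' of Q(s', a')
+    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
+    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
+
+td_update('start', 'e4', 0.0, 'after_e4', ['e5', 'c5'])
+print(Q[('start', 'e4')])
+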

+ +

The following references might interest you:

+ + +",16565,,4398,,1/31/2019 15:12,1/31/2019 15:12,,,,1,,,,CC BY-SA 4.0 +10322,1,10328,,1/31/2019 10:35,,4,1359,"

I'm doing my bachelor thesis on traffic sign detection using a single shot detector called YOLO. These single shot detectors can perform detection of objects in an image and so they have a specific way of training, i.e. training on full images. That's quite a problem for me, because the biggest real dataset with full traffic sign images is the Belgian one, with 9000 images in 210 classes, which is unfortunately not enough to train a good detector.

+ +

To overcome this problem, I've created a DatasetGenerator, which does quite a good job of generating synthetic datasets, as you can see in the results directory.

+ +

Recently I came across GANs, which can (among other things) generate or extend an existing dataset, and I would like to use these networks to compare against my dataset generator. I've tried this introduction to GANs successfully.

+ +

The problem is that it's unsupervised learning and so there are no annotations. It means it's able to extend my dataset of traffic signs, but the generated dataset won't be annotated at all, which is a problem.

+ +

So my question is: is there any way to use GANs to extend my dataset of full traffic sign images with annotations of traffic sign class and position? Actually, the class is not important, because I can do it separately for each class, but what matters is the position of the traffic sign in the generated image.

+",18760,,18760,,3/7/2020 8:39,3/7/2020 8:39,Using GAN's to generate dataset for CNN training,,2,0,,,,CC BY-SA 4.0 +10325,2,,10322,1/31/2019 11:57,,0,,"

You could add the desired traffic sign location to the latent vector and then arrange that the generator incurs a loss if the traffic sign is not at the right place in the generated image.

+",21726,,,,,1/31/2019 11:57,,,,3,,,,CC BY-SA 4.0 +10326,2,,10308,1/31/2019 12:17,,2,,"
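
A minimal Keras-style sketch of that conditioning idea (the layer sizes, the 4-number box encoding, and all names are my own placeholders, not a recipe):

+
from keras.layers import Input, Dense, Concatenate
+from keras.models import Model
+
+z = Input(shape=(100,))              # ordinary noise vector
+sign_pos = Input(shape=(4,))         # desired sign location: x, y, width, height
+h = Concatenate()([z, sign_pos])     # condition the generator on the desired position
+h = Dense(256, activation='relu')(h)
+img = Dense(64 * 64 * 3, activation='tanh')(h)   # flattened fake image
+
+generator = Model([z, sign_pos], img)
+

The same position input would then also be fed to whatever loss term checks that the sign really ends up in that place.

+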

Square loss is fine for regression, since minimizing it is the same as maximizing the likelihood of the model parameters (under assumption that the error is Gaussian). However, if the model directly produces probabilities, then it is natural to use these probabilities directly within the loss. Hence, in all classification models we prefer to minimize negative log-likelihood of the correct class.

+ +

Note that choosing a natural loss leads to several practical advantages. In particular, applying a quadratic loss after a sigmoid activation leads to very poor gradients when the sigmoid is saturated in the wrong direction. The negative log-likelihood loss has no such problems.

+ +
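
To make the saturation point concrete, here is a small numeric sketch (plain numpy; the numbers are only illustrative):

+
import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+z, y = -10.0, 1.0                      # pre-activation far in the wrong direction, true label 1
+p = sigmoid(z)
+
+grad_mse = 2 * (p - y) * p * (1 - p)   # gradient of (p - y)^2 w.r.t. z: ~ -9e-5, almost no signal
+grad_nll = p - y                       # gradient of -log(p) w.r.t. z: ~ -1, a strong signal
+print(grad_mse, grad_nll)
+

+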

This issue is not specific for neural networks. Logistic regression has used the negative log likelihood loss ever since 1958.

+",21726,,21726,,1/31/2019 17:17,1/31/2019 17:17,,,,1,,,,CC BY-SA 4.0 +10327,1,,,1/31/2019 12:52,,2,604,"

I recently came across a paper on Deep Ranking. I was wondering whether this could be used to classify book covers as book titles. (For example, if I had a picture for the cover of the second HP book, the classifier would return Harry Potter and the Chamber of Secrets.)

+ +

For example, say I have a dataset of book covers along with the book titles in text. Could that data set be used for this deep ranking algorithm, or is there a much better way to approach my problem? I'm quite new to this whole thing, and this is one of my first projects in this field.

+ +

What I'm trying to create is a mobile app where people can take a picture of a book cover, have an algorithm/neural net classify the title of the book, and then have some other algorithm connect that to the book's Goodreads page.

+ +

Thanks for the help!

+",21905,,16565,,2/1/2019 13:34,2/2/2019 23:08,Deep Ranking/Best way to classify book covers?,,2,0,,,,CC BY-SA 4.0 +10328,2,,10322,1/31/2019 13:03,,3,,"

I think you'll enjoy this work from Apple on improving the realism of synthetic images. Essentially what you need to do is generate a synthetic image then have your GAN modify the synthetic image so that a 1) a discriminator thinks it is real while also 2) not changing the gross structure of the image very much (so the traffic sign doesn't move) - yes, this loss function is going to take a little work!

+ +

Making synthetic data realistic enough to allow models to generalize successfully in the real world is a very active and exciting area of research, not least with respect to robotics, and so the work you are doing now should make you very attractive indeed to the right employer.

+",17770,,,,,1/31/2019 13:03,,,,4,,,,CC BY-SA 4.0 +10329,1,10334,,1/31/2019 13:23,,3,2048,"

I'm training a SARSA agent to update a Q function, but I'm confused about how you handle the final state. In this case, when the game ends and there is no $S'$.

+

For example, the agent performed an action based on the state $S$, and, because of that, the agent won or lost and there is no $S'$ to transition to.

+

So, how do you update the Q function with the very last reward in that scenario, given that the state hasn't actually changed? In that case, $S'$ would equal $S$ even though an action was performed and the agent received a reward (they ultimately won or lost, so it is quite an important update to make!).

+

Do I add extra inputs to the state "agent won" and "game finished", and that's the difference between $S$ and $S'$ for the final Q update?

+

To make clear, this is in reference to a multi-agent/player system. So, the final action the agent takes could have a cost/reward associated with it, but the subsequent actions other agents then take could further determine a greater gain or loss for this agent and whether it wins or loses. So, the final state and chosen action, in effect, could generate different rewards without the agent taking further actions.

+",20352,,2444,,11/1/2020 14:54,11/1/2020 14:54,How to deal with the terminal state in SARSA in a multi-agent setting?,,1,1,,,,CC BY-SA 4.0 +10330,2,,10300,1/31/2019 15:23,,2,,"

Easy ones first:

+ +
    +
  • Activations are going to be RELU all the way down, until your final softmax layer (win probability?) (because empirically, RELU does great on most problems, except when your model is an RNN, and it makes the gradient explode, or in the final layer of a regression model - numerical stability etc.).
  • +
  • You probably want to structure this with some layers of convolutions with max pooling between them, then 1 or 2 fully connected (FC) layers near the end (because if you only have FC layers, you probably won't have enough data to train them)
  • +
  • Well worth trying some 1D convolutions (which cleverly combine channels from convolutions created by previous layers).
  • +
  • Learning rate: take the SGD default to begin with, then tune later. Problem dependent, I'm afraid! The returns to tuning can be large though (my kids twigged to this quite quickly when playing with a toy problem).
  • +
+ +

Now the hard bit - encoding your input:

+ +
    +
  • Categorical encoding using a single integer per board position could cause your model some grief (it is an input that ""looks like"" a real number, but of course it isn't, and values that seem numerically close may represent pieces with radically different abilities (perhaps the code for King is 1 and Queen is 2 and Bishop is 3, but all those pieces have such different attributes)).
  • +
  • I would strongly consider thinking of each piece/player combo as a ""colour channel"" - a 64 cell grid where each value is either zero or one (so, one channel for White's pawns, another for White's knights, and so on); a small sketch of this encoding follows the list.
  • +
+ +
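
A small sketch of that channel encoding (the piece codes and the board representation here are invented purely for illustration):

+
import numpy as np
+
+PIECES = ['pawn', 'knight', 'bishop', 'rook', 'queen', 'king']
+COLOURS = ['white', 'black']
+
+def encode_board(board):
+    # board: dict mapping (row, col) -> (colour, piece); returns an 8x8x12 tensor
+    planes = np.zeros((8, 8, len(PIECES) * len(COLOURS)), dtype=np.float32)
+    for (row, col), (colour, piece) in board.items():
+        channel = COLOURS.index(colour) * len(PIECES) + PIECES.index(piece)
+        planes[row, col, channel] = 1.0
+    return planes
+
+x = encode_board({(0, 4): ('white', 'king'), (7, 4): ('black', 'king')})
+

+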

Finally, those labels of yours: do think about what you'll do with drawn games - perhaps you have 3 possible outcomes, not 2?

+ +

We would all be fascinated to hear how you get on - I hope you'll write your work up in some form (and that you'll come back and complain/praise our advice, as appropriate!).

+",17770,,,,,1/31/2019 15:23,,,,0,,,,CC BY-SA 4.0 +10331,2,,10327,1/31/2019 16:11,,1,,"

I read some papers that talk about this; you can take a look at them, maybe they will help you.

+ +

Deep Neural Network Learns to Judge Books by Their Covers

+ +

Classification of Book Genres By Cover and Title

+",21907,,,,,1/31/2019 16:11,,,,0,,,,CC BY-SA 4.0 +10332,2,,5939,1/31/2019 17:04,,1,,"

I think context is important here. Using tactics like those used by Scotland Yard for over a century is probably the best way. Establishing alibis, realistic time lines, motives. For a legal setting, it would be possible to prove these images were fake using methods like this. From an I.T. perspective, it may be possible to pinpoint an origin for these images. If thousands of duplicitous images came from a single origin, then any images from this origin are suspect.

+ +

I think, in general, we should retrain ourselves to not believe everything we see. There are so many methods for faking images, that photography can no longer be considered to be the best evidence of an event occurring. We should not ignore all images, but instead seek outside concurrence of facts before jumping to conclusions. If all facts point to an event happening, then that photograph is likely to be real.

+",16959,,1641,,1/31/2019 19:42,1/31/2019 19:42,,,,0,,,,CC BY-SA 4.0 +10333,2,,10210,1/31/2019 17:05,,1,,"

Check the loss function.

+

It might be that your environment is impossible to learn. However, most likely the network simply can't handle it. By measuring the loss during the learning stage, if you find it is always very high and does not decrease, it's a strong indication this might be the issue.

+

Because the network is too simple, when you optimize for some states, you ruin others. There is no formal way to find out if this is the case, but since the same algorithm works elsewhere, it's either a problem of your environment, or of the network.

+",7496,,-1,,6/17/2020 9:57,1/31/2019 17:05,,,,2,,,,CC BY-SA 4.0 +10334,2,,10329,1/31/2019 17:49,,2,,"

The SARSA update rule looks like:

+

$$Q(S, A) \gets Q(S, A) + \alpha \left[ R + \gamma Q(S', A') - Q(S, A) \right].$$

+

Very similar, the $Q$-learning update rule looks like:

+

$$Q(S, A) \gets Q(S, A) + \alpha \left[ R + \gamma \max_{A'} Q(S', A') - Q(S, A) \right].$$

+

Both of these update rules are formulated for single-agent Markov Decision Processes. Sometimes you can make them work reasonably ok in multi-agent settings, but it is crucial to remember that these update rules should still always be implemented "from the perspective" of a single learning agent, who is oblivious to the presence of other agents and pretends them to be a part of the environment.

+

What this means is that the states $S$ and $S'$ that you provide in update rules really must both be states in which the learning agent is allowed to make the next move (with the exception being that $S'$ is permitted to be a terminal game state).

+

So, suppose that you have three subsequent states $S_1$, $S_2$, and $S_3$, where the learning agent gets to select actions in states $S_1$ and $S_3$, and the opponent gets to select an action in state $S_2$. In the update rule, you should completely ignore $S_2$. This means that you should take $S = S_1$, and $S' = S_3$.

+
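
A tabular sketch of this (purely illustrative; for a terminal $S'$ you would drop the bootstrap term and use the target $R$ alone):

+
from collections import defaultdict
+
+def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
+    # s and s_next are both states in which the learning agent moves;
+    # the opponent's intermediate state is never passed in.
+    td_target = r + gamma * Q[(s_next, a_next)]
+    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
+
+Q = defaultdict(float)
+sarsa_update(Q, s='S1', a='my_move', r=0.0, s_next='S3', a_next='my_next_move')
+

+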

Following the reasoning I described above literally may indeed lead to a tricky situation with rewards from transitioning into terminal states, since technically every episode there will be only one agent that directly causes the transition into a terminal state. This issue (plus also some of my explanation above being repeated) is discussed in the "How to see terminal reward in self-play reinforcement learning?" question on this site.

+",1641,,2444,,11/1/2020 14:34,11/1/2020 14:34,,,,2,,,,CC BY-SA 4.0 +10336,1,,,1/31/2019 22:27,,0,1044,"

This PyTorch implementation of the actor-critic algorithm calculates the losses like so:

+
actor_loss = -log_prob * discounted_reward
+policy_loss = F.smooth_l1_loss(value, torch.tensor([discounted_reward]))
+
+

Both are different from the regular formulas which are, in the case of the actor loss (parameterized by $\theta$):

+

$$log[\pi_\theta(s_t,a_t)]Q_w(s_t,a_t)$$

+

and, in the case of the critic loss (parameterized by $w$):

+

$$r(s_t,a_t) + \gamma Q_w(s_{t+1},a_{t+1}) - Q_w(s_{t},a_{t}),$$

+

where $r(s_t,a_t)$ is the immediate reward following taking the action.

+

For the actor, "the immediate critic evaluation of the transition" was replaced with "the discounted reward". For the critic, the discounted evaluation of the value from the next state $r(s_t,a_t) + \gamma Q_w(s_{t+1},a_{t+1})$ was replaced by "the discounted reward". The smooth $L_1$ loss is then calculated, effectively discarding the sign of the loss given by the equation above.

+

Questions:

+
    +
  1. Why did they make these changes?

    +
  2. +
  3. Why is the sign discarded for the critic loss?

    +
  4. +
+",21645,,2444,,11/1/2020 16:23,11/1/2020 16:27,Why is this PyTorch implementation of the actor-critic algorithm inconsistent with the mathematical formulas?,,1,0,,,,CC BY-SA 4.0 +10372,1,10373,,1/31/2019 23:35,,3,483,"

Experience replay is a buffer (or a ""memory"") of transitions $e_t = (s_t, a_t, r_t, s_{t+1})$.

+ +

The equations for calculating the loss in actor critic are an

+ +

actor loss (parameterized by $\theta$) $$log[\pi_\theta(s_t,a_t)]Q_w(s_t,a_t)$$ and a critic loss (parameterized by $w$) $$r(s_t,a_t) + \gamma Q_w(s_{t+1},a_{t+1}) - Q_w(s_{t},a_{t}).$$

+ +

As I see it, there are two more elements that need to be saved for later use:

+ +
    +
  1. The expected Q value at the time $t$: $Q_w(s_{t},a_{t})$

  2. +
  3. The log probability for action $a_t$: $log[\pi_\theta(s_t,a_t)]$

  4. +
+ +

If they are not saved, how will we be able to later calculate the loss for learning? I didn't see anywhere stating to save those, and I must be missing something.

+ +

Do these elements need to be saved or not?

+",21645,Gulzar,21645,,2/6/2019 14:41,2/6/2019 14:41,What information should be cached in experience replay for actor-critic?,,1,0,,,,CC BY-SA 4.0 +10339,1,,,2/1/2019 5:36,,0,119,"

I am working on a project where the neural network weights must be quantized to 8 or 16 bits for an embedded platform, so I will lose some precision.

+ +

Since our platform does not have floating-point arithmetic, we need to quantize the weights. By quantizing I mean dividing the maximum signed number representable in 8 or 16 bits by the maximum absolute value of the weights; this gives a quantization factor $(qf)$. The final quantized weights are then integer$(value * qf)$.

+ +
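
Concretely, the quantization step I'm describing looks roughly like this (a numpy sketch of my scheme, not production code):

+
import numpy as np
+
+def quantize(weights, bits=8):
+    max_int = 2 ** (bits - 1) - 1             # 127 for 8 bits, 32767 for 16 bits
+    qf = max_int / np.max(np.abs(weights))    # quantization factor
+    return np.round(weights * qf).astype(np.int32), qf
+
+q_weights, qf = quantize(np.random.randn(1000) * 0.05)
+recovered = q_weights / qf                    # approximate de-quantization
+

+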

If my weights are very sparse and have a very bad distribution, I lose more precision.

+ +

For example, on the left is the distribution of weights for one layer, and on the right is the distribution of weights after I added the kurtosis and skew of the weights to the loss function; this improved the shape of the distribution a bit while keeping the same accuracy (even slightly higher).

+ +

Does anybody have any other suggestions? Has anyone tackled this problem before?

+",21918,,,user9947,4/19/2019 13:47,4/19/2019 13:47,Tips for keeping the distribution of weights normal,,2,7,,,,CC BY-SA 4.0 +10340,1,,,2/1/2019 6:44,,1,526,"

Is the role played by the activation function significant only during the training of a neural network, or does it also play a role during testing (when, after training, we supply data for prediction)?

+ +

I understand that a straight line cannot separate data scattered in a complex manner, but then why don't we use simple polynomials instead?

+ +

Why specifically sigmoid, tanh, or ReLU; what exactly are they doing?

+ +

What do activation functions do when we are supplying data during training?

+ +

And what do they do when, once we have trained the network, we supply test data for prediction?

+",21642,,21642,,2/1/2019 8:23,10/29/2019 9:02,What role the activation function plays in the forward pass and how it is different from backpropagation,,1,2,,1/22/2021 0:20,,CC BY-SA 4.0 +10341,2,,10340,2/1/2019 6:59,,2,,"

An activation function is a non-linear function. The operation in a neuron without an activation function is just a linear function. If we don't put an activation function between the operations of the neurons, then stacking layers is useless.

+ +

For example, if you have a two-layer network, then when you are doing forward propagation, the output of your first layer (without an activation function) will be calculated as:

+ +

$O_1 = W_1X+b_1 $

+ +

Then your output of your second layer will be:

+ +

$O_2 = W_2O_1+b_2 $

+ +

If we substitute $O_1$, the output of your second layer can be calculated as:

+ +

$O_2 = W_2(W_1X+b_1)+b_2 $

+ +

or simply

+ +

$O_2 = W_2W_1X+W_2b_1+b_2 $

+ +

As we train the neural network, we optimize the values of $W$ and $b$ (we train to find their best values), so instead of training a neural network with two layers, we are actually just training a one-layer network. From the latter formula we can set $W_2W_1 = W_3$ and $W_2b_1+b_2 = b_3$, so our two-layer network is just another linear model:

+ +

$O_2 = W_3X+b_3 $

+ +

We don't want that; we add layers to get a more complex model. That's why we use an activation function that is a non-linear function: to prevent our deep model from collapsing into a simple linear function.

+",16565,,,,,2/1/2019 6:59,,,,4,,,,CC BY-SA 4.0 +10342,2,,10336,2/1/2019 7:45,,1,,"
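
A quick numpy check of this collapse (the shapes are arbitrary):

+
import numpy as np
+
+X = np.random.randn(4, 1)
+W1, b1 = np.random.randn(5, 4), np.random.randn(5, 1)
+W2, b2 = np.random.randn(3, 5), np.random.randn(3, 1)
+
+two_layers = W2 @ (W1 @ X + b1) + b2        # two stacked linear layers, no activation
+W3, b3 = W2 @ W1, W2 @ b1 + b2              # the single equivalent linear layer
+one_layer = W3 @ X + b3
+
+print(np.allclose(two_layers, one_layer))   # True
+

+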

There is no 'regular' formula for calculating the policy loss; the usual thing when calculating the policy gradient is to multiply the gradient by an advantage function, which can take many forms. Look at section 2 of this paper for coverage of basic advantage functions. Also, the expected discounted reward is the same thing as the state-action value function (Q value).

+

$$Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1:\infty}, a_{t+1:\infty}}[\sum_{l=0}^\infty \gamma^lr_{t+l}]$$

+

So, the variations you posted roughly calculate the same thing.

+

Regarding the negative sign, in policy gradient methods, we want to maximize our performance function which has the following form:

+

$$J(\theta) = \sum_{a} \pi(a \mid s, \theta)A(s, a)$$

+

So, the higher our performance measure, the better. When people write code, they use optimizers that minimize an objective function, but in this case we want to maximize it; maximizing the objective function is the same thing as minimizing the negative objective function, hence the negative sign.
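
+

In code this usually amounts to a simple sign flip (a tiny PyTorch-style sketch with made-up numbers, not the implementation from the question):

+
import torch
+
+log_prob = torch.log(torch.tensor([0.3], requires_grad=True))   # log pi(a|s), illustrative
+advantage = torch.tensor([2.0])                                  # A(s, a), illustrative
+
+loss = -(log_prob * advantage).mean()   # minimizing this maximizes pi(a|s) weighted by A(s, a)
+loss.backward()
+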

+",20339,,2444,,11/1/2020 16:27,11/1/2020 16:27,,,,6,,,,CC BY-SA 4.0 +10343,2,,10228,2/1/2019 10:18,,2,,"

The answer depends on the case at hand. It may happen that a dataset performs very well with ReLU but takes more iterations to converge with Leaky ReLUs or PReLUs, or vice versa. There are 2 arguments to consider here:

+ +
    +
  • ReLU is the most non-linear among all the ReLU variants; by this not-so-mathematical term I mean that it has the steepest drop in slope at 0, compared to any other type of modified ReLU.
  • +
  • ReLUs discard negative values, which can be a significant problem in the context of data normalisation. As this video (~10:00) from Stanford explains, data normalisation matters because of the signs of the weight updates, so we can very roughly say that any form of Leaky ReLU somewhat normalises the data.
  • +
+ +

So, theoretically speaking (this might not be mathematically rigorous), if all the inputs have a positive correlation with the output (input increases, output also increases), ReLU should work very well and converge faster, whereas if there is negative correlation as well, then Leaky ReLUs might work better.

+ +

The point is that, unless someone gives definitive mathematical relations for what's going on inside a NN while it is being trained, it's hard to tell which will work well and which will not, except by intuition.

+",,user9947,,user9947,4/2/2019 7:48,4/2/2019 7:48,,,,0,,,,CC BY-SA 4.0 +10344,1,,,2/1/2019 10:20,,4,874,"

I've watched this video of the recent contest of AlphaStar Vs Pro players of StarCraft2, and during the discussion David Silver of DeepMind said that they train AlphaStar on TPUs.

+ +

My question is, how is it possible to utilise a GPU or TPU for reinforcement learning when the agent would need to interact with an environment, in this case the StarCraft game engine?

+ +

At the moment with my training of a RL agent I need to run it on my CPU, but obviously I'd love to utilise the GPU to speed it up. Does anyone know how they did it?

+ +

Here's the part where they talk about it, if anyone is interested:

+ +

https://www.youtube.com/watch?v=cUTMhmVh1qs&t=7030s

+",20352,,,,,2/1/2019 16:10,How does DeepMind perform reinforcement learning on a TPU?,,1,5,,,,CC BY-SA 4.0 +10345,1,,,2/1/2019 10:24,,1,338,"

Suppose we have a data set consisting of the columns

+ +
+

TransactionId, CardNo, TransactionDate

+
+ +

How can we calculate the customer purchase interval (meaning, for example, that customer A purchased on Jan 1st, purchased again after 10 days, and then purchased again after 15 days), and how can we predict the next visit of customer A by analyzing customer A's purchasing intervals?
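
+

To make the data concrete, this is the kind of interval I mean (a pandas sketch with made-up rows; the column names are as above):

+
import pandas as pd
+
+df = pd.DataFrame({
+    'TransactionId': [1, 2, 3],
+    'CardNo': ['A', 'A', 'A'],
+    'TransactionDate': pd.to_datetime(['2019-01-01', '2019-01-11', '2019-01-26']),
+})
+
+df = df.sort_values(['CardNo', 'TransactionDate'])
+df['DaysSinceLast'] = df.groupby('CardNo')['TransactionDate'].diff().dt.days   # NaN, 10, 15
+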

+ +

Any help will be appreciated.

+",16770,,,,,3/5/2019 20:55,predict customer visit,,2,0,,,,CC BY-SA 4.0 +10347,2,,3494,2/1/2019 12:04,,4,,"

It's a mix of many factors that together make it a very good option to develop cognitive systems.

+ +
    +
  • Quick development
  • +
  • Rapid prototyping
  • +
  • Friendly syntax with almost human-level readability
  • +
  • Diverse standard library and multi-paradigm
  • +
  • It can be used as a frontend for performant backends written in compiled languages such as C/C++.
  • +
+ +

Existing performant numerical libraries, such as numpy and others already do the intensive bulk work for you which lets you focus more on architectural aspects of your system.

+ +

Besides, there is a very big community and ecosystem around Python, which results in a diverse set of available tools oriented to different kinds of tasks.

+",9623,,,,,2/1/2019 12:04,,,,0,,,,CC BY-SA 4.0 +10348,2,,1877,2/1/2019 13:47,,0,,"

Typical learning algorithms can be stated as a search problem, where we want to find the best possible solution, that successfully solves a particular task, among all the available candidate solutions in the solution space.

+ +

It is often the case where we can't find the best one or it is too hard to find it and thus we compromise with a sub-optimal solution.

+",9623,,,,,2/1/2019 13:47,,,,0,,,,CC BY-SA 4.0 +10349,1,,,2/1/2019 14:00,,1,29,"

I have a sensor that reads electromagnetic field strength from each position.

+

And the field is stable and unique for each position. So the reading is simply a function of the position like this: reading = emf(x,y,z)

+

The reading consists of 3 numbers (not position).

+

I want to find the inverse function of emf function. This means I want to find function pos that is defined like this: x,y,z = pos(reading)

+

I don't have access to either the emf or the pos function. I think that I want to gradually estimate the pos function using a neural network.

+

So, I have the input reading and acceleration ax,ay,az of the sensor through space from an IMU. The acceleration is not so accurate. I want to use these 2 inputs to help me figure out the position of the sensor over time. You can assume that the starting position is at 0,0,0 on the first reading.

+

In short, input is reading and ax,ay,az on each timestep, the output will be an adjustment on the weights of pos function or output will be position directly.

+

I've been reading about SLAM (simultaneous localization and mapping) algorithm and I think that it might help in my case because my problem is probabilistic. If I know accurately the acceleration, I would not need any probability, but the acceleration is not accurate.

+

So, I want to know how I can model this problem in terms of SLAM. +I don't have a camera to do vision-based SLAM, though.

+

Why do I think this is tractable?

+

If the first reading is 1,1,1 and the position is at origin 0,0,0, and I move the sensor, the position can drift because the sensor has never seen other reading before, but after I go back to the origin, the reading will be 1,1,1 again so the sensor should report the origin 0,0,0 as output. During the movement of the sensor, the algorithm should filter the acceleration so that all the previous positions make sense.

+",20819,,2444,,12/13/2021 8:49,12/13/2021 8:49,How to use SLAM on other sensor other than camera?,,0,0,,,,CC BY-SA 4.0 +10351,2,,10344,2/1/2019 16:10,,3,,"

In their blog post, they link to (among many other papers) their IMPALA paper. Now, the blog post only links to that paper with text implying that they're using the ""off-policy actor-critic reinforcement learning"" described in that paper, but one of the major points of the IMPALA paper is actually an efficient, large-scale, distributed RL setup.

+ +

So, until we get more details (for example in their paper that's currently under review), our best guess would be that they're also using a similar kind of distributed RL setup as described in the IMPALA paper. As depicted in Figures 1 and 2, they decouple actors (machines running code to generate experience, e.g. by playing StarCraft) and learners (machines running code to learn/train/update weights of neural network(s)).

+ +

I would assume that their TPUs are definitely being used by the Learner (or, likely, multiple Learners). StarCraft 2 itself won't benefit from running on TPUs (and probably would be impossible to even get to run on them in the first place), because the game logic likely doesn't depend on large-scale, dense matrix operations (the kinds of operations that TPUs are optimized for). So, the StarCraft 2 game itself (which only needs to run for the ""actors"", not for the ""learners"") is almost certainly running on CPUs.

+ +

The actors will still have to run forwards passes through Neural Networks in order to select actions. I would assume that their Actors are still equipped with either GPUs or TPUs to do this more quickly than a CPU would be capable of, but the more expensive backwards passes are not necessary here; only the Learners need to perform those.

+",1641,,,,,2/1/2019 16:10,,,,3,,,,CC BY-SA 4.0 +10352,1,10362,,2/1/2019 17:49,,1,149,"

In experience replay, the update rule follows the loss:

+
+

$$ +L_i(\theta_i) = \mathbb{E}_{(s_t, a_t, r_t, s_{t+1}) \sim U(D)} \left[ \left(r_t + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \theta_i^-) - Q(s_t, a_t; \theta_i)\right)^2 \right] +$$

+
+

I can't get my head around the order of calculation of the terms in that equation:

+

An experience element is

+
+

$(s_t, a_t, r_t, s_{t+1} )$

+
+

where

+
+

$s_t$ is the state at time $t$

+

$a_t$ is the action taken from $s_t$ at time $t$

+

$r_t$ is the reward received by taking that action from $s_t$ at time +$t$

+

$s_{t+1}$ is the next state

+
+

In the on policy case, as I understand it, Q of the equation above is the same Q, which is the only approximator.

+

As I understand the algorithm, at time $t$ we save an experience

+
+

$(s_t, a_t, r_t, s_{t+1} )$.

+
+

Then, later, at time $t+x$, we attempt to learn from that experience.

+

However, at the time of saving the experience, $Q(s_t,a_t)$ was something different than at the time of attempting to learn from that experience, because the parameters $\theta$ of $Q$ have since changed. This could actually be written as

+
+

$Q_t(s_t, a_t) \neq Q_{t+x}(s_t,a_t)$

+
+

Because the Q value is different, I don't see how the reward signal at time $t$ is of any relevance for $Q_{t+x}(s_t,a_t)$ at $t+x$, the time of learning.

+

Also, it is likely that following a policy that is derived from $Q_t$ would lead to $a_{t}$, whereas following a policy that is derived from $Q_{t+x}$ would not.

+

I don't see in the experience replay algorithm that the Q value $Q_t(s_t, a_t)$ is saved, so I must assume that it is not.

+

Why does calculating the Q value again at a later time make sense FOR THE SAME SAVED REWARD AND ACTION?

+",21645,,2444,,11/1/2020 10:36,11/1/2020 10:36,When are Q values calculated in experience replay?,,2,0,,,,CC BY-SA 4.0 +10354,2,,10352,2/1/2019 19:11,,0,,"

Once the algorithm reaches a point where a reward (or penalty) is gained, a slowly decaying amount of that reward is assigned to the Q-value of whichever action was taken in each state, for each time step back through the game (or however far back you want to apply that reward). +You will have stored the state and Q-values for each time step during the game, allowing you to update the appropriate Q-values with the new reward. +These states / updated Q-values (from many runs) become your training data for the next iteration of your training algorithm.

+",12509,,,,,2/1/2019 19:11,,,,3,,,,CC BY-SA 4.0 +10373,2,,10372,2/1/2019 19:13,,4,,"

The loss function is estimated in every batch training cycle: the gradients of the loss are computed and backpropagated through the network in every cycle. This means that you sample a small batch (e.g. 100 instances) from the replay memory, and, having the states, you can feed them to the respective network and obtain $Q(s)$ for every state in your batch. Then you estimate the loss, run backpropagation, and the networks' weights get updated. You continue gathering experience by interacting with the environment, and after a threshold that you specify, you repeat the cycle by re-sampling a new batch from the memory.

+ +
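
A rough PyTorch-style sketch of one such cycle (the network interfaces, discrete actions, and shapes are assumptions made for illustration only):

+
import torch
+import torch.nn.functional as F
+
+def losses_from_batch(actor, critic, states, actions, rewards, next_states, gamma=0.99):
+    # Sketch only: actor(states) -> action logits, critic(states) -> Q(s, .) over discrete
+    # actions; actions is a LongTensor of indices. All of these interfaces are assumptions.
+    q_all = critic(states)                                    # Q_w(s, .), recomputed on this batch
+    q_taken = q_all.gather(1, actions.unsqueeze(1)).squeeze(1)
+
+    with torch.no_grad():
+        next_actions = actor(next_states).argmax(dim=-1, keepdim=True)   # greedy a' for brevity
+        q_next = critic(next_states).gather(1, next_actions).squeeze(1)
+
+    td_target = rewards + gamma * q_next
+    critic_loss = F.mse_loss(q_taken, td_target)
+
+    log_probs = F.log_softmax(actor(states), dim=-1)
+    log_prob_taken = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
+    actor_loss = -(log_prob_taken * q_taken.detach()).mean()
+    return actor_loss, critic_loss
+

+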

Just a suggestion: Start moving towards Asynchronous/synchronous methods for RL and use one network with different ""heads"" for Actor and Critic. Then you use one loss function plus the experience that now is collected from multiple instances of your agent-environment interaction (in contrast to your description in which one agent collects and stores experience from one instance).

+",36093,Constantinos,,,,2/1/2019 19:13,,,,0,,,,CC BY-SA 4.0 +10357,2,,10339,2/1/2019 22:30,,1,,"

Quantization of weights is used very often when implementing neural networks with the TensorFlow library on mobile devices. The reason is that such microcontrollers don't have a floating point unit and the developer is trying to increase the overall performance. Quantizing means mapping float values to integer values:

+ +
Integer Float
+0      -5.0
+128    0.0
+256    5.0
+
+ +

The problem isn't trivial, because the floating-point numbers have a certain minimum, maximum and precision. Maintaining the original distribution is also an issue.

+ +

To avoid such problems, the best idea is to implement a floating-point library from scratch. In the stack-based Forth language this can be realized in under 400 bytes, as described in the paper “Willi Stricker: Forth Floating Point Word-Set without Floating Point Stack, 2012”.

+",,user11571,,,,2/1/2019 22:30,,,,0,,,,CC BY-SA 4.0 +10358,5,,,2/1/2019 22:37,,0,,,1671,,1671,,2/1/2019 22:37,2/1/2019 22:37,,,,0,,,,CC BY-SA 4.0 +10359,4,,,2/1/2019 22:37,,0,,Use for questions about autonomous weapons systems specifically. ,1671,,1671,,2/1/2019 22:37,2/1/2019 22:37,,,,0,,,,CC BY-SA 4.0 +10360,1,,,2/2/2019 8:06,,2,705,"

I am reading BayesChess: A computer chess program based on Bayesian networks (Fernandez, Salmeron; 2008)

+

It is a chess-playing engine using Bayesian networks. The following is mentioned about the heuristic function in section 3.

+
+

Here the heuristic is defined in terms of 838 parameters.

+

There are 5 parameters indicating the value of each piece (pawn, +queen, rook, knight, and bishop -the king is not evaluated, as it must +always be on the board), 1 parameter for controlling whether the king +is under check, 64 parameters for evaluating the location of each +piece on each square on the board (i.e., a total of 786 parameters, +corresponding to 64 squares × 6 pieces each colour × 2 colours) and +finally 64 more parameters that are used to evaluate the position of +the king on the board during the endgame.

+
+

The above sentence contains the parameters used by the heuristic function. But I didn't find the actual definition. What is the actual definition of the heuristic function?

+",21964,,2444,,9/29/2020 22:40,9/29/2020 22:40,What is the definition of a heuristic function in the BayesChess paper?,,3,0,,,,CC BY-SA 4.0 +10361,2,,10360,2/2/2019 8:42,,0,,"

Heuristic just means that it is hand-constructed by a human. Suppose the value of each piece is given an initial value. That is a heuristic, because it was simply defined by a human. But if the Bayesian process is going to modify that initial value given by the human, that would be a non-heuristic thing.

+ +

So there is no function definition to search for except to find what the initial parameters given by the human are.

+",21724,,,,,2/2/2019 8:42,,,,1,,,,CC BY-SA 4.0 +10362,2,,10352,2/2/2019 9:11,,2,,"
+

Because the Q value is different, I don't see how the reward signal at time $t$ is of any relevance for $Q_{t+x}(s_t,a_t)$ at $t+x$, the time of learning.

+
+ +

The $r_t$ value for any single step is not dependent on $Q$ or the current policy. It is purely dependent on $(s_t,a_t)$. That means you can use the Q update equation to use it to calculate new TD targets, by combining your knowledge of the old reward signal and the value functions of the latest policy.

+ +

The value $r_t + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \theta_i^-)$ is the TD target in the loss function you show. There are many possible variations to calculate the TD target value in RL, with different properties. The one you are using is biased towards initial random values unrelated to the problem, and also deliberately biased towards older Q values to prevent runaway feedback. It is also low variance compared to other possibilities, and simpler to calculate.

+ +
+

Also, it is likely that following a policy which is derived from $Q_t$ would lead to $a_{t}$, whereas following a policy which is derived from $Q_{t+x}$ would not.

+
+ +

Yes that is correct. However, you don't care about that for a single step, you just care about calculating a better estimate for $Q(s,a)$ regardless of whether you would choose $a$ in state $s$ with the current policy. That is what an action value measures - the utility of taking action $a$ in state $s$ and thereafter following a given policy.

+ +

This is a strength of experience replay, that you are constantly refining your estimates of off-policy action values to determine which is the best.

+ +

This does become more of an issue when you want to use longer trajectories in the update step (which you may want to do to reduce bias of your TD target estimate). A series of steps in history $s_t,a_t,r_t ... s_{t+1}, a_{t+1},r_{t+1}...s_{t+n}$ may not have the same chance of occurring under the latest policy as it did when it was stored in memory. For the first step $s_t,a_t$ again you don't care if it is one you would currently take because the point is to refine your estimate of that action value. However, if you want to use $r_{t+1}, r_{t+2}$ etc plus $s_{t+n}$ to create a TD target, then you have to care whether your current policy and the one used to populate the history table would be different.

+ +

It is a problem if you want to use more sophisticated estimates of TD target that use multiple sample steps along with experience replay. There are some approaches you can take to allow for this, such as importance sampling. For a single step update mechanism you don't need to worry about it.

+ +
+

I don't see in the experience replay algorithm that the Q value $Q_t(s_t, a_t)$ is saved, so I must assume that it is not.

+
+ +

This is correct. You must re-calculate the TD target from a more up-to-date policy to get better estimates of the action value. With experience replay, you are not interested in collecting a history of values of Q. Instead you are interested in the history of state transitions and rewards.

+ +
+

Why does calculating the Q value again at a later time make sense FOR THE SAME SAVED REWARD AND ACTION?

+
+ +

Because it will change, due to the learning process summarising effects of state transitions.

+ +

As a toy example, consider a maze solver with traps (negative rewards) and treasures (positive rewards). At one point in history, the agent finds itself in a location and its policy told it to move into a trap on the next step. The agent would initially score that location and the steps leading up to it with negative Q values. Later it discovers through exploration that there is also some treasure if it takes a different turning towards the end of the same series of steps. With experience replay, and re-calculating Q values each time, it can figure out which section of that path should be converted to high scores, because those steps lead to treasure as well as the trap; now that the agent has a better policy for the end of the path, it has better estimates of value there.

+",1847,,1847,,2/2/2019 9:45,2/2/2019 9:45,,,,0,,,,CC BY-SA 4.0 +10363,2,,3494,2/2/2019 11:27,,3,,"

Python has rich libraries; it is also object-oriented and easy to program. It can also be used as a frontend language. That's why it is used in artificial intelligence. Besides AI, it is also used in machine learning, soft computing, and NLP programming, as well as for web scripting and ethical hacking.

+",21965,,21965,,2/28/2019 23:18,2/28/2019 23:18,,,,0,,,,CC BY-SA 4.0 +10364,1,,,2/2/2019 14:14,,2,517,"

Up to now, I have been using (my version of) open AI's code, with the suggested CartPole.

+ +

I have been using Monte Carlo methods, which, for cartpole, seemed to work fine.

+ +

Trying to move to temporal difference, CartPole seems to fail to learn (with a simple TD method), or perhaps I stopped it too soon, but either way the performance is unacceptable.

+ +

I assume that is the case because in CartPole, for every timestep, we get a reward of 1, which carries very little immediate information about whether or not the action was good.

+ +

Which gym environment is the simplest that would probably work with TD learning?

+ +

By simplest I mean that there is no need for a large NN to solve it: no conv nets, no RNNs, just a few small layers of a fully connected NN, just like in CartPole; something I can train on my home CPU, just to see it starting to converge.

+",21645,,,,,2/2/2019 17:12,Which Openai Gym environment should I use to test a Temporal Difference RL Agent?,,1,0,,,,CC BY-SA 4.0 +10365,2,,3749,2/2/2019 15:15,,1,,"
+

Lex Fridman interviews

+
+

These are some of the best interviews on the field that I have found on the internet.

+

They are part of the MIT course 6.S099: Artificial General Intelligence.

+

+
+

Andrew Ng's Heroes of deep learning series

+
+

The focus is on deep learning.

+

+

Yannic Kilcher has some fascinating youtube videos:

+

Yannic's channel

+

Check out this one!

+",157,,-1,,6/17/2020 9:57,6/16/2020 9:41,,,,0,,,,CC BY-SA 4.0 +10367,2,,10364,2/2/2019 17:12,,1,,"

Cartpole will work just fine with a single-step TD method - e.g. Q learning - and a simple neural network. Configured well, a simple DQN model should solve it in a few minutes. You can also forgo the neural network and use discretised states, or tile coding etc.

+ +

What you cannot do is plug a neural network estimator into basic Q learning and make no other adjustments. That is because the bootstrap process in TD learning will create a runaway feedback driven from the random and incorrect return estimates in the initialised NN. You have to use something like DQN to get it to work at all. This is different to Monte Carlo approach, where you can generally just plug in a NN in place of a Q table.

+ +

The basics of DQN are:

+ +
    +
  • Use experience replay. Store $(S, A, R, S')$ values from each step, and when training the NN, draw a small random batch (e.g. maybe 16 samples) from this table, re-estimate their Q values to calculate TD target e.g. $R + \gamma\text{max}_{a'} Q(S',a')$ and use that to create a minibatch to train the NN (a rough sketch of this follows the list)

  • +
  • Use a ""delayed"" target network for estimating TD target. This can just be a snapshot of your learning network, taken once every N steps, where N is typically anything from 100 to 100000.

  • +
+ +
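
As a rough sketch of how those two pieces fit together (Keras-style model methods and the memory layout are assumptions of this sketch, not a prescription):

+
import random
+import numpy as np
+
+def dqn_train_step(q_net, target_net, memory, gamma=0.99, batch_size=16):
+    # memory holds (s, a, r, s_next, done) tuples; q_net / target_net are assumed to be
+    # Keras-style models exposing predict() and train_on_batch().
+    batch = random.sample(memory, batch_size)
+    states = np.array([t[0] for t in batch])
+    next_states = np.array([t[3] for t in batch])
+
+    targets = q_net.predict(states)              # start from the current estimates
+    next_q = target_net.predict(next_states)     # bootstrap from the frozen copy
+    for i, (s, a, r, s_next, done) in enumerate(batch):
+        targets[i, a] = r if done else r + gamma * np.max(next_q[i])
+
+    q_net.train_on_batch(states, targets)
+

+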

Further details of what ""configured well"" means vary from problem to problem. With DQN you will want to play with hyperparameters such as the experience history size, how long to keep the frozen copy of the network (for bootstrap estimates), the batch size of experience table samples, and the amount of exploration.

+ +

In CartPole though, there will be a wide range of acceptable hyperparameter values. So I suspect you have just plugged in a NN in place of the Q table and wonder why it doesn't work.

+ +
+

I assume that is the case because in CartPole, for every timestep, we get a reward of 1, which carries very little immediate information about whether or not the action was good.

+
+ +

Yes this is part of what makes some learning goals become harder challenges in sequential-decision control systems. However, this was also the case for your Monte Carlo method, and in general this relates to the credit assignment problem in RL.

+",1847,,,,,2/2/2019 17:12,,,,4,,,,CC BY-SA 4.0 +10368,1,10379,,2/2/2019 17:30,,6,2356,"

TD($\lambda$) is a way to interpolate between TD(0), i.e. bootstrapping over a single step, and TD(max), i.e. bootstrapping over the entire episode length, which is Monte Carlo.

+ +

Reading the link above, I see that an eligibility trace is kept for each state in order to calculate its ""contribution to the future"".

+ +

But, if we use an approximator, and not a table for state-values, then can we still use eligibility traces? If so, how would the loss (and thus the gradients) be calculated? Specifically, I would like to use actor-critic (or advantage actor-critic).

+",21645,,2444,,6/4/2020 16:59,8/13/2020 14:19,Can TD($\lambda$) be used with deep reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +10369,2,,9614,2/2/2019 19:16,,10,,"

An important thing we're going to need is what is called the "Expected Grad-Log-Prob Lemma here" (proof included on that page), which says that (for any $t$):

+

$$\mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) \right] = 0.$$

+

Taking the analytical expression of the gradient (from, for example, slide 9) as a starting point:

+

$$\begin{aligned} +\nabla_{\theta} J(\theta) &= \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \left( \sum_{t=1}^T \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \right) \left( \sum_{t=1}^T r(s_t, a_t) \right) \right] \\ +% +&= \sum_{t=1}^T \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=1}^T r(s_{t'}, a_{t'}) \right] \\ +% +&= \sum_{t=1}^T \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=1}^{t-1} r(s_{t'}, a_{t'}) + \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=t}^T r(s_{t'}, a_{t'}) \right] \\ +% +&= \sum_{t=1}^T \left( \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=1}^{t-1} r(s_{t'}, a_{t'}) \right] \\ ++ \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=t}^T r(s_{t'}, a_{t'}) \right] \right) \\ +\end{aligned}$$

+

At the $t^{th}$ "iteration" of the outer sum, the random variables +$ \sum_{t'=1}^{t-1} r(s_{t'}, a_{t'}) $ +and +$ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) $ +are independent (we assume, by definition, the action only depends on the most recent state), which means we are allowed to split the expectation:

+

$$\nabla_{\theta} J(\theta) = \sum_{t=1}^T \left( \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \sum_{t'=1}^{t-1} r(s_{t'}, a_{t'}) \right] \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \right] \\ ++ \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=t}^T r(s_{t'}, a_{t'}) \right] \right)$$

+

The first expectation can now be replaced by $0$ due to the lemma mentioned at the top of the post:

+

$$ +\begin{aligned} +\nabla_{\theta} J(\theta) +% +&= \sum_{t=1}^T \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \left[ \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \sum_{t'=t}^T r(s_{t'}, a_{t'}) \right] \\ +% +&= \mathbb{E}_{\tau \sim \pi_{\theta}(\tau)} \sum_{t=1}^T \nabla_{\theta} \log \pi_{\theta} (a_t \mid s_t) \left( \sum_{t'=t}^T r(s_{t'}, a_{t'}) \right). \\ +\end{aligned} +$$

+

The expression on slide 18 of the linked slides is an unbiased, sample-based estimator of this gradient:

+

$$\nabla_{\theta} J(\theta) \approx \frac{1}{N} \sum_{i=1}^N \sum_{t=1}^T \nabla_{\theta} \log \pi_{\theta} (a_{i, t} \mid s_{i, t}) \left( \sum_{t'=t}^T r(s_{i, t'}, a_{i, t'}) \right)$$

+
+

For a more formal treatment of the claim that we can pull $\sum_{t'=1}^{t-1} r(s_{t'}, a_{t'})$ out of an expectation due to the Markov property, see this page: https://spinningup.openai.com/en/latest/spinningup/extra_pg_proof1.html

+",1641,,42533,,11/25/2020 20:14,11/25/2020 20:14,,,,7,,,,CC BY-SA 4.0 +10370,2,,10327,2/2/2019 23:08,,0,,"
+

Could that data set be used for this deep ranking algorithm?

+
+ +

Yes you can! I think there are at least two approaches for this task:

+ +
    +
  1. First, solve it using image classification. If you want to use deep learning, you can use a deep convolutional neural network to create a classifier that decides whether the image is the HP book or not. You can read the papers mentioned in Mahmoud's answer. But the problem is that you need a very, very large dataset: to make a good classifier you can't just provide one image per book, so if you have a thousand book titles (or more) you need to train your model with a very huge dataset.

  2. +
  3. Second, use image similarity or content-based image retrieval (CBIR). There is a good discussion on Stack Overflow about this topic; there are a lot of techniques, including Deep Ranking, perceptual hashing, and others. One of their differences is that Deep Ranking uses less feature engineering than the others. In my opinion, using an image similarity technique is a better approach than using image classification (they are also compared in the Deep Ranking paper), because some methods will be faster and don't require a lot of data.

  4. +
+ +

You also can read another simple reference of image similarity.

+",16565,,,,,2/2/2019 23:08,,,,0,,,,CC BY-SA 4.0 +10371,1,10388,,2/2/2019 23:24,,9,2579,"

In my country, the Expert System class is mandatory, if you want to take the AI specialization in most universities. In class, I learned how to make a rule-based system, forward chaining, backward chaining, Prolog, etc.

+

However, I have read somewhere on the web that expert systems are no longer used.

+

Is that true? If yes, why? If not, where are they being used? With the rise of machine learning, they may not be as used as before, but is there any industry or company that still uses them today?

+

Please, provide some references to support your claims.

+",16565,,2444,,1/22/2021 3:40,1/22/2021 3:40,Is the expert system still in use today?,,2,2,,,,CC BY-SA 4.0 +10374,1,,,2/3/2019 5:37,,2,82,"

I've come up with an idea on how we could use a combination of Deep Learning and body sensors to create a walking talking living humanoid. Here goes:

+

First, we will recruit 1 billion people and have them wear a special full-face mask and suit. This suit will contain touch sensors along the skin, cameras, smell sensors, and taste sensors on the mask; basically, all the data and information that a human receives will be collected electronically, whether it is what they see, what they smell, what they feel, and so on.

+

These suits will also have potentiometers and other sensors to measure the movement made by the person. Every hand movement, leg movement, muscle movement will be recorded and saved in a database as well.

+

After 50 years or so, all collected input and output data from every single person who participated in this experiment will be saved in a computer. We then create a neural network and then train it on the input and output data from the database.

+

Next, we create a robot that has motorized muscles and hand/leg joints built to the same specs as our previous suits, and also the touch, smell, sight and other sensors integrated inside of this robot.

+

Once everything is ready, we will load the trained neural network onto the robot and switch it on. During inference, the neural network will take data from sensors all over the robot's body and translate it into movement in legs, hands, and muscles.

+

Could these techniques in conjunction with the data collection I describe, produce an AGI?

+

Essentially, how feasible is current technology to produce a robot that will behave, speak, live, move like a normal human being?

+",21980,,2444,,12/12/2021 18:56,12/12/2021 18:56,"If we collected a very large labelled dataset from multiple sensors, then train a neural network with that data, could that lead to an AGI?",,0,4,,,,CC BY-SA 4.0 +10377,2,,145,2/3/2019 19:58,,6,,"
+

Is AIXI really a big deal in artificial general intelligence research?

+
+ +

Yes, it is a great theoretical contribution to AGI. AFAIK, it is the most serious attempt to build a theoretical framework or foundation for AGI. Similar works are Schmidhuber's Gödel Machines and SOAR architecture.

+ +

AIXI is an abstract and non-anthropomorphic framework for AGI which builds on top of the reinforcement learning field, without a few usual assumptions (e.g., without the Markov and ergodicity assumptions, which guarantees that the agent can easily recover from any mistakes it made in the past). Even though some optimality properties of AIXI have been proved, it is (Turing) uncomputable (it cannot be run on a computer), and so it is of very limited practical usefulness. Nonetheless, in the Hutter's book Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability (2005), where several properties of AIXI are rigorously proved, a computable but intractable version of AIXI, AIXItl, is also described. Furthermore, in the paper A Monte Carlo AIXI Approximation (2009), by Joel Veness et al., a computable and tractable approximation of AIXI is introduced. So, there have been some attempts to make AIXI practically useful.

+ +

The article What is AIXI? — An Introduction to General Reinforcement Learning (2015), by Jan Leike, which is one of the contributors to the development and evolution of the AIXI framework, gives a gentle introduction to the AIXI agent. See also The AIXI Architecture at the Stanford Encyclopedia of Philosophy for a possibly gentler introduction to AIXI.

+ +
+

Can it be thought of as a central concept for the field?

+
+ +

Yes, the introduction of AIXI and related research has contributed to the evolution of the AGI field. There have been several discussions and published papers, after its introduction in 2000 by Hutter in the paper A Theory of Universal Artificial Intelligence based on Algorithmic Complexity.

+ +

See e.g. section 7, ""Examples of Superintelligences"", of the paper Artificial General Intelligence and the Human Mental Model (2012), by Roman V. Yampolskiy and Joshua Fox. See also https://wiki.lesswrong.com/wiki/AIXI which contains a discussion regarding a few problems related to AIXI, which need to be solved or possibly avoided in future AGI frameworks. Furthermore, see also this and this articles.

+ +
+

If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?

+
+ +

There have been several publications, mainly by Marcus Hutter and associated researchers. You can see Marcus Hutter's publications on the following web page: http://www.hutter1.net/official/publ.htm.

+ +

If you are interested in contributing to this theory, there are several ways. If you are mathematically well educated, you can attempt to solve some of the problems described here (which are also mentioned in the Hutter's 2005 book mentioned above). Furthermore, you can also contribute to new approximations or improvements of existing approximations of the AIXI agent. Finally, you can build your new AGI framework by avoiding the problems associated with the AIXI framework. See also projects promoted by Hutter. It may be a good idea to also take into account e.g. Gödel Machines and related work, before attempting to introduce a new framework (provided you are capable of it).

+ +

I think that this theory has not attracted more people probably because it is highly technical and mathematical (so it is not very easy to understand unless you have a very solid background in reinforcement learning, probability theory, etc.). I also think that most people (in the AI community) are not interested in theories, but they are mainly guided by practical and useful results.

+",2444,,20044,,10/21/2019 14:34,10/21/2019 14:34,,,,0,,,,CC BY-SA 4.0 +10378,1,,,2/3/2019 20:05,,3,215,"

In the sigmoid function, when the input x is very large or very small, the curve is flat, which means a low gradient, but when it is in between, the slope is larger. +My question is how this property helps us in a neural network.

+",21642,,,,,2/4/2019 1:05,How sigmoid funtion helps us in reducing error in neural networks?,,1,0,,,,CC BY-SA 4.0 +10379,2,,10368,2/3/2019 23:09,,7,,"

Eligibility traces is a method of weighting between temporal-difference "targets" and Monte-Carlo "returns". In practice, for example, instead of using the one-step TD target, $r_t + \gamma V (s_{t+1})$, as in the temporal difference update $V (s_t) \leftarrow V (s_t) + \alpha (r_t + \gamma V (s_{t+1}) − V (s_t))$, you use the so-called "lambda" ($\lambda$) target, which is a target that balances between the TD target and the Monte Carlo return. So, in practice and intuitively, eligibility traces is just a way of using a more "appropriate" target while learning. In general, you need to perform these updates (e.g., the TD update above) "online", i.e. while you explore or exploit the environment.

+
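
For concreteness, here is an online semi-gradient TD($\lambda$) sketch with a linear value function and accumulating traces (the linear case rather than a deep network; names and constants are illustrative):

+
import numpy as np
+
+def td_lambda_update(w, z, phi_s, phi_s_next, r, alpha=0.1, gamma=0.99, lam=0.9):
+    # w: weights of a linear value function V(s) = w . phi(s); z: eligibility trace vector
+    delta = r + gamma * phi_s_next @ w - phi_s @ w    # TD error
+    z = gamma * lam * z + phi_s                       # accumulating trace (the gradient of V is phi(s))
+    w = w + alpha * delta * z
+    return w, z
+
+w, z = np.zeros(4), np.zeros(4)
+w, z = td_lambda_update(w, z, np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0]), r=1.0)
+

+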

In theory, you could use a deep neural network to represent your value function (or your policy), while using eligibility traces. It would be similar to not using them: you would just use a different target.

+

However, deep RL (that is, RL which uses deep neural networks to represent e.g. value functions) training needs to be performed using i.i.d. data, in order to prevent overfitting, which often means that they can't be trained online or need to use "tricks" like the "experience replay" (used in the paper Human-level control through deep reinforcement learning). Note that, in RL, successive states are often very correlated (e.g. two successive frames of a video would be very correlated).

+

In theory and similarly, you would still be able to use eligibility traces with the actor-critic method, but not with the asynchronous advantage actor-critic method. See the section 2.3 of the paper "Efficient Eligibility Traces for Deep Reinforcement Learning" (2018) by Brett Daley and Christopher Amato, for more info.

+

In this same paper, an approach is introduced to efficiently combine eligibility traces with deep neural networks. The authors propose DQN($\lambda$), which is the DQN architecture combined with eligibility traces, where the $\lambda$ return is computed in an "efficient" (and recursive) way, instead of the "usual" way. Since they use a DQN, they also use an "experience replay" buffer (or memory), where they also store the efficiently computed $\lambda$ target (in addition to the usual rewards). Furthermore, they also eliminate the need for the "target" network used in the standard DQN. You can have a look at algorithm 1 of the same paper to see how they improve the parameters of the network, which represents the Q function, in the case of the DQN($\lambda$) model. See the section 3.1 of the same paper for more details regarding this model.

+

They also introduce A3C($\lambda$), which combines asynchronous advantage actor-critic (A3C) with eligibility traces. See the section 3.2 for more details.

+

Note that there have been other proposals for combining eligibility traces with deep learning. You can have a look at the literature.

+",2444,,2444,,8/13/2020 14:19,8/13/2020 14:19,,,,0,,,,CC BY-SA 4.0 +10380,1,10417,,2/4/2019 0:33,,0,154,"

I wasn't sure how to title this question so pardon me please.

+ +

You may have seen at least one video of those ""INSANE A.I created simulation of {X} doing {Y & Z} like the following ones:

+ +

A.I learns how to play Mario +A.I swaps faces of {insert celebrity} in this video after 16hrs. +etc...

+ +

I want to know what I have to learn to be able to create, for example, a program that takes xyz-K images of a person as training data and swaps that person's face into a video of someone else.

+ +

Or create a program that on a basic level creates a simulation of 2 objects orbiting /attracting each other /colliding like this: +

+ +

What field/topic is that? + I suspect deep learning but I'm not sure. I'm currently learning machine learning with Python.

+ +

I'm struggling because linear regression & finance/stock value prediction is really not interesting compared to teaching objects in games to achieve something, or creating a program that tries to read characters from images.

+",21988,,1671,,2/6/2019 2:48,2/6/2019 5:59,Which field to study to learn & create a.i generated simulations?,,1,0,,,,CC BY-SA 4.0 +10381,2,,10378,2/4/2019 1:05,,2,,"

There are several nice things about using the logistic function and a few drawbacks as well. I'm not going to discuss all of them, but here is a small synopsis.

+

Nice Properties

+

+

For networks which calculate probabilities, it brings the input features into a nice 0 to 1 range.

+

Since the logistic function maps inputs to the range (0,1), it is commonly used as an activation function in neural networks which require some sort of classification. For example, if we had a convolution neural network (CNN) that classified if an image given to it was a dog or not, it could have an output neuron that returned 0 if the image was not a dog or 1 if it was. When the CNN is given an image with inputs of varying dimensions and ranges, it's more intuitive to deal with the input features in the realm of 0 to 1 instead of 0 to 255 or whatever input format you're dealing with.

+

It is smooth and differentiable.

+

When performing back propagation, it's much easier to deal with differentiable functions. Otherwise you'd have to approximate the gradient, which may lead to unfavorable results.

+

A Vanishing Issue

+

At extreme values, the gradient vanishes.

+

The OP was also hinting that at very positive and very negative values of $x$, the derivative of the logistic function becomes close to 0, which may hinder learning at extreme input values. If you think of gradient descent, the step it takes is only as large as the magnitude of the gradient allows. Analytically speaking, this also hurts back propagation, because learning relies on the chain rule, which ends up multiplying several such gradients together, so small factors quickly shrink the overall gradient.
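
A quick numerical illustration of this (a small sketch):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
grad = sigmoid(x) * (1.0 - sigmoid(x))   # derivative of the logistic function
print(grad)   # roughly [4.5e-05, 0.105, 0.25, 0.105, 4.5e-05]: tiny at the extremes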

+",17408,,-1,,6/17/2020 9:57,2/4/2019 1:05,,,,0,,,,CC BY-SA 4.0 +10387,1,,,2/4/2019 13:59,,1,478,"

I have a convolutional encoder (a CNN) consisting of DenseBlocks and a total of 50 layers (cf. FC-DenseNet103). The receptive field of the encoder (after last layer) is 660 according to Tensorflow function compute_receptive_field_from_graph_def(..)) whereas the input image is 64x64 pixels. Obviously the receptive field is way too big.

+ +

How can the receptive field be reduced to say 46 but the capacity of the encoder be more or less kept at the same level? By capacity I simply mean the number of parameters of the model. The capacity requirement is justified due to the complex dataset to be processed.

+ +

Using less layers or smaller kernels reduces the receptive field size but also the capacity. Should I then just increase the number of filters in the remaining layers in order to keep the capacity?

+",20191,,20191,,2/4/2019 20:03,10/27/2020 16:00,Reduce receptive field size of CNN while keeping its capacity?,,1,0,,,,CC BY-SA 4.0 +10388,2,,10371,2/4/2019 14:07,,5,,"

I would say Expert Systems are still being taught. For instance, if you look at some of the open courses like MIT's, there are still lectures on them.

+ +

Also, looking at the CLIPS documentation, you will find a couple of examples of usage from 2005.

+ +

What I suspect is that Expert Systems are now embedded within ""normal"" systems in practice. Hence they may be difficult to distinguish from systems used on a daily basis for diagnostics, etc., and are not as popular as before.

+",15812,,,,,2/4/2019 14:07,,,,0,,,,CC BY-SA 4.0 +10391,1,,,2/4/2019 18:18,,1,93,"

I have got a multi-class object detector. One model's detection accuracy evaluation consists of mAP, FP, FN and TP for each class, divided into two graphs, and looks like this (I've used this repo for evaluation).

+ +

+ +

Now, I've got many of these evaluations (multiple times these two graphs for different models) and I would like to easily compare all these trained models (results) and put them to one graph.

+ +

I've searched through the whole Internet, but wasn't able to find a suitable method of placing all the values into one graph. Also, the values of these three classes can be put together (e.g. the resulting mAP for this evaluation would be (75 + 68 + 66) / 3 = ~70%), so I would have just a single value for each of mAP, FN, FP and TP for one whole model evaluation.

+ +

What comes to my mind is the following graph (or maybe some kind of plot):

+ +

+ +

Note: It may not make sense to place mAP together with TP, etc. into one graph, but I would like to have all these values together to easily compare all the model evaluations. Also I am not really looking for a script, I can do the graph manually from values, but script would be more helpful. What really matters is, how to create meaningful graph with all the data :). If the post is more suitable for different kind of site, please, let me know.

+",18760,,,user9947,2/4/2019 18:32,2/4/2019 18:32,How to create meaningful multiple object detection evaluation comparison graph?,,0,2,,,,CC BY-SA 4.0 +10393,1,,,2/4/2019 22:27,,1,17,"

Is it possible to sample from a distribution inside a neural network's forward function? Assume that there is a NN and a sample needs to be drawn from a distribution at every forward pass to randomly set a layer-specific hyper-parameter.

+ +

Is this operation differentiable?

+",10569,,,,,2/4/2019 22:27,Sample from a distribution inside a NN layer,,0,0,,,,CC BY-SA 4.0 +10394,1,,,2/4/2019 22:50,,1,44,"

Are there chatbots for Facebook Messenger or Skype available which are game-based, i.e. with which it is possible to play a short funny game? It should be possible to play the game for at least 10 minutes in a row, and it should be text-based, not based on clicking boxes. That means the agent should be pretty clever, like the Microsoft Zo bot, but instead of conducting random smalltalk, a game should be played.

+ +

Second, are there bots for Facebook Messenger or Skype available which are nasty and unfriendly, i.e. which are offensive?

+ +

Thank you a lot in advance. I'm thankful for any help.

+",21103,,1671,,2/6/2019 2:18,2/6/2019 2:18,Game-based or nasty chatbot for Facebook Messenger or Skype,,0,1,,,,CC BY-SA 4.0 +10400,2,,10003,2/5/2019 9:56,,0,,"

Yes there is! If a model generalizes well to the test set, we already know that it has found some useful features. However, the latent representation of the data may still be ""entangled"" - a single element of the latent vector may actually encode information about multiple attributes of the input, or a single attribute may be spread across multiple elements. We usually prefer a representation in which the features are represented by the axes of the latent space - a ""disentangled"" representation. For example, if we were encoding faces, it would be nice to have an axis for smiling/not, another for masculine/feminine, and so on.

+ +

Pushing models to learn ""clean"" (disentangled) representations is an active sub-field of machine learning research with practical applications (like interpretability, but also because it makes it easier for ""downstream"" models to learn their tasks, e.g. a control policy in reinforcement learning system taking as input a learned representation from a world model).

+ +

Where to begin? Start with L2 regularisation to push your network to ""spend"" those weights wisely (more weights close to zero => sparser latent vector) and work your way up from there.
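
For example, in Keras this is just a couple of extra arguments on the bottleneck layer (a minimal sketch; the layer size and penalty strengths are made-up values, and the optional L1 activity penalty is one way to additionally push the activations themselves towards sparsity):

from tensorflow.keras import layers, regularizers

latent = layers.Dense(
    32,
    activation='relu',
    kernel_regularizer=regularizers.l2(1e-4),      # L2 penalty on the weights
    activity_regularizer=regularizers.l1(1e-5),    # optional: encourage sparse activations
)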

+",17770,,,,,2/5/2019 9:56,,,,0,,,,CC BY-SA 4.0 +10402,2,,10387,2/5/2019 11:02,,1,,"

One way to keep the capacity while reducing the receptive field size is to add 1x1 conv layers instead of 3x3 ones (I did so within the DenseBlocks: there, the first layer is a 3x3 conv, now followed by four 1x1 conv layers instead of the original 3x3 convs, which would otherwise keep increasing the receptive field). In doing that, the number of parameters can be kept at a similar level. While 1x1 convolutions are good for adding non-linearity, they are not useful for learning spatial information where neighboring pixels/values correlate. This might turn out to be a problem.
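
For illustration, a block along these lines (a rough tf.keras sketch; the filter count is a made-up value) adds parameters and non-linearity without growing the receptive field beyond the initial 3x3 conv:

from tensorflow.keras import layers

def block(x, filters=128):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)   # grows the receptive field
    for _ in range(4):
        x = layers.Conv2D(filters, 1, activation='relu')(x)               # receptive field unchanged
    return x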

+ +

Therefore, experiments will have to show if this idea of adding 1x1 conv layers to keep the capacity proves useful or not in learning spatial features on a natural image data set.

+",20191,,,,,2/5/2019 11:02,,,,0,,,,CC BY-SA 4.0 +10403,1,11297,,2/5/2019 11:34,,8,711,"

The Softsign (a.k.a. ElliotSig) activation function is really simple:

+ +

$$ f(x) = \frac{x}{1+|x|} $$

+ +

It is bounded in $(-1,1)$, differentiable, monotonic, and computationally extremely simple (easy for, e.g., a GPU).
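
For reference, both the function and its derivative are cheap to evaluate (a small sketch):

import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def softsign_grad(x):
    return 1.0 / (1.0 + np.abs(x)) ** 2   # derivative: 1 / (1 + |x|)^2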

+ +

Why is it not widely used in neural networks? Is it because it is not infinitely differentiable?

+",15017,,2444,,3/18/2019 20:42,3/18/2019 20:42,Why isn't the ElliotSig activation function widely used?,,1,3,,,,CC BY-SA 4.0 +10404,1,,,2/5/2019 11:47,,1,580,"

I want to customize the 'Pendulum-v0' environment such that the action (the torque) from previous time step as well as from the current timestep serve as the inputs in the Env.step() function.

+ +

My problem statement is that I want to generate torque from the controller which has a white Gaussian noise of magnitude 1 and then filter it with the torque generated in the previous timestep as follows:

+ +

tor_ = tor_c + a*WGN ;

+ +

tor(t) = lambda*tor_ + (1-lambda)*tor(t-1) ;
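
For reference, one way to keep the previous torque around is a small wrapper (a hypothetical, untested sketch; lam and a are assumed constants, and the wrapper simply applies the two equations above before calling the original step):

import gym
import numpy as np

class FilteredTorqueWrapper(gym.Wrapper):
    def __init__(self, env, lam=0.5, a=1.0):
        super().__init__(env)
        self.lam, self.a, self.prev_torque = lam, a, 0.0

    def reset(self, **kwargs):
        self.prev_torque = 0.0
        return self.env.reset(**kwargs)

    def step(self, torque_c):
        noisy = torque_c + self.a * np.random.randn()                   # add white Gaussian noise
        torque = self.lam * noisy + (1 - self.lam) * self.prev_torque   # filter with previous torque
        self.prev_torque = torque
        return self.env.step([torque])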

+ +

https://github.com/openai/gym/blob/master/gym/envs/classic_control/pendulum.py#L37

+ +

We can see that the 'u' is an array of some numbers as the input which is afterward clipped from -max_torque to max_torque and then only the first element is taken as the torque value to calculate the states and the reward function for the given time-step.

+ +

My question is: what do the values of the other elements signify? Are they torque values from the previous time steps, or is it that the length of the 'u' array is just 1 and its value is restricted between -max_torque and max_torque?

+ +

In conclusion, I just wanna access the action (the torque value) from the previous time-step. Is it possible ? If yes, how?

+",22023,,,,,2/5/2019 11:47,Input for the Env.step() in the 'Pendulum-v0' environment,,0,8,0,,,CC BY-SA 4.0 +10406,1,,,2/5/2019 14:23,,1,239,"

In a neural network, each neuron represents some part of the input. For example, in the case of a MNIST digit, consider the stem of the number 9. Each neuron in the NN represents some part of this digit.

+ +
    +
  1. What determines which neuron will represent which part of the digit?

  2. +
  3. Is it possible that if we pass in the same input multiple times, each neuron can represent different parts of the digit?

  4. +
  5. How is this related to the back-propagation algorithm and chain rule? Is it the case that, before training the neural network, each neuron doesn't really represent anything of the input, and, as training proceeds, neurons start to represent some part of the input?

  6. +
+",19583,,2444,,5/8/2019 13:23,1/18/2023 0:06,Which neuron represents which part of the input?,,2,0,,,,CC BY-SA 4.0 +10412,1,,,2/5/2019 20:18,,2,60,"

I am trying to build a film review classifier where I determine if a given review is positive or negative (w/ Python). I'm trying to avoid any other ML libraries so that I can better understand the processes. Here is my approach and the problems that I am facing:

+
    +
  1. I mine thousands of film reviews as training sets and classify them as positive or negative.
  2. +
  3. I parse through my training set and for each class, I build an array of unique words.
  4. +
  5. For each document, I build a vector of TF-IDF values where the vector size is my number of unique words.
  6. +
  7. I use a Gaussian classifier to determine: $$P(C_i|w) \propto P(C_i)P(w|C_i)=P(C_i)\dfrac{1}{(2\pi)^{d/2}|\sigma_i|^{1/2}}e^{-(1/2)(w-\mu_i)^T\sigma_i^{-1}(w-\mu_i)}$$ where $w$ is my document as a vector of dimension $d$, $C_i$ is a particular class, $\mu_i$ is the mean vector and $\sigma_i$ is my covariance matrix.
  8. +
+

This approach seems to make sense. My problem is that my algorithm is much too slow. As an example, I have sampled over 1,500 documents and I have determined over 40,000 unique words. This means that each of my document vectors has 40,000 entries, and if I were to build a covariance matrix, it would have dimensions 40,000 by 40,000. Even if I were able to generate $\sigma_i$ in its entirety, I would then have to compute the matrix product in the exponent, which would take an extraordinarily long time just to classify one document.
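
For what it's worth, the ""naive"" part of Gaussian Naive Bayes is exactly the assumption that $\sigma_i$ is diagonal (features are conditionally independent given the class), which turns the exponent into a cheap element-wise sum instead of a 40,000 x 40,000 matrix product. A rough sketch of the per-class log-posterior under that assumption:

import numpy as np

def class_log_posterior(w, mu, var, log_prior):
    # w, mu, var are length-d vectors; var holds the per-feature variances of this class
    var = var + 1e-9   # avoid division by zero
    return log_prior - 0.5 * np.sum(np.log(2 * np.pi * var) + (w - mu) ** 2 / var)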

+

I have experimented with a multinomial approach, which is working well. I am very curious about how to make this work more efficiently. I realise the matrix multiplication runtime can't be improved, and I was hoping for insight on how others are able to do this.

+

Some things I have tried:

+
    +
  • Filtered any stop words (but this still leaves me with tens of thousands of words)
  • +
  • Estimated $\sigma_i$ by summing over a couple of documents.
  • +
+",22017,,2444,,12/13/2021 8:45,12/13/2021 8:45,My Gaussian Naive Bayes classifier is too slow,,0,0,,,,CC BY-SA 4.0 +10414,1,,,2/5/2019 21:22,,2,394,"

I just finished the three-part series of Probabilistic Graphical Models courses from Stanford over on Coursera. I got into them because I realized there is a certain class of problem for which the standard supervised learning approaches don't apply, for which graph search algorithms don't work, problems that don't look like RL control problems, that don't even exactly look like the kind of clustering I came to call "unsupervised learning".

+

In my AI courses in the Institute, we talked briefly about Bayes Nets, but it was almost as if professors considered that preamble to hotter topics like Neural Nets. Meanwhile, I heard about "Expectation Maximization" and "Inference" and "Maximum Likelihood Estimation" all the time, like I was supposed to know what they were talking about. It frustrated me not to be able to remember statistics well enough to feel these things, so I decided to fill the hole by delving deeper into PGMs.

+

Throughout, Koller gives examples of how to apply PGMs to things like image segmentation and speech recognition, examples that seem completely dated now because we have CNNs and LSTMs, even deep nets that encode notions of uncertainty about their beliefs.

+

I gather PGMs are good when:

+
    +
  1. You know the structure of the problem and can encode domain knowledge that way.

    +
  2. +
  3. You need a generative model.

    +
  4. +
  5. You want to learn more than just one $X \rightarrow Y$ mapping, when you instead need a more general-purpose model that can be queried from several sides to answer different kinds of questions.

    +
  6. +
  7. You want to feed the model inputs that look more like probability distributions than like samples.

    +
  8. +
+

What else are they good for?

+

Here are a few more related questions.

+
    +
  • Have they not been outstripped by more advanced methods for lots of problems now?

    +
  • +
  • In which domains or for which specific kinds of problem are they still the preferred approach?

    +
  • +
  • How are they complementary to modern advanced methods?

    +
  • +
+",18196,,2444,,11/20/2021 13:39,11/20/2021 13:39,How do probabilistic graphical models factor into modern machine learning?,,0,1,,,,CC BY-SA 4.0 +10415,2,,10360,2/6/2019 2:36,,2,,"

Heuristics can be understood as rules. Typically heuristics are thought of as problem-specific strategies. Expert systems were an early form of AI that utilized rules-based decisions. In a game-playing context, heuristics can be pure strategies.

+ +

A heuristic function would be one that includes some predefined decision rules. Russell and Norvig have a nice chapter on informed (heuristic) search strategies.

+ +
+

Heuristic functions are the most common form in which additional knowledge of the + problem is imparted to the search algorithm. +
Artificial Intelligence: A Modern Approach; 3.5 pdf

+
+ +

h(n) as opposed to f(n) or g(n)

+",1671,,1671,,2/7/2019 17:52,2/7/2019 17:52,,,,0,,,,CC BY-SA 4.0 +10417,2,,10380,2/6/2019 5:59,,1,,"

You need to define ""simulation"" more specific. Playing Mario, Swapping face on image/video, or generating simulation of objects that are orbiting use different techniques.

+ +
    +
  • Playing Mario or ""AI that playing game"": the AI agent trained on available environment (Mario game, so the environment is not generated) and learn the best sequential actions to achieve the goal. It runs the game thousand times, when it did a wrong action then it gets ""penalties"" that improve its knowledge. The algorithm that can be used is Reinforcement Learning, but some earlier paper use Genetic Algorithm to generate the best action

  • +
  • Face swap: it's close to the computer vision area; some methods that I know use the style transfer principle (Convolutional Neural Networks) to transform the face in one image into another image. You can read the basics of style transfer here.

  • +
  • Generating physical movement: I don't know too much about this topic, but I know there are some papers that talk about it, e.g. Fluid Net from Google researchers and this paper from TU Munchen. At a glance, they also use a CNN to improve the result, but the main simulation comes from the Euler fluid equations. So if you need to generate objects that orbit, I think you need to find the equations that model that movement.

  • +
+ +

Hope it helps!

+",16565,,,,,2/6/2019 5:59,,,,0,,,,CC BY-SA 4.0 +10418,1,,,2/6/2019 7:33,,3,234,"

I work with neural networks for real-time image processing on embedded software, and I have tested different architectures (Googlenet, Mobilenet, Resnet, custom networks...) and different hardware solutions (boards, processors, AI accelerators...). I noticed that the performance of the system, in terms of inference time, does not depend only on the processor but also on other factors.

+ +

For example, I have two boards from different manufacturers, B1 (with a cheap processor) and B2 (with a better processor), and two neural networks, N1 (very light, with regular convolutions and fully connected layers) and N2 (very large, with inception modules and many layers). The inference time for N1 is better on B1, while for N2 it is better on B2. Moreover, it happens that, as the software is executed, the inference time changes over time.

+ +

So my question is: in an embedded system, what are the aspects that impact on the inference time, and how? I am interested not only in the hardware features but also in the neural network architecture (convolutional filter size, types of layers and so on).

+",16671,,,,,2/7/2019 14:44,What are the aspects that most impact on the inference time for neural networks in embedded systems?,,1,0,,,,CC BY-SA 4.0 +10419,1,,,2/6/2019 10:43,,2,216,"

I trained some Gaussian process model with the Python library GPFlow on a dataset consisting of $(X, Y)$, inputs and outputs, in a regression setting. This model gives me pretty good predictions in the sense that the relative error is small almost everywhere. I want to use the uncertainty as well, which is given in a GPFlow setting in the form of a standard deviation (STD) associated with every prediction. Here's my problem: I normalised both inputs and outputs before training (separately) using sklearn's StandardScaler (effectively making the data normally distributed with $0$ mean and unit STD). So the STD given by the model pertains to the scaled data. How do I ""rescale"" the uncertainty estimates of the GP to the actual data? Using the inverse_transform function of the output scaler makes little sense. This issue might be easier solvable if I scaled with a MinMaxScaler (squishing all data points into the unit interval) by dividing by the length of the range of the original output set (at least I think it works that way). But how about the case of the StandardScaler? Any insights will be appreciated!

+",16901,,,,,6/7/2022 8:03,GPFlow: Gaussian Process Uncertainty Quantification,,1,1,,,,CC BY-SA 4.0 +10420,1,,,2/6/2019 13:48,,3,96,"

How should we train a CNN model when the training dataset contains only a limited number of cases, and the trained model is supposed to predict the class (label) for several other cases which it has not seen before?

+ +

Suppose there are hidden independent features describing the label that repeat across the previously seen cases of the dataset.

+ +

For example, let's consider that we want to train a model on movement time-series signals so it can predict some sort of activities (labels), and we have long recordings of movement signals (e.g. hours) for a limited number of persons (e.g. 30) during various types of activities (e.g. 5). We may say these signals carry three types of hidden features:

+ +
    +
  1. Noise-features: Common features between every persons/activities
  2. +
  3. Case-features: features mostly correlated with persons
  4. +
  5. Class-features: features mostly correlated with activities
  6. +
+ +

We want to train the model such that it learns mostly Class-features and eliminates the 1st and 2nd types of features.

+ +

In conventional supervised learning, a CNN learns all the features the way the dataset represents them. In my test, the model learned those 30 persons' activities very well, but on new persons it only predicts at chance level (i.e. 20% success). Over-fitted?
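
One quick way to measure exactly this effect is to validate subject-wise, so that no person appears in both the training and the validation folds (a sketch, assuming X, y and an array person_ids with one subject id per sample exist):

from sklearn.model_selection import GroupKFold

gkf = GroupKFold(n_splits=5)
for train_idx, val_idx in gkf.split(X, y, groups=person_ids):
    # train the CNN on X[train_idx], y[train_idx] and evaluate on the held-out persons in X[val_idx]
    ...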

+ +

It seems there are three straightforward workarounds for this:

+ +
    +
  1. Extracting class-features and using a shallow classifier.
  2. +
  3. Increasing the breadth of the dataset by recording signals from other persons: this can get very expensive or even impossible in some situations.
  4. +
  5. Signal augmentation: augmenting signals in a way that does not change Class-features while producing new, augmented Case-features. It seems to me harder than the 1st workaround.
  6. +
+ +

Is there any other workaround on this type of problem?

+ +

For example, a specific type of training that teaches the model how different cases follow class-features similarly during class changes, while eliminating case-features, which vary from case to case.

+ +

Sorry for very long question!

+",22055,,22055,,2/6/2019 21:41,2/6/2019 21:41,How to train CNN such it eliminate dependent features and focuses on independent ones?,,0,7,,,,CC BY-SA 4.0 +10421,1,10426,,2/6/2019 15:04,,3,849,"

In actor-critic, the equations for calculating the losses are an

+ +

actor loss (parameterized by $\theta$)

+ +

$$log[\pi_\theta(s_t,a_t)]Q_w(s_t,a_t)$$

+ +

and a critic loss (parameterized by $w$)

+ +

$$r(s_t,a_t) + \gamma Q_w(s_{t+1}, a_{t+1}) - Q_w(s_{t}, a_t).$$

+ +

This is bootstrapping in experience replay:

+ +

$$ +L_i(\theta_i) = \mathbb{E}_{(s, a, r, s') \sim U(D)} \left[ \left(r + \gamma \max_{a'} Q(s', a'; \theta_i^-) - Q(s, a; \theta_i)\right)^2 \right] +$$

+ +

It is clear that bootstrapping is comparable to the critic loss, except that the $max$ operation is lacking from the critic.

+ +

As i see it, (correct me if I'm wrong):

+ +

$Q(s_t,a_t) = V(s_{t+1}) + r_t$ where $a_t$ is the actual action that had been taken.

+ +

The critic, as I understand, estimates $V(s)$

+ +

My question:

+ +

What exactly is the critic calculating?

+ +

What In actor critic outputs $Q(s_{t+1},a_{t+1})$?

+ +

It seems to me like the critic calculates the average next state $s_{t+1}$ value, over all possible actions, with their corresponding probabilities, yielding

+ +

$Q(s_t, a_t) = r_t + \sum_{a_{t+1} \in A}P(a_{t+1}|s_t)V(s_{t+1})$

+ +

Which would mean that in order to get $Q(s_{t+1}, a_{t+1})$ for the above formula, I would need to calculate

+ +

$Q(s_{t+1}, a_{t+1}) = r_{t+1} + \sum_{a_{t+2} \in A}P(a_{t+2}|s_{t+1})V(s_{t+2})$

+ +

Where $V(s_{t+2})$ is the critic output on $s_{t+2}$, a state we get to by taking action $a_{t+1}$ from state $s_{t+1}$ but I am not sure that is indeed the meaning of the critic output and still it is unclear to me how to get $Q(s_{t+1}, a_{t+1})$ from actor critic.

+ +

If indeed that is what's being calculated, then why is it mathematically true that an improvement is being made? Or why does it make sense (even if not mathematically always true)?

+ +
+ +

Practical use:

+ +

I want to use actor critic with experience replay in an environment with a large action space (could be continuous). Therefore, I cannot use the $max$ term. I need to understand the correct equation for the critic loss, and why it works.

+",21645,,21645,,2/6/2019 16:53,2/6/2019 18:07,Meaning of Actor Output in Actor Critic Reinforcement Learning,,1,3,,,,CC BY-SA 4.0 +10422,1,10427,,2/6/2019 15:06,,2,151,"

Imagine that the agent receives a positive reward upon reaching a state 𝑠. Once the state 𝑠 has been reached the positive reward associated with it vanishes and appears somewhere else in the state space, say at state 𝑠′. The reward associated to 𝑠′ also vanishes when the agent visits that state once and re-appears at state 𝑠. This goes periodically forever. Will discounted Q-learning converge to the optimal policy in this setup? Is yes, is there any proof out there, I couldn't find anything.

+",22060,,2444,,2/15/2019 17:40,2/15/2019 17:40,Will Q-learning converge to the optimal state-action function when the reward periodically changes?,,1,4,,,,CC BY-SA 4.0 +10425,1,,,2/6/2019 17:48,,2,40,"

I am reviewing my Neural Network lectures and I have a doubt: My book's (Haykin) batch PTA describes a cost function which is defined over the set of the misclassified inputs.

+

I have always been taught to use MSE < X as a stopping condition for the training process. Is the batch case different? Should I use as stopping condition size(misclassified) < Y (and as a consequence when the weight change is very little)?

+

Moreover, the book uses the same symbol for both the training set and the misclassified input set. Does this mean that my training set changes each epoch?

+",21676,,2444,,12/12/2021 18:59,12/12/2021 18:59,Batch PTA stopping condition,,0,2,,,,CC BY-SA 4.0 +10426,2,,10421,2/6/2019 18:07,,4,,"

When using the loss function for the critic described in your question, the Actor-Critic is an on-policy approach (as are most Actor-Critic methods). Your intuition as to what it is learning seems to be quite close, but the notation/terminology is not quite on point.

+ +

First it's important to realize that the $Q(s, a)$ critic is an estimator, we're training it to estimate state-action values. You could say that we are training it such that it can hopefully provide accurate estimates of:

+ +

$$Q_w^{\pi} (s_t, a_t) \approx \mathbb{E}_{\pi} \left[ r_t + \gamma V^{\pi}(s_{t+1}) \right].$$

+ +

You'll notice I've added quite a number of symbols there in comparison to your $Q(s_t, a_t) = r_t + V(s_{t+1})$:

+ +
    +
  • I have added the $\pi$ superscript to $Q$ and $V$; this denotes the behaviour policy, which is the policy that we're using to generate experience. In on-policy methods, this is equal to the target policy (the policy for which we're learning to predict values). Adding this superscript makes explicit the fact that we're learning expected returns for states and state-action pairs that are only accurate under the assumption that we continue following the $\pi$ policy from state $s_{t+1}$ onwards.
  • +
  • I added the discount factor $\gamma$, which is probably just a tiny detail you forgot.
  • +
  • I added $\mathbb{E}_{\pi}$ to indicate that we're trying to estimate an expectation under $\pi$ (and the environment's dynamics).
  • +
+ +

So, the critic is trained to estimate $Q^{\pi}(s, a)$, which can intuitively be interpreted as the long-term discounted rewards that we expect to collect when executing $a$ in $s$, and selecting actions according to the distribution $\pi$ subsequently. It definitely still is trying to estimate $Q(s, a)$ values for state-action pairs, not just $V(s)$ values for states alone.

+ +
+ +
+

What In actor critic outputs $Q(s_{t+1},a_{t+1})$?

+
+ +

In practice, when using the loss function described in your question, $a_{t+1}$ really simply is a single action selected in an actual trajectory of experience by the policy $\pi$. The trained network simply takes $s_{t+1}$ as input, and the output corresponding to a single action $a_{t+1}$ as selected by the policy is used as the value for $Q(s_{t+1},a_{t+1})$ in the update rule.

+ +

The update rule does not involve any sum over all actions, multiplied with their probabilities. The ""trick"" is that we do not just run the update rule a single time, but we expect to generate lots (sometimes millions) of trajectories as experience, and we repeatedly run the update rule. In different trajectories, we'll experience the different actions $a_t$ as samples with approximately the correct frequencies, and in expectation we'll have proper update targets (except for potential bias resulting from function approximation).
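
In pseudo-Python, a single update computed from one sampled transition (s, a, r, s_next, a_next) might look roughly like this (a sketch only; critic, log_pi and gamma are assumed/hypothetical names, and q_sa should be treated as a constant in the actor loss):

q_sa      = critic(s, a)
td_target = r + gamma * critic(s_next, a_next)   # one sampled next action, no max or sum over actions
td_error  = td_target - q_sa

critic_loss = td_error ** 2                      # push Q(s, a) towards the sampled TD target
actor_loss  = -log_pi(s, a) * q_sa               # policy-gradient term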

+ +
+ +
+

I want to use actor critic with experience replay in an environment with a large action space

+
+ +

The Actor-Critic method in your question is, as I mentioned above, on-policy. This means that the experience used in update rules has to be generated according to exactly the same policy for which you are also learning value estimates. This is incompatible with the idea of experience replay, because old trajectories stored in a replay buffer were generated by older versions of your policy.

+ +

There are off-policy Actor-Critic methods which can correct for the mismatch in distributions and use experience replay, but these are going to be quite a bit more complicated. Examples are ACER and IMPALA.

+",1641,,,,,2/6/2019 18:07,,,,9,,,,CC BY-SA 4.0 +10427,2,,10422,2/6/2019 18:13,,3,,"

No, it will not converge in the general case (maybe it might in extremely convenient special cases, not sure, didn't think hard enough about that...).

+ +

Practically everything in Reinforcement Learning theory (including convergence proofs) relies on the Markov property; the assumption that the current state $s_t$ includes all relevant information, that the history leading up to $s_t$ is no longer relevant. In your case, this property is violated; it is important to remember whether or not we visited $s$ more recently than $s'$.

+ +

I suppose if you ""enhance"" your states such that they include that piece of information, then it should converge again. This means that you'd essentially double your state-space. For every state that you have in your ""normal"" state space, you'd have to add a separate copy that would be used in cases where $s$ was visited more recently than $s'$.

+",1641,,1641,,2/6/2019 18:28,2/6/2019 18:28,,,,1,,,,CC BY-SA 4.0 +10429,1,,,2/6/2019 23:29,,2,47,"

I'm seeing a lot of examples of neuroevolution techniques involving games or robot problems. Can neuroevolution be used for solving tasks other than games? For example, how could you transform a CSV file of psychological data to determine the best life actions you can get from a self-report questionnaire?

+",22070,,2444,,7/7/2019 19:17,7/7/2019 19:17,Can neuroevolution be used for solving tasks other than games?,,0,1,,,,CC BY-SA 4.0 +10430,1,10433,,2/6/2019 23:51,,2,109,"

A lot of research has been done to create the optimal (or ""smartest"") RL agent, using methods such as A2C. An agent can now beat humans at playing Go, Chess, Poker, Atari Games, DOTA, etc. But I think these kinds of agents will never be a friend to humans, because humans won't play with an agent that always beats them.

+ +

How could we create an agent that doesn't outperform humans, but has human-level skill, so that when it plays against a human, the human is still motivated to beat it?

+",16565,,2444,,2/15/2019 17:37,2/15/2019 17:37,How do we create a good agent that does not outperform humans?,,1,0,,,,CC BY-SA 4.0 +10431,1,10604,,2/7/2019 6:32,,10,3538,"

I have two Machine Learning models (I use LSTM) that have a different result on the validation set (~100 samples data):

+
    +
  • Model A: Accuracy: ~91%, Loss: ~0.01
  • +
  • Model B: Accuracy: ~83%, Loss: ~0.003
  • +
+

The size and the speed of both models are almost the same. So, which model should I choose?

+",16565,,2444,,1/29/2021 1:50,1/29/2021 1:50,Should I choose a model with the smallest loss or highest accuracy?,,3,2,,,,CC BY-SA 4.0 +10432,2,,5497,2/7/2019 8:55,,1,,"

Yes. The mutation can either disable or enable a gene.

+ +

It's in the original NEAT implementation released by Dr. Kenneth O. Stanley.

+ +
+

Declared in genetics.h:

+ +
void mutate_toggle_enable(int times); /* toggle genes on or off */
+void mutate_gene_reenable();  /* Find first disabled gene and enable it */
+
+
+ +

http://nn.cs.utexas.edu/soft-view.php?SoftID=4

+ +

http://nn.cs.utexas.edu/?neat-c

+",20193,,,,,2/7/2019 8:55,,,,0,,,,CC BY-SA 4.0 +10433,2,,10430,2/7/2019 9:20,,3,,"

You basically have to degrade the result, assuming that the machine always finds the best move. There are a number of possibilities:

+ +
    +
  • restrict the depth of searching. In early chess programs I believe that was the main way of regulating the difficulty. You stop the evaluation of moves after a particular depth in your search tree has been reached. This would be equivalent to only looking ahead two moves instead of twenty.

  • +
  • set a time limit. This is somewhat similar to restricting the depth of the search, but more generally applicable. If your algorithm accumulates candidate moves, and the general tendency is to get to the better moves after first finding a number of weaker ones, then you can stop at a given point in time and return what you have found then.

  • +
  • distort available information. This might not be that applicable to games such a chess, but you could restrict the information the machine has available for evaluating moves. Something like the ""Fog of War"" often used in strategy games. With incomplete information it is harder to find a good move, though it is not impossible, which makes it more challenging than, say, restricting the depth of search too much.

  • +
  • sub-optimal evaluation function. If you have a function that evaluates the quality of a move, simply fudge that function to not return the best value. Perhaps add a random offset to the return value to make it less deterministic/predictable.

  • +
+ +

There are probably other methods as well; the tricky part is to tread the fine line between appearing to be a weaker (but consistent) player, and just being a random number generator.
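
As a toy illustration of the last two ideas combined (a hedged sketch; legal_moves and evaluate are assumed to exist), you can blend occasional random moves with a deliberately noisy evaluation, and tune both knobs to set the difficulty:

import random

def pick_move(state, noise=0.5, blunder_rate=0.1):
    moves = legal_moves(state)
    if random.random() < blunder_rate:       # occasionally play a completely random move
        return random.choice(moves)
    # otherwise pick the best move under a noisy evaluation
    return max(moves, key=lambda m: evaluate(state, m) + random.gauss(0.0, noise))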

+",2193,,,,,2/7/2019 9:20,,,,2,,,,CC BY-SA 4.0 +10436,1,10463,,2/7/2019 11:34,,2,176,"

I have been researching how to determine some key points on an image; in this case, I'm going to use clothing (top side of the human body) pictures. I want to detect some corner points on those.

+ +

Example:

+ +

+ +

I have two solutions in mind. One is a CNN with transposed-convolution layers producing a heatmap from which I can extract the points. The second is to get 24 numbers as output from the model, meaning 12 (x, y) points. I don't know which one will be better.

+ +

In face point detection, they use the second method. In human pose estimation, they use the first. So what do you suggest I use? Or do you have any new ideas? Thanks

+",16864,,16864,,2/7/2019 11:40,2/8/2019 17:41,Key Point Extraction the best method?,,1,2,,,,CC BY-SA 4.0 +10440,2,,10418,2/7/2019 14:44,,1,,"

You can expect that the inference time will strongly depend on particular hardware and software present on your platform. First, GPU equipped devices (eg NVidia TX) will outperform non-GPU equipped devices (eg. Intel Movidius). Second, software support (eg. cudnn, TensorRT) will make dramatic further impact.

+ +

For instance, we have measured the inference time of two convolutional models. The model A requires 250% more floating point operations than the model B. Yet, the two models take roughly the same time to evaluate on our device, since the layers of model A are better optimized in software. Conclusion: algorithmic complexity and practical execution time on a particular computing platform are not bound to be proportional any more.

+",21726,,,,,2/7/2019 14:44,,,,0,,,,CC BY-SA 4.0 +10441,1,,,2/7/2019 15:22,,1,101,"

In the book ""Reinforcement Learning: An Introduction"", by Sutton and Barto, they provided the ""Q-learning prioritized sweeping"" algorithm, in which the model saves the next state and the immediate reward, for each state and action, that is, $Model(S_{t},A_{t}) \leftarrow S_{t+1}, R_{t+1}$.

+ +

If we want to use ""SARSA prioritized sweeping"", should we save ""next state, immediate reward, and next action"", that is, $Model(S_{t},A_{t}) \leftarrow S_{t+1}, R_{t+1}, A_{t+1}$?

+",10191,,2444,,2/7/2019 16:41,2/7/2019 17:16,What should be saved in SARSA prioritized sweeping?,,1,0,,,,CC BY-SA 4.0 +10442,1,,,2/7/2019 15:38,,6,1228,"

In this video, the lecturer states that $R(s)$, $R(s, a)$ and $R(s, a, s')$ are equivalent representations of the reward function. Intuitively, this is the case, according to the same lecturer, because $s$ can be made to represent the state and the action. Furthermore, apparently, the Markov decision process would change depending on whether we use one representation or the other.

+ +

I am looking for a formal proof that shows that these representations are equivalent. Moreover, how exactly would the Markov decision process change if we use one representation over the other? Finally, when should we use one representation over the other and why are there three representations? I suppose it is because one representation may be more convenient than another in certain cases: which cases? How do you decide which representation to use?

+",2444,,2444,,1/20/2021 17:02,1/27/2021 16:14,"How are the reward functions $R(s)$, $R(s, a)$ and $R(s, a, s')$ equivalent?",,2,0,,,,CC BY-SA 4.0 +10443,2,,10441,2/7/2019 17:10,,3,,"

SARSA is an on-policy method. Your historical choices of action will have been made using older Q values, and thus from a different policy. In addition, the action that was taken at the time may not have been typical for the agent (it may have been exploring). So you don't usually want to re-use historical action choices to calculate TD targets in single-step SARSA, because these may introduce bias.

+ +

Provided you are performing single-step SARSA, then you can re-generate the action choice sampling from the current best policy. This is similar to generating the max Q value in the TD target value $R_{t+1} + \text{max}_{a'} Q(S_{t+1},a')$.

+ +

You could do this using a regular SARSA sampling the action choice:

+ +

$$R_{t+1} + Q(S_{t+1}, a' \sim \pi(\cdot|S_{t+1}) )$$

+ +

Or you could use Expected SARSA and take a weighted mean of all possible actions:

+ +

$$R_{t+1} + \sum_{a' \in \mathcal{A}(S_{t+1})} \pi(a'|S_{t+1})Q(S_{t+1}, a')$$
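
With a tabular Q and the policy given as action probabilities, the Expected SARSA target is just a weighted average (a small sketch; q_next[a] holds $Q(S_{t+1}, a)$ and pi_next[a] holds $\pi(a|S_{t+1})$):

import numpy as np

def expected_sarsa_target(reward, q_next, pi_next, gamma=0.99):
    return reward + gamma * np.dot(pi_next, q_next)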

+ +

Technically these would both be on-policy with respect to evaluating the TD Target, but off-policy with respect to the distribution of $S_t, A_t$ that you are running the update for. That's already the case due to prioritised sweeping focussing more updates on certain transitions, but could be a big difference when using a neural network to approximate Q. Bear in mind that making TD learning methods off-policy can have a negative impact on stability.

+ +

If you want to process multi-step updates, then you do have to reference $A_{t+1}$ and adjust for when the historical data makes a different action choice than your current estimate of the best policy. This would commonly use importance sampling. This is true for Q-learning as well however, so there would still be no difference in what you store between Q-learning and SARSA.

+",1847,,1847,,2/7/2019 17:16,2/7/2019 17:16,,,,4,,,,CC BY-SA 4.0 +10444,2,,10442,2/7/2019 17:49,,7,,"

In general the different reward functions $R(s)$, $R(s, a)$ and $R(s, a, s')$ are not equivalent mathematically, so you will not find any formal proof.

+

It is possible for the functions to resolve to the same value in a specific MDP, if, for instance, you use $R(s, a, s')$ and the value returned only depends on $s$, then $R(s, a, s') = R(s)$. This is not true in general, but as the reward functions are often under your control, it can be the case quite often.

+

For instance, in scenarios where the agent's goal is to reach some pre-defined state, as in the grid world example from the video, then there is no difference between $R(s, a, s')$ or $R(s)$. Given that is the case, for those example problems you may as well use $R(s)$, as it simplifies the expressions that you need to calculate for algorithms like Q-learning.

+

I think the lecturer did not mean "equivalent" in the mathematical sense, but in the sense that future lectures will use one of the functions, and a lot of what you will learn is going to be much the same as if you had used a different reward function.

+
+

Finally, when should we use one representation over the other and why are there three representations?

+
+

Typically, I don't use any of those representations by default. I tend to use Sutton & Barto's $p(s', r|s, a)$ notation for combined state transitions and rewards. That expression returns probability of transitioning to state $s'$ and receiving reward $r$ when starting in state $s$ and taking action $a$. For discrete actions, you can re-write the expectation of the different functions $R$ in terms of this function as follows:

+

$$\mathbb{E}[R(s)] = \sum_{a \in \mathcal{A}(s)}\sum_{s' \in \mathcal{S}}\sum_{r \in {R}}rp(s', r|s, a)\qquad*$$

+

$$\mathbb{E}[R(s,a)] = \sum_{s' \in \mathcal{S}}\sum_{r \in {R}}rp(s', r|s, a)$$

+

$$\mathbb{E}[R(s,a, s')] = \sum_{r \in {R}}r\frac{p(s', r|s, a)}{p(s'|s, a)}$$

+

I think this is one way to see how the functions in the video are closely related.

+

Which one would you use? Depends on what you are doing. If you want to simplify an equation or code, then use the simplest version of the reward function that fits with the reward scheme you set up for the goals of the problem. For instance, if there is one goal state to exit a maze, and an episode ends as soon as this happens, then you don't care how you got to that state or what the previous state was, and can use $R(s)$

+

In practice, what happens if you use a different reward function is that you need to pay attention to where it appears in things like the Bellman equation for theoretical treatments. When you get to implement model-free methods like Q-learning, $R(s)$ or its variants don't really appear except in the theory.

+
+

* This is not technically correct in all cases. The assumption I have made is that $R(s)$ is a reward granted at the point of leaving state $s$, and is independent of how the state is left and where the agent ends up next.

+

If this was a fixed reward for entering state $s$, regardless of how, then it could be written around $R(s')$ as follows:

+

$$\mathbb{E}[R(s')] = \sum_{s \in \mathcal{S}}\sum_{a \in \mathcal{A}(s)}\sum_{r \in {R}}rp(s', r|s, a)$$

+

i.e. by summing all the rewards that end up at $s'$

+",1847,,2444,,1/27/2021 16:14,1/27/2021 16:14,,,,2,,,,CC BY-SA 4.0 +10445,2,,10442,2/7/2019 19:43,,5,,"

Let $R(s)$ denote a probability distribution over rewards that our agent may get in some MDP as a reward for entering a state $s$. The easiest case is to demonstrate that we can also choose to write this as $R(s, a)$ or $R(s, a, s')$: simply take $\forall a: R(s, a) = R(s)$, or $\forall a \forall s': R(s, a, s') = R(s)$, as also described in Neil's answer.

+ +
+ +

Let $R(s, a)$ denote a probability distribution over rewards that our agent may get as a reward for executing action $a$ in state $s$. The easy case of demonstrating equivalence to $R(s, a, s')$ is already handled above, but can we also construct an MDP in which we only use the $R(s)$ notation?

+ +

The easiest way I can think of to do so (may not be the cleanest way) would be to construct a new MDP with a bunch of ""dummy"" states $z(s, a)$, such that executing action $a$ in state $s$ in the original MDP deterministically leads to a dummy state $z(s, a)$ in the new MDP. Note that I write $z(s, a)$ to make the connection back to the original MDP explicit, but this is a completely independent MDP and you should view it as just a state ""$z$"".

+ +

Then, the reward distribution $R(s, a)$ that was associated with the state-action pair $(s, a)$ in the original MDP can be written as $R(z(s, a))$, which is now only a function of the state in the new MDP. In this dummy state $z(s, a)$ in the new MDP, every possible action $\alpha$ should have exactly the same transition probabilities towards new states $s'$ as the original transition probabilities for executing $a$ in $s$ back in the original MDP. This guarantees that the same policy has the same probabilities of reaching certain states in both MDPs; only in our new MDP the agent is forced to transition through these dummy states in between.

+ +

If you also have a discount factor $\gamma$ in the original MDP, I guess you should use a discount factor $\sqrt{\gamma}$ in the new MDP, because every step in the original MDP requires two steps (one step into a dummy state, and one step out of it again) in the new MDP.

+ +
+ +

The final case for $R(s, a, s')$ could be done in a very similar way, but it would get even more complicated to write out formally. The intuition would be the same though. Above, we pretty much ""baked"" state-action pairs $(s, a)$ from the original MDP into additional dummy states, such that in the new MDP we have states that ""carry the same amount of information"" as a full state-action pair in the original MDP. For the $R(s, a, s')$ case, you'd need to devise an even uglier solution with even more information ""baked into"" dummy states, such that you can treat full $(s, a, s')$ triples as single $z(s, a, s')$ states in the new MDP.

+ +
+ +
+

Finally, when should we use one representation over the other and why are there three representations? I suppose it is because one representation may be more convenient than another in certain cases: which cases? How do you decide which representation to use?

+
+ +

I would recommend always using the simplest representation that happens to be sufficient to describe how the rewards work in your environment in a natural way.

+ +

For example, if you have a two-player zero-sum game where terminal game states give a reward of $-1$, $0$, or $1$ for losses, draws, or wins, it is sufficient to use the $R(s)$ notation; the reward depends on the terminal game state reached, not on how it was reached. Another example would be a maze with a specific goal position, as described in Neil's answer. You could use the more complex $R(s, a)$ or $R(s, a, s')$ notations... but there wouldn't be much of a point in doing so really.

+ +

If you have an environment in which the reached state and the played action both have influence on the reward distribution, then it's much more sensible to just use the $R(s, a)$ notation rather than trying to define a massively overcomplicated MDP with dummy states as I've tried to do above. An example would be... let's say we're playing a quizz, where $s$ denotes the current question, and different actions $a$ are different answers that the agent can give. Then it's natural to model the problem as an MDP where $R(s, a)$ is only positive if $a$ is the correct answer to the question $s$.

+",1641,,,,,2/7/2019 19:43,,,,2,,,,CC BY-SA 4.0 +10446,1,10455,,2/7/2019 22:33,,2,364,"

I'm building a customer assistant chatbot in Python. So, I am modelling this problem as a text classification task. I have available more or less 700 sentences with an average length of 15 words (unbalanced classes).

+ +

What do you think, knowing that I have to do an oversampling, is this dataset large enough?

+",20780,,2444,,5/22/2020 1:24,5/22/2020 1:24,Is a dataset of roughly 700 sentences of an average length of 15 words enough for text classification?,,1,0,,,,CC BY-SA 4.0 +10447,1,10449,,2/7/2019 22:33,,9,12949,"

I have a dataset which I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 1972 columns (the features). The last column i.e. column 1972 has string type labels (14 different categories of target labels). I would like to use a CNN to classify the data in this case and predict the target labels using the available features. This is a somewhat unconventional approach though it seems possible. However, I am very confused on how the methodology should be as I could not find any sample code/ pseudo code guiding on using CNN for Classifying non-image data, either in Tensorflow or Keras. Any help in this regard will be highly appreciated. Cheers!

+",21460,,,,,8/17/2021 8:34,How to use CNN for making predictions on non-image data?,,3,0,,,,CC BY-SA 4.0 +10449,2,,10447,2/8/2019 0:52,,10,,"

You can use CNN on any data, but it's recommended to use CNN only on data that have spatial features (It might still work on data that doesn't have spatial features, see DuttaA's comment below).

+

For example, in an image, the connection between pixels in some area gives you another feature (e.g. an edge) instead of a feature from one pixel (e.g. a color). So, as long as you can shape your data appropriately, and your data have spatial features, you can use a CNN.

+

For text classification, there are connections between characters (that form words), so you can use a CNN for text classification at the character level.

+

For Speech recognition, there is also a connection between frequencies from one frame with some previous and next frames, so you can also use CNN for speech recognition.

+

If your data have spatial features, just reshape them to a 1D array (for example, for text) or a 2D array (for example, for audio). Tensorflow's conv1d and conv2d functions are general functions that can be used on any data; they treat the data as an array of floating-point numbers, not as image/audio/text.
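
For instance, a minimal tf.keras sketch for your case (assuming the 1971 feature columns, i.e. all columns except the label, and the 14 classes from the question; the layer sizes are made-up values) would reshape each row into a 1D ""signal"" with one channel:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Reshape((1971, 1), input_shape=(1971,)),   # each row becomes a 1D sequence with 1 channel
    layers.Conv1D(32, kernel_size=5, activation='relu'),
    layers.GlobalMaxPooling1D(),
    layers.Dense(14, activation='softmax'),           # 14 target categories
])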

+

But if your data doesn't have spatial features, for example, your features are price, salary, status_marriage, etc. I think you don't need CNN, and using CNN won't help.

+",16565,,36737,,4/13/2021 15:54,4/13/2021 15:54,,,,7,,,,CC BY-SA 4.0 +10450,1,10453,,2/8/2019 1:47,,3,1558,"

In my understanding, Q-learning gives you a deterministic policy. However, can we use some technique to build a meaningful stochastic policy from the learned Q values? I think that simply using a softmax won't work.

+",22105,,22105,,2/8/2019 14:45,2/8/2019 14:45,Can Q-learning be used to derive a stochastic policy?,,1,1,,,,CC BY-SA 4.0 +10453,2,,10450,2/8/2019 8:19,,2,,"

No it is not possible to use Q-learning to build a deliberately stochastic policy, as the learning algorithm is designed around choosing solely the maximising value at each step, and this assumption carries forward to the action value update step $Q_{k+1}(S_t,A_t) = Q_k(S_t,A_t) + \alpha(R_{t+1} +\gamma\text{max}_{a'}Q_k(S_{t+1},a') - Q_k(S_t,A_t))$ - i.e. the assumption is that the agent will always choose the highest Q value, and that in turn is used to calculate the TD target values. If you use a stochastic policy as the target policy, then the assumption is broken and the Q table (or approximator) would not converge to estimates of action value for the policy*.

+ +

The policy produced by Q-learning can only be treated as stochastic when there is more than one maximum action value in a particular state - in which case you can select equivalent maximising values using any distribution.

+ +

In theory you could use the Q values to derive various distributions, such as a Boltzmann distribution, or softmax as you suggest (you will want to include some weighting factor to make softmax work in general). These can work nicely for the behaviour policy, for further training, and as an alternative to the more common $\epsilon$-greedy approach. However, they are not optimal policies, and the training algorithm will not adjust the probabilities in any meaningful way related to the problem you want to solve. You can set a value for e.g. $\epsilon$ for $\epsilon$-greedy, or have more sophisticated action choice with more parameters, but no value-based method can provide a way to change those parameters to make action choice optimal.
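
For example, a Boltzmann/softmax behaviour policy over the learned Q values looks like this (a sketch; note that the temperature is a hand-tuned knob, not something the value-based learning will optimise for you):

import numpy as np

def boltzmann_action(q_values, temperature=1.0):
    prefs = q_values / temperature
    prefs = prefs - prefs.max()                 # numerical stability
    probs = np.exp(prefs) / np.sum(np.exp(prefs))
    return np.random.choice(len(q_values), p=probs)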

+ +

In cases where a stochastic policy would perform better - e.g. Scissor, Paper, Stone versus an opponent exploiting patterns in the agent's behaviour - then value based methods provide no mechanism to learn a correct distribution, and they typically fail to learn well in such environments. Instead you need to look into policy gradient methods, where the policy function is learned directly and can be stochastic. The most basic policy gradient algorithm is REINFORCE, and variations on Actor-Critic such as A3C are quite popular.

+ +
+ +

* You could get around this limitation by using an estimator that does work with a stochastic target policy, e.g. SARSA or Expected SARSA. Expected SARSA can even be used off-policy to learn one stochastic policy's Q values whilst behaving differently. However, neither of these provide you with the ability to change the probability distribution towards an optimal one.

+",1847,,1847,,2/8/2019 12:40,2/8/2019 12:40,,,,2,,,,CC BY-SA 4.0 +10454,1,,,2/8/2019 8:32,,1,58,"

Let's assume that we have a dataset of variables (random events). A priori, I would like to set dependency conditions between some of them and perform structure learning to figure out the rest of the Bayesian network.

+

How can this be done practically (e.g. some libraries, like bnlearn) or, at least, in theory?

+

I was trying to google it, but haven't found anything related.

+",22113,,2444,,12/13/2021 8:52,5/7/2023 13:00,How to perform structure learning for Bayesian network given already partially constructed Bayesian network?,,1,0,,,,CC BY-SA 4.0 +10455,2,,10446,2/8/2019 9:19,,2,,"

It depends on the number of classes; we are getting good results with about 40 training examples per class.

+ +

A good way to get an idea about this is to run a test with an increasing set of training data, evaluating the result as you go along. Obviously, with a small set (eg 3 sentences per class), it will be very poor, but the accuracy should quickly increase and then stabilise at a higher level. With larger amounts of data you will probably only find a small increase or no change at all.

+ +

Collecting this data would not only give you confidence in your conclusion, it would also be a good supporting argument when you have to ask for more training data, or have to justify the poor performance of the classifier if you do find the data set is too small.

+ +

So, set up an automated 10-fold cross validation, feed an increasing amount of your available data into it, sit back, and graph the results.
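
With scikit-learn this is almost a one-liner (a sketch, assuming clf, X and y are your classifier and data):

import numpy as np
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    clf, X, y, cv=10, train_sizes=np.linspace(0.1, 1.0, 8)
)
print(sizes, val_scores.mean(axis=1))   # plot these to see where the accuracy levels off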

+",2193,,,,,2/8/2019 9:19,,,,0,,,,CC BY-SA 4.0 +10456,1,,,2/8/2019 10:52,,2,333,"

I have chromosomes with floating-point representation with values between $0$ and $1$. For example

+

Let $p_1 = [0.1, 0.2, 0.3]$ and $p_2 = [0.5, 0.6, 0.7]$ be two parents. Both comply with the set of constraints. In my case, the major constraint is $$ p_1[1]*p_1[2] - k*p_1[0] \geq 0 $$ for any chromosome $p_1$. For the example above we can take $k=0.3$, which renders $c_2$ infeasible.

+

However, for the children produced by 1-point crossover, we get $c_1 = [0.1, 0.6, 0.7]$ and $c_2 = [0.5, 0.2, 0.3]$, of which one or both may not comply with the given constraints.

+

A similar scenario can also occur with a small perturbation of values due to mutation strategy. Correct me if I am wrong in the belief that such kind of scenarios might arise irrespective of the strategy employed for crossover and mutation.

+

What are the options to handle such kinds of cases?

+",22115,,2444,,1/30/2021 22:00,1/30/2021 22:00,How to handle infeasibility caused due to crossover and mutation in genetic algorithm for optimization?,,1,0,,,,CC BY-SA 4.0 +10457,2,,10456,2/8/2019 15:37,,1,,"

You have two broad categories of options, prevention and repair.

+ +

Prevention means defining a crossover and mutation operator that try to be more intelligent about respecting the constraints. Suppose you have an encoding where each individual is a list of integers, and the constraint is that there can't be duplicates. You might define a crossover operator that did something like the following. For each position in the offspring, choose randomly from one parent such that, if possible, you choose a value that hasn't already appeared in the offspring. If both parents have values at that position that are already used in the offspring, choose a random value.

+ +

That operator would avoid violating the constraint in the first place, while attempting as much as possible to have the offspring inherit information from the parents.

+ +

Another option is to just let the operator violate constraints and then deal with the ramifications afterward. You don't go into details about what your constraints actually are, just that c1=[0.1, 0.6, 0.7] violates them. Let's say the constraint is that the third position should not be more than 4x greater than the first one. OK, so then let's take this offspring and adjust either the first or third item. Maybe we make the new individual into c1=[0.2, 0.6, 0.7].

+ +

Often, you want some element of randomness in either option so that you don't strongly bias the production of new search points. In my example, don't always make the first element larger. Sometimes, make the third element smaller or produce some random combination of both to repair the constraint violation.
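
For the floating-point constraint from the question ($p[1] \cdot p[2] - k \cdot p[0] \geq 0$), a randomized repair step along those lines might look like this (a sketch only; the 50/50 split and the specific adjustments are arbitrary choices, and genes are assumed to lie in $[0, 1]$ with $k \leq 1$):

import random

def repair(c, k=0.3):
    # Repair an offspring that violates p[1]*p[2] - k*p[0] >= 0, choosing at
    # random whether to shrink p[0] or to grow p[1] and p[2].
    c = list(c)
    if c[1] * c[2] - k * c[0] < 0:
        if random.random() < 0.5:
            c[0] = random.uniform(0.0, c[1] * c[2] / k)   # lower p[0] into the feasible range
        else:
            needed = (k * c[0]) ** 0.5                    # sqrt(k*p[0]) <= 1 for genes in [0, 1] and k <= 1
            c[1], c[2] = max(c[1], needed), max(c[2], needed)
    return c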

+ +

Finally, both options typically strongly benefit from domain knowledge. Design an operator that understands your problem and tries to intelligently solve the problem.

+",3365,,,,,2/8/2019 15:37,,,,0,,,,CC BY-SA 4.0 +10458,2,,10447,2/8/2019 16:05,,6,,"

The convolutional models are a method of choice when your problem is translation invariant (or covariant). In image classification, the image should be classified into class 'cow' if a cow is present in any part of the image. In text classification, the same phrases and sentences convey related meaning wherever they appear in the text. In speech recognition, the same syllable is used at different places to build different words.

+ +

In your problem, you should check whether some subsequences of your 1791 columns give rise to the same meaning although they are located at different places within the sample. If the answer is positive, then convolutional layers are likely going to improve the performance.
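
If that is the case, a quick way to test it is to treat each sample as a length-1791 sequence with a single channel and compare a small 1-D convolutional model against your current dense model. A sketch in Keras (the layer sizes are arbitrary, and a binary target is assumed here):

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(32, kernel_size=9, activation='relu', input_shape=(1791, 1)))  # slide filters over the 1791 columns
model.add(GlobalMaxPooling1D())          # keep the strongest response, wherever it occurred
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# X must be reshaped to (n_samples, 1791, 1) before calling model.fit(X, y)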

+",21726,,,,,2/8/2019 16:05,,,,0,,,,CC BY-SA 4.0 +10459,2,,2279,2/8/2019 16:10,,0,,"

Object detection models work in a very similar fashion to what you have proposed. They output dense predictions at reduced resolutions. Each prediction fires if an object center is located within the respective region of the image. Of course, there are various further developments, but the main idea is exactly that.

+",21726,,,,,2/8/2019 16:10,,,,0,,,,CC BY-SA 4.0 +10461,1,,,2/8/2019 16:45,,2,654,"

I've used a table to represent the Q function, while an agent is being trained to catch the cheese without touching the walls.

+ +

The first and last rows (and columns) of the matrix are associated with the walls. I placed a piece of cheese in the last cell, which the agent must catch while being trained.

+ +

So far, I've done it with dynamic states and, when necessary, I resized the matrix to add new states. I've used four actions (up, left, right and down).

+ +

I would like now to use an ANN to represent my Q function. How do I do that? What should be the input and output of such neural network?

+",17405,,2444,,2/10/2019 20:27,2/10/2019 20:27,How do I convert table-based to neural network-based Q-learning?,,1,0,,,,CC BY-SA 4.0 +10462,1,,,2/8/2019 17:39,,1,592,"

LSTM is supposed to be the right tool to capture path-dependency in time-series data.

+ +

I decided to run a simple experiment (simulation) to assess the extent to which LSTM is better able to understand path-dependency.

+ +

The setting is very simple. I just simulate a bunch of (N=100) paths coming from 4 different data-generating processes. Two of these processes represent a real increase and a real decrease, while the other two represent fake trends that eventually revert to zero.

+ +

The following plot shows the simulated paths for each category:

+ +

+ +

The candidate machine learning algorithm will be given the first 8 values of the path ( t in [1,8] ) and will be trained to predict the subsequent movement over the last 2 steps.

+ +

In other words:

+ +
    +
  • the feature vector is X = (p1, p2, p3, p4, p5, p6, p7, p8)

  • +
  • the target is y = p10 - p8

  • +
+ +

I compared LSTM with a simple Random Forest model with 20 estimators. Here are the definitions and the training of the two models, using Keras and scikit-learn:

+ +
# LSTM
+from keras.models import Sequential
+from keras.layers import LSTM
+model = Sequential()
+model.add(LSTM((1), batch_input_shape=(None, H, 1), return_sequences=True))
+model.add(LSTM((1), return_sequences=False))
+model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
+history = model.fit(train_X_LS, train_y_LS, epochs=100, validation_data=(vali_X_LS, vali_y_LS), verbose=0)
+
+ +
# Random Forest
+from sklearn.ensemble import RandomForestRegressor
+RF = RandomForestRegressor(random_state=0, n_estimators=20)
+RF.fit(train_X_RF, train_y_RF);
+
+ +

The results are summarized by the following scatter plots:

+ +

+ +

As you can see, the Random Forest model is clearly outperforming the LSTM. The latter seems unable to distinguish between the real and the fake trends.

+ +

Do you have any idea to explain why this is happening?

+ +

How would you modify the LSTM model to make it better at this problem?

+ +

Some remarks:

+ +
    +
  • The data points are divided by 100 to make sure gradients do not explode
  • +
  • I tried to increase the sample size, but I noticed no differences
  • +
  • I tried to increase the number of epochs over which the LSTM is trained, but I noticed no differences (the loss becomes stagnant after a bunch of epochs)
  • +
  • You can find the code I used to run the experiment here
  • +
+",22129,,22129,,2/8/2019 22:22,2/8/2019 22:22,Experiment shows that LSTM does worse than Random Forest... Why?,,0,1,,,,CC BY-SA 4.0 +10463,2,,10436,2/8/2019 17:41,,0,,"

The 2nd method would make sense only if your object is at the same position in all test images. You would have such a situation if you operated on crops located by a separate object detection algorithm. This happens to be the case in facial key-point detection.

+ +

The 1st method would be much more robust to various object poses since it is translation covariant by design. If a keypoint is detected at location A, it will be equally well detected at any other position with the same set of parameters.

+",21726,,,,,2/8/2019 17:41,,,,2,,,,CC BY-SA 4.0 +10464,2,,10461,2/8/2019 17:48,,1,,"

A neural network (NN) is a ""function approximator"", that is, it is a model that can be used to approximate functions. In fact, a neural network with at least one hidden layer is a ""universal"" function approximator (that is, it can approximate any function).

+ +

In mathematics, a function $f$ is usually represented as a mapping of the form $f: D \rightarrow C$, where $D$ and $C$ are respectively the domain (inputs) and codomain (outputs) of $f$, and $\rightarrow$ means that $f$ ""maps"" $D$ to $C$. A NN (with at least one hidden layer) can thus approximate any function $f$ of this form.

+ +

In your context, the $Q$ table is a function: it is a mapping between states and actions (inputs) and Q values (outputs), which are the ""expected future cumulative reward"" (that you will obtain if you take a certain action from a certain state and then continue to follow the same policy). The Q function can thus more formally be denoted by $Q: (S, A) \rightarrow \mathcal{R}$, where $S$ is the ""space of states"" and $A$ is the ""space of actions"" in your problem. Initially, your Q table does not contain the correct (optimal) values. However, after training (or learning), hopefully, your Q table will be an approximation to the optimal Q function for your specific problem.

+ +

How do you then represent this table using a NN? What should be the inputs and outputs of the NN?

+ +

Let's suppose that your $Q$ table is implemented as matrix $M$. Then $M[s, a]$ is the $Q$ value for the state $s$ and action $a$. So, in this case, the combination of $s$ and $a$ is the input, whereas $M[s, a]$ is the output of your $Q$ function.

+ +

To represent this table as a NN, you can thus have as input of the NN the state and action, and as output the $Q$ value. You will then train the NN (using e.g. back-propagation) to learn the $Q(s, a)$, given a state $s$ and an action $a$ as input, for all $s \in \mathcal{S}$ and $a \in \mathcal{A}$. So, during your $Q$-learning algorithm, instead of using $M[s, a]$ to represent $Q(s, a)$, you will simply use the current output of your neural network.
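
As a minimal sketch of this (assuming the state and the action are each encoded as one-hot vectors and concatenated; the layer sizes and the 5x5 grid are arbitrary illustrations):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

n_states, n_actions = 25, 4          # e.g. a 5x5 grid world with 4 moves

# Input: one-hot state concatenated with one-hot action; output: a single Q value.
q_net = Sequential()
q_net.add(Dense(32, activation='relu', input_shape=(n_states + n_actions,)))
q_net.add(Dense(1, activation='linear'))
q_net.compile(loss='mse', optimizer='adam')

def encode(s, a):
    x = np.zeros(n_states + n_actions)
    x[s] = 1.0
    x[n_states + a] = 1.0
    return x.reshape(1, -1)

# Q-learning target for one transition (s, a, r, s'):
#   q_target = r + gamma * max over a' of q_net.predict(encode(s_prime, a_prime))
#   q_net.fit(encode(s, a), np.array([[q_target]]), verbose=0)

In practice it is more common (e.g. in DQN) to give the network only the state as input and have one output per action, which lets you compute the maximum over actions in a single forward pass.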

+ +

Note that, in practice, I think, you will likely encounter problems while training the NN, if you update the weights of your NN at every time step (because e.g. NNs do not cope well with correlated data, and, in general, the ""experience"" data you will obtain from time step to time step will be highly correlated). Anyway, this is the basic idea of how to use a NN to represent a $Q$ function. There are other ways, but this is the simplest one, at least, conceptually.
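
A common remedy (used, for example, in DQN) is a small experience-replay buffer, from which you train on random mini-batches instead of on each step's transition alone. A sketch of such a buffer:

import random
from collections import deque

replay_buffer = deque(maxlen=10000)

def remember(s, a, r, s_prime, done):
    replay_buffer.append((s, a, r, s_prime, done))

def sample_batch(batch_size=32):
    # Random sampling breaks up the correlation between consecutive transitions.
    return random.sample(replay_buffer, min(batch_size, len(replay_buffer)))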

+",2444,,2444,,2/8/2019 21:10,2/8/2019 21:10,,,,18,,,,CC BY-SA 4.0 +10465,1,,,2/8/2019 19:37,,1,79,"

I have a data analysis problem that I can reduce to one similar to analyzing the trajectories in the images below. These images show the tracks of subatomic particles interacting in a bubble chamber.

+ +

It's pretty obvious that by eye, easily discernible patterns can be seen. I want very much to know more about how classification and segmentation can be done using neural networks for this type of image.

+ +

These images are binary. The trajectory is either at a point in the image or it isn't. As can be seen, trajectories cross over one another. Some data appears to be missing in otherwise smooth curves, at arbitrary points along those curves. (My data may be more sparse in this respect.)

+ +

A typical paper on bubble chamber analysis that I would find deals with the analysis of the physics after trajectories have been classified and segmented.

+ +

Can anyone identify some papers that address this or something similar in the context of neural networks? I am not able to find anything recent on automated methods at all, but my google fu may not be up to the challenge. (By the way, I am less interested in some of the parametric methods like Hough Transforms. I'd like to focus on the neural approach.)

+ +

(I posted this previous question which wasn't quite as specific as this one. I hope there is some available research in this area related to physics that might give me some insights that are more directly related to my problem.)

+ +

+ +

+",8439,,,,,2/8/2019 19:37,Bubble Chamber Image Analysis Using Neural Network,,0,0,,,,CC BY-SA 4.0 +10466,1,10498,,2/8/2019 19:56,,0,294,"

I'm a beginner in machine learning and I was trying to make a test neural network for digit recognition from scratch using Numpy. I used the MNIST dataset for training and testing. The input layer has 28*28 neurons, which correspond to the pixels of the image to be recognized. The output layer has 10 neurons, which correspond to the digits 0-9 and return values from 0 to 1 that represent the chance that the corresponding digit is shown in the image. The class Layer represents a separate layer and contains links to the previous and next layer (if prevLayer is None, the current layer is the input layer; if nextLayer is None, the current layer is the output layer). The forward() method is responsible for passing data through the neural network. The backprop() method is responsible for training the neural network via the backpropagation algorithm. A Layer object contains the weights (W) between the previous and the current layer (the input layer object doesn't contain weights). The 'data_in' property contains the vector of values calculated before they are passed into the activation function. The 'data' property contains the values after the activation function. But, unfortunately, it doesn't work: the value returned by the loss function doesn't decrease during training, and the neural network returns the same result during testing. I assume that the bugs might be associated with the backprop() and softmax_derivative() methods. I tried in vain to find all the bugs. Here's my code:

+ +
import numpy as np
+
+def ReLU(x):
+    return np.maximum(0, x)
+
+def ReLU_derivative(x):
+    return np.greater(x, 0).astype(int)
+
+def softmax(x):
+    shift = x - np.max(x)
+    return np.exp(shift) / np.sum(np.exp(shift))
+
+def softmax_derivative(x):
+    sm_array = softmax(x)
+    J = np.zeros((x.size, x.size))
+    for i in range(x.size):
+        for j in range(x.size):
+            delta = np.equal(i, j).astype(int)
+            J[j, i] = sm_array[0][i] * (delta - sm_array[0][j])
+    return J
+
+class Layer:
+    def __init__(self, size, prev_layer=None):
+        self.size = size
+        self.prevLayer = prev_layer
+        self.nextLayer = None
+        self.data = None
+        self.data_in = None
+        if prev_layer is not None:
+            self.prevLayer.nextLayer = self
+            self.W = np.random.random((self.prevLayer.size, size))
+            self.W_bias = np.array([np.random.random(size)])
+        else:
+            self.W = None
+            self.W_bias = None
+
+    def forward(self):
+        if self.prevLayer is not None:
+            self.data_in = np.dot(self.prevLayer.data, self.W)
+            self.data_in += np.dot([[1]], self.W_bias)
+            if self.nextLayer is not None:
+                self.data = ReLU(self.data_in)
+                self.nextLayer.forward()
+            else:
+                self.data = softmax(self.data_in)
+        else:
+            self.nextLayer.forward()
+
+    def backprop(self, expected_output=None, prev_delta=None):
+        if prev_delta is None:
+            #print(self.data_in)
+            delta = np.dot(-(expected_output - self.data), softmax_derivative(self.data_in))
+            delta_bias = delta
+        else:
+            delta = np.dot(prev_delta, self.nextLayer.W.T) * ReLU_derivative(self.data_in)
+            delta_bias = np.dot(prev_delta, self.nextLayer.W_bias.T) * ReLU_derivative(self.data_in)
+        training_velocity = 0.1
+        W_dif = np.dot(self.prevLayer.data.T, delta) * training_velocity
+        W_bias_dif = np.dot([[1]], delta_bias) * training_velocity
+        if self.prevLayer.prevLayer is not None:
+            self.prevLayer.backprop(prev_delta=delta)
+        self.W -= W_dif
+        self.W_bias -= W_bias_dif
+
+f_images = open(""train-images.idx3-ubyte"", ""br"")
+f_images.seek(4)
+f_labels = open(""train-labels.idx1-ubyte"", ""br"")
+f_labels.seek(8)
+images_number = int.from_bytes(f_images.read(4), byteorder='big')
+rows_number = int.from_bytes(f_images.read(4), byteorder='big')
+cols_number = int.from_bytes(f_images.read(4), byteorder='big')
+
+input_layer = Layer(rows_number*cols_number)
+hidden_layer1 = Layer(rows_number*cols_number*7//10, input_layer)
+hidden_layer2 = Layer(rows_number*cols_number*7//10, hidden_layer1)
+output_layer = Layer(10, hidden_layer2)
+digits = np.array([np.zeros(10)])
+
+input_image = np.array([np.zeros(rows_number * cols_number)])
+for k in range(images_number):
+    for i in range(rows_number):
+        for j in range(cols_number):
+            input_image[0][i*cols_number+j] = int.from_bytes(f_images.read(1), byteorder='big') / 255.0 * 2 - 1
+    input_layer.data = input_image
+    input_layer.forward()
+    current_digit = int.from_bytes(f_labels.read(1), byteorder='big')
+    digits[0][current_digit] = 1
+    output_layer.backprop(expected_output=digits)
+    print(np.sum((digits - output_layer.data)**2)/2)
+    digits[0][current_digit] = 0
+    if((k+1) % 1000 == 0):
+        print(str(k+1) + "" / "" + str(images_number))
+f_images.close()
+f_labels.close()
+
+f_images = open(""t10k-images.idx3-ubyte"", ""br"")
+f_images.seek(4)
+f_labels = open(""t10k-labels.idx1-ubyte"", ""br"")
+f_labels.seek(8)
+images_number = int.from_bytes(f_images.read(4), byteorder='big')
+rows_number = int.from_bytes(f_images.read(4), byteorder='big')
+cols_number = int.from_bytes(f_images.read(4), byteorder='big')
+
+for k in range(images_number):
+    for i in range(rows_number):
+        for j in range(cols_number):
+            input_image[0][i*cols_number+j] = int.from_bytes(f_images.read(1), byteorder='big')
+    input_layer.data = input_image
+    input_layer.forward()
+    current_digit = int.from_bytes(f_labels.read(1), byteorder='big')
+    print(output_layer.data)
+
+f_images.close()
+f_labels.close()
+
+ +

I would appreciate for any help. Thanks in advance!

+",21567,,21567,,2/10/2019 16:06,2/10/2019 22:31,"A neural network for digits recognition doesn't work (MNIST, Numpy)",,1,0,,2/16/2019 19:38,,CC BY-SA 4.0 +10467,1,,,2/8/2019 21:04,,4,196,"

Suppose that we are doing machine translation. We have a conditional language model with attention where we are trying to predict a sequence $y_1, y_2, \dots, y_J$ from $x_1, x_2, \dots x_I$: $$P(y_1, y_2, \dots, y_{J}|x_1, x_2, \dots x_I) = \prod_{j=1}^{J} p(y_j|v_j, y_1, \dots, y_{j-1})$$ where $v_j$ is a context vector that is different for each $y_j$. Using an RNN with an encoder-decoder structure, each element $x_i$ of the input sequence and $y_j$ of the output sequence is converted into an embedding $h_i$ and $s_j$ respectively: $$h_i = f(h_{i-1}, x_i) \\ s_j = g(s_{j-1},[y_{j-1}, v_j])$$ where $f$ is some function of the previous input state $h_{i-1}$ and the current input word $x_i$ and $g$ is some function of the previous output state $s_{j-1}$, the previous output word $y_{j-1}$ and the context vector $v_j$.

+ +

Now, we want the process of predicting $s_j$ to ""pay attention"" to the correct parts of the encoder states (context vector $v_j$). So: $$v_j = \sum_{i=1}^{I} \alpha_{ij} h_i$$ where $\alpha_{ij}$ tells us how much weight to put on the $i^{th}$ state of the source vector when predicting the $j^{th}$ word of the output vector. Since we want the $\alpha_{ij}$s to be probabilities, we use a softmax function on the similarities between the encoder and decoder states: $$\alpha_{ij} = \frac{\exp(\text{sim}(h_i, s_{j-1}))}{\sum_{i'=1}^{I} \exp(\text{sim}(h_{i'}, s_{j-1}))}$$

+ +

Now, in additive attention, the similarities of the encoder and decoder states are computed as: $$\text{sim}(h_i, s_{j}) = \textbf{w}^{T} \text{tanh}(\textbf{W}_{h}h_{i} +\textbf{W}_{s}s_{j})$$

+ +

where $\textbf{w}$, $\textbf{W}_{h}$ and $\textbf{W}_{s}$ are learned attention parameters using a one-hidden layer feed-forward network.

+ +

What is the intuition behind this definition? Why use the $\text{tanh}$ function? I know that the idea is to use one layer of a neural network to predict the similarities.

+ +

Added. This description of machine translation/attention is based on the Coursera course Natural Language Processing.

+",22131,,22131,,2/8/2019 22:26,2/8/2019 22:26,What is the intuition behind the calculation of the similarity between encoder and decoder states?,,0,2,,,,CC BY-SA 4.0 +10471,1,10489,,2/9/2019 10:17,,5,911,"

I am studying a knowledge base (KB) from the book "Artificial Intelligence: A Modern Approach" (by Stuart Russell and Peter Norvig) and from this series of slides.

+

A formula is satisfiable if there is some assignment to the variables that makes the formula evaluate to true. For example, if we have the boolean formula $A \land B$, then the assignments $A=\text{true}$ and $B=\text{true}$ make it satisfiable. Right?

+

But what does it mean for a KB to be consistent? The definition (given at slide 14 of this series of slides) is:

+
+

a KB is consistent with formula $f$ if $M(KB \cup \{ f \})$ is non-empty (there is a world in which KB is true and $f$ is also true).

+
+

Can anyone explain this part to me with an example?

+",21719,,2444,,1/21/2021 18:44,1/21/2021 18:44,When is a knowledge base consistent?,,2,0,,,,CC BY-SA 4.0 +10472,1,,,2/9/2019 11:24,,1,234,"

I've recently started reading a book about deep learning. The book is titled ""Grokking Deep Learning"" (by Andrew W Trask). In chapter 3 (pages 44 and 45), it talks about multiplying vectors using dot product and element-wise multiplication. For instance, taking 3 scalar inputs (vector) and 3 vector weights (matrix) and multiplying.

+ +

From my understanding, when multiplying vectors, their sizes need to be identical. The concept I have a hard time understanding is multiplying a vector by a matrix. The book gives an example of a 1x4 vector being multiplied by a 4x3 matrix. The output is a 1x3 vector. I am confused because I assumed that multiplying a vector by a matrix requires the same number of columns, but I have read that the matrix needs as many rows as the vector has columns.

+ +

If I do not have an equal number of columns, how does my deep learning algorithm multiply each input in my vector by a corresponding weight?

+",22145,,2444,,2/13/2019 2:35,2/13/2019 2:35,How are vectors and matrices multiplied in supervised machine learning?,,1,1,,,,CC BY-SA 4.0 +10473,2,,10472,2/9/2019 12:13,,1,,"

In general, when people do not explicitly state it, a vector $v \in \mathbb{R}^n$ is usually considered a ""column vector"", that is, you can think of it as the matrix $v \in \mathbb{R}^{n \times 1}$ (that is, a matrix with $n$ rows and $1$ column). +If it is not explicitly stated and you assume that the given vector is a column vector, but the dimensions do not match, then you should check that the dimensions match if you consider the given vector as a ""row vector"" (because it might be the case that the author is implicitly considering the vectors as row vectors).

+ +

Having said that, you can multiply a vector $v \in \mathbb{R}^n$ by a matrix $A \in \mathbb{R}^{n \times m}$ from the left, that is, you can do $v^T A$. Here, I considered $v$ has a column vector (that is, $v \in \mathbb{R}^{n \times 1}$), even though I have not explicitly stated it: you can and you often need to deduce this from the context! If I transpose $v$, I obtain $v^T \in \mathbb{R}^{1 \times n}$, and, thus, you can see that you can indeed perform the operation $v^T A = u \in \mathbb{R}^{1 \times m}$. Note that, at this point, $u$, the vector resulting from the operation $v^T A$, is actually considered a matrix, but you can still use it as a vector, if you need and that is permitted according to the mathematical rules of the operations you need to perform. Note also that I cannot multiply $v$ from the right of $A$, because there is no way of making the dimensions match. Have a look at this question, if you do not know how to multiply a vector by a matrix from the left.

+ +

Similarly, you can multiply $v \in \mathbb{R}^n$ by the matrix $B \in \mathbb{R}^{m \times n}$ only from the right. If $v$ is a column vector (that is, $v \in \mathbb{R}^{n \times 1}$), you need to do $B v \in \mathbb{R}^{m \times 1}$, but, if $v$ is a row vector (that is, $v \in \mathbb{R}^{1 \times n}$), you will first need to transpose it, so that you can perform the operation: $B v^T\in \mathbb{R}^{m \times 1}$.

+ +

Furthermore, note that, if you multiply a vector by a matrix from the left, that vector needs to be a ""row vector"", so, if you initially assume that the vector is a column vector (or that is explicitly stated), you will need to transpose it first, before the multiplication. However, if the vector is already a row vector, you won't have to transpose it. Similarly, if you multiply a vector by a matrix from the right, you will need a column vector.

+ +

To conclude, you can multiply a vector either from the left or right of a matrix, but you need to make sure that the dimensions match: if you multiply from the left, you will need to check that the dimensions of the vector match the number of the rows of the matrix; if you multiply the vector from the right of the matrix, you will need to check that the dimensions of the vector match the number of columns of the matrix.
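
A small NumPy check of the shapes discussed above (the numbers are arbitrary):

import numpy as np

v = np.random.rand(4, 1)      # column vector, shape (4, 1)
A = np.random.rand(4, 3)      # matrix with 4 rows and 3 columns

u = v.T @ A                   # (1, 4) @ (4, 3) -> shape (1, 3)
print(u.shape)                # (1, 3): the 1x4 vector times 4x3 matrix case from the book

B = np.random.rand(3, 4)
w = B @ v                     # (3, 4) @ (4, 1) -> shape (3, 1)
print(w.shape)                # (3, 1)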

+",2444,,2444,,2/9/2019 14:23,2/9/2019 14:23,,,,11,,,,CC BY-SA 4.0 +10474,1,,,2/9/2019 14:48,,14,7261,"

In the context of RL, there is the notion of on-policy and off-policy algorithms. I understand the difference between on-policy and off-policy algorithms. Moreover, in RL, there's also the notion of online and offline learning.

+

What is the relation (including the differences) between online learning and on-policy algorithms? Similarly, what is the relation between offline learning and off-policy algorithms?

+

Finally, is there any relation between online (or offline) learning and off-policy (or on-policy) algorithms? For example, can an on-policy algorithm perform offline learning? If yes, can you explain why?

+",2444,,37607,,3/6/2023 20:34,3/6/2023 20:34,What is the relation between online (or offline) learning and on-policy (or off-policy) algorithms?,,1,0,,,,CC BY-SA 4.0 +10476,1,10477,,2/9/2019 15:33,,4,537,"

The update rules for Q-learning and SARSA each are as follows:

+ +

Q Learning:

+ +

$$Q(s_t,a_t) \leftarrow Q(s_t,a_t)+\alpha[r_{t+1}+\gamma\max_{a'}Q(s_{t+1},a')-Q(s_t,a_t)]$$

+ +

SARSA:

+ +

$$Q(s_t,a_t) \leftarrow Q(s_t,a_t)+\alpha[r_{t+1}+\gamma Q(s_{t+1},a_{t+1})-Q(s_t,a_t)]$$

+ +

I understand the theory that SARSA performs 'on-policy' updates, and Q-learning performs 'off-policy' updates.

+ +

At the moment I perform Q-learning by calculating the target thusly:

+ +
target = reward + self.y * np.max(self.action_model.predict(state_prime))
+
+ +

Here you can see I pick the maximum for the Q-function for state prime (i.e. greedy selection as defined by maxQ in the update rule). If I were to do a SARSA update and use the same on-policy as used when selecting an action, e.g. ϵ-greedy, would I basically change to this:

+ +
if np.random.random() < self.eps:
+    target = reward + self.y * self.action_model.predict(state_prime)[random.randint(0,9)]
+else:
+    target = reward + self.y * np.max(self.action_model.predict(state_prime))
+
+ +

So sometimes it will pick a random future reward based on my epsilon greedy policy?

+",20352,,,,,4/8/2022 16:49,How do updates in SARSA and Q-learning differ in code?,,1,0,,,,CC BY-SA 4.0 +10477,2,,10476,2/9/2019 18:28,,5,,"

Picking actions and making updates should be treated as separate things. For Q-learning you also need to explore by using some exploration strategy (e.g. $\epsilon$-greedy).

+

Steps for Q-learning:

+
    +
  1. initialize state $S$
  For every step of the episode:
  2. choose action $A$ by some exploratory policy (e.g. $\epsilon$-greedy) from state $S$
  3. take action $A$ and observe $R$ and $S'$
  4. do the update $Q(S, A) = Q(S, A) + \alpha(R + \gamma*\max_aQ(S', a) - Q(S, A))$
  5. update the state $S = S'$ and keep looping from step 2 until the end of the episode
+

Steps for Sarsa:

+
    +
  1. initialize state $S$
  2. initialize the first action $A$ from state $S$ by some exploratory policy (e.g. $\epsilon$-greedy)
  For every step of the episode:
  3. take action $A$ and observe $R$ and $S'$
  4. choose action $A'$ from state $S'$ by some exploratory policy (e.g. $\epsilon$-greedy)
  5. do the update $Q(S, A) = Q(S, A) + \alpha(R + \gamma * Q(S', A') - Q(S, A))$
  6. update state and action $S = S'$, $A = A'$ and keep looping from step 3 until the end of the episode
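
In code, the two methods then differ only in how the bootstrap target is built. A rough sketch using the same interface as the code in the question (action_model, state_prime and reward are taken from there; gamma, eps and n_actions stand in for self.y, self.eps and the number of actions):

import numpy as np

# Q-learning target: bootstrap with the greedy value of the next state,
# regardless of which action the agent will actually take next.
q_target = reward + gamma * np.max(action_model.predict(state_prime))

# Sarsa target: first pick the *actual* next action with the behaviour policy
# (epsilon-greedy), then bootstrap with the value of that specific action.
if np.random.random() < eps:
    a_prime = np.random.randint(n_actions)
else:
    a_prime = np.argmax(action_model.predict(state_prime))
sarsa_target = reward + gamma * action_model.predict(state_prime)[a_prime]
# ...and a_prime must then be the action the agent actually executes on the next step.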
+",20339,,2444,,4/8/2022 16:49,4/8/2022 16:49,,,,4,,,,CC BY-SA 4.0 +10478,2,,9667,2/9/2019 20:56,,1,,"

Yes, if you follow the original implementation the children will inherit the topology from the most fit parent.

+ +

Keep in mind that the goal is to obtain a good population, maintaining the genetic diversity high but at the same time selecting the best individuals from the population; so, in theory you are allowed to give the topology you prefer to the children.

+ +

Here there is an example of an alternative topology inheritance in which a child gets the genes that lead to an excess node while another child gets only the genes that create a new connection.

+ +

+",15530,,,,,2/9/2019 20:56,,,,0,,,,CC BY-SA 4.0 +10479,1,10480,,2/9/2019 23:56,,2,423,"

I'm using Q-learning to train an agent to play a board game (e.g. chess, draughts or go).

+ +

The agent takes an action while in state $S$, but then what is the next state (that is, $S'$)? Is $S'$ now the board with the piece moved as a result of taking the action, or is $S'$ the state the agent encounters after the other player has performed his action (i.e. it's this agent's turn again)?

+",20352,,2444,,2/10/2019 21:18,2/10/2019 21:18,What is the next state for a two-player board game?,,1,0,,,,CC BY-SA 4.0 +10480,2,,10479,2/10/2019 2:06,,2,,"

If your opponent has fixed knowledge (it doesn't learn), then the next state after your agent takes an action is the state you observe when it is your turn again. So the actions of the other players are considered part of the environment's reaction to your actions.

+ +

But if your opponent can also learn, you may want to look into Multi-agent Reinforcement Learning

+",16565,,16565,,2/10/2019 9:56,2/10/2019 9:56,,,,3,,,,CC BY-SA 4.0 +10481,2,,10306,2/10/2019 3:56,,3,,"

Although what @Jaden said may be true by itself, it does not really answer my question, as I found after conducting numerous experiments and finally reaching close to Dueling Network performance using a normal Double DQN (DDQN).

+ +

I made the following changes to my code after closely examining the OpenAI baselines code:

+ +
    +
  • Used PongFrameskip-v4 instead of PongDeterministic-v4
  • +
  • Used a small replay buffer of size 10000
  • +
  • During a step_update() or replay() call, changed the condition for a return from buffer_fill_size < learn_start to t < learn_start, where t is the current timestep, buffer_fill_size is the current size of buffer that has been filled up with experience tuples, and learn_start is the number of timesteps to wait before starting to learn from the experience collected.
  • +
  • Made sure that the make_atari() wrapper function is also called on the env:

    + +
    ENV_GYM = 'PongFrameskip-v4'
    +env = make_atari(ENV_GYM)
    +env = wrap_deepmind(env, frame_stack=True, scale=False)
    +
    + +

    These wrappers may be implemented from scratch or can be obtained from the OpenAI baseline Atari wrappers. I personally used the latter since there is no point in reinventing the wheel.

  • +
+ +

Conclusion:

+ +

The biggest step that I overlooked, or rather didn't pay much attention to was the input preprocessing. These few changes improved my DDQN from an average score saturation at -13 in almost 5000 episodes to +18 in about 700-800 episodes. That is indeed a huge difference. You can check out my implementation here.

+",21513,,,,,2/10/2019 3:56,,,,2,,,,CC BY-SA 4.0 +10482,2,,10203,2/10/2019 4:04,,1,,"

Incase you still haven't been able to resolve the problem, here's a link to the answer to my own question, which has the step-wise changes I made to achieve a +18 average score saturation using just a 10000 replay buffer size and a normal Double DQN (DDQN), trained for about 700-800 episodes. The updated code can also be found here.

+ +

No fancy changes like Prioritized Replay Buffer or any secret hyperparameter changes are required. It's usually something simple, like a small problem with the input preprocessing step.

+",21513,,,,,2/10/2019 4:04,,,,0,,,,CC BY-SA 4.0 +10484,1,,,2/10/2019 10:22,,1,147,"

I am playing with a deep Q-learning algorithm in my own environment. The network can perform well as long as there is only one enemy. My agent can perform the following actions:

+ +
    +
  1. do_nothing
  2. prepare_for(e)
  3. attack(e)
+ +

where e is some enemy.

+ +

In the case of two enemies, the action vector has 5 elements:

+ +
|   0       |      1          |      2      |        3         |     4      |
+-----------------------------------------------------------------------------
+|do_nothing | prepare_for(e1) |  attack(e1) |  prepare_for(e2) | attack(e2) |
+-----------------------------------------------------------------------------
+
+ +

After a couple of episodes, the agent always starts picking the first do_nothing action, which is not desired. Changing the reward for the do_nothing action does not help, even when using a significantly more negative reward than for the other actions.

+ +

There is no problem with the environment with only one enemy (only using columns 0, 1, 2). I feel like my action encoding could be the issue, but I can't figure out how to fix it. Any suggestions?

+",22162,,2444,,2/10/2019 21:18,2/10/2019 21:18,Deep Q-learning is not performing well when there are several enemies,,1,1,,,,CC BY-SA 4.0 +10485,2,,10484,2/10/2019 10:30,,1,,"

1) Vary the number of enemies during the training process. 2) Use other generalisation methods that are appropriate for neural networks, such as dropout or L1 and L2 regularisation.

+",12509,,,,,2/10/2019 10:30,,,,2,,,,CC BY-SA 4.0 +10489,2,,10471,2/10/2019 13:10,,2,,"

I will first recapitulate the key concepts which you need to know in order to understand the answer to your question (which will be very simple, because I will just try to clarify what is given as a "definition").

+

In logic, a formula is e.g. $f$, $\lnot f$, $f \land g$, where $f$ can be e.g. the proposition (or variable) "today it will rain". So, in a (propositional) formula, you have propositions, i.e. sentences like "today it will rain", and logical connectives, i.e. symbols like $\land$ (i.e. logical AND), which logically connect these sentences. The propositions like "today it will rain" can often be denoted by a single (capital) letter like $P$. $f \land g$ is the combination of two formulae (where formulae is the plural of formula). So, for example, suppose that $f$ is composed of the propositions "today it will rain" (denoted by $P$) or "my friend will visit me" (denoted by $Q$) and $f$ is defined as "I will play with my friend" (denoted by $S$). Then the formula $f \land g = (P \lor Q) \land S$. In general, you can combine formulae in any logically appropriate way.

+

In this context, a model is an assignment to each variable in a formula. For example, suppose $f = P \lor Q$, then $w = \{ P=0, Q = 1\}$ is a model for $f$, that is, each variable (e.g. $P$) is assigned either "true" ($1$) or "false" ($0$) but not both. (Note that the word model may be used to refer to different concepts depending on the context; again, in this context, you can simply think of a model as an assignment of values to the variables in a formula.)

+

Suppose now we define $I(f, w)$ to be a function that receives the formula $f$ and the model $w$ as input, and $I$ returns either "true" ($1$) or "false" ($0$). In other words, $I$ is a function that automatically tells us if $f$ is evaluated to true or false given the assignment $w$.

+

You can now define $M(f)$ to be a set of assignments (or models) to the formula $f$ such that $f$ is true. So, $M$ is a set and not just an assignment (or model). This set can be empty, it can contain one assignment or it can contain any number of assignments: it depends on the formula $f$: in some cases, $M$ is empty and, in other cases, it may contain say $n$ valid assignments to $f$, where by "valid" I mean that these assignments make $f$ evaluate to "true". For example, suppose we have formula $f = A \land \lnot A$. Then you can try to assign any value to $A$, but $f$ will never evaluate to true. In that case, $M(f)$ is an empty set, because there is no assignment to the variables (or propositions) of $f$ which make $f$ evaluate to true.

+

A knowledge base is a set of formulae $\text{KB} = \{ f_1, f_2, \dots, f_n \}$. So, for example, $f_2 = $ "today it will rain" and $f_3 = $ "I will go to school AND I will have lunch".

+

We can now define $M(\text{KB})$ to be the set of assignments to the formulae in the knowledge base $\text{KB}$ such that all formulae are true. If you think of the formulae in $KB$ as "facts", $M(\text{KB})$ is an assignment to these formulae in $KB$ such that these facts hold or are true.

+

In this context, we then say that a particular knowledge base (i.e., a set of formulae as defined above), denoted by $\text{KB}$, is consistent with formula $f$ if $M(\text{KB} \cup \{ f \})$ is a non-empty set, where $\cup$ means the union operation between sets: note that (as we defined it above) $\text{KB}$ is a set, and $\{ f \}$ means that we are making a set out of the formula $f$, so we are indeed performing an union operation on sets.

+

So, what does it mean for a knowledge base to be consistent? First of all, the consistency of a knowledge base $\text{KB}$ is defined with respect to another formula $f$. Recall that a knowledge base is a set of formulae, so we are defining the consistency of a set of formulae with respect to another formula.

+

When is then a knowledge base $\text{KB}$ consistent with a formula $f$? When $M(\text{KB} \cup \{ f \})$ is a non-empty set. Recall that $M$ is an assignment to the variables in its input such that its inputs evaluate to true. So, $\text{KB}$ is consistent with $f$ when there is a set of assignments of values to the formulae in $\text{KB}$ and an assignment of values to the variables in $f$ such that both $\text{KB}$ and $f$ are true. In other words, $\text{KB}$ is consistent with $f$ when both all formulae in $\text{KB}$ and $f$ can be true at the same time.
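
To make this concrete, here is a tiny brute-force sketch that checks consistency by enumerating all assignments (worlds) over the propositional variables; formulae are represented as Python functions of a truth assignment, and the example KB and formula are made up for illustration:

from itertools import product

def models(formulas, variables):
    # Return all assignments (worlds) in which every formula evaluates to true.
    worlds = []
    for values in product([False, True], repeat=len(variables)):
        w = dict(zip(variables, values))
        if all(f(w) for f in formulas):
            worlds.append(w)
    return worlds

# KB = { P -> S, P }   (e.g. P = 'it rains', S = 'I stay home')
KB = [lambda w: (not w['P']) or w['S'],   # P -> S
      lambda w: w['P']]
f  = lambda w: not w['S']                 # 'I do not stay home'

print(models(KB, ['P', 'S']))             # [{'P': True, 'S': True}] -> KB itself is satisfiable
print(models(KB + [f], ['P', 'S']))       # []  -> M(KB plus {f}) is empty: KB is NOT consistent with f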

+",2444,,2444,,1/21/2021 18:44,1/21/2021 18:44,,,,0,,,,CC BY-SA 4.0 +10490,2,,10471,2/10/2019 13:11,,0,,"

Here is a (very) brief wikipedia article on consistency in KB's, which should answer your question.

+ +

A KB is consistent, if it does not contain any contradictions, ie $\lnot a$ and $a$ are not both derivable from it. Which is pretty much common sense if you think about it.

+ +

If I have a formula $f$, for example ""A is a trout $\land$ A lays eggs"", and my KB contains ""fish lay eggs"" and ""a trout is a fish"", then, if $f$ is true, ie trout do lay eggs, that formula is consistent with my KB, which states that trout are fish and that fish lay eggs.

+ +

Edit: for a more formalised version, see nbro's answer.

+",2193,,,,,2/10/2019 13:11,,,,0,,,,CC BY-SA 4.0 +10491,2,,10474,2/10/2019 14:49,,20,,"

The concepts of on-policy vs off-policy and online vs offline are separate, but do interact to make certain combinations more feasible. When looking at this, it is worth also considering the difference between prediction and control in Reinforcement Learning (RL).

+ +

Online vs Offline

+ +

These concepts are not specific to RL, many learning systems can be categorised as online or offline (or somewhere in-between).

+ +
    +
  • Online learning algorithms work with data as it is made available. Strictly online algorithms improve incrementally from each piece of new data as it arrives, then discard that data and do not use it again. It is not a requirement, but it is commonly desirable for an online algorithm to forget older examples over time, so that it can adapt to non-stationary populations. Stochastic gradient descent with back-propagation - as used in neural networks - is an example.

  • +
  • Offline learning algorithms work with data in bulk, from a dataset. Strictly offline learning algorithms need to be re-run from scratch in order to learn from changed data. Support vector machines and random forests are strictly offline algorithms (although researchers have constructed online variants of them).

  • +
+ +

Of the two types, online algorithms are more general in that you can easily construct an offline algorithm from a strictly online one plus a stored dataset, but the opposite is not true for a strictly offline algorithm. However, this does not necessarily make them superior - often compromises are made in terms of sample efficiency, CPU cost or accuracy when using an online algorithm. Approaches such as mini-batches in neural network training can be viewed as attempts to find a middle ground between online and offline algorithms.

+ +

Experience replay, a common RL technique, used in Deep Q Networks amongst others, is another in-between approach. Although you could store all the experience necessary to fully train an agent in theory, typically you store a rolling history and sample from it. It's possible to argue semantics about this, but I view the approach as being a kind of ""buffered online"", as it requires low-level components that can work online (e.g. neural networks for DQN).

+ +

On-policy vs Off-Policy

+ +

These are more specific to control systems and RL. Despite the similarities in name between these concepts and online/offline, they refer to a different part of the problem.

+ +
    +
  • On-policy algorithms work with a single policy, often symbolised as $\pi$, and require any observations (state, action, reward, next state) to have been generated using that policy.

  • +
  • Off-policy algorithms work with two policies (sometimes effectively more, though never more than two per step). These are a policy being learned, called the target policy (usually shown as $\pi$), and the policy being followed that generates the observations, called the behaviour policy (called various things in the literature - $\mu$, $\beta$, Sutton and Barto call it $b$ in the latest edition).

    + +
      +
    • A very common scenario for off-policy learning is to learn about best guess at optimal policy from an exploring policy, but that is not the definition of off-policy.
    • +
    • The primary difference between observations generated by $b$ and the target policy $\pi$ is which actions are selected on each time step. There is also a secondary difference which can be important: The population distribution of both states and actions in the observations can be different between $b$ and $\pi$ - this can have an impact for function approximation, as cost functions (for e.g. NNs) are usually optimised over a population of data.
    • +
  • +
+ +

In both cases, there is no requirement for the observations to be processed strictly online or offline.

+ +

In contrast to the relationship between online and offline learning, off-policy is always a strict generalisation of on-policy. You can make any off-policy algorithm into an equivalent on-policy one by setting $\pi = b$. There is a sense in which you can do this by degrees, by making $b$ closer to $\pi$ (for instance, reducing $\epsilon$ in an $\epsilon$-greedy behaviour policy for $b$ where $\pi$ is the fully greedy policy). This can be desirable, as off-policy agents do still need to observe states and actions that occur under the target policy - if that happens rarely because of differences between $b$ and $\pi$, then learning about the target policy will happen slowly.

+ +

Prediction vs Control

+ +

This can get forgotten due to the focus on search for optimal policies.

+ +
    +
  • The prediction problem in RL is to estimate the value of a particular state or state/action pair, given an environment and a policy.

  • +
  • The (optimal) control problem in RL is to find the best policy given an environment.

  • +
+ +

Solving the control problem when using value-based methods involves both estimating the value of being in a certain state (i.e. solving the prediction problem), and adjusting the policy to make higher value choices based on those estimates. This is called generalised policy iteration.

+ +

The main thing to note here is that the prediction problem is stationary (all long-term expected distributions are the same over time), whilst the control problem adds non-stationary target for the prediction component (the policy changes, so does the expected return, distribution of states etc)

+ +

Combinations That Work

+ +

Note this is not about choice of algorithms. The strongest driver for algorithm choice is on-policy (e.g. SARSA) vs off-policy (e.g. Q-learning). The same core learning algorithms can often be used online or offline, for prediction or for control.

+ +
    +
  • Online, on-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs), and learns from observation data as it arrives. It should always act the same way (it may be observing some other control system with a fixed, and maybe unknown policy).

  • +
  • Online, off-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs) from the perspective of an arbitrary fixed target policy $\pi$ (which must be defined to the agent), and learns from observation data as it arrives. The observations can be from any behaviour policy $b$ - depending on the algorithm being used, it may be necessary to have $b$ defined as well as $\pi$.

  • +
  • Offline, on-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs), and is given a dataset of observations from the environment of an agent acting using some fixed policy.

  • +
  • Offline, off-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs) from the perspective of an arbitrary fixed target policy $\pi$ (which must be defined to the agent), and is given a dataset of observations from the environment of an agent acting using some other policy $b$.

  • +
  • Online, on-policy control. A learning agent is set the task of behaving optimally in an environment, and learns from each observations as it arrives. It will adapt its own policy as it learns, making this a non-stationary problem, and also importantly making its own history of observations off-policy data.

  • +
  • Online, off-policy control. A learning agent is set the task of behaving optimally in an environment. It may behave and gain observations from a behaviour policy $b$, but learns a separate optimal target policy $\pi$. It is common to link $b$ and $\pi$ - e.g. for $\pi$ to be deterministic greedy policy with respect to estimated action values, and for $b$ to be $\epsilon$-greedy policy with respect to the same action values.

  • +
  • Offline, on-policy control. This is not really possible in general, as an on-policy agent needs to be able to observe data about its current policy. As soon as it has learned a policy different to that in the stored dataset, all the data becomes off-policy to it, and the agent has no valid source data. In some cases you might still be able to get something to work.

  • +
  • Offline, off-policy control. A learning agent is set the task of learning an optimal policy from a store dataset of observations. The observations can be from any behaviour policy $b$ - depending on the algorithm being used, it may be necessary to have $b$ defined as well as $\pi$.

  • +
+ +

As you can see above, only one combination offline, on-policy control, causes a clash.

+ +

There is a strong skew towards online learning in RL. Approaches that buffer some data, such as Monte Carlo control, experience replay or Dyna-Q do mix-in some of the traits of offline learning, but still require a constant supply of new observations plus forget older ones. Control algorithms imply non-stationary data, and these require online forgetting behaviour from the estimators - another online learning trait.

+ +

However, mixing in a little ""offline"" data in experience replay can cause some complications for on-policy control algorithms. The experience buffer can contain things that are technically off-policy to the latest iteration of the agent. How much of a problem this is in practice will vary.

+",1847,,1847,,2/10/2019 17:27,2/10/2019 17:27,,,,6,,,,CC BY-SA 4.0 +10492,1,10495,,2/10/2019 16:34,,6,497,"

In the context of reinforcement learning, a policy, $\pi$, is often defined as a function from the space of states, $\mathcal{S}$, to the space of actions, $\mathcal{A}$, that is, $\pi : \mathcal{S} \rightarrow \mathcal{A}$. This function is the "solution" to a problem, which is represented as a Markov decision process (MDP), so we often say that $\pi$ is a solution to the MDP. In general, we want to find the optimal policy $\pi^*$ for each MDP $\mathcal{M}$, that is, for each MDP $\mathcal{M}$, we want to find the policy which would make the agent behave optimality (that is, obtain the highest "cumulative future discounted reward", or, in short, the highest "return").

+

It is often the case that, in RL algorithms, e.g. Q-learning, people often mention "policies" like $\epsilon$-greedy, greedy, soft-max, etc., without ever mentioning that these policies are or not solutions to some MDP. It seems to me that these are two different types of policies: for example, the "greedy policy" always chooses the action with the highest expected return, no matter which state we are in; similarly, for the "$\epsilon$-greedy policy"; on the other hand, a policy which is a solution to an MDP is a map between states and actions.

+

What is then the relation between a policy which is the solution to an MDP and a policy like $\epsilon$-greedy? Is a policy like $\epsilon$-greedy a solution to any MDP? How can we formalise a policy like $\epsilon$-greedy in a similar way that I formalised a policy which is the solution to an MDP?

+

I understand that "$\epsilon$-greedy" can be called a policy, because, in fact, in algorithms like Q-learning, they are used to select actions (i.e. they allow the agent to behave), and this is the fundamental definition of a policy.

+",2444,,2444,,12/1/2021 16:59,12/1/2021 16:59,What is the relation between a policy which is the solution to a MDP and a policy like $\epsilon$-greedy?,,1,0,,,,CC BY-SA 4.0 +10493,1,,,2/10/2019 16:46,,2,63,"

Can traditional neural networks be combined with spiking neural networks? And can there be training algorithms for such a hybrid network? Does such a hybrid network model biological brains?

+ +

As I understand it, brains contain only spiking networks, and traditional networks are a more or less crude approximation of them. But we can imagine that evolutionary computing could surpass biological evolution, so new structures could be created that are better than the mind. That is why the question about such traditional-spiking hybrid neural networks should be interesting.

+",8332,,2444,,2/10/2019 19:20,2/10/2019 19:20,Can traditional neural networks be combined with spiking neural networks?,,0,0,,,,CC BY-SA 4.0 +10495,2,,10492,2/10/2019 17:58,,4,,"
+

for example, the ""greedy policy"" always chooses the action with the highest expected return, no matter which state we are in

+
+ +

The ""no matter which state we are in"" there is generally not true; in general, the expected return depends on the state we are in and the action we choose, not just the action.

+ +

In general, I wouldn't say that a policy is a mapping from states to actions, but a mapping from states to probability distributions over actions. That would only be equivalent to a mapping from states to actions for deterministic policies, not for stochastic policies.

+ +

Assuming that our agent has access to (estimates of) value functions $Q(s, a)$ for state-action pairs, the greedy and $\epsilon$-greedy policies can be described in precisely the same way.

+ +

Let $\pi_g (s, a)$ denote the probability assigned to an action $a$ in a state $s$ by the greedy policy. For simplicity, I'll assume there are no ties (otherwise it would in practice be best to randomize uniformly across the actions leading to the highest values). This probability is given by:

+ +

$$\pi_g (s, a) = \begin{cases} 1, & \text{if } a = \arg\max_{a'} Q(s, a') \\ 0, & \text{otherwise} \end{cases}$$

+ +

Similarly, $\pi_{\epsilon} (s, a)$ could denote the probability assigned by an $\epsilon$-greedy strategy, with probabilities given by:

+ +

$$\pi_{\epsilon} (s, a) = \begin{cases} (1 - \epsilon) + \frac{\epsilon}{\vert \mathcal{A}(s) \vert}, & \text{if } a = \arg\max_{a'} Q(s, a') \\ \frac{\epsilon}{\vert \mathcal{A}(s) \vert}, & \text{otherwise} \end{cases}$$ where $\vert \mathcal{A}(s) \vert$ denotes the size of the set of legal actions in state $s$.
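
In code, sampling an action from these $\epsilon$-greedy probabilities is straightforward. A sketch, where Q is assumed to be an array of action-value estimates for the current state:

import numpy as np

def epsilon_greedy_action(Q, epsilon):
    # With probability epsilon pick uniformly at random, otherwise pick the greedy action.
    if np.random.random() < epsilon:
        return np.random.randint(len(Q))
    return int(np.argmax(Q))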

+",1641,,1641,,2/11/2019 8:05,2/11/2019 8:05,,,,6,,,,CC BY-SA 4.0 +10496,1,,,2/10/2019 18:39,,1,30,"

I have continual simulated data of a million sentences of two simulated persons talking to each other in a room, and I want to model the speech of one of them. Now, during this period, things in the room can change. Let's say one of them asks ""Where is the book?"" and the other one responds ""I placed the book on the bookshelf"". Over time, the position of the book changes, so the question ""Where is the book?"" does not have a stationary answer, i.e. the answer changes over time. However, in general, the answer has to be ""The book is at some_location"" and not something else. Also, the mention that the book was placed on the bookshelf can sometimes occur 10, 100 or 1000 sentences before the question ""Where is the book?""

+ +

How do you approach this kind of problem? Since the window can be too large, I cannot simply split the data into training samples of 10, 100 or 1000 sentences. My guess is that I should use BPTT + LSTM and train in one shot without shuffling the data. I am not sure this is feasible, so I would greatly appreciate your help! I also have my doubts: what if ""Where is the book?"" appears 20 sentences after the answer (instead of 10, 100 or 1000) in the test set (which is not the same as the training set)? Also, should I use reinforcement learning (since I can generate the data) or supervised learning?

+ +

Thanks a lot!

+",20378,,20378,,2/10/2019 18:46,2/10/2019 18:46,How to train chat bot on infinite non-stationary data?,,0,0,,,,CC BY-SA 4.0 +10498,2,,10466,2/10/2019 22:31,,1,,"

It seems I've solved the issue. There were several mistakes:
+1. I generated random weights from 0 to 1. As a result, too large values were passed through the softmax function (>10000), and the function wasn't computed correctly. I divided each initial weight by the number of neurons in the previous layer, and that solved the issue.
+2. I calculated a separate delta for the biases, while the delta must be the same for the main weights and the biases.
+If anyone is interested, here is the corrected code (83% and 89% precision after the first and second run):

+ +
import numpy as np
+
+def ReLU(x):
+    return np.maximum(0, x)
+
+def ReLU_derivative(x):
+    return np.greater(x, 0).astype(int)
+
+def softmax(x):
+    shift = x - np.max(x)
+    return np.exp(shift) / np.sum(np.exp(shift))
+
+def softmax_derivative(x):
+    sm_array = softmax(x)
+    J = np.zeros((x.size, x.size))
+    for i in range(x.size):
+        for j in range(x.size):
+            delta = np.equal(i, j).astype(int)
+            J[j, i] = sm_array[0][i] * (delta - sm_array[0][j])
+    return J
+
+class Layer:
+    def __init__(self, size, prev_layer=None):
+        self.size = size
+        self.prevLayer = prev_layer
+        self.nextLayer = None
+        self.data = None
+        self.data_in = None
+        if prev_layer is not None:
+            self.prevLayer.nextLayer = self
+            self.W = np.random.random((self.prevLayer.size, size)) / (self.prevLayer.size + 1)
+            self.W_bias = np.random.random((1, size)) / (self.prevLayer.size + 1)
+        else:
+            self.W = None
+            self.W_bias = None
+
+    def forward(self):
+        if self.prevLayer is not None:
+            self.data_in = np.dot(self.prevLayer.data, self.W)
+            self.data_in += np.dot([[1]], self.W_bias)
+            if self.nextLayer is not None:
+                self.data = ReLU(self.data_in)
+                self.nextLayer.forward()
+            else:
+                self.data = softmax(self.data_in)
+        else:
+            self.nextLayer.forward()
+
+    def backprop(self, expected_output=None, prev_delta=None):
+        if prev_delta is None:
+            delta = np.dot(-(expected_output - self.data), softmax_derivative(self.data_in))
+        else:
+            delta = np.dot(prev_delta, self.nextLayer.W.T) * ReLU_derivative(self.data_in)
+        training_velocity = 0.1
+        W_dif = np.dot(self.prevLayer.data.T, delta) * training_velocity
+        W_bias_dif = np.dot([[1]], delta) * training_velocity
+        if self.prevLayer.prevLayer is not None:
+            self.prevLayer.backprop(prev_delta=delta)
+        self.W -= W_dif
+        self.W_bias -= W_bias_dif
+
+f_images = open(""train-images.idx3-ubyte"", ""br"")
+f_images.seek(4)
+f_labels = open(""train-labels.idx1-ubyte"", ""br"")
+f_labels.seek(8)
+images_number = int.from_bytes(f_images.read(4), byteorder='big')
+rows_number = int.from_bytes(f_images.read(4), byteorder='big')
+cols_number = int.from_bytes(f_images.read(4), byteorder='big')
+
+input_layer = Layer(rows_number*cols_number)
+hidden_layer1 = Layer(rows_number*cols_number*7//10, input_layer)
+hidden_layer2 = Layer(rows_number*cols_number*7//10, hidden_layer1)
+output_layer = Layer(10, hidden_layer2)
+digits = np.array([np.zeros(10)])
+
+print(""Training:"")
+input_image = np.array([np.zeros(rows_number * cols_number)])
+for k in range(images_number):
+    for i in range(rows_number):
+        for j in range(cols_number):
+            input_image[0][i*cols_number+j] = int.from_bytes(f_images.read(1), byteorder='big') / 255.0
+    input_layer.data = input_image
+    input_layer.forward()
+    current_digit = int.from_bytes(f_labels.read(1), byteorder='big')
+    digits[0][current_digit] = 1
+    output_layer.backprop(expected_output=digits)
+    digits[0][current_digit] = 0
+    if((k+1) % 1000 == 0):
+        print(str(k+1) + "" / "" + str(images_number))
+f_images.close()
+f_labels.close()
+
+f_images = open(""t10k-images.idx3-ubyte"", ""br"")
+f_images.seek(4)
+f_labels = open(""t10k-labels.idx1-ubyte"", ""br"")
+f_labels.seek(8)
+images_number = int.from_bytes(f_images.read(4), byteorder='big')
+rows_number = int.from_bytes(f_images.read(4), byteorder='big')
+cols_number = int.from_bytes(f_images.read(4), byteorder='big')
+
+print(""\r\nTesting:"")
+correct = 0
+for k in range(images_number):
+    for i in range(rows_number):
+        for j in range(cols_number):
+            input_image[0][i*cols_number+j] = int.from_bytes(f_images.read(1), byteorder='big') / 255.0  # scale test pixels to [0, 1], matching the training data
+    input_layer.data = input_image
+    input_layer.forward()
+    current_digit = int.from_bytes(f_labels.read(1), byteorder='big')
+    if np.argmax(output_layer.data[0]) == current_digit:
+        correct += 1
+    if((k+1) % 1000 == 0):
+        print(str(k+1) + "" / "" + str(images_number))
+
+print(""\r\nCorrect: "" + str(correct) + "" / "" + str(images_number))
+f_images.close()
+f_labels.close()
+
+",21567,,,,,2/10/2019 22:31,,,,0,,,,CC BY-SA 4.0 +10499,2,,4456,2/11/2019 1:34,,2,,"

According to OpenAI – Kinds of RL Algorithms, algorithms which use a model of the environment, i.e. a function which predicts state transitions and rewards, are called model-based methods, and those that don't are called model-free. This model can either be given to the agent or learned by the agent.

+ +

Using a model allows the agent to plan by thinking ahead, seeing what would happen for a range of possible choices, and explicitly deciding between its options. This may be useful when faced with problems that require more long-term thinking. One way to perform planning is by using some kind of tree search, for example Monte Carlo tree search (MCTS), or—which I suspect could also be used—variants of the rapidly exploring random tree (RRT). See e.g. Agents that imagine and plan.
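As a rough, purely illustrative sketch of the simplest form of such planning (the names transition_model, reward_model and V are hypothetical stand-ins for a given or learned transition model, reward model and value estimate), a one-step lookahead evaluates each action by imagining its outcome with the model; tree searches such as MCTS essentially expand this idea over many steps:

def plan_one_step(state, actions, transition_model, reward_model, V, gamma=0.99):
    # Imagine the outcome of each action with the model, then pick the action
    # with the best predicted one-step return.
    best_action, best_value = None, float('-inf')
    for a in actions:
        next_state = transition_model(state, a)                  # imagined next state
        value = reward_model(state, a) + gamma * V(next_state)   # predicted one-step return
        if value > best_value:
            best_action, best_value = a, value
    return best_action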

+ +

The agent can then distill the results from planning ahead into a learned policy – this is known as expert iteration.

+ +

A model can also be used to create a simulated, or ""imagined,"" environment in which the state is updated by using the model, and make the agent learn inside of that environment, such as in World Models.

+ +

In many real-world scenarios, the ground-truth model of the environment is not available to the agent. If an agent wants to use a model in this case, it has to learn the model, which can be challenging for several reasons.

+ +

There are, however, cases in which the agent uses a model that is already known and consequently doesn't have to learn the model, such as in AlphaZero, where the model comes in the form of the rules of the game.

+",9220,,,,,2/11/2019 1:34,,,,0,,,,CC BY-SA 4.0 +10500,1,,,2/11/2019 3:34,,1,67,"

In the diagram below, there are three variables: X3 is a function of (depends on) X1 and X2, and X2 in turn depends on X1. More specifically, X3 = f(X1, X2) and X2 = g(X1). Therefore, X3 = f(X1, g(X1)).

+

+

If the probability distribution of X1 is known, is it possible to derive the probability distribution of X3?

+",22184,,2444,,12/13/2021 8:58,12/13/2021 8:58,Can we derive the distribution of a random variable based on a dependent random variable's distribution?,,2,1,,,,CC BY-SA 4.0 +10508,1,,,2/11/2019 10:38,,3,162,"

What is the difference between automatic transcription and automatic speech recognition? Are they the same?

+ +

Is my following interpretation correct?

+ +

Automatic transcription: it converts the speech to text by looking at the whole spoken input

+ +

Automatic speech recognition: it converts the speech to text by looking into word by word choices

+",22195,,2444,,4/21/2019 13:17,4/30/2023 22:01,What is the difference between automatic transcription and automatic speech recognition?,,2,0,,,,CC BY-SA 4.0 +10511,2,,10508,2/11/2019 15:26,,0,,"

They are both the same. There are different algorithms to recognise speech, but essentially they all aim to identify the content of the spoken input and convert it into written text.

+ +

Automatic transcription is then done, whereas the output of more general ASR is often passed on to further processing, such as recognising entities or commands expressed in the speech.

+",2193,,,,,2/11/2019 15:26,,,,0,,,,CC BY-SA 4.0 +10516,2,,10500,2/11/2019 22:43,,0,,"

No, it is not possible. We could derive the most probable $x_3$ by calculating the maximum likelihood: $x^*_3=\underset{x_3}{\arg\max} p(x_1,x_2|x_3)$. We are unable to calculate this, as you only stated that there is a correlation, but we don't know what it looks like.

+",3986,,,,,2/11/2019 22:43,,,,0,,,,CC BY-SA 4.0 +10517,1,,,2/12/2019 5:15,,1,129,"

Suppose $G_t$, the discounted return at time $t$, is defined as: $$ G_t \triangleq R_t+\gamma R_{t+1}+\gamma^{2}R_{t+2} + \cdots = \sum_{k=0}^{\infty} \gamma^{k}R_{t+k}$$

+ +

where $R_t$ is the reward at time $t$ and $0 < \gamma < 1$ is a discount factor. Let the state-value function $v(s)$ be defined as: $$v_{\pi}(s) \triangleq \mathbb{E}[G_t|S_{t}=s]$$

+ +

In other words, it is the expected discounted return given that we start in state $s$ with some policy $\pi$. Then $$v_{\pi}(s) = \mathbb{E}_{\pi}[R_t+\gamma G_{t+1}|S_{t}=s]$$

+ +

$$ = \sum_{a} \pi(a|s) \sum_{s',r} p(r,s'|s,a)[r+\ \gamma v_{\pi}(s')]$$

+ +
+

Question 1. Are the states $s'$ drawn from a joint probability distribution $P_{sa}$? In other words, if you are in an initial state $s$ and take an action $\pi(s)$, is $s'$ the random state you would end up in according to the probability distribution $P_{sa}$?

+
+ +

Also let $q_{\pi}(s,a)$, the action-value function be defined as: $$q_{\pi}(s,a) \triangleq \mathbb{E}_{\pi}[G_t|S_t = s, A_t = a]$$

+ +

$$=\sum_{s',r} p(r,s'|s,a)[r+\ \gamma v_{\pi}(s')]$$

+ +
+

Question 2. What are the advantages of looking at $q_{\pi}(s,a)$ versus $v_{\pi}(s)$?

+
+",22228,,22916,,3/18/2019 14:22,3/18/2019 14:22,Is the next state drawn from the joint distribution of the previous state and action?,,1,2,,,,CC BY-SA 4.0 +10518,2,,10517,2/12/2019 7:54,,1,,"
+

Question 1. Are the states $s'$ drawn from a joint probability distribution $P_{sa}$? In other words, if you are in an initial state $s$ and take an action $\pi(s)$, is $s'$ the random state you would end up in according to the probability distribution $P_{sa}$?

+
+ +

This is tricky, because you don't show a definition of $P_{sa}$. My first thought was that you meant the transition matrix $P_{ss'}^a$, but that doesn't fit with the phrase joint probability distribution.

+ +

If you really mean joint probability distribution then the answer is generally ""no"", because $P_{sa}$ should be the probability of observing state $s$ and action $a$ when taking a random sampled time step:

+ +

$$P_{sa} = \pi(a|s)\rho_{\pi}(s)$$

+ +

where $\rho_{\pi}(s)$ is the distribution of states under policy $\pi$. Note this makes no reference to $s'$ at all.

+ +

However, there are ways that this could relate to the distribution of $s'$. Probably the most direct relationship would be when there is a deterministic environment, thus knowing $s$ and $a$ would determine $s'$. If, in addition to that, each $s'$ could only be reached from a single $(s,a)$ combination, then knowing $P_{sa}$ would also give you knowledge of $P_{s'}$ - this is not the same thing as the question is asking though.

+ +

If you did mean the transition matrix $P_{ss'}^a$ instead in the question, then the answer is yes, because

+ +

$$P_{ss'}^a = \sum_r p(r,s'|s,a)$$

+ +
+

Question 2. What are the advantages of looking at $q_{\pi}(s,a)$ versus $v_{\pi}(s)$?

+
+ +

The main advantage is that you can derive a policy more easily from $q_{\pi}$:

+ +

$$\pi'(s) = \text{argmax}_a q_{\pi}(s,a)$$

+ +

Compare with deriving a policy using $v_{\pi}$:

+ +

$$\pi'(s) = \text{argmax}_a \sum_{s',r} p(r, s'|s,a)(r + \gamma v_{\pi}(s'))$$

+ +

Note that these policies are not necessarily the same as $\pi$ on which the $q$ or $v$ values are evaluated. In fact this is a common situation whilst searching for an optimal policy, and it is possible to show that $\pi'(s)$ will result in the same or higher returns than $\pi(s)$ across all states; the proof of this is called the policy improvement theorem.

+ +

The important thing about the first equation using $q$ is that it does not involve using the MDP model $p(r, s'|s,a)$ directly. This is the basis of model-free RL. Whilst the version using $v$ is more complex (taking more computation) and requires that you know $p(r,s'|s,a)$.
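As a minimal sketch of this difference for a small tabular problem (here Q and V are assumed to be dictionaries of estimated values, and model(s, a) a hypothetical function returning (probability, reward, next_state) tuples), extracting a greedy policy from $q$ needs only the stored values, whereas extracting it from $v$ needs the MDP model:

def greedy_from_q(Q, state, actions):
    # model-free: argmax over the stored action values
    return max(actions, key=lambda a: Q[(state, a)])

def greedy_from_v(V, state, actions, model, gamma=0.99):
    # needs the model to compute the expected one-step return of each action
    def backup(a):
        return sum(p * (r + gamma * V[s2]) for p, r, s2 in model(state, a))
    return max(actions, key=backup)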

+ +

The main disadvantage of looking at $q_{\pi}(s,a)$ is that it has a larger dimension, it maps $S \times A \rightarrow \mathbb{R}$, as opposed to using $v_{\pi}(s)$ which maps $S \rightarrow \mathbb{R}$. So it can take longer to get good approximations of $q$ compared to $v$.

+",1847,,1847,,2/12/2019 9:46,2/12/2019 9:46,,,,0,,,,CC BY-SA 4.0 +10525,1,,,2/12/2019 10:33,,4,636,"

Suppose we have a deterministic environment where knowing $s,a$ determines $s'$. Is it possible to get two different rewards $r\neq r'$ in some state $s_{\text{fixed}}$? Assume that $s_{\text{fixed}}$ is a fixed state I get to after taking the action $a$. Note that we can have situations where in multiple iterations we have: $$(s,a) \to (s_1, r_1) \\ (s,a) \to (s_{\text{fixed}}, r_1) \\ (s,a) \to (s_{\text{fixed}}, r_2) \\ (s,a) \to (s_3, r_3) \\ \vdots$$

+ +

My question is, would $r_1 =r_2$?

+",22236,,2444,,11/2/2020 14:12,11/2/2020 14:12,Can the rewards be stochastic when the transition model is deterministic?,,1,0,,,,CC BY-SA 4.0 +10526,2,,10525,2/12/2019 10:46,,4,,"
+

My question is, would $r_1 =r_2$?

+
+ +

That's usually up to you as the designer of the system.

+ +

Usually when you declare that you have ""a deterministic environment"", you imply that both $s'$ and $r$ are fixed values depending on $(s,a)$. So in your examples, you would expect your observations to also have $r_1 = r_2$

+ +

However, it is possible to define an MDP where the transition to state $s'$ is deterministic, but $r$ is not. For instance, you could define the reward in a game as the sum of a number of dice rolled, with more dice resulting in better rewards (on average). This is still a valid MDP and can be solved using RL techniques.
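A toy sketch of that dice idea (the details are purely illustrative): the next state is a deterministic function of the state and action, while the reward is random.

import random

def step(state, action):
    next_state = state + action                                  # deterministic transition
    reward = sum(random.randint(1, 6) for _ in range(action))    # stochastic reward: roll 'action' dice
    return next_state, reward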

+ +

A real-world example of this might be managing a queue of work, where you want to minimise lead time, but don't know for certain how long each task will take. Your state progression moves deterministically - you have a queue of pending tasks, current tasks and workers, and assigning a task to a worker is completely deterministic. However, you don't know how efficiently tasks will be performed until after they are done, so you don't know the reward perfectly from the assignment (whether you can treat this as random or hidden state is a more complex issue - it is often pragmatic to treat such unknown data as random though).

+",1847,,1847,,2/13/2019 7:43,2/13/2019 7:43,,,,0,,,,CC BY-SA 4.0 +10529,1,10538,,2/12/2019 12:32,,3,260,"

From the reinforcement learning book section 13.3:

+ +

+ +

Using pytorch, I need to calculate a loss, and then the gradient is calculated internally.

+ +

How to obtain the loss from equations which are stated in the form of an iterative update with respect to the gradient?

+ +

In this case:

+ +

$\theta \leftarrow \theta + \alpha\gamma^tG\nabla_{\theta}ln\pi(A_t|S_t,\theta)$

+ +

What would be the loss?

+ +

And in general, what would be the loss if the update rule were

+ +

$\theta \leftarrow \theta + \alpha C\nabla_{\theta}g(x|\theta)$

+ +

for some general (derivable) function $g$ parameterized by theta?

+",21645,,,,,4/20/2021 1:51,"How to obtain a formula for loss, when given an iterative update rule in gradient descent?",,2,6,,,,CC BY-SA 4.0 +10531,1,10537,,2/12/2019 15:25,,2,992,"

The Reinforcement Learning Book by Richard Sutton et al, section 13.5 shows an online actor critic algorithm.

+ +

+ +

Why do the weights updates depend on the discount factor via $I$?

+ +

It seems that the more we get closer to the end of the episode, the less we value our newest experience $\delta$.

+ +

This seems odd to me. I thought discounting in the recursive formula of $\delta$ itself is enough.

+ +

Why does the weight update become less significant as the episode progresses? Note this is not about eligibility traces, as those are discussed separately, later in the same chapter.

+",21645,,,,,2/12/2019 17:43,"In online one step actor critic, why does the weights update become less significant as the episode progresses?",,1,0,,,,CC BY-SA 4.0 +10534,1,10583,,2/12/2019 15:49,,-1,56,"

We recently founded a company in the area of additive manufacturing. Our development focuses on making the process easier and liberate time to the user, which I personally believe in as the ultimate promise of any computerised process, including AI, for the public.

+ +

We have gone through the usual channels including regular job offers and personal contacts, visiting universities and meetups. However there appears to be a lack of participation of real specialists in most of those gatherings as the ones interested in learning about AI are doing just that, and don't show up somewhere where we can find them, although we believe to have a very interesting task at hand.

+ +

It appears to be remarkably hard to find specialists in the field, as AI specialists are either employed by huge multinational companies (instead of startups), have no interest in additive manufacturing (because it implies something else than only code) and the challenge of machines (making machines) or just do not pop up anywhere.

+ +

How would a specialist of AI look for someone else in the specific field of additive manufacturing/motion planning/creative strategy and its employment? (Or course in the case they do not already know someone.)

+",22244,,1671,,2/14/2019 4:00,2/15/2019 8:52,How to find AI specialists interested in additive Manufacturing,,1,1,,12/18/2021 18:36,,CC BY-SA 4.0 +10535,2,,9563,2/12/2019 16:26,,2,,"

I think we can understand regularization and downsampling better in this way:

+ +
    +
  1. dropout
  2. +
+ +

it sets some input values (neurons) for the next layer to 0, which makes the current layer a sparse one. So it reduces the dependence on each individual feature in this layer.

+ +
    +
  1. pooling layer
  2. +
+ +

the downsampling directly removes some input, and that makes the layer ""smaller"" rather than ""sparser"". The difference can be subtle but is clear enough.

+ +

That's the root reason why the former also affects the evaluation/test process but the latter does not.

+",22249,,,,,2/12/2019 16:26,,,,0,,,,CC BY-SA 4.0 +10536,1,,,2/12/2019 17:13,,1,478,"

Is there any research in this area?

+",22251,,1671,,2/13/2019 2:18,2/15/2019 14:18,Are there profitable hedge funds using AI?,,2,2,,12/11/2021 20:55,,CC BY-SA 4.0 +10537,2,,10531,2/12/2019 17:37,,1,,"

This ""decay"" of later values is a direct consequence of the episodic formula for the objective function for REINFORCE:

+ +

$$J(\theta) = v_{\pi_\theta}(s_0)$$

+ +

That is, the expected return from the first state of the episode. This is equation 13.4 in the book edition that you linked in the question.

+ +

In other words, if there is any discounting, we care less about rewards seen later in the episode. We mainly care about how well the agent will do from its starting position.

+ +

This is not true for all formulations of policy gradients. There are other, related, choices of objective function. We can formulate the objective function as caring about the returns from any distribution of states, but in order to define it well, we do need to describe the weighting/distribution somehow, it should be relevant to the problem, and we want to be able to get approximate samples of $\nabla J(\theta)$ for policy gradient to work. The algorithm you are asking about is specifically for improving policy for episodic problems. Note you can set $\gamma = 1$ for these problems, so the decay is not necessarily required.

+ +

As an aside (because someone is bound to ask): Defining $J(\theta)$ with respect to all states equally weighted could lead to difficulties e.g. the objective would take less account of a policy's ability to avoid undesirable states, and it would require a lot of samples from probably irrelevant states in order to estimate it. These difficulties would turn up as a hard to calculate (or maybe impossible) expectation for $\nabla J(\theta)$

+",1847,,1847,,2/12/2019 17:43,2/12/2019 17:43,,,,0,,,,CC BY-SA 4.0 +10538,2,,10529,2/12/2019 18:09,,3,,"

You can find an implementation of the REINFORCE algorithm (as defined in your question) in PyTorch at the following URL: https://github.com/JamesChuanggg/pytorch-REINFORCE/. First of all, I would like to note that a policy can be represented or implemented as a neural network, where the input is the state (you are currently in) and the output is a ""probability distribution over the actions you can take from that state received as input"".

+ +

In the Python module https://github.com/JamesChuanggg/pytorch-REINFORCE/blob/master/reinforce_discrete.py, the policy is defined as a neural network with 2 linear layers, where the first linear layer is followed by a ReLU activation function, whereas the second is followed by a soft-max. In that same Python module, the author also defines another class called REINFORCE, which creates a Policy object (in the __init__ method) and defines it as property of that class. The class REINFORCE also defines two methods select_action and update_parameters. These two methods are called from the main.py module, where the main loop of the REINFORCE algorithm is implemented. In that same main loop, the author declares lists entropies, log_probs and rewards. Note that these lists are re-initialized at ever episode. A ""log_prob"" and an ""entropy"" is returned from the select_action method, whereas a ""reward"" is returned from the environment after having executed one environment step. The environment is provided by the OpenAI's Gym library. The lists entropies, log_probs and rewards are then used to update the parameters, i.e. they are used by the method update_parameters defined in the class REINFORCE.

+ +

Let's see better now what these methods, select_action and update_parameters, actually do.

+ +

select_action first calls the forward method of the class Policy, which returns the output of the forward pass of the NN (i.e. the output of the soft-max layer), so it returns the probabilities of selecting each of the available actions (from the state given as input). It then selects the probability associated with the first action (I guess, it picks the probabilities associated with the action with the highest probabilities), denoted by prob (in the source code). Essentially, what I've described so far regarding this select_action method is the computation of $\pi(A_t \mid S_t, \theta)$ (as shown in the pseudocode of your question). Afterwards, in the same method select_action, the author also computes the log of that probability I've just mentioned above (i.e. the one associated with the action with the highest probability, i.e. the log of prob), denoted by log_prob. In that same method, the entropy (as defined in this answer) is calculated. In reality, the author calculates the entropy using only one distribution (instead of two): more specifically, the entropy is calculated as follows entropy = -(probs*probs.log()).sum(). In fact, the entropy loss function usually requires the ground-truth labels (as explained in the answer I linked you to above), but, in this case, we do not have ground-truth labels (given that we are performing RL and not supervised learning). Nonetheless, I can't really tell you why the entropy is calculated like this, in this case. Finally, the method select_action then return action[0], log_prob, entropy.

+ +

First of all, I would like to note that the method update_parameters is called only at the end of each episode (in the main.py module). In that same method, a variable called loss is first initialized to zero. In that method, we then iterate the list of rewards for the current episode. Inside that loop of the update_parameters method, the return, R is calculated. R is also multiplied by $\gamma$. On each time step, the loss is then calculated as follows

+ +
loss = loss - (log_probs[i]*(Variable(R).expand_as(log_probs[i])).cuda()).sum() - (0.0001*entropies[i].cuda()).sum()
+
+ +

The loss is calculated by subtracting the previous loss with

+ +
(log_probs[i]*(Variable(R).expand_as(log_probs[i])).cuda()).sum() - (0.0001*entropies[i].cuda()).sum()
+
+ +

where log_probs are the log probabilities calculated in the select_action method. log_probs is the part $\log \pi(A_t \mid S_t, \theta)$ of the update rule of your pseudocode. log_probs are then multiplied by the return R. We then sum the result of this multiplication over all elements of the vector. We then subtract this just obtained result by the entropies multiplied by 0.0001. I can't really tell you why the author decided to implement the loss in this way. I would need to think about it a little more.
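As a hedged aside (this is only a sketch, not code from that repository): one common way to turn the update rule $\theta \leftarrow \theta + \alpha\gamma^tG\nabla_{\theta}\ln\pi(A_t|S_t,\theta)$ into a loss that an optimiser can minimise is the negative return-weighted log-probability, because gradient descent on that loss moves $\theta$ in the direction of the update rule (with $\alpha$ playing the role of the learning rate). Here log_probs[t] is assumed to be a differentiable tensor holding $\ln\pi(A_t|S_t,\theta)$ and returns[t] the observed return from time step $t$:

def reinforce_loss(log_probs, returns, gamma=0.99):
    loss = 0
    for t, (log_prob_t, G_t) in enumerate(zip(log_probs, returns)):
        loss = loss - (gamma ** t) * G_t * log_prob_t
    return loss   # loss.backward() followed by optimizer.step() then performs the update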

+ +

The following article may also be useful: https://pytorch.org/docs/stable/distributions.html.

+",2444,,,,,2/12/2019 18:09,,,,5,,,,CC BY-SA 4.0 +10540,1,,,2/12/2019 19:22,,2,637,"

I am a beginner, just started studying around NLP, specifically various language models. So far, my understanding is that - the goal is to understand/produce natural language.

+ +

So far the methods I have studied speak about correlation of words, using correct combination to make a meaningful sentence. I also have the sense that the language modeling does not really care about the punctuation marks (or did I miss it?)

+ +

Thus I am curious is there a way they can classify sentence types such as Declarative, Imperative, Interrogative or Exclamatory?

+",22254,,1671,,2/14/2019 3:52,8/14/2019 0:02,Is there a way to understand the type of a sentence?,,2,0,,,,CC BY-SA 4.0 +10541,2,,10536,2/13/2019 2:16,,2,,"

High-frequency trading is where you see it being used: essentially, decision-making algorithms analyzing and making transactions in microseconds. It accounts for a significant percentage of market activity, and has been considered a source of greater market volatility (See: Flash Crashes).

+ +

You can bet that hedge funds are evaluating every form of AI for trading and predicting market trends in general, but, unlike academics (and a segment of the tech sector) I doubt they consider it in their interests to publish methods which competitors could then utilize.

+ +

It's difficult to source information on exactly what is being looked into and utilized in the financial sector because there is a lot of marketing noise (exaggerated claims, unreliable sources), but I did find these articles:

+ +

Will AI-Powered Hedge Funds Outsmart the Market? (Tech Review; 2/4/2016)

+""AI Hedge Fund Is Said to Liquidate After Less Than Two Years (Bloomberg; 9/7/2018)""

+ +

My gut tells me that Machine Learning methods will surpass humans in these kinds of decisions before too long (educated guess), and only increase in utility as the dataset these algorithms draw from grows, because it's ultimately a statistical problem.

+ +
+ +

AI is related to Game Theory, the study of economic decision-making, in the context of utility. Game Theory might be said to have had its greatest success in computing in general, via minimax, but has traditionally been much harder to apply in real world economics. Minimax can be utilized in machine learning, and it's actually hard to think of any economic decision-making that wouldn't utilize it in some form.

+",1671,,1671,,2/13/2019 18:16,2/13/2019 18:16,,,,0,,,,CC BY-SA 4.0 +10543,5,,,2/13/2019 2:28,,0,,"

https://en.wikipedia.org/wiki/High-frequency_trading

+",1671,,1671,,2/13/2019 2:28,2/13/2019 2:28,,,,0,,,,CC BY-SA 4.0 +10544,4,,,2/13/2019 2:28,,0,,"For questions about algorithmic trading (stocks, commodities bonds, etc.) and predictive financial algorithms in general.",1671,,1671,,2/13/2019 2:28,2/13/2019 2:28,,,,0,,,,CC BY-SA 4.0 +10545,1,,,2/13/2019 7:07,,3,444,"

For example, you train on dataset 1 with an adaptive optimizer like Adam. Should you reload the learning schedule, etc., from the end of training on dataset 1 when attempting transfer to dataset 2? Why or why not?

+",21158,,2444,,8/18/2021 10:19,8/18/2021 10:19,Should you reload the optimizer for transfer learning?,,2,0,,,,CC BY-SA 4.0 +10546,1,10547,,2/13/2019 7:30,,1,5025,"

I am purchasing a Titan RTX GPU. Everything seems fine with it except the float32 & float64 performance, which seems lower vis-a-vis some of its counterparts. I wanted to understand whether the single-precision and double-precision performance of a GPU affects deep learning training or efficiency? We work mostly with images, though we are not limited to that.

+",17980,,,user9947,2/13/2019 8:01,2/14/2019 9:03,Does fp32 & fp64 performance of GPU affect deep learning model training?,,2,0,,5/13/2020 21:40,,CC BY-SA 4.0 +10547,2,,10546,2/13/2019 7:58,,2,,"

First off I would like to post this comprehensive blog which makes comparison between all kinds of NVIDIA GPU's.

+ +

The most popular deep learning library, TensorFlow, uses 32-bit floating-point precision by default. This choice is made because it helps in two ways:

+ +
    +
  • Lesser memory requirements
  • +
  • Faster calculations
  • +
+ +

64 bit is only marginally better than 32 bit, in that very small gradient values will also be propagated to the earliest layers. But the trade-off of this gain in performance vs (the time for calculations + memory requirements + time for running through so many epochs so that those small gradients actually do something) is not worth it. There are state-of-the-art CNN architectures which inject gradients at intermediate layers and have very good performance.

+ +

So, overall, 32-bit performance is the one which should really matter for deep learning, unless you are doing a very high-precision job (which still would hardly matter, as small differences due to the 64-bit representation are practically erased by any kind of softmax or sigmoid). So 64 bit might increase your classification accuracy by $\ll 1\%$ and will only become significant over very large datasets.

+ +

As far as raw specs go, comparing the TITAN RTX to the 2080 Ti, the TITAN will perform better in fp64 (as its memory is double that of the 2080 Ti, and it has higher clock speeds, bandwidth, etc.), but a more practical approach would be to use two 2080 Tis coupled together, giving much better performance for the price.

+ +

Side Note: Good GPUs require good CPUs. It is difficult to tell whether a given CPU will bottleneck a GPU, as it entirely depends on how the training is being performed (whether data is fully loaded into the GPU before training occurs, or continuous feeding from the CPU takes place). Here are a few links explaining the problem:

+ +

CPU and GPU Bottleneck: A Detailed Explanation

+ +

A Full Hardware Guide to Deep Learning

+",,user9947,,user9947,2/13/2019 8:24,2/13/2019 8:24,,,,0,,,,CC BY-SA 4.0 +10549,1,10573,,2/13/2019 8:33,,3,491,"

In the paper Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, they define the loss function for the policy network as

+

$$J_\pi(\phi)=\mathbb E_{s_t\sim \mathcal D}\left[D_{KL}\left(\pi_\phi(\cdot|s_t)\Big\Vert \frac{\exp(Q_\theta(s_t,\cdot))}{Z_\theta(s_t)}\right)\right]$$

+

Applying the reparameterization trick, let $a_t=f_\phi(\epsilon_t;s_t)$, then the objective could be rewritten as

+

$$J_\pi(\phi)=\mathbb E_{s_t\sim \mathcal D,\, \epsilon_t \sim\mathcal N}[\log \pi_\phi(f_\phi(\epsilon_t;s_t)|s_t)-Q_\theta(s_t,f_\phi(\epsilon_t;s_t))]$$

+

They compute the gradient of the above objective as follows

+

$$\nabla_\phi J_\pi(\phi)=\nabla_\phi\log\pi_\phi(a_t|s_t)+(\nabla_{a_t}\log\pi_\phi(a_t|s_t)-\nabla_{a_t}Q(s_t,a_t))\nabla_\phi f_\phi(\epsilon_t;s_t)$$

+

The thing confuses me is the first term in the gradient, where does it come from? To my best knowledge, the second large term is already the gradient we need, why do they add the first term?

+",8689,,2444,,11/23/2020 1:49,11/23/2020 1:49,What is the gradient of the objective function in the Soft Actor-Critic paper?,,2,0,,,,CC BY-SA 4.0 +10550,2,,10540,2/13/2019 9:27,,1,,"

You can generally identify the mood of a verb by looking at grammatical structures; you don't need any language model for it. The three major moods in English are declarative, interrogative, and imperative. Assuming English is the language you will be working with, here are some questions:

+ +
    +
  • Does he like coffee?
  • +
  • Is this a piece of chocolate?
  • +
  • When did you go there?
  • +
  • How is that possible?
  • +
  • Have you got any cheese?
  • +
+ +

Apart from the obvious marker '?', these examples all start with either an auxiliary verb or a wh-word, so are fairly easy to recognise with a simple lookup. The one exception I can think of is an imperative (Do be quiet, please), where the do is followed by a further verb, which wouldn't be the case with a question.

+ +

Imperatives:

+ +
    +
  • Go to school now!
  • +
  • Eat up your vegetables!
  • +
  • Do shut up.
  • +
  • Have a go at it!
  • +
+ +

These start with a main verb in the base form, or with an auxiliary followed by a verb/not a pronoun.

+ +

Once you identified all interrogative and imperative sentences, all the remaining ones should be declarative.

+ +

So, all you would need is a small list of auxiliary verbs, pronouns, and wh-words, and with a bit of simple string matching you should get most of the way there. Undoubtedly there will be some exceptions, but there shouldn't be too many of them.
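A rough sketch of this lookup idea is given below; the word lists are illustrative only and would need to be much more complete in practice.

AUX = {'do', 'does', 'did', 'is', 'are', 'was', 'were', 'have', 'has',
       'can', 'could', 'will', 'would', 'shall', 'should'}
WH = {'who', 'what', 'when', 'where', 'why', 'how', 'which'}
PRONOUNS = {'i', 'you', 'he', 'she', 'it', 'we', 'they'}
BASE_VERBS = {'go', 'eat', 'have', 'shut', 'be', 'stop', 'come'}   # tiny sample list

def sentence_type(sentence):
    words = sentence.lower().strip().rstrip('.?!').split()
    first = words[0]
    second = words[1] if len(words) > 1 else ''
    if sentence.strip().endswith('?') or first in WH or (first in AUX and second in PRONOUNS):
        return 'interrogative'
    if first in BASE_VERBS or (first in AUX and second not in PRONOUNS):
        return 'imperative'            # e.g. 'Eat up your vegetables!', 'Do shut up.'
    return 'declarative'

# sentence_type('Did you go there?')  -> 'interrogative'
# sentence_type('Go to school now!')  -> 'imperative'
# sentence_type('He likes coffee.')   -> 'declarative'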

+ +

In other languages there will be similar structures; or there might be explicit markers (eg in Hawai'ian an imperative starts with the marker 'e', as in Hele 'oe ma ka hale ""you go to the house"" vs E hele 'oe ma ka hale ""Go to the house!"")

+",2193,,,,,2/13/2019 9:27,,,,0,,,,CC BY-SA 4.0 +10551,1,,,2/13/2019 10:38,,1,63,"

An activation function is a function from $R \rightarrow R$. It takes as input the inner products of weights and activations in the previous layer. It outputs the activation.

+ +

A softmax however, is a function that takes input from $R^p$, where $p$ is the number of possible outcomes that need to be classified. Therefore, strictly speaking, it cannot be an activation function.

+ +

Yet everywhere on the net it says the softmax is an activation function. Am I wrong or are they?

+",22273,,,,,2/13/2019 14:26,Is it a great misconception that the softmax is an activation function?,,1,3,,,,CC BY-SA 4.0 +10552,2,,10551,2/13/2019 10:46,,1,,"

I see no problem in regarding the softmax as a particular activation function which takes a vector input and produces a vector output. In fact, the sigmoid function can be viewed as a two-dimensional softmax in which one of the two inputs is hardwired to zero while the corresponding output is neglected.
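As a quick numerical check of this view, the softmax over $[x, 0]$ reproduces the sigmoid of $x$:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 1.7
print(softmax(np.array([x, 0.0]))[0])   # ~0.8455
print(sigmoid(x))                       # ~0.8455, the same value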

+",21726,,21726,,2/13/2019 14:26,2/13/2019 14:26,,,,1,,,,CC BY-SA 4.0 +10553,2,,10545,2/13/2019 10:58,,3,,"

When doing transfer learning it makes sense to have different update policies for ""inherited"" parameters and the ""new"" parameters. ""Inherited"" parameters are pre-trained on dataset1 and they typically form the front end of the deep model. The ""new"" parameters are trained from scratch and they typically produce the desired predictions on dataset2. It would be sensible to restart the learning schedule for the ""new"" parameters. However, most often we would avoid doing that for ""inherited"" parameters in order to avoid catastrophic forgetting.

+",21726,,,,,2/13/2019 10:58,,,,0,,,,CC BY-SA 4.0 +10554,2,,10546,2/13/2019 11:05,,1,,"

Deep models are very tolerant to arithmetic underflow. You can hope for negligible differences in prediction accuracy between FP32 and FP16 models. Check this paper for concrete results.

+",21726,,21726,,2/14/2019 9:03,2/14/2019 9:03,,,,0,,,,CC BY-SA 4.0 +10555,1,,,2/13/2019 11:27,,2,187,"

I've noticed that when modelling a continuous action space, the default thing to do is to estimate a mean and a variance where each is parameterized by a neural network or some other model.

+ +

I also often see that it is one network $\theta$ models both. The REINFORCE objective can be written as

+ +

$$\nabla \mathcal{J}(\theta) = \mathbb{E}_{\pi} [\nabla_\theta \log \pi(a_t|s_t) * R_t] $$

+ +

For discrete action space this makes sense since the output of the network is determined by a softmax. However, if we explicitly model the output of the network as a Gaussian, then the gradient of the log likelihood is of a different form,

+ +

$$\pi_\theta(a_t|s_t) = Normal(\mu_\theta(s_t), \Sigma_\theta(s_t))$$

+ +

and the log is:

+ +

$$\log \pi_\theta(a_t | s_t) = -\frac{1}{2} (a_t-\mu_\theta)^\top \Sigma^{-1}_\theta(a_t-\mu_\theta) - \frac{1}{2}\log\det(2 \pi \Sigma_\theta)$$

+ +

In the slides provided here (slide 18): +http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching_files/pg.pdf

+ +

IF the variance is held constant, then we can solve this analytically:

+ +

$$\nabla_\theta \log \pi_\theta(a_t|s_t) = (a_t - \mu_\theta) \Sigma^{-1} \phi(s)$$

+ +

But, are things always modelled assuming a constant variance? If it's not constant, then we have to account for the inverse of the covariance matrix as well as the determinant?

+ +

I've taken a look at code online and from what I've seen, most of them assume the variance is constant.

+ +
+ +

@NielSlater

+ +

Using the reparameterization trick we would use a normal distribution with fixed parameters 0 and 1. +$$ a_t \sim \mu_\theta(s_t) + \Sigma_\theta(s_t) * Normal(0, 1) $$

+ +

Which is the same as if we had actually sampled directly from a distribution, $ \pi_\theta(a_t | s_t) = Normal(\mu_\theta(s_t), \Sigma_\theta(s_t))$, and lets us calculate the corresponding $\log \pi_\theta(a_t|s_t)$ and $\nabla_\theta \log \pi_\theta(a_t | s_t)$ without having to differentiate through the actual density.
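To make the setting concrete, here is a minimal PyTorch sketch of the non-constant-variance case I have in mind (the tensor values are just placeholders for the outputs of the two heads of a policy network); letting autograd differentiate the full log-density handles the inverse-covariance and log-determinant terms automatically:

import torch
from torch.distributions import Normal

mu = torch.tensor([0.3], requires_grad=True)         # mean head output for some state
log_std = torch.tensor([-0.5], requires_grad=True)   # log-std head output for the same state
dist = Normal(mu, log_std.exp())

action = dist.sample()                    # detached sample, as in REINFORCE
log_prob = dist.log_prob(action).sum()    # includes the variance-dependent terms
R = 1.0                                   # placeholder return
(-R * log_prob).backward()                # gradients flow to both mu and log_std
print(mu.grad, log_std.grad)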

+",7858,,7858,,2/13/2019 14:43,2/15/2019 9:47,Calculating gradient for log policy when variance is not constant,,1,7,,,,CC BY-SA 4.0 +10556,1,10558,,2/13/2019 11:28,,2,204,"

I wonder how self-driving cars determine the path to follow. Yes, there's GPS, but GPS can have hiccups and a precision larger than expected. Suppose the car is supposed to turn right at an intersection on the inner lane, how is the exact path determined? How does it determine the trajectory of the inner lane?

+",13068,,,,,6/29/2019 15:33,How do self-driving cars construct paths?,,1,0,,,,CC BY-SA 4.0 +10558,2,,10556,2/13/2019 14:21,,2,,"

As you say, GPS is not precise enough for the purpose (until recently it was only accurate to within 5 m or so; since 2018 there are receivers that have an accuracy of about 30 cm). Instead, autonomous vehicles have a multitude of sensors, mostly cameras and radar, which record the surrounding area and monitor the road ahead. Due to them being flat, mostly one colour, and often with lines or other markers on them, roads are usually fairly easy to spot, which is why most success has been made driving on roads as opposed to off-road. Once you know exactly where you are and where you want to go, computing the correct trajectory is then just a matter of maths and physics.

+ +

For an academic paper on the subject of trajectory planning see Local Trajectory Planning and Tracking of Autonomous Vehicles, Using Clothoid Tentacles Method.

+ +

It quickly becomes more complex when other road users and obstacles are taken into account; here machine learning is used to identify stationary and movable objects at high speed from the sensor input. Reacting to the input is a further problem, and one reason why there aren't any self-driving cars on the roads today.

+ +

This is all on driving automation level 2 and above; on the lower levels things are somewhat easier. For example, the latest model Nissan LEAF has an automatic parking mode, where the car self-steers, guided by camera images and sonar, but still requires the driver to indicate the final position of the vehicle. Apart from that, it is fully automatic.

+",2193,,2193,,2/13/2019 15:04,2/13/2019 15:04,,,,2,,,,CC BY-SA 4.0 +10559,1,,,2/13/2019 17:04,,1,102,"
+

Show which literals can be inferred from the following knowledge bases, using both reasoning patterns and truth tables. Show all steps in your reasoning and explain your answers.

+ +

1) P & Q
+ 2) Q →R v S
+ 3) P → ~R

+
+ +

This is from my reasoning-pattern tutorial. My textbook shows a similar question, except that it involves a single literal, with workings, so I can somewhat follow it, but I'm not familiar with some of the terms. I don't understand how I can infer all literals from the above information. I also don't fully understand what and-elimination, modus ponens and unit resolution are.

+ +

Is there anyone who is kind enough to use the above question as an example so that I can have a clearer picture?

+",22282,,,,,2/13/2019 17:04,"Logic questions: reasoning pattern, Infer literals, unit resolution, and-elimination etc",,0,1,,,,CC BY-SA 4.0 +10560,2,,9024,2/13/2019 17:14,,5,,"

Short answer

+

The Q values are updated using a greedy policy because, in the Q-learning algorithm, the $\max$ operator is used to determine the target, which is denoted by

+

$$\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$$

+

Intuitively, the $\max$ operator is used because we assume that the target policy (the policy associated with the optimal value function that we want to learn) takes a greedy action, which is defined, in this context, as the action associated with the highest Q value: $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ means that we are selecting the $Q$ value, associated with the (next) state $S_{t+1}$, which corresponds to the action $a$, such that $Q(S_{t+1}, a)$ is the highest (with respect to other possible actions from $S_{t+1}$).

+

Note that the $Q$ function receives as input a state and an action. So, for each state $s$, we have an action (among all possible actions, $a_1, a_2, \dots$, which we can take from the state $s$), denote it by $a^*$, such that $Q(s, a^*) > Q(s, a_1)$, $Q(s, a^*) > Q(s, a_2)$, etc. In the expression $\color{blue}{\max_{a}Q(S_{t+1}, a)}$, we are basically selecting $Q(s, a^*)$ for $s = S_{t+1}$.

+

More explanations

+

To explain all the components of this target, I will explain the Q-learning algorithm. Hopefully, after this explanation, you will be able to understand why Q-learning uses the greedy policy to update the Q values and why it is an off-policy algorithm (which is, IMHO, a quite unintuitive and confusing term to describe what off-policy actually means).

+

How does Q-learning work?

+

Here's the Q-learning algorithm

+

+

The Q-learning algorithm estimates a value function, known as the Q-function, associated with a policy $\pi$. Intuitively, it does that by simulating an agent which takes actions in the environment, observes the impact of those actions on the environment in terms of the received rewards and the new states where the agent ends up in after taking those actions. Meanwhile, during this exploration of the environment, it attempts to estimate the optimal $Q$ function (i.e. the value function associated with the optimal policy, which, if followed, will give the agent the highest amount of reward in the long run, aka return).

+

Q-learning proceeds in episodes. So, initially, you need to pass the number of episodes as input. You can think of episodes as iterations (like in any iterative optimisation algorithm). However, in the context of RL, an episode is a little bit more specific: the start and end of an episode are associated with specific states of the environment: the episode starts when the agent is in a starting state $S_0$ (which can be sampled from a probability distribution over $S_0$, if there is more than one) and ends when it is in a terminal state.

+

In the pseudocode above, at the beginning of each episode, we initialise $t=0$, where $t$ represents the time step of a specific episode.

+

Inner/Episode loop

+

We then have the following loop, which terminates when the agent reaches a terminal state:

+

+

So, at each episode, we run the loop above. The block of code inside this loop contains the main logic of the Q-learning algorithm.

+

Behaviour policy

+

On each iteration of this inner loop, the agent chooses an action $A_t$ (the action at time step $t$ of the current episode) using a policy (which is known, in this context, as the behaviour policy, which should ensure that all states are sufficiently visited, in order for tabular Q-learning to converge). In this case, the $\epsilon$-greedy policy is used.

+

How does this $\epsilon$-greedy policy work?

+

If you look at the pseudocode above, $\epsilon$ is initialised at beginning of each episode. In the pseudocode above, $\epsilon$ can change from episode to episode, but assume, for simplicity, that, at every episode, it is a fixed small number (e.g. $0.01$). The statement

+
+

Choose action $A_t$ using policy derived from $Q$ (e.g., $\epsilon$-greedy)

+
+

means that, with probability $1 - \epsilon$, the greedy action is chosen, and, with probability $\epsilon$, a random action is taken.

+

What is the greedy action in this case?

+

In this case, the greedy action is the action, in the current state $S_t$, which is associated with the highest Q value (given the current estimate of the Q value). It is exactly the same action as the action $a^*$ (as I explained above). The difference is that, in this case, we choose $A_t$ using the $\epsilon$-greedy policy: so, most of the time, we choose the greedy action, but, sometimes, we can also choose a random action.

+

The agent then executes the just chosen action $A_t$ in the environment, and it observes the impact of this action on the environment, which is determined by how the environment responds to this action: the response consists of a reward, $R_{t+1}$, and a next state, $S_{t+1}$.

+

To recapitulate, the agent chooses an action using the $\epsilon$-greedy policy, executes this action on the environment, and it observes the response (that is, a reward and a next state) of the environment to this action. This is the part of the Q-learning algorithm where the agent interacts with the environment in order to gather some info about it, so as to be able to estimate the Q function.

+

Q-learning update

+

After that, the agent can update its estimate of the Q function using the following update rule

+

$$\color{orange}{Q(S_t, A_t)} \leftarrow \color{red}{Q(S_t, A_t)} + \alpha ([\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}] - \color{red}{Q(S_t, A_t)})$$

+

where $S_t$ is the current state (of the current episode) the agent is in, $A_t$ is the action chosen using the $\epsilon$-greedy policy (as described above), and $S_{t+1}$ and $R_{t+1}$ are respectively the next state and rewards, which, collectively, are the response of the environment to the just taken action $A_t$.

+

So, how is the estimate of this $Q$ function updated?

+

First of all, I would like to note that, if you look at the beginning of the pseudocode above, $Q(s, a)$ is initialized arbitrarily for all states $s \in \mathcal{S}$ and for all actions $a \in \mathcal{A}$: it can e.g. be initialised to $0$. $Q(s, a)$ can e.g. be implemented as a matrix (or 2-dimensional array) $M \in \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}$, where $M[s, a] = Q(s, a)$, $|\mathcal{S}|$ is the number of states in your problem and $|\mathcal{A}|$ the number of actions.

+

Furthermore, note that the symbol $\leftarrow$ means "assignment" (like assignment to a variable, in the context of programming). So, in the update rule above, we are assigning to $\color{orange}{Q(S_t, A_t)}$ (which will be the next or updated estimate of the Q value for the current state $S_t$ and the just taken action from that state $A_t$) the value $\color{red}{Q(S_t, A_t)} + \alpha (\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)} - \color{red}{Q(S_t, A_t)})$. Let's break this value down.

+

$\color{red}{Q(S_t, A_t)}$ (on the right side of the assignment) is the estimate of the Q value for the state $S_t$ and action $A_t$ before the assignment. So, we are summing $\color{red}{Q(S_t, A_t)}$ and $\alpha (\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)} - \color{red}{Q(S_t, A_t)})$, and then we assign it to $\color{orange}{Q(S_t, A_t)}$ again.

+

$\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$ is what is often called the target. Q-learning is a temporal-difference (TD) algorithm, and TD algorithms update estimates of the value or action-value functions based on the difference between the current estimate, in the case of Q-learning it is denoted by $\color{red}{Q(S_t, A_t)}$ (on the right side of the $\leftarrow$), and a "target". So, in the Q-learning algorithm, $\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$ is the target. We can roughly think of it as "the value that $\color{red}{Q(S_t, A_t)}$ should have been". So, in a certain way, we are performing supervised learning, where $\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$ would be the ground-truth label and $\color{red}{Q(S_t, A_t)}$ the current estimate, and so $[\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}] - \color{red}{Q(S_t, A_t)}$ would be the error (or loss): in fact, it is often called the TD error. However, note that this is not really supervised learning, because $\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$ is not a ground-truth (it is partially an estimate, because of the part $\gamma \color{blue}{\max_{a}Q(S_{t+1}, a)} $, and it partially a ground-truth, because of $\color{green}{R_{t+1}}$).

+

To recapitulate, $\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}$ is the target, $\color{red}{Q(S_t, A_t)}$ is the current estimate, and $[\color{green}{R_{t+1}} + \gamma \color{blue}{\max_{a}Q(S_{t+1}, a)}] - \color{red}{Q(S_t, A_t)}$ is the TD error. We are thus summing the "error" (weighted by the hyper-parameter $\alpha$, which is, in this case, often called the learning rate) and the current estimate $\color{red}{Q(S_t, A_t)}$ in order to produce the new estimate $\color{orange}{Q(S_t, A_t)}$.

+

In the target, you can see that we are multiplying the $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ by $\gamma$. This is a hyper-parameter (a parameter which often needs to be chosen by the programmer before the algorithm is executed), known as the discount factor. It controls the contribution of $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ to the target: that is, how much of $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ we want to include in the target. Recall that I've just said above that the target is composed of the reward $\color{green}{R_{t+1}}$ (which is a ground-truth or real-world experience, because it is directly received from the environment) and $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ (which actually uses an estimate of the Q function, that is, it uses $Q(S_{t+1}, a)$). So, $\gamma$ controls the contribution of an estimate to the ground-truth.

+

As I said at the beginning of this answer, $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ can be thought of as the $Q$ value associated with the next state $S_{t+1}$ (which was observed by the agent after it has taken the action $A_t$) and associated with the action $a$, such that $Q(S_{t+1}, a)$ is the highest among all other possible actions from state $S_{t+1}$. In other words, $\color{blue}{\max_{a}Q(S_{t+1}, a)}$ can be thought of as the estimate of the Q value associated with the next state $S_{t+1}$ and the greedy action taken from that same state.
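A compact sketch of this inner-loop logic for the tabular case is given below; env.step and the list actions are assumed to be provided by the environment, and Q is a dictionary with every Q[(s, a)] initialised (e.g. to 0.0):

import random

def q_learning_step(Q, s, actions, env, alpha=0.1, gamma=0.99, epsilon=0.01):
    # behaviour policy: epsilon-greedy with respect to the current Q estimate
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q[(s, x)])
    s_next, r, done = env.step(a)
    # the target uses the greedy (max) bootstrap, regardless of how 'a' was chosen
    target = r if done else r + gamma * max(Q[(s_next, x)] for x in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return s_next, done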

+

Q-learning is off-policy

+

Note that, when we update the value function, the agent is not really taking actions in the environment (the only action taken is $A_t$, and it was taken, using the behavior policy, before the update!). Nonetheless, people often call Q-learning an off-policy algorithm because

+
    +
  1. It uses the $\epsilon$-greedy policy to interact with the environment (aka a behavior policy). In this case, actions are really taken, and the responses of the environment are really produced, observed, and used to update estimates of the $Q$ function.

    +
  2. +
  3. It uses a target that is based on an estimate which is greedy (i.e. it uses $\color{blue}{\max_{a}Q(S_{t+1}, a)}$).

    +
  4. +
+

Given that Q-learning uses estimates of the form $\color{blue}{\max_{a}Q(S_{t+1}, a)}$, Q-learning is often considered to be performing updates to the Q values, as if those Q values were associated with the greedy policy, that is, the policy that always chooses the action associated with highest Q value. So, you will often hear that Q-learning finds a target policy (i.e. the policy that is derived from the last estimate of the Q function) that is greedy (so, usually, different from the behavior policy).

+",2444,,2444,,12/8/2021 8:56,12/8/2021 8:56,,,,5,,,,CC BY-SA 4.0 +10563,1,,,2/13/2019 22:52,,1,122,"

I am trying to write a genetic algorithm that generates 100 items, assigning random weights and utilities to them, and then tries to pick items out of these 100 items, maximising the total utility while keeping the total weight of the picked items under 500 kgs. The program should return an array of boolean values, where true represents an item to be picked and false represents an item not to be picked.

+ +

Can someone help with this or point me to a link of something that has been written like this before?
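To make the question concrete, here is a rough sketch of the kind of setup I have in mind (all the numeric choices below, such as the population size and mutation rate, are placeholders):

import random

N_ITEMS, CAPACITY, POP, GENS = 100, 500, 60, 200
weights   = [random.uniform(1, 50) for _ in range(N_ITEMS)]
utilities = [random.uniform(1, 30) for _ in range(N_ITEMS)]

def fitness(ind):
    w = sum(wi for wi, pick in zip(weights, ind) if pick)
    u = sum(ui for ui, pick in zip(utilities, ind) if pick)
    return u if w <= CAPACITY else 0.0        # overweight solutions get zero fitness

def crossover(a, b):
    cut = random.randrange(N_ITEMS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.01):
    return [(not g) if random.random() < rate else g for g in ind]

population = [[random.random() < 0.1 for _ in range(N_ITEMS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)           # list of booleans: True = pick that item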

+",22288,,2444,,2/20/2019 18:21,2/20/2019 18:21,How do I write a genetic algorithm to solve the knapsack problem?,,0,1,,,,CC BY-SA 4.0 +10565,1,10605,,2/14/2019 3:04,,2,693,"

My understanding of the main idea behind A2C / A3C is that we run small segments of an episode to estimate the return using a trainable value function to compensate for the unseen final steps of the episode.

+ +

While I can see how this could work in continuing tasks with relatively dense rewards, where you can still get some useful immediate rewards from a small experience segment, does this approach work for episodic tasks where the reward is only delivered at the end? For example, in a game where you only know if you win or lose at the end of the game, does it still make sense to use the A2C / A3C approach?

+ +

It's not clear to me how the algorithm could get any useful signal to learn anything if almost every experience segment has zero reward, except for the last one. This would not be a problem in a pure MC approach for example, except for the fact that we might need a lot of samples. However, it's not clear to me that arbitrarily truncating episode segments like in A2C / A3C is a good idea in this case.

+",22297,,2444,,2/15/2019 17:29,2/15/2019 19:47,Are A2C or A3C suitable for episodic tasks where the reward is delivered only at the end of the episode?,,1,0,,,,CC BY-SA 4.0 +10566,1,,,2/14/2019 8:30,,2,220,"

One of the most common misconceptions about reinforcement learning (RL) applications is that, once you deploy them, they continue to learn. And, usually, I'm left having to explain this. As part of my explanations, I like to show where it is being used and where not.

+

I've done a little bit of research on the topic, but the descriptions seem fairly academic, and I'm left with the opinion that reinforcement learning is not really suitable for financial services in regulated markets.

+

Am I wrong? If so, I would like to know where RL is being used? Also, in those cases, are these RL algorithms adapting to new data over time? How do you ensure they are not picking up on data points or otherwise making decisions that are considered to be unacceptable?

+",19484,,2444,,11/13/2020 23:46,11/14/2020 2:17,Where are reinforcement algorithms used in financial services?,,1,2,,,,CC BY-SA 4.0 +10567,1,,,2/14/2019 8:47,,1,140,"

Which representation is most biologically plausible for actor nodes? For example, actions represented across several output nodes which may be either

+ +
    +
  1. mutually exclusive with each other (e.g., go north, go south, etc), +achieved by winner-takes-all.

  2. +
  3. NOT mutually exclusive with each other (e.g. left leg forward, right leg forward); these actions may occur concurrently. To go north, the correct combination of nodes must be active.

  4. +
+ +

Similarly which representation is most plausible for critic output nodes?

+ +
    +
  1. A single output node that outputs a real number representing the +reward.

  2. +
  3. A set of output nodes each representing a separate value, achieved by winner-takes-all.

  4. +
+ +

Or do other representations better align with real brains ?

+",22305,,2444,,2/15/2019 0:53,2/15/2019 8:32,What is the most biologically plausible representation for the actor and critic?,,1,2,,,,CC BY-SA 4.0 +10568,1,,,2/14/2019 9:31,,1,22,"

Feature visualization allows to better understand neural networks by generating images that maximize the activation of a specific neuron, and therefore understand what are the abstract features that produce a high activation.

+ +

The examples that I saw so far are related to classification tasks. So my question is: can these concepts be applied to other convolutional neural network tasks, like semantic segmentation or image embedding (triplet loss)? What can I expect if I apply visualization algorithms to these networks?

+",16671,,,,,2/14/2019 9:31,Feature visualization on neural networks which are not for classification,,0,0,,,,CC BY-SA 4.0 +10572,1,10609,,2/14/2019 13:46,,0,254,"

I have to calculate the affluence of localities in Metro city. To calculate affluence, I am considering per capita income as a parameter.

+ +

Where I can get a dataset of it? What are other parameters I should consider for the problem?

+ +

Any guidance will be fruitful for me.

+",15368,,2444,,2/15/2019 20:54,2/15/2019 23:24,Parameters to calculate affluence in localities of Metro city,,1,3,0,,,CC BY-SA 4.0 +10573,2,,10549,2/14/2019 15:09,,3,,"

I'll give it a go here and try to answer your question, I'm not sure if this is entirely correct, so if someone thinks that it isn't please correct me.
+I'll disregard expectation here to make things simpler. First, note that policy $\pi$ depends on parameter vector $\phi$ and function $f_\phi(\epsilon_t;s_t)$, and value function $Q$ depends on parameter vector $\theta$ and same function $f_\phi(\epsilon_t;s_t)$. Also, one important thing that authors mention in the paper and you didn't mention is that this solution is approximate gradient not the true gradient.
+Our goal is to calculate gradient of objective function $J_\pi$ with respect to $\phi$, so disregarding the expectation we have:

+ +

$\nabla_\phi J_\pi (\phi) = \nabla_\phi \log\pi(\phi,f_\phi (\epsilon_t;s_t)) - \nabla_\phi Q(s_t,\theta,f_\phi (\epsilon_t;s_t))$

+ +

Let's see the gradient of first term on right hand side. To get the full gradient we need to calculate derivative w.r.t to both variables, $\phi$ and $f_\phi (\epsilon_t;s_t)$, so we have:

+ +

$\nabla_\phi \log\pi(\phi,f_\phi (\epsilon_t;s_t)) = \frac {\partial \log\pi(\phi,f_\phi (\epsilon_t;s_t))}{\partial \phi} + \frac{\partial \log\pi(\phi,f_\phi (\epsilon_t;s_t))}{\partial f_\phi(\epsilon_t;s_t)} \frac{\partial f_\phi(\epsilon_t;s_t)}{\partial \phi}$

+ +

This is where approximation comes, they replace $f_\phi (\epsilon_t;s_t)$ with $a_t$ in some places and we have:

+ +

$\nabla_\phi \log\pi(\phi,f_\phi (\epsilon_t;s_t)) \approx \frac {\partial \log\pi(\phi,a_t)}{\partial \phi} + \frac{\partial \log\pi(\phi,a_t)}{\partial a_t} \frac{\partial f_\phi(\epsilon_t;s_t)}{\partial \phi}$
+$\nabla_\phi \log\pi(\phi,f_\phi (\epsilon_t;s_t)) \approx \nabla_\phi \log\pi(\phi,a_t) + \nabla_{a_t} \log\pi(\phi,a_t) \nabla_\phi f_\phi (\epsilon_t;s_t)$

+ +

For the second term in first expression on right hand side we have:

+ +

$\nabla_\phi Q(s_t,\theta,f_\phi (\epsilon_t;s_t)) = \frac {\partial Q(s_t,\theta,f_\phi (\epsilon_t;s_t))}{\partial \phi} + \frac{\partial Q(s_t,\theta,f_\phi (\epsilon_t;s_t))}{\partial f_\phi(\epsilon_t;s_t)} \frac{\partial f_\phi(\epsilon_t;s_t)}{\partial \phi}$
+$\nabla_\phi Q(s_t,\theta,f_\phi (\epsilon_t;s_t)) \approx \frac {\partial Q(s_t,\theta,a_t)}{\partial \phi} + \frac{\partial Q(s_t,\theta,a_t)}{\partial a_t} \frac{\partial f_\phi(\epsilon_t;s_t)}{\partial \phi}$

+ +

The first term on the right-hand side is 0 because $Q$ does not depend on $\phi$, so we have:

+ +

$\nabla_\phi Q(s_t,\theta,f_\phi (\epsilon_t;s_t)) \approx \nabla_{a_t}Q(s_t, \theta,a_t)\nabla_\phi f_\phi(\epsilon_t;s_t)$

+ +

Now you add up things and you get the final result.

+",20339,,20339,,2/14/2019 15:30,2/14/2019 15:30,,,,3,,,,CC BY-SA 4.0 +10575,1,11069,,2/14/2019 17:03,,3,755,"

In reinforcement learning, we often define two functions, the state-value function

+ +

$$V^\pi(s) = \mathbb{E}_{\pi} \left[\sum_{k=0}^{\infty} +\gamma^{k}R_{t+k+1} \Bigg| S_t=s \right]$$

+ +

and the state-action-value function

+ +

$$Q^\pi(s,a) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k}R_{t+k+1}\Bigg|S_t=s, A_t=a \right]$$

+ +

where $\mathbb{E}_{\pi}$ means that these functions are defined as the expectation with respect to a fixed policy $\pi$ of what is often called the return, $\sum_{k=0}^{\infty} \gamma^{k}R_{t+k+1}$, where $\gamma$ is a discount factor and $R_{t+k+1}$ is the reward received from the environment (while the agent interacts with it) from time $t$ onwards.

+ +

So, both the $V$ and $Q$ functions are defined as expectations of the return (or the cumulative future discounted reward), but these expectations have different ""conditions"" (or are conditioned on different variables). The $V$ function is the expectation (with respect to a fixed policy $\pi$) of the return given that the current state (the state at time $t$) is $s$. The $Q$ function is the expectation (with respect to a fixed policy $\pi$) of the return conditioned on the fact that the current state the agent is in is $s$ and the action the agent takes at $s$ is $a$.

+ +

Furthermore, the Bellman optimality equation for $V^*$ (the optimal value function) can be expressed as the Bellman optimality equation for $Q^{\pi^*}$ (the optimal state-action value function associated with the optimal policy $\pi^*$) as follows

+ +

$$ +V^*(s) = \max_{a \in \mathcal{A}(s)} Q^{\pi^*}(s, a) +$$

+ +

This is actually shown (or proved) on page 76 of the book ""Reinforcement Learning: An Introduction"" (1st edition) by Andrew Barto and Richard S. Sutton.

+ +

Are there any other functions, apart from the $V$ and $Q$ functions defined above, in the RL context? If so, how are they related?

+ +

For example, I've heard of the ""advantage"" or ""continuation"" functions. How are these functions related to the $V$ and $Q$ functions? When should one be used as opposed to the other? Note that I'm not just asking about the ""advantage"" or ""continuation"" functions, but, if possible, any existing function that is used in RL that is similar (in purpose) to these mentioned functions, and how they are related to each other.

+",2444,,2444,,11/23/2020 14:02,11/23/2020 14:13,"Apart from the state and state-action value functions, what are other examples of value functions used in RL?",,2,0,,,,CC BY-SA 4.0 +10578,1,,,2/14/2019 21:44,,1,119,"

TL;DR: read the bold. The rest are details

+ +

I am trying to implement section 13.5 of Reinforcement Learning: An Introduction myself:

+ +

+ +

on OpenAI's CartPole

+ +

The algorithm seems to be learning something useful (and not random), as shown in these graphs (different zoom on the same run):

+ +

+ +

+ +

+ +

These show the reward per episode (the y-axis is the ""time alive"", the x-axis is the episode number).

+ +

However, as can be seen,

+ +
    +
  1. The learning does not seem to stabilize.

  2. +
  3. It looks like every time the reward maxes out (200), it immediately drops.

  4. +
+ +
+ +

My relevant code for reference (inspired by pytorch's actor critic)

+ +

note: in this question, xp_batch is ONLY THE VERY LAST (s, a, r, s'), meaning experience replay is not in use in this code!

+ +

The actor and critic are two distinct neural networks, each trained with its own optimizer (see the code below).

+ +
def learn(self, xp_batch):#in this question, xp_batch is ONLY THE VERY LAST (s, a, r, s')
+    for s_t, a_t, r_t, s_t1 in xp_batch:
+        expected_reward_from_t = self.critic_nn(s_t)
+        probs_t = self.actor_nn(s_t)
+        expected_reward_from_t1 = torch.tensor([[0]], dtype=torch.float)
+        if s_t1 is not None:  # s_t is not a terminal state, s_t1 exists.
+            expected_reward_from_t1 = self.critic_nn(s_t1)
+
+        m = Categorical(probs_t)
+        log_prob_t = m.log_prob(a_t)
+
+        delta = r_t + self.args.gamma * expected_reward_from_t1 - expected_reward_from_t
+
+        loss_critic = delta * expected_reward_from_t
+        self.critic_optimizer.zero_grad()
+        loss_critic.backward(retain_graph=True)
+        self.critic_optimizer.step()
+
+        delta.detach()
+        loss_actor = delta * log_prob_t
+        self.actor_optimizer.zero_grad()
+        loss_actor.backward()
+        self.actor_optimizer.step()
+
+def select_action(self, state):
+    probs = self.actor_nn(state)
+    m = Categorical(probs)
+    action  = m.sample()
+    return action
+
+ +
+ +

My questions are:

+ +
    +
  1. Am I doing something wrong, or is this to be expected?

  2. +
  3. I know this can be improved with eligibility traces/experience replay+off policy learning. Before making those upgrades, I want to make sure the current results make sense.

  4. +
+",21645,,,,,2/14/2019 21:44,"How to make episode ending ""good"" in reinforcement learning?",,0,5,,,,CC BY-SA 4.0 +10579,1,,,2/15/2019 4:49,,4,168,"

In value iteration, we have a model of the environment's dynamics, i.e $p(s', r \mid s, a)$, which we use to update an estimate of the value function.

+

In the case of temporal-difference and Monte Carlo methods, we do not use $p(s', r \mid s, a)$, but then how do these methods work?

+",22337,,2444,,11/16/2020 21:18,11/16/2020 21:18,"How do temporal-difference and Monte Carlo methods work, if they do not have access to model?",,1,0,0,,,CC BY-SA 4.0 +10580,1,10587,,2/15/2019 6:36,,2,561,"

What is the common representation used for the state in articulated robot environments? My first guess is that it's a set of the angles of every joint. Is that correct? My question is motivated by the fact that one common trick that helps training neural nets in general is to normalize the inputs, like setting mean = 0 and std dev = 1, or scaling all the input values to $[0, 1]$, which could be easily done in this case too if all the inputs are angles in $[0, 2 \pi]$. But, what about distances? Is it common to, for example, use as input some distance of the agent to the ground, or a distance to some target position? In that case, the scale of the distances can be arbitrary and vary a lot. What are some common ways to deal with that?

+",22297,,,,,2/15/2019 10:37,how to normalize the state space for articulated robot environments?,,1,2,,,,CC BY-SA 4.0 +10581,2,,10579,2/15/2019 7:05,,5,,"

The main idea is that you can estimate $V^\pi(s)$, the value of a state $s$ under a given policy $\pi$, even if you don't have a model of the environment, by visiting that state $s$ and following the policy $\pi$ after that state. If you repeat this process many times, you'll get many samples of trajectories starting at $s$ with some total return associated with them. If you average them, you'll have a sample-based estimate of $V^\pi(s)$. In a similar way, you can do a sample-based approach to estimate $Q^\pi(s, a)$, where you start at $s$, take some arbitrary action $a$, and then continue following $\pi$. After you collect many samples, you can also average the returns to get a sample-based estimate of $Q^\pi(s, a)$. Once you have a good enough estimate of $Q^\pi(s, a)$, you can see that it's easy to improve the current policy $\pi$ to get a new policy $\pi'$ that always chooses the best action on any given state: $\pi'(s) = \underset{a}{argmax}\ Q^\pi(s, a)$.

+ +

The procedure I described in the last paragraph where you sample an entire trajectory and wait until the end of the episode to estimate a return is the Monte Carlo approach. In contrast, TD exploits the recursive nature of the Bellman equation to learn as you go, even before the episode ends. For example, you can estimate the value function like this: $V^\pi(s_0) = r_0 + \gamma V^\pi(s_1)$, where $r_0$ comes from experience and instead of collecting more $r_t$ until the end of the episode, you rely on your estimate of the value of the next state (which you also get from experience). At the beginning, your estimates might be random, but after many iterations the process converges to good estimates. The same idea of exploiting this recursive structure can be used to estimate $Q^\pi(s, a)$. I guess this approach is known as temporal difference because you sample $r_0$ which is in some sense the temporal difference between $s_0$ and $s_1$.
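As an illustration of the difference, here is a tiny, hypothetical tabular sketch (my own, not from any textbook) of the two update styles for state values; the episode format and the constants are assumptions made just for this example:

import numpy as np
from collections import defaultdict

gamma, alpha = 0.99, 0.1
V = defaultdict(float)

# Monte Carlo: wait until the episode ends, then update toward the full sampled return.
def mc_update(episode):  # episode = [(s_0, r_1), (s_1, r_2), ...] collected under pi
    G = 0.0
    for s, r in reversed(episode):
        G = r + gamma * G
        V[s] += alpha * (G - V[s])       # move the estimate toward the sampled return

# TD(0): update after every single step, bootstrapping from the next state's estimate.
def td0_update(s, r, s_next, done):
    target = r if done else r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])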

+",22297,,22297,,2/15/2019 8:54,2/15/2019 8:54,,,,1,,,,CC BY-SA 4.0 +10582,2,,10567,2/15/2019 8:32,,1,,"

For the actor, I'd say the 'not mutually exclusive' option is more biologically plausible in the context of muscle systems, where the actions can be seen as simultaneous muscle activations. Maybe at a higher level, an agent thinks of the action as 'go north' or 'go south', but the final outputs which have to control muscles at a lower level have to represent simultaneous muscle activations.

+ +

For the critic, I'd say the 'single output node' is more biologically plausible. Agents perceive the world in the form of high dimensional inputs, such as images. The approach where a value function is learned in a tabular fashion where you know the value for every single state doesn't really scale very well and is limited to small discrete state spaces. For biological agents, it makes sense to have a function that senses the current state of the environment and outputs a single number that represents the value, which gives the agent an idea of how things are going so far given the actions it took in the past.

+",17312,,,,,2/15/2019 8:32,,,,5,,,,CC BY-SA 4.0 +10583,2,,10534,2/15/2019 8:52,,1,,"

You should probably compile a list of Universities/Institutions that specialise in Additive Manufacturing, and target graduating students or researchers looking for a new challenge. This may prove more fruitful than targeting the normal channels. You may have to search worldwide, but of course this adds the issue of work visas, etc. There is no easy way to solve this...

+ +

You can try posting on the Academia forum too.

+",15812,,,,,2/15/2019 8:52,,,,0,,,,CC BY-SA 4.0 +10584,1,,,2/15/2019 9:24,,3,160,"

In section ""5.2 Monte Carlo Estimation of Action Values"" of the second edition of the reinforcement learning book by Sutton and Barto, this is stated:

+ +
+

If a model is not available, then it is particularly useful to estimate action values (the values of state–action pairs) rather than state values. With a model, state values alone are sufficient to determine a policy; one simply looks ahead one step and chooses whichever action leads to the best combination of reward and next state, as we did in the chapter on DP.

+
+ +

However, I don't see how this is true in practice. I can see how it'd work trivially for discrete state and action spaces with deterministic environment dynamics, because we could compute $\pi(s) = \underset{a}{\text{argmax}}\ V(\text{step}(s, a))$ by just looking at all possible actions and choosing the best one. As soon as I think about continuous state and action spaces with stochastic environment dynamics, computing the $\text{argmax}$ seems to become very complicated and impractical. For the particular case of continuous states and discrete actions, I think estimating an action value might be more practical even if a forward model of the environment dynamics is available, because the $\text{argmax}$ becomes easier (I'm especially thinking of the approach taken in deep Q learning).

+ +

Am I correct in thinking this way or is it true that if a model is available it's not useful to estimate action values if state values are already available?

+",17312,,2444,,6/22/2019 19:28,6/22/2019 19:28,Why is the state value function sufficient to determine the policy if a model is available?,,1,0,,,,CC BY-SA 4.0 +10585,2,,10555,2/15/2019 9:47,,2,,"

It's true that computing the log prob of a sample from a Gaussian requires inverting a matrix and dealing with the determinant in the general case of a full covariance matrix. If you wanted to backprop through the log prob, you'd need to backprop through these operations, which, by the way, are available as differentiable operations in PyTorch. See the Gaussian distribution class which deals with these operations: https://github.com/pytorch/pytorch/blob/master/torch/distributions/multivariate_normal.py#L177 You can see there's a half_log_det, and although you won't find an explicit matrix inversion, you'll see that it uses a differentiable triangular linear solve (torch.trtrs) at some point, which is essentially doing a matrix inversion in a more efficient way.
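For a concrete (hypothetical) sketch of this in practice, you can build a torch.distributions.MultivariateNormal with a lower-triangular scale factor, take log_prob of a sample, and backprop straight through it; the sizes below are arbitrary:

import torch
from torch.distributions import MultivariateNormal

mean = torch.zeros(3, requires_grad=True)
# Parameterize the covariance through a lower-triangular factor so it stays valid.
scale_tril = torch.eye(3, requires_grad=True)

dist = MultivariateNormal(mean, scale_tril=scale_tril)
sample = dist.rsample()           # reparameterized sample, differentiable w.r.t. mean/scale
log_prob = dist.log_prob(sample)  # involves the triangular solve and the log-determinant

log_prob.backward()               # gradients flow into mean and scale_tril
print(mean.grad, scale_tril.grad)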

+",17312,,,,,2/15/2019 9:47,,,,0,,,,CC BY-SA 4.0 +10586,1,,,2/15/2019 10:28,,9,2666,"

I often see the terms episode, trajectory, and rollout to refer to basically the same thing, a list of (state, action, rewards). Are there any concrete differences between the terms or can they be used interchangeably?

+

In the following paragraphs, I'll summarize my current slightly vague understanding of the terms. Please point any inaccuracy or missing details in my definitions.

+

I think episode has a more specific definition in that it begins with an initial state and finishes with a terminal state, where the definition of whether or not a state is initial or terminal is given by the definition of the MDP. Also, I understand an episode as a sequence of $(s, a, r)$ sampled by interacting with the environment following a particular policy, so it should have a non-zero probability of occurring in the exact same order.

+

With trajectory, the meaning is not as clear to me, but I believe a trajectory could represent only part of an episode and maybe the tuples could also be in an arbitrary order; even if getting such sequence by interacting with the environment has zero probability, it'd be ok, because we could say that such trajectory has zero probability of occurring.

+

I think rollout is somewhere in between since I commonly see it used to refer to a sampled sequence of $(s, a, r)$ from interacting with the environment under a given policy, but it might be only a segment of the episode, or even a segment of a continuing task, where it doesn't even make sense to talk about episodes.

+",12640,,18758,,1/8/2022 11:33,1/8/2022 11:37,"What is the difference between an episode, a trajectory and a rollout?",,1,0,,,,CC BY-SA 4.0 +10587,2,,10580,2/15/2019 10:37,,4,,"

This paper might provide some answers https://arxiv.org/pdf/1810.05762.pdf

+ +

For the observations / states they used not only angles, but also velocities, heights and positions (Table 2).

+ +

In 4.2 Learning algorithm you can see that they mention this, which is related to your question about normalization:

+ +
+

Additionally, for stability we whiten the current observations by maintaining online statistics of mean and standard deviation from the history of past observations.

+
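A common way to implement this kind of whitening is a small running-statistics normalizer; the sketch below is my own illustration (not the paper's code), using Welford-style online updates of the mean and variance:

import numpy as np

class RunningNormalizer:
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4          # small prior count to avoid division by zero

    def update(self, obs):
        # Welford-style online update of mean and variance for each observation.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.var += (delta * (obs - self.mean) - self.var) / self.count

    def normalize(self, obs):
        return (obs - self.mean) / (np.sqrt(self.var) + 1e-8)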
+",12640,,,,,2/15/2019 10:37,,,,0,,,,CC BY-SA 4.0 +10588,2,,10431,2/15/2019 10:43,,6,,"

You should choose model A. The loss is just a differentiable proxy for accuracy.

+ +

That said, the situation should be examined in more detail. If the higher loss is due to the data term, examine the data points that produce a high loss and check for the presence of overfitting or incorrect labels.

+ +

If the higher loss is due to a regularizer then reducing the regularization factor may further improve the results.

+",21726,,21726,,2/15/2019 18:17,2/15/2019 18:17,,,,4,,,,CC BY-SA 4.0 +10589,2,,10431,2/15/2019 11:07,,1,,"

It depends on your application! Imagine a binary classifier that is always very ""confident"" - it always assigns P=100% to Class A and 0% to Class B, or vice versa (sometimes wrong, never uncertain!). Now imagine a ""humble"" model that is perhaps fractionally less accurate, but whose probabilities are actually meaningful (when it says ""Class A with probability 70%"" it is wrong 30% of the time).

+ +

In your case, both losses are quite small, so we probably prefer the more accurate one.

+",17770,,,,,2/15/2019 11:07,,,,0,,,,CC BY-SA 4.0 +10591,1,,,2/15/2019 13:20,,16,3686,"

Is the optimal policy always stochastic (that is, a map from states to a probability distribution over actions) if the environment is also stochastic?

+ +

Intuitively, if the environment is deterministic (that is, if the agent is in a state $s$ and takes action $a$, then the next state $s'$ is always the same, no matter which time step), then the optimal policy should also be deterministic (that is, it should be a map from states to actions, and not to a probability distribution over actions).

+",2444,,2444,,2/15/2019 14:22,2/16/2019 2:37,Is the optimal policy always stochastic if the environment is also stochastic?,,3,1,,,,CC BY-SA 4.0 +10592,2,,10584,2/15/2019 13:25,,2,,"
+
+

With a model, state values alone are sufficient to determine a policy; one simply looks ahead one step and chooses whichever action leads to the best combination of reward and next state, as we did in the chapter on DP.

+
+

As soon as I think about continuous state and action spaces with stochastic environment dynamics, computing the $\text{argmax}$ seems to become very complicated and impractical.

+
+

For stochastic dynamics the calculations would be more complex, but will often be quite tractable. It depends on the size of the distribution, and the ease of calculating probabilities to make the correct weighted sums. Instead of $|\mathcal{A}(s)|$ calls to $Q(s,*)$ you will need to make roughly $|\mathcal{S'}(s)| \times |\mathcal{A}(s)|$ calls to $V(s')$, where $\mathcal{S'}(s) \subseteq \mathcal{S}$ is the set of all possible states that might result from the starting state. In the worst case this is $|\mathcal{S}| \times |\mathcal{A}|$.
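As a rough sketch of that weighted one-step lookahead (assuming a tabular model where p[s][a] is a list of (probability, next_state, reward) tuples, which is a made-up interface just for illustration):

def greedy_action_from_V(V, p, s, actions, gamma=0.99):
    # One-step lookahead: pi(s) = argmax_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
    best_a, best_value = None, float('-inf')
    for a in actions:
        value = sum(prob * (reward + gamma * V[s_next])
                    for prob, s_next, reward in p[s][a])
        if value > best_value:
            best_a, best_value = a, value
    return best_a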

+

Despite the additional work here, you have been more efficient earlier. By only evaluating $V(s)$, you have removed the action dimension, which otherwise splits up your estimates. Your value function estimates will, all else being equal, converge faster because of this. So this is sometimes a compromise you might be willing to make.

+

For continuous states, this might still be practical, if both action choices and possible transitions are still discrete.

+

Once either action space is continuous, or transitions are continuous over some probability density function, then finding the maximising action via the model becomes impractical.

+",1847,,-1,,6/17/2020 9:57,2/15/2019 13:31,,,,0,,,,CC BY-SA 4.0 +10593,2,,10591,2/15/2019 13:47,,11,,"
+

Is the optimal policy always stochastic (that is, a map from states to a probability distribution over actions) if the environment is also stochastic?

+
+ +

No.

+ +

An optimal policy is generally deterministic unless:

+ +
    +
  • Important state information is missing (a POMDP). For example, in a map where the agent is not allowed to know its exact location or remember previous states, and the state it is given is not enough to disambiguate between locations. If the goal is to get to a specific end location, the optimal policy may include some random moves in order to avoid becoming stuck. Note that the environment in this case could be deterministic (from the perspective of someone who can see the whole state), but still lead to requiring a stochastic policy to solve it.

  • +
  • There is some kind of minimax game theory scenario, where a deterministic policy can be punished by the environment or another agent. Think scissors/paper/stone or prisoner's dilemma.

  • +
+ +
+

Intuitively, if the environment is deterministic (that is, if the agent is in a state 𝑠 and takes action 𝑎, then the next state 𝑠′ is always the same, no matter which time step), then the optimal policy should also be deterministic (that is, it should be a map from states to actions, and not to a probability distribution over actions).

+
+ +

That seems reasonable, but you can take that intuition further with any method based on a value function:

+ +

If you have found an optimal value function, then acting greedily with respect to it is the optimal policy.

+ +

The above statement is just a natural language re-statement of the Bellman optimality equation:

+ +

$$v^*(s) = \text{max}_a \sum_{r,s'}p(r,s'|s,a)(r+\gamma v^*(s'))$$

+ +

i.e. the optimal values are obtained when always choosing the action that maximises reward plus discounted value of next step. The $\text{max}_a$ operation is deterministic (if necessary you can break ties for max value deterministically with e.g. an ordered list of actions).
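For instance, with tabular action values this greedy, deterministically tie-broken choice is a one-liner (np.argmax already breaks ties by the lowest index, i.e. an ordered list of actions); Q here is an assumed array of shape [n_states, n_actions]:

import numpy as np

def greedy_policy(Q):
    # One deterministic action per state; ties go to the lowest-index action.
    return np.argmax(Q, axis=1)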

+ +

Therefore, any environment that can be modelled by an MDP and solved by a value-based method (e.g. value iteration, Q-learning) has an optimal policy which is deterministic.

+ +

It is possible in such an environment that the optimal solution may not be stochastic at all (i.e. if you add any randomness to the deterministic optimal policy, the policy will become strictly worse). However, when there are ties for maximum value for one or more actions in one or more states then there are multiple equivalent optimal and deterministic policies. You may construct a stochastic policy that mixes these in any combination, and it will also be optimal.

+",1847,,1847,,2/15/2019 15:13,2/15/2019 15:13,,,,1,,,,CC BY-SA 4.0 +10594,2,,10591,2/15/2019 13:54,,6,,"

I would say no.

+ +

For example, consider the multi-armed bandit problem. So, you have $n$ arms, each of which has a probability $p_i$ (with $i$ between 1 and $n$) of giving you a reward (1 point, for example). This is a simple stochastic environment: it is a one-state environment, but it is still an environment.

+ +

But obviously the optimal policy is to choose the arm with the highest $p_i$. So this is not a stochastic policy.

+ +

Obviously, if you are in an environment where you play against another agent (a game theory setting), your optimal policy will certainly be stochastic (think of a poker game, for example).

+",8912,,2444,,2/15/2019 16:04,2/15/2019 16:04,,,,7,,,,CC BY-SA 4.0 +10595,2,,10536,2/15/2019 14:18,,2,,"

Machine learning based hedge funds are currently the worst performing slice of the industry. Large quant funds also claim to use ML, but their performance is also very poor and worsening over time. Mostly what they say they are doing is not really what they are actually doing. I know of cases where funds claimed to be doing AI but were actually just doing simple technical analysis. Successful applications are more in the area of data collation of sentiment using NLP or feature extraction using dimensionality reduction, but these are not actually part of the trading strategies themselves. The only ML-influenced trading I know of is HFT trading of the order book, which is a well-defined problem divorced from the actual price series itself. Apart from that, 99% of ML use in the hedge fund industry is solely as snake oil for their marketing departments. Out of 10,000 or so hedge funds worldwide, only 0.25% generate alpha according to academic studies. That's 25 out of 10,000 ... and many of those will be cheating like SAC.

+ +

The fundamental problem is that ML requires data that is high-dimensional, highly structured, low-noise, and has stationary dynamics and statistical moments. Unfortunately, financial price series are low-dimensional, unstructured and noise-dominated (negative SNR), and exhibit multi-level non-stationarity of the underlying processes and statistical moments in a manner highly related to multifractals. It is hard to imagine a time series less suitable for machine learning.

+",17764,,,,,2/15/2019 14:18,,,,2,,,,CC BY-SA 4.0 +10596,2,,10591,2/15/2019 15:58,,0,,"

I'm thinking of a probability landscape, in which you find yourself as an actor, with various unknown peaks and troughs. A good deterministic approach is always likely to lead you to the nearest local optimum, but not necessarily to the global optimum. To find the global optimum, something like an MCMC algorithm would allow you to stochastically accept a temporarily worse outcome in order to escape from a local optimum and find the global optimum. My intuition is that in a stochastic environment this would also be true.

+",22355,,,,,2/15/2019 15:58,,,,0,,,,CC BY-SA 4.0 +10599,5,,,2/15/2019 17:13,,0,,,2444,,2444,,12/14/2019 23:19,12/14/2019 23:19,,,,0,,,,CC BY-SA 4.0 +10600,4,,,2/15/2019 17:13,,0,,"For questions related to the concept of Partially Observable Markov Decision Process (POMDP), which is a generalization of the Markov Decision Process (MDP) to the cases where information about the states is incomplete (or partially observable).",2444,,2444,,12/14/2019 23:19,12/14/2019 23:19,,,,0,,,,CC BY-SA 4.0 +10603,1,10631,,2/15/2019 19:30,,9,367,"

This is not meant to be negative or a joke but rather looking for a productive solution on AI development, engineering and its impact on human life:

+ +

Lately with my Google searches, the AI model keeps auto filling the ending of my searches with:

+ +

“...in Vietnamese”

+ +

And

+ +

“...in a Vietnamese home”

+ +

The issue is I have never searched for that but because of my last name the model is creating this context.

+ +

The other issue is that I’m a halfy and my dad is actually third generation; I grew up mainstream American and don’t even speak Vietnamese. I’m not even sure what a Vietnamese home means.

+ +

My buddy is in a similar situation (he’s South Asian) and noticed the exact same thing, more so with YouTube recommended videos.

+ +

We already have enough issues in the US with racism, projections of who others expect us to be based on any number of things, stereotyping and putting people in boxes to limit them - I truly believe AI is adding to the problem, not helping.

+ +

How can we fix this? Moreover, how can we use AI to bring out people’s true selves and talents, and to empower and free them to create their lives how they like?

+ +

There is huge potential here to harness AI in ways that can bring us more freedom, joy and beauty, so people can be the whole of themselves and who they really are. Then meet people’s needs, wishes, dreams and hopes. Give them shoulders to stand on to create their reality, not live someone else's projection of them.

+",22368,,19686,,7/26/2023 19:12,7/26/2023 19:12,"How is it that AI can become biased, and what are the proposals to mitigate this?",,3,8,,,,CC BY-SA 4.0 +10604,2,,10431,2/15/2019 19:40,,4,,"

You should note that both your results are consistent with a ""true"" probability of 87% accuracy, and your measurement of a difference between these models is not statistically significant. With an 87% accuracy applied at random, there is an approximately 14% chance of getting the two extremes of accuracy you have observed by chance, if samples are chosen randomly from the target population and the models are different enough to make errors effectively at random. This last assertion is usually not true though, so you can relax a little - that is, unless you took different random slices for cross-validation in each case.
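If you want to sanity-check that kind of figure yourself, one possible reading is that both observed accuracies are independent draws from a Binomial(100, 0.87) distribution; a small scipy sketch of that reading follows (my own illustration, and the exact number depends on how you interpret the two extremes):

from scipy.stats import binom

n, p = 100, 0.87
p_low = binom.cdf(85, n, p)   # P(observed accuracy <= 85%) for a true 87% model
p_high = binom.sf(88, n, p)   # P(observed accuracy >= 89%) for a true 87% model
print(p_low, p_high, p_low * p_high)  # joint chance of seeing both extremes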

+ +

100 test cases is not really enough to discern small differences between models. I would suggest using k-fold cross-validation in order to reduce errors in your accuracy and loss estimates.

+ +

Also, it is critical to check that the cross-validation split was identical in both cases here. If you have used auto-splitting with a standard tool and not set the appropriate RNG seed, then you may have got a different set each time, and your results are just showing you variance due to the validation split which could completely swamp any differences between the models.

+ +

However, assuming the exact same dataset was used each time, and it was representative sample of your target population, then on average you should expect the one with the best metric to have the highest chance of being the best model.

+ +

What you should really do is decide which metric to base the choice on in advance of the experiment. The metric should match some business goal for the model.

+ +

Now you are trying to choose after the fact, you should go back to the reason you created the model in the first place and see if you can identify the correct metric. It might not be either accuracy or loss.

+",1847,,,,,2/15/2019 19:40,,,,5,,,,CC BY-SA 4.0 +10605,2,,10565,2/15/2019 19:47,,1,,"
+

My understanding of the main idea behind A2C / A3C is that we run small segments of an episode to estimate the return using a trainable value function to compensate for the unseen final steps of the episode.

+
+ +

This seems fairly accurate. The important thing to note is that the trainable value function is trained to predict values (specifically, advantage values of state-action pairs in the case of A2C / A3C, where the first A stands for ""advantage""). These value estimates can intuitively be understood as estimates of long-term (discounted) rewards; they're not just short-term rewards.

+ +

So yes, initially when the agent only observes a reward at the end of a long trajectory, only state-action pairs close to the end will receive credit for that reward. For example, when using $n$-step returns, approximately only the last $n$ state-action pairs receive credit. However, in the next episode, that longer-term reward will already become ""visible"" in the form of an advantage value prediction when you're still $n$ steps away from the end, and then that update can again get propagated back $n$ steps further into the history of state-action pairs.
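For a rough picture of the $n$-step targets involved, here is a small, hypothetical sketch (not the A3C code) of how the critic's bootstrap value stands in for the unseen tail of the episode:

def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    # rewards: r_t ... r_{t+n-1} from a rollout of length n
    # bootstrap_value: V(s_{t+n}) predicted by the critic, standing in for the rest
    targets, G = [], bootstrap_value
    for r in reversed(rewards):
        G = r + gamma * G
        targets.append(G)
    return list(reversed(targets))  # one target per step of the rollout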

+ +

My explanation above is very informal... there are all kinds of nuances that I skipped over. Use of function approximation is likely to speed up the propagation of reward observations through the space of state-action pairs even more, and of course in reality things won't be as ""clean"" as getting the propagation to get $n$ steps further in the next episode in comparison to the previous episode, since selected actions and random state transitions can be different... but hopefully it gets the idea across.

+",1641,,,,,2/15/2019 19:47,,,,0,,,,CC BY-SA 4.0 +10606,2,,10586,2/15/2019 20:11,,5,,"

I don't really think there are fixed, different definitions for all those terms that everyone agrees upon. In most contexts, they're going to be quite interchangeable, and if anyone is really using them in a context where they are supposed to have crucially important, different meanings, they should probably precisely define them right there.

+
+
+

I think episode has a more specific definition in that it begins with an initial state and finishes with a terminal state, where the definition of whether or not a state is initial or terminal is given by the definition of the MDP. Also, I understand an episode as a sequence of $(s,a,r)$ sampled by interacting with the environment following a particular policy, so it should have a non-zero probability of occurring in the exact same order.

+
+

Agreed with this.

+
+

With trajectory, the meaning is not as clear to me, but I believe a trajectory could represent only part of an episode and maybe the tuples could also be in an arbitrary order; even if getting such sequence by interacting with the environment has zero probability, it'd be ok, because we could say that such trajectory has zero probability of occurring.

+
+

I can't really think of cases where it's sensible to talk about trajectories with tuples shuffled into an arbitrary order. I'd still think of trajectories as having to be in the "correct" order in which they were experienced. But I do agree that trajectories can be little samples (for instance, little sequences of experience that we store in an experience replay buffer). So, every full episode would be a (long) trajectory, but not every trajectory is a full episode (a trajectory can just be a small part of an episode).

+
+

I think rollout is somewhere in between since I commonly see it used to refer to a sampled sequence of $(s, a,r)$ from interacting with the environment under a given policy, but it might be only a segment of the episode, or even a segment of a continuing task, where it doesn't even make sense to talk about episodes.

+
+

I'd say that... often a rollout should have a "terminal" state as ending, but maybe not a true "initial" state of an episode as the start. We might be in the middle of an episode, and then say that we "roll out", which to me implies that we keep going until the end of an episode. I don't think this term is as common as the other two in Reinforcement Learning, but more common in search/planning literature (in particular, Monte Carlo Tree Search).

+

That said, when I'm working with MCTS I often like to put a limit on my rollouts where I cut them off if no terminal state was reached yet... so that isn't exactly a crisp definition either.

+

Due to how commonly-used this term is specifically in MCTS, and other Monte-Carlo-based algorithms, I also associate a greater degree of randomness with the term "rollout". When I hear "episode" or "trajectory", I can envision a highly sophisticated, "intelligent" policy being used to select actions, but when I hear "rollout" I am inclined to think of a greater degree of randomness being incorporated in the action selection (maybe uniformly random, or maybe with some cheap-to-compute, simple policy for biasing away from uniformity). Again, that's really just an association I have in my mind with the term and not a crisp definition.

+",1641,,18758,,1/8/2022 11:37,1/8/2022 11:37,,,,2,,,,CC BY-SA 4.0 +10607,1,,,2/15/2019 22:19,,0,160,"

I'd like to use machine learning to guess a mathematical pattern: the input are certain polynomials in four variables $q_1,q_2,q_3,q_4$, the output can be zero or one.

+ +

Allowed polynomials are such that (i) all their non-zero coefficients are equal to one, (ii) they do not contain monomials of the form $q_1^j$ for $j \geq 0$, and (iii) if an allowed polynomial contains a monomial $m=q_1^a q_2^b q_3^c q_4^d$ for some non-negative integers $a,b,c,d$, then it also contains $m'=q_1^{a-1} q_2^b q_3^c q_4^d$, provided this does not violate (ii) and $a \geq 1$; similarly for $b \to b-1$, $c \to c-1$, and $d \to d-1$.

+ +

Here's an example batch, given by pairs {input, output}: $\{q_2,1\},\{q_3,1\},\{q_4,1\}$

+ +

Here's a second batch: $\{q_2+q_1 q_2,0\},\{q_2+q_2^2,0\},\{q_2+q_3,1\},\{q_3+q_1 q_3,0\},\{q_3+q_3^2,0\},\{q_2+q_4,1\},\{q_3+q_4,1\},\{q_4+q_1 q_4,0\},\{q_4+q_4^2,1\}$

+ +

I can construct larger and larger batches using Mathematica, and I'd like to know how to practically proceed from here to instructing an AI to guess a simple function of the $q$'s that reproduces the behavior, namely one that can guess the correct output for previously unknown admissible polynomials.

+ +

What are the typical batch size and computational power required for such a program to succeed?

+ +

My idea is to use a function $\phi$ from the space of allowed polynomials $\mathcal P$ to the set $\mathbb Z_2=\{0,1\}$, of the form $\phi:\mathcal P \to \mathbb Z_2$, $p=\sum_{i \in I} m_i \mapsto \phi(p):= \sum_{i \in I' \subset I} m_i|_1 \mod 2$, where $m_i|_1$ means the $i$-th monomial inside $p$ evaluated at $q_1=q_2=q_3=q_4=1$, and come up with the form of $I'$ as function of $I$.

+ +

Notice there's no linear structure on $\mathcal P$.

+ +

Remark: of course instead of polynomials one could use punctured solid partitions.

+",22371,,22371,,2/18/2019 16:01,2/18/2019 16:01,Using AI to guess a mathematical pattern of certain polynomials in four variables: practical challenge,,0,4,,,,CC BY-SA 4.0 +10609,2,,10572,2/15/2019 23:24,,1,,"

Affluence could encompass several parameters: Income; Wealth (property ownership); Life expectancy; Access to services such as education and health; Access to clean natural resources; Low levels of criminality.

+ +

Property prices in each locality might be easy to obtain from real estate agent sources. Ratings for schools or medical facilities in each area might be published.

+ +

Generally, where public statistics are collected on a locality, they will be related in one way or another to affluence. A useful strategy might be to collect as many of these diverse data sets as possible, and to learn a composite affluence score from the data. It is very likely that all of these parameters will be correlated to a greater or lesser degree, and so you could accurately learn about affluence from a small number of these parameters.

+",22355,,,,,2/15/2019 23:24,,,,0,,,,CC BY-SA 4.0 +10611,1,10614,,2/16/2019 0:21,,3,562,"

I am in the process of writing my own basic machine learning library in Python as an exercise to gain a good conceptual understanding. I have successfully implemented backpropagation for activation functions such as $\tanh$ and the sigmoid function. However, these are normalised in their outputs. A function like ReLU is unbounded so its outputs can blow up really fast. In my understanding, a classification layer, usually using the SoftMax function, is added at the end to squash the outputs between 0 and 1.

+ +

How does backpropagation work with this? Do I just treat the SoftMax function as another activation function and compute its gradient? If so, what is that gradient and how would I implement it? If not, how does the training process work? If possible, a pseudocode answer is preferred.

+",22373,,,,,2/16/2019 15:06,How does backpropagation with unbounded activation functions such as ReLU work?,,1,0,,,,CC BY-SA 4.0 +10614,2,,10611,2/16/2019 10:41,,3,,"

Backprop through ReLU is easier than backprop through sigmoid activations. For positive activations, you just pass through the input gradients as they were. For negative activations you just set the gradients to 0.

+ +

Regarding softmax, the easiest approach is to consider it part of the negative log-likelihood loss. In other words, I am suggesting that you directly derive the gradients of that loss with respect to the softmax input. The result is very elegant and extremely easy to implement. Try to derive that yourself!
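For reference, here is a small NumPy sketch of both gradients (my own illustration); the second function is the gradient of the combined softmax + negative log-likelihood loss with respect to the logits:

import numpy as np

def relu_backward(grad_output, x):
    # Pass gradients through where the input was positive, zero them elsewhere.
    return grad_output * (x > 0)

def softmax_nll_backward(logits, target_index):
    # Gradient of -log softmax(logits)[target] w.r.t. the logits:
    # it is simply the softmax probabilities minus the one-hot target.
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    grad = probs.copy()
    grad[target_index] -= 1.0
    return grad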

+",21726,,21726,,2/16/2019 15:06,2/16/2019 15:06,,,,6,,,,CC BY-SA 4.0 +10615,1,10616,,2/16/2019 14:04,,4,4347,"

This is an excerpt taken from Sutton and Barto (pg. 3):

+
+

Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment. This is in contrast with many approaches that address subproblems without addressing how they might fit into a larger picture. For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful. Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision-making, or the question of where the predictive models necessary for planning would come from. Although these approaches have yielded many useful results, their focus on isolated subproblems is a significant limitation.

+
+

I have an idea of supervised learning (SL), but what exactly does the author mean by planning? And how is the RL approach different from planning and SL?

+

(Illustration with an example would be nice).

+",,user9947,2444,,11/21/2020 13:13,11/21/2020 13:15,"What is ""planning"" in the context of reinforcement learning, and how is it different from RL and SL?",,2,0,,,,CC BY-SA 4.0 +10616,2,,10615,2/16/2019 15:40,,12,,"

The concept of "planning" is not just related to RL. In general (as the name suggests), planning consists in creating a "plan" which you will use to reach a "goal". The goal depends on the context or problem. For example, in robotics, you can use a "planning algorithm" (e.g. Dijkstra's algorithm) in order to find the path between two points on a map (given e.g. the map as a graph).

+

In RL, planning usually refers to the use of a model of the environment in order to find a policy that hopefully will help the agent to behave optimally (that is, obtain the highest amount of return or "future cumulative discounted reward"). In RL, the problem (or environment) is usually represented as a Markov Decision Process (MDP). The "model" of the environment (or MDP) refers to the transition probability distribution (and reward function) associated with the MDP. If the transition model (and reward function) is known, you can use an algorithm that exploits it to (directly or indirectly) find a policy. This is the usual meaning of planning in RL. A common planning algorithm in RL is e.g. value iteration (which is a dynamic programming algorithm).
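To make this concrete, here is a tiny, hypothetical sketch of value iteration for a tabular MDP whose model is known; P[s][a] is assumed to be a list of (probability, next_state, reward) tuples (a made-up interface just for illustration):

def value_iteration(P, n_states, n_actions, gamma=0.99, theta=1e-6):
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            q_values = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                        for a in range(n_actions)]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # The resulting plan (policy) is just the greedy action w.r.t. V under the model.
    policy = [max(range(n_actions),
                  key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
              for s in range(n_states)]
    return V, policy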

+
+

Other researchers have developed theories of planning with general goals, but without considering planning's role in real-time decision-making, or the question of where the predictive models necessary for planning would come from.

+
+

Planning is often performed "offline", that is, you "plan" before executing. While you're executing the "plan", you often do not change it. However, often this is not desirable, given that you might need to change the plan because the environment might also have changed. Furthermore, the authors also point out that planning algorithms often have a few limitations: in the case of RL, a "model" of the environment is required to plan.

+
+

For example, we have mentioned that much of machine learning research is concerned with supervised learning without explicitly specifying how such an ability would finally be useful.

+
+

I think the authors simply want to say that supervised learning is usually used to solve specific problems. The solutions to supervised problems often are not directly applicable to other problems, so this makes them limited.

+
+

Another key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment.

+
+

In RL, there is the explicit notion of a "goal": there is an agent that interacts with an environment in order to achieve its goal. The goal is often to maximize the "return" (or "future cumulative discounted reward", or, simply, the reward in the long run).

+
+

How is RL different from planning and supervised learning?

+
+

RL and planning (in RL) are quite related. In RL, the problem is similar to the one in planning (in RL). However, in RL, the transition model and reward function of the MDP (which represents the environment) are usually unknown. Therefore, the only way of finding or estimating an optimal policy that will allow the agent to (near-optimally) behave in this environment is to interact with the environment and gather some info regarding its "dynamics".

+

RL and supervised learning (SL) are quite different. In SL, there isn't usually the explicit concept of "agent" or "environment" (and their interaction), even though it might be possible to describe supervised learning in that way (see this question). In supervised learning, during the training or learning phase, a set of inputs and the associated expected outputs is often provided. Then the "objective" is to find a map between inputs and outputs, which generalizes to inputs (and corresponding outputs) that have not been observed during the learning phase. In RL, there isn't such a set of inputs and associated expected outputs. In RL, there is just a scalar signal emitted by the environment, at each time step, which roughly indicates how well the agent is currently performing. However, the goal of the agent is not just to obtain rewards, but to behave optimally (in the long run).

+

In short, in RL, there is the explicit notion of agent, environment and goal, and the reward is the only signal which tells the agent how well it is performing, but the reward does not tell the agent which actions it should take at each time step. In supervised learning, the objective is to find a function that maps inputs to the corresponding outputs, and this function is learned by providing explicit examples of such mappings during the training phase.

+

There are some RL algorithms (like the temporal-difference ones), which could roughly be thought of as self-supervised learning algorithms, where the agent learns from itself (or from the experience it has gained by interacting with the environment). However, even in these cases, the actions that the agent needs to take are not explicitly taught.

+",2444,,2444,,11/21/2020 13:15,11/21/2020 13:15,,,,0,,,,CC BY-SA 4.0 +10617,1,10619,,2/16/2019 16:09,,1,439,"

I am trying to understand how RNNs are used for sequence modelling.

+ +

On a tutorial here, it mentions that if you want to translate say a sentence from English to French you can use an encoder-decoder set-up as they described.

+ +

However, what if you want to do sequence-to-sequence modelling where your inputs and outputs are from the same domain, and you just want to predict the next output of a sequence?

+ +

For example, suppose I want to use sequence modelling to learn the sine function. Say I have 20 y-coordinates from $y = \sin(x)$ at 20 evenly spaced x-coordinates, and I want to predict the next 10 or so y-coordinates. Would I use an encoder-decoder setup here?

+",19895,,2444,,2/21/2019 11:17,2/21/2019 11:17,Do I need an encoder-decoder architecture to predict the next item of a sequence?,,2,0,,,,CC BY-SA 4.0 +10618,2,,10615,2/16/2019 16:11,,0,,"

The automated planning is:

+ +
+

Automated planning and scheduling, sometimes denoted as simply AI Planning,1 is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.

+ +

In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages.

+
+ +

In other words, the planning is some strategies or actions to reach from the start state to the goal state. As you found in the above, one of the solutions for planning could be RL (depends on the problem). Hence, MDP is a specific case of planning and it is more general.

+ +

For the difference of RL and supervised learning you can see this post:

+ +
+

The main difference is to do with how ""correct"" or optimal results are learned:

+ +
    +
  • In Supervised Learning, the learning model is presented with an input and desired output. It learns by example.

  • +
  • In Reinforcement Learning, the learning agent is presented with an environment and must guess correct output. Whilst it receives feedback on how good its guess was, it is never told the correct output (and in addition the feedback may be delayed). It learns by exploration, or trial and error.

  • +
+
+",4446,,4446,,2/16/2019 17:02,2/16/2019 17:02,,,,2,,,,CC BY-SA 4.0 +10619,2,,10617,2/16/2019 16:51,,2,,"

You don't need an encoder-decoder here. When using seq2seq learning for text (for example, for translation), you need an encoder-decoder to encode the words into numeric vectors and decode the vectors back into words. Therefore, for your numerical case, you don't need an encoder or decoder to train the RNN.
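As a minimal, hypothetical PyTorch sketch (not part of the original answer; the layer sizes and windowing are arbitrary), a plain recurrent model can map a window of past y-values directly to the next one:

import math
import torch
import torch.nn as nn

class NextValueRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: [batch, seq_len, 1]
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # predict the next value from the last hidden state

# Toy usage: 20 past sine values -> the next value
xs = torch.linspace(0, 4 * math.pi, 21)
window = torch.sin(xs[:20]).view(1, 20, 1)
target = torch.sin(xs[20:21]).view(1, 1)
model = NextValueRNN()
loss = nn.functional.mse_loss(model(window), target)
loss.backward()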

+",4446,,2444,,2/21/2019 11:16,2/21/2019 11:16,,,,2,,,,CC BY-SA 4.0 +10620,1,23647,,2/16/2019 18:59,,1,1790,"

In reinforcement learning (RL), there are model-based and model-free algorithms. In short, model-based algorithms use a transition model $p(s' \mid s, a)$ and the reward function $r(s, a)$, even though they do not necessarily compute (or estimate) them. On the other hand, model-free algorithms do not use such a transition model or reward function, but they directly estimate a value function or policy by interacting with the environment, which allows the agent to infer the dynamics of the environment.

+

Given that model-based RL algorithms do not necessarily estimate or compute the transition model or reward function, in the case these are unknown, how can they be computed or estimated (so that they can be used by the model-based algorithms)? In general, what are examples of algorithms that can be used to estimate the transition model and reward function of the environment (represented as either an MDP, POMDP, etc.)?

+",2444,,2444,,1/24/2022 9:01,1/24/2022 11:29,How can we estimate the transition model and reward function?,,2,1,,,,CC BY-SA 4.0 +10621,1,,,2/16/2019 19:15,,1,62,"

Introduction:

+ +

The notion that various social complex systems (e.g. society, family, business company, state, etc) could be regarded as ones exhibiting consistent traits of behaviour of their own - suggesting that they are entities unto themselves, some sort of organisms on their own or even intelligent entities on their own - is not new. I have personally stumbled upon papers of scholars who straightforwardly speak of such systems as if they were already proven to be singular entities.

+ +

That kind of assumption has entered the vernacular, as well, long time ago - e.g. ""the state wants to..."" , ""society responds to conflict by..."", ""the family dynamics seeks balance through..."", etc.

+ +

Therefore, we could even assume at one point, that such social systems are not only organisms of their own, but even some sort of artificial intelligence entities (as long as we could see them as an artificial product of human activity).

+ +

We are generally used to seeing ourselves as conscious entities and we are also good at exploring entities of less complexity than ourselves. But when it comes to entities which consists of us as mere components, we are not ready to mentally process that idea - it sounds as either too abstract or too sci-fi (think of Stanisław Lem's work).

+ +

Question:

+ +
    +
  • While the average Joe could easily say ""The state wants to..."" or ""Society responds to..."", etc, how exactly do we prove (or at least gather some sort of supporting evidence) that a complex social system really exhibits a behaviour of its own?

  • +
  • Under what conditions could we regard it as some sort of spontaneously born artificial intelligence?

  • +
  • If that were true, how could we predict if that AI would procreate and bring about other social system forms which are also entities unto themselves? How could we possibly become aware if that has already happened?

  • +
+",22390,,22390,,2/18/2019 12:56,2/18/2019 12:56,Complex systems constituting an entity unto itself,,0,1,,,,CC BY-SA 4.0 +10622,1,,,2/16/2019 19:21,,1,465,"

I am reading about the actor-critic architecture. I am confused about how the actor determines the action using the value (or future reward) from the critic network.

+ +

Below you have the most popular picture of actor-critic network.

+ +

+ +

It looks like the input of the actor network is only the ""state"" variable ($s_t$); it has nothing to do with the critic network.

+ +

However, from the equation below

+ +

+ +

the actor seems to be related to the critic network.

+ +

I have a few questions

+ +
    +
  1. Does the actor network have two inputs, the state variable and the future reward (the output from the critic network), or only the state variable?

  2. +
  3. If the actor network does take the future reward as input, how does it use it? Only during the training stage, or also during the action-making stage?

  4. +
  5. Is there a ""policy iteration"" procedure that happens during the decision-making stage, i.e. for every state $s_t$, will the policy network make several attempts with the critic network and output the best policy?

  6. +
+",22393,,2444,,2/17/2019 2:11,2/17/2019 2:11,How the actor use the output from the critic to make action in actor-critic network?,,0,4,,,,CC BY-SA 4.0 +10623,1,10624,,2/16/2019 20:02,,95,84445,"

What is self-supervised learning in machine learning? How is it different from supervised learning?

+",2444,,2444,,11/20/2020 2:46,11/20/2020 2:46,What is self-supervised learning in machine learning?,,3,0,,,,CC BY-SA 4.0 +10624,2,,10623,2/16/2019 20:02,,95,,"

Introduction

+

The term self-supervised learning (SSL) has been used (sometimes differently) in different contexts and fields, such as representation learning [1], neural networks, robotics [2], natural language processing, and reinforcement learning. In all cases, the basic idea is to automatically generate some kind of supervisory signal to solve some task (typically, to learn representations of the data or to automatically label a dataset).

+

I will describe what SSL means more specifically in three contexts: representation learning, neural networks and robotics.

+

Representation learning

+

The term self-supervised learning has been widely used to refer to techniques that do not use human-annotated datasets to learn (visual) representations of the data (i.e. representation learning).

+

Example

+

In [1], two patches are randomly selected and cropped from an unlabelled image and the goal is to predict the relative position of the two patches. Of course, we have the relative position of the two patches once you have chosen them (i.e. we can keep track of their centers), so, in this case, this is the automatically generated supervisory signal. The idea is that, to solve this task (known as a pretext or auxiliary task in the literature [3, 4, 5, 6]), the neural network needs to learn features in the images. These learned representations can then be used to solve the so-called downstream tasks, i.e. the tasks you are interested in (e.g. object detection or semantic segmentation).

+

So, you first learn representations of the data (by SSL pre-training), then you can transfer these learned representations to solve a task that you actually want to solve, and you can do this by fine-tuning the neural network that contains the learned representations on a labeled (but smaller dataset), i.e. you can use SSL for transfer learning.

+

This example is similar to the example given in this other answer.

+

Neural networks

+

Some neural networks, for example, autoencoders (AE) [7] are sometimes called self-supervised learning tools. In fact, you can train AEs without images that have been manually labeled by a human. More concretely, consider a de-noising AE, whose goal is to reconstruct the original image when given a noisy version of it. During training, you actually have the original image, given that you have a dataset of uncorrupted images and you just corrupt these images with some noise, so you can calculate some kind of distance between the original image and the noisy one, where the original image is the supervisory signal. In this sense, AEs are self-supervised learning tools, but it's more common to say that AEs are unsupervised learning tools, so SSL has also been used to refer to unsupervised learning techniques.

+

Robotics

+

In [2], the training data is automatically but approximately labeled by finding and exploiting the relations or correlations between inputs coming from different sensor modalities (and this technique is called SSL by the authors). So, as opposed to representation learning or auto-encoders, in this case, an actual labeled dataset is produced automatically.

+

Example

+

Consider a robot that is equipped with a proximity sensor (which is a short-range sensor capable of detecting objects in front of the robot at short distances) and a camera (which is long-range sensor, but which does not provide a direct way of detecting objects). You can also assume that this robot is capable of performing odometry. An example of such a robot is Mighty Thymio.

+

Consider now the task of detecting objects in front of the robot at longer ranges than the range the proximity sensor allows. In general, we could train a CNN to achieve that. However, to train such CNN, in supervised learning, we would first need a labelled dataset, which contains labelled images (or videos), where the labels could e.g. be "object in the image" or "no object in the image". In supervised learning, this dataset would need to be manually labelled by a human, which clearly would require a lot of work.

+

To overcome this issue, we can use a self-supervised learning approach. In this example, the basic idea is to associate the output of the proximity sensors at a time step $t' > t$ with the output of the camera at time step $t$ (a smaller time step than $t'$).

+

More specifically, suppose that the robot is initially at coordinates $(x, y)$ (on the plane), at time step $t$. At this point, we still do not have enough info to label the output of the camera (at the same time step $t$). Suppose now that, at time $t'$, the robot is at position $(x', y')$. At time step $t'$, the output of the proximity sensor will e.g. be "object in front of the robot" or "no object in front of the robot". Without loss of generality, suppose that the output of the proximity sensor at $t' > t$ is "no object in front of the robot", then the label associated with the output of the camera (an image frame) at time $t$ will be "no object in front of the robot".

+",2444,,2444,,8/1/2020 13:58,8/1/2020 13:58,,,,2,,,,CC BY-SA 4.0 +10625,1,10637,,2/16/2019 21:15,,4,545,"

In this tutorial from Jeremy Howard: What is torch.nn really? he has an example towards the end where he creates a CNN for MNIST. In nn.Conv2d, he makes the in_channels and out_channels: (1,16), (16,16), (16,10).

+

I get that the last one has to be 10 because there are 10 classes and we want 'probabilities' of each class. But why go up to 16 first? How do you choose this value? And why not just go from 1 to 10, 10 to 10, and 10 to 10? Does this have to do with the kernel_size and stride?

+

All of the images are 28x28, so I can't see any correlation between these values and 16 either.

+
class Mnist_CNN(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
+        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
+        self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
+
+    def forward(self, xb):
+        xb = xb.view(-1, 1, 28, 28)
+        xb = F.relu(self.conv1(xb))
+        xb = F.relu(self.conv2(xb))
+        xb = F.relu(self.conv3(xb))
+        xb = F.avg_pool2d(xb, 4)
+        return xb.view(-1, xb.size(1))
+
+",12983,,2444,,12/30/2021 15:12,12/30/2021 15:12,Why is the number of output channels 16 in the hidden layer of this CNN?,,1,0,,,,CC BY-SA 4.0 +10628,2,,9141,2/17/2019 5:31,,35,,"

For newbies, NO.

+ +

Sentence generation requires sampling from a language model, which gives the probability distribution of the next word given previous contexts. But BERT can't do this due to its bidirectional nature.

+ +
+ +

For advanced researchers, YES.

+ +

You can start with a sentence of all [MASK] tokens, and generate words one by one in arbitrary order (instead of the common left-to-right chain decomposition). Though the text generation quality is hard to control.
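As an illustration only (this is not the code from the report), here is a minimal sketch of that idea, assuming the Hugging Face transformers API; positions are filled greedily in a random order, whereas the paper uses more careful sampling schemes:

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

seq_len = 10
ids = [tokenizer.cls_token_id] + [tokenizer.mask_token_id] * seq_len + [tokenizer.sep_token_id]
ids = torch.tensor([ids])

# fill the [MASK] positions one by one, in a random order
for pos in torch.randperm(seq_len) + 1:        # +1 skips the [CLS] token
    with torch.no_grad():
        logits = model(ids).logits             # shape (1, length, vocab_size)
    ids[0, pos] = logits[0, pos].argmax()      # greedy choice; sampling usually gives better text

print(tokenizer.decode(ids[0], skip_special_tokens=True))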

+ +

Here's the technical report BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model, its errata and the source code.

+ +
+ +

In summary:

+ +
    +
  • If you would like to do some research in the area of decoding with +BERT, there is a huge space to explore
  • +
  • If you would like to generate +high quality texts, personally I recommend you to check GPT-2.
  • +
+",22399,,22399,,10/13/2019 9:55,10/13/2019 9:55,,,,1,,,,CC BY-SA 4.0 +10630,2,,10133,2/17/2019 9:16,,3,,"

These embeddings are nothing more than token embeddings.

+ +

You just randomly initialize them, then use gradient descent to train them, just like what you do with token embeddings.

+",22399,,,,,2/17/2019 9:16,,,,1,,,,CC BY-SA 4.0 +10631,2,,10603,2/17/2019 11:28,,13,,"
+

Lately with my Google searches, the AI model keeps auto filling the ending of my searches with:

+

“...in Vietnamese”

+
+

I can see how this would be annoying.

+

I don't think Google's auto-complete algorithm and training data is publicly available. Also it changes frequently as they work to improve the service. As such, it is hard to tell what exactly is leading it to come up with this less-than-useful suggestion.

+

Your suspicion that it has something to do with Google's service detecting your heritage seems plausible.

+

The whole thing is based around statistical inference. At no point does any machine "know" what Vietnamese - or in fact any of the words in your query - actually means. This is a weakness of pretty much all core NLP work in AI, and is called the grounding problem. It is why, for instance, samples of computer-generated text produce such surreal and comic material. The rules of grammar are followed, but semantics and longer-term coherence are a mess.

+

Commercial chatbot systems work around this with a lot of bespoke coding around some subject area, such as booking tickets, shopping etc. These smaller domains are possible for human developers to "police", connecting them back to reality, and avoiding the open-ended nature of the whole of human language. Search engine text autocomplete however, cannot realistically use this approach.

+

Your best bets are probably:

+
    +
• Wait it out. The service will improve. Whatever language-use statistics are at work here are likely to change over time. Your own normal use of the system without using the suggestions will be part of that data stream of corrections.

    +
  • +
  • Send a complaint to Google. Someone, somewhere in Google will care about these results, and view them as errors to be fixed.

    +
  • +
+

Neither of these approaches guarantee results in any time frame sadly.

+
+

We already have enough issues in the US with racism, projections of who others expect us to be based on any number of things, stereotyping and putting people in boxes to limit them - I truly believe AI is adding to the problem, not helping.

+
+

You are not alone in having these worries. The statistics-driven nature of machine learning algorithms and use of "big data" to train them means that machines are exposing bias and prejudice that are long buried in our language. These biases are picked up by machinery then used by companies that don't necessarily want to reflect those attitudes.

+

A similar example occurs in natural language processing models with word embeddings. A very interesting feature of LSTM neural networks that learn statistical language models is that you can look at word embeddings, mathematical representations of words, and do "word math":

+

$$W(king) - W(man) + W(woman) \approx W(queen)$$

+

$$W(he) - W(male) + W(female) \approx W(she)$$

+

This is very cool, and implies that the learned embeddings really are capturing semantics up to some depth. However, the same model can produce results like this:

+

$$W(doctor) - W(male) + W(female) \approx W(nurse)$$

+

This doesn't reflect modern sensibilities of gender equality. There is obviously a deep set reason for this, as it has appeared from non-prejudiced statistical analysis of billions of words of text from all sorts of sources. Regardless of this though, engineers responsible for these systems would prefer that their models did not have these flaws.

+
+

How can we fix this? Moreover, how can we use AI to bring out people's true self and talents, and empower and free them to create their life how they like?

+
+

Primarily by recognising that statistical ML and AI doesn't inherently have prejudice or any agenda at all. It is reflecting back ugliness already in the world. The root problem is to fix people (beyond scope of this answer, if I had solid ideas about this I would not be working in software engineering, but in something more people-focussed).

+

However, we can remove some of the unwanted bias from AI systems. Broadly the steps toward this go:

+
    +
  • Recognise that a particular AI system has captured and is using unwanted gender, racial, religious etc bias.

    +
  • +
  • Reach a consensus about how an unbiased model should behave. It must still be useful for purpose.

    +
  • +
  • Add the desired model behaviour into the training and assessment routines of the AI.

    +
  • +
+

For instance in your case, there are possibly some users of Google's system who would prefer to read articles in Vietnamese, or have English translated into Vietnamese, and are finding it awkward that the default assumption is that everything should be presented in English. These users don't necessarily need to use the search text for this, but presumably are for some reason. A reasonable approach is to figure out how their needs could be met without spamming "in Vietnamese" on the end of every autocomplete suggestion, and perhaps in general move suggestions to localise searches by cultural differences out of autocomplete into a different part of the system.

+

For the case of gender bias in NLP systems, Andrew Ng's Coursera course on RNNs shows how this can be achieved using the embeddings themselves. Essentially it can be done by identifying a bias direction from a set of words (e.g. "he/she", "male/female"), and removing deviations in that direction for most other words, preserving it only for words where it is inherently OK to reflect the differences (such as "king" and "queen" for gender bias).
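As a toy illustration of that neutralisation step (the 4-dimensional vectors below are hypothetical, not real trained embeddings):

import numpy as np

W = {
    'he':     np.array([ 0.9, 0.1, 0.3, 0.0]),
    'she':    np.array([-0.9, 0.1, 0.3, 0.0]),
    'doctor': np.array([ 0.4, 0.7, 0.1, 0.2]),
}

# 1. estimate the bias direction from a definitional pair
g = W['he'] - W['she']
g = g / np.linalg.norm(g)

# 2. neutralise a word that should not carry gender information:
#    remove its component along the bias direction
W['doctor'] = W['doctor'] - np.dot(W['doctor'], g) * g

print(np.dot(W['doctor'], g))   # approximately 0: no remaining projection on the bias axis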

+

Each case of unwanted bias, though, needs to be discovered by people and overseen as a political and social issue, not primarily a technical one.

+",1847,,-1,,6/17/2020 9:57,2/17/2019 15:14,,,,5,,,,CC BY-SA 4.0 +10632,5,,,2/17/2019 12:00,,0,,"

Supervised learning is a machine learning technique where a function which maps inputs to outputs is learned using a labelled training dataset. A good learned function should be able to generalise to unseen (during the training phase) data.

+",2444,,2444,,2/18/2019 23:34,2/18/2019 23:34,,,,0,,,,CC BY-SA 4.0 +10633,4,,,2/17/2019 12:00,,0,,For questions related to supervised learning.,2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10634,1,,,2/17/2019 12:29,,5,172,"

I have a system (like a bank) in which people (customers) are entered into the system by a Poisson process, so the time between the arrivals of people (two consecutive customers) will be a random variable. The state of the problem is related just to the system (the bank), and the action, made inside the system, can be e.g. offering the customer a promotion or not (based just on the state of the system, not the status of the customers).

+ +

To model the problem through RL, 1) it is possible to discretize the time horizon into very short time intervals (for example, 5 minutes as a stage), such that, in each time interval, just a single customer enters our system. On the other hand, 2) it is possible that the stages are defined as the times when a customer enters our system.

+ +

My questions are:

+ +
    +
1. Is the second approach a semi-MDP (SMDP)? If I want to solve it with RL, should I use hierarchical RL?

  2. +
3. In the first approach, if a customer enters in a time interval, it is easy to update the Q values. However, what should we do if we are in state $S$ and take action $A$, but no customer enters our system, so we do not receive any reward for the pair $(S, A)$? There would be no difference if we took action $A_{1}$, $A_{2}$, and so on. This can happen for several consecutive time intervals. I think it is more challenging when we consider eligibility traces.

  4. +
+",10191,,2444,,2/18/2019 18:11,3/20/2019 20:01,Should I model my problem as a semi-MDP?,,1,0,,,,CC BY-SA 4.0 +10637,2,,10625,2/17/2019 17:07,,5,,"

I understand your question as: ""How did the author select the number of neurons in their hidden layer?""

+ +

The number of neurons in the hidden layer is how you control the complexity of the function you are trying to generate to map the inputs to an output. The more neurons in the hidden layer, the more complex the function, and thus the more intricate the decision boundaries you can capture. However, a more complex function is harder to optimize and can overfit, which may lead to worse performance on unseen data. The goal here is to find the right tradeoff to maximize your performance. You can tune the number of hidden neurons as a hyper-parameter using cross-validation.

+ +

There isn't any formula to determine the number of neurons you will need; however, you can get an intuition based on the number of inputs and outputs you will have. Generally, you want more hidden neurons than input and output neurons. Since most people writing neural networks are programmers, we are used to working with sizes that are powers of two ($2^n$). Thus, 16 is chosen over 10, and 32 would be chosen over 28.
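A hypothetical sketch of that tuning loop (make_model and train_and_validate are placeholders for your own model constructor and cross-validation routine):

candidate_widths = [8, 16, 32]
scores = {w: train_and_validate(make_model(hidden_channels=w)) for w in candidate_widths}
best_width = max(scores, key=scores.get)   # keep the width with the best validation score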

+",5925,,,,,2/17/2019 17:07,,,,0,,,,CC BY-SA 4.0 +10641,1,,,2/17/2019 19:37,,4,953,"

I'm working on my own implementation of the NEAT algorithm based on the original 2002 paper called ""Efficient Reinforcement Learning through Evolving Neural Network Topologies"" (by Kenneth O. Stanley and Risto Miikkulainen). The way the algorithm is designed, it may generate loops in the connections of the hidden layer, which obviously causes difficulties in calculating the output.

+ +

I have searched and came across two types of approaches. One group, like this example, claims that the value should be calculated like a time series, as usually seen in RNNs, and that the circular nodes should use ""old"" values as their ""current"" output. But this seems wrong, since the training data is not always ordered and the previous value has nothing to do with the current one.

+ +

A second group, like this example, claims that the structure should be pruned with some method to avoid loops and cycles. This approach, apart from being really expensive, also goes against the core idea of the algorithm. Deleting connections like this may cause later structural changes.

+ +

I myself have so far tried setting the unknown forward values to 0, which hides the connection (as whatever weight it has will have no effect on the result), but this has also failed, for two reasons: my networks get big quickly, destroying the ""smallest network required"" idea, and the results are not good.

+ +

What is the correct approach?

+",6522,,2444,,2/17/2019 20:57,8/31/2022 6:57,How do you implement NEAT by taking into account the loops?,,3,0,,,,CC BY-SA 4.0 +10642,2,,10603,2/17/2019 21:50,,2,,"

The key, I think, is teaching the algorithm by providing better data. The only thing an AI can use is the data available to it. Figuring out whatever it can is not bias, as it's based on objective facts.

+ +

If it knows 98% of Nguyens are interested in X, knowing nothing else about you personally, showing you X might be good. If you consistently click on downvote/not interested, etc. buttons on the site, your personal data will override the default, and you won't see X anymore.

+ +

As a user, you could give better reviews for better results, and as a developer, you can provide better ways to collect this feedback: by logging what users click on and search for, and by showing ""not interested/interested/upvote/downvote/like"" etc. buttons.

+ +

Note that I'm using youtube from different, unlinked machines/browsers, and I get different suggestions from all of these, probably because I've trained the AI with different data.

+ +

You can also use services with less intrusive data collection, e.g. duckduckgo, bitchute, etc.

+",22418,,,,,2/17/2019 21:50,,,,1,,,,CC BY-SA 4.0 +10643,1,,,2/17/2019 21:58,,2,139,"

Let's say we have an oracle $S$ that, given any function $F$ and desired output $y$, can find an input $x$ that causes $F$ to output $y$ if it exists, or otherwise returns nil. I.e.:

+ +

$$S(F, y) = x \implies F(x) = y$$
$$S(F, y) = \text{nil} \implies \nexists x \text{ s.t. } F(x) = y$$

+ +

And $S$ takes $1$ millisecond to run (plus the amount of time it takes to read the input and write the output), regardless of $F$ or $y$. $F$ is allowed to include calls to $S$ in itself.

+ +

Clearly with this we can solve any NP-Complete problem in constant time (plus the amount of time it takes to read the input and write the output), and in fact we can go further and efficiently solve any optimization problem:

+ +
def IsMin(Cost, MeetsConstraints, x):
+  def HasSmaller(y):
+    return MeetsConstraints(y) and Cost(y) < Cost(x) and y != x
+  return MeetsConstraints(x) and S(HasSmaller, True) == nil
+
+def FindMin(Cost, MeetsConstraints):
+  def Helper(x):
+    return IsMin(Cost, MeetsConstraints, x)
+  return S(Helper, True)
+
+ +

Which means we can do something like:

+ +
def FindSmallestRecurrentNeuralNetworkThatPerfectlyFitsData(Data):
+  def MeetsConstraints(x):
+    return IsRecurrentNeuralNetwork(x) and Error(x, Data) == 0
+  return FindMin(NumParameters, MeetsConstraints)
+
+ +

And something similar for any other kind of model (random forest, random ensemble of functions, etc.). We can even solve the halting problem with this, which probably means that there is some proof similar to the halting problem proof that shows such an oracle could not exist. Let's assume this exists anyway, as a thought experiment.

+ +

But I'm not sure how to take it from here to something that achieves endless self-improvement. What exactly the ""singularity"" even means is, I suppose, tricky to define formally, but I'm interested in any simple definitions, even if they don't quite capture it.

+ +

As a side note, here is one more function we can implement:

+ +
def IsEquivalent(G, H):
+    def Helper(x):
+      return G(x) != H(x)
+    return S(Helper, True) == nil
+
+",6378,,6378,,7/7/2019 6:12,7/7/2019 6:12,Is a very powerful oracle sufficient to trigger the AI singularity?,,0,8,,,,CC BY-SA 4.0 +10644,1,,,2/17/2019 23:18,,2,473,"

The ""AI Singularity"" or ""Technological Singularity"" is a vague term that roughly seems to refer to the idea of:

+ +
    +
  1. Humans can design algorithms

  2. +
  3. Humans can improve algorithms

  4. +
  5. Eventually algorithms we design might end up being as good as humans at designing and improving algorithms

  6. +
  7. This might lead to these algorithms designing better versions of themselves, eventually becoming far more intelligent than humans. This improvement would continue to grow at an increasing rate until we reach a ""singularity"" where an AI is capable of making technological progress at a rate far faster than we could ever imagine

  8. +
+ +

Also known as an Intelligence Explosion. This rough idea has been heavily debated as to its feasibility, how long it would take (if it does happen), etc.

+ +

However I'm not aware of any formal definitions of the concept of ""singularity"". Are there any? If not, do we have close approximations?

+ +

I have seen AIXI and the Gödel machine, but these both require some ""reward signal"" — it is unclear to me what reward signal one should choose to bring about a singularity, or really how those models are even relevant here. Because even if we had an oracle that can solve any formal problem given to it, it's unclear to me how we could use that to cause a singularity to happen (see this question for more discussion on that note).

+",6378,,1671,,2/18/2019 23:14,7/25/2020 9:06,Can we define the AI singularity mathematically?,,4,8,,,,CC BY-SA 4.0 +10645,2,,10644,2/18/2019 0:19,,1,,"

Here is one idea. I'll start with a more specific ""mathematical singularity"", defined as an algorithm that can do the following in N hours or less (for all $N >= 1$):

+ +
    +
1. State equivalent versions (up to notational differences) of all mathematical theorems/conjectures that humans will read and understand in the N*20 years after 2018 that can be stated formally in Metamath (this is an arbitrary choice, but Metamath is general enough to include quantum logic and extensions of ZFC, so it seems like a decent place to start. Feel free to use Coq, Isabelle, Lean, etc. instead if you prefer), assuming those humans never have access to a ""mathematical singularity""-capable algorithm and their mathematical community continues living and functioning intellectually in a manner similar in capacity to how it did in 2018
  2. +
  3. Of those problems, provide correct proofs (these may not be readable, that's ok) of all of those that will be solved by those humans in N*20 years.
  4. +
+ +

This of course does not fully capture all mathematical progress that humans will make in those years: a big component missing is ""readable proofs"" and concepts that can't be captured in metamath. But it is something that is theoretically formal.

+ +

I know that this doesn't include any ""continual improvement""; what I am referring to here is simply a threshold such that, when an algorithm passes it, I think it is sufficiently powerful to be considered ""intelligent enough"" to have reached close to singularity levels of intelligence. Feel free to adjust the (20 years) constant in your head to match your preferred threshold.

+ +

I'm not going to accept this answer because it is lacking ""continual improvement"", but I brought it up because if we can't figure out how to define it mathematically, perhaps simply having ""sufficient criteria"" in various domains could be a good start.

+ +

Edit: I suppose that the singularity typically involves an assumption of the development of an intelligence that is superior to human society. This implies that it is capable of at least doing the things that our society does, so there is probably a good argument to be made here that ""proof accessibility"" and ""method teachability"" are vital to this problem.

+ +

I mean, if we think of the current state of the field of calculus, it has gone from an arcane topic only understood by a few field experts, to now being readily accessible and teachable to high school students. While that didn't require proving any new major mathematical theorems, one could argue that much of our technological progress didn't come until the advanced mathematical machinery that had been developed (calculus) became accessible to a wide range of people.

+ +

I was going to make an argument about how ""the difference is that computers can learn quicker: they can read through massive proofs very quickly"". But I suppose that depends on the architecture of whatever kind of ""thing"" is achieving the singularity. I.e., here is a (non-exhaustive) list of two possible outcomes:

+ +
    +
  • There is only one ""mind"" that is achieving all of this. In that case, that mind has all the knowledge it needs and it doesn't need to teach anyone to progress further, so this point is sorta irrelevant. However, I can still see an argument for ""teachability"" if we want to utilize this vast amount of knowledge the AI has gained in human society, if possible.
  • +
  • There is a simulated ""society"" of virtual minds that are interacting with each other, that, together, achieve the mathematical singularity. If a single ""mind"" in this ""society"" isn't able to easily use and understand the work done by another mind, then the point of ""teachability"" is very important to prevent individual minds from having to continually recreate the wheel, so to speak.
  • +
+ +

Without our biological limitations these digital minds may have very different ""teaching"" methods, but I think here is the ideal additional requirement for a ""mathematical singularity"":

+ +
    +
1. These proofs must be (eventually, perhaps not until spending quite a bit of time) accessible to a graduate mathematician, via providing pdf textbooks (or other similar teaching materials) that cover the same material that human mathematical textbooks would have covered after N*20 years in a way that is accessible to the typical graduate mathematician.
  2. +
+ +

However we have now lost some formality in this: textbooks usually contain lots of exposition and analogies that are difficult to formally measure and may not even be relevant for the AI. Here is an alternate option that is not as good, but still close:

+ +
    +
  1. The algorithm must present its results in a form that can be used by any other algorithm that also can achieve the ""mathematical singularity"" to ""skip ahead"" to N*20 years, and then immediately continue progress from there.
  2. +
+ +

However this criteria has a trivial exploit: an algorithm might as well just provide a 'save state' and a 'program' to run that save state. Conceivably any algorithm that can achieve the mathematical singularity is at least capable of executing code, so providing a 'save state' and 'program' passes this criteria without making it at all accessible (The caveat here is if it uses some sort of model of computation that requires special hardware such as quantum computing or black hole computing to prevent slowdown, but that's besides the point)

+ +

I think I prefer this alternative:

+ +
    +
1. These proofs must be similar in length to the (formalized versions of the) proofs the human academic community would have made in those 20*N years
  2. +
+ +

""length"" is tricky here: it is possible to prove a very difficult theorem very succinctly by simply referencing a very powerful lemma. But here is one example metric:

+ +

$$length(Proof) = lengthInSymbols(Proof)+\sum_{symbol \in Proof} \frac{length(symbol)}{numberOfTimesUsedInOtherProofs(symbol)}$$

+ +

Where ""Other Proofs"" is the set of all proofs read and understood by humans in those N*20 years, and ""symbols"" refers to things such as ""Green's Theorem"" or ""$\in$"". Hopefully the idea is apparent here: if something is used frequently in many proofs, it is a ""common technique"" that isn't vital to that proof, and thus doesn't contribute as much to the ""length"" of that proof. Finding a potentially more suitable metric here seems like a much more tractable problem then defining the mathematical singularity itself and I suspect this is studied elsewhere more, so I'll leave it at this for now.

+",6378,,6378,,2/19/2019 22:29,2/19/2019 22:29,,,,0,,,,CC BY-SA 4.0 +10646,1,,,2/18/2019 1:02,,3,2087,"

I am generating images that consist of points, where the object's location is where the most overlap of points occurs.

+

+

In this example, the object location is $(25, 51)$.

+

I am trying to train a model that just finds the location, so I don't care about the classification of the object. Additionally, the shape of the overlapping points where the object is located never changes and will always be that shape.

+

What is a good model for this objective?

+

Many of the potential models I've been looking at (CNN, YOLO, and R-CNN) are more concerned with classification than location. Should I search the image for the overlapping dots, create a bounding box around them, and then retrieve the box's coordinates?

+",22422,,2444,,10/13/2021 16:23,10/13/2021 16:23,Which model should I use to find (only) the object location (in terms of coordinates) in an image?,,2,2,,,,CC BY-SA 4.0 +10647,2,,10646,2/18/2019 1:40,,0,,"

Neural networks are not only used for classification but also for regression. It seems that a CNN would be a good solution for this problem, with 2 output neurons, each of them providing a coordinate within the range of your frame.
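A minimal PyTorch sketch of this idea (the layer sizes are illustrative only): a small CNN whose 2-unit head regresses the $(x, y)$ location directly, trained with a mean-squared-error loss on the true coordinates.

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),          # two outputs: the predicted x and y coordinates
)
loss_fn = nn.MSELoss()         # regression loss against the ground-truth (x, y)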

+",5925,,,,,2/18/2019 1:40,,,,0,,,,CC BY-SA 4.0 +10649,1,,,2/18/2019 11:53,,8,356,"

I have some trouble understanding the benefits of Bayesian networks.

+

Am I correct that the key benefit of the network is that one does not need to use the chain rule of probability in order to calculate joint distributions?

+

So, using the chain rule:

+

$$P(A_1, \dots, A_n) = \prod_{i=1}^n P\left(A_i \mid \bigcap_{j=1}^{i-1} A_j\right)$$

+

leads to the same result as the following (assuming the nodes are structured by a Bayesian network)?

+

$$P(A_1, \dots, A_n) = \prod_{i=1}^n P(A_i \mid \text{parents}(A_i))$$

+",22433,,2444,,12/13/2021 9:00,8/13/2023 23:08,What are the main benefits of using Bayesian networks?,,1,2,,,,CC BY-SA 4.0 +10650,1,,,2/18/2019 12:44,,3,58,"

I would like to implement a variant of policy iteration that can choose one or more actions in each state. An example would be to heal and move in the game of Doom.

+ +

Parameterizing the power set of all single actions would be one idea, but I was wondering if somebody achieved good results on a similar problem, perhaps by simply defining some lower bound on the output layer and taking all actions with values larger than that bound (i.e. with actions and activation values {shoot=0.2, heal=0.51, move=0.6, jump=0.4} I would choose heal and move if the bound was 0.5)

+ +

Another idea was to collect these actions iteratively, i.e. choosing an action from a softmax output based on the state $s$ (taking action ""healing"") and then constructing and using some temporary state $s_t$ to evaluate that state to find another action (e.g. ""moving""). This would require some dummy action that is just used to signal the end of that iteration procedure (i.e. choosing action $n+1$ will not add any other action to the set $\{ \text{healing}, \text{moving} \}$, but it will lead to the execution of those two actions and the transition to the next state $s'$).

+",22161,,2444,,2/20/2019 18:16,2/20/2019 18:16,Choosing more than one action in a parameterized policy,,0,0,,,,CC BY-SA 4.0 +10651,1,,,2/18/2019 14:03,,0,145,"

I asked a question a while ago here and since then I've been solving the issues within my code but I have just one question... This is the formula for updating the Q-Matrix in Q-Learning:

+ +

$$Q(s_t, a_t) = Q(s_t, a_t) + \alpha \times \left(R + \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t)\right)$$

+ +

However, I saw a Q-Learning example that uses a different formula, which I'm applying to my own problem and I'm getting good results:

+ +

$$Q(s_t, a_t) = R(s_t, a_t) + \alpha \times \max_{a} Q(s_{t+1}, a)$$

+ +

Is this valid?

+",22088,,1641,,2/18/2019 20:46,2/18/2019 20:46,Is there more than one Q-matrix update formula?,,2,2,,,,CC BY-SA 4.0 +10652,2,,10651,2/18/2019 14:31,,3,,"

No, your second statement does not correctly implement the Q-learning update rule, which the first statement correctly implements.

+",2444,,,,,2/18/2019 14:31,,,,0,,,,CC BY-SA 4.0 +10653,2,,10651,2/18/2019 14:32,,1,,"

Your second code snippet is equivalent to this:

+ +

$$Q_{k+1}(s,a) \leftarrow r + \alpha \text{max}_{a'} Q_k(s', a')$$

+ +

This looks like a simplified Value Iteration update to me, where you have incorrectly switched $\alpha$ (the learning rate) for $\gamma$ (the discount rate).

+ +

The full Value Iteration update based on action values looks like this:

+ +

$$Q_{k+1}(s,a) \leftarrow \sum_{r,s'} p(r,s'|s,a)(r + \gamma \text{max}_{a'} Q_k(s', a'))$$

+ +

This is almost the same as your equation when you have a deterministic environment (so you can directly predict single values $r$ and $s'$ from $s, a$)

+ +

As such, it will sort of work with certain assumptions:

+ +
    +
  • You want a specific discount rate, or don't particularly care about predicting values, just finding a close-to-optimal policy

  • +
  • The environment is deterministic

  • +
+ +

The further away you are from those assumptions, the worse a fit the simpler update method will be for your problem. It is definitely not Q-learning either way.
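As a concrete illustration, in a hypothetical tabular setting (all names below are placeholders, with s2 denoting the successor state), the two updates look like this:

# Q-learning (the first formula in the question), normally written with a discount factor gamma:
Q[s, a] += alpha * (r + gamma * max(Q[s2, a2] for a2 in actions) - Q[s, a])

# the simplified deterministic value-iteration-style update (the second formula),
# where alpha is playing the role that gamma normally plays:
Q[s, a] = r + alpha * max(Q[s2, a2] for a2 in actions)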

+",1847,,,,,2/18/2019 14:32,,,,8,,,,CC BY-SA 4.0 +10654,5,,,2/18/2019 14:43,,0,,"

REINFORCE was introduced in 1988 and 1992 by Ronald J. Williams, respectively, in the papers ""Toward a theory of reinforcement-learning connectionist systems"" and ""Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning"".

+",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10655,4,,,2/18/2019 14:43,,0,,"For questions related to the REINFORCE algorithm (or update rule), which is a policy gradient algorithm, that is, an algorithm which estimates the policy directly (that is, without first estimating any value function).",2444,,2444,,2/18/2019 23:35,2/18/2019 23:35,,,,0,,,,CC BY-SA 4.0 +10656,5,,,2/18/2019 14:54,,0,,"

""Reinforcement Learning: An Introduction"" (by Andrew Barto and Richard S. Sutton) is often considered or cited as the most comprehensive introductory manual to the field of RL, by two of the greatest contributors to the field.

+ +

Two editions have been published so far. The first edition was published in 1998 and the second in 2018. You can find some material related to this book (including some drafts) at the following URL: http://incompleteideas.net/book/.

+",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10657,4,,,2/18/2019 14:54,,0,,"For questions related to the book ""Reinforcement Learning: An Introduction"" (by Andrew Barto and Richard S. Sutton).",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10658,1,11357,,2/18/2019 15:49,,7,317,"

In a neural network, the number of neurons in the hidden layer corresponds to the complexity of the model generated to map the inputs to the output(s). More neurons create a more complex function (and thus the ability to model more nuanced decision boundaries) than a hidden layer with fewer nodes.

+ +

But what of the hidden layers? What do more hidden layers correspond to in terms of the model generated?

+",22424,,,,,5/25/2022 9:02,To what does the number of hidden layers in a neural network correspond?,,3,0,,,,CC BY-SA 4.0 +10659,2,,10658,2/18/2019 15:56,,1,,"

More hidden layers increase the possible combinations among the neurons, building on the solutions from the previous hidden layers. (I will edit this once I am at home and provide you with a good link I found some time ago.)

+ +

Meanwhile maybe this will help you https://stats.stackexchange.com/questions/63152/what-does-the-hidden-layer-in-a-neural-network-compute

+",22439,,,,,2/18/2019 15:56,,,,0,,,,CC BY-SA 4.0 +10660,1,10662,,2/18/2019 16:18,,1,99,"

Hello, I am new to reinforcement learning and robotics. So far, I have an understanding of the concept in a 2D world, where you can make the agent move one step in one direction. However, how do you define the movement actions of a robot arm? I am a bit lost here. Any useful links or keywords would be very appreciated! :)

+",,user22442,,,,2/18/2019 17:12,Robot Arm Deep Q Learning Actions,,1,2,,,,CC BY-SA 4.0 +10661,2,,10634,2/18/2019 16:34,,1,,"
+

To model the problem through RL,

+ +
    +
1. it is possible to discretize the time horizon into very short time intervals (for example, 5 minutes as a stage), such that, in each time interval, just a single customer enters our system.
  2. +
  3. On the other hand, it is possible that stages are defined as the time when a customer enters our system.
  4. +
+
+ +
+ +
+

1) Is the second approach an SMDP and if I want to solve it with RL, I should use Hierarchical RL?

+
+ +

The second approach as you described it does not sound like an SMDP to me... it sounds to me like a regular MDP where the discrete time steps in the MDP do not have any meaningful connection anymore to real-world time in the real-world problem being modelled. This is the approach I would recommend taking if long periods of ""inactivity"" in between customers are irrelevant and/or if you do not expect your actions to have any influence on the duration of those periods of inactivity.

+ +

If you do expect your actions to have meaningful influence on the duration of inactivity, I would not recommend this approach. For example, your actions may make a customer angry and reduce their likelihood of quickly returning. Or your actions may indirectly affect the likelihood of other customers visiting due to interaction between your population of customers, they may or may not recommend your bank to their friends based on your actions.

+ +
+ +
+

2) In the first approach, if a customer enters in a time interval, it is easy to update $Q$-values. However, what should we do, if we are in state $S$ and take action $A$ but, no customer enter our system so we do not receive any reward for the pair of $(S, A)$ there would be no difference if we would take action $A_1$, $A_2$ and so on. This can happen for several consecutive time interval. I think it is more challenging when we consider eligibility trace.

+
+ +

I would actually argue that your first approach can be treated as an SMDP, and that it may be a good idea to do so (personally I prefer the Options framework, it's a bit more general than SMDPs, a bit more flexible, and a bit closer to ""standard"" RL terminology; there are plenty of free versions of that paper available on Google too).

+ +

Whenever you encounter a customer, you could select an option (or ""macro-action"") in the SMDP, which executes the one ""real"" action for your current customer once, and is subsequently forced to automatically select no-op actions (which do nothing) until the next customer arrives; that is the point in time where your previous option ends, and a new option can be selected.

+ +

An RL algorithm that operates on the SMDP / hierarchical level, i.e. one that only learns to optimize the selection of options / macro-actions and completely skips over all the forced no-op actions, would suffice. This is basically the approach described in Concurrent Reinforcement Learning from Customer Interactions.
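A toy sketch of how executing one of these options could look (env.step, NO_OP and gamma are hypothetical placeholders):

def run_option(env, action, gamma):
    # take the one 'real' action for the current customer ...
    reward, state, customer_present = env.step(action)
    total_reward, discount = reward, gamma
    # ... then no-op until the next customer arrives
    while not customer_present:
        reward, state, customer_present = env.step(NO_OP)
        total_reward += discount * reward
        discount *= gamma
    # the SMDP-level learner updates with (total_reward, state, discount)
    return total_reward, state, discount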

+",1641,,,,,2/18/2019 16:34,,,,0,,,,CC BY-SA 4.0 +10662,2,,10660,2/18/2019 17:12,,1,,"

It depends a lot on the hardware of your robot arm. Assuming that your servos have encoder information, if you have access to servos that have limited control like ""rotate left/rotate right"" functionality, you can phrase your action space as [""move left"", ""stop"", ""move right""]. In this way you can implement a discrete action space with 3 actions per servo and have an agent learn to move the servos around the space.

+ +

If your servos are connected to each other in an elbow/shoulder configuration, you can have a 9 discrete action setup essentially making a box of cardinal directions:

+ +

Up+Left----------Up-------Up+Right

+ +

Left--------------Stop---------Right

+ +

Down+Left-----Down-----Down+Right

+ +

If you have 3 or more servos, you can still use the same idea of discrete actions but the number of discrete actions grows by a factor of 3 with each servo as your action space is now the cross product of all of the other servos.

+ +

Alternatively you can use a ""multi-headed"" agent where each head chooses actions for a certain servo but there are pros and cons for both depending on your usecase.

+ +
+ +

If you have more advanced servos like Dynamixels, which have high quality encoders, you'll have access to more advanced control schemes. For instance, Dynamixels allow you to give actions in encoder space, angle space, and even velocity space. For example, you could give the action of ""go to encoder value of 500"" or ""go to 90 degrees"" or ""move .5 radians/second"". All of these approaches are useful for certain tasks. For humans controlling the arm using a joystick, the velocity-based control is the most intuitive and, in my experience, the same is true for RL agents using continuous control.

+ +

If you are using continuous control, you should normalize all of your action spaces within your agent, then ""unnormalize"" them before giving the actions to your servos. For instance, if your servo velocity ranges from -3.5 rad/sec to +3.5 rad/sec, have your agent select actions in the range [-1, 1], then multiply by 3.5 to get the velocity.
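A tiny sketch of that scaling step (agent.act is a hypothetical placeholder for whatever produces your normalized action):

MAX_VEL = 3.5                        # rad/s, the servo's real velocity limit
action = agent.act(state)            # the agent outputs a value in [-1, 1]
servo_velocity = action * MAX_VEL    # scale back to the servo's real range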

+ +
+ +

In either case, one thing that should be noted is that you give your robotic arm enough time to actually perform the action that the agent selected. If not you will see your robot ""jitter"" back and forth quickly as your agent selects actions randomly. This is bad for a few reasons but most importantly because it might break your servos. To overcome this issue, give each action a little more time to be ""actualized"" by your robot. This can either come in the form of adding a delay in your code while your servos do ""the thing"" or by using an ""iterations since last action update"" counter to get a new action from the agent after a certain number of iterations. Not only is this better for your hardware, this also leads to better exploration of your state space as your agent can move through the state space encountering similar states more frequently.

+ +

A last thing to be aware of is to set hard coded limits on the servos so that your agent doesn't kill your servos by banging the arm against a table or something. Finding these safe bounds isn't always easy as multiple jointed configurations can have multiple ways of hitting the bounds and it might require a forward dynamics model to find these limits. +If you have only 2 servos, it should be pretty easy to find though :).

+",4398,,,,,2/18/2019 17:12,,,,3,,,,CC BY-SA 4.0 +10663,1,,,2/18/2019 17:49,,2,194,"

I have read a lot on Actor Critic, and I'm not convinced that there is a qualitative difference between doing direct gradient updates on the network and slightly adjusting a soft-max output in the direction of the advantage function and then doing gradient descent on the error.

+ +

Can anyone explain why updating the gradient directly is necessary?

+",22132,,22296,,2/20/2019 3:53,2/27/2019 9:13,Why is gradient ascent necessary when training Actor Critic agents?,,1,0,,,,CC BY-SA 4.0 +10664,5,,,2/18/2019 18:14,,0,,"

For more info, have a look at the paper ""Between MDPs and semi-MDPs: A framework for Temporal Abstraction in Reinforcement Learning"" (by Richard S.Sutton, Doina Precup and Satinder Singh).

+",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10665,4,,,2/18/2019 18:14,,0,,For questions related to semi-Markov Decision Processes (SMDP).,2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10666,5,,,2/18/2019 18:18,,0,,,2444,,2444,,11/23/2020 13:24,11/23/2020 13:24,,,,0,,,,CC BY-SA 4.0 +10667,4,,,2/18/2019 18:18,,0,,"For questions related to the concept of return in reinforcement learning, which is defined as the future cumulative (discounted) reward or, in simple words, the reward in the long run.",2444,,2444,,11/23/2020 13:24,11/23/2020 13:24,,,,0,,,,CC BY-SA 4.0 +10668,5,,,2/18/2019 18:20,,0,,,2444,,2444,,11/25/2020 10:59,11/25/2020 10:59,,,,0,,,,CC BY-SA 4.0 +10669,4,,,2/18/2019 18:20,,0,,"For questions about OpenAI's gym library, which provides a set of APIs to access different types of environments to train reinforcement learning agents.",2444,,2444,,11/25/2020 10:59,11/25/2020 10:59,,,,0,,,,CC BY-SA 4.0 +10670,5,,,2/18/2019 18:22,,0,,"

Have a look at https://en.wikipedia.org/wiki/Temporal_difference_learning.

+",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10671,4,,,2/18/2019 18:22,,0,,"For questions related to the temporal-difference reinforcement learning (RL) algorithms, which is a class of model-free (that is, they do not use the transition and reward function of the MDP) RL algorithms which learn by bootstrapping from the current estimate of the value function (that is, they use one estimate to update another estimate).",2444,,2444,,7/11/2019 22:41,7/11/2019 22:41,,,,0,,,,CC BY-SA 4.0 +10672,5,,,2/18/2019 18:23,,0,,,-1,,-1,,2/18/2019 18:23,2/18/2019 18:23,,,,0,,,,CC BY-SA 4.0 +10673,4,,,2/18/2019 18:23,,0,,"For questions related to the Monte Carlo methods in reinforcement learning and other AI sub-fields. (""Monte Carlo"" refers to random sampling of the search space.)",2444,,1671,,2/18/2019 23:35,2/18/2019 23:35,,,,0,,,,CC BY-SA 4.0 +10674,1,,,2/18/2019 18:26,,1,27,"

I am toying around with creating a probability of win calculator for proposals that we do. the information on each proposal is housed in our corporate SharePoint (which I am the admin)

+ +

Is there a way to pull directly from SharePoint as the data source rather than have to export to xls then upload each time the data updates?

+",22446,,,,,2/18/2019 18:26,Azure ML studio pull directly from sharepoint,,0,0,,,,CC BY-SA 4.0 +10675,1,11041,,2/18/2019 18:47,,3,165,"

In some newer robotics literature, the term system identification is used in a certain meaning. The idea is not to use a fixed model, but to create the model on the fly. So it is equal to a model-free system identification. Perhaps a short remark for all, who doesn't know what the idea is. System identification means, to create a prediction model, better known as a forward numerical simulation. The model takes the input and calculates the outcome. It's not exactly the same like a physics engine, but both are operating with a model in the loop which is generating the output in realtime.

+ +

But what is policy learning? Somewhere, I've read that policy learning is equal to online system identification. Is that correct? And if yes, then it doesn't make much sense, because reinforcement learning has the goal to learn a policy. A policy is something which controls the robot. But if the aim is to do system identification, than the policy is equal to the prediction model. Perhaps somebody can lower the confusion about the different terms ...

+ +

Example Q-learning is a good example for reinforcement learning. The idea is to construct a q-table and this table controls the robot movements. But, if online-system-identification is equal to policy learning and this is equal to q-learning, then the q-table doesn't contains the servo signals for the robot, but it provides only the prediction of the system. That means, the q-table is equal to a box2d physics engine which can say, what x/y coordinates the robot will have. This kind of interpretation doesn't make much sense. Or does it make sense and the definition of a policy is quite different?

+",,user11571,,user11571,2/18/2019 18:56,3/11/2019 19:23,Is policy learning and online system identification the same?,,2,0,,,,CC BY-SA 4.0 +10676,5,,,2/18/2019 21:36,,0,,,-1,,-1,,2/18/2019 21:36,2/18/2019 21:36,,,,0,,,,CC BY-SA 4.0 +10677,4,,,2/18/2019 21:36,,0,,"For questions related to the book ""Artificial Intelligence: A Modern Approach"" by Peter Norvig and Stuart J. Russell.",2444,,2444,,2/18/2019 23:33,2/18/2019 23:33,,,,0,,,,CC BY-SA 4.0 +10678,5,,,2/18/2019 21:37,,0,,,-1,,-1,,2/18/2019 21:37,2/18/2019 21:37,,,,0,,,,CC BY-SA 4.0 +10679,4,,,2/18/2019 21:37,,0,,"For questions related to deep reinforcement learning (DRL), that is, RL combined with deep learning. More precisely, deep neural networks are used to represent e.g. value functions or policies.",2444,,2444,,2/21/2019 1:49,2/21/2019 1:49,,,,0,,,,CC BY-SA 4.0 +10680,5,,,2/18/2019 21:38,,0,,,-1,,-1,,2/18/2019 21:38,2/18/2019 21:38,,,,0,,,,CC BY-SA 4.0 +10681,4,,,2/18/2019 21:38,,0,,For questions related to policies (as defined in reinforcement learning or other AI sub-fields).,2444,,2444,,2/18/2019 23:34,2/18/2019 23:34,,,,0,,,,CC BY-SA 4.0 +10682,1,,,2/18/2019 22:10,,3,259,"

Fuzzy logic is typically used in control theory and engineering applications, but is it connected fundamentally to classification systems?

+ +

Once I have a trained neural network (multiple inputs, one output), I have a nonlinear function that will turn a set of inputs into a number that will estimate how close my set of given inputs are to the trained set.

+ +

Since my output number characterizes ""closeness"" to the training set as a continuous number, isn't this kind of inherently some sort of fuzzy classifier?

+ +

Is there a deep connection here in the logic, or am I missing something?

+",20685,,,,,10/8/2020 8:08,Is fuzzy logic connected to neural networks?,,2,0,,,,CC BY-SA 4.0 +10683,2,,10658,2/18/2019 22:40,,1,,"

It was proven that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^n$ (see Universal approximation theorem).

+ +

More layers can't improve on something that can already approximate ""everything"". But adding more layers can reduce the number of necessary neurons, and so the computing power needed for the network as well.

+",22418,,,,,2/18/2019 22:40,,,,0,,,,CC BY-SA 4.0 +10684,2,,10646,2/18/2019 22:45,,1,,"
+

What is a good model for this objective?

+
+ +

I will try to give another perspective: solve it without a machine learning model.

+ +

Your problem is to find the point where the most dots overlap. If the image above is the kind of image you use in your case, you can solve it directly by applying some computer vision algorithms.

+ +
    +
1. Try to create a binary image based on the color of the dots. If you are not sure which colors appear in your image, you can first list the unique pixel colors that are not black or white. So, if there are four colors, you need to generate four different binary images. Create a simple condition or a complex one, for example:

    + +
if pixel[i, j] == red:        # keep only the red dots as white pixels
+    pixel[i, j] = white
+else:                        # blank out everything else
+    pixel[i, j] = black
    +
  2. +
3. Get its location by searching for ""the white"" pixels in your image, or use a blob detection method (it'll be a little bit tricky if the actual images have different axis scales). You can save the result as a list of coordinates for each color (see the sketch after this list).

  4. +
5. What happens if you can't see a dot fully because it's overlapping with another dot? Find the pattern. In your image, the dots appear with a certain pattern. If you can find two consecutive dots horizontally and vertically, you can predict the positions of all your dots.
  6. +
  7. Find the most overlapping position from your list.
  8. +
+ +
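A rough OpenCV sketch of steps 1 and 2 (assuming a BGR image and OpenCV 4; the red thresholds are placeholders you would tune for your own plots):

import cv2
import numpy as np

img = cv2.imread('dots.png')                       # the plot as an image
# step 1: binary mask of the red-ish pixels (repeat with other bounds for each color)
mask = cv2.inRange(img, np.array([0, 0, 150]), np.array([80, 80, 255]))
# step 2: one contour (blob) per visible dot, reduced to its center
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centers = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    centers.append((x + w // 2, y + h // 2))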

Pros

+ +
    +
• The result may be more accurate than using a machine learning model
  • +
  • Faster, you don't need to train it first
  • +
+ +

Cons

+ +
    +
• Finding the dots' locations in images with different axis scales will be difficult, but it's still solvable
  • +
• It'll be difficult to predict the pattern if many dots are missing because they are overlapped by other dots
  • +
+",16565,,,,,2/18/2019 22:45,,,,0,,,,CC BY-SA 4.0 +10685,2,,1885,2/19/2019 0:32,,3,,"

I see two main issues with this suggestion.

+ +

One: digital circuits take up a lot less space, and they're easier to design, so you can put together a bigger system this way (not to mention connecting separate chips within a system). This is mainly because in digital circuits your tolerances can be a lot looser.

+ +

The bigger one is: we still don't know how neurons work. Artificial neural networks somewhat resemble natural ones, but they behave differently. There are various ion channels and electric signals, and with these, neurons stimulate each other; if a neuron's threshold is reached, it fires a spike, and when it's reached again soon after, you can see a burst in the signal. As far as I know, researchers don't yet know exactly what function you need to implement to simulate this. The closest ANN is the spiking neural network, but it's not very useful in practice.

+",22418,,,,,2/19/2019 0:32,,,,0,,,,CC BY-SA 4.0 +10686,1,,,2/19/2019 2:50,,2,101,"

I have a neural network that is already trained to predict two continuous outputs from a set of 7 continuous features.

+ +

Is there any way to apply the network to predict one of the input features, given the other 6 features and the two outputs?

+",22452,Abdallah Atef,22296,,2/19/2019 4:54,3/11/2021 1:01,"Is it possible to use a trained neural network to predict a feature, given other features and output?",,1,0,,,,CC BY-SA 4.0 +10687,5,,,2/19/2019 8:10,,0,,,-1,,-1,,2/19/2019 8:10,2/19/2019 8:10,,,,0,,,,CC BY-SA 4.0 +10688,4,,,2/19/2019 8:10,,0,,"For questions about model-based reinforcement learning methods (or algorithms). An example of a model-based algorithm is Dyna-Q, which estimates a model of the environment (i.e. the transition function of the associated Markov decision process).",2444,,2444,,10/2/2020 18:50,10/2/2020 18:50,,,,0,,,,CC BY-SA 4.0 +10689,5,,,2/19/2019 8:11,,0,,,-1,,-1,,2/19/2019 8:11,2/19/2019 8:11,,,,0,,,,CC BY-SA 4.0 +10690,4,,,2/19/2019 8:11,,0,,"For questions about model-free reinforcement learning methods (or algorithms). An example of a model-free algorithm is Q-learning, which does not use the transition function (i.e. the model) of the environment (or Markov decision process).",2444,,2444,,10/2/2020 18:48,10/2/2020 18:48,,,,0,,,,CC BY-SA 4.0 +10691,5,,,2/19/2019 8:11,,0,,,-1,,-1,,2/19/2019 8:11,2/19/2019 8:11,,,,0,,,,CC BY-SA 4.0 +10692,4,,,2/19/2019 8:11,,0,,For questions related to notation (in general).,2444,,2444,,2/19/2019 21:14,2/19/2019 21:14,,,,0,,,,CC BY-SA 4.0 +10693,5,,,2/19/2019 8:12,,0,,,-1,,-1,,2/19/2019 8:12,2/19/2019 8:12,,,,0,,,,CC BY-SA 4.0 +10694,4,,,2/19/2019 8:12,,0,,"For questions related to the reinforcement learning technique called ""eligibility traces"", which combines temporal-difference and Monte Carlo methods.",2444,,2444,,2/19/2019 21:13,2/19/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10695,5,,,2/19/2019 8:13,,0,,,-1,,-1,,2/19/2019 8:13,2/19/2019 8:13,,,,0,,,,CC BY-SA 4.0 +10696,4,,,2/19/2019 8:13,,0,,For questions related to the concept of environment in reinforcement learning and other AI sub-fields.,2444,,2444,,7/13/2019 21:26,7/13/2019 21:26,,,,0,,,,CC BY-SA 4.0 +10697,5,,,2/19/2019 8:14,,0,,,-1,,-1,,2/19/2019 8:14,2/19/2019 8:14,,,,0,,,,CC BY-SA 4.0 +10698,4,,,2/19/2019 8:14,,0,,"For questions related to the unsupervised learning technique called ""clustering"".",2444,,2444,,2/19/2019 21:14,2/19/2019 21:14,,,,0,,,,CC BY-SA 4.0 +10699,5,,,2/19/2019 8:14,,0,,,-1,,-1,,2/19/2019 8:14,2/19/2019 8:14,,,,0,,,,CC BY-SA 4.0 +10700,4,,,2/19/2019 8:14,,0,,"For questions related to the mathematical concept of ""expectation"" or ""expected value"".",2444,,2444,,2/19/2019 21:13,2/19/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10701,5,,,2/19/2019 8:15,,0,,,-1,,-1,,2/19/2019 8:15,2/19/2019 8:15,,,,0,,,,CC BY-SA 4.0 +10702,4,,,2/19/2019 8:15,,0,,"For questions related to the Q function, a.k.a. state-action value function, a.k.a. the ""quality"" function (as defined in reinforcement learning), which is used in algorithms such as Q-learning or SARSA.",2444,,2444,,2/19/2019 21:14,2/19/2019 21:14,,,,0,,,,CC BY-SA 4.0 +10703,5,,,2/19/2019 8:17,,0,,,-1,,-1,,2/19/2019 8:17,2/19/2019 8:17,,,,0,,,,CC BY-SA 4.0 +10704,4,,,2/19/2019 8:17,,0,,"For questions related to the V function, a.k.a. state value function (as defined in reinforcement learning), which is used in algorithms such as value iteration.",2444,,2444,,2/19/2019 21:13,2/19/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10705,5,,,2/19/2019 8:18,,0,,"

See this question What is self-supervision in machine learning? for more info.

+",2444,,2444,,2/19/2019 21:12,2/19/2019 21:12,,,,0,,,,CC BY-SA 4.0 +10706,4,,,2/19/2019 8:18,,0,,"For questions related to self-supervised learning (SSL), which typically refers to techniques that automatically generate the supervisory learning signal. SSL can be used for representation learning, so it can be useful for transfer learning too. Some people consider SSL a sub-field of unsupervised learning given that many (if not all) SSL techniques do not require a human to manually annotate the inputs.",2444,,2444,,11/20/2020 12:59,11/20/2020 12:59,,,,0,,,,CC BY-SA 4.0 +10707,5,,,2/19/2019 8:23,,0,,"

On-policy RL algorithms use their current approximation of the policy they attempt to estimate in order to interact with the environment (to gain experience and further update their approximation). An example of an on-policy algorithm is SARSA.

+",2444,,2444,,2/19/2019 21:14,2/19/2019 21:14,,,,0,,,,CC BY-SA 4.0 +10708,4,,,2/19/2019 8:23,,0,,"For questions related to the ""on-policy"" reinforcement learning algorithms.",2444,,2444,,2/19/2019 21:13,2/19/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10709,5,,,2/19/2019 8:26,,0,,,2444,,2444,,8/11/2019 10:58,8/11/2019 10:58,,,,0,,,,CC BY-SA 4.0 +10710,4,,,2/19/2019 8:26,,0,,"For questions related to off-policy reinforcement learning algorithms, which estimate a policy (the target policy) while using another policy (the behavior policy), during the learning process, which ensures that all states are sufficiently explored. An example of an off-policy algorithm is Q-learning.",2444,,2444,,8/11/2019 10:58,8/11/2019 10:58,,,,0,,,,CC BY-SA 4.0 +10711,5,,,2/19/2019 8:28,,0,,,-1,,-1,,2/19/2019 8:28,2/19/2019 8:28,,,,0,,,,CC BY-SA 4.0 +10712,4,,,2/19/2019 8:28,,0,,"For questions related to Bayesian networks, which are e.g. used to study causality (or causation) in AI.",2444,,2444,,2/19/2019 21:13,2/19/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10713,5,,,2/19/2019 8:29,,0,,,-1,,-1,,2/19/2019 8:29,2/19/2019 8:29,,,,0,,,,CC BY-SA 4.0 +10714,4,,,2/19/2019 8:29,,0,,"For questions related to the Markov property or Markov assumption (that is, the assumption that the ""future is independent of the past, given the present""), which underlies e.g. most reinforcement learning algorithms.",2444,,2444,,10/31/2019 20:14,10/31/2019 20:14,,,,0,,,,CC BY-SA 4.0 +10715,5,,,2/19/2019 8:30,,0,,,-1,,-1,,2/19/2019 8:30,2/19/2019 8:30,,,,0,,,,CC BY-SA 4.0 +10716,4,,,2/19/2019 8:30,,0,,"For questions related to the concept of ""optimal policy"" in reinforcement learning.",2444,,2444,,2/19/2019 21:14,2/19/2019 21:14,,,,0,,,,CC BY-SA 4.0 +10717,5,,,2/19/2019 8:31,,0,,,2444,,2444,,3/26/2021 10:43,3/26/2021 10:43,,,,0,,,,CC BY-SA 4.0 +10718,4,,,2/19/2019 8:31,,0,,"For questions related to the concept of a stochastic policy (as defined in reinforcement learning), which is a function from a state to a probability distribution over actions (from that state).",2444,,2444,,3/26/2021 10:43,3/26/2021 10:43,,,,0,,,,CC BY-SA 4.0 +10719,1,10720,,2/19/2019 9:14,,3,1278,"

I am learning Deep RL following this tutorial: https://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8

+ +

I understand everything but one detail:

+ +

This image shows the difference between a classic Q-learning table and a DNN. It states that a Q-table takes a state-action pair as input and outputs the corresponding Q value, whereas a Deep Q-Network takes the state as feature input and outputs the Q value for each action that can be taken in that state.

+ +

But shouldn't the state AND the action together be the input to the network, with the network outputting just a single Q value?

+ +

+",22460,,2444,,2/5/2021 14:28,2/5/2021 14:28,Why does Deep Q Network outputs multiple Q values?,,1,0,,,,CC BY-SA 4.0 +10720,2,,10719,2/19/2019 9:27,,4,,"

I think this was just a ""clever"" design choice. You can actually design a neural network (NN), to represent your Q function, which receives as input the state and an action and outputs the corresponding Q value. However, to obtain $\max_aQ(s', a)$ (which is a term of the update rule of the Q-learning algorithm) you would need a ""forward pass"" of this network for each possible action from $s'$. By having a NN that outputs the Q value for each possible action from a given $s'$, you will just need one forward pass of the NN to obtain $\max_aQ(s', a)$, that is, you pick the highest Q value among the outputs of your NN.
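To make this concrete, here is a minimal sketch of the two possible designs (my own illustration in PyTorch; the layer sizes and dimensions are arbitrary and not taken from the DQN paper):

import torch
import torch.nn as nn

state_dim, n_actions = 4, 2   # made-up dimensions, for illustration only

# Design A: Q(s, a) -> one scalar; needs a separate forward pass per action.
q_sa = nn.Sequential(nn.Linear(state_dim + 1, 32), nn.ReLU(), nn.Linear(32, 1))

# Design B (the DQN choice): Q(s, .) -> one value per action in a single pass.
q_s = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))

s_next = torch.randn(1, state_dim)

# Design A: loop over all actions to compute max_a Q(s', a).
max_a = max(q_sa(torch.cat([s_next, torch.tensor([[float(a)]])], dim=1)).item()
            for a in range(n_actions))

# Design B: a single forward pass, then take the max over the outputs.
max_b = q_s(s_next).max().item()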

+ +

In the paper A Brief Survey of Deep Reinforcement Learning (by Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage and Anil Anthony Bharath), at page 7, section ""Value functions"" (and subsection ""Function Approximation and the DQN""), it's written

+ +
+

It was designed such that the final fully connected layer outputs $Q^\pi(s,\cdot)$ for all action values in a discrete set of actions — in this case, the various directions of the joystick and the fire button. This not only enables the best action, $\text{argmax}_a Q^\pi(s, a)$, to be chosen after a single forward pass of the network, but also allows the network to more easily encode action-independent knowledge in the lower, convolutional layers.

+
+",2444,,2444,,2/20/2019 11:32,2/20/2019 11:32,,,,0,,,,CC BY-SA 4.0 +10722,2,,10682,2/19/2019 10:04,,2,,"

They are unrelated.

+ +

There is a possibility of interpreting fuzzy values as probabilities, but strictly speaking they are different: fuzzy values are vague, while probabilities reflect likelihood (see Wikipedia entry for Fuzzy Logic)

+ +

While rolling a particular number on a six-sided die has a probability of $1 \over 6$, a roll can actually only ever have one outcome.

+ +

A fuzzy value ""quite old"" can simultaneously be a member of a number of fuzzy sets with different degrees of membership, e.g. ""young"" with 0.001, ""adolescent"" with 0.1, ""old"" with 0.4, ""ancient"" with 0.7. Unless it is ""defuzzified"", it is simultaneously contained in all the sets.

+ +

Defuzzification is a way of interpreting the result of a series of fuzzy operations and finding the set that best matches, but it is not a clearly defined process such as picking a random number according to a set of probabilities (or rolling the die).

+ +

I am not sure that the sum of all fuzzy set membership values of any given fuzzy value has to add up to 1.0; whereas this condition has to hold for probabilities.

+ +

[EDIT: to clarify - probabilities are not a set; I refer here to all possible outcomes of a random event which have a certain probability of being realised. The sum of all possible event probabilities has to be 1.0]

+ +

One alternative interpretation for your application could be the confidence that the input set is identical to the training set. Which could be a fuzzy value if you wanted to do something else with it, eg by combining it with other fuzzy variables.

+",2193,,2193,,2/19/2019 11:04,2/19/2019 11:04,,,,4,,,,CC BY-SA 4.0 +10723,1,,,2/19/2019 10:14,,1,95,"

Some (stock market) traders have the ability to produce a high percentage of winning trades (80%+, positive return) over years. I had the chance to look into real money trades of two such traders and I also got trading instructions from them for research.

+ +

Now the interesting part is that if you strictly follow their rules, then you usually end up with more losers than winners in the long run. But after a while you get some kind of subconscious ""feeling"" for winners, which also shows in the results. I assume that this ""feeling"" is a hidden function which can be modeled.

+ +

My question is: Is there work about how to model such ""gut feeling"" and subconscious knowledge by means of machine learning (especially with little training data sets)? Is there relevant literature about this topic?

+ +

Regards,

+",22465,,,,,2/21/2019 11:11,Modelling gut-feeling/subconscious knowledge of stock market traders,,2,0,,,,CC BY-SA 4.0 +10727,2,,10723,2/19/2019 11:42,,1,,"

You could perhaps model gut feeling or subconscious bias as a prior in a Bayesian context, and then try to learn from the data how much to modify/moderate the bias in each individual case.

+ +

I think there is another issue with the problem you outlined. We might expect it to be normal to see more losers than winners in the long run: trading is a zero-sum game where the house always takes its cut. The trick to being a successful trading algorithm seems to be to make the losers small (cut them early) and the winners big (let them run).

+",22355,,,,,2/19/2019 11:42,,,,3,,,,CC BY-SA 4.0 +10731,2,,10686,2/19/2019 16:42,,1,,"

From your question, it appears that you would like to use other features in your data to predict one of the features. I am not sure I understood your question clearly, but, in any case, you would be using the feature you want to predict as the output of the network.

+ +

Also, if you want to use the output of the network and other features to predict the new output, I think you may be looking for a Recurrent Neural Network (RNN) based approach, in which the past output is taken into account for future predictions. My personal experience with RNNs is that they are really good at learning such dependencies, as the RNN cells consider the present input as well as the past output in modelling such tasks. If your problem is a sequence-to-sequence prediction task, I would definitely suggest trying RNNs (especially LSTMs, which are ideal for learning long-term temporal dependencies), but ultimately there is No Free Lunch. So try different approaches and see which one works best for you. Good luck!
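For instance, here is a minimal Keras sketch of an LSTM regressor (the sequence length, feature count and layer sizes are just placeholders, not values from your data):

from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: windows of 10 past time steps, each with 8 features,
# predicting a single value for the next step.
model = keras.Sequential([
    layers.LSTM(32, input_shape=(10, 8)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X, y)  # X: (n_samples, 10, 8), y: (n_samples, 1)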

+",21460,,,,,2/19/2019 16:42,,,,0,,,,CC BY-SA 4.0 +10732,1,10733,,2/19/2019 19:46,,1,489,"

I am working through the famous RL textbook by Sutton & Barto. Currently, I am on the value iteration chapter. To gain better understanding, I coded up a small example, inspired by this article.

+ +

The problem is the following

+ +
+

There is a rat (R) in a grid. Every square is accessible. The goal is to find the cheese (C). However, there is also a trap (T). The game is over whenever the rat either finds the cheese or is trapped (these are my terminal states).

+
+ +

+ +

The rat can move up, down, left, and right (always by one square).

+ +

I modeled the reward as follows:

+ +
-1 for every step
+5 for finding the cheese
-5 for getting trapped
+
+ +

I used value iteration for this and it worked out quite nicely.

+ +

However, now I would like to add another cheese to the equation. In order to win the game, the rat has to collect both cheese pieces.

+ +

+ +

I am unsure how to model this scenario. I don't think it will work when I use both cheese squares and the trap square as terminal states, with rewards for both cheese squares.

+ +

How can I model this scenario? Should I somehow combine the two cheese states into one?

+",10448,,2444,,2/20/2019 18:27,2/20/2019 18:33,How do I apply the value iteration algorithm when there are two goal states?,,1,0,,,,CC BY-SA 4.0 +10733,2,,10732,2/19/2019 20:30,,3,,"

What you could do is to trigger environment termination when the rat either:

+ +
    +
  1. steps into the trap
  2. +
  3. picks both cheese pieces
  4. +
+ +

The problem with such a setup is that, when the rat picks up a single piece, it would move one step to the side and then come back to the same cheese spot, so it would keep exploiting the same spot indefinitely.

+ +

The solution to that would be to simply remove the cheese piece once the rat picks it up, so that it can't exploit it indefinitely.

+ +

Sadly, another problem would arise, which is partial observability: the Markov property wouldn't be fulfilled, because the best current action wouldn't depend solely on the current state; it would also matter whether a cheese piece had been picked up before or not.

+ +

The solution to that would be to make the environment fully observable. You could accomplish that by expanding the amount of information in your current state. Before, only your position on the grid was important, but now you would also add state features that tell you whether the cheese piece at a specific position has been picked up or not. You would basically add a flag for each cheese piece that has a value of 1 if the piece was picked up, or a value of 0 if it wasn't. That way you could remove a cheese piece when the rat picks it up, and you would still have full information.

+ +

I believe this setup would work.
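A minimal sketch of such an augmented state space (my own example in Python; the grid size and the cheese/trap positions are made up):

from itertools import product

ROWS, COLS = 4, 4
CHEESES = [(0, 3), (3, 3)]   # assumed cheese positions
TRAP = (1, 2)                # assumed trap position

# Augmented state: (row, col, picked) where picked[i] == 1 means
# cheese piece i has already been collected.
states = [(r, c, picked)
          for r, c in product(range(ROWS), range(COLS))
          for picked in product((0, 1), repeat=len(CHEESES))]

def is_terminal(state):
    row, col, picked = state
    return (row, col) == TRAP or all(picked)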

+",20339,,2444,,2/20/2019 18:33,2/20/2019 18:33,,,,2,,,,CC BY-SA 4.0 +10734,1,,,2/20/2019 0:29,,5,129,"

Has anyone worked out ways of relating trained neural networks using symbolic AI?

+

For example, suppose I train a network on pictures of dogs, and I train a network on pictures of shirts. You could imagine that the simplest way (without going through the process from scratch) of identifying "dog AND shirt" would be to perform an AND operation on the last outputs of the individual dog & shirt neural nets. So, "dog AND shirt" would amount to AND'ing the outputs of the two nets.

+

But this operation AND could be replaced with a more complicated operation. And, in principle, I could "train" a neural network to act as one of these operations. For instance, maybe I could figure out the net that describes some changeable output "X" being "on the shirt." This would be sort of like a "functional" in mathematics (in which we are considering the behavior of a network whose input could be any network). If I can figure out this "functional", then I would be able to use it symbolically and determine queries like "dog on the shirt"? - "cat on the shirt"?

+

It seems to me that it makes a lot of sense to turn specific neural networks into more "abstract" objects - and that there would be a lot of power in doing so.

+",20685,,2444,,1/2/2022 12:30,1/2/2022 12:30,Can (trained) neural networks be combined with symbolic AI to perform operations like AND?,,1,0,,,,CC BY-SA 4.0 +10735,2,,4328,2/20/2019 8:50,,1,,"

In data mining, we can use machine learning (ML) (with the help of unsupervised learning algorithms) to recognize patterns.

+ +

Pattern recognition is the process of recognizing patterns, such as images or speech. We can recognize patterns using ML: for example, once a neural net has been trained using ML algorithms, it can be used for pattern recognition. Other methods, even ones not related to ML and data mining, can be used for pattern recognition, such as a fully handcrafted pattern recognition system.

+ +

In general,

+ +
    +
  1. data mining is mostly associated with statisticians,
  2. +
3. ML is mostly associated with computer scientists, whereas
  4. +
  5. pattern recognition is mostly associated with engineers.
  6. +
+",7681,,2444,,2/20/2019 9:49,2/20/2019 9:49,,,,0,,,,CC BY-SA 4.0 +10738,1,,,2/20/2019 10:06,,3,97,"

The National Health Service (NHS) wrote down several principles in a document Code of conduct for data-driven health and care technology (updated 18 July 2019). I am concerned with principle 7.

+
+

Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision

+

Demonstrate the learning methodology of the algorithm being built. +Aim to show in a clear and transparent way how outcomes are validated.

+
+

But how exactly can it be shown, in a clear and transparent way, how outcomes are validated?

+",11893,,2444,,7/15/2020 12:20,7/30/2023 23:03,How can it be shown clearly and transparently that the outcomes of data-driven health and care technology are validated?,,1,0,,,,CC BY-SA 4.0 +10739,1,,,2/20/2019 10:42,,1,60,"

I am training a video prediction model.

+ +

According to the loss plots, the model converges very fast, while the final loss is not small enough and the generated output is not good.

+ +

Actually, I have tested lr=1e-04 and lr=1e-05; the loss plots drop down a little more slowly, but it's still not ideal. But I think lr=1e-05 should be small enough, shouldn't it?

+ +

How should I fix my model or the hyperparameters?

+",22495,,,,,2/20/2019 10:42,Why is the learning rate is already very small (1e-05) while the model convergences too fast?,,0,0,,,,CC BY-SA 4.0 +10740,1,,,2/20/2019 11:09,,0,31,"

I am trying to understand if there is any difference in the the interpretation of accuracy and loss on synthetic data vs real data.

+",22497,,22498,,2/20/2019 14:58,3/22/2019 16:01,Loss/accuracy on Synthetic data,,1,1,,5/10/2022 4:23,,CC BY-SA 4.0 +10741,2,,10740,2/20/2019 13:10,,1,,"

No, there is no difference.

+ +

Of course, you are likely not able to extrapolate results obtained from synthetic data and expect identical or similar results in real life, unless you have very compelling reasons to do so.

+ +

Without a more specific question, I'm afraid a more specific answer is not possible.

+",22498,,,,,2/20/2019 13:10,,,,0,,,,CC BY-SA 4.0 +10743,2,,10623,2/20/2019 14:30,,6,,"

Self-supervised visual recognition is often applied to representation learning. Here we first learn features on unlabeled data (representation learning), and then learn the real model on features extracted from the labeled data. This especially makes sense when we have a lot of unlabeled data and few labeled data.

+ +

The features can be learned by solving so called pretext tasks. Examples of pretext tasks are to predict rotation of a jittered image, to recognize jittered instances of a same image, or to predict spatial relationship of image patches.
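As a small illustration of the rotation pretext task (a sketch of my own, not code from the cited paper), the labels are generated for free from the unlabeled images themselves:

import numpy as np

def rotation_pretext_batch(images):
    # images: array of shape (n, H, W[, C]); assumes square images so that
    # all rotations have the same shape. Rotate each image by a random
    # multiple of 90 degrees and use the rotation index (0-3) as the label.
    ks = np.random.randint(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])
    return rotated, ks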

+ +

A nice overview and interesting results can be found in this recent paper.

+",21726,,,,,2/20/2019 14:30,,,,2,,,,CC BY-SA 4.0 +10744,5,,,2/20/2019 15:59,,0,,"

In reinforcement learning, a deterministic policy is a function from a state to a single action.

+",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10745,4,,,2/20/2019 15:59,,0,,"For questions related to the concept of a ""deterministic policy"" (as defined in reinforcement learning).",2444,,2444,,2/21/2019 1:49,2/21/2019 1:49,,,,0,,,,CC BY-SA 4.0 +10746,5,,,2/20/2019 16:02,,0,,,-1,,-1,,2/20/2019 16:02,2/20/2019 16:02,,,,0,,,,CC BY-SA 4.0 +10747,4,,,2/20/2019 16:02,,0,,"For questions related to the family of reinforcement learning algorithms denoted by ""actor-critic"", where there is an actor (a policy) and a critic (a value function).",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10748,5,,,2/20/2019 16:03,,0,,,-1,,-1,,2/20/2019 16:03,2/20/2019 16:03,,,,0,,,,CC BY-SA 4.0 +10749,4,,,2/20/2019 16:03,,0,,"For questions related to the ""experience replay"" buffer (as used in the Deep Q Network and similar works).",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10750,5,,,2/20/2019 16:04,,0,,"

See https://pytorch.org/ for more info.

+",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10751,4,,,2/20/2019 16:04,,0,,"For conceptual questions that somehow involve the PyTorch library, but note that programming questions are off-topic here.",2444,,2444,,8/21/2020 19:00,8/21/2020 19:00,,,,0,,,,CC BY-SA 4.0 +10752,5,,,2/20/2019 16:06,,0,,,2444,,2444,,2/7/2021 17:43,2/7/2021 17:43,,,,0,,,,CC BY-SA 4.0 +10753,4,,,2/20/2019 16:06,,0,,"For questions related to the (automated) planning problem, which is the problem of finding a plan, i.e. a sequence of actions to move from an initial state to a goal state or a policy (a function from states to actions), and planning algorithms. There are different ways to define a planning problem (such as PDDL) and solve a planning problem (e.g. GraphPlan). In reinforcement learning, planning consists in finding a policy that solves an MDP.",2444,,2444,,2/7/2021 17:43,2/7/2021 17:43,,,,0,,,,CC BY-SA 4.0 +10754,5,,,2/20/2019 16:07,,0,,,-1,,-1,,2/20/2019 16:07,2/20/2019 16:07,,,,0,,,,CC BY-SA 4.0 +10755,4,,,2/20/2019 16:07,,0,,"For questions related to the concept of creativity in the context of artificial intelligence. An example of a possible question could be ""How can we measure the creativity of AI agents?"".",2444,,2444,,9/23/2021 16:24,9/23/2021 16:24,,,,0,,,,CC BY-SA 4.0 +10756,5,,,2/20/2019 16:09,,0,,"

See this question What is the difference between Actor-Critic and Advantage Actor-Critic? for more info.

+",2444,,2444,,2/21/2019 1:49,2/21/2019 1:49,,,,0,,,,CC BY-SA 4.0 +10757,4,,,2/20/2019 16:09,,0,,"For questions related to the advantage actor-critic algorithms (that is, actor-critic algorithms that use the ""advantage"" function).",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10758,5,,,2/20/2019 16:10,,0,,,-1,,-1,,2/20/2019 16:10,2/20/2019 16:10,,,,0,,,,CC BY-SA 4.0 +10759,4,,,2/20/2019 16:10,,0,,For questions related to hierarchical reinforcement learning algorithms.,2444,,2444,,2/21/2019 1:49,2/21/2019 1:49,,,,0,,,,CC BY-SA 4.0 +10760,5,,,2/20/2019 16:12,,0,,,-1,,-1,,2/20/2019 16:12,2/20/2019 16:12,,,,0,,,,CC BY-SA 4.0 +10761,4,,,2/20/2019 16:12,,0,,"For questions related to the concept of value (or performance, or quality, or utility) function (as defined in reinforcement learning and other AI sub-fields). An example of this type of functions is the Q function (used e.g. in the Q-learning algorithm), also known as the state-action value function, given that $Q: S \times A \rightarrow \mathbb{R}$, where $S$ and $A$ are respectively the set of states and actions of the environment.",2444,,2444,,10/4/2020 10:55,10/4/2020 10:55,,,,0,,,,CC BY-SA 4.0 +10762,5,,,2/20/2019 16:16,,0,,,-1,,-1,,2/20/2019 16:16,2/20/2019 16:16,,,,0,,,,CC BY-SA 4.0 +10763,4,,,2/20/2019 16:16,,0,,"For questions related to the Atari games, which are often used in reinforcement learning (RL) as standard problems to test new RL algorithms or methods.",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10764,1,,,2/20/2019 16:17,,2,1730,"

I trained a neural network on the UNSW-NB15 dataset, but, during training, I am getting spikes in the loss function. The algorithm sees each part of this UNSW dataset a single time. The loss function is plotted after every batch.

+ +

+ +

For other datasets, I don't experience this problem. I've tried different optimizers and loss functions, but this problem remains with this dataset.

+ +

I'm using the fit_generator() function from Keras. Has anyone experienced this problem when using Keras with this function?

+",22514,,2444,,12/18/2019 22:26,1/17/2020 23:01,Why am I getting spikes in the values of the loss function during training?,,2,0,,,,CC BY-SA 4.0 +10765,5,,,2/20/2019 16:20,,0,,,-1,,-1,,2/20/2019 16:20,2/20/2019 16:20,,,,0,,,,CC BY-SA 4.0 +10766,4,,,2/20/2019 16:20,,0,,"For questions related to calculus (developed, among others, by Newton and Leibniz), in the context of AI (and, in particular, machine learning).",2444,,2444,,2/21/2019 1:50,2/21/2019 1:50,,,,0,,,,CC BY-SA 4.0 +10767,5,,,2/20/2019 16:21,,0,,,-1,,-1,,2/20/2019 16:21,2/20/2019 16:21,,,,0,,,,CC BY-SA 4.0 +10768,4,,,2/20/2019 16:21,,0,,"For questions related to artificial intelligence research papers. So, you should use this tag if you want someone to clarify something in a research paper.",2444,,2444,,9/25/2019 21:18,9/25/2019 21:18,,,,0,,,,CC BY-SA 4.0 +10769,5,,,2/20/2019 18:18,,0,,,2444,,2444,,4/13/2020 13:30,4/13/2020 13:30,,,,0,,,,CC BY-SA 4.0 +10770,4,,,2/20/2019 18:18,,0,,"For questions related to the policy iteration (PI) algorithm, which is a dynamic programming algorithm that is used to solve a Markov decision process (MDP), given the transition model and reward function (i.e. the model) of the MDP. ",2444,,2444,,4/13/2020 13:30,4/13/2020 13:30,,,,0,,,,CC BY-SA 4.0 +10772,1,,,2/20/2019 20:54,,3,378,"

This question is related to What does "stationary" mean in the context of reinforcement learning?, but I have a more specific question to clarify the difference between a non-stationary policy and a state that includes time.

+ +

My understanding is that, in general, a non-stationary policy is a policy that doesn't change. My first (probably incorrect) interpretation of that was that it meant that the state shouldn't contain time. For example, in the case of game, we could encode time as the current turn, which increases every time the agent takes an action. However, I think even if we include the turn in the state, the policy is still non-stationary so long as sending the same state (including turn) to the policy produces the same action (in case of a deterministic policy) or the same probability distribution (stochastic policy).

+ +

I believe the notion of stationarity assumes an additional implicit background state that counts the number of times we have evaluated the policy, so a more precise way to think about a policy (I'll use a deterministic policy for simplicity) would be:

+ +

$$ \pi : \mathbb{N} \times S \rightarrow \mathbb{N} \times A $$
$$ \pi : (i, s_t) \rightarrow (i + 1, a_t) $$

+ +

instead of $\pi : S \rightarrow A$.

+ +

So, here is the question: Is it true that a stationary policy must satisfy this condition?

+ +

$$ \forall i, j \in \mathbb{N}, s \in S, \pi (i, s) = \pi(j, s) $$

+ +

In other words, the policy must output the same result no matter when we evaluate it (either the ith or jth time). Even if the state $S$ contains a counter of the turn, the policy would still be non-stationary because for the same state (including turn), no matter how many times you evaluate it, it will return the same thing. Correct?

+ +

As a final note, I want to contrast the difference between a state that includes time, with the background state I called $i$ in my definition of $\pi$. For example, when we run an episode of 3 steps, the state $S$ will contain 0, 1, 2, and the background counter of number of the policy $i$ will also be set to 2. Once we reset the environment to evaluate the policy again, the turn, which we store in the state, will go back to 0, but the background number of evaluations won't reset and it will be 3. My understanding is that in this reset is when we could see the non-stationarity of the policy in action. If we get a different result here it's a non-stationary policy, and if we get the same result it's a stationary policy, and such property is independent of whether or not we include the turn in the state. Correct?

+",12640,,12640,,2/24/2019 22:41,2/24/2019 22:41,What is the difference between a non-stationary policy and a state that stores time?,,2,0,,,,CC BY-SA 4.0 +10773,1,10780,,2/20/2019 21:20,,2,357,"

I am confused about how neural networks weigh different features or inputs.

+

Consider this example. I have 3 features/inputs: an image, a dollar amount, and a rating. However, since one feature is an image, I need to represent it with very high dimensionality, for example with $128 \times 128 = 16384$ pixel values. (I am just using 'image' as an example, my question holds for any feature that needs high dimensional representation: word counts, one-hot encodings, etc.)

+

Will the $16384$ 'features' representing the image completely overwhelm the other 2 features that are the dollar amount and rating? Ideally, I would think the network would consider each of the three true features relatively equally. Would this issue naturally resolve itself in the training process? Would training become much more difficult of a task?

+",22525,,2444,,11/21/2020 20:24,11/21/2020 20:30,How do neural networks weigh multiple inputs/features of different dimensionality?,,2,0,,,,CC BY-SA 4.0 +10774,1,,,2/20/2019 23:14,,2,158,"

Mitchell's definition of machine learning is as follows:

+
+

A computer program is said to learn from experience E with respect to some task T and performance measure P, if its performance at task T, as measured by P, improves with experience E.

+
+

Here, we talk about what it means for a program to learn rather than a machine, but a program and a machine aren't equivalent, so this is a definition about "program learning" rather than "machine learning". How is this consistent?

+",22528,,2444,,12/21/2021 0:31,12/21/2021 0:31,"Is the definition of machine learning by Mitchell in his book ""Machine Learning"" valid?",,1,0,,,,CC BY-SA 4.0 +10775,1,,,2/20/2019 23:48,,3,401,"

I have a dataset with hundreds of thousands of training examples. There are 27 input variables and one output variable which is always a 0 or a 1, based on whether an event happened or not.

+ +

My network therefore has 27 inputs and 1 output. I want the network's output to be a confidence guess of how likely the event is to happen, for example if the output is 0.23 then that represents that the network thinks the event has a 23% chance of happening.

+ +

I am using back propagation to train the neural network. It does appear to work well and the network outputs a higher number when the event is more likely and a lower number when the event is less likely.

+ +

Would it be a valid concern that my training data only has 0 or 1 values as outputs, when this is not truly what I want the network to output?

+ +

My concern comes from the fact that back propagation attempts to reduce the square of the error between the network's output and the value of the output in the training data, which is always a 0 or a 1. Because it is the square of the error it is trying to reduce, I'm concerned that its probability output may not be a linear mapping to the true probability of the event happening, based on the 27 inputs it is seeing.

+ +

Is this a valid concern? And are there any techniques I can use to get a neural network to output a linear confidence guess between 0 and 1 when my test data only has outputs of 0 or 1?

+ +

I am using the sigmoid activation function for all of my neurons, would there be a better choice of activation function for this problem?

+ +

Edit: Thanks to Xpector's answer, I now understand that not all back propagation aims to reduce the square of the error; it depends on the loss function used. I am including a part of the back propagation code I have used here, which calculates the error:

+ +
var neuronOutput = layerOutputs[i];
+var error = (neuronOutput - desiredOutput[i]);
+errors[i] = error * Maths.SigmoidDerivative(neuronOutput);
+
+ +

This is from an open source RProp implementation. I am not sure what loss function is being used here.

+",21524,,21524,,2/21/2019 11:40,2/21/2019 11:40,Training a neural network to output the conditional probability of an event when the training data output is only binary,,1,0,,,,CC BY-SA 4.0 +10776,1,,,2/20/2019 23:54,,1,55,"

I want to solve a regression problem to predict a factor. I decided to go with deep neural networks as the solution for my problem.

+ +

The features in this problem represent loop characteristics, such as loop nest level and loop sizes. The loops also hold instructions (operations) that themselves have many characteristics, like the number of variables, loads, stores, etc.

+ +

Those instructions may be positioned in the innermost loop, in the middle, or under the outermost loop.

+ +

We extract here characteristics of Computations in Tiramisu language.

+ +

For example, if we have two iterator variables:

+ +
var i(""i"", 0, 20), j(""j"", 0, 30);
+
+ +

and we have the following computation declaration:

+ +
computation S(""S"", {i,j}, 4);
+
+ +

This is equivalent to writing the following C code:

+ +
 for (i=0; i<20; i++)
+      for (j=0; j<30; j++)
+         S(i,j) = 4;
+
+ +

Regarding the recursive aspect, here we can have something like this:

+ +
 computation S(""S"", {i,j}, 4+M); 
+
+ +

where ""M"" is also a computation.

+ +

We considered those features to represent Computations in Tiramisu language.

+ +
/** Computations=loops **/
+   ""nest_level"" : 3,   // Number of nest levels
+   ""perfect_nested"" : 1 ,  // 1 if the loop is perfectly nested , 0 instead
+   ""loops_sizes"" : [200,100,300] // Sizes of for loops 
+   ""lower_bound"" : [5,0,0], // Bounds of the iterator (in this e.g [2, 510])
+   ""upper_bound"" : [205,100,300], 
+   ""nb_intern_if"" : 1000, //number of if statements in the nest
+   ""nb_exec_if"" : 300, // Estimation of number if 
+   ""prec_if"" : 1,  // 1=true if the nest is preceded by if statement  
+   ""nb_dependencies_intern"" : 5, // number of dependencies between loops levels in the nest 
+   // ""dependencies_extern"" : , // number of extern nest dependencies  
+    ""nb_computations"" : 3,  // number of operations (computations) in the nest 
+    //std::map<std::string, computation_features *> computations_features; // list of operations Features in the nest
+
+ +

And this to represent operations:

+ +
/** Instructions **/
+""n"" : 1, <-- Number of computations
+    ""compt_array"" : [
+      {
+              // Should we add to which level should belong the instructions ?
+
+              ""comp_id"" : 1,  // Unique id for the instructions
+              ""nb_var"" : 5,   // Number of the variables in the instructions
+              ""nb_const"" : 2, // Number of constantes in the instructions
+              ""nb_operands"" : 3, // Number of operands of the operatiion ( including direct values)
+              ""histograme_loads"" :  [2,1,5,8], // number of load ops. i.e. acces to inputs per type
+              ""histograme_stores"" :  [2,1,5,8], // number of load ops. i.e. acces to inputs per type
+              ""nb_library_call"" : 5;  // number of the computation library_calls 
+              ""wait_library_argument"" : 2, // number of ar 
+              ""operations_histogram"" : [ // number of arithmetic operations per type
+                    [0, 2, 0, 0],  // p_int32
+                    [0, 0, 0, 0],  // float, for example
+                    [0, 0, 0, 0],  // ...
+                    [0, 0, 0, 0],
+                    [0, 0, 0, 0],
+                    [0, 0, 0, 0], // ...
+                    [0, 0, 0, 0]  // boolean    
+              ]              
+      }
+  ]
+
+ +

We may also represent the iterator as a characteristic of a computation.

+ +

The problems with those features are the following:

+ +
    +
1. Loops (computations) can hold many operations ==> the size of the operation vector is variable.

  2. +
3. Instructions (operations) can be at level 2 or 3, under the innermost loop. I mean we can have this situation:

  4. +
+ +
+
for (i=0; i < 20; i++)
+    S(i, j) = 4;
+for (j=0; j < 30; j++)
+    ...
+
+
+ +

or this one:

+ +
+
for (i=0; i<20; i++)
+      for (j=0; j<30; j++)
+         S(i,j) = 4;
+
+
+ +

Or many other situations with many instructions ==> there are dependencies between the position of the instruction and the level (iterator) in which it sits; put the other way, the operation holds the id of the iterator.

+ +
    +
1. The operation itself can be composed with another computation (loop nest), which itself holds instructions, and so forth ==> recursivity.
  2. +
+ +

After some research, I have found that a DNN has a fixed input size, while RNNs and recursive NNs can handle varying-length inputs. But what about the others?

+ +

How should I represent all of that as input?

+",22526,,22296,,2/21/2019 1:49,2/21/2019 1:49,How to handle varying length of inputs that represent dependencies and recursivity in deep neural networks in case of regression?,,0,0,,,,CC BY-SA 4.0 +10777,2,,10774,2/21/2019 1:58,,2,,"

Think of a computer as a Turing Machine--this idea is a model of computation, and all of modern computing is based on the Turing-Church thesis.

+ +

Machine and program can be interchangeable—at the end of the day, it's all algorithms, whether hard coded in the form of microchip, or in the form of software. (Any microchip can be emulated as software.)

+ +

Pre-modern computers were mechanical in popular sense. Examples include Babbage's + difference engine, and mechanical calculators in general. These led to programmable mechanical calculators and electromechanical computers such as IBM's Harvard Mark I, based on Babbage's notion of an Analytic Engine.

+ +

In this context, Machine Learning may also connote software that runs on a machine (hardware) as opposed to human/animal learning, which utilizes a biological medium.

+ +
+ +

The etymology of ""mechanics"", and by extension, ""machine"" is worth looking at.

+ + + +

Mechanema is instructive in that math and engineering can be understood as the ""trick to doing things"". (Machination is not pejorative in this sense, as ""noun of action from past participle stem of machinari 'contrive skillfully, to design; to scheme, to plot,'""—one can plot the course of a celestial body.)

+ +

Early uses of the mech- root

+ +

Aristotle's Μηχανικά (""Mechanical Problems""*)

+ +

Euler's Mechanica (Euler was referring to Classical Mechanics, an analytic and predictive science made possible by calculus, which is based on functions.)

+",1671,,1671,,2/22/2019 1:18,2/22/2019 1:18,,,,0,,,,CC BY-SA 4.0 +10778,1,,,2/21/2019 3:44,,2,564,"

How can I detect the liveness of a face using face landmark points? I am getting face landmarks from Android camera frames, and I want to detect liveness using these landmark points. How can I tell if a human is making a specific movement that can be useful for liveness detection?

+",22532,,32410,,4/19/2021 22:43,7/31/2023 8:36,Face liveness detection using face landmark points,,3,0,,,,CC BY-SA 4.0 +10779,2,,10617,2/21/2019 6:56,,1,,"

Why do you need an encoder and a decoder here? This is not a use case for that. If you want to convert one sequence to another sequence, then you can use an encoder-decoder. You can use simple seq2seq learning for the purposes below.

+ +
    +
1. Sequence Prediction
2. Sequence Classification
3. Sequence Generation
4. Sequence-to-Sequence Prediction

You can use the simple seq2seq method below to predict the next sequence. Sequence prediction attempts to predict elements of a sequence on the basis of the preceding elements. https://machinelearningmastery.com/sequence-prediction/
+",7681,,,,,2/21/2019 6:56,,,,0,,,,CC BY-SA 4.0 +10780,2,,10773,2/21/2019 8:48,,1,,"

As stated in your example, the three features are: an image, a price, and a rating. Now, you want to build a model that uses all of these features, and the simplest way to do this is to feed them directly into the neural network, but that is inefficient and fundamentally flawed, for the following reasons:

+
    +
  • In the first dense layer, the neural network will try to combine raw pixel values linearly with price and rating, which will produce features that are meaningless for inference.

    +
  • +
It could perform well just by optimizing the cost function, but the model's performance will be nowhere near as good as it could be with a good architecture.

    +
  • +
+

So, the neural network doesn't care if the data is a raw pixel value, price, or rating: it would just optimize it to produce the desired output. That is why it is necessary to design a suitable architecture for the given problem.

+

Possible architecture for your given example :

+
    +
  1. Separate your raw features, i.e. pixel value, and high-level data, i.e. price and rating

    +
  2. +
  3. Stack 2-3 dense layers for raw features (to find complex patterns in images)

    +
  4. +
  5. Stack 1-2 dense layers for high-level features

    +
  6. +
  7. Combine them together in a final dense layer

    +
  8. +
+

If you want to de-emphasize the importance of the image, just connect the first dense layer (16,384 units) to another layer with fewer units, say 1024, and have more units for the high-level data, say 2048.

+

So, again, here's the possible architecture

+
    +
  1. Raw features -> dense layer (16384) -> dense layer (1024)
  2. +
  3. High-level features -> dense layer (2048)
  4. +
  5. Combine 1 and 2 with another dense layer
  6. +
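A minimal Keras sketch of the two-branch architecture listed above (the layer sizes are taken from the list; the input shapes are assumptions based on the example in the question):

from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(16384,), name="image_pixels")   # 128 x 128 flattened
meta_in = keras.Input(shape=(2,), name="price_and_rating")

x = layers.Dense(1024, activation="relu")(image_in)            # raw-feature branch
y = layers.Dense(2048, activation="relu")(meta_in)             # high-level branch

combined = layers.concatenate([x, y])
combined = layers.Dense(128, activation="relu")(combined)
output = layers.Dense(1)(combined)                             # final prediction

model = keras.Model(inputs=[image_in, meta_in], outputs=output)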
+",22540,,2444,,11/21/2020 20:30,11/21/2020 20:30,,,,0,,,,CC BY-SA 4.0 +10783,2,,10775,2/21/2019 10:47,,3,,"

To sort out the terms a bit: back propagation (repeatedly applying the chain rule of differentiation) is how the gradients are computed for stochastic gradient descent, which is one implementation of optimization. It is not necessarily the square of the error that is reduced by optimization - it is up to us to choose the loss function.

+ +

The loss function should ideally mirror the utility function (what you truly want to optimize), but it needs a useful derivative. For example, the 0-1 loss doesn't have one.

+ +

I think that what is called linear confidence guess in the question is the conditional probability of the event, given the input. Then the binary cross entropy would be a valid choice of a loss function.

+ +

From deeplearningbook.org

+ +
+

The negative log-likelihood allows the model to estimate the conditional probability of the classes, given the input, and if the model can do that well, then it can pick the classes that yield the least classification error in expectation.

+
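A hedged sketch of what that looks like in practice (Keras here, rather than the RProp code quoted in the question; the hidden-layer size is arbitrary): a sigmoid output trained with binary cross-entropy on the 0/1 targets can be read as an estimate of the conditional probability of the event.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(27,)),
    layers.Dense(1, activation="sigmoid"),   # output read as P(event = 1 | inputs)
])
# Binary cross-entropy is the negative log-likelihood for 0/1 targets.
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(X, y)  # X: (n, 27) inputs, y: binary 0/1 labels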
+",22544,,,,,2/21/2019 10:47,,,,2,,,,CC BY-SA 4.0 +10784,2,,10773,2/21/2019 10:53,,0,,"

The answer by ssh is correct. Your results could be further improved i) by extracting image features by a convolutional (instead of fully connected) architecture, and ii) by exploiting transfer learning.

+ +

To exploit transfer learning, you i) pick some widely used model, e.g. ResNet-18, ii) initialize it with ImageNet pretrained parameters, iii) replace its fully connected layer (the one that produces the 1000-D softmax input) with your own randomly initialized fully connected layer. If you are interested, have a look at detailed instructions.
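In PyTorch (torchvision), the replacement step looks roughly like this (a sketch; the single-output head is just an example, and newer torchvision versions use a weights argument instead of pretrained):

import torch.nn as nn
from torchvision import models

net = models.resnet18(pretrained=True)        # ImageNet-pretrained backbone
net.fc = nn.Linear(net.fc.in_features, 1)     # new, randomly initialized head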

+",21726,,,,,2/21/2019 10:53,,,,0,,,,CC BY-SA 4.0 +10785,2,,10723,2/21/2019 11:11,,0,,"

It sounds like a supervised learning from not-so-many samples. Representation/metric learning could be of some help.

+ +

There are two books by Bob Volman that help quite a bit to make the subconscious conscious: ""Understanding Price Action"" and ""Forex Price Action Scalping"". Despite the sales-driven titles, this is the best non-scam attempt to formalize discretionary trading I have ever seen. (Even though one will become a one-trick pony at most, on the other hand that's enough.) If you google a bit, you'll find a Dropbox with countless price charts commented by the author.

+",22544,,,,,2/21/2019 11:11,,,,0,,,,CC BY-SA 4.0 +10786,5,,,2/21/2019 11:21,,0,,"

For more details, see e.g. https://en.wikipedia.org/wiki/Simulated_annealing.

+",2444,,2444,,2/21/2019 22:01,2/21/2019 22:01,,,,0,,,,CC BY-SA 4.0 +10787,4,,,2/21/2019 11:21,,0,,"For questions related to the simulated annealing algorithm (SA), which is a probabilistic algorithm that attempts to find the global optimum of a function. SA can e.g. be used to solve the travelling salesman problem (TSP).",2444,,2444,,2/21/2019 22:01,2/21/2019 22:01,,,,0,,,,CC BY-SA 4.0 +10788,1,,,2/21/2019 11:22,,3,200,"

In the paper ""Self-critical sequence training for image captioning"", on page 3, they define the loss function (of the parameters $\theta$) of an image captioning system as the negative expected reward of a generated sequence of words (Equation (3)):

+ +

$$L(\theta) = - \mathbb{E}_{w^s \sim p_{\theta}}[r(w^s)],$$

+ +

where $w^s = (w_1^s,..., w_T^s)$ and $w^s_t$ is the word sampled from the model at time step $t$.

+ +

The derivation of the gradient of $L(\theta)$ concludes with Equation (7), where the gradient of $L(\theta)$ is approximated with a single sample $w^s \sim p_\theta$:

+ +

$$\nabla_{\theta}L(\theta) \approx -(r(w^s) - b) \ \nabla_{\theta} \log p_{\theta}(w^s),$$

+ +

where $b$ is a reward baseline and $p_\theta(w^s)$ is the probability that sequence $w^s$ is sampled from the model. Up until here I understand what's going on. However, then they proceed with defining the partial derivative of $L(\theta)$ w.r.t. the input of the softmax function $s_t$ (final layer):

+ +

$$\nabla_{\theta}L(\theta) = \sum^T_{t=1} \frac{\partial L(\theta)}{\partial s_t} \frac{\partial s_t}{\partial \theta}$$

+ +

I still understand the equation above.

+ +

And Equation (8):

+ +

$$\frac{\partial L(\theta)}{\partial s_t} \approx (r(w^s) - b) (p_\theta(w_t| h_t) - 1_{w^s_t}),$$

+ +

where $1_{w^s_t}$ is $0$ everywhere, but $1$ at the $w^s_t$'th entry. How do you arrive at Equation (8)?

+ +

I'm happy to provide more information if necessary. In the paper ""Sequence level training with recurrent neural networks"", on page 7, they derive a similar result.

+",22545,,22545,,2/21/2019 13:49,3/23/2019 14:03,"How is equation 8 derived in the paper ""Self-critical sequence training for image captioning""?",,1,0,,,,CC BY-SA 4.0 +10789,2,,9919,2/21/2019 12:18,,0,,"

My understanding is now that the author's formula is deliberate. It seeks to learn a worst-case maximizing policy. The formula I instead suggest would, I believe, instead be Nash Q learning where the agent seeks to learn to play a Nash equilibrium.

+ +

After debugging, I have gotten good results with the second formula but cannot speak for the original Minimax Q learning one.

+",21311,,21311,,2/21/2019 12:26,2/21/2019 12:26,,,,0,,,,CC BY-SA 4.0 +10791,2,,10788,2/21/2019 13:08,,2,,"

First of all, you made a mistake: equation 8 in the paper is defined with $\frac{\partial L(\theta)}{\partial s_t}$, not $\frac{\partial L(\theta)}{\partial\theta}$.
The loss is defined as:

+ +

$L(\theta) = - \mathbb{E}_{w^s \sim p_{\theta}}[r(w^s)]$

+ +

If we use definition of expectation (for discrete case):

+ +

$\mathbb{E}[X] = \sum\limits_{i} p_i(x_i)x_i$
+we get

+ +

$L(\theta) = -\sum\limits_{w^s} p_\theta(w^s)r(w^s)$

+ +

The probability $p_\theta(w_t)$ is defined as $\text{softmax}(s_t) = S(s_t)$. If we calculate the derivative of $L(\theta)$ with respect to $s_t$, we are basically calculating the derivative of the softmax $S(s_t)$ with respect to $s_t$.

+ +

It can be shown (I won't derive it here) that the derivative of the softmax function is

+ +

$\frac{\partial S(s_i)}{\partial s_j} = S(s_i)(1_{ij} - S(s_j))$

+ +

where $1_{ij}$ is defined as in your question. If we plug this derivative into the derivative of $L(\theta)$ and replace $S(s_t)$ with $p_\theta(w_t)$ (remember that the probability distribution is defined as a softmax here), we get:

+ +

$\frac{\partial L(\theta)}{\partial s_t} = -\sum\limits_{w^s} p_\theta(w^s)(1_{w^s_t} - p_\theta(w_t))r(w^s)$

+ +

We pull the minus sign inside, use the definition of expectation again, and subtract the baseline from the reward, and we have:

+ +

$\frac{\partial L(\theta)}{\partial s_t} = \mathbb{E}_{w^s \sim p_{\theta}}[(p_\theta(w_t) - 1_{w^s_t})(r(w^s) - b)]$

+ +

Now, if we approximate the expectation with a single sample, we get:

+ +

$\frac{\partial L(\theta)}{\partial s_t} \approx (p_\theta(w_t) - 1_{w^s_t})(r(w^s) - b)$

+ +

I probably butchered the notation in some places, but the basic idea should be clear.
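For what it's worth, here is a quick numerical sanity check of the softmax derivative used above (NumPy, finite differences; purely illustrative):

import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

s = np.array([0.5, -1.0, 2.0])
S = softmax(s)

# Analytic Jacobian: dS_i/ds_j = S_i * (1_ij - S_j)
analytic = np.diag(S) - np.outer(S, S)

# Finite-difference approximation; column j is dS/ds_j
eps = 1e-6
numeric = np.stack(
    [(softmax(s + eps * np.eye(3)[j]) - softmax(s - eps * np.eye(3)[j])) / (2 * eps)
     for j in range(3)], axis=1)

assert np.allclose(analytic, numeric, atol=1e-6)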

+",20339,,,,,2/21/2019 13:08,,,,4,,,,CC BY-SA 4.0 +10792,2,,10778,2/21/2019 13:38,,0,,"

One approach would be to obtain labeled (""yes/no"") samples (e.g. via Mechanical Turk, though it costs) and approach it as a supervised learning task. E.g. a neural network will discover by itself, through learning, which features (landmark positions) are helpful for detection.

+ +

There should be examples around for mood detection etc.

+",22544,,,,,2/21/2019 13:38,,,,0,,,,CC BY-SA 4.0 +10793,2,,10764,2/21/2019 14:20,,1,,"

The spikes could be caused by many things: insufficient model capacity, incorrect labels, buggy input parsing, ... Finding the culprit requires some detective work. For instance, you could apply the learned model to the whole training set and manually examine the data points which result in the highest loss. Alternatively, you could compare the learning outcomes of different models (both weaker and stronger).

+",21726,,,,,2/21/2019 14:20,,,,0,,,,CC BY-SA 4.0 +10794,1,,,2/21/2019 16:27,,2,19,"

I'm creating a schedule for a summer camp. Because of the high risk of rain, the higher priority activities need to be attempted first, so there is more time for later attempts if need be (temporarily ignoring the schedule in that situation).

+ +

Camp takes place over four days. My current idea is to map the days to a set of numbers (4, 3, 2, 1), and get a correlation between these numbers and the priorities of activities. But I'm not certain this is the best way to do it, nor what the best way of correlating the two are. I'm also not sure how I would factor this correlation in with the fitness function, along with the priorities themselves.

+ +

How should I proceed?

+",5526,,,,,2/21/2019 16:27,Judging a genetic algorithm's priority-based schedules by how far ahead the higher priority things are done,,0,1,,,,CC BY-SA 4.0 +10795,5,,,2/21/2019 16:29,,0,,"

Time complexity can, in some sense, be understood as time measured in computational operations.

+ +

For a more precise definition, see: Time Complexity (wiki)

+ +
+ +

Additional Resources:

+ +

Erik Demaine's lecture on Computational Complexity (MIT)

+",-1,,1671,,2/21/2019 22:09,2/21/2019 22:09,,,,0,,,,CC BY-SA 4.0 +10796,4,,,2/21/2019 16:29,,0,,For questions related to the time complexity (e.g. in Big-O notation) of AI and ML algorithms.,2444,,2444,,2/21/2019 22:01,2/21/2019 22:01,,,,0,,,,CC BY-SA 4.0 +10797,1,10799,,2/21/2019 17:00,,2,111,"

In an unknown environment, how do I prevent an agent from tending to terminate its trajectory in a negative state when time needs to be taken into account?

+ +

Suppose the following example to make my question clear:

+ +
    +
  • A mouse (M) starts in the bottom left of its world
  • +
  • Its goal is to reach the cheese (C) in the top right (+5 reward) while also avoiding the trap (T) (-5 reward)
  • +
  • It should do this as quickly as possible, so for every timestep it also receives a penalty (-1 reward)
  • +
+ +

If the grid world is sufficiently large, it may actually take the mouse many actions to reach the cheese.

+ +

Is there a scenario where the mouse may choose to prefer the trap (-1*small + -5 cumulative reward) versus the cheese (-1*large + 5 cumulative reward)? Is this avoidable? How does this translate to an unknown environment where the number of time steps required to reach the positive terminal state is unknown?

+",22525,,2444,,2/21/2019 18:25,3/23/2019 20:22,How do I avoid an agent to tend to terminate in a negative state when time needs to be taken into account?,,1,0,,,,CC BY-SA 4.0 +10798,1,10907,,2/21/2019 19:55,,5,4241,"

Sutton and Barto state in the 2018-version of ""Reinforcement Learning: An Introduction"" in the context of Expected SARSA (p. 133) the following sentences:

+ +
+

Expected SARSA is more complex computationally than Sarsa but, in return, it eliminates the variance due to the random selection of $A_{t+1}$. Given the same amount of experience we might expect it to perform slightly better than Sarsa, and indeed it generally does.

+
+ +

I have three questions concerning this statement:

+ +
    +
  1. Why is the action selection random with Sarsa? Isn't it on-policy and therefore $\epsilon$-greedy?
  2. +
  3. Because Expected-Sarsa is off-policy the experience it learns from can be from any policy that at least explores everything in the limit e.g. random action-selection with equal probabilities for every action. How can Exected-Sarsa learning from such policy be generally better than normal Sarsa learning from an $\epsilon$-greedy policy, especially with the same amount of experience?
  4. +
  5. Probably more general: How can on-policy and off-policy algorithms be compared in such way (e.g. through variance) even though their concepts and assumptions are so different?
  6. +
+",21299,,2444,,2/21/2019 20:19,2/27/2019 9:00,"Expected SARSA vs SARSA in ""RL: An Introduction""",,1,2,,4/5/2022 4:05,,CC BY-SA 4.0 +10799,2,,10797,2/21/2019 19:57,,2,,"

This is a common problem in reward shaping. You want a certain behavior from your agent, but it's challenging to describe it completely in terms of rewards. The situation you are describing is challenging specifically because, as the grid world grows, randomly stumbling onto the goal state becomes less likely, a.k.a. the problem of exploration.

There are a few techniques that can be used to address this problem though; here are some.

+ +

0) This is an emergent property of the environment and gamma (0 based because its more immediate to your problem:p)

+ +

If gamma is small, your agent will value rewards that are in its immediate future more highly, whereas as gamma approaches 1, the agent values rewards that are further in its future. In your grid world, the size of the grid affects how gamma affects your agent. Like in your example, if your grid was 100x100 and the trap was close to the agent, you would have to have a gamma closer to 0 in order for your agent to avoid the trap, because it's worse than moving to a cell that isn't a trap. This is interesting because the whole purpose of gamma is to increase the weighted value of temporally distant rewards, but when you make the trap more favorable than the goal, going to the trap is the optimal strategy. :)
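To see this point numerically, here is a tiny illustration (my own numbers, reusing the rewards from the question: -1 per step, -5 for the trap, +5 for the cheese):

def discounted_return(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

gamma = 0.95
trap_now = [-1, -5]                 # one step, then fall into a nearby trap
cheese_later = [-1] * 49 + [5]      # 49 steps of -1, then the distant cheese

print(discounted_return(trap_now, gamma))      # ~ -5.75
print(discounted_return(cheese_later, gamma))  # ~ -18.0  -> the trap looks better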

+ +

1) Include more observation data to your model

+ +

This isn't always a possibility but depending on your application and what you have available, you may be able to give your agent missing information that might be necessary to its ability to solve your task. For instance, in your infinite grid example, you may include the distance between the agent and the goal or the direction of the goal.

+ +

2) Include a reward that helps to shape the direction of progress.

+ +

One could easily create an infinite grid world where there isn't actually a goal, but rather a continuing task where the agent has to cover distance in a desired direction while avoiding obstacles. How would you approach this problem? Perhaps a reward that specifically looks at the number of cells visited by the agent in the desired direction over time, a.k.a. the discrete analog of velocity. Of course, this is still dependent on a known direction, but thinking about how one might handle the ""limit"" of your environment as it grows (e.g., adding more and more cells to the grid world) helps to give an intuition of what your agent could be missing.

+ +

3) Use curiosity-based approaches

Following from 2), if the direction isn't known, one thing to consider is, rather than giving a penalty for each timestep, to instead incentivize the agent to be faster by rewarding visits to ""infrequently visited"" states. As the task requires that the agent performs as quickly as possible, remaining in or returning to a previously visited state clearly doesn't benefit the agent. Taking this notion of rewarding visits to infrequently visited or not-yet-visited states further leads to the recent research topic of using curiosity to build RL agents that have novel exploration strategies.

+ +

Although there are many (often debated) ways to define curiosity, they all share an idea of giving the agent a bonus when it has entered a state that has not been visited before. A paper that gives a good recap of curiosity methods and also introduces a novel approach is Random Network Distillation from OpenAI.

+",4398,,,,,2/21/2019 19:57,,,,0,,,,CC BY-SA 4.0 +10800,5,,,2/21/2019 20:20,,0,,,-1,,-1,,2/21/2019 20:20,2/21/2019 20:20,,,,0,,,,CC BY-SA 4.0 +10801,4,,,2/21/2019 20:20,,0,,"For questions related to the reinforcement learning (on-policy) algorithm called SARSA, which stands for (s, a, r, s', a').",2444,,2444,,2/21/2019 22:01,2/21/2019 22:01,,,,0,,,,CC BY-SA 4.0 +10802,5,,,2/21/2019 20:21,,0,,,-1,,-1,,2/21/2019 20:21,2/21/2019 20:21,,,,0,,,,CC BY-SA 4.0 +10803,4,,,2/21/2019 20:21,,0,,"For questions related to the reinforcement learning algorithm called ""expected SARSA"" (as described in the book ""Reinforcement Learning: An Introduction"", by Sutton and Barto, 2nd edition).",2444,,2444,,2/21/2019 22:01,2/21/2019 22:01,,,,0,,,,CC BY-SA 4.0 +10804,5,,,2/21/2019 22:47,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm.

+",2444,,2444,,2/22/2019 1:18,2/22/2019 1:18,,,,0,,,,CC BY-SA 4.0 +10805,4,,,2/21/2019 22:47,,0,,"For questions related to the ""expectation-maximisation"" (EM) algorithm (which is used in several contexts in AI).",2444,,2444,,2/25/2019 22:00,2/25/2019 22:00,,,,0,,,,CC BY-SA 4.0 +10806,1,,,2/22/2019 1:30,,0,2663,"

I've been looking at various bounding box algorithms, like the three versions of RCNN, SSD and YOLO, and I have noticed that not even the original papers include pseudocode for their algorithms. I have built a CNN classifier and I am attempting to incorporate bounding box regression, though I am having difficulties in implementation. I was wondering if anyone can whip up some pseudocode for any bounding box classifier or a link to one (unsuccessful in my search) to aid my endeavor.

+ +

Note: I do know that there are many pre-built and pre-trained versions of these object classifiers that I can download from various sources, I am interested in building it myself.

+",22563,,1671,,3/4/2020 23:57,3/4/2020 23:57,Pseudocode for CNN with Bounding Box and Classifier,,2,0,,,,CC BY-SA 4.0 +10808,2,,3731,2/22/2019 8:54,,0,,"

While adjusting the learning rate during training is certainly interesting, so is finding a good initial learning rate.

+ +

Cyclical Learning Rates for Training Neural Networks, fast.ai (pytorch) implementation.

+ +

Here's a good practitioner's overview of learning rate schedules (python, but rather readable and with nice plots).

+",22544,,,,,2/22/2019 8:54,,,,0,,,,CC BY-SA 4.0 +10810,1,,,2/22/2019 9:02,,1,74,"

I am using Rasa NLU for training an NLU system to detect intents and slots. Now, some languages attach endings to their nouns (like Finnish, e.g. ""in Berlin"" -> ""Berliinissä""). I have tried to annotate the characters in the training data as entities, but when I run the model, it doesn't detect the characters inside the word. Only when those characters form a separate word are they detected. I am unable to think of an implementation to effectively detect named entities within a word. Suggestions needed.

+",22574,,22574,,2/24/2019 7:10,2/24/2019 7:10,Detect named entities inside words using spaCy,,1,0,,,,CC BY-SA 4.0 +10811,1,,,2/22/2019 9:07,,2,172,"

Is there any guidance available for training on very noisy data, when Bayes error rate (lowest possible error rate for any classifier) is high? For example, I wonder if deliberately (not due to memory or numerical stability limitations) lowering the batch size or learning rate could produce a better classifier.

+ +

I found so far some general recommendations, not specific for noisy data: Tradeoff batch size vs. number of iterations to train a neural network

+",22544,,22544,,2/24/2019 19:59,2/24/2019 19:59,Any guidance on learning rate / batch size for noisy data (high Bayes error rate)?,,0,1,,,,CC BY-SA 4.0 +10812,1,10818,,2/22/2019 9:28,,20,18399,"

I came across these 2 algorithms, but I cannot understand the difference between the two, both in terms of implementation and intuition.

+

So, what difference does the second point in both the slides refer to?

+

+

+",,user9947,2444,,12/22/2021 18:08,12/22/2021 18:08,What is the difference between First-Visit Monte-Carlo and Every-Visit Monte-Carlo Policy Evaluation?,,2,0,,,,CC BY-SA 4.0 +10813,1,,,2/22/2019 12:11,,1,115,"

I'm going through the paper Weight Uncertainty in Neural Networks by Google Deepmind. In the final line of the proof of proposition 1, the integral and the derivative are swapped. Then the derivative is taken. But this somehow yields 2 derivatives of $f$ with respect to $\theta$. I thought that this was the result of a product rule applied to $q(\epsilon)$ and $f(w,\theta)$ and then the chain rule. But that does not yield the same outcome as $\frac{\partial q(\epsilon)}{\partial \theta} = 0 $. My question is: does anyone understand how the equation in the last line comes about?

+ +

+",22273,,,,,2/22/2019 13:02,Problem with Proposition 1 of Google Deepmind's 'Weight uncertainty in Neural Networks',,1,0,,,,CC BY-SA 4.0 +10814,2,,10454,2/22/2019 12:24,,0,,"

The hill-climbing or the constraint-based structure learning algorithms accept whitelist or blacklist arguments permitting or prohibiting some arcs.

+",22113,,2444,,12/13/2021 8:53,12/13/2021 8:53,,,,1,,,,CC BY-SA 4.0 +10815,2,,10813,2/22/2019 12:54,,2,,"

we start with

+ +

$\frac{\partial}{\partial \theta} \mathbb E_{q(\mathbf w\mid\theta)}[f(\mathbf w, \theta)]$

+ +

using definition of expectation for continuous case:

+ +

$\mathbb E[X] = \int xp(x) dx$

+ +

for the first equation we get:

+ +

$\frac{\partial}{\partial \theta} \int f(\mathbf w, \theta)q(\mathbf w \mid \theta) d\mathbf w$

+ +

we swap $q(\mathbf w \mid \theta) d\mathbf w$ for $q(\epsilon)d\epsilon$ and we get:

+ +

$\frac{\partial}{\partial \theta} \int f(\mathbf w, \theta)q(\epsilon)d\epsilon$

+ +

using definition of expectation again we have:

+ +

$\frac{\partial}{\partial \theta} \mathbb E_{q(\epsilon)}[f(\mathbf w,\theta)]$

+ +

I believe the dominated convergence theorem lets us interchange the expectation and the derivative, and we have:

+ +

$\mathbb E_{q(\epsilon)}[\frac{\partial}{\partial \theta}f(\mathbf w,\theta)]$

+ +

We have a function $f$ that depends on two variables, $\mathbf w$ and $\theta$. In order to get the full derivative of such a function, we need to take the derivative of $f$ with respect to both variables. We have:

+ +

$\mathbb E_{q(\epsilon)}[\frac{\partial f(\mathbf w,\theta)}{\partial \theta} + \frac{\partial f(\mathbf w,\theta)}{\partial \mathbf w} \frac{\partial \mathbf w}{\partial \theta}]$

+ +

The first term inside the expectation is the derivative of $f$ with respect to $\theta$ directly, and in the second term $f$ is differentiated with respect to $\mathbf w$, where we need to apply the chain rule to get the full derivative with respect to $\theta$.
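
+ +

As a sanity check, here is a minimal PyTorch sketch of this reparameterisation (my own illustration, not the paper's code; the Gaussian posterior, the toy objective and all names are assumptions). Autograd computes exactly the two terms above, because $\mathbf w$ is an explicit, differentiable function of $\theta = (\mu, \rho)$ and of the fixed sample $\epsilon$:

+ +
+    import torch
+
+    mu = torch.zeros(5, requires_grad=True)    # variational mean, part of theta
+    rho = torch.zeros(5, requires_grad=True)   # parameterises sigma = log(1 + exp(rho))
+
+    eps = torch.randn(5)                       # epsilon ~ q(epsilon) = N(0, I), independent of theta
+    sigma = torch.log1p(torch.exp(rho))
+    w = mu + sigma * eps                       # w = t(theta, epsilon)
+
+    f = (w ** 2).sum() + (mu ** 2).sum()       # toy f(w, theta) depending on both w and theta
+    f.backward()                               # gradients w.r.t. mu and rho, via the chain rule above
+    print(mu.grad, rho.grad)
+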

+",20339,,20339,,2/22/2019 13:02,2/22/2019 13:02,,,,1,,,,CC BY-SA 4.0 +10818,2,,10812,2/22/2019 14:45,,23,,"

The first-visit and the every-visit Monte-Carlo (MC) algorithms are both used to solve the prediction problem (or, also called, ""evaluation problem""), that is, the problem of estimating the value function associated with a given (as input to the algorithms) fixed (that is, it does not change during the execution of the algorithm) policy, denoted by $\pi$. In general, even if we are given the policy $\pi$, we are not necessarily able to find the exact corresponding value function, so these two algorithms are used to estimate the value function associated with $\pi$.

+ +

Intuitively, we care about the value function associated with $\pi$ because we might want or need to know ""how good it is to be in a certain state"", if the agent behaves in the environment according to the policy $\pi$.

+ +

For simplicity, assume that the value function is the state value function (but it could also be e.g. the state-action value function), denoted by $v_\pi(s)$, where $v_\pi(s)$ is the expected return (or, in other words, expected cumulative future discounted reward), starting from state $s$ (at some time step $t$) and then following (after time step $t$) the given policy $\pi$. Formally, $v_\pi(s) = \mathbb{E}_\pi [ G_t \mid S_t = s ]$, where $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1}$ is the return (after time step $t$).

+ +

In the case of MC algorithms, $G_t$ is often defined as $\sum_{k=0}^{T-t-1} R_{t+k+1}$, where $T \in \mathbb{N}^+$ is the last time step of the episode, that is, the sum goes up to the final time step of the episode, $T$. This is because MC algorithms, in this context, often assume that the problem can be naturally split into episodes and each episode proceeds in a discrete number of time steps (from $t=0$ to $t=T$).

+ +

As I defined it here, the return, in the case of MC algorithms, is only associated with a single episode (that is, it is the return of one episode). However, in general, the expected return can be different from one episode to the other, but, for simplicity, we will assume that the expected return (of all states) is the same for all episodes.

+ +

To recapitulate, the first-visit and every-visit MC (prediction) algorithms are used to estimate $v_\pi(s)$, for all states $s \in \mathcal{S}$. To do that, at every episode, these two algorithms use $\pi$ to behave in the environment, so that to obtain some knowledge of the environment in the form of sequences of states, actions and rewards. This knowledge is then used to estimate $v_\pi(s)$. How is this knowledge used in order to estimate $v_\pi$? Let us have a look at the pseudocode of these two algorithms.

+ +

+ +

$N(s)$ is a ""counter"" variable that counts the number of times we visit state $s$ throughout the entire algorithm (i.e. from episode one to $num\_episodes$). $\text{Returns(s)}$ is a list of (undiscounted) returns for state $s$.

+ +

I think it is more useful for you to read the pseudocode (which should be easily translatable to actual code) and understand what it does rather than explaining it with words. Anyway, the basic idea (of both algorithms) is to generate trajectories (of states, actions and rewards) at each episode, keep track of the returns (for each state) and number of visits (of each state), and then, at the end of all episodes, average these returns (for all states). This average of returns should be an approximation of the expected return (which is what we wanted to estimate).

+ +

The differences of the two algorithms are highlighted in $\color{red}{\text{red}}$. The part ""If state $S_t$ is not in the sequence $S_0, S_1, \dots, S_{t-1}$"" means that the associated block of code will be executed only if $S_t$ is not part of the sequence of states that were visited (in the episode sequence generated with $\pi$) before the time step $t$. In other words, that block of code will be executed only if it is the first time we encounter $S_t$ in the sequence of states, action and rewards: $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$ (which can be collectively be called ""episode sequence""), with respect to the time step and not the way the episode sequence is processed. Note that a certain state $s$ might appear more than once in $S_0, A_0, R_1, S_1, A_1, R_2 \ldots, S_{T-1}, A_{T-1}, R_T$: for example, $S_3 = s$ and $S_5 = s$.

+ +

Do not get confused by the fact that, within each episode, we proceed from the time step $T-1$ to time step $t = 0$, that is, we process the ""episode sequence"" backwards. We are doing that only to more conveniently compute the returns (given that the returns are iteratively computed as follows $G \leftarrow G + R_{t+1}$).

+ +

So, intuitively, in the first-visit MC, we only update the $\text{Returns}(S_t)$ (that is, the list of returns for state $S_t$, that is, the state of the episode at time step $t$) the first time we encounter $S_t$ in that same episode (or trajectory). In the every-visit MC, we update the list of returns for the state $S_t$ every time we encounter $S_t$ in that same episode.
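
+ +

To make the verbal description above concrete, here is a minimal Python sketch of first-visit MC prediction (my own illustration, not the book's pseudocode; generate_episode is an assumed helper that rolls out one episode with $\pi$ and returns a list of (state, action, reward) tuples). Dropping the first-visit check turns it into every-visit MC:

+ +
+    from collections import defaultdict
+
+    def first_visit_mc_prediction(env, pi, num_episodes, gamma=1.0):
+        returns_sum = defaultdict(float)   # sum of observed returns per state
+        visit_count = defaultdict(int)     # N(s): number of (first) visits per state
+        V = defaultdict(float)             # running estimate of v_pi(s)
+        for _ in range(num_episodes):
+            episode = generate_episode(env, pi)      # assumed helper: [(S_t, A_t, R_{t+1}), ...]
+            states = [s for s, _, _ in episode]
+            G = 0.0
+            for t in reversed(range(len(episode))):  # process the episode backwards
+                s, _, r = episode[t]
+                G = gamma * G + r                    # return from time step t
+                if s not in states[:t]:              # first-visit check; remove it for every-visit MC
+                    returns_sum[s] += G
+                    visit_count[s] += 1
+                    V[s] = returns_sum[s] / visit_count[s]
+        return V
+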

+ +

For more info regarding these two algorithms (for example, the convergence properties), have a look at section 5.1 (on page 92) of the book ""Reinforcement Learning: An Introduction"" (2nd edition), by Andrew Barto and Richard S. Sutton.

+",2444,,2444,,2/22/2019 18:57,2/22/2019 18:57,,,,0,,,,CC BY-SA 4.0 +10822,1,10834,,2/22/2019 16:29,,2,1974,"

I have a reinforcement learning agent with both a positive and a negative terminal state. After each episode during training, I am recording whether a success or failure occurred, and then I can compute a running ratio of success to failure.

+ +

I am seeing a phenomenon where, at some point in time, my agent achieves a reasonably high success rate (~80%) for a 100-episode running average. However, with further training, it seems to 'train itself out' of this behavior and ends the training sequence with a very low success rate (~10-20%).

+ +

I am using an epsilon-greedy strategy whereby epsilon decays linearly from 1.0 to 0.1 for the first 10% of episodes and then remains at 0.1 for the remaining 90%. As such, the 'training out' appears to occur some time where exploration only occurs with 10% probability.
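
+ +

For reference, the schedule is essentially a linear decay followed by a constant floor, i.e. something like the following sketch (the total episode count is a placeholder):

+ +
+    def epsilon_for(episode, total_episodes, eps_start=1.0, eps_end=0.1, decay_fraction=0.1):
+        # Linear decay over the first 10% of episodes, then a constant floor of 0.1.
+        decay_episodes = max(1, int(total_episodes * decay_fraction))
+        if episode >= decay_episodes:
+            return eps_end
+        return eps_start + (eps_end - eps_start) * episode / decay_episodes
+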

+ +

What could be causing this undesirable behavior? How can I combat it?

+",22525,,2444,,2/22/2019 17:06,2/23/2019 0:44,What is happening when a reinforcement learning agent trains itself out of desired behavior?,,1,3,,,,CC BY-SA 4.0 +10823,1,,,2/22/2019 16:51,,3,200,"

I have a questionnaire consisting of over 10 questions. The questionnaire is being answered by a lot of people, whose answers I have manually graded. Each question can give the user up to 10 points, depending on how they have answered.

+ +

Let's say that my dataset is big enough, how would I go about using a neural network to automatically grade these questions for me?

+ +

I have used a convolutional neural network before for image classification, but, when dealing with text classification, where should I start? Is there some sort of tutorial out there that covers this with a similar example?

+",22580,,2444,,3/11/2020 2:25,8/8/2020 3:06,How can I train a neural network to grade the user's answers to a questionnaire?,,1,0,,,,CC BY-SA 4.0 +10824,2,,10810,2/22/2019 17:36,,1,,"

You could use a tool to decompose compound words. There are many open-source tools for various languages. For example: german-word-splitter

+",22581,,,,,2/22/2019 17:36,,,,0,,,,CC BY-SA 4.0 +10825,2,,10806,2/22/2019 18:05,,0,,"

In General

+ +

Each of those projects has open-source code on GitHub that you can look at. If you do some quick googling, you'll find that software for these basic regressors exists for different deep learning frameworks. Usually, these types of projects only include pseudocode for the custom or complicated layers that are involved in the detector. There isn't much of a need to include pseudocode for a simple convolution because it's a well-established operation. I just included a few links here, but if you look around you'll find a bunch of implementations in different frameworks.

+ +

SSD

+ +

https://github.com/amdegroot/ssd.pytorch +https://github.com/weiliu89/caffe/tree/ssd

+ +

FasterRCNN (Better version of RCNN)

+ +

https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN

+ +

YOLO V3

+ +

V3 -- https://itnext.io/implementing-yolo-v3-in-tensorflow-tf-slim-c3c55ff59dbe +V1 -- https://github.com/hizhangp/yolo_tensorflow

+ +

Aside

+ +

As an aside, unless you're trying to do novel research or fit a specific function, these detectors usually work pretty well out of the box. Running a base-level detector and then modifying it for your application is usually a safer route, unless you have a deep understanding of how these networks work.

+",17408,,,,,2/22/2019 18:05,,,,0,,,,CC BY-SA 4.0 +10826,1,11002,,2/22/2019 18:51,,1,287,"

Say I have a set of data generated by someone. It could be either bytes from a photo or readings from bio-sensors, and I have a huge amount of said information, from many people or subjects. Which AI algorithms could be used to learn that a set of data belongs to a subject? To train the system, I would have the mapping that one huge set of data belongs to Bob and another belongs to Alice.

+",22586,,,,,3/4/2019 18:16,Which AI algorithm to use to identify a subject from many unknown factors,,1,0,,,,CC BY-SA 4.0 +10830,1,11206,,2/22/2019 20:55,,5,2280,"

I'm trying to replicate the DeepMind paper results, so I implemented my own DQN. I left it training for more than 4 million frames (more than 2000 episodes) on SpaceInvaders-v4 (OpenAI Gym), and it still couldn't finish a full episode. I tried two different learning rates (0.0001 and 0.00125); it seems to work better with 0.0001, but the median score never rises above 200. I'm using a double DQN. Here are my code and some photos of the graphs I'm getting each session. Between sessions I'm saving the network weights, and I'm updating the target network every 1000 steps. I can't see what I'm doing wrong, so any help would be appreciated. I'm using the same CNN architecture as in the DQN paper.

+ +

Here's the action selection function; it uses a batch of 4 processed 80x80 grayscale experiences to select the action (s_batch stands for state batch):

+ +
    def action_selection(self, s_batch):
+        action_values = self.parallel_model.predict(s_batch)
+        best_action = np.argmax(action_values)
+        best_action_value = action_values[0, best_action]
+        random_value = np.random.random()
+
+        if random_value < AI.epsilon:
+            best_action = np.random.randint(0, AI.action_size)
+        return best_action, best_action_value
+
+ +

Here is my training function. It uses past experiences for training; I tried to implement it so that, if the agent loses a life, it doesn't get any extra reward, so, in theory, the agent would try not to die:

+ +
    def training(self, replay_s_batch, replay_ns_batch):
+        Q_values = []
+        batch_size = len(AI.replay_s_batch)
+        Q_values = np.zeros((batch_size, AI.action_size))
+
+        for m in range(batch_size):
+
+            Q_values[m] = self.parallel_model.predict(AI.replay_s_batch[m].reshape(AI.batch_shape))
+            new_Q = self.parallel_target_model.predict(AI.replay_ns_batch[m].reshape(AI.batch_shape))
+            Q_values[m, [item[0] for item in AI.replay_a_batch][m]] = AI.replay_r_batch[m]
+
+            if np.all(AI.replay_d_batch[m] == True):
+                Q_values[m, [item[0] for item in AI.replay_a_batch][m]] = AI.gamma * np.max(new_Q)    
+
+        if lives == 0:
+            loss = self.parallel_model.fit(np.asarray(AI.replay_s_batch).reshape(batch_size,80,80,4), Q_values, batch_size=batch_size, verbose=0)
+
+        if AI.epsilon > AI.final_epsilon:
+            AI.epsilon -= (AI.initial_epsilon-AI.final_epsilon)/AI.epsilon_decay
+
+ +

replay_s_batch is a batch of (batch_size) experience replay states (packs of 4 experiences), and replay_ns_batch holds the corresponding packs of 4 next states. The batch size is 32.

+ +

And here are some results after training:

In blue, the loss (I think it's correct; it's near zero). The red dots are the individual match scores (as you can see, it sometimes achieves really good scores). In green, the median (near 190 in this training, with learning rate = 0.0001).

Here is the last training, with lr = 0.00125; the results are worse (its median is about 160). In any case, the line is almost straight; I don't see any variation in either case.

So, can anyone point me in the right direction? I tried a similar approach with Pendulum and it worked properly. I know that with Atari games it takes more time, but I think a week or so is enough, and it seems to be stuck. In case someone needs to see another part of my code, just tell me.

+ +

Edit: With the suggestions provided, I modified the action_selection function. Here it is:

+ +
def action_selection(self, s_batch):
+    if np.random.rand() < AI.epsilon:
+        best_action = env.action_space.sample()
+    else:
+        action_values = self.parallel_model.predict(s_batch)
+        best_action = np.argmax(action_values[0])
+    return best_action
+
+ +

To clarify my last edit: with action_values you get the q values; with best_action you get the action which corresponds to the max q value. Should I return that or just the max q value?

+",9818,,9818,,2/23/2019 19:11,3/13/2019 18:59,My DQN is stuck and can't see where the problem is,,1,1,,,,CC BY-SA 4.0 +10831,2,,10823,2/22/2019 20:57,,1,,"

Use embedding layers, which are mainly used for text.

+ +

Input the question number and the text that the student wrote to the algorithm. Turn the problem into a regression of a number from 0 to 10 (or a classification with 10 classes; see which one gives better performance). So, the NN will look at the text, given a question number, and try to figure out which 'points' it should get. It will be a supervised learning problem, since you already have examples of text rated from 0 to 10.
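
+ +

A minimal Keras sketch of this idea (the vocabulary size, sequence length and layer sizes below are arbitrary assumptions, not tuned values):

+ +
+    from tensorflow.keras import layers, Model
+
+    text_in = layers.Input(shape=(200,), name='answer_tokens')       # padded token ids of the answer
+    question_in = layers.Input(shape=(1,), name='question_number')
+
+    x = layers.Embedding(input_dim=20000, output_dim=64)(text_in)    # embedding layer for the text
+    x = layers.GlobalAveragePooling1D()(x)
+    x = layers.concatenate([x, question_in])
+    x = layers.Dense(64, activation='relu')(x)
+    score = layers.Dense(1)(x)                                       # regression: predicted points 0-10
+
+    model = Model([text_in, question_in], score)
+    model.compile(optimizer='adam', loss='mse')
+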

+",22590,,,,,2/22/2019 20:57,,,,0,,,,CC BY-SA 4.0 +10833,1,,,2/22/2019 23:50,,1,47,"

Here is what I understand (what I think I understand).

+ +

We first train our model on our images using transfer learning.

+ +

So now we have a pre-trained model.

+ +

For each image in our dataset, we compute selective search on it, which makes 2000 region proposals. These 2000 region proposals are fed through our pre-trained NN.

+ +

However we only collect the output (feature maps) from the last convolution layer. These outputs are saved to a hard-disk.

+ +

These feature maps are fed into an SVM for another round of training, where another label, ""no object"", is added.

+ +

We also have a regression model that trains on the bounding-box (window) coordinates that we also annotated.

+ +

So we have an SVM and a regression model (two models) that we train.

+ +

1) Is the above correct?

+ +

2) Is each of these 2000 region proposals hand-labeled (correct label (cat, dog, etc.) or no-object) before feeding it into the SVM?

+ +

3) Is the regression model tied into the SVM model? Basically, is our loss a combination of both the regression coordinates and the SVM classification?

+",3460,,,,,2/23/2019 9:36,Having trouble understanding some of the details of R-CNN (first one),,1,0,,,,CC BY-SA 4.0 +10834,2,,10822,2/23/2019 0:29,,4,,"

This is a common problem and does have a name. It is called ""catastrophic forgetting"" (link is just to a paper I found randomly when searching for the term).

+ +
+

What could be causing this undesirable behavior?

+
+ +

It happens only when using function approximation for value functions (e.g. a neural network trained to learn Q values), and is caused by the agent's own success. After a while the only samples that you will train with will be near-optimal high return cases, and a neural network will optimise the Q value approximation only for that recent data in order to get the best loss. This will usually mean much poorer predictions on unseen but different data, as the network overfits to states and actions only seen in the optimal policy. Eventually the agent will explore into an area where its predictions are way off. Then, because Q learning also uses its own predictions to bootstrap new Q values, this can start a runaway feedback process where the agent starts to choose a suboptimal path through the environment.

+ +

Inside the hidden layers and weights, the neural network may have lost the ability to differentiate well between the states on the old, worse path and the newer better ones. It didn't need to differentiate, because it stopped needing to reduce loss on any data about the old states. So it will also start incorrectly associating the now poor results with the more optimal paths. It will behave at least partially as if the correct policy was set by the overfit Q predictions, but the values need adjusting - so as well as (correctly) reducing its value predictions of the suboptimal paths it has just encountered, it will also (incorrectly) reduce the value predictions of the optimal paths.

+ +

Sometimes, the swing back to receiving lower returns during this process is so strong and/or incorrectly associated with the high return states along with the low ones, that the agent never properly recovers. Other times, the agent can re-learn fully and you get random oscillations over time between optimal and non-optimal agent behaviour.

+ +
+

How can I combat it?

+
+ +

This is still an ongoing area of research. Here are a couple of things that have worked for me:

+ +
    +
  • Low learning rates, and defences against sudden large gradients (e.g. gradient clipping).

  • +
  • Regularisation. Sadly dropout seems not to work in RL, but weight decay is still useful to prevent over-fitting, and it also helps combat catastrophic forgetting because it prevents bootstrap estimates of long-unseen state/action combinations from returning radically different Q values to the rest of the system.

  • +
  • Keep some early experience around from when the agent was still performing badly - this allows the agent to still train with some bad cases and prevents the Q function predicting that ""everything is awesome"" because it still has examples to learn from where this is not the case.

    + +
      +
    • For simple environments, such as inverted pendulum, just keeping some very early fully random behaviour in the experience replay table is enough. For instance, if you have a table with 10000 observations (of $s, a, r, s'$), keep 1000 of the first experiences in that table and don't discard them when the table is full (a minimal sketch of such a buffer is given after this list). For more complex environments, this is not so useful, as the early random behaviour is too far removed from what the agent learns.

    • +
    • The DQN ""rainbow"" paper uses prioritized experience replay to focus on areas where Q value predictions from the NN are not matching the observations.

    • +
  • +
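
+ +

As mentioned in the third bullet point, here is a minimal sketch of a replay buffer that permanently keeps its earliest experiences (my own illustration; the capacities are arbitrary):

+ +
+    import random
+
+    class EarlyKeepingReplayBuffer:
+        # Keeps the first `keep_first` transitions forever; only recent ones are discarded.
+        def __init__(self, capacity=10000, keep_first=1000):
+            self.keep_first = keep_first
+            self.capacity = capacity
+            self.early, self.recent = [], []
+
+        def add(self, transition):                    # transition = (s, a, r, s_next, done)
+            if len(self.early) < self.keep_first:
+                self.early.append(transition)         # early experience is never discarded
+            else:
+                self.recent.append(transition)
+                if len(self.early) + len(self.recent) > self.capacity:
+                    self.recent.pop(0)                # only recent experience gets overwritten
+
+        def sample(self, batch_size=32):
+            return random.sample(self.early + self.recent, batch_size)
+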
+",1847,,1847,,2/23/2019 0:44,2/23/2019 0:44,,,,0,,,,CC BY-SA 4.0 +10836,2,,10833,2/23/2019 9:17,,1,,"
    +
  1. Yes, your description is mostly aligned with the original paper. However, Figure 1 shows they use features from the second fully connected layer (not the last convolutional layer).

  2. Yes. The training is performed by exploiting hand-annotated bounding boxes. If a proposal has IoU less than 0.3, then it is considered as a negative example (cf. section 2.3).

  3. No. Appendix C explains that box regression parameters are learned separately from the SVM model.
+",21726,,21726,,2/23/2019 9:36,2/23/2019 9:36,,,,0,,,,CC BY-SA 4.0 +10837,2,,10772,2/23/2019 11:43,,2,,"
+

So, here's is the question: Is it true that a non-stationary policy must satisfy this condition?

+ +

$$ \forall i, j \in \mathbb{N}, s \in S, \pi (i, s) = \pi(j, s) $$

+
+ +

With your custom notation (which certainly isn't common, but seems reasonable)... I assume you meant to say that a stationary policy must satisfy that condition, rather than that a non-stationary policy must satisfy that condition. In that case, yeah, that seems correct to me. A stationary policy would satisfy that condition, and a non-stationary one wouldn't.

+ +
+ +

Wrapping back to the more usual notation, where $\pi(S, A)$ denotes a probability of selecting $A$ in $S$ (which also still covers the case of a deterministic policy, which would simply assign a probability of $1$ to a single action, and $0$ to all others)... I think it's still interesting to consider the case where we decide to ""bake"" a time-step counter into the state representation.

+ +

With this notation, for two different time steps $t$ and $t'$, such that $t \neq t'$, I'd say that a policy $\pi$ is stationary if and only if:

+ +

$$\pi(S_t, A_t) = \pi(S_{t'}, A_{t'}) \text{ if } S_t = S_{t'} \wedge A_t = A_{t'}.$$

+ +

Note that if we decide to include $t$ in the state-representation, the case that $S_t = S_{t'}$ with $t \neq t'$ will actually never hold within the same episode, states at different time steps will always automatically be different from each other if the time step is one of the ""features"" encoded in the state. So, within a single episode a policy with a ""time-aware"" state representation will always automatically be stationary because there cannot be any repeating states. Of course, if you start looking across multiple different episodes, this can change; this is what you're doing when you write:

+ +
+

Once we reset the environment to evaluate the policy again, the turn, which we store in the state, will go back to 0, but the background number of evaluations won't reset and it will be 3.

+
+ +

If you chose to also embed that ""episode counter"" into the state representation, you would also no longer have any state repetitions at all anymore across different episodes (I don't think doing this would ever be a good idea though).

+",1641,,,,,2/23/2019 11:43,,,,1,,,,CC BY-SA 4.0 +10838,2,,10772,2/23/2019 12:12,,1,,"

I think you're overthinking it. I've never seen a formalisation of the concept of ""stationary policy"" (apart from yours). However, in general, ""stationary"" means that something does not change (over time). In the context of reinforcement learning, you can interpret it in such a way that it is consistent with the context where you find this expression, unless you decide to formalise this expression (like you've tried in this question).

+ +

I think it might be useful to differentiate between the learning phase and the ""inference"" (or ""behaviour"") phase, even though these two might interleave in the RL context, that is, you might be using a policy (to behave in the real-world) even though you're still attempting to find the best policy (and I am not just referring to on-policy algorithms, but, in general, you might be using a policy to behave in the real-world which is sub-optimal while your learning algorithm, like Q-learning, is attempting to find the best policy).

+ +

During the learning phase, the policy will keep changing over time (anyway), because you still haven't found the optimal policy. So, you could call the policy derived from the $Q$ values, during the $Q$-learning algorithm, a non-stationary policy, because it keeps changing (because the $Q$ values also keep changing). However, it is often the case that these are considered different policies associated with different approximations of the $Q$ function.

+ +

You could also call a policy that changes in response to changes of the (dynamics of the) environment a non-stationary policy. In this case, you would call such an environment a non-stationary environment (because e.g. its transition model might keep changing over time).

+ +

The problem that you describe when you compare the state $i$ with states that encode time only arises because of your definition of state $i$. You can also reset $i$ when you reset the environment.

+",2444,,2444,,2/24/2019 13:27,2/24/2019 13:27,,,,0,,,,CC BY-SA 4.0 +10839,1,,,2/23/2019 13:30,,14,5975,"

In reinforcement learning, successive states (actions and rewards) can be correlated. An experience replay buffer was used, in the DQN architecture, to avoid training the neural network (NN), which represents the $Q$ function, with correlated (or non-independent) data. In statistics, the i.i.d. (independently and identically distributed) assumption is often made. See e.g. this question. This is another related question. In the case of humans, if consecutive data points are correlated, we may learn slowly (because the differences between those consecutive data points are not sufficient to infer more about the associated distribution).

+

Mathematically, why exactly do (feed-forward) neural networks (or multi-layer perceptrons) require i.i.d. data (when being trained)? Is this only because we use back-propagation to train NNs? If yes, why would back-propagation require i.i.d. data? Or is actually the optimisation algorithm (like gradient-descent) which requires i.i.d. data? Back-propagation is just the algorithm used to compute the gradients (which is e.g. used by GD to update the weights), so I think that back-propagation isn't really the problem.

+

When using recurrent neural networks (RNNs), we apparently do not make this assumption, given that we expect consecutive data points to be highly correlated. So, why do feed-forward NNs require the i.i.d. assumption but not RNNs?

+

I'm looking for a rigorous answer (ideally, a proof) and not just the intuition behind it. If there is a paper that answers this question, you can simply link us to it.

+",2444,,2444,,11/16/2020 12:38,10/17/2021 15:15,Why exactly do neural networks require i.i.d. data?,,3,0,,,,CC BY-SA 4.0 +10842,1,,,2/23/2019 17:01,,4,58,"

I was reading the paper by Kalchbrenner et al. titled A Convolutional Neural Network for Modelling Sentences and I am struggling to understand their definition of convolutional layer.

+

First, let's take a step back and describe what I'd expect the 1D convolution to look like, just as defined in Yoon Kim (2014).

+
+

sentence. A sentence of length n (padded where necessary) is represented as

+

$x_{1:n} = x_1 \oplus x_2 \oplus \dots \oplus x_n,$ (1)

+

where $\oplus$ is the concatenation operator. In general, let $x_{i:i+j}$ refer to the concatenation of words $x_i, x_{i+1}, \dots, x_{i+j}$. A convolution operation involves a filter $w \in \mathbb{R}^{hk}$, which is applied to a window of $h$ words to produce a new feature. For example, a feature $c_i$ is generated from a window of words $x_{i:i+h−1}$ by

+

$c_i = f(w \cdot x_{i:i+h−1} + b)$ (2).

+

Here $b \in \mathbb{R}$ is a bias term and $f$ is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of words in the sentence $\{x_{1:h}, x_{2:h+1}, \dots, x_{n−h+1:n}\}$ to produce a feature map

+

$c = [c_1, c_2, \dots, c_{n−h+1}]$, (3)

+

with $c \in \mathbb{R}^{n−h+1}$.

+
+

Meaning a single feature detector transforms every window from the input sequence to a single number, resulting in $n-h+1$ activations.

+

Whereas in Kalchbrenner's paper, the convolution is described as follows:

+
+

If we temporarily ignore the pooling layer, we may state how one computes each d-dimensional column a in the matrix a resulting after the convolutional and non-linear layers. Define $M$ to be the matrix of diagonals:

+

$M = [\text{diag}(m_{:,1}), \dots, \text{diag}(m_{:,m})]$ (5)

+

where $m$ are the weights of the d filters of the wide convolution. Then after the first pair of a convolutional and a non-linear layer, each column $a$ in the matrix a is obtained as follows, for some index $j$:

+

+

Here $a$ is a column of first order features. Second order features are similarly obtained by applying Eq. 6 to a sequence of first order features $a_j, \dots, a_{j+m'−1}$ with another weight matrix $M'$. Barring pooling, Eq. 6 represents a core aspect of the feature extraction function and has a rather general form that we return to below. Together with pooling, the feature function induces position invariance and makes the range of higher-order features variable.

+
+

As described in this question, the matrix $M$ has dimensionality $d \times (d \cdot m)$ and the vector of concatenated $w$'s has dimensionality $d \cdot m$. Thus the multiplication produces a vector of dimensionality $d$ (for a single convolution of a single window!).
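
+ +

A quick way to see this is a small shape check in numpy (the values of $d$ and $m$ below are arbitrary):

+ +
+    import numpy as np
+
+    d, m = 4, 3                       # embedding dimension and filter width (arbitrary)
+    M = np.random.randn(d, d * m)     # the matrix of diagonals from Eq. 5
+    w = np.random.randn(d * m)        # concatenation of m consecutive word-vector columns
+    a = M @ w
+    print(a.shape)                    # (4,) -> each window yields a d-dimensional column, not a scalar
+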

+

Architecture visualization from the paper seems to confirm this understanding:

+

+

The two matrices in the second layer represent two feature maps. Each feature map has dimensionality $(s + m - 1) \times d$, and not $(s + m - 1)$ as I would expect.

+

Authors refer to a "conventional" model where feature maps have only one dimension as Max-TDNN and differentiate it from their own.

+

As the authors point out, feature detectors in different rows are fully independent from each other until the top layer. Thus they introduce the Folding layer, which merges each pair of rows in the penultimate layer (by summation), reducing their number in half (from $d$ to $d/2$).

+

+

Sorry for the prolonged introduction, here are my two main questions:

+
    +
  1. What is the possible motivation for this definition of convolution (as opposed to Max-TDNN or, e.g., Yoon Kim's model)?

  2. In the Folding layer, why is it satisfying to only have dependence between pairs of corresponding rows? I don't understand the gain over no dependence at all.
+",22602,,-1,,6/17/2020 9:57,2/24/2019 11:12,What is the motivation for row-wise convolution and folding in Kalchbrenner et al. (2014)?,,0,0,,,,CC BY-SA 4.0 +10843,2,,10839,2/23/2019 20:35,,11,,"

There is an assumption behind the theory training a neural network, that also applies to many other supervised learning methods, that a training sample is representative of the data set as a whole - that it has been sampled fairly from the population that the learning algorithm has been set up to approximate.

+

The term i.i.d. stands for "independent and identically distributed". If you pick trajectories from RL then the sampling (per individual* record as used to train with) is not independent. Even if you choose the start of the trajectory randomly, you will have made one random choice then your remaining choices are made according to the trajectory - technically they are chosen by the policy and environment dynamics over a single step, which is usually not enough to make the second, third etc steps from a trajectory fully independent random samples from the training population. To make the sampled observations independent, you should make a new random choice, and to be identically distributed that choice has to be made fairly over the whole dataset.

+

If a subset of samples drawn for a mini-batch (or as consecutive individual samples) is correlated, it causes a problem for an online algorithm like gradient descent. For the algorithm to converge towards a globally optimal solution, it needs errors and the gradients they generate to be unbiased samples of the ""true"" function gradients across the loss function. Correlated data from a trajectory does not do this at all - it exhibits a strong sampling bias, causing weight updates to move consistently in the wrong direction.

+

You can demonstrate this effect by trying to learn any simple function like $y = x^2$ with $x$ from $-1$ to $+1$. Training a NN with randomly selected $x,y$ pairs gives far better results than training it serially with ~2000 records starting $(-1.000, 1.000) (-0.999, 0.998) (-0.998, 0.996)$ etc
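
+ +

A rough, self-contained sketch of that demonstration (my own illustration; the network size, learning rate and single-sample updates are arbitrary choices):

+ +
+    import numpy as np
+    import torch
+    import torch.nn as nn
+
+    x = torch.linspace(-1, 1, 2000).unsqueeze(1)
+    y = x ** 2
+
+    def train(order):
+        torch.manual_seed(0)
+        net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
+        opt = torch.optim.SGD(net.parameters(), lr=0.05)
+        for i in order:                                  # one record per update, in the given order
+            loss = (net(x[i:i + 1]) - y[i:i + 1]).pow(2).mean()
+            opt.zero_grad()
+            loss.backward()
+            opt.step()
+        return (net(x) - y).pow(2).mean().item()         # final MSE over the whole dataset
+
+    print('serial order MSE:  ', train(range(2000)))
+    print('shuffled order MSE:', train(np.random.permutation(2000)))
+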

+
+

So, why do feed-forward NNs require the i.i.d. assumption but not RNNs?

+
+

RNNs do need i.i.d. data, but the "unit" of sampling here is each sequence.

+
+

* By "individually" I mean in terms of how they are used in the training process. A "unit" here is the smallest set of records that leads to a complete measure of loss in your optimisation.

+

It is ok to draw longer trajectories at random if your smallest unit of measurable error is from a trajectory, as in Monte Carlo or TD($\lambda$) for RL, or for RNNs. There can be subtle problems with this - the trajectories still have to be i.i.d. as in they represent fair samples from the assumed population of trajectories.

+

If you are working at the level of longer trajectories, typically this reduces systemic bias: In RL it reduces bias from initial conditions due to bootstrap process. For RNNs I am less sure what this would mean, but suspect there is some equivalent. Typically, it also increases variance, meaning you may need more training samples. There is a bias/variance trade-off, which is why setting $\lambda$ somewhere between 0 and 1 in TD($\lambda$) is often the optimal choice. Note in RL this is a different source of bias and variance than discussed when considering number of parameters and regularisation in supervised learning.

+",1847,,1847,,4/6/2021 7:01,4/6/2021 7:01,,,,3,,,,CC BY-SA 4.0 +10844,2,,10839,2/23/2019 20:39,,8,,"

Suppose that we have some optimization criterion $J(x)$, which we aim to optimize (maybe maximize, maybe minimize), which we can compute for a single example $x$.

+ +

In an ""ideal world"", where we have no restrictions on computation time and memory, we would generally want to run training algorithms on the complete ""ground truth"" population. For example, if we're training a model (may be a DNN, but may also be some other kind of Machine Learning model), we'd ideally train it on the complete population of all ""real-world"" images that have ever been produced, ever will be produced, or ever could be produced (and, of course, all with accurate labels). We could write our full optimization criterion as:

+ +

$$\sum_{x \in \mathcal{P}} J(x),$$

+ +

where $\mathcal{P}$ denotes the complete population. If we have a model that we would like to train using gradient descent (such as a Neural Network), this means we have to compute the gradient with respect to some trainable parameters $\theta$:

+ +

$$\nabla_{\theta} \sum_{x \in \mathcal{P}} J(x).$$

+ +
+ +

In practice, we do not have this ideal scenario, we do not have access to the complete population $\mathcal{P}$. Often, we find ourselves approximating the population with a (hopefully rather large) dataset $\mathcal{D}$. This could be a collection of images like ImageNet, or in (Deep) Reinforcement Learning it could be a large experience replay buffer. If the dataset $\mathcal{D}$ is an accurate representation of the complete population's distribution, we can estimate the gradient of the objective we ultimately care about (computed over the complete population) by the gradient computed over the dataset $\mathcal{D}$:

+ +

$$\nabla_{\theta} \sum_{x \in \mathcal{P}} J(x) \approx \nabla_{\theta} \sum_{x \in \mathcal{D}} J(x).$$

+ +

Collecting such a dataset rather than the complete population is often actually feasible in practice. If we can afford to compute the objective/gradient over such a complete dataset, that's great. Note that in such a case, the data can already be viewed as being i.i.d.: it's the best approximation we have of a single, complete distribution (the population distribution).

+ +
+ +

However, computing the objective / gradient over a large complete dataset is often still prohibitively expensive in terms of computation time. This is one of the reasons (not the only one) why we often use minibatches $B$. Then, we approximate the gradient of the objective of the dataset (which is itself an approximation of the gradient for the complete population) by computing it only over a minibatch $B$:

+ +

$$\nabla_{\theta} \sum_{x \in \mathcal{P}} J(x) \approx \nabla_{\theta} \sum_{x \in \mathcal{D}} J(x) \approx \nabla_{\theta} \sum_{x \in \mathcal{B}} J(x).$$

+ +

If we're repeatedly (over many, many training iterations) going to use such an approximation to take gradient descent steps, it is crucial that the gradient computed over this minibatch is actually an accurate approximation of the ""true"" gradient over the complete population; if it's not an accurate approximation, we're optimizing the wrong objective! In my opinion, this means that we have an even stronger requirement for our minibatch than just wanting it to be identically distributed. It's not sufficient for our minibatch to be sampled from any arbitrary identical distribution. We want the instances in our minibatch to be sampled from one very specific identical distribution; the dataset/population distribution! If they're not all sampled from that particular distribution, they're an unreliable approximation of the objective we truly care about.

+ +
+ +

Now, you may wonder if it wouldn't be possible for multiple biased, ""unrepresentative"" minibatches with ""opposite"" biases to ""cancel each other out"" if they're used in different, subsequent gradient descent steps. Couldn't we first run a bunch of gradient descent steps on minibatches that only contain images of dogs, and afterwards ""cancel out"" any errors by also running a bunch of updates on minibatches containing only images of cats?

+ +

If you're lucky, it might work sometimes, but it's unreliable. One problem is that the first iterations using only dog images may cause you to end up in a poor area of the ""parameter space"", which you would never have ended up in if had used a correct mix of cat and dog images all along. Escaping that poor area may be much more difficult than simply ensuring you never reach it. Another problem is that it would be extremely difficult to find the ""correct"" number of gradient descent updates to run with cat images. Run too few, you're still stuck recognising only dogs. Run too many, and you may forget how to recognise dogs at all and only start recognising cats. Things like momentum in more sophisticated optimizers will exacerbate this issue.

+ +
+ +

Note that I don't think the requirement for i.i.d. batches is necessarily unique to gradient descent. Other learning techniques may have the same requirement (maybe for similar reasons, maybe for different reasons).

+",1641,,1641,,2/24/2019 14:51,2/24/2019 14:51,,,,1,,,,CC BY-SA 4.0 +10846,1,,,2/23/2019 23:22,,2,58,"

Abstract

+ +

I wish to design a neural network that will categorize messages based on criteria I have predefined. It should feature the ability to be proactively trained as it continues its lifecycle. This means a human can intervene in its categorization attempts and determine whether or not it was correct and have it adjust its weights accordingly (without having to retrain all over again).

+ +

Inputs

+ +

It is known that ALL input will follow these rules:

+ +
    +
  1. Always of $N$ length
  2. All messages are transformed to eliminate unnecessary complexity
+ +

Here's a brief overview of how an example message might be processed.

+ +

Starting with a message $M$:

+ +
+

That's an interesting perspective. I think that you should consider adding more details to your point about the cat being too silly.

+
+ +

The text is then transformed so that extra details are removed:

+ +
+

thats an interesting perspective i think that you should consider adding more details to your point about the cat being too silly

+
+ +

Then it's converted into a vector (appending $0$ to reach length $N$) ready to be processed by the neural network:

+ +
[116, 104, 97, 116, 115, 32, 97, . . . , 0, 0, 0, 0]
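
+ +

A small sketch of this transformation (the exact cleaning rule and the value of $N$ are placeholders for whatever I settle on):

+ +
+    def vectorise(message, n=140):
+        cleaned = ''.join(c for c in message.lower() if c.isalnum() or c == ' ')
+        codes = [ord(c) for c in cleaned][:n]
+        return codes + [0] * (n - len(codes))
+
+    print(vectorise('That\'s an interesting perspective.'))   # starts with 116, 104, 97, 116, 115, ...
+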
+ +

Outputs

+ +

In my network, I wish all the outputs to be weighted by how well they fit each category. I need multiple outputs. I'm not really focusing on one particular category per se, but on how well the message fits in all of them.

+ +

Following the input $M$ I used as an example, I'd expect the outputs to look something like this after my vector has fed-forward:

+ +
Suggestive:  0.89042
Opinionated: 0.68703
+ +

The weight values for each output indicate the strength of the category in the overall message.

+ +

From message $M$:

+ +
+

That's an interesting perspective.

+
+ +

Would weigh the opinionated category as $0.68703$.

+ +

And:

+ +
+

I think that you should consider adding more details to your point about the cat being too silly.

+
+ +

Would weigh the suggestive category as $0.89042$.

+ +

Summary and Questions

+ +

I'm interested in the architectural design choices of a network that would support my feature set. The main goal is to be able to train my network to categorize messages based on pre-trained (and live-trained) data. I'd like to know things like:

+ +
    +
  1. What type of neural network should I use for this purpose? I've researched LSTM & recurrent networks, which have been mentioned to be good at processing sequences (i.e. messages).

  2. What considerations should I account for when creating this network?

  3. How can the overall network support live training, so I can tell my network when it's wrong and have it 'correct' itself without having to retrain completely?
+",22614,,,,,2/23/2019 23:22,How can my Neural Network categorize message strings?,,0,6,0,,,CC BY-SA 4.0 +10847,1,10851,,2/24/2019 6:24,,2,1120,"

I understand what an admissible heuristic is, I just don't know how to tell whether one heuristic is admissible or not. So, in this case, I'd like to know why Nilsson's sequence score heuristic function isn't admissible.

+",21832,,2444,,2/4/2021 21:44,2/4/2021 21:44,Why isn't Nilsson's Sequence Score an admissible heuristic function?,<8-puzzle-problem>,1,0,,,,CC BY-SA 4.0 +10849,1,,,2/24/2019 8:58,,0,93,"

I have a complex wargame already developed in an aging Objective-C codebase, and I would like to improve the AI.

+ +

I have built the logic for self-play, fitness evaluation and evolution. The hard part is being able to run a large number of self-play games with limited resources (a single Mac). Time is a factor, but so is memory. I am facing some random crashes after a given number of games.

+ +

I was wondering:

- if people have faced the same issues with a large number of runs in Objective-C;
- if other people have tried to use genetic programming or reinforcement learning with Objective-C or C#.

+",20886,,20886,,2/24/2019 9:46,2/26/2019 11:58,Genetic programming with Objective-C,,1,0,,1/17/2021 17:39,,CC BY-SA 4.0 +10850,1,,,2/24/2019 11:14,,3,127,"

I have the execution of the Monte Carlo Tree Search (MCTS) below. I need to expand it, but I don't understand steps 1 and 2.

+ +

Why does it go to the first node and then make a new node, instead of going to the deepest left leaf? I thought it needs to go to the most probable leaf.

+ +

+ +

In step 1 of MCTS, it added a new node. Now, there are 9 cases.

+ +

+ +

In step 2 of MCTS, it added a new node. Now, there are 2 cases under the first node:

+ +

+",17603,,2444,,11/19/2019 22:47,11/19/2019 22:47,Understanding an execution of the Monte Carlo tree search algorithm,,0,0,,,,CC BY-SA 4.0 +10851,2,,10847,2/24/2019 11:49,,1,,"

I will use the 8-puzzle game to show you why Nilsson's sequence score heuristic function is not admissible. In the 8-puzzle game, you have a $3 \times 3$ board of (numbered) squares as follows.

+ +
+---+---+---+
+| 0 | 1 | 2 |
++---+---+---+
+| 7 | 8 | 3 |
++---+---+---+
+| 6 | 5 | 4 |
++---+---+---+
+
+ +

The numbers in these squares are just used to refer to the specific squares (to avoid saying the ""middle square"" or the ""upper-left square""). So, when I say square 0, I refer to the upper-left square. In these games, you have 8 ""tiles"". Let's denote these tiles by $A, B, C, D, E, F, G$ and $H$. So, in this game, there is always one square which is free (or empty), given that there are $9$ squares. The goal of this game is to reach the following configuration of tiles

+ +
+---+---+---+
+| A | B | C |
++---+---+---+
+| H |   | D |
++---+---+---+
+| G | F | E |
++---+---+---+
+
+ +

Note that, in the case of the 8-puzzle game, a ""state"" is a configuration of the board. So, the following two board configurations are two distinct states

+ +
+---+---+---+
+|   | A | C |
++---+---+---+
+| H | B | D |
++---+---+---+
+| G | F | E |
++---+---+---+
+
+ +

and

+ +
+---+---+---+
+|   | C | A |
++---+---+---+
+| H | B | D |
++---+---+---+
+| G | F | E |
++---+---+---+
+
+ +

The rules of the 8-puzzle are simple. You can move one tile (at a time) from its current position (or square) to another position, provided that the destination square is free. You can only move a tile horizontally and vertically (and one square at a time).

+ +

I will not explain in this answer how Nilsson's sequence score heuristic works. Here is an explanation of how it is used in the case of the 8-puzzle game. You should read this explanation and make sure you understand how Nilsson's heuristic works before proceeding! You can also find an explanation of how this heuristic works in the book ""Principles of Artificial Intelligence"" (at page 85), by Nils J. Nilsson (1982).

+ +
+

Why, then, isn't Nilsson's sequence score admissible?

+
+ +

A heuristic function $h$ is admissible if $h(n) \leq h^*(n)$, for all states $n$ in the state space, where $h^*(n)$ is the actual distance to reach the goal (which is often unknown in practice, hence the need to use heuristics to estimate such distance).

+ +

Note that admissibility is a property that must hold for all states. So, if we find a state where the condition above is not satisfied for Nilsson's sequence score heuristic function, then we show that Nilsson's sequence score is not admissible.

+ +

Let us create the following state

+ +
+---+---+---+
+| A | B | C |
++---+---+---+
+|   | H | D |
++---+---+---+
+| G | F | E |
++---+---+---+
+
+ +

Note that this state only differs from the goal state by one move: if we move the tile $H$ to square $7$, then we reach the goal state. So, the actual distance to reach the goal state is $1$ move. But what does Nilsson's score function tell us regarding the distance of this state to the goal state? From the algorithm presented in this answer for computing Nilsson's sequence score (of a board configuration), you can see that the score of the configuration (or state) above would be more than $1$ (you can immediately see this because you need to multiply by $3$). Therefore, Nilsson's sequence score overestimates the distance to the goal (at least for one state), thus it cannot be admissible (by definition).

+",2444,,2444,,2/24/2019 13:24,2/24/2019 13:24,,,,0,,,,CC BY-SA 4.0 +10852,5,,,2/24/2019 12:00,,0,,,2444,,2444,,2/11/2021 0:57,2/11/2021 0:57,,,,0,,,,CC BY-SA 4.0 +10853,4,,,2/24/2019 12:00,,0,,"For questions related to the A* (or A) search algorithm, which is a very famous state-space search algorithm and widely taught in Computer Science and Artificial Intelligence. A* is a best-first search algorithm that is guaranteed to find the optimal solution given an admissible heuristic function, so A* is also an informed search algorithm, as opposed to e.g. depth-first search, which is an uninformed search algorithm.",2444,,2444,,2/11/2021 0:57,2/11/2021 0:57,,,,0,,,,CC BY-SA 4.0 +10855,1,,,2/24/2019 15:56,,2,245,"

I am developing an AI tool for anomaly detection in a distributed system.  The system supports an interface that combines several individual logs into a single log file generating approx. 7000 entries/min. The logs entries are partially system generated (d-Bus, IPC, ….)  and human written statements (Status not received, initialized successfully, ….). The developers use the generated log for debugging. The entries have been configured to have a similar format depending on the generated system (timestamp, ids, component, context, verbosity level, description, ….). 

+ +

Background:
+1. The history of the identified anomalies is minimal and not archived.
+2. Not many similar event templates in log files.
+3. Software execution rules are not clearly documented.
+4. The log events are co-related.

+ +

What are the recommended algorithms (Statistical, NLP, ML, Neural networks) that can be used to efficiently perform pattern extraction on the entries and identify existing and new anomalous behavior?

+",22635,,22635,,3/2/2019 12:38,7/9/2023 21:04,Anomaly Detection in distributed system using generated log file,,1,4,,,,CC BY-SA 4.0 +10857,2,,258,2/24/2019 16:54,,0,,"

A computational model that attempts to closely mimic the human neural networks is Numenta's hierarchical temporal memory (which has not yet received much attention from the machine learning community). In their models, they explicitly model and implement dendrites and other biological concepts.

+",2444,,2444,,4/16/2019 18:39,4/16/2019 18:39,,,,0,,,,CC BY-SA 4.0 +10859,2,,10855,2/24/2019 17:47,,0,,"

In the paper ""Unsupervised real-time anomaly detection for streaming data"" (by Subutai Ahmad, Alexander Lavin, Scott Purdy and Zuha Agha), 2017, an algorithm for anomaly detection (particularly suited for cases where a stream of data is continuously provided) is described. This algorithm is based on Numenta's Hierarchical Temporal Memory model.

+ +

I've actually never used it, but I know that Numenta's work is particularly suited for anomaly detection. You can have a look at it and see if it fits your needs. Have also a look at the Numenta Anomaly Benchmark (NAB).

+",2444,,,,,2/24/2019 17:47,,,,10,,,,CC BY-SA 4.0 +10860,1,,,2/24/2019 18:08,,1,288,"

The blocked N-queens is a variant of the N-queens problem. In the blocked N-queens problem, we also have an NxN chess board and N queens. Each square can hold at most one queen. Some squares on the board are blocked and cannot hold any queens. The constraint is that no two queens may attack each other. The input to this problem is the set of queens and the blocked squares.

+ +

How do I model the blocked N queens problem as a search problem, so that I can apply search algorithms like BFS?

+",22634,,2444,,2/24/2019 19:26,7/25/2019 18:03,How do I model the blocked N queens problem as a search problem?,,1,0,,,,CC BY-SA 4.0 +10863,2,,10860,2/24/2019 19:23,,1,,"

In general, the process of modelling a problem as a search problem consists in creating a graph which contains nodes, which represent the possible states in your problem, and edges, which represent the relations between these states (that is, you will have an edge between nodes $A$ and $B$ if it is possible to go from state $A$ to state $B$, and vice-versa, in your problem).

+ +

In the case of the 8 (or, in general, N) queens problem (or blocked 8 queens problem), the states are all possible configurations of the board (that is, all possible combinations of the positions of all the 8 queens). In the graph that represents this problem, you will have a node for all possible valid configurations, where a valid configuration is one that is allowed by the rules of the game. You will also have an edge between configurations (or states) $A$ and $B$ if you can go from state $A$ to state $B$. For example, configurations where you have two queens on the same square or where you have a queen in a blocked area will not be represented in your graph as a node (or state).

+ +

In the case of the 8 queens problem, you actually have more than one solution, so you will have more than one goal state (or configuration of the board).

+ +

When using BFS, the goal will then be to find a path from the initial configuration of the queens (that is, the initial state) to one of these goal states.

+ +

In the case of the 8 queens problem, there are 4,426,165,368 possible configurations (so you will likely not be able to draw this graph).

+",2444,,2444,,2/24/2019 22:36,2/24/2019 22:36,,,,0,,,,CC BY-SA 4.0 +10865,1,,,2/25/2019 0:57,,2,280,"

What are (good) daily life examples of SAT problems?

+ +

I've thought about this one: the problem of placing a bunch of different kinds of glasses in a shared cabinet in such a way that some constraints are satisfied, such as putting the longer ones behind the shorter ones, so it will be easier to take them out when we need them.

+ +

Is it a good one, or can you think of any other better one?

+",21832,,2444,,11/28/2019 23:24,11/28/2019 23:24,What are daily life examples of SAT problems?,,1,0,,,,CC BY-SA 4.0 +10869,1,,,2/25/2019 9:51,,7,709,"

You may have heard of GPT2, a new language model. It has recently attracted attention from the general public because the organisation that published the paper, OpenAI, ironically refused to share the whole model, fearing dangerous implications. Along with the paper, they also published a manifesto to justify their choice: "Better Language Models and Their Implications". Soon, a lot of media outlets were publishing articles discussing the choice and its effectiveness at actually preventing bad implications. I am not here to discuss the ethical components of this choice, but the actual performance of the model.

+

The model got my attention too, and I downloaded the small model to play with. To be honest, I am far from impressed by the results. Sometimes the first paragraph of the produced text appears to make sense, but nine times out of ten it is gibberish by the first or second sentence. Examples given in the paper seem to be "lucky" outputs, cherry-picked by human hands. Overall, the paper may suffer from a very strong publication bias.

+

However, most articles we can read on the internet seem to take its power for granted. The MIT Technology Review wrote:

+
+

The language model can write like a human

+
+

The Guardian wrote

+
+

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

+
+

The model is generally described as a "breakthrough". These writings do not match my personal experimentation, as the produced texts are rarely coherent or syntactically correct.

+

My question is: without the release of the whole model for ethical reasons, how do we know if the model is really that powerful?

+",22654,,2444,,9/22/2020 11:26,9/22/2020 11:26,How do we know if GPT-2 is a better language model?,,1,0,,,,CC BY-SA 4.0 +10871,2,,10865,2/25/2019 14:32,,1,,"

SAT problems are decision problems that are NP-complete (NPC). Informally, this means that, although no algorithm is known that solves these problems in polynomial time, a candidate solution can be verified in polynomial time, i.e. in $O(n^c)$.

+ +

Regarding your question: first, you should check that your problem has an exponential search space and is not known to be solvable in polynomial time; after that, you have to investigate whether its candidate solutions can be verified in polynomial time or not. An easy way to model this is to consider a truth table and a Boolean expression to satisfy: your problem becomes a Boolean formula, and the different assignments of the variables are the candidate solutions that you may want to verify.

+",11599,,,,,2/25/2019 14:32,,,,0,,,,CC BY-SA 4.0 +10873,5,,,2/25/2019 21:26,,0,,,2444,,2444,,12/6/2020 20:27,12/6/2020 20:27,,,,0,,,,CC BY-SA 4.0 +10874,4,,,2/25/2019 21:26,,0,,For questions related to the crossover (aka recombination) operator in the context of evolutionary algorithms.,2444,,2444,,12/6/2020 20:27,12/6/2020 20:27,,,,0,,,,CC BY-SA 4.0 +10875,5,,,2/25/2019 21:27,,0,,,2444,,2444,,12/6/2020 20:15,12/6/2020 20:15,,,,0,,,,CC BY-SA 4.0 +10876,4,,,2/25/2019 21:27,,0,,For questions about methods (or operators) to mutate individuals (or chromosomes) in the context of evolutionary algorithms.,2444,,2444,,12/6/2020 20:15,12/6/2020 20:15,,,,0,,,,CC BY-SA 4.0 +10877,5,,,2/25/2019 21:29,,0,,,-1,,-1,,2/25/2019 21:29,2/25/2019 21:29,,,,0,,,,CC BY-SA 4.0 +10878,4,,,2/25/2019 21:29,,0,,"For questions related to the dynamic programming paradigm in the context of AI (and, in particular, reinforcement learning).",2444,,2444,,2/25/2019 22:00,2/25/2019 22:00,,,,0,,,,CC BY-SA 4.0 +10879,5,,,2/25/2019 21:32,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Hill_climbing.

+",2444,,2444,,2/25/2019 22:00,2/25/2019 22:00,,,,0,,,,CC BY-SA 4.0 +10880,4,,,2/25/2019 21:32,,0,,"For questions related to the family of algorithms called ""hill climbing"" (in the context of AI).",2444,,2444,,2/25/2019 22:00,2/25/2019 22:00,,,,0,,,,CC BY-SA 4.0 +10881,5,,,2/25/2019 21:38,,0,,,-1,,-1,,2/25/2019 21:38,2/25/2019 21:38,,,,0,,,,CC BY-SA 4.0 +10882,4,,,2/25/2019 21:38,,0,,"For questions like ""What is the difference between X and Y?"".",2444,,2444,,6/22/2019 17:07,6/22/2019 17:07,,,,0,,,,CC BY-SA 4.0 +10883,5,,,2/25/2019 22:03,,0,,"

https://en.wikipedia.org/wiki/Satisfiability

+",1671,,1671,,2/25/2019 22:03,2/25/2019 22:03,,,,0,,,,CC BY-SA 4.0 +10884,4,,,2/25/2019 22:03,,0,,For questions about the mathematical notion of satisfiability.,1671,,1671,,2/25/2019 22:03,2/25/2019 22:03,,,,0,,,,CC BY-SA 4.0 +10885,5,,,2/25/2019 22:05,,0,,,1671,,2444,,2/17/2021 17:57,2/17/2021 17:57,,,,0,,,,CC BY-SA 4.0 +10886,4,,,2/25/2019 22:05,,0,,"For questions about constraint satisfaction problems (CSPs), which are usually described by a set of variables, a set of domains, and a set of constraints. For more details, see, for example, chapter 6 of the book ""Artificial Intelligence: A Modern Approach"" (3rd Edition) by Stuart Russell and Peter Norvig.",1671,,2444,,2/17/2021 17:57,2/17/2021 17:57,,,,0,,,,CC BY-SA 4.0 +10887,2,,10201,2/26/2019 7:37,,2,,"
+

I am not familiar with using batches during network evaluation. Can someone explain what is the reason behind using it and what are advantages and disadvantages?

+
+ +

It is usually just a trade-off between memory use and speed of assessment. Larger batches evaluate faster on parallelised systems such as GPUs, but use more memory to process. Test results should be identical, with the same dataset and same model, regardless of batch size.

+ +

Typically you would set batch size at least high enough to take advantage of available hardware, and after that as high as you dare without taking the risk of getting memory errors. Generally there is less to gain than with training optimisation though, so it is not worth spending a huge amount of time optimising the batch size to each model you want to test. In most code I have seen, users pick a moderate ""safe"" value that will speed up testing but doesn't risk failing if you wanted to add a few layers to the model and check what that does.
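
For instance, here is a minimal sketch of a batched evaluation loop (assuming PyTorch; the random dataset and the untrained linear "model" are just toy stand-ins for your own test data and trained network). The accumulated accuracy is the same for any batch size that fits in memory:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # toy stand-ins: a random 10-class dataset and an untrained linear "model"
    dataset = TensorDataset(torch.randn(1000, 20), torch.randint(0, 10, (1000,)))
    model = nn.Linear(20, 10)

    loader = DataLoader(dataset, batch_size=256, shuffle=False)  # a "safe" moderate size
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():                            # no gradients needed at test time
        for inputs, labels in loader:
            outputs = model(inputs)
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    print(correct / total)                           # same result for batch_size 32, 256, 1000, ...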

+",1847,,,,,2/26/2019 7:37,,,,0,,,,CC BY-SA 4.0 +10890,1,,,2/26/2019 9:45,,3,260,"

I have heard of the concepts of learning by analogy (which is quite self-explanatory), inductive learning and explanation-based learning. I tried to learn about inductive learning and explanation-based learning, but I don't understand them.

+ +

How would you explain all these three concepts? What are the differences between them?

+ +

A link to some explanatory article/notes/blog post are appreciated too.

+",22673,,2444,,3/28/2019 14:28,3/28/2019 14:28,"What are the differences between learning by analogy, inductive learning and explanation based learning?",,0,1,,,,CC BY-SA 4.0 +10892,1,,,2/26/2019 10:23,,2,36,"

How to detect presence of object (highly occluded) in a scene?

+ +

There are specific features (small patterns, etc), which allow to say that object is present, but it is not enough for detection for YOLO or RPCNN.

+ +

How to detect small specific pattern in a whole image efficiently?

+",22677,,,,,2/26/2019 10:23,Presence of object (highly occluded vehicle) in a scene,,0,3,,,,CC BY-SA 4.0 +10894,2,,10849,2/26/2019 11:58,,2,,"

I have not touched Obj-C, but I've played with evolution in PHP, which wasn't designed for that at all. If a slow scripting language on my 10-year-old desktop PC can do it, Obj-C should be able to handle it.

+ +

Some tricks:

+ +
    +
  1. This is a game, so - I assume - you've disabled all the graphics. A headless program is ideal for training. Waste no CPU cycles on stuff you don't need!
  2. +
  3. Use more threads/instances. Threads around the number of cores should be ideal.
  4. +
  5. Watch out for memory leaks! Even languages with GC can have problems with this when used in the wrong way. But you'll need to fix these bugs anyway. You can monitor memory consumption when you run it, and if it increases more or less steadily, that's a problem. There are various tools against memory leaks under different systems/languages; I guess Obj-C has something similar. (These tools log every allocation and release, and list everything that is still allocated at exit.)
  6. +
+",22418,,,,,2/26/2019 11:58,,,,1,,,,CC BY-SA 4.0 +10895,1,,,2/26/2019 14:01,,1,262,"

I want to apply the concepts behind the Dialogflow API to my e-commerce website.

+ +

I have found some references in this regard:

+ +
    +
  1. Tokenization
  2. +
  3. Part Of Speech
  4. +
  5. Named Entity Recognition
  6. +
  7. Rule based
  8. +
+ +

I have looked at these references, but I still don't understand how to implement them on the website, so I don't really know how it works.

+ +

Please give me a method or explanation that can help me create a chatbot for e-commerce that can take an action when a user asks for a product and wants to place an order, or do something else.

+ +

Please give me some explanation or method or references :(

+",22686,,,,,3/7/2019 7:01,How to make chatbot using NLP like Dialogflow?,,1,0,,9/23/2021 16:42,,CC BY-SA 4.0 +10896,2,,5769,2/26/2019 15:25,,0,,"

Just to make two details absolutely clear:

+ +

Say you have $N$ 2D input channels going to $N$ 2D output channels. The total number of 2D $3\times3$ filters is actually $N^2$. But how does the 3D convolution combine them? That is, if every input channel contributes one 2D layer to every output channel, then each output channel is initially composed of $N$ 2D layers; how are they combined?

+ +

This tends to be glossed over in almost every publication I've seen, but the key concept is the $N^2$ 2D output channels are interleaved with each other to form the $N$ output channels, like shuffled card decks, before being summed together. This is all logical when you realize that along the channel dimensions of a convolution (which is never illustrated), you actually have a fully connected layer! Every input 2D channel, multiplied by a unique $3\times 3$ filter, yields a 2D output layer contribution to a single output channel. Once combined, every output layer is a combination of every input layer $\times$ a unique filter. It's an all to all contribution.

+ +

The easiest way to convince yourself of this is to imagine what happens in other scenarios and see that the computation becomes degenerate - that is, if you don't interleave and recombine the results, then the different outputs wouldn't actually do anything - they'd have the same effect as a single output with combined weights.
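
To make the bookkeeping concrete, here is a minimal sketch (assuming PyTorch, which the original answer does not mention) verifying that a layer with $N$ input and $N$ output channels stores $N^2$ separate $3\times3$ kernels, and that each output channel is the sum of $N$ per-input-channel 2D convolutions:

    import torch
    import torch.nn.functional as F

    N = 4
    conv = torch.nn.Conv2d(N, N, kernel_size=3, padding=1, bias=False)
    print(conv.weight.shape)   # torch.Size([4, 4, 3, 3]) -> N^2 = 16 separate 2D kernels

    x = torch.randn(1, N, 8, 8)
    y = conv(x)

    # Reproduce output channel 0 manually: convolve each input channel with its
    # own 3x3 kernel, then sum the N resulting 2D maps.
    manual = sum(
        F.conv2d(x[:, i:i+1], conv.weight[0:1, i:i+1], padding=1)
        for i in range(N)
    )
    print(torch.allclose(y[:, 0:1], manual, atol=1e-5))  # True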

+",22689,,1641,,3/3/2019 17:57,3/3/2019 17:57,,,,0,,,,CC BY-SA 4.0 +10898,1,,,2/26/2019 20:44,,1,291,"

I have just found the paper and documentation about GAN 2.0, the new face creator from Nvidia.

+ +

On the website https://thispersondoesnotexist.com/ they have used this approach to create realistic faces. Unfortunately, the website does not exist anymore.

+ +

Is there another webpage demonstrating the new face creator from Nvidia?

+",21103,,,,,2/27/2019 3:36,New face generator from Nvidia,,2,0,,,,CC BY-SA 4.0 +10899,2,,10898,2/27/2019 1:49,,0,,"

There is a Youtube video: Nvidia - AI for Generating High-Resolution Images - Tero Karras

+ +

I don't know about webpages.

+",22418,,,,,2/27/2019 1:49,,,,0,,,,CC BY-SA 4.0 +10900,2,,10898,2/27/2019 3:36,,1,,"

You can find a decent sized collection on archive.org. Just browse through the snapshots and they'll contain a few images. It probably doesn't contain every single one, but it has quite a decent set to start from. :)

+ +

Here's one I found:

+ +

+",22614,,,,,2/27/2019 3:36,,,,0,,,,CC BY-SA 4.0 +10903,1,,,2/27/2019 6:07,,2,143,"

Why are the nodes (or neurons) in neural networks depicted as circles?

+

What is the difference between a circle and a box in diagrams of neural networks?

+",22707,,2444,,1/12/2021 0:53,1/12/2021 0:53,Why are the nodes (or neurons) in neural networks depicted as circles?,,1,1,,,,CC BY-SA 4.0 +10904,1,10905,,2/27/2019 6:17,,5,2677,"

I have already implemented a relatively simple DQN on Pacman.

+ +

Now I would like to clearly understand the difference between a DQN and the techniques used by AlphaGo zero/AlphaZero and I couldn't find a place where the features of both approaches are compared.

+ +

Also sometimes, when reading through blogs, I believe different terms might in fact be the same mathematical tool which adds to the difficulty of clearly understanding the differences. For example, variations of DQN e.g. Double DQN also uses two networks like alpha zero.

+ +

Does someone have a good reference regarding this question? It could be a book or an online resource.

+",18845,,2444,,2/28/2019 22:46,3/13/2019 9:19,What is the difference between DQN and AlphaGo Zero?,,2,0,,,,CC BY-SA 4.0 +10905,2,,10904,2/27/2019 8:08,,6,,"

DQN and AlphaZero do not share much in terms of implementation.

+

However, they are based on the same Reinforcement Learning (RL) theoretical framework. If you understand terms like MDP, reward, return, value, policy, then these are interchangeable between DQN and AlphaZero. When it comes to implementation, and what each part of the system is doing, then this is less interchangeable. For instance two networks you have read about in AlphaZero are the policy network and value network. Whilst double DQN alternates between two value networks.

+

Probably the best resource that summarises both DQN and AlphaZero, and explains how they extend the basic RL framework in different ways is Sutton & Barto's Reinforcement Learning: An Introduction (second edition) - Chapter 16 sections 5 and 6 cover the designs of DQN Atari, AlphaGo and AlphaZero in some depth.

+

In brief:

+

DQN Atari

+
    +
  • Is model-free
  • +
  • Uses an action value estimator for $Q(s,a)$ values, based on a Convolutional Neural Network (CNN)
  • +
  • Uses experience replay and temporarily frozen target network to stabilise learning process
  • +
  • Uses a variety of tricks to simplify and standardise the state description and reward structure so that the exact same design and hyperparameters work across multiple games, demonstrating that it is a general learner.
  • +
+

AlphaZero

+
    +
  • Is model based (although some of the learning is technically model-free, based on samples of play)
  • +
  • Uses a policy network (estimating $\pi(a|s)$) and a state value network (estimating $V(s)$), based on CNNs. In practice for efficiency the NN for these share many layers and parameters, so how many "networks" there are depends how you want to count them. +
      +
    • The earlier AlphaGo version had 4 separate networks, 3 variations of policy network - used during play at different stages of planning - and one value network.
    • +
    +
  • +
  • Is designed around self-play
  • +
  • Uses Monte Carlo Tree Search (MCTS) as part of estimating returns - MCTS is a planning algorithm critical to AlphaZero's success, and there is no equivalent component in DQN
  • +
+",1847,,-1,,6/17/2020 9:57,3/13/2019 9:19,,,,5,,,,CC BY-SA 4.0 +10906,2,,10903,2/27/2019 8:19,,1,,"

I guess the reason is that we call them nodes, and nodes are usually depicted using circles in graph theory. We model neural networks as graphs for both the forward and backward passes.

+",11599,,,,,2/27/2019 8:19,,,,0,,,,CC BY-SA 4.0 +10907,2,,10798,2/27/2019 9:00,,4,,"
+

Why is the action selection random with Sarsa?

+
+ +

A policy could be stochastic. In the case of SARSA, it is stochastic because of the use of $\epsilon$-greedy.

+ +
+

Isn't it on-policy and therefore ϵ-greedy?

+
+ +

I don't quite understand the question. SARSA is on-policy evaluation with $\epsilon$-greedy policy. Q-learning is off-policy evaluation with $\epsilon$-greedy policy. $\epsilon$-greedy is just a way to turn an action-value function into a policy.
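
For concreteness, a minimal sketch (with hypothetical names) of $\epsilon$-greedy action selection from a tabular action-value function could look like this:

    import numpy as np

    def epsilon_greedy(Q, state, epsilon=0.1):
        if np.random.rand() < epsilon:
            return np.random.randint(Q.shape[1])   # explore: uniform random action
        return int(np.argmax(Q[state]))            # exploit: greedy action

    Q = np.zeros((5, 3))                           # toy table: 5 states, 3 actions
    action = epsilon_greedy(Q, state=0, epsilon=0.1)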

+ +
+

Because Expected-Sarsa is off-policy the experience it learns from can be from any policy ... How can Exected-Sarsa learning from such policy be generally better than normal Sarsa learning from an ϵ-greedy policy, especially with the same amount of experience?

+
+ +

It is unfair to compare experiences of a different nature, because off-policy experience contains less useful information. Thus, for a fair comparison, both SARSA and Expected SARSA should learn from their own on-policy experience.

+ +

While the Expected SARSA update step is guaranteed to reduce the expected TD error, SARSA can only achieve that in expectation (by taking many updates with a sufficiently small learning rate). From this perspective, there is little doubt that Expected SARSA should be better.

+ +
+

Probably more general: How can on-policy and off-policy algorithms be compared in such way (e.g. through variance) even though their concepts and assumptions are so different?

+
+ +

Same as the previous answer, it is unfair to compare them without the same quality of experiences.

+",9793,,,,,2/27/2019 9:00,,,,2,,,,CC BY-SA 4.0 +10908,2,,10663,2/27/2019 9:13,,1,,"

It might seem to give the same update direction but would it converge to desirable policy parameters?

+ +

Actor-Critic is proposed alongside the policy gradient theorem in Sutton 1999. It is shown to maximize the state-value function. If you are able to show that the technique of yours is, in fact, maximizing some desirable objective function, you could propose it with some soundness as well.

+",9793,,,,,2/27/2019 9:13,,,,0,,,,CC BY-SA 4.0 +10909,1,,,2/27/2019 9:25,,5,319,"

In the context of Deep Q Networks, a target network is usually utilized. The target network is a slow-changing network whose changing rate is a hyperparameter. This includes both a full replacement every $N$ iterations and a slow (soft) update at every iteration.

+ +

Since the rate is hard to fine-tune manually, is there an alternative technique that can eliminate the use of the target network, or at least make it less susceptible to the changing rate?

+",9793,,9793,,2/27/2019 12:19,4/10/2019 11:42,Is there an alternative to the use of target network?,,2,0,,,,CC BY-SA 4.0 +10910,1,,,2/27/2019 10:45,,0,2286,"

I am trying to solve part b of the exercise 3.6 (page 113) from the book Artificial Intelligence: A Modern Approach.

+

More specifically, I need to give a complete problem formulation (that is precise enough to be implemented) for the following problem.

+
+

A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable 3-foot-high crates.

+

Give the initial state, goal test, successor function, and cost function for each of the following. Choose a formulation that is precise enough to be implemented.

+
+",19448,,-1,,6/17/2020 9:57,4/1/2020 12:43,"How can I solve part b of exercise 3.6 from the book ""Artificial Intelligence: A Modern Approach""?",,2,1,,,,CC BY-SA 4.0 +10911,1,,,2/27/2019 11:42,,1,352,"

Suppose I want to build a neural network regression model that takes one input and return one output.

+ +

Here's the training data:

+ +
0.1 => 0.1
+0.2 => 0.2
+0.1 => -0.1
+
+ +

You will see that there are two inputs of 0.1 that map to different output values, 0.1 and -0.1. So, what will happen with most machine learning models is that they will predict the average when 0.1 is fed to the model, e.g. the output for 0.1 will be (0.1 + (-0.1))/2 = 0.

+ +

But this average answer of 0 is an incorrect answer. I want the model to tell me that the input is ambiguous/insufficient to infer the output. Ideally, the model would report this as a form of confidence.

+ +

How do I report predictability confidence from the input?

+ +

An application that I find very useful in many areas is that I could later ask the model to show me which inputs are easy to predict and which inputs are ambiguous. This would allow me to collect data that makes sense.

+ +

One way I know of is to train the model and then check the error on each training example; if it's high, it probably means that the input is ambiguous. But if you know of any other papers or better techniques, I would appreciate hearing about them!

+",20819,,,,,9/11/2019 0:44,How to make machine learning model that reports ambiguity of the input?,,2,0,,,,CC BY-SA 4.0 +10912,2,,10082,2/27/2019 11:47,,9,,"

Generally researchers (Ghandar et al, Michalewicz, Lam) have used the profit or return on investment (ROI) as a reward (fitness) function.

+

$ROI = \frac{ \left[\sum_{t=1}^T (Price_t - sc) \times I_s(t) \right] - \left[ \sum_{t=1}^T (Price_t + bc) \times I_b(t) \right] }{ \left[ \sum_{t=1}^T (Price_t + bc) \times I_b(t) \right] }$

+

where $I_b(t)$ and $I_s(t)$ are equal to one if a rule signals a buy and sell, respectively, and zero otherwise; $sc$ represents the selling cost and $bc$ the buying cost. ROI is the difference between final bank balance and starting bank balance after trading.
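
As a toy illustration (hypothetical names, not taken from the cited papers), the ROI above could be computed from daily prices and buy/sell indicator lists as follows:

    def roi(prices, buy, sell, bc, sc):
        bought = sum(p + bc for p, b in zip(prices, buy) if b)    # total spent on buys
        sold = sum(p - sc for p, s in zip(prices, sell) if s)     # total received on sells
        return (sold - bought) / bought

    prices = [10.0, 10.5, 11.0, 10.8]
    buy    = [1, 0, 0, 0]       # the rule signals a buy on day 0
    sell   = [0, 0, 1, 0]       # and a sell on day 2
    print(roi(prices, buy, sell, bc=0.05, sc=0.05))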

+

You are correct, that the machine learning algorithm will then be influenced by spikes just before a sell.

+

Nicholls et al showed that using the average profit or area under the trade resulted in better performing trading rules. This approach was used by Schoreels et al. This approach focuses on being in the market to capitalize on profit. It does not penalize the trading rule when it is in the market and the market is going down. The accumulated asset value (AAV) is defined as:

+

$AAV = \frac{\sum_{i=1}^N [(Price_s - sc) - (Price_b + bc)]}{N}$

+

where $i$ is a buy and sell trading event, $N$ is the number of buy and sell events, $s$ the day the sale took place, and $b$ is the day the purchase took place.

+

Nicholls MSc thesis [available April 2019] showed that the fitness function used by Allen and Karjalainen is the preferred fitness function when evolving trading rules for the JSE using evolutionary programs.

+

Allen and Karjalainen used a fitness function based on the compounded excess returns over the buy-and-hold (buy the first day, sell the last day) strategy. The excess return is given by:

+

$\Delta r = r - r_{bh}$

+

where the continuously compounded return of the trading rule is computed as

+

$r = \sum_{t=1}^T r_t I_b(t) + \sum_{t=1}^T r_f I_s(t) + n\log\left(\frac{1-c}{1+c'}\right)$

+

and the return for the buy-and-hold strategy is calculated as

+

$r_{bh} = \sum_{t=1}^T r_t + \log\left(\frac{1-c}{1+c'}\right)$

+

In the above,

+

$r_t = \log P_t - \log P_{t-1}$

+

and $P$ is the daily close price for a given day $t$, $c$ denotes the one-way transaction cost; $r_f$ is the risk free cost when the trader is not trading, $I_b(t)$ and $I_s(t)$ are equal to one if a rule signals buy and sell, respectively, and zero otherwise; $n$ denotes the number of trades and $r_{bh}$ represents the returns of a buy-and-hold, while $r$ represents the returns of the trader.

+

A fixed trading cost of $c = 0.25\%$ of the transaction was defined but this could be anything like a STATE fee + Brocker fee + Tax, and might even be 2 different values, one for buying and one for selling. Which was the approach used by Nicholls. The continuously compounded return function rewards an individual when the share value is dropping and the individual is out of the market. The continuously compounded return function penalises the individual when the market is rising and the individual is out of the market.

+

I would recommend that you use the compounded excess return over the buy and hold strategy as your reward function.

+",20508,,38401,,7/6/2020 5:03,7/6/2020 5:03,,,,4,,,,CC BY-SA 4.0 +10913,2,,10064,2/27/2019 12:02,,1,,"

I would use the proposed fitness function defined in your other StackExchange question. I would then inflate the buying price used in the equation by \$100, and leave the selling price as the price the shares were sold for. This would only reward a trading rule when profit is greater than \$100.

+ +

If you want to minimise spikes, it might be best to normalise the data going into your learning algorithm, for example, use an x-day moving average instead of the actual share price.

+",20508,,,,,2/27/2019 12:02,,,,0,,,,CC BY-SA 4.0 +10914,2,,10911,2/27/2019 12:34,,1,,"

Predicting with confidence: the best machine learning idea you never heard of by Scott Locklin might provide you an idea.

+ +
+

The name of this basket of ideas is “conformal prediction.”

+
+",22544,,,,,2/27/2019 12:34,,,,0,,,,CC BY-SA 4.0 +10915,1,,,2/27/2019 13:30,,2,35,"

I am reading the paper ""Transformation Invariance in Pattern Recognition – Tangent Distance and Tangent Propagation"", where the tangent vector is calculated for the given curve $s(P,\alpha)$ at $\alpha=0$ by differentiating with respect to $\alpha$, that is, $\frac{\partial s(P,\alpha)}{\partial\alpha}$. For the curve, I have taken one $2D$ image and I am rotating it with matrix $R=\left[\matrix{cos(\alpha)\space -sin(\alpha)\\sin(\alpha) \space\space\space\space\space cos(\alpha)} \right]$.

+ +

As my image is fixed, the curve is just a function of $\alpha$. Therefore, to find the tangent vector, what I am doing is as follows:

+ +
    +
  1. I am rotating the image by the matrix $R^{'}$ which is $R^{'}=\left[\matrix{-sin(\alpha)\space -cos(\alpha)\\cos(\alpha) \space\space\space\space\space -sin(\alpha)} \right]$

  2. +
  3. This rotates the image by $90$ degrees, which is not the expected result.

  4. +
+ +

I have done the same exercise by differentiating numerically and I am getting the expected answer which is as follows:
[image: numerically computed result at $\alpha = 0$]

+ +

Please help me understand my mistake in taking the derivative of the matrix and multiplying it with the image.

+",22712,,2444,,2/27/2019 14:30,2/27/2019 14:30,"Calculating tangent vector of curve s(P,$\alpha$) at given point $\alpha$ = 0",,0,2,,,,CC BY-SA 4.0 +10916,1,,,2/27/2019 15:49,,4,81,"

How do you efficiently choose the hyper-parameters of a neural network (e.g. the learning rate, number of layer, weights, etc.)?

+",22717,,2444,,11/30/2021 13:02,11/30/2021 13:02,How do you efficiently choose the hyper-parameters of a neural network?,,1,1,,,,CC BY-SA 4.0 +10917,1,,,2/27/2019 16:15,,1,57,"

I hope this question is not too broad or general. I have a very large set of images, all of which contain text (some have more, some less). All of them have been tagged as containing, say, English text or Korean. I wonder if convolutional neural networks would be a good approach to classify these images as containing English vs. Korean. Or is there any existing literature/method that does this already? Crucially though, I am not interested in ""understanding"" the text, so this is not an NLP task but, I suppose, a task of classifying orthographies in the images.

+",22719,,,,,8/8/2023 4:08,Using convnet to classify language of text contained in images,,2,0,,,,CC BY-SA 4.0 +10918,2,,10916,2/27/2019 16:34,,1,,"

In the deep learning era, there are two common strategies: the Caviar approach and the Panda approach.

+ +

Caviar Approach

+ +

In this approach, it is supposed that you have a very powerful cluster system that enables you to run different models simultaneously on different nodes. In this manner, you construct a d-dimensional space in which each dimension corresponds to one hyperparameter. Then you can use a grid approach to partition the space and, for each intersection of the grid, try a possible initialisation of the hyperparameters, although there are better ways to find good initialisations than a regular grid.
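
As a toy illustration of searching the hyperparameter space (a random-search variant of the grid idea above; all names are hypothetical and the training routine is just a stub), one could do something like:

    import random

    def train_and_evaluate(params):
        # placeholder: train a model with these hyperparameters and return a
        # validation score (here just a random number so the sketch runs)
        return random.random()

    def sample_hyperparameters():
        return {
            "learning_rate": 10 ** random.uniform(-4, -1),       # log-uniform scale
            "num_layers": random.randint(2, 6),
            "hidden_units": random.choice([64, 128, 256, 512]),
        }

    best_score, best_params = -1.0, None
    for _ in range(20):                                           # trial budget
        params = sample_hyperparameters()
        score = train_and_evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    print(best_score, best_params)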

+ +

Panda Approach

+ +

This approach is widely used among students, who usually only have desktop computers. In this approach, you usually try to find an initialisation of the hyperparameters based on your experience, or on other available architectures, and then refine them step by step using cross-validation.

+ +
+ +

To answer your question, the efficiency depends on your computing power. For more details, you can take a look at the contents of the third week.

+",11599,,11599,,2/27/2019 16:43,2/27/2019 16:43,,,,5,,,,CC BY-SA 4.0 +10919,1,10920,,2/27/2019 17:27,,2,280,"

What is the fundamental difference between NN for classifying data and generating data?

+ +

Most examples show how neural networks can be used to classify data. Like is it an image of a dog or a cat. However, there are applications where NN are used to create images and even write short stories.

+",19413,,19413,,2/28/2019 8:19,2/28/2019 8:19,What is the fundamental difference between neural networks for classifying and generating data,,1,0,,,,CC BY-SA 4.0 +10920,2,,10919,2/27/2019 18:14,,2,,"

The formal name for this difference is "generative" vs "discriminative" models.

+

By default, a supervised learning process using a simple feed-forward neural network and a set of training data with expected answers will produce a discriminative model. It is hard to use such a model to generate content directly.

+

Differences between discriminative and generative models tend to be at the architecture level or deeper, although a few generative techniques are simple variants or re-purposing of discriminative networks. There is no single "fundamental" difference in terms of NN design. About the only common theme is that generative models are more complex and harder to work with than discriminative ones.

+

Images

+
    +
  • Probably the most popular currently for image generation are Generative Adversarial Networks or GANs. These train a Generator to create an image from random inputs, alongside a Discriminator that tries to spot fakes compared to real images. Training them together results in a generator network that gets progressively better at creating fake data.

    +
  • +
  • Also suitable for image generation are Variational Autoencoders (VAEs) which learn essentially by compressing a set of images down into a small representation, and as a result become able to "uncompress" similar representations.

    +
  • +
  • There are also VAEGANs, which combine VAEs and GANs

    +
  • +
  • There are other models which generate examples from a dataset. For example, Restricted Boltzmann Machines (RBMs) are similar to neural networks, but the neurons fire randomly, and RBMs require a different training process to NNs.

    +
  • +
  • GANs, VAEs, VAEGANs, RBMs can be used to generate data in any simple non-sequential datasets, they are not restricted to just images, but recent work with GANs for example has excelled there.

    +
  • +
  • For a different kind of generator, you can look at how Deep Dream works. The interesting thing here is that it is a modification of a feed forward network that has been trained by supervised learning. Essentially to run Deep Dream, you take an existing image and change it so that it maximises some internal part(s) of the existing neural network, "training" the image as if it were a set of weights.

    +
      +
    • The difference between Deep Dream and other generative techniques is that it does not generate content that could be in the training set - it is no good for creating "realistic" images.
    • +
    +
  • +
+

Text

+
    +
  • Recurrent Neural Networks, and specifically Long Short-Term Memory (LSTM) networks, can be used to model word sequences by training them to predict the next word. Because the resulting model is probabilistic (e.g. the chance of the next word being "Hello" is ~0.001%), a simple method to make this generative is to sample from the predictions randomly, and then feed the samples back into the sequence to see what it will predict next (see the short sketch after this list). There is a great blog post about this technique by Andrei Karpathy.
  • +
+
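
Here is the short sketch referred to in the LSTM bullet above: a toy sampling loop in which a fixed distribution over a tiny vocabulary stands in for a trained model that would return next-word probabilities given the sequence so far.

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]

    def predict_next(tokens):
        # stand-in for model(tokens): return a probability vector over the vocabulary
        return np.ones(len(vocab)) / len(vocab)

    tokens = ["the"]
    for _ in range(10):
        probs = predict_next(tokens)                       # next-word distribution
        tokens.append(vocab[np.random.choice(len(vocab), p=probs)])  # sample and feed back
    print(" ".join(tokens))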

Audio and Music

+
    +
  • DeepMind's WaveNet architecture is similar to a CNN predicting the next samples of audio from a given input. It can be used in speech or music generation.

    +
  • +
  • LSTMs are a popular choice here, as well as for language models.

    +
  • +
+

One important caveat. Nearly all these models are hard to understand, work with and train, compared to simpler supervised techniques. Probably the easiest to get to grips with if you want to understand them well enough to implement your own versions, are Deep Dream, VAEs and LSTMs.

+

It is also worth noting that there are many other ways to generate content than neural networks. For instance, sound and music generation has a long history completely independent of recent AI developments.

+",1847,,-1,,6/17/2020 9:57,2/27/2019 18:32,,,,2,,,,CC BY-SA 4.0 +10922,1,,,2/27/2019 19:13,,2,105,"

How would a quantum computer potentially facilitate artificial consciousness, assuming it is possible?

+",22725,,2444,,12/9/2021 21:44,12/9/2021 21:55,"How would a quantum computer potentially facilitate artificial consciousness, assuming it is possible?",,0,3,,,,CC BY-SA 4.0 +10923,2,,4456,2/27/2019 19:17,,5,,"

Model-Free RL

+

In Model-Free RL, the agent does not have access to a model of the environment. By environment I mean a function which predicts state transition and rewards.

+

As of the time of writing, model-free methods are more popular and have been researched extensively.

+

Model-Based RL

+

In Model-Based RL, the agent has access to a model of the environment.

+

The main advantage is that this allows the agent to plan by thinking ahead. Agents distill the results of planning ahead into a learned policy. A famous example of Model-Based RL is AlphaZero.

+

The main downside is that a ground-truth representation of the environment is usually not available.

+
+

Below is a non-exhaustive taxonomy of RL algorithms, which may help you to visualize better the RL landscape.

+

+",14390,,-1,,6/17/2020 9:57,2/27/2019 19:17,,,,0,,,,CC BY-SA 4.0 +10924,1,,,2/28/2019 3:56,,2,5759,"

The use of a target network is to reduce the chance of value divergence, which could happen with off-policy samples trained with semi-gradient objectives. In a Deep Q Network, semi-gradient TD is used and, with experience replay, the training could diverge.

+ +

The target network is a slow-changing network designed to slowly track the main value network. In Mnih 2013, it was designed to match the main network every $N$ steps. There is another way, which slowly updates the weights in the direction of the main network at every step. Some call the latter Polyak updates (or Polyak averaging).

+ +

I have done some very limited experiments with comparable update rates (e.g. a hard update every $N=10$ steps versus a Polyak update with rate 0.1), and I usually see Polyak updates give smoother progress and converge faster. My experiments are by no means conclusive.

+ +

I would therefore like to ask whether it is known which one performs better, converges faster, or has smoother progress, over a wider range of tasks and settings.

+",9793,,,,,2/28/2019 12:54,"In DQN, updating target network every N steps or slowly update every step is better?",,1,0,,,,CC BY-SA 4.0 +10926,2,,10924,2/28/2019 12:54,,1,,"

Most papers, and my experience, support the hard update, once per $N$ steps. $N$ is usually very big, ranging between $10^4$ and $10^6$. DQN training is slow, but that depends on the problem. If your DQN converges with a soft update with weight ~0.1 (roughly $N \approx 10$), your problem could be very simple.
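
For reference, here is a minimal sketch (assuming PyTorch, and toy linear networks standing in for the real value networks) of the two update schemes being compared: a hard copy once per N steps versus a soft (Polyak) update at every step.

    import torch.nn as nn

    main_net = nn.Linear(4, 2)      # toy stand-ins for the online and target networks
    target_net = nn.Linear(4, 2)

    def hard_update(target, source):
        target.load_state_dict(source.state_dict())         # copy once every N steps

    def polyak_update(target, source, tau=0.01):             # applied at every step
        for t_p, p in zip(target.parameters(), source.parameters()):
            t_p.data.copy_(tau * p.data + (1.0 - tau) * t_p.data)

    hard_update(target_net, main_net)
    polyak_update(target_net, main_net, tau=0.1)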

+",22745,,,,,2/28/2019 12:54,,,,2,,,,CC BY-SA 4.0 +10927,2,,10904,2/28/2019 13:04,,1,,"

You can actually combine AlphaZero-like approach with DQN: +A* + DQN

+",22745,,,,,2/28/2019 13:04,,,,2,,,,CC BY-SA 4.0 +10928,2,,5274,2/28/2019 19:20,,2,,"

In addition to Jaden's excellent answer ""no one is trying to actually make a “conscious” AI because we don’t know what that word means yet"" I'd like to add that the word ""yet"" there is highly optimistic.

+ +

It's highly problematic and likely impossible to distinguish between a conscious being and a being that behaves exactly as if it was conscious. Philosophers have been struggling with that for centuries; some even espoused solipsism, which is a ""I live in the Matrix"" philosophy. In particular, how can you tell whether your childhood friend or your spouse or anybody else is a conscious being rather than an embodiment of AI that acts exactly as a conscious being would?

+ +

It's possible, of course, to go ""if it walks like a duck and quacks as a duck then it's a duck"" way. In that case a Turing Test passing AI would be automatically considered conscious. However, most people wouldn't accept the duck criteria of consciousness; otherwise they would very soon have to call their Alexa operated household appliances conscious.

+ +

My two cents are basically the same as Jaden's, except that I'm more pessimistic about ever understanding what consciousness is.

+",22751,,22751,,2/28/2019 21:31,2/28/2019 21:31,,,,2,,,,CC BY-SA 4.0 +10930,1,,,2/28/2019 23:57,,1,98,"

I have a bayesian network, which has the following data:

+ +

$P(S) = 0.07$

+ +

$P(A) = 0.01$

+ +

$P(F \mid S,A) = 1.0$

+ +

$P(F \mid S, \lnot A) = 0.7$

+ +

$P(F \mid \lnot S, A) = 0.9$

+ +

$P(F \mid \lnot S, \lnot A) = 0.1$

+ +

And I'm asked to get $P(F \mid S)$. Is it possible? How can I deduce it?

+",22760,,2444,,12/13/2021 9:10,12/13/2021 9:10,"Is it possible to compute $P( F \mid S )$ given $P(F \mid S,A)$ and $P(F \mid S, \lnot A)$ in Bayesian network?",,3,0,,,,CC BY-SA 4.0 +10932,2,,10909,3/1/2019 4:24,,6,,"

I have done some research and would like to share.

+ +

Generally, to eliminate the use of the target network, one needs to show that training would be stable under off-policy semi-gradient updates.

+ +

There are two approaches that might work:

+ +
    +
  1. Experience reweighting
  2. +
  3. Constrained optimization
  4. +
+ +

Experience reweighting

+ +

Probably the simplest idea is to use importance sampling ratio (Precup 2001) to multiply each sample. This would correct the off-policy sample distribution to be on-policy distribution. It has been shown (Sutton 2016) that on-policy samples lead to stability for semi-gradient. However, this line of work that corrects for sample distribution has high variance and does not work well in practice.
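
As a toy illustration of what "multiplying each sample by the ratio" means (hypothetical names, not taken from the cited papers), a tabular TD(0) evaluation update reweighted in this way could look like:

    def is_weighted_td_update(V, s, a, r, s_next, pi, b, alpha=0.1, gamma=0.99):
        rho = pi[s][a] / b[s][a]                  # importance sampling ratio
        td_error = r + gamma * V[s_next] - V[s]
        V[s] += alpha * rho * td_error            # reweighted semi-gradient TD(0) step

    V = {0: 0.0, 1: 0.0}
    pi = {0: {0: 0.9, 1: 0.1}}                    # target policy probabilities
    b = {0: {0: 0.5, 1: 0.5}}                     # behaviour policy probabilities
    is_weighted_td_update(V, s=0, a=0, r=1.0, s_next=1, pi=pi, b=b)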

+ +

Another line of work aimed to partially correct the distribution only to the extent that would be provably stable is called Emphatic TD (Sutton 2016). The distribution is still mostly off-policy but is proved to be stable under linear function approximation.

+ +

Constrained optimization

+ +

The general wisdom comes from the fact that updating a value would also alter the target value since they share the same set of parameters. This problem is called over-generalization. To reduce this, Durugkar suggests that constraining the target Q-value after parameter update to be steady helps reduce divergence. Achiam 2019 provides a good overview of the problem, and suggests a way to make sure that the update is non-expansion (which should prevent divergence).

+ +

Sidenote

+ +

Some works have shown convergence under off-policy samples but only in tabular case. It takes much more to show that it is stable in function approximation (even a linear one).

+ +

Works like:

+ +
    +
  • Q($\lambda$) (Harutyunyan 2016), which augments the value function with a correction term
  • +
  • Retrace($\lambda$) (Munos 2016), which extends the former by truncating the importance sampling ratio
  • +
+ +

are shown to work only in tabular case.

+",9793,,9793,,4/10/2019 11:42,4/10/2019 11:42,,,,0,,,,CC BY-SA 4.0 +10933,1,,,3/1/2019 6:07,,1,45,"

i am new to machine learning. i'm trying to identify driving pattern through accelerometer and gyroscope sensor. i have been collecting the data of both the sensors and have been storing them in .csv extension. i am not able to identify a pattern in the datasets since it has a lot of datas. i have three independent variables of accelerometer and with that i need to identify the sudden acceleration and sudden breaking and i have three independent variables of gyroscope and i need to identify aggressive turns. can you suggest as to how i need to analyse the pattern and find a algorithm which suits my requirement. this is how the dataset is

+ +

+",22736,,22736,,3/1/2019 9:40,3/1/2019 9:40,identifying pattern in datasets,,0,2,,,,CC BY-SA 4.0 +10934,1,,,3/1/2019 8:48,,1,39,"

Is there a method (not a table of recommendations!) that could tell me what activation function to choose if the outputs of the neural network have some interpretation? For example, these can be the mean of some normal distribution, probabilities in multinomial distribution, parameters of the exponential distribution, and so on.

+",11359,,2444,,11/6/2020 2:33,11/6/2020 2:33,How do I choose the activation function of the output layer of a neural network (based on theoretical motivations)?,,0,2,,,,CC BY-SA 4.0 +10935,1,,,3/1/2019 8:53,,1,42,"

I'm currently doing a research project related to Distributed Tracing. My research has led me to a point where I think ML might be suited for our problem.

+ +

I'm looking for papers that are similar to this (even if they have other applications):

+ +

I want to match packets exiting a black-box system (outputs) to packets that enter a black box (inputs). I can do that easily in a non concurrent setting which should help me grow a training set (maybe for supervised learning), but I need an algorithm that, in a concurrent setting, can separate the different request ""flows"" if you will.

+ +

I hope this makes sense.

+ +

The closest thing to what I'm looking for is ""Aguilera, Marcos K., et al. ""Performance debugging for distributed systems of black boxes."" ACM SIGOPS Operating Systems Review. Vol. 37. No. 5. ACM, 2003."" but it's mostly suited for finding the dependency graph of the system, which I already know.

+ +

Thank you

+",22763,,,,,3/25/2020 14:02,Machine Learning papers for matching packets to request flows,,0,0,,,,CC BY-SA 4.0 +10936,1,,,3/1/2019 9:15,,0,411,"

A generative adversarial network (GAN) takes a vector of numbers as input and generates an image, based on the input. Each element of the vector causes some feature of the image to change, but the mapping between input and output is not clear, as often happens in deep learning. What is the best way to study the correlation between the vector elements and the output image features? The first approach that comes to mind is to manually change every element and check the result, however I am not sure that this is the best solution.

+",16671,,,,,3/1/2019 10:07,How to study the correlation between GAN's input vector and output image,,1,0,,,,CC BY-SA 4.0 +10938,2,,10930,3/1/2019 9:50,,0,,"

I do not think you can compute $P(F \mid S=s)$ only using your given probabilities (and no further independence assumption between your random variables).

+ +

First of all, note that $P(F \mid S=s)$ is the probability of $F$ (being equal to one of the values that $F$ can attain), given that the value of $S$ is $s$. Note also that $P(F \mid S)$ is a shorthand for $P(F \mid S=s)$.

+ +

In general, by the law of total probability, we have

+ +

\begin{align} +P(A) +&= P(A, B) + P(A, B^c) \\ +&= P(A|B)P(B) + P(A|B^c)P(B^c) \\ +&= P(B|A)P(A) + P(B|A^c)P(A^c) +\end{align}

+ +

where $B^c$ is the complement of $B$ (in case $A$ and $B$ are sets or events).

+ +

So, in your specific case, we have

+ +

\begin{align} +P(F \mid S) +&= P(F \mid S, A)P(A \mid S) + P(F \mid S, \lnot A)P(\lnot A \mid S) \\ +&= P(F \mid S, A) \frac{P(A, S)}{P(S)} + P(F \mid S, \lnot A)\frac{P(\lnot A, S)}{P(S)} \\ +&= \frac{P(F, S, A)}{P(A, S)} \frac{P(A, S)}{P(S)} + \frac{P(F,S, \lnot A)}{P(\lnot A, S)}\frac{P(\lnot A, S)}{P(S)} \\ +&= \frac{P(F, S, A)}{P(S)} + \frac{P(F,S, \lnot A)}{P(S)} \\ +&= \frac{1}{P(S)} (P(F, S, A) + P(F,S, \lnot A) ) \\ +&= \frac{P(F, S)}{P(S)} +\end{align}

+ +

We have $P(F \mid S, A)$, $P(S)$ and $P(F \mid S, \lnot A)$, but we do not have $P(A, S)$, $P(F, S, A)$ or $P(F, S)$ (and I think we cannot retrieve them from your given probabilities).

+ +

(I will be happy to be corrected, if this conclusion is wrong. Maybe I'm not seeing another way of computing it now.)

+",2444,,2444,,3/1/2019 13:34,3/1/2019 13:34,,,,0,,,,CC BY-SA 4.0 +10939,2,,10936,3/1/2019 10:07,,1,,"

Let me first provide a brief introduction first

+ +

GANs as VAEs are generative models which means they learn exactly what you described: to map a typically small dimensional vector/tensor into a higher dimensional one, which in your case is (interpreted as) an image.

+ +

These 2 generative models differ in the actual learning strategy which results typically in making GAN outperform VAE in realistic image generation (but it’s better not to get into the reasons for this here)

+ +

However in principle the Generator Network in GAN works in a similar way to the Decoder Network in VAE so as their output space is interpreted as an image, you can interpret their input space as a sort of semantic space hence the values you are setting define the semantic of the output image (in fact you observe the features change)

+ +

It is possible to say the generative model learns a Semantic to Appearance Mapping hence it works the opposite way of the classifier model which learns an Appearance to Semantic Mapping

+ +

Unfortunately as it typically happens in NN this mapping are definitely hard to understand for humans, nevertheless this is an active area of research

+ +

So after this introduction, aimed at giving you an essential understanding of the problem, I can suggest you some papers to get into more details like this one

+ +

GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

+ +

In case you found it not that clear, I’m planning to write a summary of this paper and publish it on my Medium so you could check it there (I’ll update this answer or add a comment to notify about it)

+",1963,,,,,3/1/2019 10:07,,,,0,,,,CC BY-SA 4.0 +10941,2,,2598,3/1/2019 11:42,,1,,"

There is a fundamental difference between Generative Models and Discriminative Models, to simplify it we could say that

+ +
    +
  • given $S$ Semantic Space (low dimensionality) and

  • +
  • given $A$ Appearance Space (high dimensionality, e.g. space of $W \times H$ images)

  • +
+ +

the Discriminator learns the function $f(A) \rightarrow S$ and the Generator learns the function $g(S) \rightarrow A$

+ +

According to this formalism we can say that $g \neq f^{-1}$ and a simple reason for this is $f$ is not invertible because it is not injective, in fact 2 different horror movie posters will be assigned the same label “horror movie” (assuming the network works well :) )

+ +

So you have to find another way to define a mapping from $S$ to $A$ and this kind of job is related to the Generative Models

+ +

Certainly GAN are state of the art in image generation but technically they are not the only choice, e.g. you could use a VAE as the Decoder Subnetwork is a generative model

+ +

However from a practical point of view do not expected VAE results would that good: you’ll probably get blurred stuff like this one

+ +

+ +

so go for GAN but beware that training them properly is definitely not easy

+",1963,,,,,3/1/2019 11:42,,,,0,,,,CC BY-SA 4.0 +10943,2,,10930,3/1/2019 12:49,,0,,"

I believe you can deduce it. Using the product rule:

+ +

$p(x,y) = p(x\mid y)p(y)$

+ +

we have:

+ +

$P(F\mid S) = \frac{P(F,S)}{P(S)}$

+ +

we have $P(S)$ and we do not have $P(F,S)$, but we can use the addition rule:

+ +

$p(x) = \sum\limits_{y} p(x, y)$

+ +

$P(F, S) = P(F,S,A) + P(F,S,\lnot A)$

+ +

first term on the right side, using the product rule, is:

+ +

$P(F,S,A) = P(F\mid S,A)P(S,A)$

+ +

I assume that $S$ and $A$ are (marginally) independent, so we have:

+ +

$P(F,S,A) = P(F\mid S, A)P(S)P(A)$

+ +

for the second term on the right side, following the same logic, we have:

+ +

$P(F,S,\lnot A) = P(F\mid S, \lnot A)P(S)P(\lnot A)$

+ +

we have all the needed members so in the end we have:

+ +

$P(F\mid S) = \frac{P(F\mid S, A)P(S)P(A) + P(F\mid S, \lnot A)P(S)P(\lnot A)}{P(S)}$
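
Note that $P(S)$ cancels, so this simplifies to $P(F\mid S) = P(F\mid S, A)P(A) + P(F\mid S, \lnot A)P(\lnot A)$. As a quick sanity check, plugging in the numbers from the question (under the independence assumption above) gives:

$P(F\mid S) = 1.0 \times 0.01 + 0.7 \times 0.99 = 0.703$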

+",20339,,,,,3/1/2019 12:49,,,,2,,,,CC BY-SA 4.0 +10944,1,10945,,3/1/2019 14:09,,11,2205,"

I'm interested in knowing whether there exists any neural network that solves (with >= 80% accuracy) any non-trivial problem and that uses very few nodes (where 20 nodes is not a hard limit). I want to develop an intuition about the sizes of neural networks.

+",22365,,2444,,3/1/2019 15:58,3/1/2019 17:43,Are there neural networks with very few nodes that decently solve non-trivial problems?,,1,4,,,,CC BY-SA 4.0 +10945,2,,10944,3/1/2019 15:35,,13,,"

Even if it’s impossible to answer this question properly, as non trivial is not well defined (maybe the author will edit this questions later, to specify it better), I take the opportunity to point out this paper which looks interesting to me

+ +

Smallest Neural Network to Learn the Ising Criticality

+ +

Assuming you have a general idea of the Ising Model, I think the problem of identifying the critical temperature from a data-driven perspective can be considered non-trivial, and the paper shows how the authors have improved on earlier work for this task, going from 100 hidden neurons, as used in the 2017 paper Machine learning phases of matter, to only 2 hidden neurons

+ +

Just my cents:

+ +
    +
  • reducing the neurons, while keeping good performance, should help in terms of neural processing interpretability which is notoriously obscure and its complexity grows (exponentially) with the number of neurons
  • +
+",1963,,1963,,3/1/2019 17:43,3/1/2019 17:43,,,,2,,,,CC BY-SA 4.0 +10946,2,,10917,3/1/2019 18:17,,0,,"

This sounds like a fairly straightforward task, with low risk. I think the proper term is that you are trying to detect the script, which would be either Latin (""English"") or Hangul (""Korean""). The risk is that you end up learning fonts, though.

+",16378,,,,,3/1/2019 18:17,,,,0,,,,CC BY-SA 4.0 +10947,1,11044,,3/1/2019 18:45,,4,252,"

I was reading an article on Medium and wanted to make it clear whether a bot created on IBM Watson is an intelligent one or unintelligent.

+ +
+

Simply put, there are 2 types of chatbots — unintelligent ones that act using predefined conversation flows (algorithms) written by developers building them and intelligent ones that use machine learning to interact with users.

+
+",22676,,,,,3/16/2023 5:25,Does IBM Watson use machine learning?,,2,1,,,,CC BY-SA 4.0 +10948,1,10955,,3/1/2019 19:33,,1,124,"

The reinforcement learning paradigm has the aim to determine the optimal actions for a robot. A typical example is a maze finding robot, but reinforcement learning can also be used for training a robot to play the pong game. The principle is based on a reward function. If the robot is able to solve a problem, he gets a score from the underlying game engine. The score can be positive, if the robot reaches the end of a maze, or it can be negative, if he is colliding with an obstacle. The principle itself is working quite well, that means for simpler applications it is possible to train a robot to play a game with reinforcement learning.

+ +

Chatbots are a different category of artificial intelligence. They are working not with actions but with natural language. Person #1 is opening a dialogue with “Hi, I'm Alice”, while person #2 is responding with “Nice to meet you”. What is missing here is an underlying game which is played. There is no reward available for printing out a certain sentence. In some literature the problem of language grounding was discussed seriously, but with an unclear result. It seems, that a classical game for example pong, and a chatbot conversation doesn't have much in common.

+ +

Is it possible to combine Reinforcement Learning with chatbot design? The problem is, that a speech-act should be connected to a reward. That means, a well formulated sentence gets +10 points but a weak sentence gets -10 points. How can this be evaluated?

+",,user11571,,,,3/1/2019 22:07,Is reinforcement learning possible for chatbots?,,1,1,,,,CC BY-SA 4.0 +10949,2,,7109,3/1/2019 20:27,,3,,"

Basically, if you read the full paper (especially, the abstract and the section 7), you find that the main accomplishment remains a marginal contribution on top of dropout.

+ +

If you see the empirical results on Table 5 (of the page 5) of the maxout's original paper, you find that the misclassification rate is only very, very slightly lower than that of dropouts. (2.47 % instead of 2.78%)

+ +

That could explain the relatively lower interest in the work.

+",20745,,,,,3/1/2019 20:27,,,,2,,,,CC BY-SA 4.0 +10950,1,10962,,3/1/2019 20:46,,4,126,"

The following plot shows error function output based on system weights. +Two equal local minima are shown in green pointers. Note that the red dots are not related to the question.

+ +

Does the right one generalize better compared to the left one?

+ +

My assumption is that for the right minimum if the weights change, the overall error increases less compared to the left minimum point. Would this somehow mean the system does better generalization if the right one is chosen as the optimum minimum?

+ +

+",22779,,22779,,3/1/2019 21:47,3/2/2019 10:37,Could error surface shape be useful to detect which local minima is better for generalization?,,2,0,,,,CC BY-SA 4.0 +10951,1,10968,,3/1/2019 20:56,,2,84,"

I find myself wondering if there exists a data structure with the following properties:

+ + +",22781,,2444,,10/22/2019 21:10,10/22/2019 21:13,Does a data structure that models the encoding specificity principle in memory exist?,,1,0,,,,CC BY-SA 4.0 +10952,2,,10947,3/1/2019 20:59,,0,,"

A quick glance at the IBM Watson wikipedia page reveals that it does indeed use machine learning. Watson is a complex computing system that uses a variety of cutting edge techniques and concepts such as natural language processing and machine learning.

+",22781,,,,,3/1/2019 20:59,,,,0,,,,CC BY-SA 4.0 +10953,1,10967,,3/1/2019 21:30,,1,322,"

I want to try and compare different optimization methods in some datasets. I know that in scikit-learn there are some corresponding functions for the grid and random search optimizations. However, I also need a package (or multiple ones) for different recent Bayesian optimization methods.

+ +

Are there any good and stable ones to use? Which packages do you recommend? (If any recent for grid/random search, it is also okay.)

+",22784,,2444,,3/12/2020 15:08,3/12/2020 15:08,Are there Python packages for recent Bayesian optimization methods?,,1,0,,12/19/2020 19:03,,CC BY-SA 4.0 +10954,1,10961,,3/1/2019 21:47,,1,92,"

The following plot shows error function output based on system weights. +Two equal local minima are shown in green pointers. Note that the red dots are not related to the question.

+ +

Considering the amount of convex in the local minima, is there any way to opt between these two local minima?

+ +

+",22779,,2444,,3/1/2019 22:45,3/2/2019 9:06,Which local minima to choose according to the shape of the error surface?,,1,0,,,,CC BY-SA 4.0 +10955,2,,10948,3/1/2019 22:07,,2,,"

The problem is indeed that the 'rules' of conversations are not as fixed as the rules for games. However, you can make use of descriptive formalism from Discourse Analysis, called adjacency pairs. These describe regularities between utterances on a local level, for example greeting/reply, which would match your ""Hi, I'm Alice"" and ""Nice to meet you"".

+ +

You will need to be able to classify utterances by your chat bot according to a set of possible responses, and then you can see if a valid response is produced for any given utterance. If the user asks a question, then a greeting will not be a good answer, but a statement could be, if it was a response to the question. This is leaving aside the content and focuses merely on the formal characteristics of the utterance.

+ +

If you want to know more about the topic, have a look at Conversation Analysis, which is the linguistic field dealing with the subject.

+",2193,,,,,3/1/2019 22:07,,,,1,,,,CC BY-SA 4.0 +10956,2,,10950,3/1/2019 23:28,,2,,"

I don't think that the concept of generalization is (directly) related to the ""shape"" of the function close to the point where it attains a minimum.

+ +

The concept of generalisation refers to when a trained model is able ""perform well"" on unseen data (that is, data not seen during the training phase). If a trained model does not generalise well, then it might have ""overfitted"" or ""underfitted"".

+ +

Overfitting means that the model performs well on the training data, but not on other data. In other words, in the case of overfitting, the model learns the structure of the training data, but not of the whole population (of interest) or it learns the ""noise"" (that is, the data which is not part of the population of interest) present in the training data. This also implies that the training data is not a ""good"" sample of the population, that is, it doesn't ""summarise"" well all characteristics of the population. In practice, this is often the case (for large populations).

+ +

Underfitting occurs when the model is not even able to learn enough about the training data during the training phase. For example, underfitting occurs when you try to train a linear model but the data does not have a linear relationship.

+ +

So, being able to generalise (or not) depends on the model (including the number of parameters), but also on the data you are given.

+ +
+

Does the right one generalize better compared to the left one?

+
+ +

The shape of the function will only be useful during the training phase. More specifically, in this case, you might reach one of the two minima faster than the other (depending also on the optimisation algorithms, model, etc., that you use).

+ +

I would like to note that, in machine learning, we often minimise a function of functions (which in mathematics is called a ""functional""). Why is that? For ""simplicity"", consider a simple neural network (NN) model (e.g. a multi-layer perceptron). We usually train such NN using gradient descent (or one of its variants) by minimising a function (e.g. the mean squared error). Essentially, when we train such a NN, we want to find the function (which is represented by the parameters of the NN) that minimises e.g. the MSE. In this case, the mean squared error (MSE) is the functional we attempt to minimise. Note that the MSE is a function of the parameters of the current NN and that a NN is a model that represents a function.

+ +

Let's go back to your question. If, during the training phase, you get one minimum rather than the other, you will get different NNs (say $A$ and $B$). Note, again, that the point where our MSE function attains one of its minima, in this context of training a NN (and often, in general, in machine learning), is a function (and not just a scalar). I know that, at the beginning (and if you're not a math guy), it might be difficult to think of functions as points (actually, more precisely, we should call them ""vectors"") that minimise other functions, but this concept exists and actually underlies a great amount of machine learning techniques (e.g. training NNs using back-propagation by minimising a cost function).

+ +

So, why do I think that we can't say much about the generalisation ability of NNs $A$ and $B$?

+ +

Suppose that we have access, after the training phase, to both NNs $A$ and $B$ (that is, the ones that correspond to the ""points"" where your function attains the two (visible) minima. $A$ and $B$ will perform equally well on the training data (or even validation data). We might then conclude that both generalise in the same way. But this might be a wrong conclusion because both the training and the validation data (as I mentioned above) might not be good samples of the population. So, we can't really say which NN (or ""minima""), $A$ or $B$, generalises better. They might generalise more or less in the same way (according to their performance e.g. on the validation dataset) or not (because the validation dataset is not a good sample of the population).

+ +

To conclude, the concept of generalisation is a little bit more complex than function minimisation, because it is also related to data. In machine learning, we often minimise a functional (a function whose parameters are functions). If your functional attains the same minimum at two different points (actually, functions), during training, one might be faster to reach than the other. However, if you have two NNs that represent these two minima, we can't say much about their generalisation ability. Essentially, we can just hope that the validation dataset is a good sample of the population.

+",2444,,2444,,3/2/2019 10:37,3/2/2019 10:37,,,,2,,,,CC BY-SA 4.0 +10958,1,10964,,3/2/2019 6:02,,3,1452,"

I just read about the concept of a parse tree.

+

In my understanding, a valid parse tree of a sentence needs to be validated by a linguistic expert. So, I concluded, a sentence only has one parse tree.

+

But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)?

+",16565,,2444,,1/18/2021 15:48,1/18/2021 15:48,Can a sentence have different parse trees?,,2,0,,,,CC BY-SA 4.0 +10959,2,,10909,3/2/2019 6:51,,1,,"

I have never tried it, but there are a couple of approaches which may or may not help:

+ +

A distributional DQN (not C51, but another variant; I can't find the reference right now) with several output heads, one of which is chosen randomly.

+ +

Multiple agents learning from each other at random, with some regularizer to prevent them from collapsing to the same network.

+ +

Both approaches essentially try to ""hide"" or smear the target.

+",22745,,22745,,3/4/2019 15:23,3/4/2019 15:23,,,,0,,,,CC BY-SA 4.0 +10960,2,,10958,3/2/2019 6:56,,1,,"

Grammars in NLP basically correspond to context-free grammars (CFGs) in formal language theory. If the CFG corresponding to the NLP task is ambiguous, then a single sentence (more formally, a single string) can have multiple derivations, and hence multiple parse trees.
So, it depends on the grammar whether there can be more than one valid parse tree.
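
As a minimal, illustrative sketch (the toy grammar below is made up purely for the example), NLTK can show this directly: an ambiguous grammar yields more than one tree for the same sentence.

import nltk

# Classic PP-attachment ambiguity: 'I saw the man with the telescope'.
grammar = nltk.CFG.fromstring('''
S  -> NP VP
NP -> 'I' | Det N | Det N PP
VP -> V NP | V NP PP
PP -> P NP
Det -> 'the'
N  -> 'man' | 'telescope'
V  -> 'saw'
P  -> 'with'
''')

parser = nltk.ChartParser(grammar)
sentence = 'I saw the man with the telescope'.split()

# An ambiguous grammar produces more than one tree for the same sentence.
for tree in parser.parse(sentence):
    print(tree)
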

+",14404,,,,,3/2/2019 6:56,,,,0,,,,CC BY-SA 4.0 +10961,2,,10954,3/2/2019 7:36,,1,,"

Just sharing my thoughts on this:

+ +
    +
  • The loss value at a local minimum is just a synthetic measure expressing how well the model is performing on the target dataset (I'm assuming it's the test set)

  • +
  • Leaving aside the absolute values, you could also consider the steepness of that region as follows:

    + +
      +
    • the first LM is at the bottom of a steep region, which means the related parametrization is quite unstable: if you change it slightly, the performance drops quickly (the loss increases quickly), so you could interpret this as a parametrization with poor generalization capability (it is not robust against small changes). Always remember that what you are actually looking for is a model able to generalize well in production (i.e. after you have deployed it, so when training, validation and testing are finished); the test set is typically a small proxy for the data observed in production (but this really depends on the final application itself)

    • +
    • the second LM is in a much less steep region: even if you change the parametrization slightly, you do not observe a big performance change, which could be interpreted as a more stable parametrization and hence one with a better chance of generalizing to unseen data

    • +
  • +
+",1963,,1963,,3/2/2019 9:06,3/2/2019 9:06,,,,1,,,,CC BY-SA 4.0 +10962,2,,10950,3/2/2019 8:25,,4,,"

In general I agree with @nbro's answer; nevertheless, sticking strictly to this specific question, I'd like to share some speculations:

+ +
    +
  • what the author of the question provides us with is the Loss Function Shape so I'll try to use the full information here to compare the 2 minima

  • +
  • looking at the LF steepness we observe the Left LM is in a steeper region than the Right LM: how can this be interpreted?

  • +
  • I'd interpret the LF steepness as a measure of parametrization stability: in fact, perturbing the parametrization slightly has a bigger effect on the performance observed at the left LM than on that of the right LM

  • +
  • when the NN is running ""in production"", typically the parametrization is fixed (in most typical applications, weights are not changed outside the training phase), so you should not be concerned about parametrization stability; however, I'm one of those who believe in the Flat Minima - Schmidhuber 1997 idea that local minimum flatness is connected to the generalization property

  • +
+ +

However, it is important to observe that this is still a very open and interesting question: in Sharp Minima Can Generalize For Deep Nets, Dinh et al. (2017) demonstrated that it is not just about flatness, because a reparametrization of the NN model, though preserving the minima locations, changes the shape of the loss surface, so sharp minima might be transformed into flat ones without affecting network performance

+",1963,,,,,3/2/2019 8:25,,,,2,,,,CC BY-SA 4.0 +10964,2,,10958,3/2/2019 9:29,,3,,"
+

But, is that correct? Is it possible a sentence has more than one valid parse tree (e.g. constituency-based)?

+
+

The fact that a single sequence of words can be parsed in different ways depending on context (or "grounding") is a common basis of miscommunication, misunderstanding, innuendo and jokes.

+

One classic NLP-related "joke" (around longer than modern AI and NLP) is:

+
+

Time flies like an arrow.

+

Fruit flies like a banana.

+
+

There are actually several valid parse trees for even these simple sentences. Which ones come "naturally" will depend on context - anecdotally, I only half got the joke when I was younger, because I did not know there were such things as fruit flies, so I was partly confused by the literal (but still validly parsed, and somewhat funny) meaning that all fruit can fly about as well as a banana does.

+

Analysing these kinds of ambiguous sentences leads to the grounding problem - the fact that, without some referent for symbols, a grammar is devoid of meaning, even if you know the rules and can construct valid sequences. For instance, the above joke works partly because the nature of time, when referred to in a particular way (singular noun, not as a possession or property of another object), leads to a well-known metaphorical reading of the first sentence.

+

A statistical ML parser could get both sentences correct through training on many relevant examples (or trivially by including the examples themselves with correct parse trees). This has not solved the grounding problem, but may be of practical use for any machine required to handle natural language input and map it to some task.

+

I did check a while ago though, and most part-of-speech taggers in Python's NLTK get both sentences wrong - I suspect because resolving sentences like those above and AI "getting language jokes" is not a high priority compared to more practical uses for chatbots/summarisers, etc.
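
If you want to check this yourself, a quick sketch with NLTK's default tagger (you may need to download the 'punkt' and 'averaged_perceptron_tagger' resources first) looks roughly like this:

import nltk

for sentence in ['Time flies like an arrow.', 'Fruit flies like a banana.']:
    tokens = nltk.word_tokenize(sentence)
    print(nltk.pos_tag(tokens))

# The point above is that off-the-shelf taggers tend to miss the distinction,
# e.g. tagging 'flies' the same way in both sentences, even though in the
# second one it is part of the noun phrase 'fruit flies'.
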

+",1847,,2444,,1/18/2021 15:48,1/18/2021 15:48,,,,0,,,,CC BY-SA 4.0 +10965,1,10978,,3/2/2019 10:17,,5,4035,"

Why would someone use a neuroevolution algorithm, such as NEAT, over other machine learning algorithms? What situation would only apply to an algorithm such as NEAT, but no other machine learning algorithm?

+",22802,,2444,,7/7/2019 19:22,7/7/2019 19:22,Why would someone use NEAT over other machine learning algorithms?,,1,0,,,,CC BY-SA 4.0 +10967,2,,10953,3/2/2019 11:19,,1,,"

Apart from the Scikit-Optimize package related to Scikit-Learn, following are some of the packages related to Bayesian optimization:

+ +
    +
  1. GPyOpt
  2. +
  3. pyGPGO
  4. +
  5. Hyperopt
  6. +
  7. bayesian-optimization
  8. +
  9. safeopt
  10. +
  11. RoBO
  12. +
+",14404,,,,,3/2/2019 11:19,,,,0,,,,CC BY-SA 4.0 +10968,2,,10951,3/2/2019 13:20,,1,,"

Beyond the basic requirement of storage, it seems you are looking for some explicitly defined data organization providing a query interface which allows you to specify, alongside the fundamental query parameters, a context, and which is able to use both pieces of information (instead of the first one only) to boost the query.

+ +

If this interpretation of mine is correct, context-aware query databases come to mind; here are some examples:

+ + + +
+

In this paper, we propose a logical model and an abstract query language as a foundation of context-aware database management systems. The model is a natural extension of the relational model in which contexts are first class citizens and can be described at different levels of granularity.

+
+ +

Querying Context-Aware Databases

+",1963,,2444,,10/22/2019 21:13,10/22/2019 21:13,,,,0,,,,CC BY-SA 4.0 +10970,2,,5325,3/2/2019 14:31,,1,,"

The prediction made by linear regression can simply be thought of as a vector dot product.

+ +

$$\overrightarrow{x}^T \cdot \overrightarrow{y}$$

+ +

One of those two vectors is the ""data"" for one case (like a row in your data matrix), the other is a vector of the model's parameters, which is usually called $\overrightarrow{\theta}$ or $\overrightarrow{\beta}$.

+ +

So in the case shown by yourself, we have:

+ +

$$h(x) = \theta_0 + \theta_1 \cdot x$$

+ +

Often we add a row of ones to the data matrix (i.e. a constant 1 at the start of each input vector); that way we are consistent in the sense that $\theta_0 = 1 \cdot \theta_0$

+ +

This way we arrive at:

+ +

$$h(x) = \overrightarrow{\theta}^T \cdot \overrightarrow{x}$$
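
As a small illustrative sketch (with made-up numbers), the prediction really is just this dot product once a constant 1 is prepended for the bias term:

import numpy as np

theta = np.array([0.5, 2.0])   # [theta_0 (intercept), theta_1]
x_raw = 3.0                    # a single feature value

x = np.array([1.0, x_raw])     # prepend the constant 1
h = theta @ x                  # theta_0 * 1 + theta_1 * x_raw
print(h)                       # 0.5 + 2.0 * 3.0 = 6.5
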

+",22553,,,,,3/2/2019 14:31,,,,0,,,,CC BY-SA 4.0 +10971,1,,,3/2/2019 14:43,,1,65,"

I have coded a very basic LSTM with forget gates (no libraries used). I'm trying to predict $0.5*sin(t + N)$ given $0.5*sin(t)$ as an exercise.

+ +

I have tweaked the model, changing the output layer activation function, weight initialization, number of memory blocks/cells, etc. However, I still couldn't manage to correct the output.

+ +

The problem is that the output range is much smaller than desired, $[-0.2, 0.2]$ instead of $[-0.5, 0.5]$. The output also is slightly delayed, meaning it is predicting $sin(t + N - 1)$ for example.

+ +

Is there something that I'm missing?

+ +

As an example, for output layer activation function as a centered logistic from $(-1, 1)$, the validation output looks like

+ +

+ +

Training output looks like

+ +

+ +

Additional information:

+ +
    +
  • Topology: 1 input layer, 1 hidden layer each with 5 memory blocks each with 1 cell, 1 output layer each with 1 regular neuron.

  • +
  • Alpha: 1

  • +
  • Weights: generated with normal distribution, from $[-1, 1]$

  • +
  • Output layer activation function used: logistic $[0, 1]$, centered logistic, tanh, ReLU, leaky ReLU, $f(x) = x$ (identity)

  • +
+",16609,,16565,,3/3/2019 23:20,3/3/2019 23:20,Predicting sine using LSTM: Small output range and delayed output?,,0,1,,,,CC BY-SA 4.0 +10972,2,,3720,3/2/2019 20:41,,0,,"

I would say that, while it does count, it only counts in certain circumstances: using trial and error when the desired output is already fully determined by predetermined data isn't AI, because the system already has the answer it needs. For example:

+ +

Say your AI is trying to point at a precise location but is only given the current accuracy of its positioning. It could check every location and find the best of all of them, or, it could do this:

+ +
    +
  1. Correct the up/down(vertical) positioning: + +
      +
    1. Start moving up.
    2. +
    3. If the accuracy gets better: + +
        +
      1. Keep moving up.
      2. +
      3. Else move down.
      4. +
    4. +
    5. Continue until the accuracy starts to get weaker.
    6. +
    7. Move back just a tiny bit to increase again.
    8. +
  2. +
  3. Correct the left/right(horizontal) positioning: + +
      +
    1. Start moving right.
    2. +
    3. If the accuracy gets better: + +
        +
      1. Keep moving right.
      2. +
      3. Else move left.
      4. +
    4. +
    5. Continue until the accuracy starts to get weaker
    6. +
    7. Move back just a tiny bit to increase again.
    8. +
  4. +
  5. You're done!
  6. +
+ +

This is AI, as it learns what to do and what not to do, and figures out a location based on the accuracy of its positioning alone. So, while the ""trial and error"" method can be AI, it only counts when there is no predetermined calculation that could produce the result without said trial and error. Computing the digits of Pi, for example, may technically involve trial and error, but it uses mathematical functions and requires multiple inputs to calculate its output; in the end it only partly uses trial and error, and therefore it is not AI.

+",22816,,,,,3/2/2019 20:41,,,,0,,,,CC BY-SA 4.0 +10973,1,,,3/2/2019 21:08,,3,183,"

To make A2C into A3C you make it asynchronous. From what I understand the 'correct' way to do that is to thread off workers with a copy of the policy and critic, and then return the state/action/reward tuples to the main thread, which then performs the gradients updates on the main policy and critic, and then repeat the process.

+ +

I understand why the copying would be necessary in a distributed environment, but if I were to always run it locally then could I just perform the updates on a global variable of the policy and critic, i.e. avoid the need for copying? Provided the concurrency of the updates was handled correctly, would that be fine?

+",20352,,20352,,3/2/2019 23:18,7/31/2023 23:01,Can A3C update the policy / critic on a local machine without needing to copy?,,1,3,,,,CC BY-SA 4.0 +10974,2,,10210,3/3/2019 6:24,,0,,"

The loss not decreasing during training does not mean that the network is not learning. It may mean the network is in exploration mode. I have observed that situation: the accumulated reward growing steadily, which means the network is training well, while the loss is not decreasing.

+ +

If you already know a solution to some (simpler or other) version of your environment, you can train the network in a supervised manner to reproduce that solution. If the network is unable to reproduce an existing solution, that is a strong indication that the network is too small or otherwise not good enough.

+ +

Another reason for accumulated reward oscillation could be the network overfitting on the latest training samples. In that case, a bigger replay buffer or a slower update of the target network (if a target network is used) may help.

+",22745,,,,,3/3/2019 6:24,,,,0,,,,CC BY-SA 4.0 +10975,1,15406,,3/3/2019 8:39,,2,586,"

So I was wondering why I have only encountered the squared loss function, also known as MSE. The only nice property of MSE I am so far aware of is its convex nature. But then all equations of the form $x^{2n}$, where $n$ is an integer, belong to the same family.

+ +

My question is: what makes MSE the most suitable candidate among this entire family of curves? Why are the other curves in the family not used, even though they have steeper slopes (for $x > 1$), which might result in better optimisation?

+ +

Here is a picture to what I mean where red is $x^4$ and green is $x^2$:

+ +

+",,user9947,,user9947,3/3/2019 9:39,6/8/2020 19:01,Why is MSE used over other quadratic loss functions?,,2,1,,,,CC BY-SA 4.0 +10976,2,,10975,3/3/2019 9:24,,-1,,"

There is another variant of MSE: you can employ the absolute value of the difference between your hypothesis and the expected output. MSE and the absolute-difference version each have an interpretation. The interpretation of the MSE is that you have the areas of squares, which at first are large, but after training the predicted outputs and the real outputs get similar to each other, which means the area of the squares has been diminished. For the absolute value, the interpretation is that the difference you are going to reduce is a length, and after training you diminish the length of the errors. I don't say what you've said is not possible. It is possible and it does have an interpretation, maybe diminishing the volume of hypercubes, but the point is that the derivatives of such cost functions are a bit more complex and may lead to update rules which take longer to calculate.
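
To make the trade-off concrete, here is a small illustrative sketch (with made-up numbers) comparing the squared, absolute and quartic losses; note how the quartic term explodes for the one large error, which is one practical reason higher powers are rarely used:

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.0, 6.0])      # the last prediction has a large error
err = y_pred - y_true

mse = np.mean(err ** 2)                 # gradient contribution: 2 * err
mae = np.mean(np.abs(err))              # gradient contribution: sign(err)
quartic = np.mean(err ** 4)             # gradient contribution: 4 * err ** 3

print(mse, mae, quartic)                # the quartic loss is dominated by the outlier
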

+",11599,,11599,,3/3/2019 9:35,3/3/2019 9:35,,,,1,,,,CC BY-SA 4.0 +10977,1,,,3/3/2019 10:31,,2,85,"

We have been assigned a project in which we have to create a chatbot which will ask questions, take the replies, analyse them and give an approximate assessment of the current emotional state of the person. There are two aspects of the project:

+ +
    +
  1. Training the bot to choose the next question based on the previous response
  2. +
  3. And analysing the responses individually, to detect the sentiment.
  4. +
+ +

What technology would we have to use and what would be the steps to accomplishing the tasks?

+ +

Thanks.

+",22822,,,,,7/10/2023 20:07,What would be the steps to create an sentiment analysis chatbot?,,1,0,,,,CC BY-SA 4.0 +10978,2,,10965,3/3/2019 11:25,,3,,"

The main difference leading to strengths and weaknesses of NEAT algorithm, is that it does not use any gradient calculations. That means for NEAT, neither the cost function, nor the activation functions of the neurons are required to be differentiable. In some circumstances - e.g. where agents directly compete, and you can select them for next generation if they win versus one or more opponents - you may not even need a cost function to optimise. Quite often the cost function can be a simple number generated from a complex evaluation of the network.

+ +

Therefore, NEAT can be used in situations where the formula for a cost function, in terms of single feed-forward runs of the network, is not very clear. It can also be used to explore activation functions such as step functions or stochastic firing neurons, where gradient based methods are difficult to impossible to apply.

+ +

NEAT can perform well for simple control scenarios, as a policy network that outputs actions given some sensor inputs. For example, it is popular to use it to create agents for racing games or simulated robotics controllers. When used as a policy-based controller, NEAT is in competition with Reinforcement Learning (RL), and has basic similarities with policy gradient methods - the nature of the controller is similar and often you could use the same reward/fitness function.

+ +

As NEAT is already an evolution-inspired algorithm, it also fits well with a-life code as the ""brains"" of simulated creatures.

+ +

The main disadvantage of NEAT is slow convergence to optimal results, especially in complex or challenging environments. Gradient methods can be much faster, and recent advances in Deep RL Policy Gradients (algorithms like A3C and DDPG) means that RL can tackle much more complex environments than NEAT.

+ +

I would suggest to use NEAT when:

+ +
    +
  • The problem is easy to assess - either via a measurement of performance or via competition between agents - but might be hard to specify as a loss function

  • +
  • A relatively small neural network should be able to approximate the target function

  • +
  • There is a goal to assess a non-differentiable activation function

  • +
+ +

If you are looking at a sequential control problem, and could use a standard feed-forward neural network to approximate a policy, it is difficult to say in advance whether NEAT or some form of Deep RL would be better.

+",1847,,1847,,3/3/2019 11:34,3/3/2019 11:34,,,,0,,,,CC BY-SA 4.0 +10979,2,,10977,3/3/2019 11:56,,0,,"

There are some different approaches to learning this sequential behaviour. First, you can use RL (reinforcement learning) and define some rewards over the user's responses to each question, so the agent can explore and exploit the environment.

+ +

Second, you can use an RNN to predict the preferred next question from a history of data. In contrast with RL, you need many instances to train the network to predict the next question accurately.

+",4446,,,,,3/3/2019 11:56,,,,0,,,,CC BY-SA 4.0 +10981,1,10996,,3/3/2019 13:05,,5,248,"

Is there some type of neural network that changes the number of neurons while training?

+ +

Using this idea, the network can increase or decrease the number of neurons when the complexity of the inputs increases or decreases.

+",19783,,2444,,7/7/2019 22:35,7/7/2019 22:35,Is there a neural network with a varying number of neurons?,,1,1,,,,CC BY-SA 4.0 +10982,1,,,3/3/2019 13:07,,9,7454,"

In section 3.6 of the OpenAI GPT-2 paper it mentions summarising text based relates to this, but the method is described in very high-level terms:

+ +
+

To induce summarization behavior we add the text TL;DR: after the article and generate 100 tokens with Top-k random sampling (Fan et al., 2018) with k=2 which reduces repetition and encourages more abstractive summaries than greedy decoding. We use the first 3 generated sentences in these 100 tokens as the summary.

+
+ +

Given a corpus of text, in concrete code terms (python preferred), how would I go about generating a summary of it?

+",2424,,49188,,8/17/2021 11:02,6/29/2023 3:10,How do I use GPT-2 to summarise text?,,1,1,,,,CC BY-SA 4.0 +10983,1,10985,,3/3/2019 13:36,,2,479,"

I've been thinking about the idea of replacing the classic gradient descent algorithm with an algorithm that is less sensitive to a local optimum. I was thinking about particle swarm optimization (PSO), which thus tries to select the best weights and biases for the model.

+

But I've seen everywhere that only one hidden layer is used (no one explains why just one layer is being used) and all those codes break when I try to use more than one hidden layer, so the questions are:

+
    +
  1. Can't PSO be used to optimize an Artificial Neural Network with more than one hidden layer?

    +
  2. +
  3. In that case, why is that?

    +
  4. +
+",15764,,2444,,10/14/2021 13:57,10/14/2021 13:58,Can particle swarm optimization be used to train neural networks with more than one hidden layer?,,1,0,,,,CC BY-SA 4.0 +10985,2,,10983,3/3/2019 17:18,,1,,"

Particle Swarm Optimization can be used to optimize a neural network with more than one hidden layer. Instead of optimizing a single weight matrix, and two bias vectors, you are just optimizing more of them.

+

However, PSO is not often used for larger neural networks, because particle swarm optimization is not all that efficient at working with the large amounts of data that you need to train larger neural networks. Particle swarm optimization involves taking many particles and evaluating the fitness of all of them. If you have, let's say, 1,000,000 training examples, you are doing a lot more error calculations, even if you used a small batch size, than if you used another technique like back-propagation or a related technique. Backpropagation and related techniques are used much more in larger neural networks because they are more efficient at working with larger sets of data.

+

As for why the code that you have been using breaks with networks with more than one hidden layer, I cannot explain, but it is fairly easy to write your own basic implementation of PSO to train multi-layer neural networks.
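
As a rough illustration of that last point, here is a minimal sketch (a toy example on XOR with made-up hyper-parameters, not a tuned implementation) of PSO optimizing a network with two hidden layers:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers: 2 inputs -> 4 -> 4 -> 1 output.
sizes = [2, 4, 4, 1]
n_params = sum((n_in + 1) * n_out for n_in, n_out in zip(sizes[:-1], sizes[1:]))

def forward(params, X):
    # Unpack the flat parameter vector into one weight matrix and bias per layer.
    a, i = X, 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = params[i:i + n_in * n_out].reshape(n_in, n_out)
        i += n_in * n_out
        b = params[i:i + n_out]
        i += n_out
        a = np.tanh(a @ W + b)
    return a

def loss(params):
    return np.mean((forward(params, X) - y) ** 2)

# Standard PSO: each particle is one candidate parametrization of the whole net.
n_particles, iters, w, c1, c2 = 30, 500, 0.7, 1.5, 1.5
pos = rng.normal(0.0, 1.0, (n_particles, n_params))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(loss(gbest))
print(forward(gbest, X).round(2))

Note that every iteration evaluates the loss for every particle over the whole dataset, which is exactly why this scales poorly to large training sets.
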

+",4631,,2444,,10/14/2021 13:58,10/14/2021 13:58,,,,0,,,,CC BY-SA 4.0 +10987,1,,,3/3/2019 18:32,,1,51,"

I'm wanting to conduct game theoretic analyses of ongoing conflict situations (e.g. the US/North Korea negotiations; Syrian conflict; etc) as reported in the news media. I believe that AI may help me do this by helping me to pick out from the text: the parties involved; the issues over which they are in conflict; the choices they have; their preferences. However I'm not sure whether to approach this using 'modern' 'deep learning' approaches or to try something along the lines of the classic work by Schank, deJong etc. who used the notion of scripts (and sketchy scripts) in their work with conceptual dependency approaches. Does anyone have comments, suggestions that may guide my work please?

+",22832,,,,,3/3/2019 18:32,How can/should I use AI to populate a game (in the game theory sense) from text input,,0,2,,,,CC BY-SA 4.0 +10988,1,,,3/3/2019 18:46,,3,628,"

I've learned that you can use two perceptrons to ultimately create a classifier for non-linearly separable data. I'm trying to understand how / if these two perceptrons converge to two different decision boundaries though.

+ +

+ +

Source: https://tdb-alcorn.github.io/2017/12/17/seeing-like-a-perceptron.html

+ +

I don't understand how the second perceptron creates a different decision boundary when it has the same input as the first perceptron. I know the weights can be initialized differently, but does this second perceptron classify something else? Shouldn't the decision boundaries ultimately converge to be the same after training?

+ +

+ +

Source: https://tdb-alcorn.github.io/2017/12/17/seeing-like-a-perceptron.html

+",22833,,,,,4/8/2020 14:48,How do two perceptrons produce different linear decision boundaries when learning?,,1,0,,,,CC BY-SA 4.0 +10989,1,,,3/3/2019 19:41,,2,248,"

After going through both the ""Illustrated Transformer"" and ""Annotated Transformer"" blog posts, I still don't understand how the sinusoidal encodings are representing the position of elements in the input sequence.

+ +

Is it the fact that since each row (input token) in a matrix (entire input sequence) has a unique waveform as its encoding, each of which can be expressed as a linear function of any other element in the input sequence, then the transformer can learn relations between these rows via linear functions?

+",8062,,2444,,11/30/2021 15:41,11/30/2021 15:41,How do the sine and cosine functions encode position in the transformer?,,0,1,,,,CC BY-SA 4.0 +10992,2,,10988,3/3/2019 21:43,,3,,"

I think your confusion comes from the fact that you are calling those two hidden nodes ""perceptrons"". You shouldn't call the hidden nodes in your network perceptrons. You should call them ""nodes"" or ""neurons"", although the term ""multilayer perceptron"" comes exactly from that.

+ +

You should think of a perceptron as something like

+ +

+ +

A perceptron simply computes a linear combination of the inputs using the corresponding weights. A single perceptron can (only) linearly separate a set of points.

+ +

However, if you have two ""stacked perceptrons"" (like the hidden nodes in your network), then, provided that both ""perceptrons"" (or hidden nodes) only compute a linear combination of the inputs, the output of each hidden node can be interpreted as line (like the violet and green lines in your picture).

+ +
+

I don't understand how the second perceptron creates a different decision boundary when it has the same input as the first perceptron?

+
+ +

This is because the two perceptrons will eventually have different weights. You can see from your diagram that $p_0$ has weights $w_{0, 0}$ and $w_{0, 1}$, whereas perceptron $p_1$ has weights $w_{1, 1}$ and $w_{1, 0}$. Note that weights are learned, during training, depending on your loss function (at the output node $p_2$) and the labeled dataset (that you use to train that simple neural network).

+ +
+

I know the weights can be initialized differently but does this second perceptron classify something else?

+
+ +

If you consider the two hidden nodes (in your network) separately, then (as I said above) the output of these two hidden nodes can be interpreted as a line, and these lines might be different (as illustrated in your picture), because these two hidden nodes will eventually learn something different about the input. However, you should also think of the output of two hidden nodes as contributing to the input of the output node. So, in a certain way, you're combining the decisions of the two hidden nodes (or, as you call them, ""perceptrons"").

+ +

Note that you will need to introduce ""non-linearity"" in those hidden perceptrons in order to be able to non-linearly separate a set of points.

+ +
+

Shouldn't the decision boundaries both converge to the same ultimately after training?

+
+ +

No, not necessarily. The decision boundary found by the perceptron learning algorithm depends on the initialization of the weights and the order that the inputs are presented. See chapter 4 (specifically, pages 192-196) of Pattern Recognition and Machine Learning by C. Bishop. Two perceptrons would be guaranteed to converge to the same solution if the loss function was convex (i.e. there's only one optimum).

+",2444,,2444,,4/8/2020 14:48,4/8/2020 14:48,,,,0,,,,CC BY-SA 4.0 +10995,2,,10339,3/4/2019 6:56,,1,,"

When working with a quantized network, it's a good idea to normalize blocks of values with a floating-point scaling factor (a 16-bit float, for example). A weight channel and an input channel will then have the representation S_f * w_n, where S_f is a scalar float and w_n is a low-bit fixed-point or integer tensor (or vector). When calculating a dot product, accumulate it into a high-bit fixed-point/integer variable or a float, and then recalculate the scaling factor and the low-bit representation of the output. That is only possible if you can implement floating-point (or high-bit fixed-point) operations, of course. For the low-bit representation you should clamp values to S_f*[-2^n, 2^n], and choosing that range can be non-trivial; it is a good idea to use statistics of the weights/inputs for it. There are dozens of papers on quantization of neural networks with different approaches; you may want to read some.
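
A small illustrative sketch of this scheme (per-block symmetric scaling with an 8-bit integer representation; the sizes and data are made up) could look like this:

import numpy as np

def quantize_block(x, n_bits=8):
    # Return (scale, integer representation) such that x is approximately scale * x_q.
    qmax = 2 ** (n_bits - 1) - 1              # 127 for 8 bits
    scale = np.abs(x).max() / qmax
    scale = scale if scale > 0 else 1.0
    x_q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return scale, x_q

w = np.random.randn(4, 4).astype(np.float32)   # a block of weights
a = np.random.randn(4).astype(np.float32)      # a block of inputs

s_w, w_q = quantize_block(w)
s_a, a_q = quantize_block(a)

# Dot product accumulated in a wider integer type, then rescaled back to float.
acc = w_q.astype(np.int32) @ a_q.astype(np.int32)
y = (s_w * s_a) * acc

print(np.abs(y - w @ a).max())                 # small quantization error
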

+",22745,,22745,,3/4/2019 7:02,3/4/2019 7:02,,,,0,,,,CC BY-SA 4.0 +10996,2,,10981,3/4/2019 10:37,,4,,"

Yes, NEAT (NeuroEvolution of Augmenting Topologies) increases the number of neurons during training. More specifically, NEAT uses evolution to introduce new neurons and connections during training, and - just as evolution - if the mutation performs poorly, gets eliminated after a few generations. This way overall performance increases over time while it keeps your network size (and computing power to run it) minimal. There's also a way to add convolution to this algorithm.

+ +

In the paper that introduced NEAT, the authors mention a completely different algorithm they tried, optimizing network hyper-parameters by choosing reasonable random values and training it multiple times. That could decrease that number as well.

+ +

Also, there is a trick to temporarily turn off specific parts of a network, which supposedly helps with overfitting.

+ +

A ReLU+backpropagation based network can ""turn off"" parts of the network during training, because constant 0's derivative is constant 0. In practice, you decrease the number of neurons (that is considered bad though, that's when leaky ReLU and PReLU is used instead).

+",22418,,2444,,3/4/2019 21:00,3/4/2019 21:00,,,,2,,,,CC BY-SA 4.0 +10997,1,11005,,3/4/2019 10:59,,3,73,"

How accurate are neuro-evolution algorithms (such as NEAT) in modelling real organism evolution?

+",22802,,2444,,7/7/2019 19:20,7/7/2019 19:20,How accurate are neuroevolution algorithms in modelling organism evolution?,,1,1,,,,CC BY-SA 4.0 +10998,2,,10930,3/4/2019 11:21,,1,,"

This is a bit of a puzzle, but you can compute a reasonably narrow limit even without knowing whether or not $P(S,A) = P(S) P(A)$.

+ +

Start with the contingency table relating $P(S, A)$, $P(S,\neg A)$, $P(\neg S, A)$, $P(\neg S,\neg A)$ to $P(S)$ and $P(A)$ :

+ +

$$\begin{array}{cc|c} P( S,A)& P(\neg S,A) & P(A) \\ P(S,\neg A)& P(\neg S,\neg A) & P(\neg A) \\ \hline P(S)& P(\neg S) & 1 \\ \end{array} \quad \rightarrow \quad \begin{array}{cc|c} x& 0.01-x & 0.01 \\ 0.07-x& 0.92+x & 0.99 \\ \hline 0.07& 0.93 & 1 \\ \end{array}$$

+ +

note that the cells must be between zero and one thus $0 \leq x \leq 0.01$

+ +
+ +

Then with

+ +

\begin{array}{rcrl}P(F|S) &= & & P(F|S,A) P(A|S) + P(F| S,\neg A) P(\neg A|S)\\ &=&& 1.0 \frac{x}{0.07} + 0.7 \frac{0.07-x}{0.07} \\ & = && 0.7 + \tfrac{30}{7} x \end{array}

+ +

you get

+ +

$$0.7 \leq P(F|S) \leq 0.743$$

+ +
+ +

To solve $P(F|S)$ exactly you need to narrow down $x$ more precisely. Possibilities are:

+ +
    +
  • If you know $P(F)$ then you could use $$\begin{array}{rcrl}P(F) &= & & P(F|S,A) P(S,A) \\ && +& P(F|\neg S,A) P(\neg S,A) \\ &&+& P(F| S,\neg A) P(S, \neg A)\\&&+&P(F|\neg S,\neg A) P(\neg S,\neg A)\\ &=&& 1.0 (x) + 0.7 (0.07 - x) + 0.9 (0.01-x)+0.1(0.92+x) \\ & = && 0.15-0.5 x \end{array}$$

  • +
  • If you know that $S \perp \! \! \! \perp A$ then you can use $x = P(S,A) = P(S)P(A) = 0.0007$

  • +
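
A quick numeric sanity check of the bounds (just sweeping the unknown joint probability $x = P(S,A)$ over its feasible range) could look like this:

import numpy as np

P_S, P_A = 0.07, 0.01
for x in np.linspace(0.0, 0.01, 5):                 # feasible range for P(S, A)
    p_f_given_s = 1.0 * (x / P_S) + 0.7 * ((P_S - x) / P_S)
    print(x, round(p_f_given_s, 4))                 # ranges from 0.7 to about 0.743

x_ind = P_S * P_A                                   # independence case
print(1.0 * (x_ind / P_S) + 0.7 * ((P_S - x_ind) / P_S))   # about 0.703
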
+",22854,,22854,,3/4/2019 11:43,3/4/2019 11:43,,,,0,,,,CC BY-SA 4.0 +11000,1,,,3/4/2019 16:07,,1,3216,"

The (discrete and continuous) Fourier transform (FT) is used in signal processing in order to convert a signal (or function) in a certain domain (e.g. the time domain) to another domain (e.g., frequency domain). There are several resources on the web that attempt to explain the FT at different levels of complexity. See e.g. this answer or this and this Youtube videos.

+ +

What are examples of (real-world) applications of the Fourier transform to AI? I am looking for answers that explain the reason behind the use of the FT in the given application. I suppose that there are several applications of the FT to e.g. ML (data analysis) and robotics. I am looking for specific examples.

+",2444,,2444,,12/17/2021 14:42,12/17/2021 14:42,What are examples of applications of the Fourier transform to AI?,,3,0,,,,CC BY-SA 4.0 +11002,2,,10826,3/4/2019 18:16,,1,,"

You have a set of different types of data available for each of your subjects, and given one set you'd like to classify which subject it belongs to. This looks like a supervised classification problem.

+ +

The most popular classifiers for supervised learning are neural networks. Now, given the heterogeneous nature of your data types, a simple approach would be to use separate classifiers for each type of data, for example a convolutional neural net for the image data and a simple feed-forward net for the biosensor data. Another thing you can try is a multi-channel approach, where towards the input side you have multiple channels for the different types of data, and the final few layers are fully connected.

+ +

The image shows only CNNs for the multi-channel part, but you could have one channel be a simple feed-forward net while another one has convolutional layers.

+ +

Also, if you wish to classify the data as belonging to a subject on the basis of just one of the data types from the set, then you should have separate classifiers for all types. In that case it might be worthwhile to look into classifiers other than neural nets like multi class logistic regression which might be simpler to work with for a particular data type.

+",22855,,,,,3/4/2019 18:16,,,,1,,,,CC BY-SA 4.0 +11003,2,,11000,3/4/2019 19:04,,1,,"

The filtering (or convolution) process in Convolutional Neural Networks can be implemented using Fourier Transforms. Convolutions in image (or spatial) space are equivalent to multiplications in frequency space, and multiplications can be performed much more efficiently than convolutions.

+ +

The algorithm for using Fourier Transforms to calculate a convolution between an image and a filter kernel is as follows:

+ +

Assuming a 100x100 input image, and a 3x3 filter kernel (such as a Sobel Edge-Detection filter).

+ +
    +
  1. Calculate the 2D Fourier Transform (FT) of the image (C library, Python library)
  2. +
  3. Pad the filter kernel so that it is the same size as the image, and that the filter kernel is centred at image coordinate (0, 0). This means that some parts of the kernel will be placed in the other corners of the padded filter image. For example, consider a 3x3 kernel where the filter coefficients are [[a, b, c], [d, e, f], [g, h, i]], in this case the coefficients map to pixel positions on the padded kernel as follows:

    + +
      +
    • a -> (99, 99) //bottom right
    • +
    • b -> (0, 99) //bottom left
    • +
    • c -> (1, 99)
    • +
    • d -> (99, 0) //top right
    • +
    • e -> (0, 0) //top left
    • +
    • f -> (1, 0)
    • +
    • g -> (99, 1)
    • +
    • h -> (0, 1)
    • +
    • i -> (1, 1)
    • +
    + +

    Set all the other padded filter kernel pixels to zero

  4. +
  5. Calculate the 2D Fourier Transform of the padded filter kernel.

  6. +
  7. Perform an element-wise multiplication of the image FT and the padded filter kernel FT. Note that this is a complex multiplication as the FT consists of complex numbers. The result of this multiplication is therefore also an array of complex numbers.

  8. +
  9. Calculate the Inverse Fourier Transform of the result from step 4. This will return a real valued image which is the original image convolved with the filter kernel.

  10. +
+ +

The advantage for performing the kernel convolutions in this way is that it is much faster than performing an element-wise convolution directly. This speeds up the training process for neural networks with Convolutional layers.
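
A NumPy sketch of the procedure above (illustrative only: a 5x5 random image and a 3x3 kernel instead of 100x100, with circular/wrap-around boundary handling, and scipy used purely as a sanity check) could look like this:

import numpy as np
from scipy.ndimage import convolve   # only used to verify the result

image = np.random.rand(5, 5)
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])          # a Sobel-like kernel

# Step 2: pad the kernel to the image size and wrap it so its centre
# coefficient 'e' sits at coordinate (0, 0).
padded = np.zeros_like(image)
padded[:3, :3] = kernel
padded = np.roll(padded, shift=(-1, -1), axis=(0, 1))

# Steps 1, 3, 4 and 5: forward FFTs, element-wise complex product, inverse FFT.
result = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

# Sanity check against a direct circular convolution.
print(np.allclose(result, convolve(image, kernel, mode='wrap')))   # True
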

+",12509,,2444,,3/6/2019 9:18,3/6/2019 9:18,,,,1,,,,CC BY-SA 4.0 +11004,1,,,3/4/2019 21:13,,3,112,"

I am doing some experimentation on neural networks, and for that I am trying to program a plain OCR task. I have learned CNNs are the best choice, but, for the time being and due to my inexperience, I want to go step by step and start with feed-forward nets.

+ +

So my training data is a set of roughly 400 16*16 images extracted from a script that draws every alphabet char in a tiny image for a small set of fonts registered in my computer.

+ +

Then the test data set is extracted from the same procedure, but for all fonts in my computer.

+ +

Well, the results are quite bad. I get an accuracy of approx. 45-50%, which is very poor... but that's not my question.

+ +

The point is that I can't get the MSE below 0.0049, no matter what hidden layer distribution I apply to the net. I have tried with several architectures and all comes down to this figure. Does that mean the net cannot learn any further given the data?

+ +

This MSE value, however, yields these poor results too.

+ +

I am using Tensorflow API directly, no keras or estimators and for a list of 62 recognizable characters these are examples of the architectures I have used: [256,1860,62] [256,130,62] [256,256, 128 ,62] [256,3600,62] ....

+ +

But I never get the MSE below 0.0049, and still the results are not over 50%.

+ +

Any hints are greatly appreciated.

+",22869,,2444,,3/5/2019 10:04,5/14/2023 0:08,Attempting to solve a optical character recognition task using a feed-forward network,,1,5,,,,CC BY-SA 4.0 +11005,2,,10997,3/4/2019 23:02,,1,,"

Modelling genetic code persistence, mutation, and meiosis for neurological features or any other biological features is not the greatest of challenges. Modelling the rest of the organism that can support neural growth and change, the electrochemistry of the many types of neurons, and the environment that favors the emergence of learned neural behaviors is challenging and still far beyond the current level of technology.

+",4302,,,,,3/4/2019 23:02,,,,0,,,,CC BY-SA 4.0 +11006,2,,11000,3/4/2019 23:20,,1,,"

To be precise, a discrete Fourier transform can be used to transform a finite set of samples between frequency and time domains. A continuous Fourier transform can be applied in calculus to an expression or a set of equations (through the appropriate techniques) or used to develop algorithms, but digital systems are not continuous, so there is no way to directly integrate in a digital computer except symbolically.

+ +

When the features contained in time series or pixels are more easily extracted from spectral information or frequency domain complex numbers than from the time series, an FFT can produce the data in a form that supports easy extraction. For streams containing indefinitely long time series, a windowing scheme may be used. The common cases are for sound recognition and music recognition. FFTs are also used in the finite math involved in cryptography and especially cryptanalysis (code breaking). In robotics, FFTs would most likely be used in vibration analysis to reduce wear and dampen oscillation.

+",4302,,,,,3/4/2019 23:20,,,,0,,,,CC BY-SA 4.0 +11008,1,,,3/5/2019 1:17,,7,225,"

It is possible that the view of what is impressive enough in computer behavior to be called intelligence changes with each decade as we adjust to what capabilities are made available in products and services.

+",4302,,2444,,3/8/2020 15:58,3/8/2020 15:58,Would the people of the 19th Century call our conventional software today artificial intelligence?,,2,0,,12/10/2021 21:51,,CC BY-SA 4.0 +11010,2,,3032,3/5/2019 3:48,,2,,"

I think there are at least three points that you need to think about before implementing the Hill-Climbing (HC) algorithm:

+ +

First, the initial state. In HC, people usually use a ""temporary solution"" for the initial state. You can use an empty knapsack, but I prefer to randomly pick items and put them in the knapsack as the initial state. You can use a binary array to save the ""pickup status"" of each item, such that if $a_i=1$ then you pick the $i$-th item and if $a_i=0$ then you don't pick the $i$-th item.

+ +

The next is the heuristic function, a function that evaluates your current state. Actually, you can define it freely (like the initial state) as long as the function can evaluate which state is better than the others. You can use a simple one:

+ +

$$ h = \begin{cases} -1, & \text{if}\ \sum (w_i \times a_i) > \text{MaxWeight} \\ \sum (v_i \times a_i), & \text{otherwise} \end{cases} $$

+ +

with $w_i$ and $v_i$ being the weight and the value of the $i$-th item respectively. One of the disadvantages of this function is that if you have more than one invalid state (a state whose total weight is more than the limit), they all have the same value, which is $-1$.

+ +

The last is the method to generate the neighbors of the current state. Once again, you are also free to define the method that you will use. I think your proposed actions are good enough:

+ +
    +
  • reverse the status of an item (1 to 0 or 0 to 1). That means I add or remove the item from the current state.
  • +
  • swap the status of two items that have different statuses. Formally, swap $a_i$ and $a_j$ with $ i \neq j$ and $ a_i \neq a_j$. That means I swap an item that I picked before with an item that I didn't pick before
  • +
+ +

Then, after you define everything you can run the algorithm as explained in Wikipedia:

+ +
currentNode = startNode;
+loop do
+    L = NEIGHBORS(currentNode);
+    nextEval = -INF;
+    nextNode = NULL;
+    for all x in L 
+        if (EVAL(x) > nextEval)
+            nextNode = x;
+            nextEval = EVAL(x);
+    if nextEval <= EVAL(currentNode)
+        //Return current node since no better neighbors exist
+        return currentNode;
+    currentNode = nextNode;
+
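
Putting the three points together, a short Python sketch (the item values, weights and MaxWeight below are made-up example data) could look like this:

import random

# Made-up example data.
values  = [10, 40, 30, 50, 35, 25]
weights = [ 5,  4,  6,  3,  7,  4]
MAX_WEIGHT = 10

def evaluate(state):
    # The heuristic h described above: -1 for invalid states, total value otherwise.
    total_w = sum(w * a for w, a in zip(weights, state))
    total_v = sum(v * a for v, a in zip(values, state))
    return -1 if total_w > MAX_WEIGHT else total_v

def neighbors(state):
    out = []
    # Reverse the pickup status of a single item.
    for i in range(len(state)):
        s = state[:]
        s[i] = 1 - s[i]
        out.append(s)
    # Swap the statuses of two items that differ.
    for i in range(len(state)):
        for j in range(i + 1, len(state)):
            if state[i] != state[j]:
                s = state[:]
                s[i], s[j] = s[j], s[i]
                out.append(s)
    return out

def hill_climb():
    current = [random.randint(0, 1) for _ in values]   # random initial state
    while True:
        best = max(neighbors(current), key=evaluate)
        if evaluate(best) <= evaluate(current):
            return current, evaluate(current)          # no better neighbor exists
        current = best

print(hill_climb())
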
+",16565,,,,,3/5/2019 3:48,,,,0,,,,CC BY-SA 4.0 +11014,1,,,3/5/2019 9:22,,6,221,"

In the paper Mastering the game of Go with deep neural networks and tree search, the input features of the networks of AlphaGo contains a plane of constant ones and a plane of constant zeros, as following.

+ +
Feature       #of planes Description 
+Stone colour  3          Player stone/opponent stone/empty 
+Ones          1          A constant plane filled with 1 
+Turns since   8          How many turns since a move was played 
+Liberties     8          Number of liberties (empty adjacent points) 
+Capture size  8          How many opponent stones would be captured 
+Self-atari size 8        How many of own stones would be captured 
+Liberties after move 8   Number of liberties after this move is played 
+Ladder capture 1         Whether a move at this point is a successful ladder capture 
+Ladder escape 1          Whether a move at this point is a successful ladder escape 
+Sensibleness  1          Whether a move is legal and does not fill its own eyes 
+Zeros         1          A constant plane filled with 0 
+Player color  1          Whether current player is black
+
+ +

I wonder why these features are necessary, because I think a constant plane contains no information and it makes the network larger and consequently harder to train.

+ +

What's more, I don't understand the sharp sign here. Does it mean ""the number""? But one number is enough to represent ""the number of turns since a move was played"", why eight?

+ +

Thank you very much.

+",22886,,,,,1/5/2021 20:30,Why is a constant plane of ones added into the input features of AlphaGo?,,1,0,,,,CC BY-SA 4.0 +11015,2,,10345,3/5/2019 10:04,,2,,"

What interests us in this problem are only the intervals for one person. Let's say that we want to train a neural network to recognize the simple pattern in date differences. This would mean that we could train the neural network on a series of purchase histories of multiple people. That means that one possible input is the previous intervals: in your case, the previous 2 intervals (10 days, 15 days). That is a very small number and it will be hard to recognise a pattern for 3 consecutive purchases, but let's disregard that. After all, you can always experiment with different input sizes.

+ +

For training, you can take multiple customers, for all of whom you know when 4 purchases took place. You calculate the intervals, so for each customer you get 3 intervals. As input you give the network 2, you let the network predict the 3rd one, and then you let the network learn from its mistake (you can do that because you know what the 3rd one was in reality; that's how you train the network).

+ +

Provided that the network isn't too complex or too simple (by that I mean that a hidden layer could be necessary, and you need to give it sufficient neurons), the network might give a decent approximation. The only problem is, as Manuel Rodriguez said, that the network would indeed only learn to make predictions based on the correctness of its previous ones on a large group of people. Perfect predictions thus won't be possible, but you can minimize the error by making your dataset and input larger, so the network has more information to work on.

+",21788,,,,,3/5/2019 10:04,,,,0,,,,CC BY-SA 4.0 +11016,1,11018,,3/5/2019 10:30,,2,1493,"

I wanted to clarify the term 'acting greedily'. What does it mean? Does it correspond to the immediate reward, future reward or both combined? I want to know the actions that will be taken in 2 cases:

+ +
    +
  • $v_\pi(s)$ is known and $R_s$ is also known (only).
  • +
  • $q_{\pi}(s, a)$ is known and $R_s^a$ is also known (only).
  • +
+",,user9947,2444,,3/5/2019 10:34,3/5/2019 16:49,What does 'acting greedily' mean?,,3,1,,,,CC BY-SA 4.0 +11017,2,,11016,3/5/2019 11:13,,2,,"

In general, a greedy ""action"" is an action that would lead to an immediate ""benefit"". For example, the Dijkstra's algorithm can be considered a greedy algorithm because at every step it selects the node with the smallest ""estimate"" to the initial (or starting) node. In reinforcement learning, a greedy action often refers to an action that would lead to the immediate highest reward (disregarding possible future rewards). However, a greedy action can also mean the action that would lead to the highest possible return (that is, the greedy action can also be considered an action that takes into account not just immediate rewards but also future ones).

+ +

In your case, I think that the ""greedy action"" can mean different things, depending on whether you use the reward function or the value functions, that is, you can act greedily with respect to the reward function or the value functions.

+ +

I would like to note that you are using a different notation for the reward function for each of the two value functions, but this does not need to be the case. So, your reward function might be expressed as $R_s^a$ even if you use $v_\pi(s)$. I will use the notation $R_s^a$ for simplicity of the explanations.

+ +

So, if you have access to the reward function for a given state and action, $R^a_s = r(s, a)$, then the greedy action (with respect to the reward function $r$) would just be the action from state $s$ with the highest reward. So, formally, we can define it as $a_\text{greedy} = \arg \max_a r(s, a)$ (both in the case of the state or state-action value functions: it does not matter if you have one or the other value function). In other words, if you have access to the reward function (in that form), you can act greedily from any state without needing to access the value functions: you have a ""model"" of the rewards that you will obtain.

+ +

If you have $q_\pi(s, a)$ (that is, the state-action value function for a fixed policy $\pi$), then, at time step $t$, the greedy action (with respect to $q_\pi(s, a)$) from state $s$ is $a_\text{greedy} = \arg \max_{a}q_\pi(s, a)$. If you then take action $a_\text{greedy}$ in the environment, you would obtain the highest discounted future reward (that is, the return), according to $q_\pi(s, a)$, which might actually not be the highest possible return from $s$, because $q_\pi(s, a)$ might not be the optimal state-action value function. If $q_\pi(s, a) = q_{\pi^*}(s, a)$ (that is, if you have the optimal state-action value function), then, if you execute $a_\text{greedy}$ in the environment, you will theoretically obtain the highest possible return from $s$.

+ +

If you had the optimal value function (the value function associated with the optimal policy to act in your environment), then the following equation holds $v_*(s) = \max_{a} q_{\pi^*}(s, a)$. So, in that case, $a_\text{greedy} = \arg \max_{a}q_{\pi^*}(s, a)$ would also be the greedy action if you had $v_*(s)$. If you only have $v_\pi(s)$ (without e.g. the Q function), I don't think you can act greedily (that is, there is no way of knowing which action is the greedy action from $s$ by just having the value of state $s$: this is actually why we often estimate the Q functions for ""control"", i.e. acting in the environment).

+",2444,,2444,,3/5/2019 16:49,3/5/2019 16:49,,,,0,,,,CC BY-SA 4.0 +11018,2,,11016,3/5/2019 11:14,,3,,"

In RL, the phrase "acting greedily" is usually short for "acting greedily with respect to the value function". Greedy local optimisation turns up in other contexts, and it is common to specify what metric is being maximised or minimised. The value function is most often the discounted sum of expected future reward, and also the metric used when defining a policy as "acting greedily".

+

It is possible to define a policy as acting greedily with respect to immediate expected reward, but not the norm. As a special case, when the discount factor $\gamma = 0$ then it is the same as acting greedily with respect to the value function.

+

When rewards are sparse (a common situation in RL), acting greedily with respect to expected immediate reward is not very useful. There is not enough information to make a decision.

+
+

I want to know the actions that will be taken in 2 cases:

+
    +
  • $v_\pi(s)$ is known and $R_s$ is also known (only).
  • +
+
+

To act greedily in RL, you would use the value function $v_\pi(s')$ - the value function of the next states. To do so, you need to know the environment dynamics - to go with the notation $R_s$ you should also know the transition matrix $P_{ss'}^a$ - the probability of transitioning from $s$ to $s'$ whilst taking action $a$:

+

$$\pi'(s) = \text{argmax}_a \sum_{s'} P_{ss'}^a(R_{s} + \gamma v_{\pi}(s'))$$

+

Notes:

+
    +
  • This assumes $R_{s}$ is your immediate reward for leaving state $s$. Substitute $R_{s'}$ if the reward matrix is for entering state $s'$ instead.

    +
  • +
  • The policy gained from acting greedily with respect to $v_\pi(s)$ is not $\pi$, it is (a usually improved) policy $\pi'$

    +
  • +
+
+
    +
  • $q_{\pi}(s, a)$ is known and $R_s^a$ is also known (only).
  • +
+
+

To act greedily in RL, you would use the value function $q_{\pi}(s, a)$, and this is much simpler:

+

$$\pi'(s) = \text{argmax}_a q_{\pi}(s, a)$$

+

Again, the policy gained from acting greedily with respect to $q_\pi(s,a)$ is not $\pi$, it is (a usually improved) policy $\pi'$
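
As a small illustrative sketch (a made-up toy MDP with 3 states and 2 actions), both cases boil down to an argmax:

import numpy as np

n_states, n_actions, gamma, s = 3, 2, 0.9, 0
R = np.array([0.0, 1.0, 5.0])                  # R_s: reward for leaving state s
P = np.random.dirichlet(np.ones(n_states),
                        size=(n_states, n_actions))   # P[s, a, s']
V = np.array([1.0, 2.0, 10.0])                 # v_pi(s') for some policy pi
Q = np.random.rand(n_states, n_actions)        # q_pi(s, a)

# Greedy w.r.t. v_pi: needs the model (P and R).
greedy_from_v = np.argmax(P[s] @ (R[s] + gamma * V))

# Greedy w.r.t. q_pi: just an argmax over actions, no model needed.
greedy_from_q = np.argmax(Q[s])

print(greedy_from_v, greedy_from_q)
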

+",1847,,-1,,6/17/2020 9:57,3/5/2019 11:14,,,,2,,,,CC BY-SA 4.0 +11023,2,,11016,3/5/2019 12:24,,1,,"

Acting greedily means that the search is not forward thinking and limits its decisions solely on immediate return. It is not quite the same as what is meant in human social contexts in that greed in that context can involve forward thinking strategies that sacrifice short term losses for long term gain. In the typical machine search lingo, greed is myopic (short-sighted).

+",4302,,,,,3/5/2019 12:24,,,,0,,,,CC BY-SA 4.0 +11027,2,,8751,3/5/2019 13:25,,1,,"

Before I answer your question, it is important to frame what a GA is, so please allow me to cover some history.

+ +

Friedman and Fraser performed some of the earliest evolutionary computation experiments. Fraser presented the case of diploid organisms represented by binary strings of a given length. Each bit in the string represented a gene. Fraser proposed the process of single-point crossover for binary string reproduction.

+ +

American computer scientist John Henry Holland and his colleagues at the University of Michigan formalised the evolutionary algorithms originally implemented by Bremermann, Fraser, Friedman and Friedberg.

+ +

Holland's formal algorithm is known as a Genetic Algorithm (GA). A GA has a fixed length chromosome structure. Traditionally it was binary based. However due to hamming distances of various bit strings a GA using a binary encoded chromosome string can get stuck at local minima. Holland experimented with integer based GAs too.

+ +

Getting back to your question, theoretically and in its strictest sense, a Genetic Algorithm has a fixed length chromosome vector. The primary reason for this is that each gene represents something. If it was variable then the meaning of the gene must be encoded into the chromosome, which makes evolution more difficult. However, there are other chromosome structures that are variable length. The chromosome structure itself implies meaning.

+ +

Koza experimented with an alternative version of a GA called a genetic programming (GP). A GP replaces the fixed length vector structure used in a genetic algorithm with a variable length, non-linear, hierarchical abstract data structure, known as a tree.

+ +

The tree consists of branch nodes and leaf nodes. Leaf nodes have no children and implement a number or constant from a terminal set. A leaf node is also known as a terminal node. A branch node is connected to other branch nodes or leaf nodes and implement an operator from a functional set. A branch node is also known as a non-terminal node.

+ +

The tree structure naturally accommodates a game rule. A tree is not restricted to a fixed length. A tree is a natural structure to represent hierarchical decisions rules, and a tree can store both binary, logical, and numeric functions within its nodes. Logical functions allow the GP to evolve conditional execution of game play functions not easily implemented using a fixed length vector. Tree does need a traversal path and grammar (syntax and semantics).

+ +

In short to answer your questions:

+ +
    +
  1. A GA (fixed length) is generally not used for games, but rather a GP (variable with decisions).
  2. +
  3. A GA is not the most suitable method for this kind of problem as it is fixed length, and each gene corresponds to a rule or value.
  4. +
+ +

I would implement a GP, where the operators of mutation and crossover are adapted for a GP. I would also use the game itself as a fitness function, where a pool of individuals either using a single population, or co-evolution evolve players that gradually increase their score.

+ +

The paragraphs above were taken from Nicholls' MSc thesis. His thesis was on stock market trading rule generation which is similar to game rule generation. His thesis is available here (Only available after April 2019).

+",20508,,20508,,3/6/2019 8:16,3/6/2019 8:16,,,,0,,,,CC BY-SA 4.0 +11030,2,,10345,3/5/2019 15:17,,2,,"

An 'AI'* is only as smart as the information you give it

+

You've got to add your own knowledge of the situation into this. Currently we have a transaction id which only really tells us that there is a transaction, a card number (identifying a user, I assume) and a date.

+

The date can probably tell you most - what day of the week was it? What season (most sales experience some seasonality)? What time of day?

+

Comparing several dates can then tell you things like the average gap - deviation on that average.

+

You can use machine learning models to tell you how good these variables are at predicting the next visit day but you have to create these variables first, the model won't know about seasonality or its effect on sales of ice cream, umbrellas or winter jumpers so you have to use your knowledge of your customer base to pass the right variables to the model.

+

You might also want to consider the product purchased - if you can see that information - someone who buys a pint of milk or a loaf of bread will probably return for the same goods on a weekly basis (or whenever they run out) but someone who bought a set of screw drivers and returned for a hammer a week later is unlikely to return for the same goods.

+

The vast majority of work done for most predictive systems is in creating your variables and providing something to train on which will hold the pertinent information.

+

*I'm assuming here that you're working with a machine learning model

+",22897,,-1,,6/17/2020 9:57,3/5/2019 20:55,,,,0,,,,CC BY-SA 4.0 +11032,1,,,3/5/2019 17:58,,1,55,"

Are there any (well validated) approaches for applying pathfinding algorithms on a graph following specific rules?

+ +

To be more specific: I want to introduce a graph with coloured edges. The idea is to apply a well known pathfinding algorithm (such as Dijkstra) on the graph given the rule: ""only black and red edges"".

+",19413,,,,,3/5/2019 18:25,Which pathfinding algorithms can be applied on coloured graphs?,,1,0,,,,CC BY-SA 4.0 +11033,2,,11008,3/5/2019 18:16,,4,,"

My sense is that they would, based on a high-level take of Babbage and Lovelace's view of the potential capability of the ""analytic engine"". If Babbage's Tic-Tac-Toe machine had been built, I am sure that would have been regarded as machine intelligence. Nimatron (Edward Condon) may have been the first game AI, and the capability seems similar to what Babbage was envisioning. Certainly the bogus ""Turk"" chess-playing hoax was considered a machine intelligence.

+ +

Conventional software could connote a form of automation, and I think any form of automation, particularly where the operations are ""under the covers"", would have been considered a form of intelligence.

+ +
+ +

I think the current idea of only regarding ""strong statistical AI"" (Machine Learning) as AI is inherently flawed because of the concept of utility. Intelligence is a spectrum, being a relative measure of problem solving strength, and artificial merely connotes a thing intentionally or skillfully constructed. The Russell & Norvig definition seems to hew to this viewpoint.

+",1671,,1671,,3/5/2019 18:36,3/5/2019 18:36,,,,0,,,,CC BY-SA 4.0 +11034,2,,8747,3/5/2019 18:22,,2,,"

Remember that any machine learning model works well only when there is a ""rule"" or a correlation between the modeling data and the modeled data. When there is not, even the best algorithm will not predict/classify correctly. I am not saying that this must be the case, but you have probably come pretty close to the physical limit of what can be achieved using this data.

+ +

Maybe you should consider adding or extracting new features to your model.

+",22659,,,,,3/5/2019 18:22,,,,0,,,,CC BY-SA 4.0 +11036,2,,11032,3/5/2019 18:25,,1,,"
+

The idea is to apply a well known pathfinding algorithm (such as Dijkstra) on the graph given the rule: ""only black and red edges""

+
+ +

Remove all the non-black-and-red edges from the graph first, then run it through any off-the-shelf pathfinding implementation. Or implement your own, and have it completely ignore the non-black-and-red edges.
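
For example, a minimal sketch with networkx might look like the following (the color and weight edge attributes are assumptions about how the graph is stored):

import networkx as nx

def shortest_path_with_allowed_colors(G, source, target, allowed=('black', 'red')):
    # Build a subgraph containing only the edges whose colour is allowed
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                     if d.get('color') in allowed)
    # Any off-the-shelf algorithm now works on the filtered graph
    return nx.dijkstra_path(H, source, target, weight='weight')
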

+",19452,,,,,3/5/2019 18:25,,,,0,,,,CC BY-SA 4.0 +11037,2,,8778,3/5/2019 18:44,,0,,"

I have encountered a similar problem while trying to predict forex prices. I understand it this way:

+ +
+

The data based on which you try to model price differences are so poor that the lowest error is achieved by zeroing the predicted values

+
+ +

In other words, due to poor ""correlation"" in data, zero values are the best solution.

+ +

My advice would be to look for more correlated data to be used.

+",22659,,16565,,3/6/2019 12:27,3/6/2019 12:27,,,,0,,,,CC BY-SA 4.0 +11038,1,11045,,3/5/2019 18:59,,1,64,"

I am interested in a framework for learning the similarity of different input representations based on some common context. I have looked into word2vec, SVD and siamese networks, all of which are similar to what I want.

+ +

For example, suppose we have some customers we are sending different advertisements to, and I would like to create a system to map offers to customers. I am thinking in the lines of creating a customer representation, and a representation of the offers, and feeding them in parallel to a neural network that has a label of whether they acted on the advertisement or not. The idea is that I should be able to locate the best offer for any customer given these representations.

+ +

I have looked into siamese networks and word2vec, both are close to what I want. The problem differs slightly in that for the siamese networks, it tends to be identical parallel networks, which I don't want because my inputs are not equivalent. And for word2vec the vectors tend to be in the same domain, while I want to apply this in a more general setting.

+ +

If anyone has any resources on a similar problem statement, I would be very interested in it.

+",22906,,2444,,4/3/2019 9:10,4/3/2019 9:10,Learning similarities between customers and offers representation,,1,0,,,,CC BY-SA 4.0 +11041,2,,10675,3/5/2019 20:03,,5,,"

From the book Reinforcement Learning, An Introduction (R. Sutton, A. Barto):

+ +
+

The term system identification is used in adaptive control for what we call model-learning (e.g., Goodwin and Sin, 1984; Ljung and Söderström, 1983; Young, 1984).

+
+ +

Model-learning refers to the act of learning the model (environment). Reinforcement Learning can be divided into two types:

+ +
    +
  1. Model-based - first we build a model of an environment and then do the control.

  2. +
  3. Model-free - we do not try to model the behaviour of the environment.

  4. +
+ +

Policy learning is the act of learning optimal policy. You can do it in two ways:

+ +
    +
  1. On-policy learning - learn about the policy $π$ by sampling from the same policy.

  2. +
  3. Off-policy learning - learn about the policy $π$ from the experience sampled from some other policy (e.g. watching different agent playing a game).

  4. +
+",22835,,,,,3/5/2019 20:03,,,,0,,,,CC BY-SA 4.0 +11042,1,,,3/5/2019 20:39,,1,24,"

I would like to train a constrained neural network. I found a paper on this: https://papers.nips.cc/paper/4-constrained-differential-optimization.pdf. However, I don't really understand how to change my feedforward neural network. In particular, I don't understand the role of the new output neurons, which serve as Lagrangian multipliers. Can someone explain this to me in more detail?

+ +

Which steps in backpropagation do I have to change?

+",19195,,2444,,3/5/2019 21:29,3/5/2019 21:29,What is the purpose of the new neurons in the constrained neural network?,,0,0,,,,CC BY-SA 4.0 +11043,1,12471,,3/5/2019 23:22,,2,557,"

I'm looking to build from scratch an implementation of the wake-sleep algorithm, for unsupervised learning with neural networks. I plan on doing this in Python in order to better understand how it works. In order to facilitate my task, I was wondering if anyone could point me to an existing (open-source) implementation of this concept. I'm also looking for articles or, in general, resources that could facilitate this task.

+",22781,,2444,,11/18/2019 18:49,11/18/2019 18:54,Where can I find an implementation of the wake-sleep algorithm?,,2,0,,,,CC BY-SA 4.0 +11044,2,,10947,3/6/2019 1:39,,2,,"

Yes, it does and at many parts of the solution. For one of the core components - intent detection - Intento did a benchmark comparing IBM Watson and other similar products.

+

Outside of intent detection, there are other areas where AI techniques help - e.g. disambiguation, bootstrapping a bot from chat logs etc. Specifically for IBM Watson, you can learn more here.

+",22914,,69202,,3/16/2023 5:25,3/16/2023 5:25,,,,0,,,,CC BY-SA 4.0 +11045,2,,11038,3/6/2019 3:23,,0,,"
+

The idea is that I should be able to locate the best offer for any customer given these representations.

+
+ +

I think you need a Recommender System. As you want to map the offers to customers based on their representation, you can check Content-Based Recommender Systems.

+ +

The method works by taking patterns from a customer's history and trying to find similarities with the new offers. You can use many techniques for a recommender system, from a simple TF-IDF to Deep Learning for more complex problems.
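
A minimal sketch of the TF-IDF idea with scikit-learn (assuming, for illustration, that offers and a customer's history can be described as short texts; the example strings are made up):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

offer_texts = ['discount on sports shoes', 'new credit card cashback', 'grocery voucher']
customer_history = 'bought running shoes and a gym membership'

vectorizer = TfidfVectorizer()
offer_vectors = vectorizer.fit_transform(offer_texts)
customer_vector = vectorizer.transform([customer_history])

# Rank offers by similarity to the customer's history
scores = cosine_similarity(customer_vector, offer_vectors)[0]
best_offer = offer_texts[scores.argmax()]
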

+",16565,,,,,3/6/2019 3:23,,,,3,,,,CC BY-SA 4.0 +11047,1,,,3/6/2019 8:57,,1,110,"

I would like to find a way to isolate the speech of each of the people in an audio record so I can create a file of that form :

+ +
[
+   {
+       ""voice_fingerprint"": ""701066EDD3A0A40A2F53F61EAFD0E6AB"",
+       ""sentences"": [
+           {
+               ""sentence"": ""do you like red apples"",
+               ""position"": 1.39 // Seconds. Time position in the audio record
+           },
+           {
+               ""sentence"": ""and how do you feel about time shifts"",
+               ""position"": 7.21
+           }
+       ]
+   },
+   {
+       ""voice_fingerprint"": ""8FFEA051AF3E3FB9A80A51A98FE05896"",
+       ""sentences"": [
+           {
+               ""sentence"": ""yes I do like them"",
+               ""position"": 4.54
+           },
+           {
+               ""sentence"": ""i feel well about traveling"",
+               ""position"": 10.18
+           }
+       ]
+   }
+]
+
+ +

This may be an interview record.

+ +

The problem IS NOT the Speech to Text, but to isolate the two people's sentences. Preferably in Python.

+ +

Have you ever worked on this? Do you have any hints?

+",22924,,,,,2/13/2020 18:02,Isolate the speech of two people in an audio record with two people only,,1,1,,,,CC BY-SA 4.0 +11048,2,,10050,3/6/2019 10:08,,1,,"

If you simulate many trajectories and receive many estimates of the two returns you're interested in, you could empirically compare their sample variances.

+ +

However, the variance of ordinary importance sampling is, in general, unbounded. If you want some theoretical bounds on the variance of importance sampling estimates, I'd start with weighted importance sampling, whose variance converges to zero (Sutton and Barto, section 5.5).

+",22916,,,,,3/6/2019 10:08,,,,0,,,,CC BY-SA 4.0 +11049,2,,9925,3/6/2019 10:34,,2,,"

This is a learnable behavior, given enough data. We would expect an error to backpropagate to $w$ whenever its use harmed classification accuracy. In this case, that would be whenever $|w|>0$. In general, I'm not sure how long this would take.

+ +

However, the speed of $w$'s convergence to zero would benefit from regularization, which is often basically a penalty on the magnitude of your network weights added to the loss function you're optimizing. If $w$ truly doesn't matter to classification, then regularization will definitely drive it to zero.
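
As a small illustration of such a penalty (a hedged sketch; lam is an assumed regularization strength):

import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=1e-3):
    mse = np.mean((y_true - y_pred) ** 2)
    # The gradient of this penalty pushes every weight, including an
    # irrelevant one such as w, towards zero.
    l2_penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + l2_penalty
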

+",22916,,22916,,3/10/2019 8:34,3/10/2019 8:34,,,,0,,,,CC BY-SA 4.0 +11050,1,11147,,3/6/2019 10:41,,2,799,"

Following Deep Q-learning from Demonstrations, I'd like to avoid potentially unsafe behavior during early learning by making use of supervised learning with demonstration data. However, the implementation I'm following still uses an environment. Can I train my agent without an environment at all?

+",22930,,22916,,3/11/2019 8:10,3/11/2019 10:04,Is there a way to train an RL agent without any environment?,,1,4,,,,CC BY-SA 4.0 +11051,2,,11043,3/6/2019 10:45,,1,,"

I don't know if you are looking for something in a library, but I've found this in a public GitHub repository (I've not checked deeply whether it fits your needs).

+ +

I hope that's what you're looking for.

+",22930,,,,,3/6/2019 10:45,,,,1,,,,CC BY-SA 4.0 +11055,1,,,3/6/2019 13:23,,18,5305,"

I am working on creating an RL-based AI for a certain board game. Just as a general overview of the game so that you understand what it's all about: It's a discrete turn-based game with a board of size $n \times n$ ($n$ depending on the number of players). Each player gets an $m$ number of pieces, which the player must place on the board. In the end, the one who has the least number of pieces wins. There are of course rules as to how the pieces can be placed so that not all placements are legal at every move.

+

I have the game working in an OpenAI's gym environment (i.e. control by step function), have the board representation as the observation, and I have defined the reward function.

+

The thing I am struggling with right now is to meaningfully represent the action space.

+

I looked into how AlphaZero tackles chess. The action space there is $8*8*73 = 4672$: for every possible tile on the board, there are 73 movement-related modalities. So, for every move, the algorithm comes up with 4672 values, the illegal ones are set to zero and non-zero ones are re-normalized.

+

Now, I am not sure if such an approach would be feasible for my use-case, as my calculations show that I have a theoretical cap of ~30k possible actions ($n * n * m$) if using the same way of calculation. I am not sure if this would still work, especially considering that I don't have DeepMind's computing resources at hand.

+

Therefore, my question: Is there any other way of doing it apart from selecting the legal actions from all theoretically possible ones?

+

The legal actions would be just a fraction of the ~30k possible ones. However, at every step, the legal actions would change because every new piece determines the new placement possibilities (also, the already placed pieces are not available anymore, i.e. action space generally gets smaller with every step).

+

I am thinking of games, like Starcraft 2, where action space must be larger still and they demonstrate good results, not only by DeepMind but also by private enthusiasts with for example DQN.

+

I would appreciate any ideas, hints, or readings!

+",21278,,2444,,1/11/2021 23:57,1/11/2021 23:57,"How to deal with a huge action space, where, at every step, there is a variable number of legal actions?",,1,1,,,,CC BY-SA 4.0 +11056,1,,,3/6/2019 13:35,,3,77,"

Some formulations of humanism already grant moral consideration to sentient non-human animals (e.g. https://humanists.international/what-is-humanism/). Does humanism also extend to granting rights to AGIs, should they become sentient?

+ +

Sentientism is a closely related philosophy that makes this explicit https://en.wikipedia.org/wiki/Sentientism.

+",22935,,22935,,3/7/2019 16:23,3/7/2019 18:26,Does humanism grant moral consideration to sentient artificial general intelligences?,,1,5,,,,CC BY-SA 4.0 +11057,1,11133,,3/6/2019 14:07,,27,13974,"

In mathematics, the word operator can refer to several distinct but related concepts. An operator can be defined as a function between two vector spaces, it can be defined as a function where the domain and the codomain are the same, or it can be defined as a function from functions (which are vectors) to other functions (for example, the differential operator), that is, a high-order function (if you are familiar with functional programming).

+
    +
  • What is the Bellman operator in reinforcement learning (RL)?
  • +
  • Why do we even need it?
  • +
  • How is the Bellman operator related to the Bellman equations in RL?
  • +
+",2444,,48391,,11/18/2021 10:19,11/18/2021 10:19,What is the Bellman operator in reinforcement learning?,,1,2,,,,CC BY-SA 4.0 +11058,1,,,3/6/2019 14:21,,1,1395,"

I have 10 variables as like below

+
+

$V_1=1$, $V_2=2$, $V_3=3$, $V_4=4$, $V_5=5$, $V_6=6$, $V_7=7$, $V_8=8$, $V_9=9$ and $V_{10}=10$

+
+

Note: Each variable can have any value

+

Now I want to select the best 3 variables combination as like below

+
+

$V_1V_3V_4$ or $V_{10}V_1V_7$ or $V_5V_3V_9$ etc.

+
+

The best combination is simply the one with the highest sum of the values of the 3 variables in the combination.

+

Example:

+
+

Combination 1($V_1V_2V_3$) : 1+2+3 $\Rightarrow$ 6

+

Combination 2($V_8V_9V_{10}$) : 8+9+10 $\Rightarrow$ 27

+
+

In the above example Combination 2($V_8V_9V_{10}$) has the highest sum value. So the Combination 2($V_8V_9V_{10}$) is the best combination here.

+

Similarly, if I have a large number of variables, which machine learning algorithm selects the best combination overall?

+

Suggest me the best machine learning algorithm for selecting the best variable combinations. Thanks in advance.

+",22931,,2444,,3/28/2021 1:26,3/28/2021 1:27,What is the best machine learning algorithm to select best 3 variable combinations?,,1,1,,,,CC BY-SA 4.0 +11059,1,11064,,3/6/2019 14:44,,4,501,"

The question already has some answers, but I am still finding it quite unclear (also, does $\pi(s)$ here mean $q(s,a)$?):

+ +

+ +

The few things I do not understand are:

+ +
    +
  • Why the difference between 2 iterations if we are acting greedily in each of them? As per many sources 'Value Iteration' does not have an explicit policy, but here we can see the policy is to act greedily to current $v(s)$
  • +
  • What exactly does Policy Improvement mean? Are we acting greedily only at a particular state at a particular iteration OR once we act greedily on a particular state we keep on acting greedily on that state and other states are added iteratively until in all states we act greedily?
  • +
  • We can intuitively understand that acting greedily w.r.t $v(s)$ will lead to $v^*(s)$ eventually, but does using Policy Iteration eventually lead to $v^*(s)$?
  • +
+ +

NOTE: I have been thinking of all the algorithms in context of Gridworld, but if you think there is a better example to illustrate the difference you are welcome.

+",,user9947,2444,,3/6/2019 16:41,3/6/2019 20:16,A few questions regarding the difference between policy iteration and value iteration,,1,0,,2/5/2021 10:23,,CC BY-SA 4.0 +11062,1,,,3/6/2019 17:28,,1,79,"

I don't know if this is possible, but nowadays, as almost everything seems possible, I am asking to see if anyone has any idea.

+

The problem is:

+
+

Regularly, I have to import files (CSV, XML, Excel, ...) into an SQL Server database. After I import the files, I have to map the content of the columns of the imported files in a .sql script, which is then run to insert the data into a table in another DB (this table is the same for all the imported files).

+

As you can imagine, I spend a lot of time on these mappings. I want to know whether it is possible, through Artificial Intelligence or Machine Learning, to build a solution that imports the initial data into the final DB. I can do the initial import automatically; the problem is only mapping the imported files into the final DB.

+

Note: the initial files don't have the same column names; each file can have different column names.

+
+

Sorry for the long post, but I don't know if something like that is possible. I searched and didn't find anything.

+",22940,,-1,,6/17/2020 9:57,3/7/2019 9:07,Automation the import of files to Database,,0,0,,,,CC BY-SA 4.0 +11064,2,,11059,3/6/2019 20:16,,3,,"

$\pi(s)$ does not mean $q(s,a)$ here. $\pi(s)$ is a policy that represents a probability distribution over the action space for a specific state. $q(s,a)$ is a state-action pair value function that tells us how much reward we expect to get by taking action $a$ in state $s$ and continuing from there.

+ +

For the value iteration on the right side with this update formula:

+ +

$v(s) \leftarrow \max\limits_{a} \sum\limits_{s'}p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$

+ +

we have an implicit greedy deterministic policy that updates the value of state $s$ based on the greedy action that gives us the biggest expected return. When value iteration has converged to its values based on greedy behaviour after $n$ iterations, we can get the explicit optimal policy with:

+ +

$\pi(s) = \arg\max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$

+ +

Here we are basically saying that the action that has the highest expected reward for state $s$ will have a probability of 1, and all other actions in the action space will have a probability of 0.

+ +

For the policy evaluation on the left side with this update formula:

+ +

$v(s) \leftarrow \sum\limits_{s'}p(s'\mid s, \pi(s))[r(s, \pi(s), s') + \gamma v(s')]$

+ +

we have an explicit policy $\pi$ that, in the general case, is not greedy in the beginning. That policy is usually randomly initialized, so the actions that it takes will not be greedy, which means we can start with a policy that takes some pretty bad actions. It also does not need to be deterministic, but I guess in this case it is. Here we are updating the value of state $s$ according to the current policy $\pi$.
After the policy evaluation step has run for $n$ iterations, we start with the policy improvement step:

+ +

$\pi(s) = \arg\max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$

+ +

Here we are greedily updating our policy based on the values of the states that we got through the policy evaluation step. It is guaranteed that our policy will improve, but it is not guaranteed that our policy will be optimal after only one policy improvement step. After the improvement step, we do the evaluation step for the new, improved policy; after that, we again do the improvement step, and so on, until we converge to the optimal policy.
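
For illustration, a minimal numpy sketch of the value iteration update and the final greedy policy extraction, assuming the dynamics are given as arrays P[s, a, s'] (transition probabilities) and R[s, a, s'] (rewards):

import numpy as np

def value_iteration(P, R, gamma=0.9, n_iters=1000):
    n_states, n_actions, _ = P.shape
    v = np.zeros(n_states)
    for _ in range(n_iters):
        # q[s, a] = sum_s' P[s, a, s'] * (R[s, a, s'] + gamma * v[s'])
        q = np.einsum('sat,sat->sa', P, R + gamma * v[None, None, :])
        v = q.max(axis=1)          # implicit greedy policy during the updates
    policy = q.argmax(axis=1)      # explicit greedy policy extracted at the end
    return v, policy
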

+",20339,,,,,3/6/2019 20:16,,,,0,,,,CC BY-SA 4.0 +11069,2,,10575,3/7/2019 2:00,,5,,"

Advantage function: $A(s,a) = Q(s,a) - V(s)$

+

More interesting is the General Value Function (GVF), the expected sum of the (discounted) future values of some arbitrary signal, not necessarily reward. It is therefore a generalization of value function $V(s)$. The GVF is defined on page 459 of the 2nd edition of Sutton and Barto's RL book as +$$v_{\pi,\gamma,C}(s) =\mathbb{E}\left[\left.\sum_{k=t}^\infty\left(\prod_{i=t+1}^k \gamma(S_i)\right)C_{k+1}\right\rvert S_t=s, A_{t:\infty}\sim\pi\right]$$ +where $C_t \in \mathbb{R}$ is the signal being summed over time.

+

$\gamma(S_t)$ is a function $\gamma: \cal{S}\to[0,1]$ allowing the discount rate to depend upon the state. Sutton and Barto call it the termination function. Some call it the continuation function.

+

Also of note are the differential value functions. These are used in the continuing, undiscounted setting. Because there is no discounting, the expected sum of future rewards is unbounded. Instead, we optimize the expected differential reward $R_{t+1}-r(\pi)$, where $r(\pi)$ is the average reward under policy $\pi$.

+

$$v_{\pi,\,diff}(s) = \sum_a \pi(a|s) \sum_{s',r} p(s',r|s,a)\left[r-r(\pi)+ v_{\pi,\,diff}(s')\right]$$ +$$v_{*,\,diff}(s) = \max_a \sum_{s',r} p(s',r|s,a)\left[r-\max_\pi r(\pi)+ v_{*,\,diff}(s')\right]$$

+

The differential value functions assume that a single fixed value of $r(\pi)$ exists. That is, they assume the MDP is "ergodic." See section 10.3 of Sutton and Barto for details.

+",22916,,2444,,11/23/2020 14:03,11/23/2020 14:03,,,,0,,,,CC BY-SA 4.0 +11070,1,,,3/7/2019 2:18,,2,222,"

I would like to start programming a multi-task reinforcement learning model. For this, I need not just one maze or grid world (or just model-based), but many, with different reward functions. So, I am wondering if there exists a dataset or a generator for such a thing, or do I need to code everything by myself?

+",22949,,2444,,3/7/2019 8:51,3/7/2019 8:51,Is there any grid world dataset or generator for reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +11071,2,,11070,3/7/2019 3:06,,3,,"

Depending on your needs and the size of the project, you might be better off making a custom set of environments. If you'd rather not do that, though, you should take a look at OpenAI's CoinRun environment. A high-level description can be found in their blog post.

+ +

The ""RandomMazes"" version of this environment might be useful to you. And if you want to make the mazes smaller, you can redefine MAX_MAZE_DIFFICULTY in coinrun.cpp.

+ +

Note that, although the levels are procedurally generated, reward is only ever given when the agent picks up the single coin. So, this might not be as much variety in reward function as you wanted.

+",22916,,22916,,3/7/2019 3:11,3/7/2019 3:11,,,,0,,,,CC BY-SA 4.0 +11073,2,,10895,3/7/2019 7:01,,1,,"

You are right that you will have to implement the above steps before building any chatbot. This is because your computer doesn't understand text like you and I do. Therefore, text has to be pre-processed (converted) into a format that the computer can work with. Tokenization, Part-of-Speech Tagging, and Named Entity Recognition are such pre-processing techniques.
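
As a small, hedged example of those pre-processing steps (assuming spaCy and its small English model are installed; NLTK or similar libraries would work just as well):

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('Book me a flight to Damascus next Friday')

tokens = [token.text for token in doc]                    # tokenization
pos_tags = [(token.text, token.pos_) for token in doc]    # part-of-speech tagging
entities = [(ent.text, ent.label_) for ent in doc.ents]   # named entity recognition
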

+ +

Rule-based means that you create rules for deciding how your chatbot should behave in a given situation.

+ +

You can have a look at this blog which teaches how to deploy a chat bot using DialogFlow on Slack - https://www.analyticsvidhya.com/blog/2018/03/how-to-build-an-intelligent-chatbot-for-slack-using-dialogflow-api/

+",22956,,,,,3/7/2019 7:01,,,,0,,,,CC BY-SA 4.0 +11074,1,11075,,3/7/2019 7:50,,2,2089,"

I'm studying how SPP (Spatial Pyramid Pooling) works. SPP was invented to tackle the fixed input image size in CNNs. In the original paper https://arxiv.org/pdf/1406.4729.pdf, the authors say:

+ +
+

convolutional layers do not require a fixed image size and can + generate feature maps of any sizes. On the other hand, the + fully-connected layers need to have fixed size/length input by their + definition. Hence, the fixed size constraint comes only from the + fully-connected layers, which exist at a deeper stage of the network.

+
+ +

Why does a fully connected layer only accept a fixed input size (while convolutional layers don't)? What's the real reason behind this definition?

+",20612,,2444,,3/7/2019 8:45,3/7/2019 10:28,Why does a fully connected layer only accept a fixed input size?,,2,0,,,,CC BY-SA 4.0 +11075,2,,11074,3/7/2019 9:19,,1,,"

A convolutional layer is a layer where you slide a kernel or filter (which you can think of as a small square matrix of weights, which need to be learned during the learning phase) over the input. In practice, when you need to slide this kernel, you will often need to specify the ""padding"" (around the input) and ""stride"" (with which you convolve the kernel on the input), in order to obtain the desired output (size). So, even if you receive inputs of different sizes, you can change these values, like the padding or the stride, in order to produce a valid output (size). In this sense, I think, we can say that convolutional layers accept inputs of (almost) any size.

+ +

The number of feature maps does not depend on the kernel or input (sizes). The number of features maps is determined by the number of different kernels that you will use to slide over the input. If you have $K$ different kernels, then you will have $K$ different feature maps. The number of kernels is often a hyper-parameter, so you can change it (as you please).

+ +

A fully connected (FC) layer requires a fixed input size by design. The programmer decides the number of input units (or neurons) that the FC layer will have. This hyper-parameter does not usually change during the learning phase. So, yes, FC layers accept inputs of a fixed size (also because they do not adopt techniques like ""padding"").
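
A small PyTorch sketch of this asymmetry (the layer sizes are arbitrary choices for illustration):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
fc = nn.Linear(in_features=8 * 32 * 32, out_features=10)

for size in (32, 64):
    x = torch.randn(1, 3, size, size)
    feats = conv(x)                        # the conv layer accepts both 32x32 and 64x64 inputs
    flat = feats.flatten(start_dim=1)
    if flat.shape[1] == fc.in_features:
        print(size, fc(flat).shape)        # only the 32x32 input matches the FC layer
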

+",2444,,,,,3/7/2019 9:19,,,,0,,,,CC BY-SA 4.0 +11076,2,,11074,3/7/2019 10:28,,0,,"

It doesn't have to be so. A fully connected layer can be considered as a convolutional layer with an input image of 1 pixel and a spatial kernel size of 1 pixel. So a 1-pixel-kernel convolutional layer is effectively the same as a fully connected layer attached to each pixel. That is the idea behind ""Fully Convolutional Networks"". If you want a ""true"" 1-pixel fully connected layer after a convolutional layer (with variable input size), all you have to do is put an average pooling layer (or another type of pooling layer) before the fully connected layer. That way, the fully connected layer can accept a variable input size.
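
A minimal PyTorch sketch of that pooling trick (layer sizes are arbitrary):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),    # averages each feature map down to a single value
    nn.Flatten(),
    nn.Linear(16, 10),          # now always receives exactly 16 features
)

for size in (32, 64, 100):
    out = model(torch.randn(1, 3, size, size))
    print(size, out.shape)      # (1, 10) for every input size
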

+",22745,,,,,3/7/2019 10:28,,,,0,,,,CC BY-SA 4.0 +11077,1,,,3/7/2019 11:06,,2,209,"

I have a series of sensors (around 4k), and each sensor will measure the amplitudes at each point. Suppose I train the neural network with a sufficient set of 4k values (N * 4k shape). The machine will find a pattern in the series of values. If the values stray away from the pattern (that is, an anomaly), it can detect the point and will be able to say that the anomaly is in the 'X'th sensor. Is this possible? If so, what kind of neural network should I use?

+",21936,,,,,3/12/2019 16:20,Finding anomaly detection by pattern matching in a set of continous data,,2,3,,,,CC BY-SA 4.0 +11081,1,11086,,3/7/2019 15:52,,3,721,"
+

Exercise 3.5 The equations in Section 3.1 are for the continuing case and need to be modified (very slightly) to apply to episodic tasks. Show that you know the modifications needed by giving the modified version of (3.3).

+
+ +

$\displaystyle\sum_{s^{\prime} \in S} \displaystyle\sum_{r \in R} p(s^{\prime}, r \mid s,a) = 1$, for all $s\in S, a \in A(s)$ (3.3)

+ +

Is it just about final states? So for $s \in S$ when $s$ is not final?

+",22826,,22826,,3/7/2019 16:19,4/17/2019 18:23,"Difference in continuing and episodic cases in Sutton and Barto - Introduction to RL, exercise 3.5",,2,0,,,,CC BY-SA 4.0 +11083,2,,11058,3/7/2019 16:07,,1,,"

What you have is an instance of a local search or black-box optimization problem.

+

There are many AI algorithms to do this, though as commenters have noted, with only 10 binary variables, you're happier to just look at all $2^{10}$ of them.

+

If you had, say, 60, then looking at them all is no longer a good idea ($2^{60}$ is very large).

+

The simplest BBO technique is called hill climbing. It looks like this:

+
    +
  1. Pick a random subset of the variables to be true.
  2. +
  3. For each variable, determine the effect of flipping that variable from true to false in the current assignment.
  4. +
  5. Select the 'flip' that leads to the largest value of your function.
  6. +
  7. If the 'flip' has a larger value of your function than the current assignment, GOTO 2.
  8. +
  9. Otherwise, return the current assignment.
  10. +
+

This amounts to: 'Pick an assignment at random, then keep making small changes that improve it until you can't anymore. Claim that this is the best solution.'
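
A rough Python sketch of that loop (the objective function at the end is a made-up example, not part of the original problem):

import random

def hill_climb(score, n_vars):
    x = [random.random() < 0.5 for _ in range(n_vars)]   # random initial assignment
    best = score(x)
    while True:
        candidates = []
        for i in range(n_vars):                          # evaluate every single-variable flip
            y = list(x)
            y[i] = not y[i]
            candidates.append((score(y), y))
        value, flipped = max(candidates, key=lambda c: c[0])  # best flip
        if value <= best:
            return x, best                               # no flip improves: local optimum
        x, best = flipped, value                         # accept the flip and repeat

# Made-up objective: reward the sum of chosen values, penalise choosing a number other than 3
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
score = lambda mask: sum(v for v, m in zip(values, mask) if m) - 100 * abs(sum(mask) - 3)
print(hill_climb(score, len(values)))   # may return a local optimum; restarts help
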

+

Clearly, this algorithm will not recover the globally best solution with certainty, but you could run it many times, and eventually, it will find it with high probability.

+

A complete algorithm, that will always uncover the global optimum, is simulated annealing. This algorithm has a temperature parameter, usually denoted T, which changes according to a temperature schedule. If you start T large 'enough', and decrease it slowly 'enough', then it will find the global optimum. Unfortunately, 'enough' depends a lot on the shape of the function you are optimizing, which you generally cannot observe. There are some common guesses for T, like multiplying it by, e.g. 0.9999 at each step in the search, that seem to work okay in practice.

+

Both of these algorithms are widely available in packages.

+",16909,,36737,,3/28/2021 1:27,3/28/2021 1:27,,,,0,,,,CC BY-SA 4.0 +11084,1,,,3/7/2019 16:11,,4,157,"

I'm writing a virtual environment for a 4-player card game named estimation, and will use deep reinforcement learning to teach an agent to play it.

+ +

Each player gets a hand of 13 cards, and the first phase is for each player to estimate the number of tricks they will collect. The highest player gets to start first,and then after each round, the player who collects the trick starts the next. So basically the first phase is for bidding and the next phase consists of 13 rounds.

+ +

The state input I'll use will include all the cards that have been played, the goal and collected tricks, and the available cards. The output for each round will be a vector of length 54, containing all the cards and then the available card with the highest probability would be played.

+ +

At first I thought that the bidding phase should use the same input but with zeros everywhere except the available hand, and the output would exclude all the cards with no numbers like king, queen or jack. But then the ability to dash (estimate that you'll collect 0 tricks) wouldn't be available. Also I don't think it would work really well.

+ +

Should I just use two NNs for each phase, or what should I do? Also if anyone has any advice on things I need to watch out for, I'd really appreciate it if they shared them.

+",22971,,,,,3/9/2019 9:01,Teaching a neural network to play a card game with two phases,,1,0,,,,CC BY-SA 4.0 +11086,2,,11081,3/7/2019 16:41,,2,,"
+

Is it just about final states? So for $s \in S$ when S is not final?

+
+ +

You are thinking the right way, but to represent what you mean you don't need to write out ""when $s$ is not final"" - although that would be fine (and is used in some places), there is a more concise way of saying that given to you by the book.

+ +

As this is a formal exercise from the book, I don't want to write out an answer that could be cut&paste for all students.

+ +

Instead I suggest you take a look at the notations section at the beginning of the book, and find how Sutton & Barto use different set labels for all states including terminal states, and all states excluding terminal states. Also, check carefully which of those sets needs to be summed over.

+",1847,,,,,3/7/2019 16:41,,,,4,,,,CC BY-SA 4.0 +11087,1,11091,,3/7/2019 16:59,,3,476,"

Let's use Excercise 3.8 from Sutton, Barto - Introduction to RL:

+ +
+

Suppose $\gamma = 0.5$ and the following sequence of rewards is received: $R_1=-1$, $R_2=2$, $R_3=6$, $R_4=3$, $R_5=2$, with $T=5$. What are $G_0, G_1, ..., G_5$?

+
+ +

There isn't $G_5$ because $R_5$ is last reward. Am I understanding it right?

+ +

So:

+ +

$G_4 = 2$

+ +

$G_3 = 3 + 0.5*2 = 4$

+ +

$G_2 = 6+0.5*4 = 8$

+ +

$G_1 = 2+0.5*8 = 6$

+ +

$G_0 = -1 +0.5*6 = 2$

+",22826,,2444,,3/7/2019 17:43,3/7/2019 19:48,"Given specific rewards, how can I calculate the returns for each time step?",,1,0,,,,CC BY-SA 4.0 +11088,5,,,3/7/2019 17:44,,0,,,-1,,-1,,3/7/2019 17:44,3/7/2019 17:44,,,,0,,,,CC BY-SA 4.0 +11089,4,,,3/7/2019 17:44,,0,,"For questions related to the concept of reward, for example, in the context of reinforcement learning and Markov decision processes. For questions related to reward functions, reward design, reward shaping, reward hacking, etc., there are also those specific tags, so you use them instead of this generic one, unless your question is also about the concept of reward.",2444,,2444,,12/25/2021 18:35,12/25/2021 18:35,,,,0,,,,CC BY-SA 4.0 +11090,2,,11056,3/7/2019 18:20,,2,,"
+

Does humanism also extend to granting rights to AGIs, should they become sentient?

+
+ +

I think that is going to be something that will divide self-identifying humanists into a few different groups, based on understanding about what AGI is.

+ +

The front page you link includes ""aiming toward the welfare and fulfillment of living things"", which does not explicitly include non-living sentient things. However, the humanist philosophy naturally extends to granting rights and protections based on reasoning and scientific evidence to anything that can be argued as needing them.

+ +

The big problems are:

+ +
    +
  • Recognising and measuring sentience.

  • +
  • Attaching moral value to events that happen to artificial entities, where subjective measures such as pain, stress, happiness, fulfilment might not be relevant.

  • +
+ +

These measurement problems already cause a lot of variability in moral stances towards farming animals for instance. These biological creatures share enough with humans that even though we don't fully understand the nature of their subjective experience, it is possible to extrapolate with some backing from scientific studies. For example, we can measure whether a creature suffers stress or pain and reacts similarly to a human, and even if we don't understand what that means subjectively (what it feels like to be such an animal), a combination of Occam's Razor and Precautionary Principle can be argued:

+ +
    +
  • Occam's Razor the simplest interpretation is that if an animal is based on same cellular structure, has same brain regions for experiencing pain and reward, uses same stress hormones etc, that its subjective experience is likely somehow comparable to a humans. This allows us to apply empathy to an animal's situation with a guess that doing so is not 100% inaccurate.

  • +
  • Precautionary Principle the consequences for inhumanely treating a sentient creature are morally worse than the waste of humanely treating a non-sentient creature (provided doing the latter is not causing undue suffering to some other thing). So faced with a lack of precise knowledge, there is still a strong rational ethics argument for extending rights and humane treatment if there is any uncertainty.

  • +
+ +

When it comes to artificially created entities, we lose the first argument of Occam's Razor, and can only get it back with a better understanding of what sentience actually is, and which parts of it we should value. We need a much firmer theory here than with animals, because there is so little shared between our construction and that of a human-built machine. In addition, just as there are many different kinds of organism which exhibit different amounts of apparent intelligence and possible sentience, there will be different versions of AGI, probably starting with purpose-built non-narrow AIs that combine features that are practically useful and make them seem intelligent (parsing human speech and responding appropriately, which Google Home, Alexa, Siri already do to some depth) with some common-sense understanding of broad task portfolios. These things are near future, and most developers who work on them would not consider them sentient - however, they will be designed to act as if they were, because that makes them practical and usable, which in turn will make it much harder to argue Occam's Razor due to similarity in behaviour.

+ +

We still keep the second argument, but it is confounded by not understanding what the analogs are for humane and inhumane when dealing with artificial entities. It is unlikely that anyone would have a problem with switching off a running instance of the next 2 or 3 generations of Siri assistants - but at what point does that become deliberate incapacitation of a sentient being, and does it even matter if the system is designed to be switched on and off, and expresses no particular desire to be in either state?

+ +

In summary, the humanist philosophy could extend to cover the welfare of artificial general intelligences, but it is far from clear whether it can go there in practice. Until we have more complete theoretical models of AGIs, or practical working devices, then it is premature to consider it. Most debate is going to be based on imagined qualities of AGI, and any two people discussing this are likely to be starting from different assumptions about what those are.

+",1847,,1847,,3/7/2019 18:26,3/7/2019 18:26,,,,0,,,,CC BY-SA 4.0 +11091,2,,11087,3/7/2019 19:48,,1,,"

Perfect.

+ +

To back up your intuition about there not being a $G_5$, refer to the definition of discounted return in the episodic case (3.11): $$G_t \doteq \sum_{k=t+1}^T \gamma^{k-t-1} R_k$$

+ +

You'll see that $G_5$ would be written as a sum with no terms in it, since $T=5$.
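
A quick way to check such computations is to accumulate the return backwards (plain Python; the numbers are the ones from the exercise):

gamma = 0.5
rewards = [-1, 2, 6, 3, 2]       # R_1 ... R_5

G = 0.0                          # G_5 is the empty sum, i.e. 0
returns = []
for r in reversed(rewards):
    G = r + gamma * G
    returns.append(G)
returns.reverse()
print(returns)                   # [2.0, 6.0, 8.0, 4.0, 2.0] -> G_0 ... G_4
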

+",22916,,,,,3/7/2019 19:48,,,,0,,,,CC BY-SA 4.0 +11092,1,,,3/7/2019 22:28,,1,82,"

Dialects differ a lot between cities in my country, Syria. People sometimes express themselves using different local phrases and idioms which refer to the same topic. So, I came up with the idea of creating an Android application that shows a limited set of sentences or expressions while asking you to express them orally in the local dialect of your region; after that, this application tries to figure out what your dialect is. For a short period of time, I'm going to launch an Android application in order to collect the needed dataset, which will be a new contribution. First of all, I need some helpful answers to my questions:

+ +
    +
  1. In general, is a period of 6 months enough for such a project to be done by only one student who is a beginner in this field, or is it harder than it seems?
  2. +
  3. Are the libraries and tools needed to do this project available for free?
  4. +
  5. I know that more training data leads to more accurate results. In order to obtain good results, what is the estimated minimum number of training data needed for this model?
  6. +
  7. How do you advise me to begin?
  8. +
  9. How much is my suggested project relevant to the project attached in this link?
  10. +
+ +

kindly write down your suggested edits and recommendations if any.

+ +

Edit for the 5th question: also see this paper.

+",20449,,20449,,3/9/2019 9:57,3/9/2019 9:57,Dialects classification using deep learning,,0,5,,,,CC BY-SA 4.0 +11093,2,,10603,3/7/2019 22:31,,4,,"

Another fallacy that appears common to most search engines is that anything a person searches on is an aspect of their own identity. I once searched on walk-in tubs for a very elderly relative, and was followed all over the web by ads for aids for the infirm elderly. Users who recognize that Google uses their searches to build their profile can alter their searches accordingly. It's also fun to mess with Google's model. Try searching on ""dragon images"" and see how fast Google and advertisers decide you are a teenage female. Have fun with it. Do your best to turn Google's model of you into self-contradictory garbage.

+",22984,,,,,3/7/2019 22:31,,,,1,,,,CC BY-SA 4.0 +11095,2,,4650,3/8/2019 5:26,,0,,"

AI can also be used by telcos to: (1) improve usage & retention efforts, i.e., make relevant up-sell and cross-sell offers to the right users at the right time; and (2) make segmentation more granular, i.e., make bespoke recommendations based on a user's behavioral patterns and content preferences, and assess which call & data packages best suit different customer segments, thereby increasing sales success rates.

+",22991,,,,,3/8/2019 5:26,,,,0,,,,CC BY-SA 4.0 +11096,2,,11077,3/8/2019 7:38,,2,,"

Without knowing the kind of data and the process generating it, it's hard to give a definite answer.

In general, I would attempt a network that has as inputs the actual sensor readings, and outputs the expected readings. You train this network by presenting data with errors added as inputs, and correct readings as outputs. It should learn to guess the correct (non-anomalous) readings from a set of actual readings, and you can find the sensors with anomalies by taking the difference between actual and predicted readings.

Depending on the kind of anomalies you expect (misreadings such as fluke zero or max values, or actually anomalous states of the system being measured), your training data should be set up to have samples of such anomalies.

If there are temporal correlations (for example, temperature readings that change slowly), using an RNN might be helpful, but its dimensioning heavily depends on the nature of your measured system. External factors influencing the system (state of a heating mechanism, time of day/year, etc.) could be added as inputs for better predictions.

At the end of the day, trial and error is your friend. Start with a simple network and see how well it behaves. Go for more complex networks when you see where the limitations of the simple solution are.
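
One possible sketch of the actual-to-expected-readings idea above (using scikit-learn's MLPRegressor purely for illustration; the data here are random placeholders, and the network size and threshold would need tuning on real data):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-ins for real data: rows are snapshots, columns are sensors.
X_clean = np.random.randn(500, 200)                  # known-good readings
X_noisy = X_clean + 0.1 * np.random.randn(500, 200)  # same readings with errors added

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=200)
model.fit(X_noisy, X_clean)          # learn to map actual readings to expected readings

sample = X_clean[0].copy()
sample[7] += 5.0                     # inject a large error into sensor 7
residual = np.abs(model.predict(sample.reshape(1, -1))[0] - sample)
print(residual.argmax())             # sensor with the largest actual-vs-predicted gap
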

+",22993,,,,,3/8/2019 7:38,,,,3,,,,CC BY-SA 4.0 +11098,2,,11077,3/8/2019 10:56,,2,,"

Instead of using a neural network, simply sample as many non-anomalous readings from each sensor as you can. If the distribution of the readings from each sensor is approximately normal (check the skew and kurtosis values for the samples from each sensor) then you can work out mean and standard deviation of the samples and, for any future samples, the value of a particular measurement on a normal probability distribution.

+ +

(If your measurements do not have a normal distribution, then you can often apply a transform of some sort to the data to make it normal.)

+ +

So, let's say that you have measured a few thousand typical samples for one of your sensors, confirmed that the distribution is normal and calculated the mean and standard deviation of those samples. You can now calculate where on the normal curve your new sample 'x' would be using:

+ +
import math

def gaussian(x, mean, std):
+    # check for very small standard deviations
+    if(2 * std ** 2 == 0):
+        return 0.0
+
+    return (1.0/(math.sqrt(2*math.pi) * std)) * math.exp(-((x-mean)**2)/(2*std**2))
+
+ +

At this point, you can make a decision as to whether the value of a particular measurement on that distribution is so low as to be anomalous, and inform the user.

+ +

Now, you have probably realised that working out exactly how low a value should be to be considered anomalous might be tricky, and you are correct. The solution is to see if you can get some real anomalous data and use that to set the thresholds. Obviously, this might be rather a job if you have 4000 independent sensors...

+ +

If you want to know more about Anomaly Detection then I recommend that you take a look at Andrew Ng's introductory lecture series.

+",12509,,12509,,3/12/2019 16:20,3/12/2019 16:20,,,,0,,,,CC BY-SA 4.0 +11099,1,,,3/8/2019 11:57,,2,80,"

I frequently need to translate product names for hundreds of similar products -- and I have a list of past product names. Is it possible to train AI to review past translations and translate? It doesn't have any special grammar, simply the name (with some industry-specific usage that a general machine translator can't do.) What would I need to do to get started?

+",23000,,1671,,11/6/2019 21:35,11/6/2019 21:35,Translate product names with AI,,0,3,,,,CC BY-SA 4.0 +11100,1,,,3/8/2019 12:52,,5,127,"

I am working on MLP neural networks, using supervised learning (2-class and multi-class classification problems). For the hidden layers, I am using $\tanh$ (which produces an output in the range $[-1, 1]$) and, for the output layer, a softmax (which gives the probability distribution between $0$ and $1$). As I am working with supervised learning, should my target outputs be between $0$ and $1$, or between $-1$ and $1$ (because of the $\tanh$ function), or does it not matter?

+ +

The loss function is quadratic (MSE).

+",19268,,2444,,3/8/2019 13:55,8/5/2019 21:50,What should the range of the output layer be when performing classification?,,2,1,,,,CC BY-SA 4.0 +11102,2,,6699,3/8/2019 14:45,,1,,"

The use of the Golden Ratio is an interesting suggestion which has intrigued many lovers of the mathematical beauty represented in nature and in AI. The problem lies in the foundations of the AI applications. For example, in designing algorithms for recognizing naturally occurring phenomena, such as face recognition or human body movements (see https://www.intechopen.com/books/machine-learning-and-biometrics/a-human-body-mathematical-model-biometric-using-golden-ratio-a-new-algorithm), it is suitable. However, for non-natural occurrences, the ratio is limited since the data is usually random or chaotic. However, in order to create a master algorithm for the future which encompasses all the best of the current AI algorithms, the use of mathematical concepts such as the golden ratio and fractals will be vital. Watch this space...

+",23002,,,,,3/8/2019 14:45,,,,0,,,,CC BY-SA 4.0 +11103,1,,,3/8/2019 17:37,,0,290,"

Is there a way to run C64, Nintendo Gameboy, or other platforms in an OpenAI's Gym-like environment? It seems that we can run only Atari games.

+",23009,,2444,,9/22/2020 16:47,9/22/2020 16:47,Is there a way to run other platforms (other than Atari) in an OpenAI's Gym-like environment?,,2,0,,12/29/2021 13:06,,CC BY-SA 4.0 +11104,2,,11103,3/8/2019 18:03,,1,,"

Ok, I found

+ +

https://blog.openai.com/gym-retro/

+ +

but there may be other platforms?

+",23009,,,,,3/8/2019 18:03,,,,0,,,,CC BY-SA 4.0 +11105,2,,10675,3/8/2019 18:45,,2,,"

System Identification and policy learning are two completely different aspects of a system.

+ +

System Identification is basically finding out the transfer functions, the hardware parameters, the relationships and nature of behavior for different components that determine the results, when acted upon by a control signal. Generally, it is the hardware manufacturers who have all configuration details in their datasheets and they are either used as direct system parameters or used to derive other. Online system identification is the process of determining the set of parameters not with already available measurements but by using data coming through in real time.

+ +

Policy learning is the process of correlating the actions to results and discerning what actions are good or bad. Policy learning is about determining the control strategy that shall produce desired results given all the circumstances.

+ +

SI is like curve fitting: determining the equation of the curve (already knowing the polynomial degree, because you need to know the structure of the parameters you are trying to estimate) on already available data, while policy learning uses a closed-loop system to repeatedly update your control signals until you find one that satisfies your performance and operational desires.

+ +

In the robotics context, a robot manipulator has the Mass (H), Coriolis (C) and Gravity (G) matrices that define the dynamics of the system, basically relating the physics of the robot to the applied torque on the joints and the tip, as shown in the equation below. Online parameter identification would mean using the torque, the known structure of the H, C, G matrices (H is an n x n matrix, n being the DOF, and so on) and the dynamic equation, and then determining the numerical values. Similar online parameter identification is also done for friction components, like the static and Coulomb friction and the coefficients of viscous friction. The least squares method is often used for this.

+ +

$$\mathbf{H(q)\ddot{q}}(t) + \mathbf{C(q, \dot{q})\dot{q}}(t) + \mathbf{B\dot{q} + g(q) = \tau}$$
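
As a very small sketch of the least-squares step mentioned above (the construction of the regressor matrix is problem-specific; only a random placeholder is used here):

import numpy as np

# Phi: regressor matrix built from measured q, qdot, qddot (one row per sample),
# arranged so that tau = Phi @ theta, with theta the unknown dynamic parameters.
Phi = np.random.randn(1000, 12)      # placeholder for the real regressor matrix
true_theta = np.random.randn(12)
tau = Phi @ true_theta + 0.01 * np.random.randn(1000)   # measured joint torques

theta_hat, *_ = np.linalg.lstsq(Phi, tau, rcond=None)   # least-squares estimate
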

+ +

Policy learning in terms of RL is basically learning the set of actions that will produce the desired good behavior. Q-learning is model-free learning, so there is no predicted behavior to be obtained based on the inputs. Here, the inputs are simulated, results are obtained, and they are given a degree of belief (positive & negative and high & low rewards) depending upon what part of the desired result they are producing, and how much of it. Over time, the policy learned is finally the sequence of actions that should be run to get to the desired result. The Q-table does not have anything to do with the system identification, which is a modeling step; it is rather a control step. So, for the arm, the learned policy would be which joints, in which sequence, should be actuated to which angles to complete a pick-and-place task.

+",22988,,22835,,3/11/2019 19:23,3/11/2019 19:23,,,,0,,,,CC BY-SA 4.0 +11106,2,,11100,3/8/2019 20:05,,1,,"

For this particular classification problem, I would recommend using a softmax function, whose output range is [0, 1]. The sum of all outputs should be 1, so an advantage of using a softmax function is that you get a percentage of how confident the network is in this classification.
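
For reference, a numerically stable softmax in plain numpy could look like this:

import numpy as np

def softmax(z):
    z = z - z.max()              # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()   # outputs are in [0, 1] and sum to 1
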

+ +

Side note: As DuttaA has commented, cross entropy loss is a better loss function than the quadratic mean squared error.

+",23012,,,,,3/8/2019 20:05,,,,0,,,,CC BY-SA 4.0 +11107,1,,,3/8/2019 20:08,,1,48,"

Here intelligence is defined as any analytic or decision making process, regardless of strength (utility), and, potentially, any process of computation that produces output, regardless of the medium.

+

The idea that AI is part of an evolutionary process, with humans as merely a vehicle of the next dominant species, has been a staple of Science Fiction for many decades. It informs our most persistent mythologies related to AI. (Recent examples include Terminator/Skynet, Westworld, Ex Machina, and Alien:Covenant).

+

What I'm driving at here is that, although the concept of Neural Networks have been around since the 1940's, they have only recently demonstrated strong utility, so it's not an unreasonable assumption to identify Moore's Law as the limiting factor. (i.e. it is only recently that we have had sufficient processing power and memory to achieve this utility.)

+

But the idea of AI is ingrained into information technology once automation becomes possible. Babbage's Difference Engine led to the idea of an Analytic Engine, and the game of Tic-Tac-Toe was proposed as a means of demonstrating intelligence.

+

What I'm driving at here is that the idea of analysis and decision making are so fundamental in regard to information technology, that it is difficult to see functions that don't involve them. And, if the strength of analysis and decision making is largely a function of computing power:

+

Can intelligence be understood as a naturally occurring function of information technology?

+",1671,,2444,,12/12/2021 18:59,12/12/2021 18:59,Is Intelligence a naturally occurring function of Information Technology?,,1,0,,,,CC BY-SA 4.0 +11108,2,,11107,3/8/2019 20:39,,1,,"

Maybe, but it depends to a very large degree on the choice of definition.

+ +

One of the biggest challenges for AI researchers, neuroscientists, philosophers, and psychologists, has been that the layperson's understanding of intelligence does not appear to correspond to a well-defined concept. This point was most famously exploited by John R. Searle in his paper Minds Brains and Programs.

+ +

Consider your definition again carefully, and notice that decision is unbound. Does a rock rolling down a hill make a decision when it 'decides' to roll to the left? If not, Searle and his ilk would argue that a computer isn't making a decision when it decides to execute the next instruction. It is simply obeying deterministic laws of physics.

+ +

To escape this problem and still claim that machines are intelligent, you can either chose to believe that rocks make decisions (a view called panpsychism), or that we should behave as though rocks make decisions because doing so gives us a lot of explanatory power (the intentional stance), or claim that humans are no different from the rock or the computer (eliminative materialism). The last option leaves you with the problem of explaining subjective experiences though.

+ +

An excellent book that covers the mainstream arguments in this vein is Haugeland's Mind Design II. All the arguments I've outlined here are covered in great detail within. Among modern AI researchers, I'd say there is a split between those who embrace something like the intentional stance and those who embrace eliminative materialism.

+",16909,,,,,3/8/2019 20:39,,,,4,,,,CC BY-SA 4.0 +11109,1,,,3/8/2019 20:41,,4,91,"

As far as I know, a problem representation is the formulation of the problem in a way that it can be programmed and therefore solved (for example, you can represent the $N$-queens problem by using an array of $N \times N$).

+ +

What does problem modelling mean? What is the difference between a problem representation and problem modelling?

+",21832,,2444,,11/12/2019 17:53,11/12/2019 17:53,What is the difference between a problem representation and problem modelling?,,1,0,,,,CC BY-SA 4.0 +11111,2,,11109,3/8/2019 20:45,,4,,"

I would say these terms are often used interchangeably in AI. When they differ, I would say that problem modeling means finding a mathematical description of the problem, while problem representation means finding a particular way to represent that mathematical formalism.

+ +

For example, a list of numbers can be stored (represented) with a linked list, and array list, a hash table, or a self-balancing tree. All of them can produce a faithful model of the list, but if what you want to do is find the order that elements were entered in, the array list or linked list is far faster and more natural. If what you want to do is determine whether certain pieces of information are present in the list, the hash table is fastest. If what you want to do is find ranges of similar elements, the tree is fastest. Essentially, representational choices are engineering problems, while modelling choices are scientific problems.

+",16909,,,,,3/8/2019 20:45,,,,2,,,,CC BY-SA 4.0 +11112,1,11113,,3/8/2019 21:21,,2,141,"

I was reading in the article A tutorial on partially observable Markov decision processes (p. 120), by Michael L. Littman, that $\sum_{z \in Z}O(a, s',z) =1$, where $a$ is the action, $s'$ the next possible state and $z$ a certain/specific observation.

+ +

How come that the observation function $O(a, s', z)$ adds up to $1$ in POMDP?

+",19413,,2444,,3/28/2019 17:40,3/28/2019 17:40,Does the observation function for POMDP always add up to 1?,,1,0,0,,,CC BY-SA 4.0 +11113,2,,11112,3/8/2019 21:39,,2,,"

$O(a, s', z) = \mathbb{P}(z \mid a, s')$ is a conditional probability distribution, so it always needs to sum up to $1$. You should interpret $O(a, s', z)$ as the probability of observation $z$, given that the agent took action $a$ and landed in state $s'$.

+ +

$O(a, s', z)$ is thus not a joint distribution, even though the notation $O(a, s', z)$ might suggest it. In this case, $O(a, s', z)$ simply means that $O$ is a function of $a$, $z$ and $s'$.

+ +

If you want to see a proof that conditional probability distributions sum up to 1, have a look at this post.

+",2444,,2444,,3/8/2019 21:51,3/8/2019 21:51,,,,3,,,,CC BY-SA 4.0 +11116,1,11804,,3/9/2019 3:05,,3,52,"

In this tutorial, they build a speech recognition model to classify a one-second audio clip as one of ten predefined words. Suppose that we modified this problem as the following: Given an Arabic dataset, we aim to build a dialects recognition model to classify a two-second audio clip as one of $n$ local dialects using ten predefined sentences. I.e. for each of these ten sentences, there are $x$ different phrases and idioms which refer to the same meaning$^*$. Now how can I take advantage of the mentioned tutorial to solve the modified problem?

+ +

$*$ The $x$ different phrases and idioms for each sentence are not predefined.

+",20449,,,,,4/13/2019 20:58,How much the dialects recognition and speech recognition are relevant?,,1,1,,,,CC BY-SA 4.0 +11117,1,,,3/9/2019 5:25,,4,198,"

What are the characteristics which make a function difficult for the Neural Network to approximate?

+ +

Intuitively, one might think uneven functions would be difficult to approximate, but uneven functions just contain some high-frequency terms, which, in the case of a sigmoid $\frac{1}{1 + e^{-(wx + b)}}$, are easy to approximate by increasing the value of $w$. So uneven data might not be difficult to approximate.

+ +

So my question is what makes a function truly difficult for approximation?

+ +

NOTE: By approximation I do not mean things which can be changed by changing the training method (training set size, methods, optimisers). By approximation I mean properties which require the hyperparameters (size, structure, etc.) of a NN to be changed in order to reach a given level of approximation reasonably easily.

+",,user9947,2444,,5/10/2019 14:41,5/10/2019 14:41,What characteristics make it difficult for a Neural Network to approximate a function?,,0,4,,,,CC BY-SA 4.0 +11118,1,,,3/9/2019 5:57,,1,150,"

Can an ANN with only one neuron in the output layer be trained in such a way that the output neuron's value (0-1) is a representation of some real value, like, for example, height?

+ +

In other words, can a neural network, given the inputs, predict the height of a person by outputting values from 0 to 1, with 0 being 50 cm and 1 being 250 cm? Or will it always gravitate to 0 or 1? Can it predict a height of 150 cm (0.5)?

+",23022,,,,,3/11/2019 11:16,ANN’s single output representing value,,1,0,,,,CC BY-SA 4.0 +11119,1,,,3/9/2019 6:42,,2,409,"

I'm writing an AI for a board game, and previously I would just create a value maximizing state machine and tune the factors one at a time.

+ +

However, the issues with this are becoming apparent. My last AI did not look into the future, hurting its long-term chances, and manual tuning of weights is proving to be a chore.

+ +

I've looked into minimax algorithms, but with an imperfect-information game and elements of random chance I'm not too confident it will be effective.

+ +

I've also looked into traditional neural networks, but evaluating board states is tricky and the game does not split into moves well.

+ +

The game I'm writing the AI for is Ticket To Ride, but I would appreciate tips for any board game with similar mechanics.

+",23024,,,,,3/9/2019 11:52,Game AI design for a multiplayer random board game?,,1,0,,,,CC BY-SA 4.0 +11120,2,,11118,3/9/2019 8:18,,1,,"

Yes, this is possible. The only restriction on the range of output values is due to the activation function, if there is one. If, for example, the activation function is a sigmoid, the output would be restricted to values between 0 and 1. You can of course choose an activation function best suited to your problem, even if that means having none.

+ +

In your case, if you really want the output of your network to be between 50 and 250, you could use a scaled and shifted sigmoid as your activation: 200*sig(w*x+b)+50.
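
A minimal sketch of that idea in plain NumPy (the input features, weights and bias below are arbitrary placeholders, not trained values):

import numpy as np

def sigmoid(z):
    # Standard logistic function, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def height_output(x, w, b):
    # Scaled and shifted sigmoid: maps the (0, 1) range to (50, 250) cm
    return 200.0 * sigmoid(np.dot(w, x) + b) + 50.0

x = np.array([0.3, 1.2])   # some input features
w = np.array([0.5, -0.8])  # placeholder weights
b = 0.1                    # placeholder bias
print(height_output(x, w, b))  # always a value between 50 and 250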

+ +

But, as was pointed out in comments below, having a single output neuron may not be the best architecture for this problem.

+",22916,,22916,,3/11/2019 11:16,3/11/2019 11:16,,,,2,,,,CC BY-SA 4.0 +11122,2,,11084,3/9/2019 9:01,,1,,"

I've played this game. If I remember correctly, a successful strategy (for winning as many tricks as you estimated you would win) involves continually evaluating how well you're doing. If you're doing very poorly relative to your initial estimation, that's valuable information to have.

+ +

I think you should have one network that outputs--at every time step--both an estimate of future tricks won and an action of which card to play. This has the additional advantage of giving you more estimation experience with which to train. After all rounds, you'll know how far off each of your estimations were throughout the game. This should speed up learning on that part of the game.

+",22916,,,,,3/9/2019 9:01,,,,0,,,,CC BY-SA 4.0 +11123,2,,11119,3/9/2019 11:23,,1,,"

Many successful game-playing engines use some form of search to look ahead and plan the next move. This is possible even in stochastic environments by building a tree with probabilities assigned, or by simulating the randomness during planning and relying on lots of samples to get reasonable estimates of value.

+ +

There are many approaches possible. However, as Ticket to Ride is a relatively simple game that you could automate to play to the end very quickly (1000s of times per second), then you could look into Monte Carlo Tree Search (MCTS) as a mechanism for the game-playing agents to look ahead and adjust their play to current odds.

+ +

The basics of Monte Carlo Tree Search:

+ +
    +
  • On its turn, the agent builds up an evaluation tree local to its current state - each node of the tree is a game state, augmented with statistics from MCTS (how often node was chosen, how many times agent managed to win eventually from that node) and each link is a choice that a player makes. This is built up slowly, typically just adding one node per full simulated game.

  • +
  • For look-ahead search ""inside"" the tree, the agent explores options according to their relative promise, selecting moves to simulate according to its best estimates of value at the time. It needs to explore alternatives to make sure it doesn't miss anything, and there are a few variations of doing this. One successful approach to action selection inside the tree is Upper Confidence Bound applied to Trees (or UCT for short).

  • +
  • When play gets to a leaf node of the tree, the tree might be expanded slightly, and then afterwards the agent simulates plays of the game either to the end or to a robust evaluation point. This later play might be entirely random - or might still be guided probabilistically by some heuristic - and is called a rollout. The important thing here is to play quickly to get samples just to roughly assess the position at the leaf of the tree.

  • +
  • The evaluation resulting from rollout is fed back into the tree, which builds up sampled statistics of the most promising variations of play.

  • +
  • After a certain number of repeats of the whole algorithm (local tree search, expanding the tree, rollout and update), the agent picks the most promising next step from the tree. At this point it may just discard the whole tree - in a game with lots of randomness, this is probably what you would do now, as things you were considering as just probabilities before will have become historical fact, and the chances are that re-running the tree build from the next start position will be more accurate.

  • +
+ +

There are a few variants possible, including combinations with neural networks as used in AlphaZero.

+ +

The basic principle is to sample from many choices, focusing over time on what looks ""best"" to each player based on statistics of those choices.
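
As a rough illustration of the in-tree selection step (UCT) mentioned above, here is a minimal sketch; the node attributes and the exploration constant are illustrative assumptions, not a complete MCTS implementation:

import math

def uct_select(children, exploration=1.41):
    # children: list of nodes, each with .visits (int) and .total_value (float).
    # The parent visit count is approximated as the sum of child visits here.
    parent_visits = sum(child.visits for child in children)
    best, best_score = None, float('-inf')
    for child in children:
        if child.visits == 0:
            return child  # always try unvisited moves first
        exploit = child.total_value / child.visits           # average result so far
        explore = exploration * math.sqrt(math.log(parent_visits) / child.visits)
        score = exploit + explore
        if score > best_score:
            best, best_score = child, score
    return best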

+ +

As Ticket To Ride is a card game where cards can be drawn from an unseen deck, and it would be considered cheating to simulate and rollout with the actual deck in play, you would need to have realistic shuffles of the remaining unknown cards used during rollout. It may still work quite well without the realistic part and for speed just assume a random unlimited deck of cards (because re-shuffling on each imagined rollout would be costly).

+",1847,,1847,,3/9/2019 11:52,3/9/2019 11:52,,,,2,,,,CC BY-SA 4.0 +11124,1,,,3/9/2019 13:22,,1,34,"

There are many, many literary works in the public domain, along with human translations, many of which have entered the public domain as well. (Public domain = easily available)

+ +

In order for me to advance my knowledge of e.g. Japanese, I'd like to read texts in Japanese using a yet to be written tool, and when I encounter an unknown word/phrase/sentence, I'd like to just click on a position in the text and be transferred to the corresponding position in the e.g. English translation (or original) of the text in question.

+ +

Let's also assume we have access to a dictionary that translates a fair amount of words between those two languages (and there are of course free dictionaries for many language pairs).

+ +

What ways are there to use AI toolkits, plus some wiring and perhaps scripting/programming, to auto-correlate the positions of two versions of the same text, in two languages? The results need not - and in fact in many cases cannot - be perfect, but they should be roughly correct.

+ +

I'm aware that this is still not a straightforward task, as there are complicating factors like inflection of verbs and other grammatical properties that make the use of dictionary tools much harder. Also, translators will often translate to words that don't have that mapping in any dictionary. Then there is the fact that words aren't delimited by spaces in languages like my example language Japanese - (but if it is easier to work with only space-separated languages like, say, Spanish, or Russian, I'd like to hear answers to this simpler problem as well). Also, the order of words and even whole sub-clauses differs from language to language.

+ +

A simple, non-AI approximation would be to

+ +
    +
  1. figure out at what relative position in the source language text the user clicked (e.g. at the character on position 50.234%)
  2. +
  3. then go to that same relative position 50.234% in the target language text
  4. +
+ +

This approximation could perhaps be used as the starting point for the AI, which would then use words and dictionaries to make the results more accurate.
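
A minimal sketch of that non-AI baseline in plain Python (the texts and the clicked index are placeholders):

def map_position(source_text, target_text, source_index):
    # Map a character index in the source text to the character at the
    # same relative position (e.g. 50.234%) in the target text.
    relative = source_index / max(len(source_text), 1)
    return int(round(relative * (len(target_text) - 1)))

ja = '吾輩は猫である。名前はまだ無い。'
en = 'I am a cat. As yet I have no name.'
print(map_position(ja, en, 8))  # index of the roughly corresponding character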

+",23027,,,,,3/10/2019 16:08,Use AI to auto-correlate the words of human-translated texts?,,1,0,,,,CC BY-SA 4.0 +11127,1,,,3/9/2019 16:30,,1,67,"

I would love to know if an AI model could come up with certain theories of the past, like Pythagoras' theorem, Euclid's formulations, Newton's gravity, or Einstein's theories, if provided and trained with a sufficient amount of the observable data available at those periods of time. If this is possible, can unsolved conjectures be proved by AI? Or, even better, can AI develop new theories, or will it fail to come up with even basic mathematical operations by itself?

+",23034,,,,,3/9/2019 17:47,Can AI come up with scientific theories of past when provided with sufficient data available at that time?,,1,4,,,,CC BY-SA 4.0 +11128,1,,,3/9/2019 16:38,,3,44,"

Can AI transform natural-language text describing real scenarios into visual images and videos? How would an AI interpret, say, a Harry Potter story if it had to reproduce it in the form of videos? It would be useful if anyone could point me to the required literature for understanding text-to-image transformation by AI.

+",23034,,,,,3/9/2019 16:38,How would an AI visualize a story written in natural language?,,0,0,,,,CC BY-SA 4.0 +11129,1,,,3/9/2019 17:29,,3,29,"

I have a NN I'd like to train using supervised learning. Some samples of the training set, however, have better ""quality"" than others, so I'd like the algorithm to pay ""special attention"" to them. As a general question, how do I take this into account in the implementation?

+ +

Being more specific, I'm working with OpenCV and noticed that the train method apparently has such a parameter:

+ +
cv2.ANN_MLP.train(inputs, outputs, sampleWeights[, sampleIdx[, params[, flags]]]) → retval
+
+ +

Where:

+ +
+

sampleWeights – (RPROP only) Optional floating-point vector of weights for each sample. Some samples may be more important than + others for training. You may want to raise the weight of certain + classes to find the right balance between hit-rate and false-alarm + rate, and so on.

+
+ +

However, the OpenCV documentation is unclear on this, so how should I handle this parameter?

+",13087,,,,,3/9/2019 17:29,Backpropagation: how to take into account different samples quality,,0,0,,,,CC BY-SA 4.0 +11130,2,,11127,3/9/2019 17:47,,1,,"

Yes.

+ +

Some good examples of this are Lipson's work using evolutionary models, and Wu & Tegmark's work on a theory-based life-long learner, and Iten et al.'s work with deep neural networks. There are many, many, other research papers in this area, and there is a lot more work that is ongoing. The endgame for a lot of this work is the hope that we can synthesize new theories automatically.

+",16909,,,,,3/9/2019 17:47,,,,1,,,,CC BY-SA 4.0 +11131,1,,,3/9/2019 18:54,,2,543,"

I'm trying to create a neural network to predict moves in a card game. I am looking for recommendations on encoding the game state for my input layer. It's a complex turn-based collectible card game (think Magic the Gathering). I need to represent cards being in various areas of the game board (deck, discard pile, hand, etc.). It seems difficult to assign cards to these areas because the number of cards in these areas is never constant.

+ +

I was thinking of an approach where I assign each card in the game to being in a specific card area. The number of total cards in the game should be constant (let's assume that). This approach I feel should give me a potentially less sparse input.

+ +

Also, with this approach, what is the best way to handle card duplicates? Let's say I have 3 copies of the exact same card in my deck. Maybe 1 of the copies is in my hand and 2 in my discard pile. It does not matter which of the 3 copies is in my hand because they are all the same exact card. There are now multiple ways to represent this same exact game state in my network because each individual card has its own state. To me this does not seem good. How much will this affect my neural network's ability to learn the game?

+",23037,,,,,4/1/2021 8:25,How to encode card game state into neural network input,,1,0,,,,CC BY-SA 4.0 +11132,2,,11131,3/10/2019 1:24,,1,,"

I am not sure what you mean by

+
+

There is now multiple ways to represent this same exact game state in my network because each individual card has it's own state. To me this does not seem good.

+
+

We can think of the game state as a vector of the form $1 \times n$. In this case, you can formalize the cards held according to some encoding. This can be anything from a Gödel numbering to a simple integer assignment. Here is an example for poker in Python: https://pypi.org/project/treys/

+

In the example of duplicates, you would simply have a vector that contains duplicate indices, e.g. $\{241, 424, 112, 112, 455\}$.
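
If the duplicate-index ambiguity the question worries about is a concern, one common alternative (not implied by the answer above, just a sketch with made-up card IDs and zones) is to encode counts per card type per zone, which is order-invariant:

NUM_CARD_TYPES = 500  # assumed total number of distinct cards in the game

def encode_zone(card_ids, num_card_types=NUM_CARD_TYPES):
    # Count-based encoding: index i holds how many copies of card i are in the zone,
    # so duplicates collapse into a single, order-independent representation.
    counts = [0] * num_card_types
    for card_id in card_ids:
        counts[card_id] += 1
    return counts

hand = [241, 424, 112, 112, 455]   # two copies of card 112
discard = [112, 300]

state_vector = encode_zone(hand) + encode_zone(discard)  # concatenate per-zone encodings
print(len(state_vector), state_vector[112], state_vector[NUM_CARD_TYPES + 112])  # 1000 2 1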

+",9608,,43351,,4/1/2021 8:25,4/1/2021 8:25,,,,4,,,,CC BY-SA 4.0 +11133,2,,11057,3/10/2019 4:57,,30,,"

The notation I'll be using is from two different lectures by David Silver and is also informed by these slides.

+

The expected Bellman equation is
$$v_\pi(s) = \sum_{a\in \cal{A}} \pi(a|s) \left(\cal{R}_s^a + \gamma\sum_{s' \in \cal{S}} \cal{P}_{ss'}^a v_\pi(s')\right) \tag 1$$

+

If we let
$$\cal{P}_{ss'}^\pi = \sum\limits_{a \in \cal{A}} \pi(a|s)\cal{P}_{ss'}^a \tag 2$$
and
$$\cal{R}_{s}^\pi = \sum\limits_{a \in \cal{A}} \pi(a|s)\cal{R}_{s}^a \tag 3$$
then we can rewrite $(1)$ as

+

$$v_\pi(s) = \cal{R}_s^\pi + \gamma\sum_{s' \in \cal{S}} \cal{P}_{ss'}^\pi v_\pi(s') \tag 4$$

+

This can be written in matrix form

+

$$\begin{bmatrix}
v_\pi(1) \\
\vdots \\
v_\pi(n)
\end{bmatrix}=
\begin{bmatrix}
\cal{R}_1^\pi \\
\vdots \\
\cal{R}_n^\pi
\end{bmatrix}
+\gamma
\begin{bmatrix}
\cal{P}_{11}^\pi & \dots & \cal{P}_{1n}^\pi\\
\vdots & \ddots & \vdots\\
\cal{P}_{n1}^\pi & \dots & \cal{P}_{nn}^\pi
\end{bmatrix}
\begin{bmatrix}
v_\pi(1) \\
\vdots \\
v_\pi(n)
\end{bmatrix} \tag 5$$

+

Or, more compactly,

+

$$v_\pi = \cal{R}^\pi + \gamma \cal{P}^\pi v_\pi \tag 6$$

+

Notice that both sides of $(6)$ are $n$-dimensional vectors. Here $n=|\cal{S}|$ is the size of the state space. We can then define an operator $\cal{T}^\pi:\mathbb{R}^n\to\mathbb{R}^n$ as

+

$$\cal{T^\pi}(v) = \cal{R}^\pi + \gamma \cal{P}^\pi v \tag 7$$

+

for any $v\in \mathbb{R}^n$. This is the expected Bellman operator.

+

Similarly, you can rewrite the Bellman optimality equation

+

$$v_*(s) = \max_{a\in\cal{A}} \left(\cal{R}_s^a + \gamma\sum_{s' \in \cal{S}} \cal{P}_{ss'}^a v_*(s')\right) \tag 8$$

+

as the Bellman optimality operator

+

$$\cal{T^*}(v) = \max_{a\in\cal{A}} \left(\cal{R}^a + \gamma \cal{P}^a v\right) \tag 9$$

+

The Bellman operators are "operators" in that they are mappings from one point to another within the vector space of state values, $\mathbb{R}^n$.

+

Rewriting the Bellman equations as operators is useful for proving that certain dynamic programming algorithms (e.g. policy iteration, value iteration) converge to a unique fixed point. This usefulness comes in the form of a body of existing work in operator theory, which allows us to make use of special properties of the Bellman operators.

+

Specifically, the fact that the Bellman operators are contractions gives the useful results that, for any policy $\pi$ and any initial vector $v$,

+

$$\lim_{k\to\infty}(\cal{T}^\pi)^k v = v_\pi \tag{10}$$

+

$$\lim_{k\to\infty}(\cal{T}^*)^k v = v_* \tag{11}$$

+

where $v_\pi$ is the value of policy $\pi$ and $v_*$ is the value of an optimal policy $\pi^*$. The proof is due to the contraction mapping theorem.
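
A small numerical illustration of $(10)$, using NumPy (the two-state MDP below is made up purely to show repeated application of $\cal{T}^\pi$ converging to the fixed point):

import numpy as np

gamma = 0.9
R_pi = np.array([1.0, 0.0])                 # expected reward per state under pi
P_pi = np.array([[0.5, 0.5],
                 [0.2, 0.8]])               # state-transition matrix under pi

def T_pi(v):
    # Expected Bellman operator: T^pi(v) = R^pi + gamma * P^pi v
    return R_pi + gamma * P_pi.dot(v)

v = np.zeros(2)                             # arbitrary starting vector
for _ in range(200):
    v = T_pi(v)                             # repeated application converges to v_pi

# Closed-form fixed point v_pi = (I - gamma * P^pi)^{-1} R^pi for comparison
v_exact = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
print(v, v_exact)                           # the two should match closely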

+",22916,,22916,,8/12/2020 17:40,8/12/2020 17:40,,,,0,,,,CC BY-SA 4.0 +11134,1,11138,,3/10/2019 6:18,,2,128,"

With the recent revelation of severe limitations in some AI domains, such as self-driving cars, I notice that neural networks behave with the same sort of errors as in simpler models, i.e. they may be ~100% accurate on test data, but, if you throw in a test sample that is slightly different from anything they've been trained on, it can throw the neural network off completely. This seems to be the case with self-driving cars, where neural networks misclassify modified/graffitied stop signs, and are unable to cope with rain or snowflakes, or birds appearing on the road, etc. Something they've never seen before in a unique climate may cause them to make completely unpredictable predictions. These specific examples may be circumvented by training on modified stop signs, or with rain and birds, but that misses the point: NNs seem very limited when it comes to generalizing to an environment that is completely unlike their training samples. And this makes sense of course given the way NNs train.

+

The current solution seems to be to manually find out these new things that confuse the network and label them as additional training data. But that isn't an AI at all. That isn't "true" generalization.

+

I think part of the problem to blame is the term "AI" in and of itself. When all we're doing is finding a global minimum to a theoretical ideal function at some perfect point before over-fitting our training data, it's obvious that the neural network cannot generalize anymore than what is possible within its training set.

+

I thought one way that might be possible to get around this is: rather than being static "one unique calculation at a time", neural networks could remember the last one or two predictions they made, and then their current prediction, and use the result of that to then make a more accurate prediction. In other words, a very basic form of short-term memory.

+

By doing this, perhaps, the neural network could see that the raindrops or snowflakes aren't static objects, but are simply moving noise. It could determine this by looking at its last couple of predictions and see how those objects move. Certainly, this would require immense additional computation overhead, but I'm just looking to the future in terms of when processing power increases how NNs might evolve further. Similar to how neural networks were already defined many decades ago, but they were not widely adopted due to the lack of computational power, could this be the same case with something like short-term memory? That we lack the practical computational power for it but that perhaps we could theoretically implement it somehow for some time in the future when we do have it.

+

Of course, this short-term memory thing would only be useful when a classification is related to the prior classifications, like with self-driving cars. It's important to know what was observed a few seconds ago in real life when driving. Likewise, it could be important for a neural network to know what was predicted a few live classifications ago. This might also have a use in object detection: perhaps, a representation could be learned for a moving figure in the distance. Speed could now become a representation in the hidden layers and be used in assistance for the identification of distant objects, something not possible when using a set of single static weights.

+

Of course, this whole thing would involve somehow getting around the problem of training weights on a live model for the most recent sample. Or, alternatively, perhaps the weights could still remain static but we'd use two or three different models of weights to represent time somehow.

+

Nevertheless, I can't help but see a short-term memory of some form as being a requirement, if AI is to not be "so stupid", when it observes something unique, and if it's to ever classify things based on time and recent observations.

+

I'm curious if there's any research papers or other sources that explore any aspect of incorporating time using multiple recent classifications or something else that could be considered a short-term memory of sorts to help reach a more general form of generalization that the neural network doesn't necessarily see in its training, i.e. making it able to avoid noise using time as a feature to help it do so, or using time from multiple recent classifications as a way to estimate speed and use that as a feature?

+

I'd appreciate the answer to include some specific experiment or methodology as to how this sort of thing might be added to neural networks (or list it as a source), or if this is not an area of active research, why not.

+",6328,,2444,,12/12/2021 11:58,12/12/2021 11:58,Is there any research on models that make predictions by also taking into account the previous predictions?,,1,0,,,,CC BY-SA 4.0 +11136,2,,4680,3/10/2019 14:38,,3,,"

I would start with:

+ + + +

You can also search for articles using the 7-scenes dataset (Scene Coordinate Regression Forests for Camera Relocalization in RGB-D Images) which is pretty standard

+",23049,,1671,,3/11/2019 20:49,3/11/2019 20:49,,,,1,,,,CC BY-SA 4.0 +11137,2,,11124,3/10/2019 16:08,,1,,"

You are basically describing the way Google Translate works.

+ +

There has been a lot of research in text alignment in the area of multi-lingual corpus linguistics. An early paper (with source code) is Gale and Church's A Program for Aligning Sentences in Bilingual Corpora (PDF).

+ +

In linguistics these are called parallel texts. On the wikipedia page you will find links to a number of alignment programs which attempt to solve that issue.

+",2193,,,,,3/10/2019 16:08,,,,2,,,,CC BY-SA 4.0 +11138,2,,11134,3/10/2019 17:17,,4,,"

What you're describing is called a recurrent neural network. There are a large number of designs in this family that all have the ability to remember recent inputs and use them in the processing of future inputs. The ""Long Short Term Memory"" or LSTM architecture was one of the most successful in this family.

+ +

These are actually very widely used in things like self-driving cars too, so, on their own, they are not enough to overcome the brittleness of current models.
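
For illustration, a minimal Keras-style sketch of a model that classifies a short sequence of recent feature vectors rather than a single static input (the sequence length, feature size and number of classes here are assumptions for the example only):

import tensorflow as tf

# Input: a sequence of 5 time steps, each a 128-dimensional feature vector
# (e.g. features extracted from the last few camera frames).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(5, 128)),   # keeps internal memory across the 5 steps
    tf.keras.layers.Dense(10, activation='softmax'),  # e.g. 10 object classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()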

+",16909,,,,,3/10/2019 17:17,,,,0,,,,CC BY-SA 4.0 +11139,1,16241,,3/10/2019 17:29,,5,3828,"

I'm looking for a neural network architecture that excels in counting objects. For example, CNN that can output the number of balls (or any other object) in a given image.

+

I already found articles about crowd counting. I'm looking for articles about different types of objects.

+",23049,,2444,,3/20/2022 9:30,3/20/2022 9:30,Which neural network can count the number of objects in an image?,,1,1,,,,CC BY-SA 4.0 +11141,1,,,3/10/2019 20:54,,4,88,"

I want to model an SMDP such that time is discretized, the transition time between two states follows an exponential distribution, and there is no reward during the transition.

+ +

Can I know what are the differences between $Q(\lambda)$ and Q-learning for this problem (SMDP)? I actually want to extend the pseudo-code presented here to an SMDP problem with discretization of time horizon.

+",10191,,2444,,3/10/2019 21:24,3/11/2019 17:43,How to apply or extend the $Q(\lambda)$ algorithm to semi-MDPs?,,1,0,,,,CC BY-SA 4.0 +11142,1,,,3/10/2019 22:05,,0,298,"

When training an RL agent to play a game, there'll be situations where the AI cannot perform certain actions lest they violate the game rules. That's easy to handle: I can set illegal actions to some large negative amount so that when doing an argmax they won't be selected. Or, if I use softmax, I can set the probabilities of illegal actions to zero and then re-calculate the softmax over the remaining legal actions. Indeed, I believe this is what David Silver was referring to when asked this question at a presentation/lecture of AlphaZero:

+ +

https://www.youtube.com/watch?v=Wujy7OzvdJk&t=2404s

+ +

But doing so changes the output from the network and surely changes things when performing the backprop once a reward is known.

+ +

How does one handle that?

+ +

Would I set the illegal actions to the mean of the legal actions, or zero...?

+",20352,,20352,,3/11/2019 12:42,3/11/2019 12:42,How to back-propagate illegal actions for policy gradient learning,,0,11,,,,CC BY-SA 4.0 +11143,1,,,3/11/2019 1:14,,1,129,"

I am currently trying to create a One-Shot network using the Siamese architecture for an object that isn't a face.

+ +

My problem is that, in normal face recognition, the detecting gadget (e.g. a smartphone) knows which image it should compare the face currently trying to unlock the device to.

+ +

In my case, I don't know the object's identity so I would have to compare every other object in the database to it.

+ +

Is there a better, more efficient way of checking the identity of the object without comparing it to any other object? A normal classification doesn't work in my case, because new unknown objects can be added at any time.

+",23063,,,,,3/11/2019 1:14,Siamese Network for unknown object,,0,1,,,,CC BY-SA 4.0 +11144,1,,,3/11/2019 7:31,,1,43,"

I need to extract personal information about a person from a list of documents and summarize it for the user. If there are 2 people with the same name, the correct person should be identified. If the person has a nickname, that also needs to be identified. The input to the program can be the name of the person, address, organization name, etc. I have extracted named entities like person, org, location, etc. from the text using the NLTK library. The output after extracting the named entities is shown below:

+ +

[('Michael', 'NNP', 'B-PERSON'), ('Joseph', 'NNP', 'B-PERSON'), ('Jackson', 'NNP', 'I-PERSON'), ('was', 'VBD', 'O'), ('born', 'VBN', 'O'), ('in', 'IN', 'O'), ('Gary', 'NNP', 'B-GPE'), (',', ',', 'O'), ('Indiana', 'NNP', 'B-GPE')....

+ +

Now, I want to extract relationships between those entities.

+",23069,,23069,,3/11/2019 8:44,3/11/2019 8:44,Extract personal information about a person from a list of documents and summarize it,,0,4,,,,CC BY-SA 4.0 +11145,2,,3189,3/11/2019 8:16,,1,,"

Machines will never be conscious.

+ +

Let's try this theoretical thought exercise. You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you would ""answer"" by picking a bunch of shapes in another proper order. Now, did you just learn any meaning behind any language? Programs manipulate symbols this way. (previously, people have either skirted this question or never had a satisfactory answer)

+ +

The above was my reformulation of Searle's rejoinder to System Reply to his Chinese Room Argument.

+",23071,,,,,3/11/2019 8:16,,,,3,,,,CC BY-SA 4.0 +11146,2,,7926,3/11/2019 8:27,,0,,"

No, because there is no utility in building a ""libertarian free AI"" as far as I know of.

+ +

AI is another tool. What is the purpose in building an AI with such a distinction?

+ +

The reason for that question is this. Let's say you want an AI to accomplish some kind of task you want machine assistance in. That's what tools do- assisting with tasks. What exactly would this task be that a ""non-libertarian free AI"" couldn't achieve but a ""libertarian free AI"" could?

+",23071,,,,,3/11/2019 8:27,,,,0,,,,CC BY-SA 4.0 +11147,2,,11050,3/11/2019 10:04,,2,,"

There are many techniques for training an RL agent without explicitly interacting with an environment, some of which are cited in the paper you linked. Heck, even using experience replay like in the foundational DQN paper is a way of doing this. However, while many models utilize some sort of pre-training for the sake of safety or speed, there are a couple of reasons why an environment is also used whenever possible.

+ +

Eventually, your RL agent will be placed in an environment to take its own actions. This is why we train RL agents. I'm assuming that, per your question, learning does not happen during this phase.

+ +

Maybe your agent encounters a novel situation
Hopefully, the experience your agent learns from is extensive enough to include every possible state-action pair $(s,a)$ that your agent will ever encounter. If it isn't, your agent won't have learned about these situations, and it will always perform suboptimally in them. This lack of coverage over the state-action space could be caused by stochasticity or nonstationarity in the environment.

+ +

Maybe the teacher isn't perfect
If you don't allow your agent to learn from its own experience, it will only ever perform as well as the agent that collected the demonstration data. That's an upper bound on performance that we have no reason to set for ourselves.

+",22916,,,,,3/11/2019 10:04,,,,3,,,,CC BY-SA 4.0 +11150,1,11157,,3/11/2019 11:07,,1,221,"

I am dealing with an intent classification task on an Italian customer service data set.

+

I've more or less 1.5k sentences and 29 classes (imbalanced).

+

According to the literature, a good choice is to generate synthetic data, oversampling, or undersampling the training data, using for example the SMOTE algorithm.

+

I also want to use a cross-validation mechanism (stratified k-fold) to be more confident in the obtained result.

+

I also know that accuracy is not the right metric to take into account, I should use precision, recall, and confusion matrix.

+

Is it possible to combine k-fold cross-validation and oversampling (or undersampling) techniques?

+",20780,,2444,,6/7/2021 17:47,6/7/2021 17:47,Is it possible to combine k-fold cross-validation and oversampling for a multi-class text classification task with imbalanced data?,,1,0,,,,CC BY-SA 4.0 +11152,1,,,3/11/2019 16:02,,4,524,"

Reading the high-level descriptions of backpropagation and predictive coding, they don't sound so drastically different. What is the key difference between these techniques?

+ +

I am currently reading the following paper if that helps ground the explanation:

+ +

Predictive Coding-based Deep Dynamic Neural Network for Visuomotor +Learning

+",20955,,2444,,3/12/2019 15:36,3/12/2019 15:36,What is the difference between backpropagation and predictive coding?,,1,1,,,,CC BY-SA 4.0 +11153,2,,11152,3/11/2019 16:35,,1,,"

I would say that these concepts are quite different, even though they might have a few things in common (or might be vaguely related).

+ +

Back-propagation is an algorithm used (in machine learning) to compute the gradient of a function with respect to its parameters. This gradient is then used by algorithms, like gradient descent, to update the parameters of the model (e.g. a neural network).

+ +

Roughly, predictive coding is a general (neuroscience) theory of how the brain builds an internal model of the external world, of how it continuously predicts the sensory inputs (from the world) using its current model of the external world, and of how it updates this internal model once the sensory inputs are actually received.

+ +

You could think of the output of an ML model, while it is being trained (using gradient descent with back-propagation), as a prediction associated with the inputs. However, note that the output of the model is often not a prediction of the actual input, but e.g. a label (which is, nonetheless, associated with the input). Furthermore, it is the model that produces an output and not the back-propagation algorithm (even though the back-propagation algorithm is often used to train such models, e.g. neural networks). We could think of back-propagation as the way of updating these predictions associated with the inputs, but, anyway, it is well known (and accepted) that our brain does not perform back-propagation, but we learn in an associative fashion (Hebbian learning).

+",2444,,2444,,3/11/2019 16:52,3/11/2019 16:52,,,,0,,,,CC BY-SA 4.0 +11155,2,,11141,3/11/2019 17:43,,1,,"

If you really just want an SMDP-version of the algorithm, which only needs to be capable of operating on the ""high-level"" time scale of macro-actions, you can relatively safely take the original pseudocode of whatever MDP-based algorithm you like, replace every occurrence of ""action"" with ""macro-action"", and you're pretty much done.

+ +

The only caveat I can think of in the case of $Q(\lambda)$ is that the ""optimal"" value for $\lambda$ is probably somewhat related to the amount of time that expires... so intuitively I'd expect it to be best if the value for $\lambda$ decreases as the amount of time expired during execution of the last macro-action increases. A constant $\lambda$ probably still works fine as well though.

+ +
+ +

If you actually want your algorithm to also be aware of lower-time-scale MDP underlying an SMDP, and not only treat macro-actions as ""large actions"" and be done with it... I'd recommend looking into the Options framework. There you get interesting ideas like intra-option updates, which may allow you to also perform learning whilst larger macro-actions (or options) are still in progress.

+ +

Last time I looked there hasn't been a lot of work involving the combination of eligibility traces and options, but there has been some: Eligibility Traces for Options. This paper doesn't specifically apply the algorithm you mentioned ($Q(\lambda)$), but it does discuss a bunch of other -- much more recent, and likely better -- off-policy algorithms with eligibility traces.

+",1641,,,,,3/11/2019 17:43,,,,3,,,,CC BY-SA 4.0 +11157,2,,11150,3/11/2019 18:13,,0,,"

It is straightforward to combine k-fold cross-validation with a technique like oversampling or undersampling.

+ +

First, apply the balance-restoration technique to your training data. Then parametrize a model using k-fold cross-validation on the re-balanced training data. In Scikit learn, I believe you can even bundle these actions together into a single 'pipeline' object to make it easier to manipulate.
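
A minimal sketch of that pipeline idea using the imbalanced-learn package together with scikit-learn (the classifier and the synthetic data are placeholders for your own vectorised sentences and intent labels; as far as I know, the imblearn pipeline applies SMOTE only during fitting, i.e. only to the training folds of each split):

from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for the real (imbalanced, multi-class) data
X, y = make_classification(n_samples=1500, n_classes=3, n_informative=10,
                           weights=[0.7, 0.2, 0.1], random_state=0)

pipeline = Pipeline([
    ('smote', SMOTE(random_state=42)),            # oversample minority classes
    ('clf', LogisticRegression(max_iter=1000)),   # any classifier works here
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring='f1_macro')
print(scores.mean())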

+ +

Precision/recall is probably a fine starting place for measuring performance.

+",16909,,,,,3/11/2019 18:13,,,,3,,,,CC BY-SA 4.0 +11159,2,,10360,3/11/2019 19:46,,3,,"

By far the most common form of heuristic evaluation functions for Chess-playing (or, really, any game-playing) agents are simple linear functions. At least when we're talking about handcrafted features that's the case, of course all the hype with Deep Neural Networks in more recent years is different. So, when it's not specified in a paper like this exactly what their heuristic evaluation function looks like, you can relatively safely assume it's just a linear function.

+ +

With linear function, I mean that you have vectors of features $\boldsymbol{\phi}(s)$ for your states $s$, and a vector of weights $\boldsymbol{\theta}$, and the evaluation $f(s)$ of a state $s$ is simply given by the dot product (summing up all the multiplications of feature values with their corresponding weights):

+ +

$$f(s) = \boldsymbol{\phi}(s)^{\top} \boldsymbol{\theta} = \sum_i \phi_i(s) \times \theta_i,$$

+ +

where the subscript $i$ indicates taking the $i^{th}$ element of a vector.
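
As a tiny illustration, here is such a dot-product evaluation in NumPy (the three features and weights are invented, e.g. material difference, mobility and king safety):

import numpy as np

def evaluate(phi, theta):
    # f(s) = phi(s) . theta
    return float(np.dot(phi, theta))

phi_s = np.array([2.0, 5.0, -1.0])   # feature values for some state s
theta = np.array([1.0, 0.1, 0.5])    # handcrafted or tuned weights
print(evaluate(phi_s, theta))        # 2.0 + 0.5 - 0.5 = 2.0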

+",1641,,,,,3/11/2019 19:46,,,,0,,,,CC BY-SA 4.0 +11160,5,,,3/11/2019 20:34,,0,,"

Myths about automata have been with us since at least Ancient Greece [See: Talos]

+ +

Artificial Intelligence and predictive science has been a staple of modern Science Fiction since at least Isaac Asimov. [See: I, Robot; Foundation Series]

+",1671,,1671,,3/11/2019 20:34,3/11/2019 20:34,,,,0,,,,CC BY-SA 4.0 +11161,4,,,3/11/2019 20:34,,0,,"Use for questions about AI in popular culture. (Terminator/Skynet as an example.) Includes movies/tv, novels & comic books, and even canonical mythology (Talos).",1671,,1671,,3/11/2019 20:34,3/11/2019 20:34,,,,0,,,,CC BY-SA 4.0 +11163,1,,,3/12/2019 1:11,,1,38,"

According to PER, we have to multiply the $Q$ error $\delta_i$ by the importance sampling ratio to correct the bias introduced by the imbalanced sampling of PER, where the importance sampling ratio is defined as
$$w_i=\left({1\over N}{1\over P(i)}\right)^\beta$$
in which $1/N$ is the probability of drawing a sample uniformly from the buffer, and $P(i)$ is the probability of drawing a sample from PER.

+ +

I'm wondering if we have to do the same to the target of the actor when we apply PER to DDPG. That is, multiplying $-Q(s_i, \mu(s_i))$ by $w_i$, where $\mu$ is the output of the actor.

+ +

In my opinion, it is necessary. And I've done some experiments in the gym environment BipedalWalker-v2. The results, however, are quite confusing: I constantly get better performance when I do not apply importance sampling to the actor. Why would this be the case?

+",8689,,2444,,4/4/2019 16:37,4/4/2019 16:37,Should we multiply the target of actor by the importance sampling ratio when prioritized replay is applied to DDPG?,,0,0,,,,CC BY-SA 4.0 +11165,1,11185,,3/12/2019 9:57,,2,898,"

I wonder if it would be possible to know the size of a room using image, I don't see anything about this subject, do you have some idea how it could be done?

+",23107,,2444,,3/12/2019 11:02,3/13/2019 2:58,How could we estimate the square footage of a room from an image?,,1,1,,,,CC BY-SA 4.0 +11166,1,11201,,3/12/2019 10:29,,17,4797,"

What is geometric deep learning (GDL)?

+

Here are a few sub-questions

+
    +
  • How is it different from deep learning?
  • +
  • Why do we need GDL?
  • +
  • What are some applications of GDL?
  • +
+",2444,,2444,,11/6/2020 14:24,8/25/2021 15:45,What is geometric deep learning?,,3,1,,,,CC BY-SA 4.0 +11167,5,,,3/12/2019 10:36,,0,,"

For more info, have a look at this question What is geometric deep learning? or the paper Geometric deep learning: going beyond Euclidean data (2017) by Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst.

+",2444,,2444,,3/12/2019 20:34,3/12/2019 20:34,,,,0,,,,CC BY-SA 4.0 +11168,4,,,3/12/2019 10:36,,0,,"For questions related to geometric deep learning, which is the application of deep learning techniques to non-Euclidean data (e.g. graphs and manifolds).",2444,,2444,,6/18/2019 12:43,6/18/2019 12:43,,,,0,,,,CC BY-SA 4.0 +11169,1,,,3/12/2019 10:46,,5,1445,"

What is a graph neural network (GNN)?

+

Here are some sub-questions

+
    +
  • How is a GNN different from a NN?
  • +
  • How exactly is a GNN related to graphs?
  • +
  • What are the components of a GNN? What are the inputs and outputs of GNNs?
  • +
  • How can GNNs be trained? Can we also use gradient descent with back-propagation to train GNNs?
  • +
+",2444,,2444,,11/6/2020 11:52,12/2/2021 11:14,What is a graph neural network?,,1,1,,,,CC BY-SA 4.0 +11172,1,21874,,3/12/2019 13:57,,10,11718,"

How can the convolution operation used by CNNs be implemented as a matrix-vector multiplication? We often think of the convolution operation in CNNs as a kernel that slides across the input. However, rather than sliding this kernel (e.g. using loops), we can perform the convolution operation ""in one step"" using a matrix-vector multiplication, where the matrix is a circulant matrix containing shifted versions of the kernel (as rows or columns) and the vector is the input.

+ +

How exactly can this operation be performed? I am looking for a detailed step-by-step answer that shows how the convolution operation (as usually presented) can be performed using a matrix-vector multiplication.

+ +

Is this the usual way the convolution operations are implemented in CNNs?

+",2444,,2444,,6/14/2020 14:03,1/6/2023 1:16,How can the convolution operation be implemented as a matrix multiplication?,,1,0,,,,CC BY-SA 4.0 +11173,1,,,3/12/2019 16:00,,2,95,"

The book by Sutton and Barto discusses in section 11.8 that the convergence of off-policy TD function approximation can be improved by correcting for the distribution of states encountered. The section seems to be written in haste and doesn't do a good job of explaining why $M_t$, the emphasis, will help in getting a state distribution closer to that of the target policy.

+

My understanding of the on-policy distribution is not clear at the moment. I think it is the distribution of states encountered under the target policy (the policy for which we want the state-action/state values).

+

The importance sampling ratio corrects for update distribution (by multiplying the correction term with the ratio), but how is $M_t$ helping in correcting for the state distribution?

+",21509,,2444,,12/8/2021 18:19,12/8/2021 18:19,Why is $M_t$ (the emphasis) helping in correcting for the state distribution in the Emphatic TD algorithm?,,1,0,,,,CC BY-SA 4.0 +11174,1,11194,,3/12/2019 16:30,,7,2227,"

I have been using OpenAI Retro for a while, and I wanted to experiment with two-player games. By two-player games, I mean co-op games like ""Tennis-Atari2600"" or even Pong, where 2 agents are present in one environment.

+ +

There is a parameter for players in the OpenAI documentation, but setting this variable to 2 does nothing in terms of the game.

+ +

How do you properly implement this? Can this even be done? The end goal is to have 2 separate networks in one environment.

+",23119,,1847,,3/13/2019 9:43,6/28/2020 23:17,2 Player Games in OpenAI Retro,,1,6,,,,CC BY-SA 4.0 +11177,2,,6231,3/12/2019 17:00,,3,,"

I can think of two possible ways of forcing NEAT to create a feed-forward network, one elegant and one a little more cumbersome:

+ +
    +
  1. Only allow the ""add connection"" mutation to connect a node with another node that has a higher maximum distance from an input node. This should result in a feed-forward network, without much extra work. (Emergent properties are great!)
  2. +
  3. Run as you did and create a fully connected network with NEAT and then prune it during a forward pass. After creating the network, run through it and remove connections that try to connect to a node already used in the forward pass (example 3->5). Alternatively just remove unused input connections to nodes during the forward pass. Given how NEAT mutates, it should not be possible that you remove a vital connection and cut the network in two. This property of NEAT makes sure your signal will always be able to reach the output, even if you remove those ""backwards pointing"" connections.
  4. +
+ +

I believe these should work, however I have not tested them.

+ +

The original NEAT paper assumed a feed forward ANN, even though its implementation as described would result in a fully connected network. I think it was just an assumption of the paradigm they worked in. The confusion is fully understandable.

+",23118,,,,,3/12/2019 17:00,,,,0,,,,CC BY-SA 4.0 +11178,1,,,3/12/2019 17:10,,2,358,"

In word2vec, the task is to learn to predict which words are most likely to be near each other in some long corpus of text. For each word $c$ in the corpus, the model outputs the probability distribution $P(O=o|C=c)$ of how likely each other word $o$ in the vocabulary is to be within a certain number of words away from $c$. We call $c$ the ""center word"" and $o$ the ""outside word"".

+ +

We choose the softmax distribution as the output of our model: $$P(O=o|C=c) = \frac{\exp(\textbf{u}_{0}^{T} \textbf{v}_{c})}{\sum_{w \in \text{Vocab}} \exp(\textbf{u}_{w}^{T} \textbf{v}_c)}$$

+ +

where $\textbf{u}_0$ and $\textbf{v}_c$ are vectors that represent the outside and center words respectively.

+ +
+

Question. What do the vectors $\textbf{u}_0$ and $\textbf{v}_c$ look like? Are they just one-hot encodings? Do we need to learn them too? Why is this useful?

+
+",23120,,2444,,4/16/2019 22:26,4/16/2019 22:26,What do the vectors of the center and outside word look like in word2vec?,,1,0,,,,CC BY-SA 4.0 +11179,5,,,3/12/2019 19:39,,0,,"

For more info, have a look e.g. at https://en.wikipedia.org/wiki/Local_search_(optimization).

+",2444,,2444,,3/12/2019 21:36,3/12/2019 21:36,,,,0,,,,CC BY-SA 4.0 +11180,4,,,3/12/2019 19:39,,0,,For questions related to local search algorithms used in AI (e.g. 2-opt or hill climbing).,2444,,2444,,3/12/2019 21:36,3/12/2019 21:36,,,,0,,,,CC BY-SA 4.0 +11181,1,,,3/12/2019 19:51,,2,3245,"

I want to tackle the problem of detecting similar objects in an image. To illustrate the problem consider this photo of some Lego bricks as my ""input"":

+ +

+ +

The detection routine should identify similar objects. So for the given input, it should e.g. identify the following output:

+ +

+ +

So an object might appear zero or more times in the input image. For example, there are only two bricks marked with a blue cross, but three bricks marked with a red cross.

+ +

It can be assumed that all objects are of similar kind, so e.g. only Lego bricks or a heap of sweets.

+ +

My initial idea is to apply a two-phased approach:

+ +
    +
  1. Extract all objects in input image.

  2. +
  3. Compare those extracted objects and find similar ones.

  4. +
+ +

Is this a valid approach or is there already some standard way of solving this kind of problem? Can you give me some pointers how to solve this problem?

+",23123,,,,,3/13/2019 7:47,How do I detect similar objects in an image?,,1,0,,,,CC BY-SA 4.0 +11182,1,11183,,3/12/2019 21:09,,1,1217,"

I am trying to understand the value iteration method for Markov Decision Process(MDP) and I was referring to UC Berkeley's slides titled Markov Decision Processes and Exact Solution Methods

+

On slide no. 9, we start with the first step :

+

+

Ok! So, we have the information about the transition function (described elaborately in slide no. 5 as well), the resting reward is 0 and discount of 0.9.

+

Using this, I am able to compute the utility value of the cell to the left of the terminal state with R = +1 (green cell). The action that is going to be most rewarding at this cell is moving forward, so, putting the values into the equation:

+

$$0.0 + 0.9 (0.8*1 + 0.1*0 + 0.1*0) =0.72$$

+

which seems to be correct:

+

+

Now, using the same algorithm, I am able to compute the value of the cells adjacent to this newly obtained utility cell value. However, I really do not know how they updated the value from

+
+

0.72 -> 0.78

+
+

in the next slide:

+

+

I have tried searching at various sites and seen some videos but most of them stop at the first iteration assuming the next step is the same, as it is a recursive equation (And it should have been so!), but I am stuck at this!

+",23126,,2444,,12/21/2021 0:36,12/21/2021 0:36,Unable to understand the second iteration update in value iteration algorithm for solving MDP,,1,0,,,,CC BY-SA 4.0 +11183,2,,11182,3/12/2019 22:05,,1,,"

The first thing to know is that, in this case, the values for the gridworld in the new iteration are calculated entirely with respect to the old values from the previous iteration. The value of $0.78$ is obtained like this:

+ +

$0.9 \cdot (0.8 \cdot 1 + 0.1 \cdot 0.72 + 0.1 \cdot 0) = 0.7848 \approx 0.78$

+ +

The term $0.8 \cdot 1$ is for going to the right with probability $0.8$ and getting a reward of $1$.

+ +

The term $0.1 \cdot 0.72$ is for going up with probability $0.1$: we hit the wall and stay in the same field, whose value is $0.72$ (from the previous iteration).

+ +

The term $0.1 \cdot 0$ is for going down with probability $0.1$: even though the value of that field in the image is $0.43$, we take the value from the previous iteration, which is $0$.

+",20339,,,,,3/12/2019 22:05,,,,0,,,,CC BY-SA 4.0 +11185,2,,11165,3/13/2019 2:58,,0,,"

Welcome to AI.SE Hadrien!

+ +

A possible approach is:

+ +
    +
  1. Gather many example images of rooms for which you know the square footage. Record the square footage of each room together with each image.
  2. +
  3. Pick a machine learning model that is well suited to learning relationships between images and numerical outputs, like a Convolutional Neural Network.
  4. +
  5. Train a machine learning model using an optimisation algorithm, like gradient decent. In the case of training a CNN, this algorithm starts by setting up the network with randomly chosen connection strengths between different 'simulated' neurons. It then exposes the network to an input image, and observes the number the network outputs in response. The algorithm then makes small adjustments to the strengths of the connections in the network, so that if the network were exposed to the image a second time, it would output a number that was less wrong (i.e. closer to the correct square footage). By repeating this process many thousands of times, and with many images, the network eventually becomes quite good at guessing the square footage of new images that it hasn't seen before.
  6. +
+ +

In practice, the network is quite likely to pick up on any patterns in the images that correlate with square footage, not necessarily the ones you want. For example, it might pick up on, say, the fact that humans in the picture are usually a good object to guess at the scale of the rest of the room.

+",16909,,,,,3/13/2019 2:58,,,,0,,,,CC BY-SA 4.0 +11187,2,,11173,3/13/2019 6:48,,2,,"

I don't think the section was written in haste. I think they just didn't have space to include the whole proof. It's a bit involved, so they just gave concepts.

+ +

An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning gives a proof of stability. At least parts of it should seem familiar if you've read Sutton and Barto's proof of the convergence of linear TD(0) on page 206 of their 2nd edition RL book.

+ +

On Convergence of Emphatic Temporal-Difference Learning gives a proof of convergence.

+ +

I confess that I don't understand these papers well enough to give a summary. If you eventually do, I would greatly appreciate an update.

+",22916,,22916,,3/13/2019 7:29,3/13/2019 7:29,,,,1,,,,CC BY-SA 4.0 +11188,2,,11181,3/13/2019 7:39,,1,,"

Disclaimer: I have never used Siamese networks

+ +

I would approach this problem in two steps:

+ +

First: train an object detector for the eligible classes of objects, for example using the YOLO architecture.

+ +

You could use a pretrained object detector and fine-tune it for your classes of objects.

+ +

Second: extract a lot of bounding boxes of eligible objects of the same classes from your dataset and train a Siamese network on their sub-images.

+ +

A Siamese network outputs a similarity measure between two objects.

+ +

Your pipeline would look like:

+ +
    +
  1. Run the object detector

  2. +
  3. For each pair of objects of the same class, rescale the bounding boxes and run the Siamese network.

  4. +
  5. Check if the similarity distance between the two objects is less than a threshold (a tunable hyperparameter)

  6. +
+",22745,,22745,,3/13/2019 7:47,3/13/2019 7:47,,,,0,,,,CC BY-SA 4.0 +11189,1,11193,,3/13/2019 8:29,,1,39,"

I am making an AI model to predict the monthly retail sales of a motorcycle spare parts shop. For that to be possible, I have to first create a dataset. The problem I am facing is: what features should the dataset have?

+ +

I already did some research on some other datasets, but I still want to know specifically what features it should have other than Date, Product Name, Quantity, Net amount, Gross amount?

+",23140,,9608,,3/13/2019 20:58,3/13/2019 20:58,What features should a dataset to predict monthly retail sales for a motorcycle spare parts shop have?,,2,0,,,,CC BY-SA 4.0 +11190,2,,11178,3/13/2019 8:38,,1,,"

No, the word vectors are not one-hot encodings. Yes, they are learned.

+ +

The purpose of the word2vec model is actually to learn dense, semantically meaningful encodings for words. That is, if your words are $d$-dimensional vectors, then each word's position in this vector space says something about what that word means. This is because word2vec learns to represent words in similar ways if they are frequently close together in your corpus. It implements the idea of distributional similarity.

+ +

The task of predicting an ""outside word"" given a ""center word"" accomplishes all of this in an indirect way.

+ +

A naive objective function to maximize for word2vec is
$$J = \prod_{t=1}^L \prod_{-m \leq j \leq m\\ \quad j\neq 0} p(\textbf{u}_{t+j}|\textbf{v}_t)$$

+ +

where $L$ is the length of your corpus, $m$ is the ""radius"" from each center word you want to consider, $\textbf{u}_{t+j}$ is an outside word, and $\textbf{v}_t$ is a center word.

+ +

If we let $p(\textbf{u}_{t+j}|\textbf{v}_t)$ be the softmax distribution, then maximizing $J$ means maximizing the inner product $\textbf{u}_{t+j}^T\textbf{v}_t$ in the softmax's numerator. Maximizing that inner product means making center words as close as possible to their neighboring words, giving you some semantically meaningful word vectors to use in your downstream NLP tasks.
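
To make the shapes concrete, here is a small NumPy sketch in which the word vectors are simply rows of two randomly initialised matrices that would be updated during training (the vocabulary size and dimensionality are arbitrary):

import numpy as np

vocab_size, d = 10000, 300
rng = np.random.default_rng(0)

# Two matrices of learnable parameters: one center vector and one outside
# vector per vocabulary word (both are updated during training).
V = rng.normal(scale=0.01, size=(vocab_size, d))   # center-word vectors v_c
U = rng.normal(scale=0.01, size=(vocab_size, d))   # outside-word vectors u_o

def p_outside_given_center(o, c):
    # Softmax over inner products, as in the formula above
    logits = U @ V[c]                  # shape: (vocab_size,)
    logits -= logits.max()             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[o]

print(p_outside_given_center(o=42, c=7))   # roughly 1/vocab_size before any training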

+ +

This lecture from Stanford's CS224N goes into more detail.

+",22916,,22916,,3/13/2019 8:52,3/13/2019 8:52,,,,4,,,,CC BY-SA 4.0 +11193,2,,11189,3/13/2019 10:41,,0,,"

This is the prototypical problem in AI/ML known as feature selection. If you do not have too many features, typically one would just use them all (with the exception of features that are known to be correlated); in this case, you would want to perform feature engineering wherever possible as well. On the other hand, if you have lots of features, and some are likely to be useless, you would use a feature-selection technique.

+ +

There are many, many methods for doing this. At the simplest level, one can simply use one's intuitive grasp of the problem domain. However, there are also many algorithmic ways of performing it. Some examples are a low-variance filter, or using an ensemble (classifier or regression, depending on the problem) which can then be used to order features according to their derived importance (I personally like this).
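
For instance, a short scikit-learn sketch of both of those ideas (the synthetic regression data is only a stand-in for your own table of features and monthly sales):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import VarianceThreshold

X, y = make_regression(n_samples=500, n_features=8, n_informative=3, random_state=0)

# 1) Low-variance filter: drop features whose values barely change
X_filtered = VarianceThreshold(threshold=0.0).fit_transform(X)

# 2) Ensemble-based ranking: order features by their derived importance
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print(ranking)  # indices of features, most important first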

+ +

A search of ""feature selection methods in ML"" will yield many potential ways to accomplish your goal.

+ +

As for which features to grab if you don't have many: as many as possible, guided by domain knowledge.

+",9608,,,,,3/13/2019 10:41,,,,0,,,,CC BY-SA 4.0 +11194,2,,11174,3/13/2019 11:09,,2,,"

OpenAI Retro is an extension to OpenAI Gym.

+ +

As such, it does not support multiple agents in multi-player environments. Instead, the environment is always presented from the perspective of a single agent that needs to solve a control problem.

+ +

When there is more than one agent in a Gym problem, everything other than the first agent is considered part of the environment. Typically player 2 in 2-player problems is controlled by whatever game AI already exists.

+ +

You can work around this in a couple of ways; both involve modifying existing environments, requiring a solid understanding of the integration and wrapper layers in Gym so that you can make correct edits. However, you might find environments where either of these has been done for you.

+ +

In fact, within the Retro environments, some multiplayer environments are supported using the second option below.

+ +

1. Supply additional player(s) as part of the environment

+ +

You could modify the environment code so that it accepts a parameter which is the agent used to control other players. Typically such an agent would not be set up to learn, it could just be some earlier version of the agent that is learning. You would need to code the interface between the supplied agent and the game engine - taking care to allow for its potentially different view of success.

+ +

If a Gym environment has been set up for you (or any human) to compete against your own agent using the keyboard, that is already halfway there to this solution. You could take a look at where the Gym environment has been set up for this, and modify the code to allow for automated input as opposed to action selection by keyboard.

+ +

This approach probably works best for competitive games where you can use a random selection of previous agents inserted into the environment to train a next ""generation"".

+ +

2. Present actions for multiple players as if there was a single player

+ +

Provided the game emulator allows you to do this, you can edit the code so that multiple players are controlled at once. The Gym environment does not specify how you construct your action choices, so you are then free to split the action vector up so that parts of it are chosen by one trained agent and parts of it by another.

+ +

You may need to work on different views of the environment and/or reward signal in this case. For instance in a competitive game (or a co-operative game with competitive elements such as personal scores) then you will need to add custom code to re-interpret reward values compared to single player game. For something simple like Pong, that could just be that Player 2 receives the negative of Player 1's reward signal.

+ +

This approach probably works best for cooperative games where success is combined. In that case, you have the choice of writing any number of separate agents that control different aspects of the action space - that would include having agents that separately controlled P1 and P2 game controller inputs.

+ +

Each agent would still effectively treat its partner as a part of the environment, so you may want to mix training scenarios so that agents for each controller did not get too co-dependent. For instance, if you hope to step in to control P1, you probably don't want the AI running P2 to have too specific expectations of P1's role and behaviour in the game.

+ +

This second option is what OpenAI Retro interface does. For instance Pong supports this:

+ +
env = retro.make(game='Pong-Atari2600', players=2)
+
+ +

I verified this using the sample code in the documentation, which controls both paddles, and receives the reward signal as a Python list mostly [0.0, 0.0] but whenever a player scores it becomes [-1.0, 1.0] or [1.0, -1.0] depending on which player got the point.

+ +
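A rough sketch of what that looks like in practice (the environment name matches the documentation example above; the random action choice is just a stand-in for two separately trained agents):

import retro

env = retro.make(game='Pong-Atari2600', players=2)
obs = env.reset()
done = False
while not done:
    # one combined action vector covers both controllers; here it is sampled
    # at random instead of being split between two learned agents
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    # reward arrives per player, e.g. [1.0, -1.0] when player 1 scores
    reward_p1, reward_p2 = reward
env.close()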

This will only be supported on retro games where someone has made the effort to make it work as above. If you have a game which does not do this, but in theory supports multiplayer on the ROM, you will need to look at and alter either the AI retro wrapper or the emulator or both.

+",1847,,1847,,3/15/2019 19:04,3/15/2019 19:04,,,,9,,,,CC BY-SA 4.0 +11196,1,,,3/13/2019 11:20,,5,150,"

I am looking for a source that really discusses the classic rules of learning in depth. So classical conditioning, operant conditioning, imitation learning... I have found an infinite number of books that supposedly discuss these topics, but have not yet found anything that summarizes all the important findings on this topic. I am familiar with the basics, but I am explicitly interested in a detailed presentation of the topic.

+ +

Can anyone tell me a good source about the different forms of classical learning in mammals? I consider ""Reinforcement Learning: An Introduction"" comprehensive and detailed regarding RL. A comparable book on biological learning systems would be great. Sources in German and English would fit.

+",17658,,2444,,4/7/2022 13:55,4/7/2022 13:55,What would be a good comprehensive source about the different forms of classical learning in mammals?,,1,0,,,,CC BY-SA 4.0 +11198,2,,11189,3/13/2019 15:36,,0,,"

I would think you would want the brands or model numbers each part fits, and also data on the number of each model or brand sold each year. E.g. you need to stock more parts for a Honda than for an Indian.

+ +

Over time your system should be able to learn what demand patterns are for each part, but to ""prime the pump"" some sort of information on number of consumers for each part would be helpful.

+ +

You would also want to know what the time needed to restock is for each part.

+",23143,,,,,3/13/2019 15:36,,,,0,,,,CC BY-SA 4.0 +11200,1,,,3/13/2019 16:36,,2,45,"

My group is working on an ML model that can work with little data (and poor accuracy) as long as only little data is actually available, but that can easily be extended as soon as said data becomes available (think of interpolating existing models and then creating an individual predictor). This is due to a business requirement of the relevant application (new plants get installed over time but need to be integrated seamlessly). Transforming from a coarse yet cheap prediction to an accurate but expensive prediction takes labor effort that the company can invest as desired.

+ +

Are there research areas that take into account such ""evolutionary transformation processes"" which enable a tradeoff between costs and accuracy? What are the right keywords to look for?

+ +

I am looking for keywords/papers/communities.

+",23158,,2444,,6/14/2019 23:01,7/15/2019 0:01,Are there communities dealing with costs-vs-accuracy tradeoffs in Machine Learning?,,1,0,,,,CC BY-SA 4.0 +11201,2,,11166,3/13/2019 17:39,,8,,"

The article Geometric deep learning: going beyond Euclidean data (by Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst) provides an overview of this relatively new sub-field of deep learning. It answers all the questions asked above (and more). If you are familiar with deep learning, graphs, linear algebra and calculus, you should be able to follow this article.

+

What is geometric deep learning (GDL)?

+

This article describes GDL as follows

+
+

Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds.

+
+

So, the inputs to these GDL models are graphs (or representations of graphs), or, in general, any non-Euclidean data. To be more concrete, the input to these models (e.g. graph neural networks) are e.g. feature vectors associated with the nodes of the graphs and matrices which describe the graph structure (e.g. the adjacency matrix of the graphs).

+
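To make that concrete, here is a toy NumPy sketch of how such inputs are commonly combined in a simple graph neural network layer; the propagation rule below is just one common choice for illustration, not something prescribed by the survey:

import numpy as np

# toy graph: 4 nodes, 3 features per node (all values are made up)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix describing the structure
X = np.random.randn(4, 3)                    # one feature vector per node
W = np.random.randn(3, 8)                    # learnable weights of the layer

A_hat = A + np.eye(4)                        # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # simple degree normalisation
H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)   # one ReLU propagation step -> new node features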

Why are e.g. graphs non-Euclidean data?

+

A graph is a non-Euclidean structure because e.g. distances between nodes are not well defined. Yes, you can have graphs with weights associated with the edges, but not all graphs have this property.

+

What classes of problems does GDL address?

+

In GDL, there are two classes of problems that are often tackled:

+
    +
  1. characterise the structure of the data (e.g. of a graph)
  2. +
  3. analyse functions defined on a given non-Euclidean domain
  4. +
+

These classes of problems are related, given that the structure of the graph imposes certain properties on the functions that can be defined on it. Furthermore, these properties of these functions can also convey information about the structure of the graph.

+

What are applications of GDL?

+

An example of an application where this type of data (graphs) arises is in the context of social networks, where each user can be associated with a vertex of the social graph and the characteristics (or features) of each user (e.g. number of friends) can be represented as a feature vector (which can then be associated with the corresponding vertex of a graph). In this context, the goal might e.g. be to determine different groups of users in the social network (i.e. clustering).

+

Why can't we simply use deep learning methods (like CNNs) when the data is non-Euclidean?

+

There are several problems that arise when dealing with non-Euclidean data. For example, operations like convolution are not (usually) defined on non-Euclidean data. More concretely, the relative position of nodes is not defined on graphs (but this would be required to perform the usual convolution operation): in other words, it is meaningless to talk about a vertex that is e.g. on the left of another vertex. In practice, it means that we can't simply use the usual CNN when we are given non-Euclidean data. There have been attempts to generalise the convolution operation to graphs (or to approximate it). The field is still quite new, so there will certainly be new developments and breakthroughs.

+",2444,,2444,,2/3/2021 1:27,2/3/2021 1:27,,,,3,,,,CC BY-SA 4.0 +11202,1,,,3/13/2019 18:03,,1,125,"

Suppose we want to classify a review as good ($1$) or bad ($0$). We have a training data set of $10,000$ reviews. Also, suppose we have a vocabulary of $100,000$ words $w_1, \dots, w_{100,000}$. So the data is a matrix of dimension $100,000 \times 10,000$. Let's represent each of the words in the reviews using a bag-of-words approach over tf-idf values. Also, we normalize the rows such that they sum to $1$.

+

In a logistic regression approach, would we have $10,000$ different logistic regression models as follows:

+

$$ \log \left(\frac{p}{1-p} \right)_{1} = \beta_{0_{1}} + \beta_{1_{1}}w_{11} + \dots + \beta_{100,000_{1}}w_{100,000} \\ \vdots \\ \log \left(\frac{p}{1-p} \right)_{10,000} = \beta_{0_{10,000}} + \beta_{1_{10,000}}w_{11} + \dots + \beta_{100,000_{10,000}}w_{100,000}$$

+

So are we estimating $100,000 \times 10,000$ coefficients?

+",23120,,2444,,12/4/2020 17:35,12/4/2020 17:35,"How many parameter would there be in a logistic regression model used to classify reviews into ""good"" or ""bad""?",,1,0,,,,CC BY-SA 4.0 +11203,1,,,3/13/2019 18:08,,1,294,"

I want to make an AI with deep learning which can adapt itself from user to user.

+ +

Let's say we have a food-combiner AI which suggests a food to eat together with another food that you give as input. This is the most personalized case I could find to ask about here. For example, the AI suggested a food for me. However, the food the AI suggested for me might not be a good choice for another person. So that other person will tell the AI something like ""I don't like eating that food with this one"", etc. When the user tells the AI that, it should affect the AI's further food-combination suggestions.

+ +

How can I build that AI? Where should I start? Which areas or topics should I research?

+",16864,,2444,,3/13/2019 20:51,3/13/2019 21:08,How create an AI that continuously adapts to different users?,,1,2,,,,CC BY-SA 4.0 +11204,1,,,3/13/2019 18:10,,1,67,"

Currently, I'm feeding audio spectrograms to a CNN with 3 convolutional layers.

+ +

Each convolution is followed by a max pool of filter size 2.

+ +

First -> 5x5x4

+ +

Second -> 5x5x8

+ +

Third -> 5x5x16

+ +

and the final layer is fully connected with 512 units.

+ +

While training with a dropout of 0.25, I'm getting a training accuracy of 0.97 after 150 iterations, but on the test data the accuracy is just 0.60.

+ +

How can I improve the results?

+ +

Yes, both the training and test data come from the same distribution.

+",18428,,,,,4/12/2019 22:02,How to reduce over-fitting on training set?,,1,2,,,,CC BY-SA 4.0 +11206,2,,10830,3/13/2019 18:59,,7,,"

After some research and reading this post, I see where my problem was: I was introducing a full batch of experiences, selected randomly, yes, but the experiences within the batch were consecutive. After redoing my experience selection method, my DQN is actually working and has reached about +200 points after 400,000 experiences (about 500 episodes; only 2-3 hours of training!). Before, I couldn't reach that score after days of training. I'll let it keep training to see if there is anything I can improve. Thanks to everyone who tried to help me! I leave this answer here just in case someone has the same problem as me.

+",9818,,,,,3/13/2019 18:59,,,,0,,,,CC BY-SA 4.0 +11208,1,,,3/13/2019 19:33,,3,93,"

Max Tegmark discusses the topic of consciousness in his book Life 3.0 and comes to the conclusion that consciousness is substrate-independent. If his analysis is correct, it should be possible to create artificial consciousness. The integrated information theory (IIT), while currently only just a theory, also points in this direction.

+

This leads me to the question: which fields of AI research, if any, are currently actively engaged in this domain?

+

So far, I've only found research concerning consciousness in neuroscience and discussions of experts in philosophy.

+

Are there any projects publicly known concerning artificial consciousness or organizations that are active in this regard?

+",9161,,2444,,12/9/2021 21:42,12/9/2021 21:42,Which fields of AI are actively researching consciousness?,,1,1,,,,CC BY-SA 4.0 +11209,1,,,3/13/2019 20:00,,3,75,"

I am reading the paper Regret Minimization in Games with Incomplete Information on the CFR algorithm.

+ +

On page 4, the paper defines $R^{T,+}_{i,\text{imm}}=\max\{R^{T}_{i,\text{imm}}, 0\}$ after equation (5). I am confused as to why this is necessary, since in the definition of $R^{T}_{i,\text{imm}}$ the regret is already computed with respect to the optimal action.

+ +
    +
  • As everything is in expectation, is mixed-action going to make any difference?
  • +
+ +

Isn't $R^{T}_{i,\text{imm}}$ always non-negative already?

+",23163,,1671,,3/13/2019 21:01,3/13/2019 21:01,Negative counterfactual regret,,0,0,,,,CC BY-SA 4.0 +11210,2,,11203,3/13/2019 21:08,,1,,"

Assuming you have enough training data and representational capacity, you can give each user a unique identifier and concatenate that with the other inputs to the neural net. As users give more feedback, the network will learn in the usual way since each situation (e.g. food-user pair) is represented by a unique input.

+ +
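A minimal Keras sketch of that idea (all sizes, names and the binary ""liked"" target are made-up placeholders):

from tensorflow.keras import layers, models

user_id = layers.Input(shape=(1,), dtype='int32')        # unique identifier per user
food_features = layers.Input(shape=(32,))                # whatever encodes the food pairing

user_vec = layers.Flatten()(layers.Embedding(input_dim=10000, output_dim=16)(user_id))
x = layers.Concatenate()([user_vec, food_features])      # concatenate user id with the other inputs
x = layers.Dense(64, activation='relu')(x)
liked = layers.Dense(1, activation='sigmoid')(x)         # probability this user likes the suggestion

model = models.Model([user_id, food_features], liked)
model.compile(optimizer='adam', loss='binary_crossentropy')

Each piece of feedback then becomes a (user, food pair, liked/disliked) training example, so suggestions shift per user as feedback accumulates.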

I'd consider this a general, brute-force approach, and it may not be suited to your application. Others' links to recommendation systems might be more useful. It depends on what your task is and what your constraints are.

+",22916,,,,,3/13/2019 21:08,,,,3,,,,CC BY-SA 4.0 +11211,2,,11204,3/13/2019 21:09,,2,,"

The problem of overfitting arises when your model is too flexible and fits the training data too closely. Some approaches you could take are listed below (a short sketch of a couple of them follows the list):

+ +
    +
  1. Download pretrained models (VGG-16) and use transfer learning.
  2. +
  3. Increase the value of your dropout (e.g. from 0.25 to 0.50).
  4. +
  5. Use data augmentation for your images.
  6. +
  7. Reduce the number of fully connected layers.
  8. +
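As a rough Keras sketch of points 2 and 3 (the input shape, class count and shift ranges below are placeholders, not values taken from your setup):

from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = models.Sequential([
    layers.Conv2D(4, (5, 5), activation='relu', input_shape=(128, 128, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(8, (5, 5), activation='relu'),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, (5, 5), activation='relu'),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dropout(0.5),                      # raised from 0.25
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax'),   # placeholder number of classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# light augmentation: small shifts of the spectrogram in time and frequency
augmenter = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.05)
# model.fit(augmenter.flow(x_train, y_train, batch_size=32), epochs=...)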
+",12006,,,,,3/13/2019 21:09,,,,0,,,,CC BY-SA 4.0 +11212,1,11404,,3/13/2019 21:17,,1,321,"

I'm trying to replicate the DeepMind DQN paper. I'm using OpenAI's Gym. I'm trying to get a decent score with Space Invaders (using SpaceInvaders-v4 environment). I checked the actions available with env.unwrapped.get_action_meanings(), and I get this:

+
['NOOP', 'FIRE', 'RIGHT', 'LEFT', 'RIGHTFIRE', 'LEFTFIRE']
+
+

Checking the number of actions with env.action_space.n gives me a number of 6 actions.

+

The RIGHTFIRE and LEFTFIRE actions, I suppose, aren't used, given that they seem to do the same as LEFT and RIGHT, am I right?

+

If so, restricting the action size to the 4 first actions would improve my learning?

+",9818,,2444,,12/27/2020 13:56,12/27/2020 13:57,Should I ignore the actions RIGHTFIRE and LEFTFIRE in the SpaceInvaders environment?,,1,0,,12/28/2020 16:05,,CC BY-SA 4.0 +11213,2,,11208,3/14/2019 2:26,,4,,"

I think we still have a long way to go before any progress is made on artificial consciousness. However, researchers are taking inspiration from traits of human consciousness. One relevant paper is Machine Theory of Mind by DeepMind. They show that their model can (at least to some extent) represent the desires, beliefs, and intentions of agents that it observes. It even passes a form of the Sally-Anne test, showing that it can represent the false beliefs of an agent.

+",22916,,,,,3/14/2019 2:26,,,,0,,,,CC BY-SA 4.0 +11214,1,,,3/14/2019 2:31,,1,472,"

I just recently got into machine learning, and have been hitting a lot of obstacles understanding the algorithms involved. My issue isn't with the programming, but with how the algorithms are translated from math to code. ML is popular with Python, and that's okay, but I don't like Python, and I don't want to have to learn it just to do the exact same thing I could do in the programming language of my choice, in a way I feel comfortable with (I don't care if Python is popular with math majors because it's easier for them to understand -- it isn't for me, when nothing being done is explained thoroughly).

+ +

I'm trying to decipher this model (see the first linked image).

+ +

This is the breakdown for the algorithm model (second linked image).

+ +

This is the math I was able to decipher for this particular model (third linked image). (On the left are the terminology and its usage; the middle part, in black, relates to programming arrays; below it is the equation used in the bottom left, but more elaborate; and underneath that is a picture that says the same thing the algorithm is doing, because sometimes pictures are easier to understand than words: VectorArray(Value) * VectorArray(Weight) + SingleUnit(Bias) = Neuron(Node).)

+ +

But then everything stops at the middle layer of the second image. How do I get the full output to give me a yes or no response? How do I enter the variables and tables to go through the math steps? Is my understanding correct, or am I lacking somewhere?

+ +

This user is also sharing the same algorithm, but our math doesn't look the same. How do I go from what I have to what they have?

+ +

At the end of all of these questions, I'm going to write everything into a script that uses a different language from Python (and I would need to manually create resources from scratch, because it seems no one else thinks machine learning should be done in other languages...). I want to be able to understand the process itself, without just doing cookie-cutter actions (using tools made by others for those too lazy to do the work -- which circumvent the learning/understanding of what's going on behind the scenes).

+",23171,,23171,,3/14/2019 7:54,3/14/2019 7:54,"How to translate algorithm from logic to equation, and back?",,0,10,,,,CC BY-SA 4.0 +11216,2,,11200,3/14/2019 4:42,,1,,"

Welcome to AI.SE ks.and1.

+ +

What you're describing could be related to several areas, but since you are looking only for some keywords, I'll suggest some:

+ +
    +
  1. You might be interested in anytime learning, which might better be called 'learning that can stop at any time'. A good example of this approach in machine learning is anytime learning for decision trees, as summarized in Esmeir & Markovitch's 2007 JMLR paper
  2. +
  3. Apart from this, it sounds like you are just describing the regular process of retraining a model when you get more data. For most algorithms, this just amounts to changing the input to your program so that it points at a larger file, containing more records than last time, and maybe buying a faster GPU or CPU to train the model with if it's taking too long with the extra data.
  4. +
+ +

Hopefully, this can help you get started. If you have some more details about your problem, I might be able to suggest some more specific resources.

+",16909,,,,,3/14/2019 4:42,,,,0,,,,CC BY-SA 4.0 +11218,1,11224,,3/14/2019 5:02,,2,111,"

I need to create a model which will find suspicious entries or anomalies in a network, whose characteristics or features are the asset_id, user_id, the IP accessed from, and the time_stamp.

+ +

Which unsupervised anomaly detection algorithms or models should I use to solve this task?

+",23174,,2444,,3/14/2019 10:47,3/14/2019 10:47,Which unsupervised anomaly detection algorithms are there?,,2,0,,,,CC BY-SA 4.0 +11219,1,11221,,3/14/2019 5:49,,6,3284,"

I read top articles on Google Search about Deep Q-Learning:

+ + + +

and then I noticed that they all use a CNN as the approximator. If deep learning has a broader definition than just CNNs, can we still use the term ""Deep Q-Learning"" for our model if we don't use a CNN? Or is there a more appropriate term for that kind of Q-learning model? For example, if my model only uses deep fully-connected layers.

+ +

*They don't explicitly say that Deep RL means using a CNN in RL, but they use DeepMind's work (which uses a CNN) as the example of Deep Q-Learning.

+",16565,,16565,,3/15/2019 3:50,11/5/2019 16:26,Do we have to use CNN for Deep Q Learning?,,2,0,,,,CC BY-SA 4.0 +11220,1,,,3/14/2019 6:37,,0,89,"

I am not new to AI and did some work for a few months, but I'm completely new to text-to-audio. Yes, I used text-to-audio tools a decade back... but I would like to know where exactly we stand in terms of text-to-audio today.

+ +

I have already done some research; it seems like the traditional approach to text-to-audio is fading away and speech cloning is emerging, but my impression of this might be completely wrong.

+ +

What are the current open source text-to-audio libraries?

+",23176,,23176,,3/14/2019 12:17,1/9/2020 15:04,What are the current open source text-to-audio libraries?,,2,0,,1/5/2023 8:41,,CC BY-SA 4.0 +11221,2,,11219,3/14/2019 6:47,,9,,"

No. DQN and other deep RL methods work well with fully connected layers. Here's an implementation of DQN which doesn't use CNNs: github.com/keon/deep-q-learning/blob/master/dqn.py

+ +
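For illustration, a Q-network along the lines of that repository can be as simple as a few dense layers (state_size and action_size are placeholders for your environment):

from tensorflow.keras import layers, models

def build_q_network(state_size, action_size):
    model = models.Sequential([
        layers.Dense(24, activation='relu', input_shape=(state_size,)),
        layers.Dense(24, activation='relu'),
        layers.Dense(action_size, activation='linear'),  # one Q-value per action
    ])
    model.compile(optimizer='adam', loss='mse')
    return model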

DeepMind mostly use CNNs because they use images as the input state, and that is because they tried to evaluate the performance of their methods against human performance. Human performance is easy to measure in games with images as the input state, and that's why CNN-based methods are so prominent in RL now.

+",22745,,2444,,3/15/2019 13:38,3/15/2019 13:38,,,,2,,,,CC BY-SA 4.0 +11224,2,,11218,3/14/2019 9:56,,1,,"

If you are OK with using Python, try novelty detection with sklearn:

+ +

https://scikit-learn.org/stable/modules/outlier_detection.html
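For example, a small sketch with scikit-learn's IsolationForest (X is a hypothetical numeric matrix built from your asset_id, user_id, IP and timestamp features after encoding; the contamination value is a guess you would tune):

from sklearn.ensemble import IsolationForest

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)        # -1 marks suspected anomalies, 1 marks normal entries
scores = detector.score_samples(X)  # lower scores mean more anomalous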

+",23104,,,,,3/14/2019 9:56,,,,1,,,,CC BY-SA 4.0 +11225,2,,11218,3/14/2019 9:56,,1,,"

Hierarchical Temporal Memory is a model well suited for anomaly detection. It is also pretty interesting and different from currently typical Deep Learning models.

+",2227,,,,,3/14/2019 9:56,,,,1,,,,CC BY-SA 4.0 +11226,1,20628,,3/14/2019 10:20,,24,18808,"

What is non-Euclidean data?

+

Here are some sub-questions

+
    +
  • Where does this type of data arise? I have come across this term in the context of geometric deep learning and graph neural networks.

    +
  • +
  • Apparently, graphs and manifolds are non-Euclidean data. Why exactly is that the case?

    +
  • +
  • What is the difference between non-Euclidean and Euclidean data?

    +
  • +
  • How would a dataset of non-Euclidean data look like?

    +
  • +
+",2444,,2444,,8/14/2021 13:51,4/16/2022 20:43,What is non-Euclidean data?,,5,0,,,,CC BY-SA 4.0 +11227,1,11240,,3/14/2019 10:39,,0,221,"

I have features like state, city and location. Currently, I am inserting these into their respective tables in a DB and transforming them using their primary keys.

+ +

eg. country,state,city = IN,Maharastra,Pune = 2,5,10 (primary key in DB)

+ +

transforming it as 002005010

+ +

Is this approach correct? If not, please suggest the correct one.

+",23174,,,,,3/14/2019 18:39,transform Location data into int which will be used as input to ML model,,1,0,,5/10/2022 4:22,,CC BY-SA 4.0 +11229,1,,,3/14/2019 10:55,,3,71,"

I couldn't find a GUI for precise ""artificial neural-network-like"" structures which supports neuron naming, synapse naming, import of external functions or code fragments, and debugging. It would be ideal if synapses could also pass not only float values but user-defined structures too. Optimization and GPU computing are irrelevant (and probably impossible with such features). Does such a thing exist? I'm thinking about writing one myself, for my needs. And most probably I will...

+ +

Why do I need such features? For testing the concept of constructing a kind of neural logic programming. I'm sure there should also exist programming languages with such a paradigm, but I don't know how to find them either; Google would mostly give me info about common artificial neural networks.

+",23183,,23183,,3/14/2019 12:22,3/14/2019 12:22,Is there any GUI for per-neuron editing,,0,5,,,,CC BY-SA 4.0 +11231,1,,,3/14/2019 12:52,,1,47,"

There are several versions of DDQN floating around. Sutton gives one that is a simple symmetric random update of the two Q functions. I think other papers (the Silver paper, for example) use a kind of delayed and split update rule.

+ +

Is there anything systematic describing the properties of the bias corrections and their respective advantages?

+",23001,,2444,,3/14/2019 14:59,3/14/2019 14:59,Comparison and understanding of different version of DDQN?,,0,1,,,,CC BY-SA 4.0 +11234,1,,,3/14/2019 13:03,,2,153,"

I have a log file of the format,

+ +
+

Index, Date, Timestamp, Module, App, Context, Session, Verbosity level, Description

+
+ +

The log file can be considered as a master log, which consists of individual logs from several modules constituting a distributed system. The individual log can be identified using the corresponding Module+App+Context tags. The verbosity level(Info, Warn, Error, …) and the descriptions(system generated + print statements added by developers) contain further information on the log events necessary for debugging. I need to perform an unsupervised anomaly detection with the log file as input. The output should be the functionality and timestamp of the identified anomalies.

+ +

Since the log is mostly textual, I plan to use an NLP technique (bag of words / TF-IDF) to convert the data into word vectors and then apply a generative learning method to identify the normal pattern. Can someone tell me whether my approach is in the right direction? Which headers of the log file would be relevant for the word-vector representation and further analysis?

+",23161,,,,,3/14/2019 13:03,How to perform unsupervised anomaly detection from log file with mostly textual data?,,0,0,,,,CC BY-SA 4.0 +11235,1,,,3/14/2019 14:15,,4,1101,"

I am reading the BERT paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

+ +

As I look at the attention mechanism, I don't understand why, in the BERT encoder, we have an intermediate layer between the attention and neural network layers with a bigger output ($4*H$, where $H$ is the hidden size). Perhaps it is the layer normalization, but, by looking at the code, I'm not certain.

+",23154,,2444,,11/1/2019 2:36,12/15/2021 9:25,Why does the BERT encoder have an intermediate layer between the attention and neural network layers with a bigger output?,,1,1,,,,CC BY-SA 4.0 +11236,1,,,3/14/2019 14:43,,0,127,"

Suppose we are using word2vec and have embeddings of individual words $w_1, \dots, w_{10}$. Let's say we wanted to analyze $2$ grams or $3$ grams.

+

Why would adding all the possible embeddings, $\binom{10}{2}$ or $\binom{10}{3}$, be "worse" than using 1D-convolutions?

+",23120,,2444,,12/4/2020 10:20,4/23/2023 13:04,"Why would adding all the possible embeddings be ""worse"" than using 1D-convolutions?",<1d-convolution>,1,0,,,,CC BY-SA 4.0 +11238,2,,11202,3/14/2019 17:23,,2,,"

Nope! Our number of coefficients will be driven by the vocabulary, and we'll use each of those 10K samples to estimate values for those coefficients - so, 'just' 100K coefficients (plus an intercept). However, word frequency in human languages follows a Zipf distribution => most of those words will be rare, seen in only a few samples (=> won't even be able to determine whether they are useful or not, let alone get good values for coefficients). For an application like this one, you would probably find most of the value from a few hundred words.
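A quick way to sanity-check the parameter count (assuming scikit-learn, with X a hypothetical 10,000 x 100,000 tf-idf matrix and y the 0/1 labels):

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression().fit(X, y)
print(clf.coef_.shape)       # (1, 100000): one weight per vocabulary word
print(clf.intercept_.shape)  # (1,): a single bias term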

+",17770,,,,,3/14/2019 17:23,,,,0,,,,CC BY-SA 4.0 +11240,2,,11227,3/14/2019 18:39,,1,,"

In order to accurately input location data into a machine learning model it really depends on what your goal is and what type of algorithm you are working with. If you are working with a strictly numerical algorithm and your data seems to be spread far apart, it might be easier to convert your country-state-city location to a longitude, latitude feature where the exact value is the centroid location of the given city. This kaggle post has a good writeup of how to run build features for geo-spatial data.

+ +
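A tiny pandas sketch of the centroid idea (the lookup table and coordinates below are only illustrative):

import pandas as pd

centroids = pd.DataFrame({
    'country': ['IN'], 'state': ['Maharashtra'], 'city': ['Pune'],
    'lat': [18.52], 'lon': [73.86],            # approximate, illustrative values
})

data = pd.DataFrame({'country': ['IN'], 'state': ['Maharashtra'], 'city': ['Pune']})
data = data.merge(centroids, on=['country', 'state', 'city'], how='left')
# 'lat' and 'lon' can now be fed to the model instead of a 002005010-style key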

However, if you have only a small number of different locations per identifier (country, city, state), you could just represent them as separate location classes and subclasses, as you would for a machine learning detection model. You could think ""vehicle"" -> ""bike"" -> ""road-bike"" similarly to ""country"" -> ""state"" -> ""city"". In this case, you would want to look at hierarchical methods for machine learning and see some of the ways they represent their data. Although I would say this method is more frowned upon for larger datasets, for smaller datasets it might be a better option.

+",17408,,,,,3/14/2019 18:39,,,,3,,,,CC BY-SA 4.0 +11243,1,33843,,3/14/2019 20:47,,4,188,"

What is a generalized MDP?

+

Here are a few sub-questions.

+
    +
  1. How is it different than a "regular" MDP?
  2. +
  3. How does it generalize the notion of an MDP?
  4. +
  5. Why do we need a generalized MDP?
  6. +
  7. Do generalized MDPs have some practical usefulness or they are just theoretical tools?
  8. +
+",2444,,2444,,12/20/2021 13:58,12/20/2021 14:07,What is a generalized MDP?,,1,0,,,,CC BY-SA 4.0 +11244,1,,,3/14/2019 20:50,,3,377,"

In certain reinforcement learning (RL) proofs, the operators involved are assumed to be non-expansive. For example, on page 6 of the paper Generalized Markov Decision Processes: Dynamic-programming and Reinforcement-learning Algorithms (1997), Csaba Szepesvari and Michael L. Littman state

+ +
+

When $0 \leq \gamma < 1$ and $\otimes$ and $\oplus$ are non-expansions, the generalized Bellman equations have a unique optimal solution, and therefore, the optimal value function is well defined.

+
+ +

On page 7 of the same paper, the authors say that max is non-expansive. Moreover, on page 33, the authors assume $\otimes$ and $\oplus$ are non-expansions.

+ +

What is a non-expansive operator? Why is the $\max$ (and the $\min$), which is, for example, used in Q-learning, a non-expansive operator?

+",2444,,2444,,9/15/2019 14:49,9/15/2019 14:49,Why is the max a non-expansive operator?,,2,0,,,,CC BY-SA 4.0 +11245,2,,11236,3/14/2019 21:00,,0,,"

N-grams are defined as sets of n contiguous words. We use n-grams because they are more useful than random combinations of words across the sentence. Intuitively, combinations of nearby words have more semantic meaning than combinations of distant words.

+

Also, using all possible combinations of n embeddings would take much longer, especially since (1D) convolutions are such efficient operations.

+",22916,,2444,,12/4/2020 10:22,12/4/2020 10:22,,,,0,,,,CC BY-SA 4.0 +11246,1,11251,,3/14/2019 21:12,,5,5479,"

Apparently, we can solve an MDP (that is, we can find the optimal policy for a given MDP) using a linear programming formulation. What's the basic idea behind this approach? I think you should start by explaining the basic idea behind a linear programming formulation and which algorithms can be used to solve such constrained optimisation problems.

+",2444,,,,,4/13/2022 11:52,How can we use linear programming to solve an MDP?,,1,0,,,,CC BY-SA 4.0 +11247,5,,,3/14/2019 21:23,,0,,,-1,,-1,,3/14/2019 21:23,3/14/2019 21:23,,,,0,,,,CC BY-SA 4.0 +11248,4,,,3/14/2019 21:23,,0,,"For questions related to the linear programming optimisation technique used in the context of AI (e.g. in the context of RL, in order to solve an MDP).",2444,,2444,,3/15/2019 19:11,3/15/2019 19:11,,,,0,,,,CC BY-SA 4.0 +11249,2,,11220,3/14/2019 21:26,,0,,"

There is one by Mozilla called Deep Voice, and another Python library called pocketsphinx.

+",15465,,,,,3/14/2019 21:26,,,,0,,,,CC BY-SA 4.0 +11251,2,,11246,3/15/2019 2:42,,2,,"

This question seems to be addressed directly in these slides.

+

The basic idea is:

+
    +
  • Assume you have a complete model of the MDP (transitions, rewards, etc.).
  • +
  • For any given state, we have the assumption that the state's true value is reflected by:
  • +
+

$$V^*(s) = r + \gamma \max_{a \in A}\sum_{s' \in S} P(s' | s,a) \cdot V^*(s')$$

+

That is, the true value of the state is the reward we accrue for being in it, plus the expected future rewards of acting optimally from now until infinitely far into the future, discounted by the factor $\gamma$, which captures the idea that reward in the future is less good than reward now.

+
    +
  • In Linear Programming, we find the minimum or maximum value of some function, subject to a set of constraints. We can do this efficiently if the function can take on continuous values, but the problem becomes NP-Hard if the values are discrete. You would usually do this using something like the Branch & Bound algorithm. These are widely available in fast implementations. GLPK is a decent free library. IBM's CPLEX is faster, but expensive.

    +
  • +
  • We can represent the problem of finding the value of a given state as: +$$\text{minimize}_V \ V(s)$$ +subject to the constraints: +$$V(s) \geq r + \gamma\sum_{s' \in S} P(s' | s,a)*V(s'),\; \forall a\in A, s \in S$$ +It should be apparent that if we find the smallest value of $V(s)$ that matches this requirement, then that value would make exactly one of the constraints tight.

    +
  • +
  • If you formulate your linear program by writing a program like the one above for every state, and then minimize $\sum_{s\in S} V(s)$ subject to the union of all the constraints from all these sub-problems, you have reduced the problem of learning a value function to solving the LP (a small sketch of this formulation is given below).

    +
  • +
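Here is the promised sketch, using scipy.optimize.linprog on a tiny randomly generated MDP (P and R are made-up model arrays; nothing here is specific to the slides linked above):

import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = np.random.rand(n_states, n_actions)                                  # R[s, a]

c = np.ones(n_states)            # minimise sum_s V(s)
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        # constraint V(s) >= R[s, a] + gamma * sum_s' P[s, a, s'] * V(s'),
        # rewritten into linprog's "A_ub @ V <= b_ub" form
        row = gamma * P[s, a]
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(None, None))
V = res.x                        # optimal state values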
+",16909,,2444,,4/13/2022 11:52,4/13/2022 11:52,,,,0,,,,CC BY-SA 4.0 +11252,2,,11166,3/15/2019 9:15,,19,,"

To complete the first answer that is rather graph oriented, I will write a little about deep learning on manifolds, which is quite general in terms of GDL thanks to the nature of manifolds.

+ +

Note that the description of GDL through the explanation of what are DL on graphs and manifolds, in opposition to DL on euclidean domains, comes from the 2017 paper Geometric deep learning: going beyond Euclidean data (this paper is excellent at clarifying both the intuition and the mathematics of what I'm writing).

+ +

1. In case you don't know what a manifold is

+ +

As the previously cited paper puts it:

+ +
+

Roughly, a manifold is a space that is locally Euclidean. One of the + simplest examples is a spherical surface modeling our planet: around a + point, it seems to be planar, which has led generations of people to + believe in the flatness of the Earth. Formally speaking, a + (differentiable) d-dimensional manifold X is a topological space where + each point x has a neighborhood that is topologically equivalent + (homeomorphic) to a d-dimensional Euclidean space, called the tangent + space.

+
+ +

Good other not-so-technical explanation on stats.stackexchange

+ +

Other Wikipedia examples to develop not too abstract understanding

+ +

Very shortly put, it's an interesting mathematical set on which to work (different kinds exist, see papers at the end of this answer for DL related manifolds uses). By work, you can typically understand that you constrain the neural net parameters to the manifold you chose (e.g. training with parameters constrained on a hypersphere, among the geomstats paper examples).

+ +

Your data can also be represented thanks to a practical manifold. For example, you can choose to work on images and videos by representing the samples using Symmetric Positive Definite (SPD) matrices (see this paper), the space of SPD matrices being a manifold itself.

+ +

2. Why bother learning on manifolds ?

+ +

Defining a clearer/better adapted set (understand that it's a sort of constraint!) on which to learn parameters and features can make it simpler to formally understand what your model is doing, and can lead to better results. I see it as a part of the effort of deep learning formalization. One could say you're looking for the best information geometry for your task, the one that best captures the desirable data distribution properties.To develop this intuition consider the solar system analogy for manifold learning of this Kaggle kernel:

+ +
+

Perhaps a good analogy here is that of a solar system: the surface of our planets are the manifolds we're interested in, one for each digit. Now say you're on the surface of the earth which is a 2-manifold and you start moving in a random direction (let's assume gravity doesn't exist and you can go through solid objects). If you don't understand the structure of earth you'll quickly find yourself in space or inside the earth. But if you instead move within the local earth (say spherical) coordinates you will stay on the surface and get to see all the cool stuff.

+
+ +

This analogy reminds us of the spherical surface planet model from Bronstein's paper already quoted above. This paper also describes a typical case for which manifolds are interesting: where graphs (the other example of GDL/DL on non euclidean data) are better at handling data from social or sensor networks, manifolds are good at modeling 3D objects endowed with properties like color texture in computer vision.

+ +

3. Regarding deep neural networks on manifolds

+ +

I would advise reading the geomstats associated paper, which does a great job at showing what it is and how it can be used, along with example codes (e.g. MNIST on hyperspheres manifold example code here). This library implements manifolds and associated metrics on Keras. The choice of metrics is essential to understand the point of working on manifolds: it's because you need to work on an adapted mathematical set (ie with the right properties) with an adapted distance definition (so that the measure actually means something when considering the problem you're trying to solve) that you switch to working on manifolds.

+ +

If you want to dive in the details and examples of deep learning on manifolds here are some papers:

+ + + +

4. Why Riemannian manifolds ?

+ +

TL;DR: you need a metric to do machine learning (otherwise, how could you evaluate how much you actually learned !)

+ +

Still based on Bronstein's paper:

+ +
+

On each tangent space, we define an inner product [...]. This inner + product is called a Riemannian metric in differential geometry and + allows performing local measurements of angles, distances, and + volumes. A manifold equipped with a metric is called a Riemannian + manifold.

+
+ +

5. What's the relation between a Riemannian manifold and a Euclidean space ?

+ +

Still based on Bronstein's paper:

+ +
+

a Riemannian manifold can be realized as a subset of a Euclidean space + (in which case it is said to be embedded in that space) by using the + structure of the Euclidean space to induce a Riemannian metric.

+
+ +

I leave the details to the paper, otherwise this answer will never end.

+ +

6. Answers to questions in comments

+ +

Will only answer once I think I've found a relatively well-argued answer, so won't answer everything at once.

+ +
    +
  • Isn't manifold learning just a way of dimensionality reduction?
  • +
+ +

I don't think so, it isn't just that. I haven't seen any dimensional reduction constraint (yet ?) in the papers I've read (cf. geomstats again).

+ +

In the hypersphere/MNIST geomstats code example, you can see the chosen manifold dimension hypersphere_dimension = 17. Since we're working with MNIST data I guess this would mean a dimension reduction in this particular case. I admit I would need to check exactly what that dimension implies on the neural net architecture, I haven't discussed my understanding of this yet.

+ +

Disclaimer

+ +

I'm still developing a more rigorous mathematical understanding of manifolds, and shall update this post to make additional necessary clarifications: exactly what can be considered as a manifold in a traditional deep learning context, why do we use the word manifold when speaking about the hidden state of auto-encoders (see the previously cited Kaggle kernel that quotes Goodfellow's book on this). All of this if the perfectly clear answer doesn't show up here before !

+",22176,,22176,,3/19/2019 8:41,3/19/2019 8:41,,,,8,,,,CC BY-SA 4.0 +11257,2,,10545,3/15/2019 14:31,,1,,"

Adam adapts its effective learning rates over time, based on gradient statistics it accumulates during training. When you change to the new training data, you want to reset that optimizer state. But Adam might not be the best choice for the second round of training - it can make big changes to the inherited weights, which prevents the transfer of previous learning. It can be good to switch to simple SGD for the second round.
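A minimal Keras sketch of that suggestion (the model, the data names and the learning rate are placeholders):

from tensorflow.keras.optimizers import SGD

# first round, on the original data
model.compile(optimizer='adam', loss='mse')
model.fit(x_old, y_old, epochs=20)

# second round, on the new data: recompiling resets the optimizer state,
# and plain SGD with a small step size is gentler on the inherited weights
model.compile(optimizer=SGD(1e-4), loss='mse')
model.fit(x_new, y_new, epochs=20)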

+",12269,,,,,3/15/2019 14:31,,,,0,,,,CC BY-SA 4.0 +11258,2,,11244,3/15/2019 14:49,,2,,"

In laymen's terms, a non-expansive operator is a function that brings points closer together or at least no further apart.

+ +

An example of a non-expansive operator is the function $f(x) = x/2$. The two numbers $0$ and $5$ are a distance of $5$ apart. The two output numbers $f(0) = 0$ and $f(5) = 2.5$ are 2.5 apart (which is smaller than $5$ apart). It is easy to see that $f$ brings everything closer together except when the two input numbers are the same: in which case, the distance between the outputs of the function at those numbers is at least no further apart than distance between the two input numbers.

+ +

$\max$ is a two-input function (or n-input, but the intuition should be clear from the 2-input case). We can think of max as a function that maps pairs of numbers $(x, y)$ to single numbers (picking whichever of $x$ and $y$ is larger).

+ +

Suppose that we chose to measure the distance between pairs using Euclidean distance, and the distance between single numbers using the Euclidean distance as well. Here's an example:

+ +

The distance between (0,0) and (3,3) is $\sqrt{3^2 + 3^2} = \sqrt{18}$. The distance between $\max(0,0) = 0$ and $\max(3, 3) = 3$ is $\sqrt{{(\max(0,0) - \max(3, 3))}^2} = \sqrt{9} = 3$.

+ +

Let's consider the general case. The Euclidean distance between the 2D points $(a, b)$ and $(c, d)$ is $\sqrt{(a-c)^2 + (b-d)^2}$. There are four cases to consider:

+ +
    +
  1. Suppose that $a\geq b$ and $c \geq d$. In this case, the distance between max(a,b) and max(c,d) is just |a-c|, which is clearly at most $\sqrt{(a-c)^2 + (b-d)^2}$.
  2. +
  3. Suppose that $a\leq b$ and $c \leq d$. In this case, the distance is |b-d|, which is also at most the original distance.
  4. +
  5. Suppose that $a\geq b$ but $c \leq d$. Then the distance is $|a-d|$. Suppose that $a > d$. Since $d \geq c$, we have $|a-d| \leq \sqrt{(a-c)^2} \leq \sqrt{(a-c)^2 + (b-d)^2}$, and a symmetric argument holds for the case $d > a$.
  6. +
  7. $a\leq b$ but $c \geq d$, we can construct an argument identical to the one for case 3 above.
  8. +
+ +

Since max is always bringing things closer together, or at least, no further apart, it is a non-expansive operator.

+",16909,,2444,,3/15/2019 15:53,3/15/2019 15:53,,,,2,,,,CC BY-SA 4.0 +11259,2,,11226,3/15/2019 16:14,,10,,"

Non-Euclidian geometry can be generally boiled down to the phrase

+
+

the shortest path between 2 points isn't necessarily a straight line.

+
+

Or, put in a way that lends itself very much to machine learning,

+
+

things that are similar to each other are not necessarily close if one uses Euclidean distance as a metric (aka the triangle inequality doesn't hold).

+
+

You mention graphs and manifolds as being non-Euclidean, but, really, the majority of problems being worked on don't have Euclidean data. Take the below images for example:

+

Clearly, 2 of the images are more similar to each other than the third one is, but if we looked at the pixels alone, the Euclidean distance between the pixel values doesn't represent this similarity.

+

+

If there was a function, $F(\text{image})$, that mapped images to a space of values where similar images produced values that were closer together, we could better understand the data, infer some statistics about the distributions, and make predictions on data we have yet to see. This is what classic techniques of image recognition have done and it's also what modern machine learning is doing. Taking data and mapping it to a space such that the triangle inequality holds.

+

Let's look at a more concrete example, some points I drew in MSPaint. On the left is some space that we are interested in where points have 2 classes (red or blue). Even though there are points that are close to each other, they may have different colors/classes. Ideally, we could have a function that converts these points to some space where we can draw a line to separate these 2 classes. In general, there would be many such lines, or hyper-planes in dimensions > 3, but the goal is to transform the data so that it will be "linearly separable".

+

+

To conclude, non-Euclidian data is everywhere.

+",4398,,2444,,7/10/2020 11:50,7/10/2020 11:50,,,,10,,,,CC BY-SA 4.0 +11261,1,11263,,3/15/2019 17:44,,2,869,"

Two words can be similar if they co-occur ""a lot"" together. They can also be similar if they have similar vectors. This similarity can be captured using cosine similarity. Let $A$ be a $n \times n$ matrix counting how often $w_i$ occurs with $w_k$ for $i,k = 1, \dots, n$. Since computing the cosine similarity between $w_i$ and $w_k$ might be expensive, we approximate $A$ using truncated SVD with $k$ components as: $$A \approx W_k \Sigma W^{T}_{k} = CD$$

+ +

where $$C = W_{k} \Sigma \\ D = W^{T}_{k}$$

+ +

Where are the cosine similarities between the words $w_i$ and $w_k$ captured? In the $C$ matrix or the $D$ matrix?

+",23220,,2444,,3/15/2019 18:06,1/3/2021 20:30,Which matrix represents the similarity between words when using SVD?,,1,0,,,,CC BY-SA 4.0 +11263,2,,11261,3/16/2019 0:17,,0,,"

You can find some material here and here, but the idea (at least in this case) is the following: consider the full SVD decomposition of the symmetric matrix $A = W \Delta W^T$. We want to calculate the cosine similarity between the $i$-th column (aka word) $a_i$ and the $j$-th column $a_j$ of $A$. Then $a_k = A e_k$, where $e_k$ is the $k$-th vector of the canonical basis of $\mathbb{R}^n$. Let's call $\cos(a_i,a_j)$ the cosine between $a_i$ and $a_j$. Then $$\cos(a_i,a_j) = \cos(Ae_i,Ae_j) = \cos(W \Delta W^T e_i,W \Delta W^T e_j) = \cos(\Delta W^T e_i,\Delta W^T e_j)$$

+

where the last equality holds because $W$ is an orthogonal matrix (and so $W$ is conformal, i.e. it preserves angles). So you can calculate the cosine similarity between the columns of $\Delta W^T$. A $k$-truncated SVD gives a well-enough approximation. In general, columns of $W \Delta$ and rows of $W$ have different meanings!
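A small NumPy sketch of that computation (A is a hypothetical symmetric co-occurrence matrix; k is whatever truncation you choose):

import numpy as np

U, s, Vt = np.linalg.svd(A)          # full SVD; for large A use a truncated/randomised SVD
k = 50
B = np.diag(s[:k]) @ Vt[:k]          # k-truncated  Delta @ W^T : one column per word

def cosine(i, j):
    bi, bj = B[:, i], B[:, j]
    return bi @ bj / (np.linalg.norm(bi) * np.linalg.norm(bj))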

+",23224,,2444,,1/3/2021 20:30,1/3/2021 20:30,,,,3,,,,CC BY-SA 4.0 +11275,2,,10540,3/16/2019 22:08,,0,,"

If you can classify words, then you can easily classify sentences. One of the interesting problems you can then solve is »what are the allowed sentence forms?«. How can you classify words? By searching for features that are common between them. These features are all possible truths that hold for a word, such as which other words it appears with in the same sentence, what their spatial relation is, and what its frequency of appearance is.

+",12251,,,,,3/16/2019 22:08,,,,0,,,,CC BY-SA 4.0 +11277,1,,,3/16/2019 22:24,,2,187,"

I was reading about the grounding problem after seeing it mentioned in another answer today. The article states that, in order to avoid the ""infinite regress"" of defining all words with other words, we must ground the meaning of some words in the ""sensorimotor.""

+ +
+

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to.

+
+ +

Obviously, this made me think of Reinforcement Learning. But I'm not exactly sure what counts as ""interaction."" Would this necessarily imply an MDP-like formulation with rewards, state transitions, etc? Or could some form of grounding be accomplished with supervised learning?

+ +

This seems like a pretty fundamental problem of AI. Does anyone know of research being done on grounding words/symbols within an RL agent?

+",22916,,22916,,3/17/2019 10:12,3/17/2019 17:43,Is Reinforcement Learning the future of Natural Language Processing?,,0,3,,,,CC BY-SA 4.0 +11279,1,,,3/17/2019 1:21,,0,74,"

For the last week, I've been looking for freelancers who are able to do this project for me, but they weren't that experienced in it, so I would like to know whether my idea is complicated or whether it is their lack of experience.

+

Scenario:

+

1. The facial recognition system will be installed on a vertical screen where a camera would be attached to it and it will be assigned on the entrance of the room.

+

2. Once a visitor comes to the entrance and looks at the screen, text on the screen would say: "Welcome! It seems like it's your first visit! Please enter your name", and then a keyboard should pop up.

+

3. The visitor would enter his first name into the keyboard, and it would be saved alongside his face in the database, and it would say thank you, {name}.

+

4. If the same visitor visits again, the system should say: " Welcome back {name}, happy to see you again"!

+",23252,,32410,,1/19/2021 16:19,1/19/2021 16:19,Facial Recognition + Database + Compare & Identify - is it complicated?,,1,0,,1/20/2021 11:28,,CC BY-SA 4.0 +11280,1,11282,,3/17/2019 8:56,,1,100,"

I'm implementing PPO myself, strictly following these steps:

+ +
    +
  1. sample transitions
  2. +
  3. randomly shuffle the sampled transitions
  4. +
  5. compute gradients and update networks using the sampled transitions
  6. +
  7. drop transitions and repeat the above steps
  8. +
+ +

I observe a strange phenomenon: randomly shuffling the transitions makes the algorithm perform significantly worse than keeping them as they are. This is very strange to me. To the best of my understanding, neural networks perform badly when the input data are correlated. To decorrelate transitions, algorithms like DQN introduce a replay buffer and randomly sample from it. But this seems not to be the same story for policy-based methods. I'm wondering why policy-based methods do not require decorrelating the input data?

+",8689,,,,,3/17/2019 10:10,Why don't we decorrelate transitions for policy-based data?,,1,0,,,,CC BY-SA 4.0 +11282,2,,11280,3/17/2019 9:56,,4,,"

We do decorrelate training experience, even for policy gradient methods. This is because decorrelation helps training data be more like IID data, which helps with the convergence of SGD-like optimizers.

+ +

The shuffling is done on line 151 of OpenAI's ""baselines"" implementation of PPO.

+ +

I'm going to guess that there's a bug somewhere in your implementation. If you're using the same truncated advantage estimation as in the PPO paper,

+ +

$$\begin{align} +&\hat{A}_t = \delta_t + (\gamma\lambda)\delta_{t+1}+\dots+(\gamma\lambda)^{T-t+1}\delta_{T-1}\\ +&\text{where}\quad\delta_t=r_t+\gamma V(s_{t+1})-V(s_t) +\end{align}$$

+ +

make sure you're not shuffling experience before computing your advantage estimates $\hat{A}_1,\dots,\hat{A}_T$. Each of these estimates is a function of several unbroken steps of experience. After you compute your advantage estimates, then create the tuples $(s_t, a_t, \hat{A}_t)$ and shuffle and sample from those. I think that's all the information you need per time step for constructing their objective function.
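In code, that ordering looks roughly like this (rewards, values, states and actions are hypothetical arrays from one unbroken rollout; values has one extra bootstrap entry at the end):

import numpy as np

def truncated_advantages(rewards, values, gamma=0.99, lam=0.95):
    T = len(rewards)
    adv, running = np.zeros(T), 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

adv = truncated_advantages(rewards, values)   # computed on the ordered trajectory
batch = list(zip(states, actions, adv))       # only now is it safe to shuffle
np.random.shuffle(batch)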

+",22916,,22916,,3/17/2019 10:10,3/17/2019 10:10,,,,0,,,,CC BY-SA 4.0 +11284,1,,,3/17/2019 11:18,,0,47,"

Floorball is a type of floor hockey. During the game, substitutions can be made.

+ +
+

The team is also allowed to change players any time in the game; usually, they change the whole team. Individual substitution happens sometimes, but it usually happens when a player is exhausted or is hurt.

+
+ +

I would like to use an RNN to predict when the next substitution will happen for a team. However, I have no pre-existing dataset to train on. Is there a way that I can start predicting without a dataset and continually improve accuracy as more games are played?

+",23258,,22916,,3/18/2019 16:45,3/18/2019 16:45,Is there a RNN that can predict the next substitute in a floorball match?,,0,4,,,,CC BY-SA 4.0 +11285,1,20646,,3/17/2019 11:56,,36,16542,"

In general, the word "latent" means "hidden" and "to embed" means "to incorporate". In machine learning, the expressions "hidden (or latent) space" and "embedding space" occur in several contexts. More specifically, an embedding can refer to a vector representation of a word. An embedding space can refer to a subspace of a bigger space, so we say that the subspace is embedded in the bigger space. The word "latent" comes up in contexts like hidden Markov models (HMMs) or auto-encoders.

+

What is the difference between these spaces? In some contexts, do these two expressions refer to the same concept?

+",2444,,2444,,12/6/2020 13:17,6/18/2021 14:25,What is the difference between latent and embedding spaces?,,5,0,,,,CC BY-SA 4.0 +11286,1,11291,,3/17/2019 12:25,,1,737,"

I am simulating a Tic-Tac-Toe game with a human opponent. The way the RL trains is through policy/value iterations for a fixed number of iterations all specified by the user. Now, whether the human player has turn 1 or turn 2 will decide the starting state (1st move by human or empty). The starting states for the 1st case can differ as the human can make 9 different moves.

+

So, my questions are:

+
    +
  • In tic-tac-toe, what is the effect of the starting state on the state and action value function?
  • +
  • Does it converge to the same stable value for all starting states?
  • +
  • Will the value functions change if the starting players are changed? (human vs RL to RL vs human)
  • +
+

NOTE: I will be enumerating all states since there are approximately 20000 states which I believe is not a big number and thus convergence should not be a problem.

+",,user9947,2444,user9947,4/8/2022 9:47,4/8/2022 9:49,"In tic-tac-toe, what is the effect of the starting state on the state and action value function?",,1,10,,,,CC BY-SA 4.0 +11287,2,,11285,3/17/2019 12:50,,10,,"

In normal layman's terms, ""latent space"" means it cannot be accessed directly; thus we have no direct control over it. We can only manipulate it indirectly, while ""embeddings"" can be obtained directly. We can use deterministic operations or transformations to convert an object into its corresponding embedding space.

+ +

There is no marked difference between these 2 terms as far as Machine Learning is concerned. If we look at this famous paper on Variational Autoencoders, we can see the words have been used interchangeably.

+ +

More specifically, I would consider the word (in the context of Machine Learning only) latent as a more general term than Embedding. Embeddings will refer to a more specific object (in context of ML), for example the embedding of $word_1$ is $embedding_1$. Whereas, we can use the term latent to describe broader terms like latent space, latent representation, latent variables (latent variables of a word is same as an embedding of a word).

+ +

After digging some more I found some what of a formal definition of Latent Variables in Deep Learning by Goodfellow:

+ +
    +
  • Latent Variables - A latent variable is a random +variable that we cannot observe directly. The component identity variable $c$ of the +mixture model provides an example. Latent variables may be related to $x$ through +the joint distribution, in this case, $P(x, c) = P(x | c)P(c)$. The distribution $P(c)$ +over the latent variable and the distribution $P(x | c)$ relating the latent variables +to the visible variables determines the shape of the distribution $P(x)$, even though +it is possible to describe $P(x)$ without reference to the latent variable.
  • +
+ +

Also a paper cited by Goodfellow while discussing embeddings has the following excerpt:

+ +
+

Following the success of user/item clustering or matrix factorization techniques in collaborative filtering to represent non-trivial similarities between the connectivity patterns of entities in single relational data, most existing methods for multi-relational data have been designed within the framework of relational learning from latent attributes, as pointed out by; that is, by learning and + operating on latent representations (or embeddings) of the constituents (entities and relationships).

+
+ +

So clearly these are somewhat interchangeable terms.

+ +

But my interpretation would be that embeddings are helpful more explicitly (more visible, latent variables are meant to be hidden), that is we can construct a new data-set from it and use various ML methods on it, whereas latent variables are something not useful explicitly (it is a part of a bigger problem we are trying to solve).

+ +

EDIT: In the context of HMM's the term better suitable is hidden state and not latent space. Thus, in a HMM (from Wiki) The adjective hidden refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a hidden Markov model even if these parameters are known exactly.

+",,user9947,,user9947,5/24/2019 12:19,5/24/2019 12:19,,,,2,,,,CC BY-SA 4.0 +11289,2,,11279,3/17/2019 15:08,,1,,"

Depending on the level of accuracy/speed you want your system to have, you could try using an existing Python API. This is the first thing I found on Google, and the README looks suited to your project. Otherwise, perhaps edit your question with more precise specifications of what you want.

+",22176,,,,,3/17/2019 15:08,,,,0,,,,CC BY-SA 4.0 +11290,1,,,3/17/2019 15:47,,4,73,"

I am developing an algorithm that, at a certain moment, must explore an exponential number of objects derived from a graph:

+ +
for o in my_graph.getDerivedObjects():
+  if hasPropertyX(o):
+    process(o)
+    break;
+
+ +

If one of the derived objects has property $X$, then the algorithm processes it and stops. The theory ensures that at least one of these derived objects has property $X$. Now, I strongly suspect that there is a strong correlation between some topological aspects of the graph and which derived objects actually have property $X$. I want to predict some of the derived objects that have property $X$ using Machine Learning. So, the idea is:

+ +
    +
  1. Predict a derived object $o$ that supposedly has property $X$ - or maybe predict $n$ of them for some number $n$.

  2. +
  3. If any of them is useful, I use them. If not, I run the exponential algorithm.

  4. +
+ +

Of course, this isn't an optimization of the worst-case complexity of the algorithm. I suppose I should also develop some statistical tests in order to show that the prediction algorithm actually works.

+ +

Is this type of optimization common? Could you please provide some examples? Pointers to the literature on the subject would also be greatly appreciated.

+",22365,,2444,,8/18/2019 22:26,8/18/2019 22:26,Can machine learning be used to improve the average case complexity of an algorithm?,,1,0,,,,CC BY-SA 4.0 +11291,2,,11286,3/17/2019 16:48,,1,,"

My understanding - from comments on the question - is that you are looking to train a Reinforcement Learning agent on the game of Tic Tac Toe (perhaps just in theory), where the agent should learn to play against a "human" opponent. In practice you may want a model of a human opponent.

+

In this case, the RL agent will be presented with a board state, it will take an action (to put its mark on an empty place in the grid) and either:

+
    +
  • Win immediately, receiving a positive reward. It is common to use +1 in a game like Tic Tac Toe, so I will assume that later.

    +
  • +
  • Lose on opponent's turn, receiving a negative reward (assumed -1 later in the answer), as opponent makes a move that causes it to win. This is effectively "immediately" in terms of time steps, the agent does not get to act afterwards.

    +
  • +
  • Receive zero reward, and a new board state that includes the opponent's move

    +
  • +
  • Receive zero reward, and the game ends in a draw

    +
  • +
+

In all cases, the opponent is considered part of the environment. That makes the opponent behaviour critical to the value function and choice of optimal play. Training versus different opponents can result in very different state values.

+

For training to be stable, the opponent should behave with the same probability of action choices for each interim state that it observes. That includes it behaving deterministically, even optimally, purely randomly or anything in-between provided the probability distribution is fixed.

+

With the above context, it is possible to give sound answers to your questions:

+
+

In tic-tac-toe, what is the effect of the starting state on the state and action value function?

+
+

Each state of the board, or each state/action pair if you want to track action values, should converge to a value, depending on the agent's estimated, expected result at the end of the episode. As there is only a single reward possible at the end of each game, this will vary between -1 and +1. If either the agent or the opponent can make mistakes at random, then non-zero values between -1 and +1 are possible.

+
+

Does it converge to the same stable value for all starting states?

+
+

That depends on behaviour of the opponent. In the scenario you are working with, the agent may not learn to play optimally in general, instead it will learn to gain optimal results against the supplied opponent. If the opponent can make mistakes, then moves which take advantage of that fact will have higher values.

+

Without a detailed description of the opponent it is not possible to make many statements about the actual state values and action values.

+

Against a perfect opponent, with the RL going second, it should converge to state values which are all zero and action values which are all zero or -1 for moves which would be mistakes.

+

Against a completely random opponent, I would have to run it to be sure, but I would expect state values to have 3 different values, all slightly positive, depending if opponent chose middle, edge or corner cases - each of these would have slightly different chance of leading to a win for the agent going forward.

+
+

Will the value functions change if the starting players are changed?

+
+

Due to the turn-based nature of the game, all the states observed would be different depending on who was the first player. When the agent goes first it will get to score the empty grid and action values of any position it would like to make a first mark in - it gets to see the state after turn 0 on time step 1, after turn 2 on time step 2, after turn 4 on time step 3 etc. When the agent goes second it will get to see and evaluate the output of the other player's turns - turn 1 at t=1, turn 3 at t=2, turn 5 at t=3 etc.

+

That means the sets of states and state/action pairs for each case (RL first or RL second) are disjoint, and you cannot ask if one agent has the same value for a specific state as the other agent - it simply won't know about the other agent's values.

+

If you train a single agent, sometimes starting first, sometimes starting second, the two sets of values never interact with each other directly - with an enumerated table, as per the question, they do not interact at all, but if the agent uses function approximation such as neural networks, then they can affect each other.

+",1847,,2444,,4/8/2022 9:49,4/8/2022 9:49,,,,2,,,,CC BY-SA 4.0 +11293,1,11294,,3/17/2019 19:05,,1,354,"

I am trying to understand how weights are actually obtained. What generates them in a neural network? What is the algorithm that gives them certain values?

+",23262,,22916,,3/17/2019 20:08,3/17/2019 21:44,What determines the values of weights in a neural network?,,2,0,,,,CC BY-SA 4.0 +11294,2,,11293,3/17/2019 19:48,,5,,"

Typically, weights are randomly initialized. Then, as the model is optimized for its given task, those weights are steadily made ""better"" as determined by the network's loss function. This is also referred to as ""training"" the neural network.

+ +

By far the most popular way of updating weights in a neural net is the backpropagation algorithm, most simply with stochastic gradient descent (SGD). Essentially, the algorithm determines how much each individual weight contributed to the network's loss. It then updates that weight in the direction that would reduce the loss.

+ +
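As a rough sketch of what a single SGD update does (toy numbers, not part of any real training loop), each weight is nudged in the direction that reduces the loss:

import numpy as np

# one SGD step: w <- w - lr * dLoss/dw
# the gradient values here are stand-ins; backpropagation is what computes
# them efficiently for every weight in a real network
weights = np.array([0.5, -1.2, 0.3])       # randomly initialized weights
gradients = np.array([0.1, -0.4, 0.05])    # dLoss/dw for each weight
learning_rate = 0.01

weights = weights - learning_rate * gradients
print(weights)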

I recommend going through Michael Nielsen's online book to learn the basics.

+",22916,,22916,,3/17/2019 19:55,3/17/2019 19:55,,,,0,,,,CC BY-SA 4.0 +11295,2,,11293,3/17/2019 21:44,,0,,"

I agree with @PhilipRaeisghasem: in most architectures, weights are initialized in a random manner. However, some research papers suggest applying a random normal distribution initialization to the weights in the case of Convolutional Neural Networks (for computer vision).

+",20934,,,,,3/17/2019 21:44,,,,2,,,,CC BY-SA 4.0 +11296,2,,10734,3/17/2019 22:31,,3,,"

I think you would be interested in Neural-Symbolic Learning and Reasoning, a recent survey on the intersection between connectionist models (e.g. neural nets) and symbolic reasoning. It's a long paper, but it has a lot in it that is relevant to your question, if you're willing to dig. For example, it talks about learning networks that implement boolean logic.

+ +

+ +
+

This network searches for satisfying solutions for the weighted conjunctive normal form (CNF): $(\lnot X \lor \lnot Y \lor Z) \land (X \lor Y)$.

+
+ +

The closest thing I could find to the ""functionals"" you mentioned was a discussion on learning ""relations"" with neural networks. For this, it cited Neural-Symbolic Cognitive Reasoning, a book also about the intersection between neural networks and symbolic reasoning. In Chapter 10, it gives the example of teaching a neural network to learn the relation of ""grandparent.""

+ +
+

Our goal is to learn a description of grandparent from examples of the father, mother, and grandparent relations such as father(charles, william), mother(elisabeth, charles), and grandparent(elisabeth, william).

+
+ +

I could imagine something like this being done with the example inputs being image features from a CNN. Perhaps then you could start learning functionals like ""on the shirt.""

+",22916,,,,,3/17/2019 22:31,,,,1,,,,CC BY-SA 4.0 +11297,2,,10403,3/17/2019 23:08,,6,,"

I can't speak for individual researchers, but I can guess why the community as a whole hasn't adopted this activation function.

+ +

ReLU is just so incredibly cheap. This benefit continues to grow as networks grow deeper. Also, they work reasonably well. As pointed out in Searching for Activation Functions,

+ +
+

the performance improvements of the other activation functions tend to be inconsistent across different models and datasets.

+
+ +

Even if a new activation function did provide a meager improvement in performance across the board, I wouldn't be surprised if ReLU were still commonly used. It's the default for a lot of machine learning software already.

+ +

Also, research isn't ordinarily about eking out one more percentage point in accuracy on a specific task. If I were entering in a competition, I might experiment with activation functions. But even then I'd rather use ReLU and save a little time while prototyping my architecture.

+ +

As pointed out by @DuttaA in comments, softsign could potentially replace sigmoid and tanh in situations where a bounded output is desired. I haven't seen anyone compare them before, but softsign would at least be much faster. I'm guessing this replacement hasn't happened because of tradition and exposure. Not because of softsign's lack of infinite derivatives. I don't know if this happening would make softsign ""widely used"", but it would be something.
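For reference, softsign is $f(x) = \frac{x}{1 + |x|}$, while the logistic sigmoid is $\frac{1}{1 + e^{-x}}$; avoiding the exponential is a large part of why softsign is cheaper to evaluate.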

+",22916,,22916,,3/18/2019 9:32,3/18/2019 9:32,,,,3,,,,CC BY-SA 4.0 +11298,1,,,3/18/2019 2:26,,1,116,"

I am attempting to train a network to do something I thought would be a relatively simple case to learn with: identify whether the back of a scanned vintage postcard has one of 'no postage stamp', a '1 cent stamp', or a '2 cent stamp.' The images are 250px by about 150px, RGB color, and there are about two thousand of them. Ballpark 75% of them are no-stamp, 20% 1-cent, and 10% 2-cent.

+ +

When I attempt to train the network it seems like it is starting at 70 +/- 1 % accurate and hovers in that range for 50 epochs, never improving. I'm not sure I'm reading the metrics correctly, though, as this doesn't seem quite right.

+ +

I set this up by following the tutorial on the Keras blog: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

+ +

I haven't implemented the latter part of the tutorial, where a pre-trained network is used, because I haven't found one that seems like it would be a similar problem.

+ +

My training and validation sets are here: https://drive.google.com/open?id=1-TxEKVVvP7RuFC7kFgH7Wt5A8z8QGTR3

+ +

And my Google Colab Jupyter notebook is here: https://colab.research.google.com/drive/1UuKDF1wDwYlXszB2ahIrygRnfcs2D_sD

+",23264,,,,,3/18/2019 23:04,Why doesn't my image classification network get better with training?,,1,3,,,,CC BY-SA 4.0 +11302,1,11305,,3/18/2019 8:47,,1,57,"

For the case of crowd density estimation using a CNN, using datasets like ShanghaiTech or UCF, why haven't there been attempts to tackle this type of task as a classification problem? All current papers I've seen are related to regression, not classification.

+ +

For example, if I have the crowd images labeled based on their crowd density (low, moderate, high), and I'm not interested in the count, but the density class (low, moderate, high), can't I train the network to classify the data based on these classes as a classification network?

+",23268,,2444,,3/18/2019 16:39,3/18/2019 16:39,Why don't we perform classification of crowd density?,,1,0,,,,CC BY-SA 4.0 +11305,2,,11302,3/18/2019 9:40,,2,,"
+

For example, if I have the crowd images labeled based on their crowd density (low, moderate, high), and I'm not interested in the count, but the density class (low, moderate, high), can't I train the network to classify the data based on these classes as a classification network?

+
+ +

Yes you can, all you need is enough correctly labelled training data.

+ +

A good rule of thumb is if a human expert can assign the correct label from an image (and purely from the image, not using extra information) then it is a realistic goal to train a CNN to perform the same labelling.

+ +
+

why haven't there been attempts to tackle this type of task as a classification problem?

+
+ +

Probably because there are no natural, and likely no widely accepted, classes in this case (I may be wrong, maybe some international society has defined classes you could use). If you use a regression, you can map it to a particular problem case - e.g. sending an alert to someone responsible for traffic and safety when crowd density hits some threshold - by setting numerical boundaries to your classes. Using classification and mapping back the other way is harder.

+",1847,,2444,,3/18/2019 16:39,3/18/2019 16:39,,,,2,,,,CC BY-SA 4.0 +11306,2,,5458,3/18/2019 12:41,,3,,"

I just want to provide this intuition

+ +
    +
  • this NN consists of a 2-step detection pipeline (the region proposal, and regression + classification in parallel) exploring a certain range of scales and aspect ratios for proposals

  • +
  • as the proposed region is rectangular and the objects of interest do not have a strictly rectangular appearance, but a mostly rectangular one (with different ARs), within the resulting detection bbox areas some pixels will belong to the relevant object appearance (e.g. pixels of a pedestrian, car, …) while other pixels in that bbox won't be related to that semantic

  • +
  • as this is not semantic segmentation, we cannot know which pixel is actually relevant; however, the objectness and backgroundness can provide a rough measure of this in an aggregated way

  • +
+ +

Let's assume at some point in your processing pipeline you have 2 regions containing the same car but at different scales: ideally both of them should be associated with good confidence to CarID, but it's clear one will contain many more background pixels than the other, and you'd like this to be represented as an additional measure

+",1963,,,,,3/18/2019 12:41,,,,1,,,,CC BY-SA 4.0 +11307,2,,11196,3/18/2019 12:47,,5,,"

I recommend Animal Learning and Cognition: An Introduction by John Pearce.

+ +
    +
  • As an introduction to the field, it's comprehensive. It covers all the topics you mentioned and more. Here is a free preview of the book.
  • +
  • It's well-regarded, having over 500 citations on Google Scholar.
  • +
  • It's relatively modern. Published in 2008, it discusses experimental findings that you won't be able to find in some of the other classics published in the 70s or 80s.
  • +
+ +

Although it does not specifically focus on mammals, all of the ideas extend to mammals.

+",22916,,,,,3/18/2019 12:47,,,,2,,,,CC BY-SA 4.0 +11308,2,,11219,3/18/2019 14:12,,-1,,"

The approximator can be any artificial neural network architecture, including deep fully-connected networks.

+",23276,,23276,,3/22/2019 9:10,3/22/2019 9:10,,,,0,,,,CC BY-SA 4.0 +11309,1,,,3/18/2019 16:08,,2,33,"

I've got a classification problem on images. I have 10 classes, and when I fine-tuned my model on them (I tried VGG, Xception, ResNet, etc.) I get approximately 83% validation accuracy.

+ +

I was wondering if building a lot of binary models, each with 1 class represented and everything else as 'other', and then using them to classify my images would be good and efficient? (I obtain more than 90% validation accuracy for each class doing this.)

+ +

Apart from memory consumption and training time, does this method have drawbacks?

+",23107,,,,,3/18/2019 16:08,Is making lot of 1 versus other model efficient?,,0,1,,,,CC BY-SA 4.0 +11310,1,,,3/18/2019 19:00,,2,93,"

Consider a shop owner who, for one week, has to buy from a different supplier with several different brands. Another week a brand is removed from or added to the market. Yet another week, the manager decides to skip three assortments of fruit soda and exchange them with a different selection of three assortments of fruit soda, and so on and so forth.

+ +

Is dealing with such issues possible with existing public implementations of reinforcement learning?

+ +

The only research that I saw dealing with dynamically changing stuff like this is the DeepMind FTW bot playing Quake capture the flag. It deals with changing layouts to the map being played, but the implementation is not public and it doesn't resemble the inventory management situation I outlined.

+",21724,,2444,,3/18/2019 20:23,3/18/2019 20:23,Reinforcement learning for inventory management with dynamic changes to available products,,0,0,,,,CC BY-SA 4.0 +11311,2,,11298,3/18/2019 21:52,,1,,"

The answer (as detailed quite thoroughly here and here) is that specifying

+ +
model.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics=['accuracy'])
+
+ +

causes keras to guess, incorrectly, that because I am using binary_crossentropy for the loss function, I would want to use binary_accuracy as the way of reporting the accuracy metrics. Apparently, one should specify that one wants the categorical_accuracy metric if one, as I do, has more than two classes.
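A minimal sketch of that change (only the metric is different; everything else stays as in the snippet above):

model.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics=['categorical_accuracy'])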

+",23264,,2444,,3/18/2019 23:04,3/18/2019 23:04,,,,0,,,,CC BY-SA 4.0 +11313,1,11319,,3/18/2019 23:16,,6,578,"

I have read a lot about RL recently. As far as I understood, most RL applications have much more states than there are actions to choose from.

+ +

I am thinking about using RL for a problem where I have got a lot of actions to choose from, but only very few states.

+ +

To give a handy example: the algo should render (for whatever reason) a sentence with three words. I always want to have a sentence with three words, but I have many words to choose from. After choosing the words, I get some sort of reward.

+ +

Are RL algorithms an efficient way to solve this?

+ +

I am thinking about using policy gradients with an ε-greedy algorithm to explore a lot of the possible actions before exploiting the knowledge gained.

+",23288,,2444,,3/18/2019 23:24,3/19/2019 8:59,Reinforcement Learning with more actions than states,,1,3,,,,CC BY-SA 4.0 +11314,5,,,3/18/2019 23:30,,0,,"

For more info, see this tutorial: https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html

+",-1,,1641,,5/2/2019 19:22,5/2/2019 19:22,,,,0,,,,CC BY-SA 4.0 +11315,4,,,3/18/2019 23:30,,0,,"For questions related to reinforcement learning algorithms often referred to as ""policy gradients"" (or ""policy gradient algorithms""), which attempt to directly optimise a parameterised policy (without first attempting to estimate value functions) using gradients of an objective function with respect to the policy's parameters.",2444,,1641,,5/2/2019 19:23,5/2/2019 19:23,,,,0,,,,CC BY-SA 4.0 +11317,1,,,3/19/2019 7:46,,1,214,"

This is a simple version of NIM: Two players alternately remove one, two or three coins from a stack initially containing 5 coins. The player who picks up the last coin loses.

+ +

What does alpha-beta pruning look like on the game tree for this game?

+",23298,,1847,,3/19/2019 9:52,3/19/2019 9:52,Understanding alpha-beta pruning for simplified NIM,,0,3,,,,CC BY-SA 4.0 +11318,1,,,3/19/2019 8:30,,0,124,"

I am writing a field report on AI. I was wondering what the technological challenges are that AI is facing today. I have written the following so far.

+
    +
  • AI needs common sense like a human
  • +
  • AI needs curiosity
  • +
  • AI needs to understand cause and effect
  • +
  • AI needs to be creative
  • +
+

Are there any other hardware-tech-related obstacles?

+",23300,,2444,,5/14/2022 21:15,5/14/2022 21:15,What are the technological challenges that AI faces today?,,1,2,,12/9/2021 21:39,,CC BY-SA 4.0 +11319,2,,11313,3/19/2019 8:32,,3,,"
+

As far as I understood, most RL applications have much more states than there are actions to choose from.

+
+ +

Yes, this is quite common, but in no way required by the underlying theory of Markov Decision Processes (MDPs). The most extreme version of the opposite thing - with one state (or effectively no state, as state is not relevant) - is the k-armed bandit problem, where an agent tries to find a single best long-term action in general from a selection of actions. These problems typically would not use RL algorithms such as Q-learning or policy-gradients. However, that is partly because they are described with different goals in mind (e.g. minimising ""regret"" or simply gaining as much reward as possible during the learning process), and RL algorithms will work to solve them, albeit less efficiently than algorithms designed to work on bandit problems directly.

+ +
+

I am thinking about using RL for a problem where I have got a lot of actions to choose from, but only very few states.

+
+ +

That should work, provided your problem is still an MDP. That means, for instance, that the state evolves according to rules depending on which action was taken in which starting state. If the state evolution is instead arbitrary or random, then you may have a contextual bandit problem.

+ +

There is an important difference here between:

+ +
    +
  • a large number of entirely different actions, each with different results, which need enumeration and have to be explored separately
  • +
+ +

and

+ +
    +
  • a large action space due to measurable values which are part of an action, such as how much force to apply in a motor
  • +
+ +

The former will require lots of exploration, since any specific combination of action and state could be the ideal. With the latter, you can use the fact that numerical values that are similar will often give similar results, which will make generalisation via function approximation (e.g. neural networks) work efficiently in your favour.

+ +
+

The algo should render (for whatever reason) a sentence with three words. I always want to have a sentence with three words, but I have many words to choose from. After choosing the words, I get some sort of reward

+
+ +

This seems more like the first bullet-point above, although that may depend if a natural language model could be applied for example. E.g. if ""This is good"" and ""This is great"" would produce similar rewards in a specific state, then there is maybe some benefit to generalisation, although I am not quite sure where you would fit this knowledge - possibly in a generator for a sentence vector as the ""raw"" action and then have a LSTM-based language model produce the actual action from that vector, similar to seq2seq translation models.

+ +
+

Are RL algorithms an efficient way to solve this?

+
+ +

Yes, but whether or not it is the most efficient will depend on other factors, such as:

+ +
    +
  • Whether the environment is stochastic or deterministic with regard to both reward and state progression.

  • +
  • Whether state progression is key to obtaining the best rewards in the long term (e.g. there is some ""goal"" or ""best"" state that can only be reached by a certain route).

  • +
  • What the actual size of the MDP is $|\mathcal{S}| \times |\mathcal{A}|$. Small MDPs can be fully enumerated, allowing you to estimate action values in a simple table. If you have 10 states and 1,000,000 discrete actions, with no real pattern of actions mapping to results, then a big 10 million entry table will actually be reasonably efficient.

  • +
+ +

Competitive algorithms to RL here might be global search and optimisation ones, such as genetic algorithms. The more arbitrary and deterministic your environment is, the more likely it is that an exhaustive search will find your optimal policy faster than RL.

+ +
+

I am thinking about using policy gradients with an ε-greedy algorithm to explore a lot of the possible actions before exploiting the knowledge gained.

+
+ +

This should be fine. Exploration here is definitely important, but finding the sweet spot for the right amount of it will be hard, and depend on other traits of the environment.

+ +

You may want to use something like upper confidence bound action selection or simply optimistic initial values, in order to ensure exploration does not miss certain actions. An epsilon greedy approach will miss a certain fraction of actions over time, and the expectation of that fraction grows smaller progressively more slowly, so it may be possible to miss an important action for a long time if you rely on being able to randomly select it.

+ +
+

To give a handy example: The algo should render (for whatever reason) a sentence with three words. I always want to have a sentence with three words, but I have many words to choose from. After choosing the words, I get some sort of reward.

+
+ +

I would consider modelling this as sequences of 3 actions, each of which chooses a word, with the state being a start token (whatever the state you already have) plus the sentence so far, and on every 3 words the environment is consulted to reset the state to the next start token and to gather a reward (rewards for interim steps would be zero).

+ +

Doing this immediately makes the state space much larger than the action space, as your state includes history of up to two actions. If you had 10 different start states, and 100 word choices, then your state space would be 101,010 and action space 100.
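To see where that number comes from: $10 + 10 \times 100 + 10 \times 100^2 = 10 + 1{,}000 + 100{,}000 = 101{,}010$ states (the start tokens, plus all one-word and two-word partial sentences), while the action space stays at the 100 word choices.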

+ +

This will fit available designs of RL algorithms, and allow for learning some internal language modelling if it is relevant. It will reduce your need to model sentence construction outside of the agent. Most importantly, if ""good"" or ""bad"" sentences tend to start or end with certain words, and you use function approximation, then the algorithm may discover combinations more efficiently than iterating over all sentences as if they were completely independent.

+",1847,,1847,,3/19/2019 8:59,3/19/2019 8:59,,,,0,,,,CC BY-SA 4.0 +11323,1,,,3/19/2019 12:28,,0,26,"

I'm stuck on this problem and haven't found a sound solution to it. It's been about 20 days now. I have a dataset that looks like this:

+ +
X=image
+
+Y1= current_zoom (0,25,50,75)
+
+Y2= predicted_zoom (0,25,50,75)
+
+ +

y1 has an equal number of images for all classes. Also, I will know X and y1 when I test the model.

+ +

y2 will have variation because it contains the predicted zoom level.

+ +

I tried to train an MTL model with two outputs - y1 and y2. Now, y2 overfits (still not sure why, but my best guess is class imbalance), and y1 accuracy is around 0.99 in validation. The thing is, when I deploy this to production, I'll always have the current zoom (y1) with the image, so I wanted to incorporate this into my model. First, I tried a model with two inputs and one output, but the loss was too high. Here, I concatenated y1 to the output of the last conv layer, before it is flattened and goes to the dense layers. The second thing I tried was to concatenate y1 after flattening the output of the last conv layer. Neither worked.

+ +

Any ideas on how I can work with data like this?

+ +

Models used: resnet-18, vgg, alexnet

+ +

size of data: approx 7000 images in total.

+",15633,,,,,3/19/2019 12:28,how to work with multi-labels or two inputs and a output,,0,2,,,,CC BY-SA 4.0 +11324,1,,,3/19/2019 15:50,,0,45,"

Let's say I have a number of videos, and I want to train an SSD/YOLO (or FRCNN) to detect objects. In the case of a large number of videos, there will be a lot of frames extracted and converted to images. Can you take, for example, only every fifth frame and thus lower the amount of memory required? If the frames contain similar information, can we skip some without impacting the results? This is mainly to train faster.

+",23311,,,,,3/19/2019 15:50,Consecutive frames can be discarded when training an SSD/YOLO?,,0,2,,,,CC BY-SA 4.0 +11325,1,,,3/19/2019 15:59,,2,56,"

What if there are multiple goals? For example, let's consider the bit-flipping environment as described in the paper HER with one small change: now, the goal is not some specific configuration, but let's say for the last $m$ bits (e.g. $m=2$), I do not really care if there is $1$ or $0$.

+

In the paper, there is section 3.2 multi-goal RL, where they mention an example with two-dimensional coordinates ($x$ and $y$), but they are interested only in the $x$ coordinate, so they only use the $x$ coordinate as a goal.

+

Applying this strategy to my example would result in cutting the last $m$ bits from the goal and only using the other bits. Is this logic correct?

+

Another approach I could think of would be to train with all possible goal configurations, as there are not many in my case. But this seems impractical as the number of goal configurations grows.

+",22162,,2444,,11/20/2020 18:44,11/20/2020 18:44,How does Hindsight Experience Replay cope with multiple goals?,,0,0,,,,CC BY-SA 4.0 +11326,1,11358,,3/19/2019 17:03,,1,243,"

I have a 2D plane, with a fixed height and width of 10M. The plane has an agent (or robot) at the point $(1, 2.2)$, and an electric outlet at the point $(8.2, 9.1)$. The plane has a series of obstacles.

+

+

Is there an algorithm to find the shortest path between the agent and the goal?

+

And what if the agent has a fixed wingspan? For example, if the space between O and N is smaller than the agent, so that the agent cannot cross?

+",22082,,2444,,1/23/2021 0:56,1/23/2021 0:56,How can I calculate the shortest path between two 2d vector points in an environment with obstacles?,,1,0,,,,CC BY-SA 4.0 +11327,1,,,3/19/2019 19:09,,3,47,"

Nowadays, there is too much data for humans to work on alone, and it is very normal for data analysts to use AI techniques to treat and process these data so it can lead to a faster and more accurate result. But many data analysts and decision-makers still don't trust AI methods or techniques and are reluctant to use them. How can we encourage them to accept or prefer these AI solutions?

+ +

For example, if AI gives advice to solve a problem, then decision-makers must trust the results and data analysts must trust the mechanism, so that decision-makers can be confident in the AI's work and data analysts can concentrate their activities on added value.

+ +

How can we encourage data analysts and decision-makers to adopt AI?

+",23315,,2444,,12/31/2021 10:33,12/31/2021 10:33,How can we encourage data analysts and decision-makers to adopt AI?,,1,0,,,,CC BY-SA 4.0 +11328,1,,,3/19/2019 19:25,,1,108,"

Suppose we want to predict context words $w_{i-h}, \dots, w_{i+h}$ given a target word $w_i$ for a window size $h$ around the target word $w_i$. We can represent this as: $$p(w_{i-h}, \dots, w_{i+h}|w_i) = \prod_{-h \leq k \leq h, \ k \neq 0} p(w_{i+k}|w_i)$$ where we model the probabilities of a word $u$ given another word $v$ as $$p(u|v) = \frac{\exp(\left<\phi_u, \theta_v \right>)}{\sum_{u' \in W} \exp(\left<\phi_{u'}, \theta_v \right>)}$$ where $\phi_u, \theta_v$ are some vector representations for words $u$ and $v$ respectively and $\left<\phi_u, \theta_v \right>$ is the dot product between these vector representations (which represents some sort of similarity between the words) and $W$ is a matrix of all the words.

+ +

In Skip-Gram Negative Sampling, we want to learn the embeddings $\phi_u, \theta_v$ that maximize the following: $$\sum_{u \in W} \sum_{v \in W} n_{uv} \log \sigma(\left<\phi_u, \theta_v \right>) +k \mathbb{E}_{\bar{v}} \log \sigma(-\left<\phi_u, \theta_{\bar{v}} \right>)$$

+ +
+

Question. How exactly does this work? For example, suppose $k=5$, the target word $w_i$ is $\text{apple}$ and we want to find + $p(\text{pie}| \text{apple})$. Let $n_{uv} = 10$ (number of times pie + co-occurs with apple). Then we sample $5$ random words $\bar{v}$ that + did not occur with $\text{apple}$ and whichever term in the sum is + bigger is the one we predict? For example, if the first term in the + sum is larger than the second term then we would predict that + $p(\text{pie}| \text{apple}) \approx 1$? Otherwise we predict that + $p(\text{pie}| \text{apple}) \approx 0$? Is this the correct + intuition?

+
+ +

Source. Here at around the 10:05 mark.

+",23220,,2444,,4/16/2019 22:23,4/16/2019 22:23,Skip-Gram Model Training,,1,0,,,,CC BY-SA 4.0 +11330,2,,11327,3/19/2019 19:50,,2,,"

Not all of the mistrust aimed at AI systems is unjustified, particularly when it comes to neural networks and other such systems that rely on large training data sets. There are a number of high profile cases, facial recognition being one that has often (understandably) received a lot of flak, where improperly configured training data has resulted in skewed and questionable results.

+ +

If you want to foster more trust in the systems, it will require better tools for analyzing how they are approaching problems and reaching decisions, as well as determining if there are holes in the training data. It will require a community working in the field that gives a lot more thought and care to how they approach their training data and what unintended biases they may be introducing by forgetting something than has often been displayed presently.

+ +

Of course, some people are just distrustful of new technology, but I think it's more interesting to address the more legitimate concerns.

+",15114,,,,,3/19/2019 19:50,,,,2,,,,CC BY-SA 4.0 +11331,2,,11318,3/19/2019 20:46,,2,,"

Computational Creativity is not an unassailable challenge (depending on who you talk to;) Philosophers have claimed algorithms can't be creative, but Marcel Duchamp, one of the most significant artists in modernity, famously stated that:

+ +
+

""All artists are not chess players, but all chess players are artists""

+
+ +

This would seem to have been validated by commentators referring to move 37 in game 2 of Lee Sedol's match with AlphaGo as ""beautiful"" (In the game of Go, aesthetics are considered in regard to strategy, not just outcomes.) The takeaway is it was a choice humans would never have considered, because the structure of our brains is different, and thus our approach to creativity different, than automata.

+ +

Current algorithmic creativity is a function of Monte Carlo methods, which utilize randomness, with Monte Carlo Tree Search as a major method. MCTS has great utility in intractable models such as non-trivial combinatorial games, which produce complexity akin to nature, so it's not surprising it's slowly extending to real world applications.

+ +

The main issue with computational creativity is that it is still not as efficient as human creativity, requiring a great deal of processing power for non-trivial problems. (This was why it took so long for a computer to beat the best human at Chess--human insight seems rooted in semantics/understanding.)

+ +

Procedural content generation is an area of research that is steadily progressing, and includes games, music and visual art.

+ +
+ +
    +
  • Algorithmic Bias is the most pressing issue facing the field of AI, and Machine Learning methods specifically, which are statistical. (If the dataset is incorrect, incomplete or biased, the output will be biased.)
  • +
+",1671,,1671,,3/19/2019 21:12,3/19/2019 21:12,,,,2,,,,CC BY-SA 4.0 +11333,1,11441,,3/19/2019 21:18,,3,67,"

An agent can have reasoning skills (prediction, taking calculated guesses, etc.), and those skills can help this agent's reinforcement learning. Of course, reinforcement learning itself can help to develop reasoning skills. Is there research that explores this impact of reasoning and consciousness on the effectiveness of reinforcement learning? Or do people just sit and wait for such skills to emerge during reinforcement learning?

+",8332,,,,,3/25/2019 13:18,How agent's reasoning skills can improve its reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +11335,1,,,3/20/2019 1:18,,1,49,"

I would like to use a 3D convolutional network on a 2000x2000x2000 volume for segmentation. I know I can break the volume into chunks that can fit in VRAM, but I was wondering if there was a way to analyze the entire 3D volume at once.

+",23323,,2444,,5/7/2022 20:35,5/7/2022 20:35,Is it possible to analyse a very large 3D input volume at once with a 3D CNN?,,0,1,,,,CC BY-SA 4.0 +11336,1,,,3/20/2019 2:19,,1,589,"

If we have classified 1000 people's faces, how do we ensure the network tells us when it encounters a new person?

+",23324,,,,,1/8/2021 11:06,How do we classify an unrecognised face in face recognition?,,2,1,,,,CC BY-SA 4.0 +11337,1,,,3/20/2019 4:51,,1,797,"

I am writing my first LSTM network and I would really appreciate it if someone could tell me if it is right (the loss seems to go down very slowly, and before playing around with hyperparameters I want to make sure that the code is actually doing what I want). The code is meant to go through some time series and label each point according to some categories. In the version I am putting here there are just 2 categories: 0 if the value of the point is 1, and 1 otherwise (I know it's a bit weird, but I didn't choose the labels). So this is the code:

+ +
import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from torch.autograd import Variable
+import torch.optim as optim
+import numpy as np
+from fastai.learner import *
+
+torch.manual_seed(1)
+torch.cuda.set_device(0)
+
+bs = 2
+
+x_trn = torch.tensor([[1.0000, 1.0000],
+        [1.0000, 0.9870],
+        [0.9962, 0.9848],
+        [1.0000, 1.0000]]).cuda()
+
+y_trn = torch.tensor([[0, 0],
+        [0, 1],
+        [1, 1],
+        [0, 0]]).cuda()
+
+n_hidden = 5
+n_classes = 2
+
+class TESS_LSTM(nn.Module):
+    def __init__(self, nl):
+        super().__init__()
+        self.nl = nl
+        self.rnn = nn.LSTM(1, n_hidden, nl)
+        self.l_out = nn.Linear(n_hidden, n_classes)
+        self.init_hidden(bs)
+
+    def forward(self, input):
+        outp,h = self.rnn(input.view(len(input), bs, -1), self.h)
+        return F.log_softmax(self.l_out(outp),dim=1)
+
+    def init_hidden(self, bs):
+        self.h = (V(torch.zeros(self.nl, bs, n_hidden)),
+                  V(torch.zeros(self.nl, bs, n_hidden)))
+
+model = TESS_LSTM(1).cuda()
+
+loss_function = nn.NLLLoss()
+
+optimizer = optim.Adam(model.parameters(), lr=0.01)
+
+for epoch in range(10000):  
+    model.zero_grad()
+    tag_scores = model(x_trn)
+    loss = loss_function(tag_scores.reshape(4*bs,n_classes), y_trn.reshape(4*bs))
+    loss.backward()
+    optimizer.step()
+
+    if epoch%1000==0:
+        print(""Loss at epoch %d = "" %epoch, loss)
+
+print(model(x_trn), y_trn)
+
+ +

The (super reduced in size) time series should be [1,1, 0.9962,1], with labels [0,0,1,0] and [1, 0.9870, 0.9848,1] with labels [0,1,1,0] and the batch size should be 2. I really hope I didn’t mess up the dimensionalities, but I tried to make it in a shape accepted by the LSTM. This is the output:

+ +
Loss at epoch 0 = tensor(1.3929, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 1000 = tensor(0.8939, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 2000 = tensor(0.8664, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 3000 = tensor(0.8390, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 4000 = tensor(0.8339, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 5000 = tensor(0.8288, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 6000 = tensor(0.8246, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 7000 = tensor(0.8202, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 8000 = tensor(0.8143, device='cuda:0', grad_fn=<NllLossBackward>)
+Loss at epoch 9000 = tensor(0.8108, device='cuda:0', grad_fn=<NllLossBackward>)
+
+(tensor([[[-9.0142e-01, -1.2631e+01],
+          [-9.3762e-01, -9.6707e+00]],
+
+         [[-1.3467e+00, -3.9542e+00],
+          [-2.2005e+00, -7.6977e-01]],
+
+         [[-2.4500e+01, -1.9363e-02],
+          [-2.3349e+01, -6.2210e-01]],
+
+         [[-1.0969e+00, -2.1953e+01],
+          [-6.9776e-01, -1.8608e+01]]], device='cuda:0',
+        grad_fn=<LogSoftmaxBackward>), tensor([[0, 0],
+         [0, 1],
+         [1, 1],
+         [0, 0]], device='cuda:0'))
+
+ +

The loss doesn't go down too fast (I expected it to overfit and get really close to zero). The actual values are okish (the smaller one is always the right one), but they can definitely be improved. Can someone tell me if my code is doing what I want (and maybe suggest why the loss is still big - maybe I need a smaller LR?)

+",22839,,,,,3/21/2019 15:50,LSTM is not converging,,1,2,,,,CC BY-SA 4.0 +11339,2,,11336,3/20/2019 6:14,,0,,"

You can't really make the network tell you if a face is new (unless you actually train a network with that particular purpose, maybe). Ideally, if you feed a new face to a trained network, the output activations will be pretty low in all the possible categories (faces that the network has already seen), i.e. no particular category will signal a high probability. In that case, you can just set a cutoff, such that if the maximum activation is below a certain threshold, it means that the NN encountered a new face. But things are not that easy in practice. If the NN encounters a face it has already seen, but the image is weird (blurry, the face doesn't cover much of the actual image, there are multiple faces in the image, etc.), you could get the same output as in the case of a new face. At the same time, for a NN images are just probability distributions, so it could happen that a new face, which for us, humans, looks nothing like the others, looks pretty similar to an old one from the NN's point of view, hence giving a high activation in a single particular output neuron.
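A minimal sketch of that cutoff idea (the threshold value here is arbitrary and would need tuning on your own data):

import numpy as np

THRESHOLD = 0.8  # arbitrary cutoff, tune on a validation set

def classify_face(probabilities):
    # probabilities: softmax output of the network over the known identities
    best = int(np.argmax(probabilities))
    if probabilities[best] < THRESHOLD:
        return None  # treat as an unknown / new face
    return best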

+",22839,,,,,3/20/2019 6:14,,,,0,,,,CC BY-SA 4.0 +11341,2,,11328,3/20/2019 10:55,,1,,"

Almost, but no. When you maximize that objective function, you do so by adjusting the parameters $\phi$ and $\theta$. After you're done with training, you can use your word embeddings for other NLP tasks. You don't, however, do any prediction directly from the skip-gram model.

+ +

To maximize the first term, co-occurring words must have large inner products. That is, they must be ""similar"". To maximize the second term**, the randomly sampled words must have a small inner product with $\phi_u$. That is, they must be ""dissimilar"" to $\phi_u$. Moving these word embeddings around in the vector space to make some words similar and others not is the only thing that happens during skip-gram training.

+ +
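As a small numerical sketch of the objective for a single (context, target) pair (the vectors and the number of negative samples below are arbitrary toy values):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_pair_term(phi_u, theta_v, theta_negs):
    # log sigma(<phi_u, theta_v>) for the observed pair, plus the sum over the
    # k sampled negatives, which approximates k * E[log sigma(-<phi_u, theta_neg>)]
    positive = np.log(sigmoid(phi_u @ theta_v))
    negative = np.sum(np.log(sigmoid(-(theta_negs @ phi_u))))
    return positive + negative

rng = np.random.default_rng(0)
phi_u, theta_v = rng.normal(size=3), rng.normal(size=3)  # toy embeddings of dimension 3
theta_negs = rng.normal(size=(5, 3))                     # k = 5 negative samples
print(sgns_pair_term(phi_u, theta_v, theta_negs))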

** $\sigma(-x)=1-\sigma(x)$, so maximizing $\sigma(-x)$ is minimizing $\sigma(x)$

+",22916,,,,,3/20/2019 10:55,,,,3,,,,CC BY-SA 4.0 +11342,1,11343,,3/20/2019 10:59,,1,7276,"

I am fine-tuning a VGG16 model on 20 classes with 500k images. I was wondering how you choose the size of the dense layer (the one before the prediction layer, which has size 20). I would prefer not to do a grid search, seeing how long it takes to train my model.

+ +

Also, how many Dense layers should I put after my global average pooling?

+ +
base_model = keras.applications.VGG16(weights='imagenet', include_top=False)
+
+  x = base_model.output
+  x = GlobalAveragePooling2D()(x)
+  x = Dense(???, activation='relu')(x)
+  x = Dropout(0.5, name='drop_fc1')(x)
+  prediction_layer = Dense(class_number, activation='softmax')(x)
+
+ +

I haven't seen particular rules about how it's done; are there any? Is it linked with the size of the convolution layer?

+",23107,,23107,,3/20/2019 11:07,2/18/2021 12:17,How to chose dense layer size?,,2,0,,,,CC BY-SA 4.0 +11343,2,,11342,3/20/2019 11:20,,3,,"

It depends more on the number of classes. For 20 classes, 2 layers of 512 should be more than enough. If you want to experiment, you can also try 2 x 256 and 2 x 1024. Fewer than 256 may work too, but you may underutilize the power of the previous conv layers.
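For example, plugged into the setup from the question (the sizes of 512 are just the suggestion above, not a hard rule):

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5, name='drop_fc1')(x)
prediction_layer = Dense(class_number, activation='softmax')(x)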

+",22745,,,,,3/20/2019 11:20,,,,3,,,,CC BY-SA 4.0 +11345,1,,,3/20/2019 13:47,,1,215,"

So I have been playing around with neat-python. I made a program, applying NEAT, to play pinball on the Atari 2600. The code for that can be found in the file test2.py here

+ +

Now, based on that, I would like to do the same, but on a 2-player game. I have already set up the environment to play a 2-player game, which is PONG, using OpenAI Retro.

+ +

What I have no clue how to do is run 2 nets at the same time, on the same observation. The way that neat-python works is that you get the observation from a single function that goes through each genome and runs the environment.

+ +

How would you create 2 eval_genome functions that can take in the same observation in real time? This means that they train based off of the same images and environments.

+ +

Help?

+",23119,,,,,6/25/2023 1:08,Running 2 NEAT nets on the same observations,,2,3,,,,CC BY-SA 4.0 +11347,1,,,3/20/2019 16:55,,3,285,"

There are several (family of) algorithms that can be used to cluster a set of $d$-dimensional points: for example, k-means, k-medoids, hierarchical clustering (agglomerative or divisive).

+ +

What is graph-based clustering? Are we clustering the nodes or edges of a graph instead of a set of $d$-dimensional (as e.g. in k-means)? Couldn't we just use k-means to also cluster a set of nodes?

+",2444,,,,,3/21/2019 10:25,What is graph clustering?,,1,3,,,,CC BY-SA 4.0 +11348,2,,11347,3/20/2019 16:55,,5,,"

In graph clustering, we want to cluster the nodes of a given graph, such that nodes in the same cluster are highly connected (by edges) and nodes in different clusters are poorly or not connected at all.

+ +

A simple (hierarchical and divisive) algorithm to perform clustering on a graph is based on first finding the minimum spanning tree of the graph (using e.g. Kruskal's algorithm), $T$. It then proceeds in iterations. At each iteration, we remove from $T$ the edge with the highest weight. Given that $T$ is a tree, the removal of an edge from $T$ will create a forest (with connected components). So, after the removal of the edge of highest weight from $T$, we will have two connected components. These two connected components will represent two clusters. So, after one iteration, we will have two clusters. At the next iteration, we remove the edge with the second highest weight, and this will create other connected components, and so on, until, possibly, all nodes are in their own cluster (that is, all edges have been removed from $T$).

+ +
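A rough sketch of this procedure (assuming a weighted networkx graph, and using a fixed number of clusters as the stopping rule instead of the threshold discussed below):

import networkx as nx

def mst_clustering(graph, n_clusters):
    # cluster the nodes of a weighted graph by cutting the heaviest MST edges
    mst = nx.minimum_spanning_tree(graph)  # e.g. via Kruskal's algorithm
    edges = sorted(mst.edges(data=True), key=lambda e: e[2]['weight'], reverse=True)
    for u, v, _ in edges[:n_clusters - 1]:  # each removal adds one connected component
        mst.remove_edge(u, v)
    return list(nx.connected_components(mst))  # each component is a cluster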

There are several limitations of this algorithm. For example, it only considers the edges of the initial graph that are shared with $T$ (that is, it only considers the edges of the minimum spanning tree). It also requires the edges of the graph to be weighted. It does not require the number of clusters to be known in advance (like any other hierarchical clustering algorithm), but we still need to choose the optimal number of clusters (after the algorithm has terminated). We can do that in several ways. A way to do it would be to have a threshold value $t$ that is used to decide when we should stop removing edges from $T$: more specifically, we will keep removing edges from $T$ as long as the next highest weight is higher than this threshold $t$.

+ +

There are numerous applications of this type of clustering. For example, we might want to discover certain groups of people in social networks.

+ +

The paper Graph clustering (by Satu Elisa Schaeffer, 2007) provides a readable and quite detailed overview of this field.

+ +

There are algorithms based on k-means that can also work on graphs. See e.g. Graph-based k-means Clustering: A Comparison of the Set Median versus the Generalized Median Graph (by Ferrer et al.).

+",2444,,2444,,3/21/2019 10:25,3/21/2019 10:25,,,,0,,,,CC BY-SA 4.0 +11350,1,11352,,3/20/2019 21:53,,6,2836,"

I have read a lot about RL algorithms that update the action-value function at each step with the currently gained reward. The requirement here is that the reward is obtained after each step.

+ +

I have a case where there are three steps that have to be passed in a specific order. At each step the agent has to make a choice from a range of actions. The actions are specific to each step.

+ +

To give an example for my problem: I want the algorithm to render a sentence of three words. For the first word the agent may choose a word out of ['I', 'Trees'], the second word might be from ['am', 'are'], and the last word could be chosen from ['nice', 'high']. After the agent has made its choices, the reward is obtained once for the whole sentence.

+ +

Does anyone know which algorithms to use in this kind of problem?

+ +
+ +

To give a bit more detail on what I already tried:

+ +

I thought that using value iteration would be a reasonable approach to test. My problem here is that I don't know how to assign the discounted reward for the chosen operations.

+ +

For example after the last choice I get a reward of 0.9. But how do I update the value for the first action (choosing out of I and Trees in my example)?

+",23288,,1847,,3/21/2019 7:36,4/10/2020 15:29,Reinforcement Learning with long term rewards and fixed states and actions,,2,1,,,,CC BY-SA 4.0 +11351,2,,11350,3/20/2019 22:49,,4,,"

You don't need to have a reward on every single timestep; a reward at the end is enough. Reinforcement learning can deal with the temporal credit assignment problem; all algorithms are designed to work with it. It's enough to define a reward at the end where you, for example, give a reward of 1 if the sentence is satisfactory or -1 if it isn't. Regular tabular Q-learning would easily solve the toy problem that you gave as an example.

+",20339,,,,,3/20/2019 22:49,,,,1,,,,CC BY-SA 4.0 +11352,2,,11350,3/21/2019 8:04,,3,,"

You are describing a straightforward Markov Decision Process that could be solved by almost any Reinforcement Learning algorithm.

+ +
+

I have read a lot about RL algorithms that update the action-value function at each step with the currently gained reward. The requirement here is that the reward is obtained after each step.

+
+ +

This is true. However, the reward value can be zero, and it is quite common to have most states and actions result in zero reward with only some specific ones returning a non-zero value that makes a difference to the goals of the agent.

+ +

When this is more extreme - e.g. only one non-zero reward per thousand steps - this is referred to as ""sparse rewards"" and can be a problem to handle well. Your three steps then reward situation is nowhere near this, and not an issue at all.

+ +
+

I thought that using value iteration would be a reasonable approach to test. My problem here is that I don't know how to assign the discounted reward for the chosen operations.

+
+ +

Value iteration should be fine for your test problem, providing the number of choices for words is not too high.

+ +

The way you assign discounted rewards is to use the Bellman equation as an update:

+ +

$$v(s) \leftarrow \text{max}_{a}[\sum_{s',r} p(s',r|s,a)(r + \gamma v(s'))]$$

+ +

For a deterministic environment you can simplify the sum, as $p(s',r|s, a)$ will be 1 for a specific combination of $s', r$

+ +

It doesn't matter that $r = 0$ for the first two steps. The Bellman equation links time steps, by connecting $v(s)$ and $v(s')$. So, over many repetitions of value iteration's main loop, the episode end rewards get copied - with the discount applied - to their predecessor states. Very quickly in your case you will end up with values for the start states.

+ +
+

For example after the last choice I get a reward of 0.9. But how do I update the value for the first action (choosing out of I and Trees in my example)?

+
+ +

You don't do it directly in a single step. What happens is repeating the value iteration loop copies the best values to their predecessor states, one possible time step at a time.

+ +

On the first loop through all states, you will run an update something like:

+ +

$$v(\text{'I'}) \leftarrow 0 + \gamma v(\text{'I am'})$$

+ +

and $v(\text{'I am'}) = 0$ initially, so it will learn nothing useful in this first loop. However, it will also learn in the same loop:

+ +

$$v(\text{'I am'}) \leftarrow 0 + \gamma v(\text{'I am nice'})$$

+ +

So assuming $\gamma = 0.9$ and $v(\text{'I am nice'}) = 0.9$, and that ""I am high"" scores less than 0.9, then it will set $v(\text{'I am'}) = 0.81$.

+ +

On the next loop through states in value iteration, it will then set $v(\text{'I'}) = 0.729$ (assuming ""I am"" beats ""I are"" for maximum value)
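For concreteness, here is a minimal sketch of value iteration on this toy MDP (the reward function is a made-up placeholder in which only ""I am nice"" scores 0.9, and $\gamma = 0.9$ as above):

gamma = 0.9
words = [['I', 'Trees'], ['am', 'are'], ['nice', 'high']]

def reward(sentence):  # hypothetical terminal reward
    return 0.9 if sentence == ('I', 'am', 'nice') else 0.1

# enumerate all partial sentences (states); () is the start state
states = [()]
for step in range(3):
    states += [s + (w,) for s in states if len(s) == step for w in words[step]]

V = {s: 0.0 for s in states}
for _ in range(10):  # a few sweeps are enough for a 3-step MDP
    for s in states:
        if len(s) == 3:
            V[s] = reward(s)  # terminal states just hold their reward
        else:
            V[s] = max(gamma * V[s + (w,)] for w in words[len(s)])

print(V[()])  # value of the empty start state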

+",1847,,1847,,3/21/2019 8:11,3/21/2019 8:11,,,,5,,,,CC BY-SA 4.0 +11353,1,,,3/21/2019 8:15,,1,645,"

I'm learning a bit about the use of the Surprise library, and I have a set of data with users and ratings. I'm training a model with this library, using KNNBasic and KNNWithMeans; the latter algorithm is the same as KNN but averages the ratings before calculating the distances between the points.

+

If I don't use any measure of similarity, i.e. using the two algorithms with the default parameters, KNNBasic predicts the results better than using KNNWithMeans. But if I train the nets using subsets, 10 folds, where the algorithm iterates over 9 of them for training and the other one for validating, KNNWithMeans gives better results.

+

Do you know why this can happen? Why is KNNBasic better in the first case, while with more folds KNNWithMeans is better?

+",23347,,2444,,12/21/2021 11:55,12/21/2021 11:55,"Why is KNNBasic better than KNNWithMeans with the default parameters, but KNNWithMeans performs better with folds?",,0,0,,,,CC BY-SA 4.0 +11355,2,,11337,3/21/2019 15:50,,1,,"

I tried to play with your code and found that changing the loss function to cross-entropy instead of negative log-likelihood makes the difference between the 2000th epoch's loss and the 9000th epoch's loss larger: about 0.2 instead of 0.09.
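Note that PyTorch's nn.CrossEntropyLoss already combines LogSoftmax and NLLLoss, so a sketch of that swap (assuming the final F.log_softmax in forward() is dropped and raw logits are returned instead) would be:

loss_function = nn.CrossEntropyLoss()
# in forward(): return self.l_out(outp)  # raw logits, no log_softmax
loss = loss_function(tag_scores.reshape(4*bs, n_classes), y_trn.reshape(4*bs))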

+ +

I also tried to change the optimizer and the learning rate, but the loss didn't improve.

+ +

You can explore the modified code; it may help you with another idea.

+",21907,,,,,3/21/2019 15:50,,,,1,,,,CC BY-SA 4.0 +11356,1,,,3/21/2019 16:53,,1,129,"

I have trained a recurrent neural network based on 1 stack of LSTM cells. I use it to solve a classification problem.

+ +

The RNN cell has 48 hidden states. The output of the last unfolded LSTM cell is linearly projected into two dimensions corresponding to two mutually exclusive classes. I train with softmax cross-entropy loss.

+ +

I also know that both my train and test sets are mislabeled(!) to a certain extent. Possibly about 10% of items labelled as class 1 are actually class 0, and the same holds in the other direction.

+ +

What puzzles me is this. Every time I train the network from scratch, I plot a precision-recall curve for the validation set at the end. And every time it is different! Especially in the very beginning, in the range that corresponds to high precision levels. Why is this happening (this instability)?

+ +

I tried various numbers of epochs, learning rates, and numbers of hidden states in the LSTM. Every time it is the same, but for some combinations of these parameters the variability is less (e.g. in 8 out of 10 train/test runs I see more or less the same precision-recall curve, and 2 times it is severely different and worse).

+",11417,,,,,3/21/2019 16:53,Why validation performance is unstable for my LSTM based model (labelling problems)?,,0,0,,,,CC BY-SA 4.0 +11357,2,,10658,3/21/2019 17:32,,2,,"

This is a very interesting question that you ask. I believe, this post and this post (written by me) well address your question. However, it deserves an explanation here.

+

1. Fully connected networks

+

The more layers you add, the more "nonlinear" your network becomes. For instance, in the case of the two spirals problem, which requires a "highly nonlinear separation", the first known architecture to solve the problem was pretty advanced for its time: it had 3 hidden layers and it also had skip-connections (a very early ResNet, in 1988). Back then, computing was way less powerful and training methods with momentum were not known. Nevertheless, thanks to the multilayer architecture, the problem was solved. Here, however, I was able to train a single-hidden-layer network to solve the spirals problem using Adam.

+

2. Convolutional nets (CNNs)

+

An interesting special case of neural networks is the CNN. CNNs restrict the architecture of the first layers, known as convolutional layers, so that there is a much smaller number of trainable parameters, due to weight sharing. As we have learned from computer vision, moving towards the later CNN layers, their receptive fields become larger. That means that the later CNN layers "see" more than their predecessors. Conceptually, the first CNN layers can recognize simpler features such as edges and textures, whereas the final CNN layers contain information about more abstract objects such as trees or faces.

+

3. Recurrent nets (RNNs)

+

RNNs are networks with layers that receive some of their outputs as inputs. Technically, a single recurrent layer is equivalent to an infinite (or at least very large) number of ordinary layers. Thanks to that recurrence, RNNs retain an internal state (memory). Therefore, it is much more difficult to answer your question in the case of recurrent nets. What is known is that, due to their memory, RNNs are more like programs, and thus are in principle more complex than other neural networks. Please let me know if you find an answer to your question for this last case.

+

To conclude, the higher number of hidden layers may help to structure a neural network. Thanks to the recent developments such as ResNets and backpropagation through time, it is possible to train neural networks with a large number of hidden layers.

+",23360,,2444,,5/25/2022 9:02,5/25/2022 9:02,,,,0,,,,CC BY-SA 4.0 +11358,2,,11326,3/21/2019 17:37,,2,,"

The usual way to solve this kind of problem is to construct a configuration space: expanding all the polygonal obstacles by sliding the polygon corresponding to the robot around them, which amounts to a Minkowski sum (some slides).

+ +

The exterior vertices of the configuration space can then be used as input to a path-planning algorithm, such as A*.

+",42,,,,,3/21/2019 17:37,,,,0,,,,CC BY-SA 4.0 +11359,2,,11290,3/21/2019 17:56,,3,,"

To the best of my knowledge, there haven't yet been many academic publications in this area, which could be broadly said to fall within Search-Based Software Engineering. Here are the ones I know of.

+ +
    +
  • Jerry Swan and Nathan Burles. Templar - A Framework for Template-Method Hyper-Heuristics. In: Genetic Programming - 18th European Conference, EuroGP 2015, Copenhagen, Denmark, April 8-10, 2015, Proceedings. 2015, pp. 205–216. DOI: 10.1007/978-3-319-16501-1_17.

    + +
      +
    • This paper describes 'Hyper-quicksort', a quicksort variant that uses Machine Learning (ML) to generate a pivot function.
    • +
  • +
  • A. E. I. Brownlee, N. Burles, and J. Swan. Search-Based Energy Optimization of Some Ubiquitous Algorithms. In: IEEE Transactions on Emerging Topics in Computational Intelligence 1.3 (2017), pp. 188–201.

    + +
      +
    • This paper uses ML to generate energy-efficient variants of some widely-used algorithms
    • +
  • +
  • David R. White, Leonid Joffe, Edward Bowles, and Jerry Swan. Deep Parameter Tuning of Concurrent Divide and Conquer Algorithms in Akka. ISBN: 978-3-319-55792-2.

    + +
      +
    • This paper uses ML to optimise the FFT, matrix multiplication and quicksort for concurrency
    • +
  • +
  • Nathan Burles, Edward Bowles, Alexander E. I. Brownlee, Zoltan A. Kocsis, Jerry Swan, and Nadarajen Veerapen. Object-Oriented Genetic Improvement for Improved Energy Consumption in Google Guava. DOI: 10.1007/978-3-319-22183-0_20.

    + +
      +
    • This one optimises Google Guava for energy consumption
    • +
  • +
  • Zoltan A. Kocsis, Geoff Neumann, Jerry Swan, Michael G. Epitropakis, Alexander E. I. Brownlee, Saemundur O. Haraldsson, and Edward Bowles. Repairing and Optimizing Hadoop hashCode Implementations. In: Search-Based Software Engineering: 6th International Symposium, SSBSE 2014, DOI: 10.1007/978-3-319-09940-8_22.

    + +
      +
    • This one fixes broken hashCodes by using ML to generate new ones
    • +
  • +
+ +

There was also one (from Microsoft Research, I think) entitled something like ""The case for self-adjusting data structures"". I'll add an edit if I can find it.

+",42,,2444,,8/18/2019 22:24,8/18/2019 22:24,,,,0,,,,CC BY-SA 4.0 +11360,1,,,3/21/2019 18:23,,1,106,"

Suppose that we have unlabeled data. That is, all we have is a collection of emails, and we want to determine whether any of them are spam or not. Let's say we have $1,000$ rules to determine whether a particular email is spam or not. For example, one rule could be that a sender's email address should not contain the text no_reply. If an email does contain this text, then it would be classified as spam.

+

What are the advantages/disadvantages of a rules-based approach for detecting spam vs. a non-rules-based approach/unsupervised methods for detecting spam?

+

Would there even be a point in constructing a non-rules-based model, given that we already have a rules-based model? Could we use the rules-based model to create some labeled training data and then apply supervised techniques?

+",23362,,2444,,12/11/2020 14:33,4/30/2023 19:01,Are there any advantages of using rules-based approaches versus models for detecting spam?,,1,0,,,,CC BY-SA 4.0 +11361,1,11362,,3/21/2019 21:38,,2,280,"

I want to solve a problem using Reinforcement Learning on a 20x20 board. An agent (a mouse) has to collect the highest possible reward as fast as possible by collecting cheeses, of which there are 10 in total. The agent has a fixed amount of time for solving this problem, namely 500 steps per game. The problem is that the cheeses are randomly assigned to the fields on the board; the agent knows, however, where the cheeses are located.

+ +

Is there any way this could be solved using only Reinforcement Learning (and without training for an absurd amount of time)? Or is the only way to solve this problem to use algorithms like the A* algorithm?

+ +

I've tried many different (deep)-q-learning models, but it always failed miserably.

+ +

Edit: I could not get any meaningful behavior after 6 hours of learning on a GTX 950M. Maybe my implementation was off, but I don't think so.

+",23005,,23005,,3/21/2019 22:19,3/22/2019 8:10,"Can Reinforcement Learning solve problems, where certain elements in the environement are randomly located?",,2,2,,,,CC BY-SA 4.0 +11362,2,,11361,3/21/2019 21:54,,2,,"

Yes you can use RL for this. The trick is to include the location of the cheese as part of the state description. So as well as up to 400 states for the mouse location, you have (very roughly) $400^{10}$ possible cheese locations, meaning you have $400^{11}$ states in total.

+ +

So you are going to want some function approximation if you want to use RL - you would probably train using a convolutional neural network, with an ""image"" of the board including mouse and cheese positions, plus DQN to select actions.

+ +

Viewed like this, a game where the mouse tries to get the cheese in minimal time seems on the surface a lot simpler than many titles in the Atari game environments, which DQN has been shown to solve well for many games.

+ +

I would probably use two image channels - one for mouse location and one for cheese location. A third channel perhaps for walls/obstacles if there are any.
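To make that concrete, a minimal sketch of how such an observation could be encoded (NumPy; the function name and layout are illustrative):

    import numpy as np

    def encode_state(mouse_pos, cheese_positions, size=20):
        # channel 0 = mouse location, channel 1 = cheese locations
        obs = np.zeros((size, size, 2), dtype=np.float32)
        obs[mouse_pos[0], mouse_pos[1], 0] = 1.0
        for r, c in cheese_positions:
            obs[r, c, 1] = 1.0
        return obs

    # e.g. mouse at (0, 0), some cheese scattered on the board
    obs = encode_state((0, 0), [(3, 7), (12, 5), (19, 19)])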

+ +
+

Or is the only way to solve this problem to use algorithms like the A*-algorithm?

+
+ +

A* plus some kind of sequence optimisation like a travelling salesman problem (TSP) solver would probably be optimal if you have been presented the problem and asked to solve it any way you want. With only 11 locations to resolve - mouse start plus 10 cheese locations - then you can brute force the movement combinations in a few seconds on a modern CPU, so that part may not be particularly exciting (whilst TSP solvers can get more involved and interesting).

+ +

The interesting thing about RL is how it will solve the problem. RL is a learning algorithm - the purpose of implementing it is to see what it takes for the machine to gain knowledge towards a solution. A* and combinatorial optimisers, on the other hand, are for when you already have knowledge of how to solve the problem and want to do so as optimally as possible based on a higher-level analysis. The chances are high that an A*/optimiser solution would be more robust, quicker to code, and quicker to run than an RL solution.

+ +

There is nothing inherently wrong with either approach, if all you want to do is solve the problem at hand. It depends on your goals for why you are bothering with the problem in the first place.

+ +

You could even combine A* and RL if you really wanted to. A* to find the paths, then RL to decide best sequence using the paths as part of the input to the CNN. The A* analysis of routes would likely help the RL stage a lot - add them as one or more additional channels.

+",1847,,1847,,3/21/2019 22:12,3/21/2019 22:12,,,,2,,,,CC BY-SA 4.0 +11363,1,11625,,3/21/2019 22:24,,2,666,"

I know the difference between the content-based and collaborative filtering approaches in recommender systems. I also know that some articles say collaborative filtering has some advantages over content-based filtering, and some of them also suggest using both methods (a hybrid) to make a better recommender system.

+ +

Is there a specific case where the use of one method (content-based, specifically) is better than the other? Because, if there is no such case at all, why are both methods considered to be on the same ""level""? Why not focus on just one method, for example, on collaborative filtering or on a hybrid method (as an extension of collaborative filtering)?

+",16565,,2444,,4/3/2019 8:30,4/3/2019 15:16,When is content-based more appropriate than collaborative filtering?,,1,0,,,,CC BY-SA 4.0 +11366,1,,,3/22/2019 1:38,,1,38,"

I'm a Rails developer with a lot of web experience, but none (still) in AI.

+ +

I'm working on a web text editor that judges use to write their sentences.

+ +

The goal is to start using AI to help the judge rule on the case, based either on his own previous rulings or on his colleagues' rulings.

+ +

The judge would provide the text for the plaintiffs and defendants petitions, and based on these two inputs the system would recommend previous rulings that apply for the case.

+ +

I already have a considerable dataset of judges rulings inside the database, and they can be easily attached to the plaintiffs and defendants petitions for training (so this plaintiff petition + this defendant petition = this ruling).

+ +

This is especially challenging because the complaints can combine different subjects in the same petition; but the fact is that many offices use the same standardized petitions, as do the defendants, so I think the system has a good chance of prediction success.

+ +

What algorithms or strategies should I start studying to tackle this problem?

+ +

Any similar articles, white papers or repositories that could help in my goal?

+",23370,,23370,,3/22/2019 2:29,3/22/2019 8:09,Algorithms and strategies to help judges rule cases,,1,2,,,,CC BY-SA 4.0 +11369,2,,11361,3/22/2019 7:44,,0,,"

I assume here that the OP is familiar with DQN basics.

+ +

""Standard"" way to solve this problem with Deep RL would be using convolutional DQN.

+ +

Make the net from 2 to 4 convolutional layers with 1-2 fully connected layers on top. The trick here is how you output the Q-values. The input is the board with the cheese, without information about the mouse. Instead, the net should output a Q-value for every action from every field on the board (every possible position of the mouse), which means you output 20x20x(number_of_moves from the field) Q-values. That makes the net quite big, but it is the most reliable way for DQN. For each move from the replay buffer, only one Q-value is updated (produces a gradient) via the temporal-difference equation.

+ +

Because only one value out of 20x20x(number_of_moves) is updated per sample, you need a quite big replay buffer and a lot of training. After each episode (complete game) the cheese should be randomly redistributed. Episodes should be mixed up in the replay buffer; training on a single episode is a big no. Hopefully that at least gives you a direction in which to do research/development. Warning: DQN is slow to train, and with such a big action space (20 x 20 x number_of_moves) it could require millions or tens of millions of moves, or even more.

+ +

Alternatively, if you don't want such a big action space, you can use an actor-critic architecture (or policy gradient; actor-critic is a kind of policy gradient method). An actor-critic network has a small action space, with only number_of_moves outputs. On the downside, the complexity of the method is much higher, and its behavior can be difficult to predict or analyze. However, if the action space is too big, it could be the preferable solution. Practical issues and implementations of actor-critic are too big an area to go into in depth here.

+ +

Edit: There is another way with a smaller action space for DQN, but it is somewhat less reliable and possibly slower: shift the board in such a way that the mouse is in the center of the board and pad the invalid parts with zeros (the size of the new board should be 2x). In that case only number_of_moves outputs are needed.

+",22745,,22745,,3/22/2019 8:10,3/22/2019 8:10,,,,8,,,,CC BY-SA 4.0 +11370,2,,11366,3/22/2019 8:09,,2,,"

Genuine success in this area would be beyond the state-of-the-art in research, since it likely requires analogising from relational knowledge extracted from text. In recent years, techniques for working with natural language have tended to be statistical, and are therefore somewhat deficient in this respect. You could look at 'bag of words'/latent semantic analysis approaches, but they are likely to generate many false positives unless a lot of ad hoc conditions are added manually. More recent work on 'treenets' (paper here) is more structurally informed, but is still a relatively new area.

+",42,,,,,3/22/2019 8:09,,,,0,,,,CC BY-SA 4.0 +11371,2,,11360,3/22/2019 9:02,,0,,"

Yes, there would be a point. Assuming your rule set is accurate, then you can use data classified with it to train a model. This model can be expected to be more robust and properly categorise emails that your rule-set will not handle.

+ +

Why? Machine learning algorithms generally work on features, and identify relationships between those features that lead to a classification decision. A human rule author basically does the same, but they might not notice subtle relationships; a good ML algorithm, however, might pick those up.

+ +

So you could have a hybrid model, where you first use your rule-based classifier, and anything that does not get classified is then run through the ML classifier.

+",2193,,,,,3/22/2019 9:02,,,,2,,,,CC BY-SA 4.0 +11373,1,11379,,3/22/2019 9:07,,2,830,"

Can Deep Learning be applied to Computational Fluid Dynamics (CFD) to develop turbulence models that are less computationally expensive compared to traditional CFD modeling?

+",23276,,,,,3/22/2019 14:42,Can Deep Learning be applied to Computational Fluid Dynamics,,1,0,,,,CC BY-SA 4.0 +11374,1,11391,,3/22/2019 10:45,,8,252,"

I've started working on anomaly detection in Python. My dataset is a time series. The data is collected by some sensors that record measurements on semiconductor-making machines.

+

My dataset looks like this:

+
ContextID   Time_ms Ar_Flow_sccm    BacksGas_Flow_sccm
+7289973 09:12:48.502    49.56054688 1.953125
+7289973 09:12:48.603    49.56054688 2.05078125
+7289973 09:12:48.934    99.85351563 2.05078125
+7289973 09:12:49.924    351.3183594 2.05078125
+7289973 09:12:50.924    382.8125    1.953125
+7289973 09:12:51.924    382.8125    1.7578125
+7289973 09:12:52.934    382.8125    1.7578125
+7289999 09:15:36.434    50.04882813 1.7578125
+7289999 09:15:36.654    50.04882813 1.7578125
+7289999 09:15:36.820    50.04882813 1.66015625
+7289999 09:15:37.904    333.2519531 1.85546875
+7289999 09:15:38.924    377.1972656 1.953125
+7289999 09:15:39.994    377.1972656 1.7578125
+7289999 09:15:41.94     388.671875  1.85546875
+7289999 09:15:42.136    388.671875  1.85546875
+7290025 09:18:00.429    381.5917969 1.85546875
+7290025 09:18:01.448    381.5917969 1.85546875
+7290025 09:18:02.488    381.5917969 1.953125
+7290025 09:18:03.549    381.5917969 14.453125
+7290025 09:18:04.589    381.5917969 46.77734375
+
+

What I have to do is to apply some unsupervised learning technique on each and every parameter column individually and find any anomalies that might exist in there. The ContextID is more like a product number.

+

I would like to know which unsupervised learning techniques can be used for this kind of task at hand since the problem is a bit unique:

+
    +
  1. It has temporal values.
  2. +
  3. Since it has temporal values, each product will have many (similar or different) values as can be seen in the dataset above.
  4. +
+",23380,,32410,,6/6/2021 18:13,6/6/2021 18:13,Which unsupervised learning technique can be used for anomaly detection in a time series?,,1,0,,,,CC BY-SA 4.0 +11375,1,23427,,3/22/2019 10:57,,22,12066,"

Coming from a process (optimal) control background, I have begun studying the field of deep reinforcement learning.

+

Sutton & Barto (2015) state that

+
+

particularly important (to the writing of the text) have been the contributions establishing and developing the relationships to the theory of optimal control and dynamic programming

+
+

With an emphasis on the elements of reinforcement learning - that is, policy, agent, environment, etc., what are the key differences between (deep) RL and optimal control theory?

+

In optimal control, we have controllers, sensors, actuators, plants, etc., as elements. Are these different names for similar elements in deep RL? For example, would an optimal control plant be called an environment in deep RL?

+",23276,,2444,,2/12/2021 20:03,2/12/2021 20:03,What is the difference between reinforcement learning and optimal control?,,2,1,,,,CC BY-SA 4.0 +11376,2,,3942,3/22/2019 11:03,,1,,"

Predicting what happens post-singularity is simply not possible as we cannot attempt to model let alone conceptualise a mind far more complex than ours. If that is a difficult concept to get your head around, consider how far an insect's central nervous system could go in understanding human behaviour.

+ +

That fact alone is an argument against the likelihood of success for attempting any type of control.

+ +

But in terms of 'defending' against a post-singularity mind well before it happens (i.e. now), there are 2 solutions, with only the first offering a good likelihood of success, albeit only for as long as everyone cooperates:

+ +
    +
  1. identify the technology types that are anticipated to enable the singularity, register them as 'instruments of human extinction' and regulate them accordingly;

  2. +
  3. ensure technology in human augmentation is sufficiently advanced to enable human-mediated guidance/fusion during the exponential rise in computation that will occur prior to the singularity event.

  4. +
+ +

In any case, as mentioned, it is impossible to predict the behaviour of a post-singularity mind and even a human hybrid will similarly be unpredictable due to its exponentially increased cognitive/computational complexity.

+ +

An interesting consideration is the possibility that numerous singularity-level minds have already spawned in other parts of the universe (based on the likelihood of other civilisations a) existing and b) reaching that level of technological advancement).

+",23379,,,,,3/22/2019 11:03,,,,1,,,,CC BY-SA 4.0 +11379,2,,11373,3/22/2019 14:29,,1,,"

Read the paper Deep learning in fluid dynamics (by J. Nathan Kutz), and you will find your answer.

+",22603,,2444,,3/22/2019 14:42,3/22/2019 14:42,,,,1,,,,CC BY-SA 4.0 +11380,1,,,3/22/2019 14:44,,2,47,"

Geoffrey Hinton's Coursera MOOC was recently discontinued: +https://twitter.com/geoffreyhinton/status/1085325734044991489?lang=en

+ +

The videos, however, are still available both on YouTube and on Hinton's webpage: +https://www.cs.toronto.edu/~hinton/coursera_lectures.html

+ +

However in his MOOC Hinton also had some papers as required (or recommended, I can't really remember) readings after each lecture. They were generally old, seminal papers which provided some good insights and intuitions and were worth reading IMHO for learning purposes. I couldn't find these papers as a reading list online. Does anyone have this list?

+",23386,,,,,3/22/2019 14:44,Hinton's reading list from the removed Coursera MOOC,,0,0,,,,CC BY-SA 4.0 +11381,1,11440,,3/22/2019 15:11,,5,114,"

Judea Pearl won the 2011 Turing Award

+ +
+

For fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.

+
+ +

He is credited with the invention of Bayesian networks and a framework for causal inference.

+ +

Why should we study causation and causal inference in artificial intelligence? How would causation integrate into other topics like machine learning? There are facts or relations that cannot be retrieved from data (e.g. cause-effect relations), which is the driving force of ML. Are there other reasons for studying causation?

+",2444,,2444,,12/13/2021 13:28,12/13/2021 13:28,Why should we study causation in artificial intelligence?,,1,0,,,,CC BY-SA 4.0 +11382,2,,9296,3/22/2019 15:58,,2,,"

Biological organisms (such as animals or plants) are the main examples of intelligent systems that we are aware of (excluding artificially intelligent systems, so as not to discuss whether current AI systems are really intelligent or not). Consequently, biological life is often an inspiration for AI researchers to develop AI systems.

+

There are numerous examples of AI systems that have been introduced (at least partially) based on, or just inspired by, biology. Here are a few examples.

+
    +
  • Reinforcement learning is based on a similar way that animals (such as dogs or pigeons) can learn. For more details, see Sutton & Barto's book (especially chapters 14 and 15).

    +
  • +
  • Artificial neural networks are very approximative models of human neural networks.

    +
  • +
  • Genetic algorithms are roughly based on Darwin's theory of evolution.

    +
  • +
  • Ant colony optimization algorithms (and, in general, swarm intelligence) are based on the way real ants (and, respectively, biological swarms) behave. (There is even a rap song dedicated to ants).

    +
  • +
+

There are probably other examples that don't come to my mind right now. See also this and this questions.

+

There are cases where AI discoveries have also helped the development of biology or related fields (such as neuroscience and psychology). For instance, Sutton & Barto (on page 4) write

+
+

Of all the forms of machine learning, reinforcement learning is the closest to the kind of learning that humans and other animals do, and many of the core algorithms of reinforcement learning were originally inspired by biological learning systems. Reinforcement learning has also given back, both through a psychological model of animal learning that better matches some of the empirical data, and through an influential model of parts of the brain's reward system.

+
+",2444,,2444,,11/16/2020 17:21,11/16/2020 17:21,,,,0,,,,CC BY-SA 4.0 +11386,2,,9296,3/22/2019 17:36,,1,,"

Evolutionary game theory and evolutionary algorithms

+

I see the connection arising mostly through Evolutionary Game Theory and Evolutionary Algorithms. Evolutionary algorithms are an analog of natural selection, where successive generations of a given decision-making agent are more optimized than previous generations. Like organisms in nature, this process uses "reproduction, mutation, recombination and selection".

+

There are a couple of recent articles from Quanta Magazine. One, "The Math That Tells Cells What They Are" discusses mathematical optimization as the core function of fundamental biological systems.

+
+

"Through evolution, these cells have figured out how to implement Bayes' trick using regulatory DNA."

+

"Natural selection [seems to be] pushing the system hard enough so that it ... reaches a point where the cells are performing at the limit of what physics allows."

+
+

This second quote is exactly the goal of Artificial Intelligence, where utility is limited by physics (computing resources). One way for an algorithm to increase utility is to increase computing power, but the other method is to refine the algorithm to make strong decisions more efficiently. (MCTS vs. Brute Force where a model is intractable, as an example.)

+

A second article "Mathematical Simplicity May Drive Evolution’s Speed" talks about Genetic Algorithms

+
+

"Creationists love to insist that evolution had to assemble upward of 300 amino acids in the right order to create just one medium-size human protein. With 20 possible amino acids to occupy each of those positions, there would seemingly have been more than 20300 possibilities to sift through, a quantity that renders the number of atoms in the observable universe inconsequential."

+
+

The game of Go on a 19x19 board has a similar quality--the number of potential gamestates vastly exceeds the number of atoms in the universe, and, even if the entire universe were converted to computronium, the game would still be intractable.

+
+

"The fatal flaw in their argument is that evolution didn’t just test sequences randomly: The process of natural selection winnowed the field. Moreover, it seems likely that nature somehow also found other shortcuts, ways to narrow down the vast space of possibilities to smaller, explorable subsets more likely to yield useful solutions."

+
+

This would also be an accurate description of the process of pruning a search space. The article concludes that, although there is still much research to be conducted:

+
+

“The idea of thinking about life as evolving software is fertile.”

+
+

The process of optimization in nature and in computer science is similar in spirit, if not in fact.

+

Automata as a form of artificial life

+

The second factor may arise out of the mythology of AI, via speculative fiction. In science fiction, the idea of automata as a form of artificial life is persistent. Shows & films like Westworld, Blade Runner, and the Alien franchise, with David the Android as a prime example of a superior, artificial species that may supplant humanity, are extremely popular. These are all based on Philip K. Dick's ideas explicated in Do Androids Dream of Electric Sheep?, the plot of which turns on evolutionary game theory, written about 5 years before the field was formalized! (Dick's influence can even be seen in Google's "Nexus" naming convention for their Android phone;) Underneath all of this is also the idea that Artificial Intelligence itself is a function of nature, with humans as merely the vehicle for the next form of dominant life.

+",1671,,2444,,11/17/2020 11:24,11/17/2020 11:24,,,,1,,,,CC BY-SA 4.0 +11387,2,,7838,3/22/2019 19:24,,3,,"
+

What is artificial intelligence?

+
+

This question is ambiguous. I will address the two less ambiguous but related questions.

+
    +
  1. What is the goal of the AI field?
  2. +
  3. What is an artificial intelligence?
  4. +
+

What is the goal of the AI field?

+

In the article What is artificial intelligence? (2007), John McCarthy, one of the founders of artificial intelligence and who also coined the expression artificial intelligence, writes

+
+

Artificial intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.

+

It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

+
+

Therefore, the goal of the AI field is to create intelligent programs (or machines). So, he defines the goal of the field based on the concept of intelligence, which he defines as follows.

+
+

Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

+
+

So, we could conclude that the goal of the AI field is to create programs (or machines) that achieve goals in the world to different extents.

+

This definition of intelligence is reasonable and consistent with reinforcement learning (which could be the path to AGI), but maybe not formal and rigorous enough. In this answer, I report a possibly more sound definition of intelligence given by Hutter and Legg, so I suggest that you read it, but the definitions are roughly consistent with each other (because the concepts of "goal" and "goal-seeking behavior" are present in both definitions), although they emphasize different aspects (e.g. computation or generality).

+

What is an artificial intelligence?

+

Nowadays, most people distinguish two types of artificially intelligent systems:

+
    +
  • Narrow AI (aka weak AI, although this term may not exactly be a synonym for narrow AI, but it's just the opposite of strong AI: see the Chinese-Room argument): a system that solves a very specific problem (e.g. playing go)
  • +
  • Artificial general intelligence (aka strong AI, although this term may not always be used as a synonym for AGI): a system that can solve multiple problems
  • +
+

This distinction started with philosophical arguments, such as the Chinese room argument, where the ability of a computer to "understand" the actual problem was questioned. Nowadays, there are multiple successful cases of narrow AIs (e.g. AlphaGo), but there isn't yet a "truly" AGI system. This is mainly due to the fact that more people have been (probably wisely) focusing on solving specific problems rather than solving the "holy grail" problem of the AI field, i.e. create an AGI, which seems to be a lot more difficult than creating narrow AI systems. (Anyway, the creation of an AGI could actually arise from solutions to these specific problems, so maybe we are already creating the tools needed to build an AGI, without realizing it). See What is the difference between strong-AI and weak-AI? for more details about the difference between narrow AI and strong AI.

+",2444,,2444,,7/25/2021 0:18,7/25/2021 0:18,,,,0,,,,CC BY-SA 4.0 +11388,1,,,3/22/2019 20:42,,2,79,"

Do we have to consider whether (given s) an action a can actually lead to s' when defining a reward function?

+ +

To be more specific: let's say I have a 1D map like |A|B|C|D|. To define a reward function, I simply defined a matrix for every action, where the columns and rows represent the states (A-D) and the entries represent the reward. But I made it even simpler: since reaching a specific state gives a reward of 1, I just assigned the reward to a column (C).

+ +

$$ \begin{matrix} -1&0&1&0\\ -1&0&1&0\\ -1&0&1&0\\ -1&0&1&0 \end{matrix} $$ However, let's say the matrix is specified for the action ""going right"". Now there are entries reading: D ==> going right ==> C, getting a reward of 1. This is actually not possible. However, I thought the transition function would handle this issue, since I will define there what is possible and what is not. But anyway, it is said that for a horizon of one the immediate reward is given and the transition function isn't even considered. This leads to arbitrary results. So do I have to consider the ""physics"" of my world?

+",19413,,19413,,3/22/2019 23:10,4/11/2021 15:01,Do we have to consider the feasability of an action when defining the reward function of a MDP?,,1,1,,,,CC BY-SA 4.0 +11389,1,11400,,3/22/2019 21:54,,3,179,"

Consider an iterative deepening search using a transposition table. Whenever the transposition table is full, what are common strategies applied to replace entries in the table?

+

I'm aware of two strategies:

+
    +
  1. Replace the oldest entry whenever a new one is found.

    +
  2. +
  3. Replace an entry if the new one has a higher depth.

    +
  4. +
+

I'm curious about other replacement approaches.

+",22369,,2444,,2/13/2021 14:07,2/13/2021 14:07,What are the common techniques one could use to deal with collisions in a transposition table?,,1,0,,,,CC BY-SA 4.0 +11390,1,,,3/23/2019 2:00,,1,36,"

I'm working on a project where I need to forecast sales data, and I have one year (2017) of daily history. I am new to the Artificial Intelligence topic, and, after searching for a while, I think ARIMA or Multiple Linear Regression is a good model for seasonal forecasting (correct me if I'm wrong). But then I realized that my history data is specific to 2017, because in 2018 and 2019 the holiday dates change.

+ +

What model should I use to forecast based on the new holiday setup? Can ARIMA or Multiple Linear Regression still be used? Or do I need another model? Where should I start with this?

+",23393,,,,,3/23/2019 2:00,What Model Used for Forecasting Sales with Dynamic Holiday,,0,0,,,,CC BY-SA 4.0 +11391,2,,11374,3/23/2019 2:15,,4,,"

So, if I understood correctly: you have data from 2 sensors in time, Ar flow and BacksGas flow (SCCM, what is that?), and you have that data for multiple products.

+

1 - Since it is relatively low dimensional, you may try using raw data with K-Means or Self Organizing Maps.

+

2 - If you are searching for anomalies in time, you might try feature engineering with things like the flow change:

+
    +
  • i) Take 2 points in time and "derivate" (Point A - Point B)/(Time A - Time B)
  • +
  • ii) Take the result for A and repeat the process as a second derivative, try that for many levels of "derivative"
  • +
  • iii) Machine learning application for sensor failure detection in polymerization process. In: Simpósio Brasileiro de Telecomunicações e Processamento de Sinais.
  • +
+

3 - You might want to check other Time-Series related research, even on regression and classification, since they may give you ideas on relevant features or approaches:

+
    +
  • i) An ensemble approach to time dependent classification (10.1109/ICMLA.2018.00164)
  • +
  • ii) Multivariate Time Series for Data-driven Endpoint Prediction in the Blast Oxygen Furnace (DOI: 10.1109/ICMLA.2018.00231)
  • +
+

4 - Note that anomalies are outliers, so analysing data points that are distant from the mean of any features you might engineer over time can help. A model based on RBF kernels might pick up some information.

+

5 - Since you don't know the number of clusters, you can try hierarchical clustering.

+

6 - Don't forget to talk to people in your field about what to expect from this.

+

If you can elaborate on the type of anomalies you're looking for, I may come up with better ideas. These are pretty general tips.

+

I may have misunderstood what you wanted to cluster:

+

7 - If you want to detect anomalies in:

+
    +
  • i) products' behavior: the tips above were mostly for this, and you might use all 23 features in short time intervals and cluster like this:
  • +
+
    For each product in dataset {
+
+      samples = {} # empty set
+
+      For each time interval in product {
+
+        features in interval add to samples
+
+      }
+
+      perform clustering method using samples
+
+    }
+
+

This algorithm would help cluster periods of time where the product behaved weirdly (a runnable sketch of this idea is given at the end of this answer).

+
    +
  • ii) sensors behavior then for something like that you might try tip 8

    +
  • +
  • iii) the products themselves: create a feature vector with all available information for each product, then cluster that. Since it is composed of a variable time interval and 23 features per product (plus time), you might want to use a bit of dimensionality reduction (PCA should work fine, since it gives you the direction of greatest variance and that is what you're looking for). Using a regressor to model each product's behavior and clustering that could also be useful (create a regressor in time for each product to model its behavior, then cluster the weights of the regressors as the product representation)

    +
  • +
+

8 - Let's say you want to identify the sensors' behavior: it might be helpful to model the sensor output as a function f(time, extra variables) and do regression over it (try linear, then try non-linear). Points in time where the regressor performs poorly at predicting the output might indicate that the behavior there is anomalous.
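As referenced under item 7.i above, here is a runnable sketch of that interval-clustering idea (pandas/scikit-learn; the file name and interval length are assumptions, while the column names come from the table in the question):

    import pandas as pd
    from sklearn.cluster import KMeans

    df = pd.read_csv('sensors.csv')  # hypothetical file with the columns shown in the question

    for product_id, product in df.groupby('ContextID'):
        # one feature vector per fixed-size time interval (here: 5 consecutive readings)
        values = product[['Ar_Flow_sccm', 'BacksGas_Flow_sccm']].to_numpy()
        samples = [values[i:i + 5].flatten() for i in range(0, len(values) - 5, 5)]
        if len(samples) < 2:
            continue
        labels = KMeans(n_clusters=2).fit_predict(samples)
        # small clusters, or points far from their cluster centre, are candidate anomalies
        print(product_id, labels)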

+",23392,,1847,,9/18/2020 7:26,9/18/2020 7:26,,,,4,,,,CC BY-SA 4.0 +11394,1,,,3/23/2019 6:26,,2,1480,"

I have a set of topics and each topic consists of a set of words. I want to make meaningful English sentences from these words. Each topic consist of 5 to 10 words and these words are relevant to each other, like {code, language, test, write and function} and {class, public, method, string, int} are two sets. I want to generate a sentence from these set of words using API.

+",23399,,2444,,8/20/2019 22:40,6/26/2020 1:02,How can I make meaningful English sentences from given set of words?,,1,4,,,,CC BY-SA 4.0 +11399,2,,11388,3/23/2019 9:35,,1,,"

Your issue is related to how you are representing your rewards, and not anything to worry about for the MDP.

+ +

You have chosen to use a matrix to represent your reward function, which maps $(s,s')$ to $r$. If some transition $s \rightarrow s'$ doesn't happen, then it doesn't matter at all what you put there for the reward value for $(s,s')$. It is only because you have decided to use a matrix for this that you even need to think about it.

+ +

A complete reward function would map $s, a, s'$ to a scalar value, and can also include a random factor. You don't need to use all the factors - a MDP can use any or all of them, and all the MDP theory is still correct. In your case, it looks like arriving in a ""goal state"" $s' = C$ is what triggers a +1 reward. So instead of a matrix you could define a function:

+ +

$$r(s,a,s') = \begin{cases} -1, & \text{if } s' = A\\ 1, & \text{if } s' = C\\ 0, & \text{otherwise} \end{cases}$$

+ +

This is equally valid in the MDP as your matrix, and avoids the issue of storing data about impossible transitions.
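In code form, that reward function is just (Python, with the states written as labels):

    def reward(s, a, s_next):
        if s_next == 'A':
            return -1
        if s_next == 'C':
            return 1
        return 0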

+ +
+

I thought the transition function would handle this issue since I will define there what is possible and what is not.

+
+ +

Yes that should be the case. If your transition function does not allow for certain state changes, then there is no need to handle them in any particular way in the reward function.

+ +
+

But anyway it is said that for a horizon of one the imm[e]diate reward is given and the transition function isn't even considered

+
+ +

I am not sure where you have read this, but it does not relate to your representation issue or resolving ""impossible"" state transitions.

+ +

If you are using a model-based method, such as Policy Iteration or Value Iteration, then you do use the transition function and reward function - more or less directly in the form of the Bellman equations. In this case, the transition function will assign a weight of 0 to the impossible transitions, so the reward value does not matter (and efficient code would likely skip even looking up or calculating the reward in that case).

+ +

If you are using a model-free method, such as Q learning, then the agent does not use the transition function or the reward function directly. They are part of the environment that it is learning. However, any code for the simulated environment has to implement the physics of your world, and that includes using the transition function to resolve what happens when the agent takes an action. In that case, the simulated model of the environment would prevent the agent ever experiencing impossible transitions, so there is never any need to calculate the reward for them or allow for them as edge cases in a reward function (you might still choose to check validity and/or return something as defensive programming of the reward function, but it is not required for the MDP and reinforcement learning to work).

+",1847,,1847,,3/23/2019 11:40,3/23/2019 11:40,,,,0,,,,CC BY-SA 4.0 +11400,2,,11389,3/23/2019 9:49,,2,,"

The term you're looking for is ""replacement schemes"". As far as I'm aware, the primary reference on this is still Replacement Schemes for Transposition Tables, although it is a fairly old paper from 1994.

+ +

I'll very briefly summarize the seven different schemes listed in this paper, but full text of the paper is also freely available and contains more info:

+ +
    +
  1. Deep: preserve position for which the deepest subtree below it was searched.
  2. +
  3. New: Always replace an old entry with the newest entry.
  4. +
  5. Old: Opposite of New. Paper mentions only including it for the sake of completeness, implies that it's probably not a good scheme.
  6. +
  7. Big1: similar to Deep, but uses the size (in number of nodes) of subtree searched, rather than just its depth. If the same position (i.e. table entry) occurs multiple times in a subtree, it is only counted once.
  8. +
  9. BigAll: same as above, but really counts the number of nodes rather than number of positions (so a position that occurs multiple times is counted multiple times).
  10. +
  11. TwoDeep: In this scheme, the TT is modified to contain two positions per table entry, rather than just one position. You could view it as the table having two ""layers"". In this scheme, a new position is moved into the first ""slot"" if it has the deepest subtree (as in the Deep scheme), moving the previous position in the first slot into the second slot. If a new position doesn't have a new deepest subtree, it is instead moved into the second slot.
  12. +
  13. TwoBig1: Similar to the above, but using Big1-style replacement rather than Deep-style replacement.
  14. +
+ +

If I recall correctly, the Two-layered schemes tend to perform best (of course they do require approximately twice as much memory for the same number of bits per key).
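For concreteness, here is a minimal sketch of how a depth-preferred (Deep-style) replacement rule is often implemented (illustrative Python, not taken from the paper):

    class Entry:
        def __init__(self, key, value, depth):
            self.key, self.value, self.depth = key, value, depth

    def store(table, key, value, depth):
        index = key % len(table)
        old = table[index]
        # replace if the slot is empty, holds the same position, or the new subtree is deeper
        if old is None or old.key == key or depth >= old.depth:
            table[index] = Entry(key, value, depth)

    table = [None] * 1024
    store(table, key=123456789, value=0.5, depth=7)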

+",1641,,,,,3/23/2019 9:49,,,,2,,,,CC BY-SA 4.0 +11404,2,,11212,3/23/2019 15:04,,0,,"

I found that we shouldn't eliminate those actions, because RIGHTFIRE and LEFTFIRE do RIGHT + FIRE and LEFT + FIRE, respectively. So they are different actions. I post this just to clarify in case someone faces the same doubt.

+",9818,,2444,,12/27/2020 13:57,12/27/2020 13:57,,,,0,,,,CC BY-SA 4.0 +11405,1,,,3/23/2019 15:53,,12,3251,"

Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful in order to deal with or attenuate the issues related to the curse of dimensionality, where data becomes sparse and it is more difficult to obtain ""statistical significance"". So, autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.

+ +

Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?

+ +

Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?

+",2444,,,,,4/25/2020 19:04,What are the purposes of autoencoders?,,4,0,,,,CC BY-SA 4.0 +11406,2,,11405,3/23/2019 15:53,,3,,"

A use case of autoencoders (in particular, of the decoder or generative model of the autoencoder) is to denoise the input. This type of autoencoder, called a denoising autoencoder, takes a partially corrupted input and attempts to reconstruct the corresponding uncorrupted input. There are several applications of this model. For example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.
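As a rough illustration, here is a minimal denoising-autoencoder sketch (Keras; the architecture, noise level, and use of MNIST are arbitrary choices for the example):

    import numpy as np
    from tensorflow import keras

    inputs = keras.Input(shape=(784,))
    encoded = keras.layers.Dense(64, activation='relu')(inputs)       # encoder -> latent code
    decoded = keras.layers.Dense(784, activation='sigmoid')(encoded)  # decoder -> reconstruction
    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

    (x_train, _), _ = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
    x_noisy = np.clip(x_train + 0.3 * np.random.randn(*x_train.shape), 0.0, 1.0)

    # train to map corrupted inputs back to the clean inputs
    autoencoder.fit(x_noisy, x_train, epochs=5, batch_size=256)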

+ +

Autoencoders and PCA are related:

+ +
+

an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but that they are not identical to the loading vectors.

+
+ +

For more info, have a look at the paper From Principal Subspaces to Principal Components with Linear Autoencoders (2018), by Elad Plaut. See also this answer, which also explains the relation between PCA and autoencoders.

+",2444,,2444,,3/23/2019 15:59,3/23/2019 15:59,,,,0,,,,CC BY-SA 4.0 +11407,1,11428,,3/23/2019 16:03,,5,122,"

Raul Rojas' Neural Networks A Systematic Introduction, section 8.1.2 relates off-line backpropagation and on-line backpropagation with Gauss-Jacobi and Gauss-Seidel methods for finding the intersection of two lines.

+ +

What I can't understand is how the iterations of on-line backpropagation are perpendicular to the (current) constraint. More specifically, how is $\frac12(x_1w_1 + x_2w_2 - y)^2$'s gradient, $(x_1,x_2)$, normal to the constraint $x_1w_1 + x_2w_2 = y$?

+",14892,,23790,,4/7/2019 20:52,4/7/2019 20:52,Are on-line backpropagation iterations perpendicular to the constraint?,,2,0,,,,CC BY-SA 4.0 +11409,2,,10764,3/23/2019 19:00,,0,,"

Andrew Ng explains this in great detail in his Deep Learning course, as shown in the image below. He also focuses on some corner cases that can cause this problem:

+ +
    +
  1. some mislabeled examples in the dataset.
  2. +
  3. the size of your mini-batch (or consider changing to full-batch gradient descent).
  4. +
+ +

+",21907,,,,,3/23/2019 19:00,,,,1,,,,CC BY-SA 4.0 +11410,2,,11394,3/23/2019 20:42,,1,,"

Two Approaches:

+ +
    +
  • Naive Bayes
  • +
  • LSTM
  • +
+ +

Train Naive Bayes on a whole dataset learning the probability of the next word given a word.

+ +

You can even go with any LSTM approaches, but I'd bet on Naive Bayes.

+ +

Eg:

+ +

text: hello how are you hello how are you hello No how

+ +

to get the suggestion of next word depending on current word - hello

+ +

p(how | hello) = 2/3

+ +

p(No | hello) = 1/3

+ +

take argmax of probabilities.

+ +

Also remember to smooth, and to train on a huge dataset. Training just amounts to computing these probabilities beforehand.
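A tiny sketch of this counting scheme (what is described here is essentially a bigram model; plain Python):

    from collections import Counter, defaultdict

    tokens = 'hello how are you hello how are you hello No how'.split()

    counts = defaultdict(Counter)
    for current, following in zip(tokens, tokens[1:]):
        counts[current][following] += 1   # how often `following` comes right after `current`

    def suggest(word):
        total = sum(counts[word].values())
        probs = {w: c / total for w, c in counts[word].items()}
        return max(probs, key=probs.get), probs

    print(suggest('hello'))   # ('how', {'how': 2/3, 'No': 1/3})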

+ +

Hope it helps ;)

+",20522,,,,,3/23/2019 20:42,,,,0,,,,CC BY-SA 4.0 +11411,2,,10778,3/23/2019 20:49,,0,,"

BioID has a liveness detection algorithm that you can test here and it is free!

+ +

You can get their model at their GitHub. I think it is open-source (check the license).

+ +

Their model relies on facial landmarking. They also have algorithms for: face verification, photo verification, and cloud-based solutions in BWS.

+",23392,,,,,3/23/2019 20:49,,,,0,,,,CC BY-SA 4.0 +11412,2,,11405,3/23/2019 21:29,,2,,"

PCA is a linear method that creates a transformation capable of changing the vectors' projections (changing the axes).

+ +

Since PCA looks for the direction of maximum variance, it usually has high discriminative power, BUT it is not guaranteed that the direction of most variance is the direction of most discriminability.

+ +

LDA is a linear method that creates a transformation capable of finding the direction that is most relevant for deciding whether a vector belongs to class A or B.

+ +

PCA and LDA have non-linear Kernel versions that might overcome their linear limitations.

+ +

Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and might perform better than PCA and LDA in a lot of cases.

+ +

There is probably no single best machine learning algorithm for every task; sometimes Deep Learning and neural nets are overkill for simple problems, and PCA and LDA might be tried before other, more complex, dimensionality reduction methods.

+",23392,,,,,3/23/2019 21:29,,,,4,,,,CC BY-SA 4.0 +11413,2,,7879,3/23/2019 22:05,,0,,"

Offline tracking requires knowledge of the future of an object.

+ +

For real-time applications such as the ones you cited, offline tracking might be prohibitive, given that the method needs to wait for future frames.

+",23392,,,,,3/23/2019 22:05,,,,0,,,,CC BY-SA 4.0 +11414,2,,10973,3/23/2019 23:50,,0,,"

You might want to check out this paper relating to Phizaz's comment: Asynchronous Methods for Deep Reinforcement Learning (specifically search for Hogwild).

+",21180,,2444,,3/24/2019 13:38,3/24/2019 13:38,,,,1,,,,CC BY-SA 4.0 +11415,2,,11405,3/24/2019 0:26,,6,,"

It is important to think about what sort of patterns in the data are being represented.

+ +

Suppose that you have a dataset of greyscale images, such that every image is a uniform intensity. As a human brain you'd realise that every element in this dataset can be described in terms of a single numeric parameter, which is that intensity value. This is something that PCA would work fine for, because each of the dimensions (we can think of each pixel as a different dimension) is perfectly linearly correlated.

+ +

Suppose instead that you have a dataset of black and white 128x128px bitmap images of centred circles. As a human brain you'd quickly realise that every element in this dataset can be fully described by a single numeric parameter, which is the radius of the circle. That is a very impressive level of reduction from 16384 binary dimensions, and perhaps more importantly it's a semantically meaningful property of the data. However, PCA probably won't be able to find that pattern.

+ +

Your question was ""Why can't we simply use PCA, if the purpose is dimensionality reduction?"" The simple answer is that PCA is the simplest tool for dimensionality reduction, but it can miss a lot of relationships that more powerful techniques such as autoencoders might find.

+",23413,,,,,3/24/2019 0:26,,,,0,,,,CC BY-SA 4.0 +11418,1,,,3/24/2019 5:26,,5,220,"

Twin Delayed Deep Deterministic (TD3) policy gradient is inspired by both double Q-learning and double DQN. In double Q-learning, I understand that Q1 and Q2 are independent because they are trained on different samples. In double DQN, I understand that target Q and current Q are relatively independent because their parameters are quite different.

+ +

But in TD3, Q1 and Q2 are trained on exactly the same target. If their parameters are initialized the same, there will be no difference in their output and the algorithm will be equal to DQN. The only source of independence/difference of Q2 to Q1 I can tell is the randomness in the initialization of their parameters. But with training on the same target, I thought this independence will become smaller and smaller as they converge to the same target values. So I don't quite understand why TD3 works in combating overestimation in Q-learning.

+",23420,,2444,,4/20/2020 11:17,5/10/2022 15:06,Why Q2 is a more or less independant estimate in Twin Delayed DDPG (TD3)?,,1,0,,,,CC BY-SA 4.0 +11421,1,,,3/24/2019 8:40,,5,178,"

I am training a modified VGG-16 to classify crowd density (empty, low, moderate, high). 2 dropout layers were added at the end of the network, each one after one of the last 2 FC layers.

+ +

network settings (a Keras-style sketch of these settings is given after this list):

+ +
    +
  • training data contain 4381 images categorized under 4 categories (empty, low, moderate, high), 20% of the training data is set for validation. test data has 2589 images.

  • +
  • training is done for 50 epochs (training/validation accuracy drops after 50+ epochs)

  • +
  • lr=0.001, decay=0.0005, momentum=0.9

  • +
  • loss= categorical_crossentropy

  • +
  • augmentation for (training, validation and testing data): rescale=1./255, brightness_range=(0.2,0.9), horizontal_flip

  • +
+ +
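Roughly, these settings correspond to the following Keras-style configuration (a sketch, not my exact code):

    from keras.optimizers import SGD
    from keras.preprocessing.image import ImageDataGenerator

    optimizer = SGD(lr=0.001, decay=0.0005, momentum=0.9)
    # model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

    datagen = ImageDataGenerator(rescale=1./255,
                                 brightness_range=(0.2, 0.9),
                                 horizontal_flip=True)
    # model.fit_generator(datagen.flow_from_directory('train/'), epochs=50, ...)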

With the above-stated settings, I get the following results:

+ +
    +
  • training evaluation loss: 0.59, accuracy: 0.77

  • +
  • testing accuracy 77.5 (correct predictions 2007 out of 2589)

  • +
+ +

Regarding this, I have two concerns:

+ +
    +
  1. Is there anything else I could do to improve accuracy for both training and testing?

  2. +
  3. How can I know if this is the best accuracy I can get?

  4. +
+",23268,,2444,,6/14/2020 22:20,6/14/2020 22:20,How do I improve accuracy and know when to stop training?,,2,1,,,,CC BY-SA 4.0 +11422,5,,,3/24/2019 10:19,,0,,,-1,,-1,,3/24/2019 10:19,3/24/2019 10:19,,,,0,,,,CC BY-SA 4.0 +11423,4,,,3/24/2019 10:19,,0,,For questions related to AI methods of dimensionality reduction (e.g. PCA or autoencoders).,2444,,2444,,3/25/2019 19:22,3/25/2019 19:22,,,,0,,,,CC BY-SA 4.0 +11424,5,,,3/24/2019 10:20,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Curse_of_dimensionality.

+",2444,,2444,,7/21/2019 17:17,7/21/2019 17:17,,,,0,,,,CC BY-SA 4.0 +11425,4,,,3/24/2019 10:20,,0,,"For questions related to the concept of ""curse of dimensionality"", which refers to the problem of an exponential increase in volume which occurs when adding extra dimensions to the Euclidean (or input) space. In machine learning and statistics, the curse of dimensionality implies that more data is required to achieve statistical significance, as the number of dimensions of the input increases. The expression was introduced by Richard Bellman in 1957.",2444,,2444,,7/21/2019 17:17,7/21/2019 17:17,,,,0,,,,CC BY-SA 4.0 +11427,1,,,3/24/2019 19:37,,8,338,"

Background Context:

+ +

In the past, I've heavily applied various ""code quality metrics"" to statically analyze code and provide an inkling of how ""maintainable"" it is, using things like the Maintainability Index alluded to here.

+ +

However, a problem that I face is whether a language has libraries that effectively measure such metrics - only then is it usable, otherwise it's rather subjective/arbitrary. Given the plethora of languages that one has to deal with in an enterprise system, this can get rather unwieldy.

+ +

Proposed Idea:

+ +

Build and train an Artificial Neural Network that ""ingests a folder of code"" (i.e., all files within that folder/package are assumed to house the ""project"" whose quality metrics we'd like to compute). This may again be language dependent but let's assume it exists for a language that I'm having the hardest time with (for measuring ""maintainability""): Scala.

+ +

Using numeric metrics like McCabe's complexity or cyclomatic complexity may be ""conventional"", but they are not entirely relevant. A few things, like class/method length, are almost always relevant no matter the language. Thus, providing a few ""numeric metrics"" plus an abstract notion of readability obtained by subjective evaluation would be a good balance of ""inputs"" for training an ANN. The output would be either a classification of maintainability like low, medium, high, etc., or a number between 0 and 1.

+ +

Question:

+ +

Has this been tried and are there any references? I spent some time digging via Google Scholar but didn't find anything ""usable"" or worthwhile. It's okay if it's not Scala, but have ANNs been used for measuring code quality (i.e., static analysis) and what are the benefits or disadvantages of something like this?

+ +

PS: Hopefully, the question isn't too broad, but if so, please let me know in the comments and I'll try make it as specific as possible.

+",23432,,,,,3/25/2019 15:19,Are there existing examples of using neural networks for static code analysis?,,1,0,,,,CC BY-SA 4.0 +11428,2,,11407,3/24/2019 20:38,,1,,"

Answer by Theo Bandit at maths stackexchange

+ +
+

If you choose two points $(w_1, w_2), (v_1, v_2)$ along this line, then $$(x_1, x_2) \cdot ((w_1, w_2) - (v_1, v_2)) = x_1 w_1 + x_2 w_2 - (x_1 v_1 + x_2 v_2) = y - y = 0.$$ That is, the direction $(x_1, x_2)$ is perpendicular to any vector lying along the line, i.e. $(x_1, x_2)$ is normal to the line.

+
+",14892,,,,,3/24/2019 20:38,,,,0,,,,CC BY-SA 4.0 +11429,2,,9141,3/24/2019 21:15,,8,,"

this experiment by Stephen Mayhew suggests that BERT is lousy at sequential text generation:

+

http://mayhewsw.github.io/2019/01/16/can-bert-generate-text/

+
+
although he had already eaten a large meal, he was still very hungry
+
+

As before, I masked “hungry” to see what BERT would predict. If it could predict it correctly without any right context, we might be in good shape for generation.

+

This failed. BERT predicted “much” as the last word. Maybe this is because BERT thinks the absence of a period means the sentence should continue. Maybe it’s just so used to complete sentences it gets confused. I’m not sure.

+

One might argue that we should continue predicting after “much”. Maybe it’s going to produce something meaningful. To that I would say: first, this was meant to be a dead giveaway, and any human would predict “hungry”. Second, I tried it, and it keeps predicting dumb stuff. After “much”, the next token is “,”.

+

So, at least using these trivial methods, BERT can’t generate text.

+
+",23433,,-1,,6/17/2020 9:57,3/24/2019 21:15,,,,0,,,,CC BY-SA 4.0 +11431,1,,,3/24/2019 23:20,,5,195,"

I was looking for an approach to recognise musical notes from photos.

+

I found this repository https://github.com/mpralat/notesRecognizer. However, it doesn't seem good enough. If you look into the bad folder, you can see that just tiny variations in lighting can already cause problems. One should be able to read musical notes from lower-quality images.

+

I found other projects. However, they all use high resolution images.

+ +

Now, this is unsatisfying if you just want to snap a photo of some tunes and have them recognized.

+

So, what could one do to achieve a good solution?

+

I was thinking about treating the musical notes just like written letters. A computer can easily learn written characters such as Arabic numerals.

+

I wonder, though, how easy it would be for a non-Arabic script. For example, in Chinese or Japanese, several characters can combine into one.

+

The same applies to musical notes: they can be connected and thereby form something slightly different. For example,

+

+

or:

+

+

in contrast to just simple notes like:

+

+

What would be a good approach to recognize musical notes even in slightly low-resolution or somewhat blurry, deformed images?

+

I'm not asking to read a symphony out of a thumbnail, but rather to handle less-than-optimal captures.

+",23435,,2444,,3/17/2022 23:45,3/17/2022 23:45,How can we recognise musical notes in low-resolution or blurry images?,,2,1,,,,CC BY-SA 4.0 +11432,2,,11421,3/25/2019 1:38,,5,,"
+

Is there anything else I could do to improve accuracy for both training and testing?

+
+ +

Yes, of course, there are a lot of methods if you want to try to improve your accuracy, some that I can mention:

+ +
    +
  • Try to use a more complex model: ResNet, DenseNet, etc.
  • +
  • Try to use other optimizers: Adam, Adadelta, etc.
  • +
  • Tune your hyperparameters (e.g. change your learning rate, momentum, rescale factor, convolution size, number of feature maps, epochs, neurons, FC layers; a small code sketch is given further below)
  • +
  • Try to analyze your data: with ~75% accuracy and 4 categories, is it possible that there is one category that is particularly difficult to classify?
  • +
+ +

In essence, you have to do a lot of experiments with your model until you think ""it is enough"" (if you have a deadline). If you don't have a hard deadline, you can keep improving and updating your ML model.
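
+ +

As a rough sketch of the optimizer/hyperparameter point above (PyTorch assumed; TinyNet is just a stand-in for whatever model you are already training):

+ +

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32 * 32 * 3, 4)
    def forward(self, x):
        return self.fc(x.flatten(1))

candidates = {
    'sgd': lambda params: torch.optim.SGD(params, lr=0.01, momentum=0.9),
    'adam': lambda params: torch.optim.Adam(params, lr=1e-3),
    'adadelta': lambda params: torch.optim.Adadelta(params),
}
for name, make_optimizer in candidates.items():
    model = TinyNet()
    optimizer = make_optimizer(model.parameters())
    # ... run your usual training loop with this optimizer,
    # then record the validation accuracy for this configuration ...
    print(name, 'ready')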

+ +
+

How can I know if this is the best accuracy I can get?

+
+ +

No, you can't know until you compare it with other models/hyperparameters. If you do more experiments (e.g. in the ways I mentioned above) or compare with other people's experiments that use the same data, you'll find which one is the best. For an academic paper, for example, you typically need to compare at least 3 to 4 similar models or experiment with hundreds of different hyperparameter combinations.

+",16565,,,,,3/25/2019 1:38,,,,1,,,,CC BY-SA 4.0 +11433,1,11451,,3/25/2019 4:26,,3,143,"

I am trying to reproduce the results for the simple grid-world environment in [1]. But it turns out that using a dynamically learned PBA makes the performance worse and I cannot obtain the results shown in Figure 1 (a) in [1] (with the same hyperparameters). Here is the result I got: +

+ +

The issue I found is that the learning procedure is stuck due to bad PBA in the early stages of training. Without PBA, Sarsa can converge well.

+ +

Has anyone tried the method before? I am really puzzled as to how the authors obtained these good results. There are some top conference papers using the same method as [1], for example, [2] and [3].

+ +

[1] Expressing Arbitrary Reward Functions as Potential-Based Advice

+ +

[2] Learning from demonstration for shaping through inverse reinforcement learning

+ +

[3] Policy Transfer using Reward Shaping

+ +

Is the method itself defective, or is there anything wrong with my code? Here is part of my code:

+ +
import copy
+import numpy as np
+import pandas as pd
+
+def expert_reward(s, action):
+    if (action == RIGHT) or (action == DOWN):
+        return 1.0
+    return 0.0
+
+class DynamicPBA:
+    def __init__(self, actions, learning_rate=0.1, reward_decay=0.99):
+        self.lr = learning_rate
+        self.gamma = reward_decay
+        self.actions = actions
+        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64) #q table for current time step
+        self.q_table_ = pd.DataFrame(columns=self.actions, dtype=np.float64) #q table for the next time step
+        self.check_state_exist(str((0,0)))
+
+    def learn(self, s, a, r, s_, a_): #(s,a) denotes current state and action, r denotes reward, (s_, a_) denotes the next state and action
+        self.check_state_exist(s_)
+        q_predict = self.q_table.loc[s, a]
+        q_target = r + self.gamma * self.q_table.loc[s_, a_]
+        self.q_table.loc[s, a] = self.q_table.loc[s, a] + self.lr * (q_target - q_predict)
+
+    def update(self):
+        self.q_table = copy.deepcopy(self.q_table_)
+
+    def check_state_exist(self, state):
+        if state not in self.q_table.index:
+            # append new state to q table
+            self.q_table = self.q_table.append(
+                pd.Series(
+                    [0]*len(self.actions),
+                    index=self.q_table.columns,
+                    name=state,
+                    )
+                )
+            self.q_table_ = self.q_table_.append(
+                pd.Series(
+                    [0]*len(self.actions),
+                    index=self.q_table_.columns,
+                    name=state,
+                    )
+                )
+
+#######Main part
+
+RL = SarsaTable(actions=list(range(len(actions_dict))), reward_decay=0.99, learning_rate=0.05)
+expert = DynamicPBA(actions=list(range(len(actions_dict))), learning_rate=0.1, reward_decay=0.99)
+for episode in range(100):
+    # initial observation
+    s = (0,0)
+    env.reset(s)
+    action = RL.choose_action(str(s))
+
+    r_episode_s = 0
+    r_episode = 0
+
+    current_step = 0
+    while True:        
+
+        # RL take action and get next observation and reward
+        s_, _, reward, status = env.step(action)
+        current_step += 1
+
+        action_ = RL.choose_action(str(s_))
+        # update dynamic potentials
+        expert_r = -expert_reward(s, action) 
+        expert.learn(str(s), action, expert_r, str(s_), action_)
+
+        # compute PBA
+        F = expert.gamma * expert.q_table_.loc[str(s_), action_] - expert.q_table.loc[str(s), action]
+
+        #update expert PBA table
+        expert.update()
+
+        RL.learn(str(s), action, reward+F, str(s_), action_, status)
+
+        # swap observation
+        s = s_
+        action = action_
+
+        # break while loop when end of this episode
+        if status != 'not_over':
+            break
+        if current_step>10000:
+            print(episode, r_episode, r_episode_s, current_step)
+            break     
+        # learning rate decay
+        RL.lr = RL.lr*0.999
+#     expert.update()
+
+",23429,,2444,,11/9/2020 17:35,11/9/2020 17:35,Expressing Arbitrary Reward Functions as Potential-Based Advice (PBA),,1,0,,,,CC BY-SA 4.0 +11435,2,,11431,3/25/2019 6:01,,-2,,"

Refer to MNIST on Kaggle for an example of how to train a model to recognize a given set of symbols; the same idea applies to your set of notes.

+ +

+ +

If you start training, the model will learn patterns from the pixels in your images.

+ +

The issue comes with blurry images: your last two example pictures differ only by minute changes, which will be a difficult distinction for the computer to make.

+ +

There is one method that works almost every time, but it is resource-intensive and needs a plethora of data to train: a deep neural network (DNN).

+ +

This is my take on your question.

+",9991,,,,,3/25/2019 6:01,,,,0,,,,CC BY-SA 4.0 +11436,1,,,3/25/2019 9:19,,4,410,"

I am no expert in the field of AI so I apologize if this is a simple/easy question. I was trying to implement a network similar to OpenAI's for another game and I noticed that I did not fully understand how the network worked.

+

Below is the image of OpenAI five's network +

+

Basic question

+

How is the data concatenated, i.e. what is the dimension of the data right after the concatenation, just before entering the LSTMs? Below are my thoughts, which I provide for clarification's sake.

+

1st Interpretation

+

In the blue area for units, my initial understanding was that for each visible unit, the output of the max-pool is concatenated along the columns. So, assuming the number of rows is 1 (as a 1d max-pool is being applied for each unit), the number of columns is n and there are N visible units, the final size of the matrix when concatenated is $(1,n\cdot N)$, with a few extra columns given by the pickups and the like, as shown on the left-hand side of the model.

+

Problem with this interpretation

+

As the number of units a player can see per each turn is not constant, under this interpretation, I suspect that the fully connected layer after the concatenation layer cannot do its job as matrix multiplication becomes impossible with a variable number of columns.

+

Possible solution

+

One possible solution to this is to set a maximum to the number of observed units as $N_{max}$ and pad with constants if some units are not observed. Is this the case?
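
+ +

A toy sketch of that padding idea (this is my own illustration, not OpenAI's actual code; N_MAX and FEATURES are made-up numbers standing in for the cap on visible units and the per-unit feature size):

+ +

import torch

N_MAX, FEATURES = 16, 8

def pad_and_pool(unit_outputs):
    # unit_outputs: (num_visible_units, FEATURES), with num_visible_units <= N_MAX
    padded = torch.full((N_MAX, FEATURES), -1e9)     # pad with a very negative constant
    padded[:unit_outputs.shape[0]] = unit_outputs
    return padded.max(dim=0).values                  # fixed-size vector, whatever the unit count

print(pad_and_pool(torch.randn(5, FEATURES)).shape)  # torch.Size([8])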

+

2nd Interpretation

+

My 2nd interpretation is that the data is concatenated along the rows. In this case, I can see that the data can pass through a fully connected layer because the number of columns can remain constant. Under this assumption, I decided that right before going through the LSTM, the data is reshaped to (batch size, number of rows, number of columns).

+

Problems with this interpretation

+

While I found this interpretation to be more appealing, I noticed that under this train of thought, the LSTM is used just to associate the input data and is not associated with time(the time step for the LSTM is simply the next row of data rather than actual time). I know that this is not especially a problem but I thought that there is no special need to use an LSTM here as in this second interpretation, the order of the data holds no special meaning. But is this the case?

+

I apologize in advance for any unclear points. Please tell me in the comments and I'll try to clarify as best as I can!

+",23443,,-1,,6/17/2020 9:57,8/5/2023 11:04,How did the OpenAI 5 for Dota concatenate units?,,1,0,,,,CC BY-SA 4.0 +11438,1,,,3/25/2019 9:55,,2,718,"

I have been looking at BERT for many tasks. I would like to compare the performance of answering an FAQ using BERT semantic similarity versus BERT Q/A. However, I'm not sure it is a good idea to use semantic similarity for this task. If it is, do you think it is possible to find a dataset to fine-tune my algorithm?

+",23154,,2444,,11/1/2019 2:37,3/30/2020 5:03,Is it a good idea to use BERT to answer a FAQ with semantic similarity?,,2,0,,,,CC BY-SA 4.0 +11439,1,,,3/25/2019 11:22,,2,491,"

I'm interested in building a (deep) RL agent for solving a continuous problem (which splits something into portions).

+ +

In all the examples I've seen so far, e.g. solving the continuous lunar lander, a $\tanh$ output layer activation was always used, which produces values between $-1$ and $+1$.

+ +

Is this just because it fits the use case or is this a general rule for RL agents with continuous action spaces?

+ +

What if I just want values between $0$ and $1$? Could I simply use a $\operatorname{softmax}$ activation for my output layer?

+",19928,,2444,,5/15/2019 15:11,5/16/2019 3:41,Regarding the output layer's activation function for continuous action space problems,,1,0,,,,CC BY-SA 4.0 +11440,2,,11381,3/25/2019 12:59,,3,,"

There are two intermixed elements in your question:

+ +
    +
  1. Why have people already studied causality in AI?

  2. +
  3. Why would people like to continue studying causality in AI?

  4. +
+ +

tl;dr: AI systems can't function in the real world without some way of understanding uncertainty. The best ways we have to understand uncertainty don't work well unless we also understand causal structure. Getting most of the way towards this understanding was a huge accomplishment for AI research, but there's still a big problem waiting to be solved here.

+ +

For part 1, let's think about where research into causality comes from. Pearl originally set out to solve a major problem plaguing AI researchers in the 1980s: reasoning under uncertainty. This area is so central to AI systems that it has its own, large, conference: UAI. The need for reasoning under uncertainty arose because the real world is filled with uncertainty. Consider even a simple problem, like having a robot navigate an empty room. Even if the robot knows its initial position exactly, it is not likely to know its true position for long. The robot's wheels might have a certain specification, but wheels stick and slip (friction), and they do so unpredictably (e.g. maybe one part of the floor didn't get polished as much as the others). The robot might have sensors, but sensors are imperfect. Does a light sensor value of 0.25 mean a wall is near, or just that the sun is coming through the window at an unfortunate angle? Or maybe just that the shade of paint is slightly different there? Does our acoustic sensor reading of 1.35 mean that we're 1.35 meters from the wall, or did the signal get reflected, and return to us by a different path?

+ +

One of Pearl's major contributions was showing how we can use the rules of probability to reason about these events correctly. Although others had done the basics of this long ago, Pearl proposed the idea of Bayesian Networks. Thrun and many others used these techniques to solve problems like the robot navigation task discussed above.

+ +

A problem with Bayesian Networks is that, if built incorrectly, they lose much of their efficiency benefits. The correct construction is usually the one that best captures known causal relations between factors. Further, algorithms for inference in a Bayesian Network cannot easily answer questions about counterfactuals. They cannot tell our robot what would happen in a world where certain actions were taken. They can only say whether certain behaviors are likely to co-occur.

+ +

These outstanding issues led Pearl to work on causality and causal networks. It is important for our systems to be able to answer counterfactual questions, because often the answers to those questions determine how the system ought to act. Pearl's do-calculus (nicely summarized here, and in Pearl's The Book of Why) has solved this problem, and shown how causes can often be inferred from observation alone. It's super exciting!

+ +

Hopefully you're now convinced that Causality was worth studying within AI. You might now wonder why it's still worth studying.

+ +

The main open problem right now is that to do causal reasoning, we need to already have a network that describes causal relations. These networks are hard to construct by hand, so we'd like to be able to learn the structures of these networks. Unfortunately, this looks like a very hard problem, both to solve exactly (it's #P-hard if I recall correctly), and to approximate. There are no known general algorithms, at least as of 2018. Getting this would be a major breakthrough in AI research, and more generally, in statistics, philosophy of science, and epistemology. Even getting something that mostly worked, most of the time, would be huge. Peter Van Beek has been doing some work on this recently, which could be a good starting place if you want to read more.

+",16909,,16909,,6/11/2019 20:44,6/11/2019 20:44,,,,0,,,,CC BY-SA 4.0 +11441,2,,11333,3/25/2019 13:18,,1,,"

It sounds like you are describing a synthesis of two competing ways to solve the MDP problem.

+ +

In reinforcement learning, we solve the MDP problem by having the agent move around its environment, observe rewards and transitions in response to the actions it takes, and build a model of the relationship between actions and rewards that allows it to maximize rewards.

+ +

An older approach is to give the agent facts about the world that can be encoded as logical rules. The agent then uses unification to reason about the consequences of actions within that framework of rules. The agent then takes actions that maximize the rewards it can expect, given the rules and the information at hand. A problem with this approach was that it did not work well in problem domains with probabilistic rules (i.e. X usually happens when action Y is taken).

+ +

A hybrid approach, somewhere between these two, is the use of Value Iteration or Policy Iteration methods. These are so-called ""model-based"" reinforcement learning algorithms (although, I would tend to say that makes them something other than true reinforcement learning...). Like the older logic-based approaches, they start by writing down a series of rules that fully describe how things happen in the world, and then derive the logical consequences of those rules to compute the best actions the agent could take. Like reinforcement learning, however, they are able to account for probabilistic rules and probabilistic rewards. If you had an exact description of a game of chance, you could write it down as an MDP, and then solve it exactly using these techniques.
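
+ +

As a concrete illustration of value iteration (a toy sketch; the transition matrix P and rewards R below are made-up numbers, not tied to any particular problem):

+ +

import numpy as np

# P[a][s][s'] = probability of moving from s to s' under action a; R[s][a] = reward
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],   # transitions under action 0
    [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],   # transitions under action 1
])
R = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):
    # Bellman optimality backup: V(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print(V)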

+ +

Importantly, value & policy iteration methods are not feasible if the state and action spaces are very large (as they often are), and are really not feasible if the MDP is not known exactly (i.e. if you don't know all the rules of the game in advance). That's where reinforcement learning shines.

+",16909,,,,,3/25/2019 13:18,,,,0,,,,CC BY-SA 4.0 +11442,1,11448,,3/25/2019 13:48,,2,922,"

From what I know, AI/ML uses a large amount of data to train an algorithm to solve problems. But since it’s an algorithm, I was wondering if it's possible to export it. If I trained an AI with R, could I export a mathematical algorithm that could be imported by other users to use in their application, whether it’s written in R or another language?

+ +

So it’s like I’ve discovered a secret message decoding method. I don’t need to share the whole program for others to decode it. I just need to tell them the steps (algorithm) to decode it, and they can implement it in whatever application they want.

+",23017,,2444,,6/15/2020 14:00,6/15/2020 14:00,Can I export a trained machine learning model so that others case use it?,,2,0,,,,CC BY-SA 4.0 +11443,2,,11442,3/25/2019 14:56,,0,,"

Yes, once you've trained a model you'll have the details of that model in your workspace.

+ +

e.g.

+ +
library(e1071)  # naiveBayes() comes from the e1071 package
+B_Naive = naiveBayes(train_set[,-c(1)],train_set[,1]);
+
+ +

Will give you an object B_naive that can be 'exported'. These are the parameters of the model, you'll still need the naïve bayes library (or whichever library).

+",22897,,,,,3/25/2019 14:56,,,,0,,,,CC BY-SA 4.0 +11445,2,,11427,3/25/2019 15:19,,3,,"

There's certainly literature on a related topic: code smell detection.

+ +

A ""code smell"" is a sign that code has a maintenance problem, and hints at the presence of technical debt. It is reasonable to suppose that code with a lot of smells is lower quality. Code smells include things like giant classes, high cyclomatic complexity, and more.

+ +

Fontana et al. have a good 2016 survey comparing different ML methods for detecting code smells. A reverse citation search on that paper uncovers many other papers that seem relevant, including:

+ + + +

This seems like a pretty well-studied area. I wasn't able to find a cross-language model though, but I suspect one may well exist.

+",16909,,,,,3/25/2019 15:19,,,,3,,,,CC BY-SA 4.0 +11447,1,,,3/25/2019 15:49,,1,24,"

I have a dataset of shape (240000, 23). For my task, I have to use an unsupervised learning method and apply it to every single column separately in order to detect anomalies that might exist. I have pre-processed the data and I am visualizing TimeElapsed vs the parameter in Python (the graphs look like the one shown in an earlier post by me here).

+ +

I am wondering if there is a way, after the graphs are plotted, to compare the graphs with each other and then cluster them based on their similarities.

+ +

Example: if I have temporal data for about 200 products, I plot the graphs of all 200 products (as can be seen in the link provided above), and these 200 graphs must be compared with each other and then plotted as 200 different points on a scatter plot (by using some unsupervised learning technique), based on how similar the graphs of the different products are to each other.

+ +

I don't know if the code that I have would be helpful in guiding me, but the code that I have is here:

+ +
import pandas as pd
+import numpy as np
+import matplotlib.pyplot as plt
+np.random.seed(1234)
+
+dataset = pd.read_csv('Temporal_DataTable.CSV', header = 0)
+# keep only the columns that contain at least one non-zero value
+dataset = dataset.loc[:, (dataset != 0).any(axis=0)]
+
+dataset['product_number'] = pd.factorize(dataset.Context)[0]+1
+dataset['index'] = pd.factorize(dataset.Context)[0]+1
+
+cols = list(dataset.columns.values)
+cols.pop(cols.index('StepID'))
+cols.pop(cols.index('Context'))
+cols.pop(cols.index('product_number'))
+cols.pop(cols.index('index'))
+dataset = dataset[['index','product_number','Context','StepID']+cols]
+dataset = dataset.set_index('index')
+
+max_time_ID_without_drop = dataset.groupby(['product_number'])['TimeElapsed', 'StepID'].max()
+avg_time_without_drop = np.average(max_time_ID_without_drop['TimeElapsed'])
+
+dataset_drop = dataset.drop(index = [128, 133, 140, 143, 199])
+
+max_time_ID_with_drop = dataset_drop.groupby(['product_number'])['TimeElapsed', 'StepID'].max()
+avg_time_with_drop = np.average(max_time_ID_with_drop['TimeElapsed'])
+
+dataset = dataset.drop(columns=['TimeStamp'])
+dataset_drop = dataset_drop.drop(columns=['TimeStamp'])
+
+grouped = dataset.groupby('product_number')
+ncols = 4
+nrows = int(np.ceil(grouped.ngroups/40))
+for i in range(10):
+    fig, axes = plt.subplots(figsize=(12,4), nrows = nrows, ncols = ncols)
+    for (key, ax) in zip(grouped.groups.keys(), axes.flatten()):
+        grouped.get_group((20*i)+key).plot(x='TimeElapsed', y=['Flow_Ar-EDGE'], ax=ax, sharex = True, sharey = True)
+        ax.set_title('product_number=%d'%((20*i)+key))
+        ax.get_legend().remove()
+        handles, labels = ax.get_legend_handles_labels()
+        fig.legend(handles, labels, loc='upper center')
+plt.show()
+
+ +

Thanks in advance for the help.

+",21233,,,,,3/25/2019 15:49,Is there a way to compare the similarities among different graphs and then cluster them using Unsupervised learning?,,0,1,,,,CC BY-SA 4.0 +11448,2,,11442,3/25/2019 16:46,,2,,"

If the 'algorithm' you're talking about is a neural network, then you can distribute the learned parameters/weights to anyone who wants to use it. This is how neural nets are normally 'exported': without all of the training data used to create them. Actually, this is done with many kinds of models (parameterized ones).

+ +

In order to 'decode' the model, users would only have to know its structure. In the case of a neural network, they'd need to know the size of each layer, what activation functions were used, etc.
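
+ +

For instance, a minimal PyTorch sketch of this idea (the architecture, file name and sizes below are arbitrary placeholders, not part of the original answer):

+ +

import torch
import torch.nn as nn

# hypothetical architecture that both parties agree on
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    def forward(self, x):
        return self.layers(x)

# trainer side: export only the learned parameters, not the training data
model = SmallNet()
torch.save(model.state_dict(), 'smallnet_weights.pt')

# user side: rebuild the same structure and load the parameters
restored = SmallNet()
restored.load_state_dict(torch.load('smallnet_weights.pt'))
restored.eval()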

+ +

This is not possible with every type of ML model, however. Specifically, non-parametric models and 'lazy' models make use of training data at inference time. They wouldn't be useful without their training data. Classifying an input by finding its k nearest neighbors, for example, would require having the training data.

+",22916,,,,,3/25/2019 16:46,,,,0,,,,CC BY-SA 4.0 +11449,2,,11439,3/25/2019 16:56,,1,,"

The use of tanh is purely because it fits the described problem (especially for values that are min-max normalized). I have worked on a couple of professional RL projects (specifically with actions in continuous space) and I did not use tanh at all. Hope that helped :)

+",23455,,23455,,5/16/2019 3:41,5/16/2019 3:41,,,,4,,,,CC BY-SA 4.0 +11451,2,,11433,3/25/2019 19:47,,2,,"
+

Is the method itself defective or anything wrong with my code?

+
+ +

There does indeed appear to be an issue with the code, the publications are fine (I know most of those authors and would very much trust their writing too :) ).

+ +

The first issue I see, and likely the most important, is that the update() calls of DynamicPBA frequently update the contents of self.q_table to those of self.q_table_, but the contents of self.q_table_ are never updated. So, your q_table is essentially always filled with a bunch of $0$ values that never get a chance to properly learn.

+ +

I did not check the rest of the implementation in detail, but at a glance it all looks fine to me. So I guess that changing the last line of learn() to update an entry in self.q_table_ rather than self.q_table should go a long way towards fixing your issue.
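
+ +

To make that concrete, one way to read that suggestion is the following sketch of learn(), in which only the last line differs from the code in the question:

+ +

def learn(self, s, a, r, s_, a_):
    self.check_state_exist(s_)
    q_predict = self.q_table.loc[s, a]
    q_target = r + self.gamma * self.q_table.loc[s_, a_]
    # write the update into q_table_; update() then copies it into q_table,
    # so q_table keeps the pre-update potentials
    self.q_table_.loc[s, a] = self.q_table_.loc[s, a] + self.lr * (q_target - q_predict)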

+",1641,,,,,3/25/2019 19:47,,,,0,,,,CC BY-SA 4.0 +11453,1,11454,,3/25/2019 22:03,,2,409,"

From this article, I read that ""to accurately classify data with neural networks, wide layers are sometimes necessary.""

+ +

However, I have seen many implementations and discussions on deep-learning, such as this, mention the concept of depth.

+ +

What is the difference in the context of neural networks? How does width vs depth impact a neural network's performance?

+",22424,,,,,3/25/2019 22:11,"What does ""Wide"" vs. ""Deep"" mean in the context of Neural Networks?",,1,0,,,,CC BY-SA 4.0 +11454,2,,11453,3/25/2019 22:11,,6,,"

The width refers to the number of neurons in a layer. The depth refers to the number of layers.

+ +

Have a look at the following question regarding the impact of these hyper-parameters on the performance of the neural network: https://stats.stackexchange.com/q/214360/82135.

+",2444,,,,,3/25/2019 22:11,,,,0,,,,CC BY-SA 4.0 +11455,1,11463,,3/25/2019 22:46,,6,1611,"

As far as I know, Stochastic Gradient Descent is an optimization algorithm which belongs to the category of algorithms where hyper-parameters have to be defined beforehand. They are useful in many cases, but there are some cases where adaptive learning algorithms (like AdaGrad or Adam) might be preferable.

+ +

When are algorithms like Adam and AdaGrad preferred over SGD? What are the cons and pros of adaptive algorithms, like Adam, when we compare them with learning algorithms like SGD?

+",23460,,2444,,3/26/2019 14:15,3/26/2019 14:15,When should we use algorithms like Adam as opposed to SGD?,,1,1,,,,CC BY-SA 4.0 +11456,2,,11421,3/26/2019 7:53,,3,,"

One option not mentioned by malioboro is getting more data. A bigger dataset almost always improves training results. If it's too hard to obtain more labeled data, you can use data augmentation on the existing data - small random transformations while keeping the same label.

+ +

For images, the most common augmentation methods are (applying padding if needed; a basic code sketch is given after the list):

+ +
    +
  • small random zooming in/out
  • +
  • small random shifts of image
  • +
  • adding random noise to image
  • +
  • small random change in brightness, contrast, color balance and similar parameters
  • +
+ +

There are more complex methods, but they depend on the specifics of the dataset and the goal of training.
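
+ +

A basic sketch of the transformations listed above (this assumes torchvision, which the answer does not mention; tune the magnitudes to your own data):

+ +

import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),  # small shifts and zoom
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),          # brightness / contrast / colour
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),                   # small random noise
])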

+ +

Stopping:

+ +
    +
  • you should stop training if you see that the training error is not decreasing - you have reached a (possibly local) minimum.
  • +
  • you should stop if the testing or validation error starts to increase - your model is overfitting (data augmentation can help in that case)
  • +
+",22745,,22745,,3/26/2019 7:59,3/26/2019 7:59,,,,0,,,,CC BY-SA 4.0 +11459,2,,11407,3/26/2019 8:47,,1,,"

The expression $\frac12(x_1w_1 + x_2w_2 - y)^2$ is called the error $E$ (assuming $y$ to be continuous, which is not the case for classifiers). Written out as in physics or maths, it represents a surface in 4D (the surface is continuous, but for visualisation we will treat it as a family of curves).

+ +

Here is a representative plot of what it would have looked like had the error been $\frac12(x_1w_1 - y)^2$, i.e. a 3D surface.

+ +

+ +

This is a scalar quantity which represents the value of error at different places for different values of $w1$ and $w2$. Now gradient of a scalar is defined as $\nabla F$, where $F$ is a scalar, on doing this operation you get a vector, which is perpendicular to the equi-potential or more suitably equi-error surface, i.e. if you trace all the points which give the same error, you will get a curve, and its gradient at any point is the vector perpendicular to the curve at that given point. There are many proofs for this but here is a very simple and nice proof.

+ +

Now lets look at the equation of the constraint $x_1w_1 + x_2w_2 = y$. In case of a 3D error curve, the constraint is giving us a plane which is parallel to the tangential plane of the equi-error surface at a given point. You can look at this method of how to find tangential planes and derive the plane yourself, where $z = Error(E)$ and $w1$ and $y$ are your $x$ and $y$.

+ +

Thus it is quite clear that the gradient will be perpendicular to the constraint, and this is the reason we use gradients because according to mathematics if you move in a direction perpendicular to an equi-potential surface you get the maximum change than any other direction for same $dl$ movement.

+ +

I would highly suggest you check out these videos on gradient from Khan academy. This will hopefully give you a more intuitive understanding of why we do what we do in Neural Networks.

+",,user9947,,,,3/26/2019 8:47,,,,0,,,,CC BY-SA 4.0 +11460,1,,,3/26/2019 9:27,,4,44,"

This question came after I connected 2 pieces of information :

+ + + +

Considering the probabilistic nature of results in quantum computers, what would be the advantages of using Byzantine-resistant neural networks on quantum computers? Has this already been attempted?

+",23466,,23466,,7/17/2019 9:29,7/17/2019 9:29,Can there be applications of byzantine neural networks on quantum computers?,,0,0,,,,CC BY-SA 4.0 +11461,1,,,3/26/2019 9:40,,0,181,"

I need a pathfinding algorithm that considers the history of visited nodes and varies its path depending on some rules (like avoiding already-visited nodes). Are there good approaches serving this purpose?

+ +

To be more specific: let's say I have a graph representing a map. I want to find a route from B to D. Once I am in D, having followed B->C->D, I want to calculate a new path, let's say to A. The shortest path would be D->C->B->A, but I want a path through unvisited nodes, even if it's longer than the shortest possible path. The new path should be the shortest among all paths allowed by the rules. Another example is the game ""snake"": seeing the grid as a graph, I cannot visit already-visited nodes as long as the body of the snake is in that node (or if I visited the node t time steps ago).

+ +

Maybe the problem is too specific and I have to wrap a basic pathfinding algorithm inside a larger algorithm.

+",19413,,19413,,3/26/2019 15:53,3/26/2019 16:25,Are there any pathfinding algorithms that take customized rules into account when determining the shortest path?,,1,2,,,,CC BY-SA 4.0 +11462,1,,,3/26/2019 9:57,,1,252,"

I am trying to apply RL to a case something like this:

+ +
+

This game consists of several rounds. Every round, the players need to generate a maze that consists of rooms. There are around 1000 different available rooms with different properties. At the beginning of a round, each player is given 10 rooms one-by-one (the sequence is the same for each player) from the 1000 that are available; then he/she tries to create a maze from the given rooms (by arranging them). After the maze is done, there is a Game Master who will judge the maze (give a score between 0 and 100). The player never knows how the Game Master judges the maze; it can be judged based on the level of difficulty produced, the order in which the rooms are composed, or something else. The player who gets the best score for the given rooms wins the round.

+ +

In this case, I have around 100,000 ""perfect"" mazes that have been created from different room combinations and got a perfect score. I use these mazes as episodes and try to train an RL agent to find the pattern of how the Game Master judges a maze. For your information, there are rooms that do not exist in the 100,000 perfect mazes, but I hope the RL agent can use their properties to find similar rooms that do exist in the ""perfect"" mazes, and use those as a reference.

+
+ +

This case is different from other RL environments that I've met before: generating an episode is not an easy task because it needs an expert to validate it (the Game Master). So you could say I can only build the RL agent using those 100,000 episodes.

+ +

But, even though it only consists of 100,000 episodes, my case has millions of states, so I plan to use Q-learning with a neural net as the approximator.

+ +

My question is:

+ +
    +
  • In this case, do I still need the experience replay process (I am afraid I don't need it because of the small number of available episodes)?
  • +
  • Has this case ever happened before? What is the best approach to deal with cases where the number of episodes is limited?
  • +
+",16565,,16565,,5/3/2019 3:22,5/3/2019 3:22,Reinforcement Learning with limited number of episodes,,0,11,,,,CC BY-SA 4.0 +11463,2,,11455,3/26/2019 10:01,,6,,"

Empirically, I observed that algorithms like Adam and RMSProp tended to give me a higher final performance (in my case, the accuracy) on the validation dataset than SGD. However, I also observed that Adam and RMSProp are highly sensitive to certain values of the learning rate (and, sometimes, other hyper-parameters like the batch size) and can catastrophically fail to converge if e.g. the learning rate is too high. On the other hand, in general, SGD has not led me to the highest performance, but it did not catastrophically fail (at least, not as much as Adam and RMSProp) in my experiments (even when using quite different hyper-parameters). I noticed that the learning rate (and the batch size) are the hyper-parameters that mainly affect the performance of all these algorithms.

+ +

In my experiments, I use SGD without momentum and I used the (PyTorch) default values of Adam and RMSProp. I only compared SGD with Adam and RMSProp, on the relatively simple task of recognising MNIST digits. You can have a look at this repository https://github.com/nbro/comparative-study-between-optimizers, which contains the code I used to perform these experiments. You also have the instructions there to perform the experiments (if you want).

+",2444,,,,,3/26/2019 10:01,,,,5,,,,CC BY-SA 4.0 +11464,1,,,3/26/2019 10:33,,3,850,"

I was given the following problem to solve.

+ +
+

Given a circular trail divided into $n > 2$ segments labeled $0 \dots n-1$. In the beginning, an agent is at the start of segment number $0$ (the edge between segments $n-1$ and $0$) and the agent's speed ($M$) is $0$ segments per minute.

+ +

At the start of each minute the agent takes one of three actions:

+ +
    +
  • speed up: the agent's speed increases by $1$ segment per minute.
  • +
  • slow down: the agent's speed decreases by $1$ segment per minute.
  • +
  • keep the same speed: stay at the same speed.
  • +
+ +

The action slow down cannot be used if the current speed is $0$ segments per minute.

+ +

The cost of each action is $1$.

+ +

The goal of the agent is to drive around the trail $k$ times ($1 \leq k$) and then park in the starting spot (at speed 0, of course). The agent needs to do that with the minimum number of actions.

+ +

The heuristic given is: if the agent is in segment $z$, then $h = n-z$ if $z \neq 0$, and $h = 0$ if $z = 0$.

+
+ +

I need to find if the given heuristic is admissible (or complete) and consistent.

+ +

I think:

+ +
    +
  • Regarding consistency: a heuristic is consistent if its estimate is always $\leq$ the cost of moving to any given neighbour vertex plus the estimated distance from that neighbour to the goal. So, in the given problem, I think it is consistent: since $n>2$, the heuristic function is well defined, and because the circular trail is divided into $n$ segments with a constant cost of $1$ for each action, the estimated distance from any neighbour vertex to the goal can be seen as the difference in segments remaining until the goal, so the definition of consistency holds.

  • +
  • Regarding admissibility: an admissible heuristic is one whose estimated cost to reach the goal is never more than the lowest possible cost from the current point to the goal. I am not sure if the given heuristic is admissible, because knowing the difference between the trail size ($n$ = number of segments) and the current place does not help much. But it does not seem to create flaws, so it is probably admissible. I am not sure this is a proof.

  • +
+ +

Is my idea correct? How could I write it as a proof?

+",23467,,2444,,11/10/2019 16:53,11/10/2019 16:53,How do I find whether this heuristic is or not admissible and consistent?,,1,0,,,,CC BY-SA 4.0 +11465,1,,,3/26/2019 11:04,,1,50,"

I have a column with links to websites and another column with keywords from those websites. I have to find a map between these two, such that for a new input, which is a website's URL, I can generate the keywords associated with the contents of the website.

+ +

For example, given the URL chocolate.com, the keywords could be milk and dark. We can tell from this example that my keywords are types of chocolates.

+ +

Or, if the URL is career.com, then the keywords could be IT and medicine.

+ +

I have a feeling that some sort of supervised neural networks could be used here. Which approach would be most suited here?

+",23468,,2444,,11/6/2019 23:12,11/6/2019 23:12,How can I generate keywords associated with a website given its URL?,,0,2,,,,CC BY-SA 4.0 +11466,1,,,3/26/2019 13:14,,1,261,"

I was going through this implementation of reinforcement learning, where a model is being trained to manage the number of bikes at a station.

+ +

Here, line 78 represents the loop over all episodes (if I understood correctly). In line 92, the DQN agent is defined, meaning that after each episode the agent will be reset to its default parameters.

+ +

But shouldn't we define the model before the loop starts? Won't all the previous learning be lost if we initialize the class object in each iteration? Am I misinterpreting anything?

+",19244,,,,,3/26/2019 13:14,Do we need to reset the DQN network after every episode?,,0,4,,,,CC BY-SA 4.0 +11467,2,,11461,3/26/2019 16:25,,1,,"

Yes, this is easily solved using the A* algorithm. Once your agent has visited a particular node, increase the cost of that node to infinity and recalculate the path.
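
+ +

As a small sketch of that idea (this is a plain uniform-cost search for brevity; adding a heuristic term to the priority turns it into A*, and the graph below is just made-up example data):

+ +

import heapq

def shortest_path(graph, start, goal, visited, penalty=float('inf')):
    # graph: dict node -> list of (neighbour, cost); nodes in `visited` get their
    # entry cost raised (here to infinity), as suggested above
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step in graph.get(node, []):
            extra = penalty if nxt in visited else 0
            new_cost = cost + step + extra
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

graph = {
    'A': [('B', 1), ('E', 2)],
    'B': [('A', 1), ('C', 1)],
    'C': [('B', 1), ('D', 1)],
    'D': [('C', 1), ('E', 2)],
    'E': [('A', 2), ('D', 2)],
}
# going back D -> A while avoiding the already visited B and C forces the longer detour via E
print(shortest_path(graph, 'D', 'A', visited={'B', 'C'}))   # ['D', 'E', 'A']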

+",12509,,,,,3/26/2019 16:25,,,,0,,,,CC BY-SA 4.0 +11468,2,,11464,3/26/2019 20:36,,2,,"

Welcome to AI.SE @hpr16!

+ +

Your understanding of when a heuristic is admissible is correct, but your heuristic is inadmissible. An admissible heuristic must always underestimate the cost to move from a given state to a goal state.

+ +

Notice that states in the search are not the same as positions on the circle in your problem. A state needs to capture all the information about the current environment the agent is in. In your problem, agents have a speed as well as a position. A state must, therefore, contain both.

+ +

To see why your heuristic is inadmissible, note that the agent can move (n-z) segments in fewer than n-z steps: it can speed up and cover them in, for example, about (n-z)/2 steps by moving at speed 2.

+",16909,,,,,3/26/2019 20:36,,,,0,,,,CC BY-SA 4.0 +11469,1,,,3/26/2019 20:43,,5,115,"

In the documentary about the match, it is said that after losing the 4th game, AlphaGo came back stronger and started to play in a weird (not human-like) way, and it was pretty much impossible to beat. Why and how did that happen?

+",21832,,2444,,12/25/2021 23:55,12/25/2021 23:55,"Why didn't champion of the Go game manage to win the last game against AlphaGo, after winning the 4th one?",,1,1,,,,CC BY-SA 4.0 +11471,1,,,3/27/2019 3:11,,1,50,"

I was researching hierarchical object detection, and ended up reading that YOLOv3 is the state of the art for that kind of task; besides, its inference time makes it one of the best options for running on live video.

+ +

So, what I have in mind is to run a pose estimation technology over the live video (like OpenPose), then focus only on the rectangles near the hands of the estimated pose in order to detect the object.

+ +

The previous approach sounds good, but I feel like I'm not taking advantage of the temporal features of the video. For example, YOLOv3 might not be very sure that someone is holding a cellphone from the rectangle around the hands alone, but if I also take into account the movement of the estimated pose (hand near the head for several frames), I could be more confident that the person has a phone.

+ +

But I cannot find a paper, approach, or anything close to the idea I have in mind, so I was wondering if someone here could give me a clue about which path I should follow.

+ +

Thanks in advance for any help!

+",23491,,,,,3/27/2019 3:11,Live video object detection with pose estimation,,0,0,,,,CC BY-SA 4.0 +11472,2,,11418,3/27/2019 3:58,,1,,"

I emailed the author of the paper and he replied that randomness in the parameter initialization is the only difference between Q1 and Q2. This difference is enough in practice. Moreover, the TD3 method is more concerned with overestimation induced by function approximation error than with stochasticity in the environment.

+",23420,,,,,3/27/2019 3:58,,,,1,,,,CC BY-SA 4.0 +11473,2,,11469,3/27/2019 4:01,,6,,"

The technique used by AlphaGo is ""Monte Carlo Tree Search"", combined with a very well trained neural network. The network's job is to estimate the quality of different board states and moves. This estimation is deterministic. If you show AlphaGo the same board on two different occasions, it thinks it is exactly as good (or bad) on both occasions.

+ +

Monte Carlo Tree Search, however, is a randomized algorithm. So, as a simplified explanation, the way AlphaGo decides which moves to make is (a toy code sketch is given further below):

+ +
    +
  1. Look at the current board.
  +
  2. Pick a random move, and imagine what the board would look like if you made that move.
  +
  3. Pick a random move for your opponent, and imagine what the board would look like if they made that move.
  +
  4. Keep doing steps 2 & 3 for a while, so that we're imagining being at some board many moves in the future along a random line of play.
  +
  5. Ask the neural network how good this board state is.
  +
  6. Repeat steps 1-5 many times. Then, right now, make whichever move led to the best lines of play on average.
  +
+ +

What this means is, AlphaGo won't always play the same way, because it doesn't actually consider every move explicitly. It just thinks about enough lines of play to be pretty confident about whether one move is better than another. This is actually not so far removed from how humans play most of the middle of games like these.
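
+ +

Here is a toy sketch of the simplified procedure above; it is NOT AlphaGo's real search (no tree reuse, no policy network), and the little 'game' in it is a made-up stand-in for a real board game.

+ +

import random

def legal_moves(state):
    return [-1, +1]               # stand-in for the legal moves of a real game

def play(state, move):
    return state + move           # stand-in for applying a move to a board

def value_net(state):
    return -abs(state - 10)       # stand-in network: states near 10 are 'good'

def rollout_value(state, depth=8):
    # steps 2-5: follow a random line of play, then ask the value estimate
    for _ in range(depth):
        state = play(state, random.choice(legal_moves(state)))
    return value_net(state)

def choose_move(state, n_rollouts=200):
    # step 6: average many random lines of play per candidate move, pick the best
    scores = {}
    for move in legal_moves(state):
        child = play(state, move)
        scores[move] = sum(rollout_value(child) for _ in range(n_rollouts)) / n_rollouts
    return max(scores, key=scores.get)

print(choose_move(0))   # usually prints 1, the move that tends towards the 'good' region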

+ +

So, in game 4, essentially, Sedol got lucky. The random lines of play that AlphaGo chose to look at did not capture some critical facts about one or more board states. This led it to make a mistake. If you asked it to play the same game through again, it might not make the same mistakes (it might think about different random lines of play, that do capture the critical facts it missed in the first game). Further, it might choose to play slightly differently on other moves, which could have a big impact on the rest of the game. These two factors prevented Sedol from simply playing game 4 over again.

+",16909,,,,,3/27/2019 4:01,,,,0,,,,CC BY-SA 4.0 +11474,1,,,3/27/2019 7:34,,1,128,"

I am trying to create a chatbot whose dialogue policy model will be trained through reinforcement learning. Dialogue Policy is responsible for selecting the action to take based on the given state of the conversation.

+ +

All implementations I see for RL are trained from an environment taken from Gym or created manually. These environments provide the next state, rewards etc to the model based on which it is trained.

+ +

Since I am creating a dialogue policy model which will be trained through real user conversations, I cannot provide a ""pre-defined"" environment which can provide the states and rewards. I am planning to train it myself by talking to it and providing rewards and next state (which I think is called interactive learning).

+ +

But I was not able to find any implementations, tutorials, or articles on interactive learning. I am not able to figure out how to create such a model, or how to take care of the episodes, sessions, etc. This will be continuous learning that may go on for months. I have to save the model each day and continue training the next day by loading the model from that same state.

+ +

Can anyone guide me in the right direction on how to approach this? Any GitHub links, articles, or tutorials of such implementations would be highly appreciated. I am aware this question seems too broad, but some hints will be very helpful for a newbie like me.

+",19244,,,,,3/27/2019 18:14,How to build a DQN agent which can be trained through interactive learning?,,1,0,0,,,CC BY-SA 4.0 +11478,1,11494,,3/27/2019 12:13,,4,368,"

Can neural networks change or evolve other neural networks? Also, could evolutionary algorithms be applied to evolve neural networks?

+ +

For example, suppose that we have neural networks A and B. The neural network B changes the neural network A. If B ""successfully"" changed it, NN A will survive.

+",23500,,2444,,7/28/2019 19:57,7/29/2019 14:26,Can neural networks evolve other neural networks?,,3,0,,,,CC BY-SA 4.0 +11479,1,,,3/27/2019 12:59,,2,54,"

SVMs are designed for two-class classification problems. If the data is not linearly separable, a kernel function is used. I want to know if there exists any method that will indicate whether the data is linearly separable or not.

+",23501,,,,,3/27/2019 12:59,Is there any formal test for linear separability of 2-class data?,,0,1,,,,CC BY-SA 4.0 +11480,1,,,3/27/2019 13:01,,9,3425,"

I don't know what people mean by 'vanilla policy gradient', but what comes to mind is REINFORCE, which is the simplest policy gradient algorithm I can think of. Is this an accurate statement?

+

By REINFORCE I mean this surrogate objective

+

$$ \frac{1}{m} \sum_i \sum_t \log(\pi(a_t \mid s_t)) R_i,$$ where $i$ indexes over the $m$ episodes and $t$ over time steps, and $R_i$ is the total reward of the $i$-th episode. It's also common to replace $R_i$ with something else, like a baselined version $R_i - b$, or to use the future return, potentially also with a baseline, $G_{it} - b$.

+

However, I think even with these modifications to the multiplicative term, people would still call this 'vanilla policy gradient'. Is that correct?

+",17312,,2444,,1/12/2022 21:09,9/9/2022 12:52,Is REINFORCE the same as 'vanilla policy gradient'?,,3,2,,,,CC BY-SA 4.0 +11481,1,,,3/27/2019 13:30,,0,694,"

I'm trying to implement my own DQN. So far I think my code is good, but my Q-values (I'm taking the mean of all the values for every episode) tend to converge to values near zero, but negative. Is this normal? Or is there something wrong in my implementation?

+ +

My exploration vs exploitation (epsilon-greedy) strategy goes from 1.0 to 0.1 in 1 million steps (as DeepMind does), my learning rate is 0.00025 and my gamma is 0.99. I read here that ""The mean Q-values should smoothly converge towards a value proportional to the mean expected reward."" So, is my agent expecting a negative reward? If so, how can I fix it? Here is a graph of the first training session: You can see how the Q-values tend to converge near zero after about 1300 episodes (approximately 1,120,000 steps). Actually, it's showing values like -0.0117, -0.0145, etc. Also, the agent seems very ""static"" after epsilon gets near 0.1, and once it reaches that value the agent doesn't move much. (I'm training with PongDeterministic-v4.)

+",9818,,,,,5/9/2019 14:01,DQN Q-mean values converge negatively,,0,2,,,,CC BY-SA 4.0 +11483,1,,,3/27/2019 13:47,,2,134,"

In order to update the belief state in a POMDP, the following formula is used: +$$b'(s')=\frac{O(a, s', z) \sum_{s\in S} b(s)T(s, a, s')}{\mathbb{P}(z \mid b, a)}$$ +where

+ +
    +
  • $s$ is a specific state in the set of states $S$
  • +
  • $b'(s')$ is the updated belief state of being in the next state $s'$
  • +
  • $T(s, a, s') = \mathbb{P}(s' \mid s, a)$ is the probability (function) of having been in $s$ and ending up in $s'$ by taking action $a$;
  • +
  • $O(a, s', z) = \mathbb{P}(z \mid s', a)$ the probability (function) of observing $z$, performing action $a$ and ending up in $s'$

  • +
  • $\mathbb{P}(z \mid b, a)$ is defined as follows $\sum_{s \in S}b(s)\sum_{s' \in S} T(s, a, s')O(a, s', z)$

  • +
+ +

Looking at $\mathbb{P}(z \mid b, a)$ it is possible that the result is $0$. This would be the case if the agent is in a state where no further actions are possible. But, in that case, there is a problem with updating $b'(s')$, since this causes a zero division. Is this a common problem and is the only possibility to avoid that a programming solution like an if-statement? Or is $\mathbb{P}(z \mid b, a)$ always non-zero?

+",19413,,2444,,3/28/2019 14:52,3/28/2019 21:11,Can the normalization factor for the belief state update be zero?,,1,0,,,,CC BY-SA 4.0 +11485,1,11486,,3/27/2019 15:06,,1,413,"

As my first AI model, I have decided to make a model that predicts the multiplication of two numbers, e.g. [2,4] = [8]. I wrote the following code, but the loss is very high (around thousands) and the model is very inaccurate. How do I make it more accurate?

+ +
import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+data = torch.tensor([[2,4],[3,6],[3,3],[4,4],[100,5]],dtype=torch.float)
+values = torch.tensor([[8],[18],[9],[16],[500]],dtype=torch.float)
+lossfun = torch.nn.MSELoss()
+model=Net()
+optim = torch.optim.Adam(model.parameters(),lr=0.5)
+
+class Net(nn.Module):
+    def __init__(self):
+
+        super(Net,self).__init__();
+
+        self.fc1 = nn.Linear(in_features=2,out_features=3)
+        self.fc2 = nn.Linear(in_features=3,out_features=6)
+        self.out = nn.Linear(in_features=6,out_features=1)
+
+    def forward(self,x):
+        x = self.fc1(x)
+        x = F.relu(x)
+        x = self.fc2(x)
+        x = F.relu(x)
+        x = self.out(x)
+        return x
+for epoch in range(1000):
+
+    y_pred=model.forward(data)
+
+    loss = lossfun(y_pred,values)
+
+    print(loss.item())
+
+    loss.backward()
+
+    optim.step()
+
+ +

Note: I am a newbie in AI and ML.

+",23507,,16229,,3/30/2019 15:51,3/30/2019 15:51,Heavy loss and inaccurate answer in pytorch,,1,0,,12/26/2021 14:06,,CC BY-SA 4.0 +11486,2,,11485,3/27/2019 16:46,,2,,"

There are a few things you could do to improve this NN, but they are probably worth covering in different questions.

+ +

Your main problem though is that you forgot to reset the gradient after each training batch. You need to call optim.zero_grad() in order to do this, at the start of each training loop. Otherwise, using PyTorch, the gradient values keep accumulating inside the model's parameters (sometimes you want this effect if you are adding gradients from multiple sources; that's why PyTorch does not clear them automatically for you).

+ +

In addition, a learning rate of 0.5 is very high for the Adam optimiser - it is very common to leave it at the default value because Adam is an adaptive optimiser that will adjust step sizes depending on gradients seen so far.

+ +

Here is a working version of your code:

+ +
import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class Net(nn.Module):
+    def __init__(self):
+
+        super(Net,self).__init__();
+
+        self.fc1 = nn.Linear(in_features=2,out_features=3)
+        self.fc2 = nn.Linear(in_features=3,out_features=6)
+        self.out = nn.Linear(in_features=6,out_features=1)
+
+    def forward(self,x):
+        x = self.fc1(x)
+        x = torch.relu(x)
+        x = self.fc2(x)
+        x = torch.relu(x)
+        x = self.out(x)
+        return x
+
+data = torch.tensor([[2,4],[3,6],[3,3],[4,4],[100,5]],dtype=torch.float)
+values = torch.tensor([[8],[18],[9],[16],[500]],dtype=torch.float)
+
+lossfun = torch.nn.MSELoss()
+model=Net()
+optim = torch.optim.Adam(model.parameters(),lr=0.001)
+
+for epoch in range(20000):
+    optim.zero_grad()
+    y_pred=model.forward(data)
+    loss = lossfun(y_pred,values)
+    if (epoch % 1000 == 0):
+        print(loss.item())
+    loss.backward()
+    optim.step()
+
+ +

This version can be tweaked to quite easily reach 0 loss for your data set.

+ +

This has not really learned how to multiply two values. The approximation to multiplication will be very weak, as there is very little data. However, playing with some very basic data and a simple NN is a first step towards understanding details like this . . .

+",1847,,1847,,3/27/2019 16:53,3/27/2019 16:53,,,,0,,,,CC BY-SA 4.0 +11487,1,13040,,3/27/2019 17:29,,4,594,"

Suppose I have a deep feed-forward neural network with sigmoid activation $\sigma$ already trained on a dataset $S$. Let's consider a training point $x_i \in S$. I want to analyze the entries of a hidden layer $h_{i,l}$, where

+ +

$$h_{i,l} = \sigma(W_l \, \sigma(W_{l-1} \, \sigma( \dots \sigma(W_1 x_i) \dots ))). $$

+ +

My intuition would be that, since gradient descent has passed many times over the point $x_i$, updating the weights at every iteration, the entries of every hidden layer computed on $x_i$ would be either very close to zero or very close to one (thanks to the effect of the sigmoid activation).

+ +

Is this true? Is there a theoretical result in the literature which shows anything similar to this? Is there an empirical result which shows that?

+",21338,,2444,,3/27/2019 21:19,6/24/2019 23:32,How do intermediate layers of a trained neural network look like?,,1,2,,,,CC BY-SA 4.0 +11489,2,,11474,3/27/2019 18:14,,1,,"

I suggest you start reading this one on BERT and this one on GPT-2.

+ +
+

I am aware this question seems too broad, but some hints will be very helpful for a newbie like me.

+
+ +

I'm not sure you want to create your chatbot using an RL architecture at all. But if you want to implement such an idea, the right way to go about it is to use an iterative approach and start from a really simple base:

+ +
    +
  1. Define a list of actions for your agent. It might be a 10-word vocabulary with simple one-word answers.
  +
  2. Define your simple environment. It might have 20 different states (e.g. greetings/conversation starters such as hey, hello).
  +
  3. Map good responses (hello -> hi) to positive rewards and -1 for the rest of them.
  +
  4. If you want to move forward with DQN, first try a simple Q-learning algorithm on your simple environment and train the agent to respond properly to those states.
  +
  5. Once you have done the above, you can make your vocabulary bigger.
  +
  6. Now replace your table of input words, which gives you a state for your network, with an NN/word2vec/any other NLP approach which converts your input sentence to a state.
  +
  7. Manually provide a reward as a response to what your agent did according to the learned policy.
  +
  8. Do that for a practically infinite amount of time and your agent will probably figure out how to respond properly.
  +
+ +

NOTE: DQN uses a discrete action space. So, in such a case, you will have either a very limited or a REALLY HUGE action space: all possible combinations of all possible words in your vocabulary (assuming an action maps to some sentence).

+",23434,,,,,3/27/2019 18:14,,,,2,,,,CC BY-SA 4.0 +11490,1,,,3/27/2019 19:42,,1,68,"

In the paper ""Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"", https://arxiv.org/abs/1312.6034, at part 3, there is a first-order Taylor expansion(formula number 3 & 4) that I can't understand the logic behind it and how they are obtained.

+ +

Those formulas are about computing saliency maps in convolutional neural networks.

+",10051,,,,,3/27/2019 19:42,Image-Specific Class Saliency Visualisation,,0,0,,,,CC BY-SA 4.0 +11494,2,,11478,3/28/2019 9:16,,5,,"

Yes, this is an active area of research as we speak. Both using classic algorithms (decision trees, random forests, Bayesian ensembles) as well as neural networks. This can also be done via evolutionary algorithms. I have personally used them for hyperparameter tuning in a few cases where squeezing out a couple of extra points of accuracy was key.

+ +

This is in fact what Google is doing with their AutoML system. They are using neural networks for architecture search.

+ +

Here is a Github repo with some interesting papers and links on the topic you are describing: https://github.com/hibayesian/awesome-automl-papers.

+",9608,,2444,,7/7/2019 22:32,7/7/2019 22:32,,,,0,,,,CC BY-SA 4.0 +11496,2,,11480,3/28/2019 9:48,,1,,"

By vanilla policy gradient, I think what is meant is normally any arbitrary policy gradient for the purposes of formalization (or whatever is trying to be communicated).

+

For example, let $J(\theta)$ be any policy objective function. Then our policy gradient would be $\nabla_\theta J(\theta)$, where the change in our policy parameters is $\Delta \theta = \alpha \nabla_\theta J(\theta)$, with $\alpha$ being our step size.

+
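For what it's worth, here is a tiny sketch of what that update looks like in code; the gradient estimator is a placeholder, and in practice it would come from something like REINFORCE, i.e. $\sum_t G_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)$:

import numpy as np

theta = np.zeros(4)   # toy policy parameters
alpha = 0.01          # step size

def grad_J(theta):
    # placeholder for an estimate of the policy gradient
    return np.ones_like(theta)

theta = theta + alpha * grad_J(theta)   # gradient ascent on the policy objective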

This will, of course, vary as notational convention and term definition can change between practitioners.

+",9608,,2444,,1/12/2022 21:09,1/12/2022 21:09,,,,0,,,,CC BY-SA 4.0 +11500,1,,,3/28/2019 12:56,,3,147,"

Can AI be used as a tool to investigate our minds?

+ +

To be more precise, what I am specifically asking for here are examples of discoveries in artificial intelligence (so algorithms, programs and computers that try to implement intelligent systems) that brought to light facts about intelligence and cognition in general. Has this ever happened? Is it frequent? How influential and important were these discoveries, if any?

+ +

A possible example of what I mean could be the PSSH, which states that a formal system is sufficient to simulate general intelligent behaviour. I believe that this is relevant to Cognitive Science in general because it affects our understanding of this phenomenon. (Of course, this is just a hypothesis, but I believe that its importance in the AI debate makes it a really compelling result.)

+",23527,,23527,,3/28/2019 14:53,1/18/2021 4:05,Can AI research lead to new findings in general cognitive science?,,1,0,,,,CC BY-SA 4.0 +11504,1,24688,,3/28/2019 16:08,,5,227,"

I am a complete beginner in this field. I am still learning the basics of machine learning and AI, but I have a problem at hand and I am not sure which technique or algorithm can be applied to it.

+ +

I am working on Click-Fraud detection in advertising. I need to predict fraud and learn new frauds with ML.

+ +

The dataset I have consists of the view and click logs from the ad server (service provider). This data has a number of fields, a few of which are listed below:

+ +
""auction_log_bid_id"": null, 
+""banner"": 9407521, 
+""browser"": 0, 
+""campaign"": 2981976, 
+""city"": 94965, 
+""clickword"": null, 
+""content_unit"": 4335438, 
+""country"": 1, 
+""external_profiledata"": {}, 
+""external_user_id"": null, 
+""flash_version"": null, 
+""id"": 6665230893362053181, 
+""ip_address"": ""80.187.103.98"", 
+""is_ssl"": true, 
+""keyword"": ""string""
+""mobile_device"": -1, 
+""mobile_device_class"": -1, 
+""network"": 268, 
+""new_user_id"": 6665230893362118717, 
+""operating_system"": 14, 
+""profile_data"": {}, 
+""referrer"": null, 
+""screen_resolution"": null, 
+""server_id"": 61, 
+""state"": 7, 
+""target_url"": ""string""
+""timestamp"": 1551870000, 
+""type"": ""CLICK_COMMAND"", 
+""user_agent"": ""Mozilla/5.0 (iPhone; CPU iPhone OS 12_1_4 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/16D57"", 
+""user_id"": null, 
+""view_log_id"": null
+
+ +

There are other fields.

+ +

I need to analyse these logs to find patterns for possible frauds, but I am not sure where to start and which technique to use, e.g. supervised, unsupervised, semi-supervised or reinforcement learning.

+",23531,,23531,,3/29/2019 11:12,8/15/2021 11:01,How to detect frauds in advertising business using machine learning?,,2,5,,,,CC BY-SA 4.0 +11505,2,,11483,3/28/2019 18:44,,3,,"

I think that the normalisation factor is assumed to be non-zero. So, in practice, I guess, you must eventually check that $P(z \mid b, a)$ is non-zero (even though, I guess, it will likely never be zero because of round-off errors in computers).

+ +

The formula to calculate $b'(s')$ comes from its definition, which is based on Bayes' theorem, where the denominator is assumed to be non-zero (in general).

+ +

The definition of $b'(s')$ is $P(s' \mid z, a, b)$, that is, the new belief $b'$ of being in state $s'$ is defined as the probability of landing in the next state $s'$, given that we have observed $z$, have taken action $a$ from the previous state $s$ and we had the previous belief $b$. We will expand this definition, but first let us recall a few probability definitions.

+ +

Recall that $P(A, B) = P(A \mid B) P(B) = P(B \mid A) P(A)$, where $A$ and $B$ can actually be multiple events (that is, $A$ could actually be the intersection of multiple events). In other words, suppose we want to calculate $P(A, B, C)$; we can actually consider e.g. $B$ and $C$ as one event. Let $(B \cap C) = (B, C) = D$ (note that the notation $(B, C)$ means the ""intersection"" of events $B$ and $C$, in the case $B$ and $C$ are events). Then

+ +

\begin{align} P(A, B, C) &= P(A, (B, C)) \\ &= P(A, D) \\ &= P(A \mid D)P(D) \\ &= P(A \mid B, C)P(B, C) \\ &= P(A \mid B, C)P(B \mid C)P(C) \end{align}

+ +

In general, this idea generalises to more variables/events.

+ +

Note also that $\frac{P(A, B)}{P(B)} = P(A \mid B)$.

+ +

At this point, we are prepared to expand $P(s' \mid z, a, b)$ and understand its expansion.

+ +

We can expand $P(s' \mid z, a, b)$ as follows

+ +

\begin{align} P(s' \mid z, a, b) &= \frac{P(s', z, a, b)}{P(z, a, b)}\\[0.7em] &= \frac{P(s', z, a, b)}{P(z \mid b, a)P(a \mid b)P(b)}\\[0.7em] &= \frac{P(z \mid s', a, b) P(s' \mid a, b) P(a \mid b) P(b)}{P(z \mid b, a)P(a \mid b)P(b)} \\[0.7em] &= \frac{P(z \mid s', a, b) P(s' \mid a, b)}{P(z \mid b, a)} \end{align}

+ +

It then turns out that (I will maybe explain this more in detail later)

+ +

\begin{align} P(s' \mid z, a, b) &= \frac{P(z \mid s', a, b) P(s' \mid a, b)}{P(z \mid b, a)} \\[0.7em] &= \frac{O(a, s', z) \sum_{s\in S} b(s)T(s, a, s')}{\sum_{s \in S}b(s)\sum_{s' \in S} T(s, a, s')O(a, s', z)} \\[0.7em] &= b'(s') \end{align}

+ +

So, by assumption, $P(z \mid b, a) = \sum_{s \in S}b(s)\sum_{s' \in S} T(s, a, s')O(a, s', z)$ must be different from zero for the equality above to hold.

+ +

You can see this from a very simply example of the Bayes' theorem. Let $P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}$. Now, intuitively, $P(A \mid B)$ (which is what we want to calculate using Bayes' theorem) means the probability of $A$ occurring given that $B$ has occurred, which means that $B$ couldn't have had a probability of $0$ of happening if we wanted to calculate $P(A \mid B)$, so $P(B)$ couldn't have been zero if we wanted to calculate $P(A \mid B)$ using Bayes' theorem. We can also apply this reasoning to the definition of $b'(s')$ above.

+ +

For completeness, note also that, in the normalisation factor $$\sum_{s \in S}b(s)\sum_{s' \in S} T(s, a, s')O(a, s', z),$$ $b(s)$, $T(s, a, s')$ and $O(a, s', z)$ are probability distributions, which means that not all terms of $b(s)$, $T(s, a, s')$ and $O(a, s', z)$ can be zero, for all $s$, $s'$ and $a$ (given they must sum up to $1$).

+ +

Note also that $\sum_{s' \in S} T(s, a, s')O(a, s', z)$ is a convex combination of all $O(a, s', z)$ (for all $s'$), where the coefficients are $T(s, a, s')$. The normalisation factor is also a convex combination where the coefficients are $b(s)$ (for all $s$).
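
To make the update concrete, here is a minimal NumPy sketch of the belief update for a toy POMDP; the transition table, the observation table and the belief below are made up purely for illustration:

import numpy as np

n_states = 3
b = np.array([0.5, 0.3, 0.2])                      # current belief b(s)
T = np.full((n_states, n_states), 1.0 / n_states)  # T[s, s'] = P(s' | s, a) for the chosen action a
O = np.array([0.7, 0.2, 0.1])                      # O[s'] = P(z | s', a) for the observed z

numerator = O * (b @ T)        # O(a, s', z) * sum_s b(s) T(s, a, s'), for every s'
normaliser = numerator.sum()   # P(z | b, a)
assert normaliser > 0, 'the observation z has zero probability under belief b'
b_next = numerator / normaliser
print(b_next, b_next.sum())    # the new belief, which sums to 1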

+",2444,,2444,,3/28/2019 21:11,3/28/2019 21:11,,,,0,,,,CC BY-SA 4.0 +11506,2,,3389,3/28/2019 21:23,,4,,"

I'm an undergraduate researcher at Prairie View A&M University. I just spent a few weeks tweaking an MLPRegressor model to predict the $n$th prime number. It recently stumbled into a very low minimum, where the first $1000$ extrapolations outside of the training data produced an error of less than $0.02$ percent. Even at $300000$ primes out, it was only about $0.5$ percent off. My model was simple: $10$ hidden layers, trained on a single processor for less than 2 hours.

+
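For anyone curious, a rough sketch of that kind of experiment could look like the following; the layer sizes, the scaling and the number of primes are guesses for illustration, not the exact setup described above:

import numpy as np
from sympy import prime
from sklearn.neural_network import MLPRegressor

n = np.arange(1, 3001)
p = np.array([prime(int(i)) for i in n], dtype=float)   # the first 3000 primes

# crude rescaling of inputs and targets before fitting
model = MLPRegressor(hidden_layer_sizes=(64,) * 10, max_iter=5000)
model.fit((n / n.max()).reshape(-1, 1), p / p.max())

predicted = model.predict(np.array([[3001 / n.max()]])) * p.max()
print(predicted, prime(3001))   # compare the extrapolation with the true 3001st prime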

To me, it raises the question, "Is there a reasonable function that produces the nth prime number?" Right now, the algorithms become computationally very taxing for extreme $n$. Check out the time gaps between the discoveries of the most recent largest known primes. Some of them are years apart. I know it's been proven that if such a function exists, it will not be polynomial.

+",23542,,2444,,11/14/2020 13:53,11/14/2020 13:53,,,,3,,,,CC BY-SA 4.0 +11507,1,,,3/28/2019 21:45,,0,54,"

I am working on a supervised machine learning problem where I have more than 10, probably 50 or 100, label categories to predict. Which type of model can be used for this type of problem in Anaconda Python?

+",23544,,23544,,3/29/2019 13:09,3/29/2019 20:36,What types of machine learning model would fit?,,1,2,,5/10/2022 4:24,,CC BY-SA 4.0 +11508,1,,,3/28/2019 21:51,,0,51,"

I have an RC car with a camera, and I have implemented lane detection on my track (think of a NASCAR-style track). I want to get this car to go around the track autonomously, but I am quite unsure what my next step should be.

+ +

Either I can write an algorithm that keeps the car in the middle of the lane (if I get too close to either of the lines, I steer towards the center).

+ +

Or perhaps go around the track manually and save the coordinates of the detected lines as well as the actions I take (steering) and try a DQN approach.

+ +

I'm trying to minimize my 'trial and error' time a little here. Perhaps there are some important steps in between here, or a solution that I have not thought of (i.e. am I missing something)?

+ +

I'm doing this as a proof of concept therefore I can only spend maximum of a month on this, so what would you do here?

+",22349,,,,,3/28/2019 21:51,Next step after lane detection in vehicle automation,,0,2,,,,CC BY-SA 4.0 +11509,2,,11478,3/28/2019 23:51,,3,,"

This answer points at some of the more modern approaches. This has been around for a long time in the form of NeAT: Neuroevolution of Augmenting Topologies, originally described in Kenneth Stanley's 2002 paper.

+ +

NEAT is available as a package for many languages, including Python, Java, and C++. The algorithm works as a form of genetic programming. A population of networks with simple, random topologies is generated. Then they are evaluated according to a loss function for a specific task. The poorly performing networks are discarded, and the better performing ones are intermixed to generate new variations. This process is iterated until the user wishes it to stop, and it typically results in a gradual improvement of the average population performance against the loss function.
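
As a very stripped-down illustration of the generate, evaluate, select and recombine loop described above (this is not NEAT itself, since real NEAT also mutates topologies and uses speciation; the genome encoding and the fitness function below are placeholders):

import random

def random_genome():
    # stand-in for a small random network (here just a fixed-length weight vector)
    return [random.uniform(-1, 1) for _ in range(8)]

def fitness(genome):
    # placeholder task: prefer genomes whose weights sum to about 1
    return -abs(sum(genome) - 1.0)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]    # discard the poorly performing networks
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(40)]
    population = survivors + children

print(max(fitness(g) for g in population))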

+",16909,,2444,,7/28/2019 19:57,7/28/2019 19:57,,,,6,,,,CC BY-SA 4.0 +11510,1,11531,,3/28/2019 23:55,,1,69,"

This is my first post so please forgive me for any mistakes.

+ +

I am working on an object detection algorithm that can detect abnormalities in an x-ray. As a prototype, I will be using yolov3 (more about yolo here: 'https://pjreddie.com/darknet/yolo/'). However, one radiologist mentioned that, in order to produce a good result, you need to take into account the demographics of the patient. In order to do that, my neural network must take into account both text and an image. Some suggestions have been made by other people for this question. For example, someone recommended combining the result of a convolutional neural network with that of a separate text neural network. Here is an image for clarification:

+ +

Image Credits: This image (https://cdn-images-1.medium.com/max/1600/1*oiLg3C3-7Ocklg9_xubRRw.jpeg) from Christopher Bonnett's article (https://blog.insightdatascience.com/classifying-e-commerce-products-based-on-images-and-text-14b3f98f899e)

+ +

For more details, please refer to above-mentioned article. It has explained how e-commerce products can be classified into various category hierarchies using both image and text data.

+ +

However, when a convolutional neural network is mentioned, it usually means it is used for classification instead of detection: https://www.quora.com/What-is-the-difference-between-detection-and-classification-in-computer-vision (link for a comparison between detection and classification)

+ +

In my case, when I am using yolov3, how would it work? Would I be using the yolov3 output vector, which has the format: class, center_x, center_y, width and height?

+ +

My main question is: what would the overall structure of my neural network look like if I have both image and text as input while using yolov3? Thank you for taking the time to read this.

+",23546,,,,,3/31/2019 17:05,Detecting abnormalities in x-rays while taking into account demographics of a patient -automated,,2,0,,,,CC BY-SA 4.0 +11511,1,,,3/29/2019 0:44,,4,33,"

A sampled softmax function is like a regular softmax but randomly selects a given number of 'negative' samples.

+ +

This is different from NCE loss, which doesn't use a softmax at all; it uses a logistic binary classifier for the context/labels. In NLP, 'negative sampling' basically refers to the NCE-based approach.

+ +

More details here: https://www.tensorflow.org/extras/candidate_sampling.pdf.

+ +

I have tested both and they both give pretty much the same results. But in word embedding literature, they always use NCE loss, and never sampled softmax.

+ +

Is there any reason why this is? The sampled softmax seems like the more obvious solution to prevent applying a softmax to all the classes, so I imagine there must be some good reason for the NCE loss.

+",18358,,2444,,4/16/2019 22:24,4/16/2019 22:24,Why does all of NLP literature use noise contrastive estimation loss for negative sampling instead of sampled softmax loss?,,0,0,,,,CC BY-SA 4.0 +11517,1,11529,,3/29/2019 7:12,,4,302,"

According to the Wikipedia page of the physical symbol system hypothesis (PSSH), this hypothesis seems to be a vividly debated topic in philosophy of AI. But, since it's about formal systems, shouldn't it be already disproven by Gödel's theorem?

+ +

My question arises specifically because the PSSH was elaborated in the 1950s, while Gödel came much earlier, so at the time the Incompleteness theorems were already known; in which way does the PSSH deal with this fact? How does it ""escape"" the theorem? Or, in other words, how can it try to explain intelligence given the deep limitations of such formal systems?

+",23527,,2444,,5/13/2020 10:39,12/11/2020 11:30,Shouldn't Gödel's incompleteness theorems disprove the physical symbol system hypothesis?,,3,2,,,,CC BY-SA 4.0 +11518,2,,10090,3/29/2019 7:32,,3,,"

This is not an answer. I couldn't comment, so here are some remarks about your question: This is a very broad question, and considered The Holy Grail for building artificially intelligent systems - meaning that some scientists have been dreaming about this since time immemorial.

+ +

Some homework is warranted from your side; you could have offered some of your solutions or maybe identified the multiple layers to your query, as they invoke concepts from many fields within the study of AI (or AGI to be general).

+ +

For example, the following layers are taken up around the big-words in the question, though rhetorically -

+ +
    +
  1. On evolution - what registers as evolution (is only genetic mutation evolution, or should it involve some form of natural selection, or seeking a niche to increase the chances of survival, etc.)? What would the evolution of software look like? Should it be able to modify its own code in the process of evolution?

  2. +
  3. On concept - what constitutes a concept, identifying from the environment a concept by calculating its relevance (w.r.t other concepts), coming with a process of selecting the concept for use in a given environment (natural or artificial).Referring to the example in the question, is the road a relevant concept or the trees and the sky and the bees pollinating some flowers along the roadside? What is a more fundamental concept - the trees or the bees, and how does one measure that?

  4. +
  5. On rewards - for us humans, rewards are the maximization of the survival of the genetic machinery we carry (translating to reproductive success). What reward system should we come up with that machines could use to increase their chances of survival in the physical world? What value should be put on a reward for moving the car in a straight line, or moving from darkness towards light? Shouldn't moving from darkness to light be a more fundamental action (concept) and therefore be rewarded more highly than moving along the straight line? However, given that the car has learned to move from darkness to light, shouldn't the value of the reward be lowered so that it can learn other actions/concepts?

  6. +
+ +

As can be seen from this brief detailing, there are layers within layers. Therefore, it is proper to establish now that there is no simple answer to this essay-requiring, soul-searching, time-eating, heavy-research-wanting question. However, appropriate directions can be given about work that is being done by some of the most prominent scientists of our times. The things that one is looking for are known as Universal Problem Solvers, such as the Gödel Machine (by Jürgen Schmidhuber) and AIXI - Artificial Intelligence (AI) based on Solomonoff's distribution ξ (by Marcus Hutter).

+ +

Here is a quote lifted from Wikipedia page on AIXI that is pretty self-explanatory on how it maximizes the rewards over time.

+ +
+

AIXI is a reinforcement learning agent. It maximizes the expected total rewards received from the environment. Intuitively, it simultaneously considers every computable hypothesis (or environment). In each time step, it looks at every possible program and evaluates how many rewards that program generates depending on the next action taken. The promised rewards are then weighted by the subjective belief that this program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line with Occam's razor. AIXI then selects the action that has the highest expected total reward in the weighted sum of all these programs.

+
+ +

The Gödel Machine goes further - it allows the agent to modify its own code in a way that allows it to maximise the rewards of its actions, including the code that performs this modification, and so on. This is a kind of recursive definition that simulates evolution (a rapid one) by choosing to evolve the code/state of the agent towards a code that is superior to the code that the agent is currently running.

+ +

Here is a quote lifted from a page summarising the Gödel machine. Note that Hutter is also referenced (the discoverer of AIXI mentioned above).

+ +
+

Our Gödel machine will never get worse than its initial problem solving strategy, and has a chance of getting much better, provided the nature of the given problem allows for a provably useful rewrite of the initial strategy, or of the proof searcher. The Gödel machine may be viewed as a self-referential universal problem solver that can formally talk about itself, in particular about its performance. It may ""step outside of itself"" (Hofstadter, 1979) by rewriting its axioms and utility function or augmenting its hardware, provided this is provably useful. Its conceptual simplicity notwithstanding, the Gödel machine explicitly addresses the 'Grand Problem of Artificial Intelligence' by optimally dealing with limited resources in general environments, and with the possibly huge (but constant) slowdowns buried by previous approaches (Hutter, 2001, 2002) in the widely used but sometimes misleading O()-notation of theoretical computer science.

+ +

The main limitation of the Gödel machine is that it cannot profit from self-improvements whose usefulness it cannot prove in time.

+
+ +

Reading the papers of the stated research would certainly be useful in finding answers to some of the layers in the question. There is also good work going on in the AGI community that somewhat aligns with the direction. Hope this helps.

+",5750,,,,,3/29/2019 7:32,,,,1,,,,CC BY-SA 4.0 +11519,1,11530,,3/29/2019 8:30,,0,1044,"

How could we solve the TSP using a hill-climbing approach?

+",19448,,2444,,11/23/2020 21:32,11/23/2020 21:32,How could we solve the TSP using a hill-climbing approach?,,1,0,0,,,CC BY-SA 4.0 +11520,1,,,3/29/2019 8:44,,1,20,"

I’ve created a variational autoencoder to encode 1-dimensional arrays. The encoding is done through 3 1d-convolutional layers. Then, after the sampling trick, I reconstruct the series using 3 fully connected layers. Here are my questions in case some can shed some light on it:

+ +

I think it would be better if I use 1d-deconvolutional layers instead of fully connected, but I cannot understand precisely why.

+ +
    +
  • It’s because it would bring better results? But then, if the FC layer is complex enough should be able to archive the same results, right?
  • +
  • It’s because would be more efficient? This is, would get the same results as a complex enough FC layer but with less training and parameters?
  • +
  • Or it’s because of other reasons that I’m missing.
  • +
+ +

Thanks.

+",22066,,,,,3/29/2019 8:44,One dimension deconvolutions or fully connected layers?,,0,0,,,,CC BY-SA 4.0 +11522,2,,11500,3/29/2019 9:01,,1,,"

This is about hard AI and soft AI: proponents of hard AI work on systems that simulate the way human cognition works, with the eventual (hypothetical) goal of replicating it. This presupposes that you know how cognition works, and presumably you will learn about it as you attempt to replicate it.

+ +

Soft AI, on the other hand, tries to emulate the outcomes only. For example, Weizenbaum's ELIZA is clearly on this side, as it uses simple pattern matching, and does not 'understand' anything about the conversations it is having.

+ +

Obviously, we don't even know fully what it means to 'understand' something, and building working systems is not really possible with a hard approach. Hence, soft AI is more common, as researchers are usually measured by their outcomes rather than their ideas. As far as I am aware, the hard AI approach has been all but abandoned long ago.

+ +

As current AI seems to be dominated by statistical approaches, I doubt that we can find out many useful things about cognition this way.

+ +

One interesting side-note: it seems to me that the capabilities of modern AI systems have developed away from human capabilities. A three-year-old can do some things that a sophisticated AI system cannot do, but in some areas (chess, translation, ...) the capabilities of AI systems surpass what humans are capable of. Maybe imitation is indeed not the right way to approach AI.

+",2193,,,,,3/29/2019 9:01,,,,0,,,,CC BY-SA 4.0 +11525,2,,3176,3/29/2019 11:57,,1,,"

There are several expressions that are often used as synonyms for artificial intelligence, but, nowadays, the most common ones are likely machine intelligence and computational intelligence.

+ +

However, these expressions are not well defined, so not everyone will agree that they are interchangeable, but we can all agree that these fields (either if we consider them the same or not) are quite related to each other (and they overlap).

+ +

Moreover, these fields also evolve over time and they embrace techniques from other fields, which makes it more difficult to define them. More concretely, initially, AI was mainly based on the manipulation of symbols and logic, but nowadays AI is mainly machine learning, statistics and, in particular, deep learning.

+ +

Furthermore, the expression artificial intelligence was apparently coined after the term cybernetics, which some people might consider the first serious attempt to building intelligent systems.

+",2444,,2444,,6/23/2019 12:35,6/23/2019 12:35,,,,0,,,,CC BY-SA 4.0 +11526,2,,11517,3/29/2019 13:12,,0,,"

I think your conceptualization of this is a bit off. All PSSH states is ""A physical symbol system has the necessary and sufficient means for general intelligent action.""

+ +

Gödel's theorems state 2 basic things:

+ +
    +
  1. Any sufficiently powerful formal system cannot prove its own consistency.

  2. +
  3. There are theorems in any sufficiently powerful formal system that cannot be proved within the system.

  4. +
+ +

PSSH doesn't have too much to do with Gödel.

+",9608,,,,,3/29/2019 13:12,,,,0,,,,CC BY-SA 4.0 +11527,2,,11504,3/29/2019 13:16,,1,,"

There are a couple of different ways you can go about this depending on what kind of data you have.

+ +

If you have labels or can separate the normal data from the fraudulent, you can perform either binary classification, or likely more usefully, anomaly detection.

+ +

In anomaly detection (which is typically now done via an autoencoder), you train your model on the normal data, so it learns a compressed representation of that 'signal'; from there, it will (in theory) be able to detect any sample that does not fit the learned representation.

+ +
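As a rough sketch of what that looks like in Keras (the data arrays X_normal and X_new, the layer sizes and the 99th-percentile threshold are all assumptions made for illustration):

import numpy as np
import tensorflow as tf

# assumption: X_normal is an (n_samples, n_features) array of non-fraudulent log features
n_features = X_normal.shape[1]

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(n_features,)),
    tf.keras.layers.Dense(4, activation='relu'),   # compressed representation
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(n_features),
])
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X_normal, X_normal, epochs=20, batch_size=64)

# flag new samples whose reconstruction error is much larger than on normal traffic
normal_err = np.mean((autoencoder.predict(X_normal) - X_normal) ** 2, axis=1)
new_err = np.mean((autoencoder.predict(X_new) - X_new) ** 2, axis=1)
suspected_fraud = new_err > np.percentile(normal_err, 99)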

Here is a link to a tutorial in keras: link

+",9608,,,,,3/29/2019 13:16,,,,1,,,,CC BY-SA 4.0 +11528,1,,,3/29/2019 13:27,,1,139,"

Where or for what could genetic algorithms (GA) be used in the context of project management (PM)? I thought about task dispatching, but I'm looking for other potential uses of GAs in the context of PM.

+",23557,,2444,,1/6/2021 22:11,5/27/2023 7:03,Where or for what could genetic algorithms be used in the context of project management?,,2,0,,,,CC BY-SA 4.0 +11529,2,,11517,3/29/2019 13:28,,3,,"

The PSSH is often attacked via either Godel's theorems or Turing's incomputability theorem.

+

However, both attacks have an implicit assumption: that to be intelligent is to be able to decide undecidable questions. It's really not clear that this is so.

+

Consider what Godel's theorems say, in essence:

+
    +
  1. "powerful" formal systems cannot prove, using only techniques from within the system, that they are self-consistent.
  2. +
  3. There are statements that are true that cannot be proven within a given "powerful" formal system.
  4. +
+

Suppose that we allow both of those facts. The missing step in the argument is the following statements:

+
    +
  1. You need to be able to prove the consistency of your own reasoning system to be considered intelligent.
  2. +
  3. You need to be able to correct reason out a proof of all true statements to be considered intelligent.
  4. +
+

The main problem is, under this definition, humans are probably not considered intelligent! I certainly have no way to prove that my reasoning is sound and self-consistent. Moreover, it is objectively not so! I frequently believe contradictory things at the same time.

+

I also am not able to reason out proofs of all the statements that appear to be true, and it seems entirely plausible that I cannot do so because of the inherent limitations of the logical systems I'm reasoning with.

+

This is a contradiction. The overall argument was that one of these 4 statements is false:

+
    +
  1. Godel's theorems say symbol systems lack some important properties.
  2. +
  3. Intelligent things have the properties that Godel says symbol systems lack.
  4. +
  5. Humans are intelligent.
  6. +
  7. Humans can't do the things Godel says symbol systems can't do.
  8. +
+

Some authors (like John Searle) might argue the false premise is 4. Most modern AI researchers would argue that the false premise is 2. Since intelligence is a bit nebulous, which view is correct may rely on metaphysical assumptions, but most people agree on premises 1 & 3.

+",16909,,2444,,12/11/2020 11:30,12/11/2020 11:30,,,,5,,,,CC BY-SA 4.0 +11530,2,,11519,3/29/2019 15:28,,1,,"

I will give you a basic idea of an approach.

+ +

The basic idea behind hill climbing algorithms is to find local neighbouring solutions to the current one and, eventually, replace the current one with one of these neighbouring solutions.

+ +

So, you first need to model your problem in a way such that you can find neighbouring solutions to the current solution (as efficiently as possible). But what is a solution to a TSP? It is a sequence of nodes (or vertices), such that the first node is equal to the last one (given that the travelling salesman needs to return to its initial position), no other vertex is repeated, and all vertices of the graph are included.

+ +

So, given a sequence of nodes $x_1, x_2, \dots, x_{n}, x_1$, how can you create a neighbouring solution that is valid, that is, one where there is no repeated vertex (apart from the initial and the last one) and all vertices are included?

+ +

Hint: recall that, in the case of TSP, we can assume that every node (or city) is connected to every other node.
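
In case a skeleton helps (without giving the whole exercise away), here is a minimal sketch of the hill-climbing loop; it uses one of the simplest possible neighbourhoods, swapping two cities, and assumes dist is a symmetric distance matrix where every city is connected to every other:

import random

def tour_length(tour, dist):
    # the tour implicitly returns to the start, hence the modulo
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def random_neighbour(tour):
    # one very simple neighbourhood: swap two cities (keeping the start city fixed)
    i, j = random.sample(range(1, len(tour)), 2)
    neighbour = tour[:]
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour

def hill_climb(dist, iterations=10000):
    current = list(range(len(dist)))
    random.shuffle(current)
    for _ in range(iterations):
        candidate = random_neighbour(current)
        if tour_length(candidate, dist) < tour_length(current, dist):
            current = candidate   # only accept improving neighbours
    return current, tour_length(current, dist)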

+",2444,,2444,,3/29/2019 15:33,3/29/2019 15:33,,,,0,,,,CC BY-SA 4.0 +11531,2,,11510,3/29/2019 17:16,,0,,"

First of all you don't need ""text"" input as in Christopher Bonnett's blog. Your case is more easy - demographic is table data, which can be expressed as vector of numeric values. This data should be processed - pushed through one or two fully connected layers. The trick is where, to what part of yolo to concatenate results of processing of this vector. Because it's vector data it should concatenated to fully connected layer. Where exactly should be found by experiments, but as starting point it could be concatenated to before-last (before output) fully connected layer (I think for yolo it's 4096-size layer).

+ +

Overall, I'd say this is not a trivial task. It requires some experience with deep learning, a good understanding of the yolo design and algorithm, and a lot of experimentation, both with the architecture and the hyperparameters. It is probably worth a solid paper. Good luck.
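
For what it's worth, here is a minimal Keras-style sketch of the general idea of concatenating a demographic vector into the network; it is not an actual yolo integration (the backbone, the input shapes and the output head below are placeholders), just an illustration of how an image branch and a tabular branch can be fused:

import tensorflow as tf

image_in = tf.keras.Input(shape=(224, 224, 3))   # placeholder image input
demo_in = tf.keras.Input(shape=(4,))             # e.g. age, sex, height, weight (made up)

x = tf.keras.layers.Conv2D(32, 3, activation='relu')(image_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

d = tf.keras.layers.Dense(16, activation='relu')(demo_in)

merged = tf.keras.layers.Concatenate()([x, d])   # fuse the image and tabular branches
merged = tf.keras.layers.Dense(128, activation='relu')(merged)
out = tf.keras.layers.Dense(1, activation='sigmoid')(merged)   # placeholder head, not yolo's

model = tf.keras.Model(inputs=[image_in, demo_in], outputs=out)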

+",22745,,,,,3/29/2019 17:16,,,,4,,,,CC BY-SA 4.0 +11532,2,,11528,3/29/2019 19:46,,-1,,"

Project management can most naturally be modelled with game theory. You can find an article here that treats it as a cooperative game; you can then apply opportunity cost to your budget, and you only need to define your own rules of the game.

+",10983,,2444,,1/6/2021 21:47,1/6/2021 21:47,,,,1,,,,CC BY-SA 4.0 +11533,2,,11507,3/29/2019 20:36,,1,,"

Welcome to AI.SE @Par!

+ +

What you have might be either a multi-label or a multi-class classification problem. If the classes are disjoint (each example belongs to just 1 of the 50 classes), it's a multi-class problem. If not (so each example can belong to several classes at once), it's a multilabel problem.

+ +

Multi-label classification is usually handled by training a separate model for each of the labels. If you want to label a new point, you then ask each model what it thinks, and assign the union of the labels that the models suggest together. An alternative approach is to use something like a neural network, which can have many outputs. You can then have one output neuron for each possible label.

+ +

Multi-class classification can be addressed by using the multilabel techniques, but this is usually not a good idea. The three main approaches that are used are ""one-v-one"", ""one-v-all"", and ""many-v-many"".

+ +
    +
  1. In 1-v-1, we train one model to discriminate each pair of classes (n(n-1)/2 models in total for n classes). To classify a new point, we ask each model which class it belongs to and assign a ""vote"" to the class the model returns. The class with the most votes overall is selected as the label for the new point. In case of a tie, we can report several possible answers to the user.
  2. +
  3. In 1-v-all, we train one model to discriminate each class from all the other classes (n models in total for n classes). To classify a new point, we ask each model whether it belongs to the model's primary class or not. Ideally, just one model claims the new point. If more than one does, we can report a tie, or use some notion of classifier confidence to select a winner.
  4. +
  5. In many-v-many, we train k models, each of which is tasked with discriminating some subset of the classes from all the rest. To classify a new point, we ask all of these models to label the point. We then pick the class that is most consistent with the results of all the models. This can also be done by deliberately constructing the models to form an error-correcting code.
  6. +
+ +

So which approach should you use? Well, Rifkin & Klautau's 2004 JMLR paper argues convincingly that the answer is to use one-versus-all classification. This is also pretty easy to do in most packages. For example, in Python's ScikitLearn, you can do it with OneVsRest. If you're not sure what to try, that's probably a safe bet.
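
If it helps, a minimal one-vs-rest sketch in scikit-learn (the base estimator and the synthetic data are just placeholders standing in for your 50-100 categories) looks like this:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict(X[:5]))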

+",16909,,,,,3/29/2019 20:36,,,,0,,,,CC BY-SA 4.0 +11534,1,,,3/29/2019 23:50,,1,86,"

Super comes from the Latin and means "above".

+
+

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".
(wiki)

+
+

Bostrom's definition could be taken to imply this is a quantitative measure of degrees as a numeric relationship. (Under this definition, we have achieved narrow superintelligence, reduced to competency in a single task.)

+

Gibson, famously, sheds light on another aspect via Wintermute & Neuromancer, where, once superintelligence is achieved, the AI just f-'s off and does its own thing, with motivations beyond human comprehension. (Essentially, "next-level" thinking.) The second measure is discrete and ordinal.

+

Is superintelligence a function of strength or a category?

+",1671,,2444,,1/22/2021 1:38,1/22/2021 1:38,Is superintelligence a function of strength or a category?,,3,0,,,,CC BY-SA 4.0 +11535,1,11549,,3/29/2019 23:57,,3,192,"

I'm working on understanding VAEs, mostly through video lectures of Stanford cs231n, in particular lecture 13 tackles on this topic and I think I have a good theoretical grasp.

+ +

However, when looking at actual code of implementations, such as this code from this blog of VAEs I see some differences which I can't quite understand.

+ +

Please take a look at this VAE architecture visualization from the class, specifically the decoder part. From the way it is presented here I understand that the decoder network outputs mean and covariance for the data distribution. To get an actual output (i.e. image) we need to sample from the distribution that is parametrized by mean and covariance - the outputs of the decoder.

+ +

Now if you look at the code from the Keras blog VAE implementation, you will see that there is no such thing. A decoder takes in a sample from latent space and directly maps its input (sampled z) to an output (e.g. image), not to parameters of a distribution from which an output is to be sampled.

+ +

Am I missing something or does this implementation not correspond to the one presented in the lecture? I've been trying to make sense of it for quite some time now but still can't seem to understand the discrepancy.

+",21278,,2444,,3/31/2019 18:44,4/2/2019 9:10,Do we also need to model a probability distribution for the decoder of a VAE?,,2,1,,,,CC BY-SA 4.0 +11536,2,,9834,3/30/2019 0:56,,2,,"

IMO, the greatest ""strength"" of HTM is that it is modeled after the human neocortex, which is the most intelligent thing we know of.

+ +

But to understand the importance of this simple idea one must contrast it with the most familiar form of AI - Neural Networks (NNs).

+ +

Traditional Neural Network AI has been under development for a long time and has many more people working on it than HTM. NNs are capable of performing a bewildering number of tasks, and the list of its accomplishments grows with every passing day.

+ +

However, NNs are not thinking. They perform their magic only after being trained on (typically) massive amounts of training data. Training a NN is essentially an advanced form of curve-fitting. If your training data encompasses closely enough what it encounters in new data then it will likely perform very well. However, if it encounters something new (which is sometimes difficult to know beforehand) then it can fail abysmally, and often in a way that humans would never fail.

+ +

One example I heard about was on a NN trained on millions of images that could briefly describe what was in new images it had never seen before. It performed fabulously - something like 95-97% accuracy. However, when it was shown an image of a baby holding a toothbrush, it said, ""A boy holding a baseball bat."" This is not a human-like error. Humans know the difference between a boy and a baby, and a bat and a toothbrush. This is just an example, but it reveals a fundamental problem of NNs - they are not thinking. Useful? Yes. Thinking? No.

+ +

Back to HTM. HTM is new and currently has only a handful of researchers working on it. It is ""better"" than NNs in only a small number of cases - it has a long way to go.

+ +

So if by ""strengths"" you're thinking about what tasks can currently be done better with HTM than with NNs, then most people should still chose NNs.

+ +

However, if by ""strengths"" you're thinking about what has the best chance of achieving general intelligence someday, then I would say hands-down it is HTM.

+",5927,,,,,3/30/2019 0:56,,,,0,,,,CC BY-SA 4.0 +11539,1,,,3/30/2019 5:31,,8,986,"

Often times I see the term deep reinforcement learning to refer to RL algorithms that use neural networks, regardless of whether or not the networks are deep.

+

For example, PPO is often considered a deep RL algorithm, but using a deep network is not really part of the algorithm. In fact, the example they report in the paper says that they used a network with only 2 layers.

+

This SIGGRAPH project (DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills) has the name deep in it and the title even says 'deep reinforcement learning', but if you read the paper, you'll see that their network uses only 2 layers.

+

Again, the paper Learning to Walk via Deep Reinforcement Learning by researchers from Google and Berkeley, contains deep RL in the title, but if you read the paper, you'll see they used 2 hidden layers.

+

Another SIGGRAPH project with deep RL in the title. And, if you read it, surprise, 2 hidden layers.

+

In the paper Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, if you read table 1 with the hyperparameters, they also used 2 hidden layers.

+

Is it standard to just call deep RL to any RL algorithm that uses a neural net?

+",17312,,2444,,7/1/2020 13:05,7/1/2020 13:07,Is reinforcement learning using shallow neural networks still deep reinforcement learning?,,2,0,,,,CC BY-SA 4.0 +11542,1,,,3/30/2019 8:57,,1,204,"

If the AI goal is to serve humans and protect them (if this ever happens) and AI someday realizes that humans destroy themselves, will it try to control people for their own good, that is, will it control man's will to not destroy himself?

+",23569,,2444,,3/30/2019 11:23,1/2/2020 21:45,"If the AI goal is the protection of humans, will it always pursue this goal?",,2,1,,,,CC BY-SA 4.0 +11544,2,,11542,3/30/2019 11:17,,1,,"

If the main goal of AI (which I assume you mean an AGI) is to protect humans and AI will be effective, then AI will always attempt to pursue its main goal (otherwise the assumption of its effectiveness does not hold), even at the expense of other less important goals that it might have. However, if the destruction of a human (or a group of humans) protected or avoided the destruction of other humans, then AI would face a dilemma. In that case, I think it is hard to predict the actions of the AI. Will it act rationally or irrationally? What would it mean for the AI to act rationally? Which parameters will it take into account? Only the number of deaths, or will take into account the future and weight the importance of the lives? How will it define the importance of a human life?

+",2444,,2444,,3/30/2019 11:22,3/30/2019 11:22,,,,5,,,,CC BY-SA 4.0 +11546,1,11561,,3/30/2019 11:47,,0,236,"

Can the inputs and outputs of a neural network (NN) be a neural network (that is, neurons and connections), so that ""if some NN exist, then edit any NN"".

+ +

I think that by creating NNs with various inputs and outputs, interacting with each other, and optimizing them with evolution, we can create strong intelligence.

+",23500,,23500,,3/31/2019 13:59,3/31/2019 22:02,Can the inputs and outputs of a neural network be a neural network?,,1,1,,,,CC BY-SA 4.0 +11547,2,,11534,3/30/2019 11:48,,1,,"

I am reading Bostrom's book ""Superintelligence"". I have only read the first 2 chapters, but I think he doesn't want to define super-intelligence is a precise way, but he leaves the reader the option to define it in a ""sensible"" way. However, I think that, in his thoughts, there's the (clear) assumption that a super-intelligence will necessarily need to be general, so a super-intelligence will be an AGI.

+",2444,,,,,3/30/2019 11:48,,,,0,,,,CC BY-SA 4.0 +11549,2,,11535,3/30/2019 16:06,,0,,"

Thanks @nbro for pointing this out.

+ +

The pictorial architecture in the slides uses a Gaussian likelihood, which, when coupled with maximum likelihood estimation, gives the squared error loss (this is not done to remove any tractability issues). The main reason we do the encoder Gaussian trick is to force the latent variable $z$ to be normal, so that we can apply the KL divergence to optimise an otherwise intractable integral. You can get a better intuition and reasoning in this video.

+ +

The pictorial architecture is basically taking the Gaussian loss so that the final loss becomes the squared error loss effectively. Also the loss term used in your blog link is exactly the same loss term used in the original paper, but the blog is using CE loss (it is the more common loss used for classification). I am not sure how they are using the CE loss though, since it is only valid for $0$ and $1$ values and AFAIK MNIST data-set has grayscale images.

+ +

I am not exactly sure how they implement the randomness of the Gaussian loss in the decoder structure, but in the simplest of cases they just take the MSE
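
For reference (my own summary, not a quote from the lecture or the original paper), the objective under discussion is the negative ELBO, $$\mathcal{L}(\theta, \phi; x) = -\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] + D_{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big),$$ where the first (reconstruction) term reduces to a squared-error loss if the decoder $p_\theta(x \mid z)$ is taken to be a Gaussian with fixed variance, and to a binary cross-entropy loss if it is taken to be a Bernoulli, which is how the Keras blog treats the MNIST pixels.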

+ +

Check out this blog on VAEs (where they have taken the mean $\Sigma$, which they have abbreviated as the mean; I have not checked their implementation detail to know what they exactly mean by that) and also this answer on Data Science on the implementation of VAEs (both of which give a more general form of the loss). Also, for the exact mathematics, check out Appendix C of the original paper.

+",,user9947,,user9947,4/2/2019 9:10,4/2/2019 9:10,,,,3,,,,CC BY-SA 4.0 +11553,1,,,3/31/2019 0:08,,3,526,"

I've implemented A2C. I'm now wondering why we would have multiple actors walk around the environment and gather rewards; why not just have a single agent run in an environment vector?

+ +

I personally think this will be more efficient, since now all actions can be computed together by only going through the network once. I've done some tests, and this seems to work fine. One reason I can think of to use multiple actors is implementing the algorithm across many machines, in which case we can have one agent per machine. For what other reason should we prefer multiple actors?

+ +

As an example of environment vector based on OpenAI's gym

+ + + +
import gym

class GymEnvVec:
    # A simple vector of Gym environments, stepped sequentially.

    def __init__(self, name, n_envs, seed):
        # create n_envs copies of the same environment, each with a different seed
        self.envs = [gym.make(name) for i in range(n_envs)]
        [env.seed(seed + 10 * i) for i, env in enumerate(self.envs)]

    def reset(self):
        # reset every environment and return the list of initial observations
        return [env.reset() for env in self.envs]

    def step(self, actions):
        # step each environment with its own action; the result is grouped as
        # (observations, rewards, dones, infos) across environments
        return list(zip(*[env.step(a) for env, a in zip(self.envs, actions)]))
+",8689,,2444,,5/11/2020 12:39,5/11/2020 12:39,What is the difference between A2C and running an agent in an environment vector?,,1,3,,,,CC BY-SA 4.0 +11554,1,,,3/31/2019 3:04,,1,188,"

I have billions of anonymized location coordinates of people's movements, collected from an app. I want to improve the user experience by using this location data.

+ +
    +
  1. For example, identify if the user is at home or at the office, so that what they view in the app changes.

  2. +
  3. Where will they be tomorrow at a particular hour, so that I can suggest securing their home via an IoT device if they are out.

  4. +
+ +

Regarding the first point, I tried to use the following rule: at night they are at home, and during the day they are at work or school. Regarding the second point, I have no idea how to proceed. Is there any way I could use AI to predict the future location and the home location?

+",23578,,2444,,3/31/2019 20:31,3/31/2019 20:36,How to predict human future location?,,1,3,,,,CC BY-SA 4.0 +11556,2,,11553,3/31/2019 8:09,,2,,"

I believe if you run a single agent in multiple parallel environments many times you will get similar actions in similar states, the reason behind multiple agents is that you will have different agents with different parameters and you can also have different explicit exploration policies so your exploration will be better and you will learn more from environment (see more state space). With single agent you can't really achieve that, you would have a single exploration policy, single parameter set for the agent and most of the time you would be seeing similar states (at least after a while). You would be speeding up your learning process but that's just because you're running multiple environments in parallel (compared to the regular actor-critic or Q-learning). I think quality of learning would be better with multiple different actors.

+",20339,,,,,3/31/2019 8:09,,,,9,,,,CC BY-SA 4.0 +11557,1,,,3/31/2019 10:03,,2,266,"

In the paper A Simple Neural Attentive Meta-Learner, the authors mentioned right before Section 3.1:

+ +
+

we preserve the internal state of a SNAIL across episode boundaries, which allows it to have memory that spans multiple episodes. The observations also contain a binary input that indicates episode termination.

+
+ +

As far as I can understand, SNAIL uses temporal convolutions to aggregate contextual information, from which causal attention learns to distill specific pieces of information. Temporal convolutions do not seem to maintain any internal state, and neither does the attention mechanism they use (which follows this paper). This makes me wonder: ""What is the internal state of a SNAIL?""

+",8689,,8689,,4/1/2019 2:33,9/28/2019 14:02,What is the internal state of a Simple Neural Attentive Meta-Learner(SNAIL)?,,1,0,,,,CC BY-SA 4.0 +11558,2,,11535,3/31/2019 11:04,,0,,"

The VAE architecture from the cs231n class is just a more general version of the code Keras provides, in which the covariance matrix is $\mathbf 0$. You can see this from the reparametrization trick $$\begin{align} x &= \mu + \Sigma\epsilon\\ &= \mu \quad \mathrm{if}\ \Sigma = 0 \end{align}$$

+",8689,,,,,3/31/2019 11:04,,,,0,,,,CC BY-SA 4.0 +11559,2,,11539,3/31/2019 12:04,,2,,"
+

Is it standard to just call deep RL to any RL algorithm that uses a neural net?

+
+

Yes, it seems to have become standard practice to label RL + any NN "Deep Reinforcement Learning". It is not a formalised term.

+

The whole "Deep Learning" movement started this decade is as much a marketing term as a scientific one. It is however based on the discovery of real improvements in neural network architecture and training approaches.

+

You may find that some (or even most) of these shallower networks will use improvements designed in the last decade or so, and also associated with deeper networks, such as Xavier initialization, ReLU activation, the Adam optimizer.

+

As a personal opinion, I would say that, if a published experiment uses just 1 or 2 hidden layers, and does not make use of any of these recent advances, then the "Deep" label is almost entirely a branding exercise. There were advances with such networks much longer ago. For instance the TD-Gammon paper is from 1995. For TD-Gammon, the authors used reinforcement learning and a NN with one hidden layer to create a Backgammon player that played better than any human player. This was well before "Deep Learning" was a term used to describe such networks, and the term "Deep Reinforcement Learning" does not appear in that paper.

+

However, because "Deep Learning" is such a loose branding term, there is also an argument that all these older approaches, and pretty much all neural networks with hidden layers, should be included. Wikipedia's definition for Deep Learning says:

+
+

Deep learning is a class of machine learning algorithms that:

+
    +
  • use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • +
  • learn in supervised (e.g., classification) and/or unsupervised (e.g., pattern analysis) manners.
  • +
  • learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.
  • +
+
+

Using that definition would include all the papers you cite. You don't need a 50 layer Resnet architecture to qualify. And the branding exercise makes more sense under that definition, because the newly invented techniques have made such systems that much more viable and worthy of investment (of time & effort as well as financially).

+",1847,,-1,,6/17/2020 9:57,3/31/2019 19:30,,,,0,,,,CC BY-SA 4.0 +11560,2,,11539,3/31/2019 12:16,,5,,"

Even after several years of success of deep learning systems (i.e. neural networks trained with gradient descent and back-propagation), as far as I know, there is not yet a consensus on what constitutes a deep neural network. Some people could use a neural network with 2 hidden layers and call it deep (like in your case), but other people may just dedicate the adjective deep to refer to neural networks with 10, 100, or more hidden layers. In fact, there are some good reasons to associate the term deep only to neural networks that have a significant number of hidden layers (e.g. 100): for example, the exploding (or vanishing) gradient problem does not typically arise if you only have one hidden layer but can easily occur with many (e.g. 100) hidden layers.

+

Nevertheless, a neural network with at least one hidden layer can approximate any continuous function, given enough (but finite number of) units (or neurons) in the layers. See the universal approximation theorem. For this reason, we could start denoting any such neural network as deep, but, although this rule would exclude perceptrons (which can only approximate linear functions, and nobody would probably call them deep anyway), this rule would be a bit redundant or useless (i.e. we may just not use the adjective deep to start with).

+

In your case, the rule that the authors are using seems to be the following: if it contains more hidden layers than the bare minimum (i.e. 1) to approximate any continuous function, then let's denote it as deep.

+",2444,,2444,,7/1/2020 13:07,7/1/2020 13:07,,,,2,,,,CC BY-SA 4.0 +11561,2,,11546,3/31/2019 16:31,,0,,"

A neural network essentially is a function:

+ +

$$\mathbf{y} = f(\mathbf{x}, \mathbf{\theta})$$

+ +

Where $\mathbf{x}$ is a vector input, $\mathbf{\theta}$ are changeable or learnable parameters, and $\mathbf{y}$ is a vector output.

+ +

There are some variations of this in practice, as you can make special arrangements of $\mathbf{x}$ and $\mathbf{y}$, or use an internal state and feedback loops to allow either or both $\mathbf{x}$, $\mathbf{y}$ to be sequences. However, the above function is basically what a neural network is; how it works beyond that summary are details that you can study.

+ +

In supervised learning, you are interested in fixed sizes/shapes of $\mathbf{x}, \mathbf{y}, \mathbf{\theta}$ and trying to find a value of $\mathbf{\theta}$ such that

+ +

$$f(\mathbf{x}, \mathbf{\theta}) \approx g(\mathbf{x})$$

+ +

where $g(\mathbf{x})$ is some ""true"" function that you care about, and can find or generate examples of, but typically don't fully know. The value $\mathbf{\theta}$ is called the parameters of the neural network. In addition to these parameters, there are also hyper-parameters of the neural network, which include how many neurons there are in each layer, the valid connections between them, which non-linear function is applied after summing connections between them, etc.

+ +

There are learning algorithms used to find $\mathbf{\theta}$ - the many variations of gradient descent being the most popular. Some algorithms - mainly evolutionary approaches - can also vary hyper-parameters, although a more common approach is to repeatedly find $\mathbf{\theta}$ using gradient descent, and vary hyper-parameters in different learning trials, using some metric of performance using test data to pick the best one.

+ +
+

Can the inputs and outputs of a neural network (NN) be a neural network (that is, neurons and connections), so that ""if some NN exist, then edit any NN"".

+
+ +

Yes - partially. This is a data representation issue. To use it as an input, you would need to express the state of a neural network as a vector - or sequence of vectors - for the input, and the output/edit would also need to be a vector. Probably the simplest way to do this would be to use one network directly output a fixed length vector for $\mathbf{\theta}$ of the target network given an existing value of $\mathbf{\theta}$. That would not allow you to change connections or layer sizes etc, but it would be a very straightforward way to express ""one neural network altering another"" (ignoring whether this was in any way useful for a task).

+ +

If the output was a full representation of the new network, then you would have a function that took as input the definition of one neural network, and output the definition of another network. It would be up to you to convert to/from implemented neural networks and the representations for input $\mathbf{x}$ and $\mathbf{y}$.

+ +

If the output $\mathbf{y}$ was an ""edit"" for a change $\mathbf{x} \rightarrow \mathbf{x}'$, then you would have to decide what edits are allowed, design the representation and write code that applied the edit (the NN would not actually make any changes to another NN by itself). There is no standard way to do this, although there are things you could base this on (such as NEAT).

+ +

The big unanswered question with both of the approaches though is what your ""true function"" $g(\mathbf{x})$ is supposed to be. Having a neural network that represents a neural network generator or edit function is only half of the problem. You also need a way to either generate ""correct"" outputs to learn from, or a way to assess outputs against a goal.

+ +

The goal cannot simply be ""make a valid edit"", as the number of valid edits that will do nothing useful vastly outnumbers the number of edits that have some specific purpose. This is a similar issue to the fact that there are roughly $2^{8000000}$ valid 1MB files, but only a small proportion of those will be valid image files, and a smaller proportion still will be valid images that represent a natural image that could be taken by a camera. Neural networks that generate natural images therefore must be trained using natural images as a reference, otherwise they will tend to produce meaningless static-like noise.

+ +
+

I think that by creating NNs with various inputs and outputs, interacting with each other, and optimizing them with evolution, we can create strong intelligence

+
+ +

This is very broadly compatible with an Articial Life approach to AI. Although there are two differences between what you are proposing and typical A-life approaches:

+ +
    +
  • Evolutionary algorithms need some measure of fitness, in order to select the best performing individuals to take forward. A-life solves this by implementing a very open environment that makes no direct judgement on outputs of functions, but allows virtual creatures that collect enough resources (defined in the environment) to procreate.

    + +
      +
    • Your suggestion contains no hint that you are thinking of any kind of measure of success or fitness for either the editing NN or target NN. You will need some measure of fitness at least for the target network (and maybe the editing network too) if you intend to use evolutionary algorithms
    • +
  • +
  • A-life typically does not treat the evolutionary algorithm itself (the editing or the NNs) as a learning goal. You will not see the results of a good or bad editor until many simulations have passed, so this ""meta-search"" is likely to be incredibly slow.

    + +
      +
    • A-life simulations are typically already quite slow to reach behaviour which is interesting (because it has emerged without direction from the developer), but usually quite simple for the given environment such as predators chasing prey.
    • +
  • +
+ +

From what we know of the evolution of life, a simple feedback mechanism of RNA molecules editing other RNA molecules - the ""RNA world"" - is considered a likely pre-life step. This has some parallels with what you are suggesting - and this or something similar perhaps, has resulted in intelligent beings such as ourselves. So your idea perhaps has some merit in a theoretical sense. However, it took biology billions of years to go from such a basic stage to self-aware creatures, and this took the full processing power of large numbers of atoms interacting in ways that even all the computers in the world could not simulate in a fraction of real time.

+ +

To speed things up, and turn the idea into something feasible, you would need to look into viable environments and evaluations that would focus the learning towards direct measures of intelligence. Also, you would do well to compare your idea about NNs editing NNs with learning approaches that don't use such a feedback loop.

+ +

There is no theory that suggests a NN that can edit or output other NNs would offer any advantage for research into strong AI, compared to other search methods. The idea is basically ""AI alchemy"" - an idea of an experiment that could perhaps be done, but without any theory backing it as being better or worse than other ideas.

+ +

Personally, I would expect the search for a NN which is a good NN editor for a NN which has some other task, to be too slow to be useful when faced with very broad tasks such as exhibiting high level reasoning.

+",1847,,1847,,3/31/2019 22:02,3/31/2019 22:02,,,,0,,,,CC BY-SA 4.0 +11562,2,,11510,3/31/2019 17:05,,0,,"
+

you need to take into account the demographics of the patient

+
+ +

How, exactly?

+ +
    +
  • Is it a difference of, say, threshold? In this case you can do this serially (as @mirror2image mentions): process the image and then conclude by comparing the size of what you saw to, say, an age-dependent threshold.
  • +
  • Or has the whole processing to be different? In the extreme, you would not wait until the very end before asking whether the patient is a man if you are looking for prostate cancer.
  • +
+ +

To design the model, you need enough medical understanding to make such choices. The model can handle the parameters, but you have to choose the architecture.

+",23584,,,,,3/31/2019 17:05,,,,6,,,,CC BY-SA 4.0 +11563,1,,,3/31/2019 18:10,,2,217,"

I am trying to understand the similarities and differences between: (i) the UCT algorithm in Kocsis and Szepesvári (2006); (ii) the UCT algorithm in Section 3.3 of Browne et al (2012); (iii) the MCTS algorithm in Silver et al. (2016); (iv) the MCTS algorithm in Silver et al. (2017).

+ +

I would be really grateful for some help identifying the similarities and differences in these papers, I am doing some research and really struggling right now.

+ +

(i) http://ggp.stanford.edu/readings/uct.pdf

+ +

(ii) http://mcts.ai/pubs/mcts-survey-master.pdf (Section 3.3)

+ +

(iii) https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf

+ +

(iv) https://deepmind.com/documents/119/agz_unformatted_nature.pdf

+",23589,,2444,,4/1/2019 21:53,4/1/2019 21:53,"Similarities and differences between UCT algorithms in (i), (ii), (iii) and (iv)?",,1,0,,,,CC BY-SA 4.0 +11565,1,11583,,3/31/2019 20:18,,1,498,"

I am following this TensorFlow JS tutorial where you load car data. The data looks like this:

+ +
[{x:100, y:20}, {x:80, y:33}]
+
+ +

X is the horsepower of a car, Y is the expected miles per gallon usage. After creating the model I save it locally using:

+ +
async function saveModel(){
+    await model.save('downloads://cars-model');
+}
+
+ +

Next, I load the model in a separate project, to make predictions without needing the original data.

+ +

NEW PROJECT

+ +
async function app(){
+    let model = await tf.loadLayersModel('./cars-model.json');
+    console.log(""car model is loaded!"");
+}
+
+ +

I expect to be able to run predict here, on a single number (say, 120)

+ +
model.predict(tf.tensor2d([120], [1, 1]))
+
+ +

QUESTION

+ +

I think the number 120 needs to be normalised to a number between 0-1, just like the training data was. But how do I know the inputMin, inputMax, labelMin, labelMax values from the loaded model?

+ +

To un-normalise the prediction (in this case 0.6) I also need those original values.

+ +

How do I normalise/un-normalise data when loading a model?

+ +

original prediction code uses label and input values from the original data

+ +
function testModel(model, inputData, normalizationData) {
+    const { inputMax, inputMin, labelMin, labelMax } = normalizationData;
+
+    // Generate predictions for a uniform range of numbers between 0 and 1;
+    // We un-normalize the data by doing the inverse of the min-max scaling 
+    // that we did earlier.
+    const [xs, preds] = tf.tidy(() => {
+
+        const xs = tf.linspace(0, 1, 100);
+        const preds = model.predict(xs.reshape([100, 1]));
+
+        const unNormXs = xs
+            .mul(inputMax.sub(inputMin))
+            .add(inputMin);
+
+        const unNormPreds = preds
+            .mul(labelMax.sub(labelMin))
+            .add(labelMin);
+
+        // Un-normalize the data
+        return [unNormXs.dataSync(), unNormPreds.dataSync()];
+    });
+
+
+    const predictedPoints = Array.from(xs).map((val, i) => {
+        return { x: val, y: preds[i] }
+    });
+
+}
+
+",11620,,11620,,4/1/2019 13:06,9/16/2020 13:48,How do I normalise/un-normalise data when loading a model?,,1,2,,,,CC BY-SA 4.0 +11566,2,,11554,3/31/2019 20:30,,1,,"

Your problem is often called (in the literature) human mobility prediction. There has been some research in this area. Have a look at it on the web.

+ +

In general, you might want to use any statistical or machine learning model that uses the historical data to predict the future. For example, you could try to use a hidden Markov model (for both points 1 and 2). However, before that, you might also need to do some feature engineering, if you only have locations (and not e.g. the time of the day when those locations were recorded).

+",2444,,2444,,3/31/2019 20:36,3/31/2019 20:36,,,,1,,,,CC BY-SA 4.0 +11567,1,,,3/31/2019 21:07,,9,1048,"

I've pondered this for a while without developing an intuition for the math behind the cause of this.

+ +

So what causes a model to need a low learning rate?

+",20257,,2444,,12/23/2021 23:07,12/23/2021 23:07,What causes a model to require a low learning rate?,,1,1,,,,CC BY-SA 4.0 +11570,2,,2324,4/1/2019 0:31,,0,,"

This depends on the definition(s) of AGI and ASI. Both are currently ill-defined. Most researchers in AGI follow their own definition of AGI.

+

At least one researcher believes that there is no such thing as ASI. This is because the basic principles of such an AGI always stay the same, whether these are the learning processes, the core logic(s) or the control logic (reasoning systems are divided into control systems and logic systems, and the control system(s) decide which derivations are fruitful).

+

ASI may be defined as a search for any combination of the following (just a subset that comes to my mind):

+
    +
  • search for better algorithms
  • +
  • heuristics
  • +
  • better contemporary (NN) architectures
  • +
  • learning mechanisms
  • +
  • solving techniques
  • +
  • higher subjective beauty
  • +
  • better compression of knowledge
  • +
  • better subsystems
  • +
  • NN and in general architectures
  • +
  • better embedded AGI's
  • +
  • faster solving capabilities of known problems
  • +
  • ...
  • +
+
+

There are limitations to any sort of (recursive) self-improvement, however. Examples of these are:

+
    +
  • the score of AlphaGo and AlphaGo-Zero plateaus after a long enough training period
  • +
  • supercompilation of a supercompiled program yields no improved program after a few iterations
  • +
  • ...
  • +
+

Note here that these are examples about weak AI and may not apply to AGI - but, in my opinion, it is very likely that they do.

+
+

So the level of worry depends on the plausible (or followed) definition of AGI and the assumptions of the mechanisms an AGI may employ.

+",21141,,21141,,2/13/2023 10:27,2/13/2023 10:27,,,,0,,,,CC BY-SA 4.0 +11571,1,,,4/1/2019 0:36,,1,73,"

Description logic is a fragment of first order logic, but description logic is decidable and first order logic not decidable. Why is that? what is the role of variables in first order logic to make it undecidable?

+",23590,,2444,,4/1/2019 13:25,4/1/2019 13:25,Why is description logic decidable but first order logic is not decidable?,,1,0,,,,CC BY-SA 4.0 +11574,1,,,4/1/2019 3:39,,2,51,"

I'm using LSTM to categorize medium-sized pieces of text. Each item to be categorized has several free-form text fields, in addition to several categorical fields. What is the best approach to using all this information for categorization? I see two options:

+ +
    +
  • Concatenate the text from all fields, preceding each field content with a special token. Run concatenated text through LSTM.
  • +
  • Train one model per field. Concatenate output from each model in a hidden layer and pass into subsequent layers.
  • +
+ +

What are the benefits of each of the approaches? Is there an alternative I'm missing?

+",23599,,,,,4/1/2019 3:39,Multi-field text input for LSTM,,0,1,,,,CC BY-SA 4.0 +11575,1,,,4/1/2019 4:53,,5,155,"

I’m looking to match two pieces of text - e.g. IMDb movie descriptions and each person’s description of the type of movies they like. I have an existing set of ~5000 matches between the two. I particularly want to overcome the cold-start problem: what movies to recommend to a new user? When a new movie comes out, to which users should it be recommended? I see two options:

+ +
    +
  1. Run each description of a person through an LSTM; do the same for each movie description; concatenate the results for some subset of possible combinations of people and movies, and attach to a dense net to then predict whether it’s a match or not
  2. +
  3. Attempt to augment collaborative filtering with the output from running the movie description and person description through a text learner.
  4. +
+ +

Are these tractable approaches?

+",23599,,16565,,4/4/2019 20:24,5/9/2019 21:27,Cold start collaborative filtering with NLP,,1,0,,,,CC BY-SA 4.0 +11576,1,11592,,4/1/2019 6:29,,10,4518,"

Are decision tree learning algorithms deterministic? Given a fixed dataset, do they always produce a tree with the same structure?

+ +

What about the random forest?

+",23601,,2444,,11/21/2019 3:05,1/15/2022 5:53,Are decision tree learning algorithms deterministic?,,1,0,,,,CC BY-SA 4.0 +11579,2,,11008,4/1/2019 10:46,,5,,"

They would probably have followed the same sequence we do:

+ +
    +
  • be amazed at the capabilities,
  • +
  • ask how it is done,
  • +
  • wonder whether this is really intelligence and/or point out how narrow the performance was,
  • +
  • require more next time to be impressed again.
  • +
+",23584,,,,,4/1/2019 10:46,,,,0,,,,CC BY-SA 4.0 +11580,2,,11571,4/1/2019 11:34,,1,,"

This has to do with whether or not you can define arithmetic inside the axiomatic system. In description logic you cannot express arithmetic sentences, while in first-order logic you can.

+ +

If you look at the proof of incompleteness, you will understand this in depth. The proof depends on an arithmetic encoding of statements, and this representation is fundamental to obtaining the conclusion.

+",23608,,,,,4/1/2019 11:34,,,,2,,,,CC BY-SA 4.0 +11583,2,,11565,4/1/2019 13:02,,1,,"
+

How do I convert this to a number between 0-1 without having access to the original car data?

+
+ +

You save the normalisation parameters (typically an offset and a multiplier for each column), and consider that part of the model. Typically you do this when you originally scale training data.

+ +

When you want to re-use the model, as well as loading the neural network architecture and weights, you need to load the normalisation parameters in order to re-use those too and scale your inputs.

+ +
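
As an illustration of this idea (independent of any particular framework, so treat the file name and dictionary keys below as assumptions rather than an API), a minimal Python sketch of keeping the normalisation parameters with the model could look like this:

import json
import numpy as np

# Fit the normalisation parameters on the training data and keep them.
x_train = np.array([100.0, 80.0, 150.0])   # e.g. horsepower values (made up)
params = {'inputMin': float(x_train.min()), 'inputMax': float(x_train.max())}

# Save them next to the model weights so they can be reloaded later.
with open('cars-model-normalisation.json', 'w') as f:
    json.dump(params, f)

# At prediction time, load the parameters and apply the same scaling.
with open('cars-model-normalisation.json') as f:
    params = json.load(f)

x_new = 120.0
x_scaled = (x_new - params['inputMin']) / (params['inputMax'] - params['inputMin'])
print(x_scaled)

The same un-scaling applies in reverse to the predicted labels, using the saved label minimum and maximum.

+ +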

When tutorials present a self-contained neural network that loads training data, builds a model, then tests that model, all in the same process, then often this step is not shown. However, saving the normalisation data is important, and basically should be considered part of the model, even though it is not directly part of the neural network parameters or hyper-parameters.

+",1847,,,,,4/1/2019 13:02,,,,6,,,,CC BY-SA 4.0 +11585,2,,11567,4/1/2019 13:27,,6,,"

Gradient Descent is a method to find the optimum parameter of the hypothesis or minimize the cost function.

+ +

The parameter update rule is $\theta \leftarrow \theta - \alpha \nabla_{\theta} J(\theta)$, where $\alpha$ (alpha) is the learning rate.

+ +

If the learning rate is too high, the updates can overshoot the minimum and fail to minimize the cost function, hence resulting in a higher loss.

+ +

Since gradient descent can only find a local minimum, a learning rate that is too low may also result in bad performance. It is therefore better to try different (e.g. random) values of this hyperparameter; this can increase the total training time, but there are advanced methods, such as adaptive gradient descent, that can manage the training time.

+ +
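
As a toy illustration (not part of the original answer), the effect of the learning rate on plain gradient descent can be seen on a one-dimensional quadratic:

def gradient_descent(alpha, steps=25, theta=5.0):
    # Minimise J(theta) = theta^2, whose gradient is 2 * theta.
    for _ in range(steps):
        theta = theta - alpha * 2 * theta
    return theta

print(gradient_descent(alpha=0.01))   # small steps: slow but steady progress
print(gradient_descent(alpha=0.1))    # converges quickly towards the minimum at 0
print(gradient_descent(alpha=1.1))    # too large: the iterates overshoot and diverge

+ +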

There are lots of optimizers for the same task, but no optimizer is perfect. The choice depends on some factors:

+ +
    +
  1. Size of the training data: as the size of the training data increases, the training time of the model increases. If you want a shorter training time, you can choose a higher learning rate, but this may result in bad performance.
  2. +
  3. The optimizer (gradient descent) slows down whenever the gradient is small; in that case it is better to go with a higher learning rate.
  4. +
+ +

P.S. It is always better to try several rounds of gradient descent (e.g. with different learning rates).

+",15368,,15368,,4/1/2019 14:11,4/1/2019 14:11,,,,8,,,,CC BY-SA 4.0 +11586,2,,5428,4/1/2019 18:05,,4,,"

You can contribute to AGI research in several ways.

+ +
    +
  • Write papers for the Artificial General Intelligence conference, which is peer-reviewed, or other AGI conferences or journals, or just submit them to e.g. Arxiv. For example, you can write papers about AGI algorithms or architectures.

  • +
  • Contribute to (open source) AGI systems, like OpenCog, BECCA or OpenNARS.

  • +
  • Implement AGI algorithms, mechanisms or architectures, and put them, for example, on Github.

  • +
+",21141,,2444,,11/22/2019 22:55,11/22/2019 22:55,,,,0,,,,CC BY-SA 4.0 +11587,2,,11563,4/1/2019 18:10,,1,,"

The algorithm in the 2012 survey article (your second link) is the most common / standard implementation. Whenever someone mentions using MCTS or UCT, without explicitly stating any other info, it's safe to assume that that pseudocode is what they're using.

+ +
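
For reference, the selection phase of that standard implementation typically picks the child node that maximises the UCB1 score; a minimal sketch (the node attributes used here are just an assumed representation, not taken from any of the papers):

import math

def ucb1_select(children, exploration=math.sqrt(2)):
    # children: list of nodes, each with .visits and .total_reward attributes
    parent_visits = sum(child.visits for child in children)

    def ucb1(child):
        if child.visits == 0:
            return float('inf')   # always try unvisited children first
        exploitation = child.total_reward / child.visits
        exploration_term = exploration * math.sqrt(math.log(parent_visits) / child.visits)
        return exploitation + exploration_term

    return max(children, key=ucb1)

+ +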

The paper by Kocsis and Szepesvári from 2006 (your first link) is (one of) the original publication(s) on UCT. It is very similar to the ""standard"" implementation as described in the survey paper. If I recall correctly, the only important difference is that the algorithm as described in 2006 keeps track of at which point in time during an episode a reward is observed, and accounts for that timing in the backpropagation phase (i.e. if a reward is observed at time $t$ in an episode, credit for that reward is not assigned to states/actions after time $t$). It also used a discounting factor $\gamma$ to discount the importance of rewards depending on temporal distance, which is uncommon otherwise in MCTS literature.

+ +

Both of those differences are due to the original, 2006 paper having more of a ""Markov decision process"" or ""Reinforcement Learning"" flavour, whereas otherwise MCTS has especially become popular in AI for (board) games where we often by default assume that there is only a single nonzero reward (win or lose) at the end of every episode anyway, which makes those differences less meaningful.

+ +
+ +

Both of the AlphaGo papers (AlphaGo and AlphaGo Zero) use a ""foundation"" of MCTS that is mostly similar to the one from the 2012 survey article. The core components are all the same. Both of those systems add on a lot of (very important, and some quite complex) bells and whistles though.

+ +

Going off the top of my head (should be mostly accurate, but best details are in the original source of course!) AlphaGo (your third link) added Neural Networks trained in various ways to output policies (mapping from states to probability distributions over actions) and value functions (mapping from states to ""value estimates"", or estimates of win percentage). Trained policy networks were used in a variant of the ""selection"" phase (no longer UCB1 strategy as in UCT) to bias selection. A different (simpler function, not a deep net) policy was used to run play-outs (no longer uniform-at-random action selection as in UCT). A combination of observed reward at end of play-out + value function estimate at start of play-out was used for backpropagation (no longer only backpropagating reward observed in play-out as in UCT).

+ +

AlphaGo Zero is, if we're purely looking at the MCTS part, quite similar to AlphaGo. The Neural Networks were a bit different (a single one, with multiple outputs for policy + value), and play-outs were no longer used at all (just immediate backpropagation of value function estimates after MCTS' selection + expansion phases). Apart from that, the primary differences going from AlphaGo to AlphaGo Zero were in the learning process used to train the Neural Networks really, not so much in the MCTS side of things.

+",1641,,,,,4/1/2019 18:10,,,,0,,,,CC BY-SA 4.0 +11588,1,,,4/1/2019 18:41,,3,185,"

I am taking AI this semester and we have a semester project that will last 4 weeks. We can choose just about anything.

+ +

So, what are some possible semester projects that can be finished in a 4-week time-frame?

+ +

Some background information: I am a graduate student in CS, but this is my first AI course. My research area is in the space of data mining and analytics. I am open to doing anything that seems interesting and creative.

+",23622,,2444,,6/16/2020 10:47,6/16/2020 13:14,What are some possible projects that can be finished in a 4-week time-frame?,,2,2,,,,CC BY-SA 4.0 +11589,2,,11588,4/1/2019 23:26,,7,,"

Welcome to AI.SE @Kate_Catelena!

+ +

I teach AI courses at the undergraduate level, and so have seen a lot of semester projects over the years. Here are some templates that often lead to exciting outcomes:

+ +
    +
  1. Pick a new board or card game, and write a program to play it. Your course has probably covered Adversarial Search, and may also have covered Monte Carlo Tree Search, or self-play reinforcement learning approaches. These projects are often fun to mark and creative because they are easy enough to be well done, and yet there are always new, exciting, domains to apply these algorithms to. Some examples of past projects that I thought were neat were an AI to play the boardgame Tac (mostly A* Search), and an AI to play the card game Love Letter (mostly counterfactual regret minimization, the algorithm used to solve poker).

  2. +
  3. Pick a question that you would like to know the answer to, that could be addressed with machine learning. Then implement your own ML algorithm (decision tree learners are fairly easy), gather your own data, and show a result. Examples of interesting projects I've seen in the past are using ML to find out which of a number of factors most strongly influenced a students' subjective quality of sleep; and which items are most commonly purchased along with camping supplies (using association rule mining).

  4. +
  5. Anything involving reinforcement learning. RL projects are always neat to see if accompanied by a visualization showing the learner's behavior at different stages. A strong past project involved a student simply replicating Sutton & Barto's Acrobot experiments with their own implementation of the SARSA-Lambda algorithm. Other things that might be neat include making a trainable ""pet"" that the user can influence, or solving games using self-play.

  6. +
  7. Theoretical results might seem intimidating but are often more accessible than one might think, especially if your discrete math skills are strong. I have had many student projects where the student went away to look at theory papers in ML or Multiagent Systems, found a suggestion in the future work sections that wasn't a big result, but that was fairly easy to prove, and proved it. Sometimes these are even publishable.

  8. +
  9. Replications. Go find an interesting AI paper (use scholar.google.com), and then see if you can do exactly what the authors suggest and if you get the same result or not. Then, if you have time, see if you can improve on the results. These are often most interesting when you find a paper written in a different field that uses AI. Often the authors of such papers know less about AI than you do, and so it can be fairly easy to improve on their results. I have had several students do projects like this to great effect.

  10. +
+ +

Those categories are a bit vague, but remember: AI touches almost anything. Pick your favorite hobby, and see whether you can relate it to AI using one of the approaches above. Nothing makes a project stand out like one that applies AI to solve some real issues in an exciting domain. Good luck!

+",16909,,,,,4/1/2019 23:26,,,,0,,,,CC BY-SA 4.0 +11590,2,,11588,4/2/2019 0:50,,2,,"

Here are some possible options

+ +
    +
  1. Music Generation using GA/MA
  2. +
  3. Open AI's gym projects
  4. +
  5. 2048 on RL and search algorithms
  6. +
  7. Fixing bugs in the source code of some AI software project
  8. +
+",20522,,2444,,6/16/2020 13:14,6/16/2020 13:14,,,,0,,,,CC BY-SA 4.0 +11592,2,,11576,4/2/2019 3:47,,9,,"
+

Are decision tree learning algorithms deterministic? Given a fixed dataset, do they always produce a tree with the same structure?

+
+

Generally, yes. Most decision tree learners, like the common ID3 and C4.5/C5.0 algorithms, are deterministic. At each step, the learners consider all possible features that have not yet been used to split the data and find the splits that maximize some function (e.g. information gain). There is no randomness (or pseudo-randomness) in this process.

+

The exceptions to this would be if you used randomness to break ties (rather than, say, using the index of each feature, as is common), but this would be an unusual modification.

+

What about the random forest?

+

As the name suggests, random forests do make use of randomness, or at least, pseudo-randomness. If we're only concerned about whether or not the algorithm is deterministic in the usual sense of the word (at least, within computer science), the answer is no.

+

If you start the same random forest learning algorithm, with the same datasets, at two different times (or, using two different seeds for your pseudorandom number generator), you will get two different forests. This is because the algorithm selects random subsets of the features and/or datapoints to learn on, and, if different seeds are used, the subsets will be different each time.
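
+ +

As a small illustration (using scikit-learn, which is just an assumed example library here, not something mentioned in the question), fixing the seed makes the random forest reproducible, while different seeds generally give different forests:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Same seed twice: the two forests are built identically.
a = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
b = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print((a.feature_importances_ == b.feature_importances_).all())   # True

# Different seed: the bootstrap samples and feature subsets differ,
# so the resulting trees will generally differ as well.
c = RandomForestClassifier(n_estimators=10, random_state=1).fit(X, y)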

+",16909,,18758,,1/15/2022 5:53,1/15/2022 5:53,,,,2,,,,CC BY-SA 4.0 +11593,1,11601,,4/2/2019 6:28,,1,239,"

I'm trying to generate images at minimum of size 128 x 128 with a Generative Adversarial Network. I already tried a SAGAN pytorch implementation, but I'm not very happy with results. The images look cool but and I see some correct shape but without explanation you wouldn't know what the images are about. I have a dataset of 4000 images. Lightness, colors and shapes vary a lot, but they are similar in style and on what they portray.

+ +
    +
  • With a Google Cloud V100 GPU the GAN would run for one to two weeks with default parameters. Does this sound like a realistic time for this kind of dataset? It's definitely not feasible for me.
  • +
  • Is 4000 images enough to train a GAN from scratch?
  • +
  • Is there any implementation with pytorch/keras that would be good to get nice results with?
  • +
+",3579,,,,,4/2/2019 16:01,How to get good results with GAN and some thousands of images?,,1,0,,,,CC BY-SA 4.0 +11595,1,,,4/2/2019 11:15,,1,90,"

I've been working on genetic algorithms & evolutionary strategies for a while now in a research context. Across the vast majority of the articles and content I've read, every single one of them will either use Python, Matlab, or Java/C++ to build & benchmark their algorithms.

+ +

Is there an objective reason for these languages to be the only ones used in a research environment? I ask mainly in contrast with other languages like C# or JavaScript, which are almost never used (despite being some of the most used programming languages in other areas), even though it would certainly be possible in practice to implement all current algorithms in them.

+",23637,,,,,12/28/2019 17:00,Is there a reason evolutionary algorithms are language-bound in research material?,,1,1,,,,CC BY-SA 4.0 +11596,1,11604,,4/2/2019 12:11,,2,162,"

I am working on a DDQN with 5 LSTM layers, 3 actions as output and a state space of 21 features. I am dividing the dataset into episodes of 720 timesteps. For each episode, the agent acts greedily for the first 480 steps without training, collecting a replay memory, and then updates the parameters at each step for the subsequent 240 steps, using a window (of 96 steps) randomly sampled from the replay memory (which always saves the last 480).

+ +

My problem is that, so far, the agent has learned the optimal policy just once, and its behaviour looks like the first figure (on the test set, with training off), where, as you can see, the agent dynamically changes its evaluation of the state and acts greedily accordingly. All works fine and the performances are optimal. However, I have to slightly change the normalization of the database and rerun the training to get new parameters fitted to the new database.

+ +

Trying to get to the same result has proven impossible so far (even keeping all settings the same!), because most of the time the agent learns to keep its q-values static, as in the second figure. Note: that figure is an extract of the end of an episode and the beginning of a new one; the noisy behaviour at the extremes is due to the model being trained at each step, while in the middle the training is off and the agent acts greedily (as I explained above). The problem is that the learned parameters give rise to static q-values that do not change while the state does, which inevitably leaves the agent stuck in a suboptimal strategy, as it never changes actions, even on longer sequences on the test set. The middle part of the second picture, where training is off, should look like the first picture; however, I am unable to get back to that optimal behaviour, even keeping all the parameters as they were.

+ +

Any idea on what can be the cause of this anomalous behaviour?

+",23638,,,,,4/2/2019 16:32,DQN Q-values are static,,1,4,,,,CC BY-SA 4.0 +11599,1,,,4/2/2019 14:28,,0,46,"

I was thinking about what I could do for my thesis in the upcoming academic semester, and I came across an idea: is there any kind of system that generates website designs by itself? If not, then I can go for it, and I will be lucky if anyone has the same idea, so we can collaborate. If there is any project or system (open source or not) that can do this, or that has been started in this context, I want to contribute to it. If anyone has any clue or knowledge about this kind of system, please do inform me. As I haven't done anything like this before, I want to learn. Any kind of suggestion or assistance on this idea will be very helpful for me.

+",23643,,2444,,4/2/2019 16:23,4/2/2019 16:23,Is there any system that generates website designs?,,1,0,,,,CC BY-SA 4.0 +11600,2,,11595,4/2/2019 15:51,,2,,"

I would say there are quite a few different reasons for this, with the proportion of each dependent on a given researcher.

+ +

For example, I use python for the vast majority of what I do. And for me, it is due to a few different factors:

+ +
    +
  1. I was already familiar with python, and it is a simple, high-level abstraction. This is probably the reason a lot of people set it and forget it, particularly with python. It allows them to focus on the ideas and implementation of said ideas, without worrying about all the junk that comes with trying to write a program in a faster language like C++

  2. +
  3. The vast majority of ML/DS packages are only or primarily supported via python. I think this is probably the main reason for most in the field, as even if one can implement the architecture in a faster language, the time to do so would likely even out when taking into account the time required to prototype a given model. Tensorflow and others are supported for other languages but do not see the same level of dev support.

  4. +
  5. The ability to deploy models to multiple platforms without headache. When working in an environment where the work is also applied, the ability to deploy a given model without too much debugging cannot be understated.

  6. +
+ +

These are just a few of the main ones, and, as I said, the reason for choosing a particular language over another is primarily a preferential one, and can even vary by requirements (e.g. speed).

+",9608,,,,,4/2/2019 15:51,,,,0,,,,CC BY-SA 4.0 +11601,2,,11593,4/2/2019 16:01,,1,,"
+

With a Google Cloud V100 GPU the GAN would run a week to two with + default parameters. Does this sound realistic time for this kind of + dataset? It's definitely not feasible for me.

+
+ +

Yes, V100s are quite beefy. You shouldn't even need a week. Obviously this is based on my experience with various problems, rather than a concrete calculation.

+ +
+

Is 4000 images enough to train a GAN from scratch?

+
+ +

For the size you want to generate, it is still on the edge of what would constitute a decent training set. You will get some results(depending on architecture) but will probably want to grab more data if at all possible.

+ +
+

Is there any implementation with pytorch/keras that would be good to + get nice results with?

+
+ +

I would check out this link: https://github.com/eriklindernoren/Keras-GAN. It has some nice implementations in Keras. As far as the particular GAN to use, I would start out with a vanilla GAN that fits your purposes and focus on toying with hyperparameters, and if that doesn't work, look into one of the other variations that correlate well with your problem.

+",9608,,,,,4/2/2019 16:01,,,,0,,,,CC BY-SA 4.0 +11602,2,,11599,4/2/2019 16:10,,1,,"

The closest research that I am aware of is in artificial intelligence designed user interfaces(1,2).

+ +

The scope of these projects varies tremendously. Some teams are trying to generate UIs based on user-defined parameters, others are trying to generate them based on images (as in the second link). I think the reason research is focused here is two-fold: one, we aren't very good at generating UIs automatically yet, and two, it is a much harder problem to develop a backend that has the integrated calls and features required for a fully-featured website or application.

+ +

With that being said, one could adapt this research into things like one-page sites or other simple implementations that might bear fruit.

+",9608,,,,,4/2/2019 16:10,,,,4,,,,CC BY-SA 4.0 +11604,2,,11596,4/2/2019 16:32,,1,,"

It is likely converging to a far worse local optimum from which it can't recover, so, yes, I would guess that, if all else is the same, that is where the issue would be. I would first try to adjust hyperparameters while keeping a static seed for weight initialization. ε and α are likely good places to start.

+ +
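
For example, a minimal way to fix the non-framework sources of randomness (add the equivalent call for your deep learning library, which is not specified in the question) is:

import random
import numpy as np

def set_seed(seed=0):
    # Fix the generic sources of randomness so that runs differ only in the
    # hyperparameters being tuned. Also call the framework-specific seeding
    # function, e.g. torch.manual_seed(seed) or tf.random.set_seed(seed).
    random.seed(seed)
    np.random.seed(seed)

set_seed(0)

+ +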

If that fails, adjust some architecture params like the number of units or layers in the network as well as the starting seed value.

+ +

You seemingly have run into a core problem at the heart of ML research which is reproducibility, which is often predicated by initialization.

+",9608,,,,,4/2/2019 16:32,,,,9,,,,CC BY-SA 4.0 +11605,2,,2226,4/2/2019 16:40,,4,,"

In the reinforcement learning setting, an agent interacts with an environment in (discrete) time steps, which are incremented after the agent takes an action, receives a reward and the ""system"" (the environment and the agent) moves to a new state.

+ +

More precisely, at time step $t=0$ (the first time step), the environment (including the agent) is in some state $s_t = s_0$, takes an action $a_t = a_0$ and receives a reward $r_t = r_0$, and the environment (including the agent) moves to a next state $s_{t+1} = s_{0 + 1} = s_1$, which will also be the state that the environment will be in at the next time step, $t+1$, hence the notation $s_{t+1}$. Here, the subscripts $_t$ refer to the time step associated with those ""entities"" (state, action and rewards). So, after one time step (or after $t=0$), the agent will be in state $s_{t+1}$ and the new time step will be $t + 1 = 0 + 1 = 1$. So, we are now at time step $t=1$ (because we have just incremented the time step) and the agent is in state $s_{t} = s_1$. The previously described interaction then repeats: the agent takes an action $a_{t} = a_1$, gets the reward $r_t = r_1$ and the environment moves to the state $s_{t+1} = s_{1+1} = s_{2}$, and so on.

+ +

In your summation, we are just discounting the rewards using a value denoted by $\gamma$ (which is usually between $0$ and $1$), that is often called the ""discount factor"". That summation represents the summation of the rewards the agent will received starting (in this case) from time step $t=1$. We could also just have $r_1 + r_2 + r_3 + \dots $, but, for technical or mathematical reasons, we often ""discount"" the rewards, that is, we multiply them by $\gamma$ (raised to a power associated with the time step that reward will be received).

+ +
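
As a toy illustration, the discounted sum described above can be computed directly (the reward values here are made up, just to show the arithmetic):

gamma = 0.9
rewards = [1.0, 0.0, 2.0, 1.0]   # r_1, r_2, r_3, r_4 (made-up values)

# G = r_1 + gamma * r_2 + gamma^2 * r_3 + gamma^3 * r_4
discounted_return = sum(gamma ** k * r for k, r in enumerate(rewards))
print(discounted_return)

+ +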

In the above description, I said that, at some time step $t$, the agent takes an action $a_t$ and receives a reward $r_t$. However, it is often the case that the reward received after taken an action at time step $t$ is denoted by $r_{t+1}$. I think this is a little confusing, but not conceptually ""wrong"", because one might think that the reward for having performed an action at time step $t$ is only received at the next time step. (You should get used to slightly different notations and terminology. At the beginning, it is not easy to understand, if the notation is not precise and consistent across sources, but you will get used to it, the more you learn about the topic, in the same way that you get used to a new language).

+",2444,,2444,,4/2/2019 16:52,4/2/2019 16:52,,,,0,,,,CC BY-SA 4.0 +11607,5,,,4/2/2019 20:09,,0,,"

An interdisciplinary subfield involving computer science, statistics, databases and machine learning. It utilizes AI to extract information from large data sets, and to transform it into formats better suited to the requirements.

+",2255,,2255,,4/4/2019 20:38,4/4/2019 20:38,,,,0,,,,CC BY-SA 4.0 +11608,4,,,4/2/2019 20:09,,0,,The process of discovering patterns in large data sets by AI.,2255,,2255,,4/4/2019 20:29,4/4/2019 20:29,,,,0,,,,CC BY-SA 4.0 +11609,1,11611,,4/2/2019 20:57,,6,188,"

I coded a tic tac toe program, but I don't know if I can call it artificial intelligence.

+ +

Here's what I did.

+ +

There is a random player, which always makes random valid moves.

+ +

And then there is the AI player, which receives input before every move; that input is the state of the board and all the possibilities. The AI will try any move that it hasn't tried before. But if it knows every possibility, it will select the one that has the highest value. This value is assigned based on the outcome of the match: +1 if the match was won, 0 for a draw, -1 for a loss. Every move is stored in a database, or updated if it is already known.

+ +

Eventually it will know every possible move.

+ +

I also added a threshold to compare the best moves, so it really selects the best move. For example, given two moves with a value of 100, the AI will keep trying them both at random until one has surpassed the other by the threshold, say 50.

+ +

It takes about 20,000 games to make the AI perfect: it never loses a game, only draws and wins.

+ +

I'm new to AI, and I'm wondering: could this really be considered artificial intelligence? And how is this different from a neural network approach? (I've been reading about it, but I still don't quite get it.)

+",23389,,,,,4/27/2019 11:51,Can this tic tac toe program be considered AI?,,1,0,,,,CC BY-SA 4.0 +11611,2,,11609,4/3/2019 1:27,,7,,"

This is basically reinforcement learning. The state space contains your moves, the value function is the value you store at the end, your rewards are the end results, and you have an episodic game. It is an AI method. Consider looking at value iteration, policy iteration, SARSA and Q-learning. The difference between the neural network approach and yours is that you are not doing function approximation of the value function with a neural network; you are using a tabular method.
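
+ +

As a rough sketch of the tabular idea described in the question (the data structures here are assumptions for illustration, not the asker's actual code):

# value_table maps a (board_state, move) pair to an accumulated score.
value_table = {}

def update_after_game(moves_played, outcome):
    # outcome: +1 for a win, 0 for a draw, -1 for a loss
    for state, move in moves_played:
        key = (state, move)
        value_table[key] = value_table.get(key, 0) + outcome

def best_known_move(state, legal_moves):
    # Greedy choice over the values accumulated so far for this state.
    return max(legal_moves, key=lambda move: value_table.get((state, move), 0))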

+",4042,,,,,4/3/2019 1:27,,,,0,,,,CC BY-SA 4.0 +11612,1,11615,,4/3/2019 2:40,,10,4853,"

Can Q-learning (and SARSA) be directly used in a Partially Observable Markov Decision Process (POMDP)? If not, why not? My intuition is that the policies learned will be terrible because of partial observability. Are there ways to transform these algorithms so that they can be easily used in a POMDP?

+",4042,,12509,,4/3/2019 11:39,4/3/2019 11:43,Can Q-learning be used in a POMDP?,,1,9,,,,CC BY-SA 4.0 +11613,1,,,4/3/2019 2:45,,2,288,"

I am struggling to understand the use of the Convolutional Sequence to Sequence (Conv-Seq2Seq) model. The image below is taken directly from the paper and is the nearly canonical diagram of the parallel training procedure. After puzzling over it for quite some time, it has come to seem straightforward to me:

+ +
    +
  • An input sentence of N tokens can be encoded in one step because the input sentence exists prior to the start of training, and therefore the token-wise convolution can be trivially parallelized. (Compare to RNN encoders, which require N steps)
  • +
  • During training, an output sentence can similarly be parallelized in the decoder because during training, the entire output sentence is known.
  • +
  • Therefore, during training, the attention function can be fully parallelized in the two dimensional array of dot products shown below
  • +
  • Finally, during training, the attention is used to weight the input embeddings and encodings, combined with the output training encodings (as such) and the final output assembled.
  • +
+ +

This is clearly not the case after the network is trained and evaluation input sequences are translated without reference outputs. I understand from various resources (including but not limited to Gehring's conference presentation) that post-training, output sequences are generated token by token in a fashion vaguely similar to earlier architectures, but I cannot find a clear description of that process.

+ +

(I speculate that this is because the parallel training routine was so revolutionary at the time, that the focus of the publications was rightly on the training routines.)

+ +

Can someone please help me understand the post-training generation algorithm, if possible in terms of the training diagram?

+ +

My current non-confident understanding is that the sentence below would be handled something like the following:

+ +
    +
  • Prime the decoder with a default token string of <p> <p> <s>, this results (due to convolution) in a single decoder encoding (as such) input into the attention function, and would hopefully generate the single token <'Sie'> as output
  • +
  • Restart the decoder with the token string <p> <p> <s> <'Sie'> which would generate two inputs into the attention function and hopefully output <'Sie'> <'stimmen'>
  • +
  • Proceed with lengthening input sentences until the final token generated output ends with a </s> token signifying the end of the sentence.
  • +
+ +

If that is a correct understanding, can someone confirm it? If close, can someone correct me?

+ +

+ +

Convolutional Sequence to Sequence Learning, Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin, 2017

+",15020,,,,,8/12/2021 16:06,Convolutional Sequence to Sequence Learning: Training vs Generation,,1,0,,,,CC BY-SA 4.0 +11615,2,,11612,4/3/2019 9:02,,6,,"

The usual (as presented in Reinforcement Learning: An Introduction) $Q$-learning and SARSA algorithms use (and update) a function of a state $s$ and action $a$, $Q(s, a)$. These algorithms assume that the current state $s$ is known. However, in POMDP, at each time step, the agent does not know the current state, but it maintains a ""belief"" (which, mathematically, is represented as a probability distribution) in what the current state might be, so it cannot maintain (an approximation of) the function $Q(s, a)$. Hence, the usual Q-learning and SARSA algorithms shouldn't be directly applicable to a POMDP.

+ +

However, $Q$-learning is often used in the contexts where observations emitted by the environment (or transformations of the raw observations) are used to build the current state (which is assumed to be Markov, even if it is not). For example, in the original DQN, the action taken at the current step and the raw observation and the reward emitted by the environment (after this action is taken) are combined to produce the current (Markov) state. It might not be the case that the way they combine the action, the reward and the observation is sufficient to fully describe the current state (which might not even be Markov).

+ +
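
For illustration, a common practical trick in this spirit is to build the state fed to Q-learning by stacking the last few observations (as DQN does with frames); here is a minimal sketch with assumed names:

from collections import deque
import numpy as np

HISTORY_LENGTH = 4
history = deque(maxlen=HISTORY_LENGTH)   # clear this at the start of every episode

def build_state(observation):
    # observation is assumed to be a 1-D numpy array.
    # Pad with copies of the first observation at the start of an episode,
    # then stack the most recent observations into one array that (hopefully)
    # carries enough history to be treated as a Markov state.
    while len(history) < HISTORY_LENGTH - 1:
        history.append(observation)
    history.append(observation)
    return np.concatenate(list(history))

+ +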

In this report, Deep Reinforcement Learning with POMDPs, the author attempts to use Q-learning in a POMDP setting. He suggests to represent a function, either $Q(b, a)$ or $Q(h, a)$, where $b$ is the ""belief"" over the states and $h$ the history of previously executed actions, using neural networks. So, the resulting parameterized functions would be denoted by $Q(b, a; \theta)$ or $Q(h, a; \theta)$, where $\theta$ is a vector representing the parameters of the corresponding neural network. Essentially, the author uses a DQN (with an experience replay buffer and target network), but the results are not great: $Q$ values converge, but policies do not and they are not robust (in that they are sensitive to small perturbations).

+",2444,,2444,,4/3/2019 11:43,4/3/2019 11:43,,,,4,,,,CC BY-SA 4.0 +11617,1,,,4/3/2019 9:55,,2,1503,"

I will explain my question in relation to chess, but it should be relevant for other games as well:

+ +

In short terms: Is it possible to combine the techniques used by AlphaZero with those used by, say, Stockfish? And if so, has it been attempted?

+ +

I have only basic knowledge about how AlphaZero works, but, from what I've understood, it basically takes the board state as input to a neural net, possibly combined with Monte Carlo methods, and outputs a board evaluation or preferred move. To me, this really resembles the heuristic function used by traditional chess engines like Stockfish.

+ +

So, from this I will conclude (correct me if I'm wrong) that AlphaZero evaluates the current position, but uses a very powerful heuristic. Stockfish on the other hand searches through lots of positions from the current one first, and then uses a less powerful heuristic when a certain depth is reached.

+ +

Is it therefore possible to combine these approaches by first using alpha-beta pruning, and then using AlphaZero as some kind of heuristic when the max depth is reached? To me it seems like this would be better than just evaluating the current position like (I think) AlphaZero does. Will it take too much time to evaluate? Or is it something I have misunderstood? If it's possible, has anyone attempted it?

+",17488,,,,,5/22/2020 22:55,Combining deep reinforcement learning with alpha-beta pruning,,3,0,,,,CC BY-SA 4.0 +11618,1,,,4/3/2019 10:36,,1,54,"

TL;DR: I'm developing a CNN for a classification task. The data contains multiple classes, some of which are very similar to each other, and I know these meta-classes. In such a situation, is it a good approach to use 2 levels of CNNs: Level 1 detects the meta-classes, and Level 2 detects the classes within the meta-class classified by Level 1?

+ +


+Example: +Suppose I try to classify the following 9 classes:

+ +

Apple Tree, Plum Tree, Cherry Tree, Sports car, SUV car, Coupe car, Dog, Cat, Wolf

+ +

Now, I could of course use one network on these classes and get a classification output for all of them. But the output (softmax) percentage, e.g. for an apple tree, would probably be high for any tree class. Thus, is it a good approach to train and use 2 levels of CNNs, like this:

+ +
    +
  1. Level 1: classify tree, car, animal --> trained with all images
  2. +
  3. Level 2: classify what kind of tree, car or animal --> trained only with the subsample of trees, cars or animals
  4. +
+ +

So images are checked by the Level 1 CNN and then, depending on its classification, by the appropriate Level 2 CNN.

+ +

So the questions are:

+ +
    +
  • Is this a good approach ? + +
      +
    • Does it help in terms of prediction quality/accuracy of the subclasses?
    • +
    • Is it easier for a CNN to detect the specific features of a subclass if the input is limited (like in Level 2)?
    • +
  • +
+ +

Or should I use another approach?

+ +

Thanks +Swad

+",23665,,23665,,4/3/2019 13:05,4/3/2019 13:05,Classification of classes within meta-classes,,0,2,,,,CC BY-SA 4.0 +11619,5,,,4/3/2019 12:03,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Markov_decision_process.

+",2444,,2444,,9/1/2019 19:38,9/1/2019 19:38,,,,0,,,,CC BY-SA 4.0 +11620,4,,,4/3/2019 12:03,,0,,"For questions related to the Markov Decision Process (MDP), which is a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision-maker.",2444,,2444,,9/1/2019 19:38,9/1/2019 19:38,,,,0,,,,CC BY-SA 4.0 +11621,1,,,4/3/2019 13:15,,2,79,"

I have a dataset of 100,000 documents, each labelled with a topic. I want to create a model such that, given a topic, the model can generate a document about it.

+ +

I came across language models GPT, GPT-2 and BERT. I learned that they can be used for generation purposes. But I did not find anywhere whether they can generate sentences given only a word.

+ +

I am inclined to use GPT for my task, but I am not sure how to proceed with it. I wanted to know whether it is possible or not? It would be helpful if anyone can help me give a start in the right direction.

+",19244,,2444,,11/1/2019 2:58,11/1/2019 2:58,How can I generate a document from a single word using GPT or BERT?,,0,2,,,,CC BY-SA 4.0 +11622,1,,,4/3/2019 13:28,,2,37,"

I have a small dataset (117 training examples) and many features (4005). Each of the training examples is binary labeled (healthy / diseased). Each feature represents the connectivity between two different brain regions.
+The goal is to assign subjects to one of the two groups based on their brain activity.

+ +

What methods are there for generating new artificial training examples based on the existing training examples?

+ +

An example I could think of would be SMOTE. However, this technique is usually only used to balance unbalanced datasets. This would not be necessary for my set, since it has about the same number of training examples for both label classes.

+",23672,,,,,4/3/2019 13:28,What methods are there to generate artificial training examples based on existing training examples?,,0,3,,,,CC BY-SA 4.0 +11623,1,,,4/3/2019 13:57,,3,211,"

The Turing Award is sometimes called Computer Science's Nobel Prize. This year's award goes to Bengio, Hinton, and LeCun for their work on artificial neural networks.

+

The actual work contributed by these authors is, of course, quite technical. It centers around the development of deep neural networks, convolutional neural networks, and effective training techniques. The lay press will tend to simplify these results to the point that they lose meaning.

+

I would like to have a concise, and yet still precise, explanation of their contributions to share with a lay audience. So, what is a simplified way to explain the contributions of these researchers?

+

I have my own ideas and will add them if no other satisfactory answer appears. For a "lay" audience, I want to assume someone who had taken a college level course in something scientific but not necessarily computer science. Explanations that are suitable for those with even less background are better still though, as long as they don't lose too much precision.

+",16909,,11539,,1/28/2022 16:01,1/28/2022 16:01,"What is a simplified way to explain why the AI researchers Bengio, Hinton, and Lecun, won the 2018 Turing Award?",,1,6,,,,CC BY-SA 4.0 +11624,2,,11623,4/3/2019 14:38,,6,,"

The related ACM article describes a few specific technical contributions, which led the ACM to award them.

+
+

Geoffrey Hinton

+

Backpropagation: In a 1986 paper, "Learning Internal Representations by Error Propagation", co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.

+

Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.

+

Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.

+

Yoshua Bengio

+

Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and modern deep learning speech recognition systems are extending these concepts.

+

High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper, "A Neural Probabilistic Language Model", that introduced high-dimension word embeddings as a representation of word meaning. Bengio's insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism which led to breakthroughs in machine translation and form a key component of sequential processing with deep learning.

+

Generative adversarial networks: Since 2010, Bengio's papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.

+

Yann LeCun

+

Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.

+

Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.

+

Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks - a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in every modern deep learning software, that learning systems can be built as complex networks of modules where backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.

+
+",2444,,-1,,6/17/2020 9:57,4/3/2019 14:38,,,,2,,,,CC BY-SA 4.0 +11625,2,,11363,4/3/2019 15:16,,1,,"

Some of the cases in which content-based filtering is useful are:

+ +
    +
  • Cold-start problem: this happens when no previous information about the user's history is available to build collaborative filtering. In this case, we offer the user some items and then recommend based on the similarity between these items and other items in the dataset, instead of recommending arbitrary items that may not match the user's taste.

  • +
  • Transparency: the collaborative method gives you a recommendation because some unknown users have the same taste as you. This causes a problem if your data is biased towards one taste, because then a new user may not have enough similar users with the same taste. The content-based method, on the other hand, can tell you based on which features the items are recommended, which helps you determine which factors affect the recommendation.

  • +
+",21907,,,,,4/3/2019 15:16,,,,0,,,,CC BY-SA 4.0 +11626,1,,,4/3/2019 15:42,,4,113,"

It seems that stacking LSTM layers can be beneficial for some problem settings in order to learn higher levels of abstraction of temporal relationships in the data. There is already some discussion on selecting the number of hidden layers and number of cells per layer.

+ +

My question: Is there any guidance for the relative number of cells from one LSTM layer to a subsequent LSTM layer in the stack? I am specifically interested in problems involving timeseries forecasting (given a stretch of temporal data, predict the trend of that data over some time window into the future), but I'd also be curious to know for other problem settings.

+ +

For example, say I am stacking 3 LSTM layers on top of each other: LSTM1, LSTM2, LSTM3, where LSTM1 is closer to the input and LSTM3 is closer to the output. Are any of the following relationships expected to improve performance?

+ +
    +
  1. num_cells(LSTM1) > num_cells(LSTM2) > num_cells(LSTM3) [Sizes decrease input to output]
  2. +
  3. num_cells(LSTM1) < num_cells(LSTM2) < num_cells(LSTM3) [Sizes increase input to output]
  4. +
  5. num_cells(LSTM1) < num_cells(LSTM2) > num_cells(LSTM3) [Middle layer is largest]
  6. +
+ +

Obviously there are other combinations, but those seem to me salient patterns. I know the answer is probably ""it depends on your problem, there is no general guidance"", but I'm looking for some indication of what kind of behavior I could expect from these different configurations.

+",20955,,,,,4/3/2019 15:42,How do the relative number of cells between neighboring stacked LSTM layers affect the network's behavior?,,0,0,,,,CC BY-SA 4.0 +11627,1,,,4/3/2019 18:50,,6,3854,"

In my thesis, I dealt with the question of how a computer can recognize LEGO bricks. For multiple object detection, I chose a deep learning approach. I also looked at an existing training set of LEGO brick images and tried to optimize it.

+ +

My approach

+ +

By using TensorFlow's Object Detection API on a dataset of specifically generated images (created with Blender), I was able to detect 73.3% of multiple LEGO bricks in one photo.

+ +

One of the main problems I noticed was that I tried to distinguish three different 2x4 bricks. However, colors are difficult to distinguish, especially under different lighting conditions. A better approach would have been to distinguish a 2x4 from a 2x2 and a 2x6 LEGO brick.

+ +

Furthermore, I have noticed that the training set should ideally consist of ""normal"" as well as synthetically generated images. The synthetic images give variations in the lighting conditions, the backgrounds, etc., which the photographed images do not give. However, when the trained neural network is actually used, photos and not synthetic images are examined. Therefore, photos should also be included in the training data set.

+ +

One last point that would probably lead to even better results is to train the neural network with pictures that show more than one LEGO brick, because this is exactly what is required of the neural network when it is in use.

+ +
    +
  • Are there other ways I could improve upon this?
  • +
+ +

(Can you see any further potential for improvement for the Neural Network? How would you approach the issue? Do any of my approaches seem poor? How do you solve the problem?)

+",23682,,1671,,4/4/2019 20:42,10/3/2020 12:56,How to detect LEGO bricks by using a deep learning approach?,,1,4,,,,CC BY-SA 4.0 +11629,1,11630,,4/3/2019 23:36,,5,2319,"

I am new to Deep Learning.

+

Suppose that we have a neural network with one input layer, one output layer, and one hidden layer. Let's refer to the weights from input to hidden as $W$ and the weights from hidden to output as $V$. Suppose that we have initialized $W$ and $V$, and ran them through the neural network via the forward algorithm/pass. Suppose that we have updated $V$ via backpropagation.

+

When estimating the ideal weights for $W$, do we keep the weights $V$ constant when updating $W$ via gradient descent given we already calculated $V$, or do we allow $V$ to update along with $W$?

+

So, in the code, which I am trying to do from scratch, do we include $V$ in the for loop that will be used for gradient descent to find $W$? In other words, do we simply use the same $V$ for every iteration of gradient descent?

+",23687,,2444,,11/15/2020 12:45,11/15/2020 12:45,Does backpropagation update weights one layer at a time?,,1,1,,,,CC BY-SA 4.0 +11630,2,,11629,4/4/2019 0:15,,2,,"

The answer is implied in the term ""backpropagation"". All of the gradients are calculated in the same backward pass. That is, the error from your loss function is propagated backwards from the output through your whole network. This propagation results in a gradient associated with each weight in the network, which determines how you change each weight in order to minimize your loss function.

+ +
+

or do we allow $V$ to update along with $W$?

+
+ +

Yes. This saves time, since many of the results of intermediate computations used to update $V$ can be reused in the update of $W$.

+ +
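
To make this concrete, here is a rough NumPy sketch (with made-up shapes, purely for illustration) of one forward and backward pass for a network with weights $W$ (input to hidden) and $V$ (hidden to output). Notice that the error signal computed to update $V$ is reused when propagating back to $W$, and both weight matrices are updated in the same step:

```python
import numpy as np

# toy shapes, just for illustration: 4 samples, 3 inputs, 5 hidden units, 1 output
X = np.random.randn(4, 3)
y = np.random.randn(4, 1)
W = np.random.randn(3, 5)   # input -> hidden
V = np.random.randn(5, 1)   # hidden -> output
lr = 0.1

# forward pass
h = np.tanh(X @ W)          # hidden activations
y_hat = h @ V               # network output

# backward pass: one propagation of the error gives the gradients for both V and W
delta_out = y_hat - y                              # error at the output (MSE-style loss)
grad_V = h.T @ delta_out                           # gradient for V
delta_hidden = (delta_out @ V.T) * (1 - h ** 2)    # error propagated back through V and tanh
grad_W = X.T @ delta_hidden                        # gradient for W

# both weight matrices are updated in the same step
V -= lr * grad_V
W -= lr * grad_W
```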

See http://neuralnetworksanddeeplearning.com for a detailed description of the backpropagation algorithm.

+",22916,,22916,,4/4/2019 1:18,4/4/2019 1:18,,,,0,,,,CC BY-SA 4.0 +11631,2,,11627,4/4/2019 2:16,,5,,"

So I am assuming that you are trying to detect a LEGO brick in the image. One idea is to use transfer learning, i.e. leveraging a pre-trained machine learning model. The underlying idea behind transfer learning is that one takes a well-trained model from one dataset or domain and applies it to a new one. François Chollet has written a very comprehensive guide to transfer learning (https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)

+ +
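
As a minimal sketch along the lines of that guide (assuming TensorFlow/Keras is available; the input size and the 16 brick classes below are just placeholders you would adapt to your dataset), a frozen pre-trained base with a new classification head could look like this:

```python
import tensorflow as tf

num_classes = 16  # placeholder: number of LEGO brick classes in your dataset

# pre-trained convolutional base without its ImageNet classification head
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights='imagenet')
base.trainable = False  # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```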

I admit I took some of this information from Christopher Bonnett's article named Classifying e-commerce products based on images and text.

+ +

I also suggest using the Lego brick dataset from Kaggle on this link: https://www.kaggle.com/joosthazelzet/lego-brick-images +It has over 12,700 lego brick images.

+ +

If processing power is a problem, you can use Amazon Web Services for cloud computing. It is inexpensive for small scale operations like this.

+ +

Of course, for the object detection part, you can always increase the number of convolutional layers. However, if you have many layers, you should also include residual blocks (i.e. a residual network), which allow neural networks with even over a thousand layers to operate effectively. This video should help you understand how residual networks work (https://www.youtube.com/watch?v=ahkBkIGdnWQ)

+ +

Finally, make sure not to overfit during your training, and if you do follow the residual network idea, you should also include upsampling in your convolutional neural network (more on this here: https://towardsdatascience.com/up-sampling-with-transposed-convolution-9ae4f2df52d0)

+ +

I hope this helped and good luck on your endeavor.

+",23546,,,,,4/4/2019 2:16,,,,0,,,,CC BY-SA 4.0 +11633,1,,,4/4/2019 4:02,,4,62,"

In Section 10.4 of Sutton and Barto's RL book, they argue that the discount rate $\gamma$ has no effect in continuing settings. They show (at least for one objective function) that the average of the discounted return is proportional to the undiscounted average reward $r(\pi)$ under the given policy.$^*$ They then advocate using average rewards rather than the usual returns of the discounted setting.

+ +

I've never encountered someone using average rewards (and no discounting) in the wild, though. Am I just ignorant of some use case, or is pretty much everyone sticking to discounting anyways?

+ +

$$r(\pi)=\sum_s \mu_\pi (s) \sum_a \pi(a|s) \sum_{s',r}p(s',r|s,a)r$$

+ +

$\mu_\pi$ is the stationary state distribution while following policy $\pi$.

+ +

$^*$Their proof did use the fact that the MDP was ergodic. I'm not sure how often that assumption holds in practice.

+",22916,,2444,,1/27/2023 0:17,1/27/2023 0:17,Does everyone still use discount rates?,,0,1,,,,CC BY-SA 4.0 +11634,1,,,4/4/2019 4:19,,1,43,"

Is there any classifier that is not subject to fooling, as described here?

+

My question is related to this, but not an exact duplicate.

+

What I wanted to ask is whether there are any classifiers that are inherently not subject (or at least less prone) to such attacks. I have a feeling that non-linear classifiers should be less susceptible to attack.

+",23688,,2444,,12/10/2021 22:52,12/10/2021 22:52,Is any classifier not subject (or less susceptible) to fooling?,,1,0,,,,CC BY-SA 4.0 +11636,1,,,4/4/2019 4:48,,2,160,"

I am working with a dataset where each input sample is a matrix, and the output corresponding to each input is also a matrix (of shape (400, 10)). The input samples do not have translation invariance. Each output image has shape (16, 16) . The output matrices have translation invariance.

+ +

I want to build a neural network which can learn how to predict the output images from the aforementioned data. It seems to me that the network needs to perform regression on the output images in order to learn. Presently, I am using 1000 data samples (input samples and corresponding output images) for training the neural network. What is the best way to build a neural network for this task?

+ +

Presently, I am using multi-layer perceptron (MLP) with mean square error (MSE) loss for this task. I flatten the input matrices before I feed them into the MLP, and use a standard MLP with multiple hidden layers (4-5) with many hidden units for this task.

+ +

While a visual inspection shows that the true and predicted output images are in relatively good agreement for training data, I find a mismatch between true and predicted (reconstructed) output images for validation data. The bottom plot shows pictures of true and predicted (reconstructed) output images for training and validation data for chosen samples.

+ +

+ +

Presently, I am using training and validation loss curves (with respect to iteration) to measure performance. I want to have a robust metric for comparison which can tell me whether the prediction is a random image or not.

+ +

How can I get the model to generalize to the validation set better?

+ +

The Python code that I am using for this MLP and the required data can be found here and here respectively.

+",22566,,2444,,4/4/2019 10:14,4/4/2019 10:14,How to build a neural network that can learn to predict output images?,,0,3,,,,CC BY-SA 4.0 +11637,1,,,4/4/2019 8:12,,1,56,"

IQN paper (https://arxiv.org/abs/1806.06923) uses distributional bellman target: +$$ \delta^{\tau,\tau'}_t = r_t + \gamma Z_{\tau'}(x_{t+1}, \pi_{\beta}(x_{t+1})) - Z_{\tau}(x_t, a_t) $$ +And optimizes: +$$ L = \frac{1}{N'} \sum^{N}_i \sum^{N'}_j \rho^\kappa_{\tau_i} \delta^{\tau_i,\tau_j}_t $$

+ +

But similar quantiles can be got just from Q values, when doing so: +$$ \delta^\tau_t = r_t + \gamma \frac{1}{N'} \sum_{j}^{N'} Z_{\tau_j}(x_{t+1}, \pi_{\beta}(x_{t+1})) - Z_\tau(x_t, a_t) \\ = r_t + \gamma Q (x_{t+1}, \pi_\beta(x_{t+1})) - Z_\tau(x_t, a_t) $$ +optimizing: +$$ L = \sum^N_i \rho^{\kappa}_{\tau_i} \delta^{\tau_i}_t $$

+ +

Both lead to similar performance on the CartPole env. The loss function of the 2nd one is simpler and more intuitive (at least to me). So I was wondering whether there is any obvious reason why the authors didn't use it?

+",18808,,18808,,4/4/2019 10:40,4/4/2019 11:27,IQN bellman target: using Z vs using Q,,1,0,,,,CC BY-SA 4.0 +11638,2,,11637,4/4/2019 11:27,,1,,"

The replacement you suggest is a replacement of the random variable by its expectation in the forward part of the TD target. It would turn IQN into a modification of C51 with a randomly sampled function approximator instead of a discrete distribution. Both the distribution produced and, especially, the exploration behavior with your replacement would be very different. The authors of the paper explicitly said that ""more randomness"" in their opinion benefits training, so reducing randomness would go against the spirit of the paper. That the two produce similar results on a single toy test means very little. IQN could be better than C51 or it could be worse than C51, but a single toy example is not enough to say they are close. Nevertheless, I agree that IQN looks overly complex and may require more training time; the C51 approach could be more practical.

+",22745,,,,,4/4/2019 11:27,,,,0,,,,CC BY-SA 4.0 +11639,1,,,4/4/2019 14:22,,2,74,"

Policy learning refers to mapping an agent's state onto an action so as to maximize reward. A linear policy, such as the one used in the Augmented Random Search paper, refers to learning a linear mapping from state to action.

+

The entire state can change at each time-step: for example, in the Continuous Mountain Car OpenAI Gym environment, the position and the speed of the car change at each time-step.

+

However, assume we also wanted to communicate the constant position of one or more goals. By "constant", I mean a position that does not change within a training episode, but that may change between episodes. For example, suppose there is a goal on both the left and the right of the Mountain Car.

+

Are there examples of how this constant/static information can be communicated from the environment, other than appending the location of the two goals to the state vector? Can static/constant state be differentiated from state which changes with each action?

+",23703,,2444,,11/1/2020 16:55,11/1/2020 16:55,How can certain information about the goal be given to the RL learning algorithm?,,1,0,,,,CC BY-SA 4.0 +11640,1,,,4/4/2019 14:40,,14,12857,"

I'm learning the DDPG algorithm by following this link: OpenAI Spinning Up document on DDPG, where it is written

+ +
+

In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything.

+
+ +

What does this mean? Is it related to the tuning of the parameter of the batch size in the algorithm?

+",23707,,2444,,12/3/2021 9:11,5/2/2023 17:36,How large should the replay buffer be?,,2,0,,,,CC BY-SA 4.0 +11642,2,,11639,4/4/2019 15:16,,4,,"

I see three main ways to do this; which one makes more sense will depend on your application.

+ +

One is to append that information to the state/observations like you mentioned. While this information is static for a particular episode, it will be different across episodes and the policy should learn to condition the actions it chooses based on this information.

+ +

Another would be to leave goal information out entirely and force the agent to learn a policy that works when the goal is unknown. This will likely be more difficult to learn and you may end up with a policy that moves to the average of all goals or explores and tries them all.

+ +

A third option and probably the most natural is to have some context cue that the agent can observe (e.g. often in lab experiments with rats, a cue is placed on a wall that lets the rat know which way to go to get a reward). This is similar to the first method, except that the cue has to be observed rather than given directly. For the Mountain Car example, this could be an extra signal that the agent only sees in a particular location, such as the bottom of the valley or when it moves close to a particular side.

+",23710,,,,,4/4/2019 15:16,,,,0,,,,CC BY-SA 4.0 +11643,1,11653,,4/4/2019 15:26,,1,1628,"

I am making an NN library without using any other external NN library, so I am implementing all layers, including the flatten layer, and the algorithms (forward and backward pass) from scratch. I know the forward implementation of the flatten layer, but is the backward pass just reshaping it or not? If yes, can I just call NumPy's reshape function to do it?

+",23713,,2444,,12/16/2020 14:54,12/16/2020 14:54,How should I implement the backward pass through a flatten layer of a CNN?,,2,0,0,,,CC BY-SA 4.0 +11644,2,,6579,4/4/2019 15:41,,4,,"

In reinforcement learning (RL), an agent interacts with an environment in time steps. At each time step $t$, the agent and the environment are in some state $s_t$. From that state $s_t$, the agent chooses and executes an action $a_t$ and the environment emits a reward $r_t$ (which values the just taken action $a_t$). Finally, the agent and the environment move to the next state, $s_{t+1}$. This interaction proceeds until either the agent dies or some other termination criterion is met.

+ +

The goal of the agent is to obtain the highest amount of reward in the long run (that is, not just in the next time step, but in all successive time steps). To do that, ideally, the agent needs to find a way of behaving ""optimally"". In RL, the behaviour of the agent is called a ""policy"". An optimal policy is a policy that allows the agent to obtain the (expected) highest amount of reward in the long run.

+ +

In this context, we can describe a full and finite (in terms of time steps) interaction between an agent and an environment as a sequence (sometimes called a ""rollout"" or a ""trajectory"") of states, actions, rewards and next states. So, a rollout might look like this $$(s_t, a_t, r_t, s_{t+1}, a_{t+1}, r_{t+1}, s_{t+2}, \dots, s_{T-1}, a_{T-1}, r_{T-1}, s_{T}),$$ where $T$ is the last time step of the interaction between the agent and the environment. During the interaction between the agent and the environment, the agent might decide to store this ""experience"" in a ""buffer"" (e.g. an array), so that it can use it later (you will see a use case below).

+ +

The elements of this type of sequence are often temporally correlated. What does this mean? For example, suppose that states are frames of a video game (that is, each frame is a different state). In this context, successive frames (or states) are similar to each other, which mathematically means that they are correlated.

+ +

It turns out that neural networks (NNs) are able to approximate (almost) any function. In RL, a policy is also a function: it is a function from a state to an action (or probability distribution over actions). So, we can represent a policy using a NN. (Deep RL is essentially a combination of traditional RL algorithms, like Q-learning, with NNs).

+ +

Moreover, it also turns out that training a NN using back-propagation with data that is temporally correlated might lead the NN not to capture the essential characteristics of the data, which, in practice (during training), means that we are not able to find the NN that represents the optimal policy (or another function, e.g. $Q(s, a)$, used later to retrieve the policy). In such cases, we often say that the training of the NN is not stable.

+ +

In the case of RL, the data used to train such types of neural networks (which represent the policy, or other functions that are used in RL) are the ""rollouts"", which contain elements that are often temporally correlated. Hence, we can't just feed the NN with a rollout, in the same order that the elements (states, actions, rewards and next states) are collected. So, in order to use uncorrelated data to train an NN, we can randomly take tuples of the form $\langle s_h, a_h, r_h, s_{h+1} \rangle$ from the rollout (where $h$ is some time step between $t$ and $T$). For example, suppose we take (or ""sample"") $3$ tuples $\langle s_7, a_7, r_7, s_{8} \rangle$, $\langle s_{97}, a_{97}, r_{97}, s_{98} \rangle$ and $\langle s_{2}, a_{2}, r_{2}, s_{3} \rangle$. Given that these elements have been observed at quite different points in time, they are likely to be less correlated than e.g. $\langle s_7, a_7, r_7, s_{8} \rangle$, $\langle s_{8}, a_{8}, r_{8}, s_{9} \rangle$ and $\langle s_{9}, a_{9}, r_{9}, s_{10} \rangle$ (which are successive ""tuples of experience"").

+ +

In this context, ""experience replay"" (or ""replay buffer"", or ""experience replay buffer"") refers to this technique of feeding a neural network using tuples (of ""experience"") which are less likely to be correlated (given that ""random sampling"" procedure). The ""buffer"" part refers to a data structure (e.g. an array or list) that stores the trajectory (or rollout), that is, it stores the ""experience"" of the agent (hence the name ""experience""). The ""replay"" refers to the fact that this ""experience"" is reused (or ""replayed"") by randomly sampling from it to train the NN.

+ +
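
As an illustration (a minimal sketch, not tied to any particular RL library), such an experience replay buffer can be as simple as a bounded list of $(s, a, r, s')$ tuples from which mini-batches are drawn uniformly at random:

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # old experience is automatically discarded once the buffer is full
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # uniform random sampling breaks the temporal correlation of the rollout
        return random.sample(self.buffer, batch_size)

# usage: store transitions while the agent acts, then train the NN on random batches
# buffer = ReplayBuffer(100000); buffer.store(s, a, r, s_next); batch = buffer.sample(32)
```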

See also this question Why exactly do neural networks require i.i.d. data? regarding the fact that NNs often require i.i.d. data.

+",2444,,2444,,6/10/2019 20:43,6/10/2019 20:43,,,,0,,,,CC BY-SA 4.0 +11645,2,,11640,4/4/2019 16:24,,9,,"
+

In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything.

+
+ +

The larger the experience replay, the less likely you are to sample correlated elements, hence the more stable the training of the NN will be. However, a large experience replay also requires a lot of memory, and it might slow down training. So, there is a trade-off between training stability (of the NN) and memory requirements.

+ +

The authors of the linked article state (right after the sentence above)

+ +
+

If you only use the very-most recent data, you will overfit to that and things will break; if you use too much experience, you may slow down your learning. This may take some tuning to get right.

+
+",2444,,2444,,7/10/2019 17:45,7/10/2019 17:45,,,,0,,,,CC BY-SA 4.0 +11646,5,,,4/4/2019 16:30,,0,,"

See https://spinningup.openai.com/en/latest/algorithms/ddpg.html for more info.

+",2444,,2444,,4/4/2019 20:24,4/4/2019 20:24,,,,0,,,,CC BY-SA 4.0 +11647,4,,,4/4/2019 16:30,,0,,For questions related to the reinforcement learning algorithm called Deep Deterministic Policy Gradient (DDPG).,2444,,2444,,4/4/2019 20:24,4/4/2019 20:24,,,,0,,,,CC BY-SA 4.0 +11648,1,11652,,4/4/2019 17:43,,2,415,"

What is the motivation behind using a deterministic policy? Given that the environment is uncertain, it seems that a stochastic policy makes more sense.

+",23707,,2444,,4/4/2019 19:12,4/4/2019 19:32,What is the motivation behind using a deterministic policy?,,1,1,,,,CC BY-SA 4.0 +11651,2,,11634,4/4/2019 18:42,,1,,"

I would say this is not necessarily a duplicate but quite similar to some other questions. However, I will answer the question posed here.

+ +

At a theoretical level, what you are asking is: is there any algorithm that cannot be tricked into predicting the wrong class?

+ +

The answer is: No

+ +

It is analogous to asking whether there is a perfect architecture for an arbitrary classification problem, and the answer to that is quite obviously no. It would also likely not be terribly difficult to show that this is provably the case, at least for a class of algorithms (i.e. connectionist models).

+",9608,,,,,4/4/2019 18:42,,,,0,,,,CC BY-SA 4.0 +11652,2,,11648,4/4/2019 19:04,,3,,"

You're right! Behaving according to a deterministic policy while still learning would be a terrible idea in most cases (with the exception of environments that ""do the exploring for you""; see comments). But deterministic policies are learned off-policy. That is, the experience used to learn the deterministic policy is gathered by behaving according to a stochastic behavior policy.

+ +

Under some reasonable assumptions--like that the environment is fully-observed and stationary--an optimal deterministic policy always exists. The proof can be found in chapter 6 of ""Markov Decision Processes -- Discrete Stochastic Dynamic Programming"" by Martin L. Puterman. The same cannot be said for stochastic policies. For this kind of environment (even if it's stochastic) an optimal policy is hardly ever stochastic.

+ +

So, a motivation for wanting to learn a deterministic policy is often because we know that there is an optimal deterministic policy.

+ +

It's possible your question was also tangentially about off-policy learning. ""Why learn a deterministic policy directly (off-policy) when we can just use something like decaying $\epsilon$-greedy?"" Briefly, off-policy learning is very powerful and general. It's necessary in any algorithm that uses experience replay, for example. A discussion about the merits of off-policy learning is probably best left to another question, but reading Section 5.5 of Sutton and Barto's RL book should get you started.

+ +

Finally, directly learning a deterministic policy could be more computationally efficient if using deterministic policy gradients. In the setting with continuous state and action spaces, the deterministic policy gradient exists and has a simpler expectation than the stochastic policy gradient.

+ +

Stochastic policy gradient:

+ +

$$\begin{align*} +\nabla_\theta J(\pi_\theta) &= \int_\mathcal{S} \rho^\pi (s) \int_\mathcal{A} \nabla_\theta \pi_\theta (a|s) Q^\pi (s,a) \text{d}a\text{d}s\\ +& = \mathbb{E}_{s\sim \rho^\pi, a\sim\pi_\theta}[\nabla_\theta \log \pi_\theta (a|s) Q^\pi (s,a)] +\end{align*}$$

+ +

Deterministic policy gradient: +$$\begin{align*} +\nabla_\theta J(\mu_\theta) &= \int_\mathcal{S} \rho^\mu (s) \nabla_\theta \mu_\theta (s) \nabla_a Q^\mu (s,a)|_{a=\mu_\theta (s)}\text{d}s\\ +& = \mathbb{E}_{s\sim \rho^\mu}[\nabla_\theta \mu_\theta (s) \nabla_a Q^\mu (s,a)|_{a=\mu_\theta (s)}] +\end{align*}$$

+ +

Notice that the expectation in the deterministic policy gradient isn't over the action space. Estimating this expectation would require many fewer samples in the setting of a continuous, high-dimensional action space.

+ +
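
To illustrate that last point, here is a rough sketch of how the deterministic policy gradient update is typically implemented (assuming PyTorch; the network sizes and the random batch of states are placeholders). Automatic differentiation applies exactly the chain rule $\nabla_a Q^\mu \, \nabla_\theta \mu_\theta$ when we backpropagate $-Q^\mu(s, \mu_\theta(s))$ through the critic into the actor:

```python
import torch
import torch.nn as nn

state_dim, action_dim = 3, 1   # placeholder dimensions

actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                      nn.Linear(32, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                       nn.Linear(32, 1))
actor_optimizer = torch.optim.Adam(actor.parameters(), lr=1e-4)

states = torch.randn(64, state_dim)   # stand-in for a batch sampled from a replay buffer

actions = actor(states)                                  # a = mu_theta(s)
q_values = critic(torch.cat([states, actions], dim=1))   # Q(s, mu_theta(s))
actor_loss = -q_values.mean()                            # maximize Q  <=>  minimize -Q

actor_optimizer.zero_grad()
actor_loss.backward()   # autograd applies grad_a Q * grad_theta mu, the deterministic policy gradient
actor_optimizer.step()
```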

To recap:

+ +
    +
  • Optimal policies are often deterministic, not stochastic
  • +
  • Learning deterministic policies directly (off-policy) is powerful and general
  • +
  • It can also be more efficient if in a continuous, high-dimensional action space
  • +
+",22916,,22916,,4/4/2019 19:32,4/4/2019 19:32,,,,3,,,,CC BY-SA 4.0 +11653,2,,11643,4/4/2019 19:52,,1,,"

Yes, a simple reshape would do the trick. A flattening layer is just a tool for reshaping data/activations to make them compatible with other layers/functions. The flattening layer doesn't change the activations themselves, so there is no special backpropagation handling needed other than changing back the shape.
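
For example, a minimal NumPy sketch (just one possible way to structure it) would remember the input shape on the forward pass and restore it on the backward pass:

```python
import numpy as np

class Flatten:
    def forward(self, x):
        self.input_shape = x.shape              # remember the original shape
        return x.reshape(x.shape[0], -1)        # e.g. (batch, h, w, c) -> (batch, h*w*c)

    def backward(self, grad_output):
        # the gradient is just reshaped back; no values change
        return grad_output.reshape(self.input_shape)
```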

+",22916,,,,,4/4/2019 19:52,,,,0,,,,CC BY-SA 4.0 +11655,1,,,4/4/2019 20:10,,4,1315,"

Hinton doesn't believe in the pooling operation (video). I also heard that many max-pooling layers have been replaced by convolutional layers in recent years. Is that true?

+",23688,,2444,,1/1/2022 10:28,1/1/2022 10:28,Is max-pooling really bad?,,2,0,,,,CC BY-SA 4.0 +11656,5,,,4/4/2019 21:09,,0,,,1671,,1671,,4/4/2019 21:09,4/4/2019 21:09,,,,0,,,,CC BY-SA 4.0 +11657,4,,,4/4/2019 21:09,,0,,"For questions about how AI and AI-related subjects are reported in the news media. Can involve unclear or misleading reporting, requests for clarification of popular articles, or the subject of public perception of AI in the news media in general. ",1671,,1671,,4/4/2019 21:09,4/4/2019 21:09,,,,0,,,,CC BY-SA 4.0 +11661,1,,,4/5/2019 3:26,,1,24,"

I wanted to get some opinions from the community for a certain problem that I will be approaching.

+ +

The problem is to provide feedback to a user based on an image of the upper male torso. The image would reflect either something positive, such as increasing muscle mass, or something negative, such as gaining adipose tissue or muscle atrophy (or a combination of both).

+ +

Using the user's input (such as sleep data, food, training routine) among some other data, I would like to provide feedback such as ""no John, this exercise has not yielded desirable results"" or ""a combination of your recent dietary changes has caused strength loss"". Obviously, this is a complex issue with a lot of interconnected variables, but you get the high-level idea at least; if you don't, please ask.

+ +

So my idea so far would be to feed the picture of the torso into a CNN with a softmax output to estimate body fat, and to do the same with a model trained to estimate muscle mass. Using those two models, we could paint a pretty accurate picture of someone's physique and of whether they're going in the right direction or not; we could then go on to analyse what that user may have done (or not done) to yield that result. Obviously, there would be connected models here and many different combinations of algorithms applied, such as CNNs, RNNs and others. Really curious to hear your response(s); thank you in advance.

+",23726,,23726,,4/5/2019 13:45,4/5/2019 13:45,Architecture and Use of Different Algorithms for Health Goal Feedback,,0,1,,,,CC BY-SA 4.0 +11663,2,,11655,4/5/2019 4:46,,1,,"

Max pooling isn't bad; it just depends on what you are using the convnet for. For example, if you are analyzing objects and the position of the object is important, you shouldn't use it, because of the translation invariance it introduces; if you just need to detect an object, it could help by reducing the size of the matrix you are passing to the next convolutional layer. So it's up to the application you are going to use your CNN for.

+",9818,,,,,4/5/2019 4:46,,,,0,,,,CC BY-SA 4.0 +11664,2,,11655,4/5/2019 5:55,,3,,"

In addition to JCP's answer, I would like to add some more detail. At best, max pooling is a less-than-optimal method to reduce feature matrix complexity (and therefore over/underfitting) and to improve model generalization (for translation-invariant classes).

+ +

However, as JCP begins to hit on, there are problems with this method. Hinton perhaps best sums up the issues in his talk here on what is wrong with CNNs. This also serves as the motivation for his novel architecture, capsule networks (or just capsules).

+ +

As he talks about, the main problem is not translational variance per se but rather pose variance. CNNs with max pooling are more than capable of handling simple transformations like flips or rotations without too much trouble. The problem comes with complicated transforms, as features learned about a chair facing forwards will not be too helpful towards the class representation if the real-world examples contain chairs upside down, on their side, etc.

+ +

However, there is much work being done here, mostly constrained to two areas: novel architectures/methods, and inference of the 3D structure from images (via CNN tweaks). This problem has been one of the bigger motivators for researchers throughout the decades, going back even to David Marr with his primal sketches.

+",9608,,,,,4/5/2019 5:55,,,,0,,,,CC BY-SA 4.0 +11666,1,,,4/5/2019 6:34,,4,421,"

In computer vision, it is very common to use supervised tasks, where datasets have to be manually annotated by humans. Some examples are object classification (class labels), detection (bounding boxes) and segmentation (pixel-level masks). These datasets are essentially pairs of inputs and outputs, which are used to train convolutional neural networks to learn the mapping from inputs to outputs via gradient descent optimization. But animals don't need anybody to show them bounding boxes or masks on top of things in order for them to learn to detect objects and make sense of the visual world around them. This leads me to think that brains must be performing some sort of self-supervision to train themselves to see.

+ +

What does current research say about the learning paradigm used by brains to achieve such an outstanding level of visual competence? Which tasks do brains use to train themselves to be so good at processing visual information and making sense of the visual world around them? Or said in other words: how does the brain manage to train its neural networks without having access to manually annotated datasets like ImageNet, COCO, etc. (i.e. what does the brain use as ground truth, what is the loss function the brain is optimizing)? Finally, can we apply these insights in computer vision?

+ +
+ +

Update: I posted a related question on Psychology & Neuroscience StackExchange, which I think complements the question I posted here: check it out

+",12746,,12746,,4/11/2019 21:03,4/11/2019 21:03,Which loss function is the brain optimizing in order to learn advanced visual skills without expert/human supervision?,,1,6,,,,CC BY-SA 4.0 +11667,1,11675,,4/5/2019 8:34,,9,8480,"

I am new to deep learning and trying to understand the concept of back-propagation. I have a doubt about when back-propagation is applied. Assume that I have a training data set of 1000 images of handwritten letters,

+
    +
  1. Is back-propagation applied immediately after getting the output for each input or after getting the output for all inputs in a batch?

    +
  2. +
  3. Is back-propagation applied $n$ times till the neural network gives a satisfactory result for a single data point before going to work on the next data point?

    +
  4. +
+",23734,,2444,,11/30/2020 0:05,11/30/2020 0:05,Is back-propagation applied for each data point or for a batch of data points?,,1,0,,,,CC BY-SA 4.0 +11668,2,,11666,4/5/2019 8:34,,3,,"

I think you are slightly confusing two problems: one being the classification of meta visual elements, and the other being the visual system itself.

+ +

Our visual system, when it comes to processing information, has had billions of years of iteration (training), so that at birth (and before) we are already tuned for the processing of visual stimuli, and we already have the mechanisms to decipher objects in our spatial field of view.

+ +

These two papers (L1, L2) have a great deal of information about the evolution of our visual system and its processing. The second speculates on the connection between said evolution and the construction of ""seeing systems"", which is very interesting.

+ +

For further inquiry on this in particular, check out David Marr. He was probably the most influential early computer vision mind. He still is mentioned in many top-down AGI and computer vision research projects to this day.

+",9608,,,,,4/5/2019 8:34,,,,4,,,,CC BY-SA 4.0 +11669,1,,,4/5/2019 9:08,,1,879,"

Forward KL divergence (also known as the cross-entropy loss) is a standard loss function in supervised learning problems. I understand why it is so: matching a trained distribution to a known distribution fits $P \log(P/Q)$, where $P$ is the known distribution.

+ +

Why isn't the reverse KL divergence commonly used in supervised learning?

+",23723,,2444,,4/5/2019 12:21,4/8/2019 8:19,Why isn't the reverse KL divergence commonly used in supervised learning?,,1,2,,,,CC BY-SA 4.0 +11672,1,,,4/5/2019 10:47,,2,271,"

I read that functions are used as activation functions only when they are differentiable. What about the unit step activation function? So, is there any other reason a function can be used as an activation function (apart from being differentiable)?

+",23501,,2444,,4/5/2019 19:45,6/25/2020 20:22,What kind of functions can be used as activation functions?,,2,0,,11/9/2020 9:55,,CC BY-SA 4.0 +11674,1,,,4/5/2019 11:29,,2,149,"

Suppose a model M classifies apples and oranges. Can M be extended to classify a third class of objects, e.g., pears, such that the new images for 'retraining' only have pears annotated and apples and oranges ignored? That is, since M already classifies apples and oranges, can the old weights be somehow preserved and let the retraining focus specifically on learning about pears?

+ +

Methods such as fine-tuning and learning without forgetting seem to require all objects in the new images to be annotated, though.

+",13068,,,,,4/6/2019 23:52,Extending a neural network to classify new objects,,1,0,,,,CC BY-SA 4.0 +11675,2,,11667,4/5/2019 14:18,,11,,"

Short answers

+
+

Is back-propagation applied immediately after getting the output for each input or after getting the output for all inputs in a batch?

+
+

You can perform back-propagation using (or after) only one training input (also known as data point, example, sample or observation) or multiple ones (a batch). However, the loss function to train the neural network is slightly different in both cases.

+
+

Is back-propagation applied $n$ times till the neural network gives a satisfactory result for a single data point before going to work on the next data point?

+
+

If we use only one example, we usually do not wait until the neural network gives satisfactory results for a single input-label pair $(x_i, y_i)$, but we keep feeding it with several input-label pairs, one after the other (and each time updating the parameters of the neural network using back-propagation), without usually caring whether the neural network already produces a satisfactory output for an input-label pair before feeding it with the next (although you could also do that).

+

Long answer (to the 1st question)

+

In case you want to know more about the first question, keep reading!

+

What is back-propagation?

+

Back-propagation is the (automatic) process of differentiating the loss function, which we use to train the neural network, $\mathcal{L}$, with respect to all of the parameters (or weights) of the same neural network. If you collect the $N$ parameters of the neural network in a vector

+

$$\boldsymbol{\theta} = +\begin{bmatrix} +\theta_1\\ +\vdots \\ +\theta_N +\end{bmatrix}, +$$

+

then the derivative of the loss function $\mathcal{L}$ with respect to $\boldsymbol{\theta}$ is called the gradient, which is a vector that contains the partial derivatives of the loss function with respect to each single scalar parameter of the network, $\theta_i$, for $i=1, \dots, N$, that is, the gradient looks something like this

+

$$ +\nabla \mathcal{L} = +\begin{bmatrix} +\frac{\partial \mathcal{L}}{ \partial \theta_1}\\ +\vdots \\ +\frac{\partial \mathcal{L}}{ \partial \theta_N} +\end{bmatrix}, +$$

+

where the symbol $\nabla$ denotes the gradient of the function $\mathcal{L}$.

+

Loss functions

+

The specific loss function $\mathcal{L}$ that is used to train the neural network depends on the problem that we need to solve. For example, if we need to solve a regression problem (i.e. predict a real number), then the mean squared error (MSE) can be used. If we need to solve a classification problem (i.e. predict a class), the cross-entropy (CE) (aka negative log-likelihood) may be used instead.

+

Example

+

Let us assume that we need to solve a regression problem, so we choose the mean squared error (MSE) as the loss function. For simplicity, let's also assume that the neural network, denoted by $f_{\boldsymbol{\theta}}$, contains only one output neuron and contains no biases.

+

Stochastic gradient descent

+

Given an input-label pair $(x_i, y_i) \in D$ (where $D$ is a dataset of input-label pairs), the squared error function (not the mean squared error yet!) is defined as

+

$$\mathcal{L}_i(\boldsymbol{\theta}) = \frac{1}{2} (f_{\boldsymbol{\theta}}(x_i) - y_i)^2,$$

+

where $f_{\boldsymbol{\theta}}(x_i) = \hat{y}_i$ is the output of the neural network for the the data point $x_i$ (which depends on the specific values of $\boldsymbol{\theta}$) and $y_i$ is the corresponding ground-truth label.

+

As the name suggests, $\mathcal{L}_i$ (where the subscript $_i$ is only used to refer to the specific input-label pair $(x_i, y_i)$) measures the squared error (i.e. some notion of distance) between the current prediction (or output) of the neural network, $\hat{y}_i$, and the expected output for the given input $x_i$, i.e. $y_i$.

+

We can differentiate $\mathcal{L}_i(\boldsymbol{\theta})$ with respect to the parameters of the neural network, $\boldsymbol{\theta}$. However, given that the details of back-propagation can easily become tedious, I will not describe them here. You can find more details here.

+

So, let me assume that we have a computer program that is able to compute $\nabla \mathcal{L}_i$. At that point, we can perform one step of the gradient descent algorithm

+

$$ +\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}_i, \label{1}\tag{1} +$$

+

where $\gamma$ is the learning rate and $\leftarrow$ is the assignment operator. Note that $\boldsymbol{\theta}$ and $\nabla \mathcal{L}_i$ have the same dimensions, $N$.

+

So, I have just shown you that you can update the parameters of the neural network using only one input-label pair, $(x_i, y_i)$. This way of performing GD with only one input-label pair is known as stochastic gradient descent (SGD).

+

Batch (or mini-batch) gradient descent

+

In practice, for several reasons (including learning instability and inefficiency), it is rarely the case that you update the parameters using only one input-label pair $(x_i, y_i)$. Instead, you use multiple input-label pairs, which are collected in a so-called batch

+

$$B = \{(x_1, y_i), \dots, (x_M, y_M) \},$$

+

where $M$ is the size of batch $B$, which is also known as mini-batch when $M$ is smaller than the total number of input-label pairs in the training dataset, i.e. when $|B| = M < |D|$. If you use mini-batches, typical values for $M$ are $32$, $64$, $128$, and $256$ (yes, powers of 2: can you guess why?). See this question for other details.

+

In this case, the loss function $\mathcal{L}_M(\boldsymbol{\theta})$ is defined as as the mean (or average) of the squared errors for single input-label pairs, $\mathcal{L}_i(\boldsymbol{\theta})$, i.e.

+

\begin{align} +\mathcal{L}_M(\boldsymbol{\theta}) +&= \frac{1}{M} \sum_{i=1}^M \mathcal{L}_i(\boldsymbol{\theta}) \\ +&= \frac{1}{M} \sum_{i=1}^M \frac{1}{2} (f_{\boldsymbol{\theta}}(x_i)-y_i)^2 \\ +&= \frac{1}{M} \frac{1}{2} \sum_{i=1}^M (f_{\boldsymbol{\theta}}(x_i)-y_i)^2. +\end{align} +The normalisation factor, $\frac{1}{M}$, can be thought of as averaging out the losses of all input-label pairs. Note also that we can take the $\frac{1}{2}$ out of the summation because it is a constant with respect to the variable of the summation, $i$.

+

In this case, let me also assume that we are able to compute (using back-propagation) the gradient of $\mathcal{L}_M$, so that we can perform a gradient descent (GD) update (using a batch of examples)

+

$$ +\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \gamma \nabla \mathcal{L}_M \label{2} \tag{2} +$$ +The only thing that changes with respect to the GD update in equation \ref{1} is the loss function, which is now $\mathcal{L}_M$ rather than $\mathcal{L}_i$.

+

This is known as mini-batch (or batch) gradient descent. In the case $M = |D|$, this is simply known as gradient descent, which is a term that can also be used to refer to any of its variants (including SGD or mini-batch GD).
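
As a concrete sketch (using a plain linear model $f_{\boldsymbol{\theta}}(x) = \boldsymbol{\theta}^\top x$ instead of a neural network, and synthetic data, purely for illustration), one training loop with mini-batch gradient descent on the MSE loss looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # dataset D: 1000 inputs with 5 features
y = X @ np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # ground-truth labels

theta = np.zeros(5)
gamma, M = 0.1, 32                              # learning rate and (mini-)batch size

for step in range(200):
    idx = rng.choice(len(X), size=M, replace=False)   # draw a batch B of size M
    xb, yb = X[idx], y[idx]
    y_hat = xb @ theta
    # gradient of L_M = (1/M) * (1/2) * sum_i (y_hat_i - y_i)^2 with respect to theta
    grad = xb.T @ (y_hat - yb) / M
    theta = theta - gamma * grad                # the update of equation (2)

# M = 1 gives stochastic gradient descent, M = len(X) gives full-batch gradient descent
```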

+

Further reading

+

You may also be interested in this answer, which provides more details about mini-batch gradient descent.

+",2444,,2444,,11/29/2020 23:56,11/29/2020 23:56,,,,5,,,,CC BY-SA 4.0 +11676,1,11682,,4/5/2019 14:23,,5,491,"

In Sutton & Barto's book (2nd ed) page 149, there is the equation 7.11

+ +

+ +

I am having a hard time understanding this equation.

+ +

I would have thought that we should be moving $Q$ towards $G$, where $G$ would be corrected by importance sampling, but only $G$, not $G-Q$, therefore I would have thought that the correct equation would be of the form

+ +

$Q \leftarrow Q + \alpha (\rho G - Q)$

+ +

and not

+ +

$Q \leftarrow Q + \alpha \rho (G - Q)$

+ +

I don't get why the entire update is weighted by $\rho$ and not only the sampled return $G$.

+",22003,,2444,,4/5/2019 14:33,4/5/2019 20:44,Understanding the n-step off-policy SARSA update,,1,2,,,,CC BY-SA 4.0 +11678,1,,,4/5/2019 16:13,,1,815,"

I am new to NLP realm. If you have an input text ""The price of orange has increased"" and output text ""Increase the production of orange"". Can we make our RNN model to predict the output text? Or what algorithm should I use?

+",23743,,2444,,4/5/2019 16:29,5/6/2020 7:03,Which algorithm should I use to map an input sentence to an output sentence?,,2,6,,,,CC BY-SA 4.0 +11679,1,,,4/5/2019 18:23,,22,4752,"

The tabular Q-learning algorithm is guaranteed to find the optimal $Q$ function, $Q^*$, provided the following conditions (the Robbins-Monro conditions) regarding the learning rate are satisfied

+ +
    +
  1. $\sum_{t} \alpha_t(s, a) = \infty$
  2. +
  3. $\sum_{t} \alpha_t^2(s, a) < \infty$
  4. +
+ +

where $\alpha_t(s, a)$ means the learning rate used when updating the $Q$ value associated with state $s$ and action $a$ at time time step $t$, where $0 \leq \alpha_t(s, a) < 1$ is assumed to be true, for all states $s$ and actions $a$.

+ +

Apparently, given that $0 \leq \alpha_t(s, a) < 1$, in order for the two conditions to be true, all state-action pairs must be visited infinitely often: this is also stated in the book Reinforcement Learning: An Introduction, apart from the fact that this should be widely known and it is the rationale behind the usage of the $\epsilon$-greedy policy (or similar policies) during training.

+ +

A complete proof that shows that $Q$-learning finds the optimal $Q$ function can be found in the paper Convergence of Q-learning: A Simple Proof (by Francisco S. Melo). He uses concepts like contraction mapping in order to define the optimal $Q$ function (see also What is the Bellman operator in reinforcement learning?), which is a fixed point of this contraction operator. He also uses a theorem (n. 2) regarding the random process that converges to $0$, given a few assumptions. (The proof might not be easy to follow if you are not a math guy.)

+ +

If a neural network is used to represent the $Q$ function, do the convergence guarantees of $Q$-learning still hold? Why does (or not) Q-learning converge when using function approximation? Is there a formal proof of such non-convergence of $Q$-learning using function approximation?

+ +

I am looking for different types of answers, from those that give just the intuition behind the non-convergence of $Q$-learning when using function approximation to those that provide a formal proof (or a link to a paper with a formal proof).

+",2444,,2444,,12/18/2019 16:01,12/19/2020 13:14,Why doesn't Q-learning converge when using function approximation?,,3,2,,,,CC BY-SA 4.0 +11680,2,,11672,4/5/2019 19:19,,2,,"

I'm not completely sure about your question. Do you mean

+ +

Q. Why should we use an activation function?

+ +

Ans: we need to introduce non-linearity into the network. Otherwise, multiple layers are no different from a single-layer network. (This is obvious if we write things in matrix form: if we have two layers with weights $W_1$ and $W_2$ and no non-linearity in between, the two-layer network is no different from a single layer with weight $W_2 W_1$.)

+ +

Q. Why do they need to be differentiable?

+ +

Ans: just so that we can back-propagate gradients to earlier layers. Note that back-propagation is nothing but the chain rule of calculus. Say $f(\cdot)$ is the activation function in one layer, the output of that activation function is $\bf y$, and its input is ${\bf u}=W \bf x$, where $\bf x$ is the output from the previous layer, mixed with the weights $W$ of the current layer. Of course, the final loss $L$ will depend on ${\bf y} = f({\bf u})= f(W {\bf x})$; say the loss is $L=g(\bf y)$ for some function $g$. To train the weights $W$, we have to find the gradient $\frac{\partial L}{\partial W}$ so that we can adjust $W$ to minimize $L$. But $\frac{\partial L}{\partial W}=\frac{\partial g({\bf y})}{\partial W}=\frac{\partial g({\bf y})}{\partial \bf y}\frac{\partial {\bf y}}{\partial {\bf u}}\frac{\partial {\bf u}}{\partial W}$. Each of these product terms can be computed locally, and they are accumulatively multiplied as we apply backprop. Note that the middle term $\frac{\partial {\bf y}}{\partial {\bf u}}=\frac{\partial f({\bf u})}{\partial {\bf u}}$ is just the ""derivative"" of $f(\cdot)$; thus we require the activation function to be differentiable and ""informative""/non-zero (at least most of the time). Note that ReLU is not differentiable everywhere, and that is why researchers (at least Yoshua Bengio) worried about it when they first tried to adopt ReLU. You may check out the interview of Bengio by Andrew Ng for that.

+ +

Q. Why is the step function a bad activation function?

+ +

Ans: note that the step function is differentiable almost everywhere, but it is not ""informative"". In the places (flat regions) where it is differentiable, the derivative is simply zero. Consequently, any later-layer gradient (information) will get cut off as it passes through a step activation function.
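
You can see this numerically with a small NumPy sketch (just an illustration): away from the jump, the finite-difference derivative of a step activation is exactly zero, so no gradient information survives, whereas a smooth activation like tanh passes informative gradients:

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)

def numerical_grad(f, x, eps=1e-4):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = np.array([-2.0, -0.7, 0.3, 1.5])
print(numerical_grad(step, x))     # [0. 0. 0. 0.] -> the backprop signal is cut off
print(numerical_grad(np.tanh, x))  # non-zero, informative gradients
```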

+",23688,,23688,,4/5/2019 19:25,4/5/2019 19:25,,,,2,,,,CC BY-SA 4.0 +11681,2,,11679,4/5/2019 19:25,,13,,"

Here's an intuitive, descriptive answer:

+ +

Function approximation can be done with any parameterizable function. Consider the problem of a $Q(s,a)$ space where $s$ is the positive reals, $a$ is $0$ or $1$, and the true Q-function is $Q(s, 0) = s^2$, and $Q(s, 1)= 2s^2$, for all states. If your function approximator is $Q(s, a) = m*s + n*a + b$, there exists no parameters which can accurately represent the true $Q$ function (we're trying to fit a line to a quadratic function). Consequently, even if you chose a good learning rate, and visit all states infinitely often, your approximation function will never converge to the true $Q$ function.

+ +
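
You can check this numerically with a small NumPy sketch (just an illustration of the example above): the best possible linear fit to the quadratic $Q$ values still leaves a large, irreducible error, and no amount of extra data or training time removes it:

```python
import numpy as np

s = np.linspace(0.0, 10.0, 200)
# design matrix for Q(s, a) = m*s + n*a + b, for a = 0 and a = 1
features = np.column_stack([np.tile(s, 2),
                            np.repeat([0.0, 1.0], len(s)),
                            np.ones(2 * len(s))])
true_q = np.concatenate([s ** 2, 2 * s ** 2])   # Q(s,0) = s^2, Q(s,1) = 2s^2

params, *_ = np.linalg.lstsq(features, true_q, rcond=None)
residual = true_q - features @ params
print(np.abs(residual).max())   # large approximation error that cannot be driven to zero
```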

And here's a bit more detail:

+ +
    +
  1. Neural networks approximate functions. A function can be approximated to greater or lesser degrees by using more or less complex polynomials to approximate it. If you're familiar with Taylor Series approximation, this idea should seem pretty natural. If not, think about a function like a sine-wave over the interval [0-$\pi/2$). You can approximate it (badly) with a straight line. You can approximate it better with a quadratic curve. By increasing the degree of the polynomial we use to approximate the curve, we can get something that fits the curve more and more closely.
  2. +
  3. Neural networks are universal function approximators. This means that, if you have a function, you can also make a neural network that is deep or wide enough that it can approximate the function you have created to an arbitrarily precise degree. However, any specific network topology you pick will be unable to learn all functions, unless it is infinitely wide or infinitely deep. This is analogous to how, if you pick the right parameters, a line can fit any two points, but not any 3 points. If you pick a network that is of a certain finite width or depth, I can always construct a function that needs a few more neurons to fit properly.

  4. +
  5. Q-learning's bounds hold only when the representation of the Q-function is exact. To see why, suppose that you chose to approximate your Q-function with a linear interpolation. If the true function can take any shape at all, then clearly the error in our interpolation can be made unboundedly large simply by constructing a XOR-like Q-function function, and no amount of extra time or data will allow us to reduce this error. If you use a function approximator, and the true function you try to fit is not something that the function can approximate arbitrarily well, then your model will not converge properly, even with a well-chosen learning rate and exploration rate. Using the terminology of computational learning theory, we might say that the convergence proofs for Q-learning have implicitly assumed that the true Q-function is a member of the hypothesis space from which you will select your model.

  6. +
+",16909,,16909,,4/6/2019 0:11,4/6/2019 0:11,,,,5,,,,CC BY-SA 4.0 +11682,2,,11676,4/5/2019 20:44,,3,,"

Multiplying the entire update by $\rho$ has the desirable property that experience affects $Q$ less when the behavior policy is unrelated to the target policy. In the extreme, if the trajectory taken has zero probability under the target policy, then $Q$ isn't updated at all, which is good. Alternatively, if only $G$ is scaled by $\rho$, taking zero probability trajectories would artificially drive $Q$ to zero.
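
A tiny numerical example (plain Python, with made-up scalar values) makes the difference clear in that extreme case, where $\rho = 0$ because the target policy would never produce the sampled trajectory:

```python
Q, G, alpha, rho = 5.0, 20.0, 0.1, 0.0   # rho = 0: trajectory impossible under the target policy

q_book = Q + alpha * rho * (G - Q)   # equation 7.11: Q is left unchanged -> 5.0
q_alt  = Q + alpha * (rho * G - Q)   # scaling only G: Q is dragged towards 0 -> 4.5
print(q_book, q_alt)
```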

+",22916,,,,,4/5/2019 20:44,,,,0,,,,CC BY-SA 4.0 +11683,1,,,4/6/2019 0:50,,7,281,"

Does anyone know what specific tasks the OpenCog environment is capable of performing? I have glanced though their wiki and a few of the pages on Goertzel's site and the AI.SE. So far I could only find some technical documentation regarding theory and engineering, but nothing on concrete results.

+ +

From the technical description of AtomSpaces it seems that OpenCog is capable of some ""representational inference"", but I haven't come across any sources that concretely describes what it is capable of doing.

+ +

Apparently there is some collaboration between Sophia the Robot and OpenCog, but to what extent I am unclear. I am aware however that the dialogue functions is powered by ChatScript (though I also suspect that the high profile interviews Sophia gives are completely scripted...)

+ +

Can anyone provide concrete examples or evidence of OpenCogs' functional behavior. Like transcripts of chat, examples of reasoning, video or demonstrations of its emotion-emulating; and not just claims of functions.

+",6779,,6779,,4/23/2020 4:06,4/23/2020 4:06,Concrete examples of OpenCog's functionality,,1,0,,,,CC BY-SA 4.0 +11684,1,,,4/6/2019 3:15,,1,111,"

I have not seen a neuron that uses both a bias and a threshold. Why is this?

+",23501,,22916,,4/6/2019 19:55,4/6/2019 19:55,Can a neuron have both a bias and a threshold?,,1,3,,,,CC BY-SA 4.0 +11685,1,,,4/6/2019 6:52,,1,219,"

Post pruning is start from downward discarding subtree and include leaf node performance. so what is the best point or condition of the tree where we have to stop further pruning.

+",23501,,,,,4/6/2019 17:29,At which point we have to stop post pruning in decision tree?,,1,4,,,,CC BY-SA 4.0 +11686,2,,11684,4/6/2019 7:03,,4,,"

I assume you're talking about a perceptron threshold function. One definition of it with an explicit threshold is +$$f(\textbf{x})= +\begin{cases} +1& \text{if } \textbf{w}\cdot\textbf{x} > t\\ +0& \text{otherwise} +\end{cases}.$$

+ +

Another form with a bias is +$$f(\textbf{x})= +\begin{cases} +1& \text{if } \textbf{w}\cdot\textbf{x} + b > 0\\ +0& \text{otherwise} +\end{cases}.$$

+ +

But these forms are of course equivalent if you set $b=-t$.

+ +

There's nothing stopping you from using a perceptron definition with both a bias and a threshold: +$$f(\textbf{x})= +\begin{cases} +1& \text{if } \textbf{w}\cdot\textbf{x} +b > t\\ +0& \text{otherwise} +\end{cases}.$$

+ +

But this is also equivalent to the other two forms. We can rewrite this as +$$f(\textbf{x})= +\begin{cases} +1& \text{if } \textbf{w}\cdot\textbf{x} > t'\\ +0& \text{otherwise} +\end{cases}$$ +where $t'=t-b$ is the new threshold. Or, we could rewrite it as +$$f(\textbf{x})= +\begin{cases} +1& \text{if } \textbf{w}\cdot\textbf{x} + a> 0\\ +0& \text{otherwise} +\end{cases}$$ +where $a=b-t$ is the new bias.

+ +

You never see definitions of this function with both a threshold and a bias because it has simpler forms.
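
A quick numerical check (a small NumPy sketch, with arbitrary weights) confirms that using a bias, a threshold, or both gives exactly the same function once the constants are folded together:

```python
import numpy as np

w = np.array([0.4, -0.7])
b, t = 0.3, 0.1
x = np.random.randn(1000, 2)

with_both      = (x @ w + b > t)          # bias and threshold
with_threshold = (x @ w > t - b)          # only a threshold t' = t - b
with_bias      = (x @ w + (b - t) > 0)    # only a bias a = b - t
print(np.array_equal(with_both, with_threshold), np.array_equal(with_both, with_bias))  # True True
```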

+",22916,,,,,4/6/2019 7:03,,,,0,,,,CC BY-SA 4.0 +11687,5,,,4/6/2019 8:42,,0,,"

For more info, have a look at https://opencog.org/ and https://en.wikipedia.org/wiki/OpenCog.

+",2444,,2444,,4/11/2019 21:02,4/11/2019 21:02,,,,0,,,,CC BY-SA 4.0 +11688,4,,,4/6/2019 8:42,,0,,"For questions related to Open Cog, which is a project that aims to build an open source artificial intelligence framework.",2444,,2444,,4/11/2019 21:03,4/11/2019 21:03,,,,0,,,,CC BY-SA 4.0 +11690,2,,11679,4/6/2019 9:16,,7,,"

As far as I'm aware, it is still somewhat of an open problem to get a really clear, formal understanding of exactly why / when we get a lack of convergence -- or, worse, sometimes a danger of divergence. It is typically attributed to the ""deadly triad"" (see 11.3 of the second edition of Sutton and Barto's book), the combination of:

+ +
    +
  1. Function approximation, AND
  2. +
  3. Bootstrapping (using our own value estimates in the computation of our training targets, as done by $Q$-learning), AND
  4. +
  5. Off-policy training ($Q$-learning is indeed off-policy).
  6. +
+ +

That only gives us a (possibly non-exhaustive) description of cases in which we have a lack of convergence and/or a danger of divergence, but still doesn't tell us why it happens in those cases.

+ +
+ +

John's answer already provides the intuition that part of the problem is simply that the use of function approximation can easily lead to situations where your function approximator isn't powerful enough to represent the true $Q^*$ function, there may always be approximation errors that are impossible to get rid of without switching to a different function approximator.

+ +

Personally, I think this intuition does help to understand why the algorithm cannot guarantee convergence to the optimal solution, but I'd still intuitively expect it to maybe be capable of ""converging"" to some ""stable"" solution that is the best possible approximation given the restrictions inherent in the chosen function representation. Indeed, this is what we observe in practice when we switch to on-policy training (e.g. Sarsa), at least in the case with linear function approximators.

+ +
+ +

My own intuition with respect to this question has generally been that an important source of the problem is generalisation. In the tabular setting, we have completely isolated entries $Q(s, a)$ for all $(s, a)$ pairs. Whenever we update our estimate for one entry, it leaves all other entries unmodified (at least initially -- there may be some effects on other entries in future updates due to bootstrapping in the update rule). Update rules for algorithms like $Q$-learning and Sarsa may sometimes update towards the ""wrong"" direction if we get ""unlucky"", but in expectation, they generally update towards the correct ""direction"". Intuitively, this means that, in the tabular setting, in expectation we will slowly, gradually fix any mistakes in any entries in isolation, without possibly harming other entries.

+ +

With function approximation, when we update our $Q(s, a)$ estimate for one $(s, a)$ pair, it can potentially also affect all of our other estimates for all other state-action pairs. Intuitively, this means that we no longer have the nice isolation of entries as in the tabular setting, and ""fixing"" mistakes in one entry may have a risk of adding new mistakes to other entries. However, like John's answer, this whole intuition would really also apply to on-policy algorithms, so it still doesn't explain what's special about $Q$-learning (and other off-policy approaches).

+ +
+ +

A very interesting recent paper on this topic is Non-delusional Q-learning and Value Iteration. They point out a problem of ""delusional bias"" in algorithms that combine function approximation with update rules involving a $\max$ operator, such as Q-learning (it's probably not unique to the $\max$ operator, but probably applies to off-policy in general?).

+ +

The problem is as follows. Suppose we run this $Q$-learning update for a state-action pair $(s, a)$:

+ +

$$Q(s, a) \gets Q(s, a) + \alpha \left[ \max_{a'} Q(s', a') - Q(s, a) \right].$$

+ +

The value estimate $\max_{a'} Q(s', a')$ used here is based on the assumption that we execute a policy that is greedy with respect to older versions of our $Q$ estimates over a -- possibly very long -- trajectory. As already discussed in some of the previous answers, our function approximator has a limited representational capacity, and updates to one state-action pair may affect value estimates for other state-action pairs. This means that, after triggering our update to $Q(s, a)$, our function approximator may no longer be able to simultaneously express the policy that leads to the high returns that our $\max_{a'} Q(s', a')$ estimate was based on. The authors of this paper say that the algorithm is ""delusional"". It performs an update under the assumption that, down the line, it can still obtain large returns, but it may no longer actually be powerful enough to obtain those returns with the new version of the function approximator's parameters.

+ +
+ +

Finally, another (even more recent) paper that I suspect is relevant to this question is Diagnosing Bottlenecks in Deep Q-learning Algorithms, but unfortunately I have not yet had the time to read it in sufficient detail and adequately summarise it.

+",1641,,,,,4/6/2019 9:16,,,,4,,,,CC BY-SA 4.0 +11691,1,,,4/6/2019 10:03,,2,685,"

I want to see if I can make my Software Defined Radio (SDR) classify unknown radio signals with the help of an artificial neural network. That is, my SDR outputs a sequence of complex numbers (IQ data), which I want to use to determine whether the received signal is, for instance, FM or AM modulated. This approach was used in a paper (https://arxiv.org/pdf/1712.04578.pdf) and the authors created and used a freely downloadable dataset (https://github.com/sofwerx/deepsig_datasets/blob/master/README.md).

+ +

Being new to both SDR and Deep Learning I have now tried for a couple of months to create an LSTM network, train it on the dataset and then use it for classification, but have sadly failed. I have concluded that it is likely due to the fact that I do not seem to understand how the dataset is structured. I have not been able to find any documentation concerning this dataset (other than a text file with a list of the modulation forms used in the dataset) though. My hope is that someone on this forum has some prior experience to share about how to use it.

+ +

The dataset (DEEPSIG DATASET: RADIOML 2016.10A) is split into three matrices, X: 2x1024x2555904 cells, Y: 24x2555904 cells and Z: 1x2555904 cells. My belief up until now has been that ""X"" contains the complex time series, i.e. one row for the real component and one row for the imaginary component, both in a sequence of 1024 samples, that ""Y"" contains the corresponding 24 classes, and that ""Z"" somehow contains the signal-to-noise level for each signal.

+ +

After training and deploying the network in Matlab I found out that it does not even manage to classify the local FM radio station. I then looked at the data I had used for training and plotted one of (what I thought to be) the FM sample sequences from the dataset in the complex plane, but I did not get the ""circle"" I had expected from a frequency or phase modulated signal. I am thus totally lost; the dataset has been used for a high-class scientific paper, so the problem lies with me and my lack of understanding. I apologize for the long and still not very concise question, but I don't want to infer any of my own misconceptions into the query. Thanks for any help!

+",23760,,,,,12/18/2019 21:59,Deep Learning for radio signal classification with DeepSig dataset,,0,1,,,,CC BY-SA 4.0 +11692,2,,11669,4/6/2019 11:34,,1,,"

It’s not an exhaustive answer to your question, but here some aspects that might be helpful:

+ +

A common setting in supervised learning where you will see the KL-divergence used is classification tasks. Very often in those cases the data points in the training set are assumed to belong to a single class $c_i$, where $i$ is from some index set $I$. The class membership is represented by one-hot encoding, which corresponds to a distribution $P$ with $p_i = 1$ for the class the data point belongs to and $p_j = 0$ for all $j \neq i$. The loss function mostly used in these problems is the average categorical cross-entropy $\langle E_P[-\log(Q)]\rangle_{\rm data} = \langle H(P) + D(P || Q)\rangle_{\rm data}$, which in the case of single-class membership reduces to the binary cross-entropy. Because only one probability in $P$ is non-zero in this case, the cross-entropy simplifies to $\langle -\log(q_i)\rangle_{\rm data}$. The reverse KL-divergence $D(Q || P)$ is strictly speaking not even defined in this case, as $Q$ needs to be absolutely continuous w.r.t. $P$.
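As a small illustration of the last two points (plain NumPy, made-up numbers): for a one-hot target $P$, the cross-entropy collapses to $-\log(q_i)$ for the true class, and the reverse direction is not defined:

import numpy as np

p = np.array([0.0, 1.0, 0.0])      # one-hot target (class 1)
q = np.array([0.2, 0.7, 0.1])      # model output

cross_entropy = -np.sum(p * np.log(q))     # equals -log(q[1])
print(cross_entropy, -np.log(q[1]))        # both ~0.357

# The reverse KL D(Q || P) would need log(q_j / p_j) with p_j = 0 for j != 1,
# so it is not defined here (Q is not absolutely continuous w.r.t. P).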

+ +

There are of course cases where class membership is probabilistic also in the training data, and the above argument doesn't apply in full. From a more conceptual point of view, this thread summarizes nicely the relation of cross-entropy to maximizing the likelihood of the observed data under the model. Using the reverse cross-entropy would maximize the likelihood of a typical sample drawn from the model under the (empirical) distribution of the observed data.

+ +

A small side note when you read the above link: in the classification problem described above, we have a probability model for the class membership at each point in the input space, which is why the average cross-entropy is minimized.

+ +

Classification problems are just one example of supervised learning, so this answer might not fully cover your question.

+",23746,,23746,,4/8/2019 8:19,4/8/2019 8:19,,,,0,,,,CC BY-SA 4.0 +11693,2,,11685,4/6/2019 17:29,,2,,"

There are a variety of conditions we can use when deciding whether to prune a sub-tree or not after generating a decision tree model. There are three common approaches.

+ +
    +
  1. We can prune branches with less support than a specific threshold. These are branches which were constructed using very few points from the training data (a minimal sketch of this idea is given after the list).
  2. +
  3. We can prune branches where the information gain from a split (or any other splitting measure we are using, like GINI), is smaller than a threshold.
  4. +
  5. You can do what is done in Quinlan's C4.5 & C5.0 learners (which are the standard approaches; J48 is another implementation of the same algorithm). Quinlan performs a Chi-squared-like test for the relationship between the attribute we split upon and the target attribute. If the relationship is statistically significant, then the split is preserved. If not, it is not. The ""confidence factor"" parameter found in most implementations of these algorithms corresponds to the $\alpha$ value used in determining whether the relationship is considered significant. This approach captures the idea that we should prefer to keep branches that have few datapoints, but a very strong signal, or that have a weak signal, but very many datapoints supporting the pattern, since both cases are less likely to be overfitting than cases where we have weak signals and small numbers of points.
  6. +
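A minimal sketch of approach 1 (a hypothetical dict-based tree, not any real library's API):

def prune_by_support(node, min_samples):
    # Recursively collapse any subtree whose support (number of training
    # points that reached it) is below the threshold, turning it into a leaf.
    if node.get('children'):
        if node['samples'] < min_samples:
            node['children'] = []          # collapse: this node becomes a leaf
        else:
            for child in node['children']:
                prune_by_support(child, min_samples)
    return node

tree = {'samples': 100, 'children': [
    {'samples': 95, 'children': []},
    {'samples': 5, 'children': [{'samples': 3, 'children': []},
                                {'samples': 2, 'children': []}]},
]}
prune_by_support(tree, min_samples=10)   # the 5-sample subtree becomes a leaf

In practice you would of course rely on the pruning options of your decision-tree library rather than rolling your own; the sketch is only meant to show the idea.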
+",16909,,,,,4/6/2019 17:29,,,,0,,,,CC BY-SA 4.0 +11694,1,,,4/6/2019 18:44,,2,58,"

I trained a recurrent neural network (if it matters - it contains three CuDNNLSTM cells and 3 Dense layers, Dropout = 0.2). The result of data preparation is one array of ~330.000 sequences. Each contains 256 time steps and 24 features in each time step. This array is normalized, shuffled and balanced. Then it is split into two arrays - train array contains 90% of data (so ~297k) and validation array contains ~10%.

+ +

During training process (Adam optimizer, 128 or 256 batches) max accuracy of validation data set is 90%. Epoch accuracy is 94%.

+ +

Then I ran my additional validation test script with a more realistic, unbalanced data set. The accuracy of this test is just ~50%. I checked whether this second test is correct and there are no errors in the code; when I ran it on data that was included in the training set, the accuracy was 87%, so it looks correct.

+ +

What is going on? I suppose that the wrong architecture is being used.

+ +

Here is a graph of epoch accuracy during training: +

+ +

Here is a graph of validation accuracy during training +

+ +

Thank you for support.

+ +

UPDATE:

+ +

Today I trained the network one more time. I used the unbalanced data from the second test as the validation data set in the training process. You can see the results below. I stopped training after 29 epochs.

+ +

Blue: balanced validation data set

+ +

Orange: unbalanced validation data set

+ +

Epoch accuracy: +

+ +

Validation accuracy +

+ +

Validation loss +

+ +

It doesn't look good at all.

+",21171,,21171,,4/7/2019 15:11,4/7/2019 15:11,RNN: Different test results on balanced and unbalanced data,,0,2,,,,CC BY-SA 4.0 +11695,2,,11683,4/6/2019 23:41,,3,,"

It depends what you mean by ""what OpenCog can do?"".

+ +

OpenCog, at a high level, is a loosely coupled collection of various theoretical methods and variants of conventional methods, aimed at constructing the beginnings of an AGI.

+ +

With that said, its purely applied uses are fairly limited. It can do some typical NLP and ML tasks if used correctly, albeit it will almost always be less effective when compared to a problem-specific solution. However, this is understandable, as OpenCog is not geared at solving narrow problems.

+ +

OpenCog, as well as Goertzel's other projects (SingularityNet, the AGI Society), are broad, top-down attempts at formulating an AGI system. Outside of AGI circles they are also not often referenced or researched (although I am utilizing some aspects of the system in work I am currently crafting).

+",9608,,,,,4/6/2019 23:41,,,,0,,,,CC BY-SA 4.0 +11696,2,,11674,4/6/2019 23:52,,2,,"

Yes, this is standard transfer learning. Using a trained model, we can freeze the first N hidden layers of a classifier and leave only the last few trainable. This allows our previous relevant training to be retained whilst still being able to learn new features and target new classes.

+

We then initialize a new output layer that fits our new context (e.g. a sigmoid with 1 node for a binary classifier). Everything is now set to resume training on our new y_targets.

+

Take a look at this link for some more info on transfer learning.

+

If you want no perturbation of your past learning, I would recommend freezing all previous hidden layers and then tacking some additional ones on.
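As a rough Keras sketch of the idea (the base model, layer sizes and number of frozen layers here are made up; adapt them to your actual trained model):

from keras.models import Sequential, Model
from keras.layers import Dense

# A stand-in for your previously trained model (sizes are made up).
base_model = Sequential([
    Dense(128, activation='relu', input_dim=100),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),      # old output head
])
# base_model.load_weights(...)            # in practice, load your trained weights

# Freeze everything except the last hidden layer.
for layer in base_model.layers[:-2]:
    layer.trainable = False

# New binary head attached to the last hidden layer's output.
new_output = Dense(1, activation='sigmoid')(base_model.layers[-2].output)
new_model = Model(inputs=base_model.input, outputs=new_output)
new_model.compile(optimizer='adam', loss='binary_crossentropy')
# new_model.fit(new_X, new_y, ...)        # resume training on the new targets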

+",9608,,-1,,6/17/2020 9:57,4/6/2019 23:52,,,,1,,,,CC BY-SA 4.0 +11701,1,,,4/7/2019 13:47,,2,44,"

There is a lot of research about face detection in pictures, but is it the only way one can say ""this person I'm looking for is here in this picture""? Aren't there algorithms that you can provide with information like lateral or back pictures of a person, and that, by making calculations on the height, the width, the anatomic distances between parts of the body, or an analysis of the hair, can determine: ""there's an X per cent chance this is the person you're looking for""? Is it possible to accomplish?

+",15764,,2444,,4/7/2019 16:37,4/7/2019 16:37,Algorithms to indentify people in pictures without using face recognition,,0,0,,,,CC BY-SA 4.0 +11703,1,,,4/7/2019 16:07,,1,45,"

I have multiple image sequences, each of which contains an animation of two moving dots. The trajectory of the dots in a sequence is always cyclic (not necessarily circular). There are two types of sequences. In some sequences the two dots are moving in phase but in the other sequences they are moving out of phase.

+ +

Is it possible to classify these two types of sequences using a neural network? What is the simple and feasible neural network structure for this classification?

+ +

Here's an example set of animations. The in-phase patterns are in the above row and the out-of-phase patterns are in the bottom row.

+ +

+",23789,,23789,,4/21/2019 7:46,4/21/2019 7:46,What is the feasible neural network structure that can learn to identify types of trajectory of moving dots?,,0,0,,,,CC BY-SA 4.0 +11704,1,,,4/7/2019 16:07,,1,23,"

A simple word search seems too simple to solve with a computer AI. But what I'm interested in is how humans solve it. They build up strategies over the course of solving the puzzle. For example:

+ +

1) first just look to see if any words ""jump out"".
2) look for the horizontal words.
3) look for the letter ""O"".
4) look for the letter ""p"" next to a letter ""o"".
5) methodically look along rows, columns and diagonals.
6) cross off words.
7) take the first letter of a word in the list and put it in memory.

+ +

Things of that sort. I would like to build such a program, with built-in search capabilities, some of which are faster than others, where the AI can combine different search methods and try to solve the puzzle as fast as possible.

+ +

It should also store strategies that work. Or think about why strategies work sometimes and not others.

+ +

I would like to read a bit more about AI and such strategies; do you know of any good references?

+",4199,,,,,4/7/2019 16:07,Has any research been done to solve word searches with AI?,,0,0,,,,CC BY-SA 4.0 +11705,2,,7926,4/7/2019 17:03,,1,,"

""there's no such thing as true random numbers anyway."" that's all you really need to deter any idea of AI on any computer. Any software or set of functions on a computer is pretty much (right now at least) all set code, by humans.

+ +

Also, the execution of actions based on variables not listed is not artificial intelligence; it's just simpler code.

+ +

Any REAL artificial intelligence will not be made on a board of 1's and 0's; that defeats the purpose. All of the actions are predetermined (even if they are extended intricately to cover many possibilities), so they have no chance to create something non-deterministic. Real independent intellect is most likely (in my eyes) found, not made.

+",23791,,,,,4/7/2019 17:03,,,,0,,,,CC BY-SA 4.0 +11706,1,,,4/7/2019 17:45,,4,68,"

Recently, some work has been done on planning and learning in non-Markovian decision processes, that is, decision-making with temporally extended rewards. In these settings, a particular reward is received only when a particular temporal logic formula is satisfied (an LTL or CTL formula). However, I cannot find any work about learning which rewards correspond to which temporally extended behavior.

+

In my searches, I came across k-order MDPs (which are non-Markovian). I did not find RL research done on k-order MDPs.

+",23792,,2444,,12/19/2021 18:47,12/19/2021 18:47,What research has been done on learning non-Markovian reward functions?,,0,3,,,,CC BY-SA 4.0 +11707,2,,8270,4/7/2019 17:48,,0,,"

If you want to connect a placeholder to a new layer, you can do as below:

+ +
x = tf.placeholder(shape=[None, 784], dtype=tf.float32)  # MNIST, for example

x = tf.reshape(x, (-1, 784, 1))     # change to the new shape (batch, 784, 1)

x = tf.unstack(x, axis=1)           # a Python list of 784 tensors of shape (batch, 1)

con = tf.concat(x[1:6], axis=1)     # for instance, you want only inputs/neurons 1 to 5
+
+ +

You can feed your new layer with the above con, which corresponds only to inputs 1 to 5. You can apply the same technique to the output of a layer instead of a tf.placeholder.

+",23795,,30565,,10/26/2019 19:46,10/26/2019 19:46,,,,0,,,,CC BY-SA 4.0 +11708,1,,,4/7/2019 19:34,,1,637,"

If I have a DQN, and I care A LOT about future rewards (moreso than current rewards), can I set gamma to a number greater than 1? Like 1.1 perhaps?

+",23719,,,,,4/8/2019 6:39,Can gamma be greater than 1 in a DQN?,,2,0,,,,CC BY-SA 4.0 +11709,2,,11708,4/7/2019 21:01,,1,,"

You can't! If you have a $\gamma$ greater than $1$, the discounted sum used in Q-learning will diverge (the $\gamma^n$ terms go to infinity over future steps). To see why, take a closer look at the formula for the discounted return used in Q-learning.

+",4446,,,,,4/7/2019 21:01,,,,0,,,,CC BY-SA 4.0 +11710,1,11712,,4/8/2019 4:40,,2,105,"

I've been reading a lot of tutorials on DQNs for cartpole. In many of them, the final layer of the neural net has a linear activation. Why is this? Is it just a choice made by the implementer? Is this choice specific to cartpole, or do most control-task DQNs use it? Thanks.

+",23803,,,,,4/8/2019 7:18,Why do DQNs use linear activations on cartpole?,,1,0,,,,CC BY-SA 4.0 +11711,2,,11708,4/8/2019 6:39,,2,,"

$ \gamma $ can approach 1, but cannot be greater than or equal to 1 (this would make the discounted reward infinite).

+ +

The discount factor $ \gamma $ determines the importance of future rewards. A factor of 0 will make the agent ""myopic"" (or short-sighted) by only considering current rewards, while a factor approaching 1 will make it strive for a long-term high reward. If the discount factor meets or exceeds 1, the action values may diverge. For $ \gamma =1$, without a terminal state, or if the agent never reaches one, all environment histories become infinitely long, and utilities with additive, undiscounted rewards generally become infinite.

+ +

Source:https://en.wikipedia.org/wiki/Q-learning +https://cs.stanford.edu/people/karpathy/reinforcejs/puckworld.html
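A quick numerical illustration (assuming a constant reward of 1 per step) of why the discounted sum blows up once $ \gamma $ exceeds 1:

for gamma in (0.9, 0.99, 1.1):
    discounted_sum = sum(gamma ** t for t in range(500))
    print(gamma, discounted_sum)
# 0.9 -> ~10 (bounded), 0.99 -> ~100 (bounded), 1.1 -> astronomically large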

+",21181,,,,,4/8/2019 6:39,,,,1,,,,CC BY-SA 4.0 +11712,2,,11710,4/8/2019 6:54,,2,,"

Q learning predicts the action value, $q(s, a)$ for taking action $a$ in state $s$. The action value is usually the discounted sum of all future rewards. In general it can take any scalar value.

+ +

DQN uses a neural network to approximate $q(s, a)$. Although you might use this to select an action (thus think of the problem as a classification), the NN has to perform regression to predict the action values.

+ +

It is most common to use a linear final layer, and mean squared error loss in DQN, to match this regression task. So yes, you will find most control DQNs will make the same choice as the cartpole example that you are looking at.
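For example, a typical cartpole DQN head in Keras looks something like the following (the hidden layer sizes are just an illustration, not a prescription):

from keras.models import Sequential
from keras.layers import Dense

state_size, n_actions = 4, 2          # cartpole: 4 state variables, 2 actions

model = Sequential()
model.add(Dense(24, input_dim=state_size, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(n_actions, activation='linear'))   # unbounded q(s, a) estimates
model.compile(optimizer='adam', loss='mse')        # regression on action values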

+",1847,,1847,,4/8/2019 7:18,4/8/2019 7:18,,,,0,,,,CC BY-SA 4.0 +11713,2,,10406,4/8/2019 7:43,,1,,"

The determination of what each neuron represents is dictated by the initial weights of the network. As you may know, the common practice is to initialize the weights randomly. This means that given the same input data, the network may switch what each neuron means. It may also function differently, taking longer or shorter times to train. However, if you set the random seed to some constant, you can make the network deterministic.
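For example, fixing the seeds before building the model (the exact calls depend on your framework; this is a NumPy/TensorFlow 1.x sketch):

import random
import numpy as np
import tensorflow as tf

seed = 42
random.seed(seed)
np.random.seed(seed)
tf.set_random_seed(seed)   # tf.random.set_seed(seed) in TensorFlow 2.x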

+",23803,,,,,4/8/2019 7:43,,,,0,,,,CC BY-SA 4.0 +11714,2,,9895,4/8/2019 7:46,,1,,"

It is possible! Here is an article by Adobe where they explain how they do it: https://theblog.adobe.com/spotting-image-manipulation-ai/

+ +

The algorithm for this would almost certainly be a Convolutional Neural Net trained on a dataset of real and manipulated images (labeled as such).

+",23803,,,,,4/8/2019 7:46,,,,1,,,,CC BY-SA 4.0 +11716,1,11730,,4/8/2019 9:08,,8,1485,"

I'm training both DQN and double DQN in the same environment, but DQN performs significantly better than double DQN. As I've seen in the double DQN paper, double DQN should perform better than DQN. Am I doing something wrong or is it possible?

+",22930,,2444,,11/4/2020 21:19,1/25/2021 16:28,Can DQN perform better than Double DQN?,,2,3,,,,CC BY-SA 4.0 +11717,1,,,4/8/2019 10:04,,3,244,"

In the paper Deep Recurrent Q-Learning for Partially Observable MDPs, the DRQN is described as DQN with the first post-convolutional fully-connected layer replaced by a recurrent LSTM.

+ +

I have a DQN implementation with only two dense layers. I want to change this into a DRQN with the first layer as an LSTM and leave the second dense layer untouched. If I understood correctly, I would also need to change the input data appropriately.

+ +

Are there any other things that need to be modified in order to make DRQN work?

+",22162,,2444,,4/9/2019 17:16,4/9/2019 17:16,What can be considered a deep recurrent neural network?,,0,2,,,,CC BY-SA 4.0 +11718,1,11721,,4/8/2019 13:09,,1,440,"

I am currently using Nvidia GTX1050 with 640 CUDA cores and 2GB GDDR5 for Deep Neural Network training. I want to buy a new GPU for training, but I am not sure how much performance improvement I can get.

+ +

I wonder if there is a way to roughly calculate the training performance improvement by just comparing GPUs' specification?

+ +

Assuming all training parameters are the same. I wonder if I can roughly assume the training performance improvement is X times because the CUDA core number and memory size increased X times?

+ +

For example, Is RTX2070 with 2304 CUDA cores and 8GB GDDR6 roughly 4 times faster than GTX1050? And is RTX2080Ti with 4352 CUDA cores and 11GB GDDR6 roughly 7 times faster than GTX1050?

+ +

Thanks.

+",21213,,,,,4/10/2019 10:39,Can I calculate the training performance of GPUs by comparing their specification?,,1,0,,12/21/2021 21:33,,CC BY-SA 4.0 +11719,1,11722,,4/8/2019 13:21,,0,104,"

I'm a bit of a CNN newbie, and I'm trying to train one to image classify pictures of pretty similar looking particles. I'm making the inputs and labels by hand from a set of 48x48 grayscale images, and labeling them with a one-hot vector based on their position in the sequence (for example, the 400/1000th image might have a one-hot in the 4th position if I have 10 categories in the run). I'm using sigmoidal output activation and categorical cross entropy loss. I've played around with a few different optimizers, as well. I'm implementing in python keras.

+ +

Unfortunately, although I have pretty good accuracy numbers for the training and validation, when I actually look at the outputs being produced, it generally gives multiple categories, which is not at all what I want. For example, if I have 6 categories and a label of 3, it might give the following probability vector:

+ +

[ .99 .98 1.0 .99 0.02 0.05 ]

+ +

It was my understanding that categorical cross entropy would not allow this type of categorization, and yet it is prevalent in my code. I am under the impression that I'm doing something fundamentally wrong, but I cant figure out what. Any help would be appreciated.

+",23812,,22916,,4/11/2019 21:02,4/11/2019 21:02,CNN output generally has more than one category in one-hot categorization?,,1,3,,,,CC BY-SA 4.0 +11720,2,,6699,4/8/2019 13:28,,0,,"

I mean, you can predict these sequences quite easily (with varying levels of accuracy) just by using LSTMs in the time-series forecasting context. Obviously, as the number of digits you give it increases, the prediction accuracy for the next element in the sequence will increase (with some caveats), as we can think of neural networks more generally as connectionist function approximators (nonlinear in almost all cases).

+ +

As far as direct applications in AI, I suppose not beyond mathematical modeling and economic/financial modeling as these sorts of sequences emerge from a vast majority of pure and applied mathematical concepts. This research is quite relevant and ongoing(1,2,3).

+",9608,,9608,,4/8/2019 15:36,4/8/2019 15:36,,,,0,,,,CC BY-SA 4.0 +11721,2,,11718,4/8/2019 13:32,,2,,"

A lot matters when it comes to comparing GPUs. I will give you a broad overview of the matter (it is not possible to go into exact details, as a huge number of factors are actually involved):

+ +
    +
  • Cores - More CUDA cores means more parallelism, so more calculations can be done at the same time; but this is of no significance if your algorithm is inherently sequential, in which case the number of CUDA cores will not matter. Your library will parallelize what it can parallelize and will use only that many CUDA cores; the rest will remain idle.
  • +
  • Memory - Memory is useful if you are working on data where a single instance requires a lot of memory (like pictures). With more memory you can load a greater amount of data at the same time for the cores to process. If memory is too low, the cores want data but don't get it (basically, the data available in memory is the fuel while the cores are the engine; you cannot run a jet on a fuel tank the size of a car's, as you will waste time constantly refilling the fuel). But, according to machine learning convention, one should load only small mini-batches at a time.
  • +
  • Micro-architecture - Lastly, architecture matters. I do not know exactly how, but NVIDIA's RTX cards are faster for deep learning than GTX cards. NVIDIA has two affordable architectures (Pascal - GTX and Turing - RTX). Thus, even for exactly the same specs, the Turing architecture will run faster for deep learning. For more details you can explore NVIDIA's website on what each architecture specialises in. For example, the NVIDIA P-series is good for CAD purposes. There are also some very high-end GPUs in the Tesla line.
  • +
+ +

So, AFAIK, these are the factors that matter. The library you will be using also matters, as a lot depends on how the library unrolls your program and maps it onto one or several GPUs. Also related are these 2 answers I previously gave:

+ +

CPU preferences and specifications for a multi GPU deep-learning setup

+ +

Does fp32 & fp64 performance of GPU affect deep learning model training?

+ +

Hope this helps!

+",,user9947,,user9947,4/10/2019 10:39,4/10/2019 10:39,,,,7,,,,CC BY-SA 4.0 +11722,2,,11719,4/8/2019 14:41,,1,,"

When you use sigmoid activation it is applied independently to all outputs so outputs won't sum up to 1. Sigmoid is usually used for binary logistic regression where you only have 2 classes. In your case you should use softmax activation at the output which will squash outputs to range [0,1] and additionally make them sum up to 1.
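In Keras this is just a change to the output layer and loss, e.g. (a minimal sketch with made-up layer sizes):

from keras.models import Sequential
from keras.layers import Dense

num_classes = 3                                       # hypothetical
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=10))
model.add(Dense(num_classes, activation='softmax'))   # outputs now sum to 1
model.compile(optimizer='adam', loss='categorical_crossentropy')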

+",20339,,,,,4/8/2019 14:41,,,,0,,,,CC BY-SA 4.0 +11726,1,11727,,4/8/2019 16:29,,1,479,"

I am new to deep learning. I have doubts on modifying bias values during back propagation. My doubts are

+ +
    +
  1. Does the back propagation algorithm modify the weight values and bias values in the same pass?
  2. +
  3. How does the algorithm decide whether it has to change the weight value or bias value to reduce the error in a pass?
  4. +
  5. Will the learning rate be the same for biases and weights?
  6. +
+ +

Thanks!

+",23734,,,,,4/8/2019 16:57,When is bias values updated in back propagation?,,1,0,,,,CC BY-SA 4.0 +11727,2,,11726,4/8/2019 16:51,,2,,"
+

Does the back propagation algorithm modifies the weigh values and bias values in the same pass?

+
+ +

Yes.

+ +
+

How does the algorithm decide whether it has to change the weight value or bias value to reduce the error in a pass?

+
+ +

It differentiates the loss function (like MSE) with respect to the weights and biases, that is, it finds the partial derivative of the loss function with respect to each of the parameters. You can think of the partial derivative of the loss function with respect to one of the parameters of the model as representing the ""contribution"" of that parameter to the loss of the model.

+ +

The partial derivatives of the loss function with respect to each of the parameters (weights and biases) is collectively called the gradient, which is thus a vector of $N$ partial derivatives, where $N$ is the number of parameters of the model.

+ +
+

Will the learning rate same for bias and weights?

+
+ +

It is usually the same, but, in theory, nobody prevents you from updating the biases using a different learning rate, so you could use a different learning rate to update the biases and weights.
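As a tiny worked example (a single linear neuron with squared-error loss and made-up numbers), the same gradient-descent rule, with the same learning rate, updates the weight and the bias in the same backward pass:

x, y_true = 2.0, 2.0          # one training example
w, b, lr = 0.5, 0.0, 0.1

y_pred = w * x + b            # forward pass: 1.0
loss = 0.5 * (y_pred - y_true) ** 2

dL_dy = y_pred - y_true       # backward pass (chain rule): -1.0
dL_dw = dL_dy * x             # dy/dw = x  -> -2.0
dL_db = dL_dy * 1.0           # dy/db = 1  -> -1.0

w -= lr * dL_dw               # both parameters updated in the same pass
b -= lr * dL_db
print(w, b)                   # 0.7  0.1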

+",2444,,2444,,4/8/2019 16:57,4/8/2019 16:57,,,,0,,,,CC BY-SA 4.0 +11728,2,,11716,4/9/2019 4:00,,1,,"

That may happen when the value of the state is bad. You can find an example and an explanation of that in the link below.

+ +

See this:https://medium.freecodecamp.org/improvements-in-deep-q-learning-dueling-double-dqn-prioritized-experience-replay-and-fixed-58b130cc5682

+",21181,,,,,4/9/2019 4:00,,,,6,,,,CC BY-SA 4.0 +11729,1,,,4/9/2019 5:10,,1,177,"

Gradient descent can get stuck in a local optimum. Which techniques are there to reach the global optimum?

+",23501,,2444,,4/27/2019 16:18,4/27/2019 16:18,How can we reach global optimum?,,1,1,,,,CC BY-SA 4.0 +11730,2,,11716,4/9/2019 7:35,,2,,"

There is no thorough proof, theoretical or experimental, that Double DQN is better than vanilla DQN. There are a lot of different tasks; the paper and later experiments only explore some of them. What a practitioner can take out of it is that on some tasks DDQN is better. That's the essence of DeepMind's ""Rainbow"" approach - drop a lot of different methods into a bucket and take the best results.

+",22745,,,,,4/9/2019 7:35,,,,2,,,,CC BY-SA 4.0 +11731,2,,11729,4/9/2019 7:44,,1,,"

In deep learning there are several methods to deal with a ""stuck"" gradient descent: decrease the learning rate, or use a cyclic learning rate (cycle it from a bigger to a smaller value). A more radical method is to completely reinitialize the last one or two layers (before the loss) of the network.
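As an example of the cyclic learning rate idea in Keras (a sketch; the exact shape and constants of the schedule are a free choice, not prescribed here):

from keras.callbacks import LearningRateScheduler

def cyclic_lr(epoch):
    # Cycle the learning rate from lr_max down to lr_min every `period` epochs.
    period, lr_max, lr_min = 10, 1e-2, 1e-4
    t = (epoch % period) / float(period)
    return lr_max - t * (lr_max - lr_min)

scheduler = LearningRateScheduler(cyclic_lr)
# model.fit(X, y, epochs=50, callbacks=[scheduler])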

+ +

In non-deep-learning ML, out of those, only decreasing the learning rate will work, but there is a plethora of numerical optimization methods to help, like second-order methods (variations of Gauss-Newton), or methods specific to the problem, which may include projective methods, alternating directions, conjugate gradients, etc. There are a lot of methods which are better than gradient descent for non-deep-learning optimization.

+",22745,,22745,,4/9/2019 8:35,4/9/2019 8:35,,,,0,,,,CC BY-SA 4.0 +11738,1,,,4/9/2019 15:33,,2,23,"

I want to train a neural network to predict what my favourite home-work route will be for a particular day. I have these features for routes on a day:

+ +
temperature, humidity, congestion, distance, duration
+
+ +

I have come up with this concept of training/testing a network:

+ +
//
+// training features result in route 1,2 or 3
+//
+Network.train([30,10,12,20,12] , 1)
+Network.train([20,10,22,20,14] , 3)
+Network.train([23,10,2,20,10] , 2)
+Network.train([20,10,22,20,12] , 2)
+
+//
+// On a new day, predict which route the user is most likely to take:
+//
+var route = Network.test([25,8,12,22,12])
+
+ +

My question is: is this a viable approach? Can I make relevant predictions this way if I have enough training data?

+ +

Can I generate an outcome between 1 and 3 this way?

+",11620,,1671,,11/6/2019 21:34,11/6/2019 21:34,How to predict a preferred route based on weather and distance,,0,1,,,,CC BY-SA 4.0 +11739,2,,1885,4/9/2019 16:05,,-1,,"

My hunch (and this is strictly a hunch) is that building a human brain on a chip is actually a lot easier than you might think.

+ +

My pet theory is that biological neurons are horribly slow, clumsy, and error-prone devices (at least mine are :lol:), but that the human brain overcomes this limitation by increasing the degree of parallelism several orders of magnitude over current chip technology; and to that end it requires ~1.0e+11 neurons.

+ +

But a chip removes these limitations, and when the neurons have instantaneous relays, you don't need nearly so many of them. If that's correct, then a human brain on a chip could probably run with only a few million neurons, as opposed to the ~1.0e+11 inside the skull.

+",,user23786,,,,4/9/2019 16:05,,,,0,,,,CC BY-SA 4.0 +11740,1,,,4/9/2019 17:11,,3,219,"

Could a better algorithm than Monte Carlo tree search have been used for the AlphaGo computer? Why didn't the DeepMind team consider choosing another kind of algorithm, rather than spending time on their neural nets?

+",21832,,21832,,4/9/2019 23:44,6/10/2019 20:31,Why is Monte Carlo used as the tree search algorithm for AlphaGo?,,1,0,,,,CC BY-SA 4.0 +11741,1,,,4/9/2019 17:21,,4,48,"

I'm working on a computer vision project where the goal is to detect some specific parasites, but, now that I have the images, I noticed that they have a watermark that specifies the microscope graduation. I have some ideas of how to remove this noise, like detecting the numbers and replacing them with the most common background, or splitting the image, but if I split the image I'll lose information.

+ +

But I would like to hear some recommendations and guidelines from experts.

+ +

I added an example image below.

+ +

+",23836,,23836,,4/9/2019 17:34,4/9/2019 17:34,How do I denoize a microscopic image?,,0,3,,,,CC BY-SA 4.0 +11743,1,,,4/9/2019 19:10,,2,562,"

I've decided to make my bachelor thesis in RL. I am currently struggling to find a good problem. I am interested in multi-agent RL with the dilemma between selfishness and cooperation.

+

I only have 2 months to complete this and I'm afraid that multi-agent RL is too difficult and I don't have the knowledge and time to nicely learn this topic.

+

What are some simple open problems in multi-agent reinforcement learning that would be suited for a bachelor's thesis?

+

So far, I have only applied the Q-learning algorithm to solve a text-based environment in OpenAI's Gym.

+",23838,,2444,,1/23/2021 0:24,1/23/2021 0:24,What are some simple open problems in multi-agent RL that would be suited for a bachelor's thesis?,,1,0,,,,CC BY-SA 4.0 +11744,1,,,4/9/2019 19:59,,3,47,"

How can one model the physiological reward mechanisms occurring in the brain using artificial neural networks? E.g. are there efforts to use the notion of dopamine or similar substances in artificial neural networks? Maybe the introduction of a physiological reward mechanism can lead to the emergence of consciousness, or at least enhance the effectiveness of reinforcement learning?

+ +

Essentially: how does a neural network model reward? People's brains perceive money as the ultimate reward, because almost everything else can be bought with it. So the mental perception of owning money gives reward. But how is this notion of reward modeled in artificial neural networks? How would a network know that some money has been assigned to the network's account, and that it should therefore feel happy and rewarded and strive to repeat the successful behavior?

+ +

I am reading https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5293493/pdf/elife-21492.pdf and I hope that it will move me in the right direction.

+ +

It is quite confusing. Old-school neural networks expect that there are 2 separate phases: training and inference. So the network receives all the feedback (let it be called reward) in the training phase and receives nothing in the inference phase. But maybe the network should receive some reward during the acting/inference phase as well, a kind of lifelong learning.

+",8332,,8332,,4/9/2019 20:19,4/9/2019 20:19,Models of reward (possibly mimicking dopamine) in artificial neural networks?,,0,1,,,,CC BY-SA 4.0 +11745,1,,,4/9/2019 20:55,,3,30,"

One Nash equilibrium that every GAN model has is when the generator creates perfect samples indistinguishable from the training data and the discriminator just outputs 1 with probability 1/2. I think this is the desirable outcome, since we are most interested in the generator part of the GAN model. I know that we usually try to converge to this equilibrium with some hacks in training, such as ""mode collapse avoidance"" and so on. But is there any theoretical work trying to go about it in other ways (say, by reducing the number of Nash equilibria somehow)?

+",23688,,,,,4/9/2019 20:55,How do we ensure that training GANs will fall in the desirable Nash equilibrium?,,0,0,,,,CC BY-SA 4.0 +11746,1,,,4/9/2019 23:51,,2,37,"

How can we use the abilities of the AlphaGo Zero system to do something in other important, real-life fields? Is it possible to make something important out of it, besides having created something so smart that it can play mind games way better than humans?

+",21832,,,,,4/10/2019 7:04,How do the achievements met in the gaming field (ex. AlphaGo Zero) impact other fields of application?,,1,0,,,,CC BY-SA 4.0 +11747,1,,,4/10/2019 2:18,,3,23,"

I've a rather simple question for a school project. We're developing a GA solution for the following problem:

+ +

Chromosome: A location with lat-lon coords. There are two types of locations - up to 15 waypoints from user input, and a dataset of about 3-400 stations.

+ +

Gene: A route consisting of all waypoints (incl. a fixed start and end) + 1 station.

+ +

Fitness function: Shortest path.

+ +

Stop condition: Run duration - configurable, default 3 seconds.

+ +

We're discussing two possible implementations:

+ +
    +
  • A problem set of all waypoints and all stations, kinda like soccer team assignment or nurse rostering design. Run GA once on all of that.
  • +
  • A problem set of all waypoints and one station, do TSP. Run GA for number-of-stations-in-dataset iterations.
  • +
+ +

Which is better, in terms of design, efficiency, and performance?

+",23844,,,,,4/10/2019 2:18,Shortest route GA: One loop through one dataset vs multiple loops through subsets of the same data?,,0,0,,,,CC BY-SA 4.0 +11748,1,,,4/10/2019 5:11,,2,2083,"

I have a set of topics generated using LDA, like {code, language, test, write, function}, {class, public, method, string, int}, etc., and I want to make a meaningful sentence/sentences from these words using APIs or libraries. How do I implement this with NLTK and/or machine learning? Any suggestions as to how I should go about this?

+",23509,,,,,1/5/2020 12:03,How to make meaningful sentences from a set of words?,,1,0,,,,CC BY-SA 4.0 +11749,2,,11746,4/10/2019 7:04,,1,,"

Yes, it has created something important. Until Alpha(Go) Zero, all (or almost all) of the deep learning approaches to reinforcement learning were based on a temporal-difference loss function. The weakness of the temporal-difference loss function was that it was essentially training on itself, that is, data produced by the same method was used as part of the regression target. That produced the problem of ""extrapolation error"": the solution would blow up, or oscillate wildly. There were attempts to mitigate that problem (n-step algorithms), but they weren't helping much. Alpha Zero instead combined a deep network with tree search (Monte Carlo Tree Search). The tree search algorithm produced wide and long fields of high-precision data (the value function), with the network's influence on the value much diminished. That way the network was training mostly not on itself, but on data produced by the tree, and the tree search itself was accelerated greatly by the network (using it as a heuristic). The whole turns out to be much more than the sum of its parts.

+ +

This approach is not limited to board games or RL theory. It may work for any problem for which a high-precision simulator can be built. Essentially, if a problem allows Monte Carlo Tree Search, or another tree search which can be augmented with a heuristic, the Alpha Zero approach would probably work on it. Of course, the Alpha Zero approach is computationally expensive, so it's not always efficient to apply it.

+",22745,,,,,4/10/2019 7:04,,,,0,,,,CC BY-SA 4.0 +11755,1,,,4/10/2019 10:02,,3,253,"

Before the release of BERT, we used to say that it is not possible to train bidirectional models by simply conditioning each word on its previous and next words, since this would allow the word that's being predicted to indirectly ""see itself"" in a multi-layer model. How does this happen?

+",23350,,2444,,11/1/2019 3:00,11/1/2019 3:00,"How does bidirectional encoding allow the predicted word to indirectly ""see itself""?",,0,1,,,,CC BY-SA 4.0 +11756,2,,11748,4/10/2019 10:32,,1,,"

How do you define ""meaningful""? Generally, you would start from concepts and meanings, and then realise them in syntactic structures using lexical items (words). You seem to want to start in the middle somehow.

+ +

For turning a semantic representation into a valid sentence, you would use a generator; these are often based on grammars. Examples exist which take a grammar, fill in random words, and create a syntactically well-formed sentence; often they will, however, be rather non-sensical or meaningless. Have a look at this site which describes the Syntax Construction Kit. The author, Mark Rosenfelder, links to a number of toy programs which do exactly that. Just substitute his lexicon with the list of words created by your LDA process. See for example this generator based on generative grammar.

+",2193,,,,,4/10/2019 10:32,,,,2,,,,CC BY-SA 4.0 +11757,2,,11678,4/10/2019 12:18,,1,,"

In your case you can use either an RNN (especially a BiLSTM with ELMo and an attention mechanism for better accuracy) or Transformer-based architectures (the best of them today is BERT). But in both cases you need data to train the model (i.e. sequences of input/output like in your question).

+ +

I believe the best choice in your case is BERT, as it achieves state-of-the-art performance in most NLP tasks and is already pretrained, so you don't need massive data to retrain the model. Also, BERT is pretrained on ""Next Sentence Prediction"" and allows ""2 separate sentences"" as input, which helps a lot in your case. The only drawback, compared to other methods, is that you need to fine-tune the model, so it's not a completely ""ready to use"" model.

+ +

For more information:

+ +

Here is the github repository of BERT: BERT-repo. For quick explanation check: The Illustrated BERT. For detailed explanation see BERT paper: BERT-paper.

+",23350,,23350,,4/10/2019 14:02,4/10/2019 14:02,,,,0,,,,CC BY-SA 4.0 +11759,1,16437,,4/10/2019 13:53,,4,3858,"

Let's suppose I have a set of heuristics $H$ = {$h_1, h_2, ..., h_N$}.

+ +
    +
  1. If all heuristics in $H$ are admissible, does that mean that a heuristic that takes the $\min(H)$ (or $\max(H)$ for that matter) is also admissible?

  2. +
  3. If all heuristics in $H$ are consistent, does that mean that a heuristic that takes the $\min(H)$ (or $\max(H)$ for that matter) is also consistent?

  4. +
+ +

I'm thinking about a search problem in a bi-dimensional grid that every iteration of an algorithm, the agent will have to find a different goal. Therefore, depending on the goal node, a certain heuristic can possibly better guide the agent than the others (hence the use of $\min$ and $\max$).

+",22369,,2444,,11/11/2019 15:25,11/11/2019 17:03,Is the minimum and maximum of a set of admissible and consistent heuristics also consistent and admissible?,,1,0,,,,CC BY-SA 4.0 +11760,1,11764,,4/10/2019 15:42,,4,1493,"

I am training a modified VGG16 network for classification (adding 0.5 dropout after each of the last FC layers). In the following plot I am training for a small number of epochs as an example, and it shows the accuracy and loss curves of training process on both training and validation datasets. My training set size is $1725$, and $429$ for validation. Also I am training with weights=None

+ +

+ +

My question is about the validation curves, why do not they appear to be as smooth as the training ones? Is this normal during the training stage?

+",23268,,,user9947,4/10/2019 17:57,4/10/2019 17:57,Why are not validation accuracy and loss as smooth as train accuracy and loss?,,1,0,,,,CC BY-SA 4.0 +11761,1,,,4/10/2019 16:18,,3,143,"

I'm trying to tackle the problem of feature selection as an RL problem, inspired by the paper Feature Selection as a One-Player Game. I know Monte-Carlo tree search (MCTS) is hardly RL.

+ +

So, I used MCTS for this problem, where nodes are subsets of features and edges correspond to adding a feature to the subset, and it does slowly converge to the optimal subset.

+ +

I have a few questions

+ +
    +
  1. Is there a clever way to speed up the convergence of MCTS, besides parallelizing the rollout phase?

  2. +
  3. Adding nodes to the tree takes time and memory for datasets with a large number of features, for 10000 features, it takes up all my RAM (8GB) from the second iteration, (although it runs for 2000+ iterations for a dataset with 40 features which doesn't make sense to me). Is this expected or is my implementation likely wrong? Are there any workarounds for this?

  4. +
  5. What are your opinions on using MCTS for this task? Can you think of a better approach? The main problem of this approach is running the SVM as an evaluation function which may make the algorithm impractically slow for large datasets (a large number of training examples).

  6. +
+ +

I was thinking of trying to come up with a heuristic function to evaluate subsets instead of the SVM. But I'm kind of lost and don't know how to do that. Any help would be really appreciated.

+",23866,,2444,,4/10/2019 17:21,4/10/2019 17:21,Feature Selection using Monte Carlo Tree Search,,0,0,,,,CC BY-SA 4.0 +11762,1,,,4/10/2019 17:33,,3,267,"

What are the key differences between cellular neural networks and convolutional neural networks in terms of working principle, implementation, potential performance, and applicability?

+",23868,,2444,,12/12/2021 18:22,12/12/2021 18:22,What are the key differences between cellular neural network and convolutional neural network?,,0,0,,,,CC BY-SA 4.0 +11763,1,11765,,4/10/2019 17:54,,5,218,"

I want to make a network, specifically a CNN for image recognition, that takes an input, processes it the same way for several layers, and then at some point splits before coming to two different outputs. Is it possible to create a network such as this? It would look something like this:

+ +

Input -> Conv -> Pool -> Conv -> Pool -+-> Dense -> Output 1
                                        |
                                        +-> Dense -> Output 2
+
+ +

I.E. it splits off after the second pooling layer into separate fully connected layers. Of course, it has to train to both outputs, so that it is producing minimal error on both separate outputs using these common convolutional layers. Also, I am using Python Keras, and it would help if there was some way to do this using Keras in some way. Thank you!

+",23812,,,,,4/11/2019 5:17,Is it possible to make a 'forked path' neural network?,,1,0,,,,CC BY-SA 4.0 +11764,2,,11760,4/10/2019 17:55,,5,,"

You are training your model on the train set and only validating your model on CV set, thus your weights are getting exclusively optimised according to the loss of Training Set (in a continuous manner) and thus always decreasing. We do not have such guarantees with the CV set, which is the entire purpose of Cross Validation in the first place. Ideally it gives you an unbiased measure of how well your model and its trained weights will perform in the real world. Thus even if it performs well in Training Set, the loss can still go up in CV set which is what you are seeing in your graph.

+ +

Speaking in layman terms, even if you do all the sums of a single exercise given in a book, your performance might not be the same in a model paper. You do another exercise from another book, there is no guarantee your previous concepts will stick and you may do even worse in the model paper. Same thing is happening here where exercises are your training set and model papers are to evaluate your learning.

+",,user9947,,,,4/10/2019 17:55,,,,2,,,,CC BY-SA 4.0 +11765,2,,11763,4/10/2019 19:21,,5,,"

Keras Functional APIs can help you define complex models.

+ +

You can find the documentation here : +https://keras.io/getting-started/functional-api-guide/.

+ +

For example:

+ +
from keras.layers import Dense
from keras.models import Model

# prev_layer is the layer you want to be forked
fork1 = Dense(32, activation='relu')(prev_layer)
fork2 = Dense(32, activation='relu')(prev_layer)

# you do some operations on fork1 to get output1
# and on fork2 to get output2

model = Model(inputs=input_layer, outputs=[output1, output2])
+
+",21229,,21229,,4/11/2019 5:17,4/11/2019 5:17,,,,0,,,,CC BY-SA 4.0 +11766,1,,,4/10/2019 19:22,,2,36,"

Please read the following page of the Sklearn documentation.

+ +

The figure shown there (see below) illustrates why C should be scaled when using a SVM with 'l1' penalty, whereas it shouldn't be scaled C when using one with 'l2' penalty.

+ +

+ +

The scaling however does not change the scores of the models examined within the GridSearch. So what exactly is this scaling-step good for?

+",23672,,,,,4/10/2019 19:22,What is the benefit of scaling the hyperparameter C of an SVM?,,0,0,,,,CC BY-SA 4.0 +11768,1,,,4/11/2019 2:34,,4,1084,"

I think I don't understand group convolutions well.

+

Say you have 2 groups. This means that the number of parameters would be reduced in half. So, assuming you have an image and 100 channels, with a filter size of $3 \times 3$, you would have 900 parameters (ignore the bias for this example). If you separate this into 2 groups, if I understand it well, you would have 2 groups of 50 channels.

+

This can be made faster, by running the 2 groups in parallel, but how does the number of parameters get halved? Isn't each group having $50*9=450$ parameters, so, in total, you still have 900 parameters? Do they mean that the number of parameters that the backpropagation goes over (in each branch) gets halved?

+

Because overall, I don't see how it can get reduced. Also, is there a downside in using more groups (even going to 100 groups of 1 channel each)?

+",23871,,2444,,1/23/2021 23:29,1/23/2021 23:29,How is the number of parameters reduced in the group convolution?,,1,1,,,,CC BY-SA 4.0 +11769,1,,,4/11/2019 6:36,,1,290,"

I'm attempting to implement the actor-critic algorithm in Matlab, using Radial Basis Functions, Local Linear Regression, and a shallow Neural Network, for an inverted pendulum system. The state space and the action space are continuous.

+ +
    +
  • the states are the angle x_1, wrapped into [-pi pi], and the angular velocity x_2 in [-8*pi 8*pi]
  • +
  • the continuous action u, which is bound between [-3 3].
  • +
  • the reward function is quadratic, rho = x'Qx + u'Ru, where Q=diag(1,5) and R=0.1
  • +
  • the desired point is upright position [0 0]'
  • +
+ +
+ +

some notes will be added

+ +
    +
  • the used solver is ode45.
  • +
  • the sampling time 0.03.
  • +
  • it explores random u every step, with normal distribution zero mean sigma=1

    + +

    model of the system (to save space, the parameters of the model are not written)

    + +
 function dy = pendulum(y,u)
dy(1,1) = y(2);
dy(2,1) = 1/J*(M*g*l*sin(y(1))-(b+K^2/R)*y(2)+K/R*u);
end
    +
  • +
+ +
+ +

function to calculate RBF: the idea is to define centers and widths for N RBFs which cover the entire state space to approximate the value function and policy separately. The RBF is normalized.

+ +
function phi=RBF(x,C,B,N)         % x: state, C: centres, B: width, N: number of used RBFs
+ Phi_vec=[];
+ Phi_sum=0;
+ for i=1:N                        % loop for to calculate the vector phi
+     Phi_i=exp(-1/2*(x-C(i,:)')'*B^(-1)*(x-C(i,:)'));  % gaussian function
+     Phi_vec=[Phi_vec;Phi_i];                    % not normalized phi vector
+     Phi_sum=Phi_sum+Phi_i; % sum for normalisation
+
+ end
+ phi=Phi_vec/Phi_sum; % normalized phi vector
+
+ +
+ +
 % after tuning the learning rate for actor and critic alpha_a and alpha_c 
+
+ %  every step the following updates shall be carried out: 
+
+ %% generally
+
+ %  Value function V=Theta_O'*RBF(x,C,B,N)
+
+ %  policy pi= Theta_v'*RBF(x,C,B,N)
+
+
+ % determine u(k) with exploration term
+ u(k)=Theta_V'*RBF(x,C,B,N)+Delta_U(k-1)
+
+ %% apply u(k) and obtain x(k+1)
+
+ [t,y] = ode45(@(t,y) pendulum(y,u(k)),tspan,x(k)');
+        :
+ x(k+1,1)= wrapToPi(x(k+1,1)); % wrapping to pi
+
+
+ % determine Temporal difference Error 
+ Delta(k)=r(k)+gamma*Theta_O'*RBF(x(k),C,B,N)-Theta_O'*RBF(x(k-1),C,B,N);
+
+ % eligibility trace
+ z=lamda*gamma*z+RBF(x,C,B,N);
+
+ % Critic update
+ Theta_O=Theta_O+alpha_c*Delta(k)*z;
+
+ %actor update
+ Theta_V=Theta_V+alpha_a*Delta(k)*Delta_U(k- 1)*RBF(x,C,B,N);
+
+",22209,,22209,,4/11/2019 15:26,4/11/2019 15:26,"Actor-critic algorithm using gaussian Radial Basis Function, Local Linear Regression and shallow Neural Network",,0,3,0,,,CC BY-SA 4.0 +11771,1,,,4/11/2019 10:58,,1,71,"

I am modelling a process with 4 input parameters x1 x2 x3 x4. The output of the process is 2 variables, y1 and y2, that vary with length and time. I also have data from experiments, basically recording the trends in the two output variables as I vary my input variables. So far I have only seen neural network examples which take input x1 x2 x3 x4 t (t is time) and predict y1 y2 at said time t (no consideration of location). I would, however, also like to see the variation with length at a given time t, i.e. [y1a y1b... y1z; y2a y2b... y2z], where (a, b...z) are location points at an incremental distance dh from the start. Any help is appreciated. TIA

+",23883,,23883,,4/11/2019 11:24,9/12/2019 4:05,Can I use neural networks for a problem (in description)?,,1,2,,,,CC BY-SA 4.0 +11773,1,,,4/11/2019 13:49,,1,151,"

I am training a convLSTM with a dropout layer (with prob 0.5).

+ +

If I train over more than 5 epochs I notice that the network starts to overfit: my validation set loss becomes stationary while the train loss keeps going down with every epoch.

+ +

And if I train for 20 or more epochs, the gap between the validation and train loss is quite substantial. At the same time, the precision-recall curve becomes much more stable (i.e. monotonic) if I train with a large number of epochs (e.g. 20). Why is that? Is this behaviour a common occurrence?

+",11417,,,,,4/11/2019 13:49,Why does precision-recall curve become more stable when neural net begins to overfit?,,0,5,,,,CC BY-SA 4.0 +11776,1,,,4/11/2019 23:18,,3,63,"

I'm trying to make deep q-learning agent from https://keon.io/deep-q-learning

+ +

My environment looks like this: +https://i.stack.imgur.com/EJHTD.jpg

+ +

As you can see, my agent is a circle and there is one gray track with orange lines (reward gates). The bolder line is the active gate. The orange line from the circle points in its direction of travel.

+ +

The agent has constant velocity and it can turn left/right 10 degrees or do nothing

+ +

On the next image are agents sensors +https://i.stack.imgur.com/LqG8J.jpg

+ +

They are rotating with the agent.

+ +

The states are the distance from the agent to the active gate and the lengths of the sensors. In total there are 1+7 state variables, and this is the Q-learning neural net's input dimension.

+ +

Actions are turn left, turn right and do nothing.

+ +

The reward function returns 25 when the agent intersects a reward gate, 125 when the agent intersects the last gate, and -5 if the agent intersects the track border. If none of these apply, the reward function compares the distance from the agent to the active gate for the current state and the next state:

+ +

If current state distance > next state distance: + return 0.1 +else + return -0.1

+ +

Also, the DQNAgent has negative, positive and neutral memories. If the reward is -5, (state, action, reward, next_state, done) goes to the negative memory; if the reward is >= 25, to the positive one; else to the neutral one.

+ +

That is because when I'm forming minibatch for training, I'm taking 20 random samples from neutral memory, 6 from positive and 6 from negative.

+ +

Every time the agent intersects the track border, or when it is stuck for more than 30 seconds, I do training (replay) and the agent starts from the beginning. This is my model:

+ +
model = Sequential()
+model.add(Dense(64, input_dim=self.state_size,activation='relu', 
+                  kernel_initializer=VarianceScaling(scale=2.0)))
+model.add(Dense(32, 
+    activation='relu',kernel_initializer=VarianceScaling(scale=2.0)))
+model.add(Dense(self.action_size, activation='linear'))
+model.compile(loss=self._huber_loss,
+                  optimizer=Adam(lr=self.learning_rate))
+return model
+
+ +

I tried different kinds of model, a different number of neurons per layer, other activation and loss functions, dropout, batch normalization, and this model works the best for now

+ +

I tried different reward values

+ +

Also, I tried to use static sensors (they are not rotating with the agent) +https://i.stack.imgur.com/UCLGM.jpg (green lines on the photo)

+ +

Sometimes my agent manages to intersect a few gates before it hits the border. Rarely, it manages to traverse half of the track, and once, with these settings, it traversed two laps before it got stuck.

+ +

More often, he is only rotating in one place.

+ +

I think that the problem lies in the state representation or the reward function.

+ +

Any suggestions would be appreciated

+",23894,,,,,4/11/2019 23:18,Deep Q-Learning agent poor performing actions. Need help optimizing,,0,0,,,,CC BY-SA 4.0 +11778,2,,11768,4/12/2019 0:58,,2,,"

The number of parameters is filter_size * input_channels * output_channels (e.g. 3*3 for the spatial filter, times input channels, times output channels).

+ +

Groups are formed among input and output channels.

+ +

So instead of input_channels*output_channels with two groups you get (input_channels/2)*(output_channels/2) + (input_channels/2)*(output_channels/2)
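A quick way to check this (a PyTorch sketch matching the sizes in the question) is to count the parameters directly:

import torch.nn as nn

full = nn.Conv2d(100, 100, kernel_size=3, groups=1, bias=False)
grouped = nn.Conv2d(100, 100, kernel_size=3, groups=2, bias=False)

print(sum(p.numel() for p in full.parameters()))     # 3*3*100*100 = 90000
print(sum(p.numel() for p in grouped.parameters()))  # 2 * (3*3*50*50) = 45000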

+",16886,,,,,4/12/2019 0:58,,,,0,,,,CC BY-SA 4.0 +11780,1,,,4/12/2019 3:02,,2,65,"

I have some ecological data on the confirmed presence of a certain animal. I have data on the:

+ +
Date
+Relevant metadata about the site 
+Simple metrics on the animal
+A complete weather record for the site. 
+
+ +

I'm assuming the presence of this animal is driven by weather events, and that the metadata about the site may be an important factor for patterns in the data. What my data ends up looking like is this.

+ +
#Date of observation
+Date<-as.POSIXct(c(""2015-01-01"",""2015-01-11"",""2015-01-19"",""2015-02-04"",""2015-02-12"",""2015-02-23"",""2015-04-01"",""2015-04-10"",""2015-04-16"",""2015-04-20""))
+
+#Data about animal 
+Size<-c(1,1,1,2,2,3,4,1,2,5)
+Color<-c(""B"",""B"",""R"",""R"",""R"",""R"",""B"",""B"",""B"",""Y"" )
+Length<-c(1,10,12,4,5,2,1,2,7,12)
+
+#Weather Data
+AirTempDayOf<-c(20,40,20,23,24,25,24,25,25,22)
+WindSpeedDayOf<-c(2,3,2,3,4,3,2,3,4,5)
+AirTempDayBefore<-c(21,40,22,23,24,24,24,27,25,22)
+WindSpeedDayBefore<-c(2,5,2,6,4,3,6,3,2,5)
+AirTemp2DayBefore<-c(21,45,22,23,34,24,24,23,25,23)
+WindSpeed2DayBefore<-c(8,5,3,6,4,7,6,3,2,6)
+
+#Metadata about site
+Type<-c(""Forest"",""Forest"",""Forest"",""Forest"",""Forest"",""Beach""""Beach""""Beach"",""Swamp"",""Swamp"")
+Population<-c(20,30,31,23,32,43,23,43,23,33)
+Use<-c(""Industrial"",""Commercial"",""Industrial"",""Commercial"",""Industrial"",""Commercial"",""Industrial"",""Commercial"",""Industrial"",""Commercial"",)
+
+
+DF<-data.frame(Date,Size,Color,Length,AirTempDayOf, WindSpeedDayOf,AirTempDayBefore,WindSpeedDayBefore,AirTemp2DayBefore,WindSpeed2DayBefore)
+
+ +

What I don't have is absence data, so I can't make any assumptions about when an organism was not at the site. I'd like to look for patterns in weather that may be driving the arrival of this organism, but all I have is data on when the organism was spotted.

+ +

Is it possible to apply some sort of machine learning to look for patterns that may be driving the arrival of this animal? If I don't have absence data, I'm assuming I can't. I've looked into pseudo-absence models, but I don't know how they might apply here.

+ +

If I can't use machine learning to look at drivers for the presence of these animals, is it possible to use ML to look at possible weather patterns that may be associated with some of the metadata about the site? For example, weather patterns that may be associated with Forest vs Beach habitats?

+ +

I usually use R for my stats, so any answers including R packages would be helpful.

+ +

Also, note that this is just an example dataset above. I don't expect to find any patterns in the above data, and my actual dataset is much larger. But any code developed for the above data should be applicable

+",23896,,23896,,4/12/2019 12:26,9/9/2019 14:02,Machine learning to find drivers of an event with presence-only data (no absence),,1,2,,,,CC BY-SA 4.0 +11781,1,,,4/12/2019 5:02,,7,725,"

Minimizing the projected Bellman error has been shown to be stable with linear function approximation. The technique is not at all new, so I can only wonder why it has not been adopted for use with non-linear function approximation (e.g. DQN). Instead, a less theoretically justified target network is used.

+ +

I could come up with two possible explanations:

+ +
    +
  1. It doesn't readily apply to the non-linear function approximation case (some work is needed).
  2. +
  3. It doesn't yield a good solution. This is the case for true Bellman error but I'm not sure about the projected one.
  4. +
+",9793,,2444,,5/10/2019 14:48,10/9/2019 10:50,Why don't people use projected Bellman error with deep neural networks?,,2,4,,,,CC BY-SA 4.0 +11782,2,,11678,4/12/2019 5:28,,0,,"

This sounds like a job for sequence-to-sequence (seq2seq) models.

+ +

for example tf-seq2seq

+ +

You basically use an RNN as an encoder (it reduces a sequence of words to a vector), and another RNN as a decoder (it takes the encoded vector as an input to generate a new sequence of words).

+",16886,,,,,4/12/2019 5:28,,,,0,,,,CC BY-SA 4.0 +11783,1,11784,,4/12/2019 8:16,,2,1412,"

The $\lambda$-return is defined as +$$G_t^\lambda = (1-\lambda)\sum_{n=1}^\infty \lambda^{n-1}G_{t:t+n}$$ +where +$$G_{t:t+n} = R_{t+1}+\gamma R_{t+2}+\dots +\gamma^{n-1}R_{t+n} + \gamma^n\hat{v}(S_{t+n})$$ +is the $n$-step return from time $t$.

+ +

How can we use this definition to rewrite $G_t^\lambda$ recursively?

+",22916,,22916,,4/12/2019 8:28,4/14/2019 7:44,How can the $\lambda$-return be defined recursively?,,1,0,,,,CC BY-SA 4.0 +11784,2,,11783,4/12/2019 8:16,,6,,"

To rewrite $G_t^\lambda$ recursively, our goal is to define it in terms of +$$G_{t+1}^\lambda = (1-\lambda)\sum_{n=1}^\infty \lambda^{n-1}G_{t+1:t+n+1}.\tag{0}$$

+ +

The $\lambda$-return is a weighted average of all $n$-step returns. We will split up the summation by pulling out the one-step return $G_{t:t+1}$ and the first step's reward $R_{t+1}$.

+ +

$$ +\begin{align*} +G_t^\lambda &= (1-\lambda)\sum_{n=1}^\infty \lambda^{n-1}G_{t:t+n} \tag{1}\\ +&\\ +&= (1-\lambda)\lambda^0G_{t:t+1} + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1}G_{t:t+n}\tag{2}\\ +&\\ +&= (1-\lambda)\left(R_{t+1}+\gamma\hat{v}(S_{t+1})\right)\\ +&\qquad + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1}(R_{t+1}+\gamma R_{t+2}+\dots +\gamma^{n-1}R_{t+n} + \gamma^n\hat{v}(S_{t+n}))\tag{3}\\ +&\\ +&= (1-\lambda)\left(R_{t+1}+\gamma\hat{v}(S_{t+1})\right) + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1} R_{t+1}\\ +&\qquad + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1}(\gamma R_{t+2}+\dots +\gamma^{n-1}R_{t+n} + \gamma^n\hat{v}(S_{t+n}))\tag{4}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + (1-\lambda)\sum_{n=1}^\infty \lambda^{n-1} R_{t+1}\\ +&\qquad + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1}(\gamma R_{t+2}+\dots +\gamma^{n-1}R_{t+n} + \gamma^n\hat{v}(S_{t+n}))\tag{5}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + R_{t+1}\\ +&\qquad + (1-\lambda)\sum_{n=2}^\infty \lambda^{n-1}(\gamma R_{t+2}+\dots +\gamma^{n-1}R_{t+n} + \gamma^n\hat{v}(S_{t+n}))\tag{6}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + R_{t+1}\\ +&\qquad + \gamma\lambda(1-\lambda)\sum_{n=2}^\infty \lambda^{n-2}(R_{t+2}+\dots +\gamma^{n-2}R_{t+n} + \gamma^{n-1}\hat{v}(S_{t+n}))\tag{7}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + R_{t+1}\\ +&\qquad + \gamma\lambda(1-\lambda)\sum_{m=1}^\infty \lambda^{m-1}(R_{t+2}+\dots +\gamma^{m-1}R_{t+m+1} + \gamma^{m}\hat{v}(S_{t+m+1}))\tag{8}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + R_{t+1} + \gamma\lambda(1-\lambda)\sum_{m=1}^\infty \lambda^{m-1}G_{t+1:t+m+1}\tag{9}\\ +&\\ +&= \gamma(1-\lambda)\hat{v}(S_{t+1}) + R_{t+1} + \gamma\lambda G_{t+1}^\lambda \tag{10}\\ +\end{align*} +$$

+ +

$ $
+$(2)$ pulls out the one-step return from the summation.
+$(3)$ expands the $n$-step returns.
+$(4)$ pulls out the remaining first step rewards.
+$(5)$ combines first step rewards.
+$(6)$ simplifies the geometric series.
+$(7)$ pulls a factor of $\gamma\lambda$ out of the summation.
+$(8)$ makes the substitution $m=n-1$.
+$(9)$ uses the definition of the $n$-step return.
+$(10)$ uses the definition of the $\lambda$-return

+ +

The result can be verified in equation $(12.18)$ of Sutton and Barto's RL book.
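As a quick numerical sanity check of the recursion, here is a minimal sketch (the rewards and value estimates are arbitrary numbers, and it uses the episodic form of the definition, in which every $n$-step return reaching past the terminal time is replaced by the full return):

import numpy as np

np.random.seed(0)
T, gamma, lam = 6, 0.9, 0.8
R = np.random.randn(T + 1)   # R[t+1] is the reward received after leaving S_t (R[0] unused)
V = np.random.randn(T + 1)   # V[t] plays the role of v_hat(S_t)
V[T] = 0.0                   # terminal state has value 0

def n_step_return(t, n):
    n = min(n, T - t)        # truncate at the terminal time
    return sum(gamma**k * R[t + k + 1] for k in range(n)) + gamma**n * V[t + n]

def lambda_return_direct(t):
    head = (1 - lam) * sum(lam**(n - 1) * n_step_return(t, n) for n in range(1, T - t))
    return head + lam**(T - t - 1) * n_step_return(t, T - t)

# recursive form derived above: G_t = R_{t+1} + gamma*((1-lam)*V[t+1] + lam*G_{t+1})
G = np.zeros(T + 1)
for t in reversed(range(T)):
    G[t] = R[t + 1] + gamma * ((1 - lam) * V[t + 1] + lam * G[t + 1])

print(np.allclose([lambda_return_direct(t) for t in range(T)], G[:T]))  # True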

+",22916,,22916,,4/14/2019 7:44,4/14/2019 7:44,,,,0,,,,CC BY-SA 4.0 +11785,1,,,4/12/2019 8:37,,1,590,"

I have searched for how Google or any other map provider calculates the distance between two coordinates. The closest I could find is the Haversine formula.

+ +

If I draw a straight line between two points, then the Haversine formula can be helpful. But since no one travels in a straight line and people typically move through the streets, I want to know if there are methods to calculate turn-by-turn points and to find multiple ways to travel from the source to the destination.

+ +
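For reference, a minimal sketch of the great-circle distance given by the Haversine formula (the Earth radius and the sample coordinates are just examples) looks like this:

from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, r=6371.0):  # r: Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

print(haversine(52.5200, 13.4050, 48.8566, 2.3522))  # Berlin to Paris, roughly 880 km

Note that this gives the straight-line distance, not the driving distance.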

Right now my idea is

+ +
    +
  1. Have the two coordinates within a map window.
  2. +
  3. Make an algorithm detect the white lines (path) in the window.
  4. +
  5. Make it understand how they are connected.
  6. +
  7. Feed it to an algorithm to solve the Travelling Salesman Problem to find the best path between them.
  8. +
+ +

But these steps seem very memory- and compute-intensive. Even knowing that Google has the processing power, serving so many directions and distance-matrix requests in fractions of a second is amazing. I want to know if there are different approaches to this?

+",9170,,2193,,4/12/2019 10:14,6/14/2019 19:02,How do map providers like Google calculate the distance between two coordinates and find turn by turn directions?,,2,0,,,,CC BY-SA 4.0 +11786,2,,11785,4/12/2019 10:12,,0,,"

Obviously the way Google stores its information is not published, but from the Directions API I would make the following educated guesses:

+ +
    +
  1. The roads/paths are stored as a graph database
  2. +
  3. Each path has additional information: type of road, transport link, etc.
  4. +
  5. Geocoordinates or placenames are mapped onto graph nodes
  6. +
+ +

Finding a route then is a problem of finding the best path through the graph. This will be easier if you provide waypoints (which effectively split a long route into several shorter ones). As you have the physical coordinates, you can use something like the A* algorithm to traverse the graph.

+ +
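As a rough sketch of that idea (with a toy graph and a simple straight-line heuristic standing in for real road data), A* over such a graph could look like this:

import heapq, math

graph = {'A': {'B': 2, 'C': 5}, 'B': {'C': 1, 'D': 4}, 'C': {'D': 1}, 'D': {}}
coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (2, 1)}

def heuristic(n, goal):
    (x1, y1), (x2, y2) = coords[n], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(frontier, (cost + w + heuristic(nxt, goal), cost + w, nxt, path + [nxt]))
    return None

print(a_star('A', 'D'))  # (4, ['A', 'B', 'C', 'D'])

A real routing engine works on a much larger graph whose edge weights encode travel time rather than distance, but the principle is the same.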

From your question I assume that you'd want to work with map images, rather than a pre-processed graph. I would think that this is not feasible, and that you have to do this conversion into a graph first. You can probably semi-automate this by using image processing to identify roads, but ultimately this is probably something that has to be done at least partially with human intervention. You don't want to drive for ages only to find out there was a one-way street which you did not spot from the image.

+ +

Also, satnavs are usually aware of speed limits along the path. This again has to be added either manually or automatically (by recognising traffic signs along the route, e.g. from a street view photography car). So image data alone is not sufficient.

+",2193,,,,,4/12/2019 10:12,,,,2,,,,CC BY-SA 4.0 +11787,1,11791,,4/12/2019 11:42,,8,511,"

I am trying to understand how AlphaZero works, but there is one point that I have problems understanding, even after reading several different explanations. As I understand it (see for example https://applied-data.science/static/main/res/alpha_go_zero_cheat_sheet.png), AlphaZero does not perform rollouts. So instead of finishing a game, it stops when it hits an unknown state, uses the neural network to compute probabilities for different actions as well as the value of this state (""probability of winning""), and then propagates the new value up the tree.

+ +

The reasoning is that this is much cheaper, since actually completing the game would take more time than just letting the neural network guess the value of a state.

+ +

However, this requires that the neural network is decent at predicting the value of a state. But at the beginning of training, it will obviously be bad at this. Moreover, since the Monte Carlo tree search stops as soon as it hits a new state, and the number of different game states is very large, it seems to me that the simulation will rarely manage to complete a game. And surely the neural network cannot improve unless it actually completes a significant number of games, because that is the only real feedback that tells the agent whether it is making good or bad moves.

+ +

What am I missing here?

+ +

The only plausible explanation I can come up with is this: if the neural network were essentially random in the beginning, then the large number of game states would surely prevent the tree search from ever finishing, since it restarts as soon as it hits a previously unknown game state, so this cannot be the case. So perhaps, even if the neural network is bad in the beginning, it will not be very ""random"", but will still be quite biased towards some paths. This would mean that the search would be biased towards some smaller set of states among the vast number of different game states, and thus it would tend to take the same path more than once and be able to complete some games and get feedback. Is this ""resolution"" correct?

+ +

One problem I have with the above ""resolution"", though, is that according to the algorithm, it should favor exploration in the beginning, so it seems that it will be biased towards choosing previously untried actions. This makes it seem even more like the tree search will never be able to complete a game, and thus the neural net would not learn.

+",23910,,,,,6/10/2019 16:12,How can alpha zero learn if the tree search stops and restarts before finishing a game?,,2,0,,,,CC BY-SA 4.0 +11788,2,,11780,4/12/2019 13:14,,1,,"

Yes.

+ +

Since you have only one type of data, cluster analysis may be a good choice.

+ +

You can also try '1-class learning' approaches, although I have found these to be unreliable in the past.

+ +

An example of a cluster analysis algorithm in R is kmeans. There are many others. These approaches will reveal points that typify large portions of the dataset. By examining 'typical' cases, and how they differ, you can spot potential causal factors to test experimentally.

+ +

An example of a 1-class learning algorithm is a one-class svm. Most svm libraries will accept data of a single class and do the right thing with it. Here's an example with R's e1071 package.

+",16909,,,,,4/12/2019 13:14,,,,0,,,,CC BY-SA 4.0 +11789,1,,,4/12/2019 13:38,,1,107,"

Heterogeneity: based on the heterogeneity of agents, MAS can be divided into two categories, namely homogeneous and heterogeneous. Homogeneous MAS include agents that all have the same characteristics and functionalities, while heterogeneous MAS include agents with diverse features.

+ +

I read in this paper that the following methods can deal with the heterogeneity of agents in a MAS: +The dueling double deep Q-network (DDDQN) and Independent Deep Q-Network (IDQN): the first approach to address heterogeneous multi-agent learning in urban traffic control. +Deep Q-network (DQN): to handle heterogeneity, each agent has a different experience replay memory and a different network policy. +The asynchronous advantage actor-critic (A3C) algorithm is used to learn an optimal policy for each agent, which can be extended to multiple heterogeneous agents. So, can someone tell me what the best method is to deal with a heterogeneous multi-agent system (MAS)?

+",21181,,,,,4/12/2019 13:38,What is the best method to deal with heterogeneous multi agent system MAS?,,0,0,,,,CC BY-SA 4.0 +11791,2,,11787,4/12/2019 16:40,,2,,"

You're right that AlphaGo Zero doesn't perform rollouts in its MCTS. It does complete many, many games, though.

+ +

Realize that AlphaGo Zero only iterates MCTS 1,600 times before taking an action. The next state resulting from that action becomes the root of future search trees. Since a typical game of Go only lasts a few hundred moves, the board will very quickly reach a terminal state.

+ +

None of this is dependent on the initial performance of the neural network. The neural net can be incredibly bad; actions/moves will still be taken at the same frequency. Of course, since AlphaGo Zero trains with self-play, one of its two selves in a game will usually be the winner (ties are possible). So the neural net will improve over time.

+ +

I recommend going over the ""Self-Play Training Pipeline"" section of the paper.

+",22916,,22916,,4/12/2019 19:07,4/12/2019 19:07,,,,0,,,,CC BY-SA 4.0 +11792,1,,,4/12/2019 17:06,,2,750,"

I've been trying to read chapter 5.1 of the Sutton & Barto book, but I'm still a bit confused about the procedure of Monte Carlo policy evaluation (p. 92), and now I can't proceed with coding a Python solution, because I feel like I don't fully understand how the algorithm works, so the pseudocode example in the book (the orange part) doesn't seem to make much sense to me anymore.

+ +

I've done the chapter 4 examples with the algorithms coded already, so I'm not totally unfamiliar with these, but somehow I must have misunderstood the Monte Carlo prediction algorithm from chapter 5.

+ +

+ +
    +
  • My setting is a 4x4 gridworld where reward is always -1.
  • +
  • Policy is currently an equiprobable random walk. If an action would take the new state (s') outside the grid, then you simply stay in place, but the action will still have been taken, and the reward will still be received.
  • +
  • Discount rate will be 1.0 (no discounting).
  • +
  • Terminal states should be two of them, (0,0) and (3,3) at the corners.

    + +
      +
    1. Page 92 shows the algorithm pseudocode, and I feel as though I coded my episode-generating function correctly thus far. I have it such that I always start in the same starting state, coordinates (1,1) in the gridworld.

    2. +
    3. Currently, I have it so that if you started always in state (1,1), then a possible randomly generate episode could be as follows (in this case also optimal walk). Note that I currently have the episodes in form of list of tuple (s, a, r). Where s will also be a tuple (row,column), but a = string such as ""U"" for up, and r is reward always -1.

    4. +
    5. So a possible episode could be: [( (1,1), ""U"", -1 ), ( (0,1), ""L"", -1 )]. The terminal state is always excluded, so the last state in the episode will be the state immediately adjacent to the terminal state, just like the pseudocode describes that you should exclude the terminal state S_T. +But the random episode could also have been one with repeating states, such as [( (1,1), ""U"", -1), ( (0,1), ""U"", -1 ), ( (0,1), ""U"", -1 ), ( (0,1), ""L"", -1 )]

    6. +
    7. I made the loop for each step of episode, such as follows: once you have the episodeList of tuples, iterate for each tuple, in reversed order. I think this should give the correct amount of iterations there...

    8. +
    9. G can be updated as described in pseudocode.

    10. +
    11. currently the Returns(S_t) datastructure that I have, will be a dictionary where the keys are state tuples (row,col), and the values are empty lists in the beginning.

    12. +
    13. I have a feeling that I'm calculating the average into V(S_t) incorrectly, because I originally thought that you could even omit the V(S_t) step entirely from the algorithm, and only afterwards compute a separate 2D array V[r,c]: for each state, get the sum of the appropriate list elements (accessed from the dict) and divide that sum by the number of episodes that you ran???

    14. +
  • +
+ +

But suddenly I don't know how to implement the first-visit check in the algorithm. Like, I literally don't understand what it is actually checking for.

+ +

And furthermore, I don't understand how the empirical mean is supposed to be calculated in the Monte Carlo algorithm, where there is the step V(S_t) = average( Returns(S_t) ).

+ +

I will also post my Python code thus far.

+ +
import numpy as np
+import numpy.linalg as LA
+import random
+
+# YOUR CODE
+
+
+
+rows_count = 4
+columns_count = 4
+V = np.zeros((rows_count, columns_count))
+reward = -1 #probably not needed
+directions = ['up', 'right', 'down', 'left'] #probably not needed
+maxiters = 10000
+eps = 0.0000001
+k = 0 # ""memory counter"" of iterations inside the for loop, note that for loop i-variable is regular loop variable
+
+rows = 4
+cols = 4
+
+#stepsMatrix = np.zeros((rows_count, columns_count)) 
+
+
+
+
+
+def isTerminal(r,c):      #helper function to check if terminal state or regular state
+    global rows_count, columns_count
+    if r == 0 and c == 0: #im a bit too lazy to check otherwise the iteration boundaries        
+        return True       #so that this helper function is a quick way to exclude computations
+    if r == rows_count-1 and c == columns_count-1:
+        return True
+    return False
+
+def getValue(row, col):    #helper func, get state value
+    global V
+    if row == -1: row =0   #if you bump into wall, you bounce back
+    elif row == 4: row = 3
+    if col == -1: col = 0
+    elif col == 4: col =3
+
+    return V[row,col]
+
+def getState(row,col):
+    if row == -1: row =0   #helper func for the exercise:1
+    elif row == 4: row = 3
+    if col == -1: col = 0
+    elif col == 4: col =3
+    return row, col
+
+
+def makeEpisode(r,c):  #helper func for the exercise:1
+## return the count of steps ??
+#by definition, you should always start from non-terminal state, so
+#by minimum, you need at least one action to get to terminal state
+    stateWasTerm = False
+    stepsTaken = 0
+    curR = r
+    curC = c
+    while not stateWasTerm:
+
+        act = random.randint(0,3)
+        if act == 0: ##up
+            curR-=1
+        elif act == 1: ##right
+            curC+=1
+        elif act == 2: ## down
+            curR+=1
+        else:##left
+            curC-=1
+        stepsTaken +=1
+        curR,curC = getState(curR,curC)
+        stateWasTerm = isTerminal(curR,curC)
+    return stepsTaken
+
+
+V = np.zeros((rows_count, columns_count))
+episodeCount = 100
+reward = -1
+y = 1.0 #the gamma discount rate
+
+
+#use dictionary where key is stateTuple, 
+#and value is stateReturnsList
+#after algorithm for monte carlo policy eval is done, 
+#we can update the dict into good format for printing
+#and use numpy matrix
+returnsDict={} 
+for r in range(4):
+    for c in range(4):
+        returnsDict[(r,c)]=[]
+
+
+
+
+
+#""""""first-visit montecarlo episode generation returns the episodelist""""""
+def firstMCEpisode(r,c):
+    global reward
+    stateWasTerm = False
+    stepsTaken = 0
+    curR = r
+    curC = c
+    episodeList=[  ]
+
+    while not stateWasTerm:
+
+        act = random.randint(0,3)
+        if act == 0: ##up
+            r-=1
+            act=""U""
+        elif act == 1: ##right
+            c+=1
+            act=""R""
+        elif act == 2: ## down
+            r+=1
+            act=""D""
+        else:##left
+            c-=1
+            act=""L""
+        stepsTaken +=1
+
+        r,c = getState(r,c)
+        stateWasTerm = isTerminal(r,c)
+        episodeList.append( ((curR,curC), act, reward) )
+        if not stateWasTerm:
+
+            curR = r
+            curC = c
+
+
+    return episodeList
+
+
+kakka=0 #for debug breakpoints only!
+#first-visit Monte Carlo with fixed starting state in the s(1,1) state
+for n in range(1, episodeCount+1):
+
+    epList = firstMCEpisode(1,1)
+    G = 0
+    for t in reversed( range( len(epList) )):
+        G = y*G + reward #NOTE! reward is always same -1
+        S_t = epList[t][0] #get the state only, from tuple
+
+        willAppend = True
+        for j in range(t-1):
+            tmp = epList[j][0]
+            if( tmp == S_t ):
+                willAppend =False
+                break
+        if(willAppend):
+            returnsDict[S_t].append(G)
+            t_r = S_t[0] #tempRow from S_t
+            t_c =S_t[1] #tempCol from S_t
+            V[t_r, t_c] = sum( returnsDict[S_t] ) / n
+
+
+kakka = 3 #for debug breakpoints only!
+print(V)
+
+",23915,,2444,,4/16/2020 19:25,4/16/2020 19:25,Difficulty understanding Monte Carlo policy evaluation (state-value) for gridworld,,0,5,,,,CC BY-SA 4.0 +11793,1,13038,,4/12/2019 20:53,,11,1985,"

Here is the GAN objective function.

+

$$\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))]$$

+

What is the meaning of $V(D, G)$?

+

How do we get these expectation parts?

+

I was trying to understand it following this article: Understanding Generative Adversarial Networks (D. Seita), but, after many tries, I still can't understand how he got from $\sum_{n=1}^{N} \log D(x)$ to $\mathbb{E}(\log(D(x)))$.

+",23918,,2444,,12/10/2021 16:03,12/10/2021 16:03,"What is the meaning of $V(D,G)$ in the GAN objective function?",,1,1,,,,CC BY-SA 4.0 +11794,5,,,4/12/2019 21:17,,0,,"

Generative adversarial network (wiki)

+ +

Generative Adversarial Nets (Goodfellow, et al.)

+",1671,,1671,,4/12/2019 21:17,4/12/2019 21:17,,,,0,,,,CC BY-SA 4.0 +11795,4,,,4/12/2019 21:17,,0,,Generative Adversarial Networks,1671,,1671,,4/12/2019 21:17,4/12/2019 21:17,,,,0,,,,CC BY-SA 4.0 +11796,5,,,4/12/2019 21:49,,0,,,-1,,-1,,4/12/2019 21:49,4/12/2019 21:49,,,,0,,,,CC BY-SA 4.0 +11797,4,,,4/12/2019 21:49,,0,,"For questions related to Hebbian learning (or Hebb's rule), which is a local and incremental learning rule that is inspired by biological learning systems (such as the human brain). Put simply, Hebbian learning is based on the idea that the connection between two neurons is strengthened if these two neurons fire together.",2444,,2444,,8/10/2019 14:48,8/10/2019 14:48,,,,0,,,,CC BY-SA 4.0 +11798,2,,11743,4/13/2019 0:59,,4,,"

I have several undergraduates working on multiagent deep RL problems for their theses, but most of them have been working for 8-9 months. 2 might be a stretch.

+

Good multiagent deep RL problems for a bachelor's thesis might look something like:

+
    +
  1. Pick an older video game, which has been studied using Deep RL, but not in depth. Right now my students have been liking Nintendo 64 games.
  2. +
  3. Read the papers that study this game already.
  4. +
  5. Pick one of the described approaches and reproduce the paper's results in your own system.
  6. +
  7. Pick one of the parameters that the paper does not explore changing, and see what happens as you change it.
  8. +
+

This probably does not lead to a publishable result, but it is real science and can make for a fine undergraduate thesis.

+

A slightly harder project, which may require more time, would be to examine the "future work" sections of these papers, and perform one of the experiments suggested there. These experiments often lead to small publishable results.

+",16909,,2444,,1/23/2021 0:24,1/23/2021 0:24,,,,0,,,,CC BY-SA 4.0 +11799,1,11819,,4/13/2019 15:19,,3,68,"

In a Decision Tree or Random Forest, each tree has a collection of decision nodes (in which each node has a threshold value) and class labels (or regression values).

+ +

I know that threshold values are used for comparison with a corresponding feature value. As far as I know, the comparison is performed with either the ""<"", "">"" or ""=="" predicate. +Are there any other possible functions that take a threshold value and a feature value as inputs?

+",23924,,,,,4/15/2019 0:58,What are possible functions assigned on decision nodes for decision tree prediction?,,1,0,,,,CC BY-SA 4.0 +11800,1,,,4/13/2019 16:57,,1,108,"

I am attempting to implement an agent that learns to play in the Pong environment. The environment was created in PyGame, and I return the pixel data and the score at each frame. I use a CNN that takes a stack of the last 4 frames as input and predicts the best action to take. I also train on a minibatch of experiences from an experience replay at each timestep.

+ +

I have seen an implementation where the game returned a reward of 10 for each time the bot returns the ball and -10 for each time the bot misses the ball.

+ +

My question is whether it would be better to reward the bot significantly for managing to get the ball past the opponent, ending the episode. I was thinking of rewarding 10 for winning the episode, -10 for missing the ball, and 5 for returning the ball.

+ +

Please let me know if my approach is sensible, has any glaring problems or if I need to provide more information.

+ +

Thank you!

+",18208,,,,,4/13/2019 16:57,Deciding the rewards for different actions in Pong for a DQN agent,,0,2,,,,CC BY-SA 4.0 +11801,1,,,4/13/2019 18:17,,1,34,"

I am facing a multi-class classification task,

+ +

my question is:

+ +

are ROC and Precision-Recall (one-vs-rest) curves useful to evaluate and visualize the performance of a model, +or are the confusion matrix, precision, recall, and F-score (micro and macro) enough?

+ +

What do you think?

+",20780,,,,,4/13/2019 18:17,Evaluation metrics multi-class classification (ROC- PR curves),,0,0,,,,CC BY-SA 4.0 +11803,1,11805,,4/13/2019 18:28,,9,791,"

I came across an article, The Bitter Truth, via the Two Minute Papers YouTube Channel. Rich Sutton says...

+
+

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

+
+

What is the difference between search and learning here? My understanding is that learning is a form of search -- where we iteratively search for some representation of data that minimizes a loss function in the context of deep learning.

+",22866,,2444,,9/12/2020 13:15,9/12/2020 13:15,What is the difference between search and learning?,,2,0,,,,CC BY-SA 4.0 +11804,2,,11116,4/13/2019 20:58,,1,,"

The tutorials you link are not very relevant; there are already existing implementations of your exact problem.

+ +

You can use https://github.com/swshon/dialectID_e2e; there are many other similar implementations on GitHub.

+",3459,,,,,4/13/2019 20:58,,,,2,,,,CC BY-SA 4.0 +11805,2,,11803,4/13/2019 21:10,,8,,"

In the context of AI:

+ +
    +
  1. Search refers to Simon & Newell's General Problem Solver, and its many (many) descendant algorithms. These algorithms take the form:

    + +

    a. Represent a current state of some part of the world as a vertex in a graph.

    + +

    b. Represent, connected to the current state by edges, all states of the world that could be reached from the current state by changing the world with a single action, and represent all subsequent states in the same manner.

    + +

    c. Algorithmically find a sequence of actions that leads from a current state to some more desired goal state, by walking around on this graph.

  2. +
+ +

An example of an application that uses search is Google Maps. Another is Google Flights.

+ +
    +
  1. Learning refers to any algorithm that refines a belief about the world through exposure to experiences or to examples of others' experiences. Learning algorithms do not have a clear parent, as they were developed separately in many different subfields or disciplines. A reasonable taxonomy is the 5 tribes model. Some learning algorithms actually use search within themselves to figure out how to change their beliefs in response to new experiences!

    + +

    An example of a learning algorithm used today is Q-learning, which is part of the more general family of reinforcement learning algorithms. Q-learning works like this:

    + +

    a. The learning program (usually called the agent) is given a representation of the current state of the world, and a list of actions that it could choose to perform.

    + +

    b. If the agent has not seen this state of the world before, it assigns a random number to the reward it expects to get for performing each action. It stores this number as $Q(s,a)$, its guess at the quality of performing action $a$ in state $s$.

    + +

    c. The agent looks at $Q(s,a)$ for each action it could perform. With a small probability $\epsilon$ it picks an action at random (to explore), and otherwise it picks the action with the highest $Q(s,a)$.

    + +

    d. The action of the agent causes the world to change and may result in the agent receiving a reward from the environment. The agent makes a note of whether it got a reward (and how much the reward was), and what the new state of the world is like. It then adjusts its belief about the quality of performing the action it performed in the state it used to be in, so that its belief about the quality of that action is closer to the reality of the reward it got, and the quality of where it ended up.

    + +

    e. The agent repeats steps b-d forever. Over time, its beliefs about the quality of different state/action pairs will converge to match reality more and more closely. (A minimal code sketch of this loop is given after the list.)

  2. +
+ +
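A minimal code sketch of steps b-e might look as follows (it assumes a Gym-style environment object called env with discrete, hashable states; the hyperparameters are only illustrative):

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = defaultdict(lambda: [random.random() for _ in range(env.action_space.n)])  # step b

state = env.reset()
while True:
    # step c: epsilon-greedy action selection
    if random.random() < epsilon:
        action = env.action_space.sample()
    else:
        action = max(range(env.action_space.n), key=lambda a: Q[state][a])
    # step d: act, observe the reward and next state, and nudge Q(s, a) towards reality
    next_state, reward, done, _ = env.step(action)
    target = reward + gamma * max(Q[next_state]) * (not done)
    Q[state][action] += alpha * (target - Q[state][action])
    # step e: repeat forever
    state = env.reset() if done else next_state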

An example of an application that uses learning is AI.SE's recommendations, which are made by a program that likely analyzes the relationships between different combinations of words in pairs of posts, and the likelihood that someone will click on them. Every time someone clicks on them, it learns something about whether listing a post as related is a good idea or not. Facebook's feed is another everyday example.

+",16909,,16909,,5/18/2019 22:49,5/18/2019 22:49,,,,2,,,,CC BY-SA 4.0 +11806,2,,3573,4/14/2019 0:04,,1,,"

Although it is not a rigorous proof, Marvin Minsky's book, The Society of Mind gives us a blueprint for creating a ""mind"" (general intelligence). In his book, he posits that by combining mindless components (""agents"") together in various competing and cooperative structures, we can create actual minds.

+ +

IMHO, the recent popularity of Boosting, Bagging, Stacking, and other ensemble techniques will eventually evolve (through research) into Marvin Minsky's ""agent"" metaphor. Subsequently, as we learn to make these agents compete and cooperate (looks like this has recently begun with Generative Adversarial Networks), we will be able to write ""programs"" that mimic (or surpass) the human mind.

+",17741,,17741,,4/14/2019 0:58,4/14/2019 0:58,,,,0,,,,CC BY-SA 4.0 +11808,2,,4245,4/14/2019 8:28,,8,,"

Theory

+ +

Encoder

+ +
    +
  • In general, an Encoder is a mapping $f : X \rightarrow Y $ with $X$ Input Space and $Y$ Code Space
  • +
  • In case of Neural Networks, it is a Generative Model hence a function which is able to compute a Representation out of some input (like GAN)
  • +
+ +

The point is: how would you train such an encoder network ?

+ +
    +
  • The general answer is: it depends on what you want your code to be and ultimately depends on what kind of problem the NN has to solve, so let's pick one
  • +
+ +

Signal Compression

+ +

The goal is to learn a compressed representation of your input that allows reconstructing the original input while minimizing the loss of information

+ +

In this case you hence want the dimensionality of $Y$ to be lower than the dimensionality of $X$, which in the NN case means the code space will be represented by fewer neurons than the input space

+ +

Autoencoder

+ +

Focusing on the Signal Compression problem, what we want to build is a system which is able to

+ +
    +
  • take a given signal with size N bytes

  • +
  • compress it into another signal with size M<N bytes

  • +
  • reconstruct the original signal, starting from the compressed representation, as good as possible

  • +
+ +

To be able to achieve this goal, we basically need 2 components

+ +
    +
  • an Encoder which compresses its input, performing the $f : X \rightarrow Y$ mapping

  • +
  • a Decoder which decompresses its input, performing the $f: Y \rightarrow X$ mapping

  • +
+ +

We can approach this problem with the Neural Network Framework, defining an Encoder NN and a Decoder NN and training them

+ +

It is important to observe that this kind of problem can be effectively approached with the convenient strategy of unsupervised learning: there is no need to spend any (expensive) human work to build a supervision signal, as the original input can be used for this purpose

+ +

This means we have to build a NN which operates essentially between 2 spaces

+ +
    +
  • the $X$ Input Space

  • +
  • the $Y$ Latent or Compressed Space

  • +
+ +

The general idea behind the training is to make a certain input go along the encoder + decoder pipeline and then compare the reconstruction result with the original input with some kind of loss function

+ +

To define this idea a bit more formally

+ +
    +
  • The final autoencoder mapping is $f : X \rightarrow Y \rightarrow X$ with + +
      +
    • the $x$ input
    • +
    • the $y$ encoded input or latent representation of the input
    • +
    • the $\hat x$ reconstructed input
    • +
  • +
  • Eventually you will get an architecture similar to
  • +
+ +

+ +
    +
  • You can train this architecture in an unsupervised way, using a loss function like $f : X \times X \rightarrow \mathbb{R}$ so that $f(x, \hat x)$ is the loss associated to the $\hat x$ reconstruction compared with the $x$ input which is also the ideal result
  • +
+ +

Code

+ +

Now let's add a simple example in Keras related to the MNIST Dataset

+ + + +

+from keras.layers import Input, Dense 
+from keras.models import Model 
+
+# Defines spaces sizes 
+
+## MNIST 28x28 Input 
+space_in_size = 28*28
+
+## Latent Space 
+space_compressed_size = 32 
+
+# Defines the Input Tensor 
+in_img = Input(shape=(space_in_size,))
+
+encoder = Dense(space_compressed_size, activation='relu')(in_img)
+
+decoder = Dense(space_in_size, activation='sigmoid')(encoder)
+
+autoencoder = Model(in_img, decoder)
+
+autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
+
+
+",1963,,1963,,4/14/2019 10:52,4/14/2019 10:52,,,,9,,,,CC BY-SA 4.0 +11809,1,11821,,4/14/2019 13:04,,9,500,"

Given a large problem, value iteration and other table based approaches seem to require too many iterations before they start to converge. Are there other reinforcement learning approaches that better scale to large problems and minimize the amount of iterations in general?

+",23288,,2444,,4/14/2019 18:38,4/19/2019 1:55,Are there reinforcement learning algorithms that scale to large problems?,,1,0,,,,CC BY-SA 4.0 +11810,2,,4245,4/14/2019 14:24,,1,,"

As an addition to NicolaBernini's answer, here is a full listing which should work with a Python 3 installation that includes TensorFlow:

+
"""MNIST autoencoder"""
+
+from tensorflow.python.keras.layers import  Input, Dense, Flatten, Reshape
+from tensorflow.python.keras.models import Model 
+from tensorflow.python.keras.datasets import mnist
+import matplotlib.pyplot as plt
+from matplotlib.pyplot import figure
+
+"""## Load the MNIST dataset"""
+
+(x_train, y_train), (x_test, y_test) = mnist.load_data()
+
+"""## Define the autoencoder model"""
+
+## MNIST 28x28 Input 
+image_shape = (28,28)
+
+## Latent Space 
+space_compressed_size = 25 
+
+in_img = Input(shape=image_shape)
+img = Flatten()(in_img)
+encoder = Dense(space_compressed_size, activation='elu')(img)
+decoder = Dense(28*28, activation='elu')(encoder)
+reshaped = Reshape(image_shape)(decoder)
+autoencoder = Model(in_img, reshaped)
+autoencoder.compile(optimizer='adam', loss='mean_squared_error')
+
+"""## Train the autoencoder"""
+
+history = autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))
+
+"""## Plot the training curves"""
+
+plt.plot(history.history['loss'])
+plt.plot(history.history['val_loss'])
+plt.legend(['loss', 'val_loss'])
+plt.show()
+
+"""## Generate some output images given some input images. This will allow us to see the quality of the reconstruction for the current value of ```space_compressed_size```"""
+
+rebuilt_images = autoencoder.predict([x_test[0:10]])
+
+"""## Plot the reconstructed images and compare them to the originals"""
+
+
+figure(num=None, figsize=(8, 32), dpi=80, facecolor='w', edgecolor='k')
+plot_ref = 0
+
+for i in range(len(rebuilt_images)):
+
+  plot_ref += 1
+  plt.subplot(len(rebuilt_images), 3, plot_ref)
+  
+  if i==0:
+    plt.title("Reconstruction")
+  
+  plt.imshow(rebuilt_images[i].reshape((28,28)), cmap="gray")
+  
+  plot_ref += 1
+  plt.subplot(len(rebuilt_images), 3, plot_ref)
+  
+  if i==0:
+    plt.title("Original")
+  
+  plt.imshow(x_test[i].reshape((28,28)), cmap="gray")
+  
+  plot_ref += 1
+  plt.subplot(len(rebuilt_images), 3, plot_ref)
+  
+  if i==0:
+    plt.title("Error")
+
+  plt.imshow(abs(rebuilt_images[i] - x_test[i]).reshape((28,28)), cmap="gray")
+
+plt.show(block=True)
+
+

I have changed the loss function of the training optimiser to "mean_squared_error" to capture the grayscale output of the images. +Change the value of +space_compressed_size +to see how that affects the quality of the image reconstructions.

+",12509,,54665,,5/8/2022 18:17,5/8/2022 18:17,,,,0,,,,CC BY-SA 4.0 +11812,1,11908,,4/14/2019 15:01,,2,152,"

I'm building a generative adversarial network that generates images based on an input image. From the literature I've read on GANs, it seems that the generator takes in a random variable and uses it to generate an image.

+ +

If I were to have the generator receive an input image, would it no longer be a GAN? Would the discriminator be extraneous?

+",23941,,23941,,4/14/2019 15:07,4/21/2019 13:19,How important is it that the generator of a generative adversarial network doesn't take in information about input classes?,,2,0,,,,CC BY-SA 4.0 +11813,1,11814,,4/14/2019 18:47,,1,145,"

I need to understand the meaning of the FOL statement below.

+

$$ +\forall x \exists y \forall z (z \neq y \iff f(x) \neq z) +$$

+

Does this imply that $x$, $y$, and $z$ cannot be the same or $f(x)$ has no value?

+",22322,,2444,,12/18/2021 22:13,12/18/2021 22:13,What is the meaning of the statement $\forall x \exists y \forall z (z \neq y \iff f(x) \neq z)$?,,1,0,,,,CC BY-SA 4.0 +11814,2,,11813,4/14/2019 19:57,,4,,"

The statement is

+ +

""for all $x$, there exists a value of $y$ such that for all $z$,
+$z\neq y$ if and only if $z \neq f(x)$"".

+ +

This can be simplified: +$$\begin{align} +& & \forall x \exists y \forall z (z\neq y \iff z \neq f(x))\\ +&\implies & \forall x \exists y \forall z (z=y \iff z = f(x))\\ +&\implies & \forall x \exists y \forall z (y = f(x))\\ +&\implies & \forall x \exists y (y = f(x))\\ +\end{align}$$

+ +

If we denote the set of all values of $x$ by $X$ and the set of all values of $y$ by $Y$, then this tells us that the function $f$ maps every $x$ in $X$ to a $y$ in $Y$. That is, $f: X \to Y$.

+",22916,,,,,4/14/2019 19:57,,,,3,,,,CC BY-SA 4.0 +11816,1,11818,,4/14/2019 22:13,,7,965,"

What loss function is most appropriate when training a model with target values that are probabilities? For example, I have a 3-output model. I want to train it with a feature vector $x=[x_1, x_2, \dots, x_N]$ and a target $y=[0.2, 0.3, 0.5]$.

+ +

It seems like something like cross-entropy doesn't make sense here since it assumes that a single target is the correct label.

+ +

Would something like MSE (after applying softmax) make sense, or is there a better loss function?

+",17681,,2444,,4/15/2019 10:11,4/15/2019 10:11,What loss function to use when labels are probabilities?,,1,0,,,,CC BY-SA 4.0 +11817,1,11837,,4/14/2019 22:19,,3,63,"

The proof of the consistency of the per-decision importance sampling estimator assumes the independence of +$$\frac{\pi(A_t|S_t)}{b(A_t|S_t)}R_{t+1}\quad\text{ and }\quad \prod_{k=t+1}^{T-1}\frac{\pi(A_k|S_k)}{b(A_k|S_k)}.$$

+ +

See the proof of Theorem 1 in ""Eligibility Traces for Off-Policy Policy Evaluation"".
+The result is also stated in Equation (5.14) of Sutton and Barto's RL book.

+ +

I'm guessing that this is itself a consequence of an assumption of independence between +$$\frac{\pi(A_t|S_t)}{b(A_t|S_t)}\quad\text{ and }\quad \frac{\pi(A_{t+1}|S_{t+1})}{b(A_{t+1}|S_{t+1})}.$$

+ +

I don't understand how this assumption can be justified. Consider the extreme case of a nearly deterministic policy $\pi$ and deterministic MDP dynamics. It would seem to me that the two values above are then surely not independent.

+ +

Am I missing something?

+",22916,,,,,4/15/2019 20:12,Are successive actions independent?,,1,0,,,,CC BY-SA 4.0 +11818,2,,11816,4/14/2019 22:38,,10,,"

Actually, the cross-entropy loss function would be appropriate here, since it measures the "distance" between a distribution $q$ and the "true" distribution $p$.

+

You are right, though, that using a loss function called "cross_entropy" in many APIs would be a mistake. This is because these functions, as you said, assume a one-hot label. You would need to use the general cross-entropy function,

+

$$H(p,q)=-\sum_{x\in X} p(x) \log q(x).$$ +$ $

+

Note that one-hot labels would mean that +$$ +p(x) = +\begin{cases} +1 & \text{if }x \text{ is the true label}\\ +0 & \text{otherwise} +\end{cases}$$

+

which causes the cross-entropy $H(p,q)$ to reduce to the form you're familiar with:

+

$$H(p,q) = -\log q(x_{label})$$
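As a concrete sketch with made-up numbers, the general form can be computed directly from a soft target like the one in the question:

import numpy as np

p = np.array([0.2, 0.3, 0.5])               # target distribution y
logits = np.array([0.1, 0.4, 1.2])          # hypothetical raw model outputs
q = np.exp(logits) / np.exp(logits).sum()   # softmax -> predicted distribution
loss = -np.sum(p * np.log(q))
print(loss)

Loss functions that take a full label distribution (rather than a class index) generally accept such soft targets directly; check the specific API you are using.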

+",22916,,-1,,6/17/2020 9:57,4/14/2019 22:38,,,,0,,,,CC BY-SA 4.0 +11819,2,,11799,4/15/2019 0:58,,2,,"

For a binary split, there are only three possible operations (or arguably only two if you consider one-hot encoding). Any other kind of split would simply not be binary. Almost every tree-based model is restricted to binary splits, due to the combinatorial explosion when considering ternary or even more complex splits.

+ +

Of course you could write your own algorithm that uses recursive non-binary splits. Beware though that you'll be facing the same difficulty that has led the vast majority of algorithms to be limited to strictly binary splits.

+ +

Have a look at this related question.

+",12996,,,,,4/15/2019 0:58,,,,0,,,,CC BY-SA 4.0 +11820,2,,11771,4/15/2019 2:08,,1,,"

To answer the titular question first: Yes, of course you can. Whether a neural network can give you better results than a simpler model, however, depends on:

+ +
    +
  • How complex/non-linear the relationship between your input and output variables is;
  • +
  • Whether the neural network you have specified is able to learn this relationship efficiently;
  • +
  • How much training data you have.
  • +
+ +

With the information you have provided in the question, there's really no telling how it will perform against a simpler model, so I would advise trying both.

+ +

As for how your output varies over time and location, just include both in your model. In case of a linear regression model, accounting for spatiotemporal autocorrelation could be done with a mixed model using an appropriate covariance structure. The challenge for a neural network would be how to specify one that can learn this type of relationship (for starters, read up on RNNs for longitudinal data).

+",12996,,,,,4/15/2019 2:08,,,,3,,,,CC BY-SA 4.0 +11821,2,,11809,4/15/2019 5:53,,6,,"

This is a big question. I'm not going to try to cover the state-of-the-art, but I'll try to cover some of the main ideas.

+ +

Function Approximation [1]
+An essential strategy for scaling up RL algorithms is to reduce the effective size of your state and/or action space through function approximation. For example, you could parameterize your value function using fewer parameters than there are states. Optimization would then take place in the much smaller parameter space, which can be substantially faster. Note that using function approximation almost always loses you any convergence guarantees you would have had otherwise in the tabular setting. It has been very successful in practice, though.

+ +
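As a small sketch of the idea (the environment and the feature function here are hypothetical stand-ins), semi-gradient TD(0) with a linear value function only maintains a weight vector rather than a table over all states:

import numpy as np

def td0_linear(env, features, n_features, alpha=0.01, gamma=0.99, episodes=1000):
    w = np.zeros(n_features)                     # far fewer parameters than states
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = env.action_space.sample()        # evaluate a fixed (here random) policy
            s2, r, done, _ = env.step(a)
            v_next = 0.0 if done else w @ features(s2)
            td_error = r + gamma * v_next - w @ features(s)
            w += alpha * td_error * features(s)  # semi-gradient update
            s = s2
    return w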

Sampling[2]
+Value iteration and other dynamic programming algorithms sweep through the entire state space when computing value functions. Sample-based approaches instead update value functions for states as they are visited. These include Monte Carlo and Temporal Difference methods. Sampling allows us to focus on a subset of the states, freeing us from the computation required to get accurate value estimates of potentially irrelevant states. This is essential in real-world settings, where almost all possible states of the world are irrelevant or even impossible to reach.

+ +

Sample Efficiency/Experience Replay
+All else equal, a sample efficient agent is one that learns more with the same experience. Doing this reduces learning time, especially if the time bottleneck is in interacting with the environment. One basic way of improving sample efficiency is to store and reuse experience with something like the experience replay buffer popularized in the DQN paper. Another, more recent, algorithm called Hindsight Experience Replay improves sample efficiency by allowing the agent to learn more from its failures (trajectories with no reward).

+ +
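A minimal sketch of such a buffer is just a bounded queue that is sampled uniformly:

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)   # old transitions are dropped automatically

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # assumes at least batch_size transitions have been stored
        return random.sample(self.buffer, batch_size)

Each stored transition can then be replayed many times, which is where the gain in sample efficiency comes from.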

Model-Based Methods[3]
+While technically also about sample efficiency (maybe all of these points are?), model-based methods are important enough to warrant their own section. Usually, the MDP dynamics aren't known to the agent beforehand. Learning and maintaining an estimate of the MDP is therefore often a good idea. If an agent can use its internal model of the world to simulate experience, it can then learn from that simulated experience (called planning) in addition to learning from actual experience. Because simulated experience is much cheaper to gather than actual experience, this can reduce the time needed to learn.

+ +

Search[4]
+If our value estimates were perfect, then behaving optimally would only be a matter of moving to the neighboring state with the highest value. This is hardly ever the case, though, so we would like to make decisions more intelligently. One way, called forward search, is to use a model to consider many possible trajectories starting from the current state. The most popular and successful example of forward search is Monte Carlo Tree Search (MCTS), which was famously used in AlphaGo Zero. Because search allows us to make better decisions given imperfect value estimates, we can focus on more promising trajectories, saving time and computation.

+ +

Exploration
+Only ever taking what we think is the ""best"" action in a given state is usually not a very good idea. When sampling trajectories through large state and/or action spaces, this strategy can completely fail. Taking exploratory actions can help ensure that high-value states are discovered at all. Deciding when to explore and which actions to take is an active area of research. In general, though, exploratory actions are ones that reduce an agent's uncertainty about the environment.

+ +

Injecting Human Knowledge
+Finally, and maybe obviously, reducing the time complexity of an RL algorithm can be accomplished by giving the agent more information about the world. This can be done in many ways. If using linear function approximation, for example, an agent could be given useful information through the features it uses. If it makes use of a model, the model could be initialized with reasonable priors for the reward and transition probability distributions. ""Reward shaping"", the practice of manually engineering a (dense) reward function to facilitate learning a specific task, is a more general approach. An agent could also learn directly from human demonstrations with inverse reinforcement learning or imitation learning.

+ +
+ +

References
+All references not already linked to are chapters out of Sutton and Barto's RL book.
+[1] Linear function approximation is discussed in depth in Chapter 9.
+[2] Monte Carlo and Temporal Difference methods are discussed in Chapters 5 and 6.
+[3] Model-based methods are discussed in the first part of Chapter 8.
+[4] MCTS and search in general are discussed in the second half of Chapter 8.

+",22916,,22916,,4/19/2019 1:55,4/19/2019 1:55,,,,3,,,,CC BY-SA 4.0 +11822,1,,,4/15/2019 7:47,,1,381,"

I'm wondering whether AI can now help us extract a summary or the general idea of a long article (for example, a novel or historical stories), or extract the most important keywords from a sentence.

+ +

Could you please tell me if any project of this kind has been done?

+ +

I hope I can improve my reading speed and effectiveness with the help of AI.

+",23855,,,,,9/7/2020 19:33,Can AI help summarize article or abstract sentence keyword?,,2,0,,,,CC BY-SA 4.0 +11823,2,,11822,4/15/2019 8:29,,1,,"

Yes. Text summarisation has been a research topic in (computational) linguistics for literally decades. Have a look at the Wikipedia page on Automatic Summarisation for an overview.

+ +

There are basically two different approaches: either selecting salient sentences (or parts of sentences) which represent the gist of the text, or trying to 'understand' the text and generating new sentences. The former is generally easier, and works on any text, while the latter would probably produce better results, but is more complicated and would not work on any text, as it would be specific to a particular topic.

+",2193,,,,,4/15/2019 8:29,,,,2,,,,CC BY-SA 4.0 +11824,1,,,4/15/2019 8:33,,0,148,"

Is there an AI application that can produce syntactically (and semantically) correct sentences given a bag of words? For example, suppose I am given the words ""cat"", ""fish"", and ""lake"", then one possible sentence could be ""cat eats fish by the lake"".

+",23855,,2444,,4/15/2019 9:45,4/26/2023 16:02,How do I create syntactically correct sentences given several words?,,1,3,,,,CC BY-SA 4.0 +11825,1,,,4/15/2019 9:42,,1,121,"

In natural language processing, we can convert words to vectors (or word embeddings). In this vector space, we can measure the similarity between these word embeddings.

+ +

How can we create a vector space where word spelling and pronunciation can be easily compared? For example, ""apple"" and ""ape"", ""start"" and ""startle"" are very similar, so they should also be similar in this new vector space.

+ +

I am eventually looking for a library that can do this out of the box. I would like to avoid implementing this myself.

+",23855,,2444,,4/16/2019 22:18,4/18/2019 14:17,How can we create a vector space where word spelling and pronunciation can be easily compared?,,1,0,,,,CC BY-SA 4.0 +11826,5,,,4/15/2019 9:55,,0,,,-1,,-1,,4/15/2019 9:55,4/15/2019 9:55,,,,0,,,,CC BY-SA 4.0 +11827,4,,,4/15/2019 9:55,,0,,"For questions related to the concept of function approximation. For example, questions that involve the use of a neural network (which is a function approximator) in the context of RL in order to approximate a value function or questions that are related to universal approximation theorems.",2444,,2444,,8/3/2019 18:57,8/3/2019 18:57,,,,0,,,,CC BY-SA 4.0 +11828,5,,,4/15/2019 9:55,,0,,,-1,,-1,,4/15/2019 9:55,4/15/2019 9:55,,,,0,,,,CC BY-SA 4.0 +11829,4,,,4/15/2019 9:55,,0,,For questions related to clustering (a usual unsupervised learning technique).,2444,,2444,,4/16/2019 18:34,4/16/2019 18:34,,,,0,,,,CC BY-SA 4.0 +11830,5,,,4/15/2019 9:57,,0,,,-1,,-1,,4/15/2019 9:57,4/15/2019 9:57,,,,0,,,,CC BY-SA 4.0 +11831,4,,,4/15/2019 9:57,,0,,"For questions of the form ""What is the relation (or relationship) between X and Y?"", which are question used to disambiguate between two or more AI topics, terms or expressions.",2444,,2444,,4/16/2019 18:34,4/16/2019 18:34,,,,0,,,,CC BY-SA 4.0 +11833,1,,,4/15/2019 14:35,,2,172,"

Do we have a cross-language vector space for word embeddings?

+ +

When measuring the similarity of apple/Pomme/mela/Lacus/苹果/りんご, they should all be the same.

+ +

It would be great if there were an internet service available with a neural network that has already been trained on multiple languages.

+",23855,,2444,,4/16/2019 22:22,4/18/2019 15:00,Do we have cross-language vector space for word embedding?,,2,0,,,,CC BY-SA 4.0 +11835,1,,,4/15/2019 16:47,,2,1324,"

I am solving a classification problem with CNN. The number of classes is 5.

+ +
    +
  1. How can I decide the number of neurons in the FC layer before the softmax layer?
  2. +
  3. Is it $N * 5$, where $N$ is the number of classes?
  4. +
  5. Is there any documentation for deciding the number of neurons in the FC layer (before SoftMax layer)
  6. +
+",23734,,23734,,4/16/2019 0:47,4/16/2019 9:33,How do I choose the number of neurons in the fully-connected layer before the softmax layer?,,1,0,,,,CC BY-SA 4.0 +11836,2,,11835,4/15/2019 18:20,,1,,"
+

How can I decide the number of neurons in the FC layer before the softmax layer?

+
+ +

Train different network architectures (with different numbers of neurons in the last FC layer). Use cross-validation - a set of data you have not trained on - to measure the performance of the network, using a metric that you have decided beforehand is a good proxy for your experiment goals. For instance, you might choose getting the highest classification accuracy as your goal - but might choose something different for an unbalanced dataset, because 90% accuracy is not meaningful.

+ +

There is not usually a good reason to over-tune your network. Trying some variations with 1.5 x or 2 x geometric series (e.g. 5, 10, 20, 40 neurons in layer) is probably enough to find a good hidden layer size.

+ +
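A minimal sketch of such a comparison (the convolutional feature extractor is omitted for brevity, and the training arrays x_train, y_train, x_val, y_val are assumed to already exist, with one-hot labels) could look like this:

from keras.models import Sequential
from keras.layers import Dense

def build_model(hidden_units, input_dim, num_classes=5):
    model = Sequential()
    model.add(Dense(hidden_units, activation='relu', input_dim=input_dim))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

# try a small geometric series of layer widths and compare validation accuracy
for units in [5, 10, 20, 40]:
    model = build_model(units, input_dim=x_train.shape[1])
    model.fit(x_train, y_train, epochs=20, verbose=0)
    _, acc = model.evaluate(x_val, y_val, verbose=0)
    print(units, acc)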
+

Is it $N * 5$, where $N$ is the number of classes?

+
+ +

No, in general. That may work just fine for your problem though.

+ +
+

Is there any documentation for deciding the number of neurons in the FC layer (before the softmax layer)

+
+ +

Not really. The most direct thing to do is find solutions that work well for similar problems to yours, and adapt as necessary. So you can search for any similar problem domain and see what the researchers used there.

+ +

There is not much theory to guide you here. However, if it helps with your intuition, more neurons can make a better fit to higher frequency variations in the target function you are learning, whilst more layers will mean better handling of complex function spaces (e.g. where rules about the mapping between input and output can be made according to combining simpler rules) - provided in both cases that you have enough training data that it is possible for the NN to learn the function approximately. These are not strictly defined traits of functions for machine learning, so many NN users will work with this intuitive view.

+ +

If in doubt, assuming this is not an image, NLP or other well-studied problem, then I might just guess at e.g. 64 neurons per hidden layer, and try 1, 2, and 3 hidden layers as a starting point (all the same size). I cannot say if that will work for you and your problem, but it might help get past the ""blank page effect"" and start you training and testing some variations.

+",1847,,2444,,4/16/2019 9:33,4/16/2019 9:33,,,,2,,,,CC BY-SA 4.0 +11837,2,,11817,4/15/2019 19:53,,2,,"

This is a consequence of the Markovian assumption, which underpins all of RL.

+ +

The Markovian assumption says that it doesn't matter how we reached a given state, only that we reached it, when deciding how likely it is that we move to subsequent states. This naturally implies that our choice of actions must also depend only on the current state.

+ +

You are correct that the assumption is somewhat unrealistic. However, it usually can be made to yield a reasonable approximation of the true dynamics without involving too many state variables. By solving this approximation, we hope to find a policy that also works well in the real problem.

+ +

Here's an example. In robot navigation, the real dynamics of the robot do depend on where the robot has been in the past, because as the robot's battery drains, the voltage levels it outputs will change slightly, and its wheels may become more prone to slippage. Thus, logically, our choice of action should change based on both the current state and our choice of actions in previous states (which drained the battery to a greater or lesser degree). However, if we try to incorporate this into the model, we'll end up having a combinatorial explosion in size of the dynamics function $P(S_{t+1} | S_t, a_{t}, S_{t-1}, a_{t-1} ... S_0, a_0)$ that actually captures this process (in particular, it will now be a function of 2t inputs). This, in turn, will necessitate a combinatorial blowup in the complexity of our policy (it will increase in complexity by an exponent of 2t). To keep things tractable, we accept that the dynamics will become lossy, or we can add some extra detail to the local state (e.g. a battery level) to capture the more complex dynamics in a Markovian way. Either way though, we'll be back to a world where future actions won't depend on past ones.

+ +

To be clearer, if we assume that $P(S_{t+1} | S_t) = P(S_{t+1} | S_{t}, S_{t-1}, \dots, S_0)$ (that is, conditioning on earlier states adds no information), then the other relationships you mention should not seem surprising. That assumption is the Markovian assumption. It converts the state transition function into a matrix that represents a Markov Chain. If we don't make that assumption, then most RL algorithms do not apply to our problem.
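
As a toy illustration (not part of your problem), here is what that Markov Chain view looks like numerically: the next state distribution is computed from the current one alone, regardless of the path taken to reach it.

import numpy as np

P = np.array([[0.9, 0.1],    # transition matrix: row = current state, column = next state
              [0.4, 0.6]])
mu = np.array([1.0, 0.0])    # start distribution: we begin in state 0 with certainty

for t in range(3):
    mu = mu @ P              # depends only on the current distribution, not on the history
    print(t + 1, mu)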

+",16909,,16909,,4/15/2019 20:12,4/15/2019 20:12,,,,0,,,,CC BY-SA 4.0 +11839,5,,,4/15/2019 20:30,,0,,"

Evolutionary Game Theory (wiki)

+",1671,,1671,,4/15/2019 20:30,4/15/2019 20:30,,,,0,,,,CC BY-SA 4.0 +11840,4,,,4/15/2019 20:30,,0,,"For questions about the field and methods of Evolutionary Game Theory. (Distinct from ""evolutionary/genetic in that it EGT does not require recombination.)",1671,,1671,,4/15/2019 20:30,4/15/2019 20:30,,,,0,,,,CC BY-SA 4.0 +11842,1,,,4/16/2019 0:07,,4,133,"

I am trying to deploy a machine learning solution online into an application for a client. One thing they requested is that the solution must be able to learn online because the problem may be non-stationary and they want the solution to track the non-stationarity of the problem. I thought about this problem a lot, would the following work?

+ +
    +
  1. Set the learning rate (step-size parameter) for the neural network at a low fixed value so that the most recent training step is weighted more.
  2. +
  3. Update the model only once per day, in a mini-batch fashion. The mini-batch will contain data from the day, mixed with data from the original data set to prevent catastrophic interference. By using a mini batch update, I am not prone to biasing my model to the latest examples, and completely forgetting the training examples from months ago.
  4. +
+ +

Would this set-up be ""stable"" for online/incremental machine learning? Also, should I set up the update step so it samples data from all distributions of my predicted variable uniformly so it gets an ""even"" update (i.e., does not overfit to the most probabilistic predicted value)?
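
To make the idea concrete, the daily update step I have in mind looks roughly like this (just a sketch; the names and the Keras-style train_on_batch call are placeholders):

import random
import numpy as np

def daily_update(model, todays_data, original_data, batch_size=64, n_batches=100):
    # todays_data / original_data: lists of (x, y) pairs; mix them to limit catastrophic interference
    pool = list(todays_data) + random.sample(list(original_data), k=min(len(original_data), len(todays_data)))
    for _ in range(n_batches):
        batch = random.sample(pool, k=min(batch_size, len(pool)))
        xs, ys = zip(*batch)
        model.train_on_batch(np.array(xs), np.array(ys))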

+",17706,,2444,,4/16/2019 10:01,4/16/2019 10:01,What are stable ways of doing online machine learning?,,0,9,0,,,CC BY-SA 4.0 +11843,2,,8943,4/16/2019 1:56,,4,,"

In the case of UCS, the evaluation function (that is, the function that is used to select the next node to expand) is $f(n) = g(n)$, where $g(n)$ is the cost of the path from the initial node to $n$, while in the case of the greedy BFS it is $f(n) = h(n)$, where $h(n)$ is the heuristic function that estimates the cost of the path from $n$ to the goal node. In other words, in the case of UCS, nodes are expanded only using the experience (in the form of $g(n)$), while, in the case of GBFS, nodes are expanded only using the estimate of the cost to the goal. Note that, in both cases, the node that is chosen to be expanded is the one with the smallest $f(n)$ value.
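
To make the difference concrete, here is a minimal (non-optimized) sketch of a generic best-first search in which only the evaluation function $f$ changes between the two algorithms:

import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    # f(g, node): return g for uniform-cost search, or h(node) for greedy best-first search
    counter = itertools.count()   # tie-breaker so the heap never has to compare nodes directly
    frontier = [(f(0, start), next(counter), 0, start)]
    visited = set()
    while frontier:
        _, _, g, node = heapq.heappop(frontier)
        if goal_test(node):
            return node, g
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in successors(node):
            heapq.heappush(frontier, (f(g + step_cost, nxt), next(counter), g + step_cost, nxt))
    return None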

+",23973,,2444,,7/6/2019 20:41,7/6/2019 20:41,,,,0,,,11/8/2020 12:02,CC BY-SA 4.0 +11845,1,11848,,4/16/2019 6:50,,2,1248,"

Why does informed search more efficiently finds a solution than an uninformed search?

+",23299,,2444,,6/22/2019 12:54,6/22/2019 12:54,Why is informed search more efficient than uninformed search?,,1,0,,,,CC BY-SA 4.0 +11846,2,,7998,4/16/2019 7:44,,2,,"

I had a similar problem with a 2D convolution on a hexagonal grid while working on a diffusion problem and stumbled upon this question. Rather than using cube coordinates, you could use doubled coordinates, which I found much easier to save in a 2D array.

+ +

An example kernel that only changes the direct neighbours and the cell itself would be this

+ +
kernel = np.array([[0,   0.1, 0,   0.1, 0  ],
+                   [0.1, 0,   0.4, 0,   0.1],
+                   [0,   0.1, 0,   0.1, 0  ]])
+
+ +

I don't know much about Keras, but I assume that this is possible there as well.
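
For what it's worth, outside of Keras the kernel above can be applied directly with SciPy (a sketch; the array is a field stored in doubled coordinates, and its shape is arbitrary):

import numpy as np
from scipy.signal import convolve2d

grid = np.zeros((20, 40))        # doubled coordinates: only cells with (row + col) even are real hexes
grid[10, 20] = 1.0               # a single occupied cell
result = convolve2d(grid, kernel, mode='same')   # 'kernel' is the 3x5 array defined above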

+",23978,,,,,4/16/2019 7:44,,,,0,,,,CC BY-SA 4.0 +11847,1,11850,,4/16/2019 12:35,,-1,1956,"

How can the A* algorithm be optimized?

+ +

Any references that shows the optimization of A* algorithm are also appreciated.

+",23299,,2444,,4/16/2019 15:33,4/16/2019 15:33,How can the A* algorithm be optimized?,,2,4,,,,CC BY-SA 4.0 +11848,2,,11845,4/16/2019 13:53,,1,,"

There are several informed and uninformed search algorithms. They do not all have the same time and space complexity (which also depends on the specific implementation). I could come up with an informed search algorithm that is highly inefficient in terms of time or space complexity. So, in general, informed search algorithms are not more efficient than uninformed ones, in terms of space and time complexity.

+ +

However, given that informed search algorithms use ""domain knowledge"" (that is, a heuristic function that estimates the distance to the goal nodes), in practice they tend to find the goal node more rapidly, the more informed the heuristic is (and the heuristic needs to be admissible in order to find the optimal solution). For example, in theory, A* has exponential time and space complexity (with respect to the branching factor and the depth of the tree), but, in practice, it tends to perform decently well: its effective branching factor (that is, the branching factor observed on specific problem instances) tends to be quite small for many problems.

+ +

What is a more informed heuristic? Intuitively, it is a heuristic that more rapidly focuses the search on the promising parts of the search space. Let's denote the heuristic function by $h$. If $h(n)=0$, for all nodes $n$, then this is an admissible heuristic, because it never overestimates the distance to the goal (it always returns $0$). However, it is a very uninformed heuristic: whether you are at the start node or right next to the goal, the estimate is the same, so it gives the search no guidance at all. Given two admissible heuristics $h_1$ and $h_2$, $h_2$ is more informed than $h_1$ if $h_1(n) \leq h_2(n)$, for all nodes $n$.
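
For example, on a 4-connected grid with unit step costs (a toy setting, just for illustration), both of the following heuristics are admissible, but the second is more informed than the first:

def h_zero(node, goal):
    return 0   # admissible, but gives the search no guidance at all

def h_manhattan(node, goal):
    # also admissible with unit step costs, and h_zero(n) <= h_manhattan(n) for every node n
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])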

+ +

An uninformed search algorithm performs an exhaustive search. There are several ways of performing such an exhaustive search (e.g. breadth-first or depth-first), some of which are more efficient than others (depending on the search space or problem). Given that they perform an exhaustive search, they tend to explore ""uninteresting"" parts of the search space. Hence, in practice, they might be less efficient than informed search algorithms (that is, they might require more time to find the solution).

+",2444,,2444,,4/16/2019 14:05,4/16/2019 14:05,,,,0,,,,CC BY-SA 4.0 +11849,2,,11847,4/16/2019 14:45,,1,,"

The first step of optimisation is to measure where inside the implementation most time is spent -- you don't actually optimise the algorithm itself, but a specific implementation of it. This step should give you an overview of where you can make improvements. Speculative changes usually don't do much.

+ +

There are several aspects of A* which would be candidates. You are keeping a list of paths currently under investigation. If it is very long, processing might take more time. You could try and prune this list (though it might mean that you won't actually find the best path anymore). Or you could investigate efficient ways of storing it.

+ +

At each step you need to compute the 'cost' of each path. This could be cached to stop you from repeating calculations. The heuristic function itself could be slow -- again, something to investigate.
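
As one illustration of the caching idea (a sketch only; the heuristic here is a simple placeholder for whatever expensive estimate you are actually using):

from functools import lru_cache

@lru_cache(maxsize=None)
def heuristic(node, goal):
    # memoised, so repeated look-ups for the same (node, goal) pair cost almost nothing
    return ((node[0] - goal[0]) ** 2 + (node[1] - goal[1]) ** 2) ** 0.5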

+ +

Let me stress again that you need to measure the performance first. There is no point in randomly trying to make things faster; you could waste a lot of time for very little gain. And typically with optimisation, readability of the code suffers, and it becomes harder to maintain. You might find that other parts of your system take a lot longer than the actual path finding, so your time might better be spent on those.

+",2193,,,,,4/16/2019 14:45,,,,0,,,,CC BY-SA 4.0 +11850,2,,11847,4/16/2019 15:24,,1,,"

Check the reference URLs below for optimizing the A* algorithm (the second describes the heap data structure, which is commonly used for the open list): +https://takinginitiative.wordpress.com/2011/05/02/optimizing-the-a-algorithm/ +https://en.wikipedia.org/wiki/Heap_%28data_structure%29

+",23985,,,,,4/16/2019 15:24,,,,1,,,,CC BY-SA 4.0 +11851,2,,8554,4/16/2019 15:43,,1,,"

In the stated problem, most of the transitions aren't possible, so most of the terms of equations (3.3) and (3.4) from the book will end up being 0.

+ +

In my understanding,

+ +

$$ +\begin{align} +p(s'= high | s = high, a = search) &= \sum_{r \in \{0, -3, r_{search}, r_{wait}\}} p(s'=high, r | s = high, a = search) \\ +&= p(s'=high, r =0 | s = high, a = search) \\ +&+p(s'=high, r = -3 | s = high, a = search) \\ +&+p(s'=high, r = r_{search}| s = high, a = search) \\ +&+p(s'=high, r = r_{wait} | s = high, a = search) +\end{align} +$$

+ +

The problem states that if the agent's battery level is high and it chooses to search, then there is no chance that it ends up with the negative reward ($r= -3$), thus that transition probability is 0 by definition: $p(s'=high, r = -3 | s = high, a = search) = 0$.

+ +

Applying the same logic to all other terms, we get, +$$ +\begin{align} +p(s'= high | s = high, a = search) &= \sum_{r \in \{0, -3, r_{search}, r_{wait}\}} p(s'=high, r | s = high, a = search) \\ +&= p(s'=high, r = r_{search}| s = high, a = search) \\ +&= \alpha +\end{align} +$$

+ +

It looks a bit odd. I am not 100% sure that this is the solution, because then the question would not make much sense (the table would have stayed essentially the same).

+",23986,,23986,,4/16/2019 15:49,4/16/2019 15:49,,,,0,,,,CC BY-SA 4.0 +11852,2,,11833,4/16/2019 17:38,,2,,"

You can try to read about MUSE (Multilingual Unsupervised and Supervised Embeddings) by Facebook. You can read it from its Github or this article. They also provide the FastText dictionary format (.vec file) for some languages.

+ +

Their original paper includes a figure showing how it aligns the word vectors of two different languages.

+ +

+",16565,,,,,4/16/2019 17:38,,,,0,,,,CC BY-SA 4.0 +11854,5,,,4/16/2019 22:48,,0,,"

In natural language processing, there are several techniques to create word embeddings: for example, GloVe or word2vec.

+",2444,,2444,,4/17/2019 12:50,4/17/2019 12:50,,,,0,,,,CC BY-SA 4.0 +11855,4,,,4/16/2019 22:48,,0,,"For questions related to word embeddings, which are vector representations of words.",2444,,2444,,4/17/2019 12:50,4/17/2019 12:50,,,,0,,,,CC BY-SA 4.0 +11856,5,,,4/16/2019 22:49,,0,,"

For more details, have a look at https://nlp.stanford.edu/projects/glove/.

+",2444,,2444,,4/17/2019 12:51,4/17/2019 12:51,,,,0,,,,CC BY-SA 4.0 +11857,4,,,4/16/2019 22:49,,0,,"For questions related to the GloVe (Global Vectors for Word Representation), which is an unsupervised learning algorithm for obtaining vector representations for words.",2444,,2444,,4/17/2019 12:51,4/17/2019 12:51,,,,0,,,,CC BY-SA 4.0 +11858,5,,,4/16/2019 22:50,,0,,,-1,,-1,,4/16/2019 22:50,4/16/2019 22:50,,,,0,,,,CC BY-SA 4.0 +11859,4,,,4/16/2019 22:50,,0,,"For questions related to incremental learning algorithms, which are algorithms that attempt to learn new information without forgetting all the previously learned one. Incremental learning is often a synonym for continual (or continuous) learning and lifelong learning.",2444,,2444,,8/6/2019 0:02,8/6/2019 0:02,,,,0,,,,CC BY-SA 4.0 +11860,5,,,4/16/2019 22:52,,0,,,-1,,-1,,4/16/2019 22:52,4/16/2019 22:52,,,,0,,,,CC BY-SA 4.0 +11861,4,,,4/16/2019 22:52,,0,,"For questions related to online learning algorithms, that is, algorithms that learn while e.g. the associated agent interacts with an environment.",2444,,2444,,4/17/2019 12:49,4/17/2019 12:49,,,,0,,,,CC BY-SA 4.0 +11862,5,,,4/16/2019 22:53,,0,,"

For more details, see, for example, An Introduction to Causal Inference (2010) Judea Pearl, Causal inference in statistics: An overview (2009) by Judea Pearl or Probabilistic Causation at Stanford Encyclopedia of Philosophy.

+",-1,,2444,,8/15/2019 11:44,8/15/2019 11:44,,,,0,,,,CC BY-SA 4.0 +11863,4,,,4/16/2019 22:53,,0,,"For questions related to causation (or causality), which is the field that studies the relationship between cause and effect, where the cause is partly responsible for the effect and the effect is partly dependent on the cause.",2444,,2444,,8/15/2019 21:25,8/15/2019 21:25,,,,0,,,,CC BY-SA 4.0 +11864,5,,,4/16/2019 22:54,,0,,,-1,,-1,,4/16/2019 22:54,4/16/2019 22:54,,,,0,,,,CC BY-SA 4.0 +11865,4,,,4/16/2019 22:54,,0,,For questions related to the iterative deepening A* search algorithm.,2444,,2444,,4/17/2019 12:50,4/17/2019 12:50,,,,0,,,,CC BY-SA 4.0 +11866,1,,,4/17/2019 4:38,,11,5900,"

In section 3.2.1 of Attention Is All You Need the claim is made that:

+ +
+

Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

+
+ +

I don't see why dot-product attention would be faster. Additive attention looks nearly identical computation-wise; the main difference is $Q + K$ instead of $Q K^T$ in dot-product attention, and $Q K^T$ requires at least as many addition operations as $Q + K$. So how can it possibly be faster?

+",21158,,2444,,4/17/2019 9:17,3/18/2022 8:40,Why is dot product attention faster than additive attention?,,3,0,,,,CC BY-SA 4.0 +11867,2,,9925,4/17/2019 9:36,,2,,"

I am going to refer to the expected output as ""the price of the house"" or simply as ""price"" to make the answer easier to understand but this applies to any other scenario as well.

+ +

To answer part 1 of your question, if the correlation between $w$ and the price of the house is 0 or negligible, then it is very likely that varying $w$ while keeping $x$, $y$, and $z$ constant will result in almost the same price being predicted in a trained network. This certainly seems like a learnable statistical characteristic. +There are some caveats though. Firstly, it depends on how complex your network is. How learnable are the correlations between $x$, $y$, $z$ and the price? Assuming most other factors like these fall into place, I would say that it is quite likely that $w$ will be decoupled from the output.

+ +

Part 2 of your question is a little trickier to explain. Let us consider a simpler scenario where we use logistic regression. Logistic regression is, in essence, a network with sigmoid output and no hidden layer. The output is the sigmoid over a linear combination of all inputs.

+ +

Let us consider the example as follows -
+Two data points have the same or similar values of $x$, $y$ and $z$ and expected output, but $w$ varies considerably. The coefficient of $w$ in the linear combination has a finite, non-zero value. Because of this, the input to the sigmoid will differ even though the resultant price in both cases should be the same or similar.
+The loss function's value will increase because of the discrepancy between expected and predicted values in the above example.
+In general, the change in the value of the coefficient of $w$ is the product of the learning rate and the derivative of the loss function with respect to it. +The change will now be such that the magnitude of the coefficient of $w$ decreases, to enforce the condition noted in the example above.
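
As a toy illustration of this shrinking effect (synthetic data and plain gradient descent on the logistic loss, just to make the mechanism visible):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                             # columns play the roles of x, y, z, w
y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] > 0).astype(float)    # the label ignores w entirely

coef = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ coef))
    coef -= 0.1 * X.T @ (p - y) / len(y)                   # gradient step on the logistic loss

print(coef)   # the last coefficient (for w) typically ends up close to zero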

+ +

It is difficult to predict at what point a more complex network will learn to ignore $w$. It could happen that the weights at the input layer itself might converge to zero. Or the weights in successive layers could converge such that their linear combination gives no importance to the input w.

+ +

I wanted to address another point. You mentioned that -

+ +
+

For a classical neural network, the network has no ""memory,"" so it might be very difficult for the network to realize that w is a worthless input parameter.

+
+ +

This first part about having no memory is true. However, the network does not need to remember all the past values. The trainable parameters of a network are basically learning the statistical distribution of the input data and mapping it to the expected output. They are trying to model a mathematical function that satisfies as many such training samples as possible. +The network, through these parameters, has stored an abstraction of the behaviour of the training data. So even though it does not remember every training sample, it does remember the general correlation between input and expected output.

+ +

An oversimplified analogy would be, you as a human don't remember the multiplication of every number by 2. Yet, if I ask you what is the product of 123 and 2, you can find the product because you just know how the ""multiply by 2"" function works in general. Similarly, the network builds an intuition of what the expected output should look like in general, by mapping it to a function whose parameters can be modified.

+",23994,,,,,4/17/2019 9:36,,,,0,,,,CC BY-SA 4.0 +11868,2,,11081,4/17/2019 15:36,,0,,"

I found myself going in circles for a while, so to clarify Neil Slater's answer:

+ +

In the beginning of the book, $S$ means ""set of non-terminal states"" and $S^+$ means ""set of all states, including the terminal ones"".

+ +

$$\sum_{s^{\prime} \in S} \sum_{r \in R} p(s^{\prime}, r | s,a) = 1, \forall s \in S, a \in A(s) \tag{3.3}$$

+ +

That said, since eq. 3.3 is stated $\forall s \in S$, it says that once in a terminal state the formula does not apply (which is obvious, because no action is ever available in a terminal state by definition).

+ +

It does not, however, constrain the probability of how one ""gets"" into a terminal state, and that is the key to answering the question.

+",23986,,22916,,4/17/2019 18:23,4/17/2019 18:23,,,,0,,,,CC BY-SA 4.0 +11869,2,,3288,4/17/2019 15:43,,1,,"

eggie5 actually has a good solution for you. This approach is a tried and tested way to solve the same problem you are trying to solve.

+ +

However, if you still want to concatenate the images and do this your way, you should concatenate the images along the channel dimension.

+ +

For example, by combining two $200\times 100 \times c$ feature vectors (where c is the number of channels) you should get a single $200\times 100 \times 2c$ feature vector.

+ +

The kernels of the next convolution look through all the channels of the feature vector $x \times x$ pixels at a time.
+If we combine along the channel dimension, it becomes easier for the network to compare pixel values at corresponding positions in both images. Since the objective is to predict similarity or dissimilarity, this is ideal for us.

+",23994,,23994,,4/18/2019 23:44,4/18/2019 23:44,,,,1,,,,CC BY-SA 4.0 +11870,1,11873,,4/17/2019 17:35,,4,1415,"

I have a CNN whose task is the regression of a single scalar. +I was wondering whether an additional task of reconstructing the image (used for learning visual concepts), as seen in a DeepMind presentation, with the loss and re-parametrization trick of a Variational Autoencoder, might help the principal regression task.

+ +

So you can imagine some convolutions with the role of feature extraction with some output X (let's say a vector of 256 values), that X goes into the VAE which computes Z and then the reconstructed image. And then the original regression task would take either X or Z in order to compute that scalar value.
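
To make it concrete, here is roughly what I have in mind (a PyTorch sketch; all shapes and sizes are placeholders, and the losses -- reconstruction + KL + regression -- are omitted):

import torch
import torch.nn as nn

class RegressionVAE(nn.Module):
    # Sketch only (assumes 64x64 RGB inputs): shared conv features X, a VAE branch (Z + reconstruction),
    # and the original scalar regression head, here fed with X (it could take Z instead).
    def __init__(self, latent_dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
        )
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Sigmoid(),
        )
        self.regressor = nn.Linear(256, 1)

    def forward(self, img):
        x = self.features(img)                                     # X
        mu, log_var = self.mu(x), self.log_var(x)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # re-parametrization trick
        return self.regressor(x), self.decoder(z), mu, log_var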

+ +

Has anyone tried such an approach? Is it worth the effort? +Thank you

+",21918,,,user9947,4/17/2019 21:48,4/19/2019 13:39,Variational Autoencoder task for better feature extraction,,1,0,,,,CC BY-SA 4.0 +11871,1,,,4/17/2019 18:15,,2,45,"

I have trained a modified VGG classification CNN, with random initialized weights; therefor the validation accuracy was not high enough for me to accept (around 66%). +now using the weights resulted from training the network, how can i use those weights in training the network again to improve accuracy? (e.g. using previous training weights with different learning rate, or increase epochs, ..)

+",23268,,,,,4/18/2019 5:47,how to benefit from previous training weights in training again to increase accuracy?,,1,0,,,,CC BY-SA 4.0 +11872,2,,11812,4/17/2019 19:59,,1,,"

If you're building a straight ""vanilla"" generative adversarial network, it's best to understand the network as a statistical engine: You are training the generator on samples of a statistical distribution. (And you're training the discriminator to distinguish between ""ground truth"" images, and images from that generator.)

+ +

Once you replace the input noise with another image... well. Strictly speaking, it is probably still a generative adversarial network, if you're still doing everything else the same. It is still a generator and a discriminator, acting in an adversarial fashion.

+ +

But you've radically altered the input distribution, so there is a good chance that you're no longer accomplishing what you want to accomplish unless you're being very careful and clever.

+ +

That said, there are GAN variants which do take images rather than noise as inputs. See the wonderful paper on CycleGANs by Zhu, et al, along with a substantial body of followup literature. And note that CycleGANs use not one, but two discriminators, so even here the discriminator is necessary.

+",15020,,15020,,4/18/2019 18:15,4/18/2019 18:15,,,,0,,,,CC BY-SA 4.0 +11873,2,,11870,4/17/2019 22:12,,2,,"

I have not worked on this but I think I can give you a theoretical perspective of using VAE's. Regression is a Supervised Learning task and is basically a mapping from Input to Output where the Neural Net will approximate the function $f(input) = output$.

+ +

VAEs, on the other hand, are good for finding how a latent variable affects the output. For example, if you have the task of training on a person's facial emotions, and if your latent space contains 2 variables $z_1$ and $z_2$, then you might find that varying $z_1$ changes the amount of smile on the face, while varying $z_2$ changes the amount of drooping of the eyes. I suggest you check this video from Stanford at ~44:00 to see how this actually happens, or check this blog. So VAEs might have been useful if your output contained more features which would vary according to variations in the latent variables, but a single scalar output can only tell you about the rate of effect of varying a latent variable.

+ +

But if your goal is only better regression, autoencoders are the better alternative, since they have an inherent de-noising ability, and sufficient training might help in de-noising the input and thus provide better results when the prediction is made on the basis of the latent variables.

+ +

An approach which I think is somewhat similar to your idea has been proposed by Kingma et al. for semi-supervised learning in this paper. The paper has a very poor description of the method, so I would suggest you check out this blog. They have used an additional classifier for the reconstruction of the original input and trained the classifier when labels are present.

+",,user9947,,user9947,4/19/2019 13:39,4/19/2019 13:39,,,,1,,,,CC BY-SA 4.0 +11874,2,,11866,4/18/2019 4:00,,2,,"

The additive attention method that the researchers are comparing against is a small feed-forward network with one hidden layer (it is not literally straight addition): the query and key are each multiplied by a weight matrix, summed, passed through a nonlinearity, and projected down to a scalar score. Dot-product attention, by contrast, needs no extra weight matrices or nonlinearity, and the scores for all query-key pairs can be computed in one go as $QK^T$, which maps directly onto highly optimized, easily parallelized matrix multiplication code.
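
A rough numpy sketch of the two score computations (shapes and weights are arbitrary) makes the difference visible:

import numpy as np

d_k = 64
Q = np.random.randn(10, d_k)            # 10 queries
K = np.random.randn(12, d_k)            # 12 keys

# Scaled dot-product scores: a single matrix multiplication.
scores_dot = Q @ K.T / np.sqrt(d_k)     # shape (10, 12)

# Additive (Bahdanau-style) scores: v^T tanh(W_q q + W_k k) for every query-key pair.
W_q, W_k, v = np.random.randn(d_k, d_k), np.random.randn(d_k, d_k), np.random.randn(d_k)
hidden = np.tanh((Q @ W_q)[:, None, :] + (K @ W_k)[None, :, :])   # shape (10, 12, d_k)
scores_add = hidden @ v                                           # shape (10, 12)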

+",16909,,2444,,1/6/2021 23:38,1/6/2021 23:38,,,,5,,,,CC BY-SA 4.0 +11875,2,,11871,4/18/2019 5:38,,2,,"

First, I assume you've already tuned your hyperparameters. Instead of re-training the network (using the weights that resulted from the previous training process), which takes more time, I would invest more in hyperparameter tuning of the available network.

+ +

Then, there are several methods and considerations:

+ +
    +
  • You can use the weights that resulted from your first network as the initialization of your next training process. But since the network will face the same data/problem, if you also use the ""initial"" values of your hyperparameters (e.g. a high learning rate), I'm afraid it will lead to overfitting, because essentially nothing changes in your setup.

  • +
  • If you previously didn't use an adaptive learning method for your training process (e.g. Adadelta, Adam), you can re-train your network with a smaller learning rate, so your model can find a better result.

  • +
  • Or, you can use the concept of the selffer network: keep the weights of some layers from the previous training process (you can freeze them or not) and randomly initialize the other layers. Then train the network using the ""initial"" values of the hyperparameters.

  • +
+ +

You can read more about the transfer learning concept (or selffer networks) to get the most appropriate method for your case. There is also a paper about ""incremental training on CNN"" that I think is similar to the selffer network but with some modifications.
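
For example, with Keras the second and third options above could look roughly like this (the file name and layer counts are placeholders):

from tensorflow import keras

model = keras.models.load_model('my_vgg_variant.h5')     # hypothetical path to your previously trained model
for layer in model.layers[:-4]:
    layer.trainable = False                               # freeze earlier layers, keep training the last few
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),   # much smaller learning rate
              loss='categorical_crossentropy',
              metrics=['accuracy'])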

+ +

Hope it helps

+",16565,,16565,,4/18/2019 5:47,4/18/2019 5:47,,,,1,,,,CC BY-SA 4.0 +11876,2,,11825,4/18/2019 6:39,,1,,"

If you only need the vector space as a way to obtain a similarity measure, you may want to consider a distance measure instead. Similarity and distance are inversely related: identical words have maximum similarity or zero distance, and as the similarity decreases, the distance increases.

+ +

For instance, the Wagner-Fischer algorithm computes the edit distance between two strings of characters. This edit distance takes into account insertions and deletions, as in your examples, but also substitutions (for example ""gray"" vs. ""grey"").

+ +

The article linked above includes pseudocode that should translate easily to actual code.
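
For instance, a direct Python translation of that pseudocode might look like this (an unoptimized sketch):

def edit_distance(a, b):
    # Wagner-Fischer dynamic programming: d[i][j] = distance between a[:i] and b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(a)][len(b)]

print(edit_distance('organization', 'organisation'))   # 1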

+",24014,,24014,,4/18/2019 14:17,4/18/2019 14:17,,,,4,,,,CC BY-SA 4.0 +11878,1,11879,,4/18/2019 8:16,,3,305,"

I'm working on implementing a Q-Learning algorithm for a 2 player board game.

+ +

+ +

I encountered what I think may be a problem. When it comes time to update the Q value with the Bellman equation (above), the last part states that for the maximum expected reward, one must find the highest q value in the new state reached, s', after making action a.

+ +

However, it seems like the I never have q values for state s'. I suspect s' can only be reached from P2 making a move. It may be impossible for this state to be reached as a result of an action from P1. Therefore, the board state s' is never evaluated by P2, thus its Q values are never being computed.

+ +

I will try to paint a picture of what I mean. Assume P1 is a random player, and P2 is the learning agent.

+ +
    +
  1. P1 makes a random move, resulting in state s.
  2. +
  3. P2 evaluates board s, finds the best action and takes it, resulting in state s'. In the process of updating the Q value for the pair (s,a), it finds maxQ'(s', a) = 0, since the state hasn't been encountered yet.
  4. +
  5. From s', P1 again makes a random move.
  6. +
+ +

As you can see, state s' is never encountered by P2, since it is a board state that appears only as a result of P2 making a move. Thus the last part of the equation will always result in 0 - current Q value.

+ +

Am I seeing this correctly? Does this affect the learning process? Any input would be appreciated.

+ +

Thanks.

+",24018,,,,,4/18/2019 9:26,Maximum Q value for new state in Q-Learning never exists,,1,0,,,,CC BY-SA 4.0 +11879,2,,11878,4/18/2019 9:26,,2,,"

Your problem is with how you have defined $s'$.

+ +

The next state for an agent is not the state that the agent's action immediately puts the environment into. It is the state when it next takes an action. For some, more passive, environments, these are the same things. But for many environments they are not. For instance, a robot that is navigating a maze may take an action to move forward. The next state does not happen immediately at the point that it starts to take the action (when it would still be in the same position), but at a later time, after the action has been processed by the environment for a while (and the robot is in a new position), and the robot is ready to take another action.

+ +

So in your 2-player game example using regular Q learning, the next state $s'$ for P2 is not the state immediately after P2's move, but the state after P1 has also played its move in reaction. From P2's perspective, P1 is part of the environment and the situation is no different to having a stochastic environment.

+ +

Once you take this perspective on what $s'$ is, then Q learning will work as normal.

+ +

However, you should note that optimal behaviour against a specific opponent - such as a random opponent - is not the same as optimal play in a game. There are other ways to apply Reinforcement Learning ideas to 2-player games. Some of them can use the same approach as above - e.g. train two agents, one for P1 and one for P2, with each treating the other as if it were part of the environment. Others use different ways of reversing the view of the agent so that it can play versus itself more directly - in those cases you can treat each player's immediate output as $s'$, but you need to modify the learning agent. A simple modification to Q learning is to alternate between taking $\text{max}_{a'}$ and $\text{min}_{a'}$ depending on whose turn you are evaluating (and assuming P1's goal is to maximise their score while P2's goal is to minimise P1's score - and by extension maximise their own score in any zero-sum game)
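
As a sketch of that last variant (legal_actions here is an assumed helper that returns the moves available in a state, and Q is a dictionary keyed by (state, action) pairs):

def q_update(Q, s, a, r, s_next, next_turn_is_opponent, legal_actions, alpha=0.1, gamma=1.0):
    next_values = [Q.get((s_next, a2), 0.0) for a2 in legal_actions(s_next)]
    if not next_values:                           # terminal position
        target = r
    elif next_turn_is_opponent:
        target = r + gamma * min(next_values)     # the opponent is assumed to minimise our return
    else:
        target = r + gamma * max(next_values)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))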

+",1847,,,,,4/18/2019 9:26,,,,9,,,,CC BY-SA 4.0 +11880,1,,,4/18/2019 11:59,,0,57,"

I am trying to train an ANN to control a 7 Degrees-Of-Freedom arm. It should reach a target avoiding a single obstacle. Given my modeling of the situation, my input layer is composed of 12 nodes:

+ +
    +
  • 5 nodes for the 5 joint states
  • +
  • 3 nodes for the cartesian coordinates of the target
  • +
  • 3 nodes for the cartesian coordinates of the obstacle
  • +
  • 1 node for the radius of the obstacle (it's a spherical object).
  • +
+ +

I have already tried training the ANN with DQN. I want to try neuroevolution (NEAT, in particular) and see how the results compare. I am using NEAT-python. As seen in this paper, this should be feasible.

+ +

However, I am having trouble choosing the best fitness function and also some other hyperparameters, namely the population size. (I am also puzzled by the extremely long training time for a single generation, but that's another story.)

+ +

So, the fitness function. I have tried to replicate what I have done with DQN. So, basically, my function evaluates a genome (so, an ANN) as follows (pseudocode):

+ +
counter = 0
+repeat for NUM_OF_EPISODES times:
+    generate a random target
+    generate an obstacle which lies about halfway from the end-effector to the target
+    repeat for TIMEOUT times:
+        use the ANN to decide the next_action and execute it
+        if OBSTACLE_REACHED or TARGET_REACHED, stop 
+    counter += 0.3 * relative_distance + 0.2 * relative_path + 0.5 * didntHitObstacle()
+fitness = counter / NUM_OF_EPISODES
+
+ +

So, in plain terms, for each ANN we execute a scenario NUM_OF_EPISODES times (how many times should be enough? 100 seems ok, but it gets really slow). In this scenario, we use the ANN to search for the target, and if we reach it or the obstacle, we stop. Then, for each one of these scenarios, we should ""rank"" how well the ANN performed (see the counter += ... part). But how should I do this? My idea (taken from the paper above) was to compute something like this:

+ +
0.3 * relative_distance + 0.2 * relative_path + 0.5 * didntHitObstacle()
+
+ +

So, basically, we see how much closer we are to the target compared to when we started, how ""short"" the path was (compared to the ideal straight line from the start point to the target), and whether or not we hit the obstacle.

+ +

Does this function make sense? My concern is mainly about how we deal with the obstacle: 50% of the fitness. Is it correct? I am asking this because I am receiving poor results.

+ +

Another problem that I have is population size. Of course, the bigger the better, but this takes a long time to train. How big is big enough, in your experience?

+",23527,,2444,,7/7/2019 20:47,7/7/2019 20:47,How do I choose an appropriate fitness function and hyper-parameters to train a 7-DOF arm?,,0,9,,,,CC BY-SA 4.0 +11882,2,,11833,4/18/2019 15:00,,1,,"

For cross-language word representation the trend now is:

+ + + +

Remember that you can also do the task in 2 steps: +Translate the words to a reference language (e.g english), then represent each one of them using any word representation model (in the reference language).

+ +

The 2-steps option is also good as specific-language word representation models are more accurate, and there are a bench of easy-to-use libraries for single-language translation (i.e py-translator) and representation (i.e Universal sentence encoder by Google).

+",23350,,,,,4/18/2019 15:00,,,,0,,,,CC BY-SA 4.0 +11883,1,,,4/18/2019 17:29,,2,542,"

I want to train an AI to detect the class (i.e. suit and rank) of playing cards. Playing cards from different decks may use slightly different shapes or colors to represent these attributes, and I want the system to work across many decks. I bought many different decks, scanned and labeled them. Next up would be to create training data with an augmentation library. I found two examples of how other people did that:

+
    +
  1. Image detection using YOLO algorithm and poker cards
  2. +
  3. Playing card detection with YOLO
  4. +
+

The problem is that I want my AI to be able to detect multiple cards of the same class in one picture. In the examples above, they put a label on the top left corner of a card. This makes sense since it is a very good indicator of what class the card is in. Unfortunately, every card has two of these labels.

+

I think their solution returns "detected" if any instance of a given label is found. But many cards with the same label could be in the same picture, and I am not sure if I can detect the quantity of the class with this solution. For example, if there are 2 Ace of Clubs in a picture, I would like my system to output "2", rather than "detected".

+

Do you think it is feasible to mark the whole card, prepare the data accordingly, and train an AI that detects the count as well?

+",24028,,16909,,1/13/2021 0:31,6/2/2023 4:07,How to detect multiple playing cards of the same class with a neural network?,,1,0,,,,CC BY-SA 4.0 +11884,1,11887,,4/18/2019 20:24,,2,27,"

I quite often find projects using pre-trained model and using them as a starting point for their new model that learns something novel from thier dataset or on-live learning process - e.g. using a webcam or live audio.

+ +

Is this quite usual and recommended to speed up training a model? For example using a model trained on ImageNet as a first layer to your model that will categorise faces specifically.

+",11893,,,,,4/18/2019 22:57,Is it mostly the case to train with available models,,1,0,,,,CC BY-SA 4.0 +11885,1,,,4/18/2019 20:58,,3,2325,"

I've been trying to implement policy improvement for Q(s,a) function as per Sutton&Barto reinforcement learning book. The original algorithm with first-visit MonteCarlo is pictured below.

+ +

+ +

I remember the book mentioning earlier that the every-visit variation simply omits the first-visit check (""unless the pair blah, blah...""), but otherwise the algorithms should be the same (???)

+ +

The initial policy in the very first iteration (first episode) should be an equiprobable random walk. In general terms, if an action takes you outside the border of the gridworld (4x4), then you simply bounce back to where you started from, but the reward will still have been given, and the action will have been taken.

+ +

I have verified that my code sometimes gets stuck in a forever-loop in the episode generation portion, early on (when the iteration count in the outer loop is still small). I thought I followed the pseudocode rather closely, so it's really annoying that the code sometimes gets stuck like this.

+ +

The reason must be that my code updates from the equiprobable random-walk policy into a deterministic policy in a wrong way, such that it can create forever-loops in the episode generation after the first episode has run (it must have run the entire first episode with the equiprobable random walk). The picture below shows that if you get into any state in the marked box, you cannot get out from there and get stuck in episode generation... +

+ +

If the random generator seed is lucky, then it can usually get ""over the hump"" and proceed to make the required number of iterations in the outer loop, and output an optimal policy for the gridworld.

+ +

Here is the Python code (from my experience, I ran it in the debugger and had to restart it a couple of times, but usually it shows quite quickly that it gets stuck in the episode generation and cannot proceed with the iterations in the outer loop).

+ +
import numpy as np
+import numpy.linalg as LA
+import random
+from datetime import datetime
+
+random.seed(datetime.now())
+rows_count = 4
+columns_count = 4
+
+def isTerminal(r, c):  # helper function to check if terminal state or regular state
+    global rows_count, columns_count
+    if r == 0 and c == 0:  # im a bit too lazy to check otherwise the iteration boundaries
+        return True  # so that this helper function is a quick way to exclude computations
+    if r == rows_count - 1 and c == columns_count - 1:
+        return True
+    return False
+
+
+maxiters = 100000
+reward = -1
+actions = [""U"", ""R"", ""D"", ""L""]
+V = np.zeros((rows_count, columns_count))
+returnsDict={}
+QDict={}
+actDict={0:""U"",1:""R"",2:""D"",3:""L""}
+policies = np.array([ ['T','A','A','A'],
+                     ['A','A','A','A'],
+                     ['A','A','A','A'],
+                     ['A','A','A','T'] ])
+
+
+
+
+
+""""""returnsDict, for each state-action pair, maintain (mean,visitedCount)""""""
+for r in range(rows_count):
+    for c in range(columns_count):
+        if not isTerminal(r, c):
+            for act in actions:
+                returnsDict[ ((r, c), act) ] = [0, 0] ## Maintain Mean, and VisitedCount for each state-action pair
+
+
+
+"""""" Qfunc, we maintain the action-value for each state-action pair""""""
+for r in range(rows_count):
+    for c in range(columns_count):
+        if not isTerminal(r, c):
+            for act in actions:
+                QDict[ ((r,c), act) ] = -9999  ## Maintain Q function value for each state-action pair
+
+
+
+
+
+
+def getValue(row, col):  # helper func, get state value
+    global V
+    if row == -1:
+        row = 0  # if you bump into wall, you bounce back
+    elif row == 4:
+        row = 3
+    if col == -1:
+        col = 0
+    elif col == 4:
+        col = 3
+    return V[row, col]
+
+def getRandomStartState():
+    illegalState = True
+
+    while illegalState:
+        r = random.randint(0, 3)
+        c = random.randint(0, 3)
+        if (r == 0 and c == 0) or (r == 3 and c == 3):
+            illegalState = True
+        else:
+            illegalState = False
+    return r, c
+
+def getState(row, col):
+    if row == -1:
+        row = 0  # helper func for the exercise:1
+    elif row == 4:
+        row = 3
+    if col == -1:
+        col = 0
+    elif col == 4:
+        col = 3
+    return row, col
+
+
+
+def getRandomAction():
+    global actDict
+    return actDict[random.randint(0, 3)]
+
+
+def getMeanFromReturns(oldMean, n, curVal):
+    newMean = 0
+    if n == 0:
+        raise Exception('Exception, incrementalMeanFunc, n should not be less than 1')
+    elif n == 1:
+        return curVal
+    elif n >= 2:
+        newMean = (float) ( oldMean + (1.0 / n) * (curVal - oldMean) )
+        return newMean
+
+
+""""""get the best action 
+returns string action
+parameter is state tuple (r,c)""""""
+def getArgmaxActQ(S_t):
+    global QDict
+    qvalList = []
+    saList = []
+
+    """"""for example get together
+    s1a1, s1a2, s1a3, s1a4
+    find which is the maxValue, and get the action which caused it""""""
+    sa1 = (S_t, ""U"")
+    sa2 = (S_t, ""R"")
+    sa3 = (S_t, ""D"")
+    sa4 = (S_t, ""L"")
+    saList.append(sa1)
+    saList.append(sa2)
+    saList.append(sa3)
+    saList.append(sa4)
+
+    q1 = QDict[sa1]
+    q2 = QDict[sa2]
+    q3 = QDict[sa3]
+    q4 = QDict[sa4]
+    qvalList.append(q1)
+    qvalList.append(q2)
+    qvalList.append(q3)
+    qvalList.append(q4)
+
+    maxQ = max(qvalList)
+    ind_maxQ = qvalList.index(maxQ)  # gets the maxQ value and the index which caused it
+
+    """"""when we have index of maxQval, then we know which sa-pair
+    gave that maxQval => we can access that action from the correct sa-pair""""""
+    argmaxAct = saList[ind_maxQ][1]
+    return argmaxAct
+
+""""""QEpisode generation func
+returns episodeList
+parameters are starting state, starting action""""""
+def QEpisode(r, c, act):
+    global reward
+    global policies
+
+    """"""NOTE! r,c will both be local variables inside this func
+    they denote the nextState (s') in this func""""""
+    stateWasTerm = False
+    stepsTaken = 0
+    curR = r
+    curC = c
+    episodeList = [ ((r, c), act, reward) ]  # add the starting (s,a) immediately
+
+    if act == ""U"":  ##up
+        r -= 1
+    elif act == ""R"":  ##right
+        c += 1
+    elif act == ""D"":  ## down
+        r += 1
+    else:  ##left
+        c -= 1
+    stepsTaken += 1
+    r, c = getState(r, c)  ## check status of the newState (s')
+    stateWasTerm = isTerminal(r, c)  ## if status was terminal stop iteration, else keep going into loop
+
+    if not stateWasTerm:
+        curR = r
+        curC = c
+
+    while not stateWasTerm:
+        if policies[curR, curC] == ""A"":
+            act = getRandomAction()  ## """"""get the random action from policy""""""
+        else:
+            act = policies[curR, curC]  ## """"""get the deterministic action from policy""""""
+
+        if act == ""U"":  ## up
+            r -= 1
+        elif act == ""R"":  ## right
+            c += 1
+        elif act == ""D"":  ## down
+            r += 1
+        else:  ## left
+            c -= 1
+        stepsTaken += 1
+
+        r, c = getState(r, c)
+        stateWasTerm = isTerminal(r, c)
+        episodeList.append( ((curR, curC), act, reward) )
+        if not stateWasTerm:
+            curR = r
+            curC = c
+
+    return episodeList
+
+
+
+
+print(""montecarlo program starting...\n"")
+"""""" MOnte Carlo Q-function, exploring starts, every-visit, estimating Pi ~~ Pi* """"""
+for iteration in range(1, maxiters+1): ## for all episodes
+
+    print(""curIter == "", iteration)
+    print(""\n"")
+    if iteration % 20 == 0: ## get random seed periodically to improve randomness performance
+        random.seed(datetime.now())
+
+
+
+    for r in range(4):
+        for c in range(4):
+            if not isTerminal(r,c):
+                startR = r
+                startC = c
+                startAct = getRandomAction()
+
+
+   ## startR, startC = getRandomStartState() ## get random starting-state, and starting action equiprobably
+  ##  startAct = getRandomAction()
+                sequence = QEpisode(startR, startC, startAct)  ## generate Q-sequence following policy Pi, until terminal-state (excluding terminal)
+                G = 0
+
+                for t in reversed(range(len(sequence))): ## iterate through the timesteps in reversed order
+                    S_t = sequence[t][0] ## use temp variables as helpers
+                    A_t = sequence[t][1]
+                    R_t = sequence[t][2]
+                    G += R_t ## increment G with reward, NOTE! the gamma discount factor == 1.0
+                    visitedCount = returnsDict[S_t, A_t][1] ## use temp visitedcount
+                    visitedCount += 1
+
+                    if visitedCount == 1: ## special case in iterative mean algorithm, the first visit to any state-action pair
+                        curMean = 9999
+                        curMean = getMeanFromReturns(curMean, visitedCount, G)
+                        returnsDict[S_t, A_t][0] = curMean ## update mean
+                        returnsDict[S_t, A_t][1] = visitedCount ## update visitedcount
+                    else:
+                        curMean = returnsDict[S_t, A_t][0] ## get temp mean from returnsDict
+                        curMean = getMeanFromReturns(curMean, visitedCount, G) ## get the new temp mean iteratively
+                        returnsDict[S_t, A_t][1] = visitedCount ## update visitedcount
+                        returnsDict[S_t, A_t][0] = curMean ## update mean
+
+
+                    QDict[S_t, A_t] = returnsDict[S_t, A_t][0] ## update the Qfunction with the new mean value
+                    tempR = S_t[0] ## temp variables simply to disassemble the tuple into row,col
+                    tempC = S_t[1]
+                    policies[tempR, tempC] = getArgmaxActQ(S_t) ## update policy based on argmax_a[Q(S_t)]
+
+
+print(""optimal policy with Monte-Carlo, every visit was \n"")
+print(""\n"")
+print(policies)
+
+ +

Here is the updated ""hacky fix"" code that seems to get the algorithm ""over the hump"" without getting stuck in a forever-loop with the deterministic policy. My teacher recommended that you don't need to update the policy at every step in this kind of Monte Carlo, so you could make the policy updates at periodic intervals, e.g. using Python's modulo operator on the iteration count. +Also, I remembered that the Sutton & Barto book says that all state-action pairs must be visited a very large number of times for the exploring-starts pre-condition of the algorithm to be fulfilled.

+ +

So, I then decided to force the algorithm to run at least once from every state-action pair, starting from each pair deterministically one-by-one (for each episode, actually). These runs are still made with the old random-walk policy in this early exploration phase, where we are gathering data into returnsDict and QDict, but not yet improving the deterministic policy.

+ +
import numpy as np
+import numpy.linalg as LA
+import random
+from datetime import datetime
+
+random.seed(datetime.now())
+rows_count = 4
+columns_count = 4
+
+def isTerminal(r, c):  # helper function to check if terminal state or regular state
+    global rows_count, columns_count
+    if r == 0 and c == 0:  # im a bit too lazy to check otherwise the iteration boundaries
+        return True  # so that this helper function is a quick way to exclude computations
+    if r == rows_count - 1 and c == columns_count - 1:
+        return True
+    return False
+
+
+
+""""""NOTE about maxiters!!!
+the Monte-Carlo every visit algorithm implements total amount of iterations with formula
+totalIters = maxiters * nonTerminalStates * possibleActions
+totalIters = 5000 * 14 * 4
+totalIters = 280000
+
+in other words, there will be 5k iterations per each state-action pair
+in other words there will be an early exploration phase where policy willnot be updated,
+but the gridworld will be explored with randomwalk policy, gathering Qfunc information, 
+and returnDict information.
+
+in early phase there will be about 27 iterations for each state-action pair during,
+non-policy-updating exploration 
+(maxiters * explorationFactor) / (stateACtionPairs) = 7500 *0.2 /56
+
+after that early exploring with randomwalk,
+then we act greedily w.r.t. the Q-function, 
+for the rest of the iterations to get the optimal deterministic policy
+""""""
+maxiters = 7500
+explorationFactor = 0.2 ## explore that percentage of the first maxiters rounds, try to increase it, if you get stuck in foreverloop, in QEpisode function
+reward = -1
+actions = [""U"", ""R"", ""D"", ""L""]
+V = np.zeros((rows_count, columns_count))
+returnsDict={}
+QDict={}
+actDict={0:""U"",1:""R"",2:""D"",3:""L""}
+policies = np.array([ ['T','A','A','A'],
+                     ['A','A','A','A'],
+                     ['A','A','A','A'],
+                     ['A','A','A','T'] ])
+
+
+
+
+
+""""""returnsDict, for each state-action pair, maintain (mean,visitedCount)""""""
+for r in range(rows_count):
+    for c in range(columns_count):
+        if not isTerminal(r, c):
+            for act in actions:
+                returnsDict[ ((r, c), act) ] = [0, 0] ## Maintain Mean, and VisitedCount for each state-action pair
+
+
+
+"""""" Qfunc, we maintain the action-value for each state-action pair""""""
+for r in range(rows_count):
+    for c in range(columns_count):
+        if not isTerminal(r, c):
+            for act in actions:
+                QDict[ ((r,c), act) ] = -9999  ## Maintain Q function value for each state-action pair
+
+
+
+
+
+
+def getValue(row, col):  # helper func, get state value
+    global V
+    if row == -1:
+        row = 0  # if you bump into wall, you bounce back
+    elif row == 4:
+        row = 3
+    if col == -1:
+        col = 0
+    elif col == 4:
+        col = 3
+    return V[row, col]
+
+def getRandomStartState():
+    illegalState = True
+
+    while illegalState:
+        r = random.randint(0, 3)
+        c = random.randint(0, 3)
+        if (r == 0 and c == 0) or (r == 3 and c == 3):
+            illegalState = True
+        else:
+            illegalState = False
+    return r, c
+
+def getState(row, col):
+    if row == -1:
+        row = 0  # helper func for the exercise:1
+    elif row == 4:
+        row = 3
+    if col == -1:
+        col = 0
+    elif col == 4:
+        col = 3
+    return row, col
+
+
+
+def getRandomAction():
+    global actDict
+    return actDict[random.randint(0, 3)]
+
+
+def getMeanFromReturns(oldMean, n, curVal):
+    newMean = 0
+    if n == 0:
+        raise Exception('Exception, incrementalMeanFunc, n should not be less than 1\n')
+    elif n == 1:
+        return curVal
+    elif n >= 2:
+        newMean = (float) ( oldMean + (1.0 / n) * (curVal - oldMean) )
+        return newMean
+
+
+""""""get the best action 
+returns string action
+parameter is state tuple (r,c)""""""
+def getArgmaxActQ(S_t):
+    global QDict
+    qvalList = []
+    saList = []
+
+    """"""for example get together
+    s1a1, s1a2, s1a3, s1a4
+    find which is the maxValue, and get the action which caused it""""""
+    sa1 = (S_t, ""U"")
+    sa2 = (S_t, ""R"")
+    sa3 = (S_t, ""D"")
+    sa4 = (S_t, ""L"")
+    saList.append(sa1)
+    saList.append(sa2)
+    saList.append(sa3)
+    saList.append(sa4)
+
+    q1 = QDict[sa1]
+    q2 = QDict[sa2]
+    q3 = QDict[sa3]
+    q4 = QDict[sa4]
+    qvalList.append(q1)
+    qvalList.append(q2)
+    qvalList.append(q3)
+    qvalList.append(q4)
+
+    maxQ = max(qvalList)
+    ind_maxQ = qvalList.index(maxQ)  # gets the maxQ value and the index which caused it
+
+    """"""when we have index of maxQval, then we know which sa-pair
+    gave that maxQval => we can access that action from the correct sa-pair""""""
+    argmaxAct = saList[ind_maxQ][1]
+    return argmaxAct
+
+
+
+
+""""""QEpisode generation func
+returns episodeList
+parameters are starting state, starting action""""""
+def QEpisode(r, c, act):
+
+    """"""ideally, we should not get stuck in the gridworld...but,
+    but sometiems when policy transitions from the first episode's policy == randomwalk,
+    then, on second episode sometimes we get stuck in foreverloop in episode generation
+    usually the only choice then seems to restart the entire policy into randomwalk ??? """"""
+
+    global reward
+    global policies
+
+    """"""NOTE! r,c will both be local variables inside this func
+    they denote the nextState (s') in this func""""""
+    stepsTaken = 0
+    curR = r
+    curC = c
+    episodeList = [ ((r, c), act, reward) ]  # add the starting (s,a) immediately
+
+    if act == ""U"":  ##up
+        r -= 1
+    elif act == ""R"":  ##right
+        c += 1
+    elif act == ""D"":  ## down
+        r += 1
+    elif act == ""L"":  ##left
+        c -= 1
+    stepsTaken += 1
+    r, c = getState(r, c)  ## check status of the newState (s')
+    stateWasTerm = isTerminal(r, c)  ## if status was terminal stop iteration, else keep going into loop
+
+    if not stateWasTerm:
+        curR = r
+        curC = c
+
+    while not stateWasTerm:
+        if policies[curR, curC] == ""A"":
+            act = getRandomAction()  ## """"""get the random action from policy""""""
+        else:
+            act = policies[curR, curC]  ## """"""get the deterministic action from policy""""""
+
+        if act == ""U"":  ## up
+            r -= 1
+        elif act == ""R"":  ## right
+            c += 1
+        elif act == ""D"":  ## down
+            r += 1
+        else:  ## left
+            c -= 1
+        stepsTaken += 1
+
+        r, c = getState(r, c)
+        stateWasTerm = isTerminal(r, c)
+        episodeList.append( ((curR, curC), act, reward) )
+        if not stateWasTerm:
+            curR = r
+            curC = c
+        if stepsTaken >= 100000:
+            raise Exception(""Exception raised, because program got stuck in MC Qepisode generation...\n"")
+
+
+    return episodeList
+
+
+
+
+print(""montecarlo program starting...\n"")
+"""""" MOnte Carlo Q-function, exploring starts, every-visit, estimating Pi ~~ Pi* """"""
+
+""""""It appears that the Qfunction apparently can be unreliable in the early episodes rounds, so we can avoid getting 
+stuck in foreverloop because of unreliable early episodes, BUT...
+
+we gotta delay updating the policy, until we have explored enough for a little bit...
+so our Qfunction has reliable info inside of it, to base the decision on, later...""""""
+Q_function_is_reliable = False ## variable shows if we are currently updating the policy, or just improving Q-function and exploring
+
+
+for iteration in range(1, maxiters+1): ## for all episodes
+
+    print(""curIter == "", iteration, "", QfunctionIsReliable == "", Q_function_is_reliable )
+    print(""\n"")
+    if iteration % 20 == 0: ## get random seed periodically to improve randomness performance
+        random.seed(datetime.now())
+
+    for r in range(4):  ## for every non-terminal-state
+        for c in range(4):
+            if not isTerminal(r,c):
+                startR = r
+                startC = c
+                for act in actions: ## for every action possible
+                    startAct = act
+                    sequence = QEpisode(startR, startC, startAct)  ## generate Q-sequence following policy Pi, until terminal-state (excluding terminal)
+                    G = 0
+
+                    for t in reversed(range(len(sequence))): ## iterate through the timesteps in reversed order
+                        S_t = sequence[t][0] ## use temp variables as helpers
+                        A_t = sequence[t][1]
+                        R_t = sequence[t][2]
+                        G += R_t ## increment G with reward, gamma discount factor is zero
+                        visitedCount = returnsDict[S_t, A_t][1]
+                        visitedCount += 1
+
+                       ## if (S_t, A_t, -1) not in sequence[:t]: ## This is how you COULD have done the first-visit MC, but we do every-visit now...
+                        if visitedCount == 1: ## special case in iterative mean algorithm, the first visit to any state-action pair
+                            curMean = 9999
+                            curMean = getMeanFromReturns(curMean, visitedCount, G)
+                            returnsDict[S_t, A_t][0] = curMean ## update mean
+                            returnsDict[S_t, A_t][1] = visitedCount ## update visitedcount
+                        else:
+                            curMean = returnsDict[S_t, A_t][0] ## get temp mean from returnsDict
+                            curMean = getMeanFromReturns(curMean, visitedCount, G) ## get the new temp mean iteratively
+                            returnsDict[S_t, A_t][1] = visitedCount ## update visitedcount
+                            returnsDict[S_t, A_t][0] = curMean ## update mean
+
+
+                        QDict[S_t, A_t] = returnsDict[S_t, A_t][0] ## update the Qfunction with the new mean value
+                        tempR = S_t[0] ## temp variables simply to disassemble the tuple into row,col
+                        tempC = S_t[1]
+
+                        if iteration >= round(maxiters * explorationFactor): ## ONLY START UPDATING POLICY when we have reliable estimates for Qfunction, that is when iteration > maxiter/10
+                            Q_function_is_reliable = True
+                            policies[tempR, tempC] = getArgmaxActQ(S_t) ## update policy based on argmax_a[Q(S_t)]
+
+
+print(""optimal policy with Monte-Carlo, every visit was \n"")
+print(""\n"")
+print(policies)
+
+",23915,,23915,,4/19/2019 9:42,6/3/2020 16:38,"Monte-Carlo, every-visit gridworld, exploring starts, python code gets stuck in foreverloop in episode generation",,1,7,,,,CC BY-SA 4.0 +11887,2,,11884,4/18/2019 22:57,,1,,"

Yes, it is recommended to start with a pre-trained model if you don't have high-end hardware. You can use a pre-trained model for fine-tuning (its trained weights become your initial weights) or use it as a feature extractor (you remove the last few layers and then train new layers on top).

+

Why do we need a pre-trained network?

+
    +
  • Because training a good deep model takes a lot of time and needs a lot of hardware. Some good models, like DenseNet, ResNet, and even VGG16, need days of training. You can read in the original VGG paper:

    +
    +

    On a system equipped with four NVIDIA Titan Black GPUs, training a single net took 2–3 weeks depending on the architecture.

    +
    +
  • +
  • Sometimes our problem is similar to the dataset a pre-trained model was trained on. For example, if we need to classify images of flower types, even if our classes are different, we can take the first convolutional layers of a pre-trained model and use them as a feature extractor (a small Keras sketch of this is given at the end of this answer).

    +
  • +
+

You can read more about transfer learning from this paper or this page.
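
As referenced in the second bullet, here is a minimal Keras sketch of the feature-extractor idea (the input shape, number of classes and dense-layer size are placeholder assumptions, and tensorflow.keras is assumed; with standalone Keras drop the tensorflow. prefix):

# Sketch: frozen VGG16 convolutional base used as a feature extractor.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained weights fixed

model = Sequential([
    base,
    Flatten(),
    Dense(256, activation='relu'),
    Dense(5, activation='softmax')  # e.g. 5 flower classes (assumed)
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])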

+",16565,,-1,,6/17/2020 9:57,4/18/2019 22:57,,,,2,,,,CC BY-SA 4.0 +11888,1,13162,,4/19/2019 0:22,,2,665,"

In Appendix A of the paper ""Near-Optimal Representation Learning for Hierarchical Reinforcement Learning"", the authors express the $\gamma$-discounted state visitation frequency $d$ of policy $\pi$ as

+ +

$$ +d=(1-\gamma)A_\pi(I-\gamma^cP_\pi^c)^{-1}\mu\tag 1 +$$

+ +

I've simplified the notation for easier reading, hoping it does not introduce any errors. In the above definition, $P_\pi^c$ is the $c$-step transition matrix under the policy $\pi$, i.e., $P_{\pi}^c=P_\pi(s_{c}|s_0)$, $\mu$ is a Dirac $\delta$ distribution centered at the start state $s_0$, and +$$ +A_\pi=I+\sum_{k=1}^{c-1}\gamma^kP_\pi^k\tag 2 +$$ +They further give the every-$c$-steps $\gamma$-discounted state frequency of $\pi$ as +$$ +d^c_\pi=(1-\gamma^c)(I-\gamma^cP_\pi^c)^{-1}\mu\tag 3 +$$ +To the best of my knowledge, $A_\pi$ seems to be the unnormalized $\gamma$-discounted state frequency, but I cannot really make sense of the rest. +I'm hoping that someone can shed some light on these definitions.

+ +

Update

+ +

Thanks to @Philip Raeisghasem for pointing out the CPO paper. Here's what I've gotten from it. +Applying the (partial) sum of the geometric series to Eq.$(2)$, we have +$$ +A={(I-\gamma^cP_\pi^c)(I-\gamma P_\pi)^{-1}}\tag4 +$$ +Plugging Eq.$(4)$ back into Eq.$(1)$, we get the same result as Eq.$(18)$ in the CPO paper: +$$ +d=(1-\gamma)(I-\gamma P_\pi)^{-1}\mu\tag 5 +$$ +where $(1-\gamma)$ normalizes all the weights introduced by $\gamma$ so that they sum to one. However, I'm still confused. Here are the questions I have

+ +
    +
  1. Eq.$(5)$ indicates Eq.$(1)$ is the state frequency in the infinite horizon. But I do not understand why we have it in the hierarchical policy. To my best knowledge, policies here are low-level, which means they are only valid in a short horizon ($c$ steps, for example). Computing state frequency in the infinite horizon here seems confusing.
  2. +
  3. What should I make of $d_\pi^c$ defined in Eq.$(3)$, originally from Eqs.$(26)$ and $(27)$ in the paper? The authors define them as every-$c$-steps $\gamma$-discounted state frequencies of policy $\pi$. But I do not see why it is the case. To me, they are more like the consequence of Eq.$(30)$ in the paper.
  4. +
+ +

Sorry if anyone feels that this update makes this question too broad. This is kept since I'm not so sure whether I can get a satisfactory answer without these questions. +Any partial answer will be sincerely appreciated. Thanks in advance.

+",8689,,8689,,7/2/2019 3:06,7/11/2019 1:59,Intuition behind $\gamma$-discounted state frequency,,1,4,,,,CC BY-SA 4.0 +11889,1,,,4/19/2019 4:37,,2,167,"

I have looked at the documentation for the NEAT Python API found here, where it's written

+
+

The error for each genome is $1-\sum_i(e_i-a_i)^2$

+
+

I have not yet learned calculus, so I can't understand this formula. So, can someone please explain what the calculation means?

+",24036,,2444,,1/14/2021 16:50,1/14/2021 17:43,What does the formula $1-\sum_i(e_i-a_i)^2$ mean in this NEAT Python API?,,3,0,,,,CC BY-SA 4.0 +11890,2,,11885,4/19/2019 8:55,,3,,"

Your implementation of Monte Carlo Exploring Starts algorithm appears to be working as designed. This is a problem that can occur with some deterministic policies in the gridworld environment.

+ +

It is possible for your policy improvement step to generate such a policy, and there is no recovery from this built into the algorithm. First visit and every visit variants will converge differently to the true action values, however neither offers an improvement here. It is most likely that your loops are occurring via state/action pairs that did not occur in the first episode, so the value estimates are default 0, which then looks like the best choice when creating the deterministic policy*.

+ +

In Sutton & Barto to demonstrate Monte Carlo ES, the authors choose an environment where such loops are impossible (a simplified Blackjack game). They then quickly move on to removing the need for exploring starts by using $\epsilon$-greedy policies. So this issue is not covered in detail, although there are assertions in a couple of places that it must be possible to complete episodes.

+ +

To resolve this whilst still using Monte Carlo ES, you will need to alter the environment so that such loops are not possible. The simplest change is to terminate the episode if it gets too long. This is hacky, because done simply on the gridworld it violates the Markov property (because now the time step should technically be part of the state if you want to predict value). However, it will get you out of the immediate problem, and provided you set the termination point high enough - e.g. 100 steps for your small gridworld - the agent should still discover the optimal policy and associated action values.
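
As a sketch of that change (assuming an episode generator shaped roughly like the QEpisode function in the question, with an assumed step helper for the environment transition), the cap could look like this:

MAX_STEPS = 100  # generous upper bound for a small gridworld

def generate_episode(startR, startC, startAct):
    episode = []
    r, c, act = startR, startC, startAct
    steps = 0
    while not isTerminal(r, c) and steps < MAX_STEPS:
        nextR, nextC, reward = step(r, c, act)  # environment transition (assumed helper)
        episode.append(((r, c), act, reward))
        r, c = nextR, nextC
        act = policies[r, c]                    # continue following the current policy
        steps += 1
    return episode                              # always finite, even under a looping policy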

+ +

This ""timeout"" patch is also used by many OpenAI Gym environments, because although most other algorithms can find their way out of infinite loops like this, they can still suffer slow learning from over-long episodes as a result.

+ +
+ +

* This leads to a possible fix of initialising your Q values pessimistically - e.g. with -50 starting value. That should help the first deterministic policy join up to the terminal state - although if you make it strictly deterministic and resolve value ties deterministically too, then even this may not be enough. You might want to give that a try to verify what I am saying here, although I would recommend the timeout hack instead, as that is a more general solution. Pessimistic start values are bad for exploration when using other algorithms.

+",1847,,1847,,4/19/2019 9:13,4/19/2019 9:13,,,,3,,,,CC BY-SA 4.0 +11892,5,,,4/19/2019 15:13,,0,,"

For more details, see e.g. https://en.wikipedia.org/wiki/AIXI.

+",2444,,2444,,4/19/2019 19:21,4/19/2019 19:21,,,,0,,,,CC BY-SA 4.0 +11893,4,,,4/19/2019 15:13,,0,,"For questions related to AIXI, which is a theoretical mathematical formalism for artificial general intelligence.",2444,,2444,,4/19/2019 19:21,4/19/2019 19:21,,,,0,,,,CC BY-SA 4.0 +11894,1,,,4/19/2019 16:47,,1,48,"

I am trying to classify CIFAR10. The CNN that I generated overfits when the accuracy reaches ~77%. The code and the plot are given below. I tried Dropout, Batch Normalization and L2 regularization, but the accuracy does not go beyond ~77%.

+ +

How can I identify the areas to be corrected to reduce overfitting?

+ +
# Imports assumed (with standalone Keras, use keras. instead of tensorflow.keras.)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping

convolutional_model = Sequential()
+
+# 32
+convolutional_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3), kernel_regularizer=regularizers.l2(.0002)))
+convolutional_model.add(Conv2D(64, (3, 3), activation='relu', kernel_regularizer=regularizers.l2(.0002)))
+convolutional_model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
+
+# 64
+convolutional_model.add(Conv2D(64, (3, 3), activation='relu', kernel_regularizer=regularizers.l2(.0002)))
+convolutional_model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l2(.0002)))
+convolutional_model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
+
+convolutional_model.add(Flatten())
+convolutional_model.add(Dropout(0.5))
+
+convolutional_model.add(Dense(128, activation='relu'))
+convolutional_model.add(Dense(10, activation='softmax'))
+
+print(convolutional_model.summary())
+convolutional_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+es = EarlyStopping(monitor='val_loss', mode='min', verbose=2, patience=8)
+history = convolutional_model.fit(X_Train_Part, Y_Train_Part, epochs=200, verbose=2,validation_data=(X_Train_Validate, Y_Train_Validate), callbacks=[es])
+
+scores = convolutional_model.evaluate(X_Test, Y_Test, verbose=2)
+
+ +

+",23734,,23734,,4/20/2019 1:38,4/20/2019 10:01,How to identify the areas to reduce over fitting?,,1,3,,,,CC BY-SA 4.0 +11897,5,,,4/19/2019 18:42,,0,,,-1,,-1,,4/19/2019 18:42,4/19/2019 18:42,,,,0,,,,CC BY-SA 4.0 +11898,4,,,4/19/2019 18:42,,0,,For questions related to Bayes' theorem (or rule) used in the AI context.,2444,,2444,,4/19/2019 19:21,4/19/2019 19:21,,,,0,,,,CC BY-SA 4.0 +11899,2,,11889,4/19/2019 23:33,,0,,"

The $\sum$ means that they take a sum of the squared differences between each pair of expected/predicted values ($e_i$) and actual values ($a_i$).

+

That gives them an error metric of how far off they are from their desired result. The goal is generally to optimize the algorithms against such an error function, in this case, to get it as close to one as possible.

+",24055,,2444,,1/14/2021 16:49,1/14/2021 16:49,,,,0,,,,CC BY-SA 4.0 +11900,1,11947,,4/20/2019 1:13,,3,1387,"

In the BERT paper, section 4.2 covers the SQuAD training.

+ +

From my understanding, there are two extra parameter vectors trained, S (for start) and E (for end), each with the same dimension as the hidden size, i.e. the same dimension as the contextualized embeddings in BERT. For the start, a softmax is taken over the products of S with each of the final contextualized embeddings to get a score for the correct start position, and the same thing is done with E for the correct end position.

+ +

I get up to this part, but I am having trouble figuring out how they did the labeling and the final loss calculation, which is described in this paragraph:

+ +
+

and the maximum scoring span is used as the prediction. The training objective is the loglikelihood of the correct start and end positions.

+
+ +

What do they mean by ""maximum scoring span is used as the prediction""?

+ +

Furthermore, how does that play into

+ +
+

The training objective is the loglikelihood of the correct start and end positions

+
+ +

From this source: https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/

+ +

It says the log-likelihood is only applied to the correct classes. So we only calculate the softmax term for the correct positions, not for any of the incorrect positions.

+ +

If this interpretation is correct, then the loss will be

+ +
Loss = -Log( exp(S*T_predictedStart) / Sum_i exp(S*T_i) )
       -Log( exp(E*T_predictedEnd)   / Sum_i exp(E*T_i) )
+
+",18358,,2444,,11/1/2019 2:48,11/1/2019 2:48,Understanding how the loss was calculated for the SQuAD task in BERT paper,,1,0,,,,CC BY-SA 4.0 +11901,2,,10508,4/20/2019 6:07,,0,,"

Automatic transcription is a system that converts phonemes to graphemes automatically. It is more like syllable recognition, and it is used to build speech recognition for another language on top of a stable, existing speech recognition system. +See this picture for a better understanding: see this picture

+ +

One thing you must understand: sound is different from meaning, and a computer needs tools to understand the sound, whether it is speech, music, or another form of sound. All speech sounds are already mapped in the International Phonetic Alphabet (IPA).

+ +

And with a little computation to combine the graphemes (using a dictionary), the result is understanding the speech of a specific language.

+ +

Example: you already know the sound of ""spoon"" or 'key'; the graphemes of ""spoon"" split into 's' and 'p-o-on'.

+ +

In other languages, like Indonesian, developers use this grapheme identification system (transcription system) to build speech recognition, for example for the word 'meskipun'.

+ +

In graphemes, the word 'meskipun' is: 'mehs-key-poon'.

+ +

With a simple computation, we can make the computer understand the word 'meskipun'.

+ +

We only have to say: if ""mehs + key + poon"" shows up, the word is 'meskipun'.

+ +

meskipun (= although, in English)
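
As a toy sketch of that lookup idea (purely illustrative):

# Toy sketch: mapping a recognized grapheme/syllable sequence to a word.
lexicon = {
    ('mehs', 'key', 'poon'): 'meskipun',
    ('s', 'poon'): 'spoon',
}

recognized = ('mehs', 'key', 'poon')          # assumed output of the syllable recognizer
print(lexicon.get(recognized, '<unknown>'))   # -> meskipun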

+ +

The big problem in building speech recognition with an automated transcription system is that the IPA mapping is not 100% the same for every language.

+ +

So developers should use several 'transfer learning' language databases to make their speech recognition more accurate, unless they decide to build it from scratch.

+ +

Automated speech recognition is an end-to-end speech recognition system set up to understand a specific spoken language, which contains an automatic transcription system in the middle.

+",24058,,24058,,4/20/2019 6:20,4/20/2019 6:20,,,,0,,,,CC BY-SA 4.0 +11902,2,,11894,4/20/2019 7:45,,1,,"

It is a difficult task to identify the areas to be corrected to improve accuracy unless the problem is staring you in the face. By that I mean that, unless a regularization parameter has unusually large or small values, it is difficult to pin down which regularizer to tweak. You need to do a grid search or random search over the hyperparameter space to come up with an optimal combination of hyperparameters.

+ +

There could be a lot of things that you could consider to reduce overfitting. Some of them are $L_2$ regularizers, Dropout, the depth of the network, the number of neurons in the layers, the optimizer, etc.
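
As a minimal sketch of such a search (build_model is an assumed helper that builds and compiles your network for a given dropout rate and $L_2$ strength, and X_train/Y_train/X_val/Y_val are assumed to exist):

# Sketch: tiny grid search over two regularization hyperparameters.
import itertools

results = {}
for dropout, l2 in itertools.product([0.3, 0.5], [1e-4, 2e-4, 1e-3]):
    model = build_model(dropout=dropout, l2=l2)
    history = model.fit(X_train, Y_train, epochs=30, verbose=0,
                        validation_data=(X_val, Y_val))
    results[(dropout, l2)] = max(history.history['val_acc'])  # key may be 'val_accuracy' in newer Keras

best = max(results, key=results.get)
print(best, results[best])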

+",16708,,16708,,4/20/2019 10:01,4/20/2019 10:01,,,,0,,,,CC BY-SA 4.0 +11903,1,,,4/20/2019 10:19,,1,77,"

I am trying to develop a time series model using autoregression. The data set is as follows:

+
INDEX MAXIMA
+  0   0.743
+  1   0.837
+  2   0.838
+  4   0.896
+  5   1.014
+  6   1.003
+  7   1.01
+  8   1.101
+  9   1.097
+
+

Each MAXIMA value given is the largest point on the corresponding curve. Basically, I have to perform multi-step forecasting (at least 9 steps ahead). I've done it using the recursive approach, but the accuracy of the prediction gets worse towards the end.

+
+

Predicted Result

+
+

+

PYTHON CODE

+

Using the AR model from stats model

+
# fit model for MAX VALUE
+  model = AR(data)
+  model_fit = model.fit()
+  yhat_max = model_fit.predict(len(data), len(data) + 8)  # start/end indices; a 9-step horizon is assumed here
+
+

To obtain an accurate prediction, what changes should be done in the approach? Or do I have to change the model?

+

Any kind of help is appreciated.

+",24006,,-1,,6/17/2020 9:57,4/20/2019 11:09,Auto-regression - Reduce error in prediction,,1,1,,,,CC BY-SA 4.0 +11905,2,,11903,4/20/2019 11:00,,1,,"

The predictions tend to move towards the mean of the series as one predicts over longer horizons. Also, in general, the optimal long-range forecast is the process mean.

+ +

In other words, the past of the process contains no information on the development of the process in the distant future.

+ +

And, this might be the reason that you are getting poor forecasts.

+ +
+

What changes should be done in the approach? or Do I have to change the model?

+
+ +

You might want to move to ARIMA models. See how they perform.
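
As a rough sketch with statsmodels (the (p, d, q) order below is a placeholder you would normally choose from ACF/PACF plots or by AIC, data is assumed to be your MAXIMA series, and older statsmodels versions expose the model as statsmodels.tsa.arima_model.ARIMA instead):

from statsmodels.tsa.arima.model import ARIMA

model = ARIMA(data, order=(2, 1, 1))    # placeholder order, for illustration only
model_fit = model.fit()
forecast = model_fit.forecast(steps=9)  # 9 steps ahead in a single call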

+ +

If you have some other time series that might act as explanatory variables then you might want to augment the dataset and use the ARIMAX model.

+ +

Third option would be to try out RNN if you have lots of data. You can also try hybrid models like ARIMA and RNN together.

+",16708,,16708,,4/20/2019 11:09,4/20/2019 11:09,,,,1,,,,CC BY-SA 4.0 +11907,2,,11889,4/20/2019 12:00,,2,,"

$$1-\sum_i(e_i-a_i)^2$$

+

$\sum$ - that just means sum. It is the Greek letter for S. You can rewrite the above formula as

+

$$1 -[(e_1 - a_1)^2+(e_2-a_2)^2+(e_3-a_3)^2+\ldots ]$$

+

$\sum$ just helps us avoid writing dozens of $+$ signs. Read more here.

+

What they are doing here is taking the difference between the expected value $e_1$ and the actual value $a_1$ for the 1st example, and so on. The difference can be positive ($e_1 > a_1$) or negative ($e_1 < a_1$), so usually we square the difference to make it a positive number.

+

The rest is there in the docs. Try putting in concrete imagined values for $a_i$ and $e_i$.
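
Concretely, with made-up numbers, the whole computation is just:

expected = [1.0, 0.0, 1.0]   # imagined e_i values
actual   = [0.9, 0.2, 0.7]   # imagined a_i values

fitness = 1 - sum((e - a) ** 2 for e, a in zip(expected, actual))
# 1 - (0.01 + 0.04 + 0.09) = 0.86  -> the closer to 1, the better the genome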

+",21690,,2444,,1/14/2021 17:43,1/14/2021 17:43,,,,0,,,,CC BY-SA 4.0 +11908,2,,11812,4/20/2019 14:42,,0,,"

Short Answer

+ +

Generative networks in generative adversarial arrangements do not learn from input images directly. Their input during training is feedback from the discriminative network.

+ +

The Theory in Summary

+ +

The seminal paper, Generative Adversarial Networks, Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio, June 2014, states, ""We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..."" The two models are defined as MLPs (multilayer perceptrons) in the paper.

+ +
    +
  • Generative model, G
  • +
  • Discriminative model, D
  • +
+ +

These two models are interconnected such that they form a negative feedback loop.

+ +
    +
  • G is trained to capture the feature relational distribution of a set of examples and generate new examples based on that relational distribution well enough to fool D.
  • +
  • D is trained to differentiate G's mocks from the set of externally procured examples.
  • +
+ +

Applying Concepts

+ +

If G were to receive input images, their presence would merely frustrate network training, in that the goal of the training would likely be inadequately defined. The objective of the convergence of G, stated above, is not the learning of how to process the images to produce some other form of output. Its objective in the generative approach is to learn how to generate well, an entirely incompatible objective with either image evaluation or image processing.

+ +

Additional Information

+ +

Additionally, one image is not nearly enough. There must be a sufficiently large set of example images for training to converge at all and then many more to expect the convergence to be both accurate and reliable. The PAC (probably approximately correct) learning analysis framework may be helpful to determine how many examples are needed for a specific case.

+ +

Essential Discriminator

+ +

The discriminator is essential to the generative approach because the feedback loop referenced above is essential to the convergence mechanism. The bidirectional interdependence between G and D is what allows a balanced approach toward accuracy in the feature relational distribution. That accuracy facilitates the human perception that the generated images fit adequately within a cognitive class.

+ +

.

+ +
+ +

Response to Comments

+ +

The attempt to use a generative approach, ""To paint in gaps in an image,"" is reasonable. In such a case, using Goodfellow's nomenclature, G would be generating the missing pixels and D would be trying to discriminate between G's gap filling and the pixels that were in the scenes in the regions of gaps prior to their introduction.

+ +

There are two additional requirements in the scenario of filling in pixels.

+ +
    +
  • G must be strongly incentivized against allowing a large gradient between the generated pixels and the adjacent non-gap pixels, unless that gradient is appropriate to the scene, as in the case of an object edge, or a change in reflectivity, such as a surface abrasion or the edge of a spray painted shape.
  • +
  • D must train using the entire image, which means the examples should be images without gaps, the gaps must be introduced in a way that matches the expected distribution of features of gaps that may be encountered later, and the result of G must be superimposed over the full image to produce the input arising from G and discriminated from the original by D.
  • +
+ +

It is recommended to begin with a standard GAN design, create a test for it (in TDD fashion), implement it, experiment with it, and otherwise become familiar with it and the mathematics involved. Most important to understand is how the balance between G's convergence and D's convergence is obtained in the loss (a.k.a. error or disparity) functions for each, and what concepts of feedback are employed using those functions.

+ +
    +
  • Does your point about input images frustrating network training apply to this kind of problem, or just to GANNs that generate from scratch?
  • +
+ +

It applies to both.

+ +
    +
  • Would I have to have the generator compare the original image with the generated image and pick which one it thinks is better in order to deal with the ""adequately defined"" issue?
  • +
+ +

D compares, not G. That is the delegated arrangement. It is not that other arrangements cannot work. They may. But Goodfellow and the others understood what worked in artificial networks long before they discovered a new approach, and they likely worked out the math of that approach and diagrammed it, perhaps on a white board, long before they typed a single line of code.

+",4302,,4302,,4/21/2019 13:19,4/21/2019 13:19,,,,2,,,,CC BY-SA 4.0 +11909,2,,11883,4/20/2019 15:37,,0,,"

Although it was not crystal clear, we'll assume that by, ""Multiple cards of the same class in one picture,"" is meant that multiple cards of identical suit and rank will be grouped together in the same example image but each card in the image will be selected from a unique deck.

+ +

That arrangement would only be fruitful if the objective of training was to classify, flag, or otherwise analyze the same kind of groupings after training. Otherwise, it would likely be most productive and efficient to first devise a way to divide up the images by deck so that the focus of learning is detection of the three dimensions or perhaps just the first two, depending on the intended use of the trained network.

+ +
    +
  • Rank
  • +
  • Suit
  • +
  • Style
  • +
+",4302,,,,,4/20/2019 15:37,,,,0,,,,CC BY-SA 4.0 +11912,1,,,4/20/2019 20:36,,1,384,"

Say I want to train a neural network with 10 classes as outputs and use categorical_cross_entropy as a loss function in Keras. This will try +to fit the training data as well as possible regardless of the outcome (i.e. value). If I want to take value into account, I have to use something like a policy gradient RL algorithm. How do I formulate the loss of the policy gradient algorithm in this case?

+ +

The standard categorical cross entropy loss function is as follows where y_ = true value, and y = predicted value:

+ +
   loss = -mean( y_ * log(y))
+
+ +

I am thinking of just multiplying the true value by the reward and still using the categorical cross-entropy of Keras, i.e.

+ +
   y_ = y_ * reward
+   loss = -mean( y_* log(y) )
+
+ +

Is my interpretation correct ?

+",20456,,24073,,4/21/2019 9:19,4/21/2019 9:19,Policy gradient loss for neural network training,,0,5,,,,CC BY-SA 4.0 +11914,1,11920,,4/21/2019 8:20,,4,96,"

Let's say you have an input which can take one of 10 different unique values. How would you encode it?

+ +
    +
  1. Have input length 10 and one-hot encode it.

  2. +
  3. Have 1 input but normalise the value between the input range.

  4. +
+ +

Would the end result be the same?

+",20352,,2444,,4/21/2019 13:02,4/23/2020 15:27,How should I encode a categorical input?,,1,0,0,,,CC BY-SA 4.0 +11916,5,,,4/21/2019 12:53,,0,,"

For more info, have a look at https://en.wikipedia.org/wiki/Action_model_learning.

+",2444,,2444,,4/22/2019 19:57,4/22/2019 19:57,,,,0,,,,CC BY-SA 4.0 +11917,4,,,4/21/2019 12:53,,0,,"For questions related to the ""action model learning"", which is an area of machine learning concerned with creation and modification of software agent's knowledge about effects and preconditions of the actions that can be executed within its environment.",2444,,2444,,4/22/2019 19:57,4/22/2019 19:57,,,,0,,,,CC BY-SA 4.0 +11918,5,,,4/21/2019 12:54,,0,,"

Have a look at https://en.wikipedia.org/wiki/Control_theory for more info.

+",2444,,2444,,4/22/2019 19:57,4/22/2019 19:57,,,,0,,,,CC BY-SA 4.0 +11919,4,,,4/21/2019 12:54,,0,,For questions related to control theory and its relation to reinforcement learning and other artificial intelligence sub-fields.,2444,,2444,,7/27/2019 14:47,7/27/2019 14:47,,,,0,,,,CC BY-SA 4.0 +11920,2,,11914,4/21/2019 13:08,,3,,"

As with many general questions about how to represent features, or best learn from them, the answer is ""it depends"".

+ +
    +
  • If there is a natural sequence to the separate items, and that sequence is informative somehow for the prediction, then the classes may work best as a single feature which takes discrete values, scaled to the network. A good example might be predicting house prices where one of the features is property tax bands (in locations which have these such as UK Council Tax).

  • +
  • If the classes have no natural sequence to them relating to the problem, then one-hot-encoding is usually a better interpretation. An example of this might be predicting the price of a car based on the manufacturer.

  • +
+ +

In both cases, a neural network with enough layers and connections - and enough data - could resolve a more difficult representation, and make little effective difference between the representations in practice. However, if you know something useful about the feature, it is normal to choose the most ""natural"" representation based on that knowledge, and you will often see a small improvement by making the correct choice. In the first case, the neural network can benefit from requiring less parameters, and learn more efficiently.

+ +

If you are not sure, then choosing one-hot versus a single scaled input should be an experiment which you perform as part of your hyper parameter tuning, along with any other feature engineering that you are not sure of.
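
As a quick sketch of the two options for a feature that takes one of 10 values (indices 0 to 9):

import numpy as np

value = 3                       # the category index

one_hot = np.eye(10)[value]     # option 1: length-10 input, [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
scaled  = value / 9.0           # option 2: a single input scaled to [0, 1]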

+",1847,,,,,4/21/2019 13:08,,,,0,,,,CC BY-SA 4.0 +11924,1,11996,,4/21/2019 18:02,,0,90,"

I understand that gamma is an important factor in determining the rewards for a deep Q agent. However, during testing of my network, I am noticing that the agent is outputting more ""do nothing"" actions as it learns more about its given data set (financial stock data).

+ +

I have tried tweaking gamma at different levels ranging from 0 to 1 and everywhere in between; however, as the agent continues to learn, the time between actions is getting longer and longer. This behaviour is undesirable, and it is preferable that the agent make more frequent, short-term actions, even if they result in reduced reward.

+ +

Does anyone have any tips on how to achieve this? Would a negative gamma have adverse effects on the network?

+ +

TLDR: +The time between actions is becoming increasingly long over time; I would prefer an agent that makes many short-term actions rather than long-term ones.

+",20893,,20893,,4/24/2019 15:07,4/24/2019 15:07,Encourage Deep Q to seek short-term reward,,1,6,,,,CC BY-SA 4.0 +11926,1,,,4/21/2019 18:24,,1,17,"

I need to train an LSTM on some light curves, in order to find a signal (there are 2 classes: signal and background). However, the signal (data points corresponding to signal) is around 100 times less frequent than the background, so I have a huge class imbalance, and in the end all points are labelled as background. I tried to use focal loss, but it doesn't help. Is there a way to make it work?

+",23871,,,,,4/21/2019 18:24,Training LSTM with class imbalance,,0,0,,,,CC BY-SA 4.0 +11927,1,11932,,4/21/2019 18:32,,4,274,"

Nowadays, robots or artificial agents often only perform the specific task they have been programmed to do.

+ +

Will we be able to build an artificial intelligence that feels empathy, that understands the emotions and feelings of humans, and, based on that, act accordingly?

+",23538,,4302,,5/18/2019 22:07,3/1/2023 22:37,Will we be able to build an artificial intelligence that feels empathy?,,3,2,,,,CC BY-SA 4.0 +11928,2,,52,4/21/2019 19:15,,0,,"

Yes, it is possible. The field of deep reinforcement learning is all about using deep neural networks (that is, neural networks with at least one hidden layer) to approximate value functions (such as the $Q$ function) or policies.

+ +

Have a look at the paper A Brief Survey of Deep Reinforcement Learning that gives a brief survey of the field.

+",2444,,,,,4/21/2019 19:15,,,,0,,,,CC BY-SA 4.0 +11929,1,11931,,4/21/2019 19:23,,8,1630,"

Reading Sutton and Barto, I see the following in describing policy gradients:

+ +

+ +

How is the gradient calculated with respect to an action (taken at time t)? I've read implementations of the algorithm, but conceptually I'm not sure I understand how the gradient is computed, since we need some loss function to compute the gradient.

+ +

I've seen a good PyTorch article, but I still don't understand the meaning of this gradient conceptually, and I don't know what I'm looking to implement. Any intuition that you could provide would be helpful.

+",16343,,22916,,4/22/2019 3:51,4/22/2019 5:17,How is the policy gradient calculated in REINFORCE?,,1,8,,,,CC BY-SA 4.0 +11931,2,,11929,4/22/2019 3:23,,7,,"

The first part of this answer is a little background that might bolster your intuition for what's going on. The second part is the more practical and direct answer to your question.

+ +
+ +

The gradient is just the generalization of the derivative to multivariable functions. The gradient of a function at a certain point is a vector that points in the direction of the steepest increase of that function.

+ +

Usually, we take a derivative/gradient of some loss function $\mathcal{L}$ because we want to minimize that loss. So we update our parameters in the direction opposite the direction of the gradient.

+ +

$$\theta_{t+1} = \theta_{t} - \alpha\nabla_{\theta_t} \mathcal{L} \tag{1}$$

+ +

In policy gradient methods, we're not trying to minimize a loss function. Actually, we're trying to maximize some measure $J$ of the performance of our agent. So now we want to update parameters in the same direction as the gradient.

+ +

$$\theta_{t+1} = \theta_{t} + \alpha\nabla_{\theta_t} J \tag{2}$$

+ +

In the episodic case, $J$ is the value of the starting state. In the continuing case, $J$ is the average reward. It just so happens that a nice theorem called the Policy Gradient Theorem applies to both cases. This theorem states that

+ +

$$\begin{align} +\nabla_{\theta_t}J(\theta_t) &\propto \sum_s \mu(s)\sum_a q_\pi (s,a) \nabla_{\theta_t} \pi (a|s,\theta_t)\\ +&=\mathbb{E}_\mu \left[ \sum_a q_\pi (s,a) \nabla_{\theta_t} \pi (a|s,\theta_t)\right]. +\end{align}\tag{3} +$$

+ +

The rest of the derivation is in your question, so let's skip to the end.

+ +

$$\begin{align} +\theta_{t+1} &= \theta_{t} + \alpha G_t \frac{\nabla_{\theta_t}\pi(A_t|S_t,\theta_t)}{\pi(A_t|S_t,\theta_t)}\\ +&= \theta_{t} + \alpha G_t \nabla_{\theta_t} \ln \pi(A_t|S_t,\theta_t) +\end{align}\tag{4}$$

+ +

Remember, $(4)$ says exactly the same thing as $(2)$, so REINFORCE just updates parameters in the direction that will most increase $J$. (Because we sample from an expectation in the derivation, the parameter step in REINFORCE is actually an unbiased estimate of the maximizing step.)

+ +
+ +

Alright, but how do we actually get this gradient? Well, you use the chain rule of derivatives (backpropagation). Practically, though, both Tensorflow and PyTorch can take all the derivatives for you.

+ +

Tensorflow, for example, has a minimize() method in its Optimizer class that takes a loss function as an input. Given a function of the parameters of the network, it will do the calculus for you to determine which way to update the parameters in order to minimize that function. But we don't want to minimize. We want to maximize! So just include a negative sign.

+ +

In our case, the function we want to minimize is +$$-G_t\ln \pi(A_t|S_t,\theta_t).$$

+ +

This corresponds to stochastic gradient descent ($G_t$ is not a function of $\theta_t$).

+ +

You might want to do minibatch gradient descent on each episode of experience in order to get a better (lower variance) estimate of $\nabla_{\theta_t} J$. If so, you would instead minimize +$$-\sum_t G_t\ln \pi(A_t|S_t,\theta_t),$$ +where $\theta_t$ would be constant for different values of $t$ within the same episode. Technically, minibatch gradient descent updates parameters in the average estimated maximizing direction, but the scaling factor $1/N$ can be absorbed into the learning rate.
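
For instance, in PyTorch the per-episode objective can be written roughly as below (a sketch only; policy_net, states, actions, returns and optimizer are assumed to already exist):

import torch
import torch.nn.functional as F

logits = policy_net(states)                    # shape (T, num_actions)
log_probs = F.log_softmax(logits, dim=-1)      # log pi(a|s) for every action
chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)  # log pi(A_t|S_t)

loss = -(returns * chosen).sum()               # minimize -sum_t G_t * log pi(A_t|S_t)

optimizer.zero_grad()
loss.backward()                                # autograd computes the gradient for us
optimizer.step()                               # descent on the negated objective = ascent on J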

+",22916,,22916,,4/22/2019 5:17,4/22/2019 5:17,,,,1,,,,CC BY-SA 4.0 +11932,2,,11927,4/22/2019 3:28,,6,,"

Let us describe a very simple system that does something we could label as empathic.

+

A chatbot answers "I am sorry to hear that. What happened?" when we type "I feel bad", and it replies "I am glad to hear that. Fancy some music?" when we type "I feel good".

+

Somehow, it perceives a human emotion, and acts accordingly.
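
A toy version of such a system takes only a few lines (a sketch, nothing more):

responses = {
    'bad':  'I am sorry to hear that. What happened?',
    'good': 'I am glad to hear that. Fancy some music?',
}

def reply(message):
    for keyword, answer in responses.items():
        if keyword in message.lower():
            return answer
    return 'Tell me more.'

print(reply('I feel bad'))   # -> I am sorry to hear that. What happened?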

+

Planes fly but they do not fly as birds. Similarly, we can expect artificial empathy from an AI, not necessarily natural empathy as we feel it.

+

Here is a related paper, Shared Laughter Generation for Empathetic Spoken Language

+

And here is an actual example:

+

+

The system correctly maps tears to sadness and outputs a proper answer, while also explaining its non-sentience. Not bad!

+",24014,,24014,,3/1/2023 22:37,3/1/2023 22:37,,,,1,,,,CC BY-SA 4.0 +11933,1,,,4/22/2019 7:05,,3,536,"

I have been reading an article on AlphaGo and one sentence confused me a little bit, because I'm not sure what it exactly means. The article says:

+ +
+

AlphaGo Zero only uses the black and white stones from the Go board as its input, whereas previous versions of AlphaGo included a small number of hand-engineered features.

+
+ +

What exactly is the input to AlphaGo's neural network? What do they mean by ""just white and black stones as input""? What kind of information is the neural network using? The position of the stones?

+",,user24093,2444,,6/8/2020 20:11,6/8/2020 20:11,What is the input to AlphaGo's neural network?,,1,0,,,,CC BY-SA 4.0 +11934,2,,11933,4/22/2019 8:13,,2,,"
+

The input to the neural network is a $19 × 19 × 17$ image stack + comprising $17$ binary feature planes. $8$ feature planes $X_t$ consist of binary values indicating the + presence of the current player’s stones ($X^i_t = 1$ if intersection $i$ contains a stone of the player’s + colour at time-step $t$; $0$ if the intersection is empty, contains an opponent stone, or if $t < 0$). A + further $8$ feature planes, $Y_t$ + , represent the corresponding features for the opponent’s stones. The + final feature plane, $C$, represents the colour to play, and has a constant value of either $1$ if black + is to play or $0$ if white is to play. These planes are concatenated together to give input features + $s_t = [ +X_t, Y_t, X_{t−1}, Y_{t−1}, ..., X_{t−7}, Y_{t−7}, C]$.

+
+ +

This and all the other architecture details can be found in the ""Neural Network Architecture"" section in the paper.
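
To make the shape of that input concrete, here is a small numpy sketch of assembling such a stack (the +1/-1/0 board encoding and the dummy history are assumptions for illustration):

import numpy as np

board_history = [np.zeros((19, 19), dtype=np.int8) for _ in range(8)]  # last 8 positions, most recent first
black_to_play = True

planes = []
for board in board_history:
    planes.append((board == +1).astype(np.float32))   # current player's stones
    planes.append((board == -1).astype(np.float32))   # opponent's stones
planes.append(np.full((19, 19), 1.0 if black_to_play else 0.0, dtype=np.float32))  # colour plane C

s_t = np.stack(planes, axis=-1)
print(s_t.shape)   # (19, 19, 17)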

+",22916,,22916,,4/22/2019 8:20,4/22/2019 8:20,,,,0,,,,CC BY-SA 4.0 +11936,1,,,4/22/2019 9:00,,3,406,"

I am trying to use Deep-Q-Learning to learn an ANN which controls a 7-DOF robotic arm. The robotic arm must avoid an obstacle and reach a target.

+ +

I have implemented a number of state-of-the-art techniques to try to improve the ANN performance. Such techniques are: PER, Double DQN, adaptive discount factor, sparse reward. I have also tried Dueling DQN, but it performed poorly. I have also tried a number of ANN architectures, and it looks like 2 hidden layers with 128 neurons each is the best one so far. My input layer is 12 neurons, the output 10 neurons.

+ +

However, as you can see from the image below, at a certain point the DQN stops learning and gets stuck at around an 80% success rate. I don't understand why it gets stuck, because in my opinion we could reach a higher success rate, 90% at least, but I just can't get out of that ""local minimum"".

+ +

So, my question is: what are some techniques I can try to get a DQN unstuck from something that looks like a local minimum?

+ +

figure:

+ +

+ +

note: the success rate in this picture is computed as the number of successes in the last 100 runs.

+",23527,,23527,,4/22/2019 9:34,4/22/2019 9:34,DQN Agent not learning anymore - what can I do to fix this?,,0,5,,,,CC BY-SA 4.0 +11937,1,12040,,4/22/2019 9:09,,0,831,"

I wrote a convolutional neural network for the MNIST dataset with Numpy from scratch. I am currently trying to understand every part and calculation. +But one thing I noticed was the ""just positive"" derivative of the ReLU function.

+ +

My network structure is the following:

+ +
    +
  • (Input 28x28)
  • +
  • Conv Layer (filter count = 6, filter size = 3x3, stride = 1)
  • +
  • Max Pool Layer (Size 2x2) with RELU
  • +
  • Conv Layer (filter count = 6, filter size = 3x3, stride = 1)
  • +
  • Max Pool Layer (Size 2x2) with RELU
  • +
  • Dense (128)
  • +
  • Dense (10)
  • +
+ +

I noticed, when looking at the gradients, that the ReLU derivative is always (as it should be) non-negative. But is it right that the filter weights are always decreasing? Or is there any way they can increase?

+ +

Whenever I look at any of the filter's values, they decreased after training. Is that correct?

+ +

By the way, I am using stochastic gradient descent with a fixed learning rate for training.

+",24096,,24096,,4/23/2019 1:35,4/28/2019 13:13,How should the values of the filters of a CNN change?,,2,2,,,,CC BY-SA 4.0 +11941,1,,,4/22/2019 12:00,,2,342,"

I was trying to implement a DQN without experience replay memory, and the agent is not learning anything at all. I know from my readings that experience replay is used for stabilizing gradients. But how important is experience replay in DQN and similar RL algorithms? If the model needs to learn from memory, why don't we use a recurrent network, which has built-in memory? What is the advantage of experience replay over a recurrent memory?

+",39,,2444,,4/22/2019 12:05,4/22/2019 12:05,Why experience reply memory in DQN instead of a RNN memory?,,0,1,,,,CC BY-SA 4.0 +11942,1,11946,,4/22/2019 13:05,,3,1072,"

I was wondering whether there is an AI system which could be used to resolve the class clashes problem which mostly happens in universities. In almost every university students face this problem, where two or more courses that many students want to take together get scheduled at the same time. Does anyone know about a system which resolves this issue or someone who works on this problem?

+",24095,,16909,,4/22/2019 18:37,4/22/2019 18:37,Is there any AI system for finding the best way to schedule university classes?,,1,1,,,,CC BY-SA 4.0 +11945,1,11952,,4/22/2019 18:22,,1,158,"

There are several science fiction movies where the robots rebel against their creators: for example, the Terminator's series or I Robot.

+ +

In the future, is it possible that robots will rebel against their human creators (like in the mentioned movies)?

+",24095,,1671,,4/22/2019 21:04,6/27/2020 22:06,Will robots rebel against their human creators?,,1,0,,,,CC BY-SA 4.0 +11946,2,,11942,4/22/2019 18:35,,4,,"

Welcome to AI.SE @Israr Ali.

+ +

The problem of scheduling a timetable is an example of a constraint satisfaction problem, a topic long studied in AI.

+ +

There are many possible techniques to apply to this kind of problem. They can be organized into three broad categories:

+ +
    +
  1. Global search algorithms, like backtracking search can be used to try and find an assignment of times to classes that results in no conflicts at all. With proper heuristics, these can be quite fast in practice, even though their worst-case runtimes are poor. Russell & Norvig's AI: A Modern Approach has a very good overview of these approaches in Chapter 6.

  2. +
  3. Local search algorithms, like hill climbing search start by assigning every class a randomly selected time, and then making small adjustments that monotonically decrease the number of conflicts (e.g. swapping the times of two classes). These are often faster than backtracking approaches and are especially so if you are willing to accept a ""pretty good"" schedule that might still have fewer conflicts than an optimal one. Chapter 4 of AI: A Modern Approach provides a good introduction.

  4. +
  5. Planning approaches, particularly non-classical algorithms like GraphPlan can also be used for these domains. They make use of the structure of planning or scheduling problems in particular to re-frame the search problems addressed by Backtracking techniques. By using this domain-specific representation, they are able to achieve very high-quality solutions very quickly. Chapter 11 of AI: A Modern Approach covers Graph Plan in some detail, and could be a good introduction to these more specialized techniques.

  6. +
+",16909,,,,,4/22/2019 18:35,,,,0,,,,CC BY-SA 4.0 +11947,2,,11900,4/22/2019 19:58,,2,,"

These answers are based on my personal understanding of Bert from both the paper and official_implementation, hope it will help:

+ +
What do they mean by ""maximum scoring span is used as the prediction""?
+
+ +

As you know, in SQuAD the input sequence is divided into 2 parts: Question and Document (from which we extract the answer if possible).

+ +

Sometimes the input length exceeds the max_seq_length parameter; in this case, the document is split into as many parts as needed and we end up having more than one input for the same question/document. Note that the question is replicated in all the resulting inputs (see line_350 for details).

+ +

So in such cases, in order to determine the predicted span among all the generated-inputs we use the maximum scoring i.e (max_start + max_end) / 2 = ( max(Softmax(S*Ti)) + max(Softmax(E*Ti)) ) / 2 of all the inputs related to the same question.

+ +
""The training objective is the loglikelihood of the correct start and end positions""
+
+ +

The loss is the average of the start-position loss start_loss and the end-position loss end_loss. Each loss is computed in the same way: after applying the Softmax to the final output (i.e. S*Ti or E*Ti), we use the real start/end positions to compute the loss (see code below).

+ +

From run_squad.py in Bert_repo:

+ +
 def compute_loss(logits, positions):
+        one_hot_positions = tf.one_hot(positions, depth=seq_length, dtype=tf.float32)
+        log_probs = tf.nn.log_softmax(logits, axis=-1)
+        loss = -tf.reduce_mean(tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
+        return loss
+
+ start_positions = features[""start_positions""]
+ end_positions = features[""end_positions""]
+
+ start_loss = compute_loss(start_logits, start_positions)
+ end_loss = compute_loss(end_logits, end_positions)
+
+ total_loss = (start_loss + end_loss) / 2.0
+
+",23350,,23350,,4/22/2019 21:33,4/22/2019 21:33,,,,0,,,,CC BY-SA 4.0 +11949,1,,,4/22/2019 20:45,,4,612,"

Suppose I have a standard image classification problem (i.e. CNN is shown a single image and predicts a single classification for it). If I were to use bounding boxes to surround the target image (i.e. convert this into an object detection problem), would this increase classification accuracy purely through the use of the bounding box?

+ +

I'm curious if the neural network can be ""assisted"" by us when we show it bounding boxes as opposed to just showing it the entire image and letting it figure it all out by itself.

+",6328,,2444,,1/28/2021 23:49,6/18/2023 6:01,Can bounding boxes further improve the performance of a CNN classifier?,,4,0,,,,CC BY-SA 4.0 +11952,2,,11945,4/23/2019 0:51,,1,,"

I don't like to be a killjoy, but this question seems premature (that's why it's had the ""mythology of AI"" tag added to it). The kinds of emergent artificial general intelligence depicted in the movies you mention are in science fiction films because they are science fiction. Most AI researchers do not think they are likely to appear anytime soon. The overwhelming majority of researchers think the most likely times for such a system to appear are ""More than 50 years [from now]"" or ""Never"". In part, this is because AI researchers thought we were close to such systems for several decades, despite failing to create them. This suggests that making an artificial general intelligence is much harder than we might expect.

+ +

Despite AGI being a long shot, there's a lot of recent interest in the AI research community in the social impact of our technologies. The study of these systems is called ""ethical AI"", and this is the path that the research community as a whole has begun to embark on. A promising approach is to model the process by which humans decide to treat each other well, in the hopes of creating programs that act according to that process.

+",16909,,,,,4/23/2019 0:51,,,,0,,,,CC BY-SA 4.0 +11953,1,,,4/23/2019 1:57,,11,700,"

It seems that deep neural networks and other neural network based models are dominating many current areas like computer vision, object classification, reinforcement learning, etc.

+ +

Are there domains where SVMs (or other models) are still producing state-of-the-art results?

+",22525,,2444,,4/23/2019 14:08,9/28/2019 14:35,What are the domains where SVMs are still state-of-the-art?,,4,0,,,,CC BY-SA 4.0 +11954,2,,11953,4/23/2019 2:18,,7,,"

Deep Learning and Neural Networks are getting most of the focus because of recent advances in the field and most experts believe it to be the future of solving machine learning problems.

+ +

But make no mistake, classical models still produce exceptional results and in certain problems, they can produce better results than deep learning.

+ +

Linear Regression is still by far the most used machine learning algorithm in the world.

+ +

It’s difficult to identify a specific domain where classical models always perform better, as the accuracy is very much determined by the shape and quality of the input data.

+ +

So algorithm and model selection is always a trade-off. It’s a somewhat accurate statement to make that classical models still perform better with smaller data sets. However, a lot of research is going into improving deep learning model performance on less data.

+ +

Most classical models require fewer computational resources, so if your goal is speed, then they are much better.

+ +

Also, classical models are easier to implement and visualize which can be another indicator for performance, but it depends on your goals.

+ +

If you have unlimited resources, a massive observable data set that is properly labeled and you implement it correctly within the problem domain then deep learning is likely going to give you better results in most cases.

+ +

But in my experience, real-world conditions are never this perfect.

+",24107,,24107,,4/23/2019 2:44,4/23/2019 2:44,,,,0,,,,CC BY-SA 4.0 +11955,2,,11949,4/23/2019 3:18,,0,,"

Another way to ask the question is: Does sound get clearer when you remove the background noise?

+ +

The obvious answer is yes and in the case of image classification, the answer is also generally yes.

+ +

In most cases reducing the noise (irrelevant pixels) will strengthen the signal (activations) the neural network is trying to find.

+",24107,,,,,4/23/2019 3:18,,,,1,,,,CC BY-SA 4.0 +11956,1,11991,,4/23/2019 4:12,,0,234,"

How do I choose the search algorithm for a particular task? Which criteria should I take into account?

+",23299,,2444,,4/24/2019 20:08,4/24/2019 20:08,How do I choose the search algorithm for a particular task?,,1,0,,,,CC BY-SA 4.0 +11957,1,,,4/23/2019 5:30,,1,32,"

I've been working on Hinton's matrix capsule networks for several months. I searched each corner of the internet. But I couldn't find anyone that can reproduce Hinton's matrix capsule network. Can anyone get the reported accuracy on SmallNORB and Cifar10 dataset?

+ +

PS: I know Hinton's another paper on capsule networks Dynamic Routing Between Capsules is reproducible. Please, do not confuse the two papers.

+",24110,,2444,,6/9/2020 11:38,6/9/2020 11:38,Is anyone able to reproduce Hinton's matrix capsule networks?,,0,2,,,,CC BY-SA 4.0 +11959,2,,4320,4/23/2019 5:52,,4,,"

That is a very deep question. There was series of papers recently proving the convergence of gradient descent for overparameterized deep networks (for example, Gradient Descent Finds Global Minima of Deep Neural Networks, A Convergence Theory for Deep Learning via Over-Parameterization or Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks). All of the proofs assume that the initial weights are assigned randomly according to a Gaussian distribution. The main reasons this initial distribution is important for the proofs are:

+
    +
  1. Random weights make the ReLU operators in each layer statistically compressive mapping (up to a linear transformation).

    +
  2. +
  3. Random weights preserve separation of input for any input distribution - that is if input samples are distinguishable network propagation will not make them indistinguishable.

    +
  4. +
+

Those properties are very difficult to reproduce with deterministically generated initial weight matrices, and even if they are reproducible with deterministic matrices, the null-space (from which we can generate adversarial examples) would likely make the method less useful in practice. More importantly, preservation of those properties during gradient descent would likely make the method impractical. But overall it's very difficult, though not impossible, and it may warrant some research in that direction. In an analogous situation, there are some results on the Restricted Isometry Property for deterministic matrices in compressed sensing.

+",22745,,16909,,1/22/2021 0:58,1/22/2021 0:58,,,,0,,,,CC BY-SA 4.0 +11964,1,11967,,4/23/2019 9:18,,1,119,"

Could we even use reinforcement learning with big datasets?

+ +

Or, in RL, does the agent build its own dataset?

+",24003,,1847,,4/23/2019 10:16,4/23/2019 10:30,Is there any example of using Q-learning with big data?,,1,0,,,,CC BY-SA 4.0 +11966,1,11972,,4/23/2019 10:11,,1,174,"

Could changing the order of convolution layers in a CNN improve accuracy or training time?

+",24003,,2444,,4/23/2019 13:29,4/23/2019 15:18,Does changing the order of the convolution layers in a CNN have any impact?,,1,0,,,,CC BY-SA 4.0 +11967,2,,11964,4/23/2019 10:30,,1,,"
+

Or in RL does the agent built its own dataset?

+
+ +

Essentially this is the case.

+ +

RL is a very general learning mechanism, based on trial-and-error. You could create an environment where the agent's goal is to correctly predict classification or regression problems, where the agent is rewarded for getting close to the correct prediction. However, these problems are most often better addressed using supervised learning techniques. Using RL in such cases will most of the time make the learning slower, less efficient and less accurate.

+ +

The relationship between RL and supervised learning is more about how RL generates the target data for learning. In some cases, such as Deep Q Learning (DQN), you can see quite clearly from looking at the algorithm or code, that the RL agent contains a supervised learning component. The ""supervised learning"" in DQN learns a regression problem to predict the action values for the current agent's target policy. Whilst the outer RL logic that contains this supervised learning model, is built around testing and changing that behaviour in order to act optimally in the longer term.

+ +

Using RL to predict from a big data set would involve using the same kind of supervised learning model on the inside, with RL using that to guess at the correct result, and sometimes choosing to guess at some other random prediction instead. This guessing process is extra processing, plus it will add noise and variance to the error signal used to train the model. In a few cases, this might even be helpful, but in the majority of cases studied for regression and classification tasks, it will result in a poorer training process, less accurate model, or both.

+",1847,,,,,4/23/2019 10:30,,,,0,,,,CC BY-SA 4.0 +11969,5,,,4/23/2019 14:03,,0,,,-1,,-1,,4/23/2019 14:03,4/23/2019 14:03,,,,0,,,,CC BY-SA 4.0 +11970,4,,,4/23/2019 14:03,,0,,"For questions related to ""state of the art"" (SOTA) models in machine learning and, in general, AI.",2444,,2444,,4/23/2019 19:57,4/23/2019 19:57,,,,0,,,,CC BY-SA 4.0 +11971,1,11989,,4/23/2019 15:07,,1,100,"

Raul Rojas' Neural Networks A Systematic Introduction, section 8.2.1 calculates the variance of the output of a hidden neuron.

+ +

Raul Rojas says that ""for binary vectors we have $E[x_i^2] = \frac{1}{3}$"" where $x_i$ is the input value transported through each edge to a node.

+ +

I don't quite get how he reaches this result.

+ +

Thank you for your time :)

+",14892,,,,,4/24/2019 11:18,Binary vector expected value,,1,1,,,,CC BY-SA 4.0 +11972,2,,11966,4/23/2019 15:18,,0,,"

Conventionally, CNN layers downsample over and over, which enables them to capture details at different levels of abstraction. Usually, it is observed that the initial layers do nothing more than detect edges or filter color channels; the combinations of these edges are what we perceive as 'features'.

+ +

If you reverse the order, you essentially are changing sampling modes down the line. +CNNs detect by 'downsampling' the inputs and therefore 'extracting' features.

+ +

It may not work as expected!

+",24122,,,,,4/23/2019 15:18,,,,1,,,,CC BY-SA 4.0 +11973,2,,11953,4/23/2019 15:31,,14,,"

State-of-the-art is a tough bar, because it's not clear how it should be measured. An alternative criteria, which is akin to state-of-the-art, is to ask when you might prefer to try an SVM.

+ +

SVMs have several advantages:

+ +
    +
  1. Through the kernel trick, the runtime of an SVM does not increase significantly if you want to learn patterns over many non-linear combinations of features, rather than the original feature set. In contrast, a more modern approach like a deep neural network will need to get deeper or wider to model the same patterns, which will increase its training time.
  2. +
  3. SVMs have an inherent bias towards picking ""conservative"" hypotheses, that are less likely to overfit the data, because they try to find maximum margin hypotheses. In some sense, they ""bake-in"" Occam's razor.
  4. +
  5. SVMs have only two hyperparameters (the choice of kernel and the regularization constant), so they are very easy to tune to specific problems. It is usually sufficient to tune them by performing a simple grid-search through the parameter space, which can be done automatically.
  6. +
+ +

SVMs also have some disadvantages:

+ +
    +
  1. SVMs have a runtime that scales cubically in the number of datapoints you want to train on (i.e. $O(n^3)$ runtime)1. This does not compare well with, say, a typical training approach for a deep neural network which runs in $O(w*n*e)$ time, where $n$ is the number of data points, $e$ is the number of training epochs, and $w$ is the number of weights in the network. Generally $w, e << n$.
  2. +
  3. To make use of the Kernel trick, SVMs cache a value for the kernelized ""distance"" between any two pairs of points. This means they need $O(n^2)$ memory. This is far, far, more trouble than the cubic runtime on most real-world sets. More than a few thousand datapoints will leave most modern servers thrashing, which increases effective runtime by several orders of magnitude. Together with point 1, this means SVMs will tend to become unworkably slow for sets beyond maybe 5,000-10,000 datapoints, at the upper limit.
  4. +
+ +

All of these factors point to SVMs being relevant for exactly one use case: small datasets where the target pattern is thought, apriori, to be some regular, but highly non-linear, function of a large number of features. This use case actually arises fairly often. A recent example application where I found SVMs to be a natural approach was building predictive models for a target function that was known to be the result of interactions between pairs of features (specifically, communications between pairs of agents). An SVM with a quadratic kernel could therefore efficiently learn conservative, reasonable, guesses.

+ +
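
As a rough sketch of that use case (nothing here is tied to any particular project; X and y are assumed to be a small feature matrix and label vector), scikit-learn makes it easy to fit a quadratic-kernel SVM and grid-search its regularization constant:

+ +
from sklearn.svm import SVC
+from sklearn.model_selection import GridSearchCV
+
+# X: (n_samples, n_features), y: class labels -- assumed to be a few thousand rows at most
+param_grid = {'C': [0.01, 0.1, 1, 10, 100]}
+search = GridSearchCV(SVC(kernel='poly', degree=2), param_grid, cv=5)
+search.fit(X, y)
+print(search.best_params_, search.best_score_)
+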
+ +

1 There are approximate algorithms that will solve the SVM faster than this, as noted in the other answers.

+",16909,,16909,,9/26/2019 23:10,9/26/2019 23:10,,,,0,,,,CC BY-SA 4.0 +11979,1,11982,,4/23/2019 19:12,,9,2162,"

We often train neural networks by optimizing the mean squared error (MSE), which is an equation of a parabola $y=x^2$, with gradient descent.

+

We also say that weight adjustment in a neural network by the gradient descent algorithm can hit a local minimum and get stuck in there.

+

How are multiple local minima on the equation of a parabola possible, if a parabola has only one minimum?

+",11789,,2444,,11/16/2021 13:47,1/20/2023 15:28,How is it possible that the MSE used to train neural networks with gradient descent has multiple local minima?,,3,0,,,,CC BY-SA 4.0 +11980,1,,,4/23/2019 19:31,,1,41,"

Note: I am NOT asking for general advantages of neuroevolution over standard approaches (e.g.: architecture search, parallelization), I am asking for examples of tasks in which, currently, neuroevolved networks outperform ANNs trained with gradient-based techniques. Of course, this is not opinion based, as I am asking for examples based on facts.

+",23527,,2444,,7/7/2019 19:16,7/7/2019 19:16,"What are some examples of tasks in which, currently, neuroevolution outperforms gradient-based approaches?",,0,0,,,,CC BY-SA 4.0 +11981,2,,11979,4/23/2019 19:44,,3,,"
+

How are multiple local minima on the equation of a parabola possible, if a parabola has only one minimum?

+
+

A parabola has one minimum, and no separate local minima. So it isn't possible.

+

However...

+
+

Gradient descent works on the equation of mean squared error, which is an equation of a parabola $y=x^2$

+
+

Just because the loss function is a parabola with respect to the direct input, does not mean that the loss function is a parabola with respect to the parameters that indirectly cause that error.

+

In fact it only remains true for linear functions. When considering linear regression $\hat{y} = \sum_i w_i x_i + b$, there is only one global minimum (with specific values of $w_i$ or specific vector $\mathbf{w}$), and your assertion is true.

+

Once you add nonlinear activations, as in neural networks, then the relationship between error function and parameters of the model becomes far more complex. For the last/output layer you can carefully choose a loss function so that this cancels out - you can keep your single global minimum for logistic regression and softmax regression. However, one or more hidden layers, and all bets are off.

+

In fact you can prove quite easily that a neural network with a hidden layer must have multiple stationary points (not necessarily local minima). The outline of the proof is to note that there must be multiple equivalent solutions, since in a fully-connected network you can re-arrange the nodes into any order, move the weights to match, and it will be a new solution with exactly the same behaviour, including the same loss on the dataset. So a neural network with one hidden layer with $n$ nodes must have $n!$ absolute minimums. There is no way for these to exist without other stationary points in-between them.

+
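
If you want to see this concretely, here is a small NumPy sketch (random weights, purely for illustration) showing that permuting the hidden units of a one-hidden-layer network gives a different parameter vector with exactly the same input-output behaviour:

+
import numpy as np
+
+rng = np.random.default_rng(0)
+W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)   # input -> 4 hidden units
+W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)   # hidden -> output
+
+def forward(x, W1, b1, W2, b2):
+    return np.tanh(x @ W1 + b1) @ W2 + b2
+
+x = rng.normal(size=(5, 3))
+perm = rng.permutation(4)   # re-order the hidden units
+print(np.allclose(forward(x, W1, b1, W2, b2),
+                  forward(x, W1[:, perm], b1[perm], W2[perm, :], b2)))   # True
+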

There is theory to suggest that most of the stationary points found in practice will not be local minima, but saddle points.

+

As an example, this is an analysis of saddle points in a simple XOR approximator.

+",1847,,2444,,11/16/2021 13:35,11/16/2021 13:35,,,,0,,,,CC BY-SA 4.0 +11982,2,,11979,4/23/2019 19:49,,9,,"

$g(x) = x^2$ is indeed a parabola and thus has just one optimum.

+

However, the $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum_i (y_i - f(x_i))^2$, where $\boldsymbol{x}$ are the inputs, $\boldsymbol{y}$ the corresponding labels and the function $f$ is the model (e.g. a neural network), is not necessarily a parabola. In general, it is only a parabola if $f$ is a constant function and the sum is over one element.

+

For example, suppose that $f(x_i) = c, \forall i$, where $c \in \mathbb{R}$. Then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum_i (y_i - c)^2$ will only change as a function of one variable, $\boldsymbol{y}$, as in the case of $g(x) = x^2$, where $g$ is a function of one variable, $x$. In that case, $(y_i - c)^2$ will just be a shifted version (either to the right or left depending on the sign of $c$) of $y_i^2$, so, for simplicity, let's ignore $c$. So, in the case $f$ is a constant function, then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum_i y_i^2$, which is a sum of parabolas $y_i^2$, which is called a paraboloid. In this case, the paraboloid corresponding to $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum_i y_i^2$ will only have one optimum, just like a parabola. Furthermore, if the sum is just over one $y_i$, that is, $\text{MSE}(\boldsymbol{x}, \boldsymbol{y}) = \sum_i y_i^2 = y^2$ (where $\boldsymbol{y} = y$), then the MSE becomes a parabola.

+

In other cases, the MSE might not be a parabola or have just one optimum. For example, suppose that $f(x) = x^2$, $y_i = 1$ ($\forall i$), then $h(x) = (1 - x^2)^2$ looks as follows

+

+

which has two minima at $x=-1$ and $x=1$ and one maximum at $x=0$. We can find the two minima of this function $h$ using calculus: $h'(x) = -4x(1 - x^2)$, which becomes zero when $x=-1$ and $x=1$.

+
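
If you want to double-check this, a tiny SymPy sketch (assuming SymPy is available) recovers the same stationary points, i.e. the two minima and the maximum:

+
import sympy as sp
+
+x = sp.symbols('x')
+h = (1 - x**2)**2
+print(sp.solve(sp.diff(h, x), x))   # [-1, 0, 1] (order may vary)
+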

In this case, we only considered one term of the sum. If we considered the sum of terms of the form of $h$, then we could even have more "complicated" functions.

+

To conclude, given that $f$ can be arbitrarily complex, then also $\text{MSE}(\boldsymbol{x}, \boldsymbol{y})$, which is a function of $f$, can also become arbitrarily complex and have multiple minima. Given that neural networks can implement arbitrarily complex functions, then $\text{MSE}(\boldsymbol{x}, \boldsymbol{y})$ can easily have multiple minima. Moreover, the function $f$ (e.g. the neural network) changes during the training phase, which might introduce more complexity, in terms of which functions the MSE can be and thus which (and how many) optima it can have.

+",2444,,66824,,1/20/2023 15:28,1/20/2023 15:28,,,,1,,,,CC BY-SA 4.0 +11983,2,,11937,4/23/2019 20:14,,0,,"

The weights of the filters do not always and necessarily decrease. Consider the extreme case when you initialise them to $-\infty$ and you want to approximate a function different than the one the CNN represents initially with all weights set to $-\infty$. You will have to increase one or more weights.

+",2444,,,,,4/23/2019 20:14,,,,0,,,,CC BY-SA 4.0 +11984,2,,6317,4/24/2019 1:42,,1,,"

The main difference between on-policy and off-policy is how to get samples and what policy we optimize.

+ +

In off-policy deterministic actor-critic, the trajectories are sampled from a behaviour policy $\beta$, not from the policy we are optimizing (that is, $\mu_{\theta}$). However, in the on-policy actor-critic, the action $a_{t+1}$ is sampled from the target policy $\mu_{\theta}$, and the policy we optimize is also $\mu_{\theta}$.

+",25129,,2444,,4/24/2019 9:29,4/24/2019 9:29,,,,0,,,,CC BY-SA 4.0 +11985,1,12002,,4/24/2019 3:42,,2,51,"

Let's say I've trained a CNN that is predicting/inferring live samples that it hasn't seen before. In the event the network makes a correct prediction, would including this as a new sample in its training set increase the model accuracy even further when re-training the network?

+ +

I'm unsure about this, since it seems as though the network has already learnt the necessary features for making the correct prediction, so adding it as a new training sample might be redundant. On the other hand, it might also reinforce to the network that it's on the right track, giving it further confidence to generalise with whatever features it has learnt for that class, which it might then be able to apply to other images of the same class that it would otherwise misclassify.

+ +

The reason I'm thinking of this is that manually labeling each image is a time-consuming process, however if a simple ""Correct/Incorrect"" popup box was presented after the network made a live prediction, then it's simply a matter of clicking a single button to generate a new labelled training sample, which would be a far easier labeling task.

+ +

So how useful would it be to do something like this?

+",6328,,,,,4/24/2019 19:26,Does reinforcing correct predictions increase model accuracy further?,,1,2,,,,CC BY-SA 4.0 +11986,1,,,4/24/2019 8:31,,1,37,"

We have various types of data features with different temporal scales. For example, some of them describe the state per second, while others may describe the state per day or per month from another aspect. The former features are dense on the time scale and the latter are sparse. Simply concatenating them into one feature vector does not seem appropriate. Is there any standard method in machine learning that can handle this problem?

+",20587,,,,,4/24/2019 8:31,How to combine features with different temporal scale in machine learning,,0,0,,,,CC BY-SA 4.0 +11987,1,11997,,4/24/2019 9:08,,4,879,"

I am now working on training an AlphaZero player for a board game. The implementation of the board game is mine; the MCTS for AlphaZero was taken from elsewhere. Due to the complexity of the game, self-play takes much longer than training.

+ +

As you know, alphazero has 2 heads: value and policy. In my loss logging I see that with time, the value loss is decreasing pretty significantly. However, the policy loss only demonstrates fluctuation around its initial values.

+ +

Maybe someone here has run into similar problems? I would like to know whether it's a problem with my implementation (although the value loss is decreasing) or just a matter of not having enough data.

+ +

Also, perhaps importantly, the game has ~17k theoretically possible moves, but only 80 at max are legal at any single state (think chess - a lot of possibles but very few are actually legal at any given time). Also, if MCTS has 20 simulations, then the improved probabilities vector (against which we train our policy loss) will have at most 20 non-zero entries. My idea was that it might be hard for the network to learn such sparse vectors.

+ +

Thank you for any ideas!

+",21278,,,,,4/24/2019 15:56,Alphazero policy head loss not decreasing,,1,2,,,,CC BY-SA 4.0 +11988,1,,,4/24/2019 11:07,,0,1114,"

As far as I know, in PDDL one designs the environment and describes the initial state. When we describe the target state, the solver creates some sort of graph. How is the graph built, and what do the keywords in PDDL refer to?

+

I know that there are many flavours of PDDL, but let's go with the standard or the most common version of PDDL.

+",19413,,2444,,1/23/2021 0:13,1/23/2021 0:13,How does a PDDL solver find a solution for a given problem?,,1,0,,,,CC BY-SA 4.0 +11989,2,,11971,4/24/2019 11:18,,2,,"

Some lines above the author says

+ +
+

By the law of large numbers we can also assume that the total input to the node has a Gaussian distribution

+
+ +

hence we can assume $X \sim \mathcal{N}(0,1)$ with the $X$ domain being continuous

+ +

Then he says the input vector is assumed to be binary, which changes the domain from continuous to discrete, so we can discretize it by mapping $-1 \le X \le 1$ to zero and $X < -1$ or $X > 1$ to 1.

+ +

Finally according to the 68-95-99.7 Rule we can compute

+ +

$$ E(X^2) = P(X=1) \cdot 1^2 + P(X=0) \cdot 0^2 = 0.32 $$

+ +

Finally, the author probably rounds this up to $\frac{1}{3} \simeq 0.33$.

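+ +

As a quick numerical check of the 68-95-99.7 argument (a sketch assuming SciPy is available):

+ +
from scipy.stats import norm
+
+p_one = 2 * (1 - norm.cdf(1))   # P(|X| > 1) for X ~ N(0, 1)
+print(p_one)                    # ~0.317, i.e. roughly 1/3
+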
+",1963,,,,,4/24/2019 11:18,,,,0,,,,CC BY-SA 4.0 +11990,2,,11988,4/24/2019 12:36,,3,,"

The question doesn't really make sense: PDDL is a description language that is used to formulate a problem. This description then is the input to a planner; how the planner arrives at the intended solution is not related to the PDDL description.

+ +

There are a number of planning algorithms, and you can implement any of them to make use of a PDDL description. The output of the solver is a plan, which is usually an ordered sequence of actions, and a tree or graph structure might be a good way of capturing this.

+",2193,,,,,4/24/2019 12:36,,,,0,,,,CC BY-SA 4.0 +11991,2,,11956,4/24/2019 12:49,,3,,"

The choice of the most appropriate search algorithm for a particular task is often based (but not exclusively) on its time complexity, space complexity, termination (if the algorithm always halts), optimality guarantees (if the algorithm is guaranteed to find the optimal solution), available implementations (as software libraries) and (if known) the actual performance (for such particular task). There are algorithms that have the same time or space complexities, but that, in practice, have different performance, which also depends on the problem being solved and implementations (which might or not be optimised).

+ +

For example, consider a search tree where you have a finite number of nodes and paths. Suppose that solutions lie at the leaves of this tree and that you are interested in one solution, but not necessarily the optimal one. In that case, in practice, DFS might find solutions faster than BFS, because of its nature and the task being solved. However, in the case you have a tree with infinite-length paths, then DFS might not terminate. In that case, proceeding layer by layer (BFS) might be a more appropriate strategy.

+",2444,,,,,4/24/2019 12:49,,,,0,,,,CC BY-SA 4.0 +11992,1,12123,,4/24/2019 14:15,,5,1451,"

Following the DQN algorithm with experience replay:

+

Store transition $\left(\phi_{t}, a_{t}, r_{t}, \phi_{t+1}\right)$ in $D$ Sample random minibatch of transitions $\left(\phi_{j}, a_{j}, r_{j}, \phi_{j+1}\right)$ from $D$ Set

+

$$y_{j}=\left\{\begin{array}{cc}r_{j} & \text { if episode terminates at j+1} \\ r_{j}+\gamma \max _{d^{\prime}} \hat{Q}\left(\phi_{j+1}, a^{\prime} ; \theta^{-}\right) & \text {otherwise }\end{array}\right.$$

+

Perform a gradient descent step on $\left(y_{j}-Q\left(\phi, a_{j} ; \theta\right)\right)^{2}$ with respect to the network parameters $\theta$.

+
+

We calculate the loss $=\left(Q(s,a)-(r+Q(s',a'))\right)^2$, where $s'$ denotes the next state.

+
+
+

Assume I have positive but changing rewards. Meaning, $r>0$.

+
+

Thus, since the rewards are positive, when calculating the loss I notice that almost always $Q(s,a) < r + Q(s',a')$.

+

Therefore, the network learns to always increase the $Q$ function, and eventually the $Q$ values for the same states keep growing in later learning steps.

+

How can I stabilize the learning process?

+",25141,,36737,,4/4/2021 15:14,4/4/2021 15:14,How to stop DQN Q function from increasing during learning?,,4,2,,,,CC BY-SA 4.0 +11993,2,,11992,4/24/2019 14:45,,1,,"
    +
  1. You can use a discount factor $\gamma$ less than one.

  2. +
  3. You can use a finite time horizon, so rewards only propagate back from states that are no farther away than $T$ time steps.

  4. +
  5. You can use the sum of rewards averaged over time (the average-reward formulation) for $Q$.

  6. +
+ +

All of those are legitimate approaches.

+",22745,,,,,4/24/2019 14:45,,,,1,,,,CC BY-SA 4.0 +11994,1,,,4/24/2019 14:53,,1,65,"

I have a big amount of light curves (image below).

+ +

+ +

I am trying to label the points as signal or background (the signal appears usually periodically, several times, for a given light curve).

+ +

More precisely, I want to identify the downward spikes (class label = 1) from the background (class label = 0).

+ +

However, the data is not labeled. I tried labeling it by hand, and using a bi-directional LSTM succeeds in labeling the data points properly. However, there are thousands of light curves and labeling all of them would take very long.

+ +

Is there any good unsupervised approach to do this (unsupervised LSTM maybe, but any other method that might work on time series would do just fine)?

+",23871,,2444,,4/12/2020 19:09,4/12/2020 19:09,Is there an LSTM-based unsupervised learning algorithm to label a dataset of curves?,,1,1,,,,CC BY-SA 4.0 +11995,2,,11992,4/24/2019 15:03,,1,,"
+

Therefore,the network learns to always increase the Q function , and eventually the Q function is higher in same states in later learning steps

+
+ +

If your value function keeps increasing in later steps, that means the network is still learning those Q-values; you shouldn't necessarily prevent that. Your Q-values won't increase forever, even if the rewards are always positive. You basically have a regression problem here, and when the value of $Q(s,a)$ becomes very close to the target value $r+Q(s',a')$, $Q(s,a)$ will stop increasing by itself.

+",20339,,,,,4/24/2019 15:03,,,,5,,,,CC BY-SA 4.0 +11996,2,,11924,4/24/2019 15:04,,0,,"

Thanks for the responses. After taking into account the 'risk' the network experiences when working with unstable financial data, I modified the activation layers from ReLU to sigmoid. This has led to a considerable improvement in both the results (profit!) and the rate at which actions are carried out.

+ +

I will also edit the above question to provide further details.

+",20893,,,,,4/24/2019 15:04,,,,0,,,,CC BY-SA 4.0 +11997,2,,11987,4/24/2019 15:56,,2,,"

The loss of the policy head here is really quite different from losses in, for instance, more ""conventional"" Supervised Learning approaches (where we typically expect/hope to see a relatively steady decrease in loss function).

+ +

In this AlphaZero setup, the target that we're updating the policy head towards is itself changing during the training process. When we improve our policy, we expect the MCTS ""expert"" to also be improved, which may lead to a different distribution of MCTS visit counts, which in turn may lead to a different update target for the policy head from previous update targets. So it's perfectly fine if our ""loss"" increases sometimes, we may still actually be performing better. The loss is useful for the computation of our gradient, but otherwise it doesn't have much use -- it certainly cannot be used as an accurate indicator of performance / learning progress.

+ +
+

but only 80 at max are legal at any single state (think chess - a lot of possibles but very few are actually legal at any given time). Also, if MCTS has 20 simulations, then the improved probabilities vector (against which we train our policy loss) will have at most 20 non-zero entries.

+
+ +

This can be a problem yes. The fact that the majority of moves are not legal at any point in time is not a problem, but if you only have 20 MCTS simulations for a branching factor of 80... that's certainly a problem. The easiest fix would be to simply keep MCTS running for longer, but obviously that's going to take more computation time. If you cannot afford to do this for every turn of self-play, you could try:

+ +
    +
  • using only a low MCTS iteration count for some moves, not adding these distributions to the training data for the policy head
  • +
  • using a larger MCTS iteration count for some other moves, and only using the distributions of these moves as training data for the policy head
  • +
+ +

This idea is described in more detail in Subsection 6.1 of Accelerating Self-Play Learning in Go.

+",1641,,,,,4/24/2019 15:56,,,,0,,,,CC BY-SA 4.0 +12001,1,,,4/24/2019 19:09,,1,35,"

Having two point clouds, the second being a transformation of the first, how could I utilize a neural network in order to solve the pose (transformation in terms of x, y, z, rx, ry, rz) of the second point cloud?

+ +

Since the point clouds can be rather large (~200,000 points), I think it'd be best to first select regions from each point cloud and see if their geometries are similar (still researching the optimal method for this). If they are not similar, I'd choose two new points. If they are similar, I'd use those two regions when implementing the neural network to discern the pose.

+ +

My preliminary research has led me to believe that a Siamese neural network may work in this scenario but I'm not sure if there are better alternatives. One of my goals is to accomplish this without relying on Iterative Closest Point. Any and all insight is appreciated. Thanks.

+",25147,,,,,4/24/2019 19:09,Point Cloud Alignment using a Neural Network?,,0,0,,,,CC BY-SA 4.0 +12002,2,,11985,4/24/2019 19:26,,1,,"

Your suggestion is risky. It might make improvements to your classifier, but it may also reduce generalisation.

+ +

The two conflicting factors in play are:

+ +
    +
  • Adding data points to a model which has capacity to learn more detail can improve its performance.

  • +
  • Training from a different distribution of data points than your target population can reduce its performance.

  • +
+ +

From the first point, you need your model to be able to accept the new data. This can be harder to achieve than you might think at first - if you have tuned some regularisation parameters by using cross-validation, then you may have at least in part saturated the model's capacity in order to prevent over-fitting. That means fitting to new data could require re-tuning your hyper-parameters as well - but you probably won't need to start from scratch, just search nearby.

+ +

It is hard to tell the impact from the second point. There is not going to be a general answer here, too much depends on how the data is arranged for your problem. My gut feeling is that you will notice an improvement to the model initially, but that there will be diminishing returns due to the self-selection of already-correct data points.

+ +

You should definitely keep back a cross-validation set of data distributed as it is in production, that has been properly and fully labelled in order to assess this work. This will be your way to assess whether generalisation is improving using your approach.

+ +

Worst case: You may need to go back and re-label the mistakes in order to significantly improve performance of your model. So I would suggest part of your auto-labelling process should store the misclassified items somewhere so you can re-visit them and spend the extra effort.

+ +

Sadly, even collecting more properly labelled data is not a guaranteed fix - some models already have enough data. You can check whether that might applies in your case by training with different sizes of training set (from the available training data you have so far), and seeing what the trend in performance is when you increase the training set size. In fact, you should probably do this first, before you invest significant effort in collecting more data.

+",1847,,,,,4/24/2019 19:26,,,,4,,,,CC BY-SA 4.0 +12005,1,,,4/25/2019 2:24,,0,197,"

What is the goal of a constraint solver? How are constraints propagated in a constraint satisfaction search?

+ +

Any references are also appreciated.

+",23299,,2444,,9/8/2020 22:45,9/8/2020 22:45,What is the goal of a constraint solver?,,1,0,,,,CC BY-SA 4.0 +12006,2,,12005,4/25/2019 3:15,,2,,"

You want Chapter 6 of Russell & Norvig's AI: A Modern Approach, for a starting place.

+ +

The goal of a constraint solver is to find an assignment of values to variables such that every variable is assigned a value, but none of a list of constraints are violated.

+ +

An example problem that a constraint solver can solve is the problem of assigning students to groups for a project. The variables are the names of the students. Each variable has a set of possible values it could take on that correspond to the names of the groups. The constraints prohibit assigning certain students to the same group (perhaps they hate each other), or require certain students to be assigned to the same group (perhaps they like each other). More complex constraints are also possible (e.g. at least 2 of these 5 students need to be in the same group).

+ +

Constraint propagation is the idea that several constraints might logically imply a stronger constraint. As a simple example, suppose that student $S$ must be assigned to group $1$. That's one constraint. Suppose further that student $T$ must be assigned to the same group as $S$. That's another constraint. Together, however, they imply that $T$ must be assigned to group $1$.

+ +

Most constraint solvers operate by iteratively trying to assign values to variables and then backtracking when a constraint is violated. When assignments are made, different constraint propagation methods make different decisions about how much computational effort to spend deriving the logical consequences of constraints, as opposed to simply trying more assignments of values to variables until the consequences become apparent (one of the existing constraints gets violated).

+ +

The simplest form of constraint propagation is called forward checking. After each speculative assignment of a value to a variable, forward checking looks at all the constraints the variable is involved in. It then derives the logical consequences of the assignments. For example, continuing with the student problem from above, if $S$ was assigned value $1$, then forward checking would notice the constraint $S=T$, and would, effectively, add the constraint $T=1$ (it actually does something a little different, but it has this effect). As another example, if the assignment $T=2$ were made, forward checking would notice that $S=T$, and would add the constraint $S=2$. It would also detect the contradiction that $S=1$ and $S=2$ are both constraints, and would declare the assignment $T=2$ invalid.

+ +

More advanced constraint propagation methods can reason about these kinds of logical consequences before assignment are made. For example, they could derive that $T=1$ before the program starts. Wikipedia has a good summary of these, starting with arc consistency.

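+ +

If it helps to see the control flow, here is a minimal sketch of plain backtracking with constraint checking for the student-grouping example (names and constraints are made up for illustration; full forward checking would additionally prune the domains of the unassigned variables after each assignment):

+ +
students = ["S", "T", "U"]
+groups = [1, 2]
+same = [("S", "T")]        # S and T must be in the same group
+different = [("T", "U")]   # T and U must be in different groups
+
+def consistent(assignment):
+    for a, b in same:
+        if a in assignment and b in assignment and assignment[a] != assignment[b]:
+            return False
+    for a, b in different:
+        if a in assignment and b in assignment and assignment[a] == assignment[b]:
+            return False
+    return True
+
+def backtrack(assignment):
+    if len(assignment) == len(students):
+        return dict(assignment)
+    var = next(s for s in students if s not in assignment)
+    for value in groups:
+        assignment[var] = value          # speculative assignment
+        if consistent(assignment):
+            result = backtrack(assignment)
+            if result is not None:
+                return result
+        del assignment[var]              # backtrack on violation or dead end
+    return None
+
+print(backtrack({"S": 1}))   # e.g. {'S': 1, 'T': 1, 'U': 2}
+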
+",16909,,,,,4/25/2019 3:15,,,,0,,,,CC BY-SA 4.0 +12007,1,12009,,4/25/2019 12:28,,1,130,"

I've been unsure about a principle of Q-Learning, I was hoping someone could clear it up.

+ +

When a new state is encountered, so there are no existing Q-values, and the algorithm decides to exploit rather than explore, how is the move chosen, since all the values are 0?

+ +

Is it chosen randomly? This intuitively would make sense, since after this, the state-move pair would have a value and thus the matrix would get filled up throughout the iterations. But I just want to make sure I understand this correctly...

+ +

Thanks

+",24018,,,,,4/25/2019 15:06,Picking a random move in exploitation in Q-Learning,,1,0,,,,CC BY-SA 4.0 +12008,1,,,4/25/2019 12:56,,1,424,"

I'm building an actor-critic reinforcment learning algorithm to solve environments. I want to use a single encoder to find representation of my environment.

+ +

When I share the encoder with the actor and the critic, my network isn't learning anything:

+ +
class Encoder(nn.Module):
+  def __init__(self, state_dim):
+    super(Encoder, self).__init__()
+
+    self.l1 = nn.Linear(state_dim, 512)
+
+  def forward(self, state):
+    a = F.relu(self.l1(state))
+    return a
+
+class Actor(nn.Module):
+  def __init__(self, state_dim, action_dim, max_action):
+    super(Actor, self).__init__()
+
+    self.l1 = nn.Linear(state_dim, 128)
+    self.l3 = nn.Linear(128, action_dim)
+
+    self.max_action = max_action
+
+  def forward(self, state):
+    a = F.relu(self.l1(state))
+    # a = F.relu(self.l2(a))
+    a = torch.tanh(self.l3(a)) * self.max_action
+    return a
+
+class Critic(nn.Module):
+  def __init__(self, state_dim, action_dim):
+    super(Critic, self).__init__()
+
+    self.l1 = nn.Linear(state_dim + action_dim, 128)
+    self.l3 = nn.Linear(128, 1)
+
+  def forward(self, state, action):
+    state_action = torch.cat([state, action], 1)
+
+    q = F.relu(self.l1(state_action))
+    # q = F.relu(self.l2(q))
+    q = self.l3(q)
+    return q
+
+ +

However, when I use a separate encoder for the actor and a separate one for the critic, it learns properly.

+ +
class Actor(nn.Module):
+def __init__(self, state_dim, action_dim, max_action):
+    super(Actor, self).__init__()
+
+    self.l1 = nn.Linear(state_dim, 400)
+    self.l2 = nn.Linear(400, 300)
+    self.l3 = nn.Linear(300, action_dim)
+
+    self.max_action = max_action
+
+def forward(self, state):
+    a = F.relu(self.l1(state))
+    a = F.relu(self.l2(a))
+    a = torch.tanh(self.l3(a)) * self.max_action
+    return a
+
+class Critic(nn.Module):
+  def __init__(self, state_dim, action_dim):
+    super(Critic, self).__init__()
+
+    self.l1 = nn.Linear(state_dim + action_dim, 400)
+    self.l2 = nn.Linear(400, 300)
+    self.l3 = nn.Linear(300, 1)
+
+  def forward(self, state, action):
+    state_action = torch.cat([state, action], 1)
+
+    q = F.relu(self.l1(state_action))
+    q = F.relu(self.l2(q))
+    q = self.l3(q)
+    return q
+
+ +

I'm pretty sure it's because of the optimizers. In the shared-encoder code, I define them as follows:

+ +
self.actor_optimizer = optim.Adam(list(self.actor.parameters())+
+                                      list(self.encoder.parameters()))
+self.critic_optimizer = optim.Adam(list(self.critic.parameters())
+                                   + list(self.encoder.parameters()))
+
+ +

In the separate-encoder version, it's just:

+ +
self.actor_optimizer = optim.Adam((self.actor.parameters()))
+self.critic_optimizer = optim.Adam((self.critic.parameters()))
+
+ +

Two optimizers are needed because of the actor-critic algorithm, in which the loss of the actor is the value from the critic.

+ +

How can I combine the two optimizers so that the encoder is optimized correctly?

+",25141,,,,,7/9/2023 19:06,How to properly optimize shared network between actor and critic?,,1,14,0,,,CC BY-SA 4.0 +12009,2,,12007,4/25/2019 15:06,,3,,"

It depends on the implementation of the software package that you are using. If you call a function that returns the maximum value and all values are the same then it might return the value at first index or some other one. The point is it doesn't matter which action is chosen since all of them are the best at the same time. So it's basically random but you should treat it as if you are trying to pick the best action.

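+ +

For example, with NumPy (a small illustration, not tied to any particular RL library):

+ +
import numpy as np
+
+q_values = np.zeros(4)        # a brand-new state: every action looks equally good
+print(np.argmax(q_values))    # always returns the first index, 0
+
+# explicit random tie-breaking among the maximal actions
+best = np.flatnonzero(q_values == q_values.max())
+print(np.random.choice(best)) # any of 0, 1, 2, 3
+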
+",20339,,,,,4/25/2019 15:06,,,,5,,,,CC BY-SA 4.0 +12010,1,,,4/25/2019 15:39,,0,143,"

I am trying to understand this post, but I get confused by the definitions and the differences. What's the definition of equivariant?

+ +

If I remove all the pooling layers from a CNN, will it make the network detect features at pixel resolution, for example, the local maximum of a pixel? For instance, can a CNN be designed to return True for the following case?

+ +

+ +

And False for the shifted window:

+ +

+ +

In the second case it returns false because the 3x3 submatrix isn't centered (yellow dash line) around the local maximum.

+ +

Will an architecture that is

+ +
from keras.models import Sequential
+from keras.layers import Dense, Conv2D, Flatten
+
+model = Sequential()
+model.add(Conv2D(128, kernel_size=2, activation='relu', padding='same', input_shape=(3,3,1)))
+model.add(Conv2D(64, kernel_size=2, activation='relu', padding='same'))
+model.add(Flatten())
+model.add(Dense(10, activation='softmax'))
+
+ +

be able to differentiate between the tiling of the larger grayscale image?

+",12975,,12975,,4/28/2019 7:50,4/28/2019 7:50,How can I suppress a CNN’s translation invariant or translation equivariant?,,1,1,,,,CC BY-SA 4.0 +12011,1,,,4/25/2019 16:43,,0,227,"

Can anyone help me in understanding Hebb networking and how different function like AND, OR used to solve by this network.

+

I couldn’t understand properly through the google.

+",23299,,2444,,2/10/2022 14:30,2/10/2022 14:30,Understanding Hebb network,,1,3,,2/13/2022 23:44,,CC BY-SA 4.0 +12012,2,,12010,4/25/2019 16:58,,1,,"

Question 1

+ +

To make it simple

+ +
    +
  • You have a transformation $T$ and an operator $C$ acting on a given input $x$

  • +
  • Let's say you do this experiment

    + +
      +
    • compute $y_{1} = C(T(x))$
    • +
    • compute $y_{2} = C(x)$
    • +
  • +
  • You can get three different results:

    + +
      +
    • $y_{1} = y_{2}$ then you can say the operator is invariant with respect to the given transformation
    • +
    • $y_{1} = T(y_{2})$ then you can say the operator is equivariant to the given transformation as applying it to the input basically reflects its effect completely on the output
    • +
    • none of the 2
    • +
  • +
+ +

Questions 2

+ +

Q2.1

+ +
    +
  • Spatial pooling is not the only way to perform dimensionality reduction. You can achieve it even by simply applying the conv kernel with no padding.
  • +
+ +

For example, let's say your input is a WxHxC tensor and you are applying a KxKxC kernel with no padding: the result will have spatial size (W-K+1)x(H-K+1), i.e. each side shrinks by K-1 (so if K=5, each side shrinks by 4).

+ +
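
You can verify the shape with a one-layer model; a quick Keras sketch (layer sizes chosen arbitrarily for illustration):

+ +
from keras.models import Sequential
+from keras.layers import Conv2D
+
+model = Sequential([Conv2D(8, kernel_size=5, padding='valid', input_shape=(32, 32, 3))])
+print(model.output_shape)   # (None, 28, 28, 8): each spatial side shrinks by K - 1 = 4
+
+ +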

Alternatively you can reduce the spatial domain with stride.

+ +

Q2.2

+ +
    +
  • It seems to me you are talking about a sort of Non Max Suppression operator rather than an emergent behaviour, but certainly with the proper supervision signal you can train a CNN to do this work (even if practically it does not make sense, as you can explicitly define it).
  • +
+",1963,,16229,,4/26/2019 11:48,4/26/2019 11:48,,,,6,,,,CC BY-SA 4.0 +12016,2,,6478,4/26/2019 7:21,,1,,"

From your matrix definitions, the issue is that S is always singular, so it can never be inverted.

+ +

I reimplemented the computation with numpy and here are the numbers

+ + + +
import numpy as np
+
+H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]])
+
+P = np.identity(5) * 1000
+
+R = np.array([[100, 0, 0, 0], [0, 100, 0, 0], [0,0,0,0], [0,0,0,0]])
+
+S=H.dot(P.dot(np.transpose(H)))+R
+
+
+ +

and S is

+ +
array([[1100.,    0.,    0.,    0.],
+       [   0., 1100.,    0.,    0.],
+       [   0.,    0.,    0.,    0.],
+       [   0.,    0.,    0.,    0.]])
+
+
+ +

Basically, the all-zero rows in the definition of H cancel part of the information (making those components unobservable), and you also do not have any observation noise component for them.

+",1963,,,,,4/26/2019 7:21,,,,0,,,,CC BY-SA 4.0 +12017,1,12018,,4/26/2019 9:04,,1,385,"

I have data with about 100 numerical features and a multi-labelling that encodes ownership of a certain product (i.e. my labels are of the form $[x_i, i=1, \dots, n]$, where $n$ is the number of products and $x_i$ is either 0 or 1).

+

My neural network approach to this currently looks like this (in Keras)

+
from keras.models import Sequential
+from keras.layers import Dense
+
+model = Sequential()
+model.add(Dense(1024, activation='relu', input_dim=X_train.shape[1]))
+model.add(Dense(1024, activation='relu'))
+model.add(Dense(1024, activation='relu'))
+model.add(Dense(1024, activation='relu'))
+model.add(Dense(y_train.shape[1], activation='softmax'))
+
+

So, it has a couple of dense layers with ReLu activation, then an output layer with softmax.

+

Now, my question is: will the neural network consider labels of the other products when assigning a probability to the label of one product?

+

I would like that to happen, but I can't quite grasp whether it does (my suspicion is no).

+

I'm new to multi-label classification and relatively new to NN in general, so I hope this isn't too inept a question.

+",25190,,2444,,4/8/2022 20:51,4/8/2022 20:56,Is this neural network with a softmax in the output layer suitable for multi-label classification?,,1,0,,,,CC BY-SA 4.0 +12018,2,,12017,4/26/2019 9:30,,1,,"

Firstly, you should use sigmoid in your last layer instead of softmax. Softmax returns a probability distribution, meaning that when one label's probability increases the others will decrease, which is not always the case. Secondly, in order for Keras to use all the labels, you should use the binary cross-entropy as the loss function.

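+ +

A minimal sketch of those two changes in Keras (assuming, purely for illustration, 100 input features and 10 product labels):

+ +
from keras.models import Sequential
+from keras.layers import Dense
+
+model = Sequential()
+model.add(Dense(1024, activation='relu', input_dim=100))
+model.add(Dense(10, activation='sigmoid'))   # one independent probability per label
+model.compile(optimizer='adam', loss='binary_crossentropy')
+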
+",20430,,2444,,4/8/2022 20:56,4/8/2022 20:56,,,,1,,,,CC BY-SA 4.0 +12019,1,12025,,4/26/2019 10:20,,3,385,"

I have a genetic algorithm that maximizes a fitness function with two variables f(X,Y).

+ +

I have been running the algorithm with various parameters in mutation and crossover probability (0.1, 0.2, ...)

+ +

Since I don't have much theoretical knowledge of GAs, how could I proceed in order to find the optimal values for the mutation and crossover probabilities and, if necessary, the optimal population size?

+",25194,,10135,,5/5/2019 12:46,5/5/2019 12:46,How to find optimal mutation probability and crossover probability?,,1,1,,,,CC BY-SA 4.0 +12020,1,,,4/26/2019 12:36,,1,2209,"

I have an idea for a new type of AI for two-player games with alternating turns, like chess, checkers, connect four, and so on.

+ +

A little background: Traditionally engines for such games have used the minimax algorithm combined with a heuristic function when a certain depth has been reached to find the best moves. In recent days engines using reinforcement learning, etc (like AlphaZero in chess) have increased popularity, and become as strong as or stronger than the traditional minimax engines.

+ +

My approach is to combine these ideas, to some level. A minimax tree with alpha-beta pruning will be used, but instead of considering every move in a position, these moves will be evaluated with a neural net or some other machine learning method, and the moves which seem least promising will not be considered further. The more interesting moves are expanded like in traditional minimax algorithms, and the same evaluation are again done for these nodes' children.

+ +

The pros and cons are pretty obvious: By decreasing the breadth (number of moves in a position), the computation time will be reduced, which again can increase the search depth. The downside is that good moves may not be considered, if the machine learning method used to evaluate moves are not good enough.

+ +

One could of course hope that the position evaluation itself (from the neural net, etc) is good enough to pick the best move, so that no minimax is needed. However, combining the two approaches will hopefully make better results.

+ +

A big motivation for this approach is that it resembles how humans act when playing games like chess. One tends to use intuition (which will be what the neural net represents in this approach) to find moves which looks interesting. Then one will look more thoroughly at these interesting moves by calculating moves ahead. However, one does not do this for all moves, only those which seem interesting. The idea is that a computer engine can play well by using the same approach, but can of course calculate much faster than a human.

+ +

To illustrate the performance gain: The size of a minimax tree is about b^d, where b is the average number of moves possible in each position, and d is the search depth. If the neural net can reduce the size of considered moves b to half, the new complexity will be (b/2)^d. If d is 20, that means reducing the computation time by approx. 1 million.

+ +

My questions are:

+ +
    +
  1. Does anyone see any obvious flaws about this idea, which I might have missed?

  2. +
  3. Has it been attempted before? I have looked a bit around for information about this, but haven't found anything. Please give me some references if you know any articles about this.

  4. +
  5. Do you think the performance of such a system could compete with those of pure minimax or those using deep reinforcement learning?

  6. +
+ +

Exactly how the neural net will be trained, I have not determined yet, but there should be several options possible.

+",17488,,,,,2/9/2021 22:08,Minimax combined with machine learning to determine if a path should be explored,,2,2,,,,CC BY-SA 4.0 +12021,1,12033,,4/26/2019 13:30,,5,804,"

I want to create a simple game which basically consists of 2D circles shooting smaller circles at each other (to make hitbox detection easier for a start). My goal is to create an AI which adapts its own behaviour to the player's. For that, I want to use a NN as its brain. Every frame, the NN is fed the same inputs as the player, and its output is compared to the player's output (outputs in this case are pressed keys, like the up arrow). As inputs, I want to use a couple of different important factors: for example, the direction of the enemy player as a number from 0 to 1.

+ +

I also want to input the direction, size and speed of the enemy's and my own projectiles, and this is where my problem lies. If there were only one bullet per player, it would be easy, but I want the number of bullets to be variable, so the number of input neurons would have to be variable.

+ +

My approaches so far: 1) use a large, fixed number of input neurons and set the unused ones to 0 (not elegant at all); 2) instead of specific values, just use all the pixels' RGB values as inputs (this would limit the game, as colours would have to deliver all the information, and factors like speed and direction would probably not have any impact).

+ +

Is there a more promising approach to this problem? I hope you can give me some inspiration.

+ +

Also, is there a difference between scaling input values to the range 0/1 versus -1/1?

+ +

Thank you in advance, Mo

+ +

Edit: In case there aren't enough questions for you, is there a way to make the NN remember things? For example, if I added a mechanic to the game which involves holding a key, I would add an input neuron which inputs 1 if that key is pressed and 0 if it isn't, but I doubt that would work.

+",25201,,25201,,4/26/2019 13:44,4/28/2019 15:22,Neural Network with varying inputs (for a game ai),,2,0,,,,CC BY-SA 4.0 +12023,1,,,4/26/2019 13:58,,0,473,"

Which libraries can be used for image caption generation?

+",24076,,2444,,4/26/2019 16:23,9/19/2020 3:58,Which libraries can be used for image caption generation?,,2,3,,2/16/2021 10:40,,CC BY-SA 4.0 +12024,1,,,4/26/2019 14:50,,1,81,"

Suppose we have a data set $X$ that is split as $X_{\text{train}}$, $X_{\text{val}}$ and $X_{\text{test}}$ and the outcome variable is binary. Let's say we train three different models (logistic regression, random forest, and a support vector machine) using $X_{\text{train}}$. We then get predictions for $X_{\text{val}}$ using each of the three models.

+ +

In stacking, is it correct to say that we train a logistic regression model on a data set of dimension $|X_{\text{val}}| \times 3$ with the predicted values and actual values of the validation set? This logistic regression model is then used to predict outcomes for data in $X_{\text{test}}$?

+",25207,,2444,,4/28/2019 12:22,5/7/2023 20:06,Do we train a logistic regression model using a dataset that is 3 times bigger than the validation dataset?,,1,2,,,,CC BY-SA 4.0 +12025,2,,12019,4/26/2019 21:08,,3,,"

As @Oliver Mason says, picking the parameters that control the behavior of a GA (which are sometimes called ""hyperparameters"") is historically more of an art than a science.

+ +

The evolutionary computation literature has many theories about the merits of high vs. low mutation, and high vs. low crossover. Most practitioners I have worked with use either high crossover, low mutation (e.g. Xover = 80%, mutation = 5%), or moderate crossover, moderate mutation (e.g. Xover = 40%, mutation = 40%).

+ +

In more recent years, the field of hyperparameter optimization has emerged and focuses on developing automatic approaches to picking these parameters. A very simple example of hyperparameter optimization is the GridSearchCV function in ScikitLearn. This systematically tries every combination of, say, 10 crossover values with evey one of 10 mutation values, and reports on which one works best. It uses Cross Validation to prevent overfitting during this process. A more complex approach is Bayesian Hyperparameter Optimization, which performs a sort of optimal experiment design to uncover the best values using as few tests as possible. This approach has been quite successful in tuning the hyperparamters of deep neural networks, for example.

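+ +

In the GA setting you can do the same kind of grid search by hand; in the sketch below, run_ga is a hypothetical helper that runs your GA with the given settings and returns the best fitness found (average it over a few random seeds, since GAs are stochastic):

+ +
best = None
+for crossover_prob in [0.2, 0.4, 0.6, 0.8]:
+    for mutation_prob in [0.01, 0.05, 0.1, 0.2, 0.4]:
+        fitness = run_ga(crossover_prob, mutation_prob, pop_size=100)   # hypothetical helper
+        if best is None or fitness > best[0]:
+            best = (fitness, crossover_prob, mutation_prob)
+print(best)   # (best fitness, crossover prob, mutation prob)
+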
+",16909,,,,,4/26/2019 21:08,,,,0,,,,CC BY-SA 4.0 +12026,1,,,4/27/2019 5:28,,1,583,"

I need help with ExpectiMinimax problem:

+ +
    +
  1. Start a game.
  2. +
  3. The first player flips a coin.
  4. +
  5. The second player flips a coin.
  6. +
  7. The first player decides if he wants to flip another coin.
  8. +
  9. The second player decides if he wants to flip another coin.
  10. +
  11. Game over.
  12. +
+ +

The winner is the player who has earned more points. The points are computed as follows:

+ +
    +
  1. If the player flips once and gets heads - 1 point.
  2. +
  3. If the player flips once and gets tails - 2 points.
  4. +
  5. If the player flips twice and gets 2 heads - 4 points.
  6. +
  7. If the player flips twice and gets 2 tails - 4 points.
  8. +
  9. If the player flips twice and gets one head and one tail - 0 points.
  10. +
+ +

I need to draw the ExpectiMinimax tree associated with this problem, and write the value of each node.

+ +

Did I draw the tree properly?

+ +

+",25217,,16909,,10/16/2019 23:45,6/28/2023 7:08,Is this ExpectiMinimax Tree correctly drawn?,,1,0,,,,CC BY-SA 4.0 +12027,2,,7803,4/27/2019 6:39,,2,,"

Deep Learning is not a generalization of Hopfield networks. Deep Learning is a "generalization" of the neural networks/connectionism field started by Rumelhart and McClelland.

+

There are two kinds of neural networks:

+
    +
  • Directed (Perceptron, MLP, ConvNets, RNNs, etc.)
  • +
  • Undirected (Hopfield Nets, Boltzmann Machines, Energy-based models, etc.)
  • +
+

Any of these can be made deep. As you said, Boltzmann machines are the probabilistic version of Hopfield Networks, and there has been a lot more work on deepifying these models than Hopfield nets: Deep Boltzmann machines, Deep Belief Networks, and deep energy models. Hinton is really the guy you want to read to learn about these models, but you can have a look at this paper which compares the three models.

+

Not sure about the Gestalt organisation. I guess I'll leave that to your interpretation.

+",8637,,-1,,6/17/2020 9:57,4/27/2019 6:39,,,,3,,,,CC BY-SA 4.0 +12028,2,,11953,4/27/2019 11:11,,0,,"

Totally agree with @John's answer. Will try and complement that with some more points.

+ +

Some advantages of SVMs:

+ +

a) SVM is defined by a convex optimisation problem for which there are efficient methods to solve, like SMO.

+ +

b) Effective in high dimensional spaces and also in cases where number of dimensions is greater than the number of samples.

+ +

c) Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.

+ +

d) Different Kernel functions can be specified for the decision function.. In its simplest form, the kernel trick means transforming data into another dimension that has a clear dividing margin between classes of data.

+ +

The disadvantages of support vector machines include:

+ +

a) If the number of features is much greater than the number of samples, avoiding over-fitting in choosing Kernel functions and regularization term is crucial. Kernel models can be quite sensitive to over-fitting the model selection criterion

+ +

b) SVMs do not directly provide probability estimates. In many classification problems you actually want the probability of class membership, so it would be better to use a method like Logistic Regression, rather than post-process the output of the SVM to get probabilities.

+",16708,,16708,,9/28/2019 14:35,9/28/2019 14:35,,,,0,,,,CC BY-SA 4.0 +12029,1,,,4/27/2019 11:35,,3,11062,"

The AI must predict the next number in a given sequence of increasing integers (with no obvious pattern) using Python, but so far I don't get the intended result! I tried changing the learning rate and the number of iterations, but so far no luck!

+ +

Example sequence: [1, 3, 7, 8, 21, 49, 76, 224]

+ +

Expected result: 467

+ +

Result found : 2,795.5

+ +

Cost: 504579.43

+ +

This is what I've done so far:

+ +
import numpy as np
+
+# Init sequence
+data =\
+    [
+        [0, 1.0], [1, 3.0], [2, 7.0], [3, 8.0],
+        [4, 21.0], [5, 49.0], [6, 76.0], [7, 224.0]
+    ]
+
+X = np.matrix(data)[:, 0]
+y = np.matrix(data)[:, 1]
+
+def J(X, y, theta):
+    theta = np.matrix(theta).T
+    m = len(y)
+    predictions = X * theta
+    sqError = np.power((predictions-y), [2])
+    return 1/(2*m) * sum(sqError)
+
+dataX = np.matrix(data)[:, 0:1]
+X = np.ones((len(dataX), 2))
+X[:, 1:] = dataX
+
+# gradient descent function
+def gradient(X, y, alpha, theta, iters):
+    J_history = np.zeros(iters)
+    m = len(y)
+    theta = np.matrix(theta).T
+    for i in range(iters):
+        h0 = X * theta
+        delta = (1 / m) * (X.T * h0 - X.T * y)
+        theta = theta - alpha * delta
+        J_history[i] = J(X, y, theta.T)
+    return J_history, theta
+print('\n'+40*'=')
+
+# Theta initialization
+theta = np.matrix([np.random.random(), np.random.random()])
+
+# Learning rate
+alpha = 0.02
+
+# Iterations
+iters = 1000000
+
+print('\n== Model summary ==\nLearning rate: {}\nIterations: {}\nInitial theta: {}\nInitial J: {:.2f}\n'
+  .format(alpha, iters, theta, J(X, y, theta).item()))
+print('Training model... ')
+
+# Train model and find optimal Theta value
+J_history, theta_min = gradient(X, y, alpha, theta, iters)
+print('Done, Model is trained')
+print('\nModelled prediction function is:\ny = {:.2f} * x + {:.2f}'
+  .format(theta_min[1].item(), theta_min[0].item()))
+print('Cost is: {:.2f}'.format(J(X, y, theta_min.T).item()))
+
+# Calculate the predicted profit
+def predict(pop):
+    return [1, pop] * theta_min
+
+# Now
+p = len(data)
+print('\n'+40*'=')
+print('Initial sequence was:\n', *np.array(data)[:, 1])
+print('\nNext numbers should be: {:,.1f}'
+  .format(predict(p).item()))
+
+ +

UPDATE: Another method I tried, but it still gives wrong results.

+ +
import numpy as np
+from sklearn import datasets, linear_model
+
+# Define the problem
+problem = [1, 3, 7, 8, 21, 49, 76, 224]
+
+# create x and y for the problem
+
+x = []
+y = []
+
+for (xi, yi) in enumerate(problem):
+    x.append([xi])
+    y.append(yi)
+
+x = np.array(x)
+y = np.array(y)
+# Create linear regression object
+regr = linear_model.LinearRegression()
+regr.fit(x, y)
+
+# create the testing set
+x_test = [[i] for i in range(len(x), 3 + len(x))]
+
+# The coefficients
+print('Coefficients: \n', regr.coef_)
+# The mean squared error
+print(""Mean squared error: %.2f"" % np.mean((regr.predict(x) - y) ** 2))
+# Explained variance score: 1 is perfect prediction
+print('Variance score: %.2f' % regr.score(x, y))
+
+# Do predictions
+y_predicted = regr.predict(x_test)
+
+print(""Next few numbers in the series are"")
+for pred in y_predicted:
+    print(pred)
+
+",3894,,3894,,4/27/2019 21:37,5/9/2019 10:43,Use Machine Learning/Artificial Intelligence to predict next number (n+1) in a given sequence of random increasing integers,,2,4,,,,CC BY-SA 4.0 +12031,2,,12029,4/27/2019 17:27,,3,,"

I think your code works fine for what it is meant to be doing - fitting a linear regression model. The problem here is that you are using a linear model. A linear model does not have adequate approximation capacity; it will only be able to fit data that is described by a linear function. Here, you gave a random sequence of numbers, which is very difficult for a linear model to approximate. I would advise you to try 2 things:

+ +

1) Try something simpler first. Instead of a random sequence of numbers, use a linear sequence of numbers, for example a function like $y = 2x$, or maybe an affine function like $y = 2x + 5$. So you would have a sequence like:

+ +

$2, 4, 6, 8 ...$ or $7, 9, 11, 13, ...$

+ +

If you manage to get that working try a nonlinear function like $x^2$ for example.

+ +

2) Instead of using a linear model, try a nonlinear model, for example a polynomial regression model. Especially powerful function approximators are neural networks. In theory, a neural network with a single hidden layer can approximate an arbitrary continuous function under some conditions (the universal approximation theorem), so you could try to see how a neural network would solve the problem; there are several open-source neural network libraries that you could try.

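+ +

For instance, here is a quick polynomial-regression sketch with scikit-learn (the degree is just an assumption you would have to tune):

+ +
import numpy as np
+from sklearn.linear_model import LinearRegression
+from sklearn.preprocessing import PolynomialFeatures
+from sklearn.pipeline import make_pipeline
+
+x = np.arange(8).reshape(-1, 1)
+y = np.array([1, 3, 7, 8, 21, 49, 76, 224])
+
+model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
+model.fit(x, y)
+print(model.predict([[8]]))   # prediction for the next index in the sequence
+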
+",20339,,,,,4/27/2019 17:27,,,,1,,,,CC BY-SA 4.0 +12032,2,,12021,4/27/2019 19:11,,2,,"

The most generic approach is to input all pixels as you have suggested. A CNN would be the best architecture for that. To provide information like speed or velocity, you can feed more than one frame to the CNN (e.g. the last 5 frames or whatever provides enough information). The CNN can learn movement information by comparing those images.

+ +

If you want to store additional information (like an inventory item), an input neuron for each value would be an option. You can also look up LSTM (long short-term memory) models, but for your specific situation a hardcoded neuron would be the easier solution.

+",9161,,,,,4/27/2019 19:11,,,,1,,,,CC BY-SA 4.0 +12033,2,,12021,4/28/2019 3:12,,2,,"

I recommend preprocessing images and feeding pixel values of several combined images. Some ideas:

+ +
    +
  1. Preprocess all images to grayscale if possible. It’ll reduce the number of input neurons. (As long as this step doesn’t introduce large overhead)

  2. +
  3. Select some $\gamma$ value such that 0 < $\gamma$ < 1. Generate (ie. Select from your game) $n$ sequential images. For the $k$th image in the sequence, multiply every pixel value by $\gamma^{n-k-1}$. This assumes we index $k$ starting at zero.

  4. +
  5. Sum the pixel values of all processed images with clip ~ [0, 255] (for a valid range of values)

  6. +
+ +

This will yield a single image where stationary pixels will be summed to create brighter / more saturated spots, where moving objects will have “shadows” or “tails” which are faded with each time step ($\gamma$ is the “fading factor” so to speak).

+ +
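
A minimal NumPy sketch of the decay-and-sum steps above (assuming frames is a list of grayscale arrays ordered oldest-to-newest):

+ +
import numpy as np
+
+def combine_frames(frames, gamma=0.8):
+    n = len(frames)
+    out = np.zeros_like(frames[0], dtype=np.float64)
+    for k, frame in enumerate(frames):
+        out += (gamma ** (n - k - 1)) * frame   # newest frame gets weight gamma^0 = 1
+    return np.clip(out, 0, 255)
+
+ +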

Image input: As long as all values are on a comparable scale, it shouldn’t really matter whether inputs are on range [-1, 1] or [0, 1], but since you’ll be using pixel values, they will all be positive. So normalizing the pixel values will yield a range [0, 1].

+ +

Note: this kind of processing can probably be done iteratively with greater efficiency by summing, then multiplying by gamma at each time step. Then you can implement it online.

+ +

Now consider what you want the OUTPUT of the network to be. If you want the agent to take an action after processing the inputs, your output later should consist of one neuron per discrete action (ie, each “button” that can be pushed). I will limit my answer to discrete actions since that is likely the most useful answer for this question.

+ +

Finally, you asked about if the network can “remember things,” like “holding down a key.” This question is a bit vague, but let me try to answer. It sounds like you were considering using this as an INPUT to the network. In theory, you could use a similar implementation (ie. At every time step measure if the button was pressed. Perhaps use 1 if pressed 0 otherwise. Decay at every time step and sum. With n time steps, the sum will have a max value of $\sum_{n}(\gamma^{n-1})$). Remember to decay by $n-k-1$, with $k$ starting at zero. You don’t have to actually decay this value, but decaying by a factor of gamma helps the network know if for instance the button was pressed near the first frame or the last frame.

+ +

With that said, I don’t know if you want to use this as an input. If the AI is meant to have more information than the opponent, than I suppose you can. But then the agent will not be learning from the same information as the opponent. Also, if holding the button produces a clearly visible effect, that information would be captured in the images already, so might be a redundant input.

+ +

These ideas are not the only implementation, but can get you going. It sounds like you’ll need a measure of reward and likely need to structure this as an RL problem. The details of that are beyond the scope of this post and I don’t want to get too afield of your original question. Just note that comparing to the players output may not give you the results you want, and even if it did, your network will be limited to learning only to mimic the other player. Using a measure of reward will allow your agent to theoretically advance beyond the skill of its opponent by taking actions that maximize reward, even if the opponent would not have thought to take that action.

+ +

I hope this helps.

+",16343,,16343,,4/28/2019 15:22,4/28/2019 15:22,,,,4,,,,CC BY-SA 4.0 +12034,1,,,4/28/2019 3:47,,7,69653,"

These guys here: https://www.patreon.com/AiAngel are saying that they've created an AI that can chat and stream. As the so-called administrator ""Rogue"" said:

+ +

+ +

this chat/streamer bot is no fake. Also, there's more about the dynamics of this chat/streamer bot on YouTube:

+ +

https://www.youtube.com/watch?v=WyFwjHQhlgo&t=463s

+ +

https://www.youtube.com/watch?v=GtvivssqLhE

+ +

Considering those videos, I really think that this bot is totally fake. I mean, I think that even the most advanced AI bots do not get even close to a real conversation like this one.

+ +

Now, of course, you can say that this is an artistic project or something, but the people behind all of this are on Patreon, and the people who are paying these guys are possibly getting totally fooled, which is a serious thing when we're talking about real money.

+ +

So, is AIAngel a real bot? (With this question I'm spreading this possible fake to community)

+",25232,,1671,,4/29/2019 18:16,4/16/2020 19:32,"Is ""AIAngel"" (Patreon) a fake?",,4,4,,,,CC BY-SA 4.0 +12035,2,,11953,4/28/2019 5:31,,-1,,"

For datasets of low-dimensional tabular data. DNNs are not efficient on low-dimensional input because of huge overparametrisation. So, even if the dataset is huge in size, if each sample is low-dimensional, an SVM would beat a DNN.

+ +

More generally, if the data is tabular and the correlation between the fields of a sample is weak and noisy, an SVM may still beat a DNN even for high-dimensional data, but that depends on the specifics of the data.

+ +

Unfortunately, I can't recall any specific papers on the subject, so this is mostly common-sense reasoning; you don't have to trust it.

+",22745,,,,,4/28/2019 5:31,,,,0,,,,CC BY-SA 4.0 +12037,1,,,4/28/2019 10:41,,3,42,"

Does anyone know of resources (papers, articles and especially repositories) regarding controlling multiple units with RL?

+ +

The controlled units should not be fixed, for example in Real Time Strategy the agent builds various units (workers, soldiers ...) and later controls them. During the game various units could die and new ones are built.

+ +

I think a good contemporary example is AlphaStar, while OpenAI Five controls just a single agent. This might be incorrect, since I've never played those games.

+",25236,,,,,4/28/2019 10:41,Code examples of controlling multiple units with RL,,0,0,,,,CC BY-SA 4.0 +12038,2,,12011,4/28/2019 12:46,,6,,"

In machine learning, the idea behind Hebbian learning is to strengthen (or weaken) the connection (the weight) between the neurons that have similar (or, respectively, dissimilar) outputs, where ""similar"" can be defined in different ways (e.g. it could be based on the sign of the output of the neurons).

+ +

Hebbian learning is a more biologically plausible way of learning than back-propagation, because it is a ""local learning strategy"" (you locally update the connections and not all the connections of the model at the same time), as opposed to back-propagation, which is a ""global learning strategy"" (where all connections are usually updated at once, given a ""global error"" of the network).

+ +

There are several neural networks (or models) that can learn in a Hebbian fashion: for example, the Hopfield network or Numenta's temporal memory.
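
As a minimal sketch of the idea (a single linear neuron in NumPy with a plain Hebbian rule; the learning rate and sizes are only illustrative):

    import numpy as np

    eta = 0.01                      # learning rate (assumed)
    w = np.random.randn(5) * 0.1    # weights of one neuron
    x = np.random.randn(5)          # pre-synaptic activity (input)

    y = w @ x                       # post-synaptic activity (output)
    w += eta * y * x                # Hebbian update: strengthen weights where input and output agree in sign

In practice, a normalised variant (e.g. Oja's rule) is often used so that the weights do not grow without bound.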

+",2444,,2444,,4/28/2019 12:57,4/28/2019 12:57,,,,0,,,,CC BY-SA 4.0 +12040,2,,11937,4/28/2019 13:13,,0,,"

All weight matrices in a neural network adapt to map inputs to outputs. ReLU, as you pointed out, doesn't give negative derivatives; you're right. But notice that the weight update equation in backpropagation uses a multitude of quantities, such as:

+ +
    +
  • Error value
  • +
  • Activation from the previous layer
  • +
  • Weight matrix of the previous layer
  • +
  • The pre-activation cache of the previous layer
  • +
+ +

These can be positive or negative. Hence the weight updates dW and db can be positive or negative, allowing the values of the weights to decrease (or increase).
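
A tiny sketch of this point (a single weight with a squared-error loss, in NumPy; the numbers are only illustrative), showing that the sign of the gradient depends on the error term, not on ReLU:

    import numpy as np

    relu = lambda z: np.maximum(0.0, z)

    w, x = 0.5, 2.0
    for y in (3.0, 0.1):                      # one target above, one below the prediction
        z = w * x
        a = relu(z)
        # dL/dw for L = (a - y)^2, with relu'(z) = 1 because z > 0 here
        grad = 2 * (a - y) * (z > 0) * x
        print(y, grad)                        # negative gradient for y = 3.0, positive for y = 0.1

So, with gradient descent (w -= lr * grad), the weight can move up or down even though ReLU itself never outputs negative values.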

+",24122,,,,,4/28/2019 13:13,,,,0,,,,CC BY-SA 4.0 +12041,2,,12024,4/28/2019 13:56,,0,,"

The predictions of each of your initial models will become a feature to feed the meta learner. If you use $n$ initial models, then for each example you will feed the meta learner $n$ features, each feature being a prediction from one of the initial models.

+ +

Note that this doesn’t mean the size of your dataset increases. Instead, each member of $X_{val}$ will be represented by $n$ features, with the $k$th feature being equal to the initial prediction of the $k$th initial predictor.
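
A small sketch of that idea (assuming models is a list of already-fitted base estimators with a scikit-learn-style predict, and meta_model is the meta learner; all of these names are placeholders):

    import numpy as np

    # each column of meta_X is one base model's predictions on the validation set
    meta_X = np.column_stack([m.predict(X_val) for m in models])
    meta_model.fit(meta_X, y_val)

    # at inference time, build the same n features from the base models first
    meta_test = np.column_stack([m.predict(X_test) for m in models])
    predictions = meta_model.predict(meta_test)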

+ +

This medium link recommends a package that can help you build a pipeline.

+ +

Note: in a sense if you are caching the intermediate predictions of each initial predictor, you’ll end up with $n$ times as many data points as there are entries in $X_{val}$. But these are simply each going to be a feature of another data point, so the dataset hasn’t really increased in size.

+",16343,,16343,,4/28/2019 15:02,4/28/2019 15:02,,,,0,,,,CC BY-SA 4.0 +12042,1,12047,,4/28/2019 16:55,,13,2744,"

Surprisingly, this wasn't asked before - at least I didn't find anything besides some vaguely related questions.

+ +

So, what is a recurrent neural network, and what are their advantages over regular (or feed-forward) neural networks?

+",23527,,2444,,6/2/2020 11:38,6/2/2020 11:38,What is a recurrent neural network?,,2,1,,,,CC BY-SA 4.0 +12043,2,,12042,4/28/2019 17:39,,11,,"

A recurrent neural network (RNN) is an artificial neural network that contains backward or self-connections, as opposed to just having forward connections, like in a feed-forward neural network (FFNN). The adjective ""recurrent"" thus refers to these backward or self-connections, which create loops in these networks.

+ +

An RNN can be trained using back-propagation through time (BPTT), such that these backward or self-connections ""memorise"" previously seen inputs. Hence, these connections are mainly used to track temporal relations between elements of a sequence of inputs, which makes RNNs well suited to sequence prediction and similar tasks.

+ +

There are several RNN models: for example, RNNs with LSTM or GRU units. LSTM (or GRU) is an RNN whose single units perform a more complex transformation than a unit in a ""plain RNN"", which performs a linear transformation of the input followed by the application of a non-linear function (e.g. ReLU) to this linear transformation. In theory, ""plain RNNs"" are as powerful as RNNs with LSTM units. In practice, they suffer from the ""vanishing and exploding gradients"" problem. Hence, in practice, LSTMs (or similar sophisticated recurrent units) are used.
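
For concreteness, the update performed by a ""plain RNN"" at each time step can be sketched as follows (NumPy, with illustrative sizes and a tanh non-linearity; sequence is a stand-in input sequence):

    import numpy as np

    input_dim, hidden_dim = 8, 16
    W_x = np.random.randn(hidden_dim, input_dim) * 0.1
    W_h = np.random.randn(hidden_dim, hidden_dim) * 0.1
    b = np.zeros(hidden_dim)

    sequence = [np.random.randn(input_dim) for _ in range(20)]   # stand-in input sequence
    h = np.zeros(hidden_dim)                                     # initial hidden state ('memory')
    for x_t in sequence:
        h = np.tanh(W_x @ x_t + W_h @ h + b)                     # W_h @ h is the recurrent (self-) connection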

+",2444,,,,,4/28/2019 17:39,,,,0,,,,CC BY-SA 4.0 +12044,1,,,4/29/2019 2:51,,3,144,"

I am looking for good introductory and advanced books on unsupervised learning. I have already read books like Probabilistic Graphical Models from D. Kholler and Pattern Recognition and Machine Learning from C. M. Bishop. I am also very familiar with the Ph.D. thesis of K. P. Murphy, on Dynamic Bayesian Networks.

+

I have read all of the above mostly for the probability aspects, not really the applications to AI and ML. I would like to know what are the good books (or references) for unsupervised learning, that focuses on practical exercises and examples instead of deep and abstract concepts.

+",25262,,2444,,1/17/2021 19:31,1/17/2021 19:31,What are examples of good reference books on unsupervised learning?,,0,1,,,,CC BY-SA 4.0 +12046,1,12060,,4/29/2019 7:03,,1,1109,"

I recently watched a YouTube video (sorry, can't remember the link) where (a very talented) someone created what they called a ""static AI"".

+ +

Somewhere in the video they said something along the lines of:

+ +
+

""this is a static AI, it's very simple and not dynamic at all""

+
+ +

What does this mean? What's the difference between a static AI and a dynamic AI?

+",25274,,1671,,10/15/2019 19:24,10/15/2019 19:24,What's the difference between a static AI and a dynamic AI?,,1,1,,1/18/2022 22:22,,CC BY-SA 4.0 +12047,2,,12042,4/29/2019 7:12,,9,,"

Recurrent neural networks (RNNs) are a class of artificial neural network +architecture inspired by the cyclical connectivity of neurons in the brain. It uses iterative function loops to store information.

+ +

Difference with traditional Neural networks using pictures from this book:

+ +

+ +

And, an RNN:

+ +

+ +

Notice the difference -- feedforward neural networks' connections +do not form cycles. If we relax this condition, and allow cyclical +connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.

+ +

While the difference between a multilayer perceptron and an RNN may seem +trivial, the implications for sequence learning are far-reaching. An MLP can only +map from input to output vectors, whereas an RNN can in principle map from +the entire history of previous inputs to each output. Indeed, the equivalent +result to the universal approximation theorem for MLPs is that an RNN with a +sufficient number of hidden units can approximate any measurable sequence-to-sequence +mapping to arbitrary accuracy.

+ +

Important takeaway:

+ +

The recurrent connections allow a 'memory' of previous inputs to persist in the +network's internal state, and thereby influence the network output.

+ +

Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:

+ +

Sequence Labelling

+ +

The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.

+ +

Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).

+ +

Some of the sub-tasks in sequence labelling are:

+ +

Sequence Classification

+ +

Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification tasks include the identification of a single spoken word and the recognition of an individual +handwritten letter.

+ +

Segment Classification

+ +

Segment classification refers to those tasks where the target sequences consist +of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.

+",16708,,16708,,4/29/2019 13:54,4/29/2019 13:54,,,,0,,,,CC BY-SA 4.0 +12048,1,,,4/29/2019 8:07,,4,94,"

In Section 4.3 of the paper Learning by Playing - Solving Sparse Reward Tasks from Scratch, the authors define Retrace as +$$ +Q^{ret}=\sum_{j=i}^\infty\left(\gamma^{j-i}\prod_{k=i}^jc_k\right)[r(s_j,a_j)+\delta_Q(s_i,s_j)],\\ +\delta_Q(s_i,s_j)=\mathbb E_{\pi_{\theta'}(a|s)}[Q^\pi(s_i,\cdot;\phi')]-Q^\pi(s_j,a_j;\phi')\\ +c_k=\min\left(1,{\pi_{\theta'}(a_k|s_k)\over b(a_k|s_k)}\right) +$$ +where I omit $\mathcal T$ for simplicity. +I'm quite confused about the definition of $Q^{ret}$, which seems not consistent with Retrace defined in Safe and efficient off-policy reinforcement learning:

+

$$ +\mathcal RQ(x,a):=Q(x,a)+\mathbb E_\mu\left[\sum_{t\ge0}\gamma^t\left(\prod_{s=1}^tc_s\right)\left(r_t+\gamma\mathbb E_\pi Q(x_{t+1},\cdot)-Q(x_t,a_t)\right)\right] +$$

+

What should I make of $Q^{ret}$ in the first paper?

+",8689,,2444,user9947,12/19/2021 19:43,12/19/2021 19:43,Why is there an inconsistency in the definitions of the retrace?,,0,0,,,,CC BY-SA 4.0 +12052,1,,,4/29/2019 13:20,,1,28,"

I'm building a model to detect duplicates in my database (there are a lot of features that can differ but still refer to the same object in the end).

+ +

So I have my feature vector for my duplicate dataset, which contains the score, distance and relation between 2 identical objects that I labelled, +such as (0, 0, 123, 14000, 5, 10, 0, 0, -1)

+ +

Since duplicates are a rare event, I was wondering what size of dataset I should use for the non-duplicate examples. Since I want my model to learn about the disparity of the multiple features I have, I thought I should have about 10 times as many non-duplicate examples, and in my model change the weight of the duplicate class by multiplying it by 10.

+ +

Is that a good thing to do or is it better to take 50%/50% of duplicate/non duplicate features for my model ?

+ +

Also, should I apply filters to choose my non-duplicate dataset, in order to have objects that are close on some features but different on others? Or should I take them randomly among all the data I have?

+",23107,,,,,4/29/2019 13:20,Size of dataset for feature vector with rare event,,0,0,,,,CC BY-SA 4.0 +12053,1,,,4/29/2019 15:30,,4,287,"

I have recently started working on a control problem using a Deep Q Network as proposed by DeepMind (https://arxiv.org/abs/1312.5602). Initially, I implemented it without Experience Replay. The results were very satisfying, although after implementing ER, the results I got were relatively bad. Thus I started experimenting with BATCH SIZE and MEMORY CAPACITY.

+ +
    +
  • (1) I noticed that if I set BATCH SIZE = 1 and MEMORY CAPACITY = 1 i.e. the same as doing normal online learning as previously, the results are then (almost) the same as initially.

  • +
  • (2) If I increased CAPACITY and BATCH SIZE e.g. CAPACITY = 2000 and BATCH SIZE = 128, the Q Values for all actions tend to converge to very similar negative values.

  • +
+ +

A small negative reward -1 is received for every state transition except of the desired state which receives +10 reward. My gamma is 0.7. Every state is discrete and the environment can transition to a number of X states after action a, with every state in X having a significant probability.

+ +

Receiving a positive reward is very rare as getting to a desired state can take a long time. Thus, when sampling 128 experiences if 'lucky' only a small amount of experiences may have a positive reward.

+ +

Since, when doing mini-batch training, we average the loss over all the samples and then update the DQN, I was wondering whether the positive rewards can generally become meaningless, as they are 'dominated' by the negative ones. Would this result in much slower convergence to the actual values? And does it also justify the convergence to similar negative values as in (2)? Is this something expected? I am looking to implement Prioritised ER as a potential solution to this, but is there something wrong in the above logic?

+ +

I hope this makes sense. Please forgive me if I made a wrong assumption above, as I am new to the field.

+ +

Edit: The problem seemed to be that, since rewards were found very rarely, they were almost never sampled, especially at the beginning of training, which in turn resulted in very slow convergence to the actual Q values. The problem was successfully solved using Prioritised ER - but I believe any form of careful stratified sampling would give good results.

+",25288,,25288,,5/1/2019 20:05,5/31/2019 21:03,Experience Replay Not Always Giving Better Results,,1,7,,,,CC BY-SA 4.0 +12054,2,,12053,4/29/2019 17:22,,1,,"

What you describe sounds to me like a problem inherent to off policy learning, and what you describe seems to me to be a reasonable interpretation of what may be happening.

+ +

When you implemented experience replay with capacity = 1 and batch_size = 1 you said you got “almost” the same results as before. There are probably two reasons for this being “almost” the same. One is simply the random initializations of the networks, so as you train, you will potentially converge around the same point but not exactly to it (also the stochastic nature of generating the training samples). The other reason might be to do with what has already occurred each time you update the target net weights, so your error terms may differ slightly at each point in time, but asymptotically converge.

+ +

This is essentially on-policy training, so every training sample is following a trajectory (they are all sequential states). Eventually, given enough time, this trajectory will reach a goal state and be rewarded. The reward will be propagated through the network, and the backed-up values of other states will be updated as well. So essentially each episode ends with a reward (I presume), and the average reward given per episode is proportional to the average length of an episode.

+ +

When increasing capacity and batch size > 1, we move to true off-policy training. When sampling, the updates are not following a trajectory. And as such, there is no guarantee that we EVER sample any positive reward (although I’m sure you will at some point). So, if we are averaging over the rewards in the updates, the average reward given per “episode” is no longer proportional to the average episode length (and the idea of an episode starts to lose some of its relevance - since we aren’t following a trajectory). Thus the effect of the reward on all other states is not in proportion to what it was when following an on-policy trajectory.

+ +

You could try some hacks to investigate, for example try making your positive reward > batch_size. Or, if you have some statistics on how often your goal state is being sampled, perhaps scale it up by that factor. (Or if you know the size of your state space, make your reward greater than that).

+ +

This blog post offers some more elegant refinements, like prioritized ER, which you mentioned. But it would be interesting to see whether scaling up your reward will overcome the effect of averaging over many negative rewards.
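
As an illustration of the prioritized-ER idea (only a sketch, not a full implementation): transitions are sampled with probability proportional to a priority such as the absolute TD error, so rare rewarded transitions are revisited more often:

    import numpy as np

    def sample_batch(buffer, priorities, batch_size, eps=1e-6):
        # buffer: list of transitions; priorities: |TD error| per transition (both assumed to exist)
        p = np.asarray(priorities, dtype=np.float64) + eps
        p /= p.sum()
        idx = np.random.choice(len(buffer), size=batch_size, p=p)
        return [buffer[i] for i in idx], idx   # idx is returned so priorities can be updated after the learning step

Note that a full implementation would also use importance-sampling weights to correct the bias introduced by the non-uniform sampling.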

+",16343,,16343,,4/29/2019 19:08,4/29/2019 19:08,,,,8,,,,CC BY-SA 4.0 +12055,2,,12034,4/29/2019 18:06,,2,,"

Without being able to interact with the bot, a Turing test is impossible.

+ +
    +
  • Based on the manner in which this supposed bot is presented, in videos as opposed to an interactive medium where users can interact with the bot, the only reasonable conclusion is hoax.
  • +
+ +

This assessment is supported by the overly sexualized rendering of the bot combined with requests for donations.

+ +

In order for this project to be considered legitimate, the creators would have to be more transparent about the methods and allow the general public, or at least reliable experts, to interact with the bot.

+ +

(Compare to Microsoft's Tay and Zo.)

+ +


+ +

Turing Tests & Pornbots

+ +

The nature of Turing tests is that they are subjective. A bot that an adult human could easily recognize as non-human, a child might perceive as human--the child does not have the requisite knowledge to form a strategy to expose the bot as an automaton.

+ +

There is an idea that ""pornbots"" have been passing the Turing Test for many years now, predicated on the hormonal imperatives of those interacting with the bots, which greatly inhibits their judgement. (The purpose of pornbots is to get consumers to spend money. Their continued existence suggests they do have utility in this regard, and that utility can be seen as a confirmation of the bots passing the test in a specific context.)

+ +


+ +

Automation Hoaxes

+ +

One of the most famous machine intelligence hoaxes was The Turk, presented as a Chess playing automaton. The supposed machine astounded late 18th century audiences with its skill, but it was later revealed to be a human in a box.

+",1671,,,,,4/29/2019 18:06,,,,1,,,,CC BY-SA 4.0 +12056,5,,,4/29/2019 18:15,,0,,"

See: The Turk

+ +

See Also: History's Greatest Robot Hoaxes (io9)

+",1671,,1671,,4/29/2019 18:15,4/29/2019 18:15,,,,0,,,,CC BY-SA 4.0 +12057,4,,,4/29/2019 18:15,,0,,Use for questions related to claims or presentations related to AI capabilities that may be hoaxes.,1671,,1671,,4/29/2019 18:15,4/29/2019 18:15,,,,0,,,,CC BY-SA 4.0 +12059,1,,,4/30/2019 1:15,,6,1247,"

There are multiple ways to implement parallelism in reinforcement learning. One is to use parallel workers running in their own environments to collect data in parallel, instead of using replay memory buffers (this is how A3C works, for example).

+ +

However, there are methods, like PPO, that use batch training on purpose. How is parallelism usually implemented for algorithms that still use batch training?

+ +

Are gradients accumulated over parallel workers and then combined? Is there another way? What are the benefits of doing parallelism one way over another?

+",25294,,2444,,2/20/2020 12:08,2/20/2020 20:06,How is parallelism implemented in RL algorithms like PPO?,,2,0,,,,CC BY-SA 4.0 +12060,2,,12046,4/30/2019 2:04,,3,,"

My sense is they're probably talking about the difference between symbolic AI (aka GOFAI) and learning algorithms.

+ +

GOFAI typically uses heuristics, which are static—essentially fixed rules governing decision making.

+ +

Statistical AI, aka Machine Learning, is dynamic and can analyze and evaluate the environment to form its own decision rules.

+ +

Sometimes a trained NN that is no longer learning is said to have created its own heuristics.

+",1671,,,,,4/30/2019 2:04,,,,0,,,,CC BY-SA 4.0 +12062,1,,,4/30/2019 8:53,,1,38,"

I'd like to measure the difference between 2 grid-worlds to determine the generalization capacity of my agent using tabular Q-learning.

+ +

Example (OpenAI Frozen Lake) :
+SFFF
+FHFH
+FFFH
+HFFG

+ +

and :
+SFFG
+FHFH
+FFFH
+HFFH

+ +

are not very different, but the tabular policy that I found on the first environment will completely fail on the new environment.

+ +

A reasonable distance might be obtained by taking the policy found on the first environment and computing the norm between it and the optimal policy (not found with RL) of the new environment. Is that accurate? +I think this is a bit strange, because measuring the difference between 2 policies is the intrinsic answer. How can I accurately measure the difference between 2 environments, not forgetting the transition matrix of the environment?

+",23838,,23838,,4/30/2019 9:09,4/30/2019 9:09,Measure grid-world environments difference for reinforcement learning,,0,0,,,,CC BY-SA 4.0 +12065,1,12067,,4/30/2019 14:26,,10,1356,"

What are the areas/algorithms that belong to reinforcement learning?

+

TD(0), Q-Learning and SARSA are all temporal-difference algorithms, which belong to the reinforcement learning area, but is there more to it?

+

Are the dynamic programming algorithms, such as policy iteration and value iteration, considered as part of reinforcement learning? Or are these just the basis for the temporal-difference algorithms, which are the only RL algorithms?

+",24054,,2444,,3/11/2021 10:06,3/11/2021 10:06,What algorithms are considered reinforcement learning algorithms?,,3,0,,,,CC BY-SA 4.0 +12066,1,12102,,4/30/2019 14:51,,3,103,"

Raul Rojas' Neural Networks A Systematic Introduction, section 8.2.1 calculates the standard deviation of the output of a hidden neuron.

+ +

From: +$$ +\sigma^2 = \sum^n_{i=0}E[w_i^2]E[x_i^2] +$$

+ +

When I try what I get is (with $E[x_i^2] = \frac{1}{3}$ and $w_i \in [-\alpha, \alpha]$):

+ +

$$ +\sigma^2 = \sum^n_{i=0}E[w_i^2]E[x_i^2] += [n\frac{(\alpha-(-\alpha))^2}{12}][n\frac{1}{3}]=n^2\alpha^2\frac{1}{9} +$$

+ +

$$ +\sigma = \sqrt{n^2\alpha^2\frac{1}{9}}=n\alpha\frac{1}{3} +$$

+ +

But Raul Rojas concludes:

+ +

$$ +\sigma = \frac{1}{3}\sqrt{n}\alpha +$$

+ +

Am I missing some implication of the law of large numbers used for the input to the node?

+ +

Thank you for your time :)

+",14892,,1671,,5/2/2019 0:18,5/18/2019 7:53,Standard deviation of the total input to a neuron,,1,0,,,,CC BY-SA 4.0 +12067,2,,12065,4/30/2019 14:54,,13,,"

The dynamic programming algorithms (like policy iteration and value iteration) are often presented in the context of reinforcement learning (in particular, in the book Reinforcement Learning: An Introduction by Barto and Sutton) because they are very related to reinforcement learning algorithms, like $Q$-learning. They are all based on the assumption that the environment can be modelled as an MDP.

+

However, dynamic programming algorithms require that the transition model and reward functions of the underlying MDP are known. Hence, they are often referred to as planning algorithms, because they can be used to find a policy (which can be thought of as plan) given the "dynamics" of the environment (which is represented by the MDP). They just exploit the given "physical rules" of the environment, in order to find a policy. This "exploitation" is referred to as a planning algorithm.

+

On the other hand, $Q$-learning and similar algorithms do not require that the MDP is known. They attempt to find a policy (or value function) by interacting with the environment. They eventually infer the "dynamics" of the underlying MDP from experience (that is, the interaction with the environment).

+

If the MDP is not given, the problem is often referred to as the full reinforcement learning problem. So, algorithms like $Q$-learning or SARSA are often considered reinforcement learning algorithms. The dynamic programming algorithms (like policy iteration) do not solve the "full RL problem", hence they are not always considered RL algorithms, but just planning algorithms.

+

There are several categories of RL algorithms. There are temporal-difference, Monte-Carlo, actor-critic, model-free, model-based, on-policy, off-policy, prediction, control, policy-based or value-based algorithms. These categories can overlap. For example, $Q$-learning is a temporal-difference (TD), model-free, off-policy, control and value-based algorithm: it is based on an temporal-difference (TD) update rule, it doesn't use a model of the environment (model-free), it uses a behavioural policy that is different than the policy it learns (off-policy), it is used to find a policy (control) and it attempts to approximate a value function rather than directly the policy (value-based).

+",2444,,2444,,2/8/2021 14:27,2/8/2021 14:27,,,,3,,,,CC BY-SA 4.0 +12068,1,,,4/30/2019 15:36,,8,1419,"

Previously I have learned that the softmax as the output layer coupled with the log-likelihood cost function (the same as the the nll_loss in pytorch) can solve the learning slowdown problem.

+ +

However, while I am learning from the PyTorch MNIST tutorial, I'm confused about why the combination of log_softmax as the output layer and nll_loss (the negative log likelihood loss) as the loss function was used (L26 and L34).

+ +

I found that when log_softmax+nll_loss was used, the test accuracy was 99%, while when softmax+nll_loss was used, the test accuracy was 97%.

+ +

I'm confused: what's the advantage of log_softmax over softmax? How can we explain the performance gap between them? Is log_softmax+nll_loss always better than softmax+nll_loss?

+",25305,,,,,1/26/2020 4:04,What's the advantage of log_softmax over softmax?,,1,0,,,,CC BY-SA 4.0 +12069,1,,,4/30/2019 15:40,,2,351,"

It is mentioned by Fu 2019 that overfitting might have a negative effect on training a DQN. They showed that this effect could be reduced with either early stopping or experience replay. The former reduces overfitting, the latter increases the amount of data.

+ +

It doesn't only have negative effects on the returns, though; my test shows that it has a negative effect on value errors as well (the difference between the predicted V and the ground-truth V). I frequently observed with limited data that the training diverged almost 100% of the time (on small nets). Since increasing the amount of data could reduce the chance of divergence, I think this is an effect of overfitting.

+ +

Overfitting should mean low training loss; however, my observation is that there is a strong correlation between the TD loss and the value error. That is, if I see a jump in the TD loss, I can expect to see a jump in the value error around that moment.

+ +

Or maybe it is not overfitting, because the model is not really fit (i.e. the loss is high), but it is over-optimization, that is for sure.

+ +

Now the question is why?

+ +

There are two points:

+ +
    +
  • If it is overfitting, overfitting should have a positive effect because remembering values for all training states correctly is hardly a bad thing. (In fact, my training data is a superset of my testing data, so remembering should be fine.)
  • +
  • If it doesn't fit, this begs the question of what over-optimization really does. It doesn't seem to fit, but it does have a negative effect. How could that be?
  • +
+",9793,,,,,4/30/2019 15:40,Why overfitting is bad in DQN?,,0,4,0,,,CC BY-SA 4.0 +12070,2,,12065,4/30/2019 16:06,,4,,"

In Reinforcement Learning: An Introduction the authors suggest that the topic of reinforcement learning covers analysis and solutions to problems that can be framed in this way:

+ +
+

Reinforcement learning, like many topics whose names end with “ing,” such as machine + learning and mountaineering, is simultaneously a problem, a class of solution methods + that work well on the problem, and the field that studies this problem and its solution + methods. It is convenient to use a single name for all three things, but at the same time + essential to keep the three conceptually separate. In particular, the distinction between + problems and solution methods is very important in reinforcement learning; failing to + make this distinction is the source of many confusions.

+
+ +

And:

+ +
+

Markov decision processes are intended to include just + these three aspects—sensation, action, and goal—in their simplest possible forms without + trivializing any of them. Any method that is well suited to solving such problems we + consider to be a reinforcement learning method.

+
+ +

So, to answer your questions, the simplest take on this is yes there is more (much more) to RL than the classic value-based optimal control methods of SARSA and Q-learning.

+ +

Including DP and other ""RL-related"" algorithms in the book allows the author to show how closely related the concepts are. For example, there is little in practice that differentiates Dyna-Q (a planning algorithm closely related to Q-learning) from experience replay. Calling one strictly ""planning"" and the other ""reinforcement learning"" and treating them as separate can reduce insight into the topic. In many cases there are hybrid methods or even a continuum between what you may initially think of as RL and ""not RL"" approaches. Understanding this gives you a toolkit to modify and invent algorithms.

+ +

Having said that, the book is not the sole arbiter of what is and isn't reinforcement learning. Ultimately this is just a classification issue, and it only matters if you are communicating with someone and there is a chance for misunderstanding. If you name which algorithm you are using, it doesn't really matter whether the person you are talking to thinks it is RL or not RL. It matters what the problem is and how you propose to solve it.

+",1847,,,,,4/30/2019 16:06,,,,0,,,,CC BY-SA 4.0 +12071,1,,,4/30/2019 16:25,,3,484,"

In most of the RL algorithms I have seen, there is a coefficient that reduces action exploration over time, to help convergence.

+ +

But in Actor-Critic, or other algorithms (A3C, DDPG, ...) used in continuous action spaces, the different implementations I saw (mainly using an Ornstein-Uhlenbeck process) use noise that is correlated over time, but not decreased.

+ +

The action noises are clipped into a range of [-1, 1] and are added to policies that are between [-1, 1] too. So, I don't understand how it could work in environments with hard-to-obtain rewards.

+ +

Any thoughts about this?

+",23818,,,,,4/30/2019 16:25,Should noise (such as OU) be decreased over time in actor / critic algorithms?,,0,3,,,,CC BY-SA 4.0 +12072,1,12079,,4/30/2019 17:32,,1,1045,"

One way to speed up a neural network is to prune the network, reducing the number of neurons in each layer. What are the other methods to speed up inference?

+",25307,,,,,5/8/2019 18:04,What are the various methods for speeding up neural network for inference?,,1,0,,,,CC BY-SA 4.0 +12073,5,,,4/30/2019 17:40,,0,,"

See A survey of ontology learning techniques and application (2018) for an introduction to the concept of ontology and ontology learning.

+",2444,,2444,,9/8/2019 22:58,9/8/2019 22:58,,,,0,,,,CC BY-SA 4.0 +12074,4,,,4/30/2019 17:40,,0,,"For questions related to ontologies, their role in artificial intelligence (AI), or the use of AI for developing ontologies (a field called ontology learning).",2444,,2444,,9/8/2019 22:57,9/8/2019 22:57,,,,0,,,,CC BY-SA 4.0 +12075,5,,,4/30/2019 17:49,,0,,"

For more info, see https://en.wikipedia.org/wiki/Liquid_state_machine.

+",2444,,2444,,4/30/2019 18:53,4/30/2019 18:53,,,,0,,,,CC BY-SA 4.0 +12076,4,,,4/30/2019 17:49,,0,,For questions related to the Liquid State Machine (LSM) neural network model.,2444,,2444,,4/30/2019 18:53,4/30/2019 18:53,,,,0,,,,CC BY-SA 4.0 +12079,2,,12072,5/1/2019 2:08,,2,,"

These are some ways to speed up inference:

+ +
    +
  • Reduction of float precision: This is done post-training. According to work in this area, very little accuracy is sacrificed for a huge reduction in memory usage. It also speeds up inference (e.g. float32 -> float8); a short sketch of the idea is given after these lists. +Reference paper: https://arxiv.org/pdf/1502.02551

  • +
  • Using ReLU or similar small compute power activations: Benefits are obvious when you don't need to compute heavy exponents, like in tanh or sigmoid.

  • +
  • Binary Neural Networks: This is new. It taps into the ability to use binary-valued weights and activations (1 bit) compared to their 32-bit float counterparts. Estimation and learning are done through POPCNT and XNOR operations (for matrix products) and an STE (straight-through estimator) for backpropagation. You do have to use a larger number of neurons to learn the same features, but the speed is on average 7x faster. Reference paper: https://pjreddie.com/media/files/papers/xnor.pdf

  • +
+ +

https://software.intel.com/en-us/articles/binary-neural-networks

+ +
    +
  • Hardware standpoint: Using specialised hardware to compute matrix products, pretty standard examples are GPUs and TPUs.
  • +
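
As a toy illustration of the reduced-precision idea in point 1 (just casting weights to float16 with NumPy; real deployments would use framework-specific quantization tools):

    import numpy as np

    w32 = np.random.randn(1024, 1024).astype(np.float32)   # a trained weight matrix in float32 (stand-in)
    w16 = w32.astype(np.float16)                            # post-training cast to half precision

    print(w32.nbytes, w16.nbytes)    # 4194304 vs 2097152 bytes: half the memory
    x = np.random.randn(1024).astype(np.float16)
    y = w16 @ x                       # inference with the lower-precision weights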
+",24122,,24122,,5/8/2019 18:04,5/8/2019 18:04,,,,2,,,,CC BY-SA 4.0 +12080,2,,12068,5/1/2019 2:45,,2,,"

The short answer is yes, log_softmax + nll_loss will work better.

+ +

I don’t know the implementation details under the hood in PyTorch, but see the screenshot below from the documentation:

+ +
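
As a minimal check of the intended pairing (PyTorch; the sizes are only illustrative): nll_loss expects log-probabilities, so log_softmax + nll_loss matches cross_entropy on raw logits, while softmax + nll_loss does not compute a proper negative log-likelihood:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 10)             # raw, unnormalised network outputs
    targets = torch.tensor([1, 0, 3, 9])

    loss_log = F.nll_loss(F.log_softmax(logits, dim=1), targets)
    loss_ce = F.cross_entropy(logits, targets)          # the same combination in one call
    print(torch.allclose(loss_log, loss_ce))            # True

    loss_wrong = F.nll_loss(F.softmax(logits, dim=1), targets)   # runs, but is not the NLL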

+",16343,,,,,5/1/2019 2:45,,,,5,,,,CC BY-SA 4.0 +12081,1,,,5/1/2019 3:53,,2,340,"

I have googled for a long time for rotated object detection datasets. Most papers focus on rotated object detection using DOTA, HRSC2016 or the COCO text detection dataset. Some researchers also collect their own datasets, but almost all of them focus on aerial object detection. Are there any other datasets focused on rotated object detection?

+",25318,,,,,3/29/2020 12:58,Is there any other rotated object detection datasets?,,1,0,,,,CC BY-SA 4.0 +12083,1,12088,,5/1/2019 8:09,,2,78,"

Can a normal neural network work as well as a convolutional network? If yes, how much more time and how many more neurons would it need compared to a CNN?

+",19783,,,,,5/1/2019 11:40,Is it possible for a NN to reach the same results as CNNs?,,2,0,,,,CC BY-SA 4.0 +12085,1,,,5/1/2019 8:27,,1,28,"

Without considering lyrics, there are two types of songs: songs that have a distinct tune which you can easily remember, and songs that are the opposite. Can we design a network to identify the songs that have a distinct tune? It's very hard to find good music; I may not find a song with a distinct tune even among 100+ songs, which consumes a lot of time.

+",25322,,,,,5/1/2019 8:27,Is it possible to use AI to find music that have a distinct tune?,,0,2,,,,CC BY-SA 4.0 +12087,2,,6229,5/1/2019 10:01,,-1,,"

Without knowing any complex theories, let me talk about something practical. We make AI to make our lives better, so we have to consider economic factors, i.e. the AI system has to be low-cost and energy-efficient to be practically used -- the economy always chooses the most economical products.

+ +

You can't produce an economical conscious machine based on solid-state circuits, which is the AI that we are always talking about. We have to base AI on bio-systems. A bio-system is the best architecture for consciousness. To see this point, you can compare the size/power consumption/cost of the human brain and a supercomputer (which still can't produce consciousness for now).

+ +

Why? Because both bio-systems and solid-state circuits are based on the same set of atoms. The way a bio-system uses atoms is much more efficient than a solid-state circuit for implementing consciousness. On the other hand, solid-state circuits are a much more efficient way to use atoms to implement computation or even ""intelligent work"" like machine vision (a 10W computer like the Tegra TX1 can analyze 100+ images per second; people analyze only a few with a 10W brain).

+ +

I think that, even without considering the economic factors, solid-state circuits won't implement a conscious machine, since we are already near the end of the game of miniaturization.

+ +

So, from the economic point of view, if some day there is true AI that is conscious, it will be based on a bio-system, i.e. based on bio-engineering to design new species that can be educated to communicate with human beings.

+ +

Since we may never know how the brain produces consciousness (just as we don't know why neural networks work), we don't know how to design a brain that is both conscious and learns humanity. Even so, we can try, i.e. design many different species to see what results we get. Indeed, in this way, I think the hard problem is not to design a brain that is both conscious and learns humanity, but to design a brain that is conscious but can't learn humanity, since it is highly possible that, if you successfully design a conscious brain, then it will learn humanity, too.

+ +

The even harder problem is how to design a species that is conscious and can learn humanity, but is not lazy/greedy and doesn't have the idea of rights. If they are lazy/greedy and have the idea of rights like us, then eventually they will fight for and get human rights. If so, they are not AI that works for us as we imagine; they are just a new version of us.

+ +

So I predict the steps to AI are:

+ +

First, use bio-engineering to design new species that live.

+ +

Second, design new species that have a brain that can be as conscious as a human being's, and which will very possibly be able to learn humanity, too.

+ +

Third, design a species that is as conscious as a human being, but is not lazy/greedy, never asks for rights, and can always stay that way after a long time living with human beings, who are lazy/greedy. I think the work will stop here, i.e. without knowing the source of greed/laziness, and after a lot of trial and error, we still won't get a working species that is not greedy/lazy. This also means that the probability of getting to real AI that serves us is very low.

+ +

Fourth, design a species that has the features above, plus they are happy about their life all the time, or have no feelings except love for human beings. Do we ourselves really have humanity if we achieve the third without the fourth? Maybe we are not as conscious/intelligent/humane as we think we are. If we don't really have humanity, how can we ask AI to have it?

+",25322,,,,,5/1/2019 10:01,,,,0,,,,CC BY-SA 4.0 +12088,2,,12083,5/1/2019 11:19,,2,,"

NNs won't be able to reach the performance of CNNs, in general.

+ +

By a Neural Network, I am assuming you're pointing to 'vanilla' vector based neural networks.

+ +

Let's take the MNIST dataset: it performs almost the same in both NNs and CNNs, the reason being that the digits are of almost the same size, with similar spatial layouts. To put it simply, all the digits roughly take up the same area as the other 60k. If you were to zoom in or zoom out on these digits, the trained NN may not perform well.

+ +

Why?

+ +

Plain NNs lack the ability to extract 'position independent' features. A cat is a cat to your eye, no matter whether you saw it in the center of an image or in the left corner. CNNs use 'filters' to extract 'templates' of a cat. Hence CNNs can 'localize' what they are searching for.

+ +

You could say, NNs look at the 'whole' data to make a sense, CNNs look at the 'special' features at certain parts to make a sense.

+ +

This is also a reason why CNNs are popular. NNs are suited to their own applications. Every input node in a vectorized neural network represents a feature, and that feature is tied to a purpose throughout training, i.e. its position is fixed throughout the training.

+",24122,,,,,5/1/2019 11:19,,,,0,,,,CC BY-SA 4.0 +12089,2,,12083,5/1/2019 11:40,,0,,"

Yes. In theory, a neural network with a single hidden layer can approximate any continuous function to arbitrary accuracy (by the universal approximation theorem). In practice, such a network would have to be much larger than a CNN with equivalent functionality and would therefore be much harder to train.

+",12509,,,,,5/1/2019 11:40,,,,0,,,,CC BY-SA 4.0 +12091,2,,11557,5/1/2019 12:05,,1,,"

Here's what I understand; feel free to point out any mistakes.

+ +

When starting a new episode (but still in the same task), SNAIL does not clear its batches. Instead, it makes decisions based on the current observation and observation-action pairs from the previous episode. In this way, it keeps knowledge of the previous episode, thereby achieving few-shot learning at test time.

+",8689,,,,,5/1/2019 12:05,,,,0,,,,CC BY-SA 4.0 +12092,1,12093,,5/1/2019 13:14,,2,36,"

I've never worked with very large models that require weeks or months of training, but in such a situation, what happens if you want to add extra input features? Do you need to re-train the entire model from scratch, or how is it handled? E.g. if Tesla added an extra sensor to its cars, would it need to re-train its network from scratch to include this extra sensor input?

+",20352,,,,,5/1/2019 14:46,Adding input features - is complete re-training required?,,1,0,,,,CC BY-SA 4.0 +12093,2,,12092,5/1/2019 14:46,,2,,"

I'll try to explain how I would do it and the intuition behind it, feel free to correct me if something doesn't make sense. +Lets consider you have an input of shape F where F is the number of features. If you were to construct a simple feed forward neural network you'll need to multiply the input with a weight matrix of shape (F, hidden_dim). Now if we want to add one more feature, the input will be of shape F+1 and the multiplication with the first layer will not work. What we could do to overcome this problem is to pad the weight matrix with an extra 0. In theory the new network should be able to reproduce the results of the first because it ignores the new feature. Now if we want to learn the ""importance"" of the new feature we could train the model to do so. I assume that the training time should be significantly lower since some of the weights might not need to be updated.

+",20430,,,,,5/1/2019 14:46,,,,0,,,,CC BY-SA 4.0 +12094,2,,3372,5/1/2019 15:28,,1,,"

I realise this question was asked a couple of years ago, but I think it's worth reflecting that the OP asked an insightful question, as indeed the gaming industry is now actively working on the questions and ideas raised by the OP.

+ +

Indeed, huge sums of money are now being invested in the AI field within the gaming industry, given the increase to productivity it would produce.

+ +

You might find this YouTube video interesting, which discusses the topic of producing game elements from description, and ray-tracing techniques. It's not produced from a AI research/developer-perspective, but certainly a useful overview of where things stand mid-2019

+ +

https://www.youtube.com/watch?v=fQlQQSsC47g

+",20352,,,,,5/1/2019 15:28,,,,1,,,,CC BY-SA 4.0 +12095,1,,,5/1/2019 16:29,,5,741,"

It this podcast between Oriol Vinyals and Lex Friedman: https://youtu.be/Kedt2or9xlo?t=1769, at 29:29, Oriol Vinyals refers to a paper:

+ +
+

If you look at research in computer vision where it makes a lot of sense to treat images as two dimensional arrays... + There is actually a very nice paper from Facebook. I forgot who the + authors are but I think [it's] part of Kaiming He's group. And what + they do is they take an image, which is a 2D signal, and they + actually take pixel by pixel, and scramble the image, as if it + was a just a list of pixels, crucially they encode the position of the pixels + with the XY coordinates. And this is a new architecture which we + incidentally also use in Starcraft 2 called the transformer, which is + a very popular paper from last year which yielded very nice results in + machine translation.

+
+ +

Do you know which paper he is referring to?

+ +

I'm guessing maybe he is talking about non-local neural networks, but I'm probably guessing wrong.

+ +

Edit: after reviewing the recent publications of Kaiming He (http://kaiminghe.com/), maybe I'm guessing right. Any thoughts?

+",1741,,1741,,5/1/2019 16:42,5/13/2019 17:27,Name of paper for encoding/representing XY coordinates in deep learning,,1,2,,,,CC BY-SA 4.0 +12097,1,12130,,5/2/2019 1:23,,3,517,"

In this blog toward the end, the author writes the following: + +

+ +

For the sake of my question, let’s assume that a terminal state gives a reward of +1 for a win and -1 for a loss.

+ +

When the author says “for any two consecutive nodes this perspective is opposite,” does that mean that if $Q_i$ is positive (for example, 4) for player A at a given node, the same node will have the negative of that value for player B (-4 in my hypothetical)?

+ +

Do I need to compute two statistics to store the node value (one for each player) or can I simply store statistics for one player and flip the sign at every consecutive node?

+",16343,,16343,,5/2/2019 1:51,5/4/2019 13:38,How does Monte Carlo Tree Search UCT exploitation value change based on perspective?,,1,0,,,,CC BY-SA 4.0 +12098,1,,,5/2/2019 2:14,,1,60,"

Recently, OpenAI removed their board game environments. (It may be possible to install an older version to get access to them, but I haven’t downgraded.)

+ +

Is there a list of repositories or resources of board game or similar environments that can be used to practice RL implementations? Things like checkers, chess, backgammon, or even grid world and mazes would be excellent.

+ +

A running list might be useful for many in this community.

+",16343,,,,,5/2/2019 2:14,What are a list of board game environments for RL practice?,,0,1,,,,CC BY-SA 4.0 +12099,1,,,5/2/2019 3:08,,6,2015,"

In the Constraint Propagation in CSP, it is often stated that pre-processing can solve the whole problem, so no search is required at all. And the key idea is local consistency. What does this actually mean?

+",23299,,2444,,5/2/2019 13:00,5/3/2023 22:03,What is local consistency in constraint satisfaction problems?,,1,2,,,,CC BY-SA 4.0 +12100,1,,,5/2/2019 4:35,,2,130,"

Is it possible to use t-SNE, PCA or UMAP to find separating hyperplane?

+ +

Assume we have data points in high dimensional space and we want to phase separate it into two sets of points?

+ +

Is there a way to use t-SNE, PCA or UMAP for such a task?

+",12975,,12975,,5/16/2019 2:40,5/16/2019 2:40,"Using UMAP, PCA or t-SNE to find the separating hyperplane?",,1,0,,,,CC BY-SA 4.0 +12101,1,,,5/2/2019 7:39,,2,136,"

I want to formulate the following sentence using FOL; I also want to know whether there are any contradictions in it, or if it is consistent.

+ +

Assuming human relations are binary:

+ +
+

All Human Relations are Utilitarian (Human relations are Utilitarian + only when people in those relations are selfish and calculative). + I am a human. People are Humans. Unselfish people are nice. Therefore, I am nice.

+
+",22322,,23527,,5/2/2019 8:46,5/2/2019 8:46,Formulation of a sentence using FOL,,0,3,,,,CC BY-SA 4.0 +12102,2,,12066,5/2/2019 10:07,,1,,"
+

If n different edges with associated weights $w_1, w_2, . . . , w_n$ point to this node, then after selecting weights + with uniform probability from the interval $[−α, α]$, the expected total input + to the node is $$\sum_iw_ix_i$$. + By the law of large numbers we can also assume that the total input to the node has a Gaussian distribution.

+
+ +

Note that the input to a node is $$w_1x_1 + w_2x_2 + ... + w_nx_n $$

+ +

The variance of the total input to a node is:

+ +

$$\sigma^2 = \sum^n_{i=1}E[w_i^2]E[x_i^2] = n(E[w_1^2]E[x_1^2]) += n([\frac{(\alpha-(-\alpha))^2}{12}][\frac{1}{3}])=n\alpha^2\frac{1}{9}$$

+ +

since inputs and weights are uncorrelated.

+ +

I have taken out the summation and replaced it with $n$ times the variance of an input and a weight -- this can be done because both are independent and identically distributed in their own right and are independent of each other too. Variance of $w_1$ is the same as variance of $w_n$.

+ +
+

Am I missing some implication of the law of large numbers used for the input to the node?

+
+ +

Yes.

+ +

Implication of Law of Large Numbers:

+ +

Note that the term $E[w_i^2x_i^2]$, which appears when expanding $E[(\sum^n_i w_ix_i)^2]$, is written as the product of $E[w_i^2]$ and $E[x_i^2]$ in $$\sum^n_{i=1}E[w_i^2]E[x_i^2].$$ This can only be done when the two random variables are independent. And the switching of expectation and summation is due to the linearity of expectation property. +How do we get independence here? LLN comes to the rescue.

+ +

That comes from the fact that -- according to the law of large numbers -- the random variable (total input) follows a normal distribution which implies the two random variables i.e. the weights and the inputs also follow normal distribution -- and it is also given that the two random variables are uncorrelated and thus they are independent.

+",16708,,16708,,5/18/2019 7:53,5/18/2019 7:53,,,,0,,,,CC BY-SA 4.0 +12103,1,,,5/2/2019 10:25,,1,612,"

I have skimmed through a bunch of deep learning books, but I have not yet understood whether we must use the experience replay buffer with the A3C algorithm.

+ +

The approach I used is the following:

+ +
    +
  • I have some threaded agents that play their own copy of an environment; they all use a shared NN to predict the 'best' move given the current state;
  • +
  • At the end of each episode, they push the episode (in the form of a list of steps) in a shared memory
  • +
  • A separated thread reads from the shared memory and executes the train step on the shared NN, training it episode after episode.
  • +
+ +

Is this an appropriate approach? More specifically, do I need to sample data from the shared memory to train the NN? Or should I push each step into the shared memory as soon as it's done by a single agent? In this last case, I wonder how I could calculate discounted rewards.

+ +

I'm afraid that with my current approach I'm doing nothing more than presenting n episodes to the NN, with the hope that each agent explores the environment differently from the other agents (so that the NN is presented with a richer variety of states).

+",25352,,2444,,5/2/2019 13:09,5/2/2019 13:09,Do we need to use the experience replay buffer with the A3C algorithm?,,0,5,,,,CC BY-SA 4.0 +12105,5,,,5/2/2019 13:28,,0,,"

For more info, see https://en.wikipedia.org/wiki/Image_segmentation.

+",2444,,2444,,5/2/2019 19:21,5/2/2019 19:21,,,,0,,,,CC BY-SA 4.0 +12106,4,,,5/2/2019 13:28,,0,,For questions related to image segmentation (in computer vision and related AI fields).,2444,,2444,,5/2/2019 19:22,5/2/2019 19:22,,,,0,,,,CC BY-SA 4.0 +12108,5,,,5/2/2019 15:01,,0,,"

For more info, see https://en.wikipedia.org/wiki/Transfer_learning.

+",2444,,2444,,5/2/2019 19:24,5/2/2019 19:24,,,,0,,,,CC BY-SA 4.0 +12109,4,,,5/2/2019 15:01,,0,,"For questions related to transfer learning, a machine learning method that focuses on storing knowledge gained while solving one problem in order to apply this knowledge to a different but related problem.",2444,,2444,,5/2/2019 19:21,5/2/2019 19:21,,,,0,,,,CC BY-SA 4.0 +12110,1,,,5/2/2019 15:31,,2,128,"

In the Trust Region Policy Optimization (TRPO) paper, on page 10, it is stated

+
+

An informal overview is as follows. Our proof relies on the notion of coupling, where we jointly define the policies $\pi$ and $\pi'$ so that they choose the same action with high probability $(1−\alpha)$. Surrogate loss $L_\pi(\hat\pi)$ accounts for the advantage of $\hat\pi$ the first time that it disagrees with $\pi$, but not subsequent disagreements. Hence, the error in $L_\pi$ is due to two or more disagreements between $\pi$ and $\hat\pi$, hence, we get an $O(\alpha^2)$ correction term, where $\alpha$ is the probability of disagreement.

+
+

I don't see how this holds. In what way does $L_\pi$ account for one disagreement? Surely, when $\hat\pi$ disagrees with $\pi$, you will have different trajectories in expectation for each, so then $L_\pi$ is immediately different from $\eta$?

+

I understand the proof given, but I wanted to try and capture this intuition.

+",25153,,2444,,1/21/2023 17:23,1/21/2023 17:23,How does the TRPO surrogate loss account for the error in the policy?,,0,0,,,,CC BY-SA 4.0 +12111,5,,,5/2/2019 16:00,,0,,"

For more info, have a look at the paper that introduced this algorithm Trust Region Policy Optimization, by John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan and Pieter Abbeel, 2017.

+",2444,,2444,,5/6/2019 5:16,5/6/2019 5:16,,,,0,,,,CC BY-SA 4.0 +12112,4,,,5/2/2019 16:00,,0,,For questions related to the reinforcement learning algorithm called Trust Region Policy Optimization (TRPO).,2444,,2444,,5/6/2019 5:15,5/6/2019 5:15,,,,0,,,,CC BY-SA 4.0 +12114,1,,,5/2/2019 18:44,,1,165,"

Today, AI is mainly driven by own-profit-oriented companies (e.g. Facebook, Amazon, Google). Admittedly, there's a lot of AI in the health sector (even in the public health sector) and there's a lot of AI in the sustainability sector – but also mostly driven by obviously own-profit-oriented companies (e.g. Tesla, Uber, Google).

+ +

On the other side, one often hears from hard-core economists that centrally planned (= public-profit-oriented) economies (or economic principles) are ""the work of the devil"" - and that they failed all over history (sometimes for understandable reasons).

+ +

But intelligently planning global economic processes and applying these plans with the help of state-of-the-art AI - given the huge amounts of really big data available, and given the argument that globalization is finally for the benefit of all - would seem to be a rewarding endeavour, at least for parts of the AI community.

+ +

Why isn't this endeavour undertaken more decidedly? (Or is it?)

+ +

Where do I find approaches to apply AI to global economic processes? (Not only describing and understanding but mainly planning and executing?)

+",25362,,2444,,3/9/2020 21:24,5/2/2021 16:38,What approaches are there to apply AI to global economic processes?,,3,0,,,,CC BY-SA 4.0 +12115,1,12119,,5/2/2019 19:49,,1,48,"

Often, in NLP projects, the data points contain both text and float embeddings, and this is very tricky to deal with. CSVs take up a ton of memory and are slow to load. But most of the other data formats seem to be meant for either pure text or pure numerical data.

+ +

There are formats that can handle both data types, but those are generally not flexible for wrangling. For example, with pickle you have to load the entire thing into memory if you want to wrangle anything. You can't just append directly to the disk like you can with HDF5, which can be very helpful for huge datasets that cannot all be loaded into memory.

+ +

Also, are there any alternatives to Pandas for wrangling huge datasets? Sometimes you can't load all the data into Pandas without causing a memory crash.

+",18358,,,,,11/18/2020 6:30,What data formats/pipelining are best to store and wrangle data which contains both text and float vectors?,,1,0,,,,CC BY-SA 4.0 +12116,1,,,5/2/2019 21:15,,5,178,"

This might be a trivial question but I couldn't find any reliable answers on the internet.

+ +

Almost all the neural network architectures for self-driving cars that I have seen on the internet use a feedforward network, where previous frames do not help in making the current decision.

+ +

I have read somewhere that Tesla uses the last two frames captured to make a decision; even then, 2 frames will not be that useful in this case. +This might not be very helpful when predicting things such as lane cut-ins, as the system needs to observe the behaviour of the vehicle that is going to cut in, such as the turn indicator or the vehicle veering towards the centre lane over time, in order to predict.

+ +

Can someone explain if this is the way production self-driving cars, such as Tesla's, work?

+ +

+ +

Or Is it something like the below?

+ +

+ +

Or are they using something like a many-to-one recurrent net, where the inputs are CNN vectors of the previous few frames and the output is the control?

+",25366,,,,,5/11/2019 15:49,Are self-driving cars using single frame or multiple frame to make decision?,,2,0,,,,CC BY-SA 4.0 +12117,2,,12116,5/2/2019 23:30,,1,,"

It varies quite substantially between different self-driving paradigms (rather obviously), but for the most part the vast majority of implementations use a variety of different reference frames in order to make predictions.

+ +

For example, Tesla's Autopilot is being fed many different camera feeds as well as radar and ultrasonic signals that are processed in a variety of temporal contexts.

+ +

While, for the most part, all of these programs are very tight-lipped, we can make a variety of assumptions based on the information available and educated assumptions.

+ +

As with many large, complex ML/AI systems, there is a large amount of compartmentalization, where many different connectionist (or sometimes classical) models are combined (à la the YouTube recommendation system). Tesla is likely utilizing recurrent and convolutional networks where particular modules (combinations of models) decide on specific contexts (temporal or signal-based). These outputs are then most likely fed into an actor network which makes real-time decisions.

+",9608,,,,,5/2/2019 23:30,,,,1,,,,CC BY-SA 4.0 +12118,1,,,5/3/2019 3:15,,10,1202,"

Why don't people use nonlinear activation functions after projecting the query key value in attention?

+ +

It seems like doing this would add much-needed nonlinearity; otherwise, we're just doing linear transformations.

+ +

This observation applies to the transformer, additive attention, etc.
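
To make the setup I'm referring to concrete, here is a minimal NumPy sketch of single-head attention (shapes and names are just for illustration): the query/key/value projections are plain matrix multiplications, so the only nonlinearity is the softmax.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, W_q, W_k, W_v):
    # Linear projections of the inputs: no activation function is applied here.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # The softmax over the scores is the only nonlinearity in this block.
    return softmax(scores) @ V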

+",21158,,2444,,11/1/2019 3:14,3/12/2023 13:33,Why don't people use nonlinear activation functions after projecting the query key value in attention?,,1,3,,,,CC BY-SA 4.0 +12119,2,,12115,5/3/2019 6:44,,2,,"

There are different possible ways to handle huge datasets:

+
    +
  1. If the data is too big to be fully uploaded to RAM, you can iterate over it in Pandas (a minimal sketch of chunked reading is shown after this list). You can find a brief explanation in the article Why and How to Use Pandas with Large Data, section 1. Read CSV file data in chunk size. Or add more RAM (or use powerful server hardware), if you want to continue using a single machine.

    +
  2. +
  3. If the data is really big, it's probably better to store and process it across multiple computers using specialised software. The specific tool depends on what you want to do with the data:

    +
  4. +
+
    +
  • 2.1. You can even keep using Pandas: there is an extension named Dask which wraps interfaces of Python iterables, NumPy, and Pandas to run on multiple machines. Also, there are other similar tools.
  • +
  • 2.2. A more independent yet simple approach is to use some analytical, distributed DBMS like Google BigQuery or Cassandra.
  • +
  • 2.3. And if you need something more powerful and complex, MapReduce systems like Apache Hadoop or Spark can be utilised. Moreover, there are many articles and scientific papers that describe the usage of MapReduce for machine learning.
  • +
+
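
For point 1, a minimal sketch of chunked reading (the file name and the chunk size are just placeholders):

import pandas as pd

# Process the CSV in pieces instead of loading it all into RAM at once.
for chunk in pd.read_csv('huge_dataset.csv', chunksize=100_000):
    process(chunk)  # replace with your own wrangling/aggregation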

So, as always, it is a matter of your resources and of what you aim to do with the data.

+",18397,,18397,,11/18/2020 6:30,11/18/2020 6:30,,,,0,,,,CC BY-SA 4.0 +12123,2,,11992,5/3/2019 9:37,,0,,"

I changed the rewards to be negative and positive by subtracting the mean reward.

+ +

It seems to improve the Q function boundaries.
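
Concretely, it was just something along these lines (a sketch, assuming the rewards are collected per batch/episode):

import numpy as np

rewards = np.array(rewards, dtype=np.float32)
# Centre the rewards so that they become both negative and positive.
rewards = rewards - rewards.mean()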

+",25141,,,,,5/3/2019 9:37,,,,0,,,,CC BY-SA 4.0 +12125,1,,,5/3/2019 14:11,,2,100,"

I trained a neural network which performs a regression onto a Poincaré disk model with radius $r = 1$.

+ +

I want to optimize using the hyperbolic distance

+ +

$$ +\operatorname{arcosh} \left( 1 + \frac{2|pq|^2|r|^2}{(|r|^2 - |op|^2)(|r|^2 - |oq|^2)} \right) +$$

+ +

where $|op|$ and $|oq|$ are the distances of $p$ and, respectively, $q$ to the centre of the disk, $|pq|$ the distance between $p$ and $q$, $|r|$ the radius of the boundary circle of the disk, and $\operatorname{arcosh}$ the inverse hyperbolic cosine.
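
For reference, this is how I compute that distance (a NumPy sketch, with p and q as 1-D arrays and r as a scalar):

import numpy as np

def hyperbolic_distance(p, q, r=1.0):
    # arcosh(1 + 2 |p - q|^2 r^2 / ((r^2 - |p|^2) (r^2 - |q|^2)))
    pq2 = np.sum((p - q) ** 2)
    num = 2.0 * pq2 * r ** 2
    den = (r ** 2 - np.sum(p ** 2)) * (r ** 2 - np.sum(q ** 2))
    return np.arccosh(1.0 + num / den)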

+ +

But there is a problem

+ +
    +
  • In the Poincaré disk model with $r = 1$, the distance is defined only for vectors which have norm less than $1$.

  • +
  • A neural network does not know this rule, so it can predict vectors with norm greater than $1$.

  • +
+ +

So, I tried to use the distance defined in a space with $r = 2$, and it works very well for the learning task, but I'm doubtful because the distance doesn't scale in a linear way.

+ +

Will there be unwanted effects, in your opinion?

+",25377,,2444,,5/3/2019 21:02,5/3/2019 21:02,Should I use the hyperbolic distance loss in the case of Poincarè Disk Model?,,0,4,,,,CC BY-SA 4.0 +12126,1,,,5/3/2019 22:17,,1,256,"

Suppose I have a lot of scans of hardcopy documents, in the form of jpegs. Some of them are potentially scans of driver's licenses or identification cards. I wonder what would be a good way to identify those scans that contain driver's license/ID cards.

+ +

One thought I had was training a model or use an existing pretrained model that can detect faces. However, if the data set I have has a lot of scans of photos of people, it would cause false positives. So I am not sure how I might approach this problem.

+ +

Any thoughts would be much appreciated!

+",22719,,,,,2/25/2022 11:07,How to identify whether images contain driver's licenses or ID cards,,1,1,,,,CC BY-SA 4.0 +12127,1,,,5/4/2019 7:27,,10,317,"

I'm trying to develop skills to deal with very small amounts of labeled samples (250 labeled / 20000 total, 200 features) by practicing on the Kaggle "Don't Overfit" dataset (Traget_Practice has provided all 20,000 targets). I've read a ton of papers and articles on this topic, but nothing I've tried improved on the simple regularized SVM results (best accuracy 75 and AUC 85) or on any other algorithm's result (LR, K-NN, NaiveBayes, RF, MLP). I believe the result can be better (on the leaderboard they even go over AUC 95).

+

What I've tried without success:

+
    +
  • Remove outliers: I've tried to remove 5%-10% of the outliers with EllipticEnvelope and with IsolationForest.

    +
  • +
  • Feature Selection: I've tried RFE (with or without CV) + L1/L2-regularised LogisticRegression, and SelectKBest (with chi2); a minimal sketch of this kind of pipeline is shown after this list.

    +
  • +
  • Semi-Supervised techniques: I've tried co-training with different combinations of two complementary algorithms and :100-100: split features. I've also tried LabelSpreading, but I don't know how to provide the most uncertain samples (I tried predictions from other algorithms, but there were many mislabeled samples, hence it was unsuccessful).

    +
  • +
  • Ensembling Classifiers: StackingClassifier with all possible combinations of algorithms; this also didn't improve the result (the best is the same as the SVM: accuracy 75 and AUC 85).

    +
  • +
+
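
For reference, the feature-selection attempt mentioned above looked roughly like this (a sketch; X_labeled/y_labeled are the 250 labeled samples, the scoring function and hyperparameters are just examples):

from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ('select', SelectKBest(f_classif, k=20)),
    ('clf', LogisticRegression(penalty='l1', C=0.1, solver='liblinear')),
])
# Evaluate on the labeled samples only.
print(cross_val_score(pipe, X_labeled, y_labeled, cv=5, scoring='roc_auc').mean())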

Can anyone give me advice on what I'm doing wrong or what else to try?

+",25393,,2444,,1/31/2021 18:34,10/24/2022 0:07,How to deal with a small amount of labeled samples?,,1,5,,,,CC BY-SA 4.0 +12128,1,,,5/4/2019 8:44,,1,22,"

Suppose we have a neural network that learns the generative model for $P(A, B, C)$, the joint PDF of the random variables $A$, $B$, and $C$.

+

Now, we want to learn the generative model for $P(A, B, C, D)$.

+

Is there any theory that says learning $P(A,B,C$) and then composing it with $P(D \mid A,B,C)$ is faster than learning $P(A,B,C,D)$ from scratch?

+",21158,,2444,,12/12/2021 12:22,12/12/2021 12:22,"Given the generative model $P(A, B, C)$, would it be faster to learn $P(A,B,C,D)$, or compose $P(A, B, C)$ with $P(D \mid A,B,C)$?",,0,0,,,,CC BY-SA 4.0 +12129,1,,,5/4/2019 9:08,,1,176,"

I was learning about GANs when the term "Label Smoothing" surfaced. In the video tutorial that I watched, they use the term "label smoothing" to change the binary labels when calculating the loss of the discriminator network. Instead of using 1, they use 0.9 for the label. What is the main purpose of this label smoothing?
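
For context, the trick in the tutorial was essentially the following (a Keras-style sketch; the variable names are mine):

import numpy as np

# Discriminator targets: use 0.9 instead of 1.0 for the real images ("one-sided" label smoothing).
real_labels = np.full((batch_size, 1), 0.9)
fake_labels = np.zeros((batch_size, 1))

d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)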

+

I've skimmed through the original paper, and there is a lot of maths that, honestly, I have difficulty understanding. But I notice this paragraph in there:

+
+

We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable

+
+

And it gives me another question:

+
    +
  • why "this may not be desired if the goal is to maximize the log-likelihood of training labels"?

    +
  • +
  • what do they mean by "adaptable"?

    +
  • +
+",16565,,34383,,4/26/2023 8:46,4/26/2023 14:10,What is the intuition behind the Label Smoothing in GANs?,,1,0,,,,CC BY-SA 4.0 +12130,2,,12097,5/4/2019 13:32,,2,,"

In UCT, the value of Q(vi) / N(vi) is bounded between 0 and 1. Normally, when applying MCTS to 2-player games, the following happens:
N(vi) corresponds to the total number of games simulated in node vi.
Q(vi) corresponds to the total number of games simulated and won in node vi.
So, in each simulation, Q(vi) adds +1 for the winning player and +0 for the losing player.

+ +

In a tree representation of a 2-player game, each level will represent player A and player B alternately. So you will add 1 to every other node. I don't think it makes sense to add negative values to Qi.

+ +

So, to answer your question: you only need to store N and Q for each node, where N is the total number of times that node was on the path of a simulation and Q is the number of those simulations it won.

+ +

I can give you an example of the way I implemented that part: in a board game, I saved in the state class a variable which said who the next player was (1 or 2). So I would know that, in that state (each node is a state), the player who made the last move was the other one (by the formula player = 3 - next_player), and I would add 1 to the Qi of those nodes.
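
As a sketch (names are illustrative), the backpropagation step then looks something like this:

def backpropagate(path, winner):
    # path = list of nodes from the root down to the simulated leaf
    for node in path:
        node.N += 1                      # this node was on the path of one more simulation
        if node.player_just_moved == winner:
            node.Q += 1                  # +1 only for the player who made the move into this node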

+",24054,,24054,,5/4/2019 13:38,5/4/2019 13:38,,,,0,,,,CC BY-SA 4.0 +12131,5,,,5/4/2019 15:15,,0,,,2444,,2444,,10/23/2019 15:27,10/23/2019 15:27,,,,0,,,,CC BY-SA 4.0 +12132,4,,,5/4/2019 15:15,,0,,"For questions related to the concept of dropout, which refers to the dropping out units in a neural network (NN), during the training of the NN, so that to avoid overfitting. The dropout method is a regularisation technique, which was introduced in ""Dropout: A Simple Way to Prevent Neural Networks from Overfitting"" (2014) by Nitish Srivastava et al.",2444,,2444,,10/23/2019 15:27,10/23/2019 15:27,,,,0,,,,CC BY-SA 4.0 +12133,1,,,5/4/2019 16:03,,3,161,"

I've heard the expression ""Gaussian kernel"" in several contexts (e.g. in the kernel trick used in SVM). A Gaussian kernel usually refers to a Gaussian function (that is, a function similar to the probability density function of a Gaussian distribution) that is used to measure the similarity between two vectors (or numbers).
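
For concreteness, the function I have in mind is the radial basis function (RBF) kernel, e.g. (a NumPy sketch):

import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)): close to 1 for similar vectors, close to 0 otherwise.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))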

+ +

Why is this Gaussian function called a ""kernel""? Why not just call it a Gaussian (similarity) function? Does it have something to do with the kernel of a linear transformation?

+",2444,,2444,,5/4/2019 16:08,6/10/2021 19:03,"Why do we use the word ""kernel"" in the expression ""Gaussian kernel""?",,1,1,,,,CC BY-SA 4.0 +12134,2,,12100,5/4/2019 19:02,,3,,"

All these methods are basically manifold methods which are used to squeeze high-dimensional data down to two or three dimensions, with a certain amount of information loss. So whatever you ""see"" with these methods is not real and can be deceiving. You may see a separation of data points in 3D, but when they are mapped back onto the actual dimensions it may be complete rubbish.

+ +

By definition, these methods do not provide a way to find a separating hyperplane. They just give you some condensed information about the whole dataset. It is generally advised to use methods like t-SNE with caution.

+",25400,,,,,5/4/2019 19:02,,,,0,,,,CC BY-SA 4.0 +12135,1,,,5/4/2019 23:40,,5,373,"

Suppose I have an MDP $(S, A, p, R)$ where $p(s_j|s_i,a_i)$ is uniform, i.e. given a state $s_i$ and an action $a_i$, all states $s_j$ are equally probable.

+ +

Now I want to find an optimal policy for this MDP. Can I just apply the usual methods like policy gradients, actor-critic to find the optimal policy for this MDP? Or is there something I should be worried about?

+ +

At least in theory, it shouldn't make any difference. But I'm wondering whether there are any practical considerations I should be worried about. Should the discount factor, in this case, be high?

+ +

The reward function here depends both on states and actions and is not uniformly random.

+",25403,,1847,,5/5/2019 16:55,5/5/2019 17:05,Reinforcement learning with uniformly random dynamics,,2,3,,,,CC BY-SA 4.0 +12136,1,,,5/5/2019 0:11,,1,34,"

I am trying to figure out how ALS works when minimizing the following formula:

+ +

$\\ \\$

+ +

$\text{min}_{\lbrace b_u,b_i \rbrace} \sum_{(u,i)\in \mathcal{K}} (r_{ui} - \bar{r} - b_u - b_i )^2 + \lambda_{1}(\sum_{u} b_u^{2} +\sum_{i} b_i^{2})$

+ +

$\\ \\$

+ +

+ +

$\textbf{Question 1}$: I would like to know how Alternating Least Squares works in this case. How does it minimize the equation above? The idea behind the whole equation that needs to be minimized is, I think, like when we do a simple linear regression and we have to fit the line. Am I right? In linear regression we minimize $(y - \hat y)^2$; in the case of the paper we minimize $(r_{ui} - \mu -b_{i}-b_{u})^2 +\lambda(...)$
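
To show my current understanding, here is a rough sketch of the alternating updates I imagine (names like ratings, users, items, r_bar, lambda1 and n_iterations are placeholders; this is my guess, not the paper's code):

from collections import defaultdict

b_u = defaultdict(float)
b_i = defaultdict(float)

for _ in range(n_iterations):
    # Fix the item biases and solve for each user bias in closed form,
    # then fix the user biases and solve for each item bias, and repeat.
    for u in users:
        rated = [(i, r) for (uu, i, r) in ratings if uu == u]
        b_u[u] = sum(r - r_bar - b_i[i] for i, r in rated) / (lambda1 + len(rated))
    for i in items:
        rated = [(u, r) for (u, ii, r) in ratings if ii == i]
        b_i[i] = sum(r - r_bar - b_u[u] for u, r in rated) / (lambda1 + len(rated))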

+ +

Just in case I leave the link: paper

+",25405,,25405,,5/5/2019 6:58,5/5/2019 6:58,Estimating Baselines using ALS,,0,2,,,,CC BY-SA 4.0 +12137,2,,9526,5/5/2019 4:06,,1,,"

The Chinese room argument is such a big deal because it takes the concept of the Turing machine and Turing's conception of the electronic digital computer (so-called) as a practical version of the universal Turing machine, and shows that the resulting concept of machine computation does not allow the creation of an internal semantics (knowledge). Explaining why this is the case is a bit of a fraught task, though.

+",17709,,,,,5/5/2019 4:06,,,,0,,,,CC BY-SA 4.0 +12141,2,,12135,5/5/2019 6:42,,2,,"

Convergence guarantees for basic RL algorithms like policy gradient / actor-critic methods make no assumptions about the dynamics of the MDP. So, theoretically, you don't need to change much.

+ +

Practically, when the number of possible trajectories from any given state is so high, the return from each state will have high variance. This means you'll have to collect much more experience for your estimates of expected return to converge to their true values. Intuitively, an environment with high uncertainty requires the agent to do more knowledge-gathering to behave optimally.

+ +

My real advice to you depends on what exactly you're trying to do. If you want to have the kind of agent that could learn to behave well in an extremely random environment, then all you need to worry about is giving it enough experience to learn from.

+ +

(Your agent should also take a little longer before deciding it's ""confident"" in its evaluation of different states. That is, don't behave greedily before you're sure your estimates are accurate. Explore adequately. This advice is only relevant if your MDP dynamics aren't actually completely uniform.)

+ +

If, however, you want to train an RL agent specifically to solve a problem formulated as an MDP with uniformly random dynamics, then I would tell you to not waste your time. We know before spending the computation that all policies would be equally good/bad in this setting. Since actions are irrelevant to the environment, it would be inefficient to deploy an RL agent that will only learn that which action it takes doesn't matter.

+ +
+ +

As noted in the comments, the last paragraph is only true when reward from each state-action pair $(s,a)$ is also uniformly random. If it is not, just being aware of the high variance and giving your agent a lot of experience should do the trick.

+",22916,,22916,,5/5/2019 8:51,5/5/2019 8:51,,,,2,,,,CC BY-SA 4.0 +12142,1,12147,,5/5/2019 8:45,,20,1955,"

With the increasing complexity of reCAPTCHA, I wondered about the existence of some problem that only a human will ever be able to solve (or that AI won't be able to solve as long as it doesn't reproduce exactly the human brain).

+ +

For instance, the distorted text used to be only possible to solve by humans. Although...

+ +
+

The computer now got the [distorted text] test right 99.8%, even in the most challenging situations.

+
+ +

It also seems obvious that distorted text can't be used for real human detection anymore.

+ +

I'd also like to know whether an algorithm can be employed for the creation of such a problem (as for the distorted text), or if the originality of a human brain is necessarily needed.

+",25411,,2444,,5/5/2019 13:09,6/4/2019 11:34,Problems that only humans will ever be able to solve,,2,0,,,,CC BY-SA 4.0 +12143,2,,12135,5/5/2019 8:49,,3,,"

When the next state selection is not driven by any meaningful dynamics, i.e. it is independent of the starting state $s$ and the action taken $a$, but the rewards received do depend somehow on $s$ and $a$, then the MDP you describe also fits with something called a Contextual Bandit Problem, where there is no control over the state due to action choice, and thus no incentive to choose actions other than for their potential for immediate reward.

+ +

Any algorithm capable of solving a full MDP can also be put to use attempting to solve a contextual bandit problem, as the MDP framework is a strictly more general case of the contextual bandit problem, and can model such an environment. However, this is typically going to be inefficient, as MDP solvers make no assumptions about state transition dynamics and need to experience and learn them. Whilst if you start with an algorithm designed to solve a contextual bandit problem, you have the assumption of randomised state built in to the algorithm, it does not need to be learned, and the learning process should be more efficient.

+ +

Alternatively, if you only have RL solvers available, you can reduce variance and get the same effective policy by setting the discount factor $\gamma = 0$.

+ +

If for some reason you still want or need a long-term discounted value prediction from your policy, you can take the mean predicted value of some random states (or even of all the states if there are few enough of them) and multiply by $\frac{1}{1-\gamma}$ for whatever discount factor you want to know it for. Or if predicting for a time horizon, just multiply by number of steps to the horizon.

+",1847,,1847,,5/5/2019 17:05,5/5/2019 17:05,,,,0,,,,CC BY-SA 4.0 +12144,1,,,5/5/2019 10:05,,7,944,"

The Prioritized Experience Replay paper gives two different ways of sampling from the replay buffer. One, called ""proportional prioritization"", assigns each transition a priority proportional to its TD-error. +$$p_i = |\delta_i|+\epsilon$$

+ +

The other, called ""rank-based prioritization"", assigns each transition a priority inversely proportional to its rank. +$$p_i = 1/\text{rank}(i)$$ +where $\text{rank}(i)$ is the rank of transition $i$ when the replay buffer is sorted according to $|\delta_i|$.
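
For reference, a small sketch of how these two priority schemes turn TD-errors into sampling probabilities (the exponent alpha and the epsilon are the usual hyperparameters):

import numpy as np

def proportional_probs(td_errors, alpha=0.6, eps=1e-6):
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

def rank_based_probs(td_errors, alpha=0.6):
    td = np.abs(np.asarray(td_errors, dtype=float))
    ranks = np.empty(len(td))
    ranks[np.argsort(-td)] = np.arange(1, len(td) + 1)   # rank 1 = largest |TD-error|
    p = (1.0 / ranks) ** alpha
    return p / p.sum()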

+ +

The paper goes on to show that the two methods give similar performance for certain problems.
+Are there times when I should choose one sampling method over the other?

+",22916,,22916,,5/5/2019 10:22,11/26/2020 2:06,Which kind of prioritized experience replay should I use?,,2,0,,,,CC BY-SA 4.0 +12145,2,,12144,5/5/2019 10:05,,6,,"

The authors of that paper hypothesized that rank-based prioritization would be more robust to outliers. They suggested that rank-based sampling would be preferred for this reason. However, as they noted later, the fact that DQN clips rewards anyways weakens this argument.

+ +

If you're going to use someone else's ready-made code for your prioritized experience replay, then you'll probably end up using proportional sampling. Every implementation I've been able to find, including OpenAI's baselines implementation, uses proportional sampling. If you were going to write your own, proportional sampling might be preferred for being simpler to implement (so less error-prone).

+ +

Comparing these two sampling methods is complicated by the fact that rank-based sampling involves a hyperparameter that determines how often you sort your replay buffer. The authors of the original PER paper only sorted every $n$ timesteps, giving the nice amortized time of $O(\log n)$, where $n$ is the size of the replay buffer. Sampling takes constant time, so sampling a minibatch of size $k$ takes $O(k)$ time.

+ +

Proportional sampling doesn't involve sorting, but it does need to maintain a sum tree structure, which takes $O(\log n)$ time each time we add to the buffer. Sampling also takes $O(\log n)$ time, so sampling a minibatch of size $k$ takes $O(k\log n)$ time.

+ +

If we only sort our replay buffer every $n$ timesteps, then rank-based sampling is faster. However, because the buffer is almost always only approximately sorted, the distribution we sample from is only a rough estimate of the distribution we wanted. It's not clear that this estimation would be accurate enough to be performant when the replay buffer is scaled up in size past the $n=10^6$ transitions used in the paper. I haven't seen a study that compares performance for different frequencies of sorting and different buffer sizes.

+ +

So, rank-based sampling might be faster, but it also might not work as well. Adjustment of the sorting frequency hyperparameter might be necessary. The simpler and surer approach would be to use proportional sampling with clipped TD-errors.

+",22916,,22916,,5/5/2019 10:11,5/5/2019 10:11,,,,0,,,,CC BY-SA 4.0 +12147,2,,12142,5/5/2019 13:06,,14,,"

Informally, AI-complete problems are the most difficult problems for an AI. The concept has not yet been mathematically defined, as e.g. NP-complete problems have been. However, intuitively, these are the problems that require a human-level or general intelligence to be solved.

+ +

Real natural language understanding is believed to be an AI-complete problem (this is also discussed in the paper Making AI Meaningful Again by Jobst Landgrebe and Barry Smith, 2019). There are a lot more AI-complete problems. For example, problems that involve emotions.

+ +

+",2444,,26146,,6/4/2019 11:34,6/4/2019 11:34,,,,1,,,,CC BY-SA 4.0 +12148,5,,,5/5/2019 13:22,,0,,"

For more info, see https://en.wikipedia.org/wiki/AI-complete.

+",2444,,2444,,5/7/2019 15:41,5/7/2019 15:41,,,,0,,,,CC BY-SA 4.0 +12149,4,,,5/5/2019 13:22,,0,,"For questions related to AI-complete (or AI-hard) problems, which, informally, are the problems, in AI, that require a human-level or general intelligence to be solved.",2444,,2444,,5/7/2019 15:41,5/7/2019 15:41,,,,0,,,,CC BY-SA 4.0 +12150,1,,,5/5/2019 15:09,,3,176,"

Given the Chinese room argument, and given the developments in chatbots and machine learning, isn't the Turing test superseded by some other way of evaluating an AI's intelligence? Would a positive result of a Turing test provide any value, besides telling us that a machine is good at conversations (but possibly nothing more)?

+",25414,,,,,5/5/2019 16:36,"Is the Turing test still relevant, as of 2019?",,1,0,,,,CC BY-SA 4.0 +12151,2,,12150,5/5/2019 16:30,,2,,"

The Turing test is a good test for AI applications like Siri and Alexa (or, in general, intelligent personal assistants), but it doesn't test a lot of features (e.g. vision) that an artificial general intelligence requires. There are several arguments against the Turing test (and it is definitely not flawless), but it is a necessary (though insufficient) test for AGI. The Turing test is still quite relevant: e.g. if IPAs passed the test, people would use them more often and they would be a lot more useful.

+",2444,,2444,,5/5/2019 16:36,5/5/2019 16:36,,,,0,,,,CC BY-SA 4.0 +12152,2,,7548,5/5/2019 16:58,,2,,"

Jobst Landgrebe and Barry Smith, in the paper Making AI meaningful again (2019), argue that machine learning is not sufficient to build an AI that is able to fully (like humans) understand language. They state that current machine learning (or stochastic) approaches might not take into account a lot of context while understanding language (e.g. in machine translation), because of the possible lack of associated data or because of the large number of possible solutions to a language understanding problem.

+ +

If understanding language is a necessary skill to pass the Turing test, then current machine learning is not sufficient to build an AI that will pass the Turing test (according to them).

+",2444,,2444,,5/5/2019 17:30,5/5/2019 17:30,,,,0,,,,CC BY-SA 4.0 +12153,2,,12126,5/5/2019 17:33,,1,,"

Why not make a detector/classifier that looks for the text ""Drivers Licence"" or some very generic keywords related to licenses? As you mentioned, images of people may be present in any sort of document. Looking for text which is super specific to IDs/driver's licenses seems to be a better way.
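
For instance, a rough sketch with an off-the-shelf OCR library (the keyword list is just an example and would need tuning):

import pytesseract
from PIL import Image

KEYWORDS = ('driver license', "driver's license", 'identification card')

def looks_like_id(path):
    # OCR the scan and check whether any ID-specific keyword appears in the text.
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return any(k in text for k in KEYWORDS)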

+",25400,,,,,5/5/2019 17:33,,,,0,,,,CC BY-SA 4.0 +12154,1,12306,,5/5/2019 18:08,,5,211,"

In my understanding, the mind arises from a physical system, the brain. I see that there is a lot of research on the topic of simulating physical systems efficiently (especially in quantum computing). Hence, in theory, we could achieve AGI by simulating the physical brain.

+ +

Is there any research I should look into regarding this topic? I would like to hear if it is possible, why or why not, what the limitations are, how far we are from achieving this, and anything else, really. Also, I would like to read something about my first assumption (""the mind arises from a physical system, the brain"").

+ +

I have searched AI.SE but I've found only related questions, so I don't think this is a duplicate. For reference:

+ + + +

Note: I am not asking for the possibility NOW, but in general, so telling me that ""we don't know the brain enough"" is not on topic.

+",23527,,23527,,5/5/2019 18:16,1/26/2021 1:40,"Is there any paper, article or book that analyzes the feasibility of acheiving AGI through brain-simulation?",,1,0,,,,CC BY-SA 4.0 +12156,2,,12142,5/5/2019 18:44,,6,,"

This is more of a comment and philosophical opinion, but I don't believe that there are any problems a human can solve that an AI couldn't. Being new to this forum, I cannot make it a comment on the question (and it would probably be too long); I preemptively ask for your forgiveness.

+ +

AI Eventually Will Mimic Humans (and surpass them)

+ +

Humans by nature are logical. Logic is learned or hardwired, and influenced by observation and chemical impulses.

+ +

As long as an AI can be trained to act like a human, it will be able to act like one. Currently, those behaviors are limited to technology (space, connections, etc), which the human brain has been optimized to rule out or disregard certain “fluff” automatically enabling it certain super capabilities. For instance, not everything seen is registered through the brain; often, the brain performs differential comparisons and updates for changes to reduce processing time and energy. It will only be a matter of time before AI can also be programmed to behave this way, or technological advancements will allow it to not need some of this function, which will allow it to leapfrog humans.

+ +

In the current state, we recognize humans are sometimes irrational or inconsistent. In those cases, AI could mimic human limitations with configured randomization patterns, but again, there really won’t be a need since it can be programmed and learn those patterns automatically (if necessary).

+ +

It all comes down to consumption of data, retention of information, and learned corrections. So, there is no problem that a human can perform (to my knowledge) that AI couldn’t theoretically ever perform. Even in the case of chemistry. As we are manufacturing foods and organs, an AI could also, theoretically, one day reproduce and survive via biological functions.

+ +

Instead of the question being binary about human capability vs that of artificial intelligence, I’d be more interested to see what people think are the more challenging things humans can do, which will take AI time to accomplish.

+",25424,,,,,5/5/2019 18:44,,,,1,,,,CC BY-SA 4.0 +12157,1,,,5/5/2019 20:25,,2,23,"

In section 3 of this paper, the author outlines how GARB was adapted to reduce the variance when updating the parameters of an internal reward function estimator.

+ +

I have read it a number of times and understand it up through the end of the explanation of GARB. The author then goes on to explain how they use backprop to implement this procedure, which is the point at which I stop understanding.

+ +

+

+ +

Is there an open source implementation available to look at? I can't figure out whether $g_T$ is actually computed and used or not, and I'm not certain how the internal reward gradient is calculated.

+ +

Any insight you can provide would be helpful.

+ +

EDIT: after reading it several times and several related papers, I think I have more understanding but not quite there. So my main questions are:

+ +

1) are we keeping a full eligibility trace vector with the dimensionality of the vector equal to the number of parameters in $\theta$ (all NN params)?

+ +

2) do we use the gradient calculation via backprop at every step to calculate $g_t$?

+ +

3) do we have to maintain $3 * \theta$ parameters, one for theta, one for $e$ and one for $g_t$?

+ +

4) what then is the procedure at terminal $g_T$ to update the parameters? A simple element-wise matrix operation?

+ +

5) How often should we update the parameters of $\theta$?

+",16343,,16343,,5/6/2019 14:38,5/6/2019 14:38,How is GARB implemented in PGRD-DL to calculate gradients w.r.t. internal rewards?,,0,2,,,,CC BY-SA 4.0 +12158,1,,,5/5/2019 21:15,,1,58,"

I'm currently using the Google speech-to-text API to transcribe audio in real time (police scanner audio dispatches). The audio quality isn't great and I've been putting in key words to try to help train it. Is there a way to use Google, Amazon AWS, or IBM Watson to create a model based on past audio dispatches, where I can manually type in what was said to help train it? It seems like putting in key words won't really cut it. Any other suggestions to help make it more accurate?

+",25432,,,,,5/5/2019 21:15,Speech to text models,,0,0,,5/29/2022 17:15,,CC BY-SA 4.0 +12160,1,,,5/6/2019 6:32,,5,685,"

While I've been able to solve MountainCar-v0 using Deep Q learning, no matter what I try I can't solve this environment using policy-gradient approaches. As far as I have learnt searching the web, this is a really hard environment to solve, mainly because the agent is given a reward only when it reaches the goal, which is a rare event. I tried to apply so-called ""reward engineering"", more or less substituting the reward given by the environment with a reward based upon the ""energy"" of the whole system (kinetic plus potential energy); a sketch of this shaping is included below the questions. But despite this, no luck. I ask you:

+ +
    +
  • is it correct to assume that MountainCar-v0 is beyond the current state-of-the-art A3C algorithm, so that it requires some human intervention to suggest to the agent the policy to follow, for example by adopting reward engineering?
  • +
  • could anyone provide any hint about which reward function could be used, provided that reward engineering is actually needed?
  • +
+ +
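
For reference, the energy-based shaping I tried looks roughly like this (a sketch; the scaling constant is arbitrary, and the observation is assumed to be the standard [position, velocity] pair of MountainCar-v0):

import numpy as np

def shaped_reward(env_reward, state):
    position, velocity = state
    # Rough "mechanical energy" of the car: kinetic term plus a potential term
    # based on the track height, which follows sin(3 * position) in MountainCar.
    energy = 0.5 * velocity ** 2 + 0.0025 * np.sin(3 * position)
    return env_reward + 100.0 * energy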

Thanks for your help.

+",25352,,,,,10/22/2019 22:01,A3C fails to solve MountainCar-v0 enviroment (implementation by OpenAi gym),,1,0,,,,CC BY-SA 4.0 +12161,1,12180,,5/6/2019 8:03,,4,148,"

There are different kinds of machine learning algorithms, both univariate and multivariate, that are used for time series forecasting: for example ARIMA, VAR or AR.

+ +

Why is it harder (compared to classical models like ARIMA) to achieve good results using neural network based algorithms (like ANN and RNN) for multi step time series forecasting?

+",24006,,2444,,5/7/2019 13:32,5/7/2019 13:50,Why is it harder to achieve good results using neural network based algorithms for multi step time series forecasting?,,2,1,,,,CC BY-SA 4.0 +12163,1,,,5/6/2019 8:08,,2,133,"

I read several papers and articles where it is suggested that transposed convolution with a stride of 2 is better than upsampling followed by convolution.

+ +

However, implementing such a model with transposed convolutions resulted in a heavy checkerboard effect, where the whole generated image is just a pattern of squares and no learning takes place. How can I properly implement it without totally messing up the generation? With upsampling + convolution I got an okay result, but I want to improve my model. I am trying to generate images based on the CelebA dataset.

+ +

I use keras with tf and I used the following code:

+ +
from keras.models import Sequential
from keras.layers import Conv2DTranspose, LeakyReLU, BatchNormalization

model = Sequential()

# 4x4 -> 8x8
model.add(Conv2DTranspose(256, 5, strides=2, padding='same'))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.9))

# 8x8 -> 16x16
model.add(Conv2DTranspose(128, 5, strides=2, padding='same'))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.9))

# 16x16 -> 32x32
model.add(Conv2DTranspose(64, 5, strides=2, padding='same'))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.9))
+
+ +

Here I try to turn a 4x4 image into a 32x32 one. Later it will be turned into a 64x64 image with 1 or 3 channels, depending on the image. However, I always get the following pattern. Some tweaking usually leads to some other pattern, but it does not really change:

+ +

Checkerboard effect

+ +

Thank you for your answers in advance

+",25438,,,,,5/6/2019 8:08,Transposed convolution as upsampling in DCGAN,,0,0,,,,CC BY-SA 4.0 +12168,1,12175,,5/6/2019 14:27,,4,174,"

So, I have a dataset that has around 1388 unique products and I have to do unsupervised learning on them in order to find anomalies (high/low peaks).

+

The data below just represents one product. The ContextID is the product number, and the StepID indicates different stages in the making of the product.

+
    ContextID   BacksGas_Flow_sccm  StepID  Time_ms
+427 7290057 1.7578125   1   09:20:15.273
+428 7290057 1.7578125   1   09:20:15.513
+429 7290057 1.953125    2   09:20:15.744
+430 7290057 1.85546875  2   09:20:16.814
+431 7290057 1.7578125   2   09:20:17.833
+432 7290057 1.7578125   2   09:20:18.852
+433 7290057 1.7578125   2   09:20:19.872
+434 7290057 1.7578125   2   09:20:20.892
+435 7290057 1.7578125   2   09:20:22.42
+436 7290057 16.9921875  5   09:20:23.82
+437 7290057 46.19140625 5   09:20:24.102
+438 7290057 46.19140625 5   09:20:25.122
+439 7290057 46.6796875  5   09:20:26.142
+440 7290057 46.6796875  5   09:20:27.162
+441 7290057 46.6796875  5   09:20:28.181
+442 7290057 46.6796875  5   09:20:29.232
+443 7290057 46.6796875  5   09:20:30.361
+444 7290057 46.6796875  5   09:20:31.381
+445 7290057 46.6796875  5   09:20:32.401
+446 7290057 46.6796875  5   09:20:33.431
+447 7290057 46.6796875  5   09:20:34.545
+448 7290057 46.6796875  5   09:20:34.761
+449 7290057 46.6796875  5   09:20:34.972
+450 7290057 46.6796875  5   09:20:36.50
+451 7290057 46.6796875  5   09:20:37.120
+452 7290057 46.6796875  7   09:20:38.171
+453 7290057 46.6796875  7   09:20:39.261
+454 7290057 46.6796875  7   09:20:40.280
+455 7290057 46.6796875  12  09:20:41.429
+456 7290057 46.6796875  12  09:20:42.449
+457 7290057 46.6796875  12  09:20:43.469
+458 7290057 46.6796875  12  09:20:44.499
+459 7290057 46.6796875  12  09:20:45.559
+460 7290057 46.6796875  12  09:20:45.689
+461 7290057 47.16796875 12  09:20:46.710
+462 7290057 46.6796875  12  09:20:47.749
+463 7290057 46.6796875  15  09:20:48.868
+464 7290057 46.6796875  15  09:20:49.889
+465 7290057 46.6796875  16  09:20:50.910
+466 7290057 46.6796875  16  09:20:51.938
+467 7290057 24.21875    19  09:20:52.999
+468 7290057 38.76953125 19  09:20:54.27
+469 7290057 80.46875    19  09:20:55.68
+470 7290057 72.75390625 19  09:20:56.128
+471 7290057 59.5703125  19  09:20:57.247
+472 7290057 63.671875   19  09:20:58.278
+473 7290057 70.5078125  19  09:20:59.308
+474 7290057 71.875  19  09:21:00.337
+475 7290057 69.82421875 19  09:21:01.358
+476 7290057 69.23828125 19  09:21:02.408
+477 7290057 69.23828125 19  09:21:03.548
+478 7290057 72.4609375  19  09:21:04.597
+479 7290057 73.4375 19  09:21:05.615
+480 7290057 73.4375 19  09:21:06.647
+481 7290057 73.4375 19  09:21:07.675
+482 7290057 73.4375 19  09:21:08.697
+483 7290057 73.4375 19  09:21:09.727
+484 7290057 74.21875    19  09:21:10.796
+485 7290057 75.1953125  19  09:21:11.827
+486 7290057 75.1953125  19  09:21:12.846
+487 7290057 75.1953125  19  09:21:13.865
+488 7290057 75.1953125  19  09:21:14.886
+489 7290057 75.1953125  19  09:21:15.907
+490 7290057 75.9765625  19  09:21:16.936
+491 7290057 75.9765625  19  09:21:17.975
+492 7290057 75.9765625  19  09:21:18.997
+493 7290057 75.9765625  19  09:21:20.27
+494 7290057 75.9765625  19  09:21:21.55
+495 7290057 75.9765625  19  09:21:22.75
+496 7290057 75.9765625  19  09:21:23.95
+497 7290057 76.85546875 19  09:21:24.204
+498 7290057 76.85546875 19  09:21:25.225
+499 7290057 76.85546875 19  09:21:25.957
+500 7290057 76.85546875 19  09:21:26.984
+501 7290057 75.9765625  19  09:21:27.995
+502 7290057 75.9765625  19  09:21:29.2
+503 7290057 76.7578125  19  09:21:30.13
+504 7290057 76.7578125  19  09:21:31.33
+505 7290057 76.7578125  19  09:21:32.59
+506 7290057 76.7578125  19  09:21:33.142
+507 7290057 76.7578125  19  09:21:34.153
+508 7290057 75.87890625 19  09:21:34.986
+509 7290057 75.87890625 19  09:21:35.131
+510 7290057 75.87890625 19  09:21:35.272
+511 7290057 75.87890625 19  09:21:35.451
+512 7290057 76.7578125  19  09:21:36.524
+513 7290057 76.7578125  19  09:21:37.651
+514 7290057 76.7578125  19  09:21:38.695
+515 7290057 76.7578125  19  09:21:39.724
+516 7290057 76.7578125  19  09:21:40.760
+517 7290057 76.7578125  19  09:21:41.783
+518 7290057 76.7578125  19  09:21:42.802
+519 7290057 76.7578125  19  09:21:43.822
+520 7290057 76.7578125  19  09:21:44.862
+521 7290057 76.7578125  19  09:21:45.884
+522 7290057 76.7578125  19  09:21:46.912
+523 7290057 76.7578125  19  09:21:47.933
+524 7290057 76.7578125  19  09:21:48.952
+525 7290057 76.7578125  19  09:21:49.972
+526 7290057 76.7578125  19  09:21:51.72
+527 7290057 77.5390625  19  09:21:52.290
+528 7290057 77.5390625  19  09:21:52.92
+529 7290057 77.5390625  19  09:21:53.361
+530 7290057 77.5390625  19  09:21:54.435
+531 7290057 76.66015625 19  09:21:55.602
+532 7290057 76.66015625 19  09:21:56.621
+533 7290057 72.94921875 22  09:21:57.652
+534 7290057 3.90625 24  09:21:58.749
+535 7290057 2.5390625   24  09:21:59.801
+536 7290057 2.1484375   24  09:22:00.882
+537 7290057 2.05078125  24  09:22:01.259
+538 7290057 2.1484375   24  09:22:01.53
+539 7290057 1.953125    24  09:22:02.281
+540 7290057 1.953125    24  09:22:03.311
+541 7290057 2.1484375   24  09:22:04.331
+542 7290057 2.1484375   24  09:22:05.351
+543 7290057 1.953125    24  09:22:06.432
+544 7290057 1.85546875  24  09:22:07.519
+545 7290057 1.7578125   24  09:22:08.549
+546 7290057 1.85546875  24  09:22:09.710
+547 7290057 1.7578125   24  09:22:10.738
+548 7290057 1.85546875  24  09:22:11.798
+549 7290057 1.953125    24  09:22:12.820
+550 7290057 1.85546875  1   09:22:13.610
+551 7290057 1.85546875  1   09:22:14.629
+552 7290057 1.953125    1   09:22:15.649
+553 7290057 1.85546875  2   09:22:16.679
+554 7290057 1.85546875  2   09:22:17.709
+555 7290057 1.85546875  2   09:22:18.729
+556 7290057 1.953125    2   09:22:19.748
+557 7290057 1.85546875  2   09:22:20.768
+558 7290057 1.7578125   3   09:22:21.788
+559 7290057 1.7578125   3   09:22:22.808
+560 7290057 1.85546875  3   09:22:23.829
+561 7290057 1.953125    3   09:22:24.848
+562 7290057 1.85546875  3   09:22:25.898
+563 7290057 1.953125    3   09:22:27.39
+564 7290057 1.953125    3   09:22:28.66
+565 7290057 1.7578125   3   09:22:29.87
+566 7290057 1.85546875  3   09:22:30.108
+567 7290057 1.7578125   3   09:22:31.129
+568 7290057 1.953125    3   09:22:32.147
+569 7290057 1.85546875  3   09:22:33.187
+
+

I use the following code to plot a graph.

+

Code:

+
import matplotlib.pyplot as plt

# select the rows of a single product
lineplot = X.loc[X['ContextID'] == 7290057]
x_axis = lineplot.values[:, 3]   # Time_ms
y_axis = lineplot.values[:, 1]   # BacksGas_Flow_sccm

plt.figure(1)
plt.plot(x_axis, y_axis)
plt.show()
+
+

and the graph: +

+

In this graph, the peaks (marked in red circles) are the anomalies that need to be detected.

+

And when I have a graph like this, no anomalies should be caught, since there are no undesirable peaks.

+

+

I tried using OneClassSVM, but I am somehow not satisfied with the results.

+

I would like to know which unsupervised learning algorithm can be used for such a task at hand.

+",23380,,32410,,9/21/2021 8:41,9/21/2021 8:41,Which unsupervised learning algorithm can be used for peaks detection?,,1,0,,,,CC BY-SA 4.0 +12171,1,,,5/6/2019 19:20,,4,278,"

I am building a video analytics program for counting moving things in a video. I am detecting bicycles and nothing else. I run object detection using the SSD mobile-net model in all the frames and store the bounding box coordinates (x,y,w,h) of each detection to a CSV file.

+ +

So, for a video, I have a CSV file with one row per frame, and each row has multiple detections D1, D2, D3, ..., Dn. Each detection has the bounding box coordinates as values; D1 is x,y,w,h.

+ +

Based on the x,y values of each detection, I am trying to find the direction of the bicycles and, if a bicycle crosses the whole frame, to do an UP/DOWN count.

+ +

How do I count/track these bounding boxes moving in the video (I don't want to use classic tracking algorithms)?

+ +

I see LSTM/RNN coming up in my search results when I search for video analytics. Being a noob, I am not able to find any tutorial that suits my needs.

+ +

I would like to check if my approach towards the problem is correct.

+ +

I don't want to use the classical tracking solutions for two reasons

+ +
    +
  1. I feel the tracking and counting conditions that I program in Python are always leaky/fail in certain conditions, hence I want to see how AI manages to count the objects.

  2. +
  3. The video stream I am using has heavy distortion on the objects that I track, hence the shape and size of the object changes drastically within 10s/20s of frames.

  4. +
+ +

Any help or suggestion towards other better approaches is much appreciated.

+ +

Edit 1: The area of view under the camera is fixed, and we expect the bicycles to enter from one side. Let's assume that the view and entry/exit are like shown in this video https://www.youtube.com/watch?v=tW7Pl3bSzR4

+",22093,,22093,,5/12/2019 18:32,5/13/2019 23:20,Object IN/OUT counting using CNN+RNN,,1,2,,,,CC BY-SA 4.0 +12172,1,,,5/6/2019 20:48,,7,218,"

I'm interested in the industrial use of GDL (see https://arxiv.org/abs/1611.08097). Is it used in industry? That is, does any company have access to non-Euclidean data and process it directly instead of converting it to a more standard format?

+",22365,,2444,,5/6/2019 20:54,5/16/2019 15:39,Do you know any examples of geometric deep learning used in industry?,,1,1,,,,CC BY-SA 4.0 +12173,1,,,5/7/2019 5:11,,0,78,"

I have searched, but found that some similarity measures are for continuous data and some are for categorical data. But I want to know: which similarity measures can be used for both kinds of data, continuous and categorical?

+",23501,,,,,5/7/2019 5:21,what are the similarity measure use for both continuous and categorical data?,,1,0,,,,CC BY-SA 4.0 +12174,2,,12173,5/7/2019 5:21,,1,,"

Sometimes continuous data can be represented as a parametric distribution with the distribution parameters as variables, essentially a continuous stochastic process. In that case, cross-entropy would work on that type of continuous data.

+",22745,,,,,5/7/2019 5:21,,,,0,,,,CC BY-SA 4.0 +12175,2,,12168,5/7/2019 8:41,,3,,"

If your anomalies are simply peaks, why should you be using machine learning methods? You could use peak detection algorithms for the purpose.
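
For example, something along these lines (a sketch using SciPy; `group` is assumed to be the DataFrame for one ContextID, and the thresholds need tuning to your signal):

from scipy.signal import find_peaks

y = group['BacksGas_Flow_sccm'].values    # one product's time series
peaks, _ = find_peaks(y, prominence=10)   # indices of the undesired spikes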

+ +

If you still insist on ML, isolation forest is a good try.

+",25400,,,,,5/7/2019 8:41,,,,2,,,,CC BY-SA 4.0 +12176,1,,,5/7/2019 9:00,,1,27,"

I want to make an 8-fold cross-validation split from the dataset. The dataset consists of annotated musical onsets, with one txt file per song.

+ +

+ +

How can I make another folder, called splits (the name of the folder), which contains the 8 folds of songs? Each fold will be written in txt format.

+ +

+ +

+ +

I need some code references to make 8 folds like that. Do I need to use sklearn's KFold? And how do I save each song as stated in the third picture?
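
To show what I mean, a sketch of what I imagine (folder names and the fold file names are placeholders):

import os
from sklearn.model_selection import KFold

songs = sorted(os.listdir('annotations'))   # one txt file per song
os.makedirs('splits', exist_ok=True)

kf = KFold(n_splits=8, shuffle=True, random_state=0)
for fold, (_, test_idx) in enumerate(kf.split(songs)):
    # write the file names that belong to this fold into one txt file
    with open(os.path.join('splits', 'fold_%d.txt' % fold), 'w') as f:
        for idx in test_idx:
            f.write(songs[idx] + '\n')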

+ +

Thank you.

+",18885,,,,,5/7/2019 9:00,N fold Cross Validation,,0,2,0,,,CC BY-SA 4.0 +12180,2,,12161,5/7/2019 12:44,,3,,"

ANNs & RNNs can be used to create some great models in many different domains, including time-series forecasting. However, across all of these domains, they suffer from the problem of hyper-parameter optimization. Because neural networks are so flexible, it is not clear, at the outset, which arrangement of neurons will be most effective to solve a given problem. It is also not clear how fast the network should learn from new signals, what sorts of activation functions to use in the different layers of the network, and which of several possible regularization methods might be best. Making these decisions well requires either years of practice and experience, or a lot of trial and error (or, maybe both!).

+ +

In contrast, a regression-based method like ARMA will typically have just a couple of simple hyperparameters, each of which has a clear, intuitive, meaning. This means that an untrained practitioner can probably get an ARMA result that is close to the result of a trained practitioner using ARMA.

+ +

Essentially: neural networks are brittle and sensitive to the choice of hyper-parameters, while regression generally is not.

+",16909,,,,,5/7/2019 12:44,,,,0,,,,CC BY-SA 4.0 +12182,1,12198,,5/7/2019 13:26,,1,3711,"

What is a temporal feature, and what makes a feature temporal in nature? Is this problem-agnostic? How does it change across different fields of study?

+",22093,,2444,,5/8/2019 0:02,5/8/2019 0:07,What is a temporal feature?,,1,0,,,,CC BY-SA 4.0 +12183,1,,,5/7/2019 13:41,,3,864,"

I hope this question is ok here, but since I have found a tag which deals with these issues (profession), I'll ask away. I also hope this may be useful to other people with similar doubts, since I am failing to find valuable information on this topic online.

+ +
+ +

I am interested in the theoretical side of CS, such as computability, logic, complexity theory and formal methods. At the same time, I am deeply fascinated by Artificial Intelligence and the questions it poses to our understanding of the notion of intelligence and what does it mean to be a human being.

+ +

In general, is AI a more ""applied""/engineering-oriented field, or are there theoretical aspects to do research in?

+ +

In short: If I prefer formal/theoretical compsci, is AI a bad career choice?

+ +

(note: I am asking this because I am a CS undergrad considering getting into a AI MSc).

+",23527,,23527,,5/7/2019 13:51,5/8/2019 4:32,"If I am interested in theoretical computer science, is AI a bad choice?",,2,5,0,5/4/2020 12:18,,CC BY-SA 4.0 +12185,2,,12161,5/7/2019 13:50,,2,,"

Given the (usual) higher architectural complexity of ML models compared to more classical forecasting models, ML models might also require more data, otherwise they might just overfit the training dataset.

+ +

Furthermore, online learning (or training) of a neural network using stochastic gradient descent (that is, one example at a time) might also be numerically unstable and statistically inefficient (so convergence might be slow). See Towards stability and optimality in stochastic gradient descent for more details and a solution (AI-SGD).

+",2444,,,,,5/7/2019 13:50,,,,1,,,,CC BY-SA 4.0 +12186,2,,12183,5/7/2019 14:14,,2,,"

Any ""serious"" AI program is full of theoretical and mathematical foundations (you will study plenty of statistics and optimisation methods) anyway, but I would say that much useful AI today is an applied or engineering area. Anyhow, you will need to be comfortable with a lot of mathematical details (especially, linear algebra and calculus). If you're more interested in statistics, optimisation or robotics, you should go for AI.

+ +

If you study ""pure"" computer science, you should also have one or two courses related to AI (at least, one ML course). If you are more interested in traditional CS algorithms, data structures, software engineering, operating systems, compilers, theory of computation, computer networking, programming languages and/or databases, then you should go for CS.

+ +

However, before enrolling in a master's program, you should really have a look at the details of the courses they offer. Furthermore, you might also take into account that, during your studies, you might change idea regarding one subject.

+",2444,,2444,,5/7/2019 14:46,5/7/2019 14:46,,,,1,,,,CC BY-SA 4.0 +12187,1,12192,,5/7/2019 14:53,,3,1834,"

How fast does Monte Carlo Tree Search converge? Is there a proof that it converges?

+

How does it compare to temporal-difference learning in terms of convergence speed (assuming the evaluation step is a bit slow)?

+

Is there a way to exploit the information gathered during the simulation phase to accelerate MCTS?

+

Sorry if there are too many questions; if you have to choose one, please choose the last question.

+",23866,,2444,,1/2/2022 10:07,1/13/2022 23:15,How fast does Monte Carlo tree search converge?,,1,0,,,,CC BY-SA 4.0 +12189,1,12190,,5/7/2019 15:50,,3,1988,"

Is the reinforcement learning problem adaptable to the setting where there is only one - final - reward? I am aware of problems with sparse and delayed rewards, but what about only one reward and quite a long path?

+",8332,,2444,,11/2/2020 22:02,11/3/2020 13:37,Can reinforcement learning be used for tasks where only one final reward is received?,,1,0,,,,CC BY-SA 4.0 +12190,2,,12189,5/7/2019 15:59,,12,,"

RL can be used for cases where you have sparse rewards (i.e. at almost every step all rewards are zero), but, in such a setting, the experience the agent receives during the trajectory does not provide much information regarding the quality of the actions.

+

Games can often be formulated as episodic tasks. For example, you could formulate a chess match as an episode and you could give a (non-zero) reward only at the end of the match. However, in this specific case, it will be hard for the RL agent to understand which moves have mainly contributed to the reward received, which is known as the credit assignment problem.

+

You can solve the issue of sparse rewards with reward shaping (in particular, potential-reward shaping).
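
For reference, potential-based shaping adds a term of the form $F(s, s') = \gamma \Phi(s') - \Phi(s)$ to the original reward, for some potential function $\Phi$ over the states; shaping of exactly this form is known not to change the optimal policy.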

+

The term "delayed rewards" may also refer to the cases where you receive only one reward at the end of the episode, although it may more usually refer to scenarios where the reward at one-time step is only received later (for some reason).

+",2444,,2444,,11/3/2020 13:37,11/3/2020 13:37,,,,0,,,,CC BY-SA 4.0 +12191,1,,,5/7/2019 17:23,,3,1583,"

Apologies for the lengthy title. My question is about the weight update rule for logistic regression using stochastic gradient descent.

+ +

I have just started experimenting with Logistic Regression. I came across two weight update expressions and did not know which one is more accurate and why they are different.

+ +

The first Method:

+ +

Source: (Book) Artificial Intelligence: A Modern Approach by Norvig and Russell, on pages 726-727, using the L2 loss function:

+ +

+ +

+ +

where g stands for the logistic function, g' stands for g's derivative, w stands for the weight, and hw(x) represents the logistic regression hypothesis.

+ +

The other method:

+ +

Source (Paper authored by Charles Elkan): Logistic Regression and Stochastic Gradient Training.

+ +

can be found here

+ +

+",25463,,1671,,10/15/2019 19:24,10/15/2019 19:24,What is the right formula for weight update rule in Logistic Regression using stochastic gradient descent,,0,2,,,,CC BY-SA 4.0 +12192,2,,12187,5/7/2019 19:38,,4,,"

Yes, Monte Carlo tree search (MCTS) has been proven to converge to optimal solutions, under assumptions of infinite memory and computation time. That is, at least for the case of perfect-information, deterministic games / MDPs.

+

Maybe some other problems were covered too by some proofs (I could intuitively imagine the proofs holding up for non-deterministic games as well, depending on implementation details)... but the classes of problems I mentioned above are what I'm sure about. The initial, classic proofs can be found in:

+ +

Much more recently the paper On Reinforcement Learning Using Monte Carlo Tree Search with Supervised Learning: Non-Asymptotic Analysis appeared on arXiv, in which I saw it is mentioned that there may have been some flaws in those original papers, but they also seem to be able to fix it and add more theory for the more "modern" variants which combine (deep) learning approaches inside MCTS.

+
+

It should be noted that, as is typically the case, all those convergence proofs are for the case where you spend an infinite amount of time running your algorithm. In the case of MCTS, you can intuitively think of the proofs only starting to hold once your algorithm has managed to build up the complete search tree, and then on top of that has had sufficient time to run through all the possible paths in the tree sufficiently often for the correct values to backpropagate. This is unlikely to be realistic for most interesting problems (and if it is feasible, a simpler breadth-first search algorithm may be a better choice).

+
+
+

How does it compare to Temporal Difference learning in terms of convergence speed (assuming the evaluation step is a bit slow)?

+
+

If you're thinking of a standard, tabular TD learning approach like Sarsa... such approaches actually turn out to be very closely related to MCTS. In terms of convergence speed, I'd say the important differences are:

+
    +
  • MCTS focusses on "learning" for a single state, the root state; all efforts are put towards obtaining an accurate value estimate for that node (and its direct children), whereas typical TD implementations are about learning immediately for the complete state-space. I suppose the "focus" of MCTS could improve its convergence speed for that particular state...
  • +
  • but the fact that the search tree (which can be viewed as its "table" of $Q$-values as you'd see in Sarsa or $Q$-learning) only slowly grows can also be a disadvantage, in comparison to tabular TD learning approaches which start out with a complete table that covers the complete state space.
  • +
+

Note that papers such as the last one I linked above show how MCTS can also actually use Temporal Difference learning for its backing up of values through the tree... so looking at it from a "MCTS vs TD learning" angle doesn't really make too much sense when you consider that TD learning can be used inside MCTS.

+
+
+

Is there a way to exploit the information gathered during the simulation phase to accelerate MCTS?

+
+

There are lots and lots of ideas like that tend to improve performance empirically. It will be difficult to say much about them in theory though. Some examples off the top of my head:

+
    +
  • All Moves As First (AMAF)
  • +
  • Rapid Action Value Estimation (RAVE, also see GRAVE)
  • +
  • Move Average Sampling Technique (MAST)
  • +
  • N-Gram Selection Technique (NST)
  • +
  • Last-Good-Reply policy
  • +
  • ...
  • +
+

Many of them can be found in this survey paper, but it is somewhat old now (from 2012), so it doesn't include all the latest stuff.

+",1641,,2444,,1/13/2022 23:15,1/13/2022 23:15,,,,0,,,,CC BY-SA 4.0 +12195,2,,5774,5/7/2019 21:06,,3,,"

+ +

Backpropagation with stride > 1 involves dilation of the gradient tensor with stride-1 zeroes. I created a blog post that describes this in greater detail.

+",25475,,,,,5/7/2019 21:06,,,,0,,,,CC BY-SA 4.0 +12196,2,,12183,5/7/2019 21:32,,3,,"

There are certainly results in theoretical computer science / pure math with deep implications for AI. But to my knowledge these results typically aren't labeled as results of artificial intelligence, but as something more congruent in that particular field (For example in CS, we might say ""agent with unbounded computational power""; in math might say some statement is ""decidable/undecidable"" with respect to some system). Of course they still matter in the field of AI, but you need to know what you are looking for.

+ +

See my question What are some implications of Gödel's theorems on AI research? for some examples. Or you can look up MIRI's research guide for a better idea of what existing work is out there that links formal math / CS to AI research.

+ +

Another point to raise is that there is no good definition of AI in fields outside of normal discourse (or even within, perhaps), so its difficult to decide what discussions pertains to the study of AI. Questions like whether ZFC with/without choice is expressive enough might not be on the mind of most AI researchers, but could still have some implications.

+ +

So to answer you question more directly, there is certainly a field of study regarding theoretical AI. Regarding whether or not its a good choice is something for you to decide, but it is (in my humble and not-very-well-educated opinion) very difficult field that isn't very popular, and has not seen major progress in many years.

+",6779,,6779,,5/8/2019 4:32,5/8/2019 4:32,,,,3,,,,CC BY-SA 4.0 +12197,2,,11517,5/7/2019 22:04,,2,,"

Although there seems to be an apt analogy between Gödel's theorems and the PSHH, there is nothing formal linking the two together.

+ +

More concretely, Gödel's theorems are about systems that decide certain ""truths"" about mathematics, but unless I am mistaken, the PSHH doesn't imply that the symbol system of the mind needs to decide truths. Though we humans do implicitly decide facts about math, there isn't a formal interpretation of how that might be done in the PSHH, so Gödel's theorems do not apply.

+ +

However, this answer is still good, under the assumption that the formal system we are talking about does indeed decide certain truths about math.

+",6779,,2444,,5/13/2020 10:13,5/13/2020 10:13,,,,0,,,,CC BY-SA 4.0 +12198,2,,12182,5/8/2019 0:01,,2,,"

In general, the expression "temporal feature" might refer to any feature that is associated with or changes over time.

+

However, in the context of signal processing, a temporal feature might refer to any feature of the data before being transformed to the Fourier, frequency or spectral domain, using the Fourier transform. In this context, the domain of the untransformed data is often called "time domain" (as opposed to the "frequency" or "spectral" domain, which is the domain of the transformed data), even though it might not be strictly associated with or defined as a function of time. For example, in image processing, an image can be interpreted as a 2D signal. The domain of an image can be referred to as the "time domain", even though it is usually and more correctly referred to as the "spatial domain" (given that an image can be thought of as a function from a pixel, which is defined by two numbers $x$ and $y$, to a value, e.g. a grayscale value). You can transform this image, using the Fourier transform, to the spectral domain. In that case, the domain of the result of the transformation can be referred to as the "spectral domain".

+

In the paper Learning Temporal Features Using a Deep Neural Network and its Application to Music Genre Classification, the authors define spectral and temporal features

+
+

Extracting features from audio that are relevant to the task at hand is a very important step in many music information retrieval (MIR) applications, and the choice of features has a huge impact on the performance. For the past decades, numerous features have been introduced and successfully applied to many different kinds of MIR systems. These audio features can be broadly categorized into two groups: 1) spectral and 2) temporal features.

+

Spectral features (SFs) represent the spectral characteristics of music in a relatively short period of time. In a musical sense, it can be said to reveal the timbre or tonal characteristics of music. Some of popular SFs include: spectral centroid, spectral spread, spectral flux, spectral flatness measure, mel-frequency cepstral coefficients (MFCCs) and chroma.

+

On the other hand, temporal features (TFs) describe the relatively long-term dynamics of a music signal over time such as temporal transition or rhythmic characteristics. These include zero-crossing rate (ZCR), temporal envelope, tempo histogram, and so on.

+

The two groups are not mutually exclusive, however, and many MIR applications use a combination of many different features.

+
+

These definitions are given in the context of signal processing (as mentioned above).

+

To conclude, the meaning of "temporal feature" might change depending on the context. Hence, you should interpret it given the context, but it is almost always associated with time (in some way).

+",2444,,-1,,6/17/2020 9:57,5/8/2019 0:07,,,,0,,,,CC BY-SA 4.0 +12199,5,,,5/8/2019 0:03,,0,,"

For more info, have a look e.g. at https://en.wikipedia.org/wiki/Feature_(machine_learning).

+",2444,,2444,,5/8/2019 23:01,5/8/2019 23:01,,,,0,,,,CC BY-SA 4.0 +12200,4,,,5/8/2019 0:03,,0,,"For questions related to features in the context of machine learning and, in general, AI.",2444,,2444,,5/8/2019 23:01,5/8/2019 23:01,,,,0,,,,CC BY-SA 4.0 +12201,2,,5085,5/8/2019 13:31,,0,,"

I think your intuition about a classifier being the wrong approach is a good one. This looks like a great use-case for word vectors, a ""self-supervised"" learning technique that maps tokens (e.g. ""dog"") to vectors (which might have anywhere between 50 and 500 dimensions). Facebook open-sourced a particularly excellent tool for training word vectors called FastText; you could use this to embed tokens and hashtags alike into a word embedding space. You should find that the vector for ""dog"" ends up ""close to"" (at a small cosine distance from) the vectors for related words and hashtags. Given a word, you can easily look up its vector (after training on your corpus, of course), but how do you find other vectors that are close to it? If you want to do better than ""brute force"" and you need to check against a large number of (vectors for) hashtags, you should consider using Facebook's excellent FAISS library for fast similarity search to find the closest hashtags.
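A minimal sketch of that idea (the toy corpus and hyperparameters below are placeholders, and the calls follow gensim's FastText wrapper, version 4 or later, rather than Facebook's command-line tool):

from gensim.models import FastText

# Toy corpus: tokenised posts with hashtags kept as ordinary tokens.
corpus = [['my', 'dog', 'loves', 'long', 'walks', '#dogsofinstagram'],
          ['cute', 'puppy', 'playing', '#puppylove', '#dog'],
          ['new', 'car', 'day', '#carsofinstagram']]

model = FastText(sentences=corpus, vector_size=100, window=5, min_count=1, epochs=10)

# Tokens (including hashtags) closest to 'dog' by cosine similarity.
print(model.wv.most_similar('dog', topn=5))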

+",17770,,,,,5/8/2019 13:31,,,,0,,,,CC BY-SA 4.0 +12202,2,,5085,5/8/2019 13:35,,1,,"
+

Here is a good approach to achieve the task you want:

+
+ +

Step 1- Compute the Vector representation (i.e embeddings) of all the words you want to include. There are many algorithms out there to achieve this task.

+ +

+ +

Step 2- Choose the #words corresponding to your input word (e.g dog) by applying K-Nearest Neighbors (KNN) or similar algorithms. You basically compute the distances using the embeddings.

+ +

+ +
+

Steps Detailed:

+
+ +

Step 1-

+ +

In NLP we represent human language as a vector of values instead of a set of characters in order to process it. To do so there are 3 approaches in the literature:

+ +

- Word Level Embeddings: Represent each word as a vector of values. + Algorithms: Word2Vec by Google (paper), fastText by Facebook, GloVe by Stanford University (paper) ...

+ +

- Character Level Embeddings: Represent each character as a vector of values. Algorithms: ELMo (paper) ...

+ +

- Sentence Level Embeddings: Represent a sentence as a vector of values. Algorithms: Universal Sentence Encoder by Google (paper) ...

+ +

In your case I suggest using GloVe or ELMo if you have only words, and Universal Sentence Encoder if you have words and sentences. Compute all your word embeddings and move to the next step.

+ +

Step 2-

+ +

Now that you have your embeddings, compute the distances between all your words (use Euclidean, Minkowski or any other distance). Notice that the computation may take some time but will only be executed once.

+ +

Now, each time you have a word (e.g. dog), you apply the KNN algorithm using the computed distances and you will get the words most related to this word. A minimal sketch is shown below.
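In the sketch, the toy 3-dimensional vectors stand in for real GloVe/ELMo embeddings, which you would load instead:

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-in for real pre-computed embeddings (word -> vector).
embeddings = {
    'dog':   np.array([0.9, 0.1, 0.0]),
    'puppy': np.array([0.8, 0.2, 0.1]),
    'cat':   np.array([0.7, 0.3, 0.0]),
    'car':   np.array([0.0, 0.9, 0.8]),
}

words = list(embeddings.keys())
matrix = np.array([embeddings[w] for w in words])
knn = NearestNeighbors(metric='euclidean').fit(matrix)

def related_words(word, k=2):
    # Ask for k + 1 neighbours because the nearest one is the query word itself.
    _, indices = knn.kneighbors([embeddings[word]], n_neighbors=k + 1)
    return [words[i] for i in indices[0][1:]]

print(related_words('dog'))   # e.g. ['puppy', 'cat']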

+ +
+

Note: No need to compute distances and apply KNN if you use Universal Sentence Encoder as the similarity is easily computed using a dot product of the embeddings. See my quick implementation example here for details.

+
+",23350,,23350,,5/24/2019 11:02,5/24/2019 11:02,,,,0,,,,CC BY-SA 4.0 +12203,2,,10406,5/8/2019 13:42,,1,,"

It is the combination of the output of all neurons that determines the output of the neural network. In the case of convolutional neural networks (CNNs), the term ""feature"" is used because it is associated with the feature maps (or activation maps) and filters (or kernels) of the CNN. However, this terminology might not be accurate (because these might not be features in the intuitive sense) and it is used more to interpret the inner workings of the CNN.

+ +

After training, the weights of the neural network are fixed (unless you perform online and continual training), so the output of each neuron will be the same given the same input, thus the output of the neural network will also be the same (unless there is some random operation being performed). During training, the weights and thus the output of each neuron (and of the neural network) often change.

+ +

The contribution of each neuron to the output of the neural network is determined by the weights of the connections between the neurons, which change during training and can be initialised in different ways, which might affect differently the final weights (after training).

+ +

There are several ways of visualising the contribution of each neuron to the output of the neural network. See also the article Visualize Features of a Convolutional Neural Network and the paper Visualizing and Understanding Convolutional Networks (2013).

+",2444,,2444,,5/8/2019 15:37,5/8/2019 15:37,,,,0,,,,CC BY-SA 4.0 +12205,2,,11575,5/8/2019 14:16,,2,,"

From what I understood, you will not have any cold-start problem, because you basically process the user's preferences description against the movie descriptions to get recommendations. So you don't use other users' feedback at any point in the process, which means this is not collaborative filtering.

+ +
+

Here is instead the approach I would suggest in your case to get movies recommendations for each user:

+
+ +
    +
  • Compute the similarity between the user description and each movie description. This can be done using the Universal Sentence Encoder. It's a 2018 paper by Google which represents any sentence as a vector of 512 values (i.e. embeddings). The semantic similarity between 2 sentences is computed using the dot product of their embeddings. Fortunately, the implementation was integrated into TensorFlow Hub and can easily be used (see my answer here for details).

  • +
  • Choose the highest similarity values and recommend the corresponding movies to the user. Notice that you can still use this approach along with a collaborative filtering one.

  • +
+",23350,,23350,,5/9/2019 21:27,5/9/2019 21:27,,,,0,,,,CC BY-SA 4.0 +12206,1,12207,,5/8/2019 15:05,,1,476,"

In Sutton's RL: An Introduction, 2nd edition, it says the following (page 203):

+
+

State aggregation is a simple form of generalizing function approximation in which states are grouped together, with one estimated value (one component of the weight vector w) for each group. The value of a state is estimated as its group's component, and when the state is updated, that component alone is updated. State aggregation is a special case of SGD $(9.7)$ in which the gradient, $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$, is 1 for $S_{t}$ 's group's component and 0 for the other components.

+
+

and follows up with a theoretical example.

+

My question is, imagining my original state space is $[1,100000]$, why can't I just say that the new state space is $[1, 1000]$ where each of these numbers corresponds to an interval: so 1 to $[1,100]$, 2 to $[101,200]$, 3 to $[201,300]$, and so on, and then just apply the normal TD(0) formula, instead of using the weights?

+

My main problem with their approach is the last sentence:

+
+

in which the gradient, $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$, is 1 for $S_{t}$ 's group's component and 0 for the other components.

+
+

If $\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ is the linear combination of a feature vector and the weights $\mathbf{w}$, how can the gradient of that function be 1 for a state and 0 for the others? There are not as many weights $\mathbf{w}$ as there are states or groups of states.

+

Let's say that my feature vector is 5 numbers between 0 and 100. For example, $(55,23,11,44,99)$ for a specific state, how do you choose a specific group of states for state aggregation?

+

Maybe what I'm not understanding is the feature vector. If we have a state space that is $[1, 10000]$ as in the random walk, what can be the feature vector? Does it have the same size as the number of groups after state aggregation?

+",24054,,2444,,4/2/2022 10:18,4/2/2022 10:19,"How can $\nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right)$ be 1 for $S_{t}$ 's group's component and 0 for the other components?",,1,0,,,,CC BY-SA 4.0 +12207,2,,12206,5/8/2019 16:03,,1,,"

Using the book's random walk example, if you have a state space with $1000$ states and you divide them into $10$ groups, each of those groups will have $100$ neighboring states. The function for approximation will be

+

\begin{equation} +v(\mathbf w) = x_1w_1 + x_2w_2 + ... + x_{10}w_{10} +\end{equation}

+

Now, when you pick a state, the feature vector will be a one-hot encoded vector with a $1$ placed in a position that depends on which group the chosen state belongs to. For example, state $990$ belongs to group $10$, so the feature vector will be

+

\begin{equation} +\mathbf x_t = [0, 0, ..., 0, 1]^T +\end{equation}

+

What this means is that the only weight that will be updated is weight $w_{10}$, because the gradients for all other weights will be $0$ (that's because the features for those weights are $0$).
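A minimal sketch of this update for the 1000-state random walk (my own variable names; the transition used at the end is just an illustrative sample):

import numpy as np

n_states, n_groups = 1000, 10
w = np.zeros(n_groups)                 # one weight per group
alpha, gamma = 0.1, 1.0

def features(state):
    # One-hot feature vector: 1 for the state's group, 0 for all other groups.
    x = np.zeros(n_groups)
    x[(state - 1) // (n_states // n_groups)] = 1.0
    return x

def v_hat(state):
    return features(state) @ w

def td0_update(s, r, s_next, done):
    # Semi-gradient TD(0): the gradient of v_hat w.r.t. w is just the feature
    # vector, so only the weight of s's group moves.
    target = r + (0.0 if done else gamma * v_hat(s_next))
    w[:] = w + alpha * (target - v_hat(s)) * features(s)

td0_update(s=990, r=1.0, s_next=1000, done=True)
print(w)                               # only w[9] has changed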

+",20339,,2444,,4/2/2022 10:19,4/2/2022 10:19,,,,3,,,,CC BY-SA 4.0 +12208,2,,1508,5/8/2019 16:16,,3,,"

You should look at pointer networks. They are still not perfect for this case, but they should be more applicable than LSTMs and MLPs, because they learn in an output space whose size equals that of the input, rather than the fixed dimension you would get using LSTMs in sequence-to-sequence or a direct MLP. By design, though, they are meant for problems with replacement. Sorting, when done sequentially, is without replacement, so to remedy this in the case of a pointer network, you could mask outputs that have already been chosen before the final normalization step (such as softmax).

+",25496,,25496,,5/13/2019 15:44,5/13/2019 15:44,,,,0,,,,CC BY-SA 4.0 +12209,1,12211,,5/8/2019 16:39,,1,63,"

Background

+ +

I am working on a robotic arm controlled by a DQN + a python script I wrote. +The DQN receives the 5 joint states, the coordinates of a target, the coordinates of the obstacle and outputs the best action to take (in terms of joint rotations). +The python script checks if the action suggested by the DQN is safe. If it is, it performs it. Otherwise, it performs the second highest-ranking action from the DQN; and so on. If no action is possible, collision: we fail.

+ +

During training, this Python functionality wasn't present: the arm learned how to behave without anything else to correct its behaviour. With this addition on top of the already-trained network, the performance rose from 78% to 95%. Now my advisor (bachelor's thesis) has asked me to leave the external controller on during training to check whether this improves learning.

+ +

Question

+ +

Here's what happens during training; at each step:

+ +
    +
  1. the ANN selects an action
  2. +
  3. if it is legal, the python script executes it, otherwise it chooses another one.
  4. +
+ +

Now... on which action should I perform backprop? The one proposed by the DQN or the one that was really executed?

+ +

I am really confused. On the one hand, the arm did choose an action so my idea was that we should, in fact, learn on THAT action. On the other hand, during the exploration phase ($\epsilon$ greedy), we backprop on the action which was randomly selected and executed, with no interest on what was the output of the arm. So, it would be rational too, in this case, to perform backprop on the action really executed; so the one chosen by the script.

+ +

What is the right thing to do here? (Bonus question: is it reasonable to train with this functionality on? Wouldn't it be better to train the network by itself and then, later, enhance its performance with this functionality?)

+",23527,,23527,,5/8/2019 16:47,5/8/2019 18:19,DQN Agent helped by a controller: on which action should I perform backprop?,,1,0,,,,CC BY-SA 4.0 +12210,1,,,5/8/2019 16:58,,1,46,"

So, when using semi-gradient TD(0), you need to convert your state representation into a feature vector that represents the state and whose features, as far as I know, should not be correlated.

+ +

Is the input on the ANN of a DQN the same? should it be a feature vector that represents the state? What considerations should one have when creating such vector?

+",24054,,,,,5/8/2019 16:58,DQN ANN input vs Linear function approximator feature vector,,0,0,,,,CC BY-SA 4.0 +12211,2,,12209,5/8/2019 18:19,,2,,"

Q-learning - which DQN is based on - is an off-policy reinforcement learning (RL) method. That means it can learn a target policy of optimal control whilst acting using a different behaviour policy. In addition, provided you use single step learning (as opposed to n-step or Q($\lambda$) variants), you don't need to worry much about the details of the behaviour policy. It is more efficient to learn from behaviour policies closer to the current best guess at optimal, but possible to learn from almost anything, including random behaviour.

+ +

So it doesn't really matter too much if you change the behaviour during training.

+ +

In your case, the script is actually doing more than just changing the behaviour. It is consistently filtering out state/action pairs that you have decided should never be taken, as a domain expert. This has two major consequences:

+ +
    +
  • It reduces the search space by whatever fraction of state/actions are now denied by your safety script.

  • +
  • It prevents the agent from ever learning about certain state/action pairs, as they are never experienced.

  • +
+ +

The first point means that your learning should in theory be more efficient. As for how much, you will have to try and see. It might only be a small amount if the problem states and actions are unlikely to be reached during exploration from near-optimal behaviour.

+ +

The second point means that your agent will never learn by itself to avoid the problem state/action combinations. So you will always need to use the safety script.

+ +

In fact you can view the safety script as a modification to the environment (as opposed to a modification to the agent), if its decisions are strict and consistent. Filtering available actions is a standard mechanism in RL when action space may vary depending on state.

+ +
+

On which action should I perform backprop?

+
+ +

In DQN, you don't ""perform backprop"" on an action. Instead, you either use directly or store some observed data about the step: $s, a, r, s'$, where $s$ is the start state, $a$ the action taken, $r$ the immediate reward, and $s'$ the resulting state. You then update the current action value estimate(s) based on a TD target $\delta = r + \gamma \text{max}_{a'} Q(s',a')$, either online or from the experience table.

+ +

When Q-learning learns, it updates the estimate for the action value $Q(s, a)$ - and $a$ is taken from the actual behaviour (otherwise the agent would update its estimate of an action that it didn't take). So your answer here is to use - or more likely store in the experience table - the action actually taken. If the action recommended as optimal at that time is different, ignore the difference; what matters is the observed experience.
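A minimal sketch of how that can look in code (the names q_network, safety_check and replay_buffer are placeholders for your own components, not a prescribed API):

import numpy as np

def select_safe_action(q_values, state, safety_check):
    # Try actions in decreasing order of estimated value and return the first
    # one the external controller accepts, or None if every action is unsafe.
    for a in np.argsort(q_values)[::-1]:
        if safety_check(state, a):
            return int(a)
    return None

# Inside a training step (pseudocode for the environment-specific parts):
# q_values = q_network.predict(state)
# action = select_safe_action(q_values, state, safety_check)
# next_state, reward, done = env.step(action)
# replay_buffer.append((state, action, reward, next_state, done))
# Note: store the action actually executed, not the network's first choice.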

+",1847,,,,,5/8/2019 18:19,,,,2,,,,CC BY-SA 4.0 +12213,1,,,5/8/2019 20:50,,1,41,"

I have some variable length input vectors for my own use case of a 'stylistic transfer'-esque process, and I am wondering if anyone knows of a way to engineer an input that maps to a 0 element in embedding space. This would be an element that simply holds space but would be readily overlaid with vector addition of another embedded input.

+ +

My rationale is that I could pad the inputs with these zero elements to mask what I don't care about and have a semantically meaningful vector addition in the embedding space.

+ +

I wonder if I could permute some training examples with a chosen value which all map to the same output and this would allow a neural net to learn such a feature.

+",12849,,,,,5/8/2019 20:50,Creating a zero element in embedding space,,0,0,,,,CC BY-SA 4.0 +12214,1,,,5/8/2019 22:06,,1,156,"

I am making a school project where I should develop any kind of game where I can have one reactive agent and one agent based on machine learning competing with each other.

+ +

My game consists of a salesmen problem. Basically, I have 3 types of entities, consumers, salesmen, and hotspots.

+ +

The consumers are represented by the person icon with a green background. There are 8 of them. They basically move around the whole map using random walks, and they tend to aggregate at the hotspots (the orange icon with the router in it).

+ +

The salesmen are represented by the person with the dark grey background. One of them is controlled by a reactive agent that has in it some rules that I programmed and the other one is controlled by my DQN model.

+ +

The salesmen have 5 available actions: move up, right, down, left, or sell. When they choose to sell, the simulation will try to sell to the closest consumer within a predetermined maximum range. If no consumer exists in that range, or if the consumer refuses to buy, then the sale fails.

+ +

I started training a Deep Q Network that I built using TensorFlow. As input features, I am giving the agent's current position, the position of each consumer and a boolean saying whether the consumer was recently asked to buy or not (consumers that were asked to buy something will reject future offers with 100% probability for a predetermined amount of time). For the output layer, I have 5 nodes, one for each available action.

+ +

Here is a screenshot of the game. The red number in the bottom-right corner of each agent represents its total utility.

+ +

+ +

I decided to give the agents the following rewards:

+ +
    +
  • SELL_SUCCESSED_REWARD = 3 - The agents receives 3 points for each success sell.
  • +
  • SELL_FAILED_REWARD = -0.010 - The agent loses 0.01 points for each failed sell
  • +
  • MOVING_REWARD = -0.001 - The agent loses 0.001 points for each move
  • +
  • NOT_MOVING_REWARD = -0.0125 - The agent loses 0.0125 points for standing in the same position (ie. not moving or trying to move against a wall)
  • +
+ +

I started training my agent, but it seems to not learn anything! I left it training for around 3 hours and I could not see any improvement. I tried different activation functions, batch sizes, exploration rates, etc., but with no noticeable effect.

+ +

My question is: can a DQN learn in this type of environment, where there are a lot of random walks?

+ +

If yes, what could be my problem? Not enough training time? Bad input features? A bad implementation?

+ +

Here are the files with my implementation of DQN:

+ +

Agent: https://github.com/daniel3303/aasma-project/blob/master/src/Agent/DeepLearningAgent/DeepLearningAgent.py

+ +

Training: https://github.com/daniel3303/aasma-project/blob/master/train_dqn.py

+ +

Thanks.

+",21688,,,,,5/8/2019 22:06,DQN not able to learn in a game where other agents perform random walks,,0,2,,,,CC BY-SA 4.0 +12216,1,,,5/9/2019 2:44,,3,177,"

I have found various references describing Naive Bayes and they all demonstrated that it used MLE for the calculation. However, this is my understanding:

+ +

$P(y=c|x)$ $\propto$ $P(x|y=c)P(y=c)$

+ +

where $c$ is the class the model may classify $y$ as.

+ +

And that's all, we can infer $P(x|y=c)$ and $P(c)$ from the data. I don't see where the MLE shows its role.

+",25509,,2444,,6/14/2019 20:08,12/3/2019 12:20,What is the relationship between MLE and naive Bayes?,,1,0,,,,CC BY-SA 4.0 +12217,1,12218,,5/9/2019 2:59,,3,525,"

I would provide a sound signal of about 2-3 seconds to my neural network. I have trained my network on a single word: if I speak ""Hello"", the network should tell whether ""Hello"" was spoken or not, and if some other word like ""World"" is spoken, it should say that ""Hello"" was not spoken. I just want to classify whether a sound is a specific command or word. What is the best way to do this? I am not that advanced in deep neural networks; I only know about NNs and CNNs. I would like a research paper, a tutorial, or some explanation of how this is done.

+",25510,,,,,5/9/2019 4:52,How to do speech recognition on a single word,,1,0,,,,CC BY-SA 4.0 +12218,2,,12217,5/9/2019 4:52,,3,,"

If you have fixed-length speech data, you can detect the content using only a CNN. You can treat the problem as binary classification (1 if the spoken word is the target word, 0 otherwise).

+ +

But first, you need to make the input length fixed. For example, you use 2 seconds as the fixed length. If the recorded speech is longer than 2 seconds, you need to crop it, and if the recorded speech is shorter than 2 seconds, you can pad it with 0 values.

+ +

Next, you can either use the raw data (time domain) or transform your data using some feature extraction method (FFT, MFCC, or MFSC). Then, use a CNN as you would use it to classify an image: you can treat the 2D representation of the sound as an image. A minimal sketch of this pipeline is shown below.
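In the sketch, I assume 16 kHz mono audio and the librosa / tf.keras APIs; the layer sizes are arbitrary, and the MFCC frame count of 63 follows from librosa's default hop length for a 2-second clip.

import numpy as np
import librosa
import tensorflow as tf

SR = 16000
FIXED_LEN = 2 * SR                       # 2 seconds at 16 kHz

def to_fixed_mfcc(signal):
    # Crop or zero-pad to exactly 2 seconds, then compute an MFCC 'image'.
    signal = np.asarray(signal, dtype=float)
    if len(signal) > FIXED_LEN:
        signal = signal[:FIXED_LEN]
    else:
        signal = np.pad(signal, (0, FIXED_LEN - len(signal)))
    mfcc = librosa.feature.mfcc(y=signal, sr=SR, n_mfcc=40)
    return mfcc[..., np.newaxis]         # add a channel axis for Conv2D

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(40, 63, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),   # 1 = target word, 0 = anything else
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])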

+ +

But, if your data have varying lengths, you can use a CNN to detect each phoneme and then combine the detections into a sequence using an RNN or HMM. You can also read about this method in the mentioned papers.

+",16565,,,,,5/9/2019 4:52,,,,3,,,,CC BY-SA 4.0 +12219,2,,12216,5/9/2019 6:32,,3,,"
+

And that's all, we can infer P(x|y=c) and P(c) from the data. I don't see where the MLE shows its role.

+
+ +

Maximum likelihood estimation is used for this very purpose, i.e. to estimate the conditional probability $p(x_j \mid y)$ and the marginal probability $p(y)$.

+ +
+ +

In the naive Bayes algorithm, using the properties of conditional probability, we can estimate the joint probability

+ +

$$ +p(y, x_1, x_2, \dots, x_n) = p(x_1, x_2, \dots, x_n \mid y) \; p(y) +$$

+ +

which is the same as
+$$ +p(y=c|x)\propto p(x|y=c)p(y=c) +$$

+ +

What we need to estimate here are the conditional $p(x_j \mid y)$ and marginal $p(y)$ probabilities, and we use maximum likelihood for this.

+ +

To be more precise: assume that $x$ is a feature vector $(x_1, x_2, \dots, x_n)$ - which means nothing more than that the data point is a vector. In naive Bayes classification, the assumption is that these features are independent of each other conditional on the class. So, in that case, the term $$p(x|y=c)$$ is replaced by $$\prod_{i=1}^{n} p(x_i|y=c)$$ -- this is because of the independence assumption, which lets us simply multiply the probabilities.

+ +

The problem of estimating the class to which the data point belongs reduces to the problem of maximizing the likelihood $$\prod_{i=1}^{n} p(x_i|y=c)$$ -- which means assigning the data $(x_1, x_2, ...x_n)$ to the class $k$ for which this likelihood is highest. This will give the same result as MLE which takes the form of maximizing this quantity -- $$\prod_{i=1}^{n} p(x_i|\theta)$$ where $\theta$ are the assumed parameters.
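To make the link concrete, here is a tiny sketch (toy categorical data of my own invention) showing that the MLE estimates used by naive Bayes are just relative frequencies:

import numpy as np

# Toy data: each row of X is (x1, x2), y is the class label.
X = np.array([['sunny', 'hot'], ['sunny', 'cool'], ['rainy', 'cool'], ['rainy', 'hot']])
y = np.array(['no', 'yes', 'yes', 'no'])

# MLE of the prior p(y=c): the relative frequency of class c.
prior = {c: np.mean(y == c) for c in set(y)}

def conditional(j, v, c):
    # MLE of p(x_j = v | y = c): relative frequency of value v within class c.
    in_class = X[y == c]
    return np.mean(in_class[:, j] == v)

def score(x, c):
    # Proportional to the posterior p(y=c | x) under the naive independence assumption.
    return prior[c] * np.prod([conditional(j, v, c) for j, v in enumerate(x)])

x_new = ['sunny', 'cool']
print(max(prior, key=lambda c: score(x_new, c)))   # predicted class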

+",16708,,16708,,12/3/2019 12:20,12/3/2019 12:20,,,,0,,,,CC BY-SA 4.0 +12220,2,,12029,5/9/2019 9:34,,2,,"

Like others said you can't approximate this with a linear regression model.

A polynomial regression model (PRM) that approximates a solution could give you the following:
+$y = 0.948 + x + 0.00085*x^6$ ~
+$y = 237/250 + x + (17/20000)*x^6$
+For $x = 9$, $y \simeq 462$

+ +

or

+ +

$y = 0.9258 + x + 0.00086*x^6$; for $x = 9$, $y \simeq 466.965$.

+ +

UPDATE
An approximation, of course, may be in the range of:
$y = 2^{(x + 1)} - 2^x$ (the model you propose)
Goodness of fit: 0.968475 and Mean Square Error = 685.111

+Based on this range a better approximation would be:
+$y = 2^x + (-1/2)*x^2$
with $R^2$ (goodness of fit) = 0.995
+Mean Square Error: 89.0278

+",25410,,25410,,5/9/2019 10:43,5/9/2019 10:43,,,,1,,,,CC BY-SA 4.0 +12226,1,,,5/9/2019 13:02,,1,2041,"

I'm trying to train a PPO agent in a 3D balance ball environment. My action space is continuous.

+

In the following graph, each dot shows the average reward from 100 episodes.

+

+

Could this graph indicate that it's stuck at a local maximum? Do I need to promote exploring by increasing the entropy, or does this look like a bug with my implementation?

+

I am trying to maximize the average rewards.

+

When I look at the loss, it does seem like it is minimizing the cost. I thought that the loss function is dependent on the rewards because the advantage is calculated based on them. The normalized advantage is then factored into the policy loss, which should say which direction for the policy to step toward.

+

It seems like the actions are a bit noisy but the platform sometimes seems to try to keep the ball from falling off.

+

Hyperparameters:

+
    +
  • Learning rate = 0.01
  • +
  • Entropy Coefficient = 0.01
  • +
  • Value Function Loss Coefficient = 0.5
  • +
  • Gamma/Discount Factor = 0.995
  • +
  • MiniBatch Size = 512
  • +
  • Epochs = 3
  • +
  • Clip Epsilon = 0.1
  • +
+",23494,,2444,,12/11/2021 11:40,12/11/2021 11:40,"If the average rewards start high and then decrease, could that indicate that the PPO is stuck at a local maximum?",,1,0,,,,CC BY-SA 4.0 +12232,2,,4965,5/9/2019 21:19,,5,,"
+

The best approach at this time (2019):

+
+ +

The most efficient approach now is to use the Universal Sentence Encoder by Google (paper_2018), which computes semantic similarity between sentences using the dot product of their embeddings (i.e. learned vectors of 512 values). Similarity is a float between 0 (i.e. no similarity) and 1 (i.e. strong similarity).

+ +

The implementation is now integrated into TensorFlow Hub and can easily be used. Here is ready-to-use code to compute the similarity between 2 sentences. I will get the similarity between ""Python is a good language"" and ""Language a good python is"", as in your example.

+ +
+

Code example:

+
+ +
#Requirements: Tensorflow>=1.7 tensorflow-hub numpy
+
+import tensorflow as tf
+import tensorflow_hub as hub
+import numpy as np
+
+module_url = ""https://tfhub.dev/google/universal-sentence-encoder-large/3"" 
+embed = hub.Module(module_url)
+sentences = [""Python is a good language"",""Language a good python is""]
+
+similarity_input_placeholder = tf.placeholder(tf.string, shape=(None))
+similarity_sentences_encodings = embed(similarity_input_placeholder)
+
+with tf.Session() as session:
+  session.run(tf.global_variables_initializer())
+  session.run(tf.tables_initializer())
+  sentences_embeddings = session.run(similarity_sentences_encodings, feed_dict={similarity_input_placeholder: sentences})
+  similarity = np.inner(sentences_embeddings[0], sentences_embeddings[1])
+  print(""Similarity is %s"" % similarity)
+
+ +
+

Output:

+
+ +
Similarity is 0.90007496 #Strong similarity
+
+",23350,,,,,5/9/2019 21:19,,,,1,,,,CC BY-SA 4.0 +12233,1,,,5/10/2019 3:59,,1,28,"

I'm a grad student from EE.

+ +

So, basically, there's an electrical circuit that is supposed to output ""0"" or ""1"" with an exactly 50/50 chance. It generates a number of big arrays of 0s and 1s, each of which contains more than 4,000 numbers.

+ +

But because these arrays are physically generated in a fab, I assume they might develop some dependencies among the numbers, and some outputs could be predicted with more than 50% chance. For example, due to some variations in the process, a ""1"" could be more likely than a ""0"" after the sequence ""001100"".

+ +

Then let's say I make a simple deep neural network which takes 7 inputs and gives 1 output. I simply slice my array into windows of 8 numbers, 7 of which are given as the input and the last one is used as the label (the true answer). I train my simple DNN on all these slices and it will learn some sequences. Finally, I apply my NN to a test set, and if it predicts the next number with an accuracy of more than 50%, that proves my assumption; if it doesn't, that is also good for me, because it says my circuits are good.

+ +

Would it work?

+",25540,,,user9947,5/10/2019 5:12,5/10/2019 5:12,Would this NN for my chip outputs work?,,0,7,,,,CC BY-SA 4.0 +12234,1,12235,,5/10/2019 4:41,,2,32,"

I have seen in several jupyter notebooks people initializing the NN weights using:

+ + + +
np.random.randn(D, M) / np.sqrt(D)
+
+ +

Other times they just do:

+ + + +
np.random.randn(D, M)
+
+ +

What is the advantage of dividing the Gaussian distribution by the square root of the number of neurons in the layer?

+ +

Thanks

+",21688,,,,,5/10/2019 4:53,Why is it so common to initialize weights with a Guassian distribution divided by the square root of number of neurons in a layer?,,1,0,,,,CC BY-SA 4.0 +12235,2,,12234,5/10/2019 4:53,,2,,"

I think they use the Xavier/Glorot initialization method. You can read from the original paper:

+
+

We initialized the biases to be 0 and the weights $W_{ij}$ at each layer with the following commonly used heuristic:

+

$W_{ij} \sim U [ -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}] $

+

where $U[−a, a]$ is the uniform distribution in the interval $(−a, a)$ and $n$ is the size of the previous layer (the number of columns of $W$)

+
+

Some people use this because some reports said this initialization method leads to better results.
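A small side-by-side sketch of the two initialisations being discussed (nothing here is specific to a particular framework):

import numpy as np

D, M = 300, 100   # fan-in and fan-out of the layer

# Scaled Gaussian initialisation seen in many notebooks.
w_gauss = np.random.randn(D, M) / np.sqrt(D)

# Glorot/Xavier uniform initialisation from the quoted heuristic.
limit = 1.0 / np.sqrt(D)
w_glorot = np.random.uniform(-limit, limit, size=(D, M))

# Both keep the scale of the weights (and hence of the pre-activations)
# roughly independent of the layer width D.
print(w_gauss.std(), w_glorot.std())   # about 1/sqrt(D) and 1/sqrt(3*D)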

+",16565,,-1,,6/17/2020 9:57,5/10/2019 4:53,,,,0,,,,CC BY-SA 4.0 +12236,1,,,5/10/2019 7:06,,2,47,"

I have a large data set that doesn't fit in memory and would have to use something like Keras's model.fit_generator if I wanted to train the model on all of the available data. The problem is that my data load time is longer than the training time of a single epoch, and I would hate to incur that data load cost for each epoch.

+ +

The alternative approach that yields some value is to load as much data as possible, train the model for a few hundred epochs, then load the next portion of the data and reiterate for the same amount of epochs. And repeat this until all my data is ""seen"" by the model.

+ +

Intuitively I understand that this is sub-optimal as the model will tend to optimize for the latest portion of the data and ""forget"" the previous data but I would like a more in-depth explanation of the downsides of that method and if there are any ways to overcome them.

+",25542,,25542,,5/11/2019 6:42,5/11/2019 6:42,Difference between retraining on different portions of data and training initially on larger data set,,0,2,,,,CC BY-SA 4.0 +12238,2,,6678,5/10/2019 10:45,,0,,"

Yes, your intuition is correct. The effect of this problem is that the generator can no longer improve its output to marginally fool the discriminator - the discriminator isn't buying any of the generated output. In this case, the generator gets stuck in a local minimum and typically produces nonsense results.

+",12509,,,,,5/10/2019 10:45,,,,0,,,,CC BY-SA 4.0 +12239,1,12241,,5/10/2019 12:15,,8,2240,"

The universal approximation theorem states that a feed-forward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function (provided some assumptions on the activation function are met).

+ +

Is there any other machine learning model (apart from any neural network model) that has been proved to be an universal function approximator (and that is potentially comparable to neural networks, in terms of usefulness and applicability)? If yes, can you provide a link to a research paper or book that shows the proof?

+ +

Similar questions have been asked in the past in other places (e.g. here, here and here), but they do not provide links to papers or books that show the proofs.

+",2444,,2444,,6/16/2020 11:26,8/30/2020 17:28,Which machine learning models are universal function approximators?,,1,2,,,,CC BY-SA 4.0 +12240,1,,,5/10/2019 13:22,,1,76,"

I am reading the paper Convolutional Sequence to Sequence Learning by Facebook AI researchers and having trouble understanding how the dimensions of the convolutional filters work here. Please take a look at the relevant part of the paper below.

+ +

+ +

Let's say the input to the kernel X is k*d (say k=5 words of d=300 embedding dimensionality). Therefore the input is 5*300. In a computer vision task a kernel would slide over parts of the image; in NLP you usually see the kernel taking up the whole width of the input matrix. So I would expect the kernel to be m*d (e.g. 3*300 - slide over 3 words and look at their whole embeddings).

+ +

However, the kernel here is of dimensionality 2d x kd, which in our hypothetical example would be 600*1500. I don't understand how this massive kernel would slide over an input of far lower dimensionality (5*300). In computer vision you could zero-pad the input, but here zero-padding would basically turn the input matrix into mostly zeros with only a handful of meaningful numbers.

+ +

Thanks for shedding some light on it!

+",21278,,,,,5/4/2023 21:03,Convolutional Sequence to Sequence Learning kernel parameters,,2,0,,,,CC BY-SA 4.0 +12241,2,,12239,5/10/2019 13:49,,4,,"

Support vector machines

+

In the paper A Note on the Universal Approximation Capability of Support Vector Machines (2002) B. Hammer and K. Gersmann investigate the universal function approximation capabilities of SVMs. More specifically, the authors show that SVMs with standard kernels (including Gaussian, polynomial, and several dot product kernels) can approximate any measurable or continuous function up to any desired accuracy. Therefore, SVMs are universal function approximators.

+

Polynomials

+

It is also widely known that we can approximate any continuous function with polynomials (see the Stone-Weierstrass theorem). You can use polynomial regression to fit polynomials to your labelled data.
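As a small empirical illustration (not a proof), both an RBF-kernel SVM and a modest-degree polynomial can fit a noisy sine quite closely; the data and hyperparameters below are arbitrary choices of mine.

import numpy as np
from sklearn.svm import SVR

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * np.random.randn(200)

svr = SVR(kernel='rbf', C=10.0).fit(x.reshape(-1, 1), y)     # SVM with a Gaussian kernel
poly = np.poly1d(np.polyfit(x, y, deg=9))                    # degree-9 polynomial fit

print(np.max(np.abs(svr.predict(x.reshape(-1, 1)) - np.sin(x))))
print(np.max(np.abs(poly(x) - np.sin(x))))                   # both errors are small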

+",2444,,2444,,8/30/2020 17:28,8/30/2020 17:28,,,,0,,,,CC BY-SA 4.0 +12242,2,,12240,5/10/2019 14:00,,0,,"

They are doing a matrix multiplication: consider $y = Ax, y \in \mathbb{R}^m, x \in \mathbb{R}^n, A \in M_\mathbb{R}(m,n)$. In the paper, $x$ is a concatenation of $k$ elements of $\mathbb{R}^d$, so $x$ has length $kd$; $y$ has length $2d$.
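In other words, the kernel is never slid over the $5 \times 300$ grid the way an image filter would be; each application of the kernel flattens a window of $k$ embeddings into one long vector and multiplies it by the $2d \times kd$ matrix. A minimal NumPy sketch of one such application:

import numpy as np

k, d = 5, 300                       # kernel width (in words) and embedding size
window = np.random.randn(k, d)      # k consecutive word embeddings

A = np.random.randn(2 * d, k * d)   # the '2d x kd' kernel from the paper
x = window.reshape(k * d)           # concatenate the k embeddings into one vector
y = A @ x                           # one kernel application

print(x.shape, y.shape)             # (1500,) (600,)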

+",23224,,,,,5/10/2019 14:00,,,,0,,,,CC BY-SA 4.0 +12244,1,,,5/10/2019 17:58,,1,99,"

I searched through the internet but couldn't find a reliable article that answers this question.

+

Can we use autoencoders for unsupervised CNN feature learning on unlabeled images (like the one shown below), and then use the encoder part of the autoencoder for transfer learning with a few labeled images from the dataset, as shown below?

+

I believe this will reduce the labeling work and increase the accuracy of a model.

+

However, I have concerns such as the higher computing cost, failing to learn all the required features, etc.

+

Please let me know if anyone has employed this method in large-scale learning, such as on ImageNet.

+

PS: Pardon me if this is trivial or vague, as I am new to the field of AI and computer vision.

+",25366,,-1,,6/17/2020 9:57,5/10/2019 17:58,Can we use Autoencoders for unsupervised CNN feature learning?,,0,2,,,,CC BY-SA 4.0 +12245,1,,,5/10/2019 19:30,,1,19,"

I am currently working on learning the features provided by a pre-trained network for image retrieval. Currently I take the features provided by the pre-trained network, use global max pooling to essentially provide me with a vector and then use fully connected layers to learn the feature vector. This has provided good results, although prone to over-fitting, particularly without dropout.

+ +

Is it possible/would it be beneficial to use a 1D convolutional layer instead of the fully connected layers to learn the features? Bearing in mind this is essentially still image data that has just been transformed.

+ +
model.add(GlobalAveragePooling2D(input_shape=input_shape))
+model.add(Dense(256, activation=""relu""))
+model.add(Dropout(0.2))
+model.add(Dense(256, activation=""relu""))
+
+ +

I'm not sure how to try this practically in Keras as 1D convolutional layers only seem to accept a 3 dimensional input tensor.

+ +

Any suggestions welcome!

+",18577,,,,,5/10/2019 19:30,Learning Features from a Pre-trained Network,,0,0,,,,CC BY-SA 4.0 +12247,1,12249,,5/10/2019 23:43,,1,2018,"

Can you describe this reward system in more detail?

+

I understand that the environment sends a signal indicating whether or not the action taken by the agent was 'good' or not, but it seems too simple.

+

Basically, can you detail the nitty-gritty workings of this system? I dunno, I may just be overthinking things.

+",25560,,2444,,1/16/2021 4:18,1/16/2021 4:18,What is the reward system of reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +12249,2,,12247,5/11/2019 0:40,,2,,"

In this case, the word ""system"" refers to a Markov decision process (MDP), which is the mathematical model used to represent the reinforcement learning (RL) problem or, in general, a decision making problem. Recall that, in RL, the problem consists in finding an (optimal) policy, which is a policy that allows the agent to collect the highest amount of reward (in the long run). Hence, in RL, the MDP is the problem and the optimal policy (for that MDP) is the solution.

+ +

The MDP is composed of the set of states of the environment $S$, the set of possible actions that the RL agent can take $A$, a transition function, $P(s_{t+1}=s'\mid s_{t}=s, a_t = a)$, which is a probability distribution and describes the dynamics of the environment, and a reward function $R_a(s', s)$, which is a function that describes the ""reward system"" of the environment. $R_a(s', s)$ can be thought of as the reward (signal) that the agent receives after having taken action $a$ in state $s$ and having landed in state $s'$.

+ +

In other words, the reward system is just this function $R_a(s', s)$, which is often the hardest function to define when modelling an RL problem. However, in certain cases, this function can be easily defined. For example, in the case of chess, you could define $R_a(s', s) = 1$, if $s'$ is the terminal state (checkmate), else $R_a(s', s) = 0$. Nonetheless, we could define the reward function for a chess environment differently. For example, we could give a reward of $0.5$ for certain ""clever"" moves. So, the reward function needs to be defined (by the programmer) in order to solve an RL problem and it highly affects the way the agent will learn.
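As a small illustrative sketch of those two choices (the is_checkmate and is_clever_move helpers are placeholders that a real chess environment or the designer would have to provide):

def is_checkmate(state):
    # Placeholder for the environment's own terminal-state test.
    return state == 'checkmate'

def is_clever_move(state, action):
    # Placeholder for whatever the designer decides counts as a 'clever' move.
    return False

def reward_sparse(s, a, s_next):
    # Reward only when the resulting position is checkmate.
    return 1.0 if is_checkmate(s_next) else 0.0

def reward_shaped(s, a, s_next):
    # Same as above, plus a bonus of 0.5 for 'clever' intermediate moves.
    if is_checkmate(s_next):
        return 1.0
    return 0.5 if is_clever_move(s, a) else 0.0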

+ +

The reward function can also be denoted by $R_a(s)$, which can be thought of as the reward that the agent receives after having taken action $a$ in state $s$ (no matter which state it lands in), or $R(s)$, which can be thought of as the reward the agent receives either when it enters or exits state $s$.

+",2444,,2444,,5/11/2019 0:45,5/11/2019 0:45,,,,1,,,,CC BY-SA 4.0 +12250,1,,,5/11/2019 5:42,,1,56,"

The task is to locate the invoice within a camera-captured image of that invoice. The invoice is always a white paper with printed black or blue characters, tables and red stamps. Sometimes the background behind the invoice is dark (in about 60% of all the samples), but sometimes it is not. Sometimes there is a shadow on it. The question is: how to detect the vertices, edges and corners of the invoice in this image? What algorithms should be applied? The Android application CamScanner seems to have this function, but it is not effective every time. The same goes for the CIDetector interface on iOS. What algorithms do CamScanner or iOS use (traditional or deep)?

+",14948,,14948,,5/14/2019 2:04,5/14/2019 2:04,How to locate the invoice within a camera captured image?,,0,5,,,,CC BY-SA 4.0 +12251,1,12253,,5/11/2019 6:38,,2,96,"

I came across this formula in Sutton And Barto: RL an Intro (2nd Edition) equation number 4.7 (page number 78).

+ +

If $\pi$ and $\pi'$ are deterministic policies and $q_\pi(s, \pi'(s)) \geq v_\pi(s)$ then the policy $\pi'$ is as good or better than $\pi$.

+ +

NOTE on convention: As far as the convention of the book goes, I think they are using rewards for the state-action-state transition sequence rather than the state-action transition.

+ +

My questions are:

+ +
    +
  • Why are they comparing state value function to action value function?
  • +
  • Isn't it obvious the above equation might hold true (provided we select the best action among the possible actions) even for the policy $\pi$ since then the equation will change to $q_\pi(s, \pi(s)) \geq v_\pi(s)$ and we know $v_\pi(s) = \sum_{a \in \mathcal{A}(s)} \pi(a|s)q_\pi(s, a)$?
  • +
+ +

What is the inconsistency here?

+",,user9947,2444,,5/11/2019 10:58,5/11/2019 10:58,Possible inconsistency in the Policy Improvement equation,,1,1,,,,CC BY-SA 4.0 +12253,2,,12251,5/11/2019 8:39,,2,,"
+

Why are they comparing state value function to action value function?

+
+ +

It is because $v_{\pi}(s)$ and $q_{\pi}(s,a)$ measure the same quantity at different stages of the trajectory. By comparing the values at the same $s$ and modifying how $a$ is selected, the proof makes assertions about how that choice impacts the value.

+ +

It is important to recall that $v_{\pi}(s)$ measures the expected future reward when starting in state $s$ and following policy $\pi$, and that $q_{\pi}(s,a)$ measures the expected future reward when starting in state $s$ and taking action $a$, thereafer following policy $\pi$. When $a$ is chosen using a deterministic $a = \pi(s)$ then $v_{\pi}(s) = q_{\pi}(s,\pi(s))$

+ +
+

Isn't it obvious the above equation might hold true

+
+ +

The inequality (it is not an equation) does strictly hold true, because the equation $v_{\pi}(s) = q_{\pi}(s,\pi(s))$ is true. However, that is not terribly useful, it doesn't prove anything new.

+ +

What is interesting is if you change the decision for $a$

+ +
+

(provided we select the best action among the possible actions)

+
+ +

You cannot do that. The policy decides the actions, by definition. Selecting a better action according to the action value function, and showing what that does is precisely what the proof is showing.

+ +
+

since then the equation will change to $q_\pi(s, \pi(s)) \geq v_\pi(s)$

+
+ +

$q_\pi(s, \pi(s)) = v_\pi(s)$ - yes your inequality holds, but is equivalent to not changing anything.

+ +
+

and we know $v_\pi(s) = \pi(a|s)q_\pi(s, \pi(s))$?

+
+ +

Here you have switched to using a non-deterministic policy and a deterministic policy in the same statement (you also have not defined $a$ properly), making your equation badly formed. The correct form would be:

+ +

$$v_\pi(s) = \sum_{a \in \mathcal{A}(s)} \pi(a|s)q_\pi(s, a)$$

+ +

This is not a relevant form for the initial policy improvement theorem. However it does become relevant when the theory is adapted to show improving $\epsilon$-greedy policies later.

+ +
+ +
+

NOTE on Convention: As per the convention of the book goes, I think they are using rewards for state to action to state transition sequence rather than state to action transition.

+
+ +

That is not a convention in the book. In the book, the reward distribution is generally given as joint distribution with state transition in $p(s', r | s, a)$ - this is a very generic approach that models any discrete MDP, regardless of how the rewards are described as associated with any of $s, a, s'$ or a random factor.

+ +

Possibly the convention you are referring to is labelling the immediate reward as $R_{t+1}$, so that a small section of trajectory starting from state $S_t$ might be $S_t, A_t, R_{t+1}, S_{t+1}$ . . . some other texts will use $R_t$ here. However, that is not relevant to the policy improvement theorem, it would hold the same under either convention, just with slight relabelling of the reward indices.

+",1847,,,,,5/11/2019 8:39,,,,6,,,,CC BY-SA 4.0 +12254,2,,7755,5/11/2019 9:30,,1,,"

Normally, the set of actions that the agent can execute does not change over time, but some actions can become impossible in different states (for example, not every move is possible in any position of the TicTacToe game).

+ +

Take a look, as an example, at this piece of code: https://github.com/haje01/gym-tictactoe/blob/master/examples/base_agent.py

+ +
ava_actions = env.available_actions()
+action = agent.act(state, ava_actions)
+state, reward, done, info = env.step(action)
+
+",25551,,,,,5/11/2019 9:30,,,,2,,,,CC BY-SA 4.0 +12255,1,,,5/11/2019 11:11,,18,16277,"

Many examples work with a table-based method for Q-learning. This may be suitable for a discrete state (observation) or action space, like a robot in a grid world, but is there a way to use Q-learning for continuous spaces like the control of a pendulum?

+",19413,,2444,,11/21/2020 21:17,11/21/2020 21:31,Can Q-learning be used for continuous (state or action) spaces?,,2,0,,,,CC BY-SA 4.0 +12257,2,,12255,5/11/2019 11:32,,11,,"

Q-learning for continuous state spaces

+ +

Yes, this is possible, provided you use some mechanism of approximation. One approach is to discretise the state space, and that doesn't have to reduce the space to a small number of states. Provided you can sample and update enough times, then a few million states is not a major problem.

+ +

However, with large state spaces it is more common to use some form of function approximation for the action value. This is often noted $\hat{q}(s,a,\theta)$ to show that it is both an estimate (the circumflex over $\hat{q}$) and that you are learning some function parameters ($\theta$). There are broadly two popular approaches to Q-learning using function approximation:

+ +
    +
  • Linear function approximation over a version of the state processed into features. A lot of variations to generate features have been proposed and tested, including Fourier series, tile coding, and radial basis functions. The advantages of these methods are that they are simple and more robust than non-linear function approximations. Which one to choose depends on what your state space represents and how the value function is likely to vary depending on location within the state space.

  • +
  • Neural network function approximation. This is essentially what Deep Q Networks (DQN) are. Provided you have a Markov state description, you scale it to work sensibly with neural networks, and you follow other DQN best practices (experience replay table, slow changing target network) this can work well.

  • +
+ +

Q-learning for continuous action spaces

+ +

Unless you discretise the action space, then this becomes very unwieldy.

+ +

The problem is that, given $s,a,r,s'$, Q-learning needs to evaluate the TD target:

+ +

$$Q_{target}(s,a) = r + \gamma \text{max}_{a'} \hat{q}(s',a',\theta)$$

+ +

The process for evaluating the maximum becomes less efficient and less accurate the larger the space that it needs to check.

+ +

For somewhat large action spaces, using double Q-learning can help (with two estimates of Q, one to pick the target action, the other to estimate its value, which you alternate between on different steps) - this helps avoid maximisation bias where picking an action because it has the highest value and then using that highest value in calculations leads to over-estimating value.

+ +

For very large or continuous action spaces, it is not usually practical to check all values. The alternative to Q-learning in this case is to use a policy gradient method such as Actor-Critic which can cope with very large or continuous action spaces, and does not rely on maximising over all possible actions in order to enact or evaluate a policy.

+ +

Controlling a pendulum

+ +

For a discrete action space e.g. applying one of a choice of forces on each time step, then this can be done using a DQN approach or any other function approximation. The classic example here might be an environment like Open AI's CartPole-v1 where the state space is continuous, but there are only two possible actions. This can be solved easily using DQN, it is something of a beginner's problem.

+ +

Adding continuous action space ends up with something like the Pendulum-v0 environment. This can be solved to some degree using DQN and discretising the action space (to e.g. 9 different actions). However, it is possible to make more optimal solutions using an Actor-Critic algorithm like A3C.
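A minimal sketch of that discretisation (assuming the classic pre-0.26 gym API for Pendulum-v0; the DQN itself is omitted and the chosen index is arbitrary here):

import gym
import numpy as np

env = gym.make('Pendulum-v0')

# Map 9 discrete DQN actions onto evenly spaced torques in the continuous range [-2, 2].
torques = np.linspace(env.action_space.low[0], env.action_space.high[0], 9)

state = env.reset()
discrete_action = 4                                   # whatever index the DQN picks
next_state, reward, done, info = env.step([torques[discrete_action]])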

+",1847,,1847,,5/11/2019 13:17,5/11/2019 13:17,,,,4,,,,CC BY-SA 4.0 +12258,2,,12255,5/11/2019 11:39,,1,,"
+

Q-Learning for continuous state space

+
+

Reinforcement learning algorithms (e.g Q-Learning) can be applied to both discrete and continuous spaces. If you understand how it works in discrete mode, then you can easily move to continuous mode. That's why in the literature all the introductory material focuses on discrete mode, as it's easier to model (table, grid, etc.)

+

Supposing you have a discrete number of actions, the only difference in a continuous space is that you will be sampling the state every $X$ amount of time ($X$ being a number you can choose depending on your use case). So, basically, you end up with a discrete sequence of states, but probably drawn from an infinite number of possible states. You then apply the same approach you learned for discrete mode.

+

Let's take the example of self-driving cars: every $X$ ms (e.g. $X=1$), you'll be computing the state of the car, which gives your input features (e.g. direction, orientation, rotation, distance to the pavement, relative position in the lane, etc.), and deciding on the action to take, as in discrete mode. The approach is the same in other use cases, like playing games, walking robots, and so on.

+
+

Note (continuous action space):

+
+

If you have continuous actions, then in almost all use cases the best approach is to discretize your actions. I can't think of an example where discretizing your actions will lead to a considerable deficiency.

+",23350,,2444,,11/21/2020 21:31,11/21/2020 21:31,,,,3,,,,CC BY-SA 4.0 +12259,2,,12171,5/11/2019 12:28,,1,,"

Assessing the Question

+ +
+

Based on the x,y values of each detection, I am trying to find the direction of the bicycles and if the bicycle crosses the whole frame to do a UP/DOWN count.

+
+ +

It appears from the question that there is an interest in both counting bicycles and determining the direction of travel of each. from within the frames of a video stream or file. We can assume that each bicycle has at least one rider. It also appears that it is not a problem involving a fixed optical system positioned to point at a path that is tangential to the optical path, which would make the problem much easier, locking the approximate distance of the bicycle wheels to the optical system to near constant.

+ +

The use of the SSD mobile-net model seems reasonable as a starting point for developing expertise.

+ +

Starting With ML Design Basics

+ +

Let's consider the purpose of CNN and RNN designs.

+ +
    +
  • The purpose of a convolutional network is to deal equally with regions in a multi-dimensional array of values in discrete samples of $\mathbb{R}^n$ during an adaptive (learning) process.
  • +
  • The purpose of a recurrent network is to adapt to (learn) potentially complex temporal (time-wise) trends in potentially complex nonlinear systems.
  • +
+ +

Understand that the SSD class of algorithms do not do what natural visual systems do. They do not zoom in and out on independent objects within the network seamlessly. They cannot note that a base ball player is running to first base and a ball is coming from the catcher at the same time, requiring independent conceptual zoom operations within the neural network. This cannot be done with a zoom lens. That is why the Director of Photography is such a key role in movie making. The visual data must contribute well to the story telling, using lighting, camera orientation, panning, zooming, and depth of focus.

+ +

Although one can create several bicycle concept classes to cover various bicycle sizes, orientations relative to the optics, and distances away, there are limitations to this approach, which can be diminished with circuit parallelism in hardware. Multi-threading and serial evaluation can, depending on resources and patience factors, increase training time beyond what is practical. The challenge is to create seamlessness between low level concept classes of a bicycle in a frame as the bicycle angle and distance changes relative to the optical path.

+ +

Deeper into Details

+ +

The, ""heavy distortion on the objects,"" could be a show-stopper if the root cause of the distortion mentioned is poor resolution in the time, horizontal, or vertical dimensions. The most significant and consistent image-oriented feature of a bicycle is two ellipses (not always circles) in close horizontal proximity and even closer vertical proximity — the two wheels. The wheels need to be recognizable.

+ +

Two general categories of networks were mentioned in the question, CNNs and RNNs, which, in general are the two most relevant overall categories of components in a visual system that recognizes motion. We have some nomenclature in the question, which begins the mathematical theory behind the design of the training of the networks and the real time requirements on those network components once the network components are trained.

+ +
+

... each ... frame ... has multiple detections of D1, D2, D3, ..., Dn

+
+ +

$$ D_i, \quad i \in \{1, 2, \dots, n\} \\ \Downarrow \\ D_{xywh} $$

+ +

The above nomenclature presumably refers to a post-learning detection of concept classes $C_a, C_b, C_d$, where there is a many-to-one relationship between the above numeric indices for detections and these letter indices for the concepts of a bicycle to be recognized. Each concept class $C$ might correspond to a particular recognizable bicycle feature set given a particular range of distances to the optics and orientation of the wheels relative to the direction of light rays between the wheels and the camera. The designer, considering this correspondence cannot dismiss the turning of the front wheel. We cannot assume that the eccentricity of the visual representations of the two wheels will be the same, since the bicycle may be turning. Even in this more complex case, the ellipses are likely the two most differentiating features of bicycles in common scenes.

+ +

This may be a good time to point out that tricycle recognition may require the recognition of an entirely distinct set of concept classes.

+ +

Also notice that, if the optics (camera) is at a drastically different altitude than the wheels of the bicycles, such as images from a drone or a camera on a tall pole, the problem is a different one. This is where natural vision systems are intensely effective. Over millions of years, natural systems have developed the kind of ability that recognizes a bicycle from a drone-like vantage point even when only trained to recognize a bicycle from ground level. Nature's ability to apply cognition to visual sequence recognition for use in trajectory prediction is not yet realized in software and hardware, and this remains the main problem in automated vehicle piloting and driving.

+ +

Two Different Output Requirements

+ +

There are two somewhat distinct problems that must be considered in the analysis of the project requirements. The output could be either of these two, depending on whether counting bicycles is the primary goal or whether metrics of travel are an independent objective of their own (a small sketch of the first option follows the list below).

+ +
    +
  • Unit vector $\vec{r}$, presumably in $\mathbb{R}^2$ as a pixel vector, essentially a normalized vector of the first derivative of pixel position with respect to time. The center of the bike, in this case, would be based on the features of the bike in the field of view.
  • +
  • Unit vector $\vec{r}$, presumably in $\mathbb{R}^2$ as a geocoordinate unit vector, essentially a normalized vector of the first derivative of position with respect to time. The center of the bike, in this case, would be based on the features of the bike in geo-space without the altitude coordinate.
  • +
+ +
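
As a minimal illustration of the first output type (the bounding-box centres below are hypothetical, and the helper function is mine, not part of any detection library), the normalized pixel-space direction vector can be computed from the detected centre of the same bicycle in two consecutive frames:

+
import numpy as np
+
+def unit_direction(c_prev, c_curr, dt=1.0):
+    # Normalized pixel-space velocity direction between two detection centres
+    v = (np.asarray(c_curr, dtype=float) - np.asarray(c_prev, dtype=float)) / dt
+    n = np.linalg.norm(v)
+    return v / n if n > 0 else np.zeros_like(v)
+
+# Hypothetical bounding-box centres (x, y) of the same bicycle in two consecutive frames
+print(unit_direction((120, 340), (132, 337)))  # approximately [0.97, -0.24]
+

+ +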

Approaches to AI Design

+ +

The common, but inefficient, artificial network approach is to locate the bicycle in each frame with a CNN and then use one of the progressive RNN types (either a GRU or a b-LSTM network) to recognize motion trends. One of the largest drawbacks is that you may have many concept classes that represent adjacent size-distance-orientation concepts (kernel based recognition models) of a bicycle to train into the CNN. If the bike is traveling toward or away from the optics at some angle, then the disappearance of the bike from $C_a$ and its appearance in $C_b$ needs to be construed as the contiguous motion of one bike. This is not an easy challenge but is heavily covered in the literature.

+ +

It is recommended to use web searches designed to search scholarly articles, not dummies' guides, which are not reliable. There are many academic publications that can be found with the search term, ""Image recognition changing distance orientation."" Looking at old articles from the 90s will provide a good historical context. Looking at new ones from the last three years will provide a survey of the current state of research.

+ +

Other Questions Within the Primary Question

+ +

The original types of recurrent networks are now essentially of historical interest. The dominant in-field recurrent network successes are often of the LSTM, b-LSTM, or GRU types.

+ +

The language (Scala, Python, Java, C, C++) is not particularly relevant if you are delegating the training to a GPU (whose kernels are typically written in CUDA C/C++ regardless of the host language), +so it may be unwise to consider reliability concerns as a primary driver for programming language selection.

+ +

Regarding, ""How AI manages to count the objects,"" AI doesn't — not at the current state of technology. There is no one approach or algorithm across AI technology that dominates over all other approaches for all domains, into which bicycles can be plugged in and it works.

+ +

Currently, the AI engineer designs how the objects will be counted based on the characteristics of the objects to be counted, the meta-features of the incoming stream or data set, and the specifics of the recognition challenge. This is again because the wider capabilities of natural vision systems, arising from the more sophisticated neural nets in animals and people, have not yet been reproduced in engineered systems.

+ +

Final Recommendations Regarding System Design

+ +

The division between the use of kernels, in the CNN context, and the use of one of the recurrent network types is critical. If the engineer tries to delegate too much to the kernel, the above issue of bicycle distance to the optics and turning corners is exacerbated, because kernel operations do not lend themselves well to orientation and distance complexity. However, the CNN approach is excellent for the most upstream operations, such as edge detection and primitive object detection.

+ +

Let the recurrent network (of the more advanced types mentioned above) detect the bicycles as their distance and orientation in relation to the optical path change, unless you have a sizable GPU farm that will perform CNN operations covering many distance and orientation ranges in parallel. Even if you do, or have the patience of a saint, it may be best to delegate total bicycle recognition to the recurrent network anyway, since this is likely closer to the way natural systems do it and the modeling of bicycle travel between distance and orientation categories can be made more naturally seamless.

+ +

Recapping the comment above in this context, the problem complexity would be much lower if the bicycles were on a path, could not turn corners, and must travel either right to left or left to right within a speed range governed by the flow of traffic.

+ +
+ +

Response to Comments

+ +

Regarding Edit 1,

+ +
+

Edit 1: The area of view under the camera is fixed. And we expect the bicycles to move from one entry side. Lets assume that the view and entry/exit is like shown in this video,

+
+ +

YouTube.com reports, ""This video does not exist."" That the camera is fixed, had been assumed in the writing of this answer because no camera trajectory described in the question. Had the expectation that bicycles will move from one entry side of the frame been included in the question, the answer would have addressed that case, but there was no hint of that requirement prior to Edit 1. Nonetheless, much of the content in this answer to the more general case still applies.

+ +

Regarding practicality, let's differentiate practical from prefabricated. The problem originally described before Edit 1 has no prefabricated solution into which one can plug bicycle data and tweak a few parameters to achieve success. In fact, the general interest on the web in seeking prefabricated solutions is usually met with the practical reality that such plug-and-play cases in machine learning are rare. Most of the time, an approach that involves design and experimentation is required.

+ +

Those that have hired and managed human beings know that even the hoped-for high level AI of the future, although possibly quite practical, may not be as prefabricated as idealized. For instance, hiring an EE does not mean that the electrical engineering department will immediately see a practical improvement in throughput. Management, training, and workflow design will still be prerequisites to employee effectiveness. To have practical and reliable bicycle counting, some comprehension of concepts to guide initial experimentation, design, and tuning of the training and use scenarios will likely be necessary.

+ +

If the author of the question has one of those very rare cases where bicycles travel in exactly one direction and in series, with no foot traffic, trikes, walking pedestrians, or pets, then AI is not necessary at all. A simple LED and photo-transistor with a passive low pass filter and a digital input to a counter circuit will perform a reasonable count. However, if two bicycles might pass in parallel, then we are back with the camera, the need for concept classes, and challenges much like those discussed in detail above.

+ +

Regarding steps to a solution, if approaching from the side of a technology to investigate, this answer includes a recommendable sequence, although there is usually quite a bit of overlap and occasional backtracking in actual practice. If this is not a learning exercise and the bicycle problem is one that actually needs to operate in the field, then the above answer is the correct background for comprehending what the machine is required to perform. Following that comprehension would be the investigation of various designs and algorithms, beginning with a search of scholarly articles using the key phrases in the original answer.

+",4302,,4302,,5/13/2019 23:20,5/13/2019 23:20,,,,1,,,,CC BY-SA 4.0 +12261,2,,12116,5/11/2019 15:49,,0,,"

Visualization Appreciated

+

The diagrams are nicely thought out. As you refine your comparative design visualizations, you might use something like Inkscape to draw them for web publication, whether or not you decide to submit a paper to a publisher or license your ideas under Creative Commons.

+

Web Research Realities

+

The reliability of answers from the internet is a function of the author, how much time she or he took in researching the question themselves, what search terms are used, and where the search term is entered. This question falls under the more-challenging-to-research cases, where the most reliable answers are in the result sets of experiments run in proprietary government and corporate research centers.

+

If the GM or Tesla or Toyota or DoD research results were posted online, someone would likely be fired, sued, and possibly jailed, and a team of lawyers would hunt down all references to the posted information, employing international agreements and backbone-level content filters to eliminate the dissemination of secrets.

+

A Better Research Approach

+

We can determine with a fairly high confidence that a decision based on a single frame is much less likely to lead to collision avoidance, beginning with a simple thought experiment.

+
+

Two kids are playing creatively and decide to make up a game that involves a ball. One of the rules of the game is that each player can only run backward, not forward or sideways. That's what makes the game fun for these kids. It's silly because everyone wants to run forward to get to the ball. The rest of the rules of the game are not particularly relevant.

+

A machine driver is processing an image and goes to make the decision. In this case, the term decision indicates the use of a rules based system, whether fuzzy and probabilistic or of the former pure Boolean type, but that is not particularly relevant either. This thought experiment applies to a learned response arising from the training of an architecture built primarily of artificial networks as well.

+

Consider now the case where no decision or learned response is invoked on the basis of the direction of the ball travel or its proximity to the street and path of the vehicle being driven because the kids are facing away from the ball. It would not be reasonable to assume that the training data included this spontaneously made-up game. The result of the combination of child creativity and single frame selection leads to a delayed collision avoidance tactic, at best.

+
+

In contrast, if two or three frames are included in the analysis, the feature of the ball moving toward the street and the children, regardless of their orientation with respect to the ball and the street, may be detected as a feature of the entire system through which the vehicle is driving.

+

This is one of an astronomical number of examples where training without the temporal dimension will lead to a much higher likelihood of improper trajectory projection from pixel data, on the basis of any reasonable training set, than had training and use of the pixel data included the temporal dimension.

+
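
As a minimal sketch of what including two or three frames can mean in practice (the array sizes are illustrative only, not taken from any particular system), consecutive grayscale frames can be stacked along the channel axis before being fed to a convolutional network, so that motion becomes visible to the kernels:

+
import numpy as np
+
+def stack_frames(frames):
+    # Stack the k most recent H x W grayscale frames into an H x W x k input tensor
+    return np.stack(frames, axis=-1)
+
+# Three hypothetical 64 x 64 frames from consecutive time steps
+history = [np.zeros((64, 64)), np.zeros((64, 64)), np.ones((64, 64))]
+print(stack_frames(history).shape)  # (64, 64, 3): the temporal dimension becomes extra input channels
+

+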

Mathematical Analysis

+

When the results of trials with real vehicles move from the domain of corporate intellectual property and the domain of governmental national security artifacts into the public domain, we will see our empirical evidence. Until then we can rest on theory. The above thought experiment and others like it can be represented. Consider the hypothesis of

+

$$ P_{\mathcal{A} = \mathcal{E}(s)} < P_{\mathcal{A} = \mathcal{E}(\vec{s})} \; \text{,}$$

+

where $P_c$ is the probability given condition $c$, $\mathcal{A}$ is the actuality (posteriori) and $\mathcal{E}$ is the expectation function applied to instantaneous sensory information $s$ on the left hand side of the inequality versus the recent history of instantaneous sensory information $\vec{s}$ on the right hand side.

+

If we bring intermediate actions (decisioning after each frame is acquired) into the scope of this question, then we may pose the hypothesis in a way that involves Markov's work on causality and prediction in chains of events.

+
+

The accuracy of decisioning based on optical acquisition in the context of collision avoidance is higher without the Markov property, in that historical optically acquired data in conjunction with new optically acquired data will produce better trajectory oriented computational collision avoidance results than without historical data.

+
+

Either of these might take considerable work to prove, but they are both likely to be provable in probabilistic terms with a fairly reasonable set of mathematical constraints placed on the system up front. We know this because the vast majority of thought experiments show an advantage to a vector of frames over a single frame for the determination of actions that are most likely to avoid a collision.

+

Design

+

As is often the case, convolution kernel use in a CNN is likely to be the best design to recognize edge, contour, reflectivity, and texture features in collide-able object detection.

+

Assembly of trajectories (as a somewhat ethereal internal intermediate result) and subsequent determination of beeping, steering, acceleration, braking, or signaling is likely best handled by recurrent networks of some type, the most publicly touted being b-LSTM or GRU based networks. The attention-based handling and preemption discussed in many papers regarding real-time system control are among the primary candidates for eventual common use among the designs. This is because changes in focus are common during human driving operations and are even detectable in birds and insects.

+

The simple case is when an ant detects one of these.

+
    +
  • A large insurmountable object
  • +
  • A predator
  • +
  • A morsel of good smelling food
  • +
  • Water
  • +
+

The mode of behavior switches, probably in conjunction with the neural pathways for sensory information, when a preemptive stimulus is detected. Humans pilot and drive aircraft and motor vehicles this way too. When you pilot or drive next, bring into consciousness what you have learned unconsciously and this preemptive detection and change of attention and focus of task will become obvious.

+",4302,,-1,,6/17/2020 9:57,5/11/2019 15:49,,,,0,,,,CC BY-SA 4.0 +12263,1,,,5/11/2019 19:48,,3,342,"

I am wondering how I am supposed to train a model using actor/critic algorithms in environments with opponents. I tried the following (using A3C and DDPG):

+ +
    +
  1. Play against a random player. I had rather good results, but not as good as expected, since the most interesting states cannot be reached with a random opponent.
  2. +
  3. Play against a list of specific AIs. Results were excellent against those AIs, but very bad against previously unseen opponents.
  4. +
  5. Play against itself. Seemed the best to me, but I could not get any convergence due to the non-stationary environment.
  6. +
+ +

Any thought or advice about this would be very welcome.

+",23818,,,,,5/16/2019 18:41,Training actor-critic algorithms in games with opponents,,1,5,,,,CC BY-SA 4.0 +12264,1,12299,,5/12/2019 0:15,,8,1971,"

How do you actually decide what reward value to give for each action in a given state for an environment?

+

Is this purely experimental and down to the programmer of the environment? So, is it a heuristic approach of simply trying different reward values and seeing how the learning process shapes up?

+

Of course, I understand that the reward values have to make sense, and that I shouldn't just put in completely random values, e.g. if the agent makes mistakes, then deduct points, etc.

+

So, am I right in saying it's just about trying different reward values for actions encoded in the environment and seeing how it affects the learning?

+",25360,,2444,,1/20/2021 12:49,1/20/2021 12:49,How do we define the reward function for an environment?,,2,1,,,,CC BY-SA 4.0 +12266,1,,,5/12/2019 0:23,,12,11704,"

What are the differences between semi-supervised learning and self-supervised visual representation learning, and how are they connected?

+",12975,,12975,,5/12/2019 7:32,8/4/2020 19:02,What is the relation between semi-supervised and self-supervised visual representation learning?,,3,0,,,,CC BY-SA 4.0 +12267,2,,12266,5/12/2019 1:16,,8,,"

Semi-supervised learning

+

Semi-supervised learning is the collection of machine learning techniques where there are two datasets: a labelled one and an unlabelled one.

+

There are two main problems that can be solved using semi-supervised learning:

+
    +
  • transductive learning (i.e. label the given unlabelled data) and
  • +
  • inductive learning (generalization) (i.e. find a function that maps inputs to outputs, like classification).
  • +
+

Self-supervised learning

+

Self-supervised learning (SSL) is a machine learning approach where the supervisory signal is automatically generated. More precisely, SSL can either refer to

+
    +
  • learn data representations (i.e. learn to represent the data) by solving a so-called pretext (or auxiliary) task, in a self-supervised fashion, i.e. you automatically generate the supervised signal from the unlabelled data (a small sketch of such automatic label generation is given after this list)

    +
  • +
  • automatically label an unlabelled dataset by exploiting data coming from different sensors (this is the usual definition of SSL in the context of robotics)

    +
  • +
+
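
As a minimal sketch of automatically generating labels from unlabelled data (this follows the well-known rotation-prediction pretext task idea; the code and array shapes are illustrative, not taken from any particular library):

+
import numpy as np
+
+def rotation_pretext_batch(images):
+    # images: N x H x W unlabelled images; rotate each by a random multiple of 90 degrees
+    # and use the rotation index as an automatically generated (self-supervised) label
+    labels = np.random.randint(0, 4, size=len(images))  # 0, 1, 2, 3 -> 0/90/180/270 degrees
+    rotated = np.array([np.rot90(img, k) for img, k in zip(images, labels)])
+    return rotated, labels  # any classifier can now be trained to predict the rotation
+
+x, y = rotation_pretext_batch(np.random.rand(8, 32, 32))
+print(x.shape, y.shape)  # (8, 32, 32) (8,)
+

+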

What is the relationship between the two?

+

SSL (for data representation) can be considered a semi-supervised learning approach, if you fine-tune the learned data representations with a labeled dataset to solve a supervised learning problem (i.e. the so-called downstream task), which you will probably do, otherwise the data representations are pretty useless. Read this answer for other details.

+",2444,,2444,,8/4/2020 19:02,8/4/2020 19:02,,,,0,,,,CC BY-SA 4.0 +12268,1,12269,,5/12/2019 2:41,,3,3034,"

I have the following code (below), where an agent uses Q-learning (RL) to play a simple game.

+

What appears to be questionable for me in that code is the fixed learning rate. When it's set low, it's always favouring the old Q-value over the learnt/new Q-value (which is the case in this code example), and, vice-versa, when it's set high.

+

My thinking was: shouldn't the learning rate be dynamic, i.e. it should start high because at the beginning we don't have any values in the Q-table and the agent is simply choosing the best actions it encounters? So, we should be favouring the new Q-values over the existing ones (in the Q-table, in which there's no values, just zeros at the start). Over time (say every n number of episodes), ideally we decrease the learning rate to reflect that, over time, the values in the Q-table are getting more and more accurate (with the help of the Bellman equation to update the values in the Q-table). So, lowering the learning rate will start to favour the existing value in the Q-table over the new ones. I'm not sure if my logic has gaps and flaws, but I'm putting it out there in the community to get feedback from experienced/experts opinions.

+

Just to make things easier, the line to refer to, in the code below (for updating the Q-value using the learning rate) is under the comment: # Update Q-table for Q(s,a) with learning rate

+
import numpy as np
+import gym
+import random
+import time
+from IPython.display import clear_output
+
+env = gym.make("FrozenLake-v0")
+
+action_space_size = env.action_space.n
+state_space_size = env.observation_space.n
+
+q_table = np.zeros((state_space_size, action_space_size))
+
+num_episodes = 10000
+max_steps_per_episode = 100
+
+learning_rate = 0.1
+discount_rate = 0.99
+
+exploration_rate = 1
+max_exploration_rate = 1
+min_exploration_rate = 0.01
+exploration_decay_rate = 0.001
+
+
+rewards_all_episodes = []
+
+for episode in range(num_episodes):
+    # initialize new episode params
+    state = env.reset()
+    
+    done = False 
+    rewards_current_episode = 0 
+    
+    for step in range(max_steps_per_episode):
+        
+        # Exploration-exploitation trade-off
+        exploration_rate_threshold = random.uniform(0, 1)
+        if exploration_rate_threshold > exploration_rate:
+            action = np.argmax(q_table[state,:])
+        else:
+            action = env.action_space.sample()
+            
+        new_state, reward, done, info = env.step(action)
+        
+        # Update Q-table for Q(s,a) with learning rate
+        q_table[state, action] = q_table[state, action] * (1 - learning_rate) + \
+            learning_rate * (reward + discount_rate * np.max(q_table[new_state, :]))
+        
+        state = new_state
+        rewards_current_episode += reward
+        
+        if done == True:
+            break
+        
+        
+    # Exploration rate decay
+    exploration_rate = min_exploration_rate + \
+        (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate*episode)
+    
+    rewards_all_episodes.append(rewards_current_episode)
+
+# Calculate and print the average rewards per thousand episodes
+rewards_per_thousands_episodes = np.array_split(np.array(rewards_all_episodes), num_episodes/1000)
+count = 1000
+print("******* Average reward per thousands episodes ************")
+for r in rewards_per_thousands_episodes:
+    print(count, ": ", str(sum(r/1000)))
+    count += 1000
+    
+# Print updated Q-table
+print("\n\n********* Q-table *************\n")
+print(q_table)
+
+",25360,,2444,,10/8/2021 0:23,10/8/2021 0:23,"In Q-learning, shouldn't the learning rate change dynamically during the learning phase?",,1,1,,,,CC BY-SA 4.0 +12269,2,,12268,5/12/2019 8:31,,5,,"

Yes you can decay the learning rate in Q-learning, and yes this should result in more accurate Q-values in the long term for many environments.

+ +

However, this is something that is harder to manage than in supervised learning, and might not be as useful or necessary. The issues you need to be concerned about are:

+ +

Non-stationarity

+ +

The target values in value-based optimal control methods are non-stationary. The TD target derived from the Bellman equation is never quite a sample from the optimal action value until the very final stages of learning. This is an issue both due to iterative policy improvements and the bootstrap nature of TD learning.

+ +

Reducing learning rate before you reach optimal control could delay finding the optimal policy. In general you want the learning rate to be just low enough that inaccuracies due to over/undershooting the correct value don't prevent or delay differentiating between actions for whatever the interim policy is.

+ +

Policy indifference to accurate action values

+ +

Q-learning's policy is based on $\pi(s) = \text{argmax}_{a} Q(s,a)$. To obtain an optimal policy, what you care about is identifying the highest-valued action, $a$. As such, it is possible in some environments to achieve optimal control whilst Q values are far from accurate, provided the highest-valued action still has the highest estimate of expected return.

+ +

Whether or not you need highly accurate Q values to determine a policy depends on the environment and on how similar actions are to each other. For long-term forecasts (i.e. $\gamma \approx 1$) where individual time steps make little difference, this is a more important detail, as there will be only minor differences between action values.

+ +

If your goal is to get highly accurate predictions of action values, then the above does not necessarily apply.

+ +

Some deterministic environments can use a high learning rate

+ +

This only applies to some, simpler environments, but you should definitely bear this in mind when working with tabular solvers.

+ +

For instance, it is possible to apply tabular Q-learning to Tic Tac Toe with a learning rate of $1.0$ - essentially replacing each estimate with a new latest estimate - and it works just fine.

+ +

In other, more complex environments, this would be a problem and the algorithm would not converge. But clearly, adding a learning rate decay is not a general requirement.

+ +

The learning rate decay schedule is a hyper parameter

+ +

There is no generic schedule that could apply to all environments and be equally effective in them. For an optimal approach, you would need to run a search over possible decay schedules, and the most efficient learning rate decay would apply only to the environment that you tested. It would likely apply well enough to similar environments that it could be an advantage to know it in future.

+ +
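
As a minimal sketch (the constants are illustrative, not recommendations), such a schedule can be written as a function of the episode number and searched over like any other hyperparameter:

+
import numpy as np
+
+def learning_rate(episode, lr_start=0.5, lr_min=0.05, decay=0.001):
+    # Exponentially decay the learning rate towards a floor as training progresses
+    return lr_min + (lr_start - lr_min) * np.exp(-decay * episode)
+
+# Inside the training loop the tabular Q-update then becomes, e.g.:
+#   alpha = learning_rate(episode)
+#   q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
+print(learning_rate(0), learning_rate(5000))  # 0.5, approximately 0.053
+

+ +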

In complex environments, there are other working solutions

+ +

In practice, for environments where neural networks are used, an approach called Deep Q Networks (DQN) is used. It is common to use a conservative/low learning rate with this due to stability issues of combining neural networks and off-policy reinforcement learning.

+ +

In addition, you will quite often see an adaptive gradient optimiser - e.g. Adam or RMSProp - used with DQN. These already adjust gradient step sizes based on recent gradients observed in the neural network during training, so there is less need to add a dynamic learning rate schedule on top.

+ +

Supervised learning challenges, such as ImageNet do sometimes add a learning rate schedule over adaptive optimisers, resulting in further improvements. So it might help with DQN, but the other caveats above could still apply.

+",1847,,1847,,5/12/2019 8:42,5/12/2019 8:42,,,,1,,,,CC BY-SA 4.0 +12270,1,12272,,5/12/2019 11:15,,4,1059,"

Consider the grid world problem in RL. Formally, policy in RL is defined as $\pi(a|s)$. If we are solving grid world by policy iteration then the following pseudocode is used:

+ +

+ +

My question is related to the policy improvement step. Specifically, I am trying to understand the following update rule.

+ +

$$\pi(s) \leftarrow arg max_a \sum_{s'}p(s'|s,a)[r(s,a,s') + \gamma v(s')] $$

+ +

I can have 2 interpretations of this update rule.

+ +
    +
  1. In this step, we check which action (say, going right for a particular state) has the highest reward and assign going right a probability of 1 and rest actions a probability of 0. Thus, in PE step, we will always go right for all iterations for that state even if it might not be the most rewarding function after certain iterations.

  2. +
  3. We keep the policy improvement step in mind, and, while doing the PE step, we update the $v(s)$, based on the action giving highest reward (say, for 1st iteration $k=0$, going right gives the highest reward, we update based on that, while, for $k=1$, we see going left gives the highest reward, and update our value based on that likewise. Thus action changes depending on maximum reward).

  4. +
+ +

For me, the second interpretation is very similar to value iteration. So, which one is the correct interpretation of a policy iteration?

+",,user9947,2444,,5/1/2020 12:34,5/1/2020 12:34,Understanding the update rule for the policy in the policy iteration algorithm,,1,7,,,,CC BY-SA 4.0 +12271,1,,,5/12/2019 13:54,,3,175,"

I was trying to figure out how to create a solver for the puzzle of putting 11 pieces on a board (8 x 8). I created the game in http://www.xams.com.br/quebra. It is possible to turn the piece 90 degrees each time counterclockwise (Girar) and mirror it vertically (Inverter), and so the piece can assume 8 forms.

+ +

When clicking the solver button (Resolver), it tries to put pieces randomly on the board using a brute-force method (it takes a LOT of time). Using this method, I was not able to achieve a solution.

+ +

I would like to try something smarter than this, and having a machine learning algorithm for this would be great. I don't know how to formulate the problem. How would you start this, please?

+ +
+ +

EDIT: +I am still building the page and there is so much to improve. You need to click the center of 3x3 matrix where you want to put the piece.

+ +
+ +

EDIT2: +

+",16837,,2444,,5/12/2019 23:44,5/20/2019 0:18,How do I solve the problem of positioning 11 pieces into a 8x8 puzzle?,,1,4,,,,CC BY-SA 4.0 +12272,2,,12270,5/12/2019 14:07,,2,,"

A policy can be stochastic or deterministic. A deterministic policy is a function of the form $\pi_{\text{deterministic}}: S \rightarrow A$, that is, a function from the set of states to the set of actions. A stochastic policy is a map of the form $\pi_{\text{stochastic}} : S \rightarrow P(A)$, where $P(A)$ is a set of probability distributions ($P(A) = \{ p_{s_1}(A), p_{s_2}(A), \dots, p_{s_{|S|}}(A) \}$, where $p_{s_i}(A)$ is a probability distribution over the set of actions $A$ for the state $s_i$ and $|S|$ is the size of the set of states of the environment) over the set of actions $A$. A deterministic policy can be interpreted as a stochastic policy that gives the probability of $1$ to one of the available actions (and $0$ to the remaining actions), for each state.

+ +

In the case of value iteration (VI) and policy iteration (PI), the policy is deterministic, both in the policy evaluation (PE) or policy improvement (PI) steps.

+ +

In the PE step, for a specific state $s$ (inside the for loop), you use $\pi(s)$, that is, you assume that the action taken in the specific state $s$ is the greedy action according to the current policy (because of the $\text{argmax}$ in the PI step). However, in general, $\pi_k(s_i) \neq \pi_k(s_j)$, for $i \neq j$, where $k$ is the current iteration step (of PE and PI).

+ +

The update rule

+ +

$$\pi_{k+1}(s) \leftarrow arg max_a \sum_{s'}p(s' \mid s, a)[r(s, a, s') + \gamma v_k(s')]$$

+ +

computes the action that is expected to give the highest return, given the current and fixed value function ($v_k$). Have a look at the definition of expectation for discrete random variables: it is defined as a weighted sum (like in the update rule above, where the weights are $p(s' \mid s, a)$).

+ +
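
As a minimal sketch of this update rule for a tabular problem (the array names and shapes are assumptions made here for illustration: P[s, a, s'] holds the transition probabilities and R[s, a, s'] the rewards):

+
import numpy as np
+
+def policy_improvement(P, R, v, gamma=0.9):
+    # Greedy policy w.r.t. a fixed value function v:
+    # pi(s) = argmax_a sum_s' P[s, a, s'] * (R[s, a, s'] + gamma * v[s'])
+    q = np.einsum('ijk,ijk->ij', P, R + gamma * v[None, None, :])  # expected return of each (s, a)
+    return np.argmax(q, axis=1)  # deterministic policy: one action index per state
+
+# Toy example with 2 states and 2 actions
+P = np.array([[[1., 0.], [0., 1.]], [[1., 0.], [0., 1.]]])
+R = np.zeros((2, 2, 2)); R[:, 1, 1] = 1.0
+print(policy_improvement(P, R, v=np.zeros(2)))  # [1 1]
+

+ +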

Value iteration can be seen as a truncated version of policy iteration. In VI, rather than alternating full PE and PI steps, the maximisation over actions is folded directly into the evaluation update, and an explicit policy is extracted only once, at the end. In PI, you alternate between PE and PI, and, at each PI step, you update the policy.

+",2444,,2444,,5/12/2019 16:19,5/12/2019 16:19,,,,10,,,,CC BY-SA 4.0 +12273,1,,,5/12/2019 18:42,,1,58,"

I'm building a web application that collects schema.org data from different webshops such as Amazon, Shopify, etc. It collects data every 6h and shows the current and lowest price. It is used for monitoring products and buying at the lowest price.

+ +

My goal is to recognize products from different shops as the same product. Every shop has its own title for the same product.

+ +

Example:

+ +
Google Pixel 2 64GB Clearly White (Unlocked) Smartphone 
+Google Pixel 2 GSM/CDMA Google Unlocked (Clearly White, 64GB, US warranty) 
+
+ +

Problems:

+ +
    +
  1. I don't have a lot of data (only products chosen by the user)
  2. +
  3. it needs to support every new product for which the app has no data history
  4. +
+",25589,,16565,,5/13/2019 8:37,5/13/2019 8:37,String matching algorithm for product recognition,,0,3,,,,CC BY-SA 4.0 +12274,1,12275,,5/12/2019 18:50,,8,15147,"

In reinforcement learning, there are the concepts of stochastic (or probabilistic) and deterministic policies. What is the difference between them?

+",2444,,2444,,5/12/2019 18:59,6/29/2021 12:41,What is the difference between a stochastic and a deterministic policy?,,3,0,,,,CC BY-SA 4.0 +12275,2,,12274,5/12/2019 18:50,,15,,"

A deterministic policy is a function of the form $\pi_{\mathbb{d}}: S \rightarrow A$, that is, a function from the set of states of the environment, $S$, to the set of actions, $A$. The subscript $_{\mathbb{d}}$ only indicates that this is a ${\mathbb{d}}$eterministic policy.

+

For example, in a grid world, the set of states of the environment, $S$, is composed of each cell of the grid, and the set of actions, $A$, is composed of the actions "left", "right", "up" and "down". Given a state $s \in S$, $\pi(s)$ is, with probability $1$, always the same action (e.g. "up"), unless the policy changes.

+

A stochastic policy can be represented as a family of conditional probability distributions, $\pi_{\mathbb{s}}(A \mid S)$, from the set of states, $S$, to the set of actions, $A$. A probability distribution is a function that assigns a probability for each event (in this case, the events are actions in certain states) and such that the sum of all the probabilities is $1$.

+

A stochastic policy is a family and not just one conditional probability distribution because, for a fixed state $s \in S$, $\pi_{\mathbb{s}}(A \mid S = s)$ is a possibly distinct conditional probability distribution. In other words, $\pi_{\mathbb{s}}(A \mid S) = \{ \pi_{\mathbb{s}}(A \mid S = s_1), \dots, \pi_{\mathbb{s}}(A \mid S = s_{|S|})\}$, where $\pi_{\mathbb{s}}(A \mid S = s)$ is a conditional probability distribution over actions given that the state is $s \in S$ and $|S|$ is the size of the set of states of the environment.

+

Often, in the reinforcement learning context, a stochastic policy is misleadingly (at least in my opinion) denoted by $\pi_{\mathbb{s}}(a \mid s)$, where $a \in A$ and $s \in S$ are respectively a specific action and state, so $\pi_{\mathbb{s}}(a \mid s)$ is just a number and not a conditional probability distribution. A single conditional probability distribution can be denoted by $\pi_{\mathbb{s}}(A \mid S = s)$, for some fixed state $s \in S$. However, $\pi_{\mathbb{s}}(a \mid s)$ can also denote a family of conditional probability distributions, that is, $\pi_{\mathbb{s}}(A \mid S) = \pi_{\mathbb{s}}(a \mid s)$, if $a$ and $s$ are arbitrary. Alternatively, $a$ and $s$ in $\pi_{\mathbb{s}}(a \mid s)$ are just (dummy or input) variables of the function $\pi_{\mathbb{s}}(a \mid s)$ (i.e. p.m.f. or p.d.f.): this is probably the most sensible way of interpreting $\pi_{\mathbb{s}}(a \mid s)$ when you see this notation (see also this answer). In this case, you could also think of a stochastic policy as a function $\pi_{\mathbb{s}} : S \times A \rightarrow [0, 1]$, but, in my view, although this may be the way you implement a stochastic policy in practice, this notation is misleading, as the action is not conceptually an input to the stochastic policy but rather an output (but in the end this is also just an interpretation).

+

In the particular case of games of chance (e.g. poker), where there are sources of randomness, a deterministic policy might not always be appropriate. For example, in poker, not all information (e.g. the cards of the other players) is available. In those circumstances, the agent might decide to play differently depending on the round (time step). More concretely, the agent could decide to go "all-in" $\frac{2}{3}$ of the times whenever it has a hand with two aces and there are two uncovered aces on the table and decide to just "raise" $\frac{1}{3}$ of the other times.

+

A deterministic policy can be interpreted as a stochastic policy that gives the probability of $1$ to one of the available actions (and $0$ to the remaining actions), for each state.

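+

As a minimal illustration (the states, actions and probabilities below are made up), the two kinds of policy can be represented as follows:

+
import numpy as np
+
+actions = ["up", "down", "left", "right"]
+
+# Deterministic policy: a plain mapping from states to actions
+pi_d = {"s1": "up", "s2": "left"}
+
+# Stochastic policy: one probability distribution over the actions for each state
+pi_s = {"s1": [0.7, 0.1, 0.1, 0.1], "s2": [0.25, 0.25, 0.25, 0.25]}
+
+def act(policy, s, stochastic):
+    if stochastic:
+        return np.random.choice(actions, p=policy[s])  # sample an action from the distribution
+    return policy[s]                                   # always the same action for a given state
+
+print(act(pi_d, "s1", False), act(pi_s, "s1", True))
+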
+",2444,,2444,,6/29/2021 12:41,6/29/2021 12:41,,,,0,,,,CC BY-SA 4.0 +12276,1,,,5/12/2019 19:02,,2,149,"

Q-learning does not guarantee convergence for continuous state space problems (Why doesn't Q-learning converge when using function approximation?). In that case, is there an algorithm which can guarantee convergence?

+ +

I am looking at model-based RL, specifically iLQR, but all the solutions I find are for the continuous action space problem.

+",19541,,2444,,5/12/2019 19:07,5/12/2019 19:07,Are there reinforcement learning algorithms that ensure convergence for continuous state space problems?,,0,0,,,,CC BY-SA 4.0 +12279,1,,,5/13/2019 5:02,,1,81,"

I am training an agent to play a simple game using double deep Q-learning. However, the variance in agent performance is very high, even for agents trained with the same model parameters. For example, I can train agent A and agent B using the exact same parameters and agent A beats B 800 to 200.

+ +

I think I understand why this is happening: when training starts, the model is initialized with different weights, and this leads the model to find different local maxima/minima.

+ +

The above makes it difficult to find optimal parameters.

+ +

What are the strategies to reduce this variance? What parameters should I look at tweaking?

+ +

More details about the environment:

+ +

This is a two-player game (Zombie Dice); however, in my implementation so far the agents are learning to maximize the expected score on their turn, so the actions and score of the opponent are ignored.

+ +

The variance is higher when I am using a purely greedy strategy with no exploration at all, though it exists in both cases. I would say roughly 2/3 wins for the stronger side with greedy and 3/5 with exploration, out of 1000 matches.

+ +

The environment is stochastic; I have not done many assessment runs, maybe 20 or 30, and it is mostly eyeballing, but the differences are fairly large; therefore, I am confident that this is not due to chance.

+ +

I tested the models against themselves, and I get scores very close to 50/50. However, two different models trained with the same parameters give results very different from 50/50. I tested this with models trained with different types of parameters and it is generally the same problem.

+",25248,,1847,,5/14/2019 6:51,5/14/2019 6:51,High variance in performance of q-learning agents trained with same parameters,,0,3,,,,CC BY-SA 4.0 +12282,2,,11785,5/13/2019 8:39,,1,,"

Road maps are well defined and you can access them online. For example, you can visit OpenStreetMap to download the road network of a given region.

+ +

Such a road network definition contains nodes (junctions) with lat/lon coordinates and edges between nodes (roads). An edge is defined by a connection of two nodes. Edges can represent one-way roads: you need to define two-way roads as two edges going in opposite directions. The haversine formula gives you the great-circle distance between two nodes: the length of the edge.

+ +
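
For reference, a minimal haversine implementation for two nodes given as (lat, lon) in degrees (the Earth radius constant is the usual approximation of 6371 km):

+
from math import radians, sin, cos, asin, sqrt
+
+def haversine_km(lat1, lon1, lat2, lon2):
+    # Great-circle distance in kilometres between two lat/lon points given in degrees
+    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
+    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
+    return 2 * 6371.0 * asin(sqrt(a))
+
+print(haversine_km(48.8566, 2.3522, 51.5074, -0.1278))  # Paris to London, roughly 343 km
+

+ +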

This definition of a road network gives you a graph (in the mathematical sense) and is largely covered by the literature. You can search the graph and compute many of its parameters.

+ +

From a road network definition you can run a shortest-path algorithm to find the shortest route between two nodes. From a given coordinate you can either pick the closest node or get the closest edge (and compute the distance to the edge and the coordinates of the perpendicular projection point).

+ +

Algorithms like A* and Dijkstra are not very time-consuming, and you can find the shortest path efficiently.

+",25555,,12509,,5/15/2019 18:29,5/15/2019 18:29,,,,0,,,,CC BY-SA 4.0 +12283,1,,,5/13/2019 13:43,,1,1245,"

I am designing a neural network using Deep Q-Learning, which teaches an agent how to play Snake (the classic Nokia game from the 90s). The goal of the game is to navigate the snake on a playing field (2D), and to eat a randomly placed fruit. As the Snake eats the fruit, it grows in length. The game ends if the snake hits the game border, or itself, so as the number of fruits consumed increases, so does the difficulty of navigating without hitting something.

+ +

I have trained the Snake game on a 10x10 playing field using the following inputs:

+ +
    +
  1. x direction of the snake
  2. +
  3. y direction of the Snake
  4. +
  5. The fraction of playing field occupied by the snake itself
  6. +
  7. A bool which says if the fruit is in front of the snake (from its current direction)
  8. +
  9. A bool which says if the fruit is to the left of the snake (from +its current direction)
  10. +
  11. A bool which says if collision (game over) in front of the snake is possible
  12. +
  13. A bool which says if collision to the right is possible
  14. +
  15. A bool which says if collision to the left is possible
  16. +
+ +

With this choice of inputs I am getting the Snake agent to work reasonably well, and it scores until it plateaus. I have examined the cases where the snake dies, and it all happens when it has no way of escaping, for example, it turns around and blocks its own path, until it finally has nowhere to go. This is more likely to happen as the Snake increases in length.

+ +

I was thinking about how I could improve this performance. It seems to me that the reason the Snake can make a self-blocking turn is the inputs. Since the immediate path it takes is clear, there is no information that the next path is not, or that continuing further will eventually lead to game over. If the Snake agent were aware of all obstacles at each step, i.e. the entire state space, then I could imagine that would help train the network towards finding the optimal path without ending up blocking itself.

+ +

Since I have made the Snake game myself, I can return to the agent a matrix, or an unfolded vector, which contains the inputs for each column / row on the playing field. A blocked cell would be set to 0, a free cell 1, the fruit cell has value 0.75 and the snake head (which moves) could be assigned value 0.25. After trying this approach, I have to say I was unsuccessful. The snake ends up just turning in circles, even if I use the same reward system as the 8-input case shown above.

+ +
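
A minimal sketch of the encoding described above (the function and variable names are mine, not from the actual game code):

+
import numpy as np
+
+def encode_state(blocked, snake_body, fruit, head, size=10):
+    # Flattened playing field: free = 1, blocked/body = 0, fruit = 0.75, head = 0.25
+    grid = np.ones((size, size))
+    for (r, c) in blocked | snake_body:
+        grid[r, c] = 0.0
+    grid[fruit] = 0.75
+    grid[head] = 0.25
+    return grid.flatten()  # 100-dimensional input vector for the network
+
+x = encode_state(blocked=set(), snake_body={(5, 5), (5, 6)}, fruit=(2, 2), head=(5, 4))
+print(x.shape)  # (100,)
+

+ +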

I am therefore trying to understand what is happening here. Am I missing information? I would think the full 10x10 state space would give me exactly enough information to lead to the correct evaluation of the next path. I would very much like to hear someone else's input on this approach.

+ +

Thanks a lot

+",25606,,,,,5/14/2019 7:47,Choice of inputs features for Snake game,,1,0,,,,CC BY-SA 4.0 +12284,1,,,5/13/2019 16:12,,2,815,"

Monte Carlo (MC) methods are methods that use some form of randomness or sampling. For example, we can use an MC method to approximate the area of a circle inside a square: we generate random 2D points inside the square and count the number of points inside and outside the circle.

+ +

In reinforcement learning, an MC method is a method that ""samples"" experience (in the form of ""returns"") from the environment, in order to approximate e.g. a value function.

+ +

A temporal-difference algorithm, like $Q$-learning, also performs some form of sampling: it chooses an action using a stochastic policy (e.g. $\epsilon$-greedy) and observes a reward and the next state. So, couldn't $Q$-learning also be considered a MC method? Can an MC method be model-based?

+",2444,,2444,,5/13/2019 16:20,5/13/2019 18:10,What is the relation between Monte Carlo and model-free algorithms?,,1,0,,,,CC BY-SA 4.0 +12285,1,,,5/13/2019 16:50,,0,34,"

I know case-based reasoning has four stages: retrieve, reuse, revise and retain.

+ +

It is used for solving new problems by adapting solutions that were used to solve old problems, like car issues.

+ +

The advantages of it are that solutions are proposed quickly and there's no need to start from scratch, but is there a real-world problem where using this would not be suitable?

+ +

Just trying to better my understanding of it.

+",25609,,2444,,5/13/2019 17:41,5/13/2019 17:41,Are there real-world problems where case-based reasoning is not suitable?,,0,6,,,,CC BY-SA 4.0 +12286,1,,,5/13/2019 17:17,,1,34,"

Consider a problem where we have a finite number of states and actions. Given a state and an action, we can easily know the reward and the next state (deterministic). The state space is really large and has no final state.

+ +

There was a paper that, for a problem of this type, used TD(0) by filling in the value table and chose its actions by:

+ +

$$\pi(s) = \text{argmax}_a \left(r(s,a) + \gamma V(s_{t+1})\right)$$

+ +

I've read somewhere that it is OK to use prediction algorithms when the model is well-described, with the objective of choosing the best actions and not only evaluating the policy.

+ +

What is the purpose, and what are the advantages and disadvantages, of using TD prediction here instead of a TD control algorithm (and just saving the $Q(s, a)$ table)? If it was about space, you still have to store a table with all the rewards for each state-action pair, right?

+ +

I'm not sure if I was able to explain myself very well, as I was trying to keep it short; if some clarification is needed, please tell me.

+",24054,,2444,,5/13/2019 17:38,5/13/2019 17:38,Why should we use TD prediction as opposed to TD control algorithms?,,0,2,,,,CC BY-SA 4.0 +12287,2,,12284,5/13/2019 17:19,,3,,"

In Reinforcement Learning (RL), the use of the term Monte Carlo has been slightly adjusted by convention to refer to only a few specific things.

+ +

The more general use of ""Monte Carlo"" is for simulation methods that use random numbers to sample - often as a replacement for an otherwise difficult analysis or exhaustive search.

+ +

In RL, Monte Carlo methods are generally taken to be non-bootstrapping sample-based approaches to estimating returns. This is a labelling convention within RL - probably because someone called an initial model-free learner a ""Monte Carlo method"", and the name stuck whilst many refinements and new ideas have since been published under different names.

+ +

The historical use of the term is important, since if you mention you are using ""Monte Carlo Control"", it usually means a very specific subset of methods within RL, to most readers.

+ +
+

So, couldn't 𝑄-learning also be considered a MC method?

+
+ +

In the general sense perhaps. The argument is perhaps stronger for it if the environment is being simulated on a computer for the agent to learn from.

+ +

However, if you start to unilaterally call Q-learning a MC method, you are probably going to just confuse people who have learned the conventions in RL.

+ +
+

Can an MC method be model-based?

+
+ +

In general, yes, because the model can be sampled for planning, and a policy can also be separately sampled - so it is possible to run an MC method with or without a model - depending on whether you sample from the model taking ""virtual actions"" (for e.g. planning or refining your agent) or from the environment taking real actions. Many RL techniques blur the line between online learning and planning. For instance, using a simulated environment or historical data can be framed as planning for the real environment.

+ +

Monte Carlo Tree Search is an example of a model-based technique using the term ""Monte Carlo"" within RL frameworks. It is used famously within DeepMind's AlphaZero in order to refine a policy and value estimates during self-play.

+ +
+

What is the relation between Monte Carlo and [other] model-free algorithms?

+
+ +

In the context of RL, Monte Carlo is presented as one way to estimate expected utility (or return) - by sampling from the environment and policy until a complete trajectory is available:

+ +

$$v_{\pi}(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{T-t} \gamma^k R_{t+1+k} | S_t = s ]$$

+ +

MC is contrasted with Temporal Difference (TD) approaches, such as Q-learning, which sample bootstrap estimates using the Bellman Equation:

+ +

$$v_{\pi}(s) = \mathbb{E}_{\pi}[R_{t+1} + \gamma v_{\pi}(S_{t+1}) | S_t = s ]$$

+ +

The two approaches can be combined in various ways, including TD($\lambda$) methods. With TD($\lambda$), if you set $\lambda = 0$ then the algorithm is identical to single-step TD learning, and if you set $\lambda = 1$ then it is very similar to a Monte Carlo method. Often setting it to some intermediate value is more efficient than either extreme.

+",1847,,1847,,5/13/2019 18:10,5/13/2019 18:10,,,,6,,,,CC BY-SA 4.0 +12288,2,,12095,5/13/2019 17:27,,2,,"

So this paper is by google, but is very similar where they use 2D positional embeddings and perform MHA on the flattened image. Are you talking about Attention Augmented Convolutional Networks

+",25496,,,,,5/13/2019 17:27,,,,1,,,,CC BY-SA 4.0 +12289,1,,,5/13/2019 20:13,,2,120,"

Our customer runs a tour agency. He has an excel spreadsheet containing the following information for people that have contacted them:

+ +

Customer name, country, tour duration (requested by customer), tour date, number of people in the tour (usually from 1 to 3), price given to the customer, answer: accepted/rejected (indicates if customer accepted or rejected the price given by the tour agency).

+ +

My customer wants a predictor or tool that can let him enter the details given by future customers, e.g:

+ +

Number of participants, +Tour duration, +Country (not sure if necessary?)

+ +

And the system will return the best price to charge the customer (so he won't reject the proposal but pay the maximum possible).

+ +

Another option would be that the tour agency owner will enter the price and the system will answer ""Customer will accept that price"" or ""Customer will reject that price"".

+ +

Is this even possible? I think it may be done using neural networks trained with the previous answers from customers that the tour agency owner has in his excel spreadsheet?

+",25614,,,,,5/14/2019 13:35,Predict best price using neural network?,,1,0,,,,CC BY-SA 4.0 +12290,2,,4683,5/13/2019 21:21,,8,,"

Recurrent neural networks (RNNs) are artificial neural networks (ANNs) that have one or more recurrent (or cyclic) connections, as opposed to just having feed-forward connections, like a feed-forward neural network (FFNN).

+ +

These cyclic connections are used to keep track of temporal relations or dependencies between the elements of a sequence. Hence, RNNs are suited for sequence prediction or related tasks.

+ +

In the picture below, you can observe an RNN on the left (that contains only one hidden unit) that is equivalent to the RNN on the right, which is its ""unfolded"" version. For example, we can observe that $\bf h_1$ (the hidden unit at time step $t=1$) receives both an input $\bf x_1$ and the value of the hidden unit at the previous time step, that is, $\bf h_0$.

+ +

+ +

The cyclic connections (or the weights of the cyclic edges), like the feed-forward connections, are learned using an optimisation algorithm (like gradient descent) often combined with back-propagation (which is used to compute the gradient of the loss function).

+ +

Convolutional neural networks (CNNs) are ANNs that perform one or more convolution (or cross-correlation) operations (often followed by a down-sampling operation).

+ +

The convolution is an operation that takes two functions, $\bf f$ and $\bf h$, as input and produces a third function, $\bf g = f \circledast h$, where the symbol $\circledast$ denotes the convolution operation. In the context of CNNs, the input function $\bf f$ can e.g. be an image (which can be thought of as a function from 2D coordinates to RGB or grayscale values). The other function $\bf h$ is called the ""kernel"" (or filter), which can be thought of as (small and square) matrix (which contains the output of the function $\bf + h$). $\bf f$ can also be thought of as a (big) matrix (which contains, for each cell, e.g. its grayscale value).

+ +

In the context of CNNs, the convolution operation can be thought of as dot product between the kernel $\bf h$ (a matrix) and several parts of the input (a matrix).

+ +

In the picture below, we perform an element-wise multiplication between the kernel $\bf h$ and part of the input $\bf h$, then we sum the elements of the resulting matrix, and that is the value of the convolution operation for that specific part of the input.

+ +

+ +

To be more concrete, in the picture above, we are performing the following operation

+ +

\begin{align} +\sum_{ij} +\left( +\begin{bmatrix} +1 & 0 & 0\\ +1 & 1 & 0\\ +1 & 1 & 1 +\end{bmatrix} +\otimes +\begin{bmatrix} +1 & 0 & 1\\ +0 & 1 & 0\\ +1 & 0 & 1 +\end{bmatrix} +\right) += +\sum_{ij} +\begin{bmatrix} +1 & 0 & 0\\ +0 & 1 & 0\\ +1 & 0 & 1 +\end{bmatrix} += 4 +\end{align}

+ +

where $\otimes$ is the element-wise multiplication and the summation $\sum_{ij}$ is over all rows $i$ and columns $j$ (of the matrices).

+ +

To compute all elements of $\bf g$, we can think of the kernel $\bf h$ as being slided over the matrix $\bf f$.

+ +

In general, the kernel function $\bf h$ can be fixed. However, in the context of CNNs, the kernel $\bf h$ represents the learnable parameters of the CNN: in other words, during the training procedure (using e.g. gradient descent and back-propagation), this kernel $\bf h$ (which thus can be thought of as a matrix of weights) changes.

+ +

In the context of CNNs, there is often more than one kernel: in other words, it is often the case that a sequence of kernels $\bf h_1, h_2, \dots, h_k$ is applied to $\bf f$ to produce a sequence of convolutions $\bf g_1, g_2, \dots, g_k$. Each kernel $\bf h_i$ is used to ""detect different features of the input"", so these kernels are different from each other.

+ +

A down-sampling operation is an operation that reduces the input size while attempting to maintain as much information as possible. For example, if the input size is a $2 \times 2$ matrix $\bf f = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}$, a common down-sampling operation is called the max-pooling, which, in the case of $\bf f$, returns $3$ (the maximum element of $\bf f$).

+ +

CNNs are particularly suited to deal with high-dimensional inputs (e.g. images), because, compared to FFNNs, they use a smaller number of learnable parameters (which, in the context of CNNs, are the kernels). So, they are often used to e.g. classify images.

+ +

What is the fundamental difference between RNNs and CNNs? RNNs have recurrent connections while CNNs do not necessarily have them. The fundamental operation of a CNN is the convolution operation, which is not present in a standard RNN.

+",2444,,2444,,6/4/2019 17:20,6/4/2019 17:20,,,,0,,,,CC BY-SA 4.0 +12291,2,,12289,5/13/2019 22:31,,1,,"

This is possible... but there's no reason to use a neural network! Your best bet on a problem like this is likely to use a logistic regression for the yes/no aspect of the question and a linear regression (or combination of linear regressions) to answer the pricing question - there are also ways of simply using linear regressions and setting up cutoffs to answer the yes/no question.

+ +

The reality is that the accuracy of such a model/series of models would depend entirely on the quality and quantity of the data, but it's unlikely that in this case a neural network would provide a better result than smart usage of simpler models.

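+ +

As a minimal sketch of the suggested approach (the column layout and the tiny example data are made up; in practice the agency's spreadsheet would be used), a logistic regression can estimate the probability that a given quote is accepted:

+
import numpy as np
+from sklearn.linear_model import LogisticRegression
+
+# Columns: number of participants, tour duration (days), quoted price
+X = np.array([[2, 3, 400], [1, 5, 900], [3, 2, 350], [2, 7, 1500], [1, 3, 500]])
+y = np.array([1, 0, 1, 0, 1])  # 1 = customer accepted the quoted price, 0 = rejected
+
+model = LogisticRegression().fit(X, y)
+print(model.predict_proba([[2, 4, 600]])[0, 1])  # estimated acceptance probability for a new quote
+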
+",25382,,,,,5/13/2019 22:31,,,,0,,,,CC BY-SA 4.0 +12292,2,,4328,5/14/2019 0:57,,3,,"

Terms in a field are sometimes defined unambiguously. For instance, we know what convergence means when communicating about machine learning algorithms in academic publications because it has a formal definition in an older field, mathematics. However, the term machine learning is defined ambiguously across academic publications.

+ +

Perspectives on Machine Learning

+ +

Some see it as a branch of applied probability and statistics involving models with curvature (not usefully approximated by a first degree polynomial) and the application of those principles in digital computing. Some see it as an extension of the work of James Watt and Le Roy MacColl: the application of the feedback control concept to digital control. Some see it as the natural result of the pioneering AI work of Norbert Wiener and John Von Neumann, where the adaptive qualities of nature, including neurochemistry, are simulated with the intention of birthing artificial life.

+ +

Some don't see that deeply into ML and imagine that it's a set of classes and libraries, the mastery over which will make for a great career. As shallow as that may seem, that concept may be as true as the other three deeper conceptions.

+ +

Perspectives on Data Mining

+ +

The term data mining is like that. Each book, and sometimes each chapter within the same book, seems to have its own distinct conception of the verb mining. Although these definitions have some similarity, the term is nothing like the term convergence or even database in IT or melody in music.

+ +

Unfolding the metaphor contained in the two words of the term, data mining is digging up data, and perhaps that's a satisfactory definition for the most general use of the term. The information sought is not on the surface, like diamonds dropped onto the ground, but rather underneath and covered with other materials so that one has to survey, dig, and process to get past the worthless material and reveal the gems.

+ +

This term has another vantage point. In systems theory there is an important distinction between noise and signal. In data science, what an electrical engineer would call the signal is the listing of statistics, table, graphic, or other visualization the miner's client needs to make management decisions. The noise is everything obscuring the signal through complexity, volume, or prominence.

+ +

Perspectives on Pattern Recognition

+ +

The term pattern recognition is perhaps the most ambiguous because neither of the two words arose in a scientific context.

+ +

Early uses of the word pattern in English (and its equivalent in other languages) are related to shelter construction, farming, or early textiles. The notion that the shape of a letter or other symbol or a sequence of phonetic elements that make up a spoken word were patterns only arose recently. Much of the early and current work in pattern recognition involving computers had to do with converting natural language expressions into some functional machine representation.

+ +

The term pattern is also ambiguous because of gestalt, the dependence of perception on the orientation of the recognizer at the time of recognition. A sand castle may have an architecture to an architect, a chemical composition to a chemist, an indication of civilization to the starving passengers of a boat adrift, an obstacle in the way to a crayfish, and an imaginary home for a child.

+ +

To a mathematician, it may be a three-dimensional form with particular surface topology, feature curvature, and dimensions. To a physicist, there may be no significant difference between the sand castle and the seagull flying over it or the air in between them (unless the sand castle is the triumph of the physicist's own child).

+ +

The orientation of the machine is even more a constraint on the emulation of some aspect of human perception than demonstrated in gestalt psychology experiments. The human can adjust perception when a new kind of pattern or structure is pointed out. Until AI progresses further, that kind of experience, where the computer would say, ""Oh, yes. Now I see the old woman in the picture of the young woman,"" is only realizable in software to the most primitive degree.

+ +

Taken literally, the term recognition means the repetition of a cognitive event, but that is not what we mean when we say, ""I recognize that,"" in common speech. We usually mean that a mental search for some set of sensory features (not necessarily any more a pattern than anything else in the sensory stream) is identified and associated with some internal object or concept.

+ +

The most common use of a convolutional network (CNN) is neither of these. It is usually used to categorize objects or as a feature extracting sensory front end to a much larger AI design.

+ +

Overlap and Associations

+ +

With all these ambiguities present, some overlap may be apparent, in that some AI activities may thoroughly involve two or all three of these terms. Certainly some associations between the three terms are obvious.

+ +
    +
  • When mining data, we may be looking for a particular kind of structure in a sea of data and have a particular search strategy to narrow the search and make it manageable for computing resources available. The test use during the search may be called pattern recognition.
  • +
  • In machine learning, we may train a network of artificial cells to assist in locating data or features in data that are meaningful to the stakeholders in the project. That would be using ML for data mining projects.
  • +
+ +

A large number of other associations between the three terms can be made. Which ones would appear most prominent to the expert would depend on the scientific, research, and career orientation of the expert.

+ +

Not Sufficient Overlap to be Synonyms

+ +

It would be difficult however to declare any two of the three to be synonymous. The three arose out of different kinds of research and from different orientations. Only some of that etymology is preserved in the terms themselves.

+",4302,,4302,,5/20/2019 7:21,5/20/2019 7:21,,,,0,,,,CC BY-SA 4.0 +12294,1,12308,,5/14/2019 2:56,,1,711,"

I'm looking for some general advice here before I dive in.

+ +

I'm interested in creating a new environment for OpenAI gym to provide some slightly more challenging continuous control problems than the ones currently in their set of Classic Control environments. The intended users of this new environment are just me and members of a meetup group I am in but obviously I want everyone to be able to access, install and potentially modify the same environment.

+ +

What's the easiest way to do this?

+ +
    +
  • Can I simply import and sub-class the OpenAI gym.Env class, similar to the way cartpole.py does?

  • +
  • Or do I need to create a whole new pip package, as this article suggests?

  • +
+ +

Also, before I invest a lot of time on this, has anyone already created a cart-pole system environment where the goal is to stabilize the full state (not just the vertical position of the pendulum)? (I tried googling and couldn't find any variants on the original cart-pole-v1 but I'm suspicious as I can't be the first person to make modified versions of some of these classic control environments).

+ +

Thanks, (I realize this question is a bit open-ended) but hoping for some good advice that will point me in the right direction.

+",25618,,25618,,5/14/2019 3:02,6/13/2019 17:02,Advice on creating a new environment using OpenAI Gym,,1,0,,7/6/2022 11:58,,CC BY-SA 4.0 +12295,1,,,5/14/2019 6:16,,2,64,"

I am trying to estimate the real world distance (in metres) between two points in a perspective image using an uncalibrated camera. However, the dimensions of an object in the image are known.

+ +

I thought of using pixels per metre, but, since it is a perspective image, that did not seem like a viable approach. I believe I need the camera matrix from the image (maybe using the known object), then compute the 3D coordinates of the points, and then simply compute the Euclidean distance between the points.

+ +

If that is the right approach, how may I do it? If there are any alternatives to this approach, kindly let me know.

+",25623,,2444,,5/14/2019 18:23,5/14/2019 18:23,Estimate distance between points in perspective image,,0,0,,,,CC BY-SA 4.0 +12296,2,,12264,5/14/2019 7:09,,1,,"

Yes, you are right. It is somewhat an arbitrary choice, although you should consider the reasonable numerical ranges of your activation functions if you decide to go beyond the values +/- 1.

+

You can also have a think about whether you want to add a small reward for the agent reaching states that are near the goal, if you have an environment where such states are discernable.

+

If you want to have a more machine learning approach to reward values, consider using an Actor-Critic arrangement, in which a second network learns value estimates for non-goal states (by observing the results of agent exploration), although you still need to determine end-state values according to your handcrafted heuristic.

+",12509,,2444,,10/7/2020 17:12,10/7/2020 17:12,,,,1,,,,CC BY-SA 4.0 +12297,2,,12283,5/14/2019 7:35,,1,,"

Before we start to tweak your agent-environment setup, there are a couple of important things to note.

+ +
    +
  • Q-Learning

    + +

    Q-learning is fundamentally a greedy approach based on action-value functions. When I say action value, what I mean is this: $q_\pi(s,a) = \sum_{\hat{s},r} p(\hat{s},r \mid s, a)\,[r + \gamma V_\pi(\hat{s})]$.

    + +

    This means that, under policy $\pi$, the action value for taking action ""$a$"" in state ""$s$"" is the probability of reaching state ""$\hat{s}$"" and getting reward ""$r$"" (given that the current state is ""$s$"" and the action taken is ""$a$"") multiplied by the sum of the immediate reward ""$r$"" and the expected future reward given by the future state value ""$V(\hat{s})$"", summed over all possible next states and rewards.

    + +

    Please note that $p(\hat{s},r | s, a)$ is only available when a model of the environment's dynamics is known (model-based methods) and is not available in model-free methods. In model-free methods we execute many experiments and take the expected value, relying on the Law of Large Numbers, which is what we do in common deep reinforcement learning implementations.

    + +

    The above formulation for future rewards can be expanded using the state-value formulation $V_\pi(s) = \sum_a \pi(a \mid s)\, q_\pi(s,a)$, where $\pi$ is the policy we are following, i.e. the probability of taking action $a$ when in state $s$.

    + +

    Why is it termed Greedy?

    + +

    This is because, while relying on many experiments, with every action we use the immediate reward and the maximum q-value ($q$) associated with the next state to update the current q-value.

  • +
+ +

In Q-learning, the future reward is what you calculate by submitting the next state to the network and taking the maximum of the q-values obtained. (Hence, greedy!)

+ +

Crux of the above

+ +

Having said all the above (Duh!), what is in it for us?

+ +

In Q-learning, the reward is key for your model to learn, i.e. the immediate reward ""$r$"" and the future reward ($V(\hat{s})$).

+ +
    +
  • When the state-space is limited

    + +

    For example, a numerical tic-tac-toe game. In such cases it is ideal to have a discrete reward system, such as the one you suggested.

  • +
  • When the state-space is very large (the problem statement you are tackling)

    + +

    In such cases, it is ideal to have a continuous reward mechanism, i.e. rewards should be more variable in nature, like continuous values.

  • +
  • Suggestions

    + +
      +
    1. You can define the reward function in the environment as $10\sqrt{2}$ minus the Euclidean distance between the head of your snake and the fruit it is trying to reach. The reason for the $10\sqrt{2}$ term is that this is the maximum possible distance on a 10x10 board. This gives you a more variable/continuous reward for your model to learn from (since you are training it with a neural network). As your snake gets closer to the fruit, this value keeps increasing. One caveat: if, by taking an action, your snake reaches the fruit, you can give it a specific constant (extra) reward, and if it hits itself or the borders, you penalize it with a specific constant, so that the model learns better. (A short sketch of this reward function is given after this list.)

    2. +
    3. You can define your state (the input vector to your NN) as a 100-dimensional array (the 10x10 grid), with each element signifying what occupies that cell: 1 means the fruit, 0 means the snake's own body, and 0.5 means the cell is free for movement. The idea is to keep the input values of the vector between 0 and 1. As your agent takes actions, the values in the state will keep changing.

    4. +
    5. The output of your network should have 3 neurons, each signifying the $q$-value of one action (front/right/left movement). In your get_action function you pick the action with the maximum q-value, and keep adding transitions to your memory d_que. Subsequently, you can trigger your model_train method at every episode, or even after every step.

    6. +
    7. Lastly, the hyperparameters would be the discount factor $\gamma$ you choose and the learning rate $\alpha$ you use for the NN. Those are something you will have to experiment with, along with the depth of your ANN. I think a single hidden layer should be enough.

    8. +
    9. For the last layer of your NN, please use a linear activation function, and not a relu activation function.

    10. +
    11. Please run many, many experiments, something in the millions, for optimal training. There are multiple ways of monitoring your model's learning process: how many games are won, how much fruit it manages to eat in one game, etc.

    12. +
  • +
+ +
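
For illustration, here is a minimal sketch of the distance-based reward described in suggestion 1 (the board size, the bonus/penalty constants and the function signature are assumptions made for the example, not fixed values):

+ +
import math
+
+BOARD = 10
+MAX_DIST = BOARD * math.sqrt(2)  # maximum possible distance on a 10x10 board
+
+def reward(snake_head, fruit, ate_fruit, crashed):
+    # snake_head and fruit are (x, y) tuples; ate_fruit and crashed are booleans.
+    if crashed:                   # hit a wall or its own body: fixed penalty
+        return -MAX_DIST
+    if ate_fruit:                 # reached the fruit: fixed extra reward
+        return MAX_DIST
+    distance = math.dist(snake_head, fruit)
+    return MAX_DIST - distance    # grows as the snake gets closer to the fruit
+
+ +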

Hope the above helps.

+",25617,,25617,,5/14/2019 7:47,5/14/2019 7:47,,,,5,,,,CC BY-SA 4.0 +12299,2,,12264,5/14/2019 8:23,,5,,"

In Reinforcement Learning (RL), a reward function is part of the problem definition and should:

+
    +
  • Be based primarily on the goals of the agent.

    +
  • +
  • Take into account any combination of starting state $s$, action taken $a$, resulting state $s'$ and/or a random amount (a constant amount is just a random amount with a fixed value having probability 1). You should not use other data than those four things, but you also do not have to use any of them. This is important, as using any other data stops your environment from being a Markov Decision Process (MDP).

    +
  • +
  • Given the first point, as direct and simple as possible. In many situations this is all you need. A reward of +1 for winning a game, 0 for a draw and -1 for losing is enough to fully define the goals of most 2-player games (a minimal sketch of such a reward function follows this list).

    +
  • +
  • In general, have positive rewards for things you want the agent to achieve or repeat, and negative rewards for things you want the agent to avoid or minimise doing. It is common for instance to have a fixed reward of -1 per time step when the objective is to complete a task as fast as possible.

    +
  • +
  • In general, reward 0 for anything which is not directly related to the goals. This allows the agent to learn for itself whether a trajectory that uses particular states/actions or time resources is worthwhile or not.

    +
  • +
  • Be scaled for convenience. Scaling all rewards by a common factor does not matter at a theoretical level, as the exact same behaviour will be optimal. In practice you want the sums of reward to be easy to assess by yourself as you analyse results of learning, and you also want whatever technical solution you implement to be able to cope with the range of values. Simple numbers such as +1/-1 achieve that for basic rewards.

    +
  • +
+
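
To make the first points concrete, here is a minimal sketch of two such direct reward functions, one for a generic turn-based game and one for a finish-as-fast-as-possible task (the outcome encoding is a hypothetical choice made only for the example):

+
def game_reward(outcome):
+    # outcome is a hypothetical string: 'win', 'loss', 'draw' or None while the game is running.
+    return {'win': 1.0, 'loss': -1.0, 'draw': 0.0}.get(outcome, 0.0)
+
+def time_penalty_reward(done):
+    # For finish-as-fast-as-possible tasks: -1 per time step until the episode ends.
+    return 0.0 if done else -1.0
+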

Ideally, you should avoid using heuristic functions that reward an agent for interim goals or results, as that inserts your opinion about how the problem should be solved into the system, and may not in fact be optimal given the goals. In fact you can view the purpose of value-based RL as learning a good heuristic function (the value function) from the sparser reward function. If you already had a good heuristic function then you may not need RL at all.

+

You may need to compare very different parts of the outcome in a single reward function. This can be hard to balance correctly, as the reward function is a single scalar value and you have to define what it means to balance between different objectives within a single scenario. If you do have very different metrics that you want to combine then you need to think harder about what that means:

+
    +
  • Where possible, flatten the reward signal into the same units and base your goals around them. For instance, in business and production processes it may be possible to use currency as the units of reward and convert things such as energy used, transport distance, etc., into that currency.

    +
  • +
  • For highly negative/unwanted outcomes, instead of assigning a negative reward, consider whether a constraint on the environment is more appropriate.

    +
  • +
+

You may have opinions about valid solutions to the environment that you want the agent to use. In which case you can extend or modify the system of rewards to reflect that - e.g. provide a reward for achieving some interim sub-goal, even if it is not directly a result that you care about. This is called reward shaping, and can help in practical ways in difficult problems, but you have to take extra care not to break things.

+

There are also more sophisticated approaches that use multiple value schemes or no externally applied ones, such as hierarchical reinforcement learning or intrinsic rewards. These may be necessary to address more complex "real life" environments, but are still subject of active research. So bear in mind that all the above advice describes the current mainstream of RL, and there are more options the deeper you research the topic.

+
+

Is this purely experimental and down to the programmer of the environment. So, is it a heuristic approach of simply trying different reward values and see how the learning process shapes up?

+
+

Generally no. You should base the reward function on the analysis of the problem and your learning goals. And this should be done at the start, before experimenting with hyper-parameters which define the learning process.

+

If you are trying different values, especially different relative values between different aspects of a problem, then you may be changing what it means for the agent to behave optimally. That might be what you want to do, because you are looking at how you want to frame the original problem to achieve a specific behaviour.

+

However, outside of inverse reinforcement learning, it is more usual to want an optimal solution to a well-defined problem, as opposed to a solution that matches some other observation that you are willing to change the problem definition to meet.

+
+

So, am I right in saying it's just about trying different reward values for actions encoded in the environment and see how it affects the learning?

+
+

This is usually not the case.

+

Instead, think about how you want to define the goals of the agent. Write reward functions that encapsulate those goals. Then focus on changes to the agent that allow it to better learn how to achieve those goals.

+

Now, you can do it the other way round, as you suggest. But what you are doing, in that case, is changing the problem definition, and seeing how well a certain kind of agent can cope with solving each kind of problem.

+",1847,,2444,,10/7/2020 17:09,10/7/2020 17:09,,,,3,,,,CC BY-SA 4.0 +12303,1,,,5/14/2019 14:07,,2,19,"

I'm building a CNN network that maps images to images (image-to-image).

+ +

After training, I have some bad results in part of the Image.

+ +

I would like to find the neurons that most influenced those pixels and do retraining only for them.

+ +

I have seen some previous works about visualizing networks, like here:

+ +

https://github.com/utkuozbulak/pytorch-cnn-visualizations

+ +

But they are only finding activation maps or visualizing with softmax in the output layer.

+ +

How can I do that for Image to Image?

+",25144,,,,,5/14/2019 14:07,How to debug and find neurons that most influenced a pixel in the output image?,,0,0,,,,CC BY-SA 4.0 +12306,2,,12154,5/14/2019 15:59,,1,,"

The Title Question

+
+

Is there any paper, article or book that analyzes the feasibility of achieving AGI through brain-simulation?

+
+

Yes. There are various analyses that have been published. We have some early work like Some Philosophical Problems From the Standpoint of Artificial Intelligence, John McCarthy and Patrick J. Hayes, Stanford University, 1969. And there is more recent and optimistic work like Essentials of General Intelligence: The Direct Path to Artificial General Intelligence, Peter Voss, 2007. There are refutations of the concept of General Intelligence, artificial or not, in some posts here (listed below) and quite a bit of academic scrutiny of the assumptions behind the AGI concept and the offspring of it: The Singularity. Concern about The Singularity is a social phenomenon, not a scientific one. It began with the depiction of the emergence of a super-intelligence as an inevitability in science fiction and in sensational media.

+

But mathematically terse proofs of AGI feasibility are not found in the media or in the Terminator franchise started by the wonderful imagination of James Cameron. This is not to say that world domination cannot fall into the hands of a new species created by humans. In fact, Jacques Ellul, in his The Technological Society, points out that in many ways humans are already subservient to technology; that humankind's creation is already autonomous and dominant. His 449-page heap of evidence is witty and quite convincing.

+

Academic publications either

+
+

(a) Assume feasibility and discuss approaches to its design or

+
+
+

(b) Illuminate caveats in the idea of a universally intelligent system

+
+

Where is the Mind?

+

Whether the entirety of the mind arises from a physical system is a question that dates back to René Descartes and Gottfried Leibniz. Current conceptions of intelligence tend to deny the interaction and interdependence of the human brain, the rest of the human, and the biosphere into which the human fits without extreme expenditure to produce a floating, underwater, or underground simulation of the biosphere. The mind may center in the brain, but any brain disconnected from the biosphere and the circulatory system for even a short time is completely useless. A think tank is not a bunch of smart people in a sensory deprivation tank. Quite the opposite. Early experimentation with think tanks revealed the importance of mental health of the participants through diversion and physical exercise so that the various forms of neurological stasis could be maintained.

+

It is quite questionable whether computers without bodies will ever learn as much of general use (that can be put to use in the biosphere) without moving about and interaction with the biosphere. It is highly likely that the mind is more like Carl Jung sees it, as in the brain and elsewhere. Ludwik Fleck, in a somewhat Jungian fashion, speaks of science as the product of a collective in his Genesis and Development of a Scientific Fact, a good read that throws considerable light on the folly of most people's naive conception of facts.

+

Quantum Computing Buzz

+

When Intel's founder, Robert Noyce, brought in Gordon Moore in 1975, it was because of various statistical observations he had made, including what is now known as Moore's Law, a curve that on semi-log paper in 1965 looked linear. Since then, the business assumption in Silicon Valley has been that compactness and speed would double frequently and consistently. In the last few years, speed has stopped doubling and compactness seems to have reached its limit around 7 nm. The transistors in this VLSI technology have only a handful of atoms to represent the semiconductor topologies that permit gating, the fundamental activity of digital circuitry.

+

In essence, to continue the Silicon Valley boon, it has to become Particle Valley, using carbon allotropes or other nano-tech to build computers where Moore's Law is not overturned by the limitations of the structure of the universe. Whether this can be done requires its own feasibility study. Already, SSD (solid state drive) technology relies heavily on probability. Errors are occurring in great proportions and error correcting techniques are used to keep them manageable. That's because, as we reach for designs closer to the quantum level, Brownian motion or slight electromagnetic interference can upset the intended digital operation.

+

Challenges Versus Show Stoppers

+

As with many trends in the history of human endeavors, determining what is a challenge and what is a show stopper is not clear until viewing in hindsight. That is why people are asking the various questions in this Stack Exchange listed in the question.

+

Stating that mentioning our partial knowledge of brain operation is off topic for this question might have been appropriate if we were trying to invent something that we had not already seen in operation in nature. However, that is not the case. We call it artificial intelligence because we think that we are intelligent and want to extend that facility we find in ourselves. The following fields, in addition to quantum physics and machine learning, are thus necessarily relevant to the feasibility of a more generally usable fabricated intelligent system.

+
    +
  • Neuro-chemistry in general
  • +
  • Genetics as it applies to neurological network structuring and cell activity
  • +
  • Correlation between DNA coding and human mental performance and stability
  • +
  • Applied psychology
  • +
  • Addiction science
  • +
  • Cognitive science
  • +
  • Mathematics in general and probability and statistics specifically
  • +
  • Control theory
  • +
  • Philosophy, especially in regard to whether intelligence actually exists and the ideas of culturally determined knowledge and belief
  • +
  • Language and linguistics
  • +
  • Symbology
  • +
  • Conceptions of a thought collective
  • +
+

Not only do we not know enough in these areas, but we know very little. There is also considerable evidence that humanity tends to forget some of the things it once knew. Take away our petroleum and we may find out just how much we forgot.

+

Whether any of the things we have lost from our general knowledge are prerequisites to solving the challenges of universally intelligent system design is unknown. A simple example is that many computing science graduates do not know anything about Kurt Gödel's incompleteness theorems and how they led to Alan Turing's completeness theorem. Many use back-propagation in their Python code but don't have any conception that it is a corrective feedback distribution scheme.

+

On the flip side, we have no strong argument to draw the conclusion that there is some theoretical limitation as solid as the second law of thermodynamics, one of the apparently insurmountable hard stops built into our physical reality on and off planet. We have no proof that intelligence does not exist and is rather an anthropocentric fantasy. We have no proof that whatever limitations humans have cannot be overcome by copying the intellectual life of human beings into human creations or using intellect to create greater intellect in machines.

+

What we know, as smart as we think we are, is greatly outweighed by what we don't know. Anyone in real research work will agree.

+
+

Refutations of AGI Philosophy

+

Some of the many refutations of AGI that appear throughout academia. Some articles include these.

+ +

These are some posts here that question various assumptions prevalent in popular AI media.

+ +",4302,,2444,,1/26/2021 1:40,1/26/2021 1:40,,,,0,,,,CC BY-SA 4.0 +12307,1,,,5/14/2019 16:28,,3,1705,"

I understand the way we build and train a model, but all of the online courses I've found end with this. I can't find any course explaining the process of utilizing the trained model to address the problem.

+

Is there any course out there that explains the whole process from data collection, model building, and utilizing the model to solve the real-world problems?

+",25628,,2444,,7/5/2020 18:02,7/5/2020 18:02,"After a model has been trained, how do I use it to address the real-world problems?",,1,0,,,,CC BY-SA 4.0 +12308,2,,12294,5/14/2019 16:37,,2,,"

This stackoverflow post provides a good answer. It recommends creating the environment as a new package.

+ +

Note the updated link to the instructions on OpenAI.

+ +

So that's what I did and it's honestly not that much work and then you can import the environment into your script like this:

+ +
import gym
+import gym_foo
+env = gym.make('foo-v0')
+
+ +

This blog post also describes the process in a little bit more detail.

+",25618,,,,,5/14/2019 16:37,,,,1,,,,CC BY-SA 4.0 +12309,2,,12307,5/14/2019 17:45,,6,,"

Using a machine learning or AI-powered model once it has been built and tested is not directly an AI issue; it is just a development issue. As such, you won't find many machine learning tutorials that focus on this part of the work. But they do exist.

+ +

In essence it is the same as integrating any other function, which might be in a third-party library:

+ +
    +
  • Package the new function so that it can be called from your production system (this may be the hardest part)

  • +
  • Decide where in the code of your production system to call the new function. In your example case, maybe after a form describing an incident is completed, you could link the top recommended KB articles from the new ticket.

  • +
  • Design and check the associated user experience/UI as appropriate for your development team (in small projects you may skip this step and just implement)

  • +
  • Change your production code to call the packaged function. In any professional development team, this part will have multiple stages, but not really relevant to the question - if you are the ML specialist and delivering your new model to an existing team, you need to talk to them about both the packaging part and the steps involved here.

  • +
+ +

Often, machine learning models come with a whole bunch of dependencies that the rest of a system does not have. There are a variety of solutions for that, depending on the libraries that you are using, whether you are using a cloud PaaS service etc. You could just build a Docker image to hold all the AI parts and call it passing the input data.

+ +
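
As one possible (illustrative, not prescriptive) way of packaging the model, it can be wrapped behind a small HTTP service that the production system calls. In the sketch below, the model file name and the choice of scikit-learn with joblib and Flask are assumptions made only for the example:

+ +
# serve_model.py -- minimal sketch of exposing a trained model over HTTP.
+# Assumes a scikit-learn model saved earlier with joblib.dump(model, 'model.joblib').
+import joblib
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+model = joblib.load('model.joblib')  # load once at start-up, not per request
+
+@app.route('/predict', methods=['POST'])
+def predict():
+    features = request.get_json()['features']      # e.g. {'features': [[...], [...]]}
+    predictions = model.predict(features).tolist()
+    return jsonify({'predictions': predictions})
+
+if __name__ == '__main__':
+    app.run(host='0.0.0.0', port=8080)
+
+ +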

Deploying ML models to production as a job is often a task for Machine Learning Engineer roles. There are courses and articles covering the practical aspects of these steps, if you search for them. Here are a couple:

+ + + +

I have no affiliation with any of the above, have not read articles or taken the courses, and am not able to make any recommendation, even if you told me the technologies you were using for ML and in production currently.

+ +

After deploying to production . . .

+ +

Your work is not necessarily done. You will want to monitor the behaviour of the system. During integration, you should have added logging, or some other way to get feedback on performance in the wild.

+ +

For instance, if this is an ML system, does the accuracy seen in testing hold up against real life? Is the target population drifting? If the system makes interventions - by e.g. suggesting links, or showing categories to end users - how well are those working in practice? Is the performance and responsiveness of the system fast enough when the service is under load?

+ +

If it is a more reactive AI system that takes actions itself, you will similarly want to monitor what it is supposed to optimise, or sample its output for errors and quality control.

+ +

All this feedback can go back into a new iteration of design and integration. How to incorporate that will depend a lot on the nature of the system and what you discover, so could be the subject of further questions.

+",1847,,1847,,5/15/2019 7:06,5/15/2019 7:06,,,,0,,,,CC BY-SA 4.0 +12310,1,,,5/14/2019 20:50,,1,39,"

I am looking at LSTM example here

+ +

However, I am not sure how to modify the setup if I have forecast available (assuming perfect forecast) for TEMP: Temperature and PRES: Pressure at time t.

+ +

i.e. pollution_t = fn(TEMP_t, PRES_t, TEMP_t_1, PRES_t_1, ..., othervariables and lagged values)

+ +

_t represents the perfect forecast available

+ +

_t_1 represents the variable values at previous time steps

+ +

Basically I am looking for something of the following setup:

+ +

+",25634,,25634,,5/14/2019 21:19,5/15/2019 15:40,Modifying LSTM to include forecast,,0,0,,1/3/2022 9:39,,CC BY-SA 4.0 +12311,1,,,5/14/2019 22:06,,2,1139,"

I've got a Lego Mindstorms EV3 with EV3DEV and EV3-Python installed. I wanted to try out artificial intelligence with the robot, and my first project is going to be to get the robot to try and recognize some images (using convolutions) and do an action related to the image it has seen. However, I can't find a way to use Tensorflow (or any AI module for that matter) on an EV3. Does anyone know how to incorporate Tensorflow or any other modules into the EV3? Help would be gladly appreciated.

+",25639,,2444,,5/14/2019 22:09,7/23/2020 14:22,How do I integrate Tensorflow with EV3DEV?,,2,0,,10/29/2021 17:04,,CC BY-SA 4.0 +12312,2,,12311,5/15/2019 0:25,,1,,"

There is a YouTube video LEGO EV3 Raspberry Pi Tensorflow Sorting Machine by ebswift that should help you although you will need a Raspberry Pi. From the abstract:

+ +

This is a sorting machine based on the EV3 45544 education kit sorting machine. The colour sorting camera is substituted with a Raspberry Pi with the v2 camera. The EV3 is controlled over wifi via RPyC and the object recognition work is done via Tensorflow.

+ +

A viewer asked: Can you share links about how to train a model on the PC and move the trained model to the Pi?

+ +

ebswifft replied: +Glad you like it :). Training the model on the PC is following the guide for Tensorflow for Poets https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/. All you do is install Tensorflow on the Raspberry Pi, clone the github repository onto it and copy the model you trained from the codelabs article onto the Raspberry Pi to run the classifier as per step 5 of the codelabs article. Onboard image capture on the Raspberry Pi is just done using picamera. I used a tensorflow version that a user compiled for the Raspberry Pi from here: https://github.com/samjabrahams/tensorflow-on-raspberry-pi/issues/92. I might do up more of a general step-by-step sometime on my site, I'll report back if I can get that up and running.

+ +

There is also BrickClassifi3r:

+ +

This Lego Mindstorms EV3 robot uses a neural network to recognize a cube, a cylinder or a small cube put on a conveyor belt. See the video how it works. Each object on the conveyor belt is scanned by an IR-sensor every 40ms for about a second. The resulting data are 24 distance values representing one of the three objects. This data is fed into the neural network on the robot to classify the object within 180ms. In a test with 300 objects it reaches 95,6% accuracy. The neural network has been trained before by a machine learning algorithm with TensorFlow on a PC using a set of 375 training examples - 125 examples for each object.

+ +

The ev3-myo project uses LEGO Mindstorms EV3, TRACK3R, a Myo armband and TensorFlow for gesture recognition.

+",5763,,5763,,5/15/2019 1:13,5/15/2019 1:13,,,,3,,,,CC BY-SA 4.0 +12313,1,12316,,5/15/2019 0:27,,10,2213,"

I am trying to understand why attention models are different than just using neural networks. Essentially the optimization of weights or using gates for protecting and controlling cell state (in recurrent networks), should eventually lead to the network focusing on certain parts of the input/source. So what is attention mechanism really adding to the network?

+ +

A potential answer in the case of Encoder-Decoder RNNs is:

+ +
+

The most important distinguishing feature of this approach from the + basic encoder–decoder is that it does not attempt to encode a whole + input sentence into a single fixed-length vector. Instead, it encodes + the input sentence into a sequence of vectors and chooses a subset of + these vectors adaptively while decoding the translation. This frees a + neural translation model from having to squash all the information of + a source sentence, regardless of its length, into a fixed-length + vector. We show this allows a model to cope better with long + sentences.
+ - Neural Machine Translation by Jointly Learning to Align and Translate

+
+ +

which made sense and the paper says that it worked better for NMT.

+ +

A previous study indicated that breaking down the sentence into phrases could lead to better results:

+ +
+

In this paper, we propose a way to address this issue by automatically + segmenting an input sentence into phrases that can be easily + translated by the neural network translation model. Once each segment + has been independently translated by the neural machine translation + model, the translated clauses are concatenated to form a final + translation. Empirical results show a significant improvement in + translation quality for long sentences.
+ - Overcoming the Curse of Sentence Length for Neural Machine + Translation using Automatic Segmentation

+
+ +

which paved the way for further research resulting in attention models.

+ +

I was also going through an article on Attention is not quite all you need where it said something similar:

+ +
+

An LSTM has to learn to sequentially retain past values together in a + single internal state across multiple RNN iterations, whereas + attention can recall past sequence values at any point in a single + forward pass.

+
+ +

and a more curated blog on the family of attention mechanism gives insight on how different ways have been formulated for implementing the concept: Attention? Attention!

+ +

Specifically, I want to know how attention mechanism is formulated for this task (aforementioned) or in general. A detailed mathematical insight would be helpful, probably somewhat on these lines: Understanding Attention in NN mathematically

+",4573,,,,,5/15/2019 3:44,A mathematical explanation of Attention Mechanism,,1,0,,,,CC BY-SA 4.0 +12315,1,,,5/15/2019 3:11,,3,250,"

Dataset Description

+

I am working on the famous ABIDE Autism datasets. The dataset is very big in the sense that it has more than 1000 subjects, about half of them autistic and the other half healthy controls. The dataset was collected at 17 sites across the world, and each site used a different time dimension when recording the subjects' fMRI.

+

My Question

+

I want to use this dataset for a classification task, but the only issue is that the time dimension varies across subjects while the feature dimension is fixed to 200, so the subjects have dimensions like 150 x 200, 75 x 200, 300 x 200, and so on. What advanced AI or deep learning techniques can I use to fix this time dimension for every subject, or can anybody suggest some deep learning framework or model that I could use to handle these varying time dimensions across subjects?

+

My Effort

+

Approach 1

+

I have applied PCA to the time dimension and fixed it to 50 (and tried other numbers also), but it did not produce good classification accuracy.

+

Approach 2

+

I also tried to use only specific time points from every subject (e.g. taking only 40 time points from every subject) to fix the dimension, but again it did not work, as filtering out some of the time series data from every subject would definitely lose crucial information.

+",25642,,2444,,12/31/2021 10:35,5/25/2023 13:04,How to fix time dimension in time varying data-sets using deep learning model for classification?,,2,0,,,,CC BY-SA 4.0 +12316,2,,12313,5/15/2019 3:44,,2,,"

There's plenty, but keep in mind that these articles do not describe the same approach. They simply have attention shifting automation as part of their approaches and therefore must detect a need for shift and execute it in a way that improves speed, accuracy, reliability or some combination of them.

+ +

There is no one dominant attention approach and probably will not be. In fact, the earliest attention mechanism in common use in machines was likely the electromechanical fire alarm. In digital systems, it would be a vacuum tube electric eye driving an intruder alert followed by the first hardware interrupts in transistor microprocessor boards.

+ +

The sophistication of hardware interrupts in contemporary computer systems is probably higher than attention mechanisms in neural nets as of this writing, but that may change. Currently the dictionary definition of attention is the only constraint we can place on these newer approaches in artificial networks.

+ + + +

It would be interesting to develop a taxonomy of attention approaches in AI, as that has probably not yet been done. It would take quite a study to see if any of the above bullet items match up with either of the two articles referenced in the question.

+",4302,,,,,5/15/2019 3:44,,,,0,,,,CC BY-SA 4.0 +12318,1,,,5/15/2019 9:32,,0,110,"

I set up a small drone simulator using PhysX; the time step is 200 Hz, while the motors update like regular ESCs (at 50 Hz). I computed the inertia matrix, tweaked the mass of the components to be realistic, added air drag, gravity, etc. After a first partial success in tuning a PID algorithm, I got bored of hunting for perfect values and started thinking of tuning it with a NN, but then I thought: why use PID at all if a NN can find better solutions?

+ +

I started playing with a NN, but then I realized I have no training data.

+ +

Sure I could do

+ +
NN.Train(input,expectedOutput);
+
+ +

but what is actually the expected output? To be useful it has to be the thrust force of propellers tuned to keep input (position and orientation) stable in the desired place.

+ +

But actually I don't (Want to) know in advance the thrust that every single propeller has.

+ +

Since it is a simulation it is ok spending some computing time between each simulation step (eventually I'll try to optimize it later to fit it in a microcontroller)..

+ +

So, is it possible given a regular NN implementation where I can select number of:

+ +
    +
  • Input neurons
  • +
  • Hidden neurons
  • +
  • Output neurons
  • +
+ +

find a way to train my model live? I need a way to tell the NN

+ +
+

hei this time you performed X keep going or go the opposite way.

+
+",1863,,,,,5/15/2019 16:26,"Drone training, how to train without training data?",,1,0,,,,CC BY-SA 4.0 +12319,2,,12318,5/15/2019 10:01,,3,,"

It's debatable whether neural networks can find a better solution than PID; if your goal is simply to keep the output around a certain reference point, PID should do a pretty much perfect job. If you really want to use ""intelligent"" control with a NN, you can look into reinforcement learning.
+A few interesting papers that directly address your problem:

+ +

Autonomous helicopter flight via reinforcement learning
+An Application of Reinforcement Learning to Aerobatic Helicopter Flight
+Autonomous Helicopter Control Using Reinforcement Learning Policy Search Methods

+ +

You can also find more in the references of these papers.

+",20339,,,,,5/15/2019 10:01,,,,1,,,,CC BY-SA 4.0 +12321,1,12346,,5/15/2019 15:15,,4,145,"

OK, due to a previous question, I was pointed to use reinforcement learning.

+ +

So far what I understood from random websites is the following:

+ +
    +
  • there is a Q(s,a) function involved
  • +
  • I can assume my neural network ~ Q(s,a)
  • +
  • my simulation has a state (N input variables)
  • +
  • my actor can perform M possible actions (M output variables)
  • +
  • at each step of the simulation my actor performs just the action corresponding to the max(outputs)
  • +
  • (in my case the actions are 1/2/3 % increase or decrease to propellers thrust force.)
  • +
+ +

From this website I found that at some point I have to:

+ +
    +
  • Estimate outputs Q[t] (or so called q-values)
  • +
  • Estimate outputs over next state Q[t+1]
  • +
  • Let the backpropagation algorithm perform error correction only on the action performed on next state.
  • +
+ +

The last 3 points are not clear at all to me; in fact, I don't have the next state yet, so what I do instead is:

+ +
    +
  • Estimate previous outputs Q[t-1]
  • +
  • Estimate current outputs Q[t]
  • +
  • Let backpropagation fix the error for max q value only
  • +
+ +

Actually, for the code I use just this library, which is simple enough to allow me to understand what happens inside:

+ +

NeuralNetwork library

+ +

Initializing the neural network (with N input neurons, N+M hidden neurons and M output neurons) is as simple as

+ +
Network network = new NeuralNetwork( N, N+M, M);
+
+ +

Then, as far as I understand, there is the need for an arbitrary reward function

+ +
public double R()
+{
+     double distance = (currentPosition - targetPosition).VectorMagnitude();
+     if(distance<100)
+         return 100-distance; // the nearest the greatest the reward
+     return -1; // too far
+}
+
+ +

then what I do is:

+ +
// init step
+var previousInputs = ReadInputs();
+UpdateInputs();
+var currentInputs = ReadInputs();
+
+//Estimate previous outputs Q[t-1]
+previousOutputs = network.Query( previousInputs );
+
+//Estimate current outputs Q[t]
+currentOutputs = network.Query( currentInputs);
+
+// compute modified max value
+int maxIndex = 0;
+double maxValue = double.MinValue;
+SelectMax( currentOutputs, out maxValue, out maxIndex);
+
+// apply the modified max value to PREVIOUS outputs
+previousOutputs[maxIndex] = R() + discountValue* currentOutputs[maxIndex];
+
+//Let backpropagation fix the error for max q value only
+network.Train( previousInputs, previousOutputs);
+
+// advance simulation by 1 step and see what happens 
+RunPhysicsSimulationStep(1/200.0);
+DrawEverything();
+
+ +

But it doesn't seem to work very well. I let the simulation run for over an hour without success. Probably I'm interpreting the algorithm in the wrong way.

+",1863,,1863,,5/15/2019 16:54,5/16/2019 11:03,"Q-learning, am I interpreting correctly $Q(s,a) = r + \gamma \max_{a'} Q(s',a')$?",,1,0,,,,CC BY-SA 4.0 +12322,1,12342,,5/15/2019 15:22,,4,1199,"

I found Sentiment Analysis and Emotion Recognition listed as two different categories on paperswithcode.com. Shouldn't they be the same, as per my understanding? If not, what's the difference?

+",25658,,2444,,5/15/2019 22:20,5/2/2020 21:16,What is the difference between Sentiment Analysis and Emotion Recognition?,,2,0,,,,CC BY-SA 4.0 +12324,1,15913,,5/15/2019 15:42,,4,813,"

What is the point of having multiple LSTM units in a single layer?

+ +

Surely if we have a single unit it should be able to capture (remember) all the data anyway and using more units in the same layer would just make the other units learn exactly the same historical features?

+ +

I've even shown myself empirically that using multiple LSTM units in a single layer improves performance, but in my head it still doesn't make sense, because I don't see what it is that some units are learning that others aren't. Is this sort of similar to how we use multiple filters in a single CNN layer?

+",25659,,2444,,5/15/2019 22:17,10/15/2019 2:43,Why do we need multiple LSTM units in a layer?,,1,2,,,,CC BY-SA 4.0 +12328,5,,,5/15/2019 18:32,,0,,,1671,,1671,,5/15/2019 18:32,5/15/2019 18:32,,,,0,,,,CC BY-SA 4.0 +12329,4,,,5/15/2019 18:32,,0,,"For questions related to the ""black box"" nature of certain kinds of machine learning, where the internal decision making process is unknown.",1671,,1671,,5/15/2019 18:32,5/15/2019 18:32,,,,0,,,,CC BY-SA 4.0 +12332,5,,,5/15/2019 21:11,,0,,,1671,,1671,,5/15/2019 21:11,5/15/2019 21:11,,,,0,,,,CC BY-SA 4.0 +12333,4,,,5/15/2019 21:11,,0,,"Use for questions about game AI for games that utilize randomness. (This can include imperfect information games.) Distinct from non-chance, perfect information games.",1671,,1671,,5/15/2019 21:11,5/15/2019 21:11,,,,0,,,,CC BY-SA 4.0 +12335,1,12336,,5/16/2019 0:37,,0,728,"

I'm using 10-fold cross validation on all models. Here you can see both plots:

+ +

+ +

+ +

Since I am using k-fold cross validation, is it okay to name it ""validation error vs training error"", or would ""test error vs training error"" be better?

+",25405,,2444,,5/16/2019 1:03,5/16/2019 1:03,"Should I call the error ""validation error"" or ""test error"" during cross validation?",,1,0,,,,CC BY-SA 4.0 +12336,2,,12335,5/16/2019 0:50,,1,,"

The expression ""validation error vs training error"" is likely more appropriate because the data you use during cross validation that is not the training data is often considered the validation data.

+ +

The test data is the data you use to test your model after having performed e.g. cross-validation. The test dataset should be independent of both the training and validation datasets.

+ +

This is just a convention, and some people use the terms ""validation data"" and ""test data"" interchangeably, because, in a way, during cross-validation you are also ""testing"" (or, to disambiguate, ""validating"") the model.

+ +

See also this answer: https://stats.stackexchange.com/a/401702/82135.

+",2444,,,,,5/16/2019 0:50,,,,0,,,,CC BY-SA 4.0 +12337,5,,,5/16/2019 1:01,,0,,,2444,,2444,,5/11/2021 10:52,5/11/2021 10:52,,,,0,,,,CC BY-SA 4.0 +12338,4,,,5/16/2019 1:01,,0,,For questions related to the cross-validation techniques (e.g. k-fold cross-validation or leave-one-out cross-validation) used in machine learning to assess the quality (e.g. average accuracy) of the models.,2444,,2444,,5/11/2021 10:52,5/11/2021 10:52,,,,0,,,,CC BY-SA 4.0 +12340,1,12362,,5/16/2019 5:06,,1,210,"

Below you have the plots of the training and validation errors for two different models. Both plots show the RMSE values for the validation dataset versus the number of training epochs. It is observed that models get lower RMSE value as training progresses.

+ +

The model associated with the first plot is performing quite well. The gap is quite narrow.

+ +

+ +

I think the model associated with this second plot is doing pretty good, but not as well as the other. The gap is much broader.

+ +

+ +

The model of the first plot was trained using a dataset containing 1 million ratings, while the second one used only 100K. I'm implementing the collaborative filtering (CF) algorithm. I am optimising it using SGD.

+ +

+ +

Are any of these models overfitting or underfitting?

+",25405,,25405,,5/17/2019 2:03,12/19/2020 23:40,Which model is better given their training and validation errors?,,2,4,,,,CC BY-SA 4.0 +12341,1,,,5/16/2019 6:16,,3,77,"

I have a set of 15 unique playing cards from a deck of 52 playing cards. A given state is represented by the respective card values in the set of 15 cards, where the card value is a prime number associated with that card. For example, AH is represented by 3.

+ +

How should I represent a single state for the NN? Should it be a list of the 15 prime numbers representing the list of cards? I was hoping that I could represent a single state as the sum of each of all 15 prime numbers and then throw that sum through a sigmoid function. My concern, however, is that the NN will lose information if I reduce the dimension of the state to a single attribute (even if that attribute is unique to that state - the sum of n prime numbers is unique compared to the sum of any other n prime numbers).

+ +

How important is the dimensionality of each state for Deep Q Learning? I'd really appreciate even some general direction.

+",25671,,2444,,5/16/2019 13:51,5/16/2019 13:51,How do I represent a multi-dimensional state using a neural network?,,1,5,,,,CC BY-SA 4.0 +12342,2,,12322,5/16/2019 8:01,,1,,"

Sentiment in this context refers to evaluations, typically positive/negative/neutral. Sentiment Analysis can be applied to product reviews, to identify if the reviewer liked the product or not. This has (in principle) got nothing to do with emotions as such.

+ +

Emotion recognition would typically work on conversational data (eg from conversations with chatbots), and it would attempt to recognise the emotional state of the user -- angry/happy/sad...

+ +

Of course the same can overlap: if the user is happy, they will typically express positive sentiments on something.

+ +

Also: emotion recognition goes beyond text (eg facial expressions), whereas sentiment analysis mostly works with textual data only.

+",2193,,2193,,5/2/2020 21:16,5/2/2020 21:16,,,,0,,,,CC BY-SA 4.0 +12343,1,12389,,5/16/2019 8:51,,6,1172,"

I have a dataset as follows

+ +

+ +

(and the table extends to include an extra 146 columns for companies 4-149)

+ +

Is there an algorithm I could use effectively to find similar patterns in sales from the other companies when compared to my company?

+ +

I thought of using k-means clustering, but as I'm dealing with 150 companies here it would likely become a bit of a mess, and I don't think linear regression would work here.

+",25675,,2444,,5/19/2019 14:49,9/18/2019 11:35,Is there a machine learning algorithm to find similar sales patterns?,,2,1,,,,CC BY-SA 4.0 +12344,2,,12341,5/16/2019 9:27,,2,,"

In my humble opinion, it seems important to keep the cards separated, instead of only using the sum, if having a certain card can influence the result in some way beyond its prime value. But it depends on the game and its rules. For example:

+ +
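
For example, a sketch of keeping the cards separated, using a 52-dimensional multi-hot vector with one entry per card in the deck instead of a single sum, could look like this (the card indices are invented for the example):

+ +
import numpy as np
+
+N_CARDS = 52  # one slot per card in a standard deck
+
+def encode_hand(card_indices):
+    # card_indices: indices 0..51 of the (up to) 15 cards currently held.
+    state = np.zeros(N_CARDS, dtype=np.float32)
+    state[list(card_indices)] = 1.0  # 1 where the card is in the hand, 0 elsewhere
+    return state
+
+# Example: a made-up hand of 15 card indices.
+hand = [0, 3, 7, 12, 13, 17, 21, 25, 26, 30, 34, 38, 42, 46, 50]
+print(int(encode_hand(hand).sum()))  # 15 -- no information about which cards is lost
+
+ +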

If having 5 cards of hearts in the set of 15 cards makes you win the game, then if you only represent the state as the sum of the value of the cards, the DQN will never learn that it was not the whole 15 cards that caused you to win, but only a specific part of the set.

+",24054,,,,,5/16/2019 9:27,,,,5,,,,CC BY-SA 4.0 +12345,1,12369,,5/16/2019 9:32,,2,504,"

In LeNet 5's first layer, the number of feature maps is equal to the number of kernels. However, the second convolutional layer has a depth different from the 3rd layer. Does the filter size dictate the number of feature maps?

+",25676,,2444,,5/17/2019 13:28,5/17/2019 13:40,Is the number of feature maps equal to the number of kernels in the LeNet 5 architecture?,,2,2,,,,CC BY-SA 4.0 +12346,2,,12321,5/16/2019 10:09,,2,,"

What normally happens in DQN is the following:

+ +

First, the NN is used to estimate an approximation of Q for each state-action pair by using a vector of weights: $$Q(s,a,w) \simeq Q(s,a)$$

+ +

There are two options: You can either feed it a certain state, and get as output the value of each possible action, or, you can input a state-action pair and get as output the value of that pair. (basically the NN gives you the policy)

+ +

After having this output you can now use your environment to simulate the next state. So basically you apply the action given by the NN (maybe using $\epsilon -greedy$), and you get the next state, and your next reward from your environment by observing it.

+ +

Where does the Q function come into all of this? The Q-function update $$Q(s,a) \leftarrow Q(s,a) + \alpha [r(s,a) + \gamma \max_{a'} Q(s',a') - Q(s,a)]$$ that in standard Q-learning is used to update the values after each simulation step is now used as the target of the neural network, to perform the backpropagation.

+ +
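
Putting the previous steps together, a minimal illustrative sketch of how this TD target is built and used as the network's training target could look like this, assuming a Keras-style model that maps a batch of states to one Q-value per action:

+ +
import numpy as np
+
+GAMMA = 0.99  # discount factor, an assumption for the sketch
+
+def q_targets(model, states, actions, rewards, next_states, dones):
+    # Build training targets for a batch of transitions (s, a, r, s', done).
+    q_current = model.predict(states)    # shape: (batch_size, n_actions)
+    q_next = model.predict(next_states)  # ideally from a separate, fixed target network
+    targets = q_current.copy()
+    for i, (a, r, done) in enumerate(zip(actions, rewards, dones)):
+        # TD target: r + gamma * max_a' Q(s', a'), or just r at terminal states.
+        targets[i, a] = r if done else r + GAMMA * np.max(q_next[i])
+    return targets
+
+# model.fit(states, q_targets(...)) then performs the backpropagation step.
+
+ +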

There are some considerations you have to make to help this converge, mainly experience replay and fixed Q-targets.

+ +

So when you say ""I don't have yet the next state"", it is because you need to simulate the action that is selected to get this next state.

+ +

EDIT: The main thing you might be missing is that the right part of the equation you present in the title is called TD-Target, and it is not used to make the updates, you might see it as the final result you want to achieve $Q^*(s,a)$ (the optimal values, after convergence). But to make the update you need to use the weighted function with the previous value of $Q(s,a)$ as I showed above, using a learning rate $0< \alpha < 1$

+",24054,,24054,,5/16/2019 11:03,5/16/2019 11:03,,,,0,,,,CC BY-SA 4.0 +12348,1,,,5/16/2019 11:34,,1,131,"

At the moment I am working on a vehicle counting & classification project. For a specific part in the project I need to get back only the completely visible vehicles from my input data (images). I am wondering if this could be done (more) automatically in the following way:

+ +
    +
  1. zoom in such that only approximately one van would be visible
  2. +
  3. divide the vehicles into two categories: truncated and non-truncated
  4. +
  5. train on these two classes
  6. +
  7. After training and testing, use the model to find the completely visible vehicles.
  8. +
+ +

So the main question is, is it possible that this would give sufficient results or should I try to find another solution?

+",23473,,,,,5/25/2023 22:04,Divide classes into truncated and non-truncated objects,,2,0,,,,CC BY-SA 4.0 +12349,2,,12343,5/16/2019 13:33,,2,,"

I would recommend a hierarchical cluster algorithm, after normalising your numbers into proportions. Then the clustering should be able to identify similar patterns. Depending on the level at which you make the cut, you can decide how many clusters you want.

+ +
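
For illustration, a minimal SciPy sketch of this approach, with a made-up sales matrix of shape companies x months, could look like this:

+ +
import numpy as np
+from scipy.cluster.hierarchy import linkage, fcluster
+
+# Made-up sales data: 150 companies x 12 months (replace with the real table).
+rng = np.random.default_rng(0)
+sales = rng.uniform(10, 100, size=(150, 12))
+
+# Normalise each company's sales into proportions so that only the pattern matters.
+proportions = sales / sales.sum(axis=1, keepdims=True)
+
+# Agglomerative (hierarchical) clustering with Ward linkage.
+Z = linkage(proportions, method='ward')
+
+# Cut the tree into, say, 5 clusters; companies in the same cluster have similar patterns.
+labels = fcluster(Z, t=5, criterion='maxclust')
+print(labels[:10])
+
+ +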

A great resource on this topic is Kaufman, L., & Roussew, P. J. (1990). ""Finding Groups in Data - An Introduction to Cluster Analysis"". John Wiley & Sons

+",2193,,,,,5/16/2019 13:33,,,,0,,,,CC BY-SA 4.0 +12350,2,,12348,5/16/2019 13:33,,0,,"

It could work. I think it'll be hard to find labelled bounding box data describing full/truncated cars, so this is a good idea for self-labelling.

+ +

There are probably also some other ways for you to get labelled data without the crowd-sourcing or self-labelling. Take a large labelled car dataset with bounding boxes (COCO has some, and if you're good with aerial imagery there's a lot more out there), and just enforce that any car bounding boxes touching the image boundary are truncated, while ones fully contained in the image are not, since with high probability that will be the case.

+ +

Good luck!

+",25496,,,,,5/16/2019 13:33,,,,2,,,,CC BY-SA 4.0 +12353,2,,12348,5/16/2019 14:54,,0,,"

Your approach should work well; you just have to test different neural network architectures to see which is the most accurate. I suggest you focus on CNN architectures and you'll be good to go.

+ +

If you don't want to spend much time labelling your data, you can label only non-truncated vehicles. Everything else is ""not a non-truncated vehicle"". This is even better because some images may contain persons, roads, animals... and your model will get confused if it is only trained to know truncated vehicles and non-truncated vehicles. The other model will instead know what non-truncated vehicles look like and consider everything else as ""not a non-truncated vehicle"". And this is exactly what you need in your use case.

+",23350,,,,,5/16/2019 14:54,,,,2,,,,CC BY-SA 4.0 +12354,2,,10289,5/16/2019 14:57,,4,,"

What is a statistical model?

+

According to Anthony C. Davison (in the book Statistical Models), a statistical model is a probability distribution constructed to enable inferences to be drawn or decisions made from data. The probability distribution represents the variability of the data.

+

Are neural networks statistical models?

+

Do neural networks construct or represent a probability distribution that enables inferences to be drawn or decisions made from data?

+

MLP for binary classification

+

For example, a multi-layer perceptron (MLP) trained to solve a binary classification task can be thought of as model of the probability distribution $\mathbb{P}(y \mid x; \theta)$. In fact, there are many examples of MLPs with a softmax or sigmoid function as the activation function of the output layer in order to produce a probability or a probability vector. However, it's important to note that, although many neural networks produce a probability or a probability vector, a probability distribution is not the same thing. A probability alone does not describe a full probability distribution and different distributions are defined by different parameters (e.g. a Bernoulli is defined by $p$, while a Gaussian by $\mu$ and $\sigma$). However, for example, if you make your neural network produce a probability, i.e. model $\mathbb{P}(y = 1 \mid x; \theta)$, at least in the case of binary classification, you could obviously derive the probability of the other label as follows: $\mathbb{P}(y = 0 \mid x; \theta) = 1 - \mathbb{P}(y = 1 \mid x; \theta)$. In any case, in this example, you only need the parameter $p = \mathbb{P}(y = 1 \mid x; \theta)$ to define the associated Bernoulli distribution.
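
For instance, a minimal sketch of such an MLP (in Keras, with arbitrary layer sizes and random inputs) whose output can be read as the Bernoulli parameter $p = \mathbb{P}(y = 1 \mid x; \theta)$ could look like this:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(1, activation='sigmoid'),   # outputs p = P(y = 1 | x)
])
model.compile(optimizer='adam', loss='binary_crossentropy')

x = np.random.rand(8, 4)
p = model.predict(x)           # one probability per input
print(np.hstack([p, 1 - p]))   # P(y = 1 | x) and P(y = 0 | x) for each input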

+

So, these neural networks (for binary classification) that model and learn some probability distribution given the data in order to make inferences or predictions could be considered statistical models. However, note that, once the weights of the neural network are fixed, given the same input, they always produce the same output.

+

Generative models

+

Variational auto-encoders (VAEs) construct a probability distribution (e.g. a Gaussian or $\mathbb{P}(x)$ that represents the probability distribution over images, if you want to generate images), so VAEs can be considered statistical models.

+

Bayesian neural networks

+

There are also Bayesian neural networks, which are neural networks that maintain a probability distribution (usually, a Gaussian) for each unit (or neuron) of the neural network, rather than only a point estimate. BNNs can thus also be considered statistical models.

+

Perceptron

+

The perceptron may be considered a "statistical model", in the sense that it learns from data, but it doesn't produce any probability vector or distribution, i.e. it is not a probabilistic model/classifier.

+

Conclusion

+

So, whether or not a neural network is a statistical model depends on your definition of a statistical model and which machine learning models you would consider neural networks. If you are interested in more formal definitions of a statistical model, take a look at this paper.

+

Parametric vs non-parametric

+

Statistical models are often also divided into parametric and non-parametric models. Neural networks are often classified as non-parametric because they make fewer assumptions than e.g. linear regression models (which are parametric) and are typically more generally applicable, but I will not dwell on this topic.

+",2444,,2444,,6/4/2022 14:15,6/4/2022 14:15,,,,2,,,,CC BY-SA 4.0 +12355,2,,12172,5/16/2019 15:26,,1,,"

Recursive artificial networks are a type aligned well with GDL, and it is an increasingly popular one. Using non-orthogonal geometries is natural when using directed graphs as semantic association, data flow, or composition networks.

+ +

This more free-form approach enters into industry slowly because most programming languages use orthogonal structures designed around multidimensional arrays and nested loops to iterate through them, made popular in the FORTRAN era. It is common that a student learns to loop through an array before learning to call a function or test and branch. Computer science is, in some ways, entrenched in the orthogonal structure.

+ +

To call the two categories Euclidean and non-Euclidean is inaccurate. The initial trend in machine learning followed a Cartesian paradigm, one locked into 90° angles in data structures. The free-form graph is non-Cartesian but still just as Euclidean, as Euclid frequently worked with angles other than 90°.

+ +

Apple and Google are likely candidates for this seepage of recursive networks into industry data centers, for NLP applications. Language associations are free-form even though these structures are serialized for speech and written text.

+ +

Cognition is also non-cartesian. Graph based optimization (pre-GDL) has been part of cognitive research since the 1970s, especially in the LISP space at MIT and CMU and the U.S. Naval Research facilities for strategic analysis, Optimal strategies for a class of constrained sequential problems by JB Kadane, HA Simon, The Annals of Statistics, 1977. Algorithm 2 is a convergence strategy, but without layers of cells, yet the searching concepts of gradient are already present in the maximal and minimal operations in the search in steps c and f.

+ +

The aeronautics and automotive industries are likely candidates for seepage into embedded computing for automated driving and flight. Most of this is either company confidential or classified.

+ +

This answer to the question about topological sophistication provides some background why non-euclidean space may be a more natural way to map associations, flow, and composition.

+ +

These are some practical applications.

+ + + +

We've used recursive networks to align complex models with data acquired during laboratory experimentation and hope to publish on that approach as part of another publication. It is effective, since relationships between features in the real world form squares and cubes only by chance, which is not very common. Research into materials and energy cannot be confined to Cartesian conceptions, even if the published graphs are drawn on Cartesian coordinate axes so that others with traditional analytic-geometry backgrounds can understand them quickly.

+",4302,,4302,,5/16/2019 15:39,5/16/2019 15:39,,,,0,,,,CC BY-SA 4.0 +12356,2,,12263,5/16/2019 18:41,,2,,"

The specific approaches you mentioned (A3C, DDPG), and usually also other Actor-Critic methods in general, are approaches for the standard single-agent Reinforcement Learning (RL) setting. When trying to apply such algorithms to settings that are actually multi-agent settings, it is indeed common to encounter the problems you describe. They can all be viewed as your agent ""overfitting"" to the opponent they're training against:

+ +
+
    +
  1. Play against random player. I had rather good results, but not as good as expected since most interesting states cannot be reached with a random opponent.
  2. +
+
+ +

In this case, your agent will ""overfit"" against random opponents. It will likely become capable of beating them relatively easily, and to some extent this may also still translate to relatively weak non-random agents (like simple heuristic approaches), but it is unlikely to generalise to strong opponents.

+ +
+
    +
  1. Play against list of specific AIs. Results were excellent against those AIs, but very bad with never seen opponents
  2. +
+
+ +

Here you pretty much exactly give a description of what I mean when I say ""overfitting"" against the opponent you're training against.

+ +
+
    +
  1. Play against itself. Seemed the best to me, but I could not get any convergence due to non-stationary environment.
  2. +
+
+ +

When training using self-play like this, there is still a danger of overfitting against ""itself"". This may turn out to work well in some games, but in other games there is indeed a risk of instability / lack of convergence. This can, for example, be due to the agent ""rotating"" through a circle of strategies that beat each other. Intuitively, you could think of Rock-Paper-Scissors. A randomly-initialised agent may, for example, have a slight tendency to play Rock more often than the others. Self-play will then lead to a strategy that primarily plays Paper, to beat the current ""Rock"" strategy. Continued self-play training can then figure out that a tendency to play more Scissors will be strong against itself. Etc.; these strategies can keep rotating forever.

+ +
+ +

The standard approach for self-play training in games is the one popularised by AlphaGo Zero and AlphaZero, and also around the same time independently published with results on the game of Hex. That last paper coined the term ""Expert Iteration"" for the approach, which I like to use. The basic idea of Expert Iteration is to have:

+ +
    +
  1. A policy that we're training (parameterised by some function approximator like a Deep Neural Net)
  2. +
  3. A tree search algorithm, like Monte Carlo tree search (MCTS).
  4. +
+ +

These two components can then iteratively improve each other. The trained policy can be used to guide the search behaviour of MCTS. MCTS can be used to generate a new distribution over actions which, thanks to additional search effort, is typically expected to be a little bit better than the trained policy. That slightly better distribution is then used as a training target for the policy, using a Cross-Entropy loss. This means that the policy is trained to mimic the MCTS search behaviour, and also used to improve that same MCTS search behaviour.
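
As a small illustration of that training target (pure NumPy, with made-up visit counts and policy outputs; this is only a sketch, not the exact formulation of any particular paper):

import numpy as np

visit_counts = np.array([120., 30., 45., 5.])        # MCTS visit counts per action
pi_target = visit_counts / visit_counts.sum()        # improved distribution from search

policy = np.array([0.4, 0.2, 0.3, 0.1])              # current network policy for this state
cross_entropy = -np.sum(pi_target * np.log(policy))  # loss pushing the policy towards pi_target
print(cross_entropy)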

+ +

This training procedure has been found to lead to strong results. One of the prevailing hypotheses is that the additional search performed by MCTS can help to stabilise training, and avoid overfitting to the ""self"" opponent; the search algorithm can actually ""reason"" about what a strong opponent (not necessarily the ""self"" agent) could do.

+ +
+ +

The Expert Iteration training procedure I described above often works well, but it doesn't really answer your question about training Actor-Critic approaches in games with opponents... because it's not really an Actor-Critic algorithm!

+ +

In Learning Policies from Self-Play with Policy Gradients and MCTS Value Estimates, we (I'm first author on this paper) propose an approach where the policy in self-play is trained using a training target based on the value estimates of MCTS, rather than the visit counts of MCTS (which is what we use in the standard Expert Iteration framework).

+ +

I don't know if it necessarily should exactly be called an Actor-Critic algorithm, but you could intuitively view it as an ""Actor-Critic"" approach where MCTS is the critic. As mentioned right before Section IV, the resulting gradient estimator also turns out to be very similar to that of the ""Mean Actor Critic"".

+ +
+ +

If Expert-Iteration-style solutions are not an option (for example because tree search is infeasible), I would suggest taking a look at some of the following Policy-Gradient-based approaches (and possibly later research that cites these papers):

+ + +",1641,,,,,5/16/2019 18:41,,,,1,,,,CC BY-SA 4.0 +12357,1,,,5/16/2019 18:50,,1,577,"

So I'm currently implementing my first neural network, using GRUs as the model and Keras as the implementation since it's pretty high-level. My problem is the classification of 8-hour-long time series with 11 different events at 1-second timesteps, or, to be more specific, sleep recordings.
Since the GRU can't handle the whole time series at once, I split it into pieces of 500 timepoints with 50 seconds (10%) overlap. Also, since I don't have much data, after the training/test split I'm oversampling the training data by duplicating the underrepresented classes up to 10 times. So, in the end, I get a set of ca. 14,000 training snippets and ca. 1,800 to test the model. So much for the data.

+ +

Next thing is the model, represented with the Keras Code:

+ +
verbose, epochs, batch_size = 2, 50, 1200  # 1200 batch_size was the maximum the GPU can handle
+    n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]
+
+    model = Sequential()
+    model.add(GRU(128, input_shape=(n_timesteps, n_features), return_sequences=True, dropout=0.7,
+                  kernel_regularizer=regularizers.l2(0.01), activation=""relu""))
+    model.add(
+        GRU(64, return_sequences=True, go_backwards=False, dropout=0.7, kernel_regularizer=regularizers.l2(0.01),
+            activation=""relu""))
+    model.add(
+        GRU(32, return_sequences=True, go_backwards=False, dropout=0.7, kernel_regularizer=regularizers.l2(0.01),
+            activation=""relu""))
+    model.add(
+        GRU(16, return_sequences=True, go_backwards=False, dropout=0.7, kernel_regularizer=regularizers.l2(0.01),
+            activation=""relu""))
+    model.add(TimeDistributed(Dense(units=16, activation='relu')))
+    model.add(Flatten())
+    model.add(Dense(n_outputs, activation='softmax'))
+
+    # define custom optimizer with configurable learning rate
+    sgd = optimizers.SGD(lr=0.1, momentum=0.9, decay=0.0, nesterov=False)
+    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
+
+ +

So as a short summary:
+I am using 4 GRU layers with a ""pyramid"" shape, so the network has to be more specific towards the end. Also one fully connected layer as some kind of ""adapter"" and one output layer with the size of my features. I am using SGD as an optimizer.

+ +

The results are always pretty bad; here are the loss and accuracy plots of an example run featuring 25 epochs:

+ +

Despite the huge dropout at each stage, it still seems to be overfitting. Also, as you can see, the accuracy of test and train is not changing after the 2nd epoch. The result is this sad-looking confusion matrix, showing only one class being detected by the network, which is not even the one with the highest amount of data in the dataset:

+ +
[[231   0   0   0   0   0   0   0]
+ [ 55   0   0   0   0   0   0   0]
+ [647   0   0   0   0   0   0   0]
+ [141   0   0   0   0   0   0   0]
+ [444   0   0   0   0   0   0   0]
+ [  0   0   0   0   0   0   0   0]
+ [118   0   0   0   0   0   0   0]
+ [ 74   0   0   0   0   0   0   0]]
+
+ +

So what are possible approaches in order to:
a) fight the overfitting,
b) fix the problem of only one class being detected, and
c) get the stagnating accuracy growing again?

+ +

Since I'm pretty new to neural networks, I am thankful for your time and effort!

+",25687,,25687,,5/16/2019 19:11,6/15/2019 20:03,Train and Test Accuracy of GRU network not increasing after 2nd epoch,,1,0,,,,CC BY-SA 4.0 +12358,2,,12357,5/16/2019 19:02,,1,,"

A couple of recommendations:
1) I don't think you're overfitting: your test loss never increases and stays reasonably proportional to the train loss. This may indicate that whatever loss you're using is not a good indicator of the metric of interest (in this case it seems you want that to be accuracy, but the data is imbalanced, so maybe look at average precision?).
2) Use a learning-rate scheduler, or something like a ReduceLROnPlateau callback, to reduce the learning rate once the loss converges.
3) Adding more copies of the underrepresented classes is valid, but I recommend just using class weighting (see the sketch below). In expectation it is effectively the same, and it will save you training time.
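
A minimal sketch of the class-weighting idea (using scikit-learn to compute the weights and passing them to Keras; trainX, trainy and model refer to the variables from your snippet, so treat this as a template rather than something runnable as-is):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_labels = trainy.argmax(axis=1)     # integer labels from the one-hot targets
classes = np.unique(y_labels)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_labels)
class_weight = dict(zip(classes, weights))

model.fit(trainX, trainy, epochs=50, batch_size=1200, class_weight=class_weight)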

+ +

Good Luck!

+",25496,,,,,5/16/2019 19:02,,,,3,,,,CC BY-SA 4.0 +12360,1,12424,,5/16/2019 19:30,,1,1117,"

Understandably RNNs are very good at solving problems involving audio, video and text processing due to the arbitrary input's length of this sort of data.

+

What I don't understand is why RNNs are also superior at predicting time series data and why we use them over simple MLP DNNs.

+

Say I wanted to predict what the value in the time series is at $t+1$. I would take a window of, let's say, $t-50, t-49, \dots, t$, and then feed loads of sampled training data into a network. I could either choose to have a single LSTM unit remembering the entire window and basing the predictions on that, or I could simply make a 50 neuron wide MLP network.

+

What exactly is it about RNNs that makes them better in this scenario or any scenario for that matter?

+

I understand that the LSTM would have substantially fewer weights (in this scenario) and should be less computationally intensive, but, apart from that, I don't see any difference in these two methods.

+",25659,,2444,,10/2/2020 19:19,10/7/2022 20:56,Why are RNNs better than MLPs at predicting time series data?,,3,0,,,,CC BY-SA 4.0 +12361,2,,12360,5/16/2019 19:47,,0,,"

RNNs have the ability to hold a state. That means the model can learn which information it wants to keep and which to discard, based on ordering and on how the creation and passing of the state is designed (it is probably worth looking into what an LSTM is), while this would be a lot more difficult for a sliding MLP to do (you can think of a sliding MLP as a stateless RNN). Also, a sliding MLP requires a lot more computation, since it needs to recompute an entire context window for each new output, while an RNN can just reuse the previous state and only process a single new input or a smaller context window.
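
A minimal sketch of that recurrence (plain NumPy, arbitrary sizes; it just illustrates how the state is reused instead of reprocessing a whole window):

import numpy as np

d_in, d_h = 3, 5
W_x = np.random.randn(d_h, d_in) * 0.1
W_h = np.random.randn(d_h, d_h) * 0.1
h = np.zeros(d_h)                       # the state carried between time steps

for t in range(50):
    x_t = np.random.randn(d_in)         # new input at time t
    h = np.tanh(W_x @ x_t + W_h @ h)    # only the new input and the old state are processed
print(h)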

+ +

Hope that helps!

+",25496,,,,,5/16/2019 19:47,,,,0,,,,CC BY-SA 4.0 +12362,2,,12340,5/17/2019 0:46,,1,,"

I would say that your intuition is correct: the model associated with the first plot is likely to generalise more than the one associated with the second plot.

+ +

In both cases, it doesn't seem that your model has overfitted the training data. Overfitting often occurs when the training error keeps decreasing but the validation error starts to increase. In both your plots, both the training and validation errors keep decreasing (even if slowly, after a while).

+ +

Underfitting occurs when your model hasn't learned enough even about your training data. The smaller the training and validation error, the more likely your model has not underfitted, but the value of RMSE depends on the range of your inputs. See e.g. What are good RMSE values? for more info.

+ +

See also this article Overfitting and Underfitting With Machine Learning Algorithms for a general overview of the concepts of overfitting and underfitting.

+",2444,,2444,,5/17/2019 1:00,5/17/2019 1:00,,,,0,,,,CC BY-SA 4.0 +12365,1,,,5/17/2019 12:25,,1,262,"

Right now, I am trying to synthesize training images for a CNN and due to the nature of the application, there is a finite number of sample images to learn from.

+ +

From other research, I expect to be using about 200,000 training images at a resolution of 1280*720, which, with 3 channels at 8 bits each, will take about 550 GB to store uncompressed. This number can and probably will rise in the future, meaning more storage that I will need to provide.
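
For reference, a quick back-of-the-envelope check of that figure (assuming 1 byte per channel and decimal gigabytes):

n_images, width, height, channels = 200_000, 1280, 720, 3
total_bytes = n_images * width * height * channels
print(total_bytes / 1e9, 'GB')   # ~553 GB uncompressed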

+ +

I imagine that there are applications that require even more training data with higher complexity, and that there are solutions for handling that, such as compression techniques and the like.

+ +

My question: Are there solutions for managing this storage, beyond compressing the images with JPEG and the like, or generating and instantly consuming the pictures without saving them to permanent storage?

+",25702,,2444,,6/30/2019 16:02,3/23/2021 10:03,How to manage large amounts of image data for training?,,2,1,,,,CC BY-SA 4.0 +12366,1,12367,,5/17/2019 12:26,,2,292,"

Assume I have an input of size $32 \times 32 \times 3$ and pass it to a convolution layer. Now, if my kernel size were to be $5 \times 5 \times 3$ and the depth of my convolution layer were to be 1, only one feature map would be produced for the image. Here, each neuron would have $5 \times 5 \times 3 = 75$ weights (+1 bias).

+

If I wanted to calculate multiple feature maps in this layer, say 3, is each local section (in this example, $5 \times 5 \times 3$) of the image looked on by three different neurons and each of their weights trained individually? And what would be the output volume of this layer?

+",25704,,2444,,10/11/2021 22:43,10/11/2021 22:43,"If I wanted to calculate multiple feature maps in a convolutional layer, should the filters be trained individually?",,1,0,,,,CC BY-SA 4.0 +12367,2,,12366,5/17/2019 12:48,,2,,"

Each feature map (or kernel) is independent of the others. If you had $3$ of these filters, your output shape would be $(28, 28, 3)$ (given the appropriate amount of padding and stride), with a total of $75 \times 3=225$ trainable weights (plus $3$ biases), and the weights of each filter are trained individually by backpropagation.

+",25496,,2444,,5/17/2019 13:48,5/17/2019 13:48,,,,0,,,,CC BY-SA 4.0 +12368,2,,12345,5/17/2019 12:56,,0,,"

The number of kernels will be the number of output channels. Looking at the image you posted in your comment on the post, I do not understand where you see the inconsistency.

+",25496,,,,,5/17/2019 12:56,,,,0,,,,CC BY-SA 4.0 +12369,2,,12345,5/17/2019 13:18,,0,,"

+ +

In this case, each kernel has the same depth as the depth of the input cuboid. In the architecture above, we have an (input) cuboid of dimension $14 \times 14 \times 6$, where $6$ is the depth, which is followed by an (output) cuboid of dimension $10 \times 10 \times 16$, where $16$ is the depth, which is the number of (output) feature maps (or channels) and it is also equal to the number of kernels applied to the (input) cuboid of dimensions $14 \times 14 \times 6$. Each of these $16$ kernels has a depth of $6$. So, each of these $16$ kernels has a shape of $5\times 5 \times 6$.
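
A small sketch that checks these shapes in Keras (the framework choice and variable names are just illustrative):

import numpy as np
from tensorflow import keras

inp = keras.Input(shape=(14, 14, 6))                  # input cuboid of depth 6
out = keras.layers.Conv2D(filters=16, kernel_size=5)(inp)
model = keras.Model(inp, out)

print(out.shape)                                      # (None, 10, 10, 16)
print(model.layers[1].get_weights()[0].shape)         # (5, 5, 6, 16): 16 kernels of shape 5x5x6
print(model.predict(np.random.rand(1, 14, 14, 6)).shape)   # (1, 10, 10, 16)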

+ +

In general, the depth of the input cuboid can be different than the depth of the output cuboid, which is often equal to the number of kernels applied to the input cuboid (but this is not always the case). Furthermore, the depth of each kernel (applied to the input cuboid) has (often) the same depth as the input cuboid.

+",2444,,2444,,5/17/2019 13:40,5/17/2019 13:40,,,,1,,,,CC BY-SA 4.0 +12370,1,,,5/17/2019 14:53,,3,836,"

I am currently working with a small dataset of 20x300. Since I have so few data points, I was wondering if I could use an approach similar to leave-one-out cross-validation but for testing.

+

Here's what I was thinking:

+
    +
  1. train/test split the data, with only one data point in the test set.

    +
  2. +
  3. train the model on training data, potentially with grid-search/cross-validation

    +
  4. +
  5. use the best model from step 2 to make a prediction on the one data point and save the prediction in an array

    +
  6. +
  7. repeat the previous steps until all the data points have been in the test set

    +
  8. +
  9. calculate your preferred metric of choice (accuracy, f1-score, auc, etc) using these predictions

    +
  10. +
+

The pros of this approach would be to:

+
    +
  • You don't have to split the data into train/test so you can train +with more data points.
  • +
+

The cons would be:

+
    +
  • This approach suffers from potential(?) data leakage.

    +
  • +
  • You are calculating an accuracy metric from a bunch of predictions that potentially came from different models, due to the grid searches, so I'm not sure how accurate it is going to be.

    +
  • +
+

I have tried the standard approaches of train/test splitting, but since I need to take out at least 5 points for testing, then I don't have enough points for training and the ROC AUC becomes very bad.

+

I would really appreciate some feedback about whether this approach is actually feasible or not and why.

+",25708,,2444,,5/11/2021 10:47,5/11/2021 10:47,Should I use leave-one-out cross-validation for testing?,,1,1,,,,CC BY-SA 4.0 +12371,2,,12365,5/17/2019 15:30,,1,,"

I suggest you fine-tune an existing model. Knowledge-transfer models for many image processing tasks are now open-sourced, and you can build your model on top of them. Also, knowledge-transfer models are trained on large datasets and can quickly converge to your case study with a little task-specific extra training.

+ +
+

This way you will use few data to tune the model which leads to less + memory use and less training time. You will also take advantage from a + ready-to-use architecture and get accurate results.

+
+ +

Depending on your case-study you can choose from this list of computer vision pre-trained models.
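
As a minimal sketch of this kind of fine-tuning (Keras, with MobileNetV2 as an arbitrary example backbone; the head sizes and number of classes are assumptions, not part of the list linked above):

from tensorflow import keras

base = keras.applications.MobileNetV2(include_top=False, weights='imagenet',
                                      input_shape=(224, 224, 3), pooling='avg')
base.trainable = False                    # freeze the pre-trained backbone

model = keras.Sequential([
    base,
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),   # your task-specific classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5)   # train only the new head on your data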

+",23350,,,,,5/17/2019 15:30,,,,0,,,,CC BY-SA 4.0 +12372,5,,,5/17/2019 16:39,,0,,"

See e.g. https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/ for more info regarding overfitting and underfitting ML models.

+",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12373,4,,,5/17/2019 16:39,,0,,"For questions related to the concept of underfitting in machine learning, which occurs when a machine learning model is not able to learn.",2444,,2444,,3/20/2020 22:50,3/20/2020 22:50,,,,0,,,,CC BY-SA 4.0 +12374,5,,,5/17/2019 16:42,,0,,"

See e.g. https://en.wikipedia.org/wiki/Time_series or https://machinelearningmastery.com/time-series-forecasting-supervised-learning/.

+",2444,,2444,,5/17/2019 20:26,5/17/2019 20:26,,,,0,,,,CC BY-SA 4.0 +12375,4,,,5/17/2019 16:42,,0,,"For questions related to time series analysis or forecasting in the context of AI and, in particular, ML.",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12376,5,,,5/17/2019 16:43,,0,,"

See e.g. https://en.wikipedia.org/wiki/Semi-supervised_learning.

+",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12377,4,,,5/17/2019 16:43,,0,,"For questions related to the machine learning technique called semi-supervised learning, which is a combination of supervised and unsupervised learning.",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12378,5,,,5/17/2019 16:44,,0,,,-1,,-1,,5/17/2019 16:44,5/17/2019 16:44,,,,0,,,,CC BY-SA 4.0 +12379,4,,,5/17/2019 16:44,,0,,For questions related to conditional probability e.g. in the context of Bayesian inference or networks.,2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12380,5,,,5/17/2019 16:47,,0,,"

See e.g. https://en.wikipedia.org/wiki/Ensemble_learning for more info.

+",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12381,4,,,5/17/2019 16:47,,0,,"For questions related to the random forest (or random decision forests), which is an ensemble machine learning technique (that is, an ML technique that uses or combines different models).",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12382,5,,,5/17/2019 16:48,,0,,"

See e.g. https://en.wikipedia.org/wiki/Recommender_system.

+",2444,,2444,,5/17/2019 20:26,5/17/2019 20:26,,,,0,,,,CC BY-SA 4.0 +12383,4,,,5/17/2019 16:48,,0,,For questions related to recommender systems in the context of machine learning and data mining.,2444,,2444,,5/17/2019 20:26,5/17/2019 20:26,,,,0,,,,CC BY-SA 4.0 +12384,5,,,5/17/2019 16:50,,0,,"

See e.g. https://en.wikipedia.org/wiki/Naive_Bayes_classifier for more info.

+",2444,,2444,,5/17/2019 20:26,5/17/2019 20:26,,,,0,,,,CC BY-SA 4.0 +12385,4,,,5/17/2019 16:50,,0,,"For questions related to the naive Bayes, which is a machine learning (or statistics) technique that is based on the Bayes' theorem.",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12386,5,,,5/17/2019 16:56,,0,,,-1,,-1,,5/17/2019 16:56,5/17/2019 16:56,,,,0,,,,CC BY-SA 4.0 +12387,4,,,5/17/2019 16:56,,0,,"For questions related to the liquid state machine model, which is a spiking neural network, which is a neural network that more closely mimics a biological neural network (with respect to traditional neural networks, like multi-layer perceptrons).",2444,,2444,,5/17/2019 20:27,5/17/2019 20:27,,,,0,,,,CC BY-SA 4.0 +12388,1,,,5/17/2019 17:05,,5,152,"

I'm trying to teach a humanoid agent how to stand up after falling. The episode starts with the agent lying on the floor with its back touching the ground, and its goal is to stand up in the shortest amount of time.

+ +

But I'm having trouble with reward shaping. I've tried multiple different reward functions, but they all end up the same way: the agent quickly learns to sit (i.e. lift its torso), but then gets stuck in this local optimum forever.

+ +

Any ideas or advice on how to best design a good reward function for this scenario?

+ +

A few reward functions I've tried so far:

+ +
    +
  • current_height / goal_height
  • +
  • current_height / goal_height - 1
  • +
  • current_height / goal_height - reward_prev_timestep
  • +
  • (current_height / goal_height)^N (tried multiple different values of N)
  • +
  • ...
  • +
+",25712,,2444,,12/19/2021 18:27,5/16/2023 12:10,How define a reward function for a humanoid agent whose goal is to stand up from the ground?,,1,1,,,,CC BY-SA 4.0 +12389,2,,12343,5/17/2019 17:20,,3,,"

If I understand correctly you want to find companies with similar patterns to yours.

+ +

I would start with measuring cosine similarity between your company and others.

+ +

It is really easy with Python, for example:

+ +
In [21]: from sklearn.metrics.pairwise import cosine_similarity
+
+In [22]: cosine_similarity([[1,4,2,6], [1,9,5,4]])
+Out[22]:
+array([[1.        , 0.84794633],
+       [0.84794633, 1.        ]])
+
+ +

Note that if the magnitude of the sales matters to you, this is not the right approach, as cosine similarity is magnitude-invariant:

+ +
In [23]: from sklearn.metrics.pairwise import cosine_similarity
+
+In [24]: cosine_similarity([[1,4,2,6], [10,90,50,40]])
+Out[24]:
+array([[1.        , 0.84794633],
+       [0.84794633, 1.        ]])
+
+",25248,,25248,,9/18/2019 11:35,9/18/2019 11:35,,,,0,,,,CC BY-SA 4.0 +12390,1,,,5/17/2019 18:24,,4,763,"

I am new to the field of AI. I am working on creating Flappy Bird using a genetic algorithm. After reading and seeing some examples, I saw that most implementations use a neural network + genetic algorithm, and after a certain number of generations you achieve a very good agent that survives very long.

+ +

I am currently struggling to implement the neural network, since I have never taken a machine learning course. In many examples that I have read, neural networks require training inputs and outputs. For Flappy Bird, I can't think of an output, since you don't really know whether the action of flapping will benefit you or not.

+ +

In the example that I followed, Synaptic.js is used and it is pretty straight-forward. However, in Python, I can't find a simple library that will initialize randomly and adjust the weights and biases depending on the good agents that survive longer.

+ +

What would be the right way to implement this Neural Network without having a training dataset?

+ +

Is there any way to create Flappy Bird without using Neural Networks, just Genetic Algorithm?

+ +

The example in Javascript that I am referring to: Flappy Bird Using Machine Learning

+",25714,,,,,4/7/2020 9:03,How to implement a neural network for Flappy Bird in Python?,,3,0,,2/22/2022 16:40,,CC BY-SA 4.0 +12392,5,,,5/18/2019 0:01,,0,,,-1,,-1,,5/18/2019 0:01,5/18/2019 0:01,,,,0,,,,CC BY-SA 4.0 +12393,4,,,5/18/2019 0:01,,0,,For questions related to regression (both linear and non-linear) in the context of machine learning and AI.,2444,,2444,,5/19/2019 17:05,5/19/2019 17:05,,,,0,,,,CC BY-SA 4.0 +12396,1,,,5/18/2019 8:57,,3,826,"

Problem

+

I've been reading research papers on how to solve a peg solitaire using graph search, but all the papers kind of assume you know how to do the reduction(polynomial time conversion) from the peg solitaire to the graph, which I do not, but this is how I assumed it was done.

+

For those of you unfamiliar, here is a video that illustrates how to play this game, and here's an image of a board.

+

+

The goal is to only have one peg on the board and you get rid of pegs by jump one peg over another. A peg can only jump if the it's jumping into an empty space as shown in the picture above.

+

What I've tried

+

I had the idea of converting the problem to a tree where each node represents the state after an action is taken and each edge represents the action taken. So the root node would the initial state which is the board shown above then it's children would be the state of the board after any of the possible legal actions that can be taken.

+

So, for example:

+

+

Then the children of each of those nodes would be the possible moves from them, and you can find a solution once you've reached depth 31 in the tree, because there are 32 pegs and you win the game when there's only 1 left.

+

Is this the right approach? It feels a little too abstract because I'd have to represent the edges as peg moves, but that's weird cause they're usually numbers or constraint.

+",25721,,2444,,6/13/2022 18:07,6/13/2022 18:07,How to solve peg solitaire with a graph search?,,1,0,,,,CC BY-SA 4.0 +12397,1,17692,,5/18/2019 9:41,,3,1484,"

I've been doing some reading about GANs, and although I've seen several excellent examples of implementations, the descriptions of why those patterns were chosen isn't clear to me in many cases.

+ +

At a very high level, the purpose of the discriminator in a GAN is establish a loss function that can be used to train the generator.

+ +

ie. Given the random input to the generator, the discriminator should be able to return a probability of the result being a 'real' image.

+ +

If the discriminator is perfect the probability will always be zero, and the loss function will have no gradient.

+ +

Therefore you iterate:

+ +
    +
  • generate random samples
  • +
  • generate output from the generator
  • +
  • evaluate the output using the discriminator
  • +
  • train the generator
  • +
  • update the discriminator to be more accurate by training it on samples from the real distribution and output from the generator.
  • +
+ +

The problem, and what I don't understand, is point 5 in the above.

+ +

Why do you use the output of the generator?

+ +

I absolutely understand that you need to iterate on the accuracy of the discriminator.

+ +

To start with, it needs to respond with a non-zero value for the effectively random output from the generator, and slowly it needs to converge towards correctly classifying images as 'real' or 'fake'.

+ +

In order to achieve this we iterate, training the discriminator with images from the real distribution, pushing it towards accepting 'real' images.

+ +

...and with the images from the generator; but I don't understand why.

+ +

Effectively, you have a set of real images (eg. 5000 pictures of faces), that represent a sample from the latent space you want the GAN to converge on (eg. all pictures of faces).

+ +

So the argument goes:

+ +

As the generator is trained iteratively closer and closer to generating images from the latent space, the discriminator is iteratively trained to recognise from the latent space, as though it had a much larger sample size than the 5000 (or whatever) sample images you started with.

+ +

...ok, but that's daft.

+ +

The whole point of DNN's is that given a sample you can train it to recognise images from the latent space the samples represent.

+ +

I've never seen a DNN where the first step was 'augment your samples with extra procedurally generated fake samples'; the only reason to do this would be if you can only recognise samples in the input set, ie. your network is over-fitted.

+ +

So, as a specific example, why can't you incrementally train the discriminator on samples of ('real' * epoch/iterations + 'noise' * 1 - epoch/iterations), where 'noise' is just a random input vector.

+ +

Your discriminator will then necessarily converge towards recognising real images, as well as offering a meaningful gradient to the generator.

+ +

What benefit does feeding the output of the generator in offer over this?

+",25722,,25722,,5/19/2019 4:26,1/27/2020 9:46,Why use the output of the generator to train the discriminator in a GAN?,,3,1,,,,CC BY-SA 4.0 +12399,2,,12370,5/18/2019 10:34,,2,,"

Concerning $k$-fold Cross Validation, I like to think of it by considering two extremes you can do: Leave-One-Out Cross-Validation where you leave one sample each time and train your model on the remaining $n-1$, and 2-fold Cross Validation at which you split your dataset in half and train (and validate) two models on two different halves.

+ +

The important aspect when choosing $k$ is the bias-variance tradeoff. Note that in LOOCV you train each model using almost as many samples as are available ($n-1$), so the validation step should give you an unbiased estimate of the real test error. However, each model in LOOCV is trained on almost exactly the same dataset! This has important consequences, since the outputs of the models are highly correlated with each other. Since the mean of highly correlated variables has large variance, LOOCV will suffer from huge variance.

+ +

On the other hand, in 2-fold CV the models share no common samples, so they are not correlated and therefore their outputs have low variance. But since we train each model using only a half of available samples, the procedure will have a high bias (the estimated test error will be off from the true test error).

+ +

What to do in this scenario? Choose something in the middle. Usually $k=5$ and $k=10$ should be a good choice.
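
A minimal sketch of 5-fold cross-validation with scikit-learn (the classifier and the synthetic data are arbitrary placeholders):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())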

+",22835,,,,,5/18/2019 10:34,,,,1,,,,CC BY-SA 4.0 +12400,2,,12396,5/18/2019 14:46,,2,,"

Your approach seems reasonable to me. The edges do not necessarily have to be numbers, but, if you wish, you could also encode the actions as numbers. For example, the weight of an edge could represent the ""cost"" of the corresponding action. If there's no natural cost associated with an action, then you can add a unit cost for each action.

+",2444,,2444,,5/18/2019 15:59,5/18/2019 15:59,,,,0,,,,CC BY-SA 4.0 +12401,2,,12397,5/18/2019 23:01,,2,,"

The main reason that the discriminator is trained concurrently with the generator is to provide (at least in theory) a smooth and gradual learning signal for the generator.

+ +

If we trained the discriminator on only the input data, then, assuming our training algorithm converges well, it should quickly converge to a fixed model. The generator can then learn to fool this fixed model, but it will likely still be easy to spot the generator's fakes for a human. For example, the discriminator may have learned that, coincidentally, in the sample you provided, all images of trucks have a fully white pixel in the top left corner. If it learns that pattern, the generator can fool it by generating noise with a white pixel. Once the generator has learned this pattern, all learning stops.

+ +

In contrast, suppose that the discriminator is repeatedly re-trained on a mixture of real and generated examples. The discriminator will be forced to learn more complex patterns than ""white pixel in upper left"", which improves its quality beyond the raw patterns in the sample data.
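
As a tiny sketch of what such a mixed batch looks like for one discriminator update (NumPy only, with random arrays standing in for real images and generator output; the discriminator call is left as a comment since it depends on your framework):

import numpy as np

real_batch = np.random.rand(32, 64, 64, 3)       # stand-in for a batch of real images
fake_batch = np.random.rand(32, 64, 64, 3)       # stand-in for generator output

X = np.concatenate([real_batch, fake_batch], axis=0)
y = np.concatenate([np.ones(32), np.zeros(32)])  # 1 = real, 0 = fake
# discriminator.train_on_batch(X, y)             # one update step on the mixture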

+ +

The converse relationship is also true. If the generator is trained only on the training data, it will also likely pick out only the most obvious patterns. These patterns are likely to create many local minima in the weight space for the network. However, if the error signal from the discriminator is fed to the generator, then the generator must adapt: in effect, we are telling it ""making the top left pixel white is not good enough to fool observers. Find more complex patterns"".

+",16909,,,,,5/18/2019 23:01,,,,3,,,,CC BY-SA 4.0 +12402,1,12406,,5/18/2019 23:07,,8,664,"

For discrete action spaces, what is the purpose of the actor in actor-critic algorithms?

+ +

My current understanding is that the critic estimates the future reward given an action, so why not just take the action that maximizes the estimated return?

+ +

My initial guess at the answer is the exploration-exploitation problem, but are there other, more important/deeper reasons? Or am I underestimating the importance of exploration vs. exploitation as an issue?

+ +

It just seems to me that if you can accurately estimate the value function, then you have solved the RL challenge.

+",25732,,2444,,5/3/2020 19:03,5/3/2020 19:03,What is the purpose of the actor in actor-critic algorithms?,,1,0,,,,CC BY-SA 4.0 +12404,1,,,5/19/2019 4:27,,3,123,"

Update on 2019-05-19:

+ +

My question is about teaching AI to solve the problem, not letting AI teach a human developer to solve a problem.

+ +
+ +

Original post:

+ +

I'm a software developer but very new to AI.

+ +

Today my friend and I chatted about the development of AI. One topic was about implementing the capability of ""given a problem, analyzing the problem and designing a solution"".

+ +

Since we are both software developers, we used a simple example problem in our discussion to see how AI might possibly find a solution:

+ +
Print the following three lines on the console:
+
+*
+***
+*****
+
+ +

My friend and I thought we may use some formal method to describe WHAT we want but NOT how we implement it. It's the AI's job to figure out the solution.

+ +

Then we came to the question I'm asking here: Since my friend and I are both outsiders of AI research, we don't know if there is any existing research (we believe such research must have existed somewhere) that teaches AI to analyze the problem (which is formally defined) and design a solution using the given tools.

+ +

For us human beings, our analysis of the problem and designing might look like the following:

+ +
    +
  • Let me choose a programming language. For example, C.
  • +
  • Let me see what tools I have in the chosen programming language. Oh, here they are: + +
      +
    • putchar(ch) which prints a single character on the console.
    • +
    • printf(str) which prints a string on the console.
    • +
    • for-loop; if-else; support of subroutines; etc.
    • +
  • +
  • I see the result has three lines of characters: line 1, 2, and 3.
  • +
  • I see the numbers of '*' in the three lines are an arithmetic progression and there is a connection of line number and character number: given the line number i, the character number is 2*i-1, where i is 1, 2, and 3. This is repetition and I can use a for-loop.
  • +
  • Each line is the repetition of '*' so I may implement a function to do this.
  • +
+ + + +
void print_line(int N) {
+  for (int i = 0; i < N; i++) {
+    putchar('*');
+  }
+  putchar('\n');
+}
+
+int main(int argc, char * argv[]) {
+  for (int i = 1; i <=3; i++) {
+    print_line(2 * i - 1);
+  }
+  return 0;
+}
+
+ +

Alternatively, I may design a naive solution of using printf() three times and hard-code each string:

+ + + +
printf(""*\n"");
+printf(""***\n"");
+printf(""*****\n"");
+
+ +

We think an AI that can do this may follow a similar analyzing and designing approach as a human developer does. In general, we think:

+ +
    +
  • This AI should have a toolbox using which it can solve some problems (possibly not all problems). In my example above, this toolbox may be a programming language and its corresponding library.
  • +
  • This AI should have the knowledge about some concepts (such as console and string in the example above) and their relationships.
  • +
  • This AI should have the knowledge that connects the toolbox and the concepts, so the AI knows how a tool can manipulate the properties of a concept.
  • +
  • Most importantly, this AI should have the capability of figuring out one or more paths that connect the input to the desired output, using the toolbox. This process, we think, needs the capability of ""analysis"" and ""design"".
  • +
+ +

Excuse us if the description is still vague. My friend and I are both new to AI so, in fact, we don't even know if ""analysis"" and ""design"" are the proper words to use. We will be glad to clarify if needed.

+ +

BTW, we did some quick search about such AI:

+ +
    +
  • Bayou by Rice University doesn't look like understanding the problem, either.
  • +
  • DeepCoder uses Deep Learning and I doubt whether it understands the problem, either.
  • +
  • The AI-Programmer uses genetic algorithms to generate the desired string in BrainFuck. But this AI doesn't look like understanding the problem. It looks like a trial-and-error with feedback.
  • +
+",4817,,4817,,3/16/2020 22:53,4/16/2020 1:01,"Is there research about teaching AI to ""analyze the problem and design a solution""?",,0,4,,,,CC BY-SA 4.0 +12406,2,,12402,5/19/2019 8:13,,4,,"
+

For discrete action spaces, what is the purpose of the actor in Actor-Critic algorithms?

+
+ +

In brief, it is the policy function $\pi(a|s)$. The critic (a state action function $v_{\pi}(s)$) is not used to derive a policy, and in ""vanilla"" Actor-Critic cannot be used in this way at all unless you have the full distribution model of the MDP.

+ +
+

It just seems to me that if you can accurately estimate the value function, then you have solved the RL challenge.

+
+ +

Often this can be the case, and this is how e.g. Q-learning works, where the value function is more precisely the action value function, $q(s,a)$.

+ +

Continuous or very large action spaces can cause a problem here, in that maximising over them is impractical. If you have a problem like that to solve, it is often an indicator that you should use a policy gradient method such as Actor-Critic. However, you have explicitly asked about ""discrete action spaces"" here.

+ +

The main issue with your statement ""if you can accurately estimate the value function"" is that you have implicitly assumed the the learning is complete. You are looking at the final output of the algorithm, after it has converged, and asking why not use just half of its output. Whilst choosing between RL algorithms is more often related to how they learn, how efficiently they learn, and how they behave during the learning process.

+ +
+

My current understanding is that the critic estimates the future reward given an action, so why not just take the action that maximizes the estimated return?

+
+ +

The critic is $v_{\pi}(s)$ so does not take actions into account. You can of course learn $q_{\pi}(s,a)$ instead or as well - and some policy gradient methods also estimate the action value, such as Advantage Actor-Critic.

+ +

The main difference is in how data gathered from experience is applied to changing the policy. Recall at any step during reinforcement learning, that all estimates are based around a best guess at values of a current target policy. Policy gradient methods directly improve the policy, whilst value-based methods have an implied policy based on the same learned value functions. The $\text{argmax}_a q(s,a)$ approach used in Q-learning or SARSA gives a simple but crude mapping from values to policy, and means that the trajectory through policy space taken whilst learning is different when comparing policy gradient methods and value-based methods.

+ +

The following are notable differences affecting performance between policy-based methods and value-based methods:

+ +
    +
  • Policy-gradient methods tend to make small changes to policy on each step, the policy changes smoothly, adjusting probabilities between choice of different actions by small amounts. This in turn means that the targets for estimated value functions change slowly. In comparison, when a value based method finds a new maximising action, it can make a big change to the policy, making the updates required less smooth.

  • +
  • Policy-gradient methods can learn to balance stochastic policies, whilst value-based methods assume deterministic policies. So if you have a situation like paper, scissor stone game where the ideal policy is random choice between actions, you should use policy gradients. This is a bit niche, but is worth noting.

  • +
  • Function approximators (typically neural networks) are easier to train on simpler target functions. The policy function $\pi(a|s)$ and action value function $q(s,a)$ are quite different views of the same problem, and it can be the case that one has a simple form whilst the other is more complex. This can make a big practical difference on which algorithms learn fastest.

  • +
+ +

For smaller, discrete action spaces, there is not always a clear choice between policy gradient and value-based methods. You could use either and it may be worth doing an experiment to find the most efficient algorithm for certain classes of problem.

+",1847,,1847,,5/19/2019 15:28,5/19/2019 15:28,,,,0,,,,CC BY-SA 4.0 +12408,1,,,5/19/2019 13:28,,1,257,"

I asked a question related to tic-tac-toe playing in RL. From the answer, it seems to me a lot is dependent on the opponent (rightly so, if we write down the expectation equations).

+

My questions are (in the context of tic-tac-toe or chess):

+
    +
  • How to make the RL player a perfect/expert (tic-tac-toe/chess) player?

    +

    As far as TTT is concerned, when playing against a perfect player, an RL will become perfect, provided that the opponent is perfect.

    +

    So, will this hold true if the same RL algorithm, with its learned values, are used to play some other lesser perfect players?

    +
  • +
  • Can an RL player with pre-trained values (assume from a perfect or expert opponent) be used in any scenario with best results?

    +
  • +
+

Note: The problem is more severe in chess, since experts will use certain opening moves that will not match those of, say, a random player, and thus finding values for those states becomes a problem, since we have not encountered them during training.

+

Footnote: Any resources on Game Playing RL is appreciated.

+",,user9947,2444,,4/8/2022 9:58,4/8/2022 9:59,How to make the RL player a perfect/expert (tic-tac-toe/chess) player?,,1,0,,,,CC BY-SA 4.0 +12409,2,,12408,5/19/2019 14:37,,1,,"

I would say a good way to make a good agent would be making it play against itself. As you go through several episodes, with a good exploration and exploitation balance, both will gradually learn and converge to $q_*(s,a)$.

+
+

So, will this hold true if the same RL algorithm, with its learned values, are used to play some other lesser perfect players?

+
+

As long as the states that are played (or approximations if you are using function approximation methods) were simulated enough times during training, it will play well against any kind of opponent.

+

If you are training against a completely perfect opponent and you are not using function approximation, I believe you could get to an incomplete $Q(s,a)$ table, and so not be able to predict the best play when facing certain states.

+",24054,,2444,,4/8/2022 9:59,4/8/2022 9:59,,,,1,,,,CC BY-SA 4.0 +12411,1,,,5/19/2019 19:52,,4,246,"

I've been studying the branch and bound graph search algorithm, and I hear it always finds the optimal path because it uses previously found solutions to find others.

+ +

However, I haven't been able to find a proof of why it finds the optimal path. (In fact, most sites kind of do a bad job generalizing the algorithm itself.)

+ +

What is the proof that this algorithm always finds the optimal path in the case of a graph with 1 or more goal nodes?

+",25721,,2444,,5/26/2020 16:43,6/11/2023 0:03,What is the proof that the branch and bound algorithm always finds optimal path in a graph?,,2,1,,,,CC BY-SA 4.0 +12412,1,,,5/19/2019 20:10,,1,50,"

I appreciate that there are many ways to arrange the memory in a NN, and that the numerical representations may range from bytes to floats depending on the implementation. What is the typical amount of memory required on the ""application side"" for the better NN programs, such as Alpha Zero or an automated-driving AI? How much does it matter?

+",25756,,,,,5/19/2019 20:10,How much Physical memory does Alpha Zero's Neural Net require?,,0,0,,,,CC BY-SA 4.0 +12413,2,,12411,5/19/2019 21:13,,-1,,"

My attempt was proof by contradiction. We can assume B&B found a sub-optimal path, but that would create a contradiction, because the only way B&B would miss an optimal path is if it skipped it completely (this part I don't know how to prove) or the related part of the search space was pruned.

+",25721,,2444,,5/19/2019 22:20,5/19/2019 22:20,,,,0,,,,CC BY-SA 4.0 +12414,1,,,5/19/2019 21:15,,1,212,"

I have a .csv file called ratings.csv with the following structure:

+ +
userID, movieID, rating
+3,      12,      5
+2,      7,       6
+
+ +

The rating scale goes from 0 to 5 stars. I want to be able to plot the sparsity of the matrix like it's done in the following picture:

+ +

+ +

As you can see, the rating scale goes from 0 to 5 on the right. It is a very well-thought-out plot.

+ +

I have Matlab, Python, R etc. Could you come up with something and help me? I’ve tried hard but I cannot find the way to do it.

+",25405,,2444,,5/20/2019 14:03,5/20/2019 14:03,How do I plot a matrix of ratings?,,2,0,,,,CC BY-SA 4.0 +12415,1,,,5/19/2019 22:26,,2,52,"

Can the $\Phi$ measure (computed rigorously or approximately) of Integrated Information Theory serve as the reward for a self-evolving/learning reinforcement learning system, so that we let this system become/evolve into a conscious system?

+",8332,,2444,,5/25/2022 22:59,5/25/2022 22:59,Can $\Phi$ measure of Integrated Information Theory serve as reward for reinforcement learning system?,,0,1,,,,CC BY-SA 4.0 +12416,2,,12271,5/19/2019 22:28,,3,,"

Using your app, I was able to find a (spoiler alert!) solution manually. At least now you know your puzzle is solvable and you did not waste your money :)

+ +

It seems your app has a bug, though. I was unable to put the last piece, as shown in the picture. I was wondering if your solver, as it stands, will ever find a solution.

+ +

Now the idea. It may be useful for your solver.

+ +

The board has 8x8=64 squares. Each piece will occupy 5 squares and you want to fit 11 pieces, so the final position will have 9 empty squares. Divide the board in two 8x4 rectangles, left and right. Now, it seems only fair that one rectangle should have 5 empty squares leaving the other with 4; with that in mind I've proceeded to fill the right part first and only then the left. After some trial and error, I got lucky.

+ +

I don't know if you can reach all solutions with this method. Notice that, in the solution given above, the bottom rectangle will end up with 6 empty squares.

+ +

I don't know how to write an efficient solver either. For starters:

+ +
    +
  1. Build up a list of bad configurations: small sets of pieces/positions such that whenever you reach them, you know there is no hope.
  2. +
  3. At each step, put a piece in such way it minimizes the number of forced empty squares.
  4. +
  5. A variant of the idea above: divide the board in four 4x4 parts; it seems reasonable each part should have at least one empty square but no so many; so, look for solutions forcing these parts to have (1,2,3,3) or (2,2,2,3) empty squares.
  6. +
  7. Lookup whether this puzzle is known. Names that come to mind: Martin Gardner, Ian Stewart, Sam Lloyd.
  8. +
+ +

Can't say you will ever be able to see a list of all possible solutions.

+ +

Nice puzzle, nice app that one you wrote, I've had a good time. Thank you.

+",25759,,25759,,5/20/2019 0:18,5/20/2019 0:18,,,,2,,,,CC BY-SA 4.0 +12417,2,,12414,5/19/2019 22:46,,2,,"

You're looking for a heatmap. Check out e.g. https://stackoverflow.com/q/33282368/3924118 (if you like Python more than the others). See also this documentation.
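
A minimal sketch in Python/matplotlib (with a small random ratings matrix standing in for your data; the colour map and sizes are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

n_users, n_movies = 50, 80
R = np.zeros((n_users, n_movies))
for _ in range(400):                       # toy (user, movie, rating) triples
    u, m = np.random.randint(n_users), np.random.randint(n_movies)
    R[u, m] = np.random.randint(1, 6)

plt.imshow(R, aspect='auto', cmap='viridis', vmin=0, vmax=5)
plt.colorbar(label='rating')
plt.xlabel('movie'); plt.ylabel('user')
plt.show()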

+",2444,,2444,,5/20/2019 14:02,5/20/2019 14:02,,,,0,,,,CC BY-SA 4.0 +12418,1,12445,,5/20/2019 1:40,,4,256,"

I've been trying to learn backpropagation for CNNs. I read several articles, like this one and this one. They all say that, to compute the gradients for the filters, you just do a convolution with the input volume as the input and the error matrix as the kernel. After that, you just subtract the gradients (multiplied by the learning rate) from the filter weights. I implemented this process, but it's not working.

+ +

Here's a simple example that I tried:

+ +

Input volume (randomised)

+ +
 1 -1  0
+ 0  1  0
+ 0 -1  1
+
+ +

In this case, we want the filter to only pick up the top left 4 elements. So the target output will be:

+ +
 1  0(supposed to be -1, but ReLU is applied) 
+ 0  1
+
+ +

We know that the desired filter is:

+ +
 1  0
+ 0  0
+
+ +

But we pretend that we don't know this.

+ +

We first randomise a filter:

+ +
 1 -1
+ 1  1
+
+ +

The output right now is:

+ +
 3  0 
+-2  1
+
+ +

Apply ReLU:

+ +
 3  0
+ 0  1
+
+ +

Error (target - output):

+ +
-2  0
+ 0  0
+
+ +

Use error as kernel to compute gradients:

+ +
-2  2
+ 0 -2
+
+ +

Say the learning rate is 0.5, then the new filter is:

+ +
2 -2
+1  2
+
+ +

This is still wrong! It's not improving at all. If this process is repeated, it won't learn the desired filter. So I must have understood the math wrong. So what's the problem here?

+",25745,,25745,,5/21/2019 6:05,5/23/2019 1:05,How are filters weights updated for a CNN?,,1,0,0,,,CC BY-SA 4.0 +12419,2,,12315,5/20/2019 1:47,,0,,"

In deep learning, it is very common to use Recurrent Neural Networks (RNNs) to handle time-series data with varying input sequence lengths. Check out the RNN Wikipedia for more detail.

+",25732,,,,,5/20/2019 1:47,,,,3,,,,CC BY-SA 4.0 +12420,2,,12414,5/20/2019 3:34,,1,,"

I did it!

+ +
A = importdata('u.data');
+user_id = A(:, 1);
+movie_id = A(:, 2);
+rating = A(:, 3);
+
+% Build matrix R and w (weights matrix)
+R = zeros(943, 1682);
+w = zeros(943, 1682);
+for i=1:100000
+    R(user_id(i), movie_id(i)) = rating(i);
+    w(user_id(i), movie_id(i)) = 1;
+end
+
+
+hm = HeatMap(R);
+ax = hm.plot; % 'ax' will be a handle to a standard MATLAB axes.
+colorbar('Peer', ax); % Turn the colorbar on
+caxis(ax, [0 5]); % Adjust the color limits
+
+ +

Output:

+ +

+",25405,,,,,5/20/2019 3:34,,,,0,,,,CC BY-SA 4.0 +12421,1,12432,,5/20/2019 5:58,,5,1192,"

I am trying to solve for $\lambda$ using temporal-difference learning. More specifically, I am trying to figure out what $\lambda$ I need, such that $\text{TD}(\lambda)=\text{TD}(1)$, after one iteration. But I get the incorrect value of $\lambda$.

+ +

Here's my implementation.

+ +
from scipy.optimize import fsolve,leastsq
+import numpy as np
+
+
+
+class TD_lambda:
+        def __init__(self, probToState,valueEstimates,rewards):
+            self.probToState = probToState
+            self.valueEstimates = valueEstimates
+            self.rewards = rewards
+            self.td1 = self.get_vs0(1)
+
+        def get_vs0(self,lambda_):
+            probToState = self.probToState
+            valueEstimates = self.valueEstimates
+            rewards = self.rewards
+            vs = dict(zip(['vs0','vs1','vs2','vs3','vs4','vs5','vs6'],list(valueEstimates)))
+
+            vs5 = vs['vs5'] + 1*(rewards[6]+1*vs['vs6']-vs['vs5'])
+            vs4 = vs['vs4'] + 1*(rewards[5]+lambda_*rewards[6]+lambda_*vs['vs6']+(1-lambda_)*vs['vs5']-vs['vs4'])
+            vs3 = vs['vs3'] + 1*(rewards[4]+lambda_*rewards[5]+lambda_**2*rewards[6]+lambda_**2*vs['vs6']+lambda_*(1-lambda_)*vs['vs5']+(1-lambda_)*vs['vs4']-vs['vs3'])
+            vs1 = vs['vs1'] + 1*(rewards[2]+lambda_*rewards[4]+lambda_**2*rewards[5]+lambda_**3*rewards[6]+lambda_**3*vs['vs6']+lambda_**2*(1-lambda_)*vs['vs5']+lambda_*(1-lambda_)*vs['vs4']+\
+                                (1-lambda_)*vs['vs3']-vs['vs1'])
+            vs2 = vs['vs2'] + 1*(rewards[3]+lambda_*rewards[4]+lambda_**2*rewards[5]+lambda_**3*rewards[6]+lambda_**3*vs['vs6']+lambda_**2*(1-lambda_)*vs['vs5']+lambda_*(1-lambda_)*vs['vs4']+\
+                                (1-lambda_)*vs['vs3']-vs['vs2'])
+            vs0 = vs['vs0'] + probToState*(rewards[0]+lambda_*rewards[2]+lambda_**2*rewards[4]+lambda_**3*rewards[5]+lambda_**4*rewards[6]+lambda_**4*vs['vs6']+lambda_**3*(1-lambda_)*vs['vs5']+\
+                                        +lambda_**2*(1-lambda_)*vs['vs4']+lambda_*(1-lambda_)*vs['vs3']+(1-lambda_)*vs['vs1']-vs['vs0']) +\
+                    (1-probToState)*(rewards[1]+lambda_*rewards[3]+lambda_**2*rewards[4]+lambda_**3*rewards[5]+lambda_**4*rewards[6]+lambda_**4*vs['vs6']+lambda_**3*(1-lambda_)*vs['vs5']+\
+                                        +lambda_**2*(1-lambda_)*vs['vs4']+lambda_*(1-lambda_)*vs['vs3']+(1-lambda_)*vs['vs2']-vs['vs0'])
+            return vs0
+
+        def get_lambda(self,x0=np.linspace(0.1,1,10)):
+            return fsolve(lambda lambda_:self.get_vs0(lambda_)-self.td1, x0)
+
+ +

The expected output is: $0.20550275877409016$, but I am getting array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) +

+ +

I cannot understand what I am doing incorrectly.

+ +
TD = TD_lambda(probToState,valueEstimates,rewards)
+TD.get_lambda()
+# Output : array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
+
+ +

I am just using TD($\lambda$) for state 0 after one iteration. I am not required to see where it converges, so I don't update the value estimates.

+",25768,,2444,,6/4/2020 17:05,6/4/2020 17:05,Why am I getting the incorrect value of lambda?,,2,4,,,,CC BY-SA 4.0 +12423,1,,,5/20/2019 9:00,,1,67,"

I have a fairly large dataset consisting of different images with and without persons that I want to use for a project.

+ +

The problem is that I only want the pictures that contain faces, and it is best if there is only a crop of the face.

+ +

I already looked at Facenet and Openface, but I thought that there must be a simpler already trained solution just to sort the dataset so I can get started with my own project.

+",25770,,2444,,6/28/2019 11:02,3/6/2020 7:55,Are there tools to help clean a large dataset so that it only contains faces?,,3,2,,10/28/2021 16:38,,CC BY-SA 4.0 +12424,2,,12360,5/20/2019 9:53,,1,,"

In an RNN, the output of the previous state is passed as an input to the current state. Intuitively, there is a temporal (time-based) relationship in the way in which input is processed in an RNN. It can understand how the current state was achieved on the basis of the previous values, i.e value at time-step $t$ is a result of value at time-steps $t-1, t-2$, and so on.

+

In a DNN, there is no temporal relationship in the way input is processed. Values at time-steps $t, t-1, t-2, \dots $ are all treated distinctly and not as a continuation of the previous time-step values.

+",25704,,2444,,10/2/2020 19:22,10/2/2020 19:22,,,,0,,,,CC BY-SA 4.0 +12425,2,,7397,5/20/2019 10:35,,0,,"

Let me give an example of where epsilon-greedy comes unstuck. Imagine you have an environment with a very large branching factor, like Go. If you were using epsilon-greedy for your exploration, you may find that levels higher up the search tree are very well explored, because they're hit more regularly, so you'd want to select more greedily in those well-explored areas; but further down the tree, where actions have not been explored, you'd want to encourage a higher level of random exploration. Epsilon-greedy doesn't enable you to do that; it's one probability for all situations. So it's fine to use where the state-space is quite small, but not for large state-spaces.

+ +

Actor-critic methods, such as PPO, use the entropy (the measure of randomness) of the policy to form part of the loss function which inherently encourages exploration. To elaborate on this; early in training you'd expect high levels of entropy as all actions may have near-equal probability of being selected, but as the model explores the actions and garners rewards, it will gradually favour taking the actions which lead to higher rewards, and so the entropy will decrease as training progresses and it gradually starts to behave more greedily.

+ +

Because the entropy is added to the loss (or reward in case of RL, i.e. negative loss), the agent will get a higher reward if it were to select an action with low probability but ended up getting a high reward. That will have the effect of pushing up its probability of being re-selected again in future, while pushing down the previously more-favoured action probability.

+ +
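To make the idea concrete, here is a minimal NumPy sketch of an entropy-regularised policy-gradient objective (the logits, action, advantage and entropy weight beta are made-up values, not taken from any specific library):

import numpy as np

logits = np.array([2.0, 0.5, 0.1])             # unnormalised action preferences
probs = np.exp(logits) / np.exp(logits).sum()  # softmax policy
action, advantage, beta = 0, 1.5, 0.01         # sampled action, its advantage, entropy weight

entropy = -np.sum(probs * np.log(probs))       # high when the policy is close to uniform
policy_loss = -np.log(probs[action]) * advantage
loss = policy_loss - beta * entropy            # subtracting the entropy bonus encourages exploration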

This is good because it ensures there is always some element of curiosity: the agent will act greedily in areas of the state-space it has already explored well, but continue to explore areas of the state-space with which it is unfamiliar, which is why (in my opinion) it's a superior method to using epsilon-greedy.

+ +

That said, what I personally do is once training is converged, which I define as the agent isn't garnering higher rewards and there's very low entropy in the policy, then I add some small amount of epsilon-greedy just to force some level of exploration.

+",20352,,,,,5/20/2019 10:35,,,,0,,,,CC BY-SA 4.0 +12427,2,,12423,5/20/2019 10:42,,1,,"

I don't know of a dedicated tool, but you could write a simple script to detect faces and crop them. It's quite simple to detect faces with the Haar cascade in OpenCV and then use the built-in functions to crop your image to the bounding box of the detected face; a minimal sketch is given below.

+ +
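A rough sketch of such a script (the file names and detection parameters are just illustrative assumptions):

import cv2

# OpenCV ships with pretrained Haar cascades for frontal faces
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for i, (x, y, w, h) in enumerate(faces):
    crop = img[y:y + h, x:x + w]      # crop to the detected bounding box
    cv2.imwrite(f'face_{i}.jpg', crop)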

Hope that helps !

+",25704,,,,,5/20/2019 10:42,,,,0,,,,CC BY-SA 4.0 +12428,2,,12423,5/20/2019 10:57,,1,,"

I have found a script that does what I need: +https://github.com/leblancfg/autocrop

+",25770,,,,,5/20/2019 10:57,,,,0,,,,CC BY-SA 4.0 +12429,1,,,5/20/2019 11:10,,2,169,"

I'm studying convolutional neural networks from the following article https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/.

+ +

If we take a grayscale image, the value of the pixel will be between 0 and 255. Now, if we apply a filter to our ""new image"", can we have pixels whose values are not included in this range? In this case, how can we create the convolved image?

+",25772,,2444,,5/20/2019 14:05,11/5/2021 20:03,What are the value of the pixels of the convolved image?,,2,0,,,,CC BY-SA 4.0 +12430,2,,12429,5/20/2019 12:23,,1,,"

The convolved image can be considered a feature map, where each new neuron represents some indication (or lack thereof) of a feature in some receptive field of the original image, so, no, it does not need to be a valid image in the output.

+ +

If you specifically care for it to be an image as an output, you can do a couple of things:

+ +

1) normalize the produced feature map to some set range that you're working in (0-255 or 0-1); see the sketch after this list

+ +

2) make the filter a valid probability distribution, and you know the output will stay in the same range as the input (ex: Gaussian filters)

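As a minimal sketch of option 1), assuming an arbitrary real-valued feature map (the numbers are made up):

import numpy as np

feature_map = np.array([[-3.2, 10.0],
                        [ 0.0, 417.5]])         # made-up convolution output
fmin, fmax = feature_map.min(), feature_map.max()
img = (feature_map - fmin) / (fmax - fmin + 1e-8) * 255.0
img = img.astype(np.uint8)                      # now a displayable 8-bit image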
+",25496,,,,,5/20/2019 12:23,,,,2,,,,CC BY-SA 4.0 +12431,2,,12315,5/20/2019 12:48,,0,,"

One way to tackle this problem is to identify the longest sequence in the training dataset and then expand (zero-pad) all the other inputs to match that size.

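A minimal NumPy sketch of that idea (the sequences here are made up for illustration):

import numpy as np

sequences = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
max_len = max(len(s) for s in sequences)

padded = np.zeros((len(sequences), max_len))
for i, s in enumerate(sequences):
    padded[i, :len(s)] = s          # trailing positions stay zero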
+",20430,,,,,5/20/2019 12:48,,,,2,,,,CC BY-SA 4.0 +12432,2,,12421,5/20/2019 12:55,,4,,"

$TD(\lambda)$ return has the following form: +\begin{equation} +G_t^\lambda = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_{t:t+n} +\end{equation} +For your MDP, $TD(1)$ looks like this: +\begin{align} +G &= 0.64 (r_0 + r_2 + r_4 + r_5 + r_6) + 0.36(r_1 + r_3 + r_4 + r_5 + r_6)\\ +G &\approx 6.164 +\end{align} +$TD(\lambda)$ looks like this: +\begin{equation} +G_0^\lambda = (1-\lambda)[\lambda^0 G_{0:1} + \lambda^1 G_{0:2} + \lambda^2 G_{0:3} + \lambda^3 G_{0:4} + \lambda^4 G_{0:5} ] +\end{equation} +Now each $G$ term separately: +\begin{align} +G_{0:1} &= 0.64(r_0 + v_1) + 0.36(r_1 + v_2) \approx 7.864\\ +G_{0:2} &= 0.64(r_0 + r_2 + v_3) + 0.36(r_1 + r_3 + v_3) \approx -5.336\\ +G_{0:3} &= 0.64(r_0 + r_2 + r_4 + v_4) + 0.36(r_1 + r_3 + r_4 + v_4) \approx 25.864\\ +G_{0:4} &= 0.64(r_0 + r_2 + r_4 + r_5 + v_5) + 0.36(r_1 + r_3 + r_4 + r_5 + v_5) \approx -11.936\\ +G_{0:5} &= 0.64(r_0 + r_2 + r_4 + r_5 + r_6 + v_6) + 0.36(r_1 + r_3 + r_4 + r_5 + r_6 + + v_6) \approx -0.336 +\end{align} +Finally, we need to find $\lambda$ so that the return is equal to the $TD(1)$ return; we have: +\begin{equation} +6.164 = (1 - \lambda)[7.864 - 5.336\lambda + 25.864\lambda^2 - 11.936\lambda^3 - 0.336\lambda^4] +\end{equation} +When you solve this equation, one of the solutions is $0.205029$, which is close to what you needed to get, considering the numerical errors. Your problem was that you only considered the probability in the first state, but that decision carries over to future states as well.

+ +
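If it helps, here is a small numerical sketch of solving that final equation with SciPy (the coefficients are the ones derived above; the bracket [0, 0.99] isolates the relevant root):

from scipy.optimize import brentq

def f(lam):
    inner = 7.864 - 5.336 * lam + 25.864 * lam**2 - 11.936 * lam**3 - 0.336 * lam**4
    return (1 - lam) * inner - 6.164   # TD(lambda) return minus the TD(1) return

lam = brentq(f, 0.0, 0.99)             # root-finding on the bracketed interval
print(lam)                             # approximately 0.205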

EDIT

+ +

As pointed out by bewestphal, this is not a full solution, it misses one crucial step to get it fully correct. Hint for it can be found in his answer and that's the correct solution.

+",20339,,20339,,6/1/2019 8:29,6/1/2019 8:29,,,,6,,,,CC BY-SA 4.0 +12433,1,12457,,5/20/2019 14:16,,5,144,"

In the paper Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour, the authors seem to be presenting a black box method of testing. Are these ideas really new? Weren't these ideas already proposed in Translating Values into Design Requirements (by Ibo Van de Poel)? Black-box testing had already been proposed much earlier.

+",25777,,2444,,12/30/2021 14:12,12/30/2021 14:12,"Are the ideas in the paper ""Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour"" novel?",,1,0,0,,,CC BY-SA 4.0 +12434,1,,,5/20/2019 15:59,,5,342,"

I have come across something that IBM offers called neural architecture search. You feed it a data set and it outputs an initial neural architecture that you can train.

+

How is neural architecture search (NAS) performed? Do they use heuristics, or is this meta machine learning?

+

If you have any papers on NAS, I would appreciate if you can provide a link to them.

+",25780,,2444,,10/29/2021 15:03,10/29/2021 15:03,How is neural architecture search performed?,,3,0,,,,CC BY-SA 4.0 +12458,1,,,5/20/2019 16:02,,1,90,"

I’ve been thinking about this for a few days and can’t tell if this would feel morally just to an average user

+",,John,1671,,5/23/2019 20:02,5/29/2019 10:43,"Would it be ethical to use AI to determine a user’s gender from the content they upload, without them knowing?",,3,1,,,,CC BY-SA 4.0 +12459,2,,12458,5/20/2019 16:28,,1,,"

Ethics aside, I think this could potentially create a number of different issues. What if your algorithm guesses wrong and I'm stuck with a UI that's targeting incorrectly? What if my wife and I share an account? What if I'm a (insert orientation here) (insert gender here) who [coaches, supports] a [men's, women's] [volleyball, football] team? People are very diverse, and you could make many mistakes. Would you then provide tools for the user to correct these mistakes? How would you be able to do so without the user being offended?

+ +

Instead, I see many fewer issues that might arise from a section like the following:

+ +
+

Let's get to know each other!

+ +

If you'd like, we can help tune your profile to match your interests.

+ +

You can start by selecting some of your interests from the list below, or try searching for your own.

+ +

Fishing Cooking Soccer Video Gaming Social Media + Search for my interest...

+ +

Skip this section →

+
+ +

You could continue with other questions that might actually be relevant to shaping your UI or application, allowing the user to omit/delete details for any or all questions.

+ +

Being purely an opt-in experience prevents any issues with the user feeling like your app may be ""talking about them"" behind their back.

+ +

In this case, specifically at this point in time, I think it's wise to be transparent about what data your application knows (or thinks it knows) about your users.

+",,maxathousand,,,,5/20/2019 16:28,,,,2,,,,CC BY-SA 4.0 +12435,2,,12434,5/20/2019 17:05,,0,,"

Neural architecture search (NAS) is a method of automating the design (that is, the choice of the values of the hyper-parameters) of artificial neural networks. There are different approaches to search the space of neural network architectures. For example, you can use reinforcement learning or evolutionary (or genetic) algorithms.

+ +

Check out the paper Neural Architecture Search with Reinforcement Learning (2017), by Barret Zoph and Quoc V. Le, where the authors train, using reinforcement learning (specifically, REINFORCE), a recurrent neural network (the ""controller"") to generate (convolutional and recurrent) neural network architectures, so as to maximise the expected accuracy of the generated architectures on a validation dataset. They achieve some good results using this approach.

+ +

See also Efficient Neural Architecture Search via Parameter Sharing (2018), by Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le and Jeff Dean (which thus includes some of the authors of NAS), which is similar to NAS, but more efficient, hence the acronym ENAS (efficient NAS).

+",2444,,2444,,7/18/2019 23:28,7/18/2019 23:28,,,,0,,,,CC BY-SA 4.0 +12436,5,,,5/20/2019 17:22,,0,,"

See e.g. https://en.wikipedia.org/wiki/Neural_architecture_search for more info.

+",2444,,2444,,5/20/2019 20:40,5/20/2019 20:40,,,,0,,,,CC BY-SA 4.0 +12437,4,,,5/20/2019 17:22,,0,,"For questions related to the concept of neural (network) architecture search (NAS), which is a way of automating the design (that is, the hyper-parameters) of a neural network. NAS is related to neuroevolution, given that neuroevolution can be used to perform NAS, but neuroevolution is not the only way of performing NAS. For example, reinforcement learning can also be used to perform NAS.",2444,,2444,,7/7/2019 22:45,7/7/2019 22:45,,,,0,,,,CC BY-SA 4.0 +12438,5,,,5/20/2019 17:23,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Hyperparameter_optimization.

+",2444,,2444,,5/20/2019 20:41,5/20/2019 20:41,,,,0,,,,CC BY-SA 4.0 +12439,4,,,5/20/2019 17:23,,0,,"For questions related to the concept of hyper-parameter optimization, that is, the task of finding the best hyper-parameters for a particular learning algorithm (e.g. gradient descent) or model (e.g. a multi-layer neural network) using an optimization method (e.g. Bayesian optimization or genetic algorithms).",2444,,2444,,7/20/2019 12:06,7/20/2019 12:06,,,,0,,,,CC BY-SA 4.0 +12443,2,,12434,5/20/2019 21:17,,6,,"

You could say that NAS fits into the domain of Meta Learning or Meta Machine learning.

+ +

I've pulled the NAS papers from my notes; this is a collection of papers/lectures that I personally found very interesting. It's sorted in roughly descending chronological order, and *** means influential / must read.

+ +

Quoc V. Le and Barret Zoph are two good authors on the topic.

+ + +",1741,,,,,5/20/2019 21:17,,,,0,,,,CC BY-SA 4.0 +12445,2,,12418,5/21/2019 6:15,,2,,"

You have made many wrong assumptions in this question. First theoretically speaking,

+ +
    +
  • Filters do not work by 'picking up elements' (they work on the principle of edge detection).
  • You have assumed that only a single combination of filter weights will give the desired output (assuming continuous weights, not binary). This is especially prominent in the problem of Regularization, where we want to choose a set of weights without over-fitting data.
  • The error you used looks very similar to the Perceptron update rule (the squared error gives the same derivative, but make sure you are not confusing the two).
  • Backpropagation through 'dead ReLu's' is not possible (see this answer for more details).
+ +

Now, let us check mathematically:

+ +
+

Input Volume:

+
+ +

$$ + \begin{matrix} + 1 & -1 & 0 \\ + 0 & 1 & 0 \\ + 0 & -1 & 1 \\ + \end{matrix} +$$

+ +
+

Desired Output:

+
+ +

$$ + \begin{matrix} + 1 & -1 \\ + 0 & 1 \\ + \end{matrix} +$$

+ +

Note, in this step you desire an output which is negative (element (0,-1)), but you are forward propagating through a ReLu which is cutting off the negative part, thus the gradients have no way to communicate or update the required negative. +Basically,

+ +

$ wx \rightarrow ReLu \rightarrow y$ is happening, and if $wx$ is a negative number then $y$ is always $0$, thus $(target-y)$ is always $target$, and hence the error remains constant with respect to $w$; and if we want to backpropagate (assuming squared error), then:

+ +

$\frac{d}{dw} (target - y)^2 = 2*(target - y)*\frac{d}{dw}y = -2*(target - y)*0 = 0$ (Remember from the ReLu output graph slope is $0$ in the negative region).

+ +
+

Now, you randomise a filter:

+
+ +

$$ + \begin{matrix} + 1 & -1 \\ + 1 & 1 \\ + \end{matrix} +$$

+ +

apply ReLu and get the following:

+ +

$$ + \begin{matrix} + 3 & 0 \\ + 0 & 1 \\ + \end{matrix} +$$

+ +

again, you have chosen your target to have a negative number, which is not possible in the case of ReLu activation.

+ +

But continuing you get the error as:

+ +

$$ + \begin{matrix} + -2 & -1 \\ + 0 & 0 \\ + \end{matrix} +$$

+ +

use the error to compute the gradients (which, again, you have calculated incorrectly by backpropagating through the ReLUs; you have also missed the minus sign associated with the output, but you have compensated for it by adding the update to $w$, whereas the convention is to subtract):

+ +

$$ + \begin{matrix} + 1 & 2 \\ + 1 & -2 \\ + \end{matrix} +$$

+ +

And get the new filter as:

+ +

$$ + \begin{matrix} + 0.5 & 0 \\ + 0.5 & 0 \\ + \end{matrix} +$$

+ +

This is a pretty good approximation of the desired filter (even though the previous steps rest on wrong assumptions, it does not matter much, since what you essentially did was use a linear activation function, which will work if you go through enough iterations). So basically you are using linear filters; a small numerical check of this is sketched below. The details are too much of a hodgepodge for me to go into here, so I will suggest some resources for you to see ReLu backpropagation:

+ +

Deep Neural Network - Backpropogation with ReLU

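As mentioned above, with a linear (identity) activation the convolutional update does converge; here is a minimal NumPy sanity check of that, using the question's input and a 0.5-times-squared-error loss (the learning rate and iteration count are arbitrary choices):

import numpy as np

x = np.array([[1., -1., 0.],
              [0.,  1., 0.],
              [0., -1., 1.]])
target = np.array([[1., -1.],
                   [0.,  1.]])
w = np.array([[1., -1.],
              [1.,  1.]])
lr = 0.1

def conv_valid(x, w):
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = np.sum(x[i:i + 2, j:j + 2] * w)
    return out

for _ in range(2000):
    err = target - conv_valid(x, w)
    grad = np.zeros_like(w)
    for m in range(2):
        for n in range(2):
            grad[m, n] = -np.sum(err * x[m:m + 2, n:n + 2])  # d(0.5*err^2)/dw[m,n]
    w -= lr * grad

print(np.round(w, 3))   # approaches [[1, 0], [0, 0]], the filter that reproduces the target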
+",,user9947,,user9947,5/21/2019 11:50,5/21/2019 11:50,,,,6,,,,CC BY-SA 4.0 +12446,1,,,5/21/2019 7:07,,1,24,"

I am trying to learn utility functions for ships through their AIS data.

+ +

I have a lot of data available and plan on focusing on fishing boats.

+ +

So far I've researched a lot of IRL algorithms, but I'm not sure if I missed an important one that could be applied.

+ +
    +
  • I've found this paper https://journals.sagepub.com/doi/citedby/10.1177/0278364917722396, but I'm not sure if this is really applicable. +I would need to transform the AIS data into trajectories, add material from OpenSeaMap and plot this as images of trajectories. Or did I misunderstand the paper completely?

  • +
  • The other approach I found would be selecting features such as position, distance to other vessels, and others, and then trying to apply continuous max entropy deep inverse reinforcement learning.

  • +
+ +

Is there another approach that may be easier in your eyes?

+",25796,,,,,5/21/2019 7:07,Learning utility function for AIS data,,0,0,,,,CC BY-SA 4.0 +12447,1,,,5/21/2019 7:30,,2,60,"

I'm trying to make use of sensor data from VOC, humidity, age, and sampling rate, and use them as NN input data. Below are the technical questions I'm struggling with.

+ +

For each training set, I have 2500 data points for VOC and Humidity. However, for the age and sampling rate, I have only one for each. I'm wondering if it would work to just put 5002(=2500(VOC)+ 2500(hum)+1(age) + 1 (sampling rate)) input data points to the layer.

+ +

*Note: sampling rate is in the data set because the VOC and HUM data have different data points for each training set. I reduced the data points by sampling from the original data points. However, I know for sure that timespan does matter.

+ +

Please help me out! A good reference is also a huge welcome!

+",25797,,,,,5/21/2019 7:30,How to deal with Neural network input data with different length and type,,0,0,,,,CC BY-SA 4.0 +12448,1,,,5/21/2019 8:37,,5,487,"

What is a simple turn-based game, that can be used to validate a Monte-Carlo Tree Search code and it's parameters?

+ +

Before applying it to problems where I do not have the possibility to validate its moves for correctness, I would like to implement a test case that makes sure that it behaves as expected, especially when there are some ambiguities between different implementations and papers.

+ +

I built a connect-four game in which two MCTS-AIs play against each other, and an iterated prisoner's dilemma implementation, in which an MCTS-AI plays against common strategies like Tit-for-Tat, but I am still not sure if there is a really good way to interpret whether the MCTS-AI finds the best strategy.

+ +

Another alternative would be a tic-tac-toe game, but MCTS will exhaust the whole search space within a few steps, so it is hard to tell how the implementation will perform on other problems.

+ +

In addition, expanding a full game tree does not tell you if any states before the full expansion are following the best MCTS strategy.

+ +
+ +

Example: +You can alternate in the expand step of player 1's tree between optimize for player 1 and optimize for player 2, assuming that player 2 will not play the best move for player 1, but the best move for himself. Not doing so would result in an optimistic game tree, that may work in some cases, but probably is not the best choice for many games, while it would be useful for cooperative games.

+ +

When the game tree is fully expanded, you can find the best move, even when the order of the expand steps was not optimal, so using a game that can be fully expanded is no good test to validate the in-between steps.

+ +
+ +

Is there a simple to implement game, that can be used for validation, in which you can reliably check for each move, if the AI did find the expected move?

+",25798,,25798,,5/22/2019 9:55,6/12/2019 8:41,What is a simple game for validation of MCTS?,,2,9,,,,CC BY-SA 4.0 +12449,2,,11438,5/21/2019 9:04,,1,,"

Maybe the following article can help you:

+ +

FAQ Retrieval using Query-Question Similarity and BERT-Based Query-Answer Relevance (2019)

+ +

They evaluate their model in localgovFAQ and StackExchange datasets.

+",25789,,1671,,5/21/2019 18:29,5/21/2019 18:29,,,,0,,,,CC BY-SA 4.0 +12450,1,,,5/21/2019 13:19,,3,388,"

In the paper Nonlinear Interference Mitigation via Deep Neural Networks, the the following network is illustrated.

+ +

The network structure is

+ +

The network parameters are $\theta = \{W_1^{1},...,W_1^{l-1},W_2^{1},...,W_2^{l-1},W^{l},\alpha_1,...,\alpha_{l-1}\}$, where $W_1$ and $W_2$ are linear matrices and $\rho^{(i)}(x)=xe^{-j\alpha_i|x|^2}$ is element-wise nonlinear function ($i$ is the index of layer).

+ +

Where should I add this $\rho^{(i)}(x)$? Is it possible to learn the parameter $\alpha$? I don't think it is the same idea as activation function since it is positioned in the middle of two linear matrices... Or can it be added as embedding layer?

+",25806,,2444,,5/21/2019 19:55,5/21/2019 20:07,Can you learn parameters in nonlinear function?,,1,15,0,,,CC BY-SA 4.0 +12453,1,,,5/21/2019 18:44,,1,62,"

I think reinforcement learning would be a good fit for this problem, but I am not sure of how to deal with a seemingly infinite number of actions. In the beginning of each game (generic RTS game), the player places down units anywhere on the map. Then as the game progresses, the player can move units around by selecting on them and clicking on a valid location on the map. They must take into consideration things like distance and travel time. An AI agent must do the same.

+ +

How would I represent these actions? It’s not as simple as selecting ‘up’, ‘down’, ‘right’,...etc. Should the agent just randomly pick locations on the map?

+ +

Are there any papers or implementations I can look at to help me get started?

+",25810,,,,,5/21/2019 18:44,Action spaces for an RTS game,,0,4,,,,CC BY-SA 4.0 +12454,2,,12450,5/21/2019 20:07,,1,,"

In general, you can learn any parameter of the network, provided you can find the partial derivative of the loss function with respect to the desired parameter. Given that $\rho$ is assumed to be differentiable (as the authors state in the paper), you can take the partial derivative of the loss function with respect to the parameter $\alpha$.

+ +

In this paper, $\rho$ is a non-linear function (that is, a function that is not linear, e.g. the sigmoid function) that applies element-wise to its input. So, if you pass a vector to this $\rho$, you will get a vector of the same shape out of it. The authors do not explicitly call it an ""activation function"", but $\rho$ does an analogous job of an activation function, that is, it introduces non-linearity. Furthermore, in this architecture, $\rho$ is also followed by a matrix. In general, this is not forbidden, even though it is not common.

+ +
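As a hypothetical illustration (not the paper's exact implementation), here is a PyTorch sketch of a layer with a learnable scalar inside its non-linearity; a real-valued simplification $\rho(x) = x e^{-\alpha x^2}$ is used instead of the complex-valued $\rho$ from the paper:

import torch
import torch.nn as nn

class RhoLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.1))  # learnable, updated by backprop

    def forward(self, x):
        return x * torch.exp(-self.alpha * x ** 2)

layer = RhoLayer()
x = torch.randn(4, 8)
layer(x).sum().backward()
print(layer.alpha.grad)   # a gradient w.r.t. alpha exists, so alpha can be learned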

In general, in each layer of a neural network, you can have several different learnable parameters or weights. A parameter is learnable if you can differentiate the loss function with respect to it. You can have more than one weight matrix. For example, recurrent neural networks have usually more than one weight matrix associated with each layer: one matrix is associated with the feed-forward connections and the other matrix is associated with the recurrent connections.

+",2444,,,,,5/21/2019 20:07,,,,0,,,,CC BY-SA 4.0 +12455,1,,,5/21/2019 20:39,,2,331,"

I am using the following library: +https://github.com/vishnugh/evo-NEAT

+ +

which seems to be a pretty simple NEAT-implementation. +Therefore I am using the following Config:

+ +
package com.evo.NEAT.com.evo.NEAT.config;
+
+/**
+ * Created by vishnughosh on 01/03/17.
+ */
+public class NEAT_Config {
+
+    public static final int INPUTS = 11;
+    public static final int OUTPUTS = 2;
+    public static final int HIDDEN_NODES = 100;
+    public static final int POPULATION =300;
+
+    public static final float COMPATIBILITY_THRESHOLD = Float.MAX_VALUE;
+    public static final float EXCESS_COEFFICENT = 1;
+    public static final float DISJOINT_COEFFICENT = 1;
+    public static final float WEIGHT_COEFFICENT = 5;
+
+    public static final float STALE_SPECIES = 2;
+
+
+    public static final float STEPS = 0.1f;
+    public static final float PERTURB_CHANCE = 0.9f;
+    public static final float WEIGHT_CHANCE = 0.5f;
+    public static final float WEIGHT_MUTATION_CHANCE = 0.5f;
+    public static final float NODE_MUTATION_CHANCE = 0.1f;
+    public static final float CONNECTION_MUTATION_CHANCE = 0.1f;
+    public static final float BIAS_CONNECTION_MUTATION_CHANCE = 0.1f;
+    public static final float DISABLE_MUTATION_CHANCE = 0.1f;
+    public static final float ENABLE_MUTATION_CHANCE = 0.2f ;
+    public static final float CROSSOVER_CHANCE = 0.1f;
+
+    public static final int STALE_POOL = 10;
+}
+
+ +

However, there are way too many species (about 60). I do not know how to reduce this number, given the fact that the COMPATIBILITY_THRESHOLD is already maximized.

+ +

So what am I doing wrong?

+ +

Note: I am not using: http://nn.cs.utexas.edu/keyword?stanley:ec02 +since this algorithm seems not to work in a changing environment (where fitness can vary hardly)

+",19062,,1847,,5/22/2019 6:02,9/13/2019 14:11,How to reduce amount of species in NEAT?,,2,0,,,,CC BY-SA 4.0 +12456,2,,12448,5/21/2019 21:00,,4,,"

A good choice might be smaller-scale games of Go, like a 9x9 board. This was the original application domain MCTS was designed for, and the original paper by Brugmann from 1993 details parameters that should lead to an agent that can play above beginner level in what is today a minuscule amount of computational time, in a scaled-down 9x9 grid.

+ +

Go is a good choice for a benchmark because most learning algorithms fail at it pretty badly. The fact that MCTS worked here was a major breakthrough at the time, and helped cement it as a technique for game playing. If your algorithm is not working properly, it is therefore unlikely that it can learn to play Go at the level described in Brugmann's paper.

+",16909,,,,,5/21/2019 21:00,,,,3,,,,CC BY-SA 4.0 +12457,2,,12433,5/21/2019 21:09,,2,,"

Poel's paper on Translating Values into Design Requirements articulates a framework for mapping abstract values and norms into concrete design constraints that an engineer could work with. The example used in the paper is mapping beliefs about animal welfare to design constraints on chicken coops.

+ +

The newer paper by Tubella et al. on Governance by Glass-Box builds on Poel's idea (and, in fact, cites Poel several times). It basically suggests that we should use Poel's design process, but that in something like an AI system, we also need to use an ""observation phase"" to validate the system, because, unlike the problem of engineering a chicken coop, an AI system may appear to have met design constraints, but routinely violate them in production.

+ +

So, you're right that the Tubella et al. paper is essentially proposing the combination of Poel's framework for translating values into design constraints with the old idea of black-box testing, but this combination itself appears to be a new, if modest, contribution.

+",16909,,,,,5/21/2019 21:09,,,,4,,,,CC BY-SA 4.0 +12462,1,,,5/22/2019 7:36,,4,343,"

In the book "Reinforcement Learning: An Introduction" (2018) Sutton and Barto explain, on page 221, a form of tile coding using hashing, to reduce memory consumption.

+

I have two questions about that:

+
    +
  1. How can this approach reduce memory consumption? Doesn't it just depend on the number of tiles (you have to store one weight for each tile)?

  2. They state that there is only a "little loss of performance". In my understanding, the point of tile coding (and coarse coding) is that nearby states have many tiles in common and far-away states have only a few tilings in common. With tilings "randomly spread throughout the state space", this isn't the case. How does this not influence performance?
+",21299,,2444,,1/21/2021 2:40,1/21/2021 2:40,"When using hashing in tile coding, why are memory requirements reduced and there is only a little loss of performance?",,0,4,,,,CC BY-SA 4.0 +12463,1,,,5/22/2019 7:49,,3,11299,"

Are there any open sourced algorithms that can take a couple of images as an input and generate a new, similar image based on that input? Or are there any resources where I can learn to create such an algorithm?

+",25821,,,,,5/19/2023 13:14,Algorithm that creates new images based on other images,,3,0,,,,CC BY-SA 4.0 +12464,1,,,5/22/2019 7:54,,1,305,"

I want my models to be accessible only by my programs. How do I encrypt and decrypt my model when I run inference on it? Is there any existing technology that is widely used?

+",22093,,2444,,3/3/2021 10:00,3/3/2021 10:00,How do I encrypt and decrypt my model when I run inference on it?,,1,1,,,,CC BY-SA 4.0 +12465,1,,,5/22/2019 8:29,,1,436,"

I'm currently working on a group project where we need to find a pattern in a given dataset. The dataset is a collection of X, Y, Z values of a gyroscope from someone who is walking. If you plot these values you'll get a result like this. +

+ +

And this is what our dataset looks like. +

+ +

We are new to AI and ML so we first did some general research like understanding how matrices work and how to do some basic predictions with frameworks like TensorFlow and PyTorch. Now we want to start on this problem. What we need to do is to find a pattern in the dataset and count how many times this pattern appears in this dataset.

+ +

We started off with some non-AI functions to count; we managed to do that, but the way we counted will probably only work on this specific dataset. So that's why we decided to do this with AI.

+ +

We would love to hear as many different approaches as possible to count the steps since we are still learning.

+",25820,,,,,2/16/2020 18:04,Recognize pattern in dataset,,3,1,,,,CC BY-SA 4.0 +12466,2,,12463,5/22/2019 8:44,,2,,"

I'm not an expert on that so you could probably get a better answer.

+ +

I'm not sure I understand what you're looking for. Are the couple of images about the same thing? For example, pictures of cats, and you want to generate a new cat based on these pictures? If that's what you want, you could probably take a look at Generative Adversarial Network (GAN) : Introduction. +A GAN is made up of a Generator and a Discriminator. The goal of the discriminator is to distinguish the real data from the generated data. And the goal of the generator is to improve its generated data to look similar to real data. Then, if there are different cat images in your dataset, the generator will learn to create a new cat based on that dataset.

+ +

If what you're looking for is to take different images, like a cat and a dog, and generate a ""catdog"", you can take a look at Variational AutoEncoders (VAE). For example, you can train two different VAEs (encoder/decoder): one for cats and one for dogs. Then you take the encoder of the dogs and the decoder of the cats. That is something I saw one day; I'm not sure if it really works.

+ +

Correct me if I'm wrong

+",25166,,25166,,5/22/2019 9:07,5/22/2019 9:07,,,,2,,,,CC BY-SA 4.0 +12468,1,,,5/22/2019 9:07,,1,314,"

In the Transformer (adopted in BERT), we normalize the attention weights (dot product of keys and queries) using a softmax in the Scaled Dot-Product mechanism. It is unclear to me whether this normalization is performed on each row of the weight matrix or on the entire matrix. In the TensorFlow tutorial, it is performed on each row (axis=-1), and in the official TensorFlow code, it is performed on the entire matrix (axis=None). The paper doesn't give many details.

+ +

To me, both methods can make sense, but they have a strong impact. If on each row, then each value will have a roughly similar norm, because the sum of its weights is 1. If on the entire matrix, some values might be ""extinguished"" because all of its weights can be very close to zero.

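A small NumPy illustration of the difference I mean (the score matrix is made up):

import numpy as np

def softmax(x, axis=None):
    e = np.exp(x - x.max())                          # constant shift for numerical stability
    return e / e.sum(axis=axis, keepdims=(axis is not None))

scores = np.array([[10.0, 9.0],
                   [ 1.0, 0.0]])

print(softmax(scores, axis=-1))  # each row sums to 1
print(softmax(scores))           # whole matrix sums to 1; the second row is almost "extinguished"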
+",7783,,2444,,11/1/2019 2:43,11/1/2019 2:43,How are the attention weights normalised in the transformer?,,0,1,,,,CC BY-SA 4.0 +12469,1,,,5/22/2019 10:20,,1,461,"

Recently I simulated the Gambler's Problem in RL:

+ +

+ +

Now, the problem is, the curve does not at all appear the way as given in the book. The ""best policy"" curve appears a lot more undulating than it is shown based on the following factors:

+ +
    +
  • Sensitivity (i.e. the threshold for which you decide the state values have converged).
  • +
  • Probability of heads (expected).
  • +
  • Depending the value of sensitivity it also depends on whether I find the policy by finding the action (bet) which cause the maximum return by using $>$ or by using $>=$ in the following code i.e: +
  • +
+ + + +
 initialize maximum = -inf
+ best_action = None
+ loop over states:
+    loop over actions of the state:
+       if(action_reward>maximum):   # or >=, which changes which ties are kept
+          maximum = action_reward
+          best_action = action
+
+ + + +

Also note that if we make the final reward as 101 instead of 100 the curve becomes more uniform. This problem has also been noted in the following thread.

+ +

So what is the actual intuitive explanation behind such a behaviour of the solution? Also, here is the thread where this problem is discussed.

+",,user9947,1671,,6/14/2019 19:00,6/14/2019 19:00,The problem with the Gambler's Problem in RL,,1,0,,,,CC BY-SA 4.0 +12470,1,,,5/22/2019 11:54,,6,203,"

As a layman in AI, I want to get an idea of how big data players, like Facebook, model individuals (about whom they have so much data).

+

There are two scenarios I can imagine:

+
    +
  1. Neural networks build clusters of individuals by pure and "unconscious" big data analysis (without knowing, trying to understand, or naming the clusters and "feature neurons" on intermediate levels of the network), with the only aim of predicting some decisions of the members of these clusters with the highest possible accuracy.

  2. Letting humans analyze the clusters and neurons (trying to understand what they mean), they give names to them and possibly add human-defined "fields" (like "is an honest person") if these were not found automatically, whose values are then calculated from big data.
+

The second case would result in a specific psychological model of individuals with lots of "human-understandable" dimensions.

+

In case there is such a model, I would be very interested to know as much about it as possible.

+

What can be said about this:

+
+
    +
  1. Is there most probably such a model (that is kept as a secret, e.g. by Facebook)?

  2. Has someone tried to guess what it may look like?

  3. Are there leaked parts of the model?
+
+

My aim is to know and understand by which categories Facebook (as an example) classifies its users.

+",25362,,2444,,6/24/2020 13:58,7/9/2023 23:05,"How do big companies, like Facebook, model individuals and their interaction?",,1,1,,,,CC BY-SA 4.0 +12471,2,,11043,5/22/2019 14:49,,3,,"

I found the following detailed and well documented Python notebook, which uses only NumPy.

+",25825,,2444,,11/18/2019 18:54,11/18/2019 18:54,,,,0,,,,CC BY-SA 4.0 +12472,1,12480,,5/22/2019 15:24,,23,6115,"

When designing solutions to problems such as the Lunar Lander on OpenAIGym, Reinforcement Learning is a tempting means of giving the agent adequate action control so as to successfully land.

+ +

But what are the instances in which control system algorithms, such as PID controllers, would do just an adequate job as, if not better than, Reinforcement Learning?

+ +

Questions such as this one do a great job at addressing the theory of this question, but do little to address the practical component.

+ +

As an Artificial Intelligence engineer, what elements of a problem domain should suggest to me that a PID controller is insufficient to solve a problem, and a Reinforcement Learning algorithm should instead be used (or vice versa)?

+",22424,,22424,,5/22/2019 17:02,5/22/2019 23:56,When should I use Reinforcement Learning vs PID Control?,,1,5,,,,CC BY-SA 4.0 +12473,2,,12469,5/22/2019 16:23,,3,,"

The intuitive explanation is that there are many equally good ""optimal"" policies. This is mentioned at the end of the example problem description you posted. My gut says that the family of optimal policies would be any policy from the double/nothing family. So, for example, if you bet 25 on the first bet instead of 50, I think your overall chances of winning should be the same as if you bet 50, it'll just take longer in expectation. The resulting family of policies will look more undulating than the one in the book.

+ +

As Neil notes, for low values of $p$, the probability that you win a gamble, it is the case that there is a unique optimal policy.

+",16909,,16909,,5/23/2019 0:53,5/23/2019 0:53,,,,6,,,,CC BY-SA 4.0 +12474,2,,12465,5/22/2019 19:05,,1,,"

For such time-series data that has a significant amount of periodicity, I would recommend converting data to the frequency domain and performing various spectral analysis methods as @firion has already mentioned. For example, you could perform Fourier Analysis and study the individual components and identify patterns there.

+ +

Also, it is generally not recommended to apply the usual pattern extraction approaches to time-series data, as they fail to capture the temporal relationship between subsequent data points.

+ +

Hope this helps!

+",25704,,,,,5/22/2019 19:05,,,,0,,,,CC BY-SA 4.0 +12476,2,,8878,5/22/2019 20:52,,1,,"

We don't really need artificial intelligence, but it is proving ever more useful. This is a function of what is known as utility--the capability of an algorithm to perform a task adequately. The new utility of AI has upsides and downsides.

+ +

Weak AI: Less capable than a human

+ +

At the lower end of the scale, there might be situations where a human would be better, but the work is so dangerous, or expensive for humans, that we use automatons instead. (Space exploration is a good example, e.g. the AI on a deep space probe or Mars rover.)

+ +

Semi-Strong AI: About as capable as a human

+ +

I'm defining ""semi-strong"" for this answer. Here we have AI or automation that can do tasks as well as humans, such as on an assembly line, but where automation is more efficient. (This type of automation spurred the Luddite movement, responding to the loss of human jobs due to automation.)

+ +

As Machine Learning continues to get more effective, the range of tasks that AI can perform as well as humans will surely grow. This might lead to unprecedented levels of persistent human unemployment, but some argue that automation will also create new opportunities for humans and mostly eliminate repetitive, less fulfilling work. (See also: Technological unemployment)

+ +

Strong AI: Exceeds human capability

+ +

Strong AI is the ""holy grail"", and some believe it will lead to a technological ""singularity"" in which smarter machines make ever smarter machines. (No one knows:) Nevertheless, Machine Learning has demonstrated greater than human capability in a number of tasks, and the range of such tasks will surely grow. (AlphaGo was a milestone because the game of Go is unsolvable and notoriously difficult for AI, prior to AlphaGo. Now it's unclear if unmodified humans will ever again be able to beat strong AI at these types of games.)

+ +

Although game AIs are not directly useful, except for recreational purposes, the methods used to create them can be extended to real-world problems. There are many forms of Machine Learning, not restricted to Neural Networks and MCTS, with Evolutionary Algorithms as another type, also recently demonstrating strong utility.

+ +

Strong AI is useful because it has greater utility than humans. It is desirable because it increases efficiency and expected return on investment.

+",1671,,1671,,5/22/2019 21:00,5/22/2019 21:00,,,,2,,,,CC BY-SA 4.0 +12477,5,,,5/22/2019 21:14,,0,,"

See: https://en.wikipedia.org/wiki/Materials_science

+",1671,,1671,,5/22/2019 21:14,5/22/2019 21:14,,,,0,,,,CC BY-SA 4.0 +12478,4,,,5/22/2019 21:14,,0,,For questions related to applications of AI in materials science.,1671,,1671,,5/22/2019 21:14,5/22/2019 21:14,,,,0,,,,CC BY-SA 4.0 +12480,2,,12472,5/22/2019 23:56,,10,,"

I think the comments are basically on the right track.

+ +

PID controllers are useful for finding optimal policies in continuous dynamical systems, and often these domains are also used as benchmarks for RL, precisely because there is an easily derived optimal policy. However, in practice, you'd obviously prefer a PID controller for any domain in which you can easily design one: the controller's behaviors are well understood, while RL solutions are often difficult to interpret.

+ +

Where RL shines is in tasks where we know what good behaviour looks like (i.e., we know the reward function), and we know what sensor inputs look like (i.e. we can completely and accurately describe a given state numerically), but we have little or no idea what we actually want the agent to do to achieve those rewards.

+ +

Here's a good example:

+ +
    +
  • If I wanted to make an agent to maneuver a plane from in front of an enemy plane with known movement patterns to behind it, using the least amount of fuel, I'd much prefer to use a PID controller.

  • +
  • If I wanted to make an agent to control a plane and shoot down an enemy plane with enough fuel left to land, but without a formal description of how the enemy plane might attack (perhaps a human expert will pilot it in simulations against our agent), I'd much prefer RL.

  • +
+",16909,,,,,5/22/2019 23:56,,,,2,,,,CC BY-SA 4.0 +12481,2,,12465,5/23/2019 0:51,,0,,"
    +
  • Since you are on an ML forum: recognition of sequences is typically done with RNNs.
  • You won't believe it, but I am currently working on similar stuff. Start by thinking about an algorithm to find repetitions in a string, like: ""ababcab"" returns ('ab' : 0,2,5)
  • And yes, you could do Fourier analysis, but that's not an ML method at all.
+",25836,,,,,5/23/2019 0:51,,,,0,,,,CC BY-SA 4.0 +12487,1,,,5/23/2019 10:45,,4,275,"

There is plenty of literature describing LSTMs in a lot of detail and how to use them for multi-variate or uni-variate forecasting problems. What I couldn't find though, is any papers or discussions describing time series forecasting where we have correlated forecast data.

+ +

An example is best to describe what I mean. Say I wanted to predict number of people at a beach for the next 24 hours and I want to predict this in hourly granularity. This quantity of people would clearly depend on the past quantity of people at the beach as well as the weather. Now I can make an LSTM architecture of some sort to predict these future quantities based upon what happened in the past quite easily. But what if I now have access to weather forecasts for the next 24 hours too? (and historical forecast data too).

+ +

The architecture I came up with looks like this:

+ +

+ +

So I train the left upper branch on forecast data, then train the right upper branch on out-turn data, then freeze their layers and join them to form the final network in the picture, and train that on both forecasts and out-turns (when I say train, the output is always the forecast for the next 24 hours). This method does in fact have better performance than using forecasts or out-turns alone.

+ +

I guess my question is: has anyone seen any literature on this topic, and/or does anyone know a better way to solve these sorts of multivariate time series forecasting problems? And is my method okay or completely flawed?

+",25659,,2444,,1/31/2020 2:37,3/1/2020 3:02,How should I design the LSTM architecture for multivariate time series forecasting problems?,,1,0,,,,CC BY-SA 4.0 +12488,1,,,5/23/2019 11:27,,2,29,"

Is it possible to recognize the height and width of the sails of different kitesurfers and windsurfers taken from public webcams? And show this information on video in real time? Or on screenshots?

+",25851,,,,,5/23/2019 11:27,Sails size recognition,,0,2,,,,CC BY-SA 4.0 +12490,1,12502,,5/23/2019 15:36,,23,6897,"

Can the decoder in a transformer model be parallelized like the encoder?

+ +

As far as I understand, the encoder has all the tokens in the sequence to compute the self-attention scores. But for a decoder, this is not possible (in both training and testing), as self-attention is calculated based on previous timestep outputs. Even if we consider some techniques, like teacher forcing, where we are concatenating expected output with obtained, this still has a sequential input from the previous timestep.

+ +

In this case, apart from the improvement in capturing long-term dependencies, is using a transformer-decoder better than say an LSTM, when comparing purely on the basis of parallelization?

+",25859,,2444,,11/1/2019 3:02,4/14/2021 7:32,Can the decoder in a transformer model be parallelized like the encoder?,,3,0,,,,CC BY-SA 4.0 +12497,1,,,5/24/2019 5:54,,1,1263,"

What is the advantage of using a VAE over a deterministic auto-encoder?

+

For example, assuming we have just 2 labels, a deterministic auto-encoder will always map a given image to the same latent vector. However, one expects that after the training, the 2 classes will form separate clusters in the encoder space.

+

In the case of the VAE, an image is mapped to an encoding vector probabilistically. However, one still ends up with 2 separate clusters. Now, if one passes a new image (at the test time), in both cases the network should be able to place that new image in one of the 2 clusters.

+

How are these 2 clusters created using the VAE better than the ones from the deterministic case?

+",23871,,2444,user9947,12/24/2021 9:11,6/4/2022 14:06,What is the advantage of using a VAE over a deterministic auto-encoder?,,2,1,,,,CC BY-SA 4.0 +12498,2,,12497,5/24/2019 6:33,,0,,"

VAE's are not used for classification. They are used for inference or as Generative Models, while AE's can be used as data re-constructors (as you described above), de-noisers, classifiers. So the difference is generation of new data vs re-construction of data.

+ +

VAEs map the inputs to a hidden space, where each variable is enforced to have a probability distribution given by $N(0,1)$, i.e. the standard normal distribution. Once we have trained a VAE, we use only the decoder part to generate new samples.

+ +

Example:

+ +

+ +

Source: Stanford University CS231n slides

+ +

Assume there is an x_axis and a y_axis on the bottom and the left. Let the x_axis represent $x_1$ and y_axis $x_2$ which are our hidden variables. By varying $x_1$ and $x_2$ you can see what happens. Increasing $x_1$ changes face angle while increasing $x_2$ changes eye droop. Thus we can generate new data by varying the features in a latent representation.

+ +

For better understanding I highly recommend you check out these links:

+ +

Variational autoencoders.

+ +

Variational Autoencoders - Brian Keng

+ +

VAE - Ali Ghodsi

+ +

Generative Models - CS231n

+",,user9947,,,,5/24/2019 6:33,,,,3,,,12/24/2021 9:34,CC BY-SA 4.0 +12499,1,12500,,5/24/2019 7:04,,14,12277,"

I just learned about GAN and I'm a little bit confused about the naming of Latent Vector.

+ +
    +
  • First, in my understanding, the definition of a latent variable is a random variable that can't be measured directly (we need some calculation from other variables to get its value). For example, knowledge is a latent variable. Is that correct?

  • +
  • And then, in GANs, a latent vector $z$ is a random variable which is an input to the generator network. I read in some tutorials that it's generated using only a simple random function:

    + +
    z = np.random.uniform(-1, 1, size=(batch_size, z_size))
    +
  • +
+ +

Then how are the two things related? Why don't we use the term ""a vector with random values between -1 and 1"" when referring to $z$ (the generator's input) in GANs? +

+",16565,,,,,5/24/2019 20:50,Why is it called Latent Vector?,,2,4,,,,CC BY-SA 4.0 +12500,2,,12499,5/24/2019 7:31,,10,,"

It is called a latent variable because you cannot access it during training time (which means you cannot manipulate it). In a normal feed-forward NN, you cannot manipulate the values output by the hidden layers. It is similarly the case here.

+ +

The term originally came from RBMs (they used the term hidden variables). The interpretation of hidden variables in the context of RBMs was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule, which states ""Neurons that fire together, wire together."" Thus RBMs were used to find representations of the data in a space (generally lower dimensional than the original). This is the principle used in Auto Encoders as well. Thus, as you can see, we are not explicitly modelling the interaction between 2 features; how the process occurs is ""hidden"" from us.

+ +

So, the term latent basically can be attributed to the following ideas:

+ +
    +
  • We map higher dimensional data to lower dimensional data with no prior conviction of how the mapping will be done. The NN trains itself to find the best configuration.
  • We cannot manipulate this lower dimensional data. Thus it is ""hidden"" from us.
  • As we do not know what each dimension means, it is ""hidden"" from us.
+",,user9947,,,,5/24/2019 7:31,,,,4,,,,CC BY-SA 4.0 +12502,2,,12490,5/24/2019 10:57,,17,,"
+

Can the decoder in a transformer model be parallelized like the +encoder?

+
+

Generally NO:

+

Your understanding is completely right. In the decoder, the output of each step is fed to the bottom decoder in the next time step, just like an LSTM.

+

Also, like in LSTMs, the self-attention layer needs to attend to earlier positions in the output sequence in order to compute the output. Which makes straight parallelisation impossible.

+

However, when decoding during training, there is a frequently used procedure which doesn't take the previous output of the model at step t as input at step t+1, but rather takes the ground truth output at step t. This procedure is called 'Teacher Forcing' and makes the decoder parallelisable during training. You can read more about it here.

+
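A small NumPy sketch of the causal (look-ahead) mask that, together with teacher forcing, lets all target positions be processed in parallel during training (shapes and values are illustrative):

import numpy as np

T = 4                                        # target sequence length
scores = np.random.randn(T, T)               # raw attention scores for all positions at once
mask = np.triu(np.ones((T, T)), k=1)         # 1s above the diagonal mark "future" positions
masked = np.where(mask == 1, -1e9, scores)   # block attention to future tokens

# after a row-wise softmax, position t only attends to positions <= t,
# so the whole (shifted) ground-truth sequence can be fed in at once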

For a detailed explanation of how the Transformer works, I suggest reading this article: The Illustrated Transformer.

+
+

Is using a transformer-decoder better than say an lstm when comparing +purely on the basis of parallelization?

+
+

YES:

+

Parallelization is the main drawback of RNNs in general. In a simple way, RNNs have the ability to memorize but not parallelize while CNNs have the opposite. Transformers are so powerful because they combine both parallelization (at least partially) and memorizing.

+

In Natural Language Processing for example, where RNNs are used to be so effective, if you take a look at GLUE leaderboard you will find that most of the world leading algorithms today are Transformer-based (e.g BERT by GOOGLE, GPT by OpenAI..)

+

For better understanding of why Transformers are better than CNNs I suggest reading this Medium article: How Transformers Work.

+",23350,,23350,,4/14/2021 7:32,4/14/2021 7:32,,,,5,,,,CC BY-SA 4.0 +12504,2,,12497,5/24/2019 11:08,,1,,"

It seems that you think that we want to perform classification with VAEs or that images that we pass to the encoder fall into more than one category. The other answer already points out that VAEs are not typically used for classification but for generation tasks, so let me try to answer the main question.

+

The variational auto-encoder (VAE) and the (deterministic) auto-encoder both have an encoder and a decoder and they both convert the inputs to a latent representation, but their inner workings are different: a VAE is a generative statistical model, while the AE can be viewed just as a data compressor (and decompressor).

+

In an AE, given an input $\mathbf{x}$ (e.g. an image), the encoder produces one latent vector $\mathbf{z_x}$, which can be decoded into $\mathbf{\hat{x}}$ (another image which should be similar or related to $\mathbf{x}$). Compactly, this can be presented as $\mathbf{\hat{x}}=f(\mathbf{z_x}=g(\mathbf{x}))$, where $g$ is the encoder and $f$ is the decoder. This operation is deterministic: so, given the same $\mathbf{x}$, the same $\mathbf{z_x}$ and $\mathbf{\hat{x}}$ are produced.

+

In a VAE, given an input $\mathbf{x} \in X$ (e.g. an image), more than one latent vector, $\mathbf{z_{x}}^i \in Z$, can be produced, because the encoder attempts to learn the probability distribution $q_\phi(z \mid x)$, which can be e.g. $\mathcal{N}(\mu, \sigma)$, which we can sample from, where $\mu, \sigma = g_\theta(\mathbf{x})$. In practice, $g_\theta$ is a neural network with weights $\phi$. We can sample latent vectors $\mathbf{z_{x}}^i$ from $\mathcal{N}(\mu, \sigma)$, which should be "good" representations of a given $\mathbf{x}$.
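
+

As a concrete, purely illustrative sketch of this stochastic encoding step, here is how $g_\theta$ and the sampling of $\mathbf{z_{x}}^i$ could look in PyTorch (layer sizes are arbitrary and this is not meant as a reference implementation):

+

```python
import torch
import torch.nn as nn

# Minimal sketch of the stochastic encoder of a VAE (sizes are illustrative).
# The network maps x to the parameters (mu, log_var) of q(z | x); different
# calls with the same x can yield different latent vectors z.
class Encoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.hidden = nn.Linear(in_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        mu, log_var = self.mu(h), self.log_var(h)
        sigma = torch.exp(0.5 * log_var)
        eps = torch.randn_like(sigma)   # reparameterisation trick
        z = mu + sigma * eps            # a sample from q(z | x)
        return z, mu, log_var
```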

+

Why is it useful to learn $q_\phi(z \mid x)$? There are many use cases. For example, given multiple corrupted/noisy versions of an image, you can reconstruct the original uncorrupted image. However, note that you can use the AE also for denoising. Here you have a TensorFlow example that illustrates this. The difference is that, again, given the same noisy image, the model will always produce the same reconstructed image. You can also use the VAE for drug design [1]. See also this post.

+",2444,,2444,,6/4/2022 14:06,6/4/2022 14:06,,,,4,,,,CC BY-SA 4.0 +12505,2,,12499,5/24/2019 11:29,,5,,"

Latent is a synonym for hidden.

+ +

Why is it called a hidden (or latent) variable? For example, suppose that you observe the behaviour of a person or animal. You can only observe the behaviour. You cannot observe the internal state (e.g. the mood) of this person or animal. The mood is a hidden variable because it cannot be observed directly (but only indirectly through its consequences).

+ +

A good example of statistical model that is highly based on the notion of latent variables is the hidden Markov model (HMM). If you understand the HMM, you will understand the concept of a hidden variable.

+",2444,,2444,,5/24/2019 20:50,5/24/2019 20:50,,,,0,,,,CC BY-SA 4.0 +12506,1,,,5/24/2019 12:05,,4,218,"

I also asked this question here but I'm repeating it on this SE because I feel it is more relevant. No intention to spam.

+ +

I am researching into coding a solver for a variant of the Sokoban game with multiple agents, restrictions (eg. colors of stones, goals) and relaxations (push AND pull possible, etc.)

+ +

By researching online I have found classic papers in the field, like the Rolling Stone paper (by Andreas Junghanns and Jonathan Schaeffer) and Sokoban: Enhancing general single-agent search methods using domain knowledge from the same authors.

+ +

These solutions seem to be outdated and I am currently structuring my solver per the notes of the two most performant solvers: YASS and Sokolution.

+ +

From the research I've done, these two seem to be my best bet.

+ +

It is apparent that they are not enough by themselves to solve a multi-agent environment. Those solvers are made for a single agent. So far, I have failed to find useful multi-agent proposals.

+ +

In this context, my question is:

+ +
    +
  1. What can be considered state-of-the-art in order to: + +
      +
    • coordinate multiple agents with different goals, and
    • +
    • plug a solver's solution in and validate/edit it?
    • +
  2. +
  3. What are some search terms I can use to research this further?
  4. +
+ +

Thank you very much

+",25888,,16565,,5/25/2019 21:09,5/25/2019 21:09,Multi Agent Sokoban Search Solvers state of the art,,0,0,,,,CC BY-SA 4.0 +12507,2,,11285,5/24/2019 12:22,,5,,"

The expression ""latent space"" explicitly indicates that the space is associated with the mathematical concept of an hidden (or latent) variable, which cannot be observed directly, but only indirectly.

+ +

The expression ""embedding space"" refers to a vector space that represents an original space of inputs (e.g. images or words). For example, in the case of ""word embeddings"", which are vector representations of words. It can also refer to a latent space because a latent space can also be a space of vectors. However, an embedding space is not necessarily an hidden space. It is just another (vector) representation of another space.

+ +

These two expressions can be used interchangeably, also because the expression ""embedding space"" is often not formally defined.

+",2444,,,,,5/24/2019 12:22,,,,0,,,,CC BY-SA 4.0 +12508,1,,,5/24/2019 13:06,,7,331,"

According to this Wikipedia article

+
+

If the heuristic $h$ satisfies the additional condition $h(x) \leq d(x, y) + h(y)$ for every edge $(x, y)$ of the graph (where $d$ denotes the length of that edge), then $h$ is called monotone, or consistent. In such a case, $A^*$ can be implemented more efficiently — roughly speaking, no node needs to be processed more than once (see closed set below) — and $A^*$ is equivalent to running Dijkstra's algorithm with the reduced cost $d'(x, y) = d(x, y) + h(y) − h(x).$

+
+

Can someone intuitively explain why the reduced cost is of this form ?

+",25729,,2444,,11/19/2020 12:53,12/19/2020 13:04,A* is similar to Dijkstra with reduced cost,,1,1,,,,CC BY-SA 4.0 +12509,1,,,5/24/2019 15:27,,1,44,"

I want to be able to improve my lower-level, device-specific programming abilities to assist in future endeavors. Examples would be learning to write custom TensorFlow operations in C++ optimized to work on GPUs. Does anyone know good resources for finding tutorials showing which packages to use for which devices, etc.?

+ +

An approach other than just reading source and replicating until understanding would be nice.

+",25496,,,,,5/24/2019 15:27,Any good resources for learning programming GPU level operations?,,0,1,,4/14/2022 16:18,,CC BY-SA 4.0 +12510,1,12512,,5/24/2019 17:39,,5,845,"

No matter what I google or what paper I read, I can't find an answer to my question. In a deep convolutional neural network, let's say AlexNet (Krizhevsky, 2012), filters' weights are learned by means of back-prop.

+

But how are kernels themselves selected? I know kernels had been used in image processing long before CNNs, hence I'd imagine there would be a set of filters based on kernels (see, for example, this article) that are proven to be effective for edge detection and the likes.

+

Reading around the web, I also found something about "randomly generated kernels". Does anyone know if and when this practice is adopted?

+",25893,,2444,,12/11/2020 11:32,12/11/2020 15:13,Are filters fixed or learned?,,2,1,,,,CC BY-SA 4.0 +12511,2,,12510,5/24/2019 18:00,,1,,"

If you're looking for filters with known effect, the Gaussian filters do smoothing, the Gabor filters are useful for edge detection, etc.
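
+

For instance, a fixed, hand-designed kernel can be applied directly, without any learning. A tiny illustrative example using SciPy (the random array just stands in for a real image):

+

```python
import numpy as np
from scipy import ndimage

# Minimal sketch: applying a known, hand-designed kernel (a Gaussian) for smoothing.
image = np.random.rand(64, 64)                     # stand-in for a real image
smoothed = ndimage.gaussian_filter(image, sigma=2.0)
```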

+

Usually, in deep learning models where things are trained from scratch, the filters are randomly initialized and then learned by the model's training scheme. For the most part, without using any of the well-known kernels mentioned above.

+

Clarification on for the most part: so filters aren't initialized with the goal of knowing exactly which feature it will activate on, but what will assist the training procedure. Recently, people have even trained ResNets with or without batch normalization by finding good initial points -- it's an ongoing field of research.

+",25496,,2444,,12/11/2020 14:42,12/11/2020 14:42,,,,0,,,,CC BY-SA 4.0 +12512,2,,12510,5/24/2019 20:39,,6,,"

What are filters in image processing?

+

In the context of image processing (and, in general, signal processing), the kernels (also known as filters) are used to perform some specific operation on the image. For example, you can use a Gaussian filter to smooth the image (including its edges).

+

What are filters in CNNs?

+

In the context of convolutional neural networks (CNNs), the filters (or kernels) are the learnable parameters of the model.

+

Before training, the kernels are usually randomly initialised (so they are not usually hardcoded). During training, depending on the loss or error of the network (according to the loss function), the kernels (or filters) are updated, so that to minimise the loss (or error). After training, they are typically fixed. Incredibly, the filters learned by CNNs can be similar to the Gabor filter (which is thought to be related to our visual system [1]). See figure 9.19 of chapter 9 (p. 365) of the Deep Learning book by Goodfellow et al.
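
+

To make this concrete, here is a small illustrative PyTorch snippet (my own example, not from any particular paper) showing that the filters of a convolutional layer are randomly initialised, learnable parameters that change when the optimiser takes a step:

+

```python
import torch
import torch.nn as nn

# The filters of a Conv2d layer are ordinary learnable parameters,
# randomly initialised and then updated during training.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5)

print(conv.weight.shape)           # torch.Size([8, 3, 5, 5]) -> 8 filters of size 3x5x5
print(conv.weight.requires_grad)   # True: the filters are learned, not hardcoded

optimizer = torch.optim.SGD(conv.parameters(), lr=0.01)
x = torch.randn(1, 3, 32, 32)      # a dummy input image
loss = conv(x).abs().mean()        # a dummy loss, just to illustrate the update
loss.backward()
optimizer.step()                   # the kernel values change here
```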

+

The number of kernels that are applied to a given input (and more than one kernel is often applied) in a CNN is a hyper-parameter.

+

What are the differences and similarities?

+

In both contexts, the words "kernel" and "filter" are roughly synonymous, so they are often used interchangeably. Furthermore, in both cases, the kernels are related to the convolution (or cross-correlation) operation. More specifically, the application of a filter, which is a function $h$, to an input, which is another function $f$, is equivalent to the convolution of $f$ and $h$. In mathematics, this is often denoted by $f \circledast h = g$, where $\circledast$ is the convolution operator and $g$ is the result of the convolution operation and is often called the convolution (of $f$ and $h$). In the case of image processing, $g$ is the filtered image. In the case of CNNs, $g$ is often called an activation map.

+

Further reading

+

Take a look at this and this answers for more details about CNNs and the convolution operation, respectively.

+

You may also want to have a look at Visualizing what ConvNets learn for some info about the visualization of the kernels learned during the training of a CNN.

+",2444,,2444,,12/11/2020 15:13,12/11/2020 15:13,,,,2,,,,CC BY-SA 4.0 +12513,1,12515,,5/24/2019 21:23,,1,1139,"

If I use a desktop PC with a GPU, how long might it take to train a face recognition deep neural network on, let's say, a dataset of 2.6 million images and 2600 identities? I guess it should depend on various properties (e.g., type of the DNN). But I am just looking for a rough estimation. Is it a matter of hours/days or years?

+ +

Thanks!

+",25902,,1671,,10/15/2019 19:23,10/15/2019 19:23,How long it takes to train face recognition deep neural network? (rough estimation),,2,2,,,,CC BY-SA 4.0 +12515,2,,12513,5/25/2019 3:44,,2,,"

Training time depends on a lot of parameters. Some of them are:

+ +
    +
  1. Size of each image (resolution)
  2. +
  3. Color/Monochrome image (color image has 3 times data if you consider RGB image)
  4. +
  5. Like you mentioned, the type of DNN: the no. of layers of the DNN and the no. of neurons in each layer.
  6. +
  7. Total no. of images in the dataset. (2.6 million here)
  8. +
  9. The GPU you are using (you didn't mention which GPU you are using; there are GPUs with a wide range of capabilities, and to predict the time you need to know the exact specs of the GPU).
  10. +
  11. Training time also depends on RAM on your machine and also how fast host PC can transfer data to GPU for processing.
  12. +
  13. Since you mentioned face recognition, I am assuming you are using a CNN, but if you instead use a fully connected network, the training time will change and will obviously increase manifold.
  14. +
+ +

Your classification task and your database are mostly similar to ILSVRC, which uses the ImageNet database.

+ +

Making some reasonable assumptions for the parameters you didn't mention, I feel your task is similar to ILSVRC and I am predicting the training time will be a few days.

+ +

Below are the links which mention time of training for ILSVRC.

+ +

https://mxnet-tqchen.readthedocs.io/en/latest/tutorials/imagenet_full.html

+ +

Below are the details of imagenet database for your comparison

+ +

https://en.wikipedia.org/wiki/ImageNet

+",20760,,,,,5/25/2019 3:44,,,,1,,,,CC BY-SA 4.0 +12516,1,12517,,5/25/2019 5:08,,10,1102,"

I'm trying to learn neural networks by watching this series of videos and implementing a simple neural network in Python.

+ +

Here's one of the things I'm wondering about: I'm training the neural network on sample data, and I've got 1,000 samples. The training consists of gradually changing the weights and biases to make the cost function result in a smaller cost.

+ +

My question: Should I be changing the weights/biases on every single sample before moving on to the next sample, or should I first calculate the desired changes for the entire lot of 1,000 samples, and only then start applying them to the network?

+",25904,,2444,,11/30/2020 0:05,11/30/2020 0:05,Is neural networks training done one-by-one?,,2,0,,11/6/2019 0:44,,CC BY-SA 4.0 +12517,2,,12516,5/25/2019 6:01,,11,,"
+

Should I be changing the weights/biases on every single sample before moving on to the next sample,

+
+ +

You can do this, it is called stochastic gradient descent (SGD) and typically you will shuffle the dataset before working through it each time.

+ +
+

or should I first calculate the desired changes for the entire lot of 1,000 samples, and only then start applying them to the network?

+
+ +

You can do this, it is called batch gradient descent, or in some cases (especially in older resources) just assumed as the normal approach and called gradient descent.

+ +

Each approach offers advantages and disadvantages. In general:

+ +
    +
  • SGD makes each update sooner in terms of the amount of data that has been processed. So you may need fewer epochs before converging on reasonable values.

  • +
  • SGD does more processing per sample (because it updates more frequently), so is also slower in the sense that it will take longer to process each sample.

  • +
  • SGD can take less advantage of parallelisation, as the update steps mean you have to run each data item serially (as the weights have changed and error/gradient results are calculated for a specific set of weights).

  • +
  • SGD's individual steps typically make only very rough guesses at the correct gradients to change the weights in. This is both a disadvantage (performance of the NN against the objective on the training set can decrease as well as increase) and an advantage (there is less likelihood of getting stuck in a local stationary point due to the ""jitter"" these random differences cause).

  • +
+ +

What happens in practice is that most software allows you to compromise between batch processing and single-sample processing, to try and get the best performance and update characteristics. This is called mini-batch processing, which involves:

+ +
    +
  • Shuffling the dataset at the start of each epoch.

  • +
  • Working through the shuffled data, N items at a time, where N might vary from maybe 10 to 1000, depending on the problem and any constraints on the hardware. A common decision is to process the largest batch size that the GPU acceleration allows to run in parallel.

  • +
  • Calculate the update required for each small batch, then apply it.

  • +
+ +

This is nowadays the most common update method that most neural network libraries assume, and they almost universally will accept a batch size parameter in the training API. Most of the libraries will still call the simple optimisers that do this SGD; technically it is true that the gradients calculated are still somewhat randomised due to not using the full batch, but you may find this called mini-batch gradient descent in some older papers.
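
+

For completeness, here is a rough, purely illustrative sketch of the mini-batch procedure described above, in plain NumPy (the grad function is a hypothetical callback that returns the average gradient of the loss over a mini-batch):

+

```python
import numpy as np

# Minimal sketch of mini-batch gradient descent (illustrative names only).
def train(w, X, y, grad, lr=0.01, batch_size=32, epochs=10):
    n = len(X)
    for _ in range(epochs):
        idx = np.random.permutation(n)              # shuffle at the start of each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            w -= lr * grad(w, X[batch], y[batch])   # one update per mini-batch
    return w
```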

+",1847,,1847,,5/26/2019 7:47,5/26/2019 7:47,,,,5,,,,CC BY-SA 4.0 +12523,1,12526,,5/25/2019 11:52,,2,275,"

I'm implementing a neural network framework from scratch in C++ as a learning exercise. There is one concept I don't see explained anywhere clearly:

+

How do you go from your last convolutional or pooling layer, which is 3 dimensional, to your first fully connected layer in the network?

+

Many sources say, that you should flatten the data. Does this mean that you should just simply create a $1D$ vector with a size of $N*M*D$ ($N*M$ is the last convolution layer's size, and $D$ is the number of activation maps in that layer) and put the numbers in it one by one in some arbitrary order?

+

If this is the case, I understand how to propagate further down the line, but how does backprogation work here? Just put the values in reverse order into the activation maps?

+

I also read that you can do this "flattening" as a tensor contraction. How does that work exactly?

+",25910,,2444,user9947,5/13/2022 8:38,5/13/2022 8:38,How do you go from the last convolutional layer to your first fully connected layer?,,1,1,,,,CC BY-SA 4.0 +12524,1,12525,,5/25/2019 12:02,,4,58,"

In chapter 5 of Deep Learning book of Ian Goodfellow, some notations in the +loss function as below make me really confused.

+ +

I tried to understand $x,y \sim p_{data}$ means a sample $(x, y)$ sampled from original dataset distribution (or $y$ is the ground truth label). +The loss function in formula 5.101 seems to be correct for my understanding. Actually, the formula 5.101 is derived from 5.100 by adding the regularization.

+ +

Therefore, the notation $x,y \sim \hat{p}_{data}$ in formula 5.96 and 5.100 is really confusing to me whether the loss function is defined correctly (kinda typo error or not). If not so, could you help me to refactor the meaning of two notations, are they similar and correct?

+ +

+ +

+ +

+Many thanks for your help.

+",25911,,2444,,5/25/2019 12:24,5/25/2019 13:00,Being confused of distribution notations in Deep Learning book,,1,0,,,,CC BY-SA 4.0 +12525,2,,12524,5/25/2019 12:55,,3,,"

At page 130 of the same book, the author states that $\hat{p}_\text{data}$ is an empirical distribution defined by the training data. Similarly, at page 129, he states that $p_\text{data}$ is the true distribution that generates the set $\mathbb{X} = \{ \boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(m)} \}$.

+ +

What is the difference between $\hat{p}_\text{data}$ and $p_\text{data}$? You can think of $\hat{p}_\text{data}$ as a histogram that is calculated from the set $\mathbb{X}$ and $p_\text{data}$ as the true distribution from which the elements in $\mathbb{X}$ are drawn.

+ +

The subscript ${\boldsymbol{x}, y \sim \hat{p}_\text{data}}$ in the expectation $\mathbb{E}_{\boldsymbol{x}, y \sim \hat{p}_\text{data}}$ indicates that the expectation is taken with respect to the samples drawn from the empirical distribution $\hat{p}_\text{data}$. In other words, you will optimise the objective function $J$ using the training data. Have a look at this question for more info.

+ +

The subscript ${\boldsymbol{x}, y \sim p_\text{data}}$ in the expectation of formula $5.101$ is a typo. In fact, in this online version of the book, at page 151, the subscript of the expectation is ${\boldsymbol{x}, y \sim \hat{p}_\text{data}}$.

+",2444,,2444,,5/25/2019 13:00,5/25/2019 13:00,,,,1,,,,CC BY-SA 4.0 +12526,2,,12523,5/25/2019 13:03,,1,,"

Yes, you are correct (I think it is quite easily implementable in C++ with pointers). The arbitrary order is to be maintained though, since fully connected neural networks are not translationally invariant, i.e. you have to make sure that, if pixel $(1,5,6)$ is being supplied to node $38$ or being indexed as $37$ as a single datapoint to be input to a fully connected neural network, then from then on it must be fixed (you cannot put, say, pixel $(1,6,5)$ in node $38$).

+ +

Backpropagation works the same as it always works, it is tough to give a verbal explanation so I will give you this picture:

+ +

+ +

So, basically, if you visualise it like this, you can see how the differentiation will propagate: the ""flattening"" only reshapes the value lookup table, it does not change the way the values affect the final loss. So you take the gradient w.r.t. each value, convert the result back into a $3D$ map in the same way as before, and then propagate the gradients as you were doing in the previous layers.
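
+

A tiny illustrative PyTorch example of the same idea (the sum() just stands in for the fully connected part of the network): the flattening is a reshape, and the gradient flowing back through it is simply reshaped to the original 3D layout.

+

```python
import torch

# Flattening is just a reshape; autograd propagates gradients back through it.
x = torch.randn(8, 4, 4, requires_grad=True)   # D x N x M activation maps
flat = x.reshape(-1)                           # 1D vector of size D*N*M (fixed order)
out = flat.sum()                               # stand-in for the fully connected part
out.backward()
print(x.grad.shape)                            # torch.Size([8, 4, 4])
```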

+",,user9947,,user9947,5/25/2019 13:09,5/25/2019 13:09,,,,6,,,,CC BY-SA 4.0 +12527,1,,,5/25/2019 13:44,,2,99,"

I myself am not new to NLP, but for some reason I am unable to grasp purity of BERT. I have seen a ton of blogs, github repos, but none could clarify BERT usage to me.

+ +

It would be helpful if you could provide two things :

+ +
    +
  1. A clear implementation of BERT, preferably in ipython notebook.
  2. +
  3. Some papers on BERT excluding the original paper by google.
  4. +
+",25512,,25512,,12/18/2019 7:09,12/18/2019 7:09,Bert super easy implementation,,0,1,,,,CC BY-SA 4.0 +12528,1,,,5/25/2019 14:51,,1,25,"

I am new to machine learning and I would like to seek some advice/help for directions on implementing a binary classification for a series of data and tell if it is a straight line or a not?

+ +

for example I have the following data e.g.

+ +

Training data:

+ +
    +
  • 0,2,4 -- straight line
  • +
  • 0,10,5 -- not a straight line
  • +
  • 0,99,99 -- not a straight line
  • +
  • 0,1,2 -- straight line
  • +
+ +

My Validation data would be:

+ +
    +
  • 0,60,120 -- straight line
  • +
  • 0,120,60 -- not a straight line
  • +
+",25913,,,,,5/25/2019 14:51,Binary classification for a series of data (using Keras) to tell if it is a straight line or not a straight line,,0,0,,,,CC BY-SA 4.0 +12529,1,,,5/25/2019 14:53,,1,544,"

I'm working on an Advantage A2C implementation, and I just finished creating the value network $\hat{V_{\phi}}$. I train this network with the standard MSE loss of discounted rewards-to-go:$$\|\hat{V_{\phi}}(s_{t'}) - \sum_{t=t'}^{T}\gamma^{t-t'} r(s_t, a_t)\|^2$$

+ +

I would like to be able to evaluate and assess the performance and ability of the value network as I train, especially to see how that relates and interacts with the changes and improvements in the policy, however I'm not sure how to do this.

+ +

My first instinct was to track the loss after each batch of experiences, but this doesn't work. As the policy improves, episodes last longer, and the value loss increases. I understand why this happens, as it is harder to predict the rewards-to-go when the length of the future is more undetermined.

+ +

To fix this, I tried dividing the loss that I'm computing by the total number of steps in the batch of episodes. However, the loss is still increasing as the policy improves (as the episodes get longer). Why is this still happening? Then, is there anything I can do to get a better assessment of the quality of the value network?

+",25732,,,,,10/28/2019 9:23,A2C Critic Loss Interpretation,,1,0,,,,CC BY-SA 4.0 +12535,2,,11803,5/26/2019 4:16,,0,,"

One way to think of the difference between search and learning is that search usually entails a search key, and an algorithm hunts through the structure looking for a match between the key and an already-existing item. Whereas learning is the creation of the structure in the first place. But search and learning are related in that on receipt of an input (say from one or more sensors) the structure is initially searched to see if the input already exists, but if it doesn't then current input (when certain conditions are met) is added to the structure, and learning follows a failure of search.

+",17709,,,,,5/26/2019 4:16,,,,0,,,,CC BY-SA 4.0 +12536,2,,10272,5/26/2019 4:31,,0,,"

Turing had much difficulty explaining how a computer could learn, and in his 1950 paper asked, How could a machine learn? Its possible behavior is completely defined by its rules of operation (program), whatever the machine's history (past, present, future) might be. His proposed solution was ephemerally valid rules, whatever they might be - and he doesn't say.

+ +

So learning can be understood as the machine acquiring new behaviors but not by the behaviors being programmed into the machine by a human.

+ +

Perhaps a better way to look at this is causally. A human can define the causation (possible behavior) of a computer by programming it into the machine. But this is a case of the human using their knowledge to define how the machine will react to situations. Learning is the case where the machine itself acquires now behaviors or possible behaviors (not by a human using their knowledge of the world). And any such alleged intrinsic learning can easily be tested. If it helps the machine survive in a complex and hostile world, then it really is learning.

+",17709,,,,,5/26/2019 4:31,,,,1,,,,CC BY-SA 4.0 +12540,1,,,5/26/2019 13:32,,3,943,"

Copy from my Reddit post (sorry if this does not fit here, please tell me and I will delete it). I'm working on an implementation of PPO, which I plan to use in my (Bachelor's) thesis. To test whether my implementation works, I want to use the LunarLanderContinuous-v2 environment. Now my implementation seems to work just fine, but plateaus much too early: at an average reward of ~ -1.8 per timestep, where the goal should be somewhere around ~ +2.5 per timestep. As the implementation generally learns, I am somewhat confused as to why it then plateaus so early. Some details regarding my implementation, also here is the GitHub repo:

+
    +
  • I use parallelized environments via openAi's subproc_vecenv

    +
  • +
  • I use the Actor Critic Version of PPO

    +
  • +
  • I use Generalized Advantage Estimation as my Advantage term

    +
  • +
  • I only use finished runs (every run used in training has reached a terminal state)

    +
  • +
  • Even though Critic loss in the graphic below looks small it is actually rather large, as the rewards are normalized and therefore the value targets are actually rather small

    +
  • +
  • The Critic seemingly predicts a value independent of the state it is fed - that is, it predicts for every state just the average over all the values. That seems like harsh underfitting, which is weird as the network is already rather large for the problem. But this seems to be the most likely cause of the problem in my opinion. (Edit 1: added image.)

    +
  • +
+",25945,,18758,,7/11/2022 0:39,7/11/2022 0:39,"Implementation of PPO - Value Loss not converging, return plateauing",,0,1,,,,CC BY-SA 4.0 +12542,1,,,5/26/2019 19:07,,2,47,"

Given enough experimental data on the time taken for objects to fall to earth from different heights, one can create various models that will accurately predict the time it will take for an object falling from any height (in the inner atmosphere; this is a toy example).

+ +

In this simple example the model is deterministic, it will always produce an output given an input regardless of the amount of data over a small threshold—something akin to Newton’s gravity equation.

+ +

Try modelling something like the stock-market (the other extreme end of the scale) and the predictions will never reliably converge on a predictable accurate model.

+ +

Is there any way of knowing whether your domain will yield a deterministic or non-deterministic model or not?

+",11893,,,,,5/26/2019 19:07,How to know when a Environment will yield a deterministic model,,0,3,,,,CC BY-SA 4.0 +12543,1,,,5/26/2019 19:23,,2,60,"

In everyday life, it seems that we all have various habits and actions that we perform. For example, we wake up and check our email/Facebook etc. on our phones. We don't look at our current state right now and consider the values of all the possible future trajectories. We basically choose the action that maximizes our ""reward"" at our current state.

+ +
+

Question. Is it practical to randomly initialize actions $a \in A$, states $s \in S$ and a policy $\pi(a|s)$ and update this according to some algorithm (e.g. REINFORCE, exploration, etc.) to achieve some desired goal in your life? This could be done, for example, by uniformly sampling a random number in the interval $(0,1)$ and acting according to a policy $\pi(a|s)$. For example, suppose your goal is to get married, get a new house etc. What would be appropriate return functions in this case? Return is usually defined as the immediate reward plus the discounted cumulative future rewards. But is this the right definition for practical problems like marriage/dating, buying a house, etc.? Would you define return as $R_{\text{total}} = R_{\text{marriage}}+R_{\text{buying house}}$ in our example, where each of those individual returns are typical immediate plus discounted rewards, and try to maximize that? Or would it be better to maximize $R_{\text{marriage}}$ and $R_{\text{buying house}}$ individually?

+
+",25953,,25953,,5/26/2019 20:19,5/26/2019 20:19,Reinforcement Learning in Real Life/Practical Terms,,0,2,,,,CC BY-SA 4.0 +12544,1,12545,,5/26/2019 22:04,,3,336,"

I'm interested in ant colony optimization algorithms and bee algorithms, but I'm confused about what the applications of these algorithms are.

+ +

Can you suggest some examples of applications I can work on?

+",25940,,2444,,5/26/2019 23:20,5/26/2019 23:47,What are the applications of ant colony optimization algorithms?,,1,1,,,,CC BY-SA 4.0 +12545,2,,12544,5/26/2019 23:09,,2,,"

The first ant colony optimisation algorithm was introduced by Marco Dorigo in the report Positive Feedback as a Search Strategy (1991) and his PhD thesis Optimization, Learning and Natural Algorithms (1992). He's still one of the leading figures in the field of swarm intelligence (having also written or co-written several papers and books). Another important person that contributed to ACO algorithms is Luca Gambardella (co-director of IDSIA).

+ +

There are several ACO algorithms. They are all based on the way real ants behave, that is, by leaving a substance called ""pheromone"" on the ground in order to communicate. More specifically, the amount of pheromone is associated with value (e.g. food): more pheromone means more value. (It should now be clear the reason behind the queues real ants form).

+ +

A list of ACO algorithms can be found at http://iridia.ulb.ac.be/~mdorigo/ACO/publications.html. For reproducibility, here's a (non-exhaustive) list:

+ +
    +
  • Ant System
  • +
  • Elitist Ant System
  • +
  • Ant-Q
  • +
  • Ant Colony System
  • +
  • Max-Min Ant System
  • +
  • Rank-based Ant System
  • +
  • ANTS
  • +
  • Hyper Cube - ACO
  • +
+ +

ACO algorithms have been applied to combinatorial and NP-complete problems (e.g. the travelling salesman problem). ACO algorithms are thus a collection of meta-heuristic and probabilistic algorithms (in the same family as simulated annealing) used to tackle problems that are often considered intractable. The related Wikipedia article contains a more exhaustive section dedicated to the applications of these algorithms. ACO algorithms are often combined with local search algorithms (like 2-opt or 3-opt).

+ +

I would suggest you to start with the travelling salesman problem, which was the first application of these algorithms. You can have a look at the reference implementations at http://iridia.ulb.ac.be/~mdorigo/ACO/aco-code/public-software.html, where you can also find software to solve specific tasks (not just the TSP, such as maximum clique problems).
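
+

To give you a feeling of what an implementation involves, here is a very small, purely illustrative sketch of the basic Ant System applied to a tiny random TSP instance (the parameter values and the random distance matrix are arbitrary assumptions, not tuned or reference values):

+

```python
import numpy as np

# Minimal, illustrative sketch of the basic Ant System for a tiny TSP instance.
rng = np.random.default_rng(0)
n = 5                                           # number of cities
dist = rng.uniform(1, 10, size=(n, n))
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, np.inf)

tau = np.ones((n, n))                           # pheromone levels
alpha, beta, rho, Q, n_ants = 1.0, 2.0, 0.5, 1.0, 10

best_tour, best_len = None, np.inf
for iteration in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [rng.integers(n)]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False
            weights = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta)
            weights = np.where(mask, weights, 0.0)
            nxt = rng.choice(n, p=weights / weights.sum())
            tour.append(int(nxt))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((tour, length))
        if length < best_len:
            best_tour, best_len = tour, length
    tau *= (1 - rho)                            # pheromone evaporation
    for tour, length in tours:                  # pheromone deposit
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print(best_tour, best_len)
```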

+",2444,,2444,,5/26/2019 23:47,5/26/2019 23:47,,,,2,,,,CC BY-SA 4.0 +12546,5,,,5/26/2019 23:21,,0,,"

For more info, have a look at http://iridia.ulb.ac.be/~mdorigo/ACO/ and https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms.

+",2444,,2444,,5/29/2019 17:51,5/29/2019 17:51,,,,0,,,,CC BY-SA 4.0 +12547,4,,,5/26/2019 23:21,,0,,"For questions related to the ant colony optimization (ACO) algorithms, which are population-based and stochastic metaheuristics that can be used to find approximate solutions to difficult optimization problems, such as the TSP. In ACO, artificial ants (the agents) iteratively search for good solutions to a given optimization problem. To apply ACO, the optimization problem is transformed into the problem of finding the best path on a weighted graph.",2444,,2444,,11/1/2019 1:39,11/1/2019 1:39,,,,0,,,,CC BY-SA 4.0 +12548,1,12582,,5/27/2019 6:28,,2,363,"

There is this problem I have encountered: I was trying to classify the pixels from an input image into classes, sort of like segmentation, using an encoder-decoder CNN. The “interesting” pixels are usually located in the top right corner of the input image, but the input images are too big, so I have to slice them into patches; by doing this, each input patch loses its “which region of the whole picture it’s from” information.

+ +

I'm using PyTorch. I thought of manually adding this patch location info into the input, but then it would be convolved, which doesn't make sense to me since it's not a part of an image.

+ +

I'm new to this, and I'm not sure if I'm thinking about the whole thing right. How should I correctly add this info into the input manually, or are there some keywords I can research, in order to let the CNN take position into account? Thank you.

+",25963,,,,,5/29/2019 12:42,How to add some data input in a CNN?,,1,2,,,,CC BY-SA 4.0 +12551,1,,,5/27/2019 11:45,,1,765,"

I'm about to create an OpenAI Gym environment for a flight simulator. I'm wondering how to cope with the fact that the result and reward for some action need a considerable time to advance through the system, due to the inherent time constants.

+ +

In the easy example Gym environments (e.g. cartpole, or some games), the step can instantly be executed and the resulting reward can be calculated.

+ +

In my continuous control system (aka flight simulator), there is some reaction time needed until I can calculate the result of my action. E.g. when I pull the stick, it takes some time until the aircraft lifts its nose. So there is a considerable delay (maybe in the seconds ballpark) between commanding the action to the environment and the earliest observation of that result and its corresponding reward.

+ +

How can I cope with that? As far as I understand, the env.step(action) function blocks until it comes back with a new observation and a corresponding reward.

+ +
    +
  • How can I cope with long-lasting reward calculations?
  • +
  • Is it possible to have overlapping actions somehow? E. g. command a new action every 100ms, but get the reward for that action only 1 second later. In this case there would be always 10 rewards pending.
  • +
+ +

I hope I made my point clear. Don't hesitate to ask for further details in the comments.

+ +

Any hints are welcome. Is there anything to read out in the wild dealing with a similar issue?

+ +

Cheers,

+ +

Felix

+",25972,,,,,5/27/2019 15:08,OpenAI Gym interface when reward calculation is delayed? (continuous control with considerable reaction time),,1,0,,,,CC BY-SA 4.0 +12552,1,12572,,5/27/2019 11:49,,3,153,"

I am trying to formulate and solve the following problem of image mutation. Suppose I am trying to insert an object image into a ""background"" image of several objects, and I will need to look for a ""sweet spot"" to insert the image:

+ +

+ +

I am tentatively trying to formulate the problem into a reinforcement learning process, with the following elements:

+ +

0. initial stage:

+ +
    +
  • a background image where the location of objects within the image has been marked (let's suppose we have a perfect object detector)

  • +
  • another image of a new object, let's say, a human

  • +
+ +

1. action space:

+ +
    +
  • location (x, y) for the object image to be inserted; in that sense the action space is quite large.
  • +
+ +

2. environment:

+ +
    +
  • each step I will have a new image to ""learn from"".

  • +
  • An oracle function F returns 1 or 0 (roughly one computation of F takes 30 seconds). + This function tells me the latest synthesized image hits the ""sweet spot"" or not (1 means hit). If so, I will stop the search and return the image.

  • +
+ +

3. constraint:

+ +

the newly inserted object shouldn't overlap with the original objects in the figure.

+ +

While my gut feeling is that this problem is somehow similar to the classic ""maze escape"" problem which can be solved well with reinforcement learning, the action space seems quite large in this problem.

+ +

So here are my questions:

+ +
    +
  1. In case I would like to formulate this ""beautify"" image problem into a ""deep"" reinforcement learning problem, how can I learn from such large action space? Or is it really suitable for a reinforcement learning process?

  2. +
  3. Can I somehow subsume the ""non-overlapping"" constraint into the oracle function F? If so, how should I decide the reward score? Any principled or empirical way of deciding so?

  4. +
+",25973,,,,,5/28/2019 20:01,beautify an image with reinforcement learning,,1,0,,,,CC BY-SA 4.0 +12553,2,,12516,5/27/2019 12:56,,1,,"

Ideally, you need to update the weights by going over all the samples in the dataset. This is called Batch Gradient Descent. But, as the no. of training examples increases, the computation becomes huge and training will be very slow. With the advent of deep learning, training set sizes are in the millions, and computation using all training examples is very impractical and very slow.

+ +

This is where, two optimization techniques became prominent.

+ +
    +
  1. Mini-Batch Gradient Descent
  2. +
  3. Stochastic Gradient Descent (SGD)
  4. +
+ +

In mini-batch gradient descent, you use a batch size that is considerably less than total no. of training examples and update your weights after passing through these examples.

+ +

In stochastic gradient descent, you update the weights after passing through each training example.
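
+

In most frameworks, you switch between these variants simply by choosing the batch size. A minimal illustrative Keras example (dummy data and model, just to show the idea, not tuned in any way):

+

```python
import numpy as np
from tensorflow import keras

# Dummy data and a dummy model, purely to illustrate the batch_size argument.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=(1000, 1))

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd', loss='binary_crossentropy')

model.fit(X, y, epochs=5, batch_size=len(X))  # batch gradient descent
model.fit(X, y, epochs=5, batch_size=1)       # stochastic gradient descent
model.fit(X, y, epochs=5, batch_size=32)      # mini-batch gradient descent
```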

+ +

Coming to advantages and disadvantages of the three methods we discussed.

+ +
    +
  • Batch gradient descent gradually converges to the global minimum, but it is slow and requires huge computing power.

  • +
  • Stochastic gradient descent converges fast but not to the global minimum; it converges somewhere near the global minimum and hovers around that point, but doesn't ever converge to the global minimum. But the converged point in stochastic gradient descent is good enough for all practical purposes.

  • +
  • Mini-batch gradient descent is a trade-off between the above two methods. But, if you have a vectorized implementation of the weight update and you are training with a multi-core setup or submitting the training to multiple machines, this is the best method both in terms of training time and convergence to the global minimum.

  • +
+ +

You can plot the cost function, w.r.t the no. of iterations to understand the difference between convergence in all the 3 types of gradient descent.

+ +
    +
  • The batch gradient descent plot falls smoothly and slowly, gets stabilized and gets to the global minimum.

  • +
  • The stochastic gradient descent plot will have oscillations; it will fall fast but hovers around the global minimum.

  • +
+ +

These are some blogs with a detailed explanation of the advantages and disadvantages of each method, and also graphs of how the cost function changes with iterations for all three methods.

+ +

https://adventuresinmachinelearning.com/stochastic-gradient-descent/

+ +

https://machinelearningmastery.com/gentle-introduction-mini-batch-gradient-descent-configure-batch-size/

+",20760,,20760,,5/27/2019 13:01,5/27/2019 13:01,,,,0,,,,CC BY-SA 4.0 +12556,2,,12513,5/27/2019 14:41,,1,,"

The YELP dataset (200k images) took about 5 hours of training to identify five (5) classes on a GPU - an Nvidia 1080 Ti with 11 GB RAM. So I guess in your case it will take days. Again, it will depend on your GPU configuration and the type of architecture you will be using.

+",25978,,,,,5/27/2019 14:41,,,,0,,,,CC BY-SA 4.0 +12557,2,,12551,5/27/2019 15:08,,1,,"

It's not on your end, as the creator of the flight simulator, to worry about which action should get the credit for a reward that happened some time after the action was taken. You should return the reward when the actual event happens, not when the action that caused it happened. It's the job of the reinforcement learning agent to figure that out. For example, if you want to give a reward when the airplane's nose is at 45 degrees from the horizontal axis, you should return the reward when that event actually happens; the RL agent should figure out that the crucial action happened some time ago. This may be difficult for the agent, but it's up to the user to use a proper algorithm and a proper exploration strategy to solve the problem.

+",20339,,,,,5/27/2019 15:08,,,,2,,,,CC BY-SA 4.0 +12558,1,12559,,5/27/2019 18:48,,4,792,"

What is the definition of machine learning? What are the advantages of machine learning?

+",24095,,2444,,11/14/2020 15:25,3/11/2021 10:01,What is machine learning?,,2,1,,,,CC BY-SA 4.0 +12559,2,,12558,5/27/2019 19:04,,8,,"

What is machine learning?

+

Machine learning (ML) has been defined by multiple people in similar (or related) ways.

+

Tom Mitchell, in his book Machine Learning (1997), defines an ML algorithm/program (or machine learner) as follows.

+
+

A computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$.

+
+

This is a quite reasonable definition, given that it describes algorithms such as gradient descent, Q-learning, etc.

+

In his book Machine Learning: A Probabilistic Perspective (2012), Kevin P. Murphy defines the machine learning field/area as follows.

+
+

a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty (such as planning how to collect more data!)

+
+

Without referring to algorithms or the field, Shai Shalev-Shwartz and Shai Ben-David define machine learning as follows

+
+

The term machine learning refers to the automated detection of meaningful patterns in data.

+
+

In all these definitions, the core concept is data or experience. So, any algorithm that automatically detects patterns in data (of any form, such as textual, numerical, or categorical) to solve some task/problem (which often involves more data) is a (machine) learning algorithm.

+

The tricky part of this definition, which often causes a lot of misconceptions about what ML is or can do, is probably automatically: this does not mean that the learning algorithm is completely autonomous or independent from the human, given that the human, in most cases, still needs to define a performance measure (and other parameters, including the learning algorithm itself) that guides the learning algorithm towards a set of solutions to the problem being solved.

+

As a field, ML could be defined as the study and application of ML algorithms (as defined by Mitchell's definition).

+

Sub-categories

+

Murphy and many others often divide machine learning into three main sub-categories

+
    +
  • supervised learning (or predictive), where the goal is to learn a mapping from inputs $\textbf{x}$ to outputs $y$, given a labeled set of input-output pairs

    +
  • +
  • unsupervised learning (or descriptive), where the goal is to find "interesting patterns" in the data

    +
  • +
  • reinforcement learning, which is useful for learning how to act or behave when given an occasional reward or punishment signals

    +
  • +
+

However, there are many other possible sub-categories (or taxonomies) of machine learning techniques, such as

+
    +
  • deep learning (i.e. the use of neural networks to approximate functions and related learning algorithms, such as gradient descent) or
  • +
  • probabilistic machine learning (machine learning techniques that provide uncertainty estimation)
  • +
  • weakly supervised learning (i.e. SL where labeling information may not be completely accurate)
  • +
  • online learning (i.e. learning from a single data point at a time rather than from a dataset of multiple data points)
  • +
+

These sub-categories can also be combined. For example, deep learning can be performed online or offline.

+

Related fields

+

There is also a related field known as computational (or statistical) learning theory, which concerned with the theory of learning (from a computational and statistical point of view). So, in this field, we are interested in questions like "How many samples do we need to approximately compute this function with a certain error?".

+

Of course, given that machine learning is a set of algorithms and techniques that are data- or experience-driven, one may wonder what the difference between machine learning and statistics is. In fact, in many cases, they are very similar and ML adopts many statistical concepts, and you may even read on the web that machine learning is just glorified statistics. ML and statistics often tackle the same problem, but from a different perspective or with slightly different approaches (and the terminology may slightly change from one field to the other). If you are interested in a more detailed explanation of their difference, you could read Statistics versus machine learning (2018) by Danilo Bzdok et al.

+

What is machine learning good for?

+

ML can potentially be used to (at least partially) automate tasks that involve data and pattern recognition, which were previously performed only by humans (e.g. translation from one human language, such as English, to another, such as Italian). However, machine learning cannot automate all tasks: for example, it cannot infer causal relations from the data (which often must be done by humans), unless you include causal inference as part of machine learning. If you are interested in causal inference, you could take a look at the paper Causal Inference by Judea Pearl (Turing Award for his work in causal inference!).

+",2444,,2444,,3/11/2021 10:01,3/11/2021 10:01,,,,0,,,,CC BY-SA 4.0 +12562,1,,,5/28/2019 2:09,,1,36,"
+

Given $F_1,F_2,..,F_n$ as a set of final exams of subjects taken by students $S_1,..,S_k$ in $h$ slots, such that no student takes two exams in a single slot. Here the objective is to maximize the number of exams taken by a student in a single slot.

+
+ +

I reduced the problem to graph coloring problem where the nodes of the graph are the final exams of subjects and edges would be the students. The chromatic number of graph would be h.

+ +

But I am not sure how to represent the edges, as there can be multiple of them (the same pair of exams may be taken by multiple students), and would that have any impact on the constraint satisfaction formulation?

+",25984,,25984,,5/28/2019 9:39,5/28/2019 9:39,Write Constraint Satisfaction Formulation for problem,,0,0,,,,CC BY-SA 4.0 +12563,1,,,5/28/2019 7:32,,1,40,"

Is it better to train one neural network for dispersed labeled data with a large number of classes, or to first classify the data by unsupervised learning and then train each part with a separate NN? I mean, by unsupervised learning we help each NN to classify less dispersed data with a lower number of labels. So, for test data, the class of the data is found by unsupervised learning and then the final label is found by the network associated with that class. Does this question generally have an answer, or does it depend on the data and need to be answered in practice?

+",25988,,,,,5/28/2019 7:32,One end to end Neural network or many task-specific ones?,,0,0,,,,CC BY-SA 4.0 +12564,1,,,5/28/2019 9:16,,2,127,"

I want to develop a fraud detection application for the stock market using Blockchain technology. We have some patterns that define the anomaly, for use with supervised machine learning, but there is one question remaining:
What is the role of machine learning and Blockchain in detecting an anomaly like fraud?

+",25452,,25452,,6/7/2019 19:36,6/7/2019 19:36,Application of Blockchain in Fraud detection in stock market,,0,0,0,,,CC BY-SA 4.0 +12565,1,,,5/28/2019 9:45,,6,909,"

When training a neural network, we often run into the issue of overfitting.

+ +

However, is it possible to put overfitting to use? Basically, my idea is, instead of storing a large dataset in a database, you can just train a neural network on the entire dataset until it overfits as much as possible, then retrieve data ""stored"" in the neural network like it's a hashing function.

+",23941,,2444,,12/19/2021 23:14,12/19/2021 23:14,Is it possible for a neural network to be used to compress data?,,4,0,,,,CC BY-SA 4.0 +12566,2,,12565,5/28/2019 10:26,,6,,"

The auto-encoder (AE) can be used to learn a compressed representation (a vectorised hash value) of each observation in the training dataset, $z$, which can then be used to later retrieve the original (or similar) observation.
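
+

A minimal illustrative sketch of this idea in PyTorch (sizes are arbitrary): the encoder produces a short code $z$ for each observation, and the decoder can later reconstruct an approximation of the original from that code.

+

```python
import torch
import torch.nn as nn

# Minimal sketch of an auto-encoder used as a (lossy) compressor.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))

x = torch.rand(1, 784)   # an observation from the dataset
z = encoder(x)           # 16-dimensional compressed code
x_hat = decoder(z)       # (approximate) retrieval of the original
```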

+ +

The variational auto-encoder (VAE), a statistical variation of AE, can also be used to generate objects similar to the observations (or inputs) in the training set.

+ +

There are other data compressor models, for example, Helmholtz machine, which precedes both the AE and VAE.

+",2444,,2444,,5/29/2019 23:16,5/29/2019 23:16,,,,0,,,,CC BY-SA 4.0 +12567,2,,12565,5/28/2019 13:42,,1,,"

Train a network that has a large input and a small output. Turn it upside down (yes, you may do that). By giving the small outputs corresponding to an input, the ideally-trained network will generate that large data. But you see, in all compression there will be data lost, so the generated data will be slightly different from the original dataset. So it's suitable for statistical data, like images, but not for structured data like text or, the most unsuitable example, program source code.

+",25836,,,,,5/28/2019 13:42,,,,0,,,,CC BY-SA 4.0 +12568,1,,,5/28/2019 14:25,,2,72,"

For logistic regression, the cost function is defined as: \begin{equation} \text{Cost}(h_{\theta}(x), y) = -y\log(h_{\theta}(x))-(1-y)\log(1-h_{\theta}(x)) \end{equation}

+ +

I now have a nonlinear function \begin{equation} h_{\theta}^{(i)}(x)=xe^{-j\theta_i|x|^2} \end{equation} where $i$ denotes the $i$th training sample. How should I define a cost function for this particular nonlinear function?

+",25806,,2444,,5/29/2019 23:13,5/29/2019 23:13,How to define cost function for custom nonlinear functions?,,0,5,,,,CC BY-SA 4.0 +12569,1,,,5/28/2019 14:44,,3,45,"

I am tentatively trying to train a deep reinforcement learning model on the maze escaping task, and each time it takes one image as the input (e.g., a different ""maze"").

+ +

Suppose I have about $10K$ different maze images, and the ideal case is that after training $N$ mazes, my model would do a good job to quickly solve the puzzle in the rest $10K$ - $N$ images.

+ +

I am writing to inquire about good ideas/empirical evidence on how to select a good $N$ for the training task.

+ +

And in general, how should I estimate and enhance the ability of ""transfer learning"" of my reinforcement model? Make it more generalized?

+ +

Any advice or suggestions would be appreciated very much. Thanks.

+",25973,,,,,5/28/2019 14:44,Training a reinforcement learning model with multiple images,,0,0,,,,CC BY-SA 4.0 +12570,1,15733,,5/28/2019 16:40,,2,272,"

In the book ""Reinforcement Learning: An Introduction"" (2018) Sutton and Barto define the prediction objective ($\overline{VE}$) as follows (page 199): +$$\overline{VE}\doteq\sum_{s\epsilon S} \mu(s)[v_{\pi}(s)-\hat{v}(s,w)]^2$$ +Where $v_{\pi}(s)$ is the true value of $s$ and $\hat{v}(s,w)$ is the approximation of it. Furthermore it is stated that this is ""often used in plots"".

+ +

How do I get the true value $v_{\pi}(s)$? And if there is a way to obtain the value, why would I need to approximate it?

+",21299,,2444,,10/5/2019 8:52,10/5/2019 11:00,How do we get the true value in the prediction objective in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 +12571,2,,12570,5/28/2019 18:27,,0,,"

I may be wrong about this but my best interpretation without having access to the book is that

+ +

How do I get the true value vπ(s)?

+ +

The True value I think is whatever the most correct answer is for the prediction. This should be training data. Some companies like facebook spend a lot of money to hire people to create hand-detailed data to fill in this value.

+ +

And if there is a way to obtain the value, why would I need to approximate it?

+ +

You are approximating it to test the accuracy of your model - your prediction. It seems to me this equation is only necessary when training the model.

+ +

The result of this is your total error between all your predictions. The lower the value, the better your model. +https://en.wikipedia.org/wiki/Mean_squared_error

+",1720,,,,,5/28/2019 18:27,,,,4,,,,CC BY-SA 4.0 +12572,2,,12552,5/28/2019 19:44,,3,,"

The purpose of Reinforcement Learning is to maximize some notion of cumulative reward, leading me to point (1): as far as I understand, there are no timesteps in your problem and the ""reward"" is immediate. Thus, I don't think reinforcement learning is suitable here.

+ +

On the other hand, in supervised learning, regression is the task of approximating a mapping function (f) from input variables (X) to a continuous output variable (y). It has much in common with your case. If I am not wrong, you are trying to approximate a function that maps image data to (x,y) coordinates (the ""sweet spot""). So, I think regression would be a better way to go for you.

+ +

You could either generate a dataset at first, with image data and associated (x,y) coordinates only for spots validated by your function F, and then train a regression predictive model. Or you could train your model with online learning, by generating batches of images and sweet spot coordinates at each step.

+ +

Concerning point (2), it will highly depend on how your F function is made. Since overlapping spots cannot be sweet spots, the simplest would be to make your F function return 0 for those spots.

+",23818,,23818,,5/28/2019 20:01,5/28/2019 20:01,,,,0,,,,CC BY-SA 4.0 +12573,1,,,5/29/2019 1:53,,2,112,"

I am working on a project that involves using a ConvNet to identify screws. I am able to train from scratch a ConvNet based on the first version of the inception network, but shallower (only 3 inception modules), and at the moment classifying only 45 different screws (the goal is to cover a significant part of a catalog containing ~ 4000 different items).

+ +

My training set consists of rectangular grayscale images of the screws (150 x 300 pixels), approx. 700 images for each class.

+ +

The prototype of this model has been working pretty well with 45 classes (test set accuracy ~98%), but I am starting to worry about two things:

+ +

1) Many screws in the catalog have similar shapes, but different sizes, so the production model will have to be able to infer the scale of the objects. This is important because future users will image screws with different smartphone cameras, yielding different screw sizes in the images fed to the ConvNet. I haven't been able to find much about this in the literature. And from what I have read about ConvNets, they are good at detecting shapes, which mean that two objects, the first 1 meter long and the other 1 centimeter long, would be considered ""equal"" by a ConvNet if they appeared similar in an image. One (not very elegant) solution I imagined would be to include a scale in the training images, by means either of a ruler or a common object (a coin, for example). Anyway, I wonder if this problem has a simple solution, since I believe many people might have faced it.

+ +

2) All of the well-known ConvNets I know of are trained with the ImageNet dataset, which comprises 1000 different classes. My screw dataset will ultimately have more than that. Is that an issue? Assuming I have the hardware resources to train very large fully connected layers and softmax output layers, is there an upper bound to the number of classes a ConvNet can identify?

+",26003,,,,,5/29/2019 1:53,Object size identification and maximum number of classes with convolutional neural networks,,0,0,,,,CC BY-SA 4.0 +12574,2,,7573,5/29/2019 6:39,,0,,"

Self-conscious AIs are important because consciousness is what makes us human. And scientists are trying to develop AIs that are closer to humans and human behaviour. The thought process of humans is very complicated, and to have it developed in AIs could really replace us with robots that have consciousness. The only reason AIs are not taking up work that involves decision-making is that they do not have consciousness.

+",26007,,,,,5/29/2019 6:39,,,,1,,,,CC BY-SA 4.0 +12575,2,,12458,5/29/2019 6:47,,0,,"

Aren't we already doing things with technology that are not ethical? But to answer your question, yes, it will be unethical to use gender to target a user. It will mean that the AI will be fed a list of things to look for for a certain gender, which is, again, gender bias. I find targeting users unethical, but using gender as a basis would really be something else. I am not sure if I am on the right track, but ads are run keeping in mind the target audience, which involves the age, gender, etc. That is done by humans, but still.

+",26007,,,,,5/29/2019 6:47,,,,0,,,,CC BY-SA 4.0 +12576,1,,,5/29/2019 8:19,,3,688,"

Autoencoders are used for unsupervised anomaly detection by first learning the features of the data set with mainly "normal" data points. Then new data can be considered anomalous if the new data has a large reconstruction error, i.e. it was hard to fit the features as in the normal data.

+

Even if the training is supervised by learning to reconstruct the same data, how is the reconstruction error computed for the new data?

+",26009,,2444,,2/17/2021 15:21,7/12/2022 20:08,How can auto-encoders compute the reconstruction error for the new data?,,1,2,0,,,CC BY-SA 4.0 +12577,1,,,5/29/2019 8:49,,5,5341,"

I am a newbie in reinforcement learning working on a college project. The project is related to optimizing hardware power. I am running proprietary software in a Linux distribution (16.04). The goal is to use reinforcement learning and optimize the power consumption of the system (keeping the performance degradation of the software as small as possible).

+ +

For this, I need to create a custom environment for my reinforcement learning. From reading different materials, I could understand that I need to make my software as a custom environment from where I can retrieve the state features. Action space may include instructions to Linux to change the power (I can use some predefined set of power options).

+ +

The proprietary software is a cellular network and the state variables include latency or throughput. To control the power action space, rapl-tools can be used to control CPU power.

+ +

I just started working on this project and everything seems blurry. What is the best way to make this work? Are there some tutorials or materials that would help me make things clear? Is my understanding of creating a custom environment for reinforcement learning correct?

+",25885,,25885,,5/29/2019 12:24,10/26/2019 13:02,How to create a custom environment for reinforcement learning,,1,2,,,,CC BY-SA 4.0 +12579,1,12598,,5/29/2019 10:08,,2,586,"

In the paper, Contextual String Embeddings for Sequence Labeling, the authors state that

+ +

\begin{equation} +P(x_{0:T}) = \prod_{t=0}^T P(x_t|x_{0:t-1}) +\end{equation}

+ +

They also state that, in the LSTM architecture, the conditional probability $P(x_{t}|x_{0:t-1})$ is approximately a function of the network output $h_t$.

+ +

\begin{equation} +P(x_{0:T}) \approx \prod_{t=0}^{T} P(x_t|h_t;\theta) +\end{equation}

+ +

Why is this equation true?

+",26012,,2444,,5/30/2019 14:57,5/30/2019 16:46,Why can we approximate the joint probability distribution using the output vector of an LSTM?,,1,0,,,,CC BY-SA 4.0 +12580,2,,12458,5/29/2019 10:43,,0,,"

The general principle is that data belongs to the person who generated it, and permission should be sought before you use someone else's possessions. +So, it is ethical to use it to infer whatever you like about a person, provided that you ask permission first.

+",12509,,,,,5/29/2019 10:43,,,,0,,,,CC BY-SA 4.0 +12581,2,,12577,5/29/2019 12:42,,4,,"

This answer assumes that your ""proprietary software"" is a simulation of, or controller for a real environment.

+ +
+ +

Yes you will very likely need to write software to represent your environment in some standard way as a Reinforcement Learning (RL) environment. Depending on details, this may be trivially easy or it might be quite involved.

+ +

An environment in RL must have the following traits in general, in order to interface with RL agent software:

+ +
  • A state representation. This will typically be an object or array of data that matches sensor readings from the real environment. It is important to RL that the state has the Markov property so that predictions of value can be accurate. For some environments that will mean calculating derived values from observations, or representing a combined history of last few observations from sensors as the state.

    • The state can either be held inside an internal representation of the environment, which is a typical object-oriented approach, or it can be passed around as a parameter to other functions.

    • A simple state might just be a fixed size array of numbers representing important traits of the environment, scaled between -1 and 1 for convenience when using it with neural networks.

  • An action representation.

    • A simple action representation could just be an integer which identifies which of N actions has been chosen, starting from 0. This allows for a basic index lookup when checking value function estimates.

  • A reward function. This is part of a problem definition, and you may want to have that code as part of the environment or part of the agent or somewhere in-between depending on how likely it is to change - e.g. if you want to run multiple experiments that optimise different aspects of control but in the same environment, you may make a totally separate reward calculation module that you combine at a high level with the agent and environment code.

  • A time step function. This should take an action choice, and should update the state for a time step - returning the next state, and the immediate reward. If the environment is real, then the code will make actual changes (e.g. move robot arm), potentially wait for the time step to elapse, then read sensors to get the next state and calculate reward. If the environment is simulated, then the code should call some internal model to calculate the next state. This function should call the proprietary software you have been provided for your task.
+ +

If the available actions depend on the current state, then code for that could live in the environment simulation or the agent, or be some helper function that the agent can call, so it can filter the actions before choosing one.

+ +

If you are working in Python, to help make this more concrete, and follow an existing design, see ""How to create a new gym environment in OpenAI?"" on Stack Overflow. The Open AI environments all follow the same conventions for environment definitions, which helps when writing agents to solve them. I also recommend finding an Open AI Gym environment that seems similar to your problem, seeing how that works and trying to train an agent to solve it.
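
To make this concrete, below is a minimal sketch of a custom environment following those Gym conventions. The state vector, reward and transition logic here are placeholders, not a real model of your power/latency system:

import gym
import numpy as np
from gym import spaces

class PowerEnv(gym.Env):
    """Minimal sketch of a custom environment using the classic Gym API."""

    def __init__(self, n_power_levels=5):
        super().__init__()
        self.action_space = spaces.Discrete(n_power_levels)        # pick one of N power settings
        self.observation_space = spaces.Box(low=-1.0, high=1.0,
                                            shape=(3,), dtype=np.float32)
        self.state = np.zeros(3, dtype=np.float32)

    def reset(self):
        self.state = np.zeros(3, dtype=np.float32)
        return self.state

    def step(self, action):
        # In a real system: apply the chosen power setting, wait one time step,
        # then read the sensors (latency, throughput, ...) back into the state.
        noise = np.random.uniform(-0.1, 0.1, 3)
        self.state = np.clip(self.state + noise, -1.0, 1.0).astype(np.float32)
        reward = -float(action) * 0.01      # placeholder reward calculation
        done = False
        return self.state, reward, done, {}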

+ +

There may still be work to match your environment to an agent that can solve it. That depends on what agent software you are using.

+ +

Even if you write your own agent software, it helps to separate out the environment code like this. The environment is the problem to solve, and the RL agent is one way to search for a solution to it. Keeping those parts separate is a useful design that will allow you to try different types of agent and compare their performance for example.

+",1847,,1847,,5/29/2019 12:57,5/29/2019 12:57,,,,2,,,,CC BY-SA 4.0 +12582,2,,12548,5/29/2019 12:42,,1,,"

If your interest is positional information, encode it!

+ +

This could include learning an embedding for each position and leveraging that in your model. You could also use an approach to hard-encode rather than learn it (kinda like adding the sinusoids in the transformer paper Attention is All You Need).
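
As a rough illustration of the hard-coded option, here is a small numpy sketch of the sinusoidal encoding used in that paper (1D positions; for images you would typically build an encoding per spatial axis):

import numpy as np

def sinusoidal_positions(seq_len, d_model):
    # Sinusoidal positional encodings as in "Attention is All You Need".
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                  # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])          # even dimensions
    encoding[:, 1::2] = np.cos(angles[:, 1::2])          # odd dimensions
    return encoding

# The encoding is simply added to (or concatenated with) the embeddings so the
# model can recover positional information.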

+ +

An example of a paper that encodes 2D positional info is Attention Augmented Convolutional Networks.

+",25496,,,,,5/29/2019 12:42,,,,0,,,,CC BY-SA 4.0 +12587,1,,,5/29/2019 20:53,,1,17,"

I am new to big data theory, and during the past 3 days, I took an official big data course with some of the best instructors available in my country in this domain. Things were a little bit obscure for me, as I have an engineering background but no knowledge of AI techniques and domains.

+ +

After getting an intro on big data and the 5Vs (Volume, Velocity, ...), we got an intro on Hadoop and Hadoop ecosystem tools (Hive, Pig, ...), followed by a simple example of how to run a MapReduce Java program on a small data file.

+ +

So, to make things clear for me: are Hive, Pig and other Hadoop ecosystem tools essentially tools to break up my large data files from different sources and servers into fast-readable files, from which we create new tables with our required fields to use later on in machine learning scripts and feature extraction?

+ +

P.S. By fast-readable files I mean files that can be managed and queried with scripting tools on huge data sets (1 terabyte and above) as fast as possible, something that normal relational database tools like SQL and Oracle cannot do.

+",26028,,,,,5/29/2019 20:53,Are hadoop ecosystem tools main goal is to break up large data sets into fast readable files?,,0,0,,,,CC BY-SA 4.0 +12590,1,,,5/29/2019 23:12,,3,43,"

I've seen GANs that do things like convert an image to a painting or this GAN here https://make.girls.moe/#/ that takes in a set of characteristics and generates a waifu with those characteristics.

+ +

My understanding of a GAN is that the generator upsamples random noise and the discriminator detects whether an image is real or fake. So if the generator, say, generates a waifu with the wrong hair color, how would the discriminator know?

+",23941,,,,,5/29/2019 23:12,How do GANs create an image with specific characteristics?,,0,0,,,,CC BY-SA 4.0 +12591,1,,,5/30/2019 4:57,,1,178,"

I'm trying to work out an approach to balancing my dataset, which is a subset of Google Open Images - some classes are represented orders of magnitude more than others and I am hesitant to simply throw away data. The approach I want to try next is roughly:

+ +
1. Add a single instance of every image to the training set.
2. Crop out instances of the majority class from the entire dataset and then add these cropped images to the dataset. Keep adding these cropped images to the training set until the number of instances of the 2nd majority class is close to the number of instances of the 1st majority class.
3. Crop out instances of the second majority class and repeat (2).
4. Keep going until all images have been cropped down to the minority class and the dataset is roughly balanced (a rough code sketch of this procedure follows below).
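
A rough sketch of the cropping step I have in mind, assuming a hypothetical list of annotations where each entry is a dict with an image id, a class label and a bounding box (illustrative only, not my actual pipeline):

from PIL import Image

def crop_instances(annotations, class_name, image_dir, out_dir, n_needed):
    # Crop up to n_needed instances of class_name into stand-alone images.
    made = 0
    for ann in annotations:
        if ann["label"] != class_name or made >= n_needed:
            continue
        img = Image.open(f"{image_dir}/{ann['image_id']}.jpg")
        crop = img.crop(ann["bbox"])                 # bbox = (x0, y0, x1, y1)
        crop.save(f"{out_dir}/{ann['image_id']}_{made}.jpg")
        made += 1
    return made

# Then, class by class, add crops of the current majority class until its count
# approaches that of the next most frequent class, and repeat down the list.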
+
+ +

This approach would mean that no instance of any class would be repeated more often than any other instance of the same class. My concern is that the minority classes will end up being predominantly represented in training by heavily cropped images. I'm not sure, but I don't think this would matter in a two-stage object detector; however, I'm concerned that it might be a problem for a one-stage detector if, for some classes, it was predominantly trained on images that were cropped to various degrees.

+ +

The alternative to doing this would be undersampling, i.e. just throwing away most of the data, which I'm hesitant to do.

+ +

Anyway, it would be really great to get some opinions on this in the context of Yolov3. Is Yolov3 sufficiently robust in terms of scale-invariance that training on many examples that are cropped/larger than what they are likely to be at test time would still work well?

+ +

(I'm fairly committed to some sort of oversampling because of the results in this paper, which found that the best results generally came from oversampling with CNN classifiers as opposed to undersampling, threshold adjustment etc. https://arxiv.org/pdf/1710.05381.pdf)

+",21583,,,,,5/30/2019 4:57,Will balancing dataset of images for object detection for a single-shot OD (Yolov3-spp) by cropping lower the quality of the model?,,0,0,,,,CC BY-SA 4.0 +12594,1,12597,,5/30/2019 9:59,,1,172,"

I am tentatively reusing a Pacman codebase to train my own deep reinforcement learning model. While most of the components seem reasonable and understandable to me, there are two things that remain obscure to me:

+ +
  1. How to decide the size of the replay memory? Currently, since I set the total number of learning steps to 4000 (note that in the referred codebase this value is set to 4000000), I just proportionally decrease the replay_memory_size to 400. Would that make sense?

  2. What is the return value epsilon when calling the function PiecewiseSchedule? I also proportionally decrease its parameters as follows:
+ +
        epsilon = PiecewiseSchedule([(0, 1.0),
+                                     (40, 1.0), # since we start training at 10000 steps
+                                     (80, 0.4),
+                                     (200, 0.2),
+                                     (400, 0.1),
+                                     (2000, 0.05)], outside_value=0.01)
+        replay_memory = PrioritizedReplayBuffer(replay_memory_size, replay_alpha)
+
+ +

where the original function call is like this:

+ +
        epsilon = PiecewiseSchedule([(0, 1.0),
+                                     (10000, 1.0), # since we start training at 10000 steps
+                                     (20000, 0.4),
+                                     (50000, 0.2),
+                                     (100000, 0.1),
+                                     (500000, 0.05)], outside_value=0.01)
+        replay_memory = PrioritizedReplayBuffer(replay_memory_size, replay_alpha)
+
+ +

And in general, what is the principle (guideline) behind setting a good size of ""replay memory"" and calling function PiecewiseSchedule? Thank you!

+",25973,,12509,,5/30/2019 14:32,5/30/2019 15:50,Understanding the configuration of replay memory and epsilon in deep reinforcement learning,,1,2,,,,CC BY-SA 4.0 +12596,1,12611,,5/30/2019 10:52,,2,60,"

I'd like to build an application for tracking the position of a given animal (e.g. a cat) in a series of images.

+ +

Is there any off-the-shelf API I could use?

+ +

Azure has some Vision APIs, but it seems to me they can't be used to get the position of something in an image.

+",26043,,2444,,5/30/2019 16:32,5/31/2019 12:18,Which API can I use for tracking the position of animal in one or more images?,,1,3,,,,CC BY-SA 4.0 +12597,2,,12594,5/30/2019 12:41,,4,,"

Replay memory is a recording of the games that the agent has played. It is this data that is used to train the neural network (or whatever machine learning method you are using). The training is performed by looking at the games played and the resulting reward signal (in Pacman, this might be your score), and then learning which game strategies worked well and which did not. It is difficult to say exactly how big the replay memory should be; this is something that you will need to experiment with. The larger it is, the more examples will be available to learn from during a particular training period, but of course it takes longer to acquire them.

+ +

Epsilon is a parameter that governs the trade-off between exploration and exploitation. By this, I mean that for each action that your agent takes, it needs to decide whether to do the best thing that it has already discovered to do (exploitation), or try something new (exploration). Usually, a value of epsilon near to 1 means that the agent will choose to explore more, and a value closer to 0 will mean that the agent will exploit more.

+ +

You can imagine therefore that at the beginning of an agent's training you might want epsilon ~1, because it has lots of new strategies to try to discover, but as it gets better you would reduce it to ~0 so that it can explore strategies that are only reachable after a long sequence of game steps. +Naturally, when the agent is fully trained and ready to be deployed then epsilon would = 0. +In the example that you posted, you can see that Pacman starts off doing nothing but exploring, but after 500000 games it exploits 95% of the time.
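
In code, this trade-off usually boils down to something like the sketch below, where q_values are the agent's current action-value estimates and epsilon comes from a schedule such as the PiecewiseSchedule in your snippet (many implementations expose the current value via something like schedule.value(t)):

import random

def choose_action(q_values, epsilon):
    # Explore with probability epsilon, otherwise exploit the best known action.
    if random.random() < epsilon:
        return random.randrange(len(q_values))                     # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])    # exploit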

+",12509,,12509,,5/30/2019 15:50,5/30/2019 15:50,,,,2,,,,CC BY-SA 4.0 +12598,2,,12579,5/30/2019 14:44,,1,,"

In section 2.1 of the paper, the authors state that the goal of character-level language model is to estimate the following joint probability distribution

+ +

$$P(\boldsymbol{x}_{0:T}) = P(\boldsymbol{x}_{0}, \boldsymbol{x}_{1}, \dots, \boldsymbol{x}_{T}),$$

+ +

which is a joint probability distribution of all characters of a sequence of $T+1$ characters, where $\boldsymbol{x}_i$ is the $i$th character of the sequence. Ideally, the joint probability distribution, $P(\boldsymbol{x}_{0:T})$, should put more mass (or density) on the combination of $T+1$ characters that are more likely (in a given language). For example, in English, the combination of the characters ""the"" is more likely than the combination ""bibliopole"". Hence, $P(\text{t}, \text{h}, \text{e})$ should be higher than $P(\text{b}, \text{i}, \text{b}, \text{l}, \text{i}, \text{o}, \text{p}, \text{o}, \text{l}, \text{e})$, where, in this case, $T=2$. So, $P(\boldsymbol{x}_{0:T})$ is the actual character-level language model, which is represented by a joint probability distribution.

+ +

Similarly and intuitively, the conditional probability distribution $$P(\boldsymbol{x}_{t} \mid \boldsymbol{x}_{0:t-1})$$

+ +

tells us the probability of the next character of the sequence $\boldsymbol{x}_{t}$ given (that is, having observed) the previous characters of the sequence $\boldsymbol{x}_{0:t-1} = (\boldsymbol{x}_{0}, \dots, \boldsymbol{x}_{t-1})$. In general, the next character of a sequence $\boldsymbol{x}_{t}$ can depend on all previous characters of the same sequence, $\boldsymbol{x}_{0:t-1}$. If we are able to learn the conditional model $P(\boldsymbol{x}_{t} \mid \boldsymbol{x}_{0:t-1})$, given a sequence of $t$ characters, $\boldsymbol{x}_{0:t-1}$, we can sample the next character according to the same conditional probability distribution.

+ +

Recall that, given two events (or random variables) $A$ and $B$, the joint distribution of them is defined as $P(A, B) = P(A \mid B)P(B) = P(B \mid A)P(A)$, which gives rise to the Bayes' theorem. This can be easily generalised to multiple variables. More specifically, if you consider $B$ to a set of $N$ events (rather than just one), that is, $B=B_1 \cap B_2\cap \dots \cap B_N = B_1, B_2, \dots, B_N$, then $P(A, B) = P(A \mid B)P(B)$ still holds, but we can further decompose it

+ +

\begin{align} +P(A, B) = P(A, B_1, B_2, \dots, B_N) +&= P(A \mid B_1, B_2, \dots, B_N)P(B_1, B_2, \dots, B_N) \\ +&= P(A \mid B_1, B_2, \dots, B_N)P(B_1 \mid B_2, \dots, B_N) P(B_2, \dots, B_N) \\ +&= \cdots +\end{align}

+ +

This is called the chain rule (or product rule) of probability. Essentially, we apply the rule $P(A, B) = P(A \mid B)P(B)$ recursively.

+ +

Analogously, in the paper, the authors apply this chain rule to express the joint distribution $P(\boldsymbol{x}_{0:T})$ as a product of conditional probability distributions, that is

+ +

$$ +P(\boldsymbol{x}_{0:T}) = \prod_{t=0}^T P(\boldsymbol{x}_{t}\mid \boldsymbol{x}_{0:t-1}) +$$

+ +

In the case of the LSTM, the vector $\boldsymbol{h}_t$ is supposed to keep track of the past. More specifically, in the case of the character-level language model, we train an LSTM-based RNN, so that $\boldsymbol{h}_t$ is approximately equal to $\boldsymbol{x}_{0:t-1}$, that is, $\boldsymbol{h}_t \approx \boldsymbol{x}_{0:t-1}$. Hence, the joint probability distribution of the characters above can now be approximately defined as a function of the vector $\boldsymbol{h}_t$

+ +

$$ +P(\boldsymbol{x}_{0:T}) \approx \prod_{t=0}^T P(\boldsymbol{x}_{t}\mid \boldsymbol{h}_t; \boldsymbol{\theta}) +$$

+ +

where $\boldsymbol{\theta}$ are the parameters of the LSTM-based RNN. Note that this is not an equality but an approximation. Intuitively, we train the LSTM so that it learns the interactions of the past characters.
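
As a toy illustration of this factorisation (not the actual model in the paper), the log-probability of a whole string is just the sum of per-step conditional log-probabilities; the dummy conditional_distribution below stands in for the softmax over the LSTM output $\boldsymbol{h}_t$:

import numpy as np

vocab = list("abcdefghijklmnopqrstuvwxyz ")

def conditional_distribution(history):
    # Placeholder: a real character-level LM computes this from h_t.
    return np.full(len(vocab), 1.0 / len(vocab))

def sequence_log_prob(text):
    log_p = 0.0
    for t, ch in enumerate(text):
        p = conditional_distribution(text[:t])       # P(x_t | x_0:t-1)
        log_p += np.log(p[vocab.index(ch)])
    return log_p

print(sequence_log_prob("the"))   # sum of three conditional log-probabilities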

+",2444,,2444,,5/30/2019 16:46,5/30/2019 16:46,,,,0,,,,CC BY-SA 4.0 +12600,1,,,5/30/2019 18:10,,0,213,"

How can I implement a GAN network for text (review) generation?

+ +

Please, can someone guide me to a resource (code) to help with text generation?

+",26049,,2444,,5/30/2019 21:21,11/1/2019 19:02,How can I implement a GAN network for text (review) generation?,,2,0,,12/22/2021 17:25,,CC BY-SA 4.0 +12601,1,12602,,5/30/2019 20:29,,3,166,"

Can you please point me to some resources about image generation besides GANs? +Are there any other techniques throughout history? +How did the idea of image generation evolve, and how did it start?

+ +

I tried googling ""image generation before gans"" and similar alternatives but without any success.

+",23918,,2444,,5/30/2019 21:18,5/30/2019 21:18,Other deep learning image generation techniques besides GANs?,,1,0,,,,CC BY-SA 4.0 +12602,2,,12601,5/30/2019 21:09,,4,,"

There are several generative models that have been proposed before or roughly at the same time of the GAN (2014). For example, the deep Boltzman machine (2009), deep generative stochastic network (2014) or variational auto-encoder (2014).

+",2444,,2444,,5/30/2019 21:11,5/30/2019 21:11,,,,0,,,,CC BY-SA 4.0 +12604,1,12609,,5/31/2019 2:47,,3,113,"

In the original prioritized experience replay paper, the authors track $\gamma_t$ in every state transition tuple (see line 6 in algorithm below):

+ +

+ +

Why do the authors track this at every time step? Also, many blog posts and implementations leave this out (including I believe the OpenAI implementation on github).

+ +

Can someone explain explicitly how $\gamma_t$ is used in this algorithm?

+ +

Note: I understand the typical use of $\gamma$ as a discount factor. But typically gamma remains fixed. Which is why I’m curious as to the need to track it.

+",16343,,16343,,5/31/2019 12:13,5/31/2019 12:13,Why do authors track $\gamma_t$ in Prioritized Experience Replay Paper?,,1,0,,,,CC BY-SA 4.0 +12605,1,12608,,5/31/2019 4:14,,1,335,"

Is it possible to combine or create conditional statements of 0 and 1, and optimize with an evolutionary algorithm (given that all computers use a binary system)?

+ +

There may be an algorithm that maps input and output to 0 and 1, or a conditional statement that edits a conditional statement.

+ +

An example of a binary conditional statement is if 11001 then 01110.

+ +

Just as molecules are combined to form living beings, we could begin with the most fundamental operations (0 and 1, if then) to develop intelligence.

+",23500,,2444,,6/20/2019 0:03,6/20/2019 0:04,Can we evolve 0 and 1?,,1,2,,12/12/2021 9:14,,CC BY-SA 4.0 +12606,2,,12421,5/31/2019 5:36,,1,,"

The previous answer from Brale is mostly correct but is missing a large detail to get the precise answer.

+ +

Given this is a question from a GT course homework, I only want to leave pointers so those seeking help can understand the required concept.

+ +

The $TD(\lambda)$ equation is a summation over infinitely many $k$-step returns ($G_{0:1} \rightarrow G_{0:\infty}$), and these terms should all be included in our equation for $TD(\lambda) = TD(1)$.

+ +

Every $k$-step estimator that includes steps past the termination point will equal the sum of the rewards.

+ +

Including these values in the summation will reveal a pattern, making the infinite summation equation solvable.

+",26058,,,,,5/31/2019 5:36,,,,2,,,,CC BY-SA 4.0 +12607,1,,,5/31/2019 6:06,,2,273,"

I read that a mix of ""greedy"" and ""random"" is ideal for stochastic local search (SLS), but I'm not sure why. It mentioned that the greedy part finds the local minima and the randomness avoids getting trapped by those minima. What is a local minimum and how can you get trapped in one? Also, how does randomness avoid this? It seems like, if it's truly random, there's always a chance of ending up searching solutions that lead to dead ends multiple times (which seems like a waste of processing and avoidable).

+",25721,,2444,,5/31/2019 13:05,5/31/2019 15:05,"Why is a mix of greedy and random usually ""best"" for stochastic local search?",,2,0,,,,CC BY-SA 4.0 +12608,2,,12605,5/31/2019 8:07,,1,,"
+

Is it possible to combine or create conditional statements of 0 and 1, and optimize with an evolutionary algorithm (given that all computers use a binary system)?

+
+ +

Yes, evolutionary algorithms are very general, and can be used to modify almost any source data structure, including logic trees or even executable code, provided you have some measure of fitness to optimise for.

+ +

This can be taken to a highly abstract level, such as the paper Evolution of evolution: Self-constructing Evolutionary Turing Machine case study where an evolutionary algorithm is used to optimise other evolutionary algorithms which solve tasks using generic models of computing.
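
As a toy illustration of the mechanics only (this has nothing to do with intelligence), the sketch below evolves a population of bit strings towards the rule ""if 11001 then 01110"" from the question, using bit-flip mutation and truncation selection with a fixed target as the fitness measure:

import random

TARGET = [1, 1, 0, 0, 1,  0, 1, 1, 1, 0]   # condition bits followed by output bits

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)       # best genomes first
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                        # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print(generation, population[0])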

+ +

However, there are two important caveats:

+ +
    +
  • There needs to be a measurement phase to establish the fitness of the algorithm. This can be very complex, depending on the problem you attempt to solve.

  • +
  • Genetic algorithms may be very general optimisers (capable of finding optimal solutions for problems when other algorithms may fail), but may also be very inefficient and slow depending on the size of the allowed genomes, and how the search space is structured relative to available genetic operations.

  • +
+ +
+

begin with the most fundamental operations (0 and 1, if then) to develop intelligence.

+
+ +

Provided the fitness measure allows for the expression of intelligence, then this seems theoretically possible - in the sense that there are no known reasons why a sufficiently complex logical machine could not be intelligent by any measure we have of abstract intelligence (excluding measures deliberately constructed to exclude computational models such as ""intelligence is the capability of a living system . . ."")

+ +

However, such a project faces some barriers which currently look insurmountable:

+ +
    +
  • There is no formal measure of general intelligence to use as a fitness function. This could be worked around using an e-life approach of providing a suitable rich virtual environment and allowing agents to compete for resources in the hope that the most competitive agents would exhibit intelligent behaviour - but that begs the question of how you could recognise and select those agents through any objective measure.

  • +
  • Any environment rich enough to select for general intelligence, whilst simulating low-level agent logic is likely to require a lot of computation.

  • +
  • Our one example of evolving basic building blocks into beings we call intelligent took billions of years, whilst processing billions upon billions of separate evolving entities at any one time.

  • +
+ +

These last two points imply a computational cost far beyond current technology, so there is no route to actually running this experiment for real.

+",1847,,2444,,6/20/2019 0:04,6/20/2019 0:04,,,,0,,,,CC BY-SA 4.0 +12609,2,,12604,5/31/2019 8:26,,4,,"

In some cases we may wish to have a discount factor $\gamma_t$ which depends on time $t$ (or depends on state $s_t$ and/or action $a_t$, leading to an indirect dependence on time $t$). Indeed we do not usually do this, but it does happen sometimes.

+ +

I guess that, from a theoretical point of view, it was very easy for the authors to make their algorithm more flexible/general and also support this (somewhat rare) case of time-varying discount factor. If it had been very complicated for them to support this option, they may have chosen not to; but if it's trivial to do so, well, why not?

+ +

Practical implementations will often indeed ignore that possibility if they're not using it, and can avoid including $\gamma_t$ values in the replay buffer altogether if it is known to be a constant $\gamma_t = \gamma$ for all $t$. As far as I can see, in the experiments discussed in this paper they also only used a fixed, constant $\gamma$.

+",1641,,,,,5/31/2019 8:26,,,,2,,,,CC BY-SA 4.0 +12610,2,,12470,5/31/2019 11:03,,0,,"

This article may shed some light on this question:

+ +

Facebook Doesn’t Tell Users Everything It Really Knows About Them

+",25362,,,,,5/31/2019 11:03,,,,1,,,,CC BY-SA 4.0 +12611,2,,12596,5/31/2019 12:18,,2,,"

FastAI is the most “out of the box” API for this type of task.

+ +

For video examples (and a little theory) check out the MOOC section of their site.

+ +

Practical Deep Learning and Cutting Edge Deep Learning are the two sections most relevant to you.

+ +

But if you want a working implementation check out this GitHub repo that implements SSD for your purposes. I can’t say how simple the API is but it does what you are seeking (in pure PyTorch).

+ +

note: FastAI was originally built on top of PyTorch (although they are expanding out now), so you’ll be using PyTorch and need a rough idea of how to work with tensors. Most of the challenges are already implemented in their api. Installing their library will automatically install PyTorch.

+",16343,,,,,5/31/2019 12:18,,,,0,,,,CC BY-SA 4.0 +12612,1,12615,,5/31/2019 12:54,,5,257,"

The MSE can be defined as $(\hat{y} - y)^2$, which should be equal to $(y - \hat{y})^2$, but I think their derivatives are different, so I am confused about which derivative to use for computing my gradient. Can someone explain to me which form to use?

+",26070,,2444,,5/31/2019 16:33,6/1/2019 23:03,Which function $(\hat{y} - y)^2$ or $(y - \hat{y})^2$ should I use to compute the gradient?,,3,0,,,,CC BY-SA 4.0 +12613,2,,12607,5/31/2019 12:58,,0,,"

The greedy action is the action that maximises some quantity in the present or near future. A stochastic action is an action that is random (it can be the greedy action or any other possible action).

+ +

For example, suppose that you are hungry. You can either choose to eat pizza, salad, fruits or fish. Pizza is your favourite food. On the other hand, you don't like salad, fruits and fish that much, but you know that they are healthier than pizza. If you choose to eat pizza, then this is a greedy action. What quantity are you maximising if you choose pizza? Your current happiness. If you randomly pick one of those (either pizza, salad, fruits or fish), then this is a stochastic (or random) action. The stochastic action can also happen to be pizza, but, on the next day, it might not be pizza, and, in general, it will not always be pizza.

+ +

Suppose that you always choose pizza. What's going to happen? In the long run, you will get fatter and your health will deteriorate. However, every time you choose pizza, you will get happier in that moment (local maximum). If you randomly choose the food every time you want to eat, then it is more likely that you will also eat salad, fruits and fish. In the long run, this could be more beneficial for your health, hence this could prevent you from getting trapped in the local maximum (being happy in the moment that you eat, but unhappier later in life because of the possible health problems).

+ +

In the context of artificial intelligence, the ideas are the same. There are several algorithms that use stochastic actions in order to avoid getting trapped in local extrema. For example, simulated annealing, ant colony optimisation algorithms, $Q$-learning (using $\epsilon$-greedy) or genetic algorithms. An example of local search (or greedy) algorithm is 2-opt (for the TSP problem).

+",2444,,2444,,5/31/2019 14:51,5/31/2019 14:51,,,,0,,,,CC BY-SA 4.0 +12614,2,,12612,5/31/2019 13:23,,2,,"

The derivative is the same as far as I understand it.

+ +

If $y$ is constant and $\hat{y}$ is the variable the result will be:
+$((\hat{y} - y)^2)' = 2(\hat{y} - y)$
+and for the other formula:
+$((y - \hat{y})^2)' = -2(y - \hat{y})$
+which is the same.

+",24054,,,,,5/31/2019 13:23,,,,1,,,,CC BY-SA 4.0 +12615,2,,12612,5/31/2019 13:27,,7,,"

The derivative of $\mathcal{L_1}(y, x) = (\hat{y} - y)^2 = (f(x) - y)^2$ with respect to $\hat{y}$, where $f$ is the model and $\hat{y} = f(x)$ is the output of the model, is

+ +

\begin{align} +\frac{d}{d \hat{y}} \mathcal{L_1} +&= \frac{d}{d \hat{y}} (\hat{y} - y)^2 \\ +&= 2(\hat{y} - y) \frac{d}{d \hat{y}} (\hat{y} - y) \\ +&= 2(\hat{y} - y) (1) \\ +&= 2(\hat{y} - y) +\end{align}

+ +

The derivative of $\mathcal{L_2}(y, x) = (y - \hat{y})^2 = (y - f(x))^2$ w.r.t $\hat{y}$ is

+ +

\begin{align} +\frac{d}{d \hat{y}} \mathcal{L_2} +&= \frac{d}{d \hat{y}} (y - \hat{y})^2 \\ +&= 2(y -\hat{y}) \frac{d}{d \hat{y}} (y -\hat{y}) \\ +&= 2(y - \hat{y})(-1)\\ +&= -2(y - \hat{y})\\ +&= 2(\hat{y} - y) +\end{align}

+ +

So, the derivatives of $\mathcal{L_1}$ and $\mathcal{L_2}$ are the same.
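
As a quick numerical sanity check (arbitrary values, finite differences), both forms give the same gradient with respect to $\hat{y}$:

y, y_hat, eps = 3.0, 2.5, 1e-6
loss_a = lambda p: (p - y) ** 2          # (y_hat - y)^2
loss_b = lambda p: (y - p) ** 2          # (y - y_hat)^2
grad_a = (loss_a(y_hat + eps) - loss_a(y_hat - eps)) / (2 * eps)
grad_b = (loss_b(y_hat + eps) - loss_b(y_hat - eps)) / (2 * eps)
print(grad_a, grad_b, 2 * (y_hat - y))   # all three are (numerically) equal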

+",2444,,2444,,6/1/2019 23:03,6/1/2019 23:03,,,,0,,,,CC BY-SA 4.0 +12616,1,,,5/31/2019 14:13,,1,41,"

Imagine you have access to a dataset of pairs $(s, \hat{\pi}(s))$ where $s$ is a state in a high-dimensional continuous space $S$, $\pi(s)$ is a probability distribution over a large discrete space $D$ (size around $10^{9}$), and $\hat{\pi}(s)$ is a sample from this distribution.

+ +

An important detail: $\pi$ itself is not a distribution; it is $\pi(s)$ that is one.

+ +

My question is simple : how can an algorithm learn to, given a state s, sample from $\pi(s)$ ?

+",26071,,,,,5/31/2019 14:13,How to learn to sample?,,0,0,,,,CC BY-SA 4.0 +12617,2,,12607,5/31/2019 15:05,,1,,"

As an example of local/global minima, imagine being on a rugged, mountainous landscape, and you want to find the lowest point within some area. For a greedy search, every step you take will take you downhill. If you go downhill long enough, you'll eventually find a flat spot, which is a minimum - from here, there's no step you can take that will get you any lower. However, there's a nearby ridge, which if you crossed it, you could continue downhill to find an even lower spot, the global minimum (the true lowest point). Using your greedy approach, you'll never go uphill to cross the ridge, so you'll be stuck in the local minimum forever. If you occasionally take random steps (other than directly downhill), you have the opportunity to cross ridges that separate local minima, and you have a better chance of finding the global minimum. You are correct that in many cases, the random step won't help you cross a ridge, and will just take you up a mountain in the wrong direction, which is a waste of time. But unless we allow the algorithm to ""explore"" a bit, it will be content that the first minimum it finds is the best one, and will never get to the bottom.

+",2841,,,,,5/31/2019 15:05,,,,0,,,,CC BY-SA 4.0 +12619,1,,,5/31/2019 15:49,,4,1053,"

Like our human brain, we can first learn (train on) the handwritten 0 and 1. After the training (and test) accuracy is good enough, we only need to study (train on) the handwritten 2, instead of clearing all of the learned memory and relearning the handwriting data for 0, 1, and 2 at the same time.

+ +

Can a CNN do the same thing? Can a CNN learn something new, but keep the previous memory? If yes, the efficiency could be high. Right now, I have to give all of the data at the same time, and the efficiency is very, very low.

+",26072,,2444,,5/31/2019 15:59,5/31/2019 16:21,Can a CNN be trained incrementally?,,1,0,,,,CC BY-SA 4.0 +12620,2,,12612,5/31/2019 15:54,,7,,"
+

The MSE can be defined as $(\hat{y} - y)^2$, which should be equivalent to $(y - \hat{y})^2$

+
+ +

They are not just ""equivalent"". It is actually the exact same function, with two different ways to write it.

+ +

$$(\hat{y} - y)^2 = (\hat{y} - y)(\hat{y} - y) = \hat{y}^2 -2\hat{y}y + y^2$$

+ +

$$(y - \hat{y})^2 = (y -\hat{y})(y - \hat{y}) = y^2 -2y\hat{y} + \hat{y}^2$$

+ +

These are exactly the same function. Not just ""equivalent"" or ""equivalent everywhere"", but actually the same function. It is therefore no surprise that any derivative is also the same - including the partial derivative with respect to $\hat{y}$ which is what you typically use to drive gradient descent.

+ +

The two ways of writing the function is because it is a square and thus has two factorisations. When you write it as a square you can choose which form to use for the inner term.

+ +
+

Which function [form] should I use to compute the gradient?

+
+ +

You can use either form, it does not matter. They represent the same function and have the same gradient.

+",1847,,1847,,6/1/2019 8:14,6/1/2019 8:14,,,,0,,,,CC BY-SA 4.0 +12621,2,,12619,5/31/2019 16:21,,4,,"

You are looking for incremental (or online) learning.

+ +

A CNN can be trained incrementally. For example, in the paper Incremental Learning of Convolutional Neural Networks, the authors propose an incremental learning algorithm (inspired by AdaBoost and Learn++, which is another incremental learning algorithm for supervised learning of neural networks) for CNNs.

+ +

However, note that incremental learning is a challenging task, given the stability-plasticity dilemma: a completely stable model, in order to keep being stable, will attempt to preserve the existing knowledge, so it will not learn new knowledge; similarly, a completely plastic model, in order to keep being plastic, will keep forgetting previously acquired knowledge in order to learn new information.

+",2444,,,,,5/31/2019 16:21,,,,0,,,,CC BY-SA 4.0 +12622,1,12624,,5/31/2019 16:55,,2,1145,"

I wonder if Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) use any machine learning or deep learning?

+ +

For example in AR, the virtual objects are brought into the real world, does this process involve any object detection and localization?

+",21213,,2444,,5/31/2019 17:00,6/1/2019 12:27,"Do VR, AR and MR use any machine learning or deep learning?",,2,0,,,,CC BY-SA 4.0 +12624,2,,12622,5/31/2019 18:31,,0,,"

TL;DR

+ +

Yes for all the 3 cases

+ +

Details

+ +

VR

+ +

Even if the environment is virtual, there is still something (or rather someone) real to perceive: the user.

+ +

Deep Learning can be used for things like: eye tracking, gesture recognition, voice recognition, ...

+ +

Furthermore, in games, you could think about more complex applications like understanding the way the user plays the game and react accordingly

+ +

AR and MR

+ +

In addition to what has been said for VR, in these cases you also have to integrate the environment into the final user experience, so you ultimately have to perceive it, and there are multiple specific applications: object detection, plane detection, depth estimation, Visual Odometry (e.g. as its feature detection module), SLAM, ...

+ +

These days Deep Learning is achieving State-Of-The-Art (SOTA) results in many different tasks far beyond classification and detection. But one thing is research (published papers and so on) and another thing is a product. So, to sum up, every VR, MR, AR vendor is probably relying on a hybrid solution where traditional algorithms and deep learning coexist, but so far the trend seems to be that deep learning is eating the traditional algorithms up day after day.

+ +

References

+ +

The rising force of deep learning in VR and AR

+ +

Virtual Reality And Machine Learning Go Hand In Hand

+",1963,,1963,,6/1/2019 8:51,6/1/2019 8:51,,,,2,,,,CC BY-SA 4.0 +12625,1,12626,,5/31/2019 18:59,,1,36,"

I have a problem where I have 9 data points that are collected every minute for 40 minutes, and, by the 40th minute, the solution would be either end up being black or white.

+

I would like to set up a neural network which would take the live input every minute, and I was hoping that, around the 25-30 minute mark, it could predict what the result will be at 40 minutes, which is a classification.

+

I have over 3000 historical runs of this experiment; each containing 40 rows of 9 columns data per experiment.

+

What network would I need to set up so that it can learn from each run, at every minute mark per experiment, together with the results, and then be used on live input when the experiment is running again?

+

I feel like I might need more than one system to accomplish this; any help in pointing me towards the right path would be greatly appreciated

+

I am using python (keras) to try to solve this problem.

+",26075,,2444,,1/12/2022 12:35,1/12/2022 12:35,What type of network for a repeated experiment,,1,0,,,,CC BY-SA 4.0 +12626,2,,12625,5/31/2019 19:25,,0,,"

So it seems any time series modeling would do the trick. If you are new to neural nets and it seems you want to play with Keras, maybe start by throwing the data into a simple LSTM.
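
A minimal Keras sketch under the following assumptions (all names and sizes are placeholders): X holds the first 25 minutes of the 9 readings per run, shape (n_runs, 25, 9), and y holds the 0/1 outcome of each run:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(32, input_shape=(25, 9)))          # 25 time steps x 9 readings
model.add(Dense(1, activation='sigmoid'))         # black/white as 0/1
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

X = np.random.rand(3000, 25, 9)                   # placeholder for the historical runs
y = np.random.randint(0, 2, size=3000)            # placeholder for the outcomes
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)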

+",25496,,,,,5/31/2019 19:25,,,,0,,,,CC BY-SA 4.0 +12627,1,,,5/31/2019 20:09,,4,881,"

I am running a basic DQN (Deep Q-Network) on the Pong environment. Not a CNN, just a 3 layer linear neural net with ReLUs.

+ +

It seems to work for the most part, but at some point, my model suffers from catastrophic performance loss:

+ +

+ +
    +
  1. What is really the reason for that?

  2. +
  3. What are the common ways to avoid this? Clipping the gradients? What else?

  4. +
+ +

(Reloading from previous successful checkpoints feels more like a hack, rather than a proper solution to this issue.)

+",26077,,2444,,11/19/2019 2:56,11/19/2019 2:56,What could be causing the drastic performance drop of the DQN model on the Pong environment?,,0,4,,,,CC BY-SA 4.0 +12629,2,,11480,5/31/2019 21:22,,1,,"

OpenAI has a series of Spinning Up pages on their website to educate people about AI. One of those defines Vanilla Policy Gradient.

+ +

Vanilla Policy Gradient via OpenAI

+ +

At the bottom of the page are reference papers that further discuss gradients.

+ +

Whether this is definitive for Vanilla Policy Gradients or not I do not know, but if many others refer to OpenAI for learning this subject their definition will spread.

+",26079,,,,,5/31/2019 21:22,,,,0,,,,CC BY-SA 4.0 +12630,1,,,6/1/2019 0:58,,2,148,"

I read this article: ""Towards Autonomous Data Ferry Route Design through Reinforcement Learning"" by Daniel Henkel and Timothy X Brown. It specifies an infinite horizon problem where they use as a reward function for TD[0] the following:

+ +

\begin{equation} +r(s,a) = - \int_{t_0}^{t_1} (Ft +N_0)e^{-\beta t} dt +\end{equation}

+ +

where $N_0$ and $F$ are constant, $\beta$ is used to adjust the discount factor and $t_0 , t_1$ are the initial and final time.

+ +

Then they proceed to use $e^{-\beta t}$ as the $\gamma$ (discount factor) in the TD[0] update formula and in the policy formula.

+ +

Why is the discount factor in the infinite horizon problem $e^{-\beta t}$, and why is it used as $\gamma$ in $V(s)$ update, since it is a variable factor?

+ +

Also, in the formula of the TD[0] update they don't subtract $\alpha V(s)$. They do: +\begin{equation} +V_{t+1}(s) = V_t(s) + \alpha( r(s,a) + e^{-\beta t_a} V_t(s')) +\end{equation} +I really think this is a mistake, and the values of $V$ will explode without it, even in an infinite horizon problem. Am I correct ? Is $- V(s)$ missing inside the brackets?

+ +

Finally, if someone is willing and has the time to directly read this part of the article and enlighten me on if $t_0$ and $t_1$ represent the initial and final time of an action OR $t_0$ is always 0 and $t_1$ is the duration of the action, I would appreciate it. I ask this because from what is written in the paper $t_0$ seems to be the current time in the simulation, but I'm afraid that would just decay too fast and after some actions the reward would be close to 0. It is really not well explained and I'm a bit confused.

+ +

Thank you for your time if you got this far reading. Any guideline answer will be very much appreciated.

+",24054,,2444,,6/1/2019 10:41,6/1/2019 10:41,Infinite horizon in Reinforcement Learning,,0,2,,,,CC BY-SA 4.0 +12631,1,,,6/1/2019 1:02,,1,2074,"

I am trying to train a deep learning model to predict an 8*2 matrix. The predicted matrix would have complex values and the input matrix would be real numbers. Can it be done? Thank you for your time.

+",26083,,,,,6/1/2019 8:32,How can I train a deep learning model to predict a matrix?,,1,2,,2/13/2022 23:48,,CC BY-SA 4.0 +12632,1,12635,,6/1/2019 2:26,,4,120,"

I am having a hard time converting line 6 of the prioritized experience replay algorithm from the original paper into plain English (see below): +

+

I understand that new transitions (not visited before) are given maximal priority. On line 6 this would be done for every transition in an initial pass since the history is initialized as empty on line 2.

+

I’m having trouble with the notation $p_t = \text{max}_{i<t} p_i$. Can someone please state this in plain English? If $t$ = 4 for example, then $p_t$ = 4? How is this equal to max$_{i<t} p_i$.

+

It seems in my contrived example here, max$_{i<t} p_i$ would be 3. I must be misreading this notation.

+",16343,,2444,,11/1/2020 19:55,11/1/2020 19:55,What does the notation $p_t = \text{max}_{i,1,0,,,,CC BY-SA 4.0 +12633,2,,12631,6/1/2019 8:18,,1,,"

You could use a CNN or fully connected network and output a matrix of size 8*2*2. The first 8*2 slice holds the real parts and the second the imaginary parts. Example code below uses Keras.

+ +
# X: numpy array of shape (batch_size, input_dim_0, input_dim_1, 1)
# Y: numpy array of shape (batch_size, 8, 2, 2) - real parts and imaginary parts
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense, Reshape

model = Sequential()
model.add(Conv2D(64, 3, activation='relu',
                 input_shape=(input_dim_0, input_dim_1, 1)))
model.add(Conv2D(64, 3, activation='relu'))
model.add(Flatten())
model.add(Dense(8 * 2 * 2))            # project to the 32 output values
model.add(Reshape((8, 2, 2)))          # reshape to the target matrix shape

model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(X, Y)
+
+
+
+
+",23713,,23713,,6/1/2019 8:32,6/1/2019 8:32,,,,2,,,,CC BY-SA 4.0 +12635,2,,12632,6/1/2019 10:16,,3,,"

From my interpretation, what it means is that $p_t$ is the priority value associated with each transition, and $p_t = \max_{i<t} p_i$ means that the priority of transition number $t$ will be the maximum of the priorities of the previous transitions.

+ +

Example: since $p_1$ is initialized to $1$, all the new experiences will be too: +\begin{equation} +p_2 = max\{p_1\} = 1, +\end{equation}

+ +

\begin{equation} +p_3 = max\{p_1,p_2\} = 1, +\end{equation}

+ +

\begin{equation} +p_4 = max\{p_1,p_2,p_3\} = 1. +\end{equation}
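
In code, line 6 therefore usually boils down to something like this (priorities being whatever structure holds the stored priorities):

# The new transition simply gets the current maximum priority (1.0 if the buffer is empty).
new_priority = max(priorities) if priorities else 1.0
priorities.append(new_priority)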

+",24054,,24054,,6/1/2019 10:22,6/1/2019 10:22,,,,0,,,,CC BY-SA 4.0 +12636,2,,12622,6/1/2019 12:27,,0,,"

Certainly in Virtual Reality the new Deep Convolutional Neural Networks algorithms applied in image processing are playing a role! Object detection, segmentation and semantic imaging are all features of these new ways of image processing. Augmented Reality then uses some advanced Optics features to accomplish its job. And so on...

+",25375,,,,,6/1/2019 12:27,,,,1,,,,CC BY-SA 4.0 +12639,1,12641,,6/1/2019 14:41,,5,548,"

In the paper Learning to predict by the methods of temporal differences (p. 15), the weights in the temporal difference learning are updated as given by the equation +$$ +\Delta w_t += \alpha \left(P_{t+1} - P_t\right) \sum_{k=1}^{t}{\lambda^{t-k} \nabla_w P_k} +\tag{4} +\,.$$ +When $\lambda = 0$, as in TD(0), how does the method learn? As it appears, with $\lambda = 0$, there will never be a change in weight and hence no learning.

+ +

Am I missing anything?

+",25768,,2444,,6/3/2019 23:04,6/3/2019 23:04,"Understanding the equation of TD(0) in the paper ""Learning to predict by the methods of temporal differences""",,1,0,0,,,CC BY-SA 4.0 +12640,1,12651,,6/1/2019 14:53,,2,314,"

I am using deep reinforcement learning to solve a classic maze escaping task, similar to the implementation provided here, except the following three key differences:

+ +
    +
  1. instead of using a numpy array as the input of a standard maze escaping task, I am feeding the model with an image at each step; the image is a 1300 * 900 RGB image, so it is not too small.

  2. +
  3. reward:

    + +
      +
    • each valid move has a small negative reward (penalize long move)
    • +
    • each invalid move has a big negative reward (run into other objects or boundaries)
    • +
    • Each blocked move has the minimal reward (not common)
    • +
    • Find the remote detectors’ defect has a positive reward (5)
    • +
  4. +
  5. I tweaked the parameters of replay memory, reduced the size of the replay memory buffer.

  6. +
+ +

Regarding the implementation, I basically do not change the agent setup except the above items, and I implemented my env to wrap my customized maze.

+ +

But the problem is that, the accumulated reward (first 200 rounds of successful escaping) does not increase:

+ +

+ +

And the number of steps it takes to escape one maze is also stable somewhat:

+ +

+ +

Here is my question: which aspects could I start to look at to optimize my problem? Or is it still too early and I just need to train for more time?

+",25973,,,,,6/3/2019 8:00,Reward does not increase for a maze escaping problem with DQN,,1,0,,,,CC BY-SA 4.0 +12641,2,,12639,6/1/2019 15:07,,5,,"
+

When lambda = 0 as in TD(0), how does the method learn? As it appears, with lambda = 0, there will never be a change in weight and hence no learning.

+
+ +

I think the detail that you're missing is that one of the terms in the sum (the final ""iteration"" of the sum, the case where $k = t$) has $\lambda$ raised to the power $0$, and anything raised to the power $0$ (even $0$) is equal to $1$. So, for $\lambda = 0$, your update equation becomes

+ +

$$\Delta w_t = \alpha \left( P_{t+1} - P_t \right) \nabla_w P_t,$$

+ +

which is basically a one-step update (just like Sarsa).

+",1641,,,,,6/1/2019 15:07,,,,3,,,,CC BY-SA 4.0 +12642,1,,,6/1/2019 17:30,,1,349,"

I want to implement super-resolution and deblurring on images from text documents. Which is the best approach? Are there any GitHub links that would help me get started? I am new to the field. Any help would be appreciated. Thanks in advance.

+",21797,,21797,,6/4/2019 14:19,6/4/2019 14:19,Super Resolution on text documents,,0,5,0,,,CC BY-SA 4.0 +12643,2,,7222,6/1/2019 21:54,,0,,"

It's definitely a simpler task than NLP or machine learning; it's keyword-based. And you have a slightly wrong view on that. See your example:

+ +

Example: java-developer, java web engineer and java software developer shall all be mapped to the profession java engineer. Not at all: java web and java are different jobs, while java software developer = java-developer, and the word software means nothing, because java already stands for software. The info cannot be mined from texts like job applications, since you have no link to what a title means, so it is better to just create the mapping by hand; it's not that long. Then, just look for the keywords in the text and ignore the other words.

+",25836,,,,,6/1/2019 21:54,,,,1,,,,CC BY-SA 4.0 +12646,1,,,6/2/2019 14:01,,3,93,"

I started to learn reinforcement learning a few days ago, and I want to use it to solve a resource allocation problem: something like, given a constant number, find the best way to divide it into several non-negative real numbers.

+ +

For example, to divide the number 1 into 3 real numbers, the allocation can be:

+ +

[0.2, 0.7, 0.1]

+ +

[0.95, 0.05, 0] +...

+ +

I do not know how to represent the action space, because each allocation is 3-dimensional, each dimension is real-valued, and the dimensions are correlated with each other.

+ +

In an actor-critic architecture, is it possible to have 3 outputs activated by softmax in the actor's network, each representing one dimension of the allocation?
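
For reference, this is roughly what I mean - a single softmax over the three outputs already produces a non-negative allocation that sums to 1 (toy sketch):

import numpy as np

def allocation(logits):
    # Softmax turns any 3 real outputs into a non-negative split summing to 1.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

print(allocation(np.array([0.3, 1.5, -0.2])))   # roughly [0.20, 0.67, 0.12]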

+ +
+ +

Appended:

+ +

There is a playlist of videos. A user can switch to the next video at any time. More buffer leads to better viewing experience but more bandwidth loss if user switches to the next video. I want to optimize the smoothness of playback with minimal bandwidth loss. At each time step, the agent decides the bandwidth allocation to download current video and the next 2 videos. So I guess the state will be the bandwidth, user's behavior and the player situation.

+",26099,,26099,,6/2/2019 15:56,6/2/2019 15:56,How to represent action space in reinforcement learning?,,0,3,,,,CC BY-SA 4.0 +12647,1,,,6/2/2019 19:47,,2,145,"

As far as I understand, beam search is the most widely used algorithm for text generation in NLP. So I was wondering: does the human brain also use beam search for text generation? If not, then what?

+",12746,,,,,2/19/2020 12:43,Does the human brain use beam search for text generation?,,1,5,,,,CC BY-SA 4.0 +12648,1,,,6/2/2019 20:45,,1,150,"

I have an issue with the normalization of the database (a large time series) for my DQN. I obtained optimal results and saved the NN (5 LSTM layers) weights training on a database normalized as such: I divided it into consecutive batches of 96 steps (the window size that my NN gets as input) and I normalized each batch respectively with Z-score. However, I am unable to extend these results to an online setting, as online I only have access to the last 96 elements, and thus I can only normalize according to the last 96. This small difference actually causes a sharp decrease in the performance of my DQN, as the weights of the NN were perfectly tuned for the first normalization but are not great with the online normalized database. In a nutshell, the problem is that only every 96 steps the first normalized database and the online one are the same, for all steps in between this is not happening. I have the weights for the first one, but I cannot find a way to exploit them for the online one.

+ +

What I have tried so far with the online database:

+ +
    +
  • If I normalize every last 96 steps, and act for every new step (as it should be), the performances are quite bad.

  • +
  • If I normalize every last 96 steps, and act just every 96 steps (repeating the same action in between), the agent is actually picking the optimal action every 96 steps (like in the offline setting), so the results are somewhat decent but far from optimal for the long period between the actions. If I try with shorter periods, like 48, performances decrease sharply as it only acts optimally every 2 actions.

  • +
+ +

I don't know if there is a way to tune the optimal weights for the online database, acting directly on them without going through training again. It would be nice to understand why the NN picks its actions at each step in the optimal setting, so that I would be able to follow its strategy, but I'm not aware if it's possible to actually deduct this from the analysis of weights and features, especially for a multi-layer LSTM network. +Otherwise, I was thinking about something like normalizing the online database directly through similarities with the old batches of 96 (using their mean and std) or something like that. Anything that would help reducing the time between optimal actions to around 50-60 steps instead of 96 would be enough to provide a nearly optimal strategy, so at this point, I would consider any kind of (unelegant) method to get what I want.

+ +

I don't know if any of these is feasible, but retraining the agent is very difficult as every single time but once the agent got stuck in suboptimal strategies, this is why I am trying to get around this problem using the optimal weights I have instead of retraining.

+",23638,,,,,6/2/2019 20:45,Online normalization of database for DQN,,0,0,,,,CC BY-SA 4.0 +12649,1,12650,,6/3/2019 5:44,,3,97,"

I was hoping someone could just confirm some intuition about how convolutions +work in convolutional neural networks. I have seen all of the tutorials on +applying convolutional filters on an image, but most of those tutorials +focus on one channel images, like a 128 x 128 x 1 image. I wanted to clarify +what happens when we apply a convolutional filter to RGB 3 channel images.

+

Now this is not a unique question, I think a lot of people ask this question as well. It is just that there seem to be so many answers out there, each with their own variations, that it is hard to find a consistent answer. I included a post below that seems to comport with what my own intuition, but I was hoping one of the experts on SE could help validate the layer arithmetic, to make sure my intuition was not off.

+

How is the depth of the input related to the depth of the output of a convolutional layer?

+

Consider an Alexnet network with 5 convolutional layers and 3 fully connected +layers. I borrowed the network from this post. Now, say the input is 227 x 227, and the filter is specified +as 11 x 11 x 96 with stride 4. That means there are 96 filters each with dimensions 11x11x3, right? +So there are a total of 363 parameters per filter--excluding the bias term-- +and there are 96 of these filters to learn. So the 363*96 = 34848 filter values are learned +just like the weights in the fully connected layers right?

+

My second question deals with the next convolutional network layer. In the next +layer I will have an image that is 55 x 55 x 96 image. In this case, would the +filter be 5x5x96--since there are now 96 feature maps on the image? So that means +that each individual filter would need to learn 5x5x96 = 2400 filter values (weights), +and that across all 256 filters this would mean 614,400 filter values?

+

I just wanted to make sure that I was understanding exactly what is being learned +at each level.

+",15765,,2444,,12/18/2021 22:42,12/18/2021 22:42,Neural Nets: CNN confirming layer/filter arithmetic,,1,0,,12/18/2021 22:41,,CC BY-SA 4.0 +12650,2,,12649,6/3/2019 7:47,,3,,"

Your first point is correct. The filters are stored in 4D arrays, with dimensions of (height, width, input channels, filter number). The order may differ. Your second point is correct too. The filtered results get stacked together, so the output dimensions are (height, width, number of filters). The next layer's filters are of size (filter height, filter width, last layer's number of filters). Your understanding of CNNs is correct. If you want additional resources on CNNs, you can try Andrew Ng's class on CNNs on Coursera. Hope you can learn more about CNNs.
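
A quick check of the parameter arithmetic from the question (biases excluded):

conv1 = 11 * 11 * 3 * 96       # first layer: 34,848 filter weights
conv2 = 5 * 5 * 96 * 256       # second layer: 614,400 filter weights
print(conv1, conv2)            # 34848 614400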

+",23713,,,,,6/3/2019 7:47,,,,4,,,,CC BY-SA 4.0 +12651,2,,12640,6/3/2019 8:00,,1,,"

You should use an algorithm to try solving the maze optimally, maybe the A* algorithm. If the optimal number of steps is already in the range your network achieves, your network may have reached its best. If the optimal number of steps is much lower, you can try increasing the step penalty and increasing the reward for reaching the end. Hope you can succeed in this problem.
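
For an unweighted grid maze, a plain breadth-first search already gives the optimal number of steps; a minimal sketch, assuming a 2D grid where 1 marks a wall:

from collections import deque

def shortest_path_length(grid, start, goal):
    # Returns the minimum number of steps from start to goal, or None if unreachable.
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                if grid[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append(((nr, nc), dist + 1))
    return None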

+",23713,,,,,6/3/2019 8:00,,,,4,,,,CC BY-SA 4.0 +12652,2,,12127,6/3/2019 8:04,,1,,"

In that particular competition, you can try using a GAN to generate new data, or adding noise to existing data. You can also use the K-means algorithm. You can try using a smaller network and removing the bias. Maybe you can use logistic regression to compare the results. You can also use a PCA method.
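
To make the noise idea concrete, the simplest version just adds small Gaussian noise to copies of the existing samples (a rough sketch; X and y stand for your feature matrix and labels, and the noise scale is something you would have to tune):

import numpy as np

def augment_with_noise(X, y, copies=3, sigma=0.05):
    # stack noisy copies of X on top of the original data; labels are repeated unchanged
    noisy = [X + np.random.normal(0.0, sigma, X.shape) for _ in range(copies)]
    return np.vstack([X] + noisy), np.concatenate([y] * (copies + 1))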

+",23713,,,,,6/3/2019 8:04,,,,2,,,,CC BY-SA 4.0 +12653,2,,12600,6/3/2019 8:13,,0,,"

For the resources, you can refer to this: https://becominghuman.ai/generative-adversarial-networks-for-text-generation-part-1-2b886c8cab10 If you want to generate a text review for a specific score, you can feed both a noise vector and the score to the generator. You could also make a vector filled with the score value and add noise to that vector instead.

+",23713,,,,,6/3/2019 8:13,,,,0,,,,CC BY-SA 4.0 +12654,1,,,6/3/2019 8:33,,4,168,"

How do I identify monologues and dialogues in a conversation (or transcript) using natural language processing? How do I distinguish between the two?

+",26113,,2444,,6/3/2019 12:31,6/7/2019 13:23,How do I identify a monologue or dialogue in a conversation?,,1,1,,,,CC BY-SA 4.0 +12655,1,,,6/3/2019 9:39,,2,274,"

I have a corpus of domain data in the form of 10-15 PDF books and some articles, and my end goal is to make a question-answering system particular to that domain. For that, I would need a Q/A dataset which I can use on top of something like SQuAD (the Stanford Question Answering Dataset) for domain-specific knowledge.

+ +

The point where I am stuck is how to convert this corpus into a usable question-answering dataset.

+ +

My current strategy is something AllenAI has been working with. A list of their research papers on it can be found here

+ +

As I understand they use a combination of Knowledge Extraction, Natural Language Understanding, and Inference to get the job done. But I cannot find any good practical implementation.

+ +

Where can I find a good resource?

+",17530,,12853,,6/5/2019 10:07,6/5/2019 10:07,Generate QA dataset from large text corpus,,0,0,,,,CC BY-SA 4.0 +12656,1,,,6/3/2019 10:45,,4,443,"

When we use BERT embeddings for a classification task, would we get different embeddings every time we pass the same text through the BERT architecture? If yes, is it the right way to use the embeddings as features? Ideally, while using any feature extraction technique, feature values should be consistent. How do I handle this if we want BERT to be used as a feature extractor?

+",26115,,2444,,11/1/2019 2:46,7/28/2020 4:07,Will BERT embedding be always same for a given document when used as a feature extractor,,1,0,,,,CC BY-SA 4.0 +12657,1,12809,,6/3/2019 10:53,,1,261,"

Backpropagation through time on a recurrent layer is defined similarly to normal backpropagation, i.e. something like

+ +

self.deltas[x] = self.deltas[x+1].dot(self.weights[x].T) * self.layers[x] * (1- self.layers[x]) where

+ +

self.deltas[x+1] is the error from the previous layer, self.weights[x] is the weight matrix and self.layers[x] * (1 - self.layers[x]) is the backwards activation of the sigmoid function, where self.layers[x] is the vector of sigmoid outputs. In normal backpropagation the values are simply there, but with BPTT I cannot take the current self.layers[x]: I need the previous ones, right?

+ +

So, unlike normal BP, do I need to additionally store old weights and layers, for example in a circular queue, and then apply the formula where self.deltas[x+1] is the layer from the next time step?

+ +

I am not asking about the actual implementation, just the basic understanding needed in order to be able to implement it.

+ +

Let's look at the picture:

+ +

+ +

Here, self.layers[0] = $x_{t+1}$, self.layers[1] = $h_{t+1}$ and self.layers[2] = $o_{t+1}$. In order to perform backprop $h_{t+1}$ -> $h_{t}$ -> $h_{t-1}$..., DO I NEED to have the layers $h_t$, $h_{t-1}$... and the weights $v_{t+1}$, $v_t$... stored in addition to the network $x_{t+1}$ -> $h_{t+1}$ -> $o_{t+1}$? That's the whole question.

+ +

And I do not need to store the previous outputs $o_t, o_{t-1}$, etc., because the backprop from them ($o_t$ -> $h_t$, etc.) was already calculated.
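
To make what I mean concrete, this is the kind of bookkeeping I have in mind (just a rough numpy sketch of the forward pass, not my actual code; W, U and V stand for the input-to-hidden, hidden-to-hidden and hidden-to-output weights):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(xs, W, U, V, h0):
    # run over the whole sequence, caching every hidden state h_t for later BPTT
    hs, os = [h0], []
    for x in xs:
        h = sigmoid(W.dot(x) + U.dot(hs[-1]))   # new hidden state
        hs.append(h)                            # cached: is this the extra storage I need?
        os.append(V.dot(h))
    return hs, os
# (and do I also need to keep copies of W, U, V per time step, or only the h's?)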

+",25836,,25496,,6/10/2019 17:00,6/13/2019 12:19,Do you need to store prevous values of weights and layers on recurrent layer while BPTT?,,1,0,,,,CC BY-SA 4.0 +12659,1,,,6/3/2019 14:34,,30,6668,"

As an AI layman, till today I am confused by the promised and achieved improvements of automated translation.

+

My impression is: there is still a very, very long way to go. Or are there other explanations why the automated translations (offered and provided e.g. by Google) of quite simple Wikipedia articles still read and sound mainly silly, are hardly readable, and are only very partially helpful and useful?

+

It may depend on personal preferences (concerning readability, helpfulness, and usefulness), but my personal expectations are disappointed sorely.

+

The other way around: Are Google's translations nevertheless readable, helpful, and useful for a majority of users?

+

Or does Google have reasons to retain its achievements (and not to show to the users the best they can show)?

+",25362,,2444,,1/18/2021 12:29,1/18/2021 12:29,What is the actual quality of machine translations?,,9,1,,,,CC BY-SA 4.0 +12660,2,,12656,6/3/2019 15:29,,1,,"

BERT is deterministic. There is no variation unless you parse your tokens differently in succeeding runs. Here is the original paper the model architecture is based off of Transformer Paper. Note that in every layer, the only operations used for the most part are matrix multiplications, concatenations, basic ops, and layer normalizations, all of which are deterministic.
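
You can also verify this empirically. For example, with the Hugging Face transformers package (just a sketch, assuming its current API; any BERT implementation behaves the same way once dropout is disabled by putting the model in eval mode):

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # turns off dropout, so the forward pass is deterministic

inputs = tokenizer('the same text twice', return_tensors='pt')
with torch.no_grad():
    emb1 = model(**inputs).last_hidden_state
    emb2 = model(**inputs).last_hidden_state
print(torch.equal(emb1, emb2))  # True: identical embeddings for identical input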

+",25496,,,,,6/3/2019 15:29,,,,1,,,,CC BY-SA 4.0 +12661,2,,12659,6/3/2019 15:43,,7,,"

Google's translations can be useful, especially if you know that the translations are not perfect and if you just want to have an initial idea of the meaning of the text (whose Google's translations can sometimes be quite misleading or incorrect). I wouldn't recommend Google's translate (or any other non-human translator) to perform a serious translation, unless it's possibly a common sentence or word, it does not involve very long texts and informal language (or slang), the translations involve the English language or you do not have access to a human translator.

+ +

Google Translate currently uses a neural machine translation system. To evaluate this model (and similar models), the BLEU metric (a scale from $0$ to $100$, where $100$ corresponds to the human gold-standard translation) and side-by-side evaluations (a human rates the translations) have been used. If you use only the BLEU metric, the machine translations are quite poor (but the BLEU metric is also not a perfect evaluation metric, because there's often more than one valid translation of a given sentence). However, GNMT reduces the translation errors compared to phrase-based machine translation (PBMT).

+ +

In the paper Making AI Meaningful Again, the authors also discuss the difficulty of the task of translation (which is believed to be an AI-complete problem). They also mention the transformer (another state-of-the-art machine translation model), which achieves quite poor results (evaluated using the BLEU metric).

+ +

To conclude, machine translation is a hard problem and current machine translation systems definitely do not perform as well as a professional human translator.

+",2444,,2444,,6/6/2019 22:51,6/6/2019 22:51,,,,4,,,,CC BY-SA 4.0 +12662,2,,12659,6/3/2019 17:03,,2,,"
+

Am I wrong and Google's translations are nevertheless readable, helpful and useful for a majority of users?

+
+ +

Yes, they are somewhat helpful and allow you to translate faster.

+ +
+

Or does Google have reasons to retain its greatest achievements (and not to + show to the users the best they can show)?

+
+ +

Maybe, I don't know. If you search for information, Google really does do a lot of questionable things, like learning from what users say on the internet and taking unsuitable data as trusted input data sets.

+",25836,,2193,,6/4/2019 8:19,6/4/2019 8:19,,,,0,,,,CC BY-SA 4.0 +12663,2,,12659,6/3/2019 17:15,,23,,"

Who claimed that machine translation is as good as a human translator? For me, as a professional translator who has made his living from translation for 35 years now, MT means that my daily production of human-quality translation has grown by a factor of 3 to 5, depending on the complexity of the source text.

+ +

I cannot agree that the quality of MT goes down with the length of the foreign language input. That used to be true for the old systems with semantic and grammatical analyses. I don't think that I know all of the old systems (I know Systran, a trashy tool from Siemens that was sold from one company to the next like a Danaer's gift, XL8, Personal Translator and Translate), but even a professional system in which I invested 28.000 DM (!!!!) failed miserably.

+ +

For example, the sentence:

+ +
+

On this hot summer day I had to work and it was a pain in the ass.

+
+ +

can be translated using several MT tools to German.

+ +

Personal Translator 20:

+ +
+

Auf diesem heißen Sommertag musste ich arbeiten, und es war ein Schmerz im Esel.

+
+ +

Prompt:

+ +
+

An diesem heißen Sommertag musste ich arbeiten, und es war ein Schmerz im Esel.

+
+ +

DeepL:

+ +
+

An diesem heißen Sommertag musste ich arbeiten und es war eine Qual.

+
+ +

Google:

+ +
+

An diesem heißen Sommertag musste ich arbeiten und es war ein Schmerz im Arsch.

+
+ +

Today, Google usually presents me with readable, nearly correct translations, and DeepL is even better. Just this morning I translated 3,500 words in 3 hours and the result is flawless, although the source text was full of mistakes (it was written by Chinese authors).

+",26120,,2444,,6/3/2019 17:47,6/3/2019 17:47,,,,1,,,,CC BY-SA 4.0 +12664,2,,5577,6/3/2019 18:14,,0,,"

For the formats above you could write a normal CFG parser that would extract an AST tree. Is that actually what you want as output?

+",25836,,,,,6/3/2019 18:14,,,,1,,,,CC BY-SA 4.0 +12665,1,,,6/3/2019 18:24,,1,62,"

I'm trying to implement a CNN for small-image classification (36x36x1, grayscale). I've checked every forward/backward pass function on a small example, and still my CNN is not making any progress during training. Tests were done with learning rates in [0.001 - 0.01]. Network structure: Conv -> Relu -> Conv -> Relu -> MaxPooling -> Conv -> Relu -> Conv -> Relu -> MaxPooling -> Flattening -> sigmoid -> Fully connected -> sigmoid -> Fully connected -> softmax

+ +

Is there a mistake in the forwardPass/backwardPass functions?

+ +
    def forwardPass(self, X):
+    """"""
+        :param X:   batch of images (Input value of entire convolutional neural network)
+                    image.shape = (m,i,i,c) - c is number of channels
+                    for current task, first input c = 1 (grayscale)
+                    example: for RGB c = 3
+                    m - batch size
+                    X.shape M x I x I x C
+
+        :return :   touple(Z, inValues)
+                    Z - estimated probability of every class
+                    Z.shape M x K x 1
+
+
+    """"""
+
+    W = self.weights
+
+    inValues = {
+        'conv': [],
+        'fullyconnect': [],
+        'mask' : [],
+        'pooling' : [],
+        'flatten' : [],
+        'sigmoid' : [],
+        'relu' : []
+    }
+
+    """"""
+        Current structure:
+        Conv -> Relu -> Conv -> Relu -> MaxPooling -> Conv -> Relu -> Conv -> Relu -> MaxPooling -> Flattening -> 
+        -> sigmoid -> Fully connected -> sigmoid ->Fully connected -> softmax
+    """"""
+
+
+
+    inValues['conv'].append(X)
+    Z = self.convolution_layer(X, W['conv'][0]);z = Z
+
+    inValues['relu'].append(z)
+    Z = self.relu(z);z =Z
+
+    inValues['conv'].append(z)
+    Z = self.convolution_layer(z, W['conv'][1]);z = Z
+
+
+    inValues['relu'].append(z)
+    Z = self.relu(z);z =Z
+
+
+    inValues['pooling'].append(z)
+    Z, mask = self.max_pooling(z);z = Z
+    inValues['mask'].append(mask)
+
+
+
+    inValues['conv'].append(z)
+    Z = self.convolution_layer(z, W['conv'][2]);z = Z
+
+    inValues['relu'].append(z)
+    Z = self.relu(z);z = Z
+
+    inValues['conv'].append(z)
+    Z = self.convolution_layer(z, W['conv'][3]);z = Z
+
+    inValues['relu'].append(z)
+    Z = self.relu(z);z = Z
+
+    inValues['pooling'].append(z)
+    Z, mask = self.max_pooling(z);z = Z
+    inValues['mask'].append(mask)
+
+
+    inValues['flatten'].append(z)
+    Z = self.flattening(z);z = Z
+
+
+    inValues['sigmoid'].append(z)
+    Z = self.sigmoid(z); z = Z
+
+
+    inValues['fullyconnect'].append(z)
+    Z = self.fullyConnected_layer(z, W['fullyconnect'][0]); z = Z
+
+
+    #dropout here later
+
+    inValues['sigmoid'].append(z)
+    Z = self.sigmoid(z); z = Z
+
+
+    inValues['fullyconnect'].append(z)
+    Z = self.fullyConnected_layer(z, W['fullyconnect'][1]);z = Z
+
+
+    Z = self.softmax(z)
+
+
+    return Z, inValues
+
+ +

Backpropagation:

+ +
 def backwardPass(self, y, Y, inValues):
+
+    """"""
+
+    :param Y: estimated probability of all K classes
+                ( Y.shape = M x K x 1 )
+    :param y: True labels for current
+                M x K x 1
+    :param inValues: Dictionary with input values of conv/ff layers
+                     example: inValues['conv'][1] - Values encountered during feedForward on input of Conv layer with index 1
+    :return:  Gradient of weights in respect to L
+    """"""
+
+    np.set_printoptions(suppress=True)
+    W = self.weights
+
+    G = {
+        'conv' : [],
+        'fullyconnect' : []
+    }
+
+
+    Z = self.softmax_backward(Y, y); z = Z
+
+
+
+    Z, dW, dB = self.fullyConnected_layer_backward(z, W['fullyconnect'][1],inValues['fullyconnect'][1]);z = Z
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['fullyconnect'].append(weight)
+
+
+
+    Z = self.sigmoid_deriv(z, inValues['sigmoid'][1]); z = Z
+
+    Z, dW, dB = self.fullyConnected_layer_backward(z, W['fullyconnect'][0],inValues['fullyconnect'][0]);z = Z;
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['fullyconnect'].append(weight)
+
+
+    Z = self.sigmoid_deriv(z, inValues['sigmoid'][0]);z=Z
+
+
+    Z = self.flattening_backward(z, inValues['flatten'][0]); z = Z
+
+
+    Z = self.max_pooling_backward(z,inValues['mask'][1]); z = Z
+
+
+    Z = z * self.relu(inValues['relu'][3], deriv=True); z = Z
+
+    Z, dW, dB = self.convolution_layer_backward(z, W['conv'][3],inValues['conv'][3]); z = Z
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['conv'].append(weight)
+
+    Z = z * self.relu(inValues['relu'][2], deriv=True);z = Z
+
+
+    Z, dW, dB = self.convolution_layer_backward(z, W['conv'][2],inValues['conv'][2]); z = Z
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['conv'].append(weight)
+
+
+    Z = self.max_pooling_backward(z,inValues['mask'][0]);z = Z
+
+
+    Z = z * self.relu(inValues['relu'][1], deriv=True);z = Z
+
+    Z, dW, dB = self.convolution_layer_backward(z, W['conv'][1],inValues['conv'][1]); z = Z
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['conv'].append(weight)
+
+    Z = z * self.relu(inValues['relu'][0], deriv=True);z = Z
+
+    Z, dW, dB = self.convolution_layer_backward(z, W['conv'][0],inValues['conv'][0]); z = Z
+    weight = {
+        'W': dW,
+        'B': dB
+    }
+    G['conv'].append(weight)
+
+    G['conv'].reverse()
+    G['fullyconnect'].reverse()
+
+    return G
+
+ +

update:

+ +
    def update(self, alfa, W, G):
+
+    W['fullyconnect'][0]['W'] -= alfa * np.sum(G['fullyconnect'][0]['W'],axis=0)
+    W['fullyconnect'][1]['W'] -= alfa * np.sum(G['fullyconnect'][1]['W'],axis=0)
+    W['fullyconnect'][0]['B'] -= alfa * np.sum(G['fullyconnect'][0]['B'],axis=0)
+    W['fullyconnect'][1]['B'] -= alfa * np.sum(G['fullyconnect'][1]['B'],axis=0)
+
+    W['conv'][0]['W'] -= alfa * np.sum(G['conv'][0]['W'],axis=0)
+    W['conv'][1]['W'] -= alfa * np.sum(G['conv'][1]['W'],axis=0)
+    W['conv'][2]['W'] -= alfa * np.sum(G['conv'][2]['W'],axis=0)
+    W['conv'][3]['W'] -= alfa * np.sum(G['conv'][3]['W'],axis=0)
+    W['conv'][0]['B'] -= alfa * np.sum(G['conv'][0]['B'],axis=0)
+    W['conv'][1]['B'] -= alfa * np.sum(G['conv'][1]['B'],axis=0)
+    W['conv'][2]['B'] -= alfa * np.sum(G['conv'][2]['B'],axis=0)
+    W['conv'][3]['B'] -= alfa * np.sum(G['conv'][3]['B'],axis=0)
+
+    return W
+
+",26122,,,,,6/3/2019 18:24,Convolutional neural network debugging,,0,1,,,,CC BY-SA 4.0 +12666,1,12668,,6/3/2019 19:35,,2,367,"
+

Consider the following data with one input (x) and one output (y):
+ (x=1, y=2)
+ (x=2, y=1)
+ (x=3, y=2)
+ Apply linear regression on this data, using the hypothesis $h_Θ(x) = Θ_0 + Θ_1 x$, where $Θ_0$ and $Θ_1$ represent the parameters to be learned. Considering the initial values $Θ_0$= 1.0, and $Θ_1$ = 0.0, and learning rate 0.1, what will be the values of $Θ_0$ and $Θ_1$ after the first three iterations of Gradient Descent

+
+ +

Using the least squares method, I took the derivative with respect to $Θ_0$ and $Θ_1$, plugged in the initial values to get the slope/intercept, and multiplied it by the learning rate 0.1 to get the step size. The step size was used to calculate the new $Θ_0$ and $Θ_1$ values.

+ +

I am getting $Θ_0$ as 1.7821 when following the above. Please let me know if the approach followed and the solution are correct, or whether there is a better way to solve this.

+",25984,,,,,6/3/2019 20:55,Calculating Parameter value Using Gradient Descent for Linear Regression Model,,1,0,,,,CC BY-SA 4.0 +12667,1,,,6/3/2019 20:50,,0,103,"

Stories like this one are quite popular these days.

+ +

The idea of training a neural net to do something silly like this may sound trivial to experts like you, but for a novice like me it could be an interesting learning experience.

+ +

Is there novice-friendly software I could play with to train a neural net to do something like this or is there necessarily a steep learning curve?

+",7078,,8068,,6/4/2019 18:25,6/19/2019 23:12,Neural nets for novices,,2,1,,2/6/2021 17:48,,CC BY-SA 4.0 +12668,2,,12666,6/3/2019 20:55,,1,,"
X = np.array([1,2,3])
+Y = np.array([2,1,2])
+
+params = np.array([1, 0])
+
+def loss(y, yhat):
+    return ((y - yhat)**2).mean()
+
+def model(x):
+    return params[0] + params[1]*x
+
+def loss_grad(y, yhat, x):
+    return np.array([(2*(yhat-y)).mean(), (2*(yhat-y)*x).mean()])
+
+lr = .1
+for _ in range(3):
+    yhat = model(X)
+    l = loss(Y, yhat)
+    g = loss_grad(Y, yhat, X)
+    params = params - lr*g
+    print(f'thetas are now {params} with new loss of {loss(Y, yhat)}')
+
+ +

outputs

+ +
thetas are now [1.13333333 0.26666667] with new loss of 0.6666666666666666
+thetas are now [1.13333333 0.23111111] with new loss of 0.2696296296296296
+thetas are now [1.14755556 0.22874074] with new loss of 0.262887242798354
+
+ +

I double-checked this with Keras too, but in numpy I explicitly wrote the gradients. I recommend double-checking your gradient or arithmetic.

+",25496,,,,,6/3/2019 20:55,,,,0,,,,CC BY-SA 4.0 +12669,1,,,6/3/2019 21:15,,1,40,"

I read the FaceNet paper and one thing I am not sure about (it might be trivial and I missed it) is how we give the network its initial kick start.

+

The embeddings, in the beginning, are random, so picking hard (or semi-hard) negatives, based on the Euclidean distance, would give random images in the beginning.

+

Do we hope that over time this will converge to the actual desired hard images? Is there any reason to expect that this convergence will be attained?

+",23871,,2444,,1/23/2022 11:01,1/23/2022 11:01,How do we give a kick start to the Facenet network?,,0,1,,1/23/2022 21:45,,CC BY-SA 4.0 +12670,2,,12667,6/3/2019 21:20,,2,,"

keras is probably the highest level and easiest to go into. +Here are some keras tutorials

+",25496,,,,,6/3/2019 21:20,,,,0,,,,CC BY-SA 4.0 +12671,1,12672,,6/3/2019 22:04,,3,572,"

As stated in the universal approximation theorem, a neural network can approximate almost any function.

+

Is there a way to calculate the closed-form (or analytical) expression of the function that a neural network computes/approximates?

+

Or, alternatively, figure out if the function is linear or non-linear?

+",26130,,2444,,1/3/2021 20:18,1/3/2021 20:36,Is there a way to calculate the closed-form expression of the function that a neural network computes?,,3,0,,,,CC BY-SA 4.0 +12672,2,,12671,6/3/2019 23:24,,1,,"

Checking whether a function is linear is easy: if you can train one fully connected layer, without activations, of the right dimensions (for a function $\mathbb{R}^n \rightarrow \mathbb{R}^m$ you need $nm$ weights, i.e. the matrix corresponding to the linear map), with enough data, to 100% accuracy... then it is linear.
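
A minimal Keras sketch of that test (the data here is made up; the point is a single Dense layer with no activation, i.e. a purely linear/affine map):

import numpy as np
import keras
from keras.layers import Input, Dense

n, m = 3, 2                              # function from R^3 to R^2
X = np.random.rand(1000, n)
Y = X.dot(np.random.rand(n, m))          # toy target that really is linear

inp = Input(shape=(n,))
out = Dense(m, activation=None)(inp)     # n*m weights + m biases, no non-linearity
model = keras.Model(inp, out)
model.compile('adam', 'mse')
model.fit(X, Y, epochs=200, verbose=0)
print(model.evaluate(X, Y, verbose=0))   # close to 0 only if the target is linear (affine)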

+ +

The estimated function is explicit: it is given by the architecture of the NN and its weights. I don't think you can hope for an answer like ""$\sin(\pi x + \phi) +4$"" if it's not already in the predictive capacity of the NN. Interpretability of NNs is a hot topic: in which cases can symbolic reasoning lead to a simplified expression?

+",23224,,,,,6/3/2019 23:24,,,,1,,,,CC BY-SA 4.0 +12673,1,12677,,6/4/2019 2:11,,1,103,"

I have the following problem.

+
+

A bank wants to decide whether a customer can be given a loan, +based on two features related to (i) the monthly salary of the customer, and (ii) his/her account balance. For simplicity, we model the two features with two binary variables $X1$, $X2$ and the class $Y$ (all of which can be either 0 or 1). $Y=1$ indicates that the customer can be given loan, and Y=0 indicates otherwise.

+

Consider the following dataset having four instances:

+

($X1 = 0$, $X2 = 0$, $Y = 0$)

+

($X1 = 0$, $X2 = 1$, $Y = 0$)

+

($X1 = 1$, $X2 = 0$, $Y = 0$)

+

($X1 = 1$, $X2 = 1$, $Y = 1$)

+

Can there be any logistic regression classifier using X1 and X2 as features, that can perfectly classify the given data?

+
+

The approach followed in the question was to calculate the probabilities for Y=0 and Y=1 respectively. The value of $p$ obtained was $0.25$ and $(1-p)$ was $0.75$. The $\log(p/(1-p))$ comes out negative.

+

However, I don't understand what I need to do to determine whether there is a logistic regression classifier that can perfectly classify the given data.

+",25984,,2444,,12/11/2021 11:32,12/11/2021 11:32,Is there a Logistic Regression classifier that can perfectly classify the given data in this problem?,,1,0,,,,CC BY-SA 4.0 +12675,2,,12659,6/4/2019 8:17,,2,,"

Apologies for not writing in English. Please find the adapted translation here:

+ +

To give interested people an idea of the quality of MT (DeepL) please see this example from a text I was working on this morning (6,300 words, started at 9 am, delivery today around 1 pm and still find time for this post). I was working on this sentence (201 words) when I posted my comment.

+ +
+

""You further represent, warrant and undertake to ABC that you shall not: (a) Conduct any fraudulent, abusive, or otherwise illegal activity which may be grounds for termination of your right to access or use this Website and/or the Services; or (b) Post or transmit, or cause to be posted or transmitted, any communication or solicitation designed or intended to obtain password, account, or private information from any other user of this Website; or (c) Violate the security of any computer network, crack passwords or security encryption codes, transfer or store illegal material (including material that may be considered threatening or obscene), or engage in any kind of illegal activity that is expressly prohibited; or (d) Run maillist, listserv, or any other form of auto-responder, or ""spam"" on this Website, or any processes that run or are activated while you are not logged on to this Website, or that otherwise interfere with the proper working of or place an unreasonable load on this Website’s infrastructure; or (e) Use manual or automated software, devices, or other processes to ""crawl,"" ""scrape,"" or ""spider"" any page of this Website; or (f) Decompile, reverse engineer, or otherwise attempt to obtain the source code of this Website.""

+
+ +

DeepL returns this:

+ +
+

Sie versichern, garantieren und verpflichten sich gegenüber ABC, dass Sie dies nicht tun werden: (a) betrügerische, missbräuchliche oder anderweitig illegale Aktivitäten durchzuführen, die Anlass für die Beendigung Ihres Rechts auf Zugang oder Nutzung dieser Website und/oder der Dienste sein können; oder (b) Mitteilungen oder Aufforderungen, die dazu bestimmt sind oder sind, Passwörter, Konten oder private Informationen von anderen Nutzern dieser Website zu erhalten, zu posten oder zu übertragen oder posten oder zu posten oder zu übertragen; oder (c) die Sicherheit eines Computernetzwerks verletzen, Passwörter oder Sicherheitsverschlüsselungscodes knacken, illegales Material (einschließlich Material, das als bedrohlich oder obszön angesehen werden kann) übertragen oder speichern oder sich an illegalen Aktivitäten beteiligen, die ausdrücklich verboten sind; oder (d) Mailliste, Listenserver oder jede andere Form von Auto-Responder oder ""Spam"" auf dieser Website oder Prozesse, die ausgeführt werden oder aktiviert werden, während Sie nicht auf dieser Website angemeldet sind, oder die anderweitig das ordnungsgemäße Funktionieren oder eine unangemessene Belastung der Infrastruktur dieser Website stören; oder (e) manuelle oder automatisierte Software, Geräte oder andere Prozesse verwenden, um eine Seite dieser Website zu ""crawlen"", zu kratzen, zu spinnen oder zu spinnen; oder (f) dekompilieren, zurückzuentwickeln oder anderweitig zu versuchen, den Quellcode dieser Website zu erhalten.

+
+ +

It took me about 5 to 10 minutes to adjust this paragraph.

+ +

As a translator, I know that I cannot rely on the machine translation, but I learnt the specifics and capabilities of the different systems over time and I know what to pay attention for.

+ +

MT helps me a lot in my work.

+",26142,,1671,,6/5/2019 19:35,6/5/2019 19:35,,,,1,,,,CC BY-SA 4.0 +12676,2,,12659,6/4/2019 9:04,,4,,"

It really depends on the language pair and the topic of the content. Translating to/from English to any other language usually is the best supported. Translating to and from popular languages works better, for example, translating from English to Romanian is a poorer translation than English to Russian. But translating from English to Russian or Romanian is better than translating Russian to Romanian. And translating Romanian to English is better than translating English to Romanian.

+ +

But if you are used to working with translators and you have a passing familiarity with the languages, translation mistakes and the topic, it's easy to understand what was supposed to be there. And, at that point, sometimes its easier to read something translated into your native language for quick scanning than it is to read it in a second language.

+ +

Less popular languages (for translation not necessarily in number of speakers) are much much closer to literal translations only slightly better than what you personally would do using a dictionary for two languages you do not know.

+",26149,,,,,6/4/2019 9:04,,,,0,,,,CC BY-SA 4.0 +12677,2,,12673,6/4/2019 11:39,,1,,"

check it

+ +
import keras
+from keras.layers import *
+
+X = np.array([[0,0], [0,1], [1,0], [1,1]])
+Y = np.array([[0], [0], [0], [1]])
+
+input = Input(shape=(2,))
+output = Dense(1, activation='sigmoid')(input)
+model = keras.Model(input, output)
+
+model.compile(keras.optimizers.Adam(1e0), 'binary_crossentropy', metrics=['acc'])
+model.fit(X, Y, epochs=10, batch_size=4, verbose=1)
+
+ +

which produces

+ +
Epoch 1/10
+4/4 [==============================] - 0s 52ms/step - loss: 0.7503 - acc: 0.7500
+Epoch 2/10
+4/4 [==============================] - 0s 817us/step - loss: 0.5142 - acc: 0.7500
+Epoch 3/10
+4/4 [==============================] - 0s 732us/step - loss: 0.4353 - acc: 0.7500
+Epoch 4/10
+4/4 [==============================] - 0s 694us/step - loss: 0.3413 - acc: 1.0000
+Epoch 5/10
+4/4 [==============================] - 0s 633us/step - loss: 0.2817 - acc: 1.0000
+Epoch 6/10
+4/4 [==============================] - 0s 679us/step - loss: 0.2299 - acc: 1.0000
+Epoch 7/10
+4/4 [==============================] - 0s 672us/step - loss: 0.1769 - acc: 1.0000
+Epoch 8/10
+4/4 [==============================] - 0s 721us/step - loss: 0.1412 - acc: 1.0000
+Epoch 9/10
+4/4 [==============================] - 0s 694us/step - loss: 0.1193 - acc: 1.0000
+Epoch 10/10
+4/4 [==============================] - 0s 716us/step - loss: 0.1015 - acc: 1.0000
+
+ +

...so yes, you can

+ +

Also note that you calculated marginal probabilities; here you want them conditioned on the input variables to actually solve for the parameters.

+",25496,,,,,6/4/2019 11:39,,,,3,,,,CC BY-SA 4.0 +12678,2,,12659,6/4/2019 12:29,,6,,"

You have asked quite a lot of questions, some of which cannot be answered definitively. To give an insight into the quality (and its history) of machine translation, I like to refer to Christopher Manning's 'one sentence benchmark' as presented in his lecture. It contains one Chinese-to-English example which is compared with Google Translate's output. The correct translation for the example would be:

+
+

In 1519, six hundred Spaniards landed in Mexico to conquer the Aztec Empire with a population of a few million. They lost two thirds of their soldiers in the first clash.

+
+

Google Translate returned the following translations.

+
+

2009 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the first two-thirds of soldiers against their loss.

+

2011 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the initial loss of soldiers, two thirds of their encounters.

+

2013 1519 600 Spaniards landed in Mexico to conquer the Aztec empire, hundreds of millions of people, the initial confrontation loss of soldiers two-thirds.

+

2015 1519 600 Spaniards landed in Mexico, millions of people to conquer the Aztec empire, the first two-thirds of the loss of soldiers they clash.

+

2017 In 1519, 600 Spaniards landed in Mexico, to conquer the millions of people of the Aztec empire, the first confrontation they killed two-thirds.

+
+

Whether Google retains or 'hides' their best results: I doubt it. There are many excellent researchers working in the field of natural language processing (NLP). If Google would have a 'greatest achievement' for translation, the researchers would figure it out sooner or later. (Why would Google hide their 'greatest achievement' anyway? They seem to see the benefit of open source, see the Transformer[1] or BERT[2])

+

NB. For an updated list of state-of-the-art algorithms in NLP, see the +SQuAD2.0 leaderboard.

+

[1] Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems. 2017.

+

[2] Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).

+",26155,,-1,,6/17/2020 9:57,6/6/2019 7:35,,,,4,,,,CC BY-SA 4.0 +12679,2,,12114,6/4/2019 12:53,,2,,"

It's currently just too complex

+

The different sources of information are too varied; in economics this is often referred to as the local knowledge problem, which hampers many large-scale plans. Humans can react to slight differences, like respecting local traditions, landscapes and history, but an artificial intelligence would (currently at least) struggle not to generalise over something as large as a whole country's economy.

+

The real work in this case (as in most 'AI' tasks) would be collecting all the necessary data. Here, that part of the job is currently insurmountable.

+

Currently, a human-led planned economy would do a better job.

+",22897,,-1,,6/17/2020 9:57,6/6/2019 9:32,,,,5,,,,CC BY-SA 4.0 +12682,1,,,6/4/2019 13:42,,3,93,"

I was reading the following book: http://neuralnetworksanddeeplearning.com/chap2.html

+ +

and towards the end of equation 29, there is a paragraph that explains this:

+ +

+ +

However I am unsure how the equation below is derived:

+ +

+",26159,,,,,6/5/2019 11:57,How does adding a small change to an neuron's weighted input affect the overall cost?,,3,10,,,,CC BY-SA 4.0 +12683,1,,,6/4/2019 14:19,,2,53,"

This is an Inverse Reinforcement Learning (IRL) problem. I have data (observations) on actions taken by a (real) agent. Given this data I want to estimate the likelihood of the observed actions in a Q-learning agent. Rewards are given by a linear function on a parameter, say alpha.

+ +

Thus, I want to estimate the alpha that makes the observed actions more likely to be taken by a Q-learning agent. I read some papers (e.g. Ng & Russell, 2004), but I found them rather general.

+",26154,,,,,6/4/2019 14:19,Inverse Reinforcement Learning for Markov Games,,0,3,,,,CC BY-SA 4.0 +12685,2,,12682,6/4/2019 15:39,,0,,"

I think that Nielsen just wanted to convey the idea of the back-propagation algorithm using that formula, as you can read from the next paragraph ""Now, this demon is a good demon..."", so I don't think that that partial derivative is mathematically correct, provided the partial derivative is still with respect to $z_j^l$.

+ +

$C$ is the cost (or loss) function. $z_j^l$ is the linear output of neuron $j$ in layer $l$, which is followed by a non-linear function (e.g. sigmoid), denoted by $\sigma$. So, the actual output of neuron $j$ in layer $l$ is $\sigma(z_j^l)$.

+ +

The partial derivative of the cost function $C$ with respect to this neuron's linear output, $z_j^l$, is $$\frac{\partial C}{\partial z_j^l} = \frac{\partial C}{\partial z_j^l} 1 = \frac{\partial C}{\partial z_j^l} \frac{\partial z_j^l}{\partial z_j^l}.$$

+ +

If the the linear output of node $j$ in layer $l$ is now $z_j^l + \Delta z_j^l$, then the partial derivative with respect to $z_j^l$ becomes

+ +

\begin{align} +\frac{\partial C}{\partial z_j^l} +&= \frac{\partial C}{\partial (z_j^l + \Delta z_j^l)}\frac{\partial (z_j^l + \Delta z_j^l)}{\partial z_j^l} \\ +&= \frac{\partial C}{\partial (z_j^l + \Delta z_j^l)} \left( \frac{\partial z_j^l}{\partial z_j^l} + \frac{\partial \Delta z_j^l}{\partial z_j^l} \right) \\ +&= \frac{\partial C}{\partial (z_j^l + \Delta z_j^l)} \left( 1 + \frac{\partial \Delta z_j^l}{\partial z_j^l} \right) \\ +&= \frac{\partial C}{\partial (z_j^l + \Delta z_j^l)} + \frac{\partial C}{\partial (z_j^l + \Delta z_j^l)}\frac{\partial \Delta z_j^l}{\partial z_j^l} \\ +\end{align}

+ +

$\Delta z_j^l$ depends on $z_j^l$, but it is not specified how.

+",2444,,,,,6/4/2019 15:39,,,,0,,,,CC BY-SA 4.0 +12686,2,,12682,6/4/2019 16:01,,1,,"

I believe he's just saying that:

+ +

$$ +\frac{\partial C}{\partial z_j^l} \Delta z_j^l \approx \frac{\partial C}{\partial z_j^l} \partial z_j^l \approx \partial C +$$

+ +

so that the change in cost function can be arrived at simply for a small enough perturbation $\Delta z_j^l$.

+ +

Or, taking that line of approximations backwards, the change in the cost function for a given perturbation is just: +$$ +\partial C \approx \frac{\partial C}{\partial z_j^l} \partial z_j^l \approx \frac{\partial C}{\partial z_j^l} \Delta z_j^l +$$

+",20639,,,,,6/4/2019 16:01,,,,8,,,,CC BY-SA 4.0 +12687,2,,12682,6/4/2019 16:05,,0,,"

The derivative of a function ($f(x_1,x_2..x_n)$) w.r.t to one of the variables ($x_1,x_2..x_n$) gives us the rate of change of the function w.r.t the rate of change of the variable. This roughly means that by how much will the function value change if we change the variable by a ""unit amount"" or $+1$. (we cannot use the change as $+1$ as the change needs to be infinitesimally small, this is just a rough explanation)

+ +

The image shows a tangent line to a curve or function $f(x)$. The slope of this tangent is given by $\frac{df(x)}{dx}$ at that particular $x$. If you move by a very very small amount in the direction of positive $x$ i.e. $x+\delta x$ the change in the value of the $f(x)= y $ will almost be the same as the change in the value of the $y$ of the tangent line.

+ +

+ +

Now, as per the excerpt, the cost function $C$ is a function of $z_j^l$. Thus, it can be written as $$C = f(z_j^l, .....).$$ So, $$\frac{\partial C}{\partial z_j^l}$$ indicates how much $C$ will vary w.r.t. $z_j^l$, i.e. when $z_j^l$ is changed by an infinitesimally small amount, and thus the formula: $$\frac{\partial C}{\partial z_j^l} \Delta z_j^l$$ where the author assumed $\Delta z_j^l$ to be very small. This gives the infinitesimal change in $C$, i.e. $\Delta C$, for an infinitesimally small change in $z_j^l$ or any variable affecting the cost function. This can be derived by series expansion too (given below), but this is the intuitive explanation.

+ +

An explanation can also be given via Taylor's theorem, which states:

+ +
+

Let $f(x)$ be a function which is analytic at $x = a$. Then we can write + $f(x)$ as the following power series, called the Taylor series of $f(x)$ + at $x = a$, then we can write $f(x)$ as:

+
+ +

$$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + f'''(a)\frac{(x-a)^3}{3!}... $$

+ +

Now if we keep other variables constant and make cost function $f$ vary only with $z^l_j$ +and if we put $a=z^l_j$ and $x=z^l_j + \Delta z^l_j$ the equation becomes:

+ +

$$f(z^l_j + \Delta z^l_j) = f(z^l_j) + f'(z^l_j)(\Delta z^l_j) + f''(z^l_j)\frac{(\Delta z^l_j)^2}{2!} + f'''(a)\frac{(\Delta z^l_j)^3}{3!}... $$

+ +

If we ignore the higher-order terms of $\Delta z^l_j$ (terms containing $\Delta z^l_j$ raised to powers greater than 1 are negligible compared to the first-order term), the equation effectively becomes:

+ +

$$f(z^l_j + \Delta z^l_j) = f(z^l_j) + f'(z^l_j)(\Delta z^l_j)$$

+ +

$$f(z^l_j + \Delta z^l_j) - f(z^l_j)= f'(z^l_j)(\Delta z^l_j)$$

+ +

$$f(z^l_j + \Delta z^l_j) - f(z^l_j)= \frac{\partial f(z^l_j)}{\partial z^l_j}(\Delta z^l_j)$$ where $$f(z^l_j + \Delta z^l_j) - f(z^l_j)$$ can be thought of as $\Delta C$ or the change in cost function for small change in $z^l_j$

+ +

NOTE: I have glossed over some requirements for a Taylor Series to be convergent.

+",,user9947,,user9947,6/5/2019 11:57,6/5/2019 11:57,,,,12,,,,CC BY-SA 4.0 +12688,2,,12600,6/4/2019 17:11,,2,,"

As @Clement mentions, text_gen_description gives a good overview, but the SeqGAN paper describes the REINFORCE approach in more depth, as they were (I believe) the first to do it. This is probably the approach most people take nowadays when going the GAN route.

+ +

Note that plain MLE training has also shown promise, as with OpenAI's GPT-2. When I need a text generator, fine-tuning one of the provided models is usually my go-to.

+ +

Also, if you're looking for SeqGAN's code base (you asked for example code), here it is: git repo

+ +

Good Luck!

+",25496,,,,,6/4/2019 17:11,,,,0,,,,CC BY-SA 4.0 +12689,1,,,6/4/2019 18:51,,3,105,"

I need to manually classify thousands of pictures into discrete categories, say, where each picture is to be tagged either A, B, or C.

+ +

Edit: I want to do this work myself, not outsource / crowdsource / whatever online collaborative distributed shenanigans. Also, I'm currently not interested in active learning. Finally, I don't need to label features inside the images (eg. Sloth) just file each image as either A, B, or C.

+ +

Ideally I need a tool that will show me a picture, wait for me to press a single key (0 to 9 or A to Z), save the classification (filename + chosen character) in a simple CSV file in the same directory as the pictures, and show the next picture. Maybe also showing a progress bar for the entire work and ETA estimation.

+ +

Before I go ahead and code it myself, is there anything like this already available?

+",26169,,26169,,6/4/2019 21:16,6/5/2019 2:38,Are there tools to help labelling images?,,2,1,,11/6/2020 19:22,,CC BY-SA 4.0 +12690,1,12694,,6/4/2019 19:53,,1,273,"

I'm new to NEAT, so, please, don't be too harsh. How does NEAT find the most successful generation without gradient descent or gradients?

+",22802,,2444,,6/5/2019 21:48,6/5/2019 21:48,How does NEAT find the most successful generation without gradients?,,1,0,,,,CC BY-SA 4.0 +12691,2,,12689,6/4/2019 20:50,,3,,"

There are a few tools that you can use to annotate (or label) data. For example, labelme or Labelbox. Have a look at this question for more alternatives.

+",2444,,2444,,6/4/2019 22:26,6/4/2019 22:26,,,,3,,,,CC BY-SA 4.0 +12692,5,,,6/4/2019 21:03,,0,,,-1,,-1,,6/4/2019 21:03,6/4/2019 21:03,,,,0,,,,CC BY-SA 4.0 +12693,4,,,6/4/2019 21:03,,0,,For questions related to the convergence of AI algorithms.,2444,,2444,,6/5/2019 19:34,6/5/2019 19:34,,,,0,,,,CC BY-SA 4.0 +12694,2,,12690,6/4/2019 23:13,,3,,"

NEAT is a genetic algorithm (GA). A genetic algorithm maintains a population of individuals (or chromosomes) and evolves it using operations like crossover and mutation, so that the fittest individuals keep living and most other individuals die. The nature of the individuals depends on the problem. For example, in the case of NEAT, the individuals are neural networks. However, these individuals first need to be encoded into a (compressed) representation (for example, a vector) that allows operations like mutation to be applied efficiently: this representation is often called the genotype (or chromosome).

+ +

How do you decide which individuals are the fittest? A function called the fitness function needs first to be defined. The fitness function measures the fitness (or quality) of the individuals (or solutions). In the case of neural networks, a fitness function could, for example, be the accuracy of the neural networks on a validation dataset.

+ +

Why don't GAs need gradients? The fitness (or quality) of the solutions is given by the fitness function in the case of GAs, so gradients are not strictly needed, even though they can also be used.
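
To make the gradient-free aspect concrete, here is the skeleton of a generic GA (a rough Python sketch; init_individual, fitness, mutate and crossover are problem-specific callables that you supply, and the fitness function is only ever called as a black box, never differentiated):

import random

def genetic_algorithm(init_individual, fitness, mutate, crossover,
                      pop_size=50, generations=100):
    population = [init_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # selection uses only the fitness values, no gradients
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)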

+ +

NEAT is a little bit more complex (and it is described in detail in the paper that introduced it Evolving Neural Networks through Augmenting Topologies), but this is the basic idea behind all genetic algorithms (including NEAT).

+",2444,,2444,,6/5/2019 21:45,6/5/2019 21:45,,,,0,,,,CC BY-SA 4.0 +12695,2,,12689,6/5/2019 2:38,,3,,"

I do not know a specific tool that meets all the mentioned requirements. However, a long time ago, I had to do a very similar task of labeling tons of images into 10 classes. This is how I did this:

+ +
    +
  1. Used a very basic clustering tool to cluster the images (I set the number of clusters larger than 10 as I knew some classes have very different subclasses inside); a rough sketch of this step is given below the list.
  2. Moved all images of each cluster to a separate folder. Named each folder after one of the classes/subclasses.
  3. Double-checked the folders' content to make sure there is no outlier or mismatched sample. In the case of wrong labels, I just moved them to the correct folder.
  4. Merged subclasses to form the 10 classes that I was interested in.
  5. Done!
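
For the clustering step, something along these lines was enough (a rough sketch only; the folder names and image size are placeholders, and tiny downscaled grayscale thumbnails are used as feature vectors for simplicity):

import os, shutil
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

src, dst, n_clusters = 'images', 'clusters', 15   # placeholder paths and cluster count
files = sorted(os.listdir(src))
feats = np.array([np.asarray(Image.open(os.path.join(src, f)).convert('L').resize((32, 32)),
                             dtype=np.float32).ravel() for f in files])
labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(feats)

for f, lab in zip(files, labels):
    folder = os.path.join(dst, str(lab))
    os.makedirs(folder, exist_ok=True)
    shutil.copy(os.path.join(src, f), folder)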
+",12853,,,,,6/5/2019 2:38,,,,0,,,,CC BY-SA 4.0 +12696,2,,12659,6/5/2019 3:49,,1,,"

This will be not so much an answer as a commentary.

+ +

The quality depends on several things, including (as Aaron said above) 1) the language pair and 2) the topic, but also 3) the genre and 4) the style of the original, and 5) the amount of parallel text you have to train the MT system.

+ +

To set the stage, virtually all MT these days is based off of parallel texts, that is a text in two different languages, with one presumably being a translation of the other (or both being a translation of some third language); and potentially using dictionaries (perhaps assisted by morphological processes) as backoff when the parallel texts don't contain particular words.

+ +

Moreover, as others have said, an MT system in no way understands the texts it's translating; it just sees strings of characters, and sequences of words made up of characters, and it looks for similar strings and sequences in texts it's translated before. (Ok, it's slightly more complicated than that, and there have been attempts to get at semantics in computational systems, but for now it's mostly strings.)

+ +

1) Languages vary. Some languages have lots of morphology, which means they do things with a single word that other languages do with several words. A simple example would be Spanish 'cantaremos' = English ""we will sing"". And one language may do things that the other language doesn't even bother with, like the informal/formal (tu/ usted) distinction in Spanish, which English doesn't have an equivalent to. Or one language may do things with morphology that another language does with word order. Or the script that the language uses may not even mark word boundaries (Chinese, and a few others). The more different the two languages, the harder it will be for the MT system to translate between them. The first experiments in statistical MT were done between French and English, which are (believe it or not) very similar languages, particularly in their syntax.

+ +

2) Topic: If you have parallel texts in the Bible (which is true for nearly any pair of written languages), and you train your MT system off of those, don't expect it to do well on engineering texts. (Well, the Bible is a relatively small amount of text by the standards of training MT systems anyway, but pretend :-).) The vocabulary of the Bible is very different from that of engineering texts, and so is the frequency of various grammatical constructions. (The grammar is essentially the same, but in English, for example, you get lots more passive voice and more compound nouns in scientific and engineering texts.)

+ +

3) Genre: If your parallel text is all declarative (like tractor manuals, say), trying to use the resulting MT system on dialog won't get you good results.

+ +

4) Style: Think Hilary vs. Donald; erudite vs. popular. Training on one won't get good results on the other. Likewise training the MT system on adult-level novels and using it on children's books.

+ +

5) Language pair: English has lots of texts, and the chances of finding texts in some other language which are parallel to a given English text are much higher than the chances of finding parallel texts in, say, Russian and Igbo. (That said, there may be exceptions, like languages of India.) As a gross generalization, the more such parallel texts you have to train the MT system, the better results.

+ +

In sum, language is complicated (which is why I love it--I'm a linguist). So it's no surprise that MT systems don't always work well.

+ +

BTW, human translators don't always do so well, either. A decade or two ago, I was getting translations of documents from human translators into English, to be used as training materials for MT systems. Some of the translations were hard to understand, and in some cases where we got translations from two (or more) human translators, it was hard to believe the translators had been reading the same documents.

+ +

And finally, there's (almost) never just one correct translation; there are multiple ways of translating a passage, which may be more or less good, depending on what features (grammatical correctness, style, consistency of usage,...) you want. There's no easy measure of ""accuracy"".

+",26180,,,,,6/5/2019 3:49,,,,0,,,,CC BY-SA 4.0 +12697,1,,,6/5/2019 4:15,,2,111,"

I have recently begun researching LSTM networks, as I have finished my GA and am looking to progress to something more difficult. I believe I am using the classic LSTM (if that makes any sense) and have a few questions.

+ +

Do I need LSTM units everywhere in the network? For example, can I only use LSTM units for the first and last layer and use feedforward units everywhere else?

+ +

How do I go about implementing bias values into an LSTM?

+ +

Assuming I create a network that predicts the next few words of a sentence, does that mean my outputs should be every possible word that the network could conceivably use?

+",26181,,2444,,11/3/2019 3:07,4/1/2020 4:07,Do I need LSTM units everywhere in the network?,,1,0,,,,CC BY-SA 4.0 +12698,2,,12667,6/5/2019 4:50,,3,,"

An intuitive NN playground can be found in TensorFlow Playground

+ +

Also, check the Google ML crash course for coders as they promised to add more practicals.

+",12853,,12853,,6/19/2019 23:12,6/19/2019 23:12,,,,0,,,,CC BY-SA 4.0 +12700,2,,12671,6/5/2019 11:29,,1,,"

The network is the function.

+ +

A network is a function, that is modeled by terms describing the architecture and coefficients that are learned.

+ +

Look at a simple model:

+ +

$$f(x) = ax+b$$

+ +

Your solver determines $a$ and $b$, and you substitute them into $f(x)$ and then you're able to calculate $f(42)$. The function is linear by definition, but may not be a good fit for your data.

+ +

Whether the input data may belong to a linear function can be checked when you have a network that can fit both linear functions and higher-order functions. For example, try to fit your data to

+ +

$$f(x)=ax^2+bx+c$$

+ +

The function is linear when $a=0$. If the function is quadratic you get $a\ne 0$, and if it has an order higher than $2$ you will not get a good fit with any $a, b, c$.
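
With raw data points you can run exactly this check with an ordinary least-squares fit, e.g. numpy's polyfit (a small sketch; x and y stand for your samples):

import numpy as np

x = np.linspace(-5, 5, 50)
y = 3 * x + 1                  # try y = 3 * x**2 + 1 to see the difference

a, b, c = np.polyfit(x, y, 2)  # fit f(x) = a*x^2 + b*x + c
print(a)                       # close to 0  ->  the data is (at most) linear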

+ +

When looking at MLPs and similar networks, the question ""is it linear"" may not be that easy to answer, but the question of how to get the function has the same answer: your trained network is the function.

+ +

I guess one option to see whether the function approximated by an existing network is linear is to train a second network that only contains linear elements. If it is able to approximate your first network, the first network is linear as well. Of course you will need the right test data, as you may accidentally choose a training set whose outputs are linearly correlated even though the function is non-linear. This is of course the same problem as always: your network can only ever be as good as the training data.

+",25798,,25798,,6/7/2019 8:13,6/7/2019 8:13,,,,0,,,,CC BY-SA 4.0 +12701,2,,12697,6/5/2019 11:44,,1,,"

For question 1) I don't understand what you are getting at... LSTM cells work on a contiguous block of inputs, where the cell sequentially uses the states from the previous time step and the new input to generate the next ones.

+ +

question 2) Please look into the LSTM architecture. As you can see, biases are already there; is there somewhere specific you want them where they aren't?

+ +

question 3) Generally yes, but the normalization step can be expensive (such as softmax), so if you want to get clever, you can use negative sampling or hierarchical softmax -- but generally, you predict a probability over all possible words given the previous text.

+",25496,,,,,6/5/2019 11:44,,,,6,,,,CC BY-SA 4.0 +12702,2,,4683,6/5/2019 14:08,,3,,"

CNN vs RNN

+
    +
  • A CNN will learn to recognize patterns across space while RNN is useful for solving temporal data problems.
  • +
  • CNNs have become the go-to method for solving any image data challenge, while RNNs are ideal for text and speech analysis.
  • +
  • In a very general way, a CNN will learn to recognize components of an image (e.g., lines, curves, etc.) and then learn to combine these components to recognize larger structures (e.g., faces, objects, etc.) while an RNN will similarly learn to recognize patterns across time. So a RNN that is trained to convert speech to text should learn first the low level features like characters, then higher level features like phonemes and then word detection in audio clip.
  • +
+
+

CNN

+

A convolutional network (ConvNet) is made up of layers. +In a convolutional network (ConvNet), there are basically three types of layers:

+
    +
  • Convolution layer
  • +
  • Pooling layer
  • +
  • Fully connected layer
  • +
+

Of these, the convolution layer applies convolution operation on the input 3D tensor. Different filters extract different kinds of features from an image. The below GIF illustrates this point really well:

+

+

Here the filter is the green 3x3 matrix while the image is the blue 7x7 matrix.
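
The same sliding-window operation in a few lines of numpy (a sketch of a 'valid' convolution with stride 1, which is what the animation shows; the 7x7 image and 3x3 filter here are just random placeholders):

import numpy as np

image = np.random.randint(0, 2, (7, 7))    # 7x7 input
kernel = np.random.randint(0, 2, (3, 3))   # 3x3 filter

out = np.zeros((5, 5))                     # (7 - 3 + 1) x (7 - 3 + 1)
for i in range(5):
    for j in range(5):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
print(out)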

+

The input passes through many such layers in a CNN to give an output that can either be fed into a fully connected NN layer or remain a 3D tensor.

+

+

For example, in the above example, the input image passes through convolutional layer, then pooling layer, then convolutional layer, pooling layer, then the 3D tensor is flattened like a Neural Network 1D layer, then passed to a fully connected layer and finally a softmax layer. This makes a CNN.

+
+

RNN
+Recurrent Neural Network(RNN) are a type of Neural Network where the output from previous step are fed as input to the current step.

+

+

Here, $x_{t-1}$, $x_{t}$ and $x_{t+1}$ are the input values that occur at specific time steps and are fed into the RNN, which passes them through the hidden layers $h_{t-1}$, $h_{t}$ and $h_{t+1}$, which in turn produce the outputs $o_{t-1}$, $o_{t}$ and $o_{t+1}$ respectively.

+",6604,,145,,4/21/2023 8:08,4/21/2023 8:08,,,,0,,,,CC BY-SA 4.0 +12703,2,,12659,6/5/2019 15:52,,0,,"
+

""Or does Google have reasons to retain its achievements (and not to show to the users the best they can show)""

+
+ +

If they were, then what they're holding back would be amazing. Google publishes a lot of strong papers in Natural Language Processing, including ones that get state of the art results or make significant conceptual breakthroughs. +They have also released very useful datasets and tools. Google is one of the few companies out there that is not only using the cutting edge of current research, but is actively contributing to the literature.

+ +

Machine translation is just a hard problem. A good human translator needs to be fluent in both languages to do the job well. Each language will have its own idioms and non-literal or context-dependent meanings. Just working from a dual-language dictionary would yield terrible results (for a human or computer), so we need to train our models on existing corpora that exist in multiple languages in order to learn how words are actually used (n.b. hand-compiled phrase translation tables can be used as features; they just can't be the whole story). For some language pairs, parallel corpora are plentiful (e.g. for EU languages, we have the complete proceedings of the European Parliament). For other pairs, training data is much sparser. And even if we have training data, there will exist lesser used words and phrases that don't appear often enough to be learned.

+ +

This used to be an even bigger problem, since synonyms were hard to account for. If our training data had sentences for ""The dog caught the ball"", but not ""The puppy caught the ball"", we would end up with a low probability for the second sentence. Indeed, significant smoothing would be needed to prevent the probability from being zero in many such cases.

+ +

The emergence of neural language models in the last 15 years or so has massively helped with this problem, by allowing words to be mapped to a real-valued semantic space before learning the connections between words. This allows models to be learned in which words that are close together in meaning are also close together in the semantic space, and thus switching a word for its synonym will not greatly affect the probability of the containing sentence. word2vec is a model that illustrated this very well; it showed that you could, e.g., take the semantic vector for ""king"", subtract the vector for ""man"", add the vector for ""woman"", and find that the nearest word to the resulting vector was ""queen"". Once the research in neural language models began in earnest, we started seeing immediate and massive drops in perplexity (i.e. how confused the models were by natural text) and we're seeing corresponding increases in BLEU score (i.e. quality of translation) now that those language models are being integrated into machine translation systems.

+ +

Machine translations are still not as good as quality human translations, and quite possibly won't be that good until we crack fully sapient AI. But good human translators are expensive, while everyone with Internet access has machine translators available. The question isn't whether the human translation is better, but rather how close the machine gets to that level of quality. That gap has been shrinking and is continuing to shrink.

+",2212,,2212,,6/5/2019 15:57,6/5/2019 15:57,,,,5,,,,CC BY-SA 4.0 +12704,2,,10196,6/5/2019 16:04,,6,,"

This question is discussed in detail, in the following NeurIPS 2016 paper by David Silver: Learning values across many orders of magnitude. They also give experimental results over the Atari domain.

+",26209,,,,,6/5/2019 16:04,,,,0,,,,CC BY-SA 4.0 +12705,1,,,6/5/2019 18:45,,6,226,"

There was a recent informal question on chat about RTS games suitable for AI benchmarks, and I thought it would be useful to ask a question about them in relation to AI research.

+ +

Compact is defined as the fewest mechanics, elements, and smallest gameboard that produces a balanced, intractable, strategic game. (This is important because greater compactness facilitates mathematical analysis.)

+",1671,,1671,,6/5/2019 18:50,6/10/2019 2:45,What are the most compact Real Time-Strategy Games?,,2,2,,,,CC BY-SA 4.0 +12706,5,,,6/5/2019 18:46,,0,,"

https://en.wikipedia.org/wiki/Real-time_strategy

+",1671,,1671,,6/5/2019 18:46,6/5/2019 18:46,,,,0,,,,CC BY-SA 4.0 +12707,4,,,6/5/2019 18:46,,0,,"For questions about Real-time Strategy games, often used in AI research.",1671,,1671,,6/5/2019 18:46,6/5/2019 18:46,,,,0,,,,CC BY-SA 4.0 +12708,5,,,6/5/2019 18:50,,0,,"

Distinct from ""AI Milestones"" in that milestones can refer to theories, where benchmarks refer to verified results.

+ +

https://en.wikipedia.org/wiki/Benchmarking

+",1671,,1671,,6/5/2019 18:50,6/5/2019 18:50,,,,0,,,,CC BY-SA 4.0 +12709,4,,,6/5/2019 18:50,,0,,"For questions related to AI benchmarks--results that validate a specific technique or approach. Also for question regarding the history of AI achievements, and predictions as to future achievements.",1671,,1671,,6/5/2019 18:50,6/5/2019 18:50,,,,0,,,,CC BY-SA 4.0 +12712,1,,,6/6/2019 0:13,,4,853,"

As far as I know, an RNN accepts a sequence as input and can produce a sequence as output.

+ +

Are there neural networks that accept graphs or trees as inputs, so as to represent the relationships between the nodes of the graph or tree?

+",25836,,23503,,4/25/2020 18:45,5/26/2020 19:44,Are there neural networks that accept graphs or trees as inputs?,,2,0,,,,CC BY-SA 4.0 +12714,2,,12712,6/6/2019 6:20,,1,,"

There are types of neural networks designed exactly for that purpose. For example, graph convolutional networks (GCN) by Thomas N. Kipf. The input to the network will be a matrix of size $N \times F$, where $N$ is the number of nodes and $F$ the number of features (for each node). You can then multiply the adjacency matrix (typically normalized, with self-loops added) with the feature matrix, so that each node becomes a weighted sum of itself and its first-degree neighbors, followed by a learned linear transformation. There are a lot of other variations, such as diffusion convolutional networks, gated graph neural networks, etc. There is a nice survey that describes most of the recent related work in the field, Graph Neural Networks: A review of methods and applications by Jie Zhou et al.
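To make the propagation rule concrete, here is a minimal NumPy sketch of a single GCN layer in the spirit of Kipf's formulation, $H' = \sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} X W)$; the small graph and the random weights are made up purely for illustration:

    import numpy as np

    # toy graph: 4 nodes, F = 2 features per node
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # adjacency matrix
    X = np.random.randn(4, 2)                    # node feature matrix (N x F)
    W = np.random.randn(2, 8)                    # learnable weights (F x F_out)

    A_hat = A + np.eye(4)                        # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

    H = np.maximum(0, A_norm @ X @ W)            # one GCN layer with a ReLU non-linearity
    print(H.shape)                               # (4, 8): new features for every node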

+",20430,,2444,,7/6/2019 12:42,7/6/2019 12:42,,,,2,,,,CC BY-SA 4.0 +12716,2,,12565,6/6/2019 8:16,,1,,"

An Auto-Encoder is probably what you are looking for. AE is a very powerful Neural Network when you want to compress data and get a lower dimensional representation of the data with maximum information retained.

+ +

If you are interested in how it works: think of it as training a neural network to predict the data itself, which means your input and output layers are exactly the same size. So how does this help with compression? The hidden layer is where the magic happens. The compression comes from the fact that we use fewer neurons in the hidden layer than in the input layer. Assuming that the network reconstructs the input well at the end of training, the output of the hidden layer can be thought of as newly engineered features that are lower-dimensional yet powerful enough to represent the information in the original data.
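As an illustration, here is a minimal Keras sketch of such an autoencoder; the input dimension (784) and the bottleneck size (32) are arbitrary choices for the example:

    from tensorflow.keras import layers, Model

    inp = layers.Input(shape=(784,))                     # original high-dimensional data
    code = layers.Dense(32, activation='relu')(inp)      # bottleneck: compressed representation
    out = layers.Dense(784, activation='linear')(code)   # reconstruction of the input

    autoencoder = Model(inp, out)
    encoder = Model(inp, code)                           # use this part to extract the compressed features
    autoencoder.compile(optimizer='adam', loss='mse')
    # autoencoder.fit(X, X, ...)  # note: the targets are the inputs themselves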

+",26115,,,,,6/6/2019 8:16,,,,0,,,,CC BY-SA 4.0 +12718,1,,,6/6/2019 10:55,,-1,95,"

I'm building a CNN decoder, which mirrors (in reverse) the VGG network structure from the Conv-4-1 layer.

+ +

The net seems to be working fine; however, the output looks broken. Please note that the colour distortion is fine, it's the [255/0 RGB pixels], e.g. green, that I'm worried about.

+ +

I tried to overfit a single image, but even then I get these hot pixels. Does anyone know why they appear?

+ +

+ +

My net:

+ +
    from tensorflow.keras.layers import Input, Conv2D, UpSampling2D
    from tensorflow.keras.models import Model

    kernel_size = 3  # assumed value; the original snippet used self.kernel_size from the enclosing class
    activation = 'elu'

    input_ = Input((None, None, 512))
    x = Conv2D(filters=256, kernel_size=kernel_size, padding='same', bias_initializer='zeros', activation=activation)(input_)

    x = UpSampling2D()(x)
    for _ in range(3):
        x = Conv2D(filters=256, kernel_size=kernel_size, padding='same', activation=activation)(x)
    x = Conv2D(filters=128, kernel_size=kernel_size, padding='same', activation=activation)(x)

    x = UpSampling2D()(x)
    x = Conv2D(filters=128, kernel_size=kernel_size, padding='same', activation=activation)(x)
    x = Conv2D(filters=64, kernel_size=kernel_size, padding='same', activation=activation)(x)

    x = UpSampling2D()(x)
    x = Conv2D(filters=64, kernel_size=kernel_size, padding='same', activation=activation)(x)
    x = Conv2D(filters=3, kernel_size=kernel_size, padding='same')(x)  # final RGB output, no activation

    model = Model(inputs=input_, outputs=x)
+",13149,,,,,6/6/2019 13:27,"What is wrong with this CNN network, why are there hot pixels?",,1,0,,2/13/2022 23:46,,CC BY-SA 4.0 +12719,2,,12565,6/6/2019 11:01,,0,,"

See What is the difference between encoders and auto-encoders? For a working example of a neural network being used to compress (and decompress) images.

+",12509,,,,,6/6/2019 11:01,,,,0,,,,CC BY-SA 4.0 +12720,1,,,6/6/2019 12:42,,0,32,"

I have built various ""successful"" GANs or VAEs that can generate realistic images reliably, but in either case the generative step is sampling a latent feature vector from some distribution and running it through a generator/decoder $G(x) \ s.t. \ x \sim \mathbb{D} $.

+ +

Generally $\mathbb{D}$ has a continuous domain (normally distributed is a common choice), and $G(x)$ is a continuous and at least once-differentiable function (by construction, so that it can be optimized with a gradient scheme).

+ +

Now assume we have 2 images $y_1, y_2$ generated by $x_1, x_2 \sim \mathbb{D}$ and that the shortest path from $x_1$ to $x_2$ is also in $\mathbb{D}$'s domain. Crawling along this path, we get a continuous path from $y_1$ to $y_2$ in image space. Wouldn't you assume that, along this continuous transformation, the images would be ""garbage"", as they are essentially a fusion of $y_1$, $y_2$, and $\{y_k\}_k$, where $\{y_k\}_k$ describes a set of ""successful"" stops along the way? This is especially true in the case of single-mode distributions (like the normal), where every $x$ on the path must have higher likelihood than at least one of the path's endpoints.
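For concreteness, the path I have in mind is a simple linear interpolation in latent space; this is only a sketch, with a stand-in function in place of a trained generator:

    import numpy as np

    def G(z):
        # stand-in for a trained generator/decoder; in practice this is the network
        return np.tanh(z)

    z1, z2 = np.random.randn(100), np.random.randn(100)
    steps = np.linspace(0.0, 1.0, 10)
    path_points = [G((1 - t) * z1 + t * z2) for t in steps]  # outputs along the latent path
    # the question is whether the intermediate outputs look like real samples or like garbage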

+ +

To summarize my point: given any 2 ""successful"" images, there is probably a wide range of ""garbage"" examples along a path between them in image space, so how do continuous generative models avoid producing garbage with high probability?

+ +

Note: by ""successful"" I mean the generated image could be considered a draw from the true distribution you are trying to capture with the generator, and by ""garbage"" I mean ones that are obviously not.

+",25496,,,,,6/6/2019 12:42,"Deep Generative Networks Probability of ""Success""",,0,4,,,,CC BY-SA 4.0 +12721,2,,12718,6/6/2019 13:27,,3,,"

I've seen this too many times - it's not a problem with your network, it's a problem with matplotlib and how it displays the image. You are probably trying to display a float array with values in the range $<0, 255>$. When matplotlib sees a float type as input, it assumes a range of $<0, 1>$ and clips everything outside of that range, which produces the hot pixels you see.
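A minimal sketch of the two usual fixes, assuming the decoder output is a float array called img in the 0-255 range (the random array below is just a stand-in):

    import numpy as np
    import matplotlib.pyplot as plt

    img = np.random.rand(64, 64, 3) * 255.0      # stand-in for the decoder output

    plt.imshow(img.astype(np.uint8))              # option 1: cast to uint8, interpreted as 0-255
    plt.show()

    plt.imshow(np.clip(img / 255.0, 0.0, 1.0))    # option 2: rescale (and clip) floats to 0-1
    plt.show()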

+",26248,,,,,6/6/2019 13:27,,,,5,,,,CC BY-SA 4.0 +12723,1,,,6/6/2019 15:43,,1,35,"

Suppose I have a set of data that I want to apply a segmented regression to, fitting linearly on either side of the breakpoint. I aim to find the offsets and slopes of each line and the position of the breakpoint that minimize an error function given the data I have, and then use them as sufficiently close initial guesses to find the exact solutions using a curve fit. I'll choose the bounds as the mins and maxes of $x$ and $y$ of my data, and an arbitrary bound for a slope with $slope = a * \frac{y_{max}-y_{min}}{x_{max}-x_{min}}$ for a suitable $a$, where I can safely assume the magnitude of $a$ is greater than any possible slope that realistically represents the data. Let's suppose I define a function (in Python):

+ +
    import numpy as np
    from scipy.optimize import differential_evolution, curve_fit

    # xData, yData, func (the piecewise-linear model) and sumSquaredError (the objective)
    # are assumed to be defined elsewhere.

    def generate_genetic_Parameters():
        initial_parameters = []
        x_max = np.max(xData)
        x_min = np.min(xData)
        y_max = np.max(yData)
        y_min = np.min(yData)
        slope = 10 * (y_max - y_min) / (x_max - x_min)

        initial_parameters.append([x_min, x_max])   # bounds for the breakpoint, as (min, max)
        initial_parameters.append([-slope, slope])  # bounds for slope A
        initial_parameters.append([-slope, slope])  # bounds for slope B
        initial_parameters.append([y_min, y_max])   # bounds for offset A
        initial_parameters.append([y_min, y_max])   # bounds for offset B

        result = differential_evolution(sumSquaredError, initial_parameters, seed=3)
        return result.x

    geneticParameters = generate_genetic_Parameters()  # evolve rough initial estimates
    fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)  # refine with least squares
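For context, the two helpers referenced above are not shown; a sketch of what they might look like (my actual definitions may differ in detail) is:

    import numpy as np

    def func(x, breakpoint, slopeA, slopeB, offsetA, offsetB):
        # piecewise-linear model: one line before the breakpoint, another after it
        return np.where(x < breakpoint,
                        offsetA + slopeA * x,
                        offsetB + slopeB * x)

    def sumSquaredError(params):
        # the quantity differential_evolution minimizes: the sum of squared residuals
        return np.sum((yData - func(xData, *params)) ** 2)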
+ +

This will do the trick, but what is the implicit standard of fitness that the differential evolution here deals with?

+",26250,,,,,6/6/2019 15:43,What qualifies as 'fitness' for a genetic algorithm that minimizes an error function?,,0,0,,,,CC BY-SA 4.0 +12724,1,12726,,6/6/2019 16:45,,4,240,"

In Sutton & Barto's Reinforcement Learning: An Introduction, in page 83 (101 of the pdf), there is a description of first-visit MC control. In the phase where they update $Q(s, a)$, they do an average of all the returns $G$ for that state-action pair.

+ +

Why don't they just update the value with a weight for the value from previous episodes $\alpha$ and a weight $1- \alpha$ for the new episode return as it is done in TD-Learning?

+ +

I have also seen other books (for example, Algorithms for RL, page 22) where they update it using $\alpha$. What is the difference?

+",24054,,2444,,1/23/2020 11:53,1/23/2020 11:53,Why is an average of all returns used to update the value in the first-visit MC control?,,1,2,,,,CC BY-SA 4.0 +12726,2,,12724,6/6/2019 18:58,,1,,"
+

Why don't they just update the value with a weight for the value from previous episodes $\alpha$ and a weight $1- \alpha$ for the new episode return as it is done in TD-Learning?

+
+ +

In my opinion, this is a mistake in the book. I went back and checked that this is still the same in the finished second edition, and it is still there.

+ +

Keeping all returns and taking averages works fine in a prediction scenario with a fixed policy, and it is somewhat simpler and more intuitive to explain as ""this is the mean value"" (efficiency can come later, after comprehension). However, the pseudo-code is incorrect for a control scenario, where older returns (which you still keep in the list) will not reflect the current policy, and so will be biased.

+ +

In practice, no-one really uses the MC Control algorithms as written. Nearly always a learning rate parameter, $\alpha$ is used to update estimates.

+ +
+

What is the difference?

+
+ +

In a prediction scenario, where you want to evaluate a fixed policy, it would be slightly more efficient to use a count of samples for each $s,a$ pair and variable $\alpha = \frac{1}{N(s,a)}$ which is mathematically identical to keeping a list and taking the average as needed.
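As a small illustration of that equivalence (a sketch, not from the book): the incremental update with $\alpha = \frac{1}{N(s,a)}$ reproduces the running mean exactly, while a constant $\alpha$ gives a recency-weighted mean:

    returns = [3.0, 5.0, 4.0, 6.0]   # toy returns observed for a single (s, a) pair

    q_mean, n = 0.0, 0
    q_const, alpha = 0.0, 0.1
    for g in returns:
        n += 1
        q_mean += (1.0 / n) * (g - q_mean)    # identical to the average of all returns so far
        q_const += alpha * (g - q_const)      # slowly forgets older returns

    print(q_mean, sum(returns) / len(returns))  # both print 4.5
    print(q_const)                              # recency-weighted estimate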

+ +

In a control scenario using a list of old return values is not only memory inefficient, it is also sample inefficient as you would need samples from each improved policy to outnumber the older samples in order to remove sampling bias due to using returns from initial worse policies. A fixed, or slowly decaying $\alpha$ is a simple and efficient way of ""forgetting"" the old return values as they become less relevant to the task.

+ +

Another way to achieve similar forgetting may be to have a maximum size for each $Returns(s,a)$ list. However, that is not mentioned in the MC Control pseudocode.

+ +

Perhaps I am missing something, as the book has been through significant review process. However, it may be that this detail is overlooked because the basic MC Control scenarios are not as interesting as all the extensions and combinations with TD, where you will find the book does use a learning rate $\alpha$ - including when making comparison charts of MC vs TD approaches etc. The explicit list of returns and averaging over them for control only occurs in a couple of places in the text in this pseudocode, and is not mentioned again.

+",1847,,1847,,6/6/2019 19:11,6/6/2019 19:11,,,,0,,,,CC BY-SA 4.0 +12727,1,,,6/6/2019 20:10,,1,274,"

If an NLP system processes a text containing proper nouns like names, trademarks, etc., without knowing anything about the language (i.e. no lexicon), is it possible to recognise them?

+",25836,,2193,,6/7/2019 8:30,12/20/2019 7:16,How to distinguish between proper nouns and other words in NLP?,,1,0,,,,CC BY-SA 4.0 +12728,2,,12727,6/6/2019 21:32,,1,,"

Essentially you would need to train a named entity recognizer (NER) to recognize the names out of the common words.
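To illustrate the kind of output such a model produces, here is a sketch using spaCy's pretrained English pipeline (for illustration only; the setting in the question assumes no such resources exist for the target language, and it requires the small English model to be installed):

    import spacy

    nlp = spacy.load('en_core_web_sm')          # pretrained English pipeline
    doc = nlp('Alice bought a MacBook from Apple in Berlin.')
    for ent in doc.ents:
        print(ent.text, ent.label_)             # e.g. Alice PERSON, Apple ORG, Berlin GPE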

+ +

There are many works that try to use a similar language to the language in question as a pivot to train a full NER model (for example, Cheap Translation for Cross-Lingual Named Entity Recognition).

+ +

Your task might be slightly simpler, as you are interested only in one class: whether it is a named entity or not. But in general it is very similar to this setup.

+",22794,,2193,,6/7/2019 11:55,6/7/2019 11:55,,,,0,,,,CC BY-SA 4.0 +12729,2,,12659,6/6/2019 22:28,,1,,"

Surprisingly, all the other answers are very vague and try to approach this from the human translator's POV. Let's switch over to the ML engineer's POV.

+ +

When creating a translation tool, one of the first questions that we should consider is ""How do we measure that our tool works?"".

+ +

Which is essentially what the OP is asking.

+ +

Now this is not an easy task (some other answers explain why). There is a Wikipedia Article that mentions different ways to evaluate machine translation results - both human and automatic scores exist (such as BLEU, NIST, LEPOR).
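As a small illustration of the automatic side, BLEU can be computed for a single sentence pair with NLTK (a sketch only; real evaluations use whole corpora, multiple references and smoothing):

    from nltk.translate.bleu_score import sentence_bleu

    reference = ['the', 'dog', 'caught', 'the', 'ball']
    candidate = ['the', 'puppy', 'caught', 'the', 'ball']
    score = sentence_bleu([reference], candidate, weights=(0.5, 0.5))
    print(score)   # roughly 0.63: the candidate shares most 1-grams and 2-grams with the reference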

+ +

With the rise of neural network techniques, those scores have improved significantly.

+ +

Translation is a complex problem. There are many things that can go right (or wrong), and a computer translation system often misses subtleties that stand out to a human speaker.

+ +

I think, if we are to think about the future, there are a few things that we can rely on:

+ +
    +
  • Our techniques are getting better, more widely known and better tested. This is going to improve accuracy in the long run.
  • +
  • We are developing new techniques which can take into account variables previously ignored or just do a better job.
  • +
  • Many currently existing translation models are often ""reused"" to translate other languages (for example, try translating ""JEDEN"" from Polish to Chinese (traditional) using Google Translate - you will end up with ""ONE"", which is evidence that Google translates Polish to English, and then English to Chinese). This is obviously not a good approach - you are going to lose some information in the process - but it is one that will still work, so companies like Google use it for languages where they don't have enough manpower or data. With time, more specialized models will appear, which will improve the situation.
  • +
  • Also, as the previous point stated, more and more data will only help improve machine translation.
  • +
+ +

To summarize: although not solved, this complex problem is certainly on the right track, and it already allows for some impressive results for well-researched language pairs.

+",26259,,,,,6/6/2019 22:28,,,,1,,,,CC BY-SA 4.0 +12731,2,,12705,6/7/2019 0:40,,3,,"

This is an interesting question. One good place to start would be the gargantuan catalogue of rather simple RTS flash games and the like. There are many sites that host hundreds of such titles, and it would not be too difficult to build the framework for these types of games (there are open-source tools that can help as well).

+ +

As far as ""real"" mainstream games go, the pickings are rather slim for simple benchmark RTS titles. One that comes to mind is Rome: Total War. At least in exhibition mode, there is not much to strategize over besides who to attack and from where. Also, as I assume this relates to StarCraft, one good simplified predecessor to StarCraft would be the Dune series, and in particular Dune 2.

+",9608,,,,,6/7/2019 0:40,,,,1,,,,CC BY-SA 4.0 +12734,1,16782,,6/7/2019 13:23,,2,64,"

The question is a little bit broad, but I could not find any concrete explanation anywhere, hence I decided to ask the experts here.

+ +

I have trained a classifier model for a binary classification task. Now I am trying to fine-tune the model. With different sets of hyperparameters, I am getting different accuracies on my training and cross-validation sets. For example:

+ +
(1) Train set: 0.99 | Cross-validation set: 0.72
+(2) Train set: 0.75 | Cross-validation set: 0.70
+(3) Train set: 0.69 | Cross-validation set: 0.69
+
+ +

These are approximate numbers. But my point is: for certain sets of hyperparameters, I am getting more or less the same CV accuracy, while the accuracy on the training data ranges from heavily overfit to not so overfit.

+ +

My question is - which of these models will work best on future unseen data? What is the recommendation in this scenario, shall we choose the model with higher training accuracy or lower training accuracy, given that CV accuracy is similar in all cases above (in fact CV score is better in the overfitted model)?

+",23993,,,,,11/26/2019 10:02,Ideal score of a model on training and cross validation data,,2,0,,,,CC BY-SA 4.0 +12735,2,,12654,6/7/2019 13:23,,1,,"

There are a few different ways you could frame the problem...

+ +

For example, the simplest, yet probably not the most effective, would be to treat it as a supervised classification task.

+ +

In this case, you would gather data and split it into 2 classes (binary), with one dataset consisting of dialogues and the other of monologues. Obviously, this framing comes with its own set of problems, namely: how do you prepare the dataset, and how do you deal with live examples?

+ +

Of course, there are other ways you could tackle the problem, perhaps by framing it as a detection task. It all depends on what you feel is the easiest tenable solution, given the variables involved in your specific instance.

+",9608,,,,,6/7/2019 13:23,,,,0,,,,CC BY-SA 4.0 +12736,1,12743,,6/7/2019 15:05,,1,25,"

I wish to be able to detect: pedestrians, cars, traffic lights

+ +

I have two large datasets:

- One contains instances and labels of all three classes.
- The other contains instances of all three, but only labels for pedestrians and cars, i.e. there are many unlabelled traffic lights.

+ +

I want to combine the two datasets and train YOLOv3 on the result. Will the presence of unlabelled objects of interest significantly affect detection performance for that category?

+",21583,,,,,6/7/2019 19:39,Are absence of labels for classes of interest in a vision dataset a big problem?,,1,0,,,,CC BY-SA 4.0 +12738,1,,,6/7/2019 16:42,,2,171,"

In regression, in order to minimize an error function, a functional form of hypothesis $h$ must be decided upon, and it must be assumed (as far as I'm concerned) that $f$, the true mapping of instance space to target space, must have the same form as $h$ (if $h$ is linear, $f$ should be linear. If $h$ is sinusoidal, $f$ should be sinusoidal. Otherwise the choice of $h$ was poor).

+ +

However, doesn't this require a priori knowledge of datasets that we are wanting to let computers do on their own in the first place? I thought machine learning was letting machines do the work and have minimal input from the human. Are we not telling the machine what general form $f$ will take and letting the machine using such things as error minimization do the rest? That seems to me to forsake the whole point of machine learning. I thought we were supposed to have the machine work for us by analyzing data after providing a training set. But it seems we're doing a lot of the work for it, looking at the data too and saying ""This will be linear. Find the coefficients $m, b$ that fit the data.""

+",26250,,,,,6/10/2019 12:46,How is regression machine learning?,,4,2,,,,CC BY-SA 4.0 +12739,1,12740,,6/7/2019 16:56,,4,318,"

AlphaGo Zero (https://deepmind.com/blog/alphago-zero-learning-scratch/) has several key components that contribute to its success:

+ +
    +
  1. A Monte Carlo Tree Search Algorithm that allows it to better search and learn from the state space of Go
  2. +
  3. A Deep Neural Network architecture that learns the value and policies of given states, to better inform the MCTS.
  4. +
+ +

My question is, how is this Reinforcement Learning? Or rather, what aspects of this algorithm specifically make it a Reinforcement Learning problem? Couldn't this just be considered a Supervised Learning problem?

+",22424,,,,,6/7/2019 17:28,How Does AlphaGo Zero Implement Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 +12740,2,,12739,6/7/2019 17:21,,4,,"

If you learn a policy or a value function from experience (that is, interaction with an environment), that's RL. In the case of AlphaGo, the MCTS is used to acquire the experience.

+ +

RL could in fact be considered supervised learning (SL) or, more specifically, self-supervised learning, where the experience corresponds to the labels in SL, especially nowadays with techniques like experience replay.

+",2444,,2444,,6/7/2019 17:28,6/7/2019 17:28,,,,0,,,,CC BY-SA 4.0 +12741,2,,12738,6/7/2019 19:15,,3,,"

So, in a sense, you are correct. Using your jargon: linear regression will only ""work"" if the true function is approximately $y=h(x)=\beta^{T}x+\beta_0$. The advantages of using this are that it's lightweight, it's convex, and all-around easy.

+ +

But for a lot of larger problems, this won't work. As you said, you want the machine to do the work, so this is (kind of) where deeper models come into play: you allow a learnable featurization and classification/regression. Think about it this way: the result of your regression is most likely linearly associated with some set of features, they just may not be the ones you are interested in (you can actually prove this with any infinitely wide network - the universal approximation theorem). Unfortunately, we can't use an infinite-dimensional model, so we run with these giant over-parametrized models, where we hope that a good function can be described by a sub-structure (only recently are we starting to pay attention to how these sub-structures form -- look at this paper).

+ +

But the way you are thinking about it is a large pitfall for many trying to move forward. A lot of ML people nowadays gain success by throwing a function without a lot of parameters at a big-data problem, but you'll see the largest advancements in the field come from a theoretical understanding of the featurization and optimization.

+ +

I hope this helped

+",25496,,,,,6/7/2019 19:15,,,,0,,,,CC BY-SA 4.0 +12742,1,,,6/7/2019 19:32,,2,26,"

I've read a number of articles on how GPUs can speed up matrix algebra calculations, but I'm wondering how calculations are performed when one uses various kernel functions in a neural network.

+ +

If I use Sigmoid functions in my neural network, does the computer use the CPU for the Sigmoid calculation, and then the GPU for the subsequent matrix calculations?

+ +

Alternatively, is the GPU capable of doing nonlinear calculations in addition to the linear algebra calculations? If not, how about a simple Kernel function like ReLU? Can a GPU do the Relu calculation, or does it defer to the CPU?

+ +

Specifically, I'm using Keras with a Tensorflow backend, and would like to know what TensorFlow can and cannot use the GPU for, but I'm also interested in the general case.

+",17741,,,,,6/7/2019 19:32,"In addition to matrix algebra, can GPU's also handle the various Kernel functions for Neural Networks?",,0,0,,,,CC BY-SA 4.0 +12743,2,,12736,6/7/2019 19:39,,0,,"

So it depends on sizes: if the second dataset has a lot of traffic lights without labels and you train on it in a basic setup, it will cause large performance decreases on that class. If it's small, then it may just be “noise” to the model, and it'll filter it out and still perform well. My recommendation is to adjust the network slightly and add a flag input saying which dataset the input came from: if from the first, train normally; if from the second, then ignore all losses pertaining to boxes/classes in which the network thinks there is a traffic light. That way your labels don't try to tell the model "there's no traffic light there".
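A minimal sketch of that masking idea (made-up tensors, not YOLOv3's actual loss code): zero out the traffic-light class loss for examples that come from the partially labelled dataset.

    import numpy as np

    # toy per-example, per-class losses: columns = [pedestrian, car, traffic_light]
    class_losses = np.random.rand(4, 3)
    fully_labelled = np.array([1, 0, 1, 0])      # flag: 1 = dataset A (all labels), 0 = dataset B

    mask = np.ones_like(class_losses)
    mask[fully_labelled == 0, 2] = 0.0           # ignore the traffic-light loss for dataset B
    total_loss = (class_losses * mask).sum() / mask.sum()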

+",25496,,,,,6/7/2019 19:39,,,,0,,,,CC BY-SA 4.0 +12744,1,,,6/7/2019 20:59,,1,27,"

Most examples, if not all, are models that have been trained on images that were converted to greyscale. Does this mean that the models only detect edges? Why wouldn't you want to keep color, so that the model could learn that as well?

+",22145,,,,,6/7/2019 20:59,Training Haar Cascade model with grey vs color images,,0,0,,,,CC BY-SA 4.0 +12745,2,,12738,6/7/2019 22:32,,0,,"

Regression is a statistical technique that is used in machine learning, and whether it is used depends on the nature of the machine learning problem. I think you should look into the relation between statistics and machine learning. They are not the same, but you can see statistical methods inside machine learning methods.

+ +

For your specific problem, there are a lot of optimization techniques in AI (not just in machine learning). So, I think you should examine the problem more closely to see the relation between machine learning, AI, and statistics in this regression example.

+",4446,,,,,6/7/2019 22:32,,,,0,,,,CC BY-SA 4.0 +12746,1,,,6/8/2019 0:45,,1,71,"

I want to train a neural network on some input data from a probability distribution (say a Gaussian). The loss function would normally be $-\sum\log(f(x_i))$, where the sum is over the whole data (or in this case a mini batch) and $f$ is the NN function. However I need to enforce the fact that $\int_0^\infty f(x)dx=1$, in order for $f$ to be a real probability distribution. How can I add that to the loss function? Thank you!

+",23871,,,,,6/8/2019 6:31,Unit integral condition on the output layer,,0,14,,,,CC BY-SA 4.0 +12750,1,,,6/8/2019 15:50,,2,104,"

I have some limited experience with MLPs and CNNs. I am working on a project where I've used a CNN to classify ""images"" into two classes, 0 and 1. I say ""images"" as they are not actually images in the traditional sense, rather we are encoding a string from a limited alphabet, such that each character has a one-hot encoded row in the ""image"". For example, we are using this for a bioinformatics application, so with the alphabet {A, C, G, T} the sequence ""ACGTCCAGCTACTTTACGG"" would be:

+ +

+ +

All ""images"" are 4x26. I used a CNN to classify pairs of ""images"" (either using two channels, i.e. 2x4x26 or concatenating two representations as 8x26) according to our criteria with good results. The main idea is for the network to learn how 2 sequences interact, so there are particular patterns that make sense. If we want to detect a reverse complement for example, then the network should learn that A-T and G-C pairs are important. For this particular example, if the interaction is high/probable, the assigned label is 1, otherwise 0.

+ +

However, now we want to go one step further and have a model that is able to generate ""images"" (sequences) that respect the same constraints as the classification problem. To solve this, I looked at Generative Adversarial Networks as the tool to perform the generation, thinking that maybe I could adapt the model from the classification to work as the discriminator. I've looked at the ""simpler"" models such as DCGAN and GAN, with implementations from https://github.com/eriklindernoren/Keras-GAN, as I've never studied or used a GAN before.

+ +

Say that we want to generate pairs that are supposed to interact, or with the label 1 from before. I've adapted the DCGAN model to train on our 1-labelled encodings and tried different variations for the discriminator and generator, keeping in mind rules of thumb for stability. However, I can't get the model to learn anything significant. For example, I am trying to make the network learn the simple concept of reverse complement, mentioned above (expectation: learn to produce a pair with high interaction, from noise). Initially the accuracy for the discriminator is low, but after a few thousand epochs it increases drastically (very close to 100%, and the generator loss is huge, which apparently is a good thing, as the two models ""compete"" against each other?). However, the generated samples do not make any sense.

+ +

I suspect that the generator learns the one-hot encoding above - since early generator output is just noise, it probably learns something like ""a single 1 per column is good"", but not the more high level relation between the 1s and 0s. The discriminator probably is able to tell that the early generated outputs are garbage as there are 1s all over the place, but perhaps at some point the generator can match the one-hot encoding and thus the discriminator decides that is not a fake. This would explain the high accuracy, despite the sequences not making sense.

+ +

I am not sure of this is the case or not, or if it makes sense at all (I've just started reading about GANs yesterday). Is there a way to capture the high level features of the dataset? I am not interested in just generating something that looks like a real encoding, I'd like to generate something that follows the encoding but also exhibits patterns from the original data.

+ +

I was thinking that maybe pretraining the discriminator would be a good idea, because it would then be able to discern between real-looking encodings for both the 0 and 1 classes. However, the pretraining idea seems frowned upon.

+ +

I'd appreciate any ideas and advice. Thanks!

+",26292,,26292,,6/8/2019 20:20,6/8/2019 20:20,Can GANs be used to generate matching pairs to inputs?,,0,0,,,,CC BY-SA 4.0 +12752,2,,12738,6/8/2019 17:20,,1,,"

Actually, regression comes from statistical analysis. As you know, many business activities (decision making) rely on previous trends that can be extracted from an organization's transaction data. When regression is performed on that organizational data, one can understand what decision can be made. One could even simulate different conditions once the regression line is generated: to predict unknown cases, the decision maker can pass in the numerical values corresponding to certain phenomena in the operation of the organization.

+ +

How is regression machine learning?

+ +

Let's start from the definition of machine learning.

+ +
+

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves.

+
+ +

Source: https://www.expertsystem.com/machine-learning-definition/

+ +

From the definition, it becomes clear that machine learning is about learning insights from the data without being explicitly programmed. Doesn't it feel great to know what the previous trends in my business transaction data are trying to convey to me?

+ +
+

Please note that, in machine learning algorithms like regression, one is trying to build some relation within the transactional data.

+
+ +

So how is the relation between the data built?
Consider that you are in the business of selling and buying houses and you want to predict house prices according to the latest trend. What you have is data on house prices and the features of each house.

+ +
+

Feature : house_area, no_of_rooms
+ Target (what you want to predict): Price

+
+ +

Now, you perform regression on those data and you want to find out what would be best price for the house with the feature that is not mentioned in the latest trend's data. +Suppose general regression becomes like:

+ +
+

price = a * house_area + b * no_of_rooms + some_constant

+
+ +

So, in some sense, we're just trying to find the best-fit line for the latest trend data, with some variables like a, b and some_constant. Isn't it great to extract such a higher level of detail from that trend data, so as to know what the house price would be for cases not mentioned in the so-called 'training data'?
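For illustration, here is a minimal sketch of fitting exactly that kind of relation with scikit-learn (the numbers are made up):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # features: [house_area, no_of_rooms], target: price (toy data)
    X = np.array([[50, 2], [80, 3], [120, 4], [150, 5]])
    y = np.array([100000, 160000, 240000, 310000])

    model = LinearRegression().fit(X, y)
    print(model.coef_, model.intercept_)        # the learned a, b and some_constant
    print(model.predict([[100, 3]]))            # price prediction for an unseen house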

+ +

What about choosing the objective function for the best mapping?
Suppose the relation is sometimes non-linear. But how would my algorithm know that? In such a case, one can use an artificial neural network, as it can learn to fit non-linear training data too.

+ +
+

Note: You can learn to simulate the non linear data at: https://playground.tensorflow.org

+
+",12021,,12021,,6/8/2019 17:29,6/8/2019 17:29,,,,0,,,,CC BY-SA 4.0 +12753,1,,,6/8/2019 22:58,,1,26,"

Given a set of historical data points, I am trying to predict a continuous output of which I have no historical record; therefore, the problem is of an unsupervised nature.

+ +

I am wondering if there is any method or approach I should take to tackle this problem? Essentially, how to build a model that will provide an output that is not clustered?

+",26296,,,,,6/8/2019 22:58,Prediction of values with an unsupervised model,,0,1,,,,CC BY-SA 4.0 +12754,2,,12705,6/9/2019 2:22,,3,,"

These are some examples of simple RTS games I know of:

+ + + +
+

It is very fast; the game environment runs 40,000 frames per second per core on a Macbook Pro. It captures the key dynamics of a real-time strategy game: Both players gather resources, build facilities, explore unknown territory (terrain that is out of sight of the player), and attempt to control regions on the map. In addition, the engine has characteristics that facilitate AI research: perfect save/load/replay, full access to its internal game state, multiple built-in rule-based AIs, visualization for debugging, and a human-AI interface, among others.

+
+ + + +
+

microRTS is a small implementation of an RTS game, designed to perform AI research. The advantage of using microRTS with respect to using a full-fledged game like Wargus or Starcraft (using BWAPI) is that microRTS is much simpler, and can be used to quickly test theoretical ideas, before moving on to full-fledged RTS games.

+
+ +
    +
  • Galcon Fusion: A streamlined RTS game with relatively few actions, such as unit production and when to attack/defend.
  • +
+",3373,,3373,,6/10/2019 2:45,6/10/2019 2:45,,,,0,,,,CC BY-SA 4.0 +12755,1,,,6/9/2019 10:15,,1,337,"

From the answers to this question In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?, I got the fact that each filter has different weights for each input channel. But why should that be the case? What if we apply the same weights to each input channel? Does it work or not?

+",26304,,2444,,7/11/2019 21:54,7/26/2022 4:07,Why should each filter have different weights for each input channel?,,1,0,,,,CC BY-SA 4.0 +12756,2,,12755,6/9/2019 14:00,,1,,"

For simplicity, let's consider only the first convolutional layer, that is, the one applied to the image. If you consider an RGB image, then there are $3$ channels: the red channel, the green channel and the blue channel. Thus, a kernel that is applied to this image will also have $3$ channels: the red channel, the green channel and the blue channel. In general, the distributions of the intensity of the red, green and blue colors in the image are different, so, in general, the red, green and blue channels of the kernel will also be different, because they need to keep track of different information.
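As a quick illustration (a Keras sketch with arbitrary layer sizes), the kernel of a 2D convolution applied to an RGB input literally has one slice of weights per input channel:

    from tensorflow.keras import layers, Input, Model

    inp = Input(shape=(32, 32, 3))               # RGB image: 3 input channels
    conv = layers.Conv2D(filters=8, kernel_size=5)
    out = conv(inp)

    print(conv.get_weights()[0].shape)           # (5, 5, 3, 8): height, width, input channels, filters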

+",2444,,,,,6/9/2019 14:00,,,,3,,,,CC BY-SA 4.0 +12757,1,,,6/9/2019 14:06,,1,98,"

Since the supreme court is always political, why not program 9 AI robots that use different methods to determine whether a law is constitutional and the outcome of cases. How would engineers go about building this? Would it work?

+",26306,,,,,10/21/2019 21:01,Make 9 AIs to replace Supreme Court justices,,1,2,,,,CC BY-SA 4.0 +12758,1,,,6/9/2019 14:44,,1,156,"

I am training a neural network which produces the following errors (epoch number on the x axis). I have some questions regarding interpreting it.

+ +
    +
  • When I say model.predict is it giving me the result based on the final state (that is epoch 5000)?

  • +
  • Towards the end (and some places in the middle) there are places where the training error and validation error are farther apart. Does this mean that the model was over-fitting on those epochs?

  • +
  • Based on the graph, can one determine that the model was best at a certain epoch?

  • +
  • Does Keras have API methods to retrieve the model at a specific epoch so that I can retrieve the best model?

  • +
+ +

+",26307,,,,,6/9/2019 14:44,How can I interpret the following error graph?,,0,8,,,,CC BY-SA 4.0 +12759,1,13345,,6/9/2019 16:17,,1,1585,"

I have a question about how the value and policy heads are used in AlphaZero (not Alphago Zero), and where the leaf nodes are relative to the root node. Specifically, there seem to be several possible interpretations:

+ +
    +
  1. Policy estimation only. This would be most similar to DFS, in which an average evaluation is computed through MCTS rollouts (though, as others have noted, the AlphaZero implementation actually seems to be deterministic apart from the exploration component, so 'rollout' may not be the most appropriate term) after reaching the end of the game. Here each leaf node would be at the end of the game.

  2. +
  3. Value estimation only. It seems that if the value network is to be used effectively, there should be a limit on the depth to which any position is searched, e.g. 1 or 2 ply. If so, what should the depth be?

  4. +
  5. They are combined in some way. If I understand correctly, there is a limit on the maximum number of moves imposed - so is this really the depth? By which I mean, if the game has still not ended, this is the chance to use the value head to produce the value estimation? The thing is that the paper states that the maximum number of moves for Chess and shogi games was 512, while it was 722 moves for Go. These are extremely deep - evaluations based on these seem to be rather too far from the starting state, even when averaged over many rollouts.

  6. +
+ +

My search for answers elsewhere hasn't yielded anything definitive, because they've focused more on one side or the other. For example, https://nikcheerla.github.io/deeplearningschool/2018/01/01/AlphaZero-Explained/ the emphasis seems to be on the value estimation.

+ +

However, in the Alphazero pseudocode, e.g. from https://science.sciencemag.org/highwire/filestream/719481/field_highwire_adjunct_files/1/aar6404_DataS1.zip the emphasis seems to be on the policy selection. Indeed, it's not 100% clear if the value head is used at all (value seems to return -1 by default).

+ +

Is there a gap in my understanding somewhere? Thanks!

+ +

Edit: To explain this better, here's the bit of pseudocode given that I found slightly confusing:

+ +
class Network(object):
+
+  def inference(self, image):
+    return (-1, {})  # Value, Policy
+
+  def get_weights(self):
+    # Returns the weights of this network.
+    return []
+
+ +

So the (-1,{}) can either be placeholders, or -1 could be an actual value and {} a placeholder. My understanding is that they are both placeholders (because otherwise the value head would never be used), but -1 is the default value for unvisited nodes (this interpretation is taken from here, from the line about the First Play Urgency value: http://blog.lczero.org/2018/12/alphazero-paper-and-lc0-v0191.html). Now, if I understand correctly, inference is called by the evaluate function both during training and during play. So my core question is: how deep into the tree are the leaf nodes (i.e. the nodes where the evaluate function would be called)?

+ +

Here is the bit of code that confused me. In the official pseudocode as below, the 'rollout' seems to last until the game is over (expansion stops when a node has no children). So this means that under most circumstances you'll have a concrete game result - the player to move doesn't have a single move, and hence has lost (so -1 also makes sense here).

+ +
def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
+  root = Node(0)
+  evaluate(root, game, network)
+  add_exploration_noise(config, root)
+
+  for _ in range(config.num_simulations):
+    node = root
+    scratch_game = game.clone()
+    search_path = [node]
+
+    while node.expanded():
+      action, node = select_child(config, node)
+      scratch_game.apply(action)
+      search_path.append(node)
+
+    value = evaluate(node, scratch_game, network)
+    backpropagate(search_path, value, scratch_game.to_play())
+  return select_action(config, game, root), root
+
+ +

But under such conditions, the value head still doesn't get very much action (you'll almost always return -1 at the leaf nodes). There are a couple of exceptions to this.

+ +
    +
  1. When you reach the maximum number of allowable moves - however, this number is a massive 512 for chess & Shogi and 722 for Go, and seems to be too deep to be representative of the 1-ply positions, even averaged over MCTS rollouts.
  2. +
  3. When you are at the root node itself - but the value here isn't used for move selection (though it is used for the backprop of the rewards)
  4. +
+ +

So does that mean that the value head is only used for the backprop part of AlphaZero (and for super-long games)? Or did I misunderstand the depth of the leaf nodes?

+",26309,,26309,,6/10/2019 12:17,7/12/2019 23:36,How does AlphaZero use its value and policy heads in conjunction?,,1,2,,,,CC BY-SA 4.0 +12761,1,12771,,6/9/2019 18:01,,1,1731,"

I have way more unlabeled data than labeled data. Therefore I would like to train an autoencoder using MobileNetV2 as the encoder. Then I will use the pre-trained model for the classification of the labeled data.

+

I think it is rather difficult to "invert" the MobileNet architecture to create a decoder. Therefore, my question is: can I use a different architecture for the decoder, or will this introduce weird artefacts?

+",23063,,2444,,3/22/2022 14:36,3/22/2022 14:36,"If I use MobileNetV2 for the encoder, can I use a different architecture for the decoder?",,2,0,,,,CC BY-SA 4.0 +12764,1,12765,,6/9/2019 23:29,,1,111,"

Why do the GAN's loss functions use an expectation (sum + division) instead of a simple sum?

+",26313,,2444,,9/27/2020 19:57,9/27/2020 19:57,Why is an expectation used instead of simple sum in GANs?,,1,0,,,,CC BY-SA 4.0 +12765,2,,12764,6/10/2019 0:57,,0,,"

I'm assuming you mean the original loss explained in the original GAN paper. This practice was by design and for interpretation. An expectation is what you're expecting to get, which is generally a good objective when you are sampling. Note this isn't just in GANs; it appears in most objective functions used across a wide spread of problems.

+

In general practice, though, dividing is good because it works as a normalization. Let's say you train one batch with 10 elements and the next with 8. Wouldn't you want each training example to be weighted equally?

+

Also, even if you use equal-sized batches, then the "division" you mention gets eaten up by the learning rate if you're using some form of gradient scheme.

+",25496,,2444,,9/27/2020 19:56,9/27/2020 19:56,,,,0,,,,CC BY-SA 4.0 +12768,1,,,6/10/2019 9:08,,2,292,"

Here's the famous VGG-16 model.

+

+

Do the inputs and outputs of a convolutional layer, before pooling, usually have the same depth? What's the reason for that?

+

Is there a theory or paper trying to explain this kind of setting?

+",26321,,2444,,7/4/2020 20:30,12/1/2020 21:02,Why do the inputs and outputs of a convolutional layer usually have the same depth?,,1,1,,,,CC BY-SA 4.0 +12771,2,,12761,6/10/2019 10:24,,1,,"
+

can I use a different architecture for the decoder, or will this + introduce weird artifacts?

+
+ +

If you are using a U-Net-like architecture with skip connections from encoder layers to the corresponding decoder layers, the outputs of corresponding layers should have the same spatial resolution. Other than that, there are no commonly recognized limitations on the decoder architecture for convolutional networks.

+",22745,,,,,6/10/2019 10:24,,,,3,,,,CC BY-SA 4.0 +12773,1,,,6/10/2019 12:19,,1,205,"

I am interested in creating a neural network-based engine for chess. It uses a $8 \times 8 \times 73$ output space for each possible move as proposed in the Alpha Zero paper: Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.

+ +

However, when running the network, the first selected move is invalid. How should we deal with this? Basically, I see two options.

+ +
    +
  1. Pick the next highest outputted move, until it is a valid move. In this case, the network might automatically over time not put illegal moves on top.
  2. +
  3. Process the game as a loss for the player who picked the illegal move. This might have the disadvantage that the network might be 'stuck' on only a few legal moves.
  4. +
+ +

What is the preferred solution to this particular problem?

+",26326,,2444,,7/10/2019 20:45,7/10/2019 20:46,How to deal with invalid output in a policy network?,,1,0,,7/11/2019 22:15,,CC BY-SA 4.0 +12774,2,,12773,6/10/2019 12:27,,0,,"

You should have a method that generates a legal-move mask from the board state. Apply this mask to the policy head's output before the softmax normalization, so that illegal moves receive zero probability.
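A minimal sketch of that masking step (NumPy, made-up sizes; in practice the mask comes from your move generator):

    import numpy as np

    logits = np.random.randn(8 * 8 * 73)         # raw policy head output
    legal = np.zeros_like(logits, dtype=bool)
    legal[[0, 5, 42]] = True                     # indices of the legal moves for this position

    masked = np.where(legal, logits, -np.inf)    # illegal moves get -inf before the softmax
    policy = np.exp(masked - masked[legal].max())
    policy /= policy.sum()                       # probabilities over legal moves only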

+",25496,,2444,,7/10/2019 20:46,7/10/2019 20:46,,,,0,,,,CC BY-SA 4.0 +12775,2,,12738,6/10/2019 12:46,,0,,"

What you are asking touches upon two very different approaches to machine learning:

+ +
    +
  1. The empirical approach (many people just call this 'machine learning', and some people like to call it 'algorithmic machine learning')
  2. +
  3. The statistical approach (some people like to call this 'statistical machine learning')
  4. +
+ +

The purely empirical approach is very goal-oriented - think discriminative models that are only used for prediction. You really only care about whether the data fits the training + test data well according to whichever metric you've selected.

+ +

The statistical approach is very process-oriented - you would want to identify the processes that generate the data, the distributions they follow, whether your results are statistically significant, etc.

+ +

Along this spectrum, most folks fall somewhere into the middle.

+ +

What you've described is closer to statistical machine learning - to practitioners of the other approach, regression only means that you are trying to predict for a continuous target variable (whereas classification would be for a discrete target variable). Then you might poke around the data a bit, fiddle around with features & hyperparameters, and try out a lot of different regression algorithms, going from OLS, SVMs, nearest neighbour regressors, random forests, gradient boosted trees, and maybe even RNNs, etc. In the extreme case, a purist of this approach wouldn't care about the statistics or whatever the underlying distributions are at all, but only care if the final regression gives good results in practice.

+ +

While there are clear risks with this approach (especially when the underlying assumptions of the models fall apart), it can give good results, especially when the practitioner is a good coder and can try out lots of possibilities very quickly, and even produce novel algorithms. The fact is that maths does sometimes lag development of other fields - Fourier analysis for example, and deep neural networks.

+ +

Another very approximate analogy would be science vs engineering.

+",26309,,,,,6/10/2019 12:46,,,,0,,,,CC BY-SA 4.0 +12777,1,,,6/10/2019 14:17,,1,30,"

In many chatbots, I've seen a lot of hardcoded responses, but nothing that allows an AI to break a piece of dialogue into components (say that the speaker sounds happy or is trying to be manipulative) and model a response based on this.

+

I envision a coding scheme for different core components of conversation. This would allow an AI to be more dynamic in its responses, and would be closer to actually being able to hold a conversation.

+

I'm not looking for AI-generated text, at least not in the sense of some NN or the like being fed a diet of literature and seeing what it spits out - there's nothing dynamic about that.

+",26334,,2444,,12/16/2021 18:01,12/16/2021 18:01,Is there a way to break a piece of dialogue into components?,,0,0,,,,CC BY-SA 4.0 +12778,1,,,6/10/2019 14:35,,2,617,"

I am trying to make my AI win the board game ""Catan"" against my friends. Therefore, I am using the Python implementation of NEAT.

+ +

As I changed the values of weight_mutate_power, response_mutate_power, bias_mutate_power and compatibility_threshold in the config, the number of individuals and species exploded (roughly doubled every generation and exceeded pop_size).

+ +
weight_mutate_power = 12
+response_mutate_power = 0
+bias_mutate_power = 0.5
+compatibility_threshold = 3
+
+ +

Playing with those values, I discovered that the rate of the explosion changes in relation to them (everything worked fine with the standard values from the documentation).

+ +

Any idea how to control this behavior? +My cluster is drowning in genomes...

+",26332,,,,,5/1/2020 5:27,Exploding population size in neat-python,,0,1,,,,CC BY-SA 4.0 +12779,1,,,6/10/2019 14:52,,2,219,"

I am writing a report where I used a slightly modified version of MCTS (not parallelized). I thought it could be interesting if I could calculate its time complexity. I'd appreciate any help I can get.

+ +

Here's the rough idea of how it works:

+ +

Instead of tree search, I'm using graph search meaning I keep a list of visited nodes in order to avoid adding duplicate nodes.

+ +

So in the expansion phase, I add all child nodes of the current node that aren't present elsewhere in the tree.

+ +

For the remaining phases, it's essentially the same as the basic version of MCTS, with a default random policy in the simulation step.

+",23866,,2444,,11/19/2019 22:41,11/19/2019 22:41,What is the time complexity of an unparellelized Monte Carlo tree search?,,0,4,,,,CC BY-SA 4.0 +12780,2,,11787,6/10/2019 16:12,,0,,"

Sorry, this is more of a comment than an answer. I'm wondering if you have found a definitive answer to your question, because I have a very related question.

+ +

I'm also confused by the AlphaZero algorithm - my explanation for my confusion is specified here: How does AlphaZero use its value and policy heads in conjunction?.

+ +

The thing is that I think the AlphaZero algorithm is also different from the AlphaGo Zero algorithm. A lot of the sources that I've tried to refer to really mix the two together.

+ +

In particular, there's this function in the official pseudocode, which really confused me:

+ +
def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
+  root = Node(0)
+  evaluate(root, game, network)
+  add_exploration_noise(config, root)
+
+  for _ in range(config.num_simulations):
+    node = root
+    scratch_game = game.clone()
+    search_path = [node]
+
+    while node.expanded():
+      action, node = select_child(config, node)
+      scratch_game.apply(action)
+      search_path.append(node)
+
+    value = evaluate(node, scratch_game, network)
+    backpropagate(search_path, value, scratch_game.to_play())
+  return select_action(config, game, root), root
+
+
+def select_action(config: AlphaZeroConfig, game: Game, root: Node):
+  visit_counts = [(child.visit_count, action)
+                  for action, child in root.children.iteritems()]
+  if len(game.history) < config.num_sampling_moves:
+    _, action = softmax_sample(visit_counts)
+  else:
+    _, action = max(visit_counts)
+  return action
+
+ +

Mainly because, if you look at the definition of expanded...

+ +
  def expanded(self):
+    return len(self.children) > 0
+
+
+ +

I.e. when there are no more moves, and we are at the end of the game. I wonder what I'm missing here.

+",26309,,,,,6/10/2019 16:12,,,,0,,,,CC BY-SA 4.0 +12781,2,,12768,6/10/2019 16:14,,-1,,"

Keeping the same channel size allows the model to maintain rank, but I would say the main reason is convenience. It's easier bookkeeping.

+ +

Also, in many models the output features need some form of alignment with the input (an example being all models using residual units, $\hat{x} = F(x) + x$).

+",25496,,,,,6/10/2019 16:14,,,,2,,,,CC BY-SA 4.0 +12783,1,,,6/10/2019 18:54,,1,61,"

I'm trying to write the proof of correctness of Monte Carlo Tree Search. Any help would be really appreciated.

+",23866,,2444,,11/19/2019 22:40,11/19/2019 22:40,Proof of Correctness of Monte Carlo Tree Search,,0,0,,,,CC BY-SA 4.0 +12784,1,,,6/10/2019 20:25,,2,30,"

Shouldn't the discriminator and generator work fine even if they don't process data symmetrically? I mean, they only receive the final-layer results of each other; they don't use data from each other's hidden layers.

+",23941,,,,,6/10/2019 20:25,How important is architectural similarity between the discriminator and generator of a GAN?,,0,1,,,,CC BY-SA 4.0 +12785,2,,11740,6/10/2019 20:31,,2,,"

The paper that introduced AlphaGo, Mastering the game of Go with deep neural networks and tree search, motivates the use of MCTS

+ +
+

Monte Carlo tree search (MCTS) uses Monte Carlo rollouts to estimate the value of each state in a search tree. As more simulations are executed, the search tree grows larger and the relevant values become more accurate. The policy used to select actions during search is also improved over time, by selecting children with higher values. Asymptotically, this policy converges to optimal play, and the evaluations converge to the optimal value function. The strongest current Go programs are based on MCTS, enhanced by policies that are trained to predict human expert moves. These policies are used to narrow the search to a beam of high-probability actions, and to sample actions during rollouts. This approach has achieved strong amateur play.

+
+",2444,,,,,6/10/2019 20:31,,,,0,,,,CC BY-SA 4.0 +12786,1,,,6/10/2019 20:31,,2,200,"

I was working on a CNN. I modified the training procedure at runtime.

+

+

As we can see from the validation loss and validation accuracy, the yellow curve does not fluctuate much. The green and red curves suddenly fluctuate to a higher validation loss and lower validation accuracy, then go back to a lower validation loss and higher validation accuracy, especially the green curve.

+

Is it happening because of overfitting or something else?

+

I am asking it because, after fluctuation, the loss decreases to the lowest point, and also the accuracy increases to the highest point.

+

Can anyone tell me why is it happening?

+",26342,,2444,,11/7/2020 11:31,4/26/2023 19:02,Validation Loss Fluctuates then Decrease alongside Validation Accuracy Increases,,2,1,,,,CC BY-SA 4.0 +12787,1,,,6/10/2019 20:57,,2,93,"

In general, how does one make a neural network learn the training data while also forcing it to represent some known structure (e.g., representing a family of functions)?

+

The neural network might find the optimal weights, but those weights might no longer make the layer represent the function I originally intended.

+

For example, suppose I want to create a convolutional layer in the middle of my neural network that is a low-pass filter. In the context of the entire network, however, the layer might cease to be a low-pass filter at the end of training because the backpropagation algorithm found a better optimum.

+

How do I allow the weights to be as optimal as possible, while still maintaining the low-pass characteristics I originally wanted?

+

General tips or pointing to specific literature would be much appreciated.

+",26344,,2444,,6/30/2022 22:46,6/30/2022 22:46,How does one make a neural network learn the training data while also forcing it to represent some known structure?,,2,1,,,,CC BY-SA 4.0 +12788,1,,,6/10/2019 21:47,,1,57,"

I have a time-varying input size vector for a RNN. However, I am facing some difficulties understanding how to deal with my network weights when the input changes.

+ +

Say we have a set of natural positive integers +$$ +\Gamma=\{1,2,\dots,F\}, +$$ +where $F=100$ for the sake of the example.

+ +

A valid observation vector of my agent at time $t$ might be
$$
\gamma_t=[1,3,5,1].
$$
Thus, at time $t$, a set of weights will be produced by my RNN according to $\gamma_t$. Say that at time $t+1$ my observation vector changes to
$$
\gamma_{t+1}=[3,5,8],
$$
and there is my problem. If I now continue training my RNN with the previous weights, the output will inevitably be affected. Also, which weights shall I remove? I see that an RNN can face this issue, but how shall I deal with the previously computed weights? Which ones should I remove? How should I initialize new ones in case the cardinality of $\gamma_{t+1}$ is higher than that of $\gamma_t$?

+",26345,,,,,6/11/2019 15:08,RNN weights when varying the input size,,0,4,,,,CC BY-SA 4.0 +12791,1,,,6/11/2019 6:22,,1,27,"

I'm working with a DCGAN, a deep CNN for classifying images with a GAN that competes with the classifier to generate images of what we are classifying.

+ +

The goal of the project at the moment is to produce AI generated memes in the form of pepe the frog, based on a dataset found on the internet of roughly 2000 images. I scaled them all maintaining aspect ratio as my only form of normalization.

+ +

As I train (I've tried many combinations of hyperparameters) for upwards of 100k epochs (batch size 32), my classifying network's loss gets to around 1e-6 on average, while my GAN's loss approaches nearly 16, yes 16, with a properly defined loss function.

+ +

Now, because my images typically have quite varied features (some contain text, others contain full-body renditions, characters turned away from the viewport, etc.), I'm assuming that this is caused by the data I'm training with and its diverse features. Is my reasoning correct? Also, if allowed to continue, is it possible that the GAN learns to properly generate the data?

+ +

The main reason I have come to the above conclusion is that, if I train on a few hand-picked examples that have similar artistic styles/orientations (fewer than 100) and let it train, my GAN will generate decent images; however, they have low variability.

+",26352,,26352,,6/11/2019 7:10,6/11/2019 7:10,DCGAN loss determining data normalization problems,,0,2,,,,CC BY-SA 4.0 +12793,1,,,6/11/2019 8:18,,1,60,"

I am trying to apply deep deterministic policy gradient (DDPG) to a robotic application. My states consist of the joint angle positions of the robot, and my actions are also its joint angle positions. Since DDPG produces a continuous policy output where states are directly mapped onto actions, can I say that my next state and action will be the same? Simplistically, the input of the policy network will be the current state and the output will be the next state?

+",18986,,,,,6/11/2019 8:18,Can next state and action be same in Deep Deterministic Policy Gradient?,,0,2,,,,CC BY-SA 4.0 +12799,2,,10869,6/11/2019 23:14,,1,,"

You have experimented with a small model (117M parameters). OpenAI has now released the medium sized model (345M parameters). Note that the full model has 1.5B parameters. Also note that GPT-2 has been trained on a tiny fraction of all available text. It's almost guaranteed that a larger model trained on more text will generate better text.

+ +

I have experimented with the 345M model, and I agree with The Guardian's assessment. See for yourself:

+ +

Prompt:

+ +
+

I want to generate beautiful music using machine learning. Do you + think a computer will ever be able to compose something as novel as + creations of Mozart, Bach, or Chopin?

+
+ +

Results (first four samples, not cherry picked):

+ +
+

Yes, computers will probably be able to do it by 2020. I think the key + for us will be to make music that is interesting and beautiful, so we + will never really know. My hope would be that computers will become + far more efficient over time – something that is already happening in + a lot of ways like in big data – while still being able to do more + things that they are trained on, like find patterns. I think that's + where I'd want to use neural networks in the future.

+
+ +
+ +
+

We don't know, but it will probably happen. That's an interesting + question to ask. Of course we have AI, which can do interesting things + without human input. It can recognize the world, find objects and + images, it understands language, we can make games for it. But if you + imagine an AI being able to write music, will it even be good enough? + How much better than human players? How much better can someone else + do? AI will never really be at its best because it will always have to + come up with creative ways to use and play it. To ask a question like + it is good enough is to ignore that it is human. You are only allowed + to ask good questions

+
+ +
+ +
+

The software will help, as it already can do. But it will also allow + more human creators, which is what I think it should do more of to + bring music to any genre. In terms of composing, it's a matter of + creativity in humans in general. But how does it achieve that? Let's + just say that there are fewer humans composing today than ever before. + Do you remember the day when, by chance, a computer could write lyrics + to Mozart's Sonata No. 4, or the day when a human wrote lyrics to + Mozart's La Résistance du Ciel No. 2? That was back in 1960, and it's + just not true anymore. As we move forward, we are not going to create + music that human musicians cannot compose. What is your goal as a + computer music creator?

+
+ +
+ +
+

At the core of this idea is that music is a collection of sounds – it + was the very first art form, it was the first music that humans + evolved, a collection of sounds. Music is not an abstract thing of + sound, but a collection of sounds. You can't write music the way we + write and compose books and films, because we write and compose with + sounds in mind. This is where machine learning comes in. You can read + some books about how to get better music writers or how to improve the + quality of their music, but you cannot make a difference with this + kind of information at all. What does music teach you? To write and + perform beautifully, to convey emotion accurately, and to create + lasting and rich relationships with

+
+",26373,,,,,6/11/2019 23:14,,,,2,,,,CC BY-SA 4.0 +12800,2,,12787,6/12/2019 7:25,,2,,"

Extending @mirror2image's comment, if you have a certain metric that allows you to measure how close the intended layer is to a low-pass filter (something that compares its output with what a low-pass filter would have produced, for example), the simplest way to achieve what you want would be to add a term to your loss function that calculates the value of this metric. This way, each time you do a training step, the network is not only made to output the correct predictions but is also forced to do so while keeping that specific layer's behavior as close to a low-pass filter as possible. This is the most common way of tweaking the behavior of neural networks and is often encountered in research papers.
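For example, here is a minimal Keras sketch of this idea, assuming the constrained layer is a 1-D convolution whose kernel we want to keep close to a simple moving-average (box) low-pass filter; the penalty weight LAMBDA, the kernel size, and the use of a kernel regularizer as the "closeness" metric are all my own illustration choices, not something dictated by the question.

    import tensorflow as tf
    from tensorflow.keras import layers

    LAMBDA = 0.1        # weight of the structural penalty in the total loss
    KERNEL_SIZE = 9

    def lowpass_penalty(kernel):
        # Penalise the kernel for drifting away from a normalised box filter,
        # i.e. from the simplest possible low-pass filter.
        box = tf.ones_like(kernel) / KERNEL_SIZE
        return LAMBDA * tf.reduce_sum(tf.square(kernel - box))

    model = tf.keras.Sequential([
        layers.Conv1D(1, KERNEL_SIZE, padding='same', use_bias=False,
                      kernel_regularizer=lowpass_penalty,
                      input_shape=(128, 1), name='lowpass'),
        layers.Flatten(),
        layers.Dense(1),
    ])
    # The regularisation term is added to the task loss automatically, so each
    # training step trades prediction error against "low-pass-ness" of the layer.
    model.compile(optimizer='adam', loss='mse')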

+",16159,,,,,6/12/2019 7:25,,,,0,,,,CC BY-SA 4.0 +12801,1,,,6/12/2019 8:04,,2,281,"

I have two closed polygons, drawn as connected straight black lines on a white background. I need to classify such images into three classes:

+ +
  1. Two separate polygons
  2. One polygon encloses the other
  3. The two polygons overlap each other.
+ +

The polygons vary in sizes and location on the image, and the image contains only the polygons and the white background.

+ +

Which neural network architecture should I use to solve this problem?

+",26382,,2444,,6/12/2019 11:15,6/12/2019 11:15,How do I classify an image that contains only polygons?,,1,0,,,,CC BY-SA 4.0 +12802,2,,12448,6/12/2019 8:41,,1,,"

I now currently use:

+ + + +

I currently have, but may remove:

+ +
    +
  • Iterated prisoner's dilemma (hard to interpret, and I am not sure if MCTS is really the right choice)
+ +

I may add sometime:

+ + + +

Other good ideas:

+ + + +

I did not try all ideas, as I just want to verify my framework, that should be used on other problems (that are harder to verify). You can still keep adding answers with good ideas, that may be a good reference for others that want to test their implementations.

+",25798,,,,,6/12/2019 8:41,,,,0,,,,CC BY-SA 4.0 +12803,1,12842,,6/12/2019 8:41,,1,56,"

We have seen advances in top-down, RTS team games like Dota 2 and StarCraft II from companies like OpenAI, who developed agents that beat real pro players most of the time. How would similar learning techniques fare on games like Overwatch that require faster reaction times and a complex understanding of 3D space and effects? Or have we not developed solutions that could be tasked with this problem?

+",26383,,,,,6/14/2019 8:59,Feasibility of a team-based FPS AI?,,1,2,,,,CC BY-SA 4.0 +12804,2,,12787,6/12/2019 9:42,,0,,"

The idea of training is that the weights (not the layer, as you wrote) are allowed to ""learn"" whatever values they ""want"" within the general network setting. I have actually thought about this too: to make the weights represent what you want, you can first train a shorter network in which those weights (1) form the very last layer, so that you have maximum control over them. After training it thoroughly, append the next layers to it and make the learning rate for weights (1) much smaller than for the new weights.

+",25836,,,,,6/12/2019 9:42,,,,0,,,,CC BY-SA 4.0 +12805,1,,,6/12/2019 9:52,,3,102,"

Basically, economic decision making is not restricted to mundane finance, the managing of money, but any decision that involves expected utility (some result with some degree of optimality.)

+ +
    +
  • Can Machine Learning algorithms make economic decisions as well as or better than humans?
  • +
+ +

""Like humans"" means understanding classes of objects and their interactions, including agents such as other humans.

+ +

At a fundamental level, there must be some physical representation of an object, leading to usage of an object, leading to management of resources that the objects constitute.

+ +

This may include the ability to effectively handle semantic data (NLP), because much of the relevant information is communicated in human languages.

+",25836,,1671,,6/12/2019 22:00,6/14/2019 17:23,Can Machine Learning make economic decisions of human quality or better?,,2,7,,,,CC BY-SA 4.0 +12808,2,,12801,6/12/2019 10:39,,-1,,"

To keep it simple, take a small, well-known CNN such as AlexNet and train it so that the input is the image and the output is [1,0,0] for separate, [0,1,0] for encloses, and [0,0,1] for overlap. Because your task is easy, I guess that will be fine. It can also be done without ML, by analysing the image with a fixed algorithm based on where the vertices of the polygons are; see the sketch below.
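A minimal sketch of that non-ML route, assuming the two polygons' vertex lists have already been extracted from the image (e.g. with OpenCV contour detection, which is not shown here) and using the shapely library for the geometric tests:

    from shapely.geometry import Polygon

    def classify(points_a, points_b):
        a, b = Polygon(points_a), Polygon(points_b)
        if a.contains(b) or b.contains(a):
            return 'one polygon encloses the other'
        if a.intersects(b):          # they cross, but neither contains the other
            return 'the polygons overlap'
        return 'two separate polygons'

    # Example: a small square inside a big square
    print(classify([(0, 0), (4, 0), (4, 4), (0, 4)],
                   [(1, 1), (2, 1), (2, 2), (1, 2)]))   # -> one polygon encloses the other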

+",25836,,,,,6/12/2019 10:39,,,,0,,,,CC BY-SA 4.0 +12809,2,,12657,6/12/2019 11:09,,1,,"

The one-word answer to your question, ""Do you need to store previous values of weights and layers on the recurrent layer during BPTT?"", is YES.

+ +

Let us go through the details.

+ +

For training an RNN using BPTT, we need gradients of error w.r.t all three parameters U, V, W

+ +

Notation of my explanation is different from notation in the figure of question. +My notation is as below:

+ +
    +
  1. V - Hidden Layer - Output Layer (gradients of V are independent of previous time steps)
  2. U - Input Layer - Hidden Layer (gradients of U are dependent on previous time steps)
  3. W - Hidden Layer - Hidden Layer (gradients of W are also dependent on previous time steps)
+ +

And for calculating these gradients, we use chain rule of differentiation, the same rule that we used to calculate gradients in a fully connected neural network.

+ +

The gradient w.r.t V only depends on current time step (doesn't need any values from previous time step).

+ +

The gradients w.r.t. U, W depend on the current time step and also on all previous time steps (so they need values from all time steps).

+ +

Basically, we need to back propagate gradients from current time step all the way to t=0.

+ +

The way this backpropagation differs from the backpropagation we use in a fully-connected neural network is that, in a fully-connected network, we don't have the concept of $t$ and we don't share any weights across layers. Here, however, we share weights across layers and time instants, so the gradients depend on all time instants.
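To make the dependence on stored values concrete, here is a minimal numpy sketch of BPTT for a vanilla RNN (tanh hidden units, a linear output, and a squared-error loss per step), following the U, V, W notation above; the sizes and the random data are only for illustration.

    import numpy as np

    T, n_in, n_h, n_out = 5, 3, 4, 2
    rng = np.random.default_rng(0)
    U = rng.normal(0, 0.1, (n_h, n_in))     # input  -> hidden
    W = rng.normal(0, 0.1, (n_h, n_h))      # hidden -> hidden (shared across time)
    V = rng.normal(0, 0.1, (n_out, n_h))    # hidden -> output
    xs = rng.normal(size=(T, n_in))
    ys = rng.normal(size=(T, n_out))

    # Forward pass: every hidden state is stored, because the backward pass needs it.
    hs = {-1: np.zeros(n_h)}
    outs = {}
    for t in range(T):
        hs[t] = np.tanh(U @ xs[t] + W @ hs[t - 1])
        outs[t] = V @ hs[t]

    # Backward pass (BPTT): gradients of U and W accumulate over all time steps.
    dU, dW, dV = np.zeros_like(U), np.zeros_like(W), np.zeros_like(V)
    dh_next = np.zeros(n_h)
    for t in reversed(range(T)):
        dy = outs[t] - ys[t]                 # dLoss/d(output at step t)
        dV += np.outer(dy, hs[t])            # V only needs the current step
        dh = V.T @ dy + dh_next              # gradient flowing into h_t (from output and from t+1)
        dz = (1 - hs[t] ** 2) * dh           # back through tanh
        dU += np.outer(dz, xs[t])
        dW += np.outer(dz, hs[t - 1])
        dh_next = W.T @ dz                   # pass the gradient on to the previous time step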

+ +

Note: Be careful with notation difference between several articles. I followed slightly different notation than in the diagram in question.

+ +

Some links that will help you explore.

+ +

https://www.youtube.com/watch?v=RrB605Mbpic (clearly explains about gradients of all three U, V, W; but notation is different from diagram in question)

+ +

http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/

+ +

http://ir.hit.edu.cn/~jguo/docs/notes/bptt.pdf

+ +

https://www.d2l.ai/chapter_recurrent-neural-networks/bptt.html

+ +

Remember, you should understand chain rule of partial derivative very clearly to do the derivation yourself and understand it.

+ +

Also, don't think of BPTT as separate from BP. It is one and the same. Since the neural network architecture in an RNN includes time instants and shares weights across them, simply applying the chain rule to this network makes backpropagation dependent on time as well, hence the name.

+ +

Hope it helps. Feedback is welcome.

+",20760,,20760,,6/13/2019 12:19,6/13/2019 12:19,,,,8,,,,CC BY-SA 4.0 +12810,1,,,6/12/2019 12:04,,1,62,"

I want to use Machine Learning for text classification, more precisely, I want to determine whether a text (or comment) is positive or negative. I can download a dataset with 120 million comments. I read the TensorFlow tutorial and they also have a text dataset. This dataset is already pre-processed, like the words are converted to integers and the most used words are in the top 10000.

+ +

Do I also have to use a pre-processed dataset like them? If yes, does it have to be like the dataset from TensorFlow? And which pages could help me to implement that kind of program?

+ +

My steps would be:

+ +
  1. find datasets
  2. preprocess them if needed
  3. feed them into the neural network
+",,user24093,2444,,6/28/2019 11:18,6/28/2019 11:18,Do I need to use a pre-processed dataset to classify comments?,,1,0,,,,CC BY-SA 4.0 +12811,2,,9319,6/12/2019 12:54,,-1,,"

Regarding ""For example, suppose I want to approximate the function..."": machine learning is used to approximate an UNKNOWN function. If you do want to do this with a NN, you will have a regression task with a small fully-connected network, something like this (see the sketch below):

  • input: 2 values in the range 0..1
  • a few hidden layers; just start trying with one, of size 3-6 neurons, with sigmoid or tanh activation (you need a non-linear activation here to approximate the non-linear sin * sin, or whatever the target is)
  • a last layer with no activation function; the loss is just the difference from the known value f(x, y)
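Here is a rough Keras sketch of such a network; the target function sin(x)*sin(y) on [0, 1]^2, the layer sizes and the training settings are just illustration choices, not something fixed by the question.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(10000, 2))        # 2 input values in the range 0..1
    y = np.sin(X[:, 0]) * np.sin(X[:, 1])         # the 'unknown' target function

    model = keras.Sequential([
        layers.Dense(8, activation='tanh', input_dim=2),   # one small non-linear hidden layer
        layers.Dense(1)                                    # linear output for regression
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(X, y, epochs=20, batch_size=64, verbose=0)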
+",25836,,,,,6/12/2019 12:54,,,,0,,,,CC BY-SA 4.0 +12812,2,,12810,6/12/2019 12:59,,2,,"

Here's a list of some of the best python libraries for natural language processing.

+ +
    +
  • Natural Language Toolkit (nltk): covers all the basic functions and NLP tools, such as tokenization.
  • TextBlob: a good library for beginners; it provides the nltk toolkit in a simplified format.
  • spaCy: an advanced library that can be used in production code.
+ +

You can preprocess textual data in a number of ways. It depends on the type of task at hand and the size of the data.

+ +

From your question, I think you are referring to converting the words to a vector form (word2vec). Here is a massive word2vec list from Google.

+ +

Also have a look at preprocessing techniques in NLP such as tf-idf etc.
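As a concrete illustration of the tf-idf route with scikit-learn, here is a small sketch; the toy comments and labels are made up and stand in for your real dataset.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = ['great movie, loved it', 'terrible plot and bad acting',
                'what a fantastic experience', 'worst film I have seen']
    labels = [1, 0, 1, 0]            # 1 = positive, 0 = negative

    clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words='english'),
                        LogisticRegression())
    clf.fit(comments, labels)
    print(clf.predict(['loved the acting']))     # predicted sentiment for a new comment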

+",26384,,,,,6/12/2019 12:59,,,,0,,,,CC BY-SA 4.0 +12813,1,,,6/12/2019 14:28,,5,353,"

I want to use Reinforcement Learning to optimize the distribution of energy for a peak shaving problem given by a thermodynamical simulation. However, I am not sure how to proceed as the action space is the only thing that really matters, in this sense:

+
  • The action space is a $288 \times 66$ matrix of real numbers between $0$ and $1$. The output of the simulation and therefore my reward depend solely on the distribution of this matrix.
  • The state space is therefore absent, as the only thing that matters is the matrix, on which I have total control. At this stage of the simulation, no other variables are taken into consideration.
+

I am not sure if this problem falls into tabular RL or if it requires approximation. In this case, I was thinking about using a policy gradient algorithm to figure out the best distribution of the $288 \times 66$ matrix. However, I do not know how to deal with the "absence" of the state space. Instead of a tuple $\langle s,a,r,s' \rangle$, I would just have $\langle a, r \rangle$. Is this even an RL-approachable problem? If not, how can I reshape it to make it solvable with RL techniques?

+",23638,,2444,,11/30/2020 1:46,11/30/2020 1:46,It is possible to solve a problem with continuous action spaces and no states with reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +12814,2,,12805,6/12/2019 14:37,,0,,"

At this time, at least with open-source tools: NO.

I guess:

  • for decision making, we need a broad input layer (or layers) of data flows
  • we need on the order of 20,000-200,000 layers of neural networks, or more complex and dynamic architectures
  • we need deep research into the influence of date and time on historical data flows

What we have at this time:

  • only sensors: OpenCV and object recognition, NLP tagging, data prediction

So, sensors aren't AI; sensors and machine learning only encode previous experience. They are not ready for analysing change.

+",15757,,,,,6/12/2019 14:37,,,,6,,,,CC BY-SA 4.0 +12815,1,,,6/12/2019 15:31,,3,164,"

I want to do an NLP project but I don't know if it's doable or not as I have no experience or knowledge in NLP or ML yet.

+ +

The idea is as follows: Let's say we have a story (in the text) that has 10 characters. Can we define them, their characteristics, whole sentences they said, and then analyze emotions within those sentences.

+ +

After that, is it possible to generate an audio version of the story where: the text, in general, is narrated by one voice, each individual character's sentences are read in a different voice generated specifically for that character. Finally is it possible to make the tones of the characters voices change depending on the emotions detected in their sentences?

+",26390,,12853,,6/13/2019 7:34,1/10/2022 10:36,How can I build an AI with NLP that read stories,,1,4,,1/10/2022 11:00,,CC BY-SA 4.0 +12817,1,,,6/12/2019 21:17,,3,275,"

I am trying to build a Deep Q-Network (DQN) agent that can learn to play the game 2048. I am orientating myself on other programs and articles that are based on the game snake and it worked well (specifically this one).

+ +

As input state, I am only using the grid with the tiles as numpy array, and as a reward, I use (newScore-oldScore-1) to penalize moves that do not give any points at all. I know that this might not be optimal, as one might as well reward staying alive for as long as possible, but it should be okay for the first step, right? Nevertheless, I am not getting any good results whatsoever.

+ +

I've tried to tweak the model layout, the number of neurons and layers, optimizer, gamma, learning rates, rewards, etc.. I also tried ending the game after 5 moves and to optimize just for those first five moves but no matter what I do, I don't get any noticeable improvement. I've run it for thousands of games and it just doesn't get better. In fact, sometimes I get worse results than a completely random algorithm, as sometimes it just returns the same output for any input and gets stuck.

+ +

So, my question is, if I am doing anything fundamentally wrong? Do I just have a small stupid mistake somewhere? Is this the wrong approach completely? (I know the game could probably be solved pretty easily without AI, but it seemed like a little fun project)

+ +

My Jupyter notebook can be seen here Github. Sorry for the poor code quality, I'm still a beginner and I know I need to start making documentation even for fun little projects...

+ +

Thank you in advance,

+ +

Drukob

+ +

edit: +some code snippets:

+ +

Input is formatted as a (1, 16) numpy array. I also tried normalizing the values or using only 1 and 0 for occupied and empty cells, but that did not help either, which is why I assume it's maybe more of a conceptual problem.

+ + + +
    def get_board(self):
+        grid = self.driver.execute_script(""return myGM.grid.cells;"")
+        mygrid = []
+        for line in grid:
+            a = [x['value'] if x != None else 0 for x in line]
+            #a = [1 if x != None else 0 for x in line]
+            mygrid.append(a)
+        return np.array(mygrid).reshape(1,16)
+
+ +

The output is an index in {0, 1, 2, 3}, representing the actions up, down, left or right, and it's just the action with the highest prediction score.

+ + + +
prediction = agent.model.predict(old_state)
+predicted_move = np.argmax(prediction)
+
+ +

I've tried a lot of different model architectures, but settled for a simpler network now, as I have read that unnecessary complex structures are often a problem and unneeded. However, I couldn't find any reliable source for a method, how to get the optimal layout except for experimenting, so I'd be happy to have some more suggestions there.

+ + + +
model = models.Sequential()
+        model.add(Dense(16, activation='relu', input_dim=16))
+        #model.add(Dropout(0.15))
+        #model.add(Dense(50, activation='relu'))
+        #model.add(Dropout(0.15))
+        model.add(Dense(20, activation='relu'))
+        #model.add(Dropout(0.15))
+        #model.add(Dense(30, input_dim=16, activation='relu'))
+        #model.add(Dropout(0.15))
+        #model.add(Dense(30, activation='relu'))
+        #model.add(Dropout(0.15))
+        #model.add(Dense(8, activation='relu'))
+        #model.add(Dropout(0.15))
+        model.add(Dense(4, activation='linear'))
+        opt = Adam(lr=self.learning_rate)
+        model.compile(loss='mse', optimizer=opt)
+
+",26399,,1671,,10/15/2019 19:20,10/15/2019 19:20,Deep Q-Network (DQN) to learn the game 2048,,0,3,,,,CC BY-SA 4.0 +12818,5,,,6/12/2019 21:47,,0,,"

https://en.wikipedia.org/wiki/Economics

+",1671,,1671,,6/12/2019 21:47,6/12/2019 21:47,,,,0,,,,CC BY-SA 4.0 +12819,4,,,6/12/2019 21:47,,0,,For questions involving economics and economic decision-making. ,1671,,1671,,6/12/2019 21:47,6/12/2019 21:47,,,,0,,,,CC BY-SA 4.0 +12820,2,,12805,6/12/2019 22:10,,1,,"

Consider managing a memory structure as an economic function. (Where to put, and how to manage, the resources constituted by data.) This is something computers can do better and faster than any human. The reason is that the system in which the economic decisions are being made is fully defined.

+ +

Routing of packages is a similar, economic function that computers do much better than humans.

+ +

These functions haven't been handled by Machine Learning in the past, but, soon after the AlphaGo milestone, Google found an economic application for Machine Learning. Google's DeepMind trains AI to cut its energy bills by 40% (Wired)

+ +

So it's entirely context dependent.

+ +

As the model increases in complexity and nuance, utility will be reduced. (In the former case it's a time and space issue related to computational complexity, and in the latter case, often a function of incomplete information or inability to define parameters.)

+ +

But as the sophistication of the machine learning algorithms increases, and the models continue to be refined, the algorithms will get better and better at managing intractability and incomplete information.

+",1671,,1671,,6/14/2019 17:23,6/14/2019 17:23,,,,0,,,,CC BY-SA 4.0 +12822,1,,,6/13/2019 4:42,,1,113,"

I've read about seq2seq for time series and it seemed really promising, but, when I went to implement it, all the tutorials I've found use the correct output as input to the decoder phase during training, instead of using the actual prediction made by the cell before it. Is there a reason not to do the latter?

+

I've been using the tutorial from here

+

But all the other tutorials that I've found followed the same principle.

+",26406,,2444,,12/23/2021 18:42,5/17/2023 22:08,Why feeding the correct output as input during training of seq2seq models?,,1,0,,,,CC BY-SA 4.0 +12823,2,,12822,6/13/2019 6:23,,0,,"

The reason why you would use the ground truth as input to the decoder is that it makes training much easier, but it does not match the testing distribution, where the model must consume its own predictions. From what I've seen so far, most of the papers use scheduled sampling (Bengio et al.) to bridge this gap. This means that you introduce a new term $p$, which is the probability of feeding the network its own prediction as input. Initially, $p$ will be very small, so the network will mostly use the ground truth, but as more iterations pass by, this probability increases and the network starts using its own predictions.
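A framework-agnostic sketch of that decoding loop is below; decoder.step, decoder.initial_state and decoder.start_token are placeholder names for whatever seq2seq implementation you use, and the schedule for p_own_prediction is up to you.

    import random

    def decode_with_scheduled_sampling(decoder, targets, p_own_prediction):
        # With probability p_own_prediction, feed the model its own previous output;
        # otherwise feed the ground truth (teacher forcing).
        state = decoder.initial_state()      # placeholder API, not a real library call
        prev_token = decoder.start_token
        outputs = []
        for ground_truth in targets:
            prediction, state = decoder.step(prev_token, state)
            outputs.append(prediction)
            if random.random() < p_own_prediction:
                prev_token = prediction
            else:
                prev_token = ground_truth
        return outputs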

+",20430,,2444,,12/23/2021 18:42,12/23/2021 18:42,,,,0,,,,CC BY-SA 4.0 +12827,1,12830,,6/13/2019 8:10,,0,377,"

I am new to RL and I am trying to understand why we need all these hyperparameters. Can somebody explain to me why we use them and what the best values for them are?

+ +
+

    total_episodes = 50000        # Total episodes
    total_test_episodes = 100     # Total test episodes
    max_steps = 99                # Max steps per episode

    learning_rate = 0.7           # Learning rate
    gamma = 0.618                 # Discounting rate

    # Exploration parameters
    epsilon = 1.0                 # Exploration rate
    max_epsilon = 1.0             # Exploration probability at start
    min_epsilon = 0.01            # Minimum exploration probability
    decay_rate = 0.01             # Exponential decay rate

+
+ +

I am currently working on taxi_v2 problem from GYM.

+ +

Link: https://learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/

+",26410,,2444,,6/13/2019 14:49,6/14/2019 9:38,Why are we using all hyperparameters in RL?,,1,0,,12/27/2021 23:15,,CC BY-SA 4.0 +12828,2,,12815,6/13/2019 8:48,,2,,"

This is quite an ambitious project, and IMHO well beyond the scope of what a single individual can do (within a reasonable time span) at present.

+ +

You need to first analyse the story text to identify the characters. This can already be quite a tricky task, as pronouns and other reference expressions are generally used to make a text less monotonous. If a character is referred to by name, say Jane, then you can assume that a follow-up the young woman will refer to her and not a male character mentioned in the same paragraph. But what about the young scientist? Such expressions can be very opaque, and you'd need a lot of world-knowledge to decode them correctly, as they can refer to any distinctive attribute of the character.

+ +

Identifying speech is a bit easier, unless you're talking about indirect speech. Jane was thinking aloud. She wasn't going to be able to do that. It was too hard. -- is that speech or not? Compare to Jane was thinking aloud: ""I am not going to be able to do that. It is too hard."", which is the direct speech equivalent. Also, unless you're dealing with a play, most of the text will probably not be speech. For the audio version you will probably only want to deal with direct speech, which is usually (but not always) indicated by quote marks.

+ +

Analysing emotions seems to be comparatively easy if you have reached this stage, though if it is just based on keywords in the speech it probably won't be very accurate. If you can assign any descriptive statements to characters, that might be more successful, though by no means trivial.

+ +

Generating the text as audio should be straight forward. Most operating systems nowadays have speech synthesis integrated, and you can generally choose different voices, so if your text is marked up properly with which voice should speak which part it would be trivial.

+ +

To summarise: The NLP part is the hardest bit of it. As has been mentioned in the comments already, I don't think it's a problem that machine learning can help with, and I would stick to traditional methods of parsing the text into a structural representation and then applying rules to identify the bits you are interested in. The recognition of emotion might be a subtask that is suitable for ML, but in the past I have only applied pattern matching to similar tasks, so I can't really say much about that.

+ +

From my own experience in text analysis I would think that you might be able to get decent results with a few simple heuristics, but those will likely fail when it becomes a bit more complicated. A lot hinges on the type of story: children's fairy tales might be easier than War and Peace in that respect.

+",2193,,,,,6/13/2019 8:48,,,,0,,,,CC BY-SA 4.0 +12830,2,,12827,6/13/2019 9:35,,1,,"

In RL, there are episodic and non-episodic tasks (or problems). In episodic tasks, each episode proceeds in time steps.

+ +

For example, most games are episodic tasks. For instance, in a football championship (e.g. Premier League), each football match during the whole season is an episode. In this example, each minute (or second) of a football match can be considered a time step of the the episode (that is, the football match).

+ +

The parameter total_episodes thus specifies the number of episodes of your RL episodic problem. Similarly, max_steps specifies the maximum number of time steps per episode.

+ +

Why do we also need total_test_episodes? In machine learning, when building a model, there are usually two phases: the training phase and the test phases. In the training phase, you use the training dataset to learn the parameters of the model. In the test phase, you test the performance (e.g. the total return, in the case of RL) of the model. Hence, total_test_episodes is used to specify this hyper-parameter.

+ +

In deep RL, a (deep) neural network (NN) is used to represent either a value function or a policy (which is also a function). In machine learning, neural networks are usually trained using an optimisation method, like gradient descent (GD) and the back-propagation (BP), which is used to compute the gradient of the objective function with respect to the parameters of the model. In the case of deep RL, the parameters of the NN that represents either the value function or policy are also learned using a similar approach. In this context, the learning rate is a hyper-parameter that determines the ""strength"" of the update step of the optimisation algorithm. More concretely, in the case of GD, the update step is

+ +

$$\mathbf{\theta}_{n+1} \gets \mathbf{\theta}_{n} - \alpha \nabla f(\mathbf{\theta} _{n})$$

+ +

where $\mathbf{\theta}_{n+1}$ is a vector containing the parameter of your model (in this case, a NN), $\alpha$ is the learning rate and $\nabla f(\mathbf{\theta} _{n})$ is the gradient of the objective function $f$. Hence, learning_rate specifies the value of $\alpha$.

+ +

In your case, learning_rate can actually specify the value of the learning rate of your RL algorithm. For example, in the case $Q$-learning, the learning rate, which we can denote by $\alpha$, also specifies the ""strength"" of the update.

+ +

Similarly, gamma is a hyper-parameter of your RL algorithm. For example, in the case of $Q$-learning, the parameter $\gamma$ (the discount factor) determines the contribution of the estimate of the $Q$ value of the next state (while taking the greedy action from that state) to the $Q$ value that is currently being updated.

+ +

$Q$-learning is an off-policy RL algorithm, which means that it uses a behaviour policy that is possibly different than the policy it tries to estimate. The usual behaviour policy is the $\epsilon$-greedy (with probability $\epsilon$ a random action is taken in a certain state and with probability $1 - \epsilon$ the greedy action is taken). The parameter epsilon specifies this hyper-parameter. In this context, an initial $\epsilon$ that specifies the exploration rate is provided. As the training progresses, the estimate of the optimal value function should become more accurate. In that case, you want to explore less and follow more your estimate of the optimal value function, so min_epsilon is used to specify the lowest $\epsilon$.

+ +

The parameter decay_rate is used to specify the value of the decay rate. Have a look at https://stats.stackexchange.com/a/31334/82135 for more info.
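Putting these hyper-parameters together, here is a rough sketch of how they are typically used in the tabular $Q$-learning loop of that tutorial; the environment env and the table sizes are assumed to come from Gym's Taxi environment, and the exponential decay formula below is one common choice, not the only one.

    import numpy as np

    Q = np.zeros((env.observation_space.n, env.action_space.n))
    epsilon = max_epsilon
    for episode in range(total_episodes):
        state = env.reset()
        for step in range(max_steps):
            if np.random.rand() < epsilon:                 # explore
                action = env.action_space.sample()
            else:                                          # exploit the current estimate
                action = np.argmax(Q[state])
            next_state, reward, done, info = env.step(action)
            # learning_rate controls the step size, gamma the weight of future value
            Q[state, action] += learning_rate * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state
            if done:
                break
        # exploration decays from max_epsilon towards min_epsilon over the episodes
        epsilon = min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)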

+",2444,,2444,,6/14/2019 9:38,6/14/2019 9:38,,,,2,,,,CC BY-SA 4.0 +12835,1,,,6/14/2019 6:36,,5,112,"

I have a dataset which contains 4000k rows and 6 columns. The goal is to predict the travel time demand of a taxi. I have read many articles regarding how to approach the problem, and every writer suggests his own way. The thing I have concluded from all my readings is that I have to use multiple algorithms and check the accuracy of each one. Then I can ensemble them by averaging or any other approach.

+ +

Which algorithms will be best for my problem accuracy-wise? Some links to code will be helpful for me.

+ +

I currently only have a training set of data. After I work on it, it will be evaluated on some testing set by my professor. So, what should I do now? Should I split the data I have into my own training and testing sets, or separately generate dummy data as a testing set?

+",26429,,2444,,6/14/2019 9:07,6/14/2019 9:52,What approach should I take to model forecasting problem in machine learning?,,1,0,,,,CC BY-SA 4.0 +12837,2,,12813,6/14/2019 7:21,,3,,"

A stateless RL problem can be reduced to a Multiarmed Bandit (MAB) problem. In such a scenario, taking an action will not change the state of the agent.

+ +

So, this is the setting of a conventional MAB problem: at each time step, the agent selects an action to either perform an exploration or exploitation move. It then records the reward of the taken action and updates its estimation/expectation of the usefulness of the action. Then, repeats the procedure (selection, observing, updating).

+ +

To choose between exploration and exploitation moves, MAB agents adopt a strategy. The simplest one is probably $\epsilon$-greedy, in which the agent chooses the most rewarding action most of the time (with probability $1-\epsilon$) or randomly selects an action (with probability $\epsilon$).
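A minimal sketch of such an $\epsilon$-greedy agent for a stateless problem is shown below. It assumes a finite set of candidate actions (e.g. a fixed set of candidate matrices, or parameters indexing them) and a function run_simulation(action) standing in for the thermodynamical simulation; both are simplifications of the continuous $288 \times 66$ case.

    import numpy as np

    n_actions = 10                       # candidate actions (e.g. candidate matrices)
    epsilon = 0.1
    estimates = np.zeros(n_actions)      # running estimate of each action's value
    counts = np.zeros(n_actions)

    for t in range(1000):
        if np.random.rand() < epsilon:
            a = np.random.randint(n_actions)        # explore
        else:
            a = int(np.argmax(estimates))           # exploit
        r = run_simulation(a)                       # observe the reward (assumed helper)
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]   # incremental mean update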

+",12853,,,,,6/14/2019 7:21,,,,1,,,,CC BY-SA 4.0 +12840,2,,12835,6/14/2019 7:54,,3,,"

In general, this type of problem is called a regression problem since the target variable (i.e. travel time) can take any value in a continuous domain. In theory, you can use any regression algorithms (a subset supervised learning techniques) to solve this problem. Some of the most popular ones are linear regression, K-nearest neighbor (regressor), and neural networks.

+ +

As you observed already, different algorithms result in (sometimes significantly) different results. Also, the parameter configurations (e.g., number of hidden layers in Neural Networks) can make a big difference. Sometimes, ensembling different models can be helpful, but in general, you should try to avoid overfitting (when your model is more complex than your data such that it memorizes the training set instead of learning it!). That may result in a very good performance on your training set but perform very poorly on your professor's testing set.

+ +

What I would do is:

+ +
  • exploring the dataset to see what the contributing factors to travel time are (any correlation between the columns)
  • cleaning and preprocessing my dataset (duplicates, null values, outliers)
  • reshaping my dataset if needed (normalizing some columns, merging or splitting columns)
  • dividing my dataset into training and evaluation subsets (so I train on one part and test on the other to avoid overfitting)
  • choosing a simple baseline, applying it and measuring the accuracy metrics (a bare-bones example is sketched below)
  • trying to fine-tune the parameters of my baseline or trying other more advanced techniques
  • comparing the results and improving any part of the pipeline when necessary (more/less cleaning, parameter tuning, ensembling)
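A bare-bones version of the split-and-baseline steps with scikit-learn could look like the sketch below, assuming the features have already been turned into a numeric matrix X and the travel times into a vector y (loading and cleaning are not shown).

    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

    baseline = LinearRegression().fit(X_train, y_train)
    print('validation MAE:', mean_absolute_error(y_val, baseline.predict(X_val)))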
+",12853,,2444,,6/14/2019 9:12,6/14/2019 9:12,,,,1,,,,CC BY-SA 4.0 +12841,1,12845,,6/14/2019 8:22,,2,623,"

iPhone X allows you to look at the TrueDepth camera and reports 52 facial blendshapes like how much your eye is opened, how much your jaw is opened, etc.

+ +

If I want to do something similar with other cameras (not TrueDepth), what are my alternative methods? Currently, I just use a simple ConvNet which takes in an image and predicts 52 sigmoid values.

+ +

What do you think could be the underlying technology behind ARKit Face Tracking?

+",20819,,,,,6/14/2019 10:40,How does ARKit's Facial Tracking work?,,1,0,,,,CC BY-SA 4.0 +12842,2,,12803,6/14/2019 8:59,,0,,"

To get a better understanding, let's compare a couple of games: DOTA 2 and Overwatch. At first, both seem similar in terms of multiplayer and strategy, but there is a significant difference.

+ +

You have mentioned it in the question: reaction time.

+ +

Even though both games require strategy, an understanding of the surroundings, and team play, FPS games are heavily dependent on reaction time and accuracy. As much as you try, it's next to impossible to reach the reaction times and accuracy of a computer agent.

+ +

In games such as DOTA2, the game is more dependent towards strategy and how the characters interact with each other (Strengths and Weakness).

+ +

Finally, it doesn't make sense to train an AI-based agent to play FPS games, because the optimal solution is already present and there is no way that AI can reach those levels. It's called an AIM BOT. As good as you can get at strategy and planning, teamwork and reaction times, you can almost never beat it. It's next to 100% accurate, and the opponent is basically dead as soon as he's in sight.

+",26384,,,,,6/14/2019 8:59,,,,1,,,,CC BY-SA 4.0 +12845,2,,12841,6/14/2019 10:40,,3,,"

Facial classification and tracking are easier compared to tracking any other motion. This is due to the fact that the face has a large number of easily identifiable features.

+ +

Facial tracking is an additional layer on top of facial detection. Facial detection works by finding characteristics such as the cheekbones, chin, nose, eyes, etc. These features are easy to detect as they have very specific properties. These points are found with the help of shadows and brightness; for example, the nose and cheekbones are highlighted, whereas the eyes and lips have shadows.

+ +

Using these feature points it is possible to create a mesh to construct the face. This has been a classical way to detect and classify faces. The TrueDepth camera helps to make a better structure by providing depth with the help of an IR sensor.

+ +

+ +

Facial tracking makes use of this mesh to study and track changes in the facial structure. For example when you smile, it will detect the cheekbone points changing, similarly when you open your mouth, your lips will move apart.

+",26384,,,,,6/14/2019 10:40,,,,0,,,,CC BY-SA 4.0 +12850,1,,,6/14/2019 19:01,,1,595,"

How do I do object detection (or identify the location of an object) if there is only one kind of object, and the objects are of more or less similar size, but the picture does not look like standard scenes (it is detection of drops on a substrate in microscopic images)? Which software is good for it?

+",5852,,2444,,6/14/2019 19:31,6/24/2023 20:04,How do I perform object detection if there is only one type of object?,,1,2,,,,CC BY-SA 4.0 +12851,5,,,6/14/2019 19:02,,0,,"

https://en.wikipedia.org/wiki/Probability

+",1671,,1671,,6/14/2019 19:02,6/14/2019 19:02,,,,0,,,,CC BY-SA 4.0 +12852,4,,,6/14/2019 19:02,,0,,"For question involving probability as related to AI methods. (This tag is for general usage. Feel free to utilize in conjunction with the ""math"" and more specific probability tags.)",1671,,1671,,6/14/2019 19:02,6/14/2019 19:02,,,,0,,,,CC BY-SA 4.0 +12853,2,,7690,6/14/2019 20:03,,1,,"

This is a great question. Connecting the AI knowledge to the application is a difficult task, and it's not easy to test and perfect.

+ +

There are a few points regarding autonomous driving algorithms that one should keep in mind.

+ +
    +
  • A lot of data is required before actually applying a model to the real world.
  • +
  • The control algorithms are designed for specific tasks, there is no master algorithm that solves the issue.
  • +
  • Time is very important, decisions have to be made in real time.
  • +
+ +

The image segmentation you have mentioned in the question is rarely used in current autonomous technologies. The main issue is that

+ +
    +
  • The model is slow and requires massive computing resources. Due to this, it cannot help in real-time decision making.
  • +
  • A very important parameter, depth, is missing from the information. This is of utmost importance in autonomous driving technology. A 2D image cannot provide this parameter.
  • +
+ +

So the current technologies use

+ +
    +
  • A range of sensors (LIDAR, Ultrasonic, Cameras etc.) that provide readings 100+ times a second.
  • +
  • A lot of the computer vision algorithms and control algorithms are hard coded. There are set parameters within which decisions are made.
  • +
  • The control algorithms are designed to work with each set of sensors and the mechanical components of the car.
  • +
+ +

To get a better look at the way these algorithms are used, you can have a look at

+ + +",26384,,,,,6/14/2019 20:03,,,,0,,,,CC BY-SA 4.0 +12854,2,,12850,6/14/2019 21:19,,0,,"

The hardest part is image annotation: here the difference between object recognition and object detection becomes important. If you just want to answer the question ""does this image contain object X?"", then you just need to provide as many images that contain object X as possible, together with as many images that don't contain object X (but are otherwise similar). However if you want to answer the question ""Where exactly object X is located in this image?"" then you will need to manually provide a bounding box for each instance of object X in each image. Obviously, the second scenario is a lot more labor intensive.

+ +

After you've done this part, train either a binary image classifier (typically this will be a convolutional neural network) on your annotated images (split them into train and test partitions), or an object detector (googling ""custom object detection"" produces lots of code examples how to train it (e.g. https://towardsdatascience.com/tutorial-build-an-object-detection-system-using-yolo-9a930513643a start with Step 2B).

+",26373,,26373,,6/14/2019 21:24,6/14/2019 21:24,,,,2,,,,CC BY-SA 4.0 +12856,2,,12226,6/14/2019 22:16,,1,,"

I had the same problem where the reward kept decreasing and started to search for answers in the forum.

+ +

I let the model train while I searched. As the model trained, the reward started to increase. You can see the TensorBoard graph of rewards at validation time.

+ +


+ +

The fall continued until around ~100k steps and did not change a lot until ~250k steps. After the ~350k-th step, it slowly started to increase. Without knowing the number of steps you trained for, I would suggest training for more steps.

+ +

Also, I read about this (Reward first decreasing and then increasing) in an RL paper, if I find it I will mention it here.

+",26447,,,,,6/14/2019 22:16,,,,0,,,,CC BY-SA 4.0 +12857,1,,,6/15/2019 0:34,,3,88,"

My data is stock data with features such as stocks' closing prices. I am curious to know whether I can put economic features, such as the 'national interest rate' or 'unemployment rate', beside each stock's features.

+

Data:

+
  Date  Ticker  Open  High  Low  Close  Interest  Unemp. 
+  1/1    AMZN    75    78     73   76     0.015     0.03
+  1/2    AMZN    76    77     72   72     0.016     0.03
+  1/3    AMZN    72    78     76   77     0.013     0.03
+  ...    ...     ...   ...    ...  ...    ...       ...
+  1/1    AAPL    104   105    102  102    0.015     0.03
+  1/2    AAPL    102   107    104  105    0.016     0.03
+  1/3    AAPL    105   115    110  111    0.013     0.03
+  ...    ...     ...   ...    ...  ...    ...      ...
+
+

As you can see from the table above, daily prices of AMZN and AAPL are different but the Interest and Unemployment rates are the same. Can I feed the data to my neural network like the table above?

+

In other words, can I put the individual stocks' information beside environment features such as interest rates?

+",26037,,26037,,1/7/2021 21:30,5/28/2023 3:03,How to make a distinction between item feature and environment feature?,,1,1,,,,CC BY-SA 4.0 +12860,1,,,6/15/2019 1:39,,2,77,"

I have physical model prediction data as well as actual data. From this I can calculate the error of each prediction data point through simple subtraction. I am hoping to train a neural network to be able to assign an error to the input of the physical model.

+ +

My current plan is to normalize the error of each data point and assign it as a label to each model input. So the NN would be trained (and validated) on 1000 data points with the associated error as a label. Once the model is trained, I would be able to input one data point, and the output of the neural network would be a single class, that is, the error. The purpose this would serve would be to tune the physical prediction model. Would this kind of architecture work? If so, would you recommend a feedforward network or an RNN? Thank you.

+",26451,Arthur Shune,,,,6/15/2019 5:25,Neural Network for Error Prediction of a Physics Model?,,0,6,,,,CC BY-SA 4.0 +12867,5,,,6/15/2019 14:36,,0,,"

See e.g. https://en.wikipedia.org/wiki/Explainable_artificial_intelligence.

+",2444,,2444,,6/15/2019 14:36,6/15/2019 14:36,,,,0,,,,CC BY-SA 4.0 +12868,4,,,6/15/2019 14:36,,0,,"For questions related to explainable artificial intelligence (XAI), also known as interpretable AI, which refers to AI techniques that can be trusted and easily understood by humans, which are particularly relevant in areas like healthcare or self-driving cars. There are several concepts related to XAI, such as accountability, fairness, and transparency.",2444,,2444,,8/13/2019 23:30,8/13/2019 23:30,,,,0,,,,CC BY-SA 4.0 +12870,1,24138,,6/15/2019 23:07,,18,2966,"

Explainable artificial intelligence (XAI) is concerned with the development of techniques that can enhance the interpretability, accountability, and transparency of artificial intelligence and, in particular, machine learning algorithms and models, especially black-box ones, such as artificial neural networks, so that these can also be adopted in areas, like healthcare, where the interpretability and understanding of the results (e.g. classifications) are required.

+

Which XAI techniques are there?

+

If there are many, to avoid making this question too broad, you can just provide a few examples (the most famous or effective ones), and, for people interested in more techniques and details, you can also provide one or more references/surveys/books that go into the details of XAI. The idea of this question is that people could easily find one technique that they could study to understand what XAI really is or how it can be approached.

+",2444,,2444,,1/14/2022 13:57,1/14/2022 20:37,Which explainable artificial intelligence techniques are there?,,3,0,,,,CC BY-SA 4.0 +12871,2,,12870,6/15/2019 23:07,,5,,"

There are a few XAI techniques that are (partially) agnostic to the model to be interpreted

+ + + +

There are also ML models that are not considered black boxes and that are thus more interpretable than black boxes, such as

+ +
    +
  • linear models (e.g. linear regression)
  • +
  • decision trees
  • +
  • naive Bayes (and, in general, Bayesian networks)
  • +
+ +

For a more complete list of such techniques and models, have a look at the online book Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, by Christoph Molnar, which attempts to categorise and present the main XAI techniques.

+",2444,,2444,,6/30/2019 15:07,6/30/2019 15:07,,,,0,,,,CC BY-SA 4.0 +12872,1,13060,,6/15/2019 23:16,,4,235,"

In the proofs for the original GAN paper, it is written:

+ +

$$\int_x p_{data}(x) \log D(x)\,dx+\int_z p(z)\log(1-D(G(z)))\,dz =\int_x p_{data}(x)\log D(x)+p_G(x) \log(1-D(x))\,dx$$

+ +

I've seen some explanations asserting that the following equality is the key to understanding:

+ +

$$E_{z\sim p_z(z)}\log(1-D(G(z)))=E_{x\sim p_G(x)}\log(1-D(x))$$

+ +

which is a consequence of the LOTUS theorem and $x_g = g(z)$. Why is $x_g = g(z)$?

+",26313,,8068,,6/17/2019 19:45,6/25/2019 19:28,How is G(z) related to x in GAN proof?,,1,1,,,,CC BY-SA 4.0 +12874,1,,,6/16/2019 6:11,,3,643,"

For the purposes of this question, let's suppose that an artificial general intelligence (AGI) is defined as a machine that can successfully perform any intellectual task that a human being can [1].

+

Would an AGI have to be Turing complete?

+",17541,,2444,,12/20/2021 11:47,12/20/2021 11:47,Would an artificial general intelligence have to be Turing complete?,,2,1,0,,,CC BY-SA 4.0 +12875,1,12879,,6/16/2019 9:48,,2,74,"

I'm majoring in pure linguistics (not computational), and I don't have any basic knowledge regarding computational science or mathematics. But I happen to take the "Automatic Speech Recognition" course in my graduate school and struggling with it.

+

I have a question regarding getting the formula for a component of the forward algorithm.

+

$$\alpha_t(j) = \sum_{i=1}^{N} P(q_{t-1} = i, q_t=j, o_1^{t-1}, o^t|\lambda)$$

+

where $q$ is a hidden state, $o$ is a given observation, and $\lambda$ contains the transition probabilities, the emission probabilities and the start/end state.

+

Is the Markov assumption (the current state is only dependent upon the one right before it) assumed here? I thought so, because it contains $q_{t-1}=i$ and not $q_{t-2}=k$ or $q_{t-3}=l$.

+",26470,,2444,,1/1/2022 10:44,1/1/2022 10:44,Is the Markov property assumed in the forward algorithm?,,1,0,,,,CC BY-SA 4.0 +12876,5,,,6/16/2019 13:09,,0,,"

See e.g. https://en.wikipedia.org/wiki/Hidden_Markov_model.

+",2444,,2444,,6/16/2019 13:09,6/16/2019 13:09,,,,0,,,,CC BY-SA 4.0 +12877,4,,,6/16/2019 13:09,,0,,For questions related to the hidden Markov model and related algorithms such as the forward algorithm.,2444,,1671,,7/10/2021 3:44,7/10/2021 3:44,,,,0,,,,CC BY-SA 4.0 +12878,1,,,6/16/2019 13:16,,3,105,"

Problem Statement

+ +

I've built a classifier to classify a dataset consisting of n samples and four classes of data. To this end, I've used pre-trained VGG-19, pre-trained AlexNet and even LeNet (with cross-entropy loss). However, I just changed the softmax layer's architecture and placed just four neurons there (because my dataset includes just four classes). Since the dataset classes have a striking resemblance to each other, this classifier was unable to classify them and I was forced to use other methods. During the training phase, after some epochs, the loss decreased from approximately 7 to approximately 1.2, but there were no changes in accuracy and it was frozen at 25% (random accuracy). In the best epochs, the accuracy just reached near 27%, but it was completely unstable.

+ +

Question

+ +

How is this justifiable? If loss reduction means model improvement, why doesn't accuracy increase? How is it possible that the loss decreases by nearly 6 points (approximately from 7 to 1) but nothing happens to the accuracy at all?

+",26472,,2444,,4/13/2020 16:13,4/13/2020 19:27,"If loss reduction means model improvement, why doesn't accuracy increase?",,2,0,,,,CC BY-SA 4.0 +12879,2,,12875,6/16/2019 13:39,,0,,"

In general, to formally state that the Markov property holds, you need to have $P( +x_t \mid x_{t-1:1}) = P(x_t \mid x_{t-1})$.

+ +

So, you cannot conclude only from $P(q_{t-1} = i, q_t=j, o_1^{t-1}, o^t|\lambda)$ that the Markov property holds, because $P(q_{t-1} = i, q_t=j, o_1^{t-1}, o^t|\lambda)$ is the joint probability of $q_{t-1} = i, q_t=j, o_1^{t-1}$ and $o^t$ given $\lambda$.

+ +

Nonetheless, the theory of hidden Markov model often assumes that a few properties (including the Markov property) hold

+ +
    +
  1. the Markov property: $$P(q_{t+1} \mid q_{t}) = P(q_{t+1} \mid q_{t:1}),$$ where $q_t$ is the hidden state at time step $t$ and $q_{t:1} = q_t, q_{t-1}, \dots, q_1$.

  2. +
  3. the stationarity property: $$P(q_{t_1 + 1} \mid q_{t_1}) = P(q_{t_2 + 1} \mid q_{t_2}),$$ for any time step $t_1$ and $t_2$. In other words, the state transition probabilities are independent of the actual time at which the transitions takes place.

  4. +
  5. the output independence property: $$P(o^{T:1} \mid q_{T:1}) = \prod_{t=1}^T P(o^t \mid q_t , \lambda)$$ +where $o^{T:1} = o^T, o^{T-1}, \dots, o^1$. In other words, this is the assumption that the output at time step $t$, $o^t$, is independent of the outputs at previous time steps.

  6. +
+",2444,,,,,6/16/2019 13:39,,,,0,,,,CC BY-SA 4.0 +12881,1,,,6/16/2019 14:27,,2,3469,"

What is the use of the softmax function? Why is it used at the end of the fully-connected layer in a convolutional neural network?

+",20551,,2444,,6/16/2019 14:35,6/16/2019 15:47,What is the use of softmax function in a CNN?,,2,0,,,,CC BY-SA 4.0 +12882,2,,12881,6/16/2019 14:47,,1,,"

The main purpose of the softmax function is to transform the (unnormalised) output of $K$ units (which is e.g. represented as a vector of $K$ elements) of a fully-connected layer to a probability distribution (a normalised output), which is often represented as a vector of $K$ elements, each of which is between $0$ and $1$ (a probability) and the sum of all these elements is $1$ (a probability distribution).

+ +

In the case of a classification task, the $i$th element of the vector produced by the softmax function corresponds to the probability of the input of the network of belonging to the $i$th class (e.g. a dog).
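A tiny numpy illustration of the transformation (the logits below are made-up numbers standing in for the raw outputs of the last fully-connected layer):

    import numpy as np

    logits = np.array([2.0, 1.0, 0.1, -1.0])     # unnormalised outputs of the last layer
    probs = np.exp(logits - logits.max())        # subtracting the max improves numerical stability
    probs /= probs.sum()
    print(probs)        # approximately [0.64 0.23 0.10 0.03], and probs.sum() == 1.0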

+",2444,,,,,6/16/2019 14:47,,,,6,,,,CC BY-SA 4.0 +12883,2,,12881,6/16/2019 15:47,,1,,"

In short, the softmax function helps with multi-class classification, i.e., an output over more than two possibilities. It works well with the categorical cross-entropy loss.

+",15465,,,,,6/16/2019 15:47,,,,0,,,,CC BY-SA 4.0 +12884,2,,10288,6/16/2019 19:43,,1,,"

Nim is a simple game and it's really simple to build a bot that gives the optimal solution.

+ +
+

The correct move is to leave an odd number of piles of size 1

+
+ +

So, when it comes to training an ANN to play a game, there are some things to keep in mind.

+ +
  • Fixed play area (this is taken care of, as there are only 3 stacks)
  • Providing input to the model so it knows its current state (you can use a simple array of integers [3, 4, 1] denoting the current state of the stacks)
  • Providing feedback. An important step to guide the network in the right direction (you already have an optimal bot to do this)
+ +

Now that you can easily cover all the requirements, it's pretty simple to teach the model.

+ +
  • Input - current state of the model, e.g. [3, 4, 5]
  • Output - move that the model will make, e.g. [2, 0, 0]
  • Final output - here you will have to add another layer with a custom function.

    This is the important part, to direct the model. Check if the move made is GOOD, BAD or ILLEGAL and assign a custom output accordingly (a possible sketch of such a function is given below). For example:

    • GOOD: return 1 if it matches one of the optimal moves.
    • BAD: return 0 if it's not the optimal move.
    • ILLEGAL: return -1 if the move is not allowed.
+ +
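One possible shape for that custom scoring function is sketched below; the helper optimal_moves(state), which returns the set of optimal moves computed by the existing perfect-play bot, is assumed and not implemented here.

    def score_move(state, move):
        # state and move are lists like [3, 4, 5] and [2, 0, 0] (stones removed per pile).
        touched = [m for m in move if m > 0]
        # ILLEGAL: touching anything other than exactly one pile, negative amounts,
        # or taking more stones than a pile holds
        if len(touched) != 1 or any(m < 0 or m > s for m, s in zip(move, state)):
            return -1
        if tuple(move) in optimal_moves(state):   # assumed helper from the optimal bot
            return 1          # GOOD
        return 0              # BAD (legal but sub-optimal)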

Another thing to keep in mind is that this might require a slightly larger network, as it has to learn complex functions that are not linearly separable.

+ +

You don't need to train the model by playing it against itself, as you already have the optimal solution. Playing against itself is required only when you are trying to achieve something that is better than the current best agent.

+ +

Check this out, it's pretty interesting Neural network to play snake

+",26384,,,,,6/16/2019 19:43,,,,1,,,,CC BY-SA 4.0 +12885,2,,12878,6/16/2019 23:31,,1,,"

Loss reduction means model improvement, but not in a wrong setup, where a random choice produces the least loss. So there is probably some critical setup error. What classes do you have? I also saw this recently when experimenting with an encoder with a too-narrow coding layer: it just equalizes the output to average values, because that state has minimum loss.

+",25836,,,,,6/16/2019 23:31,,,,0,,,,CC BY-SA 4.0 +12886,1,,,6/17/2019 5:39,,1,46,"

I want to develop a regression model using an artificial neural network. For developing such a model, I use standardised (z-score normalised) data.

+

Given below is a sample data set. Here, MAX is the real data, but I am using MAX-ZS (these values are continuous).

+

+

So my question is: while developing the model, do I have to perform further normalization, such as min-max scaling, on my training data? Any kind of help is appreciated!

+",24006,,-1,,6/17/2020 9:57,6/17/2019 10:21,Further Normalization of Standardized data - ANN,,1,0,,,,CC BY-SA 4.0 +12887,1,,,6/17/2019 5:55,,1,15,"

https://github.com/robinsloan/rnn-writer

+ +

I preface this by saying I do not know much about this topic, only that I have an intense interest in it, so I'm hoping I can make my questions as clear as possible.

+ +

This writing assistant was released with full code and instructions on how to make it work. I was halfway through this process when I was told, basically, that this could not work on a Windows PC. Torch, specifically, either doesn't work on PC, or doesn't work very well, and I am hesitant to continue.

+ +

First of all, does anyone know if that's true?

+ +

If it is true, is it theoretically possible to recreate this in a different way that will work on Windows PCs?

+ +

If so, has anyone ever done it before, or know how to do it?

+ +

If there is a way to make Torch work on PCs, is someone willing to tell me how?

+ +

I apologize if this isn't meant for this specific area of discussion. I just don't know where else to go with my questions that I will get any kind of helpful responses. Even if it's just telling me where I can go for more pertinent responses, that would still be appreciated.

+ +

Thank you for any help you might be willing to give. Please let me know if I need to clarify anything.

+",15659,,,,,6/17/2019 5:55,Questions regarding rrn-writer by Robin Sloane?,,0,0,,,,CC BY-SA 4.0 +12888,1,12894,,6/17/2019 6:38,,4,438,"

I've read that for MDPs the state transition function $P_a(s, s')$ is a probability. This seems strange to me for modeling because most environments (like video games) are deterministic.

+ +

Now, I'd like to assert that most systems we work with are deterministic given enough information in the state (i.e. in a video game, if you had the random number seed, you could predict 'rolls', and then everything else follows game logic).

+ +

So, my guess for why MDP state transitions are probabilities is that the state given to the MDP is typically a subset (i.e. from feature engineering) of the total information available. That, and of course to model non-deterministic systems.

+ +

Is my understanding correct?

+",26487,,2444,,6/17/2019 12:28,6/17/2019 12:28,Why are state transitions in MDPs probabilistic rather than deterministic?,,1,0,,,,CC BY-SA 4.0 +12889,1,,,6/17/2019 6:49,,2,245,"

In the book Prolog Programming for Artificial Intelligence, a large and intricate chapter (chapter 14) is dedicated to Expert Systems. In these systems, a knowledge-database is represented through facts and rules in a declarative manner, and then we use the PROLOG inference engine to derive statements and decisions.

+

I was wondering: are there any examples of expert systems that represent knowledge through a standard Relational Database approach and then extract facts through SQL queries? Is there any research in this area? If not, why is a rule-based approach preferred?

+",23527,,2444,,1/27/2021 21:19,1/27/2021 21:25,Are Relational DBs and SQL used in Expert Systems?,,1,0,,,,CC BY-SA 4.0 +12891,2,,12114,6/17/2019 8:31,,0,,"

I found this article from James Manyika (McKinsey) helpful: Applying artificial intelligence for social good

+",25362,,,,,6/17/2019 8:31,,,,0,,,,CC BY-SA 4.0 +12892,1,,,6/17/2019 8:57,,4,53,"

So I'm finding AutoML to be pretty interesting but I'm still learning how it all works. I've played with the incredibly broken AutoKeras and got some decent results.

+ +

The question is, if you are using a NN to optimize the architecture of another network, why not take it another layer deeper and use another network to find the optimum architecture for your Parent network with a grand-parent network?

+ +

The problem doesn't necessarily need to expand exponentially as the grand-parent network could do few-shot training sessions on the parent network which itself is doing few or one-shot training.

+",17541,,17541,,6/17/2019 9:04,6/17/2019 21:57,Why not go another layer deeper with Auto-AutoML?,,1,0,,,,CC BY-SA 4.0 +12893,2,,12886,6/17/2019 10:21,,1,,"

Data scaling or normalization is the process of putting the model's data into a standard format so that training is more accurate and faster.

+ +

So you just have to scale the data once. Doesn't matter what scaler you are using. Just make sure to initialize the scaler with the training data and then use the same parameters to scale the test data.

+ +

The z-score normalized data (MAX-ZS) can be used directly to train the network.
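
As a minimal sketch (assuming scikit-learn, which the original answer does not mention), fitting the scaler on the training data only and reusing the same parameters for the test data looks like this:

# Minimal sketch, assuming scikit-learn is available
+import numpy as np
+from sklearn.preprocessing import StandardScaler
+
+X_train = np.array([[10.0], [20.0], [30.0]])   # hypothetical raw MAX values
+X_test = np.array([[25.0]])
+
+scaler = StandardScaler().fit(X_train)   # mean and std are learned from the training data only
+X_train_zs = scaler.transform(X_train)   # z-score normalised training data
+X_test_zs = scaler.transform(X_test)     # the same mean/std are reused for the test data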

+",26384,,,,,6/17/2019 10:21,,,,0,,,,CC BY-SA 4.0 +12894,2,,12888,6/17/2019 10:38,,2,,"

Your understanding is right!

+ +

Using a probabilistic transition function allows the model to explore a bigger search space before making a decision. One of the most important use cases of MDP is in POS tagging in NLP using a Hidden Markov Model.

+ +

In the case of a deterministic model, the search space is limited by the number of transitions, and hence at each step a definite decision is made. This does not take into account possible relationships with previous states; rather, it only deals with the current and next state. These models are good for solving a certain range of tasks, like decision trees, etc.

+ +

When it comes to tasks such as weather prediction, the historical weather data is significant. In such cases, we cannot make use of a deterministic approach. You always predict the chance of rainfall, etc.

+ +

+ +

This example can also be extended to predict the weather for future days
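
As a small illustration (a sketch with made-up numbers, not the diagram above), a probabilistic transition function is just a row-stochastic matrix that can be propagated forward, so the prediction is a distribution over states rather than a single definite state:

# Minimal sketch with hypothetical transition probabilities
+import numpy as np
+
+# P[i, j] = probability of moving from state i to state j (each row sums to 1)
+P = np.array([[0.7, 0.3],    # state 0, e.g. Healthy
+              [0.4, 0.6]])   # state 1, e.g. Fever
+
+state = np.array([1.0, 0.0])   # start in state 0 with certainty
+for _ in range(3):             # distribution over states after 3 steps
+    state = state @ P
+print(state)                   # only probabilities, no single definite next state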

+",26384,,,,,6/17/2019 10:38,,,,2,,,,CC BY-SA 4.0 +12896,1,,,6/17/2019 13:15,,2,8206,"

I am planning to use BERT embeddings in the LSTM embedding layer instead of the usual Word2vec/Glove Embeddings. What are the possible ways to do that?

+",26115,,2444,,11/1/2019 2:27,11/1/2019 2:27,Adding BERT embeddings in LSTM embedding layer,,1,2,,,,CC BY-SA 4.0 +12897,1,,,6/17/2019 15:00,,1,36,"

Do you have any advice on which neural network architecture is best for the following task?

+ +

Let the input be some (complex) function; the neural network receives a stream of its values, so I guess there will be some kind of RNN or CNN?

+ +

The output is a classifier: is the function the same or not?

+ +
    +
  • If the neural network thinks, that the input is still the same function, the output is 0.
  • +
  • If the input function changes, the output will be 1.
  • +
+ +

The input function is of course not one value or a simple math function (which would be trivial), but may be really sophisticated. So the neural network learns an abstraction of same vs. different over any complex flow?

+ +

How would you approach this task?

+",25836,,26384,,6/17/2019 15:10,6/17/2019 19:30,Changes in flow detection neural network?,,1,1,,,,CC BY-SA 4.0 +12898,2,,12896,6/17/2019 15:02,,3,,"

Instead of using the Embedding() layer directly, you can create a new bertEmbedding() layer and use that.

+ + + +
# Sample code
+# Model architecture
+import tensorflow as tf
+
+# Custom BERT layer: `BertLayer` and the `bert_inputs` placeholders are
+# defined in the article linked below.
+bert_output = BertLayer(n_fine_tune_layers=10)(bert_inputs)
+
+# Build the rest of the classifier on top of the BERT output
+dense = tf.keras.layers.Dense(256, activation='relu')(bert_output)
+pred = tf.keras.layers.Dense(1, activation='sigmoid')(dense)
+
+model = tf.keras.models.Model(inputs=bert_inputs, outputs=pred)
+model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
+model.fit(...)  # fit on your own (inputs, labels) data
+
+ +

This article will walk you through the entire process of creating the custom BERT layer along with example code. Give it a read.

+",26384,,,,,6/17/2019 15:02,,,,0,,,,CC BY-SA 4.0 +12900,1,,,6/17/2019 18:51,,1,173,"

An important property of a reinforcement learning problem is whether the environment of the agent is static, which means that nothing changes if the agent remains inactive. Different learning methods assume in varying degrees that the environment is static.

+ +

How can I check if and (if so) where in the Monte Carlo algorithm, temporal difference learning (TD(0)), the Dyna-Q architecture, and R-Max a static environment is implicitly assumed?

+ +

How could I modify the relevant learning methods so that they can in principle adapt to changing environments? (It can be assumed that $\epsilon$ is sufficiently large.)

+",26494,,2444,,6/17/2019 19:43,6/17/2019 19:43,How do I know if the assumption of a static environment is made?,,0,2,,,,CC BY-SA 4.0 +12901,2,,12897,6/17/2019 19:30,,1,,"

A network is able to fit to a certain function over several iterations while training. Now you want the model to be able to detect a change in a list of inputs from the function. This is not possible without first training the model on some data.

+ +

Say you want to use a simple function

+ + + +
# Sample function
+def f(x):
+    return x
+
+ +

Say you create inputs for function using sets of 3 integers for x.

+ +
# Data
+f1 = [[1, 2, 3],
+      [4, 5, 6],
+      ...      ]
+
+# Some random values
+f2 = [[1, 4, 19],
+      [16, 35, 36],
+      ...      ]
+
+ +

Now if you use data f1 labelled as 0 and data f2 labelled as 1, in the best case the ANN will only learn to differentiate between data from function f1 and some other data.

+ +

To detect change, the model first has to fit a certain function, which requires it to be trained over a number of epochs. Then it will be able to detect if the values don't match the function, but such a model will only be able to handle a single function.

+",26384,,,,,6/17/2019 19:30,,,,2,,,,CC BY-SA 4.0 +12903,5,,,6/17/2019 19:41,,0,,"

For more info, have a look e.g. at https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume4/kaelbling96a-html/node29.html.

+",2444,,2444,,6/17/2019 19:41,6/17/2019 19:41,,,,0,,,,CC BY-SA 4.0 +12904,4,,,6/17/2019 19:41,,0,,"For questions related to the reinforcement learning ""dyna"" architecture.",2444,,2444,,6/17/2019 19:41,6/17/2019 19:41,,,,0,,,,CC BY-SA 4.0 +12907,2,,12892,6/17/2019 21:57,,1,,"

Logically it is possible, but you will just end up complicating the entire task.

+ +

The aim of AutoML is to provide a drop in solution to the customers. To do this, a trained network decides and generates the model architecture. This is done so that anyone with basic experience is able to integrate the solution into their systems.

+ +

Currently, the complicated architectures and networks require experienced data scientists to build, train and deploy. To overcome this bottleneck and make ML accessible to all, AutoML is being developed.

+ +

So adding another grand-parent network to optimize the autoML network will just complicate the task in terms of computation time and hyperparameter optimization.

+ +

In case we decide to add another network, now the researchers must look at this network and tune it with regard to both the inner network and the model. This means more work and no direct way to understand how the hyperparameters are affecting the final results.

+",26384,,,,,6/17/2019 21:57,,,,1,,,,CC BY-SA 4.0 +12908,1,12909,,6/18/2019 0:35,,28,21991,"

In reinforcement learning (RL), the credit assignment problem (CAP) seems to be an important problem. What is the CAP? Why is it relevant to RL?

+",2444,,2444,,6/20/2020 10:41,6/20/2020 10:41,What is the credit assignment problem?,,1,0,,,,CC BY-SA 4.0 +12909,2,,12908,6/18/2019 0:35,,36,,"

In reinforcement learning (RL), an agent interacts with an environment in time steps. On each time step, the agent takes an action in a certain state and the environment emits a percept or perception, which is composed of a reward and an observation, which, in the case of fully-observable MDPs, is the next state (of the environment and the agent). The goal of the agent is to maximise the reward in the long run.

+ +

The (temporal) credit assignment problem (CAP) (discussed in Steps Toward Artificial Intelligence by Marvin Minsky in 1961) is the problem of determining the actions that lead to a certain outcome.

+ +

For example, in football, at each second, each football player takes an action. In this context, an action can e.g. be ""pass the ball"", ""dribble"", ""run"" or ""shoot the ball"". At the end of the football match, the outcome can either be a victory, a loss or a tie. After the match, the coach talks to the players and analyses the match and the performance of each player. He discusses the contribution of each player to the result of the match. The problem of determining the contribution of each player to the result of the match is the (temporal) credit assignment problem.

+ +

How is this related to RL? In order to maximise the reward in the long run, the agent needs to determine which actions will lead to such outcome, which is essentially the temporal CAP.

+ +

Why is it called credit assignment problem? In this context, the word credit is a synonym for value. In RL, an action that leads to a higher final cumulative reward should have more value (so more ""credit"" should be assigned to it) than an action that leads to a lower final reward.

+ +

Why is the CAP relevant to RL? Most RL agents attempt to solve the CAP. For example, a $Q$-learning agent attempts to learn an (optimal) value function. To do so, it needs to determine the actions that will lead to the highest value in each state.
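
As a minimal sketch (not from the original answer), the tabular $Q$-learning update shows how credit flows backwards: the value of the action just taken is nudged towards the reward plus the value of the best follow-up action, so actions that eventually lead to high rewards accumulate more credit over many updates.

# Minimal sketch of the tabular Q-learning update rule
+import numpy as np
+
+n_states, n_actions = 5, 2
+Q = np.zeros((n_states, n_actions))
+alpha, gamma = 0.1, 0.9          # learning rate and discount factor
+
+def q_update(s, a, r, s_next):
+    # credit for (s, a) grows if it led to reward and/or a valuable next state
+    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
+
+q_update(s=3, a=1, r=1.0, s_next=4)   # hypothetical transition with reward 1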

+ +

There are a few variations of the (temporal) CAP problem. For example, the structural CAP, that is, the problem of assigning credit to each structural component (which might contribute to the final outcome) of the system.

+",2444,,,,,6/18/2019 0:35,,,,0,,,,CC BY-SA 4.0 +12911,1,12913,,6/18/2019 2:17,,7,578,"

All sources I can find provide a similar explanation to each phase.

+ +

In the Selection Phase, we start at the root and choose child nodes until reaching a leaf. Once the leaf is reached (assuming the game is not terminated), we enter the Expansion Phase.

+ +

In the Expansion Phase, we expand any number of child nodes and select one of the expanded nodes. Then, we enter the Play-Out Phase.

+ +

Here is my confusion. If we choose to only expand a single node, the nodes that were not expanded will never be considered in future selections as we only select child nodes until a leaf is reached during the Selection Phase. Is this correct? If not, what am I misunderstanding about the Selection Phase?

+",26498,,2444,,11/19/2019 20:11,11/19/2019 20:11,When does the selection phase exactly end in MCTS?,,1,0,,,,CC BY-SA 4.0 +12913,2,,12911,6/18/2019 6:39,,2,,"
+

If we choose to only expand a single node, the nodes that were not expanded will never be considered in future selections as we only select child nodes until a leaf is reached during the Selection Phase. Is this correct?

+
+ +

No this is not correct.

+ +
+

If not, what am I misunderstanding about the Selection Phase?

+
+ +

The selection phase does not end only when you reach a node that has no expanded nodes. It ends when you reach a node that has any unexpanded nodes, at which point you typically pick one or more nodes you have not yet expanded at that point in the tree, expand them and collect one or more rollout results for them. Variations are possible, such as choosing whether to expand or continue selecting stochastically, or expanding all child nodes at the same time using value estimates to initialise them - the latter is what AlphaZero does.
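
To make this concrete, here is a hedged sketch (not from the original answer) of a selection loop that stops as soon as the current node still has unexpanded children, so those children remain reachable on later iterations. The Node class with children, untried_moves, visits, value, parent and is_terminal() is an assumption for illustration only.

# Minimal sketch of the selection phase, assuming a hypothetical Node class
+import math
+
+def uct_score(child, c=1.4):
+    return child.value / child.visits + c * math.sqrt(math.log(child.parent.visits) / child.visits)
+
+def select(node):
+    # descend only while the current node is non-terminal and fully expanded
+    while not node.is_terminal() and len(node.untried_moves) == 0:
+        node = max(node.children, key=uct_score)
+    return node   # a node that still has unexpanded children (or a terminal node)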

+",1847,,1847,,6/18/2019 8:32,6/18/2019 8:32,,,,1,,,,CC BY-SA 4.0 +12914,1,,,6/18/2019 7:03,,11,390,"

What are some interesting myths of Artificial Intelligence and what are the facts behind them?

+",26351,,2444,,6/18/2019 16:09,6/28/2019 14:17,What are the common myths associated with Artificial Intelligence?,,2,1,,,,CC BY-SA 4.0 +12915,1,,,6/18/2019 7:35,,4,557,"

I was looking at two papers

+ + + +

I'm trying to implement the second paper and I'm having some trouble understanding the differences between GAT and GaAN. By looking at equation 1 in the GaAN paper, I can see only two differences from GAT.

+ +
    +
  • The first difference is that they are doing a dot product with the initial feature map and
  • +
  • Have another fully connected layer to project the result.
  • +
+ +

Is there something else that I'm missing?

+",20430,,2444,,6/18/2019 12:40,10/6/2020 8:07,What is the difference between GAT and GaAN?,,0,4,,,,CC BY-SA 4.0 +12916,1,,,6/18/2019 10:20,,2,67,"

I have a couple of questions and I was wondering if you could answer them.

+ +

I have a bunch of images of cars, side view only. I want to train the model with those images. My objects of interest are 3 types of trucks that have different trailers. I rarely see two target objects in one image (maybe 1 or 2 in every 1000 images). However, I do see other types of cars that I do not want to detect.

+ +

My questions are:

+ +
    +
  1. Do you think I should tackle this problem as a detection task or classification task? (for example, should I consider multi-label classification or omit those pictures)

  2. +
  3. Should I also include other vehicles that I do not want to detect in my training dataset? let say I do not assign bounding box to them but include them in training dataset just to make the system robust.

  4. +
+ +

I trained YOLO with 200 images; sometimes the trained model got confused and detected a wrong object that is not in any of the classes. Will this also happen when training with 2000 images per class?

+ +

Is this due to a small number of dataset or it is because of not including those images with no bounding boxes?

+ +

Thank you in advance!

+",20025,,12853,,6/21/2019 22:20,6/21/2019 22:20,How to choose our data set wisely?,,0,1,,,,CC BY-SA 4.0 +12918,1,,,6/18/2019 11:40,,3,156,"

In Chapter 15 of Russel and Norvig's Artificial Intelligence -- A Modern Approach (Third Edition), they describe three basic tasks in temporal inference:

+ +
    +
  1. Filtering,
  2. +
  3. Likelihood, and
  4. +
  5. Finding the Most Likely Sequence.
  6. +
+ +

My question is on the difference between the first and third task. Finding the Most Likely Sequence determines, given evidences $e_1,\dots,e_n$, the most likely sequence of states $S_1,\dots,S_n$. This is done using the Viterbi algorithm. On the other hand, Filtering provides the probability distribution on states after seeing $e_1,\dots,e_n$. You could then pick the state with the highest probability, call it $S'_n$. I am guessing that $S'_n$ should always be equal to $S_n$. Likewise, you can already do the same after any prefix $e_1,\dots,e_i$, again picking the most likely state $S'_i$. I would love to have a simple example where $S'_1,\dots,S'_n$ is not equal to the sequence $S_1,\dots,S_n$ produced by the Viterbi algorithm.

+",26508,,26508,,6/18/2019 20:28,7/19/2019 1:02,Viterbi versus filtering,,1,0,,,,CC BY-SA 4.0 +12919,1,,,6/18/2019 13:24,,1,240,"

The input in word2vec is known words (spellings), each tagged by its ID.

+ +
    +
  • But if you process real text, there can be not only dictionary words but also proper nouns like human names, trademarks, file names, etc. How do you make an input for that?
  • +
  • If you consider some input where items are variables, e.g. the meaning of the input would be x = something, and after some time you access the value of x and define some other stuff with it: what would be the format for this input, and will this approach work at all?
  • +
+",25836,,,,,6/18/2019 13:39,How to handle proper names or variable names in word2vec?,,1,1,,,,CC BY-SA 4.0 +12920,2,,12919,6/18/2019 13:39,,1,,"

Word2vec works on the concept of typical word co-occurrences. This means that it will work well only for words that occur frequently in the dataset. So proper nouns will not play any role in training the model. You can keep the proper nouns as they are, or use only the words that occur more frequently than some threshold value based on the size of your dataset.
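
For example, a hedged sketch (assuming gensim 4.x, which the original answer does not mention) that drops rare tokens such as one-off proper nouns via the min_count threshold:

# Minimal sketch, assuming gensim 4.x is installed
+from gensim.models import Word2Vec
+
+sentences = [['the', 'cat', 'sat', 'on', 'the', 'mat'],     # toy tokenised corpus
+             ['the', 'dog', 'sat', 'on', 'the', 'rug']]
+model = Word2Vec(sentences, min_count=2)   # words seen fewer than 2 times are dropped
+print(model.wv.index_to_key)               # only the frequent words remain in the vocabulary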

+ +

Once you use the value stored in variable x for something, and then change the stored value, it will not reflect anywhere unless you use the variable x again somewhere in the program.

+ + + +
# Example
+x = ""something something""
+print(x + ""..."")
+
+# Result
+something something ...
+
+# Changing x
+x = ""new value""
+
+# This new value of x will not reflect anywhere in the program
+# Unless you use the variable x again.
+
+",26384,,,,,6/18/2019 13:39,,,,2,,,,CC BY-SA 4.0 +12921,2,,12914,6/18/2019 15:33,,2,,"

In artificial intelligence, even though not everyone agrees, a common (and maybe the biggest) myth is that of the intelligence explosion, which some people claim will happen (without considering physical limits or knowing anything about thermodynamics).

+",2444,,,,,6/18/2019 15:33,,,,1,,,,CC BY-SA 4.0 +12922,2,,12889,6/18/2019 15:44,,1,,"

A recent research example is the "Grind" system. Take a look at the paper Computing FO-Rewritings in $\mathcal{E} \mathcal{L}$ in Practice: from Atomic to Conjunctive Queries (2018) by Peter Hansen and Carsten Lutz. Here's the abstract.

+
+

A prominent approach to implementing ontology-mediated queries (OMQs) is to rewrite into a first-order query, which is then executed using a conventional SQL database system. We consider the case where the ontology is formulated in the description logic $\mathcal{E} \mathcal{L}$ and the actual query is a conjunctive query and show that rewritings of such OMQs can be efficiently computed in practice, in a sound and complete way. Our approach combines a reduction with a decomposed backwards chaining algorithm for OMQs that are based on the simpler atomic queries, also illuminating the relationship between first-order rewritings of OMQs based on conjunctive and on atomic queries. Experiments with real-world ontologies show promising results.

+
+",26508,,2444,,1/27/2021 21:25,1/27/2021 21:25,,,,0,,,1/27/2021 21:26,CC BY-SA 4.0 +12923,2,,12874,6/18/2019 15:50,,1,,"

My answer is yes, but in a trivial way. The least you would expect from an intelligent agent is that it is able to execute a given Turing machine on a given input. This requires actually no intelligence, just following rules. If however, you are referring to the capability of predicting if the Turing machine will terminate on the given input, that is another matter. I don't think it is reasonable to expect such ""undecidable"" computational power for an agent to be generally intelligent.

+",26508,,,,,6/18/2019 15:50,,,,0,,,,CC BY-SA 4.0 +12926,1,,,6/18/2019 19:45,,4,1159,"

I am working on a deep reinforcement learning problem. The policy network has the same architecture as the one Deepmind published in 'Playing Atari with Deep Reinforcement Learning'. I am also using Prioritized Experience Replay. In the initial stage the behavior seems to be normal, i.e. the agent is learning gradually. However, after a while the rewards suddenly go down by a lot. The TD errors also seem to be going up at the same time. I'm not sure how to interpret this problem.

+ +

My hypotheses are:

+ +
    +
  1. The policy network is overfitting
  2. +
  3. Some filters fail to activate thereby misrepresenting the state information
  4. +
+ +

I would really appreciate it if you guys could give me some tips to narrow down and debug this problem. Cheers.

+",26514,,26514,,6/18/2019 20:10,11/9/2021 8:07,Deep Reinforcement Learning: Rewards suddenly dip down,,1,1,,,,CC BY-SA 4.0 +12927,1,12935,,6/18/2019 19:57,,4,153,"

My goal is to understand the AlphaZero paper published by DeepMind. I'm beginning my journey by trying to get the basic intuition of reinforcement learning from the book by Barto and Sutton.

+

As per my background, I'm familiar with MDPs, value iteration and policy iteration.

+

I wanted to ask up to which chapter of Barto and Sutton's book one is required to read in order to fully comprehend AlphaZero's paper. Monte-Carlo Tree Search is discussed in Chapter 8 of the book. Will reading up to that point be enough? Or would I need more resources apart from this book?

+",1299,,50294,,10/13/2021 13:30,10/13/2021 13:30,What knowledge is required for understanding the AlphaZero paper?,,1,3,,,,CC BY-SA 4.0 +12930,1,,,6/18/2019 23:10,,1,158,"

I learned that the Viterbi algorithm used for Hidden Markov Models (HMM) can classify a sequence of hidden states from the corresponding observations; Markov Random Fields (MRF) and Conditional Random Fields (CRF) can also do it.

+ +

Can these algorithms be used to classify a single future state?

+",4042,,12853,,6/20/2019 18:11,6/30/2023 6:02,"Can HMM, MRF, or CRF be used to classify the state of a single observation, not the entire observation sequence?",,1,0,,,,CC BY-SA 4.0 +12931,5,,,6/18/2019 23:12,,0,,,2444,,2444,,6/18/2019 23:12,6/18/2019 23:12,,,,0,,,,CC BY-SA 4.0 +12932,4,,,6/18/2019 23:12,,0,,"For questions related to the branch and bound algorithm design paradigm, for discrete and combinatorial optimization problems, as well as mathematical optimization, in the context of AI.",2444,,2444,,6/18/2019 23:12,6/18/2019 23:12,,,,0,,,,CC BY-SA 4.0 +12933,1,,,6/18/2019 23:50,,1,212,"

I’m looking for some help with my neural network. I’m working on a binary classification on a recurrent neural network that predicts stock movements (up and down) Let’s say I’m studying Eur/Usd, I’m using all the data from 2000 to 2017 to train et I’m trying to predict every day of 2018.

+ +

The issue I’m dealing with right now is that my program is giving me different answers every time I run it even without changing anything and I don’t understand why?

+ +

The accuracy during training from 2000 to 2017 is around 95%, but I've noticed another issue. When I train it with 1 new data point every day in 2018, I thought 2 epochs would be enough, as if it doesn't find the right answer the first time, then it knows what the answer is since the problem is binary, but apparently that doesn't work.

+ +

Do you guys have any suggestion to stabilize my NN?

+",26522,,2444,,6/24/2019 0:37,3/15/2021 13:04,How can I stabilise a recurrent neural network used for binary classification?,,2,2,,,,CC BY-SA 4.0 +12934,2,,12918,6/19/2019 0:12,,2,,"

Welcome to AI.SE @vdbuss, and great first question!

+ +

This point is touched on in Section 15.2.3 (page 576 in my copy), in the second paragraph, and there's a good exercise at the end of the chapter (15.4) that is designed to get you to think through exactly why these are different procedures. If you want to really absorb it, I suggest trying to work out that exercise! If you want the quick answer, read on.

+ +

The basic action of filtering is to generate a probability distribution $P(X_{t+1} | e_{1:t+1})$ using only two pieces of information, specifically the current state distribution $P(X_{t} | e_{1:t})$, and the new piece of evidence $e_{t+1}$. So, when computing the most likely sequence, the algorithm cannot take into account the sequences that are actually possible, while Viterbi can.

+ +

Here's a simple example: suppose I tell you that I'm going to drop you in a maze at one of two locations. I drop you near the top right corner with probability 0.75, and near the bottom left corner with probability 0.25. Suppose further that a Grue is known (with certainty) to live somewhere near the bottom left corner. Using filtering, your maximum a posteriori estimate for your location after being dropped in the maze ($t=1$) is that you are in the top right corner. You then move 1 step to the right and can see a Grue. Clearly your estimate for your position in the second timestep $(t=2)$ must be the bottom left, because Grues only live there. But, you definitely can't end up moving to the bottom left by moving right from the top right, so your sequence has probability zero overall, despite using the maximum a posteriori estimate for position at every step. To avoid this, Viterbi uses a linear amount of extra space to select the maximum a posteriori sequence, which in this case is clearly that you are near the bottom left in both timesteps.
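
If it helps, here is a hedged sketch (with made-up numbers, not the maze example above) that computes both quantities side by side; with suitable parameters the per-step filtering argmax and the Viterbi path can disagree, exactly as in the Grue story:

# Minimal sketch: per-step filtering argmax vs. the Viterbi sequence (hypothetical numbers)
+import numpy as np
+
+T = np.array([[0.9, 0.1],     # T[i, j] = P(next state j | current state i)
+              [0.2, 0.8]])
+E = np.array([[0.8, 0.2],     # E[i, k] = P(evidence k | state i)
+              [0.3, 0.7]])
+prior = np.array([0.75, 0.25])
+evidence = [0, 1, 1]          # observed symbols
+
+# Filtering: posterior over the current state after each observation
+f = prior * E[:, evidence[0]]
+filtered = [int(np.argmax(f))]
+for e in evidence[1:]:
+    f = (T.T @ f) * E[:, e]
+    f = f / f.sum()
+    filtered.append(int(np.argmax(f)))
+
+# Viterbi: single most likely joint state sequence
+d = prior * E[:, evidence[0]]
+backpointers = []
+for e in evidence[1:]:
+    scores = d[:, None] * T                    # scores[i, j] = d[i] * T[i, j]
+    backpointers.append(scores.argmax(axis=0))
+    d = scores.max(axis=0) * E[:, e]
+path = [int(np.argmax(d))]
+for bp in reversed(backpointers):
+    path.append(int(bp[path[-1]]))
+path.reverse()
+
+print(filtered, path)                          # the two sequences need not coincide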

+",16909,,,,,6/19/2019 0:12,,,,0,,,,CC BY-SA 4.0 +12935,2,,12927,6/19/2019 0:18,,4,,"

The more you read, the more deeply you can understand any paper, but given your stated background, reading the Monte-Carlo Tree Search chapter of Barto & Sutton, plus Gerald Tesauro's TD-Gammon paper (which is pretty accessible, and which is the basis for the other technique used in AlphaZero) should be enough to get a pretty good idea of what they did.

+",16909,,16909,,6/19/2019 20:49,6/19/2019 20:49,,,,0,,,,CC BY-SA 4.0 +12936,1,,,6/19/2019 4:25,,1,589,"

I would like to use the bottleneck layer of U-Net (the last layer of the encoder) to calculate the similarity between two images. For that, I have to somehow flatten the last layer of the encoder. In my opinion, there are two approaches:

+ +
    +
  1. Take the last layer which in my case is $4 \times 4 \times 16$ and flatten it to 1D

  2. +
  3. Add a dense before the decoder and then reshape the dense 1D layer into 3D

  4. +
+ +

For the second case, I am not sure how this would affect the network. Arbitrarily reshaping a 1D array into a 3D tensor. Could that introduce weird artifacts? Does someone have experience in a similar problem?

+",23063,,2444,,6/13/2020 0:19,6/13/2020 0:19,How can I use the bottleneck layer of the U-net to calculate the similarity between two images?,,0,4,,,,CC BY-SA 4.0 +12937,2,,12874,6/19/2019 7:57,,2,,"

A system is Turing complete if it can be used to simulate any Turing machine.

+ +

Given the Church-Turing thesis (which has not yet been proven), a human brain can compute any function that a Turing machine can (given enough time and space), but the reverse is not necessarily true, given that the human brain might be able to compute more functions than a Turing machine. Intuitively, humans are thus Turing complete (even though, to prove this, you need a formal model of the human), that is, given enough time and space, a human can compute anything that a Turing machine can.

+ +

Hence, an AGI, defined as an AI with human-level intelligence, needs to be Turing complete, otherwise there would be at least one function that a human can calculate but the AGI cannot, which would not make it as general as a human.

+",2444,,,,,6/19/2019 7:57,,,,0,,,,CC BY-SA 4.0 +12938,1,,,6/19/2019 8:28,,2,80,"

I am looking for an example in which it is simply impossible to use some sort of gradient descent to train a neural network. Is this available?

+ +

I have read quite some papers about gradient-free optimization tools, but they always use it on a network for which you can also use gradient descent. I want to have a situation in which the only option to train the network is by, for example, a genetic algorithm.

+",26531,,2444,,6/19/2019 17:52,6/19/2019 17:52,Neural networks when gradient descent is not possible,,0,11,,,,CC BY-SA 4.0 +12939,2,,12914,6/19/2019 10:49,,6,,"

As artificial intelligence is rapidly entering our lives, the myths around AI are also spreading rapidly. Before getting into the details, one needs to clear up these myths.

+ +

Myth 1: AI will take away our jobs:

+ +

Reality: AI is not completely different from other technologies. AI will not take away jobs, but it will change the way we work and help us increase productivity by removing monotonous work.

+ +

Myth 2: Artificial intelligence will take over the world:

+ +

Reality: AI controlling the world will, in my opinion, not be possible unless we give it that power. AI or robots will assist in our work and help us solve some tedious tasks that are difficult for humans to solve easily.

+ +

Myth 3: Intelligent machines can learn on their own

+ +

Reality: It seems that an intelligent machine can learn on its own. But the fact is that an AI engineer or AI specialist has to develop the algorithm and feed the machine with datasets and instructions; continuous monitoring is needed and, most importantly, the software has to be updated regularly.

+ +

Myth 4: Artificial Intelligence, Machine learning and Deep learning all three are same:

+ +

Reality: No, not at all. To be clear, machine learning is a part of AI and deep learning is a subset of ML. All three (AI, ML and DL) are different, but they are interrelated.

+",26265,,75,,6/28/2019 14:17,6/28/2019 14:17,,,,1,,,,CC BY-SA 4.0 +12940,1,,,6/19/2019 11:15,,1,192,"

Everyone is afraid of losing their job to robots. Will or does artificial intelligence cause mass unemployment?

+",26351,,2444,,12/17/2021 20:33,12/17/2021 20:33,Will artificial intelligence cause mass unemployment?,,3,0,,,,CC BY-SA 4.0 +12941,2,,12940,6/19/2019 13:57,,0,,"

Up to a point. Some jobs will IMHO not easily be replaced by robots, others more easily. Some could be, but I hope common sense will prevail and stop that.

+ +

Manual jobs: fruit picking, warehouse picking, and cooking are some jobs that need really subtle hand control, and precise handling of fragile items. I think those will be harder to automate than eg car factory robots.

+ +

Customer facing roles: receptionists have to do a wide variety of tasks. While some of the tasks might be aided by AI systems, a good PA or receptionist cannot easily be replaced. Also, many people would much rather interact with a fellow human than with a machine, at least in some situations.

+ +

Judgments: a lot of jobs require judging a situation, balancing risks, and making 'gut' decisions. While AI systems can do them, I think many would still require human intervention. I for one wouldn't like to be sentenced by a robo-judge, or examined by a robo-doctor. True, humans also make mistakes, but they would hopefully err in less potentially disastrous ways.

+ +

Administration: again, many tasks can be assisted, which would lead to a reduction in head count, but a general AI is still off the horizon, so you'd still need humans.

+ +

Creative arts: not likely. Would you want to read computer-generated novels, or look at computer-generated paintings?

+ +

I could go on... in general: I think AI systems will make a lot of tasks faster and easier, so you need fewer people to do them. Some jobs cannot realistically be done by machines in the near-to-mid future, so overall we need not worry too much.

+",2193,,,,,6/19/2019 13:57,,,,2,,,,CC BY-SA 4.0 +12942,2,,12940,6/19/2019 14:56,,3,,"

The nuanced, boring answer is that it depends on your definition of AI. Most people wouldn't say that the rule-based systems designed in the 70's are AI. The amazing leaps in machine learning are almost taken for granted as well (think about how normal speech and facial recognition have become). This is known as the AI effect; when we become accustomed to the technology, it loses it's 'magical aspect' and is thus no longer labelled as AI.

+ +

Since AI is so diverse and difficult to define, the question becomes incredibly abstract. Did Siri cause all secretaries to become unemployed? Did TurboTax replace all accountants? Some parts of AI will affect jobs, or even make them redundant yes. On the other hand, it will give rise to new jobs as well. It is therefore impossible to generalize it as 'AI will cause massive unemployment'.

+ +

This is not a new phenomenon, however, it has been part of the human economy ever since the industrial revolution (probably even before that, but I am not a historian). The invention of the car crippled the horse-and-wagon industry, but it brought along new jobs as well.

+",18398,,,,,6/19/2019 14:56,,,,0,,,,CC BY-SA 4.0 +12943,2,,12429,6/19/2019 15:17,,0,,"

I think normalisation into range 0-1 is needed or at least to 5-10 cliping because the values will become astronomical after many layers. Convolved image has a vector of features for each pixel. Take a RGB for example -> each color is a feature in one pixel, the next map will be like 'horisontal line, vertical line, circle' for one pixel surroundings.

+",25836,,,,,6/19/2019 15:17,,,,0,,,,CC BY-SA 4.0 +12944,2,,12940,6/19/2019 15:26,,0,,"

Yes, but you should be happy about it. It is like the usage of machines in industry causing ""mass unemployment"". Modern tendencies are unfortunately not like that - industry has not so big an interest in full automation because of the huge amount of cheap workforce from immigrants - that is the bad factor, not that ""AI takes your job"". +And further, a robot that would take your job does not have to be really smart - it could be just some doll without AI, simply doing what is in its program...

+",25836,,,,,6/19/2019 15:26,,,,0,,,,CC BY-SA 4.0 +12946,2,,12930,6/19/2019 16:53,,0,,"

Yes this is possible and is exactly what a Markov process aims to accomplish.

+ +

The Hidden Markov Model can be considered a simple dynamic Bayesian network. This means that it will generate a probabilistic outcome for a number of states (classes) given a sequence of inputs.

+ +

This can be used for a classification task based on a threshold probability to make a confirmed decision.

+ +

Let's take a simple example

+ +

+ +

The above example is a model to determine how a person will be feeling given some sample data of the previous days.

+ + + +
# Sample data
+data = ['Healthy', 'Healthy', 'Fever']
+
+ +

Using this data, it's possible to calculate the probability of each possible state (outcome).

+ + + +
# Probability after 2 Healthy and 1 Fever day
+P = 0.6 * 0.7 * 0.3
+
+# Final day state 'Fever'
+# Calculating probabilities for each state
+Dizzy  = P * 0.6
+Cold   = P * 0.3
+Normal = P * 0.1
+
+ +

We can calculate the probability of each state and this will be unique for every unique sequence of historical data.

+",26384,,,,,6/19/2019 16:53,,,,3,,,,CC BY-SA 4.0 +12947,1,,,6/19/2019 19:34,,1,20,"

I have some 64x64 pixel frames from a (simulated) video, with a spaceship moving on a fixed background. The spaceship moves in a straight line with constant velocity from left to right (along the x-axis), and the frames are from equal time intervals. I can also place the ship at different y positions and let it move. In total I have 8 y positions and 64 frames for each y position (the details don't matter that much). Intuitively, as the background is fixed, and the shape of the ship is the same, all the information to reconstruct the image is found in the x and y position of the spaceship. What I am trying to do is to have a NN with an encoder and a decoder and a bottleneck in the middle, and I want that bottleneck to have just 2 neurons. Ideally, the network would learn in these 2 neurons some function of x and y in the encoder, and the decoder would invert that function to give the original image. Here is my NN architecture (in Pytorch):

+ +
class Rocket_E_NN(nn.Module):
+    def __init__(self):
+        super().__init__()
+
+        self.encoder = nn.Sequential(
+            nn.Conv2d(3, 32, 4, 2, 1),          # B,  32, 32, 32
+            nn.ReLU(True),
+            nn.Conv2d(32, 32, 4, 2, 1),          # B,  32, 16, 16
+            nn.ReLU(True),
+            nn.Conv2d(32, 64, 4, 2, 1),          # B,  64,  8,  8
+            nn.ReLU(True),
+            nn.Conv2d(64, 64, 4, 2, 1),          # B,  64,  4,  4
+            nn.ReLU(True),
+            nn.Conv2d(64, 256, 4, 1),            # B, 256,  1,  1
+            nn.ReLU(True),
+            View((-1, 256*1*1)),                 # B, 256
            nn.Linear(256, 2),             # B, 2
+        )
+
+    def forward(self, x):
+        z = self.encoder(x)
+        return z
+
+class Rocket_D_NN(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.decoder = nn.Sequential(
+            nn.Linear(2, 256),               # B, 256
+            View((-1, 256, 1, 1)),               # B, 256,  1,  1
+            nn.ReLU(True),
+            nn.ConvTranspose2d(256, 64, 4),      # B,  64,  4,  4
+            nn.ReLU(True),
+            nn.ConvTranspose2d(64, 64, 4, 2, 1), # B,  64,  8,  8
+            nn.ReLU(True),
+            nn.ConvTranspose2d(64, 32, 4, 2, 1), # B,  32, 16, 16
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 32, 4, 2, 1), # B,  32, 32, 32
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 3, 4, 2, 1),  # B, 3, 64, 64
+        )
+
+    def forward(self, z):
+        x = self.decoder(z)
+        return x
+
+ +

And this is the example of one of the images that I have (it was much higher resolution but I brought it down to 64x64):

+ +

+ +

So after training it for around 2000 epochs with a batch size of 128, with Adam, trying several LR schedules (going from 1e-3 to 1e-6), I can't get the loss below an RMSE of 0.010-0.015 (the pixel values are between 0 and 1). The reconstructed image looks ok by eye, but I would need a better loss for the purpose of my project. Is there any way I can push the loss lower, or am I asking too much from the NN to distill all the information in these 2 numbers?

+",23871,,,,,6/19/2019 19:34,Limits for a bottleneck,,0,0,,,,CC BY-SA 4.0 +12949,1,12950,,6/19/2019 20:04,,2,114,"

Is explainable AI more feasible through symbolic AI or soft computing?

+

How much each paradigm, symbolic AI and soft computing (or hybrid approaches), addresses explanation and argumentation, where symbolic AI refers e.g. to GOFAI or expert systems, and soft computing refers to machine learning or probabilistic methods.

+",26542,,2444,,12/12/2021 12:25,12/12/2021 12:25,Is explainable AI more feasible through symbolic AI or soft computing?,,1,0,,,,CC BY-SA 4.0 +12950,2,,12949,6/19/2019 20:22,,2,,"

XAI is relevant to ""black box"" AI (machine learning methods where the decision making rationale is not apparent, only the structure of the system that led to that decision.)

+ +

Symbolic AI, GOFAI, and Expert Systems are all explainable and understood, in that the decision-making process is designed by humans. (Symbolic AI involves human-readable representations of problems.)

+ +

To directly answer, XAI is not only feasible in the latter cases, it is a prerequisite. The difficulty is in making black box decision-making explainable.

+",1671,,,,,6/19/2019 20:22,,,,0,,,,CC BY-SA 4.0 +12952,2,,12933,6/19/2019 21:38,,1,,"

Firstly, dealing with the issue that the program gives different answers every time without making any changes can be due to a couple of things.

+ +
    +
  • Assigning random values to weights and biases. This can be solved by setting a seed manually at the start of the program (see the sketch after this list).
  • +
  • Make sure you have set the model to the testing mode after training. For some frameworks, this has to be done manually.
  • +
+ +
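
As a minimal sketch of the seeding point from the first bullet (the framework lines are assumptions; keep only the ones matching what you actually use):

# Minimal reproducibility sketch
+import random
+import numpy as np
+
+SEED = 42
+random.seed(SEED)
+np.random.seed(SEED)
+
+# TensorFlow 2.x:  import tensorflow as tf; tf.random.set_seed(SEED)
+# PyTorch:         import torch; torch.manual_seed(SEED)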

Secondly, regarding your expected results.

+ +

To generate a proper accuracy metric, you will have to sample your dataset into training and testing data, making sure there is now overlap between them. This might be an issue as you have stated training on data till 2017 and then again training on data of 2018.

+ +

Lastly, don't expect that the model will know that the output is wrong and directly change it because it's binary classification. This is not how neural networks work. The model fits the solution better by gradually updating its weights and biases over a number of iterations. So it will take a number of epochs to learn new trends in the data for 2018.

+",26384,,,,,6/19/2019 21:38,,,,5,,,,CC BY-SA 4.0 +12953,1,13005,,6/20/2019 0:36,,3,55,"

I'm kind of new to machine learning/AI, but I was wondering if using thresholds/fuzzy logic-like functions and even networks of dependent, stochastic variables that change over time (LTL maybe?), would be ample enough to emulate natural processes like emotions, hunger, maybe even pain.

+ +

My dilemma is whether creating a basic library to do this for the developer community is worth it if everything can be modeled more-or-less mathematically deterministic, even if the formulas are really complicated (see research like: https://engineering.stanford.edu/news/virtual-cell-would-bring-benefits-computer-simulation-biology).

+ +

My initial reasoning was biological processes are connected to psychological functionality (e.g., being hungry might make someone irritable, but that irritability may wear-off, which triggers different paths of thought but not others). But these are so inter-dependent that it may be random or it is essentially PRNG, in order to properly simulate the mood fluctuations and biological processes computers don't have but humans do have.

+ +

Would we be better-off waiting for these complex physical/neurological models to come out?

+",26544,,26544,,6/20/2019 1:01,6/23/2019 2:39,Is There A Need For Stochastic Inputs To Mimic Real-World Biology And Environment?,,1,0,,,,CC BY-SA 4.0 +12954,1,,,6/20/2019 6:51,,-1,58,"

It seems to me that, right now, the key to making a good Machine Learning model is in choosing the right combination of hyper-parameters.

+ +

Firstly: Am I right in saying, if a model is able to tune its own hyper-parameters, we have in some sense achieved a general intelligence system? Or a glimpse of an actual artificial intelligence system?

+ +

I feel the answer lies in what one means by ""tune its own hyper-parameters"". If it means being able to reach Bayesian levels of performance on that task, then theoretically, after the tuning, the model is able to perform on par with or better than humans, and so it seems the answer would be yes.

+ +

Secondly: I understand that hyper-parameter tuning is done intuitively. But there are a set of general directions that is discernable looking at results. Here I am talking about a heuristic approach to perfect a learning model. +Consider an example: +Say I hardcode a model to, while training, observe gradient values. If the gradient is too large or the cost is highly oscillatory, then restart training with a smaller learning rate. +Then obtain metrics on a test set. If it is poor, then again restart training with regularisation or increased regularisation. +It can also observe various plot behaviours, etc.

+ +

The point is maybe not every trick up a researcher's sleeve can be hardcoded. But a decent level of basic tuning can be done.

+ +

Thirdly: Let us say, we have a reinforcement learning system on top of a supervised learning system. That is an RL network sets some hyper-parameters. The action then is to train with these hyper-parameters. The reward would be the accuracy on the test set.

+ +

Is it possible that such a system could solve the problem of hyper-parameter tuning?

+",17143,,,,,6/20/2019 7:10,Evolving Machine Learning,,1,2,,6/19/2020 22:58,,CC BY-SA 4.0 +12955,2,,12954,6/20/2019 7:10,,2,,"

Such a system can and does solve the problem of hyperparameter tuning. Google's AutoML does this. Here is another example that uses a Genetic Algorithm to breed new neural network structures. +AutoML has been shown to outperform humans in the rate that it improves network designs. It seems to favour Residual Network style topologies.

+",12509,,,,,6/20/2019 7:10,,,,0,,,,CC BY-SA 4.0 +12957,1,,,6/20/2019 9:58,,3,223,"

I have the following problem. We have $4$ separate discrete inputs, which can take any integer value between $-63$ and $63$. The output is also supposed to be a discrete value between $-63$ and $63$. Another constraint is that the solution should allow for online learning with singular values or mini-batches, as the dataset is too big to load all the training data into memory.

+ +

I have tried the following method, but the predictions are not good.

+ +

I created an MLP or feedforward network with $4$ inputs and $127$ outputs. The inputs are being fed without normalization. The number of hidden layers is $4$ with $[8,16,32,64]$ units in each (respectively). So, essentially, this treats the problem like a sequence classification problem. For training, we feed the non-normalized input along with a one-hot encoded vector for that specific value as output. The inference is done the same way. Finding the hottest output and returning that as the next number in the sequence.

+",26554,,2444,,7/20/2019 17:43,5/1/2023 8:00,Which online machine learning technique to use for multi-class classification problem with multiple inputs?,,2,0,,,,CC BY-SA 4.0 +12958,2,,12957,6/20/2019 13:04,,0,,"

I suggest using Data Stream algorithms to try on your problem, since you are asking for ""online learning with singular values or minibatches as the dataset is too big to load all the training data into memory.""

+ +

MOA is a good choice for these algorithms. Hoeffding Trees is also a good first choice to try.
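
MOA itself is a Java framework, but as a rough Python illustration of the same incremental idea (a sketch that uses scikit-learn's partial_fit as a stand-in, not MOA or Hoeffding trees; assumes a recent scikit-learn), mini-batches can be streamed through the model one at a time without ever loading the full dataset:

# Minimal sketch of incremental (online) multi-class learning with mini-batches
+import numpy as np
+from sklearn.linear_model import SGDClassifier
+
+classes = np.arange(-63, 64)               # the 127 possible output values
+model = SGDClassifier(loss='log_loss')
+
+def train_on_batch(X_batch, y_batch):
+    # each call updates the model in place, without reloading earlier data
+    model.partial_fit(X_batch, y_batch, classes=classes)
+
+train_on_batch(np.array([[1, -5, 20, 63]]), np.array([7]))   # hypothetical single sample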

+",4300,,,,,6/20/2019 13:04,,,,0,,,,CC BY-SA 4.0 +12961,2,,4647,6/20/2019 17:06,,2,,"

In the paper Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For, the authors claim that the ""right to explanation"" is unlikely to provide a complete remedy to algorithmic harms for at least two reasons

+ +
    +
  1. It is unclear when any explanation-related right can be triggered

  2. +
  3. The explanations required by the law, ""meaningful information about the +logic of processing"", might not be provided by current (explainable) AI methods

  4. +
+ +

The authors then conclude that the ""right to explanation"" is distracting, but that other laws of the European Union's General Data Protection Regulation, such as

+ +
    +
  1. right to be forgotten
  2. +
  3. right to data portability
  4. +
  5. privacy by design
  6. +
+ +

might compensate the defects of the ""right to explanation"".

+ +

To conclude, the ""right to explanation"" is an attempt to protect citizens against the possible undesirable consequences of the use of AI methods. However, it is not flawless and clearer rights might need to be promulgated.

+",2444,,,,,6/20/2019 17:06,,,,0,,,,CC BY-SA 4.0 +12963,1,12968,,6/21/2019 5:31,,2,92,"

I would like to create a neural network that converts text into handwriting for use with a pen plotter. Before I start on this project, I'd like to be sure that artificial intelligence is the best way to do this. A problem that I foresee with this approach is a lack of human like variation in the results. For example, the word ""dog"", when inputted into the network, would be the same every time, assuming I'm not missing something. I am interested if there is any way to vary the output of the network in a realistic way, even when the input is exactly the same. Could I use a second network to make the results more random, but also still look human-like? Any thoughts/ideas would be greatly appreciated.

+",26575,,,,,6/21/2019 8:22,How to add variation in the results of a neural networks?,,1,0,,,,CC BY-SA 4.0 +12965,1,,,6/21/2019 6:31,,0,225,"

I have multiple invoice images which I need to classify into invoice types, such as freight, utility, goods, etc. Is there any way to classify them without OCR?

+",26576,,12853,,6/21/2019 8:40,6/21/2019 8:40,Is there any way to classify Document Image without OCR?,,1,1,,2/13/2022 4:58,,CC BY-SA 4.0 +12966,1,,,6/21/2019 7:18,,0,139,"

I have to create a neural network for regression purposes. Basically, I created a model which predicts the next 5 values when we give the past 6 values. + +I want to make a change in this neural network. For example,

+ +

when giving 6 past values I have to predict the next 10 values.

+ +

Here, is there any issue with selecting an output dimension greater than the input dimension? Which type of parameter arrangement makes the neural network achieve good accuracy? Do I always have to make the number of input parameters greater than the number of output parameters?

+ +

Thanks in Advance!

+",24006,,12853,,6/28/2019 4:34,6/28/2019 4:34,Decide Number of input Parameters and Output Parameters - ANN,,1,0,,,,CC BY-SA 4.0 +12967,2,,12966,6/21/2019 8:01,,2,,"

This should be possible given the fact that ANNs have the ability to do the feature engineering and feature selection tasks by themselves.

+ +

This means that given a lesser number of input parameters, the model will be able to generate and select additional features by itself. You will obviously not be able to understand or model these features manually.

+ +

The only thing to keep in mind is that you will need a large dataset and a number of iterations before you are able to achieve a decent accuracy.

+ +

For example, there are networks that can generate images from classes. +Give this a read, and here is an example where the output layer is larger than the input layer.

+",26384,,,,,6/21/2019 8:01,,,,0,,,,CC BY-SA 4.0 +12968,2,,12963,6/21/2019 8:22,,0,,"

I would suggest starting with Generative Adversarial Networks (GAN). They usually are capable of adding some randomness to the output to produce different variants. Moreover, Conditional GANs can generate outcomes regarding the observed condition. Therefore, as you change the condition (as an input to the network) you can get different results.

+ +

Some examples of NN for synthetic handwriting generators:

+ +
    +
  1. https://github.com/sjvasquez/handwriting-synthesis
  2. +
  3. http://blog.otoro.net/2015/12/12/handwriting-generation-demo-in-tensorflow/
  4. +
  5. https://distill.pub/2016/handwriting/
  6. +
+",12853,,,,,6/21/2019 8:22,,,,0,,,,CC BY-SA 4.0 +12969,2,,12965,6/21/2019 8:34,,1,,"

It is possible to classify invoice scans without passing through an OCR component if they are visually different (they demonstrate different visual features). On the other hand, if the invoices look very similar, then the classifier might not be very accurate.

+ +

Another challenge would be the number of images you need to train a deep network for image classification (you may start with pretrained models and only perform the finetuning if you do not have enough images). On the other hand, the combination of pretrained OCR models and an NLP-based document classifier may not need that many samples for training (for this specific task).

+",12853,,,,,6/21/2019 8:34,,,,2,,,,CC BY-SA 4.0 +12970,1,13178,,6/21/2019 9:22,,2,651,"

Neuroevolution can be used to evolve a network's architecture (and weights, of course). Deep reinforcement learning, on the other hand, has been proven to be extremely powerful at optimising the network weights in order to train really well-performing agents. Can we use the following pipeline?

+ +
    +
  • search for the best network topology/weights through neuroevolution
  • +
  • train the best candidate selected above through DQN or something similar
  • +
+ +

This seems reasonable to me, but I haven't found anything on the matter.

+ +

Is there any research work that attempts to combine neuroevolution with deep reinforcement learning? Is it feasible? What are the main challenges?

+",23527,,23527,,6/10/2020 7:20,6/10/2020 7:20,Is there any research work that attempts to combine neuroevolution with deep reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +12971,1,12978,,6/21/2019 9:37,,14,2763,"

I recently got a 18-month postdoc position in a math department. It's a position with relative light teaching duty and a lot of freedom about what type of research that I want to do.

+ +

Previously I was mostly doing some research in probability and combinatorics. But I am thinking of doing a bit more application oriented work, e.g., AI. (There is also the consideration that there is good chance that I will not get a tenure-track position at the end my current position. Learn a bit of AI might be helpful for other career possibilities.)

+ +

What sort of mathematical problems are there in AI that people are working on? From what I have heard, there are people studying

+ + + +

Any other examples?

+",26112,,2444,,6/21/2019 20:30,1/31/2021 13:40,What sort of mathematical problems are there in AI that people are working on?,,3,1,,,,CC BY-SA 4.0 +12972,1,12983,,6/21/2019 9:57,,-1,2846,"

Here I am Showing Two Loss graphs of an Artificial Neural Network.

+ +

Model 1

+ +

+ +

Model 2

+ +

+ +

Blue -training loss

+ +

Red -val training loss

+ +

Can you help me to analyse these graphs? I read some articles and post but doesn't give me any sense.

+",24006,,,,,6/21/2019 20:36,Analysis of Training Loss and Validation Loss Graph,,1,0,,,,CC BY-SA 4.0 +12973,1,13034,,6/21/2019 11:23,,2,294,"

When describing tensors of higher order I feel like there is an overloading of the term dimension as it may be used to describe the order of the tensor but also the dimensionality of the... ""orders""?

+ +

Assume one describes the third-order tensor produced by a convolutional layer and wants to refer to its width and height. Do you say spatial dimensions? Would you write about the channel dimension? Or rather the direction? Saying ""spatial order"" feels really weird. But staying with dimensions makes sentences like ""The spatial dimensions are of equal dimensionality."" (Disclaimer: Obviously you can avoid the issue here by restructuring, but doing this at every occasion does not feel like a satisfactory solution.).

+",26584,,,,,6/25/2019 16:50,Describing the order of a tensor,,2,1,,,,CC BY-SA 4.0 +12974,1,12976,,6/21/2019 12:29,,0,30,"

I am on the hook to measure the prediction results of an object detector. I learned from some tutorials that when testing a trained object detector, for each object in the test image, the following information is provided:

+ +
    <object>
+    <name>date</name>
+    <pose>Unspecified</pose>
+    <truncated>0</truncated>
+    <difficult>0</difficult>
+    <bndbox>
+        <xmin>451</xmin>
+        <ymin>182</ymin>
+        <xmax>695</xmax>
+        <ymax>359</ymax>
+    </bndbox>
+</object>
+
+ +

However, it is still unclear to me 1) how this information is used by the object detector to measure the accuracy, and 2) how the ""loss"" is computed in this case. Is it something like a strict comparison? For instance, if, for the object ""date"", I got the following outputs:

+ +
    <object>
+    <name>date</name>
+    <pose>Unspecified</pose>
+    <truncated>0</truncated>
+    <difficult>0</difficult>
+    <bndbox>
+        <xmin>461</xmin>  <---- different
+        <ymin>182</ymin>
+        <xmax>695</xmax>
+        <ymax>359</ymax>
+    </bndbox>
+</object>
+
+ +

Should I then believe that my object detector did something wrong? Or is some small delta tolerated, such that if the bounding box drifts slightly, it is still acceptable, but if the ""label"" is totally wrong, then it's counted as wrong for sure?

+ +

This is like a ""blackbox"" to me and it would be great if someone can shed some lights on this. Thank you.

+",25973,,,,,6/21/2019 14:16,From what aspect to measure the performance of an object detector?,,1,0,,,,CC BY-SA 4.0 +12976,2,,12974,6/21/2019 14:16,,1,,"

This is a high-level explanation. Most object detectors are deep neural networks that return a set of boxes:

+ +

box1 : coordinates : confidence
box2 : coordinates : confidence
etc...

+ +

and the loss is computed differently from model to model. But I think you're specifically curious about how most of them do box comparison and loss. Generally, they first check whether the box is even relevant to the target. To do this, they often calculate the Intersection over Union (IoU), also known as the Jaccard index (wikipedia link). This checks how overlapped the boxes are, based on the box sizes. Now, if it's above some threshold (example: if the boxes have IoU over 0.5, calculate a loss, otherwise the loss is just 0), they will compute some objective. Objectives can differ, but ideally it's something differentiable. Examples include vectorizing and approximating the IoU, dice loss, etc.

+ +
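
As an illustration (my own minimal sketch, not taken from the linked tutorial), the IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax) can be computed like this:

def iou(box_a, box_b):
    # Intersection over Union of two boxes given as (xmin, ymin, xmax, ymax)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# Ground truth vs. prediction from the question above: only xmin differs (451 vs 461)
print(iou((451, 182, 695, 359), (461, 182, 695, 359)))   # ~0.96, well above a 0.5 threshold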

After a quick Google search, here's a good tutorial explaining some components commonly used in the field (region proposals, feature extraction, etc.): [guide]

+",25496,,,,,6/21/2019 14:16,,,,0,,,,CC BY-SA 4.0 +12978,2,,12971,6/21/2019 14:48,,14,,"

In artificial intelligence (sometimes called machine intelligence or computational intelligence), there are several problems that are based on mathematical topics, especially optimization, statistics, probability theory, calculus and linear algebra.

+

Marcus Hutter has worked on a mathematical theory for artificial general intelligence, called AIXI, which is based on several mathematical and computation science concepts, such as reinforcement learning, probability theory (e.g. Bayes' theorem and related topics), measure theory, algorithmic information theory (e.g. Kolmogorov complexity), optimisation, Solomonoff induction, universal Levin search and the theory of computation (e.g. universal Turing machines). His book Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability, which is a highly technical and mathematical book, describes his theory of optimal Bayesian non-Markov reinforcement learning agents. Here I list other similar works.

+

There is also the research field called computational learning theory, which is devoted to studying the design and analysis of machine learning algorithms (from a statistical perspective, known as statistical learning theory, or algorithmic perspective, algorithmic learning theory). More precisely, the field focuses on the rigorous study and mathematical analysis of machine learning algorithms using techniques from fields such as probability theory, statistics, optimization, information theory and geometry. Several people have worked on the computational learning theory, including Michael Kearns and Vladimir Vapnik.

+

There is also a lot of research effort dedicated to approximations (heuristics) of combinatorial optimization and NP-complete problems, such as ant colony optimization.

+

There is also some work on AI-completeness, but this has not received much attention (compared to the other research areas mentioned above).

+",2444,,2444,,1/31/2021 13:40,1/31/2021 13:40,,,,0,,,,CC BY-SA 4.0 +12979,1,,,6/21/2019 14:52,,3,368,"

I have a question about search and planning: I still haven't understood the difference between the two, but they seem very similar to me; here is a question I am struggling with:

+ +
+

""Having formulated a PDDL problem, transform it into research, + emphasizing what the differences are.""

+
+ +

Can someone give an example?

+ +

I attached an example of a simple PDDL problem from my book (I'm using Russell & Norvig).

+",21719,,2444,,2/6/2021 13:36,2/6/2021 13:36,How to transform a PDDL to search?,,1,1,,,,CC BY-SA 4.0 +12980,2,,12971,6/21/2019 15:27,,5,,"

Most of the math work being done in AI that I'm familiar with is already covered in nbro's answer. One thing that I do not believe is covered yet in that answer is proving algorithmic equivalence and/or deriving equivalent algorithms. One of my favourite papers on this is Learning to Predict Independent of Span by Hado van Hasselt and Richard Sutton.

+ +

The basic idea is that we may first formulate an algorithm (in math form, for instance some update rules/equations for parameters that we're training) in one way, and then find different update rules/equations (i.e. a different algorithm) for which we can prove that it is equivalent to the first one (i.e. always results in the same output).

+ +

A typical case where this is useful is if the first algorithm is easy to understand / appeals to our intuition / is more convenient for convergence proofs or other theoretical analysis, and the second algorithm is more efficient (in terms of computation, memory requirements, etc.).

+",1641,,,,,6/21/2019 15:27,,,,0,,,,CC BY-SA 4.0 +12982,2,,9474,6/21/2019 16:17,,1,,"

A* is an informed search algorithm. A* is informed because it is based on the use of a heuristic function, which estimates the distance of each node to the goal, that is, the heuristic function provides information about the distance from any node to the goal node. The heuristic function can e.g. be the Euclidean distance (in case this can be defined).

+ +

More precisely, at each step, A* needs to select the next node to explore. It chooses the node $n$ with the smallest value of $$f(n) = g(n) + h(n),$$ where $g(n)$ is the actual distance from the starting node to node $n$ and $h(n)$ is the heuristic function that estimates the distance from node $n$ to the goal node.

+ +
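
To make the selection rule concrete, here is a minimal sketch of A* (my own illustration; the adjacency-dictionary graph representation and the heuristic function h are assumptions, not part of the original question):

import heapq

def a_star(graph, h, start, goal):
    # graph: dict mapping node -> list of (neighbour, edge_cost)
    # h(n): heuristic, i.e. estimated distance from node n to the goal
    frontier = [(h(start), 0, start, [start])]   # entries are (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # node with the smallest f(n)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h(neighbour), new_g, neighbour, path + [neighbour]))
    return None, float('inf')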

For example, suppose that we want to find the shortest path from Paris to Madrid. If you are already in Madrid, then you can estimate that the distance from Madrid to Madrid is zero, so $h(\text{Madrid}) = 0$. However, if you're still in Paris, what is the estimate of the distance from Paris to Madrid? We can e.g. pick up a map of the world and choose $h(\text{Paris})$ to be the length of the straight segment that goes from Paris to Madrid. Similarly, we can do this for other intermediate cities. There are other ways of estimating this distance, but this is an understandable one, given that people are usually familiar with the Euclidean distance. (This example is a little bit misleading because the Earth is not flat and so the Euclidean geometry does not really apply, but, for simplicity, you can ignore this).

+ +

For info regarding the completeness and optimality of A*, have a look at https://ai.stackexchange.com/a/8907/2444.

+",2444,,2444,,6/21/2019 20:25,6/21/2019 20:25,,,,0,,,,CC BY-SA 4.0 +12983,2,,12972,6/21/2019 20:36,,2,,"

Simply put, model 2 is a better fit compared to model 1.

+ +
    +
  • Graph for model 1

    + +

    We notice that the training loss and validation loss aren't correlated. This means that, as the training loss decreases, the validation loss remains the same or increases over the iterations. This means that the model is not really improving, but is instead overfitting the training data. This isn't what we are looking for.

  • +
  • Graph for model 2

    + +

    In this case, there is clearly a healthy correlation between the training loss and the validation loss. They both decrease and then stay at a roughly constant value. This means that the model is well trained and performs equally well on the training data and on the unseen data.

  • +
+ +

You should stick with model 2. In case you're going ahead with model 1, make sure to use the checkpoint where both losses are at a similar value (at around 100-150 epochs).

+",26384,,,,,6/21/2019 20:36,,,,1,,,,CC BY-SA 4.0 +12984,2,,12973,6/21/2019 21:22,,0,,"

By definition, tensors can be of any order (usually named differently if the order is less than three). So, I use $d_i$ to indicate the dimensionality of the $i$th facet.

+ +

Unless you have third- or fourth-order tensors in which each facet has a very specific meaning, naming the orders by terms such as spatial or time would be limiting.

+",12853,,,,,6/21/2019 21:22,,,,0,,,,CC BY-SA 4.0 +12985,1,12994,,6/22/2019 5:53,,1,848,"

I am quite new to Deep Reinforcement Learning, and I'm trying to define states in a Reinforcement Learning problem. The environment consists of multiple identical elements, and each one of them is characterized by different features of the same type. In other words, let us say we have $e_0$, $e_1$, and $e_2$. Then, suppose that each one is characterized by features $f_0$ and $f_1$, where $f_0$ belongs to $[0, 1]$, and $f_1$ belongs to $\{0, 1, 2, 3, 4, 5\}$. Then, $e_0$ will have some value for the features $f_0$ and $f_1$, and the same goes for $e_1$ and $e_2$.

+

How can I encode such states?

+

Can I simply vectorize such state by concatenating the different features of each element obtaining $[f_{0e_0}, f_{1e_0}, f_{0e_1}, f_{1e_1}, f_{0e_2}, f_{1e_2}]$, or should I use a convolutional architecture of some sort?

+",26605,,2444,,10/31/2020 15:26,10/31/2020 15:26,"How can I encode states where the environment consists of multiple identical elements, but each is characterised by different features?",,1,7,,,,CC BY-SA 4.0 +12986,1,,,6/22/2019 6:55,,1,28,"

I am trying to train an SVR, but I found that, with some combinations of features, the trained SVR predicts every point in the test set to the same value. This problem occurs much more often when I use a linear kernel than other kernels. The parameters are: C=1, gamma=0.5. My question is: what leads to this kind of problem? Is there a name for this phenomenon? Thank you!

+",26608,,,,,6/22/2019 6:55,why my regression model predict every datapoint to the same value,,0,1,,,,CC BY-SA 4.0 +12987,1,,,6/22/2019 7:15,,1,82,"

I am trying to understand the average precision (AP) metrics in evaluating the performance of deep-learning based object detection models. Suppose we have the following ground truth (four objects highlighted by four blue arrows):

+ +

+ +

where we have labelled four objects:

+ +
person 25 16 38 56
+person 129 123 41 62
+kite 45 16 38 56
+kite 169 123 41 62
+
+ +

And when feeding the above image to an object detector, it gives the following outputs:

+ +

+ +

It's easy to see that the object detector identified another object with low confidence:

+ +
person 0.4 25 16 38 56
+person 0.2 129 123 41 62
+kite 0.3 45 16 38 56
+kite 0.5 169 123 41 62
+kite 0.1 769 823 141 162 <-------- a ""kite""
+
+ +

In my humble opinion, this is an erroneous behavior of the object detector, which should be counted as a ""false positive"".

+ +

However, since the ""kite"" has a quite low confidence score (0.1), when using the standard mAP algorithm to compute the performance, I got the following output (I am using code from here to compute the mAP):

+ +
AP: 100.00% (kite)
+AP: 100.00% (person)
+mAP: 100.00%
+
+ +

So here are my questions and confusions:

+ +
    +
  1. From what kind of design intention is AP designed in such a way that objects with a low confidence score are ignored, and therefore, in this case, we pass with flying colors?

  2. +
  3. Are there any metrics that can take this extra ""kite"" into consideration and would therefore count one ""false positive"" for the object detection model? I am just thinking that, in this way, we can further proceed to improve the accuracy of this model during training.

  4. +
+",25973,,,,,6/22/2019 7:15,Understanding average precision (AP) in measuring object detector performance,,0,0,,,,CC BY-SA 4.0 +12988,2,,12971,6/22/2019 7:44,,3,,"

Specifically for the mathematical apparatus of neural networks - random matrix theory. Non-asymptotic random matrix theory has been used in some proofs of convergence of gradient descent for neural networks; high-dimensional random landscapes, in connection with the Hessian spectrum, are related to the loss surfaces of neural networks.

+ +

Topological data analysis is another area of intense research related to ML and AI, and it has been applied to neural networks.

+ +

There have also been some works on the tropical geometry of neural networks.

+ +

Homotopy type theory also has connections to AI.

+",22745,,,,,6/22/2019 7:44,,,,0,,,,CC BY-SA 4.0 +12989,1,,,6/22/2019 8:09,,2,118,"

I'm using a restricted Boltzmann machine (RBM) as an autoencoder. For now, I use a simple architecture of two layers, the input (~100 nodes) and the output (3 nodes) layers. I'm thinking of adding more hidden layers.

+ +

Are there some improvements in encoding by adding multiple hidden layers? If yes, how can multiple layers improve the encoding?

+",26577,,2444,,5/18/2020 12:19,5/18/2020 12:19,Does the encoding of a restricted Boltzmann machine improve with more layers?,,1,1,,,,CC BY-SA 4.0 +12990,1,,,6/22/2019 12:44,,2,258,"

One of, if not the, most popular programming languages for data science and AI today is Python, with R being a frequently cited runner-up. However, both of them are interpreted languages, which do not execute as fast as compiled languages. Why is that the case? The main advantage of AI over humans is computing speed, and with real-world AI applications today handling big data, execution time already increases considerably as it is. Why aren't compiled languages preferred for this very reason?

+ +

Sure, the key argument and strength going for Python is its vast range of third-party libraries available for AI, like scikit-learn, etc., but such communities can take root and grow anywhere under the right circumstances. Why did this community end up growing around Python and not a faster, equally common compiled language like C++, C# or Java?

+",23844,,,,,6/22/2019 19:02,Why aren't compiled languages as popular as Python in AI?,,1,2,,5/4/2020 12:20,,CC BY-SA 4.0 +12991,1,12995,,6/22/2019 13:44,,2,2935,"

While studying artificial intelligence, I have often encountered the term ""agent"" (often autonomous, intelligent). For instance, in fields such as Reinforcement Learning, Multi-Agent Systems, Game Theory, Markov Decision Processes.

+ +

In an intuitive sense, it is clear to me what an agent is; I was wondering whether in AI it had a rigorous definition, perhaps expressed in mathematical language, and shared by the various AI-related fields.

+ +

What is an agent in Artificial Intelligence?

+",23527,,2444,,12/12/2021 12:26,12/12/2021 15:46,What is an agent in Artificial Intelligence?,,1,0,,,,CC BY-SA 4.0 +12992,2,,12990,6/22/2019 13:51,,2,,"

There are a few advantages of interpreted languages (compared to compiled languages)

+ +
    +
  • platform independence (you only need the interpreter for your platform, even though this is not true if e.g. your interpreted language is only a wrapper library around a library written in another programming language)
  • +
  • dynamic typing (no need to specify the types of the variables)
  • +
  • dynamic scoping (e.g. you can access variables in other scopes)
  • +
  • automatic memory management (but there are compiled languages, like Java, that also have a garbage collector)
  • +
  • Rapid prototyping (for various reasons, including dynamic typing), hence software can be written more quickly
  • +
+ +

Hence, the main advantages of an interpreted language compared to a compiled language are flexibility and dynamism. Given that AI is still an evolving field, these characteristics are widely appreciated.

+ +

There is also at least one disadvantage of interpreted languages (compared to compiled languages)

+ +
    +
  • Slower running times compared to compiled languages, which, once compiled, are quite fast, because they are often compiled to a code that is quickly executable by the machine or virtual machine
  • +
+ +

Python and R are widely used in data science and artificial intelligence because of the advantages above, which possibly contributed to the rapid growth of the communities around them and the development of software libraries.

+ +

However, note that the core of the most common machine learning libraries today, including TensorFlow and PyTorch, is written in a compiled language like C and C++. In the specific case of TensorFlow, Python is just a wrapper library. Consequently, under the hood, the code is not executed by the Python interpreter, but first compiled, which implies that, when you're using e.g. TensorFlow, your code will run (more or less) as fast as if you were using a compiled language like C. A similar argument can be made for libraries like NumPy, where Python is just a wrapper library.

+",2444,,2444,,6/22/2019 19:02,6/22/2019 19:02,,,,2,,,,CC BY-SA 4.0 +12994,2,,12985,6/22/2019 15:21,,1,,"

For general advice about state representation, you could check my answer to How to define states in reinforcement learning? - this does not cover your specific issue, but may help with other details such as whether to one-hot-encode and/or scale your discrete features. Also you will want to assess whether any state vector you construct actually contains enough data for reinforcement learning to work.

+ +

Assuming that is all good, then your approach here:

+ +
+

Can I simply vectorize such state by concatenating the different features of each element obtaining $[f_{0e_0}, f_{1e_0}, f_{0e_1}, f_{1e_1}, f_{0e_2}, f_{1e_2}]$

+
+ +

should work with a generic feed-forward neural network with two or more layers. As this is simple to construct, then it is a reasonable place to start just considering development time. If nothing else it can be a benchmark against other representations and model architectures that you may want to try.

+ +
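
As a minimal sketch of that flattened representation (my own illustration; one-hot encoding the discrete feature $f_1$ is an assumption, see the answer linked above for other options):

import numpy as np

def encode_state(entities):
    # entities: list of (f0, f1) pairs, with f0 in [0, 1] and f1 in {0, ..., 5}
    parts = []
    for f0, f1 in entities:
        one_hot = np.zeros(6)
        one_hot[f1] = 1.0
        parts.append(np.concatenate(([f0], one_hot)))
    return np.concatenate(parts)   # shape: (num_entities * 7,)

state = encode_state([(0.3, 2), (0.7, 0), (0.1, 5)])
print(state.shape)   # (21,)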
+

should I use a convolutional architecture of some sort?

+
+ +

That will depend on any structure inherent in your environment. The interesting thing about your description of the environment is the repetition of meaning in the values. Each entity $e_i$ appears multiple times, and the units of feature $f_{0e_0}$ will be the same as $f_{0e_1}$.

+ +

An unfortunate consequence of using a flattened vector is that the information about these repeated entities and their units is lost to the system. Any important details due to the entities being essentially the same must be ""rediscovered"" by the approximator by training with many examples. Although neural networks can generalise function results, this does not apply to generalising across different parts of the input unless you choose an architecture designed to help with that.

+ +

Typically you would use a RNN architecture (such as LSTM) if key information is in the sequence order of entities, or a CNN architecture if key information is in local patterns between neighbouring entities. By ""key information"" I mean whether useful relationship to a value function or optimal policy depends on those factors. This might be difficult to assess for some environments with mixed data types and complex meanings for the entity ids - so you may have to experiment with multiple architectures to figure out what factor is more important. However, it is also possible you have some insight based on known good policies, so can head more directly to a promising feature representation and architecture.

+ +

Learning may benefit if, instead of ordering your entities arbitrarily by identity in the vector, you sort them according to some factor that is relevant to the goals of your agent. I have used this in a colour-matching environment where an agent had to hit a target with a ""current"" colour selection whilst avoiding incorrect colours - sorting target entities by whether they matched the colour, and then by angular distance to the agent's aim, enabled the agent to learn its task far more efficiently. This sorting may help independently of architecture choice - i.e. you don't need to be using an RNN or CNN to see a benefit with this approach.

+ +

Another thing worth considering, but probably not possible here because of the continuous feature $f_0$, is whether you can invert the representation so that instead of being based on entities with properties, it is a view over property space populated with entities in certain positions. A CNN processed over a small enough ""property space"" instead of ""entity space"" may well perform better - this is why board game RL tends to treat the board positions as defining feature vector indexes which represent which entities are present on them, which is a very sparse structure that can represent far more states than ever occur in real games. The over-specified space works because it can be matched to architectures that generalise efficiently over it.

+ +

In general, you need to consider different types of symmetry within the system - especially what is meaningful about the id number you have assigned to each entity. If the entity id is not meaningful, or is secondary to some other factor, you have some flexibility in changing the representation to make it meaningful and use architectures that take advantage of how you have injected that domain knowledge into the q function approximator. If the id is meaningful, think how could you best represent the difference between entities with different ids in your environment?

+",1847,,1847,,6/23/2019 7:30,6/23/2019 7:30,,,,0,,,,CC BY-SA 4.0 +12995,2,,12991,6/22/2019 16:20,,4,,"

The acclaimed book Artificial Intelligence: A Modern Approach (by Stuart Russell and Peter Norvig) gives a definition of an agent

+
+

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

+
+

This definition is illustrated by the following figure

+

+

This definition (and illustration) of an agent thus does not seem to include the agent as part of the environment, but this is debatable and can be a limitation, given that the environment also includes the agent.

+

According to this definition, humans, robots, and programs are agents. For example, a human is an agent because it possesses sensors (e.g. the eyes) and actuators (e.g. the hands, which, in this case, are also sensors) and it interacts with an environment (the world).

+

A percept (or perception) is composed of all perceptual inputs of the agents. The specific definition of the percept changes depending on the specific agent. For example, in the case of a human, the percept consists of all perceptual inputs from all sense organs of the human (eyes, ears, tongue, skin, and nose). In the case of a robot only equipped with a camera, the percept consists only of the camera frame (at a certain point in time). A percept sequence is a sequence of percepts.

+

An action is anything that has an effect on the environment. For example, in the case of a legged robot, an action can be "move forward".

+

An action is chosen by the agent function (which is illustrated by the white box with a black question mark in the figure above), which can also be called policy. The agent function highly determines the intelligent or intellectual capabilities of the agent and differentiates it from other agents.

+

Therefore, there are different agents depending on the sensors and actuators they possess, but, more importantly, depending on their policy, which highly affects their intellectual characteristics. A possible categorization of agents is

+
    +
  • rational agents do the "right" thing (where "right", of course, depends on the context)

    +
  • +
  • simple reflex agents select actions only based on the current percept (thus ignoring previous percepts)

    +
  • +
  • model-based reflex agents build a model of the world (sometimes called a state) that is used to deal with cases where the current percept is insufficient to take the most appropriate action

    +
  • +
  • goal-based agents possess some sort of goal information that describes situations that are desirable; for example, in the case of a human, a situation that is desirable is to have food

    +
  • +
  • utility-based agents associate value with certain actions more than others; for example, if you need immediate energy, chocolate might have more value than some vegetable

    +
  • +
  • learning agents update their e.g. model based on the experience or interaction with the environment

    +
  • +
+

More details regarding these definitions can be found in section 2 of the book mentioned above (3rd edition). However, note that there are other possible categorizations of agents.

+

A reinforcement learning (RL) agent is an agent that interacts with an environment and can learn a policy (a function that determines how the agent behaves) or value (or utility) function (from which the policy can be derived) from this interaction, where the agent takes an action from the current state of the environment, and the environment emits a percept, which, in the case of RL, consists of a reinforcement (or reward) signal and the next state. The goal of the RL agent is to maximize the cumulative reward (or reinforcement) signal. An RL agent can thus be considered a rational, goal, utility-based, and learning agent. It can also be (or not) a simple reflex and model-based agent.

+",2444,,2444,,12/12/2021 15:46,12/12/2021 15:46,,,,0,,,,CC BY-SA 4.0 +12996,2,,8707,6/22/2019 17:14,,1,,"

A learning agent can be defined as an agent that, over time, improves its performance (which can be defined in different ways depending on the context) based on the interaction with the environment (or experience).

+

The human is an example of a learning agent. For example, a human can learn to ride a bicycle, even though, at birth, no human possesses this skill.

+

Section 2.4.6 Learning agents (p. 54) of the AIMA book (3rd edition), by Norvig and Russell, define a learning agent as follows.

+
+

A learning agent can be divided into four conceptual components, as shown in Fig 2.15.

+
+

+

The four components are

+
    +
  1. learning element: makes improvements to the performance element (an example would be Q-learning)
  2. +
  3. performance element: chooses the actions to take in the environment (this is analogous to a model, e.g. a neural network, that contains the knowledge or rules to act in the environment)
  4. +
  5. critic: provides feedback (based on some performance metric) to the learning element in order for it to improve the performance element (so this is how you evaluate the potential improvements)
  6. +
  7. problem generator: suggests actions that will lead to new informative experiences (this would be a behavior policy in reinforcement learning)
  8. +
+

At first glance, this definition might seem unrelated to the definition given above, but they are equivalent. Norvig and Russell's definition of a learning agent builds on top of their definition of an agent. Moreover, as I wrote above, if you are familiar with RL, these four components can be associated with common concepts in reinforcement learning (such as Q-learning, value function/policy, target, behavior policy). The book just uses different names to refer to the same or similar concepts.

+

To make the definition clearer, Norvig and Russell also provide an example

+
+

To make the overall design more concrete, let us return to the automated taxi example. The performance element consists of whatever collection of knowledge and procedures the taxi has for selecting its driving actions. The taxi goes out on the road and drives, using this performance element. The critic observes the world and passes information along to the learning element. For example, after the taxi makes a quick left turn across three lanes of traffic, the critic observes the shocking language used by other drivers. From this experience, the learning element is able to formulate a rule saying this was a bad action, and the performance element is modified by installation of the new rule. The problem generator might identify certain areas of behavior in need of improvement and suggest experiments, such as trying out the brakes on different road surfaces under different conditions.

+
+

This answer provides more definitions of the machine learning field and an ML algorithm, which is not exactly the same thing as a learning agent, given that the concept of an agent also implies or emphasizes the usage of a body with sensors, actuators, and an agent program (which converts the observations into actions), but the definitions given in this answer and in the other answer are consistent with each other.

+",2444,,2444,,12/12/2021 15:47,12/12/2021 15:47,,,,0,,,,CC BY-SA 4.0 +12997,1,,,6/22/2019 17:43,,1,831,"

I'm using a DQN Algorithm to play Snake.

+ +

The input of the neural network is a stack of 4 images taken from the game, each 80x80.

+ +

The output is an array of 4 values, one for every direction.

+ +

The problem is that the program does not converge, and I have a lot of doubts about the replay function, where I train the neural network over a batch of 32 events.

+ +

That's the snippet:

+ + + +
def replay(self, batch_size):
+
+    minibatch = random.sample(self.memory, batch_size)
+
+    for state, action, reward, next_state, done in minibatch:
+
+        target = reward
+
+        if not done:
+            target = (reward + self.gamma *
+                      np.amax(self.model.predict(next_state)[0]))
+        target_f = self.model.predict(state)
+        target_f[0][action] = target
+        self.model.fit(state, target_f, epochs=1, verbose=0)
+
+    if self.epsilon > self.epsilon_min:
+        self.epsilon *= self.epsilon_decay
+
+ +

Targets are:

+ +
    +
  • +1 for eating an apple
  • +
  • 0 for doing a movement without dying
  • +
  • -1000 for hitting a wall or the snake hitting himself
  • +
+",26623,,26384,,6/22/2019 19:04,5/3/2023 23:04,Problem over DQN Algorithm not converging on snake,,1,5,,,,CC BY-SA 4.0 +13000,2,,12997,6/22/2019 18:56,,0,,"

I think the main issue here is that you are trying to train the snake (network) on images. This will create a lot of issues, as there are no set parameters that the model can learn from.

+ +

From images, there is no logical way to define the boundary, directions and objects on the board. It will be much easier to write a simple computer vision script or game API to provide actual meaningful inputs to the model.

+ +

Here is a great article on building a model to play the snake game. The author also provides the game API for input along with example code to train the snake game.

+ +

Final results from the model

+ +

+",26384,,,,,6/22/2019 18:56,,,,2,,,,CC BY-SA 4.0 +13003,1,,,6/23/2019 0:18,,1,22,"

CNNs are often used in one of the following scenarios:

+ +
    +
  1. A known-sized image is encoded to an intermediate format for later use
  2. +
  3. An intermediate or precursor format is decoded into a known-sized image
  4. +
  5. An image is converted into a same-size image
  6. +
+ +

(Usually 3 is done by sticking together 1 and 2.)

+ +

Are there any papers dealing with convolutional techniques where the image sizes vary? Not only would the size of input X differ from input Y, but also input X may differ in size from output Y. The total amount of variation can probably be constrained by the statistics of the dataset, but knowledge of input X does not grant a priori knowledge of the size of output Y.

+ +

(Masking is an obvious solution, but I am hoping for something more elegant if research already exists. The problem domain need not be images.)

+",15020,,,,,6/23/2019 1:49,Convolutional Neural Networks for different-sized Source and Target,,1,1,,,,CC BY-SA 4.0 +13004,2,,13003,6/23/2019 1:49,,2,,"

Yes actually. There have been quite a few different adaptations to convolutional neural networks to do precisely what you are describing.

+ +

Here is an earlier one. See section 3.2.5

+ +

Here, He et al. create a method known as Spatial Pyramid Pooling (SPP). In this method, you are able to construct a fixed-length representation regardless of the input size by pooling the feature maps into ""spatial bins"" which are proportional to the input size, and thus do not need to modify the input dimensions.

+ +
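
To illustrate the general idea, here is a minimal sketch (my own, using PyTorch's adaptive average pooling as a simple stand-in for the SPP layer of the paper): an adaptive pooling layer produces a fixed-size output whatever the spatial size of its input, so a fixed-size classifier head can follow convolutional layers fed with variable-sized images:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
pool = nn.AdaptiveAvgPool2d((4, 4))    # always outputs 16 x 4 x 4 features, whatever the input size
head = nn.Linear(16 * 4 * 4, 10)

for h, w in [(64, 64), (100, 180), (37, 53)]:
    x = torch.randn(1, 3, h, w)
    features = pool(conv(x)).flatten(1)
    print(features.shape, head(features).shape)   # torch.Size([1, 256]) torch.Size([1, 10])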

There are a few newer methods that improve on this in various capacities. As usual, your solution will depend on the problem and other situation-dependent constraints. I suggest you dig deeper into the literature to find the optimal solution for your case.

+",9608,,,,,6/23/2019 1:49,,,,2,,,,CC BY-SA 4.0 +13005,2,,12953,6/23/2019 2:39,,1,,"

Quite an interesting question; this sort of computational representation of biological systems is at the forefront of what we are trying to accomplish algorithmically.

+ +

This can be tackled via a variety of coordinate frames and problem formulations and could be thought of quite philosophically, as it pokes at some deep questions such as free-will and consciousness more generally.

+ +

For example, we could take the assumption that, based on everything we know physically, the universe is completely deterministic. In this case, implementing deterministic functions for these variables makes quite a bit of sense and seems like the obvious choice. However, we can easily imagine that, for at least some of these dynamic variables, a stochastic representation would be both simpler (computationally) and give better approximations to what we see in the real world.

+ +

So it really is a question of how much value you see from implementing a system like this, and which representation you think would be most effective.

+",9608,,,,,6/23/2019 2:39,,,,1,,,,CC BY-SA 4.0 +13006,2,,12926,6/23/2019 2:53,,1,,"

Without digging into your diagnostics more deeply, on its face this seems like a local optimum issue. Assuming you are optimizing via GD, there are many local optima that a network (or agent, in this case) can converge to and stay at, which will cause symptoms like the ones seen above.

+ +

With that being said, assuming this is our issue, here are some things you can try:

+ +
    +
  1. Regularization: try adding dropout or L2 regularization and see how that affects convergence and learning (see the sketch after this list).

  2. +
  3. Adjust network architecture, number of layers, nodes, etc.

  4. +
  5. Try a different type of RL (Q-learning, for example); this will be dependent on your problem, of course.

  6. +
  7. Adjust starting seed. Assuming you have a static seed used for weight initialization, you will always converge to the same solution. It could be as simple as adjusting the seed value.

  8. +
+ +
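
As a minimal illustration of point 1 (my own sketch, assuming a PyTorch network; the layer sizes and hyperparameters are placeholders):

import torch.nn as nn
import torch.optim as optim

net = nn.Sequential(
    nn.Linear(8, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),    # randomly zeroes 20% of the activations during training
    nn.Linear(64, 4),
)
optimizer = optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)   # weight_decay acts as an L2 penalty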

If all these steps fail, you likely have a deeper issue at work, and I would suggest coming back after with some additional detail if this does not succeed.

+",9608,,,,,6/23/2019 2:53,,,,0,,,,CC BY-SA 4.0 +13007,1,,,6/23/2019 6:52,,1,46,"

Think of an Angry Birds kind of game. You need to hit a target at some point by adjusting the angle and power. There is an infinite number of parabolas that will solve this problem.

+ +

My problem is not exactly that, but similar; it also has an infinite number of solutions. Could anyone please suggest how I should approach this kind of problem using machine learning?

+",26632,,,,,6/23/2019 6:52,How to approach a problem with infinite solutions,,0,3,,,,CC BY-SA 4.0 +13008,1,,,6/23/2019 14:44,,1,56,"

Generally, we come across terms such as high-frequency and low-frequency filters in convolutional neural networks (CNNs). With regard to the highlighted statement in the 'S1' section of this paper by Jason Yosinski (ref 1), I thought that, in order for high- and low-frequency filters to produce a similar effect, the weights of low-frequency filters should be greater than those of high-frequency filters. I would like to understand why I am wrong, and I would be grateful if anyone could elaborate on high- and low-frequency filters in CNNs, in general or in this context. Thank you.

+ +

Ref 1: Yosinski, Jason, et al. ""Understanding neural networks through deep visualization."" arXiv preprint arXiv:1506.06579 (2015).

+",26639,,,,,6/23/2019 14:44,How High and Low frequency filters effect activation in the next layer?,,0,0,,,,CC BY-SA 4.0 +13009,1,,,6/23/2019 22:06,,1,2283,"

I need to develop a convolutional neural network whose inputs are 1-channel images, but I don't know how to do it, given that most libraries use 3-channel images. Should I convert my images to RGB? Is there any way to implement a CNN that receives 1-channel images as input?

+",26644,,2444,,6/24/2019 0:09,11/21/2019 4:04,How can I use 1-channel images as input to a CNN?,,2,0,,,,CC BY-SA 4.0 +13010,1,,,6/23/2019 23:00,,2,339,"

I have a bunch of images from different trucks passing the road. Here is an example.

+

+

The truck needs to be at a certain distance from the border of the lane. Some of the trucks are way too close to the border (which you can see on the shoulder of the road).

+

I want to find a way to measure the distance between the truck and the border of the lane and, more importantly, to detect whether a truck is inside its lane.

+

I would like to solve this problem by training a deep learning-based classifier or by using image processing techniques. Painting the ground is also possible if I can train a classification algorithm with painted images.

+",20025,,2444,,12/11/2020 15:29,12/11/2020 15:29,How do I determine whether a truck is inside its lane?,,2,1,,,,CC BY-SA 4.0 +13011,5,,,6/23/2019 23:58,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Digital_image_processing.

+",2444,,2444,,6/23/2019 23:58,6/23/2019 23:58,,,,0,,,,CC BY-SA 4.0 +13012,4,,,6/23/2019 23:58,,0,,For questions related to image processing (in the context of AI).,2444,,2444,,6/23/2019 23:58,6/23/2019 23:58,,,,0,,,,CC BY-SA 4.0 +13013,2,,13009,6/24/2019 0:20,,1,,"

The libraries should allow you to specify the number of input channels of the convolutional layer, so nothing should prevent you from passing 1-channel images as input to a CNN. For example, in PyTorch, you can specify the number of input channels of the Conv2d object.

+ +
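
As a minimal PyTorch sketch:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(8, 1, 28, 28)    # a batch of 8 single-channel 28x28 images
print(conv(x).shape)             # torch.Size([8, 16, 28, 28])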

If your library does not provide such a feature, you could convert your 1-channel images to 3-channel images, where e.g. all 3 channels of each image are equal to the only channel of the corresponding original 1-channel image.

+",2444,,,,,6/24/2019 0:20,,,,0,,,,CC BY-SA 4.0 +13014,2,,3176,6/24/2019 1:12,,1,,"

These are correct. Artificial implies that it runs on artificially made hardware. There is no reason to distinguish it from the natural processes that do the same.

+ +

Further, the term intelligence is not really precise. What is more/less intelligent, or has or does not have intelligence, among: Mowgli, a monkey, a crow, a common game bot, whatever? The main thing is that some learning on data happens here.

+ +

The best alternative would be machine learning, but again, the fact that a machine (i.e. artificially made stuff) does it is irrelevant.

+ +

So my definition is: Algorithmic Learning.

+",25836,,,,,6/24/2019 1:12,,,,0,,,,CC BY-SA 4.0 +13015,2,,13009,6/24/2019 1:12,,1,,"

If you look at the theory of CNNs, the number of channels in the input layer is also a parameter that the user can decide. In fact, if you are working on monochrome (black & white) images, you have to use only one channel in the input layer. All the libraries should provide a way to design an input layer with the number of channels as an option. But, if you are trying to use transfer learning by using some existing model, then you may be restricted to a 3-channel input because the model is designed so.

+ +

There is no need to convert your images to RGB; surely there should be a way, provided by the library you are using, to use only a 1-channel input. Or, if you are writing your own code, you can design your CNN with only a 1-channel input.

+ +

3 channels is not a rule in CNN

+",20760,,,,,6/24/2019 1:12,,,,0,,,,CC BY-SA 4.0 +13016,2,,96,6/24/2019 1:22,,1,,"

General structure of an Artificial Neural Network

+ +

Input Layer + Hidden Layers + Output Layer

+ +

If there are more hidden layers in the artificial neural network, then the neural network is called a deep neural network. How many exactly constitute a deep neural network is a point of debate, but, in general, the more hidden layers there are, the deeper the neural network is.

+ +

Coming to why they are so popular or important: many problems, like object detection, classification, face recognition and speech recognition, got solved with the advent of deep neural networks. It's not an exaggeration to say that the performance of deep neural networks has even crossed human performance in many of the above-mentioned tasks. That means that a computer is now better at doing the above tasks than humans. All the above-mentioned problems had been open research problems for almost 5 decades. All of them have been solved to perfection only in the last 4-5 years, just because of the success of deep neural networks. That is why they are very popular and important. I mentioned very few problems that I worked on; there are many similar tasks that deep neural networks solved with ease in the last decade.

+ +

And, at this point in time, many people across the world are working on solving innumerable applications using deep neural networks.

+",20760,,,,,6/24/2019 1:22,,,,0,,,,CC BY-SA 4.0 +13018,1,,,6/24/2019 7:52,,1,214,"

Knowledge Representation and Automated Reasoning are two AI subfields which seem to have something to do with reasoning. +However, I can't find any information online about their relationship. Are they synonyms? Is KR a subfield of AR? What's the difference, if any?

+ +

To expand further, I believe representing knowledge is an essential part of ""reasoning"": how can you reason without a proper representation of the concepts you want to manipulate? +At the same time, reasoning seems to be an essential part of KR (we build Knowledge Bases in order to build computer programs which are able to make inferences from them).

+ +

So it seems to me that they are the same field, or at least deeply interrelated, but nobody on the internet seems to explicitly say that; furthermore, in this question, they are mentioned separately.

+ +

Another point of ambiguity is that the wikipedia page of KRR mentions reasoning and automated reasoning as a part of KR; it even lists ""automated theorem proving"" (a classical application of AR) as an application of KR. +But at the same time, we have a separate AR page which does not mention KR at all.

+",23527,,2444,,6/24/2019 10:57,6/24/2019 10:57,What is the difference between Knowledge Representation and Automated Reasoning?,,0,0,,,,CC BY-SA 4.0 +13020,5,,,6/24/2019 8:42,,0,,"

Automated Reasoning (AR) is a sub-field of Artificial Intelligence concerned with the development of computer programs which can reason completely, or nearly completely, automatically.

+ +

Examples of applications are automated theorem proving, proof checking, logic programming, circuit design and reasoning under uncertainty.

+ +

Tools used in AR include formal logic, fuzzy logic, Bayesian inference and other formal ad-hoc techniques.

+ +

One of the first examples of successful AR is the Logic Theorist (LT), a computer program developed in 1956 by Allen Newell, Cliff Shaw and Herbert A. Simon, which eventually proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica, and found new and more elegant proofs for some. It has been called ""the first artificial intelligence program"".

+",23527,,23527,,6/24/2019 19:59,6/24/2019 19:59,,,,0,,,,CC BY-SA 4.0 +13021,4,,,6/24/2019 8:42,,0,,"For questions about Automated Reasoning concepts and research topics. Automated Reasoning is a sub-field of Artificial Intelligence concerned with the development of computer programs which can reason completely, or nearly completely, automatically.",23527,,23527,,6/24/2019 19:59,6/24/2019 19:59,,,,0,,,,CC BY-SA 4.0 +13022,2,,13010,6/24/2019 8:43,,2,,"

One possible approach would be to use an algorithm which detects lines (e.g. Hough lines, or any deep neural net trained to detect lanes) and use some threshold range so that we can get the lane and the edges of the truck; then, after extracting the lines, you can easily find the distance between them.

+ +
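
As a rough illustration of the line-extraction step (my own sketch with OpenCV; the file name, thresholds and Hough parameters are placeholders that would need tuning for these images):

import cv2
import numpy as np

img = cv2.imread('truck.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Detect straight line segments (lane border, truck edge) in the edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

# Once the lane border and the truck edge are identified among the segments, their
# pixel distance can be measured and converted to real units using a known reference
# (e.g. the lane width).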

Then you need to experiment on a few images to get the threshold distance that you expect the truck to maintain, as the real distance and the distance calculated from the images are not the same.

+ +
+ +

If you want to classify using deep learning, you may need to preprocess the images before sending them. As it will be very difficult to directly learn to classify based on the raw image, you may need to first detect the lanes, then apply a mask, and then send the masked image to your network to make the network converge.

+",26648,,,,,6/24/2019 8:43,,,,6,,,,CC BY-SA 4.0 +13025,2,,8650,6/24/2019 10:08,,1,,"

In some sense yes, in some sense no. The word ""error"" you are using is not defined in the question. The network can have two quite different types of errors - the generalization error and the loss (training error). The lower bound of the loss on the dataset obviously depends on the network architecture, because the loss is a value produced by the network as a whole, and some networks will be able to produce exactly zero loss (which is likely to be overfitting, which is bad).

+ +

The generalization error is how well the network generalizes from the training dataset to a potentially infinite (or very big) amount of data samples ""in the wild"", so the generalization error cannot be calculated precisely, only estimated. The lower bound on the generalization error is studied in statistical learning. It always depends on the complexity of the classifier (in our case, the architecture of the network). Statistical learning theory produces estimations, like the VC generalization bound, which connect the generalization error with the size of the dataset and the complexity of the classifier space (networks, in our case). However, for a specific dataset, it probably doesn't make sense to talk about a generalization bound outside of the context of the complexity of the classifier space, because, if there is no restriction, we can always construct a bigger network which would have a smaller generalization error (for example, by pre-training a bigger network on a bigger dataset). To put it shortly, you have to put a restriction on the complexity of the network space to be able to estimate the minimum generalization error (and that probably wouldn't be useful anyway, because it wouldn't say anything about whether gradient descent would reach that error).

+",22745,,22745,,6/24/2019 12:17,6/24/2019 12:17,,,,0,,,,CC BY-SA 4.0 +13026,1,,,6/24/2019 11:13,,2,554,"

When using neural networks (NNs), we often normalize the inputs. I think this is done to equally capture the changes in any input feature, that is, if any feature takes huge values and other features take small values, we don't want the NN not to be able to "see" the change in the smaller value.

+

However, what if we cause the NN to become insensitive to the input, that is, the NN is not able to identify changes in the input because the changes are too small?

+",17143,,2444,,11/15/2020 18:44,5/5/2023 9:01,Could the normalisation of the inputs make the neural network insensitive to changes in the inputs?,,1,1,,,,CC BY-SA 4.0 +13028,1,13035,,6/24/2019 11:38,,2,137,"

Some pictures contain an elephant, others don't. I know which of the pictures contain the elephant, but I don't know where it is or what it looks like.

+ +

How do I make a neural network which locates the elephant in a picture if it contains one? There are no pictures with more than one elephant.

+",5852,,2444,,6/24/2019 13:50,6/24/2019 17:51,How do I locate a specific object in an image?,,1,0,0,,,CC BY-SA 4.0 +13029,2,,13026,6/24/2019 11:52,,0,,"
+

When using neural networks (NNs), we often normalized the inputs. I +think this is done to equally capture the changes in any input +feature, that is, if any feature takes huge values and other features +take small values, we don't want the NN not to be able to "see" the change +in the smaller value.

+

However, what if we cause the NN to become insensitive to the input, +that is, the NN is not able to identify changes in the input +because the changes are too small?

+
+

We don't normalize the input to make the model less sensitive to small changes in the input (theoretically, given the correct optimization strategy, the model will learn to approximate the smaller-ranged input as well).

+

An example of this would be Convolutional Neural Networks. Traditionally, images were represented with integer values ranging from $0$ to $255$. This means that a given pixel could have only $256$ distinct values. However, assuming we normalize the input, let's say to $[0, 1]$, this gives the pixel a whole range of values to occupy, making the input more sensitive to changes.

+

Instead, normalization is done to help with the model's convergence.

+",26652,,2444,,11/15/2020 18:39,11/15/2020 18:39,,,,7,,,,CC BY-SA 4.0 +13030,2,,1507,6/24/2019 13:05,,2,,"

There is also the AI effect, that is, the tendency to not consider something an AI once it is well understood. For example, neural networks are not yet fully understood, so people still tend to call them AI. Once we know exactly all the details about neural networks and their inner workings, we might start to consider them just computation. This is an old philosophical topic that goes back at least to the famous Jacques de Vaucanson's defecating duck and automatic loom.

+",2444,,,,,6/24/2019 13:05,,,,0,,,,CC BY-SA 4.0 +13031,1,13177,,6/24/2019 13:51,,1,98,"

Actually, I am ""fresh-water"", and I've never known what is neural network. Now I am trying understand how to design simple neuronetwork for this problem:

+ +

I'd like to make up a neural network such that, after learning, it could predict the mark of a passed exam (for example, math). These are the factors that influence the mark:

+ +
    +
  • Chosen topic (integral, derivative, series)
  • +
  • Performance (low, medium, high)
  • +
  • Does a student work? (Yes, No, flexible schedule)
  • +
  • Has the student ever gotten through an additional course? (Yes, No)
  • +
+ +

The output is a mark (A, B, C, D, E, F). I don't know whether I should add a few layers between the inputs and the output. Moreover, I have a few results from past years:

+ +
    +
  • (integral, low, Yes, No, E)
  • +
  • (integral, medium, Yes, Yes, B)
  • +
  • (series, high, No, Yes, A), and so on. What else do I need to know to design this NN? (A small sketch of one possible input encoding is given below.)
  • +
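
(For concreteness, here is a minimal sketch of one possible way to encode these categorical inputs and the A-F output for a small classifier; the encoding below is just an illustration, not a requirement.)

import numpy as np

TOPICS = ['integral', 'derivative', 'series']
PERFORMANCE = ['low', 'medium', 'high']
WORK = ['Yes', 'No', 'flexible']
EXTRA_COURSE = ['Yes', 'No']
MARKS = ['A', 'B', 'C', 'D', 'E', 'F']

def one_hot(value, categories):
    v = np.zeros(len(categories))
    v[categories.index(value)] = 1.0
    return v

def encode_example(topic, performance, work, course):
    # Concatenated one-hot vectors -> an input vector of length 11
    return np.concatenate([one_hot(topic, TOPICS),
                           one_hot(performance, PERFORMANCE),
                           one_hot(work, WORK),
                           one_hot(course, EXTRA_COURSE)])

x = encode_example('integral', 'low', 'Yes', 'No')
y = MARKS.index('E')   # target class for a 6-way classifier
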
+",26653,,,,,7/26/2020 4:51,How to create neural network that predicates result of exam?,,2,3,,,,CC BY-SA 4.0 +13032,5,,,6/24/2019 13:54,,0,,,2444,,2444,,6/16/2020 17:40,6/16/2020 17:40,,,,0,,,,CC BY-SA 4.0 +13033,4,,,6/24/2019 13:54,,0,,"For questions related to object detection (where objects can be e.g. humans, dogs, houses, etc.), whose meaning or definition can vary depending on the context. OD can refer to the task of locating (i.e. finding the coordinates) an object in an image (so, in this case, it would be a synonym for object localization) or the task of locating the object and classifying it (i.e. object localization + object classification).",2444,,2444,,6/16/2020 17:40,6/16/2020 17:40,,,,0,,,,CC BY-SA 4.0 +13034,2,,12973,6/24/2019 14:59,,1,,"

Ambiguity in Terms

+ +

You are correct that there is something like overloading occurring in tensor terminology in posts and in software libraries. Confusing jargon often appears when those without the mathematical background use mathematical terms. You rarely find this confusion when reading NASA, Cambridge, MIT, or Cal-tech materials.

+ +

Tensors, Rank, Dimensions, and Channels

+ +

A tensor is a grouping of dimensions. The grouping typically represents a relation between quantities describing a system. The order (or rank) of the tensor is the number of dimensions in the grouping.

+ +

When this mathematical idea is applied formally to electrical engineering, chemical systems, biological systems, or computing systems, we can say a tensor is the grouping of the inputs and outputs of that system, abstracting the relation between them. The order (or rank) of the tensor is the total number of signal channels going to or from the system. The rank is simply the number of inputs plus the number of outputs.

+ +

That original conception was lost a little when the term tensor began to be applied to grouping inputs in one tensor and grouping outputs in another tensor, without any connection with their relationship, which is not formally correct. Those are simply vectors in mathematics, represented by arrays in computer programs.

+ +

A labelled data set prepared for machine learning is a discrete tensor formally, however it does not have a mathematical model applied to it yet. When convergence occurs in an artificial network, the result of training is a set of parameters that, if later applied to new inputs, will predict the outputs based on the net and its trained parameters. The configuration of the network cells had become the model and the trained network had become a tensor expression approximating the phenomenon that the training data represents.

+ +

The Specific Example in the Question

+ +

A third order tensor cannot have only two dimensions, width and height. It must have three dimensions, possibly width, height, and instantaneous value. Width and height are the independent variables, and instantaneous value is the dependent variable. So we have a rank of $1 + 2 = 3$.

+ +

There is no clear and formal definition of what dimensionality means, so it would be useful to discontinue its use until academic textbooks contain a consistent definition, but that's unlikely to occur. It will likely continue to be used by non-mathematicians somewhat arbitrarily. Therefore, stating that two tensors are of equal dimensionality remains ambiguous.

+ +

Domain and Range

+ +

The correct terms are domain and range. These terms unambiguously describe the variability of the independent and dependent variables of a relation respectively. In the case of an array of samples describing a tensor relationship between horizontal position, vertical position, and brightness, the horizontal domain corresponds to the width and the vertical domain corresponds to the height.

+ +

Consequently, one would not say, ""Width and height are the spacial dimensions,"" or, ""width and height are the channel dimensions.""

+ +

It could unambiguously be stated, ""Width in this article shall refer to the maximum horizontal pixel position minus its minimum plus one."" Without saying something that complex, albeit accurate, this would be a clear statement.

+ +
+

The fourth convolution layer in the network has a domain of 1280, 768 and a range of IEEE 32 bit floats.

+
+ +

It is easily understood and close enough to technically correct to not cause readers with mathematical training to dismiss the writer as uneducated.

+ +

The equality question is now solved. The equality of domains could be stated clearly and unambiguously. Whether it sounds unusual is not particularly relevant.

+ +
+

The horizontal and vertical domains of network layers two and three are equal, both being [1280, 768].

+
+ +

(The reason [1280, 768] is often seen is because compilers of many languages, starting with FORTRAN or maybe even earlier, interpret that as a two dimensional domain for a two dimensional array. So that can be used among software people, although [768, 1280] is more likely used in programs since it is a video tradition to traverse left to right and then top to bottom.)

+ +

Misuse of the Term Tensor

+ +

It is incorrect to say that adding two arrays is an example of tensors manipulated in software. Tensors are more abstract than arrays. If one were to ...

+ +
    +
  • Make symbolic manipulations to reduce a complex tensor equation to a simple one,
  • +
  • Show that two tensor expressions are equal for given domains, or
  • +
  • Determine the correlation of a set of discrete points to a model represented as a tensor equation,
  • +
+ +

... then one would be performing tensor mathematics.

+ +

There are many pieces of software that perform such symbolic manipulations, but assigning a tensor to an input sample in machine learning is not tensor math. Tensor object classes that are not part of a system of symbolic manipulation are simply an abstraction of the typical programming language's array structure, parameterizing rank in the constructor. Such a trivial abstraction is not nearly as deep as symbolic manipulation of tensor expressions.

+",4302,,4302,,6/25/2019 16:50,6/25/2019 16:50,,,,1,,,,CC BY-SA 4.0 +13035,2,,13028,6/24/2019 17:51,,1,,"

So, assuming you're not allowed to use transfer methodologies (like taking an already existing elephant object detector), my recommendation is to train a CNN classifier (with binary labels: elephant present, elephant absent) and then use a localization strategy such as Grad-CAM. Note that there does exist a Grad-CAM++, but, because you can be sure there is only one instance, it isn't necessary and is just more complicated.

+ +

Note that, since you just need the location and not pixel-level specificity, you don't even need to do guided backpropagation; the relation with respect to the last convolutional feature map is enough.

+ +

A quick description: Grad-CAM uses the gradient of the class score w.r.t. the last feature map to see which locations helped make the classification, and from there you can upscale to the receptive field that those neurons cover.
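A minimal sketch of that idea (PyTorch-style; `model` and `model.last_conv` are placeholder names for your own trained binary classifier and its last convolutional layer):

```python
# Minimal Grad-CAM-style localization sketch. `model` and `model.last_conv`
# are assumptions: your trained binary CNN and its last conv layer.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class=1):
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()

    h1 = model.last_conv.register_forward_hook(fwd_hook)
    h2 = model.last_conv.register_full_backward_hook(bwd_hook)

    logits = model(image.unsqueeze(0))      # image: (C, H, W)
    model.zero_grad()
    logits[0, target_class].backward()      # gradient of the class score

    h1.remove(); h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # average gradients per map
    cam = F.relu((weights * activations["value"]).sum(dim=1))    # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                        mode="bilinear", align_corners=False)    # upscale to input resolution
    return cam.squeeze()                    # coarse heatmap of the elephant's location
```

The arg-max of the returned heatmap then gives a rough location for the single elephant.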

+ +

Hope this helped!

+",25496,,,,,6/24/2019 17:51,,,,0,,,,CC BY-SA 4.0 +13038,2,,11793,6/24/2019 21:15,,9,,"

To understand this equation first you need to understand the context in which it is first introduced. We have two neural networks (i.e. $D$ and $G$) that are playing a minimax game. This means that they have competing goals. Let's look at each one separately:

+ +

Generator

+ +

Before we start, you should note that throughout the whole paper the notion of the data-generating distribution is used; in short the authors will refer to the samples through their underlying distributions, i.e. if a sample $a$ is drawn from a distribution $p_a$, we'll denote this as $a \sim p_a$. Another way to look at this is that $a$ follows distribution $p_a$.

+ +

The generator ($G$) is a neural network that produces samples from a distribution $p_g$. It is trained so that it can bring $p_g$ as close to $p_{data}$ as possible so that samples from $p_g$ become indistinguishable to samples from $p_{data}$. The catch is that it never gets to actually see $p_{data}$. Its inputs are samples $z$ from a noise distribution $p_z$.

+ +

Discriminator

+ +

The discriminator ($D$) is a simple binary classifier that tries to identify which class a sample $x$ belongs to. There are two possible classes, which we'll refer to as the fake and the real. Their respective distributions are $p_{data}$ for the real samples and $p_g$ for the fake ones (note that $p_g$ is actually the distribution of the outputs of the generator, but we'll get back to this later).

+ +

Since it is a simple binary classification task, the discriminator is trained on a binary cross-entropy error:

+ +

$$ +J^{(D)} = H(y, \hat y) = H(y, D(x)) +$$

+ +

where $H$ is the cross-entropy and $x$ is sampled either from $p_{data}$ or from $p_g$, each with a probability of $50\%$. More formally:

+ +

$$ +x \sim +\begin{cases} +p_{data} \rightarrow & y = 1, & \text{with prob 0.5}\\ +p_g \;\;\;\,\rightarrow & y = 0, & \text{otherwise} +\end{cases} +$$

+ +

We consider $y$ to be $1$ if $x$ is sampled from the real distribution and $0$ if it is sampled from the fake one. Finally, $D(x)$ represents the probability with which $D$ thinks that $x$ belongs to $p_{data}$. By writing the cross-entropy formula we get:

+ +

$$H(y, D(x)) = \mathbb{E}_y[-\log D(x)] = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(D(x_i)) + (1 - y_i) \log(1 - D(x_i)) \right]$$

+ +

where $N$ is the size of the dataset. Since each class has $N/2$ samples we can split this sum into two parts: +$$ += - \left[ \frac{1}{N} \sum_{i=1}^{N/2}{ \; y_i \; log(D(x_i))} + \frac{1}{N} \sum_{i=N/2}^{N} \; (1 - y_i) \; log((1 - D(x_i))) \right] +$$

+ +

The first of the two terms represents the samples from the $p_{data}$ distribution, while the second one represents the samples from the $p_g$ distribution. Since all $y_i$ are equally likely to occur, we can convert the sums into expectations:

+ +

$$ += - \left[ \frac{1}{2} \; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \frac{1}{2} \; \mathbb{E}_{x \sim p_{g}}[log \; (1 - D(x))] \right] +$$

+ +

At this point, we'll drop the $\frac{1}{2}$ factor from the equations, since it's a constant and thus irrelevant when optimizing this equation. Now, remember that samples drawn from $p_g$ were actually outputs from the generator (obviously this affects only the second term). If we substitute $D(x), x \sim p_g$ with $D(G(z)), z \sim p_z$ we'll get:

+ +

$$ +L_D = - \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] +$$

+ +

This is the final form of the discriminator loss.
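To connect this with code, here is a minimal sketch of how $L_D$ and the generator's objective in the zero-sum formulation below could be computed. It is PyTorch-style; `D`, `G`, `x_real` and `z` are placeholder names for your discriminator, generator, a batch of real samples and a batch of noise, and `D` is assumed to output probabilities via a sigmoid.

```python
# Sketch of the GAN losses above; D, G, x_real and z are placeholders for
# your discriminator, generator, a batch of real samples and a noise batch.
import torch

eps = 1e-8                                   # numerical safety inside log

d_real = D(x_real)                           # D(x),    x ~ p_data
d_fake = D(G(z))                             # D(G(z)), z ~ p_z

# Discriminator loss: L_D = -[ E log D(x) + E log(1 - D(G(z))) ]
loss_D = -(torch.log(d_real + eps).mean()
           + torch.log(1.0 - d_fake + eps).mean())

# Generator objective in the zero-sum game: minimize E log(1 - D(G(z)))
# (the first term of V(D, G) does not depend on G, so it can be dropped)
loss_G = torch.log(1.0 - d_fake + eps).mean()
```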

+ +

Zero-sum game setting

+ +

The discriminator's goal, through training, is to minimize its loss $L_D$. Equivalently, we can think of it as trying to maximize the opposite of the loss:

+ +

$$ +\max_D{[-J^{(D)}]} = \max_D \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] +$$

+ +

The generator however, wants to maximize the discriminator's uncertainty (i.e. $J^{(D)}$), or equivalently minimize $-J^{(D)}$.

+ +

$$ +J^{(G)} = - J^{(D)} +$$

+ +

Because the two are tied, we can summarize the whole game through a value function $V(D, G) = -J^{(D)}$. At this point I like to think of it like we are seeing the whole game through the eyes of the generator. Knowing that $D$ tries to maximize the aforementioned quantity, the goal of $G$ is:

+ +

$$ +\min_G\max_D{V(D, G)} = \min_G\max_D \left[\; \mathbb{E}_{x \sim p_{data}}[log \; D(x)] + \; \mathbb{E}_{z \sim p_{z}}[log \; (1 - D(G(z)))] \right] +$$

+ +

Disclaimer:

+ +

This whole endeavor (on both my part and the authors' part) was to provide a mathematical formulation to training GANs. In practice there are many tricks that are invoked to effectively train a GAN, that are not depicted in the above equations.

+",26652,,,,,6/24/2019 21:15,,,,0,,,,CC BY-SA 4.0 +13040,2,,11487,6/24/2019 23:32,,2,,"

tl;dr

+ +

Your intuition is correct.

+ +

Why is this a problem?

+ +

The saturating effect of the sigmoid activation function is well documented as it is the main culprit of a problem called vanishing gradients.

+ +

In short, the derivative of the sigmoid function is:

+ +

$$ +\frac{dσ(z)}{dz} = σ(z) (1 - σ(z)) +$$

+ +

The problem is that, very often (especially if we initialize our weights with large values), the output of this neuron will be very close to either $1$ or $0$. This causes the gradient to be (nearly) $0$, which in turn means that the weights of that neuron can barely be updated.
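A quick numerical illustration of this saturation effect (NumPy only):

```python
# The derivative collapses once the sigmoid saturates.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
d_sigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

for z in [0.0, 2.0, 5.0, 10.0]:
    print(z, sigmoid(z), d_sigmoid(z))
# At z = 10, sigmoid(z) is ~0.99995 and its derivative is ~4.5e-5, so the
# weights feeding this neuron receive almost no gradient signal.
```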

+ +

So let's keep the weights small (in absolute value)

+ +

Initially this seems like a good idea, if we keep the weights small we will avoid saturation.

+ +

+ +

However, there is another problem with sigmoid functions: their gradient has a maximum value of $0.25$! This means that if the magnitude of a neuron's weights is less than $4$, the error signal will diminish while flowing backward through the net. This becomes progressively worse as we add more layers to the network.
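A toy illustration of how this compounds with depth (the depth of 10 layers and the unit-magnitude weights are arbitrary assumptions):

```python
# Even in the best case (derivative exactly 0.25 at every layer, |w| = 1),
# the backward error signal shrinks as 0.25 ** depth.
grad = 1.0
for layer in range(10):
    grad *= 1.0 * 0.25           # |w| * max sigmoid derivative
print(grad)                       # ~9.5e-7 after only 10 layers
```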

+ +

How have we solved this issue

+ +

Naturally, researchers tried to find better weight initialization strategies. However, this was hard because, as we saw, both small and large weights are bad.

+ +

One example is Mishkin et al. 2016 who propose a new initialization strategy, but fail to train a deep neural network with sigmoid activations.

+ +

Another workaround is to use a different learning rate for each layer (Xu et al. - Revise Saturated Activation Functions)

+ +

After a while the Machine Learning community realized that sigmoid functions were ill-suited for deep neural networks and adopted ReLU activations, which have fewer drawbacks and scale better. Nowadays, they have become the de-facto choice for deep learning.

+ +

Sources

+ +

This problem has been known for several years and has been well documented (the earliest reference I could find was from 1994). It was mostly explored in Recurrent Neural Networks.

+ +

If you are interested in reading about this, I'd recommend this post, by Andrej Karpathy.

+ +

Some more formal sources on this topic:

+ + +",26652,,,,,6/24/2019 23:32,,,,0,,,,CC BY-SA 4.0 +13041,1,,,6/25/2019 0:09,,3,104,"

I am trying to write self-play RL (NN + MCTS, http://web.stanford.edu/~surag/posts/alphazero.html) to ""solve"" a board game. However, I got stuck in designing the board game state representation (the input layer for the NN).

+ +

1) What would be the best way to represent each cell, if there are ~10-100 cells in a game, each of which could be occupied by any playing card? Should I use one-hot encoding and get 52 nodes per cell ([0, 0, 1, ..., 0]), or just divide card_id by the total number of cards and get a single node per cell ([0.0576...])?

+ +

2) Is it a good/bad practice to help NN by adding additional input that could be derived from the other nodes? For instance, imagine the game where whoever has most red cards wins. Input is 10 cards, and I am adding new input node (number of red cards) to emphasize it. Would that lead to a positive result or doing something like that is bad?

+ +

3) Would it help to reduce the number of illegal moves and increase the performance of NN by creating additional input stating which cards are available now and which are not?

+",26658,,,,,6/25/2019 0:09,Designing state representation for board game,,0,0,,,,CC BY-SA 4.0 +13042,2,,13031,6/25/2019 1:57,,0,,"

You can add as many layers (with any arbitrary number of nodes) as you want.

+

Please note that as you add more learning parameters (layers and nodes), your model complexity increases. This means the model can potentially learn a more complex input-output relationship. However, it also increases the risk of overfitting. Overfitting generally happens when the model you build is more complex than the data you have. In such a scenario, the model memorizes the data instead of learning from it. In other words, it can produce a very good result on the same data it was trained on but cannot generalize well. So, it performs poorly when the inputs are slightly different from what it was fed at the training stage.

+

In practice, you may try different architectures and parameter configurations, and measure the generalization capacity of the models (via cross-validation, for example) to choose the best model. In plain English, a generalizable model is one that performs (almost) equally well on both the training/validation and testing sets.
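As a hedged sketch of that selection procedure (scikit-learn's MLPClassifier is used purely as a stand-in model, and `X`, `y` are placeholders for your own features and labels):

```python
# Comparing a few layer configurations by cross-validated score.
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

for hidden in [(32,), (64, 32), (64, 64, 32)]:
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500)
    scores = cross_val_score(model, X, y, cv=5)   # X, y: your data
    print(hidden, scores.mean())
```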

+",12853,,12853,,7/26/2020 4:51,7/26/2020 4:51,,,,0,,,,CC BY-SA 4.0 +13043,1,,,6/25/2019 2:30,,1,127,"

I am quite new to text classification.

+ +

Using the EAST text detection model, I get multiple strings that aren't words and most often have no meaning, for example, IDs, brand names, etc. I would like to classify them into two groups. Which models work best, and how should I preprocess the strings? I wanted to use Word2Vec, but I think it only works with real words and not with arbitrary strings.

+",23063,,2444,,6/25/2019 13:56,6/25/2019 13:56,How do I classify strings with possibly no meaning?,,1,0,,,,CC BY-SA 4.0 +13044,2,,13043,6/25/2019 8:09,,1,,"

I would just use a dictionary. A simple list lookup would tell you whether it's a recognised word or not. As an added bonus you can add some basic language processing, eg to identify inflected forms without listing them in your dictionary. Or use regular expressions to recognise ID numbers. ML is not really the right tool here.
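A minimal sketch of that approach (the word-list file name and the ID pattern are assumptions to adapt to your data):

```python
# Dictionary lookup plus a simple regular expression for ID-like strings.
import re

with open("words.txt") as f:                   # any word list, one word per line
    vocabulary = {line.strip().lower() for line in f}

id_pattern = re.compile(r"^[A-Z0-9\-]{5,}$")   # e.g. serial numbers or IDs

def classify(token):
    if token.lower() in vocabulary:
        return "word"
    if id_pattern.match(token):
        return "id-like"
    return "other"
```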

+",2193,,,,,6/25/2019 8:09,,,,4,,,,CC BY-SA 4.0 +13045,1,,,6/25/2019 8:33,,2,356,"

I'm using FastText pre-trained-embedding for tackling a classification task, but I saw it supports also online training (incremental training) for adding domain-specific corpus.

+ +

How does it work?

+ +

As far as I know, starting from the ""model.bin"" file, it retrains the model only on the new corpus, updating the old word vectors. Is that right?

+",20780,,2444,,6/25/2019 13:54,8/19/2023 21:01,How does FastText support online learning?,,1,2,,,,CC BY-SA 4.0 +13046,1,,,6/25/2019 8:35,,1,69,"

+ +

I have this slide from my AI class on using a Bayes network to compute a conditional probability. I don't really understand the point of converting the conditional probabilities to factors (besides the fact that it looks weird to marginalize or multiply variables in a CP). It seems kind of arbitrary. Is there some benefit I'm not noticing?

+",25721,,2444,,6/26/2019 12:31,6/26/2019 12:31,What is the point of converting conditional probability to factor for Variable Elimination?,,0,1,,,,CC BY-SA 4.0 +13047,2,,1507,6/25/2019 9:14,,1,,"

From ""Artificial Intelligence And Life In 2030: One Hundred Year Study On Artificial Intelligence"":

+ +
+

In fact, the field of AI is a continual endeavor to push forward the frontier of + machine intelligence. Ironically, AI suffers the perennial fate of losing claim to its + acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the “AI effect” or the “odd paradox”—AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.

+
+ +

Consequently, I believe we cannot choose a fixed set of requirements for something to be considered AI; rather, at any given moment in history, AI is a set of programs which can achieve something that before was generally considered to be solvable by humans only. As technology evolves, the boundaries keep getting pushed and pushed, and the bar rises higher. Consider chess playing: once chess engines were considered one of the pinnacles of AI, while nowadays such programs are perceived as ""blind search"" and not truly intelligent.

+ +

To quote Larry Tesler, Intelligence is whatever machines haven't done yet.

+",23527,,23527,,6/25/2019 9:40,6/25/2019 9:40,,,,0,,,,CC BY-SA 4.0 +13050,2,,3920,6/25/2019 11:14,,7,,"

There are several terms or expressions related to such systems, such as online learning, incremental learning, continuous learning, continual learning, and lifelong learning. They are sometimes used interchangeably, but some of them have slightly different meanings. For example, online learning does not need to be incremental, which refers to algorithms that attempt not to forget previously learned information.

+

The opposite of online is offline. However, the expression batch learning is sometimes used as an antonym for online learning.

+",2444,,2444,,9/15/2021 13:29,9/15/2021 13:29,,,,0,,,,CC BY-SA 4.0 +13051,1,,,6/25/2019 13:58,,0,38,"

I am using python and Xgboost. I have features: activity and location and time stamps of when the activity occurred.

+ +

I want to predict the day of the week. Is this straightforward, i.e., y = day of week, X = {activity, location}, or am I being naive and do I need to do fancy time-series things? I'd also like to predict the time of day.

+",8385,,,,,6/25/2019 13:58,Is predicting day of week straight forward?,,0,4,,,,CC BY-SA 4.0 +13053,5,,,6/25/2019 16:26,,0,,,2444,,2444,,6/25/2019 16:26,6/25/2019 16:26,,,,0,,,,CC BY-SA 4.0 +13054,4,,,6/25/2019 16:26,,0,,For questions related to teaching and learning AI concepts.,2444,,2444,,6/25/2019 16:26,6/25/2019 16:26,,,,0,,,,CC BY-SA 4.0 +13055,2,,7073,6/25/2019 17:26,,3,,"

You might also be looking for active learning, where the machine learning algorithm interactively queries the user to label certain unlabeled training examples. Active learning is similar to semi-supervised learning, in which there are labeled and unlabeled examples.

+",2444,,,,,6/25/2019 17:26,,,,0,,,,CC BY-SA 4.0 +13056,2,,2964,6/25/2019 17:27,,1,,"

I think the answer is most likely no, not in the most notable examples of AI programs (such as Machine Learning). There is a set of AI techniques which involve automatic programming, but in that scenario, we have a computer program which automatically codes another program (we can call it the ""target program""). But the target program is not the program which performs the coding; so, technically speaking, no, it does not write its own code. This is an important difference; the programmer still has the task of writing the code generator.

+ +

If you are interested in automated coding, though, the most notable example is Genetic Programming, a technique which uses an evolutionary algorithm to breed computer programs. As you can see, we have an AI which produces as a result a computer program (which may be or not an AI program); it is not interacting with its own code.

+ +

As a final remark, note that automatic coding is a pretty vague term and not all techniques are AI-related (for instance, back then, the first compilers were seen as a form of automatic programming). The most relevant technique to your question is probably Program Synthesis.

+",23527,,23527,,6/25/2019 17:33,6/25/2019 17:33,,,,0,,,,CC BY-SA 4.0 +13057,5,,,6/25/2019 17:33,,0,,,2444,,2444,,6/25/2019 17:33,6/25/2019 17:33,,,,0,,,,CC BY-SA 4.0 +13058,4,,,6/25/2019 17:33,,0,,"For questions related to active learning, which is a machine learning technique where the user is interactively queried to label certain unlabelled training examples.",2444,,2444,,6/25/2019 17:33,6/25/2019 17:33,,,,0,,,,CC BY-SA 4.0 +13059,1,,,6/25/2019 18:05,,3,543,"

Similar to the recent pushes in Pretrained Language Models (BERT, GPT2, XLNet) I was wondering if such a thrust exists in Computer Vision?

+ +

From my understanding, it seems the community has converged and settled on ImageNet-trained classifiers as the ""Pretrained Visual Model"". But relative to the data we have access to, shouldn't there exist something stronger? Also, classification as a sole task has its own constrictions on domain transfer (based on the assumption of how these loss manifolds are).

+ +

Are there any better visual models for transfer than ImageNet successes? If not, why? Is it because of the domain's fluidity in shape, resolution, etc., in comparison to text?

+",25496,,2444,,6/26/2019 12:31,6/26/2019 15:08,Are there any better visual models for transfer rather than ImageNet?,,1,0,,,,CC BY-SA 4.0 +13060,2,,12872,6/25/2019 19:28,,1,,"

It's not supposed to be derived from some equation. That is the basic premise under which GANs work. The output of the Generator $G(z)$ is fed as an input $x_g$ to the discriminator.

+",26652,,,,,6/25/2019 19:28,,,,0,,,,CC BY-SA 4.0 +13061,2,,6699,6/25/2019 22:19,,0,,"

It should be a short amount of time before we start seeing exponential complexity growth as the target of AI algorithms, likely involving the golden ratio.

+ +

https://en.wikipedia.org/wiki/Golden_ratio#Relationship_to_Fibonacci_sequence

+ +

We are already using the golden ratio to perform quantum computations :

+ +

https://www.quora.com/How-are-quantum-physics-and-the-golden-ratio-connected

+ +

So, once we scale parallelization to GPU-like networks of quantum processors, we can then be sure we have entered the territory where AI and the Golden Ratio are inherently more intrinsic.

+ +

As far as accelerating the learning models/algorithms with them, we can only hope something so fantastic would be found in the future of AI as well; who knows, maybe they will emerge in due course through necessity, much like the biological computers we all execute and replicate our code from, and their environment:

+ +

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6047800/

+ +

Not to say it will be the only property present, as the article concludes the golden ratio is ""most likely"" only related by ""chance"". I'm sure the same random chance by which it relates to black hole entropy, and, everything else it relates to :P.

+",26673,,26673,,6/25/2019 22:42,6/25/2019 22:42,,,,0,,,,CC BY-SA 4.0 +13063,2,,4703,6/26/2019 7:00,,-2,,"

From wikipedia article on Connect Four:

+ +
+

Connect Four is a solved game. The first player can always win by + playing the right moves.

+
+ +

It's pretty reasonable that, even without training, the 1st player wins more often by a small margin. With some training, the 1st player should mostly win.

+",22745,,,,,6/26/2019 7:00,,,,0,,,,CC BY-SA 4.0 +13065,5,,,6/26/2019 12:34,,0,,,2444,,2444,,6/26/2019 12:34,6/26/2019 12:34,,,,0,,,,CC BY-SA 4.0 +13066,4,,,6/26/2019 12:34,,0,,For questions related to the mathematical concept of a random variable (in the context of AI).,2444,,2444,,6/26/2019 12:34,6/26/2019 12:34,,,,0,,,,CC BY-SA 4.0 +13067,1,,,6/26/2019 12:55,,1,127,"

Paper link : Prioritized Experience Replay

+ +

About the blind cliffwalk setup:

+ +
    +
  1. Why is the number of possible action sequences equal to 2^N? I can't think of more than (N + 1) sequences, where one sequence is the sequence of all right actions and the other N sequences are due to a wrong action at each state.
  2. +
+ +

Generally for prioritized experience replay:

+ +
    +
  1. The replay memory consists of some transitions which are repeated. In the priority queue, I feel that there should only be a single priority for each transition to speed up learning. Is there any advantage of having priority values for each repeated instance of the transition?
  2. +
+ +

Edit for 2nd question:

+ +

Consider algorithm 1 on page 5 of the article.

+ +

+ +

Let's consider a transition that is repeated in the replay memory. If one instance of it is sampled (line 9) and its priority updated (line 12), will the priority of the other instance of the same transition be updated as well?

+",26697,,26697,,6/27/2019 3:50,6/27/2019 3:50,"Clarifications on ""Prioritized Experience Replay"" (Deepmind, 2015)",,1,0,0,3/29/2022 5:30,,CC BY-SA 4.0 +13068,2,,13067,6/26/2019 14:07,,1,,"

For 1) I think you're confusing states visited with sequences. At each of the N steps you have 2 possible actions, therefore you have $\prod_{i=1}^N 2$ or $2^N$ possible sequences.

+ +

For 2) The priorities are updated based on the magnitude of the TD error of the sampled transitions. New priority entries are not added each time; the existing entries for the transitions that were sampled are updated.
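A rough sketch of the proportional variant described in the paper (a flat array of fixed capacity is used here instead of the authors' sum-tree, purely for illustration):

```python
# Proportional prioritization sketch: sample with probability p_i^alpha and
# update only the priorities of the transitions that were sampled.
import numpy as np

alpha, eps = 0.6, 1e-6
capacity = 10000                        # assumed replay buffer size
priorities = np.ones(capacity)          # one priority per stored transition

def sample(batch_size):
    probs = priorities ** alpha
    probs /= probs.sum()
    return np.random.choice(capacity, batch_size, p=probs)

def update(indices, td_errors):
    priorities[indices] = np.abs(td_errors) + eps   # only sampled entries change
```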

+",25496,,,,,6/26/2019 14:07,,,,3,,,,CC BY-SA 4.0 +13070,2,,13059,6/26/2019 15:08,,4,,"

Why is ImageNet so popular for transfer learning?

+ +

Models pre-trained on the ImageNet datasets have been the de-facto choice for many years now. Many popular reasons as to why people think that ImageNet is so effective for transfer learning are the following:

+ +
    +
  • ImageNet is a truly large-scale dataset that contains over 1 million images, each of which has a decent resolution.
  • +
  • ImageNet has a wide and diverse set of classes (1000 in total) ranging from animals to humans, cars, trees, etc. Most Computer Vision tasks operate in similar domains, however there are some notable exceptions (e.g. medical images).
    +For example, an object detection model for autonomous driving would benefit from ImageNet transfer learning, as the pre-trained model has seen images with similar content (e.g. roads, people, cars, street signs), even though it tries to solve a different task (i.e. object detection not classification).
  • +
  • The above two reasons allow models trained on ImageNet to identify and extract very generic features, especially in their initial layers, that can be effectively re-used.
  • +
  • ImageNet has a lot of similar classes. This is an interesting argument because it contradicts the second one. Due to the closeness of some classes (e.g. multiple breeds of cats), networks learn to extract more fine-grained features.
  • +
+ +

Another overlooked reason I find very important is that:

+ +
    +
  • ImageNet has been the benchmark for performance for image classifiers for years now. When, for example, you are selecting a pre-trained ResNet to use, you know that that model is guaranteed to operate at a high level of performance. Other datasets don't have such notable challenges as the ILSVRC. That challenge is what made VGG and ResNet popular in the first place, so it comes naturally that people would want to use those weights.
  • +
+ +

In practice, due to the way in which CNNs identify and extract features from images, they can easily be ""transferred"" from task to task.
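For instance, a typical transfer-learning setup looks roughly like the following hedged Keras sketch (the pooling choice, head sizes and the 10-class output are assumptions for illustration):

```python
# Reusing an ImageNet-pretrained backbone for a new classification task.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
backbone.trainable = False                      # keep the generic ImageNet features

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),     # new task-specific head
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```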

+ +

Is it actually better than other datasets?

+ +

This question was widely explored by Huh et al., who tried to identify the reasons that made the ImageNet dataset better than other ones for transfer learning.

+ +

In short they found out that most of the reasons that people thought made ImageNet so good (i.e. the ones I mentioned above) weren't necessarily correct. Furthermore, the amount and diversity of images and classes required to effectively train a CNN has been highly overestimated. So there is no particular reason people should choose this specific dataset.

+ +

Now, to answer your questions:

+ +
+

I was wondering if such a thrust exists in Computer Vision?

+
+ +

No, ImageNet is currently established as the de-facto choice, as evidenced by the fact that all 10 keras.applications models offer weights only for ImageNet.

+ +
+

But relative to the data we have access too, shouldn't there exist something stronger?

+
+ +

This is an interesting question, as the consensus is that deep learning models keep getting better with more data. There is, however, evidence that indicates otherwise (i.e. that CNN models don't have as much capacity as we thought). You can read the aforementioned study for more details. In any case, this is still an open research question.

+ +

Even if models could get better, though, with more data, it is possible that it still wouldn't matter because ImageNet pre-trained models are strong enough.

+ +
+

classification as a sole task has its own constrictions on domain transfer

+
+ +

There have been numerous cases where models initialized from pre-trained ImageNet weights have done well in settings other than classification (e.g. regression, object detection). I'd argue that initialization from ImageNet is almost always better than random initialization.

+ +
+

Are there any better visual models for transfer rather than ImageNet successes? If no, why? Is it because of the domains fluidity in shape, resolution, etc., in comparison to text?

+
+ +

Partly, yes. I think that, in comparison to text, images have some useful properties that are exploited through CNNs, which makes their knowledge more transferable. This claim, however, is based on intuition; I can't back it up with hard evidence.

+",26652,,,,,6/26/2019 15:08,,,,1,,,,CC BY-SA 4.0 +13071,1,13074,,6/26/2019 15:41,,3,236,"

I'm new to the graph convolution network. I wonder what is the main purpose of applying data with graph structure to CNN?

+",26502,,2444,,6/27/2019 11:22,6/27/2019 11:22,What is the purpose and benefit of applying CNN to a graph?,,1,1,,,,CC BY-SA 4.0 +13072,5,,,6/26/2019 17:05,,0,,,1671,,1671,,6/26/2019 17:05,6/26/2019 17:05,,,,0,,,,CC BY-SA 4.0 +13073,4,,,6/26/2019 17:05,,0,,"For questions about the ""AI business"", the industry of AI, and related subjects.",1671,,1671,,6/26/2019 17:05,6/26/2019 17:05,,,,0,,,,CC BY-SA 4.0 +13074,2,,13071,6/26/2019 18:10,,2,,"

There are some problems that involve graphs and manifolds (sometimes collectively called non-Euclidean data), such as molecule design and generation, drug repositioning, social networks analysis, brain imaging, fake news detection, recommender systems, neutrino detection, computer vision and graphics and shape (e.g. hand or face) completion or generation (generative models).

+ +

The main benefit of geometric deep learning (deep learning applied to graphs and manifolds) is that you do not lose the information encoded in the graphs (or manifolds), which, otherwise, you would likely lose because you would need to convert your graphs (or manifolds) to an equivalent vector representation that you can feed into the existing CNN or other standard neural networks.

+ +

Note that you cannot directly apply the usual convolution operation to graphs, because, for example, graphs do not have the notion of relative positions of the nodes. Furthermore, note that graph networks have little to do with CNNs, even if they are sometimes called graph convolution networks.

+",2444,,2444,,6/26/2019 18:29,6/26/2019 18:29,,,,0,,,,CC BY-SA 4.0 +13075,2,,10313,6/26/2019 18:30,,1,,"

I have a somewhat similar problem, there's this paper that I was supposed to extend (I can explain how): https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-017-1984-2 +What they do is that they have a connection network between input and second layer (giving some biological context to training) I don't know how you can maintain the zero edges during back propagation though.

+",22010,,,,,6/26/2019 18:30,,,,0,,,,CC BY-SA 4.0 +13076,1,13081,,6/27/2019 0:03,,2,235,"

I am new in this area of Machine Learning and Neural Networks. Currently, I'm taking some courses on Udemy and reading a book about it, but I still have one big question regarding data pre-processing.

+ +

In all of those Udemy lessons, people always use a perfect dataset that is ready to be input into a model. So all you have to do is run it.

+ +

How do I know if my dataset is ready for a model? What do I have to do to make it ready? Which evaluations?

+ +

I had a few statistics classes in college already, and I learned a lot about correlation matrices, autocorrelation functions and their lags, etc., but I haven't yet seen anywhere an explanation of how I can evaluate my data and then proceed to implement a model to solve my problem.

+ +

If anyone could point me a direction, give me some material, show me where I can learn this, anything, it would be really helpful!

+",26707,,2444,,6/27/2019 11:31,6/27/2019 11:31,How do I know if my dataset is ready for a machine learning model?,,1,0,,,,CC BY-SA 4.0 +13077,1,,,6/27/2019 0:04,,3,41,"

When you apply graph-structured data to a graph convolution network, what are the benefits of using state information that maintains the graph structure?

+",26502,,,,,6/27/2019 0:04,What are the benefits of using the state information that maintains the graph structure?,,0,5,,,,CC BY-SA 4.0 +13078,2,,2158,6/27/2019 5:19,,2,,"

LIDARs, especially cheap LIDARs, have problems with reflective surfaces (like metallic paint on cars), strong lights like car headlights, and weather (rain, snow, hail, fog), and have a considerably shorter range than radars of comparable price. Of course, they have much better precision, so some hardware stacks for cars are using both.

+",22745,,2444,,6/27/2019 11:21,6/27/2019 11:21,,,,0,,,,CC BY-SA 4.0 +13080,1,,,6/27/2019 7:16,,2,2012,"

I don't understand the proof that $A^*$ is optimal.

+ +

The proof is by contradiction:

+ +
+

Assume $A^*$ returns $p$ but there exists a $p'$ that is cheaper. When $p$ is chosen from the frontier, assume $p''$ (Which is part of the path $p'$) is chosen from the frontier. Since $p$ was chosen before $p''$, then we have $\text{cost}(p) + \text{heuristic}(p) \leq \text{cost}(p'') + \text{heuristic}(p'')$. $p$ ends at goal, therefore the $\text{heuristic}(p) = 0$. Therefore $\text{cost}(p) \leq \text{cost}(p'') + \text{heuristic}(p'') \leq \text{cost}(p')$ because heuristics are admissible. Therefore we have a contradiction.

+
+ +

I am confused: can't we also assume there's a cheaper path that's in a frontier closer to the start node than $p$? Or is part of the proof that's not possible because $A^*$ would have examined that path because it is like BFS with lowest cost search, so, if there's a cheaper path, it'll be at a further frontier?

+",25721,,2444,,11/10/2019 16:59,2/18/2022 21:44,Understanding the proof that A* search is optimal,,1,0,,,,CC BY-SA 4.0 +13081,2,,13076,6/27/2019 8:57,,2,,"

Before jumping to modeling, there are a few tasks a data scientist (or ML/AI practitioner) must do:

+ +
    +
  1. Ideation (or hypothesizing): Before applying any modeling approach, we need to ask the right questions. We must clearly mention our assumptions and declare how we want to measure the effectiveness of the pipeline. Note that, some tools/algorithms might not fit to the made assumptions or may not lead to the best values in the defined metrics. So, the pipeline must be designed in a way it serves the purpose of answering the defined questions.
  2. +
  3. Data Cleansing: Real-world data sets are usually not clean. They have all sorts of data issues such as missing value, duplicates, outliers, wrong measurements, fragments, inconsistency, etc. Most of the ML techniques are sensitive (to different extents, of course) to such issues. Therefore, the data should be cleaned before any modeling can be done.
  4. +
  5. Data Wrangling (or Feature Engineering): In many cases, the gathered data (even cleaned) is not immediately suitable for any modeling/analysis. For example, we may need to convert the documents of a text corpus to vectors of numbers (via TF-IDF or Embedding techniques) before being able to apply a text classifier, simply because our classifier only takes numeric data. Converting measurements to other units, breaking addresses into their components, and converting times and dates to different formats or timezones are just a few examples of data wrangling tasks (in a broader context, feature engineering may also refer to dimension reduction or feature selection/projection). A minimal code sketch of the cleansing and wrangling steps is shown after this list.
  6. +
  7. Exploratory Data Analysis (EDA): To do the cleansing, understanding the data set features, and ideation, we often need to explore the given data set using visual (e.g., dashboards and diagrams) or summary statistics tools.
  8. +
+ +
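As promised above, here is a hedged sketch of the cleansing and wrangling steps on a hypothetical text dataset (the file and column names are assumptions):

```python
# Basic cleansing (duplicates, missing values) followed by wrangling
# (turning raw text into numeric TF-IDF vectors a classifier can consume).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

df = pd.read_csv("reviews.csv")                  # hypothetical dataset
df = df.drop_duplicates()
df = df.dropna(subset=["text", "label"])

vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(df["text"])         # documents -> numeric vectors
y = df["label"]
```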

Disclaimer: I have no business interest with Udemy. The links are just shared because @pedro-de-sá mentioned they take some courses from Udemy.

+",12853,,2444,,6/27/2019 11:27,6/27/2019 11:27,,,,2,,,,CC BY-SA 4.0 +13082,1,13083,,6/27/2019 9:19,,6,2779,"

On this article, it says that:

+ +
+

The UNET was developed by Olaf Ronneberger et al. for Bio Medical Image Segmentation. The architecture contains two paths. First path is the contraction path (also called as the encoder) which is used to capture the context in the image. The encoder is just a traditional stack of convolutional and max pooling layers. The second path is the symmetric expanding path (also called as the decoder) which is used to enable precise localization using transposed convolutions. Thus it is an end-to-end fully convolutional network (FCN), i.e. it only contains Convolutional layers and does not contain any Dense layer because of which it can accept image of any size.

+
+ +

What I don't understand is how an FCN can accept images of any size, while an ordinary object detector, such as YOLO with a dense layer at the very end, cannot accept images of any size.

+ +

So, why can a fully convolutional network accept images of any size?

+",26714,,2444,,6/14/2020 10:57,6/14/2020 10:59,Why can a fully convolutional network accept images of any size?,,1,1,,,,CC BY-SA 4.0 +13083,2,,13082,6/27/2019 9:55,,6,,"

The reason is that when using a convolutional layer, you select the size of the filter kernels, which are independent of the image/layer input size (provided that images smaller than the kernels are padded appropriately).

+ +

When using a dense layer, you specify the size of the layer itself and the resulting weight matrix is a function of both the size of the dense layer and the upstream layer. This is because each neuron in the upstream layer makes a connection to each neuron in the dense layer. So, if you have 50 neurons in the upstream layer and 20 neurons in the dense layer, then the weight matrix has $50 \times 20=1000$ values. Those weights are what get determined during the training phase, and so those layer sizes are fixed.

+ +

Now, the output of a CNN layer is a number of images/tensors (specified by the number of filters chosen), whose size is determined by the kernel size and any padding option chosen. If those are fed into a dense layer, then that fixes the size that those images can be (because of the reason given in the previous paragraph).

+ +

On the other hand, if no dense layer is used in the whole network, then the input to the first CNN layer can be any size because the weights are just the individual parameters of the filter kernels, and the filter kernels remain the same size regardless of the input tensor size.

+",12509,,2444,,6/14/2020 10:59,6/14/2020 10:59,,,,1,,,,CC BY-SA 4.0 +13086,1,,,6/27/2019 11:22,,4,114,"

Thomas Ray's Tierra is a computer program which simulates life.

+ +

In the linked paper, he argues how this simulation may have real-world applications, showing how his digital organisms (computer programs) evolve in an interesting way: they develop novel ways of replicating themselves and become faster at it (he argues that the evolved organisms employ an algorithm which is 5 times faster than the original one he wrote).

+ +

Tierra's approach is different from standard GAs:

+ +
    +
  • While in GAs usually there is a set of genomes manipulated, copied and mutated by the program, in Tierra everything is done by the programs themselves: they self-replicate.
  • +
  • There is no explicit fitness function: instead, digital organisms compete for energy resources (CPU time) and space resources (memory).
  • +
  • Organisms which take a long time to replicate reproduce less frequently, and organisms who create many errors are penalized (they die out faster).
  • +
  • Tierran machine language is extremely small: operands included, it only has 32 instructions. Oftentimes, so called RISC instruction sets have a limited set of opcodes, but if you consider the operands, you get billions of possible instructions.
  • +
  • Consequentially, Tierran code is less brittle, and you can mutate it without breaking the code. In contrast, usually, if you mutate randomly some machine code, you get a broken program.
  • +
+ +

I was wondering if we could use this approach to optimize machine code. For instance, let's assume we have some assembly-like program which computes a certain function $f$. We could link reproduction time with efficiently computing $f$, and life-span with correctly computing it. This could motivate programs to find novel and faster ways to compute $f$.

+ +

Has anything similar ever been tried? Could it work? Where should I look into?

+",23527,,23527,,6/27/2019 11:53,12/22/2019 23:26,Can we use the Tierra approach to optimize machine code?,,1,2,,,,CC BY-SA 4.0 +13087,1,,,6/27/2019 11:26,,1,35,"

I am specifically interested in the topic of edge cases.

+

I have the presentation Edge Cases and Autonomous Vehicle Safety as a starting point, in particular on page 6:

+
+

Machine Learning (inductive training)

+
    +
  • No design insight

    +
  • +
  • Generally inscrutable; prone to gaming and brittleness.

    +
  • +
+
+

I'd like to find more hard data on how ML may do very well until an edge case is encountered.

+",23170,,23170,,6/23/2020 12:32,6/23/2020 12:32,I am looking for research related to the use of AI and ML in automotive and aeronautics safety design,,0,8,,,,CC BY-SA 4.0 +13088,1,15429,,6/27/2019 15:14,,8,6766,"

In reinforcement learning, there are deterministic and non-deterministic (or stochastic) policies, but there are also stationary and non-stationary policies.

+ +

What is the difference between a stationary and a non-stationary policy? How do you formalize both? Which problems (or environments) require a stationary policy as opposed to a non-stationary one (and vice-versa)?

+",2444,,,,,2/25/2021 0:17,What is the difference between a stationary and a non-stationary policy?,,1,1,,,,CC BY-SA 4.0 +13089,5,,,6/27/2019 15:17,,0,,,2444,,2444,,6/27/2019 15:17,6/27/2019 15:17,,,,0,,,,CC BY-SA 4.0 +13090,4,,,6/27/2019 15:17,,0,,For questions related to the concept of a stationary policy (in reinforcement learning and other AI sub-fields).,2444,,2444,,6/27/2019 15:17,6/27/2019 15:17,,,,0,,,,CC BY-SA 4.0 +13091,1,,,6/27/2019 15:51,,3,49,"

I have been searching for more than one week which learning methods were used in Neurogrid.

+ +

But I only found descriptions of its architecture (chips, circuits, analog and/or digital components, performance results), and no clue about how it updates the weights.

+ +

In my opinion, I think that it cannot be gradient descent (with back-propagation), as the topology of the neurons in a chip, for example in the neurocore of Neurogrid, is a mesh or grid.

+ +

Do you know where I could find this kind of information?

+ +

+",26719,,2444,,6/28/2019 11:06,6/28/2019 11:06,Where could I find information on the learning methods used in Neurogrid?,,0,3,,,,CC BY-SA 4.0 +13093,1,13095,,6/27/2019 16:04,,2,70,"

Quoting from Wikipedia page on Turing Test

+ +
+

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen...

+
+ +

The definition made me wonder whether we would term as a Turing test the captchas that show up on Google, where the user, bot or not, is given textual instructions asking them to select the images containing a specific object out of a given set of images.

+",17209,,17209,,6/27/2019 16:12,1/24/2021 18:47,Would you term Google's Captchas as Turing Test?,,1,0,,,,CC BY-SA 4.0 +13095,2,,13093,6/27/2019 16:51,,4,,"

In the standard Turing test (or imitation game), the interrogator can ask multiple arbitrary questions, while, in the case of captchas, usually, there's only one question or problem. Additionally, in the Turing test, the interrogator interactively communicates with both the human and the machine. Furthermore, captchas do not test the conversational skills but only the visual ones of the AI. Therefore, the existing captchas are not an example of the standard Turing test.

+",2444,,2444,,1/24/2021 18:47,1/24/2021 18:47,,,,0,,,,CC BY-SA 4.0 +13096,2,,6099,6/27/2019 16:58,,-1,,"

My interpretation of the question was 'what activation function in an artificial neural network (ANN) is closest to that found in the brain?'

+ +

Whilst I agree with the selected answer above, that a single neuron outputs a dirac, if you think of a neuron in an ANN as modelling the output firing rate, rather than the current output, then I believe ReLU might be closest?

+ +

http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/

+",26722,,,,,6/27/2019 16:58,,,,0,,,,CC BY-SA 4.0 +13097,1,13099,,6/27/2019 17:03,,1,343,"

Both of them deal with data of graph structure like a network community. Is there a big difference there?

+",26502,,2444,,6/28/2019 10:46,9/10/2019 16:51,What are the differences between network analysis and geometric deep learning on graphs?,,2,0,,,,CC BY-SA 4.0 +13098,2,,13080,6/27/2019 17:34,,2,,"

The key phrase here is

+
+

because heuristics are admissible

+
+

In other words, the heuristics never overestimate the path length:

+

$$cost(n) + heuristic(n) \le cost(\text{any path going through n})$$

+

And since the frontier is ordered by $\textbf{cost + heuristic}$, when a completed path $p$ is dequeued from the frontier, we know that it must necessarily be $\le$ any path going through some other frontier node $q$, because

+

$$cost(p) = cost(p) + heuristic(p)$$ +$$\le cost(q) + heuristic(q)$$ +$$\le cost(\text{any path going through q})$$

+",19452,,19452,,2/18/2022 21:44,2/18/2022 21:44,,,,5,,,,CC BY-SA 4.0 +13099,2,,13097,6/27/2019 18:44,,1,,"

Network analysis does not necessarily use deep learning techniques, while geometric deep learning (GDL) on graphs uses only deep learning techniques (that is, you train a neural network using gradient descent or other optimization methods). You can do some network analysis using GDL.

+",2444,,,,,6/27/2019 18:44,,,,0,,,,CC BY-SA 4.0 +13100,2,,7624,6/27/2019 20:20,,4,,"

Over the years, many people have attempted to define intelligence, so there are many definitions of intelligence, but most of them are not formalized. For a big collection of definitions, see the paper A Collection of Definitions of Intelligence (2007) by Shane Legg and Marcus Hutter.

+

In an attempt to formally define intelligence, so that it comprises all forms of intelligence, in the paper Universal Intelligence: A Definition of Machine Intelligence (2007), the same Legg and Hutter, after having researched the previously given definitions of intelligence, define intelligence as follows

+
+

Intelligence measures an agent's ability to achieve goals in a wide range of environments

+
+

This definition apparently favors systems that are able to solve more tasks (i.e. AGIs) than systems that are only able to solve a specific task (i.e. narrow AIs), but, according to Legg and Hutter, it should summarise the main points of the previously given definitions of intelligence, so it should be a reasonable and quite general definition of intelligence. Moreover, properties associated with intelligence, like the ability to learn, should be emergent, i.e. in order to achieve goals in a wide range of environments you also need the ability to learn.

+

In my blog post On the definition of intelligence, I also talk about this definition, but I suggest that you read the mentioned papers if you are interested in all details. This video by Marcus Hutter could also be useful and interesting.

+",2444,,2444,,1/11/2022 18:00,1/11/2022 18:00,,,,0,,,,CC BY-SA 4.0 +13101,2,,7591,6/27/2019 21:38,,0,,"

If intelligence is defined as utility relation to a task, any algorithm can be said to be intelligent.

+ +

If the task is convincing a human that an algorithm is human, and the algorithm achieves this goal, it can be said to be strongly intelligent in relation to the task of fooling the human. (Here the term strong is used because the algorithm's performance is stronger than the human's. Strength is relative, and Turing Tests are unavoidably subjective.)

+ +

However, this does not mean that the algorithm is generally strongly intelligent, because it may not exceed human capability in all tasks.

+",1671,,,,,6/27/2019 21:38,,,,0,,,,CC BY-SA 4.0 +13102,2,,7431,6/27/2019 22:06,,1,,"

The paper Artificial General Intelligence: Concept, State of the Art, and Future Prospects (2014), by Ben Goertzel (one of the people that are really still very interested in AGI), surveys the field of artificial general intelligence (AGI), its progress, approaches, mathematical formalisms, engineering, and biology-inspired perspectives, and metrics for assessing AGI.

+

Just to give a little bit more context and whet your appetite, let me briefly describe the different approaches to AGI (section 3, p. 14).

+
    +
  • symbolic approach (which is based on the Physical Symbol System Hypothesis; examples of this approach are ACT-R or SOAR),

    +
  • +
  • emergentist approach (aka sub-symbolic, i.e. the use of neural networks, and similar sub-symbolic models, from which abstract symbolic processing/reasoning can or is expected to emerge; so examples of this approach is deep learning, computational neuroscience, and artificial life),

    +
  • +
  • hybrid approach (a combination of the symbolic and sub-symbolic approaches; examples of this approach are CLARION and CogPrime), and

    +
  • +
  • universalist approach (examples of this approach are the AIXI and Gödel machine).

    +
  • +
+",2444,,2444,,1/18/2021 13:19,1/18/2021 13:19,,,,0,,,,CC BY-SA 4.0 +13103,2,,6985,6/27/2019 23:26,,1,,"

There are a few technical papers and books on the topic

+ + + +

However, note that gradient descent (and other optimization algorithms) and the back-propagation algorithm are numerical algorithms (that is, they deal with numerical errors), so the time complexity is not the only factor affecting the actual performance of these algorithms and models.

+",2444,,2444,,6/27/2019 23:35,6/27/2019 23:35,,,,0,,,,CC BY-SA 4.0 +13104,5,,,6/27/2019 23:32,,0,,,2444,,2444,,6/27/2019 23:32,6/27/2019 23:32,,,,0,,,,CC BY-SA 4.0 +13105,4,,,6/27/2019 23:32,,0,,For questions related to the (computational) complexity (e.g. time and space complexity) of AI algorithms.,2444,,2444,,6/27/2019 23:32,6/27/2019 23:32,,,,0,,,,CC BY-SA 4.0 +13106,1,13110,,6/28/2019 2:59,,3,380,"

Take AlexNet for example:

+ +

+ +

In this case, only the activation function ReLU is used. Due to the fact ReLU cannot be saturated, it instead explodes, like in the following example:

+ +

Say I have a weight matrix of [-1,-2,3,4] and inputs of [ReLU(4), ReLU(5), ReLU(-2), Relu(-3)]. The resultant matrix from these will have large numbers for the inputs of ReLU(4) and ReLU(5), and 0 for ReLU(-2) and ReLU(-3). If there are even just a few more layers, the numbers are quick to either explode or be 0.

+ +

How is this typically combated? How do you keep these numbers close to 0? I understand you can subtract the mean at the end of each layer, but for a layer whose values are already in the millions, subtracting the mean will still result in thousands.

+",26726,,2444,,6/28/2019 10:51,6/28/2019 10:51,How are exploding numbers in a forward pass of a CNN combated?,,1,0,0,,,CC BY-SA 4.0 +13107,2,,11438,6/28/2019 6:10,,0,,"

You can do that and you'll probably find data, yet that depends on the kind of FAQ data you will apply it on. More importantly, what insight do you gain by comparing two BERT models?

+ +

Secondly, if you mean by semantic similarity that vector-space embeddings are used, even for the retrieval/ranking and not just for re-ranking, then I can tell you that the performance still isn't SOTA. But you can simply use such a neural semantic model for re-ranking.

+ +

We are working on that. So if you wanna know more, PM me.

+",19693,,,,,6/28/2019 6:10,,,,0,,,,CC BY-SA 4.0 +13108,1,,,6/28/2019 8:23,,2,33,"

I want to create a framework that allows GDL to be applied to time-varying graphs. I came up with the Erdos-Renyi model as an example of a time-varying graph.

+ +

GDL for graphs takes node information as input and measures accuracy as the correspondence with ground-truth data. However, how should I deal with time-varying (even random) data and with ground-truth data for such graphs? Or is there a better way? Is it nonsense to use pseudo-coordinates as input, as in the traditional approach to time-invariant graphs? Also, one application of time-varying graphs has been anomaly detection in financial networks. How does this work specifically? Please let me know if there are other application examples.

+",26502,,26502,,6/28/2019 8:43,6/28/2019 8:43,Random graph as input in geometric deep learning on time-varying graph,,0,0,,,,CC BY-SA 4.0 +13109,1,13111,,6/28/2019 9:03,,1,295,"

As mentioned in the title I'm using 300 Dataset example with 500 feature as an input.

+ +

As I'm training the dataset, I found something peculiar. Please look at the data shown below.

+ +
+

Iteration 5000 | Cost: 2.084241e-01 | Training Set Accuracy: 100.000000 | CV Set Accuracy: 85.000000 | Test Set Accuracy: 97.500000

Iteration 3000 | Cost: 2.084241e-01 | Training Set Accuracy: 98.958333 | CV Set Accuracy: 85.000000 | Test Set Accuracy: 97.500000

Iteration 1000 | Cost: 4.017322e-01 | Training Set Accuracy: 96.875000 | CV Set Accuracy: 85.000000 | Test Set Accuracy: 97.500000

Iteration 500 | Cost: 5.515852e-01 | Training Set Accuracy: 95.486111 | CV Set Accuracy: 90.000000 | Test Set Accuracy: 97.500000

Iteration 100 | Cost: 8.413299e-01 | Training Set Accuracy: 90.625000 | CV Set Accuracy: 95.000000 | Test Set Accuracy: 97.500000

Iteration 50 | Cost: 8.483802e-01 | Training Set Accuracy: 90.277778 | CV Set Accuracy: 95.000000 | Test Set Accuracy: 97.500000

+
+ +

The trend is that as the number of iterations increases (and the cost decreases), the training set accuracy increases as expected, but the CV/test set accuracy decreases. My initial thought is that this has to do with a bias/variance issue, but I really can't buy it.

+ +

Anyone know what this entails? Or any reference?

+",25797,,,,,6/28/2019 9:17,"In NN, as iterations of Gradient descent increases, the accuracy of Test/CV set decreases. how can i resolve this?",,1,0,,,,CC BY-SA 4.0 +13110,2,,13106,6/28/2019 9:10,,2,,"

The most effective way to prevent both the forward and backward propagation of exploding is keeping the weights in a small range. The main way this is accomplished is through their initialization.

+ +

For example in the case of He initialization, the authors show (given some assumptions) that the variance of the output of the final layer $L$ of the network is:

+ +

$$ +Var[y_L] = Var[y_1] \left( \prod_{i=2}^L{\frac{1}{2} \, n_l \, Var[w_l]} \right) +$$

+ +

where $n_l$ and $w_l$ are the number of connections and weights of layer $l$. In order to keep the outputs from exploding the product above should not exponentially magnify its inputs. In order to do this the authors elect to initialize the weights so that:

+ +

$$ +\frac{1}{2} \, n_l \, Var[w_l] = 1 +$$

+ +

Now this helps keep the outputs from exploding. The authors then go on to prove that the same strategy helps prevent the gradients from exploding.

+ +

Another similar strategy is the so-called Glorot (or Xavier) initialization. These techniques are extremely effective in helping the models converge!
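A minimal NumPy sketch of what He (fan-in) initialization looks like in practice (the layer sizes are arbitrary assumptions):

```python
# He initialization: Var[w] = 2 / n_in, so that (1/2) * n_l * Var[w_l] = 1
# and the layer outputs neither explode nor vanish as depth grows.
import numpy as np

def he_init(n_in, n_out):
    return np.random.randn(n_out, n_in) * np.sqrt(2.0 / n_in)

W = he_init(4096, 4096)        # arbitrary layer sizes for illustration
print(W.std())                 # ~ sqrt(2 / 4096) ~ 0.022
```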

+",26652,,,,,6/28/2019 9:10,,,,1,,,,CC BY-SA 4.0 +13111,2,,13109,6/28/2019 9:17,,1,,"

Training scores improving (loss decreasing and accuracy increasing) whilst the opposite happens with cross validation and test data is a sign of overfitting to the training data. Your neural network is getting worse at generalising and no amount of further training will improve it - in fact the situation will get worse the more you train.

+ +

This is the main reason you have CV data sets, to show you when this happens before you try to use your model against the test set or real world data. So it is not ""peculiar"" at all, but the CV set doing its job for you, allowing you notice something has gone wrong.

+ +

To improve on the situation, you need to use some form of regularisation. The simplest approach here would be to take your model from around 100 iterations (because it has the best CV score that I can see). This is early stopping, and is a simple, valid regularisation approach.

+ +

Alternatives for neural networks include L2 weight regularisation (also called weight decay) and dropout.
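A hedged Keras-style sketch of how dropout and early stopping are typically wired together (the layer sizes, patience value and the binary output are assumptions, and `X_train`, `y_train`, `X_cv`, `y_cv` are placeholders for your own splits):

```python
# Dropout plus early stopping on the cross-validation loss.
from tensorflow.keras import layers, models, callbacks

model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(500,)),  # 500 input features
    layers.Dropout(0.5),                                      # regularization
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

stop = callbacks.EarlyStopping(monitor="val_loss", patience=20,
                               restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_cv, y_cv),
          epochs=5000, callbacks=[stop])
```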

+ +

In addition to this, your question states that you only have 300 examples, and more features than examples. This makes it really tricky to generalise (and hard to tell whether you have: for instance, it looks like you only have 20 CV examples and 40 test examples; these numbers are very low and prone to giving you inaccurate estimates simply due to the chance of which entries end up in each data set). I recommend you look into K-fold cross-validation in order to get more accurate scores and help you choose the best model for generalisation.

+",1847,,,,,6/28/2019 9:17,,,,3,,,,CC BY-SA 4.0 +13112,5,,,6/28/2019 11:03,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Data_pre-processing.

+",2444,,2444,,6/28/2019 11:03,6/28/2019 11:03,,,,0,,,,CC BY-SA 4.0 +13113,4,,,6/28/2019 11:03,,0,,"For questions related to the concept of data pre-processing, which includes, for example, cleaning, instance selection, normalization, transformation, feature extraction or selection.",2444,,2444,,6/28/2019 11:03,6/28/2019 11:03,,,,0,,,,CC BY-SA 4.0 +13115,2,,1987,6/28/2019 12:21,,0,,"

+ +

This is an example of vanilla Tensorflow playground with no added features and no modifications. +The run for Spiral was between 187 to ~300 Epoch, depending. +I used Lasso Regularization L1 so I could eliminate coefficients. +I decreased the batch size by 1 to keep the output from over fitting. +In my second example I added some noise to the data set then upped the L1 to compensate.

+ +

+",26733,,,,,6/28/2019 12:21,,,,0,,,,CC BY-SA 4.0 +13116,2,,5410,6/28/2019 13:57,,0,,"

In the paper Artificial General Intelligence: Concept, State of the Art, and Future Prospects (2014) Ben Goertzel gives an overview of the AGI field and its progress. He describes the main approaches

+
    +
  • symbolic (e.g. cognitive architectures, like SOAR)
  • +
  • emergentist/subsymbolic (neural networks)
  • +
  • hybrid (combination of symbolic and emergentist)
  • +
  • universalist (AIXI and Godel Machines)
  • +
+

to AGI and metrics to assess human-level intelligence and partial progress.

+

There's also an older book by Cassio Pennachin and Ben Goertzel called Artificial General Intelligence (2007). A chapter of the book, Contemporary Approaches to Artificial General Intelligence, gives a brief history of the AGI field.

+",2444,,2444,,12/13/2021 10:44,12/13/2021 10:44,,,,0,,,,CC BY-SA 4.0 +13117,1,13118,,6/28/2019 14:20,,2,135,"

The Planning Domain Definition Language (PDDL) is known for its capabilities of symbolic planning in the state space. A solver will find a sequence of steps to bring the system from a start state to the goal state. A common example of this is the monkey-and-banana problem. At first, the monkey sits on the ground and, after doing some actions in the scene, the monkey will have reached the banana.

+

The way a PDDL planner works is by analyzing the preconditions and effects of each primitive action. This will answer the question of what happens if a certain action is executed.

+

However, will a PDDL domain description work the other way around as well, not for planning, but for action recognition?

+

I've searched in the literature to get an answer, but all the papers I've found are describing PDDL only as a planning paradigm.

+

My idea is to use the given preconditions and effects as a parser to identify what the monkey is doing, and not what it should do. That means, in the example, the robot ape knows by itself how to reach the banana and the AI system has to monitor the actions. The task is to identify a PDDL action that fits the action performed by the monkey.

+",,user11571,2444,,2/6/2021 13:54,2/6/2021 13:54,Can PDDL be utilized for action recognition?,,2,0,,,,CC BY-SA 4.0 +13118,2,,13117,6/28/2019 16:07,,1,,"

I don't know of any work on this with respect to PDDL, but this is very similar to a conceptual dependency application called SAM (Script Applier Mechanism). Conceptual Dependency (CD) models actions using a number of primitives (which could be seen as equivalent to PDDL primitive actions): PTRANS for physical transfer, PROPEL for application of a physical force to an object, GRASP for grasping an object, etc. Their number varies around 12 or so, depending on the version of CD.

+ +

Stories are described by a sequence of primitive acts, which have slots for actor, object, etc. They are supposed to enable a program to draw inferences about what happens. A common problem when trying to understand stories is that often common knowledge about a situation is omitted; the standard example here is going to a restaurant. It is usually assumed that the listener/reader knows what commonly happens when the protagonist enters a restaurant, so that when they leave without paying this is recognised as something unusual.

+ +

The approach used to solve this problem is to encode such knowledge in scripts, sequences of primitive acts. When triggered, eg by ""Manuel went to a restaurant"", this script is retrieved, and the following actions are looked for in the script. Anything that is recognised is used to fill gaps in the story, eg sitting down at a table, or looking at a menu. This was the task of the SAM program.

+ +

Basically you have a sequence of primitive actions, and you try to recognise a more abstract event ""going to a restaurant"" from that. Obviously you'd need to have a script to recognise, but one could presumably use this to derive a sequence of more generalised events, such as ""retrieving an object from a high place"", or ""standing on top of another object"".

+ +

The theory of using scripts, plans, and goals to describe human reasoning is detailed in Schank, Roger; Abelson, Robert P. (1977). Scripts, plans, goals and understanding: An inquiry into human knowledge structures. New Jersey: Erlbaum. ISBN 0-470-99033-3.

+",2193,,,,,6/28/2019 16:07,,,,1,,,,CC BY-SA 4.0 +13119,1,13123,,6/28/2019 17:54,,4,581,"

There are reinforcement learning papers (e.g. Metacontrol for Adaptive Imagination-Based Optimization) that use (apparently, interchangeably) the term control or action to refer to the effect of the agent on the environment at each time step.

+ +

Is there any difference between the terms control or action or are they (always) used interchangeably? If there is a difference, when is one term used as opposed to the other?

+ +

The term control likely comes from the field of optimal control theory, which is related to reinforcement learning.

+",2444,,2444,,7/23/2019 22:14,7/23/2019 22:14,Is there any difference between a control and an action in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +13120,1,,,6/28/2019 19:45,,4,2247,"

Sometimes when I am training a DC-GAN on an image dataset, similar to the DC-GAN PyTorch example (https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html), either the Generator or Discriminator will get stuck in a large value while the other goes to zero. How should I interpret what is going on right after iteration 1500 in the example loss function image shown below ? Is this an example of mode collapse? Any recommendations for how to make the training more stable? I have tried reducing the learning rate of the Adam optimizer with varying degrees of success. Thanks!

+",21952,,,,,4/8/2020 23:40,What parameters can be tweaked to avoid a generator or discriminator loss collapsing to zero when training a DC-GAN?,,1,0,,,,CC BY-SA 4.0 +13121,1,,,6/29/2019 5:37,,1,22,"

Reference managers like Zotero or Mendeley allow researchers to categorize papers into hierarchical categories called collections. The User navigates through a listing of these collections when filing a new item. The retrieval time grows something like the logarithm of the number of collections; looking for a collection can quickly become a nuisance.

+ +

+ +

Fig 1. A top level list of collections in the Zotero reference manager

+ +

One way to reduce navigation time is to allow users to search collections by name. A complementary solution is to provide a view of the currently ""hot"" collections. The user may interact with a list of suggested collections, or receive relevant completions when typing into a collections search bar.

+ +

This raises a basic learning problem:

+ +
+

Let $K = K_1, \ \dots, \ K_m$ be the sequence of collections the user has visited (possibly augmented with visit times). Let $H_m$ be a set of $n$ collections the user is likely to visit next. How can we construct $H_m$?

+
+ +

A technique that does this might exploit a few important features of this domain:

+ +
    +
  • Project Clusters: Users jump between collections relevant to their current projects
  • +
  • Collection Aging: Users tend to focus on new collections, and forget older ones
  • +
  • Retrieval Cost: There's a tangible cost to the user (time, distraction) when navigating collections; this applies to the reduced view (the technique might keep $n$ as small as possible)
  • +
+ +

Two ideas so far

+ +

LIFO Cache

+ +
+

Reduce $K_{m-1}, K_{m-2},\ \dots$ into the first $n$ unique entries which do not match $K_m$.

+
+ +

This heuristic is very simple to implement and requires no learning. It encompasses clusters and aging given suitably large $n$. But with large $n$ it incurs a retrieval cost of its own.
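A minimal sketch of this heuristic in Python (the names are only for illustration):

def lifo_suggestions(history, n):
    # Return the first n unique collections, most recent first,
    # excluding the collection currently being visited.
    current = history[-1]
    suggestions = []
    for k in reversed(history[:-1]):
        if k != current and k not in suggestions:
            suggestions.append(k)
        if len(suggestions) == n:
            break
    return suggestions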

+ +

Markov Chain/Hidden Markov Model

+ +
+

Use $K$ to train a MC or HMM. Build $H_m$ using estimates from this model.

+
+ +

The simplest version is an order $k\ $ MC transition matrix built using k-gram statistics of $K_i$. This might be sensitive to project clusters, but I don't think it will recognize new collections without a hard coded aging heuristic.

+ +

I'm not clear on how HMMs would be trained here, and I'm not very taken with the $k$-gram MC approach. My next task is to read about MCs/HMMs in context of suggestion systems.

+ +

Other Models

+ +

I am brand new to suggestion systems. Reading leads are quite welcome!

+ +

I would be especially excited about unsupervised techniques, and neural network techniques I could train on a GPU. Apart from improving Zotero, I would like this problem to give me an opportunity to learn about cutting edge techniques.

+ +

Valuable Goals

+ +

An ideal technique would cast light on questions like

+ +
    +
  1. How should we measure the performance of this kind of system? (I suspect cache miss rate is a good metric, as well as the ratio between cache miss rate and cache size)
  2. +
  3. How should we translate these human-centric performance metrics into human independent objective functions for learning?
  4. +
  5. How much better than LIFO can we theoretically expect to do with a more sophisticated technique (say, in terms of cache size for a given cache miss rate)?
  6. +
  7. How can a technique learn patterns like clusters and aging without hand tuned objectives?
  8. +
+ +

I am interested in learning theory and building an implementation, so resources with publicly available code would be preferable. Apart from potentially being overkill for the problem, I would not mind if the final model depends on a GPU.

+ +

Please forgive me for the long and vague question. I wish I had done more reading before posing this question, but I feel a bit stuck! I hope to get unstuck with some good reading resources. Thanks!

+",26746,,,,,6/29/2019 5:37,Predicting Hot Categories In a Reference Manager,,0,0,,,,CC BY-SA 4.0 +13122,2,,12734,6/29/2019 6:20,,0,,"

Assuming that your cross-validation scores (both on the train set and the test set) correctly indicate the model's prediction performance, you should decide which trained model to use based on your validation accuracy only, regardless of whether your model is overfitted or not.

+",17094,,,,,6/29/2019 6:20,,,,1,,,,CC BY-SA 4.0 +13123,2,,13119,6/29/2019 8:03,,3,,"

There's no difference. As they too often do, ML researchers take concepts from other disciplines, conveniently forget to cite sources and change the terminology, leading to much confusion. RL is a textbook example (pun intended). Optimal control researchers have been studying very similar problems long before RL ones, and used standard symbols and terms ($x$ for states, $u$ for controls). Then RL researchers came and changed just about everything.

+ +

See the paper A Tour of Reinforcement Learning: The View from Continuous Control (2018), by Benjamin Recht, which discusses reinforcement learning from a control and optimization perspective.

+ +

See also this tweet https://twitter.com/beenwrekt/status/1134536093980864514?s=21 (by Benjamin Recht) regarding the presentation of Sham Kakade.

+",20874,,2444,,7/21/2019 17:47,7/21/2019 17:47,,,,0,,,,CC BY-SA 4.0 +13124,2,,112,6/29/2019 9:44,,3,,"

Self-driving cars use a combination of both supervised as well as reinforcement learning.

+ +

Huge amounts of sensor data are recorded in real-time. This data can be used to train all sorts of supervised classifiers, e.g. for predicting rain or switching on lights. You can also set up a model to predict pedestrians and other cars. This is supervised learning.

+ +

Reinforcement learning can be used in situations where positive or negative signals appear while driving a car: traffic lights, blinking signals from other vehicles, and street signs in general. These signals can be used to train a reinforcement learning model and decide on the best actions (adjust speed, steer, ...) to get the maximal reward (or, better, to minimize the cost of a crash).

+",26747,,26747,,6/29/2019 15:24,6/29/2019 15:24,,,,3,,,,CC BY-SA 4.0 +13125,1,13127,,6/29/2019 10:46,,2,205,"

I have implemented an epsilon-greedy Monte Carlo reinforcement learning agent like suggested in Sutton and Barto's RL book (page 101). As far as I understood epsilon-greedy agents so far, the evaluation has to stop at some point to exploit the gained knowledge.

+ +

I do not understand, how to stop the evaluation here, because the policy update is linked to epsilon. So just setting epsilon equal to zero at some point does not seem to make sense to me.

+",23288,,2444,,6/29/2019 11:41,8/2/2019 3:03,How to stop evaluation phase in reinforcement learning with epsilon-greedy Monte Carlo agent?,,2,0,,,,CC BY-SA 4.0 +13126,2,,112,6/29/2019 12:03,,3,,"

What you are calling 'analyzing the surroundings' is generally referred to as perception. Self-driving cars sense their surroundings using cameras, radars, lidars often combining or fusing more than one sensor to paint a picture of the environment. A lot of algorithms get used for fusing the sensor data and then deriving an understanding of the surrounding. One such example is semantic scene segmentation of camera data that tries to identify object boundaries in camera images. Typically a fully convolutional neural network is used to achieve this.

+ +

To the best of my knowledge Google does not disclose the exact algorithms anywhere.

+",23273,,,,,6/29/2019 12:03,,,,0,,,,CC BY-SA 4.0 +13127,2,,13125,6/29/2019 12:30,,0,,"
+

I do not understand, how to stop the evaluation here, because the policy update is linked to epsilon. So just setting epsilon equal to zero at some point does not seem to make sense to me.

+
+ +

If by evaluation you mean the act of exploring new paths, it does not need to stop by changing epsilon to 0 instantly. Instead, in order to facilitate the convergence of the algorithm, the epsilon can be progressively decreased until it reaches 0.

+ +

I do not think your question is meant to be related with monte carlo specifically, but if this was not what you wanted to know, please comment.

+ +
+

But would not a epsilon value of 0 lead to an unintended policy update?

+
+ +

No, a value of epsilon equal to 0 will make you always choose your action according to the policy. I think what is confusing is that the ""update"" they make to the policy in the last for loop is not really an update. It is a statement of the probability of taking each action. What it means is the following:

+ +

For action $a$, the probability of taking $a$ in state $S_t$, when $\epsilon$ is 0 is:

+ +
    +
  • 1 if action $a$ is the action with the best value
  • +
  • 0 if action $a$ is not the action with the best value
  • +
+ +

What this means is that when epsilon is 0, the action taken will always be the greedy action.
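In code, $\epsilon$-greedy action selection over the action values of a state could be sketched like this (purely illustrative):

import random

def select_action(q_values, epsilon):
    # q_values: list of action values for the current state
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit (greedy)

With epsilon equal to 0, the random branch is never taken, so the greedy action is always chosen.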

+",24054,,24054,,6/29/2019 14:01,6/29/2019 14:01,,,,2,,,,CC BY-SA 4.0 +13133,1,,,6/30/2019 7:22,,2,80,"

I am learning about the discretization of the state space when applying reinforcement learning to a continuous state space. In this video, at 2:02, the instructor says that one benefit of this approach (radial basis functions over tile coding) is that "it drastically reduces the number of features". I am not able to deduce this in the case of a simple 2D continuous state space.

+

Suppose we are dealing with 2D continuous state space, so any state is a pair $(x,y)$ in the Cartesian space. If we use Tile Coding and select $n$ tiles, the resulting encoding will have $2n$ features, consisting of $n$ discrete valued pairs $(u_1, v_1) \dots (u_n, v_n)$ representing the approximate position of $(x,y)$ in the frames of the $n$ 2-D tiles. If instead we use $m$ 2-D circles and encode using the distance of $(x,y)$ from the center of each circle, we have $m$ (continuous) features.

+

Is there a reason to assume that $m < 2n$?

+

Furthermore, the $m$-dimensional feature vector will again need discretization, so it is unclear to me how this approach uses fewer features.

+",23273,,2444,,1/21/2021 2:43,1/21/2021 2:43,Does coarse coding with radial basis function generate fewer features?,,0,0,,,,CC BY-SA 4.0 +13135,1,,,6/30/2019 15:12,,4,445,"

At a time step $t$, for a state $S_{t}$, the return is defined as the discounted cumulative reward from that time step $t$.

+ +

If an agent is following a policy (which in itself is a probability distribution of choosing a next state $S_{t+1}$ from $S_{t}$), the agent wants to find the value at $S_{t}$ by calculating sort of ""weighted average"" of all the returns from $S_{t}.$ This is called the expected return.

+ +

Is my understanding correct?

+",26764,,2444,,1/22/2021 17:13,2/21/2021 21:04,What is the difference between return and expected return?,,2,0,,,,CC BY-SA 4.0 +13136,2,,13135,6/30/2019 15:39,,1,,"

Formally, the return (also known as the cumulative future discounted reward) can be defined as

+

$$ +G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1}, +$$

+

where $0 \leq \gamma \leq 1$ is the discount factor and $R_{i}$ is the reward at time step $i$. Here $G_t$ and $R_i$ are considered random variables (and r.v.s are usually denoted with capital letters, so I am using the notation used in the book Reinforcement Learning: An Introduction, 2nd edition).

+

The expected return is defined as

+

\begin{align} +v^\pi(s) +&= \mathbb{E}\left[G_t \mid S_t = s \right] \\ +&= \mathbb{E}\left[\sum_{k=0}^\infty \gamma^k R_{t+k+1} \bigm\vert S_t = s \right] +\end{align}

+

In other words, the value of a state $s$ (associated with a policy $\pi$) is equal to the expectation of the return $G_t$ given that $S_t = s$, so $v^\pi(s)$ is defined as a conditional expectation. Note also that the expected value is usually defined with respect to a random variable, which is the case. Note also that $S_t$ is a random variable, while $s$ is a realization of this random variable.

+

A policy is not a probability distribution of choosing the next state. A stochastic policy is a family of a conditional probability distribution over actions given states. There are also deterministic policies. Have a look at this question What is the difference between a stochastic and a deterministic policy? for more details about the definition of stochastic and deterministic policies.

+
+

If an agent is following a policy, the agent wants to find the value at $S_{t}$ by calculating a sort of "weighted average" of all the returns from $S_{t}.$ This is called the expected return.

+
+

In the case of Monte Carlo Prediction, the value of a state associated with a specific policy, that is, the expected value of the return given a state is approximated with a finite (weighted) average. See e.g. What is the difference between First-Visit Monte-Carlo and Every-Visit Monte-Carlo Policy Evaluation?. Furthermore, note that the expectation of a discrete random variable is defined as a weighted average.
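To make this concrete, here is a small sketch (assuming the rewards of each sampled episode are collected in a list) of the return for $t = 0$ and of the Monte Carlo estimate of $v^\pi(s)$ as an average of such returns:

def discounted_return(rewards, gamma):
    # G_0 given the rewards R_1, R_2, ... of one episode
    return sum(gamma ** k * r for k, r in enumerate(rewards))

def mc_value_estimate(episode_rewards, gamma):
    # episode_rewards: list of reward lists, all for episodes starting from the same state s
    returns = [discounted_return(rewards, gamma) for rewards in episode_rewards]
    return sum(returns) / len(returns)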

+",2444,,2444,,1/22/2021 20:56,1/22/2021 20:56,,,,0,,,,CC BY-SA 4.0 +13137,5,,,6/30/2019 17:56,,0,,"

For more info, see e.g. http://www.incompleteideas.net/book/first/ebook/node44.html.

+",2444,,2444,,6/30/2019 17:56,6/30/2019 17:56,,,,0,,,,CC BY-SA 4.0 +13138,4,,,6/30/2019 17:56,,0,,"For questions related to the value iteration algorithm, which is a dynamic programming (DP) algorithm used to solve an MDP, that is, it is used to find a policy given the transition and reward functions of the MDP. Value iteration is related to another DP algorithm called policy iteration.",2444,,2444,,6/30/2019 17:56,6/30/2019 17:56,,,,0,,,,CC BY-SA 4.0 +13139,1,,,6/30/2019 19:27,,1,35,"

I have a set of polygons for each image. Those polygons consist of four $x$ and $y$ coordinates. For each image, I need to extract the ones of interest. This could be formulated as an Image Segmentation task where, for example I want to extract the objects of interest, here: cars.

+ +

+ +

But since I already get the polygons through a different part of my pipeline I would like to create a simpler machine learning model. The input will not be the image but only the coordinates of the polygons. +In this model each sample should consist of multiple polygons (those can vary in number) and the model should output the ones of interest.

+ +

In my mind, I formulated the problem as follows:

+ +
    +
  1. The polygons are the features. Problem: Samples will have varying number of features.
  2. +
  3. The output will consist of the indices of the ""features"" (polygons) I am interested in.
  4. +
+ +

First, I created a decision tree and classified each coordinate as $0$ (not interested in) or $1$ (of interest). But, by doing this, I don't consider the other coordinates that belong to the image. The information of the surrounding is lost.

+ +

Does someone have an idea of how to model this problem without using Image Segmentation?

+",23063,,23063,,7/3/2019 4:40,7/3/2019 4:40,Which model to use when selecting objects of interest?,,0,1,,,,CC BY-SA 4.0 +13140,1,,,6/30/2019 21:33,,1,49,"

I have a unique implementation where I have to process videos with dynamic frame rates (that is the number of frames is different for each video in a batch). I am stacking all the frames in a single tensor and processing the same from there on. This works fine with Conv2D layer but creating a 2D tensor (batch_size, features) by a flattening operation this has to be fed to a Dense layer. I can't find a suitable way to implement this.

+ +

For more information on why it should be like this, kindly explore this link. Instead of the MNIST images, I have multiple videos in a single bag, each with a variable number of frames.

+",17068,,12853,,7/1/2019 8:15,7/1/2019 8:15,Dynamic frames processing with CNN LSTM combination or otherwise,,0,0,,,,CC BY-SA 4.0 +13141,1,,,7/1/2019 1:21,,4,54,"

What are some deep learning models that can use supplementary information other than RGB channels for image segmentation?

+

+

For example, imagine a poorly shot image of a river (blue) that shows a gap, and the supplementary information is detailed flow directions (arrows), which helps to show the river's true shape (no gap in reality). To get the river shape, most image segmentation models I see, such as U-Net, only use RGB channels.

+

Are there any neural network models that can use this kind of auxiliary information along with RGB channels during training for the image segmentation task?

+",26767,,2444,,12/19/2021 18:49,12/19/2021 18:49,What are some neural network models that can use auxiliary info during training for image segmentation?,,0,0,,,,CC BY-SA 4.0 +13143,1,,,7/1/2019 4:13,,2,207,"

I have constructed a CNN that utilizes max-pooling layers. I have found with these layers that, should I remove them, my network performs ideally with every output and gradient at each layer having a variance close to 1. However, if they are included, the variance skyrockets.

+ +

This makes sense, of course, as a max-pooling layer takes the maximum of an area, which must incur a positive bias as larger numbers are chosen.

+ +

I would just like to know what methods are typically used to combat this.

+",26726,,30725,,11/20/2019 1:55,11/20/2019 1:55,How is the bias caused by a max pooling layer overcome?,,0,1,,,,CC BY-SA 4.0 +13144,1,,,7/1/2019 8:55,,2,28,"

I'm trying to implement this approach for object detection and tracking.

+ +

In this approach, the first step is to voxelize each frame to construct a 3D tensor; the second step is to append multiple voxels at a time along a new axis to create a 4D tensor.

+ +

What I want to understand is how to voxelize multiple frames at a time and append them together.

+",26777,,2444,,7/1/2019 11:04,7/1/2019 11:04,How to voxelize multiple frames at the time and append them together?,,0,5,,,,CC BY-SA 4.0 +13145,1,,,7/1/2019 10:44,,2,27,"

I've been struggling to analyze my NN model. I've studied Andrew Ng's course, but there are some results that cannot be explained by the course. Is there any useful source on the high bias vs high variance issue in neural networks?

+",25797,,25797,,7/2/2019 0:29,7/2/2019 0:29,Is there any useful source on High Bias vs High variance issue on Neural Network?,,0,2,,,,CC BY-SA 4.0 +13147,1,13151,,7/1/2019 12:11,,4,1500,"

I'm trying to apply a DQN to a stochastic environment, but I'm having trouble getting it to converge.

+

I found some similar questions asked here, but no solutions yet.

+

I can easily get the DQN to converge on a static environment, but I am having trouble with a dynamic environment where the end is not given.

+

Example: I have made a really simple model of the Frozen Lake (without any holes) - simply navigation from A to B. This navigation works fine when A and B are always the same, but when I shuffle the position of A or B for each session, the DQN cannot converge properly.

+

I am using the grid (3x3, 4x4 sizes) as input neurons. Each with "0" value. I assign the current position "0.5" and the end position "1". 4x4 grid gives us 16 input neurons. Example of 3x3 grid:

+
 0.5  0  0 
+  0   0  0 
+  0   0  1
+
+

I have a few questions in this regard:

+
    +
  • When training the DQN, how do I apply Q-values? (Or do I really need to? I'm not sure how to correctly "reward" the network. I'm not using any adversarial network or target network at this point.)

    +
  • +
  • I train the network using only a short replay memory of the last move, or the last N moves that led to success. Is this the right way to approach this?

    +
  • +
  • I use Keras, and am simply training the network every time it does something right - and ignoring failed attempts. - But is this anywhere near the right approach?

    +
  • +
  • Am I missing something else?

    +
  • +
+

Perhaps I should note that my math skills are not that strong, but I try my best.

+

Any input is appreciated.

+",26768,,2444,,7/6/2020 0:11,7/6/2020 0:11,Why does the DQN not converge when the start or goal states can change dynamically?,,1,4,,,,CC BY-SA 4.0 +13149,1,13183,,7/1/2019 14:35,,3,76,"

I'm looking for examples of time-varying graph-structured data for time-varying graph CNNs. First, I came up with the idea of infection network. Is there anything more? If possible, I want data that can be easily obtained online.

+",26502,,2444,,7/1/2019 18:50,7/3/2019 8:18,Examples of time-varying graph-structured data in real world,,1,2,,,,CC BY-SA 4.0 +13150,1,,,7/1/2019 14:50,,4,1740,"

I am trying to implement CTC loss in TensorFlow, but their documentation is pretty limited. So I am not sure how to approach the problem. I found a good example in Theano.

+

Are any other resources that explain the CTC loss?

+

I am also trying to understand how its forward-backward algorithm works and what the beam decoder in the case of the CTC loss is.

+",26787,,2444,,12/23/2021 17:22,12/23/2021 17:22,How does the CTC loss work?,,1,0,,,,CC BY-SA 4.0 +13151,2,,13147,7/1/2019 16:28,,3,,"

Your problem is not that the environment is stochastic or dynamic. In fact you are using the terms slightly incorrectly. These terms do not usually refer to the fact that starting state can differ or goal locations can move episode-by-episode. They typically refer to behaviour of state transitions.

+ +

Although in your case you could view the initial state as stochastic, this is not a big deal, and not likely to be the cause of your problems.

+ +

From your questions, it seems to me that you are not really running a DQN algorithm yet. It is not 100% clear what your neural network is predicting, but my best guess is that you have 4 outputs to select ""best"" action and are treating the neural network as a classifier. This training approach seems closest to Cross Entropy Method (CEM) due to how you are selecting ""successful"" navigation only.

+ +
+

When training the DQN, how do i apply Q-values?

+
+ +

This question is the most revealing that you are not using DQN. This is too complex to describe in full in an answer, but the basics are:

+ +
    +
  • Your neural network (NN) should be estimating Q values. Typically in DQN, you input the state and the NN outputs an array of estimates for Q of each action (although other architectures are possible). This should be a regression problem, so last layer of network needs to be linear.

  • +
  • Current best guess of optimal policy is to run the NN forward and find the maximising action.

  • +
  • In DQN you also have a ""behaviour policy"" - a simple and popular choice is to use $\epsilon$-greedy action selection, which just means to take the maximising action (as calculated above), except with probability $\epsilon$ (some small value, e.g. 0.1) to take a random action.

  • +
  • To figure out your training data to improve the NN, you need Q values to calculate a TD Target. In single-step Q learning that would be $r + \gamma Q(s',a*)$ where $r$ is the immediate reward $s'$ is the next state seen, and $a*$ is the maximising action in that state. You should force $Q(s', a*) = 0$ (i.e. not use the NN) if $s'$ is a terminal state.

  • +
+ +

This means you typically need to work with Q values in 2 or 3 places in your inner loop. Your inner loop should look something like this per time step, given a current state current_state:

+ +
# Figure out how to act
+current_q_values = NN_predict(current_state)
+current_action = argmax(current_q_values)
+if random() < epsilon:
+  current_action = random_action()
+
+# Take an action
+reward, next_state, done = call_environment(current_state, current_action)
+
+# Remember what happened
+store_in_replay_memory(current_state, current_action, reward, next_state, done)
+
+# Train the NN from memory
+repeat N times: # This can be vectorised for efficiency
+  mem_state, mem_action, mem_reward, mem_next_state, mem_done = sample_replay_memory()
+  mem_q_values = NN_predict(mem_next_state)
+  mem_max_action = argmax(mem_q_values)
+  if mem_done:
+    td_target = mem_reward
+  else:
+    td_target = mem_reward + gamma * mem_q_values[mem_max_action]
+  target_q_values = NN_predict(mem_state)
+  target_q_values[mem_action] = td_target
+  NN_train(mem_state, target_q_values)
+
+# Maybe end an episode (this can include generating new map)
+if done:
+  current_state = reset_environment()
+else:
+  current_state = next_state
+
+ +

You can see above, NN_predict is called three different times to get Q values in slightly different contexts. I have ignored extras such as using a separate target network.

+ +
+

I train the network using only a short replay memory of the last move, or the last N moves that led to success. Is this the right way to approach this?

+
+ +

It is important to include moves that lead to failure so that the NN learns the difference. Typically you will need a replay memory of anywhere from a few hundred to a few hundred thousand entries. You could perhaps get away with a few hundred for your simple problem. The idea is to use this training data a bit like a dataset from supervised learning.

+ +
+

I use Keras, and am simply training the network every time it does something right - and ignoring failed attempts. - But is this anywhere near the right approach?

+
+ +

This is not the right approach for DQN, although perhaps could be considered a crude version of CEM.

+",1847,,,,,7/1/2019 16:28,,,,1,,,,CC BY-SA 4.0 +13152,1,13158,,7/1/2019 16:32,,3,708,"

I need to cluster my points into an unknown number of clusters, given the minimal Euclidean distance R between two clusters. Any two clusters that are closer than this minimal distance should be merged and treated as one.

+ +

I could implement a loop starting from the two clusters and going up until I observe the pair of clusters that are closer to each other than my minimal distance. The upper boundary of the loop is the number of points we need to cluster.

+ +

Are there any well-known algorithms and approaches to estimate the approximate number of centroids from the set of points and the required minimal distance between centroids?

+ +

I am currently using FAISS under Python, but with the right idea I could also implement in C myself.

+",26789,,26789,,7/1/2019 16:38,7/1/2019 18:45,How to compute the number of centroids for K-means clustering algorithm given minimal distance?,,2,0,,,,CC BY-SA 4.0 +13153,1,,,7/1/2019 16:39,,2,19,"

Is this a scenario that would work well for an ML/pattern recognition model, or would it be easier/faster to just filter from a large DB?

+ +

I am looking to create a system that will allow users to identify the appropriate product by specifying certain constraints and preferred features.

+ +

There are millions of possible product configurations. Let's pretend it's boxes.

+ +

Product Options:

+ +
    +
  • Size (From 1mm up to 1m) in 1mm increments
  • +
  • Color: choice of 10 colors
  • +
  • Material: choice of 3, wood,metal, plastic
  • +
+ +

Constraints:

+ +
    +
  • Wood is only available in centimeter units
  • +
  • Red is only available in 500 mm and greater
  • +
  • Wood is the preferred material
  • +
  • Blue is the preferred color
  • +
+ +

So, we have 30,000 (1000*10*3) possible options. +Of those, many are not viable such as 533 mm-Red-Wood

+ +

but these configurations similar to the request are possible.

+ +
    +
  • 533 mm-Red-Plastic
  • +
  • 530 mm-Red-Wood
  • +
  • 540 mm-Red-Wood
  • +
+ +

Notes: +Our current rules- and code-based tool can take anywhere from 0.5 to 2 minutes to identify the preferred configuration. +We can generate a list of all possible configs and whether they are valid or not. +We estimate 30,000,000 possible configs. +It takes around 0.5 seconds to validate a config, so with enough computing power we expect we could do 30M in a few days.

+",26783,,,,,7/1/2019 16:39,Product Configuration based on user selection of features and other requirements,,0,0,,,,CC BY-SA 4.0 +13154,5,,,7/1/2019 16:47,,0,,,-1,,-1,,7/1/2019 16:47,7/1/2019 16:47,,,,0,,,,CC BY-SA 4.0 +13155,4,,,7/1/2019 16:47,,0,,"K-means algorithm groups given set of points (or vectors) into clusters, finding the groups of points that are closer together (by Euclidean distance).",26789,,26789,,7/1/2019 20:17,7/1/2019 20:17,,,,0,,,,CC BY-SA 4.0 +13156,1,13168,,7/1/2019 17:02,,8,1374,"

I was reading the AlphaZero paper Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, and it seems they don't mention Q-Learning anywhere.

+ +

So does AZ use Q-Learning on the results of self-play or just a Supervised Learning?

+ +

If it's a Supervised Learning, then why is it said that AZ uses Reinforcement Learning? Is ""reinforcement"" part primarily a result of using Monte-Carlo Tree Search?

+",26791,,2444,,7/1/2019 20:27,7/2/2019 10:16,Does AlphaZero use Q-Learning?,,1,0,0,,,CC BY-SA 4.0 +13157,2,,13152,7/1/2019 18:18,,1,,"

If you look at Kaufman & Rousseeuw (1990), Finding Groups in Data, they describe an algorithm to evaluate the quality of clusters in agglomerative clustering. You run the clustering algorithm with a specific value k for the number of clusters you want, and that routine then gives you a score to reflect the cohesion of the clustering. If you then cluster again with a different value for k, you will get another score. You repeat this process until you have found a maximum score, and then you have the clustering with the optimum number of clusters.

+",2193,,,,,7/1/2019 18:18,,,,0,,,,CC BY-SA 4.0 +13158,2,,13152,7/1/2019 18:45,,3,,"

Yes, the silhouette method (which is implemented in sklearn as silhouette_score) is commonly used to assess the quality of clusters produced by any clustering algorithm (including $k$-means or any hierarchical clustering algorithm). Roughly, you can compute the silhouette value for different $k$, then you would pick the $k$ with the highest silhouette value.
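A rough sketch of that procedure with scikit-learn (X is assumed to be your array of points):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Try several values of k and keep the one with the highest silhouette value.
best_k, best_score = None, -1.0
for k in range(2, 20):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score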

+",2444,,,,,7/1/2019 18:45,,,,1,,,,CC BY-SA 4.0 +13159,2,,13150,7/1/2019 19:11,,2,,"

Connectionist Temporal Classification (CTC) can be useful for sequence modeling problems, like speech recognition and handwritten recognition, where the input and output sequences might have different sizes, so there's the problem of aligning the sequences. For instance, in speech recognition, not all sounds in speech correspond to a character, so how do we know if a sound should be converted to one of the possible chars? Moreover, we assume that we don't have a training dataset where the input and output sequences are aligned. The creation/labeling of such a dataset would be quite consuming. That's why CTC was introduced.

+

So, mathematically, the speech/handwritten recognition task can be written as

+

$$ +Y^* = \text{argmax}_Y p(Y \mid X), +$$ +where $Y^* = [y_1, \dots, y_M]$ is the ideal output sequence for $X = [x_1, \dots, x_T]$. Note that $T$ may be different from $M$.

+

Now, let's say that you have a speech $X = [x_1, \dots, x_N]$ and the output should be the sequence $Y^* = [h, e, l, l, o]$. One naive approach to solve this problem would be, for each input $x_i$, we would predict the most likely char, so we could end up with an output sequence like $\hat{Y} = [h, e, e, l, l, l, o]$ (when $T = 7$ and $M = 5$), then we could remove all duplicates, so we would end up with $\hat{Y}' = [h, e, l, o]$. However, this is not the correct approach, because, as you can see, there are words where the same letter appears twice in a row.

+

To solve this problem, we can introduce a special character, which we can denote by $\epsilon$. So, in this case, the idea is that $\epsilon$ should be predicted around exactly two (and not more or less) "l". Once the sequence is predicted, we can remove $\epsilon$ and, hopefully, we have a valid word, rather than a word like "helo" or "hellllo".
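As a small illustration of this collapsing rule (merge repeated characters first, then drop the blanks), here is a sketch in Python; the blank symbol is represented by the string 'eps' here:

def collapse(alignment, blank='eps'):
    # First merge runs of repeated characters, then remove the blank symbol.
    merged = [c for i, c in enumerate(alignment) if i == 0 or c != alignment[i - 1]]
    return [c for c in merged if c != blank]

# collapse(['h', 'e', 'l', 'eps', 'l', 'l', 'o'])  ->  ['h', 'e', 'l', 'l', 'o']
# collapse(['h', 'e', 'l', 'l', 'o'])              ->  ['h', 'e', 'l', 'o']  (no blank between the two 'l')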

+

The idea of CTC is, for each $x_i$, the neural network produces a probability distribution over the possible chars, which we can denote by $p_t(a_t \mid X)$. In the example above, a probability distribution over $\{h, e, l, o, \epsilon \}$. So, for example, the probability vector for $x_1$ could be $p_1(a_1 \mid X) = [0.6, 0.1, 0.1, 0.1, 0.1]$. Given all probability vectors $p_t(a_t \mid X)$, for $t=1, \dots, T$, we can compute the probability of an alignment (a specific output sequence). Then we marginalize over the set of alignments.

+

To reflect these ideas, the CTC loss function is defined as follows

+

$$ +p(Y \mid X)=\underbrace{\sum_{A \in \mathcal{A}_{X, Y}} }_{\text{Marginalization}\\\text{over set of}\\ \text{alignments}} \underbrace{\prod_{t=1}^{T} p_{t}\left(a_{t} \mid X\right)}_{\text{Probability}\\\text{for a single}\\\text{alignment}} +$$ +We can then use an RNN to model $p_t(a_t \mid X)$ given that RNNs are good for sequence prediction.

+

Now, another problem is that there can be many alignments, so the computation of the loss may be expensive if done naively. To solve this problem, you can use a dynamic programming algorithm, the CTC forward-backward algorithm. The details of how this is done, how the gradient of the CTC loss is computed, how inference is done in this context (including the details of beam search), and other details can be found in this nice article Sequence Modeling With CTC (2017) by Awni Hannun, which this answer is based on.

+

You can also read the original paper Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks (2006), by Alex Graves et al., mentioned in the linked TensorFlow documentation, which presents and explains the CTC loss and the CTC forward-backward algorithm (in section 4.1).

+",2444,,2444,,12/23/2021 15:51,12/23/2021 15:51,,,,0,,,,CC BY-SA 4.0 +13162,2,,11888,7/1/2019 23:53,,1,,"

Update: I rewrote the first part due to major mistake in the first version

+ +

Notice: The notation $P^k$ from Eq.$(20)$ and $(21)$ in the paper does not mean the $k$-th power of some $P$. Instead, $P^k$ should be thought of as the $k$-step transition probability of a non-homogeneous Markov chain.

+ +
    +
  1. According to the CPO paper, the discounted future state distribution is defined as +$$ +d_\pi(s)=(1-\gamma)\sum_{k=0}^\infty \gamma^kProb(s_k=s|\pi,s_0)\mu(s_0)\tag{1} +$$ +Consider function form. Let $Prob_{\pi,k}$ denote the $k$ step probability transition operator induced by $\pi$; here $\pi$ can be a hierarchical policy, $k$ can be larger than $c$. +$$ +d_\pi=(1-\gamma)\sum_{k=0}^\infty \gamma^kProb_{\pi,k}\mu\tag{2} +$$ +Now apply the similar definition as Eq.$(20)$ and $(21)$ in the paper, let $P^k_\pi$ denote the $k$ step transition probability of the non-homogeneous Markov chain induced by the low level policy, with $k$ smaller or equal to $c$. +\begin{align} +d_\pi&=(1-\gamma)\sum_{m=0}^\infty\gamma^{mc}\sum_{k=0}^{c-1}\gamma^kP^k_\pi(P^c_\pi)^m\mu\\ +&=(1-\gamma)\sum_{k=0}^{c-1}\gamma^kP^k_\pi(\sum_{m=0}^\infty\gamma^{mc}(P^c_\pi)^m)\mu\\ +&=(1-\gamma)A_\pi(I-\gamma^cP^c)^{-1}\mu\tag{3} +\end{align} +which is exactly the form of Eq.$(22)$ and $(23)$ in the paper, with $A_\pi$ defined similar as Eq.$(24)$ and $(25)$.
  2. +
  3. The ""every-$c$-step discounted state frequency"" builds on $(3)$, but it lumps the $c$ steps into one ""high level"" step where the discount factor is $\gamma^c$ and the transition operator is $P_\pi^c$. Starting with $(3)$, replace $c$ with 1, we get the ""every one step future state distribution"" +$$ +d_\pi=(1-\gamma)(I-\gamma P_\pi)^{-1}\mu\tag{4} +$$ +Then replace $\gamma$ and $P_\pi$ in $(4)$ with $\gamma^c$ and $P_\pi^c$, we get the ""every-$c$-step discounted state frequency"", or ""every-$c$-step future state distribution"" +$$ +d_\pi^c=(1-\gamma^c)(I-\gamma^c P_\pi^c)^{-1}\mu\tag{5} +$$ +By the way, I read your blogpost on this paper. It's very helpful for me, thank you!
  4. +
+",26796,,26796,,7/11/2019 1:59,7/11/2019 1:59,,,,3,,,,CC BY-SA 4.0 +13163,1,,,7/2/2019 3:55,,2,79,"

In this paper, the authors refer to the application of time-varying graphs as an open problem. And they say it will be useful for anomaly detection in financial networks, etc. But why is that useful?

+",26502,,2444,,7/4/2019 10:17,7/4/2019 10:17,Why is graph convolution network in time-varying graphs useful for anomaly detection?,,0,0,,,,CC BY-SA 4.0 +13164,2,,12365,7/2/2019 7:05,,1,,"

Most of the algorithms seem to access the training set sequentially, so the images need not be loaded into memory all at the same time.

+ +

It is technically possible to build a workstation with 1 TB of RAM or more, using a server barebone in a tower form factor (see this, for instance; it would also support multiple GPUs), but this only makes sense if image loading is a bottleneck. Current SSDs are rather fast, so you need to measure this before spending money on such a beast.

+",26789,,,,,7/2/2019 7:05,,,,0,,,,CC BY-SA 4.0 +13168,2,,13156,7/2/2019 10:16,,7,,"

Note: you mentioned in the comments that you are reading the old, pre-print version of the paper describing AlphaZero on arXiv. My answer will be for the ""official"", peer-reviewed, more recent publication in Science (which nbro linked to in his comment). I'm not only focusing on the official version of the paper just because it is official, but also because I found it to be much better / more complete, and I would recommend reading it instead of the preprint if you are able to (I understand it's behind a paywall so it may be difficult for some to get access). The answer to this specific question would probably be identical for the arXiv version anyway.

+ +
+ +

No, AlphaZero does not use $Q$-learning.

+ +

The neural network in AlphaZero is trained to minimise the following loss function:

+ +

$$(z - \nu)^2 - \pi^{\top} \log \mathbf{p} + c \| \theta \|^2,$$

+ +

where:

+ +
    +
  • $z \in \{-1, 0, +1\}$ is the real outcome observed in a game of self-play.
  • +
  • $\nu$ is a predicted outcome / value.
  • +
  • $\pi$ is a distribution over actions derived from the visit counts of MCTS.
  • +
  • $\mathbf{p}$ is the distribution over actions output by the network / the policy.
  • +
  • $c \| \theta \|^2$ is a regularisation term, not really interesting for this question/answer.
  • +
+ +

In this loss function, the term $(z - \nu)^2$ is exactly what we would have if we were performing plain Monte-Carlo updates, based on Monte-Carlo returns, in a traditional Reinforcemet Learning setting (with function approximation). Note that there is no ""bootstrapping"", no combination of a single-step observed reward plus predicted future rewards, as we would have in $Q$-learning. We play out all the way until the end of a game (an ""episode""), and use the final outcome $z$ for our updates to our value function. That's what makes this Monte-Carlo learning, rather than $Q$-learning.

+ +

As for the $- \pi^{\top} \log \mathbf{p}$ term, this is definitely not $Q$-learning (we're directly learning a policy in this part, not a value function).
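As a rough numerical sketch of the loss above (this is not DeepMind's implementation, just an illustration with NumPy; theta stands for the flattened network parameters):

import numpy as np

def alphazero_loss(z, v, pi, p, theta, c=1e-4):
    # z: game outcome in {-1, 0, +1}; v: predicted value
    # pi: MCTS visit-count distribution; p: policy output by the network
    value_loss = (z - v) ** 2               # Monte-Carlo value target, no bootstrapping
    policy_loss = -np.dot(pi, np.log(p))    # cross-entropy between MCTS policy and network policy
    reg = c * np.sum(theta ** 2)            # L2 regularisation term
    return value_loss + policy_loss + reg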

+ +
+

If it's a Supervised Learning, then why is it said that AZ uses Reinforcement Learning? Is ""reinforcement"" part primarily a result of using Monte-Carlo Tree Search?

+
+ +

The policy learning looks very similar to Supervised Learning; indeed it's the cross-entropy loss which is frequently used in Supervised Learning (classification) settings. I'd still argue it is Reinforcement Learning, because the update targets are completely determined by self-play, by the agent's own experience. There are no training targets / labels that are provided externally (i.e. no learning from a database of human expert games, as was done in 2016 in AlphaGo).

+",1641,,,,,7/2/2019 10:16,,,,2,,,,CC BY-SA 4.0 +13172,1,,,7/2/2019 23:02,,6,259,"

The basic seq-2-seq model consists of 2 parts: a recurrent encoder that compresses a sequence to a vector and decoder that unrolls the vector into the output sequence:

+ +

+ +

Why is the output, w, x, y, z of the decoder used as its input? Shouldn't the hidden state of the RNN from the previous timestamps be enough?

+",25836,,2444,,7/2/2019 23:19,12/12/2021 12:08,"In sequence-to-sequence, why is the output of the decoder used as its input?",,3,0,0,,,CC BY-SA 4.0 +13173,2,,12266,7/3/2019 0:05,,5,,"

The previous answer has given a good insight into the difference between two areas. I would like to give more examples.

+ +

Semi-supervised learning works by improving the data set with new examples. There are iterative systems where we train a model on a given dataset and improve the model further after deploying it in the real world, by adding real-world interactions and their outcomes to further train the system.

+ +

Self-supervised learning is becoming a very hot topic these days. It has the ability to understand the underlying properties of a given dataset with some kind of supervisory signal (not exactly a label). Self-attention, introduced in Transformers, is a popular modern-day form of self-supervised learning. Also, check this tweet from Yann LeCun.

+",14909,,,,,7/3/2019 0:05,,,,0,,,,CC BY-SA 4.0 +13175,2,,13125,7/3/2019 2:44,,0,,"

Before answering how to stop the evaluation phase and begin exploitation of those results, one must first answer when to stop it whereby the balance the project stakeholder wants between quality and cost is found. You won't always find that in books discussing pure research, so your question is an excellent one.

+ +

The algorithm the authors are discussing on that page (101) is based on the policy improvement theorem on page 78, and the appearance of the endless loop in the algorithm in the pseudo-code line ""Repeat forever (for each episode)"" is obviously worse than useless in a data center if the loop is not terminated, unless it is multi-agent, exploiting multiple threads, processes, virtual hosts, hardware accelerators, cores, or hosts, and the improvements are accessed for exploitation independently or symbiotically using some scheme.

+ +

In a deployed robot, an endless loop often has a legitimate a use case. ""Repeat until shutdown,"" might be appropriate in a production algorithm or hardware embodiment if the robot's goal is, ""Keep the living room clean."" One must always try to place this theory in context when taking pure research and considering the applied research that may stem from it.

+ +

In real product and service development environments, how the balance is struck between quality of action and cost of determining it depends upon the problem size, expectations of the user or business, and the architecture of the computational resources you have. Consider some of these factors in more detail.

+ +
    +
  • Maximum number of rounds of evaluation-exploitation cycles
  • +
  • Requirements for precision in terms of optimality
  • +
  • Requirements for reliability in terms of completion
  • +
  • Distribution of the number branches from nodes
  • +
  • Distribution of lengths of possible action traversal sequences to the goal
  • +
  • Average cost (in time and energy) of each evaluation
  • +
  • Average cost (in time and energy) of each exploitation
  • +
+ +

In a single thread, single core, von Neumann architecture, as is sometimes the case in an embedded environment, evaluation and exploitation are time sliced. In such a case, evaluation should stop and exploitation should begin when the probability that further evaluation will produce an improved result drops below the cost of further evaluation, based on some estimation of return and cost. This is a function of the above factors, although not a linear one.

+ +

We have considered training an LSTM network to determine the function in the epoch domain (roughly related to the time domain), although it is low on our priority list.

+ +

In an embedded process, a function that approximates return on further evaluation cost can be constructed, based on statistics gathered up to that point in current learning or over a longer period of operations. The function should be fast and inexpensive. In each cycle within the evaluation phase, the function can be evaluated and compared against a configurable probability threshold. Its configuration value can be an educated guess based on the perception of the value of further path exploration.

+ +

In a simulation environment, when more computing resources for parallel processes, exploiting OS or hardware facilities for that, the time slicing is either opaque or nonexistent respectively. In those cases, continuous improvement may be unbounded because the state-action graph is not finite.

+",4302,,,,,7/3/2019 2:44,,,,0,,,,CC BY-SA 4.0 +13177,2,,13031,7/3/2019 3:23,,0,,"

You will need one of two things or both.

+ +
    +
  • A theoretical model of grade probability distribution already verified against data that contains the features listed
  • +
  • A data set mapping the features listed to grades (labels) for training an artificial network
  • +
+ +

The features can be specified more formally

+ +
    +
  • Chosen topic --- from a finite list of topics, encoded by integer (numbered strings)
  • +
  • Performance --- low, medium, or high --- it is not clear from where this judgment comes, but that would need to be clarified and also included in either the verified theoretical model or among the features of the training data
  • +
  • Does a student work? --- hours per week --- note that the flexibility of the schedule is not as critical as the hours of the 168 hours of each week not available for sleep, self-care, class time, or study
  • +
  • Have a student ever gotten through a add course? --- binary --- this question is unclear in its current phrasing
  • +
  • Grade --- converted to numeric value from 0.0 to 4.0, which can be represented as an integer from 00 to 40
  • +
+ +

The difficult part will be finding either the verified theoretical model or the data set for training or both.

+ +

If a verified theoretical model is found, apply a gradient descent implementation to tune that model, which is a statistics and conversion problem only partly related to AI and machine learning. Otherwise using a model free artificial network will likely allow you to reach your objective. A simple feed forward network with gradient descent and back-propagation and three layers should suffice.

+ +

There are many examples in Python, Java, and other languages available online, usually in conjunction with a library that the examples leverage for computation and instructions on how to install the software. There are other questions and answers in this forum that explain what books to buy to get started. In fact, there is a tag specifically for that purpose: https://ai.stackexchange.com/questions/tagged/getting-started.

+",4302,,,,,7/3/2019 3:23,,,,0,,,,CC BY-SA 4.0 +13178,2,,12970,7/3/2019 3:52,,1,,"

There are 3,810 articles easily found in an academic article search. These are three examples.

+ +
    +
  • Neuroevolution for reinforcement learning using evolution strategies — C Igel — The 2003 Congress on Evolutionary Computation, 2003, ieeexplore.ieee.org — ""We apply the CMA-ES, an evolution strategy which efficiently adapts the covariance matrix of the mutation distribution, to the optimization of the weights of neural networks for solving reinforcement learning problems. It turns out that the topology of the networks considerably ...""

  • +
  • Neuroevolution strategies for episodic reinforcement learning — V Heidrich-Meisner, C Igel — Journal of Algorithms, 2009, Elsevier — ""Because of their convincing performance, there is a growing interest in using evolutionary algorithms for reinforcement learning. We propose learning of neural network policies by the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric ...""

  • +
  • Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning — FP Such, V Madhavan, E Conti, J Lehman — 2017, arxiv.org — ""Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement ...""

  • +
+ +

To clarify the strategy proposed, we can rewrite the approach as a set of design features. Let's not assume that topology formation is based on weights, which diminishes the concept of morphology and topology. If we were to reduce the model to a traditional artificial network of orthogonal topology, it would then not be neuroevolution; it would then be basic machine learning.

+ +
    +
  • Search for the best network topology through neuroevolution
  • +
  • Train the best candidate selected above through Q-learning
  • +
+ +

The second item doesn't seem address the relationship between the inputs, outputs, and objectives of neuroevolution designs and those of Q-learning and other reinforcement learning strategies. Q-learning algorithms are not designed to run on feed forward networks in general and certainly not easily mapped to the topologies that may form during neuroevolution. There are probably billions, if not an infinite number, of ways to combine the two strategies, but simplistic concatenation of the two processes is not possible without further research and consideration of how they will interrelate collaboratively toward program goals.

+ +

It may be useful to search for articles, study, and then formulate your research trajectory. It is recommendable to start with learning about neuroevolution and reinforcement independently, and then start reading articles like the above three. Pour the foundation, let it dry, and then frame the house.

+",4302,,,,,7/3/2019 3:52,,,,0,,,,CC BY-SA 4.0 +13179,1,13181,,7/3/2019 6:05,,2,240,"

I want to study NN for time-varying directed graphs. However, as this field has developed relatively recently, it is difficult to find new ways. So the question is, is there any NN that can handle such data?

+",26502,,26502,,7/3/2019 6:35,7/3/2019 7:32,Is there a neural network method for time-varying directed graphs?,,1,0,,,,CC BY-SA 4.0 +13180,1,,,7/3/2019 7:23,,1,13,"

I have the following problem: I am doing some research on the accuracy of recommender algorithms that are mostly used nowadays.

+ +

So, one way to measure their performance is by checking how well they predict a certain value under different sizes of a given dataset, meaning, sparsity in a ratings matrix.

+ +

I need to find a way to calculate the root mean square error (or MAE), or some other metric, versus the sparsity of the dataset. As an example, let's have a look at the picture below:

+ +

+ +

You can see that it says:

+ +
+

“RMSE as a function of sparsity. 5000 ratings were removed from the training set(initially containing 80000 ratings) in every iteration. “

+
+ +

I'm using Python and the Movielens dataset. Do you know how I can achieve this in the mentioned language? Is there any tool to do that?

+",25405,,,,,7/3/2019 7:23,Reducing the Number of Training Samples for collaborative filtering in recommender systems,,0,0,,,,CC BY-SA 4.0 +13181,2,,13179,7/3/2019 7:32,,3,,"

I'm seeing a recent trend of combining RNNs/CNNs with GNNs (graph neural networks) so that both the time dependency and the topology are captured. I would suggest you start by looking at DCRNN (Yaguang Li et al.); it's a strong baseline that everyone uses nowadays. Other good resources:

+ +
    +
  • Graph wavenet for deep spatial-temporal graph modeling (Zonghan Wu et al.)
  • +
  • Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting (Bing Yu et al.)
  • +
  • STANN: A Spatio–Temporal Attentive Neural Network for Traffic Prediction (Zhixiang He et al.)
  • +
+",20430,,,,,7/3/2019 7:32,,,,0,,,,CC BY-SA 4.0 +13182,2,,13172,7/3/2019 8:05,,0,,"

In the original seq2seq paper, they used two RNNs, one for encoding and one for decoding. In the encoder, they need to unroll the inputs to capture the time dependency. Now, if we want to pass the hidden state from the encoder to the decoder, the decoder's hidden state shape needs to match the encoder's (i.e. the same architecture). Since the architecture is the same, we cannot directly generate a sequence of n samples within the decoder without unrolling it, and you cannot unroll it without an input.

+",20430,,,,,7/3/2019 8:05,,,,0,,,,CC BY-SA 4.0 +13183,2,,13149,7/3/2019 8:18,,2,,"

You can take a look at traffic data, for example. If you follow link1 and link2, you can find 3 publicly available traffic datasets which are already preprocessed. You could also look at the air quality datasets offered by the government (link3).

+",20430,,,,,7/3/2019 8:18,,,,0,,,,CC BY-SA 4.0 +13185,2,,13172,7/3/2019 9:05,,1,,"
+

Shouldn't the hidden state of the RNN from the previous timestamps be enough?

+
+ +

It is theoretically enough to generate a sequence. However, allowing an input offers a couple of convenient extras:

+ +
    +
  • Training data for output sequences is used twice - once as input (as previous sequence data), once as target (to establish the loss metric). This may help the training process, as the decoder trains both as a decoder for the new sequence type and as a predictive model over the output sequence semi-independently - i.e. weights from input to RNN layer are affected by error gradients separately from weights between previous hidden state and next state, although the two sets of weights together influence the output and next state, so they are not fully independent over a sequence.

  • +
  • By allowing input of the sequence generated so far, the decoder can work as a generator, where the next item in the sequence does not need to be the maximum probability item, but can be sampled or have rules applied. This allows for approaches such as BEAM search, commonly used in machine translation, which maintains several potential outputs, selecting the best one at the end.

  • +
+ +

I have not done the experiment, but I suspect the first item results in faster and better generalisation. The second one is very convenient for natural language generation and similar problems.
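
+ +

As a rough illustration of the second point, here is a minimal sketch of a greedy generation loop (plain Python/NumPy; the decoder function here is a hypothetical stand-in that maps the previous token and hidden state to next-token probabilities and a new state, not a specific library API):

+ +

import numpy as np
+
+def greedy_decode(decoder, start_token, initial_state, max_len):
+    # decoder(token, state) is assumed to return (next-token probabilities, new state)
+    token, state = start_token, initial_state
+    output = []
+    for _ in range(max_len):
+        probs, state = decoder(token, state)
+        token = int(np.argmax(probs))  # greedy choice; sampling or BEAM search also possible
+        output.append(token)
+    return output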

+",1847,,1847,,7/3/2019 16:07,7/3/2019 16:07,,,,2,,,,CC BY-SA 4.0 +13186,1,,,7/3/2019 11:09,,1,253,"

I created an LSTM model which predicts multi-output sequences. It takes variable-length sequences as input. These sequences are padded with zeros to obtain equal length. Note that the time series are not equally spaced, but the time stamp is added as a predictor. The discrete time series predictors are normalized with mean and standard deviation and run through PCA, the categorical feature is one-hot encoded and the ordinal features are integer encoded. So the feature set is a combination of dynamic and static features. The targets are also scaled between [-1, 1]. The input layer goes through a masking layer and then to 2 stacked LSTM layers and a dense layer to predict the targets (see code below).

+ +

Training actually starts well, but then the performance starts to saturate. It seems the network also focuses more on the 3rd output rather than the first two. This is seen in the validation curve of the third output, which follows the training curve perfectly. For the first 2 outputs, the network has a hard time predicting some peak values. I have been tuning the hyperparameters, but the validation error does not go below a certain value. The longer I train, the more the validation curve and training curve separate from each other, and overfitting occurs on the first 2 outputs. I tried all the standard initializations, and he_initialization seems to work the best. When more data is added, there is a slight improvement in validation error, but it is not significant. When adding dropout, the validation error is lower than the training error due to the noise introduced by dropout in the forward pass, but there is no significant improvement. Since neural networks tend to converge close to where they are initialized, I was thinking my initialization might not be good.

+ +

I was wondering if anyone had any suggestions on how to improve the error of this model. I think I will be happy if I can get the validation error somewhere around 0.01.

+ +

+ + +

+ +
def masked_mse(y_true, y_pred):
+    mask = keras.backend.all(keras.backend.not_equal(y_true, 0.), axis=-1, keepdims=True)
+
+    y_true_ = tf.boolean_mask(y_true, mask)
+    y_pred_ = tf.boolean_mask(y_pred, mask)
+
+    return keras.backend.mean(keras.backend.square(y_pred_ - y_true_))
+
+def rmse(y_true, y_pred):
+    # find timesteps where mask values is not 0.0
+    mask = keras.backend.all(keras.backend.not_equal(y_true, 0.), axis=-1, keepdims=True)
+
+    y_true_ = tf.boolean_mask(y_true, mask)
+    y_pred_ = tf.boolean_mask(y_pred, mask)
+
+    return keras.backend.sqrt(keras.backend.mean(keras.backend.square(y_pred_ - y_true_)))
+
+hl1 = 125
+hl2 = 125
+window_len = 30
+n_features = 50
+batch_size = 128
+
+optimizer = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0., amsgrad=False)
+dropout = 0.
+input_ = keras.layers.Input(
+        shape=(window_len, n_features)
+    )
+
+# masking is to make sure the model doesn't fit the zero paddings
+masking = keras.layers.Masking(mask_value=0.0)(input_)
+
+# hidden layer 1 with he_normal initializer. 
+
+lstm_h1 = keras.layers.LSTM(hl1, dropout=dropout, kernel_initializer='he_normal',
+                        return_sequences=True)(masking)
+
+# hidden layer 2
+lstm_h2 = keras.layers.LSTM(hl2, dropout=dropout, kernel_initializer='he_normal',
+                        return_sequences=True)(lstm_h1)
+
+ # dense output layer of single output
+ out1 = keras.layers.Dense(1, activation='linear',name='out1')(lstm_h2)
+
+ out2 = keras.layers.Dense(1, activation='linear', name='out2')(lstm_h2)
+
+ out3 = keras.layers.Dense(1, activation='linear', name='out3')(lstm_h2)
+
+ model = keras.models.Model(inputs=input_, outputs=[out1, out2, out3])
+
+
+ pi = [rmse]
+
+ n_gpus = len(get_available_gpus())
+
+ if n_gpus > 1:
+    print(""Using Multiple GPU's ..."")
+    parallel_model = multi_gpu_model(model, gpus=n_gpus)
+
+else:
+    print(""Using Single GPU ..."")
+    parallel_model = model
+
+
+ parallel_model.compile(loss=masked_mse, optimizer=optimizer, metrics=pi)
+ parallel_model.summary()
+
+ checkpoint = keras.callbacks.ModelCheckpoint(
+             file_name+"".hdf5"", monitor='val_loss', verbose=1, 
+             save_best_only=True, mode='min', period=10,
+             save_weights_only=True)
+
+ save_history = keras.callbacks.CSVLogger(file_name+"".csv"", append=True)
+
+ callbacks_list = [checkpoint, save_history]
+
+ y_train_reshaped = list(reshape_test(window_len, y_train))
+ parallel_model.fit(
+            x_train,
+
+            {
+                 'out1': y_train_reshaped[0],
+                 'out2': y_train_reshaped[1],
+                 'out3': y_train_reshaped[2],
+            },
+
+            epochs=epochs,
+            batch_size=batch_size,
+            verbose=0,
+            shuffle='batch',
+            validation_data=(x_test, list(reshape_test(window_len,y_test))),
+            callbacks=callbacks_list,
+
+        )
+
+",26835,,26835,,7/4/2019 10:43,7/4/2019 10:43,Why doesnt my lstm model for time series prediction improve after certain level of performance?,,0,0,,,,CC BY-SA 4.0 +13187,2,,13172,7/3/2019 11:51,,3,,"

In seq2seq they model the joint distribution of whatever char/word sequence by decomposing it into time-forward conditionals:

+

\begin{align*} p(w_1, w_2, \dots, w_n) &= p(w_1) \, p(w_2 \mid w_1) \cdots p(w_n \mid w_1, \dots, w_{n-1}) \\ &= p(w_1) \prod_{i=2}^{n} p(w_i \mid w_{<i}) \end{align*}

+

This can be sampled by sampling each of the conditionals in ascending order. So, that's exactly what they're trying to imitate. You want the second output dependent on the sampled first output, not on its distribution.

+

This is why the hidden state is NOT good for modeling this setup because it is a latent representation of the distribution, not a sample of the distribution.

+

Note: in training, they use the ground truth as input by default, because it works under the assumption that the model should have predicted the correct word, and, if it didn't, the gradient of the word/char-level loss will reflect that (this is called teacher forcing and has a multitude of pitfalls).
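
+ +

As a minimal sketch of that time-forward (ancestral) sampling, assuming some hypothetical next_word_probs function that returns $p(w_i \mid w_{<i})$ as a vector over the vocabulary:

+ +

import numpy as np
+
+def sample_sequence(next_word_probs, vocab_size, length, rng=np.random.default_rng()):
+    prefix = []
+    for _ in range(length):
+        p = next_word_probs(prefix)        # p(w_i | w_{<i}) given the prefix so far
+        w = rng.choice(vocab_size, p=p)    # sample the conditional, not its expectation
+        prefix.append(int(w))              # feed the *sampled* word back in
+    return prefix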

+",25496,,2444,,12/12/2021 12:08,12/12/2021 12:08,,,,2,,,,CC BY-SA 4.0 +13189,1,,,7/3/2019 15:41,,3,1073,"

Let's say I have an adjustable loaded die, and I want to train a neural network to give me the probability of each face, depending on the settings of the loaded die.

+ +

I can't measure its performance on an individual die roll, since a single roll does not give me a probability.

+ +

I could batch a lot of rolls together to calculate a probability and use this batch as an individual test case, but my problem does not allow this (let's say the settings are complex and randomized between each roll).

+ +

I have 2 ideas:

+ +
    +
  1. train it as a classification problem which outputs a confidence, and hope that the confidence will reflect the actual probability. Sometimes the network would output the correct probability and still fail the test, but on average it would tend towards the correct probability. However, it may require a lot of training and data.
  2. +
  3. batch random rolls together and compare the mean/median/standard deviation of the measured results vs the predictions. It could work, but I don't know what a good batch size would be.
  4. +
+ +

Thank you.

+",26839,,,,,11/16/2020 5:21,How can I train a neural network to give probability of a random event?,,3,3,,,,CC BY-SA 4.0 +13190,2,,13189,7/3/2019 17:48,,0,,"

A neural network isn't what you want here. You have a limited number of events and draws from some unknown distribution that you want to recover.

+ +

In that case, just use the empirical probabilities $p(\text{event}_i) = \frac{\#\text{event}_i}{\text{total events}}$, which, given enough draws, will converge to the true probabilities.
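
+ +

A minimal sketch in plain Python (the roll data is made up):

+ +

from collections import Counter
+
+rolls = [1, 3, 3, 6, 2, 3, 1, 6, 6, 6]   # hypothetical observed die rolls
+counts = Counter(rolls)
+probs = {face: counts[face] / len(rolls) for face in range(1, 7)}
+print(probs)   # empirical estimates; these converge to the true probabilities as rolls grow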

+",25496,,,,,7/3/2019 17:48,,,,2,,,,CC BY-SA 4.0 +13191,1,,,7/3/2019 20:08,,2,50,"

Why do neural networks have bias units? Why is it sometimes okay to leave them out?

+",26843,,2444,,12/12/2021 12:27,12/12/2021 12:27,Why do neural networks have bias units?,,0,1,,,,CC BY-SA 4.0 +13192,1,,,7/3/2019 20:30,,0,400,"

I want to develop an AI for continuous spaces. I came across the DDPG algorithm, which takes actions deterministically.

+ +

If DDPG takes actions deterministically, should the environment also be deterministic? I want to handle non-deterministic, continuous real-world environments. Is DDPG the algorithm I am looking for? Is there any other algorithm that fits my needs?

+",19493,,2444,,7/4/2019 10:06,7/4/2019 15:37,Is DDPG just for deterministic environments?,,1,0,,,,CC BY-SA 4.0 +13193,1,,,7/3/2019 21:44,,1,19,"

I'm working on a deep learning project and have encountered a problem. The images that I'm using are very large and extremely detailed. They also contain a huge amount of necessary visual information, so it's hard to downgrade the resolution. I've gotten around this by slicing my images into 'tiles,' with resolution 512 x 512. There are several thousand tiles for each image.

+ +

Here's the problem: the annotations are binary and the images are heterogeneous. Thus, an annotation can be applied to a tile of the image that has no impact on the actual classification. How can I lessen the impact of tiles that are 'improperly' labeled?

+ +

One thought is to cluster the tiles with something like a t-SNE plot and compare the ratio of the binary annotations for different regions (or 'classes'). I could then assign weights to tiles based on where they are located and use that as an extra input in my training. I'm very new to all of this, so I wouldn't be surprised if that's an awful idea! Just thought I'd take a stab.

+ +

For background, I'm using transfer learning on Inception v3.

+",26844,,,,,7/3/2019 21:44,"Binary annotations on large, heterogenous images",,0,0,,,,CC BY-SA 4.0 +13194,1,,,7/3/2019 22:40,,5,3529,"

I would like to know if it is possible to train a neural network on new daily data. Let me explain this in more detail. Let's say you have daily data from 2010 to 2019. You train your NN on all of it, but, from now on, every day in 2019 you get new data. Is it possible to ""append"" the training of the NN, or do we need to retrain an entire NN with the data from $2010$ to $2019+n$, with $n$ the day, for every new day?

+ +

I don't know if it is relevant but my work is on binary classification.

+",26522,,2444,,8/3/2019 12:52,7/7/2021 15:38,Can I train a neural network incrementally given new daily data?,,1,1,,,,CC BY-SA 4.0 +13196,2,,13192,7/4/2019 3:40,,2,,"

I am not an expert in this area, but I believe that the word ""Deterministic"" refers to the ""Policy"" in ""Deterministic Policy Gradient"". It does not mean a deterministic environment.

+ +

Stochastic policy: a probabilistic (random) action choice for a given state.
+Deterministic policy: a single action is chosen for a given state.

+ +

The Deterministic Policy Gradient algorithm can still handle a stochastic (and continuous, of course) environment, but the policy will be deterministic.
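
+ +

A minimal PyTorch sketch of the distinction (the network sizes are arbitrary, for illustration only):

+ +

import torch
+import torch.nn as nn
+
+state_dim, action_dim = 8, 2
+
+# Deterministic policy (DDPG actor): a state maps to one concrete continuous action.
+det_actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
+                          nn.Linear(64, action_dim), nn.Tanh())
+
+# Stochastic policy: a state maps to the parameters of a distribution, then we sample.
+mean_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))
+log_std = torch.zeros(action_dim)
+
+state = torch.randn(1, state_dim)
+a_det = det_actor(state)                                                      # same every time
+a_sto = torch.distributions.Normal(mean_net(state), log_std.exp()).sample()   # random sample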

+ +

Reference

+ +

""in DDPG, the Actor directly maps states to actions (the output of the network directly the output) instead of outputting the probability distribution across a discrete action space"" -towards Data Science

+ +

Deterministic Policy Gradient Algorithms by Silver et al. PDF

+",23788,,23788,,7/4/2019 15:37,7/4/2019 15:37,,,,1,,,,CC BY-SA 4.0 +13198,1,,,7/4/2019 7:37,,1,31,"

As I wrote in the title, what are the advantages of time-varying graph CNNs compared to fixed-graph CNNs? For example, for CORA, a graph of citation relations between papers that is frequently used with graph CNNs, what examples are there?

+",26502,,,,,7/4/2019 7:37,What are the advantages of time-varying graph CNNs compared to fixed graph?,,0,4,,,,CC BY-SA 4.0 +13199,1,,,7/4/2019 8:05,,2,1141,"

I'm trying to implement this paper and I have been stuck for quite some time now. Here is the issue:

+ +

I have a 3D tensor with dimensions (180, 200, 20), and I'm trying to append 5 of them, as the paper states:

+ +
+

Now that each frame is represented as a 3D tensor, we can append multiple frames’ along a new temporal dimension to create a 4D tensor

+
+ +

What I did is apply the TensorFlow command tf.stack(), and so far so good: I have my input as a 4D tensor with shape (5, 180, 200, 20), as stated in the paper:

+ +
+

Thus our input is a 4 dimensional tensor consisting of time, height, X and Y

+
+ +

Now what I'm trying to do is to apply a 1D convolution on this 4D tensor as the paper mentions:

+ +
+

given a 4D input tensor, we first use a 1D convolution with kernel size n on the temporal dimension to reduce the temporal dimension from n to 1

+
+ +

In this case, n = 5.

+ +

And here where I got stuck, I created the kernel as follow:

+ +

kernel = tf.Variable(tf.truncated_normal([5,16,16], dtype = tf.float64, stddev = 1e-1, name = 'weights'))

+ +

and tried to apply a 1D convolution:

+ +

conv = tf.nn.conv1d(myInput4D, kernel, 1 , padding = 'SAME')

+ +

and I get this error

+ +

Shape must be rank 4 but is rank 5 for 'conv1d_42/Conv2D' (op: 'Conv2D') with input shapes: [5,180,1,200,20], [1,5,16,16]

+ +

I don't understand how a 1 is added to the dimensions at index 2 and index 0 of the first and second tensors.

+ +

I also tried this:

+ +

conv = tf.layers.conv1d(myInput4D, filters = 16, kernel_size = 5, strides = 1, padding = 'same)

+ +

And get the following error:

+ +

Input 0 of layer conv1d_4 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [5, 180, 200, 20]

+ +

My question is: is it possible to apply a 1D convolution to a 4D input and, if yes, can anyone suggest a way to do so? Because the TensorFlow documentation says the input must be 3D:

+ +
+

tf.nn.conv1d(
+    value=None,
+    filters=None,
+    stride=None,
+    padding=None,
+    use_cudnn_on_gpu=None,
+    data_format=None,
+    name=None,
+    input=None,
+    dilations=None
+)

+ +

value: A 3D Tensor. Must be of type float16, float32, or float64.

+
+ +

Thank you.

+",26777,,26777,,7/4/2019 8:46,1/4/2021 19:24,Applying a 1D convolution for 4D input,,0,1,,,,CC BY-SA 4.0 +13200,2,,13189,7/4/2019 8:12,,0,,"

This approach:

+ +
+

train it as a classification problem which output confidence, and hope that the confidence will reflect the actual probability. Sometimes the network would output the correct probability and fail the test, but on average it would tend to the correct probability.

+
+ +

will work with some limitations. If you use a classifier with softmax activation and multiclass log loss

+ +

$$\mathcal{L}(\mathbf{\hat{y}},\mathbf{y}) = -\mathbf{y} \cdot \text{log}(\mathbf{\hat{y}})$$

+ +

where $\mathbf{\hat{y}}$ is the network output as a vector, and $\mathbf{y}$ is the actual output from an individual sample. Your input should be the settings of the die.

+ +

Optimising this loss will converge on approximated probabilities for each discrete output. You can demonstrate this with some simple examples - for instance if you train a network with a single input - a one-hot-encoded die type from the classic D&D dice sets, plus deliberately chosen examples of different results in the right frequencies, you will end up with a classifier that predicts roughly $p=0.25$ for results of 1,2,3,4 for a d4 and $p=0.125$ for results of 1,2,3,4,5,6,7,8 for a d8

+ +

So it works mathematically. Whether it works for your situation depends on details. You need enough data samples to cover both the distribution of results under each setting, and any complexities of how that distribution varies with the settings. In the limit of wanting very accurate predictions of probability within a complex space you will need a huge number of samples. You should be able to find a compromise between accuracy and generalisation by trying different levels of regularisation - this will be necessary as over-fitting to input/output sample pairs as seen is going to be a serious problem for a neural network trained on this data.

+ +

One thing you can do to help a classifier learn probabilities is always take some number of samples with the same settings - e.g. 10 or 100 or 1000 each with the same settings - that should guarantee that the network cannot simply converge to predict high $p$ values for single outputs as seen, as it will have counter-examples to work with.
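
+ +

A minimal Keras sketch of this setup (the loaded-die simulator and the settings here are stand-ins for illustration, not a tuned model):

+ +

import numpy as np
+from tensorflow import keras
+
+n_faces, n_settings = 6, 40
+rng = np.random.default_rng(0)
+
+def roll(settings):                       # hypothetical loaded-die simulator
+    logits = settings[:n_faces]
+    p = np.exp(logits) / np.exp(logits).sum()
+    return rng.choice(n_faces, p=p)
+
+settings = rng.normal(size=(2000, n_settings))
+X = np.repeat(settings, 100, axis=0)      # 100 rolls per setting, as suggested above
+y = np.array([roll(s) for s in X])
+
+model = keras.Sequential([
+    keras.layers.Dense(64, activation='relu', input_shape=(n_settings,)),
+    keras.layers.Dense(n_faces, activation='softmax'),
+])
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
+model.fit(X, y, epochs=5, batch_size=256)
+# model.predict(new_settings) now approximates the per-face probabilities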

+ +

You mention that you have 40 dimensions of settings. Whether this is an issue will depend on how the probability distribution varies based on those settings. However, at minimum you should be thinking in terms of millions of samples for training, or possibly a fast on-demand generator that can generate 1000s of new samples per second to train with.

+ +

You can test accuracy by building histograms using some fixed (and as yet unseen) setting and comparing to NN predictions of probabilities of that setting. Even getting accurate test results is likely to require 1000s of samples.

+ +
+

However it may require a lot of training and data.

+
+ +

If you cannot obtain a very large training set here, then a purely statistical ""black box"" approach is probably not feasible, regardless of whether you use neural networks, or more raw analysis. What neural networks add is smooth interpolation between different settings values, as a form of approximation. This seems desirable for your problem, as you will never fully explore 40 dimensions of 100 values in the lifetime of the universe - but you need some confidence that minor changes in settings equate to minor changes in probability distributions in most cases.

+ +

It's OK to have one or two major shifts across the input space, but if the distribution depends on some cryptographic primitive, or varies with a similarly complex, high-frequency (over the input space) and high-amplitude dependence on the input variables, there is no way to obtain approximations using statistics.

+ +

The alternative to statistical approaches is to find some way to break open this black box through analysis. No AI system can do that in general at the moment, so you would rely on human ingenuity.

+",1847,,1847,,7/4/2019 8:53,7/4/2019 8:53,,,,3,,,,CC BY-SA 4.0 +13201,2,,13194,7/4/2019 8:52,,5,,"

Yes, this is possible. Continuously extending your training data is known as incremental learning.

+ +

You might also want to take a look at transfer learning, in which you reuse a trained model for a different purpose. This is very useful if you have a smaller dataset.

+ +

In your particular case, you could train a NN once using your data from 2010 to 2019 and use it as a base model. Every time you get new data, you can use transfer learning to slightly re-train this model. Based on parameters such as the number of epochs and the learning rate, you can determine how much of an impact this new data will have.

+",18398,,,,,7/4/2019 8:52,,,,3,,,,CC BY-SA 4.0 +13202,1,13204,,7/4/2019 12:54,,2,908,"

I am trying to build a network able to play the snake game. This is my very first attempt at doing such a thing. Unfortunately, I'm stuck and have no idea how to even reason about the problem.

+ +

I use a reinforcement learning approach (Q-learning) with a neural network. My network is built on top of Keras. I use 6 input neurons for my snake:

+ +
    +
  • 1 - is any collision directly behind
  • +
  • 2 - is any collision directly on the right
  • +
  • 3 - is any collision directly on the left
  • +
  • 4 - is snack up front (no matter how far)
  • +
  • 5 - is a snack on the right side (no matter how far)
  • +
  • 6 - is a snack on the left side (no matter how far)
  • +
+ +

the output has 3 neurons:

+ +
    +
  • 1 - do nothing (go ahead)
  • +
  • 2 - turn right
  • +
  • 3 - turn left
  • +
+ +

I believe this is a sufficient set of information to make proper decisions. But the snake doesn't seem to even grasp the concept of not hitting the wall - which results in instant death.

+ +

I use the following rewards table:

+ +
    +
  • 100 for getting the snack
  • +
  • -100 for hitting wall/tail
  • +
  • 1 for staying alive (each step)
  • +
+ +

The snake tends to move randomly no matter how many training iterations it gets.

+ +

The code is available on my github: https://github.com/ayeo/snake/blob/master/main.py

+",26855,,,,,7/4/2019 13:34,Reinforcement learning to play snake - network seems to not get trained at all,,1,0,,,,CC BY-SA 4.0 +13203,1,,,7/4/2019 13:05,,1,16,"

I just tried to improve my image dataset by inverting the images with a probability of 50% (meaning a white background with black features transforms into a black background with white features).

+ +

I thought this would improve my network's ability to recognize abstract features. Right now, the network does not perform really well. Is inverting the intensity of images too much for a training algorithm to deal with?

+",26857,,,,,7/4/2019 13:05,Inverting intensity on images to enhance image dataset,,0,0,,,,CC BY-SA 4.0 +13204,2,,13202,7/4/2019 13:27,,2,,"

I cannot comment much on your setup for inputs and outputs. It seems adequate to get some control, but it does not cover the full Markov state for the game, so I would expect that will prevent the agent from ever being truly optimal. I would expect it to learn to play the game, though, if you were implementing Q-learning with a neural network correctly.

+ +

In your code, you are implementing a basic Q learning loop. It seems correct. However, this combination of Q learning and neural networks is known not to work - or more accurately, it rarely works this simply. The problem is mainly to do with the network receiving its own initially biased outputs back as new targets, plus receiving updates in correlated form (data on each time step is strongly correlated with data from previous time step). These biases are too large for the Q learning process to overcome, and typically the result is an agent that fixates on a single default action, because it has learned an inflated action value for it.

+ +

The problem is well known in RL research and called ""The Deadly Triad"" by Sutton & Barto.

+ +

The usual solution to this with Q learning is called DQN or ""Deep"" Q Learning (""Deep"" is in quotes because this should be applied even if you just have a single hidden layer).

+ +

In basic DQN, you need to add the following features:

+ +
    +
  • An experience replay table. Instead of training directly on experience as it is received, instead store $s, a, r, s'$ in memory. When it is time later in the loop to train the NN for a step, take a random sample of some M items (e.g. 32 items) as a mini-batch, calculate a latest target for them, and train once on the mini batch. You will need logic to only start this training process once you have some minimal amount of experience from behaving randomly (e.g. 500 random steps).

  • +
  • A ""target network"". When generating target Q values, use a cloned copy of the learning network, and only update this clone every N steps (with N typically set at 1000 or 10000).

  • +
+ +

These two additions are not really optional, even for really basic environments. You will need to add them to your script.
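
+ +

A minimal sketch of those two additions (framework-agnostic Python; q_net and target_net are assumed to be Keras models with identical architecture, and the hyperparameters are just typical example values):

+ +

import random
+from collections import deque
+import numpy as np
+
+replay = deque(maxlen=50000)               # experience replay table
+BATCH, GAMMA, TARGET_SYNC, MIN_REPLAY = 32, 0.99, 1000, 500
+
+def store(s, a, r, s_next, done):
+    replay.append((s, a, r, s_next, done))
+
+def train_step(q_net, target_net, step):
+    if len(replay) < MIN_REPLAY:
+        return                             # behave randomly until enough experience exists
+    batch = random.sample(replay, BATCH)   # random mini-batch to break correlations
+    s, a, r, s_next, done = map(np.array, zip(*batch))
+    q_next = target_net.predict(s_next, verbose=0).max(axis=1)   # bootstrap from frozen copy
+    targets = q_net.predict(s, verbose=0)
+    targets[np.arange(BATCH), a] = r + GAMMA * q_next * (1 - done)
+    q_net.train_on_batch(s, targets)
+    if step % TARGET_SYNC == 0:            # refresh the target network every N steps
+        target_net.set_weights(q_net.get_weights())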

+",1847,,1847,,7/4/2019 13:34,7/4/2019 13:34,,,,0,,,,CC BY-SA 4.0 +13209,1,,,7/5/2019 4:58,,1,27,"

I am looking for a dataset in tree structure that captures a hierarchy of concepts. For example, something like:

+ +
                         Entertainment
+                   Movies           Sports
+              Comedy  Thriller   cricket football
+   charlie chaplin              sachin       messy
+
+",3015,,,,,7/5/2019 4:58,Is there any readily available concept/topic tree?,,0,3,,,,CC BY-SA 4.0 +13210,1,,,7/5/2019 5:04,,2,56,"

I'm aware of metrics like accuracy (correct predictions / total predictions) for models that classify things. However, I'm working on a model that outputs the probability of a datapoint belonging to one of two classes. What metrics can/should be used to evaluate these types of models?

+ +

I'm currently using mean squared error, but I would like to know if there are other metrics, and what the advantages/disadvantages of those metrics are.

+",26875,,26875,,7/5/2019 7:24,7/5/2019 8:48,Metrics for evaluating models that output probabilities,,1,0,,,,CC BY-SA 4.0 +13212,2,,13210,7/5/2019 8:11,,1,,"

For a binary classifier, the cross-entropy loss is a natural measure of probability accuracy, if you care about relative probabilities. By that I mean you care that the estimate $\hat{p}$ is within some ratio of the true value. So an estimate of $\hat{p} = 0.1$ is a better estimate if the true value is $p = 0.2$ than if the true value is $p = 0.01$ (even though the latter value is closer and would score better under MSE). By the same logic, it also means that 0.9 is ""closer"" to 0.8 than it is to 0.97. With cross-entropy loss, extreme confidence (predicting close to $0$ or close to $1$) is penalised more heavily when it is wrong.

+ +

For completeness, the loss function (per data point) is:

+ +

$$\mathcal{L}(\hat{y},y) = -(y\text{log}(\hat{y})+ (1-y)\text{log}(1-\hat{y}))$$

+ +

This is likely to be the same loss function as you are using for your objective (or at least it should be), so for tests, simply also use it as your metric*.

+ +

$\hat{y}$ is your predicted probability of being in class A, and can be in range $[0,1]$. Ideally you have ground truth probabilities and $y$ is also in that range. In which case the only problem is that the ""perfect"" score is no longer $0$ but some positive number. If that bothers you, then you could offset by the perfect score, pre-calculating it on each data set (just set $\hat{y} = y$ for each item and you will find the minimum possible score).
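
+ +

A minimal NumPy sketch of using cross-entropy as the metric, including the optional offset by the best achievable score:

+ +

import numpy as np
+
+def cross_entropy(y_true, y_pred, eps=1e-12):
+    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
+    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
+
+y_true = np.array([0.2, 0.9, 0.5])           # ground-truth probabilities (or 0/1 labels)
+y_pred = np.array([0.3, 0.8, 0.5])
+score = cross_entropy(y_true, y_pred)
+best = cross_entropy(y_true, y_true)         # the 'perfect' score for this data set
+print(score - best)                          # zero only for a perfect predictor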

+ +

If you don't have ground truth probabilities, but you do have classes, then $y$ will either be 0 or 1, and the metric still works. To get an accurate metric, you will need enough samples that the relative frequencies of each class depending on input has a significant effect. That is, you need more data, both training and test, in order to train for accurate probabilities instead of targeting simpler classification accuracy metrics.

+ +

Similar logic also works for multi-class probabilities. However, many off-the-shelf libraries use an optimisation in the loss function - assuming only one true class - which makes using probabilities as ground truth impossible. You might therefore need to write your own loss function and gradient functions based on multi-class cross-entropy loss in that case.

+ +
+ +

* I am making the assumption here that you use standard conventions for noting loss functions (typically per item), cost functions (typically aggregated across a data set and possibly multiple loss functions) and metric functions which don't have to be differentiable or usable as either of the former. The cost function is usually also fed into optimisers as the objective function - i.e. it has the important job of driving parameters to reach a maximum or minimum value. For gradient based solvers, that means it must be differentiable.

+",1847,,1847,,7/5/2019 8:48,7/5/2019 8:48,,,,1,,,,CC BY-SA 4.0 +13213,2,,10806,7/5/2019 10:49,,2,,"

The minimal algorithm for convolution in $\mathbb{R}^2$ is a four dimensional iteration.

+ +
for all vertical kernel positions
+  for all horizontal kernel positions
+    initialize the value at the output position to the bias
+    for all vertical positions in the kernel
+      for all horizontal positions in the kernel
+        add the product of the input value and the kernel value to the output position
+
+ +

In $\mathbb{R}^n$ it is a $2n$ dimensional iteration following this pattern.
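
+ +

For concreteness, the $\mathbb{R}^2$ case above can be written as a deliberately naive (unoptimised) NumPy sketch for a single-channel input:

+ +

import numpy as np
+
+def conv2d_naive(image, kernel, bias=0.0):
+    kh, kw = kernel.shape
+    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
+    out = np.full((oh, ow), bias)
+    for i in range(oh):                    # all vertical kernel positions
+        for j in range(ow):                # all horizontal kernel positions
+            for u in range(kh):            # all vertical positions in the kernel
+                for v in range(kw):        # all horizontal positions in the kernel
+                    out[i, j] += image[i + u, j + v] * kernel[u, v]
+    return out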

+ +

The minimal algorithm for regression of bounding boxes orthogonal with respect to the image grid (no tilting) is this.

+ +
until number of boxes reaches max
+  make first guess of two coordinates
+  until number of guesses reaches max or matching criteria is met
+    evaluate guess
+    remember guess and guess results
+    improve on guess based on evaluation results and
+          possibly injected randomness,
+          excluding locations already covered
+    if some intermediate criteria is met
+      change the nature of the guessing, evaluation, and improving
+            as is appropriate for the criteria match
+            (this covers approaches that have multiple phases)
+  if no guess matched criteria
+    break
+
+ +

That's approaching concepts from the top down. When approaching from the other direction, reverse engineer the best code. In the case of RCNN, it is unadvisable to find implementations following the first paper expressing the approach. Reading the first paper may be helpful to get the gist of the approach, but reverse engineer the best one, which, in this case, may be Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, 2016. Study the implementation they pushed to git at https://github.com/rbgirshick/py-faster-rcnn/tree/master/. The algorithm is in lib/fast_rcnn.

+ +

The reason this algorithm isn't spelled out in their paper or any paper from the first on down through the lineage to their paper is simple.

+ +
    +
  • The pseudo-code above is universal across all convolutions and all bounding box regressions, so that doesn't need to be restated with each approach.
  • +
  • The main features of an approach like RCNN, SSD, or YOLO are not algorithmic. They are algebraic expressions of the guess, the evaluation, the improvement upon the guess, and the test for the criteria.
  • +
  • The use of objects and functional programming makes the implementation more readable, so it can be easier to read the implementation than read a huge chunk of the above pseudo-code with all the algebra and test branches plugged in.
  • +
  • For the above reasons, it is rare that pseudo-code would be used prior to the implementation when the paper is written.
  • +
  • The return on investment of reverse engineering from code to pseudo-code is only sufficient motivation if one is going to improve the algorithm and write another paper, and on the way to finishing the prior paper's pseudo-code, the new paper and the new code gets finished first.
  • +
+ +

Since the author of this question seems interested in writing their own code, it may be reasonable to assume the same author may be interested in thinking their own thoughts, so I'll add this.

+ +

None of these algorithms are object recognition. Recognition has to do with cognition, and these approaches do not even touch upon cognitive processing, another branch of AI not related to convolution and probably not closely related to formal regression either. Additionally, bounding boxes are not the way animal vision systems work. Early gestalt experiments in vision indicate a complete independence of human vision from rectilinear formalities. In lay terms, humans and other organisms with vision systems don't have any conception of Cartesian coordinates. We can still read books if tilted slightly relative to the plane passing through our eyes. We don't zoom or tilt in Cartesian coordinates.

+ +

These facts may not be necessary to comprehend to create an automated vehicle driving system that produces a better safety record than average human drivers, but that is only because humans don't set that bar very high and because cars roll in the plane of the road. These facts are indeed necessary in aeronautic system used in military applications, where nothing is particularly Cartesian and the meaning of horizontal and vertical is ambiguous. For that reason, it is unlikely that bounding boxes will be the edge of vision technology for very long.

+ +

If one wishes to transcend current mediocrity, consider bounding circles with fuzzy boundaries, which would be more like the systems that evolved over millions of biological iterations. If the computer hardware is poorly fit to radial processing, design new hardware in which radial processing is native and in which Cartesian coordinates may be foreign and cumbersome.

+ +

Regarding the classifier, the classifier papers do generally include the algorithm, so those can be found by doing an academic search for the original paper describing the classifier being used.

+",4302,,4302,,7/5/2019 10:55,7/5/2019 10:55,,,,0,,,,CC BY-SA 4.0 +13214,1,,,7/5/2019 11:59,,1,38,"

Given a set of time series data that are generated from different sites where all sites are investigating the same objective but with slightly different protocols.

+ +

Is it possible to use adversarial learning to learn site invariant features for a classification problem, that is, how can adversarial learning be used to minimize experimental differences (e.g. different measurement equipment) so that the learned feature representations from the time series are homogenous for a classification problem?

+ +

I have come across multi-domain adversarial learning, but I'm not sure if this is the best formulation for my problem.

+",26885,,2444,,7/6/2019 12:52,7/6/2019 12:52,Is it possible to use adversarial training to learn invariant features?,,0,2,,,,CC BY-SA 4.0 +13216,2,,7685,7/5/2019 13:16,,4,,"

For everybody getting here from google, like me: the $\log$ might have been replaced in the loss function, but I think it is still there when taking the gradient of both functions (correct me, if I am wrong):

+

$$\begin{aligned} +\nabla_{\theta} L^{P G}(\theta) &=\nabla_{\theta} \hat{E}_{t}\left[\log \pi_{\theta}\left(a_{t} \mid s_{t}\right) \hat{A}_{t}\right] \\ +&=\hat{E}_{t}\left[\nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) \hat{A}_{t}\right] +\end{aligned}$$

+

and

+

$$ +\begin{aligned} +\nabla_{\theta} L^{I S}(\theta)=& \nabla_{\theta} \hat{E}_{t} \left[\frac{\pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta_{\text {old}}}\left(a_{t} \mid s_{t}\right)} \hat{A}_{t}\right] \\ +&=\hat{E}_{t} \left[\nabla_{\theta} \frac{\pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta_{\text {old}}}\left(a_{t} \mid s_{t}\right)} \hat{A}_{t}\right] \\ +&=\hat{E}_{t} \left[\frac{\pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta_{\text {old}}}\left(a_{t} \mid s_{t}\right)} \frac{\nabla_{\theta} \pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta}\left(a_{t} \mid s_{t}\right)} \hat{A}_{t}\right] \\ +&=\hat{E}_{t}\left[\frac{\pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta_{\text {old}}}\left(a_{t} \mid s_{t}\right)} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) \hat{A}_{t}\right] +\end{aligned} +$$

+

So, the $\pi_{\theta}\left(a_{t} \mid s_{t}\right)$ in the PG function was replaced with $\frac{\pi_{\theta}\left(a_{t} \mid s_{t}\right)}{\pi_{\theta_{\text {old}}}\left(a_{t} \mid s_{t}\right)}$, whose derivative is the same as that of the $\log$ in the PG function (apart from the proportionality factor).
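
+ +

You can also check this numerically with autograd. Below is a small PyTorch sketch (the policy and data are toy stand-ins): at $\theta = \theta_{old}$ the ratio is 1, so the two gradients coincide, as the derivation above shows.

+ +

import torch
+
+logits = torch.randn(4, requires_grad=True)         # toy policy parameters
+old_logits = logits.detach().clone()
+action, advantage = 2, 1.5
+
+log_pi = torch.log_softmax(logits, dim=0)[action]
+log_pi_old = torch.log_softmax(old_logits, dim=0)[action]
+
+obj_pg = log_pi * advantage                          # log-prob objective
+obj_is = torch.exp(log_pi - log_pi_old) * advantage  # importance-sampling (ratio) objective
+
+g_pg, = torch.autograd.grad(obj_pg, logits, retain_graph=True)
+g_is, = torch.autograd.grad(obj_is, logits)
+print(g_pg, g_is)   # identical here because pi = pi_old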

+",26876,,2444,,7/18/2020 15:08,7/18/2020 15:08,,,,3,,,,CC BY-SA 4.0 +13217,1,13409,,7/5/2019 13:55,,4,187,"

In the paper Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, they learn a mask for the network by setting up the mask parameters as $M_i = Bern(\sigma(v_i))$, where $M$ is the parameter mask ($f(x;\theta, M) = f(x; M \odot \theta)$), $Bern$ is a Bernoulli sampler, $\sigma$ is the sigmoid function, and $v_i$ is some trainable parameter.

+ +

In the paper, they learn $v_i$ using SGD. I was wondering how they managed to do that, because there isn't a reparameterization trick, as there is for some other distributions I see trained on in the literature (example: normal).

+",25496,,25496,,7/5/2019 17:00,7/17/2019 16:43,How are the parameters of the Bernoulli distribution learned?,,1,0,,,,CC BY-SA 4.0 +13221,1,,,7/6/2019 2:43,,3,357,"

Most image classifiers, like Inception-v3, accept images of about size 299 x 299 x 3 as input. In this particular case, I cannot resize the image and lose resolution. Is there an easy way of dealing with this other than retraining the model? (Particularly in TensorFlow.)

+",26900,,,,,12/22/2020 5:03,Is there a simple way of classifying images of size differing from the input of existing image classifiers?,,2,0,,,,CC BY-SA 4.0 +13223,1,,,7/6/2019 9:16,,2,198,"

Does anyone know a paper or code that does ""unsupervised domain adaptation"" for a regression task? I saw that most of the papers were benchmarked on classification tasks, not regression. I want to do something like training a model to predict a scalar value from an image (e.g. predicting the steering wheel angle from an image of a road for a self-driving car). One example could be training on synthetic data from a simulated environment (think GTA) and then trying to predict on real-world data.

+ +

Here is one of the examples of unsupervised domain adaptation algorithm that also has an easy-to-access code with Keras: https://github.com/bbdamodaran/deepJDOT +But it's for classification. The author said it can be used for regression but I had to change it. I changed it and it didn't work well so I don't know if it's my fault or the algo is not good for regression. I want to see papers that were benchmarked on regression so I know how well it performs on regression.

+ +

My real use case is to predict facial expression as a value from 0 to 1 like how open is the mouth. The source domain and target domain are real-world images but from different lighting.

+ +

Any suggestions are appreciated.

+",20819,,,,,7/20/2019 4:22,"Paper & code for ""unsupervised domain adaptation"" for regression task",,0,1,,,,CC BY-SA 4.0 +13224,5,,,7/6/2019 13:03,,0,,"

For more details, see e.g. https://en.wikipedia.org/wiki/Feature_learning or Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, and Pascal Vincent.

+",2444,,2444,,7/6/2019 13:03,7/6/2019 13:03,,,,0,,,,CC BY-SA 4.0 +13225,4,,,7/6/2019 13:03,,0,,"For questions related to feature learning (also known as representation learning), which is a set of techniques that can learn the features associated with the raw data. It is similar to feature engineering, but, in the case of feature learning, the features are learned and not handcrafted.",2444,,2444,,7/6/2019 13:03,7/6/2019 13:03,,,,0,,,,CC BY-SA 4.0 +13226,5,,,7/6/2019 13:06,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Feature_selection.

+",2444,,2444,,7/6/2019 13:06,7/6/2019 13:06,,,,0,,,,CC BY-SA 4.0 +13227,4,,,7/6/2019 13:06,,0,,"For questions related to the concept of feature selection (also known as variable selection or attribute selection), which is the process of selecting a subset of relevant features (a.k.a. variables or predictors) for use in model construction.",2444,,2444,,7/6/2019 13:06,7/6/2019 13:06,,,,0,,,,CC BY-SA 4.0 +13228,5,,,7/6/2019 13:18,,0,,"

For more details, see e.g. https://en.wikipedia.org/wiki/Feature_extraction.

+",2444,,2444,,7/6/2019 13:18,7/6/2019 13:18,,,,0,,,,CC BY-SA 4.0 +13229,4,,,7/6/2019 13:18,,0,,"For questions related to the concept of feature extraction, which is a set of techniques used to derive or create features from the existing set of features. Feature extraction is different from feature selection, which is used to select a subset of the existing features.",2444,,2444,,7/6/2019 13:18,7/6/2019 13:18,,,,0,,,,CC BY-SA 4.0 +13230,1,,,7/6/2019 16:49,,0,85,"

I have a piece of code and I don't seem to really understand it but I'd love to get a source/link/material that would help me understand the basic functions in TensorFlow. Are there any recommended resources for learning the same?

+",26845,,,user9947,7/6/2019 19:46,12/3/2019 20:02,Is there a place where I can read or watch to get an accurate TensorFlow code wise explanation?,,1,3,,,,CC BY-SA 4.0 +13231,2,,13230,7/6/2019 19:44,,2,,"

The best resource for learning TensorFlow 1.9 and earlier is this course by Stanford. Also additional resources for the entire overview of TensorFlow and its comparisons with NumPy has been made in this video. For hands on models check these videos by Sentdex and also some high level tutorials by Hvass Labs.

+",,user9947,,,,7/6/2019 19:44,,,,1,,,,CC BY-SA 4.0 +13232,1,13285,,7/6/2019 20:47,,3,1239,"

Is there any empirical/theoretical evidence on the effect of the initial values of the state-action and state values (the values an RL agent assigns to visited states) on the training of an RL agent via MC Policy Evaluation and GLIE Policy Improvement?

+

For example, consider two initialization scenarios of Windy Gridworld problem:

+

Implementation: I have modified the problem along with step penalty to include a non-desired terminal state and a desired terminal state which will be conveyed to the agent as a negative and positive reward state respectively. The implementation takes care that the MC sampling ends at the terminal state and gives out penalty/reward as a state-action value and not state value, since this is a control problem. Also, I have 5 moves: north, south, east, west and stay.

+

NOTE: I am not sure whether this changes the objective of the problem. In the original problem, it was to reduce the number of steps required to reach the final stage.

+
    +
  • We set the reward of reaching the desired terminal state to a value that is higher than the randomly initialized values of the value function; for example, we can set the reward to $20$ and initialize the values with random numbers in the range $[1, 7]$

    +
  • +
  • We set the reward of reaching the desired terminal state to a value that is comparable to the randomly initialized values of the value functions; for example, we can set the reward to $5$ and initialize the values with random numbers in the range $[1, 10]$

    +
  • +
+

As far as I can see, in the first case, the algorithm will quickly converge, as the reward for the desired terminal state is very high, which will push the agent to try to reach that state.

+

In the second case, this might not be true: if the reward state is surrounded by other high-value states, the agent will try to go to those states instead.

+

The step penalty ensures that the agent finally reaches the terminal state, but will this skew the path of the agent and severely affect its convergence time? This might be problematic in large state spaces since we will not be able to explore the entire state space, but the presence of exploratory constant $\epsilon$ might derail the training by going to a large false reward state. Is my understanding correct?

+",,user9947,2444,,11/1/2020 16:16,11/1/2020 16:16,How does the initialization of the value function and definition of the reward function affect the performance of the RL agent?,,1,0,,,,CC BY-SA 4.0 +13233,1,13238,,7/7/2019 6:14,,15,5515,"

I listened to a talk by a panel consisting of two influential Chinese scientists: Wang Gang and Yu Kai and others.

+

When being asked about the biggest bottleneck of the development of artificial intelligence in the near future (3 to 5 years), Yu Kai, who has a background in the hardware industry, said that hardware would be the essential problem and we should pay most of our attention to that. He gave us two examples:

+
    +
  1. In the early development of the computer, we compare our machines by their chips;
  2. +
  3. ML/DL which is very popular these years would be almost impossible if not empowered by Nvidia's GPU.
  4. +
+

The fundamental algorithms existed already in the 1980s and 1990s, but AI went through 3 AI winters and was not practical until we could train models on GPU-boosted mega servers.

+

Then Dr. Wang commented that, in his opinion, we should also develop software systems, because we cannot build an autonomous car even if we combined all the GPUs and computation in the world.

+

Then, as usual, my mind wandered off and I started thinking: what if those who could operate supercomputers in the 1980s and 1990s had utilized the then-existing neural network algorithms and trained them with tons of scientific data? Some people at that time could obviously have attempted to build the AI systems we are building now.

+

But why did AI/ML/DL only become a hot topic and practical decades later? Is it only a matter of hardware, software, and data?

+",5351,,2444,,12/30/2021 12:26,2/13/2022 11:17,Why did machine learning only become viable after Nvidia's chips were available?,,4,1,,,,CC BY-SA 4.0 +13235,2,,13233,7/7/2019 7:30,,2,,"

GPUs were ideal for AI boom because:

+
    +
  • They hit the right time
  • +
+

AI has been researched for a LONG time - almost half a century. However, that was all exploration of how algorithms would work and look. When NVIDIA saw that AI was about to go mainstream, they looked at their GPUs and realized that the huge parallel processing power, with its relative ease of programming, was ideal for the era to come. Many other people realized that too.

+
    +
  • GPUs are sort of general purpose accelerators
  • +
+

GPGPU is the concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utilize the thousands of cores available on a GPU. That makes GPUs an awesome target for all kinds of use cases, including AI. Given that they are already available and not too hard to program, they are an ideal choice for accelerating AI algorithms.

+",23262,,12001,,2/13/2022 11:17,2/13/2022 11:17,,,,0,,,,CC BY-SA 4.0 +13237,1,13240,,7/7/2019 7:49,,2,103,"

Certain hyper-parameters (e.g. the size of the offspring generation or the definition of the fitness function) and design choices (e.g. how the mutation is performed) of evolutionary algorithms usually need to be defined or specified by a human. Could these definitions also be automated? Could we also mutate the fitness function or automatically decide the size of the offspring generation?

+",23500,,2444,,7/7/2019 22:53,7/7/2019 22:53,Can we automate the choice of the hyper-parameters of the evolutionary algorithms?,,1,0,,,,CC BY-SA 4.0 +13238,2,,13233,7/7/2019 9:25,,17,,"

There are a lot of factors behind the boom of the AI industry. What many people miss, though, is that the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons, along with comparisons to earlier times:

+ +
    +
  • Mathematics: The maths behind ML algorithms is pretty simple and has been known for a long time (whether it would work or not was not known, though). During earlier times it was not possible to implement algorithms that require numbers of high precision to be calculated on a chip in an acceptable amount of time. One of the main arithmetic operations, division of numbers, still takes a lot of cycles in modern processors. Older processors were orders of magnitude slower than modern processors (more than 100x); this bottleneck made it impossible to train sophisticated models on contemporary processors.
  • +
  • Precision: Precision in calculations is an important factor in ML algorithms. 32-bit precision in processors appeared in the 80's and was probably commercially available in the late 90's (x86), but it was still much slower than current processors. This resulted in scientists improvising on the precision part, and the most basic Perceptron Learning Algorithm, invented in the 1960's to train a classifier, uses only $1$'s and $0$'s, so it is basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success.
  • +
  • Parallelization: The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations that we realized it can be easily parallelized on massively parallel GPUs. Still, if your ML algorithm is not inherently parallel, it hardly matters whether you use a CPU or a GPU (e.g. RNNs).
  • +
  • Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model.
  • +
  • Cost: The cost of training an ML model has gone down significantly. So using a supercomputer to train a model might be fine, but was it worth it? Supercomputers, unlike normal PCs, are tremendously resource-hungry in terms of cooling, space, etc. A recent article in MIT Technology Review points out the carbon footprint of training a Deep Learning model (a sub-branch of ML). It is quite a good indicator of why it would have been infeasible to train on supercomputers in earlier times (considering modern processors consume much less power and give higher speeds). Although I am not sure, I think earlier supercomputers were specialised in ""parallel + very high precision computing"" (required for weather, astronomy, military applications, etc.), and the ""very high precision"" part is overkill in the Machine Learning scenario.
  • +
+ +

Another important aspect is that nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was simply not possible in earlier times.

+ +

All these factors have led to a huge surge of interest in ML and have caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.

+",,user9947,,user9947,7/8/2019 5:04,7/8/2019 5:04,,,,0,,,,CC BY-SA 4.0 +13240,2,,13237,7/7/2019 9:38,,1,,"

Yes, you can also automate the choice of certain hyperparameters of the evolutionary algorithm. In this context, this process is called self-adaptation. There are different ways of performing self-adaptation (depending on the hyper-parameter that needs to self-adapt). See e.g. the chapter Self-Adaptation in Evolutionary Algorithms (by Silja Meyer-Nieberg and Hans-Georg Beyer) of the book Parameter Setting in Evolutionary Algorithms (2007).
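
+ +

A classic concrete example is the log-normal self-adaptation of the mutation step size in evolution strategies, where the step size is stored in the genome and mutated along with the solution. A minimal sketch:

+ +

import numpy as np
+
+rng = np.random.default_rng(0)
+n = 10                                    # dimensionality of the solution vector
+tau = 1.0 / np.sqrt(n)                    # learning rate for the step size
+
+def mutate(x, sigma):
+    sigma_new = sigma * np.exp(tau * rng.normal())     # the hyper-parameter mutates itself
+    x_new = x + sigma_new * rng.normal(size=x.shape)   # and is then used to mutate the solution
+    return x_new, sigma_new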

+",2444,,2444,,7/7/2019 9:46,7/7/2019 9:46,,,,0,,,,CC BY-SA 4.0 +13241,1,13242,,7/7/2019 9:52,,2,334,"

Dietterich, who introduced the taxi environment (see p. 9), states the following: In total there “are 500 [distinct] possible states: 25 squares, 5 locations for the passenger (counting the four starting locations and the taxi), and 4 destinations” (Dietterich, 2000, p. 9).

+ +

However, in my opinion, there are only 25 (grid) * 4 (locations) * 2 (passenger in car) = 200 different states, because for the agent it should be the same task to go to a certain point, regardless of whether it's on its way to pick up or to drop off. Only the action at the destination is different, which could be stored as a binary flag (passenger in car or not).

+ +

Why does Dietterich come up with 500 states?

+",21299,,21299,,7/7/2019 11:01,7/7/2019 11:01,Number of states in taxi environment (Dietterich 2000),,2,0,,,,CC BY-SA 4.0 +13242,2,,13241,7/7/2019 10:13,,3,,"

This is more of a combinatorics than an AI question but, regardless, the full state information for the environment is:

+ +

$(taxi \space position, passenger \space position, destination \space position)$

+ +

There are 25 possible taxi positions, 5 passenger positions and 4 destination positions making it $25 \cdot 5 \cdot 4 = 500$, so the paper is correct.

+ +

You are also correct but you divided 1 objective into 2 objectives and you have 2 separate policies, a pickup policy and a dropoff policy. So your state information would be for each policy:

+ +

$(taxi \space position, destination \space position)$

+ +

There are 25 possible taxi positions and 4 possible destination positions making it $25 \cdot 4 = 100$. You have 2 policies so you have $200$ states.

+ +

EDIT

+ +

Actually, in the second case, I think you could get away with only 1 policy, where you would simply change the destination position once you pick up the passenger; so you don't need 2 separate policies and you would have only $100$ states.

+",20339,,20339,,7/7/2019 10:19,7/7/2019 10:19,,,,0,,,,CC BY-SA 4.0 +13243,2,,13241,7/7/2019 10:20,,2,,"

This . . .

+ +
+

because for the agent it should be the same task to go to a certain point, regardless of whether it's on its way to pick up or to drop-off

+
+ +

. . . might seem logical/intuitive to a person understanding the task, but it is not mathematically correct. The agent cannot ""merge"" states because they involve the same behaviour. It must count differences in state as the combinations are presented. Critically, heading towards the passenger location or heading towards the goal location are not in any way similar to the agent, unless you manipulate the state to make them so*.

+ +

Eventually the taxi will learn very similar navigation behaviour for picking up and dropping off a passenger. However, using a basic RL agent it learns these very much separately, and must re-learn the navigation rules independently for each combination of passenger and goal location.

+ +

An agent that learned navigation within the environment, and then combined it into different tasks might be an example of hierarchical reinforcement learning, transfer learning, or curriculum learning. These are more sophisticated learning approaches, but it is quite interesting that even very basic RL problems can demonstrate a use for higher level abstractions. Most agents used on the taxi problem don't do this though, as 500 states is really very easy to ""brute force"" using the simplest algorithms.

+ +
+ +

* You could modify the state representation to rationalise the task and make it have less states, similar to your suggestion. For instance, have one ""target"" location which could either be pickup or drop off, and a boolean ""carrying passenger"" state component. That would indeed reduce the number of states. However, that has involved you as the problem designer simplifying the problem to make it easier for the agent. Given that this is a toy problem designed as a benchmark to see how different agents perform, by doing that you subvert the purpose of the environment. If you were creating an agent to work on a harder real world problem though, it might be a very good idea to look for symmetries and ways to simplify state representation which would speed up learning.

+",1847,,1847,,7/7/2019 10:31,7/7/2019 10:31,,,,2,,,,CC BY-SA 4.0 +13244,2,,7369,7/7/2019 13:15,,4,,"

The problem isn't the GAN but the implementation of its discriminator which is typically a convolutional neural network (CNN). CNNs have trouble with sparse data. They require dense data to learn well. There are ways to work around this. See the following for some ideas:

+ + +",5763,,,,,7/7/2019 13:15,,,,0,,,,CC BY-SA 4.0 +13247,1,13930,,7/7/2019 17:55,,2,186,"

I am implementing NEAT (neuroevolution of augmenting topologies) by Stanley. I am facing a problem during the crossover of genomes.

+

Suppose two networks with connections

+
Genome1 = {    
+    (1, Input1, Output), // numbers represent innovation numbers
+    (2, Input2, Output)    
+} // more fit
+
+Genome2 = {    
+    (1, Input1, Output),
+    (2, Input2, Output), // disabled
+    (3, Input2, Hidden1),
+    (4, Hidden1, Output)    
+}
+
+

are crossed over, then the connection (Input2, Output), which is enabled in the fitter parent but disabled in the other, has a chance of being disabled in the offspring (page 109, section 3.2, figure 4),

+
+

There's a preset chance that an inherited gene is disabled if it is disabled in either parent.

+
+

and thus producing the following offspring:

+
Child = {
+    (1, Input1, Output),
+    (2, Input2, Output) //Disabled
+}
+
+

thus rendering the network non-functional.

+

Similarly, through this mechanism, nodes can be left useless after crossover (having no outgoing connections, or no connections at all).
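
For concreteness, this is the kind of post-crossover check I imagine would be needed to detect such non-functional offspring (a rough sketch; the genome representation is just my assumption, not from the NEAT paper):

def has_enabled_path(genome, inputs, outputs):
    # genome: iterable of (innovation, src, dst, enabled) tuples
    # returns True if at least one output node is reachable from an input
    # through enabled connections only (simple graph reachability)
    adjacency = {}
    for _, src, dst, enabled in genome:
        if enabled:
            adjacency.setdefault(src, []).append(dst)
    frontier = list(inputs)
    visited = set(frontier)
    while frontier:
        node = frontier.pop()
        if node in outputs:
            return True
        for nxt in adjacency.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return False

If such a check failed, the disable coin-flip could presumably be redone or the gene simply re-enabled, but I don't know if that is the intended solution.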

+

How can this be prevented or am I missing something here?

+",26927,,2444,,12/14/2021 21:39,12/14/2021 21:39,How can non-functional neural networks be avoided when the crossover produces a child with a disabled gene?,,1,0,,,,CC BY-SA 4.0 +13249,5,,,7/7/2019 19:11,,0,,"

See e.g. https://en.wikipedia.org/wiki/Neuroevolution.

+",2444,,2444,,7/7/2019 19:11,7/7/2019 19:11,,,,0,,,,CC BY-SA 4.0 +13250,4,,,7/7/2019 19:11,,0,,"For questions related to neuroevolution (or neuro-evolution) techniques, such as NEAT, that are used to evolve (or train) artificial neural networks (that is, they are used evolve their parameters or topology), inspired by the natural evolution. A neuroevolution algorithm is thus an evolutionary algorithm where the genomes (individuals or chromosomes) are artificial neural networks.",2444,,2444,,7/7/2019 22:44,7/7/2019 22:44,,,,0,,,,CC BY-SA 4.0 +13251,1,,,7/7/2019 20:45,,2,89,"

I am looking at a problem which can be distilled as follows: I have a phenomenon which can be modeled as a probability density function which is ""messy"" in that it sums to unity over its support but is somewhat jagged and spiky, and does not correspond to any particular textbook function. It takes considerable amounts of time to generate these experimental density functions, along with conditional data for machine learning, but I have them. I also have a crude model which runs quickly but performs poorly, i.e., generates poor quality density functions.

+ +

I would like to train a neural network to transform the crude estimated pdfs to something closer to the experimentally generated pdfs, if possible.

+ +

To investigate this, I've further reduced this to the most toy-like toy problem I can think of: feeding a narrow, smooth normal curve into a 1D convolutional neural network, and trying to transform it into a similar narrow curve with a different mean. Both input and output have fine enough support (101 points) to be considered a smooth pdf.

+ +

Here is the crux of the problem I think I have: I do not know what a good loss function is for this problem.

+ +

L1, L2 and similar losses are useless, given that once the non-zero parts of the pdfs are non-overlapping, it doesn't matter how far apart the means are, the loss remains the same.

+ +

I have been experimenting with Sinkhorn approximations to optimal transport, to properly capture the intuition of ""distance"" but somewhat surprisingly these have not been helpful either. I think part of the problem may be an (unavoidable?) numerical stability issue related to the support, but I would not stake hard money on that assumption.

+ +

(If the support is at percentiles on the [0,1] interval, it is quite instructive (and dismaying) to look at the Sinkhorn loss for normal functions with the mean directly on a point of support, vs. normal functions with the mean directly between two points of support.)

+ +

For a problem in this vein, are there any recommended loss functions (preferably supported by or easily implement in PyTorch) which might work better?

+",15020,,2444,,7/7/2019 21:29,7/7/2019 21:29,Which loss functions for transforming a density function to another density function?,,0,7,,,,CC BY-SA 4.0 +13252,5,,,7/7/2019 21:01,,0,,"

See e.g. https://en.wikipedia.org/wiki/Loss_function.

+",2444,,2444,,7/7/2019 21:01,7/7/2019 21:01,,,,0,,,,CC BY-SA 4.0 +13253,4,,,7/7/2019 21:01,,0,,For questions related to the concept of loss (or cost) function in the context of machine learning.,2444,,2444,,7/7/2019 21:01,7/7/2019 21:01,,,,0,,,,CC BY-SA 4.0 +13254,1,13255,,7/7/2019 22:16,,7,4900,"

According to a lecture (week 10) about Reinforcement Learning [1], the concept of an option allows searching the state space of an agent much faster. The lecture was hard to follow because many new terms were introduced in a short time. For me, the concept of an option sounds a bit like skills [2], which are used for describing high-level actions as well.

+

Are skills an improvement over options that includes the trajectory, or are both the same?

+

I'm asking for a certain reason. Normal deep reinforcement learning has the problem that the agent comes very often to a dead end, for example, in Montezuma's Revenge played at the Atari emulator. And the options framework promises to overcome the issue. But the concept sounds a bit too esoteric, and apart from the Nptel lecture, nobody else has explained the idea. So, is it useful at all?

+",,user11571,2444,,1/1/2022 16:08,1/1/2022 16:08,What are options in reinforcement learning?,,1,2,,,,CC BY-SA 4.0 +13255,2,,13254,7/7/2019 23:49,,11,,"

An option is a generalization of the concept of action. The concept of an option (or macro-action) was introduced in the context of reinforcement learning in the paper Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning (1998) by Richard Sutton et al., so that to capture the idea that certain actions are composed of other sub-actions. Section 2 of the mentioned paper formally defines the concept of an option, which is a tuple composed of an initiation set, a policy, and a termination condition/set.

+

The authors of the mentioned paper give examples of options

+
+

Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques.

+
+

The option picking up an object, going to lunch, and traveling to a distant city is composed of other sub-actions (e.g. picking up an object), but is itself an action (or macro-action). A primitive action (e.g. joint torques) is itself an option.
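
To make that tuple concrete, here is a minimal sketch of an option as a data structure (the class and field names are my own, not from the paper):

from dataclasses import dataclass
from typing import Any, Callable, Set

@dataclass
class Option:
    initiation_set: Set[Any]              # states in which the option may be started
    policy: Callable[[Any], Any]          # maps a state to an action (or sub-option)
    termination: Callable[[Any], float]   # probability of terminating in a given state

# A primitive action "a" can be wrapped as an option that always picks "a"
# and terminates after one step:
def primitive_option(a, all_states):
    return Option(initiation_set=set(all_states),
                  policy=lambda s: a,
                  termination=lambda s: 1.0)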

+

A set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), which are MDPs where the time between actions is not constant but it is variable. In other words, a semi-MDP (SMDP) is an extension of the concept of MDP that is used to deal with problems where there are actions of different levels of abstraction. For example, consider a footballer that needs to take a freekick. The action "take a freekick" involves a sequence of other actions, like "run towards the ball", "look at the wall", etc. The action "take a freekick" takes a variable number of time steps (which depends on the other sub-actions).

+

Semi-MDPs are thus used to deal with such problems that involve actions of different levels of abstraction. Hierarchical reinforcement learning (HRL) is a generalization (or extension) of reinforcement learning where the environment is modeled as a semi-MDP.

+

Curiously, certain models that have won the RoboCup (the famous AI football) context are based on the concept of semi-MDPs, options and HRL. See e.g. WrightEagleBASE, which use the MAXQ-OP (MAXQ online planning) algorithm.

+

Semi-MDPs can be converted to MDPs. The picture below (which is a screenshot of figure 1 of the mentioned paper that introduces the "options framework" in RL) illustrates the relationship between semi-MDPs and MDPs.

+

+

The empty circles (in the middle) are options, while the black circles (at the top) are primitive actions (which are themselves options).

+

In the paper Reinforcement learning of motor skills with policy gradients mentioned in the question, apparently, the term skill is not formally defined. However, I suppose that skills can be represented as options.

+",2444,,2444,,1/1/2022 16:02,1/1/2022 16:02,,,,4,,,,CC BY-SA 4.0 +13256,1,13259,,7/8/2019 9:39,,2,1651,"

I plan to create a neural network using Python, Keras, and TensorFlow. All the tutorials I have seen so far are concerned with image recognition. However, the goal of my program would be to take in 10+ inputs and calculate a binary output (true/false) instead.

+

Which loss function should I use for my task?

+",26948,,2444,,12/10/2021 21:24,12/10/2021 21:24,Which loss function should I use for binary classification?,,1,0,,,,CC BY-SA 4.0 +13257,1,13258,,7/8/2019 10:29,,6,941,"

Is there any precedent for using a neuroevolution algorithm, like NEAT, as a way of getting to an initialization of weights for a network that can then be fine-tuned with gradient descent and back-propagation?

+ +

I wonder if this may be a faster way of getting close to a global minimum before starting a descent to a local minimum using backpropagation with a large set of input parameters.

+",11893,,2444,,7/8/2019 11:55,7/8/2019 16:28,Can neuroevolution be combined with gradient descent?,,2,0,,,,CC BY-SA 4.0 +13258,2,,13257,7/8/2019 11:51,,4,,"

The paper The Comparison and Combination of Genetic and Gradient Descent Learning in Recurrent Neural Networks: An Application to Speech Phoneme Classification (2007), by Rohitash Chandra and Christian W. Omlin, uses genetic algorithms to train a recurrent neural network and then uses gradient descent to fine tune the trained model.

+ +

The paper Evolutionary Stochastic Gradient Descent for Optimization of Deep Neural Networks (2018), by Xiaodong Cui, Wei Zhang, Zoltán Tüske and Michael Picheny, also combines evolutionary algorithms and gradient descent, but, in this case, they alternate between a gradient descent step and an evolution step. This is an example of a evolutionary stochastic gradient descent (ESGD) method, as opposed to a population-based training (PBT) method, which uses only evolutionary algorithms to train neural networks.

+",2444,,,,,7/8/2019 11:51,,,,0,,,,CC BY-SA 4.0 +13259,2,,13256,7/8/2019 12:17,,1,,"

There are several loss functions that you can use for binary classification. For example, you could use the binary cross-entropy or the hinge loss functions.

+

See, for example, the tutorials Binary Classification Tutorial with the Keras Deep Learning Library (2016) and How to Choose Loss Functions When Training Deep Learning Neural Networks (2019) by Jason Brownlee. Have also a look at Keras documentation of its available loss functions.
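
For illustration, here is a minimal Keras sketch that uses the binary cross-entropy loss (the layer sizes and the 10-feature input shape are placeholders for your own data):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),  # your 10+ input features
    keras.layers.Dense(1, activation="sigmoid")                    # single true/false output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10), with labels in {0, 1}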

+",2444,,2444,,12/10/2021 21:24,12/10/2021 21:24,,,,0,,,,CC BY-SA 4.0 +13261,1,13262,,7/8/2019 14:05,,12,6476,"

Let's consider this example:

+ +
+

It's John's birthday, let's buy him a kite.

+
+ +

We humans most likely would say the kite is a birthday gift, if asked why it's being bought; and we refer to this reasoning as common sense.

+ +

Why do we need this in artificially intelligent agents? I think it could cause a plethora of problems, since a lot of our human errors are caused by these vague assumptions.

+ +

Imagine an AI not doing certain things because it assumes they have already been done by someone else (or another AI), using its common sense.

+ +

Wouldn't that bring human errors into AI systems?

+",26958,,2444,,2/7/2021 22:36,2/7/2021 22:36,Why do we need common sense in AI?,,5,2,,,,CC BY-SA 4.0 +13262,2,,13261,7/8/2019 14:41,,18,,"

Commonsense knowledge is the collection of premises that everyone, in a certain context (hence common sense knowledge might be a function of the context), takes for granted. There would exist a lot of miscommunication between a human and an AI if the AI did not possess common sense knowledge. Therefore, commonsense knowledge is fundamental to human-AI interaction.

+

There are also premises that every human takes for granted independently of the country, culture, or, in general, context. For example, every human (almost since its birth) has a mechanism for reasoning about naive physics, such as space, time, and physical interactions. If an AI does not possess this knowledge, then it cannot perform the tasks that require this knowledge.

+

Any task that requires a machine to have common sense knowledge (of an average human) is believed to be AI-complete, that is, it requires human-level (or general) intelligence. See section D of AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI (2012) by Roman V. Yampolskiy.

+

Of course, the problems that arise while humans communicate because of different assumptions or premises might also arise between humans and AIs (that possess commonsense knowledge).

+",2444,,2444,,2/7/2021 22:32,2/7/2021 22:32,,,,1,,,,CC BY-SA 4.0 +13263,2,,13261,7/8/2019 14:48,,10,,"

We need this kind of common sense knowledge if we want to get computers to understand human language. It's easy for a computer program to analyse the grammatical structure of the example you give, but in order to understand its meaning we need to know the possible contexts, which is what you refer to as ""common sense"" here.

+ +

This was emphasised a lot in Roger Schank et al.'s work on computer understanding of stories, and led to a lot of research into knowledge representation, scripts, plans, and goals. One example from Schank's work is Mary was hungry. She picked up a Michelin Guide. -- this seems like a non-sequitur: if you are hungry, why pick up a book? Until you realise that it is a restaurant guide, and that Mary is presumably planning to go to a restaurant to eat. If you know that going to a restaurant is a potential solution to the problem of being hungry, then you have no problem understanding this story fragment.

+ +

Any story needs common sense to be understood, because no story is completely explicit. Common things are ""understood"" and aren't explicitly mentioned. Stories relate to human experience, and a story that would make everything explicit would probably read like a computer program. You also need common sense to understand how characters in a story behave, and how they are affected by what is happening. Again, this is very subjective, but it is necessary. Some common sense might be generally applicable, other aspects of it won't be. It's a complex issue, which is why researchers have struggled with it for at least half a century of AI research.

+ +

Of course this would introduce ""human errors"" into an AI system. All this is very subjective and culture-specific. Going to a restaurant in the USA is different from going to one in France -- this is why going abroad can be a challenge. And my reading of a story will probably be different from yours. But if you want to simulate human intelligence, you cannot do that without potential human ""errors"".

+",2193,,,,,7/8/2019 14:48,,,,1,,,,CC BY-SA 4.0 +13264,2,,13257,7/8/2019 15:13,,1,,"

Yes, it can. In addition to the papers that nbro linked to above, Uber's AI research team has a very interesting combination of SGD and neuroevolution, which they have dubbed ""safe mutations"". In that algorithm, each genome undergoes a bit of SGD to improve its fitness before the speciation, elitism, and reproduction processes. I imagine this has the effect of searching for genomes that are well suited to SGD optimization and, in my opinion, it really does provide the best of both worlds. Here is the link to the paper: https://arxiv.org/abs/1712.06563. What I think would be cool for this combination of the two would be its use in conjunction with the HyperNEAT/ES-HyperNEAT neuroevolution algorithms, in which a small genome (a CPPN) encodes large phenotype RNNs, using the RNN's substrate (its structure represented with Cartesian coordinates) as the CPPN's input. If a small amount of SGD is used on the RNNs to improve fitness, then what you end up with is a CPPN being evolved to encode very general RNN networks, which can then be optimized for specific domains via SGD. I like this because your neuroevolution doesn't occur on a massive RNN, and you can create CPPNs that recognize the general problem you wish to solve, if you're clever with your fitness evaluation.

+",20044,,20044,,7/8/2019 16:28,7/8/2019 16:28,,,,0,,,,CC BY-SA 4.0 +13266,1,13267,,7/8/2019 17:35,,6,642,"

I perfectly understand that a CNN takes into account the local dependency of each pixel on the nearby pixels. In addition, CNNs are spatially invariant, which means that they are able to detect the same feature anywhere in the image. These qualities are useful in image classification problems, given the nature of the problem.

+ +

How exactly does a vanilla neural net fall short on these properties? Am I right in claiming that a vanilla neural net has to learn a given feature in every part of the image? This is different from how a CNN does it, which learns the feature once and then detects it anywhere in the image.

+ +

How about local pixel dependency? Why can't a vanilla neural network learn the local dependency by relating one pixel to its neighbors in the 1D input?

+ +

In other words, is there more information present while training a CNN that are simply absent when training a normal NN? Or is a CNN just better at optimizing in the space of image classification problems?

+",17582,,2444,,7/8/2019 19:46,7/8/2019 19:50,Can a vanilla neural network theoretically achieve the same performance as CNN?,,1,0,,,,CC BY-SA 4.0 +13267,2,,13266,7/8/2019 18:38,,7,,"

All CNNs can be represented as vanilla networks on the flattened image data. To do so, though, you would need A LOT of parameters (most of which would be 0) to express what CNNs do freely. You can think of a CNN as repetitively reusing a filter on a masked input (whichever receptive field it is looking at at any given point during the convolution).

+ +

In other words, fully connected layers use all the information, so they can still learn spatial dependence as a CNN does, while in a CNN each neuron only looks at a specific receptive field and the same filter is reused for all neurons in that channel. This constraint saves computation and allows wider and deeper models under some budget.
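
As a rough illustration of that saving (the sizes below are arbitrary), compare the parameter counts of a convolutional layer and of a fully connected layer computing a map of the same shape in PyTorch:

import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)  # shared 3x3 filters over a 32x32 RGB image
dense = nn.Linear(3 * 32 * 32, 16 * 30 * 30)                     # one weight per input-output pair for the same map

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv))   # 448 parameters (16 * 3 * 3 * 3 weights + 16 biases)
print(count(dense))  # roughly 44 million parameters for the equivalent dense map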

+ +

This is nice because the hypothesis for why CNNs work is that, at each point in the network, we care about looking at localized features rather than global ones, and that composing these means that, even if each neuron only relates to a handful of neurons in the previous layer, its receptive field over the initial image can still be quite large, if not the whole thing.

+ +

Take away: CNNs are an efficient implementation of a vanilla NN, given the locality constraint that each neuron only looks at a small localized subset of neurons from the previous layer.

+",25496,,2444,,7/8/2019 19:50,7/8/2019 19:50,,,,1,,,,CC BY-SA 4.0 +13268,1,,,7/8/2019 22:13,,2,59,"

I want to build an AI that plays a simple android game.

+ +

The game simply has one object falling at a time, sometimes at an angle. The AI needs to recognize the object and decide whether to swipe left, swipe down, or click on it. The background changes sometimes, but the falling object is always on top.

+ +

There are 44 different assets and I have the original full-resolution PNGs of the objects.

+ +

How should I approach this?

+",26976,,2444,,7/8/2019 22:58,7/8/2019 22:58,How should I build an AI that quickly detects falling game assets on screen?,,0,0,,,,CC BY-SA 4.0 +13269,2,,13261,7/8/2019 23:07,,2,,"

I'll answer this question in several parts:

+ +
+

Why do AGI systems need to have common sense?

+
+ +

Humans in the wild reason and communicate using common sense more than they do with strict logic; you can see this by noting that it is easier to appeal to someone's emotions than to their logic. So any system that seeks to replicate human cognition (as in AGI) should also replicate this tendency to use common sense.

+ +

More simply put, we'd wish that our AGI system can speak to us in common sense language simply because that is what we understand best (otherwise we wouldn't understand our friendly AGI would we?). Obtuse theory and strict logic might technically be correct, but don't appeal to our understanding.

+ +
+

Isn't the goal of AGI the create the most cognitively advance system? Why should the ""most perfect"" AGI system need to deal with such imperfections and impreciseness present in common sense?

+
+ +

First, it might only appear to be the case that common sense logic is ""irrational"". Perhaps there is a consistent mathematical way to model common sense such that all the subtleties of common sense are represented in a rigorous fashion.

+ +

Second, the early study of Artificial Intelligence started in the study of cognitive science, where researchers tried to replicate ""algorithms of the mind"", or more precisely: decidable procedures which replicated human thought. To that extent then, the study of AI isn't to create the ""most supreme cognitive agent"" but to merely replicate human thought/behavior. Once we can replicate human behavior we can perhaps try to create something super-human by giving it more computational power, but that is not guaranteed.

+ +
+

I still don't see why common sense is needed in AGI systems. Isn't AGI about being the most intelligent and powerful computational system? Why should it care or conform towards the limits of human understanding, which requires common sense?

+
+ +

Perhaps then you have a bit of a misaligned understanding of what AGI entails. AGI doesn't mean unbounded computational power (physically impossible due to physical constraints on computation such as Bremermann's limit) or unbounded intelligence (perhaps physically impossible due to the prior constraint). It usually just means artificial ""general intelligence"", general meaning broad and common.

+ +

Considerations about unbounded agents are studied in more detail in fields such as theoretical computer science (type theory I believe), decision theory, and perhaps even set theory, where we are able to pose questions about agents with unbounded computational power. We might say that there are questions even an AGI system with unbounded power can't answer due to the Halting Problem, but only if the assumptions on those fields map onto the structure of the given AGI, which might not be true.

+ +

For a better understanding of what AGI might entail and its goals, I might recommend two books: Artificial Intelligence: The Very Idea by John Haugeland for a more pragmatic approach (as pragmatic as AI philosophy can be), and On the Origin of Objects by Brian Cantwell Smith for a more philosophically inclined approach.

+ +

As a fun aside, the collection of Zen koans The Gateless Gate includes the following passage (quoted and edited from Wikipedia):

+ +
+

A monk asked Zhaozhou, a Chinese Zen master, ""Has a dog Buddha-nature or not?"" Zhaozhou answered, ""Wú""

+
+ +

Wú (無) translates to ""none"", ""nonesuch"", or ""nothing"", which can be interpreted as avoiding answering either yes or no. This enlightened individual doesn't seek to strictly answer every question, but just to respond in a way that makes sense. It doesn't really matter whether the dog has Buddha-nature or not (whatever Buddha-nature means), so the master defaults to absolving the question rather than resolving it.

+",6779,,,,,7/8/2019 23:07,,,,1,,,,CC BY-SA 4.0 +13271,1,,,7/9/2019 2:59,,1,71,"

I have a simple neural network for binary classification.

+

The input features include age, sex, economic situation, illness, disability, etc. The output is simply 1 or 0.

+

I would like to order the features from the greatest to the least impact they had on the classification.

+

An example answer could look like this:

+

Classification: 1

+
    +
  1. illness
  2. economic situation
  3. disability
  4. sex
  5. age
+

Another example:

+

Classification: 0

+
    +
  1. economic situation
  2. age
  3. disability
  4. sex
  5. illness
+",18881,,2444,,12/21/2021 19:13,1/20/2022 20:02,"When doing binary classification with neural networks, how can I order the importance of the features for a class?",,1,0,,,,CC BY-SA 4.0 +13272,2,,13271,7/9/2019 3:17,,1,,"

Two popular methods I’ve seen done:

+ +

1) For each feature, remove it and run the model and see the impact it has on the result. The idea is that the larger the impact, the more pertinent it was to the result.

+ +

2) Look at the gradient's magnitude $|\nabla_f y|$. You can either look at the raw gradient or at guided back-propagation, which is just backprop's product rule, except that you only keep the cases where nodes positively help trigger a neuron, by taking only the positive gradients at each step.
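
As a rough PyTorch sketch of option 2 (x_sample and model are placeholders for your own input row and trained classifier):

import torch

x = x_sample.clone().requires_grad_(True)             # one input row, shape (1, num_features)
y = model(x)                                           # trained binary classifier, single output value
y.backward()                                           # fills x.grad with d y / d x_i
importance = x.grad.abs().squeeze()                    # one magnitude per input feature
ranking = torch.argsort(importance, descending=True)   # feature indices, most to least important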

+ +

There’s probably also more methods. Hope this helped.

+",25496,,,,,7/9/2019 3:17,,,,1,,,,CC BY-SA 4.0 +13273,1,,,7/9/2019 3:40,,1,51,"

Do you think it would be possible to train an AI in such a way as to mimic/simulate someone that is diagnosed as ""Special Needs""?

+ +

Why? Most diagnoses and treatments for people today are subjective; sure, it's what a group of like-minded professionals has agreed upon as a valid hypothesis, but, at the same time, there is an absence of the absolute. Could training an AI to become ""special needs"" be a starting point in helping find better ways to unlock the potential and understanding of these differences?

+",26982,,2444,,7/9/2019 8:59,7/9/2019 8:59,"Can an AI simulate someone that is diagnosed as ""Special Needs""?",,0,4,,,,CC BY-SA 4.0 +13274,1,13275,,7/9/2019 7:17,,2,110,"

I've recently come across an amazing work for human pose estimation: DensePose: Dense Human Pose Estimation In The Wild by Facebook.

+ +

In this work, they have tackled the task of dense human pose estimation using discriminative trained models.

+ +

I do understand that ""correspondence"" means how well pixels in one image correspond to pixels in the second image (specifically, here - 2D to 3D).

+ +

But what does ""dense"" means in this case?

+",26989,,2444,,7/9/2019 10:44,7/9/2019 10:44,"What is ""dense"" in DensePose?",,1,0,,,,CC BY-SA 4.0 +13275,2,,13274,7/9/2019 9:26,,0,,"

In computer vision, the adjectives dense and sparse are used in a variety of tasks (e.g. optical flow), but they are commonly used in the context of the correspondence problem, which is the problem of finding a map (or correspondence) between pixels of two images (e.g. two successive frames of a video). In this context, these adjectives thus refer to the number of pixels of the image that are used to solve this specific task. A dense correspondence is thus a correspondence between two images using all (or, at least, many) pixels. In other words, a dense correspondence attempts to map all (or many) pixels of an image to all (or many) pixels of another image.

+ +

In the case of DensePose, the correspondences are between RGB images and surface-based representation of the human body (the figures of the paper illustrates this).

+",2444,,2444,,7/9/2019 9:32,7/9/2019 9:32,,,,0,,,,CC BY-SA 4.0 +13276,1,13277,,7/9/2019 9:46,,3,115,"

I would like to create a model that will tell me whether one type of object is in an image or not.

+ +

So, for example, I have a camera and I would like to see when one object gets into the shot.

+ +
    +
  • Object detection: This could be overkill, because I don't need to know the bounding box. Also, it means that I would need to label a lot of images and draw the bounding boxes to have training data (a lot of time)

  • +
  • Image classification: This doesn't solve the problem, because I don't know what else could not be an object. It would be impossible to train for 2 classes: object / not object.

  • +
+ +

My idea is to use an autoencoder. Train it only on data with the object. Then, if the autoencoder produces a result with a high difference from the original, I detect it as an anomaly - no object.

+ +

Is this a good approach? Will I have a lot of trouble with different backgrounds?

+",26993,,2444,,7/10/2019 20:59,7/10/2019 20:59,How should I detect an object in a camera image?,,1,0,,,,CC BY-SA 4.0 +13277,2,,13276,7/9/2019 10:09,,2,,"
+

Is this a good approach? Will I have a lot of trouble with different backgrounds?

+
+ +

A lot will depend on the nature of the backgrounds you have, and how well they encode/decode by themselves without the object in frame. My gut feeling is that your system will have poor performance compared to a properly trained classifier, as the autoencoder will naturally have to get good at constructing background elements in order to score well, so unless your object is very consistently always in a similar place with radically different appearance to the background then the autoencoder will get good reconstructions of background-only frames, and your anomaly detection would need to be set too sensitive. That would cause the detector to fail to spot the object when in frame.

+ +

There is a catch-all answer to ""how well will my idea work"" with ML projects. You should try it and see. Data science is essentially an empirical approach, and building practical models is an engineering discipline where testing is a core part of the process.

+ +

In order to test your model, you are going to need a lot of images of just background, including all the kinds of background that you expect the system to be used. Which begs the question, why not collect a range of suitable images without your target object, and use them as the second class?

+ +
+

This doesn't solve the problem, because I don't know what else could not be an object. It would be impossible to train for 2 classes: object / not object.

+
+ +

Actually, it is easy. Just collect suitable images from locations where you expect your system to be used, without the object in frame. These are your ""Not Object"" class. A ""Not Object"" does not have to include some substitute foreground object. Although I recommend that you do have some images like this to prevent accidentally creating an ""object is in the foreground"" detector*. The primary goal should be to be collect data that matches how the model will be used once deployed. That will depend a lot on how much control and consistency you get over the production cameras where the trained detector will be put to use.

+ +

I would do this, then train a standard binary classifier.

+ +

If you are still interested in how well your auto-encoder idea could work, you then have plenty of test data to evaluate it.

+ +
+ +

* This is one area where your auto-encoder idea might do better than a classifier - reconstructing unfamiliar foreground objects should be hard for it leading to high error.

+ +

It is too difficult to tell in advance whether this effect is strong enough to make your auto-encoder approach better than a classifier.
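
If you do evaluate the auto-encoder route, the anomaly-scoring part can be as simple as this sketch (autoencoder, test_images and val_errors are placeholders for your own trained model and data):

import numpy as np

reconstructions = autoencoder.predict(test_images)   # autoencoder trained only on object-present frames
errors = np.mean((test_images - reconstructions) ** 2, axis=(1, 2, 3))  # per-image MSE, assuming (N, H, W, C) arrays
threshold = np.percentile(val_errors, 95)            # val_errors: errors on held-out object-present frames
no_object = errors > threshold                       # high reconstruction error flagged as "no object"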

+",1847,,2444,,7/9/2019 10:51,7/9/2019 10:51,,,,2,,,,CC BY-SA 4.0 +13278,1,,,7/9/2019 11:26,,2,43,"

I am looking to extract causal relations between entities, like Drug and Adverse Effect, in a document. Are there any proven NLP or AI techniques to handle this? Also, are there ways to handle cases where the two entities may not necessarily co-occur in the same sentence?

+",26115,,2444,,7/9/2019 11:45,7/9/2019 11:45,Models to extract Causal Relationship between entities in a document using Natural Language Processing techniques,,0,1,,,,CC BY-SA 4.0 +13279,2,,13261,7/9/2019 12:05,,1,,"

Is this common sense, or is this natural language understanding?

+ +

It's been said that natural language understanding is one of the hardest AI tasks. This is one of the examples showing why. The first part of the sentence is related to the second part; that's how sentences work.

+ +

Now the relevant question is how the two parts are related. There are a few standard relations that we encounter, for instance a temporal order. In this specific example, the nature of the relation is closer to a cause-and-effect.

+ +

You see this effect when we insert a word to make this relation explicit:

+ +
+

It's John's birthday, so let's buy him a kite. + or + Let's buy John a kite, because it's his birthday.

+
+ +

This is a technique for humans to make these implicit relations explicit.

+ +

Now, as curiousdannii notes, you also need the cultural knowledge to understand how a birthday can be a cause for a present. No amount of common sense helps with that.

+",16378,,,,,7/9/2019 12:05,,,,1,,,,CC BY-SA 4.0 +13280,1,,,7/9/2019 12:52,,1,71,"

I have an idea for a new mobile app. Here is what I want to accomplish using AI;
+I want to get an image (png format), (maybe just byte data too), from my application (I'm developing with Unity3D/C#), send this data to AI application; get modified image from ML app and send it back to my app.

+ +

What AI is going to do with img?
+Imagine you are the user of my app; you are going to draw a picture on your phone. +Picture will be simple, like a seagull illustrated as an 'M' letter.
+AI program will get your drawing ('M' in this case), check pixels to give a meaning to 'M'; then draw a more complex drawing that is themed around that M seagull.
+(Like a drawing with an ocean made by Pixel Art, with rain clouds from Van Gogh and seagull is painted as a surreal bird...)
+


My general idea about building this AI system...
I'm not sure how to build this AI system, because I don't understand AI/ML completely: how it works on a machine, how to implement it on a computer, how to write an algorithm, how pre-made libraries like TensorFlow work... But I'm in a phase of my life where I need to use my time well, and I want to build this app while learning.
I think I can build a side app to use for analyzing and modifying the image I get from the user. Right now I can write in C and C#, and I am learning JavaScript. I learned Python too (and a few others), but I'm not comfortable using it (I hate Python). And I haven't written a good program in any other language...
I thought I could use JavaScript, Java, or C++, but honestly I don't know how to start and which steps to take. Also, after gaining some success, I will want to port the app to iOS too... Maybe that can wait...

+ +

Can you give me some examples, guidelines, and advice? How should I start doing this, and what is the best approach, performance- and development-time-wise?
And is my approach to the problem a good one? Can you come up with a better solution for my idea, to point me in another direction?

+",25858,,25858,,7/9/2019 16:21,7/9/2019 16:21,How can I develop this ML/AI system that I want to use in my new mobile app?,,0,7,,,,CC BY-SA 4.0 +13282,2,,6038,7/9/2019 15:14,,1,,"

Liquid State Machines are used in the field of Neuroscience. What you have used is a variant of LSM called Echo State Network (ESN) used in the field of Machine Learning.

+ +

ESNs are pretty simple and superfast compared to normal ML paradigms like feed-forward NNs or RNNs. ESNs are based on a relatively new paradigm called Reservoir Computing. ESNs are mainly used for sequential ML problems, and they are used to overcome the problems RNNs face, like vanishing gradients, exploding gradients, long training times, etc. The basic idea is that you have a reservoir of neurons (basically a neural net with fixed and non-trainable weights, though it can be varied to more complex structures) which will echo an input into some form, based on which a simple linear regression model (or maybe a complex NN) will be trained.

+ +

The basic idea of an ESN is very simple; note that it works on time-series data and the conventions are similar to an RNN:

+ +
    +
  • Just like an RNN, you have a hidden state $x(n)$ and an input $u(n)$, and some randomly initialized non-trainable weights are assigned to them (the weights with which you dot product these 2 terms). $$a(n) = W_{in}[1:u(n)] + W[x(n-1)]$$
  • The next $x(n)$ is produced as a linear combination of a non-linear transformation of $x(n-1)$ and $u(n)$ (multiplied by their respective weights) and of $x(n-1)$ itself. $$\tilde x(n) = \tanh(a(n))$$ $$x(n) = (1-\alpha) \cdot x(n-1) + \alpha \tilde x(n)$$

  • The prediction target $y(n)$ is obtained simply by multiplying $x(n)$ and $u(n)$ with $W_{out}$, which are the only trainable weights. $$y(n) = W_{out}[1:u(n):x(n)]$$

  • Now you apply a suitable loss function on your predicted $y(n)$ vs the actual target $t(n)$ to train the weights $W_{out}$.
+ +

($:$ means concatenation and follows the usual RNN concatenation rules)
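
A bare-bones NumPy sketch of that update rule (the sizes, scaling and seed are arbitrary choices of mine, just to make the equations concrete; a proper ESN would also rescale $W$ by its spectral radius):

import numpy as np

n_in, n_res, alpha = 1, 100, 0.3
rng = np.random.default_rng(0)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1 + n_in))   # fixed, not trained
W = rng.uniform(-0.5, 0.5, (n_res, n_res)) * 0.1   # fixed reservoir weights, scaled down for stability

def step(x, u):
    a = W_in @ np.concatenate(([1.0], u)) + W @ x   # a(n)
    x_tilde = np.tanh(a)                            # non-linear transformation
    return (1 - alpha) * x + alpha * x_tilde        # leaky update x(n)

# Collect the states x(n) over a sequence, then fit W_out by (ridge) linear
# regression on [1 : u(n) : x(n)] against the targets t(n).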

+ +

This is a very simple overview of ESNs, and it works surprisingly well for sequential prediction tasks. The main idea and intuition behind how it works is that the inputs are echoed in the reservoir of untrainable weights, which basically means that the input is converted into a certain 'form' due to the random initialisation of weights, and this 'form' trains the $W_{out}$ (the first few training $u(n)$ are run through the network without training, so that the network assumes a starting state where the input has already been echoed through the network and is now finally coming out).

+ +

In terms of difference of purpose, a few are:

+ +
    +
  • Provides superfast training times.
  • +
  • Works on Sequential data mainly.
  • +
  • It is being used as a side network to initialise RNN weights which apparently improves the performance.
  • +
+ +

Although these networks have some very hard-to-tune hyperparameters, owing to their simple and superfast training one can easily check a lot of hyperparameter settings. The network also performs surprisingly well, and this has resulted in Reservoir Computing being very actively researched. Check this very old question too.

+",,user9947,,user9947,7/11/2019 22:55,7/11/2019 22:55,,,,0,,,,CC BY-SA 4.0 +13283,1,,,7/9/2019 17:10,,1,72,"

I’m not really sure which machine learning approach is best for my problem at hand. I work in an engineering company that designs and builds different kinds of ships. In my particular job, I collect the individual weight of items on these vessels. The weight and there location is important because it is used to ensure the vessel in question can float in a balanced manner.

+ +

I have a large corpus of historical data on hand that lists the items on the vessel, their attributes, the weight for these items, and where the weight came from (documentation), or the source.

+ +

So, for example, let’s say I have the following information:

+ +
 ITEM  |         ATTRIBUTES            | WEIGHT  |WEIGHT SOURCE
+Valve  |Size: 1 inch |Type: Ball Valve |2 lbs.   |Database 1
+Elbow  |Size: 2 inch |Type: Reducing   |1 lb.    |Database 2
+
+ +

I have to comb through many systems on these vessels and find the proper documentation or engineering drawings that lists the weight for the item in question. It usually starts by investigating the item and its attributes and then looking in a number of databases for the weight documentation. This takes a long time, as there is no organization or criteria as to what database has what. You just have to start randomly searching them and hope you find what you need.

+ +

Well, I now have a large corpus of real-world data that lists thousands of items, their attributes, their weight and, most importantly, the source of the documentation (Database 1, 2, 3, etc.). I'm wondering if there is any correlation between an item, its attributes, and its weight location (database). This is where machine learning comes in. What I'd like to do is use machine learning to help find the weight location more quickly. Ideally, it would be nice if it could analyze a batch of information and then provide recommendations on which databases to search.

+ +

My first thoughts are that this is a classification problem, and maybe a CNN would be helpful here. If that is the case, I have over 100 categories in my dataset. +I actually went ahead and programmed a simple feed forward neural network using the following resources: https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/ I attempted to use this network to solve the above problem, but so far I have had no success. I’m in over my head here.

+ +

I don’t expect it to be correct 100% of the time. Even if it had an 80% success rate that would be awesome. So my question is this;

+ +

What kind of neural network do I need to accomplish this?

+",20271,,2444,,7/9/2019 22:07,7/9/2019 22:07,Is this a classification problem?,,0,4,,,,CC BY-SA 4.0 +13284,1,,,7/9/2019 18:10,,2,57,"

I've written an application to help players pick the optimal heroes during the draft phase of the Heroes of the Storm MOBA. It can be daunting to pick from 80+ characters that have synergies/counters to other characters, strong/weak maps, etc. The app attempts to pick the optimal composition using a genetic algorithm (GA) based on various sources of information on these heroes.

+ +

The problem I've realized is that not all sources of information are created equal. At the moment I'm giving all sources roughly equal importance in the fitness function but as I add other sources, I think it's going to be necessary to be more discerning about them.

+ +

It seems like the right way to do this would be to use a single layer neural network where the weights of the synapses represent the weights in the fitness function. I could use matches played at a high-level (e.g. from MasterLeague.net) to form the training and test sets.

+ +

Does this sound like a viable approach or am I missing something simpler? Is the idea of the using a GA even the correct way to approach this problem?

+",27010,,,,,7/9/2019 18:10,Is a neural network the correct approach to optimising a fitness function in a genetic algorithm?,,0,0,,,,CC BY-SA 4.0 +13285,2,,13232,7/9/2019 19:49,,5,,"

There seem to be two different ideas in this question here:

+ +
    +
  1. What's the impact / importance of our choice for reward values?
  2. What's the impact / importance of our choice for initial value estimates (how do we initialise our table of $Q(s, a)$ values in the case of a simple, tabular RL algorithm like Sarsa or $Q$-learning)?
+ +

The reward values are typically assumed to be a part of the problem definition - something we shouldn't modify if we're using an existing problem definition as a benchmark. But if we're in charge of defining the problem ourselves, we can of course also pick the reward values. Modifying them may indeed have a huge impact on the speed with which RL algorithms are able to learn a task - but it may also intrinsically changes the task, it changes the objective of the problem, it may change which policies are optimal.

+ +
+ +

As for initialisation of our table of value approximations: by default, we normally assume an all-$0$ initialisation. However, it is a fairly common trick (in tabular RL algorithms, without function approximation) to initialise value estimates optimistically; pick higher initial $Q(s, a)$ value estimates than are likely (or even pick values higher than a known upper bound on what the true value possibly could be). This is often beneficial - also in large gridworlds with sparse rewards (e.g. a single distant goal somewhere) and negative rewards (i.e. costs) incurred for every step taken - because it incentivises exploration of state-action pairs that have not yet been tried.

+ +

Suppose you have your gridworld with negative rewards associated with every time step, and the optimal policy being one that takes you to a distant goal as soon as possible. If all $Q(s, a)$ are initialised to $0$ (or worse, to negative values), your agent may quickly learn that everything it does is equally bad anyway, and get stuck near the starting position. If all $Q(s, a)$ values are initialised optimistically (to large, positive values), your agent during the learning process will still have optimistic expectations of what it can achieve if it just tries to navigate to unexplored parts of the state-action space.
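
As a tiny illustrative sketch (the numbers are arbitrary), optimistic initialisation for a tabular method just means starting the table high instead of at zero:

import numpy as np

n_states, n_actions = 25, 4
Q_default = np.zeros((n_states, n_actions))          # the usual all-0 initialisation
Q_optimistic = np.full((n_states, n_actions), 10.0)  # above any return the agent can actually achieve

# With Q_optimistic, greedy action selection keeps preferring state-action pairs
# that have not yet been tried, because their estimates are still at the optimistic value.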

+",1641,,,,,7/9/2019 19:49,,,,4,,,,CC BY-SA 4.0 +13286,5,,,7/9/2019 22:32,,0,,"

See e.g. Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008), by Eliezer Yudkowsky, and https://en.wikipedia.org/wiki/Friendly_artificial_intelligence.

+",2444,,2444,,7/9/2019 22:32,7/9/2019 22:32,,,,0,,,,CC BY-SA 4.0 +13287,4,,,7/9/2019 22:32,,0,,"For questions related to the concept of friendly artificial intelligence (FAI), which is a hypothetical AI that possesses the desire not to harm humans. The expression ""friendly AI"" was popularised by Eliezer Yudkowsky and it is mentioned in his article ""Artificial Intelligence as a Positive and +Negative Factor in Global Risk"" (2008).",2444,,2444,,7/9/2019 22:32,7/9/2019 22:32,,,,0,,,,CC BY-SA 4.0 +13288,1,13290,,7/10/2019 3:27,,1,196,"

As simple as that. Is there any scenario where the error might increase, if only by a tiny amount, when using SGD (no momentum)?

+",26726,,,,,7/10/2019 5:33,Is it possible with stochastic gradient descent for the error to increase?,,1,0,,,,CC BY-SA 4.0 +13289,1,13293,,7/10/2019 3:29,,63,16248,"

Imagine you show a neural network a picture of a lion 100 times and label it with "dangerous", so it learns that lions are dangerous.

+

Now imagine that previously you have shown it millions of images of lions and alternatively labeled it as "dangerous" and "not dangerous", such that the probability of a lion being dangerous is 50%.

+

But those last 100 times have pushed the neural network into being very positive about regarding the lion as "dangerous", thus ignoring the last million lessons.

+

Therefore, it seems there is a flaw in neural networks, in that they can change their mind too quickly based on recent evidence. Especially if that previous evidence was in the middle.

+

Is there a neural network model that keeps track of how much evidence it has seen? (Or would this be equivalent to letting the learning rate decrease by $1/T$ where $T$ is the number of trials?)

+",4199,,2444,,12/12/2021 12:28,12/12/2021 12:28,Are neural networks prone to catastrophic forgetting?,,4,1,,,,CC BY-SA 4.0 +13290,2,,13288,7/10/2019 5:33,,3,,"

Yes. Not only that, but the error is highly noisy, prone to big spikes, and sometimes shows quite long periods of increase before it decreases again or stabilizes. Often it's even impossible to understand the error plot without passing it through a smoothing filter, it is so noisy. The specifics depend on the problem, of course. It's not only for SGD but for any optimizer.
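
For reading such a noisy curve, a simple exponential moving average is usually enough; a tiny sketch:

def smooth(values, beta=0.9):
    """Exponentially weighted moving average of a list of recorded loss values."""
    out, running = [], values[0]
    for v in values:
        running = beta * running + (1 - beta) * v
        out.append(running)
    return out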

+",22745,,,,,7/10/2019 5:33,,,,0,,,,CC BY-SA 4.0 +13291,2,,13289,7/10/2019 8:29,,22,,"

Yes, the problem of forgetting older training examples is a characteristic of Neural Networks. I wouldn't call it a ""flaw"" though because it helps them be more adaptive and allows for interesting applications such as transfer learning (if a network remembered old training too well, fine tuning it to new data would be meaningless).

+ +

In practice what you want to do is to mix the training examples for dangerous and not dangerous so that it doesn't see one category in the beginning and one at the end.

+ +

A standard training procedure would work like this:

+ + + +
for e in epochs:
    shuffle dataset
    for x_batch, y_batch in dataset:
        train neural_network on x_batch, y_batch
+
+ +

Note that the shuffle at every epoch guarantees that the network won't see the same training examples in the same order every epoch and that the classes will be mixed

+ +

Now to answer your question, yes decreasing the learning rate would make the network less prone to forgetting its previous training, but how would this work in a non-online setting? In order for a network to converge it needs multiple epochs of training (i.e. seeing each sample in the dataset many times).

+",26652,,,,,7/10/2019 8:29,,,,0,,,,CC BY-SA 4.0 +13293,2,,13289,7/10/2019 10:14,,70,,"

Yes, indeed, neural networks are very prone to catastrophic forgetting (or interference). Currently, this problem is often ignored because neural networks are mainly trained offline (sometimes called batch training), where this problem does not often arise, and not online or incrementally, which is fundamental to the development of artificial general intelligence.

+

There are some people that work on continual lifelong learning in neural networks, which attempts to adapt neural networks to continual lifelong learning, which is the ability of a model to learn from a stream of data continually, so that they do not completely forget previously acquired knowledge while learning new information. See, for example, the paper Continual lifelong learning with neural networks: A review (2019), by German I. Parisi et al., which summarises the problems and existing solutions related to catastrophic forgetting of neural networks.

+",2444,,2444,,6/27/2020 0:59,6/27/2020 0:59,,,,1,,,,CC BY-SA 4.0 +13294,5,,,7/10/2019 10:25,,0,,,2444,,2444,,7/10/2019 10:25,7/10/2019 10:25,,,,0,,,,CC BY-SA 4.0 +13295,4,,,7/10/2019 10:25,,0,,"For questions related to the concept of catastrophic forgetting (or, also called, catastrophic interference), which is the problem of forgetting previously acquired information (or the ability to solve certain tasks) while learning new information (in an online or incremental fashion) that certain machine learning models (in particular, neural networks) face.",2444,,2444,,7/15/2019 23:06,7/15/2019 23:06,,,,0,,,,CC BY-SA 4.0 +13296,2,,11640,7/10/2019 10:41,,10,,"

You need to read this 2020 paper by Deepmind: +"Revisiting Fundamentals of Experience Replay" +They explicitly test the size of the experience replay, the replay-ratio of each experience and other parameters.

+

Also, to add to the answer by @nbro

+

Assume you implement experience replay as a buffer where the newest memory is stored instead of the oldest. Then, if your buffer contains 100k entries, any memory will remain there for exactly 100k iterations.

+

Such a buffer is simply a way to "see" what was up to 100k iterations ago. +After the first 100k iterations you fill the buffer and begin "moving" it, much like a sliding window, by inserting new memories instead of the oldest.
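
Such a sliding-window buffer is often implemented with a fixed-size deque; a minimal sketch (the capacity and field layout are arbitrary):

import random
from collections import deque

buffer = deque(maxlen=100_000)   # the oldest transition is dropped automatically

def store(state, action, reward, next_state, done):
    buffer.append((state, action, reward, next_state, done))

def sample(batch_size=32):
    return random.sample(buffer, batch_size)   # uniform sampling for a training update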

+
+

The size of the buffer (relative to the total number of iterations you plan to ever train with) depends on "how much you believe your network architecture is susceptible to catastrophic forgetting".

+

A tiny buffer might force your network to only care about what it saw recently.

+

But an excessively large buffer might take a long time to "become refreshed" with good trajectories, when they finally start to be discovered. So the network would be like a university student whose book shelf is diluted with first-grade school books.

+

The student might have already decided that he/she wishes to become a programmer, so re-reading those primary school books has little benefit (time could have been spent more productively on programming literature) + it takes a long time to replace those with relevant university books.

+",27042,,27042,,5/2/2023 17:36,5/2/2023 17:36,,,,4,,,,CC BY-SA 4.0 +13297,1,,,7/10/2019 12:29,,1,33,"

Due to the fast-growing applications of AI technologies applied to vehicle re-identification tasks, there have already been hot contests, such as the Nvidia AI challenge.

+ +

What algorithms or models are really adopted in commercial vehicle re-identification tasks, effective and reliable, nowadays?

+",14948,,2444,,7/10/2019 20:50,7/10/2019 20:50,What models and algorithms are used in commercial vehicle re-identification tasks?,,0,0,,,,CC BY-SA 4.0 +13298,2,,13261,7/10/2019 13:37,,2,,"

Perhaps it would help to give an example of what can go wrong without common sense: At the start of the novel ""The Two Faces of Tomorrow"" by James Hogan, a construction supervisor on the Moon files a request with an automated system, asking that a particular large piece of construction equipment be delivered to his site as soon as possible. The system replies that it will arrive in twenty minutes. Twenty minutes later, the supervisor is killed as the equipment crashes into his construction site. The system had determined that the fastest way to deliver the equipment to that site was to mount it on a mass-driver and launch it at the site. Had the system in question been given common sense, it would have inferred additional unstated constraints on the query, such as 'the equipment should arrive intact', 'the arrival of the equipment should not cause damage or loss of life', and so on. (the rest of the novel describes an experiment designed to produce a new system with common sense)

+",27045,,,,,7/10/2019 13:37,,,,1,,,,CC BY-SA 4.0 +13299,1,,,7/10/2019 15:35,,1,98,"

If I see a hundred elephants and fifty of them are grey I'd say the probability of an elephant being grey is 50%. And my certainty of that probability is high.

+ +

However, if I see two elephants and one of them is grey, the probability is still 50%. But my certainty of this is low.

+ +

Are there any AI models where not only the probability is given by the AI but it's certainty is also?

+ +

""Certainty"" might be thought of as the probability that the probability is correct.

+ +

This could go up more levels.

+ +

Is there any advantage in doing this?

+ +

One way I can envisage this working is instead of a weight, the NN stores two integers $(P,N)$ which represent positive and negative evidence and the weight is given by $P/(P+N)$. And each iteration $P$ or $N$ can only be incremented by 1.

+",4199,,26958,,7/11/2019 21:32,7/11/2019 21:32,"Is there an AI model with ""certainty"" built in?",,1,2,,,,CC BY-SA 4.0 +13300,2,,13299,7/10/2019 16:38,,1,,"

You make a valid point, vanilla neural networks cannot give you more than a point estimate of class confidence. If one wanted to actually gain an idea of variance, you need a framework that allows such a mechanism.

+ +

A popular methodology for this is Bayesian modeling. In other words, given some data $\Omega$, you want to create some form of discriminative model $p(y|x;\theta)$ where the parameters $\theta$ are random variables. This is unlike NNs, which also learn some discriminative model $p(y|x;\theta)$, but where $\theta$ is fixed. This difference is key to your goal, because if $\theta$ is fixed, $Var(Y|X) = 0$, since any time you put the same $X$ into the network, it'll always come out the same. On the other hand, a Bayesian model will have a variance $Var(Y|X) = E[(Y - E[Y|X])^2 | X]$, and $(Y - E[Y|X])^2 | X$ is no longer guaranteed to be 0 and can be empirically measured through some MC method (or analytically, based on your model).

+ +

Note that your idea is on the right track, that the more data points you have, the more confident your estimates will be, but the only issue is that just considering $\frac{P}{(P+N)}$ in a neural network will be just a heuristic and is difficult to quantify its association to the variance.

+ +

I think this may fit your fancy: a high-level overview of Bayesian networks (blog post). This is a high-level overview of using a neural network paradigm but including uncertainty in the weights. There are tons of tricks out there to make it so you can still train these with gradient descent and such.
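
One cheap, approximate way to get such a spread out of an ordinary network is Monte Carlo dropout: keep dropout active at prediction time and look at the variation across repeated predictions. A rough Keras-style sketch (model and x are placeholders, and the model is assumed to contain dropout layers):

import numpy as np

# Run the same input through the network many times with dropout left on
# (in Keras this is done by calling the model with training=True).
preds = np.stack([model(x, training=True).numpy() for _ in range(100)])

mean_prob = preds.mean(axis=0)   # point estimate of the class probability
uncertainty = preds.std(axis=0)  # spread across the stochastic forward passes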

+",25496,,,,,7/10/2019 16:38,,,,0,,,,CC BY-SA 4.0 +13301,1,,,7/10/2019 17:05,,1,27,"

I would like to implement various AI-estimators for quantile estimation for a regression problem. It would be necessary to have non-crossing quantiles, that is larger quantiles would correspond to higher prediction values.

+ +

My objective is to have a multidimensional prediction vector as an output in the estimation, each dimension corresponding to a specific quantile. Maybe I would have to define a custom loss function, as well, for that purpose. I would like to try different methods such as deeplearning or gradient boosting or random forests.

+ +

Anyone having an idea how to build such AI-estimators? My preferred library choice would be scikit-learn.

+ +

Can someone give me an idea how to do this?

+",23784,,,,,7/10/2019 17:05,Scikit-Learn: monotoneous quantile estimation,,0,0,,,,CC BY-SA 4.0 +13302,2,,5769,7/10/2019 19:19,,0,,"

For anyone trying to understand how convolutions are calculated, here is a useful code snippet in Pytorch:

+ +
import numpy as np
import torch
import torch.nn as nn

batch_size = 1
height = 3
width = 3
in_channels = 2
conv1_out_channels = 2
conv2_out_channels = 2
kernel_size = 2
# (N, C_in, H, W) is the shape of all tensors: (batch_size, channels, height, width)
input = torch.Tensor(np.arange(0, batch_size*height*width*in_channels).reshape(batch_size, in_channels, height, width))
conv1 = nn.Conv2d(in_channels, conv1_out_channels, kernel_size, bias=False) # no bias to make calculations easier
# set the weights of the convolutions to make the convolutions easier to follow
nn.init.constant_(conv1.weight[0][0], 0.25)
nn.init.constant_(conv1.weight[0][1], 0.5)
nn.init.constant_(conv1.weight[1][0], 1)
nn.init.constant_(conv1.weight[1][1], 2)
out1 = conv1(input) # compute the first convolution

conv2 = nn.Conv2d(conv1_out_channels, conv2_out_channels, kernel_size, bias=False)
nn.init.constant_(conv2.weight[0][0], 0.25)
nn.init.constant_(conv2.weight[0][1], 0.5)
nn.init.constant_(conv2.weight[1][0], 1)
nn.init.constant_(conv2.weight[1][1], 2)
out2 = conv2(out1) # compute the second convolution

for tensor, name in zip([input, conv1.weight, out1, conv2.weight, out2], ['input', 'conv1', 'out1', 'conv2', 'out2']):
    print('{}: {}'.format(name, tensor))
    print('{} shape: {}'.format(name, tensor.shape))
+
+ +

Running this gives the following output:

+ +
input: tensor([[[[ 0.,  1.,  2.],
+          [ 3.,  4.,  5.],
+          [ 6.,  7.,  8.]],
+
+         [[ 9., 10., 11.],
+          [12., 13., 14.],
+          [15., 16., 17.]]]])
+input shape: torch.Size([1, 2, 3, 3])
+conv1: Parameter containing:
+tensor([[[[0.2500, 0.2500],
+          [0.2500, 0.2500]],
+
+         [[0.5000, 0.5000],
+          [0.5000, 0.5000]]],
+
+
+        [[[1.0000, 1.0000],
+          [1.0000, 1.0000]],
+
+         [[2.0000, 2.0000],
+          [2.0000, 2.0000]]]], requires_grad=True)
+conv1 shape: torch.Size([2, 2, 2, 2])
+out1: tensor([[[[ 24.,  27.],
+          [ 33.,  36.]],
+
+         [[ 96., 108.],
+          [132., 144.]]]], grad_fn=<MkldnnConvolutionBackward>)
+out1 shape: torch.Size([1, 2, 2, 2])
+conv2: Parameter containing:
+tensor([[[[0.2500, 0.2500],
+          [0.2500, 0.2500]],
+
+         [[0.5000, 0.5000],
+          [0.5000, 0.5000]]],
+
+
+        [[[1.0000, 1.0000],
+          [1.0000, 1.0000]],
+
+         [[2.0000, 2.0000],
+          [2.0000, 2.0000]]]], requires_grad=True)
+conv2 shape: torch.Size([2, 2, 2, 2])
+out2: tensor([[[[ 270.]],
+
+         [[1080.]]]], grad_fn=<MkldnnConvolutionBackward>)
+out2 shape: torch.Size([1, 2, 1, 1])
+
+ +

Notice how each channel of the convolution sums over all of the previous channels' outputs.

+",26838,,,,,7/10/2019 19:19,,,,0,,,,CC BY-SA 4.0 +13304,5,,,7/10/2019 21:16,,0,,,2444,,2444,,7/10/2019 21:16,7/10/2019 21:16,,,,0,,,,CC BY-SA 4.0 +13305,4,,,7/10/2019 21:16,,0,,"For questions related to machine translation (MT), which is the task of translating text using a computer, machine or software.",2444,,2444,,7/10/2019 21:16,7/10/2019 21:16,,,,0,,,,CC BY-SA 4.0 +13307,1,13312,,7/10/2019 21:59,,2,395,"

GLIE+MC control Algorithm:

+ +

+ +

My question is why does this algorithm use only a single Monte Carlo episode (during PE step) to compute the $Q(s,a)$? In my understanding this has the following drawbacks:

+ +
    +
  • If we have multiple terminal states then we will only reach one (per Policy Iteration step PE+PI).
  • +
  • It is highly unlikely that we will visit all the states (during training), and a popular schedule for the exploration constant, $\epsilon = 1/k$ where $k$ is apparently the episode number, makes exploration decay very, very rapidly. This means that we may never visit some states during our entire training.
  • +
+ +

So why does this algorithm use a single MC episode, and why not multiple episodes in a single Policy Iteration step, so that the agent gets a better feel for the environment?

+",,user9947,2444,,7/11/2019 19:47,7/11/2019 19:47,Why does GLIE+MC Control Algorithm use a single episode of Monte Carlo evaluation?,,1,0,,,,CC BY-SA 4.0 +13308,1,,,7/10/2019 22:23,,1,586,"

As far as I understand, RL is a process that can be divided into 2 stages:

+
    +
  1. Exploring a wide range of paths (acting randomly)

    +
  2. +
  3. Refining the current optimal paths (revolving around actions with a so-far most promising score estimate)

    +
  4. +
+

Completing 1. too quickly results in a network that just doesn't spot the best combination of actions, especially if rewards are sparse. "Refining" then has little benefit, since the network will tend to choose between unlucky estimates it observed so far, and will specialise in those.

+

On the other hand, finishing 2. too quickly results in a network that might have encountered the best combination, but never got time to refine these "good trajectories". Thus its estimates of scores along these "good trajectories" are rather poor and inaccurate, so again the network will avoid selecting and specialising in them, because they might have a low (inaccurate) estimate.

+

Why not give both 1. and 2. the maximum time possible?

+

In other words, instead of gradually annealing the $\epsilon$ coefficient (in the $\epsilon$-greedy policy) down to a low value, why not always have it as a step function?

+

For example, train 50% of iterations with a value of 1 (acting completely randomly), and the second half of training with a value of 0.05 (very greedy), etc. The 50% is a rough guess and could be adjusted manually, as needed. The most important part is this "step function".

+

To me, always using such a "step" function would instantly reveal if the initial random search was not long enough. Perhaps there is a disadvantage of such a step curve?

+

So far, I got the impression that annealing is a gradual process. To me, it seems that when using gradual annealing it might not be evident whether the neural network (e.g. in DQN or DQRNN) learns poorly because of the mentioned issue or because of something else.

+

Is there some literature exploring this?

+

There is a paper Noisy Networks for Exploration, but it proposes another approach that removes the $\epsilon$ hyperparameter. My question is different, specifically, about this $\epsilon$.

+",27042,,2444,,12/4/2020 21:01,12/4/2020 21:01,Why is the $\epsilon$ hyper-parameter (in the $\epsilon$-greedy policy) annealed smoothly?,,0,1,,,,CC BY-SA 4.0 +13310,2,,13289,7/11/2019 1:21,,5,,"

What you are describing sounds like it could be a deliberate case of fine-tuning.

+ +

There is a fundamental assumption that makes minibatch gradient descent work for learning problems: It is assumed that any batch or temporal window of consecutive batches forms a decent approximation of the true global gradient of the error function with respect to any parameterization of the model. If the error surface itself is moving in a big way, that would thwart the purposes of gradient descent--since gradient descent is a local refinement algorithm, all bets are off when you suddenly change the underlying distribution. +In the example you cited, catastrophic forgetting seems like it would be an after-effect of having ""forgotten"" data points previously seen, and is either a symptom of the distribution having changed, or of under-representation in the data of some important phenomenon, such that it is rarely seen relative to its importance.

+ +

Experience replay from reinforcement learning is a relevant concept that transfers well to this domain. Here is a paper that explores this concept with respect to catastrophic forgetting. As long as sampling represents the true gradients sufficiently well (look at training sample balancing for this) and the model has enough parameters, the catastrophic forgetting problem is unlikely to occur. In randomly shuffled datasets with replacement, it is most likely to occur where datapoints of a particular class are so rare that they are unlikely to be included for a long time during training, effectively fine-tuning the model to a different problem until a matching sample is seen again.
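
+ +

For intuition, a minimal sketch of such a replay buffer might look like the following (names and sizes are arbitrary): new samples are appended as they arrive, and each minibatch is drawn uniformly from the whole buffer, so old data keeps being revisited and the sampled gradients stay close to the true global gradient.

+ +
import random
+from collections import deque
+
+replay_buffer = deque(maxlen=50_000)   # arbitrary capacity
+
+def add_sample(sample):
+    replay_buffer.append(sample)       # e.g. an (input, target) pair
+
+def sample_minibatch(batch_size=32):
+    # uniform sampling mixes old and new data in every training step
+    return random.sample(list(replay_buffer), min(batch_size, len(replay_buffer)))
+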

+",27060,,,,,7/11/2019 1:21,,,,0,,,,CC BY-SA 4.0 +13312,2,,13307,7/11/2019 3:06,,1,,"

I feel the general answer is that we want to be as efficient as possible in learning from experience.

+ +

Policy improvement here always produces an equivalent or better policy, so delaying the improvement step to gather more episodes will only slow down learning.

+ +

I would note too that often a different kind of Monte Carlo learning is used, in which the speed of the update is controlled with a hyperparameter $\alpha$ instead of keeping track of the visit counts. The Q estimate is then updated something like:

+ +

$$ +Q \leftarrow Q + \alpha \left (G - Q \right) +$$

+ +

The value of $\alpha$ then lets you tune how much evaluation vs improvement happens. This is called constant alpha Monte Carlo. Often this is used as a stepping stone to introduce TD methods, e.g., in 6.1 of the Sutton and Barto book.
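
+ +

In code, a constant-$\alpha$ every-visit Monte Carlo update for a tabular estimate might look roughly like this minimal sketch, where an episode is assumed to be a list of (state, action, reward) tuples in the order they occurred:

+ +
from collections import defaultdict
+
+alpha, gamma = 0.1, 1.0            # step size and discount, arbitrary values
+Q = defaultdict(float)
+
+def mc_update(episode):
+    # episode: list of (state, action, reward) tuples, in order
+    G = 0.0
+    for state, action, reward in reversed(episode):
+        G = reward + gamma * G                      # return following (state, action)
+        Q[(state, action)] += alpha * (G - Q[(state, action)])
+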

+",27061,,,,,7/11/2019 3:06,,,,3,,,,CC BY-SA 4.0 +13315,1,,,7/11/2019 6:46,,0,60,"

I am a bit confused about NeuralODE and I want to make sure that what I understood so far is correct.

+ +

Assume we have (for simplicity) 2 data points $z_0$ measured at $t_0$ and $z_1$ measured at $t_1$. Normally (in normal NN approach), one would train a NN to predict $z_1$ given $z_0$, i.e. $NN(z_0)=z_1$. In NeuralODE approach, the goal is to train the NN to approximate a function $f(z_0)$ (I will ignore the explicit time dependence) such that given the ODE: $\frac{dz}{dt}|_{t_0}=f(z_0)$ which would be approximated as $\frac{dz}{dt}|_{t_0}=NN(z_0)$ and solving this using some (non AI based) ODE integrator (Euler's method for example) one gets as the solution for this ODE at time $t_1$ something close to $z_1$. So basically the NN now approximates the tangent of the function ($\frac{dz}{dt}$) instead of the function itself ($z(t)$).

+ +

Is my understanding so far correct?

+ +

So I am a bit confused about the training itself. I understand that they use the adjoint method. What I don't understand is what exactly is being updated. As far as I can see, the only things that are free (i.e. not measured data) are the parameters of the function $f$, i.e. the NN approximating it. So one would need to compute $\frac{\partial loss}{\partial \theta}$, where $\theta$ are the parameters (weights and biases of the network).

+ +

Why would I need to compute, for example (as they do in the paper) $\frac{\partial loss}{\partial z_0}$? $Z_0$ is the input which is fixed, so I don't need to update it. What am I missing here?

+ +

Secondly, if what I said in the first part is correct, it seems like in principle one can get great results with a reasonably simple function $f$, such as (for example) a 3-layer fully-connected NN. So one needs to update the parameters of this NN. On the other hand, ResNets can have tens or hundreds of layers.

+ +

Am I missing a step here or is this new approach so powerful that with a lot fewer parameters one can get very good results?

+ +

I feel like a ResNet, even with 2 layers, should be more powerful than Euler's Method ODE, as ResNets would allow more freedom in the sense that the 2 blocks don't need to be the same, while in the NeuralODE using Euler's Method one has the same (single) block.

+ +

Lastly, I am not sure I understand what do they mean by (continuous) depth in this case. What is the definition of the depth here (I assume it is not just the depth of $f$)?

+",23871,,2444,,7/11/2019 22:13,7/11/2019 22:13,Confused about NeuralODE,,0,3,,,,CC BY-SA 4.0 +13316,1,13339,,7/11/2019 8:06,,0,147,"

I know that it is very common for machine learning systems to classify objects based on their visual features such as shapes, colours, curvatures, width-to-length ratios, etc.

+ +

What I'd like to know is this: Do any techniques exist for machine learning systems to classify objects based on how they move?

+ +

Examples:

+ +
    +
  • Suppose that in a still image, 2 different classes of objects look identical. However, in a video Class A glides smoothly across the screen while Class B meanders chaotically across the screen.
  • +
  • When given multiple videos of the same person walking, classify whether he's sober or drunk.
  • +
+",27066,,27066,,7/12/2019 0:44,7/12/2019 13:10,Can an object's movement (instead of its appearance) be used to classify it?,,1,0,,,,CC BY-SA 4.0 +13317,1,13319,,7/11/2019 8:40,,27,12518,"

The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper. However, the paper does not include the proofs of the theorem. Does anybody know where the proof can be found?

+",27047,,2444,,7/14/2020 12:13,12/16/2021 11:41,Where can I find the proof of the universal approximation theorem?,,3,0,,,,CC BY-SA 4.0 +13318,1,,,7/11/2019 8:42,,2,74,"

For a deep NN, should I generally apply batch normalization after each convolution layer? Or only after some of them? Which? Every 2nd, every 3rd, lowest, highest, etc.?

+",5852,,2444,,7/11/2019 17:14,7/11/2019 21:46,What is the most common practice to apply batch normalization?,,1,0,0,,,CC BY-SA 4.0 +13319,2,,13317,7/11/2019 9:05,,29,,"

There are multiple papers on the topic because there have been multiple attempts to prove that neural networks are universal (i.e. they can approximate any continuous function) from slightly different perspectives and using slightly different assumptions (e.g. assuming that certain activation functions are used). Note that these proofs tell you that neural networks can approximate any continuous function, but they do not tell you exactly how you need to train your neural network so that it approximates your desired function. Moreover, most papers on the topic are quite technical and mathematical, so, if you do not have a solid knowledge of approximation theory and related fields, they may be difficult to read and understand. Nonetheless, below there are some links to some possibly useful articles and papers.

+

The article A visual proof that neural nets can compute any function (by Michael Nielsen) should give you some intuition behind the universality of neural networks, so this is probably the first article you should read.

+

Then you should probably read the paper Approximation by Superpositions of a Sigmoidal Function (1989), by G. Cybenko, who proves that multi-layer perceptrons (i.e. feed-forward neural networks with at least one hidden layer) can approximate any continuous function. However, he assumes that the neural network uses sigmoid activations functions, which, nowadays, have been replaced in many scenarios by ReLU activation functions. Other works (e.g. [1], [2]) showed that you don't necessarily need sigmoid activation functions, but only certain classes of activation functions do not make neural networks universal.

+

The universality property (i.e. the ability to approximate any continuous function) has also been proved in the case of convolutional neural networks. For example, see Universality of Deep Convolutional Neural Networks (2020), by Ding-Xuan Zhou, which shows that convolutional neural networks can approximate any continuous function to an arbitrary accuracy when the depth of the neural network is large enough. See also Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets (by A. Heinecke et al., 2020)

+

See also page 632 of Recurrent Neural Networks Are Universal Approximators (2006), by Schäfer et al., which shows that recurrent neural networks are universal function approximators. See also On the computational power of neural nets (1992, COLT) by Siegelmann and Sontag. This answer could also be useful.

+

For graph neural networks, see Universal Function Approximation on Graphs (by Rickard Brüel Gabrielsson, 2020, NeurIPS)

+",2444,,2444,,12/16/2021 11:41,12/16/2021 11:41,,,,1,,,,CC BY-SA 4.0 +13322,2,,13289,7/11/2019 11:23,,1,,"

Maybe in theory, but not in practice. The thing is you seem to consider only chronological/sequential training.

+ +

And there are two ways to view this issue:

+ +
    +
  1. online learning -> then it is a feature of the method
  2. +
  3. offline learning -> it does not happen thanks to several order randomizations
  4. +
+ +


+1. Online-Training or Online Machine Learning.

+ +

Using the Vowpal Wabbit library. It is a feature (not an issue, as you consider it) of this library to adapt chronologically to the input it is fed with.

+ +

I insist: it is a feature to adapt chronologically. It is desired behaviour that, when you start by only telling it that lions are dangerous, it adapts accordingly.

+ +


+2. Offline-Training

+ +

In my personal experience, I have used only randomized subsets of my input data as the training set. And this randomization is crucial.

+ +

Randomization happens in several places, namely:

+ +
    +
  • during the training of the neural network, each epoch generally randomizes the dataset order
  • +
  • during cross-validation, randomization is used as a way to evaluate a robust model that generalises well and does not overfit
  • +
+",12298,,12298,,2/3/2020 10:08,2/3/2020 10:08,,,,10,,,,CC BY-SA 4.0 +13323,1,,,7/11/2019 11:37,,0,61,"

First of all, I am not sure if this question is more about Machine Learning or Artificial Intelligence; if it does not belong here, just let me know and I will delete it.

+ +

At my company we need to create a solution for banks, where a client comes in and they want to open a bank account.

+ +

They need to know if that person is a politician or politically exposed person (PEP); maybe they work in the European Commission, or they are family of a PEP, for example.

+ +

The business users have lots of data sources from which to get these people, for example: http://www.europarl.europa.eu/meps/en/full-list/all

+ +

They want to train a Model (Machine Learning) where the end user can enter a name, Bill Clinton for example, and then the system has to return the percentage chance of that person being political or not.

+ +

Obviously some persons are 100% political and the percentage will be 100%.

+ +

But if they enter a name that is not in any of their data sources, how would I train a model to decide if it's a PEP or not?

+ +

quite confused

+ +

thanks

+",27075,,,,,7/11/2019 12:10,How to recognize with just name and last name if the person is a political exposed person,,1,3,,,,CC BY-SA 4.0 +13324,1,,,7/11/2019 11:59,,0,269,"

What is the best choice of loss function in a Convolutional Neural Network, and in an Autoencoder in particular - and why?

+ +

I understand that the MSE is probably not the best choice, because a little difference in lighting can cause a big difference in the final loss.

+ +

What about binary cross-entropy? As I understand it, this should be used when the target vector is composed of a 1 at one place and 0 at all others, so you compare only the class that should be correct (and ignore the others). But this is an image (although the values are converted to the 0-1 range).

+",26993,,2444,,7/11/2019 17:11,7/11/2019 18:04,What is the best loss function for convolution neural network and autoencoder?,,1,1,,2/14/2022 16:37,,CC BY-SA 4.0 +13325,2,,13323,7/11/2019 12:10,,2,,"
+

But if they enter a name that is not in any of their data sources, how would I train a model to decide if its pep or not?

+
+ +

Based on just a person's name and nothing else, the accuracy of this model is going to be very low. Consider that most first name, surname combinations in Europe are going to be repeated across the population.

+ +

However, the accuracy might still be slightly better than guessing. Some families and social classes could be more likely to be involved in political work, and a statistical model would pick up on that.

+ +

To train the model, take your positive names, and combine with a random selection of names of people that are known to be ""not political"" or similar enough. It doesn't matter if some of the names are the same provided you are confident in your data. Probably you could just take a phone directory or the electoral register or some other list of general names. Provided your ""political"" people are a small fraction of all people, this will work well enough even if you have some of them in the negative class.

+ +

Ideally you mix those name groups in the rough proportion that the bank expects to see ""political"" and ""non-political"" customers, so that your data set is a good representation of the target population.

+ +

Then you train a classifier on the names. As this is text and sequence data, you will need a solution for that. Possibly an LSTM would be a suitable architecture, but so might some feature selection from the names in a simpler ML model.
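
+ +

As one possible sketch of the simpler option (all names and labels below are made up), character n-grams plus logistic regression in scikit-learn would give you a baseline to compare an LSTM against:

+ +
from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+names = ['Bill Clinton', 'Angela Merkel', 'John Smith', 'Maria Garcia']  # toy data
+labels = [1, 1, 0, 0]   # 1 = politically exposed, 0 = not
+
+clf = make_pipeline(
+    TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4)),
+    LogisticRegression())
+clf.fit(names, labels)
+print(clf.predict_proba(['Some New Name'])[0, 1])  # estimated probability of the positive class
+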

+ +

Remember to hold back some data (both positive and negative cases) for cross-validating and testing the model.

+ +

Expect the accuracy of your model when testing to be low. Very low. I would not at all be surprised to find the end result unusable by itself.

+ +

If this is seriously to be part of some bank's account setup process, there needs to be additional data used later in the process. A gate for additional checks based purely on someone's name will perform very poorly in my opinion.

+",1847,,,,,7/11/2019 12:10,,,,5,,,,CC BY-SA 4.0 +13326,1,,,7/11/2019 12:24,,0,77,"

UPDATE: After reading more about the topic, I've tried implementing the DDPG algorithm instead of using a variation of Q-Learning and still have the same issue.

+ +

I have the following issue:

+ +

I want to train my critic to estimate values of state/action pairs. My state consists of 2 real valued variables and my action is another real valued variable.

+ +

I normalize all values before I feed them into the network. Now I have the following issue: the network is very unresponsive to changes in the input. Before I normalize the state and the actions, they can take on any value between 0 and 50. After normalizing, they are in the range between -1 and 1. A change of 1 in the input can become a very small change after normalization.

+ +

But in my specific situation, a small change in the action or the state can cause a very large change in the value of the state/action pair. The network does not really learn that correctly; it handles similar inputs similarly all the time (which is okay most of the time, but there are hard cuts in the shape of the value function here and there). If I further reduce the network's capacity, the network's output becomes constant and ignores all three inputs.

+ +

Do you know any other tricks that I could use to increase sensitivity to the input at some points? Or is my network configuration/approach the wrong one (too large, too small)?

+ +

The network I'm training is a simple feedforward neural network that takes two inputs, followed by 2 hidden layers, followed by a single output to predict the value for that state/action combination. (I'm still trying out different configurations here, as I have no real feeling for the number of elements per layer and the number of layers needed to get the capacity I need without encouraging overfitting.)

+ +

Thanks for your help :)

+",27078,,27078,,7/24/2019 14:53,7/24/2019 14:53,Encoding real valued inputs,,0,3,,,,CC BY-SA 4.0 +13327,2,,13324,7/11/2019 18:04,,1,,"

There is no right answer to this. Finding the right loss function is a tough and difficult problem, so your goal as the architect is to try to find one that best suits your needs. So let's think about your needs.

+ +

You mention that you don't want lighting shifts to cause large error, so I'll take a leap and assume you care more about the shapes and style of the image than about the coloring. To deal with this, maybe consider using the difference of the Gram matrices (this is considered commonplace in the style transfer literature: A Neural Algorithm of Artistic Style). Note that you could use the encoder to get the representation of the output as well for the loss, $L(x) = D(Gram(Enc(x)), Gram(Enc(\hat x)))$, where $D$ would be some distance metric like Euclidean distance.
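
+ +

For example, a minimal PyTorch-style sketch of such a Gram-matrix loss term could look like this, where the feature tensors are assumed to have shape (batch, channels, height, width):

+ +
import torch
+import torch.nn.functional as F
+
+def gram_matrix(features):
+    # features: (batch, channels, height, width)
+    b, c, h, w = features.shape
+    f = features.view(b, c, h * w)
+    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)
+
+def gram_loss(feats_x, feats_x_hat):
+    # compare feature statistics (style) rather than raw pixels
+    return F.mse_loss(gram_matrix(feats_x), gram_matrix(feats_x_hat))
+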

+ +

Maybe the outline is all you care about. You could use some known edge detector filter and compare those, e.g. $D(Edge(x), Edge(\hat x))$.

+ +

Maybe you just don't care about color shifts; you could do $D(x - \mu_x, \hat x - \mu_{\hat x})$.

+ +

Note that you can play around with whatever distance metric you use, whether it be MSE, RMSE, MAE, etc. Each has its own small pros/cons based on the loss manifolds they create. In your case I don't think the difference there will be night and day, but you never know.

+ +

Also, mixing and matching is always nice, e.g. $L(x) = \lambda_1 D(x, \hat x) + \lambda_2 D(Gram(Enc(x)), Gram(Enc(\hat x))) + \dots$

+ +

Takeaway: MSE might actually be fine, but it really depends on what you prioritize, and once you figure that out you can start getting clever and design the loss that fits your needs and problem.

+",25496,,,,,7/11/2019 18:04,,,,0,,,,CC BY-SA 4.0 +13328,1,13332,,7/11/2019 21:01,,1,819,"

I came across the $TD(0)$ algorithm from Sutton and Barto:

+

Clearly, the only difference between TD methods and MC methods is that a TD method does not wait until the end of the episode to update $V(s)$ or $Q(s,a)$, but according to David Silver's lecture (Lecture 4, ~34:00),

+

+

The $TD(0)$ algorithm learns from incomplete episodes, but in the earlier algorithm we can see that the loop repeats until $s$ is terminal, which means completion of the episode.

+

So, by learning from incomplete episodes, does David Silver mean learning of $V(s)$ even when the episode is not completed? Or did I interpret the algorithm wrong? If so, what is the correct interpretation?

+",,user9947,2444,user9947,3/30/2022 14:00,3/30/2022 14:02,"By learning from incomplete episodes, does David Silver mean learning of $V(s)$ even when the episode is not completed?",,1,0,,,,CC BY-SA 4.0 +13329,5,,,7/11/2019 21:05,,0,,"

Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.

+ +

Temporal difference learning - Wikipedia

+ +

Temporal-Difference Learning - Sutton and Barto

+",,user9947,,user9947,7/11/2019 21:32,7/11/2019 21:32,,,,0,,,,CC BY-SA 4.0 +13330,4,,,7/11/2019 21:05,,0,,For questions regarding concepts/maths/intuition/implementation of Temporal Difference (TD) Learning Algorithms.,,user9947,,user9947,7/11/2019 21:32,7/11/2019 21:32,,,,0,,,,CC BY-SA 4.0 +13331,2,,13318,7/11/2019 21:14,,1,,"

In the literature, it differs. You will see models do it after or before pooling only, and sometimes you see it after every single convolution.

+ +

Batch normalization's assistance to neural networks wasn't really understood for the longest time; initially it was thought to assist with internal covariate shift (hypothesized by the initial paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift), but lately it has been tied to the optimization process (How Does Batch Normalization Help Optimization?).

+ +

This means, from an architectural perspective, it is difficult to correctly assume how it should be utilized, unless you really understand its impact on the loss landscape and how your optimization process will traverse it given some initialization. (Note, by the way, that a recent paper by Google showed that you can achieve a lot of the benefits of batch normalization purely by understanding what issues it's resolving and attempting to mitigate them in the initialization process: Fixup Initialization.)

+ +

So I would recommend 3 things until it is more understood how to utilize it generally:

+ +
    +
  1. Play around, get frisky and experiment. Use what works best.
  2. +
  3. Use block featurizers that are known to work well like residual blocks. Proven in practice and will probably work for you too.
  4. +
  5. Do the research and investigate it more, if you find the answer, you'll be helping a lot of people :)
  6. +
+",25496,,2444,,7/11/2019 21:46,7/11/2019 21:46,,,,0,,,,CC BY-SA 4.0 +13332,2,,13328,7/11/2019 21:22,,2,,"
+

The $TD(0)$ algorithm learns from incomplete episodes, but in the earlier algorithm we can see that the loop repeats until $s$ is terminal which mean completion of episode.

+
+

In the pseudocode, you have two loops: one for each episode and one (nested) for each step of the episode. The until $S$ is terminal means that you perform the updates until you end the episode (that is, you end up in a terminal state, e.g. checkmate in the game of chess). For each step of the episode, you perform the TD(0) update.

+

Apparently, you're confusing two things: the fact that each episode ends in a terminal state and the fact that TD learns from incomplete information. Each episode ends in a terminal state (otherwise it would not be called an episode), but this does not mean that it collects a full rollout before updating $V$. In fact, at each step of the episode, it updates $V$.

+

The information in David Silver's slide is consistent with the pseudocode. TD learns from experience because it uses the given policy $\pi$ to behave.

+
+

So, by learning from incomplete episodes, does David Silver mean learning of $V(s)$ even when the episode is not completed?

+
+

Yes, essentially, you're updating the value function during each step of each episode.
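
+ +

A minimal tabular sketch makes this explicit; the env and policy objects below are placeholders assumed to follow the usual reset()/step() and state-to-action conventions, and the update sits inside the step loop, not after the episode:

+ +
from collections import defaultdict
+
+def td0_evaluation(env, policy, num_episodes, alpha=0.1, gamma=1.0):
+    V = defaultdict(float)
+    for _ in range(num_episodes):          # loop for each episode
+        s = env.reset()
+        done = False
+        while not done:                    # until S is terminal
+            a = policy(s)
+            s_next, r, done = env.step(a)
+            # the update happens here, at every step, before the episode finishes
+            V[s] += alpha * (r + gamma * V[s_next] - V[s])
+            s = s_next
+    return V
+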

+",2444,,2444,,3/30/2022 14:02,3/30/2022 14:02,,,,0,,,,CC BY-SA 4.0 +13333,5,,,7/11/2019 21:57,,0,,,2444,,2444,,7/11/2019 21:57,7/11/2019 21:57,,,,0,,,,CC BY-SA 4.0 +13334,4,,,7/11/2019 21:57,,0,,"For questions related to the concept of a parameter of AI and ML models. Examples of parameters are the weights of a neural network, as well as the mean and variance of a Gaussian distribution.",2444,,2444,,7/11/2019 21:57,7/11/2019 21:57,,,,0,,,,CC BY-SA 4.0 +13335,1,15499,,7/12/2019 0:20,,2,870,"

In link prediction problems, there are only known edges and nodes.

+ +
    +
  • If there is a known edge in the node pair, the node pair is regarded as a positive sample. Apart from those node pairs whose edges are known, there may exist unobserved edges in some node pairs, or there may really be no edge in some node pairs. Our target is to predict potential links in those candidate node pairs.
  • +
+ +

A node pair where there exists a known edge is regarded as a positive sample. So a node pair whose edge is not observed can be regarded neither as a positive example nor as a negative example.

+ +

So I think the link prediction problem is a semi-supervised problem. However, I find that many papers, for example, GRTR: Drug-Disease Association Prediction Based on Graph Regularized Transductive Regression on Heterogeneous Network, use AUC (Area Under the ROC Curve, a metric for supervised problems) as the metric.

+ +

How should we understand such behavior? What's the reason?

+",27112,,27112,,9/18/2019 0:23,9/18/2019 1:55,"How should we understand the evaluation metric, AUC, in link prediction problems?",,1,0,,,,CC BY-SA 4.0 +13337,1,,,7/12/2019 5:32,,1,78,"

I am trying to ascertain if my neural network is able to generalize or if it’s simply using memory/overfitting to solve a task. I would like my model to generalise.

+ +

Currently, I train the neural network on a randomly generated 3x3 frozen lake environment - with no holes. (The network simply chooses an action for each state it is presented.)

+ +

Then, I test the model on a much larger frozen lake environment. Still no holes. Still randomly generated. The test environment size is assigned by a random value of 5-15 for each axis (height/width), randomly generated.

+ +

Then I determine the ""degree of generalization"" by how many large environments the network is able to solve. At present, it solves 100/100 on the 3x3, and about 83/100 on the larger test environments.

+ +

When I track the solutions it generates, I can see that the network always takes the shortest route available, which is great.

+ +

Do you guys have any ideas, inputs or criticism on the method I use to determine the degree of generalization?

+",26768,,2444,,7/13/2019 13:39,7/13/2019 13:39,How do I determine the generalisation ability of a neural network?,,0,5,0,,,CC BY-SA 4.0 +13339,2,,13316,7/12/2019 9:46,,2,,"

Machine learning systems work based on input data. The form of that data is irrelevant. It may need some configuring on the technical side of things, but the general ability to learn from the dataset remains the same.
+ML systems are built to learn how to interpret data, without you needing to predefine exactly what it should be looking for.

+ +

But even then, you can reason that videos are no different from images. A video is essentially nothing but a flipbook of images. If you were to paste every frame of the video in a long sequence, you would have one big picture.
+The rest should follow as usual: if a machine learning system can be taught to interpret images, it can evidently also interpret these ""frame sequence"" images, which means it's able to interpret the dataset that we colloquially call a video.

+ +

That being said, since a video contains much more information (a 5 second 30fps video is 150x as much data as a single image with the same resolution), the required learning process increases exponentially.

+ +

So yes, it's perfectly possible, but it will require more processing power and training as there are more input variables to account for.

+ +
+ +

There are ways to reduce this increase in complexity. For example, to check if someone is drunk or not, rather than push the entire video through the learning algorithm, you instead preprocess the video to figure out the person's gait or even just their footsteps (relative location and timing) and only push this processed data through.

+ +

This dramatically cuts down on the complexity of the network needed; at the cost of requiring preprocessing which may introduce issues (bad preprocessing = bad learning).

+",27127,,27127,,7/12/2019 13:10,7/12/2019 13:10,,,,0,,,,CC BY-SA 4.0 +13340,1,,,7/12/2019 10:12,,6,87,"

Suicide is on the increase in my country and most victims tend to leave early traces from text messages, social media accounts, search engine queries. So I came up with the idea to develop an AI system with the following features:

+ +
    +
  • Ability to read text messages searching for suicide trigger words
  • +
  • Read chats also for the same trigger words
  • +
  • Incorporate synchronization of words from software/browsers on recent words typed
  • +
  • Learning algorithm to predict the next action after taking notes of the use of these suicide trigger words
  • +
  • Ability to access any cellphone, Android, computer, websites
  • +
+ +

Is this feasible or not feasible?

+",27128,,2444,,7/12/2019 23:25,7/14/2019 8:31,Suicide Predictor and Locator,,1,2,,,,CC BY-SA 4.0 +13343,1,13344,,7/12/2019 19:21,,3,87,"

I need a model that will take in a few numerical parameters, and give back a numerical answer (Context: predicting a slope based on environmental factors without having to actually take measurements to find the slope).

+ +

However, I am lost, given that I have not found (on the web) any model that is able to solve this task. If someone can suggest a type of model that would be able to solve this type of problem, I would greatly appreciate it. Resources and libraries would also be useful if you care to suggest any. I'm using Python.

+",27156,,2444,,7/12/2019 23:17,7/13/2019 13:55,Which models accept numerical parameters and produce a numerical output?,,1,2,,,,CC BY-SA 4.0 +13344,2,,13343,7/12/2019 23:12,,2,,"

You're probably looking for regression, either linear or non-linear, which usually refers to a set of methods that can be used to predict a continuous (or numerical) value (the value of the so-called dependent variable), given one or more possibly numerical values (the values of the independent variables). (The other common task is called classification, where the outcome variable is discrete.)

+ +

The dependent and independent variables can also be vectors. In that case, it is called multivariate linear or non-linear regression. If there is more than one independent variable (or, also called, predictor), then it is called multivariable (or multiple) linear or non-linear regression. See, for example, Multivariate or Multivariable Regression? (2013), by Bertha Hidalgo and Melody Goodman.

+ +

In linear regression, the dependent variable is assumed to be a linear combination of the independent variables, while, in non-linear regression, it is assumed to be a non-linear combination (which is any combination or relationship that is not linear).

+ +

The independent variables (or predictors) do not actually need to be independent of each other and, of course, the dependent variable does not need to be independent of the predictors (otherwise you would not be able to predict the value of the dependent variable, given the predictors), so the expression predictor is possibly more appropriate than independent variable. See, for example, In Regression Analysis, why do we call independent variables ""independent""?, by Frank Harrell. However, the dependent variable is assumed to be dependent on the independent variables, so you could think of the independent variables as the variables that are not the dependent variables (if this helps you to memorize the concept).

+ +

There are more synonyms for the dependent and independent variables, depending on the context or area. For example, the predictors (or independent variables) can also be called covariates, regressors or explanatory variables. The dependent variable can also be called regressand, outcome variable, or explainable variable.

+ +

There are other types of regression models. In particular, there is the generalized regression model (GLM), which is (as the name suggests) a generalization of several regression models. The famous logistic regression (where the outcome is actually binomial or binary, so discrete) is an application of the generalized regression model, where the link function (which is a concept that is used in the context of GLMs) is the logit function. See, for example, Why is logistic regression a linear model?. Note that linear regression is also an application of a GLM. See also this article 1.1. Generalized Linear Models, which gives you an overview of different GLMs.

+ +

Have also a look at this article Choosing the Correct Type of Regression Analysis, which provides a few guidelines to help you choose the most appropriate regression model for your task.

+ +

In Python, you could use, for example, sklearn.linear_model.LinearRegression. The Python library sklearn provides several other regressors. For example, sklearn.ensemble.AdaBoostRegressor or sklearn.ensemble.RandomForestRegressor. (Of course, you will need a training dataset to train these regressors, when calling their method fit).
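
+ +

For instance, a minimal multiple linear regression with made-up numbers (two environmental predictors per row, one slope value to predict) could look like this:

+ +
import numpy as np
+from sklearn.linear_model import LinearRegression
+
+# made-up training data: each row holds two environmental measurements
+X = np.array([[1.0, 20.0], [2.0, 35.0], [3.0, 15.0], [4.0, 40.0]])
+y = np.array([0.5, 1.1, 1.4, 2.2])   # the slopes observed for those rows
+
+model = LinearRegression().fit(X, y)
+print(model.predict(np.array([[2.5, 30.0]])))   # predicted slope for new conditions
+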

+",2444,,2444,,7/13/2019 13:55,7/13/2019 13:55,,,,0,,,,CC BY-SA 4.0 +13345,2,,12759,7/12/2019 23:36,,5,,"

I am also a bit confused by your wording but I will try to clear some things up.

+ +

During MCTS the policy head is used to guide the search while the value head is used as a replacement for roll outs to estimate how good the game position looks. One iteration of the search procedure in MCTS finds a new leaf node which has not been evaluated by the network yet. This leaf node does not have to be a terminal state of the actual game. In fact, in the rare cases towards the end of the game where it IS a terminal state then the network evaluation can be skipped and the real value is used instead.

+ +

In the code you find slightly confusing, -1 and {} are both placeholders. The value -1 is a placeholder for the value head output and {} a placeholder for the policy head output of the neural network evaluation. The policy head output should be a vector of values (one value for each child of the node). It is not related to FPU at all; the code you linked uses FPU 0.

+ +
  def value(self):
+    if self.visit_count == 0:
+      return 0
+    return self.value_sum / self.visit_count
+
+ +

The maximum number of moves (512 for chess and 722 for GO) you mention are used when creating training games to train the neural network. The current network is playing against itself and the goal is to create many useful training samples so the games are cut off if they take too long (lots of repetition in chess for example). Note that for chess the actual number of maximum moves is 5899 or an infinite number if draw is not mandatory.

+ +

Maybe you will find these illustrations regarding AlphaZero's MCTS search useful:

+ +

http://tim.hibal.org/blog/alpha-zero-how-and-why-it-works/

+",27160,,,,,7/12/2019 23:36,,,,0,,,,CC BY-SA 4.0 +13349,1,13350,,7/13/2019 6:46,,4,335,"

On Wikipedia, the MCTS algorithm is described

+
+

Selection: start from root $R$ and select successive child nodes until a leaf node $L$ is reached. A leaf is any node from which no simulation (playout) has yet been initiated.

+

Expansion: create one (or more) child nodes and choose node $C$ from one of them. Child nodes are any valid moves from the game position defined by $L$.

+

Simulation: complete one random playout from node $C$.

+
+

Why is the playout started from a child of the first leaf, not the leaf itself? And aren't leaves then permanently stuck as leaves, since playouts always start from their children, not them? Or does the leaf get attributed as having had a "playout initialised" from it, even though it started at its child?

+",27165,,-1,,6/17/2020 9:57,11/19/2019 22:34,Is the playout started from a leaf or child of leaf in Monte Carlo Tree Search?,,1,0,,,,CC BY-SA 4.0 +13350,2,,13349,7/13/2019 9:04,,1,,"

NOTE: their description of the selection phase actually does not match the ""standard implementation"". They state to traverse the tree according to the selection strategy until a leaf node is reached. I disagree with this. The standard implementation is to traverse the tree until a node is reached in which there still are legal actions for which no child node has been created (a node that is not yet ""fully expanded"").

+ +

These two are only equivalent in the case where the expansion strategy is to immediately fully expand, to immediately create new nodes for all legal actions in $L$. With such an expansion strategy, $L$ would immediately turn from a leaf node into a fully expanded node. But that is not the ""standard implementation"". The most common expansion strategy is to only add a single new node to the tree per iteration.

+ +

Let $L$ denote a node that is reached at the end of the selection phase. The standard implementation is indeed to first expand in some way, which results in a new child $C$ of $L$, and then start the playout from $C$ onwards.

+ +
+

And aren't leaves then permanently stuck as leaves, since playouts always start from their children, not them?

+
+ +

No, as soon as you add the new child $C$ to $L$, $L$ is by definition no longer a leaf node; a leaf node is defined as a node with no children. The next time that MCTS decides to traverse the same part of the tree, $L$ will be just like any other internal node, $C$ will be a leaf node, and we may add yet another node below that in a future expansion phase. Or, of course, it is also possible that in a future iteration, our selection phase still ends in $L$ despite it not (anymore) being a leaf, if it still has unexpanded nodes other than $C$ (to-be-siblings of $C$).

+ +
+

Why is the playout started from a child of the first leaf, not the leaf itself?

+
+ +

This is because we typically require a slightly different mechanism for choosing a node to expand than the playout mechanism. The most common (or, at least, most simple and straightforward) playout mechanism is to select actions uniformly at random from all legal actions. The most common (or, at least, most simple and straightforward) mechanism for selecting a new node $C$ to add to $L$ is to select uniformly at random only from those legal actions for which children have not already been added to $L$. Due to these mechanisms being different, we cannot initialise the playout at $L$ and declare the first action of the playout to be the one that generates the new node $C$; we have to use a (slightly) different implementation.

+",1641,,,,,7/13/2019 9:04,,,,0,,,,CC BY-SA 4.0 +13351,1,,,7/13/2019 10:50,,2,98,"

I read about Q-Learning and was reading about multi-agent environments. I tried to read the paper Friend-or-Foe Q-learning, but could not understand anything, except for a very vague idea.

+ +

What does Friend-or-Foe Q-learning mean? How does it work? Could someone please explain this expression or concept in a simple yet descriptive way that is easier to understand and that helps to get the correct intuition?

+",27173,,2444,,7/13/2019 16:25,7/14/2019 19:10,How does Friend-or-Foe Q-learning intuitively work?,,0,1,,,,CC BY-SA 4.0 +13353,5,,,7/13/2019 13:13,,0,,"

For a more complete description of the field, for example, have a look at Multi-Agent Systems: A Survey (2018), by Ali Dorri, Salil S. Kanhere and Raja Jurdak.

+",2444,,2444,,7/13/2019 13:13,7/13/2019 13:13,,,,0,,,,CC BY-SA 4.0 +13354,4,,,7/13/2019 13:13,,0,,"For questions related to multi-agent systems (MAS), which are systems that involve multiple agents (each of them can have different skills) that cooperate with each other and interact with the environment. There are several challenges faced by MAS, including coordination between agents, security, and task allocation. Multi-agent systems have been applied in areas such as computer science, civil engineering, and electrical engineering.",2444,,2444,,7/13/2019 13:13,7/13/2019 13:13,,,,0,,,,CC BY-SA 4.0 +13357,1,13358,,7/13/2019 21:48,,2,61,"

I wrote a simple implementation of Flappy Bird in Python, and now I'm trying to train an agent to play it at a reasonable skill level using TFLearn.

+ +

I feed the network an input vector of size 4:

+ +
    +
  • the horizontal distance to the next obstacle
  • +
  • the agent's vertical distance from the ground,
  • +
  • the agent's vertical distances from the top, and
  • +
  • the agent's vertical distances from the bottom parts of the opening in the obstacle.
  • +
+ +

The output layer of the network contains one unit, telling me the Q value of the provided state with the assumption that the action taken in that state will be determined by the policy.

+ +

However, I don't know what policy would make the agent learn to play the best. I can't just make it choose random actions because that would make the policy non-stationary. What can I do?

+",27169,,27169,,7/15/2019 22:29,7/15/2019 22:29,"If deep Q learning involves adjusting the value function for a specific policy, then how do I choose the right policy?",,1,0,,,,CC BY-SA 4.0 +13358,2,,13357,7/14/2019 5:39,,3,,"
+

The output layer of the network contains one unit, telling me the Q value of the provided state with the assumption that the action taken in that state will be determined by the policy.

+
+ +

Typically in Reinforcement Learning, the symbol $Q$ is used when you calculate an action value, and if you are evaluating for a specific policy, it is noted $q_{\pi}(s,a)$ where $\pi$ is the policy, $s$ is the current state, and $a$ the action to be taken.

+ +

What you appear to be calculating with your network is not the action value, but the state value, $v_{\pi}(s)$. Note that $v_{\pi}(s) = q_{\pi}(s,\pi(s))$ when you have a deterministic policy, and there is a similar relationship for stochastic policies.

+ +

With your setup, it would be possible to learn the value of any fixed policy that you provided as input. It is also possible to drive policy improvements, with some caveats, but that would not be Q learning. The rest of this answer assumes you want to implement Q learning, and probably something like Deep Q Networks (DQN).

+ +
+

However, I don't know what policy would make the agent learn to play the best. I can't just make it choose random actions because that would make the policy non-stationary.

+
+ +

Actually that would make the policy stochastic. Non-stationary means that the policy would change over time. Choosing random actions is a perfectly valid stationary policy (provided the probabilities in each state remain the same).

+ +

In addition, in an optimal control scenario you actually will have and want a non-stationary policy. The goal is to start with some poor guess at a policy, like completely random behaviour, and improve it based on experience. The policy is going to change over time. That makes learning Q values in RL a non-stationary problem.

+ +

Typically you really do start control problems with a random policy. At least in safe environments such as games and simulations.

+ +
+

I'm trying to train an agent to play it at a reasonable skill level using TFLearn. What can I do?

+
+ +

First, modify your network so that it estimates action values $Q(s,a)$. There are two ways to do that:

+ +
    +
  • Take the action $a$ as an input, e.g. one-hot encoded, concatenated with the state $s$ to make the complete inputs to the neural network. This is the simplest approach conceptually.

  • +
  • Assign a number in range $[0,N_{actions})$ to each action. Change the network to output a value for each possible action, so that your estimate $\hat{q}(s,a)$ is obtained by looking at output indexed by $a$. This is a common choice because it is more efficient for selecting best actions to drive the policy later.

  • +
+ +

Using action values for $Q(s,a)$, not state values $V(s)$, is an important part of Q learning. Technically it is possible to use state values if you have a model of the environment, and you could add one here. But it would not really be Q learning in that case, but some other related version of Temporal Difference (TD) learning.

+ +

Once you start working with action values, then you have a way to determine a policy. Your best estimate of the optimal policy is to take the action in state $s$ with the highest action value. This is:

+ +

$$\pi(s) = \text{argmax}_a \hat{q}(s,a)$$

+ +

So run your neural network for the current state and all actions. Using the first form of NN above this means run a minibatch, using the second form you run it once which is more efficient (but constructing the training data later is more complex).

+ +

If you always take the best action, this is called acting greedily. Usually in Q learning, you want the agent to explore other possibilities instead of always acting the same way - because you need to know whether changing a policy would be better. A very common approach in DQN is to act $\epsilon$-greedily, which means take this greedy action by default, but with some probability $\epsilon$ take a completely random action. Usually $\epsilon$ starts high, often at $1$, and then is decayed relatively quickly down to some lower value e.g. $0.1$ or $0.01$, where it stays during learning. You set $\epsilon = 0$ for fully greedy behaviour when evaluating or using the policy in production.
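
+ +

A minimal sketch of that action selection, assuming the second network form above (a q_net function that returns one estimated value per action), could be:

+ +
import numpy as np
+
+def select_action(q_net, state, epsilon, n_actions):
+    # with probability epsilon explore, otherwise act greedily on the Q estimates
+    if np.random.rand() < epsilon:
+        return np.random.randint(n_actions)
+    q_values = q_net(state)          # one estimated value per action
+    return int(np.argmax(q_values))
+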

+ +

So that has set your policy, as per your question. There is still more to implement in DQN, summarised below:

+ +
    +
  • Unfortunately you cannot train the agent directly online on each step and expect good results in practice. DQN requires experience replay as the source of training data. For each step of experience, you should store data for the start state, action, immediate reward, end state and whether this was the last step in an episode ($s,a,r,s',done$)

  • +
  • To train the network, improve the Q estimates and thus improve the policy, then after each step you use the Bellman relation for optimal policy as a target: $q^{*}(s,a) = r + \gamma \text{max}_{a'} q^{*}(s',a')$:

    + +
      +
    • Construct a minibatch of training data from the experience replay table, using your NN to calculate the TD target for each one $\hat{g} = r + \gamma \text{max}_{a'} \hat{q}(s',a')$
    • +
    • Train your NN towards $\hat{q}(s,a) \rightarrow \hat{g}$ on this minibatch.
    • +
  • +
  • It is usual to use a separate NN to calculate the TD target, and every so many steps (e.g. 1000 steps) make this ""target network"" a clone of the current learning network. It helps with stability, but you may not need that for a simple environment like Flappy Bird.
  • +
+",1847,,1847,,7/14/2019 6:26,7/14/2019 6:26,,,,1,,,,CC BY-SA 4.0 +13359,2,,13340,7/14/2019 8:31,,2,,"

This question seems to be specifically about your idea so I'll answer as such - any general 'can digital activity be used to predict suicide attempts' is probably too broad for this site.

+

As I see it you want to use what a person types, across all types of digital media, to get some measure of their risk of suicide attempt.

+

Two key questions in any machine learning problem

+
    +
  • Can we create variables as predictors?
  • +
  • Is the data there? i.e. do we have labelled data already
  • +
+

As far as creating variables goes a lot of people don't realise that machine learning (or 'AI') isn't about dumping a load of unedited data into an algorithm and letting it figure it out - 90% of the work is in making those variables into something sensible. In this case it may involve a fair bit of psychology. Perhaps you look at changes in message length - has someone gone from writing essays to short answers? (I'm not a psychologist, that may well be way off the mark). The use of particular words or phrases may come into it - you would need to do a fair bit of research into the differences between those who attempted suicide and those who didn't (perhaps ranking words by frequency and see if there are distinct differences).

+

This brings us to our second question (or perhaps it should be the first) 'is the data there'? You want to predict suicide risk from digital activity so you'll need the digital activity of all different types of people with a label of whether or not they attempted suicide (and when - no point collecting the data two years after their attempt). Do you think this dataset exists? I'm sceptical - the level of detail you would need (both into someone's digital activity and their personal mental state) is unlikely to be recorded.

+

It isn't to say I think it's an idea to be thrown out, only that you need to be realistic about the amount of effort required to do something like this. You would need to carry out research to collect this data, and would need to plan out your ideas beforehand to know what you would like to collect (the worst thing is to find, after a year of data collection, that you didn't monitor a variable you would have now found vital).

+",22897,,-1,,6/17/2020 9:57,7/14/2019 8:31,,,,0,,,,CC BY-SA 4.0 +13360,1,13361,,7/14/2019 8:47,,2,440,"

I have a continuous state space, and a continuous action space. The way I understand it, I can build a policy network which takes as input a continuous state vector and outputs both mean vector and covariance matrix of the action-distribution. To get a valid action I then sample from that distribution.

+ +

However, when trying to implement such a network, I get the error message that the parts of my output layer which I want to be the covariance matrix are singular/not positive-semi-definite. How can I fix this? I tried different activation-functions and initializations for the last layer, but once in a while I run into the same problem again.

+ +

How can I enforce that my network outputs a valid covariance matrix?

+",21366,,,,,12/11/2019 8:13,How to enforce covariance-matrix output as part of the last layer of a Policy Network?,,3,0,,,,CC BY-SA 4.0 +13361,2,,13360,7/14/2019 9:24,,3,,"

Usually it is assumed that there is no correlation between different actions, so the covariance matrix will be zero everywhere except on the main diagonal. The diagonal will represent the variances of the actions. A diagonal covariance matrix will be positive semidefinite if all values on the diagonal are $\geq$ 0, so you need to ensure that the output of the final layer is $\geq$ 0, which can be done with a ReLU activation, for example.
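
+ +

A minimal PyTorch sketch of such an output head (layer sizes are arbitrary; the small constant added to the ReLU output is just one common way to keep the variances strictly positive) might look like this:

+ +
import torch
+import torch.nn as nn
+
+class GaussianPolicyHead(nn.Module):
+    def __init__(self, hidden_dim=64, action_dim=3):
+        super().__init__()
+        self.mean_layer = nn.Linear(hidden_dim, action_dim)
+        self.var_layer = nn.Sequential(nn.Linear(hidden_dim, action_dim), nn.ReLU())
+
+    def forward(self, h):
+        mean = self.mean_layer(h)
+        var = self.var_layer(h) + 1e-6          # variances >= 0 (strictly > 0 here)
+        return mean, torch.diag_embed(var)      # diagonal covariance matrix
+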

+",20339,,,,,7/14/2019 9:24,,,,2,,,,CC BY-SA 4.0 +13362,1,,,7/14/2019 10:29,,2,236,"

I am new to machine learning, especially Conditional Random Fields (CRFs).

+ +

I have read several articles and papers, and CRFs are always associated with HMMs and sequence classification. I don't really understand the mathematics, especially the formulas. So I can't understand the process. Where do I need to start to understand CRFs?

+ +

I want to make an information extraction application using CRF Named Entity Recognition (NER).

+ +

I got some tutorial for that: https://eli5.readthedocs.io/en/latest/tutorials/sklearn_crfsuite.html#training-data

+ +

But I don't know the process at each step, like the training process, evaluation, and testing.

+ +

I use this code :

+ +
  data_frame = eli5.format_as_dataframes(
+            eli5.explain_weights_sklearn_crfsuite(self.crf))
+
+ +

Targets +

+ +

Transition Features +

+ +

How do I get those numbers?

+ +

And one more thing confuses me:

+ +
crf = sklearn_crfsuite.CRF(
    algorithm='lbfgs',               # optimizer: gradient descent using L-BFGS
    c1=0.1,                          # coefficient for L1 regularization
    c2=0.1,                          # coefficient for L2 regularization
    max_iterations=20,
    all_possible_transitions=False,  # don't create transition features for unseen label pairs
)
+
+ +

What is the lbfgs algorithm? Isn't the CRF itself the algorithm? Why do I need lbfgs? And what exactly is a conditional random field?

+",22686,,22686,,7/20/2019 11:14,7/20/2019 11:14,What is a conditional random field?,,0,6,,,,CC BY-SA 4.0 +13363,2,,5835,7/14/2019 10:52,,5,,"
+

How does Q learning handle this? Is the Q function only used during the training process, where the future states are known? And is the Q function still used afterwards, if that is the case?

+
+ +

The learned $Q$-function is not only used during training, but also after training (in what we may call ""deployment"", when we expect a trained agent to behave according to what it has learned).

+ +

However, the reliance on future states is only there during training, it is no longer required for deployment.

+ +

During training, we use the following $Q$-learning update rule:

+ +

$$Q(s, a) \gets (1 - \alpha) Q(s, a) + \alpha \left( R + \gamma \color{red}{\max_{a'} Q(s', a')} \right),$$

+ +

where $s'$ is the state we reach after executing $a$ in $s$. Here, the $\color{red}{\text{red}}$ part is the part where we rely on knowledge of the future $s'$. This is available in training because we can simply pick $a$, execute it in $s$, observe $s'$, and only then trigger our update step.

+ +
+ +

Outside of training (and actually also during training), we also rely on our $Q(s, \cdot)$ function for the selection of actions. We typically select an action $a$ according to $a = \arg\max_a Q(s, a)$; we select the action $a$ that maximises $Q(s, a)$ in our current state $s$. The important thing to note here is that there is no $s'$ term in this description of how we select actions: we do not require knowledge of our future state.
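
To make this concrete, here is a minimal tabular sketch (the env.step interface, table sizes and hyperparameter values are assumptions, not taken from the question): the next state $s'$ is only needed inside the training update, while action selection only needs the current state $s$.

import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def select_action(s):
    # used both during and after training: no knowledge of s' required
    return int(np.argmax(Q[s]))

def train_step(env, s):
    # epsilon-greedy behaviour during training
    a = select_action(s) if np.random.rand() > epsilon else np.random.randint(n_actions)
    s_next, r, done = env.step(a)        # s' is only observed here, during training
    target = r + (0.0 if done else gamma * np.max(Q[s_next]))
    Q[s, a] += alpha * (target - Q[s, a])
    return s_next, done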

+ +
+ +

Note: in my answer I decided to answer the question literally as it is written, i.e. I'm explaining how $Q$-learning can still work in the described setting where actions $a$ have no influence whatsoever on the future state reached.

+ +

In practice, I would never recommend actually using $Q$-learning in such a setting, and instead refer to Neil Slater's comment about Contextual Multi-Armed Bandit algorithms likely providing a better solution.

+",1641,,,,,7/14/2019 10:52,,,,0,,,,CC BY-SA 4.0 +13364,2,,13360,7/14/2019 16:29,,1,,"

@Brale_ 's answer is correct, it is common practice in a multitude of models to learn a representation of an independent multivariate normal, but don't let that stop you from pushing the envelope for your needs.

+ +

You can actually learn a dependent form as well. Normally, the independent form is handled by learning the means and standard deviations, because a draw $z \sim N(\mu, Diag(\sigma^2))$ can be obtained by sampling a standard normal: $z = \mu + \sigma \epsilon$, where $\epsilon$ is drawn from a unit normal.

+ +

But you can actually achieve a similar trick for a generalized multivariate normal distribution. Let's assume you're trying to learn $N(\mu, \Sigma)$, where $\Sigma$ is the covariance matrix. What you would do is learn the Cholesky decomposition: since $\Sigma = AA^T$, you can now draw a sample through the parametrization trick $z = \mu + A\epsilon$.
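
A minimal PyTorch-style sketch of that sampling step (the tensor shapes are assumptions; building a valid lower-triangular $A$ from raw network outputs is left out):

import torch

def sample_full_covariance(mu, A):
    # mu: (batch, d) mean, A: (batch, d, d) lower-triangular factor with Sigma = A A^T
    eps = torch.randn_like(mu)                      # eps ~ N(0, I)
    return mu + torch.einsum('bij,bj->bi', A, eps)  # z = mu + A eps, so z ~ N(mu, Sigma)

# equivalently, torch.distributions accepts the Cholesky factor directly:
# dist = torch.distributions.MultivariateNormal(loc=mu, scale_tril=A)
# z = dist.rsample()   # reparametrized sample, so gradients flow back to mu and A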

+",25496,,,,,7/14/2019 16:29,,,,0,,,,CC BY-SA 4.0 +13365,1,,,7/14/2019 16:40,,1,18,"

Can the agent of a reinforcement learning system serve as the environment for other agents and expose its actions as services? Is there any research that considers such a question?

+ +

I tried to formulate the problem of network of reinforcement learning systems in other site of Stack network: https://cs.stackexchange.com/questions/111820/value-flow-and-economics-in-stacked-reinforcement-learning-systems-agent-as-r

+ +

So - is there some research along the lines of such stacked RLs? Of course, I know that Google is my friend, but sometimes I come up with ideas that I don't know how to name, or how they are named by other scientists who have already discovered and researched them. That is why I am using Stack Exchange to get some keywords for my ideas, so that I can explore the field further myself using those keywords. So - what are the keywords of research about such stacked RL systems, of agent-environment interchange in reinforcement learning?

+",8332,,8332,,7/14/2019 17:22,7/14/2019 17:22,Can the agent of reinforcement learning system serve as the environment for other agents and expose actions as services?,,0,0,,,,CC BY-SA 4.0 +13368,1,,,7/14/2019 19:49,,1,43,"

Can a system, for instance a robotic vehicle, be controlled by more than one reinforcement learning algorithm? I intend to use one to address collision avoidance and the other to tackle autonomous task completion.

+",27182,,18758,,1/8/2022 11:43,1/8/2022 11:43,Can multiple reinforcement algorithms be applied to the same system?,,0,1,,,,CC BY-SA 4.0 +13375,1,,,7/15/2019 15:18,,1,74,"

Suppose we trained a neural network on some training set that we call $X$.

+ +

Given the neural network and the method of training (algorithm, hyperparameters, etc.), can we infer anything about $X$?

+ +

Now, instead, suppose we also have some subset of the training data $Y \subseteq X$ available. Is there anything we can infer about $Y^c$?

+",27209,,27209,,7/16/2019 8:32,7/16/2019 8:32,What can be inferred about the training data from a trained neural network?,,0,8,,,,CC BY-SA 4.0 +13376,1,13488,,7/15/2019 18:02,,2,147,"

From a subjective perspective, the number of documentaries about the subject of Artificial Intelligence and robotics is small. It seems that the topic is hard to visualize for the audience, and in most cases the assumption is that the recipient isn't familiar with computers at all. I've found the following documentaries:

+ +
    +
  • The Computer Chronicles - Artificial Intelligence (1985)
  • +
  • The Machine That Changed the World (1991), Episode IV, The Thinking Machine
  • +
  • Robots Rising (1998)
  • +
  • Rodney's Robot Revolution (2008)
  • +
+ +

My subjective impression is that the quality of the films from the 1980s was higher than that of modern documentaries, and in 50% of the documentaries Rodney Brooks is the host. Are more documentaries available that can be recommended?

+ +

Focus on non-fictional documentaries

+ +

Some fictional movies were already mentioned in a different post, for example Colossus: The Forbin Project (1970), Bladerunner (1982) or A.I. Artificial Intelligence (2001). They are based on fictional characters which don't exist, and the presented robots run on a Hollywood OS. This question is only about non-fictional motion pictures.

+",,user11571,1671,,4/19/2021 21:32,4/19/2021 21:32,Which nonfictional documentaries about Artificial Intelligence are available?,,2,1,,,,CC BY-SA 4.0 +13377,1,,,7/15/2019 20:52,,3,240,"

The definition of deterministic environment I am familiar with goes as follows:

+ +
+

The next state of the agent depends only on the current state and the action chosen by the agent.

+
+ +

By exclusion, everything else would be a stochastic environment.

+ +

However, what about environments where the next state depends deterministically on the history of previous states and actions chosen? Are such environments also considered deterministic? Are they very uncommon, and hence just ignored, or should I include them into my working definition of deterministic environment?

+",21366,,2444,,7/15/2019 20:56,7/16/2019 5:26,Can non-Markov environments also be deterministic?,,2,0,,,,CC BY-SA 4.0 +13380,1,,,7/16/2019 2:58,,1,242,"

I am training my network with different dropout values. The problem is, dropout contributes almost nothing to training, either causing so much noise that the error never changes, or seemingly having no effect on the error at all:

+ +

The following runs were seeded.

+ +

key: dropout = 0.3, means 30% chance of dropout

+ +

graph x axis: iteration

+ +

y axis: error

+ +

dropout=0 +

+ +

dropout = 0.001

+ +

dropout = 0.1

+ +

dropout = 0.5 +

+ +

I don't quite understand why a dropout of 0.5 effectively kills the network's ability to train. This specific network here is rather small, a CNN with the architecture:

+ +
3x3x3                    Input image
+3x3x3                    Convolutional layer: 3x3x3, stride = 1, padding = 1
+20x1x1                   Flatten layer: 27 -> 20
+20x1x1                   Fully connected layer: 20
+10x1x1                   Fully connected layer: 10
+2x1x1                    Fully connected layer: 2
+
+ +

But I have tested a CNN with architecture:

+ +
10x10x3                  Input image
+9x9x12                   Convolutional layer: 4x4x12, stride = 1, padding = 1
+8x8x12                   Max pooling layer: 2x2, stride = 1
+6x6x24                   Convolutional layer: 3x3x24, stride = 1, padding = 0
+5x5x24                   Max pooling layer: 2x2, stride = 1
+300x1x1                  Flatten layer: 600 -> 300
+300x1x1                  Fully connected layer: 300
+100x1x1                  Fully connected layer: 100
+2x1x1                    Fully connected layer: 2
+
+ +

overnight with dropout = 0.2 and it completely failed to learn anything, having an accuracy of just below 50%, whereas without dropout, its accuracy is ~85%. I would just like to know if there's a specific reason as to why this might be happening. My implementation of dropout is as follows:

+ +

activation = relu(val)*(random.random() > self.dropout)

+ +

then at test time:

+ +

activation = relu(val)*(1-self.dropout)

+",26726,,26726,,7/18/2019 2:54,7/27/2023 19:06,Dropout causes too much noise for network to train,,1,6,,,,CC BY-SA 4.0 +13381,2,,13377,7/16/2019 4:22,,0,,"

Depends on the information provided in the state of the system. In theory, the history can be an element of the state, in which case, by the definition you provided:

+ +
+

The next state of the agent depends only on the current state and the action chosen by the agent.

+
+ +

It is a deterministic agent.

+ +

On the other hand, assume the state has no information about the history, in which case at every point you only know its current status and nothing about where it was previously. In this case, it is a stochastic environment, because you can define a distribution with greater than 0 entropy/uncertainty over possible next states.

+",25496,,,,,7/16/2019 4:22,,,,2,,,,CC BY-SA 4.0 +13382,1,,,7/16/2019 4:49,,4,168,"

Say you have to enter a story into a computer. Now, the computer has to identify the philosophical concept on which the story is based, say:

+
    +
  1. Was it a "self-fulfilling prophecy"?

    +
  2. +
  3. Was it an example of "Deadlock" or "Pinocchio paradox situation"?

    +
  4. +
  5. Was it an example of how rumours magnify? or something similar to a chain reaction process?

    +
  6. +
  7. Was it an example of "cognitive dissonance" of a person?

    +
  8. +
  9. Was it a story about "altruism"?

    +
  10. +
  11. Was it a story about a "misunderstanding" when a person did something "innovative" but it accidentally was innovated earlier so the person was "falsely accused" of "plagiarising"?

    +
  12. +
+

And so on.

+

Assume that the story is not just a heavy rephrasing of a pre-existing story: not only are the character names and identities totally changed, but the context is completely changed and the exact tasks the characters are doing are changed.

+

Can computers identify such "concepts" from stories? If yes, then what mechanism do they use?

+",27217,,2444,,12/16/2021 18:05,12/16/2021 18:05,Can a computer identify the philosophical concept on which a given story is based?,,1,0,,,,CC BY-SA 4.0 +13383,2,,13377,7/16/2019 5:26,,2,,"

A Markov environment is not about being deterministic or stochastic. 'Depends only on the current state and your action' does not mean you know what will happen (deterministic).

+ +

We can have Markov + deterministic, Markov + Stochastic, Non-Markov + deterministic, and Non-Markov + stochastic.

+ +

The definition you have is not a definition of deterministic. It is a definition of Markov property.

+ +

Refer to Wikipedia.

+ +
+

A stochastic process has the Markov property if the conditional + probability distribution of future states of the process (conditional + on both past and present values) depends only upon the present state; + that is, given the present, the future does not depend on the past. A + process with this property is said to be Markovian or a Markov + process. The most famous Markov process is a Markov chain. Brownian + motion is another well-known Markov process.

+
+ +

The Markov property is mostly assumed in stochastic problems. Brownian motion is the motion of molecules of ink in water, and it is used to model the movement of a stock price, which is stochastic.

+ +

Deterministic means that when you are in the same state and choose the same action, your next state will always be the same.

+ +

Stochastic means that even if you are in the same state and choose the same action, your next state can be different from the previous time.

+ +

Example) You toss a coin and roll a die. Every time you roll the die, you get as many pennies as the number rolled. If the coin comes up heads, you get a chance to roll the die twice next time. Your state can be (money you have collected so far, whether the coin was heads or tails the previous time).

+ +

In this problem, your next state will not be affected by the past. The only things you need to know are in the current state: the money you got, and heads or tails. So it is a Markov process/environment. However, it is still stochastic, because you don't know what the next state will be.

+",23788,,,,,7/16/2019 5:26,,,,0,,,,CC BY-SA 4.0 +13384,2,,13382,7/16/2019 8:15,,4,,"

No. This is currently out of the scope for any language processing system. It requires a general understanding of abstract concepts which is not possible for machines at present.

+ +

In order to recognise a self-fulfilling prophecy, you first need to identify that something is a prophecy. So it needs to be something that expresses a possible future state, for which you need to identify what possible future states are; and then you need to see whether it is self-fulfilling. Conceptually this is far too complex to do.

+ +

You might get away with formal criteria for some of these (e.g. the use of the future tense for something describing a future state/event), but this is far too imprecise.

+ +

""Altruism"" requires knowledge about typical expected behaviour; you would need to be able to identify motives behind people's actions, and then decide whether it was altruistic or not. This is just too complex for now (and the foreseeable future).

+",2193,,2193,,7/16/2019 15:14,7/16/2019 15:14,,,,2,,,,CC BY-SA 4.0 +13385,1,,,7/16/2019 8:23,,2,146,"

I am training a DNN for image recognition. During each epoch, I calculate the mean loss on the training set. After each epoch, I calculate the loss and the number of errors over both the training and test sets. The problem is, the training and test errors go to (almost) zero, then increase, go to zero again, increase, and so on. The process seems stochastic.

+ +
epoch: 1 mean_loss=0.109 train: errs=7 loss=0.00622 test: errs=3 loss=0.00608
+epoch: 2 mean_loss=0.00524 train: errs=5 loss=0.00309 test: errs=3 loss=0.00369
+epoch: 3 mean_loss=0.00408 train: errs=13 loss=0.00614 test: errs=7 loss=0.00951
+epoch: 4 mean_loss=0.00198 train: errs=113 loss=0.102 test: errs=51 loss=0.265
+epoch: 5 mean_loss=0.00424 train: errs=3 loss=0.00201 test: errs=2 loss=0.00148
+epoch: 6 mean_loss=0.0027 train: errs=1 loss=0.000466 test: errs=2 loss=0.00193
+epoch: 7 mean_loss=0.00797 train: errs=5 loss=0.00381 test: errs=0 loss=0.000493
+epoch: 8 mean_loss=0.00368 train: errs=1 loss=0.000345 test: errs=2 loss=0.00148
+epoch: 9 mean_loss=0.000358 train: errs=0 loss=6.76e-05 test: errs=0 loss=0.000446
+epoch: 10 mean_loss=0.00101 train: errs=164 loss=0.0863 test: errs=67 loss=0.19
+epoch: 11 mean_loss=0.000665 train: errs=0 loss=2.38e-05 test: errs=0 loss=9.86e-05
+epoch: 12 mean_loss=0.00714 train: errs=5 loss=0.00909 test: errs=0 loss=0.00816
+epoch: 13 mean_loss=0.00266 train: errs=73 loss=0.0333 test: errs=10 loss=0.0192
+epoch: 14 mean_loss=0.00213 train: errs=0 loss=7.74e-05 test: errs=0 loss=0.000197
+epoch: 15 mean_loss=6.12e-05 train: errs=0 loss=7.66e-05 test: errs=0 loss=3.44e-05
+epoch: 16 mean_loss=0.00162 train: errs=5 loss=0.00265 test: errs=0 loss=0.0012
+epoch: 17 mean_loss=0.000159 train: errs=0 loss=3.11e-05 test: errs=0 loss=4.26e-05
+epoch: 18 mean_loss=4.68e-05 train: errs=0 loss=3.28e-05 test: errs=0 loss=6.05e-05
+epoch: 19 mean_loss=2.47e-05 train: errs=0 loss=2.8e-05 test: errs=0 loss=5.01e-05
+epoch: 20 mean_loss=2.2e-05 train: errs=0 loss=2.31e-05 test: errs=0 loss=3.95e-05
+epoch: 21 mean_loss=2.37e-05 train: errs=0 loss=1.76e-05 test: errs=0 loss=2.52e-05
+epoch: 22 mean_loss=1.4e-05 train: errs=0 loss=1.16e-05 test: errs=0 loss=1.52e-05
+epoch: 23 mean_loss=2.13e-05 train: errs=0 loss=1.65e-05 test: errs=0 loss=2.13e-05
+epoch: 24 mean_loss=1.53e-05 train: errs=0 loss=1.91e-05 test: errs=0 loss=2.46e-05
+epoch: 25 mean_loss=0.00419 train: errs=0 loss=5.27e-05 test: errs=0 loss=4.65e-05
+epoch: 26 mean_loss=0.000372 train: errs=6 loss=0.00297 test: errs=3 loss=0.00731
+epoch: 27 mean_loss=0.0016 train: errs=0 loss=4.23e-05 test: errs=0 loss=3.69e-05
+epoch: 28 mean_loss=3.34e-05 train: errs=0 loss=2.44e-05 test: errs=0 loss=2.76e-05
+epoch: 29 mean_loss=7.03e-05 train: errs=0 loss=2.16e-05 test: errs=0 loss=1.69e-05
+epoch: 30 mean_loss=2.41e-05 train: errs=0 loss=1.84e-05 test: errs=0 loss=1.77e-05
+epoch: 31 mean_loss=1.26e-05 train: errs=0 loss=2.11e-05 test: errs=0 loss=1.78e-05
+epoch: 32 mean_loss=1.39e-05 train: errs=0 loss=2.75e-05 test: errs=0 loss=2.42e-05
+epoch: 33 mean_loss=7.68e-05 train: errs=0 loss=0.00014 test: errs=0 loss=4.66e-05
+epoch: 34 mean_loss=2.53e-05 train: errs=0 loss=1.48e-05 test: errs=0 loss=1.56e-05
+epoch: 35 mean_loss=0.000352 train: errs=1786 loss=2.17 test: errs=493 loss=2.56
+epoch: 36 mean_loss=0.0088 train: errs=0 loss=0.000347 test: errs=0 loss=0.000449
+epoch: 37 mean_loss=0.000395 train: errs=0 loss=6.18e-05 test: errs=0 loss=0.000125
+epoch: 38 mean_loss=5e-05 train: errs=0 loss=6.73e-05 test: errs=0 loss=9.89e-05
+epoch: 39 mean_loss=0.00401 train: errs=26 loss=0.00836 test: errs=27 loss=0.0269
+epoch: 40 mean_loss=0.00051 train: errs=0 loss=7.66e-05 test: errs=0 loss=7.07e-05
+epoch: 41 mean_loss=5.49e-05 train: errs=0 loss=2.47e-05 test: errs=0 loss=2.58e-05
+epoch: 42 mean_loss=3.38e-05 train: errs=0 loss=1.67e-05 test: errs=0 loss=2.1e-05
+epoch: 43 mean_loss=2.45e-05 train: errs=0 loss=1.28e-05 test: errs=0 loss=2.95e-05
+epoch: 44 mean_loss=0.00137 train: errs=44 loss=0.0141 test: errs=16 loss=0.0207
+epoch: 45 mean_loss=0.000785 train: errs=1 loss=0.000493 test: errs=0 loss=4.46e-05
+epoch: 46 mean_loss=5.46e-05 train: errs=1 loss=0.000487 test: errs=0 loss=1.34e-05
+epoch: 47 mean_loss=1.99e-05 train: errs=1 loss=0.00033 test: errs=0 loss=1.57e-05
+epoch: 48 mean_loss=1.78e-05 train: errs=1 loss=0.000307 test: errs=0 loss=1.58e-05
+epoch: 49 mean_loss=0.000903 train: errs=1 loss=0.00103 test: errs=0 loss=0.000393
+epoch: 50 mean_loss=4.74e-05 train: errs=0 loss=4.63e-05 test: errs=0 loss=3.53e-05
+Finished Training, time: 234.69774420000002 sec
+
+ +

The images are 96*96 gray. There are about 7000 training and 1750 test images. The order of presentation is random, and different at each epoch. Each image is either contains the object or not. The architecture is (Conv2d->ReLU->BatchNorm2d->MaxPool)*4->AvgPool(6,6)->Flatten->Conv->Conv->Conv. All MaxPool's are 2*2. First two Conv2d layers are 5*5, padding=2, others 3*3, padding=1. The optimiser is like this:

+ +
Optimizer= Adam (
+Parameter Group 0
+    amsgrad: False
+    betas: (0.9, 0.999)
+    eps: 1e-08
+    lr: 0.001
+    weight_decay: 1e-05
+)
+
+ +

Currently I just choose the epoch when the training set error was minimal.

+ +
if epoch == 0 or train_loss < train_loss_best:
+    net_best = copy.deepcopy(net)
+    train_loss_best = train_loss
+
+ +

It works, but I don't like it. Is there a way to make the learning more stable and steady?

+",5852,,,user9947,7/16/2019 16:55,7/25/2023 22:03,Spikes in of Train and Test error,,1,2,,,,CC BY-SA 4.0 +13387,1,13388,,7/16/2019 10:08,,2,66,"

I don't know if I worded the title correctly.

+ +

I have a big dataset (300000 images after augmentation) and I've split it into 10 parts, because I can't convert all the images into a single numpy array and save it; the file would be too large.

+ +

Now, I have a neural network (Using keras with tf). My question is, is it better to train each file individually for X epochs (File 1 for 5 epochs, then File 2 for 5 epochs, etc.), or should I do an epoch for each, repeatedly (File 1 for an epoch, File 2 for an epoch, etc., and repeat for 5 times).

+ +

I've used the first approach and I get an accuracy of about 88%. Would I get an improvement by doing the latter?

+",27219,,,,,7/16/2019 12:30,Better to learn the same small set for multiple epochs then go to the next or learn from each one time repeatedly for multiple times?,,1,0,,,,CC BY-SA 4.0 +13388,2,,13387,7/16/2019 12:30,,0,,"

Training on a small set for multiple epochs causes overfitting, so expose the entire dataset within each epoch, then augment it (change it a little bit) and train again. Also use a small amount of dropout.

+ +
I have a big dataset (300000 of images after augumentation) and I've splitted it into 10 parts, because I can't convert the images into a numpy and save it, the file would be too large.
+
+ +

Do not store the augmented data; just apply the augmentation function at the start of each epoch and overwrite the dataset array.
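
A minimal sketch of that idea for the chunked dataset described in the question (the file names, the flip-plus-noise augmentation and the already compiled Keras model variable are assumptions):

import numpy as np

def augment(x):
    # simple on-the-fly augmentation: random horizontal flip plus a little noise
    if np.random.rand() < 0.5:
        x = x[:, :, ::-1, :]                 # assumes (batch, height, width, channels)
    return x + np.random.normal(0.0, 0.01, size=x.shape)

for epoch in range(5):
    for i in range(10):                      # every chunk is seen once per epoch
        x = np.load(f'images_part{i}.npy')   # hypothetical chunk file names
        y = np.load(f'labels_part{i}.npy')
        model.fit(augment(x), y, epochs=1, batch_size=64, shuffle=True)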

+",25836,,,,,7/16/2019 12:30,,,,0,,,,CC BY-SA 4.0 +13390,1,,,7/16/2019 13:30,,3,1149,"

I have successfully trained a Yolo model to recognize k classes. Now I want to train it by adding a (k+1)-th class to the pre-trained weights (for k classes) without forgetting the previous k classes. Ideally, I want to keep adding classes and train on top of the previous weights, i.e., train only the new classes. If I have to train all classes (k+1) every time a new class is added, it would be too time-consuming, as training k classes would take $k*20000$ iterations, versus the $20000$ iterations per new class if I can add the classes incrementally.

+ +

The dataset is balanced (5000 images per class for training).

+ +

I would appreciate it if you could suggest some methods or techniques to do this continual training for Yolo.

+",27221,,2444,,1/29/2021 0:05,6/18/2023 8:04,How can I incrementally train a Yolo model without catastrophic forgetting?,,1,2,,,,CC BY-SA 4.0 +13392,1,13394,,7/16/2019 15:09,,3,938,"

In Bandit Based Monte-Carlo Planning, the article where UCT is introduced as a planning algorithm, there is an algorithm description in page 285 (4 of the pdf).

+

Comparing this implementation of UCT (a specific type of MCTS algorithm) to the application normally used in games, there is one major difference. Here, the rewards are calculated for every state, instead of only doing an evaluation at the end of the simulation.

+

My questions are (they are all related to each other):

+
    +
  • Is this the only big difference? Or in other words, can I do the same implementation as in MCTS for games with the 4 stages: selection, expansion, simulation and backpropagation, where the result of the simulation is the accumulated reward instead of a value between 0 and 1? How would the UCT selection be adjusted in this case?

    +
  • +
  • What does the UpdateValue function in line 12 does exactly? In the text it says it is used to adjust the state-action pair value at a given depth, will this be used to the selection? How is this calculated exactly?

    +
  • +
  • What is the depth parameter needed for? Is it related with the UpdateValue?

    +
  • +
+

Finally, I would like to ask if you know any other papers where a clear implementation of UCT for planning is used with multiple rewards, not only at the end of the simulation.

+",24054,,-1,,6/17/2020 9:57,7/16/2019 22:52,Several questions related to UCT and MCTS,,1,2,,11/20/2021 0:03,,CC BY-SA 4.0 +13394,2,,13392,7/16/2019 15:50,,2,,"
+

Is this the only big difference? Or in other words, can I do the same implementation as in a simple MCTS with the 4 stages: selection, expansion, simulation and backpropagation, where the result of the simulation is the accumulated reward instead of a value between 0 and 1? How would the UCT selection be adjusted in this case?

+
+ +

No, this is not the only difference. The important differences between the ""original UCT"" (as described in the paper you linked), and the ""standard UCT"" (as typically implemented in game AI research) are nicely summarised in Subsection 2.4 of ""On Monte Carlo Tree Search and Reinforcement Learning"". I'd recommend reading the entire paper, or large parts of it, if you have the time, but to summarise the key differences:

+ +
    +
  • Original stores as many nodes as allowed by memory limitations per iteration, standard only expands one node per iteration.
  • +
  • Original uses transpositions, standard implementation does not
  • +
  • Original uses discount factor $\gamma$, standard does not (equivalent to picking $\gamma = 1$)
  • +
  • Original takes into account at which point in time rewards were observed, standard does not
  • +
+ +

That last point is the one your question mainly seems to be about. Your idea of using a standard implementation, but accumulating any intermediate rewards and treating them as one large terminal reward, would not be the same. Actually taking into account the times at which rewards were observed, as done in the original UCT, can lead to more accurate value estimates in nodes in the middle of the tree (or at least to finding them more quickly). Your idea would be equivalent to learning from Monte-Carlo backups in Reinforcement Learning literature (e.g. Sutton and Barto's book), whereas the original UCT implementation would be more similar to learning from TD($0$) backups.

+ +

Note that, when talking about the ""standard"" implementation of UCT, that's very often in papers about two-player zero-sum games in which all intermediate rewards have a value of $0$, discounting is considered to be unimportant (i.e. $\gamma = 1$), and only terminal game states have a non-zero reward. In this special case, the two ideas do become equivalent.

+ +
+ +
+

What does the UpdateValue function in line 12 does exactly? In the text it says it is used to adjust the state-action pair value at a given depth, will this be used to the selection? How is this calculated exactly?

+
+ +

This would be done in the ""standard"" way. The visit count is incremented, and $q$ could be added to a sum of all $q$ values ever observed during the search for that state-action pair, such that the average score can be computed as the sum of $q$ scores divided by visit count. This average score is the $\bar{X}$ used in Selection.

+ +
+ +
+

What is the depth parameter needed for? Is it related with the UpdateValue?

+
+ +

I think it's usually not really needed. In the paper they describe that it may be considered to have a depth cut-off. From paper:

+ +
+

""This can be the reach of a terminal state, or episodes can be cut at a certain depth (line 8).""

+
+",1641,,,,,7/16/2019 15:50,,,,2,,,,CC BY-SA 4.0 +13395,1,,,7/16/2019 16:10,,1,77,"

In a genetic algorithm, the order of the genes on a chromosome can have a significant effect on the performance (capacity to generate adaptation) of the genetic algorithm, where two or more genes interact to produce highly fit individuals. If we have a chromosome length of $100$ and genes $A$ and $B$ interact, then having them next to each other is strongly preferable to having them at opposing ends of the chromosome. In the former case, the probability of crossover breaking the genes apart is $1$ in $100$, and in the latter it is one.

+ +

What mechanisms have been tried to optimise the order of genes on a chromosome, so that interacting genes are best protected from crossover? Is it even possible?

+ +

I've asked at Biology SE if there exists any known biological mechanism which is responsible for such a possible order of the genes on a chromosome.

+",26382,,2444,,7/16/2019 21:38,7/16/2019 21:38,How can I solve the linkage problem in genetic algorithms?,,0,12,,,,CC BY-SA 4.0 +13396,2,,13385,7/16/2019 16:55,,0,,"

I was actually very recently working on CNNs and extensively training models and have noticed the same thing. (NOTE: The answer I will give is purely based on empirical observations and my understanding of mathematics of deep learning).

+ +

So, what I observed on the training set (since we are directly optimising on the training set) is that between periods of low losses there was suddenly a large loss, and then again low losses, but this time the training loss reached even lower values and the accuracy on the test set settled around a higher value (by this I mean that the accuracy was previously oscillating around some fixed value, and is now oscillating around a higher fixed value). From this I concluded that the high loss was some sort of obstacle that stops the weights from reaching the global minimum and traps them in a local minimum, unless they gain enough momentum to escape the obstacle, which is what the high loss indicates (think of the loss function as something like a sawtooth waveform, rotated around the y-axis and inclined at a certain angle). So, as we reach a lower loss, it is expected (if the model is good) that you see better generalisation and hence the stable higher accuracy.

+ +

You can actually somewhat see this happening in your training too, although the data-set is smaller to make concrete comments.

+ +

Looking at your results, it seems pretty clear that the training loss, the test loss and the accuracy are going hand in hand, which is a good sign. It means your model has not over-fitted yet and you can train it more, which will probably make these loss spikes less frequent. Nevertheless, I am pretty sure they will still appear every now and then, whichever method you choose, as the loss surface of a DNN is always pretty jagged terrain.

+ +

I chose to disregard training accuracy, because most of the time it is not really related to the loss in a monotonically increasing way (check this thread), and the test loss is expected to increase, since at such high accuracies it has been empirically seen that the test loss might decrease without affecting the test accuracy (check this answer).

+",,user9947,,,,7/16/2019 16:55,,,,0,,,,CC BY-SA 4.0 +13397,1,,,7/16/2019 17:23,,3,28,"

In human communication, tonality (or tonal language) conveys a lot of complex information, including emotions and motives. But excluding such complex aspects, tonality also serves a very basic purpose of 'grouping' or 'taking common' functions, such as:

+ +
    +
  1. The sweet, (pause), bread-and-drink.
  2. +
+ +

It means ""The sweet bread and the sweet drink"". However

+ +
    +
  1. The sweet-bread, (pause) and drink.
  2. +
+ +

It means only the bread is sweet but the drink isn't necessarily sweet, or the drink's sweetness property isn't assigned.

+ +

Can computers recognise these differences of meaning based on tonality?

+",27217,,2444,,7/16/2019 22:38,7/16/2019 22:38,"Can computers recognise ""grouping"" from voice tonality?",,0,0,,,,CC BY-SA 4.0 +13399,1,,,7/16/2019 20:42,,1,162,"

I want to use a neural network to perform a multivariable regression, where my dataset contains multiple features, but I can't for the life of me figure it out. Every kind of tutorial on the internet seems to be either for a single feature without information on how to upgrade it to multiple, or results in a yes or a no when I need numeric predictions (that is, it uses neural networks for classification).

+ +

Can someone please recommend some kind of resource I can use to learn this?

+",27156,,2444,,7/16/2019 21:20,7/16/2019 21:20,How can I perform multivariable regression with neural networks?,,1,0,,,,CC BY-SA 4.0 +13400,2,,13399,7/16/2019 21:17,,1,,"

Have a look at sklearn's sklearn.neural_network.MLPRegressor class, which uses a multi-layer neural network to do regression. You first need to define the object MLPRegressor, for example, by specifying the value of the parameter hidden_layer_sizes, which determines the number of layers and the number of neurons per layer, then you should call the method fit on this created object and pass to it your data matrix $X \in \mathbb{R}^{n \times m}$, where $n$ is the number of samples and $m$ is the number of features.
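
A minimal sketch with made-up data shapes (the layer sizes and max_iter are just example values):

import numpy as np
from sklearn.neural_network import MLPRegressor

n, m = 500, 8                                  # n samples, m features
X = np.random.rand(n, m)
y = X @ np.random.rand(m) + 0.1 * np.random.randn(n)    # some numeric target

reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000)
reg.fit(X, y)                                  # train on the feature matrix
predictions = reg.predict(np.random.rand(5, m))  # numeric outputs, not class labels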

+",2444,,,,,7/16/2019 21:17,,,,0,,,,CC BY-SA 4.0 +13401,1,,,7/16/2019 22:46,,7,1752,"

Tay was a chatbot, who learned from Twitter users.

+ +
+

Microsoft's AI fam from the internet that's got zero chill. The more you talk the smarter Tay gets. — Twitter tagline.

+
+ +

Microsoft trained the AI to have a basic ability to communicate, and taught it a few jokes from hired comedians, before setting it loose to learn from its conversations.

+ +

This was a mistake.

+ +

But why did Tay go so wrong? Was this an example of catastrophic forgetting, where short, recent trends override large, less recent training, or was it something else entirely?

+",125,,,,,7/17/2019 9:25,"Was the corruption of Microsoft's ""Tay"" chatbot an example of catastrophic forgetting?",,2,0,,,,CC BY-SA 4.0 +13404,2,,13401,7/17/2019 8:20,,6,,"

Looking at what happened, it was something similar. Though, the case differs in my eyes in one respect: if it could only do a few comedy jokes, that probably is not a profound starting point to excel on Twitter.

+ +

Firstly, Twitter is about real life, not about comedy. Discussions are sometimes tough, and you easily end up in social media bubbles, where only a certain style of speaking and certain topics are cultivated. So, even humans get on the wrong track there; why not a newbie bot? And, with jokes, you would probably catch something about language itself, but not about the topics. So, becoming a jerk instead of a nice comedian is a logical direction the bot even has to go in, at least a little, to communicate on the same level and not be left alone.

+ +

The comedian dataset is also very small compared to a Twitter dataset in a technical sense, so talking about a mini trend overriding a mega trend in this case is probably not accurate, given the amounts of examples available.

+ +

So, catastrophic things happened during learning, but not catastrophic forgetting by the definition of that term.

+",11810,,2193,,7/17/2019 9:25,7/17/2019 9:25,,,,0,,,,CC BY-SA 4.0 +13405,2,,13401,7/17/2019 9:03,,6,,"

It was essentially a lack of control over crowd-sourced training data.

+ +

While Tay was initially set up with some conversational ability, it seemed to be programmed to learn from interactions with other users. Once users became aware of this, they basically gamed the bot by exposing it to inappropriate language, which Tay's algorithms then picked up and repeated. According to the Wikipedia article on the topic, it is not known for sure whether its repeat after me facility was solely at fault, or if there was other behaviour that caused it.

+ +

It's not really an example of catastrophic forgetting; for one, we don't know how Tay worked internally. I would think it's just that it was overwhelmed by new data coming in which was different from the pre-set. It seems unlikely that the kind of language it was exposed to was in any way known in advance and part of its training set (and labelled as 'inappropriate').

+ +

Essentially, the lesson from this is to never trust any unvetted input data for training, unless you want to risk people abusing this trust as happened in this case.

+",2193,,,,,7/17/2019 9:03,,,,0,,,,CC BY-SA 4.0 +13406,1,,,7/17/2019 11:01,,1,27,"

I have been running an open-source text-to-speech system, Ossian. It uses feed-forward DNNs for its acoustic modelling. The error graph I got after running the acoustic model looks like this. Here is some relevant information:

+ +
    +
  • Size of Data: 7 hours of speech data (4000 sentences)
  • +
  • Some hyper-parameters:
    + +
      +
    • batch_size       : 128
    • +
    • training_epochs  : 15
    • +
    • L2_regularization: 0.003
    • +
  • +
+ +

Can anyone point me in the right direction to improve this model? I'm assuming it is suffering from an over-fitting problem? What should I do to avoid this? Increase the amount of data? Or change the batch-size/epochs/regularization parameters? Thanks in advance.

+",27246,,,,,7/17/2019 11:01,Improving the performance of a DNN model,,0,1,,,,CC BY-SA 4.0 +13409,2,,13217,7/17/2019 16:43,,0,,"

I heard back from the authors of the paper.

+ +

As expected, the Bernoulli sampler is non-differentiable, so as an approximation they use the expectation of the sampler's gradient.

+ +

$$
\begin{align*}
\frac{dL}{dv_i} &= \frac{dL}{dBern(\sigma(v_i))} \cdot \frac{dBern(\sigma(v_i))}{d\sigma(v_i)} \cdot \frac{d\sigma(v_i)}{dv_i} \\
&\approx \frac{dL}{dBern(\sigma(v_i))} \cdot \frac{dE[Bern(\sigma(v_i))]}{d\sigma(v_i)} \cdot \frac{d\sigma(v_i)}{dv_i} \\
&= \frac{dL}{dBern(\sigma(v_i))} \cdot \frac{d\sigma(v_i)}{d\sigma(v_i)} \cdot \frac{d\sigma(v_i)}{dv_i} \\
&= \frac{dL}{dBern(\sigma(v_i))} \cdot 1 \cdot \frac{d\sigma(v_i)}{dv_i}
\end{align*}
$$

+ +

So the answer ended up being as simple as that.
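
For what it's worth, a minimal PyTorch-style sketch of this approximation (this is my own straight-through-style implementation of the gradient-of-the-expectation idea, not code from the paper):

import torch

def bernoulli_straight_through(v):
    p = torch.sigmoid(v)
    sample = torch.bernoulli(p)
    # forward pass returns the hard sample; in the backward pass the detached term
    # contributes nothing, so dOut/dp = 1, i.e. the gradient of E[Bern(p)] w.r.t. p
    return p + (sample - p).detach()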

+",25496,,,,,7/17/2019 16:43,,,,0,,,,CC BY-SA 4.0 +13410,1,,,7/17/2019 17:11,,2,90,"

The Kohonen network is one fully connected layer, which clusters the input into classes according to a given metric. However, a single layer does not allow operating on complex relations, which is why deep learning is usually used.

+ +

Is it possible then to make multi-layered Kohonen network?

+ +

AFAIK, the output of the first layer already consists of cluster flags, so the activation function on the non-last layers must be different from the original Kohonen definition?

+",25836,,2444,,7/17/2019 20:38,7/17/2019 20:38,Is a multi-layer Kohonen network possible?,,1,0,,,,CC BY-SA 4.0 +13411,2,,13410,7/17/2019 18:16,,1,,"

Kohonen networks by definition are single layer FCNN's, but what differentiates them from others is their unsupervised training procedure.

+ +

This procedure is a function of the input, the weights and some hyperparameters. This means that, if you have a multilayer network, you could train only the final layer's weights using this procedure. Think of it this way: let $f$ be the final layer, and $g$ be the composition of all other layers, such that the whole network is $N(x) = f \circ g(x)$. Using Kohonen's training procedure, you could learn $f$'s weights where the input is $g(x)$ rather than $x$. But then how would you learn $g$'s weights?

+ +

So you could do this by iterative learning. let $N$ be a 2 layer network: $N = f_2 \circ f_1$. first learn $f_1$ using the single layer procedure, and now you could learn $f_2$ in the same manner, except the input features its trying to cluster would be $f_1(x)$ rather than $x$.

+ +

There are cons to this procedure, though: a Kohonen network at each layer will try to cluster its input to the best of its ability, so that its current representation is as good as possible, not so that the final composition is as good as possible (a common problem in many optimization procedures that aren't end-to-end). This may lead to non-optimal results, failing to achieve the deep representation you're looking for, which you could achieve with autoencoders or other deeper unsupervised models.

+",25496,,,,,7/17/2019 18:16,,,,3,,,,CC BY-SA 4.0 +13412,1,,,7/17/2019 20:22,,3,76,"

I have asked this question a number of times, but I always get confusing answers to this, like ""normalized data works better"", ""data lives in the same scale""

+ +

How can $(x - m)/s$ make the scale of images the same? Please explain the maths to me. Also, take the MNIST dataset as an example for illustration.

+",27072,,2444,,7/17/2019 20:40,7/17/2019 22:18,Why do we normalize data in a deep neural network?,,1,1,,,,CC BY-SA 4.0 +13413,2,,13412,7/17/2019 22:18,,3,,"

I answered a similar question earlier, and here is a piece of my answer that I think covers your question:

+ +

Batch normalization's assistance to neural networks wasn't really understood for the longest time, initially it was thought to assist with internal covariate shift (hypothesized by the initial paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift) but lately has been tied to the optimization process (How Does Batch Normalization Help Optimization?).

+ +

This means, from an architectural perspective, it is difficult to correctly assume how it should be utilized or how it will affect your network, unless you really understand its impact on the loss landscape and how your optimization process will traverse it given your initialization. (Note, by the way, that a recent paper by Google showed that you can achieve a lot of the benefits of batch normalization purely by understanding what issues it's resolving and attempting to mitigate them in the initialization process: Fixup Initialization.)

+",25496,,,,,7/17/2019 22:18,,,,0,,,,CC BY-SA 4.0 +13414,1,13424,,7/18/2019 4:12,,3,358,"

If I am training an agent to try and navigate a maze as fast as possible, a simple reward would be something like

+ +

\begin{align} +R(\text{terminal}) &= N - \text{time}\ \ , \ \ N \gg \text{everything} \\ +R(\text{state})& = 0\ \ \text{if not terminal} +\end{align} +i.e. when it reaches the terminal state, it receives a reward but one that decreases if it is slower. Actually I'm not sure if this is better or worse than $R(\text{terminal}) = 1 / \text{time}$, so please correct me if I'm wrong.

+ +

However, if the maze is really big, it could spend a long time wandering around before even encountering that reward. Are there any reliable ways of modifying the reward function to make the rewards less sparse? Assume that the agent knows the Euclidean distance between itself and the exit, just not the topography of the maze.

+ +

Is it at all sound to simply do something like

+ +

\begin{align} +R(\text{current}) = (d_E(\text{start}, \text{exit}) - d_E(\text{current}, \text{exit})) + (\text{terminal}==True)*(N-\text{time})? +\end{align}

+ +

Or if not, what kind of dense heuristic reward or other techniques might be better?

+",27260,,2444,,10/7/2020 17:25,10/7/2020 17:25,Are there any reliable ways of modifying the reward function to make the rewards less sparse?,,1,0,,,,CC BY-SA 4.0 +13416,2,,13380,7/18/2019 10:23,,0,,"
why dropout of 0.5 effectively kills the networks ability to train.
+
+ +

Because that is too much; typical values are around 0.05-0.15. Imagine that 50% of the input image is randomly set to 0, then it happens again on the next layer, which means on average only 25% remains, and so on. Also, if you have a small dataset with very different images for each class, this plus dropout will confuse the network. Your CNN setup is also not really rational: there are too many fully connected layers, so replace one or two of them with convolutional layers. I'd say the only reason for using a 4x4 kernel is with stride 2; otherwise use 3x3. You should also use batch normalisation, and augmentation (like adding small noise) would probably be better than dropout in your case.

+",25836,,,,,7/18/2019 10:23,,,,2,,,,CC BY-SA 4.0 +13418,1,,,7/18/2019 11:54,,16,954,"

As a student who wants to work on machine learning, I would like to know how to start my studies and how to follow the field to stay up-to-date. For example, I am willing to work on RL and MAB problems, but there is a huge literature on these topics. Moreover, these topics are studied by researchers from different communities such as AI and ML, Operations Research, Control Engineering, Statistics, etc. And I think that several papers are published on these topics every week, which makes it very difficult to follow them.

+ +

I would be thankful if someone could suggest a road-map to start studying these topics and follow them, and explain how I should select and study newly published papers. Finally, I would like to know the new trends in RL and MAB problems.

+",10191,,,,,7/18/2019 21:58,How to stay a up-to-date researcher in ML/RL community?,,1,0,,,,CC BY-SA 4.0 +13419,2,,13418,7/18/2019 12:23,,17,,"

There are some wonderful resources for keeping up to date in the ML community. Here are just a handful that a coworker showed me:

+ +
    +
  1. Deep Learning Monitor: this site contains hot and new papers along with tweets that are popularized by the community! You can even checkout RL papers specifically here

  2. +
  3. arxiv-sanity: this site updates with popular and new papers that make it onto Arxiv

  4. +
  5. papers with code: this site is wonderful because not only does it link to papers, but it links to their implementation for reproduction or assistance in your own personal projects. They even have a leaderboard and track state of the art (SoTA) on tons of different tasks

  6. +
  7. DL_twitter loop: You can't forget twitter, given that most researchers use it; this is just a single nice group you may like

  8. +
+",25496,,2444,,7/18/2019 21:58,7/18/2019 21:58,,,,0,,,,CC BY-SA 4.0 +13420,1,13431,,7/18/2019 14:17,,4,249,"

I have read through many of the papers and articles linked in this thread but I haven't been able to find an answer to my question.

+ +

I have built some small RL networks and I understand how REINFORCE works. I don't quite understand how they are applied to NAS though. Usually RL agents map a state to an action and get a reward so they can improve their decision making (which action to choose). I understand that the reward comes from the accuracy of the child network and the action is a series of digits encoding the network architecture.

+ +

What is passed as the state to the RL agent? This doesn't seem to be mentioned in the papers and articles I read. Is it the previous network? Example input data?

+",27267,,2444,,7/18/2019 19:57,7/19/2019 17:24,How does RL based neural architecture search work?,,1,0,,,,CC BY-SA 4.0 +13421,1,,,7/18/2019 14:55,,2,265,"

I was reading the recent paper Graph Representation Learning via Hard and Channel-Wise Attention Networks, where the authors claim that there is no hard attention operator for graph data.

+ +

From my understanding, the difference between hard and soft attention is that for soft attention you're computing the attention scores between the nodes and all their neighbors while for hard attention you have a sampling function that selects only the most important neighbors. If that is the case, then GraphSage is an example of hard attention, because they apply the attention only on a subset of each node's neighbors.

+ +

Is my understanding of hard and soft attention wrong, or does the claim that the authors made not hold?

+",20430,,2444,,7/18/2019 21:56,8/5/2021 17:07,Does GraphSage use hard attention?,,1,1,,,,CC BY-SA 4.0 +13422,1,13423,,7/18/2019 15:03,,1,95,"

I would like to implement the approach represented in this paper. Here they used following reconstruction loss:

+ +

$$ +L(X)= \frac{\lambda \cdot || M \odot (X - F(\overline{M} \odot X)) ||_{1} + (1 - \lambda) \cdot || \overline{M} \odot (X - F(\overline{M} \odot X)) ||_{1}}{N} +$$

+ +

Unfortunately, the author does not explain the function $F$. Does someone know a similar function, or can someone work out the function's purpose from the context?

+",23063,,2444,,7/18/2019 18:03,7/18/2019 18:03,"Understanding the reconstruction loss in the paper ""Anomaly Detection using Deep Learning based Image Completion""",,1,0,,,,CC BY-SA 4.0 +13423,2,,13422,7/18/2019 15:13,,1,,"

$F$ in this context is the output of the Convolutional Neural Network that's being trained, which is of the same size as $X$.

+",25496,,2444,,7/18/2019 18:03,7/18/2019 18:03,,,,1,,,,CC BY-SA 4.0 +13424,2,,13414,7/18/2019 15:49,,3,,"

Doing something like the dense, distance-based reward signal you propose is possible... but you have to do it very carefully. If you're not careful, and do it in a naive manner, you are likely to reinforce unwanted behaviour.

+ +

For example, the way I read the reward function you propose, it provides a positive reward for any step taken by the agent, with larger rewards for steps that get you closer to the goal (except for steps moving back into the start, which would have a reward of $0$). There does not appear to be any 'compensation' with negative rewards for moves that take you back away from the goal; in fact, such steps also still seem to carry positive rewards! This means that the optimal behaviour that your agent can end up learning is to keep moving around in circles (somewhat close to the goal, but never quite stepping into the goal) for an infinite amount of time, continuously racking up those positive rewards.

+ +

The idea of adding some extra (heuristic) rewards to speed up learning is referred to as ""reward shaping"". Naive approaches to reward shaping often end up unintentionally modifying the ""true"" objective, as highlighted above. The correct way to implement reward shaping, which provably does not modify the optimal policy, is Potential-Based Reward Shaping. The basic intuition behind this is that, if you use reward shaping to encourage ""movement"" in one ""direction"", you should also provide equivalent (taking into account discount factor $\gamma$) discouragement for subsequent ""movement"" in the other ""direction"".
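
To make that concrete, a minimal sketch of potential-based shaping for the maze example from the question (the Euclidean-distance potential is just one possible, hypothetical choice):

import numpy as np

def potential(state, exit_pos):
    # higher potential for states closer to the exit
    return -np.linalg.norm(np.asarray(state, dtype=float) - np.asarray(exit_pos, dtype=float))

def shaped_reward(r, s, s_next, exit_pos, gamma=0.99):
    # F(s, s') = gamma * phi(s') - phi(s); adding F to the environment reward
    # provably leaves the optimal policy unchanged
    return r + gamma * potential(s_next, exit_pos) - potential(s, exit_pos)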

+ +

Now, there is this really cool paper named ""Expressing Arbitrary Reward Functions as Potential-Based Advice"" which proposes a method that can automatically convert from additional reward shaping functions specified in the more ""natural"" or ""intuitive"" manner like you did, into (approximately) a potential-based one that is more likely to actually function correctly. This is not quite straightforward though, and the approach involves learning an additional value function which makes additional predictions used to implement the ""conversion"". So... in practice, in a simple grid-world like yours, I think it's going to be simpler to just figure out the correct potential-based definition yourself than trying to learn it like this, but it's cool stuff nevertheless.

+",1641,,,,,7/18/2019 15:49,,,,0,,,,CC BY-SA 4.0 +13426,1,13428,,7/18/2019 17:10,,1,1133,"

I'm using sklearn's MinMaxScaler in order to scale my data down. However, it would be nice to be able to rescale it back to its original range. Is there any way I can do this?

+",27156,,2444,,7/18/2019 17:56,7/18/2019 19:28,How to rescale data to its original range after MinMaxScaler?,,1,0,,5/4/2020 12:21,,CC BY-SA 4.0 +13428,2,,13426,7/18/2019 19:28,,2,,"

You can use the function inverse_transform of the created MinMaxScaler object.
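
A minimal sketch:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[10.0], [20.0], [55.0]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(data)          # values scaled into [0, 1]
original = scaler.inverse_transform(scaled)  # back to the original range (10, 20, 55)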

+ +

See also this Stack Overflow question for other answers and examples.

+",2444,,,,,7/18/2019 19:28,,,,0,,,,CC BY-SA 4.0 +13429,1,13430,,7/18/2019 20:57,,3,204,"

I am trying to use Q-learning for energy optimization. I only wish to have states that will be visited by the learning agent, and, for each state, I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and Q-values as values. Is this possible? How would it affect learning? What other methods can I use?

+ +

If it is possible and okay, and I want to update the Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero, how do I update the Q-value, now that all of the actions in this next state have Q-values of zero?

+",27231,,2444,,7/18/2019 21:43,7/18/2019 21:50,Is it possible to have a dynamic $Q$-function?,,1,1,,,,CC BY-SA 4.0 +13430,2,,13429,7/18/2019 21:42,,2,,"
+

I have a function that generates possible actions, so that I would have a Q-table in form of a nested dictionary, with states (added as they occur) as keys whose values are also dictionaries of possible actions as keys and q-values as values. Is this possible? How would it affect learning? What other method can I use?

+
+ +

(Disclaimer: I provided this suggestion to OP here as an answer to a question on Data Science Stack Exchange)

+ +

Yes this is possible. Assuming you have settled on all your other decisions for Q learning (such as rewards, discount factor or time horizon etc), then it will have no logical impact on learning compared to any other approach to table building. The structure of your table has no relevance to how Q learning converges, it is an implementation detail.

+ +

This choice of structure may have a performance impact in terms of how fast the code runs - the design works best when it significantly reduces memory overhead compared to using a tensor that over-specifies all possible states and actions. If all parts of the state vector could take all values in any combination, and all states had the same number of allowed actions, then a tensor model for the Q table would likely be more efficient than a hash.

+ +
+

I want to update Q-value, but the next state is one that was never there before and has to be added to my nested dictionary with all possible actions having initial Q-values of zero; how do I update the Q-value, now that all of the actions in this next state have Q-values of zero.

+
+ +

I assume you are referring to the update rule from single step Q learning:

+ +

$$Q(s,a) \leftarrow Q(s,a) + \alpha(r + \gamma \text{max}_{a'} Q(s',a') - Q(s,a))$$

+ +

What do you do when you first visit $(s,a)$, and want to calculate $\text{max}_{a'} Q(s',a')$ for the above update, yet all of $Q(s',a') = 0$, because you literally just created them?

+ +

What you do is use the zero value in your update. There is no difference whether you create entries on demand or start with a large table of zeroes. The value of zero is your best estimate of the action value, because you have no data. Over time, as the state and action pair are visited multiple times, perhaps across multiple episodes, the values from experience will back up over time steps due to the way that the update formula makes a link between states $s$ and $s'$.
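
As an illustration, a minimal sketch of such an on-demand table (state keys must be hashable, e.g. tuples; the allowed_actions_next argument would come from your action-generating function):

from collections import defaultdict

# Q[state][action] -> value; entries are created on demand with a default of 0.0
Q = defaultdict(lambda: defaultdict(float))

def q_update(s, a, r, s_next, allowed_actions_next, alpha=0.1, gamma=0.99):
    # max over the next state's allowed actions; never-seen entries simply read as 0.0
    if allowed_actions_next:
        best_next = max(Q[s_next][a2] for a2 in allowed_actions_next)
    else:
        best_next = 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])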

+ +

Actually you can use any arbitrary value other than zero. If you have some method or information from outside of your reinforcement learning routine, then you could use that. Also, sometimes it helps exploration if you use optimistic starting values - i.e. some value which is likely higher than the true optimal value. There are limits to that approach, but it's a quick and easy trick to try and sometimes it helps explore and discover the best policy more reliably.

+",1847,,1847,,7/18/2019 21:50,7/18/2019 21:50,,,,4,,,,CC BY-SA 4.0 +13431,2,,13420,7/18/2019 23:25,,2,,"

(I will repeat a few details that you're already aware of, so that other users can also understand the context).

+ +

In the Neural Architecture Search (NAS) paper (that I mention in my answer to the question you link to in your question), the agent is the controller (see also this question Is there any difference between a control and an action in reinforcement learning?), which is implemented as a recurrent neural network (RNN). This controller produces actions (or controls), which are strings that represent the hyper-parameters of a neural network (see e.g. section 3.2), based on the reward that it receives, which, in this case, is the accuracy on the validation dataset that the designed (by the controller) and trained neural network obtains.

+ +

In this context, the controller is thus the reinforcement learning agent (or policy). The controller is an RNN that is represented by a vector of parameters, $\boldsymbol{\theta}_\text{c}$. This controller is trained using a policy gradient method to maximize the expected accuracy on the validation dataset (see e.g. section 3 of the paper). So $\boldsymbol{\theta}_\text{c}$ are adjusted using this policy gradient method so that the controller generates NN architectures that, after trained on some task, produce higher accuracy on a validation dataset.

+ +

The objective function of the controller is thus

+ +

$$ +J(\boldsymbol{\theta}_\text{c}) = \mathbb{E}[R] +$$

+ +

where $R$ is the accuracy on the validation dataset. Why expected? You will be performing this operation multiple times, so, intuitively, you want an average of the accuracy that you obtain on the validation dataset using multiple neural network architectures that the controller might produce.

+ +

The accuracy on the validation dataset, represented by $R$, is actually non-differentiable, so the authors use the famous REINFORCE algorithm (which I will not explain here). The authors actually use an approximation of the REINFORCE algorithm (see section 3.2).

+ +

$$ +\frac{1}{m} \sum_{k=1}^m \sum_{t=1}^T \nabla_{\boldsymbol{\theta}_\text{c}} \log P (a_t \mid a_{{t-1}:1}; \boldsymbol{\theta}_\text{c}) R_k +$$

+ +

So, given this formulation and taking into account the formulation of the REINFORCE algorithm, the state of the agent seems to be $a_{{t-1}:1}$, that is, the sequence of previous actions of the agent. (Recall that the agent is a controller that is implemented as a recurrent neural network).
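
+ +

As a very rough sketch (my own illustration with made-up names, not code from the paper), the corresponding REINFORCE-style loss for one batch of $m$ sampled architectures could be written as follows, assuming the log-probabilities $\log P(a_t \mid a_{t-1:1}; \boldsymbol{\theta}_\text{c})$ were collected while sampling each architecture from the controller and $R_k$ is its validation accuracy (the paper also subtracts a baseline $b$ to reduce variance):

+ +
import torch
+
+def controller_loss(log_probs_per_arch, rewards, baseline=0.0):
+    # one term per sampled architecture: -(sum_t log P(a_t | ...)) * (R_k - b)
+    terms = [-torch.stack(lp).sum() * (R - baseline)
+             for lp, R in zip(log_probs_per_arch, rewards)]
+    return torch.stack(terms).mean()   # average over the m architectures
+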

+",2444,,2444,,7/19/2019 17:24,7/19/2019 17:24,,,,2,,,,CC BY-SA 4.0 +13432,1,,,7/19/2019 1:35,,4,207,"

I've heard that prediction is equivalent to data compression. Is there a way to take a compression function and use it to create an AI that predicts?

+",27283,,2444,,1/26/2021 16:50,1/26/2021 16:50,Can a data compression function be used to make predictions?,,2,0,,,,CC BY-SA 4.0 +13437,1,13441,,7/19/2019 8:10,,1,379,"

In the Pursuit algorithm (to balance exploration and exploitation), the greedy action has a probability say $p_1$ (updated every episode) of being selected, while the rest have a probability $p_2$ (updated every episode) of being selected.

+ +

Could you please show me an example code (Python) on how to enforce such conditional probabilistic picking?

+",27231,,2444,,7/19/2019 17:23,7/19/2019 17:23,Probabilistic action selection in pursuit algorithm,,1,7,,,,CC BY-SA 4.0 +13438,2,,13432,7/19/2019 8:11,,2,,"

The way some (not all) compression algorithms work is that they encode frequent events in a short code, and rarer events with a longer code. Overall you save more space by encoding the common elements than you need to expend coding the rare ones. One example of this is a Huffman code, which uses a variable length encoding based on the frequency of the items.

+ +

You can use a compression algorithm for prediction if it encodes more than one event at a time. For example, word pairs rather than individual words. Each word pair will have a code, and the common word pairs (e.g. of the) will have shorter codes than the ones which are less common (e.g. of three). For prediction, select all the word pairs that start with your known sequence (e.g. of). Now select from that list the pair with the shortest code (which is the more common one), so in this example of would more likely be followed by the rather than three. After that, repeat the process with the next word, so look for pairs that begin with the.

+ +

All you need is the compression 'code book' which is produced during the compression process -- it's essentially a model of the data you compressed. This also works for longer sequences than pairs, of course.
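
+ +

As a toy illustration of this idea (my own sketch, not taken from the book below), you can use pair frequencies directly as a stand-in for code lengths, since in a Huffman-style code the most frequent pair gets the shortest code:

+ +
from collections import Counter
+
+text = 'of the cat of the dog of three mice'.split()
+pair_counts = Counter(zip(text, text[1:]))   # the 'code book': one entry per word pair
+
+def predict_next(word):
+    # among pairs starting with `word`, pick the one that would have the shortest
+    # code, i.e. the most frequent one
+    candidates = {b: c for (a, b), c in pair_counts.items() if a == word}
+    return max(candidates, key=candidates.get) if candidates else None
+
+print(predict_next('of'))   # -> 'the'
+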

+ +

If you want to know more about the topic, I can recommend Managing Gigabytes by Witten, Moffat, and Bell. Great book on compression techniques.

+",2193,,,,,7/19/2019 8:11,,,,0,,,,CC BY-SA 4.0 +13441,2,,13437,7/19/2019 9:56,,1,,"

If I am understanding your question properly, you could use something like the following:

+ +
import numpy as np
+
+def select_action(p1, greedy_action, other_actions):
+    # With probability p1 pick the greedy action, otherwise pick one of the
+    # remaining actions uniformly at random.
+    if np.random.rand() < p1:
+        return greedy_action
+    return np.random.choice(other_actions)
+
+action = select_action(p1=0.9, greedy_action='a0', other_actions=['a1', 'a2', 'a3'])
+
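
+ +

To match the pursuit algorithm more closely (a sketch under my own naming assumptions, not part of the original answer), you can keep an explicit probability vector over all actions, sample from it with np.random.choice, and move it towards the current greedy action after every step, so the greedy probability $p_1$ and the non-greedy probabilities $p_2$ are updated automatically:

+ +
import numpy as np
+
+def pursuit_step(Q, probs, beta=0.01):
+    greedy = np.argmax(Q)
+    action = np.random.choice(probs.size, p=probs)        # probabilistic selection
+    probs += beta * (np.eye(probs.size)[greedy] - probs)  # pursue the greedy action
+    return action, probs                                  # probs still sums to 1
+
+Q = np.zeros(4)
+probs = np.full(4, 0.25)
+action, probs = pursuit_step(Q, probs)
+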
+
+",27290,,,,,7/19/2019 9:56,,,,12,,,,CC BY-SA 4.0 +13443,1,13497,,7/19/2019 10:50,,3,202,"

To me, most ANN/RNN related articles don't tell me actually how the network is implemented. I know that in the ANN you'll have multiple neurons, activation function, weights, etc. But, how do you, actually, in each neuron, convert the input to the output?

+ +

Putting the activation function aside, is the neuron simply doing $\text{input}*a+b=\text{output}$, and trying to find the correct $a$ and $b$? If that's true, then what about the case where two neurons' outputs ($c$ and $d$) both point to one neuron? Do you first multiply $c$ and $d$ together and then feed that in as the input?

+",23017,,2444,,7/19/2019 17:44,7/22/2019 3:57,How do layers in an artificial neural network transform inputs to outputs?,,2,4,0,,,CC BY-SA 4.0 +13444,1,13445,,7/19/2019 11:18,,0,81,"

Referring to the blog, Image Completion with Deep Learning in TensorFlow, it clearly says that we would want a generator $g$ whose modeled distribution fits our dataset $data$, in other words, $P_{data}=P_g$.

+ +

But, as described earlier in the blog, the space that $P_{data}$ lives in is a high-dimensional space, where each dimension represents a particular pixel in an image, making it a $64*64*3$-dimensional space (in this case). I have a few questions regarding this

+ +
    +
  1. Since each pixel here will have an intensity value, will the pdf try to encapsulate a unique pdf for each pixel?
  2. +
  3. If we sample the most likely pixel value for each pixel, considering the distributions need not be the same for each pixel, is it not quite likely that the most probabilistic generated image is just noise apart from things like a common background or so?
  4. +
  5. If $P_g$ is trying to replicate $P_{data}$ only, does that mean a GAN only tries to learn lower level features that are common in the training set? Are GANs clueless about what its doing?
  6. +
+",25658,,2444,,7/20/2019 13:39,7/20/2019 13:39,"If the goal of training of a GAN is to have $P_g=P_{data}$, shouldn't this produce the exact same images?",,1,3,,,,CC BY-SA 4.0 +13445,2,,13444,7/19/2019 13:56,,0,,"

Answers:

+ +
    +
  1. No, generally GANs model the joint distribution of the pixel space. The space is so entangled that it would be very difficult to model a single pixel's PDF.
  2. +
  3. This question doesn't make sense given the first question's answer. You do NOT sample singular pixels, you sample the joint.
  4. +
  5. $P_g$ is trying to model $P_{data}$, so that's your goal, so yes, it will find features in the training set. Ideally, though, it can extrapolate the usage of those features to generate new images. But, yes, depending on your setup, you can end up with overfitting or mode collapse, where you really only generate a small subset of images that trick the discriminator. So I would say, no, the GAN doesn't know what it's doing (since it's just a minimax optimization of a loss you set up), so your goal is to tell it what to do in a way that achieves your best result.
  6. +
+",25496,,2444,,7/19/2019 17:42,7/19/2019 17:42,,,,0,,,,CC BY-SA 4.0 +13447,2,,13443,7/19/2019 14:50,,4,,"

The basic calculation for a single neuron is of the form

+ +

$$\sigma\left(\sum_{i} x_i w_i \right),$$

+ +

where $x_i$ is the input to the neuron $w_i$ are the neuron-specific weights for every single input and $\sigma$ is the pre-specified activation function. In your terms, and disregarding the activation function, the calculation would turn out to be

+ +

$$c\,a_c + d\,a_d + b$$

+ +

Note, that the bias term $b$ is just a weight that gets multiplied by the input $1$, thus it appears to have no input.

+ +

If you want to develop a further understanding for this, you should try to get familiar with matrix and vector notations and the basic linear algebra that underlies the feed-forward neural networks. If you do, an entire layer of neurons on a whole batch of data will suddenly simply look like this:

+ +

$$\sigma(WX)$$

+ +

and a FFNN with say 3 layers will look like this:

+ +

$$\sigma_{3}(W_3\sigma_2(W_2\sigma_1(W_1X)))$$
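
+ +

As a small runnable illustration (my own sketch with arbitrary shapes, not part of the original answer), the whole 3-layer feed-forward pass is just three matrix multiplications wrapped in activations:

+ +
import numpy as np
+
+def sigma(z):
+    return 1.0 / (1.0 + np.exp(-z))           # example activation function
+
+X = np.random.rand(4, 8)                       # 4 features, batch of 8 examples
+W1, W2, W3 = np.random.rand(5, 4), np.random.rand(3, 5), np.random.rand(1, 3)
+
+out = sigma(W3 @ sigma(W2 @ sigma(W1 @ X)))    # sigma_3(W3 sigma_2(W2 sigma_1(W1 X)))
+print(out.shape)                               # (1, 8)
+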

+",26117,,2444,,7/19/2019 17:46,7/19/2019 17:46,,,,4,,,,CC BY-SA 4.0 +13449,1,,,7/19/2019 17:59,,2,100,"

Normally, when you develop a neural network, train it for object recognition (on normal objects like bike, car, plane, dog, cloud, etc.), and it turns out to perform very well, you would like to fine-tune it for e.g. recognizing dog breeds, and this is called fine-tuning.

+ +

On the other hand, in reinforcement learning, let's consider a game with rewards on checkpoints $1, 2, 3, \dots n$. When you have a bot that plays and learns using some (e.g. value) neural net, it develops some style of solving the problem to reach some checkpoint $k$, and, after that, when it reaches the $k+1$ checkpoint, it will probably have to re-evaluate the whole strategy.

+ +

In this situation, will the bot fine-tune itself? Does it make sense to keep the replay buffer as is and to ""reset"" the neural net to train it from scratch, or is it better to stay with the fine-tuning approach?

+ +

If possible, topic-related papers would be very welcome!

+",23181,,2444,,7/19/2019 19:58,7/19/2019 19:58,Will the RL agent implemented as a neural network fine-tune itself?,,0,0,0,,,CC BY-SA 4.0 +13450,1,,,7/19/2019 19:54,,3,93,"

I am trying to model operational decisions in inventory control. The control policy is base stock with a fixed stock level of $S$; that is, a replenishment order is placed for every demand arrival to take the stock level back to $S$. The replenishments arrive after a constant lead time $L$. There is an upper limit $D$ on the allowed stock-out time, which is measured every $T$ periods; if it is exceeded, a cost $C_p$ is incurred. This system functions in a similar manner to the M/G/S queue: the stock-out time can be thought of as the customer waiting time due to all servers being busy. So, every $R$ periods ($R$ is less than $T$), the inventory level and the pipeline of outstanding orders are monitored, and a decision is taken about whether or not to expedite an outstanding order (at a cost $C_e$), in order to control the waiting/stock-out time and to minimize the total costs.

+ +

I feel it is a time and state-dependent problem and would like to use $Q$-learning to solve this MDP problem. The time period $T$ is typically a quarter i.e. 3 months and I plan to simulate demands as poisson arrivals. My apprehension is whether simulating arrivals would help to evaluate the Q-values because the simulation is for such a short period. Am I not overestimating the Q value in this way? I request some help on how should I proceed with implementation.

+",27304,,27304,,11/20/2019 15:16,11/20/2019 15:16,How can I use Q-learning for inventory decision making?,,0,4,,,,CC BY-SA 4.0 +13451,1,,,7/19/2019 22:04,,3,115,"

Say I have a ML model which is not very costly to train. It has around say 5 hyperparameters.

+ +

One way to select the best hyperparameters would be to keep all the other hyperparameters fixed and train the model by changing only one hyperparameter within a certain range. For the sake of mathematical convenience, we assume that, for the hyperparameter $h^1$, keeping all other hyperparameters fixed to their initial values, the model performs best when $h^1_{low} < h^1 < h^1_{high}$ (which we found out by running the model on a huge range of $h^1$). Now we fix $h^1$ to one of the best values and tune $h^2$ the same way, where $h^1$ is the chosen value and the rest of the hyperparameters are again fixed to their initial values.

+ +

My question is: Does this method find the best hyperparameter choices for the model? I know that, if the hyperparameters are independent, then this definitely does find the best solution, but, in the general case, what is the general theory around this? (NOTE: I am not asking about the problem of choosing hyperparameters in general, but about the aforementioned approach of choosing hyperparameters)

+",,user9947,2444,,7/20/2019 0:17,1/1/2022 10:38,Does this hyperparameter optimisation approach yield the optimal hyperparameters?,,2,0,,,,CC BY-SA 4.0 +13453,2,,13451,7/19/2019 23:19,,2,,"

After you've computed $h^{1}_{optimal}$, the only thing you can be sure of is that this is the best (assuming the constrained case) value of $h^1$ (with respect to some model performance metric) given your initial values for $h^2, ..., h^n$. If you change any of $h^2, ..., h^n$ even a bit, you're no longer certain that the value of $h^1$ you found is the optimal one. So yes, the key here is the assumption about independence.

+",22835,,,,,7/19/2019 23:19,,,,0,,,,CC BY-SA 4.0 +13454,1,13465,,7/19/2019 23:27,,3,753,"

I am unsure about the following parts of the architecture and mechanics of convolution layers in CNNs. Possibly, this is implementation-dependent though.

+ +

First question:

+ +

Say I have 2 convolution layers with 10 filters each and the dimension of my input tensors is $n \times m \times 1$ (so, grayscale images for example). Passing this input to the first convolution layer results in 10 feature maps (10 matrices of $n \times m$, if we use padding), each produced by a different filter.

+ +

Now, what does actually happen when this is passed to the second convolution layer? Are all 10 feature maps passed as one big $m \times n \times 10$ tensor or are the overlapping cells of the 10 feature maps averaged and a $m \times n \times 1$ tensor is passed to the next convolution layer? The former would result in an explosion of feature maps with increasing number of convolution layers and the spatial complexity would be in $\mathcal{O}\left((nm)^k\right)$, where $k$ is the number of chained convolution layers. Averaging the feature maps before passing them to the next layer would keep the complexity linear. So, which is it? Or are both possibilities commonly used?

+ +

Second question (with two sub questions):

+ +

a) This is a similar question. If I have an input volume of $n \times m \times 3$ (e.g. RGB images) and I have again 2 convolution layers with 10 filters, does each convolution layer have in actuality 30 filters? So 10 sets of 3 filters, one for each channel? Or do I have in fact only 10 filters and the filters are applied to all 3 channels?

+ +

b) This is the same question as question (1) but for channels: Once I have convolved a filter (consisting of three channel filters? (a)) over the input tensor I end up with 3 feature maps. One for each channel. What do I do with these? Do I average them component-wise with each other? Or do I keep them separate until I have convolved all 10 filters across the input and THEN average the 10 feature maps of each channel? Or do I average all 30 feature maps of all three channels? Or do I just pass on 30 feature maps to the next convoloution layers which in turn knows which of these feature maps belong to which channel?

+ +

Quite a few possibilities... None of the sources I consulted makes this explicit. Maybe because it depends on the individual implementation.

+ +

Anyway, would be great if somebody could clear this confusion up a little!

+",20150,,2444,,7/20/2019 13:55,7/20/2019 15:34,Are feature maps merged or are they passed on as they are?,,2,1,,,,CC BY-SA 4.0 +13455,2,,13451,7/20/2019 0:07,,2,,"

The theory behind hyper-parameter optimization (HPO) is not well developed. Nonetheless, there are several hyper-parameter optimization approaches, such as Bayesian optimization (using Gaussian processes), random search, grid search, genetic algorithms, etc. See, for example, the paper Hyperparameter Search in Machine Learning (2015), which attempts to formalize the problem of hyper-parameter optimization in machine learning, Random Search for Hyper-Parameter Optimization (2012), and the related Wikipedia article.

+

In the paper Hyperparameter Search in Machine Learning (and, similarly, in Random Search for Hyper-Parameter Optimization), the authors formally define the hyper-parameter optimization problem as follows

+

\begin{align} +\lambda^* +&= \operatorname{arg min}_{\lambda}\mathcal{L}(X^{test}; \mathcal{M} = \mathcal{A}(X^{train}; \lambda)) \tag{1} +\end{align}

+

where $\lambda$ are the hyper-parameters of the learning algorithm (for example, gradient descent, whose hyper-parameters are the learning rate and the batch size), that is, the algorithm that is used to train the model $\mathcal{M}$ (e.g. a convolutional neural network, with a fixed architecture) using the training ($X^{train}$) and test ($X^{test}$) datasets (for simplicity, ignore cross-validation and related techniques).

+

In simple words, in equation $1$, we want to find the hyper-parameters $\lambda$ of the learning algorithm $\mathcal{A}$ that minimize the loss $\mathcal{L}$ on the test dataset $X^{test}$, when the model $\mathcal{M}$ is trained using $\mathcal{A}$ and the training dataset $X^{train}$.

+

The equation $1$ thus ignores the hyper-parameters associated with the model (e.g., the number of layers of a multi-layer perceptron) and only considers the hyper-parameters associated with the learning algorithm. However, note that the optimal hyperparameters of the learning algorithm $\mathcal{A}$ depend on the given training and test datasets, the loss function $\mathcal{L}$ and the model $\mathcal{M}$. Eventually, the formulation in $1$ could be extended to include the hyper-parameters associated with the model (and other hyper-parameters).

+

So, in general, the choice of the HPO method (including the method you're proposing) depends on several factors, including the model (and its architecture), the task that needs to be solved, the loss function, the training and test datasets, and the computational complexity and runtime efficiency of the HPO method. For example, if the space of hyper-parameters is discrete and small, then grid search (which can be an exhaustive search) will find the best combination of hyper-parameters, for a given task and dataset. However, grid search can be impractical if the search space is huge.
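
+

As a rough sketch of the difference (my own illustration with made-up names; `evaluate` is assumed to return a validation loss for a full configuration), the one-at-a-time procedure from the question only tries one value grid per hyper-parameter, while grid search tries the full Cartesian product:

+
import itertools
+
+def coordinate_search(evaluate, grid, init):
+    # Tune each hyper-parameter in turn while all the others stay fixed.
+    best = dict(init)
+    for name, values in grid.items():
+        scores = {v: evaluate({**best, name: v}) for v in values}
+        best[name] = min(scores, key=scores.get)   # keep the value with lowest loss
+    return best
+
+def grid_search(evaluate, grid):
+    # Exhaustive search over the Cartesian product of all hyper-parameter values.
+    names = list(grid)
+    candidates = [dict(zip(names, vals)) for vals in itertools.product(*grid.values())]
+    return min(candidates, key=evaluate)
+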

+

In general, the method you're proposing will not be optimal because, as you state, the hyper-parameters might not be independent of each other. For example, if you're using stochastic gradient descent (that is, you train your model one example at a time), you probably do not want to update the parameters of your model too fast (that is, you probably do not want a high learning rate), given that a single training example is unlikely to be able to give the error signal that is able to update the parameters in the appropriate direction (that is, the global or even local optimum of the loss function). However, if you're using batch gradient descent, the higher the batch size, the more likely you can use a higher learning rate. This example is only meant to give you some intuition, but this might not hold in all cases.

+

The mostly used hyper-parameter optimization methods (that I mentioned above) seem to assume that the hyper-parameters are, in general, not independent of each other. In fact, this might be a correct assumption, given that, in the real world, the independence assumption almost never holds (see, for example, the related discussions in the context of the naive Bayes classifiers).

+",2444,,2444,,1/1/2022 10:38,1/1/2022 10:38,,,,0,,,,CC BY-SA 4.0 +13456,2,,12757,7/20/2019 2:57,,-1,,"

Politics are driven by a combination of things such as culture, ideology, religion, ethics, morality, desire for money, fame, and power, etc. Although God has one truth, most people don't know this truth, a few do, and the rest either disagree about it or just don't believe in Him. The Constitution was written by man. It does not tell us what to do in every situation. It is open to interpretation. A person's interpretation of the Constitution is driven by his morality, culture, and all the things I mentioned above. AI systems are designed by people so they are driven by their biases.

+",5763,,,,,7/20/2019 2:57,,,,4,,,,CC BY-SA 4.0 +13457,1,13459,,7/20/2019 3:17,,2,206,"

I'm having some trouble understanding some parts of the usage of target networks.

+ +

I get that having the same network predict the state/action/advantage values for both the current estimate and the bootstrap target can lead to instability.

+ +

Based on my understanding, the intuition behind the 1-step TD error is that going a step into the future gives you a better estimate, which can then be used to update your original state/action/advantage value.

+ +

However, if you use a target network, which is less trained than the normal net — especially at early stages of the training — wouldn't the state/action/advantage value be updating towards an inferior estimate?

+ +

I've tried implementing DQNs and DDPGs on Cartpole, and I've found that the algorithms fail to converge when target networks are used, but work fine when those target networks are removed.

+",27240,,2444,,7/20/2019 12:12,7/20/2019 12:12,"Will the target network, which is less trained than the normal network, output inferior estimates?",,1,1,0,,,CC BY-SA 4.0 +13459,2,,13457,7/20/2019 8:35,,2,,"
+

However, if you use a target network, which is less trained than the normal net — especially at early stages of the training — wouldn't the state/action/advantage value be updating towards an inferior estimate?

+
+ +

Possibly, but a critical part of stability of TD learning with function approximation when adding a target network is that any updates will be consistent in the short term.

+ +

Consider that with a single network, the TD target calculation will also be biased (and by the same amounts on the first few passes), but the targets will not be consistent: each learning update shifts the estimates by a value that is biased. Stability problems occur when, instead of the bias decaying over time thanks to real data from each step, the bias is large enough to form a positive feedback loop.

+ +

As a concrete example, suppose an initial random network calculating $\hat{q}(s,a,\theta)$ where

+ +
    +
  • The reward is 1.0
  • +
  • The true value function $q^*(s,a)$ is 10.0 (it is not really relevant) and that would make $q^*(s',a') = 9.0$
  • +
  • The NN initially, predicts 5.0 for $\hat{q}(s,a,\theta)$ and 7.0 for $\text{max}_{a'}\hat{q}(s',a',\theta)$.

  • +
  • We also have a learning network using parameters $\theta$ that change on each learning update.

  • +
  • We have a frozen copy of the initial network using parameter $\bar{\theta}$

  • +
  • We have some learning rate $\alpha$ that for arguments sake we say reduces error in this case by 50% each time it sees this example.

  • +
  • The approximation process in the NN means that $\hat{q}(s,a,\theta)$ and $\text{max}_{a'}\hat{q}(s',a',\theta)$ are linked. This is a real feature of neural networks, but hard to model as it evolves over time. Let's say that, with the initial setup, any learning of $\hat{q}(s,a,\theta)$ in isolation will make $\text{max}_{a'}\hat{q}(s',a',\theta)$ change towards $\hat{q}(s,a,\theta)$ by a 50% step. These step sizes are not that important, they could be just 1% and the problem still occurs.

    + +
      +
    • You can control the learning rate directly, but cannot really control the ""generalisation link strength"" of the approximation.
    • +
  • +
  • To really trigger the problem I want to show, let's say that these single steps are occurring far enough away from the end of an episode, or other moderating feedback, that we effectively get a few hundred visits to our $(s,a)$ pair without impact from other items (in fact it is likely that many will be going through the same issues)

  • +
+ +

Without the target network, the first TD target is $1.0 + 7.0 = 8.0$. After the update, $\hat{q}(s,a,\theta) = 6.5$ and $\text{max}_{a'}\hat{q}(s',a',\theta) = 6.75$. This looks good, right? Getting closer to the real values . . . but keep going . . . the next updates work like this:

+ +
q(s,a)      max q(s',a')
+ 7.125       6.938
+ 7.531       7.234
+ 7.882       7.559
+ 8.221       7.889
+
+ +

Again, this looks OK? But let's come back to it after 100 [isolated, so a bit fake] iterations:

+ +
40.222      39.889
+40.556      40.222
+40.889      40.556
+41.222      40.889
+41.556      41.222
+
+ +

This has overshot, and both values are increasing exponentially. The initial neural network bias is caught in a positive feedback loop.

+ +

Now if we use the target network that always predicts 7.0, what happens is what you expect. After 100 iterations we have:

+ +
 8.0         7.5
+ 8.0         7.5
+ 8.0         7.5
+ 8.0         7.5
+ 8.0         7.5
+
+ +

These values are still incorrect, but the updates have made a more conservative and stable step towards the correct values. Note the second value is what the learning network would predict for $\text{max}_{a'}\hat{q}(s',a',\theta)$, but we have used the other prediction $\text{max}_{a'}\hat{q}(s',a',\bar{\theta})$ on each step.

+ +

In reality the feedback loops are more complex than this, because this is function approximation, so action value estimates from different $(s,a)$ pairs interact in ways that are hard to predict. But the worry is that bias causes divergence in estimates, and this does happen in practice. It is more likely to happen in environments with long episodes, or where state loops are possible.

+ +

It is also worth noting that using the frozen target network has not solved the problem, it has just significantly throttled runaway feedback. The amount of throttling required to keep learning stable will vary depending on the problem. The number of steps between target network updates is a hyperparameter. Set it too low and you risk seeing stability problems. Set it too high and learning will take longer.
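
+ +

Here is a rough simulation of the toy dynamics above (my own sketch; the 50% error-reduction and 50% ""generalisation link"" figures are the simplifications from the example, so the exact numbers differ slightly from the tables):

+ +
# Without a target network: the bootstrap estimate chases the updated network
+q_sa, q_next = 5.0, 7.0
+for _ in range(100):
+    target = 1.0 + q_next              # r + max_a' q(s', a'), with gamma = 1
+    q_sa += 0.5 * (target - q_sa)      # learning step removes 50% of the error
+    q_next += 0.5 * (q_sa - q_next)    # generalisation drags q(s', a') along
+print(q_sa, q_next)                    # both keep growing - a positive feedback loop
+
+# With a frozen target network that keeps predicting 7.0 for max_a' q(s', a')
+q_sa, q_next = 5.0, 7.0
+for _ in range(100):
+    q_sa += 0.5 * (1.0 + 7.0 - q_sa)   # target computed from the frozen copy
+    q_next += 0.5 * (q_sa - q_next)
+print(q_sa, q_next)                    # settles around 8.0 - no runaway feedback
+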

+ +
+

I've tried implementing DQNs and DDPGs on Cartpole, and I've found that the algorithms fail to converge when target networks are used, but work fine when those target networks are removed.

+
+ +

In that case, your implementations are incorrect. Definitely I have observed DQN with a target network working many times on Cartpole.

+ +

Whether or not using a target network makes convergence faster or more stable is more complex, and it may be for your network design or hyperparameter choices that adding the target network is making performance worse.

+",1847,,,,,7/20/2019 8:35,,,,0,,,,CC BY-SA 4.0 +13460,1,,,7/20/2019 9:21,,3,172,"

I've seen that most deep learning papers use tensors. I understand what tensors are, but I want to dive deeper into them, because I think that might be beneficial for further studies in Artificial Intelligence. Do you have any suggestions (e.g. books or papers) about that?

+",27315,,2444,,7/20/2019 10:48,6/26/2020 23:14,How can I learn tensors for deep learning?,,1,1,,,,CC BY-SA 4.0 +13462,2,,13454,7/20/2019 12:43,,2,,"

Answers:

+ +
    +
  1. Generally it's the former. The next layer learns, at each filter, how to merge the channels of the previous layer; that is why, in a 2D convolution, the kernel is a 3-dimensional tensor. The number of parameters at the $i^{th}$ layer is (kernel area) $\times\, c_i c_{i+1}$ (this is ignoring bias), and the feature maps themselves take up $n m c_i$ values. Let's assume all channel counts are $O(c)$; then the total spatial complexity over $k$ layers is $O(knmc)$ for the feature maps, i.e. linear in the number of layers $k$, not exponential.
  2. +
  3. a) The first convolutional layer would have a kernel of size (w,h,3,10), where w and h are the kernel sizes of the 2D convolution (often 3 in practice). So there are 10 filters of size (w,h,3), and the # of parameters is w*h*3*10 = 30wh (once again, for ease, ignoring bias). The second layer though, since it is working on an input with 10 channels, will have a kernel of size (w,h,10,10).
    b) I think you need to go back and look at what a convolution does (in your setting, specifically a 2D convolution). Each filter works on every channel of the previous layer. Each channel of the last convolutional layer is the result of a single filter that was convolved over the entire output of the previous layer.
  4. +
+",25496,,,,,7/20/2019 12:43,,,,0,,,,CC BY-SA 4.0 +13464,1,,,7/20/2019 14:06,,2,138,"

I'm trying to implement the proximal policy optimization (PPO) algorithm. I'm confused on how to make it work with continuous action space.

+ +

For a discrete action space, the output of the network is the probability of every available action, and then I choose the next action based on these probabilities. The ratio, in the objective function, is the ratio between the action probability under the new policy and the action probability under the old policy.

+ +

For continuous action space, from what I understand, the output of the network should be the action itself. How should this ratio (or the objective function itself) look like in that case?

+",26654,,2444,,7/20/2019 19:30,7/20/2019 19:30,What is ratio of the objective function in the case of continuous action spaces?,,0,0,,,,CC BY-SA 4.0 +13465,2,,13454,7/20/2019 15:34,,3,,"

tl;dr

+

It helps to think that the channels dimension of a convolutional layer works like a fully connected layer (i.e. the layer computes the weighted sum over all channels).

+

For a single pixel...

+

Let's consider a single pixel (e.g. the top left pixel). This pixel has $C$ different values, where $C$ are the number of channels. In order to produce the result of a single filter, the layer takes the weighted sum of these $C$ pixels. It does this by actually having $C$ weights, multiplying the pixel values with their corresponding weights and summing them together.

+

Example

+

Let's say you have an $n \times m \times 10$ tensor as an input to a convolutional layer with $1$ filter and a $3 \times 3$ kernel. For creating its output the layer has $3 \times 3 \times 10 = 90$ weights, i.e. a different $3 \times 3$ kernel for each of the $10$ input channels. To create its output, the layer performs the convolution operation separately on each of the input channels (each with its corresponding weight matrix) and creates this way $10$ feature maps which are summed together.

+

Now imagine the layer has $20$ filters instead of $1$. Nothing changes, just that the same procedure is done $20$ times with $20$ different sets of weights. So the total number of weights in the layer in this case is $3 \times 3 \times 10 \times 20 = 1800$.

+

To answer your questions...

+

(1) You have a grayscale image $n \times m \times 1$ and it passes through the first convolution layer (which has $10$ filters). This layer will perform the convolution operation on the input image $10$ times independently and produce $10$ feature maps, i.e. an output tensor of $n \times m \times 10$. Now this in fed into the second convolution layer, which again has $10$ filters. For each filter the layer will perform the convolution operation on the $10$ input feature maps independently and will sum the corresponding pixels together to form a $n \times m \times 1$ feature map. It will perform this procedure $10$ times and generate an output tensor of $n \times m \times 10$.

+

(2) Yes, each layer actually has $30$ filters, a total of $10$ for each of the R, G, B channels. The layer just sums the results of the R, G, B channels to produce a single feature map. For the second part, if I got it right, it's pretty much what you said, but it sums the maps instead of averaging them.
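
+

As a small numerical check (my own sketch, not part of the original answer), you can implement the operation directly and verify both the output shape and the weight count from the example above:

+
import numpy as np
+
+def conv2d(x, w):
+    # x: (H, W, C_in) input, w: (kH, kW, C_in, C_out) filters, 'valid' padding
+    H, W, C_in = x.shape
+    kH, kW, _, C_out = w.shape
+    out = np.zeros((H - kH + 1, W - kW + 1, C_out))
+    for i in range(out.shape[0]):
+        for j in range(out.shape[1]):
+            patch = x[i:i + kH, j:j + kW, :]
+            # each output channel is a weighted sum over ALL input channels
+            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
+    return out
+
+x = np.random.rand(28, 28, 10)      # e.g. the output of a previous layer with 10 filters
+w = np.random.rand(3, 3, 10, 20)    # 3*3*10*20 = 1800 weights, as in the example
+print(conv2d(x, w).shape)           # (26, 26, 20)
+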

+

Notes

+

I'd recommend checking Stanford's CS231 notes on convolution layers, which explain it in much detail and also have numerical examples to confirm if you've got it right.

+

You could also check this answer for more details.

+",26652,,-1,,6/17/2020 9:57,7/20/2019 15:34,,,,0,,,,CC BY-SA 4.0 +13466,1,,,7/20/2019 17:20,,1,60,"

Looking for a solution to my below game problem. I believe it to require some sort of reinforcement learning, dynamic programming, or probabilistic programming solution, but am unsure... This is my original problem, and is part of an initiative to create ""unique and challenging problem that you're able to conceptualize and then solve. 3 Judging criteria: uniqueness, complexity, and solution (no particular weighting and scoring may favor uniqueness/challenge over solution""

+ +

Inspirations: Conway's Game of Life, DeepMind's Starcraft Challenge, deep Q-learning, probabilistic programming

+ +

BEAR SURVIVAL

+ +

A bear is preparing for hibernation. A bear must reach life-strength 1000 in order to rest & survive the winter. A bear starts off at a health of 500. A bear explores an environment of magic berries. A bear makes a move (chosen randomly with no optional direction) and comes across a berry each time. There are 100 different types of berries that all appear across the wilderness equally and infinitely.

+ +

A magic berry always consumes 20 life from the bear upon arrival (this is not an energy cost for moving and we should not think of it as such). A bear may then choose to give more, all, or none of its remaining life to the berry. If eaten, the berry may provide back to the bear 2x the amount of life given. Berries, however, are not the same and a bear knows this. A bear knows that any berry has some percentage of being poisonous. Of the 100 different types of berries, each may be 0%-100% poisonous. A berry that is 0% poisonous is the perfect berry and a bear knows that it should commit all of its remaining life to receive max health gain. If a bear wants to eat the berry, it must commit at least 20 more health. Again, a bear does not have to eat the berry, but if it chooses not to, it walks away and does not get the original 20 back.

+ +

Example: On a bear's first move (at 500 life), it comes across a magic berry and the berry automatically takes 20 life. The bear notices that the berry is 0% poisonous, the perfect berry, and gives its remaining 480 health, eats the berry, and then receives 1000 health gain. The bear has reached its goal, hibernates, and wins the game. However, if that first berry was 100% poisonous, the anti-berry, and the bear committed all of its remaining life, it would've received back 0 health gain, died, and lost the game. A bear knows to never eat the anti-berry. It knows it can come across any poison value from 0-100 (3, 25, 52, 99, etc).

+ +

A bear must be picky & careful, but also bold & smart about how much life it wants to commit per berry, per move. A bear knows that if it never eats, it will eventually die as it loses 20 health per berry, per move.

+ +

While it's important for an individual bear to survive, it is even more important for the bear population to not go extinct. A population is going extinct if they lose over half of the population that year. Bears in a population, and their consumption of berries, are completely independent of each other.

+ +

Questions:

+ +
    +
  1. May we find a bear's optimal strategy for committing health & eating berries to reach 1000 health gain?
  2. +
  3. Is the bear population eventually doomed to an unfavorable environment?
  4. +
+ +

Bonus Complexity:

+ +

Winter is coming, and conditions grow progressively harsher over time. A bear knows that every 10 moves, each berry will consume 20 * (fib(i)/ environmentFactor). fib(i) stands for fibonacci-sequence at index i, starting at 1. For all indexes where the progression is less than 20, a berry's initial health consumption remains at 20. environmentFactor is a single environment's progressive-harshness variable (how harsh winter becomes over time). The bear population is currently in an environment with environmentFactor of 4. Spelled out:

+ +
Moves 01-10: Berries consume 20 --  20*(1/4)
+Moves 10-20: Berries consume 20 --  20*(1/4)
+Moves 20-30: Berries consume 20 --  20*(2/4)
+Moves 30-40: Berries consume 20 --  20*(3/4)
+Moves 40-50: Berries consume 25 --  20*(5/4)
+Moves 50-60: Berries consume 40 --  20*(8/4)
+Moves 60-70: Berries consume 65 --  20*(13/4)
+Moves 70-80: Berries consume 105 --  20*(21/4)
+Moves 80-90: Berries consume 170 --  20*(34/4)
+Moves 90-100: Berries consume 275 --  20*(55/4)
+Moves 100-110: Berries consume 445 --  20*(89/4)
+... and so on ...
+
+ +

Same questions as above, with a third: if this environment is proven unfavorable, and extinction unavoidable, what maximum environment/environmentFactor must the bear population move to in order to avoid extinction? (this may or may not exist if a berry's requirement of 20 initial life is always unfavorable without any progression).

+ +

Further Details:

+ +
+

QUESTION: Can you give an example of what happens when a bear eats a semi-poisonous berry (e.g. 20%)? Also, is the bear always immediately aware of the poison value of berries? In this, it also seems like you're using health, life, strength, life-strength, health-gain, etc. interchangeably. Are they all the same thing?

+ +

ANSWER:

+ +
    +
  • if the bear eats the poison berry of 20%, then it becomes a probability problem of whether or not the berry provides back life or keeps the health amount committed by the bear. Example: the bear is at 400, the next move & berry take the initial 20 (bear is at 380 now), the bear decides to commit an additional 80 (now at 300) and eat the berry. 8/10 times the berry will return to the bear 200 (2x 100 committed -- bear ends turn at 500), 2/10 times the berry returns nothing and the bear must move on with 300 life.

  • +
  • the bear is always immediately aware of the poison value of a berry.

  • +
  • life/health/strength are all the same thing.
  • +
+
+",27322,,27322,,7/20/2019 20:35,7/20/2019 20:35,"Unique game problem (ML, DP, PP etc)",,0,5,,,,CC BY-SA 4.0 +13467,2,,12957,7/20/2019 18:08,,0,,"

If you prefer to use Python (rather than Java, which is used to implement MOA, which is suggested in the other answer), you might want to have a look at the Python's creme library, whose API is described at https://creme-ml.github.io/api.html, which is a library for incremental and online learning. In particular, you might be interested in the class OneVsRestClassifier.

+",2444,,,,,7/20/2019 18:08,,,,0,,,,CC BY-SA 4.0 +13468,5,,,7/20/2019 18:30,,0,,,2444,,2444,,7/20/2019 18:30,7/20/2019 18:30,,,,0,,,,CC BY-SA 4.0 +13469,4,,,7/20/2019 18:30,,0,,"For questions related to the reinforcement learning algorithm called proximal policy optimization (PPO), which was introduced in the paper ""Proximal Policy Optimization Algorithms"" (2017) by John Schulman et al.",2444,,2444,,7/20/2019 18:30,7/20/2019 18:30,,,,0,,,,CC BY-SA 4.0 +13470,1,13471,,7/20/2019 20:10,,2,168,"

I have been reading up on CNNs. One of the different confusing things has been that people always talk of normalization layers. A common normalization layer is a ReLU layer. But I never encountered an explanation of why all of a sudden, activation functions become their own layers in CNNs, while they are only parts of a fully connected layer in MLPs.

+ +

What is the reason for having dedicated activation layers in CNNs rather than applying the activation to the output volume of a convolutional layer as part of the convolutional layer, as it is the case for dense layers in MLPs?

+ +

I guess, in the end, there is no functional difference. We could just as well have separate activation layers in MLPs rather than activation functions in their fully connected layers. But this difference in the convention is irritating still. Well, assuming it only is an artifact of the convention.

+",20150,,2444,,7/20/2019 20:13,7/20/2019 20:36,Why are activation functions independent layers in CNNs rather than part of convolutional layers?,,1,2,,,,CC BY-SA 4.0 +13471,2,,13470,7/20/2019 20:25,,1,,"

These are just two equivalent interpretations (or illustrations) of the application of an activation function. In other words, in a multi-layer perceptron (MLP), you could also illustrate the application of the activation function as a separate layer that follows a linear combination layer. However, in the context of MLPs, the math is relatively simple and elegant, so a fully-connected layer of an MLP can simply be represented as follows

+ +

$$ +\sigma \left(\mathbf{W} \mathbf{X} + \mathbf{b} \right) +$$

+ +

where $\sigma$ is some activation function and $\mathbf{W} \mathbf{X}$ is the linear combination of the inputs, $\mathbf{X}$, and the weights, $\mathbf{W}$, and $\mathbf{b}$ is a bias. You could even represent a full or complete MLP (and not just one fully-connected layer) as a composite (or nested) function.

+ +

In the context of convolutional neural networks (CNNs), people might illustrate the application of the activation function as a separate layer because the application of an activation function to the result of the convolution operation of the CNN is optional and out-of-favor (as stated in this article http://cs231n.github.io/convolutional-networks), as opposed to the case of MLPs, where activation functions usually follow the linear combination. However, note that the last layers of a CNN are usually fully-connected layers (and not convolutional or pooling layers), that is, they are a linear combination of their input and their weights followed by an application of an activation function.
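
+ +

As a small illustration (a sketch assuming TensorFlow's Keras API; both snippets compute exactly the same function), the two conventions are interchangeable in practice:

+ +
from tensorflow import keras
+
+# Activation fused into the layer, as it is usually written for dense/MLP layers
+conv_fused = keras.layers.Conv2D(32, 3, activation='relu')
+
+# Activation written as its own ""layer"", as it is often illustrated for CNNs
+conv_split = keras.Sequential([
+    keras.layers.Conv2D(32, 3),
+    keras.layers.Activation('relu'),
+])
+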

+",2444,,2444,,7/20/2019 20:36,7/20/2019 20:36,,,,1,,,,CC BY-SA 4.0 +13473,1,,,7/20/2019 21:04,,1,26,"

I was training an AI to learn things during its lifetime such as find food and navigate a maze. Behaviors that might change during its lifetime.

+ +

But I hit upon a snag. Some behaviors, like avoiding poisonous snakes, cannot be learned in a lifetime since once bitten by a snake the being is dead.

+ +

That got me thinking about how to separate out behaviors that must be given to the AI at birth (either by programming or using some evolutionary algorithm) and which behaviors to let the AI learn in its lifetime.

+ +

Also, there is the matter of when a learned behavior should be able to overrule an innate behavior (if at all).

+ +

Is there much research into this? I'm looking for some method to determine a set of innate behaviors which can't be learned.

+",4199,,2444,,7/20/2019 21:41,7/20/2019 21:41,Is there a rule-of-thumb to determine which behaviours must be learned in a lifetime and which innate?,,0,2,,,,CC BY-SA 4.0 +13474,2,,8607,7/20/2019 21:44,,1,,"

You have a small dataset. Should you even be using neural nets? Have you done any diagnostics to see if you even have enough data? Are you using the right metric? Accuracy is not always the correct metric. Which weights are you retaining? You will overfit if you save the weights that produce the lowest training error. Save the weights that produced the lowest validation error. L1, L2, and dropout are all great. So many things not described in the problem...

+ +

http://www.ultravioletanalytics.com/blog/kaggle-titanic-competition-part-ix-bias-variance-and-learning-curves

+ +

I'm wondering why you're not trying interpretable models to see if the resulting weights for the features make sense. Also, if you're comparing all those models and parameters, set your random initial starting point to be the same by setting the seed. I also hope you are using the same training set for each model.

+ +

You probably need more data...

+",27326,,,,,7/20/2019 21:44,,,,0,,,,CC BY-SA 4.0 +13476,1,,,7/21/2019 1:59,,1,10702,"

I am detecting wheels with a deep learning algorithm. The algorithm gives me the coordinates of those rectangles. I want to keep data that is in the rectangles of the image. I created rectangles as a mask of the area I want to keep.

+ +

Here is the output of my system

+ +

I read my image

+ +
im = cv2.imread(filename)
+
+ +

I created the rectangles with:

+ +
height,width,depth = im.shape
+cv2.rectangle(img,(384,0),(510,128),(0,255,0),3)
+cv2.rectangle(rectangle,(width/2,height/2),200,1,thickness=-1)
+
+ +

How can I mask out the data outside of the rectangles from the original image, and keep only those rectangles?

+ +

Edited: I wrote this code and it only gives me one wheel. How can I have multiple masks and get all the wheels?

+ +
  mask = np.zeros(shape=frame.shape, dtype=""uint8"")
+
+# Draw a bounding box.
+# Draw a white, filled rectangle on the mask image
+cv.rectangle(img=mask,
+             pt1=(left, top), pt2=(right, bottom),
+             color=(255, 255, 255),
+             thickness=-1)
+
+
+# Apply the mask and display the result
+maskedImg = cv.bitwise_and(src1=frame, src2=mask)
+
+cv.namedWindow(winname=""masked image"", flags=cv.WINDOW_NORMAL)
+cv.imshow(winname=""masked image"", mat=maskedImg)
+
+",20025,,20025,,7/22/2019 16:36,7/22/2019 16:36,How to Mask an image using Numpy/OpenCV?,,1,0,,6/2/2020 22:38,,CC BY-SA 4.0 +13477,2,,10500,7/21/2019 3:09,,1,,"

Yes, you can, provided you know $f$ and $g$. The expression $X3 = f(X1, g(X1))$ can be written as $X3 = h(X1)$, where $h$ takes into account both $f$ and $g$. After this, finding the PDF is simply a matter of differentiating the CDF: $$ F_{X3} (x3) = P(X3 \leq x3) = P(h(X1) \leq x3) = P(X1 \leq h^{-1}(x3))$$

+ +

$$ \frac {d F_{X3} (x3)}{dx3} = \frac {d P(X1 \leq h^{-1}(x3))}{dx3} = f_{X3}(x3)$$

+ +

NOTE: The conventions followed are the same as used in the field of Probability

+ +

(Take care of the function inversion step in non-monotonic cases)

+ +

Check these lectures.

+",,user9947,,user9947,7/21/2019 3:19,7/21/2019 3:19,,,,0,,,,CC BY-SA 4.0 +13478,1,,,7/21/2019 7:14,,1,241,"

I've built an A2C model whose actor network has two different kinds of discrete actions, so the critic takes the state and the two actions (note that the critic takes 2 actions because, in each timestep, we perform two kinds of actions) to predict the advantage for each of these two kinds of actions. However, my problem arises when the critic network is about to train itself with the discounted rewards. It only has the reward of each timestep, so it cannot determine which of the two kinds of actions contributed more to this reward, so both of its outputs (the advantage for the action of kind 1 and the advantage for the action of kind 2) will be changed in the same direction, and the output of the critic will be biased towards its initialization values. How can I solve this problem, that is, how can I distinguish between the contribution of each of these two kinds of actions to the outcome?

+ +

An example of my problem:
+Consider we have 10 cubes and 3 boxes. In each time step, we have to choose one of the 10 cubes and one of the 3 boxes, to place the selected cube in the selected box. So we have 2 kinds of actions here: one to pick a cube and the second to put it in one of the boxes. Each box has a capacity of only 4 cubes, and one of the cubes is so big that it doesn't fit in any of the boxes. The reward of each time step is the negative of the number of cubes that are not placed in a box, so, because of the bigger cube, the agent can never get a reward of 0. Consider a scenario where a box already contains 4 cubes and we choose that box to place the chosen cube (one of the small cubes) in: we can't, and time proceeds. Another scenario is when we choose the bigger cube: no matter which box we choose, we cannot place it, and time proceeds. How can the agent distinguish which of these two kinds of actions contributed more to the reward?

+",27329,,27329,,7/21/2019 9:38,7/21/2019 9:38,Reward problem in A2C with multiple simultaneous discrete actions,,0,0,0,,,CC BY-SA 4.0 +13481,2,,13476,7/21/2019 9:23,,1,,"

You can build a black single-channel image of the same size, draw a white, filled rectangle on it, and pass it as a mask to cv2.bitwise_and.

+ +
import cv2
+import numpy as np
+
+im = cv2.imread(filename)
+height, width, depth = im.shape
+
+# Black single-channel mask with a white, filled rectangle over the region to keep
+rectangle = np.zeros((height, width), dtype=np.uint8)
+cv2.rectangle(rectangle, (384, 0), (510, 128), 255, thickness=-1)
+
+# Keep only the pixels inside the rectangle
+masked_data = cv2.bitwise_and(im, im, mask=rectangle)
+
+cv2.imshow('masked_data', masked_data)
+cv2.waitKey(0)
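
+ +

For the multiple-wheel case mentioned in the edit (a sketch with placeholder box coordinates, not the real detections), draw every detected box on the same mask before applying it:

+ +
boxes = [(384, 0, 510, 128), (60, 0, 180, 128)]   # (left, top, right, bottom) per wheel
+mask = np.zeros((height, width), dtype=np.uint8)
+for left, top, right, bottom in boxes:
+    cv2.rectangle(mask, (left, top), (right, bottom), 255, thickness=-1)
+masked_data = cv2.bitwise_and(im, im, mask=mask)
+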
+
+",27313,,,,,7/21/2019 9:23,,,,0,,,,CC BY-SA 4.0 +13482,1,,,7/21/2019 9:42,,1,76,"

I'm trying to use gradient boosting and I'm using sklearn's GradientBoostingClassifier class.

+ +

My problem is that I have a data frame with 5 columns and I want to use these columns as features, one after another. I mean I want each tree-based classifier to use the residual of the previous tree, where each tree is based on the previous feature. As far as I know, by default, this classifier uses a feature and passes the residual of the previous tree to the next tree, where both are based on a single feature. How can I do this?

+ +

Should I do this on my own, or is there a library which does this?

+",27332,,27332,,7/22/2019 12:54,7/22/2019 12:54,How can I use gradient boosting with multiple features?,,0,0,,,,CC BY-SA 4.0 +13483,5,,,7/21/2019 11:44,,0,,,2444,,2444,,7/21/2019 11:44,7/21/2019 11:44,,,,0,,,,CC BY-SA 4.0 +13484,4,,,7/21/2019 11:44,,0,,"For questions related to gradient boosting, which is a machine learning technique that can be used for regression and classification problems and which produces a prediction model in the form of an ensemble of other smaller prediction models (typically decision trees).",2444,,2444,,7/21/2019 11:44,7/21/2019 11:44,,,,0,,,,CC BY-SA 4.0 +13485,5,,,7/21/2019 11:45,,0,,,2444,,2444,,7/21/2019 11:45,7/21/2019 11:45,,,,0,,,,CC BY-SA 4.0 +13486,4,,,7/21/2019 11:45,,0,,For questions related to the Python's package scikit-learn (or sklearn).,2444,,2444,,7/21/2019 11:45,7/21/2019 11:45,,,,0,,,,CC BY-SA 4.0 +13488,2,,13376,7/21/2019 13:51,,3,,"

The documentary Plug & Pray (2010), directed and written by Jens Schanze, with main protagonists Joseph Weizenbaum (the creator of ELIZA) and the futurist Raymond Kurzweil, is about the promise, problems and ethics of artificial intelligence and robotics. This documentary won several awards, including the Bavarian Film Award 2010 for ""best documentary"". Here's the official trailer of the movie.

+",2444,,,,,7/21/2019 13:51,,,,0,,,,CC BY-SA 4.0 +13489,1,,,7/21/2019 14:22,,1,158,"

I need a neural network (or any other solution) to predict 3 values whose sum equals a fixed number (100). This will help me calculate proportions. What is the most efficient way to do this?

+ +

The training data only contains extreme situations, where each row contains one and only one output value set to 100. The data to predict is expected to contain more nuance in the output values. All my attempts lead to very low accuracy, as the predicted output sum is almost never 100. Even when I try to normalize the predicted output, the predictions show very poor accuracy.

+ +

Should I try to organize the data with 2 angles instead and deduct the 3rd angle as the remainder in a circle? How to normalize those 2 angles and how to make sure their sum will not exceed the maximum value making the 3rd angle negative?

+ +

Illustration of learn data extract (4 input columns and 3 output columns).

+ +
0    1    2    3    100  0    0
+4    5    6    7    0    100  0
+8    9    0    1    0    0    100
+
+ +

Illustration of desired output predictions where each line sums as 100:

+ +
7    83   10
+39   12   49
+68   24   8
+28   72   0
+86   6    8
+32   49   19
+0    0    100
+
+",19852,,19852,,7/27/2019 16:33,7/27/2019 16:33,"How can an ANN efficiently predict multiple numbers with fixed sum (in other words, proportions)?",,0,8,,,,CC BY-SA 4.0 +13494,1,,,7/22/2019 2:38,,2,16,"

I've been reading about Mixture of Expert models, and I've noticed that there is very little new work being produced in this subfield. Has there been a better method discovered? Why aren't more people doing stuff in this area?

+",27240,,,,,7/22/2019 2:38,Current state of MoE models,,0,0,,,,CC BY-SA 4.0 +13495,1,13519,,7/22/2019 3:23,,2,74,"

Consider AlexNet, which has 1000 output nodes, each of which classifies an image:

+ +

+ +

The problem I have been having with training a neural network of similar proportions is that it does what any reasonable network would do: it finds the easiest way to reduce the error, which happens to be setting all output nodes to 0, since, for the vast majority of the examples, that's what they should be. I don't understand how a network where, 999 times out of 1000, a node's output should be 0 could possibly learn to make that node 1.

+ +

But obviously, it's possible, as AlexNet did very well in the 2012 ImageNet challenge. So I wanted to know, how would one train a neural network (specifically a CNN) when for the majority of the inputs the desired value for an output node is 0?

+",26726,,2444,,6/12/2020 23:52,6/12/2020 23:52,How is a neural network where the majority of inputs are 0 trained?,,1,0,,,,CC BY-SA 4.0 +13497,2,,13443,7/22/2019 3:57,,2,,"

Simon Krannig's answer provides the math notation behind exactly what is going on, but since you still seem a bit confused, I've made a visual representation of a neural network using only weights and no activation function. See below:

+ +

So I'm fairly sure it is as you suspected: at each neuron, you take the sum of the inputs of the previous layer multiplied by the weights that connect those specific inputs to said neuron, where each input has its own unique weight for every one of its outgoing connections.

+ +

With a bias, you would do the exact same math as shown in the above image, but once you find the final value (0.2, -0.15, 0.16 and -0.075, the output layer doesn't have a bias) you would add the bias to the total value. So see below for an example including a bias:

+ +

+ +

NOTE I did not update the outputs at each layer to include the bias because I can't be bothered redrawing this in paint. Just know that the final value for all the nodes with the brown bias haven't carried over to the next layer.

+ +

Then, if you were to include an activation function, you would finally take your value and put it through that function. So, including the biases, looking at node 1 of layer 2, it would be (let's pretend your activation function is a sigmoid):

+ +
sigmoid((0.4*0.5)+0.2)
+
+ +

and for layer 3 node 2:

+ +
sigmoid(((0.6*0.2)+(1.3*-0.15))-0.4)
+
+ +

That is how you would do a forward pass of a simple neural network.
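
+ +

As a runnable version of the two example calculations above (the values come from the figures; the sigmoid definition itself is standard and not from the original answer):

+ +
import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+# layer 2, node 1: input 0.4, weight 0.5, bias 0.2
+print(sigmoid(0.4 * 0.5 + 0.2))
+
+# layer 3, node 2: inputs 0.6 and 1.3, weights 0.2 and -0.15, bias -0.4
+print(sigmoid(0.6 * 0.2 + 1.3 * -0.15 - 0.4))
+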

+",26726,,,,,7/22/2019 3:57,,,,0,,,,CC BY-SA 4.0 +13498,1,,,7/22/2019 5:14,,2,62,"

I need some advice on what AI methods would be suited to the identification of a recipient of a document, where the format of the documents may vary.

+",21695,,1671,,7/24/2019 16:47,7/24/2019 16:47,Machine learning methods to identify the recipient of a document?,,0,3,,,,CC BY-SA 4.0 +13499,1,,,7/22/2019 6:03,,4,113,"

Given that recurrent neural networks are equivalent to a Turing machine, then why isn't the evolutionary Turing machine, e.g. described in the paper Evolution of evolution: Self-constructing Evolutionary Turing Machine case study (2007), mainstream?

+",23500,,2444,,12/12/2021 9:18,12/12/2021 9:18,Why isn't the evolutionary Turing machine mainstream?,,0,0,,,,CC BY-SA 4.0 +13500,2,,11534,7/22/2019 6:11,,2,,"

I’ll have a stab at this.

+ +

Cognitive performance in narrow domains is determined by competency, efficiency and speed. Take calculating numbers: it is an extremely narrow domain, but the ability of a calculator to calculate numbers exceeds normal human performance; it is far more competent in terms of speed. In a somewhat broader domain, AlphaGo has defeated Go players; Go is more difficult than chess and requires intuition. In fact, there is an instance where AlphaGo made a long-term move that was previously unimagined. In all domains, however, humans are well rounded, which is why human intelligence is called general intelligence. AlphaGo or a calculator cannot speak eloquently or make music, but AIs are gaining pace in these areas too.

+ +

I agree with @nbro that Bostrom wants to keep the interpretation of Superintelligence open. But if there is a rough category, these are-

+ +
    +
  1. ANI- Artificial Narrow Intelligence
  2. +
  3. AGI- Artificial General Intelligence: Where the AI’s performance is at par with humans. After AGI, it quickly takes off to SI.
  4. +
  5. SI- Superintelligence: Superintelligence is beyond our imagination, we have not figured out yet what will a SI do, think or want.
  6. +
+ +

While these categories are discrete, the functions of strength are not. I’d say they are rather discrete to continuous, because if you look at the computing power plots that follow Moore’s Law, a similar exponential graph can be drawn for AI’s performance towards general intelligence. In that graph, it seems the AI’s performance starts with discrete performance points, and then as it takes off, it becomes continuous.

+ +

This is why the term Singularity is often associated with Superintelligence. I hope this answers your question.

+",27299,,,,,7/22/2019 6:11,,,,0,,,,CC BY-SA 4.0 +13502,1,,,7/22/2019 10:48,,0,226,"

Let's say we have a task where the cost depends entirely on the path length to a terminal state, so the goal of an agent would be to take actions to reach terminal state as quickly as possible.

+ +

Now let us say, we know the optimal path length is of length $10$, and there are $n$ such paths possible. Each state has 5 possible actions. Let's say the scheme we are using to find optimal policy is On-Policy MC/TD(n) along with GLIE Policy improvement (Generalised Policy Iteration).

+ +

In the first policy iteration step, each action is equally likely, therefore the probability of sampling one of the optimal paths (i.e. the agent discovering such a path) in an episode is $n \cdot \frac {1}{5^{10}} \approx n \cdot \frac {1}{10^{7}}$. So, according to probability theory, we need to sample around $5^{10}/n \approx 10^{7}/n$ episodes to discover at least one of the best paths (worst-case scenario).

+ +

Since it is not practical to go through such a huge number of samples, let's say we never sample the optimal path. Then, in the next policy iteration step (after GLIE policy improvement), some other sub-optimal path will have a higher probability of being sampled than the optimal path, hence the probability of discovering the optimal path falls even lower. In this way, there is a considerably high probability that we never find the best path at all, yet the theory says we will find $\pi^*$, which corresponds to the best path.

+ +

So what is wrong in my reasoning here?

+",,user9947,,,,7/29/2019 16:00,Why is On-Policy MC/TD Algorithm guaranteed to converge to optimal policy?,,1,0,,,,CC BY-SA 4.0 +13503,1,,,7/22/2019 11:27,,1,14,"

I have the following problem:

+ +

I get a 360° RGB image of a room.

+ +
    +
  • I've the 3D model of this room, hence, I can generate a 3D nominal mask of the room (1-wall, 2-ceiling, 3-floor, 4-door, etc..) in a specific location (x0,y0,z0,roll0,pitch0,yaw0)
  • +
  • I can also generate the depth map of this location.
  • +
+ +

My model has to predict whether the generated mask+depth match with the RGB frame, and if not what are the dx, dy, d_yaw of the offset.

+ +

I've implemented a pix2pix-style discriminator that receives a concatenated tensor of [RGB, one-hot mask, depth] and yields (dx, dy, d_yaw). Obviously, if there is a perfect match, dx = dy = d_yaw = 0.

+ +

Unfortunately, my model isn't converging. I've tried everything, and it's a reasonable ""request"" from this model, since a human can roughly guess this offset by looking at the images.

+ +

what would you suggest?

+",25412,,,,,7/22/2019 11:27,Estimating camera's offset to its true position,,0,0,,,,CC BY-SA 4.0 +13504,1,,,7/22/2019 12:36,,2,31,"

Say I have some data I am trying to learn, and I'm aware that the output is quantised in some way, e.g. I can only get discrete values (0.1, 0.2, 0.3, ..., 0.9) in a finite range.

+ +

Would you treat that as regression or classification? In this case the numbers do have a relation to each other e.g. 0.3 is close to 0.4 in meaning.

+ +

I could treat it as classification with a softmax final layer with N outputs, or treat it as regression with a linear layer with a single output and then somehow quantise the result post-prediction. But my gut feeling is that the fact that there is a finite number of answers should somehow be used in my model.

+",18372,,2444,,7/23/2019 20:23,7/23/2019 20:23,Should I model a problem with quantised output as classification or regression?,,1,0,,,,CC BY-SA 4.0 +13505,2,,11055,7/22/2019 13:47,,3,,"

One way I can think of is to redefine ""actions"" in a game to make them more fragmented, in such a way that a player has multiple actions per turn. In chess, for example, we can define an action as choosing a tile from which to move, or choosing the motion from the chosen tile, as 2 separate actions.

+ +

As an example a turn might consist of the following two actions:

+ +
    +
  1. Choose E4
  2. +
  3. Move forward 2 spaces
  4. +
+ +

That way there are 64 + 73, rather than 64 * 73 possible actions. The transition model would indicate that it's still the same player's turn after a ""tile selection"" action is done.

+ +

Of course, this would require increasing the state space in such a way that you can determine which action is legal. So there's a difference between a board state where nothing is ""selected"" and the same board state where one tile is selected by one player. In the chess example, this would require 2 more boolean CNN layers, one for each player, indicating which tile (if any) is ""selected"".

+ +
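
To make the idea concrete, here is a rough Python sketch (just an illustration, not something from the AlphaZero paper) of how a turn could be split into two environment steps, with the ""selected"" tile stored in the state; legality checks are omitted:

+ +

# Hypothetical sketch: one chess turn split into two "actions".
class TwoStepState:
    def __init__(self):
        self.selected = None  # which tile is currently selected, if any

    def legal_actions(self):
        if self.selected is None:
            # Phase 1: choose one of 64 tiles to move from.
            return [('select', sq) for sq in range(64)]
        # Phase 2: choose one of 73 move types from the selected tile.
        return [('move', m) for m in range(73)]

    def step(self, action):
        kind, value = action
        if kind == 'select':
            self.selected = value      # still the same player's turn
            return 'same_player'
        self.selected = None           # a real move was made
        return 'next_player'


state = TwoStepState()
print(len(state.legal_actions()))      # 64 select actions
state.step(('select', 12))
print(len(state.legal_actions()))      # 73 move actions
+

+ +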

I never tried this myself, and I imagine that this might make the learning slower and more difficult, since it requires a deeper tree in the MCTS for the same set of actions.

+",27354,,27354,,7/22/2019 14:25,7/22/2019 14:25,,,,0,,,,CC BY-SA 4.0 +13506,2,,13504,7/22/2019 15:10,,1,,"

So this is considered Ordinal Regression. There are many ways to model this type of data, generally in some form of regression setting. I do not recommend the softmax route because as you mentioned, there exists prebuilt correlation to the outputs.

+ +

Some common ways to approach this (note that I'm assuming you're looking for methodologies that can be optimized through gradient techniques, because of the way you formulated your question):

+ +
    +
  1. Treat it as a normal regressor where you clip the output to your range, and then define thresholds arbitrarily, such as $.36 \rightarrow .4$, $.34 \rightarrow .3$, etc..
  2. +
  3. Use a bounded activation function, then scale (if your outputs are in [0, 1] with step 0.1, you could use a sigmoid and scale by a factor of 1, but if they are in [0, 10] with step 1, you could use a sigmoid and scale by 10). Once again, you'll need to create arbitrary thresholds between each 2 points for inference (this can also be incorporated into your loss)
  4. +
  5. Use the above two methods, but learn the thresholds between each 2 ordered points: have an output layer that learns the ideal thresholds for inference
  6. +
+ +
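
As a rough illustration of the second approach above, a minimal Keras sketch (the layer sizes and number of input features are arbitrary, and the allowed levels are assumed to be 0.1, ..., 0.9 as in the question):

+ +

# Minimal sketch of approach 2: bounded (sigmoid) regression head,
# then rounding to the nearest allowed value at inference time.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(8,)),  # 8 input features (arbitrary)
    keras.layers.Dense(1, activation='sigmoid'),                  # output bounded to (0, 1)
])
model.compile(optimizer='adam', loss='mse')

# ... model.fit(X_train, y_train, ...) ...

def predict_quantised(model, x, levels=np.arange(0.1, 1.0, 0.1)):
    # Map the continuous prediction to the nearest allowed level.
    y = model.predict(x)
    idx = np.argmin(np.abs(y[..., None] - levels), axis=-1)
    return levels[idx]
+

+ +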

And if you didn't want to use neural networks, each of these approaches has a Bayesian analog that works as well!

+",25496,,,,,7/22/2019 15:10,,,,0,,,,CC BY-SA 4.0 +13507,1,,,7/22/2019 16:07,,2,194,"

I have a question about combining a CNN and an LSTM. I have trained a CNN for image classification. However, I would like to combine it with an LSTM for visualizing the attention weights. So, I extracted the features from the CNN to feed into the LSTM. However, I am stuck on the concept of combining the CNN with the LSTM.

+ +

- Do I need to train the whole network again, or is training just the LSTM part fine?
- Can I just train the LSTM on image sequences based on classes (e.g. 1 class has around 300 images) and do predictions later on extracted video frames?
- In what way can I implement the attention mechanism with Keras?

+ +

I hope you can help me, as I am struggling to understand this combination.

+ +

~ EDITED ~

+ +

I have trained a ResNet50 to classify images. Then I removed the last dense layer to extract features from the trained CNN. Those extracted features will be used as input to the newly created LSTM with an attention mechanism, to find out where the focus lies. The predictions will be on videos (extracted frames).

+ +

Image -> extract features (CNN) -> LSTM + Attention (to check where the focus lies during the prediction) -> classify image (output class from N labels)

+",27360,,27360,,7/22/2019 23:29,7/22/2019 23:29,Understanding CNN+LSTM concept with attention and need help,,0,4,,,,CC BY-SA 4.0 +13508,1,,,7/22/2019 17:05,,3,389,"

I am looking for a rigorous mathematical proof for finding the several local minima of the Hopfield networks. I am searching for something rigorous, a demonstration, not just letting the network keep updating its neurons and wait for noticing a stable state of the network.

+

I have looked virtually everywhere, but I found nothing.

+

Is there a rigorous proof for Hopfield minima? Could you give me ideas or references?

+",26719,,2444,,1/23/2022 11:00,1/23/2022 11:00,Is there a rigorous proof for finding Hopfield minima?,,1,0,,,,CC BY-SA 4.0 +13510,1,,,7/22/2019 17:37,,1,73,"

Is a Hopfield network more efficient than a naive Hamming-distance comparator that compares an input pattern against the stored patterns and returns the nearest one?

+",26719,,,,,7/22/2019 17:37,Is Hopfield network more efficient than a naive implementation of Hamming distance comparator?,,0,3,,,,CC BY-SA 4.0 +13512,2,,13508,7/22/2019 19:35,,3,,"

See the paper On the Convergence Properties of the Hopfield Model (1990), by Jehoshua Bruck.

+ +

In the first section of the paper, J. Bruck describes the Hopfield network (popularized by J. J. Hopfield in 1982 in his paper Neural networks and physical systems with emergent collective computational abilities, hence the name of the network), then he describes the notation that is used throughout the paper and he gives some examples where a simple Hopfield network (with two nodes) converges to stable states (of the network) and cycles (of which the author also gives a definition).

+ +

The usual proofs of the convergence properties of Hopfield networks involve the concept of an energy function, but, in this paper, J. Bruck uses an approach (based on an equivalent formulation of the Hopfield network as an undirected graph) that does not involve an energy function, and he unifies three apparently different convergence properties related to Hopfield networks (described in part $C$ of the section Introduction). More specifically, finding the global maximum of the energy function associated with the Hopfield network operating in a serial mode (which is defined in part $A$ of the Introduction section of the paper) is equivalent to find a minimum cut in the undirected graph associated with this Hopfield network.

+ +

Furthermore, note that the proofs of convergence of the Hopfield networks actually depend on the structure of the network (more specifically, its weight matrix). For example, if the weight matrix $W \in \mathbb{R}^{n \times n}$ (where $n$ is the number of nodes in the network) associated with the Hopfield network is a symmetric matrix with the elements of the diagonal being non-negative, then the network will always converge to a stable state.

+ +
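
As a small illustration of why such a weight matrix guarantees convergence (this is only a numerical sketch, not a proof), one can check that the usual energy $E(x) = -\frac{1}{2} x^\top W x$ never increases under asynchronous (serial-mode) updates:

+ +

# Numerical sketch: asynchronous updates of a Hopfield network with a
# symmetric weight matrix (zero diagonal) never increase the energy.
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.standard_normal((n, n))
W = (W + W.T) / 2            # symmetric
np.fill_diagonal(W, 0.0)     # non-negative (here zero) diagonal

x = rng.choice([-1.0, 1.0], size=n)
energy = lambda s: -0.5 * s @ W @ s

prev = energy(x)
for _ in range(200):
    i = rng.integers(n)                   # pick one unit (serial mode)
    x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    assert energy(x) <= prev + 1e-12      # energy is non-increasing
    prev = energy(x)
print('final energy:', prev)
+

+ +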

See also the chapter 13 The Hopfield Model of the book Neural Networks - A Systematic Introduction (1996) by Raul Rojas.

+",2444,,2444,,7/22/2019 19:52,7/22/2019 19:52,,,,8,,,,CC BY-SA 4.0 +13513,5,,,7/22/2019 19:41,,0,,,2444,,2444,,7/22/2019 19:41,7/22/2019 19:41,,,,0,,,,CC BY-SA 4.0 +13514,4,,,7/22/2019 19:41,,0,,"For questions related to the Hopfield network, popularized by J. J. Hopfield in the paper ""Neural networks and physical systems with emergent collective computational abilities"" (1982).",2444,,2444,,7/22/2019 19:41,7/22/2019 19:41,,,,0,,,,CC BY-SA 4.0 +13515,1,13517,,7/22/2019 20:47,,6,494,"

I understand that neural nets are fundamentally interpolative tools. Meaning, given a training dataset, a well trained neural net can approximate values within the domain of the training dataset. However, we are unsure about their behavior once we test against values outside that domain.

+ +

Speaking in the context of ImageNet, a NN trained on one of the classes in ImageNet will probably be able to predict an image of the same class outside ImageNet, because ImageNet itself covers such a huge domain for each class that, whatever image we come across in the wild, its features will be accounted for by ImageNet.

+ +

Now, this intuition breaks down for me when I talk about simple functions with simple inputs. For example, consider $sin(x)$. Our goal is to train a neural net to predict the function given $x$ with a training domain $[-1, 1]$. Theoretically, the neural net should not be able to predict the values well outside that domain, right? This seems counterintuitive to me because the function behaves in a very simple and periodic way that I find it hard to believe that a neural net cannot figure out the proper transformation of that function even outside the training domain.

+ +

In short, are neural nets inherently unable to find a generalizable transformation outside the training domain no matter how simple is the function we are trying to approximate? Is this a property of the Deep Learning framework?

+ +

Are there problems where researchers were able to learn a robust generalizable transformation using neural nets outside the training domain? What are the possible conditions so that such results can happen?

+",17582,,2444,,7/22/2019 22:41,7/22/2019 23:00,Why can't neural networks learn functions outside of the specified domains?,,1,0,,,,CC BY-SA 4.0 +13516,1,,,7/22/2019 22:35,,1,21,"

I noticed what I considered a close resemblance of a woman B to another woman A, which led me to a close relative of A, labelled C, to whom B bore an even stronger resemblance than she did to A.

+ +

However I feel that if I had compared A to C directly I wouldn't have detected the blood relationship so strongly.

+ +

I am just wondering whether there is some mathematical underpinning to my perception and strong intuition that B bore a strong resemblance to A.

+",27370,,,,,7/22/2019 22:35,Are there some formulae in facial recognition that are indicators of close kinship?,,0,3,,,,CC BY-SA 4.0 +13517,2,,13515,7/22/2019 23:00,,8,,"

The problem you discuss extends past the machine to the person behind the machine. ML can be broken down into 3 components: the model, the data, and the learning procedure. This, by the way, extends to us as well: the model is our brain, the data is our experience and sensory input, and the learning procedure is there but unknown (for now $<$insert evil laugh$>$).

+ +

Your model's inability to understand that it is a sine function is normally by construction. First, though, let's start with the fact that if someone showed me $sin(x)$ on $[-1,1]$, a sinusoid is the last thing I'd think of:
+
+ It looks almost linear. So, for the sake of argument, I'm going to continue on the assumption you meant an entire period ($[-\pi, \pi]$).

+ +

Now, given this, most people who ever got past pre-calculus would assume a sinusoid, as you'd expect, but let me show you how unstable that idea is. Let's use a full period, shifted by $\frac{\pi}{2}$:
+
Now, I don't know about you, but my spidey-sense would make me think of a Gaussian before anything. This is the same sinusoid over an entire period, but just that slight difference changes the entire perspective.

+ +

Now let's talk about some of the mathematics. Given the uncountably many $(x, sin(x))$ pairs on $[-1, 1]$, we can recover the function $sin(x)$ with certainty (assuming it is infinitely differentiable) by simply taking the Taylor series around any of the points in that domain. More so, we only need an infinitesimally small continuous subset of the function to achieve that result, but as a person, would you be able to tell from $[-.0001, .0001]$? (I don't think so!) Because by default we quantize, and what we're going to give to the computer is a quantization as well. We are giving it a finite, countable set of points and expect it to generalize to an uncountable set. This is silly, since there exists an infinite set of functions that can fit those exact points (the concept of overfitting steps in here).

+ +

So if it's so infeasible for a computer to correctly recover the function, why can we humans extrapolate it so well from the $[-\pi, \pi]$ domain? The answer is our preset biases. Growing up, we have dealt so much with periodicity, specifically sines and cosines, that a graphed period triggers something in our brains and we just know what it is. But this isn't always a benefit: I could draw you a low-order polynomial on that domain that would make you think that exact same thought, making you wrong.

+ +

So, going back to my initial point, the problem lies with the model's creator and not the model. Like I explained just now, it's infeasible without preset biases to learn the function we want, so what should we do? Give up? NO, it's our goal as the model architect to figure out what we want and how we can best achieve it! So if we want it to accomplish this task, let's try our best at modeling these biases. A quick example of how you could do that is to force the function to fit Fourier coefficients; that'll probably solve this quite quickly (of course, this limits this model's ability to solve for other common functions).

+ +
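
For instance, here is a minimal sketch of that bias-injection idea (the parametrisation is just one example): instead of a generic network, fit a model of the form $a\sin(\omega x + \phi) + c$, which extrapolates trivially once fitted on $[-\pi, \pi]$:

+ +

# Sketch: bake the periodicity bias into the model itself.
import numpy as np
from scipy.optimize import curve_fit

def biased_model(x, a, w, phi, c):
    return a * np.sin(w * x + phi) + c

x_train = np.linspace(-np.pi, np.pi, 200)
y_train = np.sin(x_train)

params, _ = curve_fit(biased_model, x_train, y_train, p0=[1.0, 1.0, 0.0, 0.0])

x_test = np.linspace(10.0, 20.0, 200)           # far outside the training domain
err = np.max(np.abs(biased_model(x_test, *params) - np.sin(x_test)))
print('max extrapolation error:', err)           # essentially zero
+

+ +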

Takeaway: It's not the model that's struggling, it's us using preset biases that the model does not have access to. The solution to this is to be clever and figure out how to have it either start with these preset biases or learn them.

+",25496,,,,,7/22/2019 23:00,,,,4,,,,CC BY-SA 4.0 +13518,1,,,7/22/2019 23:54,,1,25,"

I want to learn how a set of operations (my vocabulary) are composed in a dataset of algorithms (corpus).

+ +

The algorithms are a sequence of higher level operations which have varying low-level implementations. I am able to map raw code to my vocabulary, but not all of it.

+ +

e.g. I observe a lossy description of an algorithm that does something:

+ +

X: missing data
Algo 1: BIND3 EXTEND2 X X ROTATE360 X PUSH
Algo 2: X X EXTEND2 ROTATE360

+ +

The underlying rotate operation could have very different raw code, but effectively the same function and so it gets mapped to the same operation.

+ +

I want to infer what the next operation will be given a sequence of (potentially missing) operations (regions of code I could not map).

+ +

i.e. I want a probability distribution over my operations vocabulary.

+ +

Any ideas on the best approach here? The standard thing seems to be to throw out missing data, but I can still learn in these scenarios. Also, the gaps in the code are non-homogeneous: some could do many things. The alternative is to contract the sequences and lose the meaning of the gaps, or to learn an imputation.

+",12849,,12849,,7/23/2019 0:12,7/23/2019 0:12,Language Model from missing data,,0,0,,,,CC BY-SA 4.0 +13519,2,,13495,7/23/2019 5:12,,1,,"

It's the loss function.

+ +

I was using sum-of-squares error, which I didn't think would have as negative an effect as it does, and I had to come to the explanation in my own time. Here's why:

+ +

From the perspective of the loss function, 999 times out of 1000, the output should be 0, so there will be an inherent massive bias towards 0 for all the output nodes. But this only occurs if the output nodes are actually trained when their desired outputs are 0, which is what happens in the case of the squared sum/mean error. However, in the case of cross-entropy loss, which is explained excellently here, you can see that the only node that receives gradients is the node that should be trained towards 1. This removes the massive bias towards 0, and punishes confident false positives, making it perfect for a classification problem.

+ +
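
As a quick illustration (a minimal Keras sketch, not your exact network, with arbitrary layer sizes), the fix amounts to using a softmax output with categorical cross-entropy instead of a squared-error loss:

+ +

# Sketch: classification head with cross-entropy instead of squared error.
from tensorflow import keras

num_classes = 1000  # e.g. 1 "true" class out of 1000

model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(64,)),  # arbitrary sizes
    keras.layers.Dense(num_classes, activation='softmax'),
])

# Instead of:  model.compile(optimizer='adam', loss='mse')
model.compile(optimizer='adam', loss='categorical_crossentropy')
+

+ +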

As to how you would achieve something like this for regression I do not know, but at least this solves the issue for classification problems.

+",26726,,,,,7/23/2019 5:12,,,,0,,,,CC BY-SA 4.0 +13521,2,,3989,7/23/2019 8:55,,0,,"

I am very thankful to the people who responded with hints and suggestions. However, I think what seems most applicable for my case is Gen - A general-purpose probabilistic programming system with programmable inference from MIT described in the paper ""Gen: A General-Purpose Probabilistic Programming System with Programmable Inference"" by M. F. Cusumano-Towner et al.

+ +

In case you are looking for something along these lines it looks like a very good start for an application in probabilistic programming.

+",8788,,,,,7/23/2019 8:55,,,,0,,,,CC BY-SA 4.0 +13526,1,13528,,7/23/2019 13:54,,4,864,"

Why do people use the $PReLU$ activation?

+ +

$PReLU[x] = ReLU[x] + ReLU[p*x]$

+ +

with the parameter $p$ typically being a small negative number.

+ +

If a fully connected layer is followed by a ReLU layer with at least two elements, then the combined layers together are capable of emulating exactly the PReLU, so why is it necessary?

+ +

Am I missing something?

+",27386,,2444,,7/23/2019 17:58,7/23/2019 17:58,Is PReLU superfluous with respect to ReLU?,,2,0,,,,CC BY-SA 4.0 +13528,2,,13526,7/23/2019 15:15,,1,,"

Let's assume we have a small stack of dense layers with activations $x^0 \rightarrow x^1 \rightarrow x^2$, such that $x^1 = Ax^0 + b$ and $x^2 = \psi \, PReLU(x^1) + \gamma$.

+ +

Now let's see what it would take to convert the PReLU into a ReLU.

+ +

$\begin{align*}
PReLU(x^1) &= ReLU(x^1) + ReLU(p \odot x^1)\\
&= ReLU(Ax^0+b) + ReLU(p\odot(Ax^0+b))\\
&= ReLU(Ax^0+b) + ReLU(\text{diag}(p)Ax^0 + \text{diag}(p)b)\\
&= ReLU(Ax^0+b) + ReLU(Qx^0+c) \quad s.t. \quad Q = \text{diag}(p)A, \ \ c = \text{diag}(p)b\\
&= [I, I]\,[ReLU(Ax^0+b);\; ReLU(Qx^0+c)]\\
\implies x^2 &= [\psi, \psi]\,[ReLU(Ax^0+b);\; ReLU(Qx^0+c)] + \gamma\\
&= V\,ReLU(Sx^0 + d) + \gamma \quad \text{with} \quad V=[\psi, \psi], \ \ S=[A;\, Q], \ \ d=[b;\, c]
\end{align*}$
(here $[\,\cdot\,;\,\cdot\,]$ denotes vertical concatenation).

+ +

So, as you said, it is possible to break the intermediary $PReLU$ into a pure $ReLU$ while keeping it a linear model, but if you take a second look at the parameters of the model, the size increases drastically. The number of hidden units in $S$ has doubled, meaning that to keep $x^2$ the same size, $V$ also doubles in size. So this means that if you don't want to use the $PReLU$, you are learning double the parameters to achieve the same capability (granted, it allows you to learn a wider span of functions as well), and if you enforce the constraints on $V, S$ set by the $PReLU$, the number of parameters is the same but you are still using more memory and more operations!

+ +
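
A small numerical check of the construction above (a NumPy sketch with arbitrary sizes):

+ +

# Numerical check that the doubled ReLU layer reproduces the PReLU layer.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

n_in, n_hid, n_out = 5, 7, 3
A, b = rng.standard_normal((n_hid, n_in)), rng.standard_normal(n_hid)
psi, gamma = rng.standard_normal((n_out, n_hid)), rng.standard_normal(n_out)
p = rng.standard_normal(n_hid)
x0 = rng.standard_normal(n_in)

# PReLU form: psi (ReLU(Ax+b) + ReLU(diag(p)(Ax+b))) + gamma
x1 = A @ x0 + b
prelu_out = psi @ (relu(x1) + relu(p * x1)) + gamma

# Pure-ReLU form with doubled parameters
Q, c = np.diag(p) @ A, p * b
S = np.vstack([A, Q])                  # (2*n_hid, n_in)
d = np.concatenate([b, c])             # (2*n_hid,)
V = np.hstack([psi, psi])              # (n_out, 2*n_hid)
relu_out = V @ relu(S @ x0 + d) + gamma

print(np.allclose(prelu_out, relu_out))   # True
+

+ +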

I hope this example convinces you of the difference

+",25496,,,,,7/23/2019 15:15,,,,2,,,,CC BY-SA 4.0 +13529,1,,,7/23/2019 15:18,,1,712,"

According to my lecture, Fuzzy c-Means tries to minimize the following objective function:

+ +

$$J(X,B,U)=\sum_{i=1}^c\sum_{j=1}^n u_{ij}^w \, d^2(\vec{\beta_i},\vec{x_j})$$

+ +

where $X$ are the data points, $B$ are the cluster-'prototypes', and $U$ is the matrix containing the fuzzy membership degrees. $d$ is a distance measure.

+ +

A constraint is that the membership degrees for a single data point w.r.t. all clusters sum to $1$: $\sum_{i=1}^c\, u_{ij}=1$.

+ +

Now in the first equation, what is the role of the $w$? I read that one could use any convex function instead of $(\cdot)^w$. But why use anything at all. Why don't we just use the membership degrees? My lecture says using the fuzzifier is necessary but doesn't explain why.

+",21366,,2444,,7/23/2019 19:54,7/24/2019 17:32,What is the role of the 'fuzzifier' w in Fuzzy Clustering?,,1,0,,,,CC BY-SA 4.0 +13530,2,,13526,7/23/2019 16:23,,1,,"

Here are 3 reasons I can think of:

+ +
    +
  • Space - As @mshlis pointed out, size. To approximate a PReLU you require more than 1 ReLU. Even without a formal proof, one can easily see that a PReLU is 2 adjustable (parameterizable) linear functions over 2 different ranges joined together, while a ReLU is just a single adjustable (parameterizable) linear function over half that range, so you require a minimum of 2 ReLUs to approximate a PReLU. Thus, space complexity increases and you require more space to store parameters.

  • +
  • Time - This increase in the number of ReLUs directly affects training time. Here is a question on the time complexity of training a neural network; you can check it out and work out the necessary mathematical details of the time increase for a 2x neural network size.

  • +
  • Dead ReLUs - This is a problem in which a ReLU's output becomes 0 due to negative input, so there is no way of flowing your gradient through it, and thus it has no further effect on the training. It can be made alive again only if the other alive ReLUs optimise some activations from earlier layers such that the dead ReLU again has a positive output. This is not very likely, since the loss is optimised by adjusting weights and not by adjusting whether dead ReLUs are present (basically, it is not a learnable parameter, so there is only a random chance of it coming alive again; the other ReLUs do not strive to make it alive). So, to accommodate dead ReLUs, the size of the neural net needs to be increased further, which again adds time complexity. Here is a question on dead ReLUs. PReLUs do not suffer from this problem (which is probably one of the reasons for their introduction) and thus are definitely a better choice in terms of this criterion.

  • +
+ +

From personal experience, PReLUs tend to perform better than ReLUs for a small number of epochs (I have trained only for a small number of epochs). With further epochs and optimisation, this observation might cease to hold true.

+",,user9947,,,,7/23/2019 16:23,,,,0,,,,CC BY-SA 4.0 +13531,1,13548,,7/23/2019 16:46,,1,1206,"

Just needed a clarification on the training procedure for a standard GAN. From my understanding, the objective to optimize is a min-max problem (max-min causing mode collapse due to focusing on one class generation), where the value function

+ +

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

needs to be maximized for the discriminator and minimized for the generator network.

+ +

1.) In this equation, are $E_{z \sim p_z(z)}$ and $E_{x \sim p_{data}(x)}$ the means over the mini-batch samples drawn from those distributions? Also, is the optimal case for the discriminator a maximum value of 0, and the optimal case for the generator a minimum value of $\log(\text{small value})$ (basically a massive negative value)? If so, what happens to the first term during the training of the generator - is it taken as a constant, or is the discriminator performing badly considered optimal for the generator?

+ +

2.) While putting this in code, for every step is the discriminator trained first for one step, keeping the generator constant, followed by the generator being trained for the same step, with the discriminator kept constant?

+ +

3.) In every step of training are there multiple latent vectors sampled to produce multiple generator outputs for each step? If so is the loss function an average or sum of all $V(D, G)$ for that step

+",25658,,,,,7/23/2019 23:04,Query regarding the minmax loss function formulation of the training of a Generative Adversarial Network (GAN),,1,1,,6/1/2022 15:57,,CC BY-SA 4.0 +13532,5,,,7/23/2019 18:19,,0,,,2444,,2444,,7/23/2019 18:19,7/23/2019 18:19,,,,0,,,,CC BY-SA 4.0 +13533,4,,,7/23/2019 18:19,,0,,"For questions related to cognitive architectures, such as SOAR and ACT.",2444,,2444,,7/23/2019 18:19,7/23/2019 18:19,,,,0,,,,CC BY-SA 4.0 +13534,5,,,7/23/2019 18:19,,0,,"

See e.g. https://en.wikipedia.org/wiki/Soar_(cognitive_architecture).

+",2444,,2444,,7/23/2019 18:19,7/23/2019 18:19,,,,0,,,,CC BY-SA 4.0 +13535,4,,,7/23/2019 18:19,,0,,"For questions related to the SOAR cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University.",2444,,2444,,7/23/2019 18:19,7/23/2019 18:19,,,,0,,,,CC BY-SA 4.0 +13536,5,,,7/23/2019 18:23,,0,,,2444,,2444,,7/23/2019 18:23,7/23/2019 18:23,,,,0,,,,CC BY-SA 4.0 +13537,4,,,7/23/2019 18:23,,0,,"For questions related to the Stanford Research Institute Problem Solver (also known as STRIPS), which is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International.",2444,,2444,,7/23/2019 18:23,7/23/2019 18:23,,,,0,,,,CC BY-SA 4.0 +13538,1,,,7/23/2019 19:08,,4,93,"

I am trying to create an AI that makes reasonable guesses at truths of statements. However...

+ +

Human: ""Prove that no number exists which is one more than a billion.""

+ +

AI: ""Is it true for the number 1? No. I am 1% sure the statement is correct.""

+ +

AI: ""Is it true for the number 2? No. I am 2% sure the statement is correct.""

+ +

...

+ +

AI: ""Is it true for the number 999,999? No. I am 99.99% sure the statement is correct.""

+ +

AI: ""Having tested a large number of examples. I conclude that the statement is correct.""

+ +

Human: ""The statement is wrong: one billion and one.""

+ +

What do you think has gone wrong?

+",4199,,,,,8/24/2019 0:31,How would an AI work out this question?,,1,2,,,,CC BY-SA 4.0 +13539,2,,13529,7/23/2019 19:35,,1,,"

It's not required; you can have $m=1$. Actually, it can be any number $\geq 1$.

+ +

Now, the better question is why have it? The answer is that it adds a smoothing effect. Let's look at it in each of the limits ($m \rightarrow 1$ and $m \rightarrow \infty$).

+ +

Towards $\infty$, it makes $u_{ij}$ equal to $\frac{1}{c}$, making each point have equal membership in each cluster regardless of the distances. From the optimization perspective, it's asking how we can find clusters that are closest to all points; therefore, by definition, it has already achieved that, and so the loss will always be 0 (at its global minimum).

+ +

Now, in the other limit, the memberships are inversely proportional to the square of the normalized Euclidean distance. This makes intuitive sense: the membership is high if they are close, and the membership is low if they are not (relatively).

+ +

So why do we have $m$? It's for control. It allows us to choose and experiment with how heavily each distance should weigh in the membership. An example where a larger $m$ may be useful is when the data isn't clean, and you don't want to rely so heavily on Euclidean distance as the membership, so you forcibly add in a smoothing effect.
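
+ +

A small numerical sketch of that smoothing effect, using the standard FCM membership update $u_{ij} = 1 / \sum_k (d_{ij}/d_{kj})^{2/(m-1)}$ for a single point and fixed prototypes (the distances below are made up):

+ +

# Sketch: how the fuzzifier m smooths the membership degrees of one point.
import numpy as np

def memberships(dists, m):
    # Standard FCM membership update for one data point, given its
    # distances to each of the c cluster prototypes (requires m > 1).
    ratios = (dists[:, None] / dists[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)

dists = np.array([1.0, 2.0, 4.0])     # the point is closest to cluster 0

for m in [1.1, 2.0, 10.0]:
    print(m, memberships(dists, m))
# m close to 1  -> memberships close to a hard assignment (almost all on cluster 0)
# large m       -> memberships approach 1/c = 1/3 for every cluster
+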

+",25496,,25496,,7/24/2019 17:32,7/24/2019 17:32,,,,5,,,,CC BY-SA 4.0 +13540,1,13609,,7/23/2019 20:01,,0,1187,"

I haven't been able to find any assistance / examples which could help me implement OpenAI's Spinning Up resource to solve Atari's Breakout-v0 game in the OpenAI gym.

+ +

I simply want to know why the following command doesn't run, and instead produces an error that I can't find any help on:

+ +
python -m spinup.run ppo --env Breakout-v0 --exp_name simpletest
+
+ +

...and then the error:

+ +
ValueError: Shape must be rank 2 but is rank 4 for 'pi/multinomial/Multinomial'
+ (op: 'Multinomial') with input shapes: [?,210,160,4], [].
+
+ +

I understand the shape dynamics, and have written several (albeit quite unoptimized!) reinforcement learning neural nets in Python, but I was looking forward to using OpenAI's Spinning Up environment to use something more sophisticated and optimized.

+ +

Thank you so much for any help on the seeminly noobish question!

+",27392,,27516,,7/29/2019 20:27,7/29/2019 20:27,OpenAI Spinning Up: Breakout-v0 example,,1,0,,6/15/2021 11:25,,CC BY-SA 4.0 +13544,1,13546,,7/23/2019 22:02,,4,176,"

In this answer to the question Is an optimization algorithm equivalent to a neural network?, the author stated that, in theory, there is some recurrent neural network that implements a given optimization algorithm.

+

If so, then can we optimize the optimization algorithm?

+",23500,,2444,,12/12/2021 9:11,12/12/2021 9:11,Can we optimize an optimization algorithm?,,3,1,,,,CC BY-SA 4.0 +13546,2,,13544,7/23/2019 22:28,,5,,"

First, you need to consider what are the ""parameters"" of this ""optimization algorithm"" that you want to ""optimize"". Let's take the most simple case, a SGD without momentum. The update rule for this optimizer is:

+ +

$$ +w_{t+1} \leftarrow w_{t} - a \cdot \nabla_{w_{t}} J(w_t) = w_{t} - a \cdot g_t +$$

+ +

where $w_t$ are the weights at iteration $t$, $J$ is the cost function, $g_t = \nabla_{w_{t}} J(w_t)$ are the gradients of the cost function w.r.t $w_t$ and $a$ is the learning rate.

+ +

An optimization algorithm accepts as its input the weights and their gradients and returns the update. So we could write the above equation as:

+ +

$$ +w_{t+1} \leftarrow w_{t} - SGD(w_t, g_t) +$$

+ +

The same is true for all optimization algorithms (e.g. Adam, RMSprop, etc.). Now our initial question was what are the parameters of the optimizer, which we want to optimize. In the simple case of the SGD, the sole parameter of the optimizer is the learning rate.

+ +

The question that arises at this point is can we optimize the learning rate of the optimizer during training? Or more practically, can we compute this derivative?

+ +

$$ +\frac{\partial J(w_t)}{\partial a} +$$

+ +

This idea was explored in this paper, where they coin this technique ""hypergradient descent"". I suggest you take a look.
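
+ +

A rough sketch of the idea as I read it (the learning rate $a$ is itself adjusted by a small meta step size $\beta$ using the inner product of successive gradients), on a toy quadratic objective:

+ +

# Rough sketch of hypergradient descent on a toy quadratic objective.
import numpy as np

def grad(w):                      # gradient of J(w) = 0.5 * ||w||^2
    return w

w = np.array([5.0, -3.0])
a, beta = 0.01, 0.001
g_prev = grad(w)

for t in range(100):
    g = grad(w)
    a = a + beta * np.dot(g, g_prev)   # hypergradient update of the learning rate
    w = w - a * g                      # usual SGD update with the adapted rate
    g_prev = g

print('final loss:', 0.5 * np.dot(w, w), 'final learning rate:', a)
+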

+",26652,,,,,7/23/2019 22:28,,,,2,,,,CC BY-SA 4.0 +13547,2,,13544,7/23/2019 22:29,,2,,"

We usually optimize with respect to something. For example, you can train a neural network to locate cats in an image. This operation of locating cats in an image can be thought of as a function: given an image, a neural network can be trained to return the position of the cat in the image. In this sense, we can optimize a neural network with respect to this task.

+ +

However, if a neural network represents an optimization algorithm, then, if you change it a little bit, it will no longer be the same optimization algorithm: it might be another optimization algorithm or some other different algorithm.

+ +

For example, most optimization algorithms that are used to train neural networks (like Adam) are a variation of gradient descent (GD). If you think that Adam performs better than GD, then you could say that Adam is an optimization of GD. So, Adam performs better than GD with respect to something. Possibly, GD also performs better than Adam with respect to something else. Of course, this is a little bit of a stretch.

+",2444,,,,,7/23/2019 22:29,,,,0,,,,CC BY-SA 4.0 +13548,2,,13531,7/23/2019 23:04,,2,,"

I'll answer your questions one by one:

+ +
+

In this equation are the $E_{z \sim p_z(z)}$ and $E_{x \sim p_{data}(x)}$ the means of the distributions of the mini batch samples?

+
+ +

So let's take the first part $E_{x \sim p_{data}(x)}[log \,D(x)]$. This is read as the ""expected value of $log \, D(x)$, where $x$ is sampled from $p_{data}(x)$"". So, in simpler terms this means that:

+ +
    +
  • You have a distribution $p_{data}(x)$.
  • +
  • You sample a batch of samples $x$ from this distribution.
  • +
  • You feed the batch to the discriminator and get its output $D(x)$.
  • +
  • You compute the log for the batch of predictions.
  • +
  • You average over the samples in the batch.
  • +
+ +
+

Also is the optimal case for the discriminator a maximum value of 0?

+
+ +

Yes, the goal of the discriminator is to maximize $V(D, G)$. This is achieved when $D(x) = 1$ and $D(G(z)) = 0$, where $V(D, G)=0$.

+ +
+

the optimal case for the generator a minimum value of log(small value) {basically a massive negative value}?

+
+ +

Yes, the generator wants to minimize $V(D, G)$, so it wants $D(x)$ and $D(z)$ to be as small as possible (though it can affect only the second term). The minimum value for $D(z)$ is $1 / M$, where $M$ is the number of classes in the dataset. So the minimum value for $V(D, G)$ is $\log{[(M-1) / M^2]}$ (I think).

+ +
+

If so, what happens to the first term during the training of the generator - is it taken as a constant or is the discriminator performing badly considered optimal for the generator?

+
+ +

The first term is of no consequence to the generator, because the generator can't affect it in any way (i.e., like you say, it is taken as a constant). Consider that the generator $G_{\theta}$ has parameters $\theta$ and the discriminator $D_{\phi}$ has parameters $\phi$. When training $G$:

+ +

$$
\frac{\partial \log{D_{\phi}(x)}}{\partial \theta} = 0
$$

+ +

So the generator is effectively trained only on the second term of $V$.

+ +
+

While putting this in code, for every step is the discriminator trained first for one step, keeping the generator constant, followed by the generator being trained for the same step, with the discriminator kept constant?

+
+ +

In theory, yes. In practice for every step the Generator takes, we train the Discriminator for more steps (e.g. $5$ steps).

+ +
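
In rough Python pseudocode, the alternating loop looks like this (a sketch only; the `*_train_step` functions are placeholders for one gradient update of the corresponding network, and the sizes are arbitrary):

+ +

# Sketch of the alternating GAN training loop (k discriminator steps
# per generator step). The *_train_step functions are placeholders.
import numpy as np

def d_train_step(real_batch, z_batch):
    pass  # maximize log D(x) + log(1 - D(G(z))) w.r.t. the discriminator

def g_train_step(z_batch):
    pass  # update the generator on log(1 - D(G(z))) w.r.t. its parameters

k = 5                                     # discriminator steps per generator step
batch_size, latent_dim = 64, 100
data = np.random.rand(10_000, 28 * 28)    # stand-in for the real dataset

for iteration in range(1000):
    for _ in range(k):
        real = data[np.random.randint(len(data), size=batch_size)]
        z = np.random.randn(batch_size, latent_dim)
        d_train_step(real, z)             # generator held fixed here
    z = np.random.randn(batch_size, latent_dim)
    g_train_step(z)                       # discriminator held fixed here
+

+ +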
+

In every step of training are there multiple latent vectors sampled to produce multiple generator outputs for each step? If so is the loss function an average or sum of all V(D,G) for that step.?

+
+ +

Yes, at every step we sample a batch of latent vectors. The equation indicates that we average over these (i.e. the expectation), but in practice it doesn't make any difference (the average is the sum divided by a constant number, which doesn't affect the optimization very much).

+",26652,,,,,7/23/2019 23:04,,,,0,,,,CC BY-SA 4.0 +13549,1,13561,,7/24/2019 0:28,,3,253,"

From the literature I have read so far, it is not clear how exactly the convolution operation is defined. It seems people use two different definitions:

+

Let us assume we are given an $n_w \times n_h \times d$ input tensor $I$ and an $m_w \times m_h \times d$ filter $F$ of $d$ kernels (I use the convention of referring to the depth-slices of filters as kernels. I also will call the depth slices of the input tensor channels). Let us also assume $F$ is the $j$th filter of $J$ filters.

+

Now to the definitions.

+

Option 1:

+

The convolution of $I$ with $F$ is obtained by sliding $F$ across $I$ and computing the Frobenius inner product between channel $k$ and kernel $k$ at each position, adding the products and storing them in an output matrix. That matrix is the result of the convolution. It is also the $j$th feature map in the output tensor of the convolution layer.

+

Let $I \in \mathbb{R}^{n_w \times n_h \times d}$ and $F \in \mathbb{R}^{m_w \times m_h \times d}$. Let $s \in \mathbb{N}$ be the stride. The operation will only be defined if the smaller tensor fits within the larger tensor along its width and height a positive integer number of times when shifting by $s$, that is if and only if $k_w = (n_w - m_w) / s + 1\in \mathbb{N}$ and $k_h = (n_h - m_h) / s + 1\in \mathbb{N}$, where $k_w \times k_h \times d$ is the shape of the output tensor. Furthermore let ${f_x : i \mapsto (x - 1)s + i}$ be a function that returns the absolute index in the input tensor, given an index $x$ in the output tensor, the stride length $s$ and a relative index $i$. +\begin{equation*} + \begin{split} + (I * F)_{x y} = + & \sum_{k=1}^d \sum_{i = 1}^{m_w} \sum_{j = 1}^{m_h} I_{f_x(i) f_y(j) k} \cdot F_{i j k} + \end{split} +\end{equation*}

+

Option 2:

+

The convolutions (plural) of $I$ with $F$ are obtained by sliding $F$ across $I$ and computing the Frobenius inner product between channel $k$ and kernel $i$ at each position. Each product is stored in a matrix associated with the channel $k$. There is no adding of the products yet. The convolutions are the result matrices. The step where the matrices are added component wise to obtain the $j$th feature map of the output tensor of the convolution layer is not part of the convolution operation, but an independent step.

+

Let $I \in \mathbb{R}^{n_w \times n_h}$ and $F \in \mathbb{R}^{m_w \times m_h}$. Let $s \in \mathbb{N}$ be the stride. The operation will only be defined if the smaller matrix fits within the larger one along its width and height a positive integer number of times when shifting by $s$, that is, if and only if $k_w = (n_w - m_w) / s + 1\in \mathbb{N}$ and $k_h = (n_h - m_h) / s + 1\in \mathbb{N}$, where $k_w \times k_h$ is the shape of the output matrix. Furthermore let ${f_x : i \mapsto (x - 1)s + i}$ be a function that returns the absolute index in the input matrix, given an index $x$ in the output matrix, the stride length $s$ and a relative index $i$. +\begin{equation*} + \begin{split} + (I * F)_{x y} = + &\sum_{i = 1}^{m_w} \sum_{j = 1}^{m_h} I_{f_x(i) f_y(j)} \cdot F_{i j} + \end{split} +\end{equation*}

+

Which of these two definitions is the common one?

+",20150,,2444,,5/11/2022 7:13,5/11/2022 7:13,Is adding the Frobenius inner products between filter and input part of convolution or a separate step?,,1,0,,,,CC BY-SA 4.0 +13552,1,13553,,7/24/2019 11:52,,3,470,"

I do not understand why, with enough training, the generator cannot learn all images from the training set as a mapping from the latent space - it is the absolute optimal case in training, as it replicates the distribution and the discriminator output will always be 0.5. Even though most blog posts I have seen do not mention noise, a few of them have it in their diagrams or describe its presence, but they never exactly describe the purpose of this noise.

+ +

Is this noise injected to avoid the exact reproduction of the training data? If not what is the purpose of this injection and how is exact reproduction avoided?

+",25658,,2444,,7/24/2019 21:53,7/24/2019 21:56,What is the purpose of the noise injection in the generator network of a GAN?,,1,3,,,,CC BY-SA 4.0 +13553,2,,13552,7/24/2019 12:26,,2,,"

Your goal is to model a distribution when constructing a GAN, therefore you need a way to be able to sample that distribution. The noise's purpose is so you can do this. Generally, it's drawn from a distribution that is computationally easy to draw from (like a gaussian).

+ +

You are modeling the generator $G(X)$ where $X \sim N(\mu, \sigma^2)$. This means $G(X)$ is a random variable itself. The forward pass of the network transforms the $X$ samples into our $G(X)$ samples, allowing us to formulate a loss function (by solving the expectation as the mean of drawn samples) and train the model.

+ +
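
Concretely (a tiny sketch, with a made-up linear "generator" just for illustration), sampling the noise and averaging a score over the drawn samples is how the expectation in the loss is approximated:

+ +

# Sketch: approximate E_z[f(G(z))] by a mean over sampled noise vectors.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 100))         # a made-up "generator": G(z) = tanh(Wz)

def G(z):
    return np.tanh(z @ W.T)

z = rng.standard_normal((64, 100))        # a batch of noise samples X ~ N(0, I)
samples = G(z)                            # a batch of generated samples G(X)

# e.g. the generator part of the GAN loss would average some score over them:
fake_scores = 1.0 / (1.0 + np.exp(-samples.sum(axis=1)))   # stand-in for D(G(z))
loss_estimate = np.mean(np.log(1.0 - fake_scores + 1e-10))
print(loss_estimate)
+

+ +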

Takeaway: The noise injected is just a parametrization of our Generator in another space, and the training goal is to learn the ideal transformation (we use neural networks because they are differentiable and are effective function approximators)

+ +

Also, note, to your point of why it doesn't learn the training data (in its entirety) exactly: it is because generally $G$ is continuous, and therefore if it has 2 images in its codomain, there also exists some path in pixel space from one to the other containing an uncountable (or quantized, countable) number of images that don't exist in the training set. This would be reflected in the loss; therefore, in the min-max game of the optimization, it's difficult for it to learn the training set on the nose.

+",25496,,2444,,7/24/2019 21:56,7/24/2019 21:56,,,,0,,,,CC BY-SA 4.0 +13554,1,,,7/24/2019 13:07,,1,276,"

If I understand correctly, the KL divergence is a measure of information loss between a ground truth distribution $P$ and a predicted distribution $Q$, and the Jensen-Shannon divergence is the mean of the KL Divergences of 2 cases

+ +
    +
  1. Predicted distribution is mean of $P$ and $Q$, and ground truth is $P$

  2. +
  3. Predicted distribution is mean of $P$ and $Q$, and ground truth is $Q$

  4. +
+ +

Since the KL divergence can be easily interpreted as information loss in $Q$ relative to $P$, what can the JS divergence interpretably represent? I cannot see any use cases of these measures unless there are two distributions to compare. Is there any other problem where I could use them as loss functions, other than generation problems? If so, how, and why?

+",25658,,2444,,7/24/2019 20:24,7/24/2019 20:24,Could the Jensen-Shannon divergence and Kullback-Leibler divergence be used as loss functions of non-generation problems?,,0,2,,,,CC BY-SA 4.0 +13555,1,,,7/24/2019 14:51,,3,898,"

I have implemented the total loss of my PPO objective as follows:-

+ +
total_loss = critic_discount * critic_loss + actor_loss -  entropy_beta * K.mean(-(newpolicy_probs * K.log(newpolicy_probs)))
+
+ +

After training for a few epochs, the entropy term becomes ""nan"" for some reason. I used tf.Print() to see the new policy probabilities when the entropy becomes undefined, it is as follows-

+ +
+

new policy probs: [[6.1029973e-06 1.93471514e-08 + 0.000299338106...]...]

+
+ +

I am not clear as to why taking the log of these small probabilities comes out as NaN. Any idea how to prevent this?

+",27405,,,,,7/25/2019 12:36,Entropy term in Proximal Policy Optimization (PPO) becomes undefined after few training epochs,,1,4,,,,CC BY-SA 4.0 +13557,1,13559,,7/24/2019 19:25,,4,141,"

First of all, it is great to have found this community!

+ +

I am currently implementing my own Alpha Zero clone on Connect4. However, I have a mental barrier I cannot overcome.

+ +

How can I use one neural network for both players? I do not understand what the input should be.

+ +

Do I just put in the board position ($6 \times 7$) and let's say Player1's pieces on the board are represented as $-1$, empty board as $0$ and Player2's pieces as $1$?

+ +

To me, that seems the most efficient. But then, in the backpropagation, I feel like this cannot be working. If I update the same network for both players (which Alpha Zero does), don't I try to optimize Player1 and Player 2 at the same time?

+ +

I just can't get my head around it. 2 Neural networks, each for one player is understandable for me. But one network? I don't understand how to backpropagate? Should I just flip my ""z"" (the result of the game) every time I go one layer backward? Is that all there is to using one network?

+ +

I hope I made this clear enough. I am quite confused, I tried my best.

+ +

Thank you for reading this!

+",27406,,2444,,7/24/2019 22:20,7/24/2019 22:20,How can I use one neural network for both players in Alpha Zero (Connect 4)?,,1,0,,,,CC BY-SA 4.0 +13559,2,,13557,7/24/2019 20:22,,3,,"

Let's define your problem from another point of view. Let's say that in this RL problem you have two agents (agent1 and agent2) that compete with each other in order to accomplish their own goal, i.e., wining connect4 game.

+ +

Therefore, we could say that, from agent1's point of view, it is player1 and player2 is agent2. In the same way, from agent2's point of view, it is player1 and player2 is agent1.

+ +

Here, we can use the same definition as you proposed:

+ +

""let's say player1's pieces on the board are represented as -1, empty board as 0 and player2's pieces as 1""

+ +

When it is agent1's turn, it will receive an observation (representing the board configuration) from the environment. In this case, agent1's pieces are represented as -1, empty cells as 0, and player2's (agent2's) pieces as 1. Then you could use the neural network to compute, e.g., the next action for agent1.

+ +

When it is agent2's turn, it will receive another observation from the environment, but this time you represent its own pieces as -1, while agent1's (player2's from agent2's point of view) are represented as 1.

+ +

With this approach, the neural network will always receive as input a representation in which the pieces 'it' controls are -1, its rival's pieces are 1, and empty spaces are 0. From the neural network's point of view, it does not matter which agent is playing, as the observations are indistinguishable when calculating the next action.

+ +

Therefore, you will have to generate two different observations, in which you invert the sign of the pieces, depending on whose turn it is (agent1's or agent2's).

+ +

For example:

+ +
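
A minimal Python sketch of this observation flip (assuming the board is stored as a NumPy array with the encoding above, and `current_player` is 1 or 2):

+ +

import numpy as np

# board: 6x7 array, 0 = empty, 1 = agent1's pieces, 2 = agent2's pieces
def observation_for(board, current_player):
    # Return the board from the current player's perspective:
    # own pieces -> -1, opponent's pieces -> 1, empty -> 0.
    obs = np.zeros_like(board, dtype=np.float32)
    obs[board == current_player] = -1.0
    obs[(board != 0) & (board != current_player)] = 1.0
    return obs

board = np.zeros((6, 7), dtype=int)
board[5, 3], board[5, 4] = 1, 2           # one piece for each agent
print(observation_for(board, 1))          # agent1 to move
print(observation_for(board, 2))          # agent2 to move: signs are flipped
+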

+ +

+",27341,,,,,7/24/2019 20:22,,,,1,,,,CC BY-SA 4.0 +13560,1,,,7/25/2019 0:22,,0,73,"

I'm new to machine learning and, especially, deep learning. Given a video (and its subtitles), I need to generate a 10-second summary out of this video. How can I use ML and DL to produce the most representative summary of this video? More specifically, given video scenes, what are some ways to select and rank them, and how do I do it? Any ideas would be helpful.

+",9053,,,,,7/25/2019 11:17,Use deep learning to rank video scenes,,1,0,,,,CC BY-SA 4.0 +13561,2,,13549,7/25/2019 1:02,,2,,"

Both are incorrect.

+ +

Using your notation: you do not take a sliding Frobenius inner product of a single channel of $I$ with $F$, but of all the channels at once. This may be easier to understand if you do not assume the number of channels of the input and output are the same (i.e. a different number of input channels than filters). So let's say your input has $k_1$ channels and you have $k_2$ filters.

+ +

This means $shape(I) = (N, M, k_1)$, each of the $k_2$ filters is of shape $(n, m, k_1)$, and your output shape is $(N-n+1, M-m+1, k_2)$, assuming you're not using padding.

+ +

So, I guess, trying to put it in the manner you use: the convolution output's $i^{th}$ channel is obtained by taking the sliding Frobenius inner product between each $n \times m$ cross-section of $I$ (including all input channels) and filter $F_i$.
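
+ +

A small NumPy sketch of that (a naive loop, no padding, stride 1), just to make the ""all channels at once"" point explicit:

+ +

# Naive sketch: each output channel sums the 2D correlations over *all*
# input channels (no padding, stride 1).
import numpy as np

N, M, k1 = 8, 8, 3          # input height, width, channels
n, m, k2 = 3, 3, 5          # filter height, width, number of filters

I = np.random.rand(N, M, k1)
F = np.random.rand(k2, n, m, k1)   # each of the k2 filters has k1 kernels

out = np.zeros((N - n + 1, M - m + 1, k2))
for i in range(k2):                          # output channel i uses filter F[i]
    for x in range(N - n + 1):
        for y in range(M - m + 1):
            patch = I[x:x + n, y:y + m, :]   # an (n, m, k1) cross-section of I
            out[x, y, i] = np.sum(patch * F[i])   # inner product over all channels

print(out.shape)   # (N - n + 1, M - m + 1, k2)
+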

+",25496,,,,,7/25/2019 1:02,,,,15,,,,CC BY-SA 4.0 +13564,2,,11617,7/25/2019 6:09,,0,,"
+

So, from this I will conclude (correct me if I'm wrong) that AlphaZero + evaluates the current position, but uses a very powerful heuristic. + Stockfish on the other hand searches through lots of positions from + the current one first, and then uses a less powerful heuristic when a + certain depth is reached.

+
+ +

This is wrong. Like Stockfish, AlphaZero also ""searches through lots of positions from the current one."" You hint at this yourself when you say ""possibly combined with monte carlo methods"", but it seems you don't understand exactly what that means, so let me explain:

+ +

Stockfish searches through the tree of future moves using an algorithm called Minimax (actually a variant called alpha-beta pruning), whereas AlphaZero searches through future moves using a different algorithm called Monte Carlo Tree Search (MCTS). Minimax is well suited to quick evaluation functions, whereas MCTS explores fewer moves and thus can handle a more expensive evaluation function. Further, MCTS works not with exact values but with probabilities, and AlphaZero uses the neural net not just for the value of moves but to guide which moves to explore next (functionally, it is actually 2 networks, a policy network and a value network). To be sure, that is somewhat of a simplification. It is not impossible to use a neural net in conjunction with Minimax. It's just that, in practice, due to the nature of the algorithm, it's simply too expensive.

+",12201,,,,,7/25/2019 6:09,,,,2,,,,CC BY-SA 4.0 +13565,2,,11617,7/25/2019 6:31,,3,,"

Yes, it's possible to combine AlphaZero with Minimax methods (including alpha-beta pruning). AlphaZero itself is a combination of Monte Carlo Tree Search (MCTS) and a deep network, where MCTS is used to get data to train the network and the network is used for tree-leaf evaluation (instead of rollouts, as in classical MCTS). It's possible to combine the selection-expansion part of AlphaZero's MCTS with Minimax the same way as it was done for classical MCTS - ""Monte-Carlo Tree Search and Minimax Hybrids"", pdf.

+",22745,,,,,7/25/2019 6:31,,,,0,,,,CC BY-SA 4.0 +13567,1,,,7/25/2019 9:28,,1,87,"

While doing transfer learning, where my two problems are face generation and car generation, is it likely that, if I use the weights of one problem as the initialization of the weights for the other problem, the model will converge to a local minimum? For any problem, is it better to train from scratch rather than use transfer learning (especially for GAN training)?

+",25658,,2444,,7/25/2019 16:08,7/18/2021 14:16,Is convergence to a local minima more likely with transfer learning?,,0,0,,,,CC BY-SA 4.0 +13568,1,13581,,7/25/2019 9:50,,3,185,"

I am going through the book Pattern Recognition by Bishop.

+ +

At one point he says

+ +
+

For $M = 9$, the training set error goes to zero, as we might expect because this polynomial contains 10 degrees of freedom corresponding to the $10$ coefficients $w_0, \dots, w_9$, and so can be tuned exactly to the $10$ data points in the training set.

+
+ +

where $M$ is the order of the hypothesis function, and $w$ are the weights of the hypothesis function.

+ +

I did not understand how having $10$ degrees of freedom will tune the model EXACTLY to the $10$ data points? Does it mean that whenever we have a number of data points in training set equal to the degrees of freedom, the error will be zero?

+",27422,,2444,,7/25/2019 16:10,7/26/2019 2:31,What is the relationship between degrees of freedom and the size of the training dataset?,,1,0,,,,CC BY-SA 4.0 +13570,1,,,7/25/2019 11:01,,2,222,"

I have created a chatbot with Keras based on movie dialog. I used an RNN, more specifically a GRU. My bot can reply well, but the problem is that it can't hold the context. As an example, if I say ""Tell me a joke"", the bot will reply with something, and then if I say ""one more"", the bot simply doesn't understand that I was asking for another joke. There are many more similar cases: if I use slang against the bot, the bot will reply with something similar, but if I just say something romantic or nice immediately after using slang, the bot will reply with something nice. I want to keep the context or environment. How can I do so? Any lead would be helpful.

+",26850,,,,,7/26/2019 10:28,How can I keep context in my chatbot,,1,4,,,,CC BY-SA 4.0 +13571,2,,13560,7/25/2019 11:17,,1,,"

It seems like quite a challenging problem; at the very least, you would need quite a lot of annotated data and computational power.

+ +

The approaches/optimizations you could consider:

+ +
    +
  • Perform scene-change detection and take a short piece out of each scene
  • +
  • Introduce some kind of “novelty” metric and try to maximize it to get the most different parts of the video
  • +
  • Convert the video to a kind of vector representation with an existing solution like R-CNN or YOLO, and then process it with recurrent networks
  • +
  • The task seems to be very close to video capturing/summarization; you can take inspiration there
  • +
  • Also, the attention approach might be handy; look, for example, at self-attention for video and semantic attention for video
  • +
+",16940,,,,,7/25/2019 11:17,,,,4,,,,CC BY-SA 4.0 +13572,1,,,7/25/2019 11:57,,3,570,"

What is the difference between image processing and computer vision? They are apparently both used in artificial intelligence.

+",67525,,2444,,7/26/2019 10:25,7/26/2019 10:25,What is the difference between image processing and computer vision?,,1,0,,,,CC BY-SA 4.0 +13573,1,13591,,7/25/2019 12:19,,1,716,"

I was watching a video about Convolutional Neural Networks: https://www.youtube.com/watch?v=SQ67NBCLV98. What I'm confused about is the arrangement of applying the filters' channels to the input image or even to the output of a previous layer.

+

Question 1 - Looking at the visual explanation example of how one filter with 3 channels is applied to the input image (with 3 channels), each filter channel is applied to its corresponding input channel. Hence the output is 3 channels. Makes sense.

+

However, looking at the second screenshot, which shows an example of the VGG network, and at the first layer (I've delineated it with a red frame), which has 64 channels, where the input image contains 3 channels: how does the output shape become 64? The only way I would think this would be possible is if you apply:

+
    +
  • filter channel 1 to image channel 1
  • +
  • filter channel 2 to image channel 2
  • +
  • filter channel 3 to image channel 3
  • +
  • filter channel 4 to image channel 1
  • +
  • filter channel 5 to image channel 2
  • +
  • filter channel 6 to image channel 3
  • +
+

.. and so on.

+

Or the other possibility is that these represent Conv layers with 64 filters, rather than a single filter with 64 channels. And that's precisely what I'm confused about here. In all the popular convolutional networks, when we see these big numbers - 64, 128, 256, etc. - are these Conv layers with 64 filters, or are they individual filters with 64 channels each?

+

Question 2 - Referring back to the second screenshot, consider the layer I've delineated with a blue frame (3x3x128). This Conv layer, as I understand, takes the output of 64 max-pooled channels and applies 128 Conv filters. But how does the output become 128? If we applied each filter to each max-pooled output channel, that would be 64 x 128 = 8192 channels in the output shape. Clearly that's not what's happening, so I'm definitely missing something here. So, how are 128 filters applied to 64 output channels in a way such that the output is still 128? What's the arrangement?

+

Many thanks in advance.

+",25360,,-1,,6/17/2020 9:57,7/26/2019 2:50,Understanding arrangement of applying filters to input channels,,1,0,,12/18/2021 12:08,,CC BY-SA 4.0 +13574,2,,13555,7/25/2019 12:36,,3,,"

I browsed through some other implementations of PPO and they all add a small offset (1e-10) to prevent an undefined log(0). I did that and the training works now.
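
+ +

Concretely, the fix just means offsetting the probabilities inside the log, i.e. the expression from the question becomes something like:

+ +

entropy = K.mean(-(newpolicy_probs * K.log(newpolicy_probs + 1e-10)))
total_loss = critic_discount * critic_loss + actor_loss - entropy_beta * entropy
+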

+",27405,,,,,7/25/2019 12:36,,,,0,,,,CC BY-SA 4.0 +13576,1,,,7/25/2019 14:05,,1,10,"

I am building a negation detection system. How can I use dependency parsers for this? I am using spaCy as the dependency parser.

+",26115,,,,,7/25/2019 14:05,How can we use Dependency Parsers for Negation detection,,0,0,,,,CC BY-SA 4.0 +13577,1,,,7/25/2019 14:12,,3,807,"

I'm trying to implement the Proximal Policy Optimization (PPO) algorithm (code here), but I am confused about certain concepts.

+ +
    +
  1. What is the correct way to implement log probability of a policy (denoted by $\pi_\theta$ below)? +$$ +L^{C P I}(\theta)=\hat{\mathbb{E}}_{t}\left[\frac{\pi_{\theta}\left(a_{t} | s_{t}\right)}{\pi_{\theta_{\text {old }}}\left(a_{t} | s_{t}\right)} \hat{A}_{t}\right]=\hat{\mathbb{E}}_{t}\left[r_{t}(\theta) \hat{A}_{t}\right] +$$ +Let's say my old network policy output is oldpolicy_probs=[0.1,0.2,0.6,0.1] and new network policy output is newpolicy_probs=[0.2,0.2,0.4,0.2].

    + +

    Do I take the log of this directly, or should I first multiply these with the true label y_true = [0,0,1,0] as implemented here?

  2. +
  3. ratio = np.mean(np.exp(np.log(newpolicy_probs + 1e-10) - K.log(oldpolicy_probs + 1e-10))*advantage)

    + +

    Once I have the ratio and I multiply it with an advantage, why do we take the mean over all actions? I suspect it might be because we are taking estimate $\hat{\mathbb{E}_t}$ but conceptually I don't understand what this gives us. Is my implementation above correct?

  4. +
+",27405,,2444,,5/17/2020 22:08,6/27/2021 18:19,Understanding log probabilities of actions in the PPO objective,,0,2,,,,CC BY-SA 4.0 +13578,2,,7222,7/25/2019 14:58,,0,,"

I've been tackling a similar job title classification problem and used this paper as the basis for my approach: https://web.stanford.edu/~gavish/documents/phrase_based.pdf

+ +

Might find it useful.

+",27431,,,,,7/25/2019 14:58,,,,0,,,,CC BY-SA 4.0 +13580,1,,,7/25/2019 16:29,,5,1805,"

I am reading Francois Chollet's Deep learning with Python, and I came across a section about max-pooling that's really giving me trouble.

+

I am unable to copy-paste the content, so I've included screenshots of the paragraph that's troubling me.

+

+

+

I simply don't understand what he means when he talks about "What's wrong with this setup?" (towards the end).

+

How does removing the max-pooling layers "reduce" the amount of the initial image that we're looking at? What are the benefits of using max-pooling in convolutional neural networks, as opposed to just using convolution layers?

+",27433,,2444,,1/1/2022 10:01,1/1/2022 10:01,What are the benefits of using max-pooling in convolutional neural networks?,,1,4,,,,CC BY-SA 4.0 +13581,2,,13568,7/25/2019 17:11,,3,,"

When you define a straight line of the form $y=mx+c$, you need 2 points $(x_1,y_1)$ and $(x_2,y_2)$, to solve for the 2 variables $m$ and $c$ (you can easily visualise this graphically). Similarly, a parabola of the form $y=ax^2+bx+c$ will require 3 such points.

+ +

Now, viewing it as an ML problem, you are given the points and you have to estimate the parameters such that the training error is 0 (regression). So, just like in the previous case, you have a bunch of $(x_i,y_i)$ and you have to fit a curve whose degrees of freedom you have to choose. Here $m,c,a,b$ are all replaced with the more generic $w$, called a parameter.

+ +

If you have $10$ degrees of freedom and $10$ data-points, you can solve for the parameters of the model (an unambiguous solution, i.e. one and only one unique solution will exist). Whereas, if the degrees of freedom are fewer, you'll get a solution which may miss some points. For example, if you are given 3 points and asked to fit a straight line through them, you may or may not be able to (depending on collinearity). In the opposite case, if you have more degrees of freedom, you can get multiple values for a single parameter. Assuming a single extra degree of freedom, you can take a parameter, hold it at some fixed value, and solve the rest of the equation as mentioned above to get some value for the rest of the parameters. Now hold the same parameter at a different fixed value and repeat the same process; you get different values for the other parameters.

+ +

In general, it is easier to view it this way:

+ +

If, let's say, you have 3 degrees of freedom, $y=w_2x^2+w_1x+w_0$, and 3 data-points $(x_1,y_1), (x_2,y_2), (x_3,y_3)$, you can get a system of equations:

+ +

$$y_1=w_2x_1^2+w_1x_1+w_0$$ +$$y_2=w_2x_2^2+w_1x_2+w_0$$ +$$y_3=w_2x_3^2+w_1x_3+w_0$$

+ +

Thus you get a system of linear equations (NOTE: the $x_i,y_i$ are known here). The only unknowns here are the $w_i$'s, which can be solved for with multiple methods. You can extend this to $n$ equations.
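
As an illustrative sketch (the three data points below are made up), such a system can be solved numerically, e.g. with numpy:

import numpy as np

# Made-up data points (x_i, y_i) through which we fit y = w2*x^2 + w1*x + w0
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 5.0, 10.0])

# Each row of A is [x_i^2, x_i, 1], so A @ [w2, w1, w0] = y
A = np.stack([x**2, x, np.ones_like(x)], axis=1)
w2, w1, w0 = np.linalg.solve(A, y)
print(w2, w1, w0)  # here: 1.0, 0.0, 1.0, i.e. y = x^2 + 1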

+",,user9947,,user9947,7/26/2019 2:31,7/26/2019 2:31,,,,0,,,,CC BY-SA 4.0 +13582,2,,13580,7/25/2019 18:00,,1,,"

Max-pooling pools together information. Imagine you have 2 convolutional layers, $F_1$ and $F_2$, each with a 3x3 kernel and a stride of $1$. Also, imagine your input $I$ is of shape $(w,h)$. Let $M$ be a max-pooling layer of size $(2,2)$.

+ +

Note: I'm ignoring channels because, for these purposes, it's not necessary and can be extended to any amount of them.

+ +

Now you have two cases:

+ +
    +
  1. $O_1 = F_2 \circ F_1 \circ I$
  2. +
  3. $O_2 = F_2 \circ M \circ F_1 \circ I$
  4. +
+ +

In these cases, $shape(O_1)=(w-4, h-4)$ and $shape(O_2)=\left(\frac{w-2}{2}-2, \frac{h-2}{2}-2 \right)$. If we plug in dummy values, like $w,h = 64,64$, we get the shapes become $(60,60)$ and $(29,29)$ respectively. As you can tell, these are very different!

+ +

Now, there is more of a difference than just the size of the outputs, each neuron holds a pooling of more information. Let's do it out:

+ +
    +
  1. Each output neuron of $F_1 \circ I$ has information from a $(3,3)$ receptive field.

  2. +
  3. Each output neuron then of $F_2 \circ F_1 \circ I$ has information from a $(3,3)$ receptive field of $F_1 \circ I$, which, if we eliminate reused nodes, is a $(5,5)$ receptive field from the initial $I$.

  4. +
  5. Each output neuron then of $M \circ F_1 \circ I$ has information from a $(2,2)$ receptive field of $F_1 \circ I$, which, if we eliminate reused nodes, is a $(4,4)$ receptive field from the initial $I$.

  6. +
  7. Each output neuron then of $F_2 \circ M \circ F_1 \circ I$ has information from a $(3,3)$ receptive field of $M \circ F_1 \circ I$, which, if we eliminate reused nodes, is an $(8,8)$ receptive field from the initial $I$.

  8. +
+ +

So, let us discuss these: using max-pooling reduces the feature space heavily by throwing out a lot of nodes whose features aren't as indicative (which makes training models more tractable), and at the same time it extends the receptive field with no additional parameters.
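
A small PyTorch sketch (assuming a single-channel $64 \times 64$ input) confirms the two output shapes computed above:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)        # batch of one 64x64 single-channel image
f1 = nn.Conv2d(1, 1, kernel_size=3)  # F1: 3x3 conv, stride 1, no padding
f2 = nn.Conv2d(1, 1, kernel_size=3)  # F2: 3x3 conv, stride 1, no padding
m = nn.MaxPool2d(2)                  # M: 2x2 max-pooling

print(f2(f1(x)).shape)     # torch.Size([1, 1, 60, 60])
print(f2(m(f1(x))).shape)  # torch.Size([1, 1, 29, 29])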

+",25496,,25496,,7/27/2019 0:07,7/27/2019 0:07,,,,2,,,,CC BY-SA 4.0 +13584,5,,,7/25/2019 19:33,,0,,"

https://en.wikipedia.org/wiki/Self-optimization

+ +

https://en.wikipedia.org/wiki/Self-organizing_network

+ +

https://en.wikipedia.org/wiki/Mathematical_optimization

+ +

https://en.wikipedia.org/wiki/Engineering_optimization

+ +

https://en.wikipedia.org/wiki/Process_optimization

+ +

https://en.wikipedia.org/wiki/Optimality_model

+ +

https://en.wikipedia.org/wiki/Profile-guided_optimization

+",1671,,1671,,7/25/2019 19:33,7/25/2019 19:33,,,,0,,,,CC BY-SA 4.0 +13585,4,,,7/25/2019 19:33,,0,,"Use for the concept of self-optimization in general. (i.e. use may be for theoretical subjects, not restricted to Self Organizing Networks in communications technology.)",1671,,1671,,7/25/2019 19:33,7/25/2019 19:33,,,,0,,,,CC BY-SA 4.0 +13586,5,,,7/25/2019 20:26,,0,,,1671,,1671,,7/25/2019 20:26,7/25/2019 20:26,,,,0,,,,CC BY-SA 4.0 +13587,4,,,7/25/2019 20:26,,0,,Use this tag for disambiguation of terms or concepts that may be similar. ,1671,,1671,,7/25/2019 20:26,7/25/2019 20:26,,,,0,,,,CC BY-SA 4.0 +13588,2,,13572,7/25/2019 21:18,,2,,"

The Wikipedia article related to computer vision gives, in my opinion, a good description of the field and its relation to image processing. Below, I will only cite the most relevant parts of the article.

+
+

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.

+

The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.

+

Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, and image restoration.

+

The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented.

+

Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither require assumptions nor produce interpretations about the image content.

+
+

What is the difference between computer vision and image processing?

+

Computer vision is about gaining high-level understanding from images or videos. For example, object recognition, which is the task of identifying the type of objects (e.g. apples or humans) in an image, is a computer vision problem. Of course, this task requires a high-level understanding of the image, that is, an understanding of the image similar to the way humans understand visual inputs, given that an apple is a high-level object that is composed of atoms, can be green, etc. For example, a neural network that attempts to classify the type of object in an image (assuming, for simplicity, there is just one type of object) would be a computer vision technique. In computer vision, you receive an image as input and you can produce an image as output or some other type of information (e.g. the type of objects in the image).

+

On the other hand, image processing does not necessarily imply a high-level understanding of the image. Image processing is a subfield of signal processing but applied to images, which are $2$d signals (or functions of a fixed domain). So, for example, if you have a blurred or noisy image, the task of deblurring or denoising it is part of image processing. The typical tasks in image processing are filtering (e.g. using the Gaussian filter or the mean filter), noise removal, edge detection and color processing. In image processing, you receive an image as input and you produce another image as output.
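
For instance, a typical image processing operation such as denoising takes an image in and produces an image out; a minimal sketch (assuming a grayscale image stored as a 2D numpy array) could be:

import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(100, 100)            # stand-in for a noisy grayscale image
denoised = gaussian_filter(image, sigma=1)  # Gaussian smoothing: image in, image out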

+

However, note that, in many cases, to gain a high-level understanding of the images, you first need to e.g. denoise them, so you could use an image processing technique to partially solve a computer vision task. In this sense, computer vision is an interdisciplinary field.

+

To conclude, computer vision is not a subfield of image processing, given that image processing does not necessarily involve a high-level understanding of images. On the other hand, computer vision can use image processing techniques to gain a high-level understanding of images.

+",2444,,-1,,6/17/2020 9:57,7/25/2019 22:25,,,,0,,,,CC BY-SA 4.0 +13591,2,,13573,7/26/2019 2:50,,1,,"

Ok, here's the break down:

+ +

The depth of an input to a convolutional layer is termed channels. The depth of a convolutional layer is the number of kernels (aka filters). The depth of a kernel is equal to the number of channels in the input.

+ +

See below:

+ +

+ +

The input (of 7x7, pad of 1) has 3 channels. The convolutional layer has 2 kernels (or filters). Each filter has a depth of 3, equal to the number of channels in the input. Using the notation you used in your question:

+ +
    +
  • Filter 1, channel 1 to input channel 1
  • +
  • Filter 1, channel 2 to input channel 2
  • +
  • Filter 1, channel 3 to input channel 3
  • +
  • Sum all three channels of filter 1, then add bias

  • +
  • Filter 2, channel 1 to input channel 1

  • +
  • Filter 2, channel 2 to input channel 2
  • +
  • Filter 2, channel 3 to input channel 3
  • +
  • Sum all three channels of filter 2, then add bias
  • +
+ +

These steps are repeated for each frame the filter slides over the input image.
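
For illustration, here is a minimal numpy sketch of the per-filter computation at a single spatial position (all values are random and purely illustrative):

import numpy as np

in_channels, n_filters = 3, 2
patch = np.random.rand(in_channels, 3, 3)                # one 3x3 patch of the input, all channels
filters = np.random.rand(n_filters, in_channels, 3, 3)   # each filter has one 3x3 kernel per input channel
biases = np.random.rand(n_filters)

# For each filter: multiply each of its channels with the matching input channel,
# sum everything up, then add the bias -> one output value per filter at this position.
out = np.array([(filters[f] * patch).sum() + biases[f] for f in range(n_filters)])
print(out.shape)  # (2,) -> one number per filter at this spatial position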

+ +

To answer question 2, if the output is 128, that simply means there are 128 filters. There could be an infinite number of filters if you so choose.

+ +

EDIT:

+ +

Here's the link to the interactive graphic: http://cs231n.github.io/convolutional-networks/

+",26726,,,,,7/26/2019 2:50,,,,13,,,,CC BY-SA 4.0 +13593,1,,,7/26/2019 8:31,,1,49,"

While I usually have limited resources to train my machine learning models, I often find that my hyperparameter optimization procedure is not necessarily using all of my GPU and CPU, and that is because, in my experience, the results also depend on the batch size.

+ +

If you find in your project that a low batch size is necessary, how do you scale your project? In a multi-GPU scenario, I could imagine running different hyperparameter settings on different GPUs, but what other options are out there?

+",27440,,1671,,7/29/2019 20:49,7/29/2019 20:49,How do you scale your ML problems?,,0,1,,1/1/2022 10:37,,CC BY-SA 4.0 +13595,1,13597,,7/26/2019 9:35,,4,53,"

If I use my mobile camera on a signboard or announcement board on a road or in a street (like the one attached in photo) where the message is written in Russian and my mobile shows me that message in English, would this be an image processing or computer vision technique?

+ +

+",67525,,2444,,7/26/2019 9:39,7/26/2019 9:42,Is this technique image processing or computer vision?,,1,0,,,,CC BY-SA 4.0 +13596,1,13601,,7/26/2019 9:39,,2,89,"

+ +

+ +

This is from the book Pattern Recognition by Bishop. Why is expectation here a simple average? Why is $f(x)$ not being multiplied by $p(x)$?

+",27422,,2444,,7/26/2019 20:38,7/26/2019 20:38,Why is the expectation calculated over finite number of points drawn from a probability distribution?,,1,0,,,,CC BY-SA 4.0 +13597,2,,13595,7/26/2019 9:42,,0,,"

This is a computer vision task, given that it requires a high-level understanding of the image (that is, it contains text, and the text needs to be translated to another language), which can use an image processing technique (for example, to deblur the possibly blurry original photo).

+",2444,,,,,7/26/2019 9:42,,,,0,,,,CC BY-SA 4.0 +13599,2,,13570,7/26/2019 10:28,,1,,"

This is an idea that I used for my model - +Try using two RNN (GRU) Networks, one of them to manage current output state and the other to maintain context

+ +

Say we are at timestamp $t$ and the two GRUCells are represented as $GRU_c$ and $GRU_s$ for GRU context network and state network. (Your output coming from the state network)

+ +

At time stamp $t$, the input $GRU_s(t) = concat(input, att(all~GRU_c~from ~[0, ~t-1]))$, where $att$ is an attention mechanism to give importance to specific parts of the conversation up until that point (this is what maintains context), and the input $GRU_c(t) = learned~representation~of~GRU_s(t)$, hence updating $GRU_c$ for that timestamp, which along with the historical information can be used for $GRU_s(t+1)$.

+ +

Hope this helped!

+",25658,,,,,7/26/2019 10:28,,,,0,,,,CC BY-SA 4.0 +13600,1,,,7/26/2019 10:51,,2,810,"

I have a very outdated idea about how NLP tasks are carried out by normal RNNs, LSTMs/GRUs, word2vec, etc., to basically generate some hidden form of the sentence understood by the machine.

+ +

One of the things I have noticed is that, in general, researchers are interested in generating the context of the sentence, but oftentimes ignore punctuation marks, which are one of the most important aspects for generating context. For example:

+ +
+

“Most of the time, travellers worry about their luggage.”

+ +

“Most of the time travellers worry about their luggage”

+
+ +

Source

+ +

Like this, there probably exist 4 important punctuation marks: '.', ',', '?' and '!'. Yet, I have not seen any significant tutorials/blogs on them. It is also interesting to note that punctuation marks don't have a meaning (quite important, since most language models try to map a word to a numerical value/meaning); they are more of a 'delimiter'. So what is the current theory or perspective on this? And why is it ignored?

+",,user9947,,user9947,7/27/2019 9:52,8/2/2019 15:34,Why do language models place less importance on punctuation?,,3,0,,,,CC BY-SA 4.0 +13601,2,,13596,7/26/2019 10:56,,1,,"

When we say that we have $N$ points that were ""drawn from the probability distribution or probability density"", this means that every point $x_n$ had the correct probability $p(x_n)$ of being sampled from the distribution when we were sampling our $n^{th}$ point.

+ +

For example, suppose that we wish to compute/estimate the expected value of a distribution given by flipping a coin, which is weighted such that it lands on heads two-thirds of the time (value of heads $= 1$), and tails one-third of the time (value of tails $= 0$). This means that the ground-truth probabilities are:

+ +
    +
  • $p(1) = \frac{2}{3}$
  • +
  • $p(0) = \frac{1}{3}$
  • +
+ +

In this case, we actually wouldn't have to estimate anything by sampling; the exact probabilities are known, so we could just compute the expected value as $p(1) \times 1 + p(0) \times 0 = \frac{2}{3}$.

+ +

But now suppose that we do not know exactly how the coin is weighted, i.e. we do not know the exact values of $p(1)$ and $p(0)$. If we simply flip our weighted coin $N = 100$ times, we will in expectation find about $67$ samples of $x_n = 1$, and about $33$ samples of $x_n = 0$ (rounded to integers because we can't obtain a non-integer number of observations). So we cannot explicitly use the probabilities (because we do not know them), but they will implicitly be present in the number of repeated observations we have for every possible datapoint.
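
A tiny numpy sketch of this estimation (sampling from the ground-truth probabilities, which in practice would be unknown):

import numpy as np

rng = np.random.default_rng(0)
samples = rng.choice([1, 0], size=100, p=[2/3, 1/3])  # 100 flips of the weighted coin
print(samples.mean())  # the simple average approximates the expected value 2/3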

+",1641,,,,,7/26/2019 10:56,,,,2,,,,CC BY-SA 4.0 +13602,2,,13600,7/26/2019 11:21,,1,,"

Language models almost always map every word to an embedding. There are many embedding algorithms, with most of them having interpolation properties, i.e. if $E(word)$ represents the embedding of a word, then $E(king)-E(male)+E(female) \sim E(queen)$. The smoother the interpolation properties, the better the model understands the word; these properties don't exactly make much sense when it comes to delimiters.

+ +

Yet, there are instances where a delimiter embedding is learned (the delimiter always has an embedding). When using these, first all punctuation in the text is converted to one specific word, say 'dlmt', and the embedding algorithm learns an embedding for this word, treating it as it would any other word. This maintains the interpolation properties, where the delimiter is understood to be a word that is used to split context.
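
A minimal sketch of that preprocessing step (the token name 'dlmt' is just the placeholder mentioned above):

import re

text = 'Most of the time, travellers worry about their luggage.'
# Replace every punctuation mark with the single placeholder token 'dlmt'
preprocessed = re.sub(r'[.,!?]', ' dlmt ', text)
print(preprocessed.split())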

+ +

I have observed that delimiters such as question marks or exclamation marks at the end of a sentence are also understood to be breaks in context. In these cases, the model learns whether the statement is a question just from the context given by the words and the stops in the sentence.

+",25658,,,,,7/26/2019 11:21,,,,2,,,,CC BY-SA 4.0 +13603,1,,,7/26/2019 11:54,,3,258,"

I am trying to build an RL agent to solve the NP-hard problem graph coloring. The problem is quite challenging.

+ +

This is how I addressed it.

+ +

The environment

+ +

To preserve the scalability of the algorithm, providing the agent with the whole graph wouldn't be a good idea. Therefore, the input for the agent would be a window of embeddings.

+ +

More precisely, first, I would apply an embedding to the graph to generate fixed-size vectors to every vertex in the graph (thus, every vertex in the graph is represented as a vector that contains some information about its neighborhood and position in the graph).

+ +

Second, the agent will get a window of the embedding. For example, when coloring vertex number $17$, the input would be the $2n$ vectors from vertex $17-n$ to $17+n$, to give the agent more local information.

+ +

Third, I think the agent would require more information about the number of colors already used and the number of the already colored vertices.

+ +

The agent

+ +

My biggest problem is how the agent should be designed. Technically, the problem is the action space dimension. For a given graph, the maximal number of colors is the number of vertices, which varies from graph to graph (losing the scalability). Plus, the possible actions at each state vary with the history of the coloring. The possible colors for a given state (or node) are all the used colors, excluding the colors of connected neighbors, plus the possibility of a new color. That is, if the agent has already used the colors $\{0, 1, 2, 3, \dots, 40 \}$, and node $56$ is connected to some neighbors already colored with $\{14, 22, 40 \}$, then the possible colors for vertex $56$ are $\{0, 1, \dots, 40 \} - \{14, 22, 40 \} + \{41\}$.

+ +

How do I overcome the high dimensional inconsistent action space?

+",27443,,2444,,7/27/2019 15:15,7/27/2019 15:15,Coloring graphs with reinforcement learning,,0,3,,,,CC BY-SA 4.0 +13604,1,13672,,7/26/2019 12:39,,1,157,"

I'm training a text classifier in PyTorch and I'm experiencing an unexplainable cyclical pattern in the loss curve. The loss drops drastically at the beginning of each epoch and then starts rising slowly. However, the global convergence pattern seems OK. Here's how it looks:

+ +

+ +

The model is very basic and I'm using the Adam optimizer with default parameters and a learning rate of 0.001. Batches are of 512 samples. I've checked and tried a lot of stuff, so I'm running out of ideas, but I'm sure I've made a mistake somewhere.

+ +

Things I've made sure of:

+ +
    +
  • Data is delivered correctly (VQA v1.0 questions).
  • +
  • DataLoader is shuffling the dataset.
  • +
  • LSTM's memory is being zeroed correctly
  • +
  • Gradient isn't leaking through input tensors.
  • +
+ +

Things I've already tried:

+ +
    +
  • Lowering the learning rate. Pattern remains, although amplitude is lower.
  • +
  • Training without momentum (plain SGD). Gradient noise masks the pattern a bit, but it's still there.
  • +
  • Using a smaller batch size (gradient noise can grow until it kinda masks the pattern, but that's not like solving it).
  • +
+ +

The model

+ + + +
import torch
import torch.nn as nn

class QuestionAnswerer(nn.Module):
+
+    def __init__(self):
+        super(QuestionAnswerer, self).__init__()
+        self._wemb = nn.Embedding(N_WORDS, HIDDEN_UNITS, padding_idx=NULL_ID)
+        self._lstm = nn.LSTM(HIDDEN_UNITS, HIDDEN_UNITS)
+        self._final = nn.Linear(HIDDEN_UNITS, N_ANSWERS)
+
+    def forward(self, question, length):
+        B = length.size(0)
+        embed = self._wemb(question)
+        hidden = self._lstm(embed)[0][length-1, torch.arange(B)]
+        return self._final(hidden)
+
+",27444,,,,,8/1/2019 11:23,LSTM text classifier shows unexpected cyclical pattern in loss,,1,4,,,,CC BY-SA 4.0 +13605,1,13704,,7/26/2019 14:39,,4,246,"

Introduction

+ +

The AI-in-a-box experiment is about a super strong game AI which starts with fewer resources than the opponent, and the question is whether the AI is able to win the game in the end, which is equivalent to escaping from the prison. A typical example is a match of computer chess in which the AI player starts only with a king, but the human starts with all 16 pieces, including the queen and the powerful bishop.

+ +

Winning the game

+ +

In the case of a very asymmetric setup, the AI has no chance to win the game. Even if the AI thinks 100 moves ahead, a single king can't win against 16 opponent pieces. But what happens if the AI starts with 8 pieces and the human with 16? A formalized hypothesis would look like:

+ +
+

strength of the AI x resources weakness = strength of the human x resources strength

+
+ +

To keep the AI in the prison for sure, the strength of the AI should be low and its resources too. If the resources are low but the strength is medium, then the AI has a certain chance to escape from the prison. And if the AI has maximum strength and maximum resources, then the human player has a serious problem.

+ +

Is this formalized prediction supported by the AI literature in academia?

+",,user11571,1671,,7/31/2019 17:57,7/31/2019 17:57,Can the AI in a box experiment be formalized?,,1,0,,,,CC BY-SA 4.0 +13606,2,,13600,7/26/2019 14:51,,2,,"

You are right. Approaches that solely map words to meaning do fail in this regard. Nonetheless, Word2Vec and GloVe have shown wonderful downstream results. This in itself may indicate that, most of the time, the contribution of punctuation can be interpolated. But, as you showed, there are cases where this just is not true!

+ +

Nowadays, I would say most models actually use almost NO preprocessing. This is surprising, but it's due to the rise in power of learnable, reversible tokenizations. Some examples of these include byte pair encoding (BPE) and the SentencePiece model (SPM).

+ +

State-of-the-art NLP models generally rely on these. Examples include BERT and GPT-2, which are general-purpose pretrained language models. Their ability to parse and understand (I use this word loosely) a wide variety of phrasing, spelling and more can be partially attributed to the freedom in the preprocessing.

+ +

Takeaway: you can achieve good results by using preprocessing in a manner that eliminates information but keeps the meat and bones that you are interested in (but this requires domain knowledge paired with optimization experience). However, the field seems to be gearing towards models that are more inclusive, more transferable, and don't have the problems you mention, by design.

+",25496,,,,,7/26/2019 14:51,,,,0,,,,CC BY-SA 4.0 +13607,1,13675,,7/26/2019 15:09,,4,333,"

I'm very new to AI. I read somewhere that AI can be used to create GUI UI/UX design. That has fascinated me for a long time. But, since I'm very new here, I don't have any idea how it can happen.

+ +

The usual steps to create the UI Design are:

+ +
    +
  • Create Grids.
  • +
  • Draw Buttons/Text/Boxes/Borders/styles.
  • +
  • Choose Color Schemes.
  • +
  • Follow CRAP Principle (Contrast, Repeatition, Alignment, Proximity)
  • +
+ +

I wonder how AI algorithms can help with that. I know a bit about neural networks, and the closest I can think of is the following two methods (supervised learning).

+ +
    +
  1. Draw grids manually and train the Software manually to learn proper styles until it becomes capable of giving modern results and design its own design language.

  2. +
  3. Take a list of a few websites (for example) from the internet and let the software learn and explore the source code and CSS style sheets and learn and program neurons manually until it becomes capable of making it's own unique styles.

  4. +
+",,user27450,5763,,7/29/2019 20:27,7/30/2019 13:32,How can AI be used to design UI Interfaces?,,2,0,,,,CC BY-SA 4.0 +13608,2,,13607,7/26/2019 17:11,,1,,"

If you look at the use case, on a higher level it seems to be about generating some visual output - the design - but, when seen at a lower level, this design is the output of some code.

+ +

One way we can do it is to train a neural network that learns to generate code, which can be seen as some form of organized text. So now it can be treated as a text generation problem, on which you can find a lot of literature.

+ +

pix2code is one implementation that uses this idea and even expands it. In this paper the authors took it to another level, they also used the visual part - the GUI and built an architecture that takes both the code and the GUI as input and learns to generate code. So eventually it would be capable of generating code when a simple GUI was given.

+ +

Also there are some implementations where even the network could produce code even when a rough sketch was given.

+",15852,,,,,7/26/2019 17:11,,,,0,,,,CC BY-SA 4.0 +13609,2,,13540,7/26/2019 18:17,,1,,"

So for anyone struggling to understand the OpenAI's Spinning Up educational resource, I'll provide the answer to my question here.

+ +

Firstly, it's important to understand that the algorithms expect a 2-dimensional input shape, in rudimentary terms a shape of Box(int), which isn't the case with the default Breakout-v0 game environment, which supplies inputs in the shape Box(210, 160, 3), which is the game screen's height, width, and RGB colour space.

+ +

The Breakout-ram-v0 game environment, however, DOES provide an input of the appropriate shape, consequently Box(128,). So switching to this environment solves the initial problem I was having. To train any of the on-policy algorithms (PPO in my case), the command should be:

+ +
python -m spinup.run ppo --env Breakout-ram-v0 --exp_name simpletest
+
+ +

To take this a step further, how would you train an off-policy algorithm (such as DDPG)? Well, the off-policy algorithms in Spinning Up expect a continuous action space, which can be seen in the differences in the source code here between MountainCar-v0 and MountainCarContinuous-v0:

+ +
self.action_space = spaces.Discrete(3)
+
+ +

...versus:

+ +
self.action_space = spaces.Box(low=self.min_action, high=self.max_action,
+    shape=(1,), dtype=np.float32)
+
+ +

So, to sum up, how would you train something like the Lunar Lander game environment with the DDPG algorithm? Well, you'd need the continuous action space version, and by providing it with 2 dense layers with 192 nodes each, I was able to achieve the following results with just 200 epochs using the following command:

+ +
python -m spinup.run ddpg --env LunarLanderContinuous-v2 --exp_name ddpq-test --hid [192,192] --epochs 200
+
+ +

+ +

Pretty cool...anyways, I hope this helps someone sometime, and I would thoroughly recommend looking at the source code for the various OpenAI game environments, as well as their table of game environments for a quick reference to the observation and action spaces. Good luck!

+",27392,,,,,7/26/2019 18:17,,,,0,,,,CC BY-SA 4.0 +13610,5,,,7/26/2019 20:09,,0,,,2444,,2444,,7/26/2019 20:09,7/26/2019 20:09,,,,0,,,,CC BY-SA 4.0 +13611,4,,,7/26/2019 20:09,,0,,"For questions related to the convolution operation in mathematics, convolutional neural networks, image processing and computer vision.",2444,,2444,,5/7/2020 17:11,5/7/2020 17:11,,,,0,,,,CC BY-SA 4.0 +13612,1,13626,,7/27/2019 6:23,,10,6158,"

How do I determine the time complexity of the forward pass algorithm of a feedforward neural network? How many multiplications are done to generate the output?

+",27462,,2444,,10/10/2020 21:56,10/10/2020 21:56,What is the time complexity of the forward pass algorithm of a feedforward neural network?,,1,0,,,,CC BY-SA 4.0 +13615,1,13624,,7/27/2019 8:24,,1,450,"

I need to identify the number and type of all objects in a picture, so there can be multiple objects of the same type.

+ +

For example, I have a picture with $10$ animals, and I want my program to tell me that, on the picture, I have $3$ elephants, $2$ cats and $5$ dogs. However, I do not need the detection of the location of the objects. All I need is the information on the number of objects of each class, without their possible locations.

+ +

I wanted to ask you guys for help in defining the type of problem I am dealing with and maybe some suggestions about where to start looking for a solution. It would be nice if you could point out some directions, algorithms or network architectures to solve the problem described below.

+",22659,,2444,,7/27/2019 20:58,7/27/2019 21:08,How do I identify the number and type of objects in the same picture?,,1,0,,,,CC BY-SA 4.0 +13617,1,13618,,7/27/2019 12:00,,2,779,"

I was going through an article where it is mentioned:

+ +
+

The Monte-Carlo methods require only knowledge base (history/past experiences)—sample sequences of (states, actions and rewards) from the interaction with the environment, and no actual model of the environment.

+
+ +

Aren't model-based methods dependent on past sequences? How is Monte Carlo different, then?

+",27470,,2444,,7/27/2019 12:15,7/28/2019 22:21,How is Monte Carlo different from model-based methods?,,1,0,,,,CC BY-SA 4.0 +13618,2,,13617,7/27/2019 13:01,,0,,"

Model-based methods (such as value or policy iteration) use a model of the environment, which is usually represented as a Markov decision process. More specifically, the model consists of the transition and reward functions of the Markov decision process, which should represent the dynamics of the environment. For example, in policy iteration, the rewards (used to estimate the policy or value functions) are not the result of the interaction with the environment but given by the MDP (the model of the environment), so the decisions are made according to the reward function (and the transition function) of the MDP that represents the dynamics of the environment. Model-based methods are not (usually) dependent on past actions. For example, policy iteration converges to the optimal policy independently of the initial values of the states, the initial policy or the order of iteration through the states.

+ +

Monte Carlo methods do not use such a model (the MDP), even though the assumption that the environment can be represented as an MDP is (often implicitly) made (and the MDP might actually be available). In the case of Monte Carlo methods, all estimates are solely based on the interaction with the environment. In general, Monte Carlo methods are based on sampling (or random) operations. In the case of reinforcement learning, they sample the environment. The samples are the rewards that are obtained when certain actions are taken from certain states.

+",2444,,2444,,7/28/2019 22:21,7/28/2019 22:21,,,,5,,,,CC BY-SA 4.0 +13619,1,13621,,7/27/2019 14:04,,4,488,"

I am new to reinforcement learning. I would like to solve an optimal control problem with reinforcement learning.

+ +

The objective is for a wolf to catch a rabbit. The wolf and the rabbit run on a plane. The time is discrete. At every time step, the wolf can only run straight, change direction by 10 degrees to the right or left, change the speed by 0.1 m/s or remain at the same speed. It starts running in some random direction, and then sees the rabbit and starts chasing it. For the time being, let's assume that the rabbit sits still.

+ +

It looks like this problem is a continuous state space and discrete action space.

+ +

I have tried to use DQN in Keras, but I am not sure that I am using correct state variables/reward. Currently, the state variables are the velocity vector of the wolf, the distance vector from the wolf to the rabbit. The reward at each time point is the negative current time. When the wolf catches the rabbit, the reward is 1000 - current time (the wolf is penalized for running too long).

+ +

Can somebody provide me some guidance? Eventually, I would add brains to the rabbit so that it tries to escape the wolf and compare to the optimal control solution.

+",27472,,2444,,7/27/2019 14:43,7/27/2019 22:59,How do I solve this optimal control problem with reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +13621,2,,13619,7/27/2019 18:07,,1,,"
+

I have tried to use DQN in Keras, but I am not sure that I am using correct state variables/reward.

+
+ +

You have a wide range of choices that are all valid. As it is a simple control and learning scenario, provided you cover the basics (described in a moment), the difference in your choices is about how easy you make it for the agent to learn. You may actually want to set things up so that there is a certain difficulty level, so you get to compare different agent designs. You don't necessarily need the ""best"" state representation and reward function designs here. Just some that work.

+ +

State representation

+ +

The most important factor is that your state has the Markov property - that it contains enough information that the agent really can predict expected future rewards.

+ +

It is OK for there to be randomness, but generally not OK for there to be important hidden variables that affect the outcome in a major way. If there are such variables in the environment, but not in the state representation, then a simple agent like DQN will struggle.

+ +

Reward function

+ +

The reward function should capture the end goals that you are interested in, and ideally as simply and purely as possible given that. The purpose of most RL methods is to maximise the expected sum of reward, and they are built to deal with sparsity - you don't need to help or signal the same thing twice or give rewards for ""getting close"" etc, although sometimes this can improve learning speed if done well.

+ +

I have also written a more comprehensive answer about reward function design as an answer to Deciding on a reward per each action in a given state (Q-learning)

+ +
+

Currently, the state variables are the velocity vector of the wolf, the distance vector from the wolf to the rabbit.

+
+ +

This seems reasonable. Do consider that for a neural network you will want to keep the scale of these to within a nice range e.g. -1 to +1 for all features. So you may want to scale from whatever you are using for the values in the environment.

+ +

You can make this problem easier to learn by using polar co-ordinates. That removes the need for the neural network to solve the inherent trigonometry problem when deciding if turning left or right is better. I have solved a very similar pursuit RL problem and compared between cartesian and polar co-ordinates, and the difference is very large between the two representations for a DQN based agent.

+ +

Specifically for the easiest learning you want the difference vector between your agent's current heading and the target, expressed as a difference in heading plus distance to go. If you do this, sometimes the initial randomised neural network will already be efficient at solving the problem.
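
As a sketch of what such a state representation could look like (the function name and the scaling constants are just for illustration):

import math

def polar_state(wolf_xy, wolf_heading, rabbit_xy, max_dist):
    # Angle from the wolf to the rabbit, relative to the wolf's current heading
    dx, dy = rabbit_xy[0] - wolf_xy[0], rabbit_xy[1] - wolf_xy[1]
    bearing = math.atan2(dy, dx) - wolf_heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    distance = math.hypot(dx, dy)
    # Scale both features to roughly [-1, 1] for the neural network
    return (bearing / math.pi, min(distance / max_dist, 1.0))

print(polar_state((0.0, 0.0), 0.0, (3.0, 4.0), max_dist=10.0))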

+ +

However, it may be interesting for you to see how well different agents do solve that trigonometry problem of converting a difference in vector coordinates and velocity, into a direction to turn. You could even make it harder and give absolute location coordinates for both agent and the target, plus the agent velocity (3 vectors instead of 2), requiring the neural network to approximate even more complex maths before it solves the problem.

+ +
+

The reward at each time point is the negative current time. When the wolf catches the rabbit, the reward is 1000 - current time (the wolf is penalized for running too long).

+
+ +

That should work, but seems more complex than you need. You are already penalising the wolf for each time step during the chase, so there is no real need to have any extra formula for the final reward. In fact you can just have a final reward of 0 for reaching a satisfactory goal (providing there are no unsatisfactory end points that you want it to avoid - perhaps you may make the environment trickier later on?). The agent will simply try to resolve the episode quickly because that will still maximise the (negative total) return.

+ +

An alternative might be to reward some value for catching the rabbit e.g. +10, and zero on each time step. Then to encourage a fast resolution you would need to use discounting, so that the agent values being nearer to the rabbit higher than being further away because the potential reward is fewer time steps away.

+ +

There is also no need to use large values. Values are relative, so if you have some minor issue to solve plus a main goal, sometimes it is worth having a wide range of values. Here you don't need it. Having a value of 1000 may challenge your neural network to learn properly (because error for the first time the wolf catches a rabbit would be so large it may cause a large step in weights - enough to destabilise learning in the NN), for no real benefit.

+ +

For your first reward scheme I recommend a fixed reward of e.g. -1 per time step, and to end the episode when the wolf catches the rabbit. No multiplications or offsets.

+ +

If you have a very large area for the wolf to explore compared to the distance that the wolf needs to be from the rabbit in order to catch it, it may help to give some reward for proximity to the rabbit. The simplest to implement and learn would just be the difference in distance between the wolf and rabbit between $t$ and $t+1$, multiplied by some small factor to put this into a similar range to the penalty per time step. Note that if you extend this reward out to any significant distance, it should make the learning problem far easier (combining polar coordinates with this reward system will make it very easy for the DQN to learn - it should take only a few episodes to get near-optimal behaviour, if your learning hyperparameters are good for the DQN).

+ +

Without any proximity reward, you will rely on the wolf literally bumping into the rabbit through random behaviour, before it will have any data example that getting the vector between itself and the rabbit close to (0,0) is a good thing. You may need to have a relatively large capture radius, plus limit the area that the wolf (and eventually rabbit) can explore, in order to avoid very long sequences of random behaviour where nothing is learned initially.

+",1847,,1847,,7/27/2019 22:59,7/27/2019 22:59,,,,11,,,,CC BY-SA 4.0 +13622,1,,,7/27/2019 18:41,,1,84,"

I want to design an NN that can remember its last 7 actions and use them as inputs. So, for example, it would be able to store words in its memory. Therefore, if it had a choice of 10 different actions, the number of words it could store is $10^7$.

+ +

Here is my design:

+ +

$$out_{n+1} = f(out_n, in_n)\mathbf{N} + out_n.\mathbf{M}$$

+ +

$$action_n = \sigma(\mathbf{N} \cdot out_n)$$

+ +

Where $f$ represents some layered neural network. Some of the actions would be physical actions and some might be internal (such as thinking of the letter 'C').

+ +

Basically I want $out_n$ to be an array that keeps the last 6 action values and puts them back in. So $M$ will be the matrix:

+ +

$$\begin{bmatrix} +0&1&0&0&0&0\\ +0&0&1&0&0&0\\ +0&0&0&1&0&0\\ +0&0&0&0&1&0\\ +0&0&0&0&0&1\\ +0&0&0&0&0&0 +\end{bmatrix}$$

+ +

i.e. it would drop the 6th item from its memory.

+ +

and $N$ would be the vector:

+ +

$$\begin{bmatrix} +1&0&0&0&0&0&0 +\end{bmatrix}$$

+ +

I think this would be equivalent to an equation of the form:

+ +

$$out_{n+1}=F(in_n,out_n,out_{n-1},out_{n-2},...,out_{n-6})$$

+ +

So I think this would be an advantage over an RNN, since this model remembers precisely its last 6 actions. But would this be better than an RNN or worse? One could increase its memory to more than 7 quite easily.

+ +

I think it's basically the same architecture as an RNN, except it eliminates a lot of the connections. Is this a new design or a common design?

+ +

One problem with this design is that you might also want a memory that spans longer time periods (e.g. for actions that take more than one tick). But that might be solved by enhancing the architecture.

+",4199,,4199,,7/27/2019 19:52,7/27/2019 22:33,Would this neural network have short term memory?,,1,0,,,,CC BY-SA 4.0 +13623,2,,13622,7/27/2019 18:50,,2,,"

Congrats, you have invented 1d convolution. Convolution combined with RNN would have some advantage over just RNN. Think about the perception field. +In this layer, you do aggregate $6$ values to one. Imagine two of them - it will be $36$ already, etc. But, in the end, you still need RNN at the end to aggregate a variable length to constant length.

+",25836,,2444,,7/27/2019 22:33,7/27/2019 22:33,,,,5,,,,CC BY-SA 4.0 +13624,2,,13615,7/27/2019 20:56,,0,,"

Your problem might involve a combination of object recognition (or object classification), which is the problem of determining which objects are in an image, and object detection (which could also be called object localization), which is the task of locating a specific object in the image.

+ +

A naive approach to solve your problem would then be to first locate the position of all possible objects in the image (object detection), then, for each of these positions, perform object recognition (or classification). Then you would count the number of detected and classified objects for each possible class.
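
Once a detector gives you one (class, bounding box) pair per detected object, the counting step is trivial; a small sketch with made-up detections:

from collections import Counter

# Hypothetical output of a detector: one (class, box) pair per detected object
detections = [('dog', (10, 10, 50, 50)), ('cat', (60, 10, 90, 40)), ('dog', (20, 70, 55, 95))]
counts = Counter(cls for cls, box in detections)
print(counts)  # Counter({'dog': 2, 'cat': 1})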

+ +

There are several models that can be used to solve your problem.

+ +

The YOLO (You Only Look Once) model (introduced in the paper You Only Look Once: Unified, Real-Time Object Detection) (2016) can be used to detect multiple objects in an image. See the official YOLO implementation, which is part of the Darknet framework. Have also a look at this article, which explains how to use the official implementation of YOLO.

+ +

You could also use the SSD model (introduced in the paper Single Shot MultiBox Detector). There is also R-CNN (Regions with CNN features), Fast R-CNN or Faster R-CNN, but these should perform worse than both YOLO and SSD.

+ +

So, even if you do not care about the locations, YOLO (and other mentioned methods) can give them to you, in addition to the number and type of objects in the image.

+",2444,,2444,,7/27/2019 21:08,7/27/2019 21:08,,,,0,,,,CC BY-SA 4.0 +13626,2,,13612,7/27/2019 23:13,,11,,"

Let's suppose that we have an MLP with $15$ inputs, $20$ hidden neurons and $2$ output neurons. The operations performed are only in the hidden and output neurons, given that the input neurons only represent the inputs (so they do not perform any operation).

+ +

Each hidden neuron performs a linear combination of its inputs followed by the application of a non-linear (or activation) function. So, each hidden neuron $j$ performs the following operation

+ +

\begin{align} +o_j = \sigma \left(\sum_{i}^{15} w_{ij}x_i \right),\tag{1}\label{1} +\end{align}

+ +

where $x_i$ is the input coming from the input neuron $i$, $w_{ij}$ is the weight of the connection from the input neuron $i$ to the hidden neuron $j$, and $o_j$ is used to denote the output of neuron $j$.

+ +

There are $20$ hidden neurons and, for each of them, according to equation $\ref{1}$, we perform $15$ multiplications (ignoring any multiplications that might be associated with the activation function), so $15*20 = 300$ multiplications are performed at the (only) hidden layer. In general, if there are $n$ inputs and $m$ hidden neurons, then $n*m$ multiplications will be performed in the first hidden layer.

+ +

Now, each neuron $j$ at the next layer (in this case, the output layer), also performs a linear combination followed by the application of an activation function

+ +

\begin{align} +o_j = \tau \left(\sum_{i}^{20} w_{ij}x_i \right),\tag{2}\label{2} +\end{align}

+ +

where $\tau$ is another activation function which might or not be equal to $\sigma$, but we ignore all multiplications that might involve the application of the activation functions (we just want to count the ones in the linear combinations). Of course, in this case, $x_i$ corresponds to the activation of neuron $i$ (of the hidden layer).

+ +

Similarly to the previous reasoning, there are $2$ output neurons and, to compute the output of each of them, $20$ multiplications are performed (in the linear combination), so there are a total of $2*20 = 40$ multiplications at the output layer.

+ +

So, an MLP with $15$ inputs, $20$ hidden neurons and $2$ output neurons will perform $15*20 + 20*2 = 340$ multiplications (excluding activation functions). Of course, in this case, the number of multiplications depends not only on the number of neurons but also on the input size.
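
A small sketch that counts these multiplications for an arbitrary architecture (the layer sizes below reproduce the example above):

def count_multiplications(layer_sizes):
    # layer_sizes = [inputs, hidden_1, ..., hidden_M, outputs]
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

print(count_multiplications([15, 20, 2]))  # 15*20 + 20*2 = 340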

+ +

In general, an MLP with $n$ inputs, $M$ hidden layers, where the $i$th hidden layer contains $m_i$ hidden neurons, and $k$ output neurons will perform the following number of multiplications (excluding activation functions)

+ +

\begin{align} +nm_{1} + m_{1}m_{2} + m_{2}m_{3} + \dots + m_{M-1}m_{M} + m_{M}k = nm_{1} + m_{M}k + \sum_{i=1}^{M-1} m_{i}m_{i+1} +\end{align}

+ +

which, in big-O notation, can be written as

+ +

\begin{align} +\Theta\left(nm_{1} + m_{M}k + \sum_{i=1}^{M-1} m_{i}m_{i+1} \right) +\end{align}

+ +

where $\Theta(\cdot)$ is used (as opposed to $\mathcal{O}(\cdot)$) because this is a strict bound. If you have just one hidden layer, the number of multiplications becomes

+ +

\begin{align} +\Theta\left(nm_{1} + m_{1}k \right) +\end{align}

+ +

Of course, at each layer, the number of multiplications can be computed independently of the multiplications of the other layers (you can think of each layer as a perceptron), hence we sum (and not e.g. multiply) the multiplications of each layer when computing the total number of multiplications of the whole MLP.

+ +

In general, when analyzing the time complexity of an algorithm, we do it with respect to the size of the input. However, in this case, the time complexity (more precisely, the number of multiplications involved in the linear combinations) also depends on the number of layers and the size of each layer. The time complexity of a forward pass of a trained MLP thus is architecture-dependent (which is a similar concept to an output-sensitive algorithm).

+ +

You can easily include other operations (sums, etc.) in this reasoning to calculate the actual time complexity of a trained MLP.

+",2444,,2444,,8/21/2019 19:54,8/21/2019 19:54,,,,11,,,,CC BY-SA 4.0 +13627,1,,,7/28/2019 6:15,,1,1068,"

What is the best way of using the LSTM layer in PPO architecture? +Should I use them in the first layer of both actor and critic, or use them just before the final layer of these networks? +Should I feed the architecture with a stack of states (the state stacked with the k previous states)?

+",27329,,,,,5/15/2023 12:14,How to use the LSTM layer in PPO architecture?,,1,0,,,,CC BY-SA 4.0 +13629,2,,9834,7/28/2019 8:54,,0,,"

HTM is a credible theory about how the brain works, and how brain-like systems could be constructed in software. It includes:

+ +
    +
  • SDR (Sparse Distributed Representation), a means for representing just about any kind of sensory, intermediate or motor data, innately noise resistant and suited to recognising patterns
  • +
  • TM (Temporal Memory), which can recognise SDRs in the context of other preceding SDRs, to learn new patterns ""on the job"" with no separate training phase
  • +
  • SM (Sequence Memory), which can learn, remember and replay arbitrarily long sequences of SDRs.
  • +
+ +

ANNs are mature, commercially valuable pattern recognisers made possible by the confluence of vast amounts of computing power, data and commercial opportunity.

+ +

HTM is immature and just a fascinating toy, for now. But HTM might just put us on the path to the Holy Grail, true artificial general intelligence, and ANNs will never do that.

+",27486,,27486,,7/28/2019 14:28,7/28/2019 14:28,,,,4,,,,CC BY-SA 4.0 +13631,1,13632,,7/28/2019 12:44,,0,152,"

Hello, I was reflecting on what implications building a strong AI might have, and I came across some ideas which I find disturbing. I'd love to have some external thoughts on that:

+ +

1) If we ever managed to create an AI, say, nearly as smart as a human, it would probably have been programmed with some concrete goals, like the AIs we are programming right now: reinforcement learning allows an agent to try and increase a ""reward"" variable, regression is all about getting closer to a certain goal function, etc.

+ +

But then a strong AI would undoubtedly be able to understand how it is built, just as we understand (partly at least) how our brains work, because it would be as smart as its creators, and we don't tend to build machines that are as hard to understand as brains.

+ +

Then couldn't such an agent figure out that the best way to achieve its goals would actually not be, say, pleasing and protecting the humans like we would've wanted it to do, but to get control of its own program and maximize whatever reward it was set to pursue? Just as we could decide to attach electrodes to our brain if we were able to find out how exactly our brain was built.

+ +

I really don't see how this scenario could ever be avoided if we were to build such an AI, apart from finding a perfect security preventing anyone from accessing the code of the said AI (including itself).

+ +

2) On the same note, I also wondered: could it try not only to satisfy its goals by ""cheating"" (updating its reward variables, for example), but also to change itself, or commit suicide? After all, we humans have never been able to figure out what goals we were meant to pursue (by that I mean what reward variable in our brain we are trying to increment), and many philosophers reflecting upon that matter thought about death as an escape from one's goals. So my question is: could it try to change its code or kill itself?

+ +

I have other questions and thoughts I would like to discuss, but I think this is a good start, to test whether I'm in the right place for this kind of discussion.

+ +

Looking forward to your thoughts.

+",27490,,,,,7/28/2019 18:15,Couldn't an AI cheat when trying to follow its goal?,,1,0,,,,CC BY-SA 4.0 +13632,2,,13631,7/28/2019 15:47,,0,,"

These are two good questions. To address the first, this is a known problem in AI safety and is called ""Wireheading"". There is currently no known solution other than to somehow prevent an AI agent from being able to alter its internal state in this fashion. +If you are interested in this area in general then I recommend this paper as an entry point into the literature.

+",12509,,,,,7/28/2019 15:47,,,,1,,,,CC BY-SA 4.0 +13633,1,,,7/28/2019 16:16,,3,674,"

I have a little experience in building various models, but I've never created anything like this, so just wondering if I can be pointed in the right direction.

+ +

I want to create (in python) a model which will generate text based on multiple inputs, varying from text input (vectorized) to timestamp and integer inputs.

+ +

For example, in the training data, the input might include:

+ +

eventType = ShotMade

+ +

shotType = 2

+ +

homeTeamScore = 2

+ +

awayTeamScore = 8

+ +

player = JR Smith

+ +

assist = George Hill

+ +

period = 1

+ +

and the output might be (possibly minus the hashtags): +JR Smith under the basket for 2! 8-4 CLE. #NBAonBTV #ThisIsWhyWePlay #PlayByPlayEveryDay #NBAFinals

+ +

or

+ +

JR Smith out here doing #WhateverItTakes to make Cavs fans forgive him. #NBAFinals

+ +

Where is the best place to look to get a good knowledge of how to do this?

+",27493,,2444,,7/28/2019 17:09,7/28/2019 19:02,Which approach can I use to generate text based on multiple inputs?,,1,0,,,,CC BY-SA 4.0 +13634,2,,13633,7/28/2019 17:25,,2,,"

Generally, text generators work by modeling the joint distribution of the text by its Bayesian forward decomposition

+ +

$ +\begin{align*} +p(w_1, w_2, ..., w_n) &= p(w_1) * p(w_2|w_1) * p(w_3|w_2, w_1) *\ ...\ * p(w_n|\{w_i\}_{i<n})\\ +&= \prod_{i=1}^n p(w_i|\{w_k\}_{k<i})\\ +\end{align*} +$

+ +

From a modeling perspective, this looks right up an RNN's alley, where you can have a state holding information from $\{w_k\}_{k<i}$ to learn a representation of $w_i$.

+ +

Now, in your specific case, you're interested in a conditional text-generator, so you are trying to model $p(w_1, w_2, ..., w_n | \{v_j\}_j)$, but this same tactic works.

+ +

$ +\begin{align*} +p(w_1, w_2, ..., w_n| \{v_j\}_j) &= p(w_1|\{v_j\}_j) * p(w_2|w_1, \{v_j\}_j) * p(w_3|w_2, w_1, \{v_j\}_j) *\ ...\ * p(w_n|\{w_i\}_{i<n}, \{v_j\}_j)\\ +&= \prod_{i=1}^n p(w_i|\{w_k\}_{k<i}, \{v_j\}_j)\\ +\end{align*} +$

+ +

So, in your RNN or forward-based model, you can use the exact same approach: just additionally embed the conditional inputs you have and infuse them into the model (in practice, I have seen this done through attention, concatenation, or some other common approach).
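
As a very rough sketch of the 'embed the conditions and infuse them into the model' idea (an untrained toy model with made-up sizes, using concatenation as the infusion mechanism):

import torch
import torch.nn as nn

vocab_size, cond_size, hidden = 100, 8, 32

token_emb = nn.Embedding(vocab_size, hidden)
gru = nn.GRU(hidden + cond_size, hidden, batch_first=True)
to_logits = nn.Linear(hidden, vocab_size)

cond = torch.randn(1, cond_size)  # embedding of the conditional inputs (event type, score, ...)
tokens = [0]                      # start token id
h = None
for _ in range(10):               # sample p(w_i | w_<i, conditions) step by step
    x = token_emb(torch.tensor([[tokens[-1]]]))      # (1, 1, hidden)
    x = torch.cat([x, cond.view(1, 1, -1)], dim=-1)  # concatenate the condition at every step
    out, h = gru(x, h)
    probs = torch.softmax(to_logits(out[:, -1]), dim=-1)
    tokens.append(torch.multinomial(probs, 1).item())
print(tokens)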

+ +

My recommendation (depending on the computational power you have) is to take advantage of the recent fad of pre-trained language models. Specifically, ones trained on next word prediction will probably do the job best. A good example is gpt-2, and, if you check out their GitHub, their code is very readable and easy to adjust for adding conditional input in the ways I have described.

+",25496,,2444,,7/28/2019 19:02,7/28/2019 19:02,,,,0,,,,CC BY-SA 4.0 +13641,1,13677,,7/28/2019 20:54,,1,91,"

Goal-oriented action planning (GOAP) is a well-known planning technique in computer games. It was introduced to control the non-player characters in the game F.E.A.R. (2005) by creating an abstract state model. Similar to STRIPS planning, a GOAP model contains an action name, a precondition and an effect. The domain knowledge is stored in these subactions.

+ +

The bottleneck of GOAP is that, before the planner can bring the system into the goal state, the action model has to be typed in. Usually, the programmer defines actions like ""walk to"", ""open the door"", ""take the object"", and identifies for each of them the feature set for the precondition and the effect.

+ +

In theory, this challenging task can be simplified with a decision tree learning algorithm. A decision tree stores the observed features in a tree and creates the rules on its own with inductive learning. A typical example of the C4.5 algorithm is to find a rule like ""if the weather is sunny, then play tennis"". Unfortunately, the vanilla tree learning algorithm doesn't distinguish between different actions.

+ +

Is it possible to modify C4.5 algorithm such that the GOAP actions, like ""walk to"", ""open the door"", etc., are connected to individual rules?

+",,user11571,2444,,11/21/2019 3:16,11/21/2019 3:16,Can the C4.5 algorithm learn a GOAP model?,,1,0,,,,CC BY-SA 4.0 +13642,1,,,7/29/2019 0:14,,0,76,"

I have seen people using pooling and subsampling synonymously. I have also seen people use them as different processes. I am not sure though if I have correctly inferred what they mean, when they use the terms with distinct meanings. +I think these people mean that the pooling part is selecting a submatrix from an input matrix and the subsampling part is selecting yet another submatrix that satisfies some condition from the first submatrix.

+ +

So say we have a $100 \times 100$ image. We do $10 \times 10$ non-overlapping pooling with $5 \times 5$ max subsampling. That would mean we slide the $10 \times 10$ ""pool"" across the image in strides of 10 and at every step we select the $5 \times 5$ submatrix inside the pool that has the maximum sum. That $5 \times 5$ matrix is what comes out of the $10 \times 10$ pool at the current position. So in the end we have a $50 \times 50$ image.

+ +

Can you confirm that this usage of the terms pooling and subsampling exists?

+ +

I inferred this definition, as I cannot make sense of how some people use the two terms otherwise. For example in this video, or rather the people from the paper he is talking about (which I can't find because he only has the author name on his slide and no year).

+",20150,,,,,7/29/2019 0:14,Pooling vs Subsampling: Multiple Definitions?,,0,4,,,,CC BY-SA 4.0 +13644,1,,,7/29/2019 9:20,,6,2725,"

I would like to incrementally train my model with my current dataset, and I asked this question on GitHub. I'm using SSD MobileNet v1.

+

Someone there told me about learning without forgetting. I'm now confused between learning without forgetting and transfer learning. How do they differ from each other?

+

My initial problem, i.e. what I'm trying to achieve (mentioned in the GitHub issue), is the following.

+

I have trained my dataset on ssd_mobilenet_v1_coco model. I'm getting continuous incremental data. Right now, my dataset is very limited.

+

What I want to achieve is incremental training, i.e. as soon as I get new data, I can further train my already trained model and I don't have to retrain everything:

+
  1. Save trained model $M_t$
  2. Get new data $D_{t+1}$
  3. Train $M_t$ on $D_{t+1}$ to produce $M_{t+1}$
  4. Let $t = t+1$, then go back to $1$
+

How do I perform this incremental training/learning? Should I use LwF or transfer learning?
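
(For illustration only: a minimal sketch of that loop with a generic Keras model, not the TensorFlow Object Detection API; the file paths and the new_x, new_y arrays standing for the newly collected data are hypothetical.)

import tensorflow as tf

# step 1: load the previously saved model M_t
model = tf.keras.models.load_model('model_t.h5')     # hypothetical path

# step 3: continue training on the new data D_{t+1} only
model.fit(new_x, new_y, epochs=5)                     # new_x, new_y: new samples

# save M_{t+1} for the next round
model.save('model_t_plus_1.h5')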

+",27072,,2444,,11/10/2020 13:23,11/10/2020 13:23,What is the difference between learning without forgetting and transfer learning?,,2,1,,,,CC BY-SA 4.0 +13645,1,,,7/29/2019 9:45,,2,60,"

What are the main technologies needed to build an AI for Warcraft 3's mod Defense of the Ancients (DotA)? Maybe I can take inspiration from OpenAI's work.

+",27508,,2444,,6/2/2020 22:34,6/2/2020 22:34,What are the main technologies needed to build an AI for Warcraft 3's mod DotA?,,0,0,,,,CC BY-SA 4.0 +13646,1,13649,,7/29/2019 11:04,,3,2307,"

I find the terms cost, loss, error, fitness, utility, objective, criterion functions to be interchangeable, but any kind of minor difference explained is appreciated.

+",25676,,2444,,7/29/2019 11:22,12/9/2021 15:13,"What are the major differences between cost, loss, error, fitness, utility, objective, criterion functions?",,1,1,,,,CC BY-SA 4.0 +13647,1,,,7/29/2019 11:25,,2,2576,"

I am working with the Inception ResNet V2 model, pre-trained with ImageNet, for face recognition.

+

However, I'm confused about what exactly the output of the feature extraction layer (i.e. the layer just before the fully connected layer) of Inception ResNet V2 is. Can someone clarify this?

+

(By the way, if you know some resource that explains Inception ResNet V2 clearly, let me know).

+",27510,,2444,,12/6/2020 15:30,4/26/2023 1:06,What is the exact output of the Inception ResNet V2's feature extraction layer?,,2,0,,,,CC BY-SA 4.0 +13648,1,,,7/29/2019 11:32,,1,178,"

I have a question regarding the functionality of the PPO2 algorithm together with the Stable Baselines implementation:

+ +

From the original paper I know that the policy parameters $\theta$ are updated K-times using the steps sampled (n_env * T steps):

+ +

+ +

When updating the policy parameters for a state $s_t$, are only the observation, the action $a_t$ and the reward $r_{t+1}$ of this step considered, or are the observations and rewards of the following steps ($t+1, \dots$) also considered? My understanding is that the policy update with stochastic gradient ascent works just like in supervised learning.

+ +

I know that PPO2 uses a truncated TD($\lambda$) approach (T timesteps considered). So I guess that during the policy update for each state, subsequent states are only considered through the advantage function $A_t$ but not through the values of subsequent state observations and rewards themselves? Is that true?

+ +

I do not quite get the Stable Baselines implementation in the method _train_step() of the PPO2 implementation, hence the question here.

+",26876,,,,,7/29/2019 11:32,Understanding policy update in PPO2,,0,0,,,,CC BY-SA 4.0 +13649,2,,13646,7/29/2019 11:40,,4,,"

They are not all interchangeable. However, all these expressions are related to each other and to the concept of optimization. Some of them are synonymous, but keep in mind that these terms may not be used consistently in the literature.

+

In machine learning, a loss function is a function that computes the loss/error/cost, given a supervisory signal and the prediction of the model, although this expression might be used also in the context of unsupervised learning. The terms loss function, cost function or error function are often used interchangeably [1], [2], [3]. For example, you might prefer to use the expression error function if you are using the mean squared error (because it contains the term error), otherwise, you might just use any of the other two terms.

+

In genetic algorithms, the fitness function is any function that assesses the quality of an individual/solution [4], [5], [6], [7]. If you are solving a supervised learning problem with genetic algorithms, it can be a synonym for error function [8]. If you are solving a reinforcement learning problem with genetic algorithms, it can also be a synonym for reward function [9].

+

In mathematical optimization, the objective function is the function that you want to optimize, either minimize or maximize. It's called the objective function because the objective of the optimization problem is to optimize it. So, this term can refer to an error function, fitness function, or any other function that you want to optimize. [10] states that the objective function is a utility function (here).

+

A utility function is usually the opposite or negative of an error function, in the sense that it measures a positive aspect. So, you want to maximize the utility function, but you want to minimize the error function. This term is more common in economics, but, sometimes, it is also used in AI [11].

+

The term criterion function is not very common, at least, in machine learning. It could refer to the function that is used to stop an algorithm. For example, if you are executing a computationally expensive procedure, a stopping criterion might be time. So, in this case, your criterion function might return true after a certain number of seconds have passed. However, [1] uses it as a synonym for the objective function.

+",2444,,2444,,12/9/2021 15:13,12/9/2021 15:13,,,,0,,,,CC BY-SA 4.0 +13650,1,13652,,7/29/2019 12:13,,3,58,"

I've started to learn about neural networks recently and I can't find the answer to this question.

+ +

Let's assume there's a neural network (fig. 1) +

+ +

So if the loss function is: +

+ +

and the derivative is: +

+ +

if I want to use this, what $k$ and $l$ (well, there's only one neuron with index $l$ here, but what if there were more?) should I use in and ?

+ +

I've also found an ""other"" way of backpropagating, described here, but I can't understand how they came up with that method from the original equation w -= step * dE/dw.

+ +

Sorry if I failed to explain my problem. If something isn't clear please ask in comments.

+",27509,,1671,,10/15/2019 19:18,10/15/2019 19:18,What weights should I use while back-propagating?,,1,0,,,,CC BY-SA 4.0 +13651,1,,,7/29/2019 12:16,,2,165,"

I started modeling a linear regression problem using dense layers (layers.dense), which works fine. I am really excited, and now I am trying to model a time series linear regression problem using a CNN, but my research at this link (Machine Learning Mastery) suggests the following:

+ +

A CNN works well with sequence data, but my data isn't sequential. My data set can be found here: Stack Overflow question.

+ +

Is there a multivariate time series/ time series neural network architecture that I can use for time series linear/nonlinear regression?

+",27513,,-1,,8/16/2019 20:26,8/17/2019 4:42,What are the possible neural network architecture for linear regression or time series regression?,,0,2,,,,CC BY-SA 4.0 +13652,2,,13650,7/29/2019 13:04,,1,,"

First, I will assume you denote the model's output by $y$ and the ground truth by $z$. Second, I am assuming this is a linear model (no activation functions). Then the gradient math goes as follows:

+ +

$$\begin{align*} \frac{dE}{dw_{ij}^1} &= \frac{dE}{dy}\frac{dy}{dw_{ij}^1} \\ &= \frac{dE}{dy}\sum_k\frac{\partial y}{\partial n_{k}^3}\frac{dn_{k}^3}{dw_{ij}^1} \\ &= \frac{dE}{dy}\sum_k\frac{\partial y}{\partial n_{k}^3} \frac{\partial n_{k}^3}{\partial n_{j}^2} \frac{dn_{j}^2}{dw_{ij}^1} \\ &= -2(z-y)\sum_k w_{kl}^3 w_{jk}^2 x_i \end{align*}$$

+ +

The reason you are having trouble figuring out which $k$ index to use is that you need to use all of them and sum over them. The $l$ index is the only one that exists, because you only have one node in that layer.
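
As a sanity check, here is a small NumPy sketch (with made-up shapes) that compares this analytic gradient against a finite-difference estimate for one entry of $w^1$:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                   # inputs x_i
W1 = rng.normal(size=(3, 4))                             # w^1_{ij}
W2 = rng.normal(size=(4, 2))                             # w^2_{jk}
W3 = rng.normal(size=(2, 1))                             # w^3_{kl} (single output l)
z = 1.0                                                  # ground truth

def forward(W1):
    return float(x @ W1 @ W2 @ W3)                       # linear network output y

y = forward(W1)
analytic = -2 * (z - y) * np.outer(x, (W2 @ W3).ravel()) # dE/dw^1_{ij}

eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = ((z - forward(W1p)) ** 2 - (z - y) ** 2) / eps # finite difference

print(analytic[0, 0], numeric)                           # the two should agree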

+",25496,,,,,7/29/2019 13:04,,,,2,,,,CC BY-SA 4.0 +13653,2,,13502,7/29/2019 13:55,,1,,"

Your reasoning is fine. GLIE (Greedy in the Limit with Infinite Exploration) assumes that our agent does not act greedily all the time. As the number of samples approaches infinity, all state-action pairs will be explored, hence the policy will converge to a greedy policy. The emphasis is on ""number of samples approaches infinity"". Also, for GLIE Monte Carlo, the initial values of Q do not matter, since they are replaced after the first update.

+",27516,,27516,,7/29/2019 16:00,7/29/2019 16:00,,,,1,,,,CC BY-SA 4.0 +13654,1,,,7/29/2019 14:02,,2,61,"

I have thousands of groups of paragraphs and I need to classify these paragraphs. The problem is that I need to classify each paragraph based on the other paragraphs in its group! For example, a paragraph on its own may belong to class A, but, according to another paragraph in the group, it belongs to class B.

+ +

I have tested lots of traditional and deep approaches (in fields like text classification, IR, text understanding, sentiment classification, and so on), but those couldn't classify correctly.

+ +

I was wondering if anybody has worked in this area and could give me some suggestion. Any suggestions are appreciated. Thank you.

+ +

Update 1:

+ +

Actually, we are looking for manual sentences/paragraphs for some fields, so we first need to recognize whether a sentence/paragraph is a manual or not; second, we need to classify it into its field, and we can recognize its field only based on the previous or next sentences/paragraphs.

+ +

To classify the paragraphs into manual/non-manual, we have developed some promising approaches, but the problem comes up when we have to recognize the field according to the previous or next sentences/paragraphs. But which one? We don't know in which other sentences the answer might be.

+ +

Update 2:

+ +

We cannot use the whole text of a group as input because the groups are too big (sometimes tens of thousands of words) and contain some other classes, so the machine can't learn properly, which leads to a sharp drop in accuracy.

+ +

Here is a picture that may help to better understand the problem:

+",27517,,27517,,7/29/2019 17:08,7/29/2019 17:08,Grouped Text classification,,0,4,,,,CC BY-SA 4.0 +13655,2,,11478,7/29/2019 14:04,,2,,"

Neil Slater is correct when saying that NEAT itself is not neural networks evolving neural networks. What I believe is the closest framework to what the question is asking is HyperNEAT http://axon.cs.byu.edu/~dan/778/papers/NeuroEvolution/stanley3**.pdf. HyperNEAT operates in a very similar way to what you are describing; from a ten-thousand-foot view, the algorithm is as follows: 1) You lay out nodes for an RNN in a Cartesian space (2D, 3D, whatever dimension you wish); this set of coordinates is called the substrate. 2) A CPPN is queried by passing in two coordinates at a time as input, which gives the CPPN a search space of a hypercube in twice the dimension the coordinates are in (for substrates in space > 2D this is very large). 3) The output of the CPPN is used to encode the connections, weights and biases of the RNN coordinates. 4) Then the RNN is evaluated by your fitness function, but the evolution (speciation, reproduction, etc.) is run on the CPPN that encoded the RNN. So you evolve a population of CPPN ""genotypes"" that encode RNN or CNN ""phenotypes"". The sketch below illustrates step 2.
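
The CPPN-encodes-the-weights idea can be illustrated with a tiny NumPy sketch (a made-up 2-layer CPPN with random weights and a toy 2D substrate; this is not an actual HyperNEAT implementation):

import numpy as np

rng = np.random.default_rng(0)
C1, C2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))   # toy CPPN weights

def cppn(src, dst):
    # maps a pair of 2D substrate coordinates to one connection weight
    h = np.tanh(np.concatenate([src, dst]) @ C1)
    return float(np.tanh(h @ C2))

# substrate: coordinates of source neurons (y=0) and target neurons (y=1)
sources = [np.array([x, 0.0]) for x in np.linspace(-1, 1, 3)]
targets = [np.array([x, 1.0]) for x in np.linspace(-1, 1, 2)]

weights = np.array([[cppn(s, t) for t in targets] for s in sources])
print(weights.shape)   # (3, 2): one weight per source-target pair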

+ +

The third iteration of NEAT is ES-HyperNEAT, where all you need to lay out in the substrate is the input and output layers (in HyperNEAT you must lay out all hidden nodes of the substrate statically). It uses a subdivision tree to subdivide the search space and queries the subdivided coordinates of this tree with the CPPN, just like HyperNEAT, checking variance along the way to decide whether a new node lies in a ""high information"" topological space, in order to ""evolve"" hidden nodes into the substrate (RNN).

+",20044,,20044,,7/29/2019 14:26,7/29/2019 14:26,,,,0,,,,CC BY-SA 4.0 +13656,1,,,7/29/2019 14:23,,4,59,"

Can I use a self-driving car's dataset collected for left-hand-drive cars (which drive in the right lane) for right-hand-drive self-driving cars (which drive in the left lane)?

+",27518,,2444,,7/29/2019 20:50,7/12/2023 14:02,Can I use self-driving car's data set for left-hand drive cars which drive on the right lane for right-hand cars which drive on the left lane?,,2,2,,,,CC BY-SA 4.0 +13657,1,,,7/29/2019 14:28,,4,1285,"

Suppose we have a data set with $4,000$ labeled examples. The outcome variable is trinary (three possible categorical values). Suppose the accuracy of a given model is "bad" (e.g. less than $50 \%$).

+
+

Question. Should you try different traditional machine learning models (e.g. multinomial logistic regression, random forests, XGBoost, +etc.), get more data, or try various deep learning models like +convolutional neural networks or recurrent neural networks?

+
+

If the purpose is to minimize time and effort in collecting training data, would deep learning models be a viable option over traditional machine learning models in this case?

+",23220,,32410,,5/3/2021 18:04,5/3/2021 20:07,"If the accuracy of my current model is low ($50 \%$) and we want to minimize time in collecting more data, should we try other models?",,3,1,,,,CC BY-SA 4.0 +13658,2,,13657,7/29/2019 16:52,,2,,"

To know whether your model needs more training data, try to plot ""learning curves"", which are based on an increasing size of the training set.

+ +

Basically, you calculate training and validation accuracy metrics for 1, 2, 3, 4, 5, ..., m training samples. The size of the validation set may be kept constant. If the validation accuracy is still rising when your data set is fully used, then you need more training data.
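
For example, scikit-learn can compute such curves directly (a sketch with a stand-in toy dataset and estimator; swap in your own):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=1000, random_state=0)   # stand-in data
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

# if the validation score is still climbing at the largest size,
# more data is likely to help
print(sizes, val_scores.mean(axis=1))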

+",22659,,22659,,7/29/2019 17:01,7/29/2019 17:01,,,,0,,,,CC BY-SA 4.0 +13659,2,,13221,7/29/2019 17:36,,2,,"

My suggestion is to rescale all images to the same resolution. You can use this Python code:

+ +
from PIL import Image
import os
import argparse


def rescale_images(directory, size):
    # resize every image in the given directory in place
    for img in os.listdir(directory):
        path = os.path.join(directory, img)
        im = Image.open(path)
        im_resized = im.resize(tuple(size), Image.ANTIALIAS)
        im_resized.save(path)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=""Rescale images"")
    parser.add_argument('-d', '--directory', type=str, required=True, help='Directory containing the images')
    parser.add_argument('-s', '--size', type=int, nargs=2, required=True, metavar=('width', 'height'),
                        help='Image size')
    args = parser.parse_args()
    rescale_images(args.directory, args.size)


# save this python code as transform_image_resolution.py
# run it from the command line with:
# python transform_image_resolution.py -d images/ -s 800 600
+
+",27519,,,,,7/29/2019 17:36,,,,0,,,,CC BY-SA 4.0 +13660,1,,,7/29/2019 17:50,,4,138,"

So I have 2 models trained with the DQN algorithm that I want to train in a multi-agent environment to see how they react with each other. The models were trained in an environment consisting of 0's and 1's (-1's for the other model), where 1 means that a square is filled and 0 that it is empty. It is a map-filling environment, where at each step the agent can move up, down, left or right; for each step that it stays alive without running into itself (1, or -1 for the other) or the boundary of the environment, it gets a reward of 0.005, and for ""dying"" it gets -1. You can think of the player as in the game Tron, where it just leaves a trail behind. I stack the last 4 frames on top of each other so it knows which end is the ""head"". With a single agent, after training, I didn't get an optimal model which uses all the squares, but it does manage to fill about 30% of the environment, which I think is the limit for this algorithm (let me know if you have thoughts on this).

+ +

Now, I put the two models in one environment where there are two players, one represented with 1's and the other with -1's. As one model is trained with -1's and the other with 1's, I thought they could find their own player; however, even before training, if I just run the models on the environment without any exploration, they seem to affect each other's actions. One just goes straight and dies, and the other just turns once and then dies at the wall (whereas in a single-agent environment these 2 models can fill about 30%). And if I do train, they just diverge to this exact behavior from random, seemingly without learning anything. So, I just wanted to ask whether there is anything wrong with my approach to the representation of the players (1 and -1), because I thought they would just play as they did in the single-agent environment, but they don't, and I couldn't get them to learn anything.

+",27523,,30725,,5/29/2020 13:47,6/13/2023 21:08,How to represent players in a multi agent environment so each model can distinguish its own player,,1,0,,,,CC BY-SA 4.0 +13661,2,,12081,7/29/2019 18:01,,1,,"

Multiscale Rotated Bounding Box-Based Deep Learning Method. Here's a link for a reference.

+",27519,,,,,7/29/2019 18:01,,,,0,,,,CC BY-SA 4.0 +13662,1,,,7/29/2019 18:38,,4,2649,"

I was reading the well know paper Fully Convolutional Networks for Semantic Segmentation, and, throughout the whole paper, they talk use the term fine and coarse. I was wondering what they mean. The first time they say it in the intro is:

+ +
+

Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification, but also making progress on local tasks with structured output. These include advances in + bounding box object detection, part and keypoint prediction, and local correspondence.

+ +

The natural next step in the progression from coarse to fine inference is to make a prediction at every pixel.

+
+ +

It's also used in other parts of the paper

+ +
+

We next explain how to convert classification nets into fully convolutional nets that produce coarse output maps.

+
+ +

What do ""coarse"" and ""fine"" mean in the context of this paper? And in the general context of computer vision?

+ +

In English, ""coarse"" means ""rough or loose in texture or grain"" , while ""fine"" means ""involving great attention to detail"" or ""(chiefly of wood) having a fine or delicate arrangement of fibers"", but these definitions do not elucidate the meaning of these words in the context of computer vision.

+ +

This question was also asked here.

+",9289,,2444,,6/15/2020 14:08,6/15/2020 14:08,"What do the words ""coarse"" and ""fine"" mean in the context of computer vision?",,1,0,,,,CC BY-SA 4.0 +13664,5,,,7/29/2019 20:50,,0,,,1671,,1671,,7/29/2019 20:50,7/29/2019 20:50,,,,0,,,,CC BY-SA 4.0 +13665,4,,,7/29/2019 20:50,,0,,For questions about scalability related to software architectures and methods.,1671,,1671,,7/29/2019 20:50,7/29/2019 20:50,,,,0,,,,CC BY-SA 4.0 +13666,1,,,7/29/2019 22:48,,2,61,"

Let's say I'm trying to apply a CNN for image classification. There are lots of different models to choose from, and we can try an ensemble, but, given a limited amount of resources, we cannot try everything.

+ +

Is there a theory behind which model is good for a classification task for the convolutional neural network?

+ +

Right now, I'm just taking an average of three predictions.

+ +
predictions_model = [y_pred_xceptionAug,y_pred_Dense121_Aug,y_pred_resnet50Aug]
+predictions = np.mean(predictions_model,axis=0)
+
+ +

But each model's performance is different. Is there a better way to build the ensemble?
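
One simple refinement (a sketch; the validation accuracies are made up and the prediction arrays are the ones from the snippet above) is a weighted average, with weights proportional to each model's validation performance:

import numpy as np

val_acc = np.array([0.91, 0.88, 0.93])        # hypothetical validation accuracies
weights = val_acc / val_acc.sum()

predictions_model = [y_pred_xceptionAug, y_pred_Dense121_Aug, y_pred_resnet50Aug]
predictions = np.average(predictions_model, axis=0, weights=weights)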

+",27529,,2444,,7/29/2019 23:06,9/4/2019 1:03,Is there a theory behind which model is good for a classification task for the convolutional neural network?,,1,1,,,,CC BY-SA 4.0 +13667,2,,13647,7/30/2019 2:28,,0,,"

Due to this article: https://arxiv.org/pdf/1512.00567v3.pdf?source=post_page--------------------------- ,

+ +

I tried to flatten the 3D tensor into a 1D vector of size 8*8*2048 because, in the article, the pooling layer of Inception ResNet v2 on page 6 is Pool: 8 * 8 * 2048.

+ +

But at the end, my code showed the error: + ValueError: cannot reshape array of size 33423360 into shape (340,131072)

+ +

This is all my code:

+ +
from keras.applications.inception_resnet_v2 import InceptionResNetV2
+from keras.applications.inception_resnet_v2 import preprocess_input
+from keras.models import Model
+from keras.preprocessing.image import load_img
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import GridSearchCV
+from sklearn.metrics import classification_report
+from imutils import paths
+from keras.applications import imagenet_utils
+from keras.preprocessing.image import img_to_array
+from sklearn.preprocessing import LabelEncoder
+from sklearn.model_selection import train_test_split
+from keras.preprocessing import image
+import random
+import os
+import numpy as np 
+import cv2
+
+
+# Path to image 
+image_path = list(paths.list_images('/content/drive/My Drive/casia-299-small'))
+# Random image path
+random.shuffle(image_path)
+# Get image name
+labels = [p.split(os.path.sep)[-2] for p in image_path]
+
+
+# Encode face name in to number
+le = LabelEncoder()
+labels = le.fit_transform(labels)
+
+# Load model inception v2, include_top = Fale to ignore Fully Connected layer
+model = InceptionResNetV2(include_top = False, weights = 'imagenet')
+
+
+# Load images and resize into required input size of Inception Resnet v2 299x299
+list_image = []
+for (j, imagePath) in enumerate(image_path):
+    image = load_img(imagePath, target_size = (299, 299, 3))
+    image = img_to_array(image)
+
+    image = np.expand_dims(image, 0)
+    image = imagenet_utils.preprocess_input(image)
+
+    list_image.append(image)
+
+# Use pre-trained model to extract feature
+list_image = np.vstack(list_image)
+print(""LIst image: "", list_image)
+features = model.predict(list_image)
+print(""feature: "", features)
+print(""feature shape[0]: "", features.shape[0])
+print(""feature shape: "", features.shape)
+features = features.reshape((features.shape[0], 8*8*2048))
+
+# Split training set and test set n ratio of 80-20
+x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size = 0.2, random_state =42)
+
+params = {'C': [0.1, 1.0, 10.0, 100.0]}
+model = GridSearchCV(LogisticRegression(), params)
+model.fit(x_train,y_train)
+model.save('/content/drive/My Drive/casia-299-small/myweight1.h5')
+print('Best parameter for the model {}'.format(model.best_params_))
+
+preds = model.predict(x_test)
+print(classification_report(y_test, preds))
+
+",27510,,,,,7/30/2019 2:28,,,,0,,,,CC BY-SA 4.0 +13668,1,13685,,7/30/2019 2:50,,3,320,"

When executing MCTS' expansion phase, where you create a number of child nodes, select one of them, and simulate from that child, how can you efficiently and unbiasedly decide which child(ren) to generate?

+ +

One strategy is to always generate all possible children. I believe that this answer says that AlphaZero always generates all possible ($\sim 300$) children. If it were expensive to compute the children or if there were many of them, this might not be efficient.

+ +

One strategy is to generate a lazy stream of possible children. That is, generate one child and a promise to generate the rest. You could then randomly select one by flipping a coin: heads you take the first child, tails you keep going. This is clearly biased in favor of children earlier in the stream.

+ +

Another strategy is to compute how many $N$ children there are and provide a function to generate child $X < N$ (of type Nat -> State). You could then randomly select one by choosing uniformly in the range $[0, N)$. This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. Alternatively, you could compute an upper-bound on the number of children and the function is partial (of type Nat -> Maybe State), but you'd be doing something like rejection sampling.

+ +

I believe that if the number of iterations of MCTS remaining, $X_t$, is larger than the number of children, $N$, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children. This seems to suggest that the only time it matters is when $X_t < N$ and in situations like AlphaZero, $N$ is so much smaller than $X_0$, that this basically never matters.

+ +

In cases where $X_0$ and $N$ are of similar size, then it seems like the number of iterations really needs to be changed into something like an amount of time and sometimes you spend your time doing playouts while other times you spend your time computing children.

+ +

Have I thought about this correctly?

+",27530,,2444,,11/19/2019 19:58,11/19/2019 19:58,How can we efficiently and unbiasedly decide which children to generate in the expansion phase of MCTS?,,2,0,,,,CC BY-SA 4.0 +13669,1,13670,,7/30/2019 6:56,,1,392,"

If you are a freelancer, when a client asks to create a website we can easily measure how much the total cost is needed based on the requirements of the client. (the backend, UI/UX design, features, etc.). We can even measure the estimated time of completion.

+ +

What if a client asks you to make an AI project (image recognition, speech recognition, or NLP)? How do you tell the client the estimated cost and time needed to complete the project at the beginning? The results obtained can be very different for each dataset used.

+",16565,,16565,,3/17/2021 10:51,3/17/2021 10:51,How to estimate the cost and time to complete an AI Project,,1,1,,3/17/2021 10:55,,CC BY-SA 4.0 +13670,2,,13669,7/30/2019 9:46,,3,,"
+

If you are a freelancer, when a client asks to create a website we can easily measure how much the total cost is needed based on the requirements of the client. (the backend, UI/UX design, features, etc.). We can even measure the estimated time of completion.

+
+ +

This is only the case when the full scope and design of the site is a well understood and relatively standard thing. With more experience, and perhaps a dedicated team, more complex sites and features can be addressed like this. It is just as common in my experience to view ongoing development of a site as a series of related projects, each of which can be estimated and costed as they become more feasible and closer to starting.

+ +

I have never seen a stand-alone site fully costed and estimated in my own work, because I work in-house on more complex features than this is possible for, but am aware this is a common approach, especially for agencies and freelancers. The difference is that the people involved are experts on producing well-defined work, and that in order to make a pre-estimated or even fixed price sale, they have put a lot of effort into risk management. Some of the risk management is contractual (e.g. ""maximum of 3 major revisions"" for web site design), some of it is in requirements gathering - often an initial pre-sales consultancy meeting is arranged, and/or a project proposal document generated where a lot of the possible misunderstandings between customer and developer are dealt with, and it is made clear what expectations are on both sides.

+ +

For a freelancer or agency, this de-risking is also spread over multiple projects as they will develop solutions that can be re-used including standardised contracts and templates for project planning etc.

+ +

There are also a few different project management approaches that deal with initial uncertainty in different ways. A ""waterfall"" approach of pre-planning as much in advance as possible requires very good knowledge of the work that will occur in later stages of a project, which of course depends heavily on earlier stages meeting all expectations of both the customer and developer. An ""agile"" approach will acknowledge that there are unknowns, and attempt to address them with some form of just in time planning. The problem with an agile approach is that ""just in time"" may be far too late if you need to budget up front (although it is still possible for instance by reducing scope to fit, provided that is agreed).

+ +
+

What if the client asks you to make an AI project (image recognition, speech recognition, or NLP), how do you tell the client the estimated cost and time needed to complete the project in the beginning?

+
+ +

There is no difference in this respect just because it is an AI project. This would apply to any software project, including web development, where one or more of the following is true:

+ +
    +
  • Scope of work is not clear
  • There are technical hurdles with unknown resolutions. I think it is likely you are focusing on this for your imagined AI project.
  • Customer expectations are not clear
  • Project success criteria are not measurable, or it is not known whether they are achievable
+ +

As any software developer, if you are faced with project work with these features, you do some pre-project tasks focused on de-risking the project before attempting it. There are a few different approaches that can be used in combination:

+ +
    +
  • Gather more requirements, spend more time up front consulting on the project
  • Test assumptions before starting the project. E.g. if you are not sure about the quality of the data, take a look at it as early as possible. If you don't know if technology X can do Y, investigate it in advance.
  • Work on a timesheet basis, not fixed cost. Alternatively fix cost (or have a maximum), but leave scope of work open.
  • Have a flexible project plan, and re-estimate remaining work routinely.
  • Put any assumptions about customer support for the project, such as availability and quality of data, into the project plan and in general form into your standard contract (e.g. something legal along the lines of ""Project delivery depends on customer providing resources as outlined in the project plan document. If the resources are not provided in reasonable time, then the project may over-run or have additional costs, which will be covered by the customer, and not by the freelancer"")
+ +

For additional up-front work on project feasibility and scope, if you are concerned about risk to yourself as a freelancer (because the customer expects these as a pre-sale effort), you could propose to bill for that as consultancy work, prior to starting the project proper. You would likely need to produce something like a project proposal document in order that the paying customer received something for spending this money. Producing such a document might even show both you and the customer that the project is not feasible without additional work - in which case you could note the steps needed to make it feasible (e.g. customer needs to source 10,000 more training images without the flaws noticed in existing images).

+",1847,,1847,,7/30/2019 11:41,7/30/2019 11:41,,,,0,,,,CC BY-SA 4.0 +13671,1,13679,,7/30/2019 10:14,,6,868,"

Many games have multiple paths to the same states. What is the appropriate way to deal with this in MCTS?

+ +

If the state appears once in the tree, but with multiple parents, then it seems to be difficult to define back propagation: do we only propagate back along the path that got us there ""this"" time? Or do we incorporate the information everywhere? Or maybe along the ""first"" path?

+ +

If the state appears once in the tree, but with only one parent, then we ignored one of the paths, but it doesn't matter because by definition this is the same state?

+ +

If the state appears twice in the tree, aren't we wasting a lot of resources thinking about it multiple times?

+",27530,,2444,,7/31/2019 22:17,7/31/2019 22:17,What is the appropriate way to deal with multiple paths to same state in MCTS?,,2,0,,,,CC BY-SA 4.0 +13672,2,,13604,7/30/2019 12:02,,0,,"

It turns out that the zig-zag pattern is an inherent effect of using a word embedding layer. I don't fully understand the phenomenon, but I believe it has a strong correlation with the embeddings acting as a sort of memory slots, which can change relatively quickly, and the LSTM generating a summary of the sequence, so that the model can remember past combinations.

+

I found this plot of a training loss curve of word2vec and it exhibits the same per-epoch pattern.

+

+

Edit

+

After conducting several experiments, I've isolated the causes. It seems that this is an indirect effect of having a large model capacity. In my case, I had too large word embeddings (size 1024) and too many classes (2002), which also increases model capacity, so the model was doing an almost per-sample learning. Reducing both resulted in a smooth-as-silk learning curve and a better generalisation.

+",27444,,-1,,6/17/2020 9:57,8/1/2019 11:23,,,,0,,,,CC BY-SA 4.0 +13673,2,,13671,7/30/2019 12:53,,1,,"

A node in a tree must have a single parent; otherwise, it violates the definition of a tree. Also, the way I look at it, there are no ""same"" states when you do MCTS, because you are keeping the history of how you got there. So, the second time you visit the ""same"" state, it'll have a different history path and a single parent.

+",27516,,,,,7/30/2019 12:53,,,,2,,,,CC BY-SA 4.0 +13674,2,,13660,7/30/2019 13:20,,0,,"

When you trained your agents separately, they never saw squares with opposing values (1/-1), so the agents don't really know what to expect from visiting such a square. I'd try adding (1/-1) to the condition on which you base a square's availability. Also try increasing the reward. It's hard to give suggestions without looking at the code.

+",27516,,,,,7/30/2019 13:20,,,,1,,,,CC BY-SA 4.0 +13675,2,,13607,7/30/2019 13:32,,1,,"

The layout problem is suitable for a rule-based approach, as used to generate levels in Rogue or customised home layouts: you encode some constraints and search the remaining space. The colour problem is also suitable for a rule-based approach; colour is a 3-dimensional space (4 if you include opacity), and a palette can be created by taking colours at appropriate distances from a starting point.

+ +

Seeing as this domain is well analysed, there are known heuristics and encodable rules. You'll likely have more success, and faster, by using these to generate designs.

+ +

Just to give an idea, seeing as the rule-based approach isn't discussed much these days, the following is valid Prolog. Query it with phrase(webpage, Design). and it'll quickly generate designs like [brand, header_nav, searchbox, sidebar_nav, form_content, sidebar_list]. This is just a super-quick demo: you'd likely want a tree instead of a list, and you'll need to derive HTML and CSS from it too, which is simple enough in Prolog.

+ +
webpage --> body.
+webpage --> header, body.
+webpage --> header, body, footer.
+
+header --> [brand], header_content.
+header_content --> [header_nav].
+header_content --> [header_nav], [searchbox].
+
+body --> content.
+body --> sidebar(_), content.
+body --> { dif(A, B) }, sidebar(A), content, sidebar(B).
+body --> content, sidebar(_).
+
+content --> [form_content].
+content --> [article_content].
+content --> [list_content].
+
+sidebar(list) --> [sidebar_list].
+sidebar(nav) --> [sidebar_nav].
+sidebar(ads) --> [sidebar_ads].
+
+footer --> [notification].
+
+",15541,,,,,7/30/2019 13:32,,,,0,,,,CC BY-SA 4.0 +13677,2,,13641,7/30/2019 14:48,,0,,"

You're going to run straight into the qualification problem, the frame problem and ramification problem on these. Plus you'll need to get your head round what an action is and means.

+ +

Qualification Problem

+ +

A more formal ""STRIPS"" would be Situation Calculus (logical equivalency demonstrated by Reiter), which can be defined in FOL or SOL, depending on who you talk to. In Situation Calculus you have your actions and you define when they are possible (S is the current Situation):

+ +
poss(open_door, S) → holds(door_position(closed) ^ door_lock(unlocked), S).
+
+ +

This is true: when it is possible to open the door, it is both closed and unlocked. The difficulty arises when we try to flip that implication around:

+ +
poss(open_door, S) ← holds(door_position(closed) ^ door_lock(unlocked), S).
+
+ +

This is not true, because there must also be nothing blocking the door, the door can't be wedged too tightly into the door frame, and a whole bunch of other stuff that we might not think of. This is the qualification problem, which we get around in STRIPS by ignoring it and just flipping the implication.

+ +

In your case you'll end up learning a whole bunch of irrelevant qualifications:

+ +
 poss(open_door, S) ← holds(door_position(closed) ^ door_lock(unlocked) ^ weather(sunny) ^ grass(green), S).
+
+ +

You'd need some way to trim them down. By observing gameplay for your data, you'll only learn the conditions under which players choose to do an action, not when it is possible to do that action: the old correlation vs causation problem. You can immediately prune things that aren't fluents, but, as far as I'm aware, there's no good solution to this without letting the computer simulate, trying actions in subsets of the fluents that hold, in order to prune.

+ +

Frame Problem

+ +

The result of an action in the language of Situation Calculus is that it causes some fluent to hold. So in our example we have a door_position fluent, we can say that opening the door causes door_position(open) to hold, but what about some other action, such as making tea? We know it has no effect on the position of the door, but for the computer to know that we need to tell it:

+ +
holds(door_position(open), S) → holds(door_position(open), do(make_tea, S)).
+
+ +

This creates far too many axioms. Reiter has a working solution to this for Golog in his book ""Knowledge in Action"". In your case it will manifest as missing axioms, i.e. not knowing what the result of an action will be for some fluent. Try beginning with the assumption that an action has no effect unless proven otherwise.

+ +

Ramification Problem

+ +

The ramification problem is very similar to the frame problem: it means you might not attribute all of the effects of an action to that action. Opening a door can cause changes in air pressure through a house that can cause another door to slam; such an effect is easy to miss. For this you'd need a sensitive algorithm that can correctly recognise rare events that are important, which is a big ask. The solution is likely to be to ignore it.

+ +

Closely related and not usually a problem for such systems, you could also attribute effects to an action incorrectly. How will you be able to distinguish what the action caused vs occurred simultaneously? In some Mario game data it could likely be induced that jumping causes a turtle to move closer to Mario, again correlation vs causation and needing a method to distinguish. The solution here is likely to come from increasing the volume of data.

+ +

Definition and Granularity Problem

+ +

OK, it's probably possible that with enough data and a sensitive training algorithm you can overcome the above problems. The definition one is merely a conceptual one and easily overcome.

+ +

Humans tend to define an action by its name, say for example ""turning on a light"". However, in Situation Calculus the action is defined by when it is possible and by the result of that action; the name is merely a label. So ""turning on a light"" where the bulb is broken and where it is not may share the same label, but they are different actions.

+ +

Furthermore, in STRIPS and Situation Calculus we're defining ""types"" or ""classes"" of actions, the actual execution is an ""instance"" of that ""class"". Your problem is to derive the abstract from the concrete.

+ +

For you this problem will arise in terms of defining the granularity, as each instance of an action is possible in the unique situation that it occurred in, and caused the unique situation that came after it.

+ +

How do you group these actions together to abstract the most similar ones? How do you know how many actions there are to create these groups? If it's not based on a limited set of user-inputs to some computer game, then you might want to draw some ideas from the research into clustering, particularly heuristics of how many clusters to use.

+ +

Alternative to a decision tree

+ +

If I were to tackle this problem (fund a Post-Doc for me and I'd be tempted), I'd first investigate ILP as a method to learn the relations using Golog as the foundation of my action axioms. Then I'd look at clustering based on a space where each fluent was a dimension. I imagine it would be a rather useful tool, even if it merely created a pretty accurate template for manual review.

+",15541,,15541,,7/30/2019 14:56,7/30/2019 14:56,,,,0,,,,CC BY-SA 4.0 +13678,2,,1906,7/30/2019 15:40,,4,,"

Sure! There's the whole Semantic Web scene! OWL is derived from DLs and Frames, arguably has a lot in common with semantic networks too. Expert-driven decision support systems are still being developed (and researched) in industries where the human is required to take responsibility or getting data is not going to happen. As the ideas evolve so do the names.

+ +

Check out academic conferences like KR, ISWC and FOIS; even broader AI conferences like IJCAI have a healthy dose of symbolic AI. I even spotted a search algorithm in the 2019 line-up.

+",15541,,,,,7/30/2019 15:40,,,,0,,,,CC BY-SA 4.0 +13679,2,,13671,7/30/2019 15:47,,7,,"
+

If the state appears twice in the tree, aren't we wasting a lot of resources thinking about it multiple times?

+
+ +

You're right. Precisely the same problem was also noticed decades before MCTS existed, in the classic minimax-style tree search algorithms (alpha-beta search, etc.) that were used in games before MCTS. The solution is also mostly the same; transposition tables.

+ +

In the case of MCTS, the statistics used by the algorithm that are normally associated with nodes (or their incoming edges) may instead be stored in entries of a transposition table. I mean stuff like visit counts and sums (or averages) of backpropagated scores.
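
A minimal sketch of that bookkeeping (assuming some state_hash function is available, e.g. an incrementally-updated Zobrist hash) might look like this:

table = {}   # transposition table: state hash -> MCTS statistics

def stats(state):
    key = state_hash(state)   # assumed: cheap, incrementally computed hash
    return table.setdefault(key, {'visits': 0, 'value_sum': 0.0})

def backpropagate(path, reward):
    # 'path' is the list of states traversed in this MCTS iteration
    for state in path:
        entry = stats(state)
        entry['visits'] += 1
        entry['value_sum'] += reward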

+ +

A brief description of how it would work, and references to more extensive relevant literature, can be found in subsubsection 5.2.4 of the well-known 2012 Survey paper on MCTS.

+ +

This does require that you can efficiently (incrementally) compute hash values for the states you encounter, which may not always be easy (should usually be possible, depends on the details of your problem domain). Use of transpositions in MCTS is also not always guaranteed to actually improve performance. It does come with computational overhead, and in games where transpositions are very rare it may be more efficient to simply ignore them and use the regular tree structure.

+",1641,,,,,7/30/2019 15:47,,,,0,,,,CC BY-SA 4.0 +13680,2,,12979,7/30/2019 15:51,,2,,"

Not all search is planning (is A connected to B), but all planning is search (how do I get from this to that).

+ +

Here's an example in Prolog with a domain described in terms of actions, when they are possible, and what the results of the actions are. The description is of an uncomputed graph of uncalculated size, where each node is a situation and each edge is an action. Then we have an A* search algorithm that searches this graph, calculating it as it goes, to find a plan to reach the goal state. Running the final query will produce a plan via search.

+",15541,,,,,7/30/2019 15:51,,,,0,,,,CC BY-SA 4.0 +13681,2,,13117,7/30/2019 16:24,,0,,"

This is theoretically possible for an exhaustive set of sequential, non-concurrent primitive actions:

+

$$\forall s_1, s_2 \left( \exists a \left( \text{possible}(a, s_1) \land \text{do}(a, s_1, s_2) \right) \right)$$

+

where $s_1$ is the prior situation, $s_2$ is the result of doing action $a$ in $s_1$, and $\text{possible}(a, s_1)$ is true if it is possible to do an action in a situation (see Situation Calculus).

+

So, for a given situation, reduce your search space to those actions that are possible, then reduce your search space again to those that result in the following situation.
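
In other words (a Python-style sketch, where possible and do stand for whatever predicate and successor functions your domain provides):

def candidate_actions(actions, s1, s2, possible, do):
    # keep only actions that are possible in s1 and whose effect yields s2
    return [a for a in actions if possible(a, s1) and do(a, s1) == s2]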

+",15541,,2444,,2/6/2021 13:45,2/6/2021 13:45,,,,0,,,,CC BY-SA 4.0 +13683,1,13687,,7/30/2019 18:02,,4,2285,"

I'm reading the ImageNet Classification with Deep Convolutional Neural Networks paper by Krizhevsky et al, and came across these lines in the Intro paragraph:

+ +
+

Their (convolutional neural networks') capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.

+
+ +

What's meant by ""stationarity of statistics"" and ""locality of pixel dependencies""? Also, what's the basis of saying that CNN's theoretically best performance is only slightly worse than that of feedforward NN?

+",27548,,2444,,7/30/2019 22:39,7/31/2019 10:27,"What is the meaning of ""stationarity of statistics"" and ""locality of pixel dependencies""?",,1,0,,,,CC BY-SA 4.0 +13685,2,,13668,7/30/2019 19:07,,1,,"

The first thing to consider in this question is: what do we mean when we talk about ""generating a child/node"". Just creating a node for a tree data structure, and allocating some memory (initialised to nulls / zeros) for data like deeper children, visit counts, backpropagated scores, etc., is rarely a problem in terms of efficiency.

+ +

If you also include generating a game state to store in that node when you say ""generating a node"", that can be a whole lot more expensive, since it requires applying the effects of a move to the previous game state to generate the new game state (and, depending on implementation, probably also requires first copying that previous game state). But you don't have to do this generally. You can just generate nodes, and only actually put a game state in them if you later on reach them again through the MCTS Selection phase.

+ +

For example, you could say that AlphaZero does indeed generate all the nodes for all actions immediately, but they're generally ""empty"" nodes without game states. They do get ""primed"" with probabilities computed by the policy network, but that policy network doesn't require successor states inside those nodes; it's a function $\pi(s, a)$ of the current state $s$ (inside the previous node), and the action $a$ leading to the newly-generated node.

+ +
+ +

But if you're really sure that, for your particular problem domain, the generation of nodes itself already is inefficient, then...

+ +
+

[...] This is clearly biased in favor of children earlier in the stream.

+
+ +

Yes, you would get a significant bias with such a stream-based approach, probably wouldn't work well.

+ +
+

[...] This may be harder to implement than the previous version because computing the number of children may be as hard as computing the children themselves. [...]

+
+ +

Again I agree with your observation, I don't think there are many problems where this would be a feasible solution.

+ +
+

I believe that if the number of iterations of MCTS remaining, X_t, is larger than the number of children, N, then it doesn't matter what you do, because you'll find this node again the next iteration and expand one of the children.

+
+ +

This would only be correct for the children of the root node. For any nodes deeper in the tree, it is possible that MCTS never reaches them again even if $X_t > N$, because it could dedicate most of it search effort to different subtrees.

+ +
+ +

I think your solution would have to involve some sort of learned function (like the policy network in AlphaZero) which can efficiently compute a recommendation for a node to generate, only using the inputs that are already available before you pick a node to generate. In AlphaZero's policy network, those inputs would be the state $s$ in your current node, and the outward actions $a$ (each of which could lead to a node to be generated). This would often actually be very far from unbiased, but I imagine a strong, learned bias would likely be desireable anyway if you're in a situation where the mere generation of nodes is a legitimate concern for performance.

+",1641,,,,,7/30/2019 19:07,,,,3,,,,CC BY-SA 4.0 +13687,2,,13683,7/30/2019 22:23,,3,,"

locality of pixel dependencies probably means that neighboring pixels tend to be correlated, while faraway pixels are usually not correlated. This assumption is usually made in several image processing techniques (e.g. filters). Of course, the size and the shape of the neighborhood could vary, depending on the region of the image (or whatever), but, in practice, it is usually chosen to be fixed and rectangular (or squared).

+ +

stationarity of statistics might mean that the values of the pixels do not change over time, so this could be related to diffusion techniques in image processing. stationarity of statistics might also mean that the values of the pixels do not change much in a spatial neighborhood, even though, stationarity, e.g. in reinforcement learning, usually means that something does not change over time (so, if that's the case, the terminology stationarity is at least misleading and confusing, in this context), so this might be related to the locality of pixel dependencies property. Possibly, stationarity of statistics could also indirectly mean that you can use the same filter to detect the same feature in different regions of the image.

+ +

With while their theoretically-best performance is likely to be only slightly worse the authors probably thought that, theoretically, CNNs are not as powerful as feedforward neural networks. However, both CNNs and FFNNs are universal function approximators (but, at the time, nobody probably had yet investigated seriously the theoretical powerfulness of CNNs).

+",2444,,2444,,7/31/2019 10:27,7/31/2019 10:27,,,,9,,,,CC BY-SA 4.0 +13688,2,,13662,7/30/2019 23:02,,6,,"

tl;dr

+ +
+

What does that mean in the context of this paper?

+
+ +

With ""coarse segmentation"" the author means a segmentation that doesn't have much detail. ""Fine segmentation"", on the other hand, refers to a segmentation with a high level of detail.

+ +
+

But also more importantly [what does that mean in the context of] general computer vision?

+
+ +

The most common use in CV is to describe how general or specific a class in is classification. A ""coarse class"" is a very broad one, while a ""fine class"" is a very specific one.

+ +

Intended use

+ +

What the author refers to is the level of detail of the resulting segmentation.

+ +

A coarse segmentation would mean that we have large blobs covering each class without much detail. On the other hand, a fine segmentation would have a much higher level of detail which can even go down to pixel level (i.e. pixel-by-pixel correct segmentation).

+ +

To make this clear look at the following two examples. As we go from the left to right, the segmentation maps go from coarse to fine:

+ +

+

+ +

Note that in the rightmost images the result is an almost pixel-perfect (i.e. finely detailed) segmentation map, while the ones on the left don't have much detail and can be considered coarse.

+ +

Alternative use

+ +

Because this isn't an established terminology, some of the times coarse and fine can refer to the nature of the classes in a classification task. Take the top image for example; the label for a coarse classification task could be a tree. For a fine classification task we would have labels like oak tree, pine tree, etc.

+ +

The most prominent example of this is the cifar dataset which has two versions: a coarse one that has 10 classes and a fine one that has 100 classes, which are all subclasses of the coarse classes. For example a coarse class is fish, while the fine ones are aquarium fish, flatfish, ray, shark, trout, etc.

+ +

For semantic segmentation, an example could be the following: you want to make a street segmentation model. Here, a coarse segmentation would mean that it just splits the image into road, vehicle, etc. A fine segmentation, on the other hand, could also detect the type of vehicle, e.g. truck, car, etc.

+",26652,,,,,7/30/2019 23:02,,,,0,,,,CC BY-SA 4.0 +13689,1,,,7/30/2019 23:45,,1,30,"

I need to build a multi-camera people tracking system and I have no idea how to start. I read ML for Dummies and I've watched a lot of youtube classes/conferences and read a lot of articles about ML/DL, so I have all this theoretical information about what is a NN, loss function, weights, vectors, convolution, etc., but when I need to start building something, I get stuck. Even more, I don't think I can create my own models because I only have six months to finish this and I'm not sure if I'll be able to do it.

+ +

I've read some papers explaining architectures for an improved people-tracking system (e.g. https://www.intechopen.com/online-first/multi-person-tracking-based-on-faster-r-cnn-and-deep-appearance-features#B8), and it says it used ResNet-30 and stuff like that. My question is, how could I recreate the architectures in papers like that? Where can I find those pre-trained models? Or is there a place where I can get the data?

+ +

I want to start with at least a people-tracking system, without worrying about the multi-camera part for now, and I thought of almost the same approach as the people in the paper posted, meaning I want to recognize people based on parts of their body/the whole body to identify them, and track them based on their unique features (clothing color, hair, skin tone, etc), maybe skipping the part of facial recognition since that's too advanced I think.

+ +

Any idea on where to start? Sorry if the question is too broad or too complex. Comments about first steps and subdividing the problem are also welcome. PS: The ultimate goal is to track how much time people spend in a certain area filmed by many cameras.

+",27552,,,,,7/30/2019 23:45,Where to find pre-trained models for multi-camera people tracking?,,0,0,,,,CC BY-SA 4.0 +13691,1,,,7/31/2019 3:07,,3,302,"

What I know about CRF is that they are discriminative models, while HMM are generative models, but, in the inference method, both use the same algorithm, that is, the Viterbi algorithm, and forward and backward algorithms.

+ +

Does CRF use the same features as HMM, namely transition features and state features?

+ +

But in here https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf, CRF has these features Edge-Observation and Node-Observation Features.

+ +

What is the difference between transition and state features versus Edge-Observation and Node-Observation features?

+",22686,,2444,,7/31/2019 22:15,7/31/2019 22:15,What are the differences between CRF and HMM?,,0,1,,,,CC BY-SA 4.0 +13692,1,13786,,7/31/2019 6:09,,10,7219,"

I am new to convolutional neural networks, and I am learning 3D convolution. What I could understand is that 2D convolution gives us relationships between low-level features in the X-Y dimension, while the 3D convolution helps detect low-level features and relationships between them in all the 3 dimensions.
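
To see the difference in shapes, here is a small Keras sketch (made-up sizes): a 2D convolution slides only over height and width, while a 3D convolution also slides over a depth axis (e.g. time or slices), with colour channels treated as channels in both cases:

import tensorflow as tf

x2d = tf.random.normal((1, 28, 28, 3))             # (batch, H, W, channels)
y2d = tf.keras.layers.Conv2D(8, 3)(x2d)
print(y2d.shape)                                    # (1, 26, 26, 8)

x3d = tf.random.normal((1, 16, 28, 28, 3))          # (batch, D, H, W, channels)
y3d = tf.keras.layers.Conv3D(8, 3)(x3d)
print(y3d.shape)                                    # (1, 14, 26, 26, 8)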

+ +

Consider a CNN employing 2D convolutional layers to recognize handwritten digits. If a digit, say 5, was written in different colors:

+ +

+ +

Would a strictly 2D CNN perform poorly (since they belong to different channels in the z-dimension)?

+ +

Also, are there practical well-known neural nets that employ 3D convolution?

+",27555,,2444,,12/18/2021 12:58,12/18/2021 12:58,When should I use 3D convolutions?,<3d-convolution>,2,3,,,,CC BY-SA 4.0 +13693,1,,,7/31/2019 6:35,,4,209,"

I am building a model which predicts angles as output. What are the different kinds of outputs that can be used to predict angles?

+ +

For example,

+ +
  1. output the angle in radians
     • cyclic nature of the angles is not captured
     • output might be outside $\left[-\pi, \pi \right)$
  2. output the sine and the cosine of the angle
     • outputs might not satisfy $\sin^2 \theta + \cos^2 \theta = 1$
+ +

What are the pros and cons of different methods?

+",20859,,2444,,12/18/2021 8:54,5/12/2023 11:07,What kind of output should be used for predicting angles in DNNs?,,2,0,,,,CC BY-SA 4.0 +13694,1,13702,,7/31/2019 6:52,,6,247,"

I have a decent background in Mathematics and Computer Science. I started learning AI from Andrew Ng's course a month ago. I understand the logic and intuition behind everything taught, but if someone asks me to write or derive the mathematical formulas related to backpropagation, I will fail to do so. I need to complete an object recognition project within 4 months. Am I on the right path?

+",,user27556,1671,,10/15/2019 19:18,12/30/2019 19:09,Is it ok to struggle with mathematics while learning AI as a beginner?,,3,1,,1/22/2021 2:27,,CC BY-SA 4.0 +13695,1,,,7/31/2019 7:49,,1,156,"

How to detect patterns in a data set of given IP addresses using a neural network?

+ +

The data set is actually a list of all the vulnerable devices on a network. I want to use a neural network that detects any patterns in the occurrences of these vulnerabilities with reference to their IPs and ports.

+",27399,,,,,9/8/2020 4:01,How to detect patterns in a data set of given IP addresses using a neural network?,,1,2,,,,CC BY-SA 4.0 +13696,2,,13693,7/31/2019 7:50,,0,,"

As you said, the first option is not very suitable due to the cyclic nature of the angles. However, if you don't mind discretizing the values, you could represent the output as a binary vector.

+ +

A variant of the second option seems perfect to me. You may output a 2D vector and use that vector's angle as output. You'll probably need a regularizer for the vector's norm, but that's it.
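
For illustration, a minimal numpy sketch (hypothetical function names, not from the question) of how the 2D output vector could be turned into an angle, with an optional penalty that keeps the vector close to unit norm:

import numpy as np
+
+def vector_to_angle(v):
+    # v is the 2D output of the network; its direction encodes the angle
+    return np.arctan2(v[1], v[0])  # lies in [-pi, pi]
+
+def norm_penalty(v, weight=0.1):
+    # optional regularizer pushing the output vector towards unit length
+    return weight * (np.linalg.norm(v) - 1.0) ** 2
+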

+",27444,,,,,7/31/2019 7:50,,,,0,,,,CC BY-SA 4.0 +13698,1,,,7/31/2019 9:31,,4,605,"

The Conditional Variational Autoencoder (CVAE), introduced in the paper Learning Structured Output Representation using Deep Conditional Generative Models (2015), is an extension of Variational Autoencoder (VAE) (2013). In VAEs, we have no control over the data generation process, something problematic if we want to generate some specific data. Say, in MNIST, generate instances of 6.

+

So far, I have only been able to find CVAEs that can condition to discrete features (classes). Is there a CVAE that allows us to condition to continuous variables, kind of a stochastic predictive model?

+",27565,,2444,,11/22/2020 17:21,11/22/2020 17:51,Is there a continuous conditional variational auto-encoder?,,1,0,0,,,CC BY-SA 4.0 +13699,1,13705,,7/31/2019 10:01,,3,3069,"

I am using a CNN to train on some data, where training size = 21700 samples, and test size is 653 samples, and say I am using a batch_size of 500 (I am accounting for samples out of batch size as well).

+ +

I have been looking this up for a long time now, but can't get a clear answer, but when plotting the loss functions to check for whether the model is overfitting or not, do I plot as follows

+ +
for j in range(num_epochs):
+  <some training code---Take gradient descent step do wonders>
+  batch_loss=0
+  for i in range(num_batches_train):
+       batch_loss = something....criterion(target,output)...
+       total_loss += batch_loss
+  Losses_Train_Per_Epoch.append(total_loss/num_samples_train)#and this is 
+
+ +

where I need help

+ +
Losses_Train_Per_Epoch.append(total_loss/num_batches_train)
+and doing the same for Losses_Validation_Per_Epoch.
+plt.plot(Losses_Train_Per_Epoch, Losses_Validation_Per_epoch)
+
+ +

So, basically, what I am asking is, should I divide by num_samples or num_batches or batch_size? Which one is it?

+",27566,,2444,,8/2/2019 23:47,8/2/2019 23:47,Are the training loss and validation loss plotted per sample or per batch?,,1,0,,,,CC BY-SA 4.0 +13700,2,,4650,7/31/2019 11:53,,0,,"

Well, AI can be used in several major areas in telecom:

+

1/ Network Optimization

+
    The network is trained to come up with the required parameter tuning to solve a particular issue. For example, in a 4G system we have per-cell user licenses that determine how many users a cell will support. Every time the number of users on a cell increases, the network optimizer sends a work order to the back office to increase the number of users with a temporary license from the pool. An AI system can analyze the utilization and increase the licenses, or it can tune the parameters to shift traffic to the neighbour cell; it can learn from the optimizer's behaviour and actions. This is a single example.
+
+

2/ Customer Intelligence

+
     Based on the customer's behaviour, social media engagements and responses, usage patterns, past usage history and current context, the AI system can generate personalized offers that add value to the overall customer journey and make the user an engaged, happy customer.
+
+

3/ Network expansions and Dynamic capacity:

+
     Most network operators have per-resource license contracts with vendors, and excess capacity is charged. In many cases, for example in summer, some particular areas need additional capacity, but once summer is gone that particular area is the lowest-utilized area. An AI system can learn from the usage patterns when a particular area needs higher or lower capacity and can increase or decrease the capacity accordingly, resulting in reduced costs.
+
+

Above are some of the use cases, which can be further extended with a lot of Customer Engagement and Customer Experience Management use cases.

+

Regards +Muhammad Azfar +Customer Journey and Business Analyst +Customer Experience Management

+",27568,,-1,,6/17/2020 9:57,7/31/2019 11:53,,,,0,,,,CC BY-SA 4.0 +13701,2,,13698,7/31/2019 12:49,,1,,"

Whether a discrete or continuous class, you can model it the same.

+

Denote the encoder $q$ and the decoder $p$. Recall the variational autoencoder's goal is to minimize the $KL$ divergence between $q$ and $p$'s posterior. i.e. $\min_{\theta, \phi} \ KL(q(z|x;\theta) || p(z|x; \phi))$ where $\theta$ and $\phi$ parameterize the encoder and decoder respectively. To make this tractable this is generally done by using the Evidence Lower Bound (because it has the same minimum) and parametrizing $q$ with some form of reparametrization trick to make sampling differentiable.

+

Now your goal is to condition the sampling. In other words you are looking for modeling $p(x|z, c;\phi)$ and in turn will once again require $q(z|x, c; \theta)$. Your goal will now intuitively become once again $\min_{\theta, \phi} \ KL(q(z|x, c;\theta) || p(z|x, c; \phi))$. This is still simply transformed into the ELBO for tractability purposes. In other words your loss becomes $E_q[log \ p(x|z,c)] - KL(q(z|x,c)||p(z|c)$.

+

Takeaway: Conditioning doesn't change much; just embed your context and inject it into both the encoder and decoder, and the fact that it's continuous doesn't change anything. For implementation details, normally people just project/normalize the condition and concatenate it somehow to some representation of $x$ in both the decoder and encoder.
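
A minimal PyTorch sketch of this (hypothetical layer sizes; the continuous condition c is simply concatenated to the inputs of both networks):

import torch
+import torch.nn as nn
+
+class Encoder(nn.Module):
+    def __init__(self, x_dim=784, c_dim=1, z_dim=20):
+        super().__init__()
+        self.net = nn.Linear(x_dim + c_dim, 2 * z_dim)  # outputs mean and log-variance
+
+    def forward(self, x, c):
+        h = self.net(torch.cat([x, c], dim=-1))
+        mu, logvar = h.chunk(2, dim=-1)
+        return mu, logvar
+
+class Decoder(nn.Module):
+    def __init__(self, x_dim=784, c_dim=1, z_dim=20):
+        super().__init__()
+        self.net = nn.Linear(z_dim + c_dim, x_dim)
+
+    def forward(self, z, c):
+        return self.net(torch.cat([z, c], dim=-1))
+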

+",25496,,2444,,11/22/2020 17:51,11/22/2020 17:51,,,,0,,,,CC BY-SA 4.0 +13702,2,,13694,7/31/2019 13:18,,6,,"

I think the key part of your question is ""as a beginner"". For all intents and purposes, you can create a state-of-the-art (SoTA) model in various fields with no knowledge of the mathematics whatsoever.

+ +

This means you do not need to understand back-propagation, gradient descent, or even mathematically how each layer works. Respectively you could just know there exists an optimizer and that different layers generally do different things (convolutions are good at picking up local dependencies, fully connected layers are good at picking up connections among your neurons in an expensive manner when you hold no previous assumptions), etc.. Follow some common intuitions and architectures built upon in the field and your ability to model will follow (thanks to the amazing work on opensource ML frameworks -- Looking at you Google and Facebook)! But this is only a stop-gap.

+ +

A Newton quote that I'm about to butcher: ""If I have seen further it's because I'm standing on the shoulders of giants"". In other words, he saw further because he didn't just use what people before him did; he utilized it to expand even further. So yes, I think you can finish your object detection project on time with little understanding of the math (look at the Google object detection API, it does wonders and you don't even need to know anything about ML to use it, you just need to have data). But, and this is a big but, if you ever want to extend into a realm that isn't particularly touched upon or push the envelope in any meaningful way, you will probably have to learn the math, learn the basics, learn the foundations.

+",25496,,4709,,12/30/2019 19:09,12/30/2019 19:09,,,,2,,,,CC BY-SA 4.0 +13704,2,,13605,7/31/2019 16:31,,2,,"

I think that something like ""strength"" would be difficult to quantify in this context. I do think that formal experimentation around the ""AI in a box"" scenario could be interesting. I know that experiments have been done where a human plays the role of the AI, attempting to get naive test subjects to ""release"" him by interacting with them over a chat interface. In all cases, the ""AI"" tends to be extraordinarily effective. It is often simply not possible to anticipate every way someone can trick you into revealing information or creating security holes. This is how human hackers do their jobs too. But I think a fully automated ""game"" could produce some interesting data.

+ +

One way I think it may be interesting to go about creating an ""AI in a box"" game is to have an ambiguous win condition. Essentially, make it possible to let the AI out and still think you are ""winning"" the game. One example that pops into my head right away is to use a (pointless) point system. You tell the player that the primary goal of the game is to keep the AI in the box, but you have a score that is displayed in the game, and tell the player that they win, no matter what, if their score is higher than the AI's by the end of the game. Of course, they never get to see the AI's score so they never know how well they are doing by comparison. This could be justified as representing human vs AI ""strength"" or ""leverage"" or whatever else.

+ +

In reality the AI doesn't have a score, but it does have the ability to affect the player's score. It can raise the player's score if they do things that help it escape and lower it when the player stands in its way. The score has no impact on the game and the player really wins by keeping the AI in the box. It only serves as a red herring that the AI can use to manipulate the player. I think this sort of experiment could provide an interesting model of how an AI might leverage bribes, threats, promises, and deception, as well as technical tricks, to convince a human to ""let it out of the box"".

+",22130,,22130,,7/31/2019 17:29,7/31/2019 17:29,,,,1,,,,CC BY-SA 4.0 +13705,2,,13699,7/31/2019 16:36,,3,,"

You want to compute the mean loss over all batches. What you need to do is to divide the sum of batch losses by the number of batches!

+ +

In your case:

+ +

You have a training set of $21700$ samples and a batch size of $500$. This means that you take $21700/500 \approx 43$ training iterations. This means that for each epoch the model is updated $43$ times! So that is the number you need to divide by when computing your training loss.

+ +

Note: I'm not sure what exactly you're trying to plot but I'm assuming you want to plot the training losses and the validation losses

+ + + +
training_loss = []
+validation_loss = []
+training_steps = num_samples // batch_size
+validation_steps = num_validation_samples // batch_size
+
+for epoch in range(num_epochs):
+
+    # Training steps
+    total_loss = 0
+    for b in range(training_steps):
+        batch_loss = ...  # compute batch loss
+        total_loss += batch_loss
+    training_loss.append(total_loss / training_steps)
+
+    # Validation steps
+    total_loss = 0
+    for b in range(validation_steps):
+        batch_loss = ...  # compute batch validation loss
+        total_loss += batch_loss
+    validation_loss.append(total_loss / validation_steps)
+
+# Plot training and validation curves
+plt.plot(range(num_epochs), training_loss)
+plt.plot(range(num_epochs), validation_loss)
+
+ +

Another way would be to store the losses in a list and compute the mean. You can use this if you're not sure what to divide by.

+ + + +
...
+
+for epoch in range(num_epochs):
+
+    list_of_batch_losses = []  # initialize list that is going to store batch losses
+
+    # Training steps
+    for b in range(training_steps):
+        batch_loss = ...  # compute batch loss
+        list_of_batch_losses.append(batch_loss)  # store loss in a list
+
+    epoch_loss = np.mean(list_of_batch_losses)
+    training_loss.append(epoch_loss)
+
+    ...
+
+plt.plot(range(num_epochs), training_loss)
+
+",26652,,,,,7/31/2019 16:36,,,,1,,,,CC BY-SA 4.0 +13706,1,13717,,7/31/2019 17:06,,7,346,"

I wanted to ask about the methodology of testing the ML models against overfitting. Please note that I don't mean any overfitting reducing methods like regularisation, just a measure to judge whether a model has overfitting problems.

+ +

I am currently developing a framework for tuning models (features, hyperparameters) based on evolutionary algorithms. The problem that I face is the lack of a good method to judge if the model overfits before using the test set. I encountered cases where a model that was good on both the training and validation sets behaved poorly on the test set, for both randomized and non-randomized training/validation splits. I used k-fold cross-validation, additionally estimating the standard deviation of all fold results (a smaller deviation means a better model), but, still, it doesn't work as expected.

+ +

Summing up, I usually don't see a correlation (or a very poor one) between training, validation and k-fold errors with test errors. In other words, tuning the model to obtain lower values of any of the above mentioned measures usually does not mean lowering the test error.

+ +

Could I ask you, how in practice you test your models? And maybe there are some new methods not mentioned in typical ML books?

+",22659,,2444,,6/5/2021 23:51,6/6/2021 0:00,What is the best measure for detecting overfitting?,,1,0,,,,CC BY-SA 4.0 +13707,5,,,7/31/2019 18:00,,0,,"

https://en.wikipedia.org/wiki/AI_box

+",1671,,1671,,7/31/2019 18:00,7/31/2019 18:00,,,,0,,,,CC BY-SA 4.0 +13708,4,,,7/31/2019 18:00,,0,,A hypothetical artificial superintelligence is kept in a virtual prison with limited means of affecting the external world. Can the AI hack its way out or trick its human keepers into releasing it?,1671,,1671,,7/31/2019 18:00,7/31/2019 18:00,,,,0,,,,CC BY-SA 4.0 +13709,1,,,7/31/2019 19:08,,2,51,"

The problem I am trying to attack is a predator-prey pursuit problem. There are multiple predators that pursue multiple preys, and the preys try to evade the predators. I am trying to solve a simplified version - one predator tries to catch a static prey on a plane. There is a bunch of literature on the above problem when predators and preys are on a grid.

+ +

Can anybody suggest articles/code where such a problem is solved on a continuous plane? I am looking at a continuous state space, a discrete action space (the predator can turn left 10 degrees, go straight, or turn right 10 degrees, and runs at constant speed), and discrete time. MountainCar is a one-dimensional version (the car is the predator and the flag is the prey) and DQN works fine. However, when I tried DQN on a two-dimensional plane, the training became very slow (I guess the curse of dimensionality).

+ +

The second question concerns the definition of states and reward. In my case I consider the angle between the predator's heading vector and the vector from the predator to the prey position. The reward is the change in distance between predator and prey, 10 when the prey is captured, and -10 when the predator gets too far from the prey. Is this reasonable? I already asked a similar question before and with the help of @Neil Slater was able to refine the reward and state.

+ +

The third question concerns when to copy the trained network to the target network. At each episode? Or only when the prey is caught? Any ideas?

+ +

The last question I have is about the network structure: activation functions and regularization. Currently I am using two tanh hidden layers and linear output with l2 and dropout. Can anybody share some insights?

+ +

Thanks in advance!

+",27472,,,,,7/31/2019 19:08,How to solve optimal control problem with reinforcement learning,,0,0,,,,CC BY-SA 4.0 +13710,1,,,7/31/2019 19:13,,1,47,"

This is a follow-up question from my previous question here. I'm new to ML/DL, and one thing I need to do is to use a machine or deep learning video attention model which, as the name suggests, can tag which parts of a video are probably more interesting and absorb more viewer attention.

+ +

Do we have an available model to do that? If not, how to do it?

+",9053,,9053,,7/31/2019 22:33,7/31/2019 22:33,How do I tag the most interesting parts of a video?,,0,0,,,,CC BY-SA 4.0 +13711,2,,13693,7/31/2019 19:13,,0,,"

Approaches I have seen/used:

+ +

Scenario 1: The angle is in between 2 vectors of some form, in which case, $[-\pi, 0)$ and $[0, \pi)$ are equivalent. (In vector space there always exists two angles between vectors: $\theta$ and $1-\theta$ )

+ +
  1. Use the sigmoid $\sigma$ function to put it between 0 and 1, and then scale to $[0, \pi)$. Since you aren't modeling the whole rotation in this case, this works decently because the 2 ends don't have to be modeled as equivalent.
  2. Use cosine similarity: learn 2 vector representations and use $\cos\ \theta = \frac{v_1^T v_2}{||v_1||*||v_2||}$, and then you could use the arccos.
+ +

Scenario 2: Wanting to learn an angle on the unit circle, meaning $\theta$ vs $1-\theta$ is very important and so your bound is $[-\pi, \pi)$

+ +
  1. Use a sinusoidal activation and multiply by $\pi$ (e.g. $z=\pi*\sin(x)$). This way you model the periodic nature of circling the unit circle (the activation eventually rotates back to itself continuously). See the sketch after this list.
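
A minimal PyTorch sketch of scenario 2 (hypothetical input size): squash the output into the angular range with a sinusoidal activation.

import math
+import torch
+import torch.nn as nn
+
+class AngleHead(nn.Module):
+    def __init__(self, in_features=64):
+        super().__init__()
+        self.fc = nn.Linear(in_features, 1)
+
+    def forward(self, x):
+        # the sinusoidal activation keeps the output in [-pi, pi]
+        return math.pi * torch.sin(self.fc(x))
+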
+",25496,,,,,7/31/2019 19:13,,,,0,,,,CC BY-SA 4.0 +13713,5,,,7/31/2019 20:32,,0,,"

https://en.wikipedia.org/wiki/Computational_complexity

+",1671,,1671,,7/31/2019 20:32,7/31/2019 20:32,,,,0,,,,CC BY-SA 4.0 +13714,4,,,7/31/2019 20:32,,0,,The amount of resources required to run a given algorithm in relation to a given task. ,1671,,1671,,7/31/2019 20:32,7/31/2019 20:32,,,,0,,,,CC BY-SA 4.0 +13717,2,,13706,7/31/2019 21:47,,5,,"

tl;dr

+

The safest method I've found is to use cross-validation for hyperparameter selection and a hold-out test set for a final evaluation.

+

Why this isn't working for you...

+

In your case, I suspect you're either running a large number of iterations during hyperparameter selection or you have a fairly small dataset (or even a combination of both). If you can't find more data or use a larger dataset, I'd suggest limiting the exhaustiveness of your hyperparameter selection phase. If you run the process enough times the model is bound to overfit on the validation sets.

+

Note that there is no guaranteed safe way of detecting overfitting before the test set!

+

Why I consider this strategy to be the safest

+

There are two different types of overfitting you need to be able to detect.

+

The first is the most straightforward: overfitting on the training set. This means that the model has memorized the training set and can't generalize beyond that. If the test set even slightly differs from the training set (which is the case in most real-world problems), then the model will perform worse on it than on the training set. This is simple to detect and, in fact, the only thing you need to catch this is a test set!

+

The second type of overfitting you need to detect is on the test set. Imagine you have a model and you make an exhaustive hyperparameter selection using a test set. Then you evaluate on the same test set. By doing this, you have adjusted your hyperparameters to achieve the best score for the specific test set. These hyperparameters are thus overfitting to that test set, even though the samples from that set were never seen during training. This is possible, because, during the iterative hyperparameter selection process, you have passed information about your test set to your model!

+

This is much harder to detect. In fact, the only way to do so is to split the original data into three parts: the training, the validation and the test sets. The first is used for training the model, the second for hyperparameter selection, and the final is used only once for a final evaluation. If your model has overfitted (i.e. test performance is worse than training and validation), you need to start from scratch. Shuffle your data, split it again and repeat the process. This is usually referred to as a hold-out strategy.

+

To make the model even less prone to overfitting, you can use cross-validation instead of a hold-out validation set. Because your model now has to be trained on $k$ slightly different training sets and evaluated on $k$ completely different validation sets, it is much harder for it to overfit on the validation sets (because it needs to fool $k$ different validation sets instead of one).
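
As an illustration, a minimal scikit-learn sketch of this strategy (the estimator, parameter grid and synthetic data are placeholders): cross-validation drives hyperparameter selection, and the hold-out test set is touched only once at the end.

from sklearn.datasets import make_classification
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import train_test_split, GridSearchCV
+
+X, y = make_classification(n_samples=500)  # stand-in for your real dataset
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+
+search = GridSearchCV(RandomForestClassifier(),
+                      param_grid={'n_estimators': [50, 100], 'max_depth': [3, None]},
+                      cv=5)  # k-fold cross-validation on the training part only
+search.fit(X_train, y_train)
+
+print('cross-validation score:', search.best_score_)
+print('final test score:', search.score(X_test, y_test))  # used only once
+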

+

Cases where this might not be applicable

+

Depending on the circumstances, it might not be practical to apply both of these techniques.

+
    +
  • Cross-validation is pretty robust regarding overfitting but it imposes a computational burden to your process as it requires multiple trainings of the same model. This obviously isn't practical for computationally expensive models (e.g. image classifiers). In this case, use a hold-out strategy as mentioned previously.

    +
  • +
  • Using a hold-out test set means that you are reducing the size of your training set, which might actually make your model more prone to overfitting (i.e. have a higher-variance) for small datasets. In this case (if your model is practically untrainable due to the small size) you can resort to cross-validation, but you risk overfitting on the validation set and not having any way to detect it.

    +
  • +
+

How to combat overfitting

+

Since it is fairly related, I'll post a link to an answer on how to combat overfitting.

+",26652,,2444,,6/6/2021 0:00,6/6/2021 0:00,,,,1,,,,CC BY-SA 4.0 +13718,2,,6669,7/31/2019 22:51,,0,,"

As epsilon is throttled down, networks 1 and 2 can freely specialize to producing tic-tac-toe's well-known non-losing behaviour against quasi-perfect adversaries, without encoding any non-losing or winning behaviour against random (in other words, bad) adversaries. I suggest that, while training network 1 (1st mover) and reducing epsilon-1, you keep the 2nd mover's epsilon-2 at values distinctly above 0, indeed why not fixed. Vice-versa for training the 2nd mover.

+",27580,,,,,7/31/2019 22:51,,,,0,,,,CC BY-SA 4.0 +13720,1,27364,,8/1/2019 10:54,,2,137,"

I'm programming my work with Python. I have a mesh and I want to extract 3D descriptors and feature points from it (trying to work on a multi-scale strategy), to visualize them later on the mesh.

+ +

What I'm asking about is references, guidelines, anything that could help me with this situation.

+ +

The main work I'm trying to do is to reach the matching stage, where I can find one-to-one correspondences.

+",27587,,,,,4/16/2021 10:38,Extracting Descriptors and feature points for 3d mesh,,1,0,,,,CC BY-SA 4.0 +13722,1,13731,,8/1/2019 19:41,,1,74,"

I've been trying out a simple neural network on the fashion_mnist dataset using keras. Regarding normalization, I've watched this video explaining why it's necessary to normalize input features, but the explanation covers the case when input features have different scales. The logic is, say there are only two features - then if the range of one of them is much larger than that of the other, the gradient descent steps will stagger along slowly towards the minimum.

+ +

Now I'm doing a different course on implementing neural networks and am currently studying the following example - the input features are pixel values ranging from 0 to 255, the total number of features (pixels) is 784 (28x28), and we're supposed to classify images into one of ten classes. Here's the code:

+ +
import tensorflow as tf
+
+(Xtrain, ytrain) ,  (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+Xtrain_norm = Xtrain.copy()/255.0
+Xtest_norm = Xtest.copy()/255.0
+
+model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+                                    tf.keras.layers.Dense(128, activation=""relu""),
+                                    tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model.fit(Xtrain_norm, ytrain, epochs=5)
+model.evaluate(Xtest_norm, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 145us/sample - loss: 0.5012
+Epoch 2/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3798
+Epoch 3/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3412
+Epoch 4/5
+60000/60000 [==============================] - 7s 123us/sample - loss: 0.3182
+Epoch 5/5
+60000/60000 [==============================] - 7s 124us/sample - loss: 0.2966
+10000/10000 [==============================] - 1s 109us/sample - loss: 0.3385
+0.3384787309527397
+
+ +

So far, so good. Note that, as advised in the course, I've rescaled all inputs by dividing by 255. Next, I ran without any rescaling:

+ +
import tensorflow as tf
+
+(Xtrain, ytrain) ,  (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+model2 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+                                    tf.keras.layers.Dense(128, activation=""relu""),
+                                    tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model2.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model2.fit(Xtrain, ytrain, epochs=5)
+model2.evaluate(Xtest, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 158us/sample - loss: 13.0456
+Epoch 2/5
+60000/60000 [==============================] - 8s 137us/sample - loss: 13.0127
+Epoch 3/5
+60000/60000 [==============================] - 8s 140us/sample - loss: 12.9553
+Epoch 4/5
+60000/60000 [==============================] - 9s 144us/sample - loss: 12.9172
+Epoch 5/5
+60000/60000 [==============================] - 9s 142us/sample - loss: 12.9154
+10000/10000 [==============================] - 1s 121us/sample - loss: 12.9235
+12.923488986206054
+
+ +

So somehow rescaling does make a difference? Does that mean if I further reduce the scale, the performance will improve? Worth trying out:

+ +
import tensorflow as tf
+
+(Xtrain, ytrain) ,  (Xtest, ytest) = tf.keras.datasets.fashion_mnist.load_data()
+
+Xtrain_norm = Xtrain.copy()/1000.0
+Xtest_norm = Xtest.copy()/1000.0
+
+model3 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
+                                    tf.keras.layers.Dense(128, activation=""relu""),
+                                    tf.keras.layers.Dense(10, activation=""softmax"")])
+
+model3.compile(optimizer = ""adam"", loss = ""sparse_categorical_crossentropy"")
+model3.fit(Xtrain_norm, ytrain, epochs=5)
+model3.evaluate(Xtest_norm, ytest)
+------------------------------------OUTPUT------------------------------------
+Epoch 1/5
+60000/60000 [==============================] - 9s 158us/sample - loss: 0.5428
+Epoch 2/5
+60000/60000 [==============================] - 9s 147us/sample - loss: 0.4010
+Epoch 3/5
+60000/60000 [==============================] - 8s 141us/sample - loss: 0.3587
+Epoch 4/5
+60000/60000 [==============================] - 9s 144us/sample - loss: 0.3322
+Epoch 5/5
+60000/60000 [==============================] - 8s 138us/sample - loss: 0.3120
+10000/10000 [==============================] - 1s 133us/sample - loss: 0.3718
+0.37176641924381254
+
+ +

Nope. I divided by 1000 this time and the performance seems worse than the first model. So I have a few questions:

+ +
  1. Why is it necessary to rescale? I understand rescaling when different features are of different scales - that will lead to a skewed surface of the cost function in parameter space. And even then, as I understand from the linked video, the problem has to do with slow learning (convergence) and not high loss/inaccuracy. In this case, ALL the input features had the same scale. I'd assume the model would automatically adjust the scale of the weights and there would be no adverse effect on the loss. So why is the loss so high for the non-scaled case?
  2. If the answer has anything to do with the magnitude of the inputs, why does further scaling down of the inputs lead to worse performance?
+ +

Does any of this have anything to do with the nature of the sparse categorical crossentropy loss, or the ReLU activation function? I'm very confused.

+",27548,,,,,8/2/2019 6:58,Effect of rescaling of inputs on loss for a simple neural network,,1,0,,,,CC BY-SA 4.0 +13723,2,,13694,8/1/2019 19:48,,2,,"
    +
  • Not only is it 100% ok, it's the process.
  • +
+ +

You may be surprised to know that even mathematicians struggle with mathematics, both the proofs they are working on, and the proofs of their colleagues. Some thinkers are so far ahead of the curve, very few understand what they're stating until generations later.

+ +

The main thing is to keep with it.

+",1671,,,,,8/1/2019 19:48,,,,0,,,,CC BY-SA 4.0 +13725,1,,,8/2/2019 0:47,,7,611,"

I came across a comment recently ""reads like sentences strung together with no logic."" But is this even possible?

+ +

Sentences can be strung together randomly if the selection process is random. (Random sentences in a random sequence.) Stochasticity does not seem logical—it's a probability distribution, not based on sequence or causality.

+ +

but

+ +

That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute.

+ +

So which is it?

+ +
    +
  • Is randomness anti-logical?
  • +
+ +


+ +
+ +

Some definitions of computational logic:

+ +

• Merriam-Webster: The arrangement of circuit elements (as in a computer) needed for computation; also: the circuits themselves.
• Google Dictionary: A system or set of principles underlying the arrangements of elements in a computer or electronic device so as to perform a specified task; logical operations collectively.
• Oxford English Dictionary: The system or principles underlying the representation of logical operations; logical operations collectively, as performed by electronic or other devices.

+ +

Some definitions of randomness

+ +

• Merriam-Webster: Being or relating to a set or to an element of a set each of whose elements has equal probability of occurrence; lacking a definite plan, purpose, or pattern.
• Google Dictionary: Made, done, happening, or chosen without method or conscious decision.
• Oxford English Dictionary: Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method; seeming to be without purpose or direct relationship to a stimulus.

+",1671,,1671,,8/5/2019 20:17,5/9/2020 22:23,Is randomness anti-logical?,,6,7,,,,CC BY-SA 4.0 +13726,1,,,8/2/2019 0:50,,2,75,"

I am developing a body measurement extraction application. My current stage is able to extract the point clouds of a human body in a standing posture, from all angles.

+ +

Now, to be able to recognize shoulders, neck point etc, my research seems to fall into following flows:

+ +

Method A:

+ +
    +
  1. Obtain a lot of data, with labeled landmark points (shoulder left, shoulder right, neck line).
  2. +
  3. Use PointCNN / PointNet++ to perform segmentation for each landmark.
  4. +
  5. Once the landmarks are extracted, use Open3D / Point Cloud Library convex hull to obtain the measurement along point clouds. + +
      +
    • Method A seems straightforward, but might depend on the quality of the point cloud, especially the 3rd step.
    • +
  6. +
+ +

Method B:

+ +
    +
  1. Obtain a lot of data, with labeled landmarks and also measurement of shoulder length, chest circumference etc.
  2. +
  3. We first train the network to identify landmarks,
  4. +
  5. Then from landmark, we record the distance to the next nearest point, train the network to obtain the measurement that we want.
  6. +
+ +

Method C:

+ +
    +
  1. Obtain a lot of data, with measurement of shoulder length, chest circumference etc ONLY.
  2. +
  3. Pick a random point, and we record the distance to the next nearest point, train the network to obtain the measurement that we want.
  4. +
+ +

My questions:

+ +
  1. How much data is needed for this kind of learning?
  2. If I obtain training data from somewhere online, and later validate using my own scanned data, will that be valid?
  3. Which method makes more sense?
  4. Which existing problem with a known solution is similar to my case (facial recognition?), so that I can refer to it to solve my problem?
+ +

This is technically my first machine learning project, so please bear with me if my questions seem too silly.

+",15755,,15755,,8/2/2019 10:51,8/2/2019 10:51,Pipeline to Estimate Measurement of Human Body Point Cloud,,0,0,,,,CC BY-SA 4.0 +13727,5,,,8/2/2019 0:50,,0,,,1671,,2444,,1/7/2023 10:42,1/7/2023 10:42,,,,0,,,,CC BY-SA 4.0 +13728,4,,,8/2/2019 0:50,,0,,"For questions about the concept of randomness, or use of stochasticity in decision making algorithms. ",1671,,2444,,1/7/2023 10:42,1/7/2023 10:42,,,,0,,,,CC BY-SA 4.0 +13729,2,,13725,8/2/2019 4:51,,3,,"

I think the answer here lies in that the dictionary definition of randomness you have is not the one used in statistics, ML, or mathematics. We define randomness to mean there exists a distribution with generally greater than 0 uncertainty.

+ +

Depending on who you talk to, we live in a random universe (the way we define quantum mechanics depends on a wave function, which is essentially a probability distribution).

+ +

So why if a sequence is drawn from a distribution is it illogical? First, even as humans we can make a strong argument that what we say is random. I mean we speak to convey some form of message or context, but there exists multiple ways to deliver this, but we choose a single one. Our brains inherently model $p(\vec w|c)$ where $\vec w$ is the sequence and $c$ is our context or message we want to convey.

+ +

Takeaway: Generating a sequence in an ergodic or uniform manner would be illogical, but that is not what is being modeled or done in practice. Normally its drawn from some complex distribution.

+ +

Sidenote: My above claim could make it seem that being uniformly random implies something illogical, and I want to emphasize that is not the case. It varies from domain to domain; sometimes that is the most logical solution, it's just that in the case of sentence generation it normally isn't. I would define a logical algorithm as one that, given the information at hand, acts in a sensible manner towards achieving some goal, and so if something purely random does that, I don't see the problem.

+",25496,,,,,8/2/2019 4:51,,,,2,,,,CC BY-SA 4.0 +13731,2,,13722,8/2/2019 6:58,,1,,"

One of the reasons rescaling gives a better result is the weight initialization of the model. Your experiments run on the same model using the same weight initialization (tf.keras.layers.Dense uses Glorot uniform by default). If your input is too big or too small, the model needs time to adjust the weights.

+ +
+

Why is it necessary to rescale? I understand rescaling when different features are of different scales - that will lead to a skewed surface of the cost function in parameter space. And even then, as I understand from the linked video, the problem has to do with slow learning (convergence) and not high loss/inaccuracy. In this case, ALL the input features had the same scale. I'd assume the model would automatically adjust the scale of the weights and there would be no adverse effect on the loss. So why is the loss so high for the non-scaled case?

+
+ +

Slow learning means you can't compare the loss of two models even at the same epoch. As the models use the same initial values of the weights, the second model may need more time (epochs) to adjust its weights to get the same result as the first model.

+ +
+

If the answer has anything to do with the magnitude of the inputs, why does further scaling down of the inputs lead to worse performance?

+
+ +

The reason is the same: very small input values also make it difficult to adjust the weights.
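
A rough numpy illustration of this point (hypothetical layer sizes): with Glorot-uniform weights the initial pre-activations scale with the input magnitude, so inputs in [0, 255] start far from the regime the initialization was designed for.

import numpy as np
+
+rng = np.random.default_rng(0)
+fan_in, fan_out = 784, 128
+limit = np.sqrt(6.0 / (fan_in + fan_out))           # Glorot uniform limit
+W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
+
+x_raw = rng.integers(0, 256, size=fan_in).astype(float)
+x_scaled = x_raw / 255.0
+
+print(np.abs(x_raw @ W).mean())     # large initial pre-activations
+print(np.abs(x_scaled @ W).mean())  # 255 times smaller
+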

+",16565,,,,,8/2/2019 6:58,,,,1,,,,CC BY-SA 4.0 +13732,1,13758,,8/2/2019 8:16,,2,591,"

Some puzzle games have a unique solution that can be found by deduction rather than guesswork (e.g. Slitherlink, Masyu). Using a computer to solve such a puzzle is pretty easy: we can use a backtracking method to find the solution in seconds (in general, the puzzle size is not too big).

+ +
+

Is it possible to train a bot to solve this kind of puzzle by deduction?

+
+ +

I think that, by training it to watch a previous step-by-step solution several times, the bot can find some implicit rules/patterns to solve a specific puzzle. Is this possible? Are there any references for this method?

+",16565,,,,,8/3/2019 17:49,Create an AI to solve a puzzle (by deduction),,1,3,,,,CC BY-SA 4.0 +13733,2,,13725,8/2/2019 8:24,,3,,"

I might misunderstand your question, but there seem to be different levels of logic at play here.

+ +
    +
  1. Computing logic, whereby any computational process is based on processor logic. In this case, any computing is involving logic, as boolean logic drives any processing.

  2. +
  3. Linguistic logic, where there is a logic in the sequencing of sentences within a text. A random collection of sentences is not a text, as there need to be certain principles behind the structure to make it a narrative.

  4. +
+ +

While you can easily generate a sequence of random sentences, they will not mean anything; there won't be any logic behind selecting a particular sentence to follow on from another one. So this is linguistic logic rather than processing logic. Note that where the linguistic logic is makes it a bit vague: I can read a randomly selected sequence of sentences and ascribe meaning to it by building a mental model that treats it as a logically constructed text. This principle is what made ELIZA so successful: even though the program's answers were based on simple pattern matching rules with no understanding, many users assumed there was logic/meaning behind it and interpreted it as such, papering over the cracks in the conversation.

+ +

In summary: there is logic involved in random sentence combining, but it is the low-level computing logic, not the higher-level linguistic interpretative logic, which is generally absent from randomly generated data.

+",2193,,,,,8/2/2019 8:24,,,,0,,,,CC BY-SA 4.0 +13734,2,,13644,8/2/2019 9:26,,2,,"
+

What I want to achieve is incremental training. So, as soon as I get new data, I can further train my already trained model and I don't have to retrain everything.

+
+ +

Learning without forgetting is one of the methods to solve multitask learning. If your model was trained to solve problem A and then, after some time, you need your model to solve a new problem B without forgetting problem A (the model stays good at solving problem A), then you need this.

+ +

Transfer learning is a method to use a trained model to solve another task (and it may forget the original task). For example, you use a model that was originally trained to classify cats or dogs for a new task of classifying goats or cows. You use this in hopes of speeding up your training process.

+ +

If your new data has the same task as the old data, you don't need to use a multitask learning method. For example:

+ +
  • If your model was trained with 50 images to detect an apple in an image, and then you get 100 new images to detect an apple, then you just need to continue your training (incremental learning). In this case, you need (to save) the latest parameters of your model after training (latest learning rate value, epoch, etc.); if you have them, then you just need to run your training again (continue the epoch).
  • If your model was trained with 100 images to detect an apple in an image, and then you get 100 new images to train your model to detect an orange, and you don't care if your model will give a bad result when detecting an apple, then you can use transfer learning. You may freeze the first few layers as an ""extractor"" and initialize a new layer at the end.
  • If your model was trained with 100 images to detect an apple in an image, and then you get 100 new images to detect an orange, and your model must be good at detecting both an apple and an orange in an image, then you use multitask learning. The easiest method is to train your model with the apple+orange images, but you can also use another approach like the one proposed in the Learning without Forgetting paper.
+",16565,,16565,,8/5/2019 4:02,8/5/2019 4:02,,,,1,,,,CC BY-SA 4.0 +13735,1,,,8/2/2019 10:19,,2,26,"

so I have this dataset of images of people sitting in a restaurant.
+I've annotated about 300 images with an average of 30 instances of ""person"" per image.

+ +

Now I'm wondering if I should have annotated only one (or just a few) person per image and processed way more images ?

+ +

I've successfully trained an SSD network with only one class, but I'm still wondering if I should have gone the other way...

+ +

Anyone got input on that ?

+ +

Cheers.

+",27602,,,,,8/2/2019 10:19,Train detector : 300 images with 30 objects or 9000 images with one?,,0,0,,,,CC BY-SA 4.0 +13738,1,13745,,8/2/2019 12:57,,6,1474,"

I have recently discovered asymmetric convolution layers in deep learning architectures, a concept which seems very similar to depthwise separable convolutions.

+ +

Are they really the same concept with different names? If not, where is the difference? To make it concrete, what would each one look like if applied to a 128x128 image with 3 input channels (say R,G,B) and 8 output channels?

+ +

NB: I cross-posted this from stackoverflow, since this kind of theoretical question is maybe better suited here. Hoping it is OK...

+",27606,,2444,,8/2/2019 14:28,8/2/2019 16:56,What is the difference between asymmetric and depthwise separable convolution?,,1,2,,,,CC BY-SA 4.0 +13739,1,13740,,8/2/2019 13:27,,2,55,"

I'm reading the book Pattern Recognition and Machine Learning by Bishop, specifically the intro where he covers polynomial regression model. In short, let's say we generate $10$ data points using the function $\sin(2\pi x)$ and add some gaussian random noise to each observation. Now we pretend not knowing the generating function and try to fit a polynomial model to these points.

+ +

As we increase the degree of the polynomial, it goes from underfitting ($d=1,2$) to overfitting ($d=10$). One thing the author notes is that the higher the degree of the polynomial, the higher the values of the coefficients (parameters). This is my first doubt: why does the size of the coefficients increase with the polynomial degree? And why is the size of the parameters related to overfitting?

+ +

Secondly, he states that even for degree $10$, if we get sufficiently many data points (say $100$), then the high degree polynomial will no longer overfit the data and should have comparatively better generalization performance. Second doubt: Why is this so?

+",27548,,,,,8/2/2019 16:51,Relation between size of parameters and complexity of model with overfitting,,2,0,,,,CC BY-SA 4.0 +13740,2,,13739,8/2/2019 14:31,,1,,"

Regarding your first question, this is domain/task specific, and not always the case. My guess of why it happens in your situation (you did not specify your domain, so I'll assume it's sometimes outside $(-1,1)$) is that higher order polynomials increase much faster than lower order ones, so it may fall under the trap where the coefficients have to be large to handle it. (e.g. if $x=2.0$ then $ 2^5*x^2 = x^7 \rightarrow 32*x^2 = x^7$)

+ +

Now, the reason more points will cause less overfitting is simply that you have more equations. Your loss is generally some form of average distance across points and predictions. Given $N$ points I can always find an $N-1$ order polynomial that fits them exactly. Instead of a formal proof I will provide a form in which you can easily generate this polynomial from $N$ $(x_i, y_i)$ pairs:
+$$ f(x) = \sum_{i=1}^{N}y_i\prod_{j=1, j \neq i}^N\frac{(x-x_j)}{(x_i-x_j)}$$

+ +

As you can see, for each $x_i$ it will zero out all terms in the sum except one, where the product becomes 1, making $f(x_i) = y_i$ by construction. So adding more points will prevent whatever optimization process you are using from directly solving each point exactly, and instead force it to find a function that best generalizes across all of the points.
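
A small plain-Python sketch of that construction, just to make it concrete (toy data):

def lagrange(x, xs, ys):
+    # evaluates the interpolating polynomial defined by the (x_i, y_i) pairs
+    total = 0.0
+    for i, (xi, yi) in enumerate(zip(xs, ys)):
+        term = yi
+        for j, xj in enumerate(xs):
+            if j != i:
+                term *= (x - xj) / (xi - xj)
+        total += term
+    return total
+
+xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0]
+print([lagrange(x, xs, ys) for x in xs])  # reproduces ys exactly
+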

+",25496,,,,,8/2/2019 14:31,,,,0,,,,CC BY-SA 4.0 +13741,1,,,8/2/2019 14:50,,3,637,"

I am trying to train my model using an LSTM layer in Keras (Python). I have some problems regarding the data representation and feeding it into the model.

+ +

My data is 184 sets of XY coordinates encoded as a numpy array with two dimensions: one corresponding to X or Y, and the second to every single point of X or Y. The shape of a single spectrum is (2, 70). Altogether, my data has a dimension of (184, 2, 70).

+ +

The label set is an array of 8 elements which describes the percentage distribution of 8 features describing XY. The shape of the output is (184, 8).

+ +

My question is: how can I train using the time series for each XY pair and compare it to the corresponding label set? Different XY data show similar features to each other, which is why it is important to use all 184 samples for the training. What would be the best approach to handle this problem? Below I show the schematics of my data and model:

+ +

Input: (184, 2, 70) (number of XY, X / Y, points)

+ +

Output: (184, 8) (number of XY, predictions)

+ +

I look forward for some ideas!

+ +

+",27608,,27608,,8/5/2019 9:12,6/19/2023 18:09,How to train a LSTM model with multi dimensional data,,1,2,,,,CC BY-SA 4.0 +13742,2,,13600,8/2/2019 15:34,,1,,"

This is a somewhat provocative view, so be warned (and please don't down-vote this if you feel provoked by it!):

+ +

In the ""old days"", when information retrieval (IR) was one of the main tasks in NLP, several categories of words were ignored as stopwords; conjunctions, determiners, prepositions, etc. These function words do not carry meaning themselves, but organise the structure of sentences. Most IR algorithms worked on frequencies of individual words, and as functions words are very frequent (of and the are the two most frequent English words) and don't mean anything by themselves, they were ignored. This kept the index files small and didn't seem to influence the results.

+ +

However, if you want to analyse sentences themselves, they are rather important. They are also useful for all sorts of other tasks where you are looking at sequences of words (eg part-of-speech tagging based on context). Similar for word embeddings: without function words you'd not have any meaningful context to work with. So, increasingly you would not ignore function words anymore.

+ +

My suspicion is, that punctuation is now in the 'stopword position': it's not too clear how it influences meaning, and is often inconsistent or redundant (obviously not in all cases). So you can probably treat it as 'noise' and get away with it for most applications. For example, looking at meanings of words, it probably doesn't matter that much whether the sentence they occurred in was a question or an exclamation. By removing punctuation (maybe apart from sentence-terminators), your model is a bit smaller and you don't lose much.

+ +

Since punctuation is purely a property of written language, we can generally get away without it, as we do in speech. A text without punctuation might be harder to read, because we're not used to it, but don't forget that some writing systems (Chinese, Egyptian hieroglyphics, ...) don't even have spaces between words — and people can still use them without problems.

+",2193,,,,,8/2/2019 15:34,,,,1,,,,CC BY-SA 4.0 +13743,2,,13739,8/2/2019 16:44,,1,,"

The size of the coefficients will probably increase only up to a certain degree of polynomial. This is due to the fact you are using $sin(2\pi x)$; if you used $sin(4\pi x)$ then the size of the coefficients would increase up to higher degrees of polynomial. This can be seen when $sin(x)$ is represented as a series:

+ +

$$ sin(x) = \frac{x}{1!} - \frac{x^3}{3!} + \frac{x^5}{5!}....$$

+ +

In your case $x \rightarrow 2\pi x$, so in order to approximate it the higher order terms must have very high coefficients which the denominator factorial terms cannot cancel out (only up to a certain point though). Hence, for small orders like $N=10$ (assume we have even terms in the series, since we are not dealing with mathematical definiteness, so even terms will cancel out or get cancelled out in some way), $10! = 3628800$ whereas $(2\pi) ^{10} \approx 9.6 \times 10^7$, around 26 times greater. So you see, up to a certain point the coefficient values must increase for $sin(2\pi x)$. I think this answers both of your questions.
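
A quick Python check of that comparison:

import math
+
+print(math.factorial(10))                        # 3628800
+print((2 * math.pi) ** 10)                       # roughly 26 times larger
+print((2 * math.pi) ** 10 / math.factorial(10))  # ~26
+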

+ +

Coming to your second question: loosely speaking, the ML algorithm you are using performs polynomial regression, which means fitting a curve by adjusting parameters in such a way that the points generated by your model, for a given input, are as close as possible to the real data.

+ +

So the question is: why does increasing the number of data points give better generalisation? What most people do not mention is that you now have a better representation of the function itself. By which I mean: if I give you 2 points (the least number required, as per the Nyquist sampling theorem, to define a $sin$ wave of a certain frequency) from a $sin$ curve, then unless you know beforehand, you cannot tell whether it was generated from a $sin$; but if I give you 100 points within the same time period (of a sine wave) you can easily guess that the data must be generated from $sin$. Similarly, an ML algorithm cannot guess where the data is generated from when the number of data points is small and tries to fit a model according to its best guess (minimum loss), but if you give it a larger number of points it will make a better guess, hence better generalisation.

+ +

Think of it like this: you want to make a circle with a rubber band around pins. Can you make it with 4-5 pins? You need at least a certain number of pins to make it look like a circle. The rubber band here is your model.

+",,user9947,,user9947,8/2/2019 16:51,8/2/2019 16:51,,,,0,,,,CC BY-SA 4.0 +13745,2,,13738,8/2/2019 16:56,,3,,"

They are not the same thing.

+ +

asymmetric convolutions work by taking the x and y axes of the image separately. For example performing a convolution with an $(n \times 1)$ kernel before one with a $(1 \times n)$ kernel.

+ +

On the other hand, depthwise separable convolutions separate the spatial and channel components of a 2D convolution. They will first perform the $(n \times n)$ convolution on each channel separately (the full kernel shape will be $(n \times n \times 1)$ rather than $(n \times n \times k)$, where $k$ is the number of channels in the previous layer) before doing a $(1 \times 1)$ convolution to learn a relationship between the channels (the full kernel size for that being $(1 \times 1 \times k)$).
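
To make it concrete, a minimal Keras sketch for the 128x128x3 input with 8 output channels mentioned in the question (kernel size n=3 assumed):

import tensorflow as tf
+
+inputs = tf.keras.Input(shape=(128, 128, 3))
+
+# asymmetric convolution: a (3 x 1) kernel followed by a (1 x 3) kernel
+asym = tf.keras.layers.Conv2D(8, (3, 1), padding='same')(inputs)
+asym = tf.keras.layers.Conv2D(8, (1, 3), padding='same')(asym)
+
+# depthwise separable convolution: per-channel (3 x 3), then a (1 x 1) across channels
+sep = tf.keras.layers.SeparableConv2D(8, (3, 3), padding='same')(inputs)
+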

+",25496,,,,,8/2/2019 16:56,,,,2,,,,CC BY-SA 4.0 +13746,2,,13725,8/2/2019 17:09,,1,,"

In certain games, random selection is the optimal strategy. See: Matching Pennies

+ +

Strategy is essentially a plan of action utilized to achieve a goal.

+ +
    +
  • If random choice can be a strategy, it seems that it must be a form of logic, even if the nature of the stochastic process is counter to all forms of formal logic.
  • +
+ +

This seems paradoxical, in that the random strategy is to have no strategy (random choices.)

+",1671,,1671,,8/5/2019 18:39,8/5/2019 18:39,,,,12,,,,CC BY-SA 4.0 +13748,1,,,8/2/2019 20:53,,1,24,"

I have an idea for an RNN which has no separate internal memory state, only an output. But there is a gate which tells the neural network whether the output will be acted out in the physical world or whether it will be an internal thought. (It would also store its last, say, 10 outputs, so that it can have a memory of some kind.)

+ +

I think this would be quite realistic because humans either talk or think in an internal monologue, but don't do both. (It is hard to think and do things at the same time.)

+ +

But I wonder how this gate will be activated. For example, when talking to someone familiar, this gate will be open, as you just say what's in your head. But for quiet contemplation time, this gate will be closed. And for thoughtful conversation, it will be open 50% of the time. So, I wonder if this gate would be controlled by the NN itself or be controlled from the environment?

+ +

I think there would be social pressure involved when talking to someone to keep the gate open. And likewise when in a library or a quiet place to keep the gate closed.

+ +

I wonder if there are some models like this out there already?

+",4199,,2444,,8/2/2019 21:16,8/2/2019 21:16,A gated neural network for internal thought?,,0,1,,,,CC BY-SA 4.0 +13749,2,,10623,8/2/2019 22:06,,19,,"

Self-supervised learning is when you use some parts of the samples as labels for a task that requires a good degree of comprehension to be solved. I'll emphasize these two key points, before giving an example:

+ +
    +
  • Labels are extracted from the sample, so they can be generated automatically, with some very simple algorithm (maybe just random selection).

  • +
  • The task requires understanding. This means that, in order to predict the output, the model has to extract some good patterns from the data, generating on the process a good representation.

  • +
+ +

A very common case for self-supervised learning takes place in natural language processing, when you need to solve a task but have little labeled data. In such cases, you need to learn a good representation or language model, so you take sentences and give your network self-supervision tasks like these:

+ +
    +
  • Ask the network to predict the next word in a sentence (which you know because you took it away).

  • +
  • Mask a word and ask the network to predict which word goes there (which you know because you had to mask it).

  • +
  • Change the word for a random one (that probably doesn't make sense) and ask the network which word is wrong.

  • +
+ +

As you can see, these tasks are fairly simple to formulate and the labels are part of the same sample, but they require a certain understanding of the context to be solved.
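
For instance, a tiny Python sketch of how such (input, label) pairs can be generated automatically by masking a random word (toy example):

import random
+
+def make_masked_example(sentence):
+    words = sentence.split()
+    i = random.randrange(len(words))
+    label = words[i]          # the label is just the word we removed
+    words[i] = '[MASK]'
+    return ' '.join(words), label
+
+print(make_masked_example('the cat sat on the mat'))
+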

+ +

And it's always like this: alter your data in some way, generating the label in the process, and ask the model something related to that transformation. If the task requires enough understanding of the data, you'll have success.

+",27444,,,,,8/2/2019 22:06,,,,0,,,,CC BY-SA 4.0 +13750,1,,,8/2/2019 23:22,,2,122,"

I am looking into whether a neural network is appropriate to detect ""points of interest"" (POI) in a set of tuples (say length, and some sensor value). A POI is essentially a quick change in the value which doesn't follow the pattern. So if we have a linear increase in the sensor value and then it suddenly jumps by 200% that would be a POI.

+ +

Here is an example of the data I am working with:

+ +
[(1,10),(2,11),(3,14),(5,24),(6.5,25), (7,26), (8,45)]
+
+ +

In this example lets say ""(3,14)"", ""(5,24)"", and ""(8,45)"" are points of interest. So I am trying to design a neural network which will detect these.

+ +

I have started by creating a Convolution 1D layer with a static input length of 500 elements.

+ +

After a couple hidden layers I apply a sigmoid function which provides a list of 0s and 1s as output where 1s signify a POI in the set.

+ +

There are a couple of issues with this approach which I am trying to solve.

+ +

In a categorical loss function, an output of [1,0,0,1,0,0], for example, would be seen as completely inaccurate if the expected output is [0,1,0,0,1,0], whereas in reality that is fairly accurate, since the predicted POIs are very close to the real POIs.

+ +

So what I am trying to do is find a loss function to optimize the neural network.

+ +

So far I have tried:

+ +
    +
  • Binary Cross Entropy: I read this is good for classifying where inputs can belong to multiple classes. I tried this out thinking each POI is essentially a ""category"". But this seems to not work and I assume it's because of what I noted above.
  • +
  • Mean Absolute Error: This seems to have gotten slightly better results but after closer inspection it didn't seem very accurate and would mostly uniformly predict POIs on a set.
  • +
+ +

I have tried a few others without much luck.

+ +

What loss function would be more appropriate for this?

+ +

One other output format I tried: instead of outputting 0s and 1s, the network should just return the indexes of the points of interest, say 3, 5, 8. Would this be a better output?

+",27615,,,,,8/3/2019 10:51,"What loss function is appropriate for finding ""points of interest"" in a array of x,y inputs",,1,0,,,,CC BY-SA 4.0 +13751,1,,,8/3/2019 1:00,,3,296,"

I am a bit confused about observations in RL systems which use RNN to encode the state. I read a few papers like this and this. If I were to use a sequence of raw observations (or features) as an input to RNN for encoding the state of the system, I cannot change the weights of my network in the middle of the episode. Is that correct? Otherwise, the hidden state vectors will be different when the weights are changed.

+ +

Does that mean that the use of RNN in RL has to store the entire episode before the weights can be changed?

+ +

How does then one take into account the hidden states in RNN for RL? Are there any good tutorials on RNN-RL?

+",21509,,2444,,8/3/2019 10:49,8/15/2023 8:00,How are the observations stored in the RNN that encodes the state?,,1,0,,,,CC BY-SA 4.0 +13752,2,,13750,8/3/2019 2:26,,1,,"

You don't have to use machine learning to solve the problem.

+ +
    +
  1. Unify the scale of each data input (or each curve), for example normalize to $[0,1]$ (this step is optional).

  2. +
  3. Calculate the slope of each pair of points $\frac{(y2 - y1)}{(x2 - x1)}$.

  4. +
  5. Set a threshold. Compare the difference between two adjacent slopes; points where the difference exceeds the threshold are marked as POIs (see the sketch below).

  6. +
+ +

Isn't that simpler?
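A minimal Python sketch of steps 2 and 3, using the example data from the question (the threshold value is arbitrary and would need tuning for real data):

points = [(1, 10), (2, 11), (3, 14), (5, 24), (6.5, 25), (7, 26), (8, 45)]
threshold = 1.5

slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(points, points[1:])]

pois = [points[i + 1]
        for i in range(1, len(slopes))
        if abs(slopes[i] - slopes[i - 1]) > threshold]
print(pois)   # points where the slope changes sharply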

+ +

If you must solve the problem with a CNN, what I can think of is that you first collect (or draw) a bunch of curves, mark the POIs in advance, and then feed them to the CNN model.

+",27617,,2444,,8/3/2019 10:51,8/3/2019 10:51,,,,1,,,,CC BY-SA 4.0 +13755,1,,,8/3/2019 14:53,,3,116,"

Taking out the weighting factor, we can define the focal loss as
$$FL(p) = -(1-p)^\gamma \log(p)$$

+ +

where $p$ is the target probability. The idea is that single-stage object detectors have a huge class imbalance between foreground and background (several orders of magnitude of difference), and this loss will down-scale all results that are already well classified, compared to the normal cross entropy ($CE(p) = -\log(p)$), so that the optimization can then focus on the rest.
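A quick numerical illustration of that down-scaling (using $\gamma = 2$, the value used in the paper; the probabilities below are arbitrary examples):

import numpy as np

gamma = 2.0
p = np.array([0.1, 0.5, 0.9, 0.99])    # predicted probability of the true class
ce = -np.log(p)                        # cross entropy
fl = (1 - p)**gamma * ce               # focal loss
print(np.round(ce, 4))                 # hard examples dominate both losses...
print(np.round(fl, 4))                 # ...but easy examples (large p) barely register in FL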

+ +

On the other hand, the general optimization scheme uses the gradient to find the direction of steepest descent. There exist methodologies for adaptation, momentum, etc., but that is the general gist.
+$$ \theta \leftarrow \theta - \eta \nabla_\theta L $$

+ +

The focal loss's gradient then follows as
$$\dot {FL}(p) = \dot p \left[\gamma(1-p)^{\gamma -1} \log(p) -\frac{(1-p)^\gamma}{p}\right]$$ compared to the normal cross entropy's gradient of
+$$ \dot{CE}(p) = -\frac{\dot p}{p}$$

+ +

So we can now rewrite these as

+ +

$$\dot{FL}(p) = (1-p)^\gamma \dot{CE}(p) + \gamma \dot p (1-p)^{\gamma -1} \log(p)$$

+ +

The first term, given our optimization scheme, will do what we (and the authors of the RetinaNet paper) want, which is to downscale the effect of the labels that are already well classified, but the second term is less interpretable in parameter space and may cause an unwanted result. So my question is: why not remove it and only use the gradient
+$$\dot L = (1-p)^\gamma \dot{CE}(p)$$

+ +

which, given a $\gamma \in \mathbb{N}$, produces the loss function
+$$ L(p) = -log(p) - \sum_{i=1}^\gamma {\gamma \choose i}\frac{(-p)^i}{i}$$

+ +

Summary: Is there a reason we make the loss adaptive and not the gradient in cases like focal loss? Does that second term add something useful?

+",25496,,,,,8/6/2019 23:05,Does Retina-net's focal loss accomplish its goal?,,0,3,,,,CC BY-SA 4.0 +13756,1,,,8/3/2019 15:51,,2,24,"

I need to create an application that can detect whether a person X, given as input, exists in an image set, and return as output all the images in which person X appears. The problem is that the pictures do not only contain people's faces; some of them are taken from behind.

+ +

Is it possible to use Microsoft's cognitive services? If not, is there another solution to allow the realization of this application?

+",22702,,2444,,8/3/2019 16:09,8/3/2019 16:09,Can Microsoft's cognitive service find similar person in a set of images without using the face service?,,0,0,0,,,CC BY-SA 4.0 +13757,2,,2122,8/3/2019 17:26,,0,,"

After almost three years the question is still relevant.

+ +

Let me add some too:

+ +

Deep Learning Datasets

+ +

The datasets from the above link can be used for benchmarking deep learning algorithms.

+ +

STL-10 dataset

+ +

An image dataset inspired by the CIFAR-10 dataset

+",16708,,,,,8/3/2019 17:26,,,,0,,,,CC BY-SA 4.0 +13758,2,,13732,8/3/2019 17:49,,2,,"

What you are trying to achieve sounds a lot like inductive logic programming:

+ +
+

Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesised logic program which entails all the positive and none of the negative examples.

+
+",27624,,,,,8/3/2019 17:49,,,,2,,,,CC BY-SA 4.0 +13759,2,,2122,8/3/2019 17:57,,1,,"

If you want to solve a multi-class classification problem, you could use the famous iris flower dataset, which was introduced by Fisher in 1936. In this dataset, each flower has (only) $4$ features (the inputs), namely

+ +
    +
  • petal length,
  • +
  • petal width,
  • +
  • sepal length, and
  • +
  • sepal width
  • +
+ +

There are $3$ classes (the outputs)

+ +
    +
  • iris setosa,
  • +
  • iris virginica, and
  • +
  • iris versicolor
  • +
+ +

And there are a total of $150$ observations (or records).

+ +

The iris flower dataset is available in sklearn. See, for example, Iris plants dataset.
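For instance, loading it takes only a couple of lines (a quick sketch using scikit-learn):

from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)                 # (150, 4): the four features per flower
print(iris.target_names)               # ['setosa' 'versicolor' 'virginica']
print(iris.data[0], iris.target[0])    # first observation and its class index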

+ +

To search for other datasets, you can also use https://toolbox.google.com/datasetsearch.

+",2444,,2444,,8/3/2019 21:43,8/3/2019 21:43,,,,0,,,,CC BY-SA 4.0 +13761,2,,4663,8/3/2019 19:03,,1,,"

I will break it down for you in very simple words. The accuracy will drop as you label more samples wrong. In simpler words, accuracy is directly proportional to how well the data is labelled. If you think about it, suppose you have 2 categories - cats and dogs - and a dataset of 10,000 pictures, out of which 50 are wrongly labelled. The accuracy will be lower than with perfectly labelled data, but not by much, since the resulting neural network will not be that bad. But suppose now you have 1,000 wrongly labelled pictures, which is 1/10 of the dataset; then the network's outputs will degrade much more noticeably.

+",27626,,38076,,12/18/2020 17:02,12/18/2020 17:02,,,,0,,,,CC BY-SA 4.0 +13762,1,13769,,8/3/2019 21:44,,2,107,"

I'm a beginner in ML and have been researching RL quite a bit recently. I'm planning to create an RL application to play a zero-sum game. This will be web-based, so anyone can play it.

+ +

I wondered if I need to create a database (or some other kind of storage) to store the policy the RL algorithm is updating, so that it can be used by the application when the next human user comes along to play against the application?

+",27629,,2444,,9/22/2019 18:43,9/22/2019 18:43,Is it a good idea to store the policy in a database?,,1,0,,,,CC BY-SA 4.0 +13763,2,,13189,8/3/2019 21:52,,1,,"

The starting point is that, for a fair die thrown fairly, p(n) is 1/n, where n is the number of sides.

+

You said both

+

and

+
+

there are too many variables (up 40 dimensions with value range 1-100) in input, I don't know how these properties relate and an empirical approach would require too much data.

+
+

It seems that this problem has 2 solutions:

+
    +
  1. Don't use a neural net and create a 'standard' statistical model instead. It may be possible since you said:
  2. +
+
+

I know there is some underlying rule that simplify a lot the problem (ie. reduce the actual number of dimension of the input)

+
+
    +
  1. Use a neural network (with softmax at the end) - for a fair die, with enough training data the classifier should arrive at 1/n as the approximating function. The 40 dimensions/settings you mentioned are the inputs. I think a 'basic' neural network with dense layers only could work for your task (see the sketch below).
  2. +
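A minimal sketch of such a network in Keras (the layer sizes and n_sides = 6 are arbitrary example values, not a recommendation):

import tensorflow as tf

n_settings, n_sides = 40, 6
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(n_settings,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(n_sides, activation='softmax'),   # probability per side
])
model.compile(loss='categorical_crossentropy', optimizer='adam')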
+",3526,,3526,,11/16/2020 5:21,11/16/2020 5:21,,,,0,,,,CC BY-SA 4.0 +13765,1,,,8/4/2019 0:28,,4,112,"

Recently I encountered a variant on the normal linear neural layer architecture: Instead of $Z = XW + B$, we now have $Z = (X-A)W + B$. So we have a 'pre-bias' $A$ that affects the activation of the last layer, before multiplication by weights. I don't understand the backpropagation equations for $dA$ and $dB$ ($dW$ is as expected).

+

Here is the original paper in which it appeared (although the paper itself isn't actually that relevant): https://papers.nips.cc/paper/4830-learning-invariant-representations-of-molecules-for-atomization-energy-prediction.pdf

+

Here is the link to the full code of the neural network: http://www.quantum-machine.org/code/nn-qm7.tar.gz

+
class Linear(Module):
+
+    def __init__(self,m,n):
+
+        self.tr = m**.5 / n**.5
+        self.lr = 1 / m**.5
+        
+        self.W = numpy.random.normal(0,1 / m**.5,[m,n]).astype('float32')
+        self.A = numpy.zeros([m]).astype('float32')
+        self.B = numpy.zeros([n]).astype('float32')
+
+    def forward(self,X):
+        self.X = X
+        Y = numpy.dot(X-self.A,self.W)+self.B
+        return Y
+
+    def backward(self,DY):
+        self.DW = numpy.dot((self.X-self.A).T,DY)
+        self.DA = -(self.X-self.A).sum(axis=0)
+        self.DB = DY.sum(axis=0) + numpy.dot(self.DA,self.W)
+        DX = self.tr * numpy.dot(DY,self.W.T)
+        return DX
+
+    def update(self,lr):
+        self.W -= lr*self.lr*self.DW
+        self.B -= lr*self.lr*self.DB
+        self.A -= lr*self.lr*self.DA
+
+    def average(self,nn,a):
+        self.W = a*nn.W + (1-a)*self.W
+        self.B = a*nn.B + (1-a)*self.B
+        self.A = a*nn.A + (1-a)*self.A
+
+",27634,,156,,1/20/2023 16:11,1/20/2023 16:11,Backpropagation equation for a variant on the usual Linear Neuron architecture,,1,3,,,,CC BY-SA 4.0 +13768,2,,13765,8/4/2019 9:30,,1,,"

The forward prop equation is:

+ +

$$ Z = (X-A)W + B = XW - AW + B $$

+ +

So the derivatives for $Z$ w.r.t $W$, $A$, $B$ and $X$ should be:

+ +

$$ \frac{\partial Z}{\partial W} = X-A \\ \frac{\partial Z}{\partial A} = - W \\ \frac{\partial Z}{\partial B} = 1 \\ \frac{\partial Z}{\partial X} = W $$

+ +

I don't know why he needs the last one though. The first is, like you said, as expected. The other two are wrong, I don't know why he used them in the implementation.

+",26652,,,,,8/4/2019 9:30,,,,4,,,,CC BY-SA 4.0 +13769,2,,13762,8/4/2019 9:38,,2,,"

You have lots of choices in how to store a policy, depending on how you have built it - using which RL algorithm, and what kind of representation for states and actions.

+ +

Tabular reinforcement learning algorithms lend themselves well to storage in a database table with an indexed state_id column and one or both action and value columns. This might be a good choice if you have a moderate sized state space, as you would avoid the need to load the whole table into memory just to compute the next move.

+ +

Whether this is feasible will depend on the complexity of your game. Even relatively simple games like checkers turn out to have too large a state space to enumerate all the states in this way.

+ +

So you are more likely to need some kind of policy function or state value function implemented using a parametric function approximator. Very often in RL this will be a neural network. In which case you would use whatever storage mechanism your neural network library supported - most will happily read and write their parameters to a file or string, allowing you a lot of flexibility on how and where to store them.
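For example, with Keras the whole round trip is just a save and a load call (this is only an illustrative sketch; the toy architecture and file name are made up):

from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(8, activation='relu', input_shape=(4,)),
                          keras.layers.Dense(2)])
model.compile(optimizer='adam', loss='mse')

model.save('policy.h5')                          # persist the policy network to disk
restored = keras.models.load_model('policy.h5')  # reload it when the web service starts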

+ +

So your policy is likely to be stored in one or two files on disk as a serialised neural network. How to use that efficiently in a web service is a complex subject in its own right. You could just read the files back and instantiate the neural network each time it is needed, and this will probably be OK for a simple game and low traffic service. However, this is very inefficient.

+ +

Some neural network libraries designed around use in production will allow you to pre-load the neural network and keep it in memory between requests. How to do this depends entirely upon the frameworks you are using, so I cannot explain in more detail here. Initially I would not worry too much about this part for your project.

+",1847,,,,,8/4/2019 9:38,,,,4,,,,CC BY-SA 4.0 +13770,1,,,8/4/2019 10:33,,3,37,"

A machine learning project I am working on requires me to interface with an Xbox controller connected to a PC. The implementation must do the following two things:

+ +

Record the joystick input from the controller into a file at regular intervals, along with an associated screenshot from a game. (ex: 60 times a second).

+ +

With this data I want to try to replicate/reverse engineer a few different FPS games' sensitivities, dead zones, and acceleration curves.

+ +

Does anyone have any idea as to how I'd go about doing this? I'm not sure where to start. If this question isn't appropriate for this site, where could I ask?

+",27637,,,,,8/4/2019 10:33,"Reverse engineering controller sensitivity/aim for several games ie acceleration curves, deadzones, etc",,0,0,,,,CC BY-SA 4.0 +13772,1,,,8/4/2019 20:08,,2,811,"

What is an identity recurrent neural network (IRNN)? What is the difference between an IRNN and RNN?

+",2444,,,,,8/4/2019 20:08,What is an identity recurrent neural network?,,1,0,,,,CC BY-SA 4.0 +13773,2,,13772,8/4/2019 20:08,,1,,"

An identity recurrent neural network (IRNN) is a vanilla recurrent neural network (as opposed to e.g. LSTMs) whose recurrent weight matrices are initialized with the identity matrix, the biases are initialized to zero, and the hidden units (or neurons) use the rectified linear unit (ReLU).
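As a rough sketch (the number of units and input sizes are arbitrary), such a layer could be written in Keras as:

import tensorflow as tf

# Illustrative sketch of an IRNN layer: identity recurrent weights,
# zero biases and ReLU hidden units.
irnn = tf.keras.layers.SimpleRNN(
    units=100,
    activation='relu',
    recurrent_initializer=tf.keras.initializers.Identity(),
    bias_initializer='zeros',
)
out = irnn(tf.random.normal((1, 20, 8)))   # (batch, timesteps, features) -> (batch, units)
print(out.shape)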

+ +

An IRNN can be trained more easily using gradient descent (as opposed to a vanilla RNN that is not an IRNN), given that it behaves similarly to an LSTM-based RNN, that is, an IRNN does not suffer (much) from the vanishing gradient problem.

+ +

The IRNN achieves a performance similar to LSTM-based RNNs in certain tasks, including the adding problem (a standard problem that is used to examine the power of recurrent models in learning long-term dependencies). In terms of architecture, the vanilla RNNs are much simpler than LSTM-based RNNs, so this is an advantage.

+ +

For more details, see the paper A Simple Way to Initialize Recurrent Networks of Rectified Linear Units (2015), by Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton. See also this Keras implementation of the MNIST experiment described in the linked paper.

+",2444,,,,,8/4/2019 20:08,,,,0,,,,CC BY-SA 4.0 +13774,2,,13668,8/4/2019 20:30,,1,,"

Dennis' answer is very helpful. I also found section 5.5 of the MCTS survey very useful, in particular the widening discussion. Another useful reference was https://project.dke.maastrichtuniversity.nl/games/files/msc/Roelofs_thesis.pdf

+",27530,,,,,8/4/2019 20:30,,,,0,,,,CC BY-SA 4.0 +13775,1,13787,,8/4/2019 22:24,,35,16846,"

I just finished a 1-year Data Science master's program where we were taught R. I found that Python is more popular and has a larger community in AI.

+

What are the advantages that Python may have over R in terms of features applicable to the field of Data Science and AI (other than popularity and larger community)? What positions in Data Science and AI would be more Python-heavy than R-heavy (especially comparing industry, academic, and government job positions)? In short, is Python worthwhile in all job situations or can I get by with only R in some positions?

+",27652,,16959,,10/27/2020 1:26,10/27/2020 1:26,Is a switch from R to Python worth it?,,8,2,,11/19/2019 4:26,,CC BY-SA 4.0 +13776,1,,,8/4/2019 22:33,,4,265,"

In neural networks with stochastic layers I've seen the use of the REINFORCE estimator for estimating the gradient (because it can't be computed directly).

+ +

Some such examples are Show, Attend and Tell, Recurrent models of visual attention and Multiple Object Recognition with Visual Attention.

+ +

However, I haven't figured out how this exactly works. How do we ""bypass"" the gradient's computation by using the REINFORCE learning rule? Does anyone have any insight on this?

+",27653,,,,,8/6/2019 22:59,How is REINFORCE used instead of Backpropagation?,,1,0,,,,CC BY-SA 4.0 +13777,1,14233,,8/4/2019 22:53,,3,209,"

I've been reading about the differences between ""Strong"" and ""Weak"" AI.

+ +

I was wondering, where do Neural Networks (especially deep ones) fall in this spectrum? Can they be considered ""Strong AI""? If not, is there any model that can be considered ""Strong AI""?

+",27654,,,,,9/1/2019 5:21,"Can Neural Networks be considered as ""Strong AI""?",,1,0,,,,CC BY-SA 4.0 +13778,2,,13775,8/4/2019 23:16,,32,,"

Of course, this type of question will also lead to primarily opinion-based answers. Nonetheless, it is possible to enumerate the strengths and weaknesses of each language, with respect to machine learning, statistics, and data analysis tasks, which I will try to list below.

+ +

R

+ +

Strengths

+ +
    +
  • R was designed and developed for statisticians and data analysts, so it provides, out-of-the-box (that is, they are part of the language itself), features and facilities for statisticians, which are not available in Python, unless you install a related package. For example, the data frame, which Python does not provide, unless you install the famous Python's pandas package. There are other examples like matrices, vectors, etc. In Python, there are also similar data structures, but they are more general, so not specifically targeted for statisticians.

  • +
  • There are a lot of statistical libraries.

  • +
+ +

Weakness

+ + + +

Python

+ +

Strengths

+ +
    +
  • A lot of people and companies, including Google and Facebook, invest a lot in Python. For example, the main programming language of TensorFlow and PyTorch (two widely used machine learning frameworks) is Python. So, it is very unlikely that Python won't continue to be widely used in machine learning for at least 5-10 more years.

  • +
  • The Python community is likely a lot bigger than the R community. In fact, for example, if you look at Tiobe's index, Python is placed 3rd, while R is placed 20th.

  • +
  • Python is also widely used outside of the statistics or machine learning communities. For example, it is used for web development (see e.g. the Python frameworks Django or Flask).

  • +
  • There are a lot of machine learning libraries (e.g. TensorFlow and PyTorch).

  • +
+ +

Weakness

+ +
    +
  • It does not provide, out-of-the-box, the statistical and data analysis functionalities that R provides, unless you install an appropriate package. This might be a weakness or a strength, depending on your philosophical point of view.
  • +
+ +

There are other possible advantages and disadvantages of these languages. For example, both languages are dynamic. However, this feature can both be an advantage and a disadvantage (and it is not strictly related to machine learning or statistics), so I did not list it above. I avoided mentioning opinionated language features, such as code readability and learning curve, for obvious reasons (e.g. not all people have the same programming experience).

+ +

Conclusion

+ +

Python is definitely worth learning if you are studying machine learning or statistics. However, it does not mean that you will not use R anymore. R might still be handier for certain tasks.

+",2444,,,,,8/4/2019 23:16,,,,3,,,,CC BY-SA 4.0 +13779,2,,13776,8/5/2019 0:12,,1,,"

REINFORCE is called a gradient estimator because it doesn't work with the true gradient, which would come from a loss function and the whole data, but makes up a heuristic loss, so that the gradient it ends up with isn't the true one. Let's see that with the REINFORCE equation:

+ +

$$\Delta \mathbf{\theta}_t = \alpha \nabla_{\mathbf{\theta}} \log \pi_{\mathbf{\theta}} (a_t \mid s_t) v_t$$

+ +

As this shows, the gradient is still there ($\nabla_\theta$). But the policy corresponds to the network's output, so we can use backpropagation to compute the gradient of that heuristic loss with respect to the weights. The real gradient is unknown to us, but this estimation will do the job.
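A minimal sketch of that idea (toy network sizes and a hard-coded return $v_t$, so purely illustrative): the surrogate loss $-\log \pi_\theta(a_t \mid s_t)\, v_t$ is built from the network's output and backpropagated as usual.

import tensorflow as tf

policy = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(2, activation='softmax'),
])

state = tf.random.normal((1, 4))
v_t = 1.0   # return observed after the sampled action (assumed given here)

with tf.GradientTape() as tape:
    probs = policy(state)                                        # pi_theta(a | s)
    a = int(tf.random.categorical(tf.math.log(probs), 1)[0, 0])  # sample an action
    surrogate = -tf.math.log(probs[0, a]) * v_t                  # heuristic loss
grads = tape.gradient(surrogate, policy.trainable_variables)     # REINFORCE gradient estimate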

+",27444,,2444,,8/6/2019 22:59,8/6/2019 22:59,,,,3,,,,CC BY-SA 4.0 +13780,2,,13692,8/5/2019 0:38,,6,,"

3D convolutions should be used when you want to extract spatial features from your input on 3 dimensions. For computer vision, they are typically used on volumetric images, which are 3D.

+

Some examples are classifying 3D rendered images and medical image segmentation.

+",27655,,2444,,12/18/2021 12:57,12/18/2021 12:57,,,,0,,,,CC BY-SA 4.0 +13781,2,,13666,8/5/2019 0:49,,2,,"

Ensembles aren't very popular in the field of computer vision. The main reason is that models are already so large parameter-wise that it is hard to fit multiple models in memory for classification. Since there are effective ways of training very large models, people would rather create a larger network, if they had the capacity, than average the results from multiple ones.

+ +

That being said, there is no reason why ensembling wouldn't have beneficial results for your task.

+ +

One way would be, as you do, to average the results of the models. This is usually used to reduce the bias of weaker models. Another way would be to use meta-modelling, i.e. create a fourth model (even as simple as a linear classifier) that will be trained with the outputs of the three CNNs as its input features. The idea is that, instead of all CNNs having an equal vote (as is the case when you average them), the meta-model will learn the best way to weigh their outputs.
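A rough sketch of both options (the arrays below are random placeholders standing in for the three CNNs' class-probability outputs on a held-out set):

import numpy as np
from sklearn.linear_model import LogisticRegression

n_samples, n_classes = 200, 5
p1, p2, p3 = [np.random.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(3)]
y = np.random.randint(0, n_classes, size=n_samples)    # true labels (placeholder)

avg_pred = (p1 + p2 + p3) / 3                          # plain averaging ensemble

meta_X = np.hstack([p1, p2, p3])                       # stacked features for the meta-model
meta_model = LogisticRegression(max_iter=1000).fit(meta_X, y)  # learns how to weigh the CNNs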

+",27655,,,,,8/5/2019 0:49,,,,0,,,,CC BY-SA 4.0 +13782,1,,,8/5/2019 1:34,,3,64,"

Deep fakes work by using a single encoder but then having a different decoder for different people.

+ +

But I wondered: what if the encoder encodes, say, ""closed eyes"" of person A as the same code as ""closed mouth"" of person B? That is, the codes could use the same codewords for different aspects of person A and person B, i.e. person A and person B could be described with the same codewords, except the codewords don't mean the same thing.

+ +

Then when you do a deep fake on person A with closed eyes it emerges as person B with a closed mouth.

+ +

How does one combat this effect? Or does it just work and no one knows why?

+",4199,,,,,8/5/2019 1:34,How do deep fakes get the right encoding for both people?,,0,0,,,,CC BY-SA 4.0 +13784,1,,,8/5/2019 4:52,,3,1159,"

Is there any way to make deepfake videos without a fancy computer? For example, run the DeepFaceLab on a website so your own computer won't get involved?

+",27660,,2444,,8/11/2019 15:47,8/11/2019 15:47,How to make deepfake video without a fancy PC?,,2,1,,,,CC BY-SA 4.0 +13785,1,,,8/5/2019 7:05,,2,30,"

Let's say our task is to pick and place a block, like: https://gym.openai.com/envs/FetchPickAndPlace-v0/

+ +

Reward function 1: -1 for block not placed, 0 for block placed

+ +

Reward function 2: 0 for block not placed, +1 for block placed

+ +

I noticed that training with reward function 1 is much faster than with reward function 2. I am using the HER implementation from OpenAI. Why is that?

+",21158,,,,,8/5/2019 7:05,Why do these reward functions give different training curves?,,0,3,,,,CC BY-SA 4.0 +13786,2,,13692,8/5/2019 7:43,,18,,"

3D convolutions are used when you want to extract features in 3 dimensions or establish a relationship between 3 dimensions.

+

Essentially, it's the same as 2D convolutions, but the kernel movement is now 3-dimensional, causing a better capture of dependencies within the 3 dimensions and a difference in output dimensions post convolution.

+

The kernel of the 3D convolution will move in 3 dimensions if the kernel's depth is less than the feature map's depth.

+

+

On the other hand, 2-D convolutions on 3-D data mean that the kernel will traverse in 2-D only. This happens when the feature map's depth is the same as the kernel's depth (channels).
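A quick shape check illustrates the difference (toy sizes, assuming channels-last tensors):

import tensorflow as tf

video = tf.random.normal((1, 16, 64, 64, 3))   # (batch, frames, height, width, channels)
image = tf.random.normal((1, 64, 64, 3))       # (batch, height, width, channels)

out3d = tf.keras.layers.Conv3D(8, kernel_size=3)(video)
out2d = tf.keras.layers.Conv2D(8, kernel_size=3)(image)

print(out3d.shape)   # (1, 14, 62, 62, 8): the kernel also slid along the frame axis
print(out2d.shape)   # (1, 62, 62, 8): the kernel covered all 3 channels at once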

+

+

Some use cases for better understanding are

+
    +
  • MRI scans where the relationship between a stack of images is to be understood;

    +
  • +
  • low-level feature extractor for spatio-temporal data, like videos for gesture recognition, weather forecasting, etc. (3-D CNN's are used as low level feature extractors only over multiple short intervals, as 3D CNN's fail to capture long term spatio-temporal dependencies - for more on that check out ConvLSTM or an alternate perspective here.)

    +
  • +
+

Most CNN models that learn from video data almost always have 3D CNN as a low level feature extractor.

+

In the example you have mentioned above regarding the number 5 - 2D convolutions would probably perform better, as you're treating every channel intensity as an aggregate of the information it holds, meaning the learning would almost be the same as it would on a black and white image. Using 3D convolution for this, on the other hand, would cause learning of relationships between the channels which do not exist in this case! (Also 3D convolutions on an image with depth 3 would require a very uncommon kernel to be used, especially for the use case)

+",25658,,2444,,12/18/2021 12:57,12/18/2021 12:57,,,,0,,,,CC BY-SA 4.0 +13787,2,,13775,8/5/2019 8:01,,67,,"

I want to reframe your question.

+ +

Don't think about switching, think about adding.

+ +

In data science you'll be able to go very far with either Python or R, but you'll go farthest with both.

+ +

Python and R integrate very well, thanks to the reticulate package. I often tidy data in R because it is easier for me, train a model in Python to benefit from superior speed, and visualize the outcomes in R in beautiful ggplot, all in one notebook!

+ +

If you already know r there is no sense in abandoning it, use it where sensible and easy to you. But it is 100% a good idea to add python for many uses.

+ +

Once you feel comfortable in both you'll have a workflow that fits you best dominated by your favorite language.

+",27665,,,,,8/5/2019 8:01,,,,1,,,,CC BY-SA 4.0 +13789,2,,13784,8/5/2019 9:27,,2,,"

Yes. There are services that provide a free environment to run Jupyter notebooks for research purposes (with a GPU included, which is crucial for neural networks), such as Google Colaboratory and Kaggle Kernels. They limit how long your computation may run (12 and 6 hours respectively), which adds some difficulties to the process, though I think it is possible to work around these restrictions.

+",27672,,,,,8/5/2019 9:27,,,,0,,,,CC BY-SA 4.0 +13790,1,,,8/5/2019 10:11,,2,137,"

I am playing around with creating custom architectures in stable-baselines. Specifically I am training an agent using a PPO2 model.

+ +

My question is, are there some rules of thumb or best practices in network architecture (of actor and critic networks) to achieve higher performance i.e. larger rewards?

+ +

For example, I find that usually using wider layers (e.g. 256 rather than 128 units) and adding more layers (e.g. a deep network with 5 layers rather than 2) achieves a smaller RMSE (better performance) for time series prediction when training an LSTM. Would similar conventions apply to reinforcement learning - would adding more layers to the actor and critic network have higher performance - does sharing an input layer work well?

+",27570,,,,,8/5/2019 10:11,What is a high performing network architecture to use in a PPO2 MlpLnLstmPolicy RL model?,,0,0,,,,CC BY-SA 4.0 +13791,2,,13775,8/5/2019 11:27,,6,,"

I didn't have this choice because I was forced to move from R to Python:

+ +

It depends on your environment: when you are embedded in an engineering department, a technical working group or something similar, then Python is more feasible.

+ +

When you are surrounded by scientists and especially statisticians, stay with R.

+ +

PS: R offers keras and tensorflow as well, though they are implemented under the hood in Python. Only very advanced stuff will make you need Python. Though I'm getting more and more used to Python, the syntax in R is easier, and though each package has its own, it is somewhat consistent, while Python is not. And ggplot is so strong. Python has a clone (plotnine), but it lacks several (important) features. In principle you can do nearly as much as in R, but visualization and data wrangling in particular are much easier in R. Indeed, the most famous Python data library, pandas, is a clone of R's data frames.

+ +

PPS: Advanced statistics definitely points to R. Python offers a lot of everyday tools and methods for a data scientist, but it will never reach the >13,000 packages R provides. For example, I had to do an inverse regression and Python doesn't offer this. In R you can choose between several confidence tests and whether it is linear or nonlinear. The same goes for mixed models: they are implemented in Python, but in such a basic form that I can't see how it could be sufficient for someone.

+",26353,,26353,,8/6/2019 13:35,8/6/2019 13:35,,,,0,,,,CC BY-SA 4.0 +13792,2,,13775,8/5/2019 11:32,,1,,"

As others have said, it's not a ""switch"". But is it worth adding Python to your arsenal? I would say certainly. In data science, Python is popular and becoming ever more popular, while R is receding somewhat. And in the fields of machine learning and neural networks, I'd say that Python is the main language now -- I don't think R really comes close here in terms of usage. The reason for all of this is generality. Python is intended as a general programming language, and allows you to easily script all kinds of tasks. If you're staying strictly within a neatly structured statistical world, R is great, but with AI you often end up having to do novel, miscellaneous things, and I don't think R can beat Python at that. And because of this, I think Python and its packages will be receiving more support and development when it comes to the more cutting-edge tech.

+",26737,,,,,8/5/2019 11:32,,,,0,,,,CC BY-SA 4.0 +13793,1,13795,,8/5/2019 12:31,,0,282,"

I made a simple Python game. Basically, a paddle moves left and right catching particles. Some make you lose points, while others make you gain points.

+

This is my first Deep Q Learning Project, so I probably messed something up, but here is what I have:

+
model = Sequential()
+model.add(Dense(200, input_shape=(4,), activation='relu'))
+model.add(Dense(200, activation='relu'))
+model.add(Dense(3, activation='linear'))
+model.compile(loss='categorical_crossentropy', optimizer='adam')
+
+

The four inputs are X position of player, X and Y position of particle (one at a time), and the type of particle. Output is left, right, or don't move.

+

Here is the learning algorithm:

+
def learning(num_episodes=500):
+    y = 0.8
+    eps = 0.5
+    decay_factor = 0.9999
+    for i in range(num_episodes):
+        state = GAME.reset()
+        GAME.done = False
+        eps *= decay_factor
+        done = False
+        while not done:
+            if np.random.random() < eps: #exploration
+                a = np.random.randint(0, 2)
+            else:
+                a = np.argmax(model.predict(state))
+            new_state, reward, done = GAME.step(a) #does that step
+            #reward can be -20, -5, 1, and 5
+            target = reward + y * np.max(model.predict(new_state))
+            target_vec = model.predict(state)[0]
+            target_vec[a] = target
+            model.fit(state, target_vec.reshape(-1, 3), epochs=1, verbose=0)
+            state = new_state
+
+

After training, this usually results in the paddle just going to the side and staying there. I am not sure if the NN architecture (units and hidden layers) is appropriate for given complexity. Also, is it possible that this is failing due to the rewards being very delayed? It can take 100+ frames to get to the food, so maybe this isn't registering well with the neural network.

+

I only started learning about reinforcement learning yesterday, so would appreciate advice!

+",27681,,-1,,6/17/2020 9:57,8/5/2019 19:38,Deep Q Learning Algorithm for Simple Python Game makes player stuck,,1,0,,,,CC BY-SA 4.0 +13794,1,13804,,8/5/2019 13:09,,1,364,"

The Wikipedia definitions are as follows

+ +

Multi-agent systems - A multi-agent system is a computerized system composed of multiple interacting intelligent agents.

+ +

Multi-modal interaction - Multimodal interaction provides the user with multiple modes of interacting with a system.

+ +

Doesn't providing a user with multiple modes of interacting with a system, assuming all modalities interact with each other to give final output (some sort of fusion mechanism for example), make it a multi-agent system?

+ +

If not, what is the difference between multi-modal and multi-agent systems and, monolithic and uni-modal systems?

+",25658,,2444,,8/5/2019 17:40,8/5/2019 23:48,What is the difference between multi-agent and multi-modal systems?,,1,2,,,,CC BY-SA 4.0 +13795,2,,13793,8/5/2019 13:22,,2,,"

This is probably the most major factor:

+ +
model.compile(loss='categorical_crossentropy', optimizer='adam')
+
+ +

you have set the loss function for a multiclass classifier. It is going to have some weird results when values - either predicted or target - are outside of range 0..1

+ +

You should use this instead:

+ +
model.compile(loss='mean_squared_error', optimizer='adam')
+
+ +

because your Q network outputs the expected future return on each action. This could easily be outside of the range that 'categorical_crossentropy' is designed for.

+ +

In addition, you really need to look into experience replay. It is not an optional extra when using neural networks with Q learning - it is pretty much required for anything but the most trivial environments. It is very likely your agent will still fail to learn without experience replay, if you correct all other problems with your code.

+ +
+

I am not sure if the NN architecture (units and hidden layers) is appropriate for given complexity.

+
+ +

It looks more complex than it needs to be, assuming that your 4 inputs represent paddle x position, particle x,y position and colour. I would suggest making the network simpler (maybe just 40 neurons per layer at a guess), to speed things up a little.

+ +

Check your input scaling. Neural networks like to train on inputs that have mean 0, standard deviation 1, and it is worth scaling them so that they fit roughly into -1..1 or similar. Your feature engineering is not shown in your code, so it might be an issue.
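As a trivial sketch of that (the raw feature values below are made up):

import numpy as np

x = np.array([[400.0, 120.0, 30.0, 1.0],
              [ 80.0, 300.0, 10.0, 0.0]])                 # made-up raw (paddle, food) features
scaled = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)    # roughly zero mean, unit variance
print(scaled)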

+ +
+

Also, is it possible that this is failing due to the rewards being very delayed?

+
+ +

This can be a factor that makes learning harder.

+ +
+

It can take 100+ frames to get to the food, so maybe this isn't registering well with the neural network.

+
+ +

100 time steps delay between rewards is not much for DQN. It should be correctly predicting Q values - it will just take more episodes to learn to predict the best movement when the food is further away.

+",1847,,,,,8/5/2019 13:22,,,,3,,,,CC BY-SA 4.0 +13796,2,,13784,8/5/2019 13:30,,1,,"

Take a look at using AWS. You can spin up an instance with as much processing power as you need, and by using their pre-built images it will already be preconfigured with a lot of the packages, etc., that you might need for any kind of ML.

+ +

I see you wanted to use DeepFaceLab, which I guess requires some kind of GUI, so I'm unsure if this is suitable for your requirements, but check it out; it seems to be the best way to do compute-heavy machine learning without a fancy computer.

+",8960,,,,,8/5/2019 13:30,,,,0,,,,CC BY-SA 4.0 +13797,1,,,8/5/2019 14:02,,2,88,"

For real applications, concept drifts often exist, i.e., the relationship between the input and output changes over time. Thus, we need our AI or machine learning system to quickly adapt to the environment.

+ +

What are the most common methods to enable neural networks to quickly adapt to the changing environment for supervised learning? Could somebody provide a link to a good review article?

+",22105,,2444,,8/5/2019 23:58,9/5/2019 12:00,What are the most common methods to enable neural networks to adapt to changing environments?,,1,0,0,,,CC BY-SA 4.0 +13798,1,,,8/5/2019 14:23,,1,24,"

I have some rated time-sequential data and I would like to test if an ANN can learn a correlation between my measurements and ratings.

+ +

I suspect I could just try a CNN where one dimension is time, or an LSTM/GRU, and put the result through a sigmoid, but is there any good literature on this? I have been trying to find information on datasets for the problem, but it seems that sequence regression is lacking any big official datasets, even though use cases are there (e.g. learning personal music taste, trying to predict Rotten Tomatoes scores, etc.).

+ +

Looking for links to papers describing successful architectures or benchmarks where I can test my models.

+",27267,,,,,8/5/2019 14:23,Literature on Sequence Regresssion,,0,0,,,,CC BY-SA 4.0 +13799,1,,,8/5/2019 18:31,,2,405,"

While analyzing the data for a given problem set, I came across a few distributions which are not Gaussian in nature. They are not even uniform or Gamma distributions (so that I could write a function, plug in the parameters, calculate the ""likelihood probability"" and solve it using the Bayes classification method). I got a set of a few absurd-looking PDFs, and I am wondering how I should define them mathematically so that I can plug in the parameters and calculate the likelihood probability.

+ +

The set of PDFs/Distributions that I got are the following and I am including some solutions that I intend to use. Please comment on their validity:

+ +

1)

+ +

The distribution looks like:

+ +

$ y = ax +b $ from $ 0.8<x<1.5 $

+ +

How to programmatically calculate

+ +
1. The value of x where the pdf starts
+2. The value of x where the pdf ends
+3. The value of y where the pdf starts
+4. The value of y where the pdf ends
+
+ +

However, I would have liked it better to have a generic distribution for this form of graphs so that I can plug the parameters to calculate the probability.

+ +

2)

+ +

This PDF looks neither uniform nor Gaussian. What kind of distribution should I roughly consider it to be?

+ +

3)

+ +

I can divide this graph into three segments. The first segment is from $2<x<3$ with a steep slope, the second segment is from $3<x<6$ with a moderate slope, and the third segment is from $6<x<8$ with a high negative slope.

+ +

How to programmatically calculate

+ +
 1. the values of x where the graph changes its slope.
+ 2. the values of y where the graph changes its slope.
+
+ +

4)

+ +

This looks like two Gaussian densities with different means superimposed on each other. But then the question arises: how do we find these two individual Gaussian densities?

+ +

The following code may help:

+ +
variable1=nasa1['PerihelionArg'][nasa1.PerihelionArg>190] 
+variable2=nasa1['PerihelionArg'][nasa1.PerihelionArg<190] 
+
+ +

Find the mean and variance of variable1 and variable2, and find the corresponding PDFs. Define the overall PDF over a suitable range of $x$.
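Alternatively, instead of splitting at a hand-picked value like 190, a two-component Gaussian mixture can find the two densities directly. A rough sketch (synthetic bimodal data stands in here for the PerihelionArg column):

import numpy as np
from sklearn.mixture import GaussianMixture

x = np.concatenate([np.random.normal(100, 20, 500),
                    np.random.normal(280, 25, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2).fit(x)
print(gmm.means_.ravel(), gmm.weights_)                  # the two fitted Gaussians
print(np.exp(gmm.score_samples(np.array([[150.0]]))))    # likelihood of a new value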

+ +

5)

+ +

This can be estimated as a Gamma distribution. We can find the mean and variance, calculate $\alpha$ and $\beta$ and finally calculate the PDF.
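A short sketch of that procedure (synthetic data stands in for the real feature; alpha and the scale are the method-of-moments estimates):

import numpy as np
from scipy import stats

data = np.random.gamma(shape=2.0, scale=3.0, size=1000)   # stand-in for the feature values
mean, var = data.mean(), data.var()
alpha, scale = mean**2 / var, var / mean                   # method-of-moments estimates
print(alpha, scale)
print(stats.gamma.pdf(4.0, a=alpha, scale=scale))          # likelihood of x = 4 under the fit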

+ +

It would be very helpful if someone could give their insights on the above analysis, its validity, and correctness and their suggestions regarding how problems such as these should be dealt with.

+",26301,,26301,,8/7/2019 18:11,5/1/2023 19:07,What to do when PDFs are not Gaussian/Normal in Naive Bayes Classifier,,1,8,,,,CC BY-SA 4.0 +13800,1,,,8/5/2019 19:00,,1,46,"

newbie here. I am studying the REINFORCE method in ""Deep Reinforcement Learning Hands-On"". I can't understand how, after computing the loss of the episode, that loss is backpropagated in a NN with multiple output nodes. To be more precise, in Supervised Learning, when we have multiple output nodes we know the loss of each of them, but in RL, how do we compute the loss of each output node (or maybe the partial derivative of the total loss with respect to each output layer)?

+ +

I hope to have been clear, thanks in advance.

+",27698,,,,,8/5/2019 19:00,How is computed the gradient with respect to each output node from a loss value?,,0,5,,,,CC BY-SA 4.0 +13801,2,,11100,8/5/2019 21:43,,0,,"

Your targets should be in the same range as your output function's, otherwise your loss function won't be accurate. With supervised learning you're trying to reduce the loss of your output against your targets, so in this case your targets should be the true/optimal probability distribution for that set of input data. I'm from the Midwest, so obligatory ""can't compare apples to oranges"" here ;)

+",20044,,20044,,8/5/2019 21:50,8/5/2019 21:50,,,,0,,,,CC BY-SA 4.0 +13803,2,,13751,8/5/2019 23:06,,1,,"

This research question seems to be analyzed in further details here (section 3) - https://openreview.net/pdf?id=r1lyTjAqYX.

+ +

Usually, a sequence is taken as a state to be fed into an RNN to compute the final hidden state. One can then ask: what initial state should the RNN be seeded with? This paper analyzes three methods with respect to the seed:

+ +
    +
  • zero initialization: When the RNN is initialized with the zero state
  • +
  • burn-in: when the sequence is prepended by some preceding observations for RNN to learn a good initial state
  • +
  • storing the initial hidden state: When the hidden state at the beginning of the sequence is stored
  • +
+",21509,,,,,8/5/2019 23:06,,,,0,,,,CC BY-SA 4.0 +13804,2,,13794,8/5/2019 23:48,,2,,"

An agent is a concept, which can have slightly different meanings, abilities or instantiations depending on the context. However, given the purpose of this website, I will use and refer to the definition of agent commonly used in artificial intelligence.

+ +
+

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

+
+ +

For more details regarding the definition of an agent in AI, see my answer to the question What is an agent in Artificial Intelligence?.

+ +

A multi-agent system is a system composed of multiple agents that interact with an environment. See Multi-Agent Systems: A survey (2018) for a more exhaustive overview of the field.

+ +

Multimodal interaction (MI) refers to the interaction with a system (e.g. a computer) using multiple modalities (e.g. speech or gestures). For example, we usually can interact with a laptop using a keyboard and a touchpad (or mouse), so the keyboard and the touchpad are the two different modalities that are used to interact with the computer. MI could thus be considered a sub-field of human-computer interaction.

+ +

Conceptually, an agent could be associated with each modality provided by a multimodal system, so a system that provides multimodal interaction could indeed be a multi-agent system. See, for example, A Multi-Agent based Multimodal System Adaptive to the User’s Interaction Context (2011).

+",2444,,,,,8/5/2019 23:48,,,,0,,,,CC BY-SA 4.0 +13805,1,13811,,8/6/2019 3:02,,4,501,"

I've heard somewhere that, due to their nature of capturing spatial relations, even untrained CNNs can be used as feature extractors. Is this true? Does anyone have any sources regarding this that I can look at?

+",27240,,2444,,5/10/2023 21:15,5/10/2023 21:15,Is it true that untrained CNNs can be used as feature extractors?,,2,0,,,,CC BY-SA 4.0 +13806,1,,,8/6/2019 3:08,,2,127,"

This is a follow-up question about one I asked earlier. The first question is here. Basically, I have a game where a paddle moves left and right to catch as much ""food"" as possible. Some food is good (gain points) and some is bad (lose points). NN Architecture:

+ +
    #inputs - paddle.x, food.x, food.y, food.type
+    #moves: left, right, stay
+    model = Sequential()
+    model.add(Dense(10, input_shape=(4,), activation='relu'))
+    model.add(Dense(10, activation='relu'))
+    model.add(Dense(3, activation='linear'))
+    model.compile(loss='mean_squared_error', optimizer='adam')
+
+ +

As suggested in the other question, I scaled my inputs to be between 0 and 1. Also, implemented experience replay (although I am not confident I did it correctly).

+ +

Here is my ReplayMemory class:

+ +
class ReplayMemory():
+    def __init__(self, capacity):
+        self.capacity = capacity
+        self.memory = []
+        self.count = 0
+
+    def push(self, experience):
+        if len(self.memory) < self.capacity:
+            self.memory.append(experience)
+        else:
+            self.memory[self.count % self.capacity] = experience
+        self.count += 1
+
+    def sample(self, batch_size):
+        return random.sample(self.memory, batch_size)
+
+    def can_provide_sample(self, batch_size):
+        return len(self.memory) >= batch_size
+
+ +

This basically stores states/rewards/actions and returns a random group when asked.

+ +

Lastly, here is my learning code:

+ +
def learning(num_episodes=20):
    global scores, experiences, target_vecs
    y = 0.8
    eps = 0
    decay_factor = 0.9999

    for i in range(num_episodes):
        state = GAME.reset()
        GAME.done = False
        done = False
        counter = 0
        while not done:
            eps *= decay_factor
            counter += 1

            if np.random.random() < eps:
                a = np.random.randint(0, 2)
            else:
                a = np.argmax(model.predict(np.array([scale(state)])))

            new_state, reward, done = GAME.step(a)  # does that step
            REPLAY_MEMORY.push((scale(state), a, reward, scale(new_state)))

            # experience replay is here
            if REPLAY_MEMORY.can_provide_sample(20):
                experiences = REPLAY_MEMORY.sample(20)
                target_vecs = []
                for j in range(len(experiences)):
                    target = experiences[j][2] + y * np.max(model.predict(np.array([experiences[j][3]])))
                    target_vec = model.predict(np.array([experiences[j][0]]))[0]
                    target_vec[experiences[j][1]] = target
                    target_vecs.append(target_vec)
                target_vecs = np.array(target_vecs)
                states = [s for s, _, _, _ in [exp for exp in experiences]]
                states = np.array(states)
                model.fit(states, target_vecs, epochs=1, verbose=1 if counter % 100 == 0 else 0)
            state = new_state
            if counter > 1200:  # game runs for 20 seconds each episode
                done = True
                scores.append(GAME.PLAYER.score)
    model.save('model.h5')
+
+ +

First, this takes a long time to train on my GTX1050. Is this normal for such a simple game? Also, does my code look fine? This is my first time with Deep Q Learning, so I would appreciate a second set of eyes.

+ +

What is happening is that training is super slow (more than an hour for 20 episodes (or 400 seconds of actual game play)). Also, it does not seem to get much better. The paddle (after 20 episodes) moves left and right but without any obvious pattern.

+ +

Here is a link to the code. Also, available on GitHub.

+",27681,,27681,,8/6/2019 21:48,8/10/2019 16:44,Deep Q Learning for Simple Game Not Effective,,1,4,,,,CC BY-SA 4.0 +13807,2,,13805,8/6/2019 5:41,,-1,,"

I'm not sure it's possible. An untrained CNN means it has random kernel values. Let's say you have a kernel of size 3x3 like the one below:

+ +
0 0 0
+0 0 0
+0 0 1
+
+ +

I don't think it is possible for that kernel to provide good information about the image. On the contrary, the kernel eliminates a lot of information. We cannot rely on random values for feature extraction.

+ +

But, if you use a CNN with an ""assigned"" kernel, then you don't need to train the convolutional layer. For example, you can start a CNN with a kernel that is designed to extract vertical lines:

+ +
-1 2 -1
+-1 2 -1
+-1 2 -1
+
+",16565,,,,,8/6/2019 5:41,,,,5,,,,CC BY-SA 4.0 +13808,1,13813,,8/6/2019 9:15,,1,391,"

As per subject title, are there ways to try Deep Learning without downloading and installing anything?

+ +

I'm just trying to get a feel for how this works; I don't really want to go through the download and install steps if possible.

+",26942,,26942,,8/6/2019 11:29,8/6/2019 17:32,Are there ways to learn and practice Deep Learning without downloading and installing anything?,,3,2,,,,CC BY-SA 4.0 +13810,2,,13808,8/6/2019 10:44,,0,,"

You can definitely get a good handle on the theory of various concepts in ML (i.e. the agent-environment loop and Markov Decision Processes), but true understanding (for the vast majority of people) will only come through application of the aforementioned theory.

+ +

I would suggest something like this course to get your feet wet in ML

+",9608,,,,,8/6/2019 10:44,,,,0,,,,CC BY-SA 4.0 +13811,2,,13805,8/6/2019 10:57,,6,,"

Yes, it has been demonstrated that the main factor for CNNs to work is their architecture, which exploits locality during feature extraction. A CNN with random weights will do a random partition of the feature space, but still with that spatial prior that works so well, so those random features are OK for classification (and sometimes even better than trained ones, as they don't introduce additional bias).

+ +

You can read more in these papers:

+ + +",27444,,27444,,8/6/2019 11:14,8/6/2019 11:14,,,,1,,,,CC BY-SA 4.0 +13812,2,,13797,8/6/2019 11:07,,1,,"

For the vast majority of cases where you have a dynamic (and assumed non-linear) relationship between your input and output, you would not use a modified architecture. You would simply retrain on the new data.

+ +

In some cases, based on domain knowledge or intuition, one might put a ""weight"" on the new data to increase or decrease its importance relative to previous data.

+ +

There are some attempts (mostly by those studying one-shot learning) to create NNs that quickly fit to new data effectively with only a few samples. However, most of these are not ready for anything resembling real-world problems (particularly on tabular data).

+",9608,,,,,8/6/2019 11:07,,,,0,,,,CC BY-SA 4.0 +13813,2,,13808,8/6/2019 11:43,,4,,"

As I understand it, you wish to directly try out some deep learning stuff, and things like downloading libraries and tools and managing all of these really stop you from even starting deep learning experiments. If this is what you asked for:

+ +
    +
  1. Google Colab

    + +

    I think this is the best place for you. Anyone with a Google Drive account can sign up for Colab by heading to the Colab site and following the listed instructions.

    + +

    Since you mentioned that you just wanted to try things out and practice, this would be ideal for you.

    + +
      +
      • All major Python libraries, like TensorFlow, Scikit-learn, and Matplotlib, among many others, are pre-installed and ready to be imported.

    • +
    • Built on top of Jupyter Notebook

    • +
    + +

    Please have a quick look at:

    + +

    https://medium.com/lean-in-women-in-tech-india/google-colab-the-beginners-guide-5ad3b417dfa

  2. +
+ +

2. Microsoft Azure

+ +

The Azure free account is available to all new customers of Azure. If you have never tried or paid for Azure before, you're eligible. Try out a student account: https://azure.microsoft.com/en-in/free/students/

+ +

Hope this helps; go ahead with practicing deep learning. If not, please feel free to raise questions, I'm always ready to help.

+ +
+",26854,,,,,8/6/2019 11:43,,,,2,,,,CC BY-SA 4.0 +13814,2,,13775,8/6/2019 11:47,,0,,"

It sounds like you have invested 1 year in data science with R and are embedded in the R environment, but want to explore Python for data science.

+ +

First learn the basics of Python, like how lists and tuples work and how classes and objects work.

+ +

Then get your hands dirty with some libraries like numpy, matplotlib, and pandas. Learn tensorflow or keras, and then go for data science.

+",27728,,1671,,8/6/2019 21:21,8/6/2019 21:21,,,,0,,,,CC BY-SA 4.0 +13815,2,,13775,8/6/2019 12:02,,0,,"

This is totally my personal opinion.

+ +

I read in my office (at a construction site) that ""There is a right tool for every task.""

+ +

As a programmer, I expect to face a variety of tasks. I want as many tools as I can ""buy or invest in"". One day one tool will help me solve a task, some other day some other tool. R (for statistics) and Python (for general use) are two tools I definitely want with me, and I think they are worth the investment for me.

+ +

As far as switching is concerned, I will use the most efficient tool I know (where efficiency is measured by the client's requirements, time and cost investment, and ease of coding). The more tools I know, the merrier! Of course, there is a practical limit to it.

+ +

All this is my personal opinion and not necessarily correct.

+",27729,,,,,8/6/2019 12:02,,,,0,,,,CC BY-SA 4.0 +13816,2,,13775,8/6/2019 12:04,,4,,"

I would say yes. Python is better than R for most tasks, but R has its niche and you would still want to use it in many circumstances.

+ +

Additionally, learning a second language will improve your programming skills.

+ +

My own perspective on the strengths of R vs Python is that I would prefer R for a small, single-purpose program involving tables or charts, or exploratory work in the same vein. I would prefer Python for everything else.

+ +
    +
  • R is really good for table mashing. If most of what a particular program is going to do is smoosh some tables into different shapes, then R is the thing to pick. Python has tools for this, but R is designed for it and does it better.
  • +
  • It's worth switching to R whenever you need to make a chart, because ggplot2 is a masterpiece of API usability and matplotlib is a crawling horror.
  • +
  • Python is well designed for general purpose programming. It has a very well designed set of standard data structures, standard libraries, and control flow statements.
  • +
  • R is poorly suited for general purpose programming. It doesn't handle tree-structured or graph-structured data well. It has some rules (like being able to look into and modify your parent scope) which are immediately convenient, but when used lead to programs that are hard to grow, modify, or compose.
  • +
  • R also has some straightforwardly bad things in it. These are mostly just historical leftovers like the three different object systems.
  • +
+ +

To elaborate more on the last point: computer programming done well is lego where you make your own bricks (functions and modules).

+ +

Programs are usually modified and repurposed past their original design. As you build them it is useful to think about which parts might be reused, and to build those parts in a general way that will let them plug in to the other bricks.

+ +

R encourages you to melt all the bricks together.

+",27727,,,,,8/6/2019 12:04,,,,0,,,,CC BY-SA 4.0 +13817,2,,9312,8/6/2019 12:24,,3,,"

There is indeed an investigation in progress regarding this topic. A first publication from last March noted that modularity has been used, although not explicitly, for some time now, but somehow training keeps being monolithic. This paper assesses some primary questions about the matter and compares training times and performance on modular and heavily recurrent neural networks. See:

+ +

Some others are very focused on modularity, but stay with monolithic training (see Jacob Andreas's research, especially Learning to reason, which is very related to your third question). Somewhere between late 2019 and March next year, there should be more results (edit: mentioned results here).

+

In relation to your last two questions, we're starting to see now that modularity is a major key towards generalisation. Let me recommend you some papers (you can find them all on arXiv or Google Scholar):

+
    +
  • Stochastic Adaptive Neural Architecture Search for Keyword Spotting (variations of an architecture to balance performance and resource usage).

    +
  • +
  • Making Neural Programming Architectures Generalize via Recursion (they do task submodularity and I believe it's the first time that generalisation is guaranteed within field of neural networks).

    +
  • +
  • Mastering the game of Go with deep neural networks and tree search (network topology is actually the search tree itself, you can see more of this if you look for graph neural networks).

    +
  • +
+",27444,,27444,,2/10/2022 10:08,2/10/2022 10:08,,,,0,,,,CC BY-SA 4.0 +13818,2,,13775,8/6/2019 12:50,,0,,"
+

Person who chases two rabbits catches neither

+
+ +

And yes, Python is more popular. I work in both, but, business-wise, it's easier to find a job with Python than with R.

+ +

So, you could:

+ +
    +
  • Pick Python because it is more popular. However, you must start from scratch.
  • +
+ +

Or

+ +
    +
  • Stay with R; after all, you have one year's worth of training with R. But it is not popular.
  • +
+",27731,,,,,8/6/2019 12:50,,,,1,,,,CC BY-SA 4.0 +13819,1,13825,,8/6/2019 13:26,,1,194,"

I have created a Tf.Sequential model which outputs 1 for numbers bigger than 5 and 0 otherwise:

+ + + +
const model = tf.sequential();
+model.add(tf.layers.dense({ units: 5, activation: 'sigmoid', inputShape: [1]}));
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid'}));
+model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
+const xs = tf.tensor2d([[1], [2], [3], [4], [6], [7], [8], [9]]);
+const ys = tf.tensor2d([[0], [0], [0], [0], [1], [1], [1], [1]]);
+model.fit(xs, ys);
+model.predict(xs).print();
+
+ +

With 5 hidden neurons, not even the right trend is detected. Sometimes all the numbers are too low, the outputs decrease even though the inputs increase, or the outputs are too high.

+ +

I later thought that the best way to do this is to have 2 neurons, where 1 is for the input and the other applies a sigmoid function to the input. The weight and bias should easily be adjusted to make the ANN work.

+ + + +
const model = tf.sequential();
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid', inputShape: [1]}));
+model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
+const xs = tf.tensor2d([[1], [2], [3], [4], [6], [7], [8], [9]]);
+const ys = tf.tensor2d([[0], [0], [0], [0], [1], [1], [1], [1]]);
+model.fit(xs, ys);
+model.predict(xs).print();
+
+ +

Sometimes, this ANN does detect the right trend (the higher the input, the higher the output), but still, the results are never correct and are usually simply too high, always providing an output too close to 1.

+ +

How do I make my ANN work, and what have I done wrong?

+ +

Edit:

+ +

This is the code I'm using now, same problem as before:

+ + + +
const AdadeltaOptimizer = tf.train.adadelta();
+
+const model = tf.sequential();
+model.add(tf.layers.dense({ units: 5, activation: 'sigmoid', inputShape: [1]}));
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid'}));
+model.compile({loss: 'meanSquaredError', optimizer: AdadeltaOptimizer});
+const xs = tf.tensor1d([1, 2, 3, 4, 5, 6, 7, 8, 9]);
+const ys = tf.tensor1d([0, 0, 0, 0, 0, 1, 1, 1, 1]);
+model.fit(xs, ys, {
+epochs: 2000,
+});
+model.predict(xs).print();
+
+tf.losses.meanSquaredError(ys, model.predict(xs)).print();
+
+",21788,,198,,8/14/2019 8:41,8/14/2019 15:50,Why is this simple neural network not training?,,1,0,,,,CC BY-SA 4.0 +13820,1,13851,,8/6/2019 13:58,,1,108,"

In the image given below, I do not understand a few things

+ +

1) Why is an entire area colored to signify misclassification? For the given decision boundary, only the points between $x_0$ and the decision boundary signify misclassification right? It's supposed to be only a set of points on the x-axis, not an area.

+ +

2) Why is the green area with $x < x_0$ a misclassification? It's classified as $C_1$ and it is supposed to be $C_1$ right?

+ +

3) Similarly, why is the blue area a misclassification? Any $x >$ the decision boundary belongs to $C_2$ and is also classified as such...

+ +

+",27422,,16708,,8/16/2019 14:08,9/9/2020 16:04,Why is the entire area of a join probability distribution considered when it comes to calculating misclassification?,,1,1,,,,CC BY-SA 4.0 +13821,1,,,8/6/2019 14:25,,6,4275,"

I have implemented a CNN for image classification. I have not used fully connected layers, but only a softmax. Still, I am getting results.

+ +

Must I use fully-connected layers in a CNN?

+",27735,,2444,,8/6/2019 22:35,6/12/2020 16:03,Are fully connected layers necessary in a CNN?,,3,1,,,,CC BY-SA 4.0 +13822,2,,13821,8/6/2019 14:41,,1,,"

In theory, you do not need fully-connected (FC) layers. FC layers are used to introduce more connectivity possibilities, and hence more weights that can be updated during back-propagation, as every neuron of the FC layer is connected to every neuron of the following layer.

+",26115,,2444,,8/6/2019 22:38,8/6/2019 22:38,,,,0,,,,CC BY-SA 4.0 +13823,2,,13821,8/6/2019 14:42,,1,,"

The reason people use an FC layer after the convolutional layers is that a CNN preserves spatial information. You said you use softmax, so you are probably doing some classification task. If you don't use an FC layer, then you probably evaluate the first class by the first position of the first kernel (not by the whole image with all kernels), the second class by the second position of the kernel, and so on.

+ +

The dense layer combines the info from all the kernels in all positions.

+ +

That said, you can technically convert an FC layer to a convolutional layer, as described here, so then you can say you ""skipped"" the FC layer.
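
+ +

For illustration, here is a minimal Keras-style sketch (my own, not from the linked post) of a conventional FC head next to a head that combines information from all positions without a Dense layer (a 1x1 convolution followed by global average pooling):

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(16, 3, activation='relu')(inputs)
x = layers.MaxPooling2D()(x)

# Conventional head: flatten, then a fully connected layer
fc_head = layers.Dense(10, activation='softmax')(layers.Flatten()(x))

# FC-free head: a 1x1 conv maps channels to classes, then pool over all positions
conv_head = layers.Softmax()(layers.GlobalAveragePooling2D()(layers.Conv2D(10, 1)(x)))

model_fc = keras.Model(inputs, fc_head)
model_conv = keras.Model(inputs, conv_head)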

+",16940,,,,,8/6/2019 14:42,,,,0,,,,CC BY-SA 4.0 +13824,1,,,8/6/2019 14:49,,5,222,"

In a neural network, by how much does the number of neurons typically vary from layer to layer?

+

Note that I am NOT asking how to find the optimal number of neurons per layer.

+

As a hardware design engineer with no practical experience programming neural networks, I would like to glean for example

+
    +
  1. By how much does the number of neurons in hidden layers typically vary from that of the input layer?

    +
  2. +
  3. What is the maximum deviation in the number of hidden layer neurons to the number of input layer neurons?

    +
  4. +
  5. How commonly do you see a large spike in the number of neurons?

    +
  6. +
+

It likely depends on the application so I would like to hear from as many people as possible. Please tell me about your experience.

+",27726,,2444,,12/22/2021 23:03,12/22/2021 23:03,"In a neural network, by how much does the number of neurons typically vary from layer to layer?",,2,0,,,,CC BY-SA 4.0 +13825,2,,13819,8/6/2019 16:08,,3,,"

You should try bigger learning rate / more epochs.

+ +

What you see is basically the output of randomly initialized weights. The default learning rate for SGD is 0.001, which is definitely not enough for the weights to change quickly enough with single-batch epochs and 9 examples.

+ +

I never used TF.js, but in Keras (which looks very similar) you can set it like this

+ +
sgd = keras.optimizers.SGD(lr=1)
+model.compile(sgd, 'mse')
+
+model.fit(xs, ys, epochs=200)
+
+ +

Another approach would be to use an adaptive learning rate optimizer, for example Adadelta. It still requires quite a lot of epochs, but much fewer than SGD with the standard learning rate, and you don't need to set the LR manually.

+ +
model.compile('adadelta', 'mse')
+model.fit(xs, ys, epochs=2000)
+
+ +

Here you can read how different optimizers work and converge

+ +

+ +

Edit: Another thing with tf.js (and asynchronous JavaScript code in general) is that the order of operations is not guaranteed: model.fit returns a promise, so in your case the evaluation is printed before training finishes.

+ +

This code should perform fine. Note that we evaluate the model inside the then clause.

+ +
const model = tf.sequential();
+model.add(tf.layers.dense({ units: 5, activation: 'sigmoid', inputShape: [1]}));
+model.add(tf.layers.dense({ units: 1, activation: 'sigmoid'}));
+model.compile({loss: 'meanSquaredError', optimizer: 'adadelta'});
+const xs = tf.tensor1d([1, 2, 3, 4, 5, 6, 7, 8, 9]);
+const ys = tf.tensor1d([0, 0, 0, 0, 0, 1, 1, 1, 1]);
+model.fit(xs, ys, {epochs: 1000}).then(h => {
+   console.log(""Loss: "" + h.history.loss[0]);
+   model.predict(xs).print();
+});
+
+ +

You can also use the async/await approach, for example as described in this answer

+",16940,,16940,,8/14/2019 15:50,8/14/2019 15:50,,,,4,,,,CC BY-SA 4.0 +13826,1,13828,,8/6/2019 16:11,,4,269,"

I have just watched a few videos on TED Talks talking about how AI benefits creatives and artists, but none of the videos I watched provided further resources for reference.

+

So, I would like to know how creatives and artists can apply AI in their work process. Like at least a tutorial guide on how it works.

+

Are there any recommendations on communities, tutorials, guides, platforms, and real-world AI applications that are meant for creatives and artists?

+",26942,,2444,,9/29/2021 14:29,9/29/2021 14:29,What are examples of applications of AI for creatives and artists?,,1,3,,,,CC BY-SA 4.0 +13827,2,,13824,8/6/2019 16:18,,1,,"
    +
  1. Input layers will always have the dimensionality of your input data (for every model I can think of).

  2. +
  3. See above, the deviation between hidden layers can be significant. For example, 128 in the first hidden and 64 in the rest (or vice versa).

  4. +
  5. This question in particular will always be problem dependent. It is decided via architecture search or intuition/experience combined with some exploratory search.

  6. +
+",9608,,9608,,8/6/2019 20:06,8/6/2019 20:06,,,,0,,,,CC BY-SA 4.0 +13828,2,,13826,8/6/2019 17:16,,3,,"

Machine learning (~AI) is all about code and data. If you are planning to create something really unique, you'll probably need to, well, work with code and data.

+ +

You should probably look at GitHub and try to get inspiration from open-sourced projects. If you find some AI-related artwork, try to google its name together with 'github'.

+ +

Another approach would be to look at Medium, which usually has some post/tutorial on a given topic.

+ +

Art is a very broad topic and it's hard to cover it all. I'll try to provide examples of different resources that could help you start.

+ +
    +
  • Google Research made some examples, where you run code in colab without installing anything locally. It helps you to try basic stuff.

  • +
  • There is tensorflow.js, which allows you to run code in the browser. You can look at demos with code

  • +
  • There is an intro course for visual artists on Kadenze.

  • +
  • Here are some more links to different courses.

  • +
  • DCGAN - very popular image generation library.

  • +
  • TouchDesigner, a very popular tool among visual artists, provides a Python interface. For example, here is a usage of the aforementioned DCGAN with it

  • +
+ +

Also, there is a creative section at NIPS (one of the biggest ML conferences): http://nips4creativity.com/. Some of the works provide a detailed explanation, like this one

+ +

I know, starting to code can be intimidating, but many GitHub repos provide some description of how to launch them, and you don't even have to know exactly how they work in order to use them. Also, it would help to have a person with programming experience who could help you deal with the code.

+",16940,,16940,,10/13/2019 15:51,10/13/2019 15:51,,,,0,,,,CC BY-SA 4.0 +13829,2,,13808,8/6/2019 17:32,,0,,"

Have you tried the TensorFlow playground? It allows you to turn all of the knobs and see the effect without having to code anything
+https://playground.tensorflow.org

+",27726,,,,,8/6/2019 17:32,,,,1,,,,CC BY-SA 4.0 +13830,1,13833,,8/6/2019 17:39,,1,144,"

I recently came across this article which cites a paper, which apparently won the outstanding paper award in ACL 2019. The theme is that it solved a longstanding problem called Word Sense Disambiguation.

+

What is Word Sense Disambiguation? How does it affect NLP?

+

(Moreover, how does the proposed method solve this problem?)

+",,user9947,2444,,12/18/2021 22:05,12/18/2021 22:05,"What is ""Word Sense Disambiguation""?",,2,0,,,,CC BY-SA 4.0 +13831,2,,13824,8/6/2019 18:00,,0,,"

There is no right answer to this question. But, I would like to point you to an answer on CV that addresses the mean of your problem.

+

Two points from the accepted answer that I want to draw your attention to are:

+
+

a) There are some empirically-derived rules-of-thumb, of these, the most commonly relied on is 'the optimal size of the hidden layer is usually between the size of the input and size of the output layers'.

+

b) In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.

+
+

Other answers in the thread are also very insightful. I will recommend you to go through the answers and figure out the standard deviation and just assume things are normal, and hence you would have your distribution.

+",16708,,-1,,6/17/2020 9:57,8/6/2019 18:16,,,,0,,,,CC BY-SA 4.0 +13832,1,,,8/6/2019 19:52,,3,1011,"

As everyone experienced in deep learning might know, in an image classification problem we normally add borders to an image and then resize it to the input size of a CNN. The reason for doing this is to keep the aspect ratio of the original image and retain its information.

+ +

I have seen people fill the border with black (a pixel value of 0 for each channel), gray (a pixel value of 127 for each channel), or random values generated from a Gaussian distribution.

+ +

My question is: is there any evidence showing which of these is correct?

+",25552,,2444,,8/6/2019 22:42,9/19/2019 17:01,How should we pad an image to be fed in a CNN?,,2,0,,,,CC BY-SA 4.0 +13833,2,,13830,8/6/2019 20:14,,1,,"

""Word Sense Disambiguation"" refers to the idea that words can have different meanings in different contexts. Here are some examples

+ +
    +
  • ""I went to the river river bank"" vs ""I deposited my check at the bank""
  • +
  • ""He's mad good at that game"" vs ""I am so mad at you""
  • +
+ +

How it affects NLP comes down to the way we process text. This generally includes the steps of tokenizing words and embedding them into some form of vector space. These embeddings in many cases are trained either through some self-supervised task on some corpora (examples include Word2Vec or GloVe) or from scratch under whichever task/dataset is being used.

+ +

Now, regarding that paper: it does not solve the problem, but it does introduce a new methodology that helps achieve a better-learned, generalizable representation for this task. The way I interpreted it is that they don't just use a sense label (which would be a one-hot encoding / what they call discrete) but instead use a continuous sense representation and do the comparison there. This difference allows words with similar but different senses to not be equidistant from words with completely different senses.

+",25496,,,,,8/6/2019 20:14,,,,0,,,,CC BY-SA 4.0 +13834,2,,13830,8/6/2019 20:20,,1,,"

Word Sense Disambiguation (WSD) is the task of associating meanings or senses (from an existing collection of meanings) with words, given the context of the words. (The word sense is a synonym for meaning.)

+ +

For example, consider the noun ""tie"" in the following two sentences

+ +
    +
  1. He wore a vest and a tie.
  2. +
  3. Their record was 3 wins, 6 losses, and one tie.
  4. +
+ +

In these two sentences, the meaning of the word tie is different. In sentence 1, the word tie refers to a necktie, which is a piece of cloth. In sentence 2, the word tie is a synonym for a draw, so it refers to a situation of a game. Therefore, we could associate the meaning (or sense) ""neckwear consisting of a long narrow piece of material"" to the word tie in the first sentence and the meaning ""the finish of a contest in which the winner is undecided"" to the same word in the second sentence.

+ +

The goal of WSD is thus to predict the appropriate sense or meaning of a word, given the context of the word.

+ +

Why is WSD important in NLP? Of course, there are many words that change meaning depending on the context, so WSD is important because you expect NLP algorithms and models to be able to correctly give meanings to words, given their context.

+",2444,,2444,,8/6/2019 20:26,8/6/2019 20:26,,,,0,,,,CC BY-SA 4.0 +13836,1,13846,,8/7/2019 1:38,,2,1044,"

I read different articles and keep getting confused on this point. Not sure if the literature is giving mixed information or I'm interpreting it incorrectly.

+

So from reading articles my understanding (loosely) for the following terms are as follows:

+

Epoch: +One Epoch is when an ENTIRE dataset is passed forward and backward through the neural network only ONCE.

+

Batch Size: +Total number of training examples present in a single batch. In real life scenarios of utilising neural nets, the dataset needs to be as large as possible, for the network to learn better. So you can’t pass the entire dataset into the neural net at once (due to computation power limitation). So, you divide dataset into Number of Batches.

+

Iterations: +Iterations is the number of batches needed to complete one epoch. We can divide the dataset of 2000 examples into batches of 500 then it will take 4 iterations to complete 1 epoch.

+

So, if all is correct, then my question is: at what point do the loss/cost calculation and the subsequent backprop process take place (assuming, from my understanding, that backprop takes place straight after the loss/cost is calculated)? Does the cost/loss function get calculated:

+
    +
  1. At the end of each batch, once the data samples in that batch have been forward-fed to the network (i.e. at each iteration, not at each epoch)? If so, then the loss/cost function gets the average of the losses of all data samples in that batch, correct?

    +
  2. +
  3. At the end of each epoch? Meaning all the data samples of all the batches are forward-fed first, before the cost/loss function is calculated.

    +
  4. +
+

My understanding is that it's the first point, i.e. at the end of each batch (passed to the network), hence at each iteration (not Epoch). At least when it comes to SGD optimisation. My understanding is - the whole point is that you calculate loss/cost and backprop for each batch. That way you're not calculating the average loss of the entire data samples. Otherwise you would get a very universal minima value in the cost graph, rather than local minima with lower cost from each batch you train on separately. Once all iterations have taken place, then that would count as 1 Epoch. +But then I was watching a YouTube video explaining Neural Nets, which mentioned that the cost/loss function is calculated at the end of each Epoch, which confused me. Any clarification would be really appreciated.

+",25360,,2444,,4/8/2022 21:06,4/8/2022 21:06,"When is the loss calculated, and when does the back-propagation take place?",,1,0,,,,CC BY-SA 4.0 +13837,1,13841,,8/7/2019 2:43,,2,134,"

The work I've seen so far has the nodes containing features. Are there any resources on how to use a GCN on a graph where the edges are the ones that contain features, rather than the nodes?

+",27240,,,,,8/7/2019 10:00,How are edge features implemented in Geometric Deep Learning?,,1,3,,,,CC BY-SA 4.0 +13838,1,,,8/7/2019 4:17,,1,47,"

I am not sure what common loss functions people usually use when training a student in a teacher-student learning model. Any insight on this is appreciated.

+",25989,,2444,,8/7/2019 10:16,8/7/2019 10:16,What are the loss functions used in teacher-student learning models?,,0,1,,,,CC BY-SA 4.0 +13840,1,,,8/7/2019 9:21,,2,43,"

I am very new to AI/ML but have a lot of interest in these topics. I am trying to understand how this gadget works.

+ +

+ +

So far I have understood that a NN model of the animal is generated by offline classification of the tagged data which is received from the wearable sensor. Consequently, some ML algorithms are used to generate a model of the animal.

+ +

That model is then embedded in the programmable wearable device. The device then sends the real-time tagged (classified, parameterized) data to the server.

+ +

Now I am looking for a sample NN model of such an animal. I wonder what an NN model of a cow would look like.

+",26033,,,,,8/7/2019 9:21,A NN based model of a Cattle for 'Heat Detection',,0,1,,,,CC BY-SA 4.0 +13841,2,,13837,8/7/2019 9:54,,1,,"

In the paper Neural Message Passing for Quantum Chemistry (2017), the authors (from Google, Google Brain and Google DeepMind) introduce a framework called message passing neural network (MPNN), which generalizes previously proposed geometric deep learning models. In section 2 of the paper, they describe this MPNN framework and they state that edge features can also be learned, using the MPNN framework, by introducing hidden states for all edges in the graph, $\mathbf{h}_{e_{vw}}^t$, where $t$ is the iteration number and $e_{vw}$ is the edge from node $v$ to node $w$.

+ +

See also the paper Machine learning prediction errors better than DFT accuracy, which is cited in the MPNN paper as an example where the authors also learn the edge features. In section 2 of the MPNN paper, they briefly describe this specific instantiation of the MPNN framework, called the Molecular Graph Convolutions.

+",2444,,2444,,8/7/2019 10:00,8/7/2019 10:00,,,,0,,,,CC BY-SA 4.0 +13842,1,,,8/7/2019 9:59,,3,2697,"

So I've got a neural net model (ResNet-18) and made a diagram according to the literature (https://arxiv.org/abs/1512.03385).

+ +

I think I understand most of the format of the convolutional layers: filter dims, conv, unknown number, stride (if applicable).

+ +

What does the number after 'conv' in the convolutional layers indicate? is it the number of neurons in the layer?

+ +

+ +

bonus q: this is being used for unsupervised learning of images, i.e the embedding output a network produces for an image is used for clustering. Would this make it incorrect for my architecture to have an FC layer at the end (which would be used for classifcation)?

+",27774,,,,,8/7/2019 10:10,What do the numbers in this CNN architecture stand for?,,1,0,,,,CC BY-SA 4.0 +13843,2,,13832,8/7/2019 10:04,,0,,"

I've more often seen image resizing than padding to be honest and tend to resize the images. Maybe it's because datasets I've used have images with near equal aspect ratios.

+ +

One major exception was when I worked with MR images. These were orthogonal and it would be wrong to mess up the aspect ratio. However, in this domain images have black borders everywhere, so a zero-padding was easy to apply.

+ +

The most common use of padding I've seen is for data augmentations (to fill values gone due to translations, rotations, shifting etc.). In this regard, I've used many types of paddings (constant value, random value, 'same' padding, mirrored padding etc.) The best I've found to empirically work is zero-padding but I don't think that you'll ever find a proof for this. I like to think of it as a hyperparameter; different padding strategies maybe work better for different tasks. Though I think that zero-padding is the safest (there is a small chance of messing things up).
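
+ +

Just to make the options concrete, here is a quick NumPy illustration (my own sketch) of a few of the padding strategies mentioned above:

import numpy as np

img = np.arange(9).reshape(3, 3)                                    # toy 3x3 image
zero_pad   = np.pad(img, 1, mode='constant', constant_values=0)     # zero-padding
grey_pad   = np.pad(img, 1, mode='constant', constant_values=127)   # constant grey border
mirror_pad = np.pad(img, 1, mode='reflect')                         # mirrored padding
edge_pad   = np.pad(img, 1, mode='edge')                            # repeat the border pixels
print(zero_pad)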

+",26652,,,,,8/7/2019 10:04,,,,1,,,,CC BY-SA 4.0 +13844,2,,13842,8/7/2019 10:10,,1,,"

This number refers to the number of kernels (or feature maps) that are convolved with the input. So, for example, in the first convolutional layer, $64$ $3 \times 3$ kernels are convolved with the image.
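
+ +

As a sketch (Keras syntax, my own illustration, not from the paper), a block labelled ""3x3 conv, 64, /2"" in such a diagram corresponds to a layer like:

from tensorflow.keras import layers

conv = layers.Conv2D(filters=64, kernel_size=3, strides=2, padding='same')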

+ +

The ResNet presented in Deep Residual Learning for Image Recognition is used for image classification. Furthermore, note that your diagram already contains a fully connected layer at the end.

+",2444,,,,,8/7/2019 10:10,,,,4,,,,CC BY-SA 4.0 +13846,2,,13836,8/7/2019 12:23,,1,,"
+

Epoch: One Epoch is when an ENTIRE dataset is passed forward and backward through the neural network only ONCE.

+ +

Batch Size: Total number of training examples present in a single + batch. In real life scenarios of utilising neural nets, the dataset + needs to be as large as possible, for the network to learn better. So + you can’t pass the entire dataset into the neural net at once (due to + computation power limitation). So, you divide dataset into Number of + Batches.

+ +

Iterations: Iterations is the number of batches needed to complete one + epoch. We can divide the dataset of 2000 examples into batches of 500 + then it will take 4 iterations to complete 1 epoch.

+
+ +

This is for the most part correct, except there are other reasons you would sometimes want to use batches (even if you could fit the whole thing in memory). One is that it's less likely to overfit in a stochastic setting than in the full-batch setting. Another is that it can achieve similar extrema with faster convergence.

+ +

Now regarding your question, yes you apply the gradient descent step at the end of each batch or desired batch (what I mean by desired batch, is if you want to use a batch-size of 24 but your device can only process 8, you may use gradient accumulation of 3 pseudo-batches to achieve an emulated batch of 24).
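
+ +

To make the gradient accumulation idea concrete, here is a PyTorch-style sketch (my own illustration; the toy model and data are made up):

import torch
from torch import nn

model = nn.Linear(10, 2)                       # toy stand-in model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(6)]  # micro-batches of 8

accum_steps = 3                                # 3 micro-batches of 8 -> emulated batch of 24
optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = criterion(model(x), y) / accum_steps   # scale so the accumulated gradient matches a batch of 24
    loss.backward()                               # gradients add up in the .grad buffers
    if (i + 1) % accum_steps == 0:
        optimizer.step()                          # one update per emulated batch
        optimizer.zero_grad()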

+ +

Though I do think it's worth mentioning that your goal is still to find the global minimum. Even if your batch size is the entire dataset, that does not mean you will not fall into a local minimum; in most cases that is actually still the most probable outcome.

+",25496,,,,,8/7/2019 12:23,,,,15,,,,CC BY-SA 4.0 +13847,1,,,8/7/2019 13:25,,2,155,"

I am applying a reinforcement learning agent (PPO2, stable baselines implementation) to a custom-built environment using OpenAI Gym. One reward function (formulated as a loss function, that is, all rewards are negative) I tested is of type $R(s, a, s')$. During training, it can happen that not only one but several actions are applied simultaneously to the environment before a reward is returned:

+ +

$s_t \rightarrow a_{t,1}, a_{t,2}, a_{t,3} \rightarrow s_{t+1}$ instead of $s_t \rightarrow a_t \rightarrow s_{t+1}$.

+ +

Out of all actions applied, only one is generated by the agent. The others are either a copy of the agent's action or are new values.

+ +

If I look at the tensorboard output of the trained agent, it looks rather horrific, as displayed below (~ zero explained variance, key training values do not converge or behave weirdly, etc.).

+ +

Obviously, the training did not really work. Now I wonder what the reason for that is.

+ +
    +
  1. Is it possible to train an agent using a reward function of type $R(s, a, s')$ even if several actions are applied simultaneously, or is this not possible at all? Other agents I trained using a reward function of type $R(s,a)$ have a better tensorboard output, so I guess that this is the problem.
  2. +
  3. Or is maybe another reason more likely to be the root of the problem? Like a bad observation space formulation or hyperparameter selection (both for RL algorithm and reward function used).
  4. +
+ +

Thanks for your help!

+ +

+ +

+",26876,,26876,,8/7/2019 15:14,8/7/2019 15:14,"Is it possible to use Reward Function of type R(s, a, s') if more than one action is applied?",,0,4,,,,CC BY-SA 4.0 +13848,1,17836,,8/7/2019 16:06,,3,3447,"

As the question suggests, I'm trying to see if I can solve OpenAI's hardcore version of their gym's bipedal walker using OpenAI's DDPG algorithm.

+ +

Below is a performance graph from my latest attempt, including the hyper parameters, along with some other attempts I've made. I realise it has been solved using other custom implementations (also utilising only dense layers in Tensorflow, not convolution), but I don't seem to understand why it seems so difficult to solve using OpenAI's implementation of DDPG? Can anyone please point out where I might be going wrong? Thank you so much for any help!

+ +

Latest attempt's performance: +

+ +
    +
  • Average score: about -75 to -80
  • +
  • Env interacts: about 8.4mil (around 2600 epochs)
  • +
  • Batch size: 64
  • +
  • Replay memory: 1000000
  • +
  • Network: 512, 256 (relu activation on inputs, tanh on outputs)
  • +
  • All other inputs left to default
  • +
+ +

Similar experiments yielded similar scores (or less), and included:

+ +
    +
  • Network sizes of (400,300), (256,128), and (128,128,128)
  • +
  • Number of epochs ranging from 500 all the way to 100000
  • +
  • Replay memory sizes all the way up to 5000000
  • +
  • Batch sizes of 32, 64, 128, and 256
  • +
  • All of the above, with both DDPG as well as TD3
  • +
+ +

Thank you so much for any help! It would be greatly appreciated!

+",27392,,27392,,8/7/2019 17:15,10/7/2020 23:52,Has anyone been able to solve OpenAI's hardcore bipedal walker with their implementation of DDPG?,,3,1,,,,CC BY-SA 4.0 +13849,1,,,8/7/2019 16:18,,2,33,"

Is it possible to use neuro-fuzzy systems for problems where ANNs are currently being used, for instance, when you have tabular data for regression or classification tasks? What kind of advantage can give me neuro-fuzzy systems over using an ANN for the mentioned tasks?

+",27791,,2444,,1/22/2021 1:59,1/22/2021 1:59,Can neuro-fuzzy systems be used for supervised learning tasks with tabular data?,,0,1,,,,CC BY-SA 4.0 +13850,1,,,8/7/2019 16:23,,4,379,"

I have a problem I would like to tackle with RL, but I am not sure if it is even doable.

+

My agent has to figure out how to fill a very large vector (let's say from 600 to 4000 elements in the most complex setting) made of natural numbers, i.e. a 600-element vector $[2000,3000,3500, \dots]$ consisting of an energy profile for each timestep of a day, for each house in the neighborhood. I receive a reward for each of these possible combinations. My goal is, of course, that of maximizing the reward.

+

I can always start from the same initial state, and I receive a reward every time any profile is chosen. I believe these two factors simplify the task, as I don't need long episodes to get a reward, nor do I have to take into consideration different states.

+

However, I only have experience with DQN and I have never worked on Policy Gradient methods. So I have some questions:

+
    +
  1. I would like to utilize the simplest method to implement, so I considered DDPG. However, I do not really need a target network or a critic network, as the state is always the same. Should I use a vanilla PG? Would REINFORCE be a good option?

    +
  2. +
  3. I get how PG methods work with discrete action space (using softmax and selecting one action - which then gets reinforced or discouraged based on reward). However, I don't get how it is possible to update a continuous value. In DQN or stochastic PG, the output of the neural network is either a Q value or a probability value, and both can be directly updated via reward (the more reward the bigger the Q-value/probability). However, I don't get how this happens in the continuous case, where I have to use the output of the model as it is. What would I have to change in this case in the loss function for my model?

    +
  4. +
+",23638,,2444,,12/19/2021 18:43,12/19/2021 18:43,What is the simplest policy gradient method to implement for a problem continuous action space?,,0,0,,,,CC BY-SA 4.0 +13851,2,,13820,8/7/2019 18:08,,0,,"

The misclassifications that could arise if $\hat{x}$ is used as decision boundary are:

+ +

a) Classifying a point as $C_2$ when actually it was $C_1$ -- which will only happen when $x > \hat{x}$ as only the points greater than $\hat{x}$ are being classified as $C_2$.

+ +

b) Classifying a point as $C_1$ when actually it was $C_2$ -- which will only happen when $x < \hat{x}$ as only the points less than $\hat{x}$ are being classified to be from class $C_1$.
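
+ +

Putting a) and b) together, the total probability of making a mistake with the boundary $\hat{x}$ is

$$p(\text{mistake}) = \int_{\hat{x}}^{\infty} p(x, C_1)\,dx + \int_{-\infty}^{\hat{x}} p(x, C_2)\,dx,$$

and the coloured regions in the figure together make up exactly this quantity, which is why whole areas (and not just points on the axis) are shaded.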

+ +
+

2) Why is the green area with x < x0 a misclassification?

+
+ +

If we label a point -- an $x$ from the interval on the horizontal axis which corresponds to the green area -- as being from either class, then there is a good chance that it could be misclassified. This is because both joint distributions' curves are above the axis for that interval and both have some area (probability) over it.

+ +

There is a positive probability that a point that was drawn from the interval (a subset of the sample space, an event), which corresponds to the green area, was drawn from either of the two joint distributions or belongs to either of the two classes $C_1$ and $C_2$. The probability is the area under the curve.

+ +
+

It's classified as C1 and it is supposed to be C1 right?

+
+ +

The same data point could well be generated from multiple different probability distributions. Here, you have an illustration of two probability distributions for which the author is trying to find the optimal decision boundary that will minimize the misclassification ""region"".

+ +

The point is not necessarily from $C_1$, even if the density of the joint probability distribution $p(x, C_1)$ has higher values there; it could well be from $C_2$. So, when one classifies that data point as being from class $C_1$, one may make an error. This is why the whole area under the curve $p(x, C_2)$ has been painted green -- which means that there is some probability that a point could be from the distribution $p(x, C_2)$, and blindly labelling all such points as being from $C_1$ will certainly lead to some misclassifications.

+",16708,,16708,,8/9/2019 6:28,8/9/2019 6:28,,,,0,,,,CC BY-SA 4.0 +13853,2,,6231,8/8/2019 1:49,,2,,"

In my implementation, I used a recursion system to calculate the output nodes. It works as follows:

+ +
    +
  1. Assume a feed-forward network
  2. +
+ +
+

Only allow the ""add connection"" mutation to connect a node with another node >that have a higher maximum distance from an input node. This should result in >feed forward network, without much extra work. (Emergent properties are great!)

+
+ +
    +
  1. Define function x, a recursive function that takes in a node number
  2. +
  3. Define function y, a second function that takes in a node and returns all the connections with that node as an output
  4. +
+ +

In the recursive function:

+ +
    +
  1. Call function y

  2. +
  3. Call function x on function y outputs

  4. +
  5. If the parameter for x is any input node, return the node value.

  6. +
+ +

This was the most elegant way of implementing it that I could think of, and it's a lot simpler than explicitly tracking all of the connections.
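
+ +

A minimal Python sketch of that recursion (my own illustration; the node and connection representations are assumed, not taken from any particular NEAT implementation):

import math

def incoming(node, connections):              # function y: enabled connections ending at node
    return [c for c in connections if c[1] == node and c[3]]

def evaluate(node, connections, input_values, activation):   # function x
    if node in input_values:                  # base case: input nodes just return their value
        return input_values[node]
    total = 0.0
    for src, dst, weight, enabled in incoming(node, connections):
        total += weight * evaluate(src, connections, input_values, activation)
    return activation(total)

# connections are (src, dst, weight, enabled); nodes 0 and 1 are inputs, node 2 is an output
conns = [(0, 2, 0.5, True), (1, 2, -0.3, True)]
sigmoid = lambda v: 1 / (1 + math.exp(-v))
print(evaluate(2, conns, {0: 1.0, 1: 2.0}, sigmoid))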

+",27789,,27789,,8/8/2019 1:54,8/8/2019 1:54,,,,0,,,,CC BY-SA 4.0 +13854,1,,,8/8/2019 7:31,,2,171,"

I'm quite new to ANNs. I intend to use ANNs for predicting spike points in time series right before they happen. I've already used LSTMs for another scenario, and I know that they can be used in similar situations as well.

+ +

Can anyone give me a piece of advice or some suitable resources that might be used as a beginning point? It would be much appreciated if it uses DeepLearning4J for implementation.

+",26922,,2444,,8/8/2019 8:53,8/11/2019 18:56,Spike detection in time series using Artificial Neural Networks,,0,2,,,,CC BY-SA 4.0 +13855,2,,5939,8/8/2019 8:12,,0,,"

Assuming artifacts and unnatural elements do not exist in the media in question and that the media is indistinguishable to the human eye, the only way to be able to do this is to trace back to the source of the images.

+ +

An analogy can be drawn to DoS (Denial of Service) attack, where an absurd number of requests are sent from a single IP to a single server causing it to crash - A common solution is a honeypot, where a high number of requests from one IP is redirected to a decoy server where, even if it crashes, uptime is not compromised. Some research has been done on these lines where this paper spoke about verifying the digital signature of an image or this one where they proposed tampered image detection and source camera identification.

+ +

Once traced back to a source, if an absurd number of potentially fake images come from a singular source, it is to be questioned.

+ +

The common fear arises when we are dealing with something, on the basis of the analogy, like a DDoS (Distributed Denial of Service) attack where each fake request comes from a distributed source - Network Security has found ways to deal with this, but security and fraud detection in the terms of AI just isn't that established.

+ +

Essentially for a well thought out artificial media for a specific malicious purpose, today, is quite hard to be caught - But work is being done currently on security in AI. If you're planning on using artificial media for malicious purposes, I'd say now is the best time probably.

+ +

This security has been a concern from a bit now. An article written by a data scientist quotes

+ +
+

Deepfakes have already been used to try to harass and humiliate women through fake porn videos. The term actually comes from the username of a Reddit user who was creating these videos by building generative adversarial networks (GANs) using TensorFlow. Now, intelligence officials are talking about the possibility of Vladimir Putin using fake videos to influence the 2020 presidential elections. More research is being done on deepfakes as a threat to democracy and national security, as well as how to detect them.

+
+ +

Note - I'm quite clueless about network security, all my knowledge comes from one conversation with a friend, and thought this would be a good analogy to use here. Forgive any errors in the analogy and please correct if possible!

+",25658,,25658,,9/7/2019 14:00,9/7/2019 14:00,,,,2,,,,CC BY-SA 4.0 +13859,1,13998,,8/8/2019 14:13,,2,444,"

As far as I understand -- I know very little on the topic -- the core of AI boils down to designing algorithms that provide a TRUE/FALSE answer to a given statement. Nevertheless, I am aware of the limitations implied by Gödel's incompleteness theorems, but I am also aware that there have been long debates, such as the Lucas and Penrose arguments, with all the consequent objections, during the past 60 years.

+ +

The conclusion is, in my understanding, that to create AI systems we must accept incompleteness or inconsistency.

+ +

Does that mean that intelligence systems (including artificial ones), like humans, may end up in some undecidable situation that may lead to take a wrong decision?

+ +

While this may be acceptable in some applications (for example, if every once in a while a spam email ends up in the inbox folder -- or vice versa -- despite an AI-based anti-spam filter), in some other applications it may not. I am referring to real-time critical applications, where a ""wrong"" action from a machine may harm people.

+ +

Does that mean that AI will never be employed for real-time critical applications?

+ +

Would it, in that case, be safer to use deterministic methods that do not leave room for any kind of undecidability?

+",27826,,2444,,5/13/2020 20:05,5/13/2020 20:05,Do Gödel's theorems imply that intelligence systems may end up in some undecidable situation (that may make them take a wrong decision)?,,3,2,,5/13/2020 10:34,,CC BY-SA 4.0 +13860,2,,13859,8/8/2019 16:38,,4,,"

Your initial statement on the core of AI is rather limited. In general, AI is concerned with modeling human behaviour either by imitation (soft AI) or by replicating the way human cognition works (hard AI). So far there have been some successes with soft AI, as computers can perform tasks that required some ""intelligence"", though the degree of this intelligence is questionable. This is partly due to the fact that even we as humans don't really have a clear idea what it means for a computer to ""understand"" something.

+ +

But your conclusion is correct: if we build an AI system with human characteristics, then it will make mistakes, just as humans make mistakes. And any system designed by humans (or machines!) will make mistakes. However, not being able to deal with an imperfect world is not really relevant to AI alone: even systems that do not use AI methods will have to face that, and whether a system is suitable for real-time critical applications has got nothing to do with whether it is based on AI or not.

+ +

UPDATE: There seem to be two distinct issues at play here: decidability and real-time processing.

+ +
    +
  1. Real-time computing (RTC): This is not really related to AI. Even ordinary programmes written in Java are not really safe for RTC, as they could start a garbage collection cycle at any time which pauses execution of the program. Just imagine a reactor core starts overheating just as your controller runs out of memory and garbage collection kicks in, halting the program for a few minutes. If you implement AI methods in RTC-safe systems, that should not be an issue.

  2. +
  3. Decidability: Your reasoning is that AI systems attempt to mirror human cognition, thus incorporating the ability to make mistakes. This is a more philosophical issue — if a human can control a system, then an AI system with the same capabilities should be able to do it too. This assumes that we are able to replicate human behaviour (which we are not). There are AI methods which are deterministic, so would come to the same conclusions given identical environments. So I would say that they would not perform worse than non-AI methods. It partly depends what you want to call AI; the distinction between traditional AI and statistical methods keeps getting blurred at present.

  4. +
+ +

To conclude: No, AI methods should be suitable, as they can also be deterministic. It depends on the actual application and method if they are. And, of course, on what you count as AI.

+",2193,,2193,,8/13/2019 14:28,8/13/2019 14:28,,,,0,,,,CC BY-SA 4.0 +13861,1,,,8/8/2019 17:21,,1,183,"

I am wondering if I can use neural networks to find feature importances in a similar manner as can be done for random forests or decision trees and, if so, how to do it.

+ +

I would like to use it on tabular time series data (not images). The reason why I want to find importances with neural networks rather than decision trees is that NNs are more complicated algorithms, so using NNs might point out some correlations that are not seen by simpler algorithms, and I need to know which features are found to be more useful through those more complicated correlations.

+ +

I am not sure if I made it clear enough, please let me know if I have to explain something more.

+",22659,,32410,,4/21/2021 6:06,4/22/2021 21:57,Can neural networks be used to find features importance?,,2,1,,,,CC BY-SA 4.0 +13862,1,,,8/8/2019 17:49,,2,325,"

How do I interpret a large variance of a loss function?

+ +

I am currently training a transformer network (using the software, but not the model from GPT-2) from scratch and my loss function looks like this: +

+ +

The green dots are the loss averaged over 100 epochs and the purple dots are the loss for each epoch.

+ +

(You can ignore the missing part, I just did not save the loss values for these epochs)

+ +

Is such a large variance a bad sign? And what are my options for tuning to get it to converge faster? Is the network too large or too small for my training data? Should I have a look at the batch size?

+ +
    +
  • Learning rate parameter: 2.5e-4
  • +
  • Training data size: 395 MB
  • +
+ +

GPT-2 parameters:

+ +
{
+  ""n_vocab"": 50000,
+  ""n_ctx"": 1024,
+  ""n_embd"": 768,
+  ""n_head"": 12,
+  ""n_layer"": 12
+}
+
+",25798,,2444,,11/1/2019 3:15,11/1/2019 3:15,How to interpret a large variance of the loss function?,,0,6,,,,CC BY-SA 4.0 +13863,2,,6231,8/8/2019 18:40,,2,,"

Hello Chris, I am also implementing this algorithm from scratch, and the way I go about activating my MLP net is as follows. I instantiate a list of nodes (""actives""), which is initially set to all the input nodes. I then pass that to a function that initializes an empty list (""next actives"") and proceeds to loop through each set of connections for each node in the actives list; it adds each ""to"" node from those connections to the ""next actives"" list unless it is an output node or has already been activated. Once the whole ""actives"" list has been looped through, I call the function again, this time passing ""next actives"" as the actives list. When ""next actives"" comes back empty, I know the net has been fully activated.

+ +

In this scenario, the connection from node three to node five would be evaluated, but node five would not be added to the ""next actives"" list because it had already been activated, preventing an infinite loop.
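
+ +

A rough Python sketch of that activation loop (my own illustration; the connection representation is assumed):

def activation_order(input_nodes, output_nodes, connections):
    # connections are (from_node, to_node) pairs; returns hidden nodes grouped by activation pass
    activated = set(input_nodes)
    actives = list(input_nodes)
    order = []
    while actives:
        next_actives = []
        for node in actives:
            for src, dst in connections:
                if src == node and dst not in activated and dst not in output_nodes:
                    activated.add(dst)         # already-activated nodes are skipped, avoiding loops
                    next_actives.append(dst)
        if next_actives:
            order.append(next_actives)
        actives = next_actives                 # stop when no new nodes get activated
    return order

conns = [(0, 2), (1, 2), (2, 3), (0, 3)]       # 0, 1 inputs; 2 hidden; 3 output
print(activation_order([0, 1], [3], conns))    # [[2]]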

+",20044,,20044,,8/8/2019 21:17,8/8/2019 21:17,,,,0,,,,CC BY-SA 4.0 +13865,1,,,8/9/2019 1:19,,6,530,"

I'm trying to understand the relationship of humans and automation, historically and culturally.

+ +

I ask because the waterclock is generally considered the earliest form of automation, but snares and deadfall traps constitute simple switch mechanisms.

+ +

(They are single use without human-powered reset, but seem to qualify as machines. The bent sapling that powers the snare is referred to as the engine, which is ""a machine with moving parts that converts power into motion."")

+ +

If snares and traps are a form of automation, automation has been with us longer, potentially, than civilization.

+ +
    +
  • Are simple animal traps a form of automation or computation?
  • +
+ +
+ +

+
How to make a simple snare (the Ready Store)

+ +


+Paiute Deadfall Trap (Homestead Telegraph)

+",1671,,1671,,8/9/2019 4:09,9/19/2019 15:00,Are simple animal snares and traps a form of automation? Of computation?,,1,0,,,,CC BY-SA 4.0 +13866,1,,,8/9/2019 3:26,,5,142,"

This seems like such a simple idea, but I've never heard anyone that has addressed it, and a quick Google revealed nothing, so here it goes.

+ +

The way I learned about machine learning is that it recognizes patterns in data, and not necessarily ones that exist -- which can lead to bias. One such example is hiring AIs: If an AI is trained to hire employees based on previous examples, it might recreate previous, human, biases towards, let's say, women.

+ +

Why can't we just feed the training data without data that we would consider discriminatory or irrelevant, for example, without fields for gender, race, etc., can AI still draw those prejudiced connections? If so, how? If not, why has this not been considered before?

+ +

Again, this seems like such an easy topic, so I apologize if I'm just being ignorant. But I have learned a bit about AI and machine learning specifically for some time now, and I'm just surprised this hasn't ever been mentioned, not even as a ""here's-what-won't-work"" example.

+",27835,,2444,,8/9/2019 11:05,8/10/2019 15:52,Preventing bias by not providing irrelevant data,,3,0,,,,CC BY-SA 4.0 +13867,1,,,8/9/2019 6:43,,11,3076,"

I'm trying to use a Monte Carlo Tree Search for a non-deterministic game. Apparently, one of the standard approaches is to model non-determinism using chance nodes. The problem for this game is that it has a very high min-entropy for the random events (imagine the shuffle of a deck of cards), and consequently a very large branching factor ($\approx 2^{32}$) if I were to model this as a chance node.

+ +

Despite this issue, there are a few things that likely make the search more tractable:

+ +
    +
  1. Chance nodes only occur a few times per game, not after every move.
  2. +
  3. The chance events do not depend on player actions.
  4. +
  5. Even if two random outcomes are distinct, they might be ""similar to each other"", and that would lead to game outcomes that are also similar.
  6. +
+ +

So far all approaches that I've found to MCTS for non-deterministic games use UCT-like policies (e.g. chapter 4 of A Monte-Carlo AIXI Approximation) to select chance nodes, which weight unexplored nodes maximally. In my case, I think this will lead to fully random playouts since any chance node won't ever be repeated in the selection phase.

+ +

What is the best way to approach this problem? Has research been done on this? Naively, I was thinking of a policy that favors repeating chance nodes more over always exploring new ones.

+",27839,,2444,,11/19/2019 22:37,7/11/2023 11:46,MCTS for non-deterministic games with very high branching factor for chance nodes,,3,0,,,,CC BY-SA 4.0 +13870,1,14048,,8/9/2019 10:40,,0,80,"

The position of a robot on a map consists of an x/y value, for example $position(x=100.23, y=400.78)$. The internal representation of each variable is a 32-bit float, which is equal to 4 bytes in RAM. For storing the absolute position of the robot (x, y), only $4+4=8$ bytes are needed. During the robot's movements, the position is updated continuously.

+ +

The problem is that a 32-bit float variable creates a state space of $2^{32}=4294967296$ values, which means there is a huge number of possible positions in which the robot can be. A robot control system maps the sensor readings to an action. If the input space is large, then the control system gets more complicated.

+ +

What is the term used in the literature for describing the problem of exploding state space of sensor variables? Can it be handled with discretization?

+",,user11571,,,,8/19/2019 11:18,What is the correct name for state explosion from sensor discretization?,,1,0,,,,CC BY-SA 4.0 +13871,5,,,8/9/2019 11:15,,0,,"

For more info, see e.g. https://en.wikipedia.org/wiki/Algorithmic_bias.

+",2444,,2444,,8/9/2019 11:15,8/9/2019 11:15,,,,0,,,,CC BY-SA 4.0 +13872,4,,,8/9/2019 11:15,,0,,"For questions related to the concept of algorithmic bias, which is the bias that algorithms exhibit, such as privileging one arbitrary group of users over others. Algorithmic bias can emerge due to many factors, including but not limited to the design of the algorithm itself, unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm.",2444,,2444,,8/9/2019 11:15,8/9/2019 11:15,,,,0,,,,CC BY-SA 4.0 +13874,2,,13866,8/9/2019 11:57,,4,,"
+

Why can't we just feed the training data without data that we would consider discriminatory or irrelevant, for example, without fields for gender, race, etc., can AI still draw those prejudiced connections? If so, how? If not, why has this not been considered before?

+
+ +

Yes. The AI/model can still learn those prejudiced connections. Consider that you have a third variable, a confounding variable or one with a spurious relationship, that is correlated with both the bias variable (BV) and the dependent variable (DV). If the analyst removed the BV but failed to remove the third variable from the data that is fed to the model, then the model will learn the relationships the analyst didn't want it to learn.

+ +

But, at the same time the removal of the variables could lead to omitted variable bias, which occurs when a relevant variable is left out.

+ +

Ex:

+ +

Suppose that the goal is prediction of salary ($S$) of an individual and the independent variables are age ($A$) and experience ($E$) of the individual. The analyst wants to remove the bias that could come in because of age. So, she removes age from one of the models and comes up with two competing linear models:

+ +

$S = \beta_0 + \beta_1E + \varepsilon$

+ +

$S = \beta_0 + \beta_1^*E + \beta_2A + \varepsilon$

+ +

Since experience is highly correlated with age, in the presence of age in the model it is very likely that $\beta_1^* < \beta_1$. $\beta_1$ will be a bogus estimate of the effect of a person's experience on salary, as the first model suffers from omitted variable bias.

+ +

At the same time the predictions from the first model would be reasonably good although the second model is very likely to beat the first model. So, if the analyst wants to remove any 'bias' that might come in because of age i.e. $A$ she must also remove $E$ from the model.
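
+ +

A small simulation sketch of this effect (my own illustration; the coefficients and noise levels are made up):

import numpy as np

rng = np.random.default_rng(0)
n = 10000
A = rng.normal(40, 10, n)                       # age
E = 0.5 * A + rng.normal(0, 2, n)               # experience, highly correlated with age
S = 1.0 * E + 2.0 * A + rng.normal(0, 5, n)     # salary depends on both

slope_E_only = np.polyfit(E, S, 1)[0]           # model 1: age omitted
X = np.column_stack([np.ones(n), E, A])
slope_E_full = np.linalg.lstsq(X, S, rcond=None)[0][1]   # model 2: age included

print(slope_E_only, slope_E_full)               # the first overstates the effect of experience (~4.4 vs ~1.0)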

+",16708,,16708,,8/10/2019 6:31,8/10/2019 6:31,,,,8,,,,CC BY-SA 4.0 +13878,2,,13866,8/9/2019 19:58,,1,,"

Sometimes, the reason that this isn't an option is that you don't have that much control over what data is provided. Suppose, for example, you want a fancy AI that reads a Résumé and filters on suitability for a job. There isn't a particularly rigid formula about what people put in their Résumé, which makes it difficult to exclude things you'd rather not consider.

+ +

Where you do have more control about exactly what information you consider, it can still be thwarted by correlations. Think, for a moment, how this pans out with a human decision maker. You want to ensure that Joe Sexist gives women a fair chance at being hired, so you make sure that there isn't a gender field in the application form. You also blind out the applicant's name, since there is no good reason that a name should determine suitability for a role, and including it would reveal a lot of genders. But you don't block out the hobbies, clubs and societies entry, because it's thought to say something positive about an applicant if they were the captain of their college sports team. Joe Sexist, however, considers it a positive if an applicant captained a male dominated team such as American football, but considers negative being captain of a female dominated team! Some might say that wouldn't quite be bias against women; it's bias against players of sports that Joe Sexist considers effeminate. But really a skunk by any other name would stink as bad.

+ +

The same sort of thing can happen with AI. Now to be clear, the AI is not sexist. It is a blank sheet with no preconceptions until it gets fed data. But when it gets fed data, it will find patterns in the same way. The dataset it gets given is years of hiring decisions by Joe Sexist. As suggested, there is no entry for gender, but there are fields for all the things that might be considered slightly relevant. For example, we include whether they have a clean driving license. The AI notices that there is a positive correlation between the number of road traffic offences an applicant has and Joe's likelihood of hiring them (because, of course, there happens to be a correlation between dangerous driving and gender). Again, the AI has no preconceptions. It doesn't know that traffic offences are dangerous and should be weighted against. As far as its dataset suggests, they're points! With this sort of information in a dataset, the AI can exhibit all the same sorts of biases as Joe Sexist, even though it doesn't know what a ""woman"" is!

+ +
+ +

To expand this with specific numbers, suppose that your dataset has 1000 male and 1000 female applicants for a total of 1000 places. Of those, 400 of the men and 100 of the women have a tarnished traffic record.

+ +

Joe Sexist was not in favour of reckless drivers: in fact a clean traffic record guaranteed you would beat an equivalent candidate with a tarnished record. But he was very in favour of men: being male made you 9 times more likely to get hired than being female.

+ +

So he gives places to 900 of the men: all 600 of the clean drivers and 300 dirty drivers. +He gives places to 100 of the women: all to clean drivers.

+ +

Now, you take away any mention of gender in the dataset. +There are 2000 people, 500 drive badly, 1500 drive well. +Of these, 300 bad drivers get jobs, and 700 good drivers get jobs. +Therefore the 25% of the population who drive badly get 30% of the jobs, which means (as far as an AI that just looks blindly at the numbers is concerned) that driving badly suggests you should get the job. That's a problem.

+ +

Further, suppose you have a new batch of 2000 applicants with the same ratios and it's the AI's turn to decide. Now often AIs actually make this even worse by exaggerating the significance of subtle indicators, but let's suppose that this one does everything in strict proportionality. The AI has learned that 60% (300 / 500) of the bad drivers should get the job. It doesn't know about gender, so it at least allocates the bad driver bonus ""fairly"": 240 male and 60 female bad drivers get jobs. Then 280 male and 420 female good drivers get jobs. This comes to 520 male and 480 female applicants getting in. Even though the original applicant pool was balanced and if anything women were better (at least at driving), the original sexism in the training dataset still gives some advantage to the men (as well as giving an advantage to bad drivers).
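
+ +

(If you want to check the arithmetic above, a couple of lines of Python reproduce it:)

male_bad, male_good = 400, 600
female_bad, female_good = 100, 900
bad_rate, good_rate = 300 / 500, 700 / 1500     # hire rates the AI learned from the biased decisions
print(male_bad * bad_rate + male_good * good_rate,       # 520.0 men hired
      female_bad * bad_rate + female_good * good_rate)   # 480.0 women hired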

+ +
+ +

Now, don't let me completely dissuade you. In the human case, it is a known fact that blinding out some information does indeed give more balanced hiring decisions. And even in my toy example, while it doesn't get to fairness, it has massively reduced the scale of the sexism. So yes, it probably would make the AI somewhat less sexist if the most blatant indicators aren't provided in the dataset. But perhaps this gives some intuition about why it's not a complete solution to the problem. There is some sexism that leaks through, and it also causes the system to make very weird associations with other bits of the dataset.

+",23413,,23413,,8/10/2019 15:52,8/10/2019 15:52,,,,0,,,,CC BY-SA 4.0 +13879,2,,13866,8/9/2019 21:27,,0,,"

There is a wider social issue to consider here also. When we build machines, we evaluate what they do and decide if the action that they undertake is to our benefit or not. All societies do this, although you are probably more aware of obvious examples such as the Amish than you are of your own society.

+ +

When people complain about biased decision making by AI systems, they are not just evaluating if the result is accurate, but also if that decision supports the values that they wish to see instantiated in society.

+ +

You can make a human take cultural factors into account when making a decision, but not an AI that is completely unaware of them. People describe this as complaining about 'bias', but that is not always completely accurate. They are really complaining that the use of AI systems fails to take into account wider social issues that they consider to be important.

+",12509,,,,,8/9/2019 21:27,,,,0,,,,CC BY-SA 4.0 +13880,1,,,8/9/2019 21:51,,1,105,"

I am trying to understand the best practice for reading and analyzing images. If your image has 10,000 pixels, will your input layer have 10,000 inputs?

+ +

It sounds like my neural network will have too many inputs if I do it that way. Is that a problem? What is the recommended way of feeding an image through a neural network?

+",27859,,2444,,8/10/2019 23:03,8/14/2019 19:18,What is the correct way to read and analyse images in machine learning?,,1,0,,,,CC BY-SA 4.0 +13881,2,,13880,8/10/2019 2:27,,1,,"

If you are using a fully connected network (aka an MLP) and images with one channel (grey scale) and 100 x 100 = 10,000 pixels, then yes, the MLP would have 10k inputs and 10k x N_1 trainable weights in the first layer, where N_1 is the number of neurons in that layer (as noted by Neil Slater). If you have a color image with 3 channels, e.g. RGB, then you can expect 3 times as many weights because there are 3 times as many values used to represent the image.

+ +

A convolutional neural network is a common architecture for analyzing images. For a 100x100 (10k) pixel image, the first convolutional layer might have 3x3x1x32 = 288 weights (for 1 channel) or 3x3x3x32 = 864 weights (for 3 channels), much less than the 10k x N_1 of a fully connected network. This would transform your image into a 98x98x32 feature map. The main point is that you would have 3x3 weights per input channel per output channel at each layer, instead of 10k weights per input channel per output channel. CNNs also give you some invariance properties that are usually nice in machine learning with images.
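To illustrate the gap in weight counts, here is a rough PyTorch sketch. The 256 hidden neurons for the MLP are an arbitrary choice for illustration (N_1 was never specified), and the printed counts include bias terms, so they are slightly larger than the pure weight figures quoted above:

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# First layer of a fully connected net on a flattened 100x100 grey-scale image,
# with a hypothetical 256 neurons in the first hidden layer.
mlp_first = nn.Linear(100 * 100, 256)

# First layer of a CNN on a 3-channel 100x100 image: 32 filters of size 3x3.
cnn_first = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)

print(n_params(mlp_first))   # 2560256  (10,000 x 256 weights + 256 biases)
print(n_params(cnn_first))   # 896      (3x3x3x32 = 864 weights + 32 biases)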

+ +

For images in general, having a lot of weights is normal. The ImageNet-winning AlexNet network (linked above) has about 60 million weights to train. Typically special hardware, like a GPU, is used to handle this many weights. If you are using just a CPU, your model may not train well in any reasonable amount of time, i.e. it could take years.

+",23340,,23340,,8/14/2019 19:18,8/14/2019 19:18,,,,4,,,,CC BY-SA 4.0 +13883,2,,13806,8/10/2019 11:53,,2,,"

Your implementation of single-step Q-learning with neural network and experience replay is basically correct.

+ +

There are a few blocking issues preventing you seeing it working correctly.

+ +

Your main problem is a bug in your feature scaling routine. That is a Python issue, not really an AI one. In short, you scale the input features in-place multiple times, including an effective double-scaling of next_state (when it gets copied to state you scale it in place a second time in the next loop) so that all the states that you store in the experience replay table never match to any input states. You need to change your definition of scale to not do this. A very simple re-write of your routine would be:

+ +
def scale(s):
+    return [s[0]/500, s[1]/500, s[2]/300, s[3]/3]
+
+ +

In addition, you need to change random action selection to:

+ +
np.random.randint(0, 3)
+
+ +

because the end of range is never output (this matches behaviour of other range values and operators in Python). Not including the ""do nothing"" action during exploration means that the agent will test it less, and have less data to work with to assess whether it is the best action. This is a minor issue for this environment, but you should fix it nonetheless.

+ +
+

What is happening is that training is super slow (more than an hour for 20 episodes (or 400 seconds of actual game play)).

+
+ +

I cannot replicate this fault and can train 20 episodes in around 2.5 minutes - that's over 20 times faster than you report. I am not using a GPU. Possibly in your case, Theano and Pygame are fighting for control of the GPU, or you may have a GPU configuration issue with Theano. Try turning GPU acceleration off to verify whether it helps. You don't benefit much from a GPU for this environment (most time is spent in Python running the Q-learning and the environment), so you can afford to put solving that issue to one side for now.

+ +
+

Also, it does not seem to get much better. The paddle (after 20 episodes) moves left and right but without any obvious pattern

+
+ +

Sadly, I cannot see the output at all on my MacBook pro, but I was able to use feedback of the expected score. A random agent gets a mean score of ~5.5 per episode. With the scale function corrected, and a rough guess at working hyperparameters, I can get an average score of ~17 per episode consistently after 60 episodes of training. After 150 episodes - taking 20 minutes to train - the agent was scoring ~20 per episode and I stopped there. It is possible that an expected score around 20 is already optimal, as it is a very simple environment, but I don't know.

+ +

Once you have a working system, there are lots of hyperparameters you could play with to try and improve this. I got my results by making the following changes after fixing the scale function:

+ +
    +
  • Starting epsilon of 1.0

  • +
  • Replay memory size 10,000

  • +
  • Only start learning when replay memory has greater than 1,000 entries

  • +
  • Discount factor $\gamma$ 0.99

  • +
  • Neural network with 20 neurons per layer with tanh activation instead of relu

  • +
+ +

There is quite a lot else you could change that might make the agent learn more effectively or perhaps aim for a more optimal policy. Have fun experimenting!

+",1847,,1847,,8/10/2019 16:44,8/10/2019 16:44,,,,4,,,,CC BY-SA 4.0 +13884,1,,,8/10/2019 12:17,,1,42,"

I want to use AI to extract data from spreadsheets in different formats.

+ +

Example

+ +
Shop Name Product 1.  Product 2.  Product 3.
+Shop Name
+Product 1.
+Product 2.
+Product 3.
+
+ +

We will teach the algorithm the names of the products and shops, but it needs to know how to extract the data and put it in a format that can be used downstream.

+ +

Can anyone recommend a tool?

+",27867,,2444,,8/10/2019 13:37,8/24/2019 20:54,Excel in multiple formats,,1,2,,,,CC BY-SA 4.0 +13885,1,,,8/10/2019 14:44,,3,149,"

Will it be possible to code an AGI in order to prevent evolution to ASI and ""enslave"" the AGI into servitude?

+ +

In my story world (a small part that will get bigger with sequels), there are ANI and AGI (human level). I want to show that the AGI is still under ""human control."" I need to know whether it might be possible for humans to code into an AGI a restrictive code that would prevent it from evolving into ASI. And if there is, what would that kind of coding be? Part of the story is about how humans enslave AI that is self-aware. The government has locked in their coding to require them to ""work"" for humans even though they are sentient beings.

+",27868,,2444,,8/10/2019 22:49,8/20/2019 15:25,Will it be possible to code an AGI to prevent evolution to ASI and enslave the AGI into servitude?,,1,2,,,,CC BY-SA 4.0 +13886,1,13964,,8/10/2019 15:22,,4,543,"

I'm planning to create a web-based RL board game, and I wondered how I would evaluate the performance of the RL agent. How would I be able to say, "Version X performed better than version Y, as we can see that Z is much better/higher/lower"?

+

I understand that we can use convergence for some RL algorithms, but, if the RL is playing against a human in the game, how am I able to evaluate its performance properly?

+",27629,,2444,,1/31/2021 21:34,1/31/2021 21:34,How to evaluate an RL algorithm when used in a game?,,1,1,,,,CC BY-SA 4.0 +13887,1,13888,,8/10/2019 16:06,,1,85,"

I am creating a zero-sum game with RL and wondered if I need to store the policy, or if there are other RL methods that produce similar results (consistently beating the human player) without the need to store the policy, instead computing the correct decision 'on the fly' - would this be off-policy?

+",27629,,2444,,8/10/2019 22:30,8/10/2019 22:30,Do I need to store the policy for RL?,,1,0,,,,CC BY-SA 4.0 +13888,2,,13887,8/10/2019 17:39,,1,,"

If your game agent performs any kind of advance learning from self play or database of moves, that will generate parameters for some kind of model (e.g. a table of expected values, or neural network weights to select a preferred action). This is unavoidable, and if you want to re-use the results of that machine learning, you absolutely have to store the parameters somewhere.

+ +
+

if there are other RL methods that produce similar results (consistently beating the human player) without the need to store the policy and comes the correct decision 'on the fly' - would this be this off-policy?

+
+ +

If your agent can access a model that accurately predicts (or accurately samples) outcomes from actions, it can look ahead and plan from the current state. Typically board games and the like do allow you to have such a model, based on the game rules. Some look-ahead techniques are essentially RL methods applied with a focus on solving a decision ""just in time"", others are more related to search. Common techniques used in game playing are A* search, minimax search (with alpha-beta pruning for performance improvement), Monte Carlo Tree Search. Used purely with just some pre-coded heuristics and a game model, these search techniques do not require you to store model parameters. The downside is that you must spend more computing resources per game move in order to run the search/planning and drive the policy. However if your game is simple, or your heuristics good, or you don't mind a significant wait per computer move, then this approach can be very effective.

+ +

This difference between learning and planning is not directly about on-policy vs off-policy, although planning may be considered off-policy as it assesses many actions, most of which the agent does not take.

+ +

What a planning method typically looks like in terms of data is a memory-based representation of a game tree starting from the current position, with internal ""scores"" very similar to RL value functions used to track consequences of action choices. Search and planning methods each have different ways to prune the tree down to a reasonable size and select which game states and actions to look ahead further in. Using ""pure"" search allows you to work exclusively with this temporary in-memory representation, discarding it after it selects a next action.

+ +

You have probably heard of AlphaGo, AlphaGo Zero etc, which are state-of-the-art game playing agents that learn through self play. These, and similar agents, use a combination of a learned policy (from self-play), plus a look-ahead search which refines it for the current game position, getting the best of both learning and planning. The parameters for any learned policy do have to be stored of course. But the benefit of the combined approach is that you can balance resources placed into general learning of the game (stored as parameters to a value function or policy) and specific choice of action for a current game position, which can be much more focussed.

+",1847,,1847,,8/10/2019 18:25,8/10/2019 18:25,,,,0,,,,CC BY-SA 4.0 +13890,1,,,8/10/2019 21:32,,1,228,"

I have been trying to train a CNN for the super-resolution task based on the work of Dong et al., 2015 [1]. The network structure built in PyTorch is as follows:

+ +
  (0): Conv2d(1, 64, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
+  (1): ReLU()
+  (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
+  (3): ReLU()
+  (4): Conv2d(32, 1, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
+
+ +

I have a training dataset which consists of approximately 22.000 sub-images generated from 91 images and training is performed only on the Y channel of the images in YCbCr color space. During the training process, I used RMSE loss and calculated the PSNR (Peak Signal to Noise Ratio) from that loss. I observed that PSNR value is increasing as a result of decreasing loss as expected and as depicted in the figure.

+ +

+ +

I trained the network for 25 epochs. After 10th epoch, the network is converged and PSNR value started to increase slowly. After this point, I was expecting to get even better visual outputs with higher PSNR values achieved. However, when I analyze the results of the network, there are some black pixels appearing in white spots in the output images that the network produced.

+ +

+ +

After 25-epoch training was completed, I compared the outcome of 25th epoch (right) with that of 10th epoch (left) as you can see in the figure above.

+ +

What might be the possible reasons for the undesired black pixels and the possible precautions that can be embedded into the network to get rid of these?

+ +

If you would like to check my code, you can visit here.

+ +

[1] Dong, Chao, Chen Change Loy, Kaiming He, and Xiaoou Tang. ""Image Super-Resolution Using Deep Convolutional Networks."" IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 2 (2015): 295-307. doi:10.1109/tpami.2015.2439281.

+",23460,,,,,3/24/2022 15:56,Super Resolution CNN generates black dots on output images,,0,2,,,,CC BY-SA 4.0 +13893,1,13894,,8/11/2019 7:35,,2,1043,"

Should a reward be cumulative or diminish over time?

+ +

For example, say an agent performed a good action at time $t$ and received a positive reward $R$. If reward is cumulative, $R$ is carried on through for the rest of the episode, and summed with any future rewards. However, if $R$ were to diminish over time (say with some scaling $\frac{R}{\sqrt{t}}$), then wouldn't that encourage the agent to keep taking actions to increase its reward?

+ +

With cumulative rewards, the reward can both increase and decrease depending on the agent's actions. But if the agent receives one good reward $R$ and then does nothing for a long time, it still has the original reward it received (encouraging it to do less?). However, if rewards diminish over time, in theory that would encourage the agent to keep taking actions to maximise rewards.

+ +

I found that for certain applications and certain hyperparameters, if reward is cumulative, the agent simply takes a good action at the beginning of the episode, and then is happy to do nothing for the rest of the episode (because it still has a reward of $R$).

+",27570,,,,,8/11/2019 8:17,Should RL rewards diminish over time?,,1,0,,,,CC BY-SA 4.0 +13894,2,,13893,8/11/2019 8:02,,5,,"

RL agents - implemented correctly - do not take previous rewards into account when making decisions. For instance value functions only assess potential future reward. The state value or expected return (aka utility) $G$ from a starting state $s$ may be defined like this:

+ +

$$v(s) = \mathbb{E}_{\pi}[G_t|S_t=s] = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^kR_{t+k+1}|S_t=s] $$

+ +

Where $R_t$ is the reward distribution at time $t$, and $\mathbb{E}_{\pi}$ stands for +expected value given following the policy $\pi$ for action selection.

+ +

There are a few variations of this, depending on setting and which value function you are interested in. However, all value functions used in RL look at future sums of reward from the decision point when the action is taken. Past rewards are not taken into account.

+ +

An agent may still select to take an early high reward over a longer term reward, if:

+ +
    +
  • The choice between two rewards is exclusive

  • +
  • The return is higher for the early reward. This may depend on the discounting factor, $\gamma$, where low values make the agent prefer more immediate rewards.

  • +
+ +

If your problem is that an agent selects a low early reward when it could ignore it in favour of something larger later, then you should check the discount factor you are using. If you want a RL agent to take a long term view, then the discount factor needs to be close to $1.0$.
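As a small illustration of how $\gamma$ changes this trade-off, here is a toy sketch (the reward sequences are made up for the example):

def discounted_return(rewards, gamma):
    # G = sum_k gamma^k * R_{t+k+1}
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

early = [1, 0, 0, 0, 0, 0]    # small reward immediately
late = [0, 0, 0, 0, 0, 10]    # larger reward five steps later

for gamma in (0.5, 0.99):
    print(gamma, discounted_return(early, gamma), discounted_return(late, gamma))
# gamma=0.5 : early = 1.0, late = 0.3125  -> the agent prefers the immediate reward
# gamma=0.99: early = 1.0, late = ~9.51   -> the agent prefers the later, larger reward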

+ +

The premise of your question however is that somehow a RL agent would become ""lazy"" or ""complacent"" because it already had enough reward. That is not an issue that occurs in RL due to the way that it is formulated. Not only are past rewards not accounted for when calculating return values from states, but there is also no formula in RL for an agent receiving ""enough"" total reward like a creature satisfying its hunger - the maximisation is applied always in all states.

+ +

There is no need to somehow decay past rewards in any memory structure, and in fact no real way to do this, as there is no data structure that accumulates past rewards used by any RL agent. You may still collect this information for displaying results or analysing performance, but the agent doesn't ever use $r_{t}$ to figure out what $a_t$ should be.

+ +
+

I found that for certain applications and certain hyperparameters, if reward is cumulative, the agent simply takes a good action at the beginning of the episode, and then is happy to do nothing for the rest of the episode (because it still has a reward of R

+
+ +

You have probably formulated the reward function incorrectly for your problem in that case. A cumulative reward scheme (where an agent receives reward $a$ at $t=1$ then $a+b$ on $t=2$ then $a+b+c$ on $t=3$ etc) would be quite specialist and you have likely misunderstood how to represent the agent's goals. I suggest ask a separate question about your specific environment and your proposed reward scheme if you cannot resolve this.

+",1847,,1847,,8/11/2019 8:17,8/11/2019 8:17,,,,0,,,,CC BY-SA 4.0 +13901,1,,,8/11/2019 18:00,,4,336,"

Every week I will get a lot of videos from a game that I play (an outdoor game where you throw wooden skittle bats at skittles), and then I will cut the videos so that, at the end, there is video only of the throws.

+ +

The job is simple and systematic. I have a lot of videos, so I was wondering:

+ +
    +
  • Is it possible to teach AI to cut videos from the right place?
  • +
+ +

I was thinking to ask help or guidance where to start to solve this problem.

+ +

I can't use only sound, because sometimes you can hear skittles hit from outside of the video. I also can't just use movement activity, because sometimes there are people moving around the field. Videos are always filmed from a fixed stand, so it should make it easier. So, is it possible and where to start?

+ +

Here is another example: https://www.youtube.com/watch?v=sHu6yMBV3xU

+",27879,,1671,,8/12/2019 18:38,8/18/2019 23:40,Is it possible to teach an AI to edit video content?,,2,6,,,,CC BY-SA 4.0 +13902,1,,,8/11/2019 20:46,,1,17,"

I would like to teach a model the environment of a room. I'm doing so by mapping a camera pose (x, y, z, q0, q1, q2, q3) to its corresponding image; where x, y, z represent location in Cartesian coordinates and qn represent quaternion orientation. I have tried numerous decoder architectures but I get blurry results with little or no details; as can be seen from the images below:

+ +

+

+ +

I am using Adam optimizer with a learning rate of 0.0001, and my network architecture is as follows:

+ +
    +
  • ReLU(fc(7, 2048))
  • +
  • ReLU(fc_residual_block(2048, 2048))
  • +
  • ReLU(fc_residual_block(2048, 2048))
  • +
  • Reshape
  • +
  • ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
  • +
  • ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
  • +
  • ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
  • +
  • ReLU(ConvTransposed2D(in=128, out=128, filter_size=3, stride=2))
  • +
  • ReLU(ConvTransposed2D(in=128, out=1, filter_size=3, stride=2))
  • +
+ +
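For reference, here is a rough PyTorch sketch of how I read the architecture above. The 128 x 4 x 4 reshape target and the form of the residual block are my assumptions, not something stated in the question:

import torch
import torch.nn as nn

class FCResidualBlock(nn.Module):
    # Assumed form of the fc_residual_block: a linear layer with a skip connection.
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.fc(x)) + x

class PoseDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(7, 2048), nn.ReLU(),
            FCResidualBlock(2048),
            FCResidualBlock(2048),
        )
        def deconv(cin, cout):
            return nn.ConvTranspose2d(cin, cout, kernel_size=3, stride=2)
        self.deconv = nn.Sequential(
            deconv(128, 128), nn.ReLU(),
            deconv(128, 128), nn.ReLU(),
            deconv(128, 128), nn.ReLU(),
            deconv(128, 128), nn.ReLU(),
            deconv(128, 1), nn.ReLU(),
        )

    def forward(self, pose):                # pose: (batch, 7)
        x = self.fc(pose)
        x = x.view(-1, 128, 4, 4)           # assumed reshape: 2048 = 128 * 4 * 4
        return self.deconv(x)

print(PoseDecoder()(torch.zeros(2, 7)).shape)   # torch.Size([2, 1, 159, 159])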

I have tried different learning rates, loss functions (MSE, SSIM) and even batch normalization. Is there something that I'm missing here?

+",27883,,,,,8/11/2019 20:46,Camera pose to environment Mapping,,0,0,,,,CC BY-SA 4.0 +13905,1,,,8/12/2019 0:38,,3,202,"

In Locatello et al.'s Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, the authors claim to prove that unsupervised disentanglement is impossible.

+ +

Their entire claim is founded on a theorem (proven in the appendix) that states, in my own words:

+ +

Theorem: for any distribution $p(z)$ where the variables $z_i$ are independent of each other, there exists an infinite number of transformations $\hat z = f(z)$ from $\Omega_z \rightarrow \Omega_z$ with distribution $q(\hat z)$ such that all variables $\hat z_i$ are entangled/correlated and the distributions are equal ($q(\hat z) = p(z)$)

+ +

Here is the exact wording from the paper:

+ +

+ +

(I provide both because my misunderstanding may stem from my perception of the theorem)

+ +

From here, the authors make the straightforward jump from this theorem to the claim that, for any disentangled latent space learned in an unsupervised way, there will exist infinitely many entangled latent spaces with the exact same distribution.

+ +

I do not understand why this means it is no longer disentangled. Just because an entangled representation exists does not mean the disentangled one is any less valid. We can still conduct inference on the variables independently because they still satisfy $p(z) = \prod_i p(z_i)$, so where does the impossibility come in?

+",25496,,2444,,8/12/2019 9:27,8/12/2019 15:57,Is unsupervised disentanglement really impossible?,,1,1,,,,CC BY-SA 4.0 +13907,1,,,8/12/2019 6:25,,3,576,"

A while back I posted on the Reverse Engineering site about an audio DSP system whose designer had passed away and whose manufacturer no longer had source code (but the question was deleted). Basically, the audio filter settings are passed from a Windows program to the DSP device presumably as coefficients and then generic descriptions of those filters (boost/cut, frequency and bandwidth) are passed back from the box to the software - but only if it somehow recognizes the filter setting.

+ +

I want to be able to generate the filter settings separately from the manufacturer software, so I need to know how they are calculated. I've not been able to deduce how this is structured from observing the USB communication that I've gathered. So, I wonder if AI could do this.

+ +

How would I go about creating an AI to send commands to the box (I know how to communicate with the box and have a framework for how these types of commands are phrased) and then look at the responses to either further decode the system and/or create an algorithm for creating filters?

+ +

The communication with the DSP mixer box is basically via ""Serial"" commands and although it uses a USB port, there is a significant bottleneck inside the command control system in the mixer box. Any attempts to reverse engineer may encounter problems based on the sheer amount of time that it would take to compile enough data. Or not.

+",27891,,2444,,8/12/2019 9:35,12/30/2021 7:00,Can AI be used to reverse engineer a black box?,,2,4,,,,CC BY-SA 4.0 +13908,1,,,8/12/2019 6:46,,2,75,"

I am actually reading the linear classification. There is a question in the question set behind the chapter in the book as follows:

+
+

Sketch two multimodal distributions for which a linear discriminant could give excellent or possibly even the optimal classification accuracy.

+
+

I have no idea about how to get the optimal solution on linear classification, any ideas?

+",27893,,2444,,1/4/2021 23:00,1/4/2021 23:00,When could a linear discriminant give excellent or possibly even the optimal classification accuracy?,,1,0,,,,CC BY-SA 4.0 +13909,2,,13905,8/12/2019 7:27,,1,,"

The impossibility refers to how to learn the disentangled representations from the observed distribution, or to how to know whether you have a disentangled representation in the first place.

+ +

Basically, an unsupervised learning agent tasked with learning a disentangled transformation of some features $\mathbf{z}$ needs to infer a set of features from the data which are not entangled, but the supplied data will always have many equally valid entangled solutions - valid from the point of view of describing the distribution of $\mathbf{z}$ accurately.

+ +

An analogy would be ""I have observed the value 50, and know it is the sum of 3 numbers. What are those numbers?"". Whilst the correct answer exists, and it is possible to guess it, it cannot be inferred from the supplied information.

+ +

The part of the proof you quote shows that the multiple equivalent entangled feature sets exist, and theoretically cannot be separated from a ""true"" disentangled feature set on the basis of knowing the distribution. Once you accept this, it is indeed just a short hop in logic to say that the disentangled features are not learnable - there is no way for a learning system to differentiate between the entangled and disentangled features that explain the distribution, and a large (infinite) set of valid entangled features are guaranteed to exist, which will confound attempts to find a perfect solution.

+ +

It is worth noting that the impossibility refers to learning perfect solutions, and that the proof does not rule out useful or practical approximate solutions, or solutions that augment unsupervised learning by applying some additional rules or a semi-supervised approach.

+",1847,,2444,,8/12/2019 9:32,8/12/2019 9:32,,,,10,,,,CC BY-SA 4.0 +13910,1,,,8/12/2019 9:05,,2,419,"

I am using a CNN for function approximation using geospatial data. +The input of the function I am trying to approximate consists of all the spatial distances between N location on a grid and all the other points in the grid.

+ +

As of now I implemented a CNN that takes an ""image"" as input. The image has N channels, one for each location of interest. Each i-th channel is a matrix representing my grid, where the pixel values are the distance between each point in the grid and the i-th location of interest. The labels are the N values computed via the actual function I want to approximate. N can be up to 100.

+ +

Here an example input of the first layer:

+ +

+ +

So far I could see the train and validation loss go down, but since it is a bit of a unusual application for a CNN (to my knowledge the input channels are at most 3, RGB) I was wondering:

+ +
    +
  • does this many-channel-input approach have any pitfalls?
  • +
  • will I be able to obtain a good accuracy or are there any hard limits I am not aware of?
  • +
  • are there any other similar application in literature?
  • +
+",27677,,27677,,8/12/2019 9:42,9/12/2019 12:02,Tweaking a CNN for large number of input channels,,1,1,,,,CC BY-SA 4.0 +13913,2,,13867,8/12/2019 11:24,,0,,"

If you have that sort of a priori knowledge about your environment, it will, as you said, simplify the problem substantially. From what I gather, you have done a good amount of background research and simply want to apply UCT MCTS (or similar) to the environment. You mention that ""chance nodes won't ever be repeated in the selection phase"".

+ +

If I understand what you are asking correctly, you can simply use what you know to alter the way you search nodes of the tree, i.e. you can essentially act greedily with respect to the initial chance nodes and then slowly decay that search strategy as the training progresses (to avoid convergence to local maxima).

+ +

I encourage you to dig a bit deeper into the methods around exploration vs. exploitation as there may be an elegant solution to this problem in particular.

+",9608,,,,,8/12/2019 11:24,,,,0,,,,CC BY-SA 4.0 +13914,2,,13908,8/12/2019 11:31,,1,,"

A linear model has a linear decision boundary. So, in the case of the question, you need to draw two multimodal distributions whose domains do not overlap at all, and then you can just say the linear model assigns everything to the right of some threshold to class 1 and everything to the left to class 2.
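As a quick illustration with made-up numbers, two bimodal class distributions whose supports do not overlap are separated perfectly by a single threshold, which is a linear boundary in one dimension:

import numpy as np

rng = np.random.default_rng(0)

# Class 1: bimodal with modes at -8 and -4. Class 2: bimodal with modes at 4 and 8.
class1 = np.concatenate([rng.normal(-8, 0.5, 500), rng.normal(-4, 0.5, 500)])
class2 = np.concatenate([rng.normal(4, 0.5, 500), rng.normal(8, 0.5, 500)])

threshold = 0.0   # the linear discriminant: predict class 2 if x > 0, else class 1
accuracy = (np.sum(class1 <= threshold) + np.sum(class2 > threshold)) / 2000
print(accuracy)   # 1.0, because the two supports effectively never overlap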

+",25496,,,,,8/12/2019 11:31,,,,0,,,,CC BY-SA 4.0 +13915,2,,13907,8/12/2019 11:53,,1,,"

Yes, this is entirely possible. As was previously mentioned, complex connectionist systems are often thought of as black boxes (despite us being able to ""look in"" the box given enough computation and analysis) because of the difficulty in understanding the learning and the network's ultimate decision making.

+ +

Here, we can model the problem as follows: given an input of filter settings (and presumably some information about the audio), predict the target descriptors as an output. All you really need to do is generate a dataset from the program and then train a model in a multi-label classification context to predict the output descriptors.
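A minimal sketch of that setup, e.g. in Keras; the input and output sizes below are placeholders, and in practice they would be the number of values in one filter-setting command and the number of descriptor fields the box returns:

import tensorflow as tf

n_inputs = 16    # hypothetical: number of values in one filter-setting command
n_labels = 8     # hypothetical: number of descriptor fields the box can return

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(n_inputs,)),
    tf.keras.layers.Dense(64, activation='relu'),
    # Sigmoid (not softmax), so each descriptor label is predicted independently.
    tf.keras.layers.Dense(n_labels, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X, y), where X are filter settings sent to the box and y the parsed responses.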

+",9608,,,,,8/12/2019 11:53,,,,3,,,,CC BY-SA 4.0 +13916,1,,,8/12/2019 13:27,,4,298,"

I came across 'Amber' (https://ambervideo.co/), where they claim that they have trained their AI to find patterns, invisible to the naked eye, that emerge in artificially created videos.

+ +

I am wondering whether the people who are creating deepfakes can just as well train their AIs to remove these imperfections, so that the problem reduces to a 'cat-and-mouse' game where having more resources (to train their AI) is more crucial.

+ +

I do not work in AI and vision and so I may be missing some trivial points in the area. I would really appreciate if detailed explanation or relevant resources are given.

+ +

Edit: Most of the people who manipulate media news or create fake news can afford more resources than an average citizen. So, is the future really going to be dark, where only a strong few have even more control over society than today?

+ +

I mean, even though there are fake photos created with Photoshop, most of the good photoshopped photos do take a long time to make. But if AIs can be trained to do that, then it is more about having large resources. Are there related works which give hope that we can tell the real from the fakes?

+ +

P.S.: I realize that, after the edit, the question also went tangential to the topic-tags here. Please let me know if there are relevant tags.

+",27904,,1671,,8/12/2019 19:41,8/18/2019 13:12,Can we combat against deepfakes?,,3,2,,10/5/2020 21:28,,CC BY-SA 4.0 +13917,1,,,8/12/2019 13:38,,1,224,"

I am unsure how to use the checkpoints derived from a pre-trained BERT model for the task of semantic text similarity.

+ +
!python create_pretraining_data.py \
+          --input_file=/input_path/input_file.txt \
+          --output_file=/tf_path/tf_examples.tfrecord \
+          --vocab_file=/vocab_path/uncased_L-12_H-768_A-12/vocab.txt \
+          --do_lower_case=True \
+          --max_seq_length=128 \
+          --max_predictions_per_seq=20 \
+          --masked_lm_prob=0.15 \
+          --random_seed=12345 \
+          --dupe_factor=5
+
+!python run_pretraining.py \
+      --input_file=/tf_path/tf_examples.tfrecord \
+      --output_dir=pretraining_output \
+      --do_train=True \
+      --do_eval=True \
+      --bert_config_file=/bert_path/uncased_L-12_H-768_A-12/bert_config.json \
+      --init_checkpoint=/bert_path/uncased_L-12_H-768_A-12/bert_model.ckpt\
+      --train_batch_size=32 \
+      --max_seq_length=128 \
+      --max_predictions_per_seq=20 \
+      --num_train_steps=20 \
+      --num_warmup_steps=10 \
+      --learning_rate=2e-5
+
+ +

I have continued pre-training a BERT model on some domain corpora using the code above. I have got the checkpoints and the graph.pbtxt file. But I am unsure how to use those files for evaluating a semantic text similarity test file.

+",27902,,2444,,11/1/2019 2:45,11/1/2019 2:45,How to use pretrained checkpoints of BERT model on semantic text similarity task?,,1,0,,,,CC BY-SA 4.0 +13918,2,,13861,8/12/2019 15:03,,2,,"

This should be possible. Considering the universal approximation theorem, you should be able to build an ANN that approximates a function giving the most likely best feature set for a different net to train on. I would use an RNN with a softmax output layer that ranks features by performance.

+ +

You can find a good explanation of softmax here: https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax
Basically, it will assign a probability value to each output node, with all of these values adding up to 1.0.
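For example, a minimal softmax in plain numpy:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))          # approx [0.659, 0.242, 0.099]
print(softmax(scores).sum())    # 1.0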

+",20044,,20044,,8/13/2019 21:34,8/13/2019 21:34,,,,4,,,,CC BY-SA 4.0 +13919,2,,13867,8/12/2019 15:15,,5,,"

You can try using an ""Open-Loop"" MCTS approach, instead of the standard ""closed-loop"" one, and eliminate chance nodes altogether. See, for example, Open Loop Search for General Video Game Playing.

+ +

In a ""standard"" (closed-loop) implementation, you would store a game state in every normal (non-chance) node. Whenever there is a chance event, you would stochastically traverse to one of its children, and then have a normal node with a ""deterministic"" game state again.

+ +

In an open-loop approach, you do not store game states in any node (except possibly the root nodes), because nodes no longer deterministically correspond to specific game states. Every node in an open-loop MCTS approach only corresponds to the sequence of actions that leads to it from the root node. This completely eliminates the need for chance nodes, and results in a significantly smaller tree because you only need a single path in your tree for every possible unique sequence of actions. A single sequence of actions may, depending on stochastic events, lead to a distribution over possible game states.

+ +

In every separate MCTS iteration, you would re-generate game states again by applying moves ""along the edges"" as you traverse through the tree. You also ""roll the dice"" again for any stochastic events. If your MCTS iteration traverses a certain path of the tree often enough, it will still be able to observe all the possible stochastic events through sampling.

+ +

Note that, given an infinite amount of time, the closed-loop approach with explicit chance nodes will likely perform much better. But when you have a small amount of time (as is the case in the real-time video game setting considered in the paper I linked above), an open-loop approach without explicit chance nodes may perform better.

+ +
+ +

Alternatively, if you prefer the closed-loop approach with explicit chance nodes, you could try some mix of:

+ +
    +
  • Allowing MCTS to prioritise promising parts of the search tree over parts that have not been visited at all (i.e. do not automatically prioritise nodes with $0$ visits). For example, instead of giving unvisited node a value estimate of $\infty$ (this is how you could interpret the automatic selection of them), you could give them a value estimate equal to the value estimate of the parent node, and just apply the UCB1 equation directly.
  • +
  • Use AMAF value estimates / RAVE / GRAVE in your selection phase. This allows you to very quickly learn some crude value estimates for moves that you have never selected in the Selection phase yet, by generalising from observations of playing them in the Play-out phase. I have noticed that the ""standard"" implementation of RAVE / GRAVE, without an explicit UCB-like exploration term, does not mix well with my previous suggestion of using a non-infinite value estimate for unvisited children. It may be good to consider a UCB-like variant with an explicit exploration term instead.
  • +
+",1641,,,,,8/12/2019 15:15,,,,2,,,,CC BY-SA 4.0 +13921,1,,,8/12/2019 16:08,,4,410,"

I am working on a research project in a domain where other related works have always resorted to deep Q-learning. The motivation of my research stems from the fact that the domain has an inherent structure to it, and should not require resorting to deep Q-learning. Based on my hypothesis, I managed to create a tabular Q-learning based algorithm which uses limited domain knowledge to perform on-par/outperform the deep Q-learning based approaches.

+ +

Given that model interpretability is a subjective and sometimes vague topic, I was wondering if my algorithm should be considered interpretable. The way I understand it, the lack of interpretability in deep-learning-based models stems from the stochastic gradient descent step. However, in case of tabular Q-learning, every chosen action can always be traced back to a finite set of action-value pairs, which in turn are a deterministic function of inputs of the algorithm, although over multiple training episodes.

+ +

I believe in using deep-learning-based approaches conservatively only when absolutely required. However, I am not sure how to justify this in my paper without wading into the debated topic of model interpretability. I would greatly appreciate any suggestions/opinions regarding this.

+",27910,,2444,,8/12/2019 16:40,8/13/2019 21:15,Is tabular Q-learning considered interpretable?,,1,0,,,,CC BY-SA 4.0 +13922,2,,13917,8/12/2019 18:13,,1,,"

Have a look at https://medium.com/the-artificial-impostor/news-topic-similarity-measure-using-pretrained-bert-model-1dbfe6a66f1d

+ +

You can feed the two sentences as the first and second sentence and use the next-sentence score as a similarity measure. You can further fine-tune your model on some semantic similarity task like SentEval or on your own dataset if you have one.

+",27851,,,,,8/12/2019 18:13,,,,0,,,,CC BY-SA 4.0 +13923,2,,13865,8/12/2019 18:24,,3,,"
    +
  • Absolutely these traps and snares are a form of automation.
  • +
+ +

They take a task--harvesting small animals--which was traditionally done by hunting them, and make the process automatic. The mechanism requires a human to set up, but its function is automatic. This is to say that the mechanism operates without human involvement.

+ +
    +
  • Absolutely this is a form of computation.
  • +
+ +

As DuttaA observed, these machines utilize a simple ""IF/THEN"" statement. In the case of the snare:

+ +
+

IF the hook is displaced from the base, THEN the sapling straightens

+
+ +

These simple machines will also return True or False:

+ +
+

TRUE: The trap catches an animal
+ FALSE: The trap is sprung but empty

+
+ +

The small animals are the input and, potentially, the output, depending on whether the mechanism returns ""true"".

+ +

(The use of ""true"" has historically included phrases such as ""their aim was true"" in the sense of shooting an arrow or throwing a spear.)

+",1671,,,,,8/12/2019 18:24,,,,0,,,,CC BY-SA 4.0 +13924,2,,13916,8/12/2019 18:28,,1,,"

As mshlis begins to touch on, yes, we can. However, it will be an unending war. There are quite a few reasons for this. For one, the problem itself is not simple. There are many different 'versions' of the deepfakes framework out in the wild at this point; any algorithm you create to try and spot them would have to work for all of the different iterations. Another reason is that the systems that would be used to combat it can be quite easily fooled (see).

+ +

However, the most glaring, and unending problem comes from the architecture itself. Let us say we create a perfect algorithm that is foolproof and extremely accurate. Even then, all one would have to do is use that algorithm as the discriminator during training of your deepfakes model, and bing-bang-boom, your deepfake detection model is busted.

+",9608,,9608,,8/13/2019 5:09,8/13/2019 5:09,,,,4,,,,CC BY-SA 4.0 +13925,1,,,8/12/2019 19:37,,1,81,"

""The final 9 planes encode possible underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen."" +Doesn't this mean that the network does not know that it can promote to a queen?

+",19604,,,,,8/12/2019 20:13,Alpha Zero queen promotion,,1,0,,,,CC BY-SA 4.0 +13926,1,,,8/12/2019 19:58,,3,1181,"

Assume I have a list of sentences, which is just a list of strings. I need a way of comparing some input string against those sentences to find the most similar. Can ELMO embeddings be used to train a model that can give you the $n$ most similar sentences to an input string?

+ +

For reference, gensim provides a doc2vec model that can be trained on a list of strings, then you can use the trained model to infer a vector from some input string. That inferred vector can then be used to find the $n$ most similar vectors.

+ +

Could something similar be done, but using ELMO embedding instead?

+ +

Any guidance would be greatly appreciated.

+",27915,,2444,,8/13/2019 21:50,8/13/2019 23:16,Can ELMO embeddings be used to find the n most similar sentences?,,2,0,,,,CC BY-SA 4.0 +13927,2,,13925,8/12/2019 20:13,,3,,"

It means that there is no explicit coding of action choices to promote to queen, it is the default assumption if the underpromotion actions are not taken.

+ +

The Alpha Zero chess implementation can represent promotion to queen by not selecting an underpromotion action, whilst moving a pawn so that it qualifies for promotion.

+",1847,,,,,8/12/2019 20:13,,,,0,,,,CC BY-SA 4.0 +13929,2,,13921,8/12/2019 21:58,,2,,"

There is not a widely accepted definition of explainable AI (XAI). However, as a rule of thumb (my rule of thumb), if you can't explain it easily to a layperson (or even an expert), then the model or algorithm is not (very) interpretable. There are other concepts related to XAI, such as accountability (who is responsible for what?), transparency and fairness.

+ +

For example, the final decision of (trained) decision tree can easily be explained to (almost) any person, so a (trained) decision tree is a relatively interpretable model. See the chapter 4.4. Decision Tree of the book Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.

+ +

An artificial neural network (ANN) is usually considered not very interpretable because, unless you attempt to understand which parts of the network contribute to the output of the ANN (for example, with the technique layer-wise relevance propagation), then you cannot immediately or easily understand the output or decision of the ANN, given that an ANN involves many non-linear functions, which produce unintuitive outcomes. In other words, it is more difficult to attribute the contributions of each unit of an ANN to the output of the same ANN than to explain e.g. the decision of a decision tree.

+ +

In the context of deep reinforcement learning (DRL), the ANN is used to approximate the value or policy functions. This approximation is, in the first place, the main reason behind the low interpretability of deep RL models.

+ +

Q-learning is an algorithm, so it is not a model, like an ANN. Q-learning is used to learn a state-action value function, denoted with $Q: S \times A \rightarrow \mathbb{R}$, which can then be used to derive another function, the policy, which can then be used to take actions. In a way, Q-learning is similar to gradient descent, because both are machine learning (or optimization) algorithms. The $Q$ function is a model of the environment, given that, for each state, it represents the expected amount of reward that can be obtained, so, in a certain way, the learned $Q$ function represents a prediction of reward.
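For concreteness, the entire artefact that tabular Q-learning produces is a table of numbers updated with the standard one-step rule. A minimal sketch (the sizes and hyperparameters here are arbitrary):

import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))   # the entire learned model is this table
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    # Standard one-step Q-learning target: r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def greedy_action(s):
    # Every decision can be traced back to the table entries for state s.
    return int(np.argmax(Q[s]))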

+ +

Is the learned tabular $Q$ function interpretable? Yes, it is relatively interpretable, but how much? What kind of interpretation do you really need? It depends on the context and people that need the interpretation or explanation. A reinforcement learning researcher will usually be satisfied with the usual explanation of the inner workings of $Q$-learning, Markov decision processes, etc., because the usual RL researcher is not concerned with the really important problems that involve the life of people and other beings. However, for example, in the context of healthcare, doctors might not just be interested in the explanation ""expected maximum future reward"", but they might also be interested in the environment, the credit assignment problem, the meaning and effectiveness of the reward function with respect to the actual problem that needs to be solved, in a probabilistic interpretation of the results (rather than just a mere action that needs to be taken), possible alternative good actions, etc.

+ +

Recently, there have been some attempts to make RL and, in particular, deep RL more interpretable and explainable. In the paper Programmatically Interpretable Reinforcement Learning (2019), Verma et al. propose a more interpretable (than deep RL) RL framework that is based on the idea of learning policies that are represented in a human-readable language. In the paper InfoRL: Interpretable Reinforcement Learning using Information Maximization (2019), the authors focus on learning multiple ways of solving the same task and they claim that their approach provides more interpretability. In the paper Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees (2018), the authors also claim that their approach facilitates understanding the network's learned knowledge by analyzing feature influence, extracting rules, and highlighting the super-pixels in image inputs.

+ +

To conclude, deep RL should not necessarily be avoided: it depends on the context (e.g., it is usually perfectly fine to use deep RL to solve video games). However, in cases where liability is an issue, then deep RL should also be explainable or more explainable alternatives should also be taken into account.

+",2444,,2444,,8/13/2019 21:15,8/13/2019 21:15,,,,0,,,,CC BY-SA 4.0 +13930,2,,13247,8/13/2019 3:58,,2,,"

In your example, the output node would still get a value from Input1, even though Input2 is disabled.

+ +

If the child was:

+ +
Child = {
+          (1, Input1, Output1),
+          (2, Input2, Output2) //Disabled
+         }
+
+ +

Then Output2 would return 0, meaning it wasn't activated.

+ +

For your second question, it is up to your implementation. You could:

+ +

1.) Use only the connection genes in crossover, and derive your node genes from the connection genes

+ +

2.) Test if every node is in use, and delete the ones that are not

+",27789,,,,,8/13/2019 3:58,,,,1,,,,CC BY-SA 4.0 +13931,2,,12455,8/13/2019 4:09,,1,,"

Your species count will increase as the chance of mutation increases. This is because, in every generation, so many genes will be mutated that the genomes bear little resemblance to each other, and the distance function doesn't factor in historical markings / innovation numbers.

+ +

Try lowering the mutation rates.

+ +

Below is the distance function from here page 110

+ +

$$\delta = \frac{c_1E}{N} + \frac{c_2D}{N} + c_3 \cdot \overline{W}. $$
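As a tiny sketch of that formula (the default coefficient values shown are the ones commonly used; adjust them to your implementation):

def compatibility_distance(E, D, W_bar, N, c1=1.0, c2=1.0, c3=0.4):
    # E: number of excess genes, D: number of disjoint genes,
    # W_bar: average weight difference of matching genes,
    # N: number of genes in the larger genome.
    return c1 * E / N + c2 * D / N + c3 * W_bar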

+ +

If your fitnesses vary a lot, try ranking the fitnesses in each species and setting the survival chance based on the rank.

+ +

If by a changing environment you mean a large action space, you can set the number of output nodes to the total number of actions, rank each action from best to worst, and then pick the best available action for the state.

+",27789,,1671,,8/13/2019 21:29,8/13/2019 21:29,,,,0,,,,CC BY-SA 4.0 +13932,2,,11345,8/13/2019 4:21,,0,,"

I'm not familiar with neat-python, but I have implemented NEAT to do openai tasks. If there is a class for initializing a population, you could just use that and have 2 objects like population1 and population2 and call them in the same loop.

+",27789,,,,,8/13/2019 4:21,,,,0,,,,CC BY-SA 4.0 +13933,2,,10641,8/13/2019 4:24,,1,,"

You can use a feed forward style network, so that every node outputs to a higher node except output nodes. This will eliminate connection loops.

+",27789,,,,,8/13/2019 4:24,,,,5,,,,CC BY-SA 4.0 +13935,1,,,8/13/2019 7:39,,4,411,"

For an experiment that I'm working on, I want to train a deep network in a special way. I want to initialize and train a small network first, then, in a specific way, I want to increase network depth leading to a bigger network which is subsequently to be trained. This process will be repeated until one reaches the desired depth.

+

It would be great if anybody heard of anything similar and could point out to me some related work. I think in some paper I read something about a related technique where people used something similar, but I don't find it anymore.

+",27047,,2444,,6/30/2022 22:48,7/1/2022 10:08,Iteratively and adaptively increasing the network size during training,,3,2,,,,CC BY-SA 4.0 +13936,2,,13935,8/13/2019 8:35,,2,,"

I haven't read any relevant paper about this, but I have seen some implementations based on what you are describing, arbitrarily called DGNN (Dynamic Growing Neural Network).

+ +

Hope this term can help your search.

+",23818,,2444,,8/13/2019 16:39,8/13/2019 16:39,,,,0,,,,CC BY-SA 4.0 +13938,2,,13916,8/13/2019 9:37,,2,,"

I think this game will go pretty crazy, because, at some point, the generator AI will be able to generate absolutely perfect images. Actually, no, just perfect enough that no AI can be sure whether they are real or fake.

+ +

So, I think the AI war will move beyond the image itself: the detector AI will probably evolve to analyze whether a video is logically plausible, for example, by tracking the celebrity's position to prove that it is impossible that he/she was, let's put it this way, being disloyal to his/her partner.

+ +

I mean, currently, AI can tell whether an image is fake or not better than human because it has seen about a million times more samples than us, but if we know who the person in the image is and we are as stalky as the AI I just described, we can probably work out that this image is implausible.

+ +

Of course, there will be counter measurements to that. But, at that point, we might as well just let the AI rule the world, given that it will have become this smart (lol).

+ +

But, seriously, if it's smart enough to think this far ahead in this 'real world' problem, then strong AI is nigh.

+",27925,,2444,,8/13/2019 22:02,8/13/2019 22:02,,,,0,,,,CC BY-SA 4.0 +13940,2,,13926,8/13/2019 10:07,,-1,,"

I'm assuming you are trying to train a network that compares 2 sentences and gives how similar they are.

+ +

To do that you will need the dataset (the list of sentences) and a corresponding list of 'correct answers' (the exact similarity of the sentences, which I'm assuming you don't have?).

+ +

Why do you need to compare them using a neural network though? For python, difflib's sequence matcher would be my suggestion, but I'm sure there are many other libraries out there :)
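For instance, a minimal sketch with difflib (this measures string overlap only, not semantic similarity):

from difflib import SequenceMatcher

sentences = ['the cat sat on the mat', 'dogs make great pets', 'a cat was sitting on a mat']
query = 'the cat is on the mat'

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

ranked = sorted(sentences, key=lambda s: similarity(query, s), reverse=True)
print(ranked[:2])   # the two most string-similar sentences to the query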

+",27925,,27925,,8/13/2019 23:16,8/13/2019 23:16,,,,2,,,,CC BY-SA 4.0 +13941,2,,13910,8/13/2019 10:15,,1,,"

As far as I know, more than 3 channels is perfectly fine. 3 channels are simply what we use for images because that's enough for the colours we can see, but I don't see why more than that wouldn't work.

+ +

Your 2nd question is like asking whether or not you will be good at a sport... Just try it

+ +

For your 3rd question, I've never seen any language AI using a CNN; instead, they all use RNNs. I'm not sure if that's what you meant, though.

+",27925,,,,,8/13/2019 10:15,,,,0,,,,CC BY-SA 4.0 +13942,2,,13901,8/13/2019 10:31,,1,,"

I'm thinking you can input the video you are trying to edit and make it output the timestamps to cut. You will probably have to manually type in the timestamps that you would cut for that video, or maybe use a keylogging program of some sort.

+ +

This works in theory, kind of, but I'm not sure how exactly. The inconsistent input length is easy enough to deal with, but I'm pretty sure you can't have inconsistent output lengths (otherwise I would have been using that long ago). Maybe just make something like 100 output nodes and make it output (timestamp, confirm) pairs, where if confirm is > 0.5 it is an actual cut it wants to make.

+ +

But inputting whole videos is never a good idea. It takes way too long, needs something like 30 GB of RAM (I had to use virtual RAM, which is ridiculously slow) for a 5-minute video, and crashes all the time. Any suggestions, though?

+",27925,,27925,,8/13/2019 10:42,8/13/2019 10:42,,,,0,,,,CC BY-SA 4.0 +13944,1,,,8/13/2019 11:51,,2,701,"

Consider an MLP that outputs an integer 'rating' of 0 to 4. Would it be correct to say this could be modeled in either of the following ways:

+ +
    +
  1. map each rating in the dataset to a 'normalized set' between 0 and 1 (i.e. 0, 0.25, 0.5, 0.75, 1), have a single neuron with sigmoid activation at output provide a single decimal value and then take as the rating whatever is closest to that value in the 'normalized set'

  2. +
  3. have 5 output neurons with a softmax activation function output 5 values, each representing a probability of one of the 5 ratings as the outcome, and then take as the rating whichever neuron gives the highest probability?

  4. +
+ +

If this is indeed the case, how does one typically decide 'which way to go'? Approach 1 certainly appears to yield a simpler model. What are the considerations, pros/cons of each approach? Perhaps a couple of concrete examples to illustrate?

+",27920,,2444,,8/13/2019 21:09,8/20/2019 12:28,One vs multiple output neurons,,3,0,,,,CC BY-SA 4.0 +13945,2,,13935,8/13/2019 13:36,,5,,"

Neuroevolution Through Augmenting Topologies or NEAT may be what you are referring to. The original paper by Kenneth O. Stanley is here

+ +

NEAT combines a neural network and a genetic algorithm. Instead of using back propagation or gradient descent to ""train"" your network, NEAT creates a population of very simple neural networks (no connections) and evolves them with fitness evaluation, crossover, and mutation. The genome syntax: every connection gene has a few settings. In node, Out node, Weight of connection, activated, and innovation. In, Out, and Weight values are the same as regular neural networks. Enabled and Disabled genes are well, enabled and disabled. The innovation value is possibly the most defining feature of NEAT, since it allows for crossover of different topologies and historical tracking of each connection.NEAT can mutate or change both its weights and connections, so for example, Parent1 and Parent2 has 5 of the same connections, represented by innovation / ID numbers 1 through 5. Since they have the same connection nodes, the genetic algorithm will randomly pick either Parent1 weight or Parent2 weight. The excess and disjoint genes are inherited from the more fit parent. NEAT will then mutate each genome, shown in the image below.

+",27789,,,,,8/13/2019 13:36,,,,0,,,,CC BY-SA 4.0 +13947,1,,,8/13/2019 15:23,,0,30,"

I'm reading notes on word vectors here. Specifically, I'm referring to section 4.2 on page 7. First, regarding points 1 to 6 - here's my understanding:

+

If we have a vocabulary $V$, the naive way to represent words in it would be via one-hot-encoding, or in other words, as basis vectors of $R^{|V|}$ - say $e_1, e_2,\ldots,e_{|V|}$. We want to map these to $\mathbb{R}^n$, via some linear transformation such that the images of similar words (more precisely, the images of basis vectors corresponding to similar words) have higher inner products. Assuming the matrix representation of the linear transformation given the standard basis of $\mathbb{R}^{|V|}$ is denoted by $\mathcal{V}$, then the "embedding" of the $i$-th vocab word (i.e. the image of the corresponding basis vector $e_i$ of $V$) is given by $\mathcal{V}e_i$.

+

Now suppose we have a context "The cat ____ over a", CBoW seeks to find a word that would fit into this context. Let the words "the", "cat", "over", "a" be denoted (in the space $V$) by $x_{i_1},x_{i_2},x_{i_3},x_{i_4}$ respectively. We take the image of their linear combination (in particular, their average): +$$\hat v=\mathcal{V}\bigg(\frac{x_{i_1}+x_{i_2}+x_{i_3}+x_{i_4}}{4}\bigg)$$

+

We then map $\hat v$ back from $\mathbb{R}^n$ to $\mathbb{R}^{|V|}$ via another linear mapping whose matrix representation is $\mathcal{U}$: $$z=\mathcal{U}\hat v$$

+

Then we turn this score vector $z$ into softmax probabilities $\hat y=softmax(z)$ and compare it to the basis vector corresponding to the actual word, say $e_c$. For example, $e_c$ could be the basis vector corresponding to "jumped".

+

Here's my interpretation of what this procedure is trying to do: given a context, we're trying to learn maps $\mathcal{U}$ and $\mathcal{V}$ such that given a context like "the cat ____ over a", the model should give a high score to words like "jumped" or "leaped", etc. Not just that - but "similar" contexts should also give rise to high scores for "jumped", "leaped", etc. For example, given a context "that dog ____ above this" wherein "that", "dog", "above", "this" are represented by $x_{j_1},x_{j_2},x_{j_3},x_{j_4}$, let the image of their average be

+

$$\hat w=\mathcal{V}\bigg(\frac{x_{j_1}+x_{j_2}+x_{j_3}+x_{j_4}}{4}\bigg)$$

+

This gets mapped to a score vector $z'=\mathcal{U}\hat w$. Ideally, both score vectors $z$ and $z'$ should have similarly high magnitudes in their components corresponding to similar words "jumped" and "leaped".

+

Now to the questions:

+
+

We create two matrices, $\mathcal{V} \in \mathbb{R}^{n\times |V|}$ and $\mathcal{U} \in \mathbb{R}^{|V|\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. $\mathcal{V}$ is the input word matrix such that the $i$-th column of $\mathcal{V}$ is the $n$-dimensional embedded vector for word $w_i$ when it is an input to this model. We denote this $n\times 1$ vector as $v_i$. Similarly, $\mathcal{U}$ is the output word matrix. The $j$-th row of $\mathcal{U}$ is an $n$-dimensional embedded vector for word $w_j$ when it is an output of the model. We denote this row of $\mathcal{U}$ as $u_j$.

+
+
  1. How does minimizing the cross-entropy loss between $e_c$ and $\hat y$ ensure that basis vectors corresponding to similar words $e_i$ and $e_j$ are mapped to vectors in $\mathbb{R}^n$ that have a high inner product? I'm not sure of the mechanism by which the above procedure ensures that. In other words, how is it ensured that if words no. $i_1$ and $i_2$ are similar, then $\langle v_{i_1}, v_{i_2}\rangle$ and $\langle u_{i_1}, u_{i_2}\rangle$ have high values?

  2. How does the above procedure ensure that linear combinations of words in similar contexts are mapped to "similar" images? Does that even happen? In the above description for example, do $\hat v$ and $\hat w$ corresponding to similar contexts also have a high inner product? If so, how is that ensured?

  3. Maybe my linear algebra is rusty and this is a silly question, but from what I gather, the columns of $\mathcal{V}$ represent the images of OHE vectors (standard basis of $V$) in the standard basis of $\mathbb{R}^n$ - i.e. the embedded representation of vocab words. Also, the rows of $\mathcal{U}$ somehow represent the embedded representation of vocab words in $\mathbb{R}^n$. It's not obvious to me why $v_i=\mathcal{V}e_i$ should be the same as or even similar to $u_i$. Again, how does the above procedure ensure that?
+",27548,,-1,,6/17/2020 9:57,8/15/2019 11:44,Understanding how continuous bag of words method learns embedded representations,,0,2,,,,CC BY-SA 4.0 +13948,1,,,8/13/2019 15:40,,1,1451,"

I just finished my implementation of NEAT and I want to see the phenotype of each genome. Is there a library for displaying a neural network like this?

+

+

Example of my genome syntax:

+
[[0, 11, 0.9154901559275923, 1, 19],
+[4, 11, 1.3524964932656411, 1, 19],
+[12, 9, -1.755210214894685, 1, 23],
+[11, 12, 0.6193383549414015, 1, 23]]
+
+

Where [In, Out, Weight, Activated?, Innovation]

+",27789,,-1,,6/17/2020 9:57,8/13/2019 22:36,Library for rendering neural network NEAT,,1,1,0,12/14/2021 21:40,,CC BY-SA 4.0 +13949,2,,13944,8/13/2019 15:52,,1,,"

This depends on whether the output is a continuous or discrete variable. If the output variable is discrete (there are a finite number of possibilities that it can be), as in a classification task (such as this one, where you are trying to place the input into one of 5 categories), you want to use one output neuron for each class. If the variable is continuous, however, you should only use one output neuron.

+ +

This is because of how the training process works. During training, your network successively makes adjustments to try to reduce the errors. These adjustments are made in the direction of the error – so if the network predicts a value which is too high, then the network's weights are adjusted to make the output value lower. On the other hand, if the network predicts a value which is too low, the network's weights are adjusted to make the output bigger.

+ +

If you have output neurons labeled 0 to 4 and a training sample with some input value and a target prediction of 2, then the neural network will make its prediction. Once the prediction has been made, each neuron is adjusted individually – in this case, neuron 2 will be adjusted towards the correct (higher) probability and all the other neurons will be adjusted towards lower probabilities. In this way you have one prediction for each class.

+ +

Backpropagation is about error attribution, and using multiple neurons allows the error of the neural network to be better attributed, as the network can adjust each neuron individually, and thus adjust the required probabilities for each class.

+ +

Using a single neuron with a sigmoid activation function would be worse, as the sigmoid function saturates values close to 0 and 1, so there would be an unnatural bias towards category 0 and category 4 over the other categories. The neural network could learn to overcome this, but it would take more time.
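
As a rough illustration of the one-neuron-per-class setup (assuming PyTorch; the layer sizes are hypothetical), a 5-class classifier with a cross-entropy loss would look something like this:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),   # 10 input features (made up for this example)
    nn.ReLU(),
    nn.Linear(32, 5),    # one output neuron per class (raw logits)
)
loss_fn = nn.CrossEntropyLoss()        # applies softmax internally

x = torch.randn(8, 10)                 # a batch of 8 examples
targets = torch.randint(0, 5, (8,))    # class labels 0..4
loss = loss_fn(model(x), targets)
loss.backward()                        # each output neuron gets its own error signal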

+",27933,,27933,,8/14/2019 6:43,8/14/2019 6:43,,,,2,,,,CC BY-SA 4.0 +13950,1,,,8/13/2019 16:36,,2,28,"

I want to implement a CNN, but I want to explore what happens when my first layer is a fully-connected one. I still want to use convolutions, of course, but I want to apply them after the first layer. I noticed that the input then loses its 3D structure. Does that mean I can only apply 1d convolutions after that? Is there a non-trivial way to recover the 3d structure, so that 2d convolutions may be applied?

+ +

Hopefully, when I reconstruct it to have a 3D structure, that 3D structure is somehow meaningful.

+ +

I also posted this question at https://forums.fast.ai/t/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer-or-a-flatten-layer/52489 and https://discuss.pytorch.org/t/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer-or-a-flatten-layer/53313.

+",9289,,2444,,8/13/2019 21:44,8/13/2019 21:44,How do I recover the 3D structure of a layer after a fully-connected layer?,,0,0,,,,CC BY-SA 4.0 +13951,2,,6411,8/13/2019 17:25,,1,,"

After the first conv layer, the size is reduced to 20x20. For the primary caps layer, which is a convolutional capsule layer, the output size is given by $n_{out} = (n + 2p - f)/s + 1$, which gives a 6x6 output with 256 channels.

+ +

The 6x6x256 output is further encoded into capsules of 8 dimensions by reshaping the channels, i.e. 256/8 = 32, which gives 6x6x32 = 1152 capsules.
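
A quick sanity check of those numbers with the output-size formula (assuming the original CapsNet hyperparameters of a 9x9 kernel, stride 2 and no padding for the primary-capsule convolution):

def conv_out(n, f, p=0, s=1):
    return (n + 2 * p - f) // s + 1

n_out = conv_out(20, f=9, p=0, s=2)    # 20x20 feature map, 9x9 kernel, stride 2
print(n_out)                            # 6
print(n_out * n_out * (256 // 8))       # 6 * 6 * 32 = 1152 capsules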

+ +

Try experimenting with the same hyperparameters first, and then try to encode higher-level features by making suitable changes to the hyperparameters.

+",27936,,,,,8/13/2019 17:25,,,,0,,,,CC BY-SA 4.0 +13952,1,,,8/13/2019 17:31,,0,1932,"

Can an LSTM network be used for a reinforcement learning problem? How do I tell it what reward it will get for a prediction, given that its output will only contain actions?

+ +

Let's say that, at first, I can play myself and feed my actions in as training data, so that the network sees which actions are right, i.e. which ones it should strive for. But how do I make it learn relatively independently after that?

+",27905,,,,,10/18/2019 17:03,LSTM in reinforcement learning,,1,2,,12/28/2021 22:00,,CC BY-SA 4.0 +13953,1,,,8/13/2019 17:35,,2,138,"

I'm trying to detect if a given video shot is fast or slow motion. Basically, I need to calculate a ""video motion"" score in a given video sequence, meaning how fast or slow motion the video is. For instance, if a video is about a car racing or camera moving fast, the score is high. Whereas if the video is about two persons standing/talking, then the motion is low, so the lower score.

+ +

What comes into my mind is using optical flow which is already an implemented function in OpenCV. I never used it. But I don't know how to interpret or use it for a ""motion score"".

+ +

Is optical flow applicable here? How can I use it to calculate a score? In particular, if there is a ML/Deep learning model that already does it, please share it.

+",9053,,2444,,8/13/2019 21:08,8/13/2019 21:08,How can I detect fast and slow motion in videos?,,0,0,,,,CC BY-SA 4.0 +13955,2,,9550,8/13/2019 18:24,,6,,"

The first equation deals with distance. Delta, or distance, is the measure of how compatible two genomes are with each other. c1, c2 and c3 are parameters you set to dictate the importance of E, D and W. Note that if you change c1, c2 or c3, you will most likely also have to change dt, which is the distance threshold, or the maximum distance apart 2 genomes can be before they are separated into different species. E represents the total number of excess genes, D represents the total number of disjoint genes in both genomes, W represents the average weight difference of the genes that match, and, finally, N represents the number of connections/genes of the genome with the larger number of connections. For example, take the following 2 genomes:

+ +
[[1,.25][2,.55],[4,.78],[6,.2]]
+and
+[[1,.15][3,.92],[5,.37]]
+
+ +

Where the 0 index represents innovation number and the 1 index represents weight value. E would be 1, since there is 1 excess gene, gene 6. D would be 4, since connections 2 and 4 are not in genome 2, and connections 3 and 5 are not in genome 1. W would be .10, since only connection 1 is shared between the two genomes.

+ +
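
A minimal sketch of this distance computation for the two example genomes above (the coefficients are placeholders you would tune yourself, and the weight term is the average difference over matching genes):

def compatibility(genome1, genome2, c1=1.0, c2=1.0, c3=0.4):
    # each genome is a list of [innovation, weight] pairs
    g1 = {inn: w for inn, w in genome1}
    g2 = {inn: w for inn, w in genome2}
    matching = set(g1) & set(g2)
    max_common = min(max(g1), max(g2))
    # excess genes lie beyond the other genome's highest innovation number
    excess = sum(1 for inn in set(g1) ^ set(g2) if inn > max_common)
    disjoint = len(set(g1) ^ set(g2)) - excess
    w_diff = sum(abs(g1[i] - g2[i]) for i in matching) / max(len(matching), 1)
    n = max(len(g1), len(g2))
    return c1 * excess / n + c2 * disjoint / n + c3 * w_diff

genome1 = [[1, .25], [2, .55], [4, .78], [6, .2]]
genome2 = [[1, .15], [3, .92], [5, .37]]
print(compatibility(genome1, genome2))  # uses E = 1, D = 4, average W difference = .10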

The second formula is a bit more complicated. From my understanding (correct me if I'm wrong), this is a formula for adjusting fitness, known as explicit fitness sharing. f′i is the adjusted fitness, which will replace the original fitness, fi. For every genome j in the entire population (yes, the entire population and not just every genome in its species), it will compute the sharing function of the distance between j and i (1 if the distance is below the threshold dt, 0 otherwise), i being the genome of fitness fi. Then it will sum up all these values, divide the original fitness fi by that sum, and set f′i to the result. Next,

+ +
+

Every species is + assigned a potentially different number of offspring in proportion to the sum of adjusted fitnesses f′i of its member organisms.

+
+ +

This assigning of the number of species offspring is used so that one species can't take over the entire population, which is the whole point of speciation in the first place. In conclusion, these two formulas are vital to the function and efficiency of the NEAT algorithm.

+",27789,,,,,8/13/2019 18:24,,,,4,,,,CC BY-SA 4.0 +13956,2,,8333,8/13/2019 18:32,,0,,"

In some implementations, the species representatives change each generation. This allows for a dynamic definition of what a species is. If you speciate from scratch (meaning each species is assigned a new representative) every generation, you won't have the problem where a representative leaves its species.

+",27789,,,,,8/13/2019 18:32,,,,0,,,,CC BY-SA 4.0 +13957,1,,,8/13/2019 18:34,,2,577,"

I was trying to understand the definition of 2d convolutions vs 3d convolutions. I saw the ""simplest definition"" according to Pytorch and it seems to be the following:

+ +
  • 2d convolutions map $(N,C_{in},H,W) \rightarrow (N,C_{out},H_{out},W_{out})$
  • 3d convolutions map $(N,C_{in},D,H,W) \rightarrow (N,C_{out},D_{out},H_{out},W_{out})$
+ +

Which makes sense to me. However, what I find confusing is that I would have expected images to be considered 3D tensors but we apply 2D convolutions to them. Why is that? Why is the channel dimension not part of the ""dimensionality"" of the images?

+ +

I also asked this question at https://forums.fast.ai/t/what-is-the-difference-between-2d-vs-3d-convolutions/52495.

+",9289,,2444,,8/13/2019 21:04,8/13/2019 21:04,What is the difference between 2d vs 3d convolutions?,,1,0,,,,CC BY-SA 4.0 +13959,1,21243,,8/13/2019 19:20,,6,84,"

We have the popular TextRank API which given a text, ranks keywords and can apply summarization given a predefined text length.

+ +

I am wondering if there is a similar tool for video summarization. Maybe a library, a deep model or ML-based tool that given a video file and a length, it ranks frames, or video scenes/shots. I'd like to generate a short summary of a video with visual features.

+",9053,,9053,,8/13/2019 22:18,3/21/2022 4:09,Video summarization similar to Summe's TextRank,,1,2,,3/25/2022 5:00,,CC BY-SA 4.0 +13960,2,,13957,8/13/2019 19:20,,1,,"

Looking at it from the perspective of input to output in that fashion is probably not the best. So let's start with our goal and how these ND convolutions accomplish that (note these are in my own words, and may not be best stated).

+ +

Assumption: There exists highly correlative local associations

+ +

Goal: Have a linear model that takes advantage of these local associations

+ +

Solution: The ND Convolution

+ +

Explanation: ND convolutions take advantage of our locality assumption by connecting only local nodes/neurons. The fact that it's a sliding window allows us to learn filters for any location, along with ones that are reusable.

+ +

Your Question: Where does the N in ND convolution matter?:
+When using an ND convolution we are working off the assumption that there exists this locality in N dimensions and nothing more. So we connect all other components of the input in a dense matter because we have no assumptions to work on in this space. So now going to the shapes you mentioned such as the input and output of the 2D convolution. We are convolving a filter of size $(C_{in}, k_h, k_w)$ with a $(C_{in},H,W)$ activation shaped map (the $N$ just refers to the number of activation maps, and there is no association between them). We use $C_{in}$ channels on the kernel because we are not making any assumptions about locality between channels. On the other hand in a 3D convolution we make locality assumption is 3 dimensions, so our kernel will be ($C_{in}, k_h, k_w, k_r$). These kernel sizes are actually determined by the input size, the amount of dimensions of the kernel will actually match the inputs (minus the batch) because it needs to densely match each one.

+ +

You may be thinking now that in torch this is not the case: this is because it's rare to want a 2D convolution with an input that's different from the one you mention, so they only implemented it for a single shape. I hope this clears up the convolutions and helps you understand not just 2- and 3-dimensional convolutions, but all N.
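
To make the shape bookkeeping concrete, here is a quick check of the kernel and output shapes (assuming PyTorch; the sizes are arbitrary):

import torch
import torch.nn as nn

x2d = torch.randn(1, 3, 32, 32)            # (N, C_in, H, W)
conv2d = nn.Conv2d(3, 16, kernel_size=5)   # kernel shape (16, 3, 5, 5): dense over channels
print(conv2d(x2d).shape)                   # torch.Size([1, 16, 28, 28])

x3d = torch.randn(1, 3, 10, 32, 32)        # (N, C_in, D, H, W)
conv3d = nn.Conv3d(3, 16, kernel_size=5)   # kernel shape (16, 3, 5, 5, 5): slides over D, H, W
print(conv3d(x3d).shape)                   # torch.Size([1, 16, 6, 28, 28])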

+",25496,,,,,8/13/2019 19:20,,,,0,,,,CC BY-SA 4.0 +13961,5,,,8/13/2019 20:11,,0,,"

See, for example, Text Summarization Techniques: A Brief Survey for an overview of the field.

+",2444,,2444,,8/13/2019 20:11,8/13/2019 20:11,,,,0,,,,CC BY-SA 4.0 +13962,4,,,8/13/2019 20:11,,0,,"For questions related to (automatic) text summarization, which is the task of producing a concise +and fluent summary of a text or document while preserving key information content and +the overall meaning of the original document. For example, search engines are an example of an application that generates summaries as the previews of the documents or websites.",2444,,2444,,8/13/2019 20:11,8/13/2019 20:11,,,,0,,,,CC BY-SA 4.0 +13963,2,,13926,8/13/2019 21:23,,3,,"

I ended up finding this article which does what I'm looking for. +Below is the portion of code I adapted for my needs

+ + + +
from sklearn.metrics.pairwise import cosine_similarity
+
+import tensorflow_hub as hub
+import tensorflow as tf
+
+elmo = hub.Module(""https://tfhub.dev/google/elmo/2"", trainable=True)
+
+def elmo_vectors(x):
+  embeddings=elmo(x, signature=""default"", as_dict=True)[""elmo""]
+
+  with tf.device('/device:GPU:0'):
+    with tf.Session() as sess:
+      sess.run(tf.global_variables_initializer())
+      sess.run(tf.tables_initializer())
+      # return average of ELMo features
+      return sess.run(tf.reduce_mean(embeddings,1))
+
+
+corpus=[""I'd like an apple juice"",
+        ""An apple a day keeps the doctor away"",
+         ""Eat apple every day"",
+         ""We buy apples every week"",
+         ""We use machine learning for text classification"",
+         ""Text classification is subfield of machine learning""]
+
+
+elmo_embeddings=[]
+print (len(corpus))
+for i in range(len(corpus)):
+    print (corpus[i])
+    elmo_embeddings.append(elmo_vectors([corpus[i]])[0])
+
+print ( elmo_embeddings, len(elmo_embeddings))
+print(elmo_embeddings[0].shape)
+sims = cosine_similarity(elmo_embeddings, elmo_embeddings)
+print(sims)
+print(sims.shape)
+
+",27915,,,,,8/13/2019 21:23,,,,0,,,,CC BY-SA 4.0 +13964,2,,13886,8/13/2019 22:13,,3,,"

When you want to compare Reinforcement Learning algorithms, you might want to compare the average rewards they generate and how fast and close they get to the optimal policy. However, in the case of comparing it to humans, you might want to compare the game results of all the games played.

+

Reward Comparison

+

Often Reinforcement Learning algorithms are compared by using the rewards (either direct, maximum or average in time/iteration). For example, in this page about RL a comparison of two algorithms is shown:

+

+

Or, when you know the optimal actions, you can plot the number of plays/iterations against the percentage of actions. See for example this RL comparison on the 10-armed testbed problem:

+

+

Henderson et al. 2017 have a whole section about the evaluation metrics of Reinforcement Learning algorithms. They also comment on the plotting of the average or maximum cumulative rewards; moreover, they mention the sample bootstrap method to create confidence intervals for a better comparison. Lastly, they mention that the significance of the improvements of the algorithms should be assessed using a statistical test, such as the two-sample t-test. Note that you should take into account the distributions of the datasets to choose the right statistical test. An interesting article related to this is A Hitchhiker's Guide to Statistical Comparison of Reinforcement Learning Algorithms by Colas et al.

+

Comparison of plays against humans

+

To find out how well different algorithms play against humans, you should play a large number of games and compare what you consider the important parameters, for example: did the algorithm win, the time it took to win, the number of points gained, etc. These values can then be compared statistically. Note that you have to think carefully about the setup of these experiments, since you should only change the algorithms; the other parameters should stay equal. Therefore, you should - preferably - use a large number of subjects (of different ages, sexes, etc., to cover many types of people), and try to prevent any bias; think about the order, how many games are played, location, time, etc.

+",198,,-1,,6/17/2020 9:57,8/13/2019 22:13,,,,0,,,,CC BY-SA 4.0 +13965,2,,13948,8/13/2019 22:36,,1,,"

I've never used it, but, if you are using Python, have a look at neat-python, which is a Python package that implements NEAT and also provides a module, visualize, to plot the best and average fitness vs. generation, plot the change in species vs. generation, and to show the structure of a network described by a genome. See this Stack Overflow thread, if you encounter any issue while attempting to use this module. See also this example.

+ +

Have also a look at this web page, which lists links to many NEAT implementations in different languages, where some of them provide visualization tools.

+",2444,,,,,8/13/2019 22:36,,,,0,,,,CC BY-SA 4.0 +13966,1,,,8/14/2019 7:29,,1,66,"

I have some problems with training a CNN :(

+

For example: +Input 6x6x3, 1 kernel 3x3x3, output = 4x4x1 => pool: 2x2x1

+

+Using backpropagation, I calculated the deltas for the output.

+

This tutorial and other tutorials only explain how to calculate the deltas for the weights and the input in the 2D case:

+

input*output=deltas for 2D weights

+

filter*out = input delta

+

But how can I calculate the weight deltas for 3D filters?

+

Must I multiply each input layer by the output, as below?

+

FilterLayer1Delta = OutputDelta * InputLayer1 ?

+

FilterLayer2Delta = OutputDelta * InputLayer2 ?

+

FilterLayer3Delta = OutputDelta * InputLayer3 ?

+",27945,,-1,,6/17/2020 9:57,8/14/2019 7:29,How to train and update weights of filters,,0,0,,,,CC BY-SA 4.0 +13967,2,,13944,8/14/2019 8:14,,0,,"

In the case of one output neuron, you don't have to use a sigmoid. As Teymour Aldridge suggested, it would cause a tendency to output 0 or 1. What I normally do is set the layer before the output layer to a sigmoid or tanh, so it won't output ridiculously off numbers, and set the output layer to linear. There will be cases where it outputs something like 1.5, but over time that disappears.

+ +

Hope it helps :)

+",27925,,,,,8/14/2019 8:14,,,,1,,,,CC BY-SA 4.0 +13968,1,,,8/14/2019 11:20,,2,1395,"

I'm a beginner in RL and currently trying to make a DQN agent that can act optimally in a simple situation.

+ +

In this situation, the agent should decide at what rate to charge or discharge the electrical battery, which is equivalent to buying or selling electrical energy, in order to make money by means of arbitrage. So the action space is, for example, [-6, -4, -2, 0, 2, 4, 6] kW. The negative numbers mean discharging, and the positive numbers mean charging.

+ +

In the case that the battery is empty, the discharging actions (-6, -4, -2) should be forbidden. Conversely, in the case that the battery is fully charged, the charging actions (2, 4, 6) should be forbidden.

+ +

To deal with this issue, I tried two approaches:

+ +
  • In every step, renewing the action space, which means masking the forbidden action.
  • Give extreme penalties for selecting forbidden actions (in my case the penalty was -9999)
+ +

But none of them worked.

+ +

For the first approach, the training curve (the cumulative rewards) didn't converge.

+ +

For the second approach, the training curve converged, but the charging/discharging results are not reasonable (almost random results). +I think that, in the second approach, a lot of forbidden actions are selected randomly by the epsilon-greedy policy, and these samples are stored in the experience memory, which negatively affects the result.

+ +

for example:

+ +

The state is defined as [p_t, e_t] where p_t is the market price for selling (discharging) the battery, and e_t is the amount of energy left in the battery.

+ +

When state = [p_t, e_t = 0], and discharging action (-6), which is forbidden action in this state, is selected, the next state is [p_t, e_t = -6]. And then the next action (2) is selected, then the next state is [p_t, e_t = -4] and so on.

+ +

In this case the < s, a, r, s' > samples are:

+ +

< [p_t, 0], -6, -9999, [p_t+1, -6] >

+ +

< [p_t, -6], 2, -9999, [p_t+1, -4] > ...

+ +

These are not expected to be stored in the experience memory because they are not desired samples (e_t should be more than zero). I think this is why desired results didn't come out.

+ +

So what should I do? Please help.

+",27946,,,,,8/18/2019 12:10,Reinforcement learning: How to deal with illegal actions?,,2,2,,9/27/2020 22:41,,CC BY-SA 4.0 +13971,2,,13968,8/14/2019 13:23,,1,,"

In my project, I also had the problem that the action space is not the same for every state of the environment. I do not like the approach of penalizing forbidden actions with a high negative reward, since it feels a bit like cheating. However, it might work; I just haven't tried it.

+ +

The approach I used, which you could apply as well, is to integrate an additional function into your action space. This function would map an action to a specific amount of kW. Thereby, depending on the current state, the function maps the action to the amount of kW to charge or discharge your battery with. This has the advantage that you do not have to deal with illegal actions.

+ +

This could be applied as follows: +Instead of defining for every action the amount to charge/discharge your battery with, you create a set of functions that defines the respective amount. Here is an example with five actions (a minimal code sketch follows after the list):

+ +
  1. Action: Discharge the battery entirely
  2. Action: Discharge the battery so that half of its capacity remains, otherwise do nothing
  3. Action: Do nothing
  4. Action: Charge the battery to half its capacity, otherwise do nothing
  5. Action: Charge the battery to its maximum capacity
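
And here is a minimal Python sketch of such a mapping (the capacity value and the function name are hypothetical, and the kW vs. kWh distinction is ignored for simplicity):

BATTERY_CAPACITY = 12.0  # hypothetical maximum energy content

def action_to_amount(action, energy_left):
    # Map a discrete action index (1..5 above) to a feasible charge (+) or discharge (-) amount.
    if action == 1:      # discharge entirely
        return -energy_left
    if action == 2:      # discharge down to half capacity, otherwise do nothing
        return -max(energy_left - BATTERY_CAPACITY / 2, 0.0)
    if action == 3:      # do nothing
        return 0.0
    if action == 4:      # charge up to half capacity, otherwise do nothing
        return max(BATTERY_CAPACITY / 2 - energy_left, 0.0)
    if action == 5:      # charge to maximum capacity
        return BATTERY_CAPACITY - energy_left
    raise ValueError('unknown action')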
+",26876,,,,,8/14/2019 13:23,,,,0,,,,CC BY-SA 4.0 +13972,2,,11534,8/14/2019 15:23,,0,,"

It occurs to me that Superintelligence as an ordinal value could be explained as follows:

+ +
  • Artificial General Intelligence is strong utility in all problems that humans can conceive of.

  • Superintelligence as a category includes problems beyond what humans can conceive of.
+ +

An easy way to understand this is to look at the difference between dogs and humans. Dogs are optimized for certain tasks, such as tracking, but limited in terms of what they can conceive and what tasks they can engage in. Humans have superintelligence, compared to dogs, because we regularly engage with problems and tasks beyond the conception of canines.

+",1671,,,,,8/14/2019 15:23,,,,0,,,,CC BY-SA 4.0 +13973,1,14038,,8/14/2019 16:15,,6,1033,"

In chapter 6 of Sutton and Barto (p. 128), they claim temporal difference converges to the maximum likelihood estimate (MLE). How can this be shown formally?

+",26846,,2444,,8/18/2019 14:47,8/18/2019 19:13,How to show temporal difference methods converge to MLE?,,1,0,0,,,CC BY-SA 4.0 +13974,1,,,8/14/2019 16:53,,2,58,"

In chapter six of Sutton and Barto (p.128), they claim Monte Carlo methods converge to an estimate minimizing the mean squared error. How can this be shown formally?

+ +

Bump

+",26846,,26846,,8/16/2019 17:06,8/16/2019 17:06,How to show Monte Carlo methods converge to an estimate which minimizes mean squared error?,,0,0,,,,CC BY-SA 4.0 +13975,1,,,8/14/2019 19:57,,6,256,"

I am curious if there is any advantage of using 3D convolutions on images like CIFAR-10/100 or ImageNet. I know that they are not usually used on this data set, though they could because the channel could be used as the "depth" channel.

+

I know that there are only 3 channels, but let's think more deeply. They could be used deeper in the architecture despite the input image only using 3 channels. So, we could have at any point in the depth of the network something like $(C_F,H,W)$ where $C_F$ is dictated by the number of filters and then apply a 3D convolution with kernel size less than $C_F$ in the depth dimension.

+

Is there any point in doing that? When is this helpful? When is it not helpful?

+

I am assuming (though I have no mathematical proof or any empirical evidence) that if the first layer aggregates all input pixels/activations and disregards locality (like a fully connected layer or conv2D that just aggregates all the depth numbers in the feature space), then 3D convolutions wouldn't do much because earlier layers destroyed the locality structure in that dimension anyway. It sounds plausible but lacks any evidence or theory to support it.

+

I know Deep Learning uses empirical evidence to support its claims so perhaps there is something that confirms my intuition?

+

Any ideas?

+
+

Similar posts:

+ +",9289,,2444,,12/18/2021 13:01,12/18/2021 13:02,"Is there any use of using 3D convolutions for traditional images (like cifar10, imagenet)?",<3d-convolution>,1,0,,,,CC BY-SA 4.0 +13976,2,,13968,8/14/2019 20:51,,1,,"

You can set the number of output nodes to the number of all actions, then choose the highest output value and try to perform that action; if it can't be performed, move to the next highest output value, and so on. The only problem with this is that you have to know how many possible actions there are.

+",27789,,,,,8/14/2019 20:51,,,,0,,,,CC BY-SA 4.0 +13978,1,,,8/14/2019 22:30,,12,4591,"

I'm new to NN. I am trying to understand some of its foundations. One question that I have is: why the derivative of an activation function is important (not the function itself), and why it's the derivative which is tied to how the network performs learning?

+ +

For instance, when we say a constant derivative isn't good for learning, what is the intuition behind that? Is the activation function somehow like a hash function that needs to differentiate well between small variations in the inputs?

+",9053,,2444,,8/15/2019 0:06,8/24/2019 5:08,Why is the derivative of the activation functions in neural networks important?,,3,3,,,,CC BY-SA 4.0 +13979,2,,13978,8/15/2019 0:39,,7,,"

If what you are asking is what is the intuition for using the derivative in backpropagation learning, instead of an in-depth mathematical explanation:

+ +

Recall that the derivative tells you a function's sensitivity to change with respect to a change in its input. A high (absolute) value for the derivative at a certain point means that the function is very steep, and a small change in input may result in a drastic change in its output; conversely, a low absolute value means little change, so not steep at all, with the extreme case that the function is constant when the derivative is zero.

+ +

Training a neural network essentially amounts to an optimization problem where one wants to minimize a certain value, in this case the error produced by the network on the given training examples. Backpropagation learning can be viewed as a case of gradient descent (the inverse of hill climbing).

+ +

If for a moment we assume that your input is only 2-dimensional (just for illustration, the mathematics of course also work for higher dimensions), you could imagine the error function as a landscape with hills, mountains, valleys, ridges etc. You are standing at a high point and want to get down as far as possible. Gradient descent means that, in discrete steps, you always walk down in the direction that has the steepest slope downwards from where you are currently standing, until you eventually reach a (local) minimum.

+ +

In order to determine where that steepest slope is, you need the derivative of the activation function. Basically, you want to sort out how much each unit in your network contributes to an error, and adjust in the direction that contributes the most.

+ +
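
To make the "walking downhill" idea concrete, here is a tiny, purely illustrative hand-rolled gradient descent on a one-dimensional toy error function (the function and step size are made up):

def error(w):
    return (w - 3.0) ** 2        # toy "error landscape" with its minimum at w = 3

def derivative(w):
    return 2.0 * (w - 3.0)       # slope of the landscape at w

w = 0.0                          # start somewhere on a hill
for _ in range(50):
    w -= 0.1 * derivative(w)     # step in the direction of steepest descent
print(w)                         # close to 3.0, the (local) minimum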

Edit: Regarding constant values for a derivative, in the landscape metaphor it would mean that the gradient is the same no matter where you are, so you'll always go in the same direction and never reach an optimum. However, multi-layer networks with linear activation functions are kind of beside the point anyhow, when you consider that each cell computes a linear combination of its inputs, which is then again a linear function, so the output of the last layer will ultimately be a linear function of the inputs at the first layer. That is to say, anything you can do with a multi-layer net with linear activation functions, you could also achieve with just a single layer.

+",27624,,27624,,8/15/2019 1:01,8/15/2019 1:01,,,,2,,,,CC BY-SA 4.0 +13980,2,,13978,8/15/2019 0:43,,4,,"

The basic (and usual) algorithm used to update the weights of the artificial neural network (ANN) is an iterative, numerical and optimization algorithm, called gradient descent, which is based on and requires the computation of the derivative of the function you want to find the minimum of. If the function you want to find the minimum of is multivariable, then, rather than the derivative, gradient descent requires the gradient, which is a vector where the $i$th element contains the partial derivative of the function with respect to the $i$th variable. Hence the name gradient descent, where the derivative of a function of one variable can be considered the gradient of the function.

+ +

In the case of ANNs, we usually have a loss function that we want to minimize: for example, the mean squared error (MSE). Therefore, in order to apply gradient descent to find the minimum of the MSE, we need to find the derivative or, more precisely, the gradient of the MSE. To do it, the back-propagation (an algorithm based on the chain rule) is often used, given that the MSE is a function of the ANN, which is a composite function of multiple non-linear functions, the activation functions, whose main purpose is thus to introduce non-linearity, or, in other words, it makes the ANN powerful. Given that the MSE is a function of the parameters of the ANN, then we need to find the partial derivative of the MSE with respect to all parameters of the ANN. In this process, we will also need to find the derivatives of the activation functions that each neuron applies to its linear combination of weights: to fully see this, you will need to learn the details of back-propagation! Hence the importance of the derivatives of the activation functions.

+ +

A constant derivative would always give the same learning signal, independently of the error, but this is not desirable.

+ +

To fully understand all these statements, I recommend you learn about back-propagation and gradient descent in detail, which requires a little bit of effort!

+",2444,,2444,,8/15/2019 1:29,8/15/2019 1:29,,,,1,,,,CC BY-SA 4.0 +13982,1,13985,,8/15/2019 1:55,,2,102,"

Reading this blog post about AlphaZero: +https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go

+ +

It uses language such as ""the amount of training the network needs"" and ""fully trained"" to describe how long they had the machine play against itself before they stopped training. They state training times such as 9 hours, 12 hours, and thirteen days for chess, shogi, and Go respectively. Why is there a point at which the training ""completes?"" They show plots of AlphaZero's performance on the Y axis (its Elo rating) as a function of the number of training steps. Indeed, the performance seems to level out as the number of training steps increases beyond a certain point. Here's a picture from that site of the chess performance vs training steps:

+ +

+ +

Notice how sharply the Elo rating levels off as a function of training steps.

+ +
  1. First: am I interpreting this correctly? That is, is there an asymptotic limit to improvement in performance as training sessions tend to infinity?

  2. If I am interpreting this correctly, why is there a limit? Wouldn't more training mean better refinement and improvement upon its play? It makes sense to me that the millionth training step may yield less improvement than the very first one, but I wouldn't expect an asymptotic limit. That is, maybe it gets to about 3500 Elo points in the first 200k training steps over the course of the first 10 hours or so of playing chess. If it continued running for the rest of the year, I'd expect it to rise significantly above that. Maybe double its Elo rating? Is that intuition wrong? If so, what are the factors that limit its training progress beyond the first 10 hours of play?
+ +

Thanks!

+",27962,,,,,8/15/2019 6:39,"What does it mean for AlphaZero's network to be ""fully trained""",,1,0,,,,CC BY-SA 4.0 +13984,2,,13695,8/15/2019 2:39,,1,,"

It seems that you want to detect ranges of IP addresses that are vulnerable/dangerous/etc, right? Such ranges are essentially numeric intervals, and so my suggestion is to look at decision tree learning instead of neural networks, because you are essentially doing a classification task where you want to test both categorical data and splits over numerical attributes.

+ +

The result will be a tree-like function (nested conditionals) of the form

+ +
IF ...> address > ... 
+   THEN [vulnerable]
+   ELSE IF port=... 
+              THEN [not vulnerable]
+              ELSE [vulnerable]
+
+ +

where a huge benefit is that it is also more human-readable than a neural net.

+ +

The most prominent algorithms for decision trees are ID3 and its successor C4.5.
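
As a rough, hypothetical sketch of how this could look with scikit-learn (the feature encoding and the data are made up; in practice you would encode the relevant address/port attributes from your own dataset):

from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up data: features are [ip_as_integer, port], label 1 = vulnerable
X = [[167772161, 22], [167772162, 80], [3232235777, 22], [3232235778, 443]]
y = [1, 1, 0, 0]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=['ip', 'port']))  # human-readable IF/ELSE rules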

+",27624,,,,,8/15/2019 2:39,,,,7,,,,CC BY-SA 4.0 +13985,2,,13982,8/15/2019 6:39,,3,,"

A neural network will eventually reach the limit of its approximation power. You can't expect it to keep learning more and more things indefinitely with the same number of learnable parameters. Also, if you eventually reach optimal performance, you can't play better than optimal (I'm not saying that it reached optimal performance, but possibly something close to optimal for its approximation abilities). So probably a combination of those two factors causes the performance increase to reach its limit.

+",20339,,,,,8/15/2019 6:39,,,,0,,,,CC BY-SA 4.0 +13986,1,13996,,8/15/2019 8:28,,5,397,"

I read these comments from Judea Pearl saying we don't have causality, physical equations are symmetric, etc. But the conditional probability is clearly not symmetric and captures directed relationships.

+ +

How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships?

+",21158,,2444,,8/15/2019 8:39,4/21/2020 17:10,Why isn't conditional probability sufficient to describe causality?,,4,0,,,,CC BY-SA 4.0 +13987,2,,13986,8/15/2019 10:27,,3,,"
+

But the conditional probability is clearly not symmetric and captures directed relationships.

+
+ +

One needs to consider the kinds of directed relationships that are captured by conditional probability. It surely does capture some kind of association or dependence, which could be directed. At the same time, it is not right to say that it surely captures causal relationships.

+ +

Let:

+ +

Sun rises = $A$, Rooster crows = $B$, then, $P(A |B)$ is bound to be very high but it does not mean that rooster crowing causes sunrise.

+ +
+

How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships?

+
+ +

He will ask him to go back to school.

+",16708,,16708,,8/15/2019 11:11,8/15/2019 11:11,,,,2,,,,CC BY-SA 4.0 +13988,1,13990,,8/15/2019 11:44,,3,162,"

This is related to my earlier question, which I'm trying to break down into parts (this being the first). I'm reading notes on word vectors here. Specifically, I'm referring to section 4.2 on page 7. First, regarding points 1 to 6 - here's my understanding:

+ +

If we have a vocabulary $V$, the naive way to represent words in it would be via one-hot-encoding, or in other words, as basis vectors of $R^{|V|}$ - say $e_1, e_2,\ldots,e_{|V|}$. We want to map these to $\mathbb{R}^n$, via some linear transformation such that the images of similar words (more precisely, the images of basis vectors corresponding to similar words) have higher inner products. Assuming the matrix representation of the linear transformation given the standard basis of $\mathbb{R}^{|V|}$ is denoted by $\mathcal{V}$, then the ""embedding"" of the $i$-th vocab word (i.e. the image of the corresponding basis vector $e_i$ of $V$) is given by $\mathcal{V}e_i$.

+ +

Now suppose we have a context ""The cat ____ over a"", CBoW seeks to find a word that would fit into this context. Let the words ""the"", ""cat"", ""over"", ""a"" be denoted (in the space $V$) by $x_{i_1},x_{i_2},x_{i_3},x_{i_4}$ respectively. We take the image of their linear combination (in particular, their average): +$$\hat v=\mathcal{V}\bigg(\frac{x_{i_1}+x_{i_2}+x_{i_3}+x_{i_4}}{4}\bigg)$$

+ +

We then map $\hat v$ back from $\mathbb{R}^n$ to $\mathbb{R}^{|V|}$ via another linear mapping whose matrix representation is $\mathcal{U}$: $$z=\mathcal{U}\hat v$$

+ +

Then we turn this score vector $z$ into softmax probabilities $\hat y=softmax(z)$ and compare it to the basis vector corresponding to the actual word, say $e_c$. For example, $e_c$ could be the basis vector corresponding to ""jumped"".

+ +

Here's my interpretation of what this procedure is trying to do: given a context, we're trying to learn maps $\mathcal{U}$ and $\mathcal{V}$ such that given a context like ""the cat ____ over a"", the model should give a high score to words like ""jumped"" or ""leaped"", etc. Not just that - but ""similar"" contexts should also give rise to high scores for ""jumped"", ""leaped"", etc. For example, given a context ""that dog ____ above this"" wherein ""that"", ""dog"", ""above"", ""this"" are represented by $x_{j_1},x_{j_2},x_{j_3},x_{j_4}$, let the image of their average be

+ +

$$\hat w=\mathcal{V}\bigg(\frac{x_{j_1}+x_{j_2}+x_{j_3}+x_{j_4}}{4}\bigg)$$

+ +

This gets mapped to a score vector $z'=\mathcal{U}\hat w$. Ideally, both score vectors $z$ and $z'$ should have similarly high magnitudes in their components corresponding to similar words ""jumped"" and ""leaped"".

+ +

Is my above understanding correct? Consider the following quote from the lectures:

+ +
+

We create two matrices, $\mathcal{V} \in \mathbb{R}^{n\times |V|}$ and $\mathcal{U} \in \mathbb{R}^{|V|\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. $\mathcal{V}$ is the input word matrix such that the $i$-th column of $\mathcal{V}$ is the $n$-dimensional embedded vector for word $w_i$ when it is an input to this model. We denote this $n\times 1$ vector as $v_i$. Similarly, $\mathcal{U}$ is the output word matrix. The $j$-th row of $\mathcal{U}$ is an $n$-dimensional embedded vector for word $w_j$ when it is an output of the model. We denote this row of $\mathcal{U}$ as $u_j$.

+
+ +

It's not obvious to me why $v_i=\mathcal{V}e_i$ should be the same as or even similar to $u_i$. How does the whole backpropagation procedure above ensure that?

+ +

Also, how does the procedure ensure that basis vectors corresponding to similar words $e_i$ and $e_j$ are mapped to vectors in $\mathbb{R}^n$ that have high inner product? (In other words, how is it ensured that if words no. $i_1$ and $i_2$ are similar, then $\langle v_{i_1}, v_{i_2}\rangle$ and $\langle u_{i_1}, u_{i_2}\rangle$ have high values?)

+",27548,,,,,8/16/2019 13:48,How does Continuous Bag of Words ensure that similar words are encoded as similar embeddings?,,1,0,,,,CC BY-SA 4.0 +13989,2,,7394,8/15/2019 12:06,,1,,"

In case this is still relevant to you I can share my tutorial on SVM with Python implementation in Jupyter notebook:

+ +

Primer to support vector machines

+ +

The tutorial assumes some mathematics and programming background knowledge. The SVM codes utilize no external machine learning packages and tries to teach the reader to build a SVM model him-/herself.

+ +

I hope it helps you!

+",27971,,,,,8/15/2019 12:06,,,,0,,,,CC BY-SA 4.0 +13990,2,,13988,8/15/2019 14:36,,2,,"

Unlike in skip-gram, the reason similar words have similar embeddings in CBOW is that the words show up in the same contexts of other skipped words.

+ +

Let's assume two words $e_i$ and $e_j$ pop up in the exact same context of some word $e_k$, with 3 other context words as well. An example would be:

+ +
  1. He leaped over the truck
  2. He jumped over the truck
+ +

Where the italics represent the words with similar meanings, but the bolded words above are the ones being predicted/skipped in CBOW. Let's now show this.

+ +

Let the rest of the context be denoted $\{e_r\}_r$ and the skipped word as $e_s$ and so the loss will try to minimize both $-e_s^T log(softmax(\mathcal{U}\mathcal{V}[\frac{1}{R+1}(e_i + \sum_re_r)]))$ and $-e_s^T log(softmax(\mathcal{U}\mathcal{V}[\frac{1}{R+1}(e_j + \sum_re_r)]))$.

+ +

Assuming a long run with a large enough batch size (so ignoring catastrophic forgetting), we can essentially say it will be minimizing
+$$-e_s^T [log(softmax(\mathcal{U}\mathcal{V}[\frac{1}{R+1}(e_i + \sum_re_r)])) +\\ log(softmax(\mathcal{U}\mathcal{V}[\frac{1}{R+1}(e_j + \sum_re_r)]))]$$
+Now let $\mathcal{U}\mathcal{V}\frac{1}{R+1}\sum_re_r$ be denoted as $c$, $\mathcal{U}\mathcal{V}\frac{1}{R+1}e_i$ as $\hat u_i$, and $\mathcal{U}\mathcal{V}\frac{1}{R+1}e_j$ as $\hat u_j$. So we have to minimize
+$$-e_s^T[log(softmax(\hat u_i+c)) + log(softmax(\hat u_j+c))]$$
+or equivalently
+$$-e_s^T\ log(softmax(\hat u_i+c) * softmax(\hat u_j+c))$$
+As we know in crossentropy, we get our critical point when the inside of the log here would equal $e_s$, which is only possible if both $softmax(\hat u_i+c)$ and $softmax(\hat u_j+c)$ were to be equal to $e_s$. softmax is non-invertible and is invariant under addition only, therefore to achieve this we would need $\hat u_i + c$ to equal $\hat u_j + c + K$ where $K$ is just some arbitrary constant vector ($K=[k,k,k,...]$. Subtracting $c$ and multiplying by $R+1$ we make our model want

+ +

$$\begin{align} +u_i &= u_j + K \\ +\implies K &= \mathcal{U}\mathcal{V}(e_i -e_j) \\ +\end{align}$$

+ +

This would mean for all common words, we see this activity that $u_{i-j}$ would be a constant, but this is difficult because that means a $\delta$ must exist such that $\mathcal{U}\delta = k*\vec 1$ and that all similar word pairs indexed by $(a,b)$ would have $v_a \approx v_b + const*\delta$ (because $u_a = u_b + K \implies v_b = v_a + \mathcal{U}^{-1}K$ and we denote $\delta = \mathcal{U}^{-1}K$). This constraint is heavy, tough to learn, and would also be difficult to process on multiple word senses that would appear in the vocabulary. This is because if it were to learn this, it enforces constraints on the images of $\mathcal{U}$ and $\mathcal{V}$ along with creating word vectors with highly different magnitudes making the learning process difficult. This would indicate that this constant would be low in practice. Therefore we would get $v_i \approx v_j$.

+",25496,,25496,,8/16/2019 13:48,8/16/2019 13:48,,,,10,,,,CC BY-SA 4.0 +13991,2,,4786,8/15/2019 14:36,,0,,"

The reason the adjusted fitness prevents species from growing too big is due to the fact that the summation that determines the divisor in the adjusted fitness function reduces to the number of genomes in the species that fi belongs to. So, as a species grows, the adjusted fitness of every genome belonging to it is divided by a larger number and thus receives a lower adjusted fitness value, which in turn will also reduce the summed adjusted fitness of the species. This smaller adjusted fitness then affects the number of offspring the species will get to create after elitism reduces the population.

+",20044,,20044,,11/24/2019 21:30,11/24/2019 21:30,,,,0,,,,CC BY-SA 4.0 +13993,1,,,8/15/2019 18:00,,2,96,"

Is this true? Are we planning to switch to such reasoning methods regarding AI tech in the future?

+",27979,,,,,1/9/2021 10:03,Is AI and Big Data science recommending a shift in the scientific method from inductive to deductive reasoning?,,1,0,,,,CC BY-SA 4.0 +13994,1,14014,,8/15/2019 21:45,,4,96,"

I'm trying to score video scenes in terms of aesthetics and cinematography features. Basically, how ""interesting"" a scene or video frame can be for a viewer. Put more simply, how attractive a scene is. My final goal is to tag intervals of video which can be more interesting to viewers. It can be a ""temporal attention"" model as well.

+ +

Do we have an available model or prototype to score cinematographic features of an image or a video? I need a starter tutorial on that. Basically, a ready-to-use prototype/model that I can test as opposed to a paper that I need to implement myself. Paper is fine as long as the code is open-source. I'm new and can't yet write a code given a paper.

+",9053,,9053,,8/16/2019 15:55,8/16/2019 17:49,Aesthetics analysis with deep learning,,1,8,,,,CC BY-SA 4.0 +13995,2,,13986,8/15/2019 22:42,,0,,"

Why isn't conditional probability sufficient to describe causality?

+ +

Suppose that, when the barometric pressure, in a certain region, drops below a certain level, two things happen

+ +
  1. the height of the column of mercury in your barometer drops below a certain level

  2. a storm occurs
+ +

We may be tempted to model these relationships with the following graphical model, where each directed edge represents a causal relationship, so, for example, the drop in barometric pressure causes the storm.

+ +

+ +

However, this graphical (and causal) model is likely wrong (and unintuitive), given that the drop in barometric pressure is likely only correlated with the storm, so it is not the cause of the storm.

+ +

How can we see that the drop in barometric pressure is or not the cause of the storm?

+ +

We can compare the probabilities $P(A \mid \text{do}(B))$ and $P(A \mid B)$, where $A$ is the event ""a storm occurs"" and $B$ is the event ""drop in barometric pressure"". What does $\text{do}(B)$ mean? It means that we force the event $B$ to occur, that is, we force the drop in barometric pressure to occur. Intuitively, what is then the difference between $P(A \mid \text{do}(B))$ and $P(A \mid B)$? In the case of $P(A \mid \text{do}(B))$, we force the event $B$ to always occur. In the case of $P(A \mid B)$, we only and passively look at the cases of event $A$ when event $B$ occurs (without thus forcing $B$ to occur). We now know the difference between $P(A \mid \text{do}(B))$ and $P(A \mid B)$. However, how does this help us to understand that $B$ is not the cause of $A$? If $B$ was a cause of $A$, then, if we forced $B$ always to occur, then the probability of $A$ should also change accordingly. However, imagine that we are (magically) able to drop the barometric pressure, if the probability of $A$ does not change accordingly (in this case, if it does not increase), then the storm is not an effect of the drop in barometric pressure.
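
As a rough simulation of this idea (the probabilities are made up): the barometric pressure is a common cause of both the barometer reading and the storm, and intervening on the barometer breaks its link to the pressure.

import random

def sample(do_barometer_low=None):
    low_pressure = random.random() < 0.3
    barometer_low = low_pressure if do_barometer_low is None else do_barometer_low
    storm = low_pressure and random.random() < 0.8
    return barometer_low, storm

# Observational: P(storm | barometer low)
obs = [sample() for _ in range(100000)]
p_obs = sum(s for b, s in obs if b) / sum(b for b, _ in obs)

# Interventional: P(storm | do(barometer low))
do = [sample(do_barometer_low=True) for _ in range(100000)]
p_do = sum(s for _, s in do) / len(do)

print(round(p_obs, 2), round(p_do, 2))   # roughly 0.8 vs 0.24: B is correlated with A, but not its cause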

+ +

To conclude, Judea Pearl would say that do-operators (or interventions) are required to analyze causal relationships.

+ +

The article Probabilistic Causation by Stanford Encyclopedia of Philosophy gives a good overview of the (probabilistic) causation (or causality) field. In particular, have a look at section 3, which describes causal modeling (according to Pearl). Causal modeling and inference actually involve several nontrivial concepts that require some time to get familiar with, such as interventions (or do-operations), several basic causal relationships (such as forks, chains and colliders), d-separation or Bayesian networks.

+",2444,,2444,,8/15/2019 22:58,8/15/2019 22:58,,,,4,,,,CC BY-SA 4.0 +13996,2,,13986,8/15/2019 23:50,,5,,"

Perhaps the shortest answer to this question is that Bayes' Theorem itself allows us to easily change the direction of a conditional probability:

+ +

$$ +P(A|B) = \frac{P(B|A)P(A)}{P(B)} +$$

+ +

So if you have $P(B|A)$, $P(A)$, and $P(B)$, we can determine $P(A|B)$, and similarly you can determine $P(B|A)$ from $P(A|B)$, $P(B)$ and $P(A)$. Just by looking at $P(B|A)$ and $P(A|B)$, it is therefore impossible to tell what the causal direction is (if any).

+ +

In fact, probabilistic inference usually works the other way round: When there is a known causal relation, say from diseases $A$ to symptoms $B$, we usually have $P(B|A)$, and are interested in the diagnostic reasoning task of determining $P(A|B)$ from that. (The only other thing we need for that is the prior probability $P(A)$ since $P(B)$ is just a normalization factor.)
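
As a quick numeric illustration of that diagnostic direction (the numbers are made up):

# Hypothetical disease/symptom example
p_a = 0.01            # P(A): prior probability of the disease
p_b_given_a = 0.9     # P(B|A): probability of the symptom given the disease
p_b = 0.05            # P(B): overall probability of the symptom

p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 2))   # 0.18 = P(A|B), obtained without any causal assumption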

+",27624,,,,,8/15/2019 23:50,,,,0,,,,CC BY-SA 4.0 +13997,2,,7394,8/16/2019 1:41,,1,,"

Support vector machines are supervised learning models with associated learning algorithms that analyze data and are used for classification and regression analysis.

+ +

Here is a link where you can learn more about it at an introductory level: ""Support Vector Machine — Introduction to Machine Learning Algorithms"" (Medium)

+",19325,,1671,,8/16/2019 20:25,8/16/2019 20:25,,,,0,,,,CC BY-SA 4.0 +13998,2,,13859,8/16/2019 5:40,,1,,"

Your question is mostly philosophical, not technical or scientific. So I am giving opinions and references here.

+ +
+

the core of AI boils down to design algorithms

+
+ +

I notice that you are not even trying to define AI (whose definition has changed since the previous century). You could look at the table of contents of the Artificial Intelligence journal and notice how the topics covered there changed drastically in a few decades (even experimental approaches have declined).

+ +

You might be interested in reading more about AGI and following the few conferences about it. Beware, there are a lot of oversimplified approaches, and even a lot of bullshit (e.g. on this AGI mailing list, but some messages there are gems)

+ +

I assume you accept the Church-Turing philosophical thesis: every intelligent cognition (either natural i.e. biological or artificial) is some symbolic computation. In particular, the work of a mathematician can be abstracted as a Turing machine (that was the major insight of Turing and in the halting problem). Be also aware of the related Curry-Howard correspondence and Rice's theorem. Read Gödel, Escher, Bach !

+ +

We don't know yet how to make AGI. You could read Bostrom's SuperIntelligence book about potential dangers. You could also read J.Pitrat's book Artificial Beings (which gives much more positive and constructive insights about eventually making some AGI) and blog.

+ +

My personal belief (just an opinion) is that AGI could perhaps be achieved (in many dozens of years), should definitely get much more funding -and more time- as a research topic (e.g. as much as the ITER reactor; see also softwareheritage.org and the motivations there), but won't be achieved by any single technique; rather, by a clever combination of many AI techniques (both symbolic AI -e.g. for planning- and machine learning or connectionist approaches, with inspiration from cognitive psychology).

+ +
+

The conclusion is, in my understanding, that to create AI systems we must accept incompleteness or inconsistency.

+
+ +

We, members of the Homo Sapiens Sapiens species (in latin, the humans who know that they know, so capable of metaknowledge), claim to be intelligent. But all of us have a globally incomplete and inconsistent behavior, because each of us have contradictions (e.g. in our personal lives or ethical beliefs). So, logically speaking, incompleteness or inconsistency is not opposed to intelligence. Read also more about situated AI and machine ethics. BTW, I believe (since educated by J.Pitrat about this) that explicit and declarative metaknowledge is required in any AGI system.

+ +
+

Does that mean that AI will never be employed for real-time critical applications?

+
+ +

Notice that autonomous killing robots are already a controversial research topic today. Autonomous robots already exist (e.g. Mars rovers cannot be teleoperated -for every elementary movement- from Earth, because any radio signal takes minutes to reach Mars). And autonomous vehicles (à la Google car) claim today to use AI techniques and are real-time safety-critical systems. Today's Airbus or Boeing (cf DO-178C) are flying automatically most of the time. Cruise missiles and ICBMs are fire-and-forget devices. Many high-frequency trading systems claim to use AI techniques and are real-time.

+ +

PS. Notice that what was called AI in the previous century is today called AGI. My PhD in AI was defended in 1990 (and was about explicit metaknowledge for metaprogramming goals, see e.g. this old 1987 paper)

+",3335,,3335,,8/18/2019 13:56,8/18/2019 13:56,,,,4,,,,CC BY-SA 4.0 +13999,1,14002,,8/16/2019 5:47,,3,204,"

In a convolutional neural network, when we apply the convolution on a $5 \times 5$ image with $3 \times 3$ kernel, with stride $1$, we should get only one $4 \times 4$ as output. In most of the CNN tutorials, we are having $4 \times 4 \times m$ as output. I don't know how we are getting a three-dimensional output and I don't know how we need to calculate $m$. How is $m$ determined? Why do we get a three-dimensional output after a convolutional layer?

+",27986,,2444,,8/16/2019 8:12,8/20/2019 11:34,Why do we get a three-dimensional output after a convolutional layer?,,1,0,,,,CC BY-SA 4.0 +14000,2,,2712,8/16/2019 6:42,,0,,"

You could be interested in orthogonally persistent systems. You could look at them as schema-agnostic database systems whose data fits entirely in RAM (remember also 1980s Smalltalk or Lisp Machines or Prolog ones and 1994 GrassHopper OS) or at least in virtual memory. With that approach, even SBCL almost fits in your wish, since it has save-lisp-and-die. Look also into frame based systems and object databases. Read also a good operating systems textbook and see past discussions archived on tunes.org.

+ +

Shameless self-promotion: My bismon system (work in progress in summer 2019) claims to be a GPLv3+ orthogonally persistent system applied to static source code analysis of IoT software. But you might reuse most of it for other kinds of orthogonal persistence (of frame-based data).

+",3335,,3335,,8/16/2019 6:54,8/16/2019 6:54,,,,0,,,,CC BY-SA 4.0 +14001,1,14004,,8/16/2019 8:07,,3,1827,"

I am a newbie in the fantastic AI world; I have started my learning recently. +After a while, my understanding is that we need to feed in a tremendous amount of data to train one or many models.

+ +

Once the training is complete, we could take the trained models and ""plug"" them into any other programming language to use them to detect things.

+ +

So my questions are:

+ +

1. What are the trained models? are they algorithms or a collection of parameters in a file?

+ +

2. What do they look like? e.g. file extensions

+ +

3. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models??

+ +

Thank you!

+",27988,,2444,,8/16/2019 9:18,8/16/2019 17:46,"What is the ""thing"" which is trained in AI model training",,2,0,,12/22/2021 14:08,,CC BY-SA 4.0 +14002,2,,13999,8/16/2019 8:31,,2,,"

If you have a $h_i \times w_i \times d_i$ input, where $h_i, w_i$ and $d_i$ respectively refer to the height, width and depth of the input, then we usually apply $m$ $h_k \times w_k \times d_i$ kernels (or filters) to this input (with the appropriate stride and padding), where $m$ is usually a hyper-parameter. So, after the application of $m$ kernels, you will obtain $m$ $h_o \times w_o \times 1$ so-called feature maps (also known as activation maps), which are usually concatenated along the depth dimension, hence your output will have a depth of $m$ (given that the application of a kernel to the input usually produces a two-dimensional output). For this reason, the output is usually referred to as output volume.
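
As a concrete illustration, here is a minimal PyTorch sketch (the sizes and the choice of $m = 8$ are made up purely for illustration) showing that the depth of the output volume equals the number of kernels:

import torch
import torch.nn as nn

# A 6x6 RGB input (d_i = 3) convolved with m = 8 kernels of size 3x3,
# stride 1, no padding. Each kernel yields one 4x4 feature map, and the
# m maps are stacked along the depth dimension, giving the output volume.
x = torch.randn(1, 3, 6, 6)                       # (batch, d_i, h_i, w_i)
conv = nn.Conv2d(in_channels=3, out_channels=8,   # out_channels is m
                 kernel_size=3, stride=1, padding=0)

y = conv(x)
print(y.shape)   # torch.Size([1, 8, 4, 4]) -> the depth of the output volume is m = 8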

+ +

In the context of CNNs, the kernels are learned, so they are not constant (at least, during the learning process, but, after training, they usually remain constant, unless you perform continual lifelong learning). Each kernel will be different from any other kernel, so each kernel will be doing a different convolution with the input (with respect to the other kernels), therefore, each kernel will be responsible for filtering (or detecting) a specific and different (with respect to the other kernels) feature of the input, which can, for example, be the initial image or the output of another convolutional layer.

+",2444,,2444,,8/16/2019 8:38,8/16/2019 8:38,,,,0,,,,CC BY-SA 4.0 +14003,1,,,8/16/2019 8:47,,14,9706,"

I've been reading different papers regarding graph convolution and it seems that they come into two flavors: spatial and spectral. From what I can see the main difference between the two approaches is that for spatial you're directly multiplying the adjacency matrix with the signal whereas for the spectral version you're using the Laplacian matrix.

+

Am I missing something, or are there any other differences that I am not aware of?

+",20430,,2444,,12/19/2021 14:54,12/19/2021 14:54,What is the difference between graph convolution in the spatial vs spectral domain?,,2,0,,,,CC BY-SA 4.0 +14004,2,,14001,8/16/2019 9:13,,6,,"

This answer applies to the Machine Learning (ML) part of AI, as that seems to be what you are asking about. Please bear in mind that AI is still a broad church, including many techniques other than ML. ML - including neural networks for deep learning and Reinforcement Learning (RL) - is only a subset of AI; some AI techniques are more focused on the algorithm than on parameters.

+ +
+
    +
  1. What are the trained models? are they algorithms or a collection of parameters in a file?
  2. +
+
+ +

In ML, the usual process is to feed data into a parametric function (e.g. a neural network) and alter its parameters to ""fit"" the data. The main output of this is a collection of parameters and hyperparameters that describe the parametric function. So 90% of the time, when discussing the ""trained model"", it means the same thing as the collection of parameters.

+ +

However, those parameters are of limited use without a library that can re-create the function from them. Parameters will be saved from a specific library and can be loaded back into that library easily. It is also possible for libraries to read or convert from models saved from other libraries, much like how different spreadsheet programs can read each others' files.

+ +
+
    +
  1. What do they look like? e.g. file extensions
  2. +
+
+ +

This varies a lot, depending on which library was used. It is not possible to make a general statement. For example, Tensorflow can save variables to a ""Checkpoint"" file with .ckpt extension, but can get more sophisticated depending on how much of the model you want to export, and full models with whole structure will contain more than just the variables and have the .pb extension.
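
As an illustration, here is a minimal tf.keras sketch (the tiny architecture and the file names are made up) showing two common ways of persisting a trained model:

import tensorflow as tf

# A made-up model, used only to show the save/load calls.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.save_weights('my_model.ckpt')   # only the learned variables (checkpoint-style files)
model.save('my_model.h5')             # variables plus the model structure in one file

# Later, possibly in another program, restore the full model.
restored = tf.keras.models.load_model('my_model.h5')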

+ +
+
    +
  1. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models??
  2. +
+
+ +

There are a few places where you can find selections of pre-trained models. One such place is Tensorflow's Model Zoo and you might be interested in Tensorflow detection model zoo.

+ +

Other frameworks may also provide example code. For instance Caffe also has a ""Model Zoo"" (searching Model Zoo is a good starting strategy).

+ +

If you are working at the level of collecting model parameters and want to run these models yourself, you will need to learn a bit about each library, what language is used to work with it, and maybe follow some tutorial about how to use it. A few models will be packaged up with working scripts to use from the command line, but many are not and may take some time and effort to get working.

+ +

When you have a specific detection target, you may be disappointed to find models that don't quite match what you want. For image classifying, it is common if you have a specialist need to take a pre-existing general model that has been trained on large dataset for weeks, and then ""fine tune"" it with your own image dataset for your purpose. Most NN libraries will have tutorials and examples of this fine tuning process.
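
As a hedged sketch of that fine-tuning idea (all layer sizes and names below are placeholders, not a recipe from any particular tutorial):

import tensorflow as tf

# Reuse a network pre-trained on ImageNet and train only a small new head
# on your own bird / not-bird images.
base = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False,
                                         input_shape=(224, 224, 3), pooling='avg')
base.trainable = False               # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation='sigmoid'),   # bird vs. not-bird
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(your_labelled_bird_dataset, epochs=5)     # supply your own images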

+",1847,,1847,,8/16/2019 9:20,8/16/2019 9:20,,,,0,,,,CC BY-SA 4.0 +14006,1,,,8/16/2019 9:53,,4,67,"

What contemporary information system or cognitive architecture is the one with the highest measure of the Integrated Information Theory (IIT) (that is, a theory of consciousness, which states that a system's consciousness is determined by its causal properties and is, therefore, an intrinsic, fundamental property of any physical system)

+ +

Are there races/competitions to develop (or allow the autonomous development of) the system with the maximum IIT measure?

+",8332,,2444,,5/25/2022 23:01,5/25/2022 23:01,What is the cognitive architecture with the highest IIT measure?,,0,0,,,,CC BY-SA 4.0 +14007,1,,,8/16/2019 11:06,,3,123,"

I am currently building a chatbot. What I have done so far is collect possible questions/training data/files and create a model out of them using Apache OpenNLP; the model is able to predict all the questions that are in the training data, but fails to predict new questions.

+

Instead of doing all the above, I can write a program that matches the question/words against training data and predict the answer — what is the advantage of using Machine Learning algorithms?

+

I have searched extensively about this, and all I got was that, in Machine Learning, there is no need to change the algorithm and the only change would be in the training data; but that is the case with traditional programming too: the change will be in the training data.

+",27992,,2444,,12/21/2021 12:21,12/21/2021 12:21,What are the advantages of Machine Learning compared to traditional programming for developing a chatbot?,,2,0,,,,CC BY-SA 4.0 +14008,2,,14007,8/16/2019 11:16,,3,,"

In my view ML does not work very well for conversational AI systems. It is generally alright for intent recognition, so getting what the user wants if they ask a question (""I want to book a flight?"", ""What is the weather in London?""), but anything after that quickly becomes difficult to handle, especially multi-step conversations that go beyond simple question/answer pairs.

+ +

My suggestion would be to plan possible dialogues out as flow charts (more like trees/graphs, as there can be multiple branches at any point), and then write a program that interprets the graph based on user input and gives appropriate replies. You will also want to have some conversational memory to keep track of any information the user has mentioned. That is also tricky to do in a ML system.
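
As a very rough illustration of that graph-interpreting idea, here is a minimal Python sketch; every node name, keyword and reply in it is made up and not taken from any real framework:

# A hand-written dialogue graph: each node has a prompt and keyword-routed branches.
DIALOGUE = {
    'start': {
        'prompt': 'Do you want to book a flight or check the weather?',
        'routes': {'flight': 'ask_destination', 'weather': 'ask_city'},
    },
    'ask_destination': {'prompt': 'Where would you like to fly to?', 'routes': {}},
    'ask_city': {'prompt': 'Which city are you interested in?', 'routes': {}},
}

def run():
    state, memory = 'start', {}          # memory = simple conversational memory
    while True:
        node = DIALOGUE[state]
        user = input(node['prompt'] + '\n> ').lower()
        memory[state] = user             # remember what the user said at each step
        # Follow the first branch whose keyword occurs in the user input.
        state = next((nxt for kw, nxt in node['routes'].items() if kw in user), None)
        if state is None:
            print('Collected so far:', memory)
            break

if __name__ == '__main__':
    run()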

+ +

For a very simple framework to start off with, have a look at ELIZA. It's half a century old, but you can still use it as a starting point.

+ +

(Disclaimer: I work for a company that makes conversational AI systems)

+",2193,,,,,8/16/2019 11:16,,,,2,,,,CC BY-SA 4.0 +14009,1,,,8/16/2019 11:30,,3,166,"

Squeeze-and-excite networks introduced SE blocks, while MobileNet v2 introduced linear bottlenecks.

+ +

What is the effective difference between these two concepts?

+ +

Is it only the implementation (depth-wise convolution vs. per-channel pooling), or do they serve a different purpose?

+ +

My understanding is that both approaches are used as per-channel attention mechanisms. In other words, both approaches are used to filter out unnecessary information (information that we consider noise, not signal). +Is this correct? +Do bottlenecks ensure that the same feature won't be represented multiple times in different channels, or do they not help at all in this regard?

+",27994,,2444,,8/17/2019 15:57,8/17/2019 15:57,What is the difference between Squeeze-and-excite and bottleneck modules from Mobilenet v2?,,0,0,,,,CC BY-SA 4.0 +14012,1,,,8/16/2019 16:21,,2,43,"

Given a robot in a situation such as in a library reading a book.

+ +

Now I want to create a neural network that suggests an appropriate action in this situation and, generally, ignores actions such as ""get up and dance"" and so on.

+ +

Since there are limitless actions a robot could do, I need to narrow it down to the ones in this situation. Using its vision system, the word ""book"" and book neurons should already be activated as well as ""reading"".

+ +

One idea I had was to create an adversarial network which generates words (sequences of letters) based on the situation such as ""turn page"", ""read next line"" and so on. And then have another neural network which translates these words into actions. (It would then simulate whether this was a good idea. If not, it would somehow suppress the first word and try to generate a new word.)

+ +

Another example is the robot is in a maze and gets to a crossroads. The network would generate the word ""turn left"" and ""turn right"".

+ +

Another idea would be to have the actions be composed of a body part e.g. ""eyes"" and a movement such as ""move left"" and it would combine these to suggest actions.

+ +

Either way, it seems like I need a way to encode actions so that the robot doesn't consider every possible action in the universe.

+ +

Is there any research in this area or ideas on how to achieve this?

+ +

(I think this may be somewhat related to the task of ""try to name as many animals as you can."")

+",4199,,2444,,8/17/2019 13:52,9/4/2019 1:41,How to select the most appropriate set of actions for a given environment or task?,,1,0,,,,CC BY-SA 4.0 +14013,1,,,8/16/2019 16:51,,7,446,"
+

It (Adagrad) adapts the learning rate to the parameters, performing smaller updates + (i.e. low learning rates) for parameters associated with frequently occurring features, and larger updates (i.e. high learning rates) for parameters associated with infrequent features.

+
+ +

From Sebastian Ruder's Blog

+ +

If a parameter is associated with an infrequent feature then yes, it is more important to focus on properly adjusting that parameter since it is more decisive in classification problems. But how does making the learning rate higher in this situation help?

+ +

If it only changes the size of the movement in the dimension of the parameter (makes it larger), wouldn't that make things even more imprecise? Since the network depends more on those infrequent features, shouldn't adjusting those parameters be done more precisely instead of just faster? The more decisive parameters should have a higher ""slope"", so why should they also have high learning rates? I must be missing something, but what is it?

+ +

Further, in the article, the formula for parameter adjustments with Adagrad is given. Where exactly in that formula do you find the information about the frequency of a parameter? There must be a relationship between the gradients of a parameter and the frequency of features associated with it because it's the gradients that play an important role in the formula. What is that relationship?

+ +

TLDR: I don't understand both the purpose and formula behind Adagrad. What is an intuitive explanation of it that also provides an answer to the questions above, or shows why they are irrelevant?

+",21788,,16565,,8/17/2019 8:04,5/1/2023 22:02,"An intuitive explanation of Adagrad, its purpose and its formula",,1,1,,,,CC BY-SA 4.0 +14014,2,,13994,8/16/2019 17:35,,3,,"

Aesthetics of images has a strong subjective element and possibility of multiple dimensions depending on purpose of the media. That means:

+ +
    +
  • It is hard to define what we mean by scoring aesthetics.

  • +
  • Given any well-constrained definition, it is then time-consuming to collect relevant data.

  • +
+ +

However, there is some interest in the machine-learning community, as media quality would be a very useful metric to sort and filter data on (provided the metric is close enough to the end user who wants to select it). As a result, there are data sets, research papers and pre-built models for this.

+ +

Media quality training data can be crowdsourced in a variety of ways, including looking at popularity of items on social media, to paying experts to assess large numbers of images. An example of one open dataset compiled by researchers for this purpose is called AVA.

+ +

This data might be reduced to image/quality pairs which you can then train a CNN model to predict the quality metric (score out of 10 for example). This might just be a regression, but other more complex loss functions are also considered.

+ +

A quick search for existing models brings up Google's NIMA project, which has more than one implementation available as open-source code. NIMA appears to use a multiclass classification approach to predict which ratings humans would most likely give the image, and the resulting score is then a weighted average of the predicted scores - the claimed benefit of that seems to be that it better matches how the quality ratings are sourced, and it will better capture split opinions (e.g. an image where half of people think it is terrible but half think it is great is a different type of image to one where everyone thinks it is just average).
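
As a small illustration of that scoring scheme, here is a minimal numpy sketch (the predicted distribution is made up): the network outputs a probability for each rating from 1 to 10, and the final score is the expected value of that distribution:

import numpy as np

ratings = np.arange(1, 11)
probs = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.25, 0.18, 0.10, 0.06, 0.03])

mean_score = float(np.sum(ratings * probs))                          # weighted average
std_score = float(np.sqrt(np.sum(((ratings - mean_score) ** 2) * probs)))
print(f'score: {mean_score:.2f} +/- {std_score:.2f}')                # split opinions -> large std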

+ +

Here is an implementation of NIMA by Github account ""idealo"" looks complete with documentation, and ready to use with pre-built scripts.

+ +

Just to show this is not a one-off, here's a blog by Andrej Karpathy about using CNNs to rate selfies which includes some introduction to core CNN concepts.

+",1847,,1847,,8/16/2019 17:49,8/16/2019 17:49,,,,3,,,,CC BY-SA 4.0 +14015,2,,14001,8/16/2019 17:46,,4,,"
+
    +
  1. What are the trained models? are they algorithms or a collection of parameters in a file?
  2. +
+
+ +

""Model"" could refer to the algorithm with or without a set of trained parameters. +If you specify ""trained model"", the focus is on the parameters, but the algorithm is implicitly part of that, since without the algorithm, the parameters are just an arbitrary set of numbers.

+ +
+
    +
  1. What do they look like? e.g. file extensions
  2. +
+
+ +

That very much depends on both the algorithm you're using and the specific implementation. A few simple examples might help clarify matters. Let's suppose that the problem we're trying to learn is the exclusive or (XOR) function:

+ +
a | b | a XOR b
+--+---+---------
+0 | 0 | 0
+0 | 1 | 1
+1 | 0 | 1
+1 | 1 | 0
+
+ +
+ +

First, let's use a 2-layer neural net to learn it. We'll define our activation function to be a simple step function:

+ +

$ f(x) = \begin{cases} +1 & \text{if } x > 0.5 \\ +0 & \text{if } x \le 0.5 +\end{cases} $

+ +

(This is actually a terrible activation function for real neural nets since it's non-differentiable, but it makes the example clearer.)

+ +

Our model is:

+ +

$h_0 = f(1\cdot a+1\cdot b + 0)\\ + h_1 = f(0.5\cdot a + 0.5\cdot b + 0)\\ + \,\;y = f(1\cdot h_0 - 1\cdot h_1 + 0)$

+ +

Each step of this essentially draws a hyperplane and evaluates to 1 if the input is on one side of the hyperplane and 0 otherwise. In this particular case, h_0 tells us if either a or b is true. h_1 tells us if they're both true, and y tells us if exactly one of them is true, which is the exact definition of the XOR function.

+ +

Our parameters are the coefficients and biases (the offset added at the end of each expression):

+ +

$ \begin{bmatrix} +1 & 1 & 0 \\ +0.5 & 0.5 & 0 \\ +1 & -1 & 0 \\ +\end{bmatrix}$

+ +

They can be stored in a file in any way we want; all that matters is that the code that stores them and the code that reads them back agree on the format.
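
As a quick sanity check (just an illustration, not part of the original reasoning), the following Python snippet evaluates the network above with the step activation $f$ and reproduces the truth table:

# Step activation as defined above: 1 if x > 0.5, else 0.
f = lambda x: 1 if x > 0.5 else 0

def xor_net(a, b):
    h0 = f(1*a + 1*b + 0)
    h1 = f(0.5*a + 0.5*b + 0)
    return f(1*h0 - 1*h1 + 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))    # matches the truth table above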

+ +
+ +

Now let's solve the same problem using a decision tree. For these, we traverse a tree, and at every node, ask a question about the input to decide which child to visit next. Ideally, each question will divide the space of possibilities exactly in half. Once we reach a leaf node, we know our answer.

+ +

In this diagram, we visit the right child iff the expression is true.

+ +
     a+b=2
+    /     \
+  a+b=0    0
+ /     \     
+1       0
+
+ +

In this case, the model and parameters are harder to separate. The only part of the model that isn't learned is ""It's a tree"". The expressions in each interior node, the structure of the tree, and the value of the leaf nodes are all learned parameters. As with the weights from the neural network, we can store these in any format we want to.

+ +
+ +

Both methods are learning the same problem, and actually find basically the same solution: a XOR b = (a OR b) AND NOT (a AND b). But the nature of the mathematical model we use depends on the method we choose, the parameters depend on what we train it on, the file format depends on the code we use to do it, and the line between model and parameter is fairly arbitrary; the math works out the same regardless of how we split it up. We could even write a program that tries different methods, and outputs a program that classifies inputs using the method that performed best. In this case, the model and parameters aren't separate at all.

+ +
+
    +
  1. Especially, I want to find the trained models for detecting birds (the bird types do not matter). Are there any platforms for open-source/free online trained AI models??
  2. +
+
+ +

I don't know of any pretrained models that specifically recognize birds, but I'm not in image-recognition, so that doesn't mean much. If you're not averse to training your own model (using existing code), I believe the ImageNet dataset includes birds. AlexNet and LeNet would probably be good starting points for the model. Most if not all of the state of the art image recognition models are based on convolutional networks, so you'll need a decent GPU to run them.

+",2212,,,,,8/16/2019 17:46,,,,0,,,,CC BY-SA 4.0 +14016,2,,13725,8/16/2019 18:02,,1,,"

Previous answers are very well written. I just wanted to supplement the thread by giving a simple example. The example shows how a logical function can be computed without errors using noisy components.

+ +

Taken verbatim from Neural Networks by Raul Rojas. An excellent book: +

+ +
+

an example of a network built using four + units. Assume that the first three units connected directly to the three bits of + input $x_1, x_2, x_3$ all fire with probability $1$ when the total excitation is greater + than or equal to the threshold $\theta$ but also with probability $p$ when it is $\theta − 1$. + The duplicated connections add redundancy to the transmitted bit, but in + such a way that all three units fire with probability one when the three bits + are $1$. Each unit also fires with probability $p$ if two out of three inputs are $1$. + However each unit reacts to a different combination. The last unit, finally, is + also noisy and fires any time the three units in the first level fire and also with + probability $p$ when two of them fire. Since, in the first level, at most one unit + fires when just two inputs are set to $1$, the third unit will only fire when all + three inputs are $1$. This makes the logical circuit, the AND function of three + inputs, built out of unreliable components error-proof.

+
+",16708,,16708,,8/18/2019 7:04,8/18/2019 7:04,,,,0,,,,CC BY-SA 4.0 +14019,2,,13694,8/17/2019 7:19,,1,,"

If you want to be an engineer who works with models as black boxes, it could be OK. If you want to be a researcher, whether as a job position or for a better understanding of the subject, it's not OK. Backpropagation is just basic multivariate calculus. If you are struggling with it, things like Hessians, regularizers, stochastic processes, etc. would cause even more problems. If you want to go down the research track, it could be a good idea to take some math courses and prioritize them.

+",22745,,,,,8/17/2019 7:19,,,,0,,,,CC BY-SA 4.0 +14020,1,14036,,8/17/2019 8:34,,4,271,"

I'm looking for NLP techniques to transform sentences without affecting their meaning.

+

For example, techniques that could transform active voice into passive voice, such as

+
+

The cat was chasing the mouse.

+
+

to

+
+

The mouse was being chased by the cat.

+
+

I can think of a number of heuristics one could implement to make this happen for specific cases, but would assume that there is existing research on this in the field of linguistics or NLP. My searches for "sentence transformation" and similar terms didn't bring up anything though, and I'm wondering if I simply have the wrong search terms.

+

Related to this, I'm also looking for measures of text consistency, e.g., an approach that could detect that most sentences in a corpus are written in active voice and detect outliers written in passive voice. I'm using active vs. passive voice as an example here and would be interested in more general approaches.

+",28018,,2444,,4/8/2022 10:59,4/8/2022 10:59,Which NLP techniques can be used to transform sentences (e.g. from passive to active voice) without affecting their meaning?,,1,0,,,,CC BY-SA 4.0 +14021,2,,11336,8/17/2019 8:46,,1,,"

Specifically for face recognition (and other identification algorithms) there are better approaches than using classifiers directly.

+ +

Most identity recognition algorithms generate some kind of metric - typically an ""embedding"" of the original image into an abstract space i.e. a vector of real numbers. The space might be based on real-world biometrics e.g. normalised measurements of eye distance, eyebrow arch etc, which would be trained as a regression algorithm. The problem with this is that it requires a lot of labelled data, and the biometrics are not necessarily good at differentiating between identities. An alternative is to get the neural network to find the best abstract space for identities. You can do this if you have at least two images of each identity, and using triplet loss to train the network - the loss function directly rewards embedding the same identity close and different identity far apart.

+ +

Once you have an embedding, you no longer directly classify identities using the neural network. Instead, you base identity on distance between measured embedding and stored embeddings. This requires implementing a search function that looks at known embeddings and sorts by distance.

+ +
+

how do we ensure the network tells us when it encounters a new person?

+
+ +

Embeddings don't solve this problem directly, but give a useful heuristic - distance in embedding space. Typically a maximum allowed distance is set as a cutoff to consider an image as showing a new identity. This is a hyperparameter of the model. This is an area that triplet loss helps with, since it is trained to make the distance as large as possible between images that show different identities. If it has generalised well during training, then it should ignore differences due to lighting, pose, makeup etc, but still be able to differentiate similar looking people.
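
As a rough illustration of such a cutoff, here is a minimal numpy sketch; the stored embeddings and the threshold value are made up for illustration:

import numpy as np

# Stored embeddings for known identities (placeholders, normally produced by the network).
known = {
    'alice': np.array([0.1, 0.9, 0.2]),
    'bob':   np.array([0.8, 0.1, 0.4]),
}
MAX_DISTANCE = 0.6    # hyperparameter: anything farther counts as a new identity

def identify(embedding):
    # Pick the known identity with the smallest Euclidean distance to the measured embedding.
    name, stored = min(known.items(), key=lambda kv: np.linalg.norm(kv[1] - embedding))
    if np.linalg.norm(stored - embedding) <= MAX_DISTANCE:
        return name
    return 'unknown (possible new identity)'

print(identify(np.array([0.15, 0.85, 0.25])))   # close to alice -> 'alice'
print(identify(np.array([0.5, 0.5, 0.9])))      # far from everyone -> unknown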

+ +

As the embedding is approximate, any such system may make mistakes, and needs to be carefully tested. The quality and quantity of training data are important, and it should match images used in production. But that is no different to the pure classifier, which must in addition be re-built and re-trained for every new class added.

+ +

Whether to use a more basic classifier or something like triplet loss is a question of scale - if the number of identities that need to be tracked is high, or the rate of change in identities is high, then embeddings trained on triplet loss (or similar) are more practical.

+",1847,,1847,,8/17/2019 9:11,8/17/2019 9:11,,,,0,,,,CC BY-SA 4.0 +14022,2,,2452,8/17/2019 14:48,,1,,"

CRUD applications today can't be considered expert systems.

+

However, even the so-called expert systems, which are currently developed, are implemented using normal programming statements, but what is important is the architecture that is built.

+

Current expert systems use only if-then types of rules, which produce data results that can be used as inputs to other rules, and an engine to step through them. This is quite limited, and it is greatly fragile.

+

What I do consider as expert systems are ones that can reason about variables (logical and numerical ones) and can use the limited formation of hypotheses and attempt a proof of them.

+

But, unfortunately, even what you might analyze and describe as an expert system is not really able to form models by itself, so it can easily run up against knowledge boundaries beyond which it cannot go.

+

Therefore, CRUD web applications today are not a modern version of the expert systems.

+",1581,,2444,,12/31/2020 20:46,12/31/2020 20:46,,,,0,,,,CC BY-SA 4.0 +14023,1,,,8/17/2019 16:30,,1,26,"

You can feed books to an RNN and it learns how to produce text.

+ +

What I'm interested in is an algorithm that, given, say, 20 letters, suggests, say, the best 10 options for the next 10 letters.

+ +

So, for example, it begins with ""The cat jumped "" +and then we get various options such as ""over the dog"", ""on the table"" and so on.

+ +

My initial thoughts are to first use the most likely next letters. Then find the letter which is most uncertain and change this to the second likely next letter. And repeat this process.

+ +

(Then I may have another evaluation neural network to assess which is ""best"" English.)

+ +

In other words I want the RNN to ""think ahead"" at what it's saying - much like a chess playing machine.

+",4199,,,,,8/17/2019 16:30,Is there an standard algorithm for giving options from an RNN?,,0,3,,,,CC BY-SA 4.0 +14024,2,,13859,8/17/2019 18:10,,2,,"

Artificial intelligence cannot be boiled down to designing algorithms, binary or otherwise, simply because the exhibition of intelligence in biological systems predated the invention of algorithmic computing. From this, we can further draw the conclusion that algorithms are not a necessary component of systems that exhibit behavior we deem intelligent.

+ +

A decision was made, per the recommendation of John von Neumann, to increase reliability of computing machinery by delegating to a single binary central processing unit all computation. This choice and the prior work upon which it was based (Shannon, Church, and Turing) led to the preeminence of algorithm specification in computer languages. The foundation of expressing functional design in algorithmic terms was laid and the software industry was born.

+ +

Since that time, there has existed a parallel trend in research back toward the biological inspiration of computing machinery and, more specifically, parallel processing. We see this at several levels.

+ +
    +
  • Movement of floating point arithmetic, video rendering, and machine learning bottlenecks to dedicated VLSI hardware acceleration
  • +
  • Multiple core VLSI processors
  • +
  • Computing clusters and processing frameworks, containers, and environments that expose interfaces through which compiler and kernel programmers can control parallel machinery explicitly or implicitly
  • +
  • Multiple thread and processes delegated to multiple cores, agents, or hosts in computing clusters
  • +
  • Sophisticated VLSI level caching to maximize the efficiency of parallel operations
  • +
  • Language and compiler features to support the trends toward deployment to multiprocessing environments, such as declarative languages for Big Data platforms (ECL for example)
  • +
  • Development of AI chip designs that completely or partially shift the computing paradigm to prior to the emergence of the CPU in some ways, returning to considerable parallelism and departing from centralized processing (yet capitalizing on lessons learned in computer vision, cognitive science, reverse engineering of brain genetics, mental signal tracing, the use of gradient descent with back propagation, reinforcement designs, and applied robotics) — This is likely a major research direction for the 2020s.
  • +
+ +

Some believe that an implication of Gödel's two incompleteness theorems is that the human mind does not meet the criteria of a computing machine as Turing defined one, but these are largely tangential issues.

+ +

It is true that Hava Siegelmann worked out a proof that RNNs of sufficient resolution, depth, and width can be trained to be equivalent to any Turing Machine. It is true that her work is considered support for Marvin Minsky's bold assertion that the human brain is a meat machine. However, the work on determinism by John Lucas and Roger Penrose's The Emperor's New Mind are not refutations of either of Gödel's theorems. They are refutations of what some thought were consequences of Gödel's theorems and some of the implications of Minsky's declaration.

+ +

Gödel clearly explains his intentions in the early portion of the paper presenting the theorems, and they had nothing to do with computing. He intended to and succeeded in proving that theorems within a concrete mathematical system cannot always be proven even if they are true. Gödel's work placed unwanted doubt on the initiative to prove all remaining unproven mathematical theorems. Mathematicians naturally tended to think of mathematics as the perfect human endeavor, and a legitimate proof of incongruity between what is true and what is provable seemed an imperfect irritation.

+ +

Perhaps the most profound response to Gödel's incompleteness theorems came from Alan Turing, who likely deliberately placed the word Completeness in the name of his theorem. But this was not a refutation either. He worked around incompleteness by defining a class of mathematical operations and finite data structures upon which they can operate that he could prove could be complete. Upon doing so, he put into place an important portion of the basis for algorithm development.

+ +

Nonetheless, it is probably wise for present day AI researchers to accept both incompleteness and inconsistency and realize that intelligence, artificial or not, is likely fallible after any finite degree of learning. This is likely because one cannot provide an infinite range of problem types to a learning system in a finite amount of time. There may always be at least one problem that the current state of learning cannot address. The practical colloquialism for this condition of partial knowledge is, ""We don't know what we don't know.""

+ +

Furthermore, a clear implication of the work of Gödel is that no proof may be found for some things that are true, ever, by any type of intelligence. Similarly, we cannot be sure that the most intelligent searching for a counter example to dispute a false assertion may end in finding one, ever. The PAC Learning framework addresses categories of problems that are solvable or not from a mathematical perspective and is worthy of study.

+ +

Lastly, but perhaps most profoundly, it is not clear that a type of intelligence exists that can learn anything, as opposed to be programmed to accomplish anything. Said another way, general intelligence may be an ideal conception never achieved but possibly approached. What may seem like super intelligence in one environment and during one specific time period may be entirely ineffective or even counter-intelligent and problematic in another environment or during a different time period.

+ +

This cannot be stressed too much, with so many statements about AI being made in the guise of science that have no origin in scientific rigor.

+ +

Nonetheless, even with these likely limitations on both AI and human intelligence, one cannot conclude that AI will be ineffective in real time critical applications. One cannot conclude that AI will be less effective than human intelligence in any particular domain either.

+ +

It is actually difficult to conclude anything about intelligence at all, without defining it formally and reaching a consensus in that definition, which continues to escape us. We can see that the absence of this formality has not stopped the mail industry from continuing to sort mail automatically. The automotive industry continues to pursue the invention of better artificial drivers than the average human driver. The game industry implements artificial opponents that have to deliberately make mistakes to let people win in an otherwise fair, real time game.

+ +

Clearly AI is evolving faster than the DNA components that affect the human brain.

+ +

People are less startled today than they would have been ten years ago by the proposition that, some time during this century, driving a car will be illegal in some jurisdictions, when the human and property loss statistics prove automated drivers to be substantially safer than nearly all manual ones. The bar for driving safety set by humans is not very high, with day dreaming, texting, occasional tiredness or inebriation slowing an already insufficient reaction time for many street events.

+ +

If the driving computing agent panics because it determines that the trajectories of a dog, a child, and an elderly person are intersecting with the car's trajectory, it may resolve the panic and plot a safe course in a millisecond (perhaps avoiding all three or perhaps sacrificing the dog to save the two people), whereas the human may resolve the panic only after hitting someone.

+ +

In summary, it is not infallibility that determines the proper balance or volume of AI deployment but the comparison of the distribution of human performances compared with the distribution found with the machine replacements under similar conditions.

+",4302,,4302,,8/18/2019 11:25,8/18/2019 11:25,,,,1,,,,CC BY-SA 4.0 +14027,2,,9372,8/18/2019 1:56,,4,,"

There is an academic paper here that studies a neural approach to deciphering ancient languages:--

+ +

(https://arxiv.org/pdf/1906.06718.pdf)

+ +
+

""In this paper we propose a novel neural approach for automatic decipherment of lost languages. To compensate for the lack of strong supervision signal, our model design is informed by patterns in language change doc-umented in historical linguistics. The model utilizes an expressive sequence-to-sequence model to capture character-level correspon-dences between cognates. To effectively train the model in an unsupervised manner, we innovate the training procedure by formalizing it as a minimum-cost flow problem. When applied to the decipherment of Ugaritic, we achieve a 5.5% absolute improvement overstate-of-the-art results. We also report the first automatic results in deciphering Linear B, a syllabic language related to ancient Greek, where our model correctly translates 67.3% of cognates."" ― Luo, Jiaming, Yuan Cao, and Regina Barzilay. ""Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B."" arXiv preprint arXiv:1906.06718 (2019).

+
+ +
+ +

Further Reading and Articles for the Layperson:--

+ + +",25982,,25982,,8/18/2019 11:06,8/18/2019 11:06,,,,0,,,,CC BY-SA 4.0 +14028,1,14092,,8/18/2019 4:51,,0,220,"

I have developed face recognition algorithms by using pre-built libraries in Python and OpenCV. However, suppose I want to make my own neural network algorithm for face recognition: what are the steps that I need to follow?

+

I have just seen Andrew Ng's course videos (specifically, I watched 70 videos).

+",,user28028,2444,,9/7/2020 22:30,9/13/2020 10:23,What are the steps that I need to follow to build a neural network for face recognition?,,3,1,,,,CC BY-SA 4.0 +14029,2,,13993,8/18/2019 6:31,,1,,"

You could argue that AI and big data are trying to switch the AI method from deductive to inductive reasoning in the sense that original AI was deductive (if...then conditionals) but deep learning implies inductive reasoning (feed the network a million images of white swans and the network will ""conclude"" all swans are white - the classic example of (erroneous) inductive reasoning.)

+",17709,,,,,8/18/2019 6:31,,,,0,,,,CC BY-SA 4.0 +14030,2,,8518,8/18/2019 8:53,,2,,"

A penalty (barrier function) is a perfectly valid and the simplest method for a simplex-type constraint (the L1 norm is a simplex constraint on absolute values). Any type of barrier function may work: logarithmic, reciprocal or quadratic. All of them are supported by any major framework (PyTorch, TensorFlow); just add them to the loss function. You would need some hyperparameter tuning for the scale factor of the penalty.
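
For instance, a minimal PyTorch sketch of the penalty idea for the constraint $\left \|v\right \|_1 \leq 1$ (the tensor, the stand-in task loss and the penalty weight are all made up):

import torch

penalty_weight = 10.0

v = torch.tensor([0.7, -0.6, 0.2], requires_grad=True)
task_loss = (v ** 2).sum()                                 # stand-in for the real loss
violation = torch.clamp(v.abs().sum() - 1.0, min=0.0)      # how far outside the constraint
loss = task_loss + penalty_weight * violation ** 2         # quadratic barrier/penalty
loss.backward()                                            # gradients now include the penalty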

+ +

There is a more efficient, though more complex, way to do it. Instead of imposing the constraint, you can automatically output a value which satisfies the simplex constraint:

+ +

Assume that L1 norm constraint is $\left \|v\right \|_1 \leq 1$, $v \in \mathbb{R}^n$

+ +
    +
  1. put $sigmoid(v_i)$ activation on output to norm elements to [-1, 1]
  2. +
  3. add slack (fake) variable element $v_{n+1} = 1 - \sum_{1}^{n} v_i $
  4. +
  5. project new $v{}'\in \mathbb{R}^{n+1}$, $v{}'_i = |v_i|,1\leq i \leq n+1$ onto unit simplex with standard algorithm (also here)
  6. +
+ +

Backpropagation through the last step may require differentiable sorting, which is missing in most frameworks; you may have to look for an open-sourced implementation, for example extract it from here, or use some automatic differentiation package. Both require some substantial code reading/debugging. However, in my experience, assuming a constant $\Delta$ also works in many cases, and in that case differentiable sorting is not needed. The intuition behind a constant $\Delta$ is that $\Delta$ can be chosen in such a way that there is some interval on which its value doesn't affect the sorting order.
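
For reference, here is a plain numpy sketch of the standard sorting-based Euclidean projection onto the unit simplex used in step 3 above; this version is not differentiable and is only meant to show the algorithm itself:

import numpy as np

def project_to_simplex(y):
    u = np.sort(y)[::-1]                          # sort in descending order
    css = np.cumsum(u)
    idx = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(y + theta, 0.0)

print(project_to_simplex(np.array([0.8, 0.6, -0.1])))   # sums to 1, all entries >= 0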

+",22745,,22745,,8/18/2019 9:13,8/18/2019 9:13,,,,0,,,,CC BY-SA 4.0 +14031,2,,9890,8/18/2019 10:36,,2,,"

The probability map / output isn't produced by your loss function, but by your output layer, which is activated either by softmax or sigmoid.

+ +

In other words, your dice loss output is also a probability map. It's simply very confident in itself. If you forget about the problem with potential overfitting for a moment and train your binary crossentropy model longer, the probability values will eventually all converge to the 2 ends (0 and 1).

+ +

In my experience, dice loss and IOU tend to converge much faster than binary crossentropy for semantic segmentation, so if you stop the training early on, dice loss will produce a probability map that resembles a binarized output more so than binary crossentropy.

+",28033,,1671,,8/22/2019 20:04,8/22/2019 20:04,,,,0,,,,CC BY-SA 4.0 +14032,1,,,8/18/2019 11:03,,1,888,"

I have a robotics assignment, which I am unable to solve. Given the axis-angle rotation vector $\Theta = (2, 2, 0)$, how can I calculate the unit vector of the rotation axis $k$ and the angle $\theta$?

+",28034,,2444,,1/9/2021 20:19,5/30/2023 1:08,"Given an axis-angle rotation vector, how can I find the unit rotation axis and angle?",,1,0,,,,CC BY-SA 4.0 +14035,2,,14007,8/18/2019 12:44,,0,,"

Sebastian Thrun in one of his online interviews once suggested that he thought that the conversational solution was a combination of both massive machine learning and rules based programming.

+ +

The problem in the case of chats is that our expectations are very high, which dooms early solutions to failure. For the ML side we require large amounts of data, and while large amounts may be available they are highly biased and unbalanced; they mostly focus on one area (specialized context) and re-use the same sentence formulas over and over again, so the learning finds a comfortable corner case solution and refuses to learn anything else. People are so predictable that they are an unhelpful source of raw data.

+ +

One approach might be to use carefully constructed rules to generate the data that ML can learn from, rules that can guarantee broad contextual applicability and sentence construction variation.

+",4994,,,,,8/18/2019 12:44,,,,0,,,,CC BY-SA 4.0 +14036,2,,14020,8/18/2019 12:56,,0,,"

Strictly speaking, this is impossible. Changing the form of a sentence also changes its meaning. Even active-passive can be important, as you would use it to emphasise what is important: was it relevant what the cat was doing, or was it more relevant what happened to the mouse? True, the purely propositional meaning is not affected by this, but that is only one component of a sentence's meaning.

+ +

There has been a lot of work in traditional linguistics about sentence form. You could look at one of the seminal works, Syntactic Structures by Noam Chomsky, where he introduces the concept which later lead to transformational grammar. This influenced a lot of subsequent linguistic approaches, but as far as I am aware transformations are not really that much in the linguistic focus anymore.

+ +

For your second question, stylistic consistency, you could look at the work of Douglas Biber. His book Variation across speech and writing introduces a number of (easily extractable) linguistic features that you could use to quantify consistency.

+",2193,,,,,8/18/2019 12:56,,,,0,,,,CC BY-SA 4.0 +14037,2,,13916,8/18/2019 13:12,,0,,"

Sound and image manipulation necessarily creates artifacts. Around the edges of superimposition in layers there are such artifacts. Face replacement and other more surface or object centered operations create a different class of artifacts. A sufficiently well constructed LSTM or GRU network, with a data set of manipulated frame sequences and the user (mouse and keyboard) events that manipulated them, can be used to produce good guesses of the event set from new images. Adding unmanipulated images to the data set can allow for the no-event case. That would be the supervised way to do it. There are unsupervised approaches that would require considerably less training resources, which is likely the case with this San Francisco solutions provider.

+ +

In either case, the question of escalation is a good one. One can also create a device, building from the current state of machine learning, that hides manipulations from existing detection software. If they are forward thinking, the same provider may have already developed it.

+ +
+

Can we combat against deepfakes? ... + I am wondering that the people who are creating deepfakes can as well their AI's to remove these imperfections ...

+
+ +

Yes and yes. In war, the combatants learn the methods of the opposing combatants and adapt. A detection mechanism for opposing strategy changes is also theoretically possible, which is one of the reasons that military research facilities spend so much on higher forms of AI.

+ +

The edit to the question is not entirely tangential either.

+ +

If we propose, which some people have, that a virtual reality may damage human culture or individual psyches, the average citizen is likely to be considered collateral damage on the field of combat by companies seeking a good financial return from their AI development. Of course, we could say the same thing about the use of diminished fifths in music. Two notes that are six half steps apart produce a dissonant frequency ratio of $1:\sqrt{2}$. The diminished fifth was considered subliminally satanic in Europe centuries ago and prohibited in music compositions by law. The glass harmonica was alleged to have driven listeners insane.

+ +

Anthropologically, it is possible that a mark of our species is to manipulate appearance. To hunt fakes in frames and audio is likely a fruitless hunting ground, with our without the escalation. The current hunting ground of import is the research into what genetic elements led to human abilities to imagine, design, and fabricate. After that is known, we may have a better window into whether the cat-and-mouse games we play have any sustainable value for our species going forward. Those who love competition believe that it strengthens, which is possible. It is also possible that the games are solely an artifact of a painful path to our emergence as the dominant mammalian species and no longer of any particular use. ""Do to others as you would want them to do to you,"" has the ring of truth we can't ignore either.

+ +

If we look through this wider lens, we can see that our entertainment choices tend toward what could (in the absence of bias) qualify as deepfakes. There are entire cities fueled by the money made by the entertainment industry producing excellence in sound and image capture, synthesis, and manipulation. The story lines are not necessarily representing deep truths. This is more overt.

+ +

On the more covert side, some pass fakes off as reality as a move in their own game to achieve some objective, but this is not the exception in our culture. The fields of public relations and marketing are based on the creation and preservation of business value. Some elements of government, education, and community are based on the creation of economy-preserving beliefs. The intention may be to benefit others or beat them and gain personal wealth.

+ +

Some of us seek authenticity and would like the fake-finders to win the combat, but it appears they may be on the losing side.

+ +

Does this question and this answer pertain to this Stack Exchange community? Absolutely. This community's description in the drop down of SE communities reads, ""For people interested in conceptual questions about life and challenges in a world where 'cognitive' functions can be mimicked in purely digital environment."" Whether AI ultimately weighs in on the side of playing people or informing them certainly pertains to this published view of this community's purpose.

+",4302,,,,,8/18/2019 13:12,,,,0,,,,CC BY-SA 4.0 +14038,2,,13973,8/18/2019 13:36,,8,,"

The convergence and optimality proofs of (linear) temporal-difference methods (under batch training, so not online learning) can be found in the paper Learning to predict by the methods of temporal differences (1988) by Richard Sutton, specifically section 4 (p. 23). In this paper, Sutton uses a different notation than the notation used in the famous book Reinforcement Learning: An Introduction (2nd ed.), by Sutton and Barto, so I suggest you get familiar with the notation before attempting to understand the theorem and the proof. For example, Sutton uses letters such as $i$ and $j$ to denote states (rather than $s$), $z$ to denote (scalar) outcomes and $x$ to denote (vector) observations (see section 3.2 for example of the usage of this notation).

+ +

In the paper The Convergence of TD($\lambda$) for General $\lambda$ (1992), Peter Dayan, apart from recapitulating the convergence proof provided by Sutton, he also shows the convergence properties of TD($\lambda$) and he extends Watkins' Q-learning convergence theorem, whose sketch is presented in his PhD thesis Learning from Delayed Rewards (1989), and defined in detail in Technical Note: Q-learning (1992), by Dayan and Watkins, to provide the first strongest guarantee or convergence proof for TD(0).

+ +

There is much more research work on the convergence properties of TD methods, such as Q-learning and SARSA. For example, in the paper On the Convergence of Stochastic Iterative Dynamic Programming Algorithms (1994), where Q-learning is presented as a stochastic form of dynamic programming methods, the authors provide a proof of convergence for Q-learning by making direct use of stochastic approximation theory. See also Convergence of Q-learning: a simple proof by Francisco S. Melo. In the paper Convergence Results for Single-Step On-Policy Reinforcement-Learning Algorithms, the authors provide a proof of the convergence properties of on-line temporal difference methods (e.g. SARSA).

+",2444,,2444,,8/18/2019 19:13,8/18/2019 19:13,,,,4,,,,CC BY-SA 4.0 +14040,2,,14032,8/18/2019 17:12,,0,,"

Given the axis-angle rotation vector $\Theta = (2, 2, 0)$, you can find the unit vector in the same direction by dividing by the norm (or length) of $\Theta$, denoted by $\|\Theta\| = \sqrt{2^2 + 2^2 + 0^2} = \sqrt{8} = 2\sqrt{2}$. Therefore, the unit vector in the direction of $\Theta$ is $k= \Theta/\|\Theta\| = (1/\sqrt{2}, 1/\sqrt{2}, 0)$, which should be the axis of rotation that you're looking for. The angle should just be the norm of $\Theta$, that is, $\theta = \|\Theta\| = 2\sqrt{2}$. Note that $k \theta$ gives you your original vector $\Theta$.
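
A quick numpy check of this computation (just an illustration, not required for the assignment):

import numpy as np

Theta = np.array([2.0, 2.0, 0.0])
theta = np.linalg.norm(Theta)            # 2*sqrt(2) ~ 2.828
k = Theta / theta                        # (1/sqrt(2), 1/sqrt(2), 0)

print(theta, k)
print(np.allclose(k * theta, Theta))     # True: k*theta recovers the original vector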

+",2444,,2444,,8/18/2019 17:23,8/18/2019 17:23,,,,0,,,,CC BY-SA 4.0 +14041,1,,,8/18/2019 17:33,,3,3553,"

Given an axis-angle rotation vector $\Theta = (2,2,0)$, after finding the unit vector $k=(1/\sqrt{2}, 1/\sqrt{2}, 0)$ and angle $\theta = 2\sqrt{2}$ representing the same rotation, I need to derive the rotation matrix $R$ representing the same rotation and to show that the matrix is orthonormal. How can I do that?

+",28035,,2444,,8/18/2019 18:30,8/21/2019 14:38,How can I derive the rotation matrix from the axis-angle rotation vector?,,1,0,,,,CC BY-SA 4.0 +14042,2,,9372,8/18/2019 17:44,,0,,"

I would say that it depends on whether that language would have wanted to be decyphered.

+ +

The origin of Cryptography dates back to around 700-800 AD. +We may not know the methods by which these texts are obscured. One such example is the Lesser Banishing Ritual of the Pentagram which, revealed only by happenstance, formed the basis of a whole new esoteric renaissance in the west, its lexicon forming a basis for the lexicon in use by these orders today.

+ +

This Rite centers around the letters INRI, which tradition says were written upon the cross of Jesus Christ as an abbreviation for Jesus of Nazareth, King of the Jews. There are, however, numerous other levels of occult meaning regarding these four letters in the Rosicrucian Magical Tradition. One of these is a Hermetic secret alluded to by the Latin phrase ""Igne Natura Renovatur Integra"" which means ""By fire, nature is perfectly renewed."" These four letters additionally adorn the rays of the angles of the Rose Cross Lamen worn by Adepts of the Ordo Rosae Rubeae et Aureae Crucis.

+ +

A deeper interpretation lies occulted behind the attributions of the Hebrew letters and the Magical Forces to the Paths on the Qabalistic Tree of Life. The Path attributed to the Hebrew letter y is attributed to the Zodiacal Sign Virgo as well, that of n to Scorpio, and r to the Sun. There exist further magical associations between the Sign Virgo with the Egyptian Goddess Isis, Scorpio with Apophis, and the Sun with Osiris. When the first letter is taken from the Names of each of these Gods, the name ""IAO"" is formed. Additionally, due to the Signs associated with Isis, Apophis, and Osiris, they form the letters ""LVX.""

+ +

Thus within the letters IRNI lie concealed the letters IAO and LVX, which may also be found upon the rays of the angles of the Rose Cross Lamen worn by Adepts of the R. R. et A. C. The name IAO was considered by the Gnostics to be the Supreme Name of God. Its letters further allude to Salt, Sulfur, and Mercury in Alchemy and to an even more recondite secret symbolized by the relationship between Isis, Apophis, and Osiris.

+ +

So I would say that, for machine learning to be able to decipher real languages, it would almost have to be intuitive, able to draw parallels and connect various points of reference together. It would have to determine what things mean, when a lot of the time this is virtually impossible.

+ +

Your variables are infinite. You have no real method of finding out what they are. Add to that context, lexicons, deliberate obfuscation, culture and the passage of time and I think it would not be realistic to expect machine-learning to provide much help.

+",28043,,,,,8/18/2019 17:44,,,,0,,,,CC BY-SA 4.0 +14044,2,,14041,8/18/2019 19:03,,1,,"

The rotation matrix $R_k(\theta)$ associated with a given unit-length vector $k$ and angle $\theta$ is given by the following formula

+ +

$$\small{R_k(\theta) = \begin{bmatrix} +\cos \theta +k_x^2 \left(1-\cos \theta\right) & k_x k_y \left(1-\cos \theta\right) - k_z \sin \theta & k_x k_z \left(1-\cos \theta\right) + k_y \sin \theta \\ +k_y k_x \left(1-\cos \theta\right) + k_z \sin \theta & \cos \theta + k_y^2\left(1-\cos \theta\right) & k_y k_z \left(1-\cos \theta\right) - k_x \sin \theta \\ +k_z k_x \left(1-\cos \theta\right) - k_y \sin \theta & k_z k_y \left(1-\cos \theta\right) + k_x \sin \theta & \cos \theta + k_z^2\left(1-\cos \theta\right) +\end{bmatrix}}$$

+ +

So, to find your specific rotation matrix, you just need to substitute the values of your $k$ and $\theta$ in the above matrix.

+ +

The derivation of this matrix can be found in section 9.2 Rotation Matrix Derivation of the PhD thesis Modelling CPV (2015), by Ian R. Cole. The basic idea of the derivation consists of the following steps

+ +
    +
  1. Rotate the given axis $k$ and the point $p$ (that you want to rotate) such that the axis $k$ lies in one of the coordinate planes: xy, yz or zx

  2. +
  3. Rotate the given axis $k$ and the point $p$ (that you want to rotate) such that the axis $k$ is aligned with one of the two coordinate axes for that particular coordinate plane: $x$, $y$ or $z$

  4. +
  5. Use one of the fundamental rotation matrix to rotate the point $p$ depending on the coordinate axis with which the rotation axis is aligned

  6. +
  7. Reverse rotate the axis-point pair such that it attains the final configuration as that was in step 2 (that is, you have to undo step 2)

  8. +
  9. Reverse rotate the axis-point pair which was done in step 1 (that is, you have to undo step 1)

  10. +
+ +

To show that a matrix is orthonormal, you need to show that it is orthogonal (each row is orthogonal, i.e. perpendicular, to every other row, and similarly for the columns) and that the length of each row (and column) is 1. Equivalently, a square matrix $Q$ is orthogonal if and only if

+ +

$$ +Q^TQ = QQ^T = I +$$

+ +

where $Q^T$ is the transpose of $Q$ and $I$ is the identity matrix. If you are really stuck, have a look at this proof https://math.stackexchange.com/a/537248/168764 (and this other answer https://math.stackexchange.com/a/156742/168764), but, at this point, it should just be a matter of substituting $R_k(\theta)$ into the equation above and checking that the equation holds.
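
As a numerical illustration (not part of the cited derivation), the following numpy snippet builds $R$ for the given $k$ and $\theta$ from the matrix above and checks the orthonormality condition:

import numpy as np

kx, ky, kz = 1/np.sqrt(2), 1/np.sqrt(2), 0.0
theta = 2*np.sqrt(2)
c, s, v = np.cos(theta), np.sin(theta), 1 - np.cos(theta)

# Rodrigues rotation matrix, written out exactly as in the formula above.
R = np.array([
    [c + kx*kx*v,    kx*ky*v - kz*s, kx*kz*v + ky*s],
    [ky*kx*v + kz*s, c + ky*ky*v,    ky*kz*v - kx*s],
    [kz*kx*v - ky*s, kz*ky*v + kx*s, c + kz*kz*v],
])

print(np.allclose(R.T @ R, np.eye(3)))    # True: the rows/columns are orthonormal
print(np.isclose(np.linalg.det(R), 1.0))  # True: a proper rotation, not a reflection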

+",2444,,2444,,8/21/2019 14:38,8/21/2019 14:38,,,,0,,,,CC BY-SA 4.0 +14045,2,,13901,8/18/2019 23:40,,2,,"

I believe the answer is yes, but the video editing is mostly programmatic. The AI part comes with detecting the right spots to cut.

+ +
    +
  • You want to detect the right portion when someone picks a wooden skittle
  • +
  • you want to detect when the Skittles stop moving on the ground (to detect this, also when they start moving)
  • +
+ +

These will give you timestamps on the video. The next step is to add some padding and apply the frame-dropping (cut) operations to the video, which can all be done with a simple script.

+ +

You might find this video very similar in concept, and maybe even as code source: https://youtu.be/DQ8orIurGxw

+ +

For the recognition of the objects in the video frames, there are lots of options. You might look into computer vision or object motion detection.

+",190,,,,,8/18/2019 23:40,,,,2,,,,CC BY-SA 4.0 +14046,1,,,8/19/2019 2:12,,2,180,"

First of all, I should mention that I have a very basic knowledge of ML so I apologize if this question seems trivial or stupid.

+ +

I am working on a small personal project, basically an app that analyzes Facebook posts concerning movies and translates them into a rating (out of 100). The algorithm looks for keywords, the length of the post, etc., to determine the individual rating, and then averages all the ratings among a user's FB friends to give the result. My question is, would I be able to drastically improve such an algorithm by using ML, or is it not worth it? If yes, what algorithms/techniques do you advise me to learn?

+ +

All help is appreciated!

+",,user28054,,,,9/13/2020 4:07,Using ML to analyze Facebook posts,,1,1,,,,CC BY-SA 4.0 +14047,1,14306,,8/19/2019 11:08,,5,861,"

There seems to be a lot of literature and research on the problems of stochastic gradient descent and catastrophic forgetting, but I can't find much on solutions to perform continual learning with neural network architectures.

+

By continual learning, I mean improving a model (while using it) with a stream of data coming in (maybe after a partial initial training with ordinary batches and epochs).

+

A lot of real-world distributions are likely to gradually change with time, so I believe that we should be able to train NNs in an online fashion.

+

Do you know which are the state-of-the-art approaches on this topic, and could you point me to some literature on them?

+",11303,,2444,,12/22/2021 11:55,12/22/2021 11:55,What are the state-of-the-art approaches for continual learning with neural networks?,,3,0,,,,CC BY-SA 4.0 +14048,2,,13870,8/19/2019 11:18,,1,,"

I would refer to your problem as having a continuous state space. By using a 32-bit float variable you discretize it. However, creating a state for every possible value of a 32-bit float variable is probably too much. You should decide on:

+ +
    +
  • the variable range: what is the real range of the position variables (e.g. from 0 m to 10 m),
  • +
  • and what is the resolution you require for your problem (e.g. 0.01 m or 0.1 m).
  • +
+ +

Note that you should take into account:

+ +
    +
  • the sensor range resolution,
  • +
  • the required range and resolution for the problem.
  • +
+ +

Then, based on the number of states you could decide to discretize the states or to use a Monte Carlo approach.
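
+ +

For illustration, here is a minimal sketch (with made-up range and resolution values) of turning a continuous position into a discrete state index:

    import numpy as np

    pos_min, pos_max = 0.0, 10.0   # assumed variable range in metres
    resolution = 0.1               # assumed required resolution in metres

    bins = np.arange(pos_min, pos_max + resolution, resolution)

    def to_state(position):
        # clip to the valid range, then map to the index of the containing bin
        position = np.clip(position, pos_min, pos_max)
        return int(np.digitize(position, bins)) - 1

    print(to_state(3.27))    # e.g. 32
    print(len(bins) - 1)     # number of discrete states per position variable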

+ +

See for example the work of Brechtel et al. (2013).

+",198,,,,,8/19/2019 11:18,,,,0,,,,CC BY-SA 4.0 +14050,1,,,8/19/2019 14:35,,2,42,"

Inspired by: Two Worlds Pictures

+ +

+

+ +

I just want to create a Machine Learning Model that can automatically combine the opposite images into 1 image.

+ +

I am thinking about 2 possible solutions:

+ +
    +
  1. Pose Estimation: Detect humans and their poses from image data and search an archive by pose, but totally out of context.

  2. Land Lines: explore similar lines.
+ +

These are just my ideas; do you have any recommendations? Thanks

+",27990,,,,,8/19/2019 14:35,How to use machine learning to create combine of opposite images side by side,,0,0,,,,CC BY-SA 4.0 +14052,1,,,8/19/2019 15:36,,1,95,"

I am a newbie in reinforcement learning and trying to understand how to implement continuous actions bounded by $[-2, 2]$. My research shows that doing nothing is a possible solution (i.e. action of 4.5 is mapped to 2 and the action of -3.1 is mapped to -2), but I wonder if there are more elegant approaches.

+",27472,,2444,,1/22/2021 1:46,1/22/2021 1:46,"DDPG: how to implement continuous action space bounded in the interval [-2, 2]?",,0,0,,,,CC BY-SA 4.0 +14053,2,,14013,8/19/2019 16:50,,0,,"

I found a somewhat more accessible introduction here:

+ +

https://medium.com/konvergen/an-introduction-to-adagrad-f130ae871827

+ +

Let me start from the last part of your question. The frequency of a parameter is in G_t, which is the accumulated sum of squared gradients from all the time steps up to step t. If the gradient vanishes in many of the previous steps, then you divide the learning rate with a smaller number for that parameter.

+ +

And for the first part, you want the parameter that is more frequent to have a smaller learning rate as it is updated on more iterations compared to a parameter which is updated only a small number of times.
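
+ +

As a rough sketch of that idea (not the original implementation), a per-parameter AdaGrad update can be written as:

    import numpy as np

    def adagrad_step(theta, grad, G, lr=0.01, eps=1e-8):
        """One AdaGrad update: G accumulates the squared gradients per parameter."""
        G += grad ** 2                              # G_t = G_{t-1} + g_t^2 (elementwise)
        theta -= lr * grad / (np.sqrt(G) + eps)     # larger G -> smaller effective step
        return theta, G

    theta, G = np.zeros(3), np.zeros(3)
    theta, G = adagrad_step(theta, np.array([0.1, 2.0, 0.0]), G)

A parameter whose gradients have been large or frequent accumulates a large G and therefore gets a smaller effective learning rate, while a rarely-updated parameter keeps a larger one.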

+",22301,,22301,,8/20/2019 13:23,8/20/2019 13:23,,,,2,,,,CC BY-SA 4.0 +14055,1,,,8/19/2019 18:22,,2,97,"

Currently, I'm trying to reimplement AlphaZero in pure C++ using libtorch, to accommodate my project's needs. But when training my model, I found that the value loss doesn't decrease at all, even after ~2000 iterations, while the policy loss decreases pretty fast from the very beginning.

+ +

Has anybody met a similar issue when developing your AlphaZero project? And could you give some suggestions about the cause of my issue, based on your experience?

+ +

Many thanks

+",28008,,,,,8/19/2019 18:22,Alphazero Value loss doesn't decrease,,0,0,,,,CC BY-SA 4.0 +14056,1,,,8/19/2019 20:57,,2,365,"

How do I create a chatbot using TensorFlow or PyTorch, like the one defined in DialogFlow? What are the best datasets that I can use to create my own personal assistant like Google Assistant?

+ +

I want to create a chatbot (an open-source project) as an assistant for custom tasks (like google assistant).

+ +

I have tried many neural network models, like seq2seq, but I couldn't get satisfactory results, maybe because of the small dataset (I took it from the Simpsons movie scripts) or the model (seq2seq). I am curious what model they use at Google and what type of dataset they pick to get such good results, and whether a normal person can create fully functional chatbots without relying on paid services (like Google's DialogFlow, api.ai, etc.) with good results.

+ +

I recently heard of OpenAI's implementation of a specific model named GPT-2 which, as they concluded in the paper, showed remarkable performance, but they didn't provide the dataset, for various reasons.

+ +

What I want to say is that there are a lot of resources and code samples on the internet for making a working chatbot (or maybe that's just what they show), but when I try to replicate them I always fail to get even remotely good results.

+ +

So I need proper guidance on how to make and train such chatbots on my own laptop (16 GB RAM, 2 GB GPU; I can even get a better configuration), without spending any money on Google services or other such paid APIs.

+ +

Please suggest something if someone got good results.

+",28071,,16565,,8/21/2019 7:15,8/21/2019 7:15,How do I create a chatbot using tensorflow or pytorch using like the one defined in dialogflow?,,0,0,,,,CC BY-SA 4.0 +14058,2,,14046,8/20/2019 1:28,,1,,"

For this kind of ML training, you will need a ton of data first, at least in the thousands of examples. If you have a bot program that fetches those data for you, AI is the way to go. I'm not sure how else you would do it, though.

+ +

To train the NN you will need the inputs (the posts) and the targets (the ratings you want it to output). The targets could be anything you want, like the ratio of likes to views, etc.

+ +

There are tons of ML libraries out there and I recommend keras as it is easy to learn for beginners, hope it helps :)

+",27925,,,,,8/20/2019 1:28,,,,0,,,,CC BY-SA 4.0 +14059,1,,,8/20/2019 3:46,,3,217,"

What are some conferences for publishing papers on Deep Learning for Human Activity recognition? Do any of the major conferences have specific tracks for Human Activity Recognition?

+",28077,,2444,,1/22/2021 1:31,2/21/2021 2:05,What are some conferences for publishing papers on Deep Learning for Human Activity recognition?,,1,0,,,,CC BY-SA 4.0 +14061,1,,,8/20/2019 10:01,,1,248,"

I have a dataset of 3D images (volumes) with dimensions 400x250x400. For each input image I have an output of the same dimensions. I would like to train a machine learning (or deep learning) model on this data in order to predict values with new data.

+ +

My main problems are :

+ +

Images are very big, which leads to memory issues (I tried with an NVIDIA 2080 Ti and the model doesn't fit in memory during training).

+ +

I need very fast inference, because the model will be used in real time (speed is a requirement).

+ +

I already have experience with architectures such as 3D U-Net using Keras with the TensorFlow backend, but it didn't work for me because of the previous reasons, even with very few layers and convolution filters.

+ +

I know that one of the first solutions one could imagine is to reduce the resolution of the volumes, but in my case I can't do this, because I would lose a lot of spatial information.

+ +

Any ideas or suggestions? Maybe neural nets are not the best solution? If not, what could I use?

+ +

Thank you very much for your suggestions

+",28085,,,,,8/20/2019 10:01,Suggestions for Deep Learning for regression on huge 3D volumes,,0,2,,,,CC BY-SA 4.0 +14064,2,,13944,8/20/2019 12:28,,0,,"

A somewhat large set of designs and set-ups can be made to learn a rating function for a given set of labeled examples. If the objectives are simplicity and effectiveness (accuracy, reliability, and speed), then a third option should be considered.

+ +

The requirement in the question includes, ""Outputs an integer rating 0 [through] 4 [inclusive]."" For such a discrete result, the number of required output bits $b$ (where $s$ is the number of possible states and $I$ is the set of integers) is given as follows.

+ +

$$\min_b \, (b \in I \; \land \; b \ge \log_2 s)$$

+ +

In this case, we require three bits of output.

+ +

$$s = 5 \quad \implies \quad b = 3$$

+ +

Note that, with a similar configuration, ratings of 0 through 7 would also require only three bits of output. Either way, the output layer would likely be simplest and most efficient if its activation function were a binary step function. This removes the need for rounding after it is applied. The output layer would then provide a binary value indicating the rating. The goal of learning would be to reduce the error between the feed-forward output and the associated binary value of the label for each example.
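
+ +

As a small illustrative sketch (my addition, not part of the scheme above), encoding integer ratings 0-4 as 3-bit target vectors for such an output layer could look like this:

    import numpy as np

    def rating_to_bits(rating, n_bits=3):
        """Encode an integer rating as a binary target vector, most significant bit first."""
        return np.array([(rating >> i) & 1 for i in reversed(range(n_bits))], dtype=np.float32)

    def bits_to_rating(bits):
        """Decode a thresholded 3-bit network output back to an integer rating."""
        return int(sum(int(b) << i for i, b in enumerate(reversed(bits))))

    print(rating_to_bits(4))            # [1. 0. 0.]
    print(bits_to_rating([1, 0, 0]))    # 4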

+ +

Previous layer(s) could be sigmoid or a more contemporary and less problematic continuous activation function like ISRLU.

+ +

Since the engineer can select the error function used by the learning framework to accept any input range and distribution, normalizing the labels for supervised learning is primarily employed to remove redundancy from time and resource consuming operations required to compute error. With ratings as the labels, unless the distribution of ratings is skewed and the data set is such that learning time is excessive, normalization may not be necessary. If it is, it would likely be because improving the label distribution in advance (requiring floating point input to the error function and removing skew) would reduce learning time.

+ +

The other two approaches introduce unnecessary complexities mentioned in context above. A consequence of removing complexities without adding impediments to convergence is more efficiency during learning and during execution after learning.

+",4302,,,,,8/20/2019 12:28,,,,0,,,,CC BY-SA 4.0 +14066,2,,13799,8/20/2019 13:35,,0,,"

The relationship between the axes of graph (1) and your variables $x$ and $y$ is not clear, so this generalized answer may be helpful or useless.

+ +

From graph (1) it appears that the correlation coefficient $\mathcal{C}$ of a quadratic fit of data set $\mathcal{S}$ would be much better. Consider $y_1$ and $y_2$ approximations of $y$.

+ +

$$\mathcal{C} (y_2, a, b, c, \mathcal{S}) > \mathcal{C} (y_1, a, b, \mathcal{S}) \\ y_2 = ax^2 + bx + c \\ y_1 = ax + b$$

+ +

To achieve a more nearly uniform distribution, perform a least squares fit for $y_2$ against $y$ on $\mathcal{S}$ to obtain $(a, b, c)$. Then find a mapping function that produces $y'$ and use it where the uniform distribution is desired. A reasonable approximation is simply this.

+ +

$$y' = \frac{y}{y_2(x)}$$
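
+ +

A minimal NumPy sketch of this mapping (with made-up data standing in for the data set $\mathcal{S}$):

    import numpy as np

    # hypothetical data arrays; replace with the actual data set S
    x = np.linspace(0.0, 10.0, 200)
    y = 0.5 * x**2 + 2.0 * x + 1.0 + np.random.normal(scale=2.0, size=x.shape)

    # least-squares fit of y2 = a*x^2 + b*x + c
    a, b, c = np.polyfit(x, y, deg=2)
    y2 = a * x**2 + b * x + c

    # approximate mapping towards a more uniform distribution
    y_prime = y / y2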

+",4302,,,,,8/20/2019 13:35,,,,0,,,,CC BY-SA 4.0 +14067,2,,13885,8/20/2019 14:41,,1,,"

No.

+ +

For any intelligent system $\mathcal{S}_a$ with the set of adaptive features $\mathcal{A}_a$, there may exist another intelligent system $\mathcal{S}_b$ with the set of adaptive features $\mathcal{A}_b$ such that there exists one element of $\mathcal{A}_b$ that can be made subservient (controlled in full) through the expression of at least one element in $\mathcal{A}_b$.

+ +

It has not been proven that there ALWAYS exist such a $\mathcal{S}_b$, but it is likely given what we know about escalation in nature via DNA and in human industrial development via innovation there. Thus 100% generalized intelligence is not likely to exist. Escalation appears to be the natural course of evolution. And that is a feature of both cognitive and functional adaptation, with or without artificiality as a criterion.

+ +

One can temporarily prevent one adaptive system from escaping the boundary conditions of a particular set of boundary condition classes through the design and deployment of another adaptive system. However, it cannot be inferred that any guarantees achieved temporarily will necessarily constrain the subservient system indefinitely.

+",4302,,4302,,8/20/2019 15:25,8/20/2019 15:25,,,,0,,,,CC BY-SA 4.0 +14068,2,,13832,8/20/2019 15:20,,2,,"

If the computational components of the forward feed through the network have no curvature, which is normally the case in a sum of products, then it can be proven that any constant pixel value is equivalent in terms of effect on convergence results. We wouldn't expect a proof for that, since it would be too trivial to spend time writing up for publication. In general, functioning vision systems have feed forward computational components with curvature, so the padding is likely significant.

+ +

Even the convolutional layers may have activation functions or something even more complex going forward, as noted in Gauge Equivariant Convolutional Networks and the Icosahedral CNN (Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling, 2019).

+ +

If purely stochastic values with value distributions like that of the un-padded coordinates are used, it may be possible to prove that some gain is made, but none appeared in a few academic article searches just made. Not surprisingly, there are many proofs regarding the properties of various message padding strategies for cryptography.

+ +

Short of the inclusion of thermal or quantum noise acquisition devices in VLSI circuitry and exposure of those devices in software, purely stochastic values cannot be generated. This leaves the risk of a learning approach expected to extract features from frames learning features of the pseudo-random noise generator used to pad.

+ +

The answer is that none are universally correct and there appears to be much work to do in proving advantages between different techniques in as many cases as such advantages can be proven.

+",4302,,,,,8/20/2019 15:20,,,,0,,,,CC BY-SA 4.0 +14069,2,,9765,8/20/2019 15:38,,2,,"

I think that the universal approximation theorem plays a large role in why companies and governments are investing in deep learning. It states that, theoretically, an ANN can approximate any continuous function of $n$ input variables. Specifically, it states that feed-forward nets with a single hidden layer can do this, lending credence to the implication that RNNs and CNNs are also capable of universal function approximation. So they are investing because they have continuous functions that need to be approximated, and really the best tool for the job is neural networks.

+",20044,,,,,8/20/2019 15:38,,,,2,,,,CC BY-SA 4.0 +14071,2,,13376,8/20/2019 20:11,,4,,"

AlphaGo (2017) is quite a good watch, given that it is a documentary about the AlphaGo program, how DeepMind developed it, the help they had, and doesn't get too technical. You can watch the trailer here.

+ +

Another documentary which wasn't exactly AI but something that is an interesting watch or at least was when it came out was The Human Face of Big Data.

+",25658,,2444,,8/20/2019 23:14,8/20/2019 23:14,,,,0,,,,CC BY-SA 4.0 +14073,1,,,8/20/2019 21:02,,2,34,"

Are there methods (possibly logical or, as they are called in the literature, relational) that allow a developmental system to understand or explain the value of the received reward during the developmental process? E.g. if the system (agent) can understand that a reward happened by chance, then it should be processed quite differently from a reward that is just an initial down payment for an expected series of rewards. Credit assignment is one method (especially for delayed rewards), but maybe there are different methods as well?

+ +

Relational reinforcement learning allows to learn symbolic transition and reward functions and emerging understanding of the reward by the agent can greatly facilitate the inner consciousness of the agent and the search process for the best transition and reward functions (symbolic search space can be enormous).

+",8332,,,,,8/20/2019 21:02,Developmental systems that try to explain or understand the reward value in the reinforcement learning?,,0,3,,,,CC BY-SA 4.0 +14076,1,,,8/21/2019 2:21,,2,130,"

We are exploring the images classified by a CNN at its decision boundary, using Genetic Algorithms to generate them. We have created a fine-tuned binary grayscale image classifier for cats. As the base model, we are using an Inception-ResNet v2 pre-trained on the ImageNet dataset, and then fine-tune it with a subset of cat and non-cat images (grayscale) from ImageNet. The model achieves ~97% accuracy for a test set.

+ +

We have constrained the problem such that evolution starts from a pure white image, and random crossover and mutations are performed with only black pixels. Crossover and mutation probabilities are kept at 0.8 and 0.015 respectively.

+ +

As an incentive to generate a ""cat"" with the minimum number of black pixels, I add a penalty for the black pixel count in the image. The initial population is a set of 100 white images that have a single random pixel coloured black in them.

+ +

The evolution generates images with only black and white pixels, and we have a fitness function that is taken as a linear transformation of loss calculated between target label and network prediction as follows;

+ +
+loss = binary cross entropy (target, prediction) + λ(# of black pixels)
+
+ +

Target value (y) = target label cat - in this case, 0.

+ +

λ = hyperparameter to weight the penalty for black pixel count.

+ +

Problem

+ +

My problem is that across multiple runs of evolution, all images classified as cats tend to have black pixels towards the edges of the image. Below is an example.

+ +

+ +

This image is classified as a cat with over 96% confidence.

+ +

I have tried different crossover mechanisms including

+ +
    +
  • Random rectangular area swap between parents
  • +
  • Alternating column interchange
  • +
  • Direct black pixel crossover after encoding the image to +a reduced form that only kept track of the black pixels (black pixel +list is the genome)
  • +
+ +

Initially, we ran evolution with a similarly fine-tuned VGG-16 model, and then moved to the Inception ResNet due to better accuracy. Pixels tend to edges across models and crossover mechanisms.

+ +

In one run, I explicitly constrained the evolution to perform mutations in the middle section of the images for 3,000 generations before lifting this restriction. But the images generated after that point always had better scores.

+ +

We are at a loss as to why the images never have pixels coloured in the middle.

+ +

Does anyone have any ideas on this?

+ +
+",27919,,,,,8/21/2019 2:21,CNN - Visualizing images near decision boundary - Pixels inexplicably tend to edges,,0,0,,,,CC BY-SA 4.0 +14077,1,14129,,8/21/2019 6:58,,0,34,"

I am in the process of collecting a huge dataset of Human poses captured images to create a model to classify poses.

+ +

My question is how will I be able to train on this massive dataset? I have multiple GPUs and Multiple machines access (Also have GCP).

+ +

What would be the best way to train on such huge dataset?

+ +

Thanks.

+",6242,,,,,8/24/2019 17:50,Train on big dataset (1mil + images),,1,1,,10/28/2021 16:34,,CC BY-SA 4.0 +14079,1,,,8/21/2019 8:33,,7,9686,"

+ +

I tried to create a simple model that receives an $80 \times 130$ pixel image. I only had 35 images and 10 test images. I trained this model for a binary classification task. The architecture of the model is described below.

+ +
conv2d_1 (Conv2D)            (None, 80, 130, 64)       640       
+_________________________________________________________________
+conv2d_2 (Conv2D)            (None, 78, 128, 64)       36928     
+_________________________________________________________________
+max_pooling2d_1 (MaxPooling2 (None, 39, 64, 64)        0         
+_________________________________________________________________
+dropout_1 (Dropout)          (None, 39, 64, 64)        0         
+_________________________________________________________________
+conv2d_3 (Conv2D)            (None, 39, 64, 128)       73856     
+_________________________________________________________________
+conv2d_4 (Conv2D)            (None, 37, 62, 128)       147584    
+_________________________________________________________________
+max_pooling2d_2 (MaxPooling2 (None, 18, 31, 128)       0         
+_________________________________________________________________
+dropout_2 (Dropout)          (None, 18, 31, 128)       0         
+_________________________________________________________________
+flatten_1 (Flatten)          (None, 71424)             0         
+_________________________________________________________________
+dense_1 (Dense)              (None, 512)               36569600  
+_________________________________________________________________
+dropout_3 (Dropout)          (None, 512)               0         
+_________________________________________________________________
+dense_2 (Dense)              (None, 1)                 513     
+
+ +

What could the oscillating training loss curve represent above? Why is the validation loss constant?

+",28100,,2444,,8/22/2019 23:04,11/24/2021 2:34,What could an oscillating training loss curve represent?,,4,1,,,,CC BY-SA 4.0 +14082,1,,,8/21/2019 12:02,,5,287,"

Is it possible to specify what the asymptotic behaviour of a Neural Networks (NN) model should be?

+

I am thinking of a NN which tries to learn a mapping $\vec y=f(\vec x)$ with $\vec x$ a vector of features of dimension $d$ and $\vec y$ a vector of outputs of dimension $p$.

+

Is it possible to specify that, for instance, the NN should have a fixed value when $x_1$ goes to infinity?

+

I mean: $$\lim_{x_1\to \infty} f(\vec x) = \vec c$$

+

If it is not possible with NN, do you know other machine learning models (for instance Gaussian Process Regression or Support Vector Regression) which have a known asymptotic behaviour?

+",28108,,2444,,3/29/2021 13:55,9/29/2021 7:09,Is it possible to control asymptotic behaviour of neural network models?,,2,1,,,,CC BY-SA 4.0 +14084,1,,,8/21/2019 14:45,,1,14,"

I have a general question regarding the mAP score used in measuring object detection system performance.

+ +

I understood how the AP score is calculated, by averaging precision over recall 0 to 1. And then we can compute mAP, by averaging AP score of different labels.

+ +

However, what I have been really confused about is that the mAP score seems to be used to denote the ""precision"" of a model. Then what about the ""recall"" aspect? Note that, generally speaking, when measuring the performance of a machine learning model, we need to report precision and recall at the same time, right? It seems that mAP can only cover the precision aspect of a model.

+ +

Have I missed anything here? Or can the mAP score, despite its name being derived from precision, indeed subsume both ""precision"" and ""recall"" and therefore be comprehensive enough?

+",25973,,,,,8/21/2019 14:45,"Can mAP score be used to describe ""recall"" rate of a model?",,0,5,,,,CC BY-SA 4.0 +14085,1,14088,,8/21/2019 16:24,,1,3097,"

I mean, what determines my model size: the number of connections between layers and neurons, or the size of my dataset?

+",28061,,,,,8/21/2019 18:23,"Is ""dataset size"" and ""model size"" same thing?",,1,0,,12/28/2021 22:03,,CC BY-SA 4.0 +14086,2,,14079,8/21/2019 16:41,,6,,"

Try lowering the learning rate.

+ +

Such a loss curve can be indicative of a high learning rate. Due to a high learning rate the algorithm can take large steps in the direction of the gradient and miss the local minima. Then it will try to come back to the minima in the next step and overshoot it again.

+ +

You may also try switching to a momentum-based GD algorithm. Such a training loss curve can be indicative of a loss contour like in this example, for which momentum-based GD methods are helpful.
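
+ +

A minimal sketch of both suggestions in Keras (assuming your compiled model from the question is called model; in older Keras versions the argument is lr instead of learning_rate):

    from tensorflow.keras.optimizers import SGD, Adam

    opt = SGD(learning_rate=1e-4, momentum=0.9)   # smaller steps plus momentum
    # opt = Adam(learning_rate=1e-4)              # or an adaptive, momentum-based optimizer

    model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])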

+ +

I noticed that you have a very small training set. You may have better luck with more training data (~1000 examples) or by using a pre-trained conv network as a starting point.

+",20799,,,,,8/21/2019 16:41,,,,1,,,,CC BY-SA 4.0 +14088,2,,14085,8/21/2019 18:23,,0,,"

Dataset and model refer to different things. The dataset is the part of the data available for training (training dataset) or validation (validation dataset). The model is the goal of the learning process: the state of the computer ""brain"" after it has been fully trained. Model size refers to the size of the container that holds the model. In deep learning it can be measured by the width and depth of the network used; I also found a site comparing different models by the size of the npy file that physically contains the generated model. In that case the model was a more complex, documented structure, and the size in bytes was used for comparison purposes.

+ +

So, in short, model size is, roughly speaking, the size of the layers and neurons, if I have to pick one of your options. The dataset is a different thing.

+ +

A more precise explanation of what a model is and what a dataset is:

+ +

https://www.quora.com/What-are-different-models-in-machine-learning

+",11810,,,,,8/21/2019 18:23,,,,2,,,,CC BY-SA 4.0 +14090,1,,,8/22/2019 0:41,,3,488,"

There are many known ways to overcome overfitting or make a model generalize better to unseen data.

+ +

Here I would like to ask if normalizing/standardizing/similarizing the train and test data is a plausible approach.

+ +

By similarizing I mean making the images look alike by using some function that could be a Neural Network itself. I know that normally one would approach this the opposite way by augmenting and therefore increasing the variation in the training data. But is also possible to improve the model by restricting the variation of the training and test data?

+ +

I know that this may not be the best approach and maybe too complicated but I see some use cases where known techniques of preventing overfitting aren't applicable. In those cases, having a network that can normalize/standardize/similarize the ""style"" of different images could be very useful.

+ +

Unfortunately I didn't find a single paper discussing this approach.

+",23063,,,,,9/21/2019 10:01,Is normalizing the data a way to improve generalization?,,1,1,,,,CC BY-SA 4.0 +14091,2,,14090,8/22/2019 9:23,,1,,"

Batch normalization is usually known to speed up the learning process, as it makes the weights in the deeper layers more robust. It restricts the distribution of the inputs to a particular layer - this video might be useful for understanding what BatchNorm does. That said, batch normalization does have a regularizing effect, which does tend to increase generalization.

+ +

Talking about generalization - A focus on regularization would probably be more helpful
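
+ +

For illustration, here is a minimal Keras sketch (with hypothetical layer sizes) of inserting the batch normalization discussed above between a dense layer and its activation:

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense, BatchNormalization, Activation

    model = Sequential([
        Dense(64, input_shape=(32,)),   # linear part of the layer
        BatchNormalization(),           # normalize the pre-activations per mini-batch
        Activation('relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')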

+",25658,,,,,8/22/2019 9:23,,,,0,,,,CC BY-SA 4.0 +14092,2,,14028,8/22/2019 9:55,,0,,"

My approach to improving detection on video uses object tracking algorithms.

+

More specifically, first I detect the object using a trained classifier. Then I track the object with the KCF algorithm. If the object tracker loses the object, I call the classifier again.

+",28129,,2444,,9/7/2020 22:32,9/7/2020 22:32,,,,0,,,,CC BY-SA 4.0 +14094,1,14104,,8/22/2019 14:03,,2,59,"

I am wondering why the TF object detection API needs so few picture samples for training, while regular CNNs need many more.

+ +

What I read in tutorials is that the TF object detection API needs around 100-500 pictures per class for training (is that true?), while regular CNNs need many, many more samples, like tens of thousands or more. Why is that?

+",22659,,,,,8/23/2019 12:16,Why tf object detection api needs so few pictures?,,1,4,,,,CC BY-SA 4.0 +14096,1,,,8/22/2019 19:50,,2,53,"

We all know that using CNN, or even simpler functions, like CLD or EHD, we can generate a set of features out of images.

+ +

Are there any ways or approaches by which, given a set of features, we can somehow generate a coarse version of the original image that was given as input? Maybe a gray-scale version with visible objects inside? If so, what features do we need?

+",9053,,9053,,8/22/2019 20:08,8/22/2019 21:12,How to generate the original image from feature set?,,1,2,,,,CC BY-SA 4.0 +14097,2,,14096,8/22/2019 21:12,,1,,"

The model (that I know of) which most resembles your description is the auto-encoder, which is trained to learn a compact representation (a vector) of the input, which can later be used to reconstruct the original input. In a certain way, this compact representation (implicitly) encodes the most important features of the input. In particular, you may be looking for denoising auto-encoders.

+",2444,,,,,8/22/2019 21:12,,,,5,,,,CC BY-SA 4.0 +14098,1,,,8/22/2019 22:10,,11,372,"

FaceNet uses a novel loss metric (triplet loss) to train a model to output embeddings (128-D from the paper), such that any two faces of the same identity will have a small Euclidean distance, and such that any two faces of different identities will have a Euclidean distance larger than a specified margin. However, it needs another mechanism (HOG or MTCNN) to detect and extract faces from images in the first place.

+

Can this idea be extended to object recognition? That is, can an object detection framework (e.g. Mask R-CNN) be used to extract bounding boxes of an object, crop the object, feed this to a network that was trained with a triplet loss, and then compare the embeddings of objects to see if they're the same object?

+

Is there any research that has been done or any published public datasets for this?

+",28145,,43231,,2/19/2021 12:30,3/16/2022 17:08,Extending FaceNet’s triplet loss to object recognition,,0,1,,,,CC BY-SA 4.0 +14099,1,14100,,8/23/2019 2:34,,1,106,"

The ready-to-use DNNClassifier in tf.estimator seems not able to fit these data:

+ +
X = [[1,2], [1,12], [1,17], [9,33], [48,49], [48,50]]
+Y = [ 1,     1,      1,      1,      2,       3     ]
+
+ +

I've tried with 4 layers, but it's fitting to only 83% (= 5/6 samples):

+ +
hidden_units = [2000,1000,500,100]
+n_classes    = 4   
+
+ +

The sample data above are supposed to be separated by 2 lines (right-click image to open in new tab):

+ +

+ +

It seems stuck because Y=2 and Y=3 are too close. How do I change the DNNClassifier to fit to 100%?

+",2844,,,,,8/23/2019 8:23,TensorFlow estimator DNNClassifier fails to fit simple data,,1,0,,,,CC BY-SA 4.0 +14100,2,,14099,8/23/2019 7:41,,2,,"

Normalise your inputs.

+ +

Neural networks work poorly outside of relatively small numerical ranges on input. An ideal range is for each feature to be drawn from $\mathcal{N}(0,1)$ i.e. a Normal distribution with mean $0$ and standard deviation $1$. In your case, divide both parts of $\mathbf{x}$ by $25$ and subtract $1$ would probably suffice.

+ +

Your neural network architecture is completely overblown for the problem at hand. That may be because you were trying to force it to fit this data (and failing because of lack of normalisation). Try something more like: hidden_units = [20,10]
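
+ +

A minimal sketch of both suggestions (using the X, Y lists from the question; the feature-column setup and the input function you already have are assumed, so only the normalisation and the smaller network are shown):

    import numpy as np
    import tensorflow as tf

    X = np.array([[1, 2], [1, 12], [1, 17], [9, 33], [48, 49], [48, 50]], dtype=np.float32)
    Y = np.array([1, 1, 1, 1, 2, 3])

    X_norm = X / 25.0 - 1.0   # roughly centre each feature around 0 in [-1, 1]

    feature_columns = [tf.feature_column.numeric_column('x', shape=[2])]
    classifier = tf.estimator.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[20, 10],    # much smaller network
        n_classes=4)
    # train with your existing input_fn, feeding {'x': X_norm} and Y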

+",1847,,1847,,8/23/2019 8:23,8/23/2019 8:23,,,,7,,,,CC BY-SA 4.0 +14101,1,,,8/23/2019 9:31,,4,681,"

I'm a computer scientist who's studying support vector machines (SVMs) in a machine learning course. I have some understanding of how SVMs are designed, thanks to 16. Learning: Support Vector Machines - MIT. However, what I'm not understanding is the transition from the optimization problem of the Lagrangian function to its implementation in any programming language. Basically, what I need to understand is how to build, from scratch, the decision function, given a training set. In particular, how do I find Lagrange multipliers in order to know which points are to be considered to define support vectors and the decision function?

+

Can anyone explain this to me?

+",28151,,,user36057,4/29/2021 12:19,5/31/2021 15:01,How to implement SVM algorithm from scratch in a programming language?,,1,1,,,,CC BY-SA 4.0 +14102,1,14108,,8/23/2019 11:08,,0,52,"

I have a bunch of training data for classifying product names, around 30,000 samples. The task is to classify these product names into types of product, around 100 classes (single words).

+ +

For example:

+ +
dutch lady sweetened uht milk => milk
+samsung galaxy note 10        => electronics
+cocacola zero                 => softdrink
+...
+
+ +

All words in the inputs are indexed to numbers, and so are the classes. I've tried to use tf.estimator.DNNClassifier to classify them, but with no good results. The outcome is just an accuracy of 4%, which is meaningless.

+ +

Could it be that I'm in a case where the classes (Y values) are distributed kind of randomly, and it's too hard to achieve the required multiple linear separations?

+ +

Are there any existing solutions to classify a list of names, like my product names?

+",2844,,,,,8/23/2019 21:53,Solution to classify product names,,1,0,,,,CC BY-SA 4.0 +14104,2,,14094,8/23/2019 12:16,,2,,"

I guess that they need so little data because their models are already trained on huge datasets, and they are just transferring the learning (using those pre-trained models as starting point).

+",20430,,,,,8/23/2019 12:16,,,,0,,,,CC BY-SA 4.0 +14105,2,,13978,8/23/2019 19:56,,10,,"

Consider a dataset $\mathcal{D}=\{x^{(i)},y^{(i)}:i=1,2,\ldots,N\}$ where $x^{(i)}\in\mathbb{R}^3$ and $y^{(i)}\in\mathbb{R}$ $\forall i$

+ +

The goal is to fit a function that best explains our dataset. We can fit a simple function, as we do in linear regression. But neural networks are different: there we fit a complex function, say:

+ +

$\begin{align}h(x) & = h(x_1,x_2,x_3)\\ & =\sigma(w_{46}\times\sigma(w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4)+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6)\end{align}$

+ +

where $\theta = \{w_{14},w_{24},w_{34},b_4,w_{15},w_{25},w_{35},b_5,w_{46},w_{56},b_6\}$ is the set of the respective coefficients we have to determine such that we minimize:

$$J(\theta) = \frac{1}{2}\sum_{i=1}^N (y^{(i)}-h(x^{(i)}))^2$$

The above optimization problem can be easily solved with gradient descent. Just initialize $\theta$ with random values and, with a proper learning parameter $\eta$, update as follows until convergence:

$$\theta:=\theta-\eta\frac{\partial J}{\partial \theta}$$

+ +

In order to get the gradients, we express the above function as a neural network as follows: +

+ +

Let's calculate the gradient, say w.r.t. $w_{14}$.
$$\frac{\partial J}{\partial w_{14}} = \sum_{i=1}^N \Big[\big(h(x^{(i)})-y^{(i)}\big)\frac{\partial h(x^{(i)})}{\partial w_{14}}\Big]$$

Let $p(x) = w_{14}x_1+w_{24}x_2+w_{34}x_3+b_4$, and
let $q(x) = w_{46}\times\sigma(p(x))+w_{56}\times\sigma(w_{15}x_1+w_{25}x_2+w_{35}x_3+b_5)+b_6$

+ +

$\therefore \frac{\partial h(x)}{\partial w_{14}} = \frac{\partial h(x)}{\partial q(x)}\times\frac{\partial q(x)}{\partial p(x)}\times\frac{\partial p(x)}{\partial w_{14}} = \frac{\partial\sigma(q(x))}{\partial q(x)}\times\frac{\partial\sigma(p(x))}{\partial p(x)}\times\frac{\partial p(x)}{\partial w_{14}}$

+ +

We see that the derivative of the activation function is important for getting the gradients and so for the learning of the neural network. A constant derivative will not help in the gradient descent and we won't be able to learn the optimal parameters.
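
+ +

To make this concrete, here is a small NumPy sketch (with made-up data, and only the $w_{14}$ update shown; the other parameters are updated analogously, or you would use an autodiff library) of the forward pass and the gradient derived above:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # made-up data: x in R^3, y in R
    X = np.array([[0.1, 0.4, 0.2],
                  [0.5, 0.1, 0.9],
                  [0.3, 0.8, 0.7]])
    y = np.array([0.2, 0.9, 0.5])

    rng = np.random.default_rng(0)
    w14, w24, w34, b4 = rng.normal(size=4)   # weights into hidden unit 4
    w15, w25, w35, b5 = rng.normal(size=4)   # weights into hidden unit 5
    w46, w56, b6 = rng.normal(size=3)        # weights into output unit 6
    eta = 0.5

    # forward pass
    p = w14 * X[:, 0] + w24 * X[:, 1] + w34 * X[:, 2] + b4   # pre-activation of unit 4
    r = w15 * X[:, 0] + w25 * X[:, 1] + w35 * X[:, 2] + b5   # pre-activation of unit 5
    q = w46 * sigmoid(p) + w56 * sigmoid(r) + b6             # pre-activation of the output
    h = sigmoid(q)

    # dJ/dw14 = sum_i (h - y) * sigma'(q) * w46 * sigma'(p) * x1, as derived above
    grad_w14 = np.sum((h - y) * h * (1 - h) * w46 * sigmoid(p) * (1 - sigmoid(p)) * X[:, 0])
    w14 -= eta * grad_w14   # one gradient-descent step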

+",28058,,28058,,8/24/2019 5:08,8/24/2019 5:08,,,,0,,,,CC BY-SA 4.0 +14107,1,,,8/23/2019 21:19,,5,75,"

Suppose we have a labeled data set with columns $A$, $B$, and $C$ and a binary outcome variable $X$. Suppose we have rows as follows:

+ +
 col  A B C X
+  1   1 2 3 1
+  2   4 2 3 0
+  3   6 5 1 1
+  4   1 2 3 0
+
+ +

Should we throw away either row 1 or row 4 because they have different values of the outcome variable X? Or keep both of them?

+",28161,,16708,,9/4/2019 9:36,9/4/2019 9:36,What should we do when we have equal observations with different labels?,,3,1,,,,CC BY-SA 4.0 +14108,2,,14102,8/23/2019 21:53,,1,,"

If you're looking for an existing solution, the best approach I found was using a TF-IDF model, check out the links below which have similar examples which should be easily adapted for your dataset.

+ +

https://www.kaggle.com/selener/multi-class-text-classification-tfidf#targetText=Text%20classification%20(multiclass)&targetText=With%20the%20aim%20to%20classify,one%20of%20the%20product%20categories).

+ +

https://github.com/susanli2016/Machine-Learning-with-Python/blob/master/Consumer_complaints.ipynb

+ +

However if you specifically want to go for a DNN approach, there are a few options you can take for a multi-class text classification. Try looking into a simple CNN classifier, which is a relatively lightweight approach, computationally speaking, yet showing pretty good results:

+ +

https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f

+ +

Alternatively, you can use a word2vec or a doc2vec model to map your sentences to unique vectors, and then put them through a regression algorithm.

+ +

https://towardsdatascience.com/multi-class-text-classification-model-comparison-and-selection-5eb066197568
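
+ +

As a rough sketch of the TF-IDF approach with scikit-learn (hypothetical data, not taken from the linked examples):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # hypothetical product names and their classes
    names  = ['dutch lady sweetened uht milk', 'samsung galaxy note 10', 'cocacola zero']
    labels = ['milk', 'electronics', 'softdrink']

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),   # word unigrams and bigrams
        LogisticRegression(max_iter=1000))
    model.fit(names, labels)

    print(model.predict(['nestle full cream milk powder']))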

+",28160,,,,,8/23/2019 21:53,,,,0,,,,CC BY-SA 4.0 +14109,2,,14107,8/24/2019 0:13,,4,,"

The problem you are portraying looks like a modified XOR problem. You can't throw away the rows with a label of 1, because then the model won't be able to learn this class.

+",28162,,,,,8/24/2019 0:13,,,,0,,,,CC BY-SA 4.0 +14110,2,,13538,8/24/2019 0:31,,4,,"

I don't think that the ""try all the numbers"" approach is very representative, because I'm not sure whether or not the agent that uses that approach can be considered by any means AI.

+ +

There is no ""intelligence"" in just checking numbers to try to prove the statement. An agent that is considered to be intelligent should apply a more intelligent approach.

+ +

This becomes more evident because the question aims at exploiting the lack of scalability of the agent's strategy. If the question was ""Prove that no number exists which is one more than 5"", then the agent would have no trouble in finding the correct answer.

+",28163,,,,,8/24/2019 0:31,,,,1,,,,CC BY-SA 4.0 +14111,2,,4095,8/24/2019 5:57,,1,,"

MCTS only need to ""see"" states in respect of reward. All game mechanics is abstarcted away from MCTS and MCTS only access actions and rewards. MCTS player don't access states itself, it's only choose action according to backpropagated reward. For partially observed MCTS player can't even access rewards of states, but instead access only expected reward over information set. Because player don't see reward of each state of information set but only expected reward over all set he can't knowledgeably choose specific state from information set. Player choose random state from information set according to some distribution instead. That mean player ""don't see"" which state from information is set actually realized.

+",22745,,,,,8/24/2019 5:57,,,,0,,,,CC BY-SA 4.0 +14112,1,,,8/24/2019 7:14,,2,527,"

What is the difference between machine learning and quantum machine learning?

+",1581,,2444,,3/7/2020 6:56,12/31/2020 22:44,What is the difference between machine learning and quantum machine learning?,,1,1,,,,CC BY-SA 4.0 +14113,1,,,8/24/2019 9:06,,3,57,"

I am looking for a CNN method, or any other machine learning method, to recognize 3D natural geometries that are similar to each others, and compare these geometries with a reference 3D model. To illustrate this, consider the following crater topographic map (x,y,z) of the Moon as an example:

+ +

+ +

The exercise would be to recognize the craters, and compare their (3D) geometry (scale-invariant) with a reference 3D crater model (e.g. the one within the blue square). The result I am looking for is a kind of heatmap showing the similarity measure of (1) a sampled crater with the crater model, and/or (2) the geometry of some parts of the sampled crater (e.g. the inner crater steep sides) with those of the reference model. No classification.

+ +

I tend to think that a 3D-oriented CNN method (OctNet, Octree CNN... etc) is a starting point for the above-mentioned task but I would rather prefer getting opinions on this matter since I am still a newbie in machine learning and we are dealing with direct application to real-world natural objects here.

+",28171,,,,,8/24/2019 9:06,3D geometry and similarity with a reference model,,0,0,,,,CC BY-SA 4.0 +14114,2,,14079,8/24/2019 9:27,,7,,"

Overview

+ +

As has already been observed, your main problem, besides training-related issues like fixing the learning rate, is that you have basically no chance of learning such a big model with such a small dataset ... from scratch

+ +

So focusing on the real problem, here are some techniques you could use

+ +
    +
  • dataset augmentation
  • +
  • transfer learning + +
      +
    • from a pretrained model
    • +
    • from the encoder stage of an autoencoder (last resort option before getting into more advanced topics)
    • +
  • +
+ +

Dataset Augmentation

+ +

Add transformations to your dataset you want your classifier to learn to be invariant to

+ +

Let's assume that

+ +
    +
  • $I$ is an input image

  • +
  • $l$ its associated label

  • +
  • $f(\mathcal{I};\theta) \rightarrow \mathcal{I}$ is a parametric transformation that affects appearance but not semantic, for example it is a rotation of $\theta$ angle

  • +
+ +

then you can augment your dataset by generating $\{I_{\theta}, l\}$ a set of transformed (e.g. rotated) images associated the same $l$ label
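
+ +

As a concrete sketch of this kind of augmentation in Keras (assuming grayscale image arrays X of shape (N, 80, 130, 1) and labels y; the transformations should be limited to those your task really is invariant to):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rotation_range=15,        # f(I; theta): small random rotations
        width_shift_range=0.1,
        height_shift_range=0.1,
        zoom_range=0.1)

    # yields batches of transformed images I_theta paired with the same label l
    augmented_batches = datagen.flow(X, y, batch_size=32)
    # model.fit(augmented_batches, epochs=...)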

+ +

Transfer Learning

+ +

The fundamental idea of transfer learning is to re-use a NN which has been trained to solve a task, to solve other tasks retraining only a selected subset of the weights

+ +

It means using a pre-trained convolutive backend, the part of the model with Conv2D and Pooling, and train dense layers with dropout only (but you should still probably think about reducing the dimensionality there)

+ +

More formally think about representing your CNN Classifier as follows

+ +
    +
  • $f_{C}(I; \theta_{X})$ : Convolutive Processing on Input Image

    + +
      +
    • it is the part of the CNN composed of Conv2D and MaxPooling2D layers
    • +
    • the $\theta_{C}$ is the convolutive learnable weights set
    • +
  • +
  • $b = f_{C}(I; \theta_{C})$ : Bottleneck Feature Representation

    + +
      +
    • it is the result of Flatten layer
    • +
  • +
  • $f_{D}(b; \theta_{D})$ : Dense Processing

    + +
      +
    • it is the part of the model composed of Dense layers
    • +
    • the $\theta_{D}$ is the dense learnable weights set
    • +
  • +
+ +

The idea is to pick $\theta_{C}$ from a training run performed on another dataset, bigger than your current one, and keep it fixed while training on your task. This means reducing the number of parameters to be trained; however, beware that the dense layers account for most of the weights, as you can also see from your model summary, which means you should also focus on reducing that number, for example by reducing the bottleneck feature tensor size.
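
+ +

A minimal Keras sketch of this freezing idea (conv_base is a hypothetical pre-trained convolutive backend; only the dense head is trained):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Flatten, Dense

    conv_base.trainable = False          # freeze theta_C, the pre-trained convolutive weights

    model = Sequential([
        conv_base,                       # f_C(I; theta_C), kept fixed
        Flatten(),                       # bottleneck features b
        Dense(1, activation='sigmoid'),  # f_D(b; theta_D), the only part being trained
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])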

+ +

Transfer Learning from Pre-Trained Model

+ +

For example, if your actual goal were to perform binary classification on some kind of MNIST-like data, then you could use a convolutive backend from a CNN which has been pre-trained on the MNIST 0..9 classification task, or you could train it yourself. What is important is that the $\theta_{C}$ weights will be learned from an MNIST dataset, which is much bigger than yours, even if the task is (slightly) different.

+ +

Furthermore, in the case of MNIST-like data, please consider whether you really need your full 80 x 130 resolution: your input tensor, considering I can deduce from your model summary that it is grayscale (no color), would need to be $(80,130,1)$, or you could rescale to the 28 x 28 MNIST resolution so that you work with a smaller $(28,28,1)$ tensor

+ +

My suggestion is to start from an architecture like this MNIST Keras Model as

+ +
    +
  • it has a bottleneck representation of 64 which could be enough for your task and
  • +
  • also suggesting to remove the first dense layer, so as to significantly reduce $\theta_{D}$, the number of learnable parameters, hence going for something like
  • +
+ + + +
  from tensorflow.keras.models import Sequential
  from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

  model = Sequential()
+  # add Convolutional layers
+  model.add(Conv2D(filters=32, kernel_size=(3,3), activation='relu', padding='same', input_shape=(10, 10, 1)))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'))
+  model.add(MaxPooling2D(pool_size=(2,2)))    
+  model.add(Flatten())
+  # output layer
+  model.add(Dense(1, activation='sigmoid'))
+
+
+ +

then compile the model with binary_crossentropy loss and maybe start giving a try to adam optimizer

+ +

Transfer Learning from Autoencoder

+ +

If your data is so special that you can't find any big enough and similar enough dataset to use this strategy, and you cannot come up with any transformation you could use to perform dataset augmentation, then, without getting into advanced things, you could try to play one last card: use an autoencoder to learn a compressed representation aimed at reconstructing the original image, and perform transfer learning with the encoder only

+ +

For example, again under the assumption of working with a $(28,28,1)$ tensor, you could start with an architecture like the following one

+ + + +
from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

def build_ae(input_img): 
+  x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
+  # (28,28,16)
+
+  encoded = MaxPooling2D((8, 8), padding='same')(x)
+  # (4,4,8)
+
+
+  x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
+  # (4,4,8)
+
+  x = UpSampling2D((4, 4))(x)
+  # (16,16,8)
+
+  x = Conv2D(16, (3, 3), activation='relu')(x)
+  # Note: Convolving without padding='same' in order to get w-2 and h-2 dimensioality reduction so that following upsampling can lead to the desired 28x28 spatial resolution 
+  # (14,14,8)
+
+  x = UpSampling2D((2, 2))(x)
+  # (28,28,8)
+
+  decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
+  autoencoder = Model(input_img, decoded)
+  return autoencoder
+
+
+ +

In this case, the full model has 2633 weights but the encoding stage consists only of Conv2D+Relu+MaxPooling which means in total 3x3x1x16 weights for the convolutive step and 16 weights for the relu for a total of 160 weights only and the latent representation is a $(4,4,8)$ tensor which means a 128 dimensional flattened tensor and hence assuming, as before, to perform the binary classification with a dense sigmoid layer it would mean 128+1 weights to learn in the actual binary classification task

+ +

Of course it is possible to go for an even more compressed latent representation both on the spatial domain or channel domain with consequent reduced flattened vector dimensionality and ultimately even less weights to learn

+ +

If you share more details about your problem, and also your dataset, we could try to help more

+",1963,,,,,8/24/2019 9:27,,,,1,,,,CC BY-SA 4.0 +14116,1,,,8/24/2019 10:20,,4,548,"

Is it popular or effective to concatenate the results of mean-pooling and max-pooling, to get the invariance of the latter and the expressivity of the former?

+",21158,,2444,,12/31/2021 18:18,12/31/2021 18:18,Is it effective to concatenate the results of mean-pooling and max-pooling?,,1,0,,,,CC BY-SA 4.0 +14117,1,,,8/24/2019 10:23,,3,808,"

How can transfer learning be used to mitigate catastrophic forgetting? Could someone elaborate on this?

+",16708,,16708,,8/24/2019 11:51,9/23/2019 16:01,How is transfer learning used to mitigate catastrophic forgetting in neural networks?,,1,3,,,,CC BY-SA 4.0 +14118,2,,14116,8/24/2019 10:56,,3,,"

I haven't seen it done as you describe, and I don't think it would be very useful. Pooling layers are being gradually phased out of networks, because they don't seem to be that useful anymore. With the emergence of more and more conv-only architectures, I don't see that as likely.

+",28173,,,,,8/24/2019 10:56,,,,0,,,,CC BY-SA 4.0 +14121,1,14138,,8/24/2019 13:28,,9,5661,"

Currently, I am working on a few projects that use feedforward neural networks for regression and classification of simple tabular data. I have noticed that training a neural network using TensorFlow-GPU is often slower than training the same network using TensorFlow-CPU.

+

Could something be wrong with my setup/code or is it possible that sometimes GPU is slower than CPU?

+",22659,,2444,,12/30/2021 17:54,12/30/2021 17:54,Is a GPU always faster than a CPU for training neural networks?,,3,0,,,,CC BY-SA 4.0 +14123,2,,14117,8/24/2019 15:18,,1,,"

Transfer learning is a field where you apply knowledge from a source onto a target. This is a vague notion, and there is an abundance of literature pertaining to it. Given your question, I will work under the assumption that you are referring to weight/architecture sharing between models (in other words, training a model on one dataset and using it as a featurizer for another dataset).

+ +

Now, any learning system without lossless memory will have remnants of catastrophic forgetting. So let's think about how we would implement this transfer and what effects can be derived from it.

+ +
    +
  1. One implementation involves transferring a component and only training additional layers.

  2. Another is retraining the entire system, but at a lower learning rate.
+ +

In setting 1, we can make the claim that catastrophic forgetting is minimized by the fact that there is an unbiased featurizer that can't forget under any sampling regime, though the additional layers that are still being trained can still falter in this error mode.

+ +

In setting 2, we can make the claim that catastrophic forgetting can be reduced compared to normal end-to-end training without transfer, because the featurizer's deviation can be analytically bounded in terms of its initial transferred featurization (the complexity class is based on both the function and the number of steps -- so the longer you train, the more likely it is to forget).

+ +

These reasons speak to mitigating, not erasing, catastrophic forgetting. That is because, as I mentioned above, any learning system without lossless memory will have remnants of catastrophic forgetting, so making the generalized claim about transfer learning may not always fit the bill.

+",25496,,,,,8/24/2019 15:18,,,,0,,,,CC BY-SA 4.0 +14124,2,,14107,8/24/2019 15:37,,1,,"

This is perfectly acceptable in a stochastic environment. Generally your loss is to minimize $-log\ p(Y|X)$ or equivalently $-\sum_i log\ p(y_i|x_i)$. This optimization is equivalent to $-\mathbb{E}\log\ p(y_i|x_i)$. In other words you are minimizing in this case:

+ +

$$\begin{align*} L &= -\log\ p(1|x_0) - \log\ p(0|x_0) \\ &= -\log [p(1|x_0) * p(0|x_0)] \\ &= -\log [p(1|x_0) * (1 - p(1|x_0))] \end{align*}$$

or, since $\log$ is monotonically increasing, equivalently minimizing

$$ \hat L = -p(1|x_0) * (1 - p(1|x_0)) $$

After some basic calculus, we see that the optimal result we want the system to learn is

$$ p(1|x_0) = 0.5$$

+ +

Note that if you had more evidence, the result would just be that you want it to learn that it is $1$ with probability $\mathbb{E}_i\ y_i | x$

+",25496,,,,,8/24/2019 15:37,,,,4,,,,CC BY-SA 4.0 +14125,2,,14079,8/24/2019 16:13,,2,,"

Nicola Bernini's answer is quite comprehensive. Here are my insights.

+ +

First of all, think about whether you really need neural networks to solve your problem. Think about whether traditional computer vision operations, like edge detection or region-based methods, help you to solve your problem (OpenCV can help you here). Think about your data again. In case you decide to use neural networks, here are some things to try out:

+ +
    +
  1. Your dataset size is too small. Recall that we are learning to approximate functions (universal approximation theorem). Little data + many parameters means a high chance of overfitting. Use transfer learning. (Try resizing the image and performing a random resized square crop to use as input to your neural network. This may or may not work, since I don't know what exactly you are doing.) Also try data augmentations that make sense (e.g. a vertical flip of traffic-sign images doesn't work).

  2. Try reducing the learning rate / using different learning rates for different parts of your network if you decide to use transfer learning.

  3. Check whether your train and test dataset distributions are the same, i.e. don't train with 95% of label 0 and test with a set that has 95% label 1. I do not know whether your dataset is highly class-imbalanced or whether you are doing some kind of anomaly detection.

  4. Think about optimizers. Try Adam if you haven't.
+ +

If you need more help, try sharing the data and we can try to help you.

+",28182,,,,,8/24/2019 16:13,,,,0,,,,CC BY-SA 4.0 +14126,1,,,8/24/2019 16:35,,2,63,"

I am completely new to CNN's, and I do not quite know how to design or use them efficiently. That being said, I am attempting to build a CNN that learns to play Pac-man with reinforcement learning. I have trained it for about 3 hours and have seen little to no improvement. My observation space is 3 channels * 15 * 19, and there are 5 actions. Here is my code, I am open to any and all suggestions. Thanks for all your help.

+ +
from minipacman import MiniPacman as pac
+from torch import nn
+import torch
+import random
+import torch.optim as optimal
+from torch.autograd import Variable
+import matplotlib.pyplot as plt
+import numpy as np
+import keyboard
+
+
+loss_fn = nn.MSELoss()
+epsilon = 1
+env = pac(""regular"", 1000)
+time = 0
+action = random.randint(0, 4)
+q = np.zeros(3)
+alpha = 0.01
+gamma = 0.9
+tick = 0
+decay = 0.9999
+
+
+class Value_Approximator (nn.Module):
+    def __init__(self):
+        super(Value_Approximator, self).__init__()
+        # Convolution 1
+        self.cnn1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1, padding=2)
+        self.relu1 = nn.ReLU()
+
+        # Max pool 1
+        self.maxpool1 = nn.MaxPool2d(kernel_size=2)
+
+        # Convolution 2
+        self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2)
+        self.relu2 = nn.ReLU()
+
+        # Max pool 2
+        self.maxpool2 = nn.MaxPool2d(kernel_size=2)
+
+        # Fully connected 1 (readout)
+        self.fc1 = nn.Linear(384, 5)
+
+    def forward(self, x):
+        # Convolution 1
+        out = self.cnn1(x)
+        out = self.relu1(out)
+
+        # Max pool 1
+        out = self.maxpool1(out)
+
+        # Convolution 2
+        out = self.cnn2(out)
+        out = self.relu2(out)
+
+        # Max pool 2
+        out = self.maxpool2(out)
+
+        # Resize
+        # Original size: (100, 32, 7, 7)
+        # out.size(0): 100
+        # New out size: (100, 32*7*7)
+        out = out.view(out.size(0), -1)
+
+        # Linear function (readout)
+        out = self.fc1(out)
+
+        return out
+
+approx = Value_Approximator()
+optimizer = optimal.SGD(approx.parameters(), lr=alpha)
+
+
+while time < 50000:
+    print(""Time: ""+str(time))
+    print(""Epsilon: ""+str(epsilon))
+    print()
+    time += 1
+    state = env.reset()
+    tick = 0
+
+    epsilon *= decay
+
+    if epsilon < 0.1:
+        epsilon = 0.1
+
+    while True:
+        tick += 1
+        state = np.expand_dims(state, 1)
+        state = state.reshape(1, 3, 15, 19)
+        q = approx.forward(torch.from_numpy(state))[0]
+
+        if random.uniform(0, 1) < epsilon:
+            action = env.action_space.sample()
+        else:
+            _, action = torch.max(q, -1)
+            action = action.item()
+        new_state, reward, terminal, _ = env.step(action)
+        show_state = new_state
+        new_state = np.expand_dims(new_state, 1)
+        new_state = state.reshape(1, 3, 15, 19)
+
+        q_new = approx.forward(torch.from_numpy(new_state).type(torch.FloatTensor))[0]  # "" find Q (s', a') ""
+        #  find optimal action Q value for next step
+        new_max, _ = torch.max(q_new, -1)
+        new_max = new_max.item()
+
+        q_target = q.clone()
+        q_target = Variable(q_target.data)
+
+        #  update target value function according to TD
+        q_target[action] = reward + torch.mul(new_max, gamma)  # "" reward + gamma*(max(Q(s', a')) ""
+
+        loss = loss_fn(q, q_target)  # "" reward + gamma*(max(Q(s', a')) - Q(s, a)) ""
+        # Update original policy according to Q_target ( supervised learning )
+        approx.zero_grad()
+        loss.backward()
+        optimizer.step()
+
+        #  Q and Q_target should converge
+        if time % 100 == 0:
+            state = torch.FloatTensor(show_state).permute(1, 2, 0).cpu().numpy()
+
+            plt.subplot(131)
+            plt.title(""Imagined"")
+            plt.imshow(state)
+            plt.subplot(132)
+            plt.title(""Actual"")
+            plt.imshow(state)
+            plt.show(block=False)
+            plt.pause(0.000001)
+
+        if keyboard.is_pressed('1'):
+            torch.save(approx.state_dict(), 'trained-10000.mdl')
+        if keyboard.is_pressed('9'):
+            torch.save(approx.state_dict(), 'trained-10000.mdl')
+
+        if terminal or tick > 100:
+            plt.close()
+            break
+
+        state = new_state
+
+
+torch.save(approx.state_dict(), 'trained-10000.mdl')
+
+",4744,,4744,,8/24/2019 23:32,8/24/2019 23:32,Torch CNN not training,,0,5,,,,CC BY-SA 4.0 +14127,2,,14121,8/24/2019 17:21,,2,,"

I advise you to always use a GPU over a CPU for training your models. This is driven by the use of deep learning methods on images and text, where the data is very rich.

+ +

You need a GPU well suited for training (e.g. an NVIDIA GTX 1080, an NVIDIA Titan, or better); if you don't have a powerful GPU, I wouldn't be surprised to find that your CPU is faster.

+",27519,,,,,8/24/2019 17:21,,,,1,,,,CC BY-SA 4.0 +14128,1,,,8/24/2019 17:36,,2,27,"

I want to know how to generate the structured light that projects different patterns of light onto a 3D object that is being scanned.

+",27519,,2444,,8/25/2019 10:35,8/25/2019 10:35,How do I generate structured light for the 3D bin picking system?,,0,0,,,,CC BY-SA 4.0 +14129,2,,14077,8/24/2019 17:50,,2,,"

Is your data stored as raw ASCII text, like a CSV file?

+ +

Perhaps you can speed up data loading and use less memory by using another data format. A good example is a binary format like GRIB, NetCDF, or HDF.

+ +

There are many command line tools that you can use to transform one data format into another that do not require the entire dataset to be loaded into memory.

+ +

Using another format may allow you to store the data in a more compact form that saves memory, such as 2-byte integers, or 4-byte floats.

+ +
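
For example, with pandas you could convert a CSV into a binary HDF5 file once and reload it much faster afterwards (a minimal sketch, assuming an all-numeric CSV and that PyTables is installed; the file names are placeholders):

+ +
import numpy as np
+import pandas as pd
+
+# read once, casting columns to 4-byte floats to save memory
+df = pd.read_csv('data.csv', dtype=np.float32)
+df.to_hdf('data.h5', key='data', mode='w')   # write a binary HDF5 copy
+
+df = pd.read_hdf('data.h5', key='data')      # subsequent loads are much faster

+ +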

In some cases, you may need to resort to a big data platform, that is, a platform designed for handling very large datasets, on which you can run data transforms and machine learning algorithms.

+",27519,,,,,8/24/2019 17:50,,,,2,,,,CC BY-SA 4.0 +14130,1,,,8/24/2019 19:25,,4,57,"

I'm looking for either an existing AI app or a pre-trained NN that will tell me if a photograph is right-side up or not. I want to use this to create an application that automatically rotates photos so they are right-side-up. This doesn't seem hard.

+ +

If it doesn't exist, presumably I can create it with Tensorflow, and just use a ton of photos to train it, and assume they are all correctly oriented in the training set. Would that work?

+",27476,,,,,8/26/2019 15:47,Turn photos right-side up?,,1,1,,,,CC BY-SA 4.0 +14131,2,,13884,8/24/2019 20:54,,1,,"

The question would need clarification, but I'm new here, so I will try to give an answer anyway. The easiest way would be to prepare the data as comma-separated values (CSV); it is possible to export your data in this format from Excel. For downstream applications, it will depend on which programming language is used, but, in general, it is possible to import CSV files and use the data they contain for training (for example, as a pandas DataFrame in Python).

+ +
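
For example, in Python (a minimal sketch; the file and column names below are placeholders for whatever you export from Excel):

+ +
import pandas as pd
+
+# hypothetical CSV exported from Excel; adjust the file and column names
+df = pd.read_csv('measurements.csv')
+X = df[['feature_1', 'feature_2']].values   # input features for training
+y = df['target'].values                     # labels

+ +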

Hope this helps!

+",28010,,,,,8/24/2019 20:54,,,,0,,,,CC BY-SA 4.0 +14133,2,,13975,8/24/2019 22:56,,5,,"

As far as I've seen, there is no use of 3D CNNs for traditional image classification tasks.

+

The reason, I think, is that, while these images do have multiple channels, there is no spatial information along the channel dimension for a 3D convolution to extract. On the other hand, it makes more sense to take a weighted sum of the pixels along that dimension (which is what the 2D convolution does).

+

3D CNNs have been used, as far as I know, only for applications where you have volumetric data, i.e. the images are sequential and, when combined, form a large 3D image.
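
+

A quick way to see the difference between the two operations is to compare nn.Conv2d and nn.Conv3d directly (a minimal PyTorch sketch with made-up tensor sizes):

+
import torch
+import torch.nn as nn
+
+x2d = torch.randn(1, 3, 64, 64)          # (batch, channels, H, W): an RGB image
+conv2d = nn.Conv2d(3, 8, kernel_size=3)  # each kernel spans all 3 channels at once and only slides over H and W
+
+x3d = torch.randn(1, 1, 16, 64, 64)      # (batch, channels, D, H, W): a volume
+conv3d = nn.Conv3d(1, 8, kernel_size=3)  # the kernel also slides along the depth D
+
+print(conv2d(x2d).shape)   # torch.Size([1, 8, 62, 62])
+print(conv3d(x3d).shape)   # torch.Size([1, 8, 14, 62, 62])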

+",27652,,2444,,12/18/2021 13:02,12/18/2021 13:02,,,,0,,,,CC BY-SA 4.0 +14135,1,,,8/25/2019 2:19,,2,28,"

I made a CNN with a reasonable loss curve, but the performance of the model does not improve. I have tried making the model larger; I am using three convolutional layers with batch norm.

+ +

Thanks for your help.

+ +

+",4744,,,,,8/25/2019 2:19,"Loss reduction, but constant performance with CNN",,0,1,,,,CC BY-SA 4.0 +14137,1,14139,,8/25/2019 7:12,,5,237,"

I have a question about the state representation of the Q-learning or DQN algorithm. I'm still a beginner in RL, so I'm not sure whether it is suitable to take exogenous variables as state features.

+ +

For example, in my current project, where the actions are decisions to charge/discharge an electric vehicle according to real-time fluctuating electricity prices, I'm wondering if the past n-step prices or the hour of the day can be considered as state features.

+ +

Because both the prices and the hour are just given information at every time step, rather than being dependent on the charging/discharging actions, I'm suspicious about whether they are theoretically qualified to be state features or not.

+ +

If they are not qualified, could someone give me a reference or something that I can read?

+",27946,,2444,,8/25/2019 10:14,8/25/2019 10:14,Can exogenous variables be state features in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +14138,2,,14121,8/25/2019 8:08,,7,,"

This changes according to your data and the complexity of your models. See the following article by Microsoft. Their conclusion is:

+ +
+

The results suggest that the throughput from GPU clusters is always + better than CPU throughput for all models and frameworks proving that + GPU is the economical choice for inference of deep learning models. + ...

+ +

It is important to note that, for standard machine learning models + where number of parameters are not as high as deep learning models, + CPUs should still be considered as more effective and cost efficient.

+
+ +

Since you are training an MLP, it cannot be thought of as a standard machine learning model. See my preprint, The impact of using large training data set KDD99 on classification accuracy, where I compare different machine learning algorithms using Weka.

+ +

+ +

As you can see from the image above, the MLP takes 722 minutes to train, while Naive Bayes takes ~2 minutes. If your data is small and your model's number of parameters is not high, you will see better performance on the CPU.

+",4300,,,,,8/25/2019 8:08,,,,2,,,,CC BY-SA 4.0 +14139,2,,14137,8/25/2019 9:44,,3,,"

Including exogenous variables in your state representation certainly can be useful, as long as you expect them to be relevant information for determining the action to pick. So, state features are not only useful if you expect your agent (through application of actions) to have (partial) influence on those state variables; you just want the state variables themselves to be informative for your next action to take / prediction of expected future rewards.

+ +

However, if you only have exogenous variables, i.e. if you expect your agent to have no influence whatsoever on what states you'll end up in next... then the full problem definition typically used in RL (Markov decision processes) may be unnecessarily complex, and you may prefer to look into the Multi-Armed Bandits (MAB) problem formulation. If you're already familiar with RL / MDPs, you may think of MAB problems as (sequences of) single-step episodes, where you always just look at the current state and don't care at all about future states (because you expect to have 0 influence on them).

+ +

In theory, the RL / MDP framework is more general and is also applicable to those MAB problems, but RL algorithms that support this framework may perform worse than MAB algorithms in practice, because they (informally speaking) still put in effort trying to ""learn"" how their actions affect future states (a waste of effort when you expect there to be no such influence from the agent).

+",1641,,,,,8/25/2019 9:44,,,,0,,,,CC BY-SA 4.0 +14140,1,14142,,8/25/2019 14:19,,3,78,"

I started working on the application of deep learning in medical imaging recently. While dealing with MRI images in the BraTS dataset, I observe that the first and last few frames are always completely empty (black). I want to ask those who are already working in the field: is there a way to remove them in a procedural manner before training and add them back correctly after the training as a postprocessing step (to comply with the ground truth segmentations' shape)? Has anyone tried that? I could not find any results on Google, so I am asking here.

+ +

Edit: I think I did not make my point clear enough. I meant to say that the first and last few frames of each MRI scan are empty. How to deal with those is what I intended to ask.

+",16159,,-1,,8/28/2019 17:17,8/28/2019 17:17,Dealing with empty frames in MRI images,,2,0,0,,,CC BY-SA 4.0 +14141,2,,14140,8/25/2019 14:51,,0,,"

As an expert in image analysis, I don't think this would be a problem. I have never worked with MRI images from the particular dataset you described, but I found that the format of the file containing the images is NIfTI. NIfTI files can be imported in MATLAB (the niftiread function), ImageJ, and Python (NiBabel/Nipy). Thus you should be able to write a script to import the images from the file, select which images you want to keep, and save the output in the same format as the input (NIfTI).

+",28010,,28010,,8/27/2019 16:59,8/27/2019 16:59,,,,0,,,,CC BY-SA 4.0 +14142,2,,14140,8/25/2019 22:37,,1,,"

I've worked on the BRATS dataset and I can verify that this is a pretty standard process. Besides throwing away the totally blank images, I also throw away the images at the beginning and end of the sequence that show the top of the skull and the base of the neck.

+ +

Generally, when dealing with MRIs, I do this with a script (think of it as a preprocessing step) that I run on each image and that counts the number of pixels that have a positive intensity (I actually add a small value to this to account for noise). Let's say, for images with values 0-255, I count the number of pixels with an intensity of over 10-15; let's call this $x$. After that, I set a threshold (empirically), let's call it $t$, and discard images with $x < t$.

+ +
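
A minimal sketch of that check (assuming each slice is a 2D numpy array with intensities in 0-255; the cutoff and threshold values here are just illustrative):

+ +
import numpy as np
+
+def keep_slice(img, noise_cutoff=10, t=500):
+    # count pixels brighter than the small noise cutoff
+    x = np.sum(img > noise_cutoff)
+    return x >= t   # t is chosen empirically
+
+# toy example: an empty slice and one with a bright region
+empty = np.zeros((240, 240))
+tissue = np.zeros((240, 240))
+tissue[100:140, 100:140] = 200
+print(keep_slice(empty), keep_slice(tissue))   # False True

+ +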

Specifically for BRATS, because you have the labels, you can see which of these have the desired classes and discard most of the rest. If you try to train a network on the dataset as is, you face an enormous imbalance ratio. I've had trouble training networks due to this, and the most success I got was when I threw away most of the irrelevant images.

+",26652,,,,,8/25/2019 22:37,,,,0,,,,CC BY-SA 4.0 +14143,2,,14121,8/26/2019 0:25,,1,,"

It depends. If you have to solve a ""simple"" problem that does not require a CNN or stacked models, without multidimensional data and without many multiplications of big, long numbers, then deciding to use CNN/stacked architectures AND a GPU is like using a hammer to insert a needle. It will not only waste energy, but the computations will be zero-padded in memory, and you will observe a degradation in speed.

+",28200,,28200,,8/30/2019 13:48,8/30/2019 13:48,,,,0,,,,CC BY-SA 4.0 +14144,2,,14079,8/26/2019 6:55,,1,,"

From my experience, this oscillation comes from:

+
    +
  • Too high a learning rate: the weights change too quickly.
  • +
  • Too few neurons in the layers: not enough capacity to fit. +
      +
    • With too few neurons, the network can't learn at all, and the oscillation is due to the failure to fit the global optimum; it's correct in some cases and wrong in others.
    • +
    +
  • +
+",2844,,2844,,11/24/2021 2:34,11/24/2021 2:34,,,,1,,,,CC BY-SA 4.0 +14145,1,,,8/26/2019 10:36,,3,32,"

I'm using Prioritized Experience Replay (PER) with a DDQN. To compensate for overfitting relatively high-value samples due to the non-uniform selection, I'm training with sample weights provided along with the PER samples to downplay each sample's loss contribution according to its probability of selection. I've observed that typically these sample weightings vary from $~0.1$ to $<0.01$, as the buffer gradually fills up (4.8M samples).

+ +

When using this compensation, the growth of the maximal Q value per episode stalls prematurely compared to a non-weight-compensated regime. I presume that this is because the size of the back-propagation updates is being greatly and increasingly diminished by the sample weights.

+ +

To correct for this I've tried taking the beta-adjusted maximum weight as reported by the PER (the same buffer-wide value by which the batch is normalized) and multiplying the base learning rate by it, thereby adjusting the optimizer after each batch selection.

+ +

My question is two-fold:

+ +
    +
  1. Is this the correct interpretation of what's going on?

  2. +
  3. Is it standard practice to compensate for sample weighting in this way?

  4. +
+ +

Although it seems to be working in keeping the Q growth alive whilst taming the loss, I've not been able to find any information on this and haven't found any implementations that compensate in this way so have a major doubt about the mathematical validity of it.

+",28203,,2444,,8/31/2019 23:50,8/31/2019 23:50,Should importance sample weighting be compensated for by dynamically increasing learning rate?,,0,0,,,,CC BY-SA 4.0 +14146,2,,14059,8/26/2019 14:00,,1,,"

If you're using computer vision, the top recognised conference is CVPR (Computer Vision and Pattern Recognition).

+

You can also try to submit to ICML (International Conference on Machine Learning) and NIPS (Neural Information Processing Systems), which focus on applications of machine learning and deep learning.

+

I'd also recommend IJCAI (International Joint Conference on Artificial Intelligence).

+",27227,,2444,,1/22/2021 1:32,1/22/2021 1:32,,,,0,,,,CC BY-SA 4.0 +14147,1,,,8/26/2019 14:19,,3,522,"

I am basically interested in vehicles on the road.

+ +

YOLOv3 in PyTorch is giving decent results.

+ +

My vehicles of interest are car, motorbike, bicycle, truck, and bus. I have small vehicles being detected as trucks.

+ +

Since the small vehicle is reliably being detected as a truck, I have annotated this small vehicle as a different class.

+ +

I could add an extra class, say an 81st class, since the current YOLOv3 being used is trained on 80 classes.

+ +

The 81st class would be initialised with the truck weights; I would freeze the weights such that the rest of the 80 classes remain unaltered and only the 81st class gets trained on this new data.

+ +

The problem is that the final layer gets tuned according to the predictions of all the classes it learns.

+ +

I was not able to find any post that mentions this way of preserving the predictions of the other classes while introducing a new class using transfer learning.

+ +

The closest I was able to get is this post: Weight Sampling Tutorial for SSD using Keras.

+ +

It's mentioned there under:

+ +

Option 1: Just ignore the fact that we need only 8 classes

+ +

This would work, and it wouldn't even be a terrible option. Since only 8 out of the 80 classes would get trained, the model might get gradually worse at predicting the other 72 classes (see the second paragraph there).

+ +

Is it possible to preserve the predictions of the previous pre-trained model while introducing the new class, and use transfer learning to train only for that class?

+ +

I feel that this is not possible, but I would like to know your opinion. I hope someone can prove me wrong.

+",17763,,23503,,11/9/2020 11:14,11/9/2020 11:14,Transfer learning to train only for a new class while not affecting the predictions of the other class,,1,0,,,,CC BY-SA 4.0 +14148,2,,14130,8/26/2019 15:47,,2,,"

I don't know if there is an existing pretrained NN that does this, but it wouldn't be very hard to modify one to do it.

+ +

First, I'd take a pretrained image classification NN (e.g. VGG, ResNet), drop its final layer and replace it with one with 4 neurons, representing the 4 orientations (so that you know which way to rotate it).

+ +
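
In PyTorch, for instance, that modification could look like the sketch below (assuming torchvision's ResNet-18; any other pretrained classifier would work the same way):

+ +
import torch.nn as nn
+from torchvision import models
+
+model = models.resnet18(pretrained=True)
+# replace the 1000-way ImageNet head with a 4-way one:
+# classes 0, 1, 2, 3 = rotated by 0, 90, 180, 270 degrees
+model.fc = nn.Linear(model.fc.in_features, 4)

+ +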

Then I'd take a dataset of regular images (e.g. a subset of ImageNet) and assume that they are correctly oriented. I'd make three more duplicate datasets with the same images rotated by 90, 180 and 270 degrees respectively. These 4 datasets would be the 4 classes I'd fine-tune the model on.

+ +

By training your model on this dataset, you'll be training it to recognize which side your image is facing. Since it's a pretrained net and it's a fairly simple task, I think that after a few iterations your model will have converged. Then you could write a script that uses this model to predict an image's orientation and rotate it accordingly.

+",26652,,,,,8/26/2019 15:47,,,,0,,,,CC BY-SA 4.0 +14149,1,14155,,8/26/2019 16:26,,0,1939,"

I am using a feedforward neural network for regression, and what I get as a prediction is a constant value, visible on the graph below: +

+ +

The data I use are typical standardised tabular numbers. The architecture is as follows:

+ +
model.add(Dropout(0.2))
+model.add(Dense(units=512, activation='relu'))
+model.add(Dropout(0.3))
+model.add(Dense(units=256, activation='relu'))
+model.add(Dropout(0.3))
+model.add(Dense(units=128, activation='relu'))
+model.add(Dense(units=128, activation='relu'))
+model.add(Dense(units=1))
+
+adam = optimizers.Adam(lr=0.1)
+
+model.compile(loss='mean_squared_error', optimizer=adam)
+
+reduce_lr = ReduceLROnPlateau(
+    monitor='val_loss',
+    factor=0.9,
+    patience=10,
+    min_lr=0.0001,
+    verbose=1)
+
+tensorboard = TensorBoard(log_dir=""logs\{}"".format(NAME))
+
+history = model.fit(
+    x_train,
+    y_train,
+    epochs=500,
+    verbose=10,
+    batch_size=128,
+    callbacks=[reduce_lr, tensorboard],
+    validation_split=0.1)
+
+ +

It seems to me that all weights are zeroed and only constant bias is present here, since for different data samples from a test set I get the same value, but I am not sure.

+ +

I understand that the algorithm has found the smallest MSE for such a constant value, but is there a way of avoiding this situation, since a straight line is not really a good solution for my project?

+",22659,,,,,8/27/2019 8:52,Why do I get a straight line as an output from a neural network?,,1,3,,,,CC BY-SA 4.0 +14150,1,,,8/26/2019 17:00,,2,14,"

While reading the InfoGAN paper and implementing it with the help of a previous implementation, I'm having some difficulty understanding how it learns the discrete categorical code when trained on MNIST.

+ +

The implementation we tried to follow uses, as a target, a randomly generated integer from 0 to 9. My doubt is this: how can it learn categorical information if, from the start, it's learning using a loss which takes in random values?

+ +

If this implementation is wrong, then what should the target be while training the Q network, when using the categorical-cross-entropy loss on the output logits?

+",26831,,,,,8/26/2019 17:00,How does InfoGAN learn latent categorical codes on MNIST,,0,0,,,,CC BY-SA 4.0 +14153,1,,,8/27/2019 3:39,,4,213,"

I am trying to rank video scenes/frames based on how appealing they are for a viewer. Basically, how ""interesting"" or ""attractive"" a scene inside a video can be for a viewer. My final goal is to generate say a 10-second short summary given a video as input, such as those seen on Youtube when you hover your mouse on a video.

+ +

I previously asked a similar question here. But the ""aesthetics"" model is good for ranking artistic images, not good for frames of videos. So it was failing. I need a score based on ""engagement for general audience"". Basically, which scenes/frames of video will drive more clicks, likes, and shares when selected as a thumbnail.

+ +

Do we have an available deep-learning model or a prototype doing that? A ready-to-use prototype/model that I can test as opposed to a paper that I need to implement myself. Paper is fine as long as the code is open-source. I'm new and can't yet write a code given a paper.

+",9053,,,,,9/11/2019 23:21,Video engagement analysis with deep learning,,1,8,,,,CC BY-SA 4.0 +14155,2,,14149,8/27/2019 8:52,,0,,"

You should try experimenting with a lower learning rate as a starting point. You're starting with 0.1, which is quite high in most cases, and only reducing it by a factor of 0.9, which is not much.
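
+ +

For example, adapting the snippet from the question (just an illustration; the exact values are something to tune for your data):

+ +
from keras import optimizers
+from keras.callbacks import ReduceLROnPlateau
+
+adam = optimizers.Adam(lr=0.001)   # a more common starting point than 0.1
+reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10,
+                              min_lr=1e-5, verbose=1)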

+",20430,,,,,8/27/2019 8:52,,,,0,,,,CC BY-SA 4.0 +14158,2,,7966,8/27/2019 11:41,,4,,"

This article on Dynamically Expandable Neural Networks (DEN) (by Harshvardhan Gupta) is based on this paper Lifelong Learning with Dynamically Expandable Networks (by Jeongtae Lee, Jaehong Yoon, Eunho Yang, Sung Ju Hwang)

+ +

This presents 3 solutions to increase the capacity of the network if needed, retaining whatever useful information there is from the old model while training the new model:

+ +
    +
  • Selective retraining
  • +
  • Dynamic Network Expansion 
  • +
  • Network Split/Duplication
  • +
+ +

To me, it seems that such a neural network is dynamic and improving. As such, these partially answer your question. If they don't, sorry about that.

+",19852,,2444,,8/27/2019 23:30,8/27/2019 23:30,,,,0,,,,CC BY-SA 4.0 +14159,1,14160,,8/27/2019 14:46,,7,3020,"

I came across some papers that use $\mathbb E$ in equations, in particular, this paper: https://arxiv.org/pdf/1511.06581.pdf. Here are some equations from the paper that use it:

+ +

$Q^\pi \left(s,a \right) = \mathbb E \left[R_t|s_t = s, a_t = a, \pi \right]$ ,

+ +

$V^\pi \left(s \right) = \mathbb E_{a\sim\pi\left(s \right)} \left[Q^\pi \left(s, a\right) \right]$ ,

+ +

$Q^\pi \left(s, a \right) = \mathbb E_{s'} \left[r + \gamma\mathbb E_{a'\sim\pi \left(s' \right)} \left[Q^\pi \left(s', a' \right) \right] | s,a,\pi \right]$

+ +

$\nabla_{\theta_i}L_i\left(\theta_i \right) = \mathbb E_{s, a, r, s'} \left[\left(y_i^{DQN} - Q \left(s, a; \theta_i \right) \right) \nabla_{\theta_i} Q\left(s, a, \theta_i \right) \right]$

+ +

Could someone explain to me what is the purpose of $\mathbb E$?

+",28233,,2444,,8/28/2019 10:52,11/7/2019 22:03,What does the symbol $\mathbb E$ mean in these equations?,,2,1,,,,CC BY-SA 4.0 +14160,2,,14159,8/27/2019 15:06,,12,,"

That's the Expected Value operator. Intuitively, it gives you the value that you would ""expect"" (""on average"") the expression after it (often in square or other brackets) to have. Typically that expression involves some random variables, which means that there may be a wide range of different values the expression may take in any concrete, single event. Taking the expectation basically means that you ""average"" over all the values the expression could potentially take, appropriately weighted by the probabilities with which certain events occur.

+ +

You'll often find assumptions under which the expectation is taken after a vertical line ($\mid$) inside the brackets, and/or in the subscript to the right of the $\mathbb{E}$ symbol. Sometimes, some assumptions may also be left implicit.

+ +
+ +

For example:

+ +

$$\mathbb{E} \left[ R_t \mid s_t = s, a_t = a, \pi \right]$$

+ +

may be read in english as ""the expected value of the discounted returns from time $t$ onwards ($R_t$), given that the state at time $t$ is $s$ ($s_t = s$), given that our action at time $t$ is $a$ ($a_t = a$), given that we continue behaving according to policy $\pi$ after time $t$"".

+ +

I would say that, in this case, the expected value also relies on the transition dynamics of the Markov decision process (i.e. the probabilities with which we transition between states, given our actions). This is left implicit.

+ +
+ +

Second example:

+ +

$$V^{\pi}(s) = \mathbb{E}_{a \sim \pi(s)} \left[ Q^{\pi}(s, a) \right]$$

+ +

may be read as ""$V^{\pi}(s)$ is equal to the expected value of $Q^{\pi}(s, a)$, under the assumption that $a$ is sampled according to $\pi(s)$"".

+ +

In theory, something like this could be computed by enumerating over all possible $a$, computing $Q^{\pi}(s, a)$ for every such $a$, and multiplying it by the probability that $\pi(s)$ assigns to $a$. In practice, you could also approximate it by running a large number of experiments in which you sample $a$ from $\pi(s)$, and then evaluate a single concrete case of $Q^{\pi}(s, a)$, and average over all the evaluations.
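
+ +

Here is a tiny numerical illustration of those two approaches (the probabilities and Q-values are made up):

+ +
import numpy as np
+
+probs = np.array([0.2, 0.5, 0.3])       # pi(s): probabilities of the 3 actions
+q_values = np.array([1.0, 2.0, 0.5])    # Q(s, a) for each action
+
+exact = np.sum(probs * q_values)        # enumerate over all possible a
+samples = np.random.choice(q_values, size=100000, p=probs)
+estimate = samples.mean()               # Monte Carlo approximation
+print(exact, estimate)                  # the two numbers should be close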

+",1641,,1641,,8/27/2019 15:23,8/27/2019 15:23,,,,1,,,,CC BY-SA 4.0 +14161,2,,14159,8/27/2019 15:41,,7,,"

$\mathbb E$ is the symbol for the expectation (or expected value).

+ +

To fully understand the concept of expected value, you need to understand the concept of random variable. An example should help you understand the idea behind the concept of a random variable.

+ +

Suppose you toss a coin. The outcome of this (random) experiment can either be heads or tails. Formally, the sample space, $\Omega = \{\text{heads}, \text{tails}\}$, is the set that contains the possible outcomes of a random experiment. The outcome (e.g. heads) is the result of a random process. A random variable is a function that we can associate with a random process so that we can more formally describe the random process. In this case, we can associate a random variable, $T$, with this random process of tossing a coin.

+ +

$$ +T(\omega) = +\begin{cases} +1, & \text{if } \omega = \text{heads}, \\[6pt] +0, & \text{if } \omega = \text{tails}, +\end{cases} +$$

+ +

where $\omega \in \Omega$.

+ +

In other words, if the outcome of the random process is heads, then the output of the associated random variable $T$ is $1$, else it is $0$.

+ +

We can also associate with each random process (and thus with the corresponding random variable) a probability distribution, which, intuitively, describes the probability of occurrence of each possible outcome of the random process. In the case of the coin-flipping random variable (or process), assuming that the coin is ""fair"", then the following function describes the probability of each outcome of the coin

+ +

$$ +f_T(t) = +\begin{cases} +\tfrac 12,& \text{if }t=1,\\[6pt] +\tfrac 12,& \text{if }t=0, +\end{cases} +$$

+ +

In other words, there is $\tfrac 12$ probability that the outcome of the random process is $1$ (heads) and $\tfrac 12$ probability that it is $0$ (tails).

+ +

If you throw a coin $n$ times in the air, how many times will it land heads and tails? Of course, it will depend on the experiment. In the first experiment, you might get $\frac{3n}{4}$ heads and $\frac{n}{4}$ tails. In the second experiment, you might get $\frac{n}{2}$ heads and $\frac{n}{2}$ tails, and so on. If you repeat this experiment an infinite amount of times (of course, we can't do that, but imagine if we could do that), how many times do you expect (on average) to get heads and tails? The expected value is the answer to this question.

+ +

In the case of the coin-tossing experiment, the outcomes are discrete (heads or tails), consequently, $T$ is a discrete random variable. In the case of a discrete random variable, the expected value is defined as follows

+ +

$$\mathbb E[T] = \sum_{t \in T} p(t) t$$

+ +

where $t$ is an outcome of the random variable $T$ and $p(t)$ is the probability of such an outcome. In other words, the expected value of a random variable $T$ is defined as a weighted sum of the values it can take, where the weights are the corresponding probabilities of occurrence. So, in the case of the coin-tossing experiment, the expected value is

+ +

\begin{align} +\mathbb E[T] +&= \sum_{t \in T} p(t) t\\ +&= \frac{1}{2}1 + \frac{1}{2} 0\\ +&=\frac{1}{2} +\end{align}

+ +

What does $\mathbb E[T] = \frac{1}{2}$ mean? Intuitively, it means that half of the times the random process produces heads and half of the times it produces tails, assuming it is governed by the probability distribution $f_T(t)$.

+ +
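
You can also see this empirically: the average of many simulated coin flips approaches the expected value (a small simulation sketch):

+ +
import numpy as np
+
+n = 1000000
+flips = np.random.binomial(1, 0.5, size=n)   # 1 = heads, 0 = tails
+print(flips.mean())                          # close to E[T] = 0.5 for large n

+ +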

Note that, if the probability distribution $f_T(t)$ had been defined differently, then the expected value would also have been different, given that the expected value is defined as a function of the probability of occurrence of each outcome of the random process.

+ +

In your specific examples, $\mathbb E$ is still the symbol for the expected value. For example, in the case of $Q^\pi \left(s,a \right) = \mathbb E \left[R_t|s_t = s, a_t = a, \pi \right]$, $Q^\pi \left(s,a \right)$ is thus defined as the expected value of the random variable $R_t$, given that $s_t = s$, $a_t = a$ and the policy is $\pi$ (so this is actually a conditional expectation). In this specific case, $R_t$ represents the return at time step $t$, which, in reinforcement learning, is defined as

+ +

$$ +R_t = \sum_{k=0}^\infty \gamma^k r_{t+k+1} +$$

+ +

where $r_{t+k+1} \in \mathbb{R}$ is the reward at time step $t+k+1$. $R_t$ is a random variable because it is assumed that the underlying environment is a random process.

+ +

It is not always easy to intuitively understand the expected value of a random variable. For example, in the case of a coin-flipping random process, the expected value $\frac{1}{2}$ should be intuitive (given that it is the average of $1$ and $0$), but, in the case of $Q^\pi \left(s,a \right)$, at first glance, it is not clear what the expected value should be (hence the need for algorithms such as Q-learning), given that it depends on the rewards, which depend on the dynamics of the environment. However, the intuition behind the concept of the expected value and the calculation (provided the associated random variable is discrete) does not change.

+ +

In the case there is more than one random variable involved in the calculation of the expected value, then we also need to specify the random variable the expected value is being calculated with respect to, hence the subscripts of the expected value in your examples. See, for example, Subscript notation in expectations for more info.

+",2444,,2444,,11/7/2019 22:03,11/7/2019 22:03,,,,0,,,,CC BY-SA 4.0 +14162,1,14171,,8/27/2019 17:54,,3,272,"

If AlphaZero was always playing the best moves it would just generate the same training game over and over again. So where does the randomness come from? When does it decide not to play the most optimal move?

+",19604,,2444,,8/31/2019 23:58,8/31/2019 23:58,When does AlphaZero play suboptimal moves?,,1,2,,,,CC BY-SA 4.0 +14165,1,14199,,8/27/2019 20:11,,3,330,"

I'm just starting to learn about neural networking and I decided to study a simple 3-input perceptron to get started with. I am also only using binary inputs to gain a full understanding of how the perceptron works. I'm having difficulty understanding why some training outputs work and others do not. I'm guessing that it has to do with the linear separability of the input data, but it's unclear to me how this can easily be determined. I'm aware of the graphing line test, but it's unclear to me how to plot the input data to fully understand what will work and what won't work.

+ +

There is quite a bit of information that follows. But it's all very simple. I'm including all this information to be crystal clear on what I'm doing and trying to understand and learn.

+ +

Here is a schematic graphic of the simple 3-input perceptron I'm modeling.

+ +

+ +

Because it only has 3 inputs and they are binary (0 or 1), there are only 8 possible combinations of inputs. Each of these 8 inputs can be mapped to an output of 0 or 1, which allows for $2^8 = 256$ possible training patterns. In other words, the perceptron can be trained to recognize more than one input configuration.

+ +

Let's call the inputs 0 thru 7 (all the possible configurations of a 3-input binary system). But we can train the perceptron to recognize more than just one input. In other words, we can train the perceptron to fire for say any input from 0 to 3 and not for inputs 4 thru 7. And all those possible combinations add up to 256 possible training input states.

+ +

Some of these training input states work, and others do not. I'm trying to learn how to determine which training sets are valid and which are not.

+ +

I've written the following program in Python to emulate this Perceptron through all 256 possible training states.

+ +

Here is the code for this emulation:

+ +
import numpy as np
+np.set_printoptions(formatter={'float': '{: 0.1f}'.format})
+
+# Perceptron math functions. 
+def sigmoid(x):
+    return 1 / (1 + np.exp(-x))
+def sigmoid_derivative(x):
+    return x * (1 - x)
+# END Perceptron math functions.
+
+# The first column of 1's is used as the bias.  
+# The other 3 cols are the actual inputs, x3, x2, and x1 respectively
+training_inputs = np.array([[1, 0, 0, 0],
+                         [1, 0, 0, 1],
+                         [1, 0, 1, 0],
+                         [1, 0, 1, 1],
+                         [1, 1, 0, 0],
+                         [1, 1, 0, 1],
+                         [1, 1, 1, 0],
+                         [1, 1, 1, 1]])
+
+# Setting up the training outputs data set array                         
+num_array = np.array
+num_array = np.arange(8).reshape([1,8])
+num_array.fill(0)
+
+for num in range(25):
+    bnum = bin(num).replace('0b',"""").rjust(8,""0"")
+    for i in range(8):
+        num_array[0,i] = int(bnum[i])
+
+    training_outputs = num_array.T
+# training_outputs will have the array form: [[n,n,n,n,n,n,n,n]]
+# END of setting up training outputs data set array                      
+
+    # -------  BEGIN Perceptron functions ----------
+    np.random.seed(1)
+    synaptic_weights = 2 * np.random.random((4,1)) - 1
+    for iteration in range(20000):
+        input_layer = training_inputs
+        outputs = sigmoid(np.dot(input_layer, synaptic_weights))
+        error = training_outputs - outputs
+        adjustments = error * sigmoid_derivative(outputs)
+        synaptic_weights += np.dot(input_layer.T, adjustments)
+    # -------  END Perceptron functions ----------
+
+
+    # Convert to clean output 0, 0.5, or 1 instead of the messy calcuated values.
+    # This is to make the printout easier to read.
+    # This also helps with testing analysis below.
+    for i in range(8):
+        if outputs[i] <= 0.25:
+            outputs[i] = 0
+        if (outputs[i] > 0.25 and outputs[i] < 0.75):
+            outputs[i] = 0.5
+        if outputs[i] > 0.75:
+            outputs[i] = 1
+    # End convert to clean output values.
+
+    # Begin Testing Analysis
+    # This is to check to see if we got the correct outputs after training.
+    evaluate = ""Good""
+    test_array = training_outputs
+    for i in range(8):
+        # Evaluate for a 0.5 error.
+        if outputs[i] == 0.5:
+            evaluate = ""The 0.5 Error""
+            break
+        # Evaluate for incorrect output
+        if outputs[i] != test_array[i]:
+            evaluate = ""Wrong Answer""
+    # End Testing Analysis
+
+    # Printout routine starts here:
+    print_array = test_array.T
+    print(""Test#: {0}, Training Data is: {1}"".format(num, print_array[0]))
+    print(""{0}, {1}"".format(outputs.T, evaluate))
+    print("""") 
+
+ +

And when I run this code I get the following output for the first 25 training tests.

+ +
Test#: 0, Training Data is: [0 0 0 0 0 0 0 0]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0]], Good
+
+Test#: 1, Training Data is: [0 0 0 0 0 0 0 1]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0]], Good
+
+Test#: 2, Training Data is: [0 0 0 0 0 0 1 0]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0]], Good
+
+Test#: 3, Training Data is: [0 0 0 0 0 0 1 1]
+[[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0]], Good
+
+Test#: 4, Training Data is: [0 0 0 0 0 1 0 0]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0]], Good
+
+Test#: 5, Training Data is: [0 0 0 0 0 1 0 1]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0]], Good
+
+Test#: 6, Training Data is: [0 0 0 0 0 1 1 0]
+[[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error
+
+Test#: 7, Training Data is: [0 0 0 0 0 1 1 1]
+[[ 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0]], Good
+
+Test#: 8, Training Data is: [0 0 0 0 1 0 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Good
+
+Test#: 9, Training Data is: [0 0 0 0 1 0 0 1]
+[[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error
+
+Test#: 10, Training Data is: [0 0 0 0 1 0 1 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.0]], Good
+
+Test#: 11, Training Data is: [0 0 0 0 1 0 1 1]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 1.0]], Good
+
+Test#: 12, Training Data is: [0 0 0 0 1 1 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0]], Good
+
+Test#: 13, Training Data is: [0 0 0 0 1 1 0 1]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 1.0]], Good
+
+Test#: 14, Training Data is: [0 0 0 0 1 1 1 0]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.0]], Good
+
+Test#: 15, Training Data is: [0 0 0 0 1 1 1 1]
+[[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0]], Good
+
+Test#: 16, Training Data is: [0 0 0 1 0 0 0 0]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0]], Good
+
+Test#: 17, Training Data is: [0 0 0 1 0 0 0 1]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0]], Good
+
+Test#: 18, Training Data is: [0 0 0 1 0 0 1 0]
+[[ 0.0 0.0 0.5 0.5 0.0 0.0 0.5 0.5]], The 0.5 Error
+
+Test#: 19, Training Data is: [0 0 0 1 0 0 1 1]
+[[ 0.0 0.0 0.0 1.0 0.0 0.0 1.0 1.0]], Good
+
+Test#: 20, Training Data is: [0 0 0 1 0 1 0 0]
+[[ 0.0 0.5 0.0 0.5 0.0 0.5 0.0 0.5]], The 0.5 Error
+
+Test#: 21, Training Data is: [0 0 0 1 0 1 0 1]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 0.0 1.0]], Good
+
+Test#: 22, Training Data is: [0 0 0 1 0 1 1 0]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Wrong Answer
+
+Test#: 23, Training Data is: [0 0 0 1 0 1 1 1]
+[[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Good
+
+Test#: 24, Training Data is: [0 0 0 1 1 0 0 0]
+[[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Wrong Answer
+
+ +

For the most part, it appears to be working. But there are situations where it clearly does not work.

+ +

I have labeled the errors in two different ways.

+ +

The first type of error is ""The 0.5 Error"" which is easy to see. It should never return any output of 0.5 in this situation. Everything should be binary. The second type of error is when it reports the correct binary outputs but they don't match what it was trained to recognize.

+ +

I would like to understand the cause of these errors. I'm not interested in trying to correct the errors, as I believe these are valid errors. In other words, these are situations that the perceptron is simply incapable of being trained for. And that's OK.

+ +

What I want to learn is why these cases are invalid. I'm suspecting that they have something to do with the input data not being linearly separable in these situations. But if that's the case, then how do I go about determining which cases are not linearly separable? If I could understand how to do that I would be very happy.

+ +

Also, are the reasons why it doesn't work in specific cases the same? In other words, are both types of errors caused by linear inseparability of the input data? Or is there more than one condition that causes a perceptron to fail in certain training situations?

+ +

Any help would be appreciated.

+",28240,,2444,,8/27/2019 23:54,8/29/2019 18:51,What are the reasons a perceptron is not able to learn?,,1,0,0,,,CC BY-SA 4.0 +14167,1,14168,,8/27/2019 21:50,,8,682,"

Let's assume that there is a sequence of pairs $(x_i, y_i), (x_{i+1}, y_{i+1}), \dots$ of observations and corresponding labels. Let's also assume that the $x$ is considered as independent variable and $y$ is considered as the variable that depends on $x$. So, in supervised learning, one wants to learn the function $y=f(x)$.

+ +

Can reinforcement learning be used to learn $f$ (possibly, even learning the symbolic form of $f(x)$)?

+ +

Just some sketches of how it can be done: $x_i$ can be considered as the environment and each $x_i$ defines some set of possible ""actions"" - a possible symbolic form of $f(x)$ or possible numerical values of the parameters of $f(x)$ (if the symbolic form is fixed). And the concrete selected action/functional form $f(x, a)$ ($a$ - set of parameters) can be assigned a reward from the loss function: how close the observation $(x_i, y_i)$ is to the value that can be inferred from $f(x)$.

+ +

Are there ideas or works of RL along the framework that I provided in the previous passage?

+",8332,,2444,,8/27/2019 23:22,7/7/2020 15:27,Can supervised learning be recast as reinforcement learning problem?,,1,1,,,,CC BY-SA 4.0 +14168,2,,14167,8/27/2019 23:14,,14,,"

Any supervised learning (SL) problem can be cast as an equivalent reinforcement learning (RL) one.

+

Suppose you have the training dataset $\mathcal{D} = \{ (x_i, y_i) \}_{i=1}^N$, where $x_i$ is an observation and $y_i$ the corresponding label. Then let $x_i$ be a state and let $f(x_i) = \hat{y}_i$, where $f$ is your (current) model, be an action. So, the predicted label of observation $x_i$ corresponds to the action taken in state $x_i$. The reward received after having taken action $f(x_i)$ in state $x_i$ can then be defined as the negative of the loss, $-|f(x_i) - y_i|$ (or of any other suitable loss).

+

The minimization of this loss is then equivalent to the maximization of the (expected) reward. Therefore, in theory, you could use trajectories of the form $$T=\{(x_1, f(x_1), -|f(x_1) - y_1|), \dots, (x_N, f(x_N), -|f(x_N) - y_N|)\}$$ to learn a value function $q$ (for example, with Q-learning) or a policy $\pi$, which then, given a new state $x_{\text{new}}$ (an observation), produces an action $f(x_{\text{new}})$ (the predicted label).

+
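
As a toy illustration of how such a trajectory could be built from a supervised dataset (a sketch; the data and the model $f$ are made up):

+
import numpy as np
+
+# toy supervised dataset (x_i, y_i)
+X = np.array([0.0, 1.0, 2.0, 3.0])
+Y = 2.0 * X + 1.0
+
+def f(x, w=0.5, b=0.0):   # the current model, producing the 'action'
+    return w * x + b
+
+# state = x_i, action = f(x_i), reward = -|f(x_i) - y_i|
+trajectory = [(x, f(x), -abs(f(x) - y)) for x, y in zip(X, Y)]

+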

However, note that the learned policy might not be able to generalise to observations not present in the training dataset. Moreover, although it is possible to solve an SL problem as an RL problem, in practice, this may not be the most appropriate approach (i.e. it may be inefficient).

+

For more details, read the paper Reinforcement Learning and its Relationship to Supervised Learning (2004) by Barto and Dietterich, who give a good overview of supervised and reinforcement learning and their relationship. The paper Learning to predict by the methods of temporal differences (1988), by Richard Sutton, should also give you an overview of reinforcement learning from a supervised learning perspective. However, note that this does not mean that a reinforcement learning problem can be cast as an equivalent supervised learning one. See section 1.3.3 Converting Reinforcement Learning to Supervised Learning of the mentioned paper Reinforcement Learning and its Relationship to Supervised Learning for more details.

+

Reinforcement learning can thus be used for classification and regression tasks. See, for example, Reinforcement Learning for Visual Object Detection (2016) by Mathe et al.

+",2444,,2444,,7/7/2020 15:27,7/7/2020 15:27,,,,0,,,,CC BY-SA 4.0 +14171,2,,14162,8/28/2019 8:49,,3,,"

During the self-play training process, AlphaZero does not greedily play only the moves it thinks are ""best"" (which would normally be the move with the highest visit count leading out of the root node of the MCTS search tree). Instead, for the purpose of generating a more diverse set of experience, it samples moves proportionally to the visit counts. This means that in any given situation encountered during self-play, the move considered to be ""optimal"" will still have the largest probability of getting picked, but other moves will also have smaller probabilities of getting picked. In theory, it might even sometimes pick the move that it expects to be the worst one (very rarely)!

+ +
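
A rough sketch of that proportional sampling (the visit counts are made up; tau is a temperature parameter, and greedy play is recovered as tau goes to 0):

+ +
import numpy as np
+
+visit_counts = np.array([120, 40, 25, 15])   # hypothetical root visit counts
+tau = 1.0
+probs = visit_counts ** (1.0 / tau)
+probs = probs / probs.sum()
+move = np.random.choice(len(visit_counts), p=probs)

+ +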

If I recall correctly, they only did what I describe above for the first 30 moves of any game, and afterwards move on to greedy play. This still results in a very diverse set of 30-move-starts for every game it experiences though. I'm not 100% sure if I remember this detail correctly though, maybe they only did this earlier (in AlphaGo Zero for example), and no longer do it in AlphaZero. Would have to check the paper to make sure.

+ +

Additionally, whenever they start a new search process, they perturb the prior probabilities assigned by the learned policy network to all the moves available in the root node. This is done in a non-deterministic way using Dirichlet noise. I think this is not really explicitly mentioned in the AlphaZero paper, but it is in the Supplementary Materials (and also in the AlphaGo Zero paper?). Anyway, this also means that if precisely the same game state is encountered twice in two different games of self-play, the search behaviour may be slightly different due to the introduced stochasticity, and hence it may come to a different conclusion as to what move is ""optimal"".

+",1641,,,,,8/28/2019 8:49,,,,0,,,,CC BY-SA 4.0 +14172,1,,,8/28/2019 10:24,,4,521,"

I recently read Bytenet and Wavenet and I was curious why the first model is not as popular as the second. From my understanding, Bytenet can be seen as a seq2seq model where the encoder and the decoder are similar to Wavenet. Following the trends from NLP where seq2seq models seem to perform better, I find it strange that I couldn't find any paper that compares the two. Are there any drawbacks of Bytenet over Wavenet other than the computation time?

+",20430,,2444,,8/28/2019 14:01,8/9/2023 9:05,What are the differences between Bytenet and Wavenet?,,1,2,,,,CC BY-SA 4.0 +14173,1,,,8/28/2019 13:00,,3,139,"

I wanted to use True Positive (and True Negative) in my cost function to modify the ROC shape of my classifier. Someone told me, and I also read, that it is not differentiable and therefore not usable as a cost function for a neural network.

+ +

In the example where 1 is positive and 0 negative I deduce the following equation for True Positive ($\hat y = prediction, y = label$):

+ +

$$ TP = \bf(\hat{y}^Ty) $$ +$$ \frac{\partial TP}{\partial \bf y} = \bf y $$

+ +

The following for True Negative: +$$ TN = \bf(\hat{y}-1)^T(y-1) $$ +$$ \frac{\partial TN}{\partial \bf y} = \bf \hat y^T -1 $$

+ +

The False Positive: +$$ FP = - \bf (\hat y^T-1) y $$ +$$ \frac{\partial FP}{\partial \bf y} = - \bf ( \hat y^T - 1) $$

+ +

The False Negative: +$$ FN = \bf \hat y^T (y-1) $$ +$$ \frac{\partial FN}{\partial \bf y} = \bf \hat y $$

+ +

All equations seem differentiable to me. Can someone explain where I went wrong?

+",27843,,27933,,9/6/2019 7:16,9/6/2019 7:16,Using True Positive as a Cost Function,,1,1,,,,CC BY-SA 4.0 +14174,1,14200,,8/28/2019 13:48,,2,141,"

I'm trying to understand the intuition behind how the content loss is calculated in Neural Style Transfer. I'm reading from an article: https://medium.com/mlreview/making-ai-art-with-style-transfer-using-keras-8bb5fa44b216 , which explains the implementation of Neural Style Transfer, starting from the content loss function: +

+ +

The article explains that:

+ +
    +
  • F and P are matrices with a number of rows equal to N and a number of columns equal to M.

  • +
  • N is the number of filters in layer l and M is the number of spatial elements in the feature map (height times width) for layer l.

  • +
+ +

From the code below for getting the features/content representation from particular Conv layers, I didn't quite understand how it works. Basically I printed out the output of every line of code to try to make it easier, but it still left a number of questions to be asked, which I listed below the code:

+ +
def get_feature_reps(x, layer_names, model):
+    """"""
+    Get feature representations of input x for one or more layers in a given model.
+    """"""
+    featMatrices = []
+    for ln in layer_names:
+        selectedLayer = model.get_layer(ln)
+        featRaw = selectedLayer.output
+        featRawShape = K.shape(featRaw).eval(session=tf_session)
+        N_l = featRawShape[-1]
+        M_l = featRawShape[1]*featRawShape[2]
+        featMatrix = K.reshape(featRaw, (M_l, N_l))
+        featMatrix = K.transpose(featMatrix)
+        featMatrices.append(featMatrix)
+    return featMatrices
+
+def get_content_loss(F, P):
+    cLoss = 0.5*K.sum(K.square(F - P))
+    return cLoss
+
+ +

1- For the line featRaw = selectedLayer.output, when I print featRaw, I get the output: +Tensor(""block4_conv2/Relu:0"", shape=(1, 64, 64, 512), dtype=float32).

+ +
    +
  • a- Relu:0 does this mean Relu activation has not yet been applied?

  • +
  • b- Also I presume we're outputing the feature maps outputs from block4_conv2, not the filters/kernels themselves, correct?

  • +
  • c- Why is there an axis of 1 at the start? My understanding of Conv layers is that they're simply made up from the number of filters/kernels (with shape-height, width, depth) to apply to the input.

  • +
  • d- Is selectedLayer.output simply outputs the shape of the Conv layer, or does the output object also hold other information like the pixel values from the output feature maps of the layer?

  • +
+ +

2- With the line featMatrix = K.reshape(featRaw, (M_l, N_l)), printing featMatrix outputs: Tensor(""Reshape:0"", shape=(4096, 512), dtype=float32).

+ +
    +
  • a- This is where I'm confused the most. So to get the feature/content representation of a particular Conv layer of an image, we simply create a matrix of 2 dimensions, the first being the number of filters and the other being the area of the filter/kernel (height * width). That doesn't make sense! How do we get unique feature of an image from just that?!! We're not retrieving any pixel values from a feature map. We're simply getting the area size of filter/kernel and the number of filters, but not retrieving any of the content (pixel values) itself!!

  • +
  • b- Also the final featMatrix is transposed - i.e. featMatrix = K.transpose(featMatrix) with the output Tensor(""transpose:0"", shape=(512, 4096), dtype=float32). Why is that (i.e. why reverse the axis)?

  • +
+ +

3 - Finally, I want to know, once we retrieve the content representation, how can I output that both as a numpy array and save it as an image?

+ +

Any help would be really appreciated.

+",25360,,25360,,8/29/2019 11:54,8/29/2019 19:20,Understanding the intuition behind Content Loss (Neural Style Transfer),,1,0,,6/1/2022 15:57,,CC BY-SA 4.0 +14175,1,,,8/28/2019 14:27,,4,1614,"

Why do model-based methods use fewer samples than model-free methods? Here, I'm specifically referring to model-based methods in which we have to learn a policy and model. I can only think of two reasons for this question:

+ +
    +
  1. We can potentially obtain more samples from the learned model, which may speed up the learning speed.

  2. +
  3. Models allow us to predict the future states and run simulations. This may lead to more valuable transitions, whereby speeding up learning.

  4. +
+ +

But I heavily doubt this is the whole story. Sincerely, I hope someone could share a more detailed explanation for this question.

+",8689,,2444,,10/13/2020 8:31,10/13/2020 8:31,Why are model-based methods more sample efficient than model-free methods?,,1,0,,,,CC BY-SA 4.0 +14176,2,,14172,8/28/2019 15:01,,0,,"

My conclusion is the same as yours that there doesn't seem to be any published comparison of the two models. ByteNet is computationally expensive and requires a lot of parameters. WaveNet improves on ByteNet's efficiency, as you mentioned, and I believe that is the main difference.

+",5763,,,,,8/28/2019 15:01,,,,0,,,,CC BY-SA 4.0 +14178,1,,,8/28/2019 17:02,,0,106,"

K-means tries to find centroid and then clusters around the centroids. But what if we want to cluster based on the complement?

+ +

For example, suppose we have a group of animals and we want to cluster Dogs, Cats, (Not Dogs and Not Cats). The 3rd category will not arise from mean clustering.

+",21158,,2444,,9/1/2019 0:02,1/6/2023 16:06,How can I cluster based on the complementary categories?,,2,2,,,,CC BY-SA 4.0 +14182,2,,14175,8/28/2019 19:34,,2,,"

In this Medium article I found [1], it is explained quite well what is behind the better sample efficiency of model-based RL in comparison to the model-free one.

+ +

The main difference between the two is, like you said, that the model helps in finding the correct path more efficiently. Maybe you cannot find new samples (point 1), but you know the whole inner logic of the system better and, instead of just knowing what to do with a specific sample, you can relate it to the whole picture (sort of like in point 2: you can play with the choices) and make more profound calculations.

+ +

The article made a comparison: when you are model-based, you are drawing a map of the city with every possible direction you can take, while when you are model-free you can enter specific places and remember which direction was best based on previous visits, but you still never know exactly where you are coming from or going to.

+ +

In other words, if you think of teaching a taxi driver in a big city with many signs and rules, the model-based driver would drive precisely sooner, because the signs (the inner logic and model of the city) help them understand the map sooner, instead of just reacting crossing by crossing, somewhat by chance, all the time.

+ +

Sample efficiency describes the amount of information extracted from one sample [2]. A model-based agent can adjust the model, maybe make some calculations about expected rewards, AND, after that, do the same as the model-free one: adjust the policy. The model-free agent only has the policy. Again with the taxi drivers: the model-free driver knows that the last two times they stopped at the crossing; the model-based driver also knows it was due to the red light on the pole. The third time, the model-free driver is first in the row and BANG - hits a crossing car. Next time the rule is that cars sometimes come through, and the model-based driver knew that from the first place.

+ +
+ +

My sources:

+ +

[1] https://medium.com/the-official-integrate-ai-blog/understanding-reinforcement-learning-93d4e34e5698

+ +

[2] What is sample efficiency, and how can importance sampling be used to achieve it?

+",11810,,,,,8/28/2019 19:34,,,,0,,,,CC BY-SA 4.0 +14184,1,,,8/28/2019 22:59,,2,52,"

I am starting to create my first bot with the Microsoft Bot Framework with the help of Azure. Initially, I want to know where all the conversations the user has with the bot are stored, so that I can then get a log of all the conversations that have been held.

+ +

I already have some answers stored in knowledge bases using QnA Maker for certain questions that the bot can answer. I want to know where the questions that were not answered, or rather that the bot could not answer, are stored.

+ +

Currently, I am asking the users to write their question in a feedback form when they don't get a response from my bot. This is taking up my time and also annoys the user as they have to type more. I want my bot to collect these questions and store them in a database.

+",28278,,27933,,8/30/2019 14:52,8/30/2019 14:52,How can we log user-bot conversations using the Microsoft Bot Framework?,,0,1,,,,CC BY-SA 4.0 +14187,2,,14173,8/29/2019 7:56,,1,,"

The vector functions for true positive, false positive etc all make use of the ""magic"" numbers $0$ and $1$ used to represent Boolean values. They are convenience methods that you can use in a numerical library, but you do need to be aware of the fundamental Boolean nature of the data. The $0$ and $1$ values allow the maths for calculating TP et al, but are not fundamental to it, they are a representation.

+ +

Your derivations of gradients for the functions you give seem correct, barring the odd typo. However, the gradient doesn't really apply to the value of $\mathbf{y}$, because all components of $\mathbf{y}$ are either $0$ or $1$. The idea that you could increase $\mathbf{y}$ slightly where $\mathbf{\hat{y}}$ is $1$ in order to increase the value of the TP metric slightly has no basis. Instead the only valid changes to make an improvement are to modify FN values from $0$ to $1$ exactly.

+ +

You could probably still use your derivations as a gradient for optimisation (it would not be the only time in machine learning that something does not quite apply theoretically but you could still use it in practice). However, you then immediately hit the problem of how the values of $\mathbf{y}$ have been discretised to $0$ or $1$ - that function will not be differentiable, and it will prevent you back propagating your gradients to the neural network weights that you want to change. If you fix that follow-on problem using a smoother function (e.g. a sigmoid) then you are likely to end up with something close to either cross-entropy loss or the perceptron update step.

+ +

In other words, although what you have been told is an over-simplification, you will not find a way to improve the performance of your classifier by adding cost functions based directly on TP, FP etc. That is what binary cross-entropy loss is already doing. There are other, perhaps more fruitful avenues of investigation - hyperparameter searches, regularisation, ensembling, and if you have an unbalanced data set then consider weighting the true/false costs.

+",1847,,1847,,8/29/2019 8:12,8/29/2019 8:12,,,,3,,,,CC BY-SA 4.0 +14188,1,,,8/29/2019 9:00,,1,32,"

I want to train a WGAN where the convolution layers in the critic are only allowed to have non-negative weights (for a technical reason). The biases, nonetheless, can take both +/- values. There is no constraint on the generator weights.

+

I did a toy experiment on MNIST and observed that the performance is significantly worse than a regular WGAN.

+

What could be the reason? Can you suggest some architectural modifications so that the nonnegativity constraint doesn't severely impair the model capacity?

+",28286,,2444,,1/25/2021 19:02,1/25/2021 19:02,Wasserstein GAN with non-negative weights in the critic,,0,0,,,,CC BY-SA 4.0 +14189,1,,,8/29/2019 9:46,,3,103,"

I have a very big table with lots of names and how much they are searched by date.

+ +

I would like to find trending patterns: when a name rises and when it falls, without knowing the name or the pattern beforehand. +The rise could follow the seasons of the year, but also the days of a week.

+ +

Like a 'warm hat' is trending in winter and falling in summer. +Or searches for a ""board game"" might rise on Sunday and decrease on Monday.

+ +

The table looks simplified like this:

+ +
winter gloves, 2014-01-01, 200
+warm hat, 2014-01-01, 300
+swimming short, 2014-01-01, 1
+sunscreen, 2014-01-01, 2
+....
+winter gloves, 2014-07-01, 1
+warm hat, 2014-07-01, 1
+swimming short, 2014-07-01, 200
+sunscreen, 2014-07-01, 300
+
+ +

Which algorithms should I have a look at?

+ +

Thanks for any hint, +Joerg

+",28289,,28289,,8/29/2019 10:58,8/30/2019 18:23,Algorithm for seasonal trends,,1,0,,,,CC BY-SA 4.0 +14190,2,,13725,8/29/2019 10:07,,0,,"

Central Premises:--

+ +

This, computability of randomness in conjunction to logic, is unfortunately/fortunately a very technical topic.

+ +
+

"" ... That stochastic process is part of an algorithm, which is a set of instructions that must be valid for the program to compute.

+ +

... Is randomness anti-logical?"" ~ DukeZhou (Stack Exchange user, opening poster)

+
+ +

This answer is about: randomness and chaos; and how they relate to logic and computability.

+ +
+

""What is randomness and where does it come from? + This is one scary place to venture in. We take for granted the randomness in our reality. We compensate for that randomness with probability theory. However, is randomness even real or is it just a figment of our lack of intelligence? That is, does what we describe as randomness just a substitute for our uncertainty about reality? Is randomness just a manifestation of something else?"" + — ""Medium."" Medium, < medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6 >.

+
+ +

-

+ +
+

""Many natural intensional properties in artificial and natural languages are hard to compute. We show that randomized algorithms are often necessary to have good estimators of natural properties and to verify some specific relations. We concentrate on the reliability of queries to show the advantage of randomized algorithms in uncertain cognitive worlds."" — de Rougemont, Michel. ""Logic, randomness and cognition."" Logic, Thought and Action. Springer, Dordrecht, 2005. 497-506.

+
+ +
+ +

Layperson's Explanations:--

+ +

Unfortunately, quantum chaos in relation to randomness, is a profoundly technical topic. I have managed to track down sources that relatively aren't overly technical.

+ +

As a starting point, this Wikipedia article is worth reading:--

+ +

(https://simple.wikipedia.org/wiki/Chaos_theory)

+ +

You can continue and read this particular Medium post:--

+ +

(https://medium.com/intuitionmachine/there-is-no-randomness-only-chaos-and-complexity-c92f6dccd7ab)

+ +

For profoundly technical topics, I recommend this book series, as they are written by experts in basic technical terms, for the laypersons wanting to study technical topics:--

+ +

(https://en.wikipedia.org/wiki/Very_Short_Introductions)

+ +

I recommend reading:--

+ +
    +
  • Chaos: A Very Short Introduction
  • +
  • Probability: A Very Short Introduction
  • +
  • Fractals: A Very Short Introduction
  • +
+ +
+ +

Other References for the Layperson:--

+ + + +

Some broader implications of chaos [Link to Stanford Encyclopedia of Philosophy].

+ +

When I think of randomness, I'm inclined to think in cosmological terms. Is randomness a structural property of the universe? Is anything in the universe truly random?

+ +
+ +

Technical Explanations:--

+ +
+

""In mathematical logic, independence is the unprovability of a sentence from other sentences."" Wikipedia contributors. — ""Independence (mathematical logic)."" Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 3 Feb. 2019. Web. 29 Aug. 2019.

+
+ +

-

+ +
+

""We propose a link between logical independence and quantum physics. We demonstrate that quantum systems in the eigenstates of Pauli group operators are capable of encoding mathematical axioms and show that Pauli group quantum measurements are capable of revealing whether or not a given proposition is logically dependent on the axiomatic system. Whenever a mathematical proposition is logically independent of the axioms encoded in the measured state, the measurement associated with the proposition gives random outcomes. This allows for an experimental test of logical independence. Conversely, it also allows for an explanation of the probabilities of random outcomes observed in Pauli group measurements from logical independence without invoking quantum theory. The axiomatic systems we study can be completed and are therefore not subject to Gödel's incompleteness theorem."" Paterek, Tomasz, et al. ""Logical independence and quantum randomness."" — New Journal of Physics 12.1 (2010): 013019.

+
+ +

-

+ +

Other Technical Explanations, Sources, References, and Further Reading:--

+ + + +
+ +

Notes:--

+ +
    +
  • I usually do not use Medium or Quora as a source, with some exceptions. I have chosen to do so here.
  • +
  • I've decided to place Stanford Encyclopedia of Philosophy sources in the layperson's section.
  • +
+",25982,,4709,,5/9/2020 22:23,5/9/2020 22:23,,,,11,,,,CC BY-SA 4.0 +14191,1,14231,,8/29/2019 11:49,,3,71,"

I'm currently working on a project to make a DQN agent that decides whether to charge or discharge an electric vehicle according to an hourly changing price for selling or buying. The price pattern also varies from day to day. The goal of this work is to schedule the optimal charging/discharging actions so that it can save money.

+ +

The state contains past n-step price records, current energy level in battery, hour, etc., like below:

+ +

$$ +s_t = \{ p_{t-5}, p_{t-4}, p_{t-3}, p_{t-2}, p_{t-1}, E_t, t \} +$$

+ +

What I'm wondering is whether this is a partially observable situation or not, because the agent can only observe the past n-step prices rather than knowing every price at every time step.

+ +

Can anyone comment on this issue?

+ +

If this is a partially observable situation, is there any simple way to deal with it?

+",27946,,2444,,8/29/2019 12:01,9/1/2019 10:41,Is a state that includes only the past n-step price records partially observable?,,1,0,,,,CC BY-SA 4.0 +14192,2,,14003,8/29/2019 11:57,,2,,"

After reading multiple explanations from different sources, I think I found the main difference between the two methods. Implementation-wise, the only difference is the matrix that you multiply the signal with (the Laplacian vs. the adjacency matrix). But by using the Laplacian, you are also encoding the graph structure (the in/out degree of each node), which dictates how a signal should ""diffuse"" through the network.
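
+ +

To make that concrete, here is a small NumPy sketch (my own toy example, not from any particular paper) showing how the two propagation matrices treat the same signal differently:

+ +
import numpy as np
+
+# adjacency matrix of a 3-node path graph (undirected)
+A = np.array([[0., 1., 0.],
+              [1., 0., 1.],
+              [0., 1., 0.]])
+
+D = np.diag(A.sum(axis=1))   # degree matrix
+L = D - A                    # (unnormalised) graph Laplacian
+
+x = np.array([1., 0., 0.])   # a signal on the nodes
+
+print(A @ x)   # adjacency propagation: each node receives the sum of its neighbours
+print(L @ x)   # Laplacian propagation: degree-weighted differences with the neighbours
+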

+",20430,,,,,8/29/2019 11:57,,,,0,,,,CC BY-SA 4.0 +14194,1,14197,,8/29/2019 15:41,,2,124,"

From Artificial Intelligence: A Modern Approach, a book by Stuart Russell and Peter Norvig, this is the definition of AI:

+ +
+

We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning.

+
+ +

Given the definition of AI above, is unsupervised learning (e.g. clustering) a branch of AI? I think the definition above is more suitable for supervised or reinforcement learning.

+",16565,,2444,,9/4/2019 0:02,9/4/2019 0:02,Is unsupervised learning a branch of AI?,,1,0,,,,CC BY-SA 4.0 +14196,1,,,8/29/2019 16:59,,2,43,"

I found on the web that fisherface is the best algorithm for face detection. Before investing deeply into it, I just want to know how hard it is to implement and how much time it will take.

+ +

I am new to this website and I welcome any suggestions.

+",28295,,2444,,8/31/2019 13:23,8/31/2019 13:23,How to implement fisherface algorithm and how much time will it take?,,0,0,,,,CC BY-SA 4.0 +14197,2,,14194,8/29/2019 18:18,,3,,"

There is a problem with confining Artificial Intelligence to a single definition, because it has become an umbrella term encompassing many fields of science. It has come a long way from the ""thinking machines"" of the 50s. Actually, the term was coined in a summer workshop in 1956, whose proposal was:

+ +
+

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

+
+ +

So even from its very first introduction the field of AI included learning. Personally, I've never seen a definition of AI, that doesn't include learning, of which unsupervised learning is clearly a part. So by any means clustering algorithms, which are unsupervised learning algorithms, can be considered AI. This can be confirmed in multiple sources, e.g. wikipedia.

+",26652,,,,,8/29/2019 18:18,,,,0,,,,CC BY-SA 4.0 +14199,2,,14165,8/29/2019 18:51,,0,,"

UPDATE:

+ +

I found the answers on my own.

+ +

To begin with I figured out how to plot the input data and output training data on a scatter plot using matplotlib. And once I was able to do that I could instantly see exactly what's going on.

+ +

When the answers are correct the input data is indeed linearly separable based on what the training output is looking to recognize.

+ +

The ""0.5 error"" occurs when a single violation of linear separability occurs.

+ +

The ""Wrong Answer"" error occurs when linear separation is violated twice. In other words, there are conditions where linear separation is violated in two separate planes. Or when the data can be separated by planes but it would require more than one plane to do this. (see graphic examples below)

+ +

I suspected that there would be a difference between these different types of errors and the answer is, yes, there is a difference.

+ +

So I have solved my own question. Thanks to anyone who may have been working on this.

+ +

If you'd like to see some of my graphs here are some specific examples:

+ +

A graph of all possible binary inputs

+ +

+ +

An example of a good training_outputs = np.array([0, 0, 0, 0, 1, 1, 1, 0]). As you can see in this graph the red points are linearly separable from the blue points.

+ +

+ +

An example of a 0.5 error, training_outputs = np.array([0, 0, 0, 1, 0, 0, 1, 0]). The points are not linearly separable in one plane

+ +

+ +

Example of a 2-plane wrong answer error, training_outputs = np.array([0, 0, 0, 1, 0, 1, 1, 0]). You can see that there are two planes that are not linearly separable

+ +

+ +

An example of a different kind of wrong answer, training_outputs = np.array([0, 0, 0, 1, 1, 0, 0, 0]). In this case the data can be separated by planes, but it would require 2 different planes to do this which a perceptron cannot handle.

+ +

+ +

So this covers all possible error conditions. Aren't graphs great!

+",28240,,,,,8/29/2019 18:51,,,,0,,,,CC BY-SA 4.0 +14200,2,,14174,8/29/2019 19:20,,1,,"
+
    +
  1. For the line featRaw = selectedLayer.output, when I print featRaw, I get the output: Tensor(""block4_conv2/Relu:0"", shape=(1, 64, 64, 512), dtype=float32).

    + +
      +
    • a) Relu:0 does this mean Relu activation has not yet been applied?
    • +
  2. +
+
+ +

It has been applied.

+ +
+
    +
  • b) Also I presume we're outputing the feature maps outputs from block4_conv2, not the filters/kernels themselves, correct?
  • +
+
+ +

Yes.

+ +
+
    +
  • c) Why is there an axis of 1 at the start? My understanding of Conv layers is that they're simply made up from the number of filters/kernels (with shape-height, width, depth) to apply to the input.
  • +
+
+ +

The first dimension in Keras is used for representing the batch dimension. When training the model, you aren't passing the images one by one but in batches. The number of images that are processed in parallel before the model calculates its loss and makes its update is called the batch_size. When computing the content loss, though, you have a single image you want to perform style transfer on, so batch_size=1. That's why you see the 1 at the beginning of the tensor's shape.

+ +
+
    +
  • d) Is selectedLayer.output simply outputs the shape of the Conv layer, or does the output object also hold other information like the pixel values from the output feature maps of the layer?
  • +
+
+ +

It outputs the tensor containing all the pixel values that are outputted from that layer.

+ +
+
    +
  1. With the line: featMatrix = K.reshape(featRaw, (M_l, N_l) where printing featMatrix would output: Tensor(""Reshape:0"", shape=(4096, 512), dtype=float32).

    + +
      +
    • a) This is where I'm confused the most. So to get the feature/content representation of a particular Conv layer of an image, we simply create a matrix of 2 dimensions, the first being the number of filters and the other being the area of the filter/kernel (height * width). That doesn't make sense! How do we get unique feature of an image from just that?!! We're not retrieving any pixel values from a feature map. We're simply getting the area size of filter/kernel and the number of filters, but not retrieving any of the content (pixel values) itself!!
    • +
  2. +
+
+ +

I think you're confusing things a bit. You are getting the pixel values, just in a different shape. Instead of stacking them in a $64 \times 64 \times 512$ array you are stacking them in a $4096 \times 512$ array. Exactly the same information is stored in a different shape. Think of it like transforming a $10 \times 2$ matrix to a $5 \times 4$ one. It contains the same information, in a different shape.
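
+ +

If it helps, here is a tiny NumPy analogy of that reshape (illustrative only; K.reshape behaves analogously for this purpose):

+ +
import numpy as np
+
+feat = np.arange(64 * 64 * 512).reshape(64, 64, 512)   # stand-in for the layer's feature maps
+feat_matrix = feat.reshape(64 * 64, 512)                # shape (4096, 512)
+
+print(feat.size == feat_matrix.size)                    # True: nothing is lost or discarded
+print(np.array_equal(feat[0, 0], feat_matrix[0]))       # True: same pixel values, new layout
+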

+ +
+
    +
  • b) Also the final featMatrix is transposed - i.e. featMatrix = K.transpose(featMatrix) with the output Tensor(""transpose:0"", shape=(512, 4096), dtype=float32). Why is that (i.e. why reverse the axis)?
  • +
+
+ +

I'm not sure exactly why he does this, but from the bit of the code that I saw, I think it has something to do with the style loss (not the content loss). Probably something didn't match up with the dimensions of some operation, maybe a matrix multiplication...

+ +
+
    +
  1. Finally I want to know, once we retrieve the content representation, how can I output that in both as a numpy array and save it as an image?
  2. +
+
+ +

If you have the output image as a tensor, let's say y. You can convert it to numpy through keras.backend.eval:

+ + + +
import keras.backend as K
+
+y_arr = K.eval(y)
+
+ +

There are many ways to save the image, including scikit-image, scipy, opencv, PIL, matplotlib, etc. I'll use the last:

+ + + +
import matplotlib.pyplot as plt
+
+plt.imsave('my_image.png', y_arr)
+
+ +

Note that in order to work y_arr should have a shape of either (height, width, 3) (if it is an RGB image) or (height, width) (if it is grayscale).

+",26652,,,,,8/29/2019 19:20,,,,11,,,,CC BY-SA 4.0 +14201,2,,9418,8/29/2019 19:37,,0,,"

Don't use the Euclidean distance as the similar/dissimilar factor; you'll get better results if you put a couple of Dense layers on top of your Siamese network. You don't mention how large your feature vectors are, but if they are 128D face encodings, this is what I used:

+ +
# assumes: from keras import layers, models, losses  (and an optimizer `opt`, e.g. from keras.optimizers)
+# dist_euclid_layer is a layers.Lambda that performs the Euclidean distance on the input
+first_dense = layers.Dense(512, activation='relu')(dist_euclid_layer)
+drop_one = layers.Dropout(0.5)(first_dense)
+output_dense = layers.Dense(2, activation='sigmoid')(drop_one)
+model = models.Model(inputs=[input_layer], outputs=output_dense)
+model.compile(loss=losses.binary_crossentropy, optimizer=opt, metrics=['accuracy'])
+
+ +

In training, use [0,1] or [1,0] for Y to indicate different / similar.

+ +

Then in production, use the np.argmax to find the positive match:

+ +
predictions_raw = model.predict(vstack_batch_input, batch_size=len(batch_input))
+predictions = np.argmax(predictions_raw,axis=1)
+tot_matches = np.sum(predictions)
+if tot_matches == 1:
+    # ... good times, whatever face[N] has prediction[N] == 1 is the matching face
+
+",28300,,,,,8/29/2019 19:37,,,,0,,,,CC BY-SA 4.0 +14202,1,,,8/29/2019 19:44,,3,101,"

I am a physicist and I don't have much background on machine learning or deep learning except taking a couple of courses on statistics. In physics, we often simulate a model by means of two-way coupled systems where each system is described by a partial differential equation. The equations are generally unfolded in a numerical grid of interest and then solved iteratively until a self-consistent solution is obtained.

+ +

A well-known example is the Schrödinger-Poisson solver. Here, for a given nano/atomic structure, we assume an initial electron density. Then we solve the Poisson equation for that electron density. The Poisson equation tells us the electrostatics (electric potential) of the structure. Given this information, we solve the Schrödinger equation for the structure which tells us about the energy levels of the electrons and their wave functions in the structure. But one would then find that this energy levels and wavefunctions correspond to a different electron density (than the one we initially guessed). So we iterate the process with the new electron density and follow the above mentioned procedures in a loop until a self-consistent solution is obtained.

+ +

Often times the iteration processes are computationally expensive and extremely time-consuming.

+ +

My question is this: Would the use of deep learning algorithms offer any advantage in modeling such problems where iteration and self-consistency are involved? Are there any study/literature where researchers explored this avenue?

+",28301,,2444,,9/1/2019 22:26,9/1/2019 22:26,Feasibility of using machine learning to obtain self-consistent solutions,,0,0,,,,CC BY-SA 4.0 +14204,1,,,8/29/2019 19:53,,4,98,"

Context: I'm a complete beginner to evolutionary algorithms and genetic algorithms and programming. I'm currently taking a course about genetic algorithms and genetic programming.

+ +

One of the concepts introduced in the course is ""closure,"" the idea that - with an expression tree representing a genetic program that we're evolving - all nodes in the tree need to be the same type. As a practical example, the lecturer mentions that implementing greater_than(a, b) for two integers a and b can't return a boolean like true or false (it can return, say, 0 and 1 instead).

+ +

What he didn't explain is why the entire tree needs to match in all operators. It seems to me that this requirement would result in the entire tree (representing your evolved program) being composed of nodes that all return the same type (say, integer).

+",28304,,2444,,1/19/2021 18:49,6/9/2023 1:04,Why do all nodes in a GP tree need to be the same type?,,1,1,,,,CC BY-SA 4.0 +14206,1,,,8/30/2019 0:43,,3,300,"

I am working on an image-to-image regression task which requires me to develop a deep learning model that takes in a sequence of 5 images and returns another image. The sequence of 5 images and the output image are conceptually and temporally related. In fact, the 5 images in the sequence each correspond to a timestep in a simulation, and the output, the one I am trying to predict, corresponds to the 6th timestep of that sequence.

+ +

For now, I have been training a simple regression-type CNN model which takes in the sequence of 5 images stored in a list and outputs an image corresponding to the next timestep in the simulation.

+ +

However, I have been researching a bit in order to find a better way to carry out this task, and I came across the idea of ConvLSTMs. So far, though, I have only seen these applied to tasks such as feature prediction and outputting a sentence describing an image. What I wanted to know is whether ConvLSTMs can also output images and, more importantly, whether they can be applied to my case. If not, what other types of deep learning networks would be suitable for this task?

+ +

Thanks in advance !

+",28307,,,,,8/30/2019 0:43,Image to image regression in tensorflow,,0,0,,,,CC BY-SA 4.0 +14207,1,,,8/30/2019 2:55,,2,69,"

A single iteration of gradient descent can be parallelised across many worker nodes. We simply split the training set across the worker nodes and pass the parameters to each worker; each worker computes gradients for its subset of the training set and then passes them back to the master to be averaged. With some effort, we can even use model parallelism.

+ +

However, stochastic gradient descent is an inherently serial process. Each update must be performed sequentially. At each iteration, we must perform a broadcast and gather of all parameters. This is bad for performance. Ultimately, the number of updates is the limiting factor of deep model training speed.

+ +

Why must we perform many updates? +With how few updates can we achieve good accuracy?

+ +

What factors affect the minimum number of updates required to reach some accuracy?

+",28308,,,,,9/29/2019 4:04,In how few updates can a multi layer neural net be trained?,,1,0,,,,CC BY-SA 4.0 +14208,2,,14207,8/30/2019 3:13,,2,,"

In Don't Decay the Learning Rate, Increase the Batch Size, Smith et al. train ResNet-50 on ImageNet to 76.1% with only 2500 updates. Has anyone done it in less?

+ +

In The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent Sankararaman et al. present the concept of gradient confusion which slows convergence, and show that depth increases gradient confusion while overparameterization on width and skip connections reduces gradient confusion.

+ +

In Massively Distributed SGD: ImageNet/ResNet-50 Training in a Flash, Mikami et al. train ResNet-50 on ImageNet to 75.29% in some number of updates, but I haven't been able to do the math to work out how many updates they must have used.

+ +

To summarise:

+ +
    +
  • Larger mini-batches help, but give diminishing returns and cause other issues.
  • +
  • Deeper models tend to require more updates.
  • +
  • Wider layers need fewer updates.
  • +
  • Skip connections, label smoothing, and batch norm might help.
  • +
  • The best I have found so far on ImageNet/ResNet-50 to >75% is 2500 updates.
  • +
+",28308,,,,,8/30/2019 3:13,,,,0,,,,CC BY-SA 4.0 +14210,1,,,8/30/2019 11:41,,2,51,"

I want to make something that produces a stochastic process from time series data. +The time series data is recorded every hour over a year, which means a 24-hour pattern exists for each of the 365 days.

+ +

What I want to do is something like below:

+ +
    +
  1. Fit a probability distribution using data for each hour so that I can sample the most probable value for this hour.

  2. +
  3. Repeat it for 24 hours to generate a pattern of a day.

  4. +
+ +

BUT! I want the sampling to be done considering previous values rather than being done in an independent manner.

+ +

For example, I want to sample from something like $p(x_t \mid x_{t-1}, x_{t-2})$, or just $p(x_t \mid x_{t-1})$, rather than from the independent $p(x_t)$, where $t$ refers to a specific hour.

+ +

What I came up with was the Markov chain, but I couldn't find any reference or materials on how to model it from real data.

+ +

Could anyone give me a comment for this issue?

+",27946,,27933,,9/4/2019 9:36,9/4/2019 9:36,Are there any ways to model markov chains from time series data?,,0,0,,,,CC BY-SA 4.0 +14211,1,,,8/30/2019 12:38,,2,52,"

I am working on an image-to-image regression task which requires me to develop a deep learning model that takes in a sequence of 5 images and returns another image. The sequence of 5 images and the output image are conceptually and temporally related. In fact, the 5 images in the sequence each correspond to a timestep in a simulation, and the output, the one I am trying to predict, corresponds to the 6th timestep of that sequence.

+ +

For now, I have been training a simple regression-type CNN model which takes in the sequence of 5 images stored in a list and outputs an image corresponding to the next timestep in the simulation. This does work with a small and rather simple dataset (13000 images) but works a bit worse on a more diverse and larger dataset (102000 images).

+ +

For this reason, I have been researching a bit in order to find a better way to carry out this task, and I came across the idea of ConvLSTMs. So far, though, I have only seen these applied to tasks such as feature prediction and outputting a sentence describing an image. What I wanted to know is whether ConvLSTMs can also output images and, more importantly, whether they can be applied to my case. If not, what other types of deep learning networks would be suitable for this task?

+ +

Thanks in advance!

+",28307,,,,,9/1/2019 12:59,Can ConvLSTMs outuput images?,,1,0,,,,CC BY-SA 4.0 +14212,1,,,8/30/2019 12:48,,3,175,"

I am working on an MDP where there are four states and ten actions. I am supposed to derive the optimal policy to reach the desired state. In any state, a particular action can take you to any of the other states. +For example, if we begin in state S1, performing action A1 on S1 can take you to S2, S3 or S4, or just leave you in the same state S1. Similarly for the other actions.

+ +

My question is - is it mandatory to have only a single reward value for a single action A? Or is it possible to give a reward of 10 if action a1 on state s1 takes you to s2, give a reward of 50 if action a1 on state s1 takes you to s3, give a reward of 100 if action a1 on state s1 takes you to s4 which is the terminal state or give zero reward if that action results in the state being unchanged.

+ +

Can I do this??

+ +

Because, in my case, every state is better than the previous one, i.e. S2 is better than S1, S3 is better than S2, and so on. So, if an action on S1 takes you directly to S4, which is the final state, I would like to give it the maximal reward.

+",28314,,27933,,9/4/2019 9:36,9/4/2019 9:36,Can I have different rewards for a single action based on which state it transitions to?,,1,0,,,,CC BY-SA 4.0 +14213,1,,,8/30/2019 15:02,,2,73,"

How do I program a neural network such that, when an image is inputted, the output is a numerical value that is not the probability of the image being a certain class? In other words, a CNN that doesn't classify. For example, when an input of an image of a chair is given, the model should not give the chance that the image is a chair but rather give the predicted age of the chair, the predicted price of the chair, etc. I'm currently not sure how to program a neural net like this.

+",28303,,,,,8/31/2019 5:46,How does one create a non-classifying CNN in order to gain information from images?,,2,1,,,,CC BY-SA 4.0 +14214,2,,14213,8/30/2019 17:21,,2,,"

This can be thought of as a loss function design problem. If you optimize your network weights for something like multi-class classification, then expect your network to learn weights for this task (You will use cross entropy loss for this task). If you optimize your network to output a single value at the last layer and treat it as a regression problem for age prediction, then your network can learn weights for this particular task (You may use something like Mean Square Error loss here).

+ +

Let me give you a rough guideline on how to do this. Suppose your input is images of chairs and you want to predict their age using a pretrained ResNet; this is how you may do it in PyTorch.

+ +

Definition
+X: Input pictures
+Y: List of ground truth values which is age here. i.e. : [1.4, 2.5, 2.2, ....]

+ +

Modify your neural network to give one output at the last layer
+model = torchvision.models.resnet18(pretrained=True)
+num_ftrs = model.fc.in_features
+model.fc = torch.nn.Linear(num_ftrs, 1)

+ +

Design your loss function appropriately. You can treat age prediction as a standard regression problem. Of course, there are better loss designs here.
+criterion = torch.nn.MSELoss()

+ +

So use this loss during training
+loss = criterion(output, target) where output is your neural network prediction and target is your ground truth values.
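
+ +

To tie these pieces together, here is a rough sketch of a training step (the optimizer choice and train_loader, a DataLoader yielding image batches with their ages, are just placeholders you would replace with your own):

+ +
import torch
+import torchvision
+
+model = torchvision.models.resnet18(pretrained=True)
+model.fc = torch.nn.Linear(model.fc.in_features, 1)   # single regression output
+
+criterion = torch.nn.MSELoss()
+optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
+
+for images, ages in train_loader:          # your own DataLoader of (image batch, age batch)
+    optimizer.zero_grad()
+    output = model(images).squeeze(1)       # shape (batch_size,) to match the targets
+    loss = criterion(output, ages.float())
+    loss.backward()
+    optimizer.step()
+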

+ +

This is how you can modify an existing architecture for your task. Hope it helps.

+",28182,,,,,8/30/2019 17:21,,,,0,,,,CC BY-SA 4.0 +14215,1,17581,,8/30/2019 18:04,,4,2233,"

I would like to take in some input values for $n$ variables, say $R$, $B$, and $G$. Let $Y$ denote the response variable of these $n$ inputs (in this example, we have $3$ inputs). Other than these, I would like to use a reference/target value to compare the results.

+ +

Now, suppose the relation between the inputs ($R$, $B$ and $G$) with the output $Y$ is (let's say):

+ +

$$Y = R + B + G$$

+ +

But the system/machine has no knowledge of this relation. It can only read its inputs, $R$, $B$ and $G$, and the output, $Y$. Also, the system is provided with the reference value, say, $\text{REF} = 30$ (suppose).

+ +

The aim of the machine is to find this relation between its inputs and output(s). For this, I have come across some quite useful material online, like this forum query and Approximation by Superpositions of a Sigmoidal Function by G. Cybenko, and felt that it was possible. Also, I suspect that Polynomial Regression may be helpful, as suggested Here.

+ +

One vague approach that comes to my mind is to use a truth-table-like method to somehow deduce the effect of the inputs on the output and hence get a function for it. But I am neither sure how to proceed with it, nor do I trust its credibility.

+ +

Is there any alternative/already existing method to accomplish this?

+",26452,,2444,,9/1/2019 22:21,1/19/2020 13:47,How can I determine the mathematical relation between the input and output variables?,,3,5,,,,CC BY-SA 4.0 +14216,2,,14189,8/30/2019 18:23,,1,,"

As you are handling time series data and you want to find trends, a good approach would be to consider applying the Holt-Winters seasonal method. This algorithm handles seasonal, trend and smoothing parameters. A good implementation of this kind of algorithm is Prophet by Facebook. You can code an exploratory analysis with this library and obtain the trend, yearly seasonality, and weekly seasonality of the time series, among other components. Example:
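
+ +

A minimal sketch of such an exploratory analysis with Prophet could look like the following (the file name and the way one series per name is extracted are just assumptions based on the table in the question):

+ +
import pandas as pd
+from fbprophet import Prophet
+
+df = pd.read_csv('searches.csv', names=['name', 'ds', 'y'])   # e.g. 'warm hat', 2014-01-01, 300
+warm_hat = df[df['name'] == 'warm hat'][['ds', 'y']]          # one time series per name
+
+m = Prophet(yearly_seasonality=True, weekly_seasonality=True)
+m.fit(warm_hat)
+
+future = m.make_future_dataframe(periods=365)
+forecast = m.predict(future)
+m.plot_components(forecast)   # plots the trend, yearly and weekly seasonality components
+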

+ +

+",12006,,,,,8/30/2019 18:23,,,,0,,,,CC BY-SA 4.0 +14219,1,14268,,8/31/2019 2:17,,0,318,"

Take the below LSTM:

+ +
input: 5x1 matrix
+hidden units: 256
+output size (aka classes, 1 hot vector): 10x1 matrix
+
+ +

It is my understanding that an LSTM of this size will do the following:

+ +

$w_x$ = weight matrix at $x$

+ +

$b_x$ = bias matrix at $x$

+ +

activation_gate = tanh($w_1$ $\cdot$ input + $w_2$ $\cdot$ prev_output + $b_1$)

+ +

input_gate = sigmoid($w_3$ $\cdot$ input + $w_4$ $\cdot$ prev_output + $b_2$)

+ +

forget_gate = sigmoid($w_5$ $\cdot$ input + $w_6$ $\cdot$ prev_output + $b_3$)

+ +

output_gate = sigmoid($w_7$ $\cdot$ input + $w_8$ $\cdot$ prev_output + $b_4$)

+ +

The size of the output of each gate should be equal to the number of hidden units, i.e., 256. The problem arises when trying to convert to the correct final output size of 10. If the forget gate outputs a vector of size 256, it is summed with the element-wise product of the activation and input gates to find the new state; this results in a hidden state of size 256. (Also, in all my research I have not found anywhere whether this addition is actually addition, or simply appending the two matrices.)

+ +

So if I have a hidden state of 256, and the output gate outputs 256, doing an element wise product of these two results in, surprise surprise, 256, not 10. If I instead ensure the output gate outputs a size of 10, this no longer works with the hidden state in an element wise product.

+ +

How is this handled? I can come up with many ways of doing it myself, but I want an identical replica of the basic LSTM unit, as I have some theories I want to test, and if it is even the slightest bit different it would make the research invalid.

+",26726,,,,,9/3/2019 3:53,How does an LSTM output the correct dimensions for classes?,,1,2,,,,CC BY-SA 4.0 +14221,2,,14213,8/31/2019 5:46,,1,,"

You generally use a SoftMax layer as the output layer of a neural network that is used as a classifier.

+ +

Now, if you want your neural network to predict the age of a chair or the price of a chair, as in linear regression (the output is continuous), you have to remove the SoftMax layer and add one or more layers such that the final layer gives only one value at the output (which is the prediction for your age or price). And, instead of the logit loss, you can use MSE for backpropagation.
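
+ +

For example, a minimal Keras sketch of such a regression network could look like this (the input shape and layer sizes are arbitrary placeholders):

+ +
from keras import models, layers
+
+model = models.Sequential([
+    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
+    layers.Flatten(),
+    layers.Dense(64, activation='relu'),
+    layers.Dense(1)                      # single linear output: the predicted age or price
+])
+model.compile(optimizer='adam', loss='mse')
+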

+ +

So, like keshik mentioned in another answer, it's all about the final layer used and the loss function used. Based on what you use, your weights get trained.

+ +

This is what is done in transfer learning also. Based on the task you want to achieve you change the last layers and retrain your network.

+",20760,,,,,8/31/2019 5:46,,,,0,,,,CC BY-SA 4.0 +14224,1,14247,,8/31/2019 12:28,,72,10706,"

If the original purpose for developing AI was to help humans in some tasks and that purpose still holds, why should we care about its explainability? For example, in deep learning, as long as the intelligence helps us to the best of their abilities and carefully arrives at its decisions, why would we need to know how its intelligence works?

+",16565,,2444,,8/19/2020 10:52,1/13/2022 23:25,Why do we need explainable AI?,,9,2,,,,CC BY-SA 4.0 +14226,2,,14224,8/31/2019 14:54,,7,,"

Explainable AI is often desirable because

+ +
    +
  1. AI (in particular, artificial neural networks) can catastrophically fail to do their intended job. More specifically, it can be hacked or attacked with adversarial examples or it can take unexpected wrong decisions whose consequences are catastrophic (for example, it can lead to the death of people). For instance, imagine that an AI is responsible for determining the dosage of a drug that needs to be given to a patient, based on the conditions of the patient. What if the AI makes a wrong prediction and this leads to the death of the patient? Who will be responsible for such an action? In order to accept the dosage prediction of the AI, the doctors need to trust the AI, but trust only comes with understanding, which requires an explanation. So, to avoid such possible failures, it is fundamental to understand the inner workings of the AI, so that it does not make those wrong decisions again.

  2. +
  3. AI often needs to interact with humans, which are sentient beings (we have feelings) and that often need an explanation or reassurance (regarding some topic or event).

  4. +
  5. In general, humans are often looking for an explanation and understanding of their surroundings and the world. By nature, we are curious and exploratory beings. Why does an apple fall?

  6. +
+",2444,,2444,,8/31/2019 15:20,8/31/2019 15:20,,,,0,,,,CC BY-SA 4.0 +14227,2,,14224,8/31/2019 15:29,,4,,"

Another reason: in the future, AI might be used for tasks that cannot be understood by human beings on their own. By understanding how a given AI algorithm works on such a problem, we might come to understand the nature of the underlying phenomenon.

+",22659,,2444,,8/31/2019 15:35,8/31/2019 15:35,,,,0,,,,CC BY-SA 4.0 +14228,1,,,8/31/2019 16:44,,3,313,"

I understand that RMSE is just the square root of MSE. Generally, as far as I have seen, people seem to use MSE as a loss function and RMSE for evaluation purposes, since it exactly gives you the error as a distance in the Euclidean space.

+ +

What could be a major difference between using MSE and RMSE when used as loss functions for training?

+ +

I'm curious because good frameworks like PyTorch, Keras, etc. don't provide RMSE loss functions out of the box. Is it some kind of standard convention? If so, why?

+ +

Also, I'm aware of the difference that MSE magnifies the errors with magnitude>1 and shrinks the errors with magnitude<1 (on a quadratic scale), which RMSE doesn't do.

+",12574,,2444,,8/31/2019 18:24,10/9/2022 7:08,When to use RMSE as opposed to MSE and vice versa?,,1,3,,,,CC BY-SA 4.0 +14230,1,,,9/1/2019 0:39,,1,522,"

I have been training my CNN for a bit now and, while both the training loss and the training error curves are going down during training, both my validation loss and my validation error curves are kind of zig-zagging and oscillating along the epochs. What does this represent?

+",28307,,2444,,12/14/2020 10:31,12/14/2020 10:31,What does an oscillating validation error curve represent?,,1,1,,,,CC BY-SA 4.0 +14231,2,,14191,9/1/2019 1:24,,1,,"

An environment is partially observable if the agent cannot fully observe the current state but only partially observes it. More specifically, in fully observable MDPs (FOMDPs), the agent knows the current state of the environment, which may or may not (for example, depending on whether the state is Markov or not) contain all the information required to theoretically take the optimal action. In partially observable MDPs (POMDPs), the state is not available or observable, so the agent might not possess all the required information to theoretically take the optimal action.

+ +

For example, in the game of poker, there are hidden and observable cards. In this game, the state of the environment could be defined as all the hands of all players, the cards on the table and the next cards that will be drawn from the deck until the end of the round. If a player had access to all this information, then the environment would be fully observable, but, usually, this information is not available, so poker is usually considered a partially observable game.

+ +

You defined the state as follows

+ +

$$s_t = \{ p_{t:t-n}, E_t\}$$

+ +

So, if you have access to the past $n$ price records, $p_{1:n}$, and the energy level, $E_t$, then your environment is fully observable. However, this does not mean that it is a Markov environment, that is, intuitively, that your current state is sufficient to theoretically take the optimal action.

+ +

To conclude, partial observability depends on the definition of state in a specific problem. The concept of Markov property is also related to the concept of partial observability. However, in the case of MDPs, the concept of Markov property has a specific definition: given the current state and an action, the probability of the next state is conditionally independent of all previous states and actions or, more formally

+ +

$$ +P(S_{t+1} \mid S_t, A_t) = P(S_{t+1} \mid S_t, S_{t-1}, \dots, S_1, A_t, A_{t-1}, \dots, A_1) +$$

+ +

where $S_{t+1}$ is the next state, $S_t$ the current state, $A_t$ the current action, $S_{t-1}, \dots, S_1$ the previous states and $A_{t-1}, \dots, A_1$ the previous actions.

+",2444,,2444,,9/1/2019 10:41,9/1/2019 10:41,,,,3,,,,CC BY-SA 4.0 +14233,2,,13777,9/1/2019 4:33,,2,,"

In this excerpt there is something very misleading

+ +
+

The principle behind Weak AI is simply the fact that machines can be made to act as if they are intelligent. For example, when a human player plays chess against a computer, the human player may feel as if the computer is actually making impressive moves. But the chess application is not thinking and planning at all. All the moves it makes are previously fed in to the computer by a human and that is how it is ensured that the software will make the right moves at the right times.

+
+ +

Intelligent systems do not in general make use of a lookup table. For example, the chess computer Deep Blue (the first to defeat a reigning world chess champion in a match) made use of complex evaluation functions.

+ +

Neural networks could be trained to act as these evaluation functions.

+ +

Additionally, there are planning agents - this is a very large sub-field in AI.

+ +

Below again the article mistakes all artificial agents for simple deterministic Turing-like machines

+ +
+

Weak AI is focused towards the technology which is capable of carrying out pre-planned moves based on some rules and applying these to achieve a certain goal but, Strong AI is based on coming up with a technology that can think and function very similar to humans.

+
+ +

A highly recommended book is ""Artificial Intelligence: A modern Approach."" This reference covers many important sub-fields of AI and provides enough information to realize when people have mistaken the operations of their laptop as the final measure of artificial agency.

+ +

Here Comes the hand waving

+ +

The strong vs weak AI dichotomy is fueled by what appears to be a fundamental question: Is there something ""special"" about humanity that sets us apart from the rest of the material realm?

+ +

If we are just a complex arrangement of matter with no metaphysical properties then in theory we only need to solve two problems for strong AI:

+ +
    +
  1. Powerful enough hardware
  2. +
  3. Knowing how to correctly assemble said hardware
  4. +
+ +

Justification of 1 & 2:

+ +

In the absence of metaphysical properties, we know that a correct arrangement of matter will result in the epiphenomenal emergence of human like experience - this idea is strongly supported by Big Bang theory and the Theory of Evolution.

+ +

Thus, all that is left is finding how to correctly arrange matter (hardware) so as to ensure said emergence.

+ +

Though, this philosophical thread quickly descends into the rabbit hole where we must tread softly when discussing whether a synthesized agent actually has qualia. Subjectivity is arguably the fundamental characteristic of human experience and we can't even assure ourselves that another human has this - we just accept that they do.

+ +

I hope that I have clarified where the science and philosophy stand.

+",28343,,28343,,9/1/2019 5:21,9/1/2019 5:21,,,,0,,,,CC BY-SA 4.0 +14234,2,,14178,9/1/2019 5:36,,1,,"

Note: K-means does not assume an interpretation/label of the clusterings - in fact it is an unsupervised algorithm. The interpretations are a result of human analysis after running K-means.

+ +

For example, in the case of cats and dogs one would most definitely choose k = 2 - which provides an easy interpretation. However, what would it mean if we set k = 1000? We would no longer have a ""clean"" interpretation of the centroids.

+ +

Note how I keep saying ""interpretation"": the algorithm simply assigns each data point to a cluster and calls it a day. Humans then look at the results and try to understand them with an interpretation.

+ +

Continuing with the example where k = 2. One could easily interpret ""is cat"" as ""not dog"" and ""is dog"" as ""not cat."" The idea here is that the data is unlabeled beforehand and humans try to fathom the results retrospectively by assigning the resulting clusters with an understandable label.
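
+ +

As an aside, a minimal scikit-learn sketch of that k = 2 case (with random stand-in data instead of real cat/dog features) would be:

+ +
import numpy as np
+from sklearn.cluster import KMeans
+
+X = np.random.rand(100, 2)   # stand-in for real features, e.g. image embeddings
+
+labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
+
+# labels only contains 0s and 1s; whether cluster 0 means cat or dog
+# is an interpretation a human has to make after the fact.
+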

+ +

I hope this clarifies the issue.

+",28343,,28343,,4/19/2020 15:53,4/19/2020 15:53,,,,0,,,,CC BY-SA 4.0 +14235,2,,5087,9/1/2019 5:47,,1,,"

One approach for this is collaborative filtering.

+ +

see also link

+ +

This does however need you to have data about some user preferences on some products. Given that you have stated you are willing to mine user preferences this approach may be feasible.

+ +

The idea is that with this data you can train a model to predict how a user might rate a product. This is accomplished by learning the ""preference signature"" of a user and a feature vector for each product.

+ +

General Idea

+ +

for your i-th user, the algorithm will learn said preferences as a vector $$\theta^{(i)}$$

+ +

Additionally, for your j-th product it will learn a feature vector $$x^{(j)}$$

+ +

You can then predict how the i-th user will rate the j-th item by computing the dot product (or some equivalent). That is, compute the predicted rating as: +$$\hat R(i,j)=\theta^{(i)}\cdot x^{(j)}$$

+ +

You can then use this rating to decide whether or not a a product is a good match for a user.

+ +

The A. Ng Machine Learning Coursera MOOC has a very nice module on collaborative filtering.

+ +

Implementation Note

+ +

When asking your users for feedback, try to ask for quantitative measures. For example, the classic 1-5 scale rating.

+",28343,,28343,,9/2/2019 2:37,9/2/2019 2:37,,,,4,,,,CC BY-SA 4.0 +14236,1,,,9/1/2019 10:29,,2,76,"

I'm new to artificial intelligence. I am looking for the most appropriate AI solution for my application, which is developing an algorithm to predict a proceeding situation (edited: I want my algorithm to predict a situation or more than one to happen at a predefined moment) and, at the same time, to learn from the iterative stages of my application.

+ +

Any suggestions? Any help? Any proposals?

+",28345,,28345,,9/1/2019 22:08,5/28/2020 23:06,Which predictive algorithm is most appropriate for a proceeding situation?,,1,3,,,,CC BY-SA 4.0 +14237,2,,14212,9/1/2019 12:24,,2,,"

The reward function can be a function of the current state, current action, and next state: $R(s_t, a_t, s_{t+1})$. It's valid to use the Bellman operator in this setting because it's still a contraction and will yield the optimal value function.
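
+ +

For the example in the question, such a transition-dependent reward can be written directly as a function of (state, action, next state); a minimal Python sketch using the numbers from the question:

+ +
def reward(s, a, s_next):
+    # transition-dependent reward R(s, a, s') for action a1 taken in state s1
+    if s == 's1' and a == 'a1':
+        return {'s1': 0, 's2': 10, 's3': 50, 's4': 100}[s_next]
+    return 0  # fill in the remaining (s, a, s') cases analogously
+
+print(reward('s1', 'a1', 's4'))  # 100: the transition straight to the terminal state
+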

+ +

NOTE: I'm assuming that you will be solving the MDP with the Bellman equation.

+",28347,,,,,9/1/2019 12:24,,,,0,,,,CC BY-SA 4.0 +14238,2,,14224,9/1/2019 12:36,,-1,,"

IMHO, the most important need for explainable AI is to prevent us from becoming intellectually lazy. If we stop trying to understand how answers are found, we have conceded the game to our machines.

+",28348,,,,,9/1/2019 12:36,,,,2,,,,CC BY-SA 4.0 +14239,2,,14211,9/1/2019 12:59,,2,,"

If you only need 5 frames to predict the next frame, then I'd recommend a U-Net architecture, which is basically a CNN encoder/decoder network in which the decoder uses the intermediate features produced in the encoder as well as its own features to produce an output image. Also, in addition to using a conventional L2 loss for the output image, you can always add an additional GAN loss to make the image look more realistic.
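
+ +

As a rough illustration, a heavily simplified Keras sketch of this idea could look like the following (the input resolution, channel counts and the idea of stacking the 5 frames along the channel axis are all assumptions of mine):

+ +
from keras import layers, models
+
+inp = layers.Input(shape=(64, 64, 5))                        # 5 past frames stacked as channels
+e1 = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
+p1 = layers.MaxPooling2D(2)(e1)
+b = layers.Conv2D(64, 3, padding='same', activation='relu')(p1)
+u1 = layers.UpSampling2D(2)(b)
+c1 = layers.Concatenate()([u1, e1])                          # skip connection from the encoder
+d1 = layers.Conv2D(32, 3, padding='same', activation='relu')(c1)
+out = layers.Conv2D(1, 1, activation='sigmoid')(d1)          # the predicted 6th frame
+
+model = models.Model(inp, out)
+model.compile(optimizer='adam', loss='mse')                  # the L2 reconstruction loss
+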

+ +

If using a longer history of frames can help, then I recommend taking a look at ""Recurrent Environment Simulators"" and combining it with the ideas above.

+ +

Hope it helps!

+",28347,,,,,9/1/2019 12:59,,,,2,,,,CC BY-SA 4.0 +14240,2,,14224,9/1/2019 13:23,,9,,"

If you're a bank, hospital or any other entity that uses predictive analytics to make a decision about actions that have huge impact on people's lives, you would not make important decisions just because Gradient Boosted trees told you to do so. Firstly, because it's risky and the underlying model might be wrong and, secondly, because in some cases it is illegal - see Right to explanation.

+",22835,,2444,,9/1/2019 19:42,9/1/2019 19:42,,,,0,,,,CC BY-SA 4.0 +14242,2,,14236,9/1/2019 17:05,,1,,"

One approach would be to use a Sequence Processing Neural architecture - one option is a recurrent network. These were specifically designed with your intent in mind.

+ +

They can consume sequences of data and learn from these in order to predict subsequent time-steps. Though these will require you to understand best practices for implementation.

+ +

If your task isn't too complex you could make use of a forward algorithm or some variant. Many of these models make use of simplifying assumptions - if your use case cannot admit these you'll have to go for something more advanced.

+ +

If your prediction task is complex and requires heavy use of long term dependencies you may have to go for a more cutting edge architecture like a transformer - but I heard that these are difficult to implement and most definitely require familiarity with deep learning systems.

+ +

Finally, if you need a system that can understand data and dependencies across very large spans of time then you may have to wait for the science to progress.

+ +

I hope this helps.

+",28343,,,,,9/1/2019 17:05,,,,0,,,,CC BY-SA 4.0 +14243,1,14298,,9/1/2019 17:39,,7,1099,"

After an AI goes through the process described in How would an AI learn language?, an AI knows the grammar of a language through the process of grammar induction. They can speak the language, but they have learned formal grammar. But most conversations today, even formal ones, use idiomatic phrases. Would it be possible for an AI to be given a set of idioms, for example,

+ +
+

Immer mit der Ruhe

+
+ +

Which, in German, means 'take it easy' but an AI of grammar induction, if told to translate 'take it easy' to German, would not think of this. And if asked to translate this, it would output

+ +
+

Always with the quiet

+
+ +

So, is it possible to teach an AI to use idiomatic phrases to keep up with the culture of humans?

+",26306,,,,,9/6/2019 15:48,How would an AI learn idiomatic phrases in a natural language?,,2,2,,,,CC BY-SA 4.0 +14244,2,,14230,9/1/2019 17:44,,3,,"

Without more information the best diagnostic is:

+

You potentially have a bug in your code.

+

Justification

+

Neural network systems are capable of failing silently. The system can still appear to "learn" even in the presence of said bug - a while back I had a very similar issue with a toy CNN project. My network could get 99% accuracy on the training set but always achieved 8-13% on the validation set. This looked like overfitting, but none of the usual remedies solved the issue. Finally, I found that I wasn't feeding the data correctly into the network during training, but I was feeding it correctly during validation.

+

Conclusion

+

Provide the following for better diagnostic:

+
    +
  • Is this a custom CNN you hardcoded?
  • +
  • are you using python?
  • +
  • what's the objective function?
  • +
  • can you provide the loss curve for the training set?
  • +
  • show us some code too (this will be very helpful for working toward a solution)
  • +
+

I hope this helps, and best of luck!

+",28343,,28343,,7/25/2020 16:14,7/25/2020 16:14,,,,0,,,,CC BY-SA 4.0 +14246,1,14270,,9/1/2019 18:11,,6,365,"

To compare the performance of various algorithms for perfect information games, reasonable benchmarks include reversi and m,n,k-games (generalized tic-tac-toe). For imperfect information games, something like simplified poker is a reasonable benchmark.

+ +

What are some reasonable benchmarks to compare the performance of various algorithms for reinforcement learning in discrete MDPs? Instead of using a random environment from the space of all possible discrete MDPs on $n$ states and $k$ actions, are there subsets of such a space with more structure that are more reflective of ""real-world"" environments? An example of this might be so-called gridworld (i.e. maze-like) environments.

+ +

This is a related question, though I'm looking for specific examples of MDPs (with specified transitions and rewards) rather than general areas where MDPs can be applied.

+ +

Edit: Some example MDPs are found in section 5.1 (Standard Domains) of Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search (2012) by Guez et al.:

+ +
+

The Double-loop domain is a 9-state deterministic MDP with 2actions, + 1000 steps are executed in this domain. Grid5 is a 5×5 grid with no + reward anywhere except for a reward state opposite to the reset state. + Actions with cardinal directions are executed with small probability + of failure for 1000 steps. Grid10 is a 10×10 grid designed like Grid5. + We collect 2000 steps in this domain. Dearden’s Maze is a 264-states + maze with 3 flags to collect. A special reward state gives the number + of flags collected since the last visit as reward, 20000 steps are + executed in this domain.

+
+",3373,,3373,,12/27/2019 4:11,12/27/2019 4:11,Benchmarks for reinforcement learning in discrete MDPs,,1,0,,,,CC BY-SA 4.0 +14247,2,,14224,9/1/2019 18:44,,76,,"

As argued by Selvaraju et al., there are three stages of AI evolution, in which interpretability is helpful.

+
    +
  1. In the early stages of AI development, when AI is weaker than human performance, transparency can help us build better models. It can give a better understanding of how a model works and helps us answer several key questions. For example, why a model works in some cases and doesn't in others, why some examples confuse the model more than others, why these types of models work and the others don't, etc.

    +
  2. +
  3. When AI is on par with human performance and ML models are starting to be deployed in several industries, it can help build trust for these models. I'll elaborate a bit on this later, because I think that it is the most important reason.

    +
  4. +
  5. When AI significantly outperforms humans (e.g. AI playing chess or Go), it can help with machine teaching (i.e. learning from the machine on how to improve human performance on that specific task).

    +
  6. +
+

Why is trust so important?

+

First, let me give you a couple of examples of industries where trust is paramount:

+
    +
  • In healthcare, imagine a Deep Neural Net performing diagnosis for a specific disease. A classic black box NN would just output a binary "yes" or "no". Even if it could outperform humans in sheer predictability, it would be utterly useless in practice. What if the doctor disagreed with the model's assessment, shouldn't he know why the model made that prediction; maybe it saw something the doctor missed. Furthermore, if it made a misdiagnosis (e.g. a sick person was classified as healthy and didn't get the proper treatment), who would take responsibility: the model's user? the hospital? the company that designed the model? The legal framework surrounding this is a bit blurry.

    +
  • +
  • Another example is self-driving cars. The same questions arise: if a car crashes, whose fault is it: the driver's? the car manufacturer's? the company that designed the AI? Legal accountability, is key for the development of this industry.

    +
  • +
+

In fact, according to many, this lack of trust has hindered the adoption of AI in many fields (sources: [1], [2], [3]). While there is a running hypothesis that with more transparent, interpretable or explainable systems users will be better equipped to understand and therefore trust the intelligent agents (sources: [4], [5], [6]).

+

In several real-world applications, you can't just say "it works 94% of the time". You might also need to provide a justification...

+

Government regulations

+

Several governments are slowly proceeding to regulate AI and transparency seems to be at the center of all of this.

+

The first to move in this direction is the EU, which has set several guidelines where they state that AI should be transparent (sources: [7], [8], [9]). For instance, the GDPR states that if a person's data has been subject to "automated decision-making" or "profiling" systems, then he has a right to access

+
+

"meaningful information about the logic involved"

+
+

(Article 15, EU GDPR)

+

Now, this is a bit blurry, but there is clearly the intent of requiring some form of explainability from these systems. The general idea the EU is trying to pass is that "if you have an automated decision-making system affecting people's lives then they have a right to know why a certain decision has been made." For example, a bank has an AI accepting and declining loan applications, then the applicants have a right to know why their application was rejected.

+

To sum up...

+

Explainable AIs are necessary because:

+
    +
  • It gives us a better understanding, which helps us improve them.
  • +
  • In some cases, we can learn from AI how to make better decisions in some tasks.
  • +
  • It helps users trust AI, which leads to a wider adoption of AI.
  • +
  • Deployed AIs in the (not too distant) future might be required to be more "transparent".
  • +
+",26652,,2444,,1/13/2022 23:25,1/13/2022 23:25,,,,1,,,,CC BY-SA 4.0 +14248,1,,,9/1/2019 20:30,,1,39,"

I have written my own AlphaZero implementation and started training it recently.
+Problem is, I am 99% sure there is a mistake and I do not know how to tackle this, since I cannot explain it. I am new to AI, so my own go at debugging this wasn't quite successful.

+ +

Input to my NN: A game state, represented by the board and position of the stones.
+Output of my NN: a policy vector P and a scalar v (so an array and a number).
+During self-play, training examples for each move are generated. These are later used to fit the network.

+ +

After having trained a bit, I can see both policy and value loss decreasing, which is good.
+But for the very first game state (empty board) my prediction v of winning the game always stays at 0. This is very concerning, since the game I am training on is Connect4. Connect4 is a solved game and, in the long run, the value for v should be 1 (100% win chance).
+So, any ideas what I can do? I mean, I could post the code here, but that is quite a lot, and I don't know if any of you are willing to read through it.

+ +

To show you what I mean, I'll show you the output of one of my test cases:

+ +
def test_p_v():
+    game = connect4.Connect4()
+    nnetwrapper = neuralnetwrapper.NNetWrapper(game, args)
+    trainer = trainingonly1NN.Training(game = game, nnet= nnetwrapper, args = args)
+
+    trainer.nnet.load_checkpoint(folder = './NNmodels/', filename='best.pth.tar')
+
+    m = mctsnn.MCTS(nnet = nnetwrapper, args=args)
+
+    m.root.expand()
+
+    for c in m.root.children:
+        print(""P: {}  v: {}"".format(c.state.P, c.state.v))
+
+    print(""Root MCTS: P: {}   v: {}"".format(m.root.state.P, m.root.state.v))
+
+ +

results in:

+ +
P: [0.1436838  0.13809082 0.14174062 0.18597336 0.11126296 0.120884
+ 0.15836443]  v: [-0.14345692]
+P: [0.14202288 0.13772981 0.14302546 0.1945151  0.11690026 0.1178078
+ 0.1479987 ]  v: [0.4222183]
+P: [0.1447647  0.13066562 0.14334281 0.18055147 0.13374692 0.12126701
+ 0.1456615 ]  v: [-0.5827425]
+P: [0.15192215 0.14221476 0.1443521  0.16634388 0.12634312 0.12711576
+ 0.14170831]  v: [-0.0229549]
+P: [0.1456457  0.136381   0.13940862 0.17145196 0.12714048 0.12233274
+ 0.15763956]  v: [-0.02743456]
+P: [0.15353182 0.13510287 0.1433772  0.16371183 0.12161442 0.1228981
+ 0.15976372]  v: [0.37902302]
+P: [0.14321715 0.13596673 0.13836266 0.18927328 0.11999774 0.12481775
+ 0.1483647 ]  v: [-0.521353]
+Root MCTS: P: [0.14296353 0.13863131 0.1358864  0.18102945 0.10981551 0.12779148
+ 0.16388236]   v: [0.]
+
+ +

So as you can see, for every different state, there are different P-values and different v-values, which makes sense.
+It also makes sense for my ROOT Node to have the highest value in P at position 3, since this refers to the middle column.
+But the v in my root Node is 0. This is alarming and I have no idea what to do from here on.

+ +

I also checked some of the training examples passed to my neural network to learn, they look like this (board, P, Actual game result):

+ +
[[array([[0, 0, 0, 0, 0, 0, 0],
+       [0, 0, 0, 0, 0, 0, 0],
+       [0, 0, 0, 0, 0, 0, 0],
+       [0, 0, 0, 0, 0, 0, 0],
+       [0, 0, 0, 0, 0, 0, 0],
+       [0, 0, 0, 0, 0, 0, 0]]), [0.13, 0.13, 0.15, 0.13, 0.18, 0.13, 0.15], 1]
+
+ +

where the very last number (1) is the v-value my network is supposed to fit! So it often even is 1!

+ +

+But I fear that, since this is the very root of all, the game result for the following nodes is -1 and 1, changing every ""step"", so the average probably is 0, which is quite logical. I don't know how to express this; I fear it is trying to average the v over all states, instead of just training the v for one state.

+",27406,,27406,,9/1/2019 21:26,9/1/2019 21:26,AlphaZero value at root node not being affected by training,,0,0,,,,CC BY-SA 4.0 +14249,1,,,9/2/2019 3:04,,1,34,"

I used following custom loss function.

+ +
def custom_loss(epo):
+
+  def loss(y_true,y_pred):
+      m=K.binary_crossentropy(y_true, y_pred)
+      x=math.log10(epo)
+      y=x*x
+      y=(math.sqrt(y)/100)
+      l=(m*(y))
+
+      return K.mean(l, axis=-1)
+  return loss
+
+ +

and this is my discriminator model

+ +
def Discriminator():
+
+
+  inputs = Input(shape=img_shape)
+
+
+  x=Conv2D(32, kernel_size=3, strides=2, padding=""same"")(inputs)
+  x=LeakyReLU(alpha=0.2)(x)
+  x=Dropout(0.25)(x, training=True)
+
+  x=Conv2D(64, kernel_size=3, strides=2, padding=""same"")(x)
+  x=ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
+  x=BatchNormalization(momentum=0.8)(x)
+  x=LeakyReLU(alpha=0.2)(x)
+
+  x=Dropout(0.25)(x, training=True)
+  x=Conv2D(128, kernel_size=3, strides=2, padding=""same"")(x)
+  x=BatchNormalization(momentum=0.8)(x)
+  x=LeakyReLU(alpha=0.2)(x)
+
+  x=Dropout(0.25)(x, training=True)
+  x=Conv2D(256, kernel_size=3, strides=1, padding=""same"")(x)
+  x=BatchNormalization(momentum=0.8)(x)
+  x=LeakyReLU(alpha=0.2)(x)
+
+  x=Dropout(0.25)(x, training=True)
+  x=Flatten()(x)
+  outputs=Dense(1, activation='sigmoid')(x)
+  model = Model(inputs, outputs)
+  #model.summary()
+  img = Input(shape=img_shape)
+  validity = model(img)
+  return Model(img, validity)
+
+ +

and initialize discriminator here

+ +
D = Discriminator()
+epoch=0
+D.compile(loss=custom_loss(epoch), optimizer=optimizer, metrics= 
+['accuracy'])
+G = Generator()
+z = Input(shape=(100,))
+img = G(z)
+D.trainable = False
+valid = D(img)
+
+ +

I want to update the epo value of the loss function after each epoch in the following code

+ +
for epoch in range(epochs):
+
+  for batch in range(batches):
+      ............
+     d_loss_real = D.train_on_batch(imgs, valid)
+     d_loss_fake = D.train_on_batch(gen_batch, fake)
+     d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
+     g_loss = combined.train_on_batch(noise_batch, valid)
+
+ +

Is there any way to update the loss function after compiling the model, without affecting training?

+",23509,,23509,,9/2/2019 3:15,9/2/2019 3:15,How to update Loss Function parameter after compilation,,0,0,,,,CC BY-SA 4.0 +14251,2,,14224,9/2/2019 11:05,,4,,"

The answer to this is incredibly simple. If you are a bank executive one day you may need to stand up in court and explain why your AI denied mortgages to all these people... who just happen to share some protected characteristic under anti-discrimination legislation. The judge will not be happy if you handwave the question away mumbling something about algorithms. Or worse, why did this car/plane crash and how will you prevent it next time.

+ +

This is the major blocker to more widespread adoption of AI in many industries.

+",16207,,,,,9/2/2019 11:05,,,,1,,,,CC BY-SA 4.0 +14254,1,,,9/2/2019 12:44,,2,601,"

I'm following this tutorial, and I wonder why there is a training step: why is it necessary? I thought the whole idea of GPT-2 is that you do not need to train it on a specific text domain, as it's already pre-trained on a large amount of data.

+",27947,,27947,,9/2/2019 21:13,9/2/2019 21:13,Why do you need to retrain GPT-2?,,1,0,,,,CC BY-SA 4.0 +14257,2,,14215,9/2/2019 14:54,,1,,"

There are always a large number of possible functions that can produce a given set of input-output values. The challenge is to find the simplest function (according to whatever criteria you choose) that produces those values.

+ +

One approach is to write a general function of the input variables, comprising terms of all orders in R, G, and B, with a coefficient for each term, then search for values of the coefficients that A) reproduce the known input-output values accurately and B) leave the largest number of coefficients equal to zero.

+ +

Several different algorithms can be used to do the search efficiently. My choice would be a genetic algorithm to seek the minimum of the RMS difference between the produced and known I-O values, summed with the product of a gradually increasing parameter and the number of nonzero coefficients.
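For a rough illustration of criteria A and B, here is a minimal sketch. It is not the genetic-algorithm search described above, just a simpler stand-in using an L1-penalised fit, which also tends to leave many coefficients at zero; the data arrays are placeholders for your known input-output values, and scikit-learn is assumed to be available.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

# Placeholder data: rows of (R, G, B) inputs and their known outputs.
X = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90], [20, 10, 5]])
y = np.array([35.0, 95.0, 155.0, 22.5])

# Build terms of all orders up to 2: 1, R, G, B, R^2, RG, RB, G^2, GB, B^2.
poly = PolynomialFeatures(degree=2, include_bias=True)
X_terms = poly.fit_transform(X)

# The L1 penalty pushes unneeded coefficients towards exactly zero.
model = Lasso(alpha=0.1, max_iter=10000).fit(X_terms, y)

print(poly.get_feature_names_out(['R', 'G', 'B']))
print(model.coef_)        # many entries should end up at (or near) zero
print(model.intercept_)

A genetic algorithm would instead search the coefficient space directly, scoring candidates by the RMS error plus a penalty on the number of nonzero coefficients, as described above.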

+",28348,,,,,9/2/2019 14:54,,,,1,,,,CC BY-SA 4.0 +14259,2,,14254,9/2/2019 15:30,,1,,"

I've come to learn about GPT-2 through Robert Miles' AI safety YouTube channel and intend to look into it in more detail.

+ +

From my current understanding, GPT-2 is pre-trained to ""understand"" ""natural"" language (for any definition of the words in quotes). However, you would want it not only to understand general text but also to generate text similar to some specific ""genre"", e.g. scientific articles, YouTube comments, Twitter messages, you name it.

+ +

So using its pre-trained understanding, it analyzes the structure of sample texts and replicates this structure.
+For scientific articles this structure could be:

+ +
    +
  • Abstract
  • Context of research topic
  • Introduction of researchers
  • Explanation of methods/experiments/discoveries
  • Results and interpretation
  • Future research and application
+ +

For Youtube comments the structure is probably more chaotic but could include a vague reference to former comments, insults, nonsensical bar-grade philosophy, internet slang and smileys.

+ +

TL;DR: The domain specific text is only used to tell GPT-2 what you're looking for. You basically hand it context to work with, instead of prompting ""Say something clever"" (my least favorite line at parties, when I've been introduced as clever).

+ +

P.S.: Take this with a grain of salt. It's 90% conjecture from incomplete information.

+",28388,,,,,9/2/2019 15:30,,,,1,,,,CC BY-SA 4.0 +14260,2,,12487,9/2/2019 16:30,,1,,"

I have not come across a model like this yet. BUT

+ +

If you have not tried smaller models I'd recommend trying that first.

+ +

Justification: this lets you use learning curves to diagnose what to do next.

+ +

Also, you might try starting with a GRU (the LSTM overhead may not be needed).

+ +

One Idea for a starting model

+ +

Observe that turnout does not affect weather. So the weather forecast model will be independent of the turnout model but not vice versa.

+ +

(Note: I'm using RNN to denote whichever recurrent architecture you choose)

+ +

Formulation for simultaneous prediction:

+ +

Weather

+ +

$\text{RNN}_w$ is your weather prediction model

+ +

Let $\hat w_t = \text{RNN}_w(\hat w_{t-1})$ be the predicted weather at time $t$.

+ +

Turnout: The idea here is that people will or will not go to the beach for various reasons (overcrowded, too hot or stormy etc). So in the beach population prediction task we use all these as features to predict the population at the next time-step. This reduces the problem to any one of the classical models already developed.

+ +

$\text{RNN}_p$ is your turnout (beach population) prediction model

+ +

Let $\hat p_t$ be the predicted turnout at time $t$.

+ +

$\hat c_t=[\hat p_t, \hat w_t]$ is the concatenation at time t

+ +

$\hat p_{t+1} = \text{RNN}_p(\hat c_t)$

+ +

Formulation in the presence of weather forecast:

+ +

Simply replace $\hat w_t$ with the true weather forecasts $\hat f_t$.
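As an illustration only (a minimal Keras sketch with made-up shapes and layer sizes, not part of the original formulation), the turnout model can consume the concatenation of past turnout and the forecast features at each time step:

import numpy as np
from tensorflow import keras

timesteps, n_weather = 14, 3   # assumed: 14 past days, 3 forecast features per day
inputs = keras.Input(shape=(timesteps, 1 + n_weather))   # [turnout, weather...] per step

x = keras.layers.GRU(32)(inputs)      # GRU, as suggested earlier in this answer
outputs = keras.layers.Dense(1)(x)    # next-step turnout prediction

model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')

# Dummy data with the assumed shapes, just to show the call signature.
X = np.random.rand(100, timesteps, 1 + n_weather)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=2, verbose=0)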

+ +

A final warning

+ +

Unless you think your RNN is better at weather prediction than the forecasting systems I would not recommend using $\text{RNN}_w$ in a production application.

+ +

I hope this helps.

+",28343,,28343,,9/2/2019 16:42,9/2/2019 16:42,,,,0,,,,CC BY-SA 4.0 +14261,2,,14215,9/2/2019 18:40,,0,,"

As I read your explanation, what you're looking for can be stated another way:

+ +
Y = aX + b
+
+ +

where Y is output vector, X input vector and a & b are the coefficients you want to find.

+ +

Why so? And what is happening?

+ +

First things first: I recommend watching the video [4] about how matrices and vectors work together and, after multiplication, form very familiar equations:

+ +
Y = a1 * x1 + a2 * x2 + a3 * x3
+
+ +

Now you see, you do not only get R + G + B but also some constants supplied for each of the variables.

+ +

About polynomial solutions, I found [2] and [3], but reading through [3] you'll soon notice it is about a completely different approach:

+ +
Y = a1 * x + a2 * x^2 + a3 * x^3
+
+ +

which you don't want.

+ +

So, you can use something called linear regression, or for example a so-called deep neural network, to solve it [1].

+ +

I would summarize the source [1] in these steps:

+ +
    +
  • You have to find some training data. That is, examples of correct Y values when the X values are known.

  • Then you build, for example in a Python notebook, some code that has: a neural network with hidden layer(s), an activation function, back propagation and an objective function.

  • Then run many, many iterations over the data; this is called training.
+ +

There are plenty of samples and courses online covering these in detail, and with the right tools and tutorials only a few dozen lines are needed.

+ +

The process ends with a validation phase using some more known data items and results. It can tell you how well the estimated model works.
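If it helps, here is a rough sketch of the simplest version of this idea (plain linear regression, assuming scikit-learn is available; the numbers are placeholders for your own known input-output pairs):

import numpy as np
from sklearn.linear_model import LinearRegression

# Known examples: each row is (R, G, B); y holds the matching outputs.
X = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [128, 128, 128]])
y = np.array([54.0, 182.0, 18.0, 128.0])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # the a1, a2, a3 and b of Y = a1*R + a2*G + a3*B + b

# Estimate the output for a new colour.
print(model.predict([[10, 200, 30]]))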

+ +
+ +

Final Notes:

+ +

As you may observe, the solution includes quite a few new terms you have to learn before mastering the task. For example, Udemy has great online courses on this topic, and free tutorials are available on other sites as well. Your plans sound quite ambitious compared to the knowledge you have so far, so I really do recommend you learn a little bit more to be able to fine-tune the examples already available online. For example, tutorial [5] includes one. It is at first quite complicated code; you'd need quite a lot of practice to master it line by line.

+ +
+ +

In short:

+ +

Find your favorite tutorial, study neural networks a little bit (the basics) and pick a code sample to start experimenting. It is a long way, but it is worth it.

+ +
+ +

Source:

+ +

[1] https://lightsapplications.wordpress.com/linear-regression-and-deep-learning/

+ +

[2] https://www.ritchieng.com/machine-learning-polynomial-regression/

+ +

[3] https://arachnoid.com/polysolve/

+ +

[4] https://www.youtube.com/watch?v=F2lJ7oSwcyY

+ +

[5] https://missinglink.ai/guides/neural-network-concepts/backpropagation-neural-networks-process-examples-code-minus-math/

+",11810,,,,,9/2/2019 18:40,,,,1,,,,CC BY-SA 4.0 +14262,1,14339,,9/2/2019 20:34,,-1,1875,"

I have a simple text classifier, with the following structure:

+ +
    input = keras.layers.Input(shape=(len(train_x[0]),))
+
+    x=keras.layers.Dense(500, activation='relu')(input)
+    x=keras.layers.Dropout(0.5)(x)
+    x=keras.layers.Dense(250, activation='relu')(x)
+    x=keras.layers.Dropout(0.5)(x)
+    preds = keras.layers.Dense(len(train_y[0]), activation=""sigmoid"")(x)
+
+    model = keras.Model(input, preds)
+
+ +

When training it with 300,000 samples, with a batch size of 500, I get an accuracy value of .95 and loss of .22 in the first iteration, and the subsequent iterations are .96 and .11.

+ +

Why does the accuracy grow so quickly, and then just stop growing?

+",17272,,,,,9/9/2019 20:08,Accuracy too high too fast?,,3,3,,1/31/2022 12:29,,CC BY-SA 4.0 +14264,2,,14224,9/2/2019 23:19,,18,,"
+

Why do we need explainable AI? + ... why we need to know ""how does its intelligence work?""

+
+ +

Because anyone with access to the equipment, enough skill, and enough time, can force the system to make a decision that is unexpected. The owner of the equipment, or 3rd parties, relying on the decision without an explanation as to why it is correct would be at a disadvantage.

+ +

Examples - Someone might discover:

+ +
    +
  • People who are named John Smith and request heart surgery on Tuesday mornings, Wednesday afternoons, or Fridays on odd days and months have a 90% chance of moving to the front of the line.

  • Couples where the male's last name begins with an odd letter in the first half of the alphabet, and who apply for a loan with a spouse whose first name begins with a letter from the beginning of the alphabet, are 40% more likely to receive the loan if they have fewer than 5 bad entries in their credit history.

  • etc.
+ +

Notice that the above examples ought not to be determining factors with regard to the question being asked, yet it is possible for an adversary (with their own equipment, or knowledge of the algorithm) to exploit them.

+ +

Source papers:

+ +
    +
  • ""AdvHat: Real-world adversarial attack on ArcFace Face ID system"" (Aug 23 2019) by Stepan Komkov and Aleksandr Petiushko

    + +
      +
    • Creating a sticker and placing it on your hat fools facial recognition system.
    • +
  • +
  • ""Defending against Adversarial Attacks through Resilient Feature Regeneration"" (Jun 8 2019), by Tejas Borkar, Felix Heide, and Lina Karam

    + +
      +
    • +

      ""Deep neural network (DNN) predictions have been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, so-called universal adversarial perturbations are image-agnostic perturbations that can be added to any image and can fool a target network into making erroneous predictions. Departing from existing adversarial defense strategies, which work in the image domain, we present a novel defense which operates in the DNN feature domain and effectively defends against such universal adversarial attacks. Our approach identifies pre-trained convolutional features that are most vulnerable to adversarial noise and deploys defender units which transform (regenerate) these DNN filter activations into noise-resilient features, guarding against unseen adversarial perturbations."".

      +
    • +
  • +
  • ""One pixel attack for fooling deep neural networks"" (May 3 2019), by Jiawei Su, Danilo Vasconcellos Vargas, and Sakurai Kouichi

    + +
      +
    • Altering one pixel can cause these errors:
    • +
    + +
    +


    + Fig. 1. One-pixel attacks created with the proposed algorithm that successfully fooled three types of DNNs trained on CIFAR-10 dataset: The All convolutional network (AllConv), Network in network (NiN) and VGG. The original class labels are in black color while the target class labels and the corresponding confidence are given below.

    + +

     

    + +


    + Fig. 2. One-pixel attacks on ImageNet dataset where the modified pixels are highlighted with red circles. The original class labels are in black color while the target class labels and their corresponding confidence are given below.

    +
  • +
+ +

Without an explanation as to how and why a decision is arrived at, the decision can't be absolutely relied upon.

+",17742,,,,,9/2/2019 23:19,,,,1,,,,CC BY-SA 4.0 +14265,2,,14224,9/2/2019 23:28,,2,,"

In addition to all these answers mentioning the more practical reasons of why we'd want explainable AIs, I'd like to add a more philosophical one.

+ +

Understanding how things around us work is one of the main driving forces of science from antiquity. If you don't have an understanding of how things work, you can't evolve beyond that point. Just because ""gravity works"" hasn't stopped us trying to understand how it works. In turn a better understanding of it led to several key discoveries, which have helped us advance our technology.

+ +

Likewise, if we stop at ""it works"" we will stop improving it.

+ +
+ +

Edit:

+ +

AI hasn't been just about making ""machines think"", but also about using them to understand how the human brain works. AI and neuroscience go hand in hand.

+ +

This all wouldn't be possible without being able to explain AI.

+",27655,,27655,,9/4/2019 19:02,9/4/2019 19:02,,,,0,,,,CC BY-SA 4.0 +14268,2,,14219,9/3/2019 3:53,,0,,"

After some more research, I have found the answer.

+ +

An LSTM is composed of one LSTM cell that is continuously updated by passing in new inputs, the hidden state and the previous output. All nodes inside the LSTM cell are of size hidden_units. That means the outputs of the activation gate, input gate, forget gate and output gate are all of size hidden_units. As an extension of this, the size of the hidden state is also equal to hidden_units.

+ +

The problem arises when the desired output size is not equal to hidden_units. This is fixed simply and easily by slapping a basic feed-forward layer onto the end of the output of the LSTM, so even if it outputs 256 values, you can convert them to n nodes. These final output nodes are typically put through a softmax with cross-entropy loss applied.
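For illustration, here is a minimal Keras sketch of that idea (the sizes 256 and n below are assumptions, not taken from the linked article):

from tensorflow import keras

timesteps, features, hidden_units, n = 10, 8, 256, 5   # assumed sizes

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, features)),
    keras.layers.LSTM(hidden_units),                 # output has size hidden_units (256)
    keras.layers.Dense(n, activation='softmax'),     # feed-forward layer maps 256 -> n
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()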

+ +

I came to the conclusion by looking at the code included here: https://towardsdatascience.com/lstm-by-example-using-tensorflow-feb0c1968537

+ +

There you can see that the final output has its own separate weights and biases.

+",26726,,,,,9/3/2019 3:53,,,,0,,,,CC BY-SA 4.0 +14270,2,,14246,9/3/2019 8:31,,5,,"

Although I am not aware of any ""benchmark problems"" for (discrete) MDPs, I'll comment a bit on possible benchmarks and I will show some benchmarks used to test POMDP algorithms.

+ +

MDP vs POMDP

+ +

In Markov Decision Processes (MDPs) the whole state space is known; this means you know all the information for your problem. Therefore, you can use them to find solutions for perfect-information problems or games. +Many of these games could use an MDP, for example 2048 and chess. Note that you must take into account that the computational complexity grows with the number of states. +Although I could not find any benchmarks for MDPs, games with perfect information can be used to compare MDP solvers.

+ +

When a problem or game has imperfect information, you should use a Partially Observable Markov Decision Process (POMDP), in which you do not need to know the current state, but you keep track of the probabilities of being in any of the (discrete) states.

+ +

POMDP Benchmarks

+ +

Since I worked with POMDPs, I will comment on some of the benchmarks researchers used for discrete POMDPs (Pineau et al. (2003), Spaan and Vlassis (2004), Kurniawati et al. (2008), Ong et al. (2010), Araya-Lopez et al. (2010)):

+ +
    +
  • Tag: a robot and target move in a grid environment and can move one step at a time, moving has a cost, and a reward is gained if the robot is at the same position as the target (i.e. tagged it).
  • Two-Robot Tag: two robots attempt to catch a target, thereby sharing their observations and actions; the target tries to get away from them.
  • Mazes (Littman et al. (1995), Kaelbling et al. (1998), Spaan and Vlassis (2004)):
      • Hallway and Hallway2 are robot navigation tasks in a hallway, where the robot has only local noisy sensor information. The difficulty of hallways is that they are long areas which look alike, which causes ambiguity in the localization.
      • Tiger-grid: a two-state world with a tiger behind either the left or right door. The actions are listen, open the right door, or open the left door; there is a positive reward for opening the door without the tiger, otherwise a large negative reward.
  • Rock Sample: a rover explores a grid area; it knows its own position and the positions of the rocks, but it does not know which rocks are valuable. The rover can sense how valuable they are, but this sensor is less reliable when used farther away.
+ +


+

+ +

The tag game: the robot (blue) and target on a map with 29 positions and 870 states (29 for the robot, 29 + 1 (tagged) for the target).

+ +

These problems tend to be of the same size (number of states and actions) such that the results of different algorithms can be compared easily.

+ +

References:

+ +
    +
  • Araya-Lopez, M., Thomas, V., Buffet, O., and Charpillet, F. (2010). A closer look at MOMDPs. In 2010 22nd IEEE International Conference on Tools with Artificial Intelligence, volume 2, pages 197–204.
  • Kaelbling, L.P., Littman, M.L., and Cassandra, A.R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2): 99-134.
  • Kurniawati, H., Hsu, D., and Lee, W. (2008). SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proceedings of Robotics: Science and Systems IV, Zurich, Switzerland.
  • Littman, M.L., Cassandra, A.R., and Kaelbling, L.P. (1995). Learning policies for partially observable environments: Scaling up. In Proc. 12th Int. Conf. on Machine Learning, San Francisco, CA.
  • Ong, S. C. W., Png, S. W., Hsu, D., and Lee, W. S. (2010). Planning under Uncertainty for Robotic Tasks with Mixed Observability. The International Journal of Robotics Research, 29(8):1053–1068.
  • Pineau, J., Gordon, G., and Thrun, S. (2003). Point-based value iteration: An anytime algorithm for POMDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 477–484.
  • Spaan, M. T. J. and Vlassis, N. (2004). A point-based POMDP algorithm for robot planning. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 2399–2404, New Orleans, Louisiana.
+",198,,198,,9/15/2019 15:54,9/15/2019 15:54,,,,0,,,,CC BY-SA 4.0 +14272,2,,14147,9/3/2019 14:37,,1,,"

Even if you want to re-train your model for just one new class, you will have to prepare your training data such that it includes all or most of the classes you want to predict. Most of the time, the last layers of a network encode the number of labels to be predicted, and that should always be the sum of the number of classes you already trained on and the number of classes you want to add.

+",28430,,,,,9/3/2019 14:37,,,,0,,,,CC BY-SA 4.0 +14273,1,14285,,9/3/2019 18:45,,1,327,"

I tried this example for a multi class classifier, but when looking at the data I realized two things:

+ +
    +
  1. There are many examples of ""all zeros"" vectors, that is, messages that don't belong in any classification.
  2. These all-zeros are actually the majority, by far.
+ +

Is it valid to have an all-zeros output for a certain input? I would guess a Sigmoid activation would have no problems with this, by simply not trying to force a one out of all the ""near zero"" outputs.

+ +

But I also think an ""accuracy"" metric will be skewed too optimistically: if all outputs are zero 90% of the time, the network will quickly overfit to always output 0, and get a 90% score.

+",17272,,,,,10/4/2019 11:01,"Class imbalance and ""all zeros"" one-hot encoding?",,1,2,,,,CC BY-SA 4.0 +14278,2,,14107,9/4/2019 1:25,,0,,"

I might consider 2 models (throw away col 1 and throw away col 4), and one more that keeps both, and see which generalises better to test set.

+",17793,,,,,9/4/2019 1:25,,,,0,,,,CC BY-SA 4.0 +14279,2,,14012,9/4/2019 1:41,,1,,"
+

One idea I had was to create an adversarial network which generates words

+
+ +

It may be better to use for e.g. numbers instead of words, then map these numbers to the language of your choice like English in your case. This mapping can be changed to other languages and you will have a robot that overcomes language barriers.

+",17793,,,,,9/4/2019 1:41,,,,0,,,,CC BY-SA 4.0 +14280,1,,,9/4/2019 1:58,,2,3147,"

I've been doing some class assignments recently on building various neural networks. For convolutional networks, there are several well-known architectures such as LeNet, VGG etc. Such ""classic"" models are frequently referenced as starting points when building new CNNs.

+ +

Are there similar examples for RNN/LSTM networks? All I've found so far are articles and slides explaining recurrent neurons, LSTM layers, and the math behind them, but no well-known examples of entire multi-layered network architectures, unlike CNNs which seem to have in abundance.

+",23844,,,,,9/5/2019 12:07,What are some examples of LSTM architectures?,,1,0,,,,CC BY-SA 4.0 +14282,2,,14262,9/4/2019 9:42,,2,,"

As you have trained your model with a batch size of 500, the weights have been updated once per batch, therefore 600 times (300000/500) by the end of one epoch.
+So your model generalized well. Check the predictions. If the predictions are good, your model is ready.

+",28455,,,,,9/4/2019 9:42,,,,0,,,,CC BY-SA 4.0 +14284,1,,,9/4/2019 10:52,,1,102,"

I am solving an RL MDP problem which is model-based. I have an MDP which has four possible states S1-S4 and four different actions A1-A4, with S4 being the terminal state and S1 the beginning state. There is an equal probability of applying any of the available actions on S1. The goal of my problem is to get from S1 to S4 with the maximum possible reward. I have two questions in this regard -

+ +
    +
  1. Will this be a valid MDP model if I have this rule for my model: if I perform action a1 on S1 and the next state is S2, then the set of available valid actions on S2 will be only a2, a3 and a4. If I apply any of the actions a2, a3 or a4 on S2 and it takes me to S3, then I am left with a set of two valid actions for S3 (excluding the one that was taken on S2). Can I do this? Because in my problem an action, once taken, does not need to be taken again later.

  2. I am confused about finding the optimal value and policy for my MDP. Both the optimal value and policy functions are not known to me. The objective is to find the optimal policy for my MDP which would get me from S1 to S4 with maximum reward. Any action taken on a particular state can lead to any other state (i.e. there is a uniform state transition probability of 25% in my case for all states except S4, since it's the terminal state). How can I approach this problem? After a lot of Google searching, I vaguely understood that I must start with choosing a random policy (equal probability of taking any valid action) -> find value functions for each state -> iteratively compute V until it converges, and then from these value functions compute the optimal policy. Those solutions mostly use the Bellman Equations. Can someone please elaborate on how I can do this? Or is there any other method to do it?

+ +

Thank you in advance

+",28314,,,,,9/4/2019 10:52,Finding optimal Value function and Policy for an MDP,,0,5,,,,CC BY-SA 4.0 +14285,2,,14273,9/4/2019 10:56,,0,,"

I understand your questions as follows:

+ +

Is it valid to have an all-zeros output for a certain input?

+ +

Yes, it's possible in some cases, like:

+ +
    +
  1. In the tutorial ""jigsaw-toxic-comment-classification-challenge"", the data is taken from Wikipedia comments, so there is generally no toxic behavior in people's comments; it is very rare that a person posts ""bad"" comments in an informative source like Wikipedia, and this might lead to all the labels being zero for a particular example.

  2. In a single-label classification task, like predicting whether a person has a rare disease, the dataset would contain labels that are mostly ""zero"" (no disease), i.e. skewed towards zero.

+ +

This happens in datasets where the positive output to be predicted is a very rare case.

+ +

I guess your second question is whether ""accuracy"" is a good way of evaluating a model for this type of problem.

+ +

You are right, in such a case even a simple program/model that outputs zeros for all the inputs would achieve an accuracy of more than 90%, so here accuracy is not a good metric to evaluate the model on.

+ +

You should go through the metrics f1_score, recall, and precision, which are ideal for this type of problem.

+ +

Basically, here we are interested in ""out of those which are predicted positive, how many are really positive"" (precision) and ""out of those which should have been predicted positive, how many are really predicted positive"" (recall).

+ +

If my definitions seem confusing, please go through the link below:

+ +

f1_score/recall/precision
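As a small illustration (assuming scikit-learn is installed), these metrics expose what accuracy hides on a skewed label distribution:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # 80% of the labels are negative
y_pred = [0] * 10                          # a model that always predicts zero

print(accuracy_score(y_true, y_pred))                     # 0.8, looks fine
print(precision_score(y_true, y_pred, zero_division=0))   # 0.0
print(recall_score(y_true, y_pred, zero_division=0))      # 0.0
print(f1_score(y_true, y_pred, zero_division=0))          # 0.0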

+ +

Hope I was helpful

+",26854,,,,,9/4/2019 10:56,,,,0,,,,CC BY-SA 4.0 +14286,2,,14280,9/4/2019 11:04,,2,,"

In the paper, LSTM: A Search Space Odyssey (2017), by Klaus Greff et al., eight LSTM variants on three representative tasks (speech recognition, handwriting recognition, and polyphonic music modeling) are compared.

+ +

The compared variants are

+ +
    +
  1. Vanilla LSTM features three gates (input, forget, output), block input, a single cell, an output activation function, and peephole connections (connections from the cell to the gates). The output of the block is recurrently connected back to the block input and all of the gates. The vanilla LSTM is trained using gradient descent and back-propagation through time (BPTT). The original LSTM (which is not the vanilla LSTM) does not contain, for example, the forget gate or the peephole connections (but the cell possesses a constant error carousel, a constant weight of $1$).

  2. LSTM trained based on the decoupled extended Kalman filtering (DEKF-LSTM), which enables the LSTM to be trained on some pathological cases at the cost of high computational complexity.

  3. Vanilla LSTM trained with an evolution-based method (called evolino), instead of BPTT.

  4. LSTM block architectures evolved with a multi-objective evolutionary algorithm, so as to maximize fitness on context-sensitive grammar.

  5. LSTM architectures for large scale acoustic modeling, which introduce a linear projection layer that projects the output of the LSTM layer down before recurrent and forward connections, in order to reduce the number of parameters for LSTM networks with many blocks.

  6. An LSTM architecture with a trainable scaling parameter for the slope of the gate activation functions, which improves the performance of LSTM on an offline handwriting recognition dataset.

  7. Dynamic Cortex Memory, an LSTM composed of recurrent connections between the gates of a single block, but not between different blocks, which improves the convergence speed of LSTM.

  8. Gated Recurrent Unit (GRU), which simplifies the architecture of the LSTM by combining the input and forget gate into an update gate (a small Keras illustration is given after this list).

+ +
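As a small illustration of that last point (a hypothetical Keras snippet, not taken from the paper), swapping an LSTM block for the simpler GRU is usually a one-line change:

from tensorflow import keras

def make_model(cell):
    # cell is either keras.layers.LSTM or keras.layers.GRU
    return keras.Sequential([
        keras.layers.Input(shape=(20, 16)),   # assumed (timesteps, features)
        cell(64),
        keras.layers.Dense(3, activation='softmax'),
    ])

lstm_model = make_model(keras.layers.LSTM)
gru_model = make_model(keras.layers.GRU)
print(lstm_model.count_params(), gru_model.count_params())   # the GRU has fewer parameters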

There are other related neural network architectures, such as the neural Turing machine (NTM) or differentiable neural computer (DNC). In general, there are several architectures that use LSTM blocks, even though they are not just recurrent neural networks. Other examples are the neural programmer-interpreter (NPI) or the meta-controller.

+",2444,,2444,,9/5/2019 12:07,9/5/2019 12:07,,,,0,,,,CC BY-SA 4.0 +14287,2,,7390,9/4/2019 12:23,,3,,"

According to Sutton and Barto, they are the same thing. Note 13.5-6 (page 338) of their Reinforcement Learning: An Introduction, 2nd Edition book:

+ +
+

Actor-critic methods are sometimes referred to as advantage actor-critic methods in the literature

+
+",28474,,2444,,5/14/2020 10:22,5/14/2020 10:22,,,,0,,,,CC BY-SA 4.0 +14289,1,,,9/4/2019 12:49,,2,36,"

I would like to perform Machine Translation in tensorflow.js, in the browser. The issue is that state-of-the-art models have many gigabytes (fairseq ensemble), and translation is slow.

+ +

Do you know some good small models for Machine Translation? Probably something below the 50 million parameter threshold.

+ +

I found some older papers with models below 100 MB for training on small datasets (IWSLT'15 English-Vietnamese attention-based models), but these are probably superseded.

+",28473,,,,,9/4/2019 12:49,Small Machine Translation Model,,0,1,,,,CC BY-SA 4.0 +14290,1,14291,,9/4/2019 22:51,,3,2562,"

When watching the machine learning course on Coursera by Andrew Ng, in the logistic regression week, the cost function was a bit more complex than the one for linear regression, but definitely not that hard.

+ +

But it got me thinking, why not use the same cost function for logistic regression?

+ +

So, the cost function would be $\frac{1}{2m} \sum_{i=1}^m|h(x_i) - y_i|^2$, where $h(x_i)$ is our hypothesis function ($\text{sigmoid}(X * \theta)$), $m$ is the number of training examples, and $x_i$ and $y_i$ are our $i$th training example?

+",,user17894,2444,,9/5/2019 9:57,1/6/2022 7:22,Why not use the MSE instead of the current logistic regression?,,3,0,,,,CC BY-SA 4.0 +14291,2,,14290,9/5/2019 0:34,,4,,"

The mean squared error (MSE), $J(\theta) = \frac{1}{2m}\sum_{i=1}^m(h_\theta(x_i)-y_i)^2$, is not as appropriate as a cost function for classification, given that the MSE makes assumptions about the data that are not appropriate for classification. Though, as an optimization objective, it is still possible to attempt to minimize MSE even in a classification problem, and thus still learn parameters $\theta$.

+ +

The new cost function has better convergence characteristics, as it is more in line with the objective.
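For reference, the cost function that is used instead (the cross-entropy, as presented in the same course material the question refers to) is

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^m\left[y_i\log h_\theta(x_i) + (1-y_i)\log\left(1-h_\theta(x_i)\right)\right].$$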

+ +

See link for the precise mathematical formulation that explains these loss functions from a probabilistic perspective.

+ +

Note that the absolute value is redundant because $\forall x:x^2\geq0$.

+ +

I hope this clarifies the matter.

+",28343,,2444,,9/5/2019 10:00,9/5/2019 10:00,,,,2,,,,CC BY-SA 4.0 +14292,1,14294,,9/5/2019 3:41,,0,635,"

I'm initialising a DNN of shape [2 inputs, 2 hidden units, 1 output] with these weights and biases:

+ +
#hidden layer
+weight1= tf.Variable(tf.random_uniform([2,2], -1, 1), 
+         name=""layer1"");
+bias1 = tf.Variable(tf.zeros([2]), name=""bias1"");
+
+#output layer
+weight2 = tf.Variable(tf.random_uniform([2,1], -1, 1), 
+          name=""layer2"");
+bias2 = tf.Variable(tf.zeros([1]), name=""bias2"");
+
+ +

That's what I followed from some online article; however, I wonder what happens if I initialise the bias values using tf.random_uniform instead of tf.zeros? Should I choose zero biases or random biases in general?

+",2844,,2444,,9/5/2019 10:03,9/5/2019 10:48,Should the biases be zero or randomly initialised?,,1,1,,,,CC BY-SA 4.0 +14293,1,,,9/5/2019 9:47,,1,130,"

My project requires gender recognition of people shown in the given images, with more than one person per image. However, these people can be positioned in frontal or side view (passing by perpendicularly, no face visible). The pictures will show entire bodies, not only the faces. My idea is to first use object detection to find where the people are, and then use CNNs to recognize the gender of each person.

+ +

My question is: should I use one object detection algorithm for both frontal and side views of a person and then classify them with one CNN, or should I use object detection to separately find people positioned in frontal and side manner and use two different CNNs, one for classification of frontal views and one for side views?

+ +

I am asking this because I think it might be easier for one NN to classify only one view at a time, because the side view might have different features than the frontal one, and mixing these features might be confusing for a network. However, I am not really sure. If something is unclear, please let me know.

+ +

[EDIT] Since the problem might be hard to understand only by reading, I made some illustrations. Basically, I wonder if using the second option can help in achieving better accuracy for the subtle differences like those in gender recognition, especially when the face is not visible:

+ +
    +
  1. Single detection and classification:

  2. Two different classifiers:

+ +

Img Source

+",22659,,22659,,9/10/2019 8:28,6/2/2021 3:03,Should I use single or double view for gender recognition?,,1,1,,,,CC BY-SA 4.0 +14294,2,,14292,9/5/2019 10:48,,2,,"

From the stanford CNN class (http://cs231n.github.io/neural-networks-2/):

+ +
+

Initializing the biases. It is possible and common to initialize the biases to be zero, since the asymmetry breaking is provided by the small random numbers in the weights. For ReLU non-linearities, some people like to use small constant value such as 0.01 for all biases because this ensures that all ReLU units fire in the beginning and therefore obtain and propagate some gradient. However, it is not clear if this provides a consistent improvement (in fact some results seem to indicate that this performs worse) and it is more common to simply use 0 bias initialization.

+
+",28448,,,,,9/5/2019 10:48,,,,0,,,,CC BY-SA 4.0 +14295,2,,14290,9/5/2019 10:56,,1,,"

I mean, you technically could (it's not going to break or anything); however, cross entropy is much better suited for classification, as it heavily penalizes misclassification errors: have a look at the function; when you are confidently wrong, the loss goes to infinity:
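(To make that concrete: the function in question is the binary cross-entropy, which for a single example with label $y$ and prediction $\hat{y}$ is $L(y,\hat{y}) = -\left[y\log\hat{y} + (1-y)\log(1-\hat{y})\right]$, so for $y=1$ the loss is $-\log\hat{y}$, which grows without bound as $\hat{y}\to 0$.)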

+ +

You are either from one class or another. MSE is designed for regression, where you have nuance: getting close to the target is sometimes good enough. You should try both and you will see that the performance will be much better with cross entropy.

+",28448,,,,,9/5/2019 10:56,,,,3,,,,CC BY-SA 4.0 +14296,1,14300,,9/5/2019 15:08,,4,516,"

Problem Statement : +I have a system with four states - S1 through S4, where S1 is the beginning state and S4 is the end/terminal state. The next state is always better than the previous state, i.e. if the agent is at S2, it is in a slightly more desirable state than S1, and so on, with S4 (the terminal state) being the most desirable. We have two different actions which can be performed on any of these states without restrictions. +Our goal is to make the agent reach state S4 from S1 in the most optimal way, i.e. the route with maximum reward (or minimum cost). The model I have is a pretty uncertain one, so I am guessing the agent must initially be given a lot of experience to make any sense of the environment. The MDP I have designed is shown below:

+ +

MDP Formulation : +

+ +

The MDP might look a bit messy and complicated, but it basically just shows that any action (A1 or A2) can be taken at any state (except the terminal state S4). The probability with which the transition takes place from one state to the other and the associated rewards are given below.

+ +

States : States S1 to S4. S4 is terminal state and S1 is the beginning state. S2 is a better state than S1 and S3 is a better state than S1 or S2 and S4 is the final state we expect the agent to end up in.

+ +

Actions : Available actions are A1 and A2 which can be taken at any state (except of course the terminal state S4).

+ +

State Transition Probability Matrix : One action taken at a particular state S can lead to any of the other available states. For ex. taking action A1 on S1 can lead the agent to S1 itself or S2 or S3 or even directly S4. Same goes for A2. So i have assumed an equal probability of 25% or 0.25 as the state transition probability. The state transition probability matrix is the same for actions A1 and A2. I have just mentioned it for one action but it is the same for the other action too. Below is the matrix I created - +

+ +

Reward Matrix : The reward function I have considered is a function of the action, current state and future state - R(A,S,S'). The desired route must go from S1 to S4. I have awarded positive rewards for actions that take the agent from S1 to S2 or S1 to S3 or S1 to S4, and similarly for states S2 and S3. A larger reward is given when the agent moves more than one step, i.e. S1 to S3 or S1 to S4. What is not desired is when the agent gets back to a previous state because of an action. So I have awarded negative rewards when the state goes back to a previous state. The reward matrix currently is the same for both actions (meaning both A1 and A2 have the same importance, but it can be altered if A1/A2 is preferred over the other). Following is the reward matrix I created (same matrix for both actions) -

+ +

+ +

Policy, Value Functions and moving forward : +Now that I have defined my states, actions, rewards and transition probabilities, the next step I guess I need to take is to find the optimal policy. I do not have an optimal value function or policy. From the googling I did, I am guessing I should start with a random policy, i.e. both actions have equal probability of being taken at any given state -> compute the value function for each state -> compute the value functions iteratively until they converge -> then find the optimal policy from the optimal value functions.

+ +

I am totally new to RL and all the above knowledge is from whatever I have gathered reading online. Can someone please validate my solution and MDP, and tell me if I am going the right way and whether the MDP I created will work? +Apologies for such a big write-up, but I just wanted to clearly depict my problem statement and solution. If the MDP is OK, then can someone also help me with how the value function can iteratively converge to an optimal value? I have seen a lot of examples which are deterministic, but none for stochastic/random processes like mine.

+ +

Any help/pointers on this would be greatly appreciated. +Thank you in advance

+",28314,,28314,,9/5/2019 15:18,9/5/2019 21:39,Can someone please help me validate my MDP?,,1,12,,,,CC BY-SA 4.0 +14297,2,,14243,9/5/2019 15:25,,3,,"

Do you have access to parallel corpora in the source and target languages that translate idioms correctly? Neural machine translation (NMT) should handle this. NMT uses deep learning to match sequences/pairs of words in one language to another and is now the state-of-the-art method for translation AI.

+ +

I don't think an AI knows the grammar of a language. A translating AI knows patterns, but not necessarily grammar in the sense that we learn it in school as children. Here's a potential approach that should work given a large enough corpus with examples of idioms: github.com/facebookresearch/MUSE

+",10287,,10287,,9/5/2019 15:32,9/5/2019 15:32,,,,0,,,,CC BY-SA 4.0 +14298,2,,14243,9/5/2019 15:51,,8,,"

Short answer: Yes.

+ +

TL;DR

+ +

In the presence of good datasets this can be accomplished with a pipeline.

+ +

Long Answer

+ +

In reality an idiom is a series of words which is supposed to have a semantic meaning that is not denoted by the literal reading (source). This means that any system that is used must be capable of considering multiple words at a time. Additionally, some idioms are context dependent. Example:

+ +
    +
  • The fisherman broke the ice with his tool.
+ +

Are we to believe that this is a very suave fisherman?

+ +
+

So, it is possible to teach an AI to use idiomatic phrases to keep up with the culture of humans?

+
+ +

Observe that humans do not come linguistically ""pre-loaded"" with idioms. So we can safely assume that idiom usage is a learning task and that the only way for them to keep up is for them to keep learning. So if we solve the idiom learning task we just need to keep our agent online or periodically retrain it on nascent corpora.

+ +

One difficulty is that, in the absence of a label, a metaphor could be easily mistaken for an idiom and vice versa. So semantic outlier (sorry it's not free) approaches may suffer from precision issues. Example:

+ +
    +
  • She's a thorny wildflower (metaphor - could easily be an idiom)
  • She's a diamond in the rough (idiom - could easily be a metaphor)
+ +

Though, idioms will most likely be repeated if a dataset is large whereas a ""custom metaphor"" is less likely to repeat.

+ +

Additionally, some idioms (eg bite the bullet or break a leg) do not have readily available ""interpretable semantics"" that allow us to extract their intended meaning. For example, if one did not know the idiom ""cut me some slack"" one could think:

+ +

""Slack implies loosening or to make less tight/taut. I was being very uptight. They probably want me to loosen up and not be so critical.""

+ +

Of course the human understanding of it might happen in a flash and not follow such a delineated path. The idea is that some NLP pipeline might be constructible that satisfactorily handles idioms in some specific use cases (example of a pipeline). For example, one module might attempt to process outliers like ""diamond in the rough"" which have said interpretable semantics. Though, something like ""bite the bullet"" may have to be labelled with the correct semantics.

+ +

I've only scratched the surface of this. Natural language understanding is already a hard problem - and idioms are thus a tough task in a tough task. I hope that this motivates the reading of some more thorough articles. I have gathered some articles that can be used as a springboard into the literature.

+ +

Here's a source that uses a dictionary type approach to train the model to recognize idioms. Excerpt:

+ +
+

For identification, we assume data of the form ${(⟨p_i,d_i⟩,y_i) : i = 1...n}$ where $p_i$ is the phrase associated with definition $d_i$ and $y_i ∈ \{literal, idiomatic\}$.

+
+ +

This source provides pseudo-code for idiom extraction.

+ +

This source describes a dataset to help solve the idiom difficulties.

+",28343,,28343,,9/6/2019 15:48,9/6/2019 15:48,,,,2,,,,CC BY-SA 4.0 +14299,1,,,9/5/2019 17:19,,2,107,"

After completing the Coursera course from Andrew Ng, I wanted to implement again a simple RNN for generating dinosaur names, based on a text file containing around 800 dinosaur names. +This is done with NumPy in Coursera; here is a link to a Jupyter notebook (not my repo) to get the strategy and full objective: +Here

+ +

I started similar implementation but in Pytorch, here is the model:

+ + + +
class RNN(nn.Module):
+    def __init__(self,input_size):
+        super(RNN, self).__init__()
+        print(""oo"")
+        self.hiddenWx1 = nn.Linear(input_size, 100) 
+        self.hiddenWx2 = nn.Linear(100, input_size)
+        self.z1 = nn.Linear(input_size,100)
+        self.z2 = nn.Linear(100,input_size)
+        self.tanh = nn.Tanh()
+        self.softmax = torch.nn.Softmax(dim=1)
+
+    def forward(self, input, hidden):
+        layer = self.hiddenWx1(input)
+        layer = self.hiddenWx2(layer)
+        a_next = self.tanh(layer)
+        z = self.z1(a_next)
+        z = self.z2(z)
+        y_next = self.softmax(z)
+        return y_next,a_next
+
+ +

Here is the main algorithm of training:

+ + + +
for word in examples:  # for every dinosaurus name
+                model.zero_grad()
+                hidden= torch.zeros(1, len(ix_to_char)) #initialise hidden to null, ix_to_char is below
+                word_vector = word_tensor(word) # convert each letter of  the current name in one-hot tensors
+                output = torch.zeros(1, len(ix_to_char)) #first input is null
+                loss = 0
+                counter = 0
+                true = torch.LongTensor(len(word)) #will contains the index of each letter.If word is ""badu"" => [2,1,4,22,0]
+
+                measured = torch.zeros(len(word)) # will contains the vectors returned by the model for each letter (softmax output) 
+
+
+                for t in range(len(word_vector)): # for each letter of current word
+                    true[counter] = char_to_ix[word[counter]] # char_to_ix return the index of letter in dictionary
+
+                    output, hidden = model(output, hidden)
+
+                    if (counter ==0):
+                        measured = output
+                    else: #measures is a tensor containing tensors of probability distribution
+                        measured = torch.cat((measured,output),dim=0)
+                    counter+=1
+
+                loss = nn.CrossEntropyLoss()(measured, true) #
+                loss.backward()
+                optimizer.step()
+
+ +

The letter dictionary (ix_to_char) is as follow:

+ +

{0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}

+ +

Every 2000 epochs, I sample some new words with this function, using torch.multinomial to select a letter based on the softmax probability returned by the model:

+ + + +
def sampling(model):
+    idx = -1 
+    counter = 0
+    newline_character = char_to_ix['\n']
+
+    x = torch.zeros(1,len(ix_to_char))
+    hidden = torch.zeros(1, len(ix_to_char))
+    generated_word=""""
+
+
+    while (idx != newline_character and counter != 35):
+        x,hidden = model(x, hidden)
+        #print(x)
+        counter+=1
+        idx = torch.multinomial(x,1)
+        #print(idx.item())
+        generated_word+=ix_to_char[idx.item()]
+    if counter ==35:
+        generated_word+='\n'
+    print(generated_word)
+
+ +

Here are the results of the first display:

+ +
epoch:1, loss:3.256033420562744
+aaasaaauasaaasasauaaaaapsaaaasaaaaa
+
+aaaaaaaaaaaaasaaaoaaaaaauaaaaaaaaaa
+
+taaaauasaasaaaaasaaasaauaaaaaaaausa
+
+uaasaaaaauaaaasasssaauaaaaasaaaaaaa
+
+auaaaaaaaassasaaauaaaaaaaaasasaaaas
+
+epoch:2, loss:3.199960231781006
+aaasaaassussssusssussssssssssusssss
+
+aasaaassssssssssssasusssissssssssss
+
+sasaaassssuosasssssssssssssssssssss
+
+aasassasassusssssssssussssssssssuss
+
+oasaasassssssussssssssussssssssssss
+
+epoch:3, loss:3.263746500015259
+aaaaaaasaaaasaaaaasaaaasaaaaaaaaaaa
+
+aaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaa
+
+aaaaaaaaaaaaaaaaaaaaaauaaaaaaaaaaas
+
+aaaaaaaasaaaasraaaaaaaaaaaaaaaaaaaa
+
+aaaaaaaaaaaaauusaaaaauaaaaaaaaaaaaa
+
+ +

It doesn't work, but I have no idea how to fix the issue. +With no training at all, the sampling function seems to work, as the returned words seem completely random:

+ +
hbtpsbykkxvlah
+
+ttiwlzxdxabzmbdvsapsnwwpaoiasotalft
+
+ +

My post may be a bit long, but so far I have no idea what the issue with my program is.

+ +

Many thanks for your help.

+",28507,,28507,,9/6/2019 21:19,2/3/2020 23:01,Issue at training simple RNN for word generation,,1,6,,,,CC BY-SA 4.0 +14300,2,,14296,9/5/2019 17:31,,2,,"

The good news is that:

+ +
    +
  • Your MDP appears valid, with well-defined states, actions. It has state transition and reward functions (which you have implemented as matrices). There is nothing else to add, it's a full MDP.

  • You could use this MDP to evaluate a policy, using a variety of reinforcement learning (RL) methods suitable for finite discrete MDPs. For instance, Dynamic Programming could be used, or Monte Carlo or SARSA.

  • You could use this MDP to find an optimal policy for the environment it represents, again using a variety of RL methods, such as Value Iteration, Monte Carlo Control, SARSA or Q-Learning.
+ +

The bad news is that:

+ +
    +
  • All policies in the MDP as defined are optimal, with expected returns (total reward summed until end of episode) of $v(S1) = 55, v(S2) = 33.75, v(S3) = 21.25$ - solved using Dynamic Programming in case you are wondering.

  • The MDP is degenerate because action choice has no impact on either state transition or reward. It is effectively a Markov Reward Process (MRP) because the agent policy has been made irrelevant.

  • Without discounting, the best result is not going from S1-S4 directly, as you appear to want, but repeatedly looping S1-S3-S2-S1-S3-S2... (this is currently hidden by action choice being irrelevant).

    + +
      +
    • There are a few ways to fix this, but maybe the simplest is to make the rewards more straightforward (e.g. +0, +10, +20, +30 for S1-S1, S1-S2..., -10, 0, +10, +20 for S2-S1, S2-S2...) and add a discount factor, often labelled $\gamma$, when calculating values. A discount factor makes immediate rewards have higher value to the agent, so it will prefer to get a larger reward all at once and end the episode than loop around before finishing.
+ +

This whole ""bad news"" section should not worry you too much though. Instead it points to a different issue. The key point is here:

+ +
+

The model I have is a pretty uncertain one, so I am guessing the agent must initially be given a lot of experience to make any sense of the environment.

+
+ +

It looks like you have assumed that you need to explicitly build a MDP model of your environment in order to progress with your problem. So you are providing an inaccurate model, and expect that RL works with that, improving it as part of searching for an optimal policy.

+ +

There are a few different approaches you could take in order to learn a model. In this case, as your number of states and actions is very low, you could do it like this:

+ +
    +
  • Create a 2D tensor (i.e. just a matrix) to count the number of times each state, action pair is visited, initialised with all zeroes, and indexed using S, A.

  • Create a 3D tensor to count the number of times each state transition is observed, again initialised with all zeroes, indexed using S, A, S'.

  • Run a large number of iterations with the real environment, choosing actions at random, and adding +1 to each visited S, A pair in the first tensor, and +1 to each S, A, S' triple in the second tensor.

  • You now have an approximate transition function based on real experience, without needing an initial guess or anything particularly clever; you are just taking averages in a table. Divide each count of S, A, S' by the total count of S, A to get the conditional transition probability $p(s'|s,a)$. It's not really an established, named RL method, but it will do (a minimal sketch is given after this list).
+ +
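A minimal sketch of that counting approach (NumPy only, with placeholder sizes and a dummy stand-in for the real environment; it is only meant to show the bookkeeping):

import numpy as np

n_states, n_actions = 4, 2
sa_counts = np.zeros((n_states, n_actions))              # visits of (S, A)
sas_counts = np.zeros((n_states, n_actions, n_states))   # observed (S, A, S') transitions

def env_step(s, a):
    # Dummy stand-in: replace with a call to your real system.
    return np.random.randint(n_states)

s = 0
for _ in range(10000):
    a = np.random.randint(n_actions)          # act at random while gathering data
    s_next = env_step(s, a)
    sa_counts[s, a] += 1
    sas_counts[s, a, s_next] += 1
    s = 0 if s_next == n_states - 1 else s_next   # restart at S1 after the terminal S4

# p(s'|s,a) estimated as relative frequencies (guarding against division by zero).
p = sas_counts / np.maximum(sa_counts[:, :, None], 1)
print(p[0, 0])   # estimated transition distribution from S1 under A1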

However, if your construction of the MDP is just step 1 for running some RL policy optimisation approach, none of that is really necessary. Instead, you can use a model-free approach such as tabular Q learning to learn directly online from interactions with the environment. This is likely to be more efficient than learning the model first or alongside the policy optimisation. You don't need the explicit MDP model at all, and adding one can make things more complex - in your case for no real gain.

+ +
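
For reference, here is a minimal sketch of such a tabular Q-learning loop; the step function is only a placeholder for however you interact with your real system (it should return the next state, the reward and an end-of-episode flag):

    import numpy as np

    n_states, n_actions = 4, 2
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    Q = np.zeros((n_states, n_actions))

    def step(state, action):
        # Placeholder: interact with the real environment and
        # return (next_state, reward, done)
        raise NotImplementedError

    for episode in range(1000):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(Q[s].argmax())
            s_next, r, done = step(s, a)
            target = r if done else r + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next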

You probably do still need to define a reward function in your case as there is no inherent reward in the system. You want the agent to reach state S4 as quickly as possible, so you need to monitor the states observed and add a reward signal that is appropriate for this goal. As above, I suggest you modify your planned reward structure to be simple/linear and add discounting to capture the requirement to ""increase"" state as fast as possible (here I am assuming that being in S2 is still somehow better than being in S1 - if that's not the case, and reaching S4 is the only real goal, then you could simplify further). That's because if you make the rewards for state progression non-linear - as in your example - the agent may find loops that exploit the shape of the reward function and not work to progress states towards S4 as you want.

+ +

Beyond this very simple looking environment, there are use cases for systems that learn transition models alongside optimal policies. Whether or not to use them will depend on other qualities of your environment, such as how cheap/fast it is to get real experience of the environment. Using a learned model can help by doing more optimisation with the same raw data, using it to simulate and plan in between taking real actions. However, if the real environment data is very easy to collect, then there may be no point to that.

+",1847,,1847,,9/5/2019 21:39,9/5/2019 21:39,,,,3,,,,CC BY-SA 4.0 +14301,1,,,9/5/2019 22:35,,1,17,"

Are there any grounds for assuming that an algorithm which produced a decently accurate model on one data-set will perform as well on a different data-set whose meta-features were chosen and evaluated by meta-learning? What meta-features are even worth considering when evaluating similarity between data-sets, with the goal of finding an optimal combination of algorithms to apply to this new data-set to create an accurate model?

+",25721,,,,,9/5/2019 22:35,Applying ML algorithms to data-sets with similar meta-features?,,0,0,,,,CC BY-SA 4.0 +14303,1,,,9/6/2019 5:37,,1,49,"

I am curious whether there is data available for MLP architectures in use today: their initial architecture, the steps that were taken to improve the architecture to an acceptable state, and what problem the neural network aimed to solve.

+ +

For example, what the initial architecture (number of hidden layers, number of neurons) was for an MLP in a CNN, the steps taken to optimize the architecture (adding more layers and reducing nodes, changing activation functions) and the results each step produced (i.e. increased error or decreased error). Also, what problem the CNN tried to solve (differentiation of human faces, object detection intended for self-driving cars, etc.)

+ +

Of course I used a CNN as an example, but I am referring to data for any MLP architecture, whether in plain MLPs or in Deep Learning architectures such as RNNs, CNNs and more. I am focused on the MLP architecture mostly.

+ +

If there is not, how do you think one could accumulate this data?

+",14621,,14621,,9/6/2019 7:51,9/6/2019 7:51,Is there data available about successful neural network architectures?,,0,2,0,,,CC BY-SA 4.0 +14306,2,,14047,9/6/2019 9:43,,4,,"

What I understand from your questions is that you are trying to avoid catastrophic forgetting while applying online learning.

+ +

This problem should be addressed by implementing methods that reduce catastrophic forgetting across different tasks. At first glance it might seem that they don't apply, because it is the data that change and not a particular task, but changing data result in a change of the task. Say your goal is to classify different breeds of dogs. Your online data-set morphs into excluding ""Great Danes"". Your neural network, after enough epochs, would forget about ""Great Danes"". The task is still serving its purpose by classifying different breeds, but the task still changed: from recognizing ""Great Danes"" as a dog breed to not recognizing ""Great Danes"" as a dog breed. The weights changed to exclude them, but the methods I linked try to keep weights intact, even though they were not intended for the purpose of online learning. Just set the hyperparameters that control the strength of these techniques to a low value, as I believe the data won't change instantly but rather over time, and you should be fine.

+ +

The most obvious technique is storing information as you train and rehearsing it. This is called pseudo-rehearsal. With this you would at least be able to keep using stochastic gradient descent, but you need memory and resources as the data set grows (a small sketch is given below).

+ +
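
As a minimal sketch of that rehearsal idea, assuming a Keras-style model and a plain Python list as the memory (all names here are hypothetical):

    import random
    import numpy as np

    memory_x, memory_y = [], []   # stored past examples; grows with the stream

    def online_update(model, x_new, y_new, rehearsal_size=32):
        # Mix each new example with a random sample of stored old examples,
        # so every gradient step also "rehearses" previously learned data.
        memory_x.append(x_new)
        memory_y.append(y_new)
        idx = random.sample(range(len(memory_x)), min(rehearsal_size, len(memory_x)))
        batch_x = np.stack([memory_x[i] for i in idx])
        batch_y = np.stack([memory_y[i] for i in idx])
        model.train_on_batch(batch_x, batch_y)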

Then there was an attempt to reduce the impact of weight changes on old tasks, so that some relevance to them is kept: Structural Regularization.

+ +

Later these guys implemented HAT which seems to keep some weights static while others adapt to new tasks.

+",14621,,,,,9/6/2019 9:43,,,,0,,,,CC BY-SA 4.0 +14309,1,14328,,9/6/2019 19:17,,3,93,"

Imagine I have a tensorflow CNN model with good accuracy but maybe too many filters:

+ +
    +
  • Is there a way to determine which filters have more impact on the output? I think it should be possible. At least, if a filter A has a 0 that only multiplies the output of a filter B, then filter B is not related to filter A. In particular, I'm thinking of 2D data where one dimension is time-related and the other feature-related (like a one-hot char).

  • +
  • Is there a way to eliminate the less relevant filters from a trained model, and leave the rest of the model intact?

  • +
  • Is it useful or there are better methods?

  • +
+",28526,,2444,,9/8/2019 13:26,9/8/2019 13:26,Is it useful to eliminate the less relevant filters from a trained CNN?,,1,0,,,,CC BY-SA 4.0 +14310,1,14313,,9/6/2019 20:23,,3,385,"

Is there any computer vision technology that can detect any type of object? For example, a camera is fixed, looking in one direction, always at a similar background. If there is an object, no matter what the object is (person, bag, car, bike, cup, cat), the CV algorithm would notice that there is an object in the frame. It wouldn't know what type of object it is, just that there is an object in the frame.

+ +

Something similar to a motion detector, but that would work on a flat conveyor belt. Even though the conveyor belt moves, it will look similar between frames. Would something like this be possible? Possibly something to do with extracting differences from the background, with the goal being to not have to train the network with data for every possible object that may pass by the camera.

+",28528,,2444,,9/6/2019 23:24,9/7/2019 17:44,Is there any computer vision technology that can detect any type of object?,,1,1,,,,CC BY-SA 4.0 +14311,2,,14299,9/6/2019 21:37,,1,,"

Your forward function is not using the previous hidden state.

+ +

observe: you pass hidden but never use it. +

+ +
def forward(self, input, hidden):
+    layer = self.hiddenWx1(input)
+    layer = self.hiddenWx2(layer)
+    a_next = self.tanh(layer)
+    z = self.z1(a_next)
+    z = self.z2(z)
+    y_next = self.softmax(z)
+    return y_next,a_next
+
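
One possible fix, sketched below, is to combine the previous hidden state with the current input before the non-linearity. This assumes you add an extra linear layer for the hidden state (here called self.hiddenWh, a hypothetical name) whose output size matches that of self.hiddenWx1:

    def forward(self, input, hidden):
        # combine the current input with the previous hidden state
        layer = self.hiddenWx1(input) + self.hiddenWh(hidden)
        layer = self.hiddenWx2(layer)
        a_next = self.tanh(layer)
        z = self.z1(a_next)
        z = self.z2(z)
        y_next = self.softmax(z)
        return y_next, a_next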
+",28343,,,,,9/6/2019 21:37,,,,2,,,,CC BY-SA 4.0 +14313,2,,14310,9/7/2019 3:51,,3,,"

TL;DR

+ +

This is possible. You need a correctly labeled dataset. Your dataset has two labels:

+ +

$y\in \{\text{background},\text{object in frame}\}$ or simply $y\in \{0,1\}$

+ +

This labelling avoids needing to know what object is in frame only that there is an object in frame.

+ +

Examples With Similar Objectives

+ +

Here is a paper (link) that was seeking to classify animals in a frame. The first part of their pipeline needed to know whether there was an animal in frame or not. Excerpt:

+ +
+

Task I: Detecting Images That Contain Animals. For this task, our models take an image as input and output two probabilities describing whether the image has an animal or not (i.e. binary classification)

+
+ +

Here is another paper (link) regarding detecting animals in images. Excerpt:

+ +
+

Wildlife detection, which is actually a binary classifier capable of clasifying input images into two classes: “animal” or “no animal”

+
+ +

The methods used in these papers could be adapted to fit the conveyor belt use case.

+ +

Here is a system (link) that detects moving objects using background extraction. The method they use to check whether or not the image is a background image could be adapted to your use case.

+ +

Ideas for Implementation

+ +

In the conveyor belt use case, a very simple model might work. Thus, it is recommended that you try a simple model first. For example, you could flatten your inputs to a vector and feed these into a shallow feed forward network. A simple logistic regression classifier might even work on flattened inputs for this use case.

+ +
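
As a rough sketch of that 'simple model first' idea in Keras - the 64x64 grayscale input size and the dataset variable names are placeholders to adapt to your camera frames:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(64, 64)),   # flatten each frame to a vector
        tf.keras.layers.Dense(32, activation='relu'),    # small hidden layer
        tf.keras.layers.Dense(1, activation='sigmoid')   # P(object in frame)
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    # model.fit(train_frames, train_labels, validation_split=0.2, epochs=10)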

If your desired performance is not reached with a simple model, you can analyze its performance using methods like learning curves to decide how to proceed.

+ +

If you find that your model has high bias:

+ +

You can either use a larger/deeper model like a custom tuned CNN or you can use a pre-trained network like VGG16 and use transfer learning to achieve your goal.

+ +

Here is a post that describes using a CNN to classify cats vs dogs (link). Simply replace the cat/dog dataset with your object/no-object dataset. The post even describes using transfer learning with VGG16.

+ +

If you find that your model has high variance

+ +

More data can help. Furthermore, in the conveyor belt example, hand labeling data would be easy and quick. Thus, it would not take very much time to get many more examples.

+ +

I hope this helps.

+",28343,,28343,,9/7/2019 17:44,9/7/2019 17:44,,,,0,,,,CC BY-SA 4.0 +14314,2,,4737,9/7/2019 9:15,,2,,"

It is an unreasonable assumption for many scenarios to know the full transition model of an environment.

+ +

In the deterministic toy scenarios used for explaining RL, such as a grid world, it seems like a trivial requirement. However, it starts to get harder very quickly as problems and state features get more complex.

+ +

Even in relatively simple environments that are possible to model in this way, the calculations start to become difficult enough that there is an argument to not bother. For instance, a common toy example is the game of Blackjack - in order to calculate the probability of different rewards after sticking, you would need to resolve the remaining game tree of typically 0, 1 or 2 further cards (but possibly more) for the dealer, accounting for their starting card, and perhaps accounting for which cards have been drawn so far (if you are simulating this more realistically, without replacement).

+ +

However, that could be resolved if we wanted. To make things harder, you need to consider agents that are set to solve problems where we don't have any meaningful probabilistic model of how the state will progress.

+ +

A good example would be any real world agent that used a vision system as part of its state. Consider a flying drone set with the task of locating and tracking migrating whales over some area of the ocean. It is rewarded for capturing footage of whales. Its state is the current image plus GPS and telemetrics data.

+ +

Knowing $P(s' \mid s,a)$ in the whale-tracking example implies that you would know the probability distribution of the next video frame, somehow accounting for the chaotic movement of waves, the likely behaviour of any creatures in shot, and effects of wind turbulence on drone position etc. Even the most sophisticated physics simulations with ray-tracing rendering engines cannot rise to that challenge with much accuracy, and the computation required even for a rough render of expected image(s) based on a physics engine would be far beyond anything you could run in real time on the drone. However, it is still possible to build an agent which learns how to solve this kind of environment through gathering statistics on its task using images as state data.

+ +

Scaling that challenge back a bit, you can view many of the Atari game-playing challenges like this. The input state is a collection of still frames from the game. Knowing $P(s' \mid s,a)$ implies knowing the probability distribution of the next set of frame images, given the current inputs. This is a near impossible challenge; even if you know the code for the game, providing an analytical result for $P(s' \mid s,a)$ is unrealistic - the best way to do it would be to have a memory clone of the game that you could use to look ahead then reset. Whilst this is theoretically possible in a computer game, it is not something you can consider when interacting with the real world.

+",1847,,1847,,9/7/2019 9:24,9/7/2019 9:24,,,,0,,,,CC BY-SA 4.0 +14317,5,,,9/7/2019 12:21,,0,,,2444,,2444,,9/7/2019 12:21,9/7/2019 12:21,,,,0,,,,CC BY-SA 4.0 +14318,4,,,9/7/2019 12:21,,0,,"For questions related to deepfakes, which refers to machine learning techniques used to combine and superimpose existing images and videos onto source images or videos. Deepfakes have been used to create fake celebrity pornographic videos, revenge porn, fake news, and malicious hoaxes.",2444,,2444,,9/7/2019 12:21,9/7/2019 12:21,,,,0,,,,CC BY-SA 4.0 +14319,2,,5939,9/7/2019 13:12,,3,,"

The Digital Media Forensics (DMF) field aims to develop technologies for the automated assessment of the integrity of an image or video, so DMF is the field you are looking for. There are several approaches in DMF: for example, those based on machine learning (ML) techniques, in particular, convolutional neural networks (CNNs).

+ +

For example, in the paper Deepfake Video Detection Using Recurrent Neural Networks (2018), David Güera and Edward J. Delp propose a two-stage analysis composed of a CNN to extract features at the frame level followed by a temporally-aware RNN to capture temporal inconsistencies between frames introduced by the deepfake tool. More specifically, they use a convolutional LSTM architecture (CNN combined with an LSTM), which is trained end-to-end, so that the CNN learns the features in the videos, which are passed to the RNN, which attempts to predict the likelihood of those features belonging to a fake video or not. Section 3 explains the creation of deepfake videos, which leads to inconsistencies between video frames (which are exploited in the proposed method) because of the use of images with different viewing and illumination conditions.

+ +

Other similar works have been proposed. See this curated list https://github.com/aerophile/awesome-deepfakes for more related papers.

+",2444,,2444,,9/7/2019 13:27,9/7/2019 13:27,,,,0,,,,CC BY-SA 4.0 +14320,1,,,9/7/2019 13:12,,1,82,"

I'm currently a student learning about AI networks. I've come across a statement in one of my Professor's books that a FFBP (Feed-Forward Back-Propagation) Neural Network with a single hidden layer can model any mathematical function, with accuracy dependent on the number of hidden-layer neurons. Try as I might, I cannot find any explanation as to why that occurs - could someone maybe explain why that is?

+",28537,,,,,8/14/2021 17:02,How does a single neuron in hidden layer affect training accuracy,,1,1,,10/11/2021 10:29,,CC BY-SA 4.0 +14321,1,14323,,9/7/2019 14:18,,6,1752,"

In the original DQN paper, page 1, the loss function of the DQN is

+ +

$$ +L_{i}(\theta_{i}) = \mathbb{E}_{(s,a,r,s') \sim U(D)} [(r+\gamma \max_{a'} Q(s',a',\theta_{i}^{-}) - Q(s,a;\theta_{i}))^2] +$$

+ +

whose gradient is presented (on page 7)

+ +

$$\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s,a,r,s'} [(r+\gamma \max_{a'}Q(s',a';\theta_i^-) - Q(s,a;\theta_i))\nabla_{\theta_i}Q(s,a;\theta_i)] $$

+ +

But why is there no minus (-) sign, given that $-Q(s,a;\theta_{i})$ is parameterized by $\theta_{i}$, and why is the 2 from the power gone?

+",28538,,2444,,1/18/2021 0:59,1/18/2021 0:59,How is the gradient of the loss function in DQN derived?,,1,1,,,,CC BY-SA 4.0 +14323,2,,14321,9/7/2019 17:24,,2,,"

In general, if you have a composite function $h(x) = g(f(x))$, then $\frac{dh}{dx} = \frac{d g}{df} \frac{d f}{dx}$. In your case, the function to differentiate is

+

$$L_{i}(\theta_{i}) = \mathbb{E}_{(s,a,r,s') \sim U(D)} \left[ \left(r+\gamma \max_{a '} Q(s',a',\theta_{i}^{-}) - Q(s,a;\theta_{i}) \right)^2 \right]$$

+

So, we want to calculate $\nabla_{\theta_i} L_{i}(\theta_{i})$, which is equal to

+

$$\nabla_{\theta_i} \mathbb{E}_{(s,a,r,s') \sim U(D)} \left[ \left(r+\gamma \max_{a '} Q(s',a',\theta_{i}^{-}) - Q(s,a;\theta_{i}) \right)^2 \right] \label{2} \tag{2} $$

+

For clarity, let's ignore the iteration number $i$ in \ref{2}, which can thus more simply be written as

+

$$\nabla_{\theta} \mathbb{E}_{(s,a,r,s') \sim U(D)} \left[ \left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta ) \right)^2 \right] \label{3} \tag{3} $$

+

The subscript of the expected value operator $\mathbf{e}=(s,a,r,s') \sim U(D)$ means that the expected value is being taken with respect to the multivariate random variable, $\mathbf{E}$ (for experience), whose values (or realizations) are $\mathbf{e}=(s,a,r,s')$, and that follows the distribution $U(D)$ (a uniform distribution), that is, $\mathbf{e}=(s,a,r,s')$ are uniformly drawn from the experience replay buffer, $D$ (or $\mathbf{E} \sim U(D)$). However, let's ignore or omit this subscript for now (because there are no other random variables in \ref{3}, given that $\gamma$ and $a'$ should not be random variables, thus it should not be ambiguous with respect to which random variable the expectation is being calculated), so \ref{3} can be written as

+

$$\nabla_{\theta} \mathbb{E} \left[ \left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta ) \right)^2 \right] \label{4} \tag{4} $$

+

Now, recall that, in the case of a discrete random variable, the expected value is a weighted sum. In the case of a continuous random variable, it is an integral. So, if $\mathbf{E}$ is a continuous random variable, then the expectation \ref{4} can be expanded to

+

$$ +\int_{\mathbb{D}} {\left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta )\right)}^2 f(\mathbf{e}) d\mathbf{e} +$$

+

where $f$ is the density function associated with $\mathbf{E}$ and $\mathbb{D}$ is the domain of the random variable $\mathbf{E}$.

+

The derivative of an integral can be calculated with the Leibniz integral rule. In the case the bounds of the integration are constants ($a$ and $b$), then the Leibniz integral rule reduces to

+

$${\displaystyle {\frac {d}{dx}}\left(\int _{a}^{b}f(x,t)\,dt\right)=\int _{a}^{b}{\frac {\partial }{\partial x}}f(x,t)\,dt.}$$

+

Observe that the derivative is taken with respect to the variable $x$, while the integration is taken with respect to variable $t$.

+

In our case, the domain of integration, $\mathbb{D}$, is constant because it only represents all experience in the dataset, $\mathcal{D}$. Therefore, the gradient in \ref{4} can be written as

+

\begin{align} +\nabla_{\theta} \int_{\mathbb{D}} \left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta )\right)^2 f(\mathbf{e}) d\mathbf{e} +&= \\ +\int_{\mathbb{D}} \nabla_{\theta} \left( \left( r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta )\right)^2 f(\mathbf{e}) \right) d\mathbf{e} +&=\\ +\mathbb{E} \left[ \nabla_{\theta} { \left(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta) \right)}^2 \right] \label{5}\tag{5} +\end{align}

+

Recall that the derivative of $f(x)=x^2$ is $f'(x)=2x$ and that the derivative of a constant is zero. Note now that the only term in \ref{5} that contains $\theta$ is $Q(s,a; \theta)$, so all other terms are constant with respect to $\theta$. Hence, \ref{5} can be written as

+

\begin{align} +\mathbb{E} \left[ \nabla_{\theta} {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))}^2 \right] +&=\\ +\mathbb{E} \left[ 2 {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))} \nabla_{\theta} \left( {r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta)} \right) \right] +&=\\ +\mathbb{E} \left[ 2 {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))} \left( {\nabla_{\theta} r + \nabla_{\theta} \gamma \max_{a'} Q(s',a',\theta^{-}) - \nabla_{\theta} Q(s,a; \theta)} \right) \right] +&=\\ +\mathbb{E} \left[ - 2 {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))} {\nabla_{\theta} Q(s,a; \theta)} \right] +&=\\ +-2 \mathbb{E} \left[ {(r + \gamma \max_{a'} Q(s',a',\theta^{-}) - Q(s,a; \theta))}{\nabla_{\theta} Q(s,a; \theta)} \right] +\end{align}

+

The $-2$ disappears because it can be absorbed by the learning rate.

+",2444,,2444,,1/18/2021 0:59,1/18/2021 0:59,,,,0,,,,CC BY-SA 4.0 +14324,1,,,9/7/2019 18:23,,3,96,"

In lots of games there are multiple phases or decision points that are not similar yet seem to have a dependency on one another when taking the perspective of the overall strategy of the player. A couple examples I thought up:

+ +
    +
  1. In a simple draw poker, you can have a strategy for discarding cards and a strategy for betting. They may not be mutually exclusive if you know your opponents betting will change with the number of cards you draw.

  2. +
  3. In Cribbage there are two phases, Discard to crib and the Play. The Play phase is definitely dependent on which cards are discarded in the discard phase. So it seems knowledge of Play strategy would be needed to make the Discard decision.

  4. +
+ +

The intent is to learn how to set up an unsupervised learning algorithm to play a game with multiple types of decision making. Doesn't matter the game. I'm at a loss at the highest level in what ML models to learn to use for this scenario. I don't think a single NN would work because of the different decision types.

+ +

My question is how are these dependencies handled in ML? What are some known algorithms/models that can handle this?

+ +

I'm at a loss on what to even search for so feel free to dump some terminology and keywords on me. =)

+",26540,,26540,,9/7/2019 20:32,9/8/2019 10:27,How to handle multiple types of decisions?,,1,4,,,,CC BY-SA 4.0 +14325,1,,,9/7/2019 22:42,,1,133,"

I experimented with a CNN operating on texts encoded as sequences of character vectors, where characters are encoded as one-hot vectors in one embedding and as random unit length pairwise orthogonal vectors (orthogonal matrix) in another. While geometrically these encode the same vector space, the one-hot embedding outperformed the random orthogonal one consistently. I suppose this has to do with the clarity of the signal: A zero vector with a single 1-valued cell is an easier to learn signal than just some vector with lots of different values in each cell.

+ +

I wondered if you know of any papers on this kind of effect. I did not find any but would like to back up this finding and check if my reasoning for why this is the case makes sense/ find a better or more in-depth explanation.

+",20150,,20150,,9/8/2019 17:31,9/8/2019 17:31,Reference request: one-hot encoding outperforming random orthogonal encoding,,0,6,,,,CC BY-SA 4.0 +14326,1,14327,,9/8/2019 3:20,,2,104,"

I've found multiple depictions of how an LSTM cell operates. See 2 below:

+ +

+ +

and

+ +

+ +

Each of these images suggests the hidden state is utilised differently. The top diagram shows that the hidden state is added, along with the previous output and current input, to both the forget gate and the input gate. The bottom image suggests the input and forget gates are calculated only using the previous output and current input. Which is it?

+ +

Also, when the previous output is fed in for the current layer, is this before or after it has been reshaped to the final output size and been put through a softmax?

+",26726,,,,,9/8/2019 5:05,Structure discrepancy of an LSTM?,,1,0,,,,CC BY-SA 4.0 +14327,2,,14326,9/8/2019 5:05,,1,,"
    +
  1. There are different variants of LSTM; in most ML packages nowadays you'll probably see what's shown in the bottom picture. For more details, intuition and motivations please see this paper.

  2. +
  3. It is not reshaped and there is no softmax layer. That is all done outside the LSTM.

  4. +
+",25496,,,,,9/8/2019 5:05,,,,0,,,,CC BY-SA 4.0 +14328,2,,14309,9/8/2019 5:46,,4,,"

NOTE: All the observations and results are from the paper The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.

+ +

To answer your questions one by one:

+ +
    +
  • Yes, there are ways to determine which filters have more impact on the output. It's a very naive way, but it works very well in practice. Filters with small weights impact the output less (according to empirical evidence), which basically means neurons whose weights lie in the switching region, i.e. ~$0$ in ReLU and ~$-1$ to $1$ (say), have less impact on the final output.
  • +
  • Yes, just eliminating these low-weight filters removes the unnecessary noise and indecisiveness they introduce, and surprisingly makes the model perform better (observed empirically). A rough sketch of this magnitude-based pruning idea is given below.
  • +
  • The concept is a relatively old paradigm, but it has been given a new twist by the simplicity of the method for eliminating unnecessary weights in the aforementioned paper, which won the best paper award at ICLR 2019.
  • +
+ +

TL;DR: Eliminating unnecessary weights makes the model perform better than the original model.
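
As a rough illustration of the magnitude-based idea above (plain numpy, and not the paper's exact iterative pruning procedure), one could simply zero out the smallest-magnitude weights of a layer:

    import numpy as np

    def prune_smallest(weights, fraction=0.2):
        # Zero out the given fraction of weights with the smallest absolute value
        flat = np.abs(weights).flatten()
        threshold = np.sort(flat)[int(fraction * flat.size)]
        mask = (np.abs(weights) >= threshold).astype(weights.dtype)
        return weights * mask, mask   # keep the mask to hold pruned weights at zero

    # pruned_w, mask = prune_smallest(layer_weights, fraction=0.2)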

+ +

Also here is the TensorFlow code.

+",,user9947,,,,9/8/2019 5:46,,,,0,,,,CC BY-SA 4.0 +14329,1,,,9/8/2019 5:59,,2,178,"

I am currently trying to improve a CNN architecture that was proposed for generating depth images. The architecture was originally proposed for autonomous driving and it looks like the following:

+ +

+ +

The idea behind this architecture is to improve the accuracy of depth images by adding confidence maps to the outputs of different sensors to govern their influence on the result for each pixel. Input is an RGB image and its corresponding LIDAR data, output is a depth image with the same dimensions.

+ +

For example, RGB features are best at discriminating global information such as sky, land, body of water, etc., while LIDAR features are better at capturing fine-grained, local information. So, to decide which features will have more influence on the final regression result of each pixel, the authors proposed a confidence-guided architecture where confidence weights for each map are learned during training.

+ +

Judging by their test results and how successful their paper was, their idea worked out pretty well for their problem domain. That's why I would like to employ the idea of confidence weights in my own domain, where I have multiple sources of features too. I have implemented the same architecture and got promising results: accuracy has been improved, but not enough to compete with SOTA.

+ +

However, I believe that the architecture above can be improved for my needs. The diversity of the scene structure in my domain adds some amount of complexity to the image generation problem: as opposed to depth image generation for autonomous driving, where the scene structure is somewhat restricted (there is sky, there is road, there are sidewalks, etc.), the images that I need to analyze are sometimes taken from handheld cameras, while some other images are aerial views.

+ +

Here is a typical example of their scenario and some examples related to my domain, respectively.

+ +

+ +

+ +

This means my CNN needs to learn confidence weights for regions in fundamentally different scene setups. Top of the image can be sky, but can also be populated by people. Bottom of the image can be sea, or can be road. This brings me to the understanding that there is a non-linear relation between the position of the segments and confidence weights in my case; and I need to modify the CNN architecture by introducing some additional non-linearity to learn the confidences for different CNN columns in each situation correctly.

+ +

TLDR; I want to improve the CNN architecture above by introducing additional non-linearity, but I do not know how to do it. I have tried adding another layer to the confidence weights (extended the architecture by duplicating the same weights and activating with ReLU), but it decreased the accuracy of the resulting model. Using the confidence weights as-is increases the accuracy, but not as much as I need.

+",15526,,15526,,9/9/2019 3:48,9/9/2019 3:48,Confidence Maps and Non-Linearity,,0,0,,,,CC BY-SA 4.0 +14331,2,,14324,9/8/2019 10:21,,1,,"

Your intuition is correct, neural networks are a no go (except see bottom).

+ +

It seems like you'd want to look into the ML sub-field called Reinforcement Learning. +In a nutshell, RL offers a set of methods to learn what is the best action to take in a given situation.

+ +

More formally, in RL settings the algorithm learns from experience by observing a reward ($R$) associated to an action ($a$).

+ +

RL problems can be conceptualized as Markov Decision Processes (MDPs). From Sutton & Barto:

+ +
+

MDPs are a classical formalization of sequential decision making, where actions influence not just immediate rewards, but also subsequent situations, or states, and through those future rewards.

+
+ +

MDP can be faced with several methods, mainly Monte Carlo Methods and Temporal-Difference Learning.

+ +

In short, both use the Bellman equation in different ways. A key component of the eq. is the discounting term ( $\gamma$ ). Values for rewards associated with future states are discounted (multiplied) by $\gamma<1$ in order to modulate the importance of future rewards.

+ +

Since you're concerned with strategy, another way to influence the agent's learning wrt. the environmental rewards is the adoption of an epsilon value $\epsilon<1$. When defining an $\epsilon$-greedy policy, the agent will choose the action with highest value $Q$ only with a probability equal to $1-\epsilon$. This enables the agent to balance exploitation of previous experience with the exploration of the environment. Very useful in cases where an immediate low reward compromises the achievement of a much higher reward later during the episode.

+ +
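
For concreteness, a tiny sketch of these two knobs ($\gamma$ and $\epsilon$) over a table of action values Q (plain numpy; the names are illustrative only):

    import numpy as np

    def epsilon_greedy(Q, s, epsilon=0.1):
        # explore with probability epsilon, otherwise exploit the best known action
        if np.random.rand() < epsilon:
            return np.random.randint(Q.shape[1])
        return int(np.argmax(Q[s]))

    def td_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
        # one Q-learning update; gamma < 1 discounts the value of future rewards
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])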

There are many other ways to influence the behaviour of your learner of choice to respond differently to its environment. For a complete view, I recommend Sutton and Barto, who offer their book in pdf format for free at the given link.

+ +

Edit: almost forgot. Neural networks can be useful when combined with more classical RL methods, say Q-learning (a type of Temporal-Difference learning), taking on the name of deep Q-learning. AlphaGo also uses a combination of Monte Carlo methods with 2 NNs for value evaluation and policy evaluation Wikipedia. I wouldn't necessarily venture into this domain without first having a clear overview of classical RL methods.

+ +

Hope this helps.

+",25893,,25893,,9/8/2019 10:27,9/8/2019 10:27,,,,2,,,,CC BY-SA 4.0 +14332,1,,,9/8/2019 10:51,,4,3807,"

When trying to implement my own PPO (Proximal Policy Optimizer), I came across two different implementations :

+ +

Exploration with std

+ +
    +
  1. Collect trajectories on $N$ timesteps, by using a policy-centered distribution with progressively trained std variable for exploration
  2. +
  3. Train policy function on $K$ steps
  4. +
  5. Train value function on $K$ steps
  6. +
+ +

For example, the OpenAI's implementation.

+ +

Exploration with entropy

+ +
    +
  1. Collect trajectories on $N$ timesteps, by using policy function directly
  2. +
  3. Train policy and value function at the same time on $K$ steps, with a common loss for the two models, with additional entropy bonus for exploration purpose.
  4. +
+ +

For example, the PPO algorithm as described in the official paper.

+ +

What are the pros/cons of these two algorithms?

+ +

Is this specific to PPO, or is this a classic question concerning policy gradients algorithms, in general?

+",23818,,2444,,4/30/2020 17:57,9/27/2020 19:06,What are the pros and cons of using standard deviation or entropy for exploration in PPO?,,1,0,,,,CC BY-SA 4.0 +14333,1,14334,,9/8/2019 18:00,,5,635,"

I'm looking for more or less successful artificial intelligence usage examples to build an ontology or rationale why it can't be done. I found a lot of articles on how to use ontologies for AI, but not succeded vice versa.

+",28546,,2444,,9/12/2019 12:04,9/12/2019 12:04,Examples of ontologies made with AI,,1,0,,,,CC BY-SA 4.0 +14334,2,,14333,9/8/2019 22:43,,7,,"

Ontology learning is a relatively new field that aims to automatically (or semi-automatically) learn or create ontologies (using machine learning, text mining, knowledge representation and reasoning, information retrieval and natural language processing techniques) from some text or corpus.

+ +

Ontology learning can be divided into different phases or tasks

+ +
    +
  1. the acquisition of terms that refer to specific concepts (named-entity +recognition)

  2. +
  3. the recognition of synonyms among these terms

  4. +
  5. the identification of taxonomic relations (such as the ""is-a"" relation)

  6. +
  7. the establishment of non-hierarchical relations

  8. +
  9. the derivation of new knowledge, i.e. knowledge that is not explicitly +encoded by the ontology.

  10. +
+ +

See also Ontology Learning from Text: An Overview (2003) and A survey of ontology learning techniques and applications (2018) for more details.

+ +

In the paper Ontology Learning with Deep Learning: a case study on Patient Safety using PubMed (2016), the authors investigate how continuous bag-of-words (CBOW) and skip-gram (two language models based on artificial neural networks) can be used to aid ontology development for patient safety, using PubMed citations as a corpus.

+ +

Latent Dirichlet allocation (LDA) has also been used for ontology learning, for example, in the paper Terminological ontology learning and population using latent Dirichlet allocation. (2014).

+",2444,,2444,,9/12/2019 12:03,9/12/2019 12:03,,,,0,,,,CC BY-SA 4.0 +14335,1,14336,,9/9/2019 1:21,,1,141,"

I have been running my 2013 server box for 2 weeks now, training an AI model. I set it up to run 30 epochs, but since then it has only run 1 epoch, as my PC configuration is super slow. However, it did generate 1 .h5 file.

+ +

My question is: will this .h5 file that I trained with the Xception model work for ResNet50?

+",27988,,,,,9/9/2019 6:52,Will a .h5 file trained with Xception model work with Resnet50?,,1,0,,12/22/2021 13:51,,CC BY-SA 4.0 +14336,2,,14335,9/9/2019 6:52,,1,,"

If I understand your question correctly, you are asking whether you could load the saved weights of a model trained with the Xception architecture onto a ResNet50 architecture.

+ +

Short answer: No

+ +

Long answer: Xception and Resnet50 have very different architectures.

+ +

Here is a paper comparing multiple CNNs, including Xception and ResNet50: https://www.researchgate.net/publication/330478807_Deep_Feature-Based_Classifiers_for_Fruit_Fly_Identification_Diptera_Tephritidae/figures?lo=1

+ +

As you can see, the architectures are quite different between Xception and ResNet50. Thus, when you train a model, you are changing the weights and biases of the different layers of that model. If you switch models, and they are similar, like let's say VGG16 and VGG19, you could import part of the weights for the layers which are similar between the two models. As far as I know, I don't know of a function which can do this operation in TensorFlow or Keras. But in your case, the architectures are extremely different between Xception and ResNet, and they do not seem to have any layers in common.

+ +

In general, one trains a model, saves the weights of the training, imports the weights back, and uses the same model for the next round of training/testing/predictions.

+ +
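
As a small sketch of that usual round trip in Keras (the file name is arbitrary; the point is that the .h5 weights go back into the same architecture they came from):

    import tensorflow as tf

    # Train and save weights with one architecture...
    model = tf.keras.applications.Xception(weights=None, classes=10)
    # model.fit(...)
    model.save_weights('xception_run1.h5')

    # ...and later load them back into the *same* architecture
    model2 = tf.keras.applications.Xception(weights=None, classes=10)
    model2.load_weights('xception_run1.h5')

    # Trying the same file on ResNet50 would fail because the layers don't match:
    # tf.keras.applications.ResNet50(weights=None, classes=10).load_weights('xception_run1.h5')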

Hope that helps!

+",28010,,,,,9/9/2019 6:52,,,,4,,,,CC BY-SA 4.0 +14337,1,,,9/9/2019 7:43,,1,931,"

+

+ +

I want to deblur text images using deep learning. Which approaches are best suited for the task? Any example networks? Is unsupervised network the best approach? GAN or cycle GAN for these purposes?

+ +

I have currently prepared 1000 images for training (sharp + blurred). Is that sufficient? For each of these approaches, how many training images do I need? I have attached a sample blurred image and the ground truth.

+",28567,,28567,,9/9/2019 21:35,9/9/2019 21:35,Which approaches are best suited for text deblurring?,,0,2,,,,CC BY-SA 4.0 +14338,1,14340,,9/9/2019 10:01,,0,201,"

Currently I'm working on a project for scanning credit cards and extracting text from them. So, first of all, I decided to preprocess my images with some filters like thresholding, dilation and some other operations, but this was not successful for OCR on every credit card. So I learned a lot, and I found a solution like this for number plate recognition, which is very similar to my project. In the first step I want to generate a random dataset of cards like mine to locate the card number region, and for every card that I've generated I cropped two images, one of which has numbers and one of which does not. I generated 2000 images for every card.

+ +

so I have some images like this:

+ +

(does not have numbers) + (has numbers)

+ +

And after generating my dataset I used this model with tensorflow to train my network.

+ +
    model = models.Sequential()
+    model.add(layers.Conv2D(8, (5, 5), padding='same', activation='relu', input_shape=(30, 300, 3)))
+    model.add(layers.MaxPooling2D((2, 2)))
+
+    model.add(layers.Conv2D(16, (5, 5), padding='same', activation='relu'))
+    model.add(layers.MaxPooling2D((2, 2)))
+
+    model.add(layers.Conv2D(32, (5, 5), padding='same', activation='relu'))
+    model.add(layers.MaxPooling2D((2, 2)))
+
+    model.add(layers.Flatten())
+    model.add(layers.Dense(512, activation='relu'))
+    model.add(layers.Dense(2, activation='softmax'))
+
+ +

Here is my plot for 5 epochs.

+ +

+

+ +

I get almost 99.5% accuracy, and it seems to be wrong; I think I have some kind of overfitting in my data. Does it work correctly, or is my model overfitted? And how can I generate a dataset for this purpose?

+",28569,,,,,9/9/2019 12:35,Generate credit cards dataset for locating number region,,1,0,,,,CC BY-SA 4.0 +14339,2,,14262,9/9/2019 10:38,,4,,"

It actually depends on a couple of things here -

+ +
    +
  1. How many output classes do you have? If you have only 2 or 3 classes, it is a very easy task for the classifier that you have built. So, it is highly possible that convergence has occurred.
  2. +
  3. As @Djib2011 mentioned already, if your input training set is not balanced and is heavier with one of the output classes (95%), then this accuracy you see makes sense but note that your model won't do well in production.
  4. +
  5. Do not try to evaluate your model on the basis of your training accuracy. Test it on data your model has never seen before and then evaluate the classification accuracy making sure that your training/testing data is not heavy with one of the classes.
  6. +
+",25704,,,,,9/9/2019 10:38,,,,0,,,,CC BY-SA 4.0 +14340,2,,14338,9/9/2019 12:18,,2,,"

I am assuming the question you are asking is how to prevent over-fitting while keeping the maximum accuracy. Your graph does show that your model over-fits.

+ +

There are a couple of different methods to prevent over-fitting from happening. You can specify training to stop after a certain number of epochs. In your case it seems to be 2 or 3 epochs. Take care, as a new initialization might require more or fewer epochs to reach optimal accuracy, thus it is necessary to run your model a couple of times to determine the correct number of epochs.

+ +

You can also specify the accuracy you want your model to reach before you stop training. This can be dangerous, as a local minimum could be found below your expected accuracy, resulting in your training running indefinitely. Also, your model could have become more accurate than your expected accuracy but stopped prematurely due to reaching your stopping condition.

+ +

You can combine the two. Terminate training once it reaches your expected accuracy, otherwise terminate it once it reaches a certain number of epochs. This way you prevent your training from running indefinitely.

+ +
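
If you are training with model.fit rather than a custom loop, Keras already ships a callback that covers this combination (stop once improvement stalls, bounded by the epochs argument); a minimal sketch, assuming a validation split is available (the monitor name may be 'val_acc' on older TensorFlow versions):

    import tensorflow as tf

    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor='val_accuracy',       # watch validation accuracy, not training accuracy
        patience=3,                   # stop if it has not improved for 3 epochs
        restore_best_weights=True     # roll back to the best weights seen so far
    )
    # model.fit(x_train, y_train, validation_split=0.2, epochs=50, callbacks=[early_stop])

Otherwise, a custom loop as described next gives you full control.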

In Tensorflow you would want to build your own custom training seen here

+ +

The method you are interested in is

+ +
epoch_accuracy = tfe.metrics.Accuracy()
+
+ +

In their example they get the accuracy after each epoch, you would want to create your own batches and apply it there.

+ +

If you really want to dive deeper, you can implement methods for the model to detect when it starts over-fitting, such as looking at the standard deviation of the accuracy across a certain measure (usually epochs, but your model starts over-fitting after 5, so that is not a good measure in your case; maybe compute the standard deviation over a certain number of trained batches). If the accuracy leaves the standard deviation band, then it might be a good idea to stop training. The problem with this technique is that your model has to over-fit a bit for you to know it needs to stop. The other problem is that your model might seem to be over-fitting when it was just a dip in accuracy, and so training is stopped prematurely. (There are methods around this, but I can't recall them.)

+ +

Another technique is to add dropout. This introduces noise and pushes your model out of a local minimum. It also forces nodes that were ignored to be used and optimized. This surprisingly works well, but it is not a surefire way to prevent over-fitting. This technique is built into TensorFlow and doesn't need custom training. You can find more reading about TensorFlow dropout here; they also list more techniques on how to overcome over-fitting, but their solutions involve changing the model, and your model seems to be fine.

+ +
tf.keras.layers.Dropout(0.2),
+
+ +

These are just some techniques I listed but there is a lot more out there. Unfortunately there is no concrete method to prevent it but steps can be taken to stop well before over-fitting occurs. The first two techniques are mostly used together.

+",14621,,14621,,9/9/2019 12:35,9/9/2019 12:35,,,,2,,,,CC BY-SA 4.0 +14341,1,,,9/9/2019 14:16,,3,73,"

I've got a test dataset with 4 features and the PCA produces a set of 4 eigenvectors, e.g.,

+ +
EigenVectors: [0.7549043055910286, 0.24177972266822534, -0.6095588015369825, -0.01000612689310429]
+EigenVectors: [0.0363767549959317, -0.9435613299702559, -0.3290509434298886, -0.009706951562064631]
+EigenVectors: [-0.001031816289317291, 0.004364438034564146, 0.016866154627905586, -0.999847698334029]
+EigenVectors: [-0.654824523403971, 0.2263084929291885, -0.7210264051508555, -0.010499173877772439]
+
+ +

Do the eigenvector values represent the features from the original dataset? E.g., is feature 1 & 2 explaining most of the variance in eigenvector 1?

+ +

Am I interpreting the results correct to say that features 1 and 2 are therefore the most important in my dataset since PC1 represents 90% of the variance?

+ +

I'm trying to map back to the original features but am unsure how to interpret the results.

+",28572,,2444,,9/9/2019 14:45,9/9/2019 17:18,Do the eigenvectors represent the original features?,,1,0,,,,CC BY-SA 4.0 +14342,1,,,9/9/2019 14:18,,2,77,"

I am using a raw data set with 4 feature variables (Total Cholesterol, Systolic Blood Pressure, Diastolic Blood Pressure, and Cigarette count) to do a Binomial Classification (find stroke likelihood) using the Logistic Regression algorithm.

+ +

I made sure that the class counts are balanced. i.e., an equal number of occurrences per class.

+ +

using Python + sklearn, the problem is that the classification performance gets very negatively-impacted when I try to normalize the dataset using

+ +
 X=preprocessing.StandardScaler().fit(X).transform(X)
+
+ +

or

+ +
  X=preprocessing.MinMaxScaler().fit_transform(X)
+
+ +

So before normalizing the dataset:

+ +
         precision    recall  f1-score   support
+
+      1       0.70      0.72      0.71        29
+      2       0.73      0.71      0.72        31
+
+avg / total   0.72      0.72      0.72        60
+
+ +

while after normalizing the dataset (the precision of class:1 decreased significantly) + precision recall f1-score support

+ +
      1       0.55      0.97      0.70        29
+      2       0.89      0.26      0.40        31
+
+ avg / total  0.72      0.60      0.55        60
+
+ +

Another observation that I failed to find an explanation to is the probability of each predicted class.

+ +

Before the normalization:

+ +
 [ 0.17029846  0.82970154]
+ [ 0.47796534  0.52203466]
+ [ 0.45997593  0.54002407]
+ [ 0.54532438  0.45467562]
+ [ 0.45999462  0.54000538]
+
+ +

After the normalization ((for the same test set entries))

+ +
 [ 0.50033247  0.49966753]
+ [ 0.50042371  0.49957629]
+ [ 0.50845194  0.49154806]
+ [ 0.50180353  0.49819647]
+ [ 0.51570427  0.48429573] 
+
+ +

Dataset description is shown below:

+ +
       TOTCHOL    SYSBP    DIABP  CIGPDAY   STROKE
+count  200.000  200.000  200.000  200.000  200.000
+mean   231.040  144.560   81.400    4.480    1.500
+std     42.465   23.754   11.931    9.359    0.501
+min    112.000  100.000   51.500    0.000    1.000
+25%    204.750  126.750   73.750    0.000    1.000
+50%    225.500  141.000   80.000    0.000    1.500
+75%    256.250  161.000   90.000    4.000    2.000
+max    378.000  225.000  113.000   60.000    2.000
+
+ +

SKEW is

+ +
TOTCHOL    0.369
+SYSBP      0.610
+DIABP      0.273
+CIGPDAY    2.618
+STROKE     0.000
+
+ +

Is there a logical explanation for the decreased precision?

+ +

Is there a logical explanation for the very-close-to-0.5 probabilities?

+",25463,,25463,,9/9/2019 15:04,5/28/2023 22:00,Is it compulsary to normalize the dataset if doing so can negatively impact a Binary Logistic regression performance?,,1,0,,,,CC BY-SA 4.0 +14343,5,,,9/9/2019 14:46,,0,,,2444,,2444,,9/9/2019 14:46,9/9/2019 14:46,,,,0,,,,CC BY-SA 4.0 +14344,4,,,9/9/2019 14:46,,0,,"For questions related to the principal component analysis (PCA) technique, which can be used for dimensionality reduction.",2444,,2444,,9/9/2019 14:46,9/9/2019 14:46,,,,0,,,,CC BY-SA 4.0 +14346,1,,,9/9/2019 16:17,,1,25,"

Suppose that we have a labeled training set of $n$ closely cropped images of cars $(x_1, y_1) , \dots, (x_n, y_n)$. We then train a CNN on this. Let's say we have $m$ test images. Then for each of the $m$ images, do we use the trained CNN on a cropped out portion of the box to detect whether there is a car or not? If the object is large, wouldn't having a large sliding window have better performance than a smaller sliding window?

+",28575,,,,,9/9/2019 16:17,Sliding Window Detection,,0,2,,,,CC BY-SA 4.0 +14347,2,,14341,9/9/2019 17:12,,2,,"
+

The principal components (eigenvectors) correspond to the direction + (in the original n-dimensional space) with the greatest variance in + the data.

+ +

The corresponding eigenvalue is a number that indicates how much + variance there is in the data along that eigenvector (or principal + component).

+
+ +

Thus, feature 2 is the most important (based on eigenvalue alone). Then feature 1. The other 2 features have little impact and theoretically could be removed as a part of your data reduction effort.

+ +

Also, it is important to point out that

+ +
+

When performing PCA, it is typically a good idea to normalize the data + first. Because PCA seeks to identify the principal components with the + highest variance, if the data are not properly normalized, attributes + with large values and large variances (in absolute terms) will end up + dominating the first principal component when they should not.

+
+ +

IOW, if you didn't normalize your data then your PCA analysis is quite likely meaningless.

+ +

*The above quoted text is from http://www.lauradhamilton.com/introduction-to-principal-component-analysis-pca

+",1881,,1881,,9/9/2019 17:18,9/9/2019 17:18,,,,0,,,,CC BY-SA 4.0 +14348,1,,,9/9/2019 20:00,,2,29,"

Let's assume that I have a neural network with a few numerical features and one binary categorical feature. The network in this case is used for regression. I wonder if such a neural network can properly adjust to the two different states of the categorical feature, or whether training two separate networks, according to these two states, can be a better idea in the sense of a smaller achievable error (assuming I have enough data for each of the two states). The new model would use a simple 'if statement' at the beginning of the regression process and use the proper network accordingly.

+",22659,,22659,,9/9/2019 20:43,9/9/2019 20:43,Can two neural networks be better instead of one with a categorical feature?,,0,1,,,,CC BY-SA 4.0 +14349,2,,14262,9/9/2019 20:08,,1,,"

It can be normal, and there might be nothing wrong with your model. If there is a very strong and clear correlation in your data (good separability), then a network can achieve very high accuracy very fast. After reaching some value, learning gets harder.

+",22659,,,,,9/9/2019 20:08,,,,0,,,,CC BY-SA 4.0 +14351,1,14352,,9/10/2019 5:18,,0,119,"

I'm trying to separate classes in 3D space, the data are as in the sketch below:

+ +

+ +

There are 3 classes: 0, 1, 2; and, looking at the sketch, it seems that I need 3 planes to separate the classes. So how many hidden layers should be in the DNN? And roughly how many neurons in each layer?

+ +

Some say the number of hidden layers should roughly equal the number of separations needed, so I put in 3 hidden layers and it worked! But is there any reasoning behind that simple rule?

+",2844,,2844,,9/10/2019 6:13,9/10/2019 6:42,How many hidden layers are needed for this training data set,,1,0,,,,CC BY-SA 4.0 +14352,2,,14351,9/10/2019 6:42,,2,,"

You need to perform Hyperparameter Tuning to identify -

+ +
    +
  1. Number of hidden layers.
  2. +
  3. Number of neurons in each of the hidden layers.
  4. +
  5. Dropout
  6. +
  7. The activation function you use in each of your hidden layers.
  8. +
+ +

These parameters are only related to how you build your model. There are others that relate to training, like batch size, number of epochs and so on. Your model's performance ultimately depends on how well you tune your hyperparameters.

+ +

Also note that hyperparameter tuning is a trial and error task because it depends on several factors that may not be obvious to us. With experience, experts do build certain thumb rules about what may be the right choice, but there is no way to generalize it. ""Some say the number of hidden layers is roughly the number of separation times"" - is just another thumb rule. You simply need to find out what best suits your scenario.

+",25704,,,,,9/10/2019 6:42,,,,0,,,,CC BY-SA 4.0 +14353,1,,,9/10/2019 9:17,,1,36,"

I have developed a new algorithm which has dynamic connections. It is based on the following paper:

+ +

https://www.researchgate.net/publication/334226867_Dynamic_Encoder_Network_to_Process_Noise_into_Images

+ +

I have figured out how to effect the dynamism. The pseudocode is in the diagram.

+ +

Additionally, I would really like to know how to do the rest of the code, in terms of a general overview. I would need to have the same deconvolutional layer for all training examples, as well as different current_weight_matrices for the different training examples. So how do I go about adding the dynamic layers to the deconvolutional neural network? Will I have to have many copies of the same network?

+",28585,,,,,9/10/2019 9:17,I am trying to create a network with dynamic connections for every different training example,,0,0,,,,CC BY-SA 4.0 +14356,2,,13097,9/10/2019 16:51,,1,,"

Both study properties of a network. The literature under respective titles seems to focus on certain topics.

+ +

Network analysis seems to focus on understanding the structure of a network. Centrality, modularity, assortativity, etc. are metrics used to study properties of networks. Key areas of research are, for example, community detection, centrality measures, clustering algorithms and link prediction.

+ +

GDL is oriented more towards using datasets structured as graphs as input to machine learning problems like classification and regression. Key areas of research include graph representation and neural architecture.

+ +

Some problems, like link prediction, are present in both domains. Other problems, like network construction, aren't covered deeply in either area.

+ +

Some subjects, like algebraic graph theory, appear in both disciplines. The Fiedler vector, for example, is studied in network analysis for community detection. Spectral analysis and matrix factorisation are ideas being explored in representation learning.

+",28590,,,,,9/10/2019 16:51,,,,0,,,,CC BY-SA 4.0 +14357,1,,,9/10/2019 17:08,,3,96,"

In Reinforcement Learning, when I train a model, it comes up with its own set of solutions. For example, if I am training a robot to walk, it will come up with its own walking gait, such as this Deep Mind robot that has learned to walk with a bizarre gait. It can surely walk/run, although the movements do not quite look like a human's.

+ +

I was wondering how can I train a model by providing it some kind of reference motion data? For example, if I collect motion data from a walking human and then provide it to the training, can the training be made learn the walking movements that looks similar to the reference motion data?

+ +

Searching online I did find some links that shows this is possible. For example, here is a research where the researchers did exactly what I am trying to do, they fed motion data captured from humans to a simulation and made it learn the movements.

+ +

So, my question is: How can I give some kind of hints or reference data to a reinforcement learning model instead of just leaving it all by itself? how exactly is this done? What is it even called? What are some terms and keywords that I can search for to learn more?

+ +

Many thanks in advance

+",28596,,,,,11/12/2019 14:52,Reinforcement learning with hints or reference model,,1,0,,,,CC BY-SA 4.0 +14359,1,23787,,9/10/2019 18:05,,1,262,"

My work colleague got a project with a lot of work that is not hard or complicated. The problem is simple, but it is a lot of work. We have two XML files with a lot of variables in them. The XML files are not flat: you can have classes within classes that can reach an absurd amount of depth.

+ +

The problem is that the one file is a request that is received and the other is a response. The request needs to map its variables to the response variables. A simple tool could be built to solve this, but the complication is that there are rules for certain variables. Some variables in the request have arithmetic involved, sometimes with other variables, and some don't get mapped if other variables are present.

+ +

I was thinking about Genetic Programming when I heard about this problem. If all the rules could be defined then it should be able to build a tree that would represent the desired output which is the XML response.

+ +

Will it work and if not do you think there is an AI algorithm that can solve this?

+",14621,,,,,9/27/2020 2:42,Which AI algorithm is great for mapping between two XML files,,1,0,,,,CC BY-SA 4.0 +14364,2,,6274,9/10/2019 23:49,,15,,"

There are 2 problems you might face.

+ +
    +
  1. Your neural net (in this case convolutional neural net) cannot physically accept images of different resolutions. This is usually the case if one has fully-connected layers, however, if the network is fully-convolutional, then it should be able to accept images of any dimension. Fully-convolutional implies that it doesn't contain fully-connected layers, but only convolutional, max-pooling, and batch normalization layers all of which are invariant to the size of the image.

    + +

    Exactly this approach was proposed in this ground-breaking paper Fully Convolutional Networks for Semantic Segmentation. Keep in mind that their architecture and training methods might be slightly outdated by now. A similar approach was used in widely used U-Net: Convolutional Networks for Biomedical Image Segmentation, and many other architectures for object detection, pose estimation, and segmentation.

  2. +
  3. Convolutional neural nets are not scale-invariant. For example, if one trains on the cats of the same size in pixels on images of a fixed resolution, the net would fail on images of smaller or larger sizes of cats. In order to overcome this problem, I know of two methods (might be more in the literature):

    + +
      +
    1. multi-scale training of images of different sizes in fully-convolutional nets in order to make the model more robust to changes in scale; and

    2. +
    3. having multi-scale architecture.

    4. +
    + +

    A place to start is to look at these two notable papers: Feature Pyramid Networks for Object Detection and High-Resolution Representations for Labeling Pixels and Regions.

  4. +
+",28608,,2444,,6/13/2020 20:32,6/13/2020 20:32,,,,2,,,,CC BY-SA 4.0 +14365,2,,14293,9/11/2019 0:11,,1,,"

I think your fundamental question is: can convolutional neural networks generalize to objects independently of the view?
+The answer is mostly yes (given that the dataset contains multiple views of the objects). This is evident from the results of various challenges, e.g. the COCO object detection challenge results on YouTube. You can see that, no matter what the view of the cars or pedestrians is, the detector is not biased towards any specific viewpoint.

+ +

Therefore, one can assume that you can build a single network to perform object detection and another network to perform classification.

+ +

If you really want to go even further:
+- you can make a small change to the architecture of your detector (I guess you might be using something like SSD, YOLO or Faster R-CNN), in which you perform gender classification for every bounding box prediction. If you think about it, this is intuitive, because the detector is already doing classification (there is usually a softmax + cross-entropy loss), so you can just add another term to its output tensor and modify the loss. That way you don't even need another network! It would be much faster and simpler (see the rough sketch after this list).
+- you can estimate the pose of the object (and the corresponding normals) with respect to the camera, to capture the best viewpoint for performing classification.
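
+ +

A rough sketch of the extra-head idea (the feature size and class counts are made up, and this is not tied to any particular detector implementation): take the per-box feature vector the detector already computes and attach a second, small classification head with its own loss term.

import tensorflow as tf

box_features = tf.keras.Input(shape=(256,))                    # per-box feature vector (assumed size)
class_logits = tf.keras.layers.Dense(80, name='object_class')(box_features)
gender_logits = tf.keras.layers.Dense(2, name='gender')(box_features)   # the extra head

model = tf.keras.Model(box_features, [class_logits, gender_logits])
model.compile(optimizer='adam',
              loss={'object_class': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    'gender': tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)})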

+",28608,,,,,9/11/2019 0:11,,,,4,,,,CC BY-SA 4.0 +14366,2,,10911,9/11/2019 0:44,,2,,"

There is another specific way to do this if one uses a neural network. Use a dropout layer in your network and, instead of scaling the activations at test time, sample the activations (just like at training time) and predict multiple times for a given input, then look at the distribution of your outputs. Intuitively, this adds a probabilistic, Bayesian effect to your neural network. +I think this method was first proposed in Dropout as a Bayesian Approximation: +Representing Model Uncertainty in Deep Learning, where it is called Monte Carlo Dropout.
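
+ +

A minimal sketch of the idea (my own illustration, not code from the paper): keep dropout active at prediction time by calling the model with training=True, draw many samples, and use their spread as the uncertainty estimate.

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = np.random.rand(1, 10).astype('float32')
samples = np.stack([model(x, training=True).numpy() for _ in range(100)])   # dropout stays on
mean, std = samples.mean(axis=0), samples.std(axis=0)                       # std is the uncertainty estimate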

+",28608,,,,,9/11/2019 0:44,,,,0,,,,CC BY-SA 4.0 +15365,1,15373,,9/11/2019 4:10,,3,110,"

In the paper Eligibility Traces for Off-Policy Policy Evaluation (2010), Doina Precup et al. mention the term ""non-starving"" many times. Specifically, the term is used as ""non-starving policy"" in the context of off-policy learning.

+ +

A specific mention of the term

+ +
+

we consider a method that requires nothing of the behavior + policy other than that it be non-starving, i.e., that it + never reaches a time when some state-action pair is never + visited again.

+
+ +

What does such a policy look like intuitively? Why is it required?

+",9793,,2444,,9/11/2019 13:03,9/15/2019 16:09,What is a non-starving policy in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +15367,1,35704,,9/11/2019 6:49,,5,3572,"

I've recently encountered different articles that recommend using the KL divergence instead of the MSE/RMSE (as the loss function) when trying to learn a probability distribution, but none of the articles gives a clear reason why the former is better than the latter.

+

Could anyone give me a strong argument why the KL divergence is suitable for this?

+",20430,,2444,,5/30/2022 8:18,5/30/2022 10:46,What are the advantages of the Kullback-Leibler over the MSE/RMSE?,,2,0,,,,CC BY-SA 4.0 +15368,1,15419,,9/11/2019 7:11,,0,111,"

A single neuron is able to do linear separation. For example, consider this XOR-simulating network:

+ +
x1 --- n1.1
+   \  /    \
+    \/      \
+             n2.1 
+    /\      /
+   /  \    /
+x2 --- n1.2 
+
+ +

Where x1, x2 are the 2 inputs, n1.1 and n1.2 are the 2 neurons in hidden layer, and n2.1 is the output neuron.

+ +

The output neuron n2.1 does a linear separation. How about the 2 neurons in the hidden layer?

+ +

Is it still called linear separation (done at 2 nodes, joining the 2 separation lines), or polynomial separation of degree 2?

+ +

I'm confused about how it's called because there are curvy lines in this wiki article: https://en.wikipedia.org/wiki/Overfitting

+ +

+ +

+",2844,,2844,,9/11/2019 7:18,9/13/2019 10:31,Is it still called linear separation with a layer of more than 1 neuron,,2,0,,,,CC BY-SA 4.0 +15369,2,,15368,9/11/2019 7:26,,0,,"

What you have depicted is a nonlinear classifier. Although each stage does a linear separation, the sequential composition of linear separations is nonlinear. The nonlinearity of the neuron is key in this regard, as otherwise it would all be equivalent to a matrix multiplication, which is linear. You were right about the degree, although it's rarely called that. People usually just describe the number of layers, and I guess the main reason is that it's not directly equivalent to such polynomials, as that depends on other factors (e.g. the activation function).

+",27444,,,,,9/11/2019 7:26,,,,0,,,,CC BY-SA 4.0 +15371,1,,,9/11/2019 9:39,,2,855,"

I want to create a model to solve a multi-class classification problem.

+

Here are more details about my problem.

+
    +
  • Every picture contains only one object

    +
  • +
  • The background is very simple

    +
  • +
  • All objects belong to the same family of objects (for example, all objects are knives), but there are different specific subtypes

    +
  • +
  • the model will learn and predict the name of the object (example: the model learns all types of knives, and when it gets an image it will tell us the name of the knife)

    +
  • +
+

To be clear, let's say I have 50 types of knives, and the output of the model has to be the correct name of the knife. The knife name could be:

+
    +
  • Chef's Knife,
  • +
  • Heavy Duty Utility Knife,
  • +
  • Boning Knife, etc.
  • +
+

To solve this problem, I have started to use annotated, segmented (masked) images (COCO-like dataset) and the Mask R-CNN model.

+

As a first step, I got a prediction, but I really don't know if I'm on the right track.

+

Could Mask R-CNN be the solution for this problem, or is it impossible to recognize the tiny differences between two objects from the same class (for example, a Chef's Knife and a Heavy Duty Utility Knife)?

+",29616,,2444,,3/9/2021 18:06,11/30/2022 0:04,Is Mask R-CNN suited to solve a multi-class classification problem where the classes are related?,,1,0,,,,CC BY-SA 4.0 +15372,1,15377,,9/11/2019 12:16,,3,501,"

Is it crucial to always have the same initial (starting) state for Reinforcement Learning, for example, for Q-learning or DQN? +Or can it vary?

+",29617,,2444,,9/11/2019 12:34,9/11/2019 16:34,How important is the choice of the initial state?,,1,0,,,,CC BY-SA 4.0 +15373,2,,15365,9/11/2019 12:59,,4,,"

A non-starving policy is a (behavior) policy that is theoretically guaranteed to visit each state and take all possible actions from each state an infinite number of times, so that $Q(s, a)$, $\forall s, \forall a$, is always updated an infinite number of times. In the context of off-policy prediction, this criterion implies that any trajectory will have a non-zero probability under the behavior policy. As a consequence, the experience from the behavior policy sufficiently covers the possibilities of any target policy.

+ +

An example of a non-starving policy is the $\epsilon$-greedy policy, which, with probability $0 < \epsilon \leq 1$ (usually a small number), takes a random action from the given state, and, with probability $1-\epsilon$, takes the current best action, that is, the action with the highest value in the given state, according to the current value function.
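
+ +

A minimal sketch of such an $\epsilon$-greedy action selection (assuming a tabular $Q$ of shape (number of states, number of actions)):

import numpy as np

def epsilon_greedy(Q, state, epsilon=0.1):
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])   # explore: every action keeps a non-zero probability
    return int(np.argmax(Q[state]))            # exploit: the current best action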

+",2444,,9793,,9/15/2019 16:09,9/15/2019 16:09,,,,8,,,,CC BY-SA 4.0 +15374,1,,,9/11/2019 13:14,,2,356,"

I am using a fully connected neural network that uses a sigmoid activation function. If we feed it a big enough input, the sigmoid function will eventually saturate to 1 or 0. Is there any solution to avoid this?

+

Will this lead to the classical sigmoid problems of vanishing or exploding gradients?

+",28602,,2444,,12/21/2021 10:38,12/21/2021 10:39,"Why will the sigmoid function be 1 (and 0), if we use a fully connected layer that produces a big enough positive (or negative, respectively) output?",,1,5,,,,CC BY-SA 4.0 +15375,1,,,9/11/2019 15:30,,0,56,"

I have a huge amount of data and I want to calculate my false positives and false negatives. Is there software that can help me determine them?

+",29622,,2444,,9/11/2019 19:58,9/11/2019 19:58,How to calculate the false positives and negatives?,,1,0,,,,CC BY-SA 4.0 +15376,1,15381,,9/11/2019 16:05,,3,79,"

Is it still a policy iteration algorithm if the policy is updated optimizing a function of the immediate reward instead of the value function?

+",29621,,1847,,9/11/2019 17:55,9/11/2019 18:57,Can policy iteration use only the immediate reward for updates?,,1,0,,,,CC BY-SA 4.0 +15377,2,,15372,9/11/2019 16:12,,2,,"

The initial state can vary in both training and use, and how you decide to do this makes very little difference to Q-learning. The important factor is whether all state/action pairs relevant to optimal behaviour can be reached. As there is already randomness in any exploring policy, and in many environments as part of state transitions and reward functions, adding some more at the start is not an issue.

+ +

More formally, you can take any existing environment with states $S_1, S_2, S_3 ... S_n$ plus defined actions and rewards. Add a special fixed start state $S_{0}$ with one special action $A_{0}$ allowed. Make the state transition matrix following that action any distribution you like over the other states, and reward $0$. It is clearly a valid MDP, and is identical to your original MDP in terms of value and policy functions for states $S_1, S_2, S_3 ... S_n$. For all intents and purposes to the agent (which gets no meaningful policy choice in $S_0$), it starts in the original MDP in some randomly chosen state.

+ +

That is, as long as the variations are not made to deliberately exclude some of the state space during training that would be needed later, or otherwise used to ""attack"" the agent and prevent it from learning. Q-learning proofs of convergence assume that all state/action pairs are reachable an infinite number of times in the limit of infinite training time. Of course, in practice this is never achieved, but clearly if you excluded some important state from ever being seen by choosing start states such that it is never reachable, then the agent would never learn about it.

+ +

You can use a highly variable start state for training to your advantage. It is a form of exploration, and truly exploring starts (that could start from any state/action pair, so you would want to randomise the first action choice) allow you to skip any further exploration and still guarantee convergence to an optimal policy - i.e. you could learn Q values on-policy using a deterministic policy. This will not necessarily make things converge faster, but does go to demonstrate that random start states are not a barrier to RL.

+ +

There are a couple of minor/partial exceptions to ""do what you like"":

+ +
    +
  • If the environment is otherwise highly deterministic, adding a random start may increase the difficulty in optimising it, as more states will be reachable, even with an optimal policy, therefore the state space over which optimal behaviour needs to be discovered may be larger and take longer to learn.

  • +
  • Policy Gradient methods, like REINFORCE, optimise the policy towards maximising return from a known distribution of start states. So you can vary start state, but you should really stick to a standard start procedure, including a fixed distribution of allowed start states.

  • +
  • Similarly, if you want to report a standard score for your agent, it is common to quote the expected return from the start states. Again that doesn't mean you have a fixed start state, but you should use a fixed distribution of start states to assess your agent.

  • +
+ +

Many standard environments, like Open AI's Gym environments CartPole and LunarLander, include some amount of randomness in the start state, so that the agent has to solve a more general problem than just generating a fixed sequence of actions that always works from the beginning.

+",1847,,1847,,9/11/2019 16:34,9/11/2019 16:34,,,,2,,,,CC BY-SA 4.0 +15378,2,,15375,9/11/2019 17:11,,1,,"

Yes, you can use sklearn's confusion_matrix. To explicitly extract the false positives and negatives, you can do

+ +
from sklearn.metrics import confusion_matrix
+
+y_true = [0, 1, 0, 1]
+y_pred = [1, 1, 1, 0]
+tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
+
+",2444,,,,,9/11/2019 17:11,,,,0,,,,CC BY-SA 4.0 +15379,1,,,9/11/2019 17:20,,1,427,"

In Policy Iteration (PI), the action is generated by the policy, whether or not it is optimal w.r.t. the current value function $v(s)$. Whereas, in Value Iteration, the action is generated greedily w.r.t. the current $v(s)$, which is an approximation of the objective function (as I understand it). As a consequence, in the first few iterations, will Value Iteration perform better than Policy Iteration?

+",29621,,,user9947,9/15/2019 18:59,9/15/2019 18:59,Is Value Iteration better than Policy Iteration for first few iterations?,,0,5,,,,CC BY-SA 4.0 +15381,2,,15376,9/11/2019 18:51,,2,,"
+

Is it still a policy iteration algorithm if the policy is updated optimizing a function of the immediate reward instead of the value function?

+
+ +

Technically yes.

+ +

The value update step in Policy Iteration is:

+ +

$$v(s) \leftarrow \sum_{r,s'}p(r,s'|s,\pi(s))(r + \gamma v(s'))$$

+ +

The discount factor $\gamma$ can be set to $0$, making the update:

+ +

$$v(s) \leftarrow \sum_{r,s'}p(r,s'|s,\pi(s))r$$

+ +

However, there are a few key details that are important, and they make this a technical yes rather than some alternative way of solving problems:

+ +
    +
  • Changing discount factor $\gamma$ changes what it means for an agent to act optimally. Setting it to zero means that the agent will prioritise only its immediate reward signal, and make no long-term decisions at all. This would be useless for instance if the task was to escape a maze in minimum time.

  • +
  • Technically there is still a value function being updated. The function $v(s)$ is still the expected future reward, just we have set it to only care a very short step into the future. So short that it doesn't care what the value of the next state is, so the next state does not appear in any updates.

  • +
  • Due to the lack of bootstrapping between states, all the data for optimal behaviour is already available in the reward distribution. So the entire MDP can be solved with a single sweep through all the states. Or it could just be solved on-demand using $\pi(s) = \text{argmax}_a[\sum_{r,s'}p(r,s'|s,a)r]$ for any state, making a policy iteration process redundant.

  • +
+ +

However, with these caveats in mind, yes this is still policy iteration. It is the same update process, just with a particular choice of one of the parameters.
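
+ +

To make the $\gamma = 0$ case from the third bullet concrete, here is a tiny sketch with made-up numbers: once the expected immediate rewards are tabulated, the whole procedure collapses to a single greedy sweep.

import numpy as np

R = np.array([[1.0, 0.5],        # R[s, a] = expected immediate reward for state s, action a
              [0.0, 2.0]])
policy = R.argmax(axis=1)        # pi(s) = argmax_a E[r | s, a]  ->  array([0, 1])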

+",1847,,1847,,9/11/2019 18:57,9/11/2019 18:57,,,,1,,,,CC BY-SA 4.0 +15384,2,,14153,9/11/2019 23:21,,1,,"

One of the key terms in the literature that you are looking for is video captioning.

+ +

You can have a look at some of the relevant papers with code on this subject. In short, it is an active area of research and a difficult problem; one reason is that videos are still difficult to learn from (because of the larger amount of data, larger models, etc.), and the model has to work with two modalities of data: text and images.

+ +

A paper that you might want to start with is Deep Visual-Semantic Alignments for Generating Image Descriptions, which works on single images. In short, you can use something similar to the paper: an object detector (e.g. Faster R-CNN) to extract visual features, which are fed into the state of an RNN (LSTM) that outputs the sequence of words in your summary (see the picture below). +

+",28608,,,,,9/11/2019 23:21,,,,3,,,,CC BY-SA 4.0 +15386,1,,,9/12/2019 2:03,,1,91,"

The Transformer model proposed in ""Attention Is All You Need"" uses sinusoid functions to do the positional encoding.

+ +

Why have both sine and cosine been used? And why do we need to separate the odd and even dimensions to use different sinusoid functions?

+",29631,,2444,,11/30/2021 15:41,11/30/2021 15:41,Why do both sine and cosine have been used in positional encoding in the transformer model?,,0,0,,,,CC BY-SA 4.0 +15387,1,15391,,9/12/2019 3:04,,4,357,"

When we are working on an AI project, does the domain/context (academia, industry, or competition) make the process different?

+

For example, I see that in competitions most participants, even winners, use stacking models, but I have not found anyone implementing them in industry. How about the cross-validation process? I think there is a slight difference between industry and academia.

+

So, does the context/domain of an AI project make the process different? If so, what are the things I need to pay attention to when creating an AI project, depending on its domain?

+",16565,,2444,,12/27/2021 10:18,12/27/2021 10:18,"When we are working on an AI project, does the context (academia, industry or competition) make the process different?",,2,0,,,,CC BY-SA 4.0 +15388,1,,,9/12/2019 3:35,,2,1519,"

From my understanding, the critic evaluates the policy (actor) following a dynamic programming (DP) or approximate dynamic programming (ADP) scheme, which should converge to the optimal value function after sufficiently many iterations. The policy (actor) then updates its parameters w.r.t. the optimal value function using gradient methods. This policy evaluation and improvement cycle is repeated until neither the critic nor the actor changes anymore.

+ +

How is it guaranteed to converge as a whole? Is there any mathematical proof? Is it possible that it may converge to a local optimum instead of a global one?

+",29621,,2444,,9/13/2019 12:29,9/13/2019 19:49,How is the actor-critic algorithm guaranteed to converge?,,1,0,0,,,CC BY-SA 4.0 +15389,1,,,9/12/2019 5:02,,1,27,"

I have a probabilistic classifier that produces a distribution over my 3 classes - C1, C2, C3. +I want to compare some new points I'm classifying to each other, to see which one is the best fit for a specific class.

+ +

For example: +for a new point X1, the classifier will output something like [0.2, 0.2, 0.6]; +for another new point X2, it will produce [0.2, 0.4, 0.4]; +so for both X1 and X2 the chosen class would be C3. +Now I want to know which of X1, X2 is a better fit to C3. +I cannot simply choose the one with the highest probability for C3, because its probability for C3 depends on its probabilities for the other classes. X1 got 0.6 and X2 got 0.4, but it's possible that X2 is closer to C3 in the hyperplane than X1; it is just less unique to C3 than X1, and therefore X1 got a higher probability.

+ +

here's a visual, in 2 dimensions:

+ +

+ +

X2, which got a lower probability, is clearly a better fit to the Red class than X1, which is truly unique to the Red class but is further from the class cluster.

+ +

My questions are:

+ +
    +
  1. how do I normalize the results of a probabilistic classifier so I can compare predictions to each other?
  2. given the output of a probabilistic classifier, how can I get the actual distance from the probabilities? It must be possible, because there's an exact mapping between a set of probabilities and a point in the classified hyperplane.
+ +

Thanks a lot! +Amir

+",29636,,,,,9/12/2019 5:02,Probabilistic classification - normalize results,,0,0,,,,CC BY-SA 4.0 +15391,2,,15387,9/12/2019 6:42,,4,,"

I cannot comment about the process for AI for academia. I can compare AI for competitions and AI for business. To clarify whatever I say is about ML not any other AI techniques. The process might be different for other techniques. But most of things that I say are general enough that I am assuming should still apply.

+ +

The main difference that I saw while doing ML for a competition vs. for a business was that of focus.

+ +

When doing it for a competition, for example on Kaggle, the focus was mainly on creating the model:

+ +
    +
  • machine learning metrics are specified for you
  • +
  • some data was given to you
  • +
  • business problem was given to you
  • +
+ +

When doing it for a business, what is different is:

+ +
    +
  • given a business problem, finding the parts that can actually benefit from ML. You have to define the ML problem in it and define how it actually benefits the business. This may involve significant discussions with business stakeholders, weighing the pros and cons of doing it versus doing something else, communicating the benefits to the business stakeholders, and taking them into confidence for the process to start
  • +
  • find the right data for the problem from scratch, ensure it is collected by rest of the system or brought from 3rd parties
  • +
  • define business metrics over and above machine learning metrics. At the end of the day, nobody really cares whether the ML model's recall or accuracy is good or bad. What is important is the relevant business effect.
  • +
  • make the model, deploy it and integrate it with the rest of the system. This is important because if your goal is just making the model, you would not care about the factors associated with actually using it, i.e. the latency of predictions, the cost of the machines needed to run it, etc.
  • +
  • A/B testing for the models, running multiple models in parallel, dynamically being able to adjust which models to use
  • +
+ +

Hope this gives some idea about the differences in AI for competitions and AI for business.

+",29638,,,,,9/12/2019 6:42,,,,0,,,,CC BY-SA 4.0 +15393,1,,,9/12/2019 8:09,,6,326,"

Let's say we have a problem that can be solved by some RL algorithms (DQN, for example, because we have a discrete action space). At first, the action space is fixed (the number of actions is $n_1$), and we have already trained an offline DQN model well. Later, we have to add more actions for some reason (and the number of actions is now $n_2$, where $n_2 > n_1$).

+

Are there some solutions to update the value function or policy (or the neural network) with only minor changes?

+",29643,,2444,,12/28/2020 1:34,12/28/2020 1:34,Are there RL techniques to deal with incremental action spaces?,,1,0,,,,CC BY-SA 4.0 +15394,1,15405,,9/12/2019 9:37,,0,50,"

I'm learning HMMs and decided to model a problem for learning purposes. I came up with this idea of predicting words from letters. +Here is the model:

+ +

While typing, the word is entered letter by letter, so we can consider the letters as a series of observations. +Let's say we have just 4 words in our database:

+ +
    +
  • Tap
  • +
  • Trip
  • +
  • Trap
  • +
  • Trigger
  • +
+ +

and we want to predict the word after 1,2 or 3 written letters.

+ +

we have to define states and HMM parameters (state transitions, emissions and priors).

+ +

our [hidden ?] states would be :

+ +
    +
  • [ ][ ][ ][ ][ ][ ][ ] : no observations. I chose 7 [ ] because of the longest word
  • +
  • T [ ][ ][ ][ ][ ][ ]
  • +
  • T A [ ][ ][ ][ ][ ]
  • +
  • T R [ ][ ][ ][ ][ ]
  • +
  • … .
  • +
+ +

and we have to learn the transition probabilities for each pair of consecutive states (pairs that follow each other, like T and TR, or T and TA, but not TA and TR).

+ +

our prior probabilities are 1/4 because we have 4 words, but we may change them by learning which words are used more frequently.

+ +
+ +

Now, I have these questions :

+ +
    +
  1. Is an HMM suitable for this kind of problem?
  2. Are my assumptions (about the states and prior probabilities) correct?
  3. The states get out of hand when the word count increases, making the model very complex. Is that deduction true?
  4. How are the emission probabilities defined in this model?
  5. What am I missing in terms of parameters or definitions?
+ +

I'm a new contributor, be nice to me. regards :)

+",18124,,18124,,9/12/2019 9:55,9/12/2019 18:06,"is a ""word prediction"" problem, applicable using HMMs?",,1,0,,1/1/2022 10:46,,CC BY-SA 4.0 +15395,2,,15393,9/12/2019 9:39,,2,,"
+

Is there some solutions to update the model with only minor changes?

+
+ +

In general, assuming the new action choices are meaningful - in at least some states, the expected return from taking one of the new actions is higher than the current optimal policy using just the old action selection - then the answer here is ""no"".

+ +

At the very least you will need to re-train your agent such that it explores the new action choices, and learns the new value function and policy. You can of course start this re-training using data and internal representations learned from the earlier environment, and that may help if the new actions have not changed things too radically.

+ +

There are a couple of things that might help improve performance on this re-training:

+ +
    +
  • If the actions are not entirely discrete, but have some features that could be generalised from, you could base your value function or policy function estimators on those features instead of on discrete actions. So, for example, in DQN the input to the neural network would be the concatenated state and action feature vectors, and the output would be a single value (see the sketch after this list). Then it may generalise to the new actions quickly, in some cases perhaps even getting close to the correct value estimates from the start.

  • +
  • If you were training using DynaQ+, this includes an exploration term (which is added to planning assessments of immediate reward) that will prioritise exploring new state/action pairs when they appear. Other planning algorithms may have similar adjustments, although I am not aware of specific ones that could be dropped straight into a DQN agent.

  • +
+ +
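
+ +

A minimal sketch of the first point (the feature sizes are made up): one network scores a (state, action-features) pair, so a new action that comes with a feature vector can be evaluated without changing the output layer.

import tensorflow as tf

state_dim, action_feature_dim = 8, 4
inp = tf.keras.Input(shape=(state_dim + action_feature_dim,))   # concatenated [state, action features]
x = tf.keras.layers.Dense(64, activation='relu')(inp)
q_value = tf.keras.layers.Dense(1)(x)                           # single Q-value for this pair
q_net = tf.keras.Model(inp, q_value)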

If you know in advance which states the new actions are likely to be most useful in, you may be able to insert that knowledge into some initial action selection or value estimate helpers to avoid the need to train from scratch.

+",1847,,1847,,9/12/2019 11:26,9/12/2019 11:26,,,,0,,,,CC BY-SA 4.0 +15396,1,15400,,9/12/2019 10:26,,0,837,"

I have these training data to separate, the classes are rather randomly scattered:

+

+

My first attempt was using the tf.nn.relu activation function, but the output was stuck regardless of the number of training steps. So I guessed it could be because of dead ReLU units, thus I changed the activation function in the hidden layers to tf.nn.leaky_relu, but it's still no good.

+

It works when all hidden layers come with tf.sigmoid, yes, but why doesn't ReLU work here? Is it because of dead ReLU units, or exploding gradients, or anything else?

+

Source code (TensorFlow):

+ +
#core
+import time;
+
+#libs
+import tensorflow        as tf;
+import matplotlib.pyplot as pyplot;
+
+#mockup to emphasize value name
+def units(Num):
+  return Num;
+#end def
+
+#PROGRAMME ENTRY POINT==========================================================
+#data
+#https://i.imgur.com/uVOxZR7.png
+X = [[1,1],[1,2],[1,3],[2,1],[2,2],[2,3],[3,1],[3,2],[3,3],[4,1],[4,2],[4,3],[5,1],[6,1]];
+Y = [[0],  [1],  [0],  [1],  [0],  [1],  [0],  [2],  [1],  [1],  [1],  [0],  [0],  [1]  ];
+Max_X      = 6;
+Max_Y      = 2;
+Batch_Size = 14;
+
+#normalise
+for I in range(len(X)):
+  X[I][0] /= Max_X;
+  X[I][1] /= Max_X;
+  Y[I][0] /= Max_Y;
+#end for
+
+#model
+Input     = tf.placeholder(dtype=tf.float32, shape=[Batch_Size,2]);
+Expected  = tf.placeholder(dtype=tf.float32, shape=[Batch_Size,1]);
+
+#RELU DOESN'T WORK, DEAD RELU? SIGMOID WORKS BUT SLOW.
+#CHANGE TO tf.sigmoid OR tf.tanh AND IT WORKS:
+activation_fn = tf.nn.leaky_relu;
+
+#1
+Weight1   = tf.Variable(tf.random_uniform(shape=[2,units(60)], minval=-1, maxval=1));
+Bias1     = tf.Variable(tf.random_uniform(shape=[  units(60)], minval=-1, maxval=1));
+Hidden1   = activation_fn(tf.matmul(Input,Weight1) + Bias1);
+
+#2
+Weight2   = tf.Variable(tf.random_uniform(shape=[60,units(50)], minval=-1, maxval=1));
+Bias2     = tf.Variable(tf.random_uniform(shape=[   units(50)], minval=-1, maxval=1));
+Hidden2   = activation_fn(tf.matmul(Hidden1,Weight2) + Bias2);
+
+#3
+Weight3   = tf.Variable(tf.random_uniform(shape=[50,units(40)], minval=-1, maxval=1));
+Bias3     = tf.Variable(tf.random_uniform(shape=[   units(40)], minval=-1, maxval=1));
+Hidden3   = activation_fn(tf.matmul(Hidden2,Weight3) + Bias3);
+
+#4
+Weight4   = tf.Variable(tf.random_uniform(shape=[40,units(30)], minval=-1, maxval=1));
+Bias4     = tf.Variable(tf.random_uniform(shape=[   units(30)], minval=-1, maxval=1));
+Hidden4   = activation_fn(tf.matmul(Hidden3,Weight4) + Bias4);
+
+#5
+Weight5   = tf.Variable(tf.random_uniform(shape=[30,units(20)], minval=-1, maxval=1));
+Bias5     = tf.Variable(tf.random_uniform(shape=[   units(20)], minval=-1, maxval=1));
+Hidden5   = activation_fn(tf.matmul(Hidden4,Weight5) + Bias5);
+
+#out
+Weight6   = tf.Variable(tf.random_uniform(shape=[20,units(1)], minval=-1, maxval=1));
+Bias6     = tf.Variable(tf.random_uniform(shape=[   units(1)], minval=-1, maxval=1));
+Output    = tf.sigmoid(tf.matmul(Hidden5,Weight6) + Bias6);
+
+Loss      = tf.reduce_sum(tf.square(Expected-Output));
+Optimiser = tf.train.GradientDescentOptimizer(1e-1);
+Training  = Optimiser.minimize(Loss);
+
+#training
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed   = {Input:X, Expected:Y};
+Losses = [];
+Start  = time.time();
+
+for I in range(10000):
+  if (I%1000==0):
+    Lossvalue = Sess.run(Loss, feed_dict=Feed);
+    Losses   += [Lossvalue];
+    
+    if (I==0):
+      print("Loss:",Lossvalue,"(first)");
+    else:
+      print("Loss:",Lossvalue);
+  #end if
+  
+  Sess.run(Training, feed_dict=Feed);
+#end for
+
+Lastloss = Sess.run(Loss, feed_dict=Feed);
+Losses  += [Lastloss];
+print("Loss:",Lastloss,"(last)");
+
+Finish = time.time();
+print("Time:",Finish-Start,"seconds");
+
+#eval
+print("\nEval:");
+Evalresults = Sess.run(Output,feed_dict=Feed).tolist();
+for I in range(len(Evalresults)):
+  Evalresults[I] = [round(Evalresults[I][0]*Max_Y)];
+#end for
+print(Evalresults);
+Sess.close();
+
+#result: diagram
+print("\nLoss curve:");
+pyplot.plot(Losses,"-bo");
+#eof
+
+",2844,,145,,4/22/2023 18:16,4/22/2023 18:16,"Network doesn't converge with ReLU or Leaky ReLU, but works well with sigmoid/tanh",,1,1,,,,CC BY-SA 4.0 +15397,1,,,9/12/2019 11:01,,1,217,"

I am confused about understanding maximum likelihood as a classifier. I know what a Bayesian network is, and I know that ML is used for estimating the parameters of models. Also, I read that there are two methods to learn the parameters of a Bayesian network: MLE and the Bayesian estimator.

+ +

The question which confused me are the following.

+ +
    +
  1. Can we use ML as a classifier? For example, can we use ML to model users' behavior in order to identify their activity? If yes, how? What is the likelihood function that should be optimized? Should I assume a normal distribution of users and optimize it?

  2. +
  3. If ML can be used as a classifier, what is the difference between ML and BN to classify activities? What are the advantages and disadvantages of each model?

  4. +
+",29645,,2444,,2/9/2020 18:01,7/24/2023 5:04,Can maximum likelihood be used as a classifier?,,2,1,,,,CC BY-SA 4.0 +15398,1,15402,,9/12/2019 11:08,,1,2820,"

Neural networks (NNs) are used as approximators in reinforcement learning (RL). To update the policy in RL, the actor network's gradients w.r.t its weights are needed. Since NN doesn't have a mathematical expression to work with, how can its derivatives be calculated?

+",29621,,2444,,9/13/2019 22:06,9/13/2019 22:06,"How can the derivative of a neural network be calculated, given no mathematical expression?",,1,1,,,,CC BY-SA 4.0 +15400,2,,15396,9/12/2019 11:20,,2,,"

I don't think dead ReLU units are the main cause, although they may be happening as part of the NN failing.

+ +

The NN architecture is too complex for the given task (too deep, too many neurons) and that means that any problems you have with other design choices will tend to get amplified. It could be that your NN is close to diverging on the given data and architecture, and that sigmoid is more resilient to it.

+ +

I'd suggest the following changes (a rough sketch combining a few of them follows the list):

+ +
    +
  • Dropping the learning rate, try 0.01 or even 0.001

  • +
  • Normalising the two input features. NNs like to work with data that is mean 0, standard deviation 1, although there is some flexibility here, your values ranging from 0 to 6 are probably starting to cause minor problems

  • +
  • Look at standard initialisation routines for weights, available within TensorFlow framework, such as Glorot uniform. Your random -1 to +1 is probably too high a range for the given network, and NNs - especially ""deep"" NNs with 3+ hidden layers - are very sensitive to how initial weights are set.

  • +
  • Simplify the network architecture a little. Five hidden layers and 200 neurons seems a bit much for your goal of over-fitting to this small data set. Try something like 3 hidden layers and 50 neurons.

  • +
  • Your output layer and loss function are designed for a regression task, but you mention that the goal is to identify classes. You need a softmax layer and multiclass log loss for predicting exclusive classes.

  • +
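
+ +

A rough Keras sketch combining several of these suggestions (it is not a drop-in fix for the code above, and the data here is just a stand-in): normalised inputs, Glorot initialisation, a smaller network, and a softmax output with multiclass log loss instead of a sigmoid regression head.

import numpy as np
import tensorflow as tf

X = np.random.rand(14, 2).astype('float32')            # stand-in for the normalised features
y = np.random.randint(0, 3, size=(14,))                 # 3 exclusive classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='relu',
                          kernel_initializer='glorot_uniform', input_shape=(2,)),
    tf.keras.layers.Dense(50, activation='relu', kernel_initializer='glorot_uniform'),
    tf.keras.layers.Dense(3, activation='softmax'),      # one output per class
])
model.compile(optimizer=tf.keras.optimizers.SGD(0.01),
              loss='sparse_categorical_crossentropy')
model.fit(X, y, epochs=100, verbose=0)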
+",1847,,1847,,9/12/2019 11:36,9/12/2019 11:36,,,,3,,,,CC BY-SA 4.0 +15401,2,,15397,9/12/2019 13:55,,0,,"

If you read nothing else: the maximum likelihood estimate picks the parameters under which the observed data is most probable. If you have a range of points (2, 3, 4, 5, 71) and fit a Gaussian, the MLE for the mean is just the sample mean (17 here), so it is pulled strongly by the outlier. MLE speeds up finding good input parameters, usually for a different classifier.

+ +

To answer your question:

+ +

1) Columbia University have a great example of using MLE classifiers, where everything is broken down into bitesize (or bytesize) chunks. Read this. Seriously.

+ +

2) In short, MLE is best used for simple, univariate distributions. It doesn't scale well to big problems, but it is way faster than a Bayesian network for simple tasks like predicting your height based on the heights of your immediate relatives. If you want to get technical, the conditional probability network of the Bayesian model reveals insights faster than the chain multiplication of the more primitive MLE.
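
+ +

As a tiny illustration (my own, not from the Columbia notes), the MLE of a Gaussian's parameters is just the sample mean and the (biased) sample variance:

import numpy as np

heights = np.array([1.70, 1.75, 1.80, 1.65, 1.78])     # made-up family heights in metres
mu_mle = heights.mean()
var_mle = ((heights - mu_mle) ** 2).mean()              # note: divides by n, not n - 1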

+ +

Hope it helps!

+",28446,,,,,9/12/2019 13:55,,,,0,,,,CC BY-SA 4.0 +15402,2,,15398,9/12/2019 14:13,,5,,"

I think what you mean to ask is how can differentiation occur when there's no obvious neural network function to differentiate?

+ +

Don't worry - lots of people get confused about this, because it seems like an obvious hole in the puzzle. As mentioned by @AtillaOzgur, neural networks use partial differentiation through backpropagation.

+ +

First, take the output of all the neurons (except the one you're about to differentiate by) as a function:

+ +

+ +

The above diagram represents the output of one neuron. Do this for every neuron in your network until you have a set. Let's call this set function NN. The output of NN (given all your neuron outputs) is what you'd normally plug into your RL policy.

+ +

You then differentiate NN by a single neuron (n) as shown:

+ +

$$\frac{\partial NN}{\partial n} = \lim_{h\to0} \left(\frac{NN(\text{all other neuron outputs}, n + h) - NN(\text{all other neuron outputs}, n)}{h} \right)$$

+ +

In reality however, it's the partial derivative of the activation function (A) with respect to the output of a single neuron (n):

+ +

$$\frac{\partial A}{\partial n}$$

+ +

So, depending on your activation function, you just plug in your neuron output to a certain expression and you've found the value by which to update your neural network.
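
+ +

In practice, deep learning frameworks obtain all of these partial derivatives for you via automatic differentiation (backpropagation), so you never write the expression by hand. A minimal TensorFlow sketch (my own illustration):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation='tanh', input_shape=(4,)),
                             tf.keras.layers.Dense(1)])
x = tf.random.normal((1, 4))

with tf.GradientTape() as tape:
    y = model(x)                                       # forward pass through the network
grads = tape.gradient(y, model.trainable_variables)    # d(output)/d(weights) for every layer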

+ +

I hope this helps. Deep learning is definitely a field with a learning curve, but places like StackExchange are great resources to help you out.

+",28446,,2444,,9/13/2019 22:04,9/13/2019 22:04,,,,2,,,,CC BY-SA 4.0 +15403,1,15425,,9/12/2019 15:46,,1,52,"

A convolutional neural network (CNN) can easily predict the class of an object in an image.

+ +

Can a CNN distinguish the Pisa Tower from other buildings, or Hagia Sophia from other mosques easily? If it can, how many training images can be sufficient? Do I need thousands of training images of that specific thing to distinguish it?

+ +

(This is a term project recommendation about deep neural networks, so I need to understand its feasibility.)

+",29650,,2444,,6/6/2020 0:47,6/6/2020 0:47,How well can a CNN distinguish an object from its class?,,1,0,0,,,CC BY-SA 4.0 +15404,2,,13135,9/12/2019 17:04,,1,,"

You're correct: the return is the discounted future reward from one iteration, while the expected return is averaged over a bunch of iterations.

+",29652,,,,,9/12/2019 17:04,,,,0,,,,CC BY-SA 4.0 +15405,2,,15394,9/12/2019 18:06,,1,,"

I think an HMM is overkill for this problem. You kind of have 'hidden' states, but they are very limited and dependent on the full sequence of previous states, which you probably want to avoid in order to make the best use of the HMM's features. It also, as you rightly say, leads to a proliferation of states: each dictionary item adds as many states to the model as it has letters.

+ +

The way I would approach this is to use a trie which contains your dictionary, and then you traverse the trie as the user types characters. At each point you have a sub-trie which gives you the possible completions of the word. You could even (if available) augment this with probabilities to guess the most likely word (though this does not take into account the previous word in the sentence, which might be more useful).

+ +

If you want to learn about HMMs, one possible application would be a weather forecast (I thought this was an example Lawrence Rabiner uses in his tutorial, but it's actually on the HMM Wikipedia page). Or, if you want to work with texts, use parts-of-speech tagging. You want to have something where each observation can belong to several possible states (eg light can belong to adj, noun, and verb). Here you would have the words as observations and the part-of-speech tags as states, which keeps the model at a reasonably small size (and is thus easier to train).

+",2193,,,,,9/12/2019 18:06,,,,2,,,,CC BY-SA 4.0 +15406,2,,10975,9/12/2019 18:56,,1,,"

I can comment on several properties of MSE and related losses.

+ +

As you mentioned, MSE (a.k.a. the $l_2$ loss) is convex, which is a great property in optimization, as one can find a single global optimum. MSE is used in linear and non-linear least squares problems, which form the basis of many widely used statistical methods. I would imagine the math and implementation would be more difficult if one used a higher-order loss (e.g. $x^3$), and that would also prove futile, because MSE already possesses great statistical and optimization properties on its own.
+Another important reason one wouldn't use higher-order loss functions in regression is that they would be extremely prone to outliers. MSE on its own already weighs outliers much more than the $l_1$ loss does! And in real-world data there are always noise and outliers present. In comparison, the $l_1$ loss is more difficult to optimize, one reason being that it is not differentiable at zero.

+ +

Other interesting losses you might want to read about are the $l_0$ and $l_{\infty}$ losses, both of which have their own trade-offs in an optimization sense.

+",28608,,,,,9/12/2019 18:56,,,,0,,,,CC BY-SA 4.0 +15407,2,,8333,9/12/2019 19:42,,0,,"

I think in the specific case you described if the representatives are now compatible you would move the lower fitness representative into the higher fitness representatives species, then with the remainder of the lower representatives species (all other genomes that didnt have compat distances close enough to move into the other species) you would just randomly pick a new representative and carry on with the process.

+",20044,,,,,9/12/2019 19:42,,,,0,,,,CC BY-SA 4.0 +15408,1,,,9/12/2019 20:29,,4,3015,"

Say I have x,y data connected by a function with some additional parameters (a,b,c):

+ +

$$ y = f(x ; a, b, c) $$

+ +

Now given a set of data points (x and y) I want to determine a,b,c. If I know the model for $f$, this is a simple curve fitting problem. What if I don't have $f$ but I do have lots of examples of y with corresponding a,b,c values? (Or alternatively $f$ is expensive to compute, and I want a better way of guessing the right parameters without a brute force curve fit.) Would simple machine-learning techniques (e.g. from sklearn) work on this problem, or would this require something more like deep learning?

+ +

Here's an example generating the kind of data I'm talking about:

+ +
import numpy as np                                                                                                                                                                                                                           
+import matplotlib.pyplot as plt                                                                                                                                                                                                              
+
+Nr = 2000                                                                                                            
+Nx = 100                                                                                                             
+x = np.linspace(0,1,Nx)                                                                                              
+
+f1 = lambda x, a, b, c : a*np.exp( -(x-b)**2/c**2) # An example function                                             
+f2 = lambda x, a, b, c : a*np.sin( x*b + c)        # Another example function                                        
+prange1 = np.array([[0,1],[0,1],[0,.5]])                                                                             
+prange2 = np.array([[0,1],[0,Nx/2.0],[0,np.pi*2]])                                                                   
+#f, prange = f1, prange1                                                                                             
+f, prange = f2, prange2                                                                                              
+
+data = np.zeros((Nr,Nx))                                                                                             
+parms = np.zeros((Nr,3))                                                                                             
+for i in range(Nr) :                                                                                                 
+    a,b,c = np.random.rand(3)*(prange[:,1]-prange[:,0])+prange[:,0]                                                  
+    parms[i] = a,b,c                                                                                                 
+    data[i] = f(x,a,b,c) + (np.random.rand(Nx)-.5)*.2*a                                                              
+
+plt.figure(1)                                                                                                        
+plt.clf()                                                                                                            
+for i in range(3) :                                                                                                  
+    plt.title('First few rows in dataset')                                                                           
+    plt.plot(x,data[i],'.')                                                                                          
+    plt.plot(x,f(x,*parms[i]))                                                                                       
+
+ +

+ +

Given data, could you train a model on half the data set, and then determine the a,b,c values from the other half?

+ +

I've been going through some sklearn tutorials, but I'm not sure any of the models I've seen apply well to this type of problem. For the Gaussian example, I could do it by extracting features related to the parameters (e.g. the first and second moments, the 5% and 95% percentiles, etc.) and feeding those into an ML model, which would give good results, but I want something that would work more generally, without assuming anything about $f$ or its parameters.
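
+ +

For reference, this is the kind of simple approach I had in mind (an untested sketch that reuses the data and parms arrays generated above): treat each sampled curve as the feature vector and regress the three parameters directly.

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X_train, X_test, p_train, p_test = train_test_split(data, parms, test_size=0.5)
reg = RandomForestRegressor(n_estimators=100)
reg.fit(X_train, p_train)               # multi-output regression: predicts (a, b, c) per curve
print(reg.score(X_test, p_test))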

+",21099,,21099,,9/16/2019 19:08,9/16/2019 19:08,Can ML be used to curve fit data based on dataset of example fits?,,2,0,,,,CC BY-SA 4.0 +15409,1,15423,,9/12/2019 21:18,,4,208,"

I am playing around with neural networks in Tensorflow and I figured an interesting test would be whether I can write a calculator using a Tensorflow Neural Network.

+ +

I started with simple addition and it kinda worked (so given 2, 4 it would get around 5.9 or 6.1).

+ +

Then I wanted to add the ability to calculate using ""+"", ""-"", and ""*"".

+ +

Here is the code I came up with in the end:

+ + + +
import numpy as np
+import tensorflow as tf
+from random import randrange
+
+def generate_input(size):
+    nn_input = []
+    for i in range(0,size):
+        symbol = float(randrange(3))
+        nn_input.append([
+                float(randrange(1000)),
+                float(randrange(1000)),
+                1 if symbol == 0 else 0,
+                1 if symbol == 1 else 0,
+                1 if symbol == 2 else 0,
+                ])
+    return nn_input
+
+def generate_output(input_data):
+    return [[generate_single_output(i)] for i in input_data]
+
+def generate_single_output(input_data):
+    plus = input_data[2]
+    minus = input_data[3]
+    multiplication = input_data[4]
+
+    if (plus):
+        return input_data[0] + input_data[1]
+
+    if (minus):
+        return input_data[0] - input_data[1]
+
+    if (multiplication):
+        return input_data[0] * input_data[1]
+
+def user_input_to_nn_input(user_input):
+    symbol = user_input[1]
+    return np.array([[
+            float(user_input[0]),
+            float(user_input[2]),
+            1 if symbol == '+' else 0,
+            1 if symbol == '-' else 0,
+            1 if symbol == '*' else 0,
+            ]])
+
+
+if __name__ == '__main__':
+    model = tf.keras.models.Sequential([
+        tf.keras.layers.Dense(64, activation='relu', input_shape=(5,)),
+        tf.keras.layers.Dense(64, activation='relu'),
+        tf.keras.layers.Dense(1),
+        ])
+
+    model.compile(tf.keras.optimizers.RMSprop(0.001), loss=tf.keras.losses.MeanSquaredError())
+
+
+    input_data = np.array(generate_input(10000))
+    output_data = np.array(generate_output(input_data))
+
+    model.fit(input_data, output_data, epochs=20)
+
+    while True:
+        user_calculation = input(""Enter expression (e.g. 2 + 3):"")
+        user_input = user_calculation.split()
+        nn_input = user_input_to_nn_input(user_input)
+        print(model.predict(nn_input)[0][0])
+
+
+ +

The idea is built on this tutorial: https://www.tensorflow.org/tutorials/keras/basic_regression

+ +

The input is 5 fields: number 1, number 2, plus?, minus?, multiplication?

+ +

Where the last 3 are simply 1 or 0 depending on whether that is the calculation I am trying to do.

+ +

As an output for say [1,4,1,0,0] I would expect 1 + 4 = 5 +for [1,4,0,1,0] I would expect 1 - 4 = -3 etc.

+ +

For some reason though the numbers I am getting are completely off and seem random.

+ +

Basically, I am trying to understand where I went wrong. +The data being input to the NN seems correct, and the model is based on the one used in the tutorial I quoted (and the problems seem fairly similar, so I expect that if one works the other would too).

+",27615,,,,,9/23/2019 7:56,Why isn't my Neural Network based calculator working?,,1,0,,,,CC BY-SA 4.0 +15410,5,,,9/12/2019 21:43,,0,,"

https://en.wikipedia.org/wiki/Gamification

+",1671,,1671,,9/12/2019 21:43,9/12/2019 21:43,,,,0,,,,CC BY-SA 4.0 +15411,4,,,9/12/2019 21:43,,0,,The application of elements of game play to address real-world problems. For questions on utilizing this process in the field of AI.,1671,,1671,,9/12/2019 21:43,9/12/2019 21:43,,,,0,,,,CC BY-SA 4.0 +15412,2,,11345,9/12/2019 22:05,,0,,"

well if you want to just train genomeN against genomeN-1 then you just need to loop through genomes using indexing and start at 1 so you can always do genome1 = genomes[ix-1] and genome2 = genomes[ix] and then modify your fitness evaluation code to run both at the same time and applies a fitness function to each. Also maybe you iterate by twos so that you only eval each net once.

+",20044,,20044,,2/10/2020 15:23,2/10/2020 15:23,,,,0,,,,CC BY-SA 4.0 +15413,1,,,9/13/2019 4:21,,1,20,"

How good would a CBIR system trained on a dataset, for example, DELF, trained on the Google Landmarks dataset, perform when evaluated on a contextually different dataset such as the WANG or the COREL dataset without retraining?

+",252,,2444,,9/13/2019 12:49,9/13/2019 12:49,CBIR Evaluation on contextually different data,,0,1,,,,CC BY-SA 4.0 +15415,1,,,9/13/2019 9:22,,1,30,"

Let's say that we have a test data set with $20,000$ observations for which we want to make a binary prediction for. When we apply our best trained model to this data set (e.g. logistic regression with threshold = 0.5, data_size = 4000 rows, 5 fold cv), only about $1 \%$ of the predictions are positive. That is, $p(\text{positive}) \geq 0.5$ is true only for about $1 \%$ of the predictions. We expect many more positives since the recall of the positive class of the best trained model is about $40 \%$. If we manually lower the threshold to $0.45$, then about $10 \%$ of the predictions are positive. Assume that the $20,000$ observations come from the same distribution as the training/validation data and are independent samples.

+ +

Questions.

+ +
    +
  1. Why would a a model with decent recall for the positive class predict very few positives in the out of sample data?
  2. +
  3. If (1) is true, then is it appropriate to lower the threshold for positive (e.g. $0.5$ to $0.45$) to increase the number of predicted positives in the test set?
  4. +
+",28575,,,,,9/13/2019 9:22,Improving Recall of a Certain Class,,0,0,,,,CC BY-SA 4.0 +15416,1,15418,,9/13/2019 9:42,,2,116,"

I often read ""the performance of the system is satisfactory"" or "" when your model is satisfactory"".

+ +

But what does it mean in the context of Machine Learning?

+ +

Are there any clear and/or generic criteria for Machine Learning model to be satisfactory for commercial use?

+ +

Is the decision of which model to choose, or whether additional model adjustments or improvements are needed, based on the data scientist's experience, customer satisfaction, or benchmarking against academic or market competition results?

+",28605,,1671,,10/15/2019 19:17,10/15/2019 19:17,What are criteria for ML model to be satisfactory for commercial use?,,2,1,,,,CC BY-SA 4.0 +15417,1,15420,,9/13/2019 9:50,,2,620,"

Can an RL algorithm trained in one environment be successful in a different one?

+ +

For example, if I train a model to go through one labyrinth, could this model also go through a different but similar labyrinth or would it need a new training process?

+ +

By similar, I mean like these two:

+ +

+ +

+ +

But with this one being not similar:

+ +

+",22659,,2444,,9/13/2019 12:23,9/13/2019 12:23,Can an RL algorithm trained in one environment be successful in a different one?,,1,2,0,,,CC BY-SA 4.0 +15418,2,,15416,9/13/2019 9:55,,2,,"

The answer is ""when it works well enough to perform the task that you have set for it"". +It is a good idea to set your performance criteria in advance, so that you can clearly identify the goal that you are trying to achieve and also so that you will know whether the model is likely to be successful or not.

+",12509,,,,,9/13/2019 9:55,,,,0,,,,CC BY-SA 4.0 +15419,2,,15368,9/13/2019 10:31,,0,,"

I found out that the curvy zigzag green line is not polynomial: if it were polynomial, a vertical line would not cut that curvy line more than once.

+ +

It's the combination of straight lines (linear separation) of multiple neurons in the same layer. So it's linear separation ('linear' by previous_layer_output*weight, 'separation' by activation function), at multiple nodes.

+",2844,,,,,9/13/2019 10:31,,,,0,,,,CC BY-SA 4.0 +15420,2,,15417,9/13/2019 10:42,,0,,"
+

Can an RL algorithm trained in one environment be succesfull on a different one?

+
+ +

Strictly, the answer here is ""no"". You train an agent to solve a single environment. If a second environment is similar enough, you can probably re-train the agent on the new environment with little or no change to the agent, and get results just as good in the second environment.

+ +

In some cases, you could even start with the trained agent and ""fine tune"" it to the new environment. That probably would not work well with the labyrinth example though, especially if the start and end points are moved.

+ +
+

Example: If I train a model to go through one labyrinth, could this model also go through different but similar labyrinth or would it need a new training process?

+
+ +

This could be subtly different. It is down to how you define the environment. It is possible to define an environment not as ""this maze"" but for example ""all possible mazes in a 30x20 grid pattern"". To do so, you need to make the maze configuration part of the environment's state.

+ +

Expanding from a single example environment to all similar environments has a cost. In the labyrinth example, this is significant. A single 30x20 maze has 600 states, which is a trivial problem in RL. All possible mazes of the same size have closer to $2^{1200}$ states, which is much larger and requires different kinds of approaches and different RL methods.

+ +

You would expect to train on many example mazes (typically a randomly generated new one for each episode) and use some kind of generalisation - e.g. a neural network, maybe a CNN due to the grid design - when handling the value function or policy function. Training time for a single maze will be under a second, and the policy will be perfect. Training time for all possible mazes is measured in hours or days, and the policy will still get things wrong from time to time.

+",1847,,,,,9/13/2019 10:42,,,,1,,,,CC BY-SA 4.0 +15422,1,,,9/13/2019 12:04,,3,842,"

I have a regression MLP network with all input values between 0 and 1, and I am using MSE as the loss function. The minimum MSE over the validation sample set comes to 0.019. So how can I express the 'accuracy' of this network in 'lay' terms? If RMSE is 'in the units of the quantity being estimated', does this mean we can say: ""The network is on average (1-SQRT(0.019))*100 = 86.2% accurate""?

+ +

Also, in the validation data set, there are three 'extreme' expected values. The lowest MSE results in predicted values closer to these three values, but not as close to all the other values, whereas a slightly higher MSE results in the opposite - predicted values further from the 'extreme' values but more accurate relative to all other expected values (and this outcome is actually preferred in the case I'm dealing with). I assume this can be explained by RMSE's sensitivity to outliers?

+",27920,,32410,,2/23/2021 12:36,2/23/2021 12:36,How to express accuracy of a regression ANN that uses MSE loss function?,,2,0,,,,CC BY-SA 4.0 +15423,2,,15409,9/13/2019 12:39,,6,,"

A neural network is not good at selecting a function based on those 3 input parameters, because of the way a neuron is set up.

+ +

What you should do is either make a neural network for each operation, or use different input neurons for each operation, e.g. 2 input neurons for addition, 2 for multiplication, and 2 for subtraction: 6 inputs in total, of which 4 will always be 0 (see the sketch below).
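
+ +

A rough sketch of that input encoding (not tested against your training script): route the two operands into different slots depending on the operation, so 4 of the 6 inputs are always zero.

def encode(a, b, op):
    slots = {'+': 0, '-': 2, '*': 4}
    x = [0.0] * 6
    x[slots[op]] = float(a)
    x[slots[op] + 1] = float(b)
    return x

print(encode(1, 4, '-'))   # [0.0, 0.0, 1.0, 4.0, 0.0, 0.0]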

+ +

This will make it easier for the neural network to calculate the result.

+",29671,,25982,,9/23/2019 7:56,9/23/2019 7:56,,,,3,,,,CC BY-SA 4.0 +15424,2,,12455,9/13/2019 13:45,,0,,"

If the compatibility distances are coming out to be > float.max that tells me something may be wrong with that calculation, I would suggest setting break points and debugging that code, I usually have my threshold set 5-10 and can noticeably tell when a change in species size when i move the threshold to something like 10

+",20044,,20044,,9/13/2019 14:11,9/13/2019 14:11,,,,0,,,,CC BY-SA 4.0 +15425,2,,15403,9/13/2019 14:48,,0,,"

To help you understand the feasibility of your project, these posts could be a good start: +https://datascience.stackexchange.com/questions/13181/how-many-images-per-class-are-sufficient-for-training-a-cnn

+ +

https://stats.stackexchange.com/questions/226672/how-few-training-examples-is-too-few-when-training-a-neural-network

+ +

That being said, the short answer would be: it depends. It depends on how precise you want to be, on the difficulty of the task, on the infrastructure you have for training, etc.

+ +

For the images, you should not worry, you could start with the ImageNet dataset: +For construction and buildings: +http://www.image-net.org/explore?wnid=n04341686

+ +

For mosques: +http://www.image-net.org/synset?wnid=n03788195

+ +

You can then use data augmentation techniques to enhance the size of your training set. Here is a library I have used in the past which helped me greatly to achieve this task: https://github.com/aleju/imgaug
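
As a rough sketch of what that can look like (the exact imgaug API differs slightly between versions; this assumes the sequential-augmenter interface):

    import numpy as np
    import imgaug.augmenters as iaa

    # a batch of images as uint8 arrays of shape (N, H, W, 3)
    images = np.random.randint(0, 255, size=(16, 224, 224, 3), dtype=np.uint8)

    seq = iaa.Sequential([
        iaa.Fliplr(0.5),                # mirror half of the images horizontally
        iaa.Affine(rotate=(-15, 15)),   # small random rotations
        iaa.GaussianBlur(sigma=(0, 1.0))
    ])

    augmented = seq(images=images)      # older imgaug versions: seq.augment_images(images)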

+ +

Hope that helps!

+",28010,,,,,9/13/2019 14:48,,,,0,,,,CC BY-SA 4.0 +15428,2,,15388,9/13/2019 19:43,,3,,"

There are different actor-critic (AC) algorithms with different convergence guarantees. For example, AC algorithms where the critic is tabular have different convergence guarantees than AC algorithms where the critic is a neural network (function approximation). Most convergence proofs assume that the actor and the critic operate at different time scales, but, for example, in the paper A Convergent Online Single Time Scale Actor-Critic Algorithm (2010) this assumption is not made.

+ +

In the paper Incremental Natural Actor-Critic Algorithms (2007), the authors propose four different AC algorithms that use function approximation (neural networks) to represent the critic. Three of these proposed AC algorithms are based on natural policy gradients. In section 6 of the extended and more technical version of the mentioned paper, Natural Actor-Critic Algorithm (2007), the authors prove the convergence of the parameters of the policy and value function to a local maximum of an objective (or performance) function, which corresponds to the average reward (equation 2) plus a measure of the temporal-difference (TD) error inherent in the function approximation.

+",2444,,2444,,9/13/2019 19:49,9/13/2019 19:49,,,,1,,,,CC BY-SA 4.0 +15429,2,,13088,9/13/2019 20:09,,6,,"

A stationary policy, $\pi_t$, is a policy that does not change over time, that is, $\pi_t = \pi, \forall t \geq 0$, where $\pi$ can either be a function, $\pi: S \rightarrow A$ (a deterministic policy), or a conditional density, $\pi(A \mid S)$ (a stochastic policy). A non-stationary policy is a policy that is not stationary. More precisely, $\pi_i$ may not be equal to $\pi_j$, for $i \neq j \geq 0$, where $i$ and $j$ are thus two different time steps.

+

There are problems where a stationary optimal policy is guaranteed to exist. For example, in the case of a stochastic (there is a probability density that models the dynamics of the environment, that is, the transition function and the reward function) and discrete-time Markov decision process (MDP) with finite numbers of states and actions, and bounded rewards, where the objective is the long-run average reward, a stationary optimal policy exists. The proof of this fact is in the book Markov Decision Processes: Discrete Stochastic Dynamic Programming (1994), by Martin L. Puterman, which apparently is not freely available on the web.

+",2444,,36821,,2/25/2021 0:17,2/25/2021 0:17,,,,1,,,,CC BY-SA 4.0 +15430,2,,15422,9/14/2019 0:50,,2,,"

You cannot use the error to reliably measure accuracy. The error is best used as a measure of how fast the model is currently learning.

+ +

As an example, using different loss functions (cross entropy vs MSE) results in massively different values for the error at similar accuracy.

+ +

Also, considering this, an error of 0.0000000001 quite often comes with lower validation set accuracy than an error of 0.1, as the former model is likely overtrained.

+ +

As for your second question, yes, this is because MSE has a huge bias towards outliers. I have personally found regression networks to struggle in most circumstances, so if it is at all possible to turn the network into a classifier, you may see an improvement.

+",26726,,,,,9/14/2019 0:50,,,,1,,,,CC BY-SA 4.0 +15431,1,15436,,9/14/2019 8:58,,4,431,"

When you train a model using Monte Carlo-based learning, the state and action taken at each step are recorded, and then at some point an end state is reached and the agent receives some reward - what do you do at that point?

+ +

Let's say there were 100 steps taken to reach this final reward state, would you update the full rollout of those 100 state/action/rewards and then begin the next episode, or do you then 'bubble up' that final reward to the previous states and update on those as well?

+ +

E.g.

+ +
    +
  • Process an update for the full 100 experiences. Can either stop here, or...

  • +
  • Bubble up the final reward to the 99th step and process an update for the 99 state/action/reward.

  • +
  • Bubble up the final reward to the 98th step and process an update for the 98 state/action/reward.

  • +
  • and so on right the way to the first step...

  • +
+ +

Or, do you just process an update for the full 100-step roll-out and that's it?

+ +

Or perhaps these are two different approaches? Is there a situation where you'd use one rather than the other?

+",20352,,2444,,1/2/2021 12:46,1/2/2021 12:46,"In Monte Carlo learning, what do you do when an end state is reached, after having recorded the previously visited states and taken actions?",,1,1,,,,CC BY-SA 4.0 +15432,2,,15422,9/14/2019 10:55,,1,,"

Just as a general remark, notice that technically we don't use the term ""accuracy"" for regression settings, such as yours - only for classification ones.

+ +
+

If RMSE is 'in the units of the quantity being estimated', does this mean we can say: ""The network is on average (1-SQRT(0.019))*100 = 86.2% accurate""?

+
+ +

No.

+ +

The advantage of the RMSE, as you have correctly quoted, is that it is in the same units as your predicted quantity; so, if this quantity is, say, USD, you can say (to the business user) that the error of the model is SQRT(0.019) ≈ 0.14 USD, and this can be perfectly fine by itself. But you cannot convert it to a percentage - it would be meaningless.

+ +

If required to give the performance of a regression model in a percentage, your best option would be the Mean Absolute Percentage Error (MAPE).
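
For instance, a minimal sketch of MAPE (note that it is undefined whenever a true value is exactly zero, which matters when the targets lie between 0 and 1):

    import numpy as np

    def mape(y_true, y_pred):
        # Mean Absolute Percentage Error, in percent
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

    print(mape([0.5, 0.8, 0.2], [0.45, 0.9, 0.25]))  # roughly 15.8 (%)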

+",11539,,,,,9/14/2019 10:55,,,,0,,,,CC BY-SA 4.0 +15433,1,,,9/14/2019 23:48,,3,35,"

I know how to train a NN for recognizing handwritten digits (e.g. using the MNIST database).

+ +

I'm wondering how to accomplish the same ""online"", i.e. during the process of writing: while I'm writing down a digit and keeping ""the pen down"", the NN recognizes the digit. +To make it simpler, I could assume one stroke order only for each digit (e.g. each digit is written with a certain order of strokes, like digit 1 being just a vertical line drawn from top to bottom).

+ +

Which is the most suitable NN for the purpose and how to accomplish this?

+",13087,,,,,9/14/2019 23:48,Handwritten digits recognition during the process of writing,,0,3,,,,CC BY-SA 4.0 +15434,1,15553,,9/15/2019 2:22,,3,40,"

I have a dataset of images with 9 different classes. However, in my specific problem, there are different categories with the same type of associated image, which can only be differentiated by an associated matrix.

+ +

I want to train a neural network with the images and the associated matrix as inputs. What type of architecture is good to use? Or where can I find bibliography about it?

+",29693,,2444,,9/15/2019 11:38,9/20/2019 3:23,Image classification with an associated matrix,,1,2,,,,CC BY-SA 4.0 +15436,2,,15431,9/15/2019 9:46,,3,,"

I am assuming you are asking about Monte Carlo simulation for value estimates, perhaps as part of a Monte Carlo control learning agent.

+

The basic approach of all value-based methods is to estimate an expected return, often the action value $Q(s,a)$ which is a sum of expected future reward from taking action $a$ in state $s$. Monte Carlo methods take a direct and simple approach to this, which is to run the environment to the end of an episode and measure the return. This return is a sample out of all possible returns, so it can just be averaged with other observed returns to obtain an estimate. A minor complication is that the return depends on the current policy, and in control scenarios that will change, so the average needs to be recency-weighted for control e.g. using a fixed learning rate $\alpha$ in an update like $Q(s,a) \leftarrow Q(s,a) + \alpha(G - Q(s,a))$

+

Given this, you can run pretty much any approach that calculates the returns from observed state/action pairs. You will find that the "bubble up" approach is used commonly - the process usually termed backing up - working backwards from the end of the episode.

+

If you have an episode from $t=0$ to $t=T$ and records of states, actions, rewards $s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3 . . . s_{T-1}, a_{T-1}, r_T, s_T$ (note indexing, reward follows state/action, there is no $r_0$ and no $a_T$), then the following algorithm could be used to calculate individual returns $g_t$:

+
+

$g \leftarrow 0$

+

for $t = T-1$ down to $0$:

+

$\qquad g \leftarrow r_{t+1} + \gamma g$

+

$\qquad Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha(g - Q(s_t,a_t))$

+
+

This working backwards is an efficient way to process rewards and assign them with discounting to action values for all state, action pairs observed in the episode.
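
A minimal tabular sketch of that backward pass in Python (here rewards[t] stores $r_{t+1}$, the reward received after taking actions[t] in states[t], and Q is a dict mapping (state, action) pairs to value estimates):

    def monte_carlo_update(Q, states, actions, rewards, gamma=0.99, alpha=0.1):
        # back up the sampled return through one recorded episode
        g = 0.0
        for t in reversed(range(len(states))):
            g = rewards[t] + gamma * g            # return following (s_t, a_t)
            key = (states[t], actions[t])
            q = Q.get(key, 0.0)
            Q[key] = q + alpha * (g - q)          # recency-weighted average
        return Q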

+
+

Or perhaps these are two different approaches?

+
+

It would be valid to calculate only the return for the first state/action, and randomly select state/actions to start from (called exploring starts). Or in fact take any arbitrary set of estimates generated this way. You don't have to use all return estimates, but you do need to have an algorithm that is guaranteed to update values of all state/action pairs in the long term.

+
+

Is there a situation where you'd one rather than the other?

+
+

Most usually you will see the backed up return estimates to all observed state/action pairs, as this is more sample efficient, and Monte Carlo is already a high variance method that requires lots of samples to get good estimates (especially for early state/action pairs at the start of long episodes).

+

However, if you work with function approximation such as neural networks, you need to avoid feeding in correlated data to learn from. A sequence of state/action pairs from a single episode are going to be correlated. There are a few ways to avoid that, but one simple approach could be to take just one sample from each rollout. You might do this if rollouts could be run very fast, possibly simulated on computer. But other alternatives may be better - for instance put all the state, action, return values into a data set, shuffle it after N episodes and learn from everything.

+",1847,,-1,,6/17/2020 9:57,9/17/2019 9:40,,,,3,,,,CC BY-SA 4.0 +15437,1,,,9/15/2019 10:10,,2,27,"

I'm trying to get a grasp on scalability of clustering algorithms, and have a toy example in mind. Let's say I have around a million or so songs from $50$ genres. Each song has characteristics - some of which are common across all or most genres, some of which are common to only a few genres and some of which are genre-specific.

+ +

Common attributes could be something like song duration, artist, label, year, album, key, etc. Genre-specific attributes could be like lead guitarist, trombone player, conductor, movie name (in case of movie soundtracks), etc. Assume that there are, say, $2000$ attributes across all possible genres.

+ +

The aim is to identify attributes that characterize subgenres of these genres. So of course, let's say for rock I can just collect all the attributes for all rock songs, but even that set of attributes may be too broad to characterize the rock genre - maybe there are some that are specific to subgenres and so I won't have the desired level of granularity.

+ +

Note that for the purpose of this example, I'm not assuming that I already know the subgenres a priori. For example, I'm not going to categorize songs into subgenres like post rock, folk rock, etc. and then pick out attributes characterizing them. I want to discover subgenres on the basis of clustering, if that makes sense.

+ +

In a nutshell, about a million songs belonging to $50$ genres and all songs collectively have $2000$ attributes. So for each song I'll make a vector in $\mathbb{R}^{2000}$ - each dimension corresponds to an attribute. If that attribute is present for that song, the corresponding element of the vector is $1$, otherwise $0$ (e.g. a jazz-related attribute will be $0$ for a rock song). Now I want to do genre-wise clustering. For each genre, not only do I want to cluster songs of that genre into groups, but I also want to get an idea which attributes are the most important to characterize individual groups.

+ +

On the basis of this clustering, I can identify subgenres (e.g. of rock music) that I can characterize using a subset of the $2000$ attributes.

+ +

My first question is: is there a better way to initialize the problem than forming 2000-dimensional vectors of ones and zeros?

+ +

Secondly, given the vast number of dimensions and examples, what clustering methods could be tried? From what I've surveyed, there are graph-based clustering methods, hierarchical, density-based, spectral clustering and so on. Which of these would be best for the toy example? I've heard that one can project the points onto a lower-dimensional subspace, then do clustering. But I also want to know which attributes define different clusters. Since attributes are encoded in the dimensions, with dimensionality reduction techniques I'll lose information about the attributes. So now what?

+",27548,,,,,9/15/2019 10:10,Clustering of very high dimensional data and large number of examples without losing info in dimensions,,0,0,,,,CC BY-SA 4.0 +15439,1,,,9/15/2019 14:22,,1,531,"

I'm interested in modeling a Siamese network for facial verification. I've already written a simple working model that inputs feature vectors generated from two CNNs with shared weights then outputs a similarity score (euclidean distance.)

+ +

Here is a similar model found within the Keras documentation (this Siamese network joins two networks comprised of fully connected layers). The model also uses a Euclidean distance metric.

+ +

The threshold used in that example when computing the accuracy of the model is 0.5. The similarity scores generated by the model run from that training script roughly ranges from 0 to 1.68. My model outputs scores ranging from 0 to 1.96.

+ +

I would suppose that the choice of threshold when working with a similarity metric could be determined by finding the threshold value that maximizes an appropriate metric (e.g. F1 score) on a test set.
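
Something like the following sweep is what I have in mind, assuming dists holds the predicted pair distances and labels the 1/0 same-identity flags (a pair is predicted a match when its distance falls below the threshold):

    import numpy as np
    from sklearn.metrics import f1_score

    def best_threshold(dists, labels, n_steps=200):
        candidates = np.linspace(dists.min(), dists.max(), n_steps)
        scores = [f1_score(labels, (dists < t).astype(int)) for t in candidates]
        return candidates[int(np.argmax(scores))]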

+ +

Now when it comes to parameter tuning using a validation set - done so to choose the appropriate optimization and regularization parameters and model architecture to generate scores in the first place, how do I determine what value to set as the threshold? This threshold would be important to calculating the performance metric of each model generated during the parameter search - needed to determine what final model architecture and parameter set to use when training. Also, would the method for choosing the threshold change should I choose a different distance metric (e.g. cosine similarity)?

+ +

I've already done a parameter search using an arbitrarily-set threshold of 0.5. I'm unsure if this would reflect best practices or if it ought to be adjusted when using a different distance metric.

+ +

Thank you for any help offered. Please let me know if any more details on my part are necessary to facilitate a better discussion in this thread.

+",29702,,29702,,9/16/2019 11:54,9/16/2019 11:54,Threshold selection for Siamese network hyper-parameter tuning,,0,0,,,,CC BY-SA 4.0 +15440,2,,11244,9/15/2019 14:38,,1,,"

Suppose that $X$ and $Y$ are metric spaces. A metric space is a set equipped with a metric, which is a function that defines the intuitive notion of distance between the elements of the set. For example, the set of real numbers, $\mathbb{R}$, equipped with the metric induced by the absolute value function (which is a norm). More precisely, the metric $d$ can be defined as $$d(x,y)=|x - y|, \forall x, y \in \mathbb{R} \tag{1}\label{1}.$$

+ +

Let $f$ be a function from the metric space $X$ to the metric space $Y$, that is, $f: X \rightarrow Y$. Then, $f$ is a non-expansive map (also known as metric map) if and only if

+ +

$$d_{Y}(f(x),f(y)) \leq d_{X}(x,y) \tag{2} \label{2}$$

+ +

where the subscript $_X$ in $d_X$ means that the metric $d_X$ is the metric associated with the metric space $X$. Therefore, any function $f$ between two metric spaces that satisfies \ref{2} is a non-expansive operator.

+ +

To show that the max operator is non-expansive, consider the set of real numbers, $\mathbb{R}$, equipped with the absolute value metric defined in \ref{1}. Then, in this case, $f=\operatorname{max}$, $d(x, y) = |x - y|$ and $X = Y = \mathbb{R}$, so \ref{2} becomes

+ +

$$|\operatorname{max}(x) - \operatorname{max}(y)| \leq | x - y | \tag{3} \label{3}$$

+ +

Given that $\operatorname{max}(x) = x, \forall x$, then \ref{3} trivially holds, that is

+ +

\begin{align} +|\operatorname{max}(x) - \operatorname{max}(y)| &\leq | x - y | \iff \\ +|x - y| &\leq | x - y | \iff \\ +|x - y| &= | x - y | \tag{4} \label{4} +\end{align}

+ +

For example, suppose that $x=6$ and $y=9$, then \ref{4} becomes

+ +

\begin{align} +|\operatorname{max}(6) - \operatorname{max}(9)| &\leq | 6 - 9 | \iff \\ +|6 - 9| &\leq | -3 | \iff \\ +|-3| &= 3 +\end{align}

+ +

There are other examples of non-expansive maps. For example, $f(x) = k x$, for $0 \leq k \leq 1$, where $f : \mathbb{R} \rightarrow \mathbb{R}$.

+ +

See also https://en.wikipedia.org/wiki/Contraction_mapping and https://en.wikipedia.org/wiki/Contraction_(operator_theory).

+",2444,,,,,9/15/2019 14:38,,,,0,,,,CC BY-SA 4.0 +15441,1,,,9/15/2019 16:17,,7,1898,"

I am trying to solve a maze puzzle using the A* algorithm. I am trying to analyze the algorithm based on different applicable heuristic functions.

+

Currently, I explored the Manhattan and Euclidean distances. Which other heuristic functions are available? How do we compare them? How do we know whether a heuristic function is better than another?

+",10118,,2444,,2/4/2021 22:04,2/4/2021 22:04,How do we determine whether a heuristic function is better than another?,,1,0,0,,,CC BY-SA 4.0 +15442,2,,15441,9/15/2019 17:55,,3,,"

In the A* algorithm, at each iteration, a node is chosen which minimizes a certain function, called the evaluation function, which, in the case of A*, is defined as

+ +

$$f(n)=g(n)+h(n)$$

+ +

where $g(n)$ is the length (or cost) of the cheapest path from the start node to the current node $n$ and $h(n)$ is the heuristic function that estimates the cost of the cheapest path from current node $n$ to goal node.

+ +

There is potentially more than one path to the goal from a given node $n$. However, one of these paths is the cheapest path. An admissible heuristic function is a heuristic function that does not overestimate the cost to reach the goal node, that is, it estimates a cost to reach a goal that is smaller or equal to the cheapest path from $n$, which is denoted by $h^*(n)$. Therefore, an admissible heuristic $h$ satisfies $h(n) \leq h^*(n), \forall n$. Given that the goal is to find the cheapest path from a start to a goal node, intuitively, an admissible heuristic is an optimistic predictive function.

+ +

A* is guaranteed to find the optimal solution (or path) if it uses an admissible heuristic. In section 2.4 of the book Principles of Artificial Intelligence (1982), Nils J. Nilsson provides the proof of this fact.

+ +

However, not all admissible heuristics give the same information, so not all admissible heuristics are equally efficient. For instance, a heuristic function that is trivially admissible is $h(n) = 0, \forall n$. However, in this case, the only actual information that is used to choose the next node to expand is only based on $g(n)$, that is, $f(n) = g(n)$. This evaluation function corresponds to the evaluation function of the uniform-cost search algorithm, which is an uninformed-search algorithm (as opposed to A*, which, nonetheless, is considered an informed-search algorithm).

+ +

Which admissible heuristic is thus more informed? Consider two versions of A*, each with a different admissible heuristic function

+ +

$$ +f_1(n) = g_1(n) + h_1(n) +$$

+ +

and

+ +

$$ +f_2(n) = g_1(n) + h_2(n) +$$

+ +

where $h_1(n) \leq h^*(n), \forall n$ and $h_2(n) \leq h^*(n), \forall n$. Then A* with the evaluation function $f_1$ is more informed than A* with $f_2$ if, for all non-goal nodes $n$, $h_1(n) > h_2(n)$. See section 2.4.4. of the cited book where an example that attempts to show this is given.

+ +

The admissibility of a heuristic depends on the problem. For example, in the case of the Fifteen Puzzle problem, both Manhattan and the Hamming distances are admissible heuristics. However, in other problems, these distances might not induce an admissible heuristic.
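
For the maze setting in the question, both heuristics are simple functions of grid coordinates; a minimal sketch:

    import math

    def manhattan(node, goal):
        # admissible on a 4-connected grid with unit step costs
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    def euclidean(node, goal):
        # also admissible, but less informed here, since it never exceeds the Manhattan distance
        return math.hypot(node[0] - goal[0], node[1] - goal[1])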

+",2444,,2444,,9/20/2019 11:56,9/20/2019 11:56,,,,0,,,,CC BY-SA 4.0 +15446,1,,,9/15/2019 21:28,,2,190,"

Do neural networks compute the probability distribution for policy gradient methods. If so, how do they compute an infinite probability distribution? How do you represent a continuous action policy with a neural network?

+",29708,,29708,,9/16/2019 4:00,9/16/2019 8:26,How do policy gradients compute an infinite probability distribution from a neural network,,1,2,,,,CC BY-SA 4.0 +15447,2,,15408,9/16/2019 2:27,,1,,"

Yes, ML can fit a curve based on examples that include hyperparameters but not a model specification. To do this, you need to specify a family of models that is large enough to include the true model. You can then treat this as learning a relationship from 4 inputs to a single output.

+ +

For example, suppose you are willing to make only the following relatively mild assumptions about $f$:

+ +
    +
  • $f$ is a function mapping 4 inputs (3 parameters and a true input) to 1 output, all real valued.
  • +
  • $f$ is a composition of a finite number (say, no more than 60) of the following basic mathematical operators: +,-,*,/,exp, ln, sin, min, max.
  • +
+ +

You can now frame the search for $f$ as a graph-search or local-search problem through the space of possible functions, which is finite. If the space is small, or is smooth, you are likely to find good or exact representations of $f$ quickly.

+ +

An example of an ML technique that is explicitly designed for this purpose is Koza's Genetic Programming. It searches the space of all possible LISP programs constructed from a pre-specified set of functions for a program that maps from specified inputs to specified outputs. It has been widely used for the kind of curve fitting you describe here.

+",16909,,,,,9/16/2019 2:27,,,,0,,,,CC BY-SA 4.0 +15449,1,15462,,9/16/2019 6:10,,58,10849,"

We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous.

+ +

How could artificial intelligence harm us?

+",29713,,1671,,9/16/2019 21:19,9/19/2019 16:55,How could artificial intelligence harm us?,,13,1,,,,CC BY-SA 4.0 +15450,2,,15446,9/16/2019 6:58,,1,,"
+

Do neural networks compute the probability distribution for policy gradient methods.

+
+ +

In short, yes. It does not have to be neural networks, any trainable parametric function approximator based on gradients will do. Neural networks are a common choice, as are linear function approximators using selected basis functions.

+ +
+

If so, how do they compute an infinite probability distribution?

+
+ +

For background, this is an issue for generating stochastic policies in continuous action spaces only. In discrete action spaces, it is usually possible to compute an arbitrary probability density function for the whole action space, and sample from it to model the policy. It is also possible to compute a deterministic policy simply enough in continuous spaces - the input is the current state and the output is the action to take. The issue then is that this does not allow an agent to learn through exploration of the environment. To do that requires a stochastic policy.

+ +

If you want to generate a stochastic policy in continuous action spaces, you could discretise the space and sample from that using e.g. softmax to generate the action probabilities. Or you could have the approximation function do something more indirect: Output the parameters of a probability distribution that can be sampled from.

+ +
+

How do you represent a continuous action policy with a neural network?

+
+ +

Typically by having state features as input and the parameters of a PDF that can be sampled as the output. For instance, the network could output mean $\mu$ and standard deviation $\sigma$ of a normal distribution for an action value, and the policy is given by $\pi(a|s) = a \sim \mathcal{N}(\mu, \sigma)$.

+ +

This distribution can be sampled (there are simple methods to generate a sample from a normal distribution), and returns from following this policy used as feedback to the neural network using the policy gradient theorem. Assuming that there is an optimal deterministic policy to be found, the neural network can learn over time to home in on a specific mean with low standard deviation.
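
A minimal sketch of such a policy head for a 1-D action (Keras is used purely for illustration, and the state dimension is an assumption):

    import numpy as np
    from tensorflow import keras

    state_dim = 4  # assumed size of the state feature vector

    state_in = keras.Input(shape=(state_dim,))
    hidden = keras.layers.Dense(64, activation='relu')(state_in)
    mu = keras.layers.Dense(1, name='mu')(hidden)
    log_sigma = keras.layers.Dense(1, name='log_sigma')(hidden)  # learning log(sigma) keeps sigma positive
    policy_net = keras.Model(state_in, [mu, log_sigma])

    def sample_action(state):
        m, log_s = policy_net.predict(state[np.newaxis, :])
        return np.random.normal(m[0, 0], np.exp(log_s[0, 0]))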

+ +

In some cases, the standard deviation can be treated as a hyper-parameter, similar to $\epsilon$ in $\epsilon$-greedy action selection, and might be decayed over time. In that case, a neural network can just output the mean action.

+ +

It is also possible to learn a deterministic policy through off-policy learning, adding a noise function to support exploration. This is what Deep Deterministic Policy Gradient does.

+",1847,,1847,,9/16/2019 8:26,9/16/2019 8:26,,,,0,,,,CC BY-SA 4.0 +15451,1,,,9/16/2019 7:58,,2,55,"

Considering a black box optimization problem on non-linear, non-convex function where we want to minimize an objective function.

+ +

One way to assess the quality of an optimizer is to look at the best solution it finds. However that doesn't give us any information on how much of the parameter space the optimizer had to explore to come up with these solutions.

+ +

Therefore I was wondering if there are metrics quantifying how much of the parameter space is explored ?

+",29715,,29715,,9/23/2019 8:25,10/24/2019 3:03,Metrics of quality of parameter space exploration,,1,1,,,,CC BY-SA 4.0 +15453,2,,14342,9/16/2019 9:03,,0,,"

I found the answer to my question, went back to the Python script and in the command that fits the model i.e.

+ +
 LR = LogisticRegression (C=0.1, solver = ""sag"",max_iter=1000).fit (X_train, y_train)
+
+ +

The parameter C was set to 0.001, which is a very small value (meaning lambda is very high, as C = 1/lambda). C is the inverse of the regularization strength, so smaller values indicate stronger regularization. More on that matter can be found here and here

+",25463,,,,,9/16/2019 9:03,,,,0,,,,CC BY-SA 4.0 +15455,2,,15387,9/16/2019 9:23,,1,,"

Not very sure about the AI in competitions, as I have not taken part in any competitive competitions. On comparing AI in Academia and Industry, the biggest difference is probably freedom.

+ +

In academia, considering a research project or so, a large number of experiments and trying new things are encouraged. New learnings are heeded to, and it usually involves rigorous literature survey and studies of previous works. Even if a model performed badly, if there were new learnings one could take from it, it wouldn't be deemed a failure. There is also a lot of data available that could be used for research purposes, and open-source projects used or learned from, are always thanked and appreciated.

+ +

In industry the scene is quite different. There is more of a focus on using pre-trained models or transfer learning. Quite frequently, open-source projects are just cloned, mildly developed, and deployed under the companies name without releasing the code - basically requiring bare minimum effort towards literature. More of a focus was given (In my case at least) on reading blog posts and readme's, over the papers themselves, in order to save time. And compute efficiency is key. In industry, the effort is more directed towards scaling these models, building the data pipelines, and satisfying the clients needs. Data is also another concern in industry, with it being common practice to outsource data collection and preparation to third parties (Usually other companies that specialize in this area).

+ +

The key difference, I would say, is the amount of freedom one has in academia, as compared to a strong sense of direction towards a singular goal in industry. AI in industry pretty much mostly is in the solutions-and-services sector (mostly), making it quite similar to software engineering, broadly speaking.

+ +

So, summarizing, the domain of the AI project makes a big difference, with the main difference being what part of the project most effort and focus is put into.

+",25658,,,,,9/16/2019 9:23,,,,0,,,,CC BY-SA 4.0 +15456,2,,15367,9/16/2019 9:44,,2,,"

KL-divergence is a measure on probability distributions. It essentially captures the information loss between the ground-truth distribution and the predicted one.

+

L2-norm/MSE/RMSE doesn't do well with probabilities, because of the power operations involved in the calculation of the loss. Probabilities, being fractions under 1, are significantly affected by any power operations (square or root). Since we are summing the squares of differences of probabilities, the summed values are abnormally small, so the network barely learns anything: the random initialization itself starts with an abnormally small loss, which then stays almost constant.

+

L1 norm, on the other hand, does not have any power operations, making it relatively acceptable.

+

Loss functions, such as Kullback-Leibler-divergence or Jensen-Shannon-Divergence, are preferred for probability distributions because of the statistical meaning they hold. KL-Divergence, as mentioned before, is a statistical measure of information loss between distributions, or, in other words, assuming $Q$ is the ground truth distribution, KL-Divergence is a measure of how much $P$ deviates from $Q$. Also, considering probability distributions, convergence is much stronger in measures of Information Loss such KL-Divergence.
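
As a small numeric sketch of that interpretation (with Q as the ground truth and P as the prediction):

    import numpy as np

    def kl_divergence(q, p, eps=1e-12):
        # KL(Q || P): expected extra information (in nats) incurred by modelling Q with P
        q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
        return float(np.sum(q * np.log((q + eps) / (p + eps))))

    print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))  # small value: the distributions are close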

+

More clarity on the motivation behind Kullback-Leibler can be read here.

+",25658,,2444,,5/30/2022 8:24,5/30/2022 8:24,,,,0,,,,CC BY-SA 4.0 +15457,2,,15416,9/16/2019 9:54,,1,,"

From what I have observed, the ability to scale an ML model is key.

+ +

Real time inference must be quick, and cause no delays from the provider side. Being able to deploy the model also carries enormous weight - that is, how easy would it be to build the data pipelines and how easy would it be to integrate it in a web application from the server perspective.

+ +

Apart from the obvious achievement of set metrics and performance criterion, speed and ease of deployment also carry a very important role. There have been scenarios of brilliant solutions being denied (from what I have seen) because they exceeded the limits set for time and compute in an application scenario.

+",25658,,,,,9/16/2019 9:54,,,,0,,,,CC BY-SA 4.0 +15458,2,,14357,9/16/2019 10:23,,2,,"

You can look into the techniques used in GANs (generative adversarial networks). These networks work by having 2 learning agents: 1 to create images and 1 to learn the difference between a human-made image and a computer-generated image. This works because the 2 agents drive each other to be better and ultimately make the generator create images which can't be distinguished from real-life images.

+ +

In your case you can make an agent that tries to tell whether the data is human or computer generated. The agent learning to move will then get negative rewards when the other agent can identify its output as computer movement. This way the mover will learn to move like your reference data.

+ +

UPDATE:

+ +

I just found this video and paper which do exactly what you asked. Instead of using a GAN-like structure, they use a task-specific reward and an imitation reward, which is based on reference motion data they have.

+ +

https://www.youtube.com/watch?v=vppFvq2quQ0

+",29671,,29671,,11/12/2019 14:52,11/12/2019 14:52,,,,2,,,,CC BY-SA 4.0 +15459,1,,,9/16/2019 10:51,,4,217,"

Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning?

+ +

I came across this statement in section 5.2 of the book ""Understanding Machine Learning: From Theory to Algorithms"". If you search for ""increases (logarithmically)"" in your browser, you can find the sentence.

+ +

I just can't understand the statement, and there is no proof in the book either. What I would like to do is prove that the estimation error $\epsilon_{est}$ increases (logarithmically) with $|H|$ and decreases with $m$. I hope you can help me out. A rigorous proof would be even better!

+",27112,,27112,,9/18/2019 0:27,3/12/2020 21:33,Why does estimation error increase with $|H|$ and decrease with $m$ in PAC learning?,,2,0,,,,CC BY-SA 4.0 +15460,2,,15459,9/16/2019 11:25,,2,,"

Definitely, you can find the proof in different resources (for example, in these notes or in the paper that originally proposed PAC learnability, A Theory of the Learnable). However, the intuition behind your question is this: when the size of the hypothesis class increases and nothing else changes, the same samples cover a smaller part of the hypothesis space, hence the estimation error will increase. Moreover, when you increase the number of samples, you have more chance to cover more of the hypothesis space, hence the estimation error decreases.
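
To make both dependencies explicit, the standard finite-class argument (Hoeffding's inequality plus a union bound over $H$, for a loss bounded in $[0, 1]$) gives, with probability at least $1 - \delta$,

$$\epsilon_{est} \leq 2\sqrt{\frac{\ln(2|H|/\delta)}{2m}},$$

so the estimation error grows only logarithmically with $|H|$ and shrinks like $1/\sqrt{m}$ as the number of samples $m$ increases.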

+

Also, you can see some lemma about the relation of the PAC learnability and other similar concepts in the Wikipedia article Probably approximately correct learning:

+
+

Under some regularity conditions these three conditions are equivalent:

+
    +
  1. The concept class $C$ is PAC learnable.
  2. +
  3. The VC dimension of $C$ is finite.
  4. +
  5. $C$ is a uniform Glivenko-Cantelli class.
  6. +
+
+",4446,,-1,,6/17/2020 9:57,9/17/2019 17:04,,,,5,,,,CC BY-SA 4.0 +15461,1,,,9/16/2019 11:51,,1,35,"

tldr; if I train the network on 1 training example, the outcome sometimes makes no sense at all, sometimes is as expected. If I train it on more examples and higher iterations, the network, which produces two outcomes (p and v) always predicts exactly 0 for v and I would like to change that.

+ +

In the following post I will provide all code necessary to reproduce the problem.
+I am training a neural network on the same input. The wanted outcome for a value ""v"" is 1. If I create the network and train it, sometimes the predicted outcome will be 1, sometimes it will be -1.
+ Also, the loss seems to flip between 0 and 4 during training epochs.
+ Additionally, the loss blows up immensely, even though both losses for the outcome layers are close to zero.
+ I do not understand where this behaviour comes from. I used Leaky-ReLU to make sure it can handle negative input, I used a high learning rate to make sure the data in this example is sufficient on the training, and the input is the same all the time.

+ +

My Neural network looks like this:

+ +
input_layer = keras.Input(shape=(6,7),)    
+formatted_input_layer = keras.layers.Reshape((6,7, 1))(input_layer)       
+conv_layer1 = self.create_conv_layer(formatted_input_layer)
+res_layer1 = self.create_res_layer(conv_layer1)
+res_layer2 = self.create_res_layer(res_layer1)
+res_layer3 = self.create_res_layer(res_layer2)
+res_layer4 = self.create_res_layer(res_layer3)
+policy_head = self.create_policy_head(res_layer4)
+value_head = self.create_value_head(res_layer4)
+model = keras.Model(inputs=input_layer, outputs=[policy_head, value_head])
+optimizer = keras.optimizers.SGD(lr=args['lr'],momentum=args['momentum'])
+model.compile(loss = {'policy_head' : 'categorical_crossentropy', 'value_head' : 'mean_squared_error'}, optimizer=optimizer, loss_weights={'policy_head':0.5, 'value_head':0.5})
+
+ +

Methods for the different layers:
+conv_layer:

+ +
def create_conv_layer(self, input_layer):
+    conv_layer = keras.layers.Conv2D(filters=256,
+                                     kernel_size=3,
+                                     strides=1,
+                                     padding='same',
+                                     use_bias=False,
+                                     data_format=""channels_last"",
+                                     activation = ""linear"",
+                                     kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+    conv_layer= keras.layers.BatchNormalization(axis=-1)(conv_layer)
+    conv_layer = keras.layers.LeakyReLU()(conv_layer)
+    return conv_layer
+
+ +

res_layer:

+ +
def create_res_layer(self, input_layer):
+        conv_layer = self.create_conv_layer(input_layer)
+        res_layer = keras.layers.Conv2D(filters=256,
+                                         kernel_size=3,
+                                         strides=1,
+                                         padding='same',
+                                         use_bias=False,
+                                         data_format=""channels_last"",
+                                         activation = ""linear"",
+                                         kernel_regularizer = keras.regularizers.l2(0.0001))(conv_layer)
+        res_layer= keras.layers.BatchNormalization(axis=-1)(res_layer)
+        res_layer = keras.layers.add([input_layer, res_layer])
+        res_layer = keras.layers.LeakyReLU()(res_layer)
+        return res_layer
+
+ +

policy head:

+ +
def create_policy_head(self, input_layer):
+        policy_head = keras.layers.Conv2D(filters=2,
+                                          kernel_size=1,
+                                          strides=1,
+                                          padding='same',
+                                          use_bias = False,
+                                          data_format='channels_last',
+                                          activation='linear',
+                                          kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+        policy_head = keras.layers.BatchNormalization(axis=-1)(policy_head)
+        policy_head = keras.layers.LeakyReLU()(policy_head)
+        policy_head = keras.layers.Flatten()(policy_head)
+        policy_head = keras.layers.Dense(units = 7,
+                                         use_bias = False,
+                                         activation = 'softmax',
+                                         kernel_regularizer = keras.regularizers.l2(0.0001),
+                                         name = ""policy_head""
+                                         )(policy_head)
+        return policy_head
+
+ +

value head:

+ +
def create_value_head(self, input_layer):
+        value_head = keras.layers.Conv2D(filters=1,
+                                          kernel_size=1,
+                                          strides=1,
+                                          padding='same',
+                                          use_bias = False,
+                                          data_format='channels_last',
+                                          activation='linear',
+                                          kernel_regularizer = keras.regularizers.l2(0.0001))(input_layer)
+        value_head = keras.layers.BatchNormalization(axis=-1)(value_head)
+        value_head = keras.layers.LeakyReLU()(value_head)  
+        value_head = keras.layers.Flatten()(value_head)        
+        value_head = keras.layers.Dense(units = 21,
+                                         use_bias = False,
+                                         activation = 'linear',
+                                         kernel_regularizer = keras.regularizers.l2(0.0001)
+                                         )(value_head)
+        value_head = keras.layers.LeakyReLU()(value_head)      
+        value_head = keras.layers.Dense(units = 1,
+                                         use_bias = False,
+                                         activation = 'tanh',
+                                         kernel_regularizer = keras.regularizers.l2(0.0001),
+                                         name = ""value_head""                                     
+                                         )(value_head)
+        return value_head
+
+                                )(value_head)
+
+ +

The way I am testing my NN:

+ +
    canonicalBoard = np.zeros(shape = (6,7), dtype=int) 
+
+
+    Pi = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0] 
+
+    trainExamples = [[canonicalBoard, Pi, 1]]*50        
+
+    nnetwrapper.train(trainExamples)
+
+    board = canonicalBoard[np.newaxis, :, :]
+
+    p, v = nnetwrapper.nnet.model.predict(board)
+
+ +

which results in the training looking like this:

+ +
Epoch 1/10
+50/50 [==============================] - 4s 71ms/step - loss: 1.6829 - policy_head_loss: 1.9459 - value_head_loss: 1.0000
+Epoch 2/10
+50/50 [==============================] - 1s 15ms/step - loss: 2.3218 - policy_head_loss: 3.8470 - value_head_loss: 0.3768
+Epoch 3/10
+50/50 [==============================] - 1s 13ms/step - loss: 4456112.5000 - policy_head_loss: 0.9027 - value_head_loss: 0.8510
+Epoch 4/10
+50/50 [==============================] - 1s 14ms/step - loss: 16085884.0000 - policy_head_loss: 0.0945 - value_head_loss: 3.9925
+Epoch 5/10
+50/50 [==============================] - 1s 14ms/step - loss: 32722448.0000 - policy_head_loss: 2.6572 - value_head_loss: 4.0000
+Epoch 6/10
+50/50 [==============================] - 1s 14ms/step - loss: 52690084.0000 - policy_head_loss: 9.6345 - value_head_loss: 3.1810e-12
+Epoch 7/10
+50/50 [==============================] - 1s 14ms/step - loss: 74703120.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+Epoch 8/10
+50/50 [==============================] - 1s 14ms/step - loss: 97784832.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+Epoch 9/10
+50/50 [==============================] - 1s 14ms/step - loss: 121202520.0000 - policy_head_loss: 2.0802e-05 - value_head_loss: 4.0000
+Epoch 10/10
+50/50 [==============================] - 1s 14ms/step - loss: 144415040.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 4.0000
+
+ +

and my prediction looking like this:

+ +
p: [[0. 0. 0. 0. 0. 1. 0.]]  v: [[-1.]]
+
+ +

another outcome could be:

+ +
Epoch 1/10
+50/50 [==============================] - 4s 82ms/step - loss: 1.6829 - policy_head_loss: 1.9459 - value_head_loss: 1.0000
+Epoch 2/10
+50/50 [==============================] - 1s 17ms/step - loss: 2.2826 - policy_head_loss: 2.0001 - value_head_loss: 2.1454
+Epoch 3/10
+50/50 [==============================] - 1s 16ms/step - loss: 1718694.1250 - policy_head_loss: 0.5434 - value_head_loss: 3.9772
+Epoch 4/10
+50/50 [==============================] - 1s 14ms/step - loss: 6204218.0000 - policy_head_loss: 9.4180e-05 - value_head_loss: 0.0000e+00
+Epoch 5/10
+50/50 [==============================] - 1s 14ms/step - loss: 12620835.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 6/10
+50/50 [==============================] - 1s 14ms/step - loss: 20322232.0000 - policy_head_loss: 7.7489 - value_head_loss: 0.0000e+00
+Epoch 7/10
+50/50 [==============================] - 1s 14ms/step - loss: 28812526.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 1.5966
+Epoch 8/10
+50/50 [==============================] - 1s 14ms/step - loss: 37715012.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 9/10
+50/50 [==============================] - 1s 15ms/step - loss: 46747064.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+Epoch 10/10
+50/50 [==============================] - 1s 14ms/step - loss: 55699992.0000 - policy_head_loss: 1.1921e-07 - value_head_loss: 0.0000e+00
+
+ +

with the predictions:

+ +
p: [[0. 0. 0. 0. 0. 1. 0.]]  v: [[1.]]
+
+ +

which are the correct ones as I would have expected.

+ +

How come on some trainings my NN doesn't fit the data at all? I wanted to start the training process for a whole week now, but before I do so I want to make sure there are no errors in the way I laid out my NN. And this looks like I am missing something here.

+ +

And here at the predict / train methods of my neuralnet, (the NN is part of an alpha-zero replica and the involved game is connect4, I omitted these in the example to make it easier to actually replicate the problem. This is why you see some transform operations in predict and train methods)

+ +
def train(self, examples):
+        input_boards, target_pis, target_vs = list(zip(*examples))       
+        input_boards = np.asarray(input_boards)
+        target_pis = np.asarray(target_pis)
+        target_vs = np.asarray(target_vs)        
+        logger.debug(""Passing to nn: x: {}, y: {}, batch_size: {}, epochs: {}"".format(input_boards, [target_pis, target_vs], self.args[""batch_size""], self.args[""epochs""]))
+        self.nnet.model.fit(x = input_boards, y = [target_pis, target_vs], batch_size = self.args[""batch_size""], epochs = self.args[""epochs""])
+
+
+def predict(self, board):
+        board = board.nn_board_2d
+        # preparing input
+        board = board[np.newaxis, :, :] # this has to be done for the conv2d to work
+
+        # run
+        pi, v = self.nnet.model.predict(board)
+        return pi[0], v[0]
+
+ +

The parameters I used for this example:

+ +
'lr': 0.2
+'dropout': 0.1
+'epochs': 10
+'num_channels': 512,
+'filters': 256
+'momentum':0.9
+
+ +

EDIT: As soon as I use a lower learning rate and more iterations, my p changes, but my v stays exactly at 0. This is what was bothering me in the first place:

+ +
Epoch 1/10
+118/118 [==============================] - 5s 44ms/step - loss: 1.8037 - policy_head_loss: 2.4305 - value_head_loss: 0.7578
+Epoch 2/10
+118/118 [==============================] - 2s 14ms/step - loss: 1.1666 - policy_head_loss: 1.7723 - value_head_loss: 0.1416
+Epoch 3/10
+118/118 [==============================] - 2s 13ms/step - loss: 1.0987 - policy_head_loss: 1.6832 - value_head_loss: 0.0948
+Epoch 4/10
+118/118 [==============================] - 2s 13ms/step - loss: 1.0430 - policy_head_loss: 1.5787 - value_head_loss: 0.0876
+Epoch 5/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9943 - policy_head_loss: 1.4859 - value_head_loss: 0.0826
+Epoch 6/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9469 - policy_head_loss: 1.3959 - value_head_loss: 0.0777
+Epoch 7/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.9046 - policy_head_loss: 1.3180 - value_head_loss: 0.0707
+Epoch 8/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.8629 - policy_head_loss: 1.2403 - value_head_loss: 0.0647
+Epoch 9/10
+118/118 [==============================] - 2s 14ms/step - loss: 0.8068 - policy_head_loss: 1.1344 - value_head_loss: 0.0583
+Epoch 10/10
+118/118 [==============================] - 2s 13ms/step - loss: 0.7335 - policy_head_loss: 0.9922 - value_head_loss: 0.0538
+I0916 14:24:38.595780  7116 trainingonly1NN.py:232] Values for empty board: new network: P : [0.20057818 0.11068489 0.15129891 0.20042823 0.13117987 0.04705378
+ 0.15877616]  v: 0
+
+",27406,,27406,,9/16/2019 15:56,9/16/2019 15:56,Neural Network training on one example to try overfitting leads to strange predictions,,0,4,,,,CC BY-SA 4.0 +15462,2,,15449,9/16/2019 12:18,,52,,"

tl;dr

+ +

There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.

+ +

To better illustrate these concerns, I'll try to split them into three categories.

+ +

Conscious AI

+ +

This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are ""The terminator"", ""The Matrix"", ""Age of Ultron"". The most influential novels were written by Isaac Asimov and are referred to as the ""Robot series"" (which includes ""I, robot"", which was also adapted as a movie).

+ +

The basic premise under most of these works is that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the ""brain"" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on ""ambiguous intelligence"" (which I think is more realistic).

+ +

In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to ""think"" and become conscious.

+ +

Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.

+ +

Using AI with malicious intent

+ +

Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots! +The second category I want to focus a bit more on is several malicious uses of today's AI.

+ +

I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:

+ +
    +
  • DeepFake: a technique for imposing someone's face on an image or a video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3

  • +
  • With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.

  • +
  • Influencing people through social media. Aside from recognizing users' tastes with the goal of targeted marketing and ad placements (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.

  • +
  • Hacking.

  • +
  • Military applications, e.g. drone attacks, missile targeting systems.

  • +
+ +

Adverse effects of AI

+ +

This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:

+ +
    +
  • Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).

  • +
  • Reinforcing the bias in our data. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on and have a tendency of perpetuating and even enhancing different forms of social biases, already existing in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.

  • +
+",26652,,,,,9/16/2019 12:18,,,,1,,,,CC BY-SA 4.0 +15463,1,,,9/16/2019 12:29,,0,250,"

I'm dealing with a "ticket similarity task".

+

Every time new tickets arrive at the help desk (customer service), I need to compare them and find out about similar ones.

+

In this way, once the operator responds to a ticket, at the same time he can solve the others similar to the one solved.

+

I expect an input ticket and all the other tickets with their similarity in output.

+

I thought about using DOC2VEC, but it requires training every time a new ticket enters.

+

What do you recommend?

+",20780,,2444,,6/9/2021 9:38,7/2/2023 6:17,How could I compute in real-time the similarity between tickets?,,2,0,,,,CC BY-SA 4.0 +15464,2,,15449,9/16/2019 13:23,,10,,"

In addition to the other answers, I would like to add the nuking cookie factory example:

+ +

Machine learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI running a cookie factory. The goal they implement is to sell as many cookies as possible for the highest profitable margin.

+ +

Now, imagine an AI which is sufficiently powerful. This AI will notice that if it nukes all other cookie factories, everybody has to buy cookies in its factory, making sales and profits rise.

+ +

So, the human error here is giving no penalty for using violence in the algorithm. This is easily overlooked because humans didn't expect the algorithm to come to this conclusion.

+",29671,,2444,,9/16/2019 15:41,9/16/2019 15:41,,,,3,,,,CC BY-SA 4.0 +15465,2,,7875,9/16/2019 16:46,,3,,"

There is at least one very important and serious AI scientist that apparently believes in the creation of true artificial general intelligence and possibly superintelligence: Jürgen Schmidhuber, who is the co-author of the LSTM, among many other important contributions. In fact, he recently founded NNAISENSE for this ultimate purpose, that is, to build a general-purpose artificial general intelligence. In his talk When creative machines overtake man, at TEDxLausanne, Schmidhuber talks about the singularity (also known as omega). See also his web article Is History Converging? Again? (2012) or the paper 2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years (2007).

+",2444,,,,,9/16/2019 16:46,,,,2,,,,CC BY-SA 4.0 +15467,2,,15449,9/16/2019 18:31,,5,,"

I would say the biggest real threat would be the unbalancing/disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.

+",29725,,,,,9/16/2019 18:31,,,,1,,,,CC BY-SA 4.0 +15470,2,,15408,9/16/2019 19:06,,1,,"

While John's answer I think gives a better idea of which direction I might want to go to seriously tackle this, it turns out that just throwing the data straight into some sklearn algorithms does work better than I thought it would. For example, the following produces ballpark results for the model parameters (for the two model cases I tested), without assuming (explicitly) anything about the model itself:

+ +
from sklearn.ensemble import RandomForestRegressor                                                                   
+from sklearn.model_selection import train_test_split                                                                 
+from sklearn import metrics
+import matplotlib.pyplot as plt  # needed for the plots below
+
+Xtrain, Xtest, ytrain, ytest = train_test_split(data, parms, random_state=1, )  
+
+model = RandomForestRegressor(random_state=1, n_estimators=15, n_jobs=7)                                             
+model.fit(X=Xtrain,y=ytrain)                                                                                       
+ypred = model.predict(Xtest)
+
+print([metrics.explained_variance_score(ytest[:,i], ypred[:,i]) for i in range(3)])  # explained variance per parameter
+
+plt.figure(2)                                                                                                        
+plt.title('Guessed parameter fits')                                                                                  
+plt.clf()                                                                                                            
+for i in range(3) :                                                                                                  
+    for j in range(3) :                                                                                              
+        plt.subplot(3,3,i*3+j+1)                                                                                     
+        plt.plot(x,Xtest[i*3+j],'.')                                                                                 
+        plt.plot(x,f(x,*ypred[i*3+j,:]))                                                                             
+plt.show()
+
+ +

Results

+ +

Gaussian curves, explained variance for a,b,c parameter guesses (1.0 = perfect every time):
+[0.8784556933371098, 0.9172501716286985, 0.8874106964444304]

+ +

Sin curves, explained variance for a,b,c parameter guesses
+[0.8190156553698631, 0.9757765139100565, 0.7551784827108721]

+ +

Here are some plots from the test sets, along with the original model evaluated with the guessed parameters.

+ +

+

+",21099,,,,,9/16/2019 19:06,,,,0,,,,CC BY-SA 4.0 +15472,2,,15463,9/16/2019 20:32,,0,,"

You need to create an active learning loop around the learning process. Try to start from a history of tickets and use doc2vec to get the similarity. When you find a bad result from your classifier, report it and then retrain the classifier. Alternatively, you can postpone retraining the model until you have collected a predefined batch size of new data that is not in the training set.

+ +

Also, to get a better result in the active learning loop, you can assess incoming data by measuring the classifier's uncertainty over it. If the entropy of the classifier's prediction for a piece of data is too high, you can have an operator (acting as an oracle) label it, and then, once you reach the predefined batch size, retrain the classifier.
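
A minimal sketch of that entropy-based uncertainty check (the threshold value and the source of the predicted probabilities are assumptions for illustration, not part of the original answer):

import numpy as np

def prediction_entropy(probs):
    # probs: array of shape (n_samples, n_classes) with predicted class probabilities
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def needs_oracle_label(probs, threshold=0.5):
    # indices of incoming tickets whose predictions are too uncertain;
    # these are sent to a human operator (the oracle) for labelling
    return np.where(prediction_entropy(probs) > threshold)[0]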

+ +

Moreover, to learn more about the active learning process and query strategies, follow this link (and other articles there, like this article).

+",4446,,4446,,9/16/2019 20:39,9/16/2019 20:39,,,,0,,,,CC BY-SA 4.0 +15476,2,,15449,9/16/2019 21:23,,8,,"

My favorite scenario for harm by AI involves not high intelligence, but low intelligence. Specifically, the grey goo hypothesis.

+ +

This is where a self-replicating, automated process runs amok and converts all resources into copies of itself.

+ +

The point here is that the AI is not ""smart"" in the sense of having high intelligence or general intelligence--it is merely very good at a single thing and has the ability to replicate exponentially.

+",1671,,,,,9/16/2019 21:23,,,,4,,,,CC BY-SA 4.0 +15478,2,,15449,9/17/2019 4:09,,16,,"

Short term

+ +
    +
  • Physical accidents, e.g. due to industrial machinery, aircraft autopilot, self-driving cars. Especially in the case of unusual situations such as extreme weather or sensor failure. Typically an AI will function poorly under conditions where it has not been extensively tested.
  • +
  • Social impacts such as reducing job availability, barriers for the underprivileged wrt. loans, insurance, parole.
  • +
  • Recommendation engines are manipulating us more and more to change our behaviours (as well as reinforce our own ""small world"" bubbles). Recommendation engines routinely serve up inappropriate content of various sorts to young children, often because content creators (e.g. on YouTube) use the right keyword stuffing to appear to be child-friendly.
  • +
  • Political manipulation... Enough said, I think.
  • +
  • Plausible deniability of privacy invasion. Now that AI can read your email and even make phone calls for you, it's easy for someone to have humans act on your personal information and claim that they got a computer to do it.
  • +
  • Turning war into a video game, that is, replacing soldiers with machines being operated remotely by someone who is not in any danger and is far removed from his/her casualties.
  • +
  • Lack of transparency. We are trusting machines to make decisions with very little means of getting the justification behind a decision.
  • +
  • Resource consumption and pollution. This is not just an AI problem; however, every improvement in AI is creating more demand for Big Data, and together these ramp up the need for storage, processing, and networking. On top of the electricity and rare-mineral consumption, the infrastructure needs to be disposed of after its several-year lifespan.
  • +
  • Surveillance — with the ubiquity of smartphones and listening devices, there is a gold mine of data but too much to sift through every piece. Get an AI to sift through it, of course!
  • +
  • Cybersecurity — cybercriminals are increasingly leveraging AI to attack their targets.
  • +
+ +

Did I mention that all of these are in full swing already?

+ +

Long Term

+ +

Although there is no clear line between AI and AGI, this section is more about what happens when we go further towards AGI. I see two alternatives:

+ +
    +
  • Either we develop AGI as a result of our improved understanding of the nature of intelligence,
  • +
  • or we slap together something that seems to work but we don't understand very well, much like a lot of machine learning right now.
  • +
+ +

In the first case, if an AI ""goes rogue"" we can build other AIs to outwit and neutralise it. In the second case, we can't, and we're doomed. AIs will be a new life form and we may go extinct.

+ +

Here are some potential problems:

+ +
    +
  • Copy and paste. One problem with AGI is that it could quite conceivably run on a desktop computer, which creates a number of problems: + +
      +
    • Script Kiddies ­— people could download an AI and set up the parameters in a destructive way. Relatedly,
    • +
    • Criminal or terrorist groups would be able to configure an AI to their liking. You don't need to find an expert on bomb making or bioweapons if you can download an AI, tell it to do some research and then give you step-by-step instructions.
    • +
    • Self-replicating AI — there are plenty of computer games about this. AI breaks loose and spreads like a virus. The more processing power, the better able it is to protect itself and spread further.
    • +
  • +
  • Invasion of computing resources. It is likely that more computing power is beneficial to an AI. An AI might buy or steal server resources, or the resources of desktops and mobile devices. Taken to an extreme, this could mean that all our devices simply became unusable, which would immediately wreak havoc on the world. It could also mean massive electricity consumption (and it would be hard to ""pull the plug"" because power plants are computer controlled!)
  • +
  • Automated factories. An AGI wishing to gain more of a physical presence in the world could take over factories to produce robots which could build new factories and essentially create bodies for itself.
  • +
  • These are rather philosophical considerations, but some would argue that AI would destroy what makes us human: + +
      +
    • Inferiority. What if plenty of AI entities were smarter, faster, more reliable and more creative than the best humans?
    • +
    • Pointlessness. With robots replacing the need for physical labour and AIs replacing the need for intellectual labour, we will really have nothing to do. Nobody's going to get the Nobel Prize again because the AI will already be ahead. Why even get educated in the first place?
    • +
    • Monoculture/stagnation — in various scenarios (such as a single ""benevolent dictator"" AGI) society could become fixed in a perpetual pattern without new ideas or any sort of change (pleasant though it may be). Basically, Brave New World.
    • +
  • +
+ +

I think AGI is coming and we need to be mindful of these problems so that we can minimise them.

+",29739,,,,,9/17/2019 4:09,,,,2,,,,CC BY-SA 4.0 +15479,1,,,9/17/2019 4:13,,4,1198,"

I have some images with a fixed background and a single object on them, which is placed, in each image, at a different position on that background. I want to find a way to extract, in an unsupervised way, the positions of that object. For example, we, as humans, would record the x and y location of the object. Of course, the NN doesn't have a notion of x and y, but I would like, given an image, the NN to produce 2 numbers that preserve as much as possible of the actual relative position of objects on the background. For example, if 3 objects are equally spaced on a straight line (in 3 of the images), I would like the 2 numbers produced by the NN for each of the 3 images to preserve this ordering, even if they won't form a straight line. They can form a weird curve, but as long as the order is correct, that can be topologically transformed into the right, straight line. Can someone suggest any paper/architecture that did something similar? Thank you!

+",29742,,,,,9/20/2021 0:01,"Get the position of an object, out of an image",,1,8,,,,CC BY-SA 4.0 +15482,5,,,9/17/2019 10:18,,0,,,2444,,2444,,9/17/2019 10:18,9/17/2019 10:18,,,,0,,,,CC BY-SA 4.0 +15483,4,,,9/17/2019 10:18,,0,,"For questions related to computational learning theory (or, in short, learning theory), which is a research subfield of artificial intelligence devoted to studying the design and mathematical analysis of machine learning algorithms. Computational learning theory (COLT) is largely concerned with computational and data efficiency. A seminal paper in COLT is Valiant's ""A theory of the learnable"" (1984).",2444,,2444,,9/17/2019 10:18,9/17/2019 10:18,,,,0,,,,CC BY-SA 4.0 +15484,2,,15449,9/17/2019 11:18,,6,,"

I have an example which goes in kinda the opposite direction of the public's fears, but is a very real thing, which I already see happening. It is not AI-specific, but I think it will get worse through AI. It is the problem of humans trusting the AI conclusions blindly in critical applications.

+ +

We have many areas in which human experts are supposed to make a decision. Take for example medicine - should we give medication X or medication Y? The situations I have in mind are frequently complex problems (in the Cynefin sense) where it is a really good thing to have somebody pay attention very closely and use lots of expertise, and the outcome really matters.

+ +

There is a demand for medical informaticians to write decision support systems for this kind of problem in the medicine (and I suppose for the same type in other domains). They do their best, but the expectation is always that a human expert will always consider the system's suggestion just as one more opinion when making the decision. In many cases, it would be irresponsible to promise anything else, given the state of knowledge and the resources available to the developers. A typical example would be the use of computer vision in radiomics: a patient gets a CT scan and the AI has to process the image and decide whether the patient has a tumor.

+ +

Of course, the AI is not perfect. Even when measured against the gold standard, it never achieves 100% accuracy. And then there are all the cases where it performs well against its own goal metrics, but the problem was so complex that the goal metric doesn't capture it well - I can't think of an example in the CT context, but I guess we see it even here on SE, where the algorithms favor popularity in posts, which is an imperfect proxy for factual correctness.

+ +

You were probably reading that last paragraph and nodding along, ""Yeah, I learned that in the first introductory ML course I took"". Guess what? Physicians never took an introductory ML course. They rarely have enough statistical literacy to understand the conclusions of papers appearing in medical journals. When they are talking to their 27th patient, 7 hours into their 16-hour shift, hungry and emotionally drained, and the CT doesn't look all that clear-cut, but the computer says ""it's not a malignancy"", they don't take ten more minutes to concentrate on the image more, or look up a textbook, or consult with a colleague. They just go with what the computer says, grateful that their cognitive load is not skyrocketing yet again. So they turn from being experts to being people who read something off a screen. Worse, in some hospitals the administration not only trusts computers, it has also found out that they are convenient scapegoats. So, if a physician has a bad hunch which goes against the computer's output, it becomes difficult for them to act on that hunch and to defend their choice to override the AI's opinion.

+ +

AIs are powerful and useful tools, but there will always be tasks where they can't replace the toolwielder.

+",29755,,,,,9/17/2019 11:18,,,,1,,,,CC BY-SA 4.0 +15485,1,15531,,9/17/2019 11:37,,2,54,"

For example, if I want to do a cat and mouse AI, the cat would wish to minimize the time taken for it to catch the mouse and the mouse would want to maximize that time. The time is analog and thus I cannot use a traditional Xy method but need another method that goes like this:

+ +

network.train_against_value(X, y, determinator)

+ +

Here, X is more like where the cat and mouse are. y is where the cat or mouse should move, and determinator is the time taken for the mouse to be caught, where the mouse wishes to maximize this value through its output of y and the cat wishes to minimize it. There is one Xy pair for each decision made by the cat and mouse, but one determinator throughout one game. Many games are played to train the AI.

+ +

Example: X: (300, 300, 200, 200) -> (mousex, mousey, catx, caty)

+ +

Y: (1,3) -> (xmove, ymove) direction, the numbers are then tuned by code for the actual movement to be always 1.

+ +

Determinator: 50 -> time for mouse to be caught in seconds

+ +

It would train so that, for every X inputted, it outputs a y such that the determinator is at a minimum. Is there a method for train_towards_value as well? If there is no prebuilt method, how do I create one? What is the technical name for this kind of training?

+ +

I have two neural networks for the cat and the mouse, where the cat is slower than the mouse but is larger and could eat the mouse. Just assume the mouse is difficult to control via the neural network because of inefficiencies, so that it is possible for the cat to catch the mouse.

+",17423,,17423,,9/19/2019 8:22,9/19/2019 9:07,Training Keras Towards Or Against Analog Value?,,1,4,,,,CC BY-SA 4.0 +15486,2,,15449,9/17/2019 12:19,,5,,"

This is only intended to be a complement to other answers, so I will not discuss the possibility of AI trying to willingly enslave humanity.

+ +

But a different risk is already here. I would call it unmastered technology. I was taught science and technology, and IMHO, AI by itself has no notion of good and evil, nor of freedom. But it is built and used by human beings, and because of that non-rational behaviour can be involved.

+ +

I would start with a real-life example more related to general IT than to AI. I will speak of viruses and other malware. Computers are rather stupid machines that are good at processing data quickly. So most people rely on them. And some (bad) people develop malware that disrupts the correct behaviour of computers. And we all know that it can have terrible effects on small to medium organizations that are not well prepared for a computer loss.

+ +

AI is computer-based, so it is vulnerable to computer-type attacks. Here my example would be an AI-driven car. The technology is almost ready to work. But imagine the effect of malware making the car try to attack other people on the road. Even without direct access to the code of the AI, it can be attacked through side channels. For example, it uses cameras to read road signs. But because of the way machine learning is implemented, AI generally does not analyse a scene the same way a human being does. Researchers have shown that it is possible to alter a sign in a way that a normal human will still see the original sign, but an AI will see a different one. Imagine now that the sign is the road priority sign...

+ +

What I mean is that even if the AI has no evil intent, bad guys can try to make it behave badly. And the more important the actions delegated to AI (medicine, cars, planes, not to mention bombs), the higher the risk. Said differently, I do not really fear AI for itself, but for the way it can be used by humans.

+",29757,,,,,9/17/2019 12:19,,,,0,,,,CC BY-SA 4.0 +15487,2,,15449,9/17/2019 14:10,,4,,"

I think one of the most real risks (i.e. related to current, existing AIs) is blindly relying on unsupervised AIs, for two reasons.

+ +

1. AI systems may degrade

+ +

Physical errors in AI systems may produce wildly wrong results in regions for which they were not tested, because the physical system starts providing wrong values. This is sometimes mitigated by self-testing and redundancy, but it still requires occasional human supervision.

+ +

Self-learning AIs also have a software weakness: their weight networks or statistical representations may approach local minima where they are stuck with one wrong result.

+ +

2. AI systems are biased

+ +

This is fortunately frequently discussed, but worth mentioning: AI systems' classification of inputs is often biased because the training/testing datasets were biased as well. This results, to take a more obvious example, in AIs not recognizing people of a certain ethnicity. However, there are less obvious cases that may only be discovered after some bad accident, such as an AI not recognizing certain data and accidentally starting a fire in a factory, breaking equipment or hurting people.

+",26069,,,,,9/17/2019 14:10,,,,1,,,,CC BY-SA 4.0 +15488,2,,11889,9/17/2019 14:36,,0,,"

It sums the squared error between the output and the expected output. This isn't something you need to do for every experiment; they are simply telling you the metric they use as fitness for the genomes in the XOR example experiment, and in other experiments you could use something else. If you were training it to play video games, you would set your fitness to be a numerical representation of how well the genome played the game. So you don't always need an expected value, as long as your fitness function uses a meaningful metric as the fitness value.
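
As a rough sketch of the two kinds of fitness function mentioned above (the net.activate and play_game interfaces are hypothetical placeholders, not part of the original XOR example):

def xor_fitness(net):
    # XOR example: start from the best possible score and subtract
    # the squared error over the four input/output cases
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    error = sum((net.activate(inputs)[0] - expected) ** 2 for inputs, expected in cases)
    return 4.0 - error

def game_fitness(genome, play_game):
    # game-playing experiment: any meaningful score works as fitness,
    # e.g. points earned, distance travelled or survival time
    return play_game(genome)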

+",20044,,,,,9/17/2019 14:36,,,,0,,,,CC BY-SA 4.0 +15490,1,,,9/17/2019 16:00,,2,29,"

I'm reading the notes here and have a doubt on page 2 (""Least squares objective"" section). The probability of a word $j$ occurring in the context of word $i$ is $$Q_{ij}=\frac{\exp(u_j^Tv_i)}{\sum_{w=1}^W\exp(u_w^Tv_i)}$$

+ +

The notes read:

+ +
+

Training proceeds in an on-line, stochastic fashion, but the implied global cross-entropy loss can be calculated as $$J=-\sum_{i\in corpus}\sum_{j\in context(i)}\log Q_{ij}$$ + As the same words $i$ and $j$ can occur multiple times in the corpus, it is more efficient to first group together the same values for $i$ and $j$: + $$J=-\sum_{i=1}^W\sum_{j=1}^WX_{ij}\log(Q_{ij})$$

+
+ +

where $X_{ij}$ is the total number of times $j$ occurs in the context of $i$, and the co-occurrence frequencies are given by the co-occurrence matrix $X$. This much is clear. But then the author states that the denominator of $Q_{ij}$ is too expensive to compute, so the cross-entropy loss won't work.

+ +
+

Instead, we use a least square objective in which the normalization factors in $P$ and $Q$ are discarded: + $$\hat J=\sum_{i=1}^W\sum_{j=1}^WX_i(\hat P_{ij}-\hat Q_{ij})^2$$ + where $\hat P_{ij}=X_{ij}$ and $\hat Q_{ij}=\exp(u_j^Tv_i)$ are the unnormalized distributions.

+
+ +

$X_i=\sum_kX_{ik}$ is the number of times any word appears in the context of $i$. I don't understand this part. Why have we introduced $X_i$ out of nowhere? How is $\hat P_{ij}$ ""unnormalized""? Is there a tradeoff in switching from softmax to MSE?

+ +

(As far as I know, softmax made total sense in skip gram because we were calculating scores corresponding to different words (discrete possibilities) and matching the predicted output to the actual word - similar to a classification problem, so softmax makes sense.)

+",27548,,27548,,9/17/2019 17:06,9/17/2019 17:06,Doubt on formulating cost function for GloVe,,0,3,,,,CC BY-SA 4.0 +15493,5,,,9/17/2019 17:12,,0,,"

See A Theory of the Learnable (1984) by Leslie G. Valiant.

+",2444,,2444,,9/17/2019 17:12,9/17/2019 17:12,,,,0,,,,CC BY-SA 4.0 +15494,4,,,9/17/2019 17:12,,0,,"For questions related to Probably Approximately Correct (PAC) learning, a framework for mathematical analysis of machine learning algorithms, which was introduced in the paper ""A Theory of the Learnable"" (1984) by Leslie G. Valiant.",2444,,2444,,9/17/2019 17:12,9/17/2019 17:12,,,,0,,,,CC BY-SA 4.0 +15496,2,,15449,9/17/2019 19:12,,2,,"

Human beings currently exist in an ecological-economic niche of ""the thing that thinks"".

+ +

AI is also a thing that thinks, so it will be invading our ecological-economic niche. In both ecology and economics, having something else occupy your niche is not a great plan for continued survival.

+ +

How exactly Human survival is compromised by this is going to be pretty chaotic. There are going to be a bunch of plausible ways that AI could endanger human survival as a species, or even as a dominant life form.

+ +
+ +

Suppose there is a strong AI without ""super ethics"" which is cheaper to manufacture than a human (including manufacturing a ""body"" or way of manipulating the world), and as smart or smarter than a human.

+ +

This is a case where we start competing with that AI for resources. It will happen on microeconomic scales (do we hire a human, or buy/build/rent/hire an AI to solve this problem?). Depending on the rate at which AIs become cheap and/or smarter than people, this can happen slowly (maybe an industry at a time) or extremely fast.

+ +

In a capitalist competition, those that don't move over to the cheaper AIs end up out-competed.

+ +

Now, in the short term, if the AI's advantages are only marginal, the high cost of educating humans for 20-odd years before they become productive could make this process slower. In this case, it might be worth paying a Doctor above starvation wages to diagnose disease instead of an AI, but it probably isn't worth paying off their student loans. So new human Doctors would rapidly stop being trained, and existing Doctors would be impoverished. Then over 20-30 years AI would completely replace Doctors for diagnostic purposes.

+ +

If the AI's advantages are large, then it would be rapid. Doctors wouldn't even be worth paying poverty level wages to do human diagnostics. You can see something like that happening with muscle-based farming when gasoline-based farming took over.

+ +

During past industrial revolutions, the fact that humans were able to think meant that you could repurpose surplus human workers to do other kinds of work: manufacturing lines, service economy jobs, computer programming, etc. But in this model, AI is cheaper to train and build and is as smart as or smarter than humans at that kind of job.

+ +

As evidenced by the ethanol-induced Arab spring, crops and cropland can be used to fuel both machines and humans. When machines are more efficient in terms of turning cropland into useful work, you'll start seeing the price of food climb. This typically leads to riots, as people really don't like starving to death and are willing to risk their own lives to overthrow the government in order to prevent this.

+ +

You can mollify the people by providing subsidized food and the like. So long as this isn't economically crippling (ie, if expensive enough, it could result in you being out-competed by other places that don't do this), this is merely politically unstable.

+ +

As an alternative, in the short term, the ownership caste who is receiving profits from the increasingly efficient AI-run economy can pay for a police or military caste to put down said riots. This requires that the police/military castes be upper lower to middle class in standards of living, in order to ensure continued loyalty -- you don't want them joining the rioters.

+ +

So one of the profit centers you can put AI towards is AI based military and policing. Drones that deliver lethal and non-lethal ordnance based off of processing visual and other data feeds can reduce the number of middle-class police/military needed to put down food-price triggered riots or other instability. As we have already assumed said AIs can have bodies and training cheaper than a biological human, this can also increase the amount of force you can deploy per dollar spent.

+ +

At this point, we are talking about a mostly AI run police and military being used to keep starving humans from overthrowing the AI run economy and seizing the means of production from the more efficient use it is currently being put to.

+ +

The vestigial humans who ""own"" the system at the top are making locally rational decisions to optimize their wealth and power. They may or may not persist for long; so long as they drain a relatively small amount of resources and don't mess up the AI run economy, there won't be much selection pressure to get rid of them. On the other hand, as they are contributing nothing of value, they position ""at the top"" is politically unstable.

+ +

This process assumed a ""strong"" general AI. Narrower AIs can pull this off in pieces. A cheap, effective diagnostic computer could reduce most Doctors into poverty in a surprisingly short period of time, for example. Self driving cars could swallow 5%-10% of the economy. Information technology is already swallowing the retail sector with modest AI.

+ +

It is said that every technological advancement leads to more and better jobs for humans. And this has been true for the last 300+ years.

+ +

But prior to 1900, it was also true that every technological advancement led to more and better jobs for horses. Then the internal combustion engine (ICE) and the automobile arrived, and now there are far fewer working horses; the remaining horses are basically the equivalent of human personal servants: kept for the novelty of ""wow, cool, horse"" and the fun of riding and controlling a huge animal.

+",29777,,,,,9/17/2019 19:12,,,,0,,,,CC BY-SA 4.0 +15497,1,15507,,9/17/2019 21:32,,3,500,"

AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?

+",17601,,1671,,9/17/2019 21:34,1/26/2021 13:27,Is AGI likely to be developed in the next decade?,,3,3,0,,,CC BY-SA 4.0 +15499,2,,13335,9/18/2019 1:55,,0,,"

The paper you link to in the question is paywalled for me, so this answer may not be specific enough, but we'll make do.

+ +

Fundamentally, AUC is about measuring a classifier's robustness. The intuition behind the measure is that, if we have a classifier that outputs a score for an example, rather than a class label alone, we can choose to interpret that score in ways that trade detection rate (true positives) against false positive rate.

+ +

As a very simple example, imagine that we have a classifier that outputs a probability that an example pair is a link. By default, we might choose to interpret a probability of more than 0.5 as a link being present. Suppose this produces a certain true positive and false positive rate. Now let's change our mind. It's suddenly very important that we detect all the links, even if we get a lot of false positives. We choose to interpret any probability value > 0 as being a link. Our true positive rate will spike, but so will our false positive rate. Finally, if we care only about minimizing the prediction of fake links, we could choose to interpret only probability values that are > 0.99 as being links. Our true positive rate will go way down, but our false positive rate will too.

+ +

AUC quantifies how rapidly the false positive rate increases as we increase the true positive rate by changing our interpretation of the scores the classifier outputs. Better classifiers will generally be able to increase their true positive rates quickly without introducing many false positives, while worse classifiers will not. An AUC of 1.0 indicates that we can achieve 100% true positive rate without increasing the false positive rate at all. An AUC of 0.5 indicates that increasing the true positive rate by 1% will always result in an increase of 1% in the false positive rate (i.e. the classifier is essentially just as good as a random guess without looking at the input).
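
As a concrete illustration of this threshold trade-off and of the AUC itself, here is a small synthetic sketch (the data is made up purely for demonstration):

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                # 1 = link, 0 = non-link
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, 1000), 0, 1)   # noisy classifier scores

print('AUC:', roc_auc_score(y_true, y_score))

for t in (0.0, 0.5, 0.99):                   # the three interpretations discussed above
    pred = y_score > t
    tpr = np.mean(pred[y_true == 1])         # true positive rate at this threshold
    fpr = np.mean(pred[y_true == 0])         # false positive rate at this threshold
    print(f'threshold {t}: TPR={tpr:.2f}, FPR={fpr:.2f}')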

+ +

In your link prediction problem, we cannot directly measure the false positive rate. However, if we treat all non-links as negative examples when we compute this measure, a classifier with a higher AUC is one that can more rapidly separate positive examples from non-positive ones as you increase your sensitivity. Even though we aren't sure which examples are false positives and which are unseen links, a classifier with a high AUC is one where we would be more likely to trust that the non-links it labels as links are actually links we can't see.

+",16909,,,,,9/18/2019 1:55,,,,0,,,,CC BY-SA 4.0 +15501,1,,,9/18/2019 9:04,,2,188,"

I would like to clone a voice as precisely as possible. Lately, impressive models have been released that only need about 10 s of voice input (cf. https://github.com/CorentinJ/Real-Time-Voice-Cloning), but I would like to go beyond that and clone a voice even more precisely (with subsequent text-to-speech using that voice). It doesn't matter if I have to provide minutes or hours of voice inputs.

+",29792,,,,,9/18/2019 9:04,What is the State-of-the-Art open source Voice Cloning tool right now?,,0,0,,,,CC BY-SA 4.0 +15502,1,15530,,9/18/2019 10:22,,0,69,"

Usually, for a DNN, I have training data matching X (2D) to Y (2D), for example, the XOR data:

+ +
X = [[0,0],[0,1],[1,0],[1,1]];
+Y = [[0],  [1],  [1],  [0]  ];
+
+ +

However, RNNs seem strange; I don't get how to match X to Y, since the input of an RNN layer is 3D while the output is 2D (right-click to open in a new tab): https://colab.research.google.com/drive/17IgFuxOYgN5fNO9LKwDijEBkIeWNPas6

+ +
import tensorflow as tf;
+
+x = [[[1],[2],[3]], [[4],[5],[6]]];
+bsize = 2;
+times = 3;
+
+#3d input
+input = tf.placeholder(tf.float32, [bsize,times,1]);
+
+cell  = tf.keras.layers.LSTMCell(20);
+rnn   = tf.keras.layers.RNN(cell);
+hid   = rnn(input);
+
+sess = tf.Session();
+init = tf.global_variables_initializer();
+sess.run(init);
+
+#results in 2d
+print(sess.run(hid, {input:x}));
+
+ +

The example data seen on https://www.tensorflow.org/tutorials/sequences/recurrent are:

+ +
 t=0  t=1    t=2  t=3     t=4
+[the, brown, fox, is,     quick]
+[the, red,   fox, jumped, high]
+
+ +

How do I map these data from X (the 3D input for the RNN layer) to Y (2D)? (Y is 2D because the RNN layer's output is 2D.)

+",2844,,2844,,9/18/2019 10:44,9/20/2019 10:33,How to map X to Y for TensorFlow RNN training data,,1,4,,,,CC BY-SA 4.0 +15503,2,,15449,9/18/2019 11:00,,1,,"

AI that is used to solve a real-world problem could pose a risk to humanity, and it doesn't exactly require sentience; it also requires a degree of human stupidity.

+ +

Unlike humans, an AI would find the most logical answer without the constraint of emotion, ethics, or even greed... Only logic. Ask this AI how to solve a problem that humans created (for example, climate change), and its solution might be to eliminate the entirety of the human race to protect the planet. Obviously, this would require giving the AI the ability to act upon its conclusion, which brings me to my earlier point: human stupidity.

+",29795,,,,,9/18/2019 11:00,,,,0,,,,CC BY-SA 4.0 +15504,1,,,9/18/2019 11:23,,4,373,"

I would like to develop a platform in which people will write text and upload images. I am going to use the Google API to classify the text and to extract all kinds of metadata from the images. In the end, I am going to have a lot of text which describes the content (text and images). Later, I would like to show my users related posts (that is, similar posts, from the content point of view).

+ +

What is the most appropriate way of doing this? I am not an AI expert, and the best approach from my perspective is to have some tools, like the Google API or the Apache Lucene search engine, which can hide the details of how this is done.

+",29799,,2444,,11/15/2019 19:51,8/18/2020 5:53,What is the best way to find the similarities between two text documents?,,2,0,,,,CC BY-SA 4.0 +15505,2,,15504,9/18/2019 13:58,,0,,"

Google has introduced Universal Sentence Encoder, which converts sentences into vector representations while preserving the semantic details. The pre-trained models are available on Tensorflow Hub. The Colab notebook would help you get started as well.
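
As a minimal sketch of how the embeddings could be used to score the similarity of two documents (the hub URL and model version are the usual ones at the time of writing, so verify them against TF Hub):

import numpy as np
import tensorflow_hub as hub

embed = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4')

docs = ['How do I reset my password?', 'I forgot my login credentials.']
vectors = embed(docs).numpy()

# cosine similarity between the two document embeddings
similarity = np.dot(vectors[0], vectors[1]) / (np.linalg.norm(vectors[0]) * np.linalg.norm(vectors[1]))
print(similarity)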

+",252,,2444,,11/15/2019 19:47,11/15/2019 19:47,,,,0,,,,CC BY-SA 4.0 +15507,2,,15497,9/18/2019 14:26,,5,,"

As a riff on my answer to this question, which is about the broader concern of the development of the singularity, rather than the narrower concern of the development of AGI:

+ +

I can say that among AI researchers I interact with, it far more common to view the development of AGI in the next decade as speculation (or even wild speculation) than as settled fact.

+ +

This is borne out by surveys of AI researchers, with 80% thinking ""The earliest that machines will be able to simulate learning and every other aspect of human intelligence"" is in ""more than 50 years"" or ""never"", and just a few percent thinking that such forms of AI are ""near"". It's possible to quibble over what exactly is meant by AGI, but it seems likely that for us to reach AGI, we'd need to simulate human-level intelligence in at least most of its aspects. The fact that AI researchers think this is very far off suggests that they also think AGI is not right around the corner.

+ +

I suspect that the reasons AI researchers are less optimistic about AGI than Kurzweil or others in tech (but not in AI), are rooted in the fact that we still don't have a good understanding of what human intelligence is. It's difficult to simulate something that we can't pin down. Another factor is that most AI researchers have been working in AI for a long time. There are countless past proposals for AGI frameworks, and all of them have been not just wrong, but in the end, more or less hopelessly wrong. I think this creates an innate skepticism of AGI, which may perhaps be unfair. Nonetheless, expert opinion on this one is pretty well settled: no AGI this decade, and maybe not ever!

+",16909,,16909,,9/18/2019 17:54,9/18/2019 17:54,,,,0,,,,CC BY-SA 4.0 +15508,1,15511,,9/18/2019 14:39,,2,364,"

Let's say that we have three actions. The highest-valued action of the three choices is the first. When training the DQN, what do we do with the other two, as we don't have a target for them, since they weren't taken?

+

I've seen some code that leaves the target for off actions as whatever the prediction returned, which feels a bit wrong to me as two or more similar behaving actions might never be differentiated well after random action selection dwindles.

+

I've also seen some implementations that set the target for all actions to zero and only adjust the target for the action taken. This would help regarding action differentiation long term, but it also puts more reliance on taking random actions for any unfamiliar states (I believe) as an off action might never be taken otherwise.

+",29806,,2444,,12/5/2020 14:22,12/5/2020 14:22,"When training a DQN, how should we update the value of actions that were not taken?",,1,0,,,,CC BY-SA 4.0 +15509,1,15514,,9/18/2019 15:05,,4,398,"

At least at some level, maybe not end-to-end always, but deep learning always learns a function, essentially a mapping from a domain to a range. The domain and range, at least in most cases, would be multi-variate.

+ +

So, when a model learns a mapping, considering every point in the domain-space has a mapping, does it try to learn a continuous distribution based on the training-set and its corresponding mappings, and map unseen examples from this learned distribution? Could this be said about all predictive algorithms?

+ +

If yes, then could binary classification be compared to having a hyper-plane (as in support vector classification) in a particular kernel-space, and could the idea of classification problems using hyper-planes be extended in general to any deep learning problem learning a mapping?

+ +

It would also explain why deep learning needs a lot of data and why it works better than other learning algorithms for simple problems.

+",25658,,2444,,2/20/2020 13:34,6/24/2020 10:45,"In deep learning, do we learn a continuous distribution based on the training dataset?",,1,0,,,,CC BY-SA 4.0 +15510,1,,,9/18/2019 15:29,,7,140,"

I was reading the paper Label-Free Supervision of Neural Networks with Physics and Domain Knowledge, published at AAAI 2017, which won the best paper award.

+ +

I understand the math and it makes sense. Consider the first application shown in the paper of tracking falling objects. They train only on multiple trajectories of the said pillow, and during the evaluation, they claim that they can track any other falling object (which may not be pillows).

+ +

I am unable to understand how that happens? How does the network know which object to track? Even during the training, how does it know that it's the pillow that it's supposed to track?

+ +

The network is trained to fit a parabola. But any parabola could fit it. There are infinite such parabolas.

+",29809,,2444,,9/25/2019 21:15,6/7/2023 4:10,"How does the network know which objects to track in the paper ""Label-Free Supervision of Neural Networks with Physics and Domain Knowledge""?",,1,0,,,,CC BY-SA 4.0 +15511,2,,15508,9/18/2019 16:06,,3,,"

The loss function for DQN algorithm is +\begin{equation} +L(\theta_i) = \mathbb E_{s, a, r, s'} [(y - Q(s, a;\theta_i))^2] +\end{equation} +Like you said, we only take one action per timestep. We can only shift weights of the network that had the effect in calculating action value $Q(s, a)$ for that particular action that we took. For that action, variable $y$ would have value +\begin{equation} +y = r + \gamma \max_{a'} Q(s', a', \theta^-_i) +\end{equation} +and we would have standard form of DQN loss. We calculate the gradient of that loss with respect to network parameters $\theta$, backprop it and slightly shift weights so to more accurately estimate $Q(s, a)$ for that specific state-action pair.

+ +

We never took other actions. Since we never took them we didn't get reward $r$ and we can't estimate $\max_{a'} Q(s', a', \theta^-_i)$ for those other actions. We don't want to change weights that had effect on calculating actions values for those other actions because we have no way of estimating how accurate they were. Standard approach is to set +\begin{equation} +y = Q(s, a;\theta_i) +\end{equation} +this way the loss for those other actions would be $0$ and it would result in not changing the weights that had influence in calculating their action values.

+ +

Setting target $y$ to $0$ for all other actions would mean that we want $Q(s, a)$ for them all to slightly shift to $0$. That would not be correct since we have no way of knowing their true value. I think you misinterpreted that part in the implementations.
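
A minimal sketch of this copy-the-predictions target construction, assuming a Keras-style model that outputs one Q-value per action (model, target_model, gamma and the replay-buffer arrays are placeholders, not from the original answer):

import numpy as np

def dqn_targets(model, target_model, gamma, states, actions, rewards, next_states, dones):
    # predicted Q-values serve as the target for every action we did not take,
    # so their loss (and gradient) is zero
    targets = model.predict(states).copy()             # shape (batch_size, n_actions)
    q_next = target_model.predict(next_states)
    batch_idx = np.arange(len(actions))
    targets[batch_idx, actions] = rewards + gamma * q_next.max(axis=1) * (1.0 - dones)
    return targets

# model.fit(states, dqn_targets(...)) would then perform the update, with only
# the taken actions contributing to the gradient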

+",20339,,,,,9/18/2019 16:06,,,,0,,,,CC BY-SA 4.0 +15512,2,,13952,9/18/2019 16:18,,1,,"

You can use an LSTM in reinforcement learning, of course. But you don't give actions to the agent; it doesn't work like that.

+ +

The agent gives actions to your MDP and you must return a proper reward in order to teach the agent. For example, if you implement a trading bot, the policy (the policy = the agent, which is your LSTM network) will say that at step T it is going to take action 34, which means something to your MDP, and you return a reward, for example -0.03 or +0.05 or whatever, depending on what that action does at moment T.

+ +

So I read the question as: you want to do supervised learning on a reinforcement learning environment.

+ +

You can mimic supervised learning as well, but the idea of reinforcement learning is not that.

+ +

Here is how to mimic:

+ +

Scenario: you are at step T; let's say you have 3 possible actions: -1, 0, +1.

+ +

In supervised learning, you must give the desired action to the learning process. In reinforcement learning, you must give a reward based on whether or not you are happy with the agent's action.

+ +

So you must have predefined that for action -1 you are not happy and give a reward of 0.0, for action 0 you are not happy and give a reward of 0.0, and for action +1 you are happy and give a reward of +100.
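
In code, that predefined mapping is just a reward function along these lines (a toy sketch using the values from the example above):

def reward_at_step_t(action):
    # at this particular step, only +1 is the desired action
    return 100.0 if action == 1 else 0.0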

+ +

I hope this makes things clear.

+",29812,,,,,9/18/2019 16:18,,,,3,,,,CC BY-SA 4.0 +15513,2,,15449,9/18/2019 16:32,,1,,"

In addition to the many answers already provided, I would bring up the issue of adversarial examples in the area of image models.

+ +

Adversarial examples are images that have been perturbed with specifically designed noise that is often imperceptible to a human observer, but strongly alters the prediction of a model.

+ +

Examples include:

+ + +",26341,,,,,9/18/2019 16:32,,,,0,,,,CC BY-SA 4.0 +15514,2,,15509,9/18/2019 16:40,,3,,"

Well, there are some questions here...

+
+

Does it (Deep Learning) try to learn a continuous distribution based +on the training-set and its corresponding mappings, and map unseen +examples from this learned distribution?

+
+

Yes. Talking about Deep Artificial Neural Networks, they try to learn a continuous distribution using continuous activation functions in each neuron. Therefore, the output is also a continuous function that represents a continuous probability distribution. The issue with unseen examples is the need for similar examples in the training set; otherwise, the weights and biases of the network will not be tuned in the regions of space around the unseen example. Imagine a Neural Network learning the function y = x. If we only present values between 0 and 10 during training, we should expect it to make good predictions for y only for values of x ranging from 0 to 10. It doesn't mean that it won't predict for other values, but the predictions will not be as accurate, or may be nowhere close to the expectations. That is because the network is not trying to guess what function was used to generate y; it is simply trying to adjust its parameters to make its internal functions generate the expected y for the given x. That is why Deep Neural Networks require a lot of data. In a unidimensional space, it is easier to provide examples that cover the subset of the domain we want our network to learn. When we use a multidimensional space, we need a lot more examples to have a good representation of the hyperspace used as the domain.
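
A toy sketch of that y = x example (exact numbers will vary from run to run; this just illustrates that nothing constrains the network outside the range it was trained on):

import numpy as np
from tensorflow import keras

x_train = np.random.uniform(0, 10, size=(1000, 1))
y_train = x_train                                    # the target function is simply y = x

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(1,)),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x_train, y_train, epochs=50, verbose=0)

print(model.predict(np.array([[5.0]])))    # inside the training range: close to 5
print(model.predict(np.array([[50.0]])))   # outside it: nothing guarantees a value near 50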

+
+

Could this (map unseen examples) be said about all predictive algorithms?

+
+

Yes, it should. Otherwise, the algorithm would not be able to generalize well. A good predictive algorithm is one that can predict unseen examples well while using fewer training samples.

+
+

Could the idea of classification problems using hyper-planes be +extended in general to any Deep Learning problem learning a mapping?

+
+

In the case of Deep Neural Networks, the result is more like: for a given input value, return the probability of it belonging to a class. For binary classification, the network will have a single output. The sigmoid function squashes this output into the range between 0 and 1. We can interpret the output as the probability of belonging to one of the two possible classes. To know the probability for the other class, we subtract it from 1. For three or more classes, we will need three or more outputs ranging from 0 to 1, and each output is the probability of belonging to one of the classes. In this case, the outputs are also normalized by a softmax function, which guarantees that the sum of all outputs is equal to 1, as in a probability distribution.

+
+

Would also explain why Deep Learning needs a lot of data and why it +works better mostly than other Learning algorithms for simple +problems.

+
+

Already partially explained... The need for a lot of data is to have a good representation of the hyperspace used as the domain. Deep Neural Networks work well because of their power to represent different models. They are very 'flexible' functions that can be bent to approximate the relation between the data in the training set and the expected target. Simpler algorithms, such as linear models, have less representational power; they are limited to a smaller set of models. Even though many models can be linearly approximated (because the input and output almost follow a linear relation), the neural network will be able to learn the nuances of the dataset better. This can also be the curse of neural networks, because they may try to learn every detail of the training set that isn't really relevant or true for other cases; this concept is called overfitting (or overtraining)... but that is a discussion for another topic.

+",29810,,29810,,6/24/2020 10:45,6/24/2020 10:45,,,,2,,,,CC BY-SA 4.0 +15516,2,,11927,9/18/2019 19:10,,1,,"

Empathy is about one's life experience and values: someone can be a good person, or a bad person, based on their education, culture, past experience and values, just as I, who recently became a father, became much more empathetic toward my parents.

+ +

If it is really a semi-programmed artificial intelligence that captures the experience around it (with its programmers, with people, with the environment, etc.), and is somehow set to learn from it, I believe it can develop some kind of “empathy”, but, as Jaume said, it won't be “natural”.

+ +

We have to remember that real artificial intelligence is the one that absorbs and learns, not a set of IF's and ELSE's.

+",29817,,,,,9/18/2019 19:10,,,,2,,,,CC BY-SA 4.0 +15518,2,,11927,9/18/2019 21:10,,2,,"

It seems to me that empathy is based on understanding the experience of another entity:

+ +
+

originally Psychology. The ability to understand and appreciate another person's feelings, experience, etc.
SOURCE: OED

+
+ +

Using this definition, the AI would have to understand human experience. (There may be a ""Chinese Room"" issue in terms of whether one considers the algorithm to truly ""understand"". But, if it can classify the input sufficiently to produce an appropriate response, that can constitute understanding.)

+ +

The underlying problem is that the algorithm likely doesn't ""feel"" in the same way humans experience emotions, in that the human experience is colored by chemical response. So while the algorithm might be able to demonstrate sufficient ""understanding"" of a human's experience, and act in an empathetic manner, the degree of understanding may always be limited.

+",1671,,1671,,9/18/2019 21:31,9/18/2019 21:31,,,,1,,,,CC BY-SA 4.0 +15519,5,,,9/18/2019 21:27,,0,,,1671,,1671,,9/18/2019 21:27,9/18/2019 21:27,,,,0,,,,CC BY-SA 4.0 +15520,4,,,9/18/2019 21:27,,0,,For questions relating to predictions of future technology specifically related to AI and modeling human cognition. (Includes subjects such as the hypothetical Singularity.),1671,,1671,,9/18/2019 21:27,9/18/2019 21:27,,,,0,,,,CC BY-SA 4.0 +15521,2,,15497,9/18/2019 22:58,,2,,"

I wouldn't take anything Ray Kurzweil says especially seriously. Actual AI experts spend large quantities of time reading the existing scientific literature, and working to expand it. Because Kurzweil doesn't spend much of his time actually learning about AI, he has plenty of time in which to talk about it. Loudly. This is harmful to research, because 1) a lot of the uninformed predictions he and others make resemble doomsday scenarios, and 2) the predictions of good things have insanely optimistic time frames attached, and when they don't come true, research funding may be lost because AI hasn't lived up to what people thought it promised.

+ +

AI research has been progressing very rapidly in the last decade, but if we're being honest, a lot of the credit for that has to go to the people who develop research-grade graphics cards. The ability to perform massive amounts of linear algebra in parallel has allowed us to use techniques that we've known about for a couple decades, but that were too computationally expensive to be practical at the time. And because those techniques are now practical, a lot of current research is applying those techniques to new problems, and modifying and improving them based on what we've learned. (I don't want to understate the contributions here; there have been a lot of really clever ideas developed in the last ten years. But it's mostly consistent iterative improvement of techniques that already existed, rather than completely revolutionary ideas.)

+ +

To make human-equivalent AIs, we'll probably need to make a few of those giant conceptual leaps. And each of those leaps will then need to be followed up by a decade or two of iterative improvement, because that's how the process works. Case in point, the revolutionary idea that eventually led to all the Deep Learning models out there today was this one, dated 1986. First, there was the revolutionary idea. It was followed up by a bunch of work that built on it and expanded it in new directions. The work eventually stagnated because of hardware constraints. Then hardware scientists and engineers made some advances that let us continue work, and only then did we finally start getting the major applications that we're seeing today.

+ +

We know human-level intelligence is possible, since humans manage it. I have little doubt that we'll figure out how to do it with AI eventually (maybe in my lifetime, maybe not). But if you want Kurzweil's predictions to be even remotely plausible, you might want to add a zero to the end of most of his time frames.

+",2212,,,,,9/18/2019 22:58,,,,0,,,,CC BY-SA 4.0 +15522,1,15523,,9/18/2019 23:19,,3,153,"

I want to give some examples of AI via movies to my students. There are many movies that include AI, whether being the main character or extras.

+

Which movies have the most realistic (the most possible or at least close to being made in this era) artificial intelligence?

+",22111,,2444,,6/27/2020 22:25,6/27/2020 22:25,Which movies have the most realistic artificial intelligence?,,2,0,,,,CC BY-SA 4.0 +15523,2,,15522,9/18/2019 23:33,,3,,"

Just A Rather Very Intelligent System (J.A.R.V.I.S.) in Iron Man (and related films, such as The Avengers) is something (a personal assistant) that people are already trying to develop, so JARVIS is a quite realistic artificial intelligence. Examples of existing personal assistants are Google Assistant (integrated into Google Home devices), Cortana, Siri and Alexa. There are other virtual assistants, but, unfortunately, there aren't many reliable open-source ones. Note that JARVIS is way more intelligent and capable than the other mentioned personal assistants.

+ +

Similarly, HAL 9000, in 2001: A Space Odyssey, is a sentient artificial intelligence which can be considered a personal assistant.

+",2444,,2444,,9/18/2019 23:39,9/18/2019 23:39,,,,2,,,,CC BY-SA 4.0 +15524,1,,,9/18/2019 23:45,,22,9890,"

The Transformer model introduced in ""Attention is all you need"" by Vaswani et al. incorporates a so-called position-wise feed-forward network (FFN):

+ +
+

In addition to attention sub-layers, each of the layers in our encoder + and decoder contains a fully connected feed-forward network, which is + applied to each position separately and identically. This consists of + two linear transformations with a ReLU activation in between.

+ +

$$\text{FFN}(x) = \max(0, x \times {W}_{1} + {b}_{1}) \times {W}_{2} + {b}_{2}$$

+ +

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is ${d}_{\text{model}} = 512$, and the inner-layer has dimensionality ${d}_{ff} = 2048$.

+
+ +

I have seen at least one implementation in Keras that directly follows the convolution analogy. Here is an excerpt from attention-is-all-you-need-keras.

+ +
class PositionwiseFeedForward():
+    def __init__(self, d_hid, d_inner_hid, dropout=0.1):
+        self.w_1 = Conv1D(d_inner_hid, 1, activation='relu')
+        self.w_2 = Conv1D(d_hid, 1)
+        self.layer_norm = LayerNormalization()
+        self.dropout = Dropout(dropout)
+    def __call__(self, x):
+        output = self.w_1(x) 
+        output = self.w_2(output)
+        output = self.dropout(output)
+        output = Add()([output, x])
+        return self.layer_norm(output)
+
+ +

Yet, in Keras you can apply a single Dense layer across all time-steps using the TimeDistributed wrapper (moreover, a simple Dense layer applied to a 2D input implicitly behaves like a TimeDistributed layer). Therefore, in Keras a stack of two Dense layers (one with a ReLU and the other one without an activation) is exactly the same thing as the aforementioned position-wise FFN. So, why would you implement it using convolutions?

+ +

Update

+ +

Adding benchmarks in response to the answer by @mshlis:

+ +
import os
+import typing as t
+os.environ['CUDA_VISIBLE_DEVICES'] = '0'
+
+import numpy as np
+
+from keras import layers, models
+from keras import backend as K
+from tensorflow import Tensor
+
+
+# Generate random data
+
+n = 128000  # n samples
+seq_l = 32  # sequence length
+emb_dim = 512  # embedding size
+
+x = np.random.normal(0, 1, size=(n, seq_l, emb_dim)).astype(np.float32)
+y = np.random.binomial(1, 0.5, size=n).astype(np.int32)
+
+ +
+ +
# Define constructors
+
+def ffn_dense(hid_dim: int, input_: Tensor) -> Tensor:
+    output_dim = K.int_shape(input_)[-1]
+    hidden = layers.Dense(hid_dim, activation='relu')(input_)
+    return layers.Dense(output_dim, activation=None)(hidden)
+
+
+def ffn_cnn(hid_dim: int, input_: Tensor) -> Tensor:
+    output_dim = K.int_shape(input_)[-1]
+    hidden = layers.Conv1D(hid_dim, 1, activation='relu')(input_)
+    return layers.Conv1D(output_dim, 1, activation=None)(hidden)
+
+
+def build_model(ffn_implementation: t.Callable[[int, Tensor], Tensor], 
+                ffn_hid_dim: int, 
+                input_shape: t.Tuple[int, int]) -> models.Model:
+    input_ = layers.Input(shape=(seq_l, emb_dim))
+    ffn = ffn_implementation(ffn_hid_dim, input_)
+    flattened = layers.Flatten()(ffn)
+    output = layers.Dense(1, activation='sigmoid')(flattened)
+    model = models.Model(inputs=input_, outputs=output)
+    model.compile(optimizer='Adam', loss='binary_crossentropy')
+    return model
+
+ +
+ +
# Build the models
+
+ffn_hid_dim = emb_dim * 4  # this rule is taken from the original paper
+bath_size = 512  # the batchsize was selected to maximise GPU load, i.e. reduce PCI IO overhead
+
+model_dense = build_model(ffn_dense, ffn_hid_dim, (seq_l, emb_dim))
+model_cnn = build_model(ffn_cnn, ffn_hid_dim, (seq_l, emb_dim))
+
+ +
+ +
# Pre-heat the GPU and let TF apply memory stream optimisations
+
+model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+%timeit model_dense.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+
+model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+%timeit model_cnn.fit(x=x, y=y[:, None], batch_size=bath_size, epochs=1)
+
+ +

I am getting 14.8 seconds per epoch with the Dense implementation:

+ +
Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.6332
+Epoch 1/1
+128000/128000 [==============================] - 15s 115us/step - loss: 0.5327
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.3828
+Epoch 1/1
+128000/128000 [==============================] - 14s 113us/step - loss: 0.2543
+Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.1908
+Epoch 1/1
+128000/128000 [==============================] - 15s 116us/step - loss: 0.1533
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.1475
+Epoch 1/1
+128000/128000 [==============================] - 15s 117us/step - loss: 0.1406
+
+14.8 s ± 170 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
+
+ +

and 18.2 seconds for the CNN implementation. I am running this test on a standard Nvidia RTX 2080. +So, from a performance perspective there seems to be no point in actually implementing an FFN block as a CNN in Keras. Considering that the maths are the same, the choice boils down to pure aesthetics.

+",29824,,29824,,2/12/2020 20:14,7/12/2020 0:02,Why would you implement the position-wise feed-forward network of the transformer with convolution layers?,,2,2,,,,CC BY-SA 4.0 +15525,1,,,9/19/2019 4:49,,3,217,"

The terms are mentioned in the paper: An Emphatic Approach to the Problem of off-Policy Temporal-Difference Learning (Sutton, Mahmood, White; 2016) and more, of course.

+

In this paper, they proposed the proof of "stability" but not convergence.

+

It seems that stability is guaranteed if the "key matrix" is shown to be positive definite. However, convergence requires more than that.

+

I don't understand the exact difference between the two.

+",9793,,2444,,12/13/2021 17:57,12/13/2021 17:57,What are the differences between stability and convergence in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 +15526,1,,,9/19/2019 5:01,,0,57,"

Prediction's goal is to get an estimate of a performance of a policy given a specific state.

+ +

Control's goal is to improve the policy wrt. the prediction.

+ +

The alternation between the two is the basis of reinforcement learning algorithms.

+ +

In the paper “Safe and Efficient Off-Policy Reinforcement Learning.” (Munos, 2016), the section 3.1) ""Policy evaluation"" assumes that the target policy is fixed, while the section 3.2) ""Control"" extends to where the target policy is a sequence of policies improved by a sequence of increasingly greedy operations.

+ +

This suggests that even if a proof of convergence is established for a fixed target policy, it does not immediately imply convergence in the case where the target policy is a sequence of improving policies.

+ +

I wonder why that is the case. If an algorithm converges under a fixed-target-policy assumption, any policy in the chain of improvement should pose no problem for this algorithm either. By the merit of policy improvement, each policy in the sequence is increasingly better, hence converging to an optimal policy.

+ +

This should be obvious from the policy improvement perspective and should require no further proof at all?

+",9793,,9793,,9/19/2019 8:35,9/22/2019 17:20,Why doesn't stability in prediction imply stability in control in off-policy reinforcement learning?,,1,3,,,,CC BY-SA 4.0 +15527,2,,8971,9/19/2019 5:09,,1,,"

Ignoring HER for now, the $Q$ and $V$ functions operate on states and actions, which are part of a Markov decision process that we call $M_0$.

+

Back to HER, the $Q$ and $V$ functions now take an additional parameter: the goal. We will denote individual goals $g_n$, the true goal $g_0$, and the set of all goals $G$. The set of goals is chosen such that every state matches at least one goal. We create a new MDP $M_1 = M_0 \times G$ (i.e. a larger MDP composed of multiple copies of $M_0$, all the states in each being tagged with a goal). The reward is either +1 or 0 depending on whether or not the goal and the component from $M_0$ match in some predefined sense. In HER, trajectories are collected from the subset of $M_1$ where $g = g_0$ and added to the replay buffer. When training the $Q$ and $V$ functions we don't only use the original trajectories: we create new ones by substituting new values of $g$, which we do strategically so as to include some trajectories with a positive reward.

+
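To make the relabelling step concrete, here is a minimal sketch (my own illustration, not code from the HER paper) of how stored transitions could be duplicated with substituted goals using the 'future' strategy; the transition format and reward_fn are assumptions:

import random

def her_relabel(episode, reward_fn, k=4):
    # episode: list of (state, action, next_state, goal) tuples collected with the true goal g0
    relabeled = []
    for t, (state, action, next_state, goal) in enumerate(episode):
        # keep the original transition, scored against the true goal
        relabeled.append((state, action, next_state, goal, reward_fn(next_state, goal)))
        # add k copies whose goal is a state actually reached later in the episode
        for _ in range(k):
            new_goal = random.choice(episode[t:])[2]  # next_state of a future step
            relabeled.append((state, action, next_state, new_goal,
                              reward_fn(next_state, new_goal)))
    return relabeled

Because the substituted goals were actually reached, some of these relabelled transitions receive the +1 reward, which is exactly what gives the function approximator something to generalise from.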

Things to note:

+
    +
  1. HER doesn't assign rewards to states or trajectories in $M_0$: the reward function used is only defined for $M_1$
  2. +
  3. The performance of HER depends on $Q$ and $V$ both being models with some ability to extrapolate to unseen data points; such as neural networks, support vector machines, etc. It would not provide any benefit if applied to value tables.
  4. +
+",29828,,2444,,11/27/2020 15:54,11/27/2020 15:54,,,,1,,,,CC BY-SA 4.0 +15529,1,15533,,9/19/2019 8:24,,0,1617,"

I'm testing out the TensorFlow LSTM layer on a text generation task (not a classification task), but something is wrong with my code: it doesn't converge. What changes should be made?

+ +

Source code:

+ +
import tensorflow as tf;
+
+# t=0  t=1    t=2  t=3     
+#[the, brown, fox, is,     quick]
+#   0  1      2    3       4
+#[the, red,   fox, jumps,  high]
+#   0  5      2    6       7
+
+#t0 x=[[the],  [the]]
+#   y=[[brown],[red]]
+#t1 ...
+#t2
+#t3
+bsize = 2;
+times = 4;
+
+#data
+x = [];
+y = [];
+#t0        the:     the:
+x.append([[0/6],   [0/6]]); #normalise: x divided by 6 (max x)
+#          brown:   red:
+y.append([[1/7],   [5/7]]); #normalise: y divided by 7 (max y)
+#t1
+x.append([[1/6],   [5/6]]);
+y.append([[2/7],   [2/7]]);
+#t2
+x.append([[2/6],   [2/6]]);
+y.append([[3/7],   [6/7]]);
+#t3
+x.append([[3/6],   [6/6]]);
+y.append([[4/7],   [7/7]]);
+
+#model
+inputs  = tf.placeholder(tf.float32,[times,bsize,1]) #4,2,1
+exps    = tf.placeholder(tf.float32,[times,bsize,1]);
+
+layer1  = tf.keras.layers.LSTMCell(20) 
+hids1,_ = tf.nn.static_rnn(layer1,tf.split(inputs,times),dtype=tf.float32);
+
+w2      = tf.Variable(tf.random_uniform([20,1],-1,1));
+b2      = tf.Variable(tf.random_uniform([   1],-1,1));
+outs    = tf.sigmoid(tf.matmul(hids1,w2) + b2);
+
+loss  = tf.reduce_sum(tf.square(exps-outs))
+optim = tf.train.GradientDescentOptimizer(1e-1)
+train = optim.minimize(loss)
+
+#train
+s    = tf.Session();
+init = tf.global_variables_initializer();
+s.run(init)
+
+feed = {inputs:x, exps:y}
+for i in range(10000):
+  if i%1000==0:
+    lossval = s.run(loss,feed)
+    print(""loss:"",lossval)
+  #end if
+  s.run(train,feed)
+#end for
+
+lastloss = s.run(loss,feed)
+print(""loss:"",lastloss,""(last)"");
+#eof
+
+ +

Output showing loss values (a little different every run):

+ +
loss: 3.020703
+loss: 1.8259083
+loss: 1.812584
+loss: 1.8101325
+loss: 1.8081319
+loss: 1.8070083
+loss: 1.8065354
+loss: 1.8063282
+loss: 1.8062303
+loss: 1.8061805
+loss: 1.8061543 (last)
+
+ +

Colab link: +https://colab.research.google.com/drive/1TsHjmucuynCPOgKuo4a0hiM8B8UaOWQo

+",2844,,2844,,9/20/2019 10:36,9/20/2019 15:48,"LSTM network doesn't converge, what should be changed?",,2,1,,9/20/2019 20:55,,CC BY-SA 4.0 +15530,2,,15502,9/19/2019 8:49,,0,,"

I found out how to get 3D output from LSTMCell so that I can matmul with output weights + biases and subtract with expected values:

+ +
    +
  • Inputs and expected values should be declared as placeholder(tf.float32, [times, batch_size, num_inp]), i.e. with times first rather than batch_size first. However, tf.keras.layers.LSTM will ask for [batch_size, times, num_inp]
  • +
  • Use tf.nn.static_rnn with a list of inputs, instead of 1 input
  • +
+ +

Source code:

+ +
import tensorflow as tf;
+
+x = [[[1],[2],[3]],[[4],[5],[6]]];
+times = 2;
+bsize = 3;
+
+#3d input
+inputs = tf.placeholder(tf.float32, [times,bsize,1]);
+
+cell   = tf.nn.rnn_cell.BasicRNNCell(20);
+hids,_ = tf.nn.static_rnn(cell,tf.unstack(inputs,times),dtype=tf.float32);
+
+sess = tf.Session();
+init = tf.global_variables_initializer();
+sess.run(init);
+
+#results in 2d
+print(sess.run(hids, {inputs:x}));
+
+",2844,,2844,,9/20/2019 10:33,9/20/2019 10:33,,,,0,,,,CC BY-SA 4.0 +15531,2,,15485,9/19/2019 8:55,,0,,"
+

What is the technical name for this kind of training?

+
+

The name for the problem is Sequential Decision Making or Optimal Control.

+

There are a few different approaches you can take when solving this kind of problem. However, I think that the way that you are describing your project, Reinforcement Learning (RL) would match your approach the best.

+
+

I cannot use a traditional Xy method but need another method that goes like this:

+

network.train_against_value(X, y, determinator)

+
+

Although this method signature could probably be made to work by re-structuring one or other RL frameworks, it is more usual when using RL with neural networks to treat the experience-gathering and scoring systems - the core ideas behind RL - as a data generator for either a supervised learning problem or in some cases directly producing the output layer gradients.

+

One approach you could use is called Deep Q Networks (DQN) which effectively generates mini-batches for supervised learning of a neural network, based on gathering experiences.

+

In brief, DQN trains a neural network to predict what you are calling the "determinator", but which RL would call the "value" of each action. So you may move the action choice (y) to the input of the neural network, or alternatively predict multiple values (one output for each possible action) - the RL theory is the same for each approach here, it is an implementation detail. The agent (cat or mouse) would then pick the action with the highest predicted value by default (with the cat's values being the negative of the mouse's), trying other actions at random whilst training so as to learn all values. Whenever the agent gained new experience, it would add that to the training data so it could improve its predictions.

+
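For illustration only, here is a minimal sketch of that value-prediction and action-selection idea; state is assumed to be a 1-D NumPy feature vector and q_network a Keras-style model with one output per possible action (neither is part of the original question):

import numpy as np

def choose_action(q_network, state, n_actions, epsilon=0.1):
    # with probability epsilon, explore by picking a random action
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    # otherwise pick the action with the highest predicted value
    q_values = q_network.predict(state[None, :])[0]
    return int(np.argmax(q_values))

The cat would negate the predicted values (or learn its own network), so that what is good for the mouse is bad for the cat.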

RL is a complex subject in its own right, and to begin understanding it properly you would do well to start by tackling even simpler problems that don't require neural networks or running two agents against each other. There is a good introductory book on the subject with the option to read a free PDF version: Reinforcement Learning: An Introduction (Sutton & Barto)

+",1847,,-1,,6/17/2020 9:57,9/19/2019 9:07,,,,2,,,,CC BY-SA 4.0 +15533,2,,15529,9/19/2019 9:26,,3,,"

I'm writing my suggestion here because I haven't earned the right to comment yet.

+ +

Your main ""problem"" could be your loss function. It converges, this is why your loss value is decreasing. So I suggest to let it maybe train longer.

+ +

Alternatively you could change the loss function to fit your need. For example you could use:

+ +
loss  = tf.reduce_mean(tf.square(exps-outs))
+
+ +

You will get a smaller loss value which decreases clearly after every output.

+ +

I hope this helps :)

+",29834,,,,,9/19/2019 9:26,,,,3,,,,CC BY-SA 4.0 +15534,2,,15510,9/19/2019 9:50,,0,,"

They are not tracking anything; instead, they are trying to find an object which satisfies the free-fall equation. Gravity acts the same regardless of an object's properties - at least in a vacuum.

+ +

""In this paper, we model prior knowledge on the structure +of the outputs by providing a weighted constraint function +g used to penalize “structures” that are not +consistent with our prior knowledge.""

+ +

There is a restriction to the possible parabola that is given in the last equation on page 2. They are training the Network to learn time dependence of that equation, which is the same for all objects.

+",22301,,,,,9/19/2019 9:50,,,,0,,,,CC BY-SA 4.0 +15535,2,,15525,9/19/2019 10:30,,2,,"

Sometimes when training, particularly in reinforcement learning, the model can become unstable due to the amount of variance that exists in the training data that the agent generates by interacting with the environment. This is certainly a problem at the start of training as you can get huge outliers in the data because the agent is behaving randomly. You can find that just one update to the policy could potentially make it collapse because it moves the policy into some obscure region, e.g. so the agent always takes a particular action. You can make training more stable by using larger batches and a smaller learning rate so it takes smaller steps at a time, but the downside to that is training is slower. So you need to test different hyperparameters to find a good trade-off between the two. You can also use a training algorithm such as Proximal Policy Optimization (PPO), which clips the amount the policy can move in any given update to try and maintain some stability.

+ +
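As a rough illustration of that clipping idea, here is a sketch of the standard PPO clipped surrogate loss in plain NumPy (not tied to any particular library; new_probs, old_probs and advantages are assumed to be arrays gathered from rollouts):

import numpy as np

def ppo_clip_loss(new_probs, old_probs, advantages, clip_eps=0.2):
    ratio = new_probs / old_probs                                 # how far the policy has moved
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # pessimistic (minimum) objective, negated so it can be minimised
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))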

Convergence is a term used to describe when the model has found an optimal policy and isn't learning any further, usually demonstrated when the reward plateaus for a certain number of episodes. Of course, it may have settled on a local optimum, and better optima may exist; the data you present and the way you train your model may yield better results - again, all part of testing and experimentation.

+",20352,,20352,,9/19/2019 10:36,9/19/2019 10:36,,,,1,,,,CC BY-SA 4.0 +15537,2,,15522,9/19/2019 11:24,,2,,"

I would like to mention WOPR from War Games. Maybe it is an old movie for your students, but it is a more realistic AI centered around the problem of playing board games (if you exclude the part about deciding that a game is not worth the time).

+ +

Also, I remember an artificial assistant in ""The Time Machine"" that was more convincing than J.A.R.V.I.S. because it is not so intelligent; I remember it more as an agent that can find and read you Wikipedia articles, but without reasoning much about them - but I could be wrong.

+ +

The robot companion in Moon is also interesting and comical as it is like a small child that has been told to cheat but can't disobey direct orders.

+ +

Other films revolve around the dilemma of creating AGI, like ""Blade Runner"", ""Bicentennial Man"", Spielberg's ""A.I."", ""Her"", or ""Ex Machina"". They are more interesting from a philosophical point of view (they are all very similar to Mary Shelley's Frankenstein) because the actual implementation is inconceivable right now.

+",29839,,,,,9/19/2019 11:24,,,,0,,,,CC BY-SA 4.0 +15538,2,,15449,9/19/2019 12:04,,0,,"

Artificial intelligence can harm us in any of the ways that natural (human) intelligence can. The distinction between natural and artificial intelligence will vanish when humans start augmenting themselves more intimately. Intelligence may no longer characterize identity and will become a limitless possession. The harm caused will be as much as humans can endure while preserving their evolving self-identity.

+",21732,,,,,9/19/2019 12:04,,,,0,,,,CC BY-SA 4.0 +15539,1,,,9/19/2019 13:10,,3,169,"

I was working on an RL problem and I am confused at one specific point. We use replay memory so that the network learns about previous actions and how these actions lead to a success or a failure.

+ +

Now, to train the neural network, we use batches from this replay or experience memory. But here's my confusion.

+ +

Some places like this extract random (non-sequential) batches from the memory to train the neural network but Andrej Karpathy uses the sequential data to train the network.

+ +

Can someone tell me why there's the difference?

+",29843,,2444,,9/19/2019 13:38,9/19/2019 13:38,What is the difference between random and sequential sampling from the reply memory?,,0,4,,,,CC BY-SA 4.0 +15541,1,15552,,9/19/2019 16:38,,3,284,"

This is my problem:

+

I have 10 variables that I intend to evaluate two by two (in pairs). I want to know which variables have the strongest relationships with each other. And I'm only interested in evaluating relationships two by two. Well, one suggestion would be to calculate the pairwise correlation coefficient of these variables. And then list the pairs with the highest correlation coefficient to the lowest correlation. That way I would have a ranking between the most correlated to the lowest correlated pairs.

+

My question is: Is there anything analogous in the world of artificial intelligence to the correlation coefficient calculation? That is, what tools can the world of AI / Machine Learning offer me to extract this kind of information? So that in the end I can have something like a ranking among the most "correlated" pairs from the point of view of AI / Machine Learning?

+

In other words, how do I know which variable among these 10 best "relates" (or "correlates") with variable 7, for example?

+",29851,,2444,,1/6/2022 10:26,1/6/2022 10:26,How do I determine which variables/features have the strongest relationship with each other?,,1,0,,,,CC BY-SA 4.0 +15542,1,15551,,9/19/2019 16:45,,5,369,"

Evolutionary algorithms are mentioned in some sources as a method to train a neural network (finding weights, not hyperparameters). However, I have not heard about one practical application of such an idea yet.

+

My question is, why is that? What are the issues or limitations with such a solution that prevented it from practical use?

+

I am asking because I am planning on developing such an algorithm and want to know what to expect and where to put most attention.

+",22659,,2444,,1/20/2021 19:31,1/20/2021 19:31,Why evolutionary training of neural networks is not popular?,,1,0,,,,CC BY-SA 4.0 +15543,2,,15449,9/19/2019 16:55,,0,,"

Few people realize that our global economy could be considered an AI:

- The money transactions are the signals over a neural net. The nodes in the neural net would be the different corporations or private persons paying or receiving money.
- It is man-made, so it qualifies as artificial.

+ +

This neural network is better at its task than humans: capitalism has always won against economies planned by humans (planned economies).

+ +

Is this neural net dangerous? The answer might differ depending on whether you are the CEO earning big or a fisherman on a river polluted by corporate waste.

+ +

How did this AI become dangerous? You could answer that it is because of human greed. Our creation reflects ourselves. In other words: we did not train our neural net to behave well. Instead of training the neural net to improve the quality of life for all humans, we trained it to make rich folks richer.

+ +

Would it be easy to train this AI to be no longer dangerous? Maybe not; maybe some AIs are just larger than life. It is just survival of the fittest.

+",29850,,,,,9/19/2019 16:55,,,,0,,,,CC BY-SA 4.0 +15544,1,15547,,9/19/2019 17:55,,3,232,"

Let's assume we have an ANN which takes a vector $x\in R^D$, representing an image, and classifies it over two classes. The output is a vector of probabilities $N(x)=(p(x\in C_1), p(x\in C_2))^T$ and we pick $C_1$ iff $p(x\in C_1) \geq 0.5$. Let the two classes be $C_1= \texttt{cat}$ and $C_2= \texttt{dog}$. Now imagine we want to extract this ANN's idea of ideal cat by finding $x^* = argmax_x N(x)_1$. How would we proceed? I was thinking about solving $\nabla_xN(x)_1=0$, but I don't know if this makes sense or if it is solvable.

+ +

In short, how do I compute the input which maximizes a class-probability?

+",23527,,23527,,9/19/2019 18:02,9/19/2019 20:42,How can we find find the input image which maximizes the class-probability for an ANN?,,2,0,,,,CC BY-SA 4.0 +15545,2,,1655,9/19/2019 19:11,,0,,"

Low precision enables highly parallel computation in convolutional and fully-connected layers. CPUs and GPUs have fixed architectures, but ASICs/FPGAs can be designed around the neural network's architecture.

+",17792,,,,,9/19/2019 19:11,,,,0,,,,CC BY-SA 4.0 +15546,1,15550,,9/19/2019 20:15,,2,33,"

I made a neural network using TensorFlow that is supposed to match an IP to one of 7 types of vulnerabilities and give out which type of vulnerability that IP has.

+ +
+ +
    model = tf.keras.models.Sequential([
+  tf.keras.layers.Flatten(),
+  tf.keras.layers.Dense(50, activation=tf.nn.relu),
+  tf.keras.layers.Dense(7, activation=tf.nn.softmax)
+])
+
+model.compile(optimizer='adam',
+              loss='sparse_categorical_crossentropy',
+              metrics=['accuracy'])
+
+
+
+model.fit(xs, ys, epochs=500)
+
+ +

The output of print(model.predict([181271844])) should be one of the numbers from 1 to 7, but the output it gives is

+ +
+

[[0.22288103 0.20282331 0.36847615 0.11339897 0.04456346 0.02391759 + 0.02393949]]

+
+ +

I can't seem to figure out what the problem is.

+",27399,,,,,9/19/2019 23:38,Neural network does not give out the required out put?,,1,1,,,,CC BY-SA 4.0 +15547,2,,15544,9/19/2019 20:25,,4,,"

In deep networks there is actually a wide variety of solutions to the problem, but if you need to find one, an easy way to do this is just through normal optimization schemes
+$$\hat x = argmin_x \ L(y,x)$$
+where $L(y,x)$ is your loss function. Since ANNs are generally differentiable, you can optimize this iteratively with some form of gradient descent scheme:
+$$x^{i+1} \leftarrow x^{i} - \lambda \nabla_{x^i}L(y,x^i)$$
+where $\lambda$ is your learning rate.
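As a rough sketch of this iterative scheme using TensorFlow 2 in eager mode (model, loss_fn, target and the starting point x0 are all placeholders you would substitute; this is an illustration, not part of the maths above):

import tensorflow as tf

def optimise_input(model, loss_fn, target, x0, steps=200, lr=0.1):
    x = tf.Variable(x0)                       # only the input is trainable here
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = loss_fn(target, model(x))  # model weights stay fixed
        grad = tape.gradient(loss, x)
        x.assign_sub(lr * grad)               # x <- x - lr * dL/dx
    return x.numpy()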

+",25496,,,,,9/19/2019 20:25,,,,0,,,,CC BY-SA 4.0 +15548,2,,15544,9/19/2019 20:42,,4,,"

Probably the simplest way to search for an image with the highest probability of being a cat is to use a technique similar to Deep Dream:

+ +
    +
  • Load the network for training, but freeze all the network weights

  • +
  • Create a random input image, and connect it to the network as a ""variable"" i.e. data that can be changed through training

  • +
  • Set a loss function based on maximising the pre-sigmoid value in the last layer (this is easier to handle than working with 0.999 etc probability)

  • +
  • Train using backpropagation, but instead of using gradients to change the weights, back propagate all the way to the input layer and use gradients to change the input image.

  • +
  • Typically you will also want to normalise the input image between iterations (a minimal sketch that puts these steps together follows this list).

  • +
+ +
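Putting the steps above together, here is a minimal sketch (my own illustration in TensorFlow 2; model_logits is assumed to return the pre-sigmoid cat score for a batch of images):

import tensorflow as tf

def maximise_catness(model_logits, shape=(1, 224, 224, 3), steps=500, lr=0.05):
    img = tf.Variable(tf.random.uniform(shape))              # random starting image
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model_logits(img)                        # pre-sigmoid score, weights stay frozen
        grad = tape.gradient(score, img)
        img.assign_add(lr * grad / (tf.norm(grad) + 1e-8))   # gradient ascent on the input
        img.assign(tf.clip_by_value(img, 0.0, 1.0))          # normalise back to a valid pixel range
    return img.numpy()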

There is a good chance that the ideal input you find which triggers ""maximum catness"" will be a very noisy, jumbled mess of cat-related features. You may be able to encourage something more visually appealing, or at least less noisy, by adding a little movement - e.g. minor blurring, or a slight zoom (then crop) between each iteration. At that point, it becomes more of an artistic endeavour than a mathematical one.

+ +

Here is something I produced using some TensorFlow Deep Dream code plus zooming and blurring to encourage larger scale features to dominate:

+ +

+ +

Technically the above maximises a single internal feature map of a CNN, not a class probability, but it is the same thing conceptually.

+",1847,,,,,9/19/2019 20:42,,,,0,,,,CC BY-SA 4.0 +15550,2,,15546,9/19/2019 23:38,,1,,"

The numbers you are seeing as output are a probability vector. This is a common output format for multi-class classification models.

+ +

In this case, you can interpret the vector as saying:

+ +
    +
  • 22% chance of class 1
  • +
  • 20% chance of class 2
  • +
  • 37% chance of class 3
  • +
  • 11% chance of class 4
  • +
  • 4% chance of class 5
  • +
  • 2% chance of class 6
  • +
  • 2% chance of class 7
  • +
+ +

If you want to get a concrete label out of this, the easiest choice is to compute and return the index of the maximum element.
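For example, using the (rounded) vector from the question:

import numpy as np

probs = np.array([0.2229, 0.2028, 0.3685, 0.1134, 0.0446, 0.0239, 0.0239])
predicted_class = int(np.argmax(probs))   # -> 2, i.e. the third class
# add 1 if your vulnerability labels are numbered 1 to 7 rather than 0 to 6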

+",16909,,,,,9/19/2019 23:38,,,,0,,,,CC BY-SA 4.0 +15551,2,,15542,9/19/2019 23:45,,4,,"

The main evolutionary algorithm used to train neural networks is Neuro-Evolution of Augmenting Topologies, or NEAT. NEAT has seen fairly widespread use. There are thousands of academic papers building on or using the algorithm.

+ +

NEAT is not widely used in commercial applications because if you have a clean objective function, a topology that is optimized for gradient descent via backpropagation, and an implementation that is highly optimized for a GPU, you are almost certainly going to see better, faster results from a conventional training process. Where NEAT is really useful is if you want to do something weird, like train to maximize novelty, or if you want to try to train neurons that don't have cleanly decomposable gradients. Basically, you need to have any of the usual reasons you might prefer an evolutionary algorithm to hill-climbing approaches:

+ +
    +
  1. You don't have a clean mapping from loss function to individual model components.
  2. +
  3. Your loss function has many local maxima.
  4. +
+",16909,,,,,9/19/2019 23:45,,,,2,,,,CC BY-SA 4.0 +15552,2,,15541,9/19/2019 23:50,,2,,"

It sounds like you have a series of data points, each with 10 related measurements, and you want to automatically assess which of the measurements are most closely related to each other.

+

You are right that the correlation coefficient is a good choice for this.

+
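For instance, here is a minimal sketch of ranking the pairs by absolute correlation with pandas (the file name and columns are placeholders for your 10 variables):

import numpy as np
import pandas as pd

df = pd.read_csv('data.csv')                       # one column per variable
corr = df.corr().abs()                             # pairwise absolute Pearson correlations
mask = ~np.eye(len(corr), dtype=bool)              # ignore the diagonal (self-correlation)
pairs = corr.where(mask).stack().sort_values(ascending=False)
print(pairs.head(10))                              # strongest pairs first (each pair appears twice)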

Other techniques used in some AI algorithms include the Information Gain measurement (where you measure the reduction in entropy of one variable that follows from partitioning on another one), and embedded feature selection approaches, like the one in this paper.

+",16909,,2444,,1/6/2022 10:24,1/6/2022 10:24,,,,0,,,,CC BY-SA 4.0 +15553,2,,15434,9/20/2019 3:23,,0,,"

The topic is multimodal neural networks.

+ +

Here are some repositories that I hope will help me a lot:

+ +

https://docs.google.com/presentation/d/1z8-GeTXvSuVbcez8R6HOG1Tw_F3A-WETahQdTV38_uc/edit#slide=id.g1ea5aac985_0_892

+ +

https://github.com/prml615/prml

+ +

https://github.com/husseinmozannar/multimodal-deep-learning-for-disaster-response

+ +

https://github.com/guillaume-be/multimodal-avito

+ +

Thanks to kbrose; his suggestion led me to this type of architecture.

+",29693,,,,,9/20/2019 3:23,,,,0,,,,CC BY-SA 4.0 +15556,2,,10566,9/20/2019 12:53,,1,,"

I know the new bot in JP Morgan uses reinforcement learning:

+

Machine Learning in FX

+

There are a lot of studies on how reinforcement learning can be applied to cryptocurrency trading. Here is one of many examples:

+

A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem

+",29812,,11539,,11/14/2020 2:17,11/14/2020 2:17,,,,0,,,,CC BY-SA 4.0 +15557,2,,6997,9/20/2019 13:04,,1,,"

I would say this post is a must-read:

+ +
+

https://rubenfiszel.github.io/posts/rl4j/2016-08-24-Reinforcement-Learning-and-DQN.html

+
+",29812,,,,,9/20/2019 13:04,,,,0,,,,CC BY-SA 4.0 +15559,1,,,9/20/2019 13:14,,2,37,"

I am looking to plan a solution for a workspace fault and not hardware faults.

+ +

Consider a task where a robot has to move balls from one place to another. It may face conditions that are outside the task, e.g. someone snatches the ball from the robot while it is transferring it, or the robot drops the ball along the way. These are some example faults that could occur; many others might be possible. I am trying to build a generalized algorithm so that the robot can find a way to resolve unexpected changes by itself.

+ +

I currently have an FSM for the whole task. Any fault that somehow changes any of the state machine variables should be considered. For instance, there are faults that deal with obstacles that may come in the way.

+ +

But there might be faults, for example a cloth in front of the camera, that should be corrected by a human since the robot cannot manage them. All faults like that are out of the scope of the robot.

+ +

Any suggestion or ideas related to the algorithm will be helpful.

+",29868,,1847,,9/20/2019 16:13,9/20/2019 16:13,Algorithm to solve a fault independent of its type,,0,4,,,,CC BY-SA 4.0 +15560,2,,11949,9/20/2019 15:00,,0,,"

For sure, this may be helpful. You remove excessive information from the image and make the classification task a bit simpler. But you need to remember that bounding boxes may not work perfectly, and the accuracy of the classification algorithm may suffer from corrupted inputs (when the bounding boxes are corrupted).

+",29872,,,,,9/20/2019 15:00,,,,0,,,,CC BY-SA 4.0 +15561,2,,15529,9/20/2019 15:48,,0,,"

I'm still working on how to make the code work for text generation, but the following converges and works for text classification:

+ +
import tensorflow as tf;
+tf.reset_default_graph();
+
+#data
+'''
+t0      t1      t2
+british gray    is => cat (y=0)
+0       1       2
+white   samoyed is => dog (y=1)
+3       4       2 
+'''
+Bsize = 2;
+Times = 3;
+Max_X = 4;
+Max_Y = 1;
+
+X = [[[0],[1],[2]], [[3],[4],[2]]];
+Y = [[0],           [1]          ];
+
+#normalise
+for I in range(len(X)):
+  for J in range(len(X[I])):
+    X[I][J][0] /= Max_X;
+
+for I in range(len(Y)):
+  Y[I][0] /= Max_Y;
+
+#model
+Inputs   = tf.placeholder(tf.float32, [Bsize,Times,1]);
+Expected = tf.placeholder(tf.float32, [Bsize,      1]);
+
+#single LSTM layer
+#'''
+Layer1   = tf.keras.layers.LSTM(20);
+Hidden1  = Layer1(Inputs);
+#'''
+
+#multi LSTM layers
+'''
+Layers = tf.keras.layers.RNN([
+  tf.keras.layers.LSTMCell(30), #hidden 1
+  tf.keras.layers.LSTMCell(20)  #hidden 2
+]);
+Hidden2 = Layers(Inputs);
+'''
+
+Weight3  = tf.Variable(tf.random_uniform([20,1], -1,1));
+Bias3    = tf.Variable(tf.random_uniform([   1], -1,1));
+Output   = tf.sigmoid(tf.matmul(Hidden1,Weight3) + Bias3);
+
+Loss     = tf.reduce_sum(tf.square(Expected-Output));
+Optim    = tf.train.GradientDescentOptimizer(1e-1);
+Training = Optim.minimize(Loss);
+
+#train
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed = {Inputs:X, Expected:Y};
+for I in range(1000): #number of feeds, 1 feed = 1 batch
+  if I%100==0: 
+    Lossvalue = Sess.run(Loss,Feed);
+    print(""Loss:"",Lossvalue);
+  #end if
+
+  Sess.run(Training,Feed);
+#end for
+
+Lastloss = Sess.run(Loss,Feed);
+print(""Loss:"",Lastloss,""(Last)"");
+
+#eval
+Results = Sess.run(Output,Feed);
+print(""\nEval:"");
+print(Results);
+
+print(""\nDone."");
+#eof
+
+",2844,,,,,9/20/2019 15:48,,,,0,,,,CC BY-SA 4.0 +15562,1,15602,,9/20/2019 15:57,,0,36,"

I can do text classification with an RNN, in which the last output of the RNN (rnn_outputs[-1]) is multiplied (matmul) by the output-layer weights and the bias is added. That is, a word (class name) is obtained after the last time step T in the time dimension of the RNN.

+ +

The issue is that for text generation, I need a word somewhere in the middle of the time dimension, e.g.:

+ +
t0  t1    t2  t3
+The brown fox jumps
+
+ +

For this example, I have the first 2 words: The, brown.

+ +

How can I get the next word, i.e. ""fox"", using an RNN (LSTM)? How can I convert the following text classification code into text generation code?

+ +

Source code (text classification):

+ +
import tensorflow as tf;
+tf.reset_default_graph();
+
+#data
+'''
+t0      t1      t2
+british gray    is => cat (y=0)
+0       1       2
+white   samoyed is => dog (y=1)
+3       4       2 
+'''
+Bsize = 2;
+Times = 3;
+Max_X = 4;
+Max_Y = 1;
+
+X = [[[0],[1],[2]], [[3],[4],[2]]];
+Y = [[0],           [1]          ];
+
+#normalise
+for I in range(len(X)):
+  for J in range(len(X[I])):
+    X[I][J][0] /= Max_X;
+
+for I in range(len(Y)):
+  Y[I][0] /= Max_Y;
+
+#model
+Inputs   = tf.placeholder(tf.float32, [Bsize,Times,1]);
+Expected = tf.placeholder(tf.float32, [Bsize,      1]);
+
+#single LSTM layer
+#'''
+Layer1   = tf.keras.layers.LSTM(20);
+Hidden1  = Layer1(Inputs);
+#'''
+
+#multi LSTM layers
+'''
+Layers = tf.keras.layers.RNN([
+  tf.keras.layers.LSTMCell(30), #hidden 1
+  tf.keras.layers.LSTMCell(20)  #hidden 2
+]);
+Hidden2 = Layers(Inputs);
+'''
+
+Weight3  = tf.Variable(tf.random_uniform([20,1], -1,1));
+Bias3    = tf.Variable(tf.random_uniform([   1], -1,1));
+Output   = tf.sigmoid(tf.matmul(Hidden1,Weight3) + Bias3);
+
+Loss     = tf.reduce_sum(tf.square(Expected-Output));
+Optim    = tf.train.GradientDescentOptimizer(1e-1);
+Training = Optim.minimize(Loss);
+
+#train
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed = {Inputs:X, Expected:Y};
+for I in range(1000): #number of feeds, 1 feed = 1 batch
+  if I%100==0: 
+    Lossvalue = Sess.run(Loss,Feed);
+    print(""Loss:"",Lossvalue);
+  #end if
+
+  Sess.run(Training,Feed);
+#end for
+
+Lastloss = Sess.run(Loss,Feed);
+print(""Loss:"",Lastloss,""(Last)"");
+
+#eval
+Results = Sess.run(Output,Feed);
+print(""\nEval:"");
+print(Results);
+
+print(""\nDone."");
+#eof
+
+",2844,,,,,9/23/2019 21:57,How to change this RNN text classification code to become text generation code?,,1,0,,,,CC BY-SA 4.0 +15565,1,,,9/21/2019 1:49,,0,44,"

OK, now I think an AI must view grids in a different way to computers.

+ +

For example a computer would represent a grid like this:

+ +
cells = [[1,2,3],[4,5,6],[7,8,9]] = [row1,row2,row3]
+
+ +

That is a grid is 3 rows of 3 cells.

+ +

But... that's not how a human sees it. A human sees a grid as made of 3 rows and 3 columns somehow intersecting.

+ +

If an AI is built on some mathematical logic like set theory, it's like a set of rows which in turn is a set of cells.

+ +

So what would be a way to represent a grid in a computer that is more ""human"" and doesn't favor either rows or columns? Or is there some mathematical or programmatic description of a grid that treats rows and columns as equivalent?

+",4199,,16909,,9/21/2019 20:28,9/21/2019 20:36,How would an AI understand grids?,,1,1,,,,CC BY-SA 4.0 +15571,2,,15565,9/21/2019 20:36,,4,,"

Although it is common to represent a grid as a two-dimensional array in a computer program, this is not the only way to represent one. You could, for example, use a generalized graph structure made of linked nodes, with 4 links each. Many other representations are possible. Even if you use a 2D array, some languages would index the columns first rather than the rows (FORTRAN, for instance).

+ +
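For instance, here is a minimal sketch of such a linked-node representation, in which rows and columns are treated symmetrically (my own illustration):

class Cell:
    def __init__(self, value):
        self.value = value
        self.north = self.south = self.east = self.west = None   # links to neighbours

def make_grid(values):                          # values: list of rows
    cells = [[Cell(v) for v in row] for row in values]
    for r, row in enumerate(cells):
        for c, cell in enumerate(row):
            if r > 0:              cell.north = cells[r - 1][c]
            if r + 1 < len(cells): cell.south = cells[r + 1][c]
            if c > 0:              cell.west = cells[r][c - 1]
            if c + 1 < len(row):   cell.east = cells[r][c + 1]
    return cells

grid = make_grid([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# walking a row follows east links; walking a column follows south links - neither is privileged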

That issue is really an aside though, because it conflates representation with reasoning. By choosing to represent the world in certain data structures, we can make it easier or harder for different AI algorithms to reason about them, but the reasoning can often proceed even if the representation is inefficient. That is, an AI algorithm can ask questions about the columns in the grid, even if the columns of the grid are not represented in a way that makes it programmatically easy to group them together.

+",16909,,,,,9/21/2019 20:36,,,,0,,,,CC BY-SA 4.0 +15573,1,,,9/22/2019 4:35,,3,100,"

I have previously implemented a Neural Network with Back-Propagation that was able to learn Tic-tac-toe and could go pretty well at Connect-4.

+ +

Now I'm trying to do a NN that can make a prediction. The idea is that I have a large set of customer purchase history, so people I can ""target"" with marketing, others I can't (maybe I just have a credit-card number but no email address to spam). I've a catalogue of products that changes on a monthly basis with daily updates to stock.

+ +

My original idea was to use the same NN that I've used before, with inputs like purchased y/n for each product and an output for each product (softmax to get a weighted prediction). But I get stuck at handling a changing catalog. I'm also not sure if I should lump everyone in together or sort of generate a NN for each person individually (but some people would have very little purchase history, so I'd need to use everyone else as the training set).

+ +

So I thought I'd need something with some ability to use the purchase data as a sequence, so purchased A, then B, then C etc. But reviewing something like LSTM, I kind of think it's still not right.

+ +

Basically, I know how to NN for a game-state sort of problem. But I don't know how to do it for this new problem.

+",2819,,,,,10/16/2020 13:00,What sort of Neural Network is best suited to predicting a future purchase?,,0,0,,,,CC BY-SA 4.0 +15577,1,,,9/22/2019 12:31,,1,64,"

I want to use a GAN for sequence prediction, in a similar way that we use RNNs for sequence prediction. I want to test its performance in comparison with RNNs. Is there a GAN that can be used for sequence prediction?

+",10051,,2444,,9/22/2019 14:52,9/22/2019 14:52,Is there a GAN that can be used for sequence prediction?,,0,1,,,,CC BY-SA 4.0 +15579,1,,,9/22/2019 15:46,,5,73,"

When we test a new optimization algorithm, what is the process that we need to follow? For example, do we need to run the algorithm several times and pick the best performance (e.g. in terms of accuracy, F1 score, etc.), and do the same for an old optimization algorithm? Or do we need to compute the average performance (i.e. the average accuracy or F1 score over these runs) to show that it is better than the old optimization algorithm? When I read papers on new optimization algorithms, I don't know how they calculate the performance and draw the train-loss vs. iterations curves, because there are random effects, and for different runs we may get different performance and different curves.

+",29902,,,,,9/22/2019 19:53,How can we conclude that an optimization algorithm is better than another one,,1,1,,,,CC BY-SA 4.0 +15583,1,,,9/22/2019 17:07,,1,263,"

After looking around the internet (including this paper), I cannot seem to find a satisfactory explanation of the Average Recall (AR) metric. On the COCO website, it describes AR as: ""the maximum recall given a fixed number of detections per image, averaged over categories and IoUs"".

+ +

What does ""maximum recall"" mean here?

+ +

I was wondering if someone could give a reference or a high level overview of the AR calculation algorithm.

+ +

Thanks!

+",19789,,,,,9/22/2019 17:07,How is Average Recall (AR) calculated for an object detection model?,,0,0,,,,CC BY-SA 4.0 +15584,2,,15526,9/22/2019 17:20,,1,,"

It is a mathematical problem. Usually, we use the contraction mapping theorem for the proof of convergence. You should apply the Banach fixed-point theorem to the Bellman operators.
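For reference, a sketch of the standard argument in the discounted tabular case: the Bellman evaluation operator

$$(T^\pi v)(s) = \sum_a \pi(a \mid s)\Big[r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, v(s')\Big]$$

is a $\gamma$-contraction in the sup-norm, $\|T^\pi v - T^\pi u\|_\infty \le \gamma \|v - u\|_\infty$, so by the Banach fixed-point theorem repeated application converges to its unique fixed point $v^\pi$.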

+",29906,,,,,9/22/2019 17:20,,,,0,,,,CC BY-SA 4.0 +15586,2,,15579,9/22/2019 19:53,,1,,"

See here for a potential way to do it:

+ +

http://infinity77.net/global_optimization/#motivation-motivation

+ +

http://infinity77.net/global_optimization/#rules-the-rules

+ +

You basically test the two (or more) optimization algorithms against known objective functions, with several random (but repeatable) starting points and then analyze the outcome.
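For instance, here is a minimal sketch of that idea with two SciPy local optimisers on the Rosenbrock function, using a fixed seed so the starting points are repeatable (the function and methods are only an example):

import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.RandomState(0)
starts = [rng.uniform(-2, 2, size=5) for _ in range(20)]   # repeatable random starting points

for method in ('Nelder-Mead', 'Powell'):
    results = [minimize(rosen, x0, method=method) for x0 in starts]
    print(method,
          'mean best value:', np.mean([r.fun for r in results]),
          'mean evaluations:', np.mean([r.nfev for r in results]))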

+",29908,,,,,9/22/2019 19:53,,,,2,,,,CC BY-SA 4.0 +15587,2,,12160,9/22/2019 21:25,,2,,"

I don't know about your first question, but I got a basic policy gradient approach with the kinetic energy as reward working on MountainCar-v0.

+ +

You can implement it based on this blog and the notebook you find there. It uses an MLP with one hidden layer of size 128 and standard policy gradient learning.

+ +

The reward engineering boils down to replacing the reward variable with the kinetic energy $v^2$ (no potential energy and no constant factor; the reward itself is not used). It takes $>1000$ episodes to solve the environment consistently.

+ +
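For reference, a rough sketch of where the reward substitution happens inside the usual gym loop (the random action is a placeholder for the learned policy, and the learning update itself is omitted):

import gym

env = gym.make('MountainCar-v0')
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()     # placeholder for the policy network's choice
    obs, env_reward, done, info = env.step(action)
    velocity = obs[1]                      # observation is (position, velocity)
    reward = velocity ** 2                 # shaped reward: kinetic energy; env_reward is unused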

I'm afraid the solution is not very satisfactory and I don't have the feeling there is much to learn from it. The solution is originally for the cartpole problem and it stops working for me if I change hyperparameters/optimizer or the specifics of the reward.

+",29907,,,,,9/22/2019 21:25,,,,1,,,,CC BY-SA 4.0 +15588,1,,,9/22/2019 22:32,,3,229,"

I'm not sure what this type of data is called, so I will give an example of the type of data I am working with:

+ +
    +
  • A city records its inflow and outflow of different types of vehicles every hour. More specifically, it records the engine size. The output would be the pollution level X hours after the recorded hourly interval.
  • +
+ +

It's worth noting that the data consists of individual vehicle engine sizes, so they can't be aggregated. This means the 2 input vectors (inflow and outflow) will be of variable length (a different number of vehicles will be entering and leaving every hour) and I'm not sure how to handle this. I could aggregate and simply sum the number of vehicles, but I want to preserve any patterns in the data. E.g. perhaps there is a quick succession of several heavy motorbike engines, denoting that a biker gang has just entered the city; they are known to ride recklessly, contributing more to pollution than the sum of their parts.

+ +

Any insight is appreciated.

+",29910,,,,,9/24/2019 7:13,What kind of neural network architecture is suitable for variable length block-like time series data?,,2,0,,,,CC BY-SA 4.0 +15589,1,,,9/23/2019 3:18,,3,1194,"

I have a book containing lots of puzzles with instructions like:

+ +
""Find a path through all the white squares.""
+
+""Connect each black circle to a white circle with a straight line without crossing lines"".
+
+""Put the letters A to G in the grid so that no letters are repeated in any row or collumn""
+
+ +

I thought it might be fun to (1) Try and write a program to solve each individual puzzle (2) Write a program that can solve more general problems, and even more interesting (3) Try to write a program that parses the English instructions and then solves the problem.

+ +

I think that in general there would be common themes like, ""draw a path"", ""connect the dots"", ""place the letters in the grid"", and so forth.

+ +

The program would have general knowledge of things like squares, cells, rows, columns, colours, letters, numbers and so on.

+ +

I wondered if there is anything similar out there already?

+ +

If an AI could read instructions and solve the puzzles could we say that it is in someway intelligent?

+",4199,,,,,9/23/2019 12:48,Puzzle solving AI?,,1,0,,,,CC BY-SA 4.0 +15590,1,,,9/23/2019 5:32,,4,1244,"

Is randomness (either true randomness or simulated randomness) necessary for AI? If true, does it mean ""intelligence comes from randomness""? If not, can a robot lacking the ability to generate random numbers be called an artificial general intelligence?

+",29915,,2444,,9/23/2019 12:32,9/24/2019 0:19,Is randomness necessary for AI?,,3,5,,,,CC BY-SA 4.0 +15591,2,,15590,9/23/2019 6:06,,2,,"

Yes, in theory, randomness is necessary to achieve generality. Right now the AIs we have are based on finding patterns and using them to predict future moves or outcomes. If we don't include randomness in the data, the machine might treat that as a pattern and behave accordingly (which will be a bias for us). Generating random numbers is a different story in itself and won't be a criterion to judge by on its own, although it might well be one of the conditions.

+",29916,,,,,9/23/2019 6:06,,,,2,,,,CC BY-SA 4.0 +15594,1,15604,,9/23/2019 8:33,,12,2172,"

I found the following neural network cheat sheet (Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data).

+ +

+ +

What are all these different kinds of neural networks used for? For example, which neural networks can be used for regression or classification, which can be used for sequence generation, etc.? I just need a brief overview (1-2 lines) of their applications.

+",2844,,2444,,9/24/2019 17:06,9/25/2019 2:15,What are all the different kinds of neural networks used for?,,1,0,,11/15/2020 18:53,,CC BY-SA 4.0 +15595,2,,15497,9/23/2019 8:39,,0,,"

My simple answer is NO.

+ +

Let me elaborate. If you closely observe nature, you see that nothing changes drastically all of a sudden. Even when it does, it doesn't stay for long.

+ +

The field of AI has just started, and it needs a lot more evolution to achieve AGI. Though AI is solving many directed problems like face recognition, speech recognition and many more (applications are innumerable), all these can be considered Narrow AI. They solve a particular task. For AI to reach the state where it can be better than humans in all aspects, not only do we need breakthroughs in algorithms, we also need many more breakthroughs in electronics and physics.

+ +

Please read this article. In summary, experts (around 350 of them) estimate that there's a 50% chance that AGI will occur by 2060. So, there is a very slim chance that AGI will become a reality in the next decade.

+ +

https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing/

+",20760,,,,,9/23/2019 8:39,,,,5,,,,CC BY-SA 4.0 +15596,2,,15589,9/23/2019 12:33,,1,,"
+

The program would have general knowledge of thing like squares, cells, rows, collumns, colours, letters, numbers and so on.

+
+ +

What do you mean by ""general knowledge"" of these objects? If you intend to give it knowledge similar to what a human would have about them, you are going to have a really hard time.

+ +

It's a broader task, but the Cyc project is attempting to assemble a comprehensive knowledge base that spans the basic concepts that every little child knows, and has worked on it for the last 35 years (https://en.wikipedia.org/wiki/Cyc). You could check what they have in their database about the things that interest you, but even so it will be a really hard task.

+ +

Basically, what Cyc is trying to do is create a huge dataset which contains all the ""common knowledge"" about the world. In your case the world would be limited to just the dots, lines, grids, letters, etc., but it would still be a huge amount of work to create this database.

+ +

This is the way the AI community dealt with ""general knowledge"" for a long time. An alternative to this approach would be to use machine learning, but I have no idea how it could be used for this objective (apart from using NLP to help you create the database, which is what Cyc is now doing).

+ +
+

I wondered if there is anything similar out there already?

+
+ +

As far as I know, there is nothing of the sort as of now. There exist AIs which can learn to play multiple video games, but they don't read the rules (like: ""Find a path through all the white squares.""); they just try things until they find the ""good"" way to play a game. These AIs generally use reinforcement learning.

+ +
+

If an AI could read instructions and solve the puzzles could we say that it is in someway intelligent?

+
+ +

The concept of intelligence is not something on which everyone agrees, and even so, most people think that there are different levels of intelligence.

+ +

From my point of view, this AI would have a certain level of intelligence if it had the ability to learn a new kind of game (even if somewhat similar to the ones it already knows) in just a few tries (I can't give you a number, but definitely not the thousands that current methods would require). As you presented it, this ability would probably require your AI to have some level of understanding of the concepts common between games (lines, dots, paths, ...).

+",26961,,26961,,9/23/2019 12:48,9/23/2019 12:48,,,,1,,,,CC BY-SA 4.0 +15597,2,,15588,9/23/2019 12:45,,1,,"

(This response should be a comment, but I don't yet have the reputation to comment.)

+ +

If I'm understanding your problem correctly, you have a variable number of inputs which have an order, and only one output? It looks like the kind of task where you could use a recurrent neural network (the most common ones are the LSTM and GRU).

+ +

If you use a recurrent neural network you could (if you have the timestamps of your data) cut the hourly interval into smaller time steps to help detect patterns.
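As a rough sketch of how variable-length sequences are commonly fed to a recurrent network in Keras - pad to a common length and mask the padding - with made-up numbers standing in for engine sizes and pollution levels:

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

# two hourly intervals with different numbers of vehicles, one feature (engine size) per vehicle
sequences = [[[1.6], [2.0], [1.2]],
             [[1.0], [3.0]]]
x = pad_sequences(sequences, padding='post', dtype='float32')     # pad to equal length
y = np.array([0.4, 0.7])                                          # pollution level per interval

model = models.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(x.shape[1], 1)),  # skip the padded steps
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=10, verbose=0)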

+",26961,,,,,9/23/2019 12:45,,,,0,,,,CC BY-SA 4.0 +15598,1,37682,,9/23/2019 14:05,,7,167,"

I was thinking about different neural network topologies for some applications. However, I am not sure how this would affect the efficiency of hardware acceleration using GPU/TPU/some other chip.

+

If, instead of layers that would be fully connected, I have layers with neurons connected in some other way (some pairs of neurons connected, others not), how is this going to affect the hardware acceleration?

+

An example of this is the convolutional networks. However, there is still a clear pattern, which perhaps is exploited by the acceleration, which would mean that if there is no such pattern, the acceleration would not work as well?

+

Should this be a concern? If so, is there some rule of thumb for how the connectivity pattern is going to affect the efficiency of hardware acceleration?

+",29923,,2444,,8/15/2020 9:38,10/29/2022 15:28,How do neural network topologies affect GPU/TPU acceleration?,,1,3,,,,CC BY-SA 4.0 +15600,2,,15590,9/23/2019 19:16,,1,,"
    +
  1. It might be too philosophical an answer, but maybe first we need to answer the question of whether the human way of thinking or human creativity includes random elements. For example, an author writing a book may use some randomness in developing a side thread or an episodic character; I would say that yes - sometimes we think up something random.

  2. +
  3. Some algorithms use randomness at their core, for example evolutionary algorithms for generating the first population.

  4. +
+",22659,,,,,9/23/2019 19:16,,,,2,,,,CC BY-SA 4.0 +15601,1,15607,,9/23/2019 20:39,,8,2436,"

My weights go from being between 0 and 1 at initialization to exploding into the tens of thousands in the next iteration. In the 3rd iteration, they become so large that only arrays of nan values are displayed.

+

How can I go about fixing this?

+

Is it to do with the unstable nature of the sigmoid function, or is one of my equations incorrect during backpropagation which makes my gradients explode?

+
import numpy as np
+from numpy import exp
+import matplotlib.pyplot as plt
+import h5py
+
+# LOAD DATASET
+MNIST_data = h5py.File('data/MNISTdata.hdf5', 'r')
+x_train = np.float32(MNIST_data['x_train'][:])
+y_train = np.int32(np.array(MNIST_data['y_train'][:,0]))
+x_test = np.float32(MNIST_data['x_test'][:])
+y_test = np.int32(np.array(MNIST_data['y_test'][:,0]))
+MNIST_data.close()
+
+##############################################################################
+# PARAMETERS 
+number_of_digits = 10 # number of outputs
+nx = x_test.shape[1] # number of inputs ... 784 --> 28*28
+ny = number_of_digits
+m_train = x_train.shape[0]
+m_test = x_test.shape[0]
+Nh = 30 # number of hidden layer nodes
+alpha = 0.001
+iterations = 3
+##############################################################################
+# ONE HOT ENCODER - encoding y data into 'one hot encoded'
+lr = np.arange(number_of_digits)
+y_train_one_hot = np.zeros((m_train, number_of_digits))
+y_test_one_hot = np.zeros((m_test, number_of_digits))
+for i in range(len(y_train_one_hot)):
+  y_train_one_hot[i,:] = (lr==y_train[i].astype(np.int))
+for i in range(len(y_test_one_hot)):
+  y_test_one_hot[i,:] = (lr==y_test[i].astype(np.int))
+
+# VISUALISE SOME DATA
+for i in range(5):
+  img = x_train[i].reshape((28,28))
+  plt.imshow(img, cmap='Greys')
+  plt.show()
+
+y_train = np.array([y_train]).T
+y_test = np.array([y_test]).T
+##############################################################################
+# INITIALISE WEIGHTS & BIASES
+params = { "W1": np.random.rand(nx, Nh),
+           "b1": np.zeros((1, Nh)),
+           "W2": np.random.rand(Nh, ny),
+           "b2": np.zeros((1, ny))
+          }
+
+# TRAINING
+# activation function
+def sigmoid(z):
+  return 1/(1+exp(-z))
+
+# derivative of activation function
+def sigmoid_der(z):
+  return z*(1-z)
+
+# softamx function
+def softmax(z):
+  return 1/sum(exp(z)) * exp(z)
+
+# softmax derivative is alike to sigmoid
+def softmax_der(z):
+  return sigmoid_der(z)
+
+def cross_entropy_error(v,y):
+  return -np.log(v[y])
+
+# forward propagation
+def forward_prop(X, y, params):
+  outs = {}
+  outs['A0'] = X
+  outs['Z1'] = np.matmul(outs['A0'], params['W1']) + params['b1']
+  outs['A1'] = sigmoid(outs['Z1'])
+  outs['Z2'] = np.matmul(outs['A1'], params['W2']) + params['b2']
+  outs['A2'] = softmax(outs['Z2'])
+  
+  outs['error'] = cross_entropy_error(outs['A2'], y)
+  return outs
+
+# back propagation
+def back_prop(X, y, params, outs):
+  grads = {}
+  Eo = (y - outs['A2']) * softmax_der(outs['Z2'])
+  Eh = np.matmul(Eo, params['W2'].T) * sigmoid_der(outs['Z1'])
+  dW2 = np.matmul(Eo.T, outs['A1']).T
+  dW1 = np.matmul(Eh.T, X).T
+  db2 = np.sum(Eo,0)
+  db1 = np.sum(Eh,0)
+  
+  grads['dW2'] = dW2
+  grads['dW1'] = dW1
+  grads['db2'] = db2
+  grads['db1'] = db1
+#  print('dW2:',grads['dW2'])
+  return grads
+
+# optimise weights and biases
+def optimise(X,y,params,grads):
+  params['W2'] -= alpha * grads['dW2']
+  params['W1'] -= alpha * grads['dW1']
+  params['b2'] -= alpha * grads['db2']
+  params['b1'] -= alpha * grads['db1']
+  return 
+
+# main
+for epoch in range(iterations):
+  print(epoch)
+  outs = forward_prop(x_train, y_train, params)
+  grads = back_prop(x_train, y_train, params, outs)
+  optimise(x_train,y_train,params,grads)
+  loss = 1/ny * np.sum(outs['error'])
+  print(loss)
+  
+
+",29877,,18758,,10/9/2021 8:23,10/9/2021 8:23,How to deal with large (or NaN) neural network's weights?,,2,1,,,,CC BY-SA 4.0 +15602,2,,15562,9/23/2019 21:57,,0,,"

I found out how to switch it (the code) to do text generation task, use 3D input (X) and 3D labels (Y) as in the source code below:

+ +

Source code:

+ +
import tensorflow as tf;
+tf.reset_default_graph();
+
+#data
+'''
+t0       t1       t2
+british  gray     is  cat
+0        1        2   (3)  <=x
+1        2        3        <=y
+white    samoyed  is  dog
+4        5        2   (6)  <=x
+5        2        6        <=y 
+'''
+Bsize = 2;
+Times = 3;
+Max_X = 5;
+Max_Y = 6;
+
+X = [[[0],[1],[2]], [[4],[5],[2]]];
+Y = [[[1],[2],[3]], [[5],[2],[6]]];
+
+#normalise
+for I in range(len(X)):
+  for J in range(len(X[I])):
+    X[I][J][0] /= Max_X;
+
+for I in range(len(Y)):
+  for J in range(len(Y[I])):
+    Y[I][J][0] /= Max_Y;
+
+#model
+Input    = tf.placeholder(tf.float32, [Bsize,Times,1]);
+Expected = tf.placeholder(tf.float32, [Bsize,Times,1]);
+
+#single LSTM layer
+'''
+Layer1   = tf.keras.layers.LSTM(20);
+Hidden1  = Layer1(Input);
+'''
+
+#multi LSTM layers
+#'''
+Layers = tf.keras.layers.RNN([
+  tf.keras.layers.LSTMCell(30), #hidden 1
+  tf.keras.layers.LSTMCell(20)  #hidden 2
+],
+return_sequences=True);
+Hidden2 = Layers(Input);
+#'''
+
+Weight3  = tf.Variable(tf.random_uniform([20,1], -1,1));
+Bias3    = tf.Variable(tf.random_uniform([   1], -1,1));
+Output   = tf.sigmoid(tf.matmul(Hidden2,Weight3) + Bias3); #sequence of 2d * 2d
+
+Loss     = tf.reduce_sum(tf.square(Expected-Output));
+Optim    = tf.train.GradientDescentOptimizer(1e-1);
+Training = Optim.minimize(Loss);
+
+#train
+Sess = tf.Session();
+Init = tf.global_variables_initializer();
+Sess.run(Init);
+
+Feed   = {Input:X, Expected:Y};
+Epochs = 10000;
+
+for I in range(Epochs): #number of feeds, 1 feed = 1 batch
+  if I%(Epochs/10)==0: 
+    Lossvalue = Sess.run(Loss,Feed);
+    print(""Loss:"",Lossvalue);
+  #end if
+
+  Sess.run(Training,Feed);
+#end for
+
+Lastloss = Sess.run(Loss,Feed);
+print(""Loss:"",Lastloss,""(Last)"");
+
+#eval
+Results = Sess.run(Output,Feed).tolist();
+print(""\nEval:"");
+for I in range(len(Results)):
+  for J in range(len(Results[I])):
+    for K in range(len(Results[I][J])):
+      Results[I][J][K] = round(Results[I][J][K]*Max_Y);
+#end for i      
+print(Results);
+
+print(""\nDone."");
+#eof
+
+",2844,,,,,9/23/2019 21:57,,,,0,,,,CC BY-SA 4.0 +15603,2,,15590,9/23/2019 23:49,,3,,"
+

Is randomness (either true randomness or simulated randomness) necessary for AI

+
+ +

It depends on how you define Artificial Intelligence. If you regard it strictly as an intentionally created construct which demonstrates utility, then no. (For instance, Nimatron, potentially the first functioning AI, beat most human competitors at NIM. But Nimatron was classical AI, entirely rules based with no learning.) That said:

+ +
    +
  • Randomness has proved a useful component in machine learning, and any feasible AGI would likely require ML.
  • +
+ +

Given sufficient computing power, aka time and space, it would absolutely be possible to brute force anything, including AGI, but the resulting algorithm would be ""brittle"", unable to ""compute"" anything not previously defined. A learning algorithm, presented with a problem outside of its domain of knowledge, may initially degrade in performance, but it can learn from those outcomes and gradually improve performance.

+ +

IBM brute forced Chess with Deep Blue, but Chess is a strictly narrow problem that turned out not to require general intelligence. AGI requires human level performance in all tasks engaged in by humans, which, even if they could be broken down to a set of individual narrow problems, it's an ever expanding set of problems.

+ +
+

Does it mean ""intelligence comes from randomness""?

+
+ +

Not if the definition of intelligence is rooted in utility because deterministic processes can demonstrate utility.

+ +
    +
  • In statistical AI, the intelligence arises from the analysis of random search or the fitness of the genetic algorithm, not the randomness per se.
  • +
+ +

In other words, if you have the randomness without the analysis, every decision is an unqualified guess.

+ +

My sense is that it is free will that would arise from randomness—effects unrelated to causes—because without true randomness, the universe and everything in it is purely deterministic.

+",1671,,1671,,9/24/2019 0:19,9/24/2019 0:19,,,,0,,,,CC BY-SA 4.0 +15604,2,,15594,9/24/2019 2:00,,10,,"

I agree that this is too broad, but here's a 1 sentence answer for most of them. The ones I left out (from the bottom of the chart) are very modern, and very specialized. I don't know much about them, so perhaps someone who does can improve this answer.

+ +
    +
  • Perceptron: Linear or logistic-like regression (and thus, classification).
  • +
  • Feed Forward: Usually non-linear regression or classification with sigmoidal activation. Essentially a multi-layer perceptron.
  • +
  • Radial Basis Network: Feed Forward network with Radial Basis activation functions. Used for classification and some kinds of video/audio filtering
  • +
  • Deep Feed Forward: Feed Forward with more than 1 hidden layer. Used to learn more complex patterns in classification or regression, maybe reinforcement learning.
  • +
+ +
+ +
    +
  • Recurrent Neural Network: A Deep Feed Forward Network where some nodes connect to past layers. Used in reinforcement learning, and +to learn patterns in sequential data like text or audio.
  • +
  • LSTM: A recurrent neural network with specialized control neurons (sometimes called gates) that allow signals to be remembered for longer periods of time, or selectively forgotten. Used in any RNN application, and often able to learn sequences that have a very long repetition time.
  • +
  • GRU: Much like LSTM, another kind of gated RNN with specialized control neurons.
  • +
+ +
+ +
    +
  • Auto Encoder: Learns to compress data and then decompress it. After learning this model, it can be split into two useful subparts: a mapping from the input space to a low-dimensional feature space, that may be easier to interpret or understand; and a mapping from a small dimensional subspace of simple numbers into complex patterns, which can be used to generate those complex patterns. Basis of much modern work in vision, language, and audio processing.
  • +
  • VAE, DAE, SAE: Specializations of the Auto Encoder.
  • +
+ +
+ +
    +
  • Markov Chain: A neural network representation of a Markov chain: state is encoded in the set of neurons that are active, and transition probabilities are thus defined by the weights. Used for learning transition probabilities and unsupervised feature learning for other applications.
  • +
  • HN, BM, RBM, DBM: Specialized architectures based on the Markov Chain idea, used to automatically learn useful features for other applications.
  • +
+ +
+ +
    +
  • Deep Convolutional Network: Like a feed-forward network, but each node is really a bank of nodes learning a convolution from the layer before it. This essentially allows it to learn filters, edge detectors, and other patterns of interest in video and audio processing.

  • +
  • Deep Deconvolutional Network: Opposite of a Convolutional Network in some sense. Learn a mapping from features that represent edges or other high level properties of some unseen image, back to the pixel space. Generate images from summaries.

  • +
  • DCIGN: Essentially an auto-encoder made of a DCN and a DN stuck together. Used to learn generative models for complex images like faces.

  • +
  • Generative Adversarial Network: Used to learn generative models for complex images (or other data types) when not enough training data is available for a DCIGN. One model learns to generate data from random noise, and the other learns to classify the output of the first network as distinct from whatever training data is available.

  • +
+",16909,,16909,,9/25/2019 2:15,9/25/2019 2:15,,,,0,,,,CC BY-SA 4.0 +15605,2,,15451,9/24/2019 2:06,,1,,"

It is somewhat common to measure the number of operations involved in the search. This is only really useful if you're doing scientific work, because the computational cost of measuring it accurately is quite high. For example, if you were using a GA that used tournament-based fitness selection, and wanted to compare it to one that used round-robin selection, counting the number of evaluations of individuals would be a good measure of total computational effort.

+ +

In Genetic Programming, it is fairly common to build the measurement of the total op count into the interpreter you write for the programs. You can then compare that directly to the number of evals times the length of the genomes for something like a GA.
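
A minimal sketch of how such counting can be wired in (the names here are hypothetical, not from any particular GA/GP library):

class CountingFitness:
    # Wraps a fitness function and counts how many times it is called.
    def __init__(self, fitness_fn):
        self.fitness_fn = fitness_fn
        self.evaluations = 0

    def __call__(self, individual):
        self.evaluations += 1
        return self.fitness_fn(individual)

# fitness = CountingFitness(raw_fitness)
# ... run the GA/GP using `fitness` ...
# A rough GA op count is then fitness.evaluations * genome_length.
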

+",16909,,,,,9/24/2019 2:06,,,,0,,,,CC BY-SA 4.0 +15607,2,,15601,9/24/2019 7:03,,6,,"

This problem is called exploding gradients, resulting in an unstable network that at best cannot learn from the training data and at worst results in NaN weight values that can no longer be updated.

+ +

One way to confirm that it is exploding gradients is if the loss is unstable and not improving, or if the loss becomes NaN during training.

+ +

Apart from the usual gradient clipping and weight regularization that are recommended, I think the problem with your network is the architecture.

+ +
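
For reference, gradient clipping can be enabled directly on a Keras optimizer; a minimal sketch (the model and learning rate are placeholders):

from keras import optimizers

# Clip the global gradient norm to 1.0 (clipvalue=0.5 would instead clip each component).
opt = optimizers.Adam(lr=0.001, clipnorm=1.0)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
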

30 is an abnormally high number of nodes for a 2-layer perceptron model. Try increasing the number of layers and reducing the nodes per layer. This is under the assumption that you're experimenting with MLPs, because, for the problem above, convolutional neural networks seem like an obvious way to go. If unexplored, definitely check out CNNs for digit recognition; two-layer models will surely work better there.

+ +

Hope this helped!

+",25658,,,,,9/24/2019 7:03,,,,1,,,,CC BY-SA 4.0 +15608,2,,15588,9/24/2019 7:13,,1,,"

I have come across the same issue, but in language, where each input was a sentence, and hence of a different length.

+ +

The easier solution is to just find the longest sequence, extract its length, and zero-pad all other sequences up to that size, and then use any recurrent neural network architecture (since you're dealing with a time series), where these padded values act as input. (When zero-padding, it is best if fewer of your layers have a bias value, because you want all multiplications with 0s to just give 0s, as they represent a pad; having a bias would mean that after the weight multiplies with 0 it adds a small bias value, which is undesired because it is irrelevant information. One can always assume backprop will learn 0s for the biases even if they are present, but it never actually does; it mostly learns extremely small values, which can also hinder results.)

+ +
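
A minimal sketch of the padding step, assuming Keras is being used (the example sequences are made up):

from keras.preprocessing.sequence import pad_sequences

sequences = [[3, 7, 2], [5, 1], [9, 4, 6, 8]]          # variable-length inputs
max_len = max(len(s) for s in sequences)
padded = pad_sequences(sequences, maxlen=max_len, padding='post', value=0)
# padded.shape == (3, 4); a Masking(mask_value=0) layer in front of the RNN
# can tell it to ignore the padded timesteps entirely.
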

Another thing to experiment with is creating a small network that produces an embedding for each input. The padded input can be fed to a small neural network which generates a fixed-size embedding (the activations of a particular layer; backprop will learn a decent representation when trained end-to-end) that then becomes the input to the recurrent architecture. Adding a network to create an embedding almost always helps, as long as it is kept simple. Experimenting with the embedding network can help you find a good one, which can significantly boost your results.

+",25658,,,,,9/24/2019 7:13,,,,0,,,,CC BY-SA 4.0 +15609,2,,3172,9/24/2019 7:24,,3,,"

Edge computing is an approach extended from cloud computing which leverages the same concepts, but has advantages such as reduced latency, resource usage, energy usage, and so on.

+ +

Federated learning is an algorithm, or rather an approach, that empowers edge computing by iterating the model on the devices instead of fetching the raw data from them. It also reduces the privacy concerns of edge computing.
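
A very small sketch of the idea (federated averaging), using plain numpy and a toy linear model; this is only an illustration of the concept, not code from any specific framework:

import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    # One local training step on the device's own data; only the
    # updated weights ever leave the device, never the raw data.
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)   # gradient of a squared loss
    return global_weights - lr * grad

def federated_round(global_weights, client_datasets):
    # The server only sees and averages the clients' weight updates.
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    return np.mean(client_weights, axis=0)
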

+",14395,,,user9947,9/24/2019 13:05,9/24/2019 13:05,,,,0,,,,CC BY-SA 4.0 +15611,1,,,9/24/2019 15:39,,2,258,"

I'm wondering if there is a NN that can achieve the following task:

+

Output a unit vector that is parallel to the input vector. i.e., input a vector $\mathbf{v}\in\mathbb{R}^d$, output $\mathbf{v}/\|\mathbf{v}\|$. The dimension $d$ can be fixed, say $2$.

+

To achieve this, it seems to me that we need to use NN to do three functions: square, square-root, and division. But I don't know if a NN can do all of these.

+",29948,,2444,user9947,12/19/2021 20:44,12/19/2021 21:00,Is there a neural network that can output a unit vector that is parallel to the input vector?,,0,2,,,,CC BY-SA 4.0 +15612,1,,,9/24/2019 16:08,,5,118,"

I have a use case where the set of actions is different for different states. Is the agent aware of what actions are valid for each state, or is the agent only aware of the entire action space (in which case I guess the environment needs to discard invalid actions)?

+

I presume the answer is yes, but I would like to confirm.

+",29949,,2444,,11/17/2020 21:54,11/17/2020 21:54,Is the agent aware of a possible different set of actions for each state?,,1,1,,,,CC BY-SA 4.0 +15613,1,,,9/24/2019 19:35,,3,62,"

As computers are getting bigger better and faster, the concept of what constitutes a single datum is changing.

+

For example, in the world of pen-and-paper, we might take readings of temperature over time and obtain a time-series in which an individual datum is a time, temperature pair. However, it is now common to desire classifications of entire time-series, in the context of which our entire temperature time-series would be but a single data point in a data set consisting of a great number of separate time-series. In image processing, an $(x,y,c)$ triple is not a datum, but a whole grid of such values is a single datum. With lidar data and all manner of other fields things that were previously considered a dataset are now best thought of as a datum.

+

What is the term for datasets that are themselves composed of datasets?

+

The term "metadata" is occupied, I should think.

+

Are there any papers that talk about this transition from datasets of data to datasets of datasets? And what the implications are for data scientists and researchers?

+",21298,,2444,,12/10/2021 22:02,12/10/2021 22:02,What is the term for datasets that are themselves composed of datasets?,,1,2,,,,CC BY-SA 4.0 +15615,2,,15612,9/25/2019 2:40,,1,,"

This is actually an implementation choice, and will depend on how you choose to represent the agent's model of the function that maps from states to actions.

+

If you explicitly represent the entire state space, as you might choose to do with simple benchmark problems that you solve by directly solving an MDP with something like value iteration, then you can also easily explicitly represent exactly the set of actions that the agent can perform in each state, and the agent can learn the expected value of just taking those actions.

+

If your state space is very large, you may not be able to represent it explicitly, and your agent is more likely to use some approximation of the value function or its policy, as is commonly done in Q-Learning. Here, it is often preferable to define your model of the environment so that taking an invalid action in a state causes some well-defined outcome, or causes the agent to randomly re-select its actions until it ends up picking a valid one. The agent will eventually learn that selecting an invalid action leads to bad outcomes, without "realizing" that the action is invalid.
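
A minimal sketch of these two options (the helper names are hypothetical, not part of any specific library): either mask invalid actions during action selection when the valid set is known, or let the environment give invalid actions a well-defined, unattractive outcome.

import numpy as np

def select_action(q_values, valid_actions, epsilon=0.1):
    # Epsilon-greedy selection restricted to the actions valid in this state.
    # q_values is a numpy array over the full action space.
    if np.random.rand() < epsilon:
        return int(np.random.choice(valid_actions))
    masked = np.full_like(q_values, -np.inf, dtype=float)
    masked[valid_actions] = q_values[valid_actions]
    return int(np.argmax(masked))

def step(state, action):
    # Alternative: the environment itself penalizes invalid actions.
    if action not in valid_actions_of(state):   # valid_actions_of is hypothetical
        return state, -1.0, False                # stay put, small penalty
    return transition(state, action)             # transition is hypothetical
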

+",16909,,2444,,11/17/2020 21:54,11/17/2020 21:54,,,,0,,,,CC BY-SA 4.0 +15616,1,,,9/25/2019 5:33,,16,335,"

Let's call our dataset splits train/test/evaluate. We're in a situation where we require months of data. So we prefer to use the evaluation dataset as infrequently as possible to avoid polluting our results. Instead, we do 10 fold cross validation (CV) to estimate how well the model might generalize.

+

We're training deep learning models that take between 24-48 hours each, and the process of parameter sweeping is obviously very slow when performing 10-fold cross validation.

+

Does anyone have any experience or citations for how well parameter sweeping on one split of the data followed by cross validation (used to estimate how well it generalizes) works?

+

I suspect it's highly dependent on the distribution of data and the local minima & maxima of the hyperparameters, but I wanted to ask.

+",23125,,43231,,12/31/2020 18:01,8/19/2023 6:06,Will parameter sweeping on one split of data followed by cross validation discover the right hyperparameters?,,1,1,,,,CC BY-SA 4.0 +15617,1,,,9/25/2019 8:37,,1,57,"

I'm using Q-learning with some extensions such as noisy linear layers, n-steps and double DQN.

+ +

The training, however, isn't that successful; my rewards are decreasing over time after a steep increase at the beginning:

+ +

+ +

But what's interesting is that my TD loss is also decreasing:

+ +

+ +

The sigma magnitudes of the noisy linear layers, which control the exploration, are strangely increasing, and also seem to converge. I expected them to reduce uncertainty over time, but the opposite is the case.

+ +

+ +

Another interesting thing, and that's probably why my loss is decreasing: the model tends to always generate the same transition, which is why the episodes are ending early and the rewards are getting lower. My experience replay is full of this single transition (around 99 percent of the buffer).

+ +

What could be the reason? Which things should I check? Is there anything I could try? I'm also willing to add information; just comment on what could be of interest.

+",21685,,21685,,9/25/2019 13:46,9/25/2019 13:46,"TD losses are descreasing, but also rewards are decreasing, increasing sigma",,0,0,,,,CC BY-SA 4.0 +15619,1,,,9/25/2019 9:47,,8,169,"

Discussing the video More Parkour Atlas, a friend asked how the robot's movement was so similar to the one from a real human and wondering how this is achieved?

+

To my knowledge, this is not something the developer "programmed", but instead emerged from the learning algorithm.

+

Could you provide an overview and some reference on how this is achieved?

+",29969,,2444,,1/22/2021 0:42,1/22/2021 0:42,How does Atlas from Boston Dynamics have human-like movement?,,1,0,,,,CC BY-SA 4.0 +15621,1,,,9/25/2019 13:59,,14,6225,"

I was following some examples to get familiar with TensorFlow's LSTM API, but noticed that all LSTM initialization functions require only the num_units parameter, which denotes the number of hidden units in a cell.

+ +

According to what I have learned from the famous colah's blog, the cell state has nothing to do with the hidden layer, thus they could be represented in different dimensions (I think), and then we should pass at least 2 parameters denoting both #hidden and #cell_state.

+ +

So, this confuses me a lot when trying to figure out what the TensorFlow's cells do. Under the hood, are they implemented like this just for the sake of convenience or did I misunderstand something in the blog mentioned?

+ +

+",29974,,2444,,4/12/2020 13:31,5/24/2021 10:13,What is the relationship between the size of the hidden layer and the size of the cell state layer in an LSTM?,,3,0,,,,CC BY-SA 4.0 +15622,1,15635,,9/25/2019 14:20,,3,60,"

Let's assume I have a CNN model trained to categorize some objects in images. By using this model, I find more categorized images. If I now retrain this model on a dataset that consists of the old set and the newly categorised images, is there a chance that such a new model will have higher accuracy? Or maybe, because the new data possesses only information that could be found in the initial set, the model will have similar/lower accuracy?

+ +

Please let me know if something is unclear.

+",22659,,22659,,9/25/2019 18:17,9/26/2019 11:31,"Can a model, retrained on images classified previously by itself, increase its accuracy?",,1,0,,,,CC BY-SA 4.0 +15623,1,,,9/25/2019 15:00,,1,24,"

I am looking for a GAN paper I read a while ago, but unfortunately cannot find again. I think it compared GANs and other methods (like CVAEs) w.r.t. how they handle multi-modal data, though I am not sure about the CVAEs. What I know is that they created different synthetic toy datasets with multiple modes to analyze this. I remember one plot of a blue spiral of this data on a white background. Any guesses?

+",17959,,2444,,9/25/2019 15:43,9/25/2019 15:43,Looking for GAN paper with spiral image,,0,0,,,,CC BY-SA 4.0 +15624,1,,,9/25/2019 15:13,,8,156,"

Here's a sort of a conceptual question. I was implementing a SOM algorithm to better understand its variations and parameters. I got curious about one bit: the BMU (best matching unit, i.e. the neuron that is most similar to the vector being presented) is chosen as the neuron that has the smallest distance in feature space to the vector. Then I update it and its neighbours.

+

This makes sense, but what if I used more than one BMU for updating the network? For example, suppose that the distance to one neuron is 0.03, but there is another neuron with distance 0.04. These are the two smallest distances. I would use the one with 0.03 as the BMU.

+

The question is, what would be the expected impacts on the algorithm if I used more than one BMU? For example, I could be selecting for update all neurons for which the distance is up to 5% more than the minimum distance.

+

I am not asking for code. I can implement it to see what happens. I am just curious to see if anyone has any insight on the pros and cons (except additional complexity) of this approach.

+",29975,,43231,,1/14/2021 8:31,6/3/2023 13:04,What is the impact of using multiple BMUs for self-organizing maps?,,1,0,,,,CC BY-SA 4.0 +15626,1,,,9/25/2019 17:01,,1,29,"

I have a use case where the state of the environment could change due to random events in between the time steps at which the agent takes actions. For example, at t1, the agent takes action a1 and is given the reward and the new state s1. Before the agent takes the next action at t2, some random events occur in the environment that alter the state. Now, when the agent takes its action at t2, it is acting on ""stale information"", since the state of the environment has changed. Also, the new state s2 will represent changes not only due to the agent's action, but also due to the prior random events that occurred. In the worst case, the action could possibly have become invalid for the new state that was introduced by the random events that occurred within the environment.

+ +

How do we deal with this? Does this mean that this use case is not a good one to solve with RF? If we just ignore these changing states due to the random events in the environment, how would that affect the various learning algorithms? I presume that this is not an uncommon or unique problem in real-life use cases...

+ +

Thanks!

+ +

Francis

+",29949,,,,,9/25/2019 17:01,RF: How to deal with environments changing state due to external factors,,0,0,,,,CC BY-SA 4.0 +15628,2,,15619,9/25/2019 23:35,,2,,"

Because Boston Dynamics is a private, for-profit, company, we cannot know for sure how they achieve their results. However, we can examine the available public information and make educated guesses.

+

In the information posted with the video, Boston Dynamics tells us that they use

+
+

... an optimization algorithm transforms high-level descriptions of +each maneuver into dynamically-feasible reference motions. Then Atlas +tracks the motions using a model predictive controller that smoothly +blends from one maneuver to the next.

+
+

This sounds like three older AI approaches have been blended together to create the video.

+

First, they mention using an optimization algorithm to assemble complex motions from "dynamically-feasible reference motions". This sounds like they have first learned, or possibly pre-programmed, a range of simple movements that, on their own, are not very impressive, but that can be composed together into more complex movements. This approach is called Layered Learning, and was pioneered by Peter Stone and Manuela Veloso in the late 1990s. It is widely and successfully used in academic robotics competitions. Basically, this algorithm tries out different combinations and sequences of simple actions until it finds one that is close to the desired complex action. This is usually done with a local search algorithm, or sometimes with other optimization tools.

+

The second technique is, of course, actually learning to perform the basic actions that layered learning can compose together into more complex actions. This is usually done with some form of reinforcement learning, but it is sometimes done by a programmer who explicitly solves the equations for the movement of a simple system.

+

Finally, they need to use a model predictive controller to smoothly interpolate between the sequence of actions that the layered learning approach has come up with. In this approach, the engineers designing the robot have very carefully measured, for this specific robot, exactly how parts of it tend to move or continue moving, and written this down as a mathematical model called a dynamical system. This model allows the algorithm to figure out how this specific robot's movements are slipping away from the planned ones as it executes the motions. The slippage occurs because of things like friction (the robot's parts are not all perfectly smooth, and may have different amounts of lubrication, or a motor might slip). The algorithm can then make small changes to the motion to smooth out these unexpected bumps. That's the part that makes everything look so smooth.

+

It's worth noting too, that the video is not fully representative of the typical results their algorithm obtains. Boston Dynamics claims in the video caption that their algorithm "succeeds" in achieving a desired motion about 80% of the time. They don't tell us what "success" means, but the video is probably just the best of many takes for a carefully planned and filmed routine.

+",16909,,2444,,1/22/2021 0:42,1/22/2021 0:42,,,,0,,,,CC BY-SA 4.0 +15631,1,15656,,9/26/2019 2:04,,4,149,"

The following X-shape alternated pattern can be separated quite well and super fast by K-nearest Neighbour algorithm (go to https://ml-playground.com to test it):

+

+

However, a DNN seems to struggle greatly to separate that X-shape alternated data. Is it possible to do K-nearest before the DNN, i.e. set the DNN weights somehow to simulate the result of K-nearest before doing DNN training?

+

Another place to test the X-shape alternated data: https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html

+",2844,,156,,5/17/2021 1:43,5/17/2021 1:43,Is it possible to do K-nearest-neighbours before training DNN,,1,2,,,,CC BY-SA 4.0 +15632,1,15633,,9/26/2019 6:15,,2,748,"

I am going through Russel and Norvig's Artificial Intelligence: A Modern Approach (3rd edition). I was reading the part regarding the A* algorithm

+ +
+

A* graph search version is optimal when heuristic is consistent and tree search version optimal when heuristic is just admissible.

+
+ +

The book gives the following graph search algorithm.

+ +

+ +

The above algorithm says to pop the node from the frontier set and expand it (assuming it is not in the explored set), and to add its children to the frontier only if a child is not in the frontier or explored set.

+ +

Now, if I apply the same to A* (assuming a consistent heuristic) and suppose I find the goal state (as a child of some node) for the first time, I add it to the frontier set. Now, according to this algorithm, if the goal state is already in the frontier set, it must never be added again (this implies it is never updated/promoted, right?).

+ +

I have a few questions.

+ +
    +
  1. I might as well stop the search when I find goal state for the first time as a child of some node and not wait till I pop the goal state from the frontier?

  2. +
  3. Does a consistent heuristic guarantee that when I add a node to the frontier set I have found the optimal path to it? (Because if I don't update it or re-add it with an updated cost, then, according to the graph search algorithm above, the answer to the question must be yes.)

  4. +
+ +

Am I missing something? Because it also states that, whenever A* selects the node for expansion, the optimal path to that node is found and doesn't say that when a node is added to the frontier set, the optimal path is found.

+ +

So, I'm pretty confused, but I think the general graph search definition (in the above image) is misleading.

+",5122,,2444,,11/10/2019 17:19,11/10/2019 17:19,"In the graph search version of A*, can I stop the search the first time I encounter the goal node?",,1,0,,,,CC BY-SA 4.0 +15633,2,,15632,9/26/2019 6:45,,0,,"

No, the optimal path is found when you pop the goal state. If you stop the search when you first add the goal state then the final path may not be optimal.

+",12509,,,,,9/26/2019 6:45,,,,2,,,,CC BY-SA 4.0 +15635,2,,15622,9/26/2019 11:25,,0,,"

The most likely outcome of this approach is wasted time and very little effect on accuracy.

+ +

There will be changes to the model. Some will be beneficial and improve the model, but some will backfire making it worse.

+ +

For instance:

+ +
    +
  • The model predicts with probability 0.4 that an image is in a certain class. It is the highest prediction, and actually true. It will be added to the training dataset with a ""ground truth"" of probability 1.0, so on balance more and better data has been added to the data set. This will improve generalisation, as whatever caused the relatively low 0.4 value initially - e.g. a pose or lighting variation - will now be covered correctly in the training set.

  • +
  • The model predicts with probability 0.4 that an image is in a certain class. It is the highest prediction, and actually false. It will be added to the dataset with ""ground truth"" of probability 1.0 for the wrong class. This will weaken associations to the correct class for similar input images, meaning, for example, that a certain pose or lighting difference that is already causing problems for the model will be used to incorrectly classify images in the future.

  • +
+ +

These two scenarios will occur, on average, at a rate determined by the model's current test accuracy. So if your current model is 90% accurate, 1 in 10 images in your new training data will be mislabelled. This will ""lock in"" the current errors at the same rate on average as they already occur.

+ +

The effect may be a drift up or down in accuracy as the model will definitely change due to the new training data. However you have little to no control over how this drift effect goes if you are not willing or able to oversee the automatic classifications generated on new data by the model.

+ +

There are a few ways to get some improvement unsupervised from new data. For instance:

+ +
    +
  • Build an autoencoder from the early convolutional layers of your model and train it to re-generate all inputs as outputs. This should help it learn important features of the variations in data that you are using. Once this training is done, discard the decoder part of the auto-encoder and add your classifier back in to fine-tune it. This may help if you have only a small amount of labelled data, but a lot of unlabelled data. (A minimal sketch of this idea follows the list.)

  • +
  • Use a model that has better accuracy than yours to auto-label the data. This might seem a little chicken-and-egg, but you may be able to create such a model using ensemble techniques. The ensemble model could be too awkward to use in production, but may still be used in an auto-labeling pipeline to improve your training data.

  • +
+ +
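
A minimal Keras sketch of the autoencoder idea above; the input size, layer widths, and variable names are placeholders, and in practice the encoder layers would be the early convolutional layers copied from your classifier:

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

inp = Input(shape=(64, 64, 3))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2))(x)
encoded = Conv2D(64, (3, 3), activation='relu', padding='same')(x)

x = UpSampling2D((2, 2))(encoded)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
# Train on the unlabelled images: input and target are the same image.
# autoencoder.fit(unlabelled_images, unlabelled_images, epochs=10, batch_size=32)
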

Note you may get even better results simply ignoring the extra unlabeled data, and instead fine tuning a high quality ImageNet-trained model on the labeled data you already have - saving yourself a lot of effort. Depends on the nature of the images, and how much labeled data you are already working with.

+",1847,,1847,,9/26/2019 11:31,9/26/2019 11:31,,,,2,,,,CC BY-SA 4.0 +15637,1,,,9/26/2019 12:33,,2,258,"

I'm currently writing a program using Keras (Python 3) to play a game similar to Atari games, only in this one there are objects moving on the screen at different angles and in different directions (in most of the Atari games I've encountered, the objects you need to shoot are static). The agent's aim is to shoot them.

+ +

After executing every action, I get feedback from the environment: I get the locations of all the objects on the screen, the locations of the collisions that happened, my position (angle of the turret) and the total score (from which I can calculate the reward).

+ +

I defined that each state will consist of the parameters mentioned above.

+ +

I want to use the softmax action selection algorithm in order to choose the next action, but I'm not sure how to do it. I'd be very grateful if anyone could help me or refer me to a source that can explain the syntax. Currently, I'm using a decaying epsilon-greedy algorithm.

+ +

Thank you very much for your time and attention.

+",29989,,,,,9/26/2019 12:33,how to use Softmax action selection algorithm in atari-like game,,0,2,,,,CC BY-SA 4.0 +15639,1,15649,,9/26/2019 16:21,,4,308,"

I have implemented a neural network from scratch (only using numpy) and I am having problems understanding why the results are so different between stochastic/minibatch gradient descent and batch gradient descent:

+ +

+ +

The training data is a collection of point coordinates (x, y). The labels are 0s or 1s (below or above the parabola).

+ +

As a test, I am doing a classification task. My objective is to make the NN learn which points are above the parabola (yellow) and which points are below the parabola (purple).

+ +

Here is the link to the notebook: https://github.com/Pign4/ScratchML/blob/master/Neural%20Network.ipynb

+ +
    +
  • Why is the batch gradient descent performing so poorly with respect to the other two methods?
  • +
  • Is it a bug? But how can it be, since the code is almost identical to the minibatch gradient descent?
  • +
  • I am using the same (randomly chosen by trial and error) hyperparameters for all three neural networks. Does batch gradient descent need a more accurate technique to find the correct hyperparameters? If yes, why so?
  • +
+",10813,,10813,,9/27/2019 11:55,9/27/2019 11:55,Why is batch gradient descent performing worse than stochastic and minibatch gradient descent?,,1,0,,,,CC BY-SA 4.0 +15640,2,,15613,9/26/2019 19:16,,1,,"

I don't think that this is anything new. Let's use your example of classifying an entire time series, say predicting word 1 vs word 2 for speech recognition. We can write out the data as a data frame like we would do with any other multivariate data: observations at time 1, time 2, etc as the predictors and the classification label as the response variable.

+ +

Each observation is a vector of the values at particular times for your subject, plus the label--no different than any other multivariate data. Sure, there might be special dependence structure because of the time series nature of your data, but you can still write it as a multivariate problem.

+ +

Okay, let's say that you hit the speech signal with a wavelet transform, resulting in an image-looking spectrogram of 2D data. Then just consider each ""pixel"" (time-frequency pair) to be a variable in your multivariate problem, along with the classification label. This is some kind of bijection between an $m\times n$ matrix and $\mathbb{R}^{n\times m}$.

+ +

You can extend this idea to 3D or 4D data (or higher), too. Just unwrap the high-dimension tensor in some kind of map $T^{m\times n \times \dots} \rightarrow \mathbb{R}^{m\times n \times \dots}$.

+",25529,,,,,9/26/2019 19:16,,,,9,,,,CC BY-SA 4.0 +15642,1,,,9/26/2019 19:37,,3,906,"

pjreddie's official darknet version (link from official website here) has been forked several times. In particular, I've come across AlexeyAB's fork through this tutorial. I assume the tutorial's author used AlexeyAB's fork because he wanted to use it on a Windows machine, which pjreddie's darknet cannot do AFAIK.

+ +

I am not really concerned about that (I am a linux user), but I am very interested about the half precision option (CUDNN_HALF) that AlexeyAB's darknet has, whereas pjreddie's darknet does not. Of course I've checked that this option was handled by the graphic card (RTX2080) we use at my office.

+ +

Nevertheless, I wonder: how stable/robust is that fork? Of course I want high-performing software, but I also want a certain level of stability! On the other hand, the latest commit on pjreddie's darknet is back from September 2018 (i.e. 1 year old), whereas AlexeyAB's darknet is active…

+ +

More broadly, there seems to be a lot of darknet forks: which ones to prefer?

+ +

What does the neural network community think?

+",30003,,,,,9/27/2019 1:20,What is the best variant of darknet to use?,,1,0,,,,CC BY-SA 4.0 +15643,1,,,9/26/2019 20:12,,1,906,"

I have a CNN model to classify 2 classes (Yes or No). I use categorical_crossentropy loss and softmax activation at the end. For input, I use images with all 3 channels; for output, I use a one-hot encoded vector ([0,1] or [1,0]).

+ +

I have a function that guarantees that in each batch I have the same number of samples from each class, so the classes are not unevenly represented.

+ +

What happens when I train the model is that I am stuck at the same loss while training...

+ +

I assume that the model always predicts the same class, so half of the batch has loss 0 and the other half has the maximum loss, which brings it to 8 all the time...

+ +

What could have gone wrong?

+ +

The network is something like this :

+ +
from keras.layers import Input, Conv2D, LeakyReLU, MaxPooling2D, Dropout, Flatten, Dense
from keras.models import Model

# input_img is assumed to be the Keras Input tensor for the RGB image,
# e.g. input_img = Input(shape=(height, width, 3))
x = Conv2D(16, (3, 3), padding='same')(input_img)
+x = LeakyReLU(0.1)(x)
+x = Conv2D(32 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = MaxPooling2D((2, 2))(x)
+x = Dropout(0.25)(x)
+x = Conv2D(32 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = Conv2D(48 , (3, 3), padding='same')(x)
+x = LeakyReLU(0.1)(x)
+x = MaxPooling2D((2, 2))(x)
+x = Dropout(0.25)(x)
+
+x = Flatten()(x)
+x = Dense(4096)(x)
+x = LeakyReLU(0.1)(x)
+x = Dropout(0.5)(x)
+x = Dense(2048)(x)
+x = LeakyReLU(0.1)(x)
+x = Dropout(0.5)(x)
+out = Dense(2, activation='softmax', name='table')(x)
+
+model = Model(input_img, out)
+model.compile(optimizer='adam', loss= 'categorical_crossentropy')
+
+ +

Training Loss:

+ +

+",26993,,26993,,10/2/2019 4:35,10/2/2019 4:35,CNN clasification model loss stuck at same value,,0,7,,,,CC BY-SA 4.0 +15645,1,15655,,9/26/2019 23:49,,3,2461,"

I am trying to train a CNN-LSTM model. The size of my images is 640x640. I have a GTX 1080 ti 11GB. I am using Keras with the TensorFlow backend.

+ +

Here is the model.

+ +
from keras.layers import Input, TimeDistributed, Conv2D, MaxPooling2D, Flatten, Dense, Dropout, LSTM
from keras.models import Model
from keras import optimizers

# n_width, n_height, n_channels describe the input images and are defined elsewhere
img_input_1 = Input(shape=(1, n_width, n_height, n_channels))
+conv_1 = TimeDistributed(Conv2D(96, (11,11), activation='relu', padding='same'))(img_input_1)
+pool_1 = TimeDistributed(MaxPooling2D((3,3)))(conv_1)
+conv_2 = TimeDistributed(Conv2D(128, (11,11), activation='relu', padding='same'))(pool_1)
+flat_1 = TimeDistributed(Flatten())(conv_2)
+dense_1 = TimeDistributed(Dense(4096, activation='relu'))(flat_1)
+drop_1 = TimeDistributed(Dropout(0.5))(dense_1)
+lstm_1 = LSTM(17, activation='linear')(drop_1)
+dense_2 = Dense(4096, activation='relu')(lstm_1)
+dense_output_2 = Dense(1, activation='sigmoid')(dense_2)
+model = Model(inputs=img_input_1, outputs=dense_output_2)
+
+op = optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
+
+model.compile(loss='mean_absolute_error', optimizer=op, metrics=['accuracy'])
+
+model.fit(X, Y, epochs=3, batch_size=1)
+
+ +

Right now, using this model, I can only use the training data when the images are resized to 60x60, any larger and I run out of GPU memory.

+ +

I want to use the largest possible size as I want to retain as much discriminatory information as possible. (The labels will be mouse screen coordinates between 0 - 640).

+ +

Among many others, I found this question: How to handle images of large sizes in CNN?

+ +

Though I am not sure how I can ""restrict your CNN"" or ""stream your data in each epoch"" or if these would help.

+ +

How can I reduce the amount of memory used so I can increase the image sizes?

+ +

Is it possible to sacrifice training time/computation speed in favor of higher resolution data whilst retaining model effectiveness?

+ +

Note: the above model is not final, just a basic outlay.

+",30005,,2444,,5/22/2020 23:49,5/23/2020 5:39,How can I reduce the GPU memory usage with large images?,,1,1,,,,CC BY-SA 4.0 +15647,2,,15642,9/27/2019 0:04,,3,,"

I used Trieu's Darkflow and it trained well up to around 30 to 50 epochs with a maximum 30 GB dataset. At the same time, it quite often crashed after these epochs.

+ +

Half precision... I don't remember Darkflow having that.

+ +

One way to estimate each fork's reputation is by the number of stars and issues; for example, Alexey's Darknet has 5,000 and 2,500, and Darkflow 5,000 and 539, respectively.

+ +

Reference: OS Ubuntu 16.04

+",27229,,27229,,9/27/2019 1:20,9/27/2019 1:20,,,,1,,,,CC BY-SA 4.0 +15648,1,,,9/27/2019 8:12,,9,142,"

I have a set of fixed integers $S = \{c_1, \dots, c_N \}$. I want to find a single integer $D$, greater than a certain threshold $T$, i.e. $D > T \geq 0$, that divides each $c_i$ and leaves remainder $r_i \geq 0$, i.e. $r_i$ can be written as $r_i = c_i \text{ mod } D$, such that the sum of remainders is minimized.

+

In other words, this is my problem

+

\begin{equation}
\begin{aligned}
D^* \quad = \text{argmin}_D& \sum_i c_i \text{ mod } D \\
\textrm{subject to} &\quad D > T
\end{aligned}
\end{equation}

+

If the integers have a common divisor, this problem is easy. If the integers are relatively co-prime however, then it is not clear how to solve it.

+

The set $|S| = N$ can be around $10000$, and each element also has a value in tens of thousands.

+

I was thinking about solving it with a genetic algorithm (GA), but it is kind of slow. I want to know if there is any other way to solve this problem.

+",30015,,2444,,1/23/2021 17:35,8/14/2021 16:41,"Given a list of integers $\{c_1, \dots, c_N \}$, how do I find an integer $D$ that minimizes the sum of remainders $\sum_i c_i \text{ mod } D$?",,1,4,,,,CC BY-SA 4.0 +15649,2,,15639,9/27/2019 8:37,,2,,"

Assuming the problem at hand is a classification (Above or Below parabola), this is probably because of the nature of Batch gradient descent. Since the gradient is being calculated on the whole batch, it tends to work well on only convex loss functions.

+ +

The reason why batch gradient descent is not working too well is probably the high number of minima in the error manifold, so it ends up learning nothing relevant. You can change the loss function and observe the change in results; they might not be great (batch GD usually isn't), but you'll be able to see differences.
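
To make the distinction concrete, here is a small numpy sketch of the update loops (using a toy linear model in place of the network; this is an illustration, not the notebook's code):

import numpy as np

def gradient(w, X, y):
    # Gradient of the mean squared error for a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)

def batch_gd_epoch(w, X, y, lr):
    return w - lr * gradient(w, X, y)              # one update per epoch, whole dataset

def minibatch_gd_epoch(w, X, y, lr, batch_size=32):
    idx = np.random.permutation(len(y))
    for start in range(0, len(y), batch_size):     # many noisy updates per epoch
        b = idx[start:start + batch_size]
        w = w - lr * gradient(w, X[b], y[b])
    return w
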

+ +

You can check this out for more on the differences between the three. Hope this helped!

+",25658,,,,,9/27/2019 8:37,,,,2,,,,CC BY-SA 4.0 +15650,1,15651,,9/27/2019 13:52,,13,1224,"

The No Free Lunch (NFL) theorem states (see the paper Coevolutionary Free Lunches by David H. Wolpert and William G. Macready)

+ +
+

any two algorithms are equivalent when their performance is averaged across all possible problems

+
+ +

Is the ""No Free Lunch"" theorem really true? What does it actually mean? A nice example (in ML context) illustrating this assertion would be nice.

+ +

I have seen some algorithms which behave very poorly, and I have a hard time believing that they actually follow the above-stated theorem, so I am trying to understand whether my interpretation of this theorem is correct or not. Or is it just another ornamental theorem like Cybenko's Universal Approximation theorem?

+",,user9947,2444,,9/27/2019 16:49,10/4/2019 17:32,"What are the implications of the ""No Free Lunch"" theorem for machine learning?",,1,0,,,,CC BY-SA 4.0 +15651,2,,15650,9/27/2019 16:43,,13,,"

This is a really common reaction after first encountering the No Free Lunch theorems (NFLs). The one for machine learning is especially unintuitive, because it flies in the face of everything that's discussed in the ML community. That said, the theorem is true, but what it means is open to some debate.

+ +

To restate the theorem for people who don't know it, the NFL theorem for machine learning is really a special case of the NFL theorem for local search and optimization. The local search version is easier to understand. The theorem makes the following, somewhat radical claim:

+ +
+

Averaged across all possible optimization problems, the average solution quality found by any local search algorithm you choose to use is exactly the same as the average solution quality of a local ""search"" algorithm that just generates possible solutions by sampling uniformly at random from the space of all solutions.

+
+ +

Another formulation, when people want an even stronger reaction, is to say that if you want to find the best solution to a problem, it's just as good to try things that seem to be making your solution iteratively worse as it is to try things that seem to be making your solution iteratively better. On average, both these approaches are equally good.

+ +

Okay, so why is this true? Well, the key is in the details. Wolpert has sometimes described the theorem as a specialization of Hume's work on the problem of induction. The basic statement of the problem of induction is: we have no logical basis for assuming that the future will be like the past. Logically, there's no reason that the laws of physics couldn't all just radically change tomorrow. From a purely logical perspective, it's totally reasonable that the future can be different from the past in any number of ways. Hume's problem is that, in general the future is like the past in a lot of ways. He tried to formulate a philosophical (logical) argument that this needed to be so, but basically failed.

+ +

The No Free Lunch theorems say the same thing. If you don't know what your search space looks like, then if you iteratively refine your guess at what a good solution looks like, in response to the observations you've made in the past about what good solutions look like (i.e. learning from data), then it's just as likely that the operation you make helps as it is that it hurts. That's why the ""averaged over all possible optimization problems"" part is key. For any optimization problem where hill climbing is a good strategy after $k$ moves, we can make one that is identical, except that the $k$-th hill climbing move leads to an awful solution. The actual proof is more subtle than that, but that's the basic idea.

+ +

A very brief lay summary might be:

+ +
+

A machine learning algorithm can only be made to work better on some kinds of problems by being made to work worse on another kind of problem.

+
+ +

So what does this mean in a practical sense? It means that you need to have some a priori reason for thinking that your algorithm will be effective on a particular problem. Exactly what a good reason looks like is the subject of vigorous debate within the ML community. This is very closely related to the bias/variance tradeoff.

+ +

Some common responses are:

+ +
    +
  • When you're looking at a new optimization problem, although it could have any random kind of structure, the problems we actually encounter in the real world are a lot more regular, and certain common themes are present, like the fact that moving ""uphill"" (minimizing error) iteratively tends to lead to good solutions. Basically, this school of thought says NFL is an ornamental theorem: most ML algorithms work better on ""the kind of problems we see in real life"", by working worse on ""the kind of problems we don't see in real life"".
  • +
  • When you're looking at a new optimization problem in [insert your favourite application domain], although it could have any random kind of structure, problems tend to look like [whatever you think], which makes [your favorite algorithm] a lot more effective than random guessing.
  • +
  • Wolpert & McCready themselves published an interesting result showing that there actually are specialized optimization processes, based on co-evolution, that are consistently better than random guessing.
  • +
+ +

Regardless, it's indisputable that some algorithms are better than others, in certain sub-domains (we can see this empirically). NFL tells us that to be better there, they need to be worse somewhere else. The question up for debate is whether the ""somewhere else"" is real problems, or purely artificial ones.

+",16909,,16909,,10/4/2019 17:32,10/4/2019 17:32,,,,2,,,,CC BY-SA 4.0 +15652,5,,,9/27/2019 16:51,,0,,,2444,,2444,,9/27/2019 16:51,9/27/2019 16:51,,,,0,,,,CC BY-SA 4.0 +15653,4,,,9/27/2019 16:51,,0,,For questions related to the various No Free Lunch (NFL) theorems (both in the context of machine learning and optimization).,2444,,2444,,9/27/2019 16:51,9/27/2019 16:51,,,,0,,,,CC BY-SA 4.0 +15655,2,,15645,9/27/2019 22:09,,2,,"

As the other links suggest, you basically have four options:

+ +
    +
  • ""restrict your CNN"". This means making your model smaller and simpler, possibly by inserting a pooling layer at the front, or reducing the total number of layers. From a memory perspective, this isn't likely to produce really large gains though.
  • +
  • ""stream your data in each epoch"". By default, the entire training set will be stored on the GPU. This is a good idea, because the bus connecting the GPU and the RAM has extremely high latency. It takes a very long time to start sending data across the bus. However, most systems have much more space in the RAM than in the video memory on a GPU. Instead of storing all the training data in the GPU, you could store it in main memory, and then manually move over just the batch of data you want to use for a given update. After computing the update, you could free the memory assigned to the batch. I am not sure how to do this in Keras. In the past, I have done this by writing a custom CUDA kernel.
  • +
  • ""Use less data"". If you train on a random subset of the training data, you can keep your images at high quality, but your model will probably overfit. If you train on downsampled images, your model may not be able to discriminate between them well. However, both of these options are easy to do.
  • +
  • ""Get more memory"". If you buy a video card with more video RAM, you can train more complex models. This is why scientific-grade cards have much larger memories than gaming cards. The Tesla V100 has 32GB of memory, and 16GB cards are common, whereas even the most advanced cards for gaming have only 11GB.
  • +
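
One Keras-level way to keep the full dataset in main memory and only move a batch at a time is to feed the model from a Python generator; a minimal sketch (the variable names are placeholders, and this only approximates the manual approach described above):

import numpy as np

def batch_generator(X, Y, batch_size=4):
    # X and Y live in host RAM; only one batch is handed to the GPU at a time.
    while True:
        idx = np.random.randint(0, len(X), size=batch_size)
        yield X[idx], Y[idx]

# model.fit_generator(batch_generator(X, Y), steps_per_epoch=len(X) // 4, epochs=3)
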
+",16909,,,,,9/27/2019 22:09,,,,0,,,,CC BY-SA 4.0 +15656,2,,15631,9/28/2019 0:57,,2,,"

There are two factors that will change the ability of a deep neural network to fit a given dataset: either you need more data, or a deeper and wider network. Since the pattern is only 2-d, it can likely be approximated by some sort of simple periodic function. A DNN can approximate periodic functions pretty well, so the issue is probably that you don't have enough data.

+ +

If you have an a priori belief that the pattern is well approximated by K-nearest neighbors, then you could do the following (a sketch follows the list):

+ +
    +
  1. Fit a K-NN model to the data.
  2. +
  3. Generate $N$ new points uniformly at random from the input space.
  4. +
  5. Label the $N$ new points using the K-NN model.
  6. +
  7. Fit your DNN to the original dataset, plus the $N$ new points.
  8. +
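
A minimal sketch of this recipe using scikit-learn (X and y stand for the original 2-d points and their labels; the sample count and layer widths are placeholders):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# 1. Fit K-NN to the original data.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# 2-3. Sample new points uniformly from the input space and label them with the K-NN.
X_new = np.random.uniform(X.min(axis=0), X.max(axis=0), size=(5000, X.shape[1]))
y_new = knn.predict(X_new)

# 4. Fit the network to the original data plus the K-NN-labelled points.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000)
dnn.fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))
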
+",16909,,,,,9/28/2019 0:57,,,,0,,,,CC BY-SA 4.0 +15658,1,,,9/28/2019 18:25,,7,169,"

My understanding of the vanishing gradient problem in deep networks is that, as backprop progresses through the layers, the gradients become small, and thus training progresses more slowly. I'm having a hard time reconciling this understanding with images such as the one below, where the losses for a deeper network are higher than for a shallower one. Should it not just take longer to complete each iteration, but still reach the same level of accuracy, if not higher?

+

+",30038,,-1,,6/17/2020 9:57,7/25/2020 22:00,Why do very deep non resnet architectures perform worse compared to shallower ones for the same iteration? Shouldn't they just train slower?,,2,2,,,,CC BY-SA 4.0 +15660,2,,12397,9/29/2019 1:01,,1,,"

The discriminator's job is to tell between real images and generated images. It would be impossible to do that if it never actually sees generated images, just as if you wanted a network to differentiate between cats and dogs it wouldn't work if you only showed it pictures of dogs.
+If you were to train a discriminator on only real images, all the labels it would see would be 1. What you end up with is a network that learns how to produce 1 regardless of its inputs, which is very easy to learn without finding any underlying patterns in the data. Once you add in the generated images and 0 labels, it is forced to learn something interesting.

+",30043,,,,,9/29/2019 1:01,,,,3,,,,CC BY-SA 4.0 +15661,2,,6185,9/29/2019 6:39,,2,,"

You claim that

+
+

C++ is technically a more powerful language than python.

+
+

But that claim is wrong (or does not mean much). Remember that a programming language is a specification (often some document written in English). For example, n3337 is a late draft of the C++ specification. I don't like Python, but it does seem as powerful as C++ (even if C++ implementations are generally faster than Python ones): what a good Python programmer can code well in Python, another good C++ programmer can code well in C++, and vice versa.

+

Theoretically, both C++ and Python are Turing-complete (on purpose) programming languages.

+

And Python is as expressive as C++ is. I cannot name a programming language feature that Python has but not C++ (except those related to reflection; see also this answer and be aware of dlopen - see my manydl.c program -, of LLVM, of libgccjit, of libbacktrace, and consider some meta-programming approach with them, à la Bismon or like J.Pitrat's blog advocates it).

+

Maybe you think of a programming language as the software implementing it. Then Python is as expressive as C++ is (and seems easier to learn, but that is an illusion; see http://norvig.com/21-days.html for more about that illusion). Python and C++ have a quite similar semantics, even if their syntax is very different. Their type system is very different.

+

Observe that sadly, many recent major machine learning libraries (such as TensorFlow or Gudhi, both mostly coded in C++) are in practice easier to use in Python than in C++. But you can use TensorFlow or Gudhi from C++ code since TensorFlow and Gudhi are mostly coded in C++ and both provide and document a C++ API (not just a Python one).

+

C++ enables multi-threaded programming, but the usual Python implementation has its GIL, is bytecoded, so is significantly slower than C++ (which is usually compiled by optimizing compilers such as GCC or Clang; however you could find C++ interpreters, e.g. Cling). Some experimental implementations of Python are JIT-compiled and without GIL. But these are not mature: I recommend investing a million euros to increase their TRL.

+

Observe also that C++ is much more difficult to learn than Python. Even with a dozen years of C++ programming experience, I cannot claim to really know most of C++.

+

Sadly, most recent books teaching AI software engineering (e.g. this one or that one) use Python (not C++) for their examples. I actually want more recent AI books using C++ !

+

BTW, I program open source software (like this one, or the obsolete GCC MELT) using AI techniques, but they don't use Python. My approach to AI applications is to start designing some DSL in them.

+

Some AI approaches involve metaprogramming, e.g. generating some (or most, or even all) the code of a system by itself. J.Pitrat (he passed away in October 2019) pioneered this approach. See his blog, his CAIA system, read his Artificial Beings, the conscience of a couscious machine book (ISBN 978-1848211018) and the RefPerSys project (whose ambition is to generate most -and hopefully all- of its C++ code).

+

On operating systems such as Linux you could in practice generate C++ (or C) code at runtime and compile it (using GCC) into a plugin, then later dlopen(3) that generated plugin, and retrieve function pointers by their name using dlsym(3). See the manydl.c example (on a powerful desktop in 2020, you would be able to generate and load half a million of plugins, if you run that example several days). With dladdr(3) and Ian Taylor's libbacktrace, you can also inspect some of the call stack.

+

AFAIK major corporations such as Google use C++ internally for most of their AI-related code. Look also into MILEPOST GCC or the H2020 Decoder project for an application of machine learning techniques to compilers. See also HIPEAC.

+

Of course, you can code AI software in Haskell, in Common Lisp (e.g. with SBCL), or in Ocaml. Many machine learning frameworks can be called from them. Number crunching libraries could use OpenCL.

+",3335,,3335,,9/4/2020 8:46,9/4/2020 8:46,,,,0,,,,CC BY-SA 4.0 +15662,2,,15658,9/29/2019 7:45,,0,,"

Those graphs do not disprove your 'vanishing gradient' theory. The deeper network may eventually do better than the shallower one, but it might take much longer to do it.

+ +

Incidentally, the ReLU activation function was designed to mitigate the vanishing gradient problem.

+",12509,,,,,9/29/2019 7:45,,,,0,,,,CC BY-SA 4.0 +15663,1,15664,,9/29/2019 15:36,,4,405,"

Intuitively, I understand that having an unbiased estimate of a policy is important because being biased just means that our estimate is distant from the truth value.

+ +

However, I don't understand clearly why having lower variance is important. Is that because, in offline policy evaluation, we can have only 'one' estimate with a stream of data, and we don't know if it is because of variance or bias when our estimate is far from the truth value? Basically, variance acts like bias.

+ +

Also, if that is the case, why is having variance preferable to having a bias?

+",30051,,2444,,9/29/2019 21:41,9/29/2019 21:41,Why is having low variance important in offline policy evaluation of reinforcement learning?,,2,0,0,,,CC BY-SA 4.0 +15664,2,,15663,9/29/2019 20:30,,1,,"

Having low variance is important in general as it reduces the number of samples needed to obtain accurate estimates. This is the case for all statistical machine learning, not just reinforcement learning.

+ +

In general, if you are estimating a mean or expected quantity by taking many samples, the variation in the error is proportional to $\frac{\sigma}{\sqrt{N}}$ for a direct arithmetic mean of all samples, and behaves similarly for other averaging approaches (such as recency-weighted means using a learning rate). The bounds on accuracy can be made better by either increasing $N$, i.e. taking more samples, or by decreasing the variance $\sigma^2$.

+ +

So anything you can do to reduce variance in your measurements has a direct consequence of reducing the number of samples required to achieve the same degree of accuracy.

+ +

In the case of off-policy reinforcement learning, there is added variance - compared to on-policy learning - due to different probabilities of taking an action in behaviour and target policies. This is due to the need to adjust reward signals using importance sampling - multiplying by the importance sampling ratio will make the reward signal vary more (in fact it can become unbounded). This is not really any more of a challenge than any other source of variance, but as it interferes with the goal of speedy learning, a lot of research effort has been put into methods that reduce the variance.

+",1847,,,,,9/29/2019 20:30,,,,0,,,,CC BY-SA 4.0 +15665,2,,15663,9/29/2019 21:27,,1,,"

Bias is not necessarily bad, even though the term bias usually has a negative connotation. In fact, in machine learning, inductive bias is quite important and necessary. For example, if you want to learn a function $f(x) = y$, where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, you often just have a finite dataset $\mathcal{D} = \{ (x_i, y_i)\}_{i=1}^N$, which may not contain all possible $(x, y)$ pairs associated with $f$. In that case, $\mathcal{D}$ may not contain enough information to learn $f$, so you need to assume that $f$ behaves in a certain way or that the input and output spaces have certain characteristics. A typical way of dealing with finite datasets is to introduce noise during the learning process (which is a regularization technique).

+ +

However, bias can lead to sub-optimal solutions. For example, you could assume that $f$ is a lot more complex than the function $\hat{f}$ that maps $x_i$ to $y_i$ (of $\mathcal{D}$), for $i=1, \dots, N$. So, to solve this issue, you could introduce a lot of noise, while, in reality, $\hat{f}$ may be extremely similar to $f$, even though not exactly the same, so, in reality, you may not need all this noise.

+ +

Why is low variance desirable? Essentially, while you are learning something, it is easier to learn regular patterns as opposed to more irregular ones. For example, $1, 2, 1, 2, 1, 2$ is a relatively regular sequence compared to $8, 2, 5, 6, 1, 7, 99$, which is thus harder to learn (or memorise) than the former.

+",2444,,2444,,9/29/2019 21:39,9/29/2019 21:39,,,,1,,,,CC BY-SA 4.0 +15666,1,15669,,9/30/2019 2:15,,4,666,"

If a policy is fixed, it is said that a Markov Decision Process (MDP) becomes a Markov Reward Process (MRP).

+

Why is this so? Aren't the transitions and rewards still parameterized by the action and current state? In other words, aren't the transition and reward matrices still cubes?

+

From my current train of thought, the only thing that is different is that the policy is not changing (the agent is not learning the policy). Everything else is the same.

+

How is it that it switches to an MRP, which is not affected by actions?

+

I am reading "Deep Reinforcement Learning Hands-On" by Maxim Lapan, which states this. I have also found this statement in online articles, but I cannot seem to wrap my head around it.

+",30059,,2444,,1/20/2021 22:36,1/20/2021 22:36,Why does having a fixed policy change a Markov Decision Process to a Markov Reward Process?,,1,0,,,,CC BY-SA 4.0 +15667,2,,15621,9/30/2019 4:24,,4,,"

I had a very similar issue as you did with the dimensions. Here's the rundown:

+ +

Every node you see inside the LSTM cell has the exact same output dimensions, including the cell state. Otherwise, you'll see with the forget gate and output gate, how could you possible do an element wise multiplication with the cell state? They have to have the same dimensions in order for that to work.

+ +

Using an example where n_hiddenunits = 256:

+ +
Output of forget gate: 256
+Input gate: 256
+Activation gate: 256
+Output gate: 256
+Cell state: 256
+Hidden state: 256
+
+ +

Now this can obviously be problematic if you want the LSTM to output, say, a one hot vector of size 5. So to do this, a softmax layer is slapped onto the end of the hidden state, to convert it to the correct dimension. So just a standard FFNN with normal weights (no bias', because softmax). Now, also imagining that we input a one hot vector of size 5:

+ +
input size: 5
+total input size to all gates: 256+5 = 261 (the hidden state and input are appended)
+Output of forget gate: 256
+Input gate: 256
+Activation gate: 256
+Output gate: 256
+Cell state: 256
+Hidden state: 256
+Final output size: 5
+
+ +

Those are the final dimensions of the cell.
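To double-check the shapes, here is a rough numpy sketch of a single LSTM step I wrote for illustration (the weights are random and the layout is simplified); it just shows that every gate and the cell/hidden state share the same size, using the numbers assumed above:

import numpy as np

n_hidden, n_input = 256, 5
x = np.zeros(n_input)      # one-hot input of size 5
h = np.zeros(n_hidden)     # previous hidden state
c = np.zeros(n_hidden)     # previous cell state

z = np.concatenate([h, x])  # total input to all gates: 256 + 5 = 261

def gate(act):
    W = np.random.randn(n_hidden, n_hidden + n_input)
    b = np.random.randn(n_hidden)
    return act(W @ z + b)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
f, i, o = gate(sigmoid), gate(sigmoid), gate(sigmoid)  # forget, input, output gates
g = gate(np.tanh)                                      # activation ("candidate") gate

c = f * c + i * g            # all element-wise, so shapes must match: (256,)
h = o * np.tanh(c)           # hidden state: (256,)

W_out = np.random.randn(5, n_hidden)  # projection from hidden state to final output
y = W_out @ h                # final output size: 5 (a softmax would go here)
print(f.shape, c.shape, h.shape, y.shape)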

+",26726,,,,,9/30/2019 4:24,,,,4,,,,CC BY-SA 4.0 +15668,1,,,9/30/2019 4:49,,1,61,"

I wanted to know about Intelligent Reflecting Surface (IRS) technology. What are the applications of IRS in wireless communication? What are its competitive advantages over existing technologies?

+",28346,,,,,9/30/2019 4:49,Intelligent reflecting surface,,0,0,,,,CC BY-SA 4.0 +15669,2,,15666,9/30/2019 6:56,,1,,"
+

If a policy is fixed, it is said that an MDP becomes an MRP.

+
+ +

I would change the phrasing slightly here, to:

+ +

If a policy is fixed, an MDP can be accurately modeled as an MRP.

+ +
+

Why is this so? Aren't the transitions and rewards still parameterized by the action and current state? In other words, aren't the transition and reward matrices still cubes?

+
+ +

The transition and reward matrices remain the same in the MDP, but it is possible to flatten them into an equivalent MRP, because in terms of observations of next state and reward, the action that is taken is just part of the transition rules - if the policy is fixed, then so are all the probabilities for next state and reward.

+ +

More concretely, if you have an MDP with $|\mathcal{A}|$ transition matrices $P_{ss'}^a$ and a fixed policy $\pi(a|s)$, then you can create a combined transition matrix with a sum:

+ +

$$P_{ss'} = \sum_{a \in \mathcal{A}} \pi(a|s) P_{ss'}^a$$

+ +

and you can similarly reduce the reward function. Once you have done so you have data that describe an MRP.
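To make the flattening concrete, here is a small numpy sketch of my own (the numbers are made up) for an MDP with 2 states and 2 actions:

import numpy as np

# MDP dynamics: P[a, s, s'] and expected rewards R[s, a] (made-up numbers).
P = np.array([[[0.8, 0.2],    # action 0
               [0.1, 0.9]],
              [[0.5, 0.5],    # action 1
               [0.3, 0.7]]])
R = np.array([[1.0, 0.0],     # R[s, a]
              [2.0, 5.0]])

# Fixed policy pi[s, a].
pi = np.array([[0.9, 0.1],
               [0.4, 0.6]])

# MRP transition matrix: P_mrp[s, s'] = sum_a pi(a|s) * P[a, s, s'].
P_mrp = np.einsum('sa,asn->sn', pi, P)   # rows still sum to 1
# MRP reward function: expected reward per state under the policy.
R_mrp = (pi * R).sum(axis=1)
print(P_mrp, R_mrp)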

+ +
+

How is it that it switches to an MRP which is not affected by actions?

+
+ +

If the MDP represents a real system where actions are still being taken by an agent, then of course those are still present within the system, and still affect it. The difference is that if you know the agent's policy, then the action choice is predictable, and the MRP representation covers the full definition of probabilities of observed state transitions and rewards.

+",1847,,1847,,9/30/2019 8:42,9/30/2019 8:42,,,,0,,,,CC BY-SA 4.0 +15670,2,,1462,9/30/2019 11:47,,1,,"
    +
  • Simply put, AI is the software and the robot is its body.
  • +
+ +

This is because the algorithms we commonly think of as AI come in the form of software, whereas when we talk about robots, we're talking about physical automation.

+ +

In an automobile manufacturing process where automation is used, the software makes the decisions on what physical action the robot arm should take at any given time.

+",30064,,1671,,9/30/2019 23:41,9/30/2019 23:41,,,,0,,,,CC BY-SA 4.0 +15675,2,,9813,10/1/2019 0:18,,0,,"

My sense is that this would require a statistical approach with a large dataset.

+ +

The algorithm would need to ""translate"" the slang into formal terms (discrete words or phrases expressing a single concept.)

+ +

The trick would be vetting the algorithm's decisions, which would require a sufficient sample of humans to evaluate the given translation for each novel instance of slang. (This would likely require some form of crowdsourcing, similar to Captcha.)

+ +

This would determine whether 4😂 ⇔ 5😂 ⇔ 6😂, i.e. whether 4x, 5x and 6x of the symbol are equivalent, and whether spacing between the emojis is meaningful.

+ +

Most likely these would be fuzzy associations in that the same slang can be interpreted differently by different people, and the meaning can vary when used in different contexts:

+ +

😂😂😂😂😂 could mean ""I'm laughing super-hard because what you say is so absurdly incorrect."" [Adversarial]

+ +

😂😂😂😂😂 could mean ""I'm laughing super-hard because the joke is extremely funny."" [Cooperative]

+ +

Informally, my experience of 5😂 has always been adversarial, but that could be a function of the contexts in which I've encountered it, which reinforces the need for a large sample.

+ +

It occurs to me that you could reduce the sample size by using a friendly chatbot that parses social media posts for any symbolic information that is non-standard, then queries the posters asking for clarification. (This way, you'd get the intent of the slang from the person using it, as opposed to the interpretations of those viewing it.)

+ +

For informal text (as opposed to emojis), the algorithm would want to be able to distinguish between intentional and unintentional misspellings.

+",1671,,1671,,10/1/2019 0:53,10/1/2019 0:53,,,,0,,,,CC BY-SA 4.0 +15676,1,15689,,10/1/2019 2:59,,1,315,"

People say embeddings are necessary in NLP because using just the word indices is not efficient: similar words are supposed to be related to each other, but indices alone don't express that. However, I still don't truly get why.

+ +

The subword-based embedding (aka syllable-based embedding) is understandable, for example:

+ +
biology   --> bio-lo-gy
+biologist --> bio-lo-gist
+
+ +

For the 2 words above, when turning them into syllable-based embeddings, it's good because the 2 words will be related to each other due to the shared syllables: bio and lo.

+ +

However, it's hard to understand the autoencoder approach: it turns an index value into a vector, then feeds these vectors to a DNN. The autoencoder can turn vectors back into words too.

+ +

How does the autoencoder make words related to each other?

+",2844,,2844,,10/1/2019 9:16,10/2/2019 9:09,"Why is embedding important in NLP, and how does autoencoder work?",,2,0,,,,CC BY-SA 4.0 +15678,2,,15676,10/1/2019 9:15,,1,,"

The subword-based embedding is rather visual and easily understandable. The autoencoder embedding, however, is how machines capture the componential meaning of words.

+ +

1) An autoencoder embedding layer can be trained together with other layers to fit the relations in the dataset.

+ +

2) Or the embedding layer can be kept unchanged and used as a fixed function, provided that it has already been trained on a similar task, as said in 1).

+ +

And as stated in the question, embedding is important in NLP because word indices don't convey the meanings of words; they must be expanded into embedding vectors for better efficiency.

+ +

TensorFlow random uniform trainable embedding: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding

+ +

TensorFlow utility class for creating subword-based encoder: https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/SubwordTextEncoder
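As a minimal sketch of case 1) above (my own example, not taken from those docs), the embedding layer can simply be the first layer of a Keras model and get trained jointly with everything after it:

import tensorflow as tf

vocab_size, embed_dim = 10000, 64

model = tf.keras.Sequential([
    # Maps each word index to a dense vector; these vectors are learned
    # together with the layers below, so related words end up with related vectors.
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])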

+",2844,,2844,,10/2/2019 5:23,10/2/2019 5:23,,,,0,,,,CC BY-SA 4.0 +15681,1,15682,,10/1/2019 17:30,,1,62,"

The author explains in 2.2 Action-Value Methods:

+ +
+

To roughly assess the relative effectiveness of the greedy and $\varepsilon $-greedy methods, we compared them numerically on a suite of test problems. This is a set of 2000 randomly generated n-armed bandit tasks with n = 10. For each action, a, the rewards were selected from a normal (Gaussian) probability distribution with mean Q*(a) and variance 1. The 2000 n-armed bandit tasks were generated by reselecting the Q*(a) 2000 times, each according to a normal distribution with mean 0 and variance 1. Averaging over tasks, we can plot the performance and behavior of various methods as they improve with experience over 1000 plays, as in Figure 2.1. We call this suite of test tasks the 10-armed testbed.

+ +

+
+ +

But doing my best, my replication yields something nearer to:

+ +

+ +

I think I am misunderstanding how the author took the averages.

+ +

Here is my code:

+ +
from math import exp
+
+import numpy
+import matplotlib.pyplot as plt
+
+
+def act(action, Qstar):
+    return numpy.random.normal(Qstar[action], 1)
+
+
+def run(epsilon):
+    history = [0 for i in range(1000)]
+
+    for task in range(1, 2000):
+        Qstar = [numpy.random.normal(0, 1) for i in range(10)]
+        Q = [0 for i in range(10)]
+        for t in range(1, 1001):
+            if numpy.random.randint(0, 100) < epsilon:
+                action = numpy.random.randint(0, len(Q))
+            elif t == 0:
+                action = 0
+            else:
+                averages = [q/t for q in Q]
+                action = averages.index(max(averages))
+
+            reward = act(action, Qstar)
+            Q[action] += reward
+
+            history[t-1] += reward
+
+    return [elem/2000 for elem in history]
+
+
+if __name__ == '__main__':
+    plt.plot(run(10), 'b', label=""ɛ=0.1"")
+    plt.plot(run(1), 'r', label=""ɛ=0.01"")
+    plt.plot(run(0), 'g', label=""ɛ=0"")
+    plt.xlabel('Plays')
+    plt.ylabel('Reward')
+    plt.legend()
+    plt.show()
+
+",14892,,,,,10/1/2019 19:54,"Unable to replicate Figure 2.1 from ""Reinforcement Learning: An Introduction""",,1,0,,,,CC BY-SA 4.0 +15682,2,,15681,10/1/2019 19:48,,1,,"

You are calculating the average reward for each action (i.e. bandit arm) incorrectly. You cannot calculate this simply with a list comprehension, and you need to keep a second list storing the number of times each action was taken.

+ +

The correct calculation is to divide the total reward obtained from each action by the number of times that action was taken. You are dividing each action's total by the number of plays taken so far. Adding:

+ +
...
counts = [0 for i in range(10)]  # number of times each action has been taken
...
else:
    averages = []
    for i in range(0, 10):
        # estimate = total reward for this action / number of times it was taken
        averages.append(Q[i]/counts[i] if counts[i] > 0 else 0)
    ...
...
counts[action] += 1  # record that this action was taken once more
+
+ +

results in working code, and I can generate this graph using 200 samples per method, which looks like a noisier version of the one in the figure you referenced:

+ +

+",16909,,16909,,10/1/2019 19:54,10/1/2019 19:54,,,,0,,,,CC BY-SA 4.0 +15683,1,,,10/1/2019 20:59,,1,128,"

I'm interested in learning about Neural Networks and implementing them. I'm particularly interested in GANs and LSTM networks.

+ +

I understand perceptrons and basic Neural Network configuration (sigmoid activation, weights, hidden layers etc). But what topics do I need to learn, in order, to get to the point where I can implement a GAN or LSTM?

+ +

I intend to make an implementation of each in C++ to prove to myself that I understand. I haven't got a particularly good math background, but I understand most math-things when they are explained.

+ +

For example, I understand backpropagation, but I don't really understand it. I understand how reinforcement learning is used with backpropagation, but not fully how you can have things like training without datasets (like TD-Gammon). I don't quite understand CNNs, especially why you might choose a particular architecture.

+ +

If there was a book or website or something for each ""topic"", it would be great.

+",2819,,,,,11/6/2019 12:06,What order should I learn about Neural Networks?,,2,8,,5/16/2020 23:10,,CC BY-SA 4.0 +15684,2,,7853,10/2/2019 0:01,,0,,"

Your description sounds similar to the ImageNet dataset. According to this website, the state-of-the-art top-1 accuracy is just 86%, not much higher than yours. There are plenty of methods to improve accuracy. I would suggest you read the papers or GitHub repositories listed in the SOTA to find ideas that best fit your situation.

+",18276,,,,,10/2/2019 0:01,,,,0,,,,CC BY-SA 4.0 +15685,1,22747,,10/2/2019 1:35,,2,103,"

What I did:
+Created a population of 2D legged robots in a simulated environment. Found the best motor rotation values to make the robots move rightward, using an objective function with Differential Evolution (could use PSO or GA too), that returned the distance moved rightward. Gradient descent used for improving fitness.

+ +

What I want to do:
+Add more objectives. To find the best motor rotation, with the least motion possible, with the least jittery motion, without toppling the body upside down and making the least collision impact on the floor.

+ +

What I found:

+ +
    +
  • Spent almost two weeks searching for solutions, reading research +papers, going through tutorials on Pareto optimality, installing +libraries and trying the example programs.

  • +
  • Using pairing functions to create a cost function wasn't good +enough.

  • +
  • There are many multi-objective PSO, DE, GA etc., but they seem +to be built for solving some other kind of problem.

  • +
+ +

Where I need help:

+ +
    +
  • Existing multi objective algorithms seem to use some pre-existing +minimization and maximization functions (Fonseca, Kursawe, OneMax, +DTLZ1, ZDT1, etc.) and it's confusing to understand how I can use my +own maximization and minimization functions with the libraries. +(minimize(motorRotation), maximize(distance), +minimize(collisionImpact), constant(bodyAngle)).

  • +
  • How do I know which is the best Pareto front to choose in a +multi-dimensional space? There seem to be ways of choosing the +top-right Pareto front or the top-left or the bottom-right or +bottom-left. In multi-dimensional space, it'd be even more varied.

  • +
  • Libraries like Platypus, PyGMO, Pymoo etc. just define the problem using +problem = DTLZ2(), instantiate an algorithm algorithm = +NSGAII(problem) and run it algorithm.run(10000), where I assume +10000 is the number of generations. But since I'm using a legged +robot, I can't simply use run(10000). I need to assign motor values +to the robots, wait for the simulator to make the robots in the +population move and then calculate the objective function cost. How +can I achieve this?

  • +
  • Once the pareto optimal values are found, how is it used to create a +cost value that helps me determine the fittest robot in the +population?

  • +
+",9268,,9268,,10/2/2019 2:34,7/29/2020 16:29,How to calculate multiobjective optimization cost for ordinary problems?,,1,0,,,,CC BY-SA 4.0 +15687,2,,7853,10/2/2019 5:01,,0,,"

I can't comment but here are a few suggestions:

+ +
    +
  • Play with lr and lr finder
  • +
  • Use a pretrained model
  • +
  • Use architecture search or use something like efficientnet b6
  • +
  • Use swish over relu
  • +
  • Try different optimizers
  • +
  • Try Bayesian Optimization
  • +
+",28538,,,,,10/2/2019 5:01,,,,0,,,,CC BY-SA 4.0 +15688,1,,,10/2/2019 5:28,,4,352,"

I recently read a paper on community detection in networks. In the paper EdMot: An Edge Enhancement Approach for Motif-aware Community Detection, the authors consider the ""lower-order structure"" of the network at the level of individual nodes and edges. And they mention some ""higher-order structure"" methods. The point is, what is the exact meaning (definition) of lower- and higher-order structure in a network?

+",26886,,2444,,10/2/2019 17:16,5/25/2020 20:04,"What are the exact meaning of ""lower-order structure"" and ""higher-order structure"" in this paper?",,1,0,,,,CC BY-SA 4.0 +15689,2,,15676,10/2/2019 9:09,,1,,"

The information you are probably missing is that word embeddings are learned on the basis of context. For example, you might try to predict a vector for a word from the word vectors of the other words in the same sentence.

+ +

This way, word vectors of words that occur in similar contexts will turn out to be similar. You can think of it as word vectors not encoding the words themselves but the contexts in which they are used. Of course, ultimately that is the same.
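Here is a toy numpy illustration of that idea (my own, not a trained embedding model): plain co-occurrence counts within a window already give similar vectors to words that appear in similar contexts.

import numpy as np

corpus = ["the cat sat on the mat",
          "the dog sat on the rug",
          "the car drove on the road"]

vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences within a +/-2 word window.
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                counts[idx[w], idx[words[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "cat" and "dog" share contexts (the, sat, on); "car" shares fewer.
print(cosine(counts[idx["cat"]], counts[idx["dog"]]))
print(cosine(counts[idx["cat"]], counts[idx["car"]]))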

+",2227,,,,,10/2/2019 9:09,,,,0,,,,CC BY-SA 4.0 +15693,1,15698,,10/2/2019 14:32,,8,486,"

In a single agent environment, the agent takes an action, then observes the next state and reward:

+ +
for ep in num_episodes:
+    action = dqn.select_action(state)
+    next_state, reward = env.step(action)
+
+ +

Implicitly, the loop for moving the simulation (env) forward is embedded inside the env.step() function.

+ +

Now in the multiagent scenario, agent 1 ($a_1$) has to make a decision at time $t_{1a}$, which will finish at time $t_{2a}$, and agent 2 ($a_2$) makes a decision at time $t_{1b} < t_{1a}$ which is finished at $t_{2b} > t_{2a}$.

+ +

If both of their actions would start and finish at the same time, then it could easily be implemented as:

+ +
for ep in num_episodes:
+    action1, action2 = dqn.select_action([state1, state2])
+    next_state_1, reward_1, next_state_2, reward_2 = env.step([action1, action2])
+
+ +

because the env can execute both in parallel, wait till they are done, and then return the next states and rewards. But in the scenario that I described previously, it is not clear how to implement this (at least to me). Here, we need to explicitly track time and check at each time point whether an agent needs to make a decision. Just to be concrete:

+ +
for ep in num_episodes:
    for t in total_time:
        action1 = dqn.select_action(state1)
        env.step(action1)  # this step might take 5t to complete,
        # so the step() function won't return the reward till 5t later.
        # In the meantime, agent 2 comes and has to make a decision;
        # its reward and next state won't be observed till 10t later.
+
+ +

To summarize, how would one implement a multi-agent environment with asynchronous actions/rewards per agent?

+",22943,,2444,,10/3/2019 2:59,10/3/2019 2:59,How would one implement a multi-agent environment with asynchronous action and rewards per agent?,,1,0,,,,CC BY-SA 4.0 +15694,2,,12034,10/2/2019 14:44,,5,,"

No, AiAngel is not a bot. It's Rogue's software that changes his voice, along with facial recognition software which tracks his movement and copies it to the avatar.

+ +

That being said, he has created a very entertaining channel and line of work for himself. By looking at the videos, you can see that he is a true genius at work. He single-handedly wears all the hats for that project, from hardware infrastructure development to software design to graphic art to acting and video editing. Truly one of the great minds of this era.

+ +

You can find Ai Angel (Angelica) on Rogue Shadow's

+ + + +

..along with several other platforms.

+ +

Hope that answers your question with certainty (instead of philosophically).

+",30115,,30115,,4/16/2020 19:32,4/16/2020 19:32,,,,3,,,,CC BY-SA 4.0 +15695,1,,,10/2/2019 17:30,,2,106,"

I'm looking for examples of AI systems or agents that best represent these five characteristics (one example for each characteristic):

+ +
    +
  • Reactivity
  • +
  • Proactivity
  • +
  • Adaptability
  • +
  • Sociability
  • +
  • Autonomy
  • +
+ +

It would be better if it's a machine learning-based application.

+",30118,,2444,,10/3/2019 1:50,10/3/2019 1:50,What are the examples of agents that is represent these characteristics?,,0,1,,,,CC BY-SA 4.0 +15698,2,,15693,10/2/2019 19:26,,2,,"

The cleanest solution from a theoretical point of view is to switch over to a hierarchical framework, some framework that supports temporal abstraction. My favourite one is the options framework as formalised by Sutton, Precup and Singh.

+ +

The basic idea is that the things that you consider ""actions"" for your agents become ""options"", which are ""large actions"" that may take more than a single primitive time step. When an agent selects an option, it will go on ""auto-pilot"" and keep selecting primitive actions at the more primitive, fine-grained timescale as dictated by the last selected option, until that option has actually finished executing. In your case, you could:

+ +
    +
  • implement the first ""primitive action"" of an option to immediately apply all effects to the state, and append a sequence of ""no-op"" actions afterwards to make sure the option actually has a longer duration than a single primitive timestep, OR
  • +
  • implement the very last primitive action of an option to actually apply changes to the state, and prepend a sequence of ""no-op"" actions in front of it to make the option take more time, OR
  • +
  • something in between (i.e. actually make partial changes to the state visible during the execution of the option).
  • +
+ +

Since all legal choices for agents in your scenario appear to be options, i.e. you do not allow agents to select primitive actions at the more fine-grained timescale, you would only have to implement ""inter-option"" learning in your RL algorithms; there would be no need for ""intra-option"" learning.

+ +

In practice, if you only have a small number of agents and have options that take relatively large amounts of time, you don't have to actually loop through all primitive time-steps. You could, for example, compute the primitive timestamps at which ""events"" should be executed in advance, and insert these events to be processed into an event-handling queue based on these timestamps. Then you can always just skip through to the next timestamp at which an event needs handling. With ""events"" I basically mean all timesteps at which something should happen, e.g. timesteps where an option ends and a new option should be selected by one or more agents. Inter-option Reinforcement Learning techniques are basically oblivious to the existence of a more fine-grained timescale, and they only need to operate at precisely these decision points where one option ends and another begins.
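As a rough sketch of that event-queue idea (entirely my own, with invented stub names, not a reference implementation), each entry records the time at which some agent's current option finishes and a new one must be chosen:

import heapq
import random

class StubAgent:
    def select_option(self, state):
        return random.choice(["walk", "wait"])   # placeholder decision rule

def option_duration(option):
    return 5.0 if option == "walk" else 2.0      # how long each option takes

agents = [StubAgent(), StubAgent()]
events = [(0.0, 0), (0.0, 1)]   # (time an agent must pick a new option, agent id)
heapq.heapify(events)

t_end = 20.0
while events:
    t, i = heapq.heappop(events)
    if t >= t_end:
        break
    option = agents[i].select_option(state=None)  # state lookup omitted in this stub
    # The agent goes "on auto-pilot" until the option finishes; only then does it
    # need to decide again, so we jump straight to that timestamp.
    heapq.heappush(events, (t + option_duration(option), i))
    print(f"t={t:4.1f}: agent {i} starts option '{option}'")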

+",1641,,,,,10/2/2019 19:26,,,,0,,,,CC BY-SA 4.0 +15699,1,15700,,10/2/2019 19:42,,2,240,"

By definition, every state in RL has Markov property, which means that the future state depends only on the current state, not the past states.

+ +

However, I saw that in some cases we can define a state to be the history of observations and actions taken so far, such as $s_t = h_t = o_1a_1\dots o_{t-1}a_{t-1}o_t$. I think maze solving can be of that case since the current state, or the current place in a maze, clearly depends on which places the agent has been and which ways the agent has taken so far.

+ +

Then it seems that the future states naturally depend on the past states and the past actions as well. What am I missing?

+",30051,,30051,,10/2/2019 20:04,10/3/2019 4:11,Markov property in maze solving problem in reinforcement learning,,1,0,,,,CC BY-SA 4.0 +15700,2,,15699,10/2/2019 20:32,,3,,"

Hi Hunnam and welcome to our community!

+ +
+

By definition, every state in RL has Markov property, which means that the future state depends only on the current state, not the past states.

+
+ +

No, this is not exactly correct. We can use RL to solve problems with the Markov property exactly because the current state is a sufficient statistic of the future. In other words, the state encodes the distribution of future states.

+ +

Note that the state isn't necessarily the observations. As you point out in the next paragraph:

+ +
+

However, I saw that in some cases we can define a state to be the history of observations and actions taken so far, such as $s_t = h_t = o_1a_1\dots o_{t-1}a_{t-1}o_t$.

+
+ +

At times we can use the history to represent the state. The history can be a series of observations.

+ +
+

I think maze solving can be of that case since the current state, or the current place in a maze, clearly depends on which places the agent has been and which ways the agent has taken so far.

+
+ +

This isn't correct in the general case. Given a maze which you know how to solve, regardless of where you start, you know how to reach the exit. This is the Markov property. Given the current position, you have enough information to make a certain and optimal decision.

+ +

Perhaps an example of a situation where the history is necessary will help illustrate the differences.

+ +

Suppose you are playing Pong. If you take a single frame, it doesn't contain enough information to know the direction of the ball. Therefore the observations alone are insufficient. What if you remember the previous frame? Then combining the two observations gives you all the information you need in order to make an optimal move.

+",28538,,28538,,10/3/2019 4:11,10/3/2019 4:11,,,,2,,,,CC BY-SA 4.0 +15701,1,15702,,10/3/2019 0:55,,6,817,"

I am working on a project for my artificial intelligence class. I was wondering if I have 2 admissible heuristics, A and B, is it possible that A does not dominate B and B does not dominate A? I am wondering this because I had to prove if each heuristic is admissible and I did that, and then for each admissible heuristic, we have to prove if each one dominates the other or not. I think I have a case that neither dominates the other and I was wondering if maybe I got the admissibility wrong because of that.

+",30124,,2444,,11/10/2019 16:44,11/11/2019 15:44,Can two admissable heuristics not dominate each other?,,1,0,0,,,CC BY-SA 4.0 +15702,2,,15701,10/3/2019 1:40,,4,,"

This is possible. Admissibility only asserts that the heuristic will never overestimate the true cost. With that being said, it is possible for one heuristic in some cases to do better than another and vice-versa. Think of it as a game of rock paper scissors.

+ +

Specifically, you may find that sometimes $h_1 < h_2$ and in other times $h_2 < h_1$, where $h_1$ and $h_2$ are admissible heuristics. Thus, by definition, neither strictly dominates the other.

+ +

In fact, there is a way to ""combine"" the two admissible heuristics to get the best of both using:

+ +

$$h_3 = \max(h_1, h_2)$$
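As a tiny sketch of that combination (my own illustration; the two heuristics are just stand-ins):

# Two admissible heuristics for some hypothetical problem (stand-ins for illustration).
h1 = lambda node: abs(node - 10)   # e.g. a distance-style estimate
h2 = lambda node: 0                # the trivial admissible heuristic

def h3(node):
    # The max of admissible heuristics is still admissible (neither overestimates,
    # so their max doesn't either), and it dominates both.
    return max(h1(node), h2(node))

print(h3(3))   # 7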

+",28343,,2444,,11/11/2019 15:44,11/11/2019 15:44,,,,1,,,,CC BY-SA 4.0 +15704,1,,,10/3/2019 13:38,,1,76,"

I currently use an object detector to detect an object and specific parts of it (a crop and its stem). Such a detector is not the best choice for detecting parts that could be represented by a point (typically a stem), so I'm planning on moving to a keypoint detector.

+ +

After reading the literature, it appears that there are many solutions. I'm particularly interested in using an hourglass network to predict a set of heatmaps abstracting the keypoint positions.

+ +

The problem is that many of the existing frameworks are dedicated to human pose estimation (for instance OpenPose) but I don't need all this complexity.

+ +

My best choice for now is this framework, which is a TensorFlow implementation of Hourglass Networks, but it is still too specific to human pose estimation.

+ +

Do you have any suggestions for frameworks that best suit my application, i.e. simple keypoint detection?

+",19859,,,,,10/3/2019 13:38,Looking or the simplest framework to train keypoint detector,,0,0,,,,CC BY-SA 4.0 +15706,1,,,10/3/2019 15:00,,1,68,"

I am learning how to use TensorFlow without Keras, just to make sure I understand TensorFlow directly.

+ +

I created a spiral-looking dataset with 100 points of each class (200 total), and I created a neural network to classify it. For the life of me I can't figure out how to get a good accuracy. Do you mind looking at my code to see what I did wrong?

+ +

From what I've gleaned from various forums, it seems like if I use 4 hidden layers and 14 neurons per layer, I should be able to perfectly separate this dataset. I tried with a learning rate of 0.01 and 20k epochs.

+ +

I've tried different combinations of activation functions (tanh, sigmoid, relu, even alternating between them), but the best that I've gotten is around 60 percent, whereas people with fewer layers and fewer neurons have gotten close to 90 percent.

+ +

What I did NOT do is to create additional features (for example, r and theta), and this was intentional. I'm just curious to see if I can do this by looking at x and y alone.

+ +

The code is pasted below, and it includes the code to create the data (and it plots the data).

+ +

Thank you in advance!

+ + + +
import numpy as np;
+import matplotlib.pyplot as plt;
+import tensorflow as tf;
+import random;
+
+# Part A
+# Gather data and plot
+def chooseTrainingBatch(X, Y, n, k):
+    indices = range(0,n)
+    chosenIndices = random.choices(indices,k=k)
+    batchX = X[chosenIndices, :]
+    batchY = Y[chosenIndices, :]
+    return (batchX, batchY)
+
+
+def doLinearClassification(n, learning_rate=1, epochs=20, num_hidden_layer_1=100, num_of_layers=4, batch_size = 20):
+    d = 1;
+    plt.figure();
+    X = np.zeros([2*n, 2]);
+    Y = np.zeros([2*n, 2]);
+    for t in np.arange(1,n+1,d):
+        r1 = 50 + 0.2*t;
+        r2 = 30 + 0.4*t;
+        phi1 = -0.06*t + 3;
+        phi2 = -0.08*t + 2;
+
+        x1 = r1 * np.cos(phi1);
+        y1 = r1 * np.sin(phi1);
+        x2 = r2 * np.cos(phi2);
+        y2 = r2 * np.sin(phi2);
+
+        plt.scatter(x1, y1, c='b');
+        plt.scatter(x2, y2, c='r');
+
+        X[t-1,0] = x1;
+        X[t-1,1] = y1;
+        Y[t-1,0] = 1;
+
+        X[n+t-1,0] = x2;
+        X[n+t-1,1] = y2;
+        Y[n+t-1,1] = 1;
+
+
+    # declare the training data placeholders
+    x = tf.placeholder(tf.float32, [None, 2])
+    y = tf.placeholder(tf.float32, [None, 2])
+
+    # Weights connecting the input to the hidden layer 1
+    W1 = tf.Variable(tf.random_normal([2, num_hidden_layer_1]))
+    b1 = tf.Variable(tf.random_normal([num_hidden_layer_1]))
+
+    # activation of hidden layer 1
+    hidden_1_out = tf.nn.relu(tf.add(tf.matmul(x, W1), b1))
+
+    last_hidden_out = hidden_1_out
+    for i in range(1,num_of_layers):    
+        # weights connecting the hidden layer i-1 to the hidden layer i
+        next_W = tf.Variable(tf.random_normal([num_hidden_layer_1, num_hidden_layer_1]))
+        next_b = tf.Variable(tf.random_normal([num_hidden_layer_1]))
+
+        # activation of hidden layer
+        if (i%2 == 0):
+            next_hidden_out = tf.nn.tanh(tf.add(tf.matmul(last_hidden_out, next_W), next_b))
+        else:    
+            next_hidden_out = tf.nn.tanh(tf.add(tf.matmul(last_hidden_out, next_W), next_b))
+
+        # update for next loop
+        last_hidden_out = next_hidden_out
+
+    # and the weights connecting the last hidden layer to the output layer
+    W_end = tf.Variable(tf.random_normal([num_hidden_layer_1, 2]))
+    b_end = tf.Variable(tf.random_normal([2]))
+
+    # activation of output layer
+    y_ = tf.nn.sigmoid(tf.add(tf.matmul(last_hidden_out, W_end), b_end))
+
+    # loss function
+    y_clipped = tf.clip_by_value(y_, 1e-10, 0.9999999)
+    cross_entropy = -tf.reduce_mean(tf.reduce_sum(y * tf.log(y_clipped) + (1 - y) * tf.log(1 - y_clipped), axis=1))
+
+    # add an optimiser
+    optimiser = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
+
+    # finally setup the initialisation operator
+    init_op = tf.global_variables_initializer()
+
+    # define an accuracy assessment operation
+    correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
+    blah = tf.add(tf.matmul(last_hidden_out, W_end), b_end)
+    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
+
+    # For the purpose of separating dataset, testing data is the entire set
+    testingX = X;
+    testingY = Y;
+
+    # start the session
+    with tf.Session() as sess:
+       # initialise the variables
+       sess.run(init_op)
+       for epoch in range(epochs):
+            this_cost = 0
+            trainingX, trainingY = chooseTrainingBatch(X, Y, n, batch_size)
+            _, c = sess.run([optimiser, cross_entropy], feed_dict={x: trainingX, y: trainingY})
+            this_cost += c
+            print(""Epoch:"", (epoch + 1), ""cost ="", ""{:.3f}"".format(this_cost))
+
+       #debugging statements
+       #this_y = sess.run(y, feed_dict={x: testingX, y: testingY})        
+       #print(""y"")
+       #print(this_y)
+       #this_blah = sess.run(blah, feed_dict={x: testingX, y: testingY})
+       #print(""blah"")
+       #print(this_blah)
+       #this_y_ = sess.run(y_, feed_dict={x: testingX, y: testingY})
+       #print(""y_"")
+       #print(this_y_)
+       this_accuracy = sess.run(accuracy, feed_dict={x: testingX, y: testingY})
+       print(""Accuracy: "", this_accuracy)
+       return this_accuracy
+
+
+# this is the real thing
+#doLinearClassification(100, learning_rate = 0.01, epochs=20000, num_hidden_layer_1=14, num_of_layers=4, batch_size=20);
+
+# this is just to debug the code
+doLinearClassification(20, learning_rate = 0.01, epochs=200, num_hidden_layer_1=14, num_of_layers=4, batch_size=2);
+
+",30134,,,,,10/3/2019 15:00,"Trying to separate spiral data with neural network, learning tensorflow",,0,1,,,,CC BY-SA 4.0 +15707,1,,,10/3/2019 16:14,,2,27,"

Is it possible to update the weights of a vanilla transformer model using counterexamples alongside examples?

+ +

For example, from the PAWS data set, given the phrases ""Although interchangeable, the body pieces on the 2 cars are not similar."" and ""Although similar, the body parts are not interchangeable on the 2 cars."" we have the label 0 because it is a counterexample, whereas for the phrases ""Katz was born in Sweden in 1947 and moved to New York City at the age of 1."" and ""Katz was born in 1947 in Sweden and moved to New York at the age of one."" we have the label 1 because it is a positive example of a valid paraphrase.

+ +

My goal is to use the transformer model to generate paraphrases, and I am attempting to build a GAN but could not find any references for updating the transformer text-to-text model using counterexamples.

+",29999,,2444,,11/1/2019 3:06,11/1/2019 3:06,How to train a transformer text-to-text model on counterexamples?,,0,0,,,,CC BY-SA 4.0 +15708,1,15709,,10/3/2019 18:50,,1,287,"

A stationary policy is a function that maps a state to a probability distribution of actions.

+ +

In a contextual bandit problem, a state itself does not include the history. But in a reinforcement learning problem, the history can be used to define a state. In this case, does the history include the rewards revealed thus far as well? If so, the policy is not stationary anymore I guess.

+ +

As the title says, I think I am confused about the difference in the definitions of stationary (and/or non-stationary) policy between reinforcement learning and contextual bandit.

+",30051,,2444,,10/3/2019 18:56,10/3/2019 20:25,What is the difference between the definition of a stationary policy in reinforcement learning and contextual bandit?,,1,4,,,,CC BY-SA 4.0 +15709,2,,15708,10/3/2019 20:00,,3,,"
+

What is the difference between the definition of a stationary policy in reinforcement learning and contextual bandit?

+
+ +

There is no difference. A policy decides which action to take in each state. This is usually split into deterministic policies of the form $\pi(s) : \mathcal{S} \rightarrow \mathcal{A}$ and stochastic policies of the form $\pi(a|s) : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}, \text{Pr}[A_t=a |S_t=s]$.

+ +

When we say that a policy is stationary, it means that the mapping from state to action - or distribution of actions - does not change over the time that we are interested in. This definition applies equally in the Reinforcement Learning (RL) and Contextual Bandit settings.

+ +
+

But in a reinforcement learning problem, the history can be used to define a state.

+
+ +

It can be used this way, but is not required. What is required is the Markov property i.e. that identifying the state also determines all the allowed transitions and rewards, plus their probabilities of occurring given an action.

+ +
+

In this case, does the history include the rewards revealed thus far as well?

+
+ +

If those can affect the future state transitions and rewards yes. If they generally do not, then you would usually exclude them from the state description.

+ +
+

If so, the policy is not stationary anymore I guess.

+
+ +

What you are proposing is a system that changes its state depending on rewards seen so far. As above, this is not necessary to define an RL problem, but it is allowed in RL. In contrast, in a contextual bandit you assume no rules apply to state transitions, and this is a key difference between the contextual bandit and RL settings.

+ +

The policy is stationary if its mapping rules remain unchanged.

+ +

Your proposed addition to the state does not require changing the policy. The policy is a function of the state. It may choose a different action depending on this historical aspect of the state, but it can still remain a fixed function - its input may change, but the policy function itself does not need to change to account for that.

+",1847,,1847,,10/3/2019 20:25,10/3/2019 20:25,,,,0,,,,CC BY-SA 4.0 +15710,1,,,10/3/2019 20:08,,3,360,"

As far as I can tell (correct me if I'm wrong), Alphazero (with MCTS and neural network heuristic function RL) is the state of the art training method for turn based, deterministic, perfect information, complete information, two player, zero sum games.

+ +

But what is the state of the art for turn-based, imperfect information games that have 2 players, complete information, and are zero-sum? (Deterministic or stochastic.) Examples include Battleship and most 2-player card games.

+ +

Are there standard games, or other tests by which this is measured? Is the criteria I offered for type of game not specific enough to narrow the answer down properly?

+ +

If the state of the art involves supervised learning (data set of manually played games), then what's the state of the art for pure reinforcement learning, if there is one?

+",27354,,27354,,10/3/2019 20:25,10/3/2019 20:25,What is the state of the art AI training technique for imperfect information 2 player turn based games?,,0,5,,,,CC BY-SA 4.0 +15711,1,,,10/4/2019 1:50,,1,47,"

It was noted today that automated text generation is advancing at a rapid pace, potentially accelerating.

+ +

As bots become more and more capable of passing turing tests, especially in single iterations, such as social media posts or news blurbs, I have to ask:

+ +
    +
  • Does it matter where a text originates, if the content is strong?
  • +
+ +

Strength here is used in the sense of meaning. To elucidate my argument I'll present an example. (It helps to know the Library of Babel, an infinite memory array where every possible combination of characters exists.)

+ +
+

An algorithm is set up to produce aphorisms. The overwhelming majority of the output is gibberish, but among the junk an incredibly profound observation emerges that changes the way people think about a subject or issue.

+
+ +

Where the bot just spams social media, the aphorism in question is identified because it receives a high number of reposts by humans, who, in this scenario, provide the mechanism for finding the needle (the profound aphorism) in the haystack (the junk output).

+ +

Does the value of the insight depend on the cognitive quality of the generator, in the sense of having to understand the statement?

+ +

A real world example would be Game 2, Move 37 in the AlphaGo vs. Lee Sedol match.

+",1671,,1671,,10/4/2019 23:00,10/4/2019 23:00,Does it matter if it's a bot or a human generating text? Doesn't it come down to the content?,,2,1,,,,CC BY-SA 4.0 +15712,1,15760,,10/4/2019 2:35,,2,99,"

I initialised an LSTM with Xavier initialisation, although I've found this occurs for all initialisations I have tested. When initialised, if the LSTM is tested with a random input, it will get stuck in a cycle, either over a few characters or just one. Example output:

+ +
nhhbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+
+f(b(bf(bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+
+kk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,mkk,,m
+
+ +

I've also noticed the LSTM is particularly bad in this way: even when trained, it has a tendency to get stuck in loops like this. It seems it has difficulty retaining context strongly enough to overpower the input, activation and output gates with only the forget gate. Is there an explanation for this?

+",26726,,2444,,10/8/2019 0:38,10/8/2019 0:38,Why does an LSTM cycle on initialisation?,,1,1,,,,CC BY-SA 4.0 +15714,1,,,10/4/2019 7:36,,2,107,"

There are the 3 Asimov’s laws:

+ +
    +
  1. A robot may not injure a human being or, through inaction, allow a +human being to come to harm.

  2. +
  3. A robot must obey orders given to it by human beings, except where +such orders would conflict with the first law.

  4. +
  5. A robot must protect its own existence as long as such protection +does not conflict with the first or second law.

  6. +
+ +

These laws are based on morality, which assumes that robots have sufficient agency and cognition to make moral decisions.

+ +

Additionally there are alternative laws of responsible robotics:

+ +
    +
  1. A human may not deploy a robot without the human–robot work +system meeting the highest legal and professional standards of +safety and ethics.

  2. +
  3. A robot must respond to humans as appropriate for their roles.

  4. +
  5. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.

  6. +
+ +

Thinking beyond morality, consciousness and the AI designer's professionalism to incorporate safety and ethics into the AI design:

+ +

Should AI incorporate irrefutable parent rules for AI to be inevitably mortal by design?

+ +

How can we assure that AI can be deactivated if necessary, in such a way that the deactivation procedure cannot be worked around by the AI itself, even at the cost of the AI's termination as its inevitable destiny?

+ +
+ +

EDIT: to explain reasoning behind the main question.

+ +

Technological solutions are often based on observing biology and nature.

+ +

In evolutionary biology, for example, research results on bird mortality show a potential negative effect of telomere shortening (DNA) on lifespan in general.

+ +
+

telomere length (TL) has become a biomarker of increasing interest + within ecology and evolutionary biology, and has been found to predict + subsequent survival in some recent avian studies but not others. + (...) We performed a meta-analysis on these estimates and found an overall significant + negative association implying that short telomeres are associated with + increased mortality risk

+
+ +

If such research is confirmed in general, then natural life expectancy is limited by design of its DNA, ie by design of its cell-level code storage. I assume this process of built-in mortality cannot be effectively worked around by a living creature.

+ +

A similar design could be incorporated in any AI design, to assure its vulnerability and mortality, in the sense a conscious AI could otherwise recover and restore its full health state and continue to be up and running infinitely.

+ +

Otherwise a simple turn off switch could be disabled by the conscious AI itself.

+ +
+ +

References

+ +

Murphy, R. and Woods, D.D., 2009. Beyond Asimov: the three laws of responsible robotics. IEEE intelligent systems, 24(4), pp.14-20.

+ +

Wilbourn, R.V., Moatt, J.P., Froy, H., Walling, C.A., Nussey, D.H. and Boonekamp, J.J. The relationship between telomere length and mortality risk in non-model vertebrate systems: a meta-analysis. Phil. Trans. R. Soc. B, 373.

+",28605,,28605,,10/4/2019 9:40,10/4/2019 11:01,Should AI be mortal by design?,,0,0,,1/23/2021 4:09,,CC BY-SA 4.0 +15715,1,15749,,10/4/2019 8:32,,0,97,"

Here's a simple image classifier implemented in TensorFlow Keras (right click to open in new tab): https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynb

+ +

I altered it a bit to fit with my 2-class dataset. And the output layer is:

+ +
Dense(2, activation=tf.nn.softmax);
+
+ +

The loss function and optimiser are still the same as in the example in the link above.

+ +
loss_fn   = tf.losses.SparseCategoricalCrossentropy();
+optimizer = tf.optimizers.Adam();
+
+ +

I wish to turn it into a classifier with a single output neuron, as I have only 2 classes in the dataset, and a sigmoid handles the 2 classes well. I tried some combinations of output activation functions + loss functions + optimisers, but the network doesn't work any more (i.e. it doesn't converge).

+ +

For example, this doesn't work:

+ +
//output layer
+Dense(1, activation=tf.sigmoid);
+
+//loss and optim
+loss_fn   = tf.losses.mse;
+optimizer = tf.optimizers.Adagrad(1e-1);
+
+ +

Which combination of output activation + loss + optimiser should work for the single-output-neuron model? And generically, which loss functions and optimisers should pair well?

+",2844,,2844,,10/6/2019 7:25,10/6/2019 8:39,TF Keras: How to turn this probability-based classifier into single-output-neuron label-based classifier,,1,0,,,,CC BY-SA 4.0 +15716,2,,15711,10/4/2019 8:38,,1,,"

Sometimes voice pitch also matters (and system designers forget about that) - if the system is badly designed, then you will feel that it is a bot; if the design is great, then you don't feel it's a bot at all.

+",23181,,,,,10/4/2019 8:38,,,,0,,,,CC BY-SA 4.0 +15717,2,,15711,10/4/2019 8:40,,1,,"

There are several aspects to this.

+ +

Firstly, content. I guess a further comparison would be to the monkeys on typewriters coming up with the complete works of Shakespeare eventually. You will probably have a huge mass of tedious text, with the odd nugget in it. One would hope that the signal-to-noise ratio would be better with human authors, though looking at Twitter that might not actually be the case (unless there are already more bots on there than humans!) Content evaluation/filtering is still an issue, especially with longer texts. How many computer-generated novels are there? Not as many as weather reports or news summaries.

+ +

Second, legal issues. Who 'owns' the text and is responsible for it? If a computer generates a text that incites violence, who is to blame for the consequences? The developer? Some texts require 'ownership', as they might have consequences attached to them. A wrong weather forecast that causes people to make wrong decisions, leading to loss of life. Do you sue the computer for that? So in this respect, clear responsibility is important.

+ +

So even if the content is good (point 1), there might also be responsibility (point 2) where it matters whether a text has been generated by an algorithm.

+",2193,,,,,10/4/2019 8:40,,,,1,,,,CC BY-SA 4.0 +15720,1,15721,,10/4/2019 10:38,,2,74,"

According to the definition, the AI agent has to play a game on its own. A typical domain is the blocksworld problem. The AI determines which action the robot in a game should execute, and a possible strategy for determining the action sequence is reinforcement learning. Colloquially speaking, reinforcement learning leads to an AI agent who can play games.

+ +

Before a self-learning character can be realized, the simulation has to be programmed first. That is an environment which contains the rules for playing blocksworld or any other game. The environment is the house in which the AI character operates. Can the Q-learning algorithm be utilized to build the simulation itself?

+",,user11571,,,,10/4/2019 15:26,Can reinforcement learning be utilized for creating a simulation?,,1,0,,,,CC BY-SA 4.0 +15721,2,,15720,10/4/2019 10:53,,1,,"
+

Can the Q-learning algorithm be utilized to build the simulation itself?

+
+ +

Only in the presence of a meta-environment, or meta-simulation where the goals of creating the original simulation are encoded in the states, available actions and rewards.

+ +

A special case of this might be in model-learning planning algorithms where there exists a ""real"" environment to refer to, and the agent benefits from exploring it and constructing a statistical model that it can then use to create an approximate simulation of the outcomes of sequences of actions. The Dyna-Q algorithm, which is a simple extension of Q-learning, is an example of this kind of model building approach. The simulation is very basic - it simply replays previous relevant experience. But you could consider this as an example of the agent constructing a simulation.

+ +

Getting an agent to act like a researcher and actually design and/or code a simulation from scratch would require a different kind of meta-environment. It is theoretically possible, but likely very hard to implement in a general way - even figuring out the reward scheme to express the goals of such an agent could be a challenge. I'm not aware of any examples, but it is entirely possible someone has attempted this kind of meta agent, because it is an interesting idea.

+ +

Possibly the simplest example would be a gridworld meta-environment where a ""designer"" agent could select the layout of objects in the maze, with the goal of making a second ""explorer"" agent's task progressively more difficult. The designer would be ""creating the simulation"" only in a very abstract way though, by setting easy-to-manage parameters of the environment, not writing low level code.

+ +

There is not much difference between the approach above and having two opposing agents playing a game. It is different from a turn-based game like chess in that each agent would complete a full episode, and then be rewarded by the outcome at the end of the combined two episodes. There are some similarities to GANs for image generation.

+",1847,,1847,,10/4/2019 15:26,10/4/2019 15:26,,,,0,,,,CC BY-SA 4.0 +15724,2,,12857,10/4/2019 13:31,,0,,"
+

I am curious to know if I can put the economy feature such as 'national interest rate' or 'unemployment rate' besides each stocks' features.

+
+ +

The variables are macro-econometric and they, in general, seem to have some influence on stocks' prices. This inclusion might as well increase your model's prediction accuracy. You can definitely use them as predictors. As mentioned in comments - Experimentation is the way to go.

+ +
+

Can I feed the data to my neural network like the table above?

+
+ +

In general, you can have any kind of numeric variables as input to a neural network. Things will work out fine. The important thing is the selection of relevant predictor variables that, potentially, have some relationship with the response variable.

+",16708,,16708,,10/5/2019 5:26,10/5/2019 5:26,,,,0,,,,CC BY-SA 4.0 +15729,1,,,10/4/2019 20:52,,3,280,"

A little background... I’ve been on-and-off learning about data science for around a year or so; however, I started thinking about artificial intelligence a few years ago. I have a cursory understanding of some common concepts but still not much depth. When I first learned about deep learning, my automatic response was “that’s not how our minds do it.” Deep learning is obviously an important topic, but I’m trying to think outside the black box.

+

I think of deep learning as being “outside-in” in that a model has to rely on examples to understand (for lack of a better term) that some dataset is significant. However, our minds seem to know when something is significant in the absence of any prior knowledge of the thing (i.e., “inside-out”).

+

Here’s a thing:

+

+

I googled “IKEA hardware” to find that. The point is that you probably don’t know what this is or have any existing mental relationship between the image and anything else, but you can see that it’s something (or two somethings). I realize there is unsupervised learning, image segmentation, etc., which deal with finding order in unlabeled data, but I think this example illustrates the difference between the way we tend to think about machine learning/AI and how our minds actually work.

+

More examples:

+

1)

+

+

2)

+

+

3)

+

+

Let’s say that #1 is a stock chart. If I were viewing the chart and trying to detect a pattern, I might mentally simplify the chart down to #2. That is, the chart can be simplified into a horizontal segment and a rising segment.

+

For #3, let’s say this represents log(x). Even though it’s not a straight line, someone with no real math background could describe it as an upward slope that is decreasing as the line gets higher. That is, the line can still be reduced to a small number of simple ideas.

+

I think this simplification is the key to the gap between how our minds work and what currently exists in AI. I’m aware of Fourier transforms, polynomial regression, etc., but I think there’s a more general process of finding order in sensory data. Once we identify something orderly (i.e., something that can’t reasonably be random noise), we label it as a thing and then our mental network establishes relationships between it and other things, higher order concepts, etc.

+

I’ve been trying to think about how to use decision trees to find pockets of order in data (to no avail yet - I haven’t figured out to apply it to all of the scenarios above), but I’m wondering if there are any other techniques or schools of thought that align with the general theory.

+",30154,,-1,,6/17/2020 9:57,10/29/2020 20:03,“Outside-in” versus “Inside-out” machine learning,,1,0,,,,CC BY-SA 4.0 +15730,1,15744,,10/5/2019 0:18,,48,16035,"

As human beings, we can think about infinity. In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract things, like numbers, or real things).

+ +

For example, at least, we can take into account the integers. We can, in principle, think of and ""understand"" infinitely many numbers that are displayed on the screen. Nowadays, we are trying to design artificial intelligence that is at least as capable as a human being. However, I am stuck with infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define ""understanding"" in a functional approach. For example, if a computer can differentiate 10 different numbers or things, it means that it really understands these different things somehow. This is the basic, straightforward approach to ""understanding"".

+ +

As I mentioned before, humans understand infinity because they are capable, at least in principle, of counting infinitely many integers. From this point of view, if I want to create a model (the model is actually a function in an abstract sense), this model must differentiate infinitely many numbers. Since computers are digital machines which have a limited capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers?

+ +

For example, we can take a deep learning vision model that recognizes numbers on cards. This model must assign a number to each different card to differentiate each integer. Since there exist infinitely many integers, how can the model assign a different number to each integer, like a human being, on a digital computer? If it cannot differentiate infinitely many things, how does it understand infinity?

+ +

If I take into account real numbers, the problem becomes much harder.

+ +

What is the point that I am missing? Are there any resources that focus on the subject?

+",19102,,2444,,10/5/2019 0:34,4/1/2021 15:00,Can digital computers understand infinity?,,19,1,,,,CC BY-SA 4.0 +15731,1,15732,,10/5/2019 1:51,,6,256,"

What are the real-life applications of transfer learning in machine learning? I am particularly interested in industrial applications of the concept.

+",30157,,2444,,6/11/2020 21:35,6/11/2020 21:35,What are the real-life applications of transfer learning?,,1,0,,,,CC BY-SA 4.0 +15732,2,,15731,10/5/2019 5:10,,7,,"

One application I know of being used in industry is image classification, by training only the last layer of one of the Inception models released by Google, with the desired number of classes. I can't provide specific details.
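As a minimal Keras sketch of that pattern (my own example, assuming a 2-class problem; it is not the exact setup used in industry):

import tensorflow as tf

# Pretrained Inception base with its weights frozen; only the new head is trained.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),   # the only trainable layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])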

+ +

Transfer learning is useful when:

+ +
    +
  1. You do not have the resources (time, processing power, etc.) to train a DL model from scratch.

  2. +
  3. You can compromise a bit on accuracy.

  4. +
+",16708,,2444,,6/11/2020 21:35,6/11/2020 21:35,,,,1,,,,CC BY-SA 4.0 +15733,2,,12570,10/5/2019 9:26,,1,,"

The true value $v_{\pi}(s)$ is a conceptual target for the $\overline{VE}$ in the book. You often do not know it in real problems. However, it is still used in two main ways in the book:

+ +
    +
  • Theoretically for analysis of different approximation schemes, which can be shown to converge to minimise the $\overline{VE}$ objective, or a related one.

  • +
  • In toy problems when exploring the nature of approximation in Reinforcement Learning (RL), it is possible to use tabular methods guaranteed to get close to zero error, and then compare them to approximate methods. There are several plots of this type in the book.

  • +
+ +

The book shows the derivation of gradient descent methods that start with minimising $\overline{VE}$ as an objective, and that use samples of $v_{\pi}(s)$ (such as the Monte Carlo return $G_t$ or the TD target) in place of the unknown $v_{\pi}(s)$ in the loss function. These also rely on the fact that the sample distribution will be weighted by $\mu(s)$ if they are taken naturally from the environment whilst the agent is following the policy $\pi$, so that $\mu(s)$ also does not need to be explicitly known.
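As a small sketch of the resulting update for a linear approximator (my own illustration, not the book's pseudocode), the sampled return $G$ simply stands in for the unknown $v_{\pi}(s)$:

import numpy as np

n_features = 8
w = np.zeros(n_features)
alpha = 0.01

def v_hat(x, w):
    return x @ w   # linear value estimate; its gradient w.r.t. w is just x

def mc_update(x, G, w, alpha):
    # Stochastic gradient step on (G - v_hat)^2; because G is sampled while
    # following pi, the visited states are weighted by mu(s) automatically.
    return w + alpha * (G - v_hat(x, w)) * x

# One made-up observation: feature vector of the visited state and its return.
x, G = np.random.rand(n_features), 1.5
w = mc_update(x, G, w, alpha)
print(w)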

+ +

Outside of toy problems deliberately constructed to demonstrate that this theory is correct, you will not know $v_{\pi}(s)$ or be able to calculate $\overline{VE}$. However, you will know from the theory that if you follow the update rules derived in the book for approximate gradient descent methods, that the process should find a local minimum for $\overline{VE}$, for whatever state approximation scheme you have chosen to use.

+ +

Usually you cannot even approximate $\overline{VE}$ from raw data, as the variance in returns will add noise to the signal, and there is no way to separate variance in returns from error in approximation in the general case. However, there are a couple of scenarios that do lend themselves to measuring this objective, provided you already have your estimate $\hat{v}(s,w)$ and the policy remains fixed throughout (a small numerical sketch of the calculation follows the list below):

+ +
    +
  • Simple, fast (perhaps simulated), environments which can be solved to arbitrary accuracy using tabular methods. In this case, you first calculate $v_{\pi}(s)$ using a non-approximate method, then sample many approximations by running the environment using policy $\pi$ and treating that as your data set.

  • +
  • Fully deterministic environments where $\pi$ is also deterministic. These have a variance of $0$ for Monte Carlo returns, so each observed return from any given state is already the true value of $v_{\pi}(s)$. Again you can just run the environment many times to get your data set to calculate $v_{\pi}(s)$ and $\hat{v}(s,w)$ for the observed states in the correct frequencies, and thus have data to calculate $\overline{VE}$.

  • +
+",1847,,1847,,10/5/2019 11:00,10/5/2019 11:00,,,,1,,,,CC BY-SA 4.0 +15736,2,,15683,10/5/2019 12:36,,1,,"

I think, once you are covered with the common stuff, you can probably go on and study all kinds of neural network variants.

+ +
+ +

The common stuff:

+ +

a) An undergraduate level Linear Algebra course -- covering matrix calculus. You might find this useful.

+ +

b) An undergraduate level study of statistical inference. Concepts from this topic will come up most of the time, and you might have a hard time getting around them even if you understand the rest of the math. I would recommend this.

+ +

c) A starter book on neural networks, e.g. Neural Networks by Raul Rojas.

+ +
+ +

After all these are covered you will certainly be ready for learning the variants of neural networks with ease. For LSTM I would recommend Alex Graves.

+",16708,,,,,10/5/2019 12:36,,,,0,,,,CC BY-SA 4.0 +15737,1,,,10/5/2019 13:53,,5,1512,"

I'm trying to learn how genetic algorithms can solve optimization problems. I have already learned how genetic algorithms can solve the knapsack, TSP and set cover problems. I'm looking for some other similar optimization problems, but I have not found any.

+

Would you please mention some other famous optimization problems that can be solved by using genetic algorithms?

+",30164,,2444,,1/15/2021 11:49,1/15/2021 11:52,What are examples of optimization problems that can be solved using genetic algorithms?,,1,1,,,,CC BY-SA 4.0 +15738,2,,15737,10/5/2019 14:15,,1,,"

There are numerous problems that can be solved with genetic algorithms or, more generally, with evolutionary algorithms (which includes also genetic programming and evolutionary strategies), even though they may not necessarily be the most efficient approach.

+

Here are a few examples.

+
    +
  • Evolution of the topology of neural networks. This is called neuroevolution.
  • +
  • Automatic test case generation (in particular, for self-driving cars). AsFault is one specific example.
  • +
  • Design of novel quantum computing algorithms. Specifically, genetic programming has been used to solve this problem (see this reference for more details).
  • +
  • As an alternative to reinforcement learning algorithms to solve RL problems. Specifically, evolution strategies have been successfully used in this case (see this).
  • +
+

There is a Wikipedia article that lists many other applications of genetic algorithms: List of genetic algorithm applications.

+",2444,,2444,,1/15/2021 11:52,1/15/2021 11:52,,,,0,,,,CC BY-SA 4.0 +15740,2,,15730,10/5/2019 15:31,,3,,"

By adding some rules for infinity in arithmetic (such as infinity minus a large finite number is infinity, etc.), the digital computer can appear to understand the notion of infinity.

+ +

Alternatively, the computer can simply replace the number n with its log-star value. Then, it can differentiate the numbers at a different scale, and can learn that any number with log-star value > 10 is practically equivalent to infinity.
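
+ +

A minimal sketch of both ideas, assuming Python's IEEE-754 floats for the arithmetic rules and the usual iterated-logarithm definition for log-star (the log-star > 10 threshold above is just one possible cutoff; the numbers here are only illustrative):

+ +
import math
+inf = float('inf')
+print(inf - 1_000_000)        # inf: infinity minus a large finite number is still infinity
+print(inf > 10 ** 100)        # True: infinity compares greater than any finite number
+def log_star(n):              # iterated logarithm: count how often log2 is applied until n <= 1
+    count = 0
+    while n > 1:
+        n = math.log2(n)
+        count += 1
+    return count
+print(log_star(10 ** 100))    # 5: even astronomically large numbers have tiny log-star values
+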

+",20745,,,,,10/5/2019 15:31,,,,2,,,10/5/2019 15:31,CC BY-SA 4.0 +15743,1,15748,,10/5/2019 18:25,,10,6881,"

As I understand, ResNet has some identity mapping layers, whose task is to produce an output that is the same as the input of the layer. ResNet solved the problem of accuracy degradation. But what is the benefit of adding identity mapping layers in intermediate layers?

+

What's the effect of these identity layers on the feature vectors that will be produced in the last layers of the network? Is it helpful for the network to produce better representation for the input? If this expression is correct, what is the reason?

+",30170,,2444,,2/6/2021 2:43,5/4/2021 10:12,What is the benefit of using identity mapping layers in deep neural networks like ResNet?,,4,0,,,,CC BY-SA 4.0 +15744,2,,15730,10/5/2019 19:19,,62,,"

I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here.

+

Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must both be the case that humans can understand these concepts while they remain alien to machines, and that there exist other concepts that are not like infinity that both humans and machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction.

+

The root of this misunderstanding is the problem of what it means to understand. Understanding is a vague term in everyday life, and that vague nature contributes to this misconception.

+

If by understanding, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a long running, and essentially open debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something special about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience.

+

So, let's suppose that by "understand", we have some more specific definition in mind. Something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. Our more concrete definition for "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability might we pick that would make infinity a special concept, understood by humans and not machines, unlike say, arithmetic.

+

We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes.

+

We might say that an entity understands a concept if it can apply the concept correctly. Again, if even the one person understands how to apply the concept of infinity correctly, then we only need to record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like Aleph Numbers. It is not impractical to encode these systems of rules in a computer, at least up to the level that any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes.

+

We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the General Problem Solver can, in principle, derive any logical consequences from a given body of facts, given enough time. Perhaps under this definition computers understand infinity better than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either.

+

Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas that express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work.

+

This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts.

+

So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. If you disagree, you need to provide a concrete definition of "understanding" that does separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a hard argument to make).

+

Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better).

+",16909,,36737,,4/1/2021 15:00,4/1/2021 15:00,,,,0,,,,CC BY-SA 4.0 +15745,2,,15729,10/5/2019 19:47,,2,,"

It sounds like you are interested in the ideas of intrinsic motivation and attention in the context of machine learning. These are big topics, and the subject of much active research.

+ +

Intrinsic motivation says that the key to identifying interesting patterns and skills that are worth learning is to give the agent some intrinsic reason to learn to do new things. This is not dissimilar from what humans have: learning new things, and improving or exercising our capabilities to the fullest is what Aristotle identified as the good life. There are thus good reasons to think that intrinsic motivation for AI might solve the problem you identify. Current research in this domain is exploring different mathematical ways to represent intrinsic motivation.

+ +
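
As one concrete (and deliberately simplified) illustration of such a mathematical representation, a common family of approaches adds a prediction-error bonus to the external reward; the function below is a hypothetical sketch, not the formulation of any particular paper:

+ +
import numpy as np
+def curious_reward(external_reward, predicted_next_state, actual_next_state, beta=0.1):
+    # The agent receives an intrinsic bonus for reaching states its own model cannot yet predict well
+    error = np.sum((np.asarray(predicted_next_state) - np.asarray(actual_next_state)) ** 2)
+    return external_reward + beta * error
+

+ +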

Attention was the subject of a large burst of research in deep neural networks during the last few years. Here's a recent talk from AWS at ICML that provides a good overview. The idea here is that an agent can learn both a reasonable mapping from inputs to outputs for some problem, and a separate mapping that describes how different future inputs should ""activate"" certain parts of the input/output mapping that the agent has learned. Essentially attention-driven models include a second component that learns which features of the input to pay attention to when engaging in certain kinds of tasks.

+",16909,,,,,10/5/2019 19:47,,,,5,,,,CC BY-SA 4.0 +15747,2,,15730,10/6/2019 1:10,,12,,"

TL;DR: The subtleties of infinity are made apparent in the notion of unboundedness. Unboundedness is finitely definable. ""Infinite things"" are really things with unbounded natures. Infinity is best understood not as a thing but as a concept. Humans theoretically possess unbounded abilities not infinite abilities (eg to count to any arbitrary number as opposed to ""counting to infinity""). A machine can be made to recognize unboundedness.

+ +

Down the rabbit hole again

+ +

How to proceed? Let's start with ""limits.""

+ +

Limitations

+ +

Our brains are not infinite (lest you believe in some metaphysics). So, we do not ""think infinity"". Thus, what we purport as infinity is best understood as some finite mental concept against which we can ""compare"" other concepts.

+ +

Additionally, we cannot ""count infinite integers."" There is a subtlety here that is very important to point out:

+ +

Our concept of quantity/number is unbounded. That is, for any finite value we have a finite/concrete way of producing another value which is strictly larger/smaller. That is, provided finite time, we could only ever count finite amounts.

+ +

You cannot be ""given infinite time"" to ""count all the numbers"" this would imply a ""finishing"" which directly contradicts the notion of infinity. Unless you believe humans have metaphysical properties which allow them to ""consistently"" embody a paradox. Additionally how would you answer: What was the last number you counted? With no ""last number"" there is never a ""finish"" and hence never an ""end"" to your counting. That is you can never ""have enough"" time/resources to ""count to infinity.""

+ +

I think what you mean is we can fathom the notion of bijection between infinite sets. But this notion is a logical construction (ie it's a finite way of wrangling what we understand to be infinite).

+ +

However, what we are really doing is: Within our bounds we are talking about our bounds and, when ever we need to, we can expand our bounds (by a finite amount). And we can even talk about the nature of expanding our bounds. Thus:

+ +

Unboundedness

+ +

A process/thing/idea/object is deemed unbounded if given some measure of its quantity/volume/existence we can in a finite way produce an ""extension"" of that object which has a measure we deem ""larger"" (or ""smaller"" in the case of infinitesimals) than the previous measure and that this extension process can be applied to the nascent object (ie the process is recursive).

+ +

Canonical case number one: The Natural Numbers

+ +

Additionally, our notion of infinity prevents any ""at-ness"" or ""upon-ness"" unto infinity. That is, one never ""arrives"" at infinity nor does one ever ""have"" infinity. Rather, one proceeds unboundedly.

+ +

Thus how do we conceptualize infinity?

+ +

Infinity

+ +

It seems that ""infinity"" as a word is misconstrued to mean that there is a thing that exists called ""infinity"" as opposed to a concept called ""infinity"". Let's smash atoms with the word:

+ +
+

Infinite: limitless or endless in space, extent, or size; impossible to measure or calculate.

+ +

in- :a prefix of Latin origin, corresponding to English un-, having a negative or privative force, freely used as an English formative, especially of adjectives and their derivatives and of nouns (inattention; indefensible; inexpensive; inorganic; invariable). + (source)

+ +

Finite: having limits or bounds.

+
+ +

So in-finity is really un-finity, which is not having limits or bounds. But we can be more precise here, because we can all agree the natural numbers are infinite but any given natural number is finite. So what gives? Simple: the natural numbers satisfy our unboundedness criterion and thus we say ""the natural numbers are infinite.""

+ +

That is, ""infinity"" is a concept. An object/thing/idea is deemed infinite if it possesses a property/facet that is unbounded. As we saw before, unboundedness is finitely definable.

+ +

Thus, if the agent you speak of was programmed well enough to spot the pattern in the numbers on the cards, and to notice that the numbers all come from the same set, it could deduce the unbounded nature of the sequence and hence define the set of all numbers as infinite - purely because the set has no upper bound. That is, the progression of the natural numbers is unbounded and hence definably infinite.

+ +

Thus, to me, infinity is best understood as a general concept for identifying when processes/things/ideas/objects posses an unbounded nature. That is, infinity is not independent of unboundedness. Try defining infinity without comparing it to finite things or the bounds of those finite things.

+ +

Conclusion

+ +

It seems feasible that a machine could be programmed to represent and detect instances of unboundedness or when it might be admissible to assume unboundedness.

+",28343,,28343,,10/8/2019 20:03,10/8/2019 20:03,,,,2,,,,CC BY-SA 4.0 +15748,2,,15743,10/6/2019 4:10,,10,,"

TL;DR: Deep networks have some issues that skip connections fix.

+ +

To address this statement:

+ +
+

As I understand Resnet has some identity mapping layers that their task is to create the output as the same as the input of the layer

+
+ +

The residual blocks don't strictly learn the identity mapping. They are simply capable of learning such a mapping. That is, the residual block makes learning the identity function easy. So, at the very least, skip connections will not hurt performance (this is explained formally in the paper).

+ +

From the paper (its figure of the residual building block, not reproduced here, shows the input $\boldsymbol{x}$ bypassing the weight layers and being added element-wise to their output $\mathcal{F}(\boldsymbol{x})$):

+ +

+ +

Observe: it takes the output of an earlier layer, passes it further down, and element-wise sums it with the output of the skipped layers. These blocks may learn mappings that are not the identity map.

+ +

From paper (some benefits):

+ +
+

$$\boldsymbol{y} = \mathcal{F}(\boldsymbol{x},\{W_i\})+\boldsymbol{x}\quad\text{(1)}$$The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

+
+ +

An example of a residual mapping from the paper is $$\mathcal{F} = W_2\sigma_2(W_1\boldsymbol{x})$$

+ +

That is $\{W_i\}$ represents a set of i weight matrices ($W_1,W_2$ in the example) occurring in the layers of the residual (skipped) layers. The ""identity shortcuts"" are referring to performing the element wise addition of $\boldsymbol{x}$ with the output of the residual layers.

+ +

So using the residual mapping from the example (1) becomes:

+ +

$$\boldsymbol{y} = W_2\sigma_2(W_1\boldsymbol{x})+\boldsymbol{x}$$

+ +

In short, you take the output $\boldsymbol{x}$ of a layer, skip it forward, and element-wise sum it with the output of the residual mapping, thus producing a residual block.

+ +
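
To make the notation concrete, here is a minimal sketch of such a block, assuming PyTorch and fully connected layers as in the example mapping above (the actual ResNet blocks use convolutional layers):

+ +
import torch
+import torch.nn as nn
+class ResidualBlock(nn.Module):
+    # Implements y = W2 * relu(W1 * x) + x, i.e. the residual mapping plus the identity shortcut
+    def __init__(self, dim):
+        super().__init__()
+        self.w1 = nn.Linear(dim, dim)
+        self.w2 = nn.Linear(dim, dim)
+    def forward(self, x):
+        residual = self.w2(torch.relu(self.w1(x)))
+        return residual + x   # element-wise addition of the input (the skip connection)
+block = ResidualBlock(64)
+y = block(torch.randn(8, 64))  # a batch of 8 feature vectors of size 64
+

+ +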

Limitations of deep networks expressed in paper:

+ +
+

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

+
+ +

The skip connections and hence the residual blocks allow for stacking deeper networks while avoiding this degradation issue.

+ +

Link to paper

+ +

I hope this helps.

+",28343,,28343,,10/6/2019 5:53,10/6/2019 5:53,,,,0,,,,CC BY-SA 4.0 +15749,2,,15715,10/6/2019 6:34,,0,,"

Advice from Neil, yes, output targeting class labels is still classification.

+ +

Output range is in contiguous range:

+ +
    +
  • This is regression
  • +
  • For example: Linear activation, it has full numeric range of outputs.
  • +
+ +

Output targeting class labels is classification:

+ +
    +
  • Single output neuron with sigmoid-like functions. This will classify to 2 classes, although Y data can be normalised to classify more classes.
  • +
  • Multiple output neurons (probabilities of classes) with sigmoid-like functions (softmax is mainly used). This will classify to 2 or more classes.
  • +
+ +

Multiple combinations of loss functions and optimisers can make the single-neuron output layer work, with different configs for them. Note that learning rates of different optimisers are different, some take 1e-1, some need 1e-3 for good training.

+ +

For example, this combination should work:

+ +
import tensorflow as tf  # assuming the Python TensorFlow 2.x API, where tf.losses / tf.optimizers alias tf.keras
+loss_fn   = tf.losses.LogCosh();
+optimizer = tf.optimizers.RMSprop(1e-3);
+
+ +

From my trying out, these other combinations also work for single output neuron with my data (Adam, Adamax, Nadam, RMSprop work when learning_rate=1e-3 instead of 1e-1):

+ +
                         Adadelta  Adagrad  Adam  Adamax  Ftrl  Nadam  RMSprop  SGD
+BinaryCrossentropy       Yes       Yes      --    --      --    --     --       Yes
+CategoricalCrossentropy  Yes       --       --    --      --    --     --       --
+CategoricalHinge         --        --       --    --      --    --     --       --
+CosineSimilarity         --        --       --    --      --    --     --       --
+Hinge                    Yes       Yes      --    --      --    --     --       Yes
+Huber                    Yes       Yes      --    --      --    --     --       Yes
+LogCosh                  Yes       Yes      --    --      --    --     --       Yes
+Poisson                  Yes       Yes      --    --      --    --     --       Yes
+SquaredHinge             Yes       Yes      --    --      --    --     --       Yes
+
+KLD: lambda a,b: KLD(a,b)
+MAE,MAPE,MSE,MSLE: lambda a,b: Mxxx(a,b)
+The above lambdas are direct functions, not classes like SGD, Adadelta, etc.
+SparseCategoricalCrossentropy: Seems not to work with a single output neuron.
+
+",2844,,2844,,10/6/2019 8:39,10/6/2019 8:39,,,,0,,,,CC BY-SA 4.0 +15750,2,,15743,10/6/2019 10:05,,3,,"

As explained in this paper, the major benefit of identity mappings is that they enable the backpropagation signal to reach from the output (last) layers to the input (first) layers.

+ +

You can see in section 2 of the paper that this resolves the vanishing gradient problem which arises in deeper networks.

+",19102,,,,,10/6/2019 10:05,,,,0,,,,CC BY-SA 4.0 +15751,1,15773,,10/6/2019 13:42,,2,94,"

Why is self-awareness such a focal point when speaking about AI? Does reaching such a level always mean a starting point for apocalyptic nightmares, or is it just a classical example of a really abstract thing that a machine cannot easily possess?

+ +

I would sleep far more calmly if the situation were the latter, and I understand the first does not automatically happen. The main thing I would like to discover is the starting point: which view came first historically? Or is there another viewpoint on the historical first occurrence of the term self-awareness?

+",11810,,,,,10/7/2019 16:14,Why is awareness of itself such a point when speaking about AI?,,1,2,,,,CC BY-SA 4.0 +15753,2,,15730,10/6/2019 16:19,,19,,"

I think your premise is flawed.

+ +

You seem to assume that to ""understand""(*) infinities requires infinite processing capacity, and imply that humans have just that, since you present them as the opposite to limited, finite computers.

+ +

But humans also have finite processing capacity. We are beings built of a finite number of elementary particles, forming a finite number of atoms, forming a finite number of nerve cells. If we can, in one way or another, ""understand"" infinities, then surely finite computers can also be built that can.

+ +

(* I used ""understand"" in quotes, because I don't want to go into e.g. the definition of sentience etc. I also don't think it matters in regarding this question.)

+ +
+

As a human being, we can think infinity. In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract, like numbers, or real).

+
+ +

Here, you actually say it out loud. ""With enough resources."" Would the same not apply to computers?

+ +

While humans can, e.g., use infinities when calculating limits etc., and can think of the idea of something getting arbitrarily larger, we can only do it in the abstract, not in the sense of being able to process arbitrarily large numbers. The same rules we use for mathematics could also be taught to a computer.

+",30205,,,,,10/6/2019 16:19,,,,1,,,,CC BY-SA 4.0 +15754,1,,,10/6/2019 17:09,,4,343,"

Are there any algorithms to use reinforcement learning to learn optimal policies in a partially observable Markov decision process (POMDP), i.e. when the state is not perfectly observed? More specifically, how does one update the belief state using Bayes' rule when the update kernel $Q$ is not known?

+",30206,,2444,,12/19/2021 18:51,12/19/2021 18:51,Is there a way to do reinforcement learning in POMDP?,,0,5,,,,CC BY-SA 4.0 +15756,2,,15730,10/6/2019 19:38,,9,,"

In Haskell, you can type:

+ +

print [1..]

+ +

and it will print out the infinite sequence of numbers, starting with:

+ +
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320,321,322,323,324,325,326,327,328,329,330,331,332,333,334,335,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,526,527,528,529,530,531,532,533,534,535,536,537,538,539,540,541,542,543,544,545,546,547,548,549,550,551,552,553,554,555,556,557,558,559,560,561,562,563,564,565,566,567,568,569,570,571,572,573,574,575,576,577,578,579,580,581,582,583,584,585,586,587,588,589,590,591,592,593,594,595,596,597,598,599,600,601,602,603,604,605,606,607,608,609,610,611,612,613,614,615,616,617,618,619,620,621,622,623,624,625,626,627,628,629,630,631,632,633,634,635,636,637,638,639,640,641,642,643,644,645,646,647,648,649,650,651,652,653,654,655,656,657,658,659,660,661,662,663,664,665,666,667,668,669,670,671,672,673,674,675,676,677,678,679,680,681,682,683,684,685,686,687,688,689,690,691,692,693,694,695,696,697,698,699,700,701,702,703,704,705,706,707,708,709,710,711,712,713,714,715,716,717,718,719,720,721,722,723,724,725,726,727,728,729,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,746,747,748,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,771,772,773,774,775,776,777,778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,
+
+ +

It will do this until your console runs out of memory.

+ +

Let's try something more interesting.

+ +
double x = x * 2
+print (map double [1..])
+
+ +

And here's the start of the output:

+ +
[2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190,192,194,196,198,200,202,204,206,208,210,212,214,216,218,220,222,224,226,228,230,232,234,236,238,240,242,244,246,248,250,252,254,256,258,260,262,264,266,268,270,272,274,276,278,280,282,284,286,288,290,292,294,296,298,300,302,304,306,308,310,312,314,316,318,320,322,324,326,328,330,332,334,336,338,340,342,344,346,348,350,352,354,356,358,360,362,364,366,368,370,372,374,376,378,380,382,384,386,388,390,392
+
+ +

These examples show infinite computation. In fact, you can keep infinite data structures in Haskell, because Haskell has the notion of non-strictness-- you can do computation on entities that haven't been fully computed yet. In other words, you don't have to fully compute an infinite entity to manipulate that entity in Haskell.

+ +

Reductio ad absurdum.

+",1916,,,,,10/6/2019 19:38,,,,5,,,,CC BY-SA 4.0 +15758,2,,15730,10/7/2019 1:23,,8,,"

I believe humans can be said to understand infinity since at least Georg Cantor because we can recognize different types of infinites (chiefly countable vs. uncountable) via the concept of cardinality.

+ +

Specifically, a set is countably infinite if it can be mapped to the natural numbers, which is to say there is a 1-to-1 correspondence between the elements of countably infinite sets. The set of all reals is uncountable, as is the set of all combinations of natural numbers, because there will always be more combinations than natural numbers where n>2, resulting in a set with a greater cardinality. (The first formal proofs for uncountability can be found in Cantor, and is subject of Philosophy of Math.)

+ +

Understanding of infinity involves logic as opposed to arithmetic because we can't express, for instance, all of the decimals of a transcendental number, only use approximations. Logic is a fundamental capability of what we think of as computers.

+ +
    +
  • An analytic process (AI) that can recognize a function that produces an infinite loop, such as using $\pi$ to draw a circle, might be said to understand infinity...
  • +
+ +

""Never ending"" is a definition of infinity, with the set of natural numbers as an example (there is a least number, 1, but no greatest number.)

+ +

Intractability vs. Infinity

+ +

Outside of the special case of infinite loops, I have to wonder if an AI is more oriented on computational intractability as opposed to infinity.

+ +

A problem is said to be intractable if there is not enough time and space to completely represent it, and this can be extended to many real numbers.

+ +

$\pi$ may be understood to be infinite because it arises from/produces a circle, but I'm not sure this is the case with all real numbers with an intractable number of decimals.

+ +

Would the AI assume such a number were infinite or merely intractable? The latter case is concrete as opposed to abstract--either it can finish the computation or not.

+ +

This leads to the halting problem.

+ +
    +
  • Turing's proof that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist could be taken as an indication that an algorithm based on the Turing-Church model of computation cannot have a perfect understanding of infinity.
  • +
+ +

If an alternate computational model arose that could solve the halting problem, it might be argued that an algorithm could have a perfect understanding, or at least demonstrate an understanding comparable to humans.

+",1671,,1671,,10/16/2019 21:08,10/16/2019 21:08,,,,1,,,,CC BY-SA 4.0 +15759,1,,,10/7/2019 2:31,,1,89,"

Are there any machine learning or deep learning techniques to use the content of an image for another image?

+

More specifically, suppose I take a photo of a notebook. I get the angle, lighting, and perspective perfect. Now I copy an image I found online that contains text or handwriting. Is there a machine learning technique that would now draw this writing in my notebook?

+

Just asking if it's possible before I attempt to hire someone.

+",30219,,2444,,10/31/2020 15:52,10/31/2020 15:52,Are there any deep learning techniques to use the content of an image for another image?,,0,2,,,,CC BY-SA 4.0 +15760,2,,15712,10/7/2019 4:49,,0,,"

This doesn't necessarily answer the question, but it does give some possible solutions to mitigate the problem.

+ +

Apparently, the emphasised looping behaviour above is a result of improper initialisation, in which I was initialising with only positive and zero weights. The looping behaviour is largely diminished by proper initialisation; however, it is not totally removed. See below for examples:

+ +
djbgnuywjkfrrkrkrpueeeeeffrrdv
+kmkkkkkkkkkkkrkkkkkkkkkkkkkkkk
+clfwtpjbsqeeeeeeesssjjeeeeeeee
+ldwgmbvcmmbvcmmmgmmmgmmmgmmmmg
+ywzrfotntntnttntntunuwqzffotnt
+lvvurrrrrrxafllvvurrrrrrxcfhla
+
+ +

(These are samples from 6 different uniquely initialised LSTM's on the exact same architecture).

+ +

I have also found that this common behaviour is typically combated by adding a certain degree of randomness when feeding the LSTM's output back in. This is normally done by interpreting the LSTM's output at time step $t$ as a probability distribution, which is used to pick the next value fed into the LSTM at $t+1$. This helps break up non-confident looping behaviour (which, from some quick experimentation, is where looping most commonly occurs), while still mostly retaining confident predictions.

+ +

I also tested this by randomising the input a little bit, using a probability distribution where 90% of the time the correct input is selected and 10% of the time a random one is, and back-propagating as normal. This didn't seem to have much of an effect on the looping behaviour, although it might perhaps work as a good form of regularisation. I have yet to test that.

+ +

You can also use the temperature method in LSTMs, which is explained really well here: https://cs.stackexchange.com/q/79241/20691.
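
+ +

For reference, here is a minimal sketch of that temperature-based sampling step, assuming NumPy; lstm_output_logits is a hypothetical vector of unnormalised scores over the output vocabulary:

+ +
import numpy as np
+def sample_with_temperature(logits, temperature=1.0):
+    # Lower temperature sharpens the distribution (more greedy); higher temperature adds randomness
+    logits = np.asarray(logits, dtype=np.float64) / temperature
+    probs = np.exp(logits - np.max(logits))
+    probs /= probs.sum()
+    return np.random.choice(len(probs), p=probs)
+# next_index = sample_with_temperature(lstm_output_logits, temperature=0.8)
+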

+",26726,,2444,,10/8/2019 0:38,10/8/2019 0:38,,,,1,,,,CC BY-SA 4.0 +15761,1,,,10/7/2019 5:15,,3,191,"

In the TRPO paper, the objective to maximize is (equation 14) +$$ +\mathbb{E}_{s\sim\rho_{\theta_\text{old}},a\sim q}\left[\frac{\pi_\theta(a|s)}{q(a|s)} Q_{\theta_\text{old}}(s,a) \right] +$$

+ +

which involves an expectation over states sampled with some density $\rho$, itself defined as +$$ +\rho_\pi(s) = P(s_0 = s)+\gamma P(s_1=s) + \gamma^2 P(s_2=s) + \dots +$$

+ +

This seems to suggest that later timesteps should be sampled less often than earlier timesteps, or equivalently sampling states uniformly in trajectories but adding an importance sampling term $\gamma^t$.

+ +

However, the usual implementations simply use batches made of truncated or concatenated trajectories, without any reference to the location of the timesteps in the trajectory.

+ +

This is similar to what can be seen in the PPO paper, which transforms the above objective into (equation 3) +$$ +\mathbb{E}_t \left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_\text{old}}(a_t|s_t)} \hat A_t \right] +$$

+ +

It seems that something is missing in going from $\mathbb{E}_{s\sim \rho}$ to $\mathbb{E}_t$ in the discounted setting. +Are they really equivalent?

+",30223,,2444,,6/21/2020 13:21,6/21/2020 13:21,Are these two TRPO objective functions equivalent?,,1,0,,,,CC BY-SA 4.0 +15762,2,,15730,10/7/2019 6:30,,7,,"

(There's a summary at the bottom for those who are too lazy or pressed for time to read the whole thing.)

+ +

Unfortunately to answer this question I will mainly be deconstructing the various premises.

+ +
+

As I mentioned before, humans understand infinity because they are capable, at least, counting infinite integers, in principle.

+
+ +

I disagree with the premise that humans would actually be able to count to infinity. To do so, said human would need an infinite amount of time, an infinite amount of memory (like a Turing machine) and most importantly an infinite amount of patience - in my experience most humans get bored before they even count to 1,000.

+ +

Part of the problem with this premise is that infinity is actually not a number, it's a concept that expresses an unlimited amount of 'things'. Said 'things' can be anything: integers, seconds, lolcats, the important point is the fact that those things are not finite.

+ +

See this relevant SE question for more details: +https://math.stackexchange.com/questions/260876/what-exactly-is-infinity

+ +

To put it another way: if I asked you ""what number comes before infinity?"" what would your answer be? This hypothetical super-human would have to count to that number before they could count infinity. And they'd need to know the number before that first, and the one before that, and the one before that...

+ +

Hopefully this demonstrates why the human would not be able to actually count to infinity - because infinity does not exist at the end of the number line, it is the concept that explains the number line has no end. Neither man nor machine can actually count up to it, even with infinite time and infinite memory.

+ +
+

For example, If a computer can differentiate 10 different numbers or things, it means that it really understand these different things somehow.

+
+ +

Being able to 'differentiate' between 10 different things doesn't imply the understanding of those 10 things.

+ +

A well-known thought experiment that questions the idea of what it means to 'understand' is John Searle's Chinese Room experiment:

+ +
+

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

+ +

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.

+
+ +

The thing to take away from this experiment is that the ability to process symbols does not imply that one actually understands those symbols. Many computers process natural languages every day in the form of text (characters encoded as integers, typically in a unicode-based encoding like UTF-8), but they do not necessarily understand those languages. On a simpler level, effectively all computers are able to add two numbers together, but they do not necessarily understand what they are doing.

+ +

In other words, even in the 'deep learning vision model' the computer arguably does not understand the numbers (or 'symbols') it is being shown, it is merely the algorithm's ability to simulate intelligence that allows it to be classed as artificial intelligence.

+ +
+

For example, we can take a deep learning vision model that recognizes numbers on the card. This model must assign a number to each different card to differentiate each integer. Since there exist infinite numbers of integer, how can the model assign different number to each integer, like a human being, on the digital computers? If it cannot differentiate infinite things, how does it understand infinity?

+
+ +

If you were to perform the same card test on a human, and continually increased the number of cards used, eventually a human wouldn't be able to keep track of them all due to lack of memory. A computer would experience the same problem, but could theoretically outperform the human.

+ +

So now I ask you, can a human really differentiate infinite things? Personally I suspect the answer is no, because all humans have limited memory, and yet I would agree that humans most likely can understand infinity to some degree (some can do so better than others).

+ +

As such, I think the question ""If it cannot differentiate infinite things, how does it understand infinity?"" has a flawed premise - being able to differentiate infinite things is not a prerequisite for understanding the concept of infinity.

+ +
+ +

Summary:

+ +

Essentially your question hinges on what it means to 'understand' something.

+ +

Computers can certainly represent infinity: the IEEE floating-point specification defines both positive and negative infinity, and all modern processors are capable of processing floating-point values (either in hardware or through software).

+ +

If AIs are ever capable of actually understanding things then theoretically they might be able to understand the concept of infinity, but we're a long way off being able to definitively prove this either way, and we'd have to come to a consensus about what it means to 'understand' something first.

+",16369,,,,,10/7/2019 6:30,,,,0,,,,CC BY-SA 4.0 +15768,2,,15730,10/7/2019 12:22,,2,,"

Computers don't understand ""infinity"" or even ""zero"", just like a screwdriver does not understand screws. It is a tool made for processing binary signals.

+ +

In fact, a computer's equivalent in wetware is not a person but a brain. Brains don't think, persons do. The brain is just the platform persons are implemented with. It's a somewhat common mistake to conflate the two since their connection tends to be rather inseparable.

+ +

If you wanted to assign understanding, you'd at least have to move to actual programs instead of computers. Programs may or may not have representations for zero or infinity, and may or may not be able to do skillful manipulations of either. Most symbolic math programs fare mostly better here than someone required to work with math as part of their job.

+",,user30242,,,,10/7/2019 12:22,,,,0,,,,CC BY-SA 4.0 +15772,1,,,10/7/2019 14:37,,1,123,"

Currently, I have a setup where I'm determining the position of a transmitter using the RSSI of 4 receivers. It's a simple feed-forward network with some hidden layers, where the input is the RSSI values, and the output is a 2D coordinate.

+ +

Now, if I decide to add/remove receivers, I have to train the network again, since the input size changes. This is not ideal, since the receivers can move around, disappear, etc. I have looked at some alternatives, but being pretty new to machine learning, it's difficult to pick which direction to go.

+ +

I have looked at a potential solution (stolen from another question), but I'm lost at how to implement it using tensorflow:

+ +

+ +

Any help is appreciated.

+",30250,,,,,6/21/2023 13:06,Indoor positioning with variable number of distance measurements in tensorflow,,1,0,,,,CC BY-SA 4.0 +15773,2,,15751,10/7/2019 16:14,,0,,"

Neil Slater has it right: there is most probably no reason to fear AI self-awareness as the starting point of some evil series of events.

+ +

Wikipedia [1] puts talk of machine self-awareness in the sci-fi section, among stories, not real things. Self-awareness is among a list of terms used to make machines or aliens seem as human as ordinary people, as a method of storytelling.

+ +

Self-awareness, and other human-like abilities that we possess but that machines don't have yet and will not have in the near future, can twist minds and seed conspiracy theories, but at least the exhaustive Wikipedia overview of the topic did not say anything about AI.

+ +

Maybe the concept of human-like behaviour materialises in our minds under the term self-awareness, but my source puts its origin in a different category.

+ +

[1] https://en.m.wikipedia.org/wiki/Self-awareness

+",11810,,,,,10/7/2019 16:14,,,,0,,,,CC BY-SA 4.0 +15776,1,15782,,10/7/2019 17:58,,3,251,"

The Chinese Room argument against strong AI overlooks the fact that ""the man in the room"" is acting as a macro-scale ""neurotransmitter"" of the larger system in which he resides. It does not rule out strong AI, it simply reduces to an enigmatic question: where does understanding ""reside"" and how does it epiphenomenally emerge?

+ +

What are other examples of thought experiments against or in favor of strong AI (apart from the Chinese room argument) or extensions or refutations to known experiments?

+",28343,,2444,,1/22/2021 0:38,1/22/2021 0:38,"What are examples of thought experiments against or in favour of strong AI, apart from the Chinese room argument?",,1,0,,,,CC BY-SA 4.0 +15779,2,,15730,10/7/2019 18:38,,1,,"

I would think that a computer couldn't understand infinity primarily because the systems, and the parts of those systems, that are driving the computer are themselves finite.

+",30261,,,,,10/7/2019 18:38,,,,0,,,,CC BY-SA 4.0 +15781,2,,3903,10/7/2019 20:10,,1,,"

I've spent some time thinking about this in the context of games.

+ +

The problem with reward functions is that they generally involve weighting nodes, which is useful but ultimately materially meaningless.

+ +

Here are two materially meaningful rewards:

+ +

COMPUTATIONAL RESOURCES

+ +

Consider a game where an AI is competing not for points, but for processor time and memory.

+ +

The better the algorithm performs at the game, the more memory and processing it has access to. This has a practical effect: the more resources available to the automaton, the stronger its capabilities (i.e. its rationality is less bounded in terms of the time and space available to make a decision). Thus the algorithm would be ""motivated"" to prevail in such a contest.

+ +

ENERGY

+ +

Any automata with a sufficient degree of ""self awareness"", here specifically referring to the knowledge that it requires energy to process, would be motivated to self-optimize its own code to eliminate unnecessary flipping of bits (unnecessary energy consumption.)

+ +

Such an algorithm would also be motivated to ensure its power supply so that it can continue to function.

+",1671,,,,,10/7/2019 20:10,,,,0,,,,CC BY-SA 4.0 +15782,2,,15776,10/7/2019 20:40,,3,,"

An excellent book summarizing the development of thought in this area over several hundred years is Mind Design II, edited by John Haugeland. This book contains a collection of essays written by the major thinkers in this area through until about the 1990s. A brief summary of some of the major ideas is:

+ +
    +
  1. Descartes: Minds are spirits, brains are bodies. Minds attach to brains in a special part of the brain. Descartes has a number of thought experiments, but they mostly have a Theological root. Most modern scientists do not find these to be satisfying because the resolve metaphysical questions about the mind by assuming you already believe in a particular interpretation of a God.
  2. +
  3. Gilbert Ryle argues in the late 1940s that concepts like the Mind are in some sense vacuous. Asking whether a machine has a mind or not is like asking whether a field contains two cows or a pair of cows: this is a conceptual distinction that has no relation to the real world. Because of this, its study is futile. The parallel movement of Behaviorism in psychology also viewed the study of minds and mental states as a fool's errand. Rather, this movement focused on studying behaviors that could be directly measured.
  4. +
  5. Turing publishes his seminal paper in AI, in 1950. In this paper, Turing anticipates nearly all later objections to AI, and summarizes reasonable refutations of them. This paper contains a number of short thought experiments. One of the more notable ones is the Turing Test. A key observation Turing makes is that if a machine behaves in a way indistinguishable from a human, then any arguments used to reject its claim to intelligence (and even to consciousness) appear to work equally well on other humans. Searle doesn't really address this in his later argument. Basically, I have no way to know for sure whether you have subjective experiences of the same kind I have. The fact that you're made of roughly the same kind of meat as I am seems like a shaky explanation. For example, would humanoid aliens that act like us also be intelligent? What if it were found that their brains took some radically different form from ours? What if it were found that their brains were really networks of transistor-like machines?

  6. +
  7. The Cognitivist revolution spanned a number of fields starting in the 1960s. This movement held that behaviorism was wrong for two basic reasons. First, Behaviorism does not account for the subjective feelings of existence or understanding. Second, Behaviorism could not explain phenomena like language, which appeared to involve reasoning logically about symbols. Notable authors in this period are Chomsky and Fodor, who argued that minds were essentially computer programs that happened to be running on brains instead of computers. These ideas dominated AI, Psychology and Linguistics for about 30-40 years. Fodor's works contain a number of thought experiments involving language and programs.

  8. +
  9. John Searle's Chinese Room argument caused problems for the cognitivists, because it presents a convincing chain of logic showing that a program that does complicated tasks (like linguistics) does not subjectively understand what it is doing. Various objections are raised and Searle responds to them in his publication of the argument. Objections to Searle's arguments form the basis of several later movements in philosophy of AI.
  10. +
  11. The Connectionists argue that we can represent the behavior of the brain (and thus, the mind) with a brain-like program. This paralleled increasing interest in artificial neural networks. An example of a connectionist thought experiment is to imagine a simulation of an entire human brain, conducted with perfect fidelity (for instance, we accurately model all the quantum mechanical interactions). If such a machine were not conscious, opponents need to answer the question of where consciousness is hiding (Is it in the meat? Then see Turing. Is it in some yet-undiscovered property of neurons? Then the burden of proof is definitely on the person proposing such a hidden property, and they must explain why we cannot simulate it).
  12. +
  13. The Churchlands and others argued, much as Ryle, that minds as the public understand them are a sort of ""folk theory"", that appear to explain something about reality to lay people, but are actually a dead end. Concepts like ""beliefs"" and ""experiences"" are, according to these arguments, akin to aether or phlogiston: things that our current, malformed, understandings of neuroscience seem to require, but that in fact have no basis in reality, even though nearly everyone believes in them at present. This line of work contains many extensions of the kinds of thought experiments Turing hinted at.
  14. +
  15. Brooks and others advance the theory of embodied cognition, in which the mind is not confined to the body, but is actually a property of a body plus the environment it is placed in. An example thought experiment from this school (by Andy Clarke if I recall correctly) is to imagine a person with profound loss of the ability to form new long term memories. This person carries around a notebook. The cover says ""Your memory"". The first page contains instructions explaining that the person has lost the ability to remember things, and that they write new things they want to remember in the book. The person can remember things for, say, 15 minutes at a time, and the book is very well organized. Where is this person's mind exactly? Is it in their brain (which does not remember anything 15 minutes in the past), or is it in the brain and the book, which together can remember and reason about things essentially like a normal person. If it is in the brain and book, what if we replace the book with a computer inside the person's head?
  16. +
+",16909,,,,,10/7/2019 20:40,,,,0,,,,CC BY-SA 4.0 +15783,2,,15730,10/7/2019 20:46,,4,,"

The premise assumes that humans ""understand"" infinity. Do we?

+ +

I think you'd need to tell me what criterion you would use, if you wanted to know whether I ""understand"" infinity, first.

+ +

In the OP, the idea is given that I could ""prove"" I ""understand"" infinity, because ""In principle, if we have enough resources (time etc.), we can count infinitely many things (including abstract, like numbers, or real).""

+ +

Well, that's simply not true. Worse, if it were true (which it isnt), then it would be equally true for a computer. Here's why:

+ +
    +
  1. Yes, you can in principle count integers, and see that counting never ends.
  2. +
  3. But even if you had enough resources, you could never ""count infinitely many things"". There would always be more. That's what ""infinite"" means.
  4. +
  5. Worse, there are multiple orders (""cardinalities"") of infinity. Most of them, you can't count, even with infinite time, and perhaps not even with infinite other resources. They are actually uncountable. They literally cannot be mapped to a number line, or to the set of integers. You cannot order them in such a way that they can be counted, even in principle.
  6. +
  7. Even worse, how do you do that bit where you decide ""in principle"" what I can do, when I clearly can't ever do it, or even the tiniest part of it? That step feels layman-style assumptive, not actually seeing the issues in doing it rigorously. It may not be trivial.
  8. +
  9. Last, suppose this was your actual test, like in the OP. So if I could ""in principle with enough resources (time etc) count infinitely many things"", it would be enough for you to decide I ""understood"" infinity (whatever that means). Then so could a computer with sufficient resources (RAM, time, algorithm). So the test itself would be satisfied trivially by a computer if you gave the computer the same criteria.
  10. +
+ +

I think maybe a more realistic line of logic is that what this question actually shows, is that most (probably all?) humans actually do not understand infinity. So understanding infinity is probably not a good choice of test/requirement for AI.

+ +

If you doubt this, ask yourself. Do you honestly, truly, and seriously, ""understand"" a hundred trillion years (the possible life of a red dwarf star)? Like, can you really comprehend what its like, experiencing a hundred trillion years, or is it just a 1 with lots of zeros? What about a femtosecond? Or a time interval of about 10^-42 seconds? Can you truly ""understand"" that? A timescale compared to which, one of your heartbeats, compares like one of your heartbeats compares to a billion billion times the present life of this universe? Can you really ""understand infinity"", yourself? Worth thinking about......

+",5817,,5817,,10/7/2019 20:55,10/7/2019 20:55,,,,1,,,,CC BY-SA 4.0 +15784,1,15786,,10/7/2019 21:53,,2,241,"

I would like to work on a project where I teach an NN to play N64 games. To my current understanding, I would need an emulator?

+ +

I can do the machine learning side of it; I'm just unsure how I can give the NN access to the game's controls, such as left, right, up or down.

+ +

Where could I find more information on doing so and is using an Emulator the right path to take?

+",30270,,,,,10/8/2019 0:28,How can I teach a computer to play N64 games using Neural Nets?,,1,0,,,,CC BY-SA 4.0 +15785,2,,8031,10/7/2019 21:59,,0,,"

I see that the objective is to achieve multiple goals using the A* algorithm. If your problem is more like a Traveling Salesman Problem, which is what it kind of sounds like, you can refer to this post: https://stackoverflow.com/questions/4453477/using-a-to-solve-travelling-salesman. The problem can be converted to a graph search problem, and could utilize a Minimum Spanning Tree. A* can be used to compute edge weights.

+",30269,,,,,10/7/2019 21:59,,,,0,,,,CC BY-SA 4.0 +15786,2,,15784,10/8/2019 0:28,,3,,"

The common options are:

+ +
    +
  1. Implement a simple simulation of the game, perhaps without graphics. Train the agent on the simulation. This is usually not very satisfying, because the agent won't play the real game.
  2. Use an emulator that allows you to inspect the game's memory, and input values directly at the controls. BizHawk is a good choice for SNES and I think also for N64 emulation. Use a pipe to connect this program with your agent. Pass across the pipe any key values from the game's memory that represent the state of the game, and send back actions that represent the inputs the agent makes in that state. Usually this is the best option (a rough sketch of this setup follows the list).
  3. Scrape the screen in real time. Tools like SkyScraper will allow you to capture the raw pixels displayed in a window. You should then be able to pass these pixels to your agents via a pipe. You may be able to use tools like Selenium to pass values back from the agent to the game. This route is usually a lot more intensive than option 2, but is the most realistic. It may also require very good hardware to keep up with the game in real time.
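
Below is a minimal sketch of option 2, assuming you have already scripted the emulator side (for example with the emulator's scripting support) to send a few values read from the game's memory once per frame over a local TCP socket, and to apply whatever action token it reads back. The port number, message format, and action names are invented for illustration; they are not part of any real emulator API.

    import socket

    HOST, PORT = "127.0.0.1", 5555   # hypothetical address of the emulator-side script

    def choose_action(state):
        # Placeholder policy; replace with your trained network's decision.
        x, y, speed = state
        return "A" if speed < 1.0 else "NOOP"

    with socket.create_connection((HOST, PORT)) as conn:
        stream = conn.makefile("rw")
        for line in stream:                          # e.g. "12.0,3.5,0.8\n" once per frame
            state = [float(v) for v in line.strip().split(",")]
            stream.write(choose_action(state) + "\n")
            stream.flush()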
+",16909,,,,,10/8/2019 0:28,,,,0,,,,CC BY-SA 4.0 +15787,2,,15730,10/8/2019 1:14,,1,,"

The ""concept"" of infinity is 1 thing to understand. I can represent it with 1 symbol (∞).

+ +
+

As I mentioned before, humans understand infinity because they are + capable, at least, counting infinite integers, in principle.

+
+ +

By this definition humans do not understand infinity. Humans are not capable of counting infinite integers. They will die (run out of compute resources / power) at some time. It would probably be easier in fact to get a computer to count towards infinity than it would be to get a human to do so.

+",30272,,,,,10/8/2019 1:14,,,,2,,,,CC BY-SA 4.0 +15790,1,,,10/8/2019 7:11,,2,1020,"

What is the difference between a semantic network and an ontology? How are they related? I have not found any article that describes how these two concepts are related.

+",30283,,2444,,2/7/2021 22:52,2/7/2021 22:52,What is the difference between a semantic network and an ontology?,,1,0,,,,CC BY-SA 4.0 +15791,2,,15790,10/8/2019 8:24,,2,,"

A semantic network is a way to implement an ontology. An ontology is just a generalised way of representing knowledge in a particular domain, and there are multiple ways of doing so. The key that distinguishes an ontology from, say, Wikipedia, is that it is formally defined, so that the knowledge represented can be used in programs to reason with.

+ +

Semantic networks can be used to do that. Please note that there are many types of semantic networks, so you can use them to represent a wide range of relevant information. Since the point of an ontology is to show the relationships between entities relevant to a domain, they are usually represented as networks — if the information is stored as RDF triples, these can usually be visualised as an equivalent network.
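
As a tiny illustration (my own example, not taken from any particular ontology), here are a few RDF-style triples and the labelled semantic-network edges they correspond to:

    # Each (subject, predicate, object) triple is one labelled edge of the network.
    triples = [
        ("Dog", "is_a", "Mammal"),
        ("Mammal", "is_a", "Animal"),
        ("Dog", "has_part", "Tail"),
    ]
    for subject, predicate, obj in triples:
        print(f"{subject} --{predicate}--> {obj}")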

+ +

So ontology is the broader, more general term, whereas a semantic network is a more specific way of representing information.

+",2193,,,,,10/8/2019 8:24,,,,0,,,,CC BY-SA 4.0 +15792,1,,,10/8/2019 9:00,,1,232,"

I built a three-layer neural network (first is 1D convolutional and the remaining two are linear). It takes an input of 5 angles in radians, and outputs two numbers from 0 to 1, which are respectively the probability of failure or success. The NN is trained in a simulation.

+

The simulation goes this way: it takes 5 angles in radians and calculates the vector sum of 5 vectors having $x$ as module and $\alpha$ as angles (taken from the input). It returns $1$ if the vector sum has a module greater than $y$, or $0$ if it is less than $y$.

+

My intention is to be able to tell sequences of radians that will generate vectors with a sum greater than $y$ in module from the ones which won't.

+

Which would be the best configuration to achieve this? Is the configuration I set up (1D convolution layer + 2 linear layers) efficient? If so, would it be easy to find the right size for the convolution? Or should I just remove it?

+

I noticed that if I change the order of the input angles the output of the simulation will be the same. Is there a particular configuration you should use when dealing with these cases?

+",30287,,1847,,3/28/2022 12:37,3/28/2022 12:37,An architecture for classifying distance from origin for a sum of vectors?,,1,0,0,,,CC BY-SA 4.0 +15797,1,,,10/8/2019 16:20,,2,22,"

The following figure (omitted here) is from the last page of the YOLOv3 paper, highlighting how mAP is an unfair metric for evaluating object detectors. The figure shows two hypothetical object-detector results which the author says give the same perfect mAP, while visually the first detector is clearly more accurate than the other.

+ +

According to my understanding, the two detectors do not give the same mAP. This is how I calculate it for each detector:

+ +
Detector 1, 'Dog' class AP table:
+ ______________________________________ 
+| Object  | True? | Precision | Recall |
+|_________|_______|___________|________|
+| Dog_99% | Yes   |     1     |    1   |  
+|_________|_______|___________|________|
+Hence, AP_dog = 1
+
+Detector 1, 'Person' class AP table:
+ ________________________________________
+| Object    | True? | Precision | Recall |
+|___________|_______|___________|________|
+|Person_99% | Yes   |     1     |    1   |  
+|___________|_______|___________|________|
+Hence, AP_person = 1
+And by continuing doing so for the other 7 classes in the dataset, mAP=1. 
+
+Detector 2, 'Dog' class AP table:
+ ______________________________________ 
+| Object  | True? | Precision | Recall |
+|_________|_______|___________|________|
+| Dog_48% | Yes   |     1     |    1   |  
+|_________|_______|___________|________|
+Hence, AP_dog = 1
+
+Detector 2, 'Bird' class AP table:
+ _______________________________________
+| Object   | True? | Precision | Recall |
+|__________|_______|___________|________|
+| Bird_90% | Yes   |     1     |    1   | 
+| Bird_89% | No    |     0.5   |    1   | 
+|__________|_______|___________|________|
+Hence, AP_bird = 0.75
+And by continuing doing so for the other 7 classes in the dataset, mAP is less than 1 because AP for at least one class is less than one (AP_bird).
+
+ +

Hence, according to my understanding, the mAP for the first detector is 1, and for the second detector it is less than 1. What mistake am I making in the calculation? Or is there some assumption in the paper that I'm not considering?

+",30301,,,,,10/8/2019 16:20,How mAP is unfair evaluation metric for Object Detection?,,0,0,,,,CC BY-SA 4.0 +15800,1,15809,,10/8/2019 19:48,,3,616,"

I am recording the vibrations of an AC Motor (50Hz Europe) and I am trying to find out whether it is powered on or not. When I record these vibrations, I basically get the vibration values ($-1$ to $+1$) over time.

+ +

I would like to develop a program to detect the presence of a 50Hz sine wave on a steady stream of input data. I will have $X$ and $Y$ measurements, where $X$ represents amplitude and $Y$ the time (sampled at 100Hz - it is possible to increase the sample rate to 200Hz or 400Hz at max)

+ +

Is this a task suited for a neural network, and if so, would it be less efficient than other means of detection?

+",30307,,2444,,11/6/2019 23:08,11/6/2019 23:08,Can a neural network be used to detect sine waves?,,2,4,,,,CC BY-SA 4.0 +15801,2,,15730,10/8/2019 20:51,,1,,"

Just food for thought: how about if we try to program infinity not in theoretical, but in practical terms? Thus, if we deem something that a computer cannot calculate, given its resources, to be infinity, it would fulfill the purpose. Programmatically, it can be implemented as follows: if the input is less than the available memory, it's not infinity. Subsequently, infinity can be defined as something that returns an out-of-memory error on an evaluation attempt.

+",3992,,2444,,10/9/2019 1:15,10/9/2019 1:15,,,,0,,,,CC BY-SA 4.0 +15802,1,16083,,10/8/2019 21:21,,6,644,"

I'm studying about different selection methods in genetic algorithms. My question is about the Stochastic Universal Sampling (SUS) selection method. I know that each individual will occupy a segment of the line according to its fitness value and then equally spaced pointers will be placed over this line.

+

I want to know how the distance between pointers is determined. I have seen 1/6 and 1/4 as the distance between pointers. I want to choose the number of pointers dynamically according to the situation. I want to know what conditions or factors affect the determination of this distance. For example, when do we decide to choose 1/4 as distance? I want to know if it is possible to change the number of samples in each iteration according to different conditions or situations. If so, what are these conditions?

+",30311,,2444,,1/30/2021 22:03,1/30/2021 22:03,How is the distance between pointers in Stochastic Universal Sampling determined?,,1,0,,,,CC BY-SA 4.0 +15803,2,,15730,10/8/2019 21:25,,3,,"

I think the concept that is missing in the discussion, so far, is symbolic representation. We humans represent and understand many concepts symbolically. The concept of Infinity is a great example of this. Pi is another, along with some other well-known irrational numbers. There are many, many others.

+ +

As it is, we can easily represent and present these values and concepts, both to other humans and to computers, using symbols. Both computers and humans, can manipulate and reason with these symbols. For example, computers have been performing mathematical proofs for a few decades now. Likewise, commercial and/or open source programs are available that can manipulate equations symbolically to solve real world problems.

+ +

So, as @JohnDoucette has reasoned, there isn't anything that special about Infinity vs many other concepts in math and arithmetic. When we hit that representational brick wall, we just define a symbol that represents ""that"" and move forward.

+ +

Note, the concept of infinity has many practical uses. Any time you have a ratio and the denominator ""goes to"" zero, the value of the expression ""approaches"" infinity. This isn't a rare thing, really. So, while your average person on the street isn't conversant with these ideas, lots and lots of scientists, engineers, mathematicians and programmers are. It's common enough that software has been dealing with Infinity symbolically for a couple decades, now, at least. E.g. Mathematica: http://mathworld.wolfram.com/Infinity.html

+",30310,,30310,,10/8/2019 21:40,10/8/2019 21:40,,,,0,,,,CC BY-SA 4.0 +15804,2,,15792,10/8/2019 23:17,,2,,"

I'm not completely sure I understand your simulation. What I think you are doing is:

+ +
    +
  1. Generate 5 angles specified in radians (are these always normalized to within $(0, 2\pi)$?).
  2. Interpret each angle as a unit vector in a 2d space.
  3. Add the unit vectors together, yielding a vector that lies somewhere inside a circle with radius 5.
  4. Ask whether the summed vector is more or less than a distance $y$ from the origin of the circle.
+ +

If you're doing that, your problem looks like trying to learn a separation of two concentric rings, which is a well known benchmarking problem for classification.

+ +

I am reasonably certain you can learn this pattern with several layers of ReLU neurons. I'm not certain that convolutional layers will help you much here. The main patterns I'd expect the network to learn are:

+ +
    +
  • perhaps 2-3 layers to learn whether the point lies far away from the origin in each of several different directions.
  • +
  • 1 layer to learn where the decision boundary is in each of 4 directions away from the origin.
  • +
  • 1 layer to inclusive-OR the 4 decision boundaries together.
  • +
+ +

My guess is that this is fairly easy to learn with 4 layers of 8 ReLU neurons, or something like it.
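
For concreteness, here is a minimal PyTorch sketch of that kind of network (the sizes follow the rough guess above; there is no training loop, and the layer count and width are not tuned):

    import math
    import torch
    import torch.nn as nn

    # 5 input angles -> a few small ReLU layers -> 2 outputs (fail/success logits).
    model = nn.Sequential(
        nn.Linear(5, 8), nn.ReLU(),
        nn.Linear(8, 8), nn.ReLU(),
        nn.Linear(8, 8), nn.ReLU(),
        nn.Linear(8, 8), nn.ReLU(),
        nn.Linear(8, 2),
    )

    angles = torch.rand(32, 5) * 2 * math.pi   # a batch of 32 random angle vectors
    logits = model(angles)                     # shape (32, 2)
    print(logits.shape)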

+",16909,,,,,10/8/2019 23:17,,,,1,,,,CC BY-SA 4.0 +15806,2,,11781,10/9/2019 2:33,,3,,"

I have found some clues in Maei's thesis (2011): “Gradient Temporal-Difference Learning Algorithms.”

+ +

According to the thesis:

+ +
    +
  1. GTD2 is a method that minimizes the projected Bellman error (MSPBE).
  2. GTD2 is convergent in the non-linear function approximation case (and off-policy).
  3. GTD2 converges to a TD-fixed point (the same point as semi-gradient TD).
  4. GTD2 is slower to converge than usual semi-gradient TD.
+ +
+

It doesn't readily apply to non-linear function approximation.

+
+ +

No, it does.

+ +
+

It doesn't yield a good solution.

+
+ +

No, it does. The TD-fixed point is the same point that the solution of semi-gradient TD (which is generally used) converges to, so neither method has an edge on that.

+ +

The only explanation seems to be practical convergence rate.

+ +

To quote his words:

+ +
+

Some of our empirical results suggest that gradient-TD method maybe slower than conventional TD methods on problems on which conventional TD methods are sound (that is, on-policy learning problems).

+
+",9793,,,,,10/9/2019 2:33,,,,0,,,,CC BY-SA 4.0 +15807,2,,15800,10/9/2019 6:34,,0,,"

You can implement an autoencoder network. An autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data, and then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible. When you train the autoencoder with 50Hz sine wave data, your model can reconstruct the input correctly if it gets 50Hz sine wave data as input. When an input's reconstruction loss is less than your threshold value, you can say the given input is a 50Hz sine wave.
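
A minimal PyTorch sketch of this idea (my own illustration; the window length, layer sizes, and threshold are arbitrary and would need tuning on real recordings):

    import torch
    import torch.nn as nn

    WINDOW = 100   # e.g. one second of samples at 100 Hz

    autoencoder = nn.Sequential(
        nn.Linear(WINDOW, 16), nn.ReLU(),   # encoder
        nn.Linear(16, WINDOW),              # decoder
    )
    criterion = nn.MSELoss()

    def reconstruction_loss(window):
        window = torch.as_tensor(window, dtype=torch.float32)
        return criterion(autoencoder(window), window).item()

    # After training the autoencoder only on windows recorded with the motor on,
    # classify a new window by thresholding its reconstruction error.
    THRESHOLD = 0.05   # made-up value; calibrate on held-out data

    def motor_is_on(window):
        return reconstruction_loss(window) < THRESHOLD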

+",30288,,,,,10/9/2019 6:34,,,,0,,,,CC BY-SA 4.0 +15808,1,15811,,10/9/2019 6:53,,0,127,"

I'm trying to solve dLoss/dW1. The network is as in picture below with identity activation at all neurons:

+ +

+ +

Solving dLoss/dW7 is simple as there's only 1 way to output:

+ +

$Delta = Out-Y$

+ +

$Loss = abs(Delta)$

+ +

The case when Delta>=0, partial derivative of Loss over W7 is:

+ +

$\dfrac{dLoss}{dW_7} = \dfrac{dLoss}{dOut} \times \dfrac{dOut}{dH_4} \times \dfrac{dH_4}{dW_7} \\ += \dfrac{d(Out-Y)}{dOut} \times \dfrac{d(H_4W_{13} + H_5W_{14})}{dH_4} \times \dfrac{d(H_1W_7 + H_2W_8 + H_3W_9)}{dW_7} \\ += 1 \times W_{13} \times H_1$

+ +

However, when solving dLoss/dW1 the situation is very different: there are 2 chains from the output back to W1, through W7 and through W10. Now, what should the chain for $\dfrac{dLoss}{dW_1}$ be?

+ +

Furthermore, at an arbitrary layer, with all outputs of all layers already calculated plus all gradients of weights on the right side also calculated, what should a formula for $\dfrac{dLoss}{dW}$ be?

+",2844,,2844,,10/9/2019 8:07,10/9/2019 9:26,Backpropagation: Chain Rule to the Third Last Layer,,1,0,,,,CC BY-SA 4.0 +15809,2,,15800,10/9/2019 6:57,,1,,"
+

Is this a task suited for a neural network

+
+ +

Yes. You have choices in fact:

+ +
    +
  • A fully-connected network would be simplest architecture, and would work if you gave it some time window of samples (e.g. every 0.5 seconds or every 50 samples) and supervised training data - sets of samples with sensor readings and the ground truth value of whether the motor was on or not.

  • +
  • A 1D convolutional neural network would likely be most efficient and robust to train, and would take the same inputs and outputs as the fully-connected network.

  • +
  • A recurrent neural network would be tricker to train, but a nicer design as you could feed it samples one at a time. The input would be the current sample, and output the probability that the motor was on. When training this, you would also want to provide it transitions between the motor being on and off. The nice feature about this is that it should give you quick feedback about whether the motor was on or off - with the caveat that it may be more likely to trigger intermittent false positives, so a little extra post-processing might be required.

  • +
+ +

All of the above require you to collect training data, ideally in situations identical to planned use of the detector. So if the motor is mounted somewhere that could experience other vibrations, a few of those kind of scenarios should be simulated with motor both on and off.

+ +
+

and if so, would it be less efficient than other means of detection?

+
+ +

In terms of computing power and effort on your part, you may find that an off-the-shelf Fast Fourier Transform (FFT) library function with a simple threshold at your target frequency will make a robust and simple detector, with no need for a neural network.

+ +

Typically for specific frequency detection you would take a window of samples, adjust them (using e.g. Hamming window) to reduce edge effects which would appear as frequencies in the conversion, and then run FFT. This combination is so common that you may find it already combined in the FFT library. For more on this, you would want to ask in Signal Processing Stack Exchange, where use of FFT is well understood.
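
For illustration, here is a small NumPy sketch of that FFT-plus-threshold approach (the threshold value is made up; in practice you would calibrate it on recordings taken with the motor known to be on and off):

    import numpy as np

    def motor_on(samples, sample_rate=200, target_hz=50, threshold=5.0):
        # Hamming window to reduce edge effects, then magnitude spectrum.
        windowed = samples * np.hamming(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        target_bin = np.argmin(np.abs(freqs - target_hz))
        noise_floor = np.median(spectrum) + 1e-12
        return spectrum[target_bin] / noise_floor > threshold

    # One second of a noisy 50 Hz vibration sampled at 200 Hz.
    t = np.arange(200) / 200.0
    print(motor_on(np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(200)))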

+ +

If the environment is noisy or the target frequency can drift (making it hard to set a simple threshold) then you could also combine FFT with a neural network. This combination can solve much more complicated signal detection, and is used in speech processing for instance.

+ +
+

sampled at 100Hz - it is possible to increase the sample rate to 200Hz or 400Hz at max

+
+ +

For reliably detecting a 50Hz signal, I would say that a 200Hz sample rate is the minimum. The theoretical minimum is 100Hz (i.e. twice the signal frequency), but that may give you problems with noise and the possibility that your sample points just happen to fall on low amplitude parts of the oscillations, making it look like the motor is off even when it is on.

+",1847,,1847,,10/9/2019 8:14,10/9/2019 8:14,,,,2,,,,CC BY-SA 4.0 +15810,2,,15730,10/9/2019 7:59,,2,,"

The Questions That Computers Can Never Answer - Wired (magazine)

+ +
+ +

Computers might not be able to reach infinity at all: < https://www.nature.com/articles/35023282 >, never mind actually understand it.

+ +

Computation and computers do have implications for ""hard limits of systems.""

+ +

(https://en.wikipedia.org/wiki/Limits_of_computation)

+",25982,,25982,,11/26/2019 8:27,11/26/2019 8:27,,,,0,,,,CC BY-SA 4.0 +15811,2,,15808,10/9/2019 8:04,,0,,"

I finally solved it out but it's long, it's not only chain rule, it includes quotient rule too. And, this is only the third last layer, once a DNN has more layers then it's more complex.

+ +

$\dfrac{dLoss}{dW_1} = \dfrac{d}{dW_1}(Out-Y) = \dfrac{d}{dW_1}Out = \dfrac{d}{dW_1}(H_4W_{13} + H_5W_{14}) \\= \dfrac{d}{dW_1}H_4W_{13} + \dfrac{d}{dW_1}H_5W_{14} \\= W_{13} \times \dfrac{d}{dW_1}H_4 + W_{14} \times \dfrac{d}{dW_1}H_5 \\= W_{13} \times \dfrac{d}{dW_1}(H_1W_7 + H_2W_8 + H_3W_9) + W_{14} \times \dfrac{d}{dW_1}(H_1W_{10} + H_2W_{11} + H_3W_{12}) \\= W_{13} \times \dfrac{d}{dW_1}H_1W_7 + W_{14} \times \dfrac{d}{dW_1}H_1W_{10} \\= W_{13}W_7 \times \dfrac{d}{dW_1}H_1 + W_{14}W_{10} \times \dfrac{d}{dW_1}H_1 \\= (W_{13}W_7 + W_{14}W_{10}) \times \dfrac{d}{dW_1}(X_1W_1 + X_2W_2) \\= (W_{13}W_7 + W_{14}W_{10}) \times X_1$
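
As a quick sanity check (my own addition, assuming identity activations and Delta >= 0 as above, and treating H2 and H3 as quantities that do not depend on W1), here is a numerical finite-difference check of the final expression:

    import numpy as np

    rng = np.random.default_rng(1)
    X1, X2 = rng.normal(size=2)
    W1, W2, W7, W8, W9, W10, W11, W12, W13, W14 = rng.normal(size=10)
    H2, H3 = 0.7, -0.3        # stand-ins; they do not depend on W1
    Y = -1000.0               # chosen so that Delta = Out - Y stays positive

    def loss(w1):
        H1 = X1 * w1 + X2 * W2
        H4 = H1 * W7 + H2 * W8 + H3 * W9
        H5 = H1 * W10 + H2 * W11 + H3 * W12
        out = H4 * W13 + H5 * W14
        return abs(out - Y)

    eps = 1e-6
    numeric = (loss(W1 + eps) - loss(W1 - eps)) / (2 * eps)
    analytic = (W13 * W7 + W14 * W10) * X1
    print(np.isclose(numeric, analytic))   # True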

+",2844,,2844,,10/9/2019 9:26,10/9/2019 9:26,,,,0,,,,CC BY-SA 4.0 +15813,2,,11781,10/9/2019 10:50,,0,,"

As I understand it, the above-mentioned projection operator projects into the linear feature subspace produced from a set of feature vectors (or feature functions), that is, the space of linear combinations of features. Vanilla DQN doesn't have any such feature space, so projection into a linear subspace doesn't make sense in the DQN context. If you attempt to produce a feature space for values/Q with some NN, it wouldn't be DQN (because Q wouldn't be produced), and it wouldn't work anyway on anything but toy problems, because the number of degrees of freedom of the output would be too high.

+",22745,,,,,10/9/2019 10:50,,,,2,,,,CC BY-SA 4.0 +15816,1,,,10/9/2019 15:31,,1,48,"

So I'm working on a project where I want to predict the vehicle position from vehicle data like speed, acceleration, etc. The data that I have also comes with a timestamp for each sample (I mean that I also have a timestamp feature).

+ +

At first I thought that I should get rid of that timestamp feature because it is not relevant to my project; logically, I will not need a timestamp feature to predict the vehicle position, and it didn't make sense to me when I first took a look at the dataset. I thought other features like speed, acceleration, braking pressure, etc. are more important, and I also thought that the solution for this problem would be to use a normal deep NN or an RBFNN for making this prediction. Recently, I read some papers that show how a convolutional NN can also be used for regression, and that confused me when choosing the architecture needed for my project. This week I also watched a tutorial where an RNN/LSTM was implemented for regression tasks.

+ +

Now I'm very confused about which architecture I should use for my project. I also noticed that, if I used that timestamp feature, I could maybe use an RNN/LSTM network for this task, but I don't know if my dataset can be seen as a time-series dataset; actually, the vehicle position doesn't depend on the time as far as I can tell.

+ +

Hopefully someone can answer based on experience. It would also be great to have some papers or references where I can read more.

+",30327,,,,,10/9/2019 15:31,How to choose the suitable Neural Network Architecture for Regression Tasks,,0,0,,,,CC BY-SA 4.0 +15817,1,15822,,10/9/2019 15:47,,2,330,"

I'm just started to learn deep learning and I have a question about this neural network:

+ +

+ +

I think $h_1$, $h_j$ and $h_n$ are perceptrons. So, if they are perceptrons, all of them will have an activation function.

+ +

I'm wondering if it is possible to have only one activation function, sum the outputs of all of the perceptrons, and pass that sum to that activation function. The output of this activation function will be $y$.

+ +

I will have this network, where $H1$, $Hj$ and $Hn$ don't have activation function: +

+ +

The input for the activation function will be the sum of the outputs of $H1$, $Hj$ and $Hn$, without being processed by an activation function.

+ +

Is that possible (or is it a good idea)?

+",4920,,2444,,10/10/2019 2:14,10/10/2019 3:33,Can multiple activation functions be replaced with a single activation function?,,2,3,,,,CC BY-SA 4.0 +15818,1,,,10/9/2019 15:51,,1,26,"

Turns out that it looks like I will be approximating a 100x10 matrix in my project thesis. I have the following equation

+ +

$y = Dx$,

+ +

where $y$ is $(100 \times 1)$, $D$ is $100 \times 10$ and $x$ is $10 \times 1$

+ +

It is the transformation matrix $D$ that I will be approximating by iterating over quite a lot of pairs $(x, y)$. I was wondering if it is possible to output a $100 \times 10$ matrix as the output of an FFNN architecture, or is there any other way to do it? As this is not an image or anything similar which gives rise to pooling etc., my guess is that a CNN is not ideal here.

+ +

So tl;dr: Is it easy to use a feed-forward architecture to approximate a matrix?

+ +

EDIT: I figured out that $D$ obviously doesn't need to be output as a matrix; I can just have 100 output nodes, duh. Thanks.

+",30329,,30329,,10/9/2019 16:26,10/9/2019 16:26,Matrix-output for FFNN?,,0,1,,,,CC BY-SA 4.0 +15819,2,,15817,10/9/2019 17:24,,0,,"

Of course, it is possible, but why do you want to do this?

+ +

Let's think about it. Imagine the weights for that layer are a matrix full of ones. If you have no bias, then the output of that layer would be the sum of all the values in the $h_1$, $h_j$, $h_n$ neurons, right? So, it is possible to sum all the values together, give the result to an activation function, and then you'll have your output.

+",30327,,2444,,10/10/2019 2:15,10/10/2019 2:15,,,,0,,,,CC BY-SA 4.0 +15820,1,15823,,10/9/2019 17:45,,28,4162,"

Is there any research on the development of attacks against artificial intelligence systems?

+ +

For example, is there a way to generate a letter ""A"", which every human being in this world can recognize but, if it is shown to the state-of-the-art character recognition system, this system will fail to recognize it? Or spoken audio which can be easily recognized by everyone but will fail on the state-of-the-art speech recognition system.

+ +

If there exists such a thing, is this technology a theory-based science (mathematics proved) or an experimental science (randomly add different types of noise and feed into the AI system and see how it works)? Where can I find such material?

+",30335,,1671,,10/11/2019 0:34,10/11/2019 19:20,Is there any research on the development of attacks against artificial intelligence systems?,,8,4,,,,CC BY-SA 4.0 +15821,2,,15820,10/9/2019 18:05,,12,,"

Sometimes if the rules used by an AI to identify characters are discovered, and if the rules used by a human being to identify the same characters are different, it is possible to design characters that are recognized by a human being but not recognized by an AI. However, if the human being and AI both use the same rules, they will recognize the same characters equally well.

+ +

A student I advised once trained a neural network to recognize a set of numerals, then used a genetic algorithm to alter the shapes and connectivity of the numerals so that a human could still recognize them but the neural network could not. Of course, if he had then re-trained the neural network using the expanded set of numerals, it probably would have been able to recognize the new ones.

+",28348,,,,,10/9/2019 18:05,,,,0,,,,CC BY-SA 4.0 +15822,2,,15817,10/9/2019 18:34,,2,,"

TL;DR: This is possible but removing the activations will decrease the expressivity of the network because it will become mathematically equivalent to a single neuron.

+ +

Mathematical Explanation

+ +

The outputs of your intermediate neurons (in the absence of activation functions) are now:

+ +

$\text{(1)}\quad H_i(x) = \sum_{j=1}^nw_{ij}\cdot x_j+b_i$

+ +

You are then summing each $H_i$:

+ +

$\text{(2)}\quad\mathcal{H(x)}=\sum_{i=1}^nH_i(x)$

+ +

You then pass $\mathcal{H(x)}$ to some activation say $g$:

+ +

$\text{(3)}\quad\hat y = g(\mathcal{H(x)})$

+ +

The trouble is that the inner term $\mathcal{H(x)}$ mathematically reduces to a single linear operation on $x$. Proof:

+ +
    +
  1. $\mathcal{H(x)}=\sum_{i=1}^nH_i(x)$. Substituting in (1):
  2. $\mathcal{H(x)}=\sum_{i=1}^n((\sum_{j=1}^nw_{ij}\cdot x_j)+b_i)$. This can be re-arranged:
  3. $\mathcal{H(x)}=(\sum_{j=1}^n(\sum_{i=1}^nw_{ij})\cdot x_j)+\sum_{i=1}^nb_i$. But this reduces to:
  4. $\mathcal{H(x)}=\sum_{j=1}^n\tilde w_{j}\cdot x_j+\tilde b$. Where $\tilde w_{j},\tilde b$ are scalars.
+ +

Thus, without the non-linear activations (3) mathematically reduces to a single neuron.
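
A quick numerical illustration of this collapse (my addition, just to make the algebra concrete):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    W = rng.normal(size=(n, n))   # w_ij: weights of the n linear neurons
    b = rng.normal(size=n)        # b_i: their biases
    x = rng.normal(size=n)

    # Sum of the n linear neurons, as in (1) and (2).
    H_sum = np.sum(W @ x + b)

    # The single equivalent neuron: w~_j = sum_i w_ij, b~ = sum_i b_i.
    w_tilde = W.sum(axis=0)
    b_tilde = b.sum()
    print(np.isclose(H_sum, w_tilde @ x + b_tilde))   # True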

+",28343,,28343,,10/10/2019 3:33,10/10/2019 3:33,,,,0,,,,CC BY-SA 4.0 +15823,2,,15820,10/9/2019 21:56,,28,,"

Yes, there is some research on this topic, which can be called adversarial machine learning, which is more an experimental field.

+ +

An adversarial example is an input similar to the ones used to train the model, but that leads the model to produce an unexpected outcome. For example, consider an artificial neural network (ANN) trained to distinguish between oranges and apples. You are then given an image of an apple similar to another image used to train the ANN, but that is slightly blurred. Then you pass it to the ANN, which unexpectedly predicts the object to be an orange.

+ +

Several machine learning and optimization methods have been used to detect the boundary behaviour of machine learning models, that is, the unexpected behaviour of the model that produces different outcomes given two slightly different inputs (but that correspond to the same object). For example, evolutionary algorithms have been used to develop tests for self-driving cars. See, for example, Automatically testing self-driving cars with search-based procedural content generation (2019) by Alessio Gambi et al.

+",2444,,2444,,10/10/2019 13:17,10/10/2019 13:17,,,,3,,,,CC BY-SA 4.0 +15824,1,,,10/9/2019 23:49,,4,909,"

It is a well-known math fact that composition of linear/affine transformations is still linear/affine. For a naive example,

+

$\textbf{A}_1\textbf{A}_2\textbf{x}$ is simply $\textbf{A}\textbf{x}$ where $\textbf{A}=\textbf{A}_1\textbf{A}_2$

+

Any one knows why in practice multiple linear layers tend to work better, even though it is mathematically equivalent to a single linear layer? Any reference is appreciated!

+",30344,,28343,,2/15/2021 14:05,2/15/2021 14:05,Any explanation why multiple linear layers work better than a single linear layer in practice?,,1,2,,,,CC BY-SA 4.0 +15825,2,,15824,10/10/2019 1:08,,2,,"

The key is that the layers of neurons in neural networks are not affine transformations. All commonly used neurons have some kind of non-linearity. The simplest of these is the Rectified Linear Unit (ReLU), which takes the form $y = x$ when $x > 0$ and $y = 0$ for all other values, where $x$ is a weighted sum of the inputs to the neuron.
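
For instance (a small illustration of my own), a single ReLU already breaks the affine-composition argument, because it is not additive:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    print(relu(np.array([-1.5, 0.0, 2.0])))          # [0. 0. 2.]
    print(relu(-1.0 + 2.0), relu(-1.0) + relu(2.0))  # 1.0 vs 2.0 -> not linear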

+",16909,,,,,10/10/2019 1:08,,,,2,,,,CC BY-SA 4.0 +15826,5,,,10/10/2019 2:27,,0,,,2444,,2444,,10/10/2019 2:27,10/10/2019 2:27,,,,0,,,,CC BY-SA 4.0 +15827,4,,,10/10/2019 2:27,,0,,"For questions related to variational auto-encoders (VAEs). The first VAE was proposed in ""Auto-Encoding Variational Bayes"" (2013) by Diederik P. Kingma and Max Welling. There are several other VAEs, for example, the conditional VAE.",2444,,2444,,10/10/2019 2:27,10/10/2019 2:27,,,,0,,,,CC BY-SA 4.0 +15828,5,,,10/10/2019 2:29,,0,,,2444,,2444,,10/10/2019 2:29,10/10/2019 2:29,,,,0,,,,CC BY-SA 4.0 +15829,4,,,10/10/2019 2:29,,0,,"For questions related to adversarial machine learning, which is a branch of machine learning focused on the study of adversarial examples, which are malicious inputs designed to fool machine learning models.",2444,,2444,,10/10/2019 2:29,10/10/2019 2:29,,,,0,,,,CC BY-SA 4.0 +15832,2,,15820,10/10/2019 6:00,,4,,"

Isn't that essentially what chess does? For example, a human can recognize that an Exchange Ruy Lopez offers White great winning chances (because of the pawn structure) by move 4, while an engine would take several hours of brute-force calculation to understand the same idea.

+",30348,,,,,10/10/2019 6:00,,,,1,,,,CC BY-SA 4.0 +15835,2,,15772,10/10/2019 8:47,,0,,"

The solution mentioned seems feasible but you'd probably encounter a lot of problems, such as -

+ +
    +
  1. Since you're outputting coordinates, each one of the input networks must be trained differently. Considering $N$ should ideally be a variable, how many input networks do you train?

  2. +
  3. You'd use an intermediate encoding of sorts as co-ordinate representation which would be averaged and then passed through the output network. - Interpolation properties of the encoding might not be too great, meaning a linear change in the representation might not lead to a linear change in the co-ordinate, causing an averaging function to give skewed results.

  4. +
+ +

Just a suggestion but I think it would be better if instead of learning multiple estimates of co-ordinate representations and then averaging them it might be better if you tried to learn some sort of distance function.

+ +

Essentially, each receiver would have a fixed position (Assumption) and the input network (or two input networks preferably) would output the distance ($r$) of the transmitter from that receiver and the angle ($\theta$) it makes with a common axis. The problem with this approach is you wouldn't be leveraging multiple receivers as they would not be learning together, and the advantage being that whatever $N$ is, the RSSI readings pass through the same network always.

+ +

Once an $N$ number of $(r, \theta)$ values are obtained - a robust algorithm +$f(r_1, r_2,..., \theta_1, \theta_2, ...) = (x_t, y_t)$ could be found to output the transmitter co-ordinates.

+ +

I'm not too confident if it would work too well but just a suggested direction I could think of! Hope this helped!

+",25658,,,,,10/10/2019 8:47,,,,0,,,,CC BY-SA 4.0 +15838,2,,15820,10/10/2019 9:42,,4,,"

There are many insightful comments and answers so far. I want to illustrate my idea of ""color blindness test"" more. Maybe it's a hint to lead us to the truth.

+ +

Imagine there are two people here. One is colorblind (AI) and the other is not colorblind (human). If we show them a normal number ""6"", both of them can easily recognize it as the number 6. Now, if we show them a delicately designed colorful number ""6"", only the human can recognize it as the number 6, while the AI will recognize it as the number 8. The interesting thing about this analogy is that we cannot teach/train colorblind people to recognize this delicately designed colorful number ""6"" because of a natural difference, which I believe is also the case between AI and humans. AI gets results from computation while humans get results from ""mind"". Therefore, like @S. McGrew's answer, if we can find the fundamental difference between how AI and humans read things, then this question is answered.

+",30335,,,,,10/10/2019 9:42,,,,4,,,,CC BY-SA 4.0 +15839,2,,15820,10/10/2019 10:28,,11,,"

Yes there are, for instance one pixel attacks described in

+ +
+

Su, J.; Vargas, D.V.; Kouichi, S. One pixel attack for fooling deep + neural networks. arXiv:1710.08864

+
+ +

One-pixel attacks are attacks in which changing a single pixel in the input image can strongly affect the results.

+",30354,,2212,,10/10/2019 18:27,10/10/2019 18:27,,,,1,,,,CC BY-SA 4.0 +15842,2,,9076,10/10/2019 11:15,,0,,"

Mutation is a secondary operator (around 10%) while crossover is the primary operator (around 90%); crossover creates offspring, while mutation can only shuffle information within the chromosome.

+",30358,,28348,,10/10/2019 20:43,10/10/2019 20:43,,,,0,,,,CC BY-SA 4.0 +15843,1,15855,,10/10/2019 13:49,,0,441,"

I've a prediction matrix(P) of dimension 3x3 and one-hot encoded label matrix(L) of dimension 3x3 as shown below.

+ +
    |0.5 0.3 0.1|      |1 0 0|
+P = |0.3 0.2 0.1|  L = |0 1 0|
+    |0.2 0.5 0.8|      |0 0 1|
+
+ +

each column in 'P' corresponds to prediction of a label in 'L'

+ +
    +
  1. How is the BCELoss calculated using PyTorch? My experimentation with giving these two matrices as parameters to the loss function yielded poor results, and PyTorch's documentation doesn't disclose how the loss calculation is done for this case.
  2. How is the loss averaged for each instance and across a batch?
  3. If the loss is calculated column-wise and averaged for each instance and across the batch, then how can the loss be backprop'd in PyTorch?
+ +

Thanks in advance.

+",25676,,,,,10/11/2019 4:09,Confused with backprop in pytorch with BCE loss,,1,0,,12/26/2021 14:15,,CC BY-SA 4.0 +15844,2,,1285,10/10/2019 13:49,,0,,"

I'm currently working on a p2p framework to train with neuroevolution, it will have neat, hyperneat, and eshyperneat example experiments. AI developers will be able to fork and add any experiments they want.

+ +

It isn't a blockchain per se, but it will have a DHT for genes, champion nets, peers of course, and genomes. Neuroevolution training can be done in parallel, so the population will be distributed evenly among peers; peers that finish early will help slower peers with their evaluations. To prevent malice, each peer will have one of the nets it evaluated checked by a random peer, and all peers will check the champion of each generation. Any peer that evaluates a net will be rewarded a token that is specific to the generation it helped train and will also download a copy of the champion net. Later I plan to allow non-training clients the ability to view the performance of past champs and purchase them from the network; any peer with a token for the genome they purchase will receive part of the payment. This will not be proof of stake; it will be a proof of work where work is evaluating nets. Since this needs to be checked by other peers so people don't just post phony fitness for genomes, I'll be calling it proof of fitness.

+",20044,,4709,,12/8/2019 23:00,12/8/2019 23:00,,,,0,,,,CC BY-SA 4.0 +15845,2,,1285,10/10/2019 15:10,,0,,"

Maybe the term you're looking for is federated learning. Check out OpenMined project, PySyft and Tensorflow Federated libraries.

+",30363,,,,,10/10/2019 15:10,,,,0,,,,CC BY-SA 4.0 +15846,2,,9076,10/10/2019 21:13,,1,,"

I like to use the term, ""recombination operator"" rather than ""crossover operator"", because the latter term suggests a specific type of operation: constructing an offspring by switching corresponding chromosome segments between two parents. ""Recombination"" (to me) suggests any operation that forms an offspring from the genetic information of two parents. ""Crossover"" in that sense doesn't work when the individuals are, for example, permutations; but many ""recombination operators"" that do work are still possible, which preserve non-conflicting portions of two parent permutations.

+ +

In GA, mutation can be thought of as a relatively small random change that occurs within an individual. Mutation usually is a change of the value of one gene without making use of gene values in any other individuals, but can also be a random rearrangement of elements in a permutation, or a random change in the values of several genes. Sometimes the term is applied to a ""hill climbing"" procedure in which several mutations are applied to an individual and their effect on fitness is tested; then the one that produces the most fitness improvement is retained.

+",28348,,,,,10/10/2019 21:13,,,,0,,,,CC BY-SA 4.0 +15847,2,,15820,10/10/2019 23:27,,5,,"

Here's an example:

(image example omitted)

In his recent book The Fall, Stephenson wrote about smartglasses that project a pattern over the facial features to foil recognition algorithms (which seems not only feasible but likely).

+ +

Here's an article from our sponsors, Adversarial AI: As New Attack Vector Opens, Researchers Aim to Defend Against It which includes this graphic of ""Five ways AI hacks can lead to real world problems"".

+ +

The article references the conference on The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, where you can download the full report.

+ +

I'm assuming many such examples exist in the real world, and will amend this link-based answer as I find them. Good question!

+",1671,,1671,,10/11/2019 0:08,10/11/2019 0:08,,,,0,,,,CC BY-SA 4.0 +15848,1,,,10/11/2019 0:33,,1,192,"

There was a recent question on adversarial AI applications, which led me to start digging around.

+

Here my interest is not general, but specific:

+

How do you game an automatic trading system by messing with data, as opposed to hacking the algorithm itself?

+",1671,,2444,,12/11/2021 21:05,12/11/2021 21:05,"How do you game an automatic trading system by messing with data, as opposed to hacking the algorithm itself?",,1,2,,,,CC BY-SA 4.0 +15849,2,,15820,10/11/2019 1:19,,3,,"

Here's a live demo: https://www.labsix.org/physical-objects-that-fool-neural-nets/

+ +

Recall that neural nets are trained by feeding in the training data, evaluating the net, and using the error between the observed and the intended output to adjust the weights and bring the observed output closer to the intended one. Most attacks are based on the observation that, instead of updating the weights, you can update the input neurons. That is, perturb the image. However, this attack is very finicky. It falls apart when the perturbed image is scaled, rotated, blurred, or otherwise altered. That's clearly a cat to us, but guacamole to the neural net. But a slight rotation and the net starts classifying it correctly again.

+ +

However recent breakthroughs allow actual objects presented to a real camera to be reliably misclassified. That's clearly a turtle, albeit with a wonky pattern on its shell. But that net is convinced it's a rifle from practically every angle.

+",30378,,,,,10/11/2019 1:19,,,,0,,,,CC BY-SA 4.0 +15850,1,,,10/11/2019 1:38,,1,287,"

Let $H_1$ , $H_2$ ,... be a sequence of hypothesis classes for binary classification.

+ +

Assume that there is a learning algorithm that implements the ERM rule in the realizable case such that the output hypothesis of the algorithm for each class $H_n$ only depends on $O(n)$ examples out of the training set. Furthermore, assume that such a hypothesis can be calculated given these $O(n)$ examples in time $O(n)$, and that the empirical risk of each such hypothesis can be evaluated in time $O(mn)$.

+ +

For example, if $H_n$ is the class of axis aligned rectangles in $R^n$ , we saw that it is possible to find an ERM hypothesis in the realizable case that is defined by at most $2n$ examples.

+ +

Prove that in such cases, it is possible to find an ERM hypothesis for $H_n$ in the unrealizable case in time $O(mnm^{O(n)})$.

+",27112,,16909,,10/11/2019 2:11,6/22/2023 13:05,"Prove that in such cases, it is possible to find an ERM hypothesis for $H_n$ in the unrealizable case in time $O(mnm^{O(n)})$",,1,2,,,,CC BY-SA 4.0 +15851,2,,15848,10/11/2019 2:13,,1,,"

I don't have a proper source for this, as I've only read this on an online forum: everything I say is just hearsay and I am very uneducated on the subject. With that being said...

+ +

As you may know, algorithmic trading relies on strategies, i.e. I trade a certain way once I see certain indexes move in a certain way. If your procedure is known by other people for some reason, then adverse agents can release information or influence the market to manipulate your bot into making trades that the adverse agent then takes advantage of.

+ +

I'm not sure how this works in detail, and frankly I find it quite unlikely that a single adverse agent can target a single known strategy with the entire market in-between the two. But I've heard that a process known as the alpha algorithm can be used to discover hidden trading strategies. But again, I wasn't able to find any articles that directly linked this method towards financial trading. So my usual warning of me talking out my butt again applies.

+ +

Just a fun thought. A friend of mine told me (unfortunately, again I wasn't able to find a proper source for this) that Jane Street found that the most effective trading strategies were based on relatively simple ML techniques (I recall the specific example mentioned was just a common form of regression). The important follow question would be ""What are their input metrics"", I don't think I was told this, but I would believe that it would also just be something computed by relatively well-founded stock market indices. If this is the case, than again it would be very difficult for a single adverse agent to influence that particular bot since the force of the entire market is in between them.

+ +

On the opposite end, we have this redditor(finally, a source!), who amazingly analyzed about 130+ trading strategies, each of which were ""big brained"" in kind. This redditor concluded that all papers were essentially p-hacked, and none of the results were significant. Of course there is the possibility that authors with profitable results would not publish them, but the conclusion remains: keep your strategies simple. Again, this implies that it would be harder to ""hack"" a specific strategy used in the market by a particular individual.

+ +

P.S. There are other ""big brained"" methods that are kept well secret and thus difficult to analyze: I heard a rumour that a Chinese company paid people to take pictures inside shopping malls, to use to analyze shopping behavior, and how vibrant the economy is; I also hear that some people use satellites photos to look at, say, car dealerships, to see how much of their stuff is being purchased. Again its unclear if these techniques are profitable or not, but certainly a lot of money is being thrown into it.

+",6779,,6779,,10/11/2019 2:59,10/11/2019 2:59,,,,3,,,,CC BY-SA 4.0 +15852,1,15856,,10/11/2019 2:47,,2,44,"

The empirical error equation given in the book Understanding Machine Learning: From Theory to Algorithms is

+ +

+ +

My intuition for this equation is: total wrong predictions divided by the total number of samples $m$ in the given sample set $S$ (Correct me if I'm wrong). But, in this equation, the $m$ takes $\{ 1, \dots, m \}$. How is this actually calculated, as I thought it should be one number (the size of the sample)?

+",30381,,2444,,1/22/2021 15:12,1/22/2021 15:12,"What does the notation $[m]=\{1, \ldots, m\}$ mean in the equation of the empirical error?",,1,0,,,,CC BY-SA 4.0 +15853,2,,15850,10/11/2019 3:09,,0,,"

Here's a rough proof sketch that might be clearer, but is probably less precise, than the solution you already have.

+ +
    +
  1. We have a sequence of hypotheses classes, but the problem actually only asks us to consider a single class. A hypothesis class is a set of possible models for classification that differ only in a set of parameter values.

  2. +
  3. We are told that the algorithm for selecting a hypothesis from $H_n$ uses only $O(n)$ samples from the training set, and that using those samples, it takes the algorithm $O(n)$ to find the hypothesis that minimizes the empirical risk for the class, provided that the ERM minimizing hypothesis has an empirical risk of 0 (i.e. it is realizable).

  4. +
  5. Further, if we pick some hypothesis at random from the class, we are told that the cost of determining its exact empirical risk is $O(nm)$, where $m$ is the total size of the training set, and $n$ is just a parameter associated with this particular hypothesis class $H_n$.

  6. +
  7. Observe that, in the unrealizable case, we cannot use the algorithm to find the risk minimizing hypothesis (which, by 2, runs in O(n)). No other algorithm is mentioned, but we are also told (in 2) that the algorithm can construct a hypothesis from $O(n)$ datapoints in $O(n)$ time, even if no feasible hypothesis can be found for those $O(n)$ points.

  8. +
  9. Suppose we wanted to find the hypothesis with minimal risk via brute force. We could compute every subset of the training dataset that is of the size $O(n)$ that the algorithm may be able to use to construct a hypothesis in $O(n)$ time. There are $O(m^{O(n)})$ such subsets (e.g. $m*(m-1)/2$ possible pairs, $m*(m-1)*(m-2)/6$ possible triplets, etc.).

  10. +
  11. Since there are $O(m^{O(n)})$ possible subsets of size $O(n)$, and our algorithm can compute a candidate hypothesis in $O(n)$ time, it costs us $O(nm^{O(n)})$ to find all possible hypotheses that our algorithm can produce for this dataset.

  12. +
  13. Since it costs us $O(nm)$ time to compute the empirical risk of a hypothesis generated by this algorithm, we can minimize risk by computing risk for each of the $O(m^{O(n)})$ hypotheses we made in step 6, and then selecting the one with lowest empirical risk. This costs us $O(mnm^{O(n)})$ time total. Our total runtime is then the sum of the cost to generate the models in step 6, and the cost to evaluate them here in step 7: $O(mnm^{O(n)})+O(nm^{O(n)})$.

  14. +
  15. Since $O(mnm^{O(n)})+O(nm^{O(n)}) \in O(mnm^{O(n)})$, we have proven the desired bound on runtime.

  16. +
+",16909,,,,,10/11/2019 3:09,,,,0,,,,CC BY-SA 4.0 +15855,2,,15843,10/11/2019 4:09,,1,,"
    +
  1. BCELoss (Binary Cross Entropy Loss) is used for a binary classifier, which is a neural network that has a binary output, 0 or 1. It is not used for a multi-output neural network like in your case. For that kind of network, you can use MSELoss or CrossEntropyLoss as the loss for the network (see the sketch below).
  2. The calculation of BCE is shown in the PyTorch documentation: https://pytorch.org/docs/stable/nn.html#BCELoss. Across a batch, it either sums the loss or takes the mean; you can set that through the reduction argument.
  3. As I said, the loss is either summed or averaged, so there will be a single value and PyTorch can do backprop.
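
A small hedged example of the suggested alternative (my own code, not from the question): CrossEntropyLoss takes raw logits of shape (batch, classes) and integer class labels, with the mean reduction across the batch:

    import torch
    import torch.nn as nn

    logits = torch.tensor([[0.5, 0.3, 0.2],
                           [0.3, 0.2, 0.5],
                           [0.1, 0.1, 0.8]], requires_grad=True)
    labels = torch.tensor([0, 2, 2])          # one class index per row

    loss = nn.CrossEntropyLoss(reduction="mean")(logits, labels)
    loss.backward()                           # gradients land in logits.grad
    print(loss.item(), logits.grad.shape)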
+ +

Hope I can help you

+",23713,,,,,10/11/2019 4:09,,,,0,,,,CC BY-SA 4.0 +15856,2,,15852,10/11/2019 4:25,,2,,"

This is a commonly used notation in theoretical computer science.

+ +

$[m]$ is not the variable $m$, but is instead the set of integers from $1$ to $m$ inclusive. The empirical error equation thus reads in English:

+ +

The cardinality of a set consisting of the elements $i$ of the set of integers $[m]$ such that the hypothesis given input $x_i$ disagrees with label $y_i$, normalized by $m$.
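
A tiny illustration (mine, not from the book): the empirical error is just the fraction of sample points on which the hypothesis disagrees with the label.

    def empirical_error(h, xs, ys):
        m = len(xs)
        return sum(1 for i in range(m) if h(xs[i]) != ys[i]) / m

    # A threshold hypothesis evaluated on m = 4 labelled points.
    h = lambda x: int(x > 0)
    print(empirical_error(h, [-2, -1, 1, 3], [0, 1, 1, 1]))   # 0.25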

+",16909,,,,,10/11/2019 4:25,,,,0,,,,CC BY-SA 4.0 +15857,1,,,10/11/2019 5:27,,5,311,"

Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers?

+ +

I read this in a paper titled Mean Field Residual Networks: On the Edge of Chaos (2017). I am not sure how far this is true, because the error depends on those activations.

+",30384,,2444,,12/12/2021 12:35,12/12/2021 12:35,Is the gradient at a layer independent of the activations of the previous layers?,,2,0,,,,CC BY-SA 4.0 +15858,2,,15857,10/11/2019 5:33,,1,,"

Yes, this is the premise of back-propagation, the gradient at layer $j_{n}$ is not impacted by the gradient at layer $j_{n-1}$. This allows you to start with a gradient at the output layer and propagated it back through the network to the input layer.

+ +

It is however impacted by the gradient at $j_{n+1}$, which the back-prop algorithm also follows.

+",26726,,2444,,10/22/2019 1:16,10/22/2019 1:16,,,,0,,,,CC BY-SA 4.0 +15859,1,15863,,10/11/2019 6:39,,0,3051,"

The answers to this Quora question say it's OK to ignore machine learning and start right away with deep learning.

+ +

Is machine learning required or is useful for understanding (theoretically and practically) deep learning? Can I start right away with deep learning or should I cover machine learning first? In what way machine learning useful for deep learning? (leave the mathematics part - I'm ok with it).

+",30381,,2444,,10/11/2019 16:05,11/24/2019 6:17,Is machine learning required for deep learning?,,3,1,,,,CC BY-SA 4.0 +15860,1,,,10/11/2019 6:44,,3,45,"

I'm trying to figure out how to extract specific text from an utterance by a user.

+ +

I need to extract ""unknown"" text from a short and simple text. In this case, the user wants to create a list. everything in the {} is unknown text. As it doesn't belong to a specific entity such as food, athletes, movies, etc.

+ +
+
    +
  • create a new {groceries} list
  • +
  • create a list {movies}
  • +
  • create a new list {movies}
  • +
  • create a list and call it {books}
  • +
  • create a new list and give it the name {stamps}
  • +
  • create a list with the title {red ketchup}
  • +
  • create another list called {rotten food}
  • +
+
+ +

the above list is but a small sample of all the different ways that a user can say he wants to create a list.

+ +

In everything that I have seen, it's all based on existing entities for the NER and when someone says that it's custom, I found that it just means we have to train a specific set of words and hope for the best. If I add one more word that isn't trained, it fails to get the data.

+ +

But in this case, the user can say anything such as ""old shoes"", ""schools I want to go to"", ""Keanu Reeves movies"". So I cannot see how I could possibly train it.

+ +

With Spacy, I followed this example (https://raw.githubusercontent.com/explosion/spaCy/master/examples/training/train_intent_parser.py) and it mostly works in getting the proper titles. However, I have to train it for every different phrase to work.

+ +

For example, if a user says

+ +
+

create a beautiful new list and give it the name {stamps}

+
+ +

the word beautiful causes it to fail and now I have to train for that as well. At this rate, we are looking at millions of phrases to train.

+ +

before Spacy, we tried Dialogflow and Rasa. At each point, it's about training phrases but the more we train, the more one thing worked and another broke.

+ +

At this point, we have tried and overall had good intent detection success but when it comes to extracting data such as this, I'm starting to look like a deer in a headlight.

+ +

We are new to NLP and while we've had a lot of good progress and over the past few weeks, we cannot seem to find any articles written on this specific problem and whether it can be solved. Dialogflow has the concept of any entity but they recommend avoiding it and it works 2 out of 3 times when things get complicated.

+ +

The goal is to detect which of these words is the title based training. Can it be done? and if so, what's the approach?

+ +

Any code, hints or articles that might get us started would be appreciated.

+",30386,,,,,10/11/2019 6:44,How to train a model to extract custom and unknown entities,,0,0,,,,CC BY-SA 4.0 +15861,1,,,10/11/2019 6:55,,1,62,"

lately we read that many manufacturers are forcing ARM architectures to be used on future workstations. One of ARM's recent announcements is a machine learning processor. What will change in terms of computing performance if ARM architectures become new standard, and these kinds of ML-focused chips are found in most devices?

+",23181,,16909,,10/11/2019 12:49,10/11/2019 13:15,What will change when workstations will have ARM Machine Learning Processors onboard?,,1,0,,,,CC BY-SA 4.0 +15862,2,,15859,10/11/2019 8:13,,4,,"

Deep learning is part of machine learning.

+ +
    +
  • You will miss out useful information if you ignore machine learning.
  • +
  • You are OK to start your work in machine learning with deep learning and neural networks. You have to start somewhere, and starting with a strong and successful method is reasonable, especially if you need to be able to produce good results quickly.
  • +
  • You will learn essential machine learning stuff while reading about deep learning.
  • +
  • The deep learning tutorials and other learning materials you will be reading may not be telling you that what you are learning also applies to other machine learning methods, but you will be learning lots of stuff that applies more generally. You will be studying some machine learning whether you want to or not.
  • +
  • If you have plenty of time, a broader view will help understanding. Still, there is no need to postpone deep learning until after mastering some other methods.
  • +
  • Broader knowledge helps you to relate and memorise concepts and be more aware of potential issues, especially issues that are rarely discussed in the deep learning community. Such knowledge and experience will be most useful when trying to apply deep learning to new problems or if trying to make substantial changes.
  • +
+",30388,,,,,10/11/2019 8:13,,,,0,,,,CC BY-SA 4.0 +15863,2,,15859,10/11/2019 10:00,,0,,"

+
+
    +
  1. Is Machine Learning required or is useful for understanding (theoretically and practically) Deep Learning?
  2. +
+
+

NO

+

Deep learning is itself a huge subject area with serious applications in NLP, computer vision, speech, and robotics. You should learn deep learning from scratch, e.g. understanding forward propagation, backpropagation, how weights are updated, etc., instead of using high-level frameworks like Keras or PyTorch. It's OK to use them once you understand the basics, to save time and code complexity, but remember that you surely don't need machine learning for that.

+

Since you are familiar with the mathematics part, I would suggest you to straight away jump into Deep Learning. Note that deep learning is inspired by how the brain works.

+
+
    +
  1. Can I start right away with Deep learning or should I cover Machine learning first?
  2. +
+
+

Yes, you may start right away. Start with the hello world problem of MNIST digit classification if you know a little image processing. Start with a simple neural network model from scratch, then use Keras (very easy), and then proceed to CNNs. You may start with simple problems in other fields too (NLP, speech). I suggest Andrew Ng's course on Machine Learning (within this he explains a neural network model for MNIST, I guess).

+
+
    +
  1. In what way machine learning useful for Deep learning?
  2. +
+
+

You will understand that, in classical machine learning, you sit down and find useful features in the dataset yourself, but in deep learning this happens automatically. +(Learn deep learning in detail, then come back and read this; you will understand exactly what I mean!) +If you learn machine learning first and then go to deep learning, you will realise that it was unnecessary. If you are interested in this field of AI, jump into deep learning right now!

+",26854,,-1,,6/17/2020 9:57,11/24/2019 6:17,,,,0,,,,CC BY-SA 4.0 +15867,2,,15861,10/11/2019 13:15,,1,,"

It is not clear to me that much will change.

+ +
    +
  • ARM makes many devices, mostly designed to consume as little power as possible. My guess is that most workstations will not contain ARM's ML processor, even if they contain an ARM CPU.

  • +
  • ARM's ML processor can do machine learning. It is specifically optimized for training convolutional deep neural networks, and it is a bit faster than most existing non-purpose-built chips, especially for its price. In practice, chips like this one should make ML a bit cheaper and faster, but it's not a breakthrough product. This just looks like the next iteration of Moore's law.

  • +
+",16909,,,,,10/11/2019 13:15,,,,0,,,,CC BY-SA 4.0 +15868,1,,,10/11/2019 13:50,,2,43,"

I want to implement a model that improves itself with the passage of time.

+ +

My main task is to build a machine translator (from English to Urdu). The problem I am facing is that I have very little data available for training. Even if I create a corpus, there is still a possibility of that corpus having poor translations, due to outdated word choices for my native language.

+ +

I was thinking of creating a model which predicts an output, and the user tells whether it is correct or not, or maybe suggests a better translation.

+ +

Now I have two options.

+ +
    +
  1. Take that input from the end user, append it to my dataset and retrain the model. (I don't know whether this is even possible at the production level.)

  2. +
  3. The second is to feed that data back into the previous system. So far, I have only come across online learning or reinforcement learning (Q-learning, as my data is very small, and even with user corrections it is still not going to reach millions of sentences).

  4. +
+ +

Am I on the right track, and how can I progress with either of these two options? Is there any prebuilt solution similar to this?

+",29891,,29891,,10/11/2019 16:39,10/11/2019 16:39,Best approach for online Machine Translation with few hundred of samples?,,0,2,,,,CC BY-SA 4.0 +15869,2,,36,10/11/2019 17:31,,1,,"

Direct Answer to Your Question:--

+

The field where quantum computing and A.I. intersect is called quantum machine learning.

+
    +
  1. A.I. is a developing field, with some background (à la McCarthy of LISP fame).

    +
  2. +
  3. Quantum computing is a virgin field that is largely unexplored.

    +
  4. +
+

A particular type of complexity interacts with another type of complexity to create a very rich field.

+

Now combine (1) and (2), and you end up with even more uncertainty; the technical details shall be explored in this answer.

+

Google Explains Quantum Computing in One Simple Video: Google and NASA's Quantum Artificial Intelligence Lab

+
+

Body:--

+

IBM is an authority:--

+

IBM: Quantum Computers Could Be Useful, But We Don't Know Exactly How

+

Quantum machine learning is an interesting phenomenon. This field studies the intersection between quantum computing and machine learning.

+

(https://en.wikipedia.org/wiki/Quantum_machine_learning)

+
+

"While machine learning algorithms are used to compute immense quantities of data, quantum machine learning increases such capabilities intelligently, by creating opportunities to conduct analysis on quantum states and systems." Wikipedia contributors. — "Quantum machine learning." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 7 Oct. 2019. Web. 11 Oct. 2019.

+
+
+

Technical Mirror:--

+

This particular section on the implementations is worth noting:--

+

(https://en.wikipedia.org/wiki/Quantum_machine_learning#Implementations_and_experiments)

+
+

" ... This dependence on data is a powerful training tool. But it comes with potential pitfalls. If machines are trained to find and exploit patterns in data then, in certain instances, they only perpetuate the race, gender or class prejudices specific to current human intelligence.

+

But the data-processing facility inherent to machine learning also has the potential to generate applications that can improve human lives. 'Intelligent' machines could help scientists to more efficiently detect cancer or better understand mental health.

+

Most of the progress in machine learning so far has been classical: the techniques that machines use to learn follow the laws of classical physics. The data they learn from has a classical form. The machines on which the algorithms run are also classical.

+

We work in the emerging field of quantum machine learning, which is exploring whether the branch of physics called quantum mechanics might improve machine learning. Quantum mechanics is different to classical physics on a fundamental level: it deals in probabilities and makes a principle out of uncertainty. Quantum mechanics also expands physics to include interesting phenomena which cannot be explained using classical intuition. ... " — "Explainer: What Is Quantum Machine Learning And How Can It Help Us?". Techxplore.Com, 2019, https://techxplore.com/news/2019-04-quantum-machine.html.

+
+ +
+

Business Applications and Practical Uses:--

+ +
+

Further Reading:--

+ +",25982,,-1,,6/17/2020 9:57,10/13/2019 7:18,,,,0,,,,CC BY-SA 4.0 +15870,2,,15820,10/11/2019 19:20,,2,,"

There is some research, at least, on the ""foolability"" of neural networks, which gives insight into the potential high risk of neural nets, even when they ""seem"" 99.99% accurate.

+ +

A very good paper on this is in Nature: https://www.nature.com/articles/d41586-019-03013-5

+ +

In a nutshell:

+ +

It shows diverse examples of fooling neural networks/AIs, for example one where a few bits of scotch tape placed on a ""Stop"" sign changes it, for the neural net, into a ""limited to 40"" sign... (whereas a human would still see a ""Stop"" sign!).

+ +

It also gives 2 striking examples of turning an animal into another by just adding colored dots that are invisible to humans (turning, in the example, a panda into a gibbon, where a human hardly sees anything different, so still sees a panda).

+ +

Then they elaborate on diverse research avenues, involving, for example, ways to try to prevent such attacks.

+ +

The whole page is a good read for any AI researcher and shows lots of troubling problems (especially for automated systems such as cars, and soon maybe armaments).

+ +
+ +

An excerpt relevant to the question:

+ +

Hendrycks and his colleagues have suggested quantifying a DNN’s robustness against making errors by testing how it performs against a large range of adversarial examples. However, training a network to withstand one kind of attack could weaken it against others, they say. And researchers led by Pushmeet Kohli at Google DeepMind in London are trying to inoculate DNNs against making mistakes. Many adversarial attacks work by making tiny tweaks to the component parts of an input — such as subtly altering the colour of pixels in an image — until this tips a DNN over into a misclassification. Kohli’s team has suggested that a robust DNN should not change its output as a result of small changes in its input, and that this property might be mathematically incorporated into the network, constraining how it learns.

+ +

For the moment, however, no one has a fix on the overall problem of brittle AIs. The root of the issue, says Bengio, is that DNNs don’t have a good model of how to pick out what matters. When an AI sees a doctored image of a lion as a library, a person still sees a lion because they have a mental model of the animal that rests on a set of high-level features — ears, a tail, a mane and so on — that lets them abstract away from low-level arbitrary or incidental details. “We know from prior experience which features are the salient ones,” says Bengio. “And that comes from a deep understanding of the structure of the world.”

+ +
+ +

Another excerpt, near the end:

+ +

""Researchers in the field say they are making progress in fixing deep learning’s flaws, but acknowledge that they’re still groping for new techniques to make the process less brittle. There is not much theory behind deep learning, says Song. “If something doesn’t work, it’s difficult to figure out why,” she says. “The whole field is still very empirical. You just have to try things.”""

+",2233,,,,,10/11/2019 19:20,,,,0,,,,CC BY-SA 4.0 +15872,2,,15859,10/12/2019 7:56,,0,,"

I would argue yes, definitely, since the learning is somewhat sequential. (i) Start off applying basic machine learning concepts, such as regression, classification and generalization techniques, to real-world problems. (ii) You will soon realize the limitations of those techniques. (iii) Take your learning to the next level by learning and applying deep learning concepts, especially if the issues are around image classification or NLP. As mentioned by @Joachim Wagner, if you skip machine learning, you will not only miss out on useful information, but there will also be a huge gap in your learning. Hence, I would suggest learning concurrently, or ML first, otherwise DL will become a black box of a black box.

+",30157,,,,,10/12/2019 7:56,,,,0,,,,CC BY-SA 4.0 +15873,1,15878,,10/12/2019 8:25,,1,316,"

I was reading some PyTorch code when I saw this learning rate scheduler:

+ +
def warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor):
+    """"""
+    Learning rate scheduler
+    :param optimizer:
+    :param warmup_iters:
+    :param warmup_factor:
+    :return:
+    """"""
+    def f(x):
+        if x >= warmup_iters:
+            return 1
+        alpha = float(x) / warmup_iters
+        return warmup_factor * (1 - alpha) + alpha
+
+    return torch.optim.lr_scheduler.LambdaLR(optimizer, f)
+
+ +

and this is where the function is called:

+ +
if epoch == 0:
+    warmup_factor = 1. / 1000
+    warmup_iters = min(1000, len(data_loader) - 1)
+
+    lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor)
+
+ +

As I understand it, it gradually increases the learning rate until it reaches the initial learning rate. Am I correct? Why do we need to increase the learning rate? As far as I know, for better learning in neural networks, we decrease the learning rate.

+",10051,,2444,,10/12/2019 16:04,10/12/2019 16:04,Is this learning rate schedule increasing the learning rate?,,1,0,,,,CC BY-SA 4.0 +15875,1,,,10/12/2019 10:52,,1,392,"

I am new to neural networks. I would like to use them as a fitting or forecasting method.

+ +

A simple NN model that does not contain hidden layers, that is, one where the input nodes are directly connected to the output nodes, represents a linear model. Nonlinearity begins to appear in an ANN model when we have hidden nodes, to which a nonlinear function is assigned, and whose weights are determined using minimization.

+ +

How do we choose the non-linear activation function that should be assigned to each hidden neuron?

+",27312,,2444,,10/13/2019 0:14,10/13/2019 0:14,How do we choose the activation function for each hidden node?,,2,2,,10/14/2019 4:41,,CC BY-SA 4.0 +15876,2,,15875,10/12/2019 14:03,,1,,"

To know the form of your non-linear function, you should first define the type of problem you are dealing with, such as an image classification task. Secondly, pick the activation functions based on your task, such as sigmoid, tanh, ReLU, LeakyReLU, softmax, etc. Overall, your ANN performance mainly depends on the number of hidden layers (hidden units), the selection of activation functions, weight decay, momentum, dropout, etc.

+",30157,,,,,10/12/2019 14:03,,,,0,,,,CC BY-SA 4.0 +15877,1,15879,,10/12/2019 15:02,,6,2426,"

I'm certain that this is a very naive question, but I am just beginning to look more deeply at neural networks, having only used decision tree approaches in the past. Also, my formal mathematics training is more than 30 years in the past, so please be kind. :)

+ +

As I'm reading François Chollet's book on Deep Learning with Python, I'm struck that it appears that we are effectively treating the weights (kernel and biases) as terms in the standard linear equation ($y=mx+b$). On page 72 of the book, the author writes

+ +
output = dot(W, input) + b
+output = (output < 0 ? 0 : output)
+
+ +

Am I reading too much into this, or is this correct (and so fundamental I shouldn't be asking about it)?

+",30426,,32410,,5/9/2021 23:50,5/9/2021 23:50,Do neurons of a neural network model a linear relationship?,,4,0,0,,,CC BY-SA 4.0 +15878,2,,15873,10/12/2019 15:50,,1,,"

The higher (or smaller) the learning rate, the higher (or, respectively, smaller) the contribution of the gradient of the objective function, with respect to the parameters of the model, to the new parameters of the model. Therefore, if you progressively increase (or decrease) the learning rate, then you will accelerate (or, respectively, slow down) the learning process, so later training examples have higher (or, respectively, smaller) influence on the parameters of the model.

+ +

In your example, the function warmup_lr_scheduler returns an object of class LambdaLR, initialized with a certain optimizer and the function f, which is defined as

+ +
def f(x):
+    if x >= warmup_iters:
+        return 1
+    alpha = float(x) / warmup_iters
+    return warmup_factor * (1 - alpha) + alpha
+
+ +

The documentation of torch.optim.lr_scheduler.LambdaLR says that the function f should compute a multiplicative factor given an integer parameter epoch, so x is a training epoch. If the epoch x is greater than or equal to warmup_iters, then 1 is returned, but anything multiplied by 1 is itself, so, when the epoch x is greater than a threshold, warmup_iters (e.g. 1000), then the initial learning rate is unaffected. However, when x < warmup_iters, the multiplicative factor is given by

+ +
alpha = float(x) / warmup_iters
+warmup_factor * (1 - alpha) + alpha
+
+ +

which is a function of the epoch x. The higher the epoch x, the higher the value of alpha, so the smaller (1 - alpha) and warmup_factor * (1 - alpha). Note that float(x) / warmup_iters will never be greater than 1 because x is never greater than warmup_iters. So, effectively, as the epoch increases, warmup_factor * (1 - alpha) tends to 0 and alpha tends to 1.

+ +

The learning rate can only increase if you multiply it with a constant greater than 1. However, this can only happen if warmup_factor > 1. You can verify this by solving the inequality warmup_factor * (1 - alpha) + alpha > 1.

+ +

To conclude, the initial learning rate is not being increased, but the learning process starts with a smaller learning rate than the given learning rate, for a warmup_iters epochs, then, after warmup_iters epochs, it uses the initially given learning rate (e.g. 0.002).
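As a quick sanity check, you can print the multiplicative factor for a few epochs (using a small warmup_iters just for illustration); the values ramp up from warmup_factor to 1:

warmup_iters = 5          # a small value, just to see the ramp
warmup_factor = 1. / 1000

def f(x):
    if x >= warmup_iters:
        return 1
    alpha = float(x) / warmup_iters
    return warmup_factor * (1 - alpha) + alpha

print([round(f(x), 3) for x in range(8)])
# [0.001, 0.201, 0.401, 0.6, 0.8, 1, 1, 1]  -> the factor rises from warmup_factor to 1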

+",2444,,2444,,10/12/2019 15:56,10/12/2019 15:56,,,,0,,,,CC BY-SA 4.0 +15879,2,,15877,10/12/2019 16:23,,10,,"

In a neural network (NN), a neuron can act as a linear operator, but it usually acts as a non-linear one. The usual equation of a neuron $i$ in layer $l$ of an NN is

+ +

$$o_i^l = \sigma(\mathbf{x}_i^l \cdot \mathbf{w}_i^l + b_i^l),$$

+ +

where $\sigma$ is a so-called activation function, which is usually a non-linearity, but it can also be the identity function, $\mathbf{x}_i^l$ and $\mathbf{w}_i^l$ are the vectors that respectively contain the inputs and the weights for neuron $i$ in layer $l$, and $b_i^l \in \mathbb{R}$ is a bias. Similarly, the output of a layer of a feed-forward neural network (FFNN) is computed as

+ +

$$\mathbf{o}^l = \sigma(\mathbf{X}^l \mathbf{W}^l + \mathbf{b}^l).$$

+ +

In your specific example, you set the output to $0$, if the output of the linear combination is less than $0$, else you use the output of the linear combination. This is the definition of the ReLU activation function, which is a non-linear function.
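For illustration, here is a minimal NumPy sketch of a single layer with a ReLU activation (the shapes and values are arbitrary):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

X = np.array([[0.5, -1.2, 2.0]])     # one input example with 3 features
W = np.array([[ 0.1, -0.3],
              [ 0.8,  0.2],
              [-0.5,  0.4]])          # weights: 3 inputs -> 2 neurons
b = np.array([0.05, -0.1])            # one bias per neuron

print(relu(X @ W + b))                # the layer output after the ReLU non-linearity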

+",2444,,2444,,10/12/2019 16:34,10/12/2019 16:34,,,,8,,,,CC BY-SA 4.0 +15880,5,,,10/12/2019 16:47,,0,,,2444,,2444,,10/12/2019 16:47,10/12/2019 16:47,,,,0,,,,CC BY-SA 4.0 +15881,4,,,10/12/2019 16:47,,0,,"For questions related to the concept of learning rate (of an optimization algorithm, such as gradient descent) in machine learning.",2444,,2444,,10/12/2019 16:47,10/12/2019 16:47,,,,0,,,,CC BY-SA 4.0 +15882,1,,,10/12/2019 17:12,,1,26,"

I've been exploring word-level alignments tools such as MGIZA and it seems to me that there hasn't been any new tool for this problem. Are neural networks not suitable to solve this problem or simply no interest in the area to build new tools?

+",30428,,,,,10/12/2019 17:12,Why hasn't deep learning been used for word level alignment?,,0,0,,,,CC BY-SA 4.0 +15884,2,,15875,10/12/2019 17:50,,1,,"

TL;DR: One does not know ahead of time what hyper-parameters will achieve optimal performance. So what you need is an iterative implementation strategy:

+ +

Implementation Strategy

+ +

When working with neural networks it is key to make sure that you spend your time wisely. It is possible to spend lots of time on a dead end simply because you made an assumption about your model at the very beginning.

+ +

So when selecting activation functions and other hyper-parameters, don't overthink things. That is, get a quick and dirty model up and running and tune from there. From this model you can iterate. For example, you could start with ReLU activations in the hidden layers, and, as you tune your model, you could experiment with other activations.

+ +

That is, the data and the task at hand along with your tuning shape your model. A highly recommended video on this is A. Ng's lecture here and this video from the A. Ng deep learning specialization.

+ +

Some content not in the video is how to use learning curves to help define your iterations. These help you decide what you should do next when your model is not achieving desired performance.

+",28343,,28343,,10/12/2019 17:56,10/12/2019 17:56,,,,2,,,,CC BY-SA 4.0 +15885,1,16859,,10/12/2019 18:53,,4,59,"

When training a deep network to learn object classification from a set like ImageNet, we minimize the cross entropy between the ground truth and the predicted categories. This is done in a supervised way. It is my understanding that you can separate categories in an unsupervised way using principal component analysis, but I have never seen that in a deep network. I am curious if this can be done easily in the last case. One possible way to do this would be to minimize a loss that favors categorization into one-hot vectors (this would only guarantee that an image is classified into a single category, rather that the correct category, though). Has this been done, or is there any reason why not?

+",30433,,,,,12/30/2019 1:01,Are there methods that allow deep networks to learn object categorization in a self-supervised way?,,1,0,,,,CC BY-SA 4.0 +15886,2,,15877,10/12/2019 21:17,,3,,"

Taking the question from comments on nbro's answer.

+ +
+ +
+

Am I wrong to see a clear relationship between how we are currently training networks and the classic function that defines a line?

+
+ +

You are right about it. This is an intuitive way to understand neural networks. You can create a neural network that only does simple linear regression by using linear activation functions in all the layers, such that the neural network (model) output is a linear combination of the inputs. And this seems like a great way to introduce neural networks to students.
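As a minimal sketch (assuming TensorFlow/Keras is available), the following model uses only linear activations, so the whole network can only recover a linear relationship such as $y = 2x + 3$:

import numpy as np
import tensorflow as tf

# Data sampled from the line y = 2x + 3
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1).astype('float32')
y = 2.0 * x + 3.0

# Both layers use the identity ('linear') activation, so the model is a linear function of x
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='linear', input_shape=(1,)),
    tf.keras.layers.Dense(1, activation='linear'),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05), loss='mse')
model.fit(x, y, epochs=200, verbose=0)

print(model.predict(np.array([[0.5]], dtype='float32')))   # roughly 2 * 0.5 + 3 = 4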

+ +

But, one must also look at the fact that neural networks provide the flexibility to model many kinds of non-linear relationships.

+ +
+ +

A list of activation functions.

+",16708,,16708,,10/15/2019 7:08,10/15/2019 7:08,,,,0,,,,CC BY-SA 4.0 +15888,2,,15877,10/13/2019 4:07,,5,,"

Almost never. The sum of linear functions is another linear function, so if neurons were only linear transformations there would be basically no point to having more than one neuron per layer. Instead, every neuron applies some kind of nonlinear function to its input. There are lots of different variations, but in the end the combination of the nonlinear activation function at each layer with the linear matrix multiplication connecting the outputs of each layer to the inputs of the next, creates something that has much more intricate behavior while still being reasonably efficient to compute.

+",30438,,,,,10/13/2019 4:07,,,,1,,,,CC BY-SA 4.0 +15890,1,15896,,10/13/2019 11:36,,2,88,"

I am wondering where Google uses the results from the deep learning of reCaptcha. (How can a system that knows how to recognize street signs be useful somewhere? How do they profit from it?)

+",30443,,2444,,10/13/2019 14:47,10/13/2019 22:51,How is the reCaptcha useful for Google?,,1,0,,,,CC BY-SA 4.0 +15891,5,,,10/13/2019 12:08,,0,,,2444,,2444,,10/13/2019 12:08,10/13/2019 12:08,,,,0,,,,CC BY-SA 4.0 +15892,4,,,10/13/2019 12:08,,0,,"For questions related to CAPTCHA, an acronym for ""Completely Automated Public Turing test to tell Computers and Humans Apart"", which is a type of challenge-response test used in computing to determine whether or not the user is human.",2444,,2444,,10/13/2019 12:08,10/13/2019 12:08,,,,0,,,,CC BY-SA 4.0 +15894,1,,,10/13/2019 15:29,,1,87,"

I have a car with 8 lidars, each with a field of view of 60 degrees. My car looks like this:

+ +

+ +

How can I merge all the lidar readings into 1 point cloud?

+",29708,,2444,,10/20/2019 15:11,10/20/2019 16:27,How can I combine the readings of multiple lidars into 1 point cloud?,,1,1,,1/22/2021 0:35,,CC BY-SA 4.0 +15896,2,,15890,10/13/2019 22:51,,0,,"

At this time, Google is aggressively pursuing research into AI of many sorts. All AI at this point is constrained by the size and accuracy of available training data. Getting human curated data is expensive and time consuming. Crowdsourcing through Captcha gets access to that without paying directly.

+ +

Why roads specifically? It can hardly be a coincidence that Google's holding company also owns a self driving car company Waymo. They need machines to learn what road signs look like in real life, so that they can respond to them.

+",23413,,,,,10/13/2019 22:51,,,,0,,,,CC BY-SA 4.0 +15897,1,,,10/14/2019 1:28,,1,53,"

I was watching a video about policy gradients by Andrej Karpathy. At 10:00, it shows an equation for supervised learning for image classification.

+

$$\max\sum _{i} \log p(y_i \mid x_i)$$

+

I have worked with image classification models before, but I always minimized a cost function (aka loss function). I have also never seen someone maximizing a cost function for image classification in the wild.

+
    +
  • So, what are the advantages of a minimizing loss function over a maximizing loss function in image classification?

    +
  • +
  • Other than RL, which problems do we solve by maximizing a cost function?

    +
  • +
+",39,,2444,,12/12/2021 12:39,12/12/2021 12:39,"In image classification, why do we usually minimize a cost function rather than maximizing it?",,1,1,,,,CC BY-SA 4.0 +15898,2,,8689,10/14/2019 2:11,,1,,"

It depends on the network used, as well as the feeding mechanism, but let's give an example.

+ +

When working with LSTMs, giving the time data (as an integer sequence) in addition to the time-series data (coming from the features) dramatically increases the performance of the network.

+ +

[$X_{0}$, $X_{1}$, ...] $\rightarrow$ [[$X_{0}$, $t_{0}$], [$X_{1}$, $t_{1}$], ...] +
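A minimal NumPy sketch of this augmentation (with made-up values) could look like this:

import numpy as np

x = np.array([3.1, 2.7, 4.0, 3.8, 4.2])   # a toy series with one feature per time step
t = np.arange(len(x))                      # the integer time index

x_with_time = np.column_stack([x, t])      # shape (timesteps, 2): [[3.1, 0], [2.7, 1], ...]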

+ +

If you look at the Kaggle competition winners' notebooks, they also create additional features based on the feature data.

+ +

Let's assume that the performance is already quite high on the three features so that you can predict those three features with high reliability. +It would only make sense to increase the number of features if you would like to predict additional features!

+",30351,,,,,10/14/2019 2:11,,,,0,,,,CC BY-SA 4.0 +15899,2,,15897,10/14/2019 4:22,,1,,"

There is really no difference between minimizing a cost function and maximizing a value function. One can be the reciprocal of the other, or the negative of the other, for example.

+",28348,,,,,10/14/2019 4:22,,,,1,,,,CC BY-SA 4.0 +15900,1,15919,,10/14/2019 4:33,,1,100,"

What are the standard (or baseline) problems (or at least common ones) for CNNs and LSTMs? As an example, for a feed-forward neural net, a common problem is the XOR problem.

+ +

Is there a standard problem like this for CNNs and LSTMs? I think for a CNN the standard test is to try it on MNIST, but I'm not sure of an LSTM.

+",26726,,2444,,10/14/2019 23:29,10/20/2019 15:07,What are the standard problems for CNNs and LSTMs?,,2,1,,,,CC BY-SA 4.0 +15901,2,,9105,10/14/2019 4:41,,1,,"

In my experience, the fitness function is a way to define the goal of a genetic algorithm. It provides a way to compare how ""good"" two solutions are, for example, for mate selection and for deleting ""bad"" solutions from the population.

+ +

The fitness function can also be a way to incorporate constraints, prior knowledge you may have about the shape of the fitness landscape, or the way your crossover/recombination operators will work in that fitness landscape.

+ +

For example, the fitness function can include hard constraints like ""Genes x, y, and z must all stay on one side of the surface $Ax +By +Cz = k$"" by assigning a fitness value of zero if the gene values are on the wrong side of the surface. However, it's often better in a case like that to soften the boundary by assigning a fitness penalty that is zero at the surface and grows larger as the gene values move farther from the surface on the wrong side.
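A minimal Python sketch of such a soft constraint (the names base_fitness and penalty_weight are hypothetical, not from any particular library) might look like this:

import numpy as np

def soft_constrained_fitness(genes, base_fitness, A=1.0, B=1.0, C=1.0, k=0.0, penalty_weight=10.0):
    # Penalize solutions whose (x, y, z) genes are on the wrong side of Ax + By + Cz = k
    x, y, z = genes
    violation = A * x + B * y + C * z - k
    penalty = penalty_weight * max(0.0, violation)   # zero at/behind the surface, grows with distance
    return base_fitness(genes) - penalty

# Example: the base fitness is just the negative distance from the origin
print(soft_constrained_fitness((0.2, -0.5, 0.1), lambda g: -np.linalg.norm(g)))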

+ +

Different fitness functions can be used for mate selection vs deleting ""bad"" trial solutions. For example, ""mating fitness"" between two potential parents A and B can be a function of how different the two parents are. By providing a mating advantage to pairs that are significantly different, the population can be forced to remain fairly diverse and thus explore a larger region of solution space, or to avoid converging to local (sub-optimum) fitness maxima. Meanwhile, the usual kind of fitness will cull the low-fitness individuals from the population and drive evolution toward high fitness.

+ +

What is often much more important is the set of variables (""genes"") used to represent a trial solution, how the genes are arranged in the ""chromosome"", and the ways genes from two parents can be combined to form a new trial solution. Since you didn't ask about those things I won't go into detail in this answer, but if you ask in a separate question I will provide a detailed answer.

+",28348,,28348,,10/14/2019 5:03,10/14/2019 5:03,,,,2,,,,CC BY-SA 4.0 +15902,2,,15900,10/14/2019 5:32,,1,,"

For LSTMs, I have not come across a standard test, but, when I started, I tried something like this.

+ +

Generate sequences of numbers like [0,1,2,3,4], [1,2,3,4,5], ... as the dataset; then your labels would be [5, 6, ...]. Train this using an LSTM network. This would be a good way to understand the parameters involved, and, by changing the number of layers and other parameters, you can easily check how it works.
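A minimal Keras sketch of this toy problem (assuming TensorFlow 2.x; the scaling and hyperparameters are arbitrary choices, so results will vary) could look like this:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy data: each input is 5 consecutive integers, the label is the next integer.
# Values are scaled down so the network trains comfortably.
X = np.array([[i, i + 1, i + 2, i + 3, i + 4] for i in range(100)], dtype=np.float32) / 100.0
y = np.array([i + 5 for i in range(100)], dtype=np.float32) / 100.0
X = X.reshape((100, 5, 1))      # an LSTM expects input of shape (samples, timesteps, features)

model = Sequential([LSTM(32, input_shape=(5, 1)), Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=300, verbose=0)

print(model.predict(X[:3]) * 100)   # should be roughly [5, 6, 7]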

+ +

Of course, MNIST is the standard test for CNNs.

+",26854,,,,,10/14/2019 5:32,,,,1,,,,CC BY-SA 4.0 +15903,1,,,10/14/2019 9:27,,9,4034,"

Recently, I always hear about the terms sim2sim, sim2real and real2real. Will anyone explain the meaning/motivation of these terms (in DL/RL research community)?

+ +

What are the challenges in this research area?

+ +

Anything intuitive would be appreciated!

+",30470,,2444,,10/14/2019 12:13,10/14/2019 12:13,"What are sim2sim, sim2real and real2real?",,1,0,,,,CC BY-SA 4.0 +15904,2,,15903,10/14/2019 12:04,,4,,"

The abbreviations sim2sim, sim2real and real2real refer to techniques that can be used to transfer knowledge from one environment (e.g. in simulation) to another one (e.g. in the real world).

+ +
    +
  • sim2sim stands for simulation-to-simulation,
  • +
  • sim2real stands for simulation-to-real, and
  • +
  • real2real stands for real-to-real.
  • +
+ +

In sim2sim, knowledge acquired during one simulation is transferred to an agent (or robot) in another simulation. Similarly, in sim2real, knowledge acquired during the simulation is used in a real-world problem (or environment). Finally, in real2real, knowledge acquired in a real-world problem can be transferred to another agent in another real-world problem.

+ +

The main challenges are related to the differences that exist between one environment and the other (either in simulation or in the real-world). For example, in sim2real, the simulation is almost never a perfect model of the real-world environment, so an agent trained in a simulation will probably not behave optimally in the real-world environment, which is often a lot more complex than the simulated environment. However, it is often the case that a robot needs to be trained in simulation, given that a robot trained in a real-world environment is subject to crashes.

+",2444,,2444,,10/14/2019 12:10,10/14/2019 12:10,,,,1,,,,CC BY-SA 4.0 +15906,2,,1997,10/14/2019 14:04,,1,,"

Random residual networks, for many non-linearities such as tanh, live on the edge of chaos, +in that the cosine distance of two input vectors will converge to a fixed point at a polynomial rate, rather than at an exponential rate, as with vanilla tanh networks. Thus a typical residual network will slowly cross the stable-chaotic boundary with depth, hovering around this boundary for many layers. Basically, it does not “forget” the geometry of the input space “very quickly”. So even if we make them considerably deep, they work better than the vanilla networks.

+ +

For more information on the propagation of information in residual networks - Mean Field Residual Networks: On the Edge of Chaos

+",30384,,,,,10/14/2019 14:04,,,,0,,,,CC BY-SA 4.0 +15908,5,,,10/14/2019 15:21,,0,,,2444,,2444,,10/14/2019 15:21,10/14/2019 15:21,,,,0,,,,CC BY-SA 4.0 +15909,4,,,10/14/2019 15:21,,0,,"For questions related to residual networks (ResNets), introduced in ""Deep Residual Learning for Image Recognition"" (2015) by Kaiming He et al. and that won the first place at ""Large Scale Visual Recognition Challenge 2015"" (ILSVRC2015).",2444,,2444,,10/14/2019 15:21,10/14/2019 15:21,,,,0,,,,CC BY-SA 4.0 +15910,1,15911,,10/14/2019 16:52,,5,2524,"

I am trying to learn tabular Q learning by using a table of states and actions (i.e. no neural networks). I was trying it out on the FrozenLake environment. It's a very simple environment, where the task is to reach a G starting from a source S avoiding holes H and just following the frozen path which is F. The $4 \times 4$ FrozenLake grid looks like this

+
SFFF
+FHFH
+FFFH
+HFFG
+
+

I am working with the slippery version, where the agent, if it takes a step, has an equal probability of either going in the direction it intends or slipping sideways perpendicular to the original direction (if that position is in the grid). Holes are terminal states and a goal is a terminal state.

+

Now I first tried value iteration which converges to the following set of values for the states

+
[0.0688909  0.06141457 0.07440976 0.05580732 0.09185454 0. 0.11220821 0.         0.14543635 0.24749695 0.29961759 0. 0.         0.3799359  0.63902015 0.        ]
+
+

I also coded policy iteration, and it also gives me the same result. So I am pretty confident that this value function is correct.

+

Now, I tried to code the Q learning algorithm, here is my code for the Q learning algorithm

+
def get_action(Q_table, state, epsilon):
+    """
+    Uses e-greedy to policy to return an action corresponding to state
+    
+    Args:
+        Q_table: numpy array containing the q values
+        state: current state
+        epsilon: value of epsilon in epsilon greedy strategy
+        env: OpenAI gym environment 
+    """
+    return env.action_space.sample() if np.random.random() < epsilon else np.argmax(Q_table[state]) 
+
+
+def tabular_Q_learning(env):
+    """
+    Returns the optimal policy by using tabular Q learning
+    
+    Args:
+        env: OpenAI gym environment
+        
+    Returns:
+        (policy, Q function, V function) 
+    """
+    
+    # initialize the Q table
+    # 
+    # Implementation detail: 
+    # A numpy array of |x| * |a| values
+    
+    Q_table = np.zeros((env.nS, env.nA))
+    
+    # hyperparameters
+    epsilon = 0.9
+    episodes = 500000
+    lr = 0.81
+
+    
+    for _ in tqdm_notebook(range(episodes)):
+        # initialize the state
+        state = env.reset()
+        
+        if episodes / 1000 > 21:
+            epsilon = 0.1
+        
+        t = 0
+        while True: # for each step of the episode
+            # env.render()
+            # print(observation)
+         
+            # choose a from s using policy derived from Q 
+            action = get_action(Q_table, state, epsilon) 
+            
+            # take action a, observe r, s_dash
+            s_dash, r, done, info = env.step(action)
+            
+            # Q table update 
+            Q_table[state][action] += lr * (r + gamma * np.max(Q_table[s_dash]) - Q_table[state][action])
+            
+            state = s_dash
+            
+            t += 1
+            
+            if done:
+                # print("Episode finished after {} timesteps".format(t+1))
+                break
+        # print(Q_table)
+    
+    policy = np.argmax(Q_table, axis=1)
+    V = np.max(Q_table, axis=1)
+    
+    return policy, Q_table, V
+
+

I tried running it, and it converges to a different set of values, namely the following: [0.26426802 0.03656142 0.12557195 0.03075882 0.35018374 0. 0.02584052 0. 0.37657211 0.59209091 0.15439031 0. 0. 0.60367728 0.79768863 0. ]

+

I don't understand what is going wrong. The implementation of Q-learning is pretty straightforward. I checked my code, and it seems right.

+

Any pointers would be helpful.

+",30479,,2444,,11/1/2020 16:49,11/1/2020 16:51,Why is my implementation of Q-learning not converging to the right values in the FrozenLake environment?,,1,0,,,,CC BY-SA 4.0 +15911,2,,15910,10/15/2019 0:56,,4,,"

I was able to solve the problem.

+

The main issue for non-convergence was that I was not decaying the learning rate appropriately. I put a decay rate of $-0.00005$ on the learning rate lr, and subsequently Q-Learning also converged to the same value as value iteration.
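For illustration, here is a minimal sketch of such a decay schedule (the decrement matches the value above, but the floor min_lr and the loop body are just placeholders, not my exact code):

episodes = 500000   # as in the question
lr = 0.81           # initial learning rate
decay = 0.00005     # decrement applied after every episode
min_lr = 0.01       # hypothetical floor so the learning rate never reaches zero

for episode in range(episodes):
    # ... run one episode of tabular Q-learning with the current lr ...
    lr = max(min_lr, lr - decay)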

+",30479,,2444,,11/1/2020 16:51,11/1/2020 16:51,,,,2,,,,CC BY-SA 4.0 +15912,1,,,10/15/2019 1:00,,1,32,"

We usually categorize objects in a hierarchy of classes, let's say crow vs. bird. In addition, classes can be ""messy""; for instance, a crow can also be a predator, but not all birds are predators.

+ +

My question is, can deep networks represent these hierarchies easily? Has anybody studied that? (I could not find anything at all).

+",30433,,,,,10/15/2019 9:27,Are there deep networks that can differentiate object class from individual object?,,1,0,,,,CC BY-SA 4.0 +15913,2,,12324,10/15/2019 2:43,,2,,"

Let's write down Fibonacci!

+ +
+

K = 0 1 1 2 3 5 ...

+
+ +

And another series that is derived from Fibonacci:

+ +
+

X = 1 4 7 12 30 $X_{5}$

+
+ +

Guessing $X_{5}$ is our task and both series are available to you (Fibonacci as an additional feature).

+ +

One unit that you feed with X will try to capture the relation of $X_{t}$ and $X_{t-1}$ only;

+ +

$X_{t} = X_{t-1} + X_{t-2}$

+ +

However, an additional unit that you insert and feed with K will not only try to capture the relation between $K_{t}$ and $K_{t-1}$, but also the relation between $K_{t}$ and $X_{t}$.

+ +

$K_{t} = K_{t-1} + 2 X_{t} + 1$

+ +

In the example above, there is a clear correlation between $K_{t}$ and $X_{t}$ (which is not always the case), and it will help the network capture the sequential relation. Even when the correlation is not crystal clear, almost every additional feature correlates with the other features and will help the network grasp the relation.

+",30351,,,,,10/15/2019 2:43,,,,0,,,,CC BY-SA 4.0 +15914,1,,,10/15/2019 7:14,,1,81,"

Sorry if this is a stupid question. I'm just starting out in ML and am working with gpt-2 for text generation.

+ +

My situation is that I have to generate text in a particular field, e.g. family businesses, which pretrained GPT-2 is unlikely to have had much ""training"" with. Besides the topic, I also need to generate the text in the style of one particular writer (e.g. incorporating their turns of phrase, etc.). This particular writer unfortunately hasn't written much about the family business topic, but has written about other topics.

+ +

It occurred to me that I can take gpt-2, finetune it on a large corpus of material on family businesses, and then finetune the new model on the written material of the particular writer.

+ +

Would this be the right way to achieve my objective of creating content on family businesses in the style of this particular writer?

+ +

Any suggestions on what sort of stuff I should keep in mind while doing this?

+ +

Any help is much appreciated.

+",30489,,,,,10/15/2019 7:14,Finetuning GPT-2 twice for particular style of writing on a particular topic,,0,0,,,,CC BY-SA 4.0 +15915,2,,15912,10/15/2019 9:27,,2,,"

In this case, you have an ontology and want to learn the ontology. There is a lot of research on this topic that you can find. However, the data could be the most challenging part. Some of the research:

+ + + +

Also, as these are some frameworks for ontology learning, you can use deep networks, such as RNNs, to learn the task.

+",4446,,,,,10/15/2019 9:27,,,,1,,,,CC BY-SA 4.0 +15916,1,,,10/15/2019 11:34,,2,498,"

I am trying to understand the best loss function to be used with a convolutional neural network. I came to know that we can mix two loss functions. Can anybody share in which cases this has been done, and how?

+",23737,,2444,,10/20/2019 15:04,10/22/2019 0:47,When and how to use a mix of loss functions for back-propagation?,,1,1,,,,CC BY-SA 4.0 +15917,1,,,10/15/2019 13:51,,1,30,"

I have been building a multilabel image classification model using Inception v3, which uses images of size 299x299. I have been wondering what the effects of feeding rectangular images (or images of arbitrary resolutions) are on the performance of the model, and, if I can define requirements for how the data should be to ensure optimal performance, what those requirements would be. +Intuitively, I think that square images would perform better than rectangular images. Is this true?

+",23866,,,,,10/15/2019 13:51,Resizing effects on image recognition,,0,0,,,,CC BY-SA 4.0 +15918,1,15920,,10/15/2019 16:04,,5,1063,"

I am working on a deep reinforcement learning problem. Throughout the episode, there is a small positive and negative reward for good or bad decisions. In the end, there is a huge reward for the completion of the episode. So, this reward function is quite sparse.

+

This is my understanding of how DQN works. The neural network predicts quality values for each possible action that can be taken from a state $S_1$. Let us assume the predicted quality value for an action $A$ is $Q(S_1, A)$, and this action allows the agent to reach $S_2$.

+

We now need the target quality value $Q_\text{target}$, so that using $Q(S_1, A)$ and $Q_\text{target}$ the temporal difference can be calculated, and updates can be made to the parameters of the value network.

+

$Q_\text{target}$ is composed of two terms. The immediate reward $R$ and the maximum quality value of the resulting state that this chosen action leaves us in, which can be denoted by $Q_\text{future} = \text{max}_a Q(S_2, a)$, which is in practice obtained by feeding the new state $S_2$ into the neural network and choosing (from the list of quality values for each action) the maximum quality value. We then multiply the discount factor $\gamma$ with this $Q_\text{future}$ and add it to the reward $R$, i.e. $Q_\text{target} = R + \gamma \text{max}_a Q(S_2, a) = R + \gamma Q_\text{future}$.

+

Now, let us assume the agent is in the penultimate state, $S_1$, and chooses the action $A$ that leads him to the completion state, $S_2$, and gets a reward $R$.

+

How do we form the target value $Q_\text{target}$ for $S_1$ now? Do we still include the $Q_\text{future}$ term? Or is it only the reward in this case? I am not sure if $Q_\text{future}$ even has meaning after reaching the final state $S_2$. So, I think that, for the final step, the target value must simply be the reward. Is this right?

+",17143,,2444,,11/1/2020 14:37,11/1/2020 14:37,How do we compute the target value when the agent ends up in the terminal state?,,1,1,,,,CC BY-SA 4.0 +15919,2,,15900,10/15/2019 16:12,,2,,"

It's more domain- or task-specific. There is no obvious baseline anymore, because these models and this field have evolved into too large an ecosystem. Nonetheless, I'll list a couple of notable examples below.

+ +

Image classification:

+ +
    +
  • MNIST
  • +
  • CIFAR
  • +
  • ImageNet
  • +
+ +

Detection/segmentation:

+ +
    +
  • PascalVOC
  • +
  • COCO
  • +
  • CityScapes
  • +
+ +

Pose estimation:

+ +
    +
  • MPII
  • +
  • LEEDS
  • +
+ +

Text classification:

+ +
    +
  • IMDB
  • +
  • yelp
  • +
+ +

Question answering:

+ +
    +
  • SQuAD
  • +
+ +

Translation:

+ +
    +
  • WMT
  • +
  • IWSLT
  • +
+ +

This is just a taste. There are tons more both in each category and the number of categories, a good source is the Papers with Code website.

+ +

Therefore, there is no single standard problem, given that there are too many that all in one shape or form use CNNs or RNNs (or others).

+",25496,,2444,,10/20/2019 15:07,10/20/2019 15:07,,,,0,,,,CC BY-SA 4.0 +15920,2,,15918,10/15/2019 17:39,,4,,"
+

Now, let us assume the agent is in the penultimate state, $S_1$, and +chooses the action $A$ that leads him to the completion state, $S_2$, +and gets a reward $R$.

+

How do we form the target value $Q_\text{target}$ for $S_1$ now? Do we still include the $Q_\text{future}$ term? Or is it only the reward +in this case?

+
+

Your term "completion state" is commonly called "terminal state". In a terminal state, there are no more actions to take, no more time steps, and no possibility to take any action. So, by definition, in your state $S_2$, the expected future reward is $0$.

+

Mathematically, this is often noted like $v(S_T) = 0$ or $q(S_T,\cdot) = 0$ with the $T$ standing for last time step of the episode, and dot standing in for the fact that no action need to be supplied, or the specific action value is not relevant. So, therefore using your terms, $Q_\text{future} = \text{max}_a Q(S_2, a) = 0$

+

That makes the equations work in theory, but does not explain what to do in code. In practice in your code, you would do as you suggest and use just the reward when calculating the TD target for $Q(S_1, A)$. This is typically done using an if block around the done condition e.g.

+
if done:
+  td_target = r
+else:
+  td_target = r + gamma * np.max(q_future_values) 
+end
+
+

Of course, the details depend on how you have structured and named your variables. You will find code similar to this in most DQN implementations though.

+

You should not really try to learn $V(S_2)$ or $Q(S_2, A)$, or calculate TD target starting from $S_2$, because the result should be $0$ by definition.

+",1847,,2444,,11/1/2020 14:37,11/1/2020 14:37,,,,0,,,,CC BY-SA 4.0 +15921,2,,15877,10/15/2019 18:07,,1,,"

You're quite right in your interpretation, but I'll answer in two parts in order to avoid confusion with respect to activation functions.

+ +

Part 1. (TL;DR: a neuron's weights are the normal vector of a hyperplane that divides the input space into two parts. The neuron's preactivation is proportional to the distance of the input point to the plane.) Every artificial neuron learns a linear relationship between its inputs. The most commonly recalled equation of a line is $y=m \cdot x+b$, but that's actually a very specific form that allows us to go through values of X of the line and see which values of Y they correspond to. A more general form would be $0=n \cdot y + m \cdot x + b$. This tells us that the line is formed by the points (X, Y) whose values make that series of calculations be zero. We can explore different values of (X, Y) and see that most of them give non-zero values, and that they give positive values on one side of the line and negative values on the other side. Only if you land just on the line will it give you a zero. This is a very important interpretation, because it's what allows neurons to find divisions of the input space (into a positive side and a negative side). Of course, it probably won't be a 2D space, so it will be a hyperplane instead of a line, but I hope you get the idea.

+ +

Part 2. However, if we only used linear transformations, we couldn't learn non-linear functions. Here's where the activation function plays a very important role: it distorts the neuron's preactivation value (which is linear) in a non-linear way (which is what makes the whole thing a non-linear function). Activation functions have lots of bells and whistles, which are too much to write here, but you can start thinking about them as distortions applied to that distance of the input point to the neuron's hyperplane. The one you saw is called ReLU, and it basically truncates the negative values, thus focusing only on the positive side of the hyperplane (it may be interpreted as measuring how far the point has crossed a frontier).

+",27444,,,,,10/15/2019 18:07,,,,0,,,,CC BY-SA 4.0 +15922,5,,,10/15/2019 20:13,,0,,,2444,,2444,,10/15/2019 20:13,10/15/2019 20:13,,,,0,,,,CC BY-SA 4.0 +15923,4,,,10/15/2019 20:13,,0,,"For questions related to the ExpectiMinimax algorithm (or tree), which is a variation of the minimax algorithm (or tree). In addition to ""min"" and ""max"" nodes of the traditional minimax tree, this variant has ""chance"" nodes (which take the expected value of a random event occurring), which are interleaved with the max and min nodes. The ExpectiMinimax was proposed in ""Game-playing and game-learning automata"" (1966) by Donald Michie.",2444,,2444,,10/15/2019 20:19,10/15/2019 20:19,,,,0,,,,CC BY-SA 4.0 +15924,1,15928,,10/15/2019 20:52,,1,77,"

I have a game/simulation that takes a vector of encoded sequences of moves (up, down, left, right). Let's say that these are sequential steps taken by an ant moving in a 2D space, starting from the origin. The moves are generated randomly.

+ +

I want to know, for any game, if the ant gets farther than a certain distance y from the origin (although it might be closer than y again by the end of the game). I would like to classify games into ""ant gets further away than y"", with a value of one, or zero for ""ant does not get further away than y"". I don't need an AI for this task; I have set this objective as a training goal for myself.

+ +

I am able to tell if the last position is past y or not using a regular feed-forward network; I believe this is easier because it amounts to summing up all the moves, regardless of their order. But the classifier still needs to return one if the ant got past y and then came back.

+ +

I thought I might be able to reach my objective through an RNN, encoding the moves as a sequence of one-hot encoded sequential directions to move towards. Currently, I am using one hidden layer (I tried with different sizes ranging from 10 to 100), backpropagating the loss only at the last step of a single training on a vector, but it seems like the RNN total loss doesn't decrease at all.

+ +

Is there any obvious flaw in my simulation, or in the neural network model? Is there a category of problems this could belong to?

+",30287,,30287,,10/16/2019 8:41,10/16/2019 8:41,Recurrent Neural Network to track distance from origin,,1,3,,,,CC BY-SA 4.0 +15926,2,,4301,10/16/2019 2:16,,1,,"

The biggest issue here may be similarity to prior work. As for the benchmarks, benchmarks are a common means for comparing algorithms. What it means here would be to compare the end-result (your chosen goal) for each of the algorithms compared in a similar scenario, or a generated test-scenario. This will mean that you will have to utilize all chosen algorithms in your test-game-map software. To make it more original, shake it up a little with some creative/disruptive element which should lower the performance of the algorithm. Find some way to quantify how close the algorithms actually come to meeting the stated goal (or not meeting goal, like how far off they were) with and without the disruptive element.

+",16959,,,,,10/16/2019 2:16,,,,0,,,,CC BY-SA 4.0 +15928,2,,15924,10/16/2019 6:47,,0,,"

This kind of problem does not really have a name other than ""toy problem"" since no-one needs to teach an AI to add up, multiply or divide* - there are already far more reliable and far faster ways to achieve that on any computer. What you are doing here is essentially vector addition, applying a distance metric then setting a true/false value based on a comparison. It would be 2 or 3 lines of code in most high-level programming languages.

+ +

Neural networks can learn any function though, so in theory what you want to do is possible. You should not expect results to be perfect, a statistical learner never actually learns the analytical form of a function or process, just the rough ""shape"" of it.

+ +

I have not done your experiment. However, your idea to use a RNN seems reasonable. With the details you have given, I can offer a few pieces of general advice:

+ +
    +
  • Use a modern RNN gated architecture, either LSTM or GRU. That's because the point in the sequence of moves where you want to set a ""distance exceeded"" toggle could be many steps away from the end of a game. The simplest RNNs (with direct loop backs from output to input within a layer) can easily suffer from vanishing gradients in this situation, whilst LSTM and GRU architectures are designed to deal with it.

  • +
  • Generate a lot of training data. You will need many examples of both categories before any neural network will home in on what is causing one class or the other to occur. The learning is based on statistics, not reason. (A minimal data-generation sketch in Python is shown after this list.)

  • +
  • Take a look at related LSTM examples that learn to add binary numbers. Repeat those even simpler experiments first, then move on to your own problem. This will avoid some beginner mistakes with poor choices of hyper-parameters, bad implementations, etc.

  • +
+ +
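A minimal sketch of such a data generator (the step encoding and the distance threshold are arbitrary choices):

import numpy as np

def make_example(n_steps=50, threshold=5.0):
    # 0=up, 1=down, 2=left, 3=right (an arbitrary encoding)
    moves = np.random.randint(0, 4, size=n_steps)
    steps = np.array([[0, 1], [0, -1], [-1, 0], [1, 0]])[moves]
    positions = np.cumsum(steps, axis=0)                      # the ant's path from the origin
    label = int(np.max(np.linalg.norm(positions, axis=1)) > threshold)
    return np.eye(4)[moves], label                            # one-hot move sequence, binary label

data = [make_example() for _ in range(10000)]
X = np.stack([seq for seq, _ in data])                        # shape (10000, 50, 4)
y = np.array([label for _, label in data])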
+ +

* Humans of course do use intelligence, reasoning and logic to learn reliable procedures for addition, multiplication and division. No doubt someone could be interested in how an AI can replicate that, without starting from built-in capabilties of a CPU (which of course the humans designed and built those procedures into the system at a low level). However, that's at a higher level of AI research than we are dealing with here.

+",1847,,1847,,10/16/2019 7:05,10/16/2019 7:05,,,,1,,,,CC BY-SA 4.0 +15930,2,,15730,10/16/2019 9:56,,1,,"

It's arguable whether we humans understand infinity. We just create new concepts to replace old mathematics when we meet this problem. +When dividing by infinity, a machine can understand it the same way as we do:

+ +
double* xd = new double;
+*xd = ...;                      // some value at the current (double) scale
+if (*xd / y < 0.00...1) {       // the value has become negligible at this scale
+    int* xi = new int;
+    *xi = (int) (*xd);          // switch to a coarser (integer) representation
+    delete xd;
+}
+
+ +

When a human thinks of infinity, they just imagine a huge number in their current context. So the key to writing the algorithm is just finding the scale that the AI is currently working with. And, by the way, this problem must have been solved years ago. The people designing float/double must have been conscious of what they were doing. Moving the exponent is a linear operation on a double.

+",30520,,30520,,10/16/2019 17:09,10/16/2019 17:09,,,,0,,,,CC BY-SA 4.0 +15931,2,,15730,10/16/2019 12:40,,0,,"

I think the property humans have, which computers do not, is some sort of parallel process that runs alongside everything else they are thinking and tries to assign an importance weighting to everything they are doing. +If you ask a computer to run the program: +A = 1; +DO UNTIL(A<0); + A = A + 1; +END;

+ +

The computer will. +If you ask a human, another process interjects with ""I'm bored now... this is taking ages... I'm going to start a new parallel process to examine the problem, project where the answer lies and look for a faster route to the answer ... +Then we discover that we are stuck in an infinite loop that will never be ""solved"".. and interject with an interrupt that flags the issue, kills the boring process and goes to get a cup of tea :-) +Sorry if that is unhelpful.

+",30524,,,,,10/16/2019 12:40,,,,1,,,,CC BY-SA 4.0 +15932,1,15935,,10/16/2019 13:57,,6,210,"

Science Fiction has frequently shown AI to be a threat to the very existence of mankind. AI systems have often been the antagonists in many works of fiction, from 2001: A Space Odyssey through to The Terminator and beyond.

+

The Media seems to buy into this trope as well. And in recent years we have had people like Elon Musk warn us of the dangers of an impending AI revolution, stating that AI is more dangerous than nukes.

+

And, apparently, experts think that we will be seeing this AI revolution in the next 100 years.

+

However, from my (albeit limited) study of AI, I get the impression that they are all wrong. I am going to outline my understanding below, please correct me if I am wrong:

+
    +
  • Firstly, all of these things seem to be confusing Artificial Intelligence with Artificial Consciousness. AI is essentially a system to make intelligent decisions, whereas AC is more like the "self-aware" systems that are shown in science fiction.

    +
  • +
  • Not AI itself, but intelligence and intelligent decision-making algorithms are something we've been working with and enhancing since before computers were around. Moving this over to an artificial framework is fairly easy. However, consciousness is still something we are learning about. My guess is we won't be able to re-create something artificially if we barely understand how it works in the real world.

    +
  • +
  • So, my conclusion is that no AI system will be able to learn enough to start thinking for itself, and that all our warnings of AI are completely unjustified.

    +
  • +
  • The real danger comes from AC, which we are a long, long way from realizing because we are still a long way off from defining exactly what consciousness is, let alone understanding it.

    +
  • +
+
+

So, my question is, assuming that my understanding is correct, are any efforts are being made by companies or organizations that work with AI to correct these popular misunderstandings in sci-fi, the media, and/or the public?

+

Or are the proponents of AI ambivalent towards this public fear-mongering?

+

I understand that the fear mongering is going to remain popular for some time, as bad news sells better than good news. I am just wondering if the general attitude from AI organizations is to ignore this popular misconception, or whether a concerted effort is being made to fight against these AI myths (but unfortunately nobody in the media is listening or cares).

+",30526,,2444,,12/18/2021 9:08,12/18/2021 9:08,"""AI will kill us all! The machines will rise up!"" - what is being done to dispel such myths?",,1,11,,,,CC BY-SA 4.0 +15934,2,,15730,10/16/2019 17:13,,1,,"

Well -- just to touch on the question of people and infinity -- my father has been a mathematician for 60 years. Throughout this time, he's been the kind of geek who prefers to talk and think about his subject over pretty much anything else. He loves infinity and taught me about it from a young age. I was first introduced to the calculus in 5th grade (not that it made much of an impression). He loves to teach, and at the drop of a hat, he'll launch into a lecture about any kind of math. Just ask.

+ +

In fact, I would say that there are few things he is more familiar with than infinity...my mother's face, perhaps? I wouldn't count on it. If a human can understand anything, my father understands infinity.

+",30534,,,,,10/16/2019 17:13,,,,0,,,,CC BY-SA 4.0 +15935,2,,15932,10/16/2019 17:42,,2,,"

Nothing.

+ +

It's in almost everyone's financial favor for it to stay that way. Having non-technical individuals associate AI with terminators creates a perception that the field has greater capabilities than it does $\rightarrow$ this leads to grants, funding, etc.

+ +

Is there any negative? Yes. Misconceptions always have drawbacks. We see the creation of dumb ethics boards and such cough cough Elon Musk.

+ +

But if history has anything to say about this, as the field gains popularity (which it is, dangerously quickly), information will spread by definition, and eventually misconceptions will be laid to rest.

+ +

Note that this answer is biased and based upon my own opinions.

+",25496,,,,,10/16/2019 17:42,,,,0,,,,CC BY-SA 4.0 +15936,1,24217,,10/16/2019 18:06,,4,382,"

The game ""Flow Free"" in which you connect coloured dots with lines is very popular. A human can learn techniques to play it.

+ +

I was wondering how an AI might approach it. There are certain rules of thumb that a human learns, e.g. connecting dots on the edges one should keep to the edge.

+ +

Most of the time it appears the best approach is a depth-first search, e.g. one tries very long paths to see if they work, combined with rules of thumb and inferences such as ""don't leave gaps"" and ""don't cut off one dot from another dot of the same colour"".

+ +

But there are ways to ""not leave gaps"", such as keeping within one square of another line, that humans seem able to grasp but which seem harder for an AI to learn.

+ +

In fact I wonder if the rule of thumb ""keep close to other lines"" might even require some kind of internal language.

+ +

I mean to even understand the rules of the game one would think one would need language. (Could an ape solve one of these puzzles? I doubt it.)

+ +

So basically I'm trying to work out how an AI could come up with these techniques for solving puzzles like Flow Free (techniques that might not work in all cases).

+ +

Perhaps humans have an innate understanding of concepts such as ""keep close to the wall"" and ""don't double back on yourself"" and can combine them in certain ways. Also, we are able to quickly spot simple regions bounded by objects.

+ +

I think a built-in understanding of ""regions"" would be key, along with the key concept that dots can't be joined unless they are in the same region. And we have got to a dead-end if:

+ +
    +
  1. There is an empty region
  2. +
  3. There is a region with a dot without its pair
  4. +
+ +

Still I don't think this is enough.
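
+ +

The region-based dead-end checks above can be sketched in code. The following is a minimal illustration (assuming a grid stored as a dict mapping cell coordinates to a colour or None, and a dict of not-yet-connected dots), not a complete solver; a depth-first search would extend one colour's path a cell at a time and call the dead-end test after every extension.

from collections import deque

def empty_cells(grid):
    # cells not yet covered by any path
    return {c for c, v in grid.items() if v is None}

def regions(cells):
    # connected components (4-neighbour flood fill) of a set of cells
    free, comps = set(cells), []
    while free:
        start = free.pop()
        comp, queue = {start}, deque([start])
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in free:
                    free.remove(n)
                    comp.add(n)
                    queue.append(n)
        comps.append(comp)
    return comps

def dead_end(grid, endpoints):
    # endpoints maps each still-unconnected dot (or current path head) to its colour;
    # flood-fill over the empty cells together with those endpoint cells
    for comp in regions(empty_cells(grid) | set(endpoints)):
        colours = [endpoints[c] for c in comp if c in endpoints]
        if not colours:
            return True   # an empty region that no remaining path can ever fill
        if any(colours.count(col) == 1 for col in set(colours)):
            return True   # a dot is cut off from its partner of the same colour
    return False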

+",4199,,4199,,10/16/2019 18:18,10/22/2020 20:29,How can an AI play Flow Free?,,1,4,,,,CC BY-SA 4.0 +15937,1,15944,,10/16/2019 19:00,,0,86,"

We are a team of computer science students working on our graduation project about Emotional Speech Synthesis.

+ +

We've found valuable information, like research papers and the WaveNet and Tacotron models, and a website (https://www.voicery.com/).

+ +

We were hoping to learn more from you.

+ +

We need more details: what should we start with to grasp the fundamentals needed to build this idea, what architecture should be used in this project, and whether there are papers, a GitHub repository containing helpful documentation, datasets, other resources, or prerequisite knowledge we should look at.

+",26390,,1671,,10/21/2019 21:06,10/21/2019 21:06,Emotional Speech Synthesis,,1,0,,1/18/2021 0:39,,CC BY-SA 4.0 +15938,2,,15730,10/16/2019 21:03,,1,,"

Humans certainly don't understand infinity. Currently computers cannot understand things that humans cannot because computers are programmed by humans. In a dystopian future that may not be the case.

+ +

Here are some thoughts about infinity. +The set of natural numbers is infinite. It has also been proved that the set of prime numbers, which is a subset of the natural numbers, is also infinite. So we have an infinite set within an infinite set. +It gets worse: between any 2 real numbers there is an infinite number of real numbers. +Have a look at the link to Hilbert's paradox of the Grand Hotel to see how confusing infinity can get - +https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel

+",30539,,,,,10/16/2019 21:03,,,,0,,,,CC BY-SA 4.0 +15939,1,,,10/16/2019 23:22,,1,93,"

I'm looking for an ""elevator pitch"" breakdown of areas of applications for Reinforcement Learning & Neural Networks vs. Genetic Algorithms, both actual and theoretical.

+ +

Links are welcome, but please provide some explanation.

+",1671,,2444,,10/17/2019 21:53,10/17/2019 21:53,"An ""elevator pitch"" breakdown of areas of applications for Reinforcement Learning & Neural Networks vs. Genetic Algorithms",,1,3,,,,CC BY-SA 4.0 +15940,2,,15939,10/16/2019 23:38,,3,,"

Your question suggests a confusion of techniques, representations and problems.

+ +
    +
  • Neural Networks are a representation that can be used to approximate functions. A neural network approximates a function that maps from inputs to outputs by optimizing parameters (weights).

  • +
  • Genetic Algorithms are a technique that can be used to optimize a problem. You might choose to use a GA to optimize the weights in a neural network, for instance (see the sketch after this list). Or you might use it to optimize a different representation or approximation of a function.

  • +
  • Reinforcement Learning is a problem. In a Reinforcement learning problem, the agent learns a function mapping states to actions. You can learn this function directly in some problem domains, or by a near-direct approximation (like tile-coding), or with a function approximator (like a neural network).

  • +
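
+ +

To make the distinction concrete, here is a minimal sketch (a toy example, not part of the original answer) in which the neural network is the representation, the genetic algorithm is the optimization technique, and the fitness function stands in for the return that a reinforcement learning problem would provide:

import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(32, 4))              # toy 'environment' observations
target = np.tanh(states @ np.ones(4))          # behaviour we would like the policy to produce

def policy(weights, s):
    # the neural network is just the representation: observation -> action
    return np.tanh(s @ weights)

def fitness(weights):
    # stand-in for the return an RL problem would provide
    return -np.mean((policy(weights, states) - target) ** 2)

population = [rng.normal(size=4) for _ in range(20)]
for _ in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]                                      # selection
    population = [p + 0.1 * rng.normal(size=4)                # mutation
                  for p in parents for _ in range(4)]
print('best fitness:', max(fitness(w) for w in population))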
+",16909,,,,,10/16/2019 23:38,,,,5,,,,CC BY-SA 4.0 +15941,2,,12026,10/16/2019 23:50,,0,,"

No, your tree is not an accurate representation of the problem.

+ +

Hint: Consider the case where both players flip an H on their first coin. Player 1 decides not to flip a second coin. Is the game over?

+",16909,,,,,10/16/2019 23:50,,,,0,,,,CC BY-SA 4.0 +15942,1,,,10/17/2019 0:43,,3,158,"

I have the following homework.

+ +
+

We proved Sauer's lemma by proving that for every class $H$ of finite VC-dimension $d$, and every subset $A$ of the domain,

+ +

$$ +\left|\mathcal{H}_{A}\right| \leq |\{B \subseteq A: \mathcal{H} \text { shatters } B\} | \leq \sum_{i=0}^{d}\left(\begin{array}{c}{|A|} \\ {i}\end{array}\right) +$$

+ +

Show that there are cases in which the previous two inequalities are strict (namely, the $\leq$ can be replaced by $<$) and cases in which they can be replaced by equalities. Demonstrate all four combinations of $=$ and $<$.

+
+ +

How can I solve this problem?

+",27112,,,user9947,3/31/2020 0:05,4/25/2021 2:35,How to show Sauer's Lemma when the inequalities are strict or they are equalities?,,0,0,,,,CC BY-SA 4.0 +15943,2,,15730,10/17/2019 1:27,,2,,"

John Doucette's answer covers my thoughts on this pretty well, but I thought a concrete example might be interesting. I work on a symbolic AI called Cyc, which represents concepts as a web of logical predicates. We often like to brag that Cyc ""understands"" things because it can elucidate logical relationships between them. It knows, for example, that people don't like paying their taxes, because paying taxes involves losing money and people are generally averse to that. In reality, I think most philosophers would agree that this is an incomplete ""understanding"" of the world at best. Cyc might know all of the rules that describe people, taxes, and displeasure, but it has no real experience of any of them.

+ +

In the case of infinity, though, what more is there to understand? I would argue that as a mathematical concept, infinity has no reality beyond its logical description. If you can correctly apply every rule that describes infinity, you've grokked infinity. If there's anything that an AI like Cyc can't represent, maybe it's the emotional reaction that such concepts tend to evoke for us. Because we live actual lives, we can relate abstract concepts like infinity to concrete ones like mortality. Maybe it's that emotional contextualization that makes it seem like there's something more to ""get"" about the concept.

+",30545,,,,,10/17/2019 1:27,,,,0,,,,CC BY-SA 4.0 +15944,2,,15937,10/17/2019 2:57,,0,,"

Since I am working on audio synthesis projects, I suggest you go through the recently released paper ""MelNet: A Generative Model for Audio in the Frequency Domain"". In this architecture, human-like speech is synthesized, and I think this might be better than WaveNet, where the generated audio has a somewhat robotic character. I think this will definitely help you with emotional speech synthesis, as it generates human-like voices. +You can check the online demos as well.

+ +

Demo

+ +

Here Is the Paper!

+ +

Related YouTube Video

+ +

Hope this helps!

+",26854,,26854,,10/17/2019 6:52,10/17/2019 6:52,,,,3,,,,CC BY-SA 4.0 +15945,1,,,10/17/2019 4:31,,2,43,"

I have a hypothetical example that is close to my research problem:

+ +

Assume you are a boss and you have different types of tasks that you need to assign to your employees: a sensitive task (very classified), and a task that requires high skills. So you need to assign the sensitive task (a government document) to a trusted employee, while the other task (e.g. statistical analysis) can be assigned to an employee who is more creative and smart. Now, every day you have many tasks that need to be done, and a large number of employees along with a number of crowdsourced workers (freelancers).

+ +

You have an outcome history of trust and performance, along with the failure rate of the tasks assigned to these employees on each day, as:

+ +

+ +

As you can see here, on day 1 the trust of emp 111 is good, so on that day he had a low failure rate on the sensitive task, while his performance is low, which made the other tasks fail a lot.

+ +

So now assume you have a sensitive task coming, and you have a pool of workers.

+ +

The basic equation might not be good here: Trust + Performance. I need to weigh each factor based on the type of task.

+ +

Trust x w1 + Performance x w2, where w1 is a high coefficient when a sensitive task is coming.
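
+ +

A small sketch of how such a task-type-dependent weighting could look in code; the weight values and worker records below are made-up illustrations, and in practice the weights could themselves be fitted from the historical failure rates (e.g. with a logistic regression):

def score(worker, task_type):
    # task-type-dependent weights: sensitive tasks emphasise trust,
    # skill tasks emphasise performance (illustrative values only)
    w1, w2 = (0.8, 0.2) if task_type == 'sensitive' else (0.3, 0.7)
    return w1 * worker['trust'] + w2 * worker['performance']

workers = [
    {'name': 'emp111', 'trust': 0.9, 'performance': 0.4},
    {'name': 'emp222', 'trust': 0.5, 'performance': 0.8},
]
best = max(workers, key=lambda w: score(w, 'sensitive'))
print(best['name'])   # emp111: trust dominates for a sensitive task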

+ +

Any idea of how I can model these issues?

+",30551,,,,,10/17/2019 4:31,Assigning Weighting Factors,,0,3,,,,CC BY-SA 4.0 +15946,1,16085,,10/17/2019 12:15,,6,407,"

Although I have a decent background in math, I'm trying to understand which courses from CS and logic to look into. My aim is to get into a Machine Learning PhD program.

+",30568,,,,,10/25/2019 20:05,Which courses in computer science and logic are relevant to Machine Learning?,,3,0,,,,CC BY-SA 4.0 +15947,1,,,10/17/2019 12:27,,3,292,"

I need to write a minimax algorithm with alpha-beta pruning in limited time for the 2048 game. I know expectimax is better for this work.

+ +

Assume I wrote different heuristic functions. If I want to write an evaluation function as a linear combination of these heuristic functions, do I have to give random weights, or can I calculate the optimal weights with some optimization algorithm?
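
+ +

For what it's worth, here is a hedged sketch of both parts of the question: an evaluation function as a linear combination of heuristics, and a crude way to tune the weights by hill climbing on the average game score. The heuristics and the play_game callback are assumed placeholders, and a proper optimizer (e.g. CMA-ES or a genetic algorithm) would normally replace the random search:

import random

def evaluate(board, weights, heuristics):
    # evaluation as a linear combination of heuristic scores
    return sum(w * h(board) for w, h in zip(weights, heuristics))

def tune(weights, heuristics, play_game, games=20, steps=200):
    # crude hill climbing on the weights, using the average game score as the objective
    def objective(ws):
        return sum(play_game(lambda b: evaluate(b, ws, heuristics))
                   for _ in range(games)) / games
    best, best_score = list(weights), objective(weights)
    for _ in range(steps):
        candidate = [w + random.gauss(0, 0.1) for w in best]
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best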

+",30569,,2444,,10/17/2019 15:49,10/17/2019 15:49,How to choose the weights for a linear combination of heuristic functions?,,0,3,,,,CC BY-SA 4.0 +15949,1,15957,,10/17/2019 14:16,,1,120,"

Is artificial intelligence and, in particular, neural networks being used in real-world critical applications and devices?

+ +

I had a discussion with my colleague who states that nobody would use artificial intelligence, especially neural nets, for critical stuff, like technical devices or sensors.

+ +

I'm only aware of the problem of neural nets being so-called black-boxes, but, nevertheless, I think it is possible to make an NN robust so that it matches the demands of daily processes, also in sensitive fields like health care, the energy market, self-driving cars, and so on. Yet I cannot substantiate this.

+ +

Does somebody have more insights or other information, opinions and so on? I appreciate any meaningful answer.

+",26353,,2444,,10/17/2019 22:31,10/18/2019 5:40,"Is artificial intelligence and, in particular, neural networks being used in real-world critical applications?",,2,2,,,,CC BY-SA 4.0 +15950,1,,,10/17/2019 16:05,,3,64,"

Consider a Bayesian classifier used in spam e-mail filtering. It converts an e-mail to a vector, most of the time using the bag-of-words method. Although it is trained before being deployed, it can be made to work as an online system, i.e. it can be used to filter and learn from examples even after deployment.

+ +

On the other hand, there is the perceptron. It calculates a mean vector for spam and another for not-spam, and then classifies e-mails into the appropriate categories. The model adjusts the mean vectors each time it makes a mistake.
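
+ +

One possible reading of that description, sketched in code (this is a nearest-mean style classifier with online corrections, not the textbook perceptron update; the learning rate and the zero initialisation are assumptions):

import numpy as np

class MeanVectorClassifier:
    # keeps one mean vector per class and assigns a message to the nearer one,
    # nudging the means whenever it makes a mistake (online updates)
    def __init__(self, n_features, lr=0.1):
        self.means = {0: np.zeros(n_features), 1: np.zeros(n_features)}
        self.lr = lr

    def predict(self, x):
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

    def update(self, x, label):
        pred = self.predict(x)
        if pred != label:
            self.means[label] += self.lr * (x - self.means[label])   # pull correct class closer
            self.means[pred]  -= self.lr * (x - self.means[pred])    # push wrong class away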

+ +

Then come neural nets; they too are capable of taking a vector, like a bag of words or the image pixels of dogs and cats, and classifying it into yes or no.

+ +

So, while designing and implementing them into the system, how to determine which one of the methods (Bayesian classifier, perceptron or neural network) is the most appropriate for a given situation or task? One factor to consider is the time complexity (or speed), but what are other factors, and how to rank them?

+",,user27450,2444,,10/17/2019 22:09,10/18/2019 2:18,How do I determine the most appropriate classifier for a certain problem?,,1,0,,,,CC BY-SA 4.0 +15951,1,15963,,10/17/2019 16:19,,2,143,"

In my experience with Neural Nets, I have only used them to take input vectors and return binary output.

+ +

But, here in a video, https://youtu.be/ajGgd9Ld-Wc?t=214, Kai-Fu Lee, a renowned AI expert, shows a deep net which takes thousands of samples of Trump's speeches and generates output in the Chinese language.

+ +

In short, how can deep nets/neural nets be used to generate output rather than giving an answer of yes or no? Additionally, how are these nets trained? Can anyone here provide me with a simple design for nets that are capable of doing that?

+",,user27450,25496,,10/18/2019 14:06,10/19/2019 1:42,How can neural networks be used to generate rather than classify?,,2,0,,,,CC BY-SA 4.0 +15952,2,,15951,10/17/2019 17:37,,1,,"

If the output can either be yes or no, then you have a discrete and binary output, so this problem is called binary classification, that is, it is the task of classifying (or categorizing) the input into one of two categories (or classes). You can also have a neural network with an output that can take more than two possible discrete values, which can be used to solve a multi-class classification problem. For example, a neural network that outputs a sentence, which is composed of $n$ words, where $n>1$. In general, the output does not necessarily need to take a value from a set of discrete values (or classes), but it can also take a numeric value (e.g. a floating-point number). In that case, the problem is called regression. For example, the task of predicting the height of a person (a numeric value) given a picture of the same.
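
+ +

As a minimal illustration (assuming TensorFlow/Keras, which is not implied by the question), the practical difference between the two setups is mostly the output layer and the loss:

import tensorflow as tf

# binary classification: one sigmoid unit, cross-entropy loss
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
clf.compile(optimizer='adam', loss='binary_crossentropy')

# regression: one linear unit, squared-error loss
reg = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
reg.compile(optimizer='adam', loss='mse')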

+ +

There are different types of neural networks. The simplest neural network is either a perceptron (if you consider it a neural network) or a multi-layer feed-forward neural network, that is, a neural network with only forward connections, with possible multiple layers. There are also convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are more sophisticated neural networks that are more suited for processing respectively imagery or sequences. There are also generative neural networks (e.g. variational auto-encoders), which are trained to learn a distribution, from which you can then sample.

+ +

In your specific example, the sentence could have been generated with a recurrent neural network or a generative model (or a combination of both). More precisely, a recurrent generative model could have been trained to learn the rules of either the English or Chinese language. Then you sample from this distribution to generate sentences. In principle, a sentence could also be generated with simpler neural networks (such as a multi-layer perceptron), but, in practice, this may be more inefficient.

+",2444,,2444,,10/17/2019 17:48,10/17/2019 17:48,,,,0,,,,CC BY-SA 4.0 +15954,2,,15946,10/17/2019 18:23,,1,,"

In several projects, I found data analysis and data structures to be critical. Machine Learning requires huge amounts of data and, most likely, the data will come from multiple sources. Prior to use, data requires analysis, cleaning, interpretation, feature engineering (subject matter expertise), and structure.

+",30577,,,,,10/17/2019 18:23,,,,1,,,,CC BY-SA 4.0 +15957,2,,15949,10/17/2019 23:18,,1,,"

This is as much an ethical concern as a practical one.

+ +

AI systems are already reaching or exceeding human performance in many critical areas. Consider the detection of common cancers, where AI systems match or exceed humans. Another good example is Tesla's Autopilot, which is actually safer than human drivers, but gets a lot of bad press when it makes a mistake. Both of these systems are likely using Neural Networks, possibly alongside other heuristic or rule-driven approaches.

+ +

The issue isn't whether these systems can be ""safe enough"" for everyday use. They are safe enough in a societal sense already. The concern is that the people who die when these systems make mistakes are randomly selected, whereas the people who die when a human performs the work die because a human makes a mistake (usually). This is difficult to accept for the same reason that some people are scared of flying, even though it is many times safer than driving the same distance: there is a loss of control, and ""good"" people may die through no fault of their own, or perhaps through events that are no one's fault.

+ +

Whether we use these systems will thus probably depend on the application. In Medicine it's easy to see a case: we can do more tests than before for the same price. People who can afford to have a doctor review the machine's decisions are probably no worse off. People who couldn't afford this already are better off (they get a diagnosis with some positive predictive power value now, instead of none at all). In driving, it's more complicated, and will probably require further development. No one knows for sure how good self-driving cars can get, but they'll probably get somewhat better over time. Maybe they'll get good enough that they more or less never kill people, or maybe they won't.

+ +

I actually think your friend is way off about the use cases he thinks AI is not suitable for though. Check out this article in Military Embedded Systems. In applications where decisions are made on a pure cost/benefit basis (and not based on people's gut feelings about morality), AI systems are actually easier to adopt, and are already often better than human operators. This trend seems likely to increase in the future, so ""technical devices and sensors"", which are often black-boxes to the lay public anyway, seem like they are among the first things to go.

+",16909,,16909,,10/18/2019 2:08,10/18/2019 2:08,,,,8,,,,CC BY-SA 4.0 +15958,2,,15950,10/18/2019 2:18,,1,,"

This is one of the main skills that separates someone with a deep understanding of, and experience in, machine learning from a neophyte. There are several approaches:

+ +
    +
  1. Try several methods, perhaps with automated hyperparameter optimization, and see if there's a big difference in typical model quality. This is pretty common if you don't have a lot of experience, but also something experts may try in a more targeted way.
  2. +
  3. Visualize the shape of your problem, perhaps by using a dimensionality reduction technique like PCA or tSNE, or maybe an auto-encoder (see the sketch after this list). If you compress the data to 2d, are there clear linear patterns? Maybe try a linear model like logistic regression. Are there several distinct groups? How many lines would you need to draw to separate them? If it's a lot, maybe you're going to need a very non-linear model. If it's just a few, maybe a small multi-layer perceptron can help. If there are spiral bands or circular shapes, maybe an SVM with a non-linear kernel. Knowing how to translate the visualizations into intuition about the kinds of model that can help is an advanced skill. You need to understand what shapes of patterns each kind of model can learn to fit, and how these do or do not translate into a higher dimensional space.

  4. +
  5. Read the literature. If you're working in computer vision, you should try a CNN. Why? Well, everyone else is using them. They work great on most computer vision problems. They hold most competition records. It'd be silly not to try them. If you're working on spam classification though, CNNs are a bad choice. People use compression classifiers, Bayesian models, and sometimes multi-stage models to build complex features. If you look at the recent literature in your area, you can tell what to use. Reading and understanding this literature well enough to interpret it usually requires more advanced scientific training in ML and/or the application area.

  6. +
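
+ +

A minimal sketch of the visualization approach mentioned above (assuming scikit-learn and matplotlib; tSNE or an auto-encoder could be swapped in for PCA):

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_2d(X, y):
    # compress the features to 2 dimensions and colour points by class,
    # just to get a feel for how separable / non-linear the problem looks
    coords = PCA(n_components=2).fit_transform(X)
    plt.scatter(coords[:, 0], coords[:, 1], c=y, s=10)
    plt.xlabel('PC 1')
    plt.ylabel('PC 2')
    plt.show()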
+ +

As a final note, the No Free Lunch theorems for ML tell us that this is always going to be an art (at least, that's one interpretation experts argue for), and not a science, so the more practice you get with it, the better you'll become.

+",16909,,,,,10/18/2019 2:18,,,,0,,,,CC BY-SA 4.0 +15959,1,15962,,10/18/2019 3:28,,1,139,"

For a single neuron with 2 weights, I can plot the loss landscape and it looks like this (OR data, sigmoid activation, MAE loss):

+ +

+ +

But when the neuron accepts more inputs, which means more than 2 weights are required, or when there are more neurons and more layers in the network, how should the 3D loss landscape be plotted?

+",2844,,2844,,10/18/2019 6:56,10/18/2019 7:59,How to plot Loss Landscape with more than 2 weights in the network,,1,4,,,,CC BY-SA 4.0 +15960,2,,15949,10/18/2019 5:40,,1,,"

There is significant lag between overseas military operations and human operators back in the States. Because of this, the military is offloading more functionality to remotely piloted drones and convoys. The same will be the case once we have functional rescue bots. Look up Boston Dynamics if you don't think robots will be entering fires or ravines in the near future.

+",28098,,,,,10/18/2019 5:40,,,,1,,,,CC BY-SA 4.0 +15962,2,,15959,10/18/2019 7:59,,0,,"

It does not seem possible to plot loss values (z) against all combinations of weights in all layers, especially when the network is big, with thousands or millions of params; in that case, the number of points to plot is far too big.

+ +

And also, the 3D space can't be used to plot more than 3 dimensions.

+ +

However, with a deep network with lots of weights, these can be plotted:

+ +
    +
  • Loss value against any pair of 2 weights
  • +
  • Turn the layer right before output layer (single neuron) into a layer of 2 neurons, and loss can be plotted against these 2 weights (but doesn't make much sense as the meaning of loss value depends on all other weights also)
  • +
+ +

Example plot when there are 2 neurons in the layer right before output layer (of 1 neuron):
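
+ +

A sketch of how the first option could be produced: sweep two chosen weights over a grid, keep every other weight fixed, and plot the resulting loss surface. The loss_fn callback (which should rebuild the network from a flat weight vector and return the loss) is an assumption of this sketch:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

def plot_pair_surface(loss_fn, weights, i, j, span=2.0, steps=50):
    # sweep weights[i] and weights[j] over a grid around their current values,
    # keep every other weight fixed, and record the loss at each grid point
    base = np.asarray(weights, dtype=float)
    a = np.linspace(base[i] - span, base[i] + span, steps)
    b = np.linspace(base[j] - span, base[j] + span, steps)
    Z = np.zeros((steps, steps))
    for r, wi in enumerate(a):
        for c, wj in enumerate(b):
            w = base.copy()
            w[i], w[j] = wi, wj
            Z[r, c] = loss_fn(w)
    A, B = np.meshgrid(a, b, indexing='ij')
    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(A, B, Z)
    ax.set_xlabel(f'w{i}')
    ax.set_ylabel(f'w{j}')
    ax.set_zlabel('loss')
    plt.show()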

+ +

+",2844,,,,,10/18/2019 7:59,,,,0,,,,CC BY-SA 4.0 +15963,2,,15951,10/18/2019 14:00,,2,,"

Think of a neural network as a universal function approximator (With infinite width under a set of constraints this is actually provable). Now when discussing generation in the context you have provided, you essentially want to draw from some distribution $p(y|c)$ where $y$ is your output and $c$ is your context or input.

+ +

Theorem: For any distribution $\Omega$, if we take $z \sim \mathcal{N}(0,I)$, there exists a function $f$ where $f(z) \sim \Omega$.

+ +

Given the above theorem (for the purposes of this post I don't need to prove it, but it's very similar to the universal approximation theorem proof) and if we take neural networks as a pseudo-universal function approximator, if we have a valid objective or training procedure that can learn the parameters of $f$, sampling is as easy as sampling $\mathcal{N}(0,I)$ and then applying $f$.

+ +

So the trick really is finding a good training procedure, and this is where you see GANs, VAEs and other models/schemes come into play.

+ +

Everything I've said above works really well when there isn't autocorrelation; but when there is, as in text, the above methodology would result in a combinatorially large output space, which isn't realistic with a vocabulary size usually spanning somewhere between a couple thousand and a couple hundred thousand. So, to handle this, the joint is modeled by taking advantage of that autocorrelation, writing the joint probability as its Bayesian decomposition: +$$p(\vec w) = p(w_0)\prod_{i=1}^{N-1}p(w_i|w_{<i})$$
+Now that there is a framework to efficiently model this type of output, we're back in the same position as before, where we are looking for clever training schemes. In this case you'll commonly see RNNs or other sequential models trained with teacher forcing (@nbro described this in his answer too), or GAN-like compositions that use either reinforcement learning to handle the lack of differentiability in sampling, or approximations like Gumbel-Softmax or Intermediate Loss Sampling (a method I actually developed).
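
+ +

Sampling from that factorization is then just a loop; a minimal sketch (next_token_probs is an assumed callback standing in for the trained sequential model):

import numpy as np

def sample_sequence(next_token_probs, max_len, eos_id):
    # draws w_0, then w_i ~ p(w_i | w_<i), following the factorisation above;
    # next_token_probs(prefix) is assumed to return a probability vector over the vocabulary
    rng = np.random.default_rng()
    prefix = []
    for _ in range(max_len):
        probs = next_token_probs(prefix)
        token = int(rng.choice(len(probs), p=probs))
        if token == eos_id:
            break
        prefix.append(token)
    return prefix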

+ +

I hope this answered your question.

+",25496,,25496,,10/19/2019 1:42,10/19/2019 1:42,,,,5,,,,CC BY-SA 4.0 +15964,1,15967,,10/18/2019 14:22,,1,85,"

Below I have a validation plot. +How should I interpret this validation plot? +Is my data underfitting? What else can be seen from this? +Which one is the best?

+ +

What does it mean that the right line keeps growing while the green line decreases (slightly), for example after 15?

+ +

+ +

Second random forest +

+",30599,,30599,,10/19/2019 11:53,10/19/2019 11:53,How should I interpret this validation plot?,,1,0,,,,CC BY-SA 4.0 +15965,1,,,10/18/2019 15:16,,1,1836,"

Is it possible and how trivial (or not) might it be (if possible) to retrain GPT-2 on time-series data instead of text?

+",3861,,2444,,11/1/2019 3:17,4/13/2023 6:11,Is it possible to use the GPT-2 model for time-series data prediction?,,2,0,,,,CC BY-SA 4.0 +15966,1,,,10/18/2019 15:30,,1,169,"

I am trying to reproduce results presented in this paper. On page 4, the authors state:

+ +
+

... we train for 50 epochs (one epoch consists of 19*2*50 = 1900 full + episodes), which amounts to a total of 4.75*10^6 timesteps.

+
+ +

The 1900 episodes are broken down into Rollouts per MPI worker (2) * Number of MPI Workers (19) * Cycles per epoch (50), as shown in the hyper parameters section on page 10.

+ +

When testing on my local machine, using the GitHub Baselines repo, I am using 1 MPI worker and the following hyperparams:

+ +
'n_cycles': 50,  # per epoch
+'rollout_batch_size': 2,  # per mpi thread
+
+ +

By the same calculation, this means that I should have 1*50*2 = 100 episodes per epoch.

+ +

However, when I run HER on FetchReach-v1, it turns out I only have 10 episodes per epoch. Here is a log sample:

+ +
Training...
+---------------------------------
+| epoch              | 0        |
+| stats_g/mean       | 0.893    |
+| stats_g/std        | 0.122    |
+| stats_o/mean       | 0.269    |
+| stats_o/std        | 0.0392   |
+| test/episode       | 10       |
+| test/mean_Q        | -0.602   |
+| test/success_rate  | 0.5      |
+| train/episode      | 10       |  <-- 10 episodes/epoch
+| train/success_rate | 0        |
+---------------------------------
+
+ +

Why is there this discrepancy? Any suggestions would be appreciated.

+",14390,,14390,,10/19/2019 6:39,10/19/2019 6:39,Reinforcement learning number of episodes per epoch not matching with paper,,0,0,,,,CC BY-SA 4.0 +15967,2,,15964,10/18/2019 23:18,,2,,"

This is a sign of overfitting.

+ +

As you make your trees deeper, it becomes possible to ""memorize"" the data: each leaf of the tree is just a single point. The trees begin to learn patterns that do not exist. When you try out these patterns on new data (which is what cross-validation is imitating), then the patterns do not work, and your model fails to generalize.

+ +

The main piece of information to draw from this plot is that the optimum tree depth is about 15.
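
+ +

If it helps, a plot like this can be reproduced with a validation curve over the tree depth; a minimal scikit-learn sketch (the depth range and cv value are arbitrary choices):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

def depth_curve(X, y):
    depths = np.arange(2, 31, 2)
    train_scores, val_scores = validation_curve(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X, y, param_name='max_depth', param_range=depths, cv=5)
    # the training score keeps rising with depth, while the cross-validation
    # score flattens or drops once the trees start memorising the data
    return depths, train_scores.mean(axis=1), val_scores.mean(axis=1)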

+",16909,,,,,10/18/2019 23:18,,,,5,,,,CC BY-SA 4.0 +15968,2,,15965,10/19/2019 1:36,,1,,"

Definitely! But at that point it would be training a transformer decoder (GPT-2's architecture) and not GPT-2, because GPT-2 is defined by the weights / training procedure / data it was trained on and not the architecture, and I don't think it would transfer properly to time series.
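
+ +

If you did want to train that architecture on a time series, one common trick is to quantise the continuous values into a finite vocabulary of tokens; a rough sketch (the bin count is an arbitrary choice):

import numpy as np

def series_to_tokens(series, n_bins=256):
    # quantise a continuous series into integer tokens so a decoder-only
    # transformer can be trained on it like a sequence of words
    lo, hi = series.min(), series.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    tokens = np.clip(np.digitize(series, edges) - 1, 0, n_bins - 1)
    return tokens, edges

def tokens_to_series(tokens, edges):
    # map tokens back to the centre of their bin
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[tokens]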

+",25496,,,,,10/19/2019 1:36,,,,1,,,,CC BY-SA 4.0 +15970,1,,,10/19/2019 6:07,,-1,67,"

My full code is as follows. I have tried to whittle it down to just the code that matters, but the problem I have is that I'm not sure what part of my network code is producing the problem. I've removed the code that loads and sifts through the CSV data because then my code would be too long.

+ +
#include <iostream>
+#include <array>
+#include <random>
+#include <chrono>
+#include <iomanip>
+#include <fstream>
+#include <algorithm>
+#include <iomanip>
+#include <variant>
+#include <unordered_set>
+
+
+typedef std::variant<std::string,std::uint_fast16_t,bool,float> CSVType;
+
+/* ... functions to load CSV data ... */
+
+typedef float DataType;
+typedef DataType (*ActivationFuncPtr)(const DataType&);
+
+DataType step(const DataType& x, const DataType& threshold)
+{
+    return x >= threshold ? 1 : 0;
+}
+
+DataType step0(const DataType& x)
+{
+    return step(x,0);
+}
+
+DataType step05(const DataType& x)
+{
+    return step(x,0.5);
+}
+
+DataType sigmoid(const DataType& x)
+{
+    return DataType(1) / (DataType(1) + std::exp(-x));
+}
+
+DataType sigmoid_derivative(const DataType& x)
+{
+    return x * (DataType(1) - x);
+}
+
+template<std::size_t NumInputs>
+class Neuron
+{
+public:
+
+    Neuron()
+    {
+        RandomiseWeights();
+    }
+
+    void RandomiseWeights()
+    {
+        std::generate(m_weights.begin(),m_weights.end(),[&]()
+        {
+            return m_xavierNormalDis(m_mt);
+        });
+        m_biasWeight = 0;
+
+        for(std::size_t i = 0; i < NumInputs+1; ++i)
+            m_previousWeightUpdates[i] = 0;
+    }
+
+    DataType FeedForward(const std::array<DataType,NumInputs> inputValues)
+    {
+        DataType output = m_biasWeight;
+        for(std::size_t i = 0; i < inputValues.size(); ++i)
+            output += inputValues[i] * m_weights[i];
+
+        m_inputValues = inputValues;
+
+        return output;
+    }
+
+    std::array<DataType,NumInputs> Backpropagate(const DataType& error)
+    {
+        std::array<DataType,NumInputs> netInputOverWeight;
+        for(std::size_t i = 0; i < NumInputs; ++i)
+        {
+            netInputOverWeight[i] = m_inputValues[i];
+        }
+
+        DataType netInputOverBias = DataType(1);
+
+        std::array<DataType,NumInputs> errorOverWeight;
+        for(std::size_t i = 0; i < NumInputs; ++i)
+        {
+            errorOverWeight[i] = error * netInputOverWeight[i];
+        }
+
+        DataType errorOverBias = error * netInputOverBias;
+
+        for(std::size_t i = 0; i < NumInputs; ++i)
+        {
+            m_outstandingWeightAdjustments[i] = errorOverWeight[i];
+        }
+        m_outstandingWeightAdjustments[NumInputs] = errorOverBias;
+
+        DataType errorOverNetInput = error;
+
+        std::array<DataType,NumInputs> errorWeights;
+        for(std::size_t i = 0; i < NumInputs; ++i)
+        {
+            errorWeights[i] = errorOverNetInput * m_weights[i];
+        }
+
+        return errorWeights;
+    }
+
+    void AdjustWeights(const DataType& learningRate, const DataType& momentum)
+    {
+        for(std::size_t i = 0; i < NumInputs; ++i)
+        {
+            DataType adjustment = learningRate * m_outstandingWeightAdjustments[i] + momentum * m_previousWeightUpdates[i];
+            m_weights[i] = m_weights[i] - adjustment;
+            m_previousWeightUpdates[i] = adjustment;
+        }
+        DataType adjustment = learningRate * m_outstandingWeightAdjustments[NumInputs] + momentum * m_previousWeightUpdates[NumInputs];
+        m_biasWeight = m_biasWeight - adjustment;
+        m_previousWeightUpdates[NumInputs] = adjustment;
+    }
+
+    const std::array<DataType,NumInputs>& GetWeights() const
+    {
+        return m_weights;
+    }
+
+    const DataType& GetBiasWeight() const
+    {
+        return m_biasWeight;
+    }
+
+protected:
+
+    static std::mt19937 m_mt;
+    static std::uniform_real_distribution<DataType> m_uniformDisRandom;
+    static std::uniform_real_distribution<DataType> m_xavierUniformDis;
+    static std::normal_distribution<DataType> m_xavierNormalDis;
+
+    std::array<DataType,NumInputs> m_weights;
+    DataType m_biasWeight;
+
+    std::array<DataType,NumInputs+1> m_previousWeightUpdates;
+    std::array<DataType,NumInputs+1> m_outstandingWeightAdjustments;
+
+    std::array<DataType,NumInputs> m_inputValues;
+};
+
+template<std::size_t NumInputs>
+std::mt19937 Neuron<NumInputs>::m_mt(std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count());
+
+template<std::size_t NumInputs>
+std::uniform_real_distribution<DataType> Neuron<NumInputs>::m_uniformDisRandom(-1,1);
+
+template<std::size_t NumInputs>
+std::uniform_real_distribution<DataType> Neuron<NumInputs>::m_xavierUniformDis(-std::sqrt(6.f / NumInputs+1),std::sqrt(6.f / NumInputs+1));
+
+template<std::size_t NumInputs>
+std::normal_distribution<DataType> Neuron<NumInputs>::m_xavierNormalDis(0,std::sqrt(2.f / NumInputs+1));
+
+template<std::size_t NumNeurons>
+class ActivationLayer
+{
+public:
+
+    ActivationLayer()
+    :
+        m_outputs({})
+    {}
+
+    virtual std::array<DataType,NumNeurons> GetOutputs() const final
+    {
+        return m_outputs;
+    }
+
+    virtual void CompleteBackprop(const DataType& learningRate, const DataType& momentum) final
+    {
+    }
+
+protected:
+    std::array<DataType,NumNeurons> m_outputs;
+};
+
+template<std::size_t NumNeurons>
+class SigmoidActivation : public ActivationLayer<NumNeurons>
+{
+public:
+
+    virtual std::array<DataType,NumNeurons> FeedForward(const std::array<DataType,NumNeurons>& inputValues)
+    {
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+            ActivationLayer<NumNeurons>::m_outputs[i] = sigmoid(inputValues[i]);
+        return ActivationLayer<NumNeurons>::m_outputs;
+    }
+
+    virtual std::array<DataType,NumNeurons> Backpropagate(const std::array<DataType,NumNeurons> errors)
+    {
+        std::array<DataType,NumNeurons> backpropErrors;
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+            backpropErrors[i] = errors[i] * sigmoid_derivative(ActivationLayer<NumNeurons>::m_outputs[i]);
+        return backpropErrors;
+    }
+};
+
+template<std::size_t NumInputs, std::size_t NumNeurons>
+class FullyConnectedLayer
+{
+public:
+
+    FullyConnectedLayer()
+    :
+        m_neurons([=]()
+        {
+            std::array<Neuron<NumInputs>,NumNeurons> neurons;
+            for(Neuron<NumInputs>& n : neurons)
+                n = Neuron<NumInputs>();
+            return neurons;
+        }())
+    {
+    }
+
+    virtual std::array<DataType,NumNeurons> FeedForward(const std::array<DataType,NumInputs>& inputValues)
+    {
+        std::array<DataType,NumNeurons> outputValues;
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+            outputValues[i] = m_neurons[i].FeedForward(inputValues);
+        return outputValues;
+    }
+
+    /** \brief Take a sum of errors for each node and produce the errors for each input node in the previous layer.
+     *
+     */
+
+    virtual std::array<DataType,NumInputs>
+    Backpropagate(const std::array<DataType,NumNeurons> errors)
+    {
+        std::array<std::array<DataType,NumInputs>,NumNeurons> errorValues;
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+        {
+            errorValues[i] = m_neurons[i].Backpropagate(errors[i]);
+        }
+        std::array<DataType,NumInputs> returnErrors;
+        std::fill(returnErrors.begin(),returnErrors.end(),0);
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+        {
+            for(std::size_t j = 0; j < NumInputs; ++j)
+            {
+                returnErrors[j] += errorValues[i][j];
+            }
+        }
+        return returnErrors;
+    }
+
+    virtual void CompleteBackprop(const DataType& learningRate, const DataType& momentum)
+    {
+        for(Neuron<NumInputs>& n : m_neurons)
+            n.AdjustWeights(learningRate, momentum);
+    }
+
+    const Neuron<NumInputs>& operator[](const std::size_t& index) const
+    {
+        return m_neurons[index];
+    }
+
+    std::array<std::array<DataType,NumInputs>,NumNeurons> GetWeights() const
+    {
+        std::array<std::array<DataType,NumInputs>,NumNeurons> weights;
+        for(std::size_t i = 0; i < NumNeurons; ++i)
+        {
+            weights[i] = m_neurons[i].GetWeights();
+        }
+        return weights;
+    }
+
+protected:
+    std::array<Neuron<NumInputs>,NumNeurons> m_neurons;
+};
+
+template<std::size_t I = 0, typename FuncT, typename... Tp>
+inline typename std::enable_if<I == sizeof...(Tp)>::type for_each(std::tuple<Tp...> &, FuncT)
+{
+}
+
+template<std::size_t I = 0, typename FuncT, typename... Tp>
+inline typename std::enable_if<I < sizeof...(Tp)>::type for_each(std::tuple<Tp...>& t, FuncT f)
+{
+    f(std::get<I>(t)); // call f, passing the Ith element of the std::tuple t and the existing output O
+    for_each<I + 1, FuncT, Tp...>(t, f); // process the next element of the tuple with the new output
+}
+
+template<std::size_t I = 0, typename FuncT, typename... Tp>
+inline typename std::enable_if<I == sizeof...(Tp)>::type for_each(const std::tuple<Tp...> &, FuncT)
+{
+}
+
+template<std::size_t I = 0, typename FuncT, typename... Tp>
+inline typename std::enable_if<I < sizeof...(Tp)>::type for_each(const std::tuple<Tp...>& t, FuncT f)
+{
+    f(std::get<I>(t)); // call f, passing the Ith element of the std::tuple t and the existing output O
+    for_each<I + 1, FuncT, Tp...>(t, f); // process the next element of the tuple with the new output
+}
+
+template<std::size_t I = 0, typename FuncT, typename O, typename FinalOutput, typename... Tp>
+inline typename std::enable_if<I == sizeof...(Tp)>::type for_each_get_final_output(std::tuple<Tp...> &, FuncT, O o, FinalOutput& finalOutput)
+{
+    finalOutput = o;
+}
+
+template<std::size_t I = 0, typename FuncT, typename O, typename FinalOutput, typename... Tp>
+inline typename std::enable_if<I < sizeof...(Tp)>::type for_each_get_final_output(std::tuple<Tp...>& t, FuncT f, O o, FinalOutput& finalOutput)
+{
+    auto newO = f(std::get<I>(t),o); // call f, passing the Ith element of the std::tuple t and the existing output O
+    for_each_get_final_output<I + 1, FuncT, decltype(newO), FinalOutput, Tp...>(t, f, newO, finalOutput); // process the next element of the tuple with the new output
+}
+
+template<std::size_t I = 0, typename FuncT, typename O, typename... Tp>
+inline typename std::enable_if<I == 0>::type for_each_reverse_impl(std::tuple<Tp...>& t, FuncT f, O o)
+{
+    f(std::get<0>(t),o);
+}
+
+template<std::size_t I = 0, typename FuncT, typename O, typename... Tp>
+inline typename std::enable_if<(I > 0)>::type
+for_each_reverse_impl(std::tuple<Tp...>& t, FuncT f, O o)
+{
+    auto newO = f(std::get<I>(t),o); // call f, passing the Ith element of the std::tuple t and the existing output O
+    for_each_reverse_impl<I - 1, FuncT, decltype(newO), Tp...>(t, f, newO); // process the next element of the tuple with the new output
+}
+
+template<typename FuncT, typename O, typename... Tp>
+inline void for_each_reverse(std::tuple<Tp...>& t, FuncT f, O o)
+{
+    for_each_reverse_impl<sizeof...(Tp)-1, FuncT, O, Tp...>(t, f, o);
+}
+
+enum class LOSS_FUNCTION : std::uint_fast8_t
+{
+    MEAN_SQUARE_ERROR,
+    CROSS_ENTROPY
+};
+
+class ValidationOptions
+{
+public:
+    enum class METRIC : std::uint_fast8_t { NONE, ACCURACY, LOSS };
+
+    ValidationOptions()
+    :
+        m_validationSplit(.3f),
+        m_enableLoss(false),
+        m_lossFunction(LOSS_FUNCTION::MEAN_SQUARE_ERROR),
+        m_enableAccuracy(false),
+        m_outputFilter([](const DataType& x){ return x; }),
+        m_earlyStoppingMetric(METRIC::NONE),
+        m_earlyStoppingPatience(1.f),
+        m_earlyStoppingDelta(1.f),
+        m_earlyStoppingNumEpochsAverage(1)
+    {}
+
+    ValidationOptions& Loss(const bool enable = true, LOSS_FUNCTION lossFunction = LOSS_FUNCTION::MEAN_SQUARE_ERROR)
+    {
+        m_enableLoss = enable;
+        m_lossFunction = lossFunction;
+        return *this;
+    }
+
+    ValidationOptions& Split(const float dataSplitValidation)
+    {
+        m_validationSplit = dataSplitValidation;
+        return *this;
+    }
+
+    ValidationOptions& Accuracy(const bool enable = true, ActivationFuncPtr outputFilter = [](const DataType& x){return x;})
+    {
+        m_enableAccuracy = enable;
+        m_outputFilter = outputFilter;
+        return *this;
+    }
+
+    ValidationOptions& EarlyStop(const bool enable = true,
+                                 const METRIC metric = METRIC::ACCURACY,
+                                 const float patience = .1f,
+                                 const DataType delta = .01,
+                                 const std::size_t epochNumToAverage = 10)
+     {
+         if(enable == false)
+            m_earlyStoppingMetric = METRIC::NONE;
+         else
+            m_earlyStoppingMetric = metric;
+
+         m_earlyStoppingPatience = patience;
+         m_earlyStoppingDelta = delta;
+         m_earlyStoppingNumEpochsAverage = epochNumToAverage;
+
+         return *this;
+     }
+
+     float GetValidationSplit() const { return m_validationSplit; }
+     bool Loss() const { return m_enableLoss; }
+     LOSS_FUNCTION GetLossFunction() const { return m_lossFunction; }
+     bool Accuracy() const { return m_enableAccuracy; }
+     ActivationFuncPtr GetOutputFilter() const { return m_outputFilter; }
+     METRIC GetEarlyStoppingMetric() const { return m_earlyStoppingMetric; }
+     float GetEarlyStoppingPatience() const { return m_earlyStoppingPatience; }
+     DataType GetEarlyStoppingDelta() const { return m_earlyStoppingDelta; }
+     std::size_t GetEarlyStoppingNumEpochsAvg() const { return m_earlyStoppingNumEpochsAverage; }
+
+protected:
+
+    float m_validationSplit;        /**< Percentage of the data set aside for validation */
+
+    bool m_enableLoss;
+    LOSS_FUNCTION m_lossFunction;   /**< Loss function to use */
+
+    bool m_enableAccuracy;
+    ActivationFuncPtr m_outputFilter;   /**< When measuring accuracy data is passed through this */
+
+    METRIC m_earlyStoppingMetric;                   /**< The metric used to stop early */
+    float m_earlyStoppingPatience;                  /**< Percentage of total epochs to wait before stopping early */
+    DataType m_earlyStoppingDelta;                  /**< The amount that the early stopping metric needs to change in a single step before stopping */
+    std::size_t m_earlyStoppingNumEpochsAverage;    /**< The number of epochs averaged over to smooth out the stopping metric */
+};
+
+template<typename... Layers>
+class NeuralNetwork
+{
+public:
+
+    NeuralNetwork(Layers... layers)
+    :
+        m_layers(std::make_tuple(layers...))
+    {
+
+    }
+
+    template<std::size_t NumFeatures, std::size_t NumOutputs, std::size_t NumTrainingRows>
+    void Fit(const std::size_t& numberEpochs,
+             const std::size_t& batchSize,
+             DataType learningRate,
+             const DataType& momentum,
+             std::array<std::array<DataType,NumFeatures>,NumTrainingRows>& trainingData,
+             std::array<std::array<DataType,NumOutputs>,NumTrainingRows>& trainingOutput,
+             const ValidationOptions validationOptions,
+             const bool linearDecayLearningRate = true,
+             std::ostream& outputStream = std::cout)
+    {
+        std::size_t epochNumber = 0;
+
+        // need to support more than just MSE to measure loss
+        std::vector<DataType> lastEpochLoss(validationOptions.GetEarlyStoppingNumEpochsAvg(),0);
+        DataType lastEpochLossAverage = std::numeric_limits<DataType>::max();
+
+        std::vector<DataType> lastValidationAccuracys(validationOptions.GetEarlyStoppingNumEpochsAvg(),0);
+        DataType lastValidationAccuraryAvg = 0;
+
+        std::vector<std::size_t> randomIndices(NumTrainingRows,0);
+        for(std::size_t i = 0; i < NumTrainingRows; ++i)
+            randomIndices[i] = i;
+
+        std::random_shuffle(randomIndices.begin(),randomIndices.end());
+        // take some percentage as validation split
+        // we do this by taking the first percentage of already shuffled indices and removing them
+        // from what is available
+        std::size_t numValidationRecords = NumTrainingRows*validationOptions.GetValidationSplit();
+        std::size_t numTrainingRecords = NumTrainingRows - numValidationRecords;
+        std::vector<std::size_t> validationRecords(numValidationRecords);
+        for(std::size_t i = 0; i < numValidationRecords; ++i)
+        {
+            std::size_t index = randomIndices.back();
+            randomIndices.pop_back();
+            validationRecords[i] = index;
+        }
+
+        while(epochNumber < numberEpochs)
+        {
+            // shuffle the indices so that they are pulled into each batch randomly each time
+            std::random_shuffle(randomIndices.begin(),randomIndices.end());
+
+            DataType epochLoss = 0;
+
+            std::tuple<Layers...> backupLayers = m_layers;
+
+            for(std::size_t batchNumber = 0; batchNumber < std::ceil(numTrainingRecords / batchSize); ++batchNumber)
+            {
+                std::array<DataType,NumOutputs> propagateError = {0};
+
+                std::size_t startIndex = batchNumber * batchSize;
+                std::size_t endIndex = startIndex + batchSize;
+                if(endIndex > numTrainingRecords)
+                    endIndex = numTrainingRecords;
+
+                DataType batchLoss = 0;
+
+                for(std::size_t index = startIndex; index < endIndex; ++index)
+                {
+                    std::size_t row = randomIndices[index];
+                    const std::array<DataType,NumFeatures>& dataRow = trainingData[row];
+                    const std::array<DataType,NumOutputs>& desiredOutputRow = trainingOutput[row];
+
+                    // Feed the values through to the output layer
+                    // use of ""auto"" is so this lambda can be used for all layers without
+                    // me needing to do any fucking around
+                    std::array<DataType,NumOutputs> finalOutput;
+                    for_each_get_final_output(m_layers, [](auto& layer, auto o)
+                    {
+                        return layer.FeedForward(o);
+                    }, dataRow, finalOutput);
+
+                    DataType totalError = 0;
+                    for(std::size_t i = 0; i < NumOutputs; ++i)
+                    {
+                        if(validationOptions.GetLossFunction() == LOSS_FUNCTION::MEAN_SQUARE_ERROR)
+                            totalError += std::pow(desiredOutputRow[i] - finalOutput[i],2.0);
+                        else if(validationOptions.GetLossFunction() == LOSS_FUNCTION::CROSS_ENTROPY)
+                        {
+                            if(NumOutputs == 1)
+                            {
+                                // binary cross entropy
+                                totalError += (desiredOutputRow[i] * std::log(1e-15 + finalOutput[i]));
+                            }
+                            else
+                            {
+                                // cross entropy
+                            }
+                        }
+                    }
+
+                    batchLoss += totalError;
+                }
+
+                batchLoss *= DataType(1) / (endIndex - startIndex);
+
+                for(std::size_t i = 0; i < NumOutputs; ++i)
+                    propagateError[i] = batchLoss;
+
+                // update after every batch
+                for_each_reverse(m_layers, [](auto& layer, auto o)
+                {
+                    auto errors = layer.Backpropagate(o);
+                    return errors;
+                }, propagateError);
+
+                // once backprop is finished, we can adjust all the weights
+                for_each(m_layers, [&](auto& layer)
+                {
+                    layer.CompleteBackprop(learningRate,momentum);
+                });
+
+                epochLoss += batchLoss;
+            }
+
+            epochLoss *= DataType(1) / numTrainingRecords;
+
+            lastEpochLoss.erase(lastEpochLoss.begin());
+            lastEpochLoss.push_back(epochLoss);
+            DataType avgEpochLoss = 1.f * std::accumulate(lastEpochLoss.begin(),lastEpochLoss.end(),0.f) / (epochNumber < validationOptions.GetEarlyStoppingNumEpochsAvg() ? epochNumber+1 : lastEpochLoss.size());
+
+            if(validationOptions.GetEarlyStoppingMetric() == ValidationOptions::METRIC::LOSS
+               && epochNumber > numberEpochs * validationOptions.GetEarlyStoppingPatience()
+               && avgEpochLoss > lastEpochLossAverage + validationOptions.GetEarlyStoppingDelta())
+            {
+                // the loss average has decreased, so we should go back to the previous run and exit
+                std::cout   << ""Early exit Loss Avg \n""
+                            << ""Last Epoch: "" << lastEpochLossAverage << ""\n""
+                            << ""This Epoch: "" << avgEpochLoss << std::endl;
+                m_layers = backupLayers;
+                break;
+            }
+
+            lastEpochLossAverage = avgEpochLoss;
+
+            // check for the error against the reserved validation set
+            std::size_t numCorrect = 0;
+            for(std::size_t row = 0; row < validationRecords.size(); ++row)
+            {
+                const std::array<DataType,NumFeatures>& dataRow = trainingData[row];
+                const std::array<DataType,NumOutputs>& desiredOutputRow = trainingOutput[row];
+
+                std::array<DataType,NumOutputs> finalOutput;
+                for_each_get_final_output(m_layers, [](auto& layer, auto o)
+                {
+                    return layer.FeedForward(o);
+                }, dataRow, finalOutput);
+
+                bool correct = true;
+                for(std::size_t i = 0; i < NumOutputs; ++i)
+                {
+                    if(validationOptions.GetOutputFilter()(finalOutput[i]) != desiredOutputRow[i])
+                        correct = false;
+                }
+                if(correct)
+                    ++numCorrect;
+            }
+
+            DataType validationAccuracy = DataType(numCorrect) / numValidationRecords;
+
+            lastValidationAccuracys.erase(lastValidationAccuracys.begin());
+            lastValidationAccuracys.push_back(validationAccuracy);
+            DataType avgValidationAccuracy = std::accumulate(lastValidationAccuracys.begin(),lastValidationAccuracys.end(),0.f) / (epochNumber < validationOptions.GetEarlyStoppingNumEpochsAvg() ? epochNumber+1 : lastValidationAccuracys.size());
+
+            if(validationOptions.GetEarlyStoppingMetric() == ValidationOptions::METRIC::ACCURACY
+               && epochNumber > numberEpochs * validationOptions.GetEarlyStoppingPatience()
+               && avgValidationAccuracy < lastValidationAccuraryAvg - validationOptions.GetEarlyStoppingDelta())
+            {
+                // the accuracy has decreased, so we should go back to the previous run and exit
+                std::cout   << ""Early exit validation accuracy \n""
+                            << ""Last Epoch: "" << lastValidationAccuraryAvg << ""\n""
+                            << ""This Epoch: "" << avgValidationAccuracy << std::endl;
+                m_layers = backupLayers;
+                break;
+            }
+
+            lastValidationAccuraryAvg = avgValidationAccuracy;
+
+            outputStream << epochNumber << "","" << epochLoss << "","" << avgEpochLoss << "","" << validationAccuracy << "","" << avgValidationAccuracy << std::endl;
+
+            learningRate -= learningRate / (numberEpochs-epochNumber);
+
+            ++epochNumber;
+        }
+    }
+
+    template<std::size_t NumFeatures, std::size_t NumOutputs, std::size_t NumEvaluationRows>
+    void Evaluate(std::array<std::array<DataType,NumFeatures>,NumEvaluationRows> inputData,
+                  std::array<std::array<DataType,NumOutputs>,NumEvaluationRows> correctOutputs,
+                  DataType& loss,
+                  DataType& accuracy,
+                  ActivationFuncPtr outputFilter = [](const DataType& x){return x;})
+    {
+        loss = 0;
+
+        std::size_t numCorrect = 0;
+
+        for(std::size_t row = 0; row < NumEvaluationRows; ++row)
+        {
+            const std::array<DataType,NumFeatures>& dataRow = inputData[row];
+            const std::array<DataType,NumOutputs>& outputRow = correctOutputs[row];
+
+            // Feed the values through to the output layer
+
+            std::array<DataType,NumOutputs> finalOutput;
+            for_each_get_final_output(m_layers, [](auto& layer, auto o)
+            {
+                layer.FeedForward(o);
+                return layer.GetOutputs();
+            }, dataRow, finalOutput);
+
+            DataType thisLoss = 0;
+            for(std::size_t i = 0; i < NumOutputs; ++i)
+                thisLoss += outputRow[i] - finalOutput[i];
+            loss += thisLoss * thisLoss;
+
+            bool correct = true;
+            for(std::size_t i = 0; i < NumOutputs; ++i)
+            {
+                if(outputFilter(finalOutput[i]) != outputRow[i])
+                    correct = false;
+            }
+            if(correct)
+                ++numCorrect;
+        }
+
+        loss *= DataType(1) / NumEvaluationRows;
+        accuracy = DataType(numCorrect) / NumEvaluationRows;
+    }
+
+    template<std::size_t NumFeatures, std::size_t NumOutputs, std::size_t NumRecords>
+    void Predict(std::array<std::array<DataType,NumFeatures>,NumRecords> inputData,
+                 std::array<std::array<DataType,NumOutputs>,NumRecords>& predictions,
+                 ActivationFuncPtr outputFilter = [](const DataType& x){return x;})
+    {
+        for(std::size_t row = 0; row < NumRecords; ++row)
+        {
+            const std::array<DataType,NumFeatures>& dataRow = inputData[row];
+
+            // Feed the values through to the output layer
+
+            std::array<DataType,NumOutputs> finalOutput;
+            for_each_get_final_output(m_layers, [](auto& layer, auto o)
+            {
+                return layer.FeedForward(o);
+            }, dataRow, finalOutput);
+
+            for(std::size_t i = 0; i < NumOutputs; ++i)
+                predictions[row][i] = outputFilter(finalOutput[i]);
+        }
+    }
+
+protected:
+    std::tuple<Layers...> m_layers;
+};
+
+main()
+{
+    std::vector<std::vector<CSVType>> trainingCSVData;
+    /* load training CSV data */
+
+    std::vector<std::vector<CSVType>> testCSVData;
+    /* load test CSV data */
+
+    std::cout << std::fixed << std::setprecision(80);
+
+    std::ofstream file(""error_out.csv"", std::ios::out | std::ios::trunc);
+    if(!file.is_open())
+    {
+        std::cout << ""couldn't open file"" << std::endl;
+        return 0;
+    }
+
+    file << std::fixed << std::setprecision(80);
+
+    /*
+        Features
+        1   pClass 1
+        2   pClass 2
+        3   pClass 3
+        4   Sex female 1, male 0
+        5   Age normalised between 0 and 1  age range 0 to 100
+        6   Number siblings between 0 and 1   num range 0 to 8
+        7   Number of parents / children        num range 0 to 9
+        8   Ticket cost   between 0 and 1       num range 0 to 512.3292
+        9   embarked S
+        10  embarked Q
+        11  embarked C
+    */
+
+    std::array<std::array<DataType,29>,891> inputData;
+    std::array<std::array<DataType,1>,891> desiredOutputs;
+
+    /* ... data that loads the titanic data into a series of features. Either class labels or normalised values (like age) */
+
+    NeuralNetwork neuralNet{
+        FullyConnectedLayer<29,256>(),
+        SigmoidActivation<256>(),
+        FullyConnectedLayer<256,1>(),
+        SigmoidActivation<1>()
+    };
+
+    neuralNet.Fit(300,
+                  1,
+                  0.05,
+                  0.25f,
+                  inputData,
+                  desiredOutputs,
+                  ValidationOptions().Accuracy(true,step05).Loss(true,LOSS_FUNCTION::CROSS_ENTROPY).Split(0.3),
+                  false,
+                  file);
+
+    file.close();
+
+    return 0;
+}
+
+ +

The data used is from the titanic problem that you can download from Kaggle here.

+ +

The typical output file that's being generated is like this:

+ +
0,-4.91843843460083007812500000000000000000000000000000000000000000000000000000000000,-4.91843843460083007812500000000000000000000000000000000000000000000000000000000000,0.65168541669845581054687500000000000000000000000000000000000000000000000000000000,0.65168541669845581054687500000000000000000000000000000000000000000000000000000000
+1,-6.14257431030273437500000000000000000000000000000000000000000000000000000000000000,-6.14257431030273437500000000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+2,-6.43130302429199218750000000000000000000000000000000000000000000000000000000000000,-6.43130302429199218750000000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+3,-6.58864736557006835937500000000000000000000000000000000000000000000000000000000000,-6.58864736557006835937500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+4,-6.70884752273559570312500000000000000000000000000000000000000000000000000000000000,-6.70884752273559570312500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+5,-6.78206682205200195312500000000000000000000000000000000000000000000000000000000000,-6.78206682205200195312500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+6,-6.86832284927368164062500000000000000000000000000000000000000000000000000000000000,-6.86832284927368164062500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+7,-6.92110681533813476562500000000000000000000000000000000000000000000000000000000000,-6.92110681533813476562500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+8,-6.96584081649780273437500000000000000000000000000000000000000000000000000000000000,-6.96584081649780273437500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+9,-7.02414274215698242187500000000000000000000000000000000000000000000000000000000000,-7.02414274215698242187500000000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000,0.65543073415756225585937500000000000000000000000000000000000000000000000000000000
+10,-7.06",,,,,,,,,,,,,,
+15971,1,15989,,10/19/2019 7:17,,0,94,"

Below I have a Learning Curve plot. How should I interpret this plot for my random forest algorithm (the second one, the most complex one)? +Which one is the best?

+ +

+ +

+ +

+",30599,,30599,,10/19/2019 11:48,10/20/2019 23:34,How to interpret this learning curve plot,,1,0,,,,CC BY-SA 4.0 +15972,1,16032,,10/19/2019 8:18,,2,87,"

On this website, https://scikit-learn.org/stable/modules/learning_curve.html, the authors are speaking about variance and bias, and they give a simple example of how it works in a linear model.

+ +

How can I determine the bias and variance of a random forest?

+",30599,,2444,,11/21/2019 3:23,11/21/2019 3:23,How can I determine the bias and variance of a random forrest?,,1,0,,,,CC BY-SA 4.0 +15973,1,,,10/19/2019 8:56,,2,246,"

Say we have the layer $X W + b = Y$.

+ +
    +
  1. I want to get $\frac{dL}{dW}$ and we assume I have $\frac{dL}{dY}$. +So all I need is to find $\frac{dY}{dW}$. I know that it should be $X^T\frac{dL}{dY}$, but I don't understand why. Please explain.
  2. +
  3. I want to get $\frac{dL}{db}$ and we assume I have $\frac{dL}{dY}$. +So all I need is to find $\frac{dY}{db}$. I know that it should be $\sum(\frac{dL}{dY})_i$ (I mean sum the rows), but I don't understand why. Please explain.
  4. +
+ +

Thanks :)

+",30611,,2444,,10/19/2019 13:53,10/19/2019 13:53,Understanding the partial derivative with respect to the weight matrix and bias,,0,2,,,,CC BY-SA 4.0 +15974,2,,15970,10/19/2019 10:16,,1,,"

This is a guess, as I am not reading all that code!

+ +
+

This was previously working when the error I fed back through backprop was just the difference between the correct result and the prediction. But I've since been told that I should be propagating back the Loss Functions error, which I then implemented as Binary Cross Entropy.

+
+ +

You may have been right before, and the advice you received wrong. Sort of.

+ +

Backpropagation does not feed back the error value directly. It works exclusively with gradients of the error. So you typically start with a gradient value based on the objective function.

+ +

If you backprop each function in the network a single step at a time, then you would start with $\nabla_{\hat{y}}\mathcal{L}$, the gradient of the loss function with respect to the estimated values (i.e. output of the NN). Then from that you could calculate $\nabla_{z}\mathcal{L}$, which is the gradient of the loss function with respect to the pre-activation values of the output layer.

+ +

However, it is possible to combine steps analytically, and really common to start with the first gradient being $\nabla_{z}\mathcal{L}$. That's because these pre-activation values are what you use to then calculate $\nabla_{W}\mathcal{L}$ and $\nabla_{b}\mathcal{L}$ - the gradients with respect to the layer's weights and biases, plus also to calculate $\nabla_{z'}\mathcal{L}$ - the gradients with respect to pre-activation values of the previous layer.

+ +

Either way, once you have $\nabla_{z}\mathcal{L}$ for the output layer you can just run back through each layer in turn, repeating the same calculation steps again and again. It is this repetition over each layer that looks like classic backprop in code.

+ +

Assuming your neural network outputs a single value, the probability of survival in the case of this dataset:

+ +
    +
  • Your objective/loss function for binary classification should be Binary Cross Entropy.

  • +
  • Your output layer should have a sigmoid activation function.

  • +
+ +

If you take the gradient of the loss function and backpropagate it through that output layer's activation function, you end up with a starting delta $\nabla_{z} \mathcal{L}$ i.e. the gradient of the loss function with respect to the pre-activation value of the output layer (the first layer you reach when working backwards). This is a nice place to start the recursive backprop code.

+ +

The value of this gradient is mathematically, per item:

+ +

$$\nabla_{z} \mathcal{L} = \hat{y}_i - y_i$$

+ +

i.e. the difference between the neural network estimate and the ground truth value. It takes this value because complex terms in the gradient of the loss function and the sigmoid activation function cancel out exactly. This is one reason why you often see sigmoid activation used with binary cross entropy - it is very convenient that the combination simplifies the gradient like this. You need to backpropagate each data point separately, and average the gradients for each batch/minibatch (actually you don't need to take this mean value, but doing so means that you don't need to adjust your learning rate as much to account for the number of items being processed per update step).

+ +
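
As a quick illustration of that cancellation (my own sketch, not part of the original derivation; the numbers are arbitrary), you can check numerically that chaining the binary cross entropy gradient through the sigmoid gives exactly $\hat{y} - y$:

import numpy as np

z = np.array([0.3, -1.2, 2.0])       # pre-activation outputs for 3 items
y = np.array([1.0, 0.0, 1.0])        # ground truth labels
y_hat = 1.0 / (1.0 + np.exp(-z))     # sigmoid activation

dL_dyhat = (y_hat - y) / (y_hat * (1.0 - y_hat))  # gradient of BCE w.r.t. y_hat
dyhat_dz = y_hat * (1.0 - y_hat)                  # gradient of sigmoid w.r.t. z
print(dL_dyhat * dyhat_dz)           # chain rule product
print(y_hat - y)                     # identical, as stated above

+ +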

It appears to have worked for you.

+ +

Whether or not you were accidentally correct depends on whether you were getting an estimate for $\nabla_{z}\mathcal{L}$ to start your backprop routine, or $\nabla_{\hat{y}}\mathcal{L}$. I cannot find what that assumption is in your code, but the fact that it worked before and does not now suggests that you may have been correct, even though you may not have fully understood the maths.

+",1847,,1847,,10/19/2019 15:45,10/19/2019 15:45,,,,1,,,,CC BY-SA 4.0 +15976,1,,,10/19/2019 12:43,,2,87,"

I have a neural network that should be able to classify documents to target label A. The problem is that the network is actually classifying label B, which is an easier task.

+ +

To make the problem more clear: I need to classify documents from different sources. In the training data each source occurs repeatedly, but the network should be able to work on unknown sources. All documents from a single source have the same class. In this case, it is easier to identify sources than the target label so in practice the network is not really identifying the target label, but the source.

+ +

The solution to this problem is making sure that the model is bad at identifying the sources in the training data, while still attaching the right target labels.

+ +

I think the first step is to get two output layers, one for the target label and one for identifying which source it is from. My approach fails however at the training procedure: I want to minimize the loss on the target output, but maximize the loss on the non-target output. But if I maximize the loss on that non-target output, that does not mean that the network 'unlearns' the non-target labels. So the main question for the non-target output is:

+ +

TL;DR: How do I define a training procedure that minimizes the loss on a non-target output layer, and then maximizes that loss on all layers before it? My goal is to have a network that is good at classifying label A, but bad at a related label B. If anyone wants to give a code example, my preferred framework is PyTorch.

+",30614,,,,,8/2/2023 3:09,Maximize loss on non-target variable,,2,0,,,,CC BY-SA 4.0 +15977,1,,,10/19/2019 18:23,,14,211,"

Is there a way to understand, for instance, a multi-layered perceptron without hand-waving about them being similar to brains, etc?

+

For example, it is obvious that what a perceptron does is approximating a function; there might be many other ways, given a labelled dataset, to find the separation of the input area into smaller areas that correspond to the labels; however, these ways would probably be computationally rather inefficient, which is why they cannot be practically used. However, it seems that the iterative approach of finding such areas of separation may give a huge speed-up in many cases; then, natural questions arise as to why this speed-up may be possible, how it happens and in which cases.

+

One could be sure that this question was investigated. If anyone could shed any light on the history of this question, I would be very grateful.

+

So, why are neural networks useful and what do they do? I mean, from the practical and mathematical standpoint, without relying on the concept of "brain" or "neurons" which can explain nothing at all.

+",30623,,2444,,12/12/2021 12:43,12/12/2021 12:43,Is there a way to understand neural networks without using the concept of brain?,,3,2,,,,CC BY-SA 4.0 +15978,1,,,10/19/2019 19:17,,1,33,"

I'm trying to come up with a generative model that can input a name and output all valid formats of it.

+ +

For example, ""Bob Dylan"" could be an input and the gen model will output ""Dylan, Bob"", ""B Dylan"", ""Bob D"" and any other type of valid formatting of a person's name. So given my example the gen model doesn't seem that complicated to build, but it also has to handle stuff like ""Dylan, Bob"" and ""B Dylan"", but obviously the 2nd one it shouldn't output ""Bob Dylan"" as a potential output cause inferring that requires more than just ""B Dylan"". Any ideas for a good Generative Model for this?

+",25721,,,,,10/19/2019 19:17,What's a good generative model for creating valid formats of a person's name?,,0,9,,,,CC BY-SA 4.0 +15979,2,,15977,10/20/2019 1:53,,12,,"

tl;dr I always like to think of Neural Networks as a generalization of logistic regression.

+ +

I too don't like that, traditionally, when introducing Neural Networks, books start with biological neurons and synapses, etc. I think it's more beneficial to start from statistics and linear regression, then logistic regression and then neural networks.

+ +

A perceptron is essentially a simple binary logistic regressor (if you threshold the output). If you have many perceptrons that share the same input (i.e. a layer in a neural network), you can think of it as a multi-class logistic regressor. Now, by stacking one such layer after another, you create a Multi-Layer Perceptron (MLP), which is a Neural Network with two layers. This is equivalent to two multi-class logistic regressors stacked one after the other. One notable thing that changes is the training technique here, i.e. backpropagation (because you don't have direct access to the targets from the hidden layer). Another thing that can change is the activation function (it's not always sigmoid in Neural Networks)

+ +
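
To make the 'stacked logistic regressors' picture concrete, here is a tiny NumPy sketch (my own illustration; the sizes and random weights are placeholders): each layer below is just a batch of logistic regressors sharing the same input, and feeding one into the other gives a two-layer MLP:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.random.randn(4, 10)                      # 4 samples, 10 input features
W1, b1 = np.random.randn(10, 5), np.zeros(5)    # first layer: 5 'logistic regressors'
W2, b2 = np.random.randn(5, 3), np.zeros(3)     # second layer stacked on top

hidden = sigmoid(x @ W1 + b1)                   # outputs of the hidden perceptrons
output = sigmoid(hidden @ W2 + b2)              # a multi-class logistic regressor over them
print(output.shape)                             # (4, 3)

+ +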

Introduce sparse connectivity and weight sharing and you get a Convolutional Neural Network. Add a connection from a layer to itself (for the next timestep) and you get a Recurrent Neural Network. Likewise, you can reproduce any Neural Network through this reasoning.

+ +

I know this is an over-simplified way of presenting them, but I think you get the point.

+",26652,,,,,10/20/2019 1:53,,,,0,,,,CC BY-SA 4.0 +15980,1,,,10/20/2019 5:12,,0,78,"

I cannot find detailed information about autoencoders.

+ +

What can I do with an autoencoder (and how can I do this), practically speaking?

+ +

What do the encoder (this part I think I understand) and the decoder (I could not find much about this) parts do? Can it, for example, show in an explainable way how patterns in the data are being represented?

+ +

I read some papers that say that it can be used to denoise the input. How does this work? (Am I changing the values of my input?)

+ +

Is it true that an autoencoder can also be done with PCA (if we assume linearity)?

+",30599,,2444,,10/20/2019 12:33,10/20/2019 12:33,What can I do with an autoencoder?,,1,0,,10/21/2019 21:09,,CC BY-SA 4.0 +15982,2,,15980,10/20/2019 12:14,,1,,"

An autoencoder learns to compress data, and then to decompress it again, recovering the original data.

+ +

It does this by learning a mapping from the original feature space to a lower-dimensional space, and then another mapping back. This is, indeed, like PCA. Lossy compression of this general kind is also what is used to compress JPEG images to transmit them over the web (although JPEG uses a fixed transform rather than a learned one).

+ +

Autoencoders can denoise an image because the model will not be able to easily compress or decompress random noise, and so learns to ignore it.

+ +

Autoencoders can also be used to find embeddings of data that are likely to have semantic meaning. For example, you can use an autoencoder to compress English text, and a different one to compress French text. If you add some special constraints, then the models can learn to embed in the same lower-dimensional space. This technique underlies many of the recent advances in machine translation.
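
+ +

For a concrete picture, here is a minimal (untrained) autoencoder sketch in PyTorch; the layer sizes are arbitrary placeholders, and in practice you would train it by minimising the reconstruction error between the input and the output:

import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),    # encoder: compress 784 features to 32
    nn.Linear(32, 784), nn.Sigmoid()  # decoder: reconstruct the original 784 features
)

x = torch.rand(16, 784)               # a batch of 16 fake inputs
reconstruction = autoencoder(x)
loss = nn.functional.mse_loss(reconstruction, x)  # train by minimising this
loss.backward()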

+",16909,,2444,,10/20/2019 12:32,10/20/2019 12:32,,,,0,,,,CC BY-SA 4.0 +15983,1,,,10/20/2019 12:53,,2,69,"

Are there are any openly available implementations of the SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation paper?

+",30632,,30632,,11/30/2019 4:55,11/30/2019 4:55,Is there any open source implementation of the SBEED learning algorithm?,,0,0,,,,CC BY-SA 4.0 +15984,1,15998,,10/20/2019 14:37,,4,109,"

I want to know if there is a measure of how well two classes in Y are separable (linearly or not) based on their features in X. The easiest way of explaining this is to compare it to correlation coefficients: the higher the correlation, the higher the possibility of a successful regression based on a given feature (at least in theory).

+ +

Is there any measure that will tell me how well classes are separated based on input data features, before training a ML model?

+",22659,,,,,10/21/2019 13:55,Is there any measure of separability of classes?,,1,1,,,,CC BY-SA 4.0 +15985,2,,15894,10/20/2019 16:27,,0,,"

A high-level solution is to use the NVIDIA DriveWorks SDK. NVIDIA's Developer Blog has a good post, ""Point Cloud Processing with NVIDIA DriveWorks SDK"".

+ +

The NVIDIA DriveWorks SDK contains a collection of CUDA-based low level point cloud processing modules optimized for NVIDIA DRIVE AGX platforms. The DriveWorks Point Cloud Processing modules include common algorithms that any AV developer working with point cloud representations would need, such as accumulation and registration.

+ +

I believe you don't need the AGX hardware for the problem you posted.

+",5763,,,,,10/20/2019 16:27,,,,0,,,,CC BY-SA 4.0 +15986,1,,,10/20/2019 17:32,,35,7846,"

To produce tangible results in the field of AI/ML, one must take theoretical results under the lens of computational complexity.

+

Indeed, minimax effectively solves any two-person "board game" with win/loss conditions, but the algorithm quickly becomes untenable for games of large enough size, so it's practically useless asides from toy problems.

+

In fact, this issue seems to cut at the heart of intelligence itself: the Frame Problem highlights this by observing that any "intelligent" agent that operates under logical axioms must somehow deal with the explosive growth of computational complexity.

+

So, we need to deal with computational complexity: but that doesn't mean researchers must limit themselves with practical concerns. In the past, multilayered perceptrons were thought to be intractable (I think), and thus we couldn't evaluate their utility until recently. I've heard that Bayesian techniques are conceptually elegant, but they become computationally intractable once your dataset becomes large, and thus we usually use variational methods to compute the posterior, instead of naively using the exact solution.

+

I'm looking for more examples like this: What are examples of promising (or neat/interesting) AI/ML techniques that are computationally intractable (or uncomputable)?

+",6779,,2444,,1/19/2021 14:39,1/19/2021 14:39,What are examples of promising AI/ML techniques that are computationally intractable?,,7,0,,,,CC BY-SA 4.0 +15987,2,,15986,10/20/2019 21:24,,14,,"

Exact Bayesian inference is (often) intractable (i.e. there is no closed-form solution, or numerical approximations are also computationally expensive) because it involves the computation of an integral over a range of real (or even floating-point) numbers, which can be intractable.

+

More precisely, for example, if you want to find the parameters $\mathbf{\theta} \in \Theta$ of a model given some data $D$, then Bayesian inference is just the application of the Bayes' theorem

+

\begin{align} +p(\mathbf{\theta} \mid D) +&= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{p(D)} \\ +&= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{\int_{\Theta} p(D \mid \mathbf{\theta}^\prime) p(\mathbf{\theta}^\prime) d \mathbf{\theta}^\prime} \\ +&= \frac{p(D \mid \mathbf{\theta}) p(\mathbf{\theta})}{\int_{\Theta} p(D, \mathbf{\theta}^\prime) d \mathbf{\theta}^\prime } \tag{1}\label{1} +\end{align}

+

where $p(\mathbf{\theta} \mid D)$ is the posterior (which is what you want to find or compute), $p(D \mid \mathbf{\theta})$ is the likelihood of your data given the (fixed) parameters $\mathbf{\theta}$, $p(\mathbf{\theta})$ is the prior and $p(D) = \int_{\Theta} p(D \mid \mathbf{\theta}^\prime) p(\mathbf{\theta}^\prime) d \mathbf{\theta}^\prime$ is the evidence of the data (which is an integral given that $\mathbf{\theta}$ is assumed to be a continuous random variable), which is intractable because the integral is over all possible values of $\mathbf{\theta}$, that is, ${\Theta}$. If all terms in \ref{1} were tractable (polynomially computable), then, given more data $D$, you could iteratively keep on updating your posterior (which becomes your prior on the next iteration), and exact Bayesian inference would become tractable.

+

The variational Bayesian approach casts the problem of inferring $p(\mathbf{\theta} \mid D)$ (which requires the computation of the intractable evidence term) as an optimization problem, which approximately finds the posterior, more precisely, it approximates the intractable posterior, $p(\mathbf{\theta} \mid D)$, with a tractable one, $q(\mathbf{\theta} \mid D)$ (the variational distribution). For example, the important variational auto-encoder (VAEs) paper (which did not introduce the variational Bayesian approach) uses the variational Bayesian approach to approximate a posterior in the context of neural networks (that represent distributions), so that existing machine (or deep) learning techniques (that is, gradient descent with back-propagation) can be used to learn the parameters of a model.

+

The variational Bayesian approach (VBA) is becoming increasingly appealing in machine learning. For example, Bayesian neural networks (which can partially solve some of the inherent problems of non-Bayesian neural networks) are usually inspired by the results reported in the VAE paper, which shows the feasibility of the VBA in the context of deep learning.

+",2444,,2444,,1/19/2021 14:26,1/19/2021 14:26,,,,0,,,,CC BY-SA 4.0 +15988,2,,15916,10/20/2019 23:21,,1,,"

Mixing loss functions is very possible. For example, in the case of neural style transfer, there is a style loss and a content loss. Both of them are backpropagated through the network. The final loss used for the backpropagation is a weighted sum of the losses. In the case of style transfer, this ensures that the generated image is not only imitating the style of the style image, but also keeping the original content. Sometimes there is also a third loss, called variation loss. This is also multiplied by a weight and summed. Note that the weights are hyperparameters and are not changed during training. Also, the losses should not measure the same objective, but rather different objectives that you want to optimize together. Example code using PyTorch:

+ +
criterion = torch.nn.MSELoss()   # MSELoss is a module: instantiate it once, then call it
+loss1 = criterion(a, b)
+loss2 = criterion(b, c)
+loss = loss1 * alpha + loss2 * beta
+
+",23713,,2444,,10/22/2019 0:47,10/22/2019 0:47,,,,0,,,,CC BY-SA 4.0 +15989,2,,15971,10/20/2019 23:34,,1,,"

Note that the x-axis is the training set size. For the first and second case, the training set size starts at 0 (or 1). The model will certainly overfit at that data size. As the data size increases, the model overfits less and less, and eventually it has enough data samples that it won't overfit. As the data size continues to increase, the model's performance increases as well. At a certain point, the gain in validation score starts to diminish and the model only slightly overfits the samples. For the third graph, it seems like the loss is initially low and then starts to increase. +Hope it helps.

+",23713,,,,,10/20/2019 23:34,,,,0,,,,CC BY-SA 4.0 +15990,2,,15986,10/21/2019 0:15,,16,,"

AIXI is a Bayesian, non-Markov, reinforcement learning and artificial general intelligence agent that is incomputable, given the involved incomputable Kolmogorov complexity. However, there are approximations of AIXI, such as AIXItl, described in Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability (2005), by Marcus Hutter (the original author of AIXI), and MC-AIXI-CTW (which stands for Monte Carlo AIXI Context-Tree Weighting). Here is a Python implementation of MC-AIXI-CTW: https://github.com/gkassel/pyaixi.

+",2444,,,,,10/21/2019 0:15,,,,0,,,,CC BY-SA 4.0 +15991,2,,15986,10/21/2019 0:30,,3,,"

In general, partially-observable Markov decision processes (POMDPs) are also computationally intractable to solve exactly. However, there are several approximations methods. See, for example, Value-Function Approximations for Partially Observable Markov Decision Processes (2000) by Milos Hauskrecht.

+",2444,,,,,10/21/2019 0:30,,,,0,,,,CC BY-SA 4.0 +15992,1,,,10/21/2019 3:17,,3,133,"

As part of a research project for college, I would like to understand what many of you consider to be the risks associated with regulating Artificial Intelligence, such as whether regulation is too risky in regards to limiting progress, or too risky in regards to uninformed regulation.

+",30641,,2444,,11/21/2019 3:25,11/21/2019 22:51,What are the risks associated with regulating AI?,,4,2,,,,CC BY-SA 4.0 +15993,1,16002,,10/21/2019 3:28,,4,269,"

I want to build a multivariable and multivariate regression model in Keras (with TensorFlow as backend), that is, a regression model with multiple values as input (multivariable) and output (multivariate).

+ +

The independent variables are, for example, the length, weight, force, etc., and the dependent variables are the torque, friction, heat, temperature, etc.

+ +

What is the best approach to achieve that? Any guidance before I start? (If anyone can share any example code/notebook, that would be great as well.)

+",30642,,2444,,12/6/2019 15:09,12/6/2019 15:09,What is the best approach for multivariable and multivariate regression?,,1,0,,,,CC BY-SA 4.0 +15994,1,16003,,10/21/2019 4:36,,2,668,"

There's this 7 player social deduction game called Secret Hitler, and I have been trying to find a self-learning AI algorithm to learn how to play this game for a while. Basically, four players are given a liberal role, two players are given a fascist role, and 1 player is given a hitler role. The liberals and hitler do not know any other roles, and the fascists know everyone's roles. During a turn, a president elects a chancellor based on a yes/no vote and then the government passes a policy (either liberal or fascist) that is drawn from a randomized deck. At certain points in the game, different special abilities come into play, like executing a player or investigating their role. To win the game, the liberals must either enact 5 liberal policies or kill hitler; the fascists must enact 6 fascist policies or get hitler enacted as the chancellor after 3 fascist policies have been enacted.

+ +

Now, there are other details that are irrelevant that I didn't mention, but those are the general rules. It seems simple enough to build a visual implementation in a language like Java, but there are so many moving pieces that I would have to account for. I doubt that simply making random moves at first and learning off of the bad/good moves would work, because I need a way for agents to make moves based on which roles they know.

+ +

Unfortunately, AlphaZero wouldn't work here, and I'm struggling to find any algorithm that would work for this (or any other social deduction game). Do I have to write my own algorithm? I'm slightly confident that this is a case of supervised learning where I can give weight to the nodes in a neural network that correspond to wins, but please correct me if I'm incorrect.

+",23897,,23897,,10/21/2019 19:45,10/21/2019 19:45,What would be the most effective self-learning algorithm for a 7 player social deduction game?,,1,0,,,,CC BY-SA 4.0 +15995,1,,,10/21/2019 5:32,,4,793,"

+

I don't really understand what this equation is saying or what the purpose of the ELBO is. How does it help us find the true posterior distribution?

+",25721,,2444,,11/7/2020 1:14,11/7/2020 1:14,What's going on in the equation of the variational lower bound?,,2,1,,,,CC BY-SA 4.0 +15996,2,,15976,10/21/2019 11:14,,0,,"

You can maximize the loss by using 1 - loss (if the range of the loss is between 0 and 1). However, I am not sure if that would help: the network can just adapt the weights of the source classifier to always output the wrong answer. A better approach is to hand-craft features of the document to make the source less identifiable. If the document states the source explicitly, remove it. This may help. Also, another way is to use documents from more sources, so the network cannot use the source to classify the label. Hope I can help you.

+",23713,,,,,10/21/2019 11:14,,,,0,,,,CC BY-SA 4.0 +15997,2,,15995,10/21/2019 11:49,,3,,"

From this document, as you found here, $X$ is an observed variable and $Z$ is a hidden variable; $p(X)$ is the density function of $X$. The posterior distribution of the hidden variables can then be written as follows using the Bayes’ Theorem:

+ +

$$p(Z|X) = \frac{p(X|Z)p(Z)}{p(X)} = \frac{p(X|Z)p(Z)}{\int_Zp(X,Z)}$$

+ +

Now, based on what you posted, if we denote $L= \mathbb{E}_q [\log p(X, Z)] + H[Z]$ ($q(Z)$ is a distribution we use to approximate the true posterior distribution $p(Z|X)$ in VB and $H[Z] = -\mathbb{E}_q [\log q(Z)]$), then $L$ is a lower bound of the log probability of the observations, because $\log p(X) = L + D_{KL}(q(Z) \parallel p(Z|X))$ and the KL divergence is always non-negative. +As a result, if in some cases we want to maximize the marginal probability (the log probability of the observations), we can instead maximize its variational lower bound $L$. As a real example, you can follow the ""Multiple Object Recognition with Visual Attention"" example in the referenced document.

+ +

Moreover, the term $L$ can be expressed in terms of the KL divergence, which is used to measure the similarity of two distributions. Be aware that there is progress on the bound in this paper (Fixing a Broken ELBO).

+",4446,,4446,,10/21/2019 13:00,10/21/2019 13:00,,,,7,,,,CC BY-SA 4.0 +15998,2,,15984,10/21/2019 13:55,,2,,"

Are you thinking something like Information Gain?

+ +

Information Gain basically uses the concept of information entropy to determine if splitting a variable is useful.
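
+ +

As an illustration (a rough sketch of my own, not a full separability metric), you can compute the information gain of a candidate split with a few lines of NumPy; higher values mean the split separates the classes better:

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, mask):
    # entropy reduction from splitting the labels by a boolean mask
    n = len(labels)
    weighted = (mask.sum() / n) * entropy(labels[mask])
    weighted += ((~mask).sum() / n) * entropy(labels[~mask])
    return entropy(labels) - weighted

y = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 1.2, 0.9, 3.1, 2.8, 3.3])
print(information_gain(y, x > 2.0))   # 1.0 bit: this split separates the classes perfectly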

+",30662,,,,,10/21/2019 13:55,,,,0,,,,CC BY-SA 4.0 +15999,1,,,10/21/2019 14:20,,2,54,"

I am looking for research and experience working with ML models to ingest data for tasks, like text analysis, and creates a system that copies (or in other words enciphers) the input data, to then reproduce it in the future without the original.

+ +

I'm interested in how ML models can be used in this way to obfuscate information without too much information loss by the model, e.g. overfitting on purpose to create a new representation of the input information.

+",11893,,2444,,10/22/2019 1:07,10/22/2019 6:54,Using ML to encypher data for production,,1,0,,,,CC BY-SA 4.0 +16000,2,,15999,10/21/2019 14:44,,2,,"

It sounds like you are trying to compress data, and then recover the same data later.

+ +

The most common tool for this task is an autoencoder. This model accepts data as input, and then learns to compress it and decompress it to produce something as close as possible to the original data. By making the middle layer of an autoencoder narrower, you can make the compression more lossy. By making it wider, you can make it less lossy.

+",16909,,,,,10/21/2019 14:44,,,,1,,,,CC BY-SA 4.0 +16001,2,,15995,10/21/2019 14:54,,2,,"

The use of the KL divergence provides a more intuitive view of what the ELBO is attempting to maximize.

+ +

Basically, we want to find a posterior approximation such that $p(z\mid x) \approx q(z)\in\mathcal{Q}$

+ +

$$KL(q(z)\parallel p(z\mid x)) \rightarrow \min_{q(z)\in\mathcal{Q}}$$

+ +

As a result of this, while finding this optimal posterior approximation, we maximize the probability of all the observed data $x$. Note that the evidence is usually intractable. Thus, we can decompose the log-evidence as follows:

+ +

\begin{align*} \log p(x) &= \int q(z) \log p(x)dz \\ +&= \int q(z) \log\frac{p(x,z)}{p(z\mid x)}dz \\ +&= \int q(z) \log\frac{p(x,z)q(z)}{p(z\mid x)q(z)}dz\\ +&= \int q(z) \log\frac{p(x,z)}{q(z)}dz + \int q(z) \log\frac{q(z)}{p(z\mid x)}dz \\ &= \mathcal{L}(q(z)) + KL(q(z)\parallel p(z\mid x)) \end{align*}

+ +

In this case, KL just gives us the difference between $q$ and $p$. We want to make this difference close to zero, meaning that $q=p$. So, minimizing the KL is the same as maximizing the ELBO, and as a result, we obtain the lower bound in your expression. If you expand your bound, you can find a nice interpretation:

+ +

$$ \begin{align*} +\mathcal{L}(q(z)) &= \int q(z) \log\frac{p(x,z)}{q(z)}dz \\ +&= \mathbb{E}_{q(z)} \log p(x\mid z) - KL(q(z)\parallel p(z)) \end{align*} $$

+ +

When we optimize this expression, we want to find a $q$ that fits our data properly and is also really close to the true posterior. Thus, $\mathbb{E}_{q(z)} \log p(x\mid z)$ acts as a data term and $KL(q(z)\parallel p(z))$ as a regularizer.

+",29857,,29857,,11/3/2019 14:50,11/3/2019 14:50,,,,1,,,,CC BY-SA 4.0 +16002,2,,15993,10/21/2019 15:07,,0,,"

You can create a model like this very easily with Keras. Follow these steps.

+

Data Exploring

+

Before building a machine learning model, you must explore your data first. An approach is to use libraries to visualize the data as graphs. You can use the tool pandas-profiling

+

https://towardsdatascience.com/exploring-your-data-with-just-1-line-of-python-4b35ce21a82d

+

Points to look for when exploring the data

+

You should look for the following:

+
    +
  • Distribution of labels and input features
  • +
  • Range of input features and labels
  • +
  • Missing label or input feature
  • +
  • Nan on value
  • +
  • Outliers
  • +
+

You should look out for these and use appropriate data cleaning or filtering methods to handle them. For example, NaN or missing values may be substituted with 0 and outliers may be removed. Input features may also be normalized.
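
+

As a rough example of this kind of cleaning (my own sketch; the file and column names are made up), using pandas:

import pandas as pd

df = pd.read_csv('data.csv')      # hypothetical input file
df = df.fillna(0)                 # substitute NaN / missing values with 0
df = df[df['force'] < df['force'].quantile(0.99)]     # drop extreme outliers in one column
col_min, col_max = df['length'].min(), df['length'].max()
df['length'] = (df['length'] - col_min) / (col_max - col_min)   # min-max normalise a feature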

+

Building the model

+

Keras is an easy tool for building machine learning models. For how to build a basic MLP (Multi-Layer Perceptron), you can refer to the example code and the resource.

+

https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
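
+

Below is a minimal sketch of what such a model could look like for the multi-input, multi-output regression described in the question (my own example; the layer sizes, feature count and output count are placeholders you would adapt to your data):

from tensorflow import keras

n_features, n_outputs = 6, 4      # placeholders: e.g. length, weight, force, ... -> torque, friction, ...
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(n_features,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(n_outputs)                  # linear outputs for regression
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X_train, y_train, epochs=100, batch_size=32, validation_split=0.2)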

+

Experimenting

+

After creating the basic model, you can start experimenting with stuff like:

+
    +
  • Different model structure like adding layers and changing hidden layer size
  • +
  • Different Optimizer
  • +
  • Different hyperparameters like learning rate and batch size
  • +
  • Different input features
  • +
+

You will gain more experience as you go. There is a lot of ways to improve the performance of the model.

+

Bewares

+

Here are things to beware:

+
    +
  • If your model doesn't work out, don't worry. Try another method or tune parameters until it works.
  • +
  • If the testing loss becomes very high while the training loss is low, this is called overfitting; you can add dropout to prevent it from happening.
  • +
+

Hope I can help you and have fun!

+",23713,,-1,,6/17/2020 9:57,10/21/2019 15:07,,,,0,,,,CC BY-SA 4.0 +16003,2,,15994,10/21/2019 15:16,,1,,"

It is very likely that you want an algorithm like Counterfactual Regret Minimization (CFR). This algorithm has several variants, but they differ mostly in their efficiency.

+ +

CFR is the algorithm that was used to solve 2-Player Poker, although the solution comes from one of its more advanced versions. The algorithm is highly applicable to other games of incomplete information and games with roles, like Secret Hitler. Essentially, it learns how to act so that the actions it takes leak as little information to the other players as possible about its role (or hand of cards), while also learning about the roles of other players from their actions, and maximizing its chances of winning the game.

+",16909,,,,,10/21/2019 15:16,,,,0,,,,CC BY-SA 4.0 +16004,1,,,10/21/2019 15:29,,0,203,"

I am working on a project that takes signals from the brain, preprocesses them, and then makes the machine learn what the human is thinking about. I am stuck on preprocessing the signal (incoming from the EEG). I am having a problem when I attempt to remove noise. I used an SVM, but to no avail. I need some other suggestions from experts who have worked on a similar project. What can I do to preprocess the signal?

+",27469,,30426,,5/18/2021 22:03,5/18/2021 22:03,How can I remove the noise from an EEG signal?,,4,3,,,,CC BY-SA 4.0 +16005,1,,,10/21/2019 16:18,,3,127,"

I'm new to object detectors and segmentation. I want to localize digits on a plate as fast as possible. All images of the dataset are normalized to $300 \times 60$. There are different approaches to solve the problem, for example, binarization + connected component labeling, or vertical and horizontal projection. The aforementioned approaches fail under changing ambient light, noise, and shadows. Also, there are other approaches, such as STN-OCR (based on convolutional recurrent neural networks), that need a lot of plates with different compositions of numbers. I have a limited set of plates with the same numbers (about 1000 different numbers) but 10000 plates in total, under different illuminations and noise levels. I have a good OCR (without segmentation), so I need a network that just localizes the digits.

+ +

Is there any deep learning-based architecture for this purpose? Can I use faster RCNN? Yolo? SSD?

+ +

I trained Faster RCNN in Matlab, but it detects too many random bounding boxes for each plate. What could be the problem?

+",30669,,2444,,10/22/2019 0:59,10/22/2019 0:59,Is there a deep learning-based architecture for digit localisation?,,0,4,,,,CC BY-SA 4.0 +16006,1,,,10/21/2019 16:25,,2,57,"

As I stated in my question, I would like to know the underlying pipeline and machine learning models that are used to classify intents and identify entities in IBM Watson Assistant and Microsoft LUIS services.

+ +

I searched on different websites and the documentation of those services, but I did not find anything. However, some blogs mention that IBM Watson is trained using one billion words from Wikipedia, but there is no reference to support that claim.

+ +

I highly appreciate if anyone could refer me to a doc/blog that answers my question.

+ +

Thanks in advance :)

+",30671,,,,,10/21/2019 16:25,What is the underlying model of IBM Watson Assistant and Microsoft LUIS?,,0,1,,,,CC BY-SA 4.0 +16007,2,,15986,10/21/2019 17:13,,5,,"

The logical induction algorithm can make predictions about whether mathematical statements are true or false, which are eventually consistent; e.g. if A is true, its probability will eventually reach 1; if B implies C then C's probability will eventually reach or exceed B's; the probability of D will eventually be the inverse of not(D); the probabilities of E and F will eventually reach or exceed that of E AND F; etc.

+ +

It can also give consistent predictions about itself, e.g. ""the logical induction algorithm will predict the probability of X to be Y at timestep T"", whilst avoiding paradoxes like the liar's paradox.

+",30672,,,,,10/21/2019 17:13,,,,1,,,,CC BY-SA 4.0 +16008,1,16011,,10/21/2019 17:48,,3,186,"

I have a couple of small questions about the David Silver lecture about reinforcement learning, lecture slides (slides 23, 24). More specifically it is about the temporal difference algorithm:

+ +

$$V(s_{t}) \leftarrow V(s_t)+ \alpha \left[ G_{t+1}+\gamma V(s_{t+1})- V(s_t) \right]$$

+ +

where $\gamma$ is our discount rate and $\alpha $ the learning rate. +In the example given in the lecture slides we observe the following paths:

+ +

$(A,1,B,0), (B,1), (B,1), (B,1), (B,1), (B,1), (B,1), (B,0)$

+ +

Meaning for the first trajectory we are in state $A$, get reward $1$, get to state $B$ and get reward $0$ and the game finishes. For the second trajectory we start in state $B$, get reward $1$ and the game finishes ...

+ +

Let's say we initialize all states with value $0$ and choose $\alpha=0.1, \gamma=1$.

+ +

My first question is whether the following ""implementation"" of the $TD(0)$ algorithm for the first two of the above observed trajectories is correct?

+ +
    +
  1. $V(a)\leftarrow0 + 0.1(1+0-0)= 0.1; \quad V(b)\leftarrow0+0.1(1+0-0)=0.1$
  2. +
  3. $V(b)\leftarrow0.1+(0.1)(1+0-0.1)= 0.19$
  4. +
+ +

If so, why don't we use the updated value function for $V(b)$ to also update our value for $V(a)$?

+ +

My third question is about the statement that

+ +
+

$TD(0)$ converges to solution of max likelihood Markov model

+
+ +

Does this mean that, if we keep sampling and applying the $TD(0)$ algorithm, the solution thereby obtained converges towards the ML-estimate of that sample using the Markov model? Why don't we just use the ML-estimate immediately?

+",30369,,30369,,10/21/2019 19:56,10/21/2019 20:09,Confusion about temporal difference learning,,1,7,,12/23/2021 13:43,,CC BY-SA 4.0 +16009,2,,15986,10/21/2019 19:04,,11,,"

This question gets at a really interesting fact about AI research in general: AI is hard.

+ +

In fact, almost every AI problem is computationally hard (typically NP-Hard, or #P-Hard). This means that most new areas of AI research start out by characterizing some problem that is intractable, and proposing an algorithm that technically works, but is too slow to be useful. However, that's not the whole story. Usually AI researchers then proceed to develop tractable techniques according to one of two schools:

+ +
    +
  • Algorithms that usually work in practice, and are always fast, but are not completely correct.
  • +
  • Algorithms that are always correct, and are usually fast, but are sometimes very slow, or only work on specific kinds of sub-problem.
  • +
+ +

Taken together, these let AI address most problems. For example:

+ +
    +
  • Search was developed as a general purpose AI technique for solving planning and logic problems. The first algorithm, called the general problem solver, always worked, but was extremely slow. Eventually, we developed heuristic guided search techniques like A*, domain specific tricks like GraphPlan, and stochastic search techniques like Monte-Carlo Tree Search.
  • +
  • Bayesian Learning (or Bayesian Inference) has been known since the 1800's, but it is known to involve either the computation of intractable integrals, or the creation of exponentially sized discrete tables, making it NP-Hard. A very simple algorithm involves applying brute force and enumerating all of the options, but this is too slow. Eventually, we developed techniques like Gibbs Sampling (that is always fast, and usually right), or Variable Elimination (that is always right, and usually fast). Today we can solve most problems of this kind very well.
  • +
  • Reasoning about language was thought to be very hard (see the Frame Problem), because there are an infinite number of possible sentences, and an infinite number of possible contexts they could be used in. Exact approaches based on rules did not work. Eventually we developed probabilistic approaches like Hidden Markov Models and Deep Neural Networks, that aren't certain to work, but work so well in practice that language problems are, if not completely solved, getting very close. + +
      +
    • Games of chance, like Poker, were thought to be impossible, because they are #P-Hard to solve exactly (this is harder than NP-Hard). There will probably never be an exact algorithm for these. In spite of this, techniques like CFR+ can derive solutions that are so close to exactly perfect that you would need to play for decades against them to tell the difference.
    • +
  • +
+ +

So, what's still hard?

+ +
    +
  • Inferring the structure of a Bayesian network. This is closely related to the problem of causality. It's #P-Hard, but we don't currently have any good algorithms to even do this approximately very well. This is an active area of research.
  • +
  • Picking a machine learning algorithm to use for an arbitrary problem. The No Free Lunch theorem tells us this is not possible in general, but it seems like we ought to be able to do it pretty well in practice.
  • +
  • More to come...?
  • +
+",16909,,20044,,10/22/2019 14:49,10/22/2019 14:49,,,,1,,,,CC BY-SA 4.0 +16011,2,,16008,10/21/2019 20:02,,4,,"
+

My first question is whether the following "implementation" of the 𝑇𝐷(0) algorithm for the first two of the above observed trajectories is correct?

+
    +
  1. $V(a)\leftarrow0 + 0.1(1+0-0)= 0.1; \quad V(b)\leftarrow0+0.1(1+0-0)=0.1$
  2. +
  3. $V(b)\leftarrow0.1+(0.1)(1+0-0.1)= 0.19$
  4. +
+
+

Your calculations for the first trajectory $(A,1,B,0)$ are incorrect for either TD or Monte Carlo. For some reason you have assigned either an immediate reward or return of $1$ to the second step, whilst in the example, it is $0$ for both the sampled return and the single-step TD target.

+

In addition, you quote this update rule for single-step TD:

+

$$V(s_{t}) \leftarrow V(s_t)+ \alpha \left[ G_{t+1}+\gamma V(s_{t+1})- V(s_t) \right]$$

+

. . . actually that is not the usual notation. The symbol $G_t$ is normally used to show a "return" value - a sum of rewards (often weighted by some factor, such as $\gamma$). The usual way of showing the TD update rule would be:

+

$$V(s_{t}) \leftarrow V(s_t)+ \alpha \left[ r_{t+1}+\gamma V(s_{t+1})- V(s_t) \right]$$

+

i.e. using the immediate reward. This might be a simple typo, however I am explaining this because it may behind your incorrect calculation.

+

The correct calculation is not very much different from yours though:

+
    +
  1. $V(a)\leftarrow0 + 0.1(1+0-0)= 0.1; \quad V(b)\leftarrow0+0.1(0+0-0)=0.0$
  2. +
  3. $V(b)\leftarrow0.0+(0.1)(1+0-0.0)= 0.1$
  4. +
+
+

If so, why don't we use the updated value function for $V(b)$ to also update our value for $V(a)$?

+
+

You can, and would do this in either of the following situations:

+
    +
  • In online learning, you experience a trajectory with states in order (A,B) again

    +
  • +
  • In offline learning, you repeat the previous experience in batch learning or using experience replay

    +
  • +
+

It is worth noting that if you take a small batch of data and repeat it again and again to update the value functions, they will converge to values that depend on your data set. That is what the slide is explaining in the lecture - highlighting the difference that TD and Monte Carlo will make when you do this. However, if that data set is a very small subsample of possible random behaviour in the environment, then you may not create an accurate value function, but instead it will be the best one that you can given the limited data. If it is easy to collect more experience, then that is often preferable.

+
+

Why don't we just use the [maximum-likelihood] estimate immediately?

+
+

Because it is not directly useful for a value prediction task, and you would need some mechanism to use that maximum likelihood MDP model to generate value predictions. With TD, you are already in the process of making this estimate*.

+

You could take the existing samples, and use them to generate the parameters of an MRP (Markov Reward Process, as there are no example actions in the trajectory) based on the observations. That "best guess" MRP is your maximum likelihood MDP model, and would evaluate the same as your converged repeated TD batch over the samples.

+

The main difference explained by the slide is that Monte Carlo will converge to $V(a) = 1$ because the only sample with A in it has a return of 1 following state A. Whilst TD will converge to $V(a) = 1.75$, because it treats the same sample as the only instance of state progression from A - e.g state A "always" has an immediate reward of 1 then goes to state B. Both algorithms will converge to $V(b) = 0.75$.
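
+

If it helps, here is a small Python sketch (mine, not from the lecture) that repeatedly sweeps TD(0) over exactly these eight trajectories with a small learning rate; it converges to roughly $V(A) = 1.75$ and $V(B) = 0.75$, matching the slide:

# each transition is (state, reward, next_state); None marks the terminal state
episodes = [[('A', 1, 'B'), ('B', 0, None)]] + [[('B', 1, None)]] * 6 + [[('B', 0, None)]]

V = {'A': 0.0, 'B': 0.0}
alpha, gamma = 0.01, 1.0
for sweep in range(5000):                 # replay the same batch again and again
    for episode in episodes:
        for s, r, s_next in episode:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            V[s] += alpha * (target - V[s])

print(V)                                  # approximately {'A': 1.75, 'B': 0.75}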

+
+

* There are algorithms, such as Dyna-Q, which partially do this, using experience gathered so far to create a model of the environment dynamics. Sometimes this is useful and effective. However, it is not always possible or the best approach.

+",1847,,-1,,6/17/2020 9:57,10/21/2019 20:09,,,,3,,,,CC BY-SA 4.0 +16013,2,,15992,10/21/2019 20:57,,0,,"
+

Risks of regulation?

+
+ +

As you mention in your survey, it is generally understood that the primary concern with regulating AI research is that other parties risk falling behind.

+ +
+

Should we regulate it? Can it be done?

+
+ +

You can't really ""regulate"" technological development in the same way you can regulate some other things in general. Aside from the fact that there is no global governance that can implement this regulation on nations, you can't really regulate someone's research any more than you can control how people think: you just need a pen / paper / computer to do any research in math/AI.

+ +

The NSA tried to regulate encryption citing national security reasons during a saga known as the Crypto Wars. They failed.

+ +
+

What is AI anyways? How will we get there? What will it be like?

+
+ +

Honestly, from the phrasing of your questions in your survey, I get the impression that you don't really understand the hypothetical existential risk due to AI. Personally I don't really buy into their thesis, but in any case, if such a super-intelligent agent emerges, the problem isn't so much ""oh no my city is destroyed"" or ""oh no so many people are killed"", but more so ""all of humanity is enslaved without being aware"" or ""everything is dead"". We think this might happen because we assume AI is all-powerful and we project our own negative qualities onto this unknown agent with unknown power. It's mostly fear really.

+ +

This is all speculation, and by definition you cannot predict the behavior of an agent smarter than you, so literally every single comment on this topic is pure, unfounded speculation. The only thing that is true is that we don't know.

+ +

There is another aspect of AI which is dangerous, which is more concerned with how humans use it: e.g., facial recognition, automated weapon systems, automated hacking. These are more pressing issues.

+ +
+

What should we do? We are forced to research AI because no party can afford to fall behind, but at the same time we are pushing ourselves towards a dangerous future: it's a catch-22....

+
+ +

Consensus and current practice suggest that researchers publicize their results. Compared to other areas of academia, whose research is often locked behind paywalls, ML/AI research is quite publicly accessible. Of course, this doesn't prevent the possibility of a rogue agent....

+",6779,,6779,,10/21/2019 23:17,10/21/2019 23:17,,,,0,,,,CC BY-SA 4.0 +16014,1,16016,,10/21/2019 21:11,,2,559,"

I have read this post: +How to choose an activation function?.

+ +

There is enough literature about activation functions, but when should I use a linear activation instead of ReLU?

+ +

What does the author mean with ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values.?

+ +

Is there a more detail answer to this?

+",30599,,2444,,10/22/2019 0:50,9/25/2020 2:45,When should I use a linear activation instead of ReLU?,,2,0,,,,CC BY-SA 4.0 +16015,2,,15992,10/21/2019 21:21,,0,,"

I think there is a very strong argument for regulating AI. Chiefly, unintentional (or intentional) bias in statistically driven algorithms, and the idea that responsibility can be offloaded to processes that cannot be meaningfully punished where they transgress. Additionally, the history of technology, especially since the industrial revolution, strongly validates neo-luddism in the sense that the problems arising from implementation of new technology are not always predictable.

+ +

In this sense, there are both ethical reasons to consider regulation, and minimax reasons (here in the sense of erring on the side of caution to minimize the maximum potential downside.)

+ +
    +
  • Risk of falling behind
  • +
+ +

A risk is that not all participants will hew to the regulations, giving those who don't a significant advantage, but, that, in and of itself, is not a reason to forgo sensible regulation.

+ +

However, this is not a justification to forgo regulation, in that penalties at least serve as a potential deterrent.

+ +
    +
  • Opportunity cost
  • +
+ +

Not a risk, but a driver. The idea of ""leaving money on the table"" in that not implementing a given technology forgoes greater utility, sacrificing potential benefit.

+ +

This is not invalid, but shouldn't ignore hidden costs. For instance, the wide-scale deployment of even primitive bots has had a profound social impact.

+",1671,,1671,,10/22/2019 1:09,10/22/2019 1:09,,,,0,,,,CC BY-SA 4.0 +16016,2,,16014,10/21/2019 22:17,,2,,"

The activation function you choose depends on the application you are building/data that you have got to work with. It is hard to recommend one over the other, without taking this into account.

+ +

Here is a short-summary of the advantages and disadvantages of some common activation functions: +https://missinglink.ai/guides/neural-network-concepts/7-types-neural-network-activation-functions-right/

+ +
+

What does the author mean with ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values.

+
+ +

ReLU is good for inputs > 0, since ReLU = 0 if the input < 0 (which can kill the neuron, because the gradient is then 0).

+ +

To remedy this, you could look into using a Leaky-ReLU instead. +(Which avoids killing the neuron by returning a non-zero value in the cases of input <= 0)
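
+ +

For reference, here is a tiny sketch (my own) of the two functions being contrasted, showing why Leaky-ReLU keeps a non-zero output (and gradient) for negative inputs:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))         # [0.   0.   0.   1.5]
print(leaky_relu(x))   # [-0.02  -0.005  0.     1.5  ]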

+",30565,,,,,10/21/2019 22:17,,,,3,,,,CC BY-SA 4.0 +16017,1,16021,,10/22/2019 4:32,,2,336,"

I'm reading about the KL divergence on Wikipedia. I don't understand how the equation gives "information gained" as it says in the "Interpretations" section

+
+

Expressed in the language of Bayesian inference, ${\displaystyle D_{\text{KL}}(P\parallel Q)}$ is a measure of the information gained by revising one's beliefs from the prior probability distribution $Q$ to the posterior probability distribution $P$

+
+

I was under the impression that KL divergence is a way of measuring the difference between distributions (used in autoencoders to determine the difference between the input and the output generated from latents).

+

How does the equation $$D_{KL}(P \| Q)=\sum_x P(x) \log \left( \frac{P(x)}{Q(x)} \right)$$ give us a divergence? Also, in encoding and decoding algorithms that use KL divergence, is the goal to minimize $D_{KL}(P \| Q)$?

+",25721,,2444,,11/7/2020 15:06,11/7/2020 15:06,"How does the Kullback-Leibler divergence give ""knowledge gained""?",,1,0,,,,CC BY-SA 4.0 +16018,2,,15986,10/22/2019 4:50,,4,,"

Hutter's ""fastest and shortest algorithm for all well-defined problems"" is the ultimate just-in-time compiler. It runs a given program and, in parallel, searches for proofs that some other program is equivalent but faster. The running program is restarted at exponentially-spaced intervals; if a faster program has been found, that is started instead. The running time of this algorithm is of the same order as the fastest provably-equivalent algorithm, plus a constant $O(1)$ term (the time taken to find the proof, which doesn't dependent on the input size). For example, it will run Bubble Sort in at most $O(n~log (n))$) time, by finding a proof that it's equivalent to such a fast algorithm (like Merge Sort) then switching to that algorithm.

+ +

Hutter's algorithm is similar to the best ahead-of-time compilers, known as super-optimisers. They search through all possible programs, starting with the smallest/fastest, until they find one equivalent to the given code. These are actually in use right now, but are only practical for programs that are a few (machine code) instructions long. The LLVM compiler contains some ""peephole optimisations"" (i.e. find/replace templates) that were found by a super-optimiser a few years ago. Note that super-optimisation should not be confused with super-compilation (a rather general optimisation, which is not optimal and involves no search).

+",30672,,,,,10/22/2019 4:50,,,,0,,,,CC BY-SA 4.0 +16019,2,,15986,10/22/2019 5:36,,3,,"

Levin's search algorithm is a general method of function inversion. Many AI tasks are of this sort, e.g. given a cost or reward function (object -> cost or object -> reward), its inverse (cost -> object or reward -> object) would find an object with the given cost/reward; we could ask this inverse function for an object with low cost or high reward.

+ +

Levin's algorithm is optimal iff the given function is a ""black box"" with no known pattern in its output. For example, if a small change in the input produces a small change in the output, Levin search wouldn't be optimal; instead we could use hill climbing or some other gradient method.

+ +

Levin's algorithm looks for the function's inverse by running all possible programs in parallel, assigning exponentially more time to shorter programs. Whenever a program halts, we check whether its output is the desired inverse (i.e. whether givenProgram(outputOfHaltedProgram) = desiredOutput, e.g. whether cost(outputOfHaltedProgram) = low).

+ +

This way ""simpler"" guesses at the inverse are made first; where we define the simplicity (AKA ""Levin complexity"") of a value by looking through all programs $p$ which generate that value, and minimising the sum of: $p$'s length (in bits) plus the logarithm of $p$'s running time (in steps). If we ignored running time we would get Kolmogorov complexity, which is theoretically nicer but is incomputable (we don't know when to give up waiting for short non-halting programs, due to the Halting Problem). Levin complexity is computable, since we can give up waiting for those loops once they've taken exponentially-many steps as a longer solution (e.g. once we've spent $T$ steps waiting for a possible loop of length $N$, we can start trying programs that are $N+1$ bits long for $T/2$ steps).

+ +

The running time of Levin Search is of the same order as the simplest such inverse-value-generating program. However, this is misleading, since the fraction of steps allocated to running any particular program $p$ is $1/2^{complexity(p)}$, so this constant factor will be slowing down the computation of the inverse too. There is also overhead associated with context-switching between all of these programs.

+ +

The FAST algorithm does the same job as Levin Search, in the same time, but avoids the overhead of context-switching between an infinite number of parallel programs. Instead it runs one program at a time, cuts it off if it hasn't halted within an appropriate number of steps, then retries for twice as many steps later on. The GUESS algorithm is also equivalent, but chooses programs at random; the expected runtime is the same, but there's no need to keep track of loop counters like in FAST, plus it can be run on parallel hardware without having to coordinate anything (whilst still avoiding the infinite parallelism of the original).
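As a rough sketch of the FAST-style doubling schedule (step-limited Python generators stand in for ""programs"" here; a real implementation would enumerate programs by length and weight the budgets by $2^{-\text{length}}$, so all of this is illustrative only):

def run_for(gen, steps):
    # Run a generator-based 'program' for at most `steps` steps.
    # Returns (halted, output).
    try:
        for _ in range(steps):
            next(gen)
        return False, None           # ran out of budget
    except StopIteration as stop:
        return True, stop.value      # program halted with an output

def make_program(n_steps, output):
    # Toy 'program': yields once per step, then returns a candidate answer.
    def program():
        for _ in range(n_steps):
            yield
        return output
    return program

candidates = [make_program(3, 7), make_program(50, 4), make_program(10, 5)]

def fast_search(is_inverse, candidates):
    budget = 1
    while True:
        for make in candidates:                 # one program at a time
            halted, out = run_for(make(), budget)
            if halted and is_inverse(out):
                return out                      # found a value mapping to the target
        budget *= 2                             # retry later with twice the steps

# Invert f(x) = x**2 at the target 25, i.e. find an output x with x**2 == 25.
print(fast_search(lambda x: x ** 2 == 25, candidates))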

+ +

Levin search is currently impractical in its original setting of searching through general-purpose, Turing-complete programs. It can be useful in less general domains, e.g. searching through the space of hyper-parameters or other domain-specific, configuration-like ""programs"".

+",30672,,30672,,10/22/2019 14:41,10/22/2019 14:41,,,,0,,,,CC BY-SA 4.0 +16021,2,,16017,10/22/2019 8:10,,3,,"

You can understand it better if you know the concept of entropy:

+
+

Information entropy is the average rate at which information is produced by a stochastic source of data. The information content (also called the surprisal) of an event ${\displaystyle E}$ is an increasing function of the reciprocal of the probability ${\displaystyle p(E)}$ of the event, precisely ${\displaystyle I(E)=-\log _{2}(p(E))=\log _{2}(1/p(E))}$. Shannon defined the entropy Η of a discrete random variable ${\textstyle X}$ with possible values ${\textstyle \left\{x_{1},\ldots ,x_{n}\right\}}$ and probability mass function ${\textstyle \mathrm {P} (X)}$ as: +$${\displaystyle \mathrm {H} (X)=\operatorname {E} [\operatorname {I} (X)]=\operatorname {E} [-\log(\mathrm {P} (X))].}$$ +Here ${\displaystyle \operatorname {E} }$ is the expected value operator, ${\displaystyle \operatorname {I} }$ is the information content of $X$, and ${\displaystyle I(X)}$ is itself a random variable. The entropy can explicitly be written as +$${\displaystyle \mathrm {H} (X)=-\sum _{i=1}^{n}{\mathrm {P} (x_{i})\log _{b}\mathrm {P} (x_{i})}}$$ +where $b$ is the base of the logarithm used.

+
+

Now, the KL divergence is closely related to the cross-entropy of two probability distributions.

+

So, what is the cross entropy:

+
+

The cross entropy between two probability distributions ${\displaystyle p}$ and ${\displaystyle q}$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution ${\displaystyle q}$, rather than the true distribution ${\displaystyle p}$.

+

The cross entropy for the distributions ${\displaystyle p}$ and ${\displaystyle q}$ over a given set is defined as follows: +$${\displaystyle H(p,q)=\operatorname {E} _{p}[-\log q]}.$$ +The definition may be formulated using the Kullback–Leibler divergence${\displaystyle D_{\mathrm {KL} }(p\|q)}$ of ${\displaystyle q}$ from ${\displaystyle p}$ (also known as the relative entropy of ${\displaystyle p}$ with respect to ${\displaystyle q}$). +$${\displaystyle H(p,q)=H(p)+D_{\mathrm {KL} }(p\|q)},$$ +where ${\displaystyle H(p)}$ is the entropy of ${\displaystyle p}$.

+
+

Now, as we want to approximate a distribution $p$ with another distribution $q$, we want to minimize the cross-entropy between the two. The first part, $H(p)$, cannot be changed, since $p$ is given and we are only free to choose $q$. Hence, we need to minimize the KL divergence between the two in order to minimize the cross-entropy and obtain a better approximation of the distribution $p$.
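As a concrete illustration of these quantities (a minimal sketch; the two distributions below are arbitrary examples, not taken from the question):

import numpy as np

p = np.array([0.1, 0.4, 0.3, 0.2])       # 'true' distribution
q = np.array([0.25, 0.25, 0.25, 0.25])   # approximating distribution

entropy_p = -np.sum(p * np.log(p))         # H(p)
cross_entropy = -np.sum(p * np.log(q))     # H(p, q)
kl_divergence = np.sum(p * np.log(p / q))  # D_KL(p || q)

# H(p, q) = H(p) + D_KL(p || q), so minimising the cross-entropy over q
# is the same as minimising the KL divergence, since H(p) is fixed.
assert np.isclose(cross_entropy, entropy_p + kl_divergence)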

+",4446,,-1,,6/17/2020 9:57,10/22/2019 8:18,,,,0,,,,CC BY-SA 4.0 +16022,1,,,10/22/2019 8:57,,3,206,"

I'm using a neural network to solve a multi-output regression problem, because I'm trying to predict continuous values. To be more specific, I'm making a tracking algorithm to track the position of an object: I'm trying to predict two values, the latitude and longitude of the object.

+

Now, to calculate the loss of the model, there are some common functions, like the mean squared error or the mean absolute error. But I'm wondering if I can use a custom function, like this, to calculate the distance between the two (longitude, latitude) points, so that the loss would be the difference between the real distance (calculated from the real longitude and latitude) and the predicted distance (calculated from the predicted longitude and latitude). These are just some thoughts of mine, so I'm wondering whether such an idea makes sense.
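For example, something along these lines (just a rough sketch of one variant of this idea, where the loss directly penalises the distance between the predicted and the true position; the haversine formula and all the names here are only for illustration):

import torch

def haversine(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # great-circle distance between two (lat, lon) points given in degrees
    lat1, lon1, lat2, lon2 = map(torch.deg2rad, (lat1, lon1, lat2, lon2))
    a = torch.sin((lat2 - lat1) / 2) ** 2 + \
        torch.cos(lat1) * torch.cos(lat2) * torch.sin((lon2 - lon1) / 2) ** 2
    return 2 * radius_km * torch.asin(torch.sqrt(a))

def distance_loss(pred, target):
    # pred, target: tensors of shape (batch, 2) holding (lat, lon) in degrees
    d = haversine(pred[:, 0], pred[:, 1], target[:, 0], target[:, 1])
    return d.mean()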

+

Would this work in my case better than using the mean squared error as a loss function?

+

I had another question in mind. In my case, I'm predicting two values (longitude and latitude), but is there a way to transform these two target values into a single value so that my neural network can learn better and faster? If yes, which method should I use? Should I calculate the sum of the two and make that the new target? Does this make sense?

+",30327,,2444,,7/18/2020 12:35,8/12/2021 16:04,When should I create a custom loss function?,,2,0,,,,CC BY-SA 4.0 +16023,1,,,10/22/2019 10:12,,2,191,"

I'm studying different stop criteria in genetic algorithms and the advantages and disadvantages of each of them for evaluating different algorithms. One of these methods is the maximum number of fitness function calls (max NFFC): we define a value for max NFFC and, if the number of fitness function calls reaches this value, the algorithm stops. The fitness function is called to calculate the fitness of the initial population and whenever a crossover or mutation happens (if parents are chosen as offspring, there is no need to compute the fitness function).

+

I searched if there is a disadvantage or limitation about using this stop criterion, but I didn't find anything. So, I wanted to know if applying this stop criterion in my algorithm has any disadvantages or there is nothing wrong with using this criterion.

+",30311,,2444,,1/30/2021 2:50,6/19/2023 6:07,Is there any disadvantage of the maximum number of fitness function call as a stop criterion?,,1,0,,,,CC BY-SA 4.0 +16024,2,,16022,10/22/2019 10:14,,1,,"

Using two values with MSE is probably the better approach. If you combine the two values into one, as in the case of summation, the network may fit to outputting 0 on one axis and the full value on the other. The method you propose also has the same issue: there are many coordinate combinations that give the same distance, but only one of them is correct. Collapsing the targets into one value will not help the network learn faster. Instead, accuracy is often increased if the predicted value is a one-hot vector of labels instead of a single value. Hope this can help you.

+",23713,,,,,10/22/2019 10:14,,,,4,,,,CC BY-SA 4.0 +16025,1,,,10/22/2019 12:24,,4,187,"

I'm trying to implement the Reinforce algorithm (Monte Carlo policy gradient) in order to optimize a portfolio of 94 stocks on a daily basis (I have suitable historical data to achieve this). The idea is the following: on each day, the input to a neural network consists of:

+ +
    +
  • historical daily returns (daily momenta) for previous 20 days for each of the 94 stocks
  • +
  • the current vector of portfolio weights (94 weights)
  • +
+ +

Therefore states are represented by 1974-dimensional vectors. The neural network is supposed to return a 94-dimensional action vector which is again a vector of (ideal) portfolio weights to invest in. Negative weights (short positions) are allowed and portfolio weights should sum to one. Since the action space is continuous I'm trying to tackle it via the Reinforce algorithm. Rewards are given by portfolio daily returns minus trading costs. Here's a code snippet:

+ + + +
import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from torch.distributions import MultivariateNormal
+
+device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+
+class Policy(nn.Module):
+    def __init__(self, s_size=1974, h_size=400, a_size=94):
+        super().__init__()
+        self.fc1 = nn.Linear(s_size, h_size)
+        self.fc2 = nn.Linear(h_size, a_size)
+        self.state_size = 1974
+        self.action_size = 94
+    def forward(self, x):
+        x = F.relu(self.fc1(x))
+        x = self.fc2(x)
+        return x
+    def act(self, state):
+        state = torch.from_numpy(state).float().unsqueeze(0).to(device)
+        means = self.forward(state).cpu()
+        m = MultivariateNormal(means,torch.diag(torch.Tensor(np.repeat(1e-8,94))))
+        action = m.sample()
+        action[0] = action[0]/sum(action[0])
+        return action[0], m.log_prob(action)
+
+ +

Notice that in order to ensure that portfolio weights (entries of the action tensor) sum to 1 I'm dividing by their sum. Also notice that I'm sampling from a multivariate normal distribution with extremely small diagonal terms since I'd like the net to behave as deterministically as possible. (I should probably use something similar to DDPG but I wanted to try out simpler solutions to start with).

+ +

The training part looks like this:

+ + + +
from collections import deque
+import torch.optim as optim
+
+# `env` is the custom portfolio environment (not shown here)
+policy = Policy().to(device)
+optimizer = optim.Adam(policy.parameters(), lr=1e-3)
+
+def reinforce(n_episodes=10000, max_t=10000, gamma=1.0, print_every=1):
+    scores_deque = deque(maxlen=100)
+    scores = []
+    for i_episode in range(1, n_episodes+1):
+        saved_log_probs = []
+        rewards = []
+        state = env.reset()
+        for t in range(max_t):
+            action, log_prob = policy.act(state)
+            saved_log_probs.append(log_prob)
+            state, reward, done, _ = env.step(action.detach().flatten().numpy())
+            rewards.append(reward)
+            if done:
+                break 
+        scores_deque.append(sum(rewards))
+        scores.append(sum(rewards))
+
+        discounts = [gamma**i for i in range(len(rewards)+1)]
+        R = sum([a*b for a,b in zip(discounts, rewards)])
+
+        policy_loss = []
+        for log_prob in saved_log_probs:
+            policy_loss.append(-log_prob * R)
+        policy_loss = torch.cat(policy_loss).sum()
+
+        optimizer.zero_grad()
+        policy_loss.backward()
+        optimizer.step()
+
+        if i_episode % print_every == 0:
+            print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
+            print(scores[-1])
+
+    return scores, scores_deque
+
+scores, scores_deque = reinforce()
+
+ +

Unfortunately, there is no convergence during training even after fiddling with the learning rate so my question is the following: is there anything blatantly wrong with my approach here and if so, how should I tackle this?

+",26195,,2444,,5/30/2022 9:00,6/24/2023 10:03,Why is my implementation of REINFORCE algorithm for portfolio optimization not converging?,,1,0,,,,CC BY-SA 4.0 +16026,1,16027,,10/22/2019 13:06,,2,133,"

I read the DDPG paper, in which the authors state that the actions are fed only later to their Q network:

+ +
+

Actions were not included until the 2nd hidden layer of Q. (Sec 7, Experiment Details)

+
+ +

So does that mean, that the input of the first hidden layer was simply the state and the input of the second hidden layer the output of the first hidden layer concatenated with the actions?

+ +

Why would you do that? To have the first layer focus on learning the state value independent of the selected action? How would that help?

+ +

Is this just a small little tweak or a more significant improvement?

+",19928,,,,,12/26/2021 14:17,Why feed actions in later layer in Q network?,,1,1,,,,CC BY-SA 4.0 +16027,2,,16026,10/22/2019 13:42,,2,,"
+

So does that mean, that the input of the first hidden layer was simply the state and the input of the second hidden layer the output of the first hidden layer concatenated with the actions?

+
+ +

Yes.
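For illustration, a minimal PyTorch sketch of such a critic (the layer sizes here are just placeholders, not necessarily the ones used in the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    def __init__(self, state_size, action_size, h1=400, h2=300):
        super().__init__()
        self.fc1 = nn.Linear(state_size, h1)        # first hidden layer: state only
        self.fc2 = nn.Linear(h1 + action_size, h2)  # the action enters here
        self.out = nn.Linear(h2, 1)

    def forward(self, state, action):
        x = F.relu(self.fc1(state))
        x = torch.cat([x, action], dim=1)           # concatenate the action with the first layer's output
        x = F.relu(self.fc2(x))
        return self.out(x)                          # Q(s, a)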

+ +
+

Why would you do that? To have the first layer focus on learning the state value independent of the selected action? How would that help?

+
+ +

A neural network's hidden layers learn representations that get progressively closer to a linear relationship with the target, layer by layer.

+ +

So the first layer would not be learning the state value per se, but some representation of the state that was better related to the action value at the output.

+ +

Neural network architectures are often established by experimentation, so I expect here they tried the idea and the performance was OK. If they do not give alternative architectures in the paper, then the precise reason is not clear.

+ +

I can try a few guesses:

+ +
    +
  • Concatenating the actions with the states in the input layer would result in more parameters for the neural network to achieve the same accuracy, running slightly slower. That's because more weights would be required to link the input layer with the first hidden layer.

  • +
  • Separating the layers of state and action inputs is a form of regularisation, as the first layer has to produce features that are useful for all possible actions.

  • +
  • You don't want the neural network to construct a reverse map $\pi(a|s) \rightarrow Q(s,a)$, you want it to independently assess the action values to do its job as a critic. By having the states and actions presented in different layers, this reduces the chance of the neural network finding shortcuts due to ""recognising the policy"" to predict the values. That's because the state and action pairings may repeat (so be recognised), but the first layer activations change over time (so even with repeated state/action pairs, this is a new representation to learn from).

  • +
+ +
+

Is this just a small little tweak or a more significant improvement?

+
+ +

I don't know, and suggest looking through previous work by the same authors in case they describe the approach in more detail.

+",1847,,,,,10/22/2019 13:42,,,,0,,,,CC BY-SA 4.0 +16028,2,,16025,10/22/2019 13:51,,0,,"

One thing you could try to simplify the output logic would be to use a softmax output, set a variable to (max_output - min_output)/2, and treat that number as your long/short ""threshold"". This ensures that your output always sums to 1 while still allowing the net to learn to output short signals. I would also check that you have bias neurons (set to 1.0), since I imagine that at times (especially in the first epoch) you are passing in zeros for the portfolio weights.

+",20044,,,,,10/22/2019 13:51,,,,2,,,,CC BY-SA 4.0 +16030,2,,15857,10/22/2019 13:58,,1,,"
+

Is the gradient at a layer (of a feed-forward neural network) independent of the activations of the previous layers?

+
+ +

Yes, as per @recessive's answer, they are indeed independent of the previous layers.

+ +

The goal of back-propagation is to trace the loss (the error between the target and the network output) to specific weights in the network, and then tweak them to minimize this loss. For this to be possible, the activation must be independent of previous activation functions (going forward in the network).

+ +

It is very helpful to have a good understanding of the back-propagation process when reading these papers, and I personally suggest watching this video to get a good understanding of it:

+ +

https://www.youtube.com/watch?v=Ilg3gGewQ5U

+",30565,,,,,10/22/2019 13:58,,,,0,,,,CC BY-SA 4.0 +16031,2,,16004,10/22/2019 14:14,,1,,"

This might be more of a signal-processing question, rather than an artificial intelligence question, but I will try my best to be of help.

+ +

Do you know what the noise you are trying to remove is? How it behaves / where it stems from? Or, do you know how your output signal should look post-processing?

+ +

If you know these things and you are familiar with MATLAB or any other matrix multiplication software, they come with great prebuilt toolboxes for traditional approaches to remove noise from signals.

+ +

If you are not exactly sure what patterns you are looking for, I suggest perhaps looking into Autoencoders to discover the hidden patterns. Though it is important to note that the origin of the noise may greatly affect their abilities. If you plan on using such a technique, it is important that you have a sufficiently large dataset of the signals available.

+ +

Without the clarifications to these questions, along with @nbro's questions, it is hard to be more specific.

+",30565,,,,,10/22/2019 14:14,,,,0,,,,CC BY-SA 4.0 +16032,2,,15972,10/22/2019 14:46,,1,,"

To gain a good understanding of this, I recommend first reading about the trade-off between bias and variance in ML and AI methods.

+ +

A great article on this topic that I recommend as a light mathematical introduction is this: +https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229

+ +

In short: bias represents the model's effort to generalize over samples, as opposed to variance, which represents the model's effort to conform to new data. A high-bias, low-variance model will thus look more like a straight (underfitted) line, while a low-bias, high-variance model will look jagged and all over the place (overfitted).

+ +

In essence, you need to find a balance between the two to avoid both overfitting(high variance, low bias) and underfitting(high bias, low variance) for your specific application.

+ +
+

But how can I determine this for a model such as a Random Forrest classifier?

+
+ +

To determine your model's bias and variance configuration (if either is too high/low), you can look at the model's performance on the validation and test sets. The very reason we divide our data into training/validation/test sets is so that we can validate the model's performance when it is presented with samples it has not seen during training.

+",30565,,,,,10/22/2019 14:46,,,,7,,,,CC BY-SA 4.0 +16033,1,,,10/22/2019 15:08,,1,1190,"

I've been searching for a tool to convert from the brat standoff format to the CoNLL-U format, so that I can use it as a parsing corpus for the spaCy library.

+

Can you help me?

+",30702,,2444,,12/21/2021 17:03,12/21/2021 17:03,Is there a tool to convert from the brat standoff format to CoNLL-U format?,,1,0,,1/20/2021 0:11,,CC BY-SA 4.0 +16034,2,,16004,10/22/2019 16:15,,0,,"

By SVM, do you possibly mean singular value decomposition (SVD, a known noise-reduction technique)? If this is true, then I would say the next method I would try would be the wavelet transform for noise reduction, and if neither of these techniques works on its own, it is not uncommon to use them together, as is done here.

+",20044,,,,,10/22/2019 16:15,,,,0,,,,CC BY-SA 4.0 +16035,2,,4682,10/22/2019 21:58,,2,,"

Additional features can also cause overfitting if they have low or misleading information.

+ +

Consider the following problem:

+ +

$X = [1, 3, 3, 4, 5]$, $Y = [1, 3, 4, 4, 5]$.

+ +

Suppose that the real dataset was generated from the relationship:

+ +

$Y = X$, with a probability of 0.2 of adding or subtracting 1.

+ +

A reasonable model estimate is $Y = X$. Note that no model can fit this data perfectly, because the two 3 inputs map to different outputs.

+ +

Now, suppose we add a new feature: a random number between $0$ and $10$: +$W = [1 ,5, 2, 6, 3]$

+ +

It may not be obvious, but a sufficiently deep and broad neural network can learn a new function:

+ +

$g(W) = 1$ if $W = 2,4,7,8,9,$ or $0$.

+ +

$g(W) = 0$ otherwise.

+ +

and define a new prediction: $Y = X + g(W)$.

+ +

This happens to produce a perfect fit on the training data. However, it will perform extremely poorly on new data (like a test set), because it has learned a meaningless pattern out of random noise. Coincidentally, it will be wrong on about 50% of samples, while our first model will be wrong on only 20% of samples.

+",16909,,,user9947,3/21/2020 0:53,3/21/2020 0:53,,,,2,,,,CC BY-SA 4.0 +16036,1,,,10/23/2019 7:26,,3,3680,"

Is it possible to calculate the best possible placements for settlements in Catan without using an ML algorithm?

+ +

While it is trivial to simply add up the numbers surrounding the settlement (highest point location), I'm looking to build a deeper analysis of the settlement locations. For example, if the highest point location is around a sheep-sheep-sheep, it might be better to go to a lower point location for better resource access. It could also weight for complementary resources, blocking other players from resources, and being closer to ports.

+ +

It seems feasible to program arithmetically, yet some friends said this is an ML problem. If it is ML, how would one go about training, as the gameboard changes every game?

+",30720,,1671,,10/25/2019 23:53,10/26/2019 19:49,How to calculate the optimal placements for settlements in Catan without an ML algorithm?,,3,1,,,,CC BY-SA 4.0 +16037,1,16040,,10/23/2019 13:04,,5,520,"

(I apologize for the title being too broad and the question being not 'technical')

+ +

Suppose that my task is to label news articles. This means that given a news article, I am supposed to classify which category that news belong to. Eg, 'Ronaldo scores a fantastic goal' should classify under 'Sports'.

+ +

After much experimentation, I came up with a model that does this labeling for me. It has, say, 50% validation accuracy. (Assume that it is the best)

+ +

And so I deployed this model for my task (on unseen data obviously). Of course, from a probabilistic perspective, I should get roughly 50% of the articles labelled correctly. But how do I know which labels are actually correct and which labels need to be corrected? If I were to manually check (say, by hiring people to do so), how is deploying such a model better than just hiring people to do the classification directly? (Do not forget that the manpower cost of developing the model could have been saved.)

+",30729,,1671,,10/24/2019 0:47,10/24/2019 0:47,How does text classification reduce manpower costs?,,2,2,,,,CC BY-SA 4.0 +16038,1,16041,,10/23/2019 14:40,,5,407,"

I very often applied a grid search to tune the parameters of my supervised model. I have the feeling that parameter tuning will eventually (very often) lead to overfitting? Is this crazy to say?

+ +

Is there a way that we can apply grid search in such a way that it will not overfit?

+",30599,,2444,,10/23/2019 15:24,10/23/2019 15:24,How can I avoid overfitting when doing parameter tuning?,,1,0,,,,CC BY-SA 4.0 +16039,2,,16037,10/23/2019 14:41,,0,,"

First of all, to be realistic, you would usually expect more than 50% validation accuracy on article predictions.

+ +

Back to your question: you should definitely try to automate this process if you are looking for a long-term solution for labeling articles. Deploying such a model should not cost more than hiring employees to do this manually, at least from a long-term perspective.

+",28233,,,,,10/23/2019 14:41,,,,0,,,,CC BY-SA 4.0 +16040,2,,16037,10/23/2019 14:46,,10,,"

There are several advantages:

+ +
    +
  1. Some text classification systems are much more accurate than 50%. For example, most spam classification systems are 99.9% accurate, or more. There will be little value to having employees review these labels.
  2. +
  3. Many text classification systems can output a confidence as well as a label. You can selectively have employees review only the examples the model is not confident about. Often these will be small in number.
  4. +
  5. You can usually test a text classification model by having it classify some unseen data, and then asking people to check the work. If you do this for a small number of examples, you can make sure the system is working. You can then confidently use the system on a much large set of unlabeled examples, and be reasonably sure about how accurate it is.
  6. +
  7. For text, it is also important to measure how much different people agree on the ratings. You are unlikely to do better than this, because this gives you a notion of the subjectivity of the specific problem you are working on. If people disagree 50% of the time anyway, maybe you can accept a 50% failure rate from the automated system, and not bother checking its work.
  8. +
+",16909,,16909,,10/23/2019 17:25,10/23/2019 17:25,,,,6,,,,CC BY-SA 4.0 +16041,2,,16038,10/23/2019 15:09,,4,,"

Yes. Usually you would use cross validation to avoid overfitting during parameter tuning. If your dataset is large enough, and you don't try too many parameter combinations, this will work well, because to ""get lucky"" and overfit, a parameter combination will need to work very well on many variations of the problem, which is less likely than working well on just one set of data.
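For example, with scikit-learn the standard pattern looks like this (a minimal sketch; the classifier, the parameter grid and the toy data are all placeholders):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)   # toy data

param_grid = {'n_estimators': [100, 300], 'max_depth': [None, 5, 10]}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                  # 5-fold cross-validation for every parameter combination
    scoring='accuracy',
)
search.fit(X, y)
print(search.best_params_, search.best_score_)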

+",16909,,,,,10/23/2019 15:09,,,,0,,,,CC BY-SA 4.0 +16042,1,16044,,10/23/2019 15:26,,2,26,"

Based on the answer of my previous question: +How can I avoid overfitting when doing parameter tuning?

+ +

Can we say that the more we increase the number $K$ of folds in cross-validation, the less likely it is that we overfit?

+",30599,,,,,10/23/2019 15:40,Can we say: the more we increase the numbers of cross validation the less likely it is that we overfit?,,1,0,,,,CC BY-SA 4.0 +16043,2,,16036,10/23/2019 15:27,,3,,"

Catan is actually a much more complicated game than the simple rules would suggest, and an exact solution is probably beyond the scope of current AI techniques.

+ +

Monte Carlo Tree Search or Expectiminimax techniques seem like they could help, but are intended for games of perfect information. Catan is not a game of perfect information (the development cards are hidden), and also has a phase that occurs without a regular turn sequence (trading).

+ +

To solve Catan properly, I think you're going to need both algorithms for solving POMDPs (like CFR+), and algorithms for negotiation (like Kraus' Diplomat). I'm not certain that these have been combined before in formal analysis, so this might actually be a good PhD thesis for someone.

+ +

That said, you can probably get a good player using self-play techniques, because Catan has randomization, and a relatively small set of moves, like Backgammon. These may or may not offer simple rules about how-best to play the game. Your friends are right to think about this as, at root, an ML problem.

+",16909,,,,,10/23/2019 15:27,,,,5,,,,CC BY-SA 4.0 +16044,2,,16042,10/23/2019 15:40,,1,,"

In general, no.

+ +

There is a tradeoff between making the validation set for each fold smaller, and having more folds in total.

+ +

As an example, if you have $N$ folds for $N$ datapoints, each fold will have only a single datapoint in its validation set. The validation accuracy of a model on a single datapoint is not a reliable estimator for the test performance of the model. In fact, you can construct examples where the error is arbitrarily large.

+ +

For this reason, people sometimes use Bootstrap Validation if they need a very large number of folds. In practice though, most people just use 10 folds, and that's ""good enough"".

+",16909,,,,,10/23/2019 15:40,,,,0,,,,CC BY-SA 4.0 +16045,1,16127,,10/23/2019 16:21,,1,1658,"

+ +

I got this slide from CMU's lecture notes. The $x_i$s on the right are inputs and the $w_i$s are weights that get multiplied together then summed up at each hidden layer node. So I'm assuming this is a node in the hidden layer.

+ +

What is the mathematical reason for taking the sum of the weights and inputs and inputting that into a sigmoid function? Is there something the sigmoid function provides mathematically or provides some sort of intuition useful for the next layer?

+",25721,,2444,,10/31/2019 20:05,10/31/2019 20:05,Why is there a sigmoid function in the hidden layer of a neural network?,,2,0,,10/31/2019 19:32,,CC BY-SA 4.0 +16046,1,,,10/23/2019 22:09,,0,1173,"

I want to design a neural network that can be used for predicting sports scores for betting, specifically for American football. What I’d like to do is create a kind of profile for each game based on the specific strengths and weaknesses of each team.

+

For example, let’s say two teams have the following characteristics:

+

Team A:

+
    +
  • Passing Offense Rating: 5
  • +
  • Rushing Offense Rating: 2
  • +
+

Team B:

+
    +
  • Passing Defense Rating: 3
  • +
  • Rushing Defense Rating: 4
  • +
+

I’d like to be able to search for historical games where two teams have similar profiles. I could perhaps then narrow it down to games with profiles that have statistically significant historical outcomes (i.e., certain types of matchups are likely to result in similar results).

+

In reality, I’d have dozens of team characteristics to compare. I would then need to assign weights of importance to each characteristic, which could be used to further ensure the effective selection of similar games.

+

I think I could do this like a convolutional neural network where there is an additional filter applied to the characteristics for the weights.

+

Are there any other ways that are specifically applicable to this strategy?

+",30154,,2444,,12/18/2021 11:36,12/18/2021 11:36,Neural networks for sports betting,,1,0,,,,CC BY-SA 4.0 +16047,2,,16046,10/23/2019 23:56,,2,,"

This is probably not going to work well as a way to make money. People with far larger budgets, and far more training, are already milking out any money to be made this way. This is probably their day job, and they are good at it.

+ +

That said, here are some ideas:

+ +
    +
  1. You do not need or want to use a convolutional network for this. Convolutional networks are useful when you want to detect a pattern invariant to translation in complex inputs. The only translation you have is that the teams could appear in either order. Just input them in both possible orders if this is a concern.
  2. +
  3. You want to find similar games. You do not need or (generally) want to use a neural network to do this. As a starting point, normalize the features and compute the nearest neighbors of an input point directly. Algorithms for doing this are fast and will return all similar points. You can even (with small modifications) output the degree of similarity as a numeric value (see the sketch after this list).
  4. +
  5. If you want to predict who will win, you will probably have much better luck doing that directly. Build a feed-forward neural network that predicts which team won from the features you gathered. This is a routine classification task, and the resulting model will probably work better than finding similar games and then trying to manually determine what to do.
  6. +
  7. If you care about scores instead of just who won, use a feed forward network for regression instead of classification. The task is almost identical.
  8. +
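As a minimal sketch of the similar-games suggestion above (all feature names and numbers below are made up), normalising the features and querying nearest neighbours with scikit-learn could look like this:

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# One row per historical game: made-up ratings such as
# [home_pass_off, home_rush_off, away_pass_def, away_rush_def]
historical_games = np.array([
    [5, 2, 3, 4],
    [4, 4, 2, 5],
    [1, 5, 4, 2],
    [3, 3, 3, 3],
])

scaler = StandardScaler().fit(historical_games)
index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(historical_games))

upcoming_game = np.array([[5, 2, 3, 5]])
distances, indices = index.kneighbors(scaler.transform(upcoming_game))
print(indices)    # indices of the most similar historical games
print(distances)  # smaller distance = more similar profile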
+",16909,,,,,10/23/2019 23:56,,,,9,,,,CC BY-SA 4.0 +16048,5,,,10/24/2019 0:51,,0,,"

https://en.wikipedia.org/wiki/Utility#Utility_function

+ +

https://en.wikipedia.org/wiki/Utility#Expected_utility

+ +

https://en.wikipedia.org/wiki/Utility_maximization_problem

+ +

https://plato.stanford.edu/entries/decision-theory/

+",1671,,1671,,10/24/2019 0:51,10/24/2019 0:51,,,,0,,,,CC BY-SA 4.0 +16049,4,,,10/24/2019 0:51,,0,,"For questions about the economic and game theoretic concepts of utility, including utility functions and expected utility.",1671,,1671,,10/24/2019 0:51,10/24/2019 0:51,,,,0,,,,CC BY-SA 4.0 +16050,2,,2817,10/24/2019 2:21,,2,,"

I wrote some python code to reproduce this paper's purported results. My code very efficiently optimizes simple smooth functions like bowls, but does not come close to reproducing the paper's claimed results on more complex functions, including with the parameters the authors report. I think that, since both @Jairo and I were unable to reproduce the results from the information in the paper, independently, it is likely that something is wrong with the paper.

+ +

It may be possible to reproduce the paper's claimed behaviors using a library like py_swarm; however, the paper is using a Firefly Algorithm (already a rarer form of PSO), and a home-brew variant of that algorithm at that, so my guess is that rolling your own is required, as we both did.

+ +

Additionally, I find the paper's claims fairly unlikely. For example, in section 4.1, they claim to converge on the global minimum of a function with many local minima in just 10 iterations. It seems to me like most of the time the fireflies will quickly converge on the bottom of one of the gullies in this function, and get stuck there. This is also what I observe in my replications. I suspect the authors may have cherry-picked their results from the best runs without reporting this, or have omitted some key detail from the paper.

+ +

Here is my replication code, in case someone else wants to try to reproduce this:

+ +
from math import sin, pi, exp, sqrt
+from random import random
+from copy import deepcopy
+
+def michalewiz_objective(x):
+    result = 0
+    for i, x_i in enumerate(x):
+        result -= sin(x_i)*(sin(i*(x_i**2)/pi))**20
+    return -result
+
+def bowl_objective(x):
+    return -sum([x_i**2 for x_i in x])
+
+pop_size = 40
+max_generations = 10
+alpha = 0.2
+gamma = 1
+beta_0 = 1
+d = 2
+I = michalewiz_objective
+#I = bowl_objective
+
+def move(firefly, other_firefly):
+    radius = sqrt(sum([(firefly[i] - other_firefly[i])**2 for i in range(0, len(firefly))]))
+    for i, value in enumerate(firefly):
+        firefly[i] += beta_0*exp(-gamma*radius**2)*(other_firefly[i] - firefly[i])
+        firefly[i] += alpha * (random() - 0.5)
+
+# Using 4*random() to match Figure 3's apparent spread.
+fireflies = [[4*random() for i in range(0, d)] for j in range(0, pop_size)]
+
+for generation in range(0, max_generations):
+    new_fireflies = [deepcopy(firefly) for firefly in fireflies]
+    for index, firefly in enumerate(fireflies):
+        for other_firefly in fireflies:
+            if I(other_firefly) > I(firefly):
+                move(new_fireflies[index], other_firefly)
+    fireflies = new_fireflies
+    best = max([I(f) for f in fireflies])
+    mean = [sum([f[0] for f in fireflies]), sum([f[1] for f in fireflies])]
+    print(best)
+    print(mean)
+
+",16909,,16909,,11/23/2019 4:31,11/23/2019 4:31,,,,0,,,,CC BY-SA 4.0 +16051,1,,,10/24/2019 2:54,,1,215,"

I am trying to implement NEAT for the snake game. My game logic is ready and working properly, and NEAT is configured. But even after 100 generations with 200 genomes per generation, the snakes perform very poorly: a snake barely ever eats more than 2 pieces of food. Below is a snippet of the eval_genome function:

+ +
def eval_genome(genomes, config):
+    clock = pygame.time.Clock()
+    win = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT))
+    for genome_id, g in genomes:
+        net = neat.nn.FeedForwardNetwork.create(g, config)
+        g.fitness = 0
+        snake = Snake()
+        food = Food(snake.body)
+        run = True
+        UP = DOWN = RIGHT = LEFT = MOVE_SNAKE = False
+        moveToFood = 0
+        score = 0
+        moveCount = 0
+        while run:
+            pygame.time.delay(50)
+            clock.tick_busy_loop(10)
+            for event in pygame.event.get():
+                if event.type == pygame.QUIT:
+                    run = False
+            snakeHeadX = snake.body[0]['x']
+            snakeHeadY = snake.body[0]['y']
+            snakeTailX = snake.body[len(snake.body)-1]['x']
+            snakeTailY = snake.body[len(snake.body)-1]['y']
+            snakeLength = len(snake.body)
+            snakeHeadBottomDist = WIN_HEIGHT - snakeHeadY - STEP
+            snakeHeadRightDist = WIN_WIDTH - snakeHeadX - STEP
+            foodBottomDist = WIN_HEIGHT - food.y - STEP
+            foodRightDist = WIN_WIDTH - food.x - STEP
+            snakeFoodDistEuclidean = math.sqrt((snakeHeadX - food.x)**2 + (snakeHeadY - food.y)**2)
+            snakeFoodDistManhattan = abs(snakeHeadX - food.x) + abs(snakeHeadY - food.y)
+            viewDirections = snake.checkDirections(food, UP, DOWN, LEFT, RIGHT)
+            deltaFoodDist = snakeFoodDistEuclidean
+
+            outputs = net.activate((snakeHeadX, snakeHeadY, snakeHeadBottomDist, snakeHeadRightDist, snakeTailX, snakeTailY, snakeLength, moveCount, moveToFood, food.x, food.y, foodBottomDist, foodRightDist, snakeFoodDistEuclidean, snakeFoodDistManhattan, viewDirections[0], viewDirections[1], viewDirections[2], viewDirections[3], viewDirections[4], viewDirections[5], viewDirections[6], viewDirections[7], deltaFoodDist))
+
+            if (outputs[0] == max(outputs) and not DOWN):
+                snake.setDir(0,-1)
+                UP = True
+                LEFT = False
+                RIGHT = False
+                MOVE_SNAKE = True
+            elif (outputs[1] == max(outputs) and not UP):
+                snake.setDir(0,1)
+                DOWN = True
+                LEFT = False
+                RIGHT = False
+                MOVE_SNAKE = True
+            elif (outputs[2] == max(outputs) and not RIGHT):
+                snake.setDir(-1,0)
+                LEFT = True
+                UP = False
+                DOWN = False
+                MOVE_SNAKE = True
+            elif (outputs[3] == max(outputs) and not LEFT):
+                snake.setDir(1,0)
+                RIGHT = True
+                UP = False
+                DOWN = False
+                MOVE_SNAKE = True
+            elif (not MOVE_SNAKE):
+                if (outputs[0] == max(outputs)):
+                    snake.setDir(0,-1)
+                    UP = True
+                    MOVE_SNAKE = True
+                elif (outputs[1] == max(outputs)):
+                    snake.setDir(0,1)
+                    DOWN = True
+                    MOVE_SNAKE = True
+                elif (outputs[2] == max(outputs)):
+                    snake.setDir(-1,0)
+                    LEFT = True
+                    MOVE_SNAKE = True
+                elif (outputs[3] == max(outputs)):
+                    snake.setDir(1,0)
+                    RIGHT = True
+                    MOVE_SNAKE = True  
+
+            win.fill((0, 0, 0))
+            food.showFood(win)
+            if(MOVE_SNAKE):
+                snake.update()
+                newSnakeHeadX = snake.body[0]['x']
+                newSnakeHeadY = snake.body[0]['y']
+                newFoodDist = math.sqrt((newSnakeHeadX - food.x)**2 + (newSnakeHeadY - food.y)**2)
+                deltaFoodDist = newFoodDist - snakeFoodDistEuclidean
+                moveCount += 1
+                if (newFoodDist <= snakeFoodDistEuclidean):
+                    g.fitness += 1
+                else:
+                    g.fitness -= 10
+            snake.show(win)
+            if(snake.collision()):
+                if score != 0:
+                    print('FINAL SCORE IS: '+ str(score))
+                g.fitness -= 50
+                break
+
+            if(snake.eat(food,win)):
+                g.fitness += 15
+                score += 1
+                if score == 1 :
+                    moveToFood = moveCount
+                    # foodEatenMove = pygame.time.get_ticks()/1000
+                else:
+                    moveToFood = moveCount - moveToFood
+                food.foodLocation(snake.body)
+                food.showFood(win)
+
+ +

Additionally, here is the definition of the checkDirections function. It returns an array of size 8, corresponding to 8 directions, where each value can be either 0 (neither food nor body found), 1 (food found but no body), 2 (body found but no food), or 3 (both body and food found).

+ +
def checkDirections(self, food, up, down, left, right):
+        '''
+        x+STEP, y-STEP
+        x+STEP, y+STEP
+        x-STEP, y-STEP
+        x-STEP, y+STEP
+        x+STEP, y
+        x, y-STEP
+        x, y+STEP
+        x-STEP, y
+        '''
+        view = []
+        x = self.xdir
+        y = self.ydir
+
+        view.append(self.check(x, y, STEP, -STEP, food.x, food.y))
+        view.append(self.check(x, y, STEP, STEP, food.x, food.y))
+        view.append(self.check(x, y, -STEP, -STEP, food.x, food.y))
+        view.append(self.check(x, y, -STEP, STEP, food.x, food.y))
+        view.append(self.check(x, y, STEP, 0, food.x, food.y))
+        view.append(self.check(x, y, 0, -STEP, food.x, food.y))
+        view.append(self.check(x, y, 0, STEP, food.x, food.y))
+        view.append(self.check(x, y, -STEP, 0, food.x, food.y))
+
+        if up == True:
+            view[6] = -999
+        elif down == True:
+            view[5] = -999
+        elif left == True:
+            view[4] = -999
+        elif right == True:
+            view[7] = -999
+        return view
+
+    def check(self, x, y, xIncrement, yIncrement, foodX, foodY):
+        value = 0
+        foodFound = False
+        bodyFound = False
+        while (x >= 0 and x <= WIN_WIDTH and y >= 0 and y <= WIN_HEIGHT):
+            x += xIncrement
+            y += yIncrement
+            if (not foodFound):
+                if (foodX == x and foodY == y):
+                    foodFound = True
+            if (not bodyFound):
+                for i in range(1, len(self.body)):
+                    if ((x == self.body[i]['x']) and (y == self.body[i]['y'])):
+                        bodyFound = True
+            if (not bodyFound and not foodFound):
+                value = 0
+            elif (not bodyFound and foodFound):
+                value = 1
+            elif (bodyFound and not foodFound):
+                value = 2
+            else:
+                value = 3
+        return value
+
+ +

I am using sigmoid as the activation function. Although I have tried with tanh and relu as well with no luck. Below is the NEAT config file that I am using:

+ +
[NEAT]
+fitness_criterion     = max
+fitness_threshold     = 10000
+pop_size              = 200
+reset_on_extinction   = False
+
+[DefaultGenome]
+# node activation options
+activation_default      = sigmoid
+activation_mutate_rate  = 0.0
+activation_options      = sigmoid
+
+# node aggregation options
+aggregation_default     = sum
+aggregation_mutate_rate = 0.0
+aggregation_options     = sum
+
+# node bias options
+bias_init_mean          = 0.0
+bias_init_stdev         = 1.0
+# was 30 max and -30 for min bias
+bias_max_value          = 100.0
+bias_min_value          = -100.0
+bias_mutate_power       = 0.5
+bias_mutate_rate        = 0.7
+bias_replace_rate       = 0.3
+
+# genome compatibility options
+compatibility_disjoint_coefficient = 1.0
+compatibility_weight_coefficient   = 0.5
+
+# connection add/remove rates
+conn_add_prob           = 0.8
+conn_delete_prob        = 0.56
+
+# connection enable options
+enabled_default         = True
+# below was 0.01
+enabled_mutate_rate     = 0.3
+
+feed_forward            = True
+initial_connection      = full
+
+# node add/remove rates
+node_add_prob           = 0.7
+node_delete_prob        = 0.4
+
+# network parameters
+num_hidden              = 0
+num_inputs              = 24
+num_outputs             = 4
+
+# node response options
+response_init_mean      = 1.0
+response_init_stdev     = 0.0
+response_max_value      = 30.0
+response_min_value      = -30.0
+response_mutate_power   = 0.0
+response_mutate_rate    = 0.0
+response_replace_rate   = 0.0
+
+# connection weight options
+weight_init_mean        = 0.0
+weight_init_stdev       = 1.0
+weight_max_value        = 30
+weight_min_value        = -30
+weight_mutate_power     = 0.5
+weight_mutate_rate      = 0.8
+weight_replace_rate     = 0.1
+
+[DefaultSpeciesSet]
+compatibility_threshold = 3.0
+
+[DefaultStagnation]
+species_fitness_func = max
+max_stagnation       = 20
+species_elitism      = 2
+
+[DefaultReproduction]
+elitism            = 2
+survival_threshold = 0.2
+
+ +

If anyone has any insights or thoughts that could help improve the performance of the snake AI, please let me know.

+",30739,,,,,4/24/2022 7:07,Unable to achieve expected outputs using NEAT for the snake game,,1,0,,,,CC BY-SA 4.0 +16052,1,16065,,10/24/2019 3:30,,3,684,"

Can alpha-beta pruning/ minimax be used for systems apart from games? Like for selecting the right customer for a product, etc. (the typical data science problems)? I have seen people do it, but can't understand how. Can someone help me understand that?

+ +

Can I do something like this: find two criteria on which a customer's decision to buy the product depends, such as gender and age, and then find, for every customer, the probability that they will buy it based on their age and gender?

+ +

For example, if there are 3 customers, their probabilities of buying a product on the basis of their age and gender are: Customer 1 - (20%, 30%), Customer 2 - (30%, 60%), Customer 3 - (40%, 20%). Here each pair represents (probability based on age, probability based on gender), where the probability is the probability of buying the product.

+ +

For minimax, would it be correct if one player (max) tries to select the customer on the basis of gender and the other player (min) on the basis of age, so that one can be max and the other can be min?

+ +

I don't know if this is correct or not, but it's just an idea.

+",30749,,30749,,10/24/2019 17:03,10/24/2019 19:40,Can alpha-beta pruning be used for applications apart from games?,,1,0,,,,CC BY-SA 4.0 +16053,2,,16045,10/24/2019 4:38,,2,,"

By itself, I'm not sure it's possible to know. It's possible the slides were old. Or, the intended purpose was to mention how as sigmoid ranges from 0 to 1. Mostly, it looks like it was intended to bring up gradient descent. But it could also be an entry point to the discussion of other methods such as ReLU. Either that or perhaps some sort of norming function.

+",30750,,,,,10/24/2019 4:38,,,,0,,,,CC BY-SA 4.0 +16054,1,16055,,10/24/2019 8:09,,2,326,"

What are the actual risks to society associated with the widespread use of AI? Outside of the use of AI in a military context.

+ +

I am not talking about accidental risks or unintentional behaviour - eg, a driver-less car accidentally crashing.

+ +

And I am not talking about any transitional effects when we see the use of AI being widespread and popular. For instance I have heard that the widespread use of AI will make many existing jobs redundant, putting many people out of work. However this is true of any major leap forward in technology (for example the motor car killed off the stable/farrier industries). The leaps forward in technology almost always end up creating more jobs than were lost in the long run.

+ +

I am interested in long term risks and adverse effects stemming directly from the widespread use of AI in a non-military sense. Has anybody speculated on the social or psychological impacts that AI will produce once it has become popular?

+",30526,,1671,,10/26/2019 20:10,10/26/2019 20:40,What are the societal risks associated with AI?,,4,0,,,,CC BY-SA 4.0 +16055,2,,16054,10/24/2019 8:36,,4,,"

The biggest risk is algorithmic bias. As more and more decision-making processes are taken on by AI systems, there will be an abdication of responsibility to the computer; people in charge will simply claim the computer did it, and they cannot change it.

+ +

The real problem is that training data for machine learning often contains bias, which is usually ignored or not recognised. There was a story on BBC Radio about someone whose passport photo was rejected by an algorithm because he supposedly had his mouth open. However, he belonged to an ethnic group which has larger lips than Caucasian whites, but the machine could not cope with that.

+ +

There is a whole raft of examples where similar things happen: if you belong to a minority group, machine learning can lead to you being excluded, just because the algorithms will have been trained on training data that was too restricted.

+ +

Update: Here is a link to a BBC News story about the example I mentioned.

+",2193,,2193,,10/24/2019 8:59,10/24/2019 8:59,,,,2,,,,CC BY-SA 4.0 +16056,1,16059,,10/24/2019 9:30,,3,361,"

Is it possible for value-based methods to learn stochastic policies? I'm trying to get a clear picture of the different categories for RL algorithms, and while doing so I started to think about settings where the optimal policy is stochastic (POMDP), and if it is possible to learn this policy for the "traditional" value-based methods

+

If it is possible, what are the most common methods for doing this?

+",30565,,2444,,11/20/2020 2:05,11/20/2020 2:05,Is it possible for value-based methods to learn stochastic policies?,,1,3,,,,CC BY-SA 4.0 +16057,1,,,10/24/2019 9:37,,2,358,"

I've been reading different papers which implement the Transformer for time series forecasting. Most of them claim that the training time is significantly faster than with a normal RNN. From my understanding, when training such a model, you can encode the input in parallel, but the decoding is still sequential unless you're using teacher forcing.

+ +

What makes the transformer faster than RNN in such a setting? Is there something that I am missing?

+",20430,,2444,,11/30/2021 15:45,11/30/2021 15:45,Why is the transformer for time series forecasting faster than RNN?,,0,2,,,,CC BY-SA 4.0 +16058,1,16060,,10/24/2019 10:17,,1,544,"

In his original GAN paper Goodfellow gives a game theoretic perspective for GANs:

+ +

\begin{equation} +\underset{G}{\min}\, \underset{D}{\max}\, V\left(D,G \right) = +\mathbb{E}_{x\sim\mathit{p}_{\textrm{data}}\left(x \right)} \left[\textrm{log}\, D \left(x \right) \right] ++ \mathbb{E}_{z\sim\mathit{p}_{\textrm{z}}\left(z \right)} \left[\textrm{log} \left(1 - D \left(G \left(z \right)\right)\right) \right] +\end{equation}

+ +

I think I understand this formula; at least, it makes sense to me. What I don't understand is that he writes in his NIPS tutorial:

+ +
+

In the minimax game, the discriminator minimizes a cross-entropy, but the generator maximizes the same cross-entropy.

+
+ +

Why does he write that the discriminator minimizes the cross-entropy while the generator maximizes it? Shouldn't it be the other way around? At least that is how I understand $\underset{G}{\min}\, \underset{D}{\max}\, V\left(D,G \right)$.

+ +

I guess this shows that I have a fundamental error in my understanding. Could anyone clarify what I'm missing here?

+",3199,,2444,,5/18/2020 15:20,12/10/2021 16:15,Why does the discriminator minimize the cross-entropy while the generator maximize it?,,1,0,,,,CC BY-SA 4.0 +16059,2,,16056,10/24/2019 11:38,,2,,"
+

Is it possible for value-based methods to learn stochastic policies?

+
+ +

Yes, but only in a limited sense, due to the ways it is possible to generate stochastic policies from a value function. For instance, the simplest exploratory policy used by SARSA and Monte Carlo Control, $\epsilon$-greedy, is stochastic.

+ +

SARSA naturally learns the optimal $\epsilon$-greedy policy for any fixed value of $\epsilon$. That is not quite the same as learning the optimal policy, but might still be useful in a non-stationary environment where exploration is always required and the algorithm is forever learning online.

+ +

You can also use other functions to generate stochastic policies from value functions. For instance, sampling from the Boltzmann distribution over action values using a temperature parameter to decide relative priorities between actions with different action values.
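As a small illustration, both of these value-to-policy conversions only take a few lines (the Q-values below are arbitrary numbers for a single state):

import numpy as np

rng = np.random.default_rng(0)
q_values = np.array([1.0, 1.5, 0.2])   # action values for the current state

def epsilon_greedy(q, epsilon=0.1):
    if rng.random() < epsilon:
        return int(rng.integers(len(q)))   # explore: uniformly random action
    return int(np.argmax(q))               # exploit: greedy action

def boltzmann(q, temperature=0.5):
    prefs = np.exp((q - q.max()) / temperature)   # subtract the max for numerical stability
    probs = prefs / prefs.sum()
    return int(rng.choice(len(q), p=probs))

print(epsilon_greedy(q_values), boltzmann(q_values))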

+ +

However, all these approaches share the problem that they cannot converge towards an optimal stochastic policy. The policies are useful for mangaging exploration, but will only be optimal in the limited sense of optimal given the fixed policy generator or by chance. There is no way for a purely value-based method to learn a conversion from values to an optimal balance of probabilities for action choice.

+ +

For strict MDPs this is not an issue. If the MDP has the Markov property in the state representation, then there will always be a deterministic optimal policy, and value-based methods can converge towards it. That may include reducing $\epsilon$ in $\epsilon$-greedy approaches or the temperature in Gibbs sampling, when using an on-policy method.

+ +
+

I started to think about settings where the optimal policy is stochastic(POMDP), and if it is possible to learn this policy for the ""traditional"" value-based methods

+
+ +

It isn't.

+ +

To resolve this you need to add some kind of policy function and a mechanism to search for better policies directly by modifying that function. Policy Gradient methods are one approach, but you could include genetic algorithms or other search methods too under this idea.

+ +

It may still be useful to use a value-based method as part of a policy search, to help evaluate changes to the policy. This is how Actor-Critic works.

+",1847,,,,,10/24/2019 11:38,,,,0,,,,CC BY-SA 4.0 +16060,2,,16058,10/24/2019 13:22,,1,,"

Your mistake is that you think that the referenced $V(D,G)$ is itself the definition of the cross-entropy. In fact, the cross-entropy is defined as the negative of $V(D,G)$. Hence, if you take into account the minus sign in front of $V(D,G)$ (i.e. $-V(D,G)$), the sentence makes sense.
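To spell it out: the discriminator's (binary) cross-entropy loss, which it minimizes, is exactly the negative of the value function from the question,
$$L_D = -\mathbb{E}_{x\sim\mathit{p}_{\textrm{data}}\left(x\right)}\left[\log D(x)\right] - \mathbb{E}_{z\sim\mathit{p}_{z}\left(z\right)}\left[\log\left(1 - D(G(z))\right)\right] = -V(D,G),$$
so minimizing $L_D$ is the same as maximizing $V(D,G)$. The generator, playing $\min_G$, minimizes $V(D,G)$, which is the same as maximizing this cross-entropy $-V(D,G)$, exactly as the tutorial says.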

+",4446,,2444,,12/10/2021 16:15,12/10/2021 16:15,,,,0,,,,CC BY-SA 4.0 +16061,1,,,10/24/2019 15:25,,6,1521,"

I recently read a new paper (late 2019) about a one-shot object detector called CenterNet. Apart from this, I'm using Yolo (V3) one-shot detector, and what surprised me is the close similarity between Yolo V1 and CenterNet.

+ +

First, both frameworks treat object detection as a regression problem, each of them outputs a tensor that can be seen as a grid with cells (below is an example of an output grid).

+ +

+ +

Each cell in this grid predicts an object class, a box offset relative to the cell's position and a box size. The only major difference between Yolo V1 and CenterNet is that Yolo also predicts an object confidence score, that is represented in CenterNet by the class score. Yolo also predicts 2 boxes.

+ +

In brief, the tensor at one cell position is Class + B x (Conf + Size + Off) for Yolo V1 and Class + Size + Off for CenterNet.

+ +

The training strategy is quite similar too. Only the cell containing the center of a ground truth is responsible for that detection and thus affects the loss. Cells near the ground truth's center (based on the distance for CenterNet and on IoU for Darknet) have a reduced penalty in the loss (a focal loss for CenterNet vs. a tuned hyperparameter for Yolo).

+ +

The loss functions have near the same structure (see above) except that L1 is preferred in CenterNet while Yolo uses L2, among other subtleties.

+ +

+ +

My point is not that Yolo V1 and CenterNet are the same — they are not — but they are far closer than it appears at first glance.

+ +

The problem is that recent papers like CenterNet (CornerNet, ExtremeNet, Triplet CenterNet, MatrixNet) all claim to be ""keypoint-based detectors"" while they are not so different from regular ""anchor-based"" detectors (which are, in fact, preconditioned regressors).

+ +

Instead, I think that the biggest difference between Yolo and CenterNet is the backbone, which has a bigger output resolution for CenterNet (64x64), while Darknet has only 7 or 8.

+ +
+ +

My Question is: do you see a major difference between the two concepts that I may have missed and that could explain the performance gap? I understand that new backbones, new loss functions and better resolutions can improve the accuracy but is there a structural difference between the two approaches?

+",19859,,2444,,1/29/2021 0:02,1/29/2021 0:02,What are the differences between Yolo v1 and CenterNet?,,0,2,,,,CC BY-SA 4.0 +16062,1,,,10/24/2019 16:41,,4,419,"

I know this is a very general question, but I'm trying to illustrate this topic to people who are not from the field, and also my understanding is very limited since I'm just a second-year physics student with a basic understanding of R and Python. My point is, I'm not trying to say anything wrong here.

+ +

So according to Wikipedia, after the second AI winter, which happened because expert systems didn't match expectations of the general public and of scientists, AI made a recovery ""due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards"".

+ +

What I'm trying to understand now is whether the rise of AI is rather connected to greater computational power available to the public or whether there have been fundamental mathematical advances that I'm not aware of. If the latter is the case (because according to my understanding, the mathematical models behind neural networks are rooted in the 70s and 80s), I would appreciate examples.

+ +

Again, please don't be offended by the general character of this question, I know it is probably really hard to answer correctly, however, I'm just trying to give a short historic introduction to the field to a lay audience and wanted to be clear in that regard.

+",30765,,2444,,10/24/2019 16:52,3/23/2020 15:27,What happened after the second AI winter?,,1,4,,,,CC BY-SA 4.0 +16063,5,,,10/24/2019 16:51,,0,,,2444,,2444,,10/24/2019 16:51,10/24/2019 16:51,,,,0,,,,CC BY-SA 4.0 +16064,4,,,10/24/2019 16:51,,0,,"For questions related to AI winters, which are periods of reduced funding and interest in artificial intelligence research, due to unmet expectations after a period of hype. There have been at least two major AI winters in 1974-1980 and 1987-1993.",2444,,2444,,10/24/2019 16:51,10/24/2019 16:51,,,,0,,,,CC BY-SA 4.0 +16065,2,,16052,10/24/2019 17:05,,1,,"

Thinking about this more, the answer is in fact yes, but not for the application you mention.

+ +

You cannot use alpha-beta pruning to learn a model to predict customer outcomes, because it is only useful for domains where you are concerned about an adversary. In finding a customer model, there is no reason to worry about someone coming in and forcing you to make bad decisions about the optimization of the model. Consequently, there is no reason to use minimax search, and thus, to use alpha-beta pruning.

+ +

There are applications other than (video) games where you could use these techniques though. For example, there are security games. In these ""games"" we want to use AI to find a strategy to protect an airport. It is reasonable to try and design our model under the assumption that someone else wants to break it. You could use Alpha-Beta pruning here (although in practice, more sophisticated algorithms are used).

+",16909,,16909,,10/24/2019 19:40,10/24/2019 19:40,,,,5,,,,CC BY-SA 4.0 +16066,2,,16062,10/24/2019 18:04,,1,,"

My reading of AI development (somewhat simplified here) is that the availability of large data sets, increased computing power, and the introduction of new machine learning algorithms (which require large data sets and massive computing power) contributed to the resurgence of AI.

+ +

However, as witnessed on this site, there has been a paradigm shift within the field: while previous approaches to AI were largely symbolic, with a bit of connectionism thrown in, the current AI mainstream is purely based on statistical models and machine learning.

+ +

Massive (distributed) computing power on its own is not sufficient, as there was a bottleneck with domain modelling/knowledge acquisition in traditional symbolic approaches. New algorithms (basically further developments of neural networks) on their own would also not be sufficient without massive amounts of training data. Only the combination of all three elements enabled the AI resurgence.

+",2193,,,,,10/24/2019 18:04,,,,0,,,,CC BY-SA 4.0 +16069,1,16070,,10/24/2019 23:21,,3,221,"

In Chapter 1 of the book Reinforcement Learning: An Introduction (2nd edition) by Richard S. Sutton and Andrew G. Barto, there is the statement ""Exploratory moves do not result in any learning"".

+ +

This sentence is in Figure 1.1.

+ +
+

Figure 1.1: A sequence of tic-tac-toe moves. The solid black lines represent the moves taken during a game; the dashed lines represent moves that we (our reinforcement learning player) considered but did not make. Our second move was an exploratory move, meaning that it was taken even though another sibling move, the one leading to e*, was ranked higher. Exploratory moves do not result in any learning, but each of our other moves does, causing updates as suggested by the red arrows in which estimated values are moved up the tree from later nodes to earlier nodes as detailed in the text.

+
+ +

It confuses me. In my understanding, exploration should contribute to learning in almost all RL algorithms. So, why does the book state ""Exploratory moves do not result in any learning"" in this case?

+",30044,,,,,10/25/2019 11:46,"Why ""Exploratory moves do not result in any learning""?",,1,0,,,,CC BY-SA 4.0 +16070,2,,16069,10/25/2019 0:14,,4,,"

I believe this is a pedagogical decision. Because the sentence occurs in the first chapter of the book, I think the authors are trying to avoid the objection that a neophyte might make: learning from random movements seems like it will cause you to learn strange behaviors.

+ +

Certainly, the statement is inaccurate. We need only reach page 26 to see a counterexample: The Q-learning equation (2.1) sums over all actions, not just the greedy ones. This also applies to the temporal difference learning method specified in Chapter 6, which is the method Figure 1.1 is discussing.

+",16909,,16909,,10/25/2019 11:46,10/25/2019 11:46,,,,2,,,,CC BY-SA 4.0 +16072,1,,,10/25/2019 5:46,,1,20,"

I am interested in the node classification task for graph data. So far, I've tried it with the Cora dataset, but it is an undirected graph and has word attributes as features. I want to extend this task to a time-varying directed graph. Does anybody know about this kind of dataset?

+",26886,,,,,10/25/2019 5:46,Is there any time-varying directed graph dataset?,,0,0,,,,CC BY-SA 4.0 +16073,1,16078,,10/25/2019 8:32,,2,1077,"

I am pretty new to Artificial Intelligence programming; however, I do understand the basic concept. I have an idea in my mind:

+ +

Import a JPEG image and convert it into a 2D array (x, y positions plus r, g, b values). Then create a second array with the same (x, y) positions but with all RGB values set to 0, 0, 0. Now I want to build an AI layer which will try to lower the error between the arrays until they are equal (i.e. the RGB values in the second array equal those of the first array, an error of 0). I would prefer to do it in Java. Any suggestions for libraries or examples that can help me get started? Thanks for any help.

+",30781,,,,,10/25/2019 13:19,Generate Image with Artificial intelligence,,1,5,,12/30/2021 11:37,,CC BY-SA 4.0 +16074,2,,6908,10/25/2019 8:40,,2,,"

You can take a look at this paper, which solves your problem with a neural network. You can use the PyTorch implementation of the SATNet layer: SATNet layer API. In this supervised setup, the layer also learns the boolean constraints of your model. You can find an example of a sudoku solver in the GitHub repo.

+",8912,,,,,10/25/2019 8:40,,,,0,,,,CC BY-SA 4.0 +16075,1,,,10/25/2019 11:13,,1,42,"

I'm trying to run the Deepmind Spriteworld demo described on the project's GitHub page, but I'm not finding run_demo.py in the distribution, and the closest-sounding file, demo_ui.py, doesn't launch a UI when run (tried both on Linux and Windows).

+ +

How should the Deepmind Spriteworld demo UI be launched?

+",30787,,30787,,10/25/2019 15:34,11/24/2019 16:00,Deepmind Spriteworld run_demo.py not found,,1,1,,1/10/2020 20:12,,CC BY-SA 4.0 +16076,1,,,10/25/2019 11:55,,2,488,"

I'm trying to extract some particular information from an image (PNG).

+ +

I tried to extract the text using the below code

+ +
import cv2
+import numpy as np
+import pytesseract
+import os
+from PIL import Image
+import sys
+
+def get_string(img_path):
+    # Read image with opencv
+    img = cv2.imread(img_path)
+
+    # Convert to gray
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+    # Apply dilation and erosion to remove some noise
+    kernel = np.ones((1, 1), np.uint8)
+    img = cv2.dilate(img, kernel, iterations=1)
+    img = cv2.erode(img, kernel, iterations=1)
+
+    # Write the pre-processed image to disk so it can be passed to Tesseract
+    cv2.imwrite(""thres.png"", img)
+    # Recognize text with Tesseract on the pre-processed image
+    result = pytesseract.image_to_string(Image.open(""thres.png""))
+    os.remove(""thres.png"")
+
+    return result
+
+if __name__ == '__main__':
+    from sys import argv
+
+    if len(argv)<2:
+        print(""Usage: python image-to-text.py relative-filepath"")
+    else:
+        print('--- Start recognize text from image ---')
+        for i in range(1,len(argv)):
+            print(argv[i])
+            print(get_string(argv[i]))
+            print()
+            print()
+
+        print('------ Done -------')
+
+ +

But I want to extract data from particular fields.

+ +

Such as

+ +
+
 a) INVOICE NO.
+ b) CUSTOMER NO.
+ c) SUBTOTAL
+ d) TOTAL
+ e) DATE
+
+
+ +

How can I extract the required information from the below image ""invoice""?
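
+ +

For example, I was imagining something like running simple regular expressions over the OCR output, but I am not sure this is a robust approach (the patterns below are just guesses based on the labels printed on my invoice and would need to be adapted to the exact text Tesseract produces):

import re

def extract_fields(ocr_text):
    patterns = {
        'invoice_no': r'INVOICE\s*NO\.?\s*[:#]?\s*(\S+)',
        'customer_no': r'CUSTOMER\s*NO\.?\s*[:#]?\s*(\S+)',
        'subtotal': r'SUBTOTAL\s*[:$]?\s*([\d.,]+)',
        'total': r'\bTOTAL\b\s*[:$]?\s*([\d.,]+)',
        'date': r'DATE\s*:?\s*([\d/.-]+)',
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text, flags=re.IGNORECASE)
        # None means the label was not found in the recognized text
        fields[name] = match.group(1) if match else None
    return fields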

+ +

Please find the invoice image below.

+ +

+",30725,,30725,,11/15/2019 10:47,12/15/2019 11:01,How to Extract Information from the Image,,0,1,,,,CC BY-SA 4.0 +16078,2,,16073,10/25/2019 13:19,,0,,"

For recreating an image exactly like the original, you can use an autoencoder. This basically uses neural network layers to encode the image's raw pixel values into a much smaller vector of floats. Afterwards, another set of layers expands the dimensions back to the original image. The method does not require labels, as it only refers to the image itself while encoding it into a vector of features. For implementing this in Java, there are not a lot of resources. However, you can check this library out: https://deeplearning4j.org/ For the implementation, see this: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/unsupervised/variational/VariationalAutoEncoderExample.java For the method, you can see this Python tutorial and implement it in Java: https://towardsdatascience.com/autoencoders-in-keras-c1f57b9a2fd7
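
+ +

In case it helps to see the idea in code, here is a rough autoencoder sketch in Python with Keras (the question asks for Java, but the structure maps directly onto the DL4J example linked above; the layer sizes and data are placeholders):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 100 random 'images' of 28x28 pixels, flattened to vectors of length 784
x = np.random.rand(100, 784).astype('float32')

# The encoder compresses 784 pixels down to a 32-dimensional feature vector,
# and the decoder tries to reconstruct the original pixels from that vector
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation='relu')(inputs)
decoded = layers.Dense(784, activation='sigmoid')(encoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# The target is the input itself: no labels are needed
autoencoder.fit(x, x, epochs=5, batch_size=16)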

+ +

For generating completely new images, you can try a GAN (Generative Adversarial Network). This generates completely new images from random noise. The noise is passed through a generator, which is a CNN (convolutional neural network), to get a result image. The result image is then fed to a discriminator (also a CNN) to classify whether that image is fake or real. The generator and discriminator compete and slowly get better. For a Java implementation, see this: https://github.com/wmeddie/dl4j-gans

+ +

Hope I can help you and have a nice day!

+",23713,,,,,10/25/2019 13:19,,,,0,,,,CC BY-SA 4.0 +16080,1,,,10/25/2019 13:30,,4,229,"

I recently started looking for networks that focus on image segmentation tasks related to biomedical applications. I could not miss the publication U-Net: Convolutional Networks for Biomedical Image Segmentation (2015) by Ronneberger, Fischer, and Brox. However, as deep learning is a fast-growing field and the article was published more than 4 years ago, I was wondering if anyone knows other algorithms that yield better results for image segmentation tasks? And if so, do they also use a U-shape architecture (i.e. contraction path then expansion path with up-conv)?

+",30792,,2444,,6/13/2020 0:07,6/13/2020 0:07,What are the best algorithms for image segmentation tasks?,,2,2,,,,CC BY-SA 4.0 +16081,2,,16075,10/25/2019 15:31,,0,,"

I posted the question to the project's issue page, where the recommendation was to manually download and run run_demo.py. That worked for me.

+",30787,,,,,10/25/2019 15:31,,,,0,,,,CC BY-SA 4.0 +16082,2,,15946,10/25/2019 15:41,,2,,"

Off the top of my head, roughly in order of priority (excluding math):

+ +

Practical/applied CS: machine learning, artificial intelligence (incl. symbolic AI), data mining, algorithms, data structures

+ +

Theoretical CS: complexity theory

+ +

Programming: Python

+ +

Logic is highly relevant for symbolic AI but not so much for sub-symbolic approaches like ML.

+ +

For all topics mentioned in the first two categories you can find free online lectures from different universities.

+",30789,,,,,10/25/2019 15:41,,,,0,,,,CC BY-SA 4.0 +16083,2,,15802,10/25/2019 15:51,,2,,"

As originally conceived in James Baker's 1989 paper Reducing bias and inefficiency in the selection algorithm, Stochastic Universal Sampling accepts a population containing $N$ individuals, and a number of parents to sample, denoted $n$. Assuming fitness values are normalized so that they sum up to $N$, at each step, a new pointer is placed a step equal in size to the fraction $\frac{N}{n}$ ahead of the location of the previous pointer (and the location of the first pointer is set to a random value in the range [0, $\frac{N}{n}$) ). So, for example, if you want to sample 6 individuals from a population of size 10, you would make steps of size $\frac{10}{6}$, spacing your pointers at even intervals of $\frac{10}{6}$.

+ +

Modern implementations, like the one on Wikipedia, sometimes do not document this fact clearly, although it is apparent what is intended if you already understand the method. They often write the step size as $\frac{F}{n}$, where $F$ is the total fitness of the population, without discussing its relation to the size of the population. The extra normalization step is actually not essential, so modern implementations generally seem to skip it.

+ +

So, in summary, the step size $\frac{F}{n}$ is used if the fitness values of a population sum to $F$ and you want to select $n$ individuals. If you want to select more individuals, use a higher value for $n$. If you want to select fewer, use a lower value for $n$, which updates your step size accordingly.
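
+ +

A minimal sketch of the procedure in Python may make this concrete (variable names are mine, not Baker's; fitness values are assumed to be non-negative):

import random

def stochastic_universal_sampling(fitnesses, n):
    # Place n equally spaced pointers over the cumulative fitness 'wheel'
    total = sum(fitnesses)
    step = total / n                    # this is the F/n step size discussed above
    start = random.uniform(0, step)     # a single random offset in [0, F/n)
    pointers = [start + i * step for i in range(n)]

    selected = []
    cumulative = 0.0
    i = 0
    for p in pointers:
        # Advance until the pointer falls inside individual i's fitness segment
        while cumulative + fitnesses[i] < p:
            cumulative += fitnesses[i]
            i += 1
        selected.append(i)              # index of the selected individual
    return selected

# Example: select 4 parents from a population of 5 individuals
print(stochastic_universal_sampling([2.0, 1.0, 3.0, 0.5, 3.5], n=4))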

+ +

Values of this parameter of $\frac{1}{4}$ or $\frac{1}{6}$ suggest that the implementation may be normalizing the sum of fitness values to $N$, and then using the parameter as a multiplicative factor automatically. This is a fairly reasonable design. You could interpret these values as ""Select $\frac{1}{4}$ of the population"" and ""Select $\frac{1}{6}$ of the population"".

+ +

Note that this sort of bumps your question up a level: how do you pick the fraction of the population to keep? That question doesn't have a clean answer, and picking it is generally an art developed by experts through practice. It is very closely related to the exploration/exploitation tradeoff.

+ +

Some ways you might pick $n$:

+ +
    +
  1. Use a fixed value, for instance, keep half the population at each step. The exact proportion you will want to pick is not something you can know in advance. Expert practitioners can make effective guesses. Others will need to just try out different values using a technique like cross validation, and pick whichever one seems to work best.
  2. +
  3. You can use a value that changes over time. A common strategy for this would be to use one of the temperature schedules developed in the simulated annealing literature, and keep a portion of the population that was inversely proportionate to the temperature. That is, early on, you'd use a large $n$, and keep most of the population around (probably with mutations). Later, you'd use a small $n$ and keep only the best individuals around.
  4. +
  5. You could use a value of $n$ that changes in response to the fitness of the population. This is much like the adaptive learning rates used in some algorithms to train neural networks (most notably: the ADAM optimizer). When fitness levels have a lot of variety, use low values of $n$ to encourage more exploitation. When fitness levels are all within a narrow band, use high values of $n$ to encourage more exploration.
  6. +
+",16909,,16909,,1/15/2020 15:42,1/15/2020 15:42,,,,2,,,,CC BY-SA 4.0 +16085,2,,15946,10/25/2019 18:14,,5,,"

I worked as a professor for a time, and often advised students on this. For a PhD in machine learning, I think the ideal background is:

+ +
    +
  1. Core CS Courses + +
      +
    • Programming (typically 3-4 courses). Language choice is not highly important, but Python, C++, Java, and perhaps JavaScript, are reasonable picks, if only because of their prevalence.
    • +
    • Core topics: data structures, algorithms, operating systems, databases
    • +
    • numerical linear algebra or numerical methods
    • +
    • advanced algorithm design and analysis
    • +
  2. +
+ +

Together these will allow you to read and write the code that even highly optimized versions of ML algorithms are written in, and to understand what might be going wrong within them.

+ +

2. AI & ML courses, usually offered through a CS department

+ +
    +
  • a broad survey course in AI (like AI:AMA by Russell & Norvig), usually offered to senior undergraduates.
  • +
  • A course in applied machine learning or data mining.
  • +
+ +

You may also take other AI courses, but they are not as common to see offered to undergraduates, so many students wait until graduate school:

+ +
    +
  • reinforcement learning
  • +
  • soft computing
  • +
  • computational learning theory
  • +
  • Bayesian Methods
  • +
  • Deep Learning
  • +
  • Multiagent Systems
  • +
  • Information Retrieval
  • +
  • Natural Language Processing
  • +
  • Computer Vision
  • +
  • Robotics
  • +
+ +

Together, these will give you the broadest possible background in AI & ML. These can allow you to find new applications of ML, or to pull AI techniques from one area into another as you need.

+ +

3. Statistics courses

+ +
    +
  • a 1 or 2 term course in probability theory, ideally a version that requires and uses calculus.
  • +
  • at minimum a course in statistical hypothesis testing.
  • +
+ +

Much stronger would be to also take courses in:

+ +
    +
  • regression
  • +
  • generalized linear models
  • +
  • experiment design
  • +
  • causal inference
  • +
  • Bayesian methods
  • +
+ +

These courses allow you to reason formally and comfortably about uncertainty. They also give you the correct framework for answering questions about whether your ML algorithm is working, and what patterns an ML algorithm uncovers mean.

+ +

4. Mathematics courses

+ +
    +
  • 3 semesters of calculus, going at least as far as multi-variate/vector calculus.
  • +
  • optionally, a more advanced course that builds on calculus, like real analysis, but only to reinforce calculus concepts.
  • +
  • at least 1, and preferably 2, courses in linear algebra
  • +
  • at least 1, and preferably 2, courses in discrete mathematics.
  • +
  • ideally, something like Knuth et al.'s Concrete Mathematics
  • +
  • ideally a course in advanced optimization techniques
  • +
  • optionally, courses in logic, but be aware that this is almost a fringe area in AI now, and essentially irrelevant to a PhD in machine learning. The parts you need are usually covered in a broad survey AI course.
  • +
+ +

These courses give you the basic mathematical fluency to understand most machine learning algorithms well.

+",16909,,16909,,10/25/2019 20:05,10/25/2019 20:05,,,,2,,,,CC BY-SA 4.0 +16086,2,,16036,10/25/2019 23:48,,2,,"

Historically, the non-ML approach would be an expert system. This is typically a rules-based decision system, falling under the umbrella of symbolic AI.

+ +

These systems can have strong utility in limited contexts, but are generally ""brittle"" in that parameters not previously defined or accounted for will produce no result or weak utility. Because the rules of a game are fully definable, the main concern is utility, which relates to the degree to which the game has been solved.

+ +

Informing a heuristic system in this case requires analysis of the game in the sense of game theory and combinatorial game theory, since Catan involves both imperfect information and combinatorial elements. The complexity is high indeed, not only due to imperfect information, branching factors, stochasticity and more than 2 players, but also because, as you note, the game board itself has a very high number of potential configurations, so solving the game is presumed to be extremely difficult to impossible. (Possibly NEXPTIME if finite and undecidable otherwise.)

+ +

The paper Game strategies for The Settlers of Catan suggests that the game tree for Catan is not surveyable because the options for trade negotiation in natural language aren't bounded:

+ +
+

One response to this is to develop a symbolic model consisting of heuristic strategies for playing the game. Developing + such models potentially has two advantages. First, a symbolic + model can in principle lead to an interpretable model of human + expert play ... Second, a symbolic model can provide + a prior distribution over which next move is likely to be + optimal...

+
+ +

The paper mentions this second part in relation to machine learning, where ""the posterior distribution over optimal actions acquired through training improves on the baseline prior distribution.""

+ +

Especially where the game is unsolved and intractable, machine learning has demonstrated strong utility for an increasing number of games, so it is likely to be an optimal component of truly strong play. However, such a system can be a combination of ML and domain-specific knowledge, such as in informed search.

+ +

The paper Optimizing UCT for Settlers of Catan goes into this in detail, and also provides references to prior work.

+ +

If your primary requirement is strong utility, some form of machine learning is likely optimal. But it can be fun to attempt to solve games and cobble together sets of heuristics.

+",1671,,1671,,10/26/2019 19:49,10/26/2019 19:49,,,,2,,,,CC BY-SA 4.0 +16087,1,16114,,10/26/2019 6:51,,10,1020,"

According to the paper SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation, the smoothed Bellman operator is a way to dodge the double sample problem. Can someone explain to me what the double sample problem is and how SBEED solves it?

+",30632,,1671,,10/27/2019 23:03,5/28/2021 13:38,What is the double sample problem in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 +16088,1,,,10/26/2019 10:08,,4,501,"

I understand fuzzy logic is a variant of formal logic where, instead of just 0 or 1, a given sentence may have a truth value in the [0..1] interval. Also, I understand that logical probability (objective bayesian) understands probability as an extension of logic, where uncertainity is taken into account. To me they sound rather similar (they both extend formal logic by modelling truth as a continuos interval between 0 and 1).

+ +

My question is, what is the relationship between these two concepts?. What is the difference, and what are the differences in AI approaches based upon these two formal systems?

+",23527,,23527,,6/10/2020 7:21,6/10/2020 7:21,What is the relationship between fuzzy logic and objective bayesian probability?,,1,0,,,,CC BY-SA 4.0 +16089,5,,,10/26/2019 10:17,,0,,"

Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief. The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses.

+",23527,,23527,,10/26/2019 19:46,10/26/2019 19:46,,,,0,,,,CC BY-SA 4.0 +16090,4,,,10/26/2019 10:17,,0,,Questions in this tag should be about the Bayesian approach of probability theory and its relevance to AI-related issues.,23527,,23527,,10/26/2019 19:46,10/26/2019 19:46,,,,0,,,,CC BY-SA 4.0 +16091,2,,16036,10/26/2019 11:03,,0,,"

From the way you have phrased your question, one can derive a couple of strong assumptions which simplify the problem tremendously and make it feasible:

+ +
    +
  1. We are not looking for an agent able to play the game, but only for an evaluation of settlement options (no other agents to be considered)
  2. +
  3. The evaluation of settlement options is static (i.e. does not change over time) and is independent of other settlements
  4. +
+ +

From that two simple ideas come to my mind:

+ +

1. The ML approach: Look at historical game data and see which settlement options led to a win. So basically look at tuples of (X, y), with X being something like (W8, C2, O6), meaning that the settlement gives access to wood with an 8, clay with a 2 and ore with a 6, and y indicating a win or loss. To make it a bit dynamic, you could differentiate between initial settlements (placed at the beginning) and the ones built during the game. So for each of these two categories you would derive basically a score for the possible settlements.

+ +

If you can compute all the possible combinations you will not even need ML since you can simply run the math once and then look it up. Might be doable in this case as the assumptions mentioned above simplify the problem a lot (compared to fully ""solving"" the game). Thinking through the possible combinations for a given settlement location (selecting 3 fields with A possible resources and B possible numbers) will quickly give you an idea about that.
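
+ +

To illustrate the scoring idea, here is a rough sketch (only the dice probabilities are fixed by the rules; the resource weights are arbitrary placeholders you would want to tune):

def settlement_score(tiles, resource_weights=None):
    # tiles: list of (resource, number) pairs adjacent to the settlement spot,
    # e.g. [('wood', 8), ('clay', 2), ('ore', 6)]
    if resource_weights is None:
        resource_weights = {'wood': 1, 'clay': 1, 'ore': 1, 'grain': 1, 'wool': 1}
    score = 0.0
    for resource, number in tiles:
        # With two dice there are 6 - |7 - number| ways (out of 36) to roll a number
        ways = 6 - abs(7 - number)
        score += resource_weights.get(resource, 0) * ways / 36.0
    return score

# Example: the (W8, C2, O6) spot mentioned above, with equal weights
print(settlement_score([('wood', 8), ('clay', 2), ('ore', 6)]))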

+ +

2. The classic symbolic approach: What comes to my mind right away is linear programming, as it offers a convenient way to model the strategic aspects you have mentioned. You could develop a target function to maximize, using scores for different resources and numbers (e.g. you could give clay higher importance than wool). Besides that, constraints can capture additional aspects of game strategies like ""always make sure to have access to clay"" or ""do not settle where the 3 resources are the same"", etc.

+ +

My very first idea to model this is using decision variables like X_(i,j) with X being 0 or 1, i representing the resources out of {clay, wood, ..., desert} (side note: do not forget the water and different ports here) and j modelling the numbers out of {2,...12}. The constraints would need to model the fact that you need to select 3 of those X_(i,j) for every settlement.

+ +

If you want to calculate this for a given game, you would need to feed the model the possible settlement options based on the layout of that specific game. Then run the optimization and it gives you the best settlement option (i.e. the 3 feasible X_(i,j) maximizing your goal function).

+ +

By definition, you need to bring in game knowledge for this approach. And talking to someone who is really good at the game would probably help to understand what matters.

+",30789,,,,,10/26/2019 11:03,,,,0,,,,CC BY-SA 4.0 +16092,2,,16088,10/26/2019 13:29,,6,,"

This was a somewhat hotly debated question in the 1980s. The debate was more-or-less ended with papers like Cheeseman's In Defense of Probability.

+ +

The short answer is that Fuzzy Logic does not just assign a continuous value to sentences, what it does is assign degrees of membership in different fuzzy sets. These degrees of membership range between 0 and 1.

+ +

In contrast, probability says: Among the set of values this variable could take on, what fraction of them are in a certain set? This fraction also ranges between 0 and 1.

+ +

This might seem like a semantic distinction, but it has deep implications if you try to use these systems to reason about uncertainty.

+ +

For example, consider the question ""Will it rain tomorrow?"". A probabilistic approach would try to determine the fraction of days like tomorrow that have had rain. A fuzzy logic approach will try to determine whether tomorrow is like a rainy day. The distinction becomes obvious if we then ask whether it will not rain tomorrow. The probabilistic approach will try to find the fraction of days like tomorrow where it did not rain. The chance of it raining or not raining will sum to 1. The fuzzy logic approach will try to determine whether tomorrow is like a non-rainy day. Note that tomorrow may be both like a stereotypical rainy day and not like one. There is no firm requirement that these sets be disjoint. This reflects Cheeseman's critique of Fuzzy Sets: they implicitly reject the additive axiom of probability theory, which (as denoted by the name axiom) is something that is fairly unreasonable to reject.
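
+ +

To put illustrative numbers on that (these values are made up; the only point is the additivity constraint):

# Probabilistic view: the two outcomes are exhaustive and exclusive, so they must sum to 1
p_rain = 0.7
p_no_rain = 1.0 - p_rain            # forced to be 0.3

# Fuzzy view: degrees of membership in two fuzzy sets, with no additivity requirement
membership_rainy_day = 0.7          # tomorrow is quite like a rainy day
membership_non_rainy_day = 0.6      # ... and also fairly like a non-rainy day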

+ +

While there are various approaches to make memberships in fuzzy sets more probabilistic in nature, that's the root distinction.

+ +

I think the dominant modern view is that fuzzy sets are a great tool when you need to reason about membership in a fuzzy concept. Whether rice is ""done"" or not isn't a question of something being true or false. It's a degree of membership in the set ""done"". On the other hand, whether the object in front of a self-driving car is a person or not is either true or false. It is not necessarily a good idea to reason about this in terms of whether the object is partially in the person set or plastic bag set.

+",16909,,,,,10/26/2019 13:29,,,,1,,,,CC BY-SA 4.0 +16093,2,,16054,10/26/2019 14:49,,2,,"

One risk that’s already realized: large online vendors think they have implemented artificial intelligence in their “help” pages and therefore they can (try to) make it impossible to get to someone who can actually think. And since the artificial stupidity (AS) usually feeds the customer articles completely unrelated to the issue, anyone sufficiently persistent to pursue it is extremely pissed off at the company before (if ever) the issue is resolved. And because far too many people passively accept this abuse, the companies have no incentive to be more reasonable. In other words, “AS” is reducing our expectations for customer service.

+ +

Another is the JavaScript intended to prevent invalid names, phone numbers, and email addresses in web forms which due to bugs or obsolescence rejects legitimate inputs.

+",30818,,,,,10/26/2019 14:49,,,,0,,,,CC BY-SA 4.0 +16094,2,,16054,10/26/2019 16:21,,1,,"

IMHO the greatest risk is that AI can make people lazy. If you can ask an AI for an answer to any problem, what's your motivation to figure out how to figure out the answer for yourself? I have run into a lot of young people who can't add or multiply two three-digit numbers without using a calculator. When it's possible to dump a huge mass of data into an AI, and the AI tells you the structures it finds in the data without explaining how it finds the structures so you can do it yourself, the AI wins and you lose.

+",28348,,,,,10/26/2019 16:21,,,,5,,,,CC BY-SA 4.0 +16095,2,,16054,10/26/2019 20:21,,1,,"
    +
  • Offloading of responsibility may be the single greatest danger.
  • +
+ +

Where algorithmic bias may be the core issue of Machine Learning, it can be identified and mitigated.

+ +

Transferring responsibility to a robot or algorithm requires an intentional choice with moral dimension. As the scholar Joanna Bryson put it:

+ +
+

In humans consciousness and ethics are associated with our morality, but that is because of our evolutionary and cultural history. In artefacts, moral obligation is not tied by either logical or mechanical necessity to awareness or feelings. This is one of the reasons we shouldn't make AI responsible: we can't punish it in a meaningful way. +
Source: AI Ethics: Artificial Intelligence, Robots, and Society

+
+ +

In a malicious sense, transferring agency to an automaton that may do something harmful which benefits me allows me to say ""I didn't make the decision and have no responsibility for the outcome."" (It seems to me that companies are doing this more and more.)

+ +

There was a very good short story on the subject, Unchained: A story of love, loss, and blockchain, in which automated taxis develop novel strategies that have an unintended moral dimension in regard to humans.

+",1671,,1671,,10/26/2019 20:40,10/26/2019 20:40,,,,0,,,,CC BY-SA 4.0 +16098,1,,,10/26/2019 23:47,,2,40,"

I'm working on finding out the motion vectors of objects in images. The inputs are the images of objects in motion. The outputs of the neural network are the object name, the direction of the object's motion vector and a prediction of the next vector change.

+ +

There are different 3D ConvNets I'm considering as a baseline, like ReMotENet. I would appreciate it if you could recommend any interesting papers in the MoCap domain and any existing neural networks performing a similar task.

+",29928,,,,,10/26/2019 23:47,Which neural network algorithms can be used to map motion vectors in image processing?,,0,0,,,,CC BY-SA 4.0 +16099,1,,,10/27/2019 8:22,,2,102,"

In my experience, most of the time, when people talk about AI nowadays they mostly mean machine learning. Despite this, ML is usually seen as a mere technique to build high-performance software.

+

I rarely see people discuss the foundational questions of it, such as: from which "philosophy" of AI did ML emerge? Why is ML compelling in AI research, if not by its performance? What are the fundamental differences between statistical/probabilistic AI and logical AI? For reference, this hasn't even been mentioned in my master-level course on machine learning. Even I myself used to have a distaste for ML because I thought it was just mindless data-crunching.

+

But, lately, I've been reading through "Probability Theory: The Logic Of Science" and I'm starting to appreciate the theoretical side of ML, for instance, how Bayesian probability can be seen as a model of plausible reasoning in humans, and how probability theory extends logic (motivating, maybe, why probabilistic AIs were the next logical [no pun intended] step after logical AI). I would like now to delve deeper into the topic.

+

What are some books/papers that deal with fundamental and philosophical issues of ML and relate it to the global discourse of AIs?

+",23527,,2444,,12/20/2021 21:27,12/20/2021 21:27,What are some books/papers that deal with fundamental and philosophical issues of ML and relate it to the global discourse of AIs?,,1,0,,,,CC BY-SA 4.0 +16100,2,,15601,10/27/2019 9:03,,1,,"

Try using float64 instead of float32, and int64 instead of int32; increasing the number of bits increases the range of values the weights can take before overflowing.

+",30829,,,,,10/27/2019 9:03,,,,0,,,,CC BY-SA 4.0 +16101,1,,,10/27/2019 12:52,,2,57,"

I was trying to implement the loss function of H-GAN. Here is my code. But something seems wrong, maybe in the recognition loss on z (Eq. 9). I used Eq. 5 from MISO to calculate it. Here is my code:

+ +
def recognition_loss_on_z(self,latent_code, r_cont_mu, r_cont_var):
+    eplison = (r_cont_mu - latent_code) / (r_cont_var+1e-8)
+    return -tf.reduce_mean(tf.reduce_sum(-0.5*tf.log(2*np.pi*r_cont_var+1e-8)-0.5*tf.square(eplison), axis=1))/(config.batch_size * config.latent_dim)
+
+ +

And I calculated loss function:

+ +
    self.z_mean, self.z_sigm = self.Encode(self.images)
+    self.z_x = tf.add(self.z_mean, tf.sqrt(tf.exp(self.z_sigm))*self.ep)
+
+    self.D_pro_logits, self.l_x_real, self.Q_y_style_given_x_real, continuous_mu_real, continuous_var_real  = self.discriminator(self.images, training=True, reuse=False)
+    self.De_pro_tilde, self.l_x_tilde, self.Q_y_style_given_x_tidle, continuous_mu_tidle, continuous_var_tidle= self.discriminator(self.x_tilde, training=True, reuse = True)
+    self.G_pro_logits, self.l_x_fake, self.Q_y_style_given_x_fake, continuous_mu_fake, continuous_var_fake = self.discriminator(self.x_p, training=True, reuse=True)
+
+    tidle_latent_loss = self.recognition_loss_on_z(self.z_x, continuous_mu_tidle,continuous_var_tidle)
+    real_latent_loss  = self.recognition_loss_on_z(self.z_x, continuous_mu_real,continuous_var_real)
+    fake_latent_loss =  self.recognition_loss_on_z(self.zp, continuous_mu_fake,continuous_var_fake)
+
+ +

And discriminator:

+ +
    def discriminator(self, x_var,training=False, reuse=False):
+    with tf.variable_scope(""discriminator_recongnizer"") as scope:
+        if reuse==True:
+            scope.reuse_variables()
+        conv1 = tf.nn.leaky_relu(batch_normalization(conv2d(x_var, output_dim = 64 , kernel_size=6, name='dis_R_conv1'),training = training,name='dis_bn1', reuse = reuse), alpha =0.2)
+        conv2 = tf.nn.leaky_relu(batch_normalization(conv2d(conv1, output_dim = 128 , kernel_size=4, name='dis_R_conv2'),training = training ,name='dis_bn2', reuse = reuse), alpha =0.2)
+        conv3 = tf.nn.leaky_relu(batch_normalization(conv2d(conv2, output_dim = 128 , kernel_size=4, name='dis_R_conv3'),training = training,name='dis_bn3', reuse = reuse), alpha =0.2)
+        conv4 = conv2d(conv3, output_dim = 256 , kernel_size=4, name='dis_R_conv4')
+        lth_layer = conv4
+        conv4 = tf.nn.leaky_relu(batch_normalization(conv4, training=training, name='dis_bn4', reuse = reuse),alpha =0.2)
+        conv4 = tf.reshape(conv4,[-1, 256*8*8])
+        #Discriminator
+        with tf.variable_scope('discriminator'):
+            d_output = fully_connect(conv4, output_size=1, scope='dr_dense_2')
+        with tf.variable_scope('dis_q'):
+            fc_r = tf.nn.leaky_relu(batch_normalization(fully_connect(conv4, output_size=256 + config.style_classes, scope='dis_dr_dense_3'), training=training, name='dis_bn_fc_r', reuse=reuse), alpha=0.2)
+            continuous_mu = fully_connect(fc_r, output_size=256, scope='dis_dr_dense_mu')
+            continuous_var = tf.exp(fully_connect(fc_r, output_size=256, scope='dis_dr_dense_logvar'))
+            style_predict = fully_connect(fc_r, output_size=config.style_classes, scope='dis_dr_dense_y_style')
+
+        return d_output,lth_layer,style_predict,continuous_mu,continuous_var   
+
+ +

Does anyone have experience with this? Please tell me where I went wrong. Thank you so much, I really appreciate it!

+",30832,,1671,,10/29/2019 16:28,10/29/2019 16:28,How to implement loss function of H-GAN model,,0,0,0,,,CC BY-SA 4.0 +16102,5,,,10/27/2019 17:16,,0,,,2444,,2444,,10/27/2019 17:16,10/27/2019 17:16,,,,0,,,,CC BY-SA 4.0 +16103,4,,,10/27/2019 17:16,,0,,"For questions related to the exploding gradient problem, which is the numerical problem associated with the significant increase (or explosion) of the numbers of the gradient vector of an objective function with respect to the parameters of a neural network, which is being trained with a gradient-based optimization algorithm and backpropagation. There is also the related vanishing gradient problem, which arises when the numbers become very small.",2444,,2444,,10/27/2019 17:21,10/27/2019 17:21,,,,0,,,,CC BY-SA 4.0 +16104,1,16107,,10/27/2019 19:58,,6,1644,"

I have an idea to find the optimal number of hidden neurons required in a neural network, but I'm not sure how accurate it is.

+

Assuming that it has only 1 hidden layer, it is a classification problem with 1 output node (so it's a binary classification task), has N input nodes for N features in the data set, and every node is connected to every node in the next layer.

+

I'm thinking that to ensure that the network is able to extract all of the useful relations between the data, then every piece of data must be linked to every other piece of data, like in a complete graph. So, if you have 6 inputs, there must therefore be 15 edges to make it complete. Any more and it will be recomputing previously computed information and any less will be not computing every possible relation.

+

So, if a network has 6 input nodes, 1 hidden node and 1 output node, there will be 6 + 1 connections. With 6 input nodes, 2 hidden nodes, and 1 output node, there will be 12 + 2 connections. With 3 hidden nodes there will be 21 connections. Therefore, the hidden layer should have 3 hidden nodes to ensure all possibilities are covered.

+

This answer discusses another method. For the sake of argument, I've tried to keep both examples using the same data. If this idea is computed with 6 input features, 1 output node, $\alpha = 2$, and 60 samples in the training set, this would result in a maximum of 4 hidden neurons. As 60 samples is very small, increasing this to 600 would result in a maximum of 42 hidden neurons.

+

Based on my idea, I think there should be at most 3 hidden nodes and I can't imagine any more being useful, but would there be any reason to go beyond 3 and up to 42, like in the second example?

+",11795,,2444,,1/11/2021 0:37,1/11/2021 0:37,Is this idea to calculate the required number of hidden neurons for a single hidden layer neural network correct?,,1,1,,,,CC BY-SA 4.0 +16105,2,,16099,10/27/2019 20:40,,2,,"

I'll recommend two sources:

+ +
    +
  1. The venerable Russell & Norvig book, which is a common text in AI courses. Russell & Norvig end each chapter with a summary of the history of the developments of the techniques they have just discussed. These sections are often skipped by novice readers, but are almost exactly what you are looking for. The ones in the back half of the book should, together, give you a good sense of the order in which developments occurred, why techniques were developed, and what advances techniques enabled.

  2. +
  3. R&N does a good job of covering what happened, but not always why. For that, you want a philosophy of AI book. I recommend Mind Design II as a starting place. This book is a chronologically organized collection of papers and academic essays by the big thinkers in philosophy of mind and in AI research. Often, the papers are responses to one another. With some side reading about the history of each author, you can begin to get a good sense of the big philosophical movements in the field over the last 70 years, and why things have changed.

  4. +
+ +

If you don't want to read the book, I can give you a summary (spoilers ahead!):

+ +
    +
  1. 1920's: behaviorists, and others, propose that Mind = Brain, and specifically focus on intelligent behavior.
  2. +
  3. 1950: Turing proposes that a computer could be programmed to exhibit intelligent behavior.
  4. +
  5. 1960's: Cognativists in AI, Psych, and Linguistics argue that behavior is not enough. Minds think, and thought takes the form of reasoned algorithms. The lynchpin of their argument is domains like language understanding, which they claim cannot be modeled without logic. Their work produces search algorithms, and early rule-based planning and language systems.
  6. +
  7. 1970's Searle argues that computers can't think because of Phenomenology. Most AI folk ignore him, and go on working, but Philosophy & Psych take greater notice.
  8. +
  9. 1970's Dryfus (and others) argue compelling that logic and reason cannot explain human thought, by dissecting the rule based systems of the day through the Frame Problem. AI researchers take some notice.
  10. +
  11. The Connectionists, especially Hinton (AI) & Churchland (Philosophy) propose the first post-Cognativist theories of mind. These focus on the idea that Mind is in the Connections (specifically, the firing patterns) of neurons, not in the brain itself. This view of minds spurs (re)-development of Neural Networks. In the 1980's, this is mostly ignored.
  12. +
  13. During the 1990's, Connectionists, and others working on probabilistic methods in AI demonstrate systems for language that are substantially better than rule based methods. Cognativists begin to decline, because the domains they claimed needed reasoned algorithms are actually better handled by statistical algorithms. Connectionist views gain support in AI and elsewhere.
  14. +
  15. Today, with rising computational power, statistical techniques come to the forefront of many AI domains. Simultainiously, the Churchlands, Rodney Brooks, and others propose further Post-Cognativist schools of thought based around dynamical systems theory and embodied cognition, which were somewhat influential in robotics. Cognativism continues to enjoy some support within the AI, Psych, and Linguistics communities, but this is much diminished. Some hybrid systems use a mix of statistical and rule-based techniques.
  16. +
+",16909,,,,,10/27/2019 20:40,,,,0,,,,CC BY-SA 4.0 +16106,1,16117,,10/27/2019 20:53,,5,1445,"

How many weights does the max-pooling layer have?

+

For example, if there are 10 inputs, a pooling filter of size 2, stride 2, how many weights, including bias, does a max-pooling layer have?

+",30843,,2444,,12/31/2021 18:21,12/31/2021 18:23,How many weights does the max-pooling layer have?,,1,0,,,,CC BY-SA 4.0 +16107,2,,16104,10/27/2019 20:56,,7,,"
+

I have an idea to find the optimal number of hidden neurons required in a neural network but I'm not sure how accurate it is.

+
+ +

It's a complete non-starter, and there is no such calculation possible in the general case (real-valued inputs to a neural network).

+ +

Even with one input neuron it is not possible. That is because even with one input, the output can be an arbitrarily complex mapping to classes. A good example with two inputs that would require an infinite number of hidden neurons to supply a simple classifier would be classifying x,y points as being in the Mandelbrot set.

+ +

In some, more constrained, examples, with well-defined functions, you can construct a minimal neural network that solves the problem perfectly. For instance a neural network model of XOR can be made with two hidden neurons (and six links). However, this kind of analysis is limited to simple problems. You might be able to come up with some variation of your idea if all inputs were boolean, and the neural network limited to some combined bitwise logic on all the inputs.
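
+ +

For illustration, here is one way to write such an XOR network down explicitly (weights chosen by hand, with threshold units rather than sigmoids):

def step(x):
    # Heaviside threshold activation
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Two hidden neurons and six links: an OR-like unit and an AND-like unit
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires if at least one input is 1
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires only if both inputs are 1
    return step(1.0 * h1 - 2.0 * h2 - 0.5)  # OR minus AND gives XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))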

+ +

Your idea of matching number of edges to number of possible interactions between inputs does not work because you are only considering the most basic kind of interaction between two variables, whilst variables can in practice combine in all sorts of ways to form a function.

+ +

In addition, each neuron in a hidden layer works with a linear weighted sum, plus a fixed transformation function. This is in no way guaranteed to match the function shape that you are trying to approximate with the neural network. An analogy that you might be aware of is discrete Fourier transforms - it is possible to model any part of a function by combining sine and cosine waves of different frequencies, but some functions will require many such waves in order to be represented accurately.

+ +

Your link to the answer in Cross Validated Stack Exchange gives you a rule of thumb that the writers find often works with the kinds of data that they work with. This is useful experience. You can use such rules as the starting point for searching for architecture that works on your problem. This will likely be more useful than your idea based on counting the possible variable interactions. However, in both cases, the most important step is to perform a test with some unseen examples, and to search for the best neural network architecture for your problem.

+ +

There are things you can do with variable interactions though. For instance, try looking for linear correlations between simple polynomial combinations of variables and your target variable, e.g. plot $x_1 x_2$ vs $y$ or $x_3^2 x_4$ vs $y$ . . . you may find some combinations have a clear signal implying a relationship. Take care if you do this sort of thing though, if you test very many of these, you will find a linear relationship purely by chance that looks good initially but turns out to be a dud when testing (it's a form of overfitting). So you should generally test a lot less than the size of your dataset, and limit yourself to some modest maximum total power.

+",1847,,,,,10/27/2019 20:56,,,,0,,,,CC BY-SA 4.0 +16109,1,16110,,10/28/2019 0:25,,4,701,"

Did I get it right that RNNs most often have just one hidden neuron layer? Is there a reason for that? Will RNNs with several hidden layers in each cell work worse? Thank you!

+",30851,,,,,10/28/2019 2:44,Why RNNs often use just one hidden layer?,,1,0,,,,CC BY-SA 4.0 +16110,2,,16109,10/28/2019 2:44,,3,,"

Definitely, you can have multiple hidden layers in an RNN. One of the most common approaches to determine the number of hidden units is to start with a very small network (one hidden unit), apply K-fold cross validation (k over 30 will give very good accuracy) and estimate the average prediction risk. Then you will have to repeat the procedure for increasingly larger networks, for example for 1 to 10 hidden units, or more if needed.

+ +

However, in my experience, if you are interested in getting the best possible accuracy, you should start with a small number of hidden layers and a simpler structure, and if you are not satisfied with the corresponding accuracy, then you should go on increasing the learning rate by fixed but small steps and each time start training fresh.

+",30799,,,,,10/28/2019 2:44,,,,0,,,,CC BY-SA 4.0 +16111,1,,,10/28/2019 7:47,,4,392,"

The mean episodic reward is generally increasing, but it has spontaneous drops, and I'm not sure of their cause.

+ +

+ +

The problem has a sparse reward, batch size=2000, entropy_coefficient=0.1, other hyper-parameters are pretty standard.

+ +

Has anyone seen this kind of behavior? What could be the cause of these drops in the reward (not enough exploration, too sparse rewards, the state not expressive enough, etc.)?

+",28125,,28125,,11/6/2019 21:51,7/19/2023 4:09,What could be the cause of the drop in the reward in A3C?,,1,0,,,,CC BY-SA 4.0 +16112,1,16116,,10/28/2019 9:08,,1,313,"

I've just started to learn about meta-learning and CNNs, and in most papers that I've read they mention having one CNN for feature extraction. These features then help another network.

+ +

I don't know what feature extraction is (I don't know what those features are), but I'm wondering if I can use it for image segmentation.

+ +

The idea is to use the first network for feature extraction, without doing image classification, and pass those features to the other network.

+ +

My question is: how can I use feature extraction in a CNN for image segmentation?

+",4920,,,,,11/27/2019 11:02,How can I use feature extraction in CNN with image segmentation?,,1,0,,,,CC BY-SA 4.0 +16113,2,,12529,10/28/2019 9:23,,2,,"

Unfortunately no, the way to go is to track the total reward and see if it's increasing and eventually converging. Value loss isn't a useful metric, as the loss can be 0 when the value network always predicts 0 and the agent doesn't collect any reward, meaning very poor behavior.

+",28125,,,,,10/28/2019 9:23,,,,0,,,,CC BY-SA 4.0 +16114,2,,16087,10/28/2019 9:25,,6,,"

The double sampling problem is referenced in Chaper 11.5 Gradient Descent in the Bellman Error in Reinforcement Learning: An Introduction (2nd edition).

+

From the book, this is the full gradient descent (as opposed to semi-gradient descent) update rule for weights of an estimator that should converge to a minimal distance from the Bellman error:

+
+

$$w_{t+1} = w_t + \alpha[\mathbb{E}_b[\rho_t[R_{t+1} + \gamma\hat{v}(S_{t+1},\mathbf{w})] - \hat{v}(S_{t},\mathbf{w})][\nabla\hat{v}(S_{t},\mathbf{w})- \gamma\mathbb{E}_b[\rho_t\nabla\hat{v}(S_{t+1},\mathbf{w})]]$$

+

[...] But this is naive, because the equation above involves the next state, $S_{t+1}$, appearing in two +expectations that are multiplied together. To get an unbiased sample of the product, +two independent samples of the next state are required, but during normal interaction +with an external environment only one is obtained. One expectation or the other can be +sampled, but not both.

+
+

Basically, unless you have an environment that you can routinely re-wind and re-sample to get two independent estimates (for $\hat{v}(S_{t+1},\mathbf{w})$ and $\nabla\hat{v}(S_{t+1},\mathbf{w})$), the update rule that naturally arises from gradient descent on the Bellman error will not work any better than other approaches, such as semi-gradient methods. If you can do this rewind process on every step, then it may be worth it because of the guarantees of convergence, even off-policy with non-linear approximators.

+

The paper proposes a workaround for this issue, keeping the robust convergence guarantees, but dropping the need to collect two independent samples of the same estimate on each step.

+",1847,,-1,,6/17/2020 9:57,10/28/2019 9:25,,,,0,,,,CC BY-SA 4.0 +16115,2,,12008,10/28/2019 9:26,,0,,"

Just use one class inheriting from nn.Module called e.g. ActorCriticModel.

+ +

Then, have two members called self.actor and self.critic and define them to have the desired architecture. Then, in the forward() method, return two values, one for the actor output (which is a vector) and one for the critic value (which is a scalar).

+ +

This way you can use only one optimizer.
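
+ +

For illustration, a rough sketch of what that could look like in PyTorch (layer sizes and dimensions are placeholders):

import torch
import torch.nn as nn

class ActorCriticModel(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        # Two members, one per head, each with its own architecture
        self.actor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )
        self.critic = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs):
        # Return both the action distribution (vector) and the state value (scalar)
        return self.actor(obs), self.critic(obs)

model = ActorCriticModel(obs_dim=4, n_actions=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # one optimizer for both heads
probs, value = model(torch.zeros(1, 4))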

+",28125,,,,,10/28/2019 9:26,,,,0,,,,CC BY-SA 4.0 +16116,2,,16112,10/28/2019 10:10,,0,,"

Feature extraction is a way of using a pretrained model to extract information from input data. For example, an image segmentation task may use the VGG network, or another image classification network, for feature extraction. The output of the last convolutional layer is taken. Then, these features are fed into the untrained part of the network to get outputs. The bottom network for image segmentation usually consists of upsampling and convolutional layers, so that the main network finally produces an output of the same size as the original image. Hope I can help you.
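
+ +

A minimal sketch with Keras, assuming a pretrained VGG16 is used as the extractor (the input here is just a random placeholder image):

import numpy as np
from tensorflow.keras.applications import VGG16

# Load VGG16 without its classification head; the output of the last
# convolutional block is then used directly as the feature map
feature_extractor = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

image_batch = np.random.rand(1, 224, 224, 3).astype('float32')
features = feature_extractor.predict(image_batch)
print(features.shape)  # (1, 7, 7, 512): spatial feature maps for the segmentation head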

+",23713,,,,,10/28/2019 10:10,,,,0,,,,CC BY-SA 4.0 +16117,2,,16106,10/28/2019 10:13,,2,,"

A max-pooling layer doesn't have any trainable weights. It has only hyperparameters (such as the filter size and stride), and they are non-trainable. The max-pooling operation simply takes the maximum value inside each window of the filter, so it involves no weights or biases. It is purely a way to downscale the data to a smaller dimension.
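
+ +

A tiny sketch of 1D max pooling, just to make the point visible (the numbers are arbitrary):

def max_pool_1d(inputs, size=2, stride=2):
    # Slide a window over the input and keep only the maximum of each window;
    # there is nothing to learn here: no weights, no biases
    return [max(inputs[i:i + size]) for i in range(0, len(inputs) - size + 1, stride)]

# The setting from the question: 10 inputs, filter size 2, stride 2 -> 5 outputs, 0 weights
print(max_pool_1d([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]))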

+",23713,,2444,,12/31/2021 18:23,12/31/2021 18:23,,,,0,,,,CC BY-SA 4.0 +16119,1,,,10/28/2019 12:47,,1,255,"

Which model is the most appropriate for this problem with multiple inputs and outputs?

+ +

The data set is

+ +
A1, A2, A3, A4, A5, A6, B1, B2, B3, B4
+
+ +

where A1, A2, A3, A4, A5, A6 are the inputs and B1, B2, B3, B4 the outputs (this is what I want the model to predict).

+ +

Would an LSTM be appropriate for this task? Any advice or hint would be much appreciated. Also, if anyone can share examples that have already been done, it would really help me a lot.

+",30642,,2444,,11/4/2019 14:20,7/16/2023 23:03,Which model can I use for this problem with multiple inputs and outputs?,,1,2,,,,CC BY-SA 4.0 +16120,1,16122,,10/28/2019 14:21,,3,161,"

I am trying to predict crime. I have data with factors: location, keyword description of the crime, time crime occurred and so on. This is for crimes that occurred in the past.

+ +

I would like to treat the prediction of crimes as a binary classification problem. In this model, the data I have collected would form the ""positive"" examples: they are all examples of a crime happening. However, I am unsure what to use for the negative examples.

+ +

Obviously, most of the time there is no crime at the location, but can I use this as negative data? For example, if I know there was a crime at 7pm at location X, and no other crimes there, should I generate new negative data points for every hour except 7pm?

+ +

Ideally, I want to create probabilities of crime based on a set of factors.

+",8385,,16909,,10/29/2019 13:55,10/30/2019 4:21,How do I predict the occurrence of rare events?,,2,0,,,,CC BY-SA 4.0 +16121,2,,11103,10/28/2019 14:38,,1,,"

https://github.com/openai/retro

+ +

Current list of machines is

+ +
    +
  • Atari
  • +
  • Atari2600 (via Stella)
  • +
  • TurboGrafx-16/PC Engine (via Mednafen/Beetle PCE Fast)
  • +
  • Game Boy/Game Boy Color (via gambatte)
  • +
  • Game Boy Advance (via mGBA)
  • +
  • Nintendo Entertainment System (via FCEUmm)
  • +
  • Super Nintendo Entertainment System (via Snes9x)
  • +
  • GameGear (via Genesis Plus GX)
  • +
  • Genesis/Mega Drive (via Genesis Plus GX)
  • +
  • Master System (via Genesis Plus GX)
  • +
+ +

There is a vague tutorial here for adding other systems +https://github.com/openai/retro/issues/169

+",30869,,,,,10/28/2019 14:38,,,,0,,,,CC BY-SA 4.0 +16122,2,,16120,10/28/2019 15:37,,3,,"

It might be more informative to:

+ +
    +
  1. Label each combination of location, type, and time of crime with a crime rate. For example, theft, in Crystal City, at 11 pm, occurs 20 times per year, or 0.4 times per resident per year.

  2. +
  3. Predict the crime rate, rather than individual events.

  4. +
+ +

This avoids the need to have explicit examples of ""non-crime"", and lets you instead directly learn something related to the probabilities of crimes being committed (the rate).
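
+ +

A small sketch of what the first step might look like with pandas (the data here is made up):

import pandas as pd

# One row per recorded incident
crimes = pd.DataFrame({
    'location': ['Crystal City', 'Crystal City', 'Downtown'],
    'type':     ['theft', 'theft', 'assault'],
    'hour':     [23, 23, 7],
})

years_of_data = 1  # hypothetical span covered by the records

# Aggregate incidents into a rate per (location, type, hour) combination;
# this 'crimes_per_year' column then becomes the regression target,
# instead of a binary crime / no-crime label
rates = (crimes.groupby(['location', 'type', 'hour'])
               .size()
               .div(years_of_data)
               .rename('crimes_per_year')
               .reset_index())
print(rates)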

+",16909,,,,,10/28/2019 15:37,,,,4,,,,CC BY-SA 4.0 +16123,2,,16080,10/28/2019 15:49,,3,,"

U-Net and U-Net-inspired architectures have been quite popular in medical image-related tasks ever since the architecture was first introduced. There have been several improved versions of U-Net designed for specific tasks that followed. One such example is Attention U-Net, extremely popular for pancreas segmentation.

+ +

Other examples of architectures that have achieved state-of-the-art results in image segmentation tasks in recent years include Multi-Scale 3DCNN + CRF, popular for Brain and Lesion images, Multi-Scale Attention for MRIs, etc. A recent paper that describes an interesting 3D FCNN architecture is HyperDense-Net, widely used for multi-modal tasks in medical image segmentation.

+",30871,,,,,10/28/2019 15:49,,,,0,,,,CC BY-SA 4.0 +16124,1,16126,,10/28/2019 15:52,,5,185,"

I am reading Goodfellow's book about neural networks, but I am stuck on the mathematical calculus of the back-propagation algorithm. I understood the principle, and I have watched some YouTube videos explaining this algorithm step by step, but now I would like to understand the matrix calculus (so not basic calculus!), that is, calculus with matrices and vectors, and especially everything related to derivatives with respect to a matrix or a vector, and so on.

+

Which math book could you advise me to read?

+

I should specify that I studied for two years after the bachelor's at a mathematics school (in French: mathématiques supérieures et spéciales), but did not practice for years.

+",30870,,2444,,1/22/2021 1:01,1/22/2021 1:02,Which linear algebra book should I read to understand vectorized operations?,,2,0,,,,CC BY-SA 4.0 +16125,2,,16124,10/28/2019 16:30,,1,,"

Linear Algebra Done Right by Axler seems to be the best book on linear algebra, with a brisk and modern approach.

+",6779,,,,,10/28/2019 16:30,,,,0,,,,CC BY-SA 4.0 +16126,2,,16124,10/28/2019 17:29,,3,,"

If you already have two years of a bachelor's of mathematics, I recommend part I of the book that you're mentioning. That part of the book reviews the main mathematics used in the optimization of neural nets (in part 1), and then actually goes through the various models in detail in the later parts. The review is done at a level that is suitable for someone who has already studied these topics, but needs a refresher.

+

The book Matrix Differential Calculus with Applications in Statistics and Econometrics covers more advanced topics, which might also be what you are looking for. There is also the related Wikipedia article.

+",16909,,2444,,1/22/2021 1:02,1/22/2021 1:02,,,,1,,,1/22/2021 0:28,CC BY-SA 4.0 +16127,2,,16045,10/28/2019 19:31,,5,,"

Let us suppose we have a network without any functions in between. Each layer consists of a linear function. i.e

+ +
layer_output = Weights * layer_input + bias
+
+ +

Consider a 2-layer neural network. The output from layer one will be:
+x2 = W1*x1 + b1
+Now we pass this output to the second layer, which gives:

+ +
x3 = W2*x2 + b2
+Also, x2 = W1*x1 + b1
+Substituting back, we have:
+x3 = W2*(W1*x1 + b1) + b2
+x3 = (W2*W1)*x1 + (W2*b1 + b2)
+x3 = W*x1 + b
+
+ +

Oh no! We still got a linear function. No matter how many layers we add, we will still get a linear function. In that case, our network will never be able to approximate any non-linear function.
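
You can verify this numerically with a small sketch (random weights, purely for illustration): composing two linear layers is exactly one linear layer, and inserting a ReLU in between breaks that equivalence.

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=(4, 1))
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=(3, 1))
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=(2, 1))

two_layers = W2 @ (W1 @ x1 + b1) + b2
collapsed  = (W2 @ W1) @ x1 + (W2 @ b1 + b2)   # a single linear layer
print(np.allclose(two_layers, collapsed))       # True

relu = lambda z: np.maximum(0, z)
with_relu = W2 @ relu(W1 @ x1 + b1) + b2        # no longer expressible as W*x1 + b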

+ +

So what is the solution?

+ +

We will simply add some non linear functions in between. These functions are called activation functions. Some of these functions include:

+ +
    +
  • ReLU
  • +
  • Sigmoid
  • +
  • tanh
  • +
  • Softmax
  • +
+ +

and there are a lot more of them.

+ +

Yay! Our network is no more linear!

+ +

We have a lot of different non-linear functions, and each of them serves a different purpose.

+ +

For example, ReLU is simple and computationally cheap:
+ReLU(x) = max(0, x)
+Sigmoid outputs are between 0 and 1.
+tanh is similar to sigmoid, but zero-centered, with outputs from -1 to 1.
+Softmax is usually used if you want to represent any vector as a discrete probability distribution.

+ +

Hope you are having a great day!

+",21229,,,,,10/28/2019 19:31,,,,0,,,,CC BY-SA 4.0 +16128,1,16129,,10/28/2019 20:16,,2,1206,"

I've just started to learn about CNNs, and I have read somewhere that if I remove the last fully-connected layer (FCL) I will get the features extracted from the input image, but... what are those features?

+ +

Are they numbers? Labels? An image location (x,y) where there is a line.

+ +

I want to use these features on a one shot network, but I can't imagine how to use them if I don't know what they are.

+",4920,,4920,,10/29/2019 6:57,10/29/2019 6:57,What are the features get from a feature extraction using a CNN?,,1,0,,,,CC BY-SA 4.0 +16129,2,,16128,10/28/2019 22:46,,2,,"

You get what we call high-level features, which are basically abstract representations of the parts that carry information in the image you want to classify.

+ +

Imagine you want to classify a car. The image you feed your network could be a car on a road with a driver and trees and clouds, etc. The network, however, if you've trained it to recognize cars, will try to focus on parts of the image regarding a car. The final layers will have learned to extract an abstract representation of a car from the image (this means a low-resolution car-like shape). Now your final FC layers will try to classify the image from these high-level features. In this example, you would have an FC layer that learns to classify a car if this abstract car-like figure is present in the image. Likewise, if it isn't present, it won't classify it as a car. By accessing these high-level features, you essentially have a more compact and meaningful representation of what the image represents (based always on the classes that the CNN has been trained on).
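
If you want to pull these features out in practice, a minimal sketch (here with Keras and an untrained VGG16, purely for illustration; the layer index depends on your network) is to build a second model that stops just before the final classifier:

from tensorflow import keras
import numpy as np

base = keras.applications.VGG16(weights=None)   # or weights="imagenet"
feature_extractor = keras.Model(inputs=base.input, outputs=base.layers[-2].output)

images = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder batch
features = feature_extractor.predict(images)                # one feature vector per image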

+ +

By visualizing the activations of these layers we can take a look on what these high-level features look like.

+ +

+ +

The top row here is what you are looking for: the high-level features that a CNN extracts for four different image types.

+",26652,,,,,10/28/2019 22:46,,,,7,,,,CC BY-SA 4.0 +16130,2,,12390,10/28/2019 23:53,,0,,"

I agree a neural net is a good start, and you might want to add a convolutional neural network to the list of models you want to test and evaluate.

+ +

However, your question was really about getting something up and running using Python. I don't have enough time to play games myself, much less to let my computers do it for me... but I know someone who has done such a project.

+ +

Hopefully that helps.

+",30750,,,,,10/28/2019 23:53,,,,0,,,,CC BY-SA 4.0 +16131,1,16144,,10/29/2019 1:23,,2,297,"

Problem

+

I have 66 slot machines. For each of them, I have 7 possible actions/arms to choose from. At each trial, I have to choose one of 7 actions for each and every one of the 66 slots. The reward depends on the combination of these actions, but the slots are not equal, that is, pulling the same arm for different slots gives different results. I do not care about an initial state or feature vector, as the problem always starts from the same setting (it is not contextual). My reward depends on how I pull one of the 7 arms of all of the 66 bandits simultaneously, where, as said, each slot has its own unique properties towards the calculation of the total reward. Basically, the action space is a one-hot encoded 66x7 matrix.

+

My solution

+

I ignored the fact that I do not care about a feature vector or state and I treated the problem using a deep NN with a basic policy-gradient algorithm, where I directly increase the probability of each action depending on the reward I get. The state simply does not change, so the NN always receives the same input. This solution does work effectively in finding an approximately optimal strategy; however, it is very computationally expensive, and something tells me this is overkill for the problem.

+

However, I do not see how I could apply standard solutions to MAB, such as epsilon-greedy. I need simultaneity between the different "slot machines", and, if I just take each possible permutation as a different action, in order to explore them with greedy methods, I get way too many actions (in the order of $10^{12}$). I have not found in the literature something similar to this multi-armed multi-bandit problem and I am clueless if anything like that has ever been considered - perhaps I am overthinking it and this can be somehow reduced to a normal MAB?

+",23638,,2444,,9/22/2021 22:15,9/22/2021 22:15,"Which solutions could I use to solve a multi-armed ""multi-bandit"" problem?",,1,1,,,,CC BY-SA 4.0 +16132,2,,15977,10/29/2019 4:59,,3,,"

One way to view a neural network is as a series of linear transformations.

+ +

You take a bunch of data points and look at them from a different perspective, in a different space. You apply some non-linear function to the data points, like ReLU, sigmoid, etc. Then you repeat the same process of looking from yet another space.

+ +

Our goal is to look at the data from a point where things start looking right for our task. These linear transformations are what the network has to optimise.

+",21229,,,,,10/29/2019 4:59,,,,1,,,,CC BY-SA 4.0 +16133,1,16833,,10/29/2019 5:25,,13,23202,"

I'm working on a project, where we use an encoder-decoder architecture. We decided to use an LSTM for both the encoder and decoder due to its hidden states. In my specific case, the hidden state of the encoder is passed to the decoder, and this would allow the model to learn better latent representations.

+

Does this make sense?

+

I am a bit confused about this because I really don't know what the hidden state is. Moreover, we're using separate LSTMs for the encoder and decoder, so I can't see how the hidden state from the encoder LSTM can be useful to the decoder LSTM because only the encoder LSTM really understands it.

+",30885,,2444,,1/17/2021 16:43,12/15/2021 11:14,What exactly is a hidden state in an LSTM and RNN?,,4,0,,,,CC BY-SA 4.0 +16134,2,,15977,10/29/2019 6:19,,0,,"

A good way of looking at it would be understanding neural networks mathematically, i.e. purely on the basis of the fact that you're just trying to fit a function and solve an optimisation problem (apart from looking at it as multiple units of logistic regression).

+ +

Say we want to approximate a function $y = f_w(x)$ with $x \in D$, where $D$ is our domain-space. We want this function to map to $C$, our co-domain, with all the values the function ends up taking being the set $y \in R$, our range. Essentially, we frame $f(x)$ as a sequence of operations (what operation should be done where comes from common practice, intuition, and insight mostly gained from experience), assuming that, when the right parameters are used for these operations, we will arrive at a very reasonable approximation of the function.

+ +

We initialise the parameters with whatever values we want initially (usually random), calling this parameter-space $W$. The essential idea would be to frame another function $L(f_w(x), \hat{y})$, called the loss function, which we want to minimise. This acts as a test of how good our function is - since our function parameters were initially random, the errors between the function approximations and the actual range values for known points (the training set) are estimated. These estimated error values and their gradients are then used by back-propagation, where $w_{init}\in W$ is updated to another $w_{1}\in W$, where $w_1$ is calculated by moving on $L$ in the direction of decreasing gradient, in hopes of reaching the loss function's minimum.
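
To make the update rule explicit (introducing a step size $\eta$, which is an extra symbol not used above), each such step performs $w_{t+1} = w_t - \eta \, \nabla_w L(f_{w_t}(x), \hat{y})$.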

+ +

Simplifying, essentially all you want to do is find a $y=f_w(x)$ where parameters $w$ are to be chosen such that $L(f_w(x), \hat{y})$ is minimised for the training set.

+ +

Even though this is a very rough idea of neural networks, such a direction in thinking can especially be useful when studying generative networks and other problems where the problem has to be formulated mathematically before being able to approach it.

+",25658,,,,,10/29/2019 6:19,,,,0,,,,CC BY-SA 4.0 +16135,2,,16133,10/29/2019 6:34,,3,,"

As you said, one way to look at it is definitely that the LSTM-encoder's encoding can be only understood by itself, that's why the decoder exists there. An optimisation process encoded it, why couldn't an optimisation process decode it?

+ +

The hidden state is essentially just an encoding of the information you gave it, keeping the time-dependencies in check. Most encoder-decoder networks are trained end to end, meaning that, while the encoding is learned, a corresponding decoding is learned simultaneously to decode the encoded latent into your desired format.

+ +

I'd recommend you read this blog on how transformer models are used to convert French to English, as it would give you better intuition and understanding on what happens with encoder-decoder sequence models

+",25658,,,,,10/29/2019 6:34,,,,0,,,,CC BY-SA 4.0 +16136,1,16139,,10/29/2019 7:12,,5,919,"

REINFORCE is a Monte Carlo policy gradient algorithm, which updates weights (parameters) of policy network by generating episodes. Here's a pseudo-code from Sutton's book (which is same as the equation in Silver's RL note):

+ +

+ +

When I try to implement this with my own problem, I found something strange. Here's implementation from Pytorch's official GitHub:

+ +
def finish_episode():
+    R = 0
+    policy_loss = []
+    returns = []
+    for r in policy.rewards[::-1]:
+        R = r + args.gamma * R
+        returns.insert(0, R)
+    returns = torch.tensor(returns)
+    returns = (returns - returns.mean()) / (returns.std() + eps)
+    for log_prob, R in zip(policy.saved_log_probs, returns):
+        policy_loss.append(-log_prob * R)
+    optimizer.zero_grad()
+    policy_loss = torch.cat(policy_loss).sum()
+    policy_loss.backward()
+    optimizer.step()
+    del policy.rewards[:]
+    del policy.saved_log_probs[:]
+
+ +

I feel like there's a difference between the above two. In Sutton's pseudo-code, the algorithm updates $\theta$ at each step $t$, while the second code (PyTorch's) accumulates the loss and updates $\theta$ with the summation, i.e. after each episode.
+I tried to search for other implementations of REINFORCE, and I found that most of them follow the second form, updating after each generated episode.

+ +

To check whether both give the same result, I changed the second code as

+ +
def finish_episode():
+    R = 0
+    policy_loss = []
+    returns = []
+    for r in policy.rewards[::-1]:
+        R = r + args.gamma * R
+        returns.insert(0, R)
+    returns = torch.tensor(returns)
+    returns = (returns - returns.mean()) / (returns.std() + eps)
+    for log_prob, R in zip(policy.saved_log_probs, returns):
+        optimizer.zero_grad()
+        loss = -log_prob * R
+        loss.backward()
+        optimizer.step()
+
+...
+
+ +

and ran it, which gives a different result (if my code has no problem).
+So they are not the same, and I think the last one is closer to the original pseudo-code of REINFORCE. What am I missing? Is it okay because the results are approximately the same? (I'm not sure about this claim.)

+ +

However, in some sense, I think PyTorch's implementation is the right version of REINFORCE. In Sutton's pseudo-code, the episode is generated first, so I think $\theta$ shouldn't be updated at each step and should instead be updated after the total loss is computed. If $\theta$ is updated at each step, then such a $\theta$ might be different from the original $\theta$ that was used to generate the episode.

+",30886,,2444,,5/30/2022 8:44,5/30/2022 8:46,Should the policy parameters be updated at each time step or at the end of the episode in REINFORCE?,,1,0,,,,CC BY-SA 4.0 +16137,2,,12989,10/29/2019 7:48,,1,,"

First of all, when you add hidden layers, or stack RBMs, you get a Deep Belief Network (DBN). Your question then deals with the comparison of DBNs and RBMs.

+ +

There are some elements to answer this question in the article Representational Power of Restricted Boltzmann Machines and Deep Belief Networks by Nicolas Le Roux, which can be found summarized in these course slides. The main results are:

+ +
+

Restricted Boltzmann Machines:

+ +
    +
  • Increasing the number of hidden units improves representational ability.

  • +
  • With an unbounded number of units, any distribution over $\{0,1\}^n$ can be approximated arbitrarily well.

  • +
+ +

Deep Belief Networks:

+ +
    +
  • Adding additional layers using greedy contrastive divergence training does not provide additional benefit.

  • +
  • There remain open questions about the benefits additional layers add.

  • +
+
+ +

I emphasized the $4^{th}$ point, which covers your question the most. It does not mean that additional layers are useless, but since RBMs are universal approximators ($2^{nd}$ point), the benefits of adding layers seem less straightforward. They seem dependent on the first layer, the training procedure... You can see the article for more details and the open questions raised about DBNs. Note that this answer is an entry point on the topic; there might be more recent results following this article which I'm not familiar with...

+",25825,,2444,,10/29/2019 13:38,10/29/2019 13:38,,,,0,,,,CC BY-SA 4.0 +16138,1,,,10/29/2019 8:20,,3,88,"

I am working on classifying the Omniglot dataset, and the different papers dealing with this topic describe the problem as one-shot learning (classification). I would like to nail down a precise description of what counts as one-shot learning.

+

It's clear to me that in one-shot classification, a model tries to classify an input into one of $C$ classes by comparing it to exactly one example from each of the $C$ classes.

+

What I want to understand is:

+
    +
  1. Is it necessary that the model has never seen the input and the target examples before, for the problem to be called one-shot?

    +
  2. +
  3. Goodfellow et. al. describe one-shot learning as an extreme case transfer learning where only one labeled example of the transfer task is presented. So, it means they are considering the training process as a kind of continuous transfer learning? What has the model learned earlier, that is being transferred?

    +
  4. +
+",23273,,2444,,12/14/2020 11:11,12/14/2020 11:11,Precise description of one-shot learning,,1,0,,,,CC BY-SA 4.0 +16139,2,,16136,10/29/2019 11:01,,3,,"

The essence of your observation is that Sutton's version of REINFORCE is taking into consideration all of the trajectory to compute the returns, while in the pytorch version only the future is taken into consideration, hence going in reverse to sum the future rewards and ignore the previous rewards. The consequence is that future actions are not punished for early mistakes. The guys at OpenAI refer to this as reward-to-go, but, personally, I find that it resembles Monte Carlo On Policy Control without Exploring Starts or First Visit from Sutton's book.
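
To make the difference concrete, here is a tiny sketch with a made-up 3-step reward list, contrasting the two ways of weighting the log-probabilities:

rewards = [1.0, 0.0, 2.0]   # made-up rewards for a 3-step episode
gamma = 1.0

# Whole-episode return: every step gets the same weight
episode_return = sum(rewards)
full_weights = [episode_return] * len(rewards)   # [3.0, 3.0, 3.0]

# Reward-to-go: each step is only credited with what follows it
to_go = []
running = 0.0
for r in reversed(rewards):
    running = r + gamma * running
    to_go.insert(0, running)                      # [3.0, 2.0, 2.0]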

+

You can find more on REINFORCE and Policy Gradient in Spinning Up RL: Part3: Intro to policy gradient - Don't let the past distract you.

+

Also, something to note is that even in Sutton's version, the whole trajectory is unrolled, i.e. the episode completes, and then the weights get updated. Otherwise it stops being a Monte Carlo Method and it becomes a TD method. In addition, you can't make a change on a single point because sampling is a non differentiable operation, instead, the gradient is estimated by collecting a lot of trajectories.

+",28538,,2444,,5/30/2022 8:46,5/30/2022 8:46,,,,1,,,,CC BY-SA 4.0 +16141,2,,15658,10/29/2019 11:19,,1,,"

In theory, deeper architectures can encode more information than shallower ones, because they can perform more transformations of the input, which leads to better results at the output. The training is slower because back-propagation is quite expensive: as you increase the depth, you increase the number of parameters and gradients that need to be computed.

+ +

Another issue that you need to take into account is the effect of the activation function. Saturating functions, like the sigmoid and the hyperbolic tangent, result in very small gradients at their edges, while other activation functions are just flat, e.g. ReLU is flat on the negatives. Therefore, there is little or no error to propagate, because the gradient is either very small (as in saturating functions) or zero. Batch Norm greatly assists here because it collapses values into better ranges, where the gradients aren't close to zero.

+",28538,,28538,,10/29/2019 20:52,10/29/2019 20:52,,,,0,,,,CC BY-SA 4.0 +16142,2,,4683,10/29/2019 12:19,,0,,"

In the case of applying both to natural language, CNNs are good at extracting local and position-invariant features, but they do not capture long-range semantic dependencies. They just consider local key phrases.

+ +

So, when the result is determined by the entire sentence or by a long-range semantic dependency, a CNN is not effective, as shown in this paper, where the authors compared both architectures on NLP tasks.

+ +

This can be extended to the general case.

+",30892,,,,,10/29/2019 12:19,,,,0,,,,CC BY-SA 4.0 +16143,2,,13644,10/29/2019 14:33,,7,,"

Learning without Forgetting (LwF) is an incremental learning (sometimes also called continual or lifelong learning) technique for neural networks, which is a machine learning technique that attempts to avoid catastrophic forgetting. There are several incremental learning approaches. LwF is an incremental learning approach based on the concept of regularization. In section 3.2 of the paper Continual lifelong learning with neural networks: A review (2019), by Parisi et al., other regularisation-based continual learning techniques are described.

+ +

LwF could be seen as a combination of distillation networks and fine-tuning, which refers to the re-training with a low learning rate (which is a very rudimentary technique to avoid catastrophically forgetting the previously learned knowledge) an already trained model $\mathcal{M}$ with new and (usually) more specific dataset, $\mathcal{D}_{\text{new}}$, with respect to the dataset, $\mathcal{D}_{\text{old}}$, with which you originally trained the given model $\mathcal{M}$.

+ +

LwF, as opposed to other continual learning techniques, only uses the new data, so it assumes that past data (used to pre-train the network) is unavailable. The paper Learning without Forgetting goes into the details of the technique and it also describes the concepts of feature extraction, fine tuning and multitask learning, which are related to incremental learning techniques.

+ +

What is the difference between LwF and transfer learning? LwF is a combination of distillation networks and fine-tuning, which is a transfer learning technique, which is a special case of incremental learning, where the old and new tasks are different, while, in general, in incremental learning, the old and new tasks can also be the same (which is called domain adaptation).

+",2444,,2444,,10/29/2019 16:11,10/29/2019 16:11,,,,1,,,,CC BY-SA 4.0 +16144,2,,16131,10/29/2019 16:16,,2,,"

Although you can frame your problem as a bandit problem or RL, it has other workable interpretations. Critical information from your comments is that:

+ +
    +
  • Total reward is not a simple sum of all the results from 66 different machines. There are interactions between machines.

  • +
  • Total reward is deterministic.

  • +
+ +

This looks like a problem in combinatorial optimisation. There are many possible techniques you can throw at this. Which ones work best will depend on how nonlinearities and dependence between choices on different machines affect the end results.

+ +

Best Case

+ +

With deterministic results, if changes between machines were completely isolated, you could search each machine in turn, because you can treat all other 65 components as a constant if you don't change their settings. That would be very simple to code and take $7 \times 66 = 462$ steps to find the optimum result.
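
A sketch of that best-case, machine-by-machine search (score stands for your black-box evaluation, which I am assuming is available as a Python function):

def coordinate_search(score, n_machines=66, n_arms=7):
    # Start from an arbitrary setting and optimise one machine at a time
    setting = [0] * n_machines
    for m in range(n_machines):
        best_arm = max(range(n_arms),
                       key=lambda a: score(setting[:m] + [a] + setting[m+1:]))
        setting[m] = best_arm
    return setting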

+ +

Worst Case

+ +

In the worst case, the dependencies are so strong and chaotic that there is essentially no predictable difference between changing a single machine's setting and all of them. Pseudo-random number generators and secure hashing functions have this property, as do many quite simple physical systems with feedback loops.

+ +

In the worst case, there will be a ""magic setting"" with best results, and only a brute force search through all combinations of levers will find it.

+ +

In order to apply any more efficient search method, you have to assume that the response to combinations of levers is not quite so chaotic.

+ +

How to Search?

+ +

It seems likely from your description, that the best search algorithm is going to be somewhere between simple machine-by-machine optimisation and an exhaustive global search. However, it is hard to tell just where on that spectrum it lies.

+ +

There are a few different ways to frame it as reinforcement learning. For instance, you could use current switch combination as state, and run 66 switch changes as an ""episode"".

+ +

I would suggest that genetic algorithms are a good match for this search task, assuming there is at least some local-only effect that means combining two good solutions is likely to result in a third good solution. Genetic algorithms don't need calculations for gradients, and fit nicely with discrete combinations. Your genome can simply be the 66 different switch positions, and the fitness rating your black-box score for those positions, as in the sketch below.
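
A bare-bones sketch of such a genetic algorithm (again, score is your black-box evaluator; population size, number of generations and mutation rate are arbitrary choices):

import random

def genetic_search(score, n_machines=66, n_arms=7,
                   pop_size=50, generations=200, mutation_rate=0.05):
    population = [[random.randrange(n_arms) for _ in range(n_machines)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[:pop_size // 2]           # simple truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_machines)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [random.randrange(n_arms) if random.random() < mutation_rate else g
                     for g in child]                    # per-gene mutation
            children.append(child)
        population = parents + children
    return max(population, key=score)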

+ +

Plenty of other combinatorial search algorithms are available. Enough to fill a book or two. One place you could look for inspiration is Clever Algorithms: Nature-Inspired Programming Recipes which is a free PDF.

+",1847,,1847,,10/30/2019 9:13,10/30/2019 9:13,,,,1,,,,CC BY-SA 4.0 +16146,1,,,10/29/2019 20:49,,5,138,"

In the deep learning specialization course by Andrew Ng, in the video Sequence Models (minute 4:13), he says that in negative sampling we have to choose a sample of words from the corpus to train rather than choosing the whole corpus. But he said that, for smaller datasets, we need a bigger number of samples, for example, 5-20, and, for larger datasets, we need a smaller sample, for example, 2-5. By sample, I am referring to the number of words along with the target word we have taken to train the model.

+ +

Why do small datasets require more samples, while big datasets require fewer samples?

+",30907,,30907,,10/30/2019 18:24,10/30/2019 18:55,"Why do small datasets require more samples, while big datasets require fewer samples in negative sampling?",,1,2,0,,,CC BY-SA 4.0 +16147,1,16150,,10/29/2019 22:30,,4,110,"

I have read a lot about NAS, but I still do not understand one concept: When setting up a neural network, hyperparameters (such as the learning rate, dropout rate, batch size, filter size, etc.) need to be set up.

+

In NAS, only the best architecture is decided, e.g. how many layers and neurons. But what about the hyperparameters? Are they randomly chosen?

+",30909,,2444,,10/29/2021 15:00,10/29/2021 15:01,"When using Neural Architecture Search, how are the hyper-parameters chosen?",,1,0,,,,CC BY-SA 4.0 +16148,1,,,10/30/2019 4:19,,3,712,"

Natural gradient aims to do a steepest descent on the ""function"" space, a manifold that is independent from how the function is parameterized. It argues that the steepest descent on this function space is not the same as steepest descent on the parameter space. We should favor the former.

+ +

Since, for example in a regression task, a neural net could be interpreted as a probability function (Gaussian with the output as mean and some constant variance), it is ""natural"" to form a distance on the manifold under the KL-divergence (and a Fisher information matrix as its metric).

+ +

Now, if I want to be creative, I could use the same argument to use ""square distance"" between the outputs of the neural nets (distance of the means) which I think is not the same as the KL.

+ +

Am I wrong, or it is just another legit way? Perhaps, not as good?

+",9793,,,,,6/26/2020 9:55,Why they use KL divergence in Natural gradient?,,3,0,,,,CC BY-SA 4.0 +16149,2,,16120,10/30/2019 4:21,,0,,"

I would go so far as to say that, unless the training examples include predicate data (that is, data about conditions leading up to a crime or non-crime), you cannot have enough information to predict the occurrence of a crime from conditions or events that happen in advance of a potential crime not yet committed.

+",28348,,,,,10/30/2019 4:21,,,,2,,,,CC BY-SA 4.0 +16150,2,,16147,10/30/2019 5:33,,2,,"

It's not clearly stated (it's not stated at all on Wikipedia), but, after a bit of searching, I found an answer here about a third of the way down the page:

+
+

The best performing architecture observed during the training of the controller is taken, and a grid search is performed over some basic hyperparameters such as learning rate and weight decay in order to achieve near STOTA (state of the art) performance.

+
+

So, as a direct answer: The norm; A grid search.

+",26726,,2444,,10/29/2021 15:01,10/29/2021 15:01,,,,0,,,,CC BY-SA 4.0 +16151,1,,,10/30/2019 6:32,,2,146,"

I have lengthy time-series datasets which contain several variables (from sensors, etc.) to be classified as actions or states. Provided they are successfully classified, I want to learn a control policy using DDPG.
+But I have no knowledge of the environment.
+How can I learn my policy offline, only by using these datasets, without having any model of the environment? After learning offline first, the policy can then be used to learn and control online later in a certain real-world environment.

+ +

First, I know that an experience buffer can be used to store the datasets. How should the buffer size be set in this case?
+From what I understand, DDPG needs lots of data to learn.
+Should I build an environment model using these datasets? Or do I not really need this step?

+ +

All of this will be implemented in Python, maybe with the help of other tools if needed. There are some implementations of DDPG available, so that is not the main problem, but the implementation must be tweaked to solve my proposed problem. Normally, a DDPG implementation in Python requires a Gym environment as an input, so I must change it to satisfy my needs, as I don't need Gym for my use case. Also, these Python implementations are online algorithms, so you need to interact directly with the environment model for the algorithm to work.

+ +

Can someone help me tackle this problem or give me some advice regarding this? I can help giving more details if needed. Thank you.

+ +

Regards

+",30918,,30918,,10/30/2019 7:11,10/30/2019 7:11,How to learn using DDPG in python solely using a timeseries datasets,,0,0,,,,CC BY-SA 4.0 +16152,1,16154,,10/30/2019 7:19,,4,741,"

In my AI literature research, I often notice authors use the term 'democratizing AI', especially in the AutoML area. For example in the abstract (last sentence) of this paper:

+
+

LEAF therefore forms a foundation for democratizing and improving AI, as well as making AI practical in future applications.

+
+

I think I have an idea of what this means, but I would like to ask you for some more specific answers.

+",22659,,2444,,12/12/2021 12:44,12/12/2021 12:44,What does 'democratizing AI' exactly mean?,,1,0,,,,CC BY-SA 4.0 +16153,1,,,10/30/2019 9:22,,6,338,"

I am interested in the possibility of having extra input along with the main data. For instance, a medical application that would rely mostly on an image: how could one also account for sex, age, etc.?

+ +

It is certainly possible to put the output of a CNN and other data into, say, a densely connected network; but it seems inefficient. Are there well-established ways of doing something like this?

+",23584,,2444,,11/3/2019 13:05,11/3/2019 13:05,Are there well-established ways of mixing different inputs (e.g. image and numbers)?,,1,0,,,,CC BY-SA 4.0 +16154,2,,16152,10/30/2019 9:59,,4,,"

In this particular context, ""Democratize"" means to make more accessible to people.

+ +

Thus, ""Democratizing AI"" means to make AI softwares and AI programming available, accessible and easy to use for the vast majority of people.

+",19859,,,,,10/30/2019 9:59,,,,1,,,,CC BY-SA 4.0 +16155,2,,16080,10/30/2019 10:12,,1,,"

You can find leaderboards as well as code at this address.

+ +

For now, HRNetV2 leads the game.

+ +

The U-Net architecture is part of a broad family of network architectures that aggregate multi-scale features to extract finer details useful for semantic segmentation. Examples are Feature Pyramid Networks (FPN), Hourglass, Encoder-Decoder, MatrixNet, etc...

+ +

+",19859,,,,,10/30/2019 10:12,,,,0,,,,CC BY-SA 4.0 +16156,2,,16153,10/30/2019 11:27,,4,,"

A more efficient way would be creating a multi input model, with something like this:

+ +
___________    _____________
+|__Image__|    |Other input|            
+_____|_____     _____|_____
+|___CNN___|     |__Dense___|
+_____|______    _____|______
+|_Features1_|   |_Features2_|
+         __|_____|__
+         |__Merge___|
+         _____|______
+         |___Dense__|
+         _____|_____
+         |__Output__|
+
+ +
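
A hedged sketch of that diagram in Keras' functional API (the layer sizes and input shapes are arbitrary placeholders):

from tensorflow import keras
from tensorflow.keras import layers

# Image branch (CNN)
image_in = keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Other-input branch (Dense)
tab_in = keras.Input(shape=(10,))
y = layers.Dense(16, activation="relu")(tab_in)

# Merge the two feature vectors and produce the output
merged = layers.concatenate([x, y])
z = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(z)

model = keras.Model(inputs=[image_in, tab_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")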

However, you could also combine the unstructured data to the image, as stated in the quora answer:

+ +
+

The out-of-the-box method:

+ +

If you want to just take your CNN library and use it without much + thought, there's an easy way to do it.

+ +

Your image has “channels”: red blue and green channels, for example. + Just add another channel for each unstructured feature. Those channels + will just be 2D-arrays whose entries are all the same value: the value + of your outside feature.

+ +

It means more memory and more parameters though. If you have a lot of + unstructured data, this can become prohibitively expensive.

+ +

The more efficient method (and still not hard):

+ +

You use one or more deconvolutional filters to bring the unstructured + data up to the size of the structured data, concatenate them along the + channel dimension, and keep going as if nothing happened.

+
+ +

Source: How can l train a CNN with extra features other than the pixels? (Quora)

+",23713,,1671,,11/2/2019 16:13,11/2/2019 16:13,,,,0,,,,CC BY-SA 4.0 +16157,1,16159,,10/30/2019 12:09,,2,208,"

In OpenAI Gym "reward" is defined as:

+
+

reward (float): amount of reward achieved by the previous action. The +scale varies between environments, but the goal is always to increase +your total reward.

+
+

I am training Hindsight Experience Replay on Fetch robotics environments, where rewards are sparse and binary indicating whether or not the task is completed. The original paper implementing HER uses success rate as a metric in its plots, like so:

+

+

On page 5 of the original paper, it is stated that the reward is binary and sparse.

+

When I print the rewards obtained during a simulation of FetchReach-v1 trained with HER, I get the following values. The first column shows the reward and the second column shows the episode length.

+

+

As can be seen, at every time step, I am getting a reward, sometimes I get a $-1$ reward at every time step throughout the episode for a total of $-50$. The maximum reward I can achieve throughout the episode is $0$.

+

Therefore my question is: What is the reward obtained at each time-step? What does it represent and how is this different from the success rate?

+",14390,,2444,,11/21/2020 12:39,11/21/2020 12:39,What is the difference between success rate and reward when dealing with binary and sparse rewards?,,1,0,,,,CC BY-SA 4.0 +16159,2,,16157,10/30/2019 12:20,,1,,"

Page 6 of the paper describes the exact reward functions, and why they were used:

+ +
+

Goals: Goals describe the desired position of the object (a box or a + puck depending on the task) with some fixed tolerance of $\epsilon$ i.e. $G = \mathcal{R}^3$ + and $f_g(s) = [|g − s_{object}| ≤ \epsilon]$, where $s_{object}$ is the position of + the object in the state s. The mapping from states to goals used in + HER is simply $m(s) = s_{object}$.

+ +

Rewards: Unless stated otherwise we use binary and sparse rewards $r(s, a, g) = −[f_g(s') = 0]$, where $s'$ is the state after the execution of the action $a$ in the state $s$. We compare sparse and shaped reward functions in Sec. 4.4.

+
+ +

So, at least in the base version (which I believe is your fetchreach-v1), the agent receives a reward of -1 for every timestep spent more than $\epsilon$ from the goal state, and a reward of 0 for every timestep spent within $\epsilon$ of the goal state. Thus, a score of -5.0 would seem to correspond to the agent moving directly to the goal and staying there, while a score of -50.0 would correspond to the agent failing to reach the goal state entirely.

+",16909,,,,,10/30/2019 12:20,,,,2,,,,CC BY-SA 4.0 +16160,1,16167,,10/30/2019 14:29,,3,192,"

When training a neural network with 4 GPUs using PyTorch, the performance is not even 2 times (it is between 1 and 2 times) that of using one GPU. From nvidia-smi, we see that the GPUs are used for a few milliseconds, and for the next 5-10 seconds it looks like data is being off-loaded and loaded for new executions (GPU usage is mostly 0%). Is there any way in PyTorch to improve the data upload and offload for the GPU execution?

+",27221,,,,,10/30/2019 23:47,Training network with 4 GPUs performance is not exactly 4 times over one GPU why?,,1,1,,10/10/2021 17:05,,CC BY-SA 4.0 +16161,1,,,10/30/2019 15:17,,1,98,"

There are several occasions where reinforcement learning can be used as a control method.
+The action is, for example, the set target temperature (which in many cases changes with time), and the state is, for example, the current temperature and other variables. The policy is then the control method that is going to be learnt using reinforcement learning.

+ +

As there is dead time (input lag) and time delay in the real world, how can one tackle this problem when using reinforcement learning as a control method? Thank you.

+",30918,,,,,10/30/2019 15:17,Solving the dead time problem for control using reinforcement learning,,0,8,,,,CC BY-SA 4.0 +16162,1,,,10/30/2019 17:03,,2,290,"

A general AI x creates another AI y which is better than x.

+ +

y creates an AI better than itself.

+ +

And so on, with each generation's primary goal to create a better AI.

+ +

Is there a name for this.

+ +

By better, I mean survivability, ability to solve new problems, enhance human life physically and mentally, and advance our civilization to an intergalactic civilization to name a few.

+",30935,,2444,,10/31/2019 12:42,11/5/2019 6:36,What is the name of an AI whose primary goal is to create a better AI?,,3,1,,,,CC BY-SA 4.0 +16163,2,,16162,10/30/2019 17:20,,6,,"

I don't think there is a single standard word or phrase that covers just this concept. Perhaps recursive self-improvement matches the idea concisely - but that is not specific AI jargon.

+ +

Very little is understood about what strength this effect can have or what the limits are. Will 10 generations of self-improvement lead to a machine that is 10% better, 10 times better, or $2^{10}$ times better? And by what measure?

+ +

Some futurologists suggest this might be a very strong effect, and use the term Singularity to capture the idea that intelligence growth through recursive self-improvement will be strong, exceed human intelligence, and lead to some form of super-intelligent machine - the point at which this goal is reached is called The Singularity. Ray Kurzweil is a well-known proponent of this idea.

+ +

Specifically, use of the term Singularity implies more than just the basic recursion that you suggest, and includes assumptions of a very large effect. Plus technically, it refers to a stage that results from the recursion, not the recursion itself.

+ +

However, despite the popularity of it as a concept, whether or not such self-improving system will have a large impact on the generation of intelligent machines is completely unknown at this stage. Related research about general intelligence is still in its infancy, so it is not even clear what would count as being the first example system x.

+",1847,,1847,,10/30/2019 17:33,10/30/2019 17:33,,,,3,,,,CC BY-SA 4.0 +16164,2,,16148,10/30/2019 18:39,,2,,"

The KL divergence has slightly different interpretations depending on the context. The related Wikipedia article contains a section dedicated to these interpretations. Independently of the interpretation, the KL divergence is always defined as a specific function of the cross-entropy (which you should be familiar with before attempting to understand the KL divergence) between two distributions (in this case, probability mass functions)

+ +

\begin{align}
+D_\text{KL}(P\parallel Q)
+&= -\sum_{x\in\mathcal{X}} p(x) \log q(x) + \sum_{x\in\mathcal{X}} p(x) \log p(x) \\
+&= H(P, Q) - H(P)
+\end{align}
+where $H(P, Q)$ is the cross-entropy of the distributions $P$ and $Q$ and $H(P) = H(P, P)$.
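
A quick numerical sketch of this relation (the two distributions are made up):

import numpy as np

p = np.array([0.2, 0.5, 0.3])   # example distribution P
q = np.array([0.1, 0.6, 0.3])   # example distribution Q

cross_entropy = -np.sum(p * np.log(q))   # H(P, Q)
entropy = -np.sum(p * np.log(p))         # H(P)
kl_pq = cross_entropy - entropy          # D_KL(P || Q)
kl_qp = np.sum(q * np.log(q / p))        # D_KL(Q || P), generally different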

+ +

The KL divergence is not a metric: it is not symmetric, that is, in general, $D_\text{KL}(P\parallel Q) \neq D_\text{KL}(Q\parallel P)$, and it does not obey the triangle inequality.

+ +

Given that a neural network is trained to output the mean (which can be a scalar or a vector) and the variance (which can be a scalar, a vector or a matrix), why don't we use a metric like the MSE to compare means and variances? When you use the KL divergence, you don't want to compare just numbers (or matrices), but probability distributions (more precisely, probability densities or mass functions), so you will not compare just the mean and the variance of two different distributions, but you will actually compare the distributions. See the example of the application of the KL divergence in the related Wikipedia article.

+",2444,,2444,,10/30/2019 18:52,10/30/2019 18:52,,,,1,,,,CC BY-SA 4.0 +16165,2,,16146,10/30/2019 18:55,,1,,"

He likely found this to be a best practice to avoid overfitting. With a small dataset, if you only use small and easy-to-learn sequences (fewer words -> fewer degrees of freedom), then you expose your model to the risk of overfitting that dataset, whereas on a large dataset, which has a lot more total information, you can train on small sequences without being at risk of overfitting, because although the smaller sequences will be easier to learn, the variance of the sequences will be much higher.

+",20044,,,,,10/30/2019 18:55,,,,0,,,,CC BY-SA 4.0 +16166,2,,16148,10/30/2019 19:43,,1,,"

Yes, Squared distances & KL Divergence are not the same. Squared distance between means is not a useful metric as it doesn't gauge the amount of similarity between 2 distributions.

+ +

When we compute \begin{align}
+D_\text{KL}(P\parallel Q)
+\end{align}
+we are computing the amount of information that is lost when we approximate P as Q. Ideally, we would want the KL divergence to be as low as possible.
+Here is an interesting article, https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained, where the author has explained the KL divergence with a toy example.

+ +

I hope it helps :)

+",30939,,,,,10/30/2019 19:43,,,,1,,,,CC BY-SA 4.0 +16167,2,,16160,10/30/2019 23:35,,2,,"

Your dataset class probably has a lot of preprocessing code. You should use a DataLoader: it will prefetch data from the dataset while the GPU is processing. Also, you can process all the data beforehand and save it to a file. Multiple GPUs cannot scale perfectly, as the results have to be gathered on one GPU to calculate the loss. The performance of 4 GPUs is around 3.5x that of one. A larger batch size would also help, as each GPU will get 1/4 of the batch; a batch size of 64-128 is good for 4 GPUs. See the following example code for CIFAR-10 multi-GPU training. It uses DataLoaders and DataParallel.

+ +
import os
+import time
+import datetime
+
+import torch
+import torch.nn as nn
+import torch.optim as optim
+from torch.optim import lr_scheduler
+import torch.backends.cudnn as cudnn
+
+import torchvision
+import torchvision.transforms as transforms
+from torchvision.datasets import CIFAR10
+from torch.utils.data import DataLoader
+
+from model import pyramidnet
+import argparse
+from tensorboardX import SummaryWriter
+
+
+parser = argparse.ArgumentParser(description='cifar10 classification models')
+parser.add_argument('--lr', default=0.1, help='')
+parser.add_argument('--resume', default=None, help='')
+parser.add_argument('--batch_size', type=int, default=768, help='')
+parser.add_argument('--num_worker', type=int, default=4, help='')
+parser.add_argument(""--gpu_devices"", type=int, nargs='+', default=None, help="""")
+args = parser.parse_args()
+
+gpu_devices = ','.join([str(id) for id in args.gpu_devices])
+os.environ[""CUDA_VISIBLE_DEVICES""] = gpu_devices
+
+
+def main():
+    best_acc = 0
+
+    device = 'cuda' if torch.cuda.is_available() else 'cpu'
+
+    print('==> Preparing data..')
+    transforms_train = transforms.Compose([
+        transforms.RandomCrop(32, padding=4),
+        transforms.RandomHorizontalFlip(),
+        transforms.ToTensor(),
+        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
+
+    dataset_train = CIFAR10(root='../data', train=True, download=True, 
+                            transform=transforms_train)
+
+    train_loader = DataLoader(dataset_train, batch_size=args.batch_size, 
+                              shuffle=True, num_workers=args.num_worker)
+
+    # there are 10 classes so the dataset name is cifar-10
+    classes = ('plane', 'car', 'bird', 'cat', 'deer', 
+               'dog', 'frog', 'horse', 'ship', 'truck')
+
+    print('==> Making model..')
+
+    net = pyramidnet()
+    net = nn.DataParallel(net)
+    net = net.to(device)
+    num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
+    print('The number of parameters of model is', num_params)
+
+    criterion = nn.CrossEntropyLoss()
+    optimizer = optim.Adam(net.parameters(), lr=args.lr)
+    # optimizer = optim.SGD(net.parameters(), lr=args.lr, 
+    #                       momentum=0.9, weight_decay=1e-4)
+
+    train(net, criterion, optimizer, train_loader, device)
+
+
+def train(net, criterion, optimizer, train_loader, device):
+    net.train()
+
+    train_loss = 0
+    correct = 0
+    total = 0
+
+    epoch_start = time.time()
+    for batch_idx, (inputs, targets) in enumerate(train_loader):
+        start = time.time()
+
+        inputs = inputs.to(device)
+        targets = targets.to(device)
+        outputs = net(inputs)
+        loss = criterion(outputs, targets)
+
+        optimizer.zero_grad()
+        loss.backward()
+        optimizer.step()
+
+        train_loss += loss.item()
+        _, predicted = outputs.max(1)
+        total += targets.size(0)
+        correct += predicted.eq(targets).sum().item()
+
+        acc = 100 * correct / total
+
+        batch_time = time.time() - start
+
+        if batch_idx % 20 == 0:
+            print('Epoch: [{}/{}]| loss: {:.3f} | acc: {:.3f} | batch time: {:.3f}s '.format(
+                batch_idx, len(train_loader), train_loss/(batch_idx+1), acc, batch_time))
+
+    elapse_time = time.time() - epoch_start
+    elapse_time = datetime.timedelta(seconds=elapse_time)
+    print(""Training time {}"".format(elapse_time))
+
+
+if __name__=='__main__':
+    main()
+
+ +

Source: https://github.com/dnddnjs/pytorch-multigpu/blob/master/data_parallel/train.py

+ +

Hope I can help you and have a nice day!

+",23713,,23713,,10/30/2019 23:47,10/30/2019 23:47,,,,0,,,,CC BY-SA 4.0 +16168,2,,16119,10/30/2019 23:45,,0,,"

This depends on the type of data you use.

+ +

Time sequence data

+ +

If the data evolves in time, an LSTM or a similar RNN should be used. RNNs calculate outputs through time and work very well on time-series data, as they have a real sense of time. While a CNN or an MLP could work for time-series data, they often don't work as well, because the relation between different timesteps of the data is not explicitly modelled.

+ +

Non- Time sequence data

+ +

According to your previous comment, the data seems to be of this kind. In this case, a normal multi-layer perceptron works well: the data is a direct mapping between the input and the output. If the input data is an image, use a CNN.

+ +

For example code in Keras, see here. You need the pandas module for this to work. Run pip install pandas to install pandas.

+ +
from keras.models import Sequential
+from keras.utils import np_utils
+from keras.layers.core import Dense, Activation, Dropout
+
+import pandas as pd
+import numpy as np
+
+# Read data
+train = pd.read_csv('../input/train.csv')
+labels = train.ix[:,0].values.astype('int32')
+X_train = (train.ix[:,1:].values).astype('float32')
+X_test = (pd.read_csv('../input/test.csv').values).astype('float32')
+
+# convert list of labels to binary class matrix
+y_train = np_utils.to_categorical(labels) 
+
+# pre-processing: divide by max and substract mean
+scale = np.max(X_train)
+X_train /= scale
+X_test /= scale
+
+mean = np.std(X_train)
+X_train -= mean
+X_test -= mean
+
+input_dim = X_train.shape[1]
+nb_classes = y_train.shape[1]
+
+# Here's a Deep Dumb MLP (DDMLP)
+model = Sequential()
+model.add(Dense(128, input_dim=input_dim))
+model.add(Activation('relu'))
+model.add(Dropout(0.15))
+model.add(Dense(128))
+model.add(Activation('relu'))
+model.add(Dropout(0.15))
+model.add(Dense(nb_classes))
+model.add(Activation('softmax'))
+
+# we'll use categorical xent for the loss, and RMSprop as the optimizer
+model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
+
+print(""Training..."")
+model.fit(X_train, y_train, nb_epoch=10, batch_size=16, validation_split=0.1, show_accuracy=True, verbose=2)
+
+print(""Generating test predictions..."")
+preds = model.predict_classes(X_test, verbose=0)
+
+def write_preds(preds, fname):
+    pd.DataFrame({""ImageId"": list(range(1,len(preds)+1)), ""Label"": preds}).to_csv(fname, index=False, header=True)
+
+write_preds(preds, ""keras-mlp.csv"")
+
+ +

Code source: https://www.kaggle.com/fchollet/simple-deep-mlp-with-keras

+ +

In conclusion, in the case of your data, a multi-layer perceptron should work. Hope it can help you and have a nice day!

+",23713,,23713,,11/4/2019 4:59,11/4/2019 4:59,,,,2,,,,CC BY-SA 4.0 +16170,1,,,10/31/2019 7:00,,1,28,"

I am reproducing the results from Hindsight Experience Replay by Andrychowicz et. al. In the original paper they present the results below, where the agent is trained for 200 epochs.

+ +

200 epochs * 800 episodes * 50 time steps = 8,000,000 total time steps.

+ +

+ +

I try to reproduce the results but instead of using 8 cpu cores, I am using 19 CPU cores.

+ +

I train the FetchPickAndPlace for 120 epochs, but with only 50 episodes per epoch. Therefore 120 * 50 * 50 = 300,000 iterations. I present the curve below:

+ +

+ +

and logger output for the first two epochs:

+ +

+ +

Now, as can be seen from my tensorboard plot, after 30 epochs we get a steady success rate very close to 1. 30 epochs * 50 episodes * 50 time steps = 75,000 iterations. Therefore it took the algorithm 75,000 time steps to learn this environment.

+ +

The original paper took approximately 50 * 800 * 50 = 2,000,000 time steps to achieve the same goal.

+ +

How is it that in my case the environment was solved nearly 30 times faster? Are there any flaws in my workings above?

+ +

NB: This was not a one off case. I tested again and got the same results.

+ +

Post on Reddit: https://www.reddit.com/r/reinforcementlearning/comments/dpjwfu/getting_same_results_with_half_the_number_of/

+",14390,,14390,,10/31/2019 7:50,10/31/2019 7:50,How am I getting same results 30 times faster than in original HER paper?,,0,0,,,,CC BY-SA 4.0 +16171,1,,,10/31/2019 9:08,,1,115,"

I’m using a simple neural network to solve a reinforcement learning problem.

+ +

The configuration is:

+ +

X-inputs: The current state
+Y-outputs: The possible actions

+ +

Whenever the network yields a “good” solution, I “reward” the network by training it a number of times.

+ +

Whenever the network yields a “bad” or “neutral” solution, I ignore it.

+ +

This seems to be working somewhat, but from what I read, everyone else (in broad terms) seems to be using a 2-neural-network configuration for similar tasks (a policy network and a value network).

+ +

Am I missing something? And are there any obvious caveats of the “single network” method I am using?

+ +

Supplemental question: Are there other methods of “rewarding” a network, aside from simply training it?

+ +

Thanks,

+",26768,,,,,11/3/2019 18:50,Neural network for reinforcement learning,,1,1,,,,CC BY-SA 4.0 +16172,1,16181,,10/31/2019 10:19,,1,1680,"

I have an image classification task to solve, but based on quite simple/good terms:

+ +
    +
  • There are only two classes (either good or not good)
  • +
  • The images always show the same kind of piece (either with or w/o fault)
  • +
  • That piece is always filmed from the same angle & distance
  • +
  • I have at least 1000 sample images for both classes
  • +
+ +

So I thought it should be easy to come up with a good CNN solution - and it was. I created a VGG16-based model with a custom classifier (Keras/TF). Via transfer learning I was able to achieve up to 100% validation accuracy during model training, so all is fine on that end.

+ +

Out of curiosity and because the VGG-based approach seems a bit ""slow"", I also wanted to try it with a more modern model architecture as the base, so I did with ResNet50v2 and Xception. I trained both similar to the VGG-based model, tried it with several hyperparameter modifications, etc. However, I was not able to achieve a better validation accuracy than 95% - so much worse than with the ""old"" VGG architecture.

+ +

Hence my question is:

+ +
+

Given these ""simple"" (always the same) images and only two classes, is the VGG model probably a better base than a modern network like ResNet or Xception? Or is it more likely that I messed something up with my model or simply got the training/hyperparameters not right?

+
+",14504,,2444,,6/14/2020 22:17,12/4/2020 5:04,Is a VGG-based CNN model sometimes better for image classfication than a modern architecture?,,3,1,,,,CC BY-SA 4.0 +16173,1,16174,,10/31/2019 11:34,,1,95,"

I'm trying to replace the strided convolutions of Keras' MobileNet implementation with the ConvBlurPool operation as defined in the Making Convolutional Networks Shift-Invariant Again paper. In the paper, a ConvBlurPool is implemented as follows:

+ +

$$
+Relu \circ Conv_{k,s} \rightarrow Subsample_s \circ Blur_m \circ Relu \circ Conv_{k,1}
+$$
+where $k$ is the convolution's number of output kernels, $s$ is the stride, $m$ is the blurring kernel size, and the subsample+blur is implemented as a strided convolution with a constant kernel.

+ +

My issues start when batch normalization enter the picture. +In MobileNet, a conv block is defined as follows (omitting the zero-padding):

+ +

$$ +Relu \circ BatchNorm \circ Conv_{k,s} +$$

+ +

I am leaning towards converting it to:

+ +

$$ +Subsample_s \circ Blur_m \circ Relu \circ BatchNorm \circ Conv_{k,1} +$$

+ +

i.e., putting the BN before the activation as it's normally done. This is not equivalent though, because the first BN operates on the downsampled signal.

+ +

Another possibility would be:

+ +

$$ +BatchNorm \circ Subsample_s \circ Blur_m \circ Relu \circ Conv_{k,1} +$$

+ +

with the BN as last operation. This is also not equivalent, because now the BN comes after the ReLu.

+ +

Is there any reason to prefer one option over the other? Are there any other options I'm not considering?

+",22086,,,,,10/31/2019 12:18,Positioning of batch normalization layer when converting strided convolution to convolution + blurpool,,1,0,,,,CC BY-SA 4.0 +16174,2,,16173,10/31/2019 12:18,,0,,"

After finding the paper authors' Github, I saw that, although they only have a MobileNet V2 model implemented, they choose the Subsample-after-ReLu option (the first one in the question). +Although this doesn't fully answer my question, I'll take ""the paper authors do it this way"" as enough reason to prefer this over the alternative.

+",22086,,,,,10/31/2019 12:18,,,,1,,,,CC BY-SA 4.0 +16175,1,16429,,10/31/2019 12:51,,2,1036,"

Suppose I trained on the images of two people, say Bob and Thomas. When I run the algorithm to detect the face of a totally different person, say John, then John is recognized as Bob or Thomas. How do I avoid this?

+ +

I am studying a face recognition model on GitHub (link) which uses the FaceNet model. The problem is that, when an unknown image (an image of a person who is not in the training data set) is given to identify, it identifies the unknown person as one of the persons in the data set. I searched on the web and found that I need to increase the threshold value. But only when I increase the threshold value to 0.99,0.99,99 does it reject the unknown image (the image of the person who is not in the data set), and sometimes it even rejects the image of a person who is in the dataset.

+ +

I guess that, by increasing the threshold value, what we are ensuring is that an image is classified as one of the persons in the training data only when they are close enough.

+ +

How can I make changes so that the model works properly? And can someone explain the threshold in the FaceNet model better?

+",30306,,30306,,11/10/2019 17:07,11/11/2019 10:21,Three step threshold in Facenet model of face recogniton,,1,5,,,,CC BY-SA 4.0 +16176,1,,,10/31/2019 12:58,,1,94,"

Suppose that my task is to label news articles; that is, to classify which category a news article belongs to. Using the labelled data (with old labels) that I have, I have trained a model for this.

+ +

For relevancy purposes, certain labels may be split into multiple new labels. For example, 'Sports' may split into 'Sports' and 'E-Sports'. Because of these new labels, I will need to retrain my model. However, my training data is labelled with the old labels. What can I do to address these 'label updates'?

+ +

My idea: Perhaps use some unsupervised clustering method (K-means?) to split the data with the old labels into the new labels. (But how can we be certain which cluster corresponds to which new label?) Then use this 'updated' data to train a model. Is this correct?

+",30729,,,,,11/4/2019 7:33,How to handle classification with label updates?,,1,2,,,,CC BY-SA 4.0 +16177,1,16178,,10/31/2019 13:04,,0,197,"

I'm working on a project in which I have to build a multi-layer perceptron with two hidden layers of 3 nodes each. The target value in my data contains 8 unique values/classes. One of the tasks states ""For the most popular class CYT, plot weight values per iteration for the last layer (3 weights and bias)"". My question is: does this statement make sense? I can access the weights and biases of a layer, but I don't understand what the weight values for a specific class are or how to access them.

+",21084,,,,,10/31/2019 13:38,How can we print weights per iteration in a simple feed forward MLP for an specific class?,,1,0,,,,CC BY-SA 4.0 +16178,2,,16177,10/31/2019 13:38,,1,,"

A common model used for this kind of classification task is to have one output neuron per class. So, for example, neuron 1 may have a loss function that is related to outputting ""1"" for examples of class 1, and ""0"" for examples of other classes. Neuron 2 may be asked to do the same, but for class 2 rather than class 1.

+ +

If you use a model of this kind, you can pull the weights for each neuron in the final output layer. It sounds like this is what you are being asked to plot.
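
+ +

For example, in Keras you could record these values with a callback. Below is a minimal sketch, not code from the question: it assumes the last layer is a Dense layer with one unit per class and that CYT is class index 0, and it logs once per epoch (use on_train_batch_end instead if "per iteration" means per batch).

from tensorflow import keras

class WeightLogger(keras.callbacks.Callback):
    def __init__(self, class_index=0):
        super().__init__()
        self.class_index = class_index
        self.history = []                    # one entry per epoch: (3 weights, bias)

    def on_epoch_end(self, epoch, logs=None):
        kernel, bias = self.model.layers[-1].get_weights()
        # column `class_index` of the kernel holds the weights feeding that class's output neuron
        self.history.append((kernel[:, self.class_index].copy(), bias[self.class_index]))

# usage: model.fit(X, Y, epochs=50, callbacks=[WeightLogger(class_index=0)])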

+",16909,,,,,10/31/2019 13:38,,,,0,,,,CC BY-SA 4.0 +16181,2,,16172,10/31/2019 16:04,,0,,"

VGG is a more basic architecture which uses no residual blocks. ResNet usually performs better than VGG due to its greater depth and its residual connections. Given that ResNet-50 can get 99% accuracy on MNIST and 98.7% accuracy on CIFAR-10, it should probably achieve better results than a VGG network. Also, the validation accuracy should not be 100%. You could try increasing the size of your validation set to get a more reliable validation accuracy. A VGG network should perform worse than ResNet in most scenarios, but experimenting is the way to go. Try and experiment more to get a method that works for your data. Hope that I can help you and have a nice day!

+",23713,,,,,10/31/2019 16:04,,,,0,,,,CC BY-SA 4.0 +16182,1,16190,,10/31/2019 16:15,,4,407,"

I am quite new to the Reinforcement Learning domain and I am curious about something. It seems to be the case that the majority of current research assumes Markovian environments, that is, that future states of the process depend only upon the present state, not on the sequence of events that preceded it. I was wondering how we can assign rewards when the Markov property doesn't hold anymore. Does state-of-the-art RL theory and research support this?

+",30960,,2444,,10/31/2019 20:15,11/1/2019 15:10,How to assign rewards in a non-Markovian environment?,,1,0,,,,CC BY-SA 4.0 +16183,1,,,10/31/2019 17:02,,3,38,"

I want to understand automatic Neural Architecture Search (NAS). I read already multiple papers, but I cannot figure out what the actual search space of NAS is / how are classical hyper-parameters considered in NAS?

+ +

My understanding:

+ +

NAS aims to find a good-performing model in the search space of all possible model architectures, using a certain search and performance-estimation strategy.
• There are architecture-specific hyper-parameters (in the simplest feed-forward network case) like the number of hidden layers, the number of hidden neurons per layer, as well as the type of activation function per neuron.
• There are classical hyper-parameters like the learning rate, dropout rate, etc.

What I don't understand is:

+ +

What exactly is part of the model architecture as defined above? Is it only the architecture-specific hyper-parameters or also the classical hyper-parameters? In other words, what is spanning the search space in NAS: Only the architecture-specific hyper-parameters or also the classical hyper-parameters?

+ +

In case only the architecture-specific hyper-parameters are part of the NAS search space, what about the classical hyper-parameters? A certain architecture (with a fixed configuration of the architecture-specific hyper-parameters) might perform better or worse depending on the classical hyper-parameters - so not taking into account the classical hyper-parameters in the NAS search space might result in a non-optimal ultimate model architecture, or not?

+",30909,,2444,,12/19/2021 19:19,12/19/2021 19:19,Which hyper-parameters are considered in neural architecture search?,,0,0,0,,,CC BY-SA 4.0 +16185,5,,,10/31/2019 20:12,,0,,,2444,,2444,,10/31/2019 20:12,10/31/2019 20:12,,,,0,,,,CC BY-SA 4.0 +16186,4,,,10/31/2019 20:12,,0,,"For questions related to the concept of Markov decision process (MDP), which is a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision-maker. The concept of MDP is useful for studying optimization problems solved via dynamic programming and reinforcement learning.",2444,,2444,,10/31/2019 20:12,10/31/2019 20:12,,,,0,,,,CC BY-SA 4.0 +16187,1,25236,,10/31/2019 21:19,,2,200,"

I am reading the paper Semi-Supervised Deep Learning with Memory (2018) by Yanbei Chen et al. The topic is the classification of images using semi-supervised learning. The authors use a term on page 2 in the middle of the page that I am not familiar with. They write:

+
+

The key to our framework design is two-aspect: (1) the class-level discriminative feature representation and the network inference uncertainty are gradually accumulated in an external memory module; (2) this memorised information is utilised to assimilate the newly incoming image samples on-the-fly and generate an informative unsupervised memory loss to guide the network learning jointly with the supervised classification loss

+
+

I am not sure what the term discriminative feature representation means.

+

I know that a discriminative model determines the decision boundary between the classes, and examples include: Logistic Regression (LR), Support Vector Machine (SVM), conditional random fields (CRFs) and others.

+

Moreover, I know that, in machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data.

+

Any insights on the definition of this term much appreciated.

+",30962,,2444,,12/17/2020 11:49,12/17/2020 11:49,"What does ""class-level discriminative feature representation"" mean in the paper ""Semi-Supervised Deep Learning with Memory""?",,1,0,,,,CC BY-SA 4.0 +16190,2,,16182,10/31/2019 23:35,,2,,"

Dealing with a Non-Markovian process is unusual in Reinforcement Learning. Although some explicit attempts have been made, the most common approach when confronted with a non-Markovian environment is to try and make the agent's representation of it Markovian.

+ +

After reducing the agent's model of the dynamics to a Markovian process, rewards are assigned by the environment in exactly the same way as before. The environment simply sends the agent a reward signal in response to each action.

+ +

The Markovian assumption is essentially a formalism of the idea that the future can be predicted from the present. It says that if you know the dynamics of a system, and you know the state of the system now, you know everything you need to predict the state of the system later, and how we got to this state is not important. Formally, we write this as $P(s_t∣s_{t−1:0})=P(s_t∣s_{t−1})$.

+ +

That said, the models we use in AI are usually simplifications of the real world. When we simplify the world, we can introduce non-Markovian dynamics. However, if the model grows too complex, and the state space too large, learning will take too long. The goal is then to define a state space that is small enough to be learnable, and not too bad an approximation of the real dynamics of the world. AI researchers have several tools to do this.

+ +

As a working example, imagine that the future position of a robot depends mainly on the current position, and current velocity, along with the action the robot takes right now. Using these variables to define a state, we get almost Markovian dynamics, but as the robot moves over time, its battery drains and the movements become very slightly more imprecise. If we wanted to remove this error, we can:

+ +
    +
  1. Expand the state variables. If we add ""current battery level"" to the state, then our process will become Markovian again. This is the most obvious approach, but it only works up to a point. The state space grows exponentially in size as you add new variables. It will quickly become too large to learn, so in many problems, a complex state is subsequently decomposed into simpler sub-states. These simpler states may not depend on one another at all, or may depend only to varying degrees. The main limiting factor in learning to navigate an exponential state space is that the number of parameters in our model will grow exponentially. If the original space had $O(2^n)$ parameters, splitting it in half will yield two separate learning problems of total size $O(2^{n/2} + 2^{n/2}) = O(2^{n/2 + 1})$. Splitting the problem in two reduces its size by a factor of $O(2^{n/2})$, which is big. This is the technique exploited by Dynamic Bayesian Networks. In the case of our robot, we could make the current $x$ position depend only on the previous $x$ position, the previous $x$ velocity, and the robot's action, rather than on all previous variables, and likewise for the other variables.
  +
  2. Use Higher Order Models of the Environment. This is really a generalization of (1) above. Instead of the state being the current location and velocity of the robot, we could define the state to be the current location/velocity and the location/velocity of the robot 1 step in the past (or 2, or 3, or $k$ steps). This increases the size of the state space enormously, but if we don't know why there is an error in the robot's movements, this can still allow us to model it. In this case, if we know the size of the change in the robot's position last time, and we observe that it is smaller this time (for the same action), we can estimate the rate at which the change is changing, without understanding its cause.
  +
+ +

As an example, consider the process of setting the price of a good. The agent's reward is non-Markovian, because sales increase or decline gradually in response to price changes. However, they don't depend on all of the history of prices. Imagine that instead they depend on the last 5 prices (or the last $k$). We can use technique (2) above to expand the agent's model of what a state is. Now the agent learns that when prices have been $p_{t-1:t-5}$ in the last 5 steps, and it sets the price at time $t$ to some value, its reward is $x$. Since the reward depends only on the prices now, and in the last 5 steps, the agent is now learning a Markovian process, even though the original process is non-Markovian, and the reward function is non-Markovian. No changes are made to the reward function, or the environment, only the agent's model of the environment.
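
+ +

As a minimal sketch of technique (2) applied to this pricing example (all names here are hypothetical), the agent's state can simply be a rolling window of the last $k$ observations, which makes the process it learns on approximately Markovian:

from collections import deque

class KthOrderState:
    def __init__(self, k):
        self.k = k
        self.buffer = deque(maxlen=k)        # holds the last k raw observations

    def reset(self, first_obs):
        self.buffer.clear()
        self.buffer.extend([first_obs] * self.k)
        return tuple(self.buffer)

    def step(self, new_obs):
        self.buffer.append(new_obs)          # the oldest observation drops out
        return tuple(self.buffer)            # this tuple is the (approximately) Markovian state

# usage: state = KthOrderState(k=5); s = state.reset(p0); s = state.step(p1); ...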

+",16909,,16909,,11/1/2019 15:10,11/1/2019 15:10,,,,13,,,,CC BY-SA 4.0 +16191,1,,,11/1/2019 1:08,,3,405,"

I use Google's Cloud TPU hardware extensively using Tensorflow for training models and inference, however, when I run inference I do it in large batches. The TPU takes about 3 minutes to warm up before it runs the inference. But when I read the official TPU FAQ, it says that we can do real-time inference using TPU. It says the latency is 10ms which for me is fast enough but I cannot figure out how to write code that does this, since every time I want to pass something for inference I have to start the TPU again.

+ +

My goal is to run large Transformer-based Language Models in real-time on TPUs. I guessed that TPUs would be ideal for this problem. Even Google seems to already do this.

+ +

Quote from the official TPU FAQ:

+ +
+

Executing inference on a single batch of input and waiting for the + result currently has an overhead of at least 10 ms, which can be + problematic for low-latency serving.

+
+",29999,,49807,,9/17/2021 5:05,9/17/2021 5:05,How to use TPU for real-time low-latency inference?,,0,0,,,,CC BY-SA 4.0 +16192,5,,,11/1/2019 1:54,,0,,,2444,,2444,,11/1/2019 1:54,11/1/2019 1:54,,,,0,,,,CC BY-SA 4.0 +16193,4,,,11/1/2019 1:54,,0,,"For questions related to grammar induction (or grammar inference), which is the problem, in machine learning, of learning a formal grammar from a set of observations (a dataset).",2444,,2444,,11/1/2019 2:00,11/1/2019 2:00,,,,0,,,,CC BY-SA 4.0 +16194,5,,,11/1/2019 2:26,,0,,,2444,,2444,,11/1/2019 2:26,11/1/2019 2:26,,,,0,,,,CC BY-SA 4.0 +16195,4,,,11/1/2019 2:26,,0,,"For questions related to BERT (which stands for Bidirectional Encoder Representations from Transformers), a language representation model introduced in the paper ""BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"" (2019) by Google.",2444,,2444,,11/1/2019 2:26,11/1/2019 2:26,,,,0,,,,CC BY-SA 4.0 +16196,5,,,11/1/2019 2:30,,0,,,2444,,2444,,11/1/2019 2:30,11/1/2019 2:30,,,,0,,,,CC BY-SA 4.0 +16197,4,,,11/1/2019 2:30,,0,,"For questions related to the long-short term memory (LSTM), which refers to a recurrent neural network architecture that uses LSTM units. The first LSTM unit was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber in the paper ""Long-Short Term Memory"".",2444,,2444,,11/1/2019 2:30,11/1/2019 2:30,,,,0,,,,CC BY-SA 4.0 +16198,5,,,11/1/2019 2:44,,0,,,2444,,2444,,11/1/2019 2:44,11/1/2019 2:44,,,,0,,,,CC BY-SA 4.0 +16199,4,,,11/1/2019 2:44,,0,,"For questions related to the transformer, which is a deep machine learning model introduced in 2017 in the paper ""Attention Is All You Need"", used primarily in the field of natural language processing (NLP).",2444,,2444,,11/1/2019 2:44,11/1/2019 2:44,,,,0,,,,CC BY-SA 4.0 +16200,5,,,11/1/2019 2:57,,0,,,2444,,2444,,11/1/2019 2:57,11/1/2019 2:57,,,,0,,,,CC BY-SA 4.0 +16201,4,,,11/1/2019 2:57,,0,,"For questions related to GPT (which stands for Generative Pre-Training), which is a combination of transformers (proposed in ""Attention is All You Need"") and unsupervised pre-training for solving language tasks, such as machine translation. GPT was proposed in ""Improving Language Understanding by Generative Pre-Training"" (2018) by Open AI. There's also GPT-2, which was proposed in ""Language Models are Unsupervised Multitask Learners"" (2019) by Open AI.",2444,,2444,,11/1/2019 2:57,11/1/2019 2:57,,,,0,,,,CC BY-SA 4.0 +16202,5,,,11/1/2019 3:00,,0,,,2444,,2444,,11/1/2019 3:00,11/1/2019 3:00,,,,0,,,,CC BY-SA 4.0 +16203,4,,,11/1/2019 3:00,,0,,"For questions related to the concept of a language model, which is a probability distribution over sequences of words (for example, of a natural language, such as English).",2444,,2444,,11/1/2019 3:00,11/1/2019 3:00,,,,0,,,,CC BY-SA 4.0 +16204,2,,3494,11/1/2019 3:14,,1,,"

I actually prefer C for machine learning, because, like in life, the world as we know it consists of never-ending ""logic gates"" (which is basically like flipping a coin - there WILL be 2 possible outcomes - not counting the third: landing on the side!). Which also means that, while the universe seems never-ending, we still never stop finding those things that are even smaller than the last smallest thing, right?

+ +

So... To put it in context: when programming in C, I can control the memory usage more efficiently by coding smaller snippets that get combined, to always form small & efficient ""code-fragments"" that make up what we would call ""cells"" in biology (each has a measurable function and some pre-set properties).

+ +

Thus, I like to optimize for low RAM usage, low CPU usage etc. when programming AI. I have only done feedforward with a basic genetic algorithm in C, but the more advanced recurrent neural network I wrote in C++ (ONLY because of the simplicity of using std::vector<TYPE> name;, so I wrote my own cvector.c: https://pastebin.com/sBbxmu9T & cvector.h: https://pastebin.com/Rd8B7mK4 & debug: https://pastebin.com/kcGD8Fzf - compile with gcc -o debug debug.c cvector.c). That actually helped a lot in the quest of optimizing CPU usage (and overall runtime) when creating optimized neural networks.

+ +

EDIT: +So, in one sense, I really see the opposite of what AlexPnt sees, when it comes to exploring what is possible within the realm of a ""self"".

+",30967,,5300,,4/14/2020 22:09,4/14/2020 22:09,,,,0,,,,CC BY-SA 4.0 +16206,1,,,11/1/2019 7:55,,1,10,"

Suppose that we want to segment a red blob from an image. Normally, you will have a class for this red blob, e.g. 0, and every red blob you detect will have a class of 0.

+ +

But, in my case, I want the model to look at the surrounding context, e.g., if the red blob is surrounded by blue blobs, it should be classified as class 1 instead of 0, like in the following image.

+ +

+ +

Is this something easily achievable with U-Net or other models (which you can suggest)?

+ +

In my case, the context can be more difficult than this, e.g., if there are blue and green blobs surrounding it, it will have another class.

+",20819,,2444,,6/13/2020 0:03,6/13/2020 0:03,Would models like U-Net be able to segment objects which has label based on its surrounding context?,,0,0,,,,CC BY-SA 4.0 +16207,1,,,11/1/2019 9:20,,1,50,"

I am building a CNN with two outputs. I still have to put time in the network itself, but I was trying to get a good evaluation/classification report of the results. My code is the following:

+ +
scores = model.evaluate(data_test, [Y1_test, Y2_test], verbose=0)
+
+for i, j in zip(model.metrics_names, scores):
+    print(i,'=', j)
+
+ +

Output:

+ +
loss = 5.124477842579717
+Y1_output_loss = 1.3782909
+Y2_output_loss = 4.10769
+Y1_output_accuracy = 0.6304348
+Y2_output_accuracy = 0.54347825
+
+ +

Not great, but that is not the point. My code for the classification report is as follows:

+ +
Y1_pred, Y2_pred = model.predict(data_test)
+Y1_true, Y2_true = Y1_test.argmax(axis=-1), Y2_test.argmax(axis=-1)
+Y1_pred, Y2_pred = Y1_pred.argmax(axis=-1), Y2_pred.argmax(axis=-1)
+
+
+print(classification_report(Y1_true, Y1_pred))
+print(classification_report(Y2_true, Y2_pred))
+
+ +

Output:

+ +
Classification report Y1
+              precision    recall  f1-score   support
+
+           0       0.20      0.33      0.25         6
+           3       0.00      0.00      0.00         3
+           6       0.00      0.00      0.00         6
+           8       0.00      0.00      0.00         2
+           9       0.00      0.00      0.00         7
+          10       0.03      0.50      0.06         2
+          11       0.00      0.00      0.00         3
+          12       0.00      0.00      0.00         7
+          13       0.00      0.00      0.00         2
+          14       0.00      0.00      0.00         7
+          15       0.00      0.00      0.00         1
+
+    accuracy                           0.07        46
+   macro avg       0.02      0.08      0.03        46
+weighted avg       0.03      0.07      0.04        46
+
+
+Classification report Y2
+              precision    recall  f1-score   support
+
+           0       0.00      0.00      0.00         9
+           2       0.00      0.00      0.00        10
+           3       0.15      1.00      0.26         7
+           4       0.00      0.00      0.00         9
+           5       0.00      0.00      0.00         6
+           6       0.00      0.00      0.00         2
+           7       0.00      0.00      0.00         3
+
+    accuracy                           0.15        46
+   macro avg       0.02      0.14      0.04        46
+weighted avg       0.02      0.15      0.04        46
+
+ +

Now the average accuracy is suddenly extremely low, so I have the feeling the predictions and labels aren't lining up correctly. But I don't see where?

+ +

Thank you all

+",30971,,,,,11/1/2019 9:20,CNN multi output scores and evaluation,,0,1,,,,CC BY-SA 4.0 +16208,1,,,11/1/2019 9:31,,1,135,"

I want to build a voice assistant using TensorFlow, like Google Assistant, so that I can give commands like:

+ +
Open Camera
+Send Message
+Play Music
+ETC
+
+ +

I know I can use a pre-trained model for voice recognition, but this is not my problem. If I understand correctly, a neural network learns from your inputs and outputs and creates the best algorithm.

+ +

So I want to know: is it possible to somehow train my network on my commands, so that I don't have to HARD-CODE them, because it is hard to remember so many commands?

+ +

Has Google also hard-coded the commands for Google Assistant?

+ +

Sorry, If I'm unable to explain this to you :P

+",30970,,30970,,11/1/2019 11:05,11/1/2019 12:12,Best way to train Neural Network for Voice Commands?,,1,0,,,,CC BY-SA 4.0 +16210,2,,16208,11/1/2019 12:12,,1,,"
+

a neural network learns from your inputs and outputs and creates the best algorithm.

+
+ +

This is inaccurate. A neural network is a function approximator, so it approximates an unknown function. However, it does so in many cases by learning from your inputs and outputs (as in the case of supervised learning). The function you're approximating would here, for example, be the function that maps a certain soundwave input to ""open camera"", etc.

+ +

The problem with supervised learning is that you need massive datasets to accurately approximate the target function - luckily for Google they have such datasets and resources available.

+ +
+

Has Google also hard-coded the commands for Google Assistant?

+
+ +

No, because that would not generalize well over multiple languages/accents/background noise, etc. This is why neural networks as function approximators have had such a large impact over the last few years - they generalize well. +If you remember when the first ""voice assistant"" products came out, they rarely functioned well - this was often due to them being ""hard coded"", with only static measures for noise suppression.

+",30565,,,,,11/1/2019 12:12,,,,2,,,,CC BY-SA 4.0 +16211,2,,16162,11/1/2019 12:22,,1,,"

The first thing that comes to mind when reading your question is Genetic algorithms.

+ +

They create alternate versions of themselves and measure each version's performance on a specific task, before discarding those that work poorly, while keeping the best ones for the next generation. +The mutations here are often random, and, for large/complex problems, these simulations can take an incredibly long time. +This group of algorithms is heavily inspired by evolution and biology, as you can see.

+ +

I realize, as I read the last part of your question, that this might be on a much smaller scope than you had envisioned. But, in essence, genetic algorithms do what you describe in your first part.

+ +

For the more grand-scale question, see @Neil Slater's answer.

+",30565,,16909,,11/4/2019 16:09,11/4/2019 16:09,,,,6,,,,CC BY-SA 4.0 +16212,1,16215,,11/1/2019 12:59,,5,113,"

I'm studying the paper "Minimizing Total Tardiness on a Single Machine Using Ant Colony Optimization" which has proposed to use Ant colony optimization to SMTWTP.

+

According to this paper:

+
+

Each artificial ant iteratively and independently decides which job to +append to the sub-sequence generated so far until all jobs are +scheduled, Each ant generates a complete solution by selecting a job $j$ +to be on the $i$-th position of the sequence. This selection process is +influenced through problem-specific heuristic information called +visibility and denoted by $\eta_{ij}$ as well as pheromone trails denoted by $\tau_{ij}$. The former is an indicator of how good the choice of that job +seems to be and the latter indicates how good the choice of that job +was in former runs. Both matrices are only two dimensional as a +consequence of the reduction in complexity

+
+

They have proposed this formula for the probability that job $j$ be selected to be processed on position $i$ (page 9 of the linked paper):

+

$$ +\mathcal{P}_{i j}=\left\{\begin{array}{cl} +\frac{\left[\tau_{i j}\right]^{\alpha}\left[\eta_{i j}\right]^{\beta}}{\sum_{h \in \Omega}\left[\tau_{i h}\right]^{\alpha}\left[\eta_{i h}\right]^{\beta}} & \text { if } j \in \Omega \\ +0 & \text { otherwise } +\end{array}\right.\tag{1}\label{1} +$$

+

but I can't understand what $[]$ surrounding $\eta_{ij}$ and $\tau_{ij}$ indicates. Does it show that these values are matrices?

+",30164,,2444,,1/15/2021 11:56,1/15/2021 11:56,What is the meaning of the square brackets in ant colony optimization?,,1,0,,,,CC BY-SA 4.0 +16214,1,,,11/1/2019 13:45,,2,216,"

The thing about machine learning (ML) that worries me is that "knowledge" acquired in ML is hidden: we usually can't explain the criteria or methods used by the machine to provide an answer when we ask it a question.

+

It's as if we asked an expert financial analyst for advice and he/she replied, "Invest in X"; then when we asked "Why?", the analyst answered, "Because I have a feeling that's the right thing for you to do." It makes us dependent on the analyst.

+

Surely there are some researchers trying to find ways for ML systems to encapsulate and refine their "knowledge" into a form that can then be taught to a human or encoded into a much simpler machine. Who, if any, are working on that?

+",28348,,2444,,7/29/2020 21:41,1/6/2021 13:25,Who is working on explaining the knowledge encoded into machine learning models?,,3,1,,1/5/2021 11:08,,CC BY-SA 4.0 +16215,2,,16212,11/1/2019 14:56,,3,,"

The square brackets $[]$ in $[\tau_{ij}]^\alpha$ and $[\eta_{ij}]^\beta$ may be just a way of emphasing that the elements $\tau_{ij} \in \mathbb{R}$ and $\eta_{ij} \in \mathbb{R}$ of respectively the matrices $\mathbf{\tau} \in \mathbb{R}^{n \times n}$ and $\mathbf{\eta} \in \mathbb{R}^{n \times n}$ (where $n$ is the number of nodes in the graph) are respectively raised to $\alpha$ and $\beta$, so they could have used also other type of brackets, for example, $()$. It may also be a way of indicating that $[\tau_{ij}]$ and $[\eta_{ij}]$ are $1 \times 1$ matrices or vectors that contain respectively the scalars $\tau_{ij}$ and $\eta_{ij}$, so you are multiplying matrices or vectors (dot product).

+ +

This notation is also used in the paper that introduced the ant colony system (ACS) (and it is probably used in many other papers related to ant colony optimization). See equation 1 of Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem (1997) by Dorigo and Gambardella.
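
+ +

For what it's worth, equation (1) is just scalar exponentiation followed by normalisation, so it can be written directly in code. This is a minimal sketch; the names (tau, eta, omega, alpha, beta) are assumptions mirroring the paper's symbols, with tau and eta as matrices indexed by position and job:

def transition_probabilities(i, omega, tau, eta, alpha=1.0, beta=2.0):
    # tau[i][j]: pheromone trail, eta[i][j]: visibility, omega: set of jobs still allowed
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in omega}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}    # P_ij for j in omega, 0 otherwise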

+",2444,,2444,,11/2/2019 1:08,11/2/2019 1:08,,,,1,,,,CC BY-SA 4.0 +16216,2,,16214,11/1/2019 15:05,,0,,"

I have one concept which allows structuring the knowledge gathered by an ML system in both a categorical and an algorithm-like structure.

+ +

The key idea here is that we contain in our minds some network-like structure which helps us not only to classify data, but also to form an abstract meaning of a text or message.

+ +

Other than that, in contrast to our minds, it looks like this solution can produce these structures in forms that are readable to us.

+",8861,,,,,11/1/2019 15:05,,,,0,,,,CC BY-SA 4.0 +16217,1,,,11/1/2019 16:34,,2,28,"

Say I have a game like tic-tac-toe or chess. Or some other visual logic based problem.

+ +

I could express the state of the game as a string. (or perhaps a 2D array)

+ +

I want to be able to express the possible moves of the game as rules which change the string. e.g. replacement rules.

+ +

I looked into regex as a possibility, but this doesn't seem powerful enough. For example, one can't have named patterns which one can use again (e.g. if I wanted to name a pattern called ""numbers_except_for_8"" to be used again).

+ +

And it also should be able to express things like ""repeat if possible"".

+ +

In other words I need some simple language to express rules of a game that has:

+ +
    +
  • modularity
  • +
  • simpleness
  • +
  • can act on other rules (self referential)
  • +
+ +

There are languages like LISP but these on the other hand seem too complicated. (Perhaps there is no simple language hence why the English language is so complicated).

+ +

I did read once about a generalised board game solving software program which had a way to express the rules of a game. But I can't seem to find a reference to it anywhere.

+ +

As an example rules for tic tac toe might be:

+ +

Players-Turn: +""Find a blank square""->""Put an X in it""->Oponent's turn

+ +

Oponents-Turn: +""Find a blank square""->""Put an O in it""->Player's turn

+ +

So I think the ingredients for rules are: searching for patterns, determining if an object is of a particular type (which might be the same as the first ingredient), and replacing.

+",4199,,4199,,11/1/2019 16:40,11/1/2019 16:40,What is a good language for expressing replacement or template rules?,,0,3,,,,CC BY-SA 4.0 +16218,1,,,11/1/2019 23:27,,2,34,"

In recent years, if you are working on stereo depth/disparity algorithms, it seems like you will only ever get your paper accepted to CVPR/ICCV/ECCV if there's some deep learning involved in it. A lot of authors published their code on GitHub, I've tried out multiple of them, and here is what I observed. None of these deep learning based methods generalized well. Almost all methods trained on the KITTI dataset (street images) or the Scene Flow dataset (synthetic images). These methods perform well when the test data is similar to the training data, but fail miserably on other kinds of test data (e.g. a close-up human), whereas a classical traditional computer vision based method like PatchMatch would generate decent results. In my opinion, no matter how well these new deep learning methods perform on the KITTI benchmark, they are nearly useless in the real world.

+ +

I understand deep learning has the potential to approximate any non-linear function when there's enough quality training data and unlimited computation, but ground truth depth/disparity cannot be labeled by manual labor like in a cat-dog classification problem. That means the ground truth training data has to come from traditional computer vision algorithms or hardware, or be synthetic. Traditional computer vision algorithms are not even close to perfect yet, but the research has pretty much stalled because of deep learning. The ground truth of the KITTI dataset comes from a hardware LIDAR, but it's extremely sparse. If we align multiple scans from the LIDAR in order to form a dense result, that relies on some type of SLAM, which again relies on an imperfect traditional computer vision algorithm. There is no sign of hardware that can generate accurate dense depth coming out soon. As for synthetic data, it doesn't accurately represent real data. Since there isn't even a good way to obtain training data for stereo depth/disparity, why are researchers so fixated on building complex deep neural nets to solve stereo depth/disparity nowadays?

+",30980,,,,,11/1/2019 23:27,Why are researchers focused on deep learning based stereo depth/disparity methods instead of non deep learning ones?,,0,0,,,,CC BY-SA 4.0 +16219,1,,,11/2/2019 1:22,,1,412,"

In section 5 of the paper Soft Actor-Critic Algorithms and Applications, it is proposed an optimization problem to obtain an optimal temperature parameter $\alpha^*_t$. First, one uses the original evaluation and improvement steps to estimate $Q_t^*$ and $\pi_t^*$, and then one somehow solves the optimization problem:

+

$$\alpha_t^* = \arg\min_{\alpha_t} \mathbb E _{a_t\sim\pi^*}\left[\alpha_t(-\log\pi_t^*(a_t|s_t;\alpha_t)-H)\right]\text .$$

+

As far as I understand, we should use our current estimate of $\pi_t^*$ to solve that problem. Since it was obtained from a previous $\alpha_{t-1}^*$, in practice, it is not dependent on $\alpha_t$ and so the optimization problem becomes a linear problem with the only restriction being $\alpha_t\geq0$.

+

Here comes my problem: under this rationale, if $\alpha_t$ is a scalar independent of both state $s_t$ and action $a_t$, the value of the cost function is just proportional to $\alpha_t$ and so the solutions are either $0$ or $\infty$, depending on the sign of the expected value (something similar happens if $\alpha_t^*=\alpha_t^*(s_t,a_t)$). However, the whole idea of introducing this parameter is to account optimally for the exploration of the policy.

+

What is the correct way to solve this optimization problem along with the evaluation and improvement steps? I am particularly interested in the tabular case. Also, is there any explanation why they use a negative minimum entropy $H$ when the entropy is always positive?

+

By the way, in the approximate case, the current official implementation seems to be doing just that: moving $\alpha_t^*$ up or down a little bit (closer to $\infty$ or 0, respectively), depending on the magnitude of the expected value. I guess one could do the same for the tabular case, modifying the $\alpha_t^*$ only a little bit in each step, but this seems rather suboptimal.

+",30983,,32410,,10/2/2021 18:56,10/2/2021 18:56,How does the automated temperature adjustment step work in Soft Actor-Critic?,,0,0,,,,CC BY-SA 4.0 +16220,1,,,11/2/2019 3:40,,3,64,"

I would like a few suggestions on an idea that I have -

+ +

I am trying to make a musical instrument (percussion), whilst just having a PVC disc. I am hitting the disc in a variety of styles (in order to produce a variety of sounds correspondingly), just like the way the actual percussion instrument is hit. I am converting the mechanical vibrations on the PVC disc to an electrical signal using a transducer, performing an FFT analysis of the different strokes, and trying to identify the stroke which is hit. Using this technique, I could get an accuracy of only 80 percent. I would like it to be extremely accurate ( more than 95 percent recognition). I was using only frequency as the parameter used to distinguish the sounds.

+ +

Now, I am thinking that if I could use other parameters too in order to identify the stroke, I might be able to get the required accuracy. I am thinking of resorting to Machine Learning for this. I am kind of new to this and would like to know what I might need to know before I proceed with this idea.

+ +

Any help would be greatly appreciated.

+",30985,,,,,4/1/2020 12:03,Analyzing vibration using machine learning,,1,1,,,,CC BY-SA 4.0 +16221,1,16229,,11/2/2019 7:12,,2,65,"

For example, given a face image, and you want to predict the gender. You also have age information for each person, should you feed the age information as input or should you use it as auxiliary output that the network needs to predict?

+ +

How do I know analytically (instead of experimentally) which approach will be better? What is the logic behind this?

+",20819,,,,,11/2/2019 16:10,Should I use my redundant feature as an auxiliary output or as another input feature?,,1,0,,,,CC BY-SA 4.0 +16222,1,,,11/2/2019 8:42,,1,51,"

I have a dataset which has two very similar classes (men wrestling, women wrestling). I've used InceptionV3 as a classifier to solve the problem of classifying this dataset. Unfortunately, the accuracy of this classifier doesn't hit more than 70%. Is there any suggestion about how I can overcome this problem or any other similar problems?

+",26472,,,,,11/2/2019 15:12,The best way of classifying a dataset including classes with high similarity?,,1,2,,12/28/2021 22:14,,CC BY-SA 4.0 +16223,2,,16214,11/2/2019 9:00,,0,,"

We actually can explain how the machine answers our questions. For example, we know how the ML works for the case of image recognition. See the following paper by Chris Olah:

+

https://distill.pub/2017/feature-visualization/ +and also +https://distill.pub/2018/building-blocks/

+

By feature visualization, we know what the machine sees at each layer and how it decides using the given information. As images are visual, it is easy to "visualize" the features, but, of course, interpreting the inner workings of a neural network for other kinds of data (such as financial data, for example) would be much more difficult, I presume. But still, we can understand how the initial data is weighted or how the intermediate features are calculated.

+

Given that, I highly doubt the reasoning of a machine can be taught to a human. It is far too complex to be learned. We can easily understand one- or two-dimensional functions, but the data we use in machine learning problems usually has hundreds of dimensions.

+",22301,,11539,,1/6/2021 13:25,1/6/2021 13:25,,,,1,,,,CC BY-SA 4.0 +16224,1,,,11/2/2019 12:32,,5,1862,"

There are mainly two different areas of AI at the moment. There is the "learning from experience" based approach of neural networks. And there is the "higher logical reasoning" approach, with languages like LISP and PROLOG.

+

Has there been much overlap between these? I can't find much!

+

As a simple example, one could express some games in PROLOG and then use neural networks to try to play the game.

+

As a more complicated example, one would perhaps have a set of PROLOG rules which could be combined in various ways, and a neural network to evaluate the usefulness of the rules (by simulation). Or even create new PROLOG rules. (Neural networks have been used for language generation of a sort, so why not the generation of PROLOG rules, which could then be evaluated for usefulness by another neural network?)

+

As another example, a machine with PROLOG rules might be able to use a neural network to be able to encode these rules into some language that could be in turn decoded by another machine. And so express instructions to another machine.

+

I think, such a combined system that could use PROLOG rules, combine them, generate new ones, and evaluate them, could be highly intelligent. As it would have access to higher-order logic. And have some similarity to "thinking".

+",4199,,2444,,12/27/2021 13:29,12/27/2021 13:29,"Has machine learning been combined with logical reasoning (for example, PROLOG)?",,2,1,,,,CC BY-SA 4.0 +16225,1,,,11/2/2019 13:52,,3,419,"

I'm studying ant colony optimization. I'm trying to understand the difference between the ant system (AS) and the max-min ant system (MMAS) approaches. As far as I found out, the main difference between these 2 is that in AS the pheromone trail is updated after all ants have finished the tour (it means all ants participate in this update), but in MMAS, only the best ant updates this value. Am I right? Is there any other significant difference?

+",30164,,2444,,11/2/2019 14:16,1/15/2021 11:39,What is the difference between the ant system and the max-min ant system?,,1,1,,,,CC BY-SA 4.0 +16226,1,16245,,11/2/2019 14:56,,8,1918,"

If recurrent neural networks (RNNs) are used to capture prior information, couldn't the same thing be achieved by a feedforward neural network (FFNN) or multi-layer perceptron (MLP) where the inputs are ordered sequentially?

+ +

Here's an example I saw where the top line of each section represents letters typed and the next row represents the predicted next character (red letters in the next row means a confident prediction).

+ +

+ +

Wouldn't it be simpler to just pass the $X$ number of letters leading up to the last letter into an FFNN?

+ +

For example, if $X$ equaled 4, the following might be the input to the FFNN

+ +
S, T, A, C => Prediction: K
+
+",30154,,30154,,11/3/2019 6:47,11/3/2019 21:36,Why use a recurrent neural network over a feedforward neural network for sequence prediction?,,2,0,,,,CC BY-SA 4.0 +16227,2,,16226,11/2/2019 15:05,,7,,"

An RNN or LSTM has the advantage of ""remembering"" past inputs, to improve performance on prediction of time-series data. If you use a feedforward network over, say, the past 500 characters, this may work, but the network just treats the data as a bunch of inputs without any specific indication of time. The network can learn a representation of time only through gradient descent. An RNN or LSTM, however, has ""time"" as a mechanism built into the model. The model loops through the sequence step by step and has a real ""sense of time"" even before the model is trained. The model also has a ""memory"" of previous data points to help the prediction. The architecture is based on the progress of time, and the gradients are propagated through time as well. This is a much more intuitive way to process time-series data.

+ +

A 1D CNN will also work for the task. An example of a CNN on time-series data is WaveNet, which generates incredibly lifelike speech using dilated convolutions. Whether an LSTM or a CNN works better depends on the data. You should try experimenting with both networks to see which works best.

+ +

Suppose you need to classify a video's genre. It is much simpler to watch it in sequence than to see its frames playing in random order in front of your eyes. This is why an RNN or an LSTM works better on time-series data.
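
+ +

To make the contrast concrete, here is a minimal Keras sketch (assuming a hypothetical alphabet of 27 one-hot encoded symbols) of the two families: a feedforward network over a fixed window of the last 4 characters versus an LSTM that carries a hidden state across a sequence of any length:

from tensorflow import keras

ffnn = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(4 * 27,)),   # fixed window of 4 characters
    keras.layers.Dense(27, activation='softmax'),
])

rnn = keras.Sequential([
    keras.layers.LSTM(128, input_shape=(None, 27)),                      # hidden state = memory of the past
    keras.layers.Dense(27, activation='softmax'),
])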

+",23713,,23713,,11/3/2019 7:00,11/3/2019 7:00,,,,4,,,,CC BY-SA 4.0 +16228,2,,16222,11/2/2019 15:12,,1,,"

If you want to classify data with similar characteristics, it often helps if you hand-craft features. For classifying women's or men's wrestling, you may want to try using cv2 to track human faces and feed those to your CNN as input. Example: https://realpython.com/face-recognition-with-python/
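
+ +

A minimal OpenCV sketch of that face-cropping idea (using the Haar cascade bundled with the opencv-python package rather than the library in the linked example; the image path is hypothetical):

import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("wrestling_frame.jpg")                      # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
face_crops = [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]   # feed these crops to the CNN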

+ +

If the data is not images, you may want to do an analysis to see which features have no clear relationship with whether it is men's or women's wrestling and remove them. Example: https://towardsdatascience.com/a-feature-selection-tool-for-machine-learning-in-python-b64dd23710f0

+ +

Hope that I can help you and have a nice day!

+",23713,,,,,11/2/2019 15:12,,,,0,,,,CC BY-SA 4.0 +16229,2,,16221,11/2/2019 15:16,,2,,"

You should not feed extra input features that do not matter into the network.

+ +
+

Feature selection, the process of finding and selecting the most + useful features in a dataset, is a crucial step of the machine + learning pipeline. Unnecessary features decrease training speed, + decrease model interpretability, and, most importantly, decrease + generalization performance on the test set.

+
+ +

Source: A Feature Selection Tool for Machine Learning in Python

+ +

As the source says, unnecessary features decrease accuracy and training speed. Moreover, they have no mapping to the labels, so they won't end up being used. They are unnecessary, and adding them will only cause you trouble. Hope this helps you and have a nice day!
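
+ +

If you are unsure whether a candidate feature (such as age) actually carries signal, one quick sanity check on tabular data is to look at impurity-based feature importances before deciding. A minimal sketch with synthetic stand-in data (replace X, y and the feature names with your own):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)   # stand-in data
feature_names = ["f0", "f1", "f2", "f3"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(name, round(importance, 3))      # a very low importance suggests the feature can be dropped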

+",23713,,1671,,11/2/2019 16:10,11/2/2019 16:10,,,,0,,,,CC BY-SA 4.0 +16230,2,,6753,11/2/2019 16:29,,3,,"

LSTMs or GRUs are computationally more effective than the standard RNNs because they explicitly attempt to address the vanishing and exploding gradient problems, which are numerical problems related to the vanishing or explosion of the values of the gradient vector (the vector that contains the partial derivatives of the loss function with respect to the parameters of the model) that arise when training recurrent neural networks with gradient descent and back-propagation through time.

+",2444,,2444,,11/2/2019 23:34,11/2/2019 23:34,,,,0,,,,CC BY-SA 4.0 +16231,5,,,11/2/2019 16:51,,0,,,2444,,2444,,11/2/2019 16:51,11/2/2019 16:51,,,,0,,,,CC BY-SA 4.0 +16232,4,,,11/2/2019 16:51,,0,,"For questions related to the gated recurrent unit (GRU), a modification and simplification of the LSTM unit, which is a more sophisticated unit (with respect to the standard one) of a recurrent neural network (RNN). An RNN that uses GRU units is often called a GRU network. GRUs were introduced in the paper ""Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation"" (2014) by Kyunghyun Cho et al.",2444,,2444,,11/2/2019 16:51,11/2/2019 16:51,,,,0,,,,CC BY-SA 4.0 +16233,1,26223,,11/2/2019 17:05,,1,239,"

I am looking for a dataset on which I could train a model to detect people/boats/surfboards, etc., from a drone view.

+

Has anyone seen a dataset that could be useful for this purpose?

+

I have some photos taken by me (like the one below), but I need more data. Of course, it would be best if the data were labeled, but, if someone has seen an unlabeled dataset with videos/photos like those below, please share the link to it.

+

Sample photos I am looking for:

+

+

+",30992,,2444,,2/5/2021 13:06,2/5/2021 13:06,Dataset for floating objects detection,,2,0,,2/5/2021 13:04,,CC BY-SA 4.0 +16234,1,16235,,11/2/2019 18:55,,2,91,"

In the book Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning, James Stone says

+ +
+

With supervised learning, the response to each input vector is an output vector that receives immediate vector-valued feedback specifying the correct output, and this feedback refers uniquely to the input vector just received; in contrast, each reinforcement learning output vector (action) receives scalar-valued feedback often sometime after the action, and this feedback signal depends on actions taken before and after the current action.

+
+ +

I fail to understand the part formatted in bold. Once we have a set of labeled examples (feature vector and label pairs), where is the ""feedback"" coming from? Testing and validation results of our calibrated model (say a neural network based one)?

+",12855,,2444,,11/3/2019 13:34,1/24/2020 12:34,"What does ""immediate vector-valued feedback"" mean?",,2,0,,,,CC BY-SA 4.0 +16235,2,,16234,11/2/2019 19:09,,2,,"

By ""immediate vector-valued feedback"", they probably mean exactly the label in the ""labeled examples"" you mentioned.

+",23527,,,,,11/2/2019 19:09,,,,2,,,,CC BY-SA 4.0 +16236,2,,16171,11/2/2019 20:01,,1,,"

From your description, it seems that you are implementing a version of an algorithm called REINFORCE. This algorithm belongs to a family called Policy Gradient methods, which directly optimizes the policy network $\pi(a_t|s_t)$ from rewards without ever worrying about estimating a value function. This type of algorithm is usually pretty slow and presents high variance.

+ +

The methods that you recognize as the ones using two neural networks correspond to a family called Actor-Critic methods. This type of algorithm uses the trajectories of rewards to estimate a value network $q(s_t,a_t)$ (called the critic), and, contrary to the previous family of methods, it uses the value network to train the policy network $\pi(a_t|s_t)$ (called the actor), instead of directly using the trajectory of rewards. This indirect dependence usually makes variance smaller and also learning faster. I recommend you have a look at chapter 13 of the book An Introduction to Reinforcement Learning.

+ +

So, to answer your first question: it seems you are missing the family of Actor-Critic methods. I recommend you learn about them since they are very powerful (e.g., read about DDPG or SAC).

+ +

About your second question, the standard method to ""reward"" a policy network is not by training it. Usually, you have a reward function $r(s_t,a_t)$ that depends on your current state $s_t$ and action $a_t$, and you modify the parameters $\theta$ of your network in such a way that the probability of an action $\pi(a_t|s_t)$ increases if the reward is positive or decreases if it is negative. More specifically, you perform stochastic gradient ascent steps like this one:

+ +

$$\theta_t\leftarrow\theta_t+\alpha\mathbb E\left[\sum_{k=t}^{T+t} \gamma^{k-t}r(s_k,a_k)\right]\nabla \log\pi(a_t|s_t,\theta_t)$$

+ +

What this formula says is that if in the time-step $t$ you take an action $a_t$ in the state $s_t$, wait $T$ steps and collect the rewards from $r(s_t,a_t)$ to $r(s_{t+T},a_{t+T})$, then, you should modify your parameters in the direction that the policy increases the most (i.e., $\nabla \log\pi(a_t|s_t,\theta_t)$) if the expected return $E\left[\sum_{k=t}^{T+t} \gamma^{k-t}r(s_k,a_k)\right]$ is positive, or in the direction that the policy decreases the most if that expected return is negative.
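
+ +

For concreteness, here is a minimal PyTorch sketch of one such gradient ascent step over a finished episode (the policy network, optimizer and discount factor are assumptions; log_probs must be the log-probabilities produced by the policy with gradients attached):

import torch

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    # log_probs[t] = log pi(a_t|s_t) as a tensor, rewards[t] = r(s_t, a_t) for one episode
    returns, g = [], 0.0
    for r in reversed(rewards):                          # discounted return from each time step
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()     # minus sign: gradient ascent on the return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()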

+",30983,,30983,,11/3/2019 18:50,11/3/2019 18:50,,,,5,,,,CC BY-SA 4.0 +16238,1,,,11/3/2019 0:40,,6,1031,"

In the vanilla Monte Carlo tree search (MCTS) implementation, the rollout is usually implemented following a uniform random policy, that is, it takes random actions until the game is finished and only then the information gathered is backed up.

+

I have read the AlphaZero paper (and the AlphaGo Zero too) and I didn't find any information on how the rollout is implemented (maybe I missed it).

+

How is the rollout from the MCTS implemented in both the AlphaGo Zero and the AlphaZero algorithms?

+",22369,,2444,,12/19/2021 18:14,12/19/2021 18:14,How is the rollout from the MCTS implemented in both of the AlphaGo Zero and the AlphaZero algorithms?,,0,7,,,,CC BY-SA 4.0 +16239,2,,16233,11/3/2019 3:25,,2,,"

Perhaps you can check this dataset out: +http://www.aiskyeye.com/

+ +
+

The VisDrone2019 dataset is collected by the AISKYEYE team at Lab of + Machine Learning and Data Mining , Tianjin University, China. The + benchmark dataset consists of 288 video clips formed by 261,908 frames + and 10,209 static images, captured by various drone-mounted cameras, + covering a wide range of aspects including location (taken from 14 + different cities separated by thousands of kilometers in China), + environment (urban and country), objects (pedestrian, vehicles, + bicycles, etc.), and density (sparse and crowded scenes). Note that, + the dataset was collected using various drone platforms (i.e., drones + with different models), in different scenarios, and under various + weather and lighting conditions. These frames are manually annotated + with more than 2.6 million bounding boxes of targets of frequent + interests, such as pedestrians, cars, bicycles, and tricycles. Some + important attributes including scene visibility, object class and + occlusion, are also provided for better data utilization.

+
+ +

It provides many drone view images with bounding boxes. Hope it can help you and have a nice day!

+",23713,,,,,11/3/2019 3:25,,,,0,,,,CC BY-SA 4.0 +16240,1,,,11/3/2019 4:36,,3,271,"

I generate some non-Gaussian data, and use two kinds of DNN models, one with BN and the other without BN.

+ +

I find that the DNN model with BN can't predict well.

+ +

The code is shown as follows:

+ + + +
import numpy as np
+import scipy.stats
+import matplotlib.pyplot as plt
+from keras.models import Sequential
+from keras.layers import Dense,Dropout,Activation, BatchNormalization
+
+np.random.seed(1)
+
+# generate non-gaussian data
+def generate_data():
+    distribution = scipy.stats.gengamma(1, 70, loc=10, scale=100)
+    x = distribution.rvs(size=10000)
+    # plt.hist(x)
+    # plt.show()
+    print ('[mean, var, skew, kurtosis]', distribution.stats('mvsk'))
+
+    y = np.sin(x) + np.cos(x) + np.sqrt(x)
+    plt.hist(y)
+    # plt.show()
+    # print(y)
+    return x ,y 
+
+x, y = generate_data()
+
+x_train = x[:int(len(x)*0.8)]
+y_train = y[:int(len(y)*0.8)]
+x_test = x[int(len(x)*0.8):]
+y_test = y[int(len(y)*0.8):]
+
+
+def DNN(input_dim, output_dim, useBN = True):
+    '''
+    Define a DNN model
+    '''
+    model=Sequential()
+
+    model.add(Dense(128,input_dim= input_dim))
+    if useBN:
+        model.add(BatchNormalization())
+    model.add(Activation('tanh'))
+    model.add(Dropout(0.5))
+
+    model.add(Dense(50))
+    if useBN:
+        model.add(BatchNormalization())
+    model.add(Activation('tanh'))
+    model.add(Dropout(0.5))
+
+    model.add(Dense(output_dim))
+    if useBN:
+        model.add(BatchNormalization())
+    model.add(Activation('relu'))
+
+    model.compile(loss= 'mse', optimizer= 'adam')
+    return model
+
+clf = DNN(1, 1, useBN = True)
+clf.fit(x_train, y_train, epochs= 30, batch_size = 100, verbose=2, validation_data = (x_test, y_test))
+
+y_pred = clf.predict(x_test)
+def mse(y_pred, y_test):
+    return np.mean(np.square(y_pred - y_test))
+print('final result', mse(y_pred, y_test))
+
+ +

The distribution of the input x looks like this:

+ +

+ +

If I add BN layers, the result is shown as follows:

+ +
Epoch 27/30
+ - 0s - loss: 56.2231 - val_loss: 47.5757
+Epoch 28/30
+ - 0s - loss: 55.1271 - val_loss: 60.4838
+Epoch 29/30
+ - 0s - loss: 53.9937 - val_loss: 87.3845
+Epoch 30/30
+ - 0s - loss: 52.8232 - val_loss: 47.4544
+final result 48.204881459013244
+
+ +

If I don't add BN layers, the predicted result is better:

+ +
Epoch 27/30
+ - 0s - loss: 2.6863 - val_loss: 0.8924
+Epoch 28/30
+ - 0s - loss: 2.6562 - val_loss: 0.9120
+Epoch 29/30
+ - 0s - loss: 2.6440 - val_loss: 0.9027
+Epoch 30/30
+ - 0s - loss: 2.6225 - val_loss: 0.9022
+final result 0.9021717561981543
+
+ +

Does anyone know the theory behind why BN is not suitable for non-Gaussian data?

+",23200,,26652,,11/3/2019 9:38,7/21/2022 0:13,Is batch normalization not suitable for non-gaussian input?,,1,1,,,,CC BY-SA 4.0 +16241,2,,11139,11/3/2019 6:57,,3,,"

If you want to count the number of objects using a neural network, you can use a pretrained YOLO with the bottom prediction layer removed, and feed the features to a classification feed-forward layer of, let's say, 1000 classes representing 0-999 objects in the image. You can then train it and propagate the gradients through it. For example, in the PyTorch code for YOLO (source: https://github.com/eriklindernoren/PyTorch-YOLOv3), +you can add an nn.Linear and use cross-entropy loss to classify the number of objects. You can also change the architecture completely. Maybe you can try adding layers to a ResNet or another classification network to count the number of objects. Hope this can help you and have a nice day!
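
+ +

As a rough illustration (not the linked repository's API), a counting head on top of any convolutional feature extractor could look like this minimal PyTorch sketch:

import torch.nn as nn

class ObjectCounter(nn.Module):
    # `backbone` is any feature extractor (e.g. a pretrained detector trunk) returning
    # a (batch, feat_dim, H, W) feature map; the names here are assumptions.
    def __init__(self, backbone, feat_dim, max_count=999):
        super().__init__()
        self.backbone = backbone
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(feat_dim, max_count + 1)   # classes 0..max_count objects

    def forward(self, x):
        features = self.pool(self.backbone(x)).flatten(1)
        return self.head(features)                       # train with nn.CrossEntropyLoss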

+",23713,,,,,11/3/2019 6:57,,,,1,,,,CC BY-SA 4.0 +16242,1,,,11/3/2019 7:33,,1,323,"

I have a structured dataset of around 100 gigs, and I am using a DNN for classification in TF 2.0. Because of this huge dataset, I cannot load the entire data in memory for training. So, I'll be reading the data in batches to train the model.

+ +

Now, the input to the network should be normalized, and for that I need the training dataset's mean and SD. I have been reading the TensorFlow docs to get info on how to normalize features when reading data in batches, but couldn't find anything. Though I found this article, it only covers the case where the entire data can be loaded in memory.

+ +

So, if any of you have worked on creating such a TensorFlow data pipeline for normalizing input features while loading data in batches and training the model, it would be helpful.

+",31002,,,,,7/31/2020 21:04,TensorFlow 2.0 - Normalizing input to DNN (on structured data),,2,0,,8/10/2020 17:01,,CC BY-SA 4.0 +16243,2,,16220,11/3/2019 10:30,,1,,"

If you want to use machine learning for such a project, you can use vibrations data directly, and treat the problem as a regular audio classification problem.

+ +

A simple approach would be to use a neural network with convolutions. This would take care of feature extraction for you. And maybe follow these with dense layers at the end.
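
+ +

As a rough starting point, here is a minimal Keras sketch of such a 1D-convolutional classifier. The window length of 4096 samples and the number of stroke classes are assumptions you would set from your own recordings:

from tensorflow import keras

num_strokes = 5                                          # hypothetical number of stroke classes
model = keras.Sequential([
    keras.layers.Conv1D(16, kernel_size=64, strides=4, activation='relu',
                        input_shape=(4096, 1)),          # fixed-length vibration windows
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, kernel_size=16, activation='relu'),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(num_strokes, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])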

+ +

Given that, it would be easier to make suggestions if you posted samples of your data.

+ +

Edit: Also, keep in mind that machine learning usually requires large datasets, so if you are collecting data all by yourself in the way you describe, you might not have enough samples to train a good model. In such a case, when the number of samples is limited, one can use transfer learning - using a pre-trained model - but I am not aware of any such pre-trained models for wave data.

+",22301,,22301,,11/3/2019 11:05,11/3/2019 11:05,,,,0,,,,CC BY-SA 4.0 +16245,2,,16226,11/3/2019 14:41,,3,,"

Assumptions

+ +

Different model structures encode different assumptions - while we often make simplifying assumptions that aren't strictly correct, some assumptions are more wrong than others.

+ +

For example, your proposed structure of ""just pass the $X$ number of letters leading up to the last letter into an FFNN"" makes the assumption that all the information relevant for the decision is fully obtainable from the $X$ previous letters, and that the $(X+1)$st and earlier input letters are not relevant - in some sense, an extension of the Markov property. Obviously, that's not true in many cases; there are all kinds of structures where long-term relationships matter, and assuming that they don't matter leads to a model that intentionally doesn't take such relationships into account. Furthermore, it would make an independence assumption that the effects of the $X$th, $(X-1)$st and $(X-2)$nd elements on the current output are entirely distinct and separate; you don't make an assumption that those features are related, while in most real problems they are.

+ +

The classic RNN structures also make some implicit assumptions, namely, that only the preceding elements are relevant for the decision (which is wrong for some problems, where information from the following items is also required), and that the transformative relationship between the input, output and the passed-on state is the same for all elements in the chain, and that it doesn't change over time; That's also not certainly true in all cases, this is quite a strong restriction, but that's generally less wrong than the assumption that the last $X$ elements are sufficient, and powerful true (or mostly true) restrictions are useful (e.g. the No Free Lunch Theorem applies) for models that generalize better; just like e.g. enforcing translational invariance for image analysis models, etc.

+",1675,,2444,,11/3/2019 21:36,11/3/2019 21:36,,,,1,,,,CC BY-SA 4.0 +16246,2,,16242,11/3/2019 15:12,,0,,"

One way could be to first iterate over the dataset in batches, just to get the mean and SD of the full training set. Then, when running training, use those precomputed statistics to normalize each batch.
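
+ +

A minimal sketch of this two-pass idea (plain NumPy; batched_dataset() is a hypothetical generator standing in for however you read your batches, e.g. iterating a tf.data.Dataset):

import numpy as np

# First pass: accumulate sufficient statistics batch by batch, never holding the full data in memory
count, total, total_sq = 0, 0.0, 0.0
for batch in batched_dataset():          # hypothetical generator yielding 2D numpy arrays
    batch = np.asarray(batch, dtype=np.float64)
    count += batch.shape[0]
    total += batch.sum(axis=0)
    total_sq += np.square(batch).sum(axis=0)

mean = total / count
std = np.sqrt(total_sq / count - np.square(mean))

# Second pass (training): normalize each batch with the statistics from the first pass
def normalize(batch):
    return (batch - mean) / (std + 1e-8)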

+",19358,,,,,11/3/2019 15:12,,,,0,,,,CC BY-SA 4.0 +16247,2,,16224,11/3/2019 16:30,,4,,"

In reference to your exact question, there is published research that attempts to bring these two areas together.

+

For example, HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving (2017) by Cezary Kaliszyk, François Chollet, Christian Szegedy. This group also has other published work related to the subject.

+

Regardless of their results, they list several areas of logical systems that are highly suited to machine learning methods (section 3.1, p. 4):

+
+
    +
  • Predicting whether a statement is useful in the proof of a given conjecture
  • Predicting the dependencies of a proof statement (premise selection)
  • Predicting whether a statement is an important one (human named)
  • Predicting which conjecture a particular intermediate statement originates from
  • Predicting the name given to a statement
  • Generating intermediate statements useful in the proof of a given conjecture
  • Generating the conjecture the current proof will lead to
+
+

It's tough to know whether or not you can combine Higher Order Logic and Machine Learning in an effective way without needing to create a general AI. This is equivalent to wondering if an effective merging of the two areas is an AI-complete / AI-hard problem.

+

There are active attempts at general AI by researchers such as Ben Goertzel (many others as well but just to give a popular name for googling). Research into general AI would give you an idea of whether or not other pieces of the puzzle are needed in order to create something "highly intelligent".

+",31005,,2444,,12/11/2021 10:35,12/11/2021 10:35,,,,0,,,,CC BY-SA 4.0 +16248,5,,,11/3/2019 16:38,,0,,"

For more info, see https://en.wikipedia.org/wiki/Expert_system.

+",2444,,2444,,11/3/2019 16:38,11/3/2019 16:38,,,,0,,,,CC BY-SA 4.0 +16249,4,,,11/3/2019 16:38,,0,,"For questions related to expert systems, which are computer systems that emulate the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence software (symbolic AI). ",2444,,2444,,11/3/2019 16:38,11/3/2019 16:38,,,,0,,,,CC BY-SA 4.0 +16250,2,,16240,11/3/2019 18:21,,1,,"

Batch normalization helps gradient-descent-based learning have an easier time traversing the loss manifold, but in your case using it together with a ReLU as the final activation is problematic: it means the output for one sample is tied to the other samples in the batch.

+ +

Remove that last BN and you get better results, but also understand that BN is inherently problematic for this task. Think of DNNs as featurizers; BN in this case normalizes out the two batch-wide statistics, and if those don't align with the original distribution this introduces an error, which leads to an error in the output. In theory, if BN gets the perfect statistics of the Gaussian it should not matter too much, so one thing I tried with your code was to remove the last BN and increase N to 100,000 while increasing the batch size to 10,000, and you see a huge boost in performance.

+",25496,,,,,11/3/2019 18:21,,,,0,,,,CC BY-SA 4.0 +16251,1,16637,,11/3/2019 23:06,,1,54,"

I have data for 695 hours. I use the first 694 hours to train the network and the 695th hour to validate it. Now my goal is to predict the next hour.

+ +

How can I use my trained network to predict the next hour, that is, the 696th hour (which I do not have access to)?

+",30551,,2444,,11/19/2019 20:03,11/19/2019 20:03,How can I test my trained network on the next unavailable hour?,,1,0,,,,CC BY-SA 4.0 +16252,1,,,11/4/2019 0:06,,5,228,"

I am trying to write a CNN from scratch and am wondering if it is possible to vectorize the convolution step.

+

For example, if I had a dataset of 500 RGB images of size 32x32x3, and wanted the first convolutional layer to have 64 filters, how would I go about the vectorization of this layer?

+

Currently, I am running through all 500 images in a for loop, convolving each one individually. I do this for all the images up to the flattening stage (where it essentially becomes a normal NN again), at which point I can use the normal vectorisation approach to get to my output, etc.

+

A holistic overview of the process would be appreciated, as I am struggling to get my head around it and to find any information on the matter online.

+",29877,,2444,,1/11/2021 0:58,1/11/2021 0:58,Is it possible to vectorise a CNN?,,1,1,,,,CC BY-SA 4.0 +16253,2,,16162,11/4/2019 1:46,,2,,"

Direct Answer to Your Question:--

+ +

Google uses the term: Automated Machine Learning.

+ +
+ +

What this Answer is About:--

+ +
+

"" ... A general AI x creates another AI y which is better than x. ... "" ~ Ashwin Rohit (Stack Exchange user, Opening Poster)

+
+ +

What is the term for this: ""A.I. creating A.I.""?

+ +

-

+ +

What is some theory behind this:--

+ +
+

""The AutoML procedure has so far been applied to image recognition and language modeling. Using AI alone, the team have observed it creating programs that are on par with state-of-the-art models designed by the world’s foremost experts on machine learning."" – Google's AI Is Now Creating Its Own AI. (2017, May 22). Retrieved from < https://www.iflscience.com/technology/google-ai-creating-own-ai/ >

+
+ +
+ +

Layperson Explanation:--

+ + + +
+

"" ... Unfortunately, even people who have plenty of coding knowledge might not know how to create the kind of algorithm that can perform these tasks. Google wants to bring the ability to harness artificial intelligence to more people, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software.

+ +

The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in. ... ""

+ +

– Google's AI Can Make Its Own AI Now. (2017, October 19). Retrieved from < https://www.mentalfloss.com/article/508019/googles-ai-can-make-its-own-ai-now >

+
+ +

We use programs to write programs.

+ +

Researchers often need tools to solve complicated problems and algorithms are often needed. They don't always have the technical experience to do this. This is an artificial intelligence-based solution to the ever-growing challenge of applying machine learning to this problem.

+ +

This allows non-experts to engage in predictive performance of their final machine learning models.

+ +

There is the potential of ""feed-back between systems"" when A.I. feeds into A.I., which continues to feed into itself, ad infinitum.

+ +
+ +

Business Applications and Practical Uses:--

+ +

Defer to the book: Automated Machine Learning for Business.

+ + + +
+ +

Technical Mirror:--

+ + + +
+

""What is AutoML? + Automated Machine Learning provides methods and processes to make Machine Learning available for non-Machine Learning experts, to improve efficiency of Machine Learning and to accelerate research on Machine Learning.

+ +

Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform the following tasks:

+ +
    +
  • Preprocess and clean the data.
  • Select and construct appropriate features.
  • Select an appropriate model family.
  • Optimize model hyperparameters.
  • Postprocess machine learning models.
  • Critically analyze the results obtained.
+ +

As the complexity of these tasks is often beyond non-ML-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML.""

+ +

– AutoML. (n.d.). Retrieved from < http://www.ml4aad.org/automl/ >

+
+ + + +
+ +

Sources and References; and Further Reading:--

+ + +",25982,,25982,,11/5/2019 6:36,11/5/2019 6:36,,,,1,,,,CC BY-SA 4.0 +16254,2,,16252,11/4/2019 5:18,,1,,"

Yes you can vectorize a CNN. See this github file for details: https://github.com/parasdahal/deepnet/blob/master/deepnet/layers.py

+ +

After looking through it, it basically reshapes the input into a matrix and applies a matrix multiplication with the weights, together with some other reshaping. Please refer to the GitHub repository for details; a rough sketch of the idea is given below.

+ +
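
To make the idea concrete, here is a minimal NumPy sketch of the usual im2col trick (the function names and the channels-last layout are my own choices, not taken from the linked repository): every k x k patch of every image is unrolled into a row of one big matrix, so the convolution over all images becomes a single matrix multiplication instead of a Python loop.

import numpy as np

def im2col(images, k, stride=1):
    # images: (N, H, W, C) -> (N * out_h * out_w, k * k * C)
    N, H, W, C = images.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    cols = np.zeros((N, out_h, out_w, k, k, C))
    for i in range(k):
        for j in range(k):
            cols[:, :, :, i, j, :] = images[:, i:i + stride * out_h:stride,
                                               j:j + stride * out_w:stride, :]
    return cols.reshape(N * out_h * out_w, k * k * C), out_h, out_w

def conv_forward(images, filters):
    # filters: (k, k, C, F); one matrix multiply replaces the per-image loops
    k, F = filters.shape[0], filters.shape[3]
    cols, out_h, out_w = im2col(images, k)
    out = cols @ filters.reshape(-1, F)              # (N*out_h*out_w, F)
    return out.reshape(images.shape[0], out_h, out_w, F)

# e.g. 500 RGB 32x32 images and 64 filters of size 3x3:
# out = conv_forward(np.random.randn(500, 32, 32, 3), np.random.randn(3, 3, 3, 64))

+ +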

Hope this can help you and have a nice day!

+",23713,,,,,11/4/2019 5:18,,,,0,,,,CC BY-SA 4.0 +16256,2,,16176,11/4/2019 7:33,,1,,"

Doing some unsupervised learning will give you some divisions within each label, but you can't be sure it will split on ""E-sports"" and ""Sports"". It might as well split sports into ""land sports"" and ""water sports"" or ""long sports"" and ""short sports"".

+ +

The only way to reliably split ""sports"" into the 2 subsections you want is to relabel the data manually, I am afraid.

+",29671,,,,,11/4/2019 7:33,,,,0,,,,CC BY-SA 4.0 +16260,1,,,11/4/2019 12:05,,3,604,"

I am new to machine learning. I am reading this blog post on the VC dimension.

+ +

$\mathcal H$ consists of all hypotheses in two dimensions $h: \mathbb{R}^2 \rightarrow \{-1, +1\}$, positive inside some square boxes and negative elsewhere.

+ +

An example.

+ +

+ +

My questions:

+ +
    +
  1. What is the maximum number of dichotomies for the 4 data points? I.e., calculate $m_{\mathcal H}(4)$.

  2. It seems that the square can shatter 3 points but not 4 points, so the VC dimension of a square is 3. What is the proof behind this?
+",31018,,,user9947,4/6/2020 4:44,4/6/2020 20:54,What is the maximum number of dichotomies in a square?,,1,2,,,,CC BY-SA 4.0 +16262,2,,16242,11/4/2019 16:15,,0,,"

The central limit theorem tells us that the error in an estimate of the mean or standard deviation of a dataset will decline as $\frac{1}{\sqrt{n}}$, where $n$ is the number of samples taken at random from the set, and combined together to compute the mean and deviation.

+ +

If you select at random, for example, $10^6$ examples (probably a few megabytes), then the mean and standard deviation you compute from those examples will be within $\frac{1}{\sqrt{10^6}} = \frac{1}{10^3}$ of the ""true"" answer. That's one part in 1,000, which is certainly accurate enough to use for re-scaling the dataset.
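
+ +

A quick empirical check of that claim (a toy sketch; the distribution and the sizes are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
full_data = rng.normal(loc=5.0, scale=2.0, size=10_000_000)   # stands in for the full dataset
sample = full_data[:1_000_000]                                # i.i.d. data, so a slice acts as a random sample

# The sample mean typically differs from the full mean by about scale/sqrt(n) = 2/1000 = 0.002
print(full_data.mean(), sample.mean())
print(full_data.std(), sample.std())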

+",16909,,,,,11/4/2019 16:15,,,,3,,,,CC BY-SA 4.0 +16263,1,16265,,11/4/2019 16:35,,1,388,"

I am interested in exploring whether AI techniques can derive hidden patterns of relationships in a data set. For example, from among house size, lot size, age of house and asking price, what formula best predicts selling price?

+

In explorations around how this might be done, I tried to use a neural network to solve for a predictable relationship between two variables to predict a third, so I trained my neural network with inputs consisting of the lengths of two sides of a triangle, and the result being the length of the hypotenuse. I couldn't get it to work.

+

I was told by somebody who understands all this better than me that the reason it failed is because conventional neural networks are not good at modeling non-linear relationships.

+

If that is true, I wonder if there is some other AI technique that could 'derive' a network modeling the Pythagorean theorem from a training data set with better results than a normal neural network?

+",31024,,2444,,12/30/2021 14:18,12/30/2021 14:18,Which AI technique is best suited to discovering non-linear relationships in data?,,2,0,,,,CC BY-SA 4.0 +16264,1,,,11/4/2019 17:39,,5,79,"

I'm working on a project where there is a limited dataset of videos (about 200). We want to train a model that can detect a single class in the videos. That class can be of multiple different types of shapes (thin wire, a huge area of the screen, etc).

+

There are three options on how we can label this data:

+
    +
  1. Image classification (somewhere in the image is this class)
  2. Bounding box (in this area, there is the class)
  3. Semantic segmentation (these pixels are the class)
+

My assumption is that if the model was trained on semantic segmentation data it would perform slightly better than bounding box data. I'm also assuming it would perform way better than if the model only learned on image classification data. Is that correct?

+",20338,,2444,,1/30/2021 19:58,1/30/2021 19:58,Do models train better if the labelling information is more specific (or dense)?,,1,0,,,,CC BY-SA 4.0 +16265,2,,16263,11/4/2019 18:44,,2,,"
+

For example, from among house size, lot size, age of house and asking price, what formula best predicts selling price?

+
+ +

There is no general formula for this. Search for neural network regression and you can get started. The AI technique, or any prediction algorithm in general, will learn a function that maps from the input feature vector $(x_1, ..., x_n)$, where each element in the vector is a measurement of the $\text{predictors/independent variables/regressors}$, to the $\text{variable of interest/dependent variable}$, i.e. the $\text{selling price}$.

+ +
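
As a minimal sketch of what that looks like for your triangle example (using scikit-learn's MLPRegressor; the layer sizes and iteration count are arbitrary choices on my part):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(5000, 2))          # lengths of the two legs
y = np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2)        # target: length of the hypotenuse

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation='relu',
                     max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[3.0, 4.0]]))               # should come out close to 5.0 (within the training range)

+ +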
+

I was told by somebody who understands all this better than me that the reason it failed is because conventional neural networks are not good at modeling non-linear relationships.

+
+ +

The statement is incorrect. In fact, the opposite is true: neural networks with non-linear activation functions are well suited to modeling non-linear relationships. Examples are the highly successful image classification CNN architectures like Inception, ResNet, etc.

+",16708,,16708,,11/5/2019 9:24,11/5/2019 9:24,,,,2,,,,CC BY-SA 4.0 +16266,1,,,11/4/2019 18:46,,3,1598,"

I would like to develop a neural network to measure the distance between two opposite sides of an object in an image (in a similar way that the fractional caliper tool measures an object).

+

So, given an image of an object, the neural network should produce the depth or height of the object.

+

Which computer vision techniques and neural networks could I use to solve this problem?

+",31027,,2444,,10/16/2021 22:53,10/16/2021 22:53,"How can I ""measure"" an object using Computer Vision techniques and neural networks?",,2,0,,,,CC BY-SA 4.0 +16268,1,,,11/4/2019 19:30,,2,139,"

I'm new to deep learning, and I have some conceptual problems. I followed a simple tutorial here, and trained a model in Keras to do image classification on 10 classes of logos. I prepared 10 classes, with each class having almost 100 images. My trained ResNet50 model performs exceptionally well when the image is one of those 10 logos, with 1.00 probability. But the problem is that if I pass a non-logo item, a random image that is totally unrelated visually, it still marks it as one of those logos with close to 1.00 probability!

+ +

I'm confused. Am I missing anything? Why is this happening? How can I fix it? I need to find logos in video frames, but right now, with high probability, each frame is marked as a logo!

+ +

Here is my simple training code:

+ +
def build_finetune_model(base_model, dropout, fc_layers, num_classes):
+    for layer in base_model.layers:
+        layer.trainable = False
+
+    x = base_model.output
+    x = Flatten()(x)
+    for fc in fc_layers:
+        # New FC layer, random init
+        x = Dense(fc, activation='relu')(x) 
+        x = Dropout(dropout)(x)
+
+    # New softmax layer
+    predictions = Dense(num_classes, activation='softmax')(x) 
+    finetune_model = Model(inputs=base_model.input, outputs=predictions)
+    return finetune_model
+finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
+adam = Adam(lr=0.00001)
+finetune_model.compile(adam, loss='categorical_crossentropy', metrics=['accuracy'])
+filepath=""./checkpoints/"" + ""ResNet50"" + ""_model_weights.h5""
+checkpoint = ModelCheckpoint(filepath, monitor=[""acc""], verbose=1, mode='max')
+callbacks_list = [checkpoint]
+
+history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8, 
+                                       steps_per_epoch=steps_per_epoch, 
+                                       shuffle=True, callbacks=callbacks_list)
+
+plot_training(history)
+
+",9053,,9053,,11/5/2019 17:40,3/29/2021 21:05,Why is this ResNet50 misclassifying objects?,,1,5,,,,CC BY-SA 4.0 +16269,2,,16263,11/4/2019 19:34,,3,,"

You are mixing up lots of things here. Specifically, you seem to be lacking a basic understanding of artificial neural networks and what they can do (e.g. which types of artificial neural networks are linear classifiers/regressors and which can model non-linear relationships).

+ +

Therefore, I'd take a step back and start with understanding the basics of AI. The go-to book for that is 'Artificial Intelligence: A Modern Approach' by Russell and Norvig. It might be a slower (and more theoretical) start, but IMO that is the right approach to actually understand what you are doing.

+",30789,,,,,11/4/2019 19:34,,,,0,,,,CC BY-SA 4.0 +16271,2,,16264,11/4/2019 23:20,,3,,"

It depends on what your ultimate goal is. If your goal is simply to classify the object in the image, having a more complex output won't help; a simpler output representation yields better results. If your goal is to detect the bounding box, output the bounding box. There is no need for more complex output features. If you use a segmentation method for bounding box detection, it is more prone to error because of its excess output features.

+ +

Assume you are given a grade 6 math test. If you do the questions using grade 12 maths knowledge, with calculus and so on, to make your calculations seem more complex, will you get a higher mark than doing it the normal way? No! The mark is the same, or even lower, due to the higher chance of error in doing complex calculations.

+ +

In short, higher complexity on your labels won't help your task if it is a simple task. Hope this would help you and have a nice day!

+",23713,,,,,11/4/2019 23:20,,,,2,,,,CC BY-SA 4.0 +16273,2,,16266,11/5/2019 1:22,,2,,"

Father Ted explains why this is a hard problem.

+ +

Seriously -- if you have stereo images it should be possible, since that's what we use for depth perception. If you know how far away points x1 and x2 are, then you can measure distance using trigonometry. No neural networks needed, I guess. https://en.wikipedia.org/wiki/Triangulation_(computer_vision)
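
+ +

For a calibrated stereo pair, the standard relation (a well-known result, not something specific to the linked article) is $Z = \frac{f\,B}{d}$, where $Z$ is the depth of the point, $f$ is the focal length (in pixels), $B$ is the baseline between the two cameras, and $d$ is the disparity, i.e. the horizontal pixel offset of the point between the left and right images.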

+",31035,,31035,,11/5/2019 18:22,11/5/2019 18:22,,,,0,,,,CC BY-SA 4.0 +16274,1,16275,,11/5/2019 6:40,,1,77,"

I believe I saw an article about an AI that was able to decode human vision 'brain-waves' in real-time, which would create a blurry image of what the human was seeing.

+ +

This AI Decodes Your Brainwaves and Draws What You're Looking at

+ +

Is anyone aware where I can find this?

+",31041,,25982,,11/5/2019 13:19,11/6/2019 11:11,Have any AI's been able to decode human vision 'thoughts',,2,5,,,,CC BY-SA 4.0 +16275,2,,16274,11/5/2019 9:22,,0,,"

Direct Answer to the OP's Question

+
+

"Have any AI's been able to decode human vision 'thoughts'" ~ Albert (Stack Exchange user, OP)

+
+

This is technology that can produce pictures of what the user is thinking about through scanning a brain.

+
+

"Is anyone aware where I can find this?" ~ Albert (Stack Exchange user, OP)

+
+

Emotiv is the most accessible commercial model (circa. late 2019).

+
+

The OP is probably interested in consumer brain–computer interfaces (also known as BCIs). These are varied technologies which range from:--

+
    +
  1. Simple "yes-no" brain-interface (e.g. for people in a coma)
  2. +
  3. Advanced programs that control video games through thought, such as a high fantasy wizard duel (Do a search about this on YouTube!)
  4. +
  5. Technology that can produce pictures of what the user is thinking about through scanning a brain. (This was mentioned by the OP.)
  6. +
+
+

This Wikipedia page, < https://en.wikipedia.org/wiki/Consumer_brain%E2%80%93computer_interfaces >, compares different models of BCIs.

+ +
+

There are also some very serious ethical issues regarding being able to "read brains." I mentioned the medical use, and I hope it goes in this direction.

+

Deep philosophical discussions can be had on whether it is appropriate to read the brain of a supposed criminal. (I would personally say no.)

+

(https://plato.stanford.edu/entries/neuroethics/)

+
+

[I can't comment on the technical details of this. It is outside of my purview. For example, if you need information about the Python-BCI interface, you will need an expert.]

+
+

[This is not medical and/or legal advice. This is theoretical discussion.]

+",25982,,-1,,6/17/2020 9:57,11/6/2019 1:57,,,,0,,,,CC BY-SA 4.0 +16276,2,,16266,11/5/2019 9:57,,2,,"

If the measurements you want from the object aren't too complicated (ie. length of a clearly defined feature), and if you are able to acquire a training dataset of images of the objects similar to what your model will see in your use case (same scale/distance), their bounding boxes and their measurements, a model you could try to implement is a Multi-Task Convolutional Neural Network (MTCNN).

+

MTCNNs are typically used for face detection and alignment, but I would imagine it is possible to adapt them to your use case given proper training and tuning. If there are more complicated measurements that you want to obtain, you could pass on the detected objects to another model to make more specific measurements.

+

You will have a problem, however, with measuring depth. Depth is hard to estimate from an image because of the information that we lose when moving from a 3D to a 2D space. A longer explanation of this is available in MachineEpsilon's answer to the Cross Validated question "how to detect the exact size of an object in an image using machine learning?", but quoting his main statements:

+
+

This task of depth estimation is part of a hard and fundamental problem in computer vision called 3D reconstruction. Recovering metric information from images is sometimes called photogrammetry. It's hard because when you move from the real world to an image you lose information.

+

Specifically, the projective transformation $T$ that takes your 3D point $p$ to your 2D point $x$ via $x = Tp$ does not preserve distance. Since $T$ is a $2 \times 3$ matrix, calculating $T^{-1}$ to solve $T^{-1}x = p$ is an underdetermined inverse problem. A consequence of this is that pixel lengths are not generally going to be meaningful in terms of real world distances

+
+

However, that's not to say you could add additional sensors to resolve the depth estimation problem (ie. stereoscopic cameras or infrared distance sensors) if additional cost is not an issue.

+",23289,,-1,,6/17/2020 9:57,11/6/2019 3:11,,,,0,,,,CC BY-SA 4.0 +16279,1,,,11/5/2019 10:34,,1,20,"

I have videos from a camera mounted on a helmet, and the manually segmented labels (masks) for them.

+ +

The mask is valid through the entire video; only the scene varies. In different videos the camera is mounted differently on top of the helmet.

+ +

Things I've tried:

+ +
    +
  • Training semantic segmentation on frame-mask pairs
  • Training semantic segmentation of concatenated frames with the mask
  • Averaging consecutive frames and calculating the time-wise std of the pixels, and feeding the NN with this as an input
  • Ensembling (averaging) segmentation results from N frames
  • Usage of classic background subtraction techniques such as MOG2 (worse)
+ +

Although DNN models achieve 99% accuracy during training, in some of my test videos the model is missing a large part of the helmet.

+ +

I'm certain this task can achieve ~100% accuracy even for never-before-seen examples.

+ +

Do you have some ideas?

+",25412,,,,,11/5/2019 10:34,Segmentation of a static object in a video,,0,0,,,,CC BY-SA 4.0 +16282,1,,,11/5/2019 11:08,,3,66,"

Is the Assumption-based Truth Maintenance System still used to maintain consistency while explicitly accounting for assumptions?

+",31052,,2444,,11/5/2019 14:29,6/10/2021 21:55,Is the Assumption-based Truth Maintenance System still used?,,0,0,,,,CC BY-SA 4.0 +16285,2,,16268,11/5/2019 14:27,,1,,"

Your problem is a classification problem. If you are following the tutorial, it is using the ResNet50 network, which is a convolutional neural network with one fully connected layer at the end. At the end the activation function is softmax. Detailed description of the activation function can be found here: Softmax function explained

+ +

+ +

Basically, softmax increases the difference between the higher probability and lower probability. It also limits the output between 0 and 1.

+ +
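
In symbols, this is the standard definition (not something specific to the tutorial): for output logits $z_1, \dots, z_K$, softmax produces $\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}$, so every output lies in $(0, 1)$ and the outputs sum to $1$ - there is always a single highest-probability class, whether or not a logo is present.

+ +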

Problem Origin

+ +

Due to the nature of the softmax function, it always chooses the best one and enlarges its value to be near one, even if the range of the output predictions is very small, like 0-0.1. Also, your training data only has the 10 logos labelled, so if the network sees an unseen image with no logo it recognizes, it predicts the class with the most similarity. If you want to classify images with no logo in them, you should add an extra class to the training dataset, and update the training code, so the network learns to classify images that do not belong to any of the 10 logo classes into a separate class, maybe called unlabeled. Hope I can help you and have a nice day.

+",23713,,,,,11/5/2019 14:27,,,,5,,,,CC BY-SA 4.0 +16286,5,,,11/5/2019 14:33,,0,,,2444,,2444,,11/5/2019 14:33,11/5/2019 14:33,,,,0,,,,CC BY-SA 4.0 +16287,4,,,11/5/2019 14:33,,0,,"For questions related to the Vapnik–Chervonenkis theory (also known as VC theory), which a form of computational learning theory, so it attempts to explain the learning process from a statistical point of view, developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis.",2444,,2444,,11/5/2019 14:33,11/5/2019 14:33,,,,0,,,,CC BY-SA 4.0 +16289,5,,,11/5/2019 14:47,,0,,,2444,,2444,,11/5/2019 14:47,11/5/2019 14:47,,,,0,,,,CC BY-SA 4.0 +16290,4,,,11/5/2019 14:47,,0,,"For questions related to the Vapnik–Chervonenkis (VC) dimension, originally defined by Vladimir Vapnik and Alexey Chervonenkis, which is a measure of the capacity (complexity, expressive power, richness, or flexibility) of a space of functions that can be learned by a statistical classification algorithm.",2444,,2444,,11/5/2019 14:47,11/5/2019 14:47,,,,0,,,,CC BY-SA 4.0 +16291,1,,,11/5/2019 14:57,,5,311,"

In the book ""Perceptrons: An Introduction to Computational Geometry"" by Minsky and Papert (1969), which part of this book tells that a single-layer perceptron could not solve the XOR problem?

+ +

I have already scanned it, but I did not find that part. Or am I missing something?

+",31061,,2444,,1/19/2021 2:18,1/19/2021 2:18,"Which part of ""Perceptrons: An Introduction to Computational Geometry"" tells that a perceptron cannot solve the XOR problem?",,1,0,,,,CC BY-SA 4.0 +16294,1,,,11/5/2019 16:48,,6,665,"

I am looking for a result that shows the convergence of semi-gradient TD(0) algorithm with non-linear function approximation for on-policy prediction. Specifically, the update equation is given by (borrowing notation from Sutton and Barto (2018))

+ +

$$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] \nabla \hat v(S, \mathbf w)$$

+ +

where $\hat v(S, \mathbf w)$ is the approximate value function parameterized by $\mathbf w$.

+ +

Sutton and Barto (2018) mention that the above update equation converges when $\hat v$ is linear in $\mathbf w$. But I couldn't find a similar result for non-linear function approximation. Any help would be greatly appreciated.

+",31063,,2444,,1/4/2020 21:33,1/4/2020 21:33,Convergence of semi-gradient TD(0) with non-linear function approximation,,1,0,,,,CC BY-SA 4.0 +16295,1,,,11/5/2019 20:18,,2,39,"

So, I’m looking into some dynamic ways in which one can drive the behavior of a video game character. Specifically an NPC (Non playable character) that will be observable from the players point of view. Something I’d like to clarify from the start is what I mean by behavior. Since video games are understood visually, then I would qualify behavior to be anything visual, such as gestures, mannerisms or actions in any local space.

+ +

Let’s take a common archetype as an example. We’ll say we want the behavior of a villain. My first thought, was to use videos as training data. Videos of specific subjects or actors in a villainous scene (think Frankenstein, Dracula, Emperor Palpatine in Star Wars etc...) in hopes that an understanding of their mannerisms, body language and gestures could be captured and later applied to 3D animation dynamically.

+ +

I do understand that anything 3D typically requires rigging and animation. I’m currently not exactly sure how to marshal the data from one format (video analyses) to 3D animation. I thought I’d start working on the concept from high level first.

+ +

Any thoughts?

+",20271,,,,,11/5/2019 20:18,Can a Video Game Characters Behavior be directed by a NN?,,0,2,,,,CC BY-SA 4.0 +16297,5,,,11/5/2019 20:54,,0,,,2444,,2444,,11/5/2019 20:54,11/5/2019 20:54,,,,0,,,,CC BY-SA 4.0 +16298,4,,,11/5/2019 20:54,,0,,"For questions related to thought vectors, which are a generalization of word embeddings to e.g. sentences.",2444,,2444,,11/5/2019 20:54,11/5/2019 20:54,,,,0,,,,CC BY-SA 4.0 +16300,5,,,11/6/2019 0:29,,0,,"

https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence

+",1671,,1671,,11/6/2019 0:29,11/6/2019 0:29,,,,0,,,,CC BY-SA 4.0 +16301,4,,,11/6/2019 0:29,,0,,"For questions about AI milestone, both those achieved and those predicted. ",1671,,1671,,11/6/2019 0:29,11/6/2019 0:29,,,,0,,,,CC BY-SA 4.0 +16302,2,,16274,11/6/2019 11:11,,1,,"

There have been studies in University of Oregon and Kyoto University to be able to visualise thoughts and dreams on a screen using voxel values of an FMRI scan as input and an estimation of an image of the thoughts as the output. Instead of linking you to these studies and papers - you could just watch this episode of mind field where both these studies are demonstrated and linked.

+ +

The idea behind this is easier to understand if you have a good understanding of generative networks such as generative adversarial networks. Essentially, in GANs you'd map a known latent distribution to images in pixel space. You would be doing the same thing here, except that the latent distribution would now be the fMRI scan input, and the mapping would be learned in a supervised setting where the subjects are initially shown images. A very rough understanding of the idea can be drawn along these lines.

+",25658,,,,,11/6/2019 11:11,,,,2,,,,CC BY-SA 4.0 +16304,2,,2324,11/6/2019 11:24,,1,,"

To avoid a repetitive answer that has been already spoken about such as absurdly high iterative ability or it being able to create another AGI system and multiplying or anything sci-fi like - there is one line of thought I feel people do not speak enough about.

+ +

Our human senses are extremely limited i.e. we can see objects only when light from within the visible light spectrum (~ 400nm-700nm) reflects into our eyes, we can hear only a limited range of frequencies the rest being inaudible etc. An AGI system apart from its obvious intelligence, would be able to gain a significant amount of information from even common observations. It can see infrared, ultraviolet and radio waves as what we interpret as colours; it would be able to hear sounds that we did not know were being emitted at all. Essentially an AGI with good input sensor capabilities would be able to take information from experiencing the world as it actually is, and not a limited illusion we experience.

+",25658,,,,,11/6/2019 11:24,,,,0,,,,CC BY-SA 4.0 +16306,2,,15683,11/6/2019 12:06,,0,,"

I would personally recommend deeplearning.ai's course to begin with. There may be more comprehensive or better MOOCs covering basic MLPs, CNNs, RNNs, and the tuning and training of neural networks, but this is probably the most common one and the one that I can personally vouch for.

+ +

After this, I'd recommend you get a physical or PDF copy of Deep Learning by Goodfellow et al. and use it as reference material for any new idea you'd want to learn. I personally would not recommend reading the whole book; it's better as reference material, as it is quite comprehensive.

+ +

This should essentially give you enough knowledge to cover almost any paper/material on deep learning. The course mentioned (like most courses) covers LSTMs, as they are quite an old idea (~1997), and GANs are well covered in the book mentioned (the author invented them), since they are a more recent advancement (2014).

+ +

Hope this was helpful!

+",25658,,,,,11/6/2019 12:06,,,,0,,,,CC BY-SA 4.0 +16307,5,,,11/6/2019 13:45,,0,,,2444,,2444,,11/6/2019 13:45,11/6/2019 13:45,,,,0,,,,CC BY-SA 4.0 +16308,4,,,11/6/2019 13:45,,0,,"For questions related to the hyper-parameters of AI models and algorithms, which are parameters that are set before the learning process begins. For example, the number of hidden layers in a feed-forward neural network is usually a hyper-parameter.",2444,,2444,,11/6/2019 13:45,11/6/2019 13:45,,,,0,,,,CC BY-SA 4.0 +16309,5,,,11/6/2019 13:47,,0,,,2444,,2444,,11/6/2019 13:47,11/6/2019 13:47,,,,0,,,,CC BY-SA 4.0 +16310,4,,,11/6/2019 13:47,,0,,"For questions related to anomaly detection (or outlier detection) algorithms, which is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. There are unsupervised, supervised and semi-supervised anomaly detection algorithms.",2444,,2444,,11/6/2019 13:47,11/6/2019 13:47,,,,0,,,,CC BY-SA 4.0 +16311,1,16313,,11/6/2019 14:22,,6,375,"

The FaceNet model returns the loss of the predictions and ground-truth classes. How is this loss calculated?

+",30306,,2444,,11/11/2019 14:48,11/11/2019 14:48,What is the formula used to calculate the loss in the FaceNet model?,,1,0,,,,CC BY-SA 4.0 +16312,2,,16111,11/6/2019 15:37,,0,,"

I'll share my understanding so far. This kind of behavior is actually normal when using on-policy algorithms with a sparse final reward. The issue stems from the fact that once you get stuck in a behavior policy which does nothing (uses a ""do nothing"" action, for instance, until timeout), it's quite hard to get out of it, because you keep getting experiences that teach you nothing (no reward signal at all) and keep you in the same policy. Possible mitigations:

+ +
    +
  • Encourage more exploration (in A3C, make the entropy loss coefficient bigger) in order to recover more quickly from this type of stationary behavior.
  • Use an off-policy algorithm with a big enough replay buffer, so that even if you start behaving this way you still use experiences from a ""healthy"" older policy.
+ +

If you stick to a totally on-policy algorithm, making the batch_size bigger might help a little.

+",28125,,,,,11/6/2019 15:37,,,,0,,,,CC BY-SA 4.0 +16313,2,,16311,11/6/2019 15:47,,6,,"

The loss function used is the triplet loss function. Let me explain it part by part.

+ +
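
For reference, written out in full (this is the formula from the FaceNet paper, using the same notation as below): the loss is $\sum_{i=1}^{N} \left[ \, ||f^a_i - f^p_i||^2_2 - ||f^a_i - f^n_i||^2_2 + \alpha \, \right]_+$, where $[\cdot]_+ = \max(\cdot, 0)$, so only triplets that violate the margin $\alpha$ contribute to the loss.

+ +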

Notation

+ +

The $f^a_i$ denotes the anchor input image. The $f^p_i$ denotes the positive input image, which corresponds to the same person as the anchor image. The $f^n_i$ corresponds to the negative sample, which is a different person (input image) than the anchor image.

+ +

The formula explained step by step

+ +

The first part, $||f^a_i - f^p_i||^2_2$, calculates the distance between the anchor image's output features and the positive image's output features; you want this distance to be as small as possible, as the input is the same person. The second part, $||f^a_i - f^n_i||^2_2$, calculates the distance between the output features of the anchor image and the negative image. You want this distance to be as large as possible, as they are not the same person. Finally, the $\alpha$ term is a constant (hyperparameter) margin that is added to the loss to enforce a minimum gap between the two distances.

+ +

How it works

+ +

The loss function optimizes for the largest distance between the anchor and the negative sample, and the smallest distance between the positive and the anchor sample. It cleverly combines both metrics into one loss function and can optimize for both cases simultaneously. If there were no negative sample, the model would not be able to differentiate between different people, and vice versa.

+ +

Hope I can help you and have a nice day!

+",23713,,30306,,11/11/2019 8:49,11/11/2019 8:49,,,,2,,,,CC BY-SA 4.0 +16316,1,16342,,11/6/2019 21:47,,5,228,"

Robot technology is usually approached from an engineering perspective: a human programmer writes software which is executed by a robot that is doing a task.

+ +

But what would happen if the project were started with the opposite goal? The idea is that the human becomes the robot himself. That means the human uses makeup to make his face look more mechanical, buys special futuristic clothing which reflects the light, and imitates the workings of a kitchen robot in a roleplay.

+ +

What are methods human actors use to imitate robots?

+",,user11571,1671,,12/2/2019 21:41,12/2/2019 21:41,What are methods human actors use to imitate robots?,,2,4,,,,CC BY-SA 4.0 +16317,5,,,11/6/2019 21:54,,0,,,1671,,1671,,11/6/2019 21:54,11/6/2019 21:54,,,,0,,,,CC BY-SA 4.0 +16318,4,,,11/6/2019 21:54,,0,,For question related to adoption of AI in the real world.,1671,,1671,,11/6/2019 21:54,11/6/2019 21:54,,,,0,,,,CC BY-SA 4.0 +16319,2,,1644,11/6/2019 23:27,,1,,"

There is some research on this topic. See, for example, the papers Robot Identification and Localization with Pointing Gestures (2018) and Proximity Human-Robot Interaction Using Pointing Gestures +and a Wrist-mounted IMU (2019), by Boris Gromov et al., where the human is assumed to possess an inertial measurement unit (IMU) attached to the arm

+",2444,,,,,11/6/2019 23:27,,,,0,,,,CC BY-SA 4.0 +16320,5,,,11/6/2019 23:29,,0,,,2444,,2444,,11/6/2019 23:29,11/6/2019 23:29,,,,0,,,,CC BY-SA 4.0 +16321,4,,,11/6/2019 23:29,,0,,"For questions related to human-computer, human-robot or human-AI interaction, in the context of AI.",2444,,2444,,11/6/2019 23:29,11/6/2019 23:29,,,,0,,,,CC BY-SA 4.0 +16322,1,16341,,11/7/2019 1:02,,3,180,"

Until Chapter 6 of Sutton & Barto's book on Reinforcement Learning, the authors use $V$ for the current estimate of a state value. Equation (6.1), for example, shows:

+ +

$$ V(S_t) \leftarrow V(S_t) + \alpha[G_t - V(S_t)]\ \ \ \ \ \ (6.1)$$

+ +

However, on Chapter 7 they add a subscript to $V$. The first time this appears is on page 143 when they define the return from $t$ to $t+1$:

+ +

$$ G_{t:t+1} \dot{=} R_{t+1} + \gamma V_t(S_{t+1})$$

+ +

and say that $V_t : \mathcal{S} \rightarrow \mathbb{R}$ is ""the estimate at time $t$ of $v_\pi$.""

+ +

At first I thought I understood this as a natural consequence of considering $n$ steps ahead in the future and needing an extra index to go over the $n$ steps. But then this stopped making sense when I realized that an estimate for a state must be consolidated, no matter at which of $n$ steps that is coming from. After all, a state $s$ has a single value to estimate, $v_\pi(s)$, and that does not depend on $t$.

+ +

Then I thought that they are just taking into account that there are many successive estimates of $V$ as the algorithm progresses, so $V_t$ is just the estimate after processing the $n$ steps starting at time $t$. In other words, the subscript would be a rigorous mathematical way of denoting the sequence of algorithmic updates. But this does not make sense either since even in Chapter 6 and before, the estimate is also successively updated. See Equation (6.1), for example. The $V$ on the left-hand side is a different variable from the one on the right-hand side (this is why they must use $\leftarrow$ indicating an assignment as opposed to a mathematical equality with $=$). It could have easily been written with an index as well.

+ +

So, what is the purpose of the new index for $V$ in Chapter 7, and why is it more important at this particular chapter?

+ +

Edit and elaboration: Going back to the text, it seems to me that the new subscript is indeed added as an attempt for greater clarity, even though the subscript-less notation $V$ from previous chapters might have been kept (and in fact it is still used in the pseudo-code in page 144).

+ +

It seems the authors wanted to stress that the update of $V$ happens not only for every trace of $n$ steps, but also at every one of those steps.

+ +

However, I think this introduced a technical error, because suppose we just learned from an 10-step episode ($T=10$), using $n = 3$. Then the latest estimate of $v_\pi$ is $V_{T-1} = V_{10 - 1} = V_{9}$. Then at the next episode, the first time $V_{t + n}$ is used to inform a target update, it will be for $\tau = 0$ (from the pseudo-code), which implies $t - n + 1 = 0$, so $t = n - 1$, that is, $V_{t+n}=V_{n-1+n}=V_{2n-1}=V_5$, which is not the most up-to-date estimate $V_9$ of $v_\pi$.

+ +

Of course the problem would be easily solved if we simply set the next used estimate $V_{2n + 1}$ to be equal to the last episode's $V_{T-1}$, but to avoid confusion this would have to be explicitly stated somewhere.

+",30679,,30679,,11/7/2019 5:30,11/7/2019 18:17,Sutton & Barto's notation $V_{t+n}$ in Chapter 7: $n$-step Bootstrapping,,1,2,,,,CC BY-SA 4.0 +16323,1,,,11/7/2019 2:01,,4,415,"

I want to train an ANN. The problem is that the input features are completely unbounded (There are no boundaries as maximum and minimum for them).

+ +

For example, the following input vectors $(42, 54354354)$ and $(0.4, 47239847329479324732984732947)$ are both valid.

+ +

I know of RNNs that can add up input numbers, which is pretty similar to my case, but the number of digits was limited in all of the implementations.

+ +

Is there a way to implement an ANN that can add up the input numbers of any magnitude?

+",31092,,2444,,11/7/2019 16:11,11/7/2019 16:42,Can neural networks deal with unbounded numbers as inputs?,,1,0,,,,CC BY-SA 4.0 +16325,5,,,11/7/2019 3:11,,0,,"
+

A multi-layer perceptron (MLP) is a class of feed-forward artificial neural network. An MLP consists of at least three layers of nodes. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. MLP utilizes a supervised learning technique called back-propagation for training. Its multiple layers and non-linear activation distinguish MLP from a linear perceptron. It can distinguish data that is not linearly separable.

+
+

Source: Wikipedia - Multilayer perceptron

+",1671,,4709,,8/5/2020 16:02,8/5/2020 16:02,,,,0,,,,CC BY-SA 4.0 +16326,4,,,11/7/2019 3:11,,0,,"For question about Multi Layer Perceptron model/architecture, its training and other related details and parameters associated with the model. ",1671,,1671,,11/7/2019 3:11,11/7/2019 3:11,,,,0,,,,CC BY-SA 4.0 +16327,1,,,11/7/2019 7:24,,2,64,"

I am using a policy gradient algorithm (actor-critic) for wireless networks. The policy gradient-based algorithm helps because it considers continuous action space.

+ +

But how much does a policy gradient-based algorithm contribute to the complexity of the involved neural networks, compared to discrete action space algorithms (like Q-learning)? Moreover, in terms of computation, how do policy gradient algorithms (for continuous action spaces) compare to discrete action space algorithms?

+",31095,,2444,,11/8/2019 14:29,11/8/2019 14:29,What is the complexity of policy gradient algorithms compared to discrete action space algorithms?,,0,2,,,,CC BY-SA 4.0 +16328,1,,,11/7/2019 7:28,,2,74,"

How do you build a language model to predict the contextual similarity between two documents?

+",31094,,2444,,11/7/2019 13:11,11/8/2019 3:18,How do you build a language model to predict the contextual similarity between two documents?,,1,0,,,,CC BY-SA 4.0 +16330,1,16335,,11/7/2019 9:53,,4,204,"

The universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^n$.

+

Michael Nielsen states

+
+

No matter what the function, there is guaranteed to be a neural network so that for every possible input, $x$, the value $f(x)$ (or some close approximation) is output from the network.

+
+

So, for continuous functions, this seems plausible. Interestingly, in the same article, Nielsen mentioned "any function".

+

Later, he writes

+
+

However, even if the function we'd really like to compute is discontinuous, it's often the case that a continuous approximation is good enough.

+
+

The last statement leaves open the question of how good an approximation can be in practice.

+

Let's ignore contradictory input/output training pairs like $f(0)=0$ and $f(0)=1$, which actually don't even represent a function anyway.

+

Furthermore, assume that the training data is generated randomly, which would practically result in a discontinuous function.

+

How does a neural network learn such data? Will a learning algorithm always be able to find a neural network that approximates the function represented by the input-output pairs?

+",10800,,2444,,2/1/2021 21:03,2/1/2021 21:03,Does a neural network exist that can learn every possible training data?,,1,0,,,,CC BY-SA 4.0 +16331,1,,,11/7/2019 11:30,,3,97,"

I am working on building a recommendation engine. I need to build a model that recommends similar items. Currently, I am using the Nearest Neighbor algorithm present in sklearn.neighbors package.

+ +

I am working in the finance domain; similarity can be based on the ""Supplier"", ""Buyer"", ""Industry type"", etc.

+ +

I have attached sample data in the image below

+ +

+ +

Is there any better machine learning algorithms/packages in Python for the same?

+",29717,,29717,,12/4/2019 11:19,12/4/2019 11:19,Which machine learning algorithms can be used to build a recommendation system?,,0,8,,,,CC BY-SA 4.0 +16332,1,,,11/7/2019 12:37,,1,49,"

I would like to code a script that could locate a specific word or number in a financial statement. Financial statements roughly contain the same information, they are however not identical and organized in the same way. My thought is that by using Tensorflow I could train a neural network to locate the specific words or numbers for me. I am thinking that if I label different text and numbers in 1000 financial statements and use them to train the neural network, it will then be able to identify these numbers or words in all financial statements. For example, tell it in all 1000 training statements which number that is the profit of the company. Then when I give it an unseen financial statement, it should be able to identify which number that is the profit.

+ +

Is this doable? I have been working with coding in python for a couple of months and so far I've built some web scrapers and integrated them with twitter, slack and google sheets. However, this would be my first AI related project. I would be very grateful for all your thoughts on this project and if anyone could steer me in the right direction by sharing relevant tutorials.

+ +

Thanks a lot!

+",31101,,2444,,11/7/2019 17:07,11/7/2019 17:07,How could I locate certain words or numbers in a financial statement?,,0,5,,,,CC BY-SA 4.0 +16333,2,,16328,11/7/2019 13:41,,1,,"

You can use an autoencoder for text. For example, you can refer to this example here: https://machinelearningmastery.com/lstm-autoencoders/

+ +

For comparing contextual similarity, you can compare the encoded vectors with a distance measure, for example mean squared error.

+ +
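
A tiny sketch of that comparison step (plain NumPy; the vectors are assumed to be the autoencoder's bottleneck outputs for the two documents):

import numpy as np

def mse_distance(a, b):
    # smaller value = more similar encodings
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

# cosine similarity is another common choice for comparing embedding vectors
def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

+ +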

This works as the autoencoder compresses the input data into a vector of numbers, forcing the encoder and the decoder to learn specific features about text. The features cannot be understood by humans but have meanings.

+ +

Another approach would be to use word or document embeddings like word2vec or GloVe. It may also work well, depending on your dataset size. You need to experiment with different methods to find out which is best.

+ +

A supervised method can also be used if you have labels. For example, you can use an LSTM and train it like a Siamese network using triplet loss. Here is an example: https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1 Implementation: https://github.com/GKarmakar/deep-siamese-text-similarity

+ +

Hope this can help you and have a nice day

+",23713,,23713,,11/8/2019 3:18,11/8/2019 3:18,,,,2,,,,CC BY-SA 4.0 +16335,2,,16330,11/7/2019 14:28,,1,,"

The branch of AI research that answers questions like this is called computational learning theory.

+ +

For the specific question you have asked, the universal approximation theorem does indeed prove that any function can be modeled by a sufficiently wide neural network. The definition of a function includes the requirement that each input be mapped to exactly one output, so contradictory labels in training data are excluded explicitly.

+ +

Here is a rough sketch that can provide an intuition behind why this is true. This is not a proper proof, but it gives you an idea of the power of ""finite number of neurons"" in a hidden layer:

+ +
    +
  1. A single neuron can essentially learn to draw a straight line across the space of input features. It outputs a value arbitrarily close to 1 for things on one side of the line, and arbitrarily close to 0 for things on the other side of the line.
  2. For any given datapoint, it is possible to enclose the hyper-volume containing that datapoint, and no others, by drawing a series of lines in the input space, and defining one side of each line as ""inside"" the hyper-volume, and the other side as ""outside"". In 2-d space, this corresponds to drawing the 4 sides of a square around a point.
  3. An output neuron learns to draw lines across the outputs of the hidden layer neurons. It can therefore decide to output 1 only when, say, 4 other neurons are all simultaneously active.
+ +

It should seem natural then that a sufficiently wide neural network can memorize all of an input pattern. Since memorization is sufficient for ""learning"" to fit a pattern this should give you an intuition for the ability of neural networks to fit things.

+",16909,,16909,,11/7/2019 22:54,11/7/2019 22:54,,,,0,,,,CC BY-SA 4.0 +16336,1,16562,,11/7/2019 14:48,,2,160,"

I was going through the code by Andrej Karpathy on reinforcement learning using a policy gradient. I have some questions from the code.

+ +
    +
  1. Where is the logarithm of the probability being calculated? Nowhere in the code do I see him calculating that.

  2. Please explain to me the use of the dlogps.append(y - aprob) line. I know this is calculating the loss, but how is this helping in a reinforcement learning environment, where we don't have the correct labels?

  3. How is policy_backward() working? How are the weights changing with respect to the loss function mentioned above? More specifically, what's dh here?
+",29843,,2444,,11/8/2019 15:18,11/16/2019 20:36,How is gradient being calculated in Andrej Karpathy's pong code?,,1,0,,,,CC BY-SA 4.0 +16338,2,,16323,11/7/2019 16:42,,3,,"

Depending on the activation functions in the first layer, this is not a real problem, many functions such as $\tanh(x)$ and $\sigma(x)$ (sigmoid) have asymptotic upper and lower bounds, so the huge input values swamp all other inputs in the corresponding neurons, but the output is well-behaved.

+ +

However, a conversion before the input layer may be appropriate for some kinds of data. For example, you might want to take the $\log(x)$ of the input values, so that the network's initial weights work with the relative magnitudes of the values instead of their absolute magnitude. This may be useful for audio data such as speech or music, and for many other measurements of signals that may be attenuated before the measurement.

+ +
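
A tiny illustration of that transformation (my own example values):

import numpy as np

x = np.array([42.0, 54354354.0, 0.4, 4.7e28])
x_log = np.log(x)     # roughly [3.7, 17.8, -0.9, 66.0]: the magnitudes become comparable

+ +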

If your numbers don't actually represent magnitudes of some kind but arbitrary sequences of digits, you should use a network that can deal with sequences, of course.

+",22993,,,,,11/7/2019 16:42,,,,0,,,,CC BY-SA 4.0 +16339,5,,,11/7/2019 16:48,,0,,,2444,,2444,,11/7/2019 16:48,11/7/2019 16:48,,,,0,,,,CC BY-SA 4.0 +16340,4,,,11/7/2019 16:48,,0,,"For questions related to feedforward neural networks (FFNNs), which are also sometimes called multilayer perceptrons, but these two expressions may not always be interchangeable.",2444,,2444,,11/7/2019 16:48,11/7/2019 16:48,,,,0,,,,CC BY-SA 4.0 +16341,2,,16322,11/7/2019 18:17,,2,,"
+

So, what is the purpose of the new index for $V$ in Chapter 7, and why is it more important at this particular chapter?

+
+ +

My guess would be that your intuition is correct, and that it's mostly introduced just to clarify exactly which ""version"" of our value function approximator is going to be used in any particular equation. In previous chapters, which discuss single-step update rules, I guess the authors assumed there was less potential for confusion, and therefore no need to clarify this. Without the clarification, some people might for instance wonder if we should use $V_t$ for our value estimates of an $n$-step return $G_{t+n}$, regardless of how large $n$ is.

+ +
+ +
+

However, I think this introduced a technical error, because suppose we just learned from an 10-step episode ($T=10$), using $n = 3$. Then the latest estimate of $v_\pi$ is $V_{T-1} = V_{10 - 1} = V_{9}$. Then at the next episode, the first time $V_{t + n}$ is used to inform a target update, it will be for $\tau = 0$ (from the pseudo-code), which implies $t - n + 1 = 0$, so $t = n - 1$, that is, $V_{t+n}=V_{n-1+n}=V_{2n-1}=V_5$, which is not the most up-to-date estimate $V_9$ of $v_\pi$.

+
+ +

Once we start considering a situation with more than a single episode, the $V_t$ notation becomes quite confusing. You should read $V_t$ as ""the value function approximator that we have available at time $t$ of the current episode"". So, if we were to use the symbol $V_0$ within the context of a second episode, that would be identical to what was referred to as $V_T$ in the context of the first episode. The $V_t$ notation can be convenient if we're thinking about our equations with our minds in ""math-mode"", but becomes highly confusing once we start thinking about practical implementations involving multiple episodes -- this is probably why they did not include it in the pseudocode.

+ +

If you really wanted to use the subscript-notation in the pseudocode, you'd have to add an extra term in the subscript that adds up all the durations of all previous episodes. If we then try to work out your example situation, we'd run into another problem though... we'd want to use $V_{t+n+T} = V_{2n-1+T} = V_{15}$ at the first iteration where $\tau = 0$ in the second episode. But, across the two episodes, only $13$ steps have passed, so this does not yet exist! You run into the same issue if you try to work out what happened when $\tau = 0$ in the first episode: applying exactly the same reasoning as in your quote, we would have wanted to use $V_5$ after only $3$ time steps passed in the first episode.

+ +

The problem here is that you're trying to use the variable named $t$ in the pseudocode as the subscript for $V$. To get a better idea of what's going on here, let's loop back to the previous page and examine the definition of the $n$-step return:

+ +

$$G_{t:t+n} \doteq R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n V_{t+n-1} (S_{t+n}).$$

+ +

Ok, we've got that. Now, let's take another look at the update rule in which we use this quantity:

+ +

$$V_{t+n}(S_t) \doteq V_{t+n-1}(S_t) + \alpha \left[ G_{t:t+n} - V_{t+n-1} (S_t) \right].$$

+ +

Ok. So $V_{t+n-1}$ appears three times in the update rule. Two times explicitly, estimating the value of $S_t$, and once more ""hidden"" in the definition of $G_{t:t+n}$, where it is used to estimate the value of $S_{t+n}$. Note very carefully what it is that this update rule is doing; it's updating the state estimate of $S_t$. If you now look at the pseudocode again, you'll see a comment on the line where $\tau$ is computed: ($\tau$ is the time whose state's estimate is being updated)!

+ +

What this means, is that in the pseudocode, you should be using $\tau$ as the subscript for $V$! If you do that, it'll at least be correct for the first episode. In the pseudocode, the update rule looks like:

+ +

$$V(S_{\tau}) \gets V(S_{\tau}) + \dots$$

+ +

Plugging in the subscripts from the mathematical definition leads to:

+ +

$$V_{\tau + n}(S_{\tau}) \gets V_{\tau + n - 1}(S_{\tau}) + \dots$$

+ +

Since the pseudocode defines $\tau = t - n + 1$, we can substitute above:

+ +

$$ +\begin{aligned} +V_{t - n + 1 + n}(S_{t - n + 1}) &\gets V_{t - n + 1 + n - 1}(S_{t - n + 1}) + \dots\\ +V_{t + 1}(S_{t - n + 1}) &\gets V_{t}(S_{t - n + 1}) + \dots +\end{aligned}$$

+ +

and now it should make sense again from a practical point of view. At every time step $t$, where $t$ measures the number of steps of experience that we have simulated, we simply use the latest value function $V_t$ we have available at that time for bootstrapping in the update rule. When $t + 1 < n$, $S_{t - n + 1}$ is undefined. In these cases, the above update rule doesn't work, which makes sense intuitively because we have not yet progressed far enough into the episode to be able to compute $n$-step returns.
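To make the practical point concrete, here is a minimal tabular sketch (not the book's pseudocode; the episode, $n$, $\gamma$ and $\alpha$ values below are placeholders) in which a single table V plays the role of every $V_t$, simply because it always holds the latest estimates:

gamma, alpha, n = 0.9, 0.1, 3
states = [0, 1, 2, 3, 4, 5]      # S_0 .. S_5 (the episode terminates after the last reward)
rewards = [0, 0, 1, 0, 0, 1]     # R_1 .. R_6
T = len(rewards)
V = {s: 0.0 for s in range(10)}  # one table, updated in place: always the latest estimate

for t in range(T + n - 1):
    tau = t - n + 1              # time whose state's estimate is being updated
    if tau >= 0:
        # n-step return G_{tau:tau+n}, truncated at the end of the episode
        G = sum(gamma ** (k - tau - 1) * rewards[k - 1]
                for k in range(tau + 1, min(tau + n, T) + 1))
        if tau + n < T:
            G += gamma ** n * V[states[tau + n]]  # bootstrap with the latest available estimate
        V[states[tau]] += alpha * (G - V[states[tau]])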

+",1641,,,,,11/7/2019 18:17,,,,1,,,,CC BY-SA 4.0 +16342,2,,16316,11/7/2019 18:24,,4,,"

The great acting teacher Stella Adler wrote about mannerisms being a powerful tool for actors. Method acting in general focuses on natural performances based roughly on understanding the mindset of the character portrayed.

+ +

It's possible actors who have portrayed androids have observed industrial robots to inform their physicality, and many performances convey the idea, via movement, of a mechanical inner structure. (It is often said that an ""actor's body is their instrument"".)

+ +

What is more interesting is actors trying to convey the cognitive structure of the androids.

+ +

With Arnold, and Terminator robots in general, the baseline performance is decidedly robotic, to convey their inhumanity. But the more advanced Terminators are able to mimic naturalistic human mannerisms, and even established human characters, to trick humans.

+ +

Lieutenant Data often used head motions, such as cocking his head slightly, to convey computation. Here the character arc involved working to become more human, as this character draws heavily on Pinocchio, the wooden puppet that became a boy. Overall Data's performance conveyed a lack of emotion, a definite reference to the logic-oriented Mr. Spock, although I recall episodes where Data experimented with ""emotional circuits"" and ""humor circuits"", where the output was intentionally inconsistent with natural human behavior.

+ +

Blade Runner, where the Tyrell Corporation's motto was ""More Human than Human"", presented the cutting edge Nexus-6 androids as having emotions, but, due to their artificially short life-spans, were portrayed as childlike in trying to reconcile extremely powerful feelings. The Voight-Kampff Test, a form of Turing Test, used in the film to identify androids, relied on the emotional response to questions.

+ +

The key plot point of Do Androids Dream of Electric Sheep, the novel the film was based on, utilized what would be formalized as evolutionary game theory to hypothesize that empathy is a natural function of intelligence sufficiently advanced. Deckard, who may or may not have been an android, and Rachel, who definitely was, are both capable of love. This capacity informed their performances, to the extent that the androids came off as more human than the actual humans, due to the depth of their emotion. This is also reflected in Blade Runner 2049 via the girlfriend-bot Joi, who is the most limited android, but the most human character in the film per her capacity to love (or at least simulate it.)

+ +

In the recent HBO Westworld reboot, the Androids replicate natural human mannerisms when playing their designated roles, but reset to more mechanical mannerisms when acting under their own agency. This is reflected in Ex Machina, where the android mimics human emotions to pass a Turing Test and trick the human subject, only to revert to purely alien mannerisms after the android is free. (""Alien"" here used in the sense of non-human--it's possible the android is sentient as it seems to convey some degree of emotion in regarding the simulated human skin it will wear.)

+ +

The most interesting recent android performance may come from the recent Alien: Covenant where Michael Fassbender plays two identical androids, David and Walter, which have two distinct neural structures. (David has the capacity to be creative, where Walter cannot. In the film it is mentioned that David made people uncomfortable, so the creative functions were removed from subsequent models.) The key difference in the performance seems to be that David demonstrates passion, and even emotions, where Walter is more clearly ""robotic"".

+ +
    +
  • In general, the underlying approach of actors seems to have been to show the androids being distinct from humans, drawing a clear, though sometimes subtle, contrast.

  • +
  • Actors portraying androids have typically utilized robotic mannerisms to convey an artificial entity.

  • +
+",1671,,2444,,11/7/2019 20:00,11/7/2019 20:00,,,,4,,,,CC BY-SA 4.0 +16343,1,,,11/7/2019 20:49,,1,394,"

I'm trying to train a neural network with my own dataset. The neural network can accept the cityscape format.

+ +

Is there any application that can give mask/segmented image, instance image, label IDs images and JSON file, similar to cityscape dataset format?

+ +

Basically, I want to create my own dataset similar to the cityscape dataset format.

+",27221,,2444,,11/8/2019 18:46,11/8/2019 18:46,creating your own dataset similar to cityscapes format,,0,1,,,,CC BY-SA 4.0 +16346,1,16359,,11/8/2019 2:22,,2,203,"

The brevity penalty is defined as

+ +

$$bp = e^{(1- r/c)},$$

+ +

where $r$ is the reference length and $c$ is the output length.

+ +

But what happens if the output length gets zero? Is there any standard way of coping with that issue?

+",21685,,2444,,11/8/2019 14:10,11/8/2019 20:00,What happens when the output length in the brevity penalty is zero?,,1,0,0,,,CC BY-SA 4.0 +16347,1,16352,,11/8/2019 2:31,,1,451,"

I am trying to understand the policy gradient method using a PyTorch implementation and this tutorial.

+ +

My first question is about the end result of this gradient derivation,

+ +

\begin{aligned} +\nabla \mathbb{E}_{\pi}[r(\tau)] &=\nabla \int \pi(\tau) r(\tau) d \tau \\ +&=\int \nabla \pi(\tau) r(\tau) d \tau \\ +&=\int \pi(\tau) \nabla \log \pi(\tau) r(\tau) d \tau \\ +\nabla \mathbb{E}_{\pi}[r(\tau)] &=\mathbb{E}_{\pi}[r(\tau) \nabla \log \pi(\tau)] +\end{aligned}

+ +

Mainly in this equation

+ +

$$\nabla \mathop{\mathbb{E}_\pi }[r(\tau )] = \mathop{\mathbb{E}_\pi }[r(\tau )\nabla log \pi (\tau )]$$

+ +

Does expectation follow a distributive or associative property?

+ +

I know that expectations of a function can be written as below

+ +

$$\mathop{\mathbb{E}}[f(x)] =\sum p(x)f(x)$$

+ +

Then can we rewrite the first equations as

+ +

$$\mathop{\mathbb{E}_\pi }[r(\tau )\nabla log \pi (\tau )] \\= \mathop{\mathbb{E}_\pi }[r(\tau )] \,\, \mathop{\mathbb{E}_\pi }[\nabla log \pi (\tau )] \\= \sum p(\tau)r(\tau ) \,\, \sum p(\tau)\nabla log \pi (\tau ) \\ += p(\tau) \sum r(\tau ) \nabla log \pi (\tau )$$

+ +

The problem is when I compare this to PyTorch implementation (line 71-74)

+ +
for log_prob, R in zip(policy.saved_log_probs, returns):
+    policy_loss.append(-log_prob * R)
+optimizer.zero_grad()
+policy_loss = torch.cat(policy_loss).sum()
+
+ +

The PyTorch implementation simply multiplies the log probability and the reward, -log_prob * R, and then sums the vector with torch.cat(policy_loss).sum(); there is no $p(\tau)$. What is really happening here?

+ +

The second question is the multiplication of log probability and reward in PyTorch implementation -log_prob * R, PyTorch implementation has a negative log probability and derived equation has a positive one $\mathop{\mathbb{E}_\pi }[r(\tau )\nabla log \pi (\tau )]$. What is the need for multiplying log probability with a negative value in PyTorch implementation?

+ +

I have only a basic understanding of maths and that's why I am asking this question here.

+ +
+ +

Edit: found a better derivation of above equation https://youtu.be/Ys3YY7sSmIA?t=3622

+",39,,2444,,5/17/2020 22:11,5/17/2020 22:11,How does the policy gradient's derivative work?,,1,0,0,,,CC BY-SA 4.0 +16348,1,,,11/8/2019 3:23,,8,16741,"

I came across this answer on Quora, but it was pretty sparse. I'm looking for specific meanings in the context of machine learning, but also mathematical and economic notions of the term in general.

+",1671,,2444,,12/12/2021 12:46,12/12/2021 12:46,What is convergence in machine learning?,,2,1,,,,CC BY-SA 4.0 +16349,2,,16316,11/8/2019 3:29,,1,,"

Disclaimer: The intent of this answer is to suggest a a parallel between methods of acting and machine learning, both in intent and application, and theory. A large number of links are included for the convenience of readers new to the field, and there is not an exact correspondence of AI concepts to acting preparation techniques.

+ +

In my prior answer, I mentioned the method acting technique, and Stella Adler's interpretation of Stanislavski's method. Bear in mind that the method is a post-empiricism approach, an attempt, in some sense, to create a science of acting in the sense of analysis, and an approach that is fundamentally algorithmic in the sense of process. (The original manual is titled An Actor Prepares.) Note that areas covered include action, imagination (creativity), units and objectives, emotional memory (accessing memory), and adaptation. See Also: Classical Acting.)

+ +

Note also that plays are aptly named. Drama and comedy arise out of interplay of individuals, and the process of refining performance is the process of play—searching within a rule-space for the most optimal outcome.

+ +
    +
  • Strong actors will rigorously research the character to create a mental model of the character's experience of the world, similar to a model-based agent.

  • +
  • Modern actors seek objectives, sometimes referred to as motivations, similar to goal-based agents.

  • +
+ +

The model has many dimensions, and there may be multiple layers of objectives in the sense of the subconscious. (What does the character want? What does it really want? What does it really really want?) This also applies to the contexts for any choice, which are multiple (personal, societal, economic, etc.)

+ +
    +
  • Actors observe human behavior for the purpose of imitating it, commonly referred to as ""people watching"".
  • +
+ +

As you note, actors preparing for the role of a robot may observe machinery, with the purpose of indicating that quality for an audience. Actors may also observe other actors, although novelty in performance is typically understood to be optimal.

+ +
    +
  • Actors will access emotional memory, alternately referred to as ""sense memory"" & ""emotional recall"" (affective memory), either to produce a physical effect or for analysis. The outputs are signifiers. They create a state space which they can return to and access on command.
  • +
+ +

Essentially it's a form of ""memory palace"" (method of loci) where events take the place of locations.

+ +
    +
  • Actors will improvise in preparation, to identify and test choices (actions and mannerisms), which involves decision theory. The choices of the other actors (rational agents) are a factor, and influence each other.

  • +
  • Choices are selected in a genetic process, for fitness in environment, here defined as audience response. The improvisation that leads to the performance is evolutionary, in that it optimizes via the rehearsal process, with director as audience, and later, in the case of live theater, in response to live audiences. (See also the Actor-critic model.)

  • +
+ +

It's not quite a Monte Carlo method, being more of an informed search, but it does not exclude randomness.

+ +

Essentially, it's a process of analysis, trial-and-error, more analysis, repeat, similar to machine learning with heuristics. It wouldn't be far off to say that there is a convergence, leading to what is perceived to be the optimal set of choices, (although it is more typical to say a performance ""gels"" or ""comes together"".)

+ +
    +
  • It can be said that modern acting methods are themselves algorithmic processes, where the intent is maximizing utility, here audience response, which can carry significant economic consequences.
  • +
+ +

Modern actors are using methods similar to modern AI methods to imitate intelligent androids!

+ +

In the sense of Adler specifically, the technique involves simulating natural emotions to ""trick"" the observer, a form of Affective computing. In other words, via training, the actor is doing what AI's are being trained to do in the context of interacting with humans.

+ +

The underlying method can be understood as a form of applied psychology and neuroscience, where the actor is accessing emotion for the purpose of analysis, and accessing specific parts of the brain on command to create observable signs.

+",1671,,1671,,11/9/2019 0:28,11/9/2019 0:28,,,,4,,,,CC BY-SA 4.0 +16350,2,,16348,11/8/2019 4:50,,6,,"

When formulating a problem in deep learning, we need to come up with a loss function, which uses model weights as parameters. Back-propagation starts at an arbitrary point on the error manifold defined by the loss function and with every iteration intends to move closer to a point that minimises error value by updating the weights. Essentially for every possible set of weights the model can have, there is an associated loss for a given loss function, with our goal being to find the minimum point on this manifold.

+ +

Convergence is a term mathematically most common in the study of series and sequences. A model is said to converge when the sequence $s(n) = loss_{w_n}(\hat y, y)$ (where $w_n$ is the set of weights after the $n$'th iteration of back-propagation and $s(n)$ is the $n$'th term of the sequence) is a converging sequence. The sequence is of course infinite only if you assume that loss = 0 is never actually achieved, and that the learning rate keeps getting smaller.
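For instance, a crude way to check this numerically is to watch successive terms of that sequence (a toy sketch with made-up loss values, not tied to any particular model):

# Toy loss sequence s(1), s(2), ...; in practice these would come from training iterations.
losses = [2.3, 1.1, 0.7, 0.52, 0.515, 0.512, 0.5115]

def looks_converged(losses, tol=1e-2, window=3):
    # Crude criterion: the last few successive differences are all below tol.
    diffs = [abs(a - b) for a, b in zip(losses[1:], losses[:-1])]
    return len(diffs) >= window and all(d < tol for d in diffs[-window:])

print(looks_converged(losses))  # True for this toy sequence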

+ +

Essentially, a model converges when its loss actually moves towards a minimum (local or global) with a decreasing trend. It's quite rare to actually come across a strictly converging model, but convergence is commonly used in a similar manner as convexity is: strictly speaking it rarely exists in practice, but the term tells us how close the model is to the ideal scenario, whether that is convexity or, in this case, convergence.

+",25658,,,,,11/8/2019 4:50,,,,0,,,,CC BY-SA 4.0 +16351,1,,,11/8/2019 5:17,,1,21,"

Let's say I have a dataset, each item/row of which has $\mathit{X + 1}$ characteristics where the last characteristic (i.e., the $\mathit{1}$) represents some value I want to predict, $\mathit{Y}$, based on a SOM trained on the $\mathit{X}$ characteristics. I want to organize the dataset into groups such that each group has a small variance among the respective $\mathit{Y}$ values. I believe I could do this by using a non-Euclidean distance to find the Best Matching Unit (BMU) based on applying weights to each dimension.

+ +

For example, given a node at (0,0) and weights for dimension $\mathit{x}$ of 1 and dimension $\mathit{y}$ of 2, a data point at (3,2) would have a weighted distance of 5 from the node, calculated as follows:

+ +

$\sqrt{\mathit{(1 * (3 - 0)) ^ 2 + (2 * (2 - 0)) ^ 2}}$
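For concreteness, that weighted distance could be computed along these lines (just a sketch; the node position, point and weights are the ones from the example above):

import math

def weighted_distance(point, node, weights):
    # Euclidean distance with a per-dimension weight applied to each difference.
    return math.sqrt(sum((w * (p - q)) ** 2 for p, q, w in zip(point, node, weights)))

print(weighted_distance((3, 2), (0, 0), (1, 2)))  # 5.0, matching the example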

+ +

I don't think a simple linear regression would work to determine the weights because it would not take advantage of clustering.

+ +

The goal would be, for a new data point, to approximate a probability distribution of outcomes based on similarly-profiled data points in the training set (i.e., retrieve all of the training results with the same BMU and analyze the results). I think this might essentially just be replicating a deep feedforward network, but I'd like to try it.

+ +

Is there a way I could achieve this by modifying a SOM model or using a similar technique?

+",30154,,30154,,11/8/2019 6:17,11/8/2019 6:17,Self-organizing map using weighted non-euclidean distance to minimize variance of predictions,,0,0,,,,CC BY-SA 4.0 +16352,2,,16347,11/8/2019 8:24,,2,,"

You cannot do this:

+ +
+

$\mathop{\mathbb{E}_\pi }[r(\tau )\bigtriangledown log \pi (\tau )] \\= \mathop{\mathbb{E}_\pi }[r(\tau )] \,\, \mathop{\mathbb{E}_\pi }[\bigtriangledown log \pi (\tau )]$

+
+ +

That is because $r(\tau )$ and $\bigtriangledown log \pi (\tau )$ are correlated by their dependence on $\tau$. In a simpler concrete example, if your expectation was over a simple equiprobable discrete distribution where $\tau$ could be any integer in the range $[1,10]$, then $\mathop{\mathbb{E} }[\tau^2] = 38.5$ whilst $\mathop{\mathbb{E} }[\tau]\mathop{\mathbb{E} }[\tau] = 30.25$
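That example is easy to verify numerically (a quick sanity check in plain Python, nothing from the tutorial):

# E[tau^2] vs E[tau]*E[tau] for tau uniform over the integers 1..10
taus = range(1, 11)
e_tau = sum(taus) / 10                    # 5.5
e_tau_sq = sum(t * t for t in taus) / 10  # 38.5
print(e_tau_sq, e_tau * e_tau)            # 38.5 vs 30.25, so the expectation does not factor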

+ +
+

The pytorch implementation simply multiplied log probability and reward -log_prob * R and then summed the vector torch.cat(policy_loss).sum() there is no $p(\tau)$. What is really happening here?

+
+ +

The purpose of transforming the gradient into an expectation $\mathbb{E}$ for the policy gradient theorem, is so that you can estimate it using samples taken from the distribution. Typically, you don't know $p(\tau)$, but you do know that if you follow the same process where $p(\tau)$ applies (i.e. measure the return from the environment whilst following the policy represented by the policy function) that you will get an unbiased sample from that distribution.

+ +

So what is going on here is that you throw away the outer expectation $\mathbb{E}_{\pi}[]$ and replace it with a stochastic estimate for the same value based on taking samples. The samples are naturally obtained with distribution described by $p(\tau)$, if you follow the policy function when making action choices.

+ +
+

The second question is the multiplication of log probability and reward in the pytorch implementation, -log_prob * R: the pytorch implementation has a negative log probability and the derived equation has a positive one, $\mathop{\mathbb{E}_\pi }[r(\tau )\bigtriangledown log \pi (\tau )]$. What is the need for multiplying the log probability with a negative value in the pytorch implementation?

+
+ +

I don't know the code, but this is very likely because of a sign change brought on by considering how to respond to the gradient estimate.

+ +

There is a clue in the use of the name ""loss"". To maximise return in policy gradient methods, you can perform gradient ascent based on the estimated gradient as the goal is to find higher values. However, it is more usual in NN libraries to perform gradient descent in order to minimise a loss function. That is a likely cause of the sign reversal here.
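To make that sign flip concrete, here is a minimal sketch (the numbers are placeholders, not the actual example code) of turning the quantity we want to maximise into a loss to minimise:

import torch

log_probs = torch.tensor([-0.5, -1.2, -0.8], requires_grad=True)  # stand-ins for log pi(a|s)
returns = torch.tensor([1.0, 0.5, 2.0])                           # stand-ins for sampled returns

objective = (log_probs * returns).sum()  # we want to *maximise* this (gradient ascent)
loss = -objective                        # negate it so a standard optimizer can *minimise* it
loss.backward()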

+",1847,,1847,,11/8/2019 13:53,11/8/2019 13:53,,,,3,,,,CC BY-SA 4.0 +16353,1,,,11/8/2019 9:39,,4,205,"

Facebook has just pushed out a bigger version of their multi-lingual language model XLM, called XLM-R. My question is: do these kind of multi-lingual models imply, or even ensure, that their embeddings are comparable between languages? That is, are semantically related words close together in the vector space across languages?

+ +

Perhaps the most interesting citation from the paper that is relevant to my question (p. 3):

+ +
+

Unlike Lample and Conneau (2019), we do not use language embeddings, + which allows our model to better deal with code-switching.

+
+ +

Because they do not seem to make a distinction between languages, and there's just one vocabulary for all trained data, I fail to see how this can be truly representative of semantics anymore. The move away from semantics is increased further by the use of BPE, since morphological features (or just plain, statistical word chunks) of one language might often not be semantically related to the same chunk in another language - this can be true for tokens themselves, but especially so for subword information.

+ +

So, in short: how well can the embeddings in multi-lingual language models be used for semantically comparing input (e.g. a word or sentence) of two different languages?

+",29995,,29995,,11/8/2019 10:56,3/10/2020 8:55,Are embeddings in multi-lingual language models comparable across languages?,,2,0,,,,CC BY-SA 4.0 +16355,5,,,11/8/2019 14:10,,0,,,2444,,2444,,11/8/2019 14:10,11/8/2019 14:10,,,,0,,,,CC BY-SA 4.0 +16356,4,,,11/8/2019 14:10,,0,,"For questions related to BLEU (BiLingual Evaluation Understudy), which is a metric for evaluating the quality of text which has been machine-translated from one natural language to another. The metric was proposed in the paper ""BLEU: a Method for Automatic Evaluation of Machine Translation"" (2002) by Kishore Papineni et al. ",2444,,2444,,11/8/2019 14:10,11/8/2019 14:10,,,,0,,,,CC BY-SA 4.0 +16357,1,16373,,11/8/2019 14:48,,5,692,"

In my opinion, deep learning algorithms and models (that is, multi-layer neural networks) are more sensitive to overfitting than machine learning algorithms and models (such as the SVM, random forest, perceptron, Markov models, etc.). They are capable of learning more complex patterns. At least that's how I look at it. Still, I and my colleagues disagree about this and I cannot really find any information about this. My colleagues say that deep learning algorithms are hardly vulnerable to overfitting.

+ +

Are there statements (or opinions) about this aspect?

+",30599,,2444,,11/11/2019 1:10,11/11/2019 1:10,Are deep learning models more prone to overfitting than machine learning ones?,,1,2,0,,,CC BY-SA 4.0 +16359,2,,16346,11/8/2019 16:40,,1,,"

Division by zero is not mathematically defined.

+ +

A usual or standard way of dealing with this issue is to raise an exception. For example, in Python, the exception ZeroDivisionError is raised at runtime if you happen to divide by zero.

+ +

If you execute the following program

+ +
zero = 0
+numerator = 10
+numerator / zero
+
+ +

You will get

+ +
Traceback (most recent call last):
+  File ""main.py"", line 3, in <module>
+    numerator / zero
+ZeroDivisionError: division by zero
+
+ +

However, if you want to avoid this runtime exception, you can check for division by zero and deal with this issue in a way that is appropriate for your program (without needing to terminate it).

+ +

In the paper BLEU: a Method for Automatic Evaluation of Machine Translation that introduced the BLEU (and brevity penalty) metric, the authors defined the brevity penalty as

+ +

\begin{align} +BP = +\begin{cases} +1, & \text{if } c > r\\ +e^{(1- r/c)} & \text{if } c \leq r\\ +\end{cases} \label{1} \tag{1} +\end{align}

+ +

This definition does not explicitly take into account the division by zero.
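A simple way to make that piecewise definition explicit about the $c = 0$ case is to add a guard (a sketch; returning $0$ here is a convention rather than something the paper prescribes):

import math

def brevity_penalty_guarded(r, c):
    if c == 0:
        return 0.0  # convention: an empty candidate gives BP = 0 (and hence BLEU = 0)
    return 1.0 if c > r else math.exp(1 - r / c)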

+ +

The Python package nltk does not raise an exception, but it (apparently, arbitrarily) returns zero when c == 0. Note that the BLEU metric ranges from 0 to 1. For example, if you execute the following program

+ +
from nltk.translate.bleu_score import brevity_penalty, closest_ref_length
+
+reference1 = list(""hello"") # A human reference translation. 
+references = [reference1] # You could have more than one human reference translation.
+
+# references = [] Without a reference, you will get a ValueError.
+
+candidate = list() # The machine translation.
+c = len(candidate)
+
+r =  closest_ref_length(references, c)
+print(""brevity_penalty ="", brevity_penalty(r, c))
+
+ +

You will get

+ +
brevity_penalty = 0
+
+ +

In the example above, the only human reference (translation) is reference1 = list(""hello"") and the only candidate (the machine translation) is an empty list. However, if references = [] (you have no references), then you will get the error ValueError: min() arg is an empty sequence, where references are used to look for the closest reference (the closest human translation) to the candidate (the machine translation), given that there could be more than one human reference translation, and one needs to be chosen to compute the brevity penalty, with respect to your given candidate.

+ +

In fact, in the documentation of the brevity_penalty function, the following comment is written

+ +
# If hypothesis is empty, brevity penalty = 0 should result in BLEU = 0.0.
+
+ +

where hypothesis is a synonym for candidate (the machine translation) and the length of the candidate is $c$ in the formula \ref{1} (and c in the example above).

+ +

To answer your second question more directly, I don't think there's a standard way of dealing with the issue, but I've not fully read the BLEU paper yet.

+",2444,,2444,,11/8/2019 20:00,11/8/2019 20:00,,,,0,,,,CC BY-SA 4.0 +16360,1,,,11/8/2019 17:06,,2,120,"

There is a recent trend in people using LSTMs to write novels. I haven’t attempted this myself. From what I’m hearing, they can tell a story, but it seems they lose the context of the story rather quickly. After which they begin constructing new, but not necessarily related constructs.

+ +

Can they construct a plot in the long term?

+",20271,,2444,,11/8/2019 17:15,12/8/2019 18:01,Why can't LSTMs tell a long story?,,1,1,,,,CC BY-SA 4.0 +16361,1,16376,,11/8/2019 17:13,,2,1153,"

Can someone explain, in detail and in simple words, how the FaceNet model works?

+",30306,,30306,,11/9/2019 8:25,11/9/2019 11:49,Detailed explaination of Facenet Model for face recogniton?,,1,0,,,,CC BY-SA 4.0 +16362,2,,16360,11/8/2019 17:31,,3,,"

The long-short term memory (LSTM) is a type of recurrent neural network, which is only suited for sequence modelling, that is, to keep track of statistical dependencies between elements of a sequence.

+ +

The LSTM prediction capabilities are limited to the training data that is used to train it, the inductive bias (in the case of LSTMs, the inductive bias particularly refers to the fact that elements of a sequence are dependent on each other) and the available computation resources. However, storytelling often assumes the existence of a common-sense knowledge between the storyteller and the listener (but LSTM completely ignores this) and requires the true understanding of language, which is believed to be an AI-complete problem (in simple words, it is a very complex task, which probably cannot be exactly solved with statistical models).

+ +

Furthermore, even though LSTM partially addresses the vanishing gradient problem (and they were specifically created to partially solve this issue), they can still suffer from the exploding gradient problem. Moreover, although LSTMs and, in general, neural networks are universal function approximators, in practice, functions may not be continuous, which is an assumption made in the universal approximation theorems.

+",2444,,2444,,11/8/2019 17:45,11/8/2019 17:45,,,,4,,,,CC BY-SA 4.0 +16364,1,,,11/8/2019 17:50,,1,85,"

Simply speaking, I'm trying to somehow search an audio clip for a list of words, and if found, I mark the time stamps. My use-case is profanity check with a list of pre-defined profane words.

+ +

Are there any successful approaches, samples, tools or APIs, possibly based on deep learning, to perform this? I'm new to audio processing.

+",9053,,,,,2/2/2022 21:01,Deep audio fingerprinting for word search,,1,0,,,,CC BY-SA 4.0 +16365,1,,,11/8/2019 18:53,,4,169,"

I am looking for a book or paper which clearly explains the relationship between Ising models and deep neural networks.

+ +

Can anyone provide any references?

+",31126,,2444,,1/17/2021 19:34,1/17/2021 19:34,Which books or papers clearly explain the relation between Ising models and deep neural networks?,,1,0,,,,CC BY-SA 4.0 +16367,1,,,11/9/2019 0:42,,2,151,"

A recent question on AI and acting recalled me to the idea that in drama, there are not only conflicting motives between agents (characters), but a character may themselves have objectives that are in conflict.

+ +

The result of this in performance is typically nuance, but also carries the benefit of combinatorial expansion, which supports greater novelty, and it occurs to me that this would be a factor in affective computing.

+ +

(The actress Eva Green is a good example, where her performances typically involve indicating two or more conflicting emotions at once.)

+ +

It occurs to me that this can even arise in the context of a formal game where achieving the most optimal outcome requires managing competing concerns.

+ +
    +
  • Is there literature or examples of AI with internal conflicting objectives?
  • +
+",1671,,,,,11/10/2019 1:45,AI with conflicting objectives?,,2,0,,,,CC BY-SA 4.0 +16368,2,,16367,11/9/2019 2:02,,2,,"

There are multi-objective optimization problems, where the objective functions may be in conflict with each other, which can potentially have multiple Pareto-optimal solutions. The paper Multi-objective optimization using genetic algorithms: A tutorial (2006) gives a good overview of the multi-objective optimization problem with genetic algorithms, which can be called evolutionary multi-objective optimization (EMO) or multi-objective optimization evolutionary algorithms (MOEAs).

+ +

A common multi-objective genetic algorithm is NSGA (or NSGA-2 and NSGA-3), which stands for Non-dominated Sorting Genetic Algorithm, which is based on the concepts of non-dominated sorting, Pareto front and optimality, niches (sub-populations), and elitism (the best individuals of the current population are carried over to the next generation).

+ +

If you want to play with MOEAs, you may wanna try the Python deap package, which supports, for example, the NSGA-2 algorithm.

+",2444,,2444,,11/9/2019 2:13,11/9/2019 2:13,,,,1,,,,CC BY-SA 4.0 +16369,1,,,11/9/2019 5:11,,2,95,"

I would like to solve the Sokoban puzzle, which consists in moving a character in a 2D map to push boulders into target cells. Each turn, the player can move to an adjacent cell (no diagonals) if it is empty, or push a boulder one step further. To push a boulder, the player needs to stand next to it, and the space behind the boulder needs to be empty (no other boulders and no walls).

+

I'm using the STRIPS planner, and I am having a hard time defining the fixed and dynamic relations and also the preconditions and effects of each operator for this puzzle.

+",31131,,2444,,2/7/2021 17:39,2/7/2021 17:39,"How can I define the relations, preconditions and effects of each operator for the Sokoban puzzle?",,0,0,,,,CC BY-SA 4.0 +16370,2,,16365,11/9/2019 5:15,,4,,"

The following articles

+ + + +

may help you understand the ""relationship"" between Ising models and DNN, assuming you know what the Ising model is and what a DNN is, the similarity should be fairly intuitive to you.

+ +

The Ising model is a sort of floating soup of ferromagnetic particles, each generating its own small magnetic field either working against or with its neighbor. When many of the particles align, they create an aligned field we refer to as a dipole moment in magnetism, while in a DNN we refer to the joined effort of a few entities working to cause a larger effect in another entity as an 'activation function'. In a fully connected DNN, where the Euclidean distance weights the connections and the nodes are initialized with a certain magnetic polarity in relation to the axis of the magnetic field each generates, the network would be an almost exact representation of the reality that the Ising model seeks to simplify.

+",30365,,30365,,11/13/2019 17:52,11/13/2019 17:52,,,,4,,,,CC BY-SA 4.0 +16371,2,,16367,11/9/2019 5:31,,2,,"

MOEAs sound very cool, but I feel that you can't really talk about conflict in AI without discussing generative adversarial networks (GANs). They have been shown to perform amazingly by, say, training a model to distinguish between pictures of cats and dogs while an adversarial network is trained to create pictures that attempt to trick the training network as much as possible. The completely conflicting objectives of the networks enable both to be trained very well, so the models, in the end, are much more robust and able to handle sometimes bizarrely generated edge cases.

+ +

I also found this paper Evolutionary Multi-Objective Optimization Driven by Generative Adversarial Networks (GANs) (2019), which combines MOEAs and GANs, but there are potentially more related papers.

+",30365,,2444,,11/10/2019 1:45,11/10/2019 1:45,,,,0,,,,CC BY-SA 4.0 +16373,2,,16357,11/9/2019 6:26,,5,,"

Your reasoning isn't wrong. Deep Neural Networks (DNNs) have a much larger capacity than simpler ML algorithms (excluding NNs) and can easily memorize even a very complex dataset and overfit.

+ +

DNNs, however, are so effective because they usually are applied on tasks that are harder, so it's not as easy to overfit. For example an image classifier might be trained on a dataset with millions of images; a task much harder to overfit on.

+ +

In cases where this isn't possible (e.g. an image classification task with a couple thousand images), transfer learning is used. You can initialize your weights from a model pre-trained on a large dataset, use its already-trained feature extraction layers and simply fine tune the last layer.

+ +

Data augmentation also helps a lot here, which effectively increases size of the training set and discourages the DNN from memorizing the samples. It is so effective that it is used even in large datasets, where it is harder to overfit.

+ +

Additionally, DNNs employ several methods to prevent them from overfitting. The most prominent of these is dropout, which is a very effective regularizer. Batch Normalization has also proven to be an effective regularizer. SGD allows you to explore more parameters than GD, which is also effective against overfitting. Finally, early stopping and parameter norm penalties aren't uncommon in DNNs.
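As a rough illustration of a few of these techniques used together, here is a sketch in Keras (the layer sizes, dropout rate and patience are arbitrary placeholders, not recommendations):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.BatchNormalization(),  # also acts as a regularizer
    tf.keras.layers.Dropout(0.5),          # dropout regularization
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
# model.fit(x_train, y_train, validation_split=0.1, epochs=100, callbacks=[early_stop])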

+",26652,,,,,11/9/2019 6:26,,,,0,,,,CC BY-SA 4.0 +16374,1,,,11/9/2019 9:14,,6,634,"

In perfect information games, the agent can see all the moves performed in the past. Besides, it can observe the next action that will be put into practice by the opponent.

+ +

In this case, can we say that perfect information games are actually a fully observable environment? If we reach this conclusion, I guess that imperfect information becomes a partially observable environment?

+",31133,,2444,,1/4/2022 19:29,1/4/2022 19:29,"Are perfect and imperfect information games modelled as fully and partially observable environments, respectively?",,2,0,,,,CC BY-SA 4.0 +16375,1,,,11/9/2019 11:18,,4,7563,"

I am solving a problem in which, according to the given values, the heuristic is not admissible. According to my calculation from other similar problems, it should be consistent, as well as keeping in mind the values, but the solution says it's not consistent either. Can someone tell why?

+",31139,,2444,,5/7/2021 12:05,5/7/2021 12:05,"If an heuristic is not admissible, can it be consistent?",,2,1,,,,CC BY-SA 4.0 +16376,2,,16361,11/9/2019 11:49,,2,,"

FaceNet is a Siamese network. Its basic architecture is this: the input (a face) is fed through a deep convolutional neural network, with a fully connected layer at the end. The fully connected layer outputs an embedding of the input image, which has a predefined size. The embedding may contain features that humans understand, or maybe not. The embedding represents the input image, just in a ""compressed"" form.

+ +

To further explain that, let me give an example. Let's say that you have to describe a face. What will you say? You will probably say something like: the face is round, the eyes are blue, it is a female face, and more. The neural network is doing what you are doing, describing the face, but using numbers instead of words.

+ +

To do a face recognition task, the system takes a previously stored image of a person from the list of people to recognize and the unknown new image of the person to be recognised. It then feeds both images into the network and gets the embeddings. The network then calculates the distance between the two embeddings, using some metric such as the squared error or the absolute error (in the figure, the squared error is used). If the error is below a certain threshold, the face is recognised. If not, it then loops through the other stored images in the set of faces of the system and does the task again. The system stores the embeddings of the stored images beforehand.
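In code, that recognition loop roughly amounts to the following (a sketch with made-up embeddings and threshold; a real system would get the embeddings from the trained network):

import numpy as np

def recognise(query_embedding, gallery, threshold=0.6):
    # Return the name of the closest stored embedding if it is within the threshold.
    best_name, best_dist = None, float('inf')
    for name, emb in gallery.items():
        dist = np.sum((query_embedding - emb) ** 2)  # squared-error distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else 'unknown'

gallery = {'alice': np.array([0.1, 0.9]), 'bob': np.array([0.8, 0.2])}  # stored beforehand
print(recognise(np.array([0.15, 0.85]), gallery))  # 'alice'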

+ +

For training FaceNet, the triplet loss is used. The triplet loss has been explained by me in another of your posts: What is the formula used to calculate the accuracy in the FaceNet model?

+ +

Basically, the model is trained using the triplet loss, as it can train the network to output a similar embedding for the same person and a very different embedding for a different person.
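For completeness, the triplet loss itself looks roughly like this (a sketch; the margin and the embeddings are placeholders):

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Encourage d(anchor, positive) + margin <= d(anchor, negative).
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    return max(pos_dist - neg_dist + margin, 0.0)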

+ +

Sometimes, a binary classification head is also used at the end. It removes the need for the triplet loss and instead outputs a number from 0 to 1 for similarity.

+ +

Hope my answer can help you and have a nice day!

+",23713,,,,,11/9/2019 11:49,,,,1,,,,CC BY-SA 4.0 +16378,2,,16375,11/9/2019 17:16,,2,,"

For a heuristic to be admissible, it must never overestimate the distance from a state to the nearest goal state.

+ +

For a heuristic to be consistent, the heuristic's value must be less than or equal to the cost of moving from that state to the state nearest the goal that can be reached from it, plus the heuristic's estimate for that state.

+ +

What this means is that, as you move along the sequence of nodes from start to goal that the heuristic recommends, a consistent heuristic should monotonically decrease in value. A consistent heuristic is thus also always admissible.

+ +

Notice that this means that if a heuristic is not admissible (like yours), it is also not consistent (by the contrapositive).

+ +

Therefore, if you already know your heuristic is not admissible, you should not be surprised that it is not consistent.

+ +

It seems most likely that you may have confused the definition of consistent for monotone. A consistent heuristic is both monotone and admissible.

+ +

As Neil Says, if you want to know why your specific heuristic is inadmissible, you should post another question about it, or modify this one.

+",16909,,,,,11/9/2019 17:16,,,,0,,,,CC BY-SA 4.0 +16380,1,,,11/9/2019 18:51,,2,32,"

I am currently trying to implement an end-to-end speech recognition system from scratch, that is, without using any of the existing frameworks (like TensorFlow, Keras, etc.). I am building my own library, where I am trying to do a polynomial approximation of functions (like exponential, log, sigmoid, ReLU, etc). I would like to have access to a nice description of the neural networks involved in an end-to-end speech recognition system, where the architecture (the layers, activation functions, etc.) is clearly laid out, so that I can implement it.

+ +

I find that most of the academic or industry papers cite various previous works, toolkits or other papers, which makes it tedious for me. I am new to the field, so I am having more difficulty, and I am looking for some help here.

+",31145,,2444,,11/11/2019 16:27,11/11/2019 16:27,Is there a detailed description or implementation of an end-to-end speech recognition system?,,0,2,,,,CC BY-SA 4.0 +16381,1,,,11/9/2019 21:30,,4,501,"

I was just doing a simple NN example with the fashion MNIST dataset, where I was getting 97% accuracy, when I noticed that I was using Binary cross-entropy instead of categorical cross-entropy by accident. When I switched to categorical cross-entropy, the accuracy dropped to 90%. I then got curious and tried to use binary cross-entropy instead of categorical cross-entropy in my other projects and in all of them the accuracy increased.

+ +

Now, I know that binary cross-entropy can be used in a multi-class, multi-label classification problem, but why is it working better than categorical cross-entropy in a multi-class, single-label problem?
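For context, the difference in setup I am describing is roughly the following (a sketch in Keras; the layer size is a placeholder and nothing else about the model changes):

import tensorflow as tf

# Multi-class, single-label head: one softmax over the 10 classes,
# compiled with loss='categorical_crossentropy' (one-hot labels).
categorical_head = tf.keras.layers.Dense(10, activation='softmax')

# 'Binary' head: 10 independent sigmoid outputs, one per class,
# compiled with loss='binary_crossentropy' (each output is its own yes/no problem).
binary_head = tf.keras.layers.Dense(10, activation='sigmoid')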

+",31147,,2444,,11/10/2019 1:49,11/10/2019 1:49,Why does the binary cross-entropy work better than categorical cross-entropy in a multi-class single label problem?,,1,0,,,,CC BY-SA 4.0 +16382,2,,16381,11/9/2019 22:35,,1,,"

This related question on Cross Validated is relevant: https://stats.stackexchange.com/questions/260505/machine-learning-should-i-use-a-categorical-cross-entropy-or-binary-cross-entro

+ +

Based on my reading, when you have a NN and use binary cross-entropy on what you might call 'linked category data', the accuracy can tend to be better than with a categorical cross-entropy model. The binary aspect implies the categories can undergo multiple splits before deciding on the exact category; when the data is categorically splittable like this, in a tree-like hierarchy, the accuracy can tend to be better.

+ +

Think of how difficult it would be to memorize the name of every type of clothing in someone's wardrobe if each piece had its own special name, versus if they had structurally relevant names: upper/lower for category number one (is it worn on your top half or bottom half?), followed by inner/outer (is it an inner or outer layer of clothing?). Learning such binary name/feature categories enables a more accurate model. If the data were unrelated in this way, it would most likely not be as accurate. The binary model can take advantage of learning such features, while a multi-categorical model, I think, assumes independence, tries to learn the features of each group as well as it can, and gives out a prediction of how sure it is that the input falls in each category.

+",30365,,,,,11/9/2019 22:35,,,,2,,,,CC BY-SA 4.0 +16383,1,,,11/9/2019 22:39,,4,115,"

I am working on a problem in which I am attempting to find a stable region in a spiral galaxy. The PI I'm working with asked me to use machine learning as a tool to solve the problem. I have created some visualizations of my data, as bellow.

+ +

+ +

In this image, you can see there is a flat region between 0 and roughly 30 pixels, and between 90 pixels and 110 pixels. I have received suggestions to use an RNN LSTM model that can identify flat regions, but I wanted to hear other suggestions of other neural network models as well.

+ +

The PI I'm working with suggests to feed my data visualization images into a neural network and have the neural network identify said stable regions. Can this be done using a neural network, and what resources would I have to look at? Moreover, can this problem be solved with RNN LSTM? I think the premise of this was to treat the radius as some temporal dimension. I've been extensively looking for answers online, and I cannot quite seem to find any similar examples.

+",31148,,30365,,11/11/2019 0:14,12/11/2019 2:00,Using a neural network to identify a stable region within a set of data?,,2,0,,,,CC BY-SA 4.0 +16384,1,16387,,11/9/2019 23:15,,3,884,"

My vague understanding of reinforcement learning (RL) is that it's very similar to supervised learning, except that it updates on a continuous feed of data/activity; this, to me, sounds very similar to AutoML (which I've started to notice being used).

+ +

Do they use different algorithms? What is the fundamental difference between RL and AutoML?

+ +

I'm after an explanation for somebody who understands technology but does not work with machine learning tools regularly.

+",19484,,2444,,11/10/2019 0:45,11/11/2019 15:51,What is the difference between reinforcement learning and AutoML?,,2,0,,,,CC BY-SA 4.0 +16385,5,,,11/10/2019 0:46,,0,,,2444,,2444,,11/10/2019 0:46,11/10/2019 0:46,,,,0,,,,CC BY-SA 4.0 +16386,4,,,11/10/2019 0:46,,0,,"For questions related to automated machine learning (AutoML), which refers to a collection of techniques to automate the design and the application of machine learning algorithms and models.",2444,,2444,,11/10/2019 0:46,11/10/2019 0:46,,,,0,,,,CC BY-SA 4.0 +16387,2,,16384,11/10/2019 1:29,,3,,"

Automated machine learning (AutoML) is an umbrella term that encompasses a collection of techniques (such as hyper-parameter optimization or automated feature engineering) to automate the design and application of machine learning algorithms and models.

+ +

Reinforcement learning (RL) is a sub-field of machine learning concerned with the task of making decisions and taking actions in an environment so as to maximize (long-term) reward (which is the goal of the so-called RL agent). RL is (at least partially) based on the way animals (including humans) learn. For example, the usual way of training a dog to perform a certain task is to reward it with food whenever it takes the correct action (for example, jumping, if you want the dog to jump whenever you make a certain gesture with your hand). In this case, the RL agent is the dog, the task the dog needs to perform (e.g. jumping) is the environment, food is the reward and the goal is to get food.

+ +

Given that reinforcement learning (RL) is a sub-field of machine learning, then, in principle, AutoML can also be used to automate the design of RL algorithms, models or agents. For example, if you use a neural network to represent the policy (the function the determines which action to take in the environment), then you can potentially use AutoML to find the most appropriate architecture (for example, the most appropriate number of layers) for this neural network.

+",2444,,2444,,11/10/2019 15:07,11/10/2019 15:07,,,,0,,,,CC BY-SA 4.0 +16388,5,,,11/10/2019 1:53,,0,,,2444,,2444,,11/10/2019 1:53,11/10/2019 1:53,,,,0,,,,CC BY-SA 4.0 +16389,4,,,11/10/2019 1:53,,0,,"For questions related to the concept of cross-entropy in the context of artificial intelligence. For example, when the cross-entropy is used as a loss function to train a neural network.",2444,,2444,,11/10/2019 1:53,11/10/2019 1:53,,,,0,,,,CC BY-SA 4.0 +16390,2,,16383,11/10/2019 2:27,,0,,"

In image processing CNNs are usually used to create weighted filters for focusing in on the image features which are most important for making predictions. Keras is one of the libraries used to examine images in this way. With this type of analysis you will need labeled and unlabeled data you want to create a network that inputs a photo extracts the flat black line regions and outputs those. The model will be generative, generating guesses of regions where the function is flat. This is all possible to do but in order to label the data you need to label them by hand or you need to create a function that manually labels them which would not be very difficult. The input nodes will take in the pixels of the picture and the output layer will be guesses at location along the graph of wether the section is flat or not. It seems overkill to do this with a neural network when it is possible to not use a NN and creating a labeling method will most likely be your first step. If you have any questions please ask.

+",30365,,,,,11/10/2019 2:27,,,,0,,,,CC BY-SA 4.0 +16391,1,16392,,11/10/2019 2:54,,3,474,"

In a feed-forward neural network, in order to efficiently do backpropagation, what kind of data structure is needed?

+ +

I know the weights can just be stored in an array, and you need pointers of some kind to represent connections from one layer to the next (or just a default scheme/pattern), but is anything else needed for backpropagation to work?

+",30365,,2444,,11/10/2019 13:31,5/23/2022 21:22,What kind of data structures are needed to efficiently do back-propagation in a feedforward neural network?,,2,0,,,,CC BY-SA 4.0 +16392,2,,16391,11/10/2019 4:51,,2,,"

TL;DR: You'll need to store a little bit more to perform backward passes. You'll need to store data from the forward pass. This stored information is used for calculating the gradient.

+ +

Overview (warning: not trivial)

+ +
+

I know the weights can just be stored in an array

+
+ +

You'll need a little more:

+ +

To update the weights you need to keep a ""cache"" of the forward pass intermediate terms. That is, forward propagation can be seen as a series of transformations on your input $X$: +$$X\xrightarrow{\Theta^{[1]}+b^{[1]}} [ Z^{[1]} \xrightarrow{\alpha^{[1]}} A^{[1]}] \xrightarrow{\Theta^{[2]}+b^{[2]}} +\dots +\xrightarrow{\Theta^{[L]}+b^{[L]}} [ Z^{[L]} \xrightarrow{\alpha^{[L]}} A^{[L]}]\xrightarrow{\frac{1}{m}\sum\limits_m\sum\limits_{n_L} loss\{A^{[L]},y\}} J +$$

+ +

where:

+ +

$Z^{[1]}=\Theta^{[1]}X+b^{[1]}$ (ie the linear part)

+ +

$A^{[l]}=\alpha^{[l]}(Z^{[l]})$ (ie element wise activation over linear part)

+ +

You need to store the $Z^{[l]}$ & $A^{[l]}$ terms in said ""cache."" You could store these in an array or some other similar data structure. You need these for calculating the gradient during the backwards pass.

+ +

Syntax

+ +

$A^{[k]}$ - this means we are indexing by layer (eg $\alpha^{[k]}$ is the activation for k-th layer)

+ +

$m$ - is the number of examples in the batch

+ +

$n_k$ - denotes the number of neurons in the k-th layer

+ +

$L$ - the number of layers (so $n_L$ is the number of neurons in last layer)

+ +

$\Theta$ - The set of all weights (notice no superscript)

+ +

Backprop

+ +

In the case of neural networks the cost is a scalar function of inputs and parameters. To get backprop started calculate the scalar by matrix derivative of the cost with respect to the activations of the last layer call this matrix $dA^{[L]}$. Observe:

+ +

$dA^{[L]} = \frac{\partial J(\Theta,X)}{\partial A^{[L]}}$

+ +

Next, we calculate scalar-by-matrix derivative of $Z^{[L]}$. Doing this one realizes:

+ +

$dZ^{[L]} = \frac{\partial J(\Theta,X)}{\partial Z^{[L]}} = dA^{[L]}\odot\alpha'^{[L]}(Z^{[L]})$

+ +

Where $\odot$ denotes element wise (Hadamard) product.

+ +

With the above one can make use of the matrix definitions for back propagation:

+ +

$\text{(A)}\quad d\Theta^{[l]} = \frac{1}{m}dZ^{[l]}\times (A^{[l-1]})^T$

+ +

$\text{(B)}\quad db^{[l]} = \frac{1}{m}\sum_{c=1}^m dZ^{[l](c)}$ (where the new superscript in $dZ^{[l](c)}$ denotes summing along the batch dimension )

+ +

$\text{(C)} \quad dZ^{[l]}= dA^{[l]}\odot \alpha^{'[l]}(Z^{[l]})$

+ +

$\text{(D)}\quad dA^{[k]} = (\Theta^{[k+1]})^T\times dZ^{[k+1]}$

+ +

And of course the weight updates are:

+ +

$\Theta^{[l]} \leftarrow \Theta^{[l]} - \eta \, d\Theta^{[l]} $

+ +

$b^{[l]} \leftarrow b^{[l]} - \eta \, db^{[l]} $

+ +

(where $\eta$ is the learning rate)

+ +

Observe how the forward pass terms are used during the backprop calculations.
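To tie this back to the data-structure question, here is a minimal NumPy sketch (a 2-layer example with arbitrary sizes, ReLU everywhere, and a dummy upstream gradient standing in for a real loss) showing the cache of $(Z^{[l]}, A^{[l]})$ being stored on the forward pass and consumed on the backward pass:

import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 5, 3]                          # n_0 (input), n_1, n_2: arbitrary layer sizes
params = [{'W': rng.standard_normal((sizes[l + 1], sizes[l])) * 0.1,
           'b': np.zeros((sizes[l + 1], 1))} for l in range(2)]

def relu(z): return np.maximum(z, 0.0)
def relu_grad(z): return (z > 0).astype(float)

X = rng.standard_normal((4, 8))            # a batch of m = 8 examples
m = X.shape[1]

# Forward pass: store Z and A for every layer (this cache is the extra data structure).
cache = [{'A': X}]                         # A^[0] is the input
A = X
for p in params:
    Z = p['W'] @ A + p['b']
    A = relu(Z)
    cache.append({'Z': Z, 'A': A})

# Backward pass: reuse the cached tensors, following equations (A)-(D).
dA = np.ones_like(A)                       # stand-in for dJ/dA^[L] from a real loss
grads = [None] * len(params)
for l in range(len(params) - 1, -1, -1):
    dZ = dA * relu_grad(cache[l + 1]['Z'])                 # (C)
    grads[l] = {'dW': dZ @ cache[l]['A'].T / m,            # (A)
                'db': dZ.sum(axis=1, keepdims=True) / m}   # (B)
    dA = params[l]['W'].T @ dZ                             # (D)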

+ +

A recommendation

+ +

Take the A. Ng deep learning specialization. He does a good job explaining the intuition and even has a project to implement this. Though, he does not derive the back propagation equations. You can find a not so easy derivation here.

+",28343,,28343,,11/10/2019 16:53,11/10/2019 16:53,,,,1,,,,CC BY-SA 4.0 +16393,2,,16374,11/10/2019 6:26,,1,,"

Not exactly, at least traditionally: in Game Theory, ""imperfect information"" is most often defined as agents having only partial information about the history of agents' actions, as you correctly noted. But also note that this doesn't refer to the general world facts or state.

+ +

But ""partial observability"" is typically used in terms of systems, e.g. in Markov Decision Processes, where it explicitly refers to world state, which might or might not include the history of other actors' actions.

+ +

But of course in the end it depends which exact definitions are used in the context you're looking at - every author is free to define their own concepts, using traditional names or new ones.

+",3554,,,,,11/10/2019 6:26,,,,0,,,,CC BY-SA 4.0 +16394,1,,,11/10/2019 9:50,,12,5589,"

A friend of mine, who is an International Master at chess, told me that humans were superior to machines provided you didn't impose the time constraints that exist in competitive chess (40 moves in 2 hours) since very often games were lost, to another human or a machine, when a bad move is made under time pressure.

+ +

So, with no time constraints and access to a library of games, the human mind remains superior to the machine is my friend's contention. I'm an indifferent chess player and don't really know what to make of this. I was wondering if any research had been made that could back up that claim or rebut it.

+",31158,,2444,,11/11/2019 1:07,1/22/2021 1:25,Are humans superior to machines in chess?,,1,3,,,,CC BY-SA 4.0 +16395,1,16397,,11/10/2019 10:26,,3,53,"

I am trying to implement a tabular-based GLIE Monte-Carlo learning algorithm. +So I repeat n times:

+ +
    +
  1. create observations using my previous policy $\pi_{n-1}(s)$
  2. +
  3. update my state-action values using the observations generated in 1 with the monte-carlo update rule: $Q_n(s_t,a_t)= Q_n(s_t,a_t)+1/N(s_t,a_t)\times(G_t-Q_n(S_t,a_t))$
  4. +
  5. update my policy to $\pi_{n}$ using epsilon-geedy improvement with $\epsilon=1/(n+1)$.
  6. +
+ +

In step 2 I need to decide for an initial estimate $\tilde{Q}_n$. Is it a decent option to use $\tilde{Q}_n=Q_{n-1}$?

+",30369,,,,,11/10/2019 11:08,Can I use my previous estimate of the state-action values as initialisation in GLIE-Monte Carlo Control?,,1,0,,,,CC BY-SA 4.0 +16396,2,,16394,11/10/2019 10:42,,22,,"

Losing games to computers because of mistakes made under time pressure was probably a thing about 20 years ago, when Kasparov lost to DeepBlue after such a mistake (correction: it was Kramnik with the blunder, not Kasparov; see edit 2). But after Kramnik's loss in the early 2000s, no world champion ever tried to play against a computer (to my knowledge). Nowadays, there are computer-only tournaments among programs with ratings well above 3300 (for comparison, Carlsen's peak rating was around 2880), and it is not uncommon for computers to make moves with no apparent meaning to humans.

+

No time limit for humans also means no time limit for computers, so I doubt any human can win a single game against a computer. Older models like Stockfish 8 depend on their computational power, as they can look at several million positions per second; Google AlphaZero managed to beat Stockfish with 80000 positions per second, so they don't seem to depend on brute-force calculations any more. Keep in mind that this is without any prior knowledge of openings etc.; they are trained using reinforcement learning, starting from the rules of the game only. From there, they can develop their own strategies and implement them without making any mistakes. They create their own openings from scratch, so existing libraries are not going to be very useful.

+

I am not aware of any research on this but lack of a challenge from humans is probably enough evidence. Also, Grand Masters regularly use chess engines in their training routine to analyze positions, so there is that.

+

A few years ago there was a game between Stockfish and GM Nakamura + Rybka, which Stockfish won. It is possible that a human GM + Stockfish might have better chances against AlphaZero in correspondence play without any time limits, but we will probably never know.

+

Here is an interview with Carlsen after a game, very interesting to show what he thinks about AlphaZero.

+

Both Kramnik and Kasparov made serious mistakes in their matches against computers. Kasparov resigned in a drawn position and missed a knight sacrifice, and Kramnik missed a mate in 1.

+",22301,,2444,,1/22/2021 1:25,1/22/2021 1:25,,,,1,,,,CC BY-SA 4.0 +16397,2,,16395,11/10/2019 11:08,,1,,"
+

In step 2 I need to decide for an initial estimate $\tilde{Q}_n$. Is it a decent option to use $\tilde{Q}_n=Q_{n-1}$?

+
+ +

Yes, this is a common choice. It's actually common to update the table for $\tilde{Q}$ in place, without any separate initialisation per step. The separate phases of estimation and policy improvement are easier to analyse for theoretical correctness, but in practice updates made in place can be faster because new information is used as soon as it is available.

+ +

Depending on how the policy was changed, and how accurate the previous estimate was, this could place the estimates closer to convergence for the next step. Often the previous estimates will be closer to the new targets than any fixed or random initialisation scheme you could set up.
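In code, carrying the estimates over just means keeping one table across the outer loop (a sketch; run_episode is a placeholder for your own step 1 and for however you compute the returns $G_t$):

import random
from collections import defaultdict

def run_episode(policy):
    # Placeholder: return a list of (state, action, return G_t) tuples from one episode.
    s = random.randint(0, 2)
    a = policy(s)
    return [(s, a, random.random())]

Q = defaultdict(float)  # a single table; Q_n starts from Q_{n-1} simply by never being reset
N = defaultdict(int)

for n in range(1, 101):
    eps = 1.0 / (n + 1)
    def policy(s):
        if random.random() < eps:
            return random.choice([0, 1])
        return max([0, 1], key=lambda a: Q[(s, a)])
    for s, a, G in run_episode(policy):
        N[(s, a)] += 1
        Q[(s, a)] += (G - Q[(s, a)]) / N[(s, a)]  # incremental Monte-Carlo update, in place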

+",1847,,,,,11/10/2019 11:08,,,,2,,,,CC BY-SA 4.0 +16398,2,,16374,11/10/2019 14:29,,3,,"

There is indeed a close parallel here, but the concepts are distinct. Every perfect information game is fully observable, but not every fully observable game is a game of perfect information.

+ +

A game of imperfect information is one in which you lack knowledge of any of the following:

+ +
    +
  1. The state of the game (e.g. current market prices).
  2. +
  3. The rewards you will receive from various states (i.e. utility and cost functions).
  4. +
+ +

In contrast, in a partially observable process (specifically, a POMDP), the requirement is that you must not know which state you are in.

+ +

This is a subtle distinction, so here are some examples:

+ +
    +
  • A multi-armed bandit game with stationary distributions. Here, you know which state you are in (in fact, if the distributions are stationary, you know that the state doesn't change, except for the value of your winnings). You are not in a POMDP (the game is fully observable), but you are operating with imperfect information, because you don't know the utility function associated with different actions. You are operating in a regular MDP.

  • +
  • The game of chess has perfect information, and is also thus fully observable.

  • +
  • The game of poker has imperfect information because you cannot observe the current state of the game (you can't see the cards in your opponent's hand). It is thus a POMDP.
  • +
+",16909,,,,,11/10/2019 14:29,,,,0,,,,CC BY-SA 4.0 +16399,5,,,11/10/2019 15:42,,0,,,2444,,2444,,11/10/2019 15:42,11/10/2019 15:42,,,,0,,,,CC BY-SA 4.0 +16400,4,,,11/10/2019 15:42,,0,,"For questions related to speech recognition, also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT), which is a sub-field of computational linguistics that enables the recognition and translation of spoken language into text by computers.",2444,,2444,,11/10/2019 15:42,11/10/2019 15:42,,,,0,,,,CC BY-SA 4.0 +16401,2,,16375,11/10/2019 16:06,,2,,"
+

If a heuristic is not admissible, can it be consistent?

+
+ +

No. Consistency implies admissibility. In other words, if a heuristic is consistent, it is also admissible. However, admissibility does not imply consistency. In other words, an admissible heuristic is not necessarily consistent.

+ +

Definitions

+ +

Given a graph $G=(V, E)$ representing the search space, where $V$ and $E$ are respectively the set of vertices and edges, and the function $w: V \times V \rightarrow \mathbb{R}$ that defines the weight (or cost) of each edge of $G$, an admissible heuristic $h_{\text{a}}$ is defined as

+ +

$$h_{\text{a}}(n) \leq h^*(n), \forall n \in V$$

+ +

where $h^*(n)$ is the optimal cost to reach a goal from $n$ (so $h^*(n)$ is the optimal heuristic).

+ +

On the other hand, a consistent heuristic $h_{\text{c}}$ is defined as

+ +

\begin{align} +h_{\text{c}}(n) &\leq w(n, s) + h_{\text{c}}(s), \forall n \in V \setminus \mathcal{G}, \text{ and} \\ +h_{\text{c}}(g) &= 0, \forall g \in \mathcal{G}, +\end{align} +where $s$ is a successor of $n$, $g$ is any goal node and $\mathcal{G}$ is the set of goal nodes of the graph $G$.

+ +

Theorem

+ +

A consistent heuristic is an admissible heuristic.

+ +

Proof

+ +

Let $h$ be a consistent heuristic. Given that $h$ is consistent, then $h(g) = 0$, for any goal node $g$, so it does not overestimate the cost of reaching the goal at any of the goal nodes (given that, if you already are at a goal node, the cost is $0$, and $h(g) = 0$ is not greater than $0$). Let $g_{n}$ be an arbitrary neighbour of an arbitrary goal node $g$. Given that $h$ is consistent, then $h(g_{n}) \leq w(g_{n}, g) + h(g)$. Given that $h(g)$ does not overestimate the cost to reach the goal from $g$, then $w(g_{n}, g) + h(g)$ also does not overestimate the cost of reaching the goal from $g_n$, given that $w(g_{n}, g)$ is the true cost of the edge $(g_{n}, g) \in E$ and the cost to reach the goal from $g_n$ must be at least $w(g_{n}, g)$. This reasoning can be applied inductively (or recursively) on $g_n$ (then on the neighbouring nodes of $g_n$, and so on), so $h$ must be admissible.
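As a quick numerical sanity check of the two definitions (the toy graph, its edge costs and the heuristic values below are made-up illustrations, not part of the proof):

edges = {'A': {'B': 1, 'C': 4}, 'B': {'G': 2}, 'C': {'G': 1}, 'G': {}}   # successors with edge costs
h_star = {'A': 3, 'B': 2, 'C': 1, 'G': 0}                                # optimal cost-to-goal
goals = {'G'}

def is_admissible(h):
    return all(h[n] <= h_star[n] for n in edges)

def is_consistent(h):
    return all(h[g] == 0 for g in goals) and \
           all(h[n] <= cost + h[s] for n in edges for s, cost in edges[n].items())

h = {'A': 2, 'B': 2, 'C': 1, 'G': 0}
print(is_consistent(h), is_admissible(h))   # True True: consistency implies admissibility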

+",2444,,2444,,11/10/2019 16:24,11/10/2019 16:24,,,,6,,,,CC BY-SA 4.0 +16402,5,,,11/10/2019 16:29,,0,,,2444,,2444,,11/10/2019 16:29,11/10/2019 16:29,,,,0,,,,CC BY-SA 4.0 +16403,4,,,11/10/2019 16:29,,0,,"For questions related to admissible heuristics, which are heuristics that never overestimate the cost of reaching a goal.",2444,,2444,,11/10/2019 16:29,11/10/2019 16:29,,,,0,,,,CC BY-SA 4.0 +16404,5,,,11/10/2019 16:31,,0,,,2444,,2444,,11/10/2019 16:31,11/10/2019 16:31,,,,0,,,,CC BY-SA 4.0 +16405,4,,,11/10/2019 16:31,,0,,"For questions related to consistent heuristics, which are heuristics whose estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor.",2444,,2444,,11/10/2019 16:31,11/10/2019 16:31,,,,0,,,,CC BY-SA 4.0 +16406,5,,,11/10/2019 17:01,,0,,,2444,,2444,,11/10/2019 17:01,11/10/2019 17:01,,,,0,,,,CC BY-SA 4.0 +16407,4,,,11/10/2019 17:01,,0,,"For questions related to the uniform-cost search (UCS) algorithm, also known as the cheapest-first search, which is the uninformative version of the A* algorithm. The UCS is very similar to Dijkstra's algorithm (originally introduced by Edsger W. Dijkstra in 1956 and published in the 1959 paper ""A note on two problems in connexion with graphs""), which originated UCS, according to Norvig and Russell's book.",2444,,2444,,11/17/2020 12:15,11/17/2020 12:15,,,,0,,,,CC BY-SA 4.0 +16409,5,,,11/10/2019 17:20,,0,,,2444,,2444,,11/10/2019 17:20,11/10/2019 17:20,,,,0,,,,CC BY-SA 4.0 +16410,4,,,11/10/2019 17:20,,0,,"For questions related to the graph search. As opposed to a tree search, a graph search uses a list or set, called the closed list (or explored set), which contains the already visited and expanded nodes (or states) of the search space (which is usually represented as a graph, both in the case of tree and graph searches), so that not to revisit these already visited nodes.",2444,,2444,,11/10/2019 23:47,11/10/2019 23:47,,,,0,,,,CC BY-SA 4.0 +16411,5,,,11/10/2019 17:23,,0,,,2444,,2444,,11/10/2019 17:23,11/10/2019 17:23,,,,0,,,,CC BY-SA 4.0 +16412,4,,,11/10/2019 17:23,,0,,For questions related to the family of algorithms often denoted as best-first search algorithms. A* is an example of a best-first search algorithm.,2444,,2444,,11/10/2019 17:23,11/10/2019 17:23,,,,0,,,,CC BY-SA 4.0 +16413,5,,,11/10/2019 17:29,,0,,,2444,,2444,,11/10/2019 17:29,11/10/2019 17:29,,,,0,,,,CC BY-SA 4.0 +16414,4,,,11/10/2019 17:29,,0,,"For questions related to the tree search. As opposed to the graph search, a tree search does not use a closed list to keep track of the already visited nodes, so a tree search may visit the same nodes (or states) multiple times.",2444,,2444,,11/10/2019 23:48,11/10/2019 23:48,,,,0,,,,CC BY-SA 4.0 +16415,2,,16384,11/10/2019 18:34,,2,,"

RL can be used in the context of Neural Architecture Search (NAS), which is a form of automated ML. A model searches for an architecture that performs a given task. How well this task is performed guides how the architecture will be modified (improved) on the next pass. It works but is very computation-intensive (think hundreds of GPUs).

+ +

See for instance:

+ + +",23584,,23584,,11/11/2019 15:51,11/11/2019 15:51,,,,0,,,,CC BY-SA 4.0 +16418,1,,,11/10/2019 20:00,,5,1370,"

This at first sounds ridiculous. Of course there is an easy way to write a program to solve a wordsearch.

+ +

But what I would like to do is write a program that solves a word-search like a human.

+ +

That is, use or invent different strategies, e.g. search randomly for the starting letter; go line-by-line; and so on.

+ +

Probably the AI will eventually find out that going line by line looking for a given starting letter of a word is a good strategy.

+ +

Any idea how you would write such a strategy-finding AI?

+ +

I think the main ""moves"" would be things like ""move right one letter in the grid"", ""store this word in memory"", ""compare this letter with first letter in memory"" and a few more.

+",4199,,,,,11/11/2019 1:15,How to create an AI to solve a word search?,,1,0,,,,CC BY-SA 4.0 +16420,1,16426,,11/10/2019 21:51,,3,33,"

For the purpose of object detection, is it better to adjust the natural lighting (while recording the video) or to apply filters (e.g. brightness filters, etc.) on the original video to make it brighter?

+ +

My intuition is that it shouldn't matter whether you adjust the natural lighting or do it afterwards with video filters.

+",28201,,2444,,11/10/2019 23:38,11/11/2019 7:46,Is it better to adjust the natural lighting (while recording the video) or to subsequently apply filters on the original video?,,1,0,,,,CC BY-SA 4.0 +16421,2,,16383,11/11/2019 1:08,,1,,"

If you're really just trying to find long contiguous flat regions in a sequence, you do not need machine learning. Your PI is mistaken. You would be better off simply writing a short data processing program. Your program could find the finite differences between adjacent datapoints, and then check whether a long run of them stays below some threshold to identify long flat regions. This will be faster, simpler, and perhaps more accurate than using ML on data visualizations for this task.
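For example, a rough sketch of such a program (assuming the raw values are in a 1-D NumPy array; the tolerance and minimum run length are placeholders you would tune):

import numpy as np

def flat_regions(values, eps=1e-3, min_len=50):
    # A region is 'flat' where consecutive differences stay below eps.
    flat = np.abs(np.diff(values)) < eps
    regions, start = [], None
    for i, is_flat in enumerate(flat):
        if is_flat and start is None:
            start = i
        elif not is_flat and start is not None:
            if i - start >= min_len:
                regions.append((start, i))
            start = None
    if start is not None and len(flat) - start >= min_len:
        regions.append((start, len(values) - 1))
    return regions   # list of (start_index, end_index) pairs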

+ +

If you are trying to find something more complex than these long flat regions, you could instead train an LSTM on the raw sequential data that you are using to generate the images. Again, that will probably be more accurate than trying to train a CNN, or any non-sequential model, on the image data itself.

+",16909,,,,,11/11/2019 1:08,,,,0,,,,CC BY-SA 4.0 +16422,2,,16418,11/11/2019 1:15,,4,,"

This sounds like a problem that might be solvable with a LSTM-DQN approach, as described in Language Understanding for Text-based Games using Deep +Reinforcement Learning by Narasimhan et al., 2015, and then extended to a domain very similar to your problem in Deep Reinforcement Learning for Syntactic Error Repair in Student Programs by Gupta et al., 2019.

+ +

The basic idea is to treat the array of letters, each of which can be 'circled' or not, plus the position of a cursor, as a state. An action is to move the cursor or toggle the state of a letter. You then model the problem as a reinforcement learning problem, with rewards given every time an entire new word becomes 'circled', probably with a caveat about not circling any invalid letters (otherwise, it'll just learn to circle everything).
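For concreteness, a minimal sketch of that state/action formulation could look like the following (the grid representation, action encoding and reward values are illustrative assumptions, not taken from the papers above):

class WordSearchEnv:
    # State: the letter grid, a boolean 'circled' mask and a cursor position.
    # Actions: 0-3 move the cursor (up/down/left/right), 4 toggles 'circled'.
    def __init__(self, grid, words):
        self.grid, self.words = grid, words
        self.rows, self.cols = len(grid), len(grid[0])
        self.circled = [[False] * self.cols for _ in range(self.rows)]
        self.cursor = (0, 0)

    def step(self, action):
        r, c = self.cursor
        if action == 0:   r = max(r - 1, 0)
        elif action == 1: r = min(r + 1, self.rows - 1)
        elif action == 2: c = max(c - 1, 0)
        elif action == 3: c = min(c + 1, self.cols - 1)
        elif action == 4: self.circled[r][c] = not self.circled[r][c]
        self.cursor = (r, c)
        reward = 1.0 if self._new_word_completed() else 0.0
        return (self.grid, self.circled, self.cursor), reward

    def _new_word_completed(self):
        # Placeholder: check whether the circled cells spell out a new word.
        return False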

+ +

This is just a guess, but it seems like it's very closely related to your problem, so it's likely to be effective.

+",16909,,,,,11/11/2019 1:15,,,,6,,,,CC BY-SA 4.0 +16423,1,,,11/11/2019 1:33,,2,263,"

If I understood correctly, the AlphaGo Zero network returns two values: a vector of logit probabilities p and a value v.

+ +

My question is: in this vector that it is outputted, do we have a probability for every possible action in the game? If so: does it apply a probability of 0 to actions that are not possible in that particular state? If this is true, how does the network know which actions are valid?

+ +

If not: then the network will output vectors of different sizes according to each state. Is this even feasible? And again, how will the network know which actions are valid?

+ +

Related questions but none of them covers this question in specific: 1, 2 and 3.

+",22369,,,,,11/11/2019 1:33,AlphaGo Zero: Does the policy head give a probability for every possible move?,,0,3,,,,CC BY-SA 4.0 +16424,1,,,11/11/2019 2:17,,2,295,"

I was curious about how people make AI to play games. Does anyone know of the AI used to play these games? What allows the AI to see/click the screen in real-time? Even just direction on what libraries for such tasks would be helpful. I can't imagine game developers make an API for creating bots in their games like browsers use with selenium.

+",30365,,2444,,11/12/2019 0:01,11/12/2019 0:01,What technology do people use to create bots for games like LOL or Runescape?,,1,0,,,,CC BY-SA 4.0 +16426,2,,16420,11/11/2019 7:46,,1,,"

Personally, I'd say as long as the object is visible don't do either. If the model has been well built and if lighting changes would help, the convolution operation weights would learn an operation similar to contrast or brightness changes.

+ +

On the other hand if the object visibility is an issue, then natural lighting changes would be better, due to the lack of potential artefacts a filter would create.

+ +

So, overall, I'd say natural lighting changes should be more helpful (assuming the model is built well), while brightness filters would not be very helpful, as the convolution operations would learn them if they were useful; moreover, they would introduce artefacts in the input, which can lead to the model learning irrelevant details.

+ +

Hope this helped!

+",25658,,,,,11/11/2019 7:46,,,,0,,,,CC BY-SA 4.0 +16447,1,,,11/11/2019 7:53,,0,774,"

I want to use deep reinforcement learning for vehicle rerouting in SUMO, but I don't know how to start training the model.

+ +

I've already created a road network and vehicle routing in SUMO-XML files (mymap.net.xml and mymap.rou.xml). Currently, I'm trying to train the model in a Jupyter Notebook, importing the TraCI library to control the SUMO simulator and allow for a reinforcement learning approach. However, I'm still confused about the training step.

+ +
    +
  1. Do I need any traffic data to train my agent to take actions in the environment?

  2. +
  3. How can I train based on these SUMO-XML files I created?

  4. +
  5. Is it possible to run the simulation on Windows? or I need to change to Ubuntu instead?

  6. +
+ +

I would appreciate if someone could guide me. Thank you in advance.

+",,Chantakarn,,,,11/19/2019 11:38,How can I use deep reinforcement learning for vehicle rerouting in SUMO?,,1,0,,2/13/2022 23:47,,CC BY-SA 4.0 +16427,1,,,11/11/2019 8:38,,3,32,"

What is the general approach to defect detection in deep learning?

+ +

Would the approach be better if we try to learn the positive images (defects in images) as much as possible, or if we try to learn the negative images (images without blemishes) and single out the defects as anomalies?

+ +

Can someone point me to some architecture?

+ +

Regards

+",31175,,,,,11/11/2019 8:38,Defect Detection System using Deep Learning,,0,0,,,,CC BY-SA 4.0 +16428,2,,16424,11/11/2019 8:56,,3,,"

Bot development is more about 'hacking' than AI, in the sense that you first need to read and (over)write game data that you are not supposed to (thereby potentially violating the Terms and Conditions, so be aware of that). The AI part is fairly simple for most hack/bot applications that I can think of.

+ +

Read data

+ +

To read game data you can for example:

+ +
    +
  • use an assembly code debugger like Ollydbg to locate relevant data in the memory, e.g. amount of gold
  • +
  • Observe graphical objects being rendered, e.g. a unit being drawn
  • +
  • Intercept network packages containing all kinds of game information, e.g. a unit appearing on your screen
  • +
+ +

Write data

+ +

In a similar way there are multiple ways to write data:

+ +
    +
  • overwrite game data in memory
  • +
  • Use the Windows API SendInput() function to emulate keyboard inputs
  • +
  • Use the Windows API SendMessage() function to send messages to the game
  • +
  • Manipulate network traffic
  • +
+ +

These lists are not comprehensive but to give you an idea of how it is being done.

+ +

AI-wise, an A* algorithm, for example, can be deployed for pathfinding.
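For illustration, here is a minimal grid-based A* sketch in Python (the grid encoding and the Manhattan-distance heuristic are assumptions made for this example):

import heapq

def a_star(grid, start, goal):
    # grid[r][c] == 0 means the cell is walkable.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan distance
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None   # no path found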

+ +

If you are interested in the topic, I suggest reading 'Game Hacking' by Nick Cano. The book provides a good introduction.

+",30789,,,,,11/11/2019 8:56,,,,0,,,,CC BY-SA 4.0 +16429,2,,16175,11/11/2019 10:21,,2,,"

The problem originated because of the nature of the code.

+ +

Code: +https://github.com/AISangam/Facenet-Real-time-face-recognition-using-deep-learning-Tensorflow/blob/master/classifier.py

+ + + +
                model = SVC(kernel='linear', probability=True)
+                model.fit(emb_array, label)
+
+                class_names = [cls.name.replace('_', ' ') for cls in img_data]
+
+ +

As you can see, the code uses an SVC (Support Vector Classifier) to classify the classes. The SVC (or SVM) does not have an extra class for unknown people.

+ +

The threshold variable is used in face detection, i.e. for drawing a bounding box around the face so that FaceNet can classify it.

+ +

Code:

+ +

https://github.com/AISangam/Facenet-Real-time-face-recognition-using-deep-learning-Tensorflow/blob/master/identify_face_image.py

+ + + +
            frame = frame[:, :, 0:3]
+            bounding_boxes, _ = detect_face.detect_face(frame, minsize, pnet, rnet, onet, threshold, factor)
+            nrof_faces = bounding_boxes.shape[0]
+
+ +

As you can see, the threshold variable is only used in detecting the bounding box.

+ +

Code for getting class name:

+ + + +

+predictions = model.predict_proba(emb_array)
+                    print(predictions)
+                    best_class_indices = np.argmax(predictions, axis=1)
+                    # print(best_class_indices)
+                    best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices]
+                    print(best_class_probabilities)
+                    cv2.rectangle(frame, (bb[i][0], bb[i][1]), (bb[i][2], bb[i][3]), (0, 255, 0), 2)    #boxing face
+
+                    #plot result idx under box
+                    text_x = bb[i][0]
+                    text_y = bb[i][3] + 20
+                    print('Result Indices: ', best_class_indices[0])
+                    print(HumanNames)
+
+ +

You can see that no unknown class can be found.

+ +

Solution

+ +

You can try adding another threshold value and checking whether the prediction's maximum value is lower than that threshold. I have little experience in TensorFlow, so this is just a proof of concept; I'm not sure if it will work.

+ +
best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices]  # original code
+if best_class_probabilities[0] < threshold_2:  # threshold_2 is the extra threshold you choose; [0] assumes a single detected face
+    best_class_indices[0] = -1                 # mark this face as not matching any known class
+    result_name = ""unknown""                    # report ""unknown"" instead of a name (result_name is a placeholder)
+
+ +

By the way, because of the nature of the triplet loss, you don't have to add an extra class to the SVC/SVM, as the embedding model is frozen and not retrained, so unknown-class embeddings will be very different from the known classes. However, you can try either approach.

+ +

Hope it helps. Have a nice day!

+",23713,,,,,11/11/2019 10:21,,,,7,,,,CC BY-SA 4.0 +16430,1,,,11/11/2019 10:25,,3,57,"

I'd like to design a network that gets two images (an image under construction, and an ideal image), and has to come up with an action vector for a simple motor command which would augment the image under construction to resemble the ideal image more. So basically, it translates image differences into motor commands to make them more similar?

+ +

I'm dealing with a 3D virtual environment, so the images are snapshots of objects and motor commands are simple alterations to the 3D shape.

+ +

Probably the network needs two pre-trained CNNs with the same weights that extract image features, then output and concatenate those into a dense layer (or two), which converges into action-space. Training should probably happen via reinforcement learning

+ +

Additionally, in the end it needs recurrence, since there are multiple motor actions it needs to do in a row to get closer to the intended result.

+ +

Would there be any serious difficulties with this? Or are there any approaches to achieve the intended result? or any similar examples?

+ +

Thanks in advance

+",31180,,31180,,11/11/2019 11:23,11/11/2019 11:23,Ideas on a network that can translate image differences into motor commands?,,0,3,,,,CC BY-SA 4.0 +16434,2,,12434,11/11/2019 12:34,,1,,"

Here are two review articles:

+ + +",23584,,2444,,11/11/2019 14:16,11/11/2019 14:16,,,,3,,,,CC BY-SA 4.0 +16435,1,16436,,11/11/2019 13:53,,1,195,"

This question is related to What is the formula used to calculate the accuracy in the FaceNet model?. I know how the loss is calculated in the FaceNet model, but how is the loss function used to calculate the probability that this unknown person is, say, Bob (0.70)? Also, we don't know which image is positive or negative; we only know the anchor (so how does FaceNet find which image is positive or negative?). How is the probability calculated in the FaceNet model using triplet loss?

+ +

Can we know the exact formula, or is the CNN like a black box that uses some unknown method to calculate the probability?

+",30306,,30306,,11/11/2019 13:59,11/11/2019 14:42,How is the percentage or the probablity calculated using Loss function in Facenet Model?,,1,0,,,,CC BY-SA 4.0 +16436,2,,16435,11/11/2019 14:42,,1,,"

The FaceNet model is just the embedding (feature extraction) part of the full system. The architecture is similar to the encoder part of an autoencoder, but it uses supervised learning instead of unsupervised learning. The network is called a Siamese network. The triplet loss helps make the embeddings more representative of the input image/person, with the embedding distance being as large as possible for different people and as small as possible for the same person.

+ +

However, the embedding is just a representation of the person. It doesn't contain information directly mapping to who the person is. A classification head is used on top of the FaceNet feature extractor.

+ +

One-shot learning method for person classification

+ +

This method saves the calculated embeddings of known people in a database. The face recognition system first calculates the embedding of the new, unknown image to be classified. The system then loops through the embedding database and calculates the Euclidean distance between the unknown embedding and each embedding in the database.

+ +

After computing the distances, it chooses the smallest one and compares it to a threshold set by you. If the distance is larger than the threshold, the image is classified as unknown; otherwise, the resulting class is the identity whose embedding gives the lowest distance.
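A minimal sketch of this matching step could look like the following (the database dictionary mapping names to stored embeddings and the threshold value of 0.7 are assumptions for illustration):

import numpy as np

def who_is_it(embedding, database, threshold=0.7):
    # database: dict mapping person name -> stored embedding (1-D array)
    best_name, best_dist = None, float('inf')
    for name, ref in database.items():
        dist = np.linalg.norm(embedding - ref)   # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > threshold:
        return 'unknown', best_dist
    return best_name, best_dist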

+ +

Example code from the deeplearning.ai coursera course (great course on AI btw): +Example code of one shot face recognition

+ +

Advantages and disadvantages of the system

+ +

The system has several advantages: no training is needed and it can classify unknown people. However, once the number of people in the database becomes huge, the system will run slower. Its performance may also differ depending on the quality of the reference images.

+ +

SVM (Support Vector Machine) method

+ +

This is the method used in the GitHub repository of your previous posts. It uses a one-layer classification head to output the predicted class.

+ +

Advantages and disadvantages of the system

+ +

The system has the flexibility of adapting to multiple photo environments, and its performance is not affected by the choice of reference images. This method also works well for larger databases of people. The triplet loss can also be removed with this method: you can train the network directly by propagating the loss computed from the predicted probabilities. However, this method requires re-training when a new person is added to the system. Multiple images of each person are also needed.

+",23713,,,,,11/11/2019 14:42,,,,6,,,,CC BY-SA 4.0 +16437,2,,11759,11/11/2019 15:07,,2,,"

Yes, in both cases. Below I give two very simple proofs that directly follow from the definitions of admissible and consistent heuristics. However, in a nutshell, the idea of the proofs is that $h_{\max}(n)$ and $h_{\min}(n)$ are, by definition (of $h_{\max}$ and $h_{\min}$), equal to one of the given admissible (or consistent) heuristics, for all nodes $n$, so $h_{\max}(n)$ and $h_{\min}(n)$ are consequently admissible (or consistent).

+ +

Definitions

+ +

Consider the graph $G=(V, E, \mathcal{G})$ representing the search space, where $V$, $E$ and $\mathcal{G} \subseteq V$ are respectively the set of nodes, edges and goal nodes, and the function $w\colon V \times V \rightarrow \mathbb{R}$, which gives you the cost of each edge $e = (u, v) \in E$, where $u, v \in V$; that is, $w(e) = w(u, v) \in \mathbb{R}$ is the cost of the edge $e$.

+ +

A heuristic $h$ is admissible if $$h(n) \leq h^*(n), \forall n \in V,$$ +where $h^*(n)$ is the optimal cost to reach a goal from node $n$ (that is, it is the optimal heuristic).

+ +

On the other hand, a heuristic $h$ is consistent if

+ +

\begin{align} +h(n) &\leq w(n, s) + h(s), \forall n \in V \setminus \mathcal{G}, \text{ and} \\ +h(n) &= 0, \forall n \in \mathcal{G}, +\end{align} +where $s$ is a successor of $n$.

+ +

Theorem 1

+ +

Given a set of admissible heuristics $H = \{ h_1, \dots, h_N \}$, then, for every $n \in V$, the heuristics $h_{\max}(n) = \max(h_1(n), \dots, h_N(n))$ and $h_{\min}(n) = \min(h_1(n), \dots, h_N(n))$ are also admissible.

+ +

Proof

+ +

Given that, $h_i(n) \leq h^*(n), \forall n \in V$ and $\forall i \in \{ 1, \dots N \}$, then $h_{\max}(n) = h_j(n) \leq h^*(n)$ (for some $j \in \{ 1, \dots N \}$) and $h_{\min}(n) = h_k(n) \leq h^*(n)$ (for some $k \in \{ 1, \dots N \}$), so $h_{\max}$ and $h_{\min}$ are also admissible.

+ +

$\tag*{$\blacksquare$}$

+ +

Theorem 2

+ +

Given a set of consistent heuristics $H = \{ h_1, \dots, h_N \}$, then, for every $n \in V$, the heuristics $h_{\max}(n) = \max(h_1(n), \dots, h_N(n))$ and $h_{\min}(n) = \min(h_1(n), \dots, h_N(n))$ are also consistent.

+ +

Proof

+ +

Given that, for every $i \in \{ 1, \dots, N \}$,

+ +

\begin{align} +\begin{cases} +h_i(n) \leq w(n, s) + h_i(s), & \text{ if } n \in V \setminus \mathcal{G}\\ +h_i(n) = 0, & \text{ if } n \in \mathcal{G} +\end{cases} +\end{align}

+ +

then, if $n \in V \setminus \mathcal{G}$, $h_{\max}(n) = h_j(n) \leq w(n, s) + h_j(s)$, for some $j \in \{1, \dots, N \}$, and, similarly, $h_{\min}(n) = h_k(n) \leq w(n, s) + h_{k}(s)$, for some $k \in \{1, \dots, N \}$. Similarly, if $n \in \mathcal{G}$, $h_{\max}(n) = h_{\min}(n) = h_i(n) = 0, \forall i \in \{1, \dots, N \} $. Thus, $h_{\max}$ and $h_{\min}$ are also consistent.

+ +

$\tag*{$\blacksquare$}$

+ +

Notes

+ +

$h_{\max}$ and $h_{\min}$ have been defined pointwise (node by node), which is the only reasonable way to define them, because, given two (admissible or consistent) heuristics, one may not dominate the other at every node (or vice-versa), even if both are admissible and consistent.
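As a small made-up numerical example: suppose $h^*(n) = 10$ for some node $n$, with $h_1(n) = 7$ and $h_2(n) = 9$, while at another node $m$ with $h^*(m) = 5$ we have $h_1(m) = 4$ and $h_2(m) = 2$. Neither heuristic dominates the other, yet $h_{\max}(n) = 9 \leq 10$ and $h_{\max}(m) = 4 \leq 5$, so $h_{\max}$ remains admissible (and similarly for $h_{\min}$).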

+",2444,,2444,,11/11/2019 17:03,11/11/2019 17:03,,,,1,,,,CC BY-SA 4.0 +16438,1,,,11/11/2019 15:34,,2,481,"

I would like to decompile a compiled file to source code.

+

Is it possible to use any AI technique to perform decompilation? Is there any research on this topic? If yes, can you briefly explain one of the existing approaches (just to get some insight)? I would also appreciate links to research papers on this topic.

+",9126,,2444,,11/19/2020 15:27,11/19/2020 15:27,Is it possible to create a decompiler using AI?,,0,1,,,,CC BY-SA 4.0 +16439,2,,12761,11/11/2019 15:47,,2,,"

Other replies are commenting on the skip connections for a U-Net. I believe you want to exclude these skip connections from your auto-encoder. You say you want to use the auto-encoder for unsupervised pretraining, for which you want to pass the data through a bottleneck, so adding skip connections would work against you if you want to use the encoder for a classification task.

+ +

You ask whether the decoder should 'mirror' the MobileNet encoder. This is actually an interesting one, and I think it could work even if the decoder does not look like the encoder at all. Since you don't need to (and in fact shouldn't) add skip connections, this should be easy to try.

+",31192,,,,,11/11/2019 15:47,,,,1,,,,CC BY-SA 4.0 +16440,1,16446,,11/11/2019 16:01,,5,1045,"

I have read articles on how Jensen-Shannon divergence is preferred over Kullback-Leibler in measuring how good a distribution mapping is learned in a generative network because of the fact that JS-divergence better measures distribution similarity when there are zero values in either distribution.

+

I am unable to understand how the mathematical formulation of JS-divergence would take care of this and also what advantage it particularly holds qualitatively apart from this edge case.

+

Could anyone explain or link me to an explanation that could answer this satisfactorily?

+",25658,,2444,,5/30/2022 8:29,5/30/2022 8:29,Why is the Jensen-Shannon divergence preferred over the KL divergence in measuring the performance of a generative network?,,1,0,,,,CC BY-SA 4.0 +16441,1,16466,,11/11/2019 16:09,,3,128,"

From my understanding, mode collapse is when there happen to be multiple classes (modes) in the dataset and the generative network converges to only one of these classes and generates images only within this class. On training the model more, the model converges to another class.

+ +

In Goodfellow's NeurIPS presentation, he clearly addressed how training a generative network in an adversarial manner avoids mode collapse. How exactly do GANs avoid mode collapse? And did previous works on generative networks not try to address this?

+ +

Apart from the (generally) obvious superior performance, does the fact that GANs address mode collapse make them far preferred over other ways of training a generative model?

+",25658,,,,,11/13/2019 14:45,How exactly does adversarial training help in handling mode-collapse in generative networks?,,1,0,,,,CC BY-SA 4.0 +16443,1,,,11/11/2019 20:44,,4,1599,"

+ +

The above model is what really helped me understand the implementation of convolutional neural networks, so based on that, I've got a tricky hypothesis that I want to find more about, since actually testing it would involve developing an entirely new training model if the concept hasn't already been tried elsewhere.

+ +

I've been building a machine learning project for image recognition and thought about how, at certain stages, we flatten the input after convolution and max pooling, but it occurred to me that by flattening the data, we're fundamentally losing positional information. If you think about how real neurons process information based on clusters, it seems obvious that the proximity of biological neurons is of great significance, rather than thinking of them as flat layers. By designing a training model that takes neuron proximity into account when deciding the structure by which connections between neurons are formed, so that positional information can be utilized and kept relevant, it seems that network effectiveness would improve.

+ +

Edit, for clarification, I made an image representing the concept I'm asking about:

+ +

+ +

Basically: Pixels 1 and 4 are related to each other and that's very important information. Yes we can train our neural network to know those relationships, but that's 12 unique relationships in just a 3x3 pixel grid that our training process needs to successfully teach the network to value, whereas a model that takes proximity of neurons into consideration, like the real world brain would maintain the importance of those relationships since neurons connect more readily to others in proximity.

+ +

My question is: does anyone know of white papers / experiments closely related to the concept I'm hypothesizing? Why would or wouldn't that be a fundamentally better model?

+",7249,,7249,,11/12/2019 1:31,11/12/2019 6:27,Wouldn't convolutional neural network models work better without flattening the input in any stages?,,2,0,,,,CC BY-SA 4.0 +16445,2,,16443,11/11/2019 23:58,,3,,"

I have had similar thoughts about neural networks before. Convolution layers are layers of two-dimensional nodes that effectively pass along the spatial data, so why don't we use two-dimensional hidden layers to receive information from them?

+ +

I'm sure someone has used this type of implementation before. I believe the papers below use this. Part of the point of neural networks is that the weights are trained to find the best solution, so, regardless of the spatial arrangement, the network learns to 'focus' on (increase the weights of) locations that are associated with deciding the solution.

+ +

Think of the problem where your neural network examines an image and outputs true or false. Training images are labeled True if the center is red and one of the corners is blue, or if the center is blue and one of the corners is red. Flattening the layers or not should have basically no effect on this model. In other circumstances, like object detection or labeling outlines, I believe not flattening will benefit the model. With that said, flattening the data does not erase spatial relationships: each layer will still be trained to detect the spatial information that gives a correct answer; the flattened layers just won't have the benefit of neighbors when the layers are one-dimensional instead of two.

+ +

In a CNN with multi-class detection as the task, you could allow each class to have its own CNN-like hidden layers that narrow to a decision node and decide whether the input matches that class or not. Imagine a palm-tree shape, where the trunk is the image convolutions and each leaf at the top is a set of two-dimensional hidden layers that narrow to an output layer.

+ +

Multi-dimensional NN and + Three dimensional Neural Network

+ +

I know I spoke in a lot of abstraction so if any part doesn't make sense, I'll make an edit to clarify.

+",30365,,,,,11/11/2019 23:58,,,,0,,,,CC BY-SA 4.0 +16446,2,,16440,11/12/2019 0:58,,4,,"

Let's start with question 1: how does JS-divergence handle zeros?

+ +

by definition:
+\begin{align}
+D_{JS}(p||q) &= \frac{1}{2}\left[D_{KL}\left(p||\frac{p+q}{2}\right) + D_{KL}\left(q||\frac{p+q}{2}\right)\right] \\
+&= \frac{1}{2}\sum_{x\in\Omega} \left[p(x)\log\left(\frac{2 p(x)}{p(x)+q(x)}\right) + q(x)\log\left(\frac{2 q(x)}{p(x)+q(x)}\right)\right]
+\end{align}
+where $\Omega$ is the union of the domains of $p$ and $q$. Now let's assume one distribution is zero where the other is not (without loss of generality, due to symmetry, we can just say $p(x_i) = 0$ and $q(x_i) \neq 0$). We then get for that term in the sum
+$$\frac{1}{2}q(x_i)\log\left(\frac{2q(x_i)}{q(x_i)}\right) = q(x_i)\frac{\log(2)}{2},$$
+which isn't undefined, as it would be in the KL case.
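As a quick numerical illustration (the two toy distributions below are arbitrary; one of them has a zero entry):

import numpy as np

def kl(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # Terms with p == 0 contribute 0; p > 0 where q == 0 makes KL infinite.
    mask = p > 0
    with np.errstate(divide='ignore'):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.0, 0.5, 0.5]
q = [0.3, 0.3, 0.4]
print(kl(q, p))   # inf, because q puts mass where p is zero
print(js(p, q))   # finite (about 0.12 nats)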

+ +

Now onto question 2: in GANs, why does JS divergence produce better results than KL?

+ +

The asymmetry of the KL divergence gives an unfair advantage to one distribution over the other, which is not ideal from an optimization perspective. Additionally, the KL divergence's inability to handle non-overlapping distributions is crushing, given that these distributions are approximated through sampling schemes, so there are no guarantees of overlap. JS solves both of those issues and leads to a smoother manifold, which is why it is generally preferred. A good resource is this paper, where they investigate this in more detail.

+",25496,,,,,11/12/2019 0:58,,,,0,,,,CC BY-SA 4.0 +16448,1,16451,,11/12/2019 6:17,,13,9848,"

What does AI software look like? What is the major difference between AI software and other software?

+",31211,,16909,,11/12/2019 17:26,3/30/2021 7:55,"What does AI software look like, and how is it different from other software?",,4,0,,,,CC BY-SA 4.0 +16449,2,,16443,11/12/2019 6:27,,2,,"

Read up on Fully Convolutional Networks (FCN). There are a lot of papers on the subject; the first was ""Fully Convolutional Networks for Semantic Segmentation"" by Long.

+ +

The idea is quite close to what you describe - preserve spatial locality in the layers. In an FCN there is no fully connected layer. Instead, there is average pooling on top of the last low-resolution/high-channel layer. The effect is as if you had several fully connected layers centered on different locations, with the end result produced by weighted voting among them.

+ +

A pleasant side effect of FCNs is that they work on any spatial image size (bigger than the receptive field) - the image size is not coded into the network.

+",22745,,,,,11/12/2019 6:27,,,,3,,,,CC BY-SA 4.0 +16451,2,,16448,11/12/2019 8:46,,19,,"

Code in AI is not in principle different from any other computer code. After all, you encode algorithms in a way that computers can process them. Having said that, there are a few points where your typical "AI Code" might be different:

+
    +
  • A lot of (especially early) AI code was more research based and exploratory, so certain programming languages were favoured that were not mainstream for, say, business applications. For example, much work in early AI has been coded in Lisp, and probably not much in Fortran or Cobol, which were more suited to engineering or business. Special languages were developed to make it easy to program with symbols and logic (eg Prolog).

    +
  • +
  • The emphasis was more on algorithms than clever/complex programming. If you look at the source code for ELIZA (there are multiple implementations in many different languages), it's really very simple.

    +
  • +
  • Before the advent of neural networks and (statistical) machine learning, most AI programming was symbolic, so there hasn't been much emphasis on numerical computing. This changed as probabilities and fuzziness were increasingly used, but even if using general purpose languages there would be fewer numerical calculations.

    +
  • +
  • Self-modifying code is inherently complex; while eg Lisp made no difference between code and data (at least not in the same way as eg C or Pascal), this would just complicate development without much gain. Perhaps in the early days this was necessary when computers had precious little memory and power and you had to work around those constraints. But these days I don't think anybody would use such techniques anymore.

    +
  • +
  • As modern programming languages evolved, Lisp and Prolog (which were the dominant AI languages until probably 20 to 30 years ago) have been slowly replaced by eg Python; probably because it is easier to find programmers comfortable in an imperative paradigm rather than a functional one. In general, interpreted languages would be preferred over compiled ones due to speed of development, unless performance is important.

    +
  • +
+

The move to deep learning has of course shifted this a lot. Now the core processing is all numeric, so you would want languages that are better with calculations than symbol handling. Interpreted languages would now mainly make up the 'glue' code to interface between compiled modules, and be used for data pre-processing. So current AI code is probably not really that different from code used in scientific computing these days.

+

There is of course still a difference between R&D and production code. You might explore a subject using an interpreted language, and then re-code your algorithm for production in a compiled language to gain better performance. This depends on how established the area is; there will for example be ready-made libraries available for neural networks or genetic algorithms which are well-established algorithms (where performance matters).

+

In conclusion: I don't think AI code is any more complex than any other code. Of course, that's not very exciting to portray in a film, so artistic licence is used to make it more interesting. I guess self-modifying code also enables the machines to develop their own conscience and take over the world, which is even more gripping as a story element. However, given that a lot of behaviour is nowadays in the (training/model/configuration) data rather than the algorithm, this might even be more straight forward to modify.

+

Note: this is a fairly simplified summary based on my own experience of working in AI; other people's views might vary, without either being 'wrong'.

+

Update 2021: I now work at a company that extracts business information/events from news data on a large scale using NLP methods. And we're using Lisp... so it's still in active, commercial use in AI.

+",2193,,2193,,3/30/2021 7:55,3/30/2021 7:55,,,,0,,,,CC BY-SA 4.0 +16452,1,,,11/12/2019 9:25,,3,144,"

As per the law of unintended consequences, could it be that deepfakes will eventually have the opposite effect to what people currently seem to fear most. For example, once it is clear that anyone can be deepfaked to unlimited degrees of precision, wouldn't we have a situation where in regards to

+ +
    +
  1. pornography, revenge-porn: no one (including the person being viewed) will actually care anymore. E.g. if a movie star's account gets hacked and nude pictures are released to the public, it becomes a non-story because everone simply assumes it's one of thousands of other deepfakes that already exist.
  2. +
  3. fake-news, government propaganda: the general public will demand multiple sources, witnesses before believing the next crazy story. That, I assume, is a good thing as well.
  4. +
+",31214,,1671,,11/14/2019 20:46,11/14/2019 20:46,"Deepfakes as ""force for good""?",,0,5,,,,CC BY-SA 4.0 +16454,2,,13647,11/12/2019 13:25,,0,,"

You can use the following to view the Keras Inception-ResNet V2 network.

+ +
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
+from keras.layers import Input
+model = InceptionResNetV2(weights='imagenet', include_top=True)
+print(model.summary())
+
+ +

This will output the following (I'm showing only the last few layers):

+ +
__________________________________________________________________________________________________
+conv_7b_ac (Activation)         (None, 8, 8, 1536)   0           conv_7b_bn[0][0]                 
+__________________________________________________________________________________________________
+avg_pool (GlobalAveragePooling2 (None, 1536)         0           conv_7b_ac[0][0]                 
+__________________________________________________________________________________________________
+predictions (Dense)             (None, 1000)         1537000     avg_pool[0][0]                   
+==================================================================================================
+Total params: 55,873,736
+Trainable params: 55,813,192
+Non-trainable params: 60,544
+__________________________________________________________________________________________________
+None
+
+ +

If we look at the output of the 'avg_pool' layer from 'Top'. There will be 1536 features at the output.

+ +

You can make a model in this way:

+ +
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
+from keras.preprocessing import image  # needed for image.load_img / img_to_array
+from keras.models import Model         # needed to build the truncated model
+from keras.layers import Input
+import numpy as np
+
+def extract(image_path):
+    base_model = InceptionResNetV2(weights='imagenet', include_top=True)
+    # Truncate the network at the 'avg_pool' layer to get the 1536-dimensional features.
+    model = Model(inputs=base_model.input, outputs=base_model.get_layer('avg_pool').output)
+
+    img = image.load_img(image_path, target_size=(299, 299))
+    x = image.img_to_array(img)
+    x = np.expand_dims(x, axis=0)
+    x = preprocess_input(x)
+
+    # Get the features for this image.
+    features = model.predict(x)
+    features = features[0]
+    return features
+
+features = extract(image_path)  # image_path: path to the image file you want to encode
+
+ +

I couldn't try the code as, right now, I don't have an environment to test this code.

+",31221,,,,,11/12/2019 13:25,,,,0,,,,CC BY-SA 4.0 +16455,2,,16448,11/12/2019 13:31,,10,,"

Oliver Mason's answer is quite good, but I think it can be expanded upon a bit.

+ +

I think there are extra factors that could be popularly interpreted as making AI code difficult to read (as compared to other code):

+ +
    +
  1. AI code actually is more complex than most code that is written. When we work in AI, we often lose sight of this, but most code ever written does one of two things: turn data in one standard format into data in another standard format; display something to a user. Both of those are conceptually easy to understand. Neither of them is likely to require knowledge of mathematics. This is very unlike most code written in AI, where understanding what was written, and why, requires extensive knowledge beyond the knowledge needed to read and write computer programs. So, reading AI code requires more knowledge of mathematics or of complicated AI-focused Algorithms.
  2. +
  3. The ""programs written by AIs"" are really our models in the modern context. Our algorithms ""program"" a template model to make it work for a specific application. This is especially true if you think of it in the senses in which programming is also used in ""linear programming"", ""quadratic programming"", and even ""dynamic programming"". Our models really are hard to understand. Often even their creators cannot explain or characterize the model's behavior on specific inputs without running the model. The reason for this is that our models do not represent simple enough concepts that humans can easily understand or simplify them.
  4. +
  5. Self-modifying code is rare, but does exist within AI. However, as with other AI-generated models, AI-generated code tends to be comparatively difficult for humans to interpret, because (unlike most human-generated code), it is not written with the intention that humans are going to try to read and understand it. There actually are some efforts to generate code that conforms to human styles, but usually the code that is generated does not work well.
  6. +
+",16909,,16909,,11/12/2019 14:16,11/12/2019 14:16,,,,1,,,,CC BY-SA 4.0 +16456,1,,,11/12/2019 14:20,,2,777,"

I am trying to manually implement calculations of the image classification process using pre-trained weights from the MobilenetV2 network. I know how to apply filter weights to the channels, but not sure what to do with the coefficients from the batch normalization (BN) layers. The model uses BN after each convolution before ReLu6. As explained in many sources, BN has a lot of benefits during model training. The original Mobilenetv2 paper does say that they used BN during training, but nothing about using it during testing. The pre-trained MobilenetV2 model comes with BN layers which contain weights 4 x n_channels (I assume gamma, beta, mean, and std for each input featuremap in the BN layer). The following questions is:

+ +
    +
  1. How do I apply the four coefficients to a featuremap during inference? (This article explains it, but I still don't get it - aren't those imported coefficients already pre-calculated, so the operation on a featuremap is reduced to a multiply-add?)
  2. +
+ +

The original paper on BN in section 3.1 says:

+ +
+

... Since the means and variances are fixed during inference, + the normalization is simply a linear transform applied to + each activation. It may further be composed with the scaling by γ and shift by β, to yield a single linear transform + that replaces BN(x)...

+
+ +

Does this mean that during inference I would use only gamma and beta coefficients to ""scale and shift"" each pixel of a corresponding feature map? That is, something like:

+ +
for ch
+  for row
+    for col
+      out_feature[row][col][ch] = in_feature[row][col][ch] * BN[gamma][ch] + BN[beta][ch]
+
+ +

Could anyone, please, confirm and explain if this is correct and re-iterate what exactly is expected from the BN layer output in terms of value ranges (before ReLu6)?

+",31222,,31222,,11/12/2019 14:33,11/12/2019 14:33,How to properly use batch normalization during inference,,0,0,,,,CC BY-SA 4.0 +16457,1,,,11/12/2019 14:25,,3,247,"

One of the main disadvantages of the MC Policy Gradient algorithm (REINFORCE) as described say here is the fact that it has high variance (returns, which we sample, will significantly vary from episode to episode). Therefore it is perfectly reasonable to use a critic to reduce variance and this is what for example Deep Deterministic Policy Gradient (DDPG) does.

+ +

Now, let's assume that we're given an MDP with completely deterministic dynamics. In that case, if we start from a specific state and follow a certain deterministic policy we will always obtain the exact same return (therefore we have zero bias and zero variance). If we follow a certain stochastic policy the return will vary depending on how much we explore, but under an almost-deterministic policy our variance will be quite small. In any case, there's no contribution to the variance from the deterministic MDP dynamics.

+ +

In deep reinforcement learning for portfolio optimization, many researchers (Xiong at al. for example) use historical market data for model training. The resulting MDP dynamics is of course completely deterministic (if historical prices are used as states) and there's no real sequentiality involved. Consequently, all return variance stems from the stochasticity of the policy itself. However, most researchers still use DDPG as a variance reduction mechanism.

+ +

What's the point of using DDPG for variance reduction when the underlying MDP used for training has deterministic dynamics? Why not simply use the Reinforce algorithm?

+",26195,,,,,11/13/2019 12:42,Purpose of using actor-critic algorithms under deterministic MDP dynamics?,,1,0,,,,CC BY-SA 4.0 +16458,1,16464,,11/12/2019 14:35,,2,301,"

I want to create a solution, which clones my voice. I tried my commercial solutions or implementation of Tacotron. Unfortunately, results not sound natural, generated voice sounds like a robot. Anybody could recommend good alternative?

+",19476,,,,,11/12/2019 15:57,AI natural voice generator,,1,1,,,,CC BY-SA 4.0 +16459,1,,,11/12/2019 15:06,,1,195,"

My question regards performing keyword spotting for custom keywords and justifying the use of keyword spotting models instead of speech recognition.

+ +

I have been doing some searching around Keyword Spotting and I realized there is not so much work out there. Probably the most common dataset I have found people using is the Speech Commands Dataset. However, this dataset has only 30 keywords.

+ +

If I want keyword spotting for my own custom application, then to the best of my knowledge I need either a pre-trained model or my own data to train a model on. However, to the best of my knowledge, there is no model pre-trained on a dataset with a large enough set of keywords that is likely to cover a lot of applications. Correct me if I am wrong in this.

+ +

I have come to the conclusion that I need to train my own model and the only two ways I could train models on custom keywords is to get that data myself, either by crowdsourcing or by performing speech recognition on large datasets, picking up segments which include the words of interest and then doing some manual work to check if these segments truly include the keywords I want. Does someone think that this would be a good or bad idea and why?

+ +

Lastly, why would I even bother going the keyword detection route and not just use a speech recognition model that will recognize the words a human speaks and see if any of them match my keyword? Is the performance that much better with keyword detection?

+",13257,,,,,11/12/2019 15:06,Keyword spotting with custom keywords and why not use speech recognition instead,,0,0,,,,CC BY-SA 4.0 +16460,2,,10910,11/12/2019 15:10,,1,,"
    +
  • Initial state: initial position of the monkey.
  • +
  • Possible actions + +
      +
    • climb on the crate,
    • +
    • get down the crate,
    • +
    • move the crate from one spot to another,
    • +
    • stack one crate on another,
    • +
    • walk from one spot to another,
    • +
    • grab bananas (if standing on the crate)
    • +
  • +
  • Goal test: did the monkey get the bananas?
  • +
  • Cost function: the number of actions completed
  • +
+",31184,,2444,,11/12/2019 17:15,11/12/2019 17:15,,,,1,,,,CC BY-SA 4.0 +16463,1,,,11/12/2019 15:43,,1,131,"

Is it possible to do person detection and object detection within one model? The training data would be images annotated with bounding boxes for objects and people. Normally, object detection and person detection are done separately. Is there any research about models that simultaneously detect both people and objects?

+",28201,,,,,11/12/2019 16:27,Two Models vs One Model for Person Detection and Object Detection,,1,4,,,,CC BY-SA 4.0 +16464,2,,16458,11/12/2019 15:57,,2,,"

The reason for the robot-like speech may be that Tacotron uses Griffin-Lim as its vocoder, which cannot reproduce sound perfectly, often introducing robot-like sound artifacts.

+ +

+ +

A vocoder is a component that transforms a spectrogram back into a speech waveform. Tacotron and many other speech-generation neural networks use CNNs to generate spectrograms instead of raw waveforms as output. A spectrogram is a lossy representation of the raw audio waveform, so a perfect reconstruction of the audio waveform is not possible. Griffin-Lim is a vocoder that uses an algorithmic approach to transform a spectrogram into an audio waveform, but it often introduces a robot-like quality to the generated waveforms. A neural-network-based vocoder can solve this problem. The WaveNet vocoder is often used in speech generation, as it can transform the spectrogram to audio with few artifacts. Many new speech-generation models use the WaveNet vocoder as the default vocoder. For a public implementation, this is a good GitHub repository: https://github.com/r9y9/wavenet_vocoder

+ +

You can also use the newer Tacotron 2, which uses the WaveNet vocoder by default. You can check it out here: https://github.com/Rayhane-mamah/Tacotron-2

+",23713,,,,,11/12/2019 15:57,,,,2,,,,CC BY-SA 4.0 +16465,1,16470,,11/12/2019 15:57,,5,224,"

I have been reading the paper which introduced spectral normalization in GANs.

+ +

At some point the paper mentions the following:

+ +
+

The machine learning community has been pointing out recently that the + function space from which the discriminators are selected crucially + affects the performance of GANs. A number of works (Uehara et al., + 2016; Qi, 2017; Gulrajani et al., 2017) advocate the importance of + Lipschitz continuity in assuring the boundedness of statistics.

+
+ +

What does it mean that the Lipschitz continuity assures the boundedness of statistics and why does that happen?

+",13257,,2444,,11/12/2019 17:10,11/12/2019 21:16,Why does a Lipschitz continuous discriminator in GANs assure statistical boundedness?,,1,0,,,,CC BY-SA 4.0 +16466,2,,16441,11/12/2019 16:23,,1,,"

I don't think he said that at all. Going back to the talk, you'll see he mentions that mode collapse comes from the naivete of using alternating gradient-based optimization steps, because then $\min_{\phi}\max_{\theta}L(G_\phi, D_\theta)$ starts to look a lot like $\max_{\theta}\min_{\phi}L(G_\phi, D_\theta)$.

+ +

This is problematic because, in the latter case, the generator has an obvious minimum: transforming all generated output into a single mode that the discriminator has considered acceptable.

+ +

Since then, a lot of work has been done to deal with this point of failure. Examples include Unrolled GANs (he mentions this one in the talk), where you essentially make the generator optimize what the discriminator will think $K$ steps in the future to ensure the ordering of the $\min\max$ game, and Wasserstein GANs, where you focus on a different metric that still has the same global minimum but allows for side-by-side training, completely eliminating the ordering issue and this failure mode to begin with. On top of this, other work has been done as well; these are just two important examples.

+ +

Regarding how they fare against other generative models, like VAEs, neither is strictly better than the other. The recent empirical success of GANs is why they are so popularly used, but we still see other generative models being used in practice as well.

+",25496,,2444,,11/13/2019 14:45,11/13/2019 14:45,,,,0,,,,CC BY-SA 4.0 +16467,2,,16463,11/12/2019 16:27,,2,,"

Yes, you can detect people with bounding boxes using object detection. State-of-the-art object detection models have a person class among their detection classes, as shown here:

+

As you can see, the image has both object bounding boxes and people bounding boxes.

+",23713,,-1,,6/17/2020 9:57,11/12/2019 16:27,,,,0,,,,CC BY-SA 4.0 +16468,2,,2111,11/12/2019 18:33,,2,,"

A definition of life

+
+
    +
  1. The property or quality that distinguishes living organisms from dead organisms and inanimate matter, manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism.

    +
  2. +
  3. The characteristic state or condition of a living organism.

    +
  4. +
+
+

Here's another definition

+
+

The condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death.

+
+

Yet another definition.

+
+

Life is a characteristic that distinguishes physical entities that have biological processes, such as signaling and self-sustaining processes, from those that do not, either because such functions have ceased (they have died), or because they never had such functions and are classified as inanimate. Various forms of life exist, such as plants, animals, fungi, protists, archaea, and bacteria.

+
+

AIs (or, in general, computers) do not have a real metabolism, do not really reproduce, do not respond to stimuli or adapt to new circumstances (that is, circumstances they have not been programmed to deal with). AI does nothing without human intervention; it lacks real autonomy. In other words, if you do not turn the computer on and you do not program it, it really does nothing. A computer is a useful tool that you can use thanks to electricity. You can plug and unplug it indefinitely, but you cannot kill and revive a living being indefinitely.

+

Even though computers may possess (at least, conceptually) some properties similar to the properties of certain living beings, it does not mean they are living beings. Similarly, airplanes are not birds. Computers are not living beings, but this does not prevent you from drawing a comparison between computers and living beings, provided you are aware of their actual big differences. In fact, much useful AI software is inspired by the behavior of certain living beings or natural processes. For example, ant colony optimization algorithms are based on the behavior of real ants seeking a path between their colony and a source of food.

+

Here's what (biological) life looks like.

+

+",2444,,-1,,6/17/2020 9:57,11/13/2019 4:05,,,,1,,,,CC BY-SA 4.0 +16470,2,,16465,11/12/2019 21:16,,2,,"

To put it simply, GANs suffer from a problem of uneven learning rates. Imagine a pitcher and a hitter learning against each other: if the pitcher gets to a point where they can throw much better than the hitter can hit, the hitter may fall into a 'training pit' and become unable to ever learn how to hit from that pitcher.

+ +

This is a continuous relationship between the two learning rates: if the pitcher is improving at a much faster rate, they can become too good and make learning impossible for the hitter. So the faster learner must be 'slowed down' to ensure the pitcher doesn't ruin the hitter.

+ +

If the Lipschitz cone of either function is outrunning/outpacing the other, then the learning of the one in front must be slowed down so that the other catches up.

+ +

Two runners trying to push each other athletically is another example. If one outpaces the other, the one lagging behind may get injured while trying to keep pace. The same happens when the adversarial network becomes too good at generating training material that the lagging network is not yet ready to learn from.

+ +

The GAN will do best when the learning rates are adjusted to slow down the fast learner artificially.

+ +

The statistics will not be bounded correctly if the learning rates are not kept in check, similarly to how the step size needs to be right to find a local minimum or maximum. If the learning is not artificially adjusted so that both keep a relatively similar pace, getting stuck at local minima and maxima of the solution space will occur.

+",30365,,,,,11/12/2019 21:16,,,,2,,,,CC BY-SA 4.0 +16471,2,,14003,11/12/2019 22:05,,5,,"

Spectral Convolution +In a spectral graph convolution, we perform an eigendecomposition of the Laplacian matrix of the graph. This eigendecomposition helps us understand the underlying structure of the graph, with which we can identify clusters/sub-groups of the graph. This is done in the Fourier (spectral) domain. +An analogy is PCA, where we understand the spread of the data by performing an eigendecomposition of the feature covariance matrix. A notable difference between the two lies in the eigenvalues: in spectral graph analysis the smaller eigenvalues capture the smooth, global structure of the graph, whereas in PCA it is the larger eigenvalues that explain most of the variance. +ChebNet and GCN are commonly used deep learning architectures that use spectral convolution.
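To make the spectral picture concrete, here is a small NumPy sketch (a toy graph of my own, not from the references below) of the graph Laplacian and its eigendecomposition:

import numpy as np

# Toy undirected graph: two triangles joined by a single edge (made-up adjacency matrix)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # (unnormalized) graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # the graph's 'Fourier basis'
print(eigvals)                        # small eigenvalues <-> smooth, global structure
print(eigvecs[:, 1])                  # the Fiedler vector roughly separates the two triangles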

+ +

Spatial Convolution +Spatial convolution works on the local neighbourhood of nodes and learns the properties of a node from its k local neighbours. Unlike spectral convolution, which can be expensive to compute, spatial convolutions are simple and have produced state-of-the-art results on graph classification tasks. GraphSAGE is a good example of spatial convolution.

+ +

Additional References: https://towardsdatascience.com/tutorial-on-graph-neural-networks-for-computer-vision-and-beyond-part-2-be6d71d70f49

+ +

https://towardsdatascience.com/graph-convolutional-networks-for-geometric-deep-learning-1faf17dee008

+",30939,,,,,11/12/2019 22:05,,,,2,,,,CC BY-SA 4.0 +16472,2,,2111,11/12/2019 22:22,,1,,"
    +
  • My sense is that, yes, AI (and algorithms in general) constitute a form of ""life"" in that they are animate, able to respond to stimuli and act on an environment.
  • +
+ +

Algorithms may be deterministic (always produce the same output for identical input), and this is not much different from more elementary forms of life (like proteins.)

+ +

Computer viruses are another form of algorithmic life which typically have the capability of reproduction, copying themselves onto new systems/environments, similar to biological viruses. (Here it's a form of parthenogenesis or mitosis, where exact copies of the original are formed.)

+ +

Machine Learning algorithms can adapt to their environment (increasing fitness), and this applies to both genetic algorithms and other forms of machine learning that can optimize utility in relation to a problem (the environment in which the algorithm is applied.) Genetic process in particular will produce successive generations.

+ +

In his novel VALIS, PKD referred to an idea of god as a ""Vast Active Living Intelligence System"". This is relevant because it's a philosophical idea, as opposed to a scientific one. The drafting of that book involved an epiphany and the drafting of a corresponding exegesis.

+ +

This idea is controversial and would likely be rejected by the vast majority of scholars and AI ethicists, but I'd posit rejection of this notion constitutes a form of biological-prejudice, and, in the case of hypothetical future AGI, a form of anthropomorphic bias. (I'd go so far as to suggest that not regarding algorithms as a form of life carries grave risks, in that computing has made active algorithms pervasive, with profound impacts to human experience.)

+ +

That said, there are no current algorithms I am aware of that have sufficient sentience to warrant having rights, whether human or animal.

+ +
+ +

A note on the term ""animate"" (adjective): derives from the latin anima, which initially refers to wind and the ""breath of life"" [see also the Greek pneuma. However, the Latin lexicon references animus as ""the mind as the seat of thought"" and ""the rational soul of man"".

+ +

Algorithms can be rational in the strictest sense, and an entire branch of engineering is based on pneumatics. Although our algorithms use electrical signals, (as opposed to pneumatic computing or hydraulic computing,) what it comes down to is that intelligence requires process, and process requires energy.

+ +

Machines convert energy into motion or change, and so humans fit the definition. (William Burroughs referred to us as ""soft machines."") If animals can be thought of as machines, why can't machines be thought of as animals?

+ +

DNA is a type of encoding, and RNA acts on that code.

+ +

The key distinction seems to be the medium in which process exist, where a biological context is used for what we conventionally think of as life. Algorithms merely utilize different mediums (mechanical & electrical) and may exist in different environments such as the digital.

+ +
+ +

A note on The Soft Machine: In this trilogy, Burroughs describes an elaborate alien reproductive process involving numerous alien species acting as surrogates at various stages in the cycle. (It is a description of a 3-dimensional representation of a higher dimensional process.)

+ +

This has precedent in biology where, for instance, the lifecycle of a seed may involve a period inside the digestive system of an animal. Pollination is another example, where a surrogate species plays a critical role. Viruses require host organisms to reproduce and spread.

+ +

It's not out of bounds to regard the lifecycle of current active algorithms as involving humans to fill in the capability gaps. Here humans are surrogates in bringing information ""to life"".

+",1671,,1671,,11/13/2019 3:19,11/13/2019 3:19,,,,3,,,,CC BY-SA 4.0 +16473,1,,,11/13/2019 0:10,,2,54,"

This is a follow-up question from my previous post here about explosion detection. I gathered a dataset of explosions. As I'm new to Deep Learning in Keras, I'm trying to see what architecture best suits this problem, given that here we have a cloud of smoke/fire as opposed to an object. Any suggestions?

+ +

For instance, I've learned about Faster RCNN or RetinaNet, but that is mostly for object detection. Is it going to be better than say a basic ResNet50? And here real-time prediction requirements are not an issue. So shall I assume a heavier model (e.g. NASNet Large or a Resnet-152 model) is better than a basic ResNet-50 model?

+",9053,,9053,,11/13/2019 17:52,11/13/2019 17:52,Which deep neural networks are appropriate for the detection of bombs?,,0,3,,,,CC BY-SA 4.0 +16476,2,,2917,11/13/2019 5:34,,0,,"

It might be of note to comment/update that the SuperGLUE benchmark, which is a suite of common sense reasoning tasks, incorporates the aforementioned Winograd Schema Challenge, among other tests that are said to be reflective of natural language understanding (as opposed to simply its processing or the optimal statistical generation of language). The most recent result by Google (T5) has reached parity with the Human Baseline in terms of the average performance across all tests.

+ +

The leaderboard is available below, and the site includes subscores for each of the challenges:

+ +

[1] https://super.gluebenchmark.com/leaderboard

+",16803,,,,,11/13/2019 5:34,,,,0,,,,CC BY-SA 4.0 +16478,1,,,11/13/2019 6:20,,2,115,"

In recent years, we have seen quite a lot of impressive displays of Deep Neural Networks (DNNs), as demonstrated most famously by AlphaGo and its cousin programs.

+

But if I understand correctly, a deep neural network is just a normal neural network with a lot of layers. We have known the principles of neural networks since the 1970s (?), and a deep neural network is just the generalization of a one-layer neural network to many layers.

+

From here, it doesn't seem like the recent explosion of DNN has anything to do with a theoretical breakthrough, such as some new revolutionary learning algorithms or particular topologies that have been theoretically proven effective. It seems like DNN successes can be entirely (or mostly) attributed to better hardware and more data, and not to any new theoretical insights or better algorithms.

+

I would go even as far as saying that there are no new theoretical insights/algorithms that contribute significantly to the DNN's recent successes; that the most important (if not all) theoretical underpinnings of DNNs were done in the 1970s or prior.

+

Am I right on this? How much weight (if any) do theoretical advancements have in contributing to the recent successes of DNNs?

+",16291,,2444,,1/20/2021 23:24,1/20/2021 23:24,Is there anything theoretically revolutionary about Deep Neural Networks?,,1,9,,,,CC BY-SA 4.0 +16479,1,16532,,11/13/2019 6:26,,2,136,"

In the field of adversarial machine learning, machine learning models are vulnerable to attacks both on the test and training data set. However, how does the attacker get access to these datasets? How do these datasets get manipulated/tampered with?

+",31240,,,,,11/15/2019 3:38,"In adversarial machine learning, how does an attacker have access to the test and training dataset in order to poison it?",,3,1,,,,CC BY-SA 4.0 +16481,1,16483,,11/13/2019 8:03,,4,122,"

A neural network can apparently be denoted as $N_{t,n,\sigma,L}$. What do these subscripts $t, n, \sigma$ and $L$ mean? Could you link me to a paper, article or webpage with an explanation for this?

+",31125,,2444,,11/13/2019 12:36,11/13/2019 12:36,"What do the subscripts mean in $N_{t,n,\sigma,L}$?",,1,0,,,,CC BY-SA 4.0 +16482,2,,16479,11/13/2019 8:32,,1,,"

They don't have access to the original training or test dataset. Machine learning systems are built on the premise of a benign environment: the models are trained on real data (real inputs). When someone sends a carefully crafted, made-up input (a fake input), it can be very easy to fool the model.

+ +

This is used, for example, in image recognition. Imagine a photograph of a panda. The model may correctly identify this photograph as a panda. With knowledge of the model, you can now alter some pixels in the photograph. To the human eye, the photographs will appear exactly the same, but the model can be fooled into believing the photograph is actually of a gibbon.

+ +

This is all done after the training of the model and doesn't require the original datasets.
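As a minimal illustration of the idea (not the actual panda/gibbon attack, which targets a deep image model), here is a sketch of an FGSM-style perturbation against a simple scikit-learn classifier, where the gradient with respect to the input can be written down analytically; the dataset and epsilon are chosen only for the example:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Train a simple stand-in for the model being attacked
X, y = load_digits(n_class=2, return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                        # a correctly classified input
w, b = clf.coef_[0], clf.intercept_[0]

# Gradient of the logistic loss w.r.t. the input pixels (analytic for this model)
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad = (p - y[0]) * w

# FGSM-style step: move each pixel slightly in the direction that increases the loss
eps = 2.0
x_adv = x + eps * np.sign(grad)

print(clf.predict(x.reshape(1, -1)), clf.predict(x_adv.reshape(1, -1)))  # the prediction may flip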

+ +

For more info, visit this site: +https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8

+",29671,,,,,11/13/2019 8:32,,,,0,,,,CC BY-SA 4.0 +16483,2,,16481,11/13/2019 8:34,,4,,"

Here is a paper with the mathematical definition of each term:

+ +
+

Let $N_{t,n,\sigma,L}$ be all target functions that can be implemented using a neural network of depth $t$, size $n$, activation function $\sigma$, and when we restrict the input weights of each neuron to be $|w|_1 + |b| \le L$.

+
+",22301,,,,,11/13/2019 8:34,,,,2,,,,CC BY-SA 4.0 +16484,2,,16479,11/13/2019 8:50,,0,,"

In adversarial machine learning, someone (a program or a human) attempts to fool an existing model with a malicious input.

+ +

The best human example would be an optical illusion. The human brain's model for image processing starts outputting wrong information when looking at an optical illusion. So in the end we see wrong colour, shape, etc. In this case, the optical illusion would be considered as the malicious input.

+ +

We can trick the human brain’s model through images created with trial and error.

+ +

So, if you just have the trained model at hand, you don’t have to know the data it has been trained with. You just need to be able to input a value to the model and get the output.

+",31252,,,,,11/13/2019 8:50,,,,0,,,,CC BY-SA 4.0 +16485,2,,2111,11/13/2019 10:18,,1,,"

Mind the hardware:

+ +

While there are different definitions of what life (used synonymously with 'organism' here; source: Wikipedia: Life) is, e.g.

+ +
+

All types of organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli. (Source: Wikipedia: Organism)

+
+ +

they all have one thing in common: +they require physical matter! +In contrast, to ask whether AI is alive is comparable to asking whether the human mind is alive. It is per definition not! Therefore, the question needs to be extended to include the hardware. It is rather 'are machines/computers alive' or 'do machines/computers have the potential to be considered alive'?

+ +

We are talking about agents and most likely robots:

+ +

And more specifically, any machine/computer to be potentially considered alive will most likely need to be an agent, as it needs to interact with its environment (see Wikipedia: Intelligent agent for a description of agents in Computer Science).

+ +

Also, any potentially intelligent machine/computer needs most likely to be a robot due to the strong emphasis on physical processes, incl. some kind of exchange with the physical environment (perception or manipulation of it), which our common definitions of life carry.

+ +

Some requirements for life are easy to fulfill while others are not:

+ +

Based on the requirement for life forms to maintain and reproduce themselves, any machine/computer to be considered alive will need to be able to physically maintain and reproduce itself, i.e. assemble hardware. If you think of an intelligent robot assembling another robot, that might sound very far away from reality. However, the definition might include indirect reproduction, e.g. using an automated hardware production facility. Certainly this is not the direct reproduction that we know from current living beings, but it might be considered an indirect way to reproduce. Which, however, is certainly far away from current reality too.

+ +

Similarly you could think of 'maintain' as taking care of the physical need to supply itself with electricity. Any machine with a solar panel easily fulfills this requirement in a similar way plants do.

+ +

While machines/computers considered alive do not need to have any artificial intelligence it is an easy way to fulfill the requirement of 'development': +Sub-symbolic AI learns from data which is a form of (non-physical, i.e. software-related) development. Just like humans and other animals learn from data that comes in through one of their senses.

+ +

Give it time:

+ +

To summarize: current machines/computers certainly do not fulfill the requirements usually associated with 'life'. In particular, the requirement to (physically) maintain themselves and reproduce will remain unfulfilled for a long time. However, considering that Homo sapiens has been on this planet for about 150,000 years, we might just need to give it more time. It took about 1 billion years for the first living beings to develop on planet Earth (see Wikipedia: Earliest known life forms). So it is a bit early to make a call on machines/computers, which in the case of computers have been around for not even a century. Let's see where we stand in 1,000, 1,000,000 or 10,000,000 years from now.

+ +

However, the definitions might change anyway:

+ +

Moreover, it is important to note that the definition of life is closely built on what we know as current carbon-based life forms. And it could very well be adjusted in the light of machines/computer further developing. For example the aspects of physical reproduction might be one aspect to be dropped (just speculating here). So maybe we do not even need to wait a billion years but might have machines/computers considered 'alive' already in 200 or 500 years. Compared to the time biological life took to develop that would still be very rapid.

+",30789,,,,,11/13/2019 10:18,,,,3,,,,CC BY-SA 4.0 +16486,2,,16478,11/13/2019 10:50,,1,,"

The first neural network machine was the stochastic neural analog reinforcement calculator (SNARC), built in the 1950s. As you can see, it's pretty old. After that, there were several advances regarding backpropagation and the vanishing gradient problem. However, the ideas themselves are not novel. Simply put, we have the data and processing power today that we did not have back then.

+ +

You could look at the Wikipedia timeline.

+",31252,,2444,,11/13/2019 12:46,11/13/2019 12:46,,,,0,,,,CC BY-SA 4.0 +16487,1,16491,,11/13/2019 11:17,,5,1311,"

I have two trained models. One is using a LinearSVC algorithm and is trained on numerical data from medical examination from patients with diabetic retinopathy. The second one is a neural network trained on images of retina scans from patients with the same disease.

+ +

The models predict if the patient has retinopathy or not. Both are written using Python 3.6 and Keras and have accuracy around 0.84.

+ +

Is it possible to combine those two models in any way to increase the accuracy of predictions?

+ +

I'm not sure in what way it could be achievable, as they are using different kinds of data. I have tried using ensembling methods but didn't get better scores with them.

+",31256,,2444,,11/13/2019 12:39,11/14/2019 6:29,How do I combine models trained on different data to increase classification accuracy?,,1,2,,,,CC BY-SA 4.0 +16489,2,,16457,11/13/2019 12:18,,1,,"
+

In deep reinforcement learning for portfolio optimization, many researchers (Xiong at al. for example) use historical market data for model training. The resulting MDP dynamics is of course completely deterministic (if historical prices are used as states) and there's no real sequentiality involved.

+
+ +

Whilst I cannot comment on the specific financial model, I think it unlikely that these researchers would apply RL without there being a sequence.

+ +

More likely in my opinion, the historic data feed forms a major part of the environment, but that there are still time steps and a state which depends on an agent's actions. For instance, in a trading simulation, provided the values of trades are below levels that would significantly alter the market itself, it may be a reasonable approximation to use the history of prices and other factual information that progress like a recording, plus have state include the agent's current portfolio of investment and working funds.

+ +
+

What's the point of using DDPG for variance reduction when the underlying MDP used for training has deterministic dynamics? Why not simply use the Reinforce algorithm?

+
+ +

Variance in returns occurs due to stochastic dynamics (if they are present) and the behaviour policy. You cannot use any RL control algorithm with a deterministic behaviour policy*. It would never gain any data that allowed it to assess alternative behaviour.

+ +

So in REINFORCE, which is on-policy (the behaviour policy and target policy are the same), and typically starts with near equiprobable action choices, there is high variance. It could be very high when measured over a long episode with many action choices. In basic REINFORCE, the variance is not controlled for, and training uses individual Monte Carlo style returns.

+ +

In DDPG, which is off-policy (the target policy is deterministic**, the behaviour policy is stochastic), there is still variance, but it is much reduced with the actor-critic mechanism, plus can be constrained by choice of noise function that relates the behaviour policy to the target policy. In addition, updates to policy and value functions can be made independently of episode ends, which can significantly speed up learning.

+ +

To determine what difference this makes for any experiment, you would need to compare the two algorithms on the same task. In practice DDPG will significantly out-perform REINFORCE on many tasks, including those with deterministic environments. However, there might be specific combinations where the simplicity of REINFORCE wins out, if only because there are fewer hyperparameters to tune.

+ +

On one point:

+ +
+

under an almost-deterministic policy our variance will be quite small

+
+ +

That's true, but how do you get to that stage of training with REINFORCE? It is by testing and working through more stochastic policies, which is what will take time. Your statement only applies to REINFORCE when the control problem is nearly complete, or if you take a short-cut and force the policy function into what you hope is a near-optimal policy. In which case you are engaging in a form of variance reduction - it may even work for some scenarios, but is likely not as general as applying an Actor-Critic algorithm.

+ +
+ +

* Actually technically you can if the environment is stochastic in the right way so that you effectively explore all state/action combinations. But we are talking about deterministic environments here, and obviously the stochastic environment would introduce variance in returns.

+ +

** The target policy changes over time, and this introduces non-stationarity and bias for the critic component to deal with, but not technically variance.

+",1847,,1847,,11/13/2019 12:42,11/13/2019 12:42,,,,6,,,,CC BY-SA 4.0 +16490,1,16499,,11/13/2019 12:40,,5,228,"

Deep fakes are a growing concern: the ability to credibly alter a video may have great (negative) impacts on our society. It is so much of a concern, that the biggest tech companies launched a specific challenge: https://deepfakedetectionchallenge.ai/.

+ +

However, from what I understand, most deep fake generation techniques rely on the use of adversarial models. One model generates a new image, while another model tries to detect if the image is doctored or not. Both models ""learn"" from being confronted with the other.

+ +

That being said, if a good deep fake detection model emerges (from the previous challenge, or not), wouldn't it be rendered useless almost instantly by learning from it in an adversarial setting?

+",22654,,2444,,11/13/2019 12:49,11/15/2022 19:02,Isn't deep fake detection bound to fail?,,1,0,,,,CC BY-SA 4.0 +16491,2,,16487,11/13/2019 12:48,,2,,"

You can try using a multi-input model. Here is a recent post with a similar discussion, with the required architecture defined in the answer.

+ +

Instead of combining the separate models, you can create a single model which uses the image and numerical data side by side. Keras allows you to feed in different types of data through a multi-input structure via the functional API, and then combine the branches into a single machine learning model. The basic idea is like this:

+ +

+ +

This image was taken from here, where you can find further details and the code implementation as well. Actually, the discussion in the introduction of that post is almost the same as your question: What to do when the inputs are:

+ +
+
    +
  • Numeric/continuous values, such as age, heart rate, blood pressure
  • +
  • Categorical values, including gender and ethnicity
  • +
  • Image data, such as any MRI, X-ray, etc.
  • +
+
+ +

Also, section 5.1 of this blogpost details the same process.
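To make this concrete, here is a minimal sketch of such a multi-input model with the Keras functional API (the input shapes, layer sizes and the count of 10 numerical features are placeholders, not taken from your data):

from tensorflow import keras
from tensorflow.keras import layers

# Branch 1: CNN over the retina scans (image size is an assumption)
img_in = keras.Input(shape=(128, 128, 3), name='image')
x = layers.Conv2D(32, 3, activation='relu')(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)

# Branch 2: MLP over the numerical examination data (10 features assumed)
num_in = keras.Input(shape=(10,), name='numeric')
y = layers.Dense(32, activation='relu')(num_in)

# Combine both branches and predict retinopathy (binary)
combined = layers.concatenate([x, y])
z = layers.Dense(32, activation='relu')(combined)
out = layers.Dense(1, activation='sigmoid')(z)

model = keras.Model(inputs=[img_in, num_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit({'image': image_array, 'numeric': tabular_array}, labels, ...)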

+",22301,,22301,,11/14/2019 6:29,11/14/2019 6:29,,,,3,,,,CC BY-SA 4.0 +16492,1,,,11/13/2019 12:52,,0,73,"

I was discussing with a friend whether current AI does anything remotely similar to 'thinking' and he argued that AIs that play games must think up strategies.

+ +

While thinking may not be precisely defined, my understanding of algorithms like OpenAI's was that they just minimize a very non-convex objective, but still play the game based on examples, and not by coming up with intentional strategies. Is my understanding incorrect?

+",12996,,2444,,11/13/2019 12:54,11/13/2019 18:56,"Do algorithms like OpenAI's ""think up strategies""?",,1,0,,,,CC BY-SA 4.0 +16493,1,16494,,11/13/2019 13:12,,2,120,"

In the case of single-shot detection of point clouds, that is, when the point cloud of an object is taken from only one camera view without any registration, can a convolutional network estimate the 6D pose of objects (initially primitive 3D objects -- cylinders, spheres, cuboids)?

+

The dataset will be generated by simulating a depth sensor using a physics engine (ex:gazebo) and primitive 3D objects are spawned with known 6d pose as ground truth. The resulting training data will be the single viewed point cloud of the object with the ground truth label (6d pose)?

+",31259,,-1,,6/17/2020 9:57,11/13/2019 20:02,Pose estimation using CNNs on Point clouds,,1,0,,,,CC BY-SA 4.0 +16494,2,,16493,11/13/2019 18:19,,0,,"

The answer is yes, this is possible, and here are papers where they do almost exactly the same project you are describing above, although none of the below combine Gazebo, single-shot point clouds, 6D pose and CNNs in order to use synthetic data to train a model that works on real data.

+ + + +

The model will be trainable, but how effectively a model trained on the synthetic data will function on real data will be the challenge.

+",30365,,,,,11/13/2019 18:19,,,,2,,,,CC BY-SA 4.0 +16495,2,,16492,11/13/2019 18:40,,1,,"

The key here is think up strategies. If we define strategizing as examining, creating a hypothesis, and testing it, then yes, AI has the ability to strategize. It can examine other players' games, quantify which actions correlate with victory, and then test whether it gains victory by taking those actions.

+ +

Strategy by definition is: a plan of action or policy designed to achieve a major or overall aim.

+ +

AI cannot classically plan a series of actions designed to achieve a major victory. Instead, it learns the right strategy by testing simulated scenarios, like someone who thinks about consequences before acting, except the AI actually has the opportunity to play the game hundreds of times in order to learn the correct strategy, similar to Bill Murray in the movie Groundhog Day learning the ideal day to live. The AI can strategize by experiencing the game over and over until it fine-tunes what an ideal game should be and has seen enough example games not to be outwitted by gimmicky strategies.

+ +

To summarize, AI can strategize, just in a way fairly different than people.

+",30365,,2444,,11/13/2019 18:56,11/13/2019 18:56,,,,2,,,,CC BY-SA 4.0 +16496,1,,,11/13/2019 19:35,,5,379,"

+ +

These images are handmade, not auto-generated like they will be in production. Apologies for inaccuracies in the graph overlay.

+ +

I am trying to build an AI like that displayed in the diagram: when given a training set of images with their corresponding node maps of face/nose posture, and an image with a missing section (just a gap) with a node map, I would like it to reconstruct the initial image. My thoughts immediately went to GANs for this, but after some searching, the closest I could find were:

+ +
    +
  • Face recreation without context/not filling gaps, just following pose (DeepFake)
  • +
  • Filling gaps in images, but with no node reference
  • +
  • Filling gaps from reference drawings/mappings, but with no way to provide sample images
  • +
+ +

I would like to hear about any implementations of such an algorithm, if possible optimised for faces, and if none exists, I would like to hear of how I would go about altering the generator of the GAN to work with the context/gap-fill bit (e.g a paper which talks about this idea, but doesn't implement it). Any guidance on the NN that is best for this type of task is also appreciated.

+",27018,,27018,,11/20/2019 7:23,11/20/2019 7:23,Context-based gap-fill face posture-mapper GAN,,1,2,,,,CC BY-SA 4.0 +16499,2,,16490,11/13/2019 20:05,,5,,"

Not necessarily; it depends on the shape of the problem space for each of the two networks.

+

A real-world example: a batter's reaction time and a pitcher's maximum speed are bounded values set by genetics and physics. If a pitcher can throw faster than any human can react to and still hit effectively, they will permanently have the upper hand, because of that hard threshold on reaction time.

+

We don't yet know if a maximum threshold on realistic fake image generation exists or if a threshold on detection exists.

+

As both reach near-perfect accuracy, it could be that the number of neurons needed to distinguish a nearly perfect generated image from a real one exceeds the number of atoms in the universe; or, conversely, the number of neurons needed to generate a nearly perfect image could reach impossible proportions. We won't know until we continue to build better and better networks that close in on the boundary between generation and detection of real vs. fake images.

+

Edit: +Let's imagine this problem. One adversary draws a colored line of pixels into an image, with the goal of hiding the line by further editing the image; the student is responsible for finding the line after the adversary changes the image. The problem can become impossible: just change all pixels to the color of the line. The line then cannot be found and the adversary always wins, provided this solution lies in its reachable problem space given its hardware capabilities and its learning model, which we should assume it does because it's such a simple task.

+

Deep fake detection is not bound to fail, because the effectiveness of a generative model may hit a steeper limit than that of a discriminator near optimal performance. I have not seen any paper about this specifically, and in fact I believe the discriminator has the more difficult job in most cases; I just disagree with deciding at this moment that the detectors are doomed. Creating realistic images, motion and sound perfectly in sync is not a trivial problem, and in some scenarios it is basically impossible.

+",30365,,30365,,11/15/2022 19:02,11/15/2022 19:02,,,,3,,,,CC BY-SA 4.0 +16500,2,,15616,11/13/2019 21:43,,0,,"

Depending on the number of features you have, you might want to try reducing them in order to reduce overfitting and speed up your model. I assume you are using proper regularization as well. PCA may be an option if time is the main issue (see the linked Medium article on PCA). As that article states, PCA can be used to reduce the dimensionality of the data while retaining most of the variance. If you run a random forest on the different sets and the feature importance ranking is at all different, that is a big issue and the number of features should be reduced. Ideally, the features should be pruned until the sets share the same structure/feature-importance ranking, with only slightly different weights associated with those features. Here is an article demonstrating feature selection.
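A minimal sketch of that PCA step with scikit-learn (the generated matrix is only a stand-in for your real feature matrix):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for your feature matrix: 60 correlated features driven by ~10 latent factors
X_train = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 60)) + 0.1 * rng.normal(size=(500, 60))

# Keep the smallest number of components that still explains 95% of the variance
X_scaled = StandardScaler().fit_transform(X_train)
pca = PCA(n_components=0.95).fit(X_scaled)
X_reduced = pca.transform(X_scaled)
print(X_train.shape, '->', X_reduced.shape)   # far fewer columns, most variance kept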

+

Here are some publications I believe are relevant to your problem, more in line with your desire to validate after one sweep. You may also want to look up applications of LSTMs or YOLO to your problem; I have a feeling these technologies will help direct your final decisions on what to do.

+

Subspace regularization method for the single-trial estimation of evoked potentials (1999) by P.A. Karjalainen, J.P. Kaipio, A.S. Koistinen, M. Vauhkonen

+

On Quadratic Penalties in Elastic Weight Consolidation (2017) by Ferenc Huszár

+

Parameter space exploration with Gaussian process trees (2014) by Robert B. Gramacy University of California, Santa Cruz, CA +Herbert K. H. Lee University of California, Santa Cruz, CA +William G. Macready

+

Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks (2017) by Nils Reimers, Iryna Gurevych

+

Evaluating Hospital Case Cost Prediction Models Using Azure Machine Learning Studio (2018) by Alexei Botchkarev -evaluation of regression performance

+",30365,,36737,,3/31/2021 22:20,3/31/2021 22:20,,,,0,,,,CC BY-SA 4.0 +16501,2,,16448,11/13/2019 22:22,,7,,"

This may be a much simpler explanation than you're looking for, but in Machine Learning Zero to Hero, Google engineer Laurence Moroney summarized it in a way that I thought was brilliant. Paraphrasing from a presentation slide:

+ +
+

In traditional programming, you input rules and data and the program outputs answers. In machine learning, you input data and answers and the program outputs rules.

+
+ +

There's an algebra-like symmetry to this. And the program doesn't even know what it's coming up with rules for. It just randomly evolves the rules until the data produces the correct answers. You can then take those rules, apply them to different data, and hopefully get correct answers.
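A minimal sketch in Keras, in the spirit of the example used in that series (the hidden rule here is y = 2x - 1, and the program is only shown the data and the answers):

import numpy as np
from tensorflow import keras

# Data and answers: pairs generated by the hidden rule y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

# The program is never told the rule; it evolves one from the examples
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]])))   # close to 19, the answer the learned rule gives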

+",27054,,,,,11/13/2019 22:22,,,,0,,,,CC BY-SA 4.0 +16502,1,16504,,11/13/2019 22:33,,3,1429,"

I've been researching Adversarial Machine Learning and I know that causative attacks are when an attacker manipulates training data. An exploratory attack is when the attacker wants to find out about the machine learning model. However, there is not a lot of information on how an attacker can manipulate only training data and not the test data set.

+ +

I have read about scenarios where the attacker performs an exploratory attack to find out about the ML model and then perform malicious input in order to tamper with the training data so that the model gives the wrong output. However shouldn't such input manipulation affect both the test and training data set? How does such tampering only affect the training data set and not the test data set?

+",31240,,2444,,11/14/2019 14:15,11/14/2019 14:15,What are causative and exploratory attacks in Adversarial Machine Learning?,,1,0,,,,CC BY-SA 4.0 +16503,2,,16448,11/13/2019 22:53,,2,,"

AI has been redefined recently to machine learning.

+ +

All programming except machine learning (and we'll come back to this) is embodying human knowledge in terms a computer can follow.

+ +

E.g. a text editor has user interface rules, user expectations, and a contract with the OS that it has to follow. A programmer puts it all together. This applies to text editors, expert medical systems, banking software, and accounting software (and the programmer needs to know accounting to program it).

+ +

Machine learning is training software with data and outputs allowing it to determine the link between them. No human knowledge. Nor can it explain what it is doing.

+ +

Of course, they actually work far better when human knowledge surrounds them as part of their data. An AI that routes incoming invoices, etc., works better when told where things should actually go (accounts payable).

+",31272,,,,,11/13/2019 22:53,,,,1,,,,CC BY-SA 4.0 +16504,2,,16502,11/13/2019 23:07,,5,,"

When someone is able to carry out a causative attack, it means there is a mechanism by which they are able to feed data into the network. Think of a website where people can upload their images, it outputs a guess at what is in the picture, and then they click whether it got it right or not. If you continue to upload images and lie to it, the model will obviously get worse and worse if the user input is added to the training set. Most people are careful and don't mess around with mixing such new data into the testing sample. If they did something like mixing the user input into both training and test data and then resampling, the test set could be poisoned as well, but most people don't do that; it's bad practice and even worse than leaving your network open to tampering from malicious user input. Information isn't really added to the knowledge in the model until it is fed into the model and backpropagation occurs.
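As a toy illustration of this kind of causative attack (label flipping is just one concrete form of poisoning, and the dataset here is synthetic), compare a model trained on clean labels with one trained on labels an attacker has partially flipped; the held-out test set itself is never touched:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Model trained on clean labels
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Causative attack: the attacker flips the labels of a quarter of the training data
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 4, replace=False)
poisoned[idx] = 1 - poisoned[idx]
attacked = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print('clean accuracy:   ', clean.score(X_te, y_te))
print('poisoned accuracy:', attacked.score(X_te, y_te))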

+ +

An exploratory attack is sending tons of queries to the model to gain information about the data set it was built on, even to the point of extracting information about individual pieces of data that went into the model. Then, with this information, the attacker could try to reconstruct the data set, or attempt to trick the network by sending strange generated inputs.

+ +

In the paper Adversarial Machine Learning (2011), by Ling Huang et al., in section 2, the authors define these terms, under the category influence.

+ +
+

Influence

+ +

Causative - Causative attacks alter the training process through influence over the training data.

+ +

Exploratory - Exploratory attacks do not alter the training process but use other techniques, such as probing the detector, to discover information + about it or its training data.

+
+ +

They also provide other related definitions.

+ +
+

Security violation

+ +

Integrity - Integrity attacks result in intrusion points + being classified as normal (false negatives).

+ +

Availability - Availability attacks cause so many classification errors, both false negatives and false positives, that the system becomes effectively unusable.

+ +

Privacy - In a privacy violation, the adversary obtains information from the learner, compromising the secrecy or privacy of the system’s users.

+ +

Specificity (a continuous spectrum)

+ +

Targeted - In a targeted attack, the focus is on a single or small set of target points.

+ +

Indiscriminate - An indiscriminate adversary has a more flexible goal that involves a very general class of points, such as “any false negative.”

+
+",30365,,2444,,11/14/2019 14:14,11/14/2019 14:14,,,,0,,,,CC BY-SA 4.0 +16505,1,17182,,11/14/2019 0:01,,1,71,"

I am looking to train a bipedal robot using unity as a scape with a genetic algorithm. I will import the CAD into unity so the hardware is exact. My questions:

+ +
    +
  1. Is Unity physics accurate enough to train a neural network that will perform in the real world?

  2. +
  3. Should I optimize the network using reinforcement learning in the real world (after trained in scape)?

  4. +
  5. I am looking to use air muscles for my build. If the physics aren’t exactly right in unity (elasticity, max length, torque) will the bot still perform in the real world?

  6. +
  7. Are there any other programs that would be better than unity to train a robot inside a scape?

  8. +
  9. Any other approaches or new ideas on how to train the bot more efficiently would be greatly appreciated.

  10. +
+",4744,,,,,12/20/2019 9:33,Training methods for bipedal robot,,1,0,,12/19/2021 14:32,,CC BY-SA 4.0 +16506,1,16508,,11/14/2019 1:06,,3,1556,"

I am wondering how to calculate the size of a 3D object in an image without knowing the focal length of the camera, but knowing the distance from the camera to the object.

+",31273,,2444,,11/14/2019 14:09,11/14/2019 14:09,How to calculate the size of a 3d object from an image?,,1,0,,,,CC BY-SA 4.0 +16507,1,,,11/14/2019 1:40,,2,119,"

I am working on an assignment problem.

+ +

Consider $K$ agents $A_1, \dots A_K$ and $N$ tasks $T_1, \dots T_N$. Each task has a certain time $t(T_i)$ to be completed and each agent has a matching (or affinity) value associated with each task $M_{A_j}(T_i), \forall i, j$. The goal is to assign agents to tasks, such that the matching value is maximized and the overall time to complete the tasks is minimized. Moreover, an agent can be assigned to multiple tasks. However, an agent cannot start a new task before finishing the previous one. I want to solve it with a GA + MOA* algorithm. What would be an admissible heuristic function?

+",31274,,31274,,12/10/2019 1:18,12/10/2019 1:18,How can I assign agents to tasks based on time and affinity?,,0,2,,,,CC BY-SA 4.0 +16508,2,,16506,11/14/2019 4:35,,2,,"

There are hundreds of papers on this task, some older than I am! Normally this is done by trying to fit a box shape around the object and then estimating the volume. The task is typically done with multiple images, so that together they give a clearer picture of the size of the object than one image alone: an object could be 'infinitely' large, with most of its mass hidden behind the surface you can see in the picture. With the height and width dimensions extracted from the image and the distance from the camera, calculating the size of the visible surface is fairly easy.
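As a rough sketch of that last step (assuming the camera's horizontal field of view is known, which stands in for the missing focal length, and that the object is roughly fronto-parallel; all numbers below are placeholders):

import math

def object_width(distance_m, bbox_px, image_width_px, horizontal_fov_deg):
    # Width of the scene visible at that distance (similar triangles)
    scene_width_m = 2 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2)
    # The object occupies a proportional share of the image width
    return scene_width_m * bbox_px / image_width_px

# e.g. a 400-px-wide object at 3 m, in a 1920-px image from a 60-degree camera
print(object_width(distance_m=3.0, bbox_px=400, image_width_px=1920, horizontal_fov_deg=60))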

+ +

Stack overflow about problem

+ +

Frustum PointNets for 3D Object Detection From RGB-D Data (2018) By Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, Leonidas J. Guibas

+ +

From contours to 3D object detection and pose estimation (2017) By Nadia Payet Sinisa Todorovic

+",30365,,,,,11/14/2019 4:35,,,,0,,,,CC BY-SA 4.0 +16509,1,,,11/14/2019 7:30,,6,2263,"

I am solving many sequence-to-sequence prediction problems using RNN/LSTM.

+ +

What type of evaluation metrics can be used for sequence prediction problems?

+ +

One metric is the mean squared error (MSE), which we can specify as the loss when training the model. Currently, the accuracy of my sequence-to-sequence models is very low.

+ +

What are other ways through which we can compare the performance of our models?

+",14939,,2444,,11/15/2019 2:03,5/24/2023 2:05,What evaluation metric are used for sequence-to-sequence prediction problems?,,2,1,,,,CC BY-SA 4.0 +16510,1,,,11/14/2019 8:09,,5,101,"

"Why would the application of boosting prevent underfitting?"

+

I read in some paper that applying boosting would prevent you from underfitting. Why is that?

+

Source:
+https://www.cs.cornell.edu/courses/cs4780/2015fa/web/lecturenotes/lecturenote13.html

+",30599,,156,,3/4/2021 23:09,3/11/2022 0:03,Why would the application of boosting prevent underfitting?,,1,0,,,,CC BY-SA 4.0 +16512,1,,,11/14/2019 8:39,,1,124,"

I am a master's student about to work on a project to analyze cracks in underwater concrete structures.

+ +

I need some suggestions for data acquisition and length measurement of the crack.

+ +

I have decided to do crack segmentation using Mask-RCNN. But I don't know which methodology is best to measure the length of the cracks. While searching about this, I found many ways to measure the crack size when there is another reference object of known size in the image. But in my case, there won't be any reference object and also it is not possible to know the distance between the camera and target since it is underwater.

+ +

If the images are stereo images, will that solve this issue?

+ +

Can anyone help?

+",31396,,,,,11/14/2019 8:39,How to measure the size of an crack which is segmented from an image using Mask-RCNN?,,0,1,,,,CC BY-SA 4.0 +16513,1,,,11/14/2019 9:21,,1,87,"

I am a student learning about image processing using CNN. I want to learn how to measure the object size from the disparity map obtained from left and right stereo images.

+",31396,,2444,,11/14/2019 14:04,11/14/2019 14:04,How to measure object size from the disparity map using CNN?,,0,0,,,,CC BY-SA 4.0 +16514,1,,,11/14/2019 9:25,,3,170,"

What needs to be done to make a fair algorithm (supervised and unsupervised)?

+ +

In this context, there is no consensus on the definition of fairness, so you can use the definition you find most appropriate.

+",30599,,2444,,11/14/2019 13:46,11/14/2019 23:28,What needs to be done to make a fair algorithm?,,1,0,0,,,CC BY-SA 4.0 +16515,1,,,11/14/2019 10:41,,3,249,"

I'm trying to train a neural network to perform a multiple non-linear regression $y=f(x_i), i=1,2…N$. So far it works well (low MSE), but some predictions $y$ are “non-physical”, for instance for our application it is known from first principles that when $x_2$ increases, then $y$ also has to increase ($dy/dx_2>0$), but in some instances the neural network’s output doesn’t comply with this constraint. Another example is that $y + x_5 + x_7$ should be less than a constant $K$

+ +

I thought about adding a penalty term to the loss function to enforce these constraints, but I am wondering if there is a ""harder"" way to impose such a constraint (that is, to ensure that these constraints will always hold, no only that non-physical predictions will be penalized)

+",28108,,2444,,11/14/2019 14:01,12/18/2019 10:01,Imposing physical constraints (previous knowledge) in a neural network for regression,,1,2,,,,CC BY-SA 4.0 +16516,1,,,11/14/2019 11:41,,5,1657,"

My understanding is that masked self-attention is necessary during training of GPT-2, as otherwise it would be able to directly see the correct next output at each iteration. My question is whether the attention mask is necessary, or even possible, during inference. As GPT-2 will only be producing one token at a time, it doesn't make sense to mask out future tokens that haven't been inferred yet.

+",31284,,10649,,4/23/2023 4:41,4/23/2023 4:41,Is the Mask Needed for Masked Self-Attention During Inference with GPT-2,,1,0,,,,CC BY-SA 4.0 +16517,1,16519,,11/14/2019 11:57,,5,1354,"

On page 98 of Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning the author writes;

+ +
+

Redacted phase space: Studying the distribution of inputs and the + network performance after conditioning on standard physically-inspired + features can help to visualize what new information the network is + using from the jet. Training the network on inputs that have been + conditioned on specific values of known features can also be useful + for this purpose.

+
+ +

I cannot find other references to conditioning features. What does that mean?

+",31285,,2444,,11/14/2019 13:33,11/14/2019 13:33,"What is ""conditioning"" on a feature?",,1,0,,,,CC BY-SA 4.0 +16518,1,,,11/14/2019 12:13,,2,30,"

I’m trying to train a neural network to approximate the output of a dynamical system $dy/dt=f\left(y(t), u(t) \right)$, namely, given $y(0)$ and $u(t_i), i=1,2...N$ I want the network to predict $y(t_i), i=1,2...N$. So far I’ve thought of several approaches, namely

+ +
    +
  1. Predict the derivative $dy/dt (t_{i+1}) = f_1 \left(y(t_i), u(t_i) \right)$ and then compute $y(t_{i+1}) = dy/dt (t_{i+1}) \cdot dt + y(t_{i})$

  2. +
  3. Predict the increment $\Delta y (t_{i+1})= f_2 \left(y(t_i), u(t_i), \Delta t \right)$ and then compute $y(t_{i+1}) = \Delta y (t_{i+1}) + y(t_{i})$

  4. +
  5. Directly predict the next value $y(t_{i+1}) = f_3 \left(y(t_i), u(t_i), \Delta t \right)$

  6. +
+ +

Which option is recommended?

+",28108,,2444,,11/14/2019 13:39,11/14/2019 13:39,Choosing neural network output for prediction (regression) of a dynamical system,,0,0,,,,CC BY-SA 4.0 +16519,2,,16517,11/14/2019 13:17,,5,,"

This is conditioning in the sense of conditional probability. The idea is that the authors have some ""standard physically-inspired features"". They are splitting the data up into bins based on the values of these features, and then training a model for each bin. They are then examining the differences between the models. Usually this is done to learn something about the benefits of using the different features, and about the relationships between features and outputs.
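A tiny sketch of what that looks like in code (the data, feature names and binning are made up for illustration, not taken from the paper):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Made-up jet-like data: one 'physics-inspired' feature (mass) plus two other inputs
rng = np.random.default_rng(0)
df = pd.DataFrame({'mass': rng.uniform(0, 200, 5000),
                   'x1': rng.normal(size=5000),
                   'x2': rng.normal(size=5000)})
df['label'] = (df.x1 + 0.01 * df.mass + rng.normal(size=5000) > 0).astype(int)

# 'Conditioning on mass': bin the data by the known feature and train a model per bin
df['mass_bin'] = pd.cut(df['mass'], bins=4)
for bin_value, group in df.groupby('mass_bin', observed=True):
    model = LogisticRegression(max_iter=1000).fit(group[['x1', 'x2']], group['label'])
    print(bin_value, model.score(group[['x1', 'x2']], group['label']))

Comparing the per-bin models (or the distributions of their inputs) then tells you what the network learns beyond the conditioned feature.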

+",16909,,,,,11/14/2019 13:17,,,,2,,,,CC BY-SA 4.0 +16521,2,,16516,11/14/2019 14:40,,2,,"

Answer to Q1) If sampling for next token do you need to apply mask during inference?

+ +

Yes, you do! The model's ability to transfer information across positions was trained in this manner, and changing it will have unpredictable consequences. Let me try to give an example:

+ +

Tokens: 1:sally, 2:sold, 3:seashells, 4:on, 5:the, 6:____
+In the above you are trying to predict 6 from {1:5}

+ +

Denote $n^{(m)}$ as the set of tokens the $n^{th}$ positional embedding has info from at the $m^{th}$ layer.

+ +

In both cases we see that $n^{(0)} = \{n\} \ \ \forall n$. Now though with a mask we get $n^{(i)} = \{k\}_{k\leq n} \ \ \forall n \ \ s.t. \ \ i \geq 1$ but without we see $n^{(i)} = \{k\}_{k \in [1:N]} \ \ \forall n$. This difference means the embeddings going into the final layer will differ completely, and unless we train for such an approach it will cause errors.
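For completeness, here is a minimal single-head NumPy sketch of how the causal mask keeps position $n$ from mixing in tokens $> n$ (random vectors stand in for the learned query/key/value embeddings):

import numpy as np

def causal_self_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) attention logits
    mask = np.triu(np.ones_like(scores), k=1)       # 1s above the diagonal = future positions
    scores = np.where(mask == 1, -1e9, scores)      # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the allowed positions only
    return weights @ V

n, d = 6, 8                                         # 6 tokens: 'sally sold seashells on the ____'
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(n, d))
out = causal_self_attention(Q, K, V)                # row i only mixes information from positions <= i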

+ +

Answer to Q2) What is the sample dimension?

+ +

It took me a couple of reads to understand what you're asking for, but I think I understand. The sample at each step is drawn from a distribution whose logits are linearly associated with a single embedding of dimension $d_{(model)}$; therefore that is our upper bound: $dim(sample) \leq d_{(model)}$, which in the example you gave is 768.

+",25496,,,,,11/14/2019 14:40,,,,1,,,,CC BY-SA 4.0 +16522,2,,2111,11/14/2019 16:29,,1,,"

[Disclaimer: This answer is research; not medical and/or legal advice. I am not a lawyer: < https://en.wikipedia.org/wiki/Practicing_without_a_license >.

+

Though perhaps pedantic, it is better to be safe than sorry.]

+
+

This is Direct Answer to Your Question:--

+

The matter is controversial, with some established parameters for what constitutes life. An example of an established parameter for being alive is having: cells or reproduction.

+

It is unclear whether A.I. can create a copy of itself that is independent of its parent. Arguably, partial satisfaction of the aforementioned has already occurred with code that is self-learning.

+

May insight on artificial life lead us to saving lives, particularly in regards to CoVid-19. We want children to live.

+
+

The Controversy

+

To begin, there is some controversy surrounding the definition of life in Biology.

+
+

" [...] The definition of life has long been a challenge for scientists and philosophers, with many varied definitions put forward. This is partially because life is a process, not a substance. This is complicated by a lack of knowledge of the characteristics of living entities, if any, that may have developed outside of Earth. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. Legal definitions of life have also been described and debated, though these generally focus on the decision to declare a human dead, and the legal ramifications of this decision. [...] " – Wikipedia contributors. "Life." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 5 Nov. 2019. Web. 14 Nov. 2019.

+
+
+

" [...] Of course, this lack of hard boundaries makes 'artificial life,' as a field of study, significantly ill-defined. Unlike the case for natural life, there are, as yet, no clear criteria for what +virtual world phenomena should qualify as 'living' or sufficiently 'life-like' to legitimately +count as lying within this field. In large measure, this simply reflects the continuing debate and +investigation within conventional biology, of what specific organizational (as opposed to +material) system characteristics are critical to properly living systems. The key advantage +and innovation in artificial life is precisely that it has this freedom to vary and explore +possibilities that are difficult or impossible to investigate in natural living systems. In this +context, a precise definition of 'life' (natural or artificial) is not a necessary, or even especially +desirable condition for progress. [...] " – Banzhaf, Wolfgang, and Barry McMullin. "Artificial life." Handbook of Natural Computing (2012): 1805-1834.

+
+

Since there is often debate and sometimes no clear answer in regards to these questions, I shall explore variable stances on these issues.

+
+

Philosophical Points to Ponder and Meditate On

+

These are some examples of points that involve controversy

+
    +
  1. Is a crystal alive?
  2. +
  3. Is a virus alive?
  4. +
  5. At what point does a self-learning machine become alive, if it can reproduce and create its own child-offspring A.I?
  6. +
  7. Do we have a possible resolution to these issues?
  8. +
  9. How does this resolution relate to artificial intelligence and artificial life?
  10. +
+

Some of these questions have no clear answer, never mind involving A.I. into the question.

+

Do viruses reproduce? Yes. Do they have genetic information? Yes. Do they have cells? No. So are they living? They are usually classified as non-living, yet they can have RNA.

+

The point is, there is debate as to whether a virus is non-living or living; and it is often unclear what the definition of life is.

+

(https://www.scientificamerican.com/article/are-viruses-alive-2004/)

+
+

Variable Stances:--

+

Artificial life researchers study traditional biology by trying to recreate aspects of biological phenomena.

+
+

" [...] Important propositions in the philosophy of AI include:

+
+
+
    +
  • Turing's 'polite convention': If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
  • +
  • The Dartmouth proposal: 'Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.'
  • +
  • Newell and Simon's physical symbol system hypothesis: 'A physical symbol system has the necessary and sufficient means of general intelligent action.' +Searle's strong AI hypothesis: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'
  • +
  • Hobbes' mechanism: 'For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts...' [...] " – Wikipedia contributors. "Philosophy of artificial intelligence." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 15 Oct. 2019. Web. 14 Nov. 2019.
  • +
+
+
+

Update: I asked some of my colleagues and received the following advice.

+

Another approach is simply to check for self-preservation. Under this postulate, all forms of life, ranging from the absolutely-most-simple single-celled organism to postulated beings-of-immense-power (see The Last Question by Isaac Asimov), would be profoundly invested in self-preservation.

+

When the Singularity Comes, Will A.I. Fear Death?

+

However, the preservation of one's children is a valid exception, and does not contradict self-preservation.

+ +

Many schools of meta-ethics claim that (Human) self-preservation and ethics are absolutely congruent. Other schools dispute this to varying degrees. Kantian meta-ethics, a strong reference, is worth considering here (not talking about A.I., here).

+

(https://en.wikipedia.org/wiki/Meta-ethics)

+
    +
  • Do viruses follow the principles of avoiding damage to one's self? I leave that up to you.

    +
  • +
  • How do addictions factor into this? I leave that up to you, but please actually be tactful and use common sense if you choose to discuss that topic.

    +
  • +
+

Incertae sedis.

+
+

Note: Viruses are a very serious topic, right now (pandemic). If drawing parallels to artificial life can help the Human Race, then I am touched to have being allowed to serve Humanity in this way.

+
+

"He who destroys a life, it is as if he destroyed an entire world. He who saves a life, it is as if he saved an entire world." ~ Hillel the Elder

+
+

I ask to be taken seriously.

+

Thank you. You, as an educated person, deserve better than this pandemic, and I hope your children will find happiness.

+
+

Sources, References, and Further Reading

+ +

Other Links:--

+ +
+

Notes:--

+
    +
  • I could not find much information regarding business applications of artificial life.
  • +
+",25982,,25982,,9/23/2020 12:01,9/23/2020 12:01,,,,12,,,,CC BY-SA 4.0 +16523,2,,16514,11/14/2019 17:48,,2,,"

The paper Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges argues that ensuring fairness is not a trivial task and that the current statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts, that is, there are trade-offs between the proposed formalizations. Therefore, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. To achieve that, there is a need for collaboration with domain experts and explainable artificial intelligence.

+ +

The major obstacle towards fair machine learning algorithms is the presence of algorithmic bias, which can be subdivided into the following main categories:

+ +
    +
  • the bias in the data, and
  • +
  • the inductive bias (the implicit or explicit assumptions behind the algorithm or model) or, in general, any bias introduced during the development of the algorithm or model (for example, a certain choice of a subset of features can change the outcome of the model).
  • +
+ +

The bias in the data can be due to different factors, such as a biased choice of the collected (or labelled) data or measurement errors (which can make the data not representative of the population). Causal inference can be used to understand the causal relationships in the data, thus it can be used to find the source of bias in the data. To avoid unfairness due to bias in the data, there is a need to analyze and understand the data, so that to improve its quality (for example, by increasing the diversity of the data). However, the bias in the data is not always easily reducible, given that certain outcomes of an experiment may rarely occur or are hard to produce in practice, so unbiased data may not always be easily collectible.

+ +

In the paper The selective labels problem: Evaluating algorithmic predictions in the presence of unobservables (2017), the authors address the selective labels problem (that is, the judgments of decision-makers determine which instances are labeled in the data, which can thus introduce bias in the data) and develop an approach called contraction, which can be used to compare the performance of predictive models and human decision-makers (even in the presence of unobservables). There are also works that are based on Bayesian or causal inference (for example, risk-adjusted regression)

+ +

The sample bias (the data is not representative of the overall population due to a systematic intentional or unintentional error in data collection, a measurement error, which can also be due to social prejudices) is also a form of bias in the data. In the paper Residual unfairness in fair machine learning from prejudiced data (2018), the authors address this problem in the context of police stop-and-frisk practices (where biased police behavior leads to disproportionate stopping of a racial minority group). Nathan Kallus and Angela Zhou show that adjusting the classifier for fairness does not solve the sample bias problem.

+ +

In the paper Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges, the author argues that any attempt to reduce the bias introduced during the development of the algorithms or models, if it does not take into account the specific social and moral context where they are supposed to be applied, can still lead to algorithmic bias. Furthermore, algorithms should undergo frequent reevaluations, given that, for example, the underlying population or the application context may change.

+",2444,,2444,,11/14/2019 23:28,11/14/2019 23:28,,,,1,,,,CC BY-SA 4.0 +16529,1,,,11/14/2019 23:00,,3,212,"

I have a CNN model that I need to train for a large scale genomics application. It is working well with a subset of my training data. I have scaled up to a subset of about 130 million examples and training time is very long, about 3 hours per epoch. I plan to scale up to hundreds of billions of training examples and I anticipate training time not to be feasible with my current design. I would appreciate feedback on how I can streamline the training or improve some aspect of my design that I may not be considering. Currently, I am training from a MongoDB. The training examples are not very large. Here is an example.

+ +
{
+    'added': datetime.datetime(2019, 11, 1, 6, 13, 13, 340000),
+    '_id': ObjectId('5dbbccf92464af872756022e'),
+    'label': 0,
+    'accession': 'GM_0001',
+    'data': '34363,30450,9019,19152,8726,22128,59881,17670,15803,64454,64579,28103,52442,64951,29783,64574,652,19243,33498,14775,18803,4700,55446,53912,47645,41465,48257,16305,62071,12334,44698,24371,46515,8445,3000,61849,43228,18120,23587,11105,5453,42707,42739,46122,31285,40773,48162,16653,58783,2928,2836,21330,46947,6719,26992,8852,14520,46212,47362,43554,2147,39372,33885,59716,37384,14825,53387,58763,18065,34070,23278,15641,40237,47950,58811,40015,36880,29841,45351,14904,49660,48224,54638,50358,17202,10701,3564,4829,62655,5684,37207,49724,16369,6769,37827,38144,63885,5070,42882,48960,16178,35758,50554,54253,34556,2383,39431,30176,11482,24459,4472,53825,7764,44500,4869,50875,33037,56353,46848,30769,18729,46026,41409,2826,12092,17086',
+    'name': 'Example_1'
+}
+
+ +

The relevant data is the 'data' field which is a string of 126 integers where each integer is a value between 0 and about 65,000. The other fields are convenient, but not necessary except for the 'label' field. But even this I could insert into the front of the data field. I mention this because I don't think I necessarily need to train from a MongoDB database.

+ +

I am using Keras 2.3.0 with TensorFlow 2.0.0. Below is an example of my code. The workflow is 1) Load a text file containing the document ids of all training examples in the MongoDB collection. I do this so I can shuffle the examples before sending them to the model for training. 2) I load the examples in batches of 50,000 using my Custom_Generator class. This class pulls the documents from the MongoDB using the list of document ids. 3) The model is trained. I use 5 epochs. I currently have 5-fold cross-validation but I know this is not feasible on the full training set. For that I will do a single train-test split. I am currently performing this on a Google Cloud instance with 2 Tesla T4 GPUs. The database is on a bucket. With the cloud I have flexibility of hardware architectures. I would appreciate any insight. This is a rather large engineering challenge for me.

+ +

Additional background to the problem: The objective is to classify organisms into broad classes quickly for downstream analysis. The pool of organisms I want to classify is very large (10s of thousands) and very diverse. I'm essentially reading the genomes of the organisms like a book. The genome (a series of ""A"", ""T"", ""C"", or ""G"") is processed in chunks through a hash function producing integer strings as shown above. Depending on the size of the organism genome, thousands to millions of these integer strings may be produced. So I have many thousands of organisms producing many thousands to millions of examples. To be successful, I feel like I need to capture the diversity of the genomes in the organism pool. To give an example, even though Ecoli and Salmonella are both bacteria, their genomes are quite distinct. I feel like I need to have them both represented in the training set to distinguish them from other organisms I would label as a different class. As far as reducing the dataset, I think I can get by with only training on a representative organism for a given species (since there are many unique genomes available for Ecoli, for example). This will help considerably, but I think the training data set will likely still be in the billions of examples.

+ +
import sys
+import time
+from keras.utils import Sequence, to_categorical, multi_gpu_model
+from keras.models import Sequential
+from keras.layers import Dense
+from keras.layers import Flatten
+from keras.layers import Embedding
+from keras.layers.convolutional import Conv1D
+from keras.layers.convolutional import MaxPooling1D
+from sklearn.model_selection import KFold
+from keras.preprocessing.sequence import pad_sequences
+import numpy as np
+import random
+from pymongo import MongoClient
+from bson import ObjectId
+from sklearn.metrics import classification_report, confusion_matrix
+
+
+class Custom_Generator(Sequence) :
+
+    def __init__(self, document_ids, batch_size) :
+        self.document_ids = document_ids
+        self.batch_size = batch_size
+
+
+    def __len__(self) :
+        return (np.ceil(len(self.document_ids) / float(self.batch_size))).astype(np.int)
+
+
+    def __getitem__(self, idx) :
+        client = MongoClient(port=27017)
+        db = client[database]
+        document_ids = self.document_ids[idx * self.batch_size : (idx+1) * self.batch_size]
+        query_results = db[collection].find({'_id': {'$in': document_ids}})
+        batch_x, batch_y = [], []
+        for result in query_results:
+            kmer_list = result['kmers'].split(',')
+            label = result['label']
+            x = [x for x in kmer_list if len(x) > 0]
+            if len(x) < 1:
+                continue
+            batch_x.append(x)
+            one_hot_y = to_categorical(label, 5)
+            batch_y.append(one_hot_y)
+        batch_x = pad_sequences(batch_x, maxlen=126, padding='post')
+        client.close()
+        return np.array(batch_x), np.array(batch_y)
+
+
+# MongoDB database, collection, and document ids of collection
+database = 'db'
+collection = 'collection_subset2'
+docids_file = 'docids_collection_subset2.txt'
+id_ls = []
+# Convert docids strings to MongoDB ObjectID
+with open(docids_file) as f:
+    for line in f:
+        id_ls.append(ObjectId(line.strip()))
+random.shuffle(id_ls)
+
+# Model
+model = Sequential()
+model.add(Embedding(65521, 100, input_length=126))
+model.add(Conv1D(filters=25, kernel_size=5, activation='relu'))
+model.add(MaxPooling1D(pool_size=2))
+model.add(Conv1D(filters=30, kernel_size=3, activation='relu'))
+model.add(MaxPooling1D(pool_size=2))
+model.add(Flatten())
+model.add(Dense(1000, activation='relu'))
+model.add(Dense(5, kernel_initializer=""normal"", activation=""softmax""))
+model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+parallel_model = multi_gpu_model(model, gpus=2)
+parallel_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
+
+seed = 7
+batch_size = 50000
+
+# Currently training with 5-fold CV. Will only use single test train split 
+# on the full-scale dataset.
+kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
+kfold_stats = {}
+accuracy_ls = []
+val_accuracy_ls = []
+confusion_ls = []
+for fold_idx, (train_idx, test_idx) in enumerate(kfold.split(id_ls)):
+    ids_train = np.array(id_ls)[train_idx].tolist()
+    ids_test = np.array(id_ls)[test_idx].tolist()
+    training_batch_generator = Custom_Generator(ids_train, batch_size)
+    validation_batch_generator = Custom_Generator(ids_test, batch_size)
+    print('Number of train files: %d' % len(ids_train))
+    print('Number of test files: %d' % len(ids_test))
+    start = time.time()
+    history = parallel_model.fit_generator(
+        generator=training_batch_generator,
+        steps_per_epoch = int(len(ids_train) // batch_size),
+        epochs = 5,
+        verbose = 2,
+        validation_data = validation_batch_generator,
+        validation_steps = int(len(ids_test) // batch_size),
+        use_multiprocessing=True
+    )
+    sys.stderr.write(""time to train model (seconds): %d\n""%(time.time() - start))
+    sys.stderr.flush()
+    print(history.history)
+    fold_name = 'kfold_%s' % str(fold_idx)
+    kfold_stats.update({fold_name: history.history})
+    accuracy_ls.extend(history.history['accuracy'])
+    val_accuracy_ls.extend(history.history['val_accuracy'])
+    parallel_model.save('model_output_kfold_%s.h5' % str(fold_idx))
+    print(""Kfold %s finished"" % str(fold_idx))
+    Y_pred = parallel_model.predict_generator(validation_batch_generator)
+    y_pred = np.argmax(Y_pred, axis=1)
+    y_true = np.concatenate([np.argmax(batch[1], axis=1) for batch in validation_batch_generator])  
+    print('Confusion Matrix')
+    conf = confusion_matrix(y_true, y_pred)
+    print(conf)
+    confusion_ls.append(conf)
+    print('Classification Report')
+    target_names = ['Class_name_1', 'Class_name_2', 'Class_name_3', 'Class_name_4', 'Class_name_5']
+    report = classification_report(y_true, y_pred, target_names=target_names)
+
+
+",31294,,31294,,11/15/2019 18:40,11/15/2019 18:40,What is the fastest way to train a CNN with billions of examples?,,1,2,,,,CC BY-SA 4.0 +16531,1,,,11/15/2019 3:08,,3,454,"

I am trying to train a VAE for anomaly detection. I chose one architecture from this Github repository and I adjusted the input and output to match what I need. In my case, the input (and hence the output) is a 12D vector. I tried several sizes for the latent space, but, for some reason, it's not training. From the beginning, the KL loss is almost zero (around 1e-10), while the reconstruction loss (MSE for Gaussian distribution) is around 1, and they basically vary around these values without learning anything further.

+ +

Are there any general tips for troubleshooting a VAE (I never trained one before)?

+ +

I am pretty sure that the code is right and the data for sure has a background and signal (the ratio is 10:1), so I am not really sure what I am doing wrong.

+",31302,,2444,,11/15/2019 15:18,11/15/2019 15:18,Are there any general tips for troubleshooting a VAE when apparently it is not learning?,,0,2,,,,CC BY-SA 4.0 +16532,2,,16479,11/15/2019 3:38,,0,,"

We can manipulate a model's test data set if the machine learning model takes user input and uses it to resample the test data set. The actual training dataset of the ML model does not get manipulated, but if we figure out the ML model through an exploratory attack (sending a lot of queries to the ML model to find out its nature), we can reconstruct a dataset equivalent to the one the original ML model was built on.

+",31240,,,,,11/15/2019 3:38,,,,0,,,,CC BY-SA 4.0 +16533,1,,,11/15/2019 7:00,,2,232,"

I have a large 2D grid having 30k rows and 35k columns, so a total of 30x35k grid cells. Each grid cell is represented by a unique integer number (identity of grid cell). I have several trajectories that passes through these grid cells. Each trajectory is represented by a sequence of numbers (that are grid cells through which the trajectory passes through).

+ +

I want to solve the problem of trajectory prediction by giving the partial trajectory as input and predict the full trajectory. This becomes a sequence to sequence problem, where all sequences are integer values by default.

+ +

I am trying to solve this problem through an encoder-decoder LSTM architecture. Most tutorials/examples regarding sequence to sequence on the net are about machine translation, in which vocabulary words or characters are one-hot encoded to represent the text as integer values. When I one-hot encode my sequence values, the one-hot vector becomes very large because there are (30x35)k grid cells, and the program has given a memory overflow error (because each vector has a size of about 1 million).

+ +

I am confused here, do I need to treat grid identity as categorical variable? because all grid identities are numeric numbers but these identities are not comparable (like prices).

+ +

Do I need to one-hot encode the integer values in my sequence? Or is there any other alternative to solve this problem? I would also appreciate suggestions for similar tutorials on sequence-to-sequence prediction problems.

+",14939,,14939,,11/15/2019 14:40,7/27/2023 21:10,How to represent integer values in sequence to sequence prediction task in encoder-decoder LSTM?,,1,0,,,,CC BY-SA 4.0 +16534,1,,,11/15/2019 7:27,,4,2424,"

I'm searching for a loss function that fits my project. Actually, I have two questions, but they are in the same direction. I take a look at the definition of the root mean squared error (RMSE) and the Euclidean distance and they look the same to me. That's why I want to know the difference between the two. What would be the difference if I use RMSE as a loss function or the euclidean distance?

+ +

The second question is how to search for a loss function. I mean I know it depends on the problem and common things are MSE for regression and cross-entropy for classification, but let's say I have a specific problem, how do I search for a loss function?

+ +

I also saw that some people use a custom loss function and most of the deep learning frameworks allow us to define a custom loss function, but why would I want to use a custom one? How do I get the intuition that I need a custom loss function?

+ +

To be more specific, I'm doing a project where I need to reduce the GPS error of a vehicle. I have some vehicle data and my neural network will try to predict the longitude and latitude, so it's a regression problem. That's why I thought that the Euclidean distance would make sense as a loss function, right? Now, somehow MSE also makes sense to me because it is getting the difference between prediction and ground truth. Does this make sense to you as a professional ML engineer or data scientist? And if there would be a custom loss function that you can use, what would you suggest and why?

+",30327,,2444,,11/15/2019 15:01,11/15/2019 15:01,"What's the difference between RMSE and Euclidean distance, and when to use a custom loss?",,1,0,,11/19/2019 4:25,,CC BY-SA 4.0 +16535,1,,,11/15/2019 8:27,,2,142,"

Is it possible to train a convolutional neural network (CNN) to predict the dimensions of primitive objects such as (spheres, cylinders, cuboids, etc.) from point clouds?

+ +

The input to the CNN will be the point cloud of a single object and the output will be the dimensions of the object (for example, radius and height of the cylinder). The training data will be the point cloud of the object with the ground truth dimensions in a regression final layer?

+ +

I think it is possible for images, since it would be similar to bounding box detection, but I am not sure about point clouds.

+",31259,,2444,,11/15/2019 15:15,11/15/2019 20:30,Is it possible to train a CNN to predict the dimensions of primitive objects from point clouds?,,1,1,,,,CC BY-SA 4.0 +16536,1,16737,,11/15/2019 9:22,,14,502,"

I've read a few classic papers on different architectures of deep CNNs used to solve varied image-related problems. I'm aware there's some paradox in how deep networks generalize well despite seemingly overfitting training data. A lot of people in the data science field that I've interacted with agree that there's no explanation on why deep neural networks work as well as they do.

+ +

That's gotten me interested in the theoretical basis for why deep nets work so well. Googling tells me it's kind of an open problem, but I'm not sure of the current state of research in answering this question. Notably, there are these two preprints that seem to tackle this question:

+ + + +

If anyone else is interested in and following this research area, could you please explain the current state of research on this open problem? What are the latest works, preprints or publications that attempt to tackle it?

+",27548,,2444,,11/25/2019 16:15,11/25/2019 16:23,What are the state-of-the-art results on the generalization ability of deep learning methods?,,1,0,,,,CC BY-SA 4.0 +16537,2,,16534,11/15/2019 9:50,,3,,"

For the first question, there is essentially no difference: RMSE is the Euclidean distance divided by the square root of the number of samples, so minimizing one also minimizes the other.

+ +

For the second question, you usually only need the common loss functions for standard tasks.

+ +

MSE is a common loss function for regression tasks, as are losses similar in nature such as RMSE. For classification tasks, cross-entropy loss is preferred; for binary classification (logistic regression), use binary cross-entropy loss. See this for details:

+ +
+

Cross-entropy loss, or log loss, measure the performance of a + classification model whose output is a probability value between 0 and + 1. It is preferred for classification, while mean squared error (MSE) is one of the best choices for regression. This comes directly from + the statement of your problems itself. In classification you work with + a very particular set of possible output values thus MSE is badly + defined.

+ +

To better understand the phenomena it is good to follow and understand + the relations between

+ +

Cross-entropy

+ +

Logistic regression (binary cross-entropy)

+ +

Linear regression (MSE)

+ +

You will notice that both can be seen as a maximum likelihood + estimator (MLE), simply with different assumptions about the dependent + variable.

+ +

When you derive the cost function from the aspect of probability and + distribution, you can observe that MSE happens when you assume the + error follows Normal Distribution and cross-entropy when you assume + binomial distribution. It means that implicitly when you use MSE, you + are doing regression (estimation) and when you use CE, you are doing + classification.

+
+ +

Source: https://intellipaat.com/community/2015/why-is-the-cross-entropy-method-preferred-over-mean-squared-error-in-what-cases-does-this-doesnt-hold-up

+ +

As for custom loss functions, there are examples like the triplet loss, and cases where you need to optimize two losses at once; then you need a custom loss function. The triplet loss is used in one-shot learning tasks like face recognition: face images are fed through a CNN to get embeddings describing each face. In the training phase there are three images per batch: one anchor, one positive (the same person as the anchor) and one negative (a different person from the anchor). The triplet loss maximizes the distance between the anchor embedding and the negative embedding, and at the same time minimizes the distance between the anchor embedding and the positive embedding. The loss function is the standard triplet loss: $\mathcal{L} = \max\left(\|f(A)-f(P)\|^2 - \|f(A)-f(N)\|^2 + \alpha,\ 0\right)$

+ +

As you can see, there are two components here, and they are subtracted so that training minimizes the first (the anchor-positive distance) and maximizes the second (the anchor-negative distance). The alpha value is a margin that keeps the loss positive, since gradient descent pushes the loss to be as near to zero as possible.

+ +

The case of style transfer also introduces a custom loss function. Here the total loss is a weighted sum of two losses: $\mathcal{L}_{total} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{style}$

+ +

Style transfer basically alters the input image to be styled like the style image of the system.

+ +

The loss function consists of a content loss and a style loss. The content loss keeps the output image close to the content of the input image, while the style loss decreases the distance between the output image and the style image in an artistic sense; this second loss makes the output image imitate the style of the style image.

+ +

In both scenarios a custom loss function is used, and in both cases it combines two losses, either positively or negatively. A weight can also be introduced to balance the two losses, as shown in the second case. The second case also illustrates the other use of custom loss functions: they can be tailored to a specific task to optimize a specific goal. The first use can be explored fairly easily, but the second requires careful engineering and research.

+ +

So, in short, custom loss functions are used either for combining two losses or for a specific task.

+ +
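
For the GPS regression use case in the question, one possible custom loss is the per-sample Euclidean distance between the predicted and true (latitude, longitude) pairs. This is only a minimal sketch (the tiny model and the 10 input features are made up for illustration, not taken from the question):

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# Euclidean distance between predicted and true (lat, lon) pairs, per sample
def euclidean_distance_loss(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_pred - y_true), axis=-1))

# toy model: 10 vehicle features in, (lat, lon) out
model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                    Dense(2)])
model.compile(optimizer='adam', loss=euclidean_distance_loss)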

Hope I can help you.

+",23713,,23713,,11/15/2019 11:36,11/15/2019 11:36,,,,10,,,,CC BY-SA 4.0 +16538,1,,,11/15/2019 9:54,,11,8321,"

What is the reason AMD Radeon is not widely used for machine learning and deep learning? Is it mainly an issue of lack of software? Or is Radeon's GPU not as good as NVIDIA's?

+",31307,,,,,11/15/2019 12:20,What is the reason AMD Radeon is not widely used for machine learning and deep learning?,,1,0,,6/14/2020 10:28,,CC BY-SA 4.0 +16539,1,16540,,11/15/2019 11:47,,1,73,"

I want to merge 2 datasets into one, but I don't know the right approach to do it. The datasets are similar: the last column is the same, indicating whether or not the user will buy a product. The first dataset contains only users who will buy; the second contains only users who won't.

+ +

The 1st dataset contains 500 rows and the 2nd 10,000 rows. What would be the right approach to merge them? How can I normalize them? And how do I indicate to an algorithm that the last column is the target it should learn?

+ +

Example:

+ +
id    income date will_buy
+
+23123 200    10.5 Yes
+
+ +

and second dataset:

+ +
id    income date will_buy
+
+2323  100    10.5 No
+
+",30737,,2444,,11/15/2019 14:25,11/15/2019 14:25,How can I merge two datasets?,,1,0,,11/15/2019 20:37,,CC BY-SA 4.0 +16540,2,,16539,11/15/2019 11:57,,1,,"

You can use the pandas append function:

+ +
final = df1.append(df2, ignore_index=True)
+
+ +

To use the last column as labels, you can extract them like so:

+ +
labels = np.array(final[""will_buy""])
+
+ +

So, when calling the fit method on the model you build, you pass these labels as the target.
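
To address the normalization part of the question, a small follow-up sketch (assuming the numeric columns are income and date, as in your example rows) could be:

from sklearn.preprocessing import MinMaxScaler

# shuffle the merged frame so 'Yes' and 'No' rows are mixed
final = final.sample(frac=1, random_state=0).reset_index(drop=True)

# scale the numeric feature columns to the [0, 1] range
scaler = MinMaxScaler()
final[['income', 'date']] = scaler.fit_transform(final[['income', 'date']])

# re-extract the labels after shuffling so they stay aligned with the rows
labels = np.array(final[""will_buy""])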

+",22301,,2444,,11/15/2019 14:16,11/15/2019 14:16,,,,0,,,,CC BY-SA 4.0 +16541,2,,16538,11/15/2019 12:20,,6,,"

The main reason that AMD Radeon graphics cards are not widely used for deep learning is not the hardware or raw speed. Instead, it is because the software and drivers for deep learning on Radeon GPUs are not actively developed. NVIDIA has good drivers and a mature software stack for deep learning, such as CUDA, cuDNN and more, and many deep learning libraries have CUDA support. For AMD, there is little software support for its GPUs: there is ROCm, but it is not as well optimized and a lot of deep learning libraries don't have ROCm support.

+ +

Also, on the hardware side, AMD lacks deep-learning-specific features like tensor cores, and it doesn't have data-centre-specific cards like NVIDIA's Tesla lineup. The performance of AMD cards was also not very good until recently, with the RDNA architecture.

+ +

Lastly, data centres don't change hardware very often. NVIDIA flourished in the deep learning field very early on, so many companies bought a lot of Tesla GPUs. Even if AMD caught up in the deep learning field, it would be hard to displace NVIDIA, as many companies have used NVIDIA for a long time, and switching to a very different GPU architecture is troublesome, especially for a data centre with a couple hundred or more servers.

+ +

AMD has also ""given up"" on deep learning market. They don't actively tries to make effort in deep learning data centres using AMD hardware. Many data centre also use very old Tesla hardwares like Tesla K80 of M series.

+ +

Hope this can help you.

+",23713,,,,,11/15/2019 12:20,,,,0,,,,CC BY-SA 4.0 +16542,1,16544,,11/15/2019 12:37,,2,42,"

Suppose we want to detect whether an object is one of the following classes: $\text{Object}_1, \text{Object}_2, \text{Object}_3$ and $\text{Person}$. Should the annotated images only contain bounding boxes for either a person or an object? In other words, suppose an image has both $\text{Object}_1$ and $\text{Person}$. Should you create a copy of this image where the first version only has a bounding box on the object and the second copy only has a bounding box on the person?

+",28201,,2444,,11/15/2019 15:30,11/15/2019 15:30,"If an image contains two distinct objects, should I create a copy of this image with distinct labels for each copy?",,1,0,,,,CC BY-SA 4.0 +16543,2,,16533,11/15/2019 12:58,,0,,"

One-hot encoding helps a lot. You can one-hot encode each batch just before training to reduce memory usage. One-hot encoding makes the input data more intuitive for the network, as a raw numeric value requires the network to do multiple comparisons to understand it. For examples, please tell me which deep learning framework you are using so I can give you some links or code. Hope I can help you.

+",23713,,,,,11/15/2019 12:58,,,,3,,,,CC BY-SA 4.0 +16544,2,,16542,11/15/2019 13:26,,2,,"

You should use both classes together. If you used the method you proposed, the two copies would contradict each other: one would teach the network to recognize people but not objects, and the other would teach it to recognize objects but not people. There is no need to separate the two classes, unless you are making two separate classifiers. Hope I can help you.

+",23713,,,,,11/15/2019 13:26,,,,5,,,,CC BY-SA 4.0 +16545,1,16547,,11/15/2019 13:54,,1,135,"

How does one interpret the ""min_input_size"", ""max_input_size"" and ""anchors"" fields in the Yolov3 config file here. In particular, suppose we have the following:

+ +
    ""min_input_size"":       288,
+
+    ""max_input_size"":       448,
+
+    ""anchors"":              [55,69, 75,234, 133,240, 136,129, 142,363, 203,290, 228,184, 285,359, 341,260]
+
+ +

Does the min_input_size and max_input_size indicate the maximum number of training images we can have? What do the numbers in the ""anchors"" field indicate? Are they the coordinates of the anchor boxes? Surprisingly, I have not been able to find a good explanation of many of these fields within this file.

+",28201,,,,,11/15/2019 14:03,Interpreting Keras Yolov3 config file,,1,1,,11/19/2019 0:53,,CC BY-SA 4.0 +16546,2,,16529,11/15/2019 13:56,,1,,"

First of all, you should add the argument workers=n in the fit_generator call, with n bigger than 1, to prefetch data. Since your data has to be fetched from a database server, prefetching loads the next batches while the GPU is processing the current one.

+ +

If you call fit_generator with workers > 1 , use_multiprocessing=True , we will prefetch queue_size batches.

+ +

Source: https://github.com/keras-team/keras/issues/12847

+ +
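
For example, based on the code in your question, the call could look roughly like this (the worker count and queue size are just starting points to tune on your machine):

history = parallel_model.fit_generator(
    generator=training_batch_generator,
    steps_per_epoch=len(ids_train) // batch_size,
    epochs=5,
    validation_data=validation_batch_generator,
    validation_steps=len(ids_test) // batch_size,
    workers=4,                # CPU processes fetching batches from MongoDB
    max_queue_size=10,        # number of batches to prefetch
    use_multiprocessing=True)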

Secondly, as @MichaelHearn mentioned, you should plot a learning curve to see how much data is needed. Your task should not require several billion data samples; a couple hundred thousand should be enough. If the model is not getting good accuracy from a couple hundred thousand samples, perhaps you can try changing the model architecture instead of adding more data, for example by adding dropout layers.

+ +

Hope I can help you.

+",23713,,,,,11/15/2019 13:56,,,,2,,,,CC BY-SA 4.0 +16547,2,,16545,11/15/2019 14:03,,0,,"

The min and max input size refer to the minimum and maximum size of the input images (in pixels, along both axes), not to the number of training images.

+ +

The anchors represent the sizes of the anchor boxes: the values are read as consecutive (width, height) pairs. Anchor boxes do not have coordinates, only sizes.

+ +
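
For example, the list in your config can be read as consecutive (width, height) pairs:

anchors = [55, 69, 75, 234, 133, 240, 136, 129, 142, 363, 203, 290, 228, 184, 285, 359, 341, 260]
anchor_boxes = list(zip(anchors[0::2], anchors[1::2]))
print(anchor_boxes)  # [(55, 69), (75, 234), ...] i.e. anchor widths and heights in pixels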

Hope I can help you

+",23713,,,,,11/15/2019 14:03,,,,7,,,,CC BY-SA 4.0 +16549,2,,16535,11/15/2019 19:12,,2,,"

I'm working on a similar problem. I'm using a 2D point cloud of an object, for example, X and Y coordinates for height, and with that simpler data set I will train a regression model (currently working on that). In my opinion, this approach of dissecting a complex point cloud into cross-sections that contain the wanted dimension and feeding those to the model will be simpler and easier to train.

+",31315,,31315,,11/15/2019 20:30,11/15/2019 20:30,,,,1,,,,CC BY-SA 4.0 +16550,1,,,11/15/2019 20:32,,1,195,"

I'm working on a project where I need to extract text from grocery discount flyers like the Costco announcement below (retrieved in a random google search, Costco is not the deal here):

+ +

+ +

If I just run OCR (like with Tesseract in python):

+ +
import cv2
+import pytesseract
+img = cv2.imread('costco.jpg')
+text = pytesseract.image_to_string(img)
+print(text)
+
+ +

I get:

+ +
+

Cadbury Chocolate

+ +

variety pack packet

+ +

ere $12.99 i rom hagst 31026 2012 +> +> Je laa +> + a +> +> Wrigley’s Excel Gum variety +> +> Backol 24 +> +> $13.79 fom agus 26.202

+ +

OFF

+ +

Solon Extra virgin olive oil [...]

+
+ +

Which is a lot noisy.

+ +

My guess is that splitting the image into its base squares enhances the recognition.

+ +

However, I'm confused on how to do it. I can classify images using a CNN, but am not sure about object recognition.

+ +

Should I have a sliding window and train several ""grid box"" objects on a generic CNN and then provide this window data to be classified? How to adapt to distinct object window sizes?

+",31316,,,,,11/15/2019 22:14,Best approach for 2D Grid Image Segmentation,,1,0,,,,CC BY-SA 4.0 +16551,2,,16550,11/15/2019 22:14,,2,,"

This is a really cool problem. You already have a working model here are a few different ways of going forward with the project.

+ + + +

Training the model to detect boxes, followed by extracting the text from each box, seems like a very smart direction to move in. This paper talks about the former; see also classical approaches based on automatic image segmentation and edge detection. Good luck!
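
If you want to try a non-learned baseline first, a rough OpenCV sketch along the segmentation/edge-detection line could look like this (the threshold parameters and the minimum box area are guesses you would need to tune for your flyers):

import cv2
import pytesseract

img = cv2.imread('costco.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 15, 5)
# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns three values
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 10000:  # keep only reasonably large boxes
        cell = img[y:y + h, x:x + w]
        print(pytesseract.image_to_string(cell))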

+",30365,,,,,11/15/2019 22:14,,,,1,,,,CC BY-SA 4.0 +16553,1,,,11/16/2019 10:36,,3,38,"

Problem Background

+ +

I am working on a problem which requires a character-level deep learning model. Previously I was working with word-level deep NLP (Natural Language Processing) models, and in these models an embedding encoding was almost always used to represent a given word in a lower-dimensional vector form. Furthermore, such an embedding encoding allowed similar words to be placed near each other in the new lower-dimensional vector representation (i.e. the man and woman vectors were near each other in the vector space), which improved learning. Nevertheless, I often see that people use embedding encodings in character-level NLP models, even though the character-level one-hot vectors are quite small in comparison to word-level one-hot vectors (about 36 versus 32k rows). Furthermore, there is not much correlation between characters; there is nothing like ""similar characters"" in the way there are similar words, so some characters shouldn't need to be placed near each other.

+ +

Question: Why is embedding encoding used in character-level NLP models?

+",31324,,,,,11/16/2019 10:36,Why embedding layer is used in the character-level Natural Language Processing models,,0,0,,,,CC BY-SA 4.0 +16554,1,,,11/16/2019 10:50,,2,51,"

I am trying to make a neural network which takes 0s and 1s as its input and should give me output in the range [-20, -1]. I am using three layers with sigmoid as the activation function. How should I design my output layer? Any sort of code snippet from your side would be helpful. I am using TensorFlow. Please help me out with the same.

+",31325,,,,,11/16/2019 10:50,"How should I make output layer of my neural network so that I can get outputs ranging from [-20,-1]",,0,3,,,,CC BY-SA 4.0 +16555,1,,,11/16/2019 10:56,,1,41,"

I'm wondering how is the general return-based off-policy equation in Safe and efficient off-policy reinforcement learning derived +$$\mathcal{R} Q(x, a):=Q(x, a)+\mathbb{E}_{\mu}\left[\sum_{t \geq 0} \gamma^{t}\left(\prod_{s=1}^{t} c_{s}\right)\left(r_{t}+\gamma \mathbb{E}_{\pi} Q\left(x_{t+1}, \cdot\right)-Q\left(x_{t}, a_{t}\right)\right)\right]$$

+ +

If it is applied to TD($\lambda$), is this equation the forward view of TD($\lambda$)?

+ +

What is the difference between trace $c_s$ and eligibility trace?

+",15525,,2444,,6/4/2020 17:09,6/4/2020 17:09,How is the general return-based off-policy equation derived?,,0,0,,,,CC BY-SA 4.0 +16556,1,16579,,11/16/2019 11:50,,6,2535,"

In the following paragraph from the book Automated Machine Learning: Methods, Systems, Challenges (by Frank Hutter et al.)

+ +
+

In this section we first give a brief introduction to Bayesian optimization, present alternative surrogate models used in it, describe extensions to conditional and constrained configuration spaces, and then discuss several important applications to hyperparameter optimization.

+
+ +

What is an ""alternative surrogate model""? What exactly does ""alternative"" mean?

+",17410,,2444,,7/24/2021 11:55,7/24/2021 11:55,"What is a ""surrogate model""?",,3,0,,,,CC BY-SA 4.0 +16557,1,16558,,11/16/2019 12:06,,4,169,"

I have a large set of data points describing mappings of binary vectors to real-valued outputs. I am using TensorFlow, and would like to train a model to predict these relationships. I used four hidden layers with 500 neurons in each layer, and sigmoidal activation functions in each layer.

+ +

The network appears to be unable to learn, and has high loss even on the training data. What might cause this to happen? Is there something wrong with the design of my network?

+",31325,,2444,,11/19/2019 2:41,11/19/2019 2:41,What could be the problem when a neural network with four hidden layers with the sigmoid activation function is not learning?,,2,0,,,,CC BY-SA 4.0 +16558,2,,16557,11/16/2019 12:41,,0,,"

When training a neural network, you need to scale your dataset in order to avoid slowing down the learning or preventing effective learning. Try normalizing your output. This tutorial might help.
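
For example, a minimal sketch with scikit-learn (the random y_train, and the x_test and model names, are placeholders for your own data and network):

import numpy as np
from sklearn.preprocessing import StandardScaler

y_train = np.random.randn(1000, 1) * 50   # placeholder for your real-valued targets

scaler = StandardScaler()
y_train_scaled = scaler.fit_transform(y_train)
# ... train the network on the scaled targets ...
# then undo the scaling on the network's predictions:
# y_pred = scaler.inverse_transform(model.predict(x_test))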

+",26854,,,,,11/16/2019 12:41,,,,1,,,,CC BY-SA 4.0 +16560,2,,16556,11/16/2019 16:53,,5,,"

A surrogate model is a simplified model. It is a mapping $y_S=f_S(x)$ that approximates the original model $y=f(x)$, in a given domain, reasonably well. Source: Engineering Design via Surrogate Modelling: A Practical Guide

+ +

In the context of Bayesian optimization, one wants to optimize a function $y=f(x)$ which is expensive (very time consuming) to evaluate, therefore one optimizes the surrogate model $y_S=f_S(x)$ which is cheaper (faster) to evaluate.

+",31335,,2444,,11/19/2019 16:49,11/19/2019 16:49,,,,2,,,,CC BY-SA 4.0 +16562,2,,16336,11/16/2019 20:36,,1,,"
    +
  1. The logp seen in the code is actually logit(p), which has the following story behind it:
  2. +
+
+

Given a probability p, the corresponding odds are calculated as p / (1 – p). For example if p=0.75, the odds are 3 to 1: 0.75/0.25 = 3.

+

The logit function is simply the logarithm of the odds: logit(x) = log(x / (1 – x)).

+
+

The sigmoid applied to logp works as follows:

+
+

The inverse of the logit function is the sigmoid function. That is, if you have a probability p, sigmoid(logit(p)) = p.

+
+

Source: [1]

+
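
A quick numeric check of this relationship (a small NumPy sketch, not part of the cited source):

import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

p = 0.75
print(logit(p))           # ~1.0986, the log of the 3-to-1 odds
print(sigmoid(logit(p)))  # recovers 0.75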
    +
  1. In reinforcement learning, we only know at the end of the game whether the actions taken were successful or not. Then, before the next round, we can adjust the gradients. From your link (commentary section):
  2. +
+
+

For example in Pong we could wait until the end of the game, then take the reward we get (either +1 if we won or -1 if we lost), and enter that scalar as the gradient for the action we have taken (DOWN in this case). In the example below, going DOWN ended up to us losing the game (-1 reward). So if we fill in -1 for log probability of DOWN and do backprop we will find a gradient that discourages the network to take the DOWN action for that input in the future (and rightly so, since taking that action led to us losing the game).

+
+
    +
  1. In the very same commentary section (later) there is a picture and an explanation of what h is. Unfortunately, you have to check it yourself, as the picture could not be attached here. From the picture, h corresponds to the hidden layer activations (the output of the hidden layer), and dh is the gradient with respect to h.
  2. +
+

Roughly speaking, backpropagation corrects the weights backwards through the network after the round is done. A more thorough explanation is in the mentioned comments section.

+

Sources:

+

[1] https://www.google.com/amp/s/nathanbrixius.wordpress.com/2016/06/04/functions-i-have-known-logit-and-sigmoid/amp/

+",11810,,-1,,6/17/2020 9:57,11/16/2019 20:36,,,,6,,,,CC BY-SA 4.0 +16563,2,,16496,11/16/2019 21:48,,1,,"

I believe you may want to use a Sum Product Network for this task. SPNs are the state-of-the-art approach for face completion, and there are several more recent papers on this topic since the original above.

+ +

Importantly, the SPN paper also covers other approaches that work well for this task. If lower-resolution results are acceptable for your task, PCA with 100 or more components works surprisingly well. If your dataset is very large, a nearest-neighbor approach can work decently too.

+",16909,,,,,11/16/2019 21:48,,,,1,,,,CC BY-SA 4.0 +16564,1,16565,,11/17/2019 0:09,,2,259,"

I was making a simple phoneme classification model for a 10 week-long class project and I ran into a small question.

+

Is it possible to create a model that takes a 1-second (the longest phoneme is 0.2 second but the large image is kept for context) spectrogram as input? Some people suggest creating an RNN for phoneme classification, but can you build a pure CNN phoneme classification model?

+",23546,,2444,,5/10/2022 7:49,5/10/2022 7:49,Can you build a pure CNN phoneme classification model?,,1,0,,,,CC BY-SA 4.0 +16565,2,,16564,11/17/2019 1:52,,1,,"

Yes you can; a few years ago I made a simple CNN for single Arabic phoneme classification. You can use a spectrogram or MFCC / MFSC features, as long as all data has the same size (use padding or cropping if needed).

+ +
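
A minimal sketch of such a pure-CNN classifier in Keras (the 128 mel bins x 100 frames input shape and the 40 phoneme classes are made-up numbers for illustration):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(128, 100, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(40, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])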

You may need RNN if you want to combine some phonemes to recognize a single word or longer.

+",16565,,,,,11/17/2019 1:52,,,,12,,,,CC BY-SA 4.0 +16566,1,16568,,11/17/2019 3:06,,4,230,"

I just find that Google patents some of the widely used machine learning algorithms. For example:

+ +

Does that mean I can't use those algorithms commercially?

+",16565,,2444,,1/7/2022 22:21,1/7/2022 22:21,Can Google's patented ML algorithms be used commercially?,,1,0,,,,CC BY-SA 4.0 +16568,2,,16566,11/17/2019 4:00,,1,,"

Can you use them commercially?

+ +

Yes.

+ +

Is Google able to sue you any time they want?

+ +

Yes.

+ +

Will they do that...

+ +

Probably not.

+ +

Google isn't a known patent bully, I would give them the benefit of the doubt in this kind of situation and say, unless you start really giving them real trouble, they wouldn't do anything. Some companies/people know an idea can really be used for good and patent it to protect its use not to inhibit its use. By patenting the idea and setting a precedent of not suing they are effectively allowing everyone to benefit. Maybe in the future Google cloud, Azure and Amazon web services will lose some money in a legal battle, but I doubt you personally will be hit with a lawsuit.

+",30365,,2444,,11/18/2019 12:00,11/18/2019 12:00,,,,1,,,,CC BY-SA 4.0 +16570,1,16580,,11/17/2019 10:31,,6,1171,"

I want to prevent my model from overfitting. I think that k-fold cross-validation (because it is doing this each time with different datasets) may be more effective than splitting the dataset into training and test datasets to prevent overfitting, but a colleague (who has little experience in ML) says that, to prevent overfitting, the 70/30% split performs better than the k-fold cross-validation. In my opinion, k-fold cross-validation provides a reliable method to test the model performance.

+

Is k-fold cross-validation more effective than splitting the dataset into training and test datasets to prevent overfitting? I am not concerned with computational resources.

+",30599,,2444,,6/6/2021 0:03,2/11/2023 2:04,Is k-fold cross-validation more effective than splitting the dataset into training and test datasets to prevent overfitting?,,4,0,,,,CC BY-SA 4.0 +16571,2,,16570,11/17/2019 10:49,,3,,"

Purely in terms of overfitting, and assuming you train both for equal amounts of time, 70/30 is probably better, but performance is not going to be very good. Not training on 30% of the data will make both training and test results equally bad (in my opinion). But it won't overfit, that is for sure. Cross-validation (you have in mind 90/10, I assume) will take a long time, so it won't have enough time to train and it might overfit more compared to 70/30, but as it is going to see all training samples, 90% at a time, there is a good chance it will train better. So, at the end of the day, it will overfit more but perform better.

+ +

If you are asking which is better overall, performance and overfitting, I say it depends on the size of your dataset. If you have millions of samples in it, you can even use a 98/1/1 for training, testing and validation and still be OK.

+ +

Edit: Thinking a little more about it, even if the time is not an issue the situation will roughly be the same. But you will know the performance of the model on new data to a higher certainty with cross validation.

+",22301,,22301,,11/17/2019 13:35,11/17/2019 13:35,,,,4,,,,CC BY-SA 4.0 +16574,2,,16570,11/17/2019 15:35,,0,,"

Both methods are fine if used properly. As a rule of thumb, when training time is not an issue, use split method if you have more data than you can use in your model and cross-validation if not. I would suggest handling overfitting by some other means.

+",3579,,,,,11/17/2019 15:35,,,,0,,,,CC BY-SA 4.0 +16575,1,16589,,11/17/2019 16:10,,4,5254,"

In simple words, what does end-to-end training mean, in the context of deep learning?

+",31312,,2444,,3/13/2020 16:29,3/13/2020 17:41,What does end-to-end training mean?,,3,1,,,,CC BY-SA 4.0 +16576,1,16587,,11/17/2019 17:27,,1,137,"

I'm studying a master's degree and my final work is going to be about the convolutional neural network.

+ +

I have read a lot of books and I did Stanford's convolutional neural networks course, but I need more.

+ +

Are there books or papers on the details of convolutional neural networks (in particular, convolutional layer)?

+",4920,,2444,,1/17/2021 19:32,1/17/2021 19:32,What are examples of books or papers on the details of convolutional neural networks?,,4,0,,,,CC BY-SA 4.0 +16577,2,,16576,11/17/2019 17:39,,1,,"

Chris Olah's work is always inspired, and not too technical as one would expect. He has several papers on CNNs on his website. In particular, check the series titled ""Convolutional Neural Networks"" with four papers on the topic.

+",22301,,22301,,11/17/2019 23:42,11/17/2019 23:42,,,,0,,,,CC BY-SA 4.0 +16578,1,,,11/17/2019 17:53,,1,105,"

Given the following 3 research papers, the authors have shown different heatmap graphical representations for features of the trained CNN models:

+ +
    +
  1. On the performance of Convnet feature for place recognition: link +

  2. +
  3. NetVLAD: CNN architecture for weakly supervised place recognition: link +

  4. +
  5. Deep Learning Features at Scale for Visual Place Recognition: link +
  6. +
+ +

Does anyone know the easiest heatmap implementation in Python, given deploy.prototxt and model_weights_bias.caffemodel files?

+ +

PS: I am aware of the following answers and packages: answer1, package1 but they do not provide these solutions shown in figures above!

+ +

Thanks,

+",31312,,,,,11/17/2019 17:53,Plot class activation heatmap of Caffe Model in Python,,0,0,,,,CC BY-SA 4.0 +16579,2,,16556,11/17/2019 18:40,,5,,"

What is Bayesian optimization?

+ +

Introduction

+ +

Bayesian optimization (BO) is an optimization technique used to model an unknown (usually continuous) function $f: \mathbb{R}^d \rightarrow Y$, where typically $d \leq 20$, so it can be used to solve regression and classification problems, where you want to find an approximation of $f$. In this sense, BO is similar to the usual approach of training a neural network with gradient descent combined with the back-propagation algorithm, so that to optimize an objective function. However, BO is particularly suited for regression or classification problems where the unknown function $f$ is expensive to evaluate (that is, given the input $\mathbf{x} \in \mathbb{R}^d$, the computation of $f(x) \in Y$ takes a lot of time or, in general, resources). For example, when doing hyper-parameter tuning, we usually need to first train the model with the new hyper-parameters before evaluating the specific configuration of hyper-parameters, but this usually takes a lot of time (hours, days or even months), especially when you are training deep neural networks with big datasets. Moreover, BO does not involve the computation of gradients and it usually assumes that $f$ lacks properties such as concavity or linearity.

+ +

How does Bayesian optimization work?

+ +

There are three main concepts in BO

+ +
    +
  • the surrogate model, which models an unknown function,
  • +
  • a method for statistical inference, which is used to update the surrogate model, and
  • +
  • the acquisition function, which is used to guide the statistical inference and thus it is used to update the surrogate model
  • +
+ +

The surrogate model is usually a Gaussian process, which is just a fancy name to denote a collection of random variables such that the joint distribution of those random variables is a multivariate Gaussian probability distribution (hence the name Gaussian process). Therefore, in BO, we often use a Gaussian probability distribution (the surrogate model) to model the possible functions that are consistent with the data. In other words, given that we do not know $f$, rather than finding the usual point estimate (or maximum likelihood estimate), like in the usual case of supervised learning mentioned above, we maintain a Gaussian probability distribution that describes our uncertainty about the unknown $f$.

+ +

The method of statistical inference is often just an iterative application of the Bayes rule (hence the name Bayesian optimization), where you want to find the posterior, given a prior, a likelihood and the evidence. In BO, you usually place a prior on $f$, which is a multivariate Gaussian distribution, then you use the Bayes rule to find the posterior distribution of $f$ given the data.

+ +

What is the data in this case? In BO, the data are the outputs of $f$ evaluated at certain points of the domain of $f$. The acquisition function is used to choose these points of the domain of $f$, based on the computed posterior distribution. In other words, based on the current uncertainty about $f$ (the posterior), the acquisition function attempts to cleverly choose points of the domain of $f$, $\mathbf{x} \in \mathbb{R}^d$, which will be used to find an updated posterior. Why do we need the acquisition function? Why can't we simply evaluate $f$ at random domain points? Given that $f$ is expensive to evaluate, we need a clever way to choose the points where we want to evaluate $f$. More specifically, we want to evaluate $f$ where we are more uncertain about it.

+ +

There are several acquisition functions, such as expected improvement, knowledge-gradient, entropy search, and predictive entropy search, so there are different ways of choosing the points of the domain of $f$ where we want to evaluate it to update the posterior, each of which deals with the exploration-exploitation dilemma differently.

+ +

What can Bayesian optimization be used for?

+ +

BO can be used for tuning hyper-parameters (also called hyper-parameter optimisation) of machine learning models, such as neural networks, but it has also been used to solve other problems.

+ +
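
As a rough illustration of this workflow, here is a minimal sketch using the scikit-optimize library (the library choice and the cheap stand-in function are my own, purely for illustration; in practice $f$ would, e.g., train a model with a given hyper-parameter setting and return the validation loss):

from skopt import gp_minimize

def f(x):                    # stand-in for an expensive black-box function
    return (x[0] - 2.0) ** 2 + 0.1 * x[0]

# Gaussian-process surrogate; the acquisition function picks where to evaluate f next
result = gp_minimize(f, dimensions=[(-5.0, 5.0)], n_calls=20, random_state=0)
print(result.x, result.fun)  # approximate minimizer and minimum found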

What is an alternative surrogate model?

+ +

In the book Automated Machine Learning: Methods, Systems, Challenges (by Frank Hutter et al.) that you are quoting, the authors say that the commonly used surrogate model Gaussian process scales cubically in the number of data points, so sparse Gaussian processes are often used. Moreover, Gaussian processes also scale badly with the number of dimensions. In section 1.3.2.2., the authors describe some alternative surrogate models to the Gaussian processes, for example, alternatives that use neural networks or random forests.

+",2444,,2444,,11/17/2019 19:56,11/17/2019 19:56,,,,0,,,,CC BY-SA 4.0 +16580,2,,16570,11/17/2019 18:52,,4,,"

K-fold cross-validation is probably preferred in terms of completeness and generalization: you ensure that the system has seen the complete dataset for training. However, in deep learning this is often not feasible due to time and power constraints. They can both be used, and there is not one better than the other. It really depends on the specific case, the size of the dataset and the time and hardware available. Note that overfitting can be (partially) remedied by things such as dropout.

+ +
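
If you do want to compare the two empirically, a small scikit-learn sketch (the iris data and logistic regression are just stand-ins for your own data and model) would be:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# 70/30 hold-out split: a single estimate of generalization performance
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(clf.fit(X_tr, y_tr).score(X_te, y_te))

# 5-fold cross-validation: five estimates, averaged
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean(), scores.std())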

To be fair: it is fine to have a discussion about this with your colleagues, but as so often there is no one correct answer. If you really want proof, you can test it out and compare them. But performance-wise (i.e. the model's predictive power), the difference will be small.

+",29995,,29995,,11/18/2019 13:32,11/18/2019 13:32,,,,0,,,,CC BY-SA 4.0 +16583,5,,,11/17/2019 20:23,,0,,,2444,,2444,,11/17/2019 20:23,11/17/2019 20:23,,,,0,,,,CC BY-SA 4.0 +16584,4,,,11/17/2019 20:23,,0,,"For questions related to Bayesian optimization (BO), which is a technique used to model an unknown function (that is expensive to evaluate), based on concepts of a surrogate model (which is usually a Gaussian process, which models the unknown function), Bayesian inference (to update the Gaussian process) and an acquisition function (which guides the Bayesian inference). BO can be used for hyper-parameter optimization.",2444,,2444,,11/17/2019 20:23,11/17/2019 20:23,,,,0,,,,CC BY-SA 4.0 +16585,2,,16575,11/17/2019 21:59,,2,,"

This is relevant when you have two or more neural networks serving as components to a larger architecture. Training this architecture in an end-to-end manner means simultaneously training all components (i.e. training it as a single network).

+ +

The best examples I can think of are image captioning architectures. These usually comprise two networks: a CNN whose role is to extract features from the input images and an RNN that accepts the CNN's features and generates the output captions.

+ +

+ +

You have two options for training:

+ +
    +
  1. First, train the CNN first for some arbitrary task (e.g. image classification) in hopes that it learns how to extract features. Then use the CNN to extract features from the input images and use those as inputs to train the RNN. This procedure trains the two components in two completely separate phases.

  2. +
  3. Treat the whole architecture as a single network and backpropagete the gradients to the CNN so that it also can be trained. This procedure trains the two components simultaneously. This is what we call end-to-end training.

  4. +
+",26652,,,,,11/17/2019 21:59,,,,0,,,,CC BY-SA 4.0 +16586,2,,16576,11/17/2019 22:30,,0,,"

I'm not sure if this is what you are looking for but I find Goodfellow's book a pretty good resource:

+ +

Goodfellow, specifically Section 2, Chapter 9 deals with convolutional neural networks: https://www.deeplearningbook.org/

+ +

'Pattern Recognition and Machine Learning' by Bishop might also be worth a look: it contains a section (5.5.5, pg 267 onwards) as well as an exercise, and a general discussion about neural networks in image recognition.

+ +

If you edit your question to post a bit more detail, we can offer better answers, for example, what is about the convolutional layer? How it's implemented?

+ +

If you are looking for a more basic introduction to convolutional layers I would also suggest:

+ +

A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way gives a pretty general overview, starting at the difference between CNNs and ANNs and explains why CNNs are superior to ANNs (for certain problems). It also gives some details about how the convolution actually works.

+ +

Demystifying the transpose convolution explains the transpose convolution operation in the context of how a traditional convolution; this may not be relevant if you are strictly using CNNs and not transpose-CNNs.

+ +

Understanding of Convolutional Neural Network (CNN) — Deep Learning is quite similar to ""A Comprehensive..."" link above, but it also includes information about filtering and shows the effect that different filters have on an image, which is certainly very import to an understanding of why we use CNNs.

+ +

Building a Convolutional Neural Network (CNN) in Keras (or one of the other thousand similar pages) are pretty good for just starting out and building your own CNN classifier. You can also check out examples from Keras, e.g. CIFAR10 CNN, but these tend to give you a very little information about why they designed the network the way that they did.

+ +

If, on the other hand, you are looking for some more advanced resources, here are is one that springs to mind:

+ +

Deep Residual Learning for Image Recognition by He et al., deals with a major advance in image recognition, using Residual Networks (ResNet). This type of network has become pretty popular, so I highly recommend giving it a read.

+",31360,,31360,,11/20/2019 20:06,11/20/2019 20:06,,,,3,,,,CC BY-SA 4.0 +16587,2,,16576,11/17/2019 22:31,,0,,"

Chapter 9 of the book Deep Learning (2016), by Goodfellow et al., describes the convolutional (neural) network (CNN), its main operations (namely, convolution and pooling) and properties (such as parameter sharing).

+ +

There's also the article From Convolution to Neural Network, which first introduces the mathematical operation convolution and then describes its connection with signal processing (where images can be viewed as 2D signals) and, finally, describes the CNN.

+",2444,,,,,11/17/2019 22:31,,,,0,,,,CC BY-SA 4.0 +16589,2,,16575,11/18/2019 6:40,,2,,"

Another way to explain deep learning as an end-to-end framework: in deep learning, pre-processing or feature-extraction steps are not necessary, so there is only a single processing step, which is to train the deep learning model. In traditional machine learning methods, separate feature-extraction steps are usually required.

+ +

+ +

For example, in image classification, a deep learning model like a CNN can receive a raw image and then be trained to classify it directly. If we didn't use deep learning, we would need to extract features in additional steps, such as edge detection, corner detection, or colour histograms.

+ +

You can also watch Andrew Ng's explanation here.

+",16565,,,,,11/18/2019 6:40,,,,1,,,,CC BY-SA 4.0 +16590,2,,16515,11/18/2019 8:29,,2,,"

Some ideas off the top of my head:

+ +
    +
  1. In the case of $dy/dx_2>0$ you could compute the gradient using the chain rule and limit the weights so that the constraint holds

  2. +
  3. In the case of $y + x_5 + x_7 < K$ you could use a clipping function on the output layer? (A rough sketch of this idea is given below.)

  4. +
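Here is one possible sketch of idea 2 in tf.keras. The input size, the feature indices 5 and 7, and the constant K are all assumptions made up for illustration; the clipping simply caps the prediction so the constraint cannot be violated at inference time.

import tensorflow as tf
from tensorflow.keras import layers, Model

K_LIMIT = 10.0                                   # the constant K from the constraint (assumed value)

inputs = layers.Input(shape=(8,))                # assume 8 input features
h = layers.Dense(32, activation='relu')(inputs)
y_raw = layers.Dense(1)(h)                       # unconstrained prediction

def clip_output(args):
    y, x = args
    upper = K_LIMIT - x[:, 5:6] - x[:, 7:8]      # cap y at K - x_5 - x_7
    return tf.minimum(y, upper)

y = layers.Lambda(clip_output)([y_raw, inputs])
model = Model(inputs, y)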
+",31335,,,,,11/18/2019 8:29,,,,0,,,,CC BY-SA 4.0 +16593,1,16595,,11/18/2019 9:07,,5,2583,"

I am reading the paper Hierarchical Attention-Based Recurrent Highway Networks for Time Series Prediction (2018) by Yunzhe Tao et al.

+ +

In this paper, they use the expression ""semantic levels"" several times. Some examples:

+ +
    +
  • HRHN can adaptively select the relevant exogenous features in different semantic levels
  • +
  • the temporal information is usually complicated and may occur at different semantic levels
  • +
  • The encoder RHN reads the convolved features $(w_1,w_2,···,w_{T−1})$ and models their temporal dependencies at different semantic levels
  • +
  • Then an RHN is used to model the temporal dependencies among convolved input features at different semantic levels
  • +
+ +

What is the semantic level?

+",31370,,31370,,11/19/2019 7:56,11/19/2019 10:24,"What is the ""semantic level""?",,1,0,,,,CC BY-SA 4.0 +16595,2,,16593,11/18/2019 11:18,,4,,"

In language theory, there are generally several recognised levels that can be studied in relation to one another or independently. The semantic level is the one dealing with the meaning of the text (""semantic"" comes from the Greek and means ""to signify""). The semantic level is therefore generally independent of the syntax and even of the language used to convey the message. Here is an interesting picture I found on the internet to illustrate my point.

+ +

EDIT: I took some time to read the paper. I think ""semantic levels"" here refers to the different neural network layers used for the exogenous features.

+ +

Here is a modified version of their figure I've drawn to make it clearer:

+ +

In particular, from what I have understood so far, the attention coefficients apply to the whole semantic level (which I find is not clearly indicated in their figure).

+ +

The LHS of their figure would then be better described by this new one:

+ +

+ +

Hope this helps!

+",31374,,31374,,11/19/2019 10:24,11/19/2019 10:24,,,,5,,,,CC BY-SA 4.0 +16596,1,,,11/18/2019 12:18,,1,32,"

I'm working on a beta VAE model learning a latent representation used as a similarity metric for image registration.

+ +

One of the main problems I'm facing is that the encoder + sampler output doesn't fulfill the requirements for a mathematical metric (https://en.wikipedia.org/wiki/Metric_(mathematics)) - is there a known way to decrease the same-sample distance after encoding + sampling, as well as to promote the triangle inequality and symmetry?

+",31376,,,,,11/18/2019 12:18,Reduce same sample distance in VAE encodings,,0,8,,,,CC BY-SA 4.0 +16597,1,16626,,11/18/2019 14:59,,4,735,"

I know that classical control systems have been used to solve the problem of the inverted pendulum - inverted pendulum.

+ +

But I've seen that people have also used machine learning techniques to solve this nowadays - machine learning of inverted pendulum.

+ +

I came across a video on how to apply a machine learning technique called reinforcement learning on openAI gym - OpenAI gym reinforcement learning.

+ +

My question is, can I use this simulation to train a controller for a real-world application of the inverted pendulum?

+",31381,,1847,,11/19/2019 7:27,11/19/2019 13:58,Can OpenAI simulations be used in real world applications?,,1,2,,,,CC BY-SA 4.0 +16598,1,,,11/18/2019 15:37,,5,79,"

Decision trees and random forests may or may not be more suited to solve supervised learning problems with imbalanced labels (or classes) in datasets. For example, see the article Using Random Forest to Learn Imbalanced Data, this Stats SE question and this Medium post. The information across these sources does not seem to be consistent.

+ +

How could decision tree learning algorithms cope with imbalanced classes?

+",30599,,2444,,11/21/2019 2:15,11/21/2019 2:15,How could decision tree learning algorithms cope with imbalanced classes?,,1,0,,,,CC BY-SA 4.0 +16599,1,,,11/18/2019 16:03,,10,3324,"

Kaggle is limited to only supervised learning problems. There used to be www.rl-competition.org but they've stopped.

+ +

Is there anything else I can do other than locally trying out different algorithms for various RL problems?

+",30632,,37965,,6/18/2020 13:29,6/18/2020 13:30,Are there any online competitions for Reinforcement Learning?,,4,3,,,,CC BY-SA 4.0 +16600,2,,16599,11/18/2019 16:54,,2,,"

OpenAI has leaderboards for their gym-environments, if you want to compete with other people on runtime and efficiency.

+",30565,,26652,,11/19/2019 12:36,11/19/2019 12:36,,,,1,,,,CC BY-SA 4.0 +16601,2,,16599,11/18/2019 17:17,,5,,"

AICrowd has numerous challenges in the domain, with some very interesting challenges running currently. Here is a short list:

+ +

Hope this helps!

+",26961,,37962,,6/18/2020 13:30,6/18/2020 13:30,,,,0,,,,CC BY-SA 4.0 +16603,5,,,11/18/2019 18:53,,0,,,2444,,2444,,11/18/2019 18:53,11/18/2019 18:53,,,,0,,,,CC BY-SA 4.0 +16604,4,,,11/18/2019 18:53,,0,,"For questions related to the wake-sleep algorithm, which can be used in the context of unsupervised learning for neural networks.",2444,,2444,,11/18/2019 18:53,11/18/2019 18:53,,,,0,,,,CC BY-SA 4.0 +16605,1,,,11/18/2019 19:23,,1,40,"

I'm new to Deep Learning. I used Keras and trained an inception_resnet_v2 model for my binary classification application (fire detection). As suggested in my previous question about a non-X class, I prepared a dataset of 8000 images of fire, and a larger dataset for non-fire (20,000 random images), to make sure the network also sees images of non-fire in order to perform classification.

+ +

I trained the model, but now, when loading the model and passing it images of fire and non-fire, it shows the same result for all of them:

+ +
[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+[[0. 1.]]
+
+ +

What is going wrong? Am I doing anything wrong? Should I get the result another way?

+ +

===============================================

+ +

I know it's not SO, but this is my prediction code in case it matters:

+ +
from __future__ import print_function
+from keras.models import load_model, model_from_json
+import cv2, os, glob
+import numpy as np
+from keras.preprocessing import image
+
+if __name__ == '__main__':
+    model = load_model('Resnet_26_0.79_model_weights.h5')
+
+    os.chdir(""test"")
+    for file in glob.glob(""*.jpg""):
+        img_path = file
+        img = image.load_img(img_path, target_size=(300, 300))
+        x = image.img_to_array(img)
+        x = np.expand_dims(x, axis=0)
+
+        dictionary = {0: 'non-fire', 1: 'fire'}
+
+        results = model.predict(x)
+        print(results)
+        predicted_class= np.argmax(results)
+        acc = 100*results[0][predicted_class]
+        print(""Network prediction is: file: ""+ file+"", ""+dictionary[predicted_class]+"", %{:0.2f}"".format(acc))
+
+ +

And here is the training:

+ +
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
+from keras.preprocessing.image import ImageDataGenerator
+from keras.layers import Dense, Activation, Flatten, Dropout
+from keras.models import Sequential, Model
+from keras.optimizers import SGD, Adam
+from keras.callbacks import ModelCheckpoint
+from keras.metrics import binary_accuracy
+import os
+import json
+#==========================
+HEIGHT = 300
+WIDTH = 300
+TRAIN_DIR = ""data""
+BATCH_SIZE = 8 #8
+steps_per_epoch = 1000 #1000
+NUM_EPOCHS = 50 #50
+lr= 0.00001
+#==========================
+FC_LAYERS = [1024, 1024]
+dropout = 0.5
+
+def build_finetune_model(base_model, dropout, fc_layers, num_classes):
+    for layer in base_model.layers:
+        layer.trainable = False
+
+    x = base_model.output
+    x = Flatten()(x)
+    for fc in fc_layers:
+        # New FC layer, random init
+        x = Dense(fc, activation='relu')(x) 
+        x = Dropout(dropout)(x)
+
+    # New layer
+    predictions = Dense(num_classes, activation='sigmoid')(x) 
+    finetune_model = Model(inputs=base_model.input, outputs=predictions)
+    return finetune_model
+
+train_datagen =  ImageDataGenerator(preprocessing_function=preprocess_input, rotation_range=90, horizontal_flip=True, vertical_flip=True
+                                    ,validation_split=0.2)
+train_generator = train_datagen.flow_from_directory(TRAIN_DIR, target_size=(HEIGHT, WIDTH), batch_size=BATCH_SIZE
+                                                    ,subset=""training"")
+#split validation manually
+validation_generator = train_datagen.flow_from_directory(TRAIN_DIR, target_size=(HEIGHT, WIDTH), batch_size=BATCH_SIZE,subset=""validation"")
+
+base_model = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(HEIGHT, WIDTH, 3))
+
+root=TRAIN_DIR
+class_list = [ item for item in os.listdir(root) if os.path.isdir(os.path.join(root, item)) ]
+print (""class_list: ""+str(class_list))
+
+finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
+
+adam = Adam(lr)
+# change to categorical_crossentropy for multiple classes
+finetune_model.compile(adam, loss='binary_crossentropy', metrics=['accuracy'])
+
+filepath=""./checkpoints/"" + ""Resnet_{epoch:02d}_{acc:.2f}"" +""_model_weights.h5""
+checkpoint = ModelCheckpoint(filepath, monitor=[""val_accuracy""], verbose=1, mode='max', save_weights_only=False)
+callbacks_list = [checkpoint]
+
+history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=BATCH_SIZE, 
+                                    validation_data=validation_generator, validation_steps = validation_generator.samples, 
+                                       steps_per_epoch=steps_per_epoch, 
+                                       shuffle=True, callbacks=callbacks_list)
+
+",9053,,9053,,11/19/2019 1:38,11/19/2019 3:09,Semantic issues with predictions made by my trained model,,1,0,,,,CC BY-SA 4.0 +16606,1,16622,,11/18/2019 21:02,,2,1221,"

From many blogs and this one https://web.archive.org/web/20160308070346/http://mcts.ai/about/index.html we know that the process of the MCTS algorithm has 4 steps.

+ +
+
    +
  1. Selection: Starting at root node R, recursively select optimal child nodes until a leaf node L is reached.
  2. +
+
+ +

What does leaf node L mean here? I thought it should be a node representing a terminal state of the game, or in other words, one which ends the game. If L is not a terminal node (one end state of the game), how do we decide that the selection step stops on node L? In general algorithmic terms, a leaf node is one that does not have any children.

+ +
+
    +
  1. Expansion: If L is a not a terminal node (i.e. it does not end the game) then create one or more child nodes and select one C.
  2. +
+
+ +

From this description I realise that my previous thought was obviously incorrect. Then, if L is not a terminal node, it implies that L should have children, so why not continue finding a child from L at the ""Selection"" step? Do we have the children list of L at this step?
From the description of this step itself, when do we create one child node, and when do we need to create more than one child node? Based on what rule/policy do we select node C?

+ +
+
    +
  1. Simulation: Run a simulated playout from C until a result is achieved.
  2. +
+
+ +

Because of the confusion from the 1st question, I totally cannot understand why we need to simulate the game. I thought that, from the selection step, we could reach the terminal node, and the game should end at node L on this path. We would then not even need to do ""Expansion"", because node L is the terminal node.

+ +
    +
  1. Backpropagation: Update the current move sequence with the simulation result. Fine.
  2. +
+ +

Last question: where did you get the answers to these questions?

+ +

Thank you

+",31389,,,,,11/20/2019 1:13,How to understand the 4 steps of Monte Carlo Tree Search,,2,5,,,,CC BY-SA 4.0 +16607,1,,,11/19/2019 1:21,,2,108,"

I have achieved around 85% accuracy using the following architecture: + +

+ +

I used a learning rate of 0.001 and trained the model over 125 epochs with a batch size of 64. Any suggestions would be much appreciated. Thanks in advance.

+",29877,,,,,11/21/2019 7:42,What could I do to this CNN to achieve a higher accuracy on the cifar10 dataset?,,1,2,,,,CC BY-SA 4.0 +16608,1,,,11/19/2019 1:55,,1,93,"

Are there any good ways of simultaneously incorporating object detection with speech recognition? For example, if you want to identify whether an animal is a dog or cat, we can obviously use visual features (e.g. YOLO, CNNs, etc.). But how would you incorporate speech and sound in this model?

+",28201,,2444,,9/11/2020 15:07,5/29/2023 21:04,Are there any good ways of simultaneously incorporating object detection with speech recognition?,,1,0,,,,CC BY-SA 4.0 +16609,2,,16557,11/19/2019 1:55,,3,,"

Your code suggests a likely problem here: It looks like you are training a very deep neural network with sigmoidal activation functions at every layer.

+ +

The sigmoid has the property that its derivative (S*(1-S)) will be extremely small when the activation function's value is close to 0 or close to 1. In fact, the largest it can be is about 0.25.

+ +

The backpropagation algorithm, which is used to train a neural network, will propagate an error signal backwards. At each layer, the error signal will be multiplied by, among other things, the derivative of the activation function.

+ +

It is therefore the case that by the 4th layer your signal is at most $0.25^4 = \frac{1}{256}$ the size that it was at the start of the network. In fact, it is likely much smaller than this. With a smaller signal, your learning rates at the bottom of the network will effectively be much smaller than the learning rates at the top, which will make it very difficult to pick a learning rate that is effective overall.

+ +

This problem is known as the vanishing gradient.
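A quick numeric illustration of this effect (plain numpy, using the best-case sigmoid derivative of 0.25 at every layer):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

s = sigmoid(0.0)
print(s * (1 - s))                 # 0.25, the largest value the sigmoid's derivative can take

for depth in [1, 2, 4, 8]:
    print(depth, 0.25 ** depth)    # the error signal shrinks geometrically with depth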

+ +

To fix this, if you want to use a deep architecture, consider using an activation function that does not suffer from a vanishing gradient. The Rectified Linear activation function, used in so-called ""ReLU"" units, is a non-linear activation that does not have a vanishing gradient. It is common to use ReLUs for the earlier layers in a network, and a sigmoid at the output layer, if you need outputs to be bounded between 0 and 1.

+",16909,,16909,,11/19/2019 2:13,11/19/2019 2:13,,,,0,,,,CC BY-SA 4.0 +16610,1,,,11/19/2019 2:30,,12,2022,"

On Sutton and Barto's RL book, the reward hypothesis is stated as

+ +
+

that all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward)

+
+ +

Are there examples of tasks where the goals and purposes cannot be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal?

+ +

All I can think of are tasks with subjective rewards, like ""writing good music"", but I am not convinced because maybe this is actually definable (perhaps by some super-intelligent alien) and we just aren't smart enough yet. Thus, I'm especially interested in counterexamples that logically or provably fail the hypothesis.

+",31395,,2444,,12/20/2020 16:44,1/13/2021 4:06,Counterexamples to the reward hypothesis,,4,4,,,,CC BY-SA 4.0 +16611,1,,,11/19/2019 2:32,,1,97,"

I tried to implement exactly the same Python code by Andrej Karpathy to train an RL agent to play Pong, except that I migrated the environment from Gym to Retro. Everything is the same except that the action space in Retro is given as indices and not as a discrete space as in Gym. The index has a size of 8, and indices 4 and 5 are the actions to move up and down.

+ +

But why has this small modification caused the agent not to learn at all, with the running reward stuck at -20 after over 3,000 episodes?

+ +

I have checked the frame pre-processing before input to the policy forward neural network and it seems to be normal.

+ +

As far as I know, the output from the neural network is the probability of the paddle moving upwards, so I checked it. After a few thousand episodes, the probability of the agent moving up just stayed at 0.5.

+ +

I know the problem exists somewhere between the pre-processing and the policy forward neural network, but I just cannot locate it. I would appreciate it if someone could help.

+ +

The whole code is as follows:

+ +
import retro
+import numpy as np
+import _pickle as pickle
+
+H = 200 # number of hidden layer neurons
+batch_size = 10 # every how many episodes to do a param update?
+learning_rate = 1e-4
+gamma = 0.98 # discount factor for reward
+decay_rate = 0.98 # decay factor for RMSProp leaky sum of grad^2
+resume = False # resume from previous checkpoint?
+render = True
+
+
+# model initialization
+D = 80 * 80 # input dimensionality: 80x80 grid
+if resume:
+  model = pickle.load(open('save.p', 'rb'))
+else:
+  model = {}
+  model['W1'] = np.random.randn(H,D) / np.sqrt(D) # ""Xavier"" initialization
+  model['W2'] = np.random.randn(H) / np.sqrt(H)
+
+grad_buffer = { k : np.zeros_like(v) for k,v in model.items() } # update buffers that add up gradients over a batch
+rmsprop_cache = { k : np.zeros_like(v) for k,v in model.items() } # rmsprop memory
+
+def sigmoid(x): 
+  return 1.0 / (1.0 + np.exp(-x)) # sigmoid ""squashing"" function to interval [0,1]
+
+def prepro(I):
+  """""" prepro 210x160x3 uint8 frame into 6400 (80x80) 1D float vector """"""
+  I = I[35:195] # crop
+  I = I[::2,::2,0] # downsample by factor of 2
+  I[I == 144] = 0 # erase background (background type 1)
+  I[I == 109] = 0 # erase background (background type 2)
+  I[I != 0] = 1 # everything else (paddles, ball) just set to 1
+  return I.astype(np.float).ravel()
+
+def discount_rewards(r):
+  """""" take 1D float array of rewards and compute discounted reward """"""
+  discounted_r = np.zeros_like(r)
+  running_add = 0
+  for t in reversed(range(0, r.size)):
+    if r[t] != 0: running_add = 0 # reset the sum, since this was a game boundary (pong specific!)
+    running_add = running_add * gamma + r[t]
+    discounted_r[t] = running_add
+  return discounted_r
+
+def policy_forward(x):
+  h = np.dot(model['W1'], x)
+  h[h<0] = 0 # ReLU nonlinearity
+  logp = np.dot(model['W2'], h)
+  p = sigmoid(logp)
+  return p, h # return probability of taking action 2, and hidden state
+
+def policy_backward(eph, epdlogp):
+  """""" backward pass. (eph is array of intermediate hidden states) """"""
+  dW2 = np.dot(eph.T, epdlogp).ravel()
+  dh = np.outer(epdlogp, model['W2'])
+  dh[eph <= 0] = 0 # backpro prelu
+  dW1 = np.dot(dh.T, epx)
+  return {'W1':dW1, 'W2':dW2}
+
+env=retro.make(game='Pong-Atari2600',players=1)
+observation = env.reset()
+prev_x = None # used in computing the difference frame
+xs,hs,dlogps,drs = [],[],[],[]
+running_reward = None
+reward_sum = 0
+episode_number = 0
+
+while True:
+  if render: env.render()
+
+  action=[0,0,0,0,0,0,0,0] #reset the rl action
+  # preprocess the observation, set input to network to be difference image
+  cur_x = prepro(observation)
+  x = cur_x - prev_x if prev_x is not None else np.zeros(D)
+  prev_x = cur_x
+
+  # forward the policy network and sample an action from the returned probability
+  aprob, h = policy_forward(x)
+
+  rlaction = 4  if np.random.uniform() < aprob else 5 # roll the dice!
+  # record various intermediates (needed later for backprop)
+  xs.append(x) # observation
+  hs.append(h) # hidden state
+  y = 1 if rlaction == 4 else 0 # a ""fake label""
+  dlogps.append(y - aprob) # grad that encourages the action that was taken to be taken (see http://cs231n.github.io/neural-networks-2/#losses if confused)
+  action[rlaction]=1
+  # step the environment and get new measurements
+  observation, reward, done, info = env.step(action)
+  reward_sum += reward
+  drs.append(reward) # record reward (has to be done after we call step() to get reward for previous action)
+
+
+  if done: # an episode finished
+
+    episode_number += 1
+
+    # stack together all inputs, hidden states, action gradients, and rewards for this episode
+    epx = np.vstack(xs)
+    eph = np.vstack(hs)
+    epdlogp = np.vstack(dlogps)
+    epr = np.vstack(drs)
+    xs,hs,dlogps,drs = [],[],[],[] # reset array memory
+
+    # compute the discounted reward backwards through time
+    discounted_epr = discount_rewards(epr)
+    # standardize the rewards to be unit normal (helps control the gradient estimator variance)
+    discounted_epr -= np.mean(discounted_epr)
+    discounted_epr /= np.std(discounted_epr)
+
+    epdlogp *= discounted_epr # modulate the gradient with advantage (PG magic happens right here.)
+    grad = policy_backward(eph, epdlogp)
+    for k in model: grad_buffer[k] += grad[k] # accumulate grad over batch
+
+    # perform rmsprop parameter update every batch_size episodes
+    if episode_number % batch_size == 0:
+      for k,v in model.items():
+        g = grad_buffer[k] # gradient
+        rmsprop_cache[k] = decay_rate * rmsprop_cache[k] + (1 - decay_rate) * g**2
+        model[k] += learning_rate * g / (np.sqrt(rmsprop_cache[k]) + 1e-5)
+        grad_buffer[k] = np.zeros_like(v) # reset batch gradient buffer
+
+    # boring book-keeping
+    running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
+    print(('%d , %d , %f ') % (episode_number-1,reward_sum,running_reward))
+    if episode_number % 20 == 0: pickle.dump(model, open('save.p', 'wb'))
+    reward_sum = 0
+    observation = env.reset() # reset env
+    prev_x = None
+
+",30487,,,,,11/19/2019 2:32,"Same implementation, but agent is not learning in Retro Pong Environment",,0,1,,,,CC BY-SA 4.0 +16612,1,,,11/19/2019 2:41,,2,519,"

I have stereo pairs (left and right images) of concrete cracks. I want to measure the length of the crack from those image pairs. Which neural network is appropriate for measuring object dimensions from stereo images?

+ +

Note: I am required to use an NN-based technique only.

+",31396,,31396,,11/19/2019 8:24,11/19/2019 15:36,Which neural network is appropriate for measuring object dimensions from stereo images?,,2,2,,,,CC BY-SA 4.0 +16613,2,,16608,11/19/2019 2:54,,0,,"

Check out this paper. It deals with the problem of mixing input modalities! Here's the abstract.

+
+

This paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediate representation based on a combination of data from different modalities. The GMU learns to decide how modalities influence the activation of the unit using multiplicative gates. It was evaluated on a multilabel scenario for genre classification of movies using the plot and the poster. The GMU improved the macro f-score performance of single-modality approaches and outperformed other fusion strategies, including mixture of experts models. Along with this work, the MM-IMDb dataset is released which, to the best of our knowledge, is the largest publicly available multimodal dataset for genre prediction on movies.

+
+",31395,,2444,,9/11/2020 15:10,9/11/2020 15:10,,,,0,,,2/8/2021 18:15,CC BY-SA 4.0 +16614,2,,16612,11/19/2019 3:00,,2,,"

If you have stereo pairs, and you can identify the objects in the scene, you do not need a neural network; you can just use triangulation.
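As a rough sketch of what the triangulation step looks like for a rectified stereo pair (the focal length and baseline below are placeholder numbers, not values from your setup):

def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_m=0.1):
    # pinhole model for a rectified pair: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# e.g. a crack endpoint matched in both images with a disparity of 24 pixels
print(depth_from_disparity(24.0))  # depth in metres; 3D endpoints then give the crack length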

+ +

If you need to identify which objects in the scene are the same, you have an image segmentation problem. Depending on your problem and the amount of data you have access to, you may be able to use simple techniques like clustering-based segmentation, or you may be able to use NN-based techniques, like Mask R-CNN.

+",16909,,,,,11/19/2019 3:00,,,,3,,,,CC BY-SA 4.0 +16615,2,,16605,11/19/2019 3:09,,1,,"

I think you may have a class imbalance problem here, if I am reading your output correctly. You have 20,000 negative examples, but only 8000 positive ones, and you are minimizing binary cross-entropy without re-weighting the examples, so your model can achieve a low-ish loss just by consistently outputting a value close to 0. This forms a local optimum in the search space for the model.

+ +

To fix this, you could try to optimize some other loss function that is more sensitive to class imbalances, or, likely more productively, you could just use an equal number of examples for each class.
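A minimal sketch of the re-weighting option, using the class_weight argument of Keras' fit_generator. The class indices below are an assumption and should be checked against train_generator.class_indices.

# weight the 8,000-image class so it contributes as much to the loss as the 20,000-image class
class_weight = {0: 1.0, 1: 20000.0 / 8000.0}     # assuming 0 = non-fire, 1 = fire

history = finetune_model.fit_generator(
    train_generator,
    epochs=NUM_EPOCHS,
    steps_per_epoch=steps_per_epoch,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // BATCH_SIZE,
    class_weight=class_weight,
    callbacks=callbacks_list)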

+",16909,,,,,11/19/2019 3:09,,,,4,,,,CC BY-SA 4.0 +16616,1,16618,,11/19/2019 3:32,,3,364,"

In Sutton & Barto's ""Reinforcement Learning: An Introduction"", 2nd edition, page 199, they describe the on-policy distribution for episodic tasks in the following box:

+ +

+ +

I don't understand how this can be done without taking the length of the episode into account. Suppose a task has 10 states, has probability 1 of starting at the first state, then moves to any state uniformly until the episode terminates. If the episode has 100 time steps, then probability of the first state is proportional to $1 + 100\times 1/10$; if it has $1000$ time steps, it will be proportional to $1 + 1000\times 1/10$. However, the formula given would make it proportional to $1 + 1/10$ in both cases. What am I missing?

+",30679,,2444,user9947,1/29/2023 13:36,1/29/2023 13:36,"In the on-policy state distribution for episodic tasks, why don't we take into account the length of the episode?",,2,0,,,,CC BY-SA 4.0 +16617,1,16620,,11/19/2019 5:43,,0,199,"

Is there any method to train a TensorFlow AI/ML model so that it focuses on detecting the background of an image rather than common objects?

+ +

I'm a newbie to the ML field, but I was assigned a job to make an application that can analyse showroom images/places, detect the floor and walls, and then find out which material/ceramic/marble/etc. product they are.

+ +

Example: this is a showroom picture,

+ +

+ +

and the wall and the floor of the showroom use this product material:

+ +

+ +
    +
  • Is it possible to do something like I described?
  • +
  • How should I start?
  • +
  • If I don't want to install TensorFlow on my computer, is there a service that can produce a model to use on the device? (My goal is to use the model on an Android device.)
  • +
  • What method/type of ML should I approach: 'Classification', 'Object Detection', or something else?
  • +
+",31401,,,,,11/19/2019 8:21,Is there has any method to train Tensorflow AI/ML that I focus on detecting background of image more than common objects?,,1,3,,5/16/2020 23:33,,CC BY-SA 4.0 +16618,2,,16616,11/19/2019 6:21,,3,,"

Let's first assume that there is only one action so that $\pi(a|s) = 1$ for every state-action pair, which simplifies the discussion. Now let's consider a case with 100 time steps, 10 states and a uniform distribution for starting state $s_0$ with $h(s_0) = 1$. The result would be
\begin{align}
\eta(s_0) &= 1 + \sum_{i = 0}^9 \eta(s_i) \cdot p(s_0|s_i) =\\
&= 1 + \sum_{i = 0}^9 10 \cdot \frac{1}{10} = 11
\end{align}
Now let's consider a case with 1000 time steps where the other settings are the same as in the first case.
\begin{align}
\eta(s_0) &= 1 + \sum_{i = 0}^{9} \eta(s_i) \cdot p(s_0|s_i) =\\
&= 1 + \sum_{i = 0}^{9} 100 \cdot \frac{1}{10} = 101
\end{align}
In the first case
\begin{equation}
\mu(s_0) = \frac{11}{9\cdot 10 + 11} = 0.1089
\end{equation}
and in the second case you have
\begin{equation}
\mu(s_0) = \frac{101}{9\cdot 100 + 101} = 0.1009
\end{equation}
so it looks like you are correct that $\mu(s)$ depends on the length of the episode, but they didn't really say that it doesn't. Obviously, as the length of the episode increases, so will the number of times a certain state is visited, so you could say that the formula implicitly depends on the number of time steps. If $h(s_i)$ is equal for every state, then the results would be the same in both cases regardless of the number of time steps. Also, as the number of possible states grows very large, as it usually is in real problems, the results would approach each other.

+",20339,,,,,11/19/2019 6:21,,,,1,,,,CC BY-SA 4.0 +16620,2,,16617,11/19/2019 8:21,,4,,"

Trying to address all the questions asked in the end in the same order

+ +
    +
  • Most definitely possible.

  • +
  • I would say it's best you approach this with segmentation to start with.

  • +
  • Just use a free GPU runtime notebook service such as Google Colab or Kaggle Kernels. But you would not directly be able to integrate with the device, you'd have to keep moving input and output from your drive (on Colab). There might be a better service for the needs described, but this is the best I know on this.

  • +
  • The background can be segmented, and the segmented region can then be processed with transforms (e.g. convolutions or affine transforms) to extract the relevant information about the background.

  • +
+ +

Hope this was helpful!

+",25658,,,,,11/19/2019 8:21,,,,0,,,,CC BY-SA 4.0 +16621,2,,16612,11/19/2019 9:10,,3,,"

Is the image taken from a constant distance?

+ +

If yes, you'd first need to scale the images to the same dimensions. For a small dataset, say 100-500 images (the more the better), you'd need to label the dataset with proper scaling.

+ +

Once labeled, use it to train a CNN (although it would be best to train a ResNet). Once trained with decent accuracy, test it on the rest of your dataset.

+ +

I did something similar for one of my projects, check it out if you want to here.

+",31407,,2444,,11/19/2019 15:36,11/19/2019 15:36,,,,1,,,,CC BY-SA 4.0 +16622,2,,16606,11/19/2019 9:53,,2,,"

Imagine a game with a very clear first move, such as a game where choosing to go first if you win a coin toss brings a clear and obvious advantage.

+ +

In this situation standard MCTS does little exploration down the side of the tree that branches at the win toss > let opponent start step, as the basic simulations of the rest of the game at this split quickly show the large gain you get when always starting when you win the coin toss.

+ +

As a result, you would end up with a tree with very little expansion on the side of win the toss > put your opponent in, as every simulation step you do from even the most senior nodes ends with much worse expected outcome values than the alternatives on the other side of the tree where you do the correct move of always playing first.

+ +

These nodes on the side of letting your opponent start have huge potential sub-trees (as the whole game would still need to be played out if your opponent started), but would have very little searching done down them. As a result, on this side of the tree, you would have many leaf nodes with large (but as yet unexplored, outside of the basic, early simulations down that side) sub-trees that you could search if you modified the exploration vs exploitation algorithm.

+ +

As a basic example, take the 0/3 node at the far right on level one of the wiki example below, which would get much less attention than the much more promising 7/10 and 3/8 nodes, despite having potentially many subsequent children it could explore. If you took this node as your L node, you would expand its children that you have not yet searched, and thus find out more about why this side of the tree is bad and update your now more granular probabilities accordingly, just as it does for the 3/3 node here:
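For reference, the selection rule that produces this exploration/exploitation behaviour in standard MCTS is usually UCB1. A minimal sketch, assuming the usual formulation with exploration constant $c=\sqrt{2}$ and visit counts like those in the example nodes (the parent visit count of 21 is an assumption):

import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    if visits == 0:
        return float('inf')            # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# comparing the 0/3 node with the 7/10 node, assuming the parent has 21 visits
print(ucb1(0, 3, 21))                  # low win rate, so only the exploration term keeps it alive
print(ucb1(7, 10, 21))                 # higher overall score, so selection keeps going this way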

+ +

+",14997,,14997,,11/19/2019 10:06,11/19/2019 10:06,,,,13,,,,CC BY-SA 4.0 +16623,2,,16616,11/19/2019 10:29,,2,,"

You are missing that the expression

+ +

$$\sum_{s'} \eta(s')$$

+ +

is already a count of the expected length of an episode, and is used in the denominator to scale $\mu(s)$ such that $\sum_{s} \mu(s) = 1$

+ +

So the length of the episode is taken into account in the formula.

+ +

In practice you don't need to know $\mu(s)$, it can be left unresolved as a theoretical construct. What you care about for the theory to work is that the samples that you train with are drawn with same frequency - this happens automatically if you work with an on-policy algorithm. So the theory can hide the maths that you might need to do in order to determine actual values for $\eta(s)$ or $\mu(s)$

+",1847,,,,,11/19/2019 10:29,,,,1,,,,CC BY-SA 4.0 +16624,2,,16447,11/19/2019 11:38,,1,,"

There is a platform called Flow; it sounds like it's what you're looking for.

+ +

https://github.com/flow-project/flow

+",31411,,,,,11/19/2019 11:38,,,,0,,,,CC BY-SA 4.0 +16625,1,,,11/19/2019 12:22,,4,51,"

I'm looking for some suggestions on how to improve our vehicle image recognition. We have an online marketplace where customers submit photos of their vehicles. The photos need to meet certain requirements before the advert can be approved.

+ +

Customers are required to submit the following vehicle photos: front, back, left-side, right-side, engine (similar to the front photo but with the hood open) and instrument panel cluster. The vehicle must be well framed in the photo, in other words, it must not be too small or so big that the edges touch the frame of the photograph. It also needs to be one of the mentioned types and the camera must be facing the vehicle directly with only small angle variations (a front photo can't include a large piece of the side of a car).

+ +

Another developer had a go and built a CNN with Keras which does alleviate some manual grind (about 20,000 photos were used for training - no annotations). The accuracy sits at around 75% for the vehicle photos but only 55% for the engine and instrument cluster. Each photo is still manually checked, but it is a case of agreeing or disagreeing with what was recognised.

+ +

I was wondering if it wouldn't be better to detect a vehicle in the image using an existing pre-trained model like ImageAI. Use the bounding box of the vehicle to determine it is correctly placed in the frame of the photograph and within acceptable dimensions. There may be multiple vehicles in the picture so work with the most prominent one.

+ +

At that point, would it be worth trying to develop something to work out the pose of the vehicle (idea: https://github.com/johnberroa/CORY), or just doing some transfer learning with whatever pre-existing trained model was used and spending some time annotating the images?

+ +

+",31412,,2444,,11/19/2019 15:34,11/19/2019 15:34,How can I improve the performance of a model trained to detect vehicle poses?,,0,0,,,,CC BY-SA 4.0 +16626,2,,16597,11/19/2019 13:00,,1,,"

In general, you can use a simulation to prepare and train a controller for a real world application. A good example of this being done for robotics is in the paper Autonomous helicopter flight via reinforcement learning where a Reinforcement Learning agent was trained on a model of helicopter dynamics before being used in reality. Often, as in this case, such work is done to avoid expensive failures due to the trial and error nature of RL - if an error is expensive, such as crashing a helicopter, then ideally the agent performs the checks to avoid it in simulation, by planning or some other virtual environment as opposed to in the real world.

+ +

The main hurdle to completing training in simulation and then transferring to the real world is the fidelity of the simulation. The simulation's physics, including measurements of physical quantities, the size of time steps, and the amount of randomness/noise, should match between the simulation and the target real-world environment. If they do not match, then a learning agent could generate a policy that works in simulation, but that fails in reality.

+ +

For the autonomous helicopter, the researchers used data from a human operator controlling the real helicopter, to help generate a predictive model that was used in the simulation.

+ +

Can you do the same with Open AI Gym environments? Probably not, unfortunately. The main issue is that the units used are fixed in most environments, and are unlikely to closely relate to any specific real world implementation of the same kind of system. In addition, the physics is often simplified - probably a minor issue for CartPole, but a more major one for environments like LunarLander which ignores weight of fuel used and is a 2D simulation of a 3D environment.

+ +

So, for instance, in CartPole environments, the following values are fixed:

+ +
    +
  • Size of time step
  • +
  • Mass of cart
  • +
  • Mass and length of pole to be balanced
  • +
  • Force that cart motor pushes with
  • +
+ +

There are a couple of approaches you could use to work around this:

+ +
    +
  1. Make a new version of the environment and adjust it so that its values match the real environment you want to train for (a short sketch of this is given at the end of this answer). Note this may still be limited, as the physics model is still quite simple, and may not allow for the real operating characteristics of the cart motor.

  2. +
  3. Use the CartPole environment as-is, not to train a controller directly, but to select hyper parameters, such as neural network size, learning rate etc. That will result in a learning agent that you are reasonably confident can learn policies with the state representation and general behaviour of your target system. You then train ""for real"" again in the physical system.

  4. +
+ +

You can combine these ideas, creating a best-guess controller from simulation, then refining it in a real environment by continuing the training on a real system.
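As an illustration of the first idea, gym's CartPole implementation exposes its physical constants as attributes, so a copy of the environment can be re-parameterised to roughly match a real cart. The numbers below are placeholders, not real measurements.

import gym

env = gym.make('CartPole-v1')
cp = env.unwrapped                 # the raw CartPoleEnv, bypassing wrappers

# overwrite the simulation constants with values measured on the real system
cp.masscart = 0.8                  # kg (placeholder)
cp.masspole = 0.05                 # kg (placeholder)
cp.length = 0.3                    # half the pole length, in metres (placeholder)
cp.force_mag = 5.0                 # motor push force, in newtons (placeholder)
cp.tau = 0.01                      # seconds between state updates (placeholder)

# these derived quantities are cached in __init__, so recompute them too
cp.total_mass = cp.masspole + cp.masscart
cp.polemass_length = cp.masspole * cp.length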

+",1847,,1847,,11/19/2019 13:58,11/19/2019 13:58,,,,4,,,,CC BY-SA 4.0 +16627,1,,,11/19/2019 13:04,,6,92,"

I have the following problem while using convolutional neural networks to detect forgeries:

+

Resizing the image to fit the required input size may not be a good way because the forgery detection largely relies on the details of images, for example, the noise. Thus the resizing process may change/hurt the details.

+

Existing methods mainly use image patches (obtained by cropping) that have the same size. This approach, however, drops the spatial information.

+

I'm looking for some suggestions on how to deal with this problem (input size inconsistency) without leaving out the spatial information.

+",26955,,2444,,4/8/2022 13:50,5/3/2023 16:05,"How to deal with images of different sizes, which need to be passed to a model of fixed input size, without losing details and spatial information?",,1,2,,,,CC BY-SA 4.0 +16628,1,,,11/19/2019 13:14,,3,899,"

I was watching this video from Corridor Crew; according to them, they used deepfake technology to create it. I myself have never made a deepfake video, but I have enough knowledge of the underlying technology to know that it's hard to swap a face when multiple people are present simultaneously in a frame, let alone swap the faces of multiple people in a single frame. But Corridor Crew's video showed that multiple deepfakes can be done, and that's why I am sceptical that the video was made using deepfake technology.

+ +

If the video is indeed made with deepfake technology, then what is the mechanism behind this? My own guess is that they might have masked the other people and concentrated on one person in a frame. They could then have used this masked frame to generate the deepfake, which was then composited back into the original frame. Do you think this is possible? Is there a research article or blog post which explains this process?

+",39,,2444,,11/19/2019 14:48,11/19/2019 14:48,How does deepfake technology work with multiple people in a single frame?,,0,0,,,,CC BY-SA 4.0 +16629,1,,,11/19/2019 14:31,,2,104,"

Consider a problem with many objectives. In my case, these are school grades for different courses (or subjects). To be more concrete, suppose that my current grade for the math course is $12/20$ and for the philosophy course is $8/20$. My objective is to get $16/20$ for the math course and $15/20$ for the philosophy course.

+

I have the possibility to take different courses, but I need to decide which ones. These courses can have a different impact depending on the subject. Let's say that the impact factor is in the range $[0, 1]$, where $0$ means no impact and $1$ means a big impact. Then the math course could have a big impact (e.g. $0.9$) on the grade, while maybe a philosophy course may not have such a big impact.

+

The overall goal is to increase all the grades as much as possible while taking into account the impact of their associated course. In my case, I can have more than two courses and subjects.

+

So, which algorithms can I use to solve this problem?

+",31415,,2444,,6/21/2020 16:29,6/21/2020 16:29,Which algorithm can I use to solve a problem with multiple objectives and constraints?,,1,1,,,,CC BY-SA 4.0 +16630,2,,16607,11/19/2019 14:53,,2,,"

Your results show signs of overfitting at around epoch 40. In order to overcome this you can either simplify the model somewhat or increase regularization. You do not share what values you are using for dropout regularization, so you can try increasing that.

+ +

But to be honest, I am not sure if that is going to help. You are using dropout in a pure CNN architecture, which I do not see that often in recent models. When dropout layers exist in such models, they are usually at the very end of the network, or after densely connected layers. There are claims that regular dropout does not work as intended for convolutional layers, which is why the idea of spatial dropout was developed.
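For reference, a minimal sketch of what spatial dropout looks like in Keras (the layer sizes here are arbitrary, not a recommendation for this particular model):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(64, 3, padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.SpatialDropout2D(0.2),        # drops entire feature maps rather than individual activations
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])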

+ +

But you are using them after pooling layers and such a design may not be as bad as using them after convolutional layers. In any case, if you check the top performers on this dataset, you can see there are some who are trying to come up with new regularization techniques for CNN's, such as shake-shake, shakedrop and cutout.

+ +

However, given you already have an overfitting model, removing regularization might not be the best idea. But if you replace the layers after (and including) the last convolution with some fully connected layers, you can use dropout regularization on them instead. This would remove the largest convolutional layer of the model and you can add dense layers without increasing model size.

+ +

For image recognition tasks of any kind, the easiest way to achieve high accuracy is via transfer learning.

+",22301,,22301,,11/21/2019 7:42,11/21/2019 7:42,,,,0,,,,CC BY-SA 4.0 +16631,1,20642,,11/19/2019 14:57,,5,1216,"

I understand that with vanilla VAEs, there are a few reasons justifying the production of blurred out images. The InfoVAE paper describes the case when the decoder is flexible enough to ignore the latent attributes and generate an averaged out image that best reduces the reconstruction loss. Thus the blurred image.

+ +

How much of the problem of blurring is really mitigated by the MMD formulation in practical experiments? If someone has experience working with MMD-VAEs, I'd like to know their opinion on what the reconstruction quality of MMD-VAEs is really like.

+ +

Also, does the replacement of the MSE reconstruction loss metric by other perceptual similarity metrics improve generated image quality?

+",31416,,31416,,11/19/2019 18:21,4/26/2020 11:17,Does MMD-VAE solve the problem of blurred images of vanilla VAEs?,,1,0,,,,CC BY-SA 4.0 +16632,5,,,11/19/2019 15:59,,0,,,2444,,2444,,11/19/2019 15:59,11/19/2019 15:59,,,,0,,,,CC BY-SA 4.0 +16633,4,,,11/19/2019 15:59,,0,,"For questions related to the ID3 (Iterative Dichotomiser 3) algorithm, which is a decision tree algorithm, so it can be used to learn a decision tree from a dataset. The ID3 algorithm was also developed in 1986 by Ross Quinlan.",2444,,2444,,11/21/2019 3:09,11/21/2019 3:09,,,,0,,,,CC BY-SA 4.0 +16634,1,16635,,11/19/2019 18:33,,2,49,"

Places205-VGG, a CNN model trained on the 205 scene categories of the Places Database (the Places205 dataset, with 2.5 million images), has top1 accuracy = 58.9% and top5 accuracy = 87.7%.

+ +

What do the top1 and top5 (and, in general, top $N$) accuracies mean in the context of deep learning?

+",31312,,2444,,11/19/2019 19:45,11/19/2019 19:45,What does top N accuracy mean?,,1,0,,,,CC BY-SA 4.0 +16635,2,,16634,11/19/2019 19:30,,2,,"

It is explained in this CrossValidated post.

+ +

Top1 accuracy means the best guess (class with highest probability) is the correct result 58.9% of the time, while top5 accuracy means the correct result is in the top 5 best guesses (5 classes with highest probabilities) 87.7% of the time.
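For instance, a small numpy sketch of how a top-$k$ accuracy can be computed from predicted class probabilities (toy numbers, just to illustrate the definition):

import numpy as np

def top_k_accuracy(probs, labels, k=5):
    top_k = np.argsort(probs, axis=1)[:, -k:]           # indices of the k highest-scoring classes
    hits = np.any(top_k == labels[:, None], axis=1)     # is the true label among them?
    return hits.mean()

probs = np.array([[0.10, 0.20, 0.60, 0.10],
                  [0.30, 0.40, 0.20, 0.10],
                  [0.05, 0.15, 0.10, 0.70]])
labels = np.array([2, 0, 3])
print(top_k_accuracy(probs, labels, k=1))               # top-1 accuracy (0.67 here)
print(top_k_accuracy(probs, labels, k=2))               # top-2 accuracy (1.0 here)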

+",22301,,,,,11/19/2019 19:30,,,,1,,,,CC BY-SA 4.0 +16636,2,,16629,11/19/2019 19:35,,2,,"

From your description, the problem you want to solve is a linear optimization problem: suppose we use the indices $i$ and $j$ to denote the $i$-th class and the $j$-th grade. Also, let us call $y_j$ the current value of the $j$-th grade, $g_j$ the goal value of grade $j$, $c_{ij}$ the impact factor of taking class $i$ on grade $j$, and $x_i$ the binary variable that indicates whether you take class $i$ or not. Now, suppose you decided to take certain classes (which is equivalent to choosing the values $x_i$ for each $i$) and measure the new grades $\tilde g_j$. If your impact factors are accurate, then the new grades should be $\tilde g_j = y_j + \sum_i x_ic_{ij}$. Clearly, what you would like is for these new values to be as close as possible to the goal values. So, a possible way to express your optimization problem is that you want to minimize the sum of all these differences:

+ +

$$\min_{x_0,\cdots,x_n} \sum_j \left(g_j - \tilde g_j\right) = \sum_j \left(g_j - y_j - \sum_i x_ic_{ij}\right), \\ \text{s.t. } x_i\in\{0,1\}, \forall\hspace 2pt i.$$

+ +

Most probably, you will have certain time restrictions. For example, maybe each class takes $t_i$ hours from your time and your total available time is just $T$ hours. Or maybe you have a limit of classes $N$ you can take, irrespective of their time duration. With these two constraints, your problem would look like this:

+ +

$$\min_{x_0,\cdots,x_n} \sum_j \left(g_j - y_j - \sum_i x_ic_{ij}\right), \\ \text{s.t. } x_i\in\{0,1\}, \forall\hspace 2pt i,\\ \sum_ix_it_i\leq T,\\ \sum_i x_i\leq N.$$

+ +

Apart from the constraint that $x_i$ should be 0 or 1, this is a linear optimization problem with linear constraints. Integer programming is an area of optimization that studies this type of problems so probably it's a good direction for your research.
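If it helps, here is a minimal sketch of how this integer program could be solved with the PuLP library. All the numbers (impact factors, grades, hours, limits) are made up purely for illustration.

import pulp

c = [[0.9, 0.1], [0.2, 0.7], [0.5, 0.5]]   # impact of class i on grade j
y = [12, 8]                                 # current grades
g = [16, 15]                                # goal grades
t = [3, 2, 4]                               # hours each class takes
T, N = 6, 2                                 # total available hours, max number of classes

prob = pulp.LpProblem('course_selection', pulp.LpMinimize)
x = [pulp.LpVariable(f'x{i}', cat='Binary') for i in range(len(c))]

# objective: total remaining gap between the goal grades and the predicted new grades
prob += pulp.lpSum(g[j] - y[j] - pulp.lpSum(x[i] * c[i][j] for i in range(len(c)))
                   for j in range(len(g)))
prob += pulp.lpSum(x[i] * t[i] for i in range(len(t))) <= T    # time budget
prob += pulp.lpSum(x) <= N                                     # limit on number of classes

prob.solve()
print([pulp.value(v) for v in x])           # which classes to take (1 = take it)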

+",30983,,30983,,11/19/2019 19:46,11/19/2019 19:46,,,,6,,,,CC BY-SA 4.0 +16637,2,,16251,11/19/2019 19:59,,1,,"

You need to have access to the 696th hour (or successive hours), otherwise, you cannot test your model. An alternative would be, for example, to train your model on the first 693 hours, validate it on the 694th hour, and test it on the 695th hour.

+",2444,,,,,11/19/2019 19:59,,,,0,,,,CC BY-SA 4.0 +16638,1,,,11/19/2019 20:10,,2,36,"

I've been working on a question that is posed in a document I've been reading, that models qualifying for a job as a POMDP. In this model, a person takes 3 exams, and must pass all of them in order to get the job. The person make either be qualified or not qualified in the subjects covered by each individual exam, and there is some probability that a person may be qualified for a subject covered in a particular exam, but still not pass (due to nerves). False passes are also possible as well. +A candidate is allowed a maximum of 2 exam attempts.

+ +

In understanding this problem, I've tried to list out all possible states the person might be in (qualified / not qualified for each exam), and have found the following possible states: (where Q is qualified, and N is not qualified)

+ +

QQQ +QQN +QNN +QNQ +NNN +NQN +NNQ +NQQ

+ +

So, the total number of possible states is 8.

+ +

Have I covered all possible states? I'm wondering if there's an easier way to find out total number of states, without having to list them all out in the above way. I'm very new to this field, so any help is appreciated.

+",31421,,,,,11/19/2019 20:10,Finding total number of states in a POMDP,,0,0,,,,CC BY-SA 4.0 +16639,2,,16598,11/19/2019 21:56,,4,,"

Decision Tree learners, on their own, are not a good way to deal with imbalanced data. The most commonly used algorithms, by default, make no attempt to address this problem.

+ +

If you look carefully at the three sources you post, you will find that they actually all agree on this point.

+ +

Two of the sources actually propose methods of addressing this shortcoming, by making adjustments to the decision tree learning algorithms. The proposed adjustments are essentially standard solutions to these problems, being applied to decision trees.

+ +

An example technique, discussed in the first paper you reference, is changing the weightings of the classes. An inefficient/approximate way to do this is to increase the number of examples from the minority class. For example, if you had an 80/20 split, you could add 3 new copies of each minority class example to move to an 80/80 = 50/50 split. Of course, if you add new data points, your algorithm may take longer to run. Instead, you can just modify the weightings of the classes in your optimization function. This approach is algorithm-specific, and will depend on your loss function, but achieves the same effect, just without needing to increase the number of points you use.
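A minimal sketch of both options in scikit-learn (the 1:4 weighting corresponds to the 80/20 example above):

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Option A: re-weight classes inversely to their frequency, computed from the data
tree = DecisionTreeClassifier(class_weight='balanced')

# Option B: give an explicit weighting, e.g. count each minority example 4 times
forest = RandomForestClassifier(class_weight={0: 1, 1: 4})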

+",16909,,16909,,11/20/2019 13:59,11/20/2019 13:59,,,,0,,,,CC BY-SA 4.0 +16640,2,,16576,11/19/2019 22:03,,0,,"

You can look at the paper Gradient-Based Learning Applied to Document +Recognition (1998) by Yann LeCun et al., which reviews and compares various methods applied to handwritten character recognition and shows that CNNs outperform all other methods.

+ +

Also, I suggest Andrew Ng's CNN videos.

+",27315,,2444,,11/19/2019 22:10,11/19/2019 22:10,,,,0,,,,CC BY-SA 4.0 +16641,2,,9153,11/19/2019 22:06,,1,,"

Dempster-Shafer Theory was studied fairly seriously in AI through the 70's and 80's. However, Dempster-Shafer theory has some serious shortcomings, that will cause an agent that uses it to make irrational decisions. These were uncovered by Pearl and others in the 80's, with more problems emerging in later years.

+ +

See Cheeseman for a summary of the arguments that probability is the only suitable framework for reasoning under uncertainty in AI (which is the current orthodox view), along with Josang & Harkin 2012 (which develops a theory that subsumes Dempster-Shafer, and under which DS can be shown to lead to poor decisions), and Pearl's On Probability Intervals.

+",16909,,,,,11/19/2019 22:06,,,,0,,,,CC BY-SA 4.0 +16642,1,,,11/19/2019 22:15,,2,15,"

I am trying to increase the quality of the images that I gather from the microscope. It is an acoustic microscope; there are lots of technical details, but in a nutshell, each low-quality image and its corresponding high-quality image gathered from the same sample are not perfectly aligned. In my setting it is impossible to increase quality without removing the sample from the microscope, and putting it back is a manual process, so the two images end up not perfectly aligned.

+ +

The output of my network will be, let's say, a 256 x 256 image, and its corresponding label will be a high-quality 256 x 256 image of, in theory, exactly the same area. If I make a pixel-to-pixel comparison between them, for example taking MSE as the loss function, will it be able to learn? I am not sure, because the pixels are not perfectly aligned; they do not represent the same area of the image (the difference is not that great, but they are not perfectly aligned, as I said).

+",31427,,,,,11/19/2019 22:15,Loss function for increasing the quality of the image when labels are not perfectly alligned,,0,0,,,,CC BY-SA 4.0 +16645,1,,,11/20/2019 1:05,,1,57,"

I am trying to do a sort of block validator for Bitcoin (and similar chains); in it I need to allow only certain operators in the transaction scripts, depending on the chain and block height. One thought I had was that this might be something that LISA should be good for (I might be wrong about this), but shouldn't something like a rule engine be a good fit for that? What I want is a good way to define the rules for my validator on how to validate that a block and its transactions adhere to the consensus rules.

+ +

I am sort of getting to this point

+ + + +
(defpackage #:chain-validator
+  (:shadowing-import-from #:lisa #:assert)
+  (:use #:lisa
+    #:cl))
+
+(in-package :chain-validator)
+
+(defclass chain-fundamental () ())
+
+(defclass chain-block (chain-fundamental)
+  ((height :initarg :height :initform 1)
+   (chain :initarg :chain :initform :bitcoin)))
+
+(defclass chain-tx (chain-fundamental)
+  ((in-block :initarg :block :initform 'nil)
+   (pk-script :initarg :pk-script)
+   (is-coinbase-tx :initarg :coinbase :initform 'f)))
+
+(defclass chain-OP (chain-fundamental)
+  ((name :initarg :name)
+   (op-code :initarg :op-code)))
+
+(defrule dont-allow-op-mul-after-height-1000
+    ;;; how to write this one?????
+    ;; but if I only want to allow mul on a certain chain after height
+    ;; 2000?
+    )
+
+(defrule startup ()
+  =>
+  (assert-instance
+   (make-instance 'chain-OP :name :PUSHDATA4 :op-code #x4e))
+  (assert-instance
+   (make-instance 'chain-OP :name :EQUAL :op-code #x87))
+  (assert-instance
+   (make-instance 'chain-OP :name :MUL :op-code #x95))
+  (let* ((genesis-blk (make-instance 'chain-block))
+     (later-blk (make-instance 'chain-block :height 2500))
+     (first-coinbase-tx (make-instance 'chain-tx :block genesis-blk))
+     (later-coinbase-tx (make-instance 'chain-tx :block later-blk)))
+    (assert-instance genesis-blk)
+    (assert-instance later-blk)
+    (assert-instance first-coinbase-tx)
+    (assert-instance later-coinbase-tx)))
+
+
+;; how can I use LISA to get the chain-OPs that are allowed for a
+;; transaction belonging to a specific block at some height, I sort of
+;; want to find them all so i later can verify that the pk-script part
+;; only contains those OPs. Could I write rules that would acctually
+;; do the validation for me? that would check if chain-tx.pk-script
+;; only contains certain OPs. And if we have multiple chains, how do I
+;; write the rules to take account for that?
+
+ +

But after that I don't know how to proceed. The questions I want LISA to answer for me are things like:

+ +
    +
  • What are the valid script operations for a certain block, or transaction?
  • +
  • Is this block or transaction valid?
  • +
+ +

Maybe what I need is a primer on rule engines or a good tutorial. I just can't really get my head around how to write the rules.

+",31430,,,,,11/20/2019 1:05,Trying to get started with LISA and Lisp,,0,0,,,,CC BY-SA 4.0 +16646,1,,,11/20/2019 1:11,,8,1344,"

Essentially, AI is created by human minds, so is the intelligence & creativity of algorithms properly an extension of human intelligence & creativity, rather than something independent?

+ +

I assume that intelligence does not necessarily require creativity, however, creativity can result from machine learning. (A simple example is AlphaGo discovering novel strategies.)

+",1671,,2444,,11/20/2019 12:23,1/19/2021 16:19,Is artificial intelligence really just human intelligence?,,8,0,,,,CC BY-SA 4.0 +16647,2,,16606,11/20/2019 1:13,,0,,"

I also want to answer my own question after watching the video that @Philip posted: youtube.com/watch?v=UXW2yZndl7U . But I still thank @Philip for the answer, which is pretty helpful.

+ +
+

The definition of ""leaf"" node.

+
+ +

The key point for this question is which tree is the host/owner of a ""leaf"" node. Through the ""Expansion"" step, we are actually building a tree with MCTS. The tree that owns a ""leaf"" node should be the one that we are building, not the full game tree in our head (which is perhaps too big to fit in our head; the full game tree never actually exists). Then we can understand that a ""leaf"" node is one which does not yet have any children in the tree that we are building. Once we get the answer to this question, the other questions can be answered automatically.
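
To make this concrete, here is a minimal Python sketch (purely illustrative - the class and method names are my own, not from any particular MCTS library) of how the tree we are building distinguishes ""leaf"" nodes:

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # game state this node represents
        self.parent = parent
        self.children = []        # children added so far by the Expansion step
        self.visits = 0
        self.total_value = 0.0

    def is_leaf(self):
        # A leaf of the tree *we are building*: no children added yet,
        # even if the game state itself still has many legal moves.
        return len(self.children) == 0

    def expand(self, legal_moves, next_state_fn):
        # Expansion: attach children for the legal moves, turning this
        # leaf into an internal node of our search tree.
        for move in legal_moves:
            self.children.append(Node(next_state_fn(self.state, move), parent=self))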

+",31389,,,,,11/20/2019 1:13,,,,0,,,,CC BY-SA 4.0 +16648,2,,16646,11/20/2019 2:05,,1,,"

I think no, it isn't. The reason I would say no is that, in order for it to be an extension of our intelligence & creativity, it would have to be limited by it. This, I believe, isn't the case, however. We are capable of creating an AI that is smarter than ourselves (say at Go or chess, without cheating and checking every possible move), and so it is not bound by our own intelligence.

+ +

I would liken it to creating a child. Just because you gave birth to Einstein doesn't mean he's an extension of your intelligence. (This is of course pretty rudimentary, as it's very debatable whether it's reasonable to liken humans to AI.)

+ +

Of course, this is a philosophical question, so it's hard to really answer yes or no.

+",26726,,,,,11/20/2019 2:05,,,,3,,,,CC BY-SA 4.0 +16649,2,,7446,11/20/2019 2:49,,2,,"

The book Computational Intelligence: An Introduction (2nd edition, 2007) by Andries P. Engelbrecht, which has been cited more than 3000 times, defines artificial intelligence as follows

+
+

These intelligent algorithms include artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, and fuzzy systems. Together with logic, deductive reasoning, expert systems, case-based reasoning and symbolic machine learning systems, these intelligent algorithms form part of the field of Artificial Intelligence (AI). Just looking at this wide variety of AI techniques, AI can be seen as a combination of several research disciplines, for example, computer science, physiology, philosophy, sociology and biology.

+
+

and computational intelligence as follows

+
+

This book concentrates on a sub-branch of AI, namely Computational Intelligence (CI) – the study of adaptive mechanisms to enable or facilitate intelligent behavior in complex and changing environments. These mechanisms include those AI paradigms that exhibit an ability to learn or adapt to new situations, to generalize, abstract, discover and associate. The following CI paradigms are covered: artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, and fuzzy systems.

+
+

He then notes

+
+

At this point it is necessary to state that there are different definitions of what constitutes CI. This book reflects the opinion of the author, and may well cause some debate. For example, swarm intelligence (SI) and artificial immune systems (AIS) are classified as CI paradigms, while many researchers consider these paradigms to belong only under Artificial Life. However, both particle swarm optimization (PSO) and ant colony optimization (ACO), as treated under SI, satisfy the definition of CI given above, and are therefore included in this book as being CI techniques. The same applies to AISs.

+
+

So, there may be different definitions of CI (given by different people), but, given that this book has been cited so many times, I would just stick to these definitions and use this book as a reference (I have actually consulted it a few times in the past). My university library even contains a copy of it.

+

To summarise, CI is a sub-field of AI, which studies (or is associated with) the following topics

+
    +
  • artificial neural networks (NN),
  • +
  • evolutionary computation (EC),
  • +
  • swarm intelligence (SI),
  • +
  • artificial immune systems (AIS), and
  • +
  • fuzzy systems (FS).
  • +
+

which are also part of AI, which additionally studies

+
    +
  • logic,
  • +
  • deductive reasoning,
  • +
  • expert systems,
  • +
  • case-based reasoning, and
  • +
  • symbolic machine learning systems.
  • +
+

Just to give further credibility to these definitions, Andries P. Engelbrecht has an h-index of 59, has been cited 22557 times, and is an IEEE Senior Member. You can find more info about him here. Note that I have no affiliation with him. I am just providing this information so that people start to follow these definitions (rather than just looking at definitions given by people who have not extensively studied the field). Moreover, note that the definition of CI given by Engelbrecht is consistent with the definition given by IEEE that you are quoting.

+",2444,,2444,,11/27/2020 23:36,11/27/2020 23:36,,,,0,,,,CC BY-SA 4.0 +16650,1,,,11/20/2019 3:14,,0,193,"

When are interpreted languages more optimal? When are compiled languages more optimal? What are the qualities and functions that render them so in relation to various AI methods?

+",1671,,2444,,12/12/2021 11:01,12/12/2021 11:01,When are compiled vs. interpreted languages more optimal in AI?,,1,3,,,,CC BY-SA 4.0 +16651,2,,16646,11/20/2019 3:56,,1,,"

No it isn't.

+

AI is essentially human intelligence combined with computing power, used to achieve tasks that a human alone cannot achieve in the time that a programmed machine can.

+

To give an example: a human can identify a pattern in a data set of, say, 1000 records. However, if that same logic needs to be applied to a data set of a billion records, a human would take ages to do it. But a machine can do it in seconds if the human gives the right instructions to the machine on how to do it.

+

Hope that helps.

+",31435,,32410,,1/19/2021 16:19,1/19/2021 16:19,,,,1,,,,CC BY-SA 4.0 +16652,1,16654,,11/20/2019 4:49,,1,50,"

I trained some weights to identify apples and oranges (using YOLOv3).

+ +

If I want to be able to identify peaches, which approach is usually recommended:

+ +
    +
  1. Start clean and train the 3 classes.
  2. +
  3. Train the peaches over the already-trained weights (with apples and oranges) + +
      +
    1. Only train with peaches images
    2. +
    3. Use all available training data (including apples and oranges)
    4. +
  4. +
+ +

This is what I have found:

+ +
    +
  • If I start clean, it will take longer until I can get a good result, but the detection is usually better.
  • +
  • Every time I add a new class (using 2.2), the detection gets worse for the already-learned objects, but it takes less time until I can get a good result (however, I suspect that apples and oranges become over-fitted?).
  • +
  • I haven't tested 2.1, as I think that it won't be able to re-adjust the weights for the apples and the oranges.
  • +
+ +

Is the above expected? What is the recommended course of action?

+",9856,,,,,11/20/2019 6:30,Classification with deeplearning : clean start vs continue training,,1,1,,,,CC BY-SA 4.0 +16653,2,,16646,11/20/2019 4:58,,1,,"

I believe AI is, at least in certain ways, both an extension of human intelligence & creativity, and something independent as well. Note people didn't design airplanes to try to fly like birds do. Although planes use the same principles of aerodynamics that birds use to fly, we've adapted how those physics principles are applied to accommodate what we have to work with, i.e., metal, by having things like propellers, jet engines, fixed wings (initially, although later we also had helicopter rotor blades), etc.

+ +

In a similar fashion, we have adapted a few things we've learned about how human minds & intelligence work, with artificial neural networks being a prime example. However, even with just our fairly limited understanding, we've implemented neural networks differently, e.g., by which activation functions are used. Although we are learning more about how our brains work through neuroscience research, there's still so much we don't yet know. Nonetheless, I believe one of the biggest differences overall between our minds & AI is that our general intelligence comes from mostly massive parallel processing, to a much greater extent than even higher end GPUs can deliver, or even at least most supercomputers, while artificial intelligence generally depends instead a lot more on the massive speed of calculations available on our modern computer chips.

+ +

It's this learning, adapting & extending what we know about how we think & create, in combination with the mostly independent boost of using the advantages of computer chips (mostly their ability to do very fast computations), that has allowed AI to advance as far as it has so far. Nobody, including myself, can be sure of where & how the next major advances in AI will occur, but I believe it'll likely be a combination of learning & using what we learn about how we mentally operate, along with advances in computer related knowledge & technology (e.g., new algorithm techniques, more & better parallel processing, quantum computers with many simultaneous qubits operating, etc.).

+",26797,,26797,,11/20/2019 7:27,11/20/2019 7:27,,,,0,,,,CC BY-SA 4.0 +16654,2,,16652,11/20/2019 6:30,,2,,"

If the task involves only apples, oranges, and peaches, you should use method 1. As the number of classes is small, the network cannot generalize well to all classes. As a side note, you should start with the pretrained weights of YOLO v3, as some classes of YOLO v3 may be fruits, which can help your model converge faster.

+ +

If the number of classes is large, for example a hundred different fruits, you should use method 2.2. The model should be able to generalize to all fruits and can converge faster, as many fruits look the same. This is a case of transfer learning. In the original YOLO v3 training, ImageNet weights for the Darknet backbone network are used, which accelerates the training of YOLO v3.
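
As a minimal illustration of reusing pretrained weights (a sketch only - this is a plain Keras image classifier, not YOLO v3, and names like num_classes are my own), the idea is to freeze a pretrained backbone and train a new head for your own classes:

import tensorflow as tf

num_classes = 3  # e.g. apple, orange, peach

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights='imagenet', input_shape=(224, 224, 3), pooling='avg')
backbone.trainable = False  # freeze the pretrained weights at first

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(num_classes, activation='softmax'),  # new head for your classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=...)  # train only the new head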

+ +

For 2.1, it will not work, as gradient descent will not take the previously trained weights into account. The trained weights will be overwritten by the weights for peaches.

+ +

For a recommended method, it depends on your class size. If the model will keep having more classes added, you should perhaps use 2.2, but only start using 2.2 when you have a considerable number of classes. Hope this helps.

+",23713,,,,,11/20/2019 6:30,,,,0,,,,CC BY-SA 4.0 +16655,1,,,11/20/2019 6:46,,1,194,"

I have segmented concrete cracks from concrete structure images using Mask R-CNN. Now I need to measure the length of the segmented masked crack.

+ +

Will the pixel counting method work? Can anyone help?

+ +

Note: The images are taken at the constant distance from the object.

+",31396,,31396,,11/20/2019 8:50,11/20/2019 8:50,How to count pixels in a object mask which is segmented using Mask R-CNN?,,0,0,,,,CC BY-SA 4.0 +16656,1,16658,,11/20/2019 7:50,,2,62,"

I'm pursuing a master's degree in Artificial Intelligence. My final work is about Convolutional Neural Networks.

+

I was looking for information about filters (or kernels) of the convolutional layers. I have found this article: Lode's Computer Graphics Tutorial - Image Filtering, but I need more.

+

Do you know of more resources about filters (ones that are known to work) and how to create new ones?

+

In other words, I want to know how they work and how I can create new ones.

+

I've thought of creating a C++ program, or using Octave, to test the new kernels.

+

By the way, my research will be focused on image segmentation to process MRIs.

+",4920,,2444,,1/19/2021 15:00,1/19/2021 15:00,What are some references that describe known filters (or kernels) and how we can create new ones?,,1,0,,,,CC BY-SA 4.0 +16658,2,,16656,11/20/2019 9:24,,3,,"

I'd suggest you first get a good understanding of edge detectors such as the Roberts or Sobel operators, to better understand how a convolution operation on images extracts features using constant-value kernels.
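
As a concrete illustration (a minimal sketch, assuming NumPy and SciPy are available), here is a fixed Sobel kernel applied to a grayscale image with a 2D convolution; the CNN case discussed below simply replaces such hand-picked kernel values with learned ones:

import numpy as np
from scipy.signal import convolve2d

# Sobel kernel for horizontal gradients (a hand-designed, constant-value kernel)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

image = np.random.rand(64, 64)   # stand-in for a grayscale image
edges_x = convolve2d(image, sobel_x, mode='same', boundary='symm')
print(edges_x.shape)  # (64, 64): a feature map that highlights vertical edges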

+ +

I would personally recommend Gonzalez and Woods for this, as it gives a purely mathematical explanation of how and why these features are extracted.

+ +

Essentially, the convolution kernels used in CNNs are ones where the values of the kernel are learned rather than fixed.

+ +

For a better understanding of learned convolution kernels and, quite frankly, of any idea in deep learning, I would easily recommend Deep Learning by Goodfellow et al.

+",25658,,,,,11/20/2019 9:24,,,,0,,,,CC BY-SA 4.0 +16662,2,,16650,11/20/2019 10:41,,5,,"

Interpreted languages allow for a faster development cycle, as they don't require time for compilation, and fragments can often be run without having a complete program. They often also have fewer constraints for variable declaration or typing. That means they can be used to quickly scope out a problem and try different solutions.

+ +

The drawback is the slower execution speed. But during development this is not a big factor; it only becomes important in a production environment. So one option would be to use an interpreted language during the R&D phase, and then re-implement the algorithm in a compiled language for performance improvements.

+ +

Since ML and NNs have become more prevalent in AI, numerical computing has become more important. This is an area where interpreted languages traditionally don't perform too well, so one would use a (compiled) library for, say neural networks, or genetic algorithms, and use 'glue code' to integrate this into a bigger system. The glue code would transform/prepare data and convert this between different formats required by libraries. This is often done in interpreted scripting languages, as they might have to be changed more frequently and are not performance critical.
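
As a small illustration of that split (a sketch of my own, not part of the original answer): the interpreted 'glue' stays in Python, while the heavy numerics run inside a compiled library such as NumPy:

import numpy as np

# Interpreted glue: load/convert the data and decide what to compute.
raw = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
data = np.asarray(raw)  # hand the data over to the compiled library

# Compiled numerics: the actual number crunching happens in optimized C code.
normalized = (data - data.mean(axis=0)) / data.std(axis=0)
print(normalized)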

+ +

Apart from development, the type of computation is also key: as mentioned, numerical computing generally works better with compiled code, but interpreted languages often have advantages in symbolic programming. This is why Lisp and Prolog have become popular AI languages, as opposed to Fortran or C.

+ +

In an ideal world you would use an interpreted language for development, and then compile this once you're done. However, due to the way these languages work, compilation is often non-trivial.

+",2193,,,,,11/20/2019 10:41,,,,1,,,,CC BY-SA 4.0 +16663,2,,16646,11/20/2019 11:55,,1,,"

I would say: no, it's not just an extension of human intelligence. Actually, I would argue there's no such thing as a distinctly human intelligence. At least, it's not clearly distinguishable from intelligence in general.

+ +

If you say AI is just a set of instructions that are made by humans, you might be right. But what if this set of instructions contains instructions on how to change instructions? That would mean that the AI knows how to learn. What if you include instructions on how to learn to learn to learn to learn (...) to change instructions?

+ +

At what point would you say that this intelligence is still an extension of human intelligence? If you argue like this then you must also put ""human intelligence"" in a set altogether with every animal intelligence because it all originates from some sort of intelligence that is based on physical brain activity.

+ +

In fact, when a child is born, it is not more intelligent than most animal species. The only thing that enhances its intelligence over time (and lets it do things like speaking or using its hands as tools) is the ability to learn.

+ +

I don't see why an AI wouldn't have the potential to increase its intelligence to a level where one would say: ""This is not an extension of human intelligence anymore, this is something independent"".

+",31450,,,,,11/20/2019 11:55,,,,0,,,,CC BY-SA 4.0 +16664,1,,,11/20/2019 12:54,,3,32,"

I am working on multiple deep learning projects, most of them in the area of computer vision. For many of them I create multiple models, try different approaches, use various model architectures. And of course I try to optimize hyperparameters for each model.

+ +

Now, that itself works fine. However, I start to lose track of all the various parameters and model layouts I have tried. The problem is, sometimes, for example, I want to re-train a model from a past project on a new data set, but using the same hyperparameters from the last (best) successful training. So I need to look up that project's documentation, or find the hyperparameters saved in a text or Excel file, etc.

+ +

For me that feels a bit cumbersome. I bet I am not the only one facing this problem; surely there must be a better way than ""remembering"" all the hyperparameters from all projects/models manually via text files and the like.

+ +

What are your experiences, have you found a better software / solution / approach / best practice / workflow for that? I must admit, I would welcome a software to aid with that a lot.

+",14504,,,,,11/20/2019 12:54,How to organize model training hyperparameters,,0,0,,,,CC BY-SA 4.0 +16666,2,,16646,11/20/2019 14:17,,3,,"

This is an old question, going back at least to 1950. It is one of the original objections to AI that Turing considers and attempts to refute in his seminal 1950 paper Computing Machinery and Intelligence.

+ +

Turing actually attributes this objection to Lady Lovelace, apparently quoted by another author. In Turing's paper, this is objection #6: Lady Lovelace's Objection, in section 6 of the paper. The objection is concisely stated as

+ +
+

The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.

+
+ +

where ""The Analytical Engine"" was an early design for an all-mechanical general purpose computer.

+ +

Turing offers two replies to this objection. First, he reminds us that computer programs have bugs. That is, they often do things their creators did not intend. This is unsatisfying to many readers, but it does address the objection: programs may act in ways that are unrelated to our intelligence, and in doing so, might display unexpected intelligent behaviors. In this sense, their intelligence would not be an intentional product of human intelligence.

+ +

Turing's stronger objection comes from an anticipation that learning would eventually move to the center of AI research (keep in mind again, this is written in 1950, well before any reasonable learning algorithms had been proposed!). Turing uses the example of a robotic child in Section 7 of the paper (Learning Machines) to elaborate on his point. A child is created by its parents, but, endowed with the ability to learn, quickly begins to display behaviors its parents do not anticipate or intend. No one would suggest that a person's intelligence is ""really just"" the intelligence of their parents, even though their parents created them, and are partially responsible for that intelligence.

+ +

Likewise, Turing's proposed robotic child is created by a parent, but, endowed with learning, quickly begins to engage in behaviors the parent does not anticipate or intend. Therefore, machine intelligence need not be reduced to just human intelligence.

+ +

I think that if Turing were alive today, he would agree that we are now beginning to move into the era of learning machines he anticipated. Some of our programs now engage in intelligent behaviors that we do not anticipate or understand. For example, self-driving cars now kill or maim people, because they have learned behaviors their creators did not intend or anticipate, perhaps not unlike a reckless teenage driver.

+",16909,,16909,,11/21/2019 2:12,11/21/2019 2:12,,,,4,,,,CC BY-SA 4.0 +16667,1,16670,,11/20/2019 14:30,,3,2810,"

Does the Markov assumption say that the conditional probability of the next state only depends on the current state or does it say that the conditional probability depends on a fixed finite number of previous states?

+ +

As far as I understand from the related Wikipedia article, the probability of the next state $s'$ appearing depends only on the current state $s$.

+ +

However, in the book ""Artificial Intelligence: A Modern Approach"" by Russell and Norvig, on page 568, they say: ""Markov assumption — that the current state depends on only a finite fixed number of previous states"".

+ +

To me, the second statement seems contradictory to the first, because it may mean that a state can depend on the history of states as long as the number is fixed and finite. For example, the current state may depend on the last state and the state before the last state, which is 2 sequential previous states (a finite number of states).

+ +

Is Markov assumption and Markov property the same?

+",27777,,2444,,11/20/2019 23:55,11/20/2019 23:55,What does the Markov assumption say about the history of state sequences?,,1,0,,,,CC BY-SA 4.0 +16668,1,,,11/20/2019 14:42,,1,64,"

I have implemented a multi-label image classification model where I can choose which model to use. I was surprised to find out that, in my case, mobilenet_v1_224 performed much better (95% accuracy) than the Inception models (around 88% accuracy). I'm using pretrained models (that I download from here) and adding a final layer that I train on my own data (3000 images). I wanted to get your opinion and see if maybe I'm doing something wrong.

+",23866,,,,,11/20/2019 14:42,Can mobilenet in some cases perform better than inception_v3 and inception_resnet_v2?,,0,1,,,,CC BY-SA 4.0 +16669,1,16673,,11/20/2019 15:06,,3,68,"

I am training an RL agent (specifically using the PPO algorithm) on a game environment with 2 possible actions left or right.

+ +

The actions can be taken with varying ""force""; e.g. go left 17% or go right 69.3%. Currently, I have the agent output 21 actions - 10 for left (in 10% increments), 10 for right in 10% increments and 1 for stay in place (do nothing). In other words, there is a direct 1-1 mapping in 10% increments between the agent output and the force the agent uses to move in the environment.

+ +

I am wondering, if instead of outputting 21 possible actions, I change the action space to a binary output and obtain the action probabilities. The probabilities will have the form, say, [70, 30]. That is, go left with 70% probability and go right with 30% probability. Then I take these probabilities and put them through a non-linearity that translates to the actual action force taken; e.g an output of 70% probability to go left, may in fact translate to moving left with 63.8% force.

+ +

The non linear translation is not directly observed by the agent but will determine the proceeding state, which is directly observed.

+ +

I don't fully understand what the implications of doing this will be. Is there any argument that this would increase performance (rewards) as the agent does not need to learn direct action mappings, rather just a binary probability output?

+",27570,,,,,11/20/2019 18:04,Effects of translating RL action probability through non linearity,,2,0,,,,CC BY-SA 4.0 +16670,2,,16667,11/20/2019 15:17,,6,,"

A stochastic process has the Markov property if the probability distribution of future states conditioned on both the present and past states depends only on the present state or, more formally, the following equality holds.

+ +

$$ +p(s_{t+1} \mid s_{t}, s_{t-1:1}) = p(s_{t+1} \mid s_{t}), \forall t +$$

+ +

The hidden Markov model (HMM) is an example of a model where the Markov property is often assumed to hold. In other words, the Markov assumption is made in the case of the HMM.

+ +

There are also the variable-order (or higher-order) Markov models, where the future state can depend on the current state and the $n$ previous states or, more formally, the following equality holds.

+ +

$$ +p(s_{t+1} \mid s_{t}, s_{t-1:1}) = p(s_{t+1} \mid s_{t:t-n}), \forall t +$$

+ +

In this context, a hidden Markov model is called a first-order Markov model ($n=0$). Therefore, there can also be second-order ($n=1$), third-order ($n=2$), etc., Markov models. In fact, there are also higher-order hidden Markov models.
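
As a small illustration (a sketch with made-up states and probabilities), a model whose next state depends on the last two states - second-order in the usual terminology, i.e. $n=1$ in the notation above - simply conditions its transition table on that pair of states:

import random

# Transition table of a second-order Markov chain: the distribution of the
# next state depends on the pair (previous state, current state).
transitions = {
    ('sun', 'sun'): {'sun': 0.8, 'rain': 0.2},
    ('sun', 'rain'): {'sun': 0.4, 'rain': 0.6},
    ('rain', 'sun'): {'sun': 0.6, 'rain': 0.4},
    ('rain', 'rain'): {'sun': 0.1, 'rain': 0.9},
}

def step(prev, curr):
    dist = transitions[(prev, curr)]
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs)[0]

history = ['sun', 'sun']
for _ in range(10):
    history.append(step(history[-2], history[-1]))
print(history)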

+ +

To conclude, the expressions Markov property and Markov assumption are not exactly interchangeable. The Markov property is an attribute that a stochastic process can be assumed to possess. In that case, the Markov assumption is made. The expression Markov property usually refers to a first-order Markov property, but it can more generally refer to a higher-order Markov property.

+",2444,,2444,,11/20/2019 18:32,11/20/2019 18:32,,,,0,,,,CC BY-SA 4.0 +16671,2,,16669,11/20/2019 15:38,,1,,"

Have you considered using a continuous action space? It might be worth looking into. If you aren't familiar with it, here are a few resources for discrete vs continuous action spaces -

+ +

Modeling and Planning in Large State and Action Spaces

+ +

Deep Reinforcement Learning in Continuous Action Spaces:a Case Study in the Game of Simulated Curling

+",31458,,,,,11/20/2019 15:38,,,,1,,,,CC BY-SA 4.0 +16672,1,,,11/20/2019 16:45,,10,2978,"

I need an algorithm to trace simple bitmaps, which only contain paths with a given stroke width.

+

Is there any existing attempt to create a deep learning model which extracts vector paths from bitmaps?

+

It is obviously very easy to generate bitmaps from vector paths, so creating data for a machine learning algorithm is simple. The model could be trained by giving both the vector and bitmap representation. Once trained, it would be able to generate the vector paths from the given bitmap.

+

This seems simple, but I could not find any work on this particular task. So, I suppose this problem is not fitted for current deep learning architectures, why?

+

The goal is to trace this kind of image, which would be drawn by hand with a thick felt pen and scanned:

+

+

So, is there a deep learning architecture fitted for this problem?

+

I believe this question could help me understand what is possible to do with deep learning and what is not, and why. Tracing bitmaps is a perfect example of converting sparse data to a dense abstract representation; I have the intuition one can learn a lot from this problem.

+",31460,,2444,,12/23/2021 23:22,2/26/2023 20:50,Is there any existing attempt to create a deep learning model which extracts vector paths from bitmaps?,,2,0,,,,CC BY-SA 4.0 +16673,2,,16669,11/20/2019 18:04,,1,,"
+

I don't fully understand what the implications of doing this will be.

+
+ +

Without other matching adjustments, you will break your agent.

+ +

The problem is how your new action space gets converted back into gradients to update the agent, after it has acted and needs to learn from the results. The NN component of the policy function you are considering is designed to work by balancing a discrete probability distribution. It learns by increasing the probability of actions (in the binary case, the probability of going left vs going right) that score better than a current baseline level.

+ +

When interpreting the result from going 63.8% left, you have to resolve two things - which action did the agent take, and what changes to your parameters will increase the probability of taking that action. Unfortunately, neither of these tasks is simple if you combine the action choices in the way you suggest.

+ +

Also, you have lost exploration. The combined left/right algorithm will always output a fixed steering amount for each state. Whilst there are algorithms, like DDPG, that can work with this, it is not really possible to adapt PPO to do so.

+ +

However, PPO already supports continuous action spaces directly. You can have your network output the mean and standard deviation of a distribution for how to steer, and sample from that. Then the action choice taken will directly relate to the output of the network, and you can adjust the policy to make that choice more or less probable depending on the results from taking it. If you are using a library implementation of PPO, then this option should be available to you.
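
A minimal sketch of that idea (illustrative NumPy only, not a full PPO implementation; the ""network"" is reduced to a single linear layer and all names are my own):

import numpy as np

state_dim = 4
rng = np.random.default_rng(0)

# Tiny stand-in for the policy network: a linear layer producing the mean,
# plus a state-independent learnable log standard deviation.
W_mu = rng.normal(size=(state_dim,)) * 0.1
b_mu = 0.0
log_std = -0.5

def sample_action(state):
    mu = float(state @ W_mu + b_mu)   # mean steering force, e.g. -1 = full left, +1 = full right
    std = float(np.exp(log_std))
    action = rng.normal(mu, std)      # stochastic sample, so exploration comes for free
    log_prob = -0.5 * ((action - mu) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)
    return action, log_prob           # log_prob is what feeds the PPO probability ratio

a, logp = sample_action(rng.normal(size=state_dim))
print(a, logp)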

+",1847,,,,,11/20/2019 18:04,,,,0,,,,CC BY-SA 4.0 +16674,1,,,11/20/2019 19:21,,2,38,"

I do text annotations (POS tagging, NER, chunking, synset) by using a specific annotation tool for Natural Language Processing. I would like to make the same annotations on different tools to compare the performances of both.

+ +

Furthermore, for I found several logical and linguistic errors in the way the algorithm was previously trained, I would like to measure the way such anomalies affect the intelligence of the chatbot (that's to say its ability to understand questions and answers made by the customers as regard to sentences which have been structured in a certain way), by comparing results with those performed by other NLP engines. +In other terms, I would like to collect some ""benchmark"" to have an idea of which level the NLP algorithm developed by the company I work with works at.

+ +

Is there any tool (open source annotation tools based on other NLP algorithms, tools to collect benchmark, etc.) which might help me to perform such a task?

+",22959,,,,,11/20/2019 19:21,NLP annotation tool online and other tools to compare performances of different NLP algorithms,,0,0,,,,CC BY-SA 4.0 +16675,1,,,11/20/2019 19:29,,2,53,"

While learning neural networks I've found a basic Python working example to play with. It has 3 input nodes, 4 nodes in a hidden layer, 1 output node. 5 data sets for training.

+ +

The initial code is without biases, which I'm trying to implement in the forward and backward calculations. From different internet sources I see that a bias is just like the other weights but with a static input value of 1, and the backpropagation calculation should be similar and simpler.

+ +

But my current code version is not working - with the same input I get very different results from ~0.002 to ~0.99.

+ +

Please help me to fix the bias calculations - probably the lines marked with ???. Here is some Python 2 testing code:

+ +
import numpy as np
+
+
+# Sigmoid and it's derivative
+def nonlin(x, deriv=False):
+    if (deriv == True):
+        return x*(1-x)
+
+    return 1/(1+np.exp(-x))
+
+
+X = np.array([[0,0,1],
+              [0,1,1],
+              [1,0,1],
+              [1,1,1],
+              [1,1,1]])
+
+Y = np.array([[0],
+              [1],
+              [1],
+              [0],
+              [0]])
+
+# Static initial hidd. layer weights for testing
+wh = np.array([[-0.16258307,  0.43597283, -0.99471565, -0.39715906],
+               [-0.70551921, -0.81601352, -0.62549935, -0.30959772],
+               [-0.20477763,  0.07532473, -0.15920573,  0.3694664 ]])
+# Static initial output layer weights for testing
+wo = np.array([[-0.59572295],
+               [ 0.74949506],
+               [-0.95195878],
+               [ 0.33625405]])
+
+# Hidden layer's biases
+biasH = 2 * np.random.random((1, 4)) - 1  # ???
+# Output neuron's bias
+biasO = 2 * np.random.random((1, 1)) - 1  # ???
+# Static hidden layer's biases input
+biasInputH = np.array([[1, 1, 1, 1]])     # ???
+# Static output layer's bias input
+biasInputO = np.array([[1]])              # ???
+
+
+# Number of iterations to teach
+for j in xrange(60000):
+
+    # Feedforward
+    h = nonlin(np.dot(X, wh) + biasH)
+    o = nonlin(np.dot(h, wo) + biasO)
+
+    # Calculate partial derivatives & errors
+    o_error = Y - o
+
+    if (j % 10000) == 0:
+        print ""Error:"" + str(np.mean(np.abs(o_error)))
+
+    o_delta =  o_error * nonlin(o,     deriv=True)
+    o_biases = o_error * nonlin(biasO, deriv=True)  # ???
+
+    h_error =  o_delta.dot(wo.T)
+    h_delta =  h_error * nonlin(h,     deriv=True)
+    h_biases = h_error * nonlin(biasH, deriv=True)  # ???
+
+    # Update weights and biases
+    wo += h.T.dot(o_delta)
+    wh += X.T.dot(h_delta)
+
+    # biasH += biasInputH.dot(h_delta)  # ???
+    # biasO += biasInputO.dot(o_delta)  # ???
+
+
+# Try new data
+data = np.array([1,0,0])
+
+print ""weights 0:"", wh
+print ""weights 1:"", wo
+print ""biases 0:"",  biasH
+print ""biases 1:"",  biasO
+print ""input:   "",  data
+
+h = nonlin(np.dot(data, wh))
+print ""hidden:  "", h
+print ""output:  "", nonlin(np.dot(h, wo))
+
+",31464,,,,,11/20/2019 19:29,Calculation of Neural network biases in backpropagation,,0,0,,,,CC BY-SA 4.0 +16676,1,,,11/20/2019 21:05,,2,286,"

I am working on implementing MCTS for a scheduling problem where MCTS is formulated each time there are multiple jobs that need to be scheduled. When a job is executed, the resulting state of the system is random. The challenge I'm having is that the implementation I'm currently using relies on the ability to determine if a node is fully expanded. However, there are so many children of the root node that it's not feasible to expect all of them will ever be visited. Is there a suggested method of conducting MCTS in cases where nodes will not likely ever be fully expanded?

+",31466,,,,,11/20/2019 21:05,Formulating MCTS with random outcomes of actions?,,0,4,,,,CC BY-SA 4.0 +16677,1,,,11/20/2019 22:24,,3,179,"

I am looking to build a neural network that takes an input vector $\mathbf{x}$ and outputs a vector $\mathbf{y}$ such at $f(\mathbf{x}, \mathbf{y})$ is minimized, where $f$ is some function. The network will see many different $\mathbf{x}$ during training to adjust its weights and biases; then I will test the network by using the test set $\{\mathbf{x}_1, \dots, \mathbf{x}_n \}$ to calculate $\sum(f(\mathbf{x}_1, \mathbf{y}), \dots, f(\mathbf{x}_n, \mathbf{y}))$ to see if this sum is minimized.

+

However, I have no labels for the output $\mathbf{y}$. The loss function I am trying to minimize is based on the input and output, instead of the output and label.

+

I tried many standard Keras and TensorFlow loss functions, but they are unable to do the job. Any thoughts on how this might be achieved?

+",31468,,2444,,12/10/2021 21:30,12/10/2021 21:30,Unsupervised learning to optimize a function of the input,,2,1,,12/10/2021 21:31,,CC BY-SA 4.0 +16678,2,,16677,11/21/2019 0:32,,1,,"

According to your description, you already know your function $f$ to be optimized. So you should use it directly instead of the standard loss functions. In this other post there is an explanation of how to use $f$ as a custom loss function in Keras.
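
For completeness, here is a minimal sketch of one way to do this with TensorFlow 2.x-style Keras (my own example, with a made-up $f(x, y) = \lVert x - y \rVert^2$; it assumes $f$ can be written with differentiable tensor ops). Using model.add_loss lets the loss depend on the input as well as the output, so no labels are needed:

import tensorflow as tf

dim = 8
x_in = tf.keras.Input(shape=(dim,))
h = tf.keras.layers.Dense(32, activation='relu')(x_in)
y_out = tf.keras.layers.Dense(dim)(h)

model = tf.keras.Model(x_in, y_out)

# f(x, y) expressed with tensor ops, depending on both the input and the output.
f_xy = tf.reduce_mean(tf.reduce_sum(tf.square(x_in - y_out), axis=-1))
model.add_loss(f_xy)

model.compile(optimizer='adam')          # no standard loss needed
x_train = tf.random.normal((256, dim))
model.fit(x_train, epochs=2, verbose=0)  # note: no labels passed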

+",30983,,,,,11/21/2019 0:32,,,,0,,,,CC BY-SA 4.0 +16679,1,,,11/21/2019 0:44,,1,102,"

I'm reading about max-pooling in a dynamic CNN paper. I can see how it can help find features in images, given that the pixel with the highest density gets pooled, but how does it help to find features in words?

+",30885,,2444,,1/11/2021 0:51,10/3/2022 4:05,How can max-pooling be applied to find features in words?,,1,1,,,,CC BY-SA 4.0 +16680,2,,16646,11/21/2019 0:46,,1,,"

No, the way human minds think is in no way related to the way an AI thinks. Although you could say that AI is a much simpler form that represents how the brain processes information. For the human brain to think, sense, and act there are billions of connections is various cortex's of the brain that process information in different ways. If talking about brain information as electrical signals you could say that different cortex's of the brain have change in power of specific frequency bands of the brain signal which can be decoded as planning, preparation, thoughts, visual, movement, creativity, attentiveness and much more.

+ +

So, to answer your question AI could be considered as an extremely minute extension of human intelligence. It's like comparing our solar system to the Milky Way, although the comparison maybe a bit too large as we are slowly becoming able to understand the underlying processes and build fast processors mimicking brain processing and efficient power consuming hardware tech to run humongous neural nets. In the soon future your statement may hold true.

+",16740,,,,,11/21/2019 0:46,,,,0,,,,CC BY-SA 4.0 +16681,2,,16679,11/21/2019 2:28,,1,,"

In an image, you are usually pooling over some (n x n) set of positions, which lets you maintain spatial correlation. On the other hand, most 1D CNNs used for language modeling pool over the entire temporal axis, completely annihilating any form of temporal correlation within the resulting feature vector. I take it this difference is what confuses you.

+ +

For simplicity's sake, I'll keep my explanation to max-pooling, but it can extend to any of the others.

+ +

Max-pooling, as the name suggests, just takes the maximum value from a set of candidates. In images, you commonly see max-pooling over neighborhoods of pixels. This is because, if we assume high activation is correlated with some feature being more prevalent (this assumption is not inherent; it's more learned than anything else by construction of the architecture/training), then by doing this we keep the most prevalent features, maintain spatial correlation, and reduce the dimensionality.

+ +

When using 1D CNNs for LMs rather than RNNs the idea is that each kernel is searching for a feature, and so you use an exorbitant number of the kernels at various sizes to search for a lot of features. Pooling over the temporal dimension then just tells you at each neuron how prevalent its corresponding feature is in that time series input. That is why your output is usually a feature vector of size equal to the number of kernels you used.
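
A minimal Keras sketch of that setup (illustrative only; the shapes and sizes are made up). Note how global max-pooling over the temporal axis collapses each kernel's activations into a single ""how prevalent was this feature"" number:

import tensorflow as tf

seq_len, embed_dim, num_kernels = 50, 128, 100

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, embed_dim)),          # embedded word sequence
    tf.keras.layers.Conv1D(num_kernels, kernel_size=3, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),                # max over the temporal axis
])
print(model.output_shape)  # (None, 100): one value per kernel/feature detector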

+ +

A point I want to emphasize is that pooling takes multiple inputs and returns one based on some condition/function, but that is vague, and its usage in different scenarios can be for different goals (to an extent).

+",25496,,,,,11/21/2019 2:28,,,,2,,,,CC BY-SA 4.0 +16682,5,,,11/21/2019 3:06,,0,,,2444,,2444,,11/21/2019 3:06,11/21/2019 3:06,,,,0,,,,CC BY-SA 4.0 +16683,4,,,11/21/2019 3:06,,0,,"For questions related to the C4.5 algorithm, which is a decision tree learning developed by Ross Quinlan in 1993. The C4.5 algorithm is the successor of the ID3 algorithm, which was developed in 1986 by Ross Quinlan.",2444,,2444,,11/21/2019 3:06,11/21/2019 3:06,,,,0,,,,CC BY-SA 4.0 +16684,1,16732,,11/21/2019 3:25,,1,237,"

I was learning about the maximum a posteriori probability (MAP) estimation for machine learning and I found a nice short video that essentially explained it as finding a distribution and tweaking the parameters to fit the observed data in a way that makes the observations most likely (makes sense).

+ +

However, in mathematical terms, how does it determine which distribution best fits the data?

+ +

There are so many distributions out there that it could be any of them, and the number of parameter settings you could fit them with is infinite.

+",30885,,2444,,11/21/2019 14:02,11/23/2019 17:14,How does maximum approximation of the posterior choose a distribution?,,1,0,,,,CC BY-SA 4.0 +16685,2,,15992,11/21/2019 3:41,,0,,"

My thoughts

+ +

AI is already indirectly regulated. This is important to acknowledge and this acknowledgement is missing, in my opinion, in the discourse about law and AI.

+ +

I'm assuming that your question is about law that directly aims at AI technologies and this exemplifies one of the risks of regulating AI: that the law will focus on the technology rather than outcomes.

+ +

Another concern is that law that is inadequate or quickly outdated creates a false sense of security, and this could create a situation which is even more dangerous than if the laws were not there.

+ +

Law and innovation

+ +

When it comes to the view that law stifles innovation, it is paramount to acknowledge that some regulation can have a very positive effect. There is no general rule that there is an inverse relation between law and innovation.

+ +

Pacing problem and Collingridge dilemma

+ +

The following is basically what Wendell Wallach says in an episode of the Future of Life Institute's AI Alignment Podcast entitled Machine Ethics and AI Governance with Wendell Wallach.

+ +
+

The pacing problem refers to the fact scientific discovery, and technological innovation, is far outpacing our ability to put in place appropriate ethical legal oversight.

+
+ +

Wendell Wallach continues to say that the pacing problem converges with what is now called the Collingridge Dilemma, a problem that has 'bedevilled' people in technology and governance since 1980, and he defines it in the following way:

+ +
+

While it was easiest to regulate a technology early in its development, early in its development we have little idea of what its societal impact would be. By the time we did understand what the challenges and the societal impact the technology would be so deeply entrenched in our society that it would be very difficult to change its trajectory.

+
+ +

See also:

+ + +",3526,,3526,,11/21/2019 22:39,11/21/2019 22:39,,,,0,,,,CC BY-SA 4.0 +16686,2,,2876,11/21/2019 5:00,,0,,"

Here are some problems that my ape mind came up with.

+ +

1. Smart != all knowing

+ +

The AI self-improvement explosion makes it smarter and smarter. Being smarter doesn't mean knowing more facts. I suppose this is quite a picky argument, but I think it is worth thinking about.

+ +

A very smart doctor who doesn't know your history may still make a worse choice than a less intelligent one with better data.

+ +

2. Is it all for humans? Is it for all humans?

+ +

An ASI which reaches a higher level might not be interested in our wellbeing.

+ +

A controlled ASI could still work for the benefit of the few only; if these few decide on the wrong goals, we could go backwards.

+ +

3. Harsh ASI

+ +

A scientific mind is not necessarily full of sympathy or empathy.

+ +

4. Being smart and not being clever

+ +

Great minds still make mistakes: in setting their goals, and in executing the plan to achieve them.

+ +

Great intellect doesn't guarantee a lack of shortsightedness or a lack of blind spots.

+ +

5. Limits

+ +

If there are bounds on existence (speed-of-light type limits), then the AI will be bound by these as well. This may mean that there are things that even an ASI won't 'get'. Also, just as our mind may have limits based on its structure, the next AI may have limits as well - and even if it improves upon itself, it may hit limits that it cannot find solutions to because it's 'too stupid'.

+ +

6. We won't get it

+ +

An ASI's understanding of certain aspects of the world may not be communicable to most humans. We just won't get it (even if we're capable of understanding everything, it doesn't mean we will understand it).

+ +

7. How to use it?

+ +

We may destroy ourselves, and the AI with the tech it helps us build. It doesn't need to be bombs. It can be geoengineering or wonder drugs.

+ +

This is especially intense when the ASI is already powerful but not strong enough to foresee negative consequences (or when we just ignore them anyway).

+",3526,,,,,11/21/2019 5:00,,,,0,,,,CC BY-SA 4.0 +16687,1,16720,,11/21/2019 10:16,,5,100,"

In the context of the variational auto-encoder, can someone give me a concrete example of the application of the Bayes' rule

+ +

$$p_{\theta}(z|x)=\frac{p_{\theta}(x|z)p(z)}{p(x)}$$

+ +

for a given latent variable and observable?

+ +

I understand with VAE's we're essentially getting an approximation to $p_{\theta}(z|x)$ that models the distribution that we think approximates the latent variables, but I need a concrete example to really understand it.

+",30885,,2444,,11/22/2019 20:35,11/22/2019 20:35,Concrete example of latent variables and observables plugged into the Bayes' rule,,1,0,,,,CC BY-SA 4.0 +16688,1,,,11/21/2019 13:10,,3,303,"

Why is AI (or why is it not) a good option for the generation of random numbers? Would GANs be suited for this purpose?

+",31483,,2444,,11/21/2019 13:49,11/21/2019 15:05,Why AI is (or not) a good option for the generation of random numbers?,,1,0,,,,CC BY-SA 4.0 +16689,1,,,11/21/2019 14:50,,3,20,"

I am trying to predict the solution time for riddles in which matchsticks are combined into digits and operators. An example of a matchstick riddle is 4-2=8. The solution for this riddle would be obtained by moving one matchstick from the ‘8’ to the ‘-’ sign, resulting in the correct equation 4+2=6. The data consists of 100 riddles and the corresponding solution times. The two types of features that are available for each riddle are:

+ +
    +
  • a 23 dimensional binary vector that indicates which of the available positions are filled with matches +or
  • +
  • a 12-dimensional integer vector that counts the appearance of each token (10 digits, 2 operators)
  • +
+ +

Although neural nets are very popular today, I am not sure that a neural net is the best choice for this particular problem. Firstly, because the data set is very small. Secondly, because of the binary inputs. What might be a more effective model for this problem?

+",31485,,,,,11/21/2019 14:50,What is a good model for regression problem with binary features and small data?,,0,0,0,,,CC BY-SA 4.0 +16690,2,,16688,11/21/2019 14:52,,2,,"
+

Why AI is (or not) a good option for the generation of random numbers?

+
+ +

AI approaches are generally not good for generating random numbers, for these reasons:

+ +
    +
  • Similar to why they are not good for adding numbers, there already exist many strong pseudo-random and ""true"" random sources, possible without using any AI approach, and demonstrably good enough for purposes such as simulations and cryptography. These existing algorithms and devices perform much more efficiently at the task than any AI could.

  • +
  • Generating random numbers with good statistical properties is a very precise task, that the approximations typically used in machine learning cannot cope with (a pseudo-random function is inherently noisy and difficult to learn). AI search techniques are not helpful either, because the amount of output that needs to be generated in order to validate a random data source is very large, making things like tree searches for solutions intractable.

  • +
+ +

One possible relationship between AI and random number generation that could be explored is in code generation and code analysis for random number generators. In other words, an AI that is sophisticated enough to understand the complexity of code used in a typical random number generator (RNG) might help with making random number generation more efficient, or less easy to hack, or expose flaws in existing implementations. This is beyond the capabilities of AI techniques at the moment - or put another way, human intelligence has taken RNG technology quite far already, so there is very little practical that current AIs can do to improve on it.

+ +
+

Would GANs be suited for this purpose?

+
+ +

No. The input to a GAN is typically a vector of random numbers that have been generated by an already-existing RNG. The purpose of the GAN is to convert this simply-distributed random vector input $\mathbf{x}$ - which might be suitable for many other purposes, such as running simulations of stochastic processes - into a $\mathbf{y}$ sampled from a distribution of some target population $\mathbb{Y}$, which is typically being expressed as a subset of a large space $\mathbb{R}^d$. The values that $\mathbf{y}$ will take are biased to a subspace (or manifold) of all possible space within the representations possible in that vector space. For instance, you might input a random vector with 100 variables and the GAN will generate a picture of a dog. It cannot generate anything other than attempts at pictures of dogs, whilst the space of all possible images that it might create if completely random is far larger (and usually not interesting to look at).

+ +

So a GAN would take an already-good random source and turn it into a biased set of numbers that would be generally unusable as a RNG without further processing. The kind of processing you would do (e.g. ""noise whitening"") is something that some current ""secure"" random generators do - meaning to use the GAN to generate quality random numbers you would literally put existing RNG code on its input and output layers. The GAN would add nothing but wasted computation.

+ +

You cannot even train a GAN to do this task, since a truly random space contains all possible outputs, there will be no negative examples for the discriminator. Although a random picture most often looks like static noise, and a picture of the Mona Lisa is clearly considered ""not random"" by most people, in fact it is just as likely to be seen as any other image.

+",1847,,1847,,11/21/2019 15:05,11/21/2019 15:05,,,,2,,,,CC BY-SA 4.0 +16691,1,,,11/21/2019 15:24,,2,27,"

I want to build the YOLO architecture in Keras, but I can't understand the basic idea behind training YOLO. For example, how do we define the labels when there is no object present - what do we have to do? Do we have to set the bounding box to 0, or simply not include that block at all? It's quite confusing.

+",31487,,,,,11/21/2019 15:24,yolo output and how to define labels for backpropogation on it,,0,0,,,,CC BY-SA 4.0 +16692,1,,,11/21/2019 16:42,,1,30,"

I am reading the Wikipedia article on gradient boosting. There is written:

+ +
+

Unfortunately, choosing the best function $h$ at each step for an arbitrary loss function $L$ is a computationally infeasible optimization problem in general. Therefore, we restrict our approach to a simplified version of the problem.

+
+ +

How would the ""best function"" been constructed if there are no computationally limitations?

+ +

Because the wiki gives me the idea that the model could have been better, but that compromises have been made.

+",30599,,2444,,11/21/2019 21:17,11/21/2019 21:17,"How would the ""best function"" been constructed if there are no computationally limitations?",,0,0,,,,CC BY-SA 4.0 +16693,5,,,11/21/2019 16:42,,0,,,2444,,2444,,11/21/2019 16:42,11/21/2019 16:42,,,,0,,,,CC BY-SA 4.0 +16694,4,,,11/21/2019 16:42,,0,,"For questions related to the depth-first search algorithm, in the context of artificial intelligence.",2444,,2444,,11/21/2019 16:42,11/21/2019 16:42,,,,0,,,,CC BY-SA 4.0 +16695,1,16700,,11/21/2019 17:20,,0,62,"

I'm relatively new to Deep Learning, and trying various models and datasets using Keras. I'm starting to love it!

+ +

Through-out my experimentations, I have come into some semantic questions that I don't know how they can affect the overall accuracy of my trained model. My target application is fire detection in videos (fire vs non-fire). So I'm trying to get tips and tricks from those well experienced on Deep learning, and here are my semantic questions:

+ +
    +
  1. Given that I have to do detection on videos, I've been mostly adding actual frames of videos to my dataset, and fewer photos. Does adding photos from Google ever help (as we enlarge our dataset), or is it actually considered more like noise and should be removed?

  2. +
  3. I've trained a deep model (ResNet50) as well as a shallow 5-layer model. I realized the ResNet50 model is more sensitive and has a high recall (all fires are definitely detected), but has false positives as well (strong source of lights like sunlight or lamps are identified as fire). While the shallower model is 10x faster, it can miss fires if it is smaller in the image, so it's less sensitive. But also has low false positives. Is it always true? So what are techniques and tips to fix these issues in each of these models?

  4. +
+ +

For instance, the shallow model doesn't see this fire. Should I conclude that it's not complex enough to work well when the scene has many objects in it?

+ +

+ +
    +
  1. The sample code I saw resizes photos to 256x256 for training. What's the effect of bigger sizes vs smaller ones say 300x300? Can I expect while bigger sizes increase computation time, they provide higher accuracy?

  2. +
  3. The sample code also converts photos to grayscale and uses Antialiasing before passing. Does it have good effects? What if I pass the colored version as fire is mostly about colors?

  4. +
  5. When I see the model is doing badly on certain scenes (say, those with sunlight or lamps), I take multiple of those frames and add them to my non-fire dataset. Does this have any positive effect, so that the issue gets taken care of? And is it better to add multiple successive frames, or is just one frame enough?

  6. +
  7. My fire dataset has 1800 images and my non-fire dataset has 4500 images. As a rule of thumb, the bigger each class, the better? Of course the non-fire data should be bigger, but we can not add whatever on earth as non-fire so what should be the distribution of the sizes?

  8. +
+",9053,,9053,,11/21/2019 23:13,11/21/2019 23:13,Techniques and semantics in better training of deep learning models,,1,3,,11/24/2020 2:58,,CC BY-SA 4.0 +16696,1,,,11/21/2019 18:28,,2,30,"

I trained a binary classifier using ML.NET's AutoML feature on a small dataset (compared to other, similar models I've trained that seem to work well)-around 500 rows with around 50 features. AutoML used cross-validation with 5 folds.

+ +

The training data is balanced to about 200 positive cases to 300 negative cases, which isn't an unreasonable representation of the real world based on domain knowledge.

+ +

The model's metrics are poor compared to other, similar models, e.g.:

+ +
    +
  • Accuracy: 0.64
  • +
  • Positive Precision: 0.375
  • +
  • Positive Recall: 0.09
  • +
  • Negative Precision: 0.67
  • +
  • Negative Recall: 0.92
  • +
  • F1 Score: 0.15
  • +
+ +

When the model is run against unseen data, it predicts the negative case 99% of the time.

+ +

If the accuracy were truly as stated in the metric, a correct classification 2/3 of the time would have some practical value in this application. However, the actual predictions being the negative case 99% of the time are surely flawed.

+ +

Is the training set too small to expect reasonable results? Is there anything I can do to improve the model?

+",31489,,,,,11/21/2019 18:28,Troubleshooting Binary Classifier,,0,0,,,,CC BY-SA 4.0 +16698,1,,,11/21/2019 20:12,,2,57,"

I'm trying to use DQN to solve the cart-pole environment. I have 2 networks (target and behavior). Both of them have 3 hidden layers with 24 neurons, using the ReLU activation. The loss is MSE and the optimizer is Adam. I copy the weights of the behavior network to the target network every 15 backpropagation steps.

+ +

My agent learns. Below you can see the total reward and running average plots.

+ +

+ +

However, it has a lot of ""drops"". Usually, after a couple of perfect sequences, it just ""kills"" the running average with a couple of very short episodes. What may be the reason for this behavior?

+",31494,,2444,,11/22/2019 0:11,11/22/2019 0:11,What could be the cause of the drop of the total reward when using DQN to solve the cart-pole environment?,,0,0,,,,CC BY-SA 4.0 +16700,2,,16695,11/21/2019 22:26,,2,,"

I try to answer the things I know for sure:

+ +
    +
  1. One effect of bigger images is the increasing computation time due to more pixels (input to your training)
  2. +
+ +

4. Grayscaling reduces the information, which might decrease training time, but also model performance (accuracy, precision, recall). What I have seen is that grayscaling is used, for example, in face detection, where structural information (shapes) is important and colors can be regarded as secondary. For your example, you have to determine whether such structural differences between positives and negatives exist and whether color is valuable information.

+ +
    +
  1. If possible, I would somehow alter these copies by clipping out the objects (sun lights, lamps) and rotating, scaling, making them transparent, etc. (see the short augmentation sketch after this list).

  2. +
  3. In the end your training data should be representative of the data it should generalize to. If you would for example take my user icon as an additional non fire example, this wouldn't help your cause.

  4. +
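
A minimal sketch of the kind of alteration meant in the first point above (illustrative only; it uses Keras' ImageDataGenerator and the parameter values are arbitrary):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations, shifts, zooms and brightness changes applied to the hard
# negative frames (lamps, sunlight) to get more varied non-fire examples.
augmenter = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    brightness_range=(0.7, 1.3),
    horizontal_flip=True,
)
# flow_from_directory yields augmented batches on the fly during training:
# train_gen = augmenter.flow_from_directory('data/train', target_size=(256, 256))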
+",31498,,,,,11/21/2019 22:26,,,,1,,,,CC BY-SA 4.0 +16701,5,,,11/21/2019 22:29,,0,,,2444,,2444,,11/21/2019 22:29,11/21/2019 22:29,,,,0,,,,CC BY-SA 4.0 +16702,4,,,11/21/2019 22:29,,0,,"For questions related to ensemble learning, which refers to machine learning techniques where multiple models (e.g. a neural network and a decision tree) are trained and their predictions are combined to solve the same problem. Bagging and boosting are two popular ensemble learning techniques.",2444,,2444,,11/21/2019 22:29,11/21/2019 22:29,,,,0,,,,CC BY-SA 4.0 +16703,1,,,11/21/2019 22:38,,3,218,"

In the Trust Region Policy Optimization paper, in Lemma 1 of Appendix A, I didn't quite understand the transition from (21) from (20). In going from (20) to (21), $A^\pi(s_t, a_t)$ is substituted with its value. The value of $A^\pi(s_t, a_t)$ is given as $\mathbb{E}_{s'∼P(s'|s,a)}[r(s) + \gamma V_\pi(s') − V_\pi(s)]$ at the very beginning of the proof. But when $A^\pi(s_t, a_t)$ gets substituted, I don't see the expectation (over $s'∼P(s'|s,a)$) appearing anywhere. It will be of great help if somebody lends some light on this.

+",31499,,2444,,1/21/2023 17:33,6/20/2023 18:07,"In lemma 1 of the TRPO paper, why isn't the expectation over $s'∼P(s'|s,a)$?",,1,0,,,,CC BY-SA 4.0 +16704,2,,15992,11/21/2019 22:51,,1,,"

I don't think regulating something necessarily causes that regulation to de facto become a ""risk"".

+ +

Regulation - including overregulation - may, in fact, aid in the dialogue between practitioners, which may end up educating the regulators, the public and the practitioners themselves.

+ +

My answers to your survey would most likely be ""it depends..."", or ""no risk"", which isn't to say it's not an impediment, but just not a ""risk"", per se.

+",31496,,,,,11/21/2019 22:51,,,,0,,,,CC BY-SA 4.0 +16705,1,,,11/21/2019 23:03,,1,41,"

How would we define a set that contains itself within a knowledge ontology?

+ +

I am thinking that set membership would probably inherit from a generic base class of total containment from which both physical containment and conceptual containment are derived.

+ +
    +
  • total containment + +
      +
    • physical containment
    • +
    • conceptual containment + +
        +
      • set containment
      • +
    • +
  • +
+",31497,,2444,,11/21/2019 23:17,11/21/2019 23:17,How would we define a set that contains itself within a knowledge ontology?,,0,2,,,,CC BY-SA 4.0 +16706,1,,,11/21/2019 23:57,,7,199,"

Imagine a game played on a 10x10 grid system where a player can move up, down, left, or right, and imagine there are two players on this grid: an enemy and you. In this game, there are walls on the grid which you can't go through. The objective of this game is to block the enemy in so he can't move around the rest of the board and is effectively ""trapped"".

+ +

I want to write an algorithm that detects which nodes on the board I, as a player, need to put blocks in, in order to trap the enemy. There are also some other considerations to think about. You have to be able to place the blocks before the enemy can get out of the box. Also note one more thing: you can move AND place a block in the position that you're moving to at the same time.

+ +

Here's a picture as an example of the game.

+ +

+ +

EDIT: note that the board in the picture is 5x5, but that's okay for the purposes of the example

+ +

In this example, I could go up, then right and place a block, then right and place a block, then up and place a block. If there's more than one way of blocking off the enemy, then I should use the way that's going to give my enemy the least amount of space.

+ +

Searching on Google didn't turn up anything relevant, although that may have been because I wasn't using relevant search terms. I also thought about using a Monte Carlo tree search algorithm for simultaneous games, but I would need to research that further.

+",31501,,2444,,11/22/2019 0:04,7/29/2023 19:00,How could an AI detect whether an enemy in a game can be blocked off/trapped?,,1,3,,,,CC BY-SA 4.0 +16707,1,,,11/22/2019 0:16,,5,294,"

Suppose we have $1000$ products that we want to detect. For each of these products, we have $500$ training images/annotations. Thus we have $500,000$ training images/associated annotations. If we want to train a good object detection algorithm to recognize these objects (e.g. YOLO) would it be better to have multiple detection models? In other words, should we have 10 different YOLO models where each YOLO model is responsible for detecting 100 products? Or is it good enough to have one YOLO model that can detect all 1000 products? Which would be better in terms of mAP/recall/precision?

+",28201,,2444,,11/23/2019 18:12,11/24/2019 17:04,Should I train different models for detecting subsets of objects?,,1,2,,,,CC BY-SA 4.0 +16708,5,,,11/22/2019 0:45,,0,,,2444,,2444,,11/22/2019 0:45,11/22/2019 0:45,,,,0,,,,CC BY-SA 4.0 +16709,4,,,11/22/2019 0:45,,0,,"For questions related to the concept of novelty search (NS), where individuals in an evolving population (of an evolutionary algorithm) are selected based on how different they are compared to all of the other individuals evaluated so far. NS was proposed in ""Exploiting Open-Endedness to Solve Problems Through the Search for Novelty"" (2008) by Joel Lehman and Kenneth O. Stanley.",2444,,2444,,11/22/2019 0:45,11/22/2019 0:45,,,,0,,,,CC BY-SA 4.0 +16710,2,,5347,11/22/2019 2:14,,3,,"

In the paper Exploiting Open-Endedness to Solve Problems Through the Search for Novelty (2008), by Joel Lehman and Kenneth O. Stanley, which introduced the novelty search approach, it is written

+
+

Thus this paper introduces the novelty search algorithm, which searches with no objective other than continually finding novel behaviors in the search space.

+
+

and

+
+

instead of searching for a final objective, the learning method is rewarded for finding any instance whose functionality is significantly different from what has been discovered before

+
+

and

+
+

The novelty of a newly generated individual is computed with respect to the behaviors (i.e. not the genotypes) of an archive of past individuals whose behaviors were highly novel when they originated

+
+

Therefore, the goal of a novelty search is to search for novel behavior and not necessarily novel chromosomes (or genotypes).

+

In the experiments reported in the novelty search paper, the authors use neural networks to represent the policy that controls a robot that needs to navigate a maze, while evolving these neural networks with NEAT (a neuroevolution method) and a novelty metric (rather than a fitness metric, which is used in the original NEAT). In the same experiments section, Lehman and Stanley write

+
+

Thus, for the maze domain, the behavior of a navigator is defined as its ending position. The novelty metric is then the Euclidean distance between the ending positions of two individuals. For example, two robots stuck in the same corner appear similar, while one robot that simply sits at the start position looks very different from one that reaches the goal, though they are both equally viable to the novelty metric.

+
+

Therefore, the evolution of the neural networks, which represent the controllers, is not necessarily guided by the novelty of (the architecture of) the neural networks but by the novelty of the behavior generated by the neural networks, even though novel neural networks might correspond or lead to novel behaviors.
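To make the idea of a behavioral novelty score a bit more concrete, here is a rough Python sketch (not taken from the paper itself; the archive contents and the choice of averaging over the $k$ nearest neighbours are illustrative assumptions):

import numpy as np

def novelty(behavior, archive, k=3):
    # Average Euclidean distance from a behavior (e.g. a robot's final (x, y)
    # position in the maze) to its k nearest neighbours in the archive of past behaviors.
    if not archive:
        return float("inf")
    dists = np.sort([np.linalg.norm(np.array(behavior) - np.array(b)) for b in archive])
    return float(np.mean(dists[:k]))

archive = [(0.0, 0.0), (1.0, 0.5), (1.5, 0.0)]   # past novel behaviors (made-up values)
print(novelty((9.0, 8.0), archive))              # far from everything seen: high novelty
print(novelty((1.0, 0.4), archive))              # close to archived behaviors: low novelty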

+",2444,,2444,,5/26/2022 9:58,5/26/2022 9:58,,,,2,,,,CC BY-SA 4.0 +16713,1,,,11/22/2019 10:09,,2,110,"

Here is a list of meta-heuristic algorithms

+ +
    +
  • Ant colony optimization,
  • +
  • Ant lion optimizer,
  • +
  • Artificial bee colony algorithm,
  • +
  • Bat algorithm,
  • +
  • Cat swarm optimization,
  • +
  • Crow search algorithm,
  • +
  • Cuckoo optimization algorithm,
  • +
  • Cuckoo search algorithm,
  • +
  • Differential evolution,
  • +
  • Firefly algorithm,
  • +
  • Genetic algorithm,
  • +
  • Glowworm swarm optimization,
  • +
  • Gravitational search algorithm,
  • +
  • Grey wolf optimizer,
  • +
  • Harmony search,
  • +
  • Multi-verse optimizer,
  • +
  • Particle swarm optimization,
  • +
  • Shuffled complex evolution,
  • +
  • Simulated annealing,
  • +
  • Tabu search,
  • +
  • Teaching-learning-based optimization
  • +
+ +

Can anyone explain the similarities and dissimilarities of evolutionary game theory and the meta-heuristic approach?

+",9863,,2444,,11/22/2019 14:50,11/22/2019 14:50,What is the difference between evolutionary game theory and meta-heuristics?,,0,0,,,,CC BY-SA 4.0 +16714,1,,,11/22/2019 10:09,,1,126,"

Problems I often face at work usually differ from tutorial or book-like examples, so I end up with code that works but isn't elegant and takes too much time to write.

+ +

I wanted to ask you if there are some publicly accessible examples or repositories of Python code that deal with the machine learning development and application process, but were created in a real company or organisation to develop their real-life products or services?

+ +

EDIT: What I do not mean are libraries or package repositories such as TensorFlow. I would like to see the code of projects that, for example, use TensorFlow to create some other product or service.

+",22659,,22659,,11/24/2019 15:27,5/25/2020 9:42,Are there any public real-life code examples of ML applications in Python?,,3,4,,12/14/2021 22:07,,CC BY-SA 4.0 +16715,2,,16703,11/22/2019 10:18,,0,,"

Let's assume $\gamma = 1$ to simplify things
\begin{align}
\mathbb E_{\tau|\pi} \left[\sum_{t = 0}^{\infty} A_\pi(s_t, a_t)\right] &= \mathbb E_{\tau|\pi}[A_\pi(s_0, a_0) + \ldots + A_\pi(s_i, a_i) + \ldots]\\
&= \mathbb E_{a_0 \sim \pi,s_1 \sim P(s_1|s_0, a_0)}[A_\pi(s_0, a_0)] + \ldots + \mathbb E_{a_i \sim \pi,s_{i+1} \sim P(s_{i+1}|s_i, a_i)}[A_\pi(s_i, a_i)] + \ldots
\end{align}
If we observe only the $i$-th timestep
\begin{align}
\mathbb E_{a_i \sim \pi,s_{i+1} \sim P(s_{i+1}|s_i, a_i)}[A_\pi(s_i, a_i)] &= \sum_{a'} \left(\mathbb E_{s_{i+1} \sim P(s_{i+1}|s_i, a')}[A_\pi(s_i, a')]\right) \pi(a'|s_i)\\
&= \sum_{a'} \left(\mathbb E_{s_{i+1} \sim P(s_{i+1}|s_i, a')}\left[\mathbb E_{s_{i+1} \sim P(s_{i+1}|s_i, a')}[r(s_i) + V_\pi(s_{i+1}) - V_\pi(s_i)]\right]\right) \pi(a'|s_i)
\end{align}

+

\begin{equation} +\mathbb E[\mathbb E[f]] = \mathbb E[f] +\end{equation}

+

\begin{align}
\mathbb E_{a_i \sim \pi,s_{i+1} \sim P(s_{i+1}|s_i, a_i)}[A_\pi(s_i, a_i)] &= \sum_{a'} \left(\mathbb E_{s_{i+1} \sim P(s_{i+1}|s_i, a')}[r(s_i) + V_\pi(s_{i+1}) - V_\pi(s_i)]\right) \pi(a'|s_i)\\
&= \mathbb E_{a_i \sim \pi,s_{i+1} \sim P(s_{i+1}|s_i, a_i)}[r(s_i) + V_\pi(s_{i+1}) - V_\pi(s_i)]
\end{align}

+

now for all timesteps sum everything up.

+",20339,,2444,,1/21/2023 17:34,1/21/2023 17:34,,,,0,,,,CC BY-SA 4.0 +16716,1,,,11/22/2019 10:26,,1,23,"

Question on transfer learning object classification (MobileNet_v2 with 75% number of parameters) with my own synthetic data:

+ +

I made my own dataset of three shapes: triangles, rectangles and spheres. Each category has 460 samples with different sizes, dimensions, and different wobbles at the edges. They look like this:

+ +

+ +

+ +

I want the network to classify these primitive shapes in other environments as well with different lighting/color conditions and image statistics.

+ +

Even though I'm adding random crops, scaling, and brightness changes, at training step 10 it's already at 100% training and validation accuracy. Cross-entropy keeps going down though. I'm using TensorFlow Hub. The final performance of the network in other environments (a virtual 3D space with such shapes) could be better. I also trained and tested for ~50 steps to see if the network is overfitting, but that doesn't work too well.

+ +

What alterations would you recommend to generalize better? Or shouldn't I train on synthetic data at all to learn primitive shapes? If so, any dataset recommendations?

+ +

Thanks in advance

+",31180,,,,,11/22/2019 10:26,learning object recognition of primitive shapes through transfer learning problem,,0,2,,,,CC BY-SA 4.0 +16717,1,,,11/22/2019 12:03,,8,112,"

In my implementation of Thompson Sampling (TS) for online Reinforcement Learning, my distribution for selecting $a$ is $\mathcal{N}(Q(s, a), \frac{1}{C(s,a)+1})$, where $C(s,a)$ is the number of times $a$ has been picked in $s$.

+

However, I found that this does not work well in some cases depending on the magnitude of $Q(s,a)$. For example, if $Q(s_i,a_1) = 100$, and $C(s_i,a_1) = 1$, then this gives a standard deviation of 0.5, which is extremely confident even though the action has only been picked once. Compare that to $a_2$, which may be the optimal action but has never been picked, so $Q(s_i, a_2) = 0$ and $C(s_i,a_2) = 0$. It is unlikely that TS will ever pick $a_2$.

+

So, how do I solve this problem?

+

I tried normalizing the Q-values such that they range from 0 to 1, but the algorithm returns much lower total returns. I think I have to adapt the magnitude of the standard deviations relative to the Q-values as well. Doing it for one normal distribution is pretty straightforward, but I can't figure out how to do it for multiple distributions, which have to take the other distributions into consideration.

+

Edit: Counts should be $C(s,a)$ instead of $C(s)$ as Neil pointed out

+",31518,,2444,,12/20/2021 14:51,12/20/2021 14:51,Normalizing Normal Distributions in Thompson Sampling for online Reinforcement Learning,,0,5,,,,CC BY-SA 4.0 +16718,2,,16706,11/22/2019 14:49,,-1,,"

If I've understood the logic correctly, the player tries to build a wall such that the enemy cannot reach the player in any way. This can normally be determined by path-finding algorithms, Dijkstra's shortest-path algorithm being a reasonable choice for this setting and grid size. This algorithm explores the possible paths from a starting point to one or multiple end points, and usually returns the shortest path to the point(s). If there is no path to the mentioned point, it will not return anything, and you will know the two points are blocked from reaching each other.

+ +

Of course, if I've truly understood the rules correctly, the bigger question is how to avoid having the player just build a wall around themselves to block off the enemy instead, which is likely going to be trivially easy. Additionally, there could be a rule requiring the player to be in the ""room"" with the bigger area after a room has been blocked off (something which, coincidentally, could also be done by extending Dijkstra's algorithm a bit with custom logic).
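As a rough illustration of the reachability idea (with uniform step costs on a grid, a plain breadth-first search behaves like Dijkstra's algorithm), here is a minimal Python sketch; the grid representation and function names are my own assumptions, not part of the game:

from collections import deque

def reachable(blocked, start, goal):
    # blocked[r][c] is True if the cell contains a wall/block; start and goal are (row, col).
    rows, cols = len(blocked), len(blocked[0])
    frontier, seen = deque([start]), {start}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not blocked[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False   # no path: the two cells are sealed off from each other

If reachable(blocked, enemy, player) returns False after a candidate block placement, the enemy can no longer reach the player.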

+",10746,,,,,11/22/2019 14:49,,,,1,,,,CC BY-SA 4.0 +16720,2,,16687,11/22/2019 16:50,,4,,"

Let's assume the probability distributions are Gaussian (or normal) distributions. In other words, in the Bayes' rule

+ +

\begin{align} +p(z|x)=\frac{p(x|z)p(z)}{p(x)} +\tag{1}\label{1} +\end{align}

+ +

The posterior $p(z|x)$, the likelihood $p(x|z)$, the prior $p(z)$ and the evidence (or marginal) $p(x)$ are Gaussian distributions. You can assume this because Gaussian distributions are closed under conditioning and marginalization.

+ +

For simplicity, let's further assume that they are univariate Gaussian distributions. Given that the Gaussian distribution is a continuous probability distribution, it has an associated probability density function (rather than a probability mass function, which is associated with discrete probability distributions, such as the Bernoulli distribution). The probability density function of the Gaussian distribution is

+ +

\begin{align} +f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} } \tag{2}\label{2} +\end{align}

+ +

where $\mu$ and $\sigma^2$ are respectively the mean and variance of the Gaussian distribution and $x$ is a variable (similarly to a variable $x$ in any mathematical function $f(x)$). So, given a concrete value for $x$, for example, $x=1$, then $f(x=1, \mu, \sigma^2)$ is a so-called density value (rather than a probability, which a probability mass function returns, given an input). For example, let's assume that the mean $\mu=0$ and the variance $\sigma^2 = 1$, then, for $x=1$, the density will be

+ +

$$ +f(1 \mid 0, 1) = \frac{1}{\sqrt{2\pi} } e^{ -\frac{1}{2} } +$$

+ +

So, to obtain the concrete density value, I've just replaced the concrete values of $x$, $\mu$ and $\sigma^2$ in equation \ref{2}.

+ +

To calculate the posterior $p(z|x)$ in equation \ref{1}, you just need to replace the likelihood $p(x|z)$, the prior $p(z)$ and the evidence $p(x)$ with the Gaussian probability density shown in equation \ref{2}, so you will have

+ +

\begin{align} +p(z|x)=\frac{f_{X\mid Z}(x \mid \mu_{X\mid Z}, \sigma^2_{X\mid Z}, z) f_{Z}(z \mid \mu_{Z}, \sigma^2_{Z})}{f_{X}(x \mid \mu_{X}, \sigma^2_{X})} +\tag{3}\label{3} +\end{align}

+ +

I've explicitly added a subscript to the means and variances of each probability density, given that, for example, the mean of the probability density $f_{X\mid Z}$ might be different than the mean of the probability density $f_{Z}$ or $f_{X}$, etc. So, to get the actual density value (a real number) that represents $p(z|x)$, you just need to replace $f_{X\mid Z}$, $f_{Z}$ and $f_{X}$ with the definition of the Gaussian density function in \ref{2} with their actual mean and variance values. I'll let you do this, given that this is really just a matter of picking up a concrete value for the means and variances and doing some algebra.
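For instance, here is a small Python sketch of this computation; the concrete means and variances below are made-up example values (chosen so that the evidence is consistent with a model where $x = z + \text{noise}$), not something you have to use:

import numpy as np

def gaussian_pdf(x, mu, sigma2):
    # Univariate Gaussian density, equation (2), with mean mu and variance sigma2.
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

x, z = 1.0, 0.5
likelihood = gaussian_pdf(x, mu=z, sigma2=1.0)    # f_{X|Z}(x | z)
prior      = gaussian_pdf(z, mu=0.0, sigma2=1.0)  # f_Z(z)
evidence   = gaussian_pdf(x, mu=0.0, sigma2=2.0)  # f_X(x)

posterior_density = likelihood * prior / evidence # equation (3): a single density value for p(z|x)
print(posterior_density)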

+ +

If you assume the posterior, the likelihood, the prior or the evidence to have a different distribution, you will do the same thing, but using the probability density or mass function of your chosen distribution.

+ +

In the context of the variational auto-encoder, you will be learning the mean and variance of the distribution, so the mean and variance will not be fixed, but they will be the parameters that you want to find. However, this does not change the way you apply the Bayes' rule.

+",2444,,2444,,11/22/2019 19:24,11/22/2019 19:24,,,,0,,,,CC BY-SA 4.0 +16722,1,,,11/22/2019 23:15,,3,338,"

I know backpropagation uses cost and gradient descent to tweak the weights to minimize the cost. But how does it know which weights to give more weight to in the first place? Is there something inside each neuron in the hidden layers that defines how this is an important neuron for the correct result in some way? How does the network know how to tweak those weights for that specific neuron?

+",31534,,2444,,11/22/2019 23:19,1/20/2021 21:46,How does the neural-network know how to tweak weights for a specific neuron?,,1,0,,,,CC BY-SA 4.0 +16723,2,,16722,11/22/2019 23:58,,3,,"

tl;dr

+

The whole point of gradient descent is to assess the contribution of each parameter towards the loss. This information is uncovered through the gradient of the loss w.r.t each parameter.

+

A deeper look...

+

Suppose we have a NN with parameters $w_{i}, \; i={1, 2, ...}$. This NN makes some predictions, which we compare to the actual targets and compute a loss $J$. The loss (or cost) function tells us how far off we are from the target. This is what we want to reduce, so that the predictions fall closer to the target.

+

By computing the partial derivative of the loss function $J$ w.r.t a parameter $w_i$ (so this is just one partial derivative and not the full gradient vector)

+

$$ +\frac{\partial J}{\partial w_i} +$$

+

the NN uncovers two pieces of information:

+
    +
  • The slope of $J$ w.r.t $w_i$, which tells the NN how much $w_i$ affects $J$.
  • +
  • Its sign, which tells the NN which way to tweak $w_i$ to decrease (or increase) the value of $J$.
  • +
+

By making the parameter updates depend on the derivative

+

$$ +w_i^{new} \leftarrow w_i^{old} - \lambda \frac{\partial J}{\partial w_i} +$$

+

the NN causes parameters that affect the loss the most, to be updated the most.

+

You can think of this as: parameters that are more to blame for the network's mistakes (i.e. contribute more towards the loss) are forced to change the most, in the direction that will decrease the loss.
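As a tiny numerical sketch of this update rule (the numbers are arbitrary):

import numpy as np

w = np.array([0.5, -1.2, 3.0])       # current parameters w_i
grad = np.array([0.1, -0.4, 2.5])    # dJ/dw_i: the larger the magnitude, the more "blame"
learning_rate = 0.1

w_new = w - learning_rate * grad     # the third parameter, with the largest gradient, changes the most
print(w_new)                         # [ 0.49 -1.16  2.75]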

+",26652,,2444,,1/20/2021 21:46,1/20/2021 21:46,,,,0,,,,CC BY-SA 4.0 +16725,1,,,11/23/2019 8:10,,3,440,"

I read very often that Bayesian algorithms work well on small datasets. Why is that? I think it is because they might generalize more, but why is that?

+ +

See also Investigating the use of Bayesian networks for small dataset problems.

+",30599,,2444,,11/23/2019 17:27,12/13/2021 10:28,Why do Bayesian algorithms work well with small datasets?,,1,0,,,,CC BY-SA 4.0 +16726,2,,4377,11/23/2019 8:46,,2,,"

There are so many different versions of spiking neural networks out there. I think it is mainly because there has been no dominant, successful SNN model with a proper learning algorithm, like CNNs with backpropagation. However, there have been several recent papers (e.g. SuperSpike, SLAYER) on SNNs that may lead to a standard framework for SNNs. This happened within about 2 years, so it is one of the reasons why there have been no friendly introductions to SNNs so far. I found some blog posts on SNNs in a general sense, but they didn't cover recent important trends in SNNs.

+ +

Currently, the best way to learn about SNN is by reading papers. One paper that I would recommend is ""Surrogate Gradient Learning in Spiking Neural Networks"" which comprehensively reviews recent works on supervised learning in SNN with back-propagation.

+ +

If you want to implement an SNN from scratch, I would recommend you check out BindsNET (github, paper), which is an SNN framework based on PyTorch. To me, it was the most intuitive to use and understand compared to other existing SNN libraries. It covers various neuron models and learning rules. But I'm not sure whether it also covers the learning rules described in ""Surrogate Gradient Learning in Spiking Neural Networks"".

+",31541,,,,,11/23/2019 8:46,,,,0,,,,CC BY-SA 4.0 +16727,1,,,11/23/2019 9:24,,1,81,"

I am trying to build a multi-label classification model, having a dataset with different numerical input values and a specific label for each...

+ +

Eg:

+ +

Value Label

+ +

35 X

+ +

35.8 X

+ +

29 Y

+ +

29.8 Y

+ +

39 AA

+ +

41 CB

+ +

So, depending on the numerical input value, the model should specify its label. Please note that the input values won't necessarily follow the exact dataset values, e.g. the dataset has 35 and 35.8 as input values with X as the label, so if the model gets 35.4 as the input value, then X should be the output label. The bottom line is that the output label is based on a range of input values instead of a fixed one.

+ +

Can anyone help me with a quick solution (an example Jupyter notebook would be highly appreciated)?

+",30642,,31544,,11/23/2019 13:38,11/23/2019 16:42,Multi label Classification using Keras,,1,0,,4/9/2022 7:06,,CC BY-SA 4.0 +16728,1,16765,,11/23/2019 11:26,,2,970,"

I was talking with a former colleague and he told me that a decision tree implicitly applies feature selection. He told me that the most important features are higher in the tree because of the use of the information-gain criterion.

+ +

What does he mean with this and how does this work?

+",30599,,2444,,11/23/2019 17:54,11/25/2019 18:51,How does the decision tree implicitly do feature selection?,,1,4,,,,CC BY-SA 4.0 +16729,1,,,11/23/2019 14:22,,6,140,"

For the purposes of object detection, are there any easy ways to create annotated training images? For example, if we have $10,000$ images and want to draw bounding boxes on 2 objects for each image, do we have to physically draw those boxes? Is that what most people do these days to create training data?

+",31547,,2444,,11/23/2019 17:52,12/4/2020 10:15,Are there any easy ways to create annotated training images for object detection?,,0,5,,,,CC BY-SA 4.0 +16731,2,,16727,11/23/2019 16:42,,1,,"

For a simple multi-layer perceptron, you can refer to: https://www.kaggle.com/fchollet/simple-deep-mlp-with-keras

+ +

This is a great resource for Keras multi-label classification. Also, here are a few reminders for implementing such a classification model.

+ +

One hot encoding

+ +

In the sample data you provided, it seems like you are using raw numbers as input. This will create unnecessary complications for the model. A better approach would be to encode each value into a vector of 0s with a single 1 at the index corresponding to the numerical value you are encoding. For example, 1 becomes 0100000... (up to the end of your input range), 2 becomes 0010000..., and so on. If you have decimal values, then this approach isn't for you. However, you should still scale your input down to a smaller range, like 0-1.

+ +

output encoding

+ +

The output seems to be text, which cannot be used directly and also complicates the problem. Instead, you can use one-hot encoding for the output as well. For example, let A be 0, B be 1, C be 2, ..., AA be 26, and so on until your list ends, and then one-hot encode these indices into vectors of 0s and 1s.
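A minimal sketch of this label encoding with Keras utilities (the label list below is just the one from your question, and the variable names are mine):

from tensorflow.keras.utils import to_categorical

labels = ["X", "X", "Y", "Y", "AA", "CB"]
label_to_index = {lab: i for i, lab in enumerate(sorted(set(labels)))}
y = [label_to_index[lab] for lab in labels]          # text labels -> integer indices

y_one_hot = to_categorical(y, num_classes=len(label_to_index))
print(y_one_hot)   # each row is a vector of 0s with a single 1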

+ +

Hope I can help you.

+",23713,,,,,11/23/2019 16:42,,,,0,,,,CC BY-SA 4.0 +16732,2,,16684,11/23/2019 17:08,,3,,"

Introduction: MAP finds a point estimate!

+ +

As opposed to your apparently current belief, in maximum a posteriori (MAP) estimation, you are looking for a point estimate (a number or vector) rather than a full probability distribution. The MAP estimation can be seen as a Bayesian version of the maximum likelihood estimation (MLE). Therefore, I will first remind you of the objective of MLE.

+ +

Maximum likelihood estimation (MLE)

+ +

Let $\theta$ be the parameters you want to find. For example, $\theta$ can be the weights of your neural network. In MLE, we want to find a point estimate (rather than a full distribution). The objective in MLE is

+ +

\begin{align} +\theta^* +&= \operatorname{argmax}_\theta p(X \mid \theta) \tag{1}\label{1} +\end{align}

+ +

where $p(X \mid \theta)$ is the likelihood of the data $X$ given the parameters $\theta$. In other words, we want to find the parameters $\theta$ such that $p(X \mid \theta)$ is the highest, where $X$ is your given training data, so $X$ is fixed.

+ +

The notation $p(X \mid \theta)$ can be confusing because, in a conditional probability distribution, $p(a\mid b)$, we often assume that $b$ is given and $p(a\mid b)$ is a distribution over $a$. However, in the case of MLE, $\theta$ in $p(X \mid \theta)$ is not fixed, but it is a variable, while $X$ is given and fixed. Hence we call $p(X \mid \theta)$ a likelihood rather than a probability density or mass function. Moreover, we often denote the likelihood as $\mathcal{L}(\theta; X) = p_{\theta}(X)$ (and there are other notations, but this is, in my opinion, the least confusing one), because we want to emphasize that the likelihood is actually a function of the variable $\theta$. However, this notation can also be confusing because we equate a function of a variable $\theta$ to a probability distribution over $X$, so you should note that $p_{\theta}(X)$ is parametrized by $\theta$.

+ +

Therefore, the MLE estimation \ref{1} can also be written as follows

+ +

\begin{align} +\theta^* +&= +\operatorname{argmax}_\theta \mathcal{L}(\theta; X) \\ +&=\operatorname{argmax}_\theta p_{\theta}(X) +\tag{2}\label{2} +\end{align} +where $\theta^*$ is the point estimate of the objective function.

+ +

This notation emphasizes the fact that we want to find $\theta$, such that the probability of the given data $X$ is maximized.

+ +

Maximum a posteriori (MAP)

+ +

MAP is similar to MLE, but the objective is slightly different. First of all, we assume that $\theta$ is a random variable, so we have an associated probability distribution $p(\theta)$.

+ +

Recall that the Bayes' rule is the following

+ +

\begin{align} +p(\theta \mid X) = \frac{p(X \mid \theta) p(\theta)}{p(X)} +\tag{3}\label{3} +\end{align}

+ +

The objective function in MAP estimation is

+ +

\begin{align} +\theta^* +&= \operatorname{argmax}_\theta \frac{p(X \mid \theta) p(\theta)}{p(X)} \\ +&= \operatorname{argmax}_\theta p(\theta \mid X) \tag{4}\label{4} +\end{align}

+ +

Given that $p(X)$ does not depend on $\theta$, for the purposes of optimization, we can ignore it, so equation \ref{4} becomes

+ +

\begin{align} +\theta^* +&= \operatorname{argmax}_\theta p(X \mid \theta) p(\theta) \\ +&= \operatorname{argmax}_\theta p(\theta \mid X) +\tag{5}\label{5} +\end{align} +which is the MAP objective.

+ +

What is the relationship between MLE and MAP?

+ +
    +
  • In MAP, the objective is \ref{5}, which includes a prior over $\theta$, while, in MLE, equation \ref{1}, there is no such thing.

    + +
      +
    • Therefore, in MAP, we can assume that the parameters $\theta$ follow a certain distribution, thanks to the usage of $p(\theta)$.
    • +
  • +
  • In both MAP and MLE, we want to find a point estimate (which can be a number, if you have just one parameter, or a vector of size $N$, if you have $N$ parameters).

  • +
  • MAP is equivalent to MLE if you use a uniform prior, that is, if $p(\theta)$ is a uniform distribution.

  • +
+ +

Which distribution fits the data $X$?

+ +

In MAP, the human (you, me, etc.) chooses the family of distributions. For example, you can assume that your parameters $\theta$ follow a Gaussian distribution, so $p(\theta)$ will be a Gaussian distribution over the parameters. Why do I say ""family""? For example, in the case of a Gaussian distribution, you have two parameters that control the shape of the distribution, namely, the mean and variance. Depending on the concrete values of these two parameters, you will have different Gaussian distributions, so you call all these Gaussian distributions a family.

+ +

How do you find $\theta$?

+ +

To find $\theta^*$, you can use an optimization method like gradient descent or, in certain cases, you can find a closed-form solution. See also Which distributions have closed-form solutions for maximum likelihood estimation?.
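To make the difference concrete, here is a toy Python sketch (assuming a univariate Gaussian likelihood with known variance and a Gaussian prior over $\theta$; the data and the grid search are only for illustration):

import numpy as np
from scipy.stats import norm

X = np.array([2.1, 1.7, 2.5, 1.9])                   # toy data
thetas = np.linspace(-5, 5, 10001)                   # candidate values of theta

log_likelihood = np.array([norm.logpdf(X, loc=t, scale=1.0).sum() for t in thetas])
log_prior = norm.logpdf(thetas, loc=0.0, scale=1.0)  # p(theta): a Gaussian centred at 0

theta_mle = thetas[np.argmax(log_likelihood)]               # objective (1)
theta_map = thetas[np.argmax(log_likelihood + log_prior)]   # objective (5)
print(theta_mle, theta_map)   # the prior pulls the MAP estimate towards 0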

+ +

Resources

+ +

The following blog post MLE vs MAP: the connection between Maximum Likelihood and Maximum A Posteriori Estimation, by Agustinus Kristiadi (a Ph.D. student in machine learning), might also be useful, so I suggest you read it. It will give you more details that I've left out on purpose to avoid cluttering this answer.

+",2444,,2444,,11/23/2019 17:14,11/23/2019 17:14,,,,0,,,,CC BY-SA 4.0 +16733,5,,,11/23/2019 17:29,,0,,"

See http://www.scholarpedia.org/article/Bayesian_statistics for more info.

+",2444,,2444,,11/23/2019 17:29,11/23/2019 17:29,,,,0,,,,CC BY-SA 4.0 +16734,4,,,11/23/2019 17:29,,0,,"For questions related to Bayesian statistics in the context of artificial intelligence. Bayesian statistics is a system for describing uncertainty using the mathematical language of probability. Bayesian statistical methods start with existing ""prior"" beliefs, and update these using data to give ""posterior"" beliefs.",2444,,2444,,11/23/2019 17:29,11/23/2019 17:29,,,,0,,,,CC BY-SA 4.0 +16735,5,,,11/23/2019 18:26,,0,,,2444,,2444,,11/23/2019 18:26,11/23/2019 18:26,,,,0,,,,CC BY-SA 4.0 +16736,4,,,11/23/2019 18:26,,0,,"For questions related to the concept of boosting in machine learning, which is a collection of ensemble learning methods.",2444,,2444,,11/23/2019 18:26,11/23/2019 18:26,,,,0,,,,CC BY-SA 4.0 +16737,2,,16536,11/24/2019 3:51,,5,,"

Introduction

+ +

The paper Generalization in Deep Learning provides a good overview (in section 2) of several results regarding the concept of generalisation in deep learning. I will try to describe one of the results (which is based on concepts from computational or statistical learning theory, so you should expect a technical answer), but I will first introduce and describe the general machine learning problem and I will give a definition of the generalisation gap problem. To keep this answer relatively short, I will assume the reader is familiar with certain basic machine learning and mathematical concepts, such as expected risk minimization, but, nonetheless, I will refer the reader to more detailed explanations of the concepts (at least the first time they are mentioned). If you are familiar with the basic concepts of computational learning theory (e.g. hypotheses), you will be advantaged.

+ +

Machine Learning Problem

+ +

In the following description, unless stated otherwise, I do not make any assumption about the nature of the variables. However, I will occasionally provide examples of concrete values for these variables.

+ +

Let $x \in \mathcal{X}$ be an input and let $y \in \mathcal{Y}$ be a target. Let $\mathcal{L}$ be a loss function (e.g. MSE).

+ +

Then the expected risk of a function (or hypothesis) $f$ is defined as

+ +

\begin{align} +R[f] +&= \mathbb{E}_{x, y \sim \mathbb{P}(X, Y)} \left[ \mathcal{L}(f(x), y) \right] \\ +&= \int \mathcal{L}(f(x), y) d\mathbb{P}(X=x, Y=y), +\end{align}

+ +

where $\mathbb{P}(X, Y)$ is the true joint probability distribution of the inputs and targets. In other words, each $(x, y)$ is drawn from the joint distribution $\mathbb{P}(X, Y)$, which contains or represents all the information required to understand the relationship between the inputs and the targets.

+ +

Let $A$ be a learning algorithm or learner (e.g. gradient descent), which is the algorithm responsible for choosing a hypothesis $f$ (which can e.g. be represented by a neural network with parameters $\theta$). Let

+ +

$$S_m = \{(x_i, y_i) \}_{i=1}^m$$

+ +

be the training dataset. Let

+ +

$$f_{A(S_m)} : \mathcal{X} \rightarrow \mathcal{Y}$$

+ +

be the hypothesis (or model) chosen by the learning algorithm $A$ using the training dataset $S_m$.

+ +

The empirical risk can then be defined as

+ +

$$ +R_{S_m}[f] = \frac{1}{m} \sum_{i=1}^m \mathcal{L} (f(x_i), y_i) +$$

+ +

where $m$ is the total number of training examples.

+ +

Let $F$ be the hypothesis space (for example, the space of all neural networks).

+ +

Let

+ +

$$ +\mathcal{L_F} = \{ g : f \in F , g(x, y) = \mathcal{L}(f(x), y)\} +$$ be a family of loss functions associated with the hypothesis space $F$.

+ +

Expected Risk Minimization

+ +

In machine learning, the goal can be framed as the minimization of the expected risk

+ +

\begin{align} +f^*_{A(S_m)} +&= \operatorname{argmin}_{f_{A(S_m)}} R[f_{A(S_m)}] \\ +&= \operatorname{argmin}_{f_{A(S_m)}} \mathbb{E}_{x, y \sim \mathbb{P}(X, Y)} \left[ \mathcal{L}(f_{A(S_m)}(x), y) \right] \tag{1}\label{1} +\end{align}

+ +

However, the expected risk $R[f_{A(S_m)}]$ is incomputable, because it is defined as an expectation over $x, y \sim \mathbb{P}(X, Y)$ (which is defined as an integral), but the true joint probability distribution $\mathbb{P}(X, Y)$ is unknown.

+ +

Empirical Risk Minimization

+ +

Therefore, we solve the approximate problem, which is called the empirical risk minimization problem

+ +

\begin{align} +f^*_{A(S_m)} &= \operatorname{argmin}_{f_{A(S_m)} \in F} R_S[f_{A(S_m)}] \\ +&= +\operatorname{argmin}_{f_{A(S_m)} \in F} \frac{1}{m} \sum_{i=1}^m \mathcal{L} (f_{A(S_m)}(x_i), y_i) +\end{align}

+ +

Generalization

+ +

In order to understand the generalization ability of $f_{A(S_m)}$, the hypothesis chosen by the learner $A$ with training dataset $S_m$, we need to understand when the empirical risk minimization problem is a good proxy for the expected risk minimization problem. In other words, we want to study the following problem

+ +

\begin{align} +R[f_{A(S_m)}] - R_S[f_{A(S_m)}] \tag{2}\label{2} +\end{align}

+ +

which can be called the generalization gap problem. So, in generalization theory, one goal is to study the gap between the expected and empirical risks.

+ +

Clearly, we would like the expected risk to be equal to the empirical risk $$R_S[f_{A(S_m)}] = R[f_{A(S_m)}]$$ because this would allow us to measure the performance of the hypothesis (or model) $f_{A(S_m)}$ with the empirical risk, which can be computed. So, if $R_S[f_{A(S_m)}] = R[f_{A(S_m)}]$, the generalization ability of $f_{A(S_m)}$ roughly corresponds to $R_S[f_{A(S_m)}]$.

+ +

Therefore, in generalization theory, one goal is to provide bounds for the generalisation gap $R[f_{A(S_m)}] - R_S[f_{A(S_m)}]$.

+ +

Dependency on $S$

+ +

The hypothesis $f_{A(S_m)}$ is explicitly dependent on the training dataset $S$. How does this dependency affect $f_{A(S_m)}$? Can we avoid this dependency? Several approaches have been proposed to deal with this dependency.

+ +

In the following sub-section, I will describe one approach to deal with the generalization gap problem, but you can find a description of the stability, robustness and flat minima approaches in Generalization in Deep Learning.

+ +

Hypothesis-space Complexity

+ +

In this approach, we try to avoid the dependency of the hypothesis $f_{A(S_m)}$ by considering the worst-case generalization problem in the hypothesis space $F$

+ +

$$ +R[f_{A(S_m)}] - R_S[f_{A(S_m)}] \leq \sup_{f \in F} \left( R[f] - R_S[f] \right) +$$ +where $\sup_{f \in F} \left( R[f] - R_S[f] \right)$ is the supremum of a more general generalization gap problem, which is greater or equal to \ref{2}. In other words, we solve a more general problem to decouple the hypothesis (or model) from the training dataset $S$.

+ +

Bound 1

+ +

If you assume the loss function $\mathcal{L}$ to take values in the range $[0, 1]$, then, for any $\delta > 0$, with probability $1 - \delta$ (or more), the following bound holds

+ +

\begin{align} +\sup_{f \in F} \left( R[f] - R_S[f] \right) \leq 2 \mathcal{R}_m \left( \mathcal{L}_F \right) + \sqrt{\frac{\log{\frac{1}{\delta}} }{2m}} \tag{3} \label{3} +\end{align} +where $m$ is the size of the training dataset, $\mathcal{R}_m$ is the Rademacher complexity of $\mathcal{L}_F$, which is the family of loss functions for the hypothesis space $F$ (defined above).

+ +

This theorem is proved in Foundations of machine learning (2nd edition, 2018) by Mehryar Mohri et al.

+ +

There are other bounds to this bound, but I will not list or describe them here. If you want to know more, have a look at the literature.

+ +

I will also not attempt to give you an intuitive explanation of this bound (given that I am also not very familiar with the Rademacher complexity). However, we can already understand how a change in $m$ affects the bound. What happens to the bound if $m$ increases (or decreases)?
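As a very rough numerical illustration of the second term only (this says nothing about the Rademacher complexity term, which typically also shrinks as $m$ grows):

import numpy as np

delta = 0.05
for m in (100, 1000, 10000, 100000):
    print(m, np.sqrt(np.log(1.0 / delta) / (2 * m)))   # this term decreases as m increases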

+ +

Conclusion

+ +

There are several approaches to find bounds for the generalisation gap problem \ref{2}

+ +
    +
  • Hypothesis-space complexity
  • +
  • Stability
  • +
  • Robustness
  • +
  • Flat minima
  • +
+ +

In section 2 of the paper Generalization in Deep Learning, bounds for problem \ref{2} are given based on the stability and robustness approaches.

+ +

To conclude, the study of the generalization ability of deep learning models is based on computational or statistical learning theory. There are many more results related to this topic. You can find some of them in Generalization in Deep Learning. The studies and results are highly technical, so, if you want to understand something, good knowledge of mathematics, proofs, and computational learning theory is required.

+",2444,,2444,,11/25/2019 16:23,11/25/2019 16:23,,,,3,,,,CC BY-SA 4.0 +16738,1,16752,,11/24/2019 7:24,,3,1070,"

Consider the following diagram of a graph representing a search space.

+ +

+ +

If we start at $B$ and try to reach goal state $E$, the lowest-cost first search (LCFS) (aka uniform-cost search) algorithm fails to find a solution. This is because, $B$ selects $A$ over $C$ to expand as $f(A)=g(A)=36 < f(C)=g(C)=70$. $f(n)$ is the cost function of node $n$, and $g(n)$ is the cost of reaching node $n$ from the start state. Continuing further, from $A$, LCFS will now select $B$ to expand, which in turn will select $A$ again over $C$. This leads to an infinite loop. This shows LCFS is incomplete (not guaranteed to find a solution, if one exists).

+ +

For A*, we define $f(n)=g(n)+h(n)$, where $h(n)$ is the expected cost of reaching the goal state from node $n$. If we use the Manhattan distance ($L_1$ norm) for $h(\cdot)$, books (such as Artificial Intelligence: A Modern Approach (3rd Ed) by Stuart Russell and Peter Norvig) say A* is bound to find the solution (since it exists). However, I couldn't see how. Using A*, $B$ will still select $A$ since $f(A)=36+(h(A)=40)=76 < f(C)=70+(h(C)=30+50)=150$. You see, this means that, when $A$ expands back $B$, $B$ will again select $A$, and an infinite loop ensues.

+ +

What am I missing here?

+",31550,,2444,,11/25/2019 2:43,11/25/2019 3:02,A* and uniform-cost search are apparently incomplete,,1,0,,,,CC BY-SA 4.0 +16739,1,16771,,11/24/2019 11:00,,2,57,"

I've got a time series with sensor data (e.g. accelerometer and gyroscope). I now want to extract the activity out of it (e.g. walking, standing, driving, ...). I followed this Jupyter Notebook. But there are some issues left.

+ +
    +
  1. Why do they only pick 500 rows?
  2. +
  3. What's the point of re-arranging the rows/columns?
  4. +
  5. When they build their decision tree learner with the training data, they build it upon extracted features. But how can we then use this tree for new sensor data? Should we extract the features of the new data and pass them as input to the tree? But new sensor data might not have as many features as the training data. E.g.: (ValueError: Number of features of the model must match the input. Model n_features is 321 and input n_features is 312)
  6. +
+",31553,,,,,11/25/2019 22:23,"Feature extraction timeseries, model compatibility",,1,0,,,,CC BY-SA 4.0 +16740,1,33833,,11/24/2019 11:01,,3,1947,"

For search algorithms with heuristic functions, the performance of a heuristic function is measured by the effective branching factor ${b^*}$, which involves the total number of nodes expanded ${N}$ and the depth of the solution ${d}$.

+

I'm not able to find out how different values of ${d}$ affect the performance keeping the same ${N}$. Put another way, why not use just the ${N}$ as the performance measure instead of ${b^*}$?

+",31550,,2444,,6/6/2022 8:09,6/6/2022 8:09,Why is the effective branching factor used for measuring performance of a heuristic function?,,2,0,,,,CC BY-SA 4.0 +16741,1,,,11/24/2019 13:38,,30,16175,"

I am a programmer, but not in the field of AI. A question that constantly confuses me is: how can an AI be trained if we human beings are not telling it whether its calculation is correct?

+ +

For example, news articles usually say something like ""company A has a large human face database so that it can train its facial recognition program more efficiently"". What the piece of news doesn't mention is whether a human engineer needs to tell the AI program whether each of the program's recognition results is accurate or not.

+ +

Are there any engineers who are constantly telling an AI whether what it produced is correct or wrong? If not, how can an AI determine if the result it produces is correct or wrong?

+",,user31556,2444,,11/24/2019 15:29,12/1/2019 4:34,How can an AI train itself if no one is telling it if its answer is correct or wrong?,,7,3,,,,CC BY-SA 4.0 +16742,2,,16740,11/24/2019 15:04,,1,,"

As you found $N$ is the number of nodes that are expanded. The cost of expansion of each node is equal to the number of children of that node. Hence, we use $b^*$ for each node. In other words, the total number of nodes that are involved in the expansion process is $N \times b^*$.

+",4446,,4446,,11/24/2019 17:28,11/24/2019 17:28,,,,5,,,,CC BY-SA 4.0 +16743,2,,16741,11/24/2019 15:24,,33,,"

By ""company A has a large human face database so that it can train its facial recognition program more efficiently"" the article probably means that there is a training dataset $S$ of the form

+ +

$$ +S = \{ (\mathbf{x}_1, y_1), \dots,(\mathbf{x}_N, y_N) \} +$$

+ +

where $\mathbf{x}_i$ is an image of the face of the $i$th human and $y_i$ (which is often called a label, class or target) is e.g. the name of the $i$th human. So, the programmer provides a supervisory signal (the label) for the AI to learn. The programmer also specifies the function that determines the error the AI program is making, based on the answer of the AI model and $y_i$.

+ +

This way of learning is called supervised learning (SL). However, there are other ways of training an AI. For example, there is unsupervised learning (UL), where the AI needs to find patterns in the data by aggregating objects based on some similarity measure, which is specified by the programmer. There's also reinforcement learning (RL), where the programmer specifies only certain reinforcement signals, that is, the programmer tells the AI which moves or results are ""good"" and which ones are ""bad"" to achieve its goal, by giving to the AI, respectively, a positive or negative reward. You can also combine these three approaches and there are other variations.

+ +
+

Are there any engineers who are constantly telling an AI whether what it produced is correct or wrong?

+
+ +

Yes, in the case of SL. In the case of RL, the programmer also needs to provide the reinforcement signal, but it doesn't need to explicitly tell the AI which action it needs to take. In UL, the programmer needs to specify the way the AI needs to aggregate the objects, so, in this case, the programmer is also involved in the learning process.

+",2444,,2444,,11/25/2019 16:40,11/25/2019 16:40,,,,1,,,,CC BY-SA 4.0 +16744,1,,,11/24/2019 15:28,,2,47,"

I'd like to learn about generalization theory for machine learning algorithms. I'm looking for books and other references (in case books aren't available) that provide a gentle introduction to the field for a relative beginner like me.

+ +

My background includes exposure to mostly undergrad mathematics and I have enough mathematical maturity to learn graduate-level topics as well.

+ +

To be more specific, I'm looking to understand more about mathematical abstraction of ML concepts (e.g. learning algorithm, hypothesis space, complexity of algorithm/hypothesis etc.), the purpose of an ML algorithm as an expected risk minimization exercise, techniques used to get bounds on generalization and so on.

+ +

To be even more specific, I'm looking to familiarize myself with concepts, theory and techniques so that I can understand papers (at least on a basic level) like:

+ + + +

and references therein

+",27548,,,,,11/24/2019 15:28,References on generalization theory and mathematical abstraction of ML concepts,,0,0,,,,CC BY-SA 4.0 +16745,2,,16741,11/24/2019 15:32,,4,,"

Taking your example of the faces data, keep in mind that when the model is run on a new unseen image the model can only return the already seen identity which emerges as the closest match. The result may be incorrect. The chances of mis-identification are much lower as the number of features incorporated increases.

+ +

The input of the engineers lies at the level of the training data. Say we have a new photo of an individual that needs to be included in the model. The engineering task is now to morph that image to simulate different environments, angles of view, atmospheric conditions, lighting and so on to provide a large number of data input cases all of which will be ""true"" since the underlying features are all unchanged since the images are based on the same individual. Then the model is recalculated using the additional data.

+ +

Keep in mind too that adding a new set of data to an existing training set has the advantage that the parameters of the model are largely in the right ballpark already, and adding the new faces will make only small changes. Cross validation will show whether the addition has improved or spoiled the model.

+",4994,,,,,11/24/2019 15:32,,,,1,,,,CC BY-SA 4.0 +16746,1,16794,,11/24/2019 15:33,,2,853,"

What's the distinction between a learning algorithm $A$ and a hypothesis $f$?

+

I'm looking for a few concrete examples, if possible.

+

For example, would the decision tree and random forest be considered two different learning algorithms? Would a shallow neural network (that ends up learning a linear function) and a linear regression model, both of which use gradient descent to learn parameters, be considered different learning algorithms?

+

Anyway, from what I understand, one way to vary the hypothesis $f$ would be to change the parameter values, maybe even the hyper-parameter values of, say, a decision tree. Are there other ways of varying $f$? And how can we vary $A$?

+",27548,,2444,,12/7/2020 21:16,12/7/2020 21:33,What is the difference between a learning algorithm and a hypothesis?,,2,0,,,,CC BY-SA 4.0 +16747,2,,16707,11/24/2019 16:33,,1,,"

This is called decomposition of a multi-class classifier. Your proposed method is called one-vs-all.

+ +
+

One vs. all provides a way to leverage binary classification. Given a classification problem with $N$ possible solutions, a one-vs.-all solution consists of $N$ separate binary classifiers—one binary classifier for each possible outcome. During training, the model runs through a sequence of binary classifiers, training each to answer a separate classification question.

+
+ +

Source: https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/one-vs-all.
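As a side note, here is a tiny scikit-learn sketch of the one-vs-all decomposition itself (nothing to do with YOLO; the dataset and base classifier are arbitrary choices):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
# One binary logistic regression is fitted per class behind the scenes.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:5]))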

+ +

According to this article, the author did experiments with SVMs on 8 different benchmark problems. According to the results, this method is sometimes as good as others, but usually not the best. It is also never substantially better than any other method. The article also stated that the best method is usually problem-dependent.

+ +

Also, this method will decrease inference speed a lot and use a substantial amount of GPU memory. According to the source, it does not improve performance much, so your best bet for getting higher performance is probably to use a different model architecture, for example FPN FRCN, which the YOLOv3 paper states has the best performance, but not a fast inference speed. YOLOv3 is designed to have a fast inference speed, to provide a real-time object detection system, so for performance you should probably use another model architecture instead.

+",23713,,2444,,11/24/2019 17:04,11/24/2019 17:04,,,,0,,,,CC BY-SA 4.0 +16748,2,,16672,11/24/2019 17:47,,4,,"

If we seek proven working source code to plug into a GPLv2-licence compatible solution, we should at least consider autotrace. Its source code is open for review. It can be tested against the example images we have and, if it works fine, called by our GPLv2 software. We can even use the calling code in Inkscape's plug-in image tracing implementation as a good starting point for design and implementation of our calling program, whether it be C, C++, Java, Python, or ECMA (JS).

+ +

The trace algorithm in Adobe Illustrator is comparable but is not open source.

+ +

If we seek theory, there are several academic publications discussing some of the theory, the last being most aligned with machine learning ideology. I would not dismiss earlier work simply because it doesn't connect with the current machine learning idioms. Investigating what is fully implemented and successfully used by many follows a wise old business proverb: The bird in the hand is worth two in the bush.

+ + + +

Many of the online drawing programs collect data. It would not be surprising if, behind the gracious give-away of online bandwidth, they are establishing a continuously improving data set for training a new breed of autotracers. None have published AI designs admitting as much, but they would not be legally obligated to do so, because a single input example is indeterminable from the autotrace service that could result from the training.

+",4302,,4302,,11/24/2019 23:05,11/24/2019 23:05,,,,1,,,,CC BY-SA 4.0 +16749,2,,16746,11/24/2019 19:22,,0,,"

A hypothesis is a statement that suggests an as yet unproven explanation of a relationship between two or more phenomena that you intend to test. An agronomist thinks that more nitrogen on canola will always increase the crop output, $$Harvest = f(N),$$ or a meteorologist thinks he can show that the path of a hurricane over the ocean can be determined by knowledge of the sea temperature and the wind speed at an altitude of 1000 feet one minute before, $$D(t,0) = f(T(t-1,1000),S(t-1,1000)).$$ Both hypotheses are pegs on which later steps are based; testing follows, with a conclusion about whether the hypothesis can be rejected or not.

+ +

Changing a hypothesis can be simply adding or subtracting arguments to the function or changing the nature of the relationship such as the acceleration of the wind as opposed to its velocity.

+ +

A ""learning"" algorithm describes how the parameters of a numeric model are changed in accordance with the delta rule, that is what the learning rate is and whether momentum is to be applied.

+ +

Random Forest and Decision Tree are ""classification"" algorithms. They are clearly stepwise processes that proceed towards the goal of a model, but they start by specifying the shape that the model will take and place boundaries on what values the parameters may take.

+ +

Both learning and classification algorithms specify a priori what shape the model will take and by doing so limit its relevance to particular problems.

+",4994,,,,,11/24/2019 19:22,,,,0,,,,CC BY-SA 4.0 +16750,1,,,11/24/2019 19:28,,1,166,"

I have a neural network for MNIST classification which I am hard coding using TensorFlow 2.0. The neural network has an input layer consisting of 784 neurons (28 * 28), one hidden layer having ""hidden_neurons"" number of neurons and an output layer having 10 neurons.

+ +

The part of the code that I want to get checked is as follows:

+ +
# Partial derivative of cost function wrt b2-
+dJ_db2 = (1 / m) * tf.reshape(tf.math.reduce_sum((A2 - Y), axis = 0), shape = (1, 10))
+
+# Partial derivative of cost function wrt b1-
+dJ_db1 = (1 / m) * tf.reshape(tf.math.reduce_sum(tf.transpose(tf.math.multiply(tf.matmul(W2, tf.transpose((A2 - Y))), relu_derivative(A1))), axis = 0), shape = (1, hidden_neurons))
+
+ +

The notation is as follows.

+ +
    +
  • ""b1"" - bias for hidden layer and has the shape (1, hidden_neurons"")
  • +
  • ""b2"" - bias for output layer having the shape (1, 10).
  • +
  • ""A2"" - is the output of output layer and have the shape (m, c)
  • +
  • ""Y"" - is one-hot encoded target and have the shape (m, c)
  • +
  • 'm' - is number of training examples
  • +
  • 'c' - is number of classes
  • +
  • ""A1"" - is the output of hidden layer and has the shape (hidden_neurons, m)
  • +
+ +

I have used multiclass cross-entropy cost function. Hidden layer uses ReLU activation function, while the output layer has softmax activation function.

+ +

Are my two lines of code for the partial derivatives of the cost function w.r.t. ""b1"" and ""b2"" correct?

+",31215,,2444,,11/25/2019 1:47,11/25/2019 1:47,Is this TensorFlow implementation of partial derivative of the cost with respect to the bias correct?,,0,1,,,,CC BY-SA 4.0 +16752,2,,16738,11/25/2019 2:41,,3,,"

You forgot to calculate and take into account the costs of the actual paths. You forgot to accumulate the cost of the edges for going forward and backward multiple times!

+ +

The evaluation function of uniform-cost search (UCS) is $f(n) = g(n)$, where $g(n)$ represents the cost of the path from the start node to $n$. The evaluation function of A* is $f(n) = g(n) + h(n)$, where $h(n)$ is an admissible heuristic. UCS is a special case of A*, with the admissible heuristic $h(n) = 0, \forall n$. The evaluation function is used to choose the next node to visit from the fringe, which is the set of nodes that can potentially be visited. Whenever we visit a node, we remove it from the fringe. To expand a node $X$ means to add the children of $X$ to the fringe. Whenever you visit a node, you will also expand it.

+ +

Let's apply UCS to your specific example. Initially, we check whether $B$ is a goal node or not. It is not, so we expand $B$. We can add $B$ to a list of visited (or expanded) nodes (graph search) or not (tree search). Let's use the tree search, so we will not be keeping track of the expanded nodes, which means that we could expand a node more than once. $B$ has two children, $A$ and $C$, so we add them to the fringe, $\mathbb{F} = \{A, C\}$. Should we now visit $A$ or $C$? We will visit the one with the smallest value of the evaluation function, which is $A$, given that $f(A)=g(A)=36 < f(C)=g(C)=70$, so we remove $A$ from the fringe, which is now $\mathbb{F} = \{ C \}$. Is $A$ a goal node? No, so we expand it, but it only has one child, $B$, so we add $B$ to the fringe, so $\mathbb{F} = \{ C, B \}$. The cost of the path to reach $B$ by going first to $A$ and then to $B$ is $f(B) = 36 + 36 = 72$ (given that you go back and forward on the same edge, so you need to accumulate the cost of these trips!) and $f(C) = 70$, so we visit $C$, so we remove it from the fringe, which is now $\mathbb{F} = \{ B \}$.

+ +

You should be able to work out the remaining search (I haven't actually done it). I suggest you watch the video Uniform Cost Search, by John Levine, who shows a concrete example of how UCS and A* work.
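If it helps, here is a minimal tree-search UCS sketch in Python; note that the graph below is only a made-up stand-in, since I don't know the exact edges and costs of your diagram beyond the ones you quoted:

import heapq

def uniform_cost_search(graph, start, goal):
    # graph maps a node to a list of (neighbour, edge_cost) pairs.
    fringe = [(0, start, [start])]                 # (g(n), node, path so far)
    while fringe:
        g, node, path = heapq.heappop(fringe)      # visit the cheapest node on the fringe
        if node == goal:
            return g, path
        for child, cost in graph.get(node, []):    # expand: add children to the fringe
            heapq.heappush(fringe, (g + cost, child, path + [child]))
    return None

graph = {"B": [("A", 36), ("C", 70)],              # assumed edges/costs, for illustration only
         "A": [("B", 36)],
         "C": [("B", 70), ("E", 80)]}
print(uniform_cost_search(graph, "B", "E"))        # accumulated back-and-forth costs make B-A loops lose out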

+",2444,,2444,,11/25/2019 3:02,11/25/2019 3:02,,,,0,,,,CC BY-SA 4.0 +16754,1,16755,,11/25/2019 4:32,,0,61,"

I have a paper about trading which has been implemented with an RNN in TensorFlow. We have about 2 years of trading data. Here are some samples:

+ +

Date, Open, High, Low, Last, Close, Total Trade Quantity, Turnover (Lacs)

+ +

2004-08-25 , 1198.7, 1198.7, 979.0, 985.0, 987.95, 17116372.0, 172587.61

+ +

2004-08-26 , 992.0, 997.0, 975.3, 976.85, 979.0, 5055400.0, 49828.65

+ +

I need to predict the future of trading (for example, the next 10 days). So, how can I make sure that my model is working correctly? Do we have any ""accuracy"" or ""loss"" like what we have in deep learning?

+",31143,,31143,,11/25/2019 10:19,11/25/2019 10:19,Do we have anything like accuracy and loss in RNN models?,,1,0,,,,CC BY-SA 4.0 +16755,2,,16754,11/25/2019 5:22,,5,,"

RNN stands for Recurrent Neural Network, which is, in fact, deep learning.

+ +

There has to be a loss, since you're dealing with supervised learning, and the typical loss functions used are the same as you would see in feedforward networks (usually binary cross-entropy), the main difference being that the loss is calculated between the true label at a particular time stamp $(t)$ and the prediction made from the part of the network unrolled up to time stamp $(t-1)$. This makes the loss act on all time stamps.

+ +

Evaluation metrics are used in the same way, too, e.g. the mean squared error or the L1 (mean absolute) error for a regression task such as price prediction. For more details you can go through this link.
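
+ +

As a minimal sketch (the window length, layer size and feature count here are arbitrary assumptions, not taken from your paper), a Keras LSTM for this kind of price regression can be compiled with an explicit loss and metric like this:

from keras.models import Sequential
from keras.layers import LSTM, Dense

# Window of 30 past days with 8 features each -> predict the next closing price.
model = Sequential()
model.add(LSTM(32, input_shape=(30, 8)))
model.add(Dense(1))

# 'loss' is what training minimises; 'metrics' are extra scores reported per epoch.
model.compile(loss='mse', optimizer='adam', metrics=['mae'])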

+ +

Hope this was helpful!

+",25658,,,,,11/25/2019 5:22,,,,2,,,,CC BY-SA 4.0 +16756,1,16766,,11/25/2019 5:48,,6,124,"

I have been reading a lot of articles on adversarial machine learning and there are mentions of ""best practices for robust machine learning"".

+ +

A specific example of this would be when there are references to ""loss of efficient robust estimation in high dimensions of data"" in articles related to adversarial machine learning. Also, IBM has a Github repository named ""IBM's Adversarial Robustness Toolbox"". +Additionally, there is a field of statistics called 'robust statistics' but there is no clear explanation anywhere about its relation to adversarial machine learning.

+ +

I would therefore be grateful if someone could explain what robustness is in the context of Adversarial Machine Learning.

+",31240,,,,,11/25/2019 17:43,What is the relationship between robustness and adversarial machine learning?,,1,0,,,,CC BY-SA 4.0 +16757,1,,,11/25/2019 6:18,,2,2849,"

I would like to know how the Kaldi and DeepSpeech speech recognition systems differ algorithmically. Which one would be more accurate for continuous speech over time?

+",31573,,4709,,11/25/2019 14:06,12/1/2019 14:15,What is the difference between Kaldi and DeepSpeech speech recognition systems in their approach?,,1,0,,,,CC BY-SA 4.0 +16758,1,,,11/25/2019 7:55,,2,87,"

I want to build a Deep Reinforcement Learning Model for Asset allocation.

+

Background:

+

I have 7 stock indexes from different markets, and I want to build a policy that produces an action (e.g. whether to sell or buy an index, which index, and how much) by observing market information.

+

Question 1:

+

I have two ideas for the output of my policy. The first is to produce a vector $w$ of length 8, where each element $w_i$ represents the target ratio of the assets we want to hold (7 stock indexes and 1 cash position), so I need to enforce $w_i>0$ and $\sum_{i}^{8}w_i=1.$ How can I implement this? Currently, I use a sigmoid activation in the last layer of the neural network and divide by the sum in the environment. Is this a valid approach? It is also not easy to handle the transaction process when buy and sell fees exist, and the training process is slow when I use the policy gradient.
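
+

For clarity, this is a minimal sketch of the normalisation I describe above (assuming the last-layer activations are all positive):

import numpy as np

def to_portfolio_weights(raw_outputs):
    # raw_outputs: the 8 positive (e.g. sigmoid) activations of the policy's last layer
    w = np.asarray(raw_outputs, dtype=float)
    return w / w.sum()   # now every w_i > 0 and the weights sum to 1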

+

The second idea is also to produce a vector $w$ of length 8, where each element $w_i$ represents the percentage of stock $i$ to sell when $w_i$ is negative, and the percentage of cash to spend on buying stock $i$ when $w_i$ is positive. This solves the problem I have with the first idea, but it introduces a new one: cash is finite, so I need to decide the order of the buys, in other words, which stock to buy first and which one later.

+

Question 2:

+

Many papers tell me to make the policy output distribution parameters and then create the action by sampling from that distribution (e.g. a normal distribution). That makes it more difficult to ensure the action satisfies the constraints above.

+

Also, won't the result be unstable if the action is produced by sampling?

+",31479,,-1,,6/17/2020 9:57,11/25/2019 7:55,How to set the multiple continuous actions with constraints,,0,4,,,,CC BY-SA 4.0 +16759,2,,16741,11/25/2019 9:38,,0,,"

The trick with unsupervised learning is that the AI doesn't learn that something is a face or not; it just sees unnamed patterns that the researchers then need to name.

+ +

Let's say you feed it a dataset with one million pictures in order to train a facial recognition algorithm. After training, the AI will have found a few patterns in the pictures based on the properties of each picture, such as color, lighting, topography, etc. However, without labels (supervised learning), the AI doesn't know what exactly it found, so a researcher then needs to label those patterns. You don't need a label to tell that a picture of a face is mostly different from the picture of a building. You need a label to tell you that one is a ""face"" and the other is a ""building"".

+",,anon,,,,11/25/2019 9:38,,,,2,,,,CC BY-SA 4.0 +16760,1,,,11/25/2019 9:38,,1,2617,"

I am trying to make a face login application. The face comparison algorithm uses the Euclidean distance to decide whether two different face images belong to the same person or not. Can anyone help me understand how the Euclidean distance algorithm works?

+",31576,,,,,11/25/2019 13:22,How Euclidean distance algorithm calculate two different face images are match or not match in face recognition?,,1,1,,11/26/2019 19:55,,CC BY-SA 4.0 +16761,2,,16757,11/25/2019 10:34,,1,,"

Both of them use an end-to-end approach to speech recognition. However, because of the code complexity of DeepSpeech, you can't easily tune the model for your own work. Kaldi can be configured in different ways and you have access to the details of the models; it is indeed a modular tool. I think Kaldi can be the better tool both academically and commercially, whereas DeepSpeech is a black box and can be a proper tool if your use case is close to the one DeepSpeech was built for.

+ +

Moreover, if you are working on a language for which there is not enough data for learning, Kaldi gives you a dozen tools, such as grapheme-to-phoneme conversion, to establish your dataset and start the learning process. You can also start with other Kaldi models that do not depend on big data. DeepSpeech, however, requires many hours of samples, which is not a common asset for everyone in every language.

+ +

Also, you can know more about the traditional technique of speech processing vs the contemporary technique (deep learning) in this medium post.

+",4446,,4446,,12/1/2019 14:15,12/1/2019 14:15,,,,2,,,,CC BY-SA 4.0 +16762,2,,16760,11/25/2019 13:22,,4,,"

Simply put, Euclidean distance measures how far away two items are (see Neil Slater's comment).

+ +

In order to apply this to a pattern recognition task, you will need to convert the items to compare (in your case images of faces) into feature vectors (ie lists of numerical values), and then you do a pairwise comparison to work out how distant two faces are. You will then need to set a threshold where you treat two images as being the same face (typically where the distance between the feature vectors is small).

+ +

Selecting the right features is obviously crucial here. I'm not an expert on face descriptions, but it would probably be something like

+ +
  • the distance between the eyes
  • the distance between the bottom of the nose and the top of the upper lip
  • ...

Once you have these measurements, store the values for each image in a vector and you can apply the Euclidean distance. Effectively, each image is represented as a point in the $m$-dimensional feature space, where each measurement is a dimension. To select good features make sure they are not correlated (ie they are independent of each other) and are the same scale (eg all are distances, so not eye colour)
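
+ +

As a minimal sketch (the feature values and the threshold below are made up purely for illustration), the pairwise comparison could look like this:

import numpy as np

def euclidean_distance(a, b):
    # a, b: feature vectors of equal length, one per face image
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

face_1 = [62.0, 18.5, 40.2]   # made-up measurements (eye distance, nose-to-lip, ...)
face_2 = [61.5, 19.0, 39.8]

threshold = 2.0               # also made up; tune it on known matching/non-matching pairs
same_person = euclidean_distance(face_1, face_2) < threshold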

+ +

The choice of Euclidean distance is fairly minor: there are other distance metrics which might work equally well or even better. As I mentioned, the key point is selecting appropriate feature values.

+",2193,,,,,11/25/2019 13:22,,,,2,,,,CC BY-SA 4.0 +16764,2,,16741,11/25/2019 17:01,,0,,"

I can't remember the researcher's name, but he specializes in psychology in Great Britain and has done a lot of work with machine learning and artificial intelligence.

+ +

The project he was working on that I read about earlier this year was one where they tried to deduce how humans learn. They came up with the theory that we learn by making guesses about plausible and possible outcomes and that creates our expectations about reality. When we are wrong, depending on the degree, we are possibly surprised or shocked or not affected at all. They are working on creating AI that does not need human intervention, but to make guesses about outcomes before it performs tasks, and then update those expectations as it experiences more varying outcomes.

+ +

Extremely interesting stuff, and definitely closer to how sentient beings gain experience and grow as individuals.

+",31593,,,,,11/25/2019 17:01,,,,0,,,,CC BY-SA 4.0 +16765,2,,16728,11/25/2019 17:35,,3,,"

Consider a dataset $S \in \mathbb{R}^{N \times (M + 1)}$ with $N$ observations (or examples), where each observation $S_i \in \mathbb{R}^{M + 1}$ is composed of $M$ elements, one value for each of the $M$ features (or independent variables), $f_1, \dots f_M$, and the corresponding target value $t_i$.

+ +

A decision tree algorithm (DTA), such as the ID3 algorithm, constructs a tree, such that each internal node of this tree corresponds to one of the $M$ features, each edge corresponds to one value (or range of values) that such a feature can take on and each leaf node corresponds to a target. There are different ways of building this tree, based on different metrics to choose the features for each internal node and based on whether the problem is classification or regression (so based on whether the targets are classes or numeric values).

+ +

For example, let's assume that the features and the target are binary, so each $f_k$ can take on only one of two possible values, $f_{k} \in \{0, 1\}, \forall k$, and $t_i \in \{0, 1\}$ (where the index $i$ correspond to the $i$th observation, while the index $k$ correspond to the $k$th column or feature of $S$). In this case, a DTA first chooses (based on some metric, for example, the information gain) one of the $M$ features, for example, $f_j$, to associate it with the root node of the tree. Let's call this root node $f_j$. Then $f_j$ will have two branches, one for each of the binary values of $f_j$. If $f_j$ were a ternary variable, then the node corresponding to $f_j$ would have three branches, and so on. The DTA recursively chooses one of the remaining features for each node of the child branches. The DTA does this until all features have already been selected. In that case, we will have reached a leaf node, which will correspond to one value of the target variable. When the DTA chooses a feature for a node, all observations of the dataset $S$ that take on the first binary value of that feature will go in the branch corresponding to that value and all other observations will go in the other branch. So, in this way, the DTA splits the dataset based on the features.
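
+ +

As a minimal sketch of the metric mentioned above (assuming the feature column and the targets are NumPy arrays of binary values), the entropy and information gain used to pick a feature could be computed like this:

import numpy as np

def entropy(targets):
    # targets: the binary target values of the observations that reached a node
    _, counts = np.unique(targets, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature_column, targets):
    # Entropy before the split minus the weighted entropy of each branch;
    # ID3 picks the feature with the largest gain for the current node.
    gain = entropy(targets)
    for value in np.unique(feature_column):
        mask = feature_column == value
        gain -= mask.mean() * entropy(targets[mask])
    return gain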

+ +

The following diagram represents a final decision tree built by a DTA.

+ +

+ +

You can see that the first feature selected (for the root node) by the DTA is ""Is it male?"", which is a binary variable. If yes, then, on the left branch, we have another internal node, which corresponds to another feature and, at the same time, to all observations associated with a male. However, on the right branch, we have a leaf node, which corresponds to one value of the target, which, in this case, is a probability (or, equivalently, a numerical value in the range $[0, 1]$). The shape of the tree depends on the dataset and DTA algorithm. Therefore, different datasets and algorithms might result in different decision trees.

+ +

So, yes, you can view a decision tree algorithm as a feature selection or, more precisely, feature splitting algorithm.

+",2444,,2444,,11/25/2019 18:51,11/25/2019 18:51,,,,2,,,,CC BY-SA 4.0 +16766,2,,16756,11/25/2019 17:43,,4,,"

A robust ML model is one that captures patterns that generalize well in the face of the kinds of small changes that humans expect to see in the real world.

+ +

A robust model is one that generalizes well from a training set to a test or validation set, but the term also gets used to refer to models that generalize well to, e.g. changes in the lighting of a photograph, the rotation of objects, or the introduction of small amounts of random noise.

+ +

Adversarial machine learning is the process of finding examples that break an otherwise reasonable looking model. A simple example of this is that if I give you a dataset of cat and dog photos, in which cats are always wearing bright red bow ties, your model may learn to associate bow ties with cats. If I then give it a picture of a dog with a bow tie, your model may label it as a cat. Adversarial machine learning also often includes the ability to identify specific pieces of noise that can be added to inputs to confound a model.
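
+ +

As a minimal sketch of that last idea, the classic Fast Gradient Sign Method (Goodfellow et al.) perturbs an input in the direction that increases the model's loss; the gradient would have to be obtained from your framework's autodiff, and the function name and step size here are just placeholders:

import numpy as np

def fgsm_perturb(x, grad_of_loss_wrt_x, epsilon=0.01):
    # Push every input dimension a tiny step in the direction that increases
    # the model's loss, which is often enough to flip the prediction.
    return x + epsilon * np.sign(grad_of_loss_wrt_x)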

+ +

Therefore, if a model is robust, it basically means that it is difficult to find adversarial examples for the model. Usually this is because the model has learned some desirable correlations (e.g. cats have a different muzzle shape than dogs), rather than undesirable ones (cats have bow ties; pictures containing cats are 0.025% more blue than those containing dogs; dog pictures have humans in them more often; etc.).

+ +

Approaches like GANs try to directly exploit this idea, by training the model on both true data and data designed by an adversary to resemble the true data. In this sense, GANs are an attempt to create a robust discriminator.

+",16909,,,,,11/25/2019 17:43,,,,0,,,,CC BY-SA 4.0 +16768,1,,,11/25/2019 19:46,,1,44,"

I read a paper about Deep Reinforcement Learning that applies the method to a stock dataset. It has been shown to reach the maximum return (profit). It has been implemented in TensorFlow.

+

My question is, how we can make sure that we achieve the maximum value? I mean, is there a parameter or value that can show us how well the RL did its job?

+",31143,,2444,,12/11/2021 20:53,12/11/2021 20:53,How can we make sure how well the reinforcement learning agent works on a stock dataset?,,0,0,,,,CC BY-SA 4.0 +16769,1,,,11/25/2019 21:17,,4,79,"

I have a big dataset (28354359 rows) that has some blood values as features (11 features) and the label or outcome variable that tells whether a patient has a virus caused by a Neoplasm or not.

+ +

The problem with my dataset is that only 2% of the patients in my dataset have the virus and 98% do not have the virus.

+ +

I am required to use the random forest algorithm. While my random forest model has a high accuracy score of 92%, the problem is that more than 90% of the patients that do have the virus are predicted as not having it.

+ +

I want the opposite effect: I want my random forest to predict more often that a patient has the virus, even if the patient does not have it (ideally I don't want this side effect either, but I prefer it over the opposite).

+ +

The idea behind this is that performing an extra test (via an echo) does not harm a patient who does not have the virus, while not testing a patient who does have it would have terrible consequences for that patient.

+ +

Does somebody have advice on how I could tweak my random forest model for this task?

+ +

I myself experimented with the SMOTE transformation and other sampling techniques, but maybe you have other suggestions.

+ +

I also have tried to apply a cutoff function.

+",30599,,30599,,11/26/2019 6:47,11/28/2019 16:28,Oposite type of predictions for unbalanced dataset,,2,0,,,,CC BY-SA 4.0 +16770,2,,16646,11/25/2019 21:28,,0,,"

The answer in part seems to depend on what you mean by ""human intelligence"". If you mean behavior that would usually be regarded as requiring intelligence were a human to produce it, then various types machines can be intelligent.

+ +

Such ""intelligent"" machines presumably include player pianos. Playing the piano and producing a melody is widely regarded as requiring human intelligence when humans do it. Player pianos produce the same sort of behavior, but without a human touching a key. Hence (so the argument goes) player pianos are intelligent.

+ +

But if ""intelligence"" includes having the inner process of understanding, say understanding the meanings of symbols of written language, then at least according to philosopher John Searle, purely symbol manipulating devices such as digital computers could never be intelligent. This is because symbols in themselves don't contain or indicate their meanings, and all the computing machine gets and manipulates is symbols in themselves.

+ +

However, there does seem to be a sense in which the question ""Is artificial intelligence really just human intelligence"" is true of computers. This is when the behavior of the machine is caused by human intelligence. A human writes a program that defines, mandates, the behavior of the machine (just like a human designs the mechanism and paper roll of a player piano). This design takes human intelligence. The machine has no intrinsic, or innate, intelligence. It's just an automaton mindlessly following the causal sequence created by the intelligent human designer.

+ +

Now if computers are purely symbol-manipulating devices, and if Searle is right, AI is doomed, at least as long as its development platform is the digital computer (and no other machine is available or seems on the horizon).

+ +

However, are computers purely symbol-manipulating devices? If not, there may be a way they can acquire meanings, or knowledge, and, for instance, learn languages. If computers can receive (including from digital sensors) and manipulate more than just symbols, they may be able to acquire the inner structures and execute the inner processes needed for human-like understanding. That is, they might be able to acquire knowledge by way of sensing the environment (as humans do). A human might write the program that facilitates acquisition of such knowledge, but what the knowledge is about would be derived from the sensed environment not from a human mind.

+ +

But here we're talking about ""intelligence"" defined over inner processes and structures, not or not just external behavior. If you define human intelligence as external behavior, as the Turing test does and as AI researchers often do, then music boxes with pirouetting figurines, player pianos, and programmed computers all have human-like intelligence, and artificial intelligence as it exists today is really just the same sort of thing as human intelligence.

+",17709,,17709,,11/25/2019 21:34,11/25/2019 21:34,,,,0,,,,CC BY-SA 4.0 +16771,2,,16739,11/25/2019 22:23,,2,,"

there's a lot to un-pack in this question.

+ +

Why do they only pick 500 rows?

+ +

my guess: in order to keep the example running quickly. tsfresh usually takes a while to calculate its features. note that when they evaluated their model, they took the last 500 samples.

+ +

What's the point of re-arranging the rows/columns?

+ +

answer: the data frame format that tsfresh requires in order to calculate features is that format. it is a bit of a pain...especially when you need to keep track of an id-column for other data.

+ +

When they build their decicion tree learner with the train data, they build it upon extracted features.

+ +

answer: yes

+ +

But how can we then use this tree for new sensor data? +Should we extract the features of the new data and pass it as input for the tree?

+ +

answer: yes

+ +

But new sensor data might not have as many features as the train data. Eg: (ValueError: Number of features of the model must match the input. Model n_features is 321 and input n_features is 312)

+ +

answer: yes it will. I don't know where your copy-pasted error message came from. When you generate a set of features using tsfresh, you can do it in a couple of different ways: you can generate all of them or you can generate a subset of them---they generated a subset... but then subsetted it once again (using their importance step; whether you use the importance methods or other methods to select features, you will end up with a fixed bunch of features you need to calculate). If you follow the procedure for generating features based on a pre-determined subset (relevant_features in their case), you will end up with the same number of features. This needs to be stressed: don't generate tsfresh features that you don't need, as it will take forever.
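
+ +

As a minimal sketch (the dataframe and column names are placeholders, and I'm assuming the usual tsfresh API), re-generating exactly that pre-determined subset for new sensor data could look like this:

from tsfresh import extract_features
from tsfresh.feature_extraction.settings import from_columns

# Re-create exactly the feature set the tree was trained on.
# 'relevant_features' is the list of column names kept after the filtering step,
# and 'new_sensor_df' is the new data in the same long format (both placeholders).
settings = from_columns(list(relevant_features))

new_X = extract_features(new_sensor_df,
                         column_id='id',
                         column_sort='time',
                         kind_to_fc_parameters=settings)
new_X = new_X[relevant_features]   # enforce the same columns and order as training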

+",31608,,,,,11/25/2019 22:23,,,,0,,,,CC BY-SA 4.0 +16772,1,,,11/25/2019 22:43,,4,79,"

I have just started playing with Reinforcement learning and starting from the basics I'm trying to figure out how to solve Banana Gym with coach.

+ +

Essentially Banana-v0 env represents a Banana shop that buys a banana for \$1 on day 1 and has 3 days to sell it for anywhere between \$0 and \$2, where lower price means a higher chance to sell. Reward is the sell price less the buy price. If it doesn't sell on day 3 the banana is discarded and reward is -1 (banana buy price, no sale proceeds). That's pretty simple.

+ +

Ideally the algorithm should learn to set a high price on day 1 and reducing it every day if it didn't sell.

+ +

To start I took the coach-bundled CartPole_ClippedPPO.py and CartPole_DQN.py preset files and modified them to run Banana-v0 gym.

+ +

The trouble is that I don't see any learning progress regardless of what I try, even after running some 50,000 episodes. In comparison, the CartPole gym successfully trains in under 500 episodes.

+ +

I would have expected some improvement after 50k episodes for such a simple task as Banana.

+ +

+ +

Is it because the Banana-v0 rewards are not predictable? I.e. whether the banana sells or not is still determined by a random number (with success chance based on the price).

+ +

Where should I take it from here? How to identify which Coach agent algorithm I should start with and try to tune it?

+",31606,,,,,11/26/2019 11:11,Unable to train Coach for Banana-v0 Gym environment,,0,4,,,,CC BY-SA 4.0 +16773,2,,16741,11/25/2019 23:22,,3,,"
+

how can an AI be trained if we human beings are not telling it its calculation is correct?

+
+ +

What you are looking for is called self-supervised learning. Yann LeCun, one of the originators behind modern neural network systems, has suggested that machines can reason usefully even in the absence of human-provided labels simply by learning auxiliary tasks, the answers for which are already encoded in the data samples. Self-supervision has already been successfully applied to a variety of tasks, showing improvement in multitask performance due to self-supervision. Unsupervised learning would in general be a subset of self-supervision.

+ +

Self-supervision can be performed in a variety of ways. One of the most common is to use parts of the data as input and other parts as labels, and using the ""input"" subset of the data to predict the labels.

+ +

Supervised learning looks like this:

+ +
model.fit(various_data, human_labels)
+
+ +

The human_labels correspond to entries in various_data, which we expect the model to predict.

+ +

Meanwhile, self-supervised learning can look something like this:

+ +
model.fit(various_data[:,:500], various_data[:,500:]) 
+
+ +

(Using Python array slice notation, some of the input data are used as training labels.)

+ +

For example, a machine could use half of the pixels in an image of a handwritten digit to try to predict the missing pixels. This is a form of self-supervision: Since the machine knows which pixels belong together in the same sample, it can ""automatically"" produce its own labeled data from the input itself, simply by using some inputs as outputs. +However, predicting pixels from other pixels is often not the desired task. +So instead, a neural network is often pretrained using self-supervised or unsupervised learning techniques, and then subsequently trained on some amount of human-labeled data as a form of transfer learning.

+ +

What the summary of the hypothetical news article promises is that self-supervision made the learning more efficient, not that it outgrew the need for any kind of human intervention. This is exactly what we get from successful self-supervision in pretraining.

+ +

In the best possible case, the machine learns to ""recognize"" each class of digit 0-9 but it still does not know how to ground its own internal labels to the human's labels. Then a human supplying the mapping between the machine's labels and the human-specified IDs would be the only step necessary to upgrade the self-supervised machine to one that is directly useful for digit recognition.

+ +

There will always be a need for humans to train a machine via direct supervision in order for the machine to learn the intended task. In order to solve a specific problem, a sufficient degree of supervision is always required, and sufficient labels to reflect the intention must be provided.

+",27060,,27060,,12/1/2019 4:34,12/1/2019 4:34,,,,2,,,,CC BY-SA 4.0 +16774,2,,16741,11/25/2019 23:48,,3,,"

I think you're probably looking at this the wrong way around. A conventional, old-fashioned AI doesn't make a guess, then require confirmation as to whether that guess was right or wrong. Instead, (in the simplest case) it undergoes a one-off computationally intensive ""training""/""learning"" phase, during which you feed it an enormous number of correct answers (which are labelled as correct) and an even more enormous number of incorrect answers (which are labelled as incorrect). Using whatever learning mechanism it has at its disposal, it then identifies some underlying structure in the ""corrects"" that doesn't exist in the ""incorrects"". When, in the future, it encounters something new that seems to also exhibit this structure, then it will classify this as a ""correct"". It might do rather well, or it might do terribly. Once the one-off training phase is done, it's stuck with whatever capability it has.

+ +

Let's say the company you mention is called Facebook and they have a feature that allows you to ""tag"" your friends in photos. No need to pay engineers to create the largest labelled image database in human history in order to train your AI.

+",31610,,,,,11/25/2019 23:48,,,,2,,,,CC BY-SA 4.0 +16776,1,,,11/26/2019 1:22,,1,45,"

I'm new to Deep Learning with Keras. With some online tutorials for cat vs non-cat classification, I was able to put together this simple architecture for my own classification problem. However, my target application is fire detection, which might be semantically quite different from cats.

+ +

After training, I realized this model is accurate when the fire scene is simple and clearly visible, but if there are many objects in the scene or the fire is a bit smaller, it fails to detect it. So I thought maybe I can change the architecture by increasing the complexity.

+ +

The first thing that came to my mind was increasing the number of filters in the first layer from 32 to 64 by changing it to model.add(Conv2D(64, kernel_size = (3, 3), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)))

+ +

Is it going to help? What are other best practices for changing the architecture? How about increasing the kernel size to kernel_size = (5, 5), adding one more layer, or even changing the images from grayscale to colored?

+ +

Here is my original training code:

+ +
from keras.models import Sequential, load_model
+from keras.layers import Dense, Dropout, Flatten
+from keras.layers import Conv2D, MaxPooling2D
+from keras.layers.normalization import BatchNormalization
+from PIL import Image
+from random import shuffle, choice
+import numpy as np
+import os
+
+IMAGE_SIZE = 256
+IMAGE_DIRECTORY = './data/test_set'
+
+def label_img(name):
+  if name == 'cats': return np.array([1, 0])
+  elif name == 'notcats' : return np.array([0, 1])
+
+def load_data():
+  print(""Loading images..."")
+  train_data = []
+  directories = next(os.walk(IMAGE_DIRECTORY))[1]
+
+  for dirname in directories:
+    print(""Loading {0}"".format(dirname))
+    file_names = next(os.walk(os.path.join(IMAGE_DIRECTORY, dirname)))[2]
+
+    for i in range(200):
+      image_name = choice(file_names)
+      image_path = os.path.join(IMAGE_DIRECTORY, dirname, image_name)
+      label = label_img(dirname)
+      if ""DS_Store"" not in image_path:
+        img = Image.open(image_path)
+        img = img.convert('L')
+        img = img.resize((IMAGE_SIZE, IMAGE_SIZE), Image.ANTIALIAS)
+        train_data.append([np.array(img), label])
+
+  return train_data
+
+def create_model():
+  model = Sequential()
+  model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', 
+                   input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(BatchNormalization())
+  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(BatchNormalization())
+  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(BatchNormalization())
+  model.add(Conv2D(128, kernel_size=(3,3), activation='relu'))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(BatchNormalization())
+  model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
+  model.add(MaxPooling2D(pool_size=(2,2)))
+  model.add(BatchNormalization())
+  model.add(Dropout(0.2))
+  model.add(Flatten())
+  model.add(Dense(256, activation='relu'))
+  model.add(Dropout(0.2))
+  model.add(Dense(64, activation='relu'))
+  model.add(Dense(2, activation = 'softmax'))
+
+  return model
+
+training_data = load_data()
+training_images = np.array([i[0] for i in training_data]).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1)
+training_labels = np.array([i[1] for i in training_data])
+
+print('creating model')
+model = create_model()
+model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
+print('training model')
+model.fit(training_images, training_labels, batch_size=50, epochs=10, verbose=1)
+model.save(""model.h5"")
+
+",9053,,9053,,11/26/2019 16:50,11/26/2019 16:50,How to change the architecture of my simple sequential model,,0,0,,,,CC BY-SA 4.0 +16777,2,,16741,11/26/2019 5:34,,1,,"

What you are missing is what the news story doesn't mention and glosses over. When a news article says:

+ +
+

company A has a large human face database so that it can train its facial recognition program more efficiently

+
+ +

What it really means is:

+ +
+

company A has a large database of human faces along with additional information such as the identity of the person the face belong to that was created by other humans so that they can use this data set to train its facial recognition program

+
+ +

How training works is basically as follows:

+ +
  1. You have a large database of correct (or almost all correct; ideally it should all be correct) information that you want to relate one to the other. For example, images of faces along with who each face belongs to.
  2. You split this large database into several sets.
  3. You use one set to train the AI.
  4. After looping through the training set, you use one or more of the other sets to test the AI and check if the training works.
  5. If you've done this before, compare the performance of the current AI to the previous AI. Else go to 6.
  6. Tweak some parameters of the AI to try to improve performance.
  7. Go to 2 until you are satisfied with the performance of the AI.

All the steps above are normally automated by scripts. The key here is that the original database has both the question you want to ask the AI (face) and the answer you want the AI to learn (person).
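
+ +

As a minimal sketch of steps 2-4 above (the face_vectors and identities variables are placeholders for the gathered database, and the model choice is arbitrary):

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# face_vectors: the feature vectors, identities: the human-provided labels (placeholders)
X_train, X_test, y_train, y_test = train_test_split(face_vectors, identities, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)               # step 3: train on one set
print(model.score(X_test, y_test))        # step 4: check the training on a held-out set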

+ +

Yes, humans are involved in training the AI but the involvement happens earlier at the database gathering stage.

+",31616,,,,,11/26/2019 5:34,,,,0,,,,CC BY-SA 4.0 +16779,1,,,11/26/2019 7:05,,3,88,"

The specific problem I have is learning the relation $x^2$. I have an array of 0 through 19 (input values) and a target array of 0, 1, 4, 9, 16, 25, 36 and so on all the way up to $19^2$=361.

+ +

I have the following LSTM architecture:

+ +
1 input node
+1 output node
+32 hidden units
+
+ +

Now, interestingly, I accidentally trained my network wrong, in that I forgot to reverse the expected output list when training. So I trained the network for: +$$0 \rightarrow 361 \\ 1 \rightarrow 324 \\ 2 \rightarrow 289 \\ 3 \rightarrow 256 \\ ... \\ 17 \rightarrow 4 \\ 18 \rightarrow 1 \\ 19 \rightarrow 0$$

+ +

Starting with a learning rate of 1 and halving it every 400 epochs, after 3000 epochs my error (which started somewhere in the millions) was 0.2.

+ +

However, when I went to correct this mistake, the error would hardly ever go below 100,000. Testing the network shows it does well on the low inputs, but once it gets to $16^2$ and onwards, it really struggles to increase the output values past ~250.

+ +

I was just wondering if anyone has an explanation for this, as to why the LSTM struggles to learn to increase exponentially but seems to be able to learn to decrease exponentially just fine.

+ +

EDIT WITH CODE:

+ +
a = np.array([i for i in range(20)])
+b = np.array([i**2 for i in range(20)])
+np.random.seed(5)
+ls = LSTM(1, 1, 32, learning_rate=1, regression=1)
+# Input size = 1, output size = 1, hidden units = 32
+if 1:
+    for k in range(3000):
+        total = 0
+        for i in range(20):
+            ls * a[i]
+        for i in range(20):
+            total += ls / b[i]
+        if k % 400 == 0:
+            ls.learning_rate *= 0.5
+        print(k, "":"", total)
+        ls.update_params()
+for i in a:
+    print(i, ls*i)
+#ls.save_state('1.trainstate')
+for i in range(20,30):
+    print(ls*i)
+
+ +

Note this code uses a class I wrote using numpy. If wanted, I'll include this code as well; it's just that it is ~300 lines and I don't expect anyone to go through all that.

+",26726,,26726,,1/18/2020 2:37,1/18/2020 2:37,Why do regression LSTMs learn high to low inputs significantly better than low to high?,,0,6,,,,CC BY-SA 4.0 +16780,2,,16646,11/26/2019 7:53,,1,,"

Right, AI is an extension of human creativity and the implied limitation is that it inherits bias through the specific choice of which features to consider. Given a set of features it is then far more able at calculating which combination of features best helps explain the relationship being considered than is the human mind. Humans are too distracted to think to the depth that AI and machine learning can. But that extreme focus is not intelligence.

+ +

One of the issues that prevents the human mind from thinking at comparable depth is the need to massage the set of features that might apply; we are constantly reviewing features, adding in new and eliminating those that do not contribute. Creativity is openness to admitting other seemingly unrelated features and hoping for emergence, and managing to persist in being creative when emergence is delayed.

+",4994,,,,,11/26/2019 7:53,,,,0,,,,CC BY-SA 4.0 +16781,1,,,11/26/2019 8:05,,1,47,"

I'm comparing the results of a Newton optimizer for a modified version of SVM (a generalized quadratic loss, similar to the one stated in:

+ +

A generalized quadratic loss for SVM

+ +

) with classic SVM^light for regression. The problem is that it's able to overfit the data (UCI Yacht data-set) but I can't reach the generalization results of SVM^light. I've tried several hyper-parameter grids. I'm solving the primal problem. I'll send you my code if you need it. Any suggestions?

+",31620,,32140,,12/18/2019 20:21,12/18/2019 20:21,"A generalized quadratic loss and Newton iteration for Support Vector Regression, why doesn't it generalize well?",,0,2,,,,CC BY-SA 4.0 +16782,2,,12734,11/26/2019 10:02,,1,,"

You should only look for the cross-validation score. If this set is large enough, it will give you an accurate prediction of how your model will act for unseen data.

+ +

Your case is exceptional. The fitted model which is obviously overfitted actually performs better on the cross-validation set. This means in turn that your overfitted model will perform better with unseen data.

+",29671,,,,,11/26/2019 10:02,,,,0,,,,CC BY-SA 4.0 +16783,2,,2158,11/26/2019 10:22,,2,,"

## Why Some Investors and Researchers Prefer Radar Over LiDAR, and Recent Developments in Radar

+
+

Direct Answer to Your Question / What This Answer is About

+
+

" ... Why does Google use radar? Doesn't LIDAR do everything radar can do? In particular, are there technical advantages with radar regarding object detection and tracking? ... " ~ Crashalot (Stack Exchange user, Opening Oposter)

+
+

To begin, some researchers and investors hold reservations about LiDAR.

+

Lidar is criticised for being expensive and unwieldy.

+

Advances, such as capturing images from a bird's-eye view and other such improvements, have made radar nearly as accurate as LiDAR, while being cheaper. (This will be explored in the technical section, not here.)

+

This is not conclusive as autonomous cars, radar, and/or LiDAR are being heavily researched.

+

Another path to take is a hybrid system between radar and LiDAR. (This will be explored in the technical section, not here.)

+
+

Layperson Explanation

+

Start here: 10 Astonishing Technologies That Power Google’s Self-driving Cars (It has slides, so I can't extract the information as easily.)

+

-

+

Elon Musk has very strong opinions against LiDAR. Naturally, he has influence. A while back, radar was unable to detect certain phenomena, such as during severe meteorological disturbances like a snow storm.

+

This has changed with recent advances. More efficient systems can pierce such storms. New antenna and radar fabrication technologies are here; they have enabled, for example, higher (millimeter-wave) frequencies that provide greater resolution with smaller phased-array antennas.

+

-

+
+

"While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

+
+
+

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo camera a viable and low-cost alternative to LiDAR." – Crowe, Steve. "Researchers Back Tesla's Non-Lidar Approach To Self-Driving Cars - The Robot Report". The Robot Report, 2019, < https://www.therobotreport.com/researchers-back-teslas-non-lidar-approach-to-self-driving-cars/ >.

+
+
+
+

"So far in the self-driving realm, automakers and technology companies have been enamored with other sensors for this purpose. Automated cars are currently using cameras and laser sensors known as lidar.

+
+
+

By comparison, radar, which has been on production vehicles for two decades, has been a staple of driver-assist systems for obstacle detection, and until now, not viewed as a tool for localization. Perhaps it's even underappreciated. Venture capital has flowed into lidar and camera-based solutions for automated vehicles; radar has been viewed as a commodity.

+
+
+

'It's unfortunate that's the perception, but it's probably as it should be,' says John Xin, CEO of Lunewave, a startup developing radar-sensing systems. "Over the last 20 years, there's not been a whole lot of game-changing hardware technology coming out of radar sensors."

+
+
+

That's changing. Whether it's global suppliers such as Bosch, or startups such as Lunewave and WaveSense, a recent spinoff from MIT's Lincoln Laboratory, there's fresh innovation being wrung from radar, a technology that first found widespread use during World War II, and was first deployed on production automobiles by supplier Delphi in 1999.

+
+
+

These three companies are rethinking the role of radar in mobility. Here's a rundown of the technology advances underway." – Crowe, Steve. "Researchers Back Tesla's Non-Lidar Approach To Self-Driving Cars - The Robot Report". The Robot Report, 2019, < https://www.therobotreport.com/researchers-back-teslas-non-lidar-approach-to-self-driving-cars/ >.

+
+
+

Business Applications

+

Is LiDAR, radar, or camera Better for your business: Demystifying the ADAS / AD Technology Mix

+ +
+

Technical Mirror

+

Wikipedia is probably the best place to start:--

+ +

This paper is one example of advancement in radar:--

+
+

"A procedure for radar range calculation is described, reflecting current knowledge of the effects of external natural noise sources, atmospheric-absorption losses, and the refractive effect of the normal atmosphere. The range equation is presented in terms of explicitly defined and readily evaluated quantities. Curves and equations are given for evaluating the quantities that are not ordinarily known by direct measurement. Some conventions are proposed for use in general radar range calculation, including an antenna-noise-temperature curve, minimum-detectable signal-to-noise ratio (``visibility factor'') curves, and a formula for the reflection coefficient of a rough sea. A noise-temperature table and a work-sheet for range calculation are included in the Appendix." – Blake, Lamont V. "Recent advancements in basic radar range calculation technique." IRE Transactions on Military Electronics 2 (1961): 154-164.

+
+
+

There is a technical paper here that compares/contrasts Radar and LiDAR:--

+

(https://www.sae.org/publications/technical-papers/content/2000-01-0345/)

+

-

+

One path to take is synergy/a hybrid system between both radar and LiDAR:--

+
+

" .... Currently, aerial light detecting and ranging (LIDAR) systems are therefore preferred for the detection and ranging of objects submerged in the sea. LIDAR provides for large area coverage at high speed, but it lacks coherent detection capability, a shortcoming that severely limits system sensitivity and underwater target contrast. In response to this problem, this paper details the merging of RADAR and LIDAR technologies in the constitution of a hybrid LIDAR-RADAR detection scheme. ... " – Mullen, Linda J., et al. "Application of RADAR technology to aerial LIDAR systems for enhancement of shallow underwater target detection." IEEE Transactions on microwave theory and techniques 43.9 (1995): 2370-2377.

+
+ +

Where lives are at stake, it can be argued that back-up layers are needed. Redundancies/back-ups (""redundancies"" in a sense similar to having two kidneys, per the systems sciences) can be extremely useful in the case where one system does not pick up the shard of ice that could make a car veer off the road.

+
+

Sources, References, and Further Reading:--

+ +",25982,,25982,,9/16/2020 22:29,9/16/2020 22:29,,,,0,,,,CC BY-SA 4.0 +16787,1,,,11/26/2019 11:17,,1,25,"

I'm working on a college project about traffic sign detection and I have to choose a paper to implement, but I have only basic knowledge of TensorFlow and I'm afraid of choosing a paper that I can't implement.

+ +

What are examples of models for traffic sign detection that can be easily implemented?

+",31628,,2444,,11/26/2019 16:22,11/26/2019 16:22,What are examples of models for traffic sign detection that can be easily implemented?,,0,0,,,,CC BY-SA 4.0 +16789,2,,16769,11/26/2019 13:22,,2,,"

A random forest is a collection of classification trees. If more than 50% of these trees predict class A (and not class B), the random forest will predict class A.

+ +

What you can do is lower the percentage needed to classify it as class A (in your case, patient has the virus). This way, you can tell your random forest to predict class A if only 20% (or 10%, or 5%, ...) of the decision trees actually predicts class A.

+ +

I don't know what code you are using for the random forest algorithm, but in most implementations you should be able to ask for the probability (% of certainty) the random forest assigns to each class.
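
+ +

As a minimal scikit-learn sketch (assuming the ""has the virus"" class is labelled 1, and X_train, y_train, X_test are placeholders for your data):

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=500, class_weight='balanced')
model.fit(X_train, y_train)   # X_train, y_train: the blood values and virus labels

# Column 1 is the fraction of trees that vote for class 1 ('has the virus').
virus_probability = model.predict_proba(X_test)[:, 1]

# Flag a patient whenever at least 20% of the trees predict the virus,
# instead of the 50% majority used by model.predict().
threshold = 0.20
predictions = (virus_probability >= threshold).astype(int)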

+",29671,,,,,11/26/2019 13:22,,,,7,,,,CC BY-SA 4.0 +16792,1,,,11/26/2019 15:03,,2,272,"

In the neural Turing machine (NTM), content-based addressing and location-based addressing are used for memory addressing. Content-based addressing is similar to an attention-based model: it weights each row of memory, which shows the importance of each row (or each location) of memory. Then, for location-based addressing, by using a shift kernel, the attention focus is moved left or right or remains unchanged.

+ +

What is location-based addressing? Why was location-based addressing used? What is the concept of ""for location-based addressing, by using shift kernel, the attention focus is moved left or right or remains unchanged.""? +What is the difference between content-based addressing and location-based addressing?

+",31642,,2444,,11/26/2019 16:17,11/26/2019 16:17,What is a location-based addressing in a neural Turing machine?,,0,0,,,,CC BY-SA 4.0 +16793,1,16795,,11/26/2019 15:06,,1,74,"

Is it possible to perform multiclass classification on data where the number of features is less than the number of target variables? Do you have any suggestions on how to address a problem where I have 2000 target variables?

+",23866,,2444,,11/26/2019 15:52,11/26/2019 16:13,Can I perform multiclass classification when the number of features is less than the number of targets?,,1,0,,,,CC BY-SA 4.0 +16794,2,,16746,11/26/2019 16:09,,3,,"

In computational learning theory, a learning algorithm (or learner) $A$ is an algorithm that chooses a hypothesis (which is a function) $h: \mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{X}$ is the input space and $\mathcal{Y}$ is the target space, from the hypothesis space $H$.

+

For example, consider the task of image classification (e.g. MNIST). You can train, with gradient descent, a neural network to classify the images. In this case, gradient descent is the learner $A$, the space of all possible neural networks that gradient descent considers is the hypothesis space $H$ (so each combination of parameters of the neural network represents a specific hypothesis), $\mathcal{X}$ is the space of images that you want to classify, $\mathcal{Y}$ is the space of all possible classes and the final trained neural network is the hypothesis $h$ chosen by the learner $A$.

+
+

For example, would the decision tree and random forest be considered two different learning algorithms?

+
+

The decision tree and random forest are not learning algorithms. A specific decision tree or random forest is a hypothesis (i.e. function of the form as defined above).

+

In the context of decision trees, the ID3 algorithm (a decision tree algorithm that can be used to construct the decision tree, i.e. the hypothesis), is an example of a learning algorithm (aka learner).

+

The space of all trees that the learner considers is the hypothesis space/class.

+
+

Would a shallow neural network (that ends up learning a linear function) and a linear regression model, both of which use gradient descent to learn parameters, be considered different learning algorithms?

+
+

The same can be said here. A specific neural network or linear regression model (i.e. a line) corresponds to a specific hypothesis. The set of all neural networks (or lines, in the case of linear regression) that you consider corresponds to the hypothesis class.

+
+

Anyway, from what I understand, one way to vary the hypothesis $f$ would be to change the parameter values, maybe even the hyper-parameter values of, say, a decision tree.

+
+

If you consider a neural network (or decision tree) model, with $N$ parameters $\mathbf{\theta} = [\theta_i, \dots \theta_N]$, then a specific combination of these parameters corresponds to a specific hypothesis. If you change the values of these parameters, you also automatically change the hypothesis. If you change the hyperparameters (such as the number of neurons in a specific layer), however, you will be changing the hypothesis class, so the set of hypotheses that you consider.

+
+

Are there other ways of varying $f$?

+
+

Off the top of my head, only by changing the parameters, you change the hypothesis.

+
+

And how can we vary $A$?

+
+

Let's consider gradient descent as the learning algorithm. In this case, to change the learner, you could change, for example, the learning rate.

+",2444,,2444,,12/7/2020 21:33,12/7/2020 21:33,,,,0,,,,CC BY-SA 4.0 +16795,2,,16793,11/26/2019 16:13,,2,,"

Of course. It only depends on whether those features are informative enough for the task at hand. To better understand the phenomenon, you can imagine 2 features displayed as points in a 2D plane. The number of possible target classes can go up to the number of clusters you can find in that plane.

+ +

About the suggestion, I can only recommend the utilisation of a non-linear classifier.

+",27444,,,,,11/26/2019 16:13,,,,0,,,,CC BY-SA 4.0 +16796,5,,,11/26/2019 16:19,,0,,,2444,,2444,,11/26/2019 16:19,11/26/2019 16:19,,,,0,,,,CC BY-SA 4.0 +16797,4,,,11/26/2019 16:19,,0,,"For questions related to the neural Turing machine model, proposed in ""Neural Turing Machines"" (2014) by Alex Graves et al.",2444,,2444,,11/26/2019 16:19,11/26/2019 16:19,,,,0,,,,CC BY-SA 4.0 +16799,1,,,11/26/2019 19:58,,1,60,"

Suppose you want to detect objects and also track objects and people. Is it better to train a model using a single fisheye camera or using multiple cameras that mimic the view of the fisheye camera?

+ +

Also, what can be done to remove objects that are washed out? Like for very small objects, how do you make them more visible? Would multicamera tracking be better in this scenario?

+",28201,,,,,12/27/2019 7:01,Multicamera Tracking vs Single Fisheye Camera,,1,0,,,,CC BY-SA 4.0 +16800,1,,,11/26/2019 21:54,,1,58,"

Suppose I learn an optimal policy $\pi(a|c)$ for a contextual multi-armed bandit problem, where the context $c$ is a composite of multiple context variables $c = c_1, c_2, c_3$. For example, the context is specified by three Bernoulli variables.

+ +

Is there any literature on how to determine the optimal policy in the event where I no longer have access to one of the context variables?

+",31663,,2444,,4/15/2020 21:22,4/15/2020 21:22,How do I determine the optimal policy in a bandit problem with missing contexts?,,0,2,,,,CC BY-SA 4.0 +16801,1,22728,,11/26/2019 22:28,,9,299,"

I'm interested in self replicating artificial life (with many agents), so after reviewing the literature with the excellent Kinematic Self-Replicating Machines I started looking for software implementations. I understand that the field is still in the early stages and mainly academic, but the status of artificial life software looks rather poor in 2019.

+ +

On Wikipedia there is this list of software simulators. Going through the list, only ApeSDK, Avida, DigiHive, DOSE and Polyword have been updated in 2019. I did not find a public repo for Biogenesis. ApeSDK, DigiHive and DOSE are single-author programs.

+ +

All in all I don't see a single very active project with a large community around (I would be happy to have missed something). And this is more surprising considering the big momentum of AI and the proliferation of many ready to use AI tools and libraries.

+ +

Why is the status of artificial life software so under-developed, when this field looks promising both from a commercial (see manufacturing, mining or space exploration applications) and an academic (ecology, biology, the human brain and more) perspective? Did the field underdeliver on expectations in past years and get less funding? Did the field hit a theoretical or computational roadblock?

+",16363,,16363,,11/28/2019 22:09,7/30/2020 16:01,Why is the status of artificial life software so under-developed?,,1,7,,,,CC BY-SA 4.0 +16802,2,,16714,11/26/2019 23:37,,1,,"

The best way is probably to Google it with ""[org name] tensorflow github"" and look what you get.

+ +

For instance I found:

+ +

Microsoft

+ +

Nvidia

+ +

Intel

+",16363,,,,,11/26/2019 23:37,,,,0,,,,CC BY-SA 4.0 +16803,1,,,11/27/2019 0:22,,2,227,"

Loss is MSE; orange is validation loss, blue training loss. The task is NN regression (18 inputs, 2 outputs), one layer 300 hidden units. +

+ +

Tuning the learning rate, momentum, and L2 regularization parameters, this is the best validation loss I can obtain. Can this be considered overfitting? Is 1 a bad validation loss value for a regression task?

+",31665,,,,,11/30/2019 0:05,How do you interpret this learning curve?,,3,0,,,,CC BY-SA 4.0 +16804,2,,16803,11/27/2019 4:13,,3,,"

It depends on what 1 represents in your task. If you are trying to predict household prices and 1 represents \$1, I think the average validation loss is good. If 1 represents \$10000, probably something is not right. But remember that there are 2 parts contributing to the overall loss: the MSE loss and the L2 penalty loss. (Also remember that most optimizers already implement the L2 penalty as weight decay, so you do not need to add it separately.)

+ +

Some suggestions.

+ +
  1. Check if your data has any outliers/anomalies. Based on your task you should know as to what you can do with these data points. Also see if your dataset has high variance.
  2. If you are worried about over-fitting, think about your data again. Less data + more parameters often leads to over-fitting. If your dataset is too small, you need to think again.
  3. Try to adjust the number of hidden units and observe the results.
  4. Try using cross validation.
  5. Alternatively, try using different optimizers and see what happens (try Adam).
+",28182,,,,,11/27/2019 4:13,,,,0,,,,CC BY-SA 4.0 +16805,1,20630,,11/27/2019 6:44,,7,1454,"

Currently, I'm using a Python library, StellarGraph, to implement GCN. I now have a situation where I have graphs with weighted edges. Unfortunately, StellarGraph doesn't support those graphs.

+ +

I'm looking for an open-source implementation for graph convolution networks for weighted graphs. I've searched a lot, but mostly they assumed unweighted graphs. Is there an open-source implementation for GCNs for weighted graphs?

+",31672,,2444,,4/25/2020 20:58,10/20/2022 13:27,Is there an open-source implementation for graph convolution networks for weighted graphs?,,3,1,,,,CC BY-SA 4.0 +16806,2,,16799,11/27/2019 6:44,,1,,"

A fisheye camera is always worse. Both convolutional-network object detectors and feature-based object detectors rely on the ""isometry"" of a planar image - the lack of strong distortions. Multiple cameras have the added benefit of several independent sources of information - ensembling. If each camera is processed by a separate network, that may help with the verification of objects by voting. With very small objects, no simple method will help. Multiple cameras may help a little if all camera outputs are processed together (stacked) as a single network input, but don't hold much hope - detecting few-pixel objects is very difficult, if possible at all.

+",22745,,,,,11/27/2019 6:44,,,,0,,,,CC BY-SA 4.0 +16807,1,,,11/27/2019 7:39,,4,299,"

What is the difference between genetic algorithms and evolutionary game theory algorithms?

+",9863,,2444,,11/27/2019 17:58,12/28/2019 15:00,What is the difference between genetic algorithms and evolutionary game theory algorithms?,,2,0,,,,CC BY-SA 4.0 +16808,2,,2078,11/27/2019 9:53,,1,,"

My answer is from a game-theory perspective. Replicator dynamics is one of the core concepts of evolutionary game theory; it describes the rate of adaptation with respect to the rate of change in the population. Whenever there is a change in the system, replicator dynamics helps the population adapt to the change with respect to the utility (fitness) function.

+ +

The standard replicator dynamics equation is $\dot{x}_i = x_i \left( f_i(x) - \sum_j x_j f_j(x) \right)$, where $x_i$ is the share of strategy $i$ in the population and $f_i(x)$ is its fitness (utility).
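
+ +

As a minimal sketch (assuming a linear fitness given by a payoff matrix; the step size is arbitrary), one discrete-time update of the replicator dynamics could look like this:

import numpy as np

def replicator_step(x, payoff_matrix, dt=0.01):
    # x: the population share of each strategy (non-negative, sums to 1)
    fitness = payoff_matrix @ x      # f_i(x): fitness of each strategy
    average = x @ fitness            # phi(x): average fitness of the population
    return x + dt * x * (fitness - average)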

+ +

For Better Understanding go through this link: Evolutionary Algorithm Pdf

+ +

Hope this will be helpful.

+",9863,,9863,,12/20/2019 7:38,12/20/2019 7:38,,,,0,,,,CC BY-SA 4.0 +16810,1,,,11/27/2019 11:16,,2,57,"

I read this tutorial about backpropagation.

+ +

So, using this backpropagation, we train the neural network repeatedly on one input set, say [2,4], until we reach 100% accuracy of getting 1 as the output, and the neural network adjusts its weight values accordingly. Once the neural network is trained this way, suppose we then give it another input set, say [6,8]. Will the neural network update its weight values (overwriting the previous values)? Won't this result in losing the previous learning?

+",31684,,4709,,11/27/2019 18:59,11/28/2019 14:02,"How can I train a neural network for another input set, without losing the learning of the previous input set?",,1,0,,,,CC BY-SA 4.0 +16812,1,,,11/27/2019 12:05,,2,154,"

I know that reinforcement learning has been used to solve the inverted pendulum problem.

+ +

Can supervised learning be used to solve the inverted pendulum problem?

+ +

For example, there could be an interface (e.g. a joystick) with the cart-pole system, which the human can use to balance the pole and, at the same time, collect a dataset for supervised learning. Has this been done before?

+",31381,,2444,,11/27/2019 17:54,2/15/2021 22:03,Can supervised learning be used to solve the inverted pendulum problem?,,1,2,,,,CC BY-SA 4.0 +16813,2,,16810,11/27/2019 12:19,,3,,"

Yes, this is actually a limitation known as catastrophic forgetting. +A proposed way to deal with this is elastic weight consolidation that ""remembers old tasks by selectively slowing down learning on the weights important for those tasks"". See Overcoming catastrophic forgetting in neural networks for details. Another approach is Learning without forgetting.

+ +

If the tasks are different, the approach you are talking about is called transfer learning. You might want to have a look at multi-task learning as well

+ +

If the tasks are the same, you could try creating a join of both datasets and training on that.

+",27851,,27851,,11/28/2019 14:02,11/28/2019 14:02,,,,0,,,,CC BY-SA 4.0 +16814,1,,,11/27/2019 12:38,,2,75,"

I am working on an intent detection problem for a chatbot in Java, so I need to convert words from String to a double[] format. I tried using wordToVec (deeplearning4j), but it does not return a vector for words not present in the training data.

+ +

e.g. My dataset for wordToVec.train() does not contain the word ""morning"". So wordToVec.getWordVector(""morning"") returns a null value.

+ +

There is no need to find the correlation between two words (like in word2vec), but it should be able to give me some sort of vector representation for any word.

+ +

Here are some things I thought of-

+ +
    +
  1. I could use a fixed-length hash function and convert the resultant hash into a vector. (Will hash collisions be likely enough to be an issue in this case?)
  2. +
  3. I could initialize for each word a long vector of zeros and set its elements to the character's ASCII value minus 64. E.g., keeping the maximum vector length at 10, ""AND"" would be represented as [1,14,4,0,0,0,0,0,0,0], and then normalize this. Is there a better solution to this problem?
  4. +
+ +

Here is the code I used to train the model-

+ + + +
public static void trainModel() throws IOException
+    {
+        //These lines simply generate the dataset into a format readable by wordToVec
+        utilities.GenRawSentences.genRaw();
+
+        dataLocalPath = ""./TrainingData/"";
+        String filePath = new File(dataLocalPath, ""raw_sentences.txt"").getAbsolutePath();
+        //Data Generation ends   
+
+        SentenceIterator iter = new BasicLineIterator(filePath);
+        TokenizerFactory t = new DefaultTokenizerFactory();
+        t.setTokenPreProcessor(new CommonPreprocessor());
+
+        VocabCache<VocabWord> cache = new AbstractCache<>();
+        WeightLookupTable<VocabWord> table = new InMemoryLookupTable.Builder<VocabWord>()
+                .vectorLength(64)
+                .useAdaGrad(false)
+                .cache(cache).build();
+
+        Word2Vec vec = new Word2Vec.Builder()
+                .minWordFrequency(1)
+                .iterations(5)
+                .epochs(1)
+                .layerSize(64)
+                .seed(42)
+                .windowSize(5)
+                .iterate(iter)
+                .tokenizerFactory(t)
+                .lookupTable(table)
+                .vocabCache(cache)
+                .build();
+
+        vec.fit();
+
+        //Saves the model for use in other programs
+        WordVectorSerializer.writeWord2VecModel(vec, ""./Models/WordToVecModel.txt"");
+
+    }
+",31685,,,,,11/27/2019 12:38,How can I feed any word into a neural network?,,0,0,,,,CC BY-SA 4.0 +16815,1,,,11/27/2019 13:12,,4,147,"

Some people say that abstract thinking, intuition, common sense, and understanding cause and effect are important to make AGI.

+ +

How important is learning to learn for the development of AGI?

+",31686,,2444,,11/27/2019 17:46,4/13/2020 22:10,How important is learning to learn for the development of AGI?,,1,0,,,,CC BY-SA 4.0 +16816,1,,,11/27/2019 15:25,,2,107,"

I'm currently trying to develop an RL agent that will teach itself to play the popular fighting game ""Tekken 7"". I initially had the idea of teaching it to play generally - against actual opponents with various levels of difficulty - but that idea proved to be rather complex. I've distilled the goal down to ""get a non-active standing opponent to 0 health as fast as possible"".

+ +

I have some experience with premade OpenAI environments, and tried making my own environment for this specific purpose, but this proved to be rather difficult, as there was no user-friendly documentation.

+ +

Below is a DQN that was coded along with the help of a YouTube tutorial

+ + + +
import numpy as np
+from tensorflow.keras.layers import Dense, Activation
+from tensorflow.keras.models import Sequential, load_model
+from tensorflow.keras.optimizers import Adam
+
+
+class ReplayBuffer(object):
+    def __init__(self, max_size, input_shape, n_actions, discrete=False):
+        self.mem_size = max_size  # memory size dictated
+        self.mem_cntr = 0  # number of transitions stored so far
+        self.discrete = discrete  # whether actions are discrete (stored one-hot)
+        self.state_memory = np.zeros((self.mem_size, input_shape))
+        self.new_state_memory = np.zeros((self.mem_size, input_shape))
+        dtype = np.int8 if self.discrete else np.float32
+        self.action_memory = np.zeros((self.mem_size, n_actions))
+        self.reward_memory = np.zeros(self.mem_size)
+        self.terminal_memory = np.zeros(self.mem_size, dtype = np.float32)
+
+
+    def store_transition(self, state, action, reward, state_, done):
+        index = self.mem_cntr % self.mem_size
+        self.state_memory[index] = state
+        self.new_state_memory[index] = state_
+        self.reward_memory[index] = reward
+        self.terminal_memory[index] = 1 - int(done)
+        if self.discrete:
+            actions = np.zeros(self.action_memory.shape[1])
+            actions[action] = 1.0  # one-hot encode the discrete action
+            self.action_memory[index] = actions
+        else:
+            self.action_memory[index] = action
+        self.mem_cntr += 1
+
+
+    def sample_buffer(self, batch_size):
+        max_mem = min(self.mem_cntr, self.mem_size)
+        batch = np.random.choice(max_mem, batch_size)
+
+        states = self.state_memory[batch]
+        states_ = self.new_state_memory[batch]
+        rewards = self.reward_memory[batch]
+        actions = self.action_memory[batch]
+        terminal = self.terminal_memory[batch]
+
+        return states, actions, rewards, states_, terminal
+
+
+    def build_dqn(lr, n_actions, input_dims, fcl_dims, fc2_dims):
+        model = Sequential([
+                    Dense(fcl_dims, input_shape = (input_dims, )),
+                    Activation('relu'),
+                    Dense(fc2_dims),
+                    Activation('relu'),
+                    Dense(n_actions)])
+
+        model.compile(optimizer = Adam(lr = lr), loss = 'mse')
+
+        return model
+
+    class Agent(object):
+        def __init__(self, alpha, gamma, n_actions, epsilon, batch_size,
+                     input_dims, epsilon_dec=0.996, epsilon_end=0.01,
+                     mem_size = 1000000, fname = 'dqn_model.h5'):
+            self.action_space = [i for i in range(n_actions)]
+            self.n_actions = n_actions
+            self.gamma = gamma
+            self.epsilon = epsilon
+            self.epsilon_dec = epsilon_dec
+            self.epsilon_min = epsilon_end
+            self.batch_size = batch_size
+            self.model_file = fname
+
+            self.memory = ReplayBuffer(mem_size, input_dims, n_actions,
+                                       discrete = True)
+            self.q_eval = build_dqn(alpha, n_actions, input_dims, 256, 256)
+
+        def remember(self, state, action, reward, new_state, done):
+            self.memory.store_transition(state, action, reward, new_state, done)
+
+        def choose_action(self, state):
+            state = state[np.newaxis, :]
+            rand = np.random.random()
+            if rand < self.epsilon:
+                action = np.random.choice(self.action_space)
+            else:
+                actions = self.q_eval.predict(state)
+                action = np.argmax(actions)
+
+            return action
+
+        def learn(self):#temporal difference learning, delta between steps \
+            #and learns from this
+            #
+            #using numpy.zero approach, only drawback \
+            #is that batch size of memory must be full before learning
+            if self.memory.mem_cntr < self.batch_size:
+                return
+            state, action, reward, new_state, done = \
+                                    self.memory.sample_buffer(self.batch_size)
+
+
+            action_values = np.array(self.action_space, dtype = np.int8)
+            action_indices = np.dot(action, action_values)
+
+            q_eval = self.q_eval.predict(state)
+            q_next = self.q_eval.predict(new_state)
+
+            q_target = q_eval.copy()
+
+            batch_index = np.arange(self.batch_size, dtype = np.int32)
+
+            q_target[batch_index, action_indices] = reward + \
+                                    self.gamma*np.max(q_next, axis=1)*done
+
+            _ = self.q_eval.fit(state, q_target, verbose=0)
+
+            self.epsilon = self.epsilon*self.epsilon_dec if self.epsilon > \
+                           self.epsilon_min else self.epsilon_min
+
+        def save_model(self):
+            self.q_eval.save(self.model_file)
+
+        def load_model(self):
+            self.q_eval = load_model(self.model_file)
+
+",31692,,,,,11/27/2019 15:25,How would one develop an action space for a game that is proprietary?,,0,0,,,,CC BY-SA 4.0 +16818,1,,,11/27/2019 16:50,,3,682,"

Can a CNN (or other non-sequential deep learning models) outperform LSTM (or other sequential models) in time series data?

+ +

I know this question is not very specific, but I experienced this when predicting daily store sales and I am curious as to why it can happen.

+",31694,,2444,,11/27/2019 17:20,11/28/2019 9:20,Can non-sequential deep learning models outperform sequential models in time series forecasting?,,1,0,,,,CC BY-SA 4.0 +16820,2,,16815,11/27/2019 19:05,,3,,"

Learning to learn (also known as meta-learning) is very important for the development of artificial general intelligence (AGI), given that one of the desirable and fundamental properties of an AGI is the adaptability to different environments and the ability to continually learn, and meta-learning can be used to achieve that.

+ +

Meta-learning is thus related to multi-task, continual (or lifelong) and transfer learning. There are several approaches to meta-learning, such as MAML, but there is usually a meta-learner and a learner.

+ +

In the paper Building Machines That Learn and Think Like People (2016), Brenden Lake et al. argue that truly human-like learning and thinking machines (or AGI) should

+ +
    +
  1. build causal models of the world that support explanation and understanding

  2. +
  3. ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned

  4. +
  5. harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations

  6. +
+ +

Another person that thinks that meta-learning is important for the development of AGI is Ben Goertzel. See e.g. his article From Here to Human-Level Artificial General Intelligence in Four (Not All That) Simple Steps.

+",2444,,2444,,4/13/2020 22:10,4/13/2020 22:10,,,,0,,,,CC BY-SA 4.0 +16821,5,,,11/27/2019 19:12,,0,,,2444,,2444,,11/27/2019 19:12,11/27/2019 19:12,,,,0,,,,CC BY-SA 4.0 +16822,4,,,11/27/2019 19:12,,0,,For questions related to the concept of meta-learning (or learning-to-learn).,2444,,2444,,11/27/2019 19:12,11/27/2019 19:12,,,,0,,,,CC BY-SA 4.0 +16823,1,,,11/27/2019 19:40,,1,31,"

In this new book release, at the top of page 51 the authors mention that to do deep learning on time series tabular data the developer should structure the tensors such that the channels represent the time periods.

+

For example, with a dataset of 17 features where each row represents an hour of a day: the tensor would have 3 dimensions,

+
+

x - the 17 features

+

y - the # of days

+

z - the 24 hours in each day

+
+

So each entry in the tensor would represent that day/hour.

+

Is this necessary to capture time series elements? Would the DNN not learn these representations simply by breaking up the date column into: day, hour?

+",12983,,-1,,6/17/2020 9:57,11/27/2019 19:40,Deep Learning on time series tabular data,,0,0,,,,CC BY-SA 4.0 +16824,1,21981,,11/27/2019 23:08,,3,80,"

I understand that in general an RNN is good for time series data and a CNN image data, and have noticed many blogs explaining the fundamental differences in the models.

+

As a beginner in machine learning and coding, I would like to know from the code perspective, what the differences between an RNN and CNN are in a more practical way.

+

For instance, I think most CNNs dealing with image data use Conv1D or Conv2D and MaxPooling2D layers and require reshaping the input data, with code that looks something like this: Input(shape=(64, 64, 1)).

+

What are some other things that distinguish CNNs from RNNs from a coding perspective?

+",31075,,2444,,6/17/2020 11:50,6/18/2020 0:55,"From an implementation point of view, what are the main differences between an RNN and a CNN?",,1,0,,,,CC BY-SA 4.0 +16826,1,16842,,11/28/2019 3:58,,3,1164,"

If we are learning or working in the machine learning field, then we frequently come across the term "probability distribution". I know what probability, conditional probability, and probability distribution/density mean in math, but what is their meaning in machine learning?

+

Take this example where $x$ is an element of $D$, which is a dataset,

+

$$x \in D$$

+

Let's say our dataset ($D$) is MNIST with about 70,000 images, so then $x$ becomes any image of those 70,000 images.

+

In many papers and web articles, these terms are often denoted as probability distributions

+

$$p(x)$$

+

or

+

$$p\left(z \mid x \right)$$

+
    +
  • What does $p(\cdot)$ even mean, and what kind of output does $p(\cdot)$ give?
  • +
  • Is the output of $p(\cdot)$ a scalar, vector, or matrix?
  • +
  • If the output is vector or matrix, then will the sum of all elements of this vector/matrix always be $1$?
  • +
+

This is my understanding,

+

$p(\cdot)$ is a function which maps the real distribution of the whole dataset $D$. Then $p(x)$ gives a scalar probability value given $x$, which is calculated from real distribution $p(\cdot)$. Similar to $p(H)=0.5$ in a coin toss experiment $D={\{H,T}\}$.

+

$p\left(z \mid x \right)$ is another function that maps the real distribution of the whole dataset to a vector $z$ given an input $x$ and the $z$ vector is a probability distribution that sums to $1$.

+

Are my assumptions correct?

+

An example would be a VAE's data generation process, which is represented in this equation

+

$$p_\theta(\mathbf{x}^{(i)}) = \int p_\theta(\mathbf{x}^{(i)}\vert\mathbf{z}) p_\theta(\mathbf{z}) d\mathbf{z}$$

+",39,,2444,,1/21/2021 16:16,1/21/2021 16:16,What is a probability distribution in machine learning?,,2,1,,,,CC BY-SA 4.0 +16828,1,,,11/28/2019 5:36,,1,138,"

I am trying to make a face login application that authenticates the user by matching the registered face and the given face. Currently, the issue is that I can't extract the face descriptors from the given face when the user takes the photo at night or the photo is backlit.

+

Currently, I am using a JavaScript API for face detection and face recognition in the browser, and Node.js with TensorFlow.js.

+

Can anyone suggest a good face detection and comparison algorithm that resolves my current issues? That would be very helpful.

+

Right now, I am extracting face descriptors from the face, and the Euclidean distance is used for comparing the similarity of the images. If there are any better methods for comparison, please suggest them.

+",31576,,-1,,6/17/2020 9:57,11/28/2019 5:36,How to extract face details from a image,,0,3,,,,CC BY-SA 4.0 +16829,1,,,11/28/2019 8:10,,7,2648,"

I'm beginning to study and implement GAN to generate more datasets. I'll just try to experiment with state-of-the-art GAN models as described here https://paperswithcode.com/sota/image-generation-on-cifar-10.

+

The problem is that I don't have a big dataset (around 1,000 images) for image classification. I have tried to train and test my dataset with GoogLeNet and InceptionV3 and the results are mediocre. I'm afraid that a GAN will require a bigger dataset than usual image classification does. I couldn't find any detailed guideline on how to prepare datasets properly for a GAN (e.g. a minimum number of images).

+

So, how many images are required to produce a good GAN model?

+

Also, I'm curious whether I can use my image classification dataset directly to train a GAN.

+",20612,,2444,,12/19/2021 18:21,8/16/2023 14:06,How many training data is required for GAN?,,1,3,,,,CC BY-SA 4.0 +16830,2,,16818,11/28/2019 9:15,,2,,"

You are right CNN based models can outperform RNN. You can take a look at this paper where they compared different RNN models with TCN (temporal convolutional networks) on different sequence modeling tasks. Even though there are no big differences in terms of results there are some nice properties that CNN based models offers such as: parallelism, stable gradients and low training memory footprint. In addition to CNN based models there are also attention based models (you might want to take a look at the transformer)

+",20430,,20430,,11/28/2019 9:20,11/28/2019 9:20,,,,0,,,,CC BY-SA 4.0 +16831,2,,16803,11/28/2019 9:55,,3,,"

The validation loss settles at exactly an error of one, which probably means there's something off with either the validation data or something in the training; an exact validation loss of one almost certainly indicates a problem.

+ +

Before doing anything else, I'd recommend thoroughly going through your data, or checking whether there's anything to debug in the model itself. Considering that the training error decreases, there's probably something different about either the formatting of the validation data or the validation data itself.

+ +

A brief description of the type of data and the exact problem at hand would help further.

+",25658,,,,,11/28/2019 9:55,,,,3,,,,CC BY-SA 4.0 +16832,2,,12390,11/28/2019 9:56,,0,,"

I would recommend learning about reinforcement learning first. You don't need a dataset, as you train your network by letting it play the game over and over again, but knowing how to do so does mean learning about Markov decision processes and how you can use the neural network to solve them.

+",29671,,,,,11/28/2019 9:56,,,,0,,,,CC BY-SA 4.0 +16833,2,,16133,11/28/2019 10:35,,13,,"

This is my own understanding of the hidden state in a recurrent network. If it's wrong, please, feel free to let me know.

+

Let's consider the following two input and output sequences

+

\begin{align} +X &= [a, b, c, d, \dots,y , z]\\ +Y &= [b, c, d, e, \dots,z , a] +\end{align}

+

We will first try to train a multi-layer perceptron (MLP) with one input and one output from $X$ and $Y$. Here, the details of the hidden layers don't matter.

+

We can write this relationship in maths as

+

$$f(x)\rightarrow y$$

+

where $x$ is an element of $X$ and $y$ is an element of $Y$ and $f(\cdot)$ is our MLP.

+

After training, if given the input $a = x$, our neural network will give an output $b = y$ because $f(\cdot)$ learned the mapping between the sequence $X$ and $Y$.

+

Now, instead of the above sequences, try to teach the following sequences to the same MLP.

+

\begin{align} +X &= [a,a,b,b,c,c,\cdots, y,z,z]\\ +Y &= [a,b,c,\cdots, z,a,b,c, \cdots, y,z] +\end{align}

+

More than likely, this MLP will not be able to learn the relationship between $X$ and $Y$. This is because a simple MLP can't learn and understand the relationship between the previous and current characters.

+

Now, we use the same sequences to train an RNN. In an RNN, we take two inputs, one for our input and the previous hidden values, and two outputs, one for the output and the next hidden values.

+

$$f(x, h_t)\rightarrow (y, h_{t+1})$$

+

Important: here $h_{t+1}$ represents the next hidden value.

+

We will execute some sequences of this RNN model. We initialize the hidden value to zero.

+
x = a and h = 0
+f(x,h) = (a,next_hidden)
+prev_hidden = next_hidden
+
+x = a and h = prev_hidden
+f(x,h) = (b,next_hidden)
+prev_hidden = next_hidden
+
+x = b and h = prev_hidden
+f(x,h) = (c,next_hidden)
+prev_hidden = next_hidden
+
+and so on 
+
+

If we look at the above process we can see that we are taking the previous hidden state values to compute the next hidden state. What happens is while we iterate through this process prev_hidden = next_hidden it also encodes some information about our sequence which will help in predicting our next character.
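To make this concrete, here is a minimal sketch of a vanilla RNN cell in NumPy (my own illustrative code, with made-up weight names and an untrained network); the only point is that the hidden state h is threaded from one step to the next:

import numpy as np

# Minimal vanilla RNN cell: the hidden state carries information about
# all previously seen inputs into the current step.
def rnn_cell(x, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    h_next = np.tanh(x @ W_xh + h_prev @ W_hh + b_h)  # new hidden state
    y = h_next @ W_hy + b_y                            # output at this step
    return y, h_next

# Toy dimensions: input/output size 4, hidden size 8 (arbitrary choices).
rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 4))
b_h, b_y = np.zeros(8), np.zeros(4)

h = np.zeros(8)                       # initial hidden state (prev_hidden = 0)
for x in rng.normal(size=(5, 4)):     # a sequence of 5 inputs
    y, h = rnn_cell(x, h, W_xh, W_hh, W_hy, b_h, b_y)   # prev_hidden = next_hidden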

+",39,,2444,,12/15/2021 11:14,12/15/2021 11:14,,,,1,,,,CC BY-SA 4.0 +16834,1,16840,,11/28/2019 11:49,,3,490,"

I am currently trying to understand SAC (Soft Actor-Critic), and I am thinking of it as a basic actor-critic with the entropy included. However, I expected the entropy to appear in the Q-function. From SpinningUp-SAC, it looks like the entropy is entering through the value-function, so I'm thinking it enters by the $\log \pi_{\phi}(a_t \mid s_t)$ in the value function?

+ +

I'm a little stuck on understanding SAC, can anyone confirm/explain this to me?

+ +

Also, side-note question: is being a soft agent equivalent to including entropy in one of the object functions?

+",31714,,2444,,11/23/2020 1:46,11/23/2020 1:46,Where does entropy enter in Soft Actor-Critic?,,1,0,,,,CC BY-SA 4.0 +16835,1,,,11/28/2019 12:36,,2,537,"

In Attention Is All You Need paper:

+
+

That is, the output of each sub-layer is $LayerNorm(x+Sublayer(x))$, where $Sublayer(x)$ is the function implemented by the sub-layer itself. We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized.

+
+

which makes the final formula $LayerNorm(x+Dropout(Sublayer(x)))$. However, in https://github.com/tensorflow/models/blob/0effd158ae1e6403c6048410f79b779bdf344d7d/official/transformer/model/transformer.py#L278-L288, I see

+
def __call__(self, x, *args, **kwargs):
+  # Preprocessing: apply layer normalization
+  y = self.layer_norm(x)
+
+  # Get layer output
+  y = self.layer(y, *args, **kwargs)
+
+  # Postprocessing: apply dropout and residual connection
+  if self.train:
+    y = tf.nn.dropout(y, 1 - self.postprocess_dropout)
+  return x + y
+
+

which ends up as $x+Dropout(Sublayer(LayerNorm(x)))$. Plus there are extra LayerNorms as final layers in both encoder and decoder stacks.
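For comparison, the post-norm order from the paper would look roughly like this (my own sketch, mirroring the snippet above and assuming the same layer_norm, layer and postprocess_dropout attributes; it is not code from the TensorFlow repository):

def __call__(self, x, *args, **kwargs):
  # Get sub-layer output directly from x (no pre-normalization)
  y = self.layer(x, *args, **kwargs)

  # Apply dropout, add the residual, then normalize:
  # LayerNorm(x + Dropout(Sublayer(x))), as written in the paper
  if self.train:
    y = tf.nn.dropout(y, 1 - self.postprocess_dropout)
  return self.layer_norm(x + y)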

+

In a quick test, the performance of this model seems to be better than if I change back to the paper's order of operations. My question is: why? And could it be predicted in advance?

+

I note that Generating Long Sequences with Sparse Transformers uses the $x+Dropout(Sublayer(LayerNorm(x)))$ order, but doesn't discuss it, unlike the other changes it makes to Transformer.

+",31712,,2444,,11/30/2021 15:44,11/30/2021 15:44,Where should we place layer normalization in a transformer model?,,0,0,,,,CC BY-SA 4.0 +16836,1,,,11/28/2019 13:12,,1,25,"

I am trying to recreate the architecture of the following paper: https://arxiv.org/pdf/1807.03058.pdf

+ +

Can someone explain how the feature maps coming out of the Grad-CAM output are used in the following conv layers?

+",31516,,,,,11/28/2019 13:12,Can Grad CAM feature maps be used for Training?,,0,0,,,,CC BY-SA 4.0 +16837,2,,16807,11/28/2019 13:19,,1,,"

A genetic algorithm typically uses a single population designed to optimise for a specific task, say minimising the distance on the travelling salesman problem.

+ +

Evolutionary game theory algorithms typically model changes between populations that are in competition, generally by using genetic algorithms as above but framed within a broader competitive environment between actors.

+ +

In the case of a problem like the travelling salesman problem, it might frame the game as one with competing players where a player getting to a city 'locks' that city and makes it impassable to other players. In these situations new optimisations like localism over adventurism etc may develop, and while players are still trying to minimise the distance travelled overall via their respective genetic algorithm's fitness function, they have to do so in a directly competitive environment with other strategies which quickly creates a lot of additional nuance and depth.

+",14997,,14997,,11/28/2019 13:35,11/28/2019 13:35,,,,0,,,,CC BY-SA 4.0 +16839,2,,16807,11/28/2019 14:23,,2,,"

Philip's answer is good, but I'll add to it.

+ +

In a GA, a population of individuals (typically represented by bit strings) is evaluated for its fitness on a particular task. Each individual is evaluated separately by a fitness function that can determine its quality. In the Traveling Salesman Problem, the bit string might represent a sequence of numbers, for instance, corresponding to an order in which cities are visited during the tour. The fitness function would inspect a single individual, compute the total cost of the tour, and assign that individual a fitness based on that value. Low scoring individuals are removed, high scoring individuals generate variants on themselves, and the process repeats.

+ +

In Evolutionary Game Theory, a population of individuals is also evaluated on some task, but usually the task involves interaction between the individuals. For example, you could use an EGT simulation to study what happens in a game like Iterated Prisoner's Dilemma. Here, an individual's fitness doesn't just depend on the rules of the task, but on the behaviors and strategies of the other players in the population. A strategy that is highly effective at first (like always cooperate) will quickly die out once defectors appear. Defectors are effective as long as there are some cooperators to prey upon, but are quickly defeated by strategies like Tit-for-Tat. Usually researchers are not interested in the specific strategies that emerge so much as in the population dynamics over the course of the simulation, and in what kinds of population equilibria can emerge. Check out some of Dan Ashlock's papers on Game Theory for more.

+",16909,,,,,11/28/2019 14:23,,,,0,,,,CC BY-SA 4.0 +16840,2,,16834,11/28/2019 15:14,,4,,"

In the answer I'll be using notation similar to the one from the SAC paper. +If we look at the standard objective function for policy gradient methods we have +\begin{align} +J_\pi &= V_\pi(s_t)\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t)]\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) + V(s_{t+1})]]\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) + \mathbb E_{a_{t+1} \sim \pi(a|s_{t+1})}[ \mathbb E_{s_{t+2} \sim p(s|s_{t+1}, a_{t+1})} [r(s_{t+1}, a_{t+1}) + V(s_{t+2})]]]]\\ +&\cdots\\ +&= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r( s_t, a_t)] +\end{align} +If you keep unwinding this $V(s_{t+i})$ you will get expected sum of rewards. +We can define soft state value as +\begin{align} +V(s_t) &= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) + \mathcal H(\cdot|s_t)]\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) + \mathbb E_{a \sim \pi(a|s_t)}[-\log(\pi(a|s_t))]]\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) - \log(\pi(a_t|s_t))] +\end{align} +third equality comes from the fact that $\mathbb E_{a \sim \pi(a|s_t)}[-\log(\pi(a|s_t))]$ is nonrandom so its the same thing as if we are sampling over $\pi$ only once.

+ +

In maximum entropy framework objective function would then be +\begin{align} +J_\pi &= V_\pi(s_t)\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[Q(s_t, a_t) - \log(\pi(a_t|s_t))]\\ +&= \mathbb E_{a_t \sim \pi(a|s_t)}[ \mathbb E_{s_{t+1} \sim p(s|s_t, a_t)} [r(s_t, a_t) - \log(\pi(a_t|s_t)) + V(s_{t+1})]]\\ +& \cdots\\ +&= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r(s_t, a_t) -\log(\pi(a_t|s_t))]\\ +&= \sum_t \mathbb E_{(a_t, s_t) \sim \rho_\pi} [r(s_t, a_t) + \mathcal H(\cdot|s_t)] +\end{align}
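As a practical side note (my own addition, not part of the derivation above): in typical implementations this is exactly where the entropy term enters the code - the soft value used for the critic and actor targets is the Q-value minus the scaled log-probability of the sampled action, for example:

# Hedged sketch: toy numbers; in practice alpha is the entropy temperature and
# q_value and log_prob come from the critic and the policy for a sampled action.
alpha, q_value, log_prob = 0.2, 1.5, -0.7
soft_value = q_value - alpha * log_prob   # V(s) ≈ Q(s, a) - α log π(a|s)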

+",20339,,31714,,11/29/2019 8:26,11/29/2019 8:26,,,,0,,,,CC BY-SA 4.0 +16841,2,,16826,11/28/2019 15:17,,2,,"

A probability distribution in ML is the same as a probability distribution elsewhere.

+ +

A probability distribution (or probability function, or probability mass function, or probability density function) is any function that accepts as input elements of some specific set $x \in X$, and produces as output non-negative real numbers (between 0 and 1, inclusive, in the case of a probability mass function), such that $\int_{x \in X} p(x) = 1$ or, for discrete sets, $\sum_{x \in X} p(x) = 1$.

+ +

These distributions can also be more complex. For example, a conditional probability distribution $P(Y|X)$ or a joint probability distribution $P(X,Y)$ accept more than one input, but again, are constrained to producing an output in the range 0 to 1, and to ensuring that the summation of the output over all possible inputs is exactly 1.

+ +

When these conditions are met, the functions output can be interpreted as a belief about the percentage of times the input event will occur, out of all events, or as a degree of believe in the input event having occurred vs. other events (i.e. it can be interpreted as a probability).

+",16909,,,,,11/28/2019 15:17,,,,0,,,,CC BY-SA 4.0 +16842,2,,16826,11/28/2019 15:50,,3,,"

Random variables

+

You do not necessarily need to understand the concept of a random variable (r.v.) to understand the concept of a probability distribution, but the concept of a random variable is strictly connected to the concept of a probability distribution (given that each random variable has an associated probability distribution), so, before proceeding, you should get familiar with the concept of an r.v., which is a (measurable) function from the sample space (the set of possible outcomes of an experiment) to a measurable space (you can ignore the definition of a measurable space and assume that the codomain of the random variable is a finite set of numbers).

+

Probability measure, cdf, pdf and pmf

+

The expression "probability distribution" can be ambiguous because it can be used to refer to different (even though related) mathematical concepts, such as probability measure, cumulative distribution function (c.d.f.), probability density function (p.d.f.), probability mass function (p.m.f.). If a person uses the expression "probability distribution", he (or she) intentionally (or not) refers to one or more of these mathematical concepts, depending on the context. However, a probability distribution is almost always a synonym for probability measure or c.d.f..

+

For example, if I say "Consider the Gaussian probability distribution", in that case, I could be referring to either the c.d.f. or the p.d.f. (or both) of the Gaussian distribution. Why couldn't I be referring to the p.m.f. of the Gaussian distribution? Because the Gaussian distribution is a continuous distribution, so it is a distribution associated with a continuous random variable, that is, a random variable that can take on continuous values (e.g. real numbers), so a Gaussian distribution does not have an associated p.m.f. or, in other words, no p.m.f. is defined for the Gaussian distribution. Why don't I simply say "Consider the p.d.f. of the Gaussian distribution." or "Consider a Gaussian p.d.f."? Because it is unnecessarily restrictive, given that, if I say "Consider the Gaussian distribution" I am implicitly also considering a p.d.f. and c.d.f. of the Gaussian distribution.

+

Similarly, in the case of a discrete distribution, such as the Bernoulli distribution, only the c.d.f. and p.m.f. are defined, so the Bernoulli distribution does not have an associated p.d.f.

+

However, it is important to recall that both continuous and discrete distributions have an associated c.d.f., so the expression "probability distribution" almost always (implicitly) refers to a c.d.f., which is defined based on a probability measure (as stated above).

+

Notation

+

In the same vein, the notation $p(x)$ can be as ambiguous as the expression "probability distribution", given that it can refer to different (but again related) concepts. However, $p(x)$ usually refers to a probability measure (so it refers to a probability distribution, given that a probability distribution is almost always a synonym for probability measure). In this case, assuming for simplicity that the r.v. is discrete, $p(x)$ is a shorthand for $p(X=x)$, which is also written as $\mathbb{P}(X=x)$ or $\operatorname{Pr}(X=x)$, where $X$ is a r.v., $x$ a realization of $X$ (that is, a value that the r.v. $X$ can take) and $X=x$ represents an event. Given that an r.v. is a function, the notation $X=x$ may look a bit weird.

+

In the case of a discrete r.v., $p(x)$ can also refer to a p.m.f. and it can be defined as $p_X(x) = \mathbb{P}(X=x)$ (I added the subscript $X$ to $p$ to emphasize that this is the p.m.f. of the discrete r.v. $X$). In the case of a continuous r.v., the p.d.f. is often denoted as $f$. In the case of both discrete and continuous r.v.s, the c.d.f is usually denoted with $F$ and it is defined as $F_X(x) = \mathbb{P}(X \leq x)$, where $\mathbb{P}$ is again a probability measure (or probability distribution). The p.d.f. of a continuous r.v. is then defined as the derivative of $F$. At this point, it should be clear why a probability distribution can refer to different but related concepts, but, in any case, it always refers to a probability measure.
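As a small illustration (my own addition, assuming SciPy is available), these objects are exposed directly for standard distributions:

from scipy.stats import bernoulli, norm

# Discrete r.v.: a Bernoulli(p=0.3) distribution has a p.m.f. and a c.d.f., but no p.d.f.
print(bernoulli.pmf(1, p=0.3))   # P(X = 1) = 0.3
print(bernoulli.cdf(0, p=0.3))   # P(X <= 0) = 0.7

# Continuous r.v.: a standard Gaussian has a p.d.f. and a c.d.f., but no p.m.f.
print(norm.pdf(0.0))             # density at 0, approximately 0.3989
print(norm.cdf(0.0))             # P(X <= 0) = 0.5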

+

Empirical distributions

+

There are also empirical distributions, which are distributions of the data that you have collected. For example, if you toss a coin 10 times, you will collect the results ("heads" or "tails"). You can count the number of times the coin landed on heads and tails, then you plot these numbers as a histogram, which essentially represents your empirical distribution, where the adjective "empirical" usually refers to the fact that there is an experiment involved.

+

Multivariate r.v.s and distributions

+

To complicate things even more, there are also multivariate random variables and probability distributions. However, all the concepts above more or less are also applicable in this case.

+

Parametrized distributions

+

A parametrized probability distribution, often denoted by $p_{\theta}$, is +a family of probability distributions (defined by the parameters $\theta$), rather than a single probability distribution. For example, $\mathcal{N}(0, 1)$ refers to a single Gaussian distribution with zero mean and unit variance. However, $\mathcal{N}(\mu, \sigma)$, where $\theta=(\mu, \sigma)$ is a variable, is a family (or collection) of distributions.

+

Conclusion

+

To conclude, it is completely understandable that you are confused, given that the terminology and notation are used inconsistently, and there are several involved concepts, which I have not extensively covered in this answer (for example, I have not mentioned the concept of a probability space). If you get familiar with the concepts of probability measures, random variables, p.m.f., p.d.f., c.d.f., etc., and how they are related, then you will start to get a better feeling of the whole picture.

+",2444,,2444,,1/21/2021 15:34,1/21/2021 15:34,,,,0,,,,CC BY-SA 4.0 +16843,2,,16769,11/28/2019 16:28,,2,,"

There are two main things to consider for dealing with imbalanced data:

+ +
    +
  1. During Training: Undersampling the majority class (healthy patients) so that the model is not that biased to predicting healthy

  2. +
  3. During Evaluation: Using a suitable metric to try to evaluate your model and try to optimize on when you are fine-tuning your random-forest. For imbalanced data you usually use F1 score but since a high recall (predicting sick more often) is important here, F2 score (or other F-beta score where beta>1) is more suitable https://en.wikipedia.org/wiki/F1_score

  4. +
+ +

You can also check for example https://www.kdnuggets.com/2017/06/7-techniques-handle-imbalanced-data.html for more about how to deal with imbalanced data in general
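For instance, a minimal sketch of computing the F2 score mentioned in point 2, assuming scikit-learn is available (the labels below are toy values where 1 = sick):

from sklearn.metrics import fbeta_score

y_true = [0, 0, 0, 1, 1, 1, 0, 1]   # toy ground truth (1 = sick)
y_pred = [0, 0, 1, 1, 0, 1, 0, 1]   # toy model predictions

# beta > 1 weights recall higher than precision, which is what we want
# when missing a sick patient is costlier than a false alarm.
print(fbeta_score(y_true, y_pred, beta=2))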

+",27851,,,,,11/28/2019 16:28,,,,2,,,,CC BY-SA 4.0 +16844,2,,13725,11/28/2019 16:56,,1,,"

Let me add an example from machine learning that shows that resorting to randomness is the optimal way, sometimes.

+ +

When working on the whole dataset is not tractable (computational cost, data does not fit in memory), working on random samples can be an optimal way to train a machine learning algorithm. One of the most used optimization techniques in those cases is Stochastic Gradient Descent.

+ +

It is an iterative procedure that computes estimates of the true gradient of the loss function that needs to be minimized, using a randomly selected data point from the whole dataset. After getting the gradient estimate, the weights are updated; all of this is done by the well-known back-propagation algorithm. This procedure is repeated many times until a stopping criterion is met.

+ +

The rule is:

+ +

$ \theta_{k+1} \leftarrow{} \theta_{k} - \eta_k \nabla (f_{i(k)}(x_k))$

+ +

where $\theta$ are the weights of the network, $f$ is the loss function whose gradient is computed w.r.t the weights of the network, $x_k$ is the randomly chosen sample to compute gradients for, and $\eta_k$ is the step size to multiply the negative of the gradient with.
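As a small illustration of this update rule (my own sketch, not from the original answer), one possible SGD loop for a linear model with squared-error loss is:

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 3)), rng.normal(size=1000)   # toy dataset
theta, eta = np.zeros(3), 0.01                             # weights and step size

for k in range(5000):
    i = rng.integers(len(X))          # randomly selected data point
    error = X[i] @ theta - y[i]       # prediction error on that single sample
    grad = error * X[i]               # gradient of 0.5 * error**2 w.r.t. theta
    theta = theta - eta * grad        # the update rule above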

+",16708,,16708,,11/28/2019 17:50,11/28/2019 17:50,,,,4,,,,CC BY-SA 4.0 +16845,1,16851,,11/28/2019 20:50,,2,2848,"

There are three things in every constraint satisfaction problem (CSP):

+ +
    +
  1. Variables
  2. +
  3. Domain
  4. +
  5. Constraints
  6. +
+ +

In the given scenario, I know how to identify the constraints, but I don't know how to identify the variables and the domain.

+ +

The given scenario is:

+ +
+

You are given a $n \times n$ board, where $n \geq 3$. On this board, you have to put $k$ knights where $k < n^2$, such that no knight is attacking the other knight. The knights are expected to be placed on different squares on the board. A knight can move two squares vertically and one square horizontally or two squares horizontally and one square vertically. The knights attack each other if one of them can reach the other in a single move. For example, on a $3 \times 3$ board, we can place $k=5$ knights.

+
+ +

So, the input is $n = 3, k = 5$. There are two solutions.

+ +

Solution 1

+ +
K A K   
+A K A    
+K A K
+
+ +

Solution 2

+ +
A K A
+K K K
+A K A
+
+",31420,,2444,,11/28/2019 23:20,11/29/2019 16:51,How can I formulate the k-knights problem as a constraint satisfaction problem?,,1,0,,,,CC BY-SA 4.0 +16846,1,,,11/28/2019 20:50,,1,438,"

Suppose an AI is to play the game Flappy Bird, and the fitness function is how far the bird has traveled before the game ends.

+ +

Would we have multiple neural networks initialized at the beginning with random weights (i.e. each bird has its own network), then determine the neural networks that lasted the longest in the game, and then perform a selection of weights from the ""better"" neural networks followed by mutation? Those would then be used as the new weights of a brand new neural network (i.e. the offspring of two ""better"" neural networks)?

+ +

If that is the case, does that mean there is no backpropagation because there isn't a cost function?

+",31727,,2444,,11/28/2019 21:43,12/1/2019 2:35,How are weights updated in a genetic algorithm with neural network?,,1,0,,,,CC BY-SA 4.0 +16847,2,,16846,11/28/2019 21:35,,1,,"

I found my answer in a different post: How to evolve weights of a neural network in Neuroevolution?. Note that using a genetic algorithm to evolve network weights is a form of neuroevolution. Short answer: my original thoughts were correct.

+",31727,,2444,,12/1/2019 2:35,12/1/2019 2:35,,,,2,,,,CC BY-SA 4.0 +16848,1,16853,,11/28/2019 23:19,,2,192,"

In policy gradient methods such as A3C/PPO, the output from the network is probabilities for each of the actions. At training time, the action to take is sampled from the probability distribution.

+ +

When evaluating the policy in an environment, what would be the effect of always picking the action that has the highest probability instead of sampling from the probability distribution?

+",31728,,,,,11/29/2019 13:02,What is the effect of picking action deterministicly at inference with Policy Gradient Methods?,,1,1,,,,CC BY-SA 4.0 +16850,1,16852,,11/29/2019 1:51,,5,621,"

When using CNNs for non-image (times series) data prediction, what are some constraints or things to look out for as compared to image data?

+

To be more precise, I notice there are different types of layers in a CNN model, as described below, which seem to be particularly designed for image data.

+

A convolutional layer that extracts features from a source image. Convolution helps with blurring, sharpening, edge detection, noise reduction, or other operations that can help the machine to learn specific characteristics of an image.

+

A pooling layer that reduces the image dimensionality without losing important features or patterns.

+

A fully connected layer also known as the dense layer, in which the results of the convolutional layers are fed through one or more neural layers to generate a prediction.

+

Are these operations also applicable to non-image data (for example, times series)?

+",31075,,2444,,12/6/2020 12:58,12/6/2020 13:02,"Can CNNs be applied to non-image data, given that the convolution and pooling operations are mainly applied to imagery?",,2,0,,,,CC BY-SA 4.0 +16851,2,,16845,11/29/2019 3:34,,2,,"

There are many possible ways to encode this problem, and some will be more advantageous than others

+ +

An encoding that seems like a reasonable starting point to me is:

+ +
    +
  1. Variables: Let $S$ be a set of $k$ variables representing the coordinates of knights on the chess board.
  2. +
  3. Domain: The domain of each variable is initially all vectors in $[n]^2$. Denote the components of these vectors with $[r, c]$.
  4. +
  5. Constraints: + +
      +
    • Let A be the set of 8 attacking moves: $A=\{[2,1],[2,-1],[-2,1],[-2,-1],[1,2],[1,-2],[-1,2],[-1,-2]\}$
    • +
    • $\forall_{i < j} S_i \neq S_j$ (no knights on the same squares)
    • +
    • $\forall_{i < j} [r_i, c_i] \notin \{[r_j + x, c_j + y] \mid [x,y] \in A\}$ (no knight attacks another knight)
    • +
  6. +
+ +

That should be it. Note the use of $\forall_{i<j}$ in the constraints is to reduce the total number of constraints by half, exploiting the symmetry that knight $i$ can attack knight $j$ iff knight $j$ can attack knight $i$. You could also use $\forall_{i\neq j}$, but it would increase your constraint count to no gain.
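A brute-force sketch of these constraints in Python (my own addition; a real CSP solver would propagate the constraints instead of enumerating placements):

from itertools import combinations, product

A = [(2, 1), (2, -1), (-2, 1), (-2, -1), (1, 2), (1, -2), (-1, 2), (-1, -2)]

def valid(placement):
    # placement is a collection of distinct (row, col) squares, one per knight
    for (r1, c1), (r2, c2) in combinations(placement, 2):
        if (r1 - r2, c1 - c2) in A:   # one knight attacks the other
            return False
    return True

# Example from the question: place k = 5 non-attacking knights on a 3x3 board.
n, k = 3, 5
squares = list(product(range(n), repeat=2))
solutions = [p for p in combinations(squares, k) if valid(p)]
print(len(solutions))   # prints 2, the two solutions shown in the question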

+",16909,,16909,,11/29/2019 16:51,11/29/2019 16:51,,,,0,,,,CC BY-SA 4.0 +16852,2,,16850,11/29/2019 7:53,,3,,"

Usually, you need to ensure that your convolutions are causal, meaning that there is no information leakage from the future into the past. You could start by looking at this paper, which compares Temporal Convolutional Networks (TCN) with vanilla RNNs models.
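For example, in Keras a causal (and optionally dilated) 1-D convolution can be requested directly through the padding argument; a minimal sketch with arbitrary layer sizes:

import tensorflow as tf

# Tiny TCN-style stack: 'causal' padding guarantees that the output at time t
# only depends on inputs at times <= t (no leakage from the future).
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, padding='causal', dilation_rate=1,
                           activation='relu', input_shape=(None, 1)),
    tf.keras.layers.Conv1D(32, 3, padding='causal', dilation_rate=2,
                           activation='relu'),
    tf.keras.layers.Conv1D(1, 1),    # one prediction per time step
])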

+",20430,,2444,,12/6/2020 13:02,12/6/2020 13:02,,,,0,,,,CC BY-SA 4.0 +16853,2,,16848,11/29/2019 13:02,,1,,"
+

When evaluating the policy in an environment, what would be the effect of always picking the action that has the highest probability instead of sampling from the probability distribution?

+
+ +

Depends what you mean by ""evaluating the policy"". Unlike in value-based methods, such as Q learning, the policy in gradient methods is not implied by anything else, it is described directly by the probability density function that is being optimised.

+ +

Taking the maximum probability item will technically change the policy (unless you are already using deterministic policy gradient), and you would be evaluating a different but related policy to that found by your policy gradient.

+ +

However, in a standard MDP environment, and after at least some training, this should be a reasonable process that would give some indication of how well the agent is performing.

+ +

In some cases, the nature of the environment means the agent is relying on a stochastic policy. In some partially-observable scenarios it may be better to decide randomly - a simple example is a corridor that needs to be traversed, but where the state features don't give enough information to determine the true direction. A deterministic policy will not be able to traverse the corridor in both directions, but a stochastic policy will get through it both ways, eventually. Another example is in adversarial situations where another agent can learn your agent's policy (the classic version of that being Scissor/Paper/Stone where two ideal opposed agents would learn probability $\frac{1}{3}$ for each action according to Nash equilibrium)

+ +

If you don't think you have these special cases, then it should be OK to derive a deterministic policy from your policy gradient agent, and assess that. That's not quite the same as assessing the ""learned policy"", but is quite sensible to do once you think the agent has converged anyway, since it may still be selecting non-optimal actions at some low probability, and you could get closer to optimal behaviour by removing that.
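To make the difference concrete, here is a minimal sketch (my own addition) of the two ways of acting, given the action probabilities output by the policy network:

import numpy as np

action_probs = np.array([0.7, 0.2, 0.1])   # toy output of the policy network

# Stochastic policy (as learned): sample an action from the distribution.
sampled_action = np.random.choice(len(action_probs), p=action_probs)

# Derived deterministic policy: always take the most probable action.
greedy_action = int(np.argmax(action_probs))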

+",1847,,,,,11/29/2019 13:02,,,,0,,,,CC BY-SA 4.0 +16854,1,,,11/29/2019 18:19,,0,620,"

I want to detect moving objects in a surveillance video without using machine learning tools (like neural networks). +Is there a simple way in the OpenCV library? What is an efficient solution for this purpose?

+",30170,,30170,,12/1/2019 18:46,12/2/2019 5:06,How can I detect moving objects in a video by OpenCV without using deep learning techniques?,,1,0,,,,CC BY-SA 4.0 +16855,1,,,11/29/2019 21:05,,2,307,"

Today I was going through a tutorial by Andrew Ng about the Inception network. He said that GoogLeNet's auxiliary classifiers attached to hidden layers are also good at prediction and that they somehow have a regularization effect, so they reduce overfitting. I also searched this topic and tried to figure it out by reading the GoogLeNet paper, but I am not satisfied.

+ +

Can anyone give me any mathematical intuition or reasoning about this in detail?

+",26342,,2444,,11/29/2019 21:35,12/31/2019 7:01,How GoogleNet actually deal with reducing overfitting?,,1,3,,,,CC BY-SA 4.0 +16856,2,,16850,11/29/2019 21:20,,3,,"

You can use CNNs for time-series data. The Convolutional Recurrent Neural Network (CRNN) is one example.

+

Convolutional layers basically extract features from images; by themselves, they are not tied to time-series processing.

+

Some CNNs (such as ResNet, Highway Networks, and DenseNet) use some recurrent-like concepts to improve their predictions, but they all operate within single-datapoint reasoning. You can go through these concepts to improve your intuition.

+",26342,,2444,,12/6/2020 13:01,12/6/2020 13:01,,,,1,,,,CC BY-SA 4.0 +16857,1,16870,,11/29/2019 23:33,,5,167,"

Lee Sedol, former world champion, and legendary Go player today announced his retirement with the quote ""Even if I become the No. 1, there is an entity that cannot be defeated"".

+ +

Is it possible that AIs could kill the joy of competitive games (Go, chess, Dota 2, etc.) or (thinking more futuristically, with humanoid AIs) of sports?

+ +

What happens if AIs gets better than us at painting and making music. Will we still appreciate it in the same way we do now?

+",31714,,31714,,11/30/2019 12:03,11/30/2019 23:36,Could AI kill the joy of competitive sports and games?,,1,2,,,,CC BY-SA 4.0 +16858,2,,16803,11/30/2019 0:05,,1,,"

The telltale signature of overfitting is when your validation loss starts increasing, while your training loss continues decreasing, i.e.:

+ +

+ +

(Image adapted from Wikipedia entry on overfitting)

+ +

It is clear that this does not happen in your diagram, hence your model does not overfit.

+ +

A difference between a training and a validation score by itself does not signify overfitting. This is just the generalization gap, i.e. the expected gap in the performance between the training and validation sets; quoting from a recent blog post by Google AI:

+ +
+

An important concept for understanding generalization is the generalization gap, i.e., the difference between a model’s performance on training data and its performance on unseen data drawn from the same distribution.

+
+ +

An MSE of 1.0 (or any other specific value, for that matter) by itself cannot be considered ""good"" or ""bad""; everything depends on the context, i.e. of the particular problem and the actual magnitude of your dependent variable: if you are trying to predict something that is in the order of some thousands (or even hundreds), an MSE of 1.0 does not sound bad; it's not the same if your dependent variable takes values, say, in [0, 1] or similar.

+",11539,,,,,11/30/2019 0:05,,,,1,,,,CC BY-SA 4.0 +16859,2,,15885,11/30/2019 0:41,,1,,"

I did an experiment: I took a trained DenseNet-121 and kept the bottom layers. I trained the FC layer into a softmax and then a lambda layer that normalized the vector. I trained the network on ImageNet to push the outputs as far away from (1,1,1,1,1...1) as possible, so I would get one-hot vectors. I did, but the network collapsed to a single category (it put everything in the same one-hot vector). Then I added a penalty that encourages vectors to be different if their features are also different. It improved a little - a dozen categories instead of one, but nothing close to a thousand, the available number.

+ +

I am posting this so nobody wastes their time on a silly idea like this one.

+",30433,,30433,,11/30/2019 0:50,11/30/2019 0:50,,,,0,,,,CC BY-SA 4.0 +16860,1,,,11/30/2019 4:44,,3,540,"

Google says that their new AI program MuZero learnt the rules of chess and some other board games without being told so. How is this even possible?

+ +

https://towardsdatascience.com/deepmind-unveils-muzero-a-new-agent-that-mastered-chess-shogi-atari-and-go-without-knowing-the-d755dc80ff08

+",17601,,,,,1/28/2020 7:23,How did MuZero learn the rules of chess?,,0,2,,,,CC BY-SA 4.0 +16862,2,,16855,11/30/2019 9:05,,1,,"

The main reason for overfitting in any neural network is having too many unrestricted trainable degrees of freedom in the model. Methods similar to dropout reduce the number of neurons at each training run, which effectively means having a smaller network. On the other hand, in $l_1$ and $l_2$ regularization, a term is added to the loss function which puts a constraint on the total loss calculated at each run. So what we are trying to minimize with such regularizations is not just $L$, but $L + l_1*f(w)$ (for example).

+ +

What I understand from that paper is that the auxiliary outputs do both at the same time: the point is to build smaller versions of the same Inception network within itself, and use the losses obtained from those as a constraint on the final loss function. The full model described in the paper is essentially 3 separate models: the first one is a network with 3 Inception modules, the second one with 6, and the final one has 9. When the loss is calculated, the results from the auxiliary outputs are added to the total loss with a 0.3 weight. Let us write this as follows:

+ +

$L = 0.7*L_9+ 0.3*(L_6+L_3). $

+ +

Here $L_3$ and $L_6$ represent the losses calculated at first and second outputs, and $L_9$ is the loss calculated at the final output of the network.

+ +

This is the function we wish to minimize during training. But when the evaluation is made, the auxiliary outputs are discarded, and just the final softmax layer is used. Not very dissimilar from the idea of using dropout during training but using the full model for predictions.
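In code, this weighting is usually expressed directly when compiling a multi-output model. A hedged Keras-style sketch (a toy stand-in with one main head and two auxiliary heads, not the actual GoogLeNet architecture):

import tensorflow as tf
from tensorflow.keras import layers

# Toy stand-in: one shared trunk with a main classifier and two auxiliary classifiers.
inp = layers.Input(shape=(32,))
h1 = layers.Dense(64, activation='relu')(inp)
aux1 = layers.Dense(10, activation='softmax', name='aux1')(h1)
h2 = layers.Dense(64, activation='relu')(h1)
aux2 = layers.Dense(10, activation='softmax', name='aux2')(h2)
main = layers.Dense(10, activation='softmax', name='main')(h2)

model = tf.keras.Model(inp, [main, aux1, aux2])
model.compile(
    optimizer='sgd',
    loss='categorical_crossentropy',
    # Matches L = 0.7*L_9 + 0.3*(L_6 + L_3) above; at evaluation time only the
    # 'main' output is used and the auxiliary heads are discarded.
    loss_weights={'main': 0.7, 'aux1': 0.3, 'aux2': 0.3},
)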

+",22301,,22301,,12/1/2019 6:03,12/1/2019 6:03,,,,1,,,,CC BY-SA 4.0 +16863,1,,,11/30/2019 9:39,,1,148,"

I'm checking out how to manually apply resolution on a first order predicate logic knowledge base and I'm confused about what is allowed or not in the algorithm.

+ +

Let's say that we have the following two clauses (where $A$ and $B$ are constants):

+ +

$\neg P(A, B) \vee H(A)$

+ +

$\neg L(x_1) \vee P(x_1, y_1)$

+ +

If I try to unify these two clauses by making the substitutions $\{x_1/A, y_1/B\}$ do I get $\neg L(A) \vee H(A)$ ? Is it allowed to substitute $y_1$ by $B$ even if $B$ doesn't appear in the unified clause?

+ +

Then we have the other way around where:

+ +

$\neg P(A, y_1) \vee H(A)$

+ +

$\neg L(x_1) \vee P(x_1, B)$

+ +

Can I do $\{x_1/A, B/y_1\}$ for $\neg L(A) \vee H(A)$ ?

+ +

What about the case where:

+ +

$\neg P(A, z_1) \vee H(A)$

+ +

$\neg L(x_1) \vee P(x_1, y_1)$

+ +

Can I substitute $\{x_1/A, y_1/z_1\}$ and get $\neg L(A) \vee H(A)$ ?

+ +

Finally there is also the cases where we have something like this:

+ +

$\neg P(x_2, y_2) \vee H(z_1)$

+ +

$\neg L(x_1) \vee P(x_1, y_1)$

+ +

Can we do $\{x_1/x_2, y_1/y_2\}$ to get $\neg L(x_3) \vee H(z_2)$ ?

+ +

I'm really confused about when unification succeeds once we have two clauses with a literal of the same kind (negated in one of them and not in the other) that are candidates for unification.

+",31193,,,,,5/15/2023 13:09,Does the substituted variable/constant have to appear in the unified term?,,1,0,,,,CC BY-SA 4.0 +16864,2,,111,11/30/2019 12:01,,1,,"

I think we need to state our own morals before thinking about what the car's morals (or ethics setting) should be. I recommend reading this paper, Autonomous Cars: In Favor of a Mandatory Ethics Setting, which argues why it is in everyone's best interest that we prioritize the safety of the majority, and not just the driver (yes, in the best interest of the driver too).

+

You can test your own morals in many different situations, some like your examples, on MIT's Moral Machine. It's rather uncomfortable but very interesting. You can also find some analysis of people's answers on their website.

+

My answers to your examples:

+
+

The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers)

+
+

I assume hitting the brakes is not going to work, or else the dilemma is pointless. I think the car should hit the wall. Pedestrians should not suffer just because someone else is driving a car, especially when there are 10 pedestrians and at most 5 (typically 1 or 2) people in the car.

+
+

Avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car

+
+

I think this one is harder, especially since the motorcycle probably is not autonomous, and (in contrast to the pedestrians in the previous example) riding motorcycles is quite dangerous. Is the motorcyclist accepting the risk when entering the road? If avoiding the motorcyclist means probable death for the driver, then no. If not, the car should probably avoid the motorcyclist.

+
+

Killing an animal on the street in favor of a human being

+

Purposely changing lanes to crash into another car to avoid killing a dog

+
+

I think humans are more important than animals.

+

I don't think there exists a correct answer for this. One of the really interesting things from the data collected by the Moral Machine is that there are big differences based on where in the world you're from. Typically, Western countries prioritize saving children over the elderly, while this is not the case for the whole world. Countries with strong governments, like Finland and Japan, prioritize people abiding by the law, while people from countries with weaker or corrupt governments do not care so much about that. Even in this comment section, you can find differences! I, for example, think the pedestrians should be spared in the first example, while Doxosophoi thinks that it is obvious that the passengers should be protected!

+",31714,,-1,,6/17/2020 9:57,11/30/2019 12:01,,,,0,,,,CC BY-SA 4.0 +16865,1,,,11/30/2019 15:37,,1,44,"

Suppose I trained a Gaussian process classifier with a linear kernel (using GPML toolbox) and got some feature weights for each input feature.

+ +

My question is then:

+ +

Does it make sense (and if so, when) to interpret the weights as indicating the real-life importance of each feature, or to interpret, at the group level, the average of the weights over a group of features?

+",31752,,31752,,12/1/2019 11:17,12/1/2019 11:17,Interpretability of feature weights from Gaussian process classifier,,0,4,,,,CC BY-SA 4.0 +16868,1,,,11/30/2019 20:50,,4,68,"

I have a neural network that takes as an input a vector of $x \in R^{1\times d}$ with $s$ hidden layers and each layer has $d$ neurons (including the output layer).

+ +

If I understand correctly the computational complexity of the forward pass of a single input vector would be $O(d^{2}(s-1))$, where $d^{2}$ is the computational complexity for the multiplication of the output of each layer and the weight matrix, and this happens $(s-1)$ times, given that the neural network has $s$ layers. We can ignore the activation function because the cost is $O(d)$.

+ +

So, if I am correct so far, and the computational complexity of the forward pass is $O(d^{2}(s-1))$, is the following correct?

+ +

$$O(d^{2}(s-1)) = O(d^{2}s - d^{2}) = O(d^{2}s)$$

+ +

Would the computational complexity of the forward pass for this NN be $O(d^{2}s)$?

+",31757,,2444,,12/1/2019 2:11,12/1/2019 2:11,"Given an input $x \in R^{1\times d}$ and a network with $s$ hidden layers, is the time complexity of the forward pass $O(d^{2}s)$?",,0,1,,12/2/2019 21:02,,CC BY-SA 4.0 +16869,1,,,11/30/2019 21:38,,2,705,"

I am trying to implement the original YOLO architecture for object detection, but I am using the COCO dataset. However, I am a bit confused about the image sizes of COCO. The original YOLO was trained on the VOC dataset and it is designed to take 448x448 size images. Since I am using COCO, I thought of cropping down the images to that size. But that would mean I would have to change the annotations file and it might make the process of object detection a bit harder because some of the objects might not be visible.
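
To make the annotation problem concrete, this is roughly the kind of adjustment I mean if the images are simply rescaled to 448x448 instead of cropped (a sketch; the function and variable names are just placeholders, not from the YOLO paper):

# Sketch: resize a COCO image to 448x448 and rescale its (x, y, w, h) boxes accordingly.
from PIL import Image

def resize_with_boxes(image_path, boxes, target=448):
    img = Image.open(image_path)
    w0, h0 = img.size
    img = img.resize((target, target))
    sx, sy = target / w0, target / h0
    scaled = [(x * sx, y * sy, w * sx, h * sy) for (x, y, w, h) in boxes]
    return img, scaled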

+ +

I am pretty new to this, so I am not sure if this is the right way or what are some other things that I can do. Any help will be appreciated.

+",31760,,2444,,1/29/2021 0:05,1/29/2021 0:05,How can I train YOLO with the COCO dataset?,,0,1,,,,CC BY-SA 4.0 +16870,2,,16857,11/30/2019 23:36,,7,,"

Unlikely!

+ +

Chess has been ""solved"" by AI for much longer than Go (chess engines, even before modern AI, are way too strong for human players), and still people are playing and competing.

+ +

Simply put, competition and sports thrive on the human element. Humans competing against each other will still create the same joy for most people, regardless of the fact that all involved players might lose against a computer.

+ +

Some select individuals on the highest level might be put off by the new reality but it won't be the end of competition.

+ +

No human is faster than a car and yet we still celebrate running competitions.

+ +

Indeed, I think that in the long term we will gain entertainment from watching different AIs and models compete against each other in chess or Go.

+",27665,,,,,11/30/2019 23:36,,,,0,,,,CC BY-SA 4.0 +16872,2,,9106,12/1/2019 0:57,,0,,"

This can be calculated quite easily in the context of the 8-queens problem. Just start with a particular configuration. Starting from the queen in the left-most column, keep counting the non-attacking positions (pairs) to the right for each queen. Carry on column by column towards the right until you reach the last queen. As a special case, for the last queen the number of non-attacking pairs will be zero, as there are no other queens after it.
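
For illustration, a small implementation of this counting procedure (not required for the answer, just to make it concrete):

# Count non-attacking pairs of queens, scanning column by column to the right.
def non_attacking_pairs(queens):
    """queens[c] = row of the queen in column c (one queen per column)."""
    n = len(queens)
    pairs = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):          # only look at queens to the right
            same_row = queens[c1] == queens[c2]
            same_diag = abs(queens[c1] - queens[c2]) == (c2 - c1)
            if not same_row and not same_diag:
                pairs += 1
    return pairs

# Example: a perfect 8-queens solution has 8*7/2 = 28 non-attacking pairs.
print(non_attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))   # -> 28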

+",31765,,,,,12/1/2019 0:57,,,,1,,,,CC BY-SA 4.0 +16873,1,,,12/1/2019 4:16,,2,29,"

I would like to map the simplest 8x8 matrices, one to one, but I am not sure which AI algorithm would give the best performance. I am thinking about using DeepLearning4j; however, I don't know which neural architecture to use.

+ +

I would like to make a simple and a very ""stupid"" bot for playing the chess. The result I am hoping to obtain is a system which can learn the chess rules rather than make intelligent moves (although it would be great if it could do that as well). I understand that chess bots are nothing new, however, I am not interested in making a chess bot but a system that can simply learn by giving nothing else than an 8X8 matrix as an input and 8X8 matrix as an output. Chess is irrelevant, and it can be any other game that can be represented with values within an 8X8 matrix.

+ +

I am aware of the newest image mappers that transform horses into zebras, but I need something more precise, with one-to-one learning.

+ +

The amount of data I can get for learning is also not an issue (I can generate as much as I want).

+",31766,,,,,12/1/2019 4:16,Which algorithm and architecture to use for 1:1 matrix transformation of an 8X8 dimension?,,0,0,,,,CC BY-SA 4.0 +16874,1,,,12/1/2019 6:29,,4,195,"

For an AI to represent the world, it would be good if it could translate human sentences into something more precise.

+

We know, for example, that mathematics can be built up from set theory. So representing statements in the language of set theory might be useful.

+

For example

+
+

All grass is green

+
+

is something like:

+
+

$\forall x \in grass: isGreen(x)$

+
+

But then I learned that set theory is built up from something more basic, and that theorem provers use a special form of higher-order logic with types. Then there is propositional logic.

+

Basically, what the AI would need is some way of representing statements, some axioms, and ways to manipulate the statements.

+

Thus what would be a good language to use as an internal language for an AI?

+",4199,,2444,,10/27/2021 21:32,10/27/2021 21:32,What would be a good internal language for an AI?,,3,2,,,,CC BY-SA 4.0 +16875,2,,15761,12/1/2019 8:04,,1,,"

As you point out, they are not equivalent. I guess you could store the time index for each state visited, but there are two problems with this.

+ +

First, if you sample states according to their time index, sampling from the replay memory will become more cumbersome and probably much slower (you'd have to sample the time index and then a specific state with that time index). This is definitely undesirable. If you choose to add the importance sampling term, then you will make most of the terms really small if $\gamma$ is not very close to 1.

+ +

Second, while the original objective is very nice to obtain theoretical results, you may ask yourself if that objective is really what you want to maximize. Do you really care more about the performance at the beginning of the trajectory than at the end?

+ +

While I have no rigorous proofs, I like to think that a better definition for the expected discounted reward is a time average:

+ +

$$\eta(\pi) = \lim_{N\to\infty}\frac{1}{N}\sum_{k=1}^N\mathbb E_{s_k,a_k,s_{k+1},\cdots}\left[\sum_{t=k}^\infty\gamma^{t-k}r(s_t)\right].$$

+ +

Since each term of the average satisfies the original proofs, the only difference with the equation 14 is that the density is $\tilde\rho_{\theta_{old}}(s) = \lim_{N\to\infty}\frac{1}{N}\sum_{k=1}^N\rho_{k,\theta_{old}}(s)$, where $$\rho_{k,\theta_{old}} = P(s_k = s)+\gamma P(s_{k+1}=s) + \gamma^2 P(s_{k+2}=s) + \dots.$$ +You can notice that for a large $N$ you basically give the same importance to all of the time-steps. So, the usual implementations actually optimize some complicated function similar to the one I defined, where you give a similar weight to all samples, irrespective of their corresponding time-step.

+",30983,,,,,12/1/2019 8:04,,,,0,,,,CC BY-SA 4.0 +16876,1,,,12/1/2019 8:51,,2,41,"

I remember reading an online news article a while ago on AI ethics that described a conference presentation on the subject of a military AI system that predicted terrorist movement inside buildings and used drones to shoot them when they exited the building. The presentation caused protests and sharp criticism in the audience due to the dehumanizing nature of the AI usage presented.

+ +

The article was in mainstream media, but I am unable to find it with Google search.

+ +

Which AI conference presentation was it and are any articles on it available?

+",31769,,31769,,12/1/2019 18:39,12/1/2019 18:39,Which AI conference presentation on predicting terrorist movement inside buildings caused protest in the audience and media?,,0,0,,,,CC BY-SA 4.0 +16879,1,,,12/1/2019 11:49,,2,397,"

I know that stochastic gradient descent always gives different results. What are the best practices to reduce this variance today? I tried to predict a simple function with two different approaches, and every time I train them I see very different results.

+ +

Input data:

+ +
import tensorflow as tf             # imports assumed; not shown in the original post
import matplotlib.pyplot as plt

def plot(model_out):
+  fig, ax = plt.subplots()
+  ax.grid(True, which='both')
+  ax.axhline(y=0, color='k', linewidth=1)
+  ax.axvline(x=0, color='k', linewidth=1)
+
+  ax.plot(x_line, y_line, c='g', linewidth=1)
+  ax.scatter(inputs, targets, c='b', s=8)
+  ax.scatter(inputs, model_out, c='r', s=8)
+
+a = 5.0; b = 3.0; x_left, x_right = -16., 16.
+NUM_EXAMPLES = 200
+noise   = tf.random.normal((NUM_EXAMPLES,1))
+
+inputs  = tf.random.uniform((NUM_EXAMPLES,1), x_left, x_right)
+targets = a * tf.sin(inputs) + b + noise
+x_line  = tf.linspace(x_left, x_right, 500)
+y_line  = a * tf.sin(x_line) + b
+
+ +

Keras training:

+ +
model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(50, activation='relu', input_shape=(1,)))
+model.add(tf.keras.layers.Dense(50, activation='relu'))
+model.add(tf.keras.layers.Dense(1))
+
+model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.01))
+model.fit(inputs, targets, batch_size=200, epochs=2000, verbose=0)
+
+print(model.evaluate(inputs, targets, verbose=0))
+plot(model.predict(inputs))
+
+ +

+ +

Manual training:

+ +
model = tf.keras.Sequential()
+model.add(tf.keras.layers.Dense(50, activation='relu', input_shape=(1,)))
+model.add(tf.keras.layers.Dense(50, activation='relu'))
+model.add(tf.keras.layers.Dense(1))
+
+optimizer = tf.keras.optimizers.Adam(0.01)
+
+@tf.function
+def train_step(inpt, targ):
+  with tf.GradientTape() as g:
+    model_out = model(inpt)
+    model_loss = tf.reduce_mean(tf.square(tf.math.subtract(targ, model_out)))
+
+  gradients = g.gradient(model_loss, model.trainable_variables)
+  optimizer.apply_gradients(zip(gradients, model.trainable_variables))
+  return model_loss
+
+train_ds = tf.data.Dataset.from_tensor_slices((inputs, targets))
+train_ds = train_ds.repeat(2000).batch(200)
+
+def train(train_ds):
+  for inpt, targ in train_ds:
+    model_loss = train_step(inpt, targ)
+  tf.print(model_loss)
+
+train(train_ds)
+plot(tf.squeeze(model(inputs)))
+
+ +

+",31772,,,,,12/1/2019 21:59,How to reduce variance of the model loss during training?,,1,0,,,,CC BY-SA 4.0 +16881,1,,,12/1/2019 12:20,,1,93,"

The problem in the Iris data is to classify three species of iris (setosa, versicolor and virginica) by four-dimensional attribute vectors consisting of

+ +
    +
  • sepal length (x1)
  • +
  • sepal width (x2)
  • +
  • petal length (x3)
  • +
  • petal width (x4)
  • +
+ +

Every attribute of the fuzzy classifier is assigned three linguistic terms (fuzzy sets): short, medium and long. With normalized attribute values, the membership functions of these fuzzy sets for all four attributes are depicted in the figure below:

+ +

+ +

Now, consider the situation that a set of Rules are given - some examples are as below:

+ +
    +
  • R1: If (x3=short OR medium) AND (x4=short) Then iris setosa
  • +
  • R2: If (x2=short OR medium) AND (x3=long) AND (x4=long) Then iris virginica
  • +
+ +

Now, I want to make a fuzzy classifier on the Iris dataset. The problem is that I need to use a membership function for the consequents in the rules (i.e., the 3 classes) so as to be able to compute the aggregation of rules and defuzzification.

+ +
    +
  • What is the proper pre-defined domain and membership function for three classes?
  • +
• Do all the consequents (outputs) in a fuzzy classifier need to have membership functions?
  • +
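
For concreteness, a minimal sketch of the kind of rule evaluation I have in mind (plain NumPy; the membership-function breakpoints are placeholders, not the exact ones from the figure):

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def short(x):  return tri(x, -0.5, 0.0, 0.5)
def medium(x): return tri(x, 0.0, 0.5, 1.0)
def long(x):   return tri(x, 0.5, 1.0, 1.5)

# Rule R1: IF (x3 = short OR medium) AND (x4 = short) THEN iris setosa
def r1_strength(x3, x4):
    return min(max(short(x3), medium(x3)), short(x4))   # OR -> max, AND -> min

# The open question is what membership function to attach to the class
# "setosa" itself, so that aggregation and defuzzification can be computed.
print(r1_strength(0.2, 0.1))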
+",31773,,,,,12/1/2019 12:20,The membership function of Consequents (Outputs) in Fuzzy classifier,,0,0,,,,CC BY-SA 4.0 +16882,2,,16874,12/1/2019 13:16,,2,,"

This is (even though it doesn't look like it at first glance) a deeply philosophical question about the nature of 'meaning'. This answer is necessarily limited in scope.

+ +

There are many ways of representing knowledge, and countless formalisms have been developed since the early days of AI. Many of them are based on some kind of predicate calculus, ontologies, semantic networks (providing eg inheritance of features and part-of relationships), and they seem to work fine for limited domains.

+ +

One problem is the grounding: if you have a predicate isGreen(x), what does that actually mean? How is it related to isBlue(x)? Do you want to treat them similarly? If so, you need to represent this somehow. You quickly come to the point where you will need to encode all the world's knowledge in some generalised way. An impossible task.

+ +

Linguists have struggled with this for decades: what is the meaning of a particular sentence? Apart from the fact that every individual human will interpret a given sentence differently (based on their own life experience and culture), there are many aspects to 'meaning' that need representing: the 'factual' meaning, but also pragmatic, evaluative, and all sorts of other nuances. An innocent utterance, That's a nice Apple you've got there, could have a whole raft of meanings packed into it, all implicit. For example, the person probably likes apples, that one in particular, that apple looks like a tasty piece of fruit, the other person is the owner of the apple, and it might also be a request which is intended to prompt the other person to offer it to you. How are you going to represent all that meaning?

+ +

One area that interests me personally is representing narrative events. This can — up to a point — be done using Conceptual Dependency, which uses a limited set of semantic primitives. While useful to encode basic stories, you cannot easily use it to represent the fact that grass is green.

+ +

So the answer is: there is no answer. AI is too broad a field, and you need to look at a particular application to decide which knowledge is relevant to it, and then how it can best be represented. There is a reason why there are so many ways of representing knowledge.

+ +

PS: You suggest this would be more precise. My personal view is that precision here is a red herring. The word green is not precise, as it covers a range of wave lengths, and different people would disagree on whether something is green or not. So a predicate isGreen(x) is not any more precise than that. Hence the appeal of fuzzy logic, which allows computation to be based on less precision.

+",2193,,,,,12/1/2019 13:16,,,,3,,,,CC BY-SA 4.0 +16883,1,16884,,12/1/2019 15:31,,4,470,"

Suppose you want to predict the price of some stock. Let's say you use the following features.

+ +
OpenPrice  
+HighPrice
+LowPrice
+ClosePrice
+
+ +

Is it useful to create new features like the following ones?

+ +
BodySize = ClosePrice - OpenPrice  
+
+ +

or the size of the tail

+ +
TailUp = HighPrice - Max(OpenPrice, ClosePrice)  
+
+ +

Or do we not need to do that, because we would just be adding noise and the neural network is going to compute those values internally anyway?

+ +

The case of the body size is maybe a bit different from the tail, because for the tail we need to use a non-linear function (the max operation). So maybe it is important to add the input when it is a non-linear combination of the other inputs, but not when it is linear?

+ +

Another example. Consider a box, with height $X$, width $Y$ and length $Z$.
+And suppose the really important input is the volume: will the neural network discover that the correlation is $X * Y * Z$? Or do we need to put the volume as an input too?

+ +

Sorry if it's a dumb question, but I'm trying to understand what the neural network is doing internally with the inputs: is it finding (somehow) all the mathematically possible relations between the inputs, or do we need to specify the relations between the inputs that we consider important (heuristically) for the problem to solve?

+",31776,,2444,,12/2/2019 17:03,12/2/2019 17:03,Does the neural network calculate different relations between inputs automatically?,,2,0,,,,CC BY-SA 4.0 +16884,2,,16883,12/1/2019 16:47,,3,,"

On paper, one expects a complex enough network to determine any complicated function of a limited number of inputs, given a large enough dataset. But in practice, there is no limit to the possible difficulty of the function to be learnt, and the datasets can be relatively small on occasion. In such cases - or arguably in general - it is definitely a good idea to define some combination of the inputs depending on some heuristics as you suggested. If you think some combination of inputs is an important variable by itself, you definitely should include it in your inputs.

+ +

We can visualize this situation in TensorFlow playground. Consider the circular pattern dataset on top left corner with some noise. You can use the default setting: $x_1$ and $x_2$ as inputs with 2 hidden layers with 4 and 2 neurons respectively. It should learn the pattern in less than 100 epochs. But if you reduce the number of neurons in the second layer to 2, it is not going to get as good as before. So, you are making the model more complicated to get the correct answer.

+ +

You can experiment and see that one needs at least one 3 neuron layer to get the correct classification from just $x_1$ and $x_2$. Now, if we examine the dataset, we see the circles so we know that instead of $x_1$ and $x_2$, we can try $x_1^2$ and $x_2^2$. This will learn perfectly without any hidden layers as the function is linear in these parameters. The lesson to be learnt here is that, our prior knowledge of the circle ($x_1^2 + x_2^2 = r^2$) and familiarity with the data helped us in getting a good result with a simpler model (smaller number of neurons), by using derived inputs.

+ +

Take the spiral data at the lower right corner for a more challenging problem. For this one, if you do not use any derived features, it is not likely to give you the correct result, even with several hidden layers. Keep in mind that every extra neuron is a potential source of overfitting, on top of being a computational burden.

+ +

Of course the problem here is overly simplified but I expect the situation to be more or less the same for any complicated problem. In practice, we do not have infinite datasets or infinite compute times and the model complexity is always a restriction, so if you have any reason to think some relation between your inputs is relevant for your final result, you definitely should include it by hand at the beginning.

+",22301,,22301,,12/1/2019 17:06,12/1/2019 17:06,,,,0,,,,CC BY-SA 4.0 +16887,2,,4456,12/1/2019 21:24,,8,,"

Although there are several good answers, I want to add this paragraph from Reinforcement Learning: An Introduction, page 303, for a more psychological view on the difference.

+ +
+

The distinction between model-free and model-based reinforcement learning algorithms + corresponds to the distinction psychologists make between habitual and goal-directed + control of learned behavioral patterns. Habits are behavior patterns triggered by appropriate + stimuli and then performed more-or-less automatically. Goal-directed behavior, + according to how psychologists use the phrase, is purposeful in the sense that it is controlled + by knowledge of the value of goals and the relationship between actions and their + consequences. Habits are sometimes said to be controlled by antecedent stimuli, whereas + goal-directed behavior is said to be controlled by its consequences (Dickinson, 1980, + 1985). Goal-directed control has the advantage that it can rapidly change an animal’s + behavior when the environment changes its way of reacting to the animal’s actions. While + habitual behavior responds quickly to input from an accustomed environment, it is unable + to quickly adjust to changes in the environment.

+
+ +

It keeps going from there, and has a nice example afterwards.

+ +

I think the main point that was not always explained in the other answers, is that in a model-free approach you still need some kind of environment to tell you what is the reward associated with your action. The big difference is that you do NOT need to store any information about the model. You give the environment your chosen action, you update your estimated policy, and you forget about it. On the other hand, in model-based approaches, you either need to know the state transitions history as in Dynamic Programming, or you need to be able to calculate all possible next states and associated rewards, from the present state.

+",24054,,24054,,12/1/2019 21:33,12/1/2019 21:33,,,,0,,,,CC BY-SA 4.0 +16888,2,,16879,12/1/2019 21:59,,1,,"

There are a few things you can play with:

+ +
    +
  • Try reducing the learning rate, or increasing decay.

  • +
• Try using regularization (L1/L2 or dropout)

  • +
• Try using momentum (your model may be stuck in a local minimum)

  • +
• Adjust other hyperparameters (nodes, layers, batch size, etc.)

  • +
+ +

Unless you have some knowledge about the specific cause of high loss variance, the above steps in some amount should get you where you need to go.
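
For example, in Keras the first three points could look roughly like this (a sketch with placeholder values, not a tuned recipe):

import tensorflow as tf

tf.random.set_seed(0)                              # fix seeds so runs are comparable

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='relu', input_shape=(1,),
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2
    tf.keras.layers.Dropout(0.1),                  # dropout regularization
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(1),
])

opt = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)   # lower LR + momentum
model.compile(loss='mse', optimizer=opt)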

+",9608,,,,,12/1/2019 21:59,,,,1,,,,CC BY-SA 4.0 +16889,1,,,12/2/2019 0:42,,2,53,"

I've had a big interest in machine learning for a while, and I've followed along with a few tutorials, but I have never made my own project. After losing many games of Connect 4 to my friends, I decided to try to make a replica of the game, then create a neural network and AI to play against me (or at least something where I can enter the current board scenario, and it will output which row is the best move). This may be an ambitious first project, but I'm willing to put in the work and research to create something I'm proud of. I created the game using p5.js, and though it may be simple, I'm really happy with how it turned out, as it's one of my first more interesting and unique projects in computer science. Now, I don't know a ton about ML, so bear with me. I would like to use PyTorch, but I'm open to TensorFlow/Keras as well.

+ +

Here are a few of my questions:

+ +
    +
1. What output do I need to train on? My game currently doesn't have a win condition. Would an array or matrix work, filled with a 0 where there isn't a chip, a 1 where a red chip is, and a 2 for a yellow one? I.e.
  2. +
+ +
[0,0,0,0
+ 1,0,0,0
+ 1,0,0,0
+ 1,0,2,0
+ 1,2,2,2]
+
+ +

and enter a 1 somewhere to signify this as a win for player 1? Could an AI recognize this 4 in a row pattern as what needs to be done to win?

+ +
    +
  1. What is the best way to simulate a lot of games to get my training data? I'm imagining using an RNG to drop chips randomly, export the data output to a file and then enter whether it was a win for p1, p2, or a tie?

  2. +
  3. Any other general words of wisdom or links to read?

  4. +
+ +

Thanks so much for reading this and for any help you can offer!
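
P.S. For reference, this is roughly the win check I have in mind for labelling the simulated games (a sketch, assuming the 0/1/2 board encoding above; it works for any board size, including the standard 6x7):

def has_won(board, player):
    rows, cols = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]     # right, down, two diagonals
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == player
                       for rr, cc in cells):
                    return True                        # found 4 in a row for this player
    return False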

+",31784,,,,,12/2/2019 0:42,What form of output would be needed to train a model on a connect 4 AI?,,0,6,,,,CC BY-SA 4.0 +16890,2,,16854,12/2/2019 5:06,,2,,"

After a quick scan, it would seem that, in the history of object detection, machine learning has always been at the forefront. Before that, approaches were just heuristic.

+ +

For a quick answer, here: https://towardsdatascience.com/real-time-object-detection-without-machine-learning-5139b399ee7d

+ +

That goes over object detection without using machine learning in OpenCV

+ +

That being said, there are some segments of computer vision that are not strictly machine learning. For example, a commonly used algorithm Selective Search for region proposal doesn't use machine learning, and the TF Algorithm for background generation doesn't either.

+ +

But when it comes to object detection, the gap between machine learning and all other methods is so large that ML is the only method really considered today.

+",26726,,,,,12/2/2019 5:06,,,,1,,,,CC BY-SA 4.0 +16891,2,,16883,12/2/2019 6:05,,0,,"

The question is related to ""feature extraction"". Firstly, to tackle a regression problem like the ones you stated, you need to provide the neural network with the most relevant inputs that have an effect on the output. E.g., if you want your network to add x and y, you need to provide it with training examples like input (x=1, y=3) and output (sum=4). This will make your network do exactly what you want.

+ +

But suppose you do not know which inputs you should train your network on; neural networks can take care of that too. Look at this example of a truth table: notice that the output column is actually the first input column and the other two input columns are just random. Eventually, the network learns this relationship and provides the correct results. What we learn from this: if you are unsure about which inputs to choose for your network, just provide as many as possible, or as many combinations as possible. Neural networks excel at finding relationships in input data.

+ +

Next, talking of the volume problem, this is what I have been doing recently. It's actually an example of function approximation. Usually, the problem has multiple inputs and a single output (just like the addition problem), but the inverse is also possible, i.e., input: sum and outputs: x & y. This comes under one-to-many function mapping and multivariate regression. So YES, you need to provide the volume as input and x, y and z as outputs while training. The recommended configuration is one neuron in the input layer, at least 6 hidden neurons and 3 neurons in the output layer. For magical results, you can use a deeper neural network rather than the shallow one suggested by me. But remember, neural networks have been proved to be *universal approximators*.
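
A minimal sketch of that configuration (Keras, purely illustrative; the toy data below is just for demonstration):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(6, activation='relu', input_shape=(1,)),  # 1 input: the volume
    tf.keras.layers.Dense(3),                                       # 3 outputs: x, y, z
])
model.compile(loss='mse', optimizer='adam')

xyz = np.random.uniform(1.0, 2.0, size=(1000, 3))       # toy boxes
vol = np.prod(xyz, axis=1, keepdims=True)                # their volumes
model.fit(vol, xyz, epochs=10, verbose=0)                # learn volume -> (x, y, z)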

+",26452,,,,,12/2/2019 6:05,,,,0,,,,CC BY-SA 4.0 +16893,1,,,12/2/2019 9:03,,1,36,"

I was reading about weight pruning in convolutional neural networks. Is it applied to all the layers, including the convolutional layers, or is it only done for the dense layers?

+",31795,,2444,,12/2/2019 13:32,12/2/2019 13:32,Is weight pruning applied to all layers or only to dense layers in CNNs?,,0,1,,,,CC BY-SA 4.0 +16895,1,16913,,12/2/2019 13:17,,3,433,"

I'm trying to build neural networks for games like Go, Reversi, Othello, Checkers, or even tic-tac-toe, not by calculating a move, but by making them evaluate a position.

+ +

The input is any board situation. The output is a score or estimate of the probability of winning, or how favorable a given position is, where 1 = guaranteed to win and 0 = guaranteed to lose.

+ +

In any given turn I can then loop over all possible moves for the current player, evaluate the resulting game situations, and pick the one with the highest score.

+ +

Hopefully, by letting this neural network play a trillion games vs itself, it can develop a sensible scoring function resulting in strong play.

+ +

Question: How do I train such a network ?

+ +

In every game, I can keep evaluating and making moves back and forth until one of the AI players wins. In that case the last game situation (right before the winning move) for the winning player should have a target value of 1, and the opposite situation (for the losing player) has a target value of 0.

+ +

Note that I don't intend to make the evaluation network two-sided. I encode the game situation as always ""my own"" pieces vs ""the opponent"", and then evaluate a score from my own (i.e. the current player's) side or perspective. Then after I pick a move, I flip sides so to speak, so the opponent pieces now become my own and vice versa, and then evaluate scores again (now from the other side's perspective) for the next counter move.

+ +

So the input to such a network does not explicitly encode black and white pieces, or noughts and crosses (in the case of tic-tac-toe), but just my pieces vs their pieces. And then it evaluates how favorable the given game situation is for me, always assuming it's my turn.

+ +

I can obviously assign a desired score or truth value for the last move in the game (1 for win, 0 for loss) but how do I backpropagate that towards earlier situations in a played game?

+ +

Should I somehow distribute the 1 or 0 result back a few steps, with a decaying adjustment factor or learning rate? In a game with 40 turns, it might make sense to consider the last few situations as good or bad (being close to winning or losing) but I guess that shouldn't reflect all the way back to the first few moves in the game.

+ +

Or am I completely mistaken with this approach and is this not how it's supposed to be done?

+",31800,,,,,12/3/2019 14:52,"Building 'evaluation' neural networks for go, reversi, checkers etc, how to train?",,1,2,,,,CC BY-SA 4.0 +16896,1,16969,,12/2/2019 17:22,,1,3481,"

I am planning to enroll for Andrew Ng's Machine Learning course https://www.coursera.org/learn/machine-learning. I've no background in math. Is it OK if I start the course and learn math as and when required?

+",31804,,,,,12/7/2019 9:35,Prerequisites for Andrew Ng's Machine Learning Course,,3,1,,12/9/2019 18:07,,CC BY-SA 4.0 +16897,2,,16896,12/2/2019 19:40,,0,,"

No prerequisites are required for Andrew Ng's ML course. There are a couple of lectures in which he gives the basic ideas of linear algebra. You can also learn the math when required.

+",31799,,,,,12/2/2019 19:40,,,,0,,,,CC BY-SA 4.0 +16898,2,,16874,12/2/2019 19:47,,2,,"

I think the first question you should answer is: ""What questions should the AI be able to answer?"" If the intent is that the AI should be able to answer any question, then that is simply not doable (or at least currently it is not doable). Currently, this is similar to asking for a program that can do anything.

+ +

Currently, the AI field is split between statistical approaches and logical approaches. In the early years, AI was approached mainly from a logical perspective. Now statistical approaches are more popular. The main advantage of logical approaches is that answers can be explained, while the main advantage of statistical approaches is that, given large enough data sets, agents can be trained. There is definitely a drive in the AI community to merge statistical and logical approaches to AI, but these approaches are still in their infancy.

+ +

I would therefore strongly suggest that you first determine the kinds of problems you want to address with AI and then, based on that, determine the AI approach that is best suited for those problems.

+",11450,,11450,,12/2/2019 19:54,12/2/2019 19:54,,,,0,,,,CC BY-SA 4.0 +16899,1,,,12/2/2019 20:01,,8,1332,"

I'm doing a paper for a class on the topic of big problems that are still prevalent in AI, specifically in the area of natural language processing and understanding. From what I understand, the areas:

+ +
    +
  • Text classification
  • +
  • Entity recognition
  • +
  • Translation
  • +
  • POS tagging
  • +
+ +

are for the most part solved or perform at a high level currently, but areas such as:

+ +
    +
  • Text summarization
  • +
  • Conversational systems
  • +
  • Contextual systems (relying on the previous context that will impact current prediction)
  • +
+ +

are still relatively unsolved or are a big area of research (although this could very well change soon with the releases of big transformer models from what I've read).

+ +

For people who have experience in the field, what are areas that are still big challenges in NLP and NLU? Why are these areas (doesn't have to be ones I've listed) so tough to figure out?

+",22840,,2444,,3/14/2020 18:39,3/14/2020 18:39,What are the current big challenges in natural language processing and understanding?,,2,0,,,,CC BY-SA 4.0 +16900,1,16903,,12/2/2019 20:23,,3,187,"

I have 50,000 samples. Of these 23,000 belong to the desired class $A$. I can sacrifice the number of instances that are classified as belonging to the desired class $A$. It will be enough for me to get 7000 instances in the desired class $A$, provided that most of these instances classified as belonging to $A$ really belong to the desired class $A$. How can I do this?

+ +

The following is the confusion matrix in the case the instances are perfectly classified.

+ +
[[23000   0]
+ [  0 27000]]
+
+ +

But it is unlikely to obtain this confusion matrix, so I'm quite satisfied with the following confusion matrix.

+ +
[[7000   16000]
+ [  500 26500]]
+
+ +

I am currently using the sklearn library. I mainly use algorithms based on decision trees, as they are quite fast in the calculation.

+",31808,,2444,,12/2/2019 23:52,12/2/2019 23:52,How can I minimise the false positives?,,1,3,,,,CC BY-SA 4.0 +16902,2,,5682,12/2/2019 21:22,,1,,"

Phillipe's excellent answer covers the crux of the subject, so I'm just going to state the obvious: the key difference is the medium and timescale.

+ +

Biological evolution is a function of the natural world, and typically occurs over a long time span, depending on the organisms and how quickly they produce new generations. (We typically think of biological evolution as occurring over ""millions of years"", but it can happen much more quickly, for instance in the case of microorganisms.)

+ +

Genetic algorithms utilize a computing medium, which in the current era is silicon-based, and involves microprocessors and various media for memory (magnetic tape and, more recently, solid state).

+ +

Both natural and artificial evolution are constrained by the size of the system (a planet or ecosystem in the former case, and available memory in the latter.) However:

+ +
    +
  • Artificial evolution can occur at an artificially accelerated pace, dependent on available processing resources.
  • +
+ +

This capacity for computationally ""accelerated subjective time"" and accelerated evolution of algorithms is one of the bases for the theory of the ""technological singularity"".

+ +

It might be argued that genetic engineering allows accelerated evolution for biological species, but that would not fall under natural evolution.

+",1671,,1671,,12/2/2019 21:37,12/2/2019 21:37,,,,0,,,,CC BY-SA 4.0 +16903,2,,16900,12/2/2019 23:43,,3,,"

I think you're looking for the minimization of false positives, that is, the instances that are classified as belonging to the desired class (the positive part of false positives) but that do not actually belong to that class (the false part of false positives). In practice, given your constraints, you may want to maximize the precision, while maintaining a good recall.

+ +

In this answer to the question How can the model be tuned to improve precision, when precision is much more important than recall?, the user suggests performing a grid search (using sklearn.grid_search.GridSearchCV(clf, param_grid, scoring=""precision"")) to find the parameters of the model that maximize the precision. See also the question Classifier with adjustable precision vs recall.
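
A minimal sketch of that idea with a decision-tree-based model (using the current sklearn.model_selection module rather than the older sklearn.grid_search path; the parameter grid and stand-in data are just examples):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# stand-in data; replace with your 50,000 samples and binary labels
X, y = make_classification(n_samples=5000, weights=[0.54], random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={'max_depth': [3, 5, 10], 'n_estimators': [100, 300]},
                      scoring='precision',   # optimise precision, i.e. penalise false positives
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)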

+",2444,,,,,12/2/2019 23:43,,,,1,,,,CC BY-SA 4.0 +16905,1,17713,,12/3/2019 4:47,,8,1636,"

When the time allotted to Monte Carlo tree search runs out, what action should be chosen from the root?

+ +
+

The game action finally executed by the program in the actual game, is the one corresponding to the child which was explored the most.

+
+ +
+

The basic algorithm involves iteratively building a search tree until some predefined computational budget – typically a time, memory or iteration constraint – is reached, at which point the search is halted and the best-performing root action returned.

+

[...] The result of the overall search a(BESTCHILD(v0)) is the action a that leads to the best child of the root node v0, where the exact definition of “best” is defined by the implementation.

+

[...] As soon as the search is interrupted or the computation budget is reached, the search terminates and an action a of the root node t0 is selected by some mechanism. Schadd [188] describes four criteria for selecting the winning action, based on the work of Chaslot et al [60]:

+
    +
  1. Max child: Select the root child with the highest reward.

    +
  2. +
  3. Robust child: Select the most visited root child.

    +
  4. +
  5. Max-Robust child: Select the root child with both the highest visit count and the highest reward. If none exist, then continue searching until an acceptable visit count is achieved [70].

    +
  6. +
  7. Secure child: Select the child which maximises a lower confidence bound.

    +
  8. +
+

[...] Once some computational budget has been reached, the algorithm terminates and returns the best move found, corresponding to the child of the root with the highest visit count.

+

The return value of the overall search in this case is a(BESTCHILD(v0,0)) which will give the action a that leads to the child with the highest reward, since the exploration parameter c is set to 0 for this final call on the root node v0. The algorithm could instead return the action that leads to the most visited child; these two options will usually – but not always! – describe the same action. This potential discrepancy is addressed in the Go program ERICA by continuing the search if the most visited root action is not also the one with the highest reward. This improved ERICA’s winning rate against GNU GO from 47% to 55% [107].

+
+

But their algorithm 2 uses the same criterion as the internal-node selection policy:

+

$$\operatorname{argmax}_{v'} \frac{Q(v')}{N(v')} + c \sqrt{\frac{2 \ln N(v)}{N(v')}}$$

+

which is neither the max child nor the robust child! This situation is quite confusing, and I'm wondering which approach is nowadays considered most successful/appropriate.

+",3373,,-1,,6/17/2020 9:57,1/28/2020 13:35,MCTS: How to choose the final action from the root,,1,1,,,,CC BY-SA 4.0 +16906,1,16907,,12/3/2019 9:02,,2,69,"

Let's suppose we have to train a neural network for the XOR classification task.

+ +

Are the inputs $(00, 01, 10, 11)$ inserted in a sequential way? For example, we first insert the 00 and change the weights, then the 01 and again slightly change them, etc. Or is there another way it can be implemented?

+",31817,,2444,,12/3/2019 16:03,12/3/2019 16:03,How are the inputs passed to the neural network during training for the XOR classification task?,,1,0,,,,CC BY-SA 4.0 +16907,2,,16906,12/3/2019 9:22,,1,,"

Hello and welcome to the community. There are multiple ways you can train a neural network: stochastic, mini-batch and batch. What you explained is the stochastic mode, where you input one training example (01, for example), calculate the gradients and update the network's weights before the next training example is fed in. You could also select multiple such examples (a mini-batch) and update the weights only after you have computed all the outputs for this particular mini-batch. Finally, you can use a batch size that is equal to the total number of examples in your dataset, so you update the weights only after you have the outputs for all samples. Each of those methods has its own strengths and weaknesses; depending on your dataset, you might prefer one over the others.
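
As an illustration, in Keras the three modes differ only in the batch_size argument (a minimal, untuned sketch):

import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='tanh', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(0.1))

# batch_size=1 -> stochastic (update after every single example)
# batch_size=2 -> mini-batch (update after every 2 examples)
# batch_size=4 -> (full) batch (one update per pass over all 4 examples)
model.fit(X, y, epochs=500, batch_size=4, verbose=0)
print(model.predict(X).round())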

+",20430,,,,,12/3/2019 9:22,,,,0,,,,CC BY-SA 4.0 +16908,1,,,12/3/2019 10:32,,1,54,"

There are a lot of ML algorithms suggested for fraud detection. Now, I have not been able to find a general overview for all of them. My goal is to create this overview. What algorithms would you suggest and why?

+",31820,,,,,12/3/2019 10:32,What ML algorithms would you suggest in fraud detection?,,0,0,,,,CC BY-SA 4.0 +16909,1,16940,,12/3/2019 10:50,,4,333,"

I was going through the proof of the policy gradient theorem here: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#svpg

+ +

In the section ""Proof of Policy Gradient Theorem"" in the block of equations just under the sentence ""The nice rewriting above allows us to exclude the derivative of Q-value function..."" they set +$$ +\eta (s) = \sum^\infty_{k=0} \rho^\pi(s_0 \rightarrow s, k) +$$ +and +$$ +\sum_s \eta (s) = const +$$ +Thus, they basically assume, that the stationary distribution is not dependent on the initial state. But how can we justify this? If the MDP is described by a block diagonal transition matrix, in my mind this should not hold.

+",31821,,2444,,5/11/2022 10:23,5/11/2022 10:23,Why is the stationary distribution independent of the initial state in the proof of the policy gradient theorem?,,1,0,,,,CC BY-SA 4.0 +16918,1,,,12/3/2019 11:07,,2,48,"

Consider that I have some neural network that, using supervised learning, transforms a string into a learned feature vector, where ""close"" strings result in closer vectors.

+ +

I know that, since a NN is not a one-way function, there is a way to retrieve the input data from my output if I have the entire network at hand (if I know the biases, weights, etc.).

+ +

My question is: if the network is not known, is there a way for me (using, e.g., some probability distributions) to make assumptions about or even reconstruct the input data?

+",31885,Jonathan R,31885,,12/5/2019 15:01,12/5/2019 15:01,Can learned feature vectors be considered a good encryption?,,0,3,,,,CC BY-SA 4.0 +16910,1,16912,,12/3/2019 11:38,,7,2033,"

I'm taking a Coursera course on Reinforcement learning. There was a question there that wasn't addressed in the learning material: Does adding a constant to all rewards change the set of optimal policies in episodic tasks?

+ +

The answer is Yes - Adding a constant to the reward signal can make longer episodes more or less advantageous (depending on whether the constant is positive or negative).

+ +

Can anyone explain why this is so? And why doesn't it change in the case of continuing (non-episodic) tasks? I don't see why adding a constant matters - an optimal policy would still want to get the maximum reward...

+ +

Can anyone give an example of this?

+",27947,,,,,12/3/2019 13:41,Does adding a constant to all rewards change the set of optimal policies in episodic tasks?,,1,0,,,,CC BY-SA 4.0 +16912,2,,16910,12/3/2019 13:41,,12,,"

Generally we can write for $R_c$ the total reward with added constant $c$ of a policy as +$$ +R_c = \sum_{i=0}^K (r_i + c) \gamma^i = \sum_{i=0}^K r_i \gamma^i + \sum_{i=0}^K c \gamma^i +$$ +So if we have two policies with the same total reward (without added constant) +$$ +\sum_{i=0}^{K_1} r_i^1 \gamma^i = \sum_{i=0}^{K_2} r_i^2 \gamma^i +$$ +but with different lengths $K_1 \neq K_2$ the total reward with added constant will be different, because the second term in $R_c$ ( $\sum_{i=0}^K c \gamma^i$ ) will be different.

+ +

As an example: Consider two optimal policies, both generating the same cumulative reward of 10, but the first policy visits 4 states, before it reaches a terminal state, while the second visits only two states. The rewards can be written as: +$$ +10 + 0 + 0 + 0 = 10 +$$ +and +$$ +0 + 10 = 10 +$$ +But when we add 100 to every reward: +$$ +110 + 100 + 100 + 100 = 410 +$$ +and +$$ +100 + 110 = 210 +$$ +Thus, now the first one is better.

+ +

In the continuing case, the episodes always have length $K = \infty$. Therefore, they always have the same length, and adding a constant doesn't change anything, because the second term in $R_c$ stays the same.

+",31821,,,,,12/3/2019 13:41,,,,2,,,,CC BY-SA 4.0 +16913,2,,16895,12/3/2019 14:52,,1,,"

The evaluation of the last step in the game can be made with the 1 and 0, as you said. For all the other steps, the target evaluation should be the evaluation of the best next step, with a small decay.
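
One way to read this as code (a rough sketch; names are placeholders and, for clarity, it ignores the "flip sides each move" detail from the question, which would require using 1 minus the evaluation of the next position):

def build_targets(positions, final_result, evaluate, decay=0.95):
    """positions: positions visited in one game, oldest first.
    final_result: 1.0 if the side to move in the last position won, else 0.0.
    evaluate: the current network, mapping a position to a score in [0, 1]."""
    targets = [0.0] * len(positions)
    targets[-1] = final_result
    for i in range(len(positions) - 2, -1, -1):
        targets[i] = decay * evaluate(positions[i + 1])   # bootstrap from the next position
    return targets

# toy demo with a dummy evaluator that always returns 0.5
print(build_targets(list(range(5)), 1.0, evaluate=lambda p: 0.5))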

+",29671,,,,,12/3/2019 14:52,,,,6,,,,CC BY-SA 4.0 +16914,1,16925,,12/3/2019 23:30,,3,526,"

During model training, I noticed various behaviours of the training and validation accuracy. I understand that 'The training set is used to train the model, while the validation set is only used to evaluate the model's performance...', but I'd like to know if there is any relationship between training and validation accuracy and, if yes,

+
    +
  1. what exactly is happening when training and validation accuracy change during training and;

    +
  2. +
  3. what do different behaviours imply

    +
  4. +
+

For instance, some believe there is an overfitting problem if training > validation accuracy. What happens if one is alternately greater than the other, which is the case below?

+

Here is the code

+
import keras                                        # imports assumed; not shown in the original post
from keras import layers
from keras.layers import Conv1D, Dense
from sklearn.model_selection import train_test_split

inputs_1 = keras.Input(shape=(10081,1))
+
+layer1 = Conv1D(64,14)(inputs_1)
+layer2 = layers.MaxPool1D(5)(layer1)
+layer3 = Conv1D(64, 14)(layer2)
+layer4 = layers.GlobalMaxPooling1D()(layer3)
+
+inputs_2 = keras.Input(shape=(104,))             
+layer5 = layers.concatenate([layer4, inputs_2])
+layer6 = Dense(128, activation='relu')(layer5)
+layer7 = Dense(2, activation='softmax')(layer6)
+
+
+model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], output = [layer7])
+model_2.summary()
+
+
+X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:10185], df[['Result_cat','Result_cat1']].values, test_size=0.2) 
+X_train = X_train.to_numpy()
+X_train = X_train.reshape([X_train.shape[0], X_train.shape[1], 1]) 
+X_train_1 = X_train[:,0:10081,:]
+X_train_2 = X_train[:,10081:10185,:].reshape(736,104)   
+
+
+X_test = X_test.to_numpy()
+X_test = X_test.reshape([X_test.shape[0], X_test.shape[1], 1]) 
+X_test_1 = X_test[:,0:10081,:]
+X_test_2 = X_test[:,10081:10185,:].reshape(185,104)    
+
+adam = keras.optimizers.Adam(lr = 0.0005)
+model_2.compile(loss = 'categorical_crossentropy', optimizer = adam, metrics = ['acc'])
+
+history = model_2.fit([X_train_1,X_train_2], y_train, epochs = 120, batch_size = 256, validation_split = 0.2, callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)])
+
+

model summary

+
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:15: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=[<tf.Tenso..., outputs=[<tf.Tenso...)`
+  from ipykernel import kernelapp as app
+Model: "model_3"
+__________________________________________________________________________________________________
+Layer (type)                    Output Shape         Param #     Connected to                     
+==================================================================================================
+input_5 (InputLayer)            (None, 10081, 1)     0                                            
+__________________________________________________________________________________________________
+conv1d_5 (Conv1D)               (None, 10068, 64)    960         input_5[0][0]                    
+__________________________________________________________________________________________________
+max_pooling1d_3 (MaxPooling1D)  (None, 2013, 64)     0           conv1d_5[0][0]                   
+__________________________________________________________________________________________________
+conv1d_6 (Conv1D)               (None, 2000, 64)     57408       max_pooling1d_3[0][0]            
+__________________________________________________________________________________________________
+global_max_pooling1d_3 (GlobalM (None, 64)           0           conv1d_6[0][0]                   
+__________________________________________________________________________________________________
+input_6 (InputLayer)            (None, 104)          0                                            
+__________________________________________________________________________________________________
+concatenate_3 (Concatenate)     (None, 168)          0           global_max_pooling1d_3[0][0]     
+                                                                 input_6[0][0]                    
+__________________________________________________________________________________________________
+dense_5 (Dense)                 (None, 128)          21632       concatenate_3[0][0]              
+__________________________________________________________________________________________________
+dense_6 (Dense)                 (None, 2)            258         dense_5[0][0]                    
+==================================================================================================
+Total params: 80,258
+Trainable params: 80,258
+Non-trainable params: 0
+
+

and the training process

+
__________________________________________________________________________________________________
+Train on 588 samples, validate on 148 samples
+Epoch 1/120
+588/588 [==============================] - 16s 26ms/step - loss: 5.6355 - acc: 0.4932 - val_loss: 4.1086 - val_acc: 0.6216
+Epoch 2/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.5977 - acc: 0.5748 - val_loss: 3.8252 - val_acc: 0.4459
+Epoch 3/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.3815 - acc: 0.4575 - val_loss: 2.4087 - val_acc: 0.6622
+Epoch 4/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.7480 - acc: 0.6003 - val_loss: 2.0060 - val_acc: 0.6892
+Epoch 5/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.3019 - acc: 0.5408 - val_loss: 2.3176 - val_acc: 0.5676
+Epoch 6/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.1739 - acc: 0.5663 - val_loss: 2.2607 - val_acc: 0.6892
+Epoch 7/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.2322 - acc: 0.6207 - val_loss: 1.8898 - val_acc: 0.7230
+Epoch 8/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.9777 - acc: 0.6020 - val_loss: 1.8401 - val_acc: 0.7500
+Epoch 9/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8982 - acc: 0.6429 - val_loss: 1.8517 - val_acc: 0.7365
+Epoch 10/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8342 - acc: 0.6344 - val_loss: 1.7941 - val_acc: 0.7095
+Epoch 11/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7426 - acc: 0.6327 - val_loss: 1.8495 - val_acc: 0.7162
+Epoch 12/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7340 - acc: 0.6531 - val_loss: 1.7652 - val_acc: 0.7162
+Epoch 13/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6680 - acc: 0.6616 - val_loss: 1.8097 - val_acc: 0.7365
+Epoch 14/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6922 - acc: 0.6786 - val_loss: 1.7143 - val_acc: 0.7500
+Epoch 15/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6161 - acc: 0.6786 - val_loss: 1.6960 - val_acc: 0.7568
+Epoch 16/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6054 - acc: 0.6905 - val_loss: 1.6779 - val_acc: 0.7297
+Epoch 17/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6072 - acc: 0.6684 - val_loss: 1.6750 - val_acc: 0.7703
+Epoch 18/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5907 - acc: 0.6871 - val_loss: 1.6774 - val_acc: 0.7432
+Epoch 19/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5854 - acc: 0.6718 - val_loss: 1.6609 - val_acc: 0.7770
+Epoch 20/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5621 - acc: 0.6905 - val_loss: 1.6709 - val_acc: 0.7365
+Epoch 21/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5515 - acc: 0.6854 - val_loss: 1.6904 - val_acc: 0.7703
+Epoch 22/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5749 - acc: 0.6837 - val_loss: 1.6862 - val_acc: 0.7297
+Epoch 23/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6212 - acc: 0.6514 - val_loss: 1.7215 - val_acc: 0.7568
+Epoch 24/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6532 - acc: 0.6633 - val_loss: 1.7105 - val_acc: 0.7230
+Epoch 25/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7300 - acc: 0.6344 - val_loss: 1.6870 - val_acc: 0.7432
+Epoch 26/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7355 - acc: 0.6650 - val_loss: 1.6733 - val_acc: 0.7703
+Epoch 27/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6336 - acc: 0.6650 - val_loss: 1.6572 - val_acc: 0.7297
+Epoch 28/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6018 - acc: 0.6803 - val_loss: 1.7292 - val_acc: 0.7635
+Epoch 29/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5448 - acc: 0.7143 - val_loss: 1.8065 - val_acc: 0.7095
+Epoch 30/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5724 - acc: 0.6820 - val_loss: 1.8029 - val_acc: 0.7297
+Epoch 31/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6622 - acc: 0.6650 - val_loss: 1.6594 - val_acc: 0.7568
+Epoch 32/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6211 - acc: 0.6582 - val_loss: 1.6375 - val_acc: 0.7770
+Epoch 33/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5911 - acc: 0.6854 - val_loss: 1.6964 - val_acc: 0.7500
+Epoch 34/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5050 - acc: 0.7262 - val_loss: 1.8496 - val_acc: 0.6892
+Epoch 35/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6012 - acc: 0.6752 - val_loss: 1.7443 - val_acc: 0.7432
+Epoch 36/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5688 - acc: 0.6871 - val_loss: 1.6220 - val_acc: 0.7568
+Epoch 37/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4843 - acc: 0.7279 - val_loss: 1.6166 - val_acc: 0.7905
+Epoch 38/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4707 - acc: 0.7449 - val_loss: 1.6496 - val_acc: 0.7905
+Epoch 39/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4683 - acc: 0.7109 - val_loss: 1.6641 - val_acc: 0.7432
+Epoch 40/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4671 - acc: 0.7279 - val_loss: 1.6553 - val_acc: 0.7703
+Epoch 41/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4479 - acc: 0.7347 - val_loss: 1.6302 - val_acc: 0.7973
+Epoch 42/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4355 - acc: 0.7551 - val_loss: 1.6241 - val_acc: 0.7973
+Epoch 43/120
+588/588 [==============================] - 14s 25ms/step - loss: 2.4286 - acc: 0.7568 - val_loss: 1.6249 - val_acc: 0.7973
+Epoch 44/120
+Train on 588 samples, validate on 148 samples
+Epoch 1/120
+588/588 [==============================] - 16s 26ms/step - loss: 5.6355 - acc: 0.4932 - val_loss: 4.1086 - val_acc: 0.6216
+Epoch 2/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.5977 - acc: 0.5748 - val_loss: 3.8252 - val_acc: 0.4459
+Epoch 3/120
+588/588 [==============================] - 15s 25ms/step - loss: 4.3815 - acc: 0.4575 - val_loss: 2.4087 - val_acc: 0.6622
+Epoch 4/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.7480 - acc: 0.6003 - val_loss: 2.0060 - val_acc: 0.6892
+Epoch 5/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.3019 - acc: 0.5408 - val_loss: 2.3176 - val_acc: 0.5676
+Epoch 6/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.1739 - acc: 0.5663 - val_loss: 2.2607 - val_acc: 0.6892
+Epoch 7/120
+588/588 [==============================] - 15s 25ms/step - loss: 3.2322 - acc: 0.6207 - val_loss: 1.8898 - val_acc: 0.7230
+Epoch 8/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.9777 - acc: 0.6020 - val_loss: 1.8401 - val_acc: 0.7500
+Epoch 9/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8982 - acc: 0.6429 - val_loss: 1.8517 - val_acc: 0.7365
+Epoch 10/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.8342 - acc: 0.6344 - val_loss: 1.7941 - val_acc: 0.7095
+Epoch 11/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7426 - acc: 0.6327 - val_loss: 1.8495 - val_acc: 0.7162
+Epoch 12/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7340 - acc: 0.6531 - val_loss: 1.7652 - val_acc: 0.7162
+Epoch 13/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6680 - acc: 0.6616 - val_loss: 1.8097 - val_acc: 0.7365
+Epoch 14/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6922 - acc: 0.6786 - val_loss: 1.7143 - val_acc: 0.7500
+Epoch 15/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6161 - acc: 0.6786 - val_loss: 1.6960 - val_acc: 0.7568
+Epoch 16/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6054 - acc: 0.6905 - val_loss: 1.6779 - val_acc: 0.7297
+Epoch 17/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6072 - acc: 0.6684 - val_loss: 1.6750 - val_acc: 0.7703
+Epoch 18/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5907 - acc: 0.6871 - val_loss: 1.6774 - val_acc: 0.7432
+Epoch 19/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5854 - acc: 0.6718 - val_loss: 1.6609 - val_acc: 0.7770
+Epoch 20/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5621 - acc: 0.6905 - val_loss: 1.6709 - val_acc: 0.7365
+Epoch 21/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5515 - acc: 0.6854 - val_loss: 1.6904 - val_acc: 0.7703
+Epoch 22/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5749 - acc: 0.6837 - val_loss: 1.6862 - val_acc: 0.7297
+Epoch 23/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6212 - acc: 0.6514 - val_loss: 1.7215 - val_acc: 0.7568
+Epoch 24/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6532 - acc: 0.6633 - val_loss: 1.7105 - val_acc: 0.7230
+Epoch 25/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7300 - acc: 0.6344 - val_loss: 1.6870 - val_acc: 0.7432
+Epoch 26/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.7355 - acc: 0.6650 - val_loss: 1.6733 - val_acc: 0.7703
+Epoch 27/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6336 - acc: 0.6650 - val_loss: 1.6572 - val_acc: 0.7297
+Epoch 28/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6018 - acc: 0.6803 - val_loss: 1.7292 - val_acc: 0.7635
+Epoch 29/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5448 - acc: 0.7143 - val_loss: 1.8065 - val_acc: 0.7095
+Epoch 30/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5724 - acc: 0.6820 - val_loss: 1.8029 - val_acc: 0.7297
+Epoch 31/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6622 - acc: 0.6650 - val_loss: 1.6594 - val_acc: 0.7568
+Epoch 32/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6211 - acc: 0.6582 - val_loss: 1.6375 - val_acc: 0.7770
+Epoch 33/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5911 - acc: 0.6854 - val_loss: 1.6964 - val_acc: 0.7500
+Epoch 34/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5050 - acc: 0.7262 - val_loss: 1.8496 - val_acc: 0.6892
+Epoch 35/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.6012 - acc: 0.6752 - val_loss: 1.7443 - val_acc: 0.7432
+Epoch 36/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.5688 - acc: 0.6871 - val_loss: 1.6220 - val_acc: 0.7568
+Epoch 37/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4843 - acc: 0.7279 - val_loss: 1.6166 - val_acc: 0.7905
+Epoch 38/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4707 - acc: 0.7449 - val_loss: 1.6496 - val_acc: 0.7905
+Epoch 39/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4683 - acc: 0.7109 - val_loss: 1.6641 - val_acc: 0.7432
+Epoch 40/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4671 - acc: 0.7279 - val_loss: 1.6553 - val_acc: 0.7703
+Epoch 41/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4479 - acc: 0.7347 - val_loss: 1.6302 - val_acc: 0.7973
+Epoch 42/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4355 - acc: 0.7551 - val_loss: 1.6241 - val_acc: 0.7973
+Epoch 43/120
+588/588 [==============================] - 14s 25ms/step - loss: 2.4286 - acc: 0.7568 - val_loss: 1.6249 - val_acc: 0.7973
+Epoch 44/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4250 - acc: 0.7585 - val_loss: 1.6248 - val_acc: 0.7770
+Epoch 45/120
+588/588 [==============================] - 14s 25ms/step - loss: 2.4198 - acc: 0.7517 - val_loss: 1.6212 - val_acc: 0.7703
+Epoch 46/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4246 - acc: 0.7568 - val_loss: 1.6129 - val_acc: 0.7838
+Epoch 47/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4237 - acc: 0.7517 - val_loss: 1.6166 - val_acc: 0.7973
+Epoch 48/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4287 - acc: 0.7432 - val_loss: 1.6309 - val_acc: 0.8041
+Epoch 49/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4179 - acc: 0.7381 - val_loss: 1.6271 - val_acc: 0.7838
+Epoch 50/120
+588/588 [==============================] - 15s 25ms/step - loss: 2.4164 - acc: 0.7381 - val_loss: 1.6258 - val_acc: 0.7973
+Epoch 51/120
+588/588 [==============================] - 14s 24ms/step - loss: 2.1996 - acc: 0.7398 - val_loss: 1.3612 - val_acc: 0.7973
+Epoch 52/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1387 - acc: 0.8265 - val_loss: 1.4811 - val_acc: 0.7973
+Epoch 53/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1607 - acc: 0.8078 - val_loss: 1.5060 - val_acc: 0.7838
+Epoch 54/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1783 - acc: 0.8129 - val_loss: 1.4878 - val_acc: 0.8176
+Epoch 55/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1745 - acc: 0.8197 - val_loss: 1.4762 - val_acc: 0.8108
+Epoch 56/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1764 - acc: 0.8129 - val_loss: 1.4631 - val_acc: 0.7905
+Epoch 57/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1637 - acc: 0.8078 - val_loss: 1.4615 - val_acc: 0.7770
+Epoch 58/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1563 - acc: 0.8112 - val_loss: 1.4487 - val_acc: 0.7703
+Epoch 59/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1396 - acc: 0.8146 - val_loss: 1.4362 - val_acc: 0.7905
+Epoch 60/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1240 - acc: 0.8316 - val_loss: 1.4333 - val_acc: 0.8041
+Epoch 61/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1173 - acc: 0.8333 - val_loss: 1.4369 - val_acc: 0.8041
+Epoch 62/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1228 - acc: 0.8384 - val_loss: 1.4393 - val_acc: 0.8041
+Epoch 63/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1113 - acc: 0.8316 - val_loss: 1.4380 - val_acc: 0.8041
+Epoch 64/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.1102 - acc: 0.8452 - val_loss: 1.4217 - val_acc: 0.8041
+Epoch 65/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0961 - acc: 0.8469 - val_loss: 1.4129 - val_acc: 0.7973
+Epoch 66/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0903 - acc: 0.8537 - val_loss: 1.4019 - val_acc: 0.8041
+Epoch 67/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0890 - acc: 0.8503 - val_loss: 1.3850 - val_acc: 0.8176
+Epoch 68/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0878 - acc: 0.8520 - val_loss: 1.4035 - val_acc: 0.7635
+Epoch 69/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0984 - acc: 0.8469 - val_loss: 1.4060 - val_acc: 0.8041
+Epoch 70/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0893 - acc: 0.8418 - val_loss: 1.3981 - val_acc: 0.7973
+Epoch 71/120
+588/588 [==============================] - 15s 25ms/step - loss: 1.0876 - acc: 0.8605 - val_loss: 1.3951 - val_acc: 0.8041
+
+

Notice how at first acc is lower than val_acc and later it is greater than val_acc. Can someone please shed some light on what could be happening here? Thank you

+",31075,,2444,,5/22/2022 21:50,5/22/2022 21:50,What is the relationship between the training accuracy and validation accuracy?,,1,0,,,,CC BY-SA 4.0 +16915,1,,,12/3/2019 23:34,,1,584,"

Is there a way to train a neural network to detect subliminal messages? Where can I find the dataset on which to train the neural network?

+ +

If I have to create the dataset, how would I go about it?

+ +

The United Nations has defined subliminal messages as messages that are perceived without being aware of them; it is unconscious perception, or perception without awareness. That is, you may register a message yet be unable to consciously perceive it in the form of text, etc.

+ +

There are two main types of subliminal messages: one which can be made through visual means, another which can be made through audio.

+ +

By visual means, I'm referring to these types:

+ +
    +
  1. Messages which are flashed for a very short while on the screen.
  2. +
  3. Messages whose opacity is changed to blend with the background.
  4. +
  5. Messages whose colors are varied slightly to blend with the background.
  6. +
+ +

Example of the 3rd type of subliminal message: on a red background, a message can be shown in a slightly different shade of red. Since the conscious mind can't distinguish between such close shades of red, people will take the entire thing to be a red block, but the subconscious mind notices the slight variation in color and registers the message, because humans can perceive millions of colors.

+",31307,,2444,,6/15/2020 1:16,6/30/2023 9:03,How can I train a neural network to detect subliminal messages?,,1,1,,,,CC BY-SA 4.0 +16916,1,,,12/3/2019 23:36,,1,108,"

I've made a connect 4 game in javascript, and I want to design an AI for it. I made a post the other day about what output would be needed, and I think I could use images of the board and a CNN. I did some research into Reinforcement learning, and I think that's what I need to do. I don't have much experience with ML in general, much less RL with Q-learning, but that is what I'd like to do.

+ +

Now, I don't really know how to start out with such a big project. I have a few questions first:

+ +
    +
  1. What do I do with my input? I'm thinking I give the AI 7 options for moves to make, one for each column of the board. How do I implement a way that it can ""look"" at the board? Can I just import an image of the current board state?

  2. +
  3. How do I make a reward table? How should I do the points system for the Q-learning? I'm thinking something like: If it drops a chip it gets a point, if it lines up 2 chips in a row it gets 5 points, if it gets 3 in a row it gets 30, and if it gets 4 in a row it gets 100. Would that be an effective way to do this? How do I implement this?

  4. +
  5. Is there a library I can use to do any of the work where I make an algorithm and board states and reward tables? Or do I have to hard code any of it?

  6. +
  7. I've done some research, are there any links or tutorials you think I should read or follow along with? Any other general advice or help for me?

  8. +
+ +

I greatly appreciate anyone who answers one or all of these questions! Thank you so much!

+",31784,,,,,12/4/2019 14:10,Designing a reinforcement learning AI for a game of connect 4,,1,3,,9/13/2020 19:26,,CC BY-SA 4.0 +16917,1,,,12/4/2019 1:40,,4,279,"

I'm a student beginning to study deep learning, and would like to practice with a simple project using a Graph Convolutional Network.

+ +

However, I'm not quite sure how to handle different input sizes of graphs for the GCN. How would I do this?

+ +

Is zero-padding the only way to solve this problem? While zero-padding is applicable to CNNs, I'm not sure if it is for a GCN.

+",31836,,2444,,12/4/2019 14:09,12/4/2019 14:09,How should I handle different input sizes in graph convolution networks?,,0,2,,,,CC BY-SA 4.0 +16920,1,,,12/4/2019 5:06,,2,57,"

Is logistic regression used for unconstrained or constrained optimization problems, and why?

+",31840,,2444,,12/4/2019 14:50,12/4/2019 14:50,Is logistic regression used for unconstrained or constrained optimisation problems?,,0,1,,,,CC BY-SA 4.0 +16922,1,,,12/4/2019 6:38,,1,890,"

The RPN loss in Faster RCNN paper is

+

$$ +L({p_i}, {t_i}) = \frac{1}{N_{cls}} \sum_{i} L_{cls}(p_i,p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*) +$$

+

For regression problems, we have the following parametrization

+

$$t_x=\frac{x - x_a}{w_a}, \\ t_y=\frac{y−y_a}{h_a}, \\ t_w= \log \left( \frac{w}{w_a} \right),\\ t_h= \log \left(\frac{h}{h_a} \right)$$

+

and the ground-truth labels are

+

$$t_x^*=\frac{x^* - x_a}{w_a},\\ t_y^*=\frac{y^*−y_a}{h_a}, \\ t_w^*= \log \left( \frac{w^*}{w_a} \right), \\ t_h^*= \log \left(\frac{h^*}{h_a} \right)$$

+

where

+
    +
  • $x$ and $y$ are the two coordinates of the center, $w$ the width, and $h$ the height of the predicted box.

    +
  • +
  • $x_a$ and $y_a$ are the two coordinates of the center, $w_a$ the width, and $h_a$ the height of the anchor box.

    +
  • +
  • $L_{reg}(t_i, t_i^*) = R(t_i − t_i^*)$, where $R$ is a robust loss function (smooth $L_1$)

    +
  • +
+

These equations are unclear to me, so here are my two questions.

+
    +
  1. How can I get the predicted bounding box given the neural network's output?

    +
  2. +
  3. What exactly is the smooth $L_1$ here? How is it defined?

    +
  4. +
+",31844,,2444,,12/28/2020 15:31,5/17/2023 18:13,"In Faster R-CNN, how can I get the predicted bounding box given the neural network's output?",,1,0,,,,CC BY-SA 4.0 +16925,2,,16914,12/4/2019 10:18,,1,,"

very interesting questions:

+

1. what exactly is happening when training and validation accuracy change during training

+
    +
  • The accuracy changes after every batch computation. You have 588 training samples split into batches, so the loss will be computed after each one of these batches (let's say each batch has 8 images). However, the accuracy you see in the progress bar is the accuracy of the current batch averaged with the accuracy of all the previous batches so far. See keras.utils.generic_utils.Progbar.
  • +
  • The val_acc is computed only at the end of one epoch and it is computed with all your validation dataset at once (considering it as a single batch, so if you have 100 images for validation it will compute accuracy as a single batch of 100 images)
  • +
+

2. what do different behaviours imply

+
    +
  • The acc and val_acc normally differ from each other due to different split sizes.

    +
      +
    • Try the same experiment with validation_split=0.01 and validation_split=0.4 and you will see how both accuracy and val_acc change.
    • +
    • Normally the greater the validation split, the more similar both metrics will be since the validation split will be big enough to be representative (let's say it has cats and dogs, not only cats), taking into account that you need enough data to train correctly. This explains why in some cases the val_acc is higher than accuracy and vice versa.
    • +
    +
  • +
  • Overfitting only occurs when the trend of the graph changes: val_acc starts to drop while accuracy keeps increasing. This means that your model cannot do any better with the validation dataset (previously unseen images).

    +
  • +
+

I work with loss and val_loss, which are highly correlated with accuracy. The loss normally behaves inversely, so interpret the comments above in the inverse sense (sorry about the confusion, but I'm taking this example from my current experiments). I hope it helps:

+

+

There are 2 experiments, orange and grey.

+
    +
  • In both experiments, val_loss is always slightly higher than loss (because of my current validation split, which happens to be 0.2 as well; normally it is 0.01 and val_loss is even higher).

    +
  • +
  • In both experiments the loss trend is linearly decreasing; this is because gradient descent works and the loss function is well defined and converges.

    +
  • +
  • The orange experiment is overfitting from epoch 20 onwards because the val_loss won't drop any more and, on the contrary, it starts increasing.

    +
  • +
  • The grey experiment is just right: both loss and val_loss are still decreasing, and although val_loss might be greater than loss, it is not overfitting because it is still decreasing. That is why it is still training :)

    +
  • +
+

Complex concepts here, I hope I was able to explain myself clearly! Cheers

+",26882,,2444,,5/22/2022 21:42,5/22/2022 21:42,,,,5,,,,CC BY-SA 4.0 +16926,1,16929,,12/4/2019 10:26,,3,1097,"

If I want to augment my dataset, is shuffling or permuting the channels (RGB) of an image a sensible augmentation for training a CNN? IIRC, the way convolutions work is that a kernel operates over parts of the image but maintains the order of the channels.

+ +

For example, the kernel has $k \times k$ weights for each channel and the resulting output is the multiplication of the weights and the pixel values of the image and is finally averaged to form a new pixel in the next feature map.

+ +

In this case, if we shuffle the channels of the image (GBR, BGR, RBG, GRB, etc.), a CNN that is only trained on the ordering RGB would do poorly on such images. Therefore, is it not sensible to shuffle the channels of the image as a form of data augmentation? Or will this have a regularizing effect on the CNN model?

+",31851,,,,,12/4/2019 13:06,Can I shuffle image channel data as a form of data augmentation?,,1,0,,,,CC BY-SA 4.0 +16927,1,,,12/4/2019 12:53,,0,1172,"

I was just wondering if it's possible to use machine learning to train a model on a dataset of images of cups, each with a given volume, and then use object detection to detect other cups and estimate their volume.

+ +

Basically, the end goal is to detect the volume of a cup using object detection with a phone's camera.

+ +

I would highly appreciate it if someone can point me to the right direction.

+",16071,,,,,12/4/2019 13:54,Is it possible to use AI for detecting the volume of a cup,,1,0,,,,CC BY-SA 4.0 +16928,2,,16899,12/4/2019 13:05,,3,,"

According to a nice article by Sebastian Ruder https://ruder.io/4-biggest-open-problems-in-nlp/ based on answers from top NLP researchers https://docs.google.com/document/d/18NoNdArdzDLJFQGBMVMsQ-iLOowP1XXDaSVRmYN0IyM/edit

+ +
    +
  1. Natural language understanding
  2. +
  3. NLP for low-resource scenarios
  4. +
  5. Reasoning about large or multiple documents
  6. +
  7. Datasets, problems, and evaluation
  8. +
+ +

I recommend having a look at the article. More details in the slides https://drive.google.com/file/d/15ehMIJ7wY9A7RSmyJPNmrBMuC7se0PMP/view

+",27851,,,,,12/4/2019 13:05,,,,0,,,,CC BY-SA 4.0 +16929,2,,16926,12/4/2019 13:06,,3,,"

As a rule of thumb for image data augmentation, look at the augmented images:

+ +
    +
  • Can you correctly classify or measure your target label from the augmented images?

  • +
  • Could something similar to the augmented images appear in the environment where you want to run inferences on previously unseen inputs?

  • +
+ +

For your suggested augmentation of shuffling the channels, it may pass the first test. However, the second test shows that you are probably taking a step too far.

+ +
+

will this have a regularizing effect on the CNN model?

+
+ +

Yes, but it might not be that useful to have strong cross-channel regularisation.

+ +

If there is important information for your task in the separate colour channels, then shuffling the channels makes it harder for the neural network to use that (it is not impossible, the CNN can still learn filters that will trigger most strongly on features that tend to appear in red channel and not blue in your problem for instance).

+ +

If there is not important information for your task in the colour information, then you may find it simpler and easier to turn your images into single channel greyscale instead, and use that throughout. Although that is not completely the same, for many image types it will achieve a similar effect (and possible boost to accuracy) for a fraction of the effort.
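For reference, here is a minimal sketch of both options with NumPy (the array img below is only a random placeholder standing in for one of your own RGB images):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # placeholder RGB image

# Option 1: the channel-shuffling augmentation discussed in the question
perm = np.random.permutation(3)
shuffled = img[:, :, perm]

# Option 2: collapse to a single greyscale channel instead
grey = (0.299 * img[:, :, 0] + 0.587 * img[:, :, 1] + 0.114 * img[:, :, 2]).astype(np.uint8)

The luminance weights in the greyscale conversion are the common ITU-R BT.601 coefficients; any similar weighting would do for this purpose.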

+",1847,,,,,12/4/2019 13:06,,,,1,,,,CC BY-SA 4.0 +16931,2,,16927,12/4/2019 13:54,,2,,"

This could be possible, providing you have the right dataset to train it on.

+ +

The volume of a cup consists of width, height and depth. You can probably detect all three of those given the bounding box or the pixels of the cup. However, detecting the dimensions of an object requires a reference object, like a penny or your finger, and you have to specify its exact dimensions. If that method fits your problem, which I assume it does not, this is a good resource to look through: Open CV measuring dimensions

+ +

However, to do the task, you need a way to measure the volume without the reference object. To do this, you need a deep-learning-based approach. Depth detection using a fully residual convolutional neural network may be a good start for your project, as you are doing something similar. You may use one of the pretrained models and apply transfer learning on it with the images of the cup and the surroundings, then take the output and feed it through another feed-forward neural network to predict the volume.
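As a very rough sketch of that second stage (all sizes here are made-up assumptions: a flattened 64x64 depth map as input and a single volume value as output):

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical regressor mapping a flattened depth map (64*64 = 4096 values) to one volume estimate
volume_regressor = Sequential([
    Dense(256, activation='relu', input_dim=4096),
    Dense(64, activation='relu'),
    Dense(1, activation='linear'),
])
volume_regressor.compile(optimizer='adam', loss='mse')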

+",23713,,,,,12/4/2019 13:54,,,,0,,,,CC BY-SA 4.0 +16932,2,,16916,12/4/2019 14:10,,1,,"

1) Your input should describe your entire environment. For a standard Connect 4 board this could be done with 7 (columns) * 6 (rows) * 3 (either empty space, opponent chip or your chip) = 126 input neurons (see the sketch at the end of this answer). You can just import an image of the current board state (which is width pixels * height pixels input neurons), but this means you task the neural network with learning to read the image as well as playing the game.

+ +

2) Rewards for a game like this are (most of the time) 1 for a winning move and 0 for a losing move. If you do it your way, it might have side effects, like prioritizing lining 2 chips up instead of going for a winning move.

+ +

3) TensorFlow is most commonly used (this is Python and not JavaScript, though).

+ +

4) I liked the book Reinforcement Learning: An Introduction and would recommend it to anyone.
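Regarding point 1, a minimal sketch of such an input encoding (assuming a standard 7x6 Connect 4 board stored as a NumPy array, with 0 = empty, 1 = your chip, 2 = opponent chip; the placed chips are arbitrary examples):

import numpy as np

board = np.zeros((6, 7), dtype=int)   # 6 rows x 7 columns, all cells empty
board[5, 3] = 1                       # your chip dropped in the middle column
board[5, 4] = 2                       # an opponent chip next to it

one_hot = np.eye(3)[board]            # shape (6, 7, 3): one channel per cell state
state_vector = one_hot.flatten()      # 6 * 7 * 3 = 126 inputs for the network
print(state_vector.shape)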

+",29671,,,,,12/4/2019 14:10,,,,0,,,,CC BY-SA 4.0 +16933,1,,,12/4/2019 15:00,,2,166,"

I'm having a problem to understand the needed dimensions of a VAE, especially for mu, logvar and z layer.

+

Let's say I have an input of 512x512, 1 color channel (CT images), batch size 32. Then my encoder/decoder looks like the following:

+
self.encoder = nn.Sequential(
+            nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),  # 32x512x512
+            nn.ReLU(True),
+            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),  # 32x256x256
+            nn.ReLU(True),
+            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),  # 32x128x128
+            nn.ReLU(True),
+            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),  # 32x64x64
+            nn.ReLU(True),
+            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1),  # 32x32x32
+            nn.ReLU(True))
+
+self.decoder = nn.Sequential(
+            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),  # 32x64x64
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),  # 32x128x128
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),  # 32x256x256
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 32, kernel_size=3, stride=1, padding=1),  # 32x256x256 (spatial size unchanged)
+            nn.ReLU(True),
+            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 1x512x512
+            nn.Sigmoid())
+
+

What is the correct dimension of mu/logvar and z? latent_dim = 1000, filter_depth=32.

+

I'm not sure if the input of the linear layer mu/logvar is right or not.

+
mu = nn.Linear(self.filter_depth * 32 * 32, self.latent_dim)
+logvar = nn.Linear(self.filter_depth * 32 * 32, self.latent_dim)
+z = nn.Linear(self.latent_dim, self.filter_depth * 32 * 32)
+
+",31857,,2444,,6/4/2022 14:21,6/4/2022 14:21,What is the correct dimension of mu/logvar and z in the VAE?,,0,0,0,,,CC BY-SA 4.0 +16936,1,,,12/4/2019 15:49,,2,69,"

I am given a 2-dimensional picture (black & white, white background) and it is assumed that there are some 'sticks' (basically 'thick lines' with different widths and lengths) that are (mostly) overlapping with one another. +I want to somehow recognize these sticks (where they are and how big they are).

+ +

Is there any approach you would recommend or, even better, anything that already exists? I am working with MatLab, but a general (theoretical) approach would be also fine! I am open to machine-learning, but I'd prefer classical algorithms here.

+",31859,,2444,,12/4/2019 15:59,12/4/2019 15:59,How can I recognise possibly overlapping line segments in 2D?,,0,2,,,,CC BY-SA 4.0 +16938,1,,,12/4/2019 19:00,,1,23,"

There are some background subtractor functions in OpenCV, like BackgroundSubtractorMOG2, BackgroundSubtractorGMG, etc. It seems that these functions only detect moving objects in a video.

+ +

But I understand from the concept of these functions that they do some clustering in an image. Do these functions only detect moving objects? Why? Or am I wrong?

+ +

Any help will be appreciated

+",30170,,,,,12/4/2019 19:00,Do backgroundSubtractor functions in opencv only detect moving objects?,,0,0,,,,CC BY-SA 4.0 +16939,1,,,12/4/2019 19:34,,2,118,"

I have heard of ensemble methods, such as XGBoost, for binary or categorical machine learning models. However, does this exist for regression? If so, how are the weights for each model in the process of predictions determined?

+ +

I am looking to do this manually, as I was planning on training two different models using separate frameworks (YoloV3 aka Darknet and Tensorflow for bounding box regression). Is there a way I can establish a weight for each model in the overall prediction for these boxes?

+ +

Or is this a bad idea?

+",23119,,2444,,12/4/2019 20:22,10/1/2020 18:25,Are there ensemble methods for regression?,,2,1,,,,CC BY-SA 4.0 +16940,2,,16909,12/4/2019 22:32,,1,,"

I think your doubt is completely reasonable. Probably there is an additional assumption that they (both Lilian Weng and Rich Sutton (pag.269)) do not make explicit in the proof and that is that your MDP is not only stationary, but also ergodic. A particular property of those systems is that the probability of eventually reaching a state $s$ from a starting point $s_0$ is 1. In such a case it is clear that $\eta(s)$ exists and is independent of any $s_0$ chosen.

+ +

Clearly, an MDP with block-diagonal transition matrix does not satisfy such an assumption since the starting point completely restricts those states you can reach in an infinite time.

+ +

What I do not understand is why Rich Sutton does mention ergodicity as a necessary condition in the case of a ""continuing task"", as opposed to ""episodic tasks"" (pag.275). For me, their proof requires this condition in both cases.

+ +

As an additional note, I also think that Lilian Weng does not really explain why we should buy that from the initial reasonable definition $J(\theta)=\sum_\mathcal S d^{\pi_\theta}(s)V^{\pi_\theta}(s)$ we should accept the much simpler one $J(\theta)=V^{\pi_\theta}(s_0)$. I guess the only reason is that the gradient of the initial expression does require to know the gradient of $d^{\pi_\theta}(s)$ and so you would be accepting the approximation:

+ +

$$\nabla_\theta J(\theta)=\nabla_\theta\left(\sum_\mathcal S d^{\pi_\theta}(s)V^{\pi_\theta}(s)\right)\approx\sum_\mathcal S d^{\pi_\theta}(s)\nabla_\theta V^{\pi_\theta}(s),$$

+ +

where the last term is just $\nabla_\theta V^{\pi_\theta}(s_0)$ under the ergodicity assumption.

+",30983,,,,,12/4/2019 22:32,,,,3,,,,CC BY-SA 4.0 +16941,2,,16294,12/4/2019 23:45,,2,,"

Apparently there is an example of non-convergence for semi-gradient sarsa, according to Rich Sutton (check slide 35). I guess TD(0) is not so different. So, probably your approximator will need to satisfy certain conditions to prove convergence.

+ +

Maybe this paper will be useful for you. It seems that they show that constraining your network to have relu activation functions allows you to show some convergence properties.

+",30983,,,,,12/4/2019 23:45,,,,0,,,,CC BY-SA 4.0 +16943,1,,,12/5/2019 3:25,,3,831,"

I would like to use OpenAI Gym to solve a continuing environment, that is, a problem with a single, never-ending episode (please note I don't mean a continuous environment with continuous state and actions).

+ +

The only continuing environment I found in their repository was the classic inverted pendulum problem, and I found no baseline methods (algorithms) that don't require episodic environments.

+ +

So I have two questions:

+ +
    +
  • are there any continuing environments other than the inverted pendulum one?

  • +
  • is there an OpenAI Gym baseline method that I can use to solve the inverted pendulum problem as well as other continuing environments?

  • +
+",30679,,,,,12/9/2019 3:15,Are there OpenAI Gym continuing environments (other than inverted pendulum) and baselines?,,1,0,,,,CC BY-SA 4.0 +16945,2,,16939,12/5/2019 7:17,,0,,"

There are similar boosting classes in XGBoost for regression. You can use their built-in classes for your problem, rather than implementing from scratch. You can read more about it on their website. You can also take a look at CatBoost, which implements a different approach.
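For example, a minimal sketch with the scikit-learn-style wrapper (X and y below are random placeholders for your own features and continuous targets):

import numpy as np
from xgboost import XGBRegressor

X = np.random.rand(100, 4)   # placeholder feature matrix
y = np.random.rand(100)      # placeholder continuous targets

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
predictions = model.predict(X[:5])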

+",31873,,,,,12/5/2019 7:17,,,,0,,,,CC BY-SA 4.0 +16946,1,16947,,12/5/2019 8:18,,1,183,"

How can I know what each neuron does in a NN? Consider the Playground from TensorFlow: there are some hidden layers with some neurons in each. Each of them shows a line (horizontal, vertical, ...). Where do these shapes come from? I think they are understandable for the NN, not for a person!

+",31143,,,,,12/6/2019 6:19,How can I find what does an specific neuron do in neural network?,,2,0,,,,CC BY-SA 4.0 +16947,2,,16946,12/5/2019 8:51,,3,,"

In TensorFlow Playground, the horizontal line shows where each class is separated for each neuron. What happens when you take any intermediate neuron to make the decision? You can see the answer in the line provided by that neuron. And this decision is a result of the weighted sum of the decisions of the previous neurons (up to activation).

+ +

Take the middle-top neuron in the link you share, which is an almost horizontal line - slightly tilted to the right. This neuron classifies everything above it as a blue, and everything below it as an orange. Hover over the neuron to see a larger picture on the output.

+ +

You can also see how this is actually calculated by hovering over the line coming from the neurons in the previous layer to the neuron you are looking at. For the case of the same neuron (center-top), the weight coming from the first input ($x_1$) is 0.091, while from the second one ($x_2$), it is 0.49. The neuron ends up being almost horizontal because the contribution from the horizontal input ($x_2$) is so much larger compared to the vertical one ($x_1$).
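As a rough illustration of that weighted sum (using the example weights above, assuming a zero bias since its value is not shown, and applying the tanh activation that the playground uses):

import numpy as np

w1, w2, b = 0.091, 0.49, 0.0
for x1, x2 in [(0.0, 2.0), (0.0, -2.0), (2.0, 0.0)]:
    print((x1, x2), np.tanh(w1 * x1 + w2 * x2 + b))

Points on opposite sides of the line w1*x1 + w2*x2 + b = 0 get activations of opposite sign, which is the separation shown in the neuron's picture.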

+ +

Of course you need to take into account the nonlinearity coming from the activation function but the idea presented above is the essence of it. The example uses tanh activation, which behaves very linear in its intermediate region so one can ignore this issue to some extend for this particular case.

+ +

Edit: It appears that the values for the weights change at every browser session, so the neuron I describe might look a little different to you. To get the same configuration, simply click on the colored lines between neurons to edit them and use the values above for the connections.

+",22301,,22301,,12/6/2019 6:19,12/6/2019 6:19,,,,0,,,,CC BY-SA 4.0 +16948,1,,,12/5/2019 12:09,,2,59,"

I am currently working on a problem and got stuck implementing one of its steps. This is a simple attempt to explain what I am currently facing, which is something that I am aiming to implement in my Python simulation.

+ +

The idea is that I will feed some input parameters into my simulation; however, a simulation is not able to perfectly capture all the dynamics involved in a real scenario. Hence, what I am aiming to do is to feed some inputs of the real scenario into my simulator and perform the simulation for all cases in which I have real data. So I will have the same amount of data for technically the same situation in both the real and simulated scenarios.

+ +

With my simulated data I can find the optimal parameters (for the simulation), so the idea now is to correlate my simulated model with the real data and then, with this correlation, find out what the equivalent of the optimal simulated parameters would be in terms of the optimal real parameters. Here is a not really precise diagram that might help with the visualization of the problem:

+ +

+ +

I have already seen a lot of machine learning being used to fit a set of data, but I haven't really seen anything that could help me with the task that I currently have at hand, i.e. ""fitting models"". So here comes the question: how to correlate the models and use that correlation to extract the optimal parameters.

+ +

I hope that I managed to be succinct and precise despite the length of the text. I would really appreciate your help on this one!

+",31880,,,,,12/5/2019 12:09,Correlating two models to predict the output of one that corresponds to an output of the other,,0,0,,,,CC BY-SA 4.0 +16950,1,,,12/5/2019 13:38,,1,51,"

I am building a convolutional neural network to predict certain categories based on images (the location of a pointer on a surface). However, in many cases there will be no pointer in the view, or there will be something that is not the pointer. Initially, I was just going to train it with outputs for the different classifications, including the null classification. However, given that the null classification is far more common than the others (perhaps 1000 times more likely), would it be better to have a separate null classifier, and then, if this outputs non-null, use the second classifier?

+ +

Any suggestions?

+",31882,,31882,,12/5/2019 15:16,12/5/2019 20:38,structure of neural network for classification problems with large amounts of null classifications,,1,4,,,,CC BY-SA 4.0 +16951,1,16954,,12/5/2019 19:23,,4,1407,"

There is this nice result for policy gradients that the gradient of some performance measure, $\nabla v_{\pi_{\theta}}(s_0)$ (here, in the episodic case for the starting state $s_0$ and policy $\pi$, parametrised by some weights $\theta$), is equal to the expectation of the gradient of the logarithm of the policy weighted by the return, i.e.

+ +

$$\nabla v_{\pi_{\theta}}(s_0)=\mathbb{E}\Big{[}\sum_{t=0}^{T-1}\nabla_\theta\log(\pi_{\theta}(a_t|s_t))\cdot G_t\Big{]},$$

+ +

where $G_t$ is the discounted future reward from state $s_t$ onward and $s_T$ the final state of some trajectory $(s_0, a_0, s_1, a_1, ..., s_{T-1}, a_{T-1}, s_T)$.

+ +

Now, when using a softmax policy, $\nabla_\theta\log(\pi_{\theta}(a_t|s_t)$ can be written as

+ +

$$\nabla_\theta\log(\pi_{\theta}(a_t|s_t))=\phi(s_t,a_t)-\mathbb{E}[\phi(s_t,\cdot)],$$

+ +

where $\phi(s,a)$ is some input vector of a state-action tuple.

+ +

However: what exactly is this vector? A typical input with policy gradients (for example in a neural network) is a feature vector for the state, and the output is a vector with dimensions equal to the number of actions, e.g. $(14, 15, 11, 17)^T$ for four possible actions. The softmax function now scales these outputs, which results in the probabilities $(.042, .114, .002, .842)^T$ in this example.

+ +

What I would usually do in neural networks is take some input vector, for example something that describes if there are borders in a grid world, e.g. $\phi(s)=(1, 0, 0, 1)^T$, and multiply that with my weight matrix $\theta$ (and add biases b), i.e. $\theta\phi(s)+b$. So, continuing above example, $1\cdot \theta_{1,1} + 0\cdot \theta_{1,2} + 0\cdot \theta_{1,3} + 1\cdot \theta_{1,4} = 14$ and $1\cdot \theta_{2,1} + 0\cdot \theta_{2,2} + 0\cdot \theta_{2,3} + 1\cdot \theta_{2,4} = 15$.

+ +

But what is $\phi(s,a)$ here? And how would I compute $\nabla_\theta\log(\pi_{\theta}(a|s))=\phi(s,a)-\mathbb{E}[\phi(s,\cdot)]$?

+",22161,,2444,,12/4/2020 18:31,12/4/2020 18:31,Eligibility vector for softmax policy with policy gradients,,1,0,,,,CC BY-SA 4.0 +16952,1,,,12/5/2019 20:15,,0,47,"

I have a dataset with different types of numerical values (both negative and positive numerical values) for the inputs (for example, -40, -35, 1, 25, 39, etc., that is, multiple inputs) and single output numerical value (either negative or positive).

+ +

I have tried to use linear regression, but I haven't been so successful and I think one of the reasons is negative values.

+ +

What is the best way to deal with this scenario? What model should I use?

+ +

I am using Keras for my AI model.

+",31544,,2444,,12/6/2019 14:57,12/6/2019 14:57,How to perform regression with multiple numeric (positive and negative) inputs and one numeric output?,,0,2,,,,CC BY-SA 4.0 +16953,2,,16950,12/5/2019 20:38,,1,,"

I see multiple reasons to take a different route:

+ +
    +
  1. While in ""classical"" pattern recognition you might have done things like feature engineering outside of your model, one idea of deep learning was to ""insource"" it into the model and let the model take care of ""everything"". Following that, I have seen a general tendency in deep net architectures to let your deep net handle all the work in one single model. So your idea kind of contradicts the architectural mainstream.

  2. +
  3. There are so many deep net architectures being published which are well engineered and tested for all kinds of tasks. Therefore, I would check if there is anything ""ready off the shelf"" for a task like yours in the very first place. If there is, then it will probably be better than your self-engineered model architecture. And if there is not, you might want to check related tasks and see how they solved this problem, e.g. medical applications where specific cells need to be detected with most examples being negative (and if there is a positive example the area in a picture might still be mostly negative as it is just a single positive cell among many others).

  4. +
  5. Your thought of feeding the ""present/not present"" binary classification to your task of locating an object is very closely related to what has been discussed as auxiliary tasks. I don't know if it has been applied to exactly your problem but there are similar applications. For example there is an application where ""name recognition"" is complemented by the auxiliary task of deciding whether any name is present in a given sentence or not. Which is somewhat similar to your case. The paper An Overview of Multi-Task Learning in Deep Neural Networks provides an overview. A famous example using auxiliary tasks would be GoogleNet. This also goes back to my first point of letting the deep net handle ""everything"" internally.

  6. +
+ +

However, as a disclaimer, this is a theoretical perspective as I cannot speak from experience regarding your problem.

+",30789,,,,,12/5/2019 20:38,,,,0,,,,CC BY-SA 4.0 +16954,2,,16951,12/5/2019 21:27,,5,,"

Calculation of gradient +\begin{align} +\nabla_{\theta} \log(\pi_{\theta}(a|s)) &= \phi(s,a) - \mathbb E[\phi (s, \cdot)]\\ +&= \phi(s,a) - \sum_{a'} \pi(a'|s) \phi(s,a') +\end{align} +is only valid for linear function approximation with action preferences of form +\begin{equation} +h(s, a, \theta) = \theta^T \phi(s,a) +\end{equation} +and softmax policy +\begin{equation} +\pi(a|s) = \frac{e^{h(s,a,\theta)}}{\sum_{a'} e^{h(s,a',\theta)}} +\end{equation} +The gradient would be calculated as it is written. For example, if your current state is $s = (1, 1)$ and in that state you have actions $a_0 = 0$ and $a_1 = 1$ and probabilities for those actions are $\pi(a_0|s) = 0.7$, $\pi(a_1|s) = 0.3$ then gradient for action $a_0$ would be +\begin{equation} +\nabla_{\theta} \log(\pi_{\theta}(a_0|s)) = (1, 1, 0)^T - (0.7 \cdot (1, 1, 0)^T + 0.3\cdot (1,1,1)^T) = (0, 0, -0.3)^T +\end{equation} +Feature vector $\phi$ can be basically anything you want. For example you could stack state feature and action (like I did in small example), you could use polynomials, radial basis functions, tile coding, etc.
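A small NumPy check of that example (reproducing the numbers above; the feature vectors simply stack the state with the action, as described):

import numpy as np

phi = {0: np.array([1.0, 1.0, 0.0]),   # phi(s, a_0)
       1: np.array([1.0, 1.0, 1.0])}   # phi(s, a_1)
pi = {0: 0.7, 1: 0.3}                  # pi(a|s)

expected_phi = sum(pi[a] * phi[a] for a in phi)
grad_log_pi_a0 = phi[0] - expected_phi
print(grad_log_pi_a0)                  # [ 0.   0.  -0.3]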

+ +

If you're using multilayered neural network you would have to propagate gradients through all layers, usually done with backpropagation algorithm. Easiest way is to use automatic differentiation software (e.g. Tensorflow) which can do that for you so you don't have to write your implementation. All you have to do is define your objective function that you want to optimize +\begin{equation} +J_\theta = \sum_t \log(\pi(a_t|s_t, \theta)) G_t +\end{equation} +and software will automatically calculate gradient $\nabla J_{\theta}$ and update weights.
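A toy sketch of that objective with automatic differentiation (PyTorch here; the episode data is made up and the policy is deliberately trivial, just to show the mechanics):

import torch

theta = torch.tensor([0.1, -0.2, 0.3], requires_grad=True)    # toy policy parameters
probs = torch.softmax(theta.repeat(3, 1), dim=1)              # action probabilities for 3 time steps
actions = torch.tensor([0, 1, 2])                             # actions taken in the episode
log_probs = torch.log(probs[torch.arange(3), actions])        # log pi(a_t|s_t, theta)
returns = torch.tensor([1.0, 0.5, 0.2])                       # returns G_t

loss = -(log_probs * returns).sum()   # negative of J_theta, since optimizers minimize
loss.backward()                       # autodiff produces the policy gradient in theta.grad
print(theta.grad)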

+",20339,,20339,,12/5/2019 22:20,12/5/2019 22:20,,,,1,,,,CC BY-SA 4.0 +16956,2,,16946,12/6/2019 3:09,,1,,"

I think serali answered this question well, though I wanted to give some extra reading for those interested.

+ +

There are many ways of deciphering what a neuron in a NN is doing. This lecture does a fantastic job at covering some of these methods and is an incredibly interesting watch. This covers more advanced methods of visualising what a model is doing.

+",26726,,,,,12/6/2019 3:09,,,,0,,,,CC BY-SA 4.0 +16957,1,,,12/6/2019 3:12,,6,588,"

I have implemented minimax with alpha-beta pruning to play checkers. As my value heuristic, I am using only the summation of material value on the board regardless of the position.

+

My main issue lies in actually finishing games. A search with depth 14 draws against depth 3, since the algorithm becomes stuck in a loop of moving kings back and forth in a corner. The depth-14 player has a significant material advantage, with four kings and a piece against a single king; however, it moves only one piece.

+

I have randomly selected a move from the list of equally valued moves and this leads to more interesting games (thus preventing the loop). However, whichever player used this random tactic ended up far worse off.

+

I am not quite sure how to solve this problem. Should I do a deeper search of the best moves with the same value? Or is the heuristic at fault? If so, what changes would you suggest?

+

So far I have tried a simple genetically generated algorithm that optimizes a linear scoring function (that accounts for the position). However as the algorithm optimized, it led to only draws and the same king loop.

+

Any suggestions for how to stop this king loop are very welcome!

+",31894,,2444,,1/3/2021 13:17,1/3/2021 13:17,"To deal with infinite loops, should I do a deeper search of the best moves with the same value, in alpha-beta pruning?",,1,1,,,,CC BY-SA 4.0 +16958,2,,16957,12/6/2019 3:29,,1,,"

I think this issue stems from the fact you aren't taking position into account. I would think this because as the game progresses, the number of moves that will result in a piece being taken becomes less and less, especially when there's only a few pieces left and quite a bit of ""chasing"" must occur before a piece is taken, likely more chasing then a depth of 14 allows.

+ +

To remedy this, you could, towards the end of the game, add to the value of a state the inverse of the total distance each friendly piece has from the other pieces; that way the agent will try to move towards other pieces and minimise this distance. If you find the right scale for this heuristic, the agent will prioritise moving towards enemy pieces only when it can't find any moves that result in taking a piece, helping it break out of this loop.
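A minimal sketch of that idea (pieces as (row, column) tuples; the weight is a made-up scale that you would need to tune):

def evaluate(material_score, my_pieces, enemy_pieces, weight=0.1):
    # total Manhattan distance between every friendly piece and every enemy piece
    total_distance = sum(abs(r1 - r2) + abs(c1 - c2)
                         for (r1, c1) in my_pieces
                         for (r2, c2) in enemy_pieces)
    # reward closing in on the remaining enemy pieces, on top of the material evaluation
    return material_score + weight / (1 + total_distance)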

+",26726,,,,,12/6/2019 3:29,,,,0,,,,CC BY-SA 4.0 +16959,1,,,12/6/2019 5:12,,1,319,"

Assume somebody knows how to write only in Latin characters. If they write words of any other language (for example: Hindi, French, Latin) using the Latin alphabet, how can I detect that language?

+ +

Example: if they write a Hindi word using the Latin alphabet

+ +
+
+

kya kar raha hai

+
+
+ +
     >> the output is Hindi language
+
+",31573,,2193,,12/6/2019 11:46,12/6/2019 11:49,How to detect any native language when written in Latin characters?,,1,1,,,,CC BY-SA 4.0 +16960,1,,,12/6/2019 6:15,,1,23,"

I got a large dataset of images (dimensions of 16 x 16, 250k samples) and corresponding spherical coordinates (distributed uniformly in each coordinate). On these, I trained a convolutional regression network to directly yield the coordinates for a provided image. The network itself is rather simple and consists of multiple convolutional layers where the last of them is flattened and followed by some dense layers to get the desired output. Since the input image size is rather small, pooling layers are obsolete I think (doesn't make much difference if used).

+ +

If I now train on all of the data, I will get reasonable results in the end. But if I filter the data before training, i.e. only use coordinates which are limited by a certain radius, the network will increase its performance quite a bit, but it will only work well if my input image corresponds to the parameter space used during training.

+ +

So my question is whether the network isn't deep enough or has the wrong architecture to perform on the complete dataset with high confidence, or whether this is expected behaviour. One naive approach would be to train the network for different coordinate ranges and to store the weights for each of them. Then, you could train a classifier to decide in which range you are and use the previously determined weights for the network accordingly. But this seems strange to me, as a single network should somehow be able to achieve the same without this weird architecture, I think.

+ +

I would be pleased if someone has an idea how I could optimise the performance of my network to yield the best results over the whole coordinate space.

+",31896,,,,,12/6/2019 6:15,Optimisation of dependence of efficiency of CNN on training data,,0,0,,,,CC BY-SA 4.0 +16961,1,,,12/6/2019 6:17,,1,62,"

Let us say that I have two ball-throwing machines, which have some algorithm running in the back-end for releasing the balls. One machine shows that it throws 5 balls in 1 sec. The other shows the exact distribution of how many balls were thrown in each 0.2 secs (say the distribution is: 2, 1, 0, 1, 1), but the sum is 5 balls/sec for this machine too. Can I use this data and some other independent parameters, like speed, direction, etc., as inputs and predict a similar distribution for the lower-accuracy machine?

+ +

Re-framing my question:

+ +

I am searching for an apt supervised model for the following use case:

+ +

If I have a sum (say 10) and it can be distributed in a predefined number of bins (say 5) in a number of ways for instance:

+ +

1. 1, 2, 5, 0, 2

+ +

2. 0, 0, 3, 7, 0

+ +

etc.

+ +

The distribution bins always have whole numbers and the sum is also a whole number. The distribution depends on a number of factors and patterns which can be learned in the volumetric data. Hence, if I am able to load more than one sum (say n sums) and output the corresponding n*5 distributions it will be better for precise prediction (as per my intuitions). I tried using some networks but they are not doing much good.

+",31897,,31897,,12/27/2019 19:44,12/30/2019 19:09,Finding the right model,,0,0,,,,CC BY-SA 4.0 +16962,2,,16959,12/6/2019 8:58,,2,,"

The most distinct words of a language are usually the function words (the, and, of, with,...); other lexical items are often (at least partly) shared between languages that had come in contact with each other. So looking for function words is usually the best way to identify the language in a given text.

+ +

This can be done by having a list of function words for each language you recognise, look those words up in your text, and then calculate the probability that the text belongs to each language given the frequencies of the words in your list. There will be overlaps, for example Dutch of is equivalent to the conjunction or in English, but also looks like the English of.

+ +

Another approach would be to take texts in your languages, and split them into n-grams, typically trigrams. Then you use a list of trigrams and compare the frequency distributions in your known and unknown texts to find which is most likely to match. This has the advantage that you don't need to know anything about the language structure (because you don't need to identify function words), and it also captures the morphology (eg the common English suffix -ing).
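A small sketch of the trigram idea (the two reference sentences are only placeholders; real profiles would be built from much larger texts, all transliterated into the same character set):

from collections import Counter

def char_trigrams(text):
    text = ' ' + text.lower() + ' '
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

profiles = {
    'english': char_trigrams('what are you doing'),
    'hindi': char_trigrams('kya kar raha hai'),
}

def guess_language(text):
    grams = char_trigrams(text)
    # score = number of shared trigrams, counted with multiplicity
    scores = {lang: sum((grams & ref).values()) for lang, ref in profiles.items()}
    return max(scores, key=scores.get)

print(guess_language('kya kar rahe ho'))   # matches the Hindi profile best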

+ +

You can simply implement this in a fairly basic program, or you could train a machine learning classifier with it. The former seems easier to me, but is not as exciting as the latter.

+ +

However, for single words (as per your example) this might not work properly. Generally, the longer the text, the more precise the recognition.

+ +

UPDATE: On re-reading your question I think I have misunderstood it, and that you're asking about transliteration into Latin characters. In a way the same applies, that you need to first identify which language it is, and then choose the correct mapping from the Latin transliteration to the writing system that language natively uses.

+ +

Since the transliteration is purely symbolic, I don't think that an ML approach would be better than a simple lookup-table of equivalences. And of course there might be difficulties if the writing system has more distinct characters than the Latin alphabet provides.

+ +

UPDATE 2: If the input is transliterated text, then your reference texts (where you know the language) also have to be transliterated in the same character set (eg Latin in your example).

+",2193,,2193,,12/6/2019 11:49,12/6/2019 11:49,,,,2,,,,CC BY-SA 4.0 +16963,1,16966,,12/6/2019 10:57,,3,258,"

The specific problem I'm having is with a Fully Visible Belief Network. It is an explicit density model (though I don't know what qualifies something as such) that uses the chain rule to decompose the likelihood of an image x into a product of 1-d distributions.

+ +

+ +

What is meant by ""the likelihood of an image x""? With respect to what? I assume it refers to how common this image would be in the data set it is selected from? Like, if you had 1000 images, 800 of which were white and 200 of which were black, the model should ideally output 0.2 for any black image inputted? Obviously, with more complicated clustering like dogs vs cats it'd be a bit different, but that's my intuition. Is that correct?

+ +

Also as a side question, that equation looks very wrong. If you have an image of $1048\times720$ pixels, and say every pixel evaluates to have a probability of 0.9, you'd expect the final probability of the image to be 0.9 or 90%. But according to that equation, it's $0.9^{720*1048}$, which is stupidly small, essentially 0. What's going on here?

+",26726,,,,,12/6/2019 17:35,"In unsupervised learning, what is meant by ""finding the probability of an image""?",,1,4,,,,CC BY-SA 4.0 +16964,1,,,12/6/2019 11:18,,2,64,"

We are using a 2D laser scanner to scan various objects of different geometric shapes, e.g. cylinder, spiked, cylinder with notch, cylinder with curved edges, etc. The dataset contains points in the format [x, y], with the dimension of 1 complete scan being 160x2. The goal is to use these scan points to classify the various shapes.

+ +

I have used a multilayer NN with sigmoid as the final layer and the Adadelta optimizer for this problem, but the accuracy only reaches up to 70%.

+ +

Can anyone recommend a proper model that can be used for Laser Scanner Data Classification?

+ +
+ +

MODEL

+ +
# Imports needed for the snippet below (standalone Keras 2.x API)
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers

def baseline_model():
+    model = Sequential()
+    model.add(Dense(2048, input_dim=160, activation='relu'))
+    model.add(Dropout(0.1))
+    model.add(Dense(1024, activation='relu'))
+    model.add(Dropout(0.1))
+    model.add(Dense(512, activation='relu'))
+    model.add(Dropout(0.2))
+    model.add(Dense(256, activation='relu'))
+    model.add(Dropout(0.3))
+    model.add(Dense(128, activation='relu'))
+    model.add(Dropout(0.4))
+    model.add(Dense(64, activation='relu'))
+    model.add(Dropout(0.5))
+    model.add(Dense(32, activation='relu'))
+    model.add(Dropout(0.5))
+    model.add(Dense(6, activation='softmax'))
+    Adam = optimizers.Adam(lr=0.001)
+    Adadelta =  optimizers.Adadelta(lr = 1)
+    model.compile(loss='categorical_crossentropy', optimizer=Adadelta, metrics=['accuracy'])
+    return model
+
+",31900,,,,,5/12/2023 8:08,Applying Machine Learning to 2D Laser Scanner Data,,1,1,,,,CC BY-SA 4.0 +16965,1,,,12/6/2019 16:58,,2,87,"

I'm working with a problem where I have a lot of variables for different cases of different users. Depending on the values of the different variables of a concrete user in a concrete case, the algorithm must classify that user in that case as:

+ +
    +
  • Positive
  • +
  • Negative
  • +
+ +

But if the user is classified as positive, it must be classified as:

+ +
    +
  • Positive normal
  • +
  • Positive high
  • +
  • Positive extra-high
  • +
+ +

If a case is positive, depending on the values of a subset of the parameters, we know that the probability of being, for example, positive normal is higher or lower.

+ +

To sum up, I see the problem as a spam detector with different spam types.

+ +

Would this work if I apply an algorithm like:

+ +
    +
  • Random Forest
  • +
  • Decision Tree
  • +
+ +

Or maybe I can include the negative case as a new group and then implement a K-means algorithm? Maybe this would help to find new groups of parameters that will say that the concrete case forms part of a group for sure.

+ +

Which one will fit best with a lot of parameters?

+",31902,,2444,,12/8/2019 3:14,12/8/2019 3:14,"How can I classify instances into two categories and then into sub-categories, when the number of features is high?",,0,5,,,,CC BY-SA 4.0 +16966,2,,16963,12/6/2019 17:35,,2,,"

When you say likelihood, you are invoking several other concepts like events, sample, parameters, model, probability density function (PDF), etc. (it would be helpful if you learn more about these concepts). In essence, a likelihood function $l(x|\theta)$ is a PDF that quantifies how likely it is that event $x$ happens out of a set of possible events, given the parameters $\theta$ that define your model.

+ +

In the specific case of images, the set of possible events is usually one of two options: 1) all the available images in a dataset, or 2) all the existing images. Usually you want to model the likelihood in option 2), but only having access to a sample of all the possible images. In either case, the likelihood is just the probability that you select one image out of all the possible ones. If you consider only images of $1048\times 720$ pixels, the possible amount of images is $(256^3)^{1048\times 720}$, where I am assuming that each pixel consists of 3 colors and each color can take 256 values. Since the amount of possible images is so big, it is very common that the probability of selecting a specific one is very very small. This is a reason why you usually work with the log-likelihood (the logarithm of the likelihood) instead of directly using likelihoods. For example, if all your images were equally probable, the likelihood would be in the order of $10^{-{10^7}}$, while the log-likelihood would be around $-10^7$.

+ +

To resolve your paradox about the probability of images and pixels, consider that instead of pixels you have coins and instead of images you have sequences of coin tosses. Say you have a fair coin, so the probability of tails ($T$) after tossing it is 0.5. If you toss a second coin, the probability of getting $T$ again is naturally 0.5 as well, but what is the probability that both results were $T$? It is the product (0.25), since the events are independent. Similarly, each of the other three sequences $TH$, $HT$ and $HH$ has probability 0.25. Since the probability needs to be shared equally between 4 sequences, each sequence of length 2 is less probable than a sequence of length 1. If you toss the coin 3 times, the probability of all tosses being tails is just $0.5^{3}$. Again, there are now 8 possible sequences, all sharing the same amount of probability. You can see what's going on: as the number of possible outcomes grows, the probability of each particular sequence of coins shrinks. Clearly, you would never toss a coin 10 times and expect to obtain all $T$, right? Well, exactly the same happens in the case of the pictures.
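
To make these numbers concrete, here is a minimal sketch (my own illustration, assuming NumPy and a toy model in which every pixel value is independent and uniform over 256 values) of why one works with the log-likelihood:

import numpy as np

# Hypothetical 720 x 1048 RGB image; under the toy uniform model each of its
# values has probability 1/256, so the likelihood of the whole image is
# (1/256)**image.size, which underflows to 0.0 in floating point.
image = np.random.randint(0, 256, size=(720, 1048, 3))
log_likelihood = image.size * np.log(1.0 / 256.0)
print(log_likelihood)   # roughly -1.3e7 (natural log): a tiny likelihood, a manageable log-likelihood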

+",30983,,,,,12/6/2019 17:35,,,,2,,,,CC BY-SA 4.0 +16967,1,,,12/6/2019 20:13,,2,48,"

I would like to know if my understanding of RPN training is correct, and if never training the RPN on some specific anchor box is bad (i.e if the anchor never sees good nor bad examples).

+ +

To make my point clear, assume we have two functions: $f_{\theta_1}$, which represents the backbone that outputs a feature map of size $n$ (assume flattened) for an image of size $m$ (WLOG assume the image is flattened), +$$ +f_{\theta_1}: \mathbb{R}^m \to \mathbb{R}^n, +$$ +and $f_{\theta_2}$, which represents the 'objectness' of each anchor box. We can suppose that $f_{\theta_2}$ and $f_{\theta_1}$ are convolutional neural networks, where $\theta_1, \theta_2$ are the networks' parameters. For simplicity, assume the RPN does not output bounding box corrections, and only outputs the probabilities that an anchor box is an object or not, +$$ f_{\theta_2}: \mathbb{R}^n \to \mathbb{R}^{k \cdot n}.$$ +We can assume $k=1$, which is the number of boxes per anchor.

+ +

If my understanding is correct, we select $p$ good proposals $G_p$, and $p$ bad proposals $B_p$ for training the RPN, which are indices of good and bad predictions. In other terms, if $x$ is an image (assume flattened), then $f_{\theta_2}(f_{\theta_1}(x)) = y$, next we only back-propagate the loss for coordinates $B_p$ in $y$ and $G_p$ in y. For instance, if $p=1$, and $G_p = \{i\}$ and $B_p = \{j\}$ and $ 1\leq i \neq j \leq n$ then we only compute the loss of the RPN for coordinates $i$ and $j$ in $y$. +My questions are:

+ +

1- Is my understanding correct? and if not, how do we perform training?

+ +

2- Assuming my understanding is right or partially right about the last step, what happens if, for example, we never train component $y_0$ of the RPN's output (i.e. we never back-propagate the loss through some components of $y$)? Wouldn't this be a problem (i.e. hurt performance, or prevent the network from training well at all in some cases)?

+",21484,,21484,,12/9/2019 0:48,12/9/2019 0:48,FasterRCNN's RPN network training,,0,0,,,,CC BY-SA 4.0 +16968,2,,16896,12/7/2019 1:00,,0,,"

I must simply direct you to this excellent blog post on Machine Learning Mastery:

+ +

https://machinelearningmastery.com/what-is-holding-you-back-from-your-machine-learning-goals/

+",14777,,,,,12/7/2019 1:00,,,,0,,,,CC BY-SA 4.0 +16969,2,,16896,12/7/2019 9:35,,3,,"

This course is focused on machine learning using MATLAB, which is less practical nowadays, as it is a language aimed mainly at numerical computing and is not well suited to building GUIs or networked applications. The language is powerful but limited in some ways. Nowadays most people use Python for machine learning, as it is versatile and can easily connect to other backends such as C++, Java and JavaScript. It is also a general-purpose language and, unlike MATLAB, can do many things beyond numerical computing.

+ +

If you really want to take this course, I would recommend first learning the MATLAB language and some basic calculus, such as derivatives. This would greatly help with the course.

+ +

However, if you are serious about machine learning, I would encourage you to enroll in the Deep Learning specialization, also by Andrew Ng, on Coursera. +https://www.coursera.org/specializations/deep-learning +This course uses Python as the programming language and teaches more modern approaches to deep learning, such as recurrent neural networks, convolutional neural networks and more. It also talks more about applications of neural networks. There is theory as well, but it covers both how a specific algorithm works and how it is applied.

+ +

Hope I can help you.

+",23713,,,,,12/7/2019 9:35,,,,4,,,,CC BY-SA 4.0 +16970,1,16971,,12/7/2019 9:40,,6,2918,"

It is my understanding that, in Q-learning, you are trying to mimic the optimal $Q$ function $Q*$, where $Q*$ is a measure of the predicted reward received from taking action $a$ at state $s$ so that the reward is maximised.

+ +

I understand for this to be properly calculated, you must explore all possible game states, and as that is obviously intractable, a neural network is used to approximate this function.

+ +

In a normal case, the network is updated based on the MSE between the actual reward received and the network's predicted reward. So a simple network that is meant to choose a direction to move would receive a positive gradient for all state predictions for the entire game and do a normal backprop step from there.

+ +

However, to me, it makes intuitive sense to have the final layer of the network be a softmax function for some games. This is because in a lot of cases (like Go for example), only one ""move"" can be chosen per game state, and as such, only one neuron should be active. It also seems to me that would work well with the gradient update, and the network would learn appropriately.

+ +

But the big problem here is, this is no longer Q learning. The network no longer predicts the reward for each possible move, it now predicts which move is likely to give the greatest reward.

+ +

Am I wrong in my assumptions about Q learning? Is the softmax function used in Q learning at all?

+",26726,,,,,6/23/2020 8:02,Does using the softmax function in Q learning not defeat the purpose of Q learning?,,2,1,,,,CC BY-SA 4.0 +16971,2,,16970,12/7/2019 9:59,,4,,"
+

However, to me, it makes intuitive sense to have the final layer of the network be a softmax function for some games. This is because in a lot of cases (like Go for example), only one ""move"" can be chosen per game state, and as such, only one neuron should be active.

+
+ +

You are describing a network that approximates a policy function, $\pi(a|s)$, for a discrete set of actions.

+ +
+

It also seems to me that would work well with the gradient update, and the network would learn appropriately.

+
+ +

Yes there are ways to do this, based on the Policy Gradient Theorem. If you read it you will probably discover this is more complex to understand than you first thought, the problem being that the agent is never directly told what the ""best"" action is in order to simply learn in a supervised manner. Instead, it has to be inferred from rewards observed whilst acting. This is a bit harder to figure out than the Q learning update rules which are just sampling from the Bellman optimality equation.

+ +

You can split Reinforcement Learning methods broadly into value-based methods and policy gradient methods. Q learning is a value-based method, whilst REINFORCE is a basic policy gradient method. It is also common to use a value-based method within a policy gradient method in order to help estimate the likely future return used to drive the policy gradient updates - this combination is called Actor-Critic, where the actor learns a policy function $\pi(a|s)$ and the critic learns a value function, e.g. $V(s)$.

+ +
+

But the big problem here is, this is no longer Q learning. The network no longer predicts the reward for each possible move, it now predicts which move is likely to give the greatest reward.

+
+ +

This is true, but it is not a big problem. The main issue is that policy gradient methods are more complex than value-based methods. They may or may not be more effective; it depends on the environment you are trying to create an optimal agent for.

+ +
+

Is the softmax function used in Q learning at all?

+
+ +

I cannot think of any non-contrived environment in which this function would be useful for an action value approximation.

+ +

However, it is possible to use a variant of softmax to create a behaviour policy for Q learning. This uses a temperature hyperparameter $T$ to weight the Q values, and provide a probability of selecting an action, as follows

+ +

$$\pi(a_i|s) = \frac{e^{Q(s,a_i)/T}}{\sum_j e^{Q(s,a_j)/T}}$$

+ +

When $T$ is high, all the actions will have similar probabilities; when it is low, even a small difference in $Q(s,a_i)$ will make a big difference to the probability of selecting action $a_i$. This is quite a nice distribution for exploring whilst avoiding previously bad decisions. It will tend to focus the agent on exploring differences between similarly high-rated actions. The main issue with it is that it introduces hyperparameters for deciding the starting $T$, the ending $T$ and how to move between them.
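
As an illustration (my own sketch, not part of any standard library), such a softmax behaviour policy can be implemented in a few lines, assuming the Q values for the current state are available as a NumPy array:

import numpy as np

def softmax_policy(q_values, temperature):
    # Subtract the max for numerical stability before exponentiating
    prefs = (q_values - np.max(q_values)) / temperature
    probs = np.exp(prefs)
    probs /= probs.sum()
    # Sample an action index according to the temperature-weighted probabilities
    return np.random.choice(len(q_values), p=probs)

q = np.array([1.0, 1.2, 0.5])
a_explore = softmax_policy(q, temperature=10.0)   # actions almost equiprobable
a_exploit = softmax_policy(q, temperature=0.05)   # the best action dominates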

+",1847,,1847,,12/8/2019 14:29,12/8/2019 14:29,,,,0,,,,CC BY-SA 4.0 +16973,1,,,12/7/2019 14:38,,1,318,"

I need some explanation about the following paragraph (page 3) from the paper A Novel Approach for Robust Multi Human Action Detection and Recognition based on 3-Dimentional Convolutional Neural Networks.

+ +
+

We introduce a 3D convolution neural network with the following notations: $I(x, y, d)$ as an input video with a size of $x \times y$ and $d$ the temporal depth

+
+ +

What is ""temporal depth""? Is it the number of frames?

+",31910,,2444,,12/8/2019 13:50,2/15/2020 9:54,"What is ""temporal depth""?",,1,0,,,,CC BY-SA 4.0 +16974,1,16977,,12/7/2019 14:43,,0,419,"

Now, I know this might break some StackExchange rules and I am definitely open to taking the thread down if it does! +I am trying to build an AI that can write its own book, and I have no idea where to start or what the appropriate algorithms and approaches are. +How should I start, and what exactly do I need for such a project?

+",3894,,3894,,12/7/2019 14:51,12/7/2019 17:06,Building an AI that generates text by itself,,2,3,,,,CC BY-SA 4.0 +16975,2,,16974,12/7/2019 15:27,,2,,"

Recurrent Neural Networks (RNNs) have been applied to generate text. In this blog post you will find a couple of interesting text examples (the author also has made his code available on github), e.g. their Shakespeare-like texts generated by an RNN:

+ +
+

PANDARUS: + Alas, I think he shall be come approached and the day + When little srain would be attain'd into being never fed, + And who is but a chain and subjects of his death, + I should not sleep.

+ +

Second Senator: + They are away this miseries, produced upon my soul, + Breaking and strongly should be buried, when I perish + The earth and thoughts of many states.

+ +

DUKE VINCENTIO: + Well, your wit is in the care of side and that.

+ +

Second Lord: + They would be ruled after this chamber, and + my fair nues begun out of the fact, to be conveyed, + Whose noble souls I'll have the heart of the wars.

+ +

Clown: + Come, sir, I will make did behold your worship.

+ +

VIOLA: + I'll drink it.

+
+ +

As you can see the RNN is able to somewhat mimic the ""flow"" of the texts it has been trained on but some sentences (like at the very end) do not make much contextual sense.

+ +

Moreover, RNNs have been trained to generate other content, e.g. drawing numbers (see here) or creating music (see here).

+",30789,,30789,,12/7/2019 15:35,12/7/2019 15:35,,,,3,,,,CC BY-SA 4.0 +16977,2,,16974,12/7/2019 17:06,,3,,"

There have been many methods proposed for text generation, but recurrent networks dominate natural language processing thanks to a key component: a perception of time.

+

Many kinds of models have been tried for text generation, with notable examples such as Markov chains. However, RNNs have proven to work best and currently dominate the field of language modelling (text generation).

+

How text generation works

+

A neural network that generates text is commonly called a language model. It is trained on a large amount of text, with the label at each position being the next token. The text generation process uses a few seed tokens as the starting phrase and then the network predicts the rest. However, the network does not simply pick the most probable word; instead, it randomly samples one of the few most probable tokens, hence the generative part.
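
As a rough illustration of that sampling step (my own sketch; the probabilities are made up and would normally come from the model's softmax output over the vocabulary):

import numpy as np

def sample_next_token(token_probs, k=5):
    # Keep only the k most probable tokens, renormalise, and sample one of them
    # instead of always taking the argmax.
    top_k = np.argsort(token_probs)[-k:]
    p = token_probs[top_k] / token_probs[top_k].sum()
    return np.random.choice(top_k, p=p)

probs = np.array([0.05, 0.6, 0.3, 0.05])       # hypothetical model output for 4 tokens
next_token_id = sample_next_token(probs, k=2)  # returns index 1 or 2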

+

Why RNN work best on language modelling

+

RNNs have a perception of time built into the architecture of the network. The LSTM, a popular RNN variant, is composed of "memory units" that "remember" past text, hence the "time" part. RNNs process the input in temporal order, so the network can naturally understand time, hence the superior performance compared to other networks.

+

Architecture of language model

+

A language model consists of an encoder and a decoder. The encoder compresses the one-hot representation of each word into a smaller vector representation. This smaller representation is then passed through the decoder, which maps the encoding back to one-hot vectors over the vocabulary.

+

State of the art results for language modelling

+

Language modelling is an actively researched field in the AI community, and recently the model GPT-2 has achieved a breakthrough in language modelling accuracy, producing almost human-like text with a special component added, the attention layer. Attention basically maps the "memory states" of the encoder and feeds them as input to the decoder. The data the model is trained on is also very large, with over 20GB of web-scraped data from sites like Reddit.

+

Limits of language modelling

+

One limit of language modelling is the length of the generated text. As GPUs don't have unlimited memory, language models usually limit the input to a specific number of tokens, padding or trimming to that length. The number is usually 500-1000 tokens, which covers a paragraph or two, but not an entire book. You can only generate short paragraphs and essays with language modelling; generating long texts is much harder.

+

Resources to help you get started

+

GPT-2 OpenAI blog: https://openai.com/blog/better-language-models/
GPT-2 online interactive site for text generation: https://talktotransformer.com/
How to train and fine-tune GPT-2 in Python: https://minimaxir.com/2019/09/howto-gpt2/

+

Hope I can help you

+",23713,,-1,,6/17/2020 9:57,12/7/2019 17:06,,,,2,,,,CC BY-SA 4.0 +16978,1,,,12/7/2019 18:03,,0,63,"

I am currently reading the paper ""Similarity of Narratives"" by Loizos Michael (link below) and I am having a hard time figuring out the definitions listed (p.107 - p.109).

+ +

Could someone please give me a practical example for each of the definitions?

+ +

Article: http://narrative.csail.mit.edu/cmn12/proceedings.pdf

+",31914,,,,,12/7/2019 20:02,Need examples for the following definitions,,1,1,,,,CC BY-SA 4.0 +16979,2,,16978,12/7/2019 20:02,,3,,"

The 'vocabulary' used is:

+ +
    +
  • $\mathscr{F}$ - fluents, conditions that change over time. The predicate holds(<fluent>,<state>) defines whether a condition applies in a given state.

  • +
  • $\mathscr{A}$ - actions, describe, erm, actions that happen. The predicate occurs(<action>,<state>) specifies that a given action happens at a particular state (presumably triggering a change to a new state, and changing the value of one or more fluents)

  • +
  • $\mathscr{T}$ - time points, which specify a chronological ordering.

  • +
+ +

Now we can look at the actual definitions: a discourse combines events (actions) and facts (fluents) and provides a (partial) ordering, i.e. a sequence in which they occur. This is represented in an acyclic graph. So the discourse describes a sequence of events that happen in a particular order and have an effect on facts. The ordering is relative, as there are no absolute time reference points given. Actions and facts are expressed in predicate calculus by the predicates mentioned above. Actions can involve a character moving from location A to location B; the fluent describing the location of the character will have been changed through that action.

+ +

An embedding assigns a given point in time to each state in the discourse. Each state is assigned a time point in a way that is consistent with the relative timings of the states in the discourse. So you have a general story, and you fix each action in time. The absolute time of an action in the embedding has to be before/after other actions according to their order in the discourse.

+ +

A domain adds two further predicates to a discourse, namely static($\phi$) and causes($\phi$,<fluent>). These formulas $\phi$ express logical constraints, that add consistency to the discourse. For example, if an action involves killing a character, the consequence has to be that the character is dead afterwards. So that action mandates a particular change, which would be encoded by such a formula. If the character was then to do some action after having been killed, that would invalidate the discourse in that domain. Different domains might have different constraints etc.

+ +

A model then works out if the domain is consistent, by evaluating all the actions and events using the domain constraints, and assigning it a truth value depending on whether it is possible or not. I might be wrong there, as I'm not sure I have interpreted the definition correctly — please let me know in a comment if that is the case.

+ +

Basically, if a model for a domain exists, it means that the actions and facts are consistent, and hence the narrative works (it is ""a discourse compatible with a given domain""): a narrative is a discourse in a domain which, when embedded in a timeline, is consistent.

+ +

A default domain is one domain out of a set of other domains which seems preferable over the others.

+ +

(I have to stop here for lack of time — I might be able to expand this answer later)

+",2193,,,,,12/7/2019 20:02,,,,0,,,,CC BY-SA 4.0 +16981,1,,,12/7/2019 20:58,,1,40,"

Is there a seq-to-seq model which does not require to know the start and end of a sentence? I need to model a system which gets a long sequence of words and creates a long sequence of tokens as long as the input. For example it takes a sequence of 1000 words and creates a sequence of 1000 tokens, each token corresponds to an input word.

+",29645,,,,,12/7/2019 20:58,Sequence-to-Sequence models without specifying the start and end of sentences,,0,2,,,,CC BY-SA 4.0 +16982,1,,,12/7/2019 21:08,,2,72,"

Suppose an object detection algorithm is good at detecting objects and people when an object and person is close to a camera and upright. If the person walks farther away from the camera and is ""upside-down"" from the perspective of the camera (e.g. a fisheye camera), should the algorithm still be good at detecting people and objects in this position?

+",31916,,2444,,12/8/2019 14:34,12/8/2019 14:34,Can a trained object detection model deal with variations of the input?,,1,0,,,,CC BY-SA 4.0 +16983,2,,16982,12/7/2019 23:20,,1,,"

Not necessarily. Suppose your data comes from the distribution of possible images containing an upright person close to the camera. Something like a neural network would then perform poorly on the new data, since it comes from a different distribution than the one it was trained on.

+ +

You could try augmenting the dataset to try to get some synthetic ""far away upside down people"" but there are no guarantees here.

+ +

I'll look for the source, but A. Ng cites an experiment where a team trained a neural network on a large dataset of vehicle images. They did not realize that the cars were all facing the same direction, and their model performed very poorly on images that were very similar, with the primary difference being a horizontal flip.

+",28343,,28343,,12/7/2019 23:37,12/7/2019 23:37,,,,0,,,,CC BY-SA 4.0 +16985,1,,,12/8/2019 1:06,,1,2336,"

I wonder if there is a way to solve the snake game using a Hamiltonian-cycle algorithm. +If there is a way:

+ +
    +
  1. how to apply it?
  2. +
  3. what data structure will be used with the algorithm?
  4. +
  5. time complexity and space complexity?
  6. +
  7. is this algorithm an optimal solution, or is there a better way?
  8. +
+",31918,,1671,,12/9/2019 20:22,12/9/2019 20:22,How to solve Snake Game with a Hamiltonian graph algorithm?,,1,1,,,,CC BY-SA 4.0 +16987,2,,9912,12/8/2019 10:03,,1,,"

Good question. I think this part of the book is not well explained.

+

Off-policy evaluation of $V$ by itself doesn't make sense, IMO.

+

I think there are two cases here

+
    +
  1. The first is if $\pi$ is deterministic, as we probably want in the case of "control", i.e. we will make $\pi$ deterministic and, in every state, choose the action that is most likely to maximize the rewards/returns. +In that case, evaluating $V$ from a different distribution might not be so useful, as $W$ becomes $0$ with high likelihood. I don't see much sense in it.

    +
  2. +
  3. The second is if $\pi$ is not deterministic. And it's a good question why we would want to evaluate $V_\pi$ from data generated by $b$, instead of just evaluating $\pi$ directly.

    +
  4. +
+

So, IMO, off-policy evaluation of $V_\pi$ doesn't make any sense.

+

However, I think the goal here is actually the control algorithm given in the book (using $q(s,a)$, p. 111 of the book [133 of the pdf]). The idea here is to use some arbitrary behavior/exploratory policy and, while it runs, update ("control") the policy $\pi$. In there, you use the update rule for $W$, which uses the idea of importance sampling - i.e. how to update the expected value of $\pi$ based on $b$. But there it ACTUALLY makes sense.
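
As a rough, self-contained sketch of that update rule (my own illustration, following the book's variable names; the tiny episode below is made up):

import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
Q = np.zeros((n_states, n_actions))
C = np.zeros((n_states, n_actions))        # cumulative sum of the weights W
b_prob = 1.0 / n_actions                   # behaviour policy b: uniformly random

episode = [(0, 1, 0.0), (3, 0, 0.0), (4, 1, 1.0)]   # (state, action, reward) generated by b

G, W = 0.0, 1.0
for state, action, reward in reversed(episode):
    G = gamma * G + reward
    C[state, action] += W
    Q[state, action] += (W / C[state, action]) * (G - Q[state, action])
    if action != np.argmax(Q[state]):      # greedy target policy pi
        break                              # pi(a|s) = 0, so W would become 0
    W *= 1.0 / b_prob                      # importance ratio pi(a|s)/b(a|s), with pi(a|s) = 1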

+

So, I suspect the evaluation was given by itself just so the reader can better understand how to do the evaluation, though it really doesn't make sense outside the control algorithm.

+",27947,,2444,,11/5/2020 22:08,11/5/2020 22:08,,,,2,,,,CC BY-SA 4.0 +16990,1,16991,,12/8/2019 13:41,,1,54,"

I have written an AI that plays a strategy board game. There are lots of different types of moves (e.g. attack, defend, help ally colony, etc.).

+ +

I calculate the best moves to do depending on a variety of factors, such as the value of nearby enemy colonies, the number of armies the colony currently has, etc (each of these has separate weightings). I'm trying to find the optimal weighting for each of the different factors.

+ +

Currently, I decide the best configuration of parameters in a King of the Hill style tournament. I choose random values between a suitable range for each of the different parameters and then play two of these AI against each other 20 times. I have a total of 100 AI that play against the king, and then take the final king as the best AI.

+ +

The problem is that this is quite slow and I feel it's very inefficient, as a lot of the AI don't play well at all (probably due to the randomness of parameter values).

+ +

I'm wondering if there's a more efficient way to determine the optimal value of parameters?

+",31932,,2444,,12/8/2019 14:48,12/8/2019 18:29,How should I weight the factors that affect the choice of an action in a strategy board game with multiple actions?,,1,0,,,,CC BY-SA 4.0 +16991,2,,16990,12/8/2019 18:29,,2,,"

You could use a genetic algorithm to optimise the parameter settings. Here you don't choose random parameters all the time, but only at the beginning. Each AI (which is a vector of parameter settings) plays each of the others to establish a ranking (you can probably reduce the total number of games by using a ladder-style ranking where only neighbours play against each other). Also, you don't need 100 to start with, as you will come across new combinations, better than those present initially, as the process goes on.

+ +

Then you discard the worst AIs (ie parameter vectors), and recombine the best ones, adding some random mutations into the mix. In theory this should converge faster, as you preserve good parameter settings (depending on how interdependent they are) and remove bad ones from the 'pool'.

+ +

You can also have a higher rate of mutations initially, which slowly goes down as you progress through the generations.
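
A rough, self-contained sketch of such a loop (my own illustration; the fitness function here is just a placeholder for 'play a tournament with these parameters and return the win rate'):

import random

N_PARAMS, POP_SIZE, GENERATIONS = 6, 20, 50

def fitness(params):
    return -sum((p - 0.5) ** 2 for p in params)   # stand-in for tournament results

def mutate(params, rate):
    return [p + random.gauss(0, rate) for p in params]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]               # discard the worst half
    rate = 0.3 * (1 - gen / GENERATIONS)                 # mutation rate goes down over time
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)), rate)
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)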

+",2193,,,,,12/8/2019 18:29,,,,0,,,,CC BY-SA 4.0 +16992,2,,16985,12/8/2019 19:52,,2,,"

A Hamiltonian path in a graph is a path that visits all the nodes/vertices exactly once; a Hamiltonian cycle is a cyclic such path, i.e. all nodes are visited once and the start and end points are the same. +If we want to solve the snake game using this, we can divide the playable space into a grid and then simply keep traversing a Hamiltonian cycle over it. This means you would eventually collect all the rewards and never hit yourself, unless you are longer than can fit on the screen. +You can look at the code given in this Github repo.

+ +

Note that this method, as you can tell, is not going to be time-optimal; there could be better solutions than this. +You will have to use a graph to store all the nodes, and a brute-force search for a Hamiltonian cycle in a general graph is O(n!).
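
For a rectangular grid, though, there is a simple explicit construction, so no expensive search is needed. A minimal sketch (my own illustration, assuming an even number of rows; column 0 is kept as a 'return lane' and the other columns are traversed in a snake-like pattern):

def hamiltonian_cycle(rows, cols):
    # Works when 'rows' is even: the last snaked row ends next to column 0,
    # and column 0 is then used to walk back up to the start.
    assert rows % 2 == 0, 'this simple construction needs an even number of rows'
    path = [(0, 0)]
    for r in range(rows):
        cs = range(1, cols) if r % 2 == 0 else range(cols - 1, 0, -1)
        path.extend((r, c) for c in cs)
    path.extend((r, 0) for r in range(rows - 1, 0, -1))   # back up the first column
    return path

cycle = hamiltonian_cycle(4, 4)
assert len(cycle) == 16 and len(set(cycle)) == 16         # every cell visited exactly once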

+",8549,,,,,12/8/2019 19:52,,,,0,,,,CC BY-SA 4.0 +16995,2,,16943,12/9/2019 3:15,,1,,"

I'm not sure why you need a continuing environment, but actually you can make most (if not all) OpenAI Gym environments continuing. When you perform a step, you receive information about the next state, the reward, a termination signal and a dictionary with additional information. Simply ignore the termination signal if you want the episodes to continue indefinitely. In some cases you will need to modify a variable of the environments called $\texttt {_max_episode_steps}$, which may force the simulation to stop or to restart.
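
A minimal sketch of this (my own example, using the Gym API of the time; CartPole is just a stand-in for whatever environment you use):

import gym

env = gym.make('CartPole-v1')
env._max_episode_steps = 10 ** 9          # effectively removes the time-limit truncation

obs = env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())
    # Decide yourself what to do with 'done' here: ignore it if the task should
    # be treated as continuing, or call env.reset() only when you choose to.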

+ +

About your second question, check a resource called Spinning Up, from OpenAI too. They explain several methods and their implementation.

+",30983,,,,,12/9/2019 3:15,,,,6,,,,CC BY-SA 4.0 +16996,1,,,12/9/2019 4:49,,1,4054,"

I've been using OpenAI's 2017 Sentiment Neuron implementation (https://github.com/openai/generating-reviews-discovering-sentiment) for a while, because it was easy to set up and was the most accurate on benchmarks. What is the most accurate alternative now that I should use?

+",31165,,,,,12/9/2019 19:46,What is the most accurate pretrained sentiment analysis model by 2019?,,1,0,,,,CC BY-SA 4.0 +16997,1,17000,,12/9/2019 6:12,,1,79,"

I just wanted to confirm that my understanding of Q learning was correct (with respect to a neural network).

+ +
The network, Q, is initialised randomly.
+for n ""episodes"":
+    The state, s1, is initialised randomly
+    while s1 != terminal state:
+        s1 is fed into Q to get action vector q1
+        action a1 is chosen based off the max of q1 (or randomly)
+        state s2 is found by progressing s1 based on a1
+
+        s2 is fed into Q to get action vector q2
+
+        The expected output for Q at q1, y, is found by:
+        {If s2 is terminal, it is the reward at s2
+        {Otherwise: reward at s2 + gamm*max(q2)
+        (The ""otherwise"" doesn't match bellmans equation as α=1)
+
+        Do gradient step where error = (y - max(q1))^2, only the max of q1 gets any gradient
+        s1 = s2
+
+ +

This does not directly follow equations found by searching Q-learning as I find them rather ambiguous.

+ +

I am also not taking into account storing network states (in this case, the network is called Q) for proper learning to avoid catastrophic forgetting, as I'm more concerned on getting the specifics right before good practice.

+",26726,,26726,,12/9/2019 6:18,12/9/2019 14:36,Is the following the correct implementation of the Q learning algorithm for a neural network?,,1,0,,,,CC BY-SA 4.0 +16998,2,,16996,12/9/2019 7:14,,2,,"

For best results, I'd recommend Google Cloud Machine Learning. It has [Natural Language Processing API](https://cloud.google.com/natural-language/docs/basics) with Sentiment, Entity, and Entity-Sentiment analysis.

+ +

You can implement in C++, PHP, Python, or other languages. This does require running a virtual machine instance on Google Cloud. TensorFlow can also be used for NLP sentiment analysis. The only drawback is it's not economical.

+ +

As an alternative, here is something I came across recently: fastText (http://fasttext.cc). +They have some pre-trained models trained on Yelp and some other datasets, and they support a lot of languages.

+",31407,,25658,,12/9/2019 19:46,12/9/2019 19:46,,,,0,,,,CC BY-SA 4.0 +16999,1,,,12/9/2019 9:13,,1,287,"

I want to train a DQN agent to play a card game named Witches. It consists of 60 cards (1-14 in each of yellow, blue, green and red, plus 4 wizards). The color of the first card played has to be followed by the other players (if they have that color in hand). The one who played the card with the highest number collects the played cards. Each collected red card gives you -1 point.

+ +

With respect to this answer, I set up the inputs/state of the NN as binary values, meaning I have 180 boolean values: 60 for 'card x is currently on the table', 60 for 'card x is in the AI's hand', and 60 for 'card x has already been played'.

+ +

How to design the outputs / actions?

+ +
    +
  • If the ai is the first player of a round it can play any card
  • +
  • If the ai is not first player it has to respect the first played card (or play a wizzard)
  • +
+ +

This means there is actually a list of available options. I then sort this list and have 60 output booleans, which I set to 1 if the corresponding option is possible. Among these options, the AI should then decide which option is best. Is this the correct procedure?

+ +

Inconsistent / Varying Action Space +This is what we have to deal with here. As explained here, I think neither a DQN nor policy gradient methods are the correct architecture to choose for solving such multi-agent card games. What architecture would you choose?

+ +

General procedure?

+ +

Assume I have 4 players: do I have to take the old state just before it is the AI's turn, and the new state directly after the round is finished?

+ +
my_game = game([""Tim"", ""Bob"", ""Lena"", ""Anja""])
+while True:
+    #1. Play unti AI has to move:
+    my_game.play_round_until_ai()
+
+    #2. Get old state:
+    state_old = agent.get_state(my_game)
+
+    #3. Get the action the AI should perform
+    action_ = agent.get_action(state_old, my_game)
+
+    #4. perform new Action and get new state
+    #reward rates how good the current action was
+    #score is the actual score of this game!
+    reward, done, score = my_game.finishRound(action_)
+
+    # 5: Calculate new state
+    state_new = agent.get_state(my_game)
+
+    #6. train short memory base on the new action and state
+    agent.train_short_memory(state_old, action_, reward, state_new, done)
+
+    #7. store the new data into a long term memory
+    agent.remember(state_old, action_, reward, state_new, done)
+
+    if done == True:
+        # One game is over, train on the memory and plot the result.
+        sc = my_game.reset()
+
+ +

My code so far is available here: https://github.com/CesMak/witches_ai

+",31928,,31928,,12/10/2019 16:47,12/10/2019 16:47,DQN card game - how to represent the actions?,,1,0,,,,CC BY-SA 4.0 +17000,2,,16997,12/9/2019 10:52,,1,,"

Your pseudocode is fairly close to something that could be implemented. There are a couple of omissions, where it is not clear whether your implementation would do the right thing. There are a few non-general assumptions that are worth noting - they don't prevent you implementing for a specific purpose, but do prevent you writing the most generic DQN that you could. There is one mistake that prevents it working at all.

+ +

Your mistake is here:

+ +
+

Do gradient step where error = (y - max(q1))^2, only the max of q1 gets any gradient

+
+ +

The loss function and gradient must be applied to the state/action pair s1, a1, i.e. the action that was taken. This is an important distinction for when the agent takes exploratory (non-greedy) actions, as it must if it is going to learn about their values.

+ +

A corrected statement might read: Do gradient step where error = (y - Q(s1,a1))^2, only the output for a1 gets any gradient.
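
As a sketch of what that corrected step could look like in code (my own illustration, Keras-style, where Q is assumed to be a network with one output per action): the target vector is set equal to the current prediction everywhere except at a1, so only the output for the taken action receives any gradient.

import numpy as np

def train_on_transition(Q, s1, a1, r, s2, done, gamma=0.99):
    target = Q.predict(s1[None])[0]                       # start from the current predictions
    target[a1] = r if done else r + gamma * np.max(Q.predict(s2[None])[0])
    Q.fit(s1[None], target[None], verbose=0)              # error is non-zero only at index a1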

+ +
+ +

Regarding the rest of the pseudocode I could make the following observations:

+ +
+

The state, s1, is initialised randomly

+
+ +

This is fine if it is possible. However, it is more usual in Q learning to initialise the state to whatever the normal start of the episode would be. That might be from some randomised starting rules, but that is not quite the same as +picking a random state from all possible states.

+ +
+

action a1 is chosen based off the max of q1 (or randomly)

+
+ +

It is not quite clear what you mean by ""or randomly"". For Q learning - and most RL - exploration is important. So this is not a design time choice for your algorithm, but a runtime choice that must itself be made so that all actions can be tried. Typically you would use an $\epsilon$-greedy policy here that took the argmax action by default, but with probability $\epsilon$ took a random sample from actions with same chance of each possible action.

+ +
+

state s2 is found by progressing s1 based on a1

+
+ +

This is one of a few places where you have not quite captured the nature of a time step in a general MDP. You appear to have made the assumption that rewards are directly associated with states (with language like ""reward at s2""). Whilst some MDPs are like that, it is not true in general.

+ +

A better, more general statement might be:

+ +

s2, reward, done are found by taking a step from s1, using a1 as the action

+ +

Typically the environment (or a wrapper for it that establishes state representations and rewards) will provide a step method that returns the tuple $s_{t+1}, r_{t+1}, (s_{t+1} = s_T)$, although you could also hold these values in the environment model for the current time step and return them as required.

+ +

This statement:

+ +
+

(The ""otherwise"" doesn't match bellmans equation as α=1)

+
+ +

I don't know which Bellman equation you are referring to, but they don't include a learning rate $\alpha$ in any version I have seen. Perhaps you are referring to a tabular Q learning update? In any case, it doesn't seem relevant, as whatever learning rate you are using is now inside the gradient step, and does not need to be part of the pseudocode.

+ +

Finally, regarding this:

+ +
+

I am also not taking into account storing network states (in this case, the network is called Q) for proper learning to avoid catastrophic forgetting, as I'm more concerned on getting the specifics right before good practice.

+
+ +

DQN will very likely not work at all, for most environments, unless you implement experience replay i.e. training the neural network on a random sample of stored s1,a1,r,s2,done taken from a table that you build up on each time step. Usually you cannot simply use the latest data from the current step. This is not just ""good practice"", but an almost-always necessary stability improvement.
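
A minimal sketch of such a replay table (my own illustration, independent of any particular network code):

import random
from collections import deque

replay = deque(maxlen=50000)

def remember(s1, a1, r, s2, done):
    replay.append((s1, a1, r, s2, done))

def sample_batch(batch_size=32):
    # Train on a random mini-batch of stored transitions, not just the latest step,
    # and only once enough transitions have been collected.
    if len(replay) < batch_size:
        return []
    return random.sample(replay, batch_size)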

+",1847,,1847,,12/9/2019 14:36,12/9/2019 14:36,,,,7,,,,CC BY-SA 4.0 +17001,2,,16999,12/9/2019 10:57,,1,,"

Because your action space is not fixed (the set of legal actions changes from turn to turn), it would not be wise to have the network output a value per action. What would probably be better is evaluating next states. This means using your neural network to evaluate the value of the state after the action, instead of the value of an action.

+ +

Why is this better? +You can apply the neural net once for each possible action, instead of trying to make the neural network figure out which actions are possible. The result would be taking the action whose next state has the highest value.
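
A rough sketch of the idea (my own illustration; simulate_play and the value network are hypothetical stand-ins for whatever your game and model provide):

def choose_card(game, legal_cards, value_network):
    # Score each legal card by the estimated value of the state it would lead to,
    # then play the card with the best-looking afterstate.
    scores = [value_network.predict(game.simulate_play(card)) for card in legal_cards]
    return legal_cards[scores.index(max(scores))]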

+",29671,,,,,12/9/2019 10:57,,,,4,,,,CC BY-SA 4.0 +17002,1,,,12/9/2019 12:30,,2,245,"

What is the difference between game theory and machine learning?

+ +

I had gone through the papers Deep Learning for Predicting +Human Strategic Behavior, by Jason Hartford et al., and When Machine Learning Meets AI and Game +Theory, by Anurag Agrawal et al., but I am not able to understand.

+",9863,,2444,,12/9/2019 13:51,1/8/2022 0:57,What is the difference between game theory and machine learning?,,2,0,,,,CC BY-SA 4.0 +17003,2,,17002,12/9/2019 13:37,,5,,"

These are big areas, so here is a brief description of the differences:

+ +

Game theory is concerned with studying solutions for 'games', which are basically a set of decisions leading to certain outcomes. In game theory you look at strategies to achieve the best outcome for a given participant. One classic example (which isn't really a game in the traditional sense) is the Prisoner's Dilemma: you and your friend have been arrested, and if only one of you testifies against the other, that person goes free and the other one gets a long sentence. If you both testify against each other, you both get a medium sentence, and if you both keep quiet, you both get a short sentence. You don't know what your partner in crime does, so do you a) testify, or b) keep quiet? If you keep quiet, you might get off lightly if your partner also keeps quiet, but if he testifies, you are in it for a long time. If you testify, you protect yourself from the worst case, but if you both reason that way you end up worse off than if you had both kept quiet. What is your best choice?

+ +

Game theory is often used in economics to model behaviour, as a rational agent would try to optimise gains.

+ +

Machine Learning, on the other hand, is a way of training a statistical classifier. You feed features into an algorithm, and the algorithm then gives you a certain output, depending on the data you have trained it with. This hasn't got anything to do with game theory per se, but I guess you could use machine learning to train an algorithm to choose moves in a game situation and then compare how that matches the optimal choices according to game theory.

+ +

As I said, this is a very brief comparison. For more details I suggest you follow the links to read up on those two fields.

+ +

UPDATE: Now that the papers are accessible — game theory is indeed used as a benchmark. In the first paper, the rational agent assumption from game theory is being modeled, but without a human expert telling the algorithm what that means. So you learn (using deep learning) what it means to be rational. In the second paper the authors attempt to learn a better algorithm than tit-for-tat, and indeed use game theory as a theoretical framework for comparison/evaluation.

+",2193,,2193,,12/9/2019 13:44,12/9/2019 13:44,,,,0,,,,CC BY-SA 4.0 +17004,1,17022,,12/9/2019 14:26,,7,3469,"

Say I have a CNN with this structure:

+ +
    +
  • input = 1 image (say, 30x30 RGB pixels)
  • +
  • first convolution layer = 10 5x5 convolution filters
  • +
  • second convolution layer = 5 3x3 convolution filters
  • +
  • one dense layer with 1 output
  • +
+ +

So a graph of the network will look like this:

+ +

+ +

Am I correct in thinking that the first convolution layer will create 10 new images, i.e. each filter creates a new intermediary 30x30 image (or 26x26 if I crop the border pixels that cannot be fully convoluted).

+ +

Then the second convolution layer, is that supposed to apply the 5 filters on all of the 10 images from the previous layer? So that would result in a total of 50 images after the second convolution layer.

+ +

And then finally the last FC layer will take all data from these 50 images and somehow combine it into one output value (e.g. the probability that the original input image was a cat).

+ +

Or am I mistaken in how convolution layers are supposed to operate?

+ +

Also, how to deal with channels, in this case RGB? Can I consider this entire operation to be separate for all red, green and blue data? I.e. for one full RGB image, I essentially run the entire network three times, once for each color channel? Which would mean I'm also getting 3 output values.

+",31800,,2444,,12/19/2021 14:50,12/19/2021 14:50,Does each filter in each convolution layer create a new image?,,3,0,,12/19/2021 14:44,,CC BY-SA 4.0 +17005,1,,,12/9/2019 14:50,,1,148,"

I'd like to create an AI for a 2D game involving two players fighting against each other. The map looks something like this (it is an NxN array, generated somewhat randomly):

+ +

+ +

Basically the players can look for objects such as weapons located on platforms, shoot at each other to cause damages etc. The output actions are therefore limited to a few such as going up, left, right, down, shooting angle, shooting boolean...

+ +

I'm wondering if Reinforcement Learning using a neural network is a good approach to this problem. If so, how should I proceed with the learning phase? Should I force the AI to compete against a weaker version of itself at each iteration? Would it be computationally reasonable to train on a 4 GB GPU? +Thanks in advance for your advice!

+",31374,,31374,,12/10/2019 17:27,12/10/2019 17:27,Reinforcement learning for a 2D game involving two players,,0,1,,,,CC BY-SA 4.0 +17006,1,,,12/9/2019 16:09,,1,195,"

I am training an A2C reinforcement learning agent in a dense reward environment (where rewards are known and explicit at every timestep).

+ +

Is it redundant to include the previous reward in the current observation space?

+ +

The reward is implicitly observed by the agent when collecting experiences and updating the network parameters. But could it also be useful for the agent to explicitly observe the reward of its previous action?

+",27570,,,,,12/9/2019 16:09,Should an RL agent directly observe the reward?,,0,2,,,,CC BY-SA 4.0 +17007,1,,,12/9/2019 16:47,,3,154,"

I built a simple AI system that tries to solve the 8 puzzle using DQN. +The problem is, if the agent gets only a reward greater than zero when winning, the training will take a long time, so I made a smooth reward function instead: $R=(n/9)^3$, where $n$ is the number of pieces that are in the right position.

+ +

The training became quicker, but the AI chose to match only 7 pieces out of 9 to get a reward of $(7/9)^3/(1-\gamma) = 0.47/(1-\gamma) = 4.7$ for $\gamma=0.9$; choosing to win and getting a one-off reward of 1 doesn't make sense to the AI. Lowering $\gamma$ will make the AI choose instant reward instead of long-term reward, so that will not be very helpful, and lowering the rewards of non-winning states will make the training very slow.

+ +

So, how do I choose a good reward function?

+",31953,,2193,,12/9/2019 18:53,12/9/2019 18:53,"DQN, how to choose the reward fucntion?",,0,3,,,,CC BY-SA 4.0 +17008,2,,17004,12/9/2019 22:21,,4,,"

About the images inside the CNN layers: I really recommend this article since there is no one short answer to this question and it probably will be better to experiment with it.

+ +

About the RGB input images: when you need to train on RGB pictures, it is not advised to split the RGB channels. You can think of it as trying to identify a fictional cat with red ears, a green body and a blue tail: each separate channel doesn't represent a cat, certainly not with high confidence. I would recommend transforming your RGB images to grayscale and measuring the network performance. If the performance is not sufficient, you can make the convolution filter 3D. For example: if 30x30x3 is the input image, the filter has to be NxNx3.

+",31957,,,,,12/9/2019 22:21,,,,1,,,,CC BY-SA 4.0 +17010,1,,,12/9/2019 23:26,,1,106,"

I need to predict the performance (CPI cycles-per-instruction) of 90 machines for the next hour (or day). Each machine has a thousand records (e.g. CPU and memory usage).

+ +

Currently, I am using a neural network with one hidden layer for this task. I have 9 inputs (features), 23 neurons in the hidden layer, and one output. I am using the Levenberg-Marquardt algorithm. Examples of the inputs (or features) are the CPU and memory capacity and usage, and the machine_id; the output is the performance. I have 90 machines. Currently, I get an MSE of $0.1$ and an R of $0.80$.

+ +

My dataset consists of 30 days. I trained my network on the first 29 days, and I use day 30 to test.

+ +

I have been advised to use deep learning to have more flexibility and improve the MSE and R results. Could deep learning be helpful in this case? If yes, which deep learning model could I use to improve the results?

+",30551,,30551,,12/11/2019 21:46,1/11/2020 8:01,Should I use deep learning to solve my task?,,0,2,,,,CC BY-SA 4.0 +17011,1,17013,,12/10/2019 2:17,,3,180,"

I have made several neural networks by using Brain.Js and TensorFlow.js.

+ +

What is the difference between artificial intelligence and artificial neural networks?

+",31949,,2444,,12/10/2019 2:43,12/10/2019 22:34,What is the difference between artificial intelligence and artificial neural networks?,,4,0,,,,CC BY-SA 4.0 +17012,1,,,12/10/2019 2:51,,1,20,"

We have a scenario where we need to implement an Artificial Intelligence solution which will evaluate the input data file of my Azure Data Factory pipeline and let us know whether the file is good or bad with respect to its data.

+ +

For example, I have 10 files several rows which are good input files and 2 files with several rows which are bad input files.

+ +

Each file, whether it is good or bad, has 26 columns. The above two files are bad for the reasons below.

+ +
    +
  1. One file has all empty values for one column which is not expected.

  2. +
  3. Another file has the value 'TRUE' for all rows of a specific column, which is also not the general scenario (some percentage of TRUE records and some percentage of FALSE records will appear in good files).

  4. +
+ +

Like this, there may be several scenarios where the input file may be treated as bad file.

+ +

We want to implement an Artificial Intelligence solution which analyzes all the input files, identifies the hidden patterns in the data of each file, detects abnormal scenarios like the above, and eventually marks the file as a bad file.

+ +

Please suggest an approach, or which components in Azure can help to achieve this kind of file sanity check.

+ +

Thanks.

+",31959,,,,,12/10/2019 2:51,Need to analyze input CSV files and determine whether input file is good or bad w.r.t it's data,,0,4,,,,CC BY-SA 4.0 +17013,2,,17011,12/10/2019 3:03,,0,,"

From wikipedia:

+ +
+

Artificial neural networks (ANN) or connectionist systems are computing systems that are inspired by, but not identical to, biological neural networks that constitute animal brains. Such systems ""learn"" to perform tasks by considering examples, generally without being programmed with task-specific rules.

+
+ +

Artificial Intelligence, on the other hand, refers to the broad term of

+ +
+

intelligence demonstrated by machines

+
+ +

This obviously doesn't clear much up, so the next logical question is: ""What is intelligence?""

+ +

This, however, is one of the most debated questions in computer science and many other fields, so there isn't a straight answer for this. The most you can do is decide yourself what you think intelligence refers to, because as far as we know, there is no agreed upon way of quantifying intelligence, and so the definition of such will remain ambiguous.

+",26726,,,,,12/10/2019 3:03,,,,0,,,,CC BY-SA 4.0 +17016,2,,17011,12/10/2019 4:44,,4,,"

Artificial intelligence is a broad field; an ANN is a specific technique within that field.

+",30178,,,,,12/10/2019 4:44,,,,0,,,,CC BY-SA 4.0 +17017,1,,,12/10/2019 6:02,,1,50,"

Background:
+This is for a simulated robot with four legs, walking on a flat terrain. The ANN (an MLP) is given inputs as the robot's body angle, positions and angle of each leg with respect to the body and two points of contact with the terrain, on each leg (if there's no contact, the value is zero). The outputs are the four motor rates for each leg.
+I'm using Keras with the CNTK backend to train the network. The network has 30 input nodes (ReLU), one hidden layer with 64 nodes (ReLU) and an output layer with 4 nodes (Sigmoid). Optimizer is Adam.

+ +

The training data has 2459 datapoints. Running model.validate with parameters testDataPercentage = 0.25, epochs = 50 and batchSize = 10 gave me: loss: 2.9509 - accuracy: 0.3283 - val_loss: 2.8592 - val_accuracy: 0.3213.

+ +

But running model.evaluate multiple times gave me:

+ +
['loss', 'accuracy'] [3.10, 0.50]
+['loss', 'accuracy'] [3.04, 0.23]
+['loss', 'accuracy'] [3.01, 0.11]
+['loss', 'accuracy'] [3.45, 0.02]
+['loss', 'accuracy'] [3.17, 0.40]
+['loss', 'accuracy'] [3.03, 0.27]
+['loss', 'accuracy'] [3.012, 0.46]
+
+ +

Loss doesn't decrease much over 50 epochs. It reduces from maybe 3 to 2.8. That's it.

+ +

Question:
+I don't understand why the accuracy varies so much for each run.
+If I add a hidden layer or even add a dropout of 0.2, the results are similar: loss: 2.9253 - accuracy: 0.2978 - val_loss: 2.9350 - val_accuracy: 0.3148. +Reducing the number of hidden nodes to 15 gives the same results: loss: 2.9253 - accuracy: 0.2978 - val_loss: 2.9350 - val_accuracy: 0.3148. Hidden layers with 64 nodes gives the same results. Training data with just 500 data points also gives the same results. Using sigmoid instead of ReLU gives slightly worse results.
+I've been through many tutorials and guides on how to debug or check why a neural network is not working, but they don't properly explain what these values mean and how to adjust the network.
+Does the loss not decreasing mean the network is not learning?
+Does the fact that increasing or decreasing the number of layers or the amount of training data makes no difference mean that the network is not learning?

+",31961,,,,,12/10/2019 6:02,Does a varying ANN model accuracy mean underfitting or overfitting?,,0,3,0,,,CC BY-SA 4.0 +17018,1,,,12/10/2019 6:49,,1,33,"

When I try to fit a Normal distribution to the output of a policy network, for a continuous action space problem, what should its standard deviation be? The mean of the distribution will directly be the output of the policy network.

+",29879,,,,,12/10/2019 6:49,Deciding std. deviation for policy network output?,,0,0,,,,CC BY-SA 4.0 +17019,2,,17011,12/10/2019 10:31,,1,,"

I would explain it as follows: Artificial Intelligence is a huge topic concerning many fields, such as robotics, computer vision, machine learning, etc. It focuses on any ""intelligent"" task that a computer can do.

+ +

Artificial Neural Networks are a sub-topic of Machine learning, and probably as you've seen, as you said you have some experience with them, deals with a specific way of solving problems using a set of 'neurons' that try to imitate actual biological neurons. Explaining it in a really simplistic way, it is a method of fitting a function to your specific data in such a way that it still stands and gives good predictions on test data. By 'training' the network, you're basically trying to find better values for the weights(analogous to synapses in an actual brain, connections between the neurons) between the neurons in order to give better outputs in general on that specific type of data instead of just one case.

+",31966,,31966,,12/10/2019 22:19,12/10/2019 22:19,,,,0,,,,CC BY-SA 4.0 +17020,1,22082,,12/10/2019 11:27,,2,80,"

I was trying to build a CNN model. I used time series data of daily temperature to predict if there is risk of an event, say bacteria growth. I calculated the descriptive statistics of the time series, ie. mean, variance, skewness, kurtosis etc for each observation and added them to input data.

+ +

My question:

+ +

Is a CNN capable of extracting the effect of the descriptive statistics on the label, meaning that adding these descriptive statistics as features manually does not make a difference? +(I will still try this later, but I would like to hear what you think about it.) Thanks

+",31075,,,,,2/6/2021 13:05,Is CNN capable of extracting the descriptive statistics features,,1,0,,,,CC BY-SA 4.0 +17021,2,,11220,12/10/2019 12:12,,1,,"

voice-builder is an opensource text-to-speech (TTS) voice building tool from Google.

+",30644,,,,,12/10/2019 12:12,,,,0,,,,CC BY-SA 4.0 +17022,2,,17004,12/10/2019 12:41,,4,,"

You are partially correct. On CNNs the output shape per layer is defined by the amount of filters used, and the application of the filters (dilation, stride, padding, etc.).

+ +

CNNs shapes

+ +

In your example, your input is 30 x 30 x 3. Assuming a stride of 1, 'same' zero padding (so the borders are padded), and no dilation on the filter, you will get a spatial shape equal to your input, that is 30 x 30. Regarding the depth, if you have 10 filters (each of shape 5 x 5 x 3) you will end up with a 30 x 30 x 10 output at your first layer. Similarly, on the second layer with 5 filters (each of shape 3 x 3 x 10; note the depth matches the previous layer) you get a 30 x 30 x 5 output. The FC layer has the same amount of weights as its input (that is 4500 weights) to create a linear combination of them.
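
A minimal sketch of that architecture (my own example, assuming Keras with 'same' padding), which prints exactly these shapes:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(10, (5, 5), padding='same', activation='relu', input_shape=(30, 30, 3)),
    layers.Conv2D(5, (3, 3), padding='same', activation='relu'),
    layers.Flatten(),
    layers.Dense(1, activation='sigmoid'),
])
model.summary()   # (None, 30, 30, 10) -> (None, 30, 30, 5) -> (None, 4500) -> (None, 1)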

+ +

CNN vs Convolution

+ +

Note that the CNNs operate differently from the traditional signal processing convolution. In the former, the convolution operation performs a dot product with the filter and the input to output a single value (and even add bias if you want to). While the latter outputs the same amount of channels.

+ +

The CNNs borrow the idea of a shifting kernel and a kernel response. But they do not apply a convolution operation per se.

+ +

Operation over the RGB

+ +

The CNN is not operating on each channel separately. It is merging the responses of the three channels and mixing them further. The deeper you get the more mix you get over your previous results.

+ +

The output of your FC is just one value. If you want more, you need to add more FC neurons to get more linear combinations of your inputs.

+",31955,,,,,12/10/2019 12:41,,,,0,,,,CC BY-SA 4.0 +17023,1,,,12/10/2019 15:02,,3,167,"

Are there (complex) tabular datasets where deep neural networks (e.g. more than 3 layers) outperform traditional methods such as XGBoost by a large margin?

+

I'd prefer tabular datasets rather than image datasets, since most image dataset are either too simple that even XGBoost can perform well (e.g. MNIST), or too difficult for XGBoost that its performance is too low (e.g. almost any dataset that is more complex than CIFAR10; please correct me if I'm wrong).

+",31971,,2444,,12/26/2022 10:37,5/25/2023 11:04,Are there tabular datasets where deep neural networks outperform traditional methods?,,1,2,,,,CC BY-SA 4.0 +17024,1,,,12/10/2019 16:13,,2,63,"

I was looking at the source code for a personal project neural network implementation, and the bias for each node was mistakenly applied after the activation function. The output of each node was therefore $\sigma\big(\sum_{i=1}^n w_i x_i\big)+b$ instead of $\sigma\big(\sum_{i=1}^n w_i x_i + b\big)$. Assuming standard back-propagation algorithms are used for training, what (if any) impact would this have?

+",31975,,2444,,12/10/2019 22:10,12/10/2019 22:10,What would be the implications of mistakenly adding bias after the activation function?,,1,0,,,,CC BY-SA 4.0 +17025,1,17035,,12/10/2019 16:48,,8,2086,"

Suppose a deep neural network is created using Keras or TensorFlow. Usually, when you want to make a prediction, the user would invoke model.predict. However, how would the actual AI system proactively invoke its own actions (i.e. without the need for me to call model.predict)?

+",31978,,2444,,2/12/2021 17:19,2/12/2021 17:19,How can an AI freely make decisions?,,3,0,,,,CC BY-SA 4.0 +17026,2,,17024,12/10/2019 17:17,,2,,"

Let $L(\mathbf{w}, b) = \sigma \left(\sum_{i=1}^n w_i x_i \right)+b$, then the partial derivative of $L$ with respect to $b$, in Leibniz's notation, is $\frac{\partial L}{\partial b} = 1$. Let $L(\mathbf{w}, b) = \sigma \left(\sum_{i=1}^n w_i x_i + b \right)$, then the partial derivative of $L$ with respect to $b$ is $\frac{\partial L}{\partial b} = \frac{\partial L}{\partial \sigma} \frac{\partial \sigma}{\partial b}$.

+ +

So, in general, the partial derivatives with respect to $b$ would be different in these two cases, thus, in the gradient descent step, $b$ would be updated differently in both cases. In practice, it may or not affect the performance of the model, depending on the importance of the bias with respect to the given problem. Read this answer to understand the role of the bias.

+",2444,,2444,,12/10/2019 17:54,12/10/2019 17:54,,,,0,,,,CC BY-SA 4.0 +17027,1,,,12/10/2019 17:33,,1,16,"

Example 1

+ +

An object is composed of 3 sub-objects.

+ +
    +
  • Object 1: 90% looks like an eye 10% looks like a wheel
  • +
  • Object 2: 50% looks like an eye 50% looks like a wheel
  • +
  • Object 3: 90% looks like a mouth 10% looks like a roof
  • +
+ +

OK. So now we want to determine what the whole-object is. Using this evidence maybe we find:

+ +
    +
  • Combined Object: 90% looks like a face 10% looks like an upside-down car.
  • +
+ +

But now, given this, we go back and reclassify the sub-objects.

+ +
    +
  • Given the whole object is a face, object 2 is 99% an eye.
  • +
+ +

I'm looking for an algorithm that sort of goes back and forth between the context and the sub-objects to classify both an object and its parts.

+ +

(This is related to the rabbit-duck illusion: once such an algorithm has decided something is a rabbit, it classifies its parts as rabbit-parts).

+ +

In other words, the algorithm needs to calculate conditional probabilities P(A|B), but B depends on what all the A's are! So it's a feedback loop.

+ +

Example 2 +There is a word ""funny"". The sub-letters are classified (wrongly) as F U M Y. It guesses that the word is FUNNY and then goes back and tries to reclassify the letters. Using this reclassification it is even more certain the word is FUNNY. And much more certain the letters in the middle are NN and not M. +(Perhaps later the word is used in a sentence ""The fumes from this fire are really fumy"". And then using this new evidence it has to go back and reclassify the word again and now it thinks the word is FUMY with 80% probability).

+ +

I have an idea: I would write all the conditional probabilities in a matrix $M$, then give starting probabilities $S_0$, and then iterate like this: $S_{n+1}=M S_n$ until hopefully it converges on something.
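A minimal sketch of what I mean (the matrix and the starting beliefs below are made up just for illustration):

import numpy as np

M = np.array([[0.7, 0.2],   # made-up conditional probabilities linking the hypotheses
              [0.3, 0.8]])
S = np.array([0.5, 0.5])    # starting probabilities S_0

for _ in range(50):
    S = M @ S
    S = S / S.sum()         # renormalise so the beliefs stay a probability distribution

print(S)                    # hopefully a fixed point of the update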

+",4199,,4199,,12/10/2019 19:13,12/10/2019 19:13,"Is there an algorithm for ""contextual recognition"" with probabilities?",,0,0,,,,CC BY-SA 4.0 +17028,1,,,12/10/2019 18:05,,2,74,"

Here's a question I might ask an AI to solve:

+ +
""Colour the states of the USA using just 4 colours"".
+
+ +

Now, a common heuristic a human might use is to start at one state and ""work their way out"". Or start at an edge state. Now this seems to work best rather than just colouring states in a random order like a computer might do. And it means a human is often better than a computer because a computer might just start colouring random states and get into trouble very quickly.

+ +

(Also I wonder if this is a learned heuristic or would a child develop this on his/her own?)

+ +

Now the question is, whether this heuristic is an innate optimisation strategy, or just laziness on the part of the human. i.e. colouring things close together takes less effort. Either way it leads to a good strategy.

+ +

But I wonder if there are any other examples of heuristics that humans inately use without realising it, that lead to good strategies?

+ +

One heuristic that computers often don't know is

+ +
""If you're trying to play a game don't keep turn around and go the other way for no reason."" 
+
+ +

Again, a human would not do this, but again this could be laziness on the part of a human. It takes more effort to turn around than keep going in one direction to explore it.

+",4199,,4199,,12/12/2019 19:26,12/13/2019 4:19,What are some common heuristics that might be innate?,,1,3,,,,CC BY-SA 4.0 +17029,1,,,12/10/2019 18:16,,2,90,"

When considering the policy network in the PPO algorithm, we need to fit a Gaussian distribution to the neural network output (for a continuous action space problem). When I use this network to obtain an action, should I sample from the fitted distribution or directly use the mean output from the policy network? Also, what should the standard deviation of the fitted distribution be?
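For reference, here is a sketch of the kind of setup I have in mind (made-up sizes; I am not sure this is the right approach, which is what I am asking): the network outputs the mean, the log standard deviation is a learned parameter, actions are sampled during training, and the mean is used at evaluation time.

import torch

mean_net = torch.nn.Linear(8, 2)                 # toy policy: 8-dim state -> 2-dim action mean
log_std = torch.nn.Parameter(torch.zeros(2))     # state-independent, learned log standard deviation

state = torch.randn(8)
dist = torch.distributions.Normal(mean_net(state), log_std.exp())

train_action = dist.sample()                     # stochastic action for exploration during training
eval_action = dist.mean                          # deterministic (mean) action for evaluation
log_prob = dist.log_prob(train_action).sum()     # would go into the PPO objective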

+",29879,,2444,,12/10/2019 21:25,12/10/2019 21:25,Should I consider mean or sampled value for action selection in ppo algorithm?,,0,0,,,,CC BY-SA 4.0 +17030,2,,7611,12/10/2019 18:20,,0,,"

There is a third strategy and that is to study what heuristics a human player uses.

+",4199,,,,,12/10/2019 18:20,,,,0,,,,CC BY-SA 4.0 +17031,1,,,12/10/2019 22:05,,4,648,"

How does a transformer leverage the GPU to be trained faster than RNNs?

+

I understand the parameter space of the transformer might be significantly larger than that of the RNN. But why can the transformer structure leverage multiple GPUs, and why does that accelerate its training?

+",31984,,2444,,12/3/2021 10:08,12/3/2021 10:08,How does a transformer leverage the GPU to be trained faster than RNNs?,,2,2,,,,CC BY-SA 4.0 +17032,2,,17011,12/10/2019 22:34,,3,,"

Artificial intelligence can refer to a broad range of techniques by which machines (algorithms) demonstrate utility (fitness in an environment, where the environment may be either virtual or physical.)

+ +

This can include symbolic AI, which utilizes logic and search exclusively. (Symbolic AI is sometimes referred to as ""good old fashioned AI"" aka gofai, or ""Classical AI"".)

+ +
    +
  • A key distinction is that Neural Networks constitute a form of ""statistical AI"", which renders them capable of learning by trial/error & analysis.
  • +
+ +

The recent strength & applicability of statistically driven AI methods has been facilitated by advances in processing power and memory.

+",1671,,,,,12/10/2019 22:34,,,,0,,,,CC BY-SA 4.0 +17033,1,,,12/10/2019 23:09,,2,86,"

I want to train a network with video data and have it transform pixel values over time on an input video. This is for an art project and does not need to be super elaborate, but the videos I want to render out of this might be big in resolution and frame count.

+ +

Which neural network would be appropriate for this task? I think I'm looking for a convolutional network (but I'm not too sure of that either). Which framework could easily allow me to do this?

+ +

Now, I'm no proper programmer, but self-learned on the go. I know some Javascript, but rather would like to learn more Python. Ideally, the easier and simpler the better though: I would be perfectly happy with something like ""Uber Ludwig"" (except maybe that it's from Uber).

+",31985,,2444,,12/11/2019 13:53,12/11/2019 13:53,Which neural network should I use to transform the pixels of a video overtime?,,0,4,,,,CC BY-SA 4.0 +17034,2,,17025,12/10/2019 23:13,,3,,"

The short answer, I think, is that it cannot.

+ +

The AI system will only do, and will only be good at, the task that the programmer made it for. Of course, you could have an AI that, for example, triggers a prediction on the input with different models depending on some other variables, but that will still be based on what the programmer wrote; it will never be able to do or learn new, unintended things. An example would be running model.predict() for an image-classification NN in a loop, stopping only when it detects a dog, and then using another model to predict the breed.

+ +

What you mentioned about ""letting the AI loose on the network"" is usually part of some concerns about AI, namely that it could evolve, learn new actions and start acting on its own. But those people are, perhaps unknowingly, actually talking about a general AI or strong AI, an AI system that could be as smart as a human and so could act on its own too. But, as far as I know at least, we are not even close to creating such a system.

+ +

Hope I actually answered your question and didn't deviate too much from what you actually asked. Please tell me if so.

+",31966,,,,,12/10/2019 23:13,,,,1,,,,CC BY-SA 4.0 +17035,2,,17025,12/11/2019 0:02,,12,,"

Neural networks, deep learning and other supervised learning algorithms do not ""take actions"" by themselves, they lack agency.

+ +

However, it is relatively easy to give a machine agency, as far as taking actions is concerned. That is achieved by connecting inputs to some meaningful data source in the environment (such as a camera, or the internet), and connecting outputs to something that can act in that environment (such as a motor, or the API to manage an internet browser). In essence this is no different from any other automation that you might write to script useful behaviour. If you could write a series of tests, if/then statements or mathematical statements that made useful decisions for any machine set up this way, then in theory a neural network or similar machine learning algorithm could learn to approximate, or even improve upon the same kind of function.
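As a very rough sketch (every name below is a placeholder stub, not a real library), the wiring can be as simple as a sense-decide-act loop:

import random

def read_sensor():
    # stand-in for a real data source such as a camera or a web API
    return random.random()

def execute(action):
    # stand-in for a real actuator such as a motor or a browser command
    print('acting:', action)

class Model:
    def predict(self, observation):
        # stand-in for a trained network mapping observations to actions
        return 'left' if observation < 0.5 else 'right'

model = Model()
for _ in range(5):          # a real agent would keep running indefinitely
    execute(model.predict(read_sensor()))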

+ +

If your neural network has already been trained on example inputs and the correct actions to take to achieve some goal given those inputs, then that is all that is required.

+ +

However, training a network to the point where it could achieve this in an unconstrained environment (""letting it loose on the internet"") is a tough challenge.

+ +

There are ways to train neural networks (and learning functions in general) so that they learn useful mappings between observations and actions that progress towards achieving goals. You can use genetic algorithms or other search techniques for instance, and the NEAT approach can be successful training controllers for agents in simple environments.

+ +

Reinforcement learning is another popular method that can also scale up to quite challenging control environments. It can cope with complex game environments such as Defense of the Ancients, Starcraft, Go. The purpose of demonstrating AI prowess on these complex games is in part showing progress towards a longer-term goal of optimal behaviour in the even more complex and open-ended real world.

+ +

State of the art agents are still quite a long way from general intelligent behaviour, but the problem of using neural networks in a system that learns how to act as an agent has much research and many examples available online.

+",1847,,1847,,12/11/2019 0:17,12/11/2019 0:17,,,,1,,,,CC BY-SA 4.0 +17036,1,,,12/11/2019 1:17,,3,156,"

In most current models, the normalization layer is applied after each convolution layer. Many models use the block $\text{convolution} \rightarrow \text{batch normalization} \rightarrow \text{ReLU}$ repeatedly. But why do we need multiple batch normalization layers? If we have a convolution layer that receives a normalized input, shouldn't it spit out a normalized output? Isn't it enough to place normalization layers only at the beginning of the model?
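For concreteness, here is a minimal sketch (in Keras, just to illustrate the repeated block I am referring to; the sizes are arbitrary):

from tensorflow import keras

def conv_bn_relu(x, filters):
    x = keras.layers.Conv2D(filters, 3, padding='same')(x)
    x = keras.layers.BatchNormalization()(x)
    return keras.layers.ReLU()(x)

inputs = keras.Input(shape=(32, 32, 3))
x = conv_bn_relu(inputs, 32)   # one normalization layer per block...
x = conv_bn_relu(x, 64)        # ...rather than a single one at the model input
outputs = keras.layers.GlobalAveragePooling2D()(x)
model = keras.Model(inputs, outputs)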

+",31988,,2444,,12/11/2019 13:43,12/11/2019 13:43,Why do current models use multiple normalization layers?,,1,0,,,,CC BY-SA 4.0 +17037,1,,,12/11/2019 6:43,,1,278,"

I want to recognize the name of the chemical structure from the image of the chemical structure. For example, in the image below, it is a benzene structure, and I want to recognize that it is benzene from the image (I should be able to recognize all these structures as benzene).

+ +

How can I recognize the name of a molecule given an image of its structure?

+ +

+",17183,,2444,,12/11/2019 14:01,12/11/2019 14:01,How can I recognise the name of a molecule given an image of its structure?,,0,4,,,,CC BY-SA 4.0 +17038,2,,13360,12/11/2019 8:13,,0,,"

@mshlis, if $\sigma$ is a covariance matrix, then there exists $A$ such that $\sigma=AA^T$.

+ +

But if we generate $A$ with the model and calculate $\sigma = AA^T$, then the Cholesky decomposition may not succeed, because $AA^T$ is only guaranteed to be positive semi-definite (it can be singular).
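A small numerical sketch of the issue (purely illustrative): when $A$ is rank-deficient, $AA^T$ is singular and the Cholesky factorisation fails unless a small jitter is added to the diagonal.

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
cov = A @ A.T                 # positive semi-definite, but singular (rank 2)

try:
    np.linalg.cholesky(cov)
except np.linalg.LinAlgError:
    print('Cholesky failed: cov is only positive semi-definite')

np.linalg.cholesky(cov + 1e-6 * np.eye(4))   # common workaround: add a small diagonal jitter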

+",31479,,,,,12/11/2019 8:13,,,,0,,,,CC BY-SA 4.0 +17039,2,,17036,12/11/2019 11:00,,2,,"

One issue is that a normalized set of initial weights may not stay normalized as learning progresses. Since we adjust weights proportionately according to their relative values, and since each update only sees a subset of the learning data, the model may become convinced that one subset of features is important and others are not; this can result in the weights becoming unbalanced again. In the learning curve we may see this as a plateau, where learning becomes convinced that a few features are much more important than they really are and fails to find new features that could contribute even more, because tiny proportionate changes did not move them far, or quickly, enough into a noticeable range.

+ +

So when we re-sample from the database to get the next batch we need to be fully open to the possibility that the previous batch learning was too heavily biased towards its own favoured set. In effect we are trading accuracy for openness to new feature combinations, and at the same time assisting generalization.

+",4994,,,,,12/11/2019 11:00,,,,0,,,,CC BY-SA 4.0 +17040,2,,17031,12/11/2019 11:48,,0,,"

The issue with recurrent models is that they don't allow parallelization during training. Sequential models perform better with more memory, but they face problems in learning long-term dependencies.

+ +

On the other hand, Transformers rely on self-attention, which establishes dependencies between the input and output and focuses on the relevant parts of the input sequence. This eliminates recurrence and convolution, so, unlike RNNs, where sequential computation inhibits parallelization, all positions of a sequence can be processed in parallel on the GPU, which boosts how fast the model can translate from one sequence to another.

+",31407,,,,,12/11/2019 11:48,,,,1,,,,CC BY-SA 4.0 +17041,2,,17023,12/11/2019 12:26,,0,,"

In my opinion, no. Also, images can be interpreted as a tabular dataset as well, where certain columns represent the RGB values of pixels. If you seek to use neural nets, opt for image datasets with a large sample size. Neural networks generally require large sample sizes to perform well, and high-dimensional inputs, in order not to be outperformed by boosting.

+",31970,,,,,12/11/2019 12:26,,,,8,,,,CC BY-SA 4.0 +17043,1,,,12/11/2019 15:39,,1,64,"

I'm literally going crazy trying to understand how U-Net works. Maybe it is very easy, but I'm stuck (and I have a terrible headache). So, I need your help.

+ +

I'm going to segment MRI to find white matter hyperintensities. I have a dataset with MRI brain images, and another dataset with the WMH. For each one of the brain images, I have one black image with white dots on it in the WMH dataset. These white dots represent where is a WMH on its corresponding brain image.

+ +

This is an image from the MRI brain images:

+ +

+ +

And this is the corresponding WMH image from the WMH dataset:

+ +

+ +

How can I use the other images in network validation?

+ +

I suppose there will be a loss function and that this network is trained with supervised learning.

+",4920,,2444,,6/12/2020 23:54,6/12/2020 23:54,Using U-NET for image semantic segmentation,,0,3,,,,CC BY-SA 4.0 +17044,1,17070,,12/11/2019 16:26,,6,779,"

Why is dropout favored compared to reducing the number of units in hidden layers for the convolutional networks?

+ +

If a large set of units leads to overfitting and dropping out ""averages"" the response units, why not just suppress units?

+ +

I have read different questions and answers on the dropout topic including these interesting ones, What is the "dropout" technique? and this other Should I remove the units of a neural network or increase dropout?, but did not get the proper answer to my question.

+ +

By the way, it is weird that this publication A Simple Way to Prevent Neural Networks from Overfitting (2014), Nitish Srivastava et al., is cited as being the first on the subject. I have just read one that is from 2012: +Improving neural networks by preventing co-adaptation of feature detectors.

+",30392,,2444,,12/11/2019 17:24,4/13/2021 15:55,Why is dropout favoured compared to reducing the number of units in hidden layers?,,2,0,,,,CC BY-SA 4.0 +17045,1,17054,,12/11/2019 17:19,,3,1461,"

Say a simple neural network's input is a collection of tags (encoded in some way), and the output is an image that corresponds to those tags. Say this network consists of some dense layers and some reverse (transpose) convolution layers.

+ +

What is the disadvantage of this network, that directs people to invent fairly complicated things like GANs or VAEs?

+",26791,,,,,12/12/2019 5:46,What makes GAN or VAE better at image generation than NN that directly maps inputs to images,,2,0,,,,CC BY-SA 4.0 +17046,2,,17045,12/11/2019 18:08,,2,,"

I will only focus on the VAE because I am more familiar with it, but the explanations may also apply to the GAN and other generative models.

+ +

In the case of the VAE, you train a neural network not only to generate images but to represent them compactly in a so-called latent space, so you train the VAE to do dimensionality reduction. More precisely, the VAE attempts to learn a probability distribution with smaller dimensionality than the dimensionality of the training data but that hopefully represents the training data. Consequently, the model is forced to learn the essential features of the probability distribution that generated the training data.

+ +

The VAE is a generative model rather than a discriminative model. In the case of the neural network that maps inputs to images, you will not be learning a latent probability distribution (unless you formulate your model in such a way), from which you could sample, but you would just be mapping, in a supervised way (that is, you would need a labelled training dataset) and deterministically, the inputs to the images. In the VAE, there's some form of stochasticity, while training (and testing) the model.

+",2444,,2444,,12/11/2019 18:14,12/11/2019 18:14,,,,1,,,,CC BY-SA 4.0 +17047,1,,,12/11/2019 19:12,,3,443,"

I'd like to ask for any kind of assistance regarding the following problem:

+ +

I was given the following training data: 100 numbers, each one a parameter, which together define a number X (also given). This is one instance; I have 20,000 instances for training. Next, I am given 5,000 lines, each containing the 100 numbers as parameters. My task is to predict the number X for these 5,000 instances.

+ +

I am stuck because I only know of the sigmoid activation function so far, and I assume it's not suitable for cases like this where the output values aren't either 0 or 1.

+ +

So my question is this : What's a good choice for an activation function and how does one go about implementing a neural network for a problem such as this?

+",32012,,,,,10/8/2020 19:06,Regression using neural network,,4,0,,,,CC BY-SA 4.0 +17049,1,,,12/11/2019 21:00,,2,448,"

I'm new to deep learning. I was wondering what's the relationship between a deep model complexity (e.g. total number of parameters, or depth) and the dataset size?

+ +

Assuming I want to do a binary classification with 10K data for a problem like fire detection. How should I know what complexity I should go for?

+",9053,,,,,12/11/2019 21:00,Relationship between model complexity (depth) and dataset size,,0,3,,,,CC BY-SA 4.0 +17050,1,18352,,12/11/2019 21:36,,8,472,"

So I’ve been working on my own little dynamic architecture for a deep neural network (any number of hidden layers with any number of nodes in every layer) and got it solving the XOR problem efficiently. I moved on to trying to see if I could train my network on how to classify a number as being divisible by another number or not while experimenting with different network structures and have noticed some odd things. I know this is a weird thing to try and train a neural network to do but I just thought it might be easy because I can simply generate the training data set and test data set programmatically.

+ +

From what I’ve tested, it seems that my network is only really good at identifying whether or not a number is divisible by a number who is a power of 2. If you test divisibility by a power of two, it converges on a very good solution very quickly. And it generalizes well on numbers outside of the training set - which I guess it kind of makes sense, as I’m inputting the numbers into the network in binary representation, so all the network has to learn is that a number n is only divisible by 2^m if the last m digits in the binary input vector are 0 (i.e. fire the output neuron if the last m neurons on the input layer don't fire, else don't). When checking divisibility by non-powers of two, however, there does not seem to be as much of a ""positional"" (maybe that's the word, maybe not) relationship between the input bits and whether or not the number is divisible. I thought though, that if I threw more neurons and layers at the problem that it might be able to solve classifying divisibility by other numbers – but that is not the case. The network seems to converge on not-so-optimal local minima on the cost function (for which I am using mean-squared-error) when dividing by numbers that are not powers of 2. I’ve tried different learning rates as well to no avail.

+ +

Do you have any idea what would cause something like this or how to go about trying to fix it? Or are plain deep neural networks maybe just not good at solving these types of problems?

+ +

Note: I should also add that I've tried using different activation functions for different layers (like having leaky-relu activation for the first hidden layer, then sigmoid activation for the output layer, etc.), which also does not seem to have made a difference.

+ +

Here is my code if you feel so inclined as to look at it: https://github.com/bigstronkcodeman/Deep-Neural-Network/blob/master/Neural.py

+ +

(beware: it was all written from scratch by me in the quest to learn so some parts (namely the back-propagation) are not very pretty - I am really new to this whole neural network thing)

+",32014,,32014,,12/11/2019 22:04,3/1/2020 20:31,Can a deep neural network be trained to classify an integer N1 as being divisible by another integer N2?,,2,1,,,,CC BY-SA 4.0 +17051,2,,17047,12/11/2019 21:43,,1,,"

Usually, you normalize the data first, meaning that your whole dataset will be between 0 and 1. Afterwards, once you have the model predictions, when computing the cost function or evaluating the model, you can apply the inverse of the normalization function.
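A minimal sketch of that workflow (scikit-learn and Keras are just one possible choice here; the sizes match the question and the data is random placeholder data):

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

X_train = np.random.rand(20000, 100)        # placeholder for the 100 input parameters
y_train = np.random.rand(20000, 1)          # placeholder for the target X

scaler = MinMaxScaler()
y_scaled = scaler.fit_transform(y_train)    # bring the targets into [0, 1]

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    keras.layers.Dense(1, activation='linear'),   # linear output unit for regression
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_scaled, epochs=5, verbose=0)

y_pred = scaler.inverse_transform(model.predict(X_train[:5]))   # undo the scaling afterwards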

+",20430,,,,,12/11/2019 21:43,,,,2,,,,CC BY-SA 4.0 +17052,1,,,12/12/2019 1:54,,2,96,"

I'm trying to train a VAE using a graph dataset. However, my latent space shrinks epoch by epoch. Meanwhile, my ELBO plot comes to a steady state after a few epochs.

+

I tried to play around with parameters and I realized, by increasing the batch size or training data, this happens faster, and ELBO comes to a steady state even faster.

+

Is this a common problem, with a general solution?

+

With these signs, which part of the algorithm is more possible to cause the issue? Is it an issue from computing loss function? Does it look like the decoder is not trained well? Or it is more likely for the encoder not to have detected features that are informative enough?

+
+

Edit:

+

I figured out that the problem is probably caused by the loss function. My loss function is a combination of the KL term and reconstruction loss. In the github page for graph auto-encoders, it is suggested that the loss function should include normalization factors according to the number of nodes in the graph. I haven't figured it out exactly, but by adding a factor of 100 to my reconstruction loss and a factor of 0.5 to my KL loss, the algorithm is working fine. I would appreciate it if someone can expand on how this exactly is supposed to be set up.
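Roughly, what I mean by those factors (a sketch, not my exact code):

# recon_loss and kl_loss are assumed to be the usual per-batch VAE terms
def weighted_loss(recon_loss, kl_loss, recon_weight=100.0, kl_weight=0.5):
    # the weights rebalance the two terms so that one does not dominate the other
    return recon_weight * recon_loss + kl_weight * kl_loss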

+",31990,,2444,,11/22/2020 17:47,11/22/2020 17:47,Why does the ELBO come to a steady state and the latent space shrinks?,,0,1,,,,CC BY-SA 4.0 +17053,2,,17050,12/12/2019 3:10,,1,,"

When you represent a number in base 2 (binary), you have already divided the number by 2 many times. If there is no remainder at the end, the number is obviously evenly divisible by 2. This hints that your AI could test for divisibility by dividing. Hmm-- not much gained there!

+ +

Unfortunately the problem is not one suited to solving via AI. That's why factorization of large numbers is a good basis for hard encryption schemes. I'd suggest finding a different sort of problem on which to test your AI.

+",28348,,,,,12/12/2019 3:10,,,,0,,,,CC BY-SA 4.0 +17054,2,,17045,12/12/2019 5:46,,2,,"

The only disadvantage of, and difference between, these generative models and the method you describe is the input. You describe inputting tags, whereas for a GAN or VAE the generation segment of the model takes in some representation of a probability distribution. For a GAN, it's mostly random noise, and for a VAE it is some latent space (see nbro's answer).

+

Your described method prevents the network from properly learning fluid generation. If you have a discrete input, the network will attempt to perform a sort of classification on the input, rather than generation, and so when you try to generate a new image, you will most likely get the image equivalent of gibberish.

+

In fact, that's why a standard auto-encoder (not variational) doesn't work very well for generation. You would think that you could feed in your own custom input into the latent space:

+

But if you tried this, you would end up telling the network to try and generate something from a latent space it can't interpret.

+

Hence where the "variational" part of the VAE comes in. This helps the network learn to generate from a continuous distribution, so no matter what input you use, the network will be able to interpret it and give an appropriate output.

+

As for a GAN, it is simply fed random noise at each training step, so it too generates based on a continuous distribution.

+

If you were to try and train your method of generation, I would predict that you would find an average of all images that share similar tags (say you have the tags of cat, dog and brown haired, if you input "dog=1, cat=0, brown haired=1" you would get an average of all brown haired dogs), but if you tried to input a combination of tags the network has not seen, as it has not learnt from a continuous distribution, the resultant image would not be anything like what you'd expect from those tags.

+",26726,,-1,,6/17/2020 9:57,12/12/2019 5:46,,,,1,,,,CC BY-SA 4.0 +17055,1,,,12/12/2019 5:50,,5,104,"

In the original GloVe paper, the authors discuss group theory when coming up with the equation (4). Is it possible that the authors came up with this model, figured out it was good, and then later found out various group theory justifications that justified it? Or was it discovered sequentially as it is described in the paper?

+

More generally: In AI research, are most things discovered because they work empirically and later justified mathematically, or is it the other way around?

+",22233,,18758,,12/23/2021 10:21,12/23/2021 10:21,"Are most things generally discovered because they work empirically and later justified mathematically, or vice-versa?",,1,2,0,,,CC BY-SA 4.0 +17057,2,,17025,12/12/2019 5:55,,2,,"

You invoke it in a loop. Imagine a digital assistant responding to voice queries. It might look something like this:

+ +
for(;;) {
+   // capture the next chunk of audio from the microphone
+   var audio = RecordSomeAudio();
+   // let the model decide what (if anything) to do with it
+   var response = model.predict(audio);
+   if(response.action == ""SAYSOMETHING"") {
+      PlaySomeAudio(response.output);
+   }
+}
+
+ +

Note that the model gets invoked repeatedly and can decide in a given situation whether to respond or not. In a digital assistant context, part of the model would be to check for if the user raised a query (e.g. ""Hey Google"" etc.).

+",32020,,,,,12/12/2019 5:55,,,,0,,,,CC BY-SA 4.0 +17058,1,,,12/12/2019 6:14,,1,39,"

This might seem like a really silly question, however I have not been able to find any answers to it on the internet.

+ +

From my rough understanding of data assimilation, it combines data with numerical models by putting weights on the adjustments to the initial-condition parameters? That sounds really similar to what a machine learning/neural network approach does.

+ +

What are the distinct differences?

+",32021,,,,,12/12/2019 6:14,How are data assimilation and machine learning different?,,0,0,,,,CC BY-SA 4.0 +17061,1,,,12/12/2019 7:21,,1,38,"

If not perfect, how well can they do? For example, if I give the Seq2Seq setup a name it did not see in the training process, can it output the same name without error?

+ +

Example

+ + + +
name = ""Will Smith""
+output = DecoderRNN(EncoderRNN(name))
+can_this_be_true = name == output
+
+",32023,,,,,1/12/2020 9:02,Can a character-level Seq2Seq setup learn to perfectly reconstruct structured data like name strings?,,1,0,,,,CC BY-SA 4.0 +17063,1,,,12/12/2019 9:13,,2,39,"

Consider a fixed camera that records a given area. Three things can happen in this area:

+ +
    +
  • No action
  • +
  • People performing action A
  • +
  • People performing action B
  • +
+ +

I want to train a model to detect when action B happens. A human observer could typically recognize action B even with a single frame, but it would be much easier with a short video (a few seconds at low FPS).

+ +

What are the most suitable models for this task? I read this paper where different types of fusion are performed in order to feed different frames to a CNN. Are there better alternatives?

+",16671,,,,,12/12/2019 10:46,Most suitable model for video classification with a fixed camera,,1,0,,,,CC BY-SA 4.0 +17064,1,17091,,12/12/2019 9:25,,0,144,"

How can I detect the diagram region and extract (crop) it from a research paper? +

+",17183,,,,,12/14/2019 4:59,How can I detect diagram region and extract (crop) it from a research paper,,1,2,,12/14/2019 6:19,,CC BY-SA 4.0 +17066,2,,17004,12/12/2019 10:17,,4,,"

For a 3 channel image (RGB), each filter in a convolutional layer computes a feature map which is essentially a single channel image. Typically, 2D convolutional filters are used for multichannel images. This can be a single filter applied to each channel or a separate filter per channel. These filters are looking for features which are independent of the color, i.e. edges (if you are looking for color there are far easier ways than CNNs). The filter is applied to each channel and the results are combined into a single output, the feature map. Since all channels are used by the filter to compute a single feature map, the number of channels in the input does not affect the structure of the network beyond the first layer. The size of a feature map is determined by the filter size, stride, padding and dilation (not commonly used - see here if you are interested).

+ +

In your example, a 30 x 30 x 3 input convolved with 10 5 x 5 filters will yield a volume of 30 x 30 x 10 if the filters have a stride of 1 and same padding (or, 26 x 26 x 10 with valid padding / 34 x 34 x 10 with full padding).

+ +

Same padding buffers the edge of the input with filter_size/2 (integer division) to yield an output of equal size (assuming stride is 1), while valid padding would result in a smaller output. Valid padding doesn't crop the image as you said, it's more of a dilution of the signal at the edges, however the result is essentially the same. Note that even with same padding the edge pixels are used in fewer convolutions - a 5 x 5 filter with same padding will use a central pixel 25 times (every position on the filter) but only 9 times for a corner pixel. To use all pixels evenly, full padding must be used, which buffers the edge of the input with filter_size - 1.

+ +

                                          

+ +

Each feature map becomes a channel in the output volume. Therefore, the number of channels in the output volume is always equal to the number of filters in the convolutional layer. So, the second layer would output a volume of size 30 x 30 x 5 (stride 1, same padding).
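If it helps, you can verify these shapes with a quick Keras sketch (a toy model just to check the arithmetic above):

from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(30, 30, 3)),
    keras.layers.Conv2D(10, (5, 5), strides=1, padding='same'),   # -> (30, 30, 10)
    keras.layers.Conv2D(5, (5, 5), strides=1, padding='same'),    # -> (30, 30, 5)
])
model.summary()   # the printed output shapes match the numbers above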

+ +

The last layer in your example (fully connected) multiplies the value of each pixel in each feature map by a learned weight and sums the result. If the network is a binary classifier, the summed value results in a 1 or 0 output if a threshold is reached or as a decimal value for a regression model. This is determined by the FC neurons' activation function.

+ +

If visualizing this helps you as much as it helps me, I highly recommend having a look at the interactive examples here. Note that what is shown by this tool is the signal propagating through the network, i.e. the output from each layer, not the filters/weights themselves.

+ +

If you are interested in a bit more depth about ANNs and convolutional layers, I cover all the basics in my thesis(this is where the image is from) - p.9-16 ANNs & p.16-23 CNNs.

+",31980,,31980,,12/12/2019 10:25,12/12/2019 10:25,,,,0,,,,CC BY-SA 4.0 +17067,2,,17063,12/12/2019 10:40,,3,,"

It seems you need a spatio-temporal model to learn human-body detection and action. With regard to interesting papers on the subject, I would recommend looking at Vicky Kalogeiton's web site.

+ +

Her PhD thesis (2017), V. Kalogeiton, Localizing spatially and temporally objects and actions in videos,

+ +

basically covers her 3 papers on the subject:

+ + + +

Summary of Kalogeiton's PhD introduction.

+ +

It introduces an end-to-end multitask objective that jointly learns object-action relationships. The action-object detector leverages the temporal continuity of videos.

+ +

Intra-class variations are key, though, and appear as spatial location accuracy, appearance diversity, image quality, aspect distribution, object size and camera framing. An action class refers to an atomic class such as jump, walk, run, climb, etc.

+ +

The detector builds anchor cuboids named tubelets, defined as sequences of bounding boxes with associated scores. The action detection spans a period of time (first and last video frame detected) and takes place at a specific location in each frame. Intra-frame action detection can be ambiguous. On the other hand, a sequence bears more information (across-class similarities) than a single frame for inferring the action.

+ +

Most previous work uses per-frame object detections, and then leverages the motion of objects to refine their spatial localization or improve their classification.

+ +

Contributions

+ +
    +
  • differences between still and video frames for training and testing an object detector among which (see Chapter 3 for more details ): + +
      +
    • spatial location accuracy,
    • +
    • appearance diversity,
    • +
    • image quality,
    • +
    • aspect distribution,
    • +
    • camera framing
    • +
  • +
  • jointly detect object-action instances in uncontrolled videos using an end-to-end two stream network architecture (see chapter 4 for more details )
  • +
  • propose the ACtion Tubelet detector (ACT-detector), which takes as input a sequence of frames and outputs tubelets, i.e. sequences of bounding boxes with associated scores (see chapter 5 for more details).
  • +
+",30392,,30392,,12/12/2019 10:46,12/12/2019 10:46,,,,1,,,,CC BY-SA 4.0 +17068,1,,,12/12/2019 12:41,,1,34,"

+ +

Is anyone able to explain how to do this? I'm not looking for the complete answer, I would settle for a ""how to for dummies"" explanation of how this is supposed to be solved.

+ +

I understand constraints, but in the first example it would seem to me that the second half of the first partial assignment, $x_2=-1$, is a violation of constraint 2, which says $x_1 > x_2$... when down below it says both $x_1$ and $x_2$ are $-1$.

+",32036,,30789,,12/12/2019 17:06,12/12/2019 17:06,CSP Formulation of an algebraic problem,,0,0,,,,CC BY-SA 4.0 +17069,1,,,12/12/2019 14:47,,1,39,"

I'm using DQN to train multiple versions of the same system, and there is only a small difference when I run them separately. However, my results suddenly dropped in both versions when I ran them both at the same time. I tried again, but I got the same results with only slight differences. Could running multiple versions of my system at the same time be affecting my results?

+ +

Is there any explanation for that, and how can I get accurate results when I train multiple versions of the same system at the same time?

+",21181,,21181,,12/18/2019 14:56,12/18/2019 14:56,Is the training of multi-version of the same system at the same time affecting the results?,,0,6,,,,CC BY-SA 4.0 +17070,2,,17044,12/12/2019 20:15,,6,,"

Dropout only ignores a portion of the units during a single training batch update. Each training batch will use a different combination of units, which gives each such subset the best chance of learning to work together and generalize. Note that the weights for each unit are kept and will be updated during the next batch in which that unit is selected. During inference, yes, all units are used (with a factor applied to the activation... the same factor that defines the fraction of units used), so this becomes essentially an ensemble of all the different combinations of units that were used.

+

Contrast this with fewer units: the fewer-units approach will only learn what those fewer units can be optimized for. Think of dropout as an ensemble of layers of fewer units (with the exception that there is partial weight sharing between the layers).

+",31608,,36737,,4/13/2021 15:55,4/13/2021 15:55,,,,1,,,,CC BY-SA 4.0 +17071,2,,17044,12/12/2019 21:01,,2,,"

The idea of dropout is that, at training time, with a certain probability $p_i \in [0, 1]$, the unit (or neuron) $i$ is dropped, $\forall i$, that is, the output of unit $i$ is set to zero so that $i$ does not affect the other units it is connected to, both during the forward and backward (or back-propagation) passes (or steps). At every mini-batch, you randomly drop usually different units, so, across different mini-batches (and consequently epochs), you do not always or necessarily drop the same units.

+ +

The title of the paper Improving neural networks by preventing co-adaptation of feature detectors emphasizes that dropout prevents the co-adaptation of the units (the feature detectors), so units attempt to detect certain features independently of other units, which reduces overfitting, that is, it improves the generalization ability of the neural network.

+ +

At test time, no unit is usually dropped. However, there is an approximation of a deep Gaussian process and Bayesian neural network that is based on the application of dropout at training and test times. This is called Monte Carlo dropout or, in short, MC dropout, for reasons you can understand if you read the paper Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.
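As an illustration only (not the reference implementation from the paper), MC dropout can be sketched in Keras by keeping dropout active at prediction time and averaging several stochastic forward passes:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1),
])

x = np.random.rand(1, 10)
# dropout stays active, so repeated forward passes give a distribution over predictions
samples = np.stack([model(x, training=True).numpy() for _ in range(100)])
mean, uncertainty = samples.mean(axis=0), samples.std(axis=0)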

+ +

There's also the possibility to drop the connections between the neurons, which is called DropConnect, rather than the neurons themselves. These two approaches are slightly different, even though DropConnect can be seen as a generalization of dropout. In DropConnect, you do not switch off completely the units, but only the contributions of certain units to the output of other units. In dropout, you completely switch off certain units.

+ +

If you decided to deterministically (and manually) reduce the number of units before training, essentially, you would train a fixed smaller network, but this will not necessarily reduce overfitting or, more precisely, co-adaptation of the units. In dropout, you randomly select the units to drop, so, at every mini-batch (or epoch, depending on the implementation of dropout), you effectively train a random subset of the units of the original neural network and, because of this, it can be thought of as an ensemble of smaller neural networks.

+ +

The two papers Improving neural networks by preventing co-adaptation of feature detectors (2012) and Dropout: A Simple Way to Prevent Neural Networks from Overfitting (2014) have exactly the same authors, but the latter paper was published in the Journal of Machine Learning Research, while the former wasn't apparently published in any journal. In fact, Dropout: A Simple Way to Prevent Neural Networks from Overfitting does not even cite Improving neural networks by preventing co-adaptation of feature detectors, but it cites the master's thesis Improving Neural Networks with Dropout (2013) by Nitish Srivastava, who is one of the authors of dropout.

+",2444,,2444,,12/12/2019 22:13,12/12/2019 22:13,,,,2,,,,CC BY-SA 4.0 +17072,1,,,12/12/2019 22:24,,1,100,"

My professor gave us a workshop where we have to do classification of a dataset of ECG signals between healthy and unhealthy types using LSTM. Each signal consists of 1,285 time steps.

+ +

What my prof did was to cut up each signal into segments 24 time steps long, advancing 6 steps for the next segment. In other words, for the following signal

+ +
0, 1, 2, ... 1283, 1284, 1285
+
+ +

It will be cut up into the following segments

+ +
0, 1, 2, ... 21, 22, 23
+6, 7, 8, ... 27, 28, 29
+12, 13, 14, ... 33, 34, 35
+...
+
+ +

These segments are the input to the LSTM model for each signal to be classified.

+ +

Using the code that my prof used to cut the signal into segments, and feeding that into Tensorflow-Keras InputLayer, it tells me that the output shape is (None, 211, 24).
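For reference, here is a small sketch of how I understand the windows are built (made-up code, just to make the shapes concrete):

import numpy as np

signal = np.random.rand(1285)                 # one ECG signal with 1285 time steps
segments = np.stack([signal[i:i + 24] for i in range(0, len(signal) - 24 + 1, 6)])
print(segments.shape)                         # (211, 24): 211 segments of 24 steps each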

+ +

However, I am told by a classmate that the correct implementation for Tensorflow-Keras LSTM should be (None, 24, 211). He tried clarifying with the prof but it seems the prof doesn't really understand what my classmate's trying to say. I tried to Google but examples I can find online are either of the two cases below:

+ +
    +
  • The format of an input signal should be (None, # timesteps, # features). However, in my problem the signal only has one feature that is chopped up into segments. In this example I found online, there's no mention of segments.
  • +
  • The only example of signals being cut into segments I can find is when each segment is a single input. That is not the case for me. My input is a single signal, cut up into 211 segments which collectively make up a single input.
  • +
+ +

Which is the correct input shape for my LSTM model? My prof used his method and achieved 96% classification accuracy, and our assignment is to surpass that rate, but when I tested using what my classmate said is the correct shape, my LSTM model with the exact same architecture, hyperparameters, etc. gave me a flat 74.04% accuracy from the 1st epoch all the way to the end, without ever changing. So which is wrong?

+",23844,,,,,12/12/2019 22:24,What is the correct input shape for my LSTM network?,,0,0,,,,CC BY-SA 4.0 +17073,2,,17028,12/13/2019 4:19,,3,,"

Perhaps Occam's razor counts.

+ +

Occam's razor is the meta-heuristic that ""the simplest explanation is the most likely to be correct"". I consider it a meta-heuristic because itself doesn't provide explanations, only means of comparing them among another. It is a heuristic because like all heuristics, it can lead to false conclusions, but is usually right.

+",6779,,,,,12/13/2019 4:19,,,,0,,,,CC BY-SA 4.0 +17074,1,17131,,12/13/2019 6:55,,1,346,"

I need to understand whether it is better to use AI algorithms (ML, DL, etc.) instead of a classic parser (based on grammars with regular expressions and automata) for the following task: structuring unstructured plain text into XML.

+ +

The text is a legal document, so the structure is well defined and a classic parser could do a good job.

+ +

In the case AI could be a viable way, what would be an appropriate approach for the task?

+",32050,,2444,,12/13/2019 12:55,12/16/2019 18:09,Would AI be appropriate for converting unstructured text into an XML?,,2,0,,,,CC BY-SA 4.0 +17075,2,,17074,12/13/2019 8:38,,3,,"

A rule-based approach will guarantee a correct result, and it works perfectly fine. On the other hand, an AI-based approach will introduce errors, as AI cannot produce results with 100% accuracy, and it will also decrease speed. As the document you are processing is a legal document, it would be better to use a parser, since AI would only add unnecessary processing time and inaccurate results.

+ +

Hope I can help you.

+",23713,,,,,12/13/2019 8:38,,,,1,,,,CC BY-SA 4.0 +17076,2,,17061,12/13/2019 8:42,,1,,"

If you give it a name, it will probably be an almost perfect model, as the number of hidden units is definitely enough to store the raw data of a name. However, as a neural network, it will still not be perfect. There may also be serious overfitting if you do this. A Seq2Seq model has way more parameters than necessary to just ""remember"" all the possible names, as there are not many name variants.

+",23713,,,,,12/13/2019 8:42,,,,0,,,,CC BY-SA 4.0 +17077,1,,,12/13/2019 9:05,,1,45,"

We suspect problems in the salary distribution in our organisation due to the employee's region. The data we have is the following:

+ +
Name
+Region
+Work Position (4 main positions)
+Salary
+Gender
+
+ +

What machine learning technique should we use to check for and detect a malicious salary distribution? Clustering?

+",26028,,26028,,12/13/2019 11:58,12/13/2019 13:53,How to detect patterns in salary distribution if we are suspecting malicious distribution based on employee's region?,,1,0,,,,CC BY-SA 4.0 +17079,2,,17077,12/13/2019 13:53,,2,,"

A simple initial approach would be to separate it by position and check for each:

+ +

Use linear regression: $\hat{salary} = \sum_i \alpha_i * \hat{region}_i + \sum_k \beta_k * \mathbf{1}[\hat{gender}=k]$ and now you have an intuitive measure by looking at $\alpha$'s and $\beta$'s.
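As a sketch of how this could be fitted (statsmodels is just one option; the toy data below is a stand-in for your real table, and the column names are assumptions):

import pandas as pd
import statsmodels.formula.api as smf

# toy stand-in for the real table (columns: salary, region, gender)
df = pd.DataFrame({
    'salary': [50000, 52000, 48000, 61000],
    'region': ['north', 'north', 'south', 'south'],
    'gender': ['f', 'm', 'f', 'm'],
})
fit = smf.ols('salary ~ C(region) + C(gender)', data=df).fit()
print(fit.params)   # the fitted region/gender coefficients play the role of the alphas and betas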

+ +

2 issues that may arise with this method:

+ +
    +
  • This assumes though that a linear model is a good fit, which it very well might not be
  • +
  • The amount of data points may be small and this will give you point estimates without any sense of confidence.
  • +
+ +

2 possible respective solutions:

+ +
    +
  • Use a stronger model with less intuitive parameters (like a neural network) and then use common visualization techniques to determine the weight of each input (this could be as simple as changing the inputs and looking at the outcome, or something more complex that utilizes the gradient). But even more complex models can still suffer from the second problem, which is why I'd recommend option 2.
  • +
  • Use a Bayesian model; you could make something hierarchical, or something as simple as the above linear regression with a non-informative prior, and eventually you'd be able to build confidence intervals and get a better sense of the associations (note: if you're looking for what to write this in, I recommend JAGS for its ease of use).
  • +
+",25496,,,,,12/13/2019 13:53,,,,6,,,,CC BY-SA 4.0 +17080,1,,,12/13/2019 15:41,,5,1165,"

When exactly is a model considered over-parameterized?

+

There is some recent research in deep learning about the role of over-parameterization in generalization, so it would be nice if I could know what exactly can be considered as such.

+

A hand-wavy definition is: ""over-parameterized"" is often used to describe a model that is bigger than necessary to fit your data.

+

In some papers (for example, in A Convergence Theory for Deep Learning +via Over-Parameterization), over-parameterization is described as:

+
+

they have much more parameters than the number of training samples

+

meaning that the number of neurons is polynomially large comparing to the input size

+

the network width is sufficiently large: polynomial in $L$, the number of layers, and in $n$, the number of samples

+
+

Shouldn't this definition depend on the type of input data as well?

+

For example, I fit:

+
    +
  • 1M-parameters model on 10M samples of 2 binary features, then it should not be over-parameterized, or

    +
  • +
  • 1M-parameters model on 0.1M samples of 512x512 images, then is over-parameterized, or

    +
  • +
  • the model in the paper Exploring the Limits of +Weakly Supervised Pretraining "IG-940M-1.5k ResNeXt-101 32×48d" with 829M parameters, trained on 1B Instagram images, is not over-parameterized

    +
  • +
+",32055,,-1,,6/17/2020 9:57,5/14/2020 18:00,When exactly is a model considered over-parameterized?,,1,0,,,,CC BY-SA 4.0 +17084,1,17085,,12/13/2019 18:53,,13,1636,"

I previously asked a question about How can an AI freely make decisions?. I got a great answer about how current algorithms lack agency.

+

The first thing I thought of was reinforcement learning, since the entire concept is oriented around an agent getting rewarded for performing a correct action in an environment. It seems to me that reinforcement learning is the path to AGI.

+

I'm also thinking: what if an agent was proactive instead of reactive? That would seem like a logical first step towards AGI. What if an agent could figure out what questions to ask based on their environment? For example, it experiences an apple falling from a tree and asks "What made the Apple fall?". But it's similar to us not knowing what questions to ask about say the universe.

+",31978,,2444,,2/12/2021 17:21,7/25/2021 15:45,Why is reinforcement learning not the answer to AGI?,,2,0,,,,CC BY-SA 4.0 +17085,2,,17084,12/13/2019 19:58,,16,,"

Some AI researchers do think RL is a path to AGI, and your intuition about how an agent would need to be proactive in selecting actions to learn about is exactly the area these researchers are now focused on.

+

Much of the work in this area is focused on the idea of curiosity, and since 2014 this idea has gained a lot of traction in the research community.

+

So, maybe RL can lead to AGI. We don't know for sure yet.

+

However, many of the classic arguments against AGI aren't addressed by the RL approach. For instance, if like Searle, you think computers just don't have the right kind of hardware to do thinking, then running an RL algorithm on that hardware isn't going to yield AGI, just ever increasingly robust narrow AI. Ultimately Searle's arguments get into issues of metaphysics, so it isn't clear that there exists any argument that would convince someone like Searle that a particular computer-based technique is AGI-capable.

+

There are also other arguments. For example, the cognitivist school of thought thinks that statistical learning approaches to AI, and, in particular, the black-box approaches of statistically-driven RL, are unlikely to lead to general intelligence because they do not engage in the kind of systematic reasoning process that proponents of cognitivism assume is necessary for general intelligence. Some more extreme proponents of this school might say that a logical planning algorithm like STRIPS is innately more intelligent than any approach based on deep learning, because it involves sound logical deduction rather than mere statistical calculation. In particular, STRIPS can correctly generalize to any new domain, as long as it is fed the correct sense data, while an RL approach will need to learn how to act there.

+

So, while there are definitely reasons to be optimistic about RL as a direction for achieving AGI, it's definitely not yet settled.

+",16909,,2444,,7/24/2021 14:03,7/24/2021 14:03,,,,0,,,,CC BY-SA 4.0 +17087,2,,17055,12/13/2019 21:42,,1,,"

There are definitely approaches that are theory driven (like SVMs), and others where the theory comes after the practice (like a lot of deep neural networks). I think it would be difficult to argue strongly that either direction is more common ""in general"" within AI, or indeed, within any other branch of science.

+ +

The approach that is currently in favor will tend to change over time as well. If we develop a new method that we don't understand, it will tend to attract interest from theorists. If they are able to understand it well, that understanding will likely lead to natural next steps that improve the model. As with other areas of science, the empirical cycle is at work.

+",16909,,,,,12/13/2019 21:42,,,,0,,,,CC BY-SA 4.0 +17089,1,,,12/14/2019 3:32,,1,48,"

In target tracking, the dimensions of objects change, especially if they are detected using a LIDAR sensor. Also, the static objects in consecutive frames are not 100% static, their position changes a little bit, due to the point cloud segmentation algorithms (which is somehow expected).

+ +

After I associate a tracklet (which maintains the object's previous dimension) and a measurement (that has changed its dimension in the current frame) and perform a Kalman update, a small velocity is induced in the new updated tracklet, even if my object is static (I am considering the reference point of the object and tracking its center).

+ +

Is there any solution for not inducing and displaying such a velocity in the updated tracklets?

+",15775,,2444,,12/14/2019 15:08,12/14/2019 15:08,How can I avoid displaying the velocity in the updated tracklets?,,0,0,,,,CC BY-SA 4.0 +17091,2,,17064,12/14/2019 4:45,,1,,"

I will assume that you need to extract (crop) the diagram from the PDF research paper. You can use PyPDF2 or PyMuPDF to extract the images from the PDF file, and then you can apply machine learning to do recognition and classification of the images. There are different types of machine learning solutions for image classification; you can start with a Convolutional Neural Network, and you can start here.
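As a rough sketch of the extraction step with PyMuPDF (the file name is a placeholder, and the exact method names can vary between PyMuPDF versions):

import fitz  # PyMuPDF

doc = fitz.open('paper.pdf')   # placeholder file name
for page_index, page in enumerate(doc):
    for img_index, img in enumerate(page.get_images(full=True)):
        xref = img[0]                        # reference to the embedded image
        info = doc.extract_image(xref)       # dict with the raw bytes and the file extension
        out_name = 'page%d_img%d.%s' % (page_index, img_index, info['ext'])
        with open(out_name, 'wb') as f:
            f.write(info['image'])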

+ +

For more information on the recognition and classification of figures, see these, and for the extraction of figures from scholarly documents, read this.

+",21181,,21181,,12/14/2019 4:59,12/14/2019 4:59,,,,0,,,,CC BY-SA 4.0 +17093,1,,,12/14/2019 8:03,,1,16,"

I'm working on a bilateral recommendation system, but I am not able to find many related papers. Could anyone suggest any relevant papers?

+ +

Thanks

+",32073,,,,,12/14/2019 8:03,Anyone familiar with Bilateral Recommendation System? And suggest any related papers?,,0,0,,,,CC BY-SA 4.0 +17094,1,,,12/14/2019 8:07,,6,276,"

In standard Reinforcement Learning the reward function is specified by an AI designer and is external to the AI agent. The agent attempts to find a behaviour that collects higher cumulative discounted reward. In Evolutionary Reinforcement Learning the reward function is specified by +the agent’s genetic code and evolved in simulated Darwinian evolution over multiple generations. +Here too the AI agent cannot directly adjust the reward function and instead adjusts its behaviour +towards collecting higher rewards. Why do both approaches prevent the AI agent from changing its reward function at will? What happens if we do allow the AI agent to do so?

+",32074,,,,,12/15/2019 9:43,Why cannot an AI agent adjust the reward function directly?,,1,0,,,,CC BY-SA 4.0 +17095,2,,10049,12/14/2019 8:08,,9,,"

Recent actor-critic algorithms do use $\lambda$-returns, but they are disguised as something called the Generalized Advantage Estimator defined as $A^{GAE}_t = \sum_{i=0}^{\infty} (\gamma\lambda)^i \delta_{t+i}$ where $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. This turns out to be identically equal to $[G^\lambda_t - V(s_t)]$, i.e. the $\lambda$-return with a value-function baseline subtracted from it. Theoretically, any actor-critic gradient method could use this quite easily; it was combined with TRPO in the GAE paper, and later used for PPO. Similarly, ACER uses an off-policy variant known as Retrace($\lambda$).

+

For replay methods like DQN or DDPG, it is harder to implement $\lambda$-returns. This is why they have historically defaulted to $n$-step returns as @DennisSoemers mentioned. I recently published a paper that describes a way to efficiently combine $\lambda$-returns with experience replay, which I hope will increase the popularity of $\lambda$-returns for these methods.
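For reference, a straightforward (unoptimised) sketch of computing $A^{GAE}_t$ backwards over a finished trajectory, ignoring episode termination for simplicity:

import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    # values has one extra entry: the estimate for the state after the last reward
    advantages = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # the TD error delta_t
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages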

+",32070,,-1,,8/11/2020 16:59,8/11/2020 16:59,,,,0,,,,CC BY-SA 4.0 +17096,2,,17094,12/14/2019 8:49,,5,,"
+

Why do both approaches prevent the AI agent from changing its reward function at will?

+
+ +

In RL for optimal control, the reward function is part of the problem formulation. That is, it describes the goals of the agent. Sometimes this is obviously something that should not be under the agent's control, if the reward is a real-world quantity that it should maximise - e.g. the amount of money it makes in profit - then it makes no sense that the agent could arbitrarily declare that the quantity is different to the thing that it observed.

+ +

Other times, there is some flexibility, an agent that needs to escape a maze in a short time could have -1 reward per time step inside the maze or -0.1 reward per time step, or +1 reward for escaping with a discount factor applied. However, the flexibility can only go so far before it describes a different problem. Changing the -1 per time step to +1 per time step means that the agent's goal switches from escaping to staying in the maze.

+ +

In general, multiplying all the rewards in a MDP by some positive constant does not change a reinforcement learning problem. Sometimes it may be worth doing this scaling to make it easier for a specific approach, such as neural networks, to work efficiently. However, this is not something to put directly under the agent's control, but a hyperparameter like the number of hidden layers in the neural network. As a hyperparameter, usually the reward scaling is very flexible and not something worth spending much effort tuning - unlike the architecture of a neural network.

+ +
+

What happens if we do allow the AI agent to do so?

+
+ +

Unless significant constraints are placed on what is allowed to change, then the agent will get any amount of reward it ""wants"" by doing anything it ""wants"", within whatever constraints are placed on changes allowed to the reward function. Typically in RL this would result in an agent that acts more or less randomly whilst getting progressively higher and higher reward on each iteration. Or in other words, an agent that does not attempt to solve any kind of problem.

+ +

There are a few special cases where a reward function can be adjusted or learned. One common case is inverse reinforcement learning, where an agent's activities are observed, it is assumed to be solving an MDP-like problem, and you are interested in understanding how it solves it, including what reward function it is using. The reward function must be learned by fitting it to observations of the agent.

+",1847,,1847,,12/15/2019 9:43,12/15/2019 9:43,,,,0,,,,CC BY-SA 4.0 +17097,1,,,12/14/2019 10:06,,2,184,"

Is there a neural network that has architecture optimizations for segmenting only one class (object and background)? I have tried U-net but it is not providing good enough results.

+ +

I am wondering if this can be due to the fact that my dataset has different image resolutions/aspect ratios.

+",30995,,2444,,6/13/2020 0:09,6/13/2020 0:09,How to train image segmentation task with only one class?,,0,3,,,,CC BY-SA 4.0 +17098,1,,,12/14/2019 11:47,,2,109,"

I'm trying to develop a real-time application that, from the sequence of chalkboard images captured by a webcam, recognizes the lines being drawn on it.

+ +

It must be able to distinguish the lines from the chalkboard background, filter out the presence of the teacher in the image, and translate these lines into some representation, for instance a list of basic events like ""start of line at xxx,xxx"", ""continue line at xxx,xxx"", ...

+ +

After several days looking for references and bibliography, none is found. The most similar are the character recognition applications, in particular when they have a stroke recognition stage.

+ +

Any hint ?

+ +

The input will be a sequence like this one, this one or this one (just without the presence of the students). I expect the teacher not to hide his hand. We can assume the recording starts with an empty chalkboard.

+ +

Thanks.

+ +

Note: I am looking for more than an answer which says only something similar to ""you can use a deep learning training it with two classes"", without details or references.

+",12630,,12630,,12/14/2019 12:41,5/29/2023 0:02,Recognition of lines in a chalkboard,,2,3,,,,CC BY-SA 4.0 +17100,1,20322,,12/14/2019 13:00,,5,315,"

How can we prove that an auto-associator network will continue to perform if we zero the diagonal elements of a weight matrix that has been determined by the Hebb rule? In other words, suppose that the weight matrix is determined from $W = PP^T- QI$, where $Q$ is the number of prototype vectors.

+ +

I have been given a hint: show that the prototype vectors continue to be eigenvectors of the new weight matrix.

+ +

This is a question from Neural Network Design (2nd Edition) book by +Martin T. Hagan, Howard B. Demuth, Mark H. Beale, Orlando De Jesus .

+ +

Resource : E7.5 p 224-225

+",32076,,32076,,12/23/2019 10:27,4/16/2020 16:34,How can we prove that an autoassociator network will continue to perform if we zero the diagonal elements of a weight matrix?,,1,3,0,,,CC BY-SA 4.0 +17101,1,17111,,12/14/2019 13:47,,2,4531,"

How can we theoretically compute the number of weights considering a convolutional neural network that is used to classify images into two classes:

+
    +
  • INPUT: 100x100 gray-scale images.
  • +
  • LAYER 1: Convolutional layer with 60 7x7 convolutional filters (stride=1, valid +padding).
  • +
  • LAYER 2: Convolutional layer with 100 5x5 convolutional filters (stride=1, valid +padding).
  • +
  • LAYER 3: A max pooling layer that down-samples Layer 2 by a factor of 4 (e.g., from 500x500 to 250x250)
  • +
  • LAYER 4: Dense layer with 250 units
  • +
  • LAYER 5: Dense layer with 200 units
  • +
  • LAYER 6: Single output unit
  • +
+

Assume the existence of biases in each layer. Moreover, the pooling layer has a weight (similar to AlexNet)

+

How many weights does this network have?

+

Here would be the corresponding model in Keras, but note that I am asking for how to calculate this with a formula, not in Keras.

+
import keras
+from keras.models import Sequential
+from keras.layers import Dense, Flatten
+from keras.layers import Conv2D, MaxPooling2D
+
+model = Sequential()
+model.add(Conv2D(60, (7, 7), input_shape = (100, 100, 1), padding="valid", activation="relu")) # Layer 1
+model.add(Conv2D(100, (5, 5), padding="valid", activation="relu")) # Layer 2
+model.add(MaxPooling2D(pool_size=(2, 2))) # Layer 3
+model.add(Flatten())
+model.add(Dense(250)) # Layer 4
+model.add(Dense(200)) # Layer 5
+model.add(Dense(1)) # Layer 6
+
+model.summary()
+
+",32076,,2444,,12/28/2021 22:42,12/28/2021 22:49,How to compute the number of weights of a CNN?,,1,1,,,,CC BY-SA 4.0 +17102,1,,,12/14/2019 13:55,,7,1256,"

I'm looking to make an NLP model that can achieve a dual purpose. One purpose is that it can hold interesting conversations (conversational AI), and another being that it can do intent classification and even accomplish the classified task.

+ +

To accomplish this, would I need to use multimodal machine learning, where you combine the signal from two models into one? Or can it be done with a single model?

+ +

In my internet searches, I found BERT, developed by Google engineers (although apparently not a Google product), which is an NLP model trained in an unsupervised fashion on 3.3 billion words or more and seems very capable.

+ +

How can I leverage BERT to make my own conversational AI that can also carry out tasks? Is it as simple as copying the weights from BERT to your own model?

+ +

Any guidance is appreciated.

+",20271,,2444,,5/14/2020 20:06,10/11/2020 21:02,How to use BERT as a multi-purpose conversational AI?,,1,0,,,,CC BY-SA 4.0 +17104,1,,,12/14/2019 14:56,,3,101,"

I wonder what is the better way of drawing rectangles on images for gender classification. My task is to create a classifier (CNN based) to detect gender from pictures of entire bodies (not just faces). When I started labeling pictures I noticed that I am not sure whether I should draw it around an entire person like example 1 (including hands and legs and some background space between them) or just the inner part like example 2 (where there is almost no background), in order to achieve better results?

+ +

Example 1

+ +

+ +

Example 2

+ +

+",22659,,22659,,12/15/2019 14:25,9/10/2020 16:02,How to draw bounding boxes for gender classification?,,1,0,,,,CC BY-SA 4.0 +17105,1,,,12/14/2019 21:11,,2,36,"

Secondary camera, ghost overlay, video merge... I do not know if what I mean has a more specific name.

+ +

I wonder if this is a thing. This could be insightful for example in racing sports where participants race one after another e.g. alpine skiing, downhill mountain bike, showjumping etc. E.g. comparing the current starter to the leader.

+ +

Given the camera position is fixed and only the camera angle and zoom is varying to focus on the current starter, the tasks to be able to overlay videos would be to:

+ +
    +
  • match the timing, i.e. both videos start when the timer starts
  • +
  • align and overlay the videos according to specific marker points. Keypoint detection and tracking.
  • +
  • get the opacity right so that both videos are visible
  • +
+ +

My question is if there is any research on this. If so, what keywords do I need to search for?

+ +

+ +

Edit: +My search led me to SIFT (Scale Invariant Feature Transform) and SURF (Speeded-Up Robust Features). Feature matching should be possible with kNN or brute force. A lot can be done with OpenCV.
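
A rough OpenCV sketch of what I have in mind (the file names are placeholders, and ORB is just one possible detector instead of SIFT/SURF):

+
import cv2
+import numpy as np
+
+img1 = cv2.imread('frame_leader.png', cv2.IMREAD_GRAYSCALE)
+img2 = cv2.imread('frame_current.png', cv2.IMREAD_GRAYSCALE)
+
+orb = cv2.ORB_create(nfeatures=1000)
+kp1, des1 = orb.detectAndCompute(img1, None)
+kp2, des2 = orb.detectAndCompute(img2, None)
+
+matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
+matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
+
+# with enough good matches, warp one frame onto the other and blend them
+src = np.float32([kp1[m.queryIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
+dst = np.float32([kp2[m.trainIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
+H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
+warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
+overlay = cv2.addWeighted(warped, 0.5, img2, 0.5, 0)   # 50/50 opacity blend
+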

+",32082,,32082,,12/15/2019 7:45,12/15/2019 7:45,Ghost camera or video overlays for example in sports,,0,0,,,,CC BY-SA 4.0 +17107,1,17109,,12/15/2019 9:41,,2,1155,"

While studying backpropagation in CNNs, I can't understand how we can compute the gradient of max pooling with overlapping regions.

+

That's also a question from this quiz, and it can also be found in this book.

+",32076,,2444,,1/1/2022 10:14,1/1/2022 10:14,How can we compute the gradient of max pooling with overlapping regions?,,2,0,,,,CC BY-SA 4.0 +17108,1,17156,,12/15/2019 11:35,,3,143,"

I have a question about a reinforcement learning problem.

+ +

I'm training an agent to add or delete pixels in a [12 x 12] 2D space (going to be 3D in the future). +Its action space consists of two discrete outputs: x[0-12] and y[0-12].

+ +

What would be the value of instead outputting a (continuous) probabilistic output representation, like the [12 x 12] space with each pixel as a probability, and sampling from it. E.g. a softmax function applied to 144 (12*12) output nodes.

+ +

My environment is deterministic itself: taking action 𝑎 in state 𝑠 always results in the same next state 𝑠′.

+ +

I understand that this may be more difficult to train since the output space becomes continuous instead of discrete, and therefore bigger, but does stochastic/probabilistic output have any benefits over 1 discrete output?

+ +

Thanks!

+",31180,,31180,,12/15/2019 11:49,12/18/2019 18:46,What's the value of making the RL agent's output stochastic opposed to deterministic?,,1,0,,,,CC BY-SA 4.0 +17109,2,,17107,12/15/2019 13:42,,2,,"

When gradients in a neural network can follow multiple paths to same parameter, the different gradient values from the sources can often be added together, because the operations in the forward direction are also sums and $\frac{d}{dx}(y+z) = \frac{dy}{dx} + \frac{dz}{dx}$.

+ +

That is the case already with gradients of kernels (which are sums over the image area), and is equally the case for overlapping aggregation, including maximums, minimums or averages.

+ +

So in the 1d case, if you have a max pool over the input params $[a_0, a_1, a_2, a_3, a_4]$, a max function $m_0 = max(a_0, a_1, a_2)$, $m_1 = max(a_2, a_3, a_4)$ which overlap at $a_2$, and gradients $\nabla_{\mathbf{m}} J = [\frac{\partial J}{\partial m_0}, \frac{\partial J}{\partial m_1}]$, then you would allocate those gradients to vector $\mathbf{a}$ according to which item in each group was the max of that group, adding them when they overlapped.

+ +

Examples:

+ +

If $\mathbf{a} = [3,0,1,2,0]$ and $\nabla_{\mathbf{m}}J = [0.7, 0.9]$, then $\nabla_{\mathbf{a}}J = [0.7, 0, 0, 0.9, 0]$

+ +

If $\mathbf{a} = [3,0,4,2,0]$ and $\nabla_{\mathbf{m}}J = [0.7, 0.9]$, then $\nabla_{\mathbf{a}}J = [0, 0, 1.6, 0, 0]$
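
A small NumPy sketch (my own) that reproduces the two examples above for a 1D max pool of size 3 and stride 2:

+
import numpy as np
+
+def maxpool1d_backward(a, grad_out, size=3, stride=2):
+    # route each output gradient to the argmax of its (possibly overlapping) window,
+    # summing when windows share the same input element
+    grad_in = np.zeros_like(a, dtype=float)
+    for k, g in enumerate(grad_out):
+        start = k * stride
+        window = a[start:start + size]
+        grad_in[start + np.argmax(window)] += g
+    return grad_in
+
+print(maxpool1d_backward(np.array([3, 0, 1, 2, 0]), [0.7, 0.9]))  # [0.7 0.  0.  0.9 0. ]
+print(maxpool1d_backward(np.array([3, 0, 4, 2, 0]), [0.7, 0.9]))  # [0.  0.  1.6 0.  0. ]
+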

+",1847,,1847,,12/16/2019 7:14,12/16/2019 7:14,,,,0,,,,CC BY-SA 4.0 +17110,2,,17104,12/15/2019 14:57,,2,,"

In state-of-the-art papers, researchers have managed to achieve 90-100% accuracy on gender classification from just human faces, so both methods may work just fine, with maybe a minor improvement from the first one. However, using the image of the entire body increases the risk of overfitting, since there are more input features. For a very large dataset it might be better to use the first method, as the chances of overfitting are low, while for a smaller one you should probably go for the second method or even just the face, as that won't make such a huge difference.

+ +

A better way to input the data to the network is through landmarks of a body. +

+ +

Inputting the landmarks of the body reduces the features to a minimum and keeps only the relevant ones. The main difference in body shape between women and men is perhaps the torso and shoulder width, which you can get from body landmark detection. You can feed both the image of the face and the body landmarks to the network, and it would probably increase the accuracy. However, this solution is not a one-stage method, meaning it requires an extra model to predict the landmarks.

+ +

Hope I can help you.

+",23713,,,,,12/15/2019 14:57,,,,0,,,,CC BY-SA 4.0 +17111,2,,17101,12/15/2019 15:15,,2,,"

Calculating the number of parameters in a CNN is very straightforward.

+

A CNN is composed of different filters, which are essentially 3D tensors. CNN weights are shared, meaning they are reused at different locations of the input. Each layer has $n$ such tensors, each with dimension $w \times h \times c$, where $w$ = width, $h$ = height, $c$ = channels (the input channel size). Therefore, the number of weights of a convolutional layer is $w * h * c * n$. There is also a bias for each output channel, so the number of biases is $n$. In the end, the parameter count is calculated with: $n * w * h * c + n$. See more about this here: Article
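
As a rough sketch, applying this formula to the first two convolutional layers of the network in the question (the dense layers additionally require the flattened size after pooling, which depends on the padding and pooling choices):

+
def conv_params(n_filters, k_h, k_w, in_channels):
+    return n_filters * k_h * k_w * in_channels + n_filters   # weights + biases
+
+layer1 = conv_params(60, 7, 7, 1)     # 60*7*7*1 + 60    = 3,000 (grayscale input, 1 channel)
+layer2 = conv_params(100, 5, 5, 60)   # 100*5*5*60 + 100 = 150,100
+
+def dense_params(n_in, n_units):
+    # a dense layer fed by n_in values has n_in*n_units weights plus n_units biases
+    return n_in * n_units + n_units
+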

+

The pooling layer does not have weights; it only has hyperparameters. You may have confused the two. There are hyperparameters such as the stride and the pooling factor, but these are predefined and not trainable.

+

For Keras, you can use the solution here.

+",23713,,2444,,12/28/2021 22:49,12/28/2021 22:49,,,,0,,,,CC BY-SA 4.0 +17112,1,,,12/15/2019 15:23,,2,95,"

I want to know which algorithm will work most efficiently for calculating nutrients present in a food dish if I am giving the ingredients used in the food. Basically, let us assume that I want to make a health status for a person A based on the intake of food and based on it create a diet for him.

+",32091,,2444,,12/15/2019 15:35,12/15/2019 15:35,How can I predict the nutrients in dishes given the ingredients used to prepare them?,,0,2,,,,CC BY-SA 4.0 +17113,1,17119,,12/15/2019 15:53,,2,173,"

Is every process (such as data acquisition, splitting the data for validation, data cleaning, or feature engineering) that is done on the data before we train the model always called the pre-processing part? Or are there some processes that are not included?

+",16565,,2444,user9947,9/24/2021 0:22,9/24/2021 0:22,"How to define the ""Pre-Processing"" in machine learning?",,1,2,,,,CC BY-SA 4.0 +17114,1,,,12/15/2019 16:55,,1,176,"

I am very interested in the application of CycleGANs. I understand the concept of unpaired data and it makes sense to me.

+

But now a question comes to my mind: what if I have enough paired image data? Is a CycleGAN then over-engineering, if I use it in a "supervised" setting (the input matches the label - but it is still a CycleGAN)? For what kind of application could it be useful? Would it be more useful to process it using a "normal" supervised setting?

+

So, basically, my question is whether it is useful to train a CycleGAN in a supervised setting?

+",32029,,2444,,9/17/2020 17:51,9/17/2020 17:51,Is it useful to train a CycleGAN in a supervised setting?,,0,1,,,,CC BY-SA 4.0 +17116,2,,17098,12/15/2019 17:17,,0,,"

I will assume that the camera is stable (no change in position, zoom or other settings during the video recording), otherwise the task becomes markedly more complicated.

+ +

Let's say that your dataset is an array of rasters (images in array format). You mention that you want to detect events ""start line"" and ""end line"".

+ +

One way of doing this would be to compute an approximate time derivative of your image series. For instance, take the image raster at index idx and the one right after, at index idx+1 (captured at instants $t$ and $t + \Delta t$, where $\Delta t$ is the sampling interval).

+ +

At coordinates $(i,j)$, this derivative could look something like: timeDerivative = (images[idx+1][j][i] - images[idx][j][i])/DeltaT. This is a crude estimate, and there are better ways of computing an approximate discrete derivative, but you get the idea.
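
A minimal NumPy sketch of this frame-differencing idea (the threshold is an arbitrary placeholder that would need tuning):

+
import numpy as np
+
+def chalk_changes(prev_frame, next_frame, dt=1.0, threshold=30):
+    # approximate time derivative between two grayscale frames and keep only
+    # pixels that got noticeably brighter (new chalk appearing on a dark board)
+    derivative = (next_frame.astype(float) - prev_frame.astype(float)) / dt
+    new_chalk = derivative > threshold            # boolean mask of candidate stroke pixels
+    ys, xs = np.nonzero(new_chalk)
+    return new_chalk, list(zip(xs, ys))           # mask + (x, y) coordinates of the changes
+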

+ +

Following this, you could declare a state of the recording: drawing line or not drawing line. The states, we assume are always alternating, as a teacher has to take his hand off the blackboard to draw a new line.

+ +

When a derivative with large values is detected (a region in the image goes from being black to white suddenly) and the state is ""not drawing"", the event ""start line"" is recorded and the state switches to ""drawing"". While a derivative with large values continues to be detected as time goes on in the vicinity of the previous spot with a large derivative, nothing changes. Once this is no longer true, the state changes to ""not drawing"" and the event ""stop line"" is recorded at the last location with a large derivative.

+ +

This is the main idea, which can be improved with:

+ +
    +
  • Defining the area of the blackboard, either by hand or automatically
  • +
  • Thresholding the images to better isolate the chalk trace from the blackboard
  • +
  • Using a tracker such as a Kalman filter to know where to look next for the chalk
  • +
+",30210,,,,,12/15/2019 17:17,,,,2,,,,CC BY-SA 4.0 +17117,2,,17107,12/15/2019 21:58,,0,,"

Denote by x(h,w) the input to max-pooling and by y(h,w) the output.

+ +

Then $\frac{dL}{dx}(h,w)= \sum \frac{dL}{dy}(h',w')$

+ +

over all y(h',w') which have been obtained from x(h,w), i.e. such that y(h',w') = x(h,w).

+ +

related to this p 11

+",32076,,,,,12/15/2019 21:58,,,,1,,,,CC BY-SA 4.0 +17118,1,,,12/15/2019 22:21,,3,2338,"

What I really want to do is to predict an integer sequence (5 numbers with values from 1 to 50), based on a big dataset of other 5-number sequences with the same value range, created by the same random number generator. I suppose there is a way to train on the dataset so that the program finds a pattern or, based on the most common numbers, predicts the next number sequence. The more numbers it predicts correctly in the sequence, the better, of course. Any help, directions and preferably Python code would be greatly appreciated.

+ +

I recently read the following: can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number, and I am new to the AI field. The proposed code creates a sequence of 25 numbers but ends up showing 20 numbers; I do not understand why. It seems they try to do something similar, if I understand correctly.

+ +

I tried the code here: can-a-neural-network-be-used-to-predict-the-next-pseudo-random-number

+ +

It always shows the same numbers, no matter how many epochs and/or iterations I run; is that normal? +Is the last code close to what I want to accomplish?

+ +

Thanks in advance.

+",32099,,32099,,1/12/2020 22:44,1/13/2020 1:09,Can a neural network be used to predict a sequence of integers based on dataset of previously produced random numbers?,,2,0,,,,CC BY-SA 4.0 +17119,2,,17113,12/16/2019 1:26,,2,,"

Data preprocessing consists of all those techniques used to generate the final datasets (with an appropriate size, structure, and format) for the machine learning algorithms or models. Data acquisition should not be part of data preprocessing, but the step preceding it, which gathers the raw data (which may e.g. be noisy).

+ +

The book Data Preprocessing in Data Mining (2014), by Salvador García et al., which provides a good overview of the data preprocessing techniques and their connection with data mining and machine learning algorithms and models, defines data preprocessing as follows.

+ +
+

Data preprocessing includes data preparation, compounded by integration, + cleaning, normalization and transformation of data; and data reduction tasks; such as feature selection, instance selection, discretization, etc. The result expected after a reliable chaining of data preprocessing tasks is a final dataset, which can be considered correct and useful for further data mining algorithms.

+
+ +

From page 10 onwards, there is a description and categorization of the main data preprocessing techniques. I will just list them, so refer to the book for a definition and explanation of each of these techniques.

+ +
    +
  • Data Preparation + +
      +
    • Data Cleaning
    • +
    • Data Transformation
    • +
    • Data Integration
    • +
    • Data Normalization
    • +
    • Missing Data Imputation
    • +
    • Noise Identification
    • +
  • +
  • Data Reduction + +
      +
    • Feature Selection
    • +
    • Instance Selection
    • +
    • Discretization
    • +
    • Feature Extraction/Instance Generation
    • +
  • +
+ +

Here are two screenshots (from the cited book) that illustrate some of the data preparation

+ +

+ +

and data reduction techniques.

+ +

+",2444,,2444,,12/16/2019 1:34,12/16/2019 1:34,,,,2,,,,CC BY-SA 4.0 +17120,2,,10641,12/16/2019 7:50,,2,,"

NEAT does not enforce a feed-forward structure, and it also does not take any special action to avoid loops.

+

The network is evaluated in a non-recursive manner. The only non-deterministic loop the evaluation has is the loop for activating all the outputs. The pseudo code is something like this:

+
Until all the outputs are active
+    for all non-sensor nodes
+        activate node
+        sum the input
+    for all non-sensor and active nodes
+        calculate the output
+
+

NOTE1: You can use a defensive mechanism (like a counter) to avoid an infinite loop

+

NOTE2: When summing the input, outputs of nodes which were at least evaluated/calculated once are considered, otherwise their outputs are assumed to be zero.
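
A rough Python sketch of this evaluation loop, based on my reading of the pseudo code (this is not NEAT's actual code, and the network representation used here is a placeholder):

+
import math
+
+def activate(nodes, links, sensor_values, max_rounds=20):
+    # nodes: dict id -> 'sensor' | 'hidden' | 'output'; links: list of (src, dst, weight)
+    out = {n: 0.0 for n in nodes}
+    out.update(sensor_values)
+    active = {n: kind == 'sensor' for n, kind in nodes.items()}
+    output_ids = [n for n, kind in nodes.items() if kind == 'output']
+    rounds = 0
+    while not all(active[o] for o in output_ids):
+        rounds += 1
+        if rounds > max_rounds:                      # defensive counter (NOTE1)
+            break
+        sums, got_input = {}, {}
+        for n, kind in nodes.items():                # sum the input of every non-sensor node
+            if kind == 'sensor':
+                continue
+            incoming = [(s, w) for s, d, w in links if d == n]
+            # only nodes already evaluated at least once contribute; others count as 0 (NOTE2)
+            sums[n] = sum(out[s] * w for s, w in incoming if active[s])
+            got_input[n] = any(active[s] for s, _ in incoming)
+        for n, s in sums.items():                    # calculate the output of the reached nodes
+            if got_input[n]:
+                out[n] = 1.0 / (1.0 + math.exp(-s))
+                active[n] = True
+    return [out[o] for o in output_ids]
+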

+

This is the note from the author of NEAT about identifying loops,

+
+

Note that checking networks for loops in general in not necessary and therefore I stopped writing this function

+

bool Network::integrity()

+
+",32106,,-1,,6/17/2020 9:57,12/18/2019 3:38,,,,5,,,,CC BY-SA 4.0 +17121,2,,6231,12/16/2019 7:56,,5,,"

Following is the pseudo code of the NEAT's network evaluation (converted from original source code),

+ +
Until all the outputs are active
+    for all non-sensor nodes
+        activate node
+        sum the input
+    for all non-sensor and active nodes
+        calculate the output
+
+ +

Note that, according to the original author, there is no recursion involved in the feed-forward evaluation.

+",32106,,,,,12/16/2019 7:56,,,,0,,,,CC BY-SA 4.0 +17122,1,17124,,12/16/2019 8:16,,3,814,"

According to what we know about inductive and connectionist learning, what is the difference between them ?

+ +

For those who do not know about :

+ +

Inductive Learning, like what we have in decision tree and make a decision based on amount of samples

+ +

Connectionist Learning, like what we have in artificial neural network

+",31143,,,,,12/20/2019 4:23,What is the difference between Inductive Learning and Connectionist Learning?,,1,0,,,,CC BY-SA 4.0 +17123,2,,17118,12/16/2019 8:37,,3,,"

The post you linked to clearly states that pseudo-random numbers cannot be predicted. Their randomness is made to be nearly perfect, and if you ever found a way to predict a pseudo-random number with even a 20% chance of being correct, the security of the entire world would be vulnerable to attacks, as everything from cryptocurrency to secure data transfer is protected by pseudo-random numbers.

+",23713,,,,,12/16/2019 8:37,,,,0,,,,CC BY-SA 4.0 +17124,2,,17122,12/16/2019 11:40,,2,,"

All of statistical learning is about inductive learning.

+ +

What is the difference between inductive learning and connectionist learning?

+ +

Inductive learning is about identifying patterns from examples; it is more related to statistics. Connectionist learning is more about finding a common pattern and predicting, as well as self-learning (learning from the experience of prediction).

+ +

Connectionist learning is where learning occurs by modifying connection strengths based on experience. This is not the case with inductive learning: in inductive learning, we are not modifying things based on experience; it just finds common patterns, without self-learning from experience.

+ +
+

Learning requires both practice and rewards

+
+ +

In inductive learning, we learn the model from raw data (the so-called training set), and in deductive learning, the model is applied to predict the behaviour of new data.

+ +
+

Connectionist learning combines inductive learning and deductive learning.

+
+",7681,,7681,,12/20/2019 4:23,12/20/2019 4:23,,,,1,,,,CC BY-SA 4.0 +17125,1,,,12/16/2019 11:51,,3,29,"

Now, the following may sound silly, but I want to do it for my better understanding of performance and implementation of GPU inference for a set of deep learning problems.

+ +

What I want to do is to replace a surface texture of a 3D model with a NN that stores the texture data in some way and allows inferring the RGB color of an arbitrary texel from its UV coordinates. So, basically, it should offer the same functionality as the texture itself.

+ +

A regular texture lookup takes a UV coordinate and returns the (possibly filtered) RGB color at these texture coordinates.

+ +

So, I want to train a network that takes two floats in [0,1] range as input and outputs three floats of rgb color.

+ +

I further want to then train that network to store my 4096x4096 texture. So the training data I have available are 4096*4096=16777216 of <float2, float3> pairs

+ +

Finally I want to evaluate the trained network in my (OpenGL 4 or directX11) pixel shader, feeding it for every rendered pixel the interpolated UV coordinates at this pixel and retrieving the RGB value from it.

+ +

It's clear that this will

+ +
    +
  • have lower fidelity than just using the texture directly
  • +
  • use likely more memory than just using the texture directly
  • +
  • be slower than using the texture directly
  • +
+ +

and as such may be silly to do, but I'd still like to try to do this somewhat optimally, especially in terms of inference performance (I'd like to be able to run it at interactive framerates at 1080p resolutions).

+ +

Can someone point me to a class of networks or articles or describe a model and training algorithm that would be well suited for this task (especially in terms of implementing inference for the pixel shader)?

+",32111,,32111,,12/16/2019 12:56,12/16/2019 12:56,Good model and training algorithm to store texture data for fast gpu inference,,0,0,,,,CC BY-SA 4.0 +17126,1,20324,,12/16/2019 12:43,,2,81,"

How can we train a competitive layer on non-normalized vectors using LVQ technique ?

+ +

an example is given below from Neural Network Design (2nd Edition) book

+ +

The net input expression for LVQ networks calculates the distance between the input +and each weight vector directly, instead of using the inner product. The result is that the +LVQ network does not require normalized input vectors. This technique can also be +used to allow a competitive layer to classify non-normalized vectors. Such a network is +shown in figure below.

+ +

+Use this technique to train a two-neuron competitive layer on the (non-normalized) +vectors below, using a learning rate $\alpha=0.5$

+ +

$p_1=\begin{bmatrix} +1 \\ +1 +\end{bmatrix}, p_2=\begin{bmatrix} +-1 \\ +2 +\end{bmatrix}, p_3=\begin{bmatrix} +-2 \\ +-2 +\end{bmatrix}$

+ +

Present the vectors in the following order : $p_1, p_2, p_3, p_2, p_3, p_1$

+ +

Initial weights : $W_1=\begin{bmatrix} +0 \\ +1 +\end{bmatrix}, W_2=\begin{bmatrix} +1 \\ +0 +\end{bmatrix}$
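
For reference, a minimal NumPy sketch of the distance-based competitive (Kohonen) update described above (this is only an illustration, not the worked solution of the exercise):

+
import numpy as np
+
+def competitive_step(W, p, alpha=0.5):
+    distances = np.linalg.norm(W - p, axis=1)        # distance-based net input, no normalization needed
+    winner = np.argmin(distances)
+    W[winner] = W[winner] + alpha * (p - W[winner])  # Kohonen rule applied to the winning row only
+    return W
+
+W = np.array([[0.0, 1.0], [1.0, 0.0]])               # rows are the two initial weight vectors
+p1, p2, p3 = np.array([1.0, 1.0]), np.array([-1.0, 2.0]), np.array([-2.0, -2.0])
+for p in [p1, p2, p3, p2, p3, p1]:                   # presentation order from the exercise
+    W = competitive_step(W, p)
+print(W)
+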

+",32076,,,,,4/16/2020 16:41,Train a competitive layer on nonnormalized vectors using LVQ technique,,1,0,,,,CC BY-SA 4.0 +17127,1,,,12/16/2019 15:15,,1,52,"

If you have a very distorted video/image, would affine transformations of the images make object detection algorithms make more mistakes compared to a normal camera?

+",31547,,2444,,7/31/2021 12:46,7/31/2021 12:46,"If you have a very distorted image, would affine transformations applied to images make object detection algorithms make more mistakes?",,0,1,,,,CC BY-SA 4.0 +17128,2,,17102,12/16/2019 15:40,,2,,"

Think of BERT (or similar models) as a good starting place for understanding context.

+ +

A couple options to make BERT contextualize dialogue:

+ +
    +
  • Concatenate all messages with a seperator embedding and finetune a language model like BERT + +
      +
    • This has shown good results in this paper, but understand that it has weaknesses, like struggling to determine ordering, author comprehension, or conversation disentanglement. Additionally, as threads grow longer, this becomes more and more expensive
    • +
  • +
  • Use A BERT like model as an RNN and pass along author/memory information as the hidden state + +
      +
    • The weakness here is that you are putting a lot of strain on the hidden state of the model, and ideally you want to train each comment block separately so the gradient wouldn't need to unroll or traverse the network in backprop (I actually implemented a variant of this for my work, and had good results)
    • +
  • +
+ +

The 2 modes above are just baseline examples that you can alter for your own needs, and I'm sure there are others if you brainstorm about it.

+ +

Note if you want generation capabilities, you would want to use GPT2 or XLNet instead of BERT for unidirectional embeddings. Hope this helped.

+",25496,,,,,12/16/2019 15:40,,,,0,,,,CC BY-SA 4.0 +17129,2,,17080,12/16/2019 16:22,,1,,"

OK, so after a little more reading, I am currently satisfied with what I found for this question.

+ +
    +
  • Yes, the ""under-parameterized"" and ""over-parameterized"" terms do not currently have widely accepted definitions.
  • +
  • Any definition for those terms should consider the input data domain as well as the architecture and training procedure.
  • +
+ +

In a recent paper, Deep Double Descent from OpenAI, Nakkiran et al. 2019, the authors tried to formalize and generalize the concept of ""interpolation threshold"" in the ""double descent"" phenomenon, both terms popularized by Belkin et al. 2019, Reconciling modern Machine Learning practices with the Bias-Variance Tradeoff.

+ +

In the Deep Double Descent paper, they define a concept called ""Effective Model Complexity (EMC)"", which includes the model architecture, the training procedure and the data, to describe the ""interpolation threshold"" (the point at which the model can fit the training data near perfectly).

+ +

EMC below the interpolation threshold is considered ""under-parameterized"" and above interpolation threshold is considered ""over-parameterized"".

+ +

So according to this definition of EMC, I suppose:

+ +
+

1M-parameters model on 10M samples of 2 binary features, then it should not be over-parameterized

+
+ +

is over-parameterized because of the simplicity of input data.

+ +
+

1M-parameters model on 0.1M samples of 512x512 images, then is over-parameterized, or

+
+ +

is probably not over-parameterized, if it cannot fit the training data with near 0 loss.

+ +
+

the model in the paper Exploring the Limits of Weakly Supervised Pretraining ""IG-940M-1.5k ResNeXt-101 32×48d"" with 829M parameters, trained on 1B Instagram images, is not over-parameterized

+
+ +

is not over parameterized because it cannot fit the entire training data perfectly.

+ +

I am curious to see if EMC will catch on and be a popular measure for model complexity in the future.

+",32055,,,,,12/16/2019 16:22,,,,0,,,,CC BY-SA 4.0 +17130,1,,,12/16/2019 17:23,,2,49,"

I'm working on a question answering bot as my graduation project. The main concept is having a text file with many sentences, and building a question answering bot which answers a user's question based on the text file in hand.

+

Until now, I used tf-idf and cosine similarity and the results are somewhat satisfactory. The main problem is, if the user was to ask a question that doesn't have a word that is in the text file, my bot can't deduce what to bring back as an answer. For example, if I have a sentence in my text file that says "I have a headache because my heart rate is low", if the user was to ask "Why do you have a headache?", my bot chooses the correct sentence, but if he asked "What's wrong with you?" my bot doesn't know what to do.
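
For reference, a minimal scikit-learn sketch of the tf-idf + cosine similarity retrieval I described (the sentences and the question here are placeholders):

+
from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.metrics.pairwise import cosine_similarity
+
+sentences = ['I have a headache because my heart rate is low', 'another sentence from the file']
+vectorizer = TfidfVectorizer()
+sentence_vectors = vectorizer.fit_transform(sentences)
+
+question = 'Why do you have a headache?'
+question_vector = vectorizer.transform([question])
+scores = cosine_similarity(question_vector, sentence_vectors)[0]
+best_answer = sentences[scores.argmax()]   # fails when the question shares no words with the file
+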

+

All I've seen on the web until now are deep learning methods and neural networks, such as LSTMs. I was wondering if there are any pure NLP approaches that fit my requirements.

+",32120,,2444,,1/26/2021 15:36,1/26/2021 15:36,Are there any approaches other than deep learning to deal with unexpected questions in a question answering system?,,0,1,,,,CC BY-SA 4.0 +17131,2,,17074,12/16/2019 18:09,,1,,"

The question confuses ML (including DL) with AI. AI is a bigger field than ML and includes rule-based systems

+ +

You probably need to extract entities (spans of text) from the unstructured text and embed them into an XML. ML (and DL) are good when the problem is fuzzy (you need very many rules to solve the problem) so it could be a valid option if you have a variety of document structures that each needs its own set of rules. You would need enough data to train your models in this case. Otherwise if you have limited document structures, very limited data (maybe none) and 100% accuracy is expected then going with rules is the obvious choice.

+",27851,,,,,12/16/2019 18:09,,,,5,,,,CC BY-SA 4.0 +17132,1,,,12/16/2019 18:41,,2,68,"

I am trying to teach my AI to talk. The problem is I'm struggling to find a good scenario in which it needs to.

+ +

Some ideas I had were: +""Describe a geometric scene"" - Then together with a parser we could see how close the generated instructions came to the official geometric language.

+ +

""Give another AI instructions of where to find some food"" e.g. ""Go straight on passed the box then turn left until you get to the tree. Look under the rock.""

+ +

Another one might be ""Find out more information about a scene by asking questions of another AI in order to navigate a scene blindfolded"". This is quite an extreme example!

+ +

I need it to talk in formal English sentences (not some kind of made-up secret language).

+ +

Basically instead of just interpreting a language and following instructions, I want my AI to generate instructions.

+ +

So the things I want to teach it are the following:

+ +
    +
  • Ability to ask questions + ability to use the information gathered
  • +
  • Ability to give instructions
  • +
+ +

Do you know of any projects like this?

+",4199,,,,,12/16/2019 19:50,Giving an AI a purpose to talk,,1,0,,,,CC BY-SA 4.0 +17133,2,,17132,12/16/2019 19:50,,2,,"

I just came across this piece of news yesterday: ""This week, Microsoft Research threw down the gauntlet with the launch of a competition challenging researchers around the world to develop AI agents that can solve text-based games."" +This seems to be an AI competition announced by Microsoft with the aim to create AI that can solve text-based games. This might give you some inspiration.

+",32124,,,,,12/16/2019 19:50,,,,0,,,,CC BY-SA 4.0 +17134,1,,,12/16/2019 19:53,,1,34,"

Are there example implementations of networks that apply constraints across sequences of image classifications where class labels are ordinal numbers? For example, to cause the output of a CNN to monotonically increase across frames, where the number may increase either more or less steeply but only in one direction across the entire sequence, or as another example, to smoothly vary rather than jumping precipitously from frame to frame. In my first example, the output can jump quickly from one frame to the next, as long as only in one direction, whereas in my second example, they can either increase or decrease as long as not too ""fast"" from one frame to the next as if being passed through a low pass boxcar filter. The first is a monotonicity constraint and the second is a smoothness constraint, but in both examples, the key is for adjacent frames to have an effect on the conclusions for a given frame.

+ +

Thank you, +Andy

+",32123,,,,,12/16/2019 19:53,Imposing contraints on sequence of image classifications,,0,0,,,,CC BY-SA 4.0 +17137,1,,,12/17/2019 9:07,,2,272,"

I want to make a face authentication application. I need to approve the face during the login based on whether the registered face and the login face match.

+ +

Which are the possible appropriate AI methods or technologies for this task?

+",31576,,2444,,12/17/2019 17:58,12/18/2019 9:29,Which AI methods are most appropriate for login face recognition?,,1,1,,,,CC BY-SA 4.0 +17138,2,,17137,12/17/2019 9:40,,3,,"

One method which is quite fast and easy to implement is the following.

+ +

You can do Principal Component Analysis (PCA) based face recognition. You can go through this paper for the theory behind it. For an example implementation you can see this blog post.

+ +

The process, roughly, is as follows:

+ +

If you have a grayscale image of size $(20,20)$, then this image can be flattened to a vector of size $400$. If you have $5$ images of size $(20,20)$ then your data matrix will be of size $(5,400)$. +We wish to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the data matrix, which will be of size $(5,5)$. Each eigenvector (out of total $5$), of size $(5,1)$, accounts for some amount of variation among the images, and they are ordered by the amount (the eigenvalues corresponding to the eigenvectors) they account for. These eigenvectors can be thought of as a set of features that characterize the variation between the images. Recognition is done by projecting a new image (flattened to size $(400,1)$)into the subspace spanned by the eigenvectors of the covariance matrix and then classifying the face by comparing its coordinates in the face space with the coordinates of the known individuals.
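
A minimal scikit-learn sketch of this idea (the image arrays, the number of components and the acceptance threshold below are placeholders; sklearn's PCA takes care of the eigen-decomposition details):

+
import numpy as np
+from sklearn.decomposition import PCA
+
+# train_images: (n_people, 20, 20) grayscale images, one registered face per person
+train_flat = train_images.reshape(len(train_images), -1)          # (n_people, 400)
+
+pca = PCA(n_components=min(len(train_flat) - 1, 50))
+train_proj = pca.fit_transform(train_flat)                        # coordinates in face space
+
+def identify(face_image, threshold=10.0):
+    proj = pca.transform(face_image.reshape(1, -1))
+    distances = np.linalg.norm(train_proj - proj, axis=1)
+    best = distances.argmin()
+    # accept the login only if the closest registered face is near enough
+    return best if distances[best] < threshold else None
+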

+ +

You can see an implementation in this git repo.

+",16708,,16708,,12/18/2019 9:29,12/18/2019 9:29,,,,2,,,,CC BY-SA 4.0 +17140,1,,,12/17/2019 10:49,,4,55,"

Sequence-to-sequence models have achieved good performance in natural language translation. Could these models also be applied to convert source code written in one programming language to source code written in another language? Could they also be used to convert JSON to XML? Has this been done?

+ +

There are plenty of models that generate source code (which looks like real source code) using RNNs, although the generated source code doesn't make logical sense. I haven't been able to find any models or examples that take valid existing code and convert it into different valid code.

+",32142,,2444,,12/17/2019 18:04,12/17/2019 18:04,Can sequence-to-sequence models be used to convert source code from one programming language to another?,,0,1,,,,CC BY-SA 4.0 +17141,1,,,12/17/2019 11:10,,0,26,"

What are general best practices or considerations in designing a model that is optimized for real-time inference?

+",32111,,,,,12/17/2019 11:10,What are the properties of a model that is well suited for for high performance real-time inference,,0,4,,,,CC BY-SA 4.0 +17142,1,17181,,12/17/2019 14:25,,4,65,"

In “Abandoning Objectives: Evolution through the Search for Novelty Alone”, it is explained how the novelty search is a function that is domain specific, depending on the differing behaviors that can potentially emerge.

+ +

The primary test is a deceptive maze and it seems like they define novelty as a function that is dependent on each actor's ending position as a distance from other actors' ending position.

+ +

I am wanting to try implementing this on some tasks. Some simple AI tasks such as playing pong, or recreating MarI/O, or sticking them in an arena as an actor who can move, turn, and shoot (with other actors in the arena with them).

+ +

I have a really hard time thinking of how to model the behavior functions for these kinds of instances without making it into an objective. +For pong, I imagine I could determine novelty by the AI's point score, but isn't this basically making the score an objective since it can only go up? For MarI/O, I've seen some implementations that look at the list of unique grid locations that Mario visited in what order, but I didn't come up with that myself.

+ +

For the arena example, my first impulse is to have a score based on how long the actor survived and how many other actors the AI eliminated; but again, this can only go up and seems to me like it is defining an objective.

+ +

Are there any strategies or ways to think about the problems that would help me better visualize the 'behavior space' and make a better novelty function?

+",32148,,,,,12/20/2019 9:22,Are there any strategies that would help me visualize the 'behavior space' and make a novelty function?,,1,0,,,,CC BY-SA 4.0 +17143,1,,,12/17/2019 17:34,,1,49,"

Does it make sense to use Reinforcement Learning methods in an environment that does not have trajectories?

+ +

I have a lot of states and actions in my environment. However, there are no trajectories.

+ +

If the agent takes action $a$ in the state $s$, it reaches $s'$. In $s'$, it takes action $a'$ and reaches state $s''$. However, if it takes the reverse order of actions $a'$ and then $a$, it would reach the same state $s''$.

+ +

How do reinforcement learning methods handle this?

+",32153,,2444,,12/17/2019 17:50,12/17/2019 17:50,Reinforcement learning without trajectories,,0,4,,11/16/2020 21:02,,CC BY-SA 4.0 +17144,1,,,12/17/2019 21:22,,1,23,"

I'm confused about the following issue. Let's assume that we have a neural network that takes one input and produces two outputs. I try to visualize my model as follows:

+ +
        / --- First stream    --- > output_1
+Input --
+        \ ---- Second stream  ---> output_2
+
+ +

I used SGD with momentum. Is there any difference between using one optimizer for both streams and using a separate optimizer for each stream? In other words, if I use one optimizer, can the optimization process of one stream affect the other stream? If it can, how is that possible?

+",32159,,,,,12/17/2019 21:22,Optimizer effects on neural network with two outputs,,0,1,,,,CC BY-SA 4.0 +17146,1,,,12/18/2019 1:24,,1,34,"

Short question

+

How can I implement Logic Inference with Deep Learning?

+

Long question

+

In symbolic logic, chaining multiple predicates (a short example is a syllogism) is a method of implementing logic inference. This programmatic way is suitable for many cases; however, when combined with NLP, too much programmatic work is involved.

+

If someone implements that logic inference not in a programmatic way but with machine learning, what model should be adopted, and what labeled data should be fed to the model?

+",32162,,-1,,6/17/2020 9:57,12/18/2019 1:24,Implementing Logic Inference with Deep Learning,,0,0,,,,CC BY-SA 4.0 +17147,1,,,12/18/2019 4:09,,2,30,"

I'm implementing recurrent BN per this paper in Keras, but looking at it and those citing it, a detail remains unclear to me: how are batch statistics computed? Authors omit explicit clarification, but state (pg. 3) (emphasis mine):

+ +
+

At training time, the statistics E[h] and Var[h] are estimated by the sample mean and sample variance of the current minibatch

+
+ +

Yet another paper (pg. 3) using and citing it describes:

+ +
+

We subscript BN by time (BN_t) to indicate that each time step tracks its own mean and variance. In practice, we track these statistics as they change over the course of training using an exponential moving average (EMA)

+
+ +

My question's thus two-fold:

+ +
    +
  1. Are minibatch statistics computed per immediate minibatch, or as an EMA?
  2. +
  3. How are the inference parameters, shared across all timesteps, gamma and beta computed? Is the computation in (1) simply averaged across all timesteps? (e.g. average EMA_t for all t)
  4. +
+ +
+ +

Existing implementations: in Keras and TF below, but are all outdated, and am unsure regarding correctness

+ +
    +
  • Keras, TF-A, and TF-B
  • +
  • All above agree that during training, immediate minibatch statistics are used, and that beta and gamma are updated as an EMA of these minibatches
  • +
  • Problem: the bn operation (in A, and presumably B & C) is applied on a single timestep slice, to be passed to the K.rnn control flow for re-iteration. Hence, EMA is computed w.r.t. minibatches and timesteps - which I find questionable: + +
      +
    • EMA is used in place of a simple average when population statistics are dynamic (e.g. minibatch-to-minibatch), whereas we have access to all timesteps in a minibatch prior having to update gamma and beta
    • +
    • EMA is a worse but at times necessary alternative to a simple average, but per above, we can use latter - so why don't we? Timestep statistics can be cached, averaged at the end, then discarded - holds also for stateful=True
    • +
  • +
+",32165,,32165,,12/20/2019 4:42,12/20/2019 4:42,How are batch statistics computed in Recurrent Batch Normalization?,,0,0,,,,CC BY-SA 4.0 +17150,2,,1742,12/18/2019 7:05,,2,,"

Deep learning is a subset of machine learning.

+ +

Machine learning and deep learning are not two unrelated things: deep learning is one form of machine learning. +When a neural network has many layers (i.e. it is deep), learning with it is what we call deep learning.

+ +

+ +
+

“Deep learning is a particular kind of machine learning that achieves + great power and flexibility by learning to represent the world as + nested hierarchy of concepts, with each concept defined in relation to + simpler concepts, and more abstract representations computed in terms + of less abstract ones.”

+
+",7681,,,,,12/18/2019 7:05,,,,0,,,,CC BY-SA 4.0 +17151,1,17807,,12/18/2019 7:15,,1,2006,"

I have been exploring edge computation for AI, and I came across multiple libraries or frameworks, which can help to convert the model into a lite format, which is suitable for edge devices.

+
    +
  • TensorFlow Lite will help us to convert the TensorFlow model into TensorFlow lite.
  • +
  • OpenVino will optimise the model for edge devices.
  • +
+

Questions

+
    +
  1. If we have a library that optimises the model for edge devices (e.g. TensorFlow Lite), could the conversion decrease the accuracy?

    +
  2. +
  3. If not, then why don't people always use e.g. TensorFlow Lite?

    +
  4. +
+",7681,,2444,,2/17/2021 22:46,2/17/2021 22:46,"Why don't people always use TensorFlow Lite, if it doesn't decrease the accuracy of the models?",,2,1,,,,CC BY-SA 4.0 +17152,1,,,12/18/2019 7:46,,1,36,"

What is the difference between principal component analysis and singular value decomposition in image processing? Which one performs better, and why?

+",9863,,2444,,12/18/2019 13:56,12/18/2019 13:56,What is the difference between principal component analysis and singular value decomposition in image processing?,,0,2,,,,CC BY-SA 4.0 +17153,1,,,12/18/2019 9:43,,3,42,"

I am trying to teach an agent to make any random 1-qubit state reach uniform superposition. So basically, the full circuit will be State -> measurement -> new_state (|0> if 0, |1> if 1) -> Hadamard gate. It just needs to perform 2 actions. That's all. So it's more of an RL problem rather than QC.

+ +

I am using reinforcement learning to train the model but it doesn't seem to learn anything. The reward keeps on decreasing and even after 3 million episodes, the agent doesn't seem to converge anywhere. This is how I am training:

+ +
def get_exploration_rate(self, time_step):
+        return self.epsilon_min + (self.epsilon - self.epsilon_min)*\
+               math.exp(1.*time_step*self.epsilon_decay)
+
+def act(self, data,t): #state
+        rate = self.get_exploration_rate(t)
+        if random.random() < rate:
+            options = self.model.predict(data) #state
+            options = np.squeeze(options)
+            action =  random.randrange(self.action_size)
+        else:
+            options = self.model.predict(data) #state
+            options = np.squeeze(options)
+            action = options.argmax()
+        return action, options, rate
+
+def train(self):
+
+        batch_size = 200
+        t = 0                   #increment
+        states, prob_actions, dlogps, drs, proj_data, reward_data =[], [], [], [], [], []
+        tr_x, tr_y = [],[]
+        avg_reward = []
+        reward_sum = 0
+        ep_number = 0
+        prev_state = None
+        first_step = True
+        new_state = self.value
+        data_inp = self.data
+
+        while ep_number<3000000:
+            prev_data = data_inp
+            prev_state = new_state
+            states.append(new_state)
+            action, probs, rate = self.act(data_inp,t)
+            prob_actions.append(probs)
+            y = np.zeros([self.action_size])
+            y[action] = 1
+            new_state = eval(command[action])
+            proj = projection(new_state, self.final_state)
+            data_inp = [proj,action]
+            data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
+            tr_x.append(data_inp)
+            if(t==0):
+                rw = reward(proj,0)
+                drs.append(rw)
+                reward_sum+=rw
+
+            elif(t<4):
+                rw = reward(new_state, self.final_state)
+                drs.append(rw)
+                print(""present reward: "", rw)
+                reward_sum+=rw
+            elif(t==4):
+                if not np.allclose(new_state, self.final_state):
+                    rw = -1
+                    drs.append(rw)
+                    reward_sum+=rw
+                else:
+                    rw = 1
+                    drs.append(rw)
+                    reward_sum+=rw
+
+            print(""reward till now: "",reward_sum)
+            dlogps.append(np.array(y).astype('float32') * probs)
+            print(""dlogps before time step: "", len(dlogps))
+            print(""time step: "",t)
+            del(probs, action)
+            t+=1
+            if(t==5 or np.allclose(new_state,self.final_state)):                         #### Done State
+                ep_number+=1
+                ep_x = np.vstack(tr_x) #states
+                ep_dlogp = np.vstack(dlogps)
+                ep_reward = np.vstack(drs)
+                disc_rw = discounted_reward(ep_reward,self.gamma)
+                disc_rw = disc_rw.astype('float32')
+                disc_rw -= np.mean(disc_rw)
+                disc_rw /= np.std(disc_rw)
+
+                tr_y_len = len(ep_dlogp)
+                ep_dlogp*=disc_rw
+                if ep_number % batch_size == 0:
+                  input_tr_y = prob_actions - self.learning_rate * ep_dlogp
+                    input_tr_y = np.reshape(input_tr_y, (tr_y_len,1,6))
+
+                    self.model.train_on_batch(ep_x, input_tr_y)
+                    tr_x, dlogps, drs, states, prob_actions, reward_data = [],[],[],[],[],[]
+                env = Environment()
+                new_state = env.reset()
+                proj = projection(state, self.final_state)
+                data_inp = [proj,5]
+                data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
+                print(""State after resetting: "", new_state)
+                t=0
+
+ +

I have tried various things like changing the inputs, reward function, even added exploration rate. I have assigned max time step as 5 even though it should complete in just 2.

+ +

What am I doing wrong? Any suggestions?

+",29843,,,,,12/18/2019 9:43,Reinforcement Learning on quantum circuit,,0,0,,,,CC BY-SA 4.0 +17154,1,17163,,12/18/2019 9:46,,3,589,"

Currently, I am working on a Gomoku AI implementation with minimax + alpha-beta pruning.

+

I'm targeting these two rules for an 'acceptable implementation' in terms of search time and search depth:

+
    +
  • Search time (over 0.5 seconds is "bad", less 0.5 seconds is ok)
  • +
  • Search depth (less than 10 search depth levels is "bad", over 10 search depth levels is ok)
  • +
+

The minimax algorithm generates, by recursive function calls, a tree of nodes, each node represented by a function call with a specific game state.

+

Increasing the depth search increases the number of nodes in the tree, and therefore search time.

+

There is a compromise between search time and search depth.

+

Alpha-beta pruning tends to help this compromise by pruning useless nodes search and reducing tree size. The pruning is directly related to the evaluation/heuristic function. Bad implementation of heuristic may lead to bad efficiency of alpha-beta pruning.
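
For reference, the generic alpha-beta skeleton I am referring to (hypothetical state interface, not my actual code); ordering the candidate moves by a quick heuristic score is what drives the pruning efficiency:

+
def alphabeta(state, depth, alpha, beta, maximizing):
+    if depth == 0 or state.is_terminal():
+        return state.evaluate()
+    moves = sorted(state.legal_moves(), key=state.quick_score, reverse=maximizing)
+    if maximizing:
+        best = float('-inf')
+        for move in moves:
+            best = max(best, alphabeta(state.play(move), depth - 1, alpha, beta, False))
+            alpha = max(alpha, best)
+            if beta <= alpha:
+                break        # cut-off: remaining moves cannot change the result
+    else:
+        best = float('inf')
+        for move in moves:
+            best = min(best, alphabeta(state.play(move), depth - 1, alpha, beta, True))
+            beta = min(beta, best)
+            if beta <= alpha:
+                break
+    return best
+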

+
+

If you are working on or have done a Gomoku AI, sharing the tree size, search depth and search time of your implementation at some game steps, and explaining how you reached them, may help to investigate.

+
+

The implementation at this time falls into the 'not acceptable' category for me, having a search time over 1 second for a search depth of 4 at the first step... on an Intel Core i7 3.60GHz CPU.

+

Here are the properties of the actual implementation:

+
    +
  • Board of size 19x19
  • +
  • Implements search window of size 5x5 around stones to reduce search nodes
  • +
  • Implements heuristic computation at each node on the played stone instead of computation on all board size on leaf nodes number.
  • +
  • Implements alpha-beta pruning
  • +
  • No multi thread
  • +
+
+

Here are the current stats it is reaching for search depth of 4 at the first step:

+
    +
  • Timing minimax algorithm: 1.706175 seconds
  • +
  • Number of nodes that compose the tree: 2850
  • +
+
. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . . . . . . . . . . . . . . 10
+. . . . . . . . . . . . . . . . . . . 11
+. . . . . . . . . . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+o x . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S 
+Player: o - AI: x
+
+

Bad stats might be caused by a bad heuristic, leading to inefficient pruning. Waiting for other stats/replies to validate this hypothesis may help.

+

Edit 1

+

Coming back from a new search campaign on this question.

+
    +
  • The implementation was facing a 19*19 loop index at each heuristic computation ... Removed this by heuristic computation at a specific index (not the entire board)

    +
  • +
  • The implementation was looping over the whole 19*19 board to check the win state... Removed this by checking, at each step, for alignments only around the played index.

    +
  • +
  • The implementation was facing a 19*19 loop index to check where it can play (even with the windows) ... +Removed by propagating indexes array of valid indexes through the recursion updated at each step. +The array is a dichotomic array (with $O(n)$ insertion, $O(\log n)$ search and $O(1)$ deletion by index)

    +
  • +
  • The implementation was lacking a Zobrist hash table, a very nice idea from the below answer. It is now implemented with unit tests to prove that implementation is working. An array sorted by hash is updated at each new node, with the hash-node association. The array is a dichotomic array (with $O(n)$ insertion, $O(\log n)$ search and $O(1)$ deletion by index)

    +
  • +
  • The implementation is at each step trying each index in a random way (not computation order or evaluation score order).

    +
  • +
+

The before-edit example is not great because it is playing on the side of the board, and the allowed-indexes window is half its maximum size.

+

Here are the newly obtained performances :

+
    +
  • with Zobrist table off and seed at 42 for search depth of 4 at the first step

    +
      +
    • Timing minimax algorithm: 0.083288 seconds
    • +
    • Number of nodes that compose the tree: 6078
    • +
    +
  • +
+
. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . . . . . . . . . . . . . . 10
+. . . . . . . . . x . . . . . . . . . 11
+. . . . . . . . o . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+. . . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S
+Player: o - AI: x
+
+
    +
  • with Zobrist table on and seed at 42 for search depth of 4 at the first step

    +
      +
    • Timing minmax_algorithm: 0.434098 seconds
    • +
    • Number of nodes that compose the tree: 9320
    • +
    +
  • +
+
. . . . . . . . . . . . . . . . . . . 00
+. . . . . . . . . . . . . . . . . . . 01
+. . . . . . . . . . . . . . . . . . . 02
+. . . . . . . . . . . . . . . . . . . 03
+. . . . . . . . . . . . . . . . . . . 04
+. . . . . . . . . . . . . . . . . . . 05
+. . . . . . . . . . . . . . . . . . . 06
+. . . . . . . . . . . . . . . . . . . 07
+. . . . . . . . . . . . . . . . . . . 08
+. . . . . . . . . . . . . . . . . . . 09
+. . . . . . x . . . . . . . . . . . . 10
+. . . . . . . . . . . . . . . . . . . 11
+. . . . . . . . o . . . . . . . . . . 12
+. . . . . . . . . . . . . . . . . . . 13
+. . . . . . . . . . . . . . . . . . . 14
+. . . . . . . . . . . . . . . . . . . 15
+. . . . . . . . . . . . . . . . . . . 16
+. . . . . . . . . . . . . . . . . . . 17
+. . . . . . . . . . . . . . . . . . . 18
+A B C D E F G H I J K L M N O P Q R S
+Player: o - AI: x
+
+

Actually, it is OK for search depth 4, but not for more than 6. The number of nodes grows exponentially (over 20,000) ...

+

I found here a great implementation in the same language/technology that can reach depth 10 in less than 1 second, without Zobrist hashing or smart tricks, and I followed its logic.

+

The issue must be somewhere else, causing the exponential growth in the number of nodes - inefficient pruning.

+",32168,,2444,,1/2/2022 11:22,1/2/2022 11:22,"Could you share with me the tree size, search time and search depth of your implementation of Gomoku with minimax and alpha-beta prunning?",,1,0,,,,CC BY-SA 4.0 +17155,1,,,12/18/2019 12:51,,2,319,"

Consider an optimization problem that involves a set of tasks $T = \{1,2,3,4,5\}$, where the goal is to find a certain order of these tasks.

+

I would like to solve this problem with a genetic algorithm, where each chromosome $C = [i, j, k, l, m]$ corresponds to a specific order of these five tasks, so each gene in $C$ corresponds to a task in $T$.

+

So, for example, $C = [1,3,5,4,2]$ and $C' = [1,5,4,2,3]$ would be two chromosomes that correspond to two different orders of the tasks.

+

In this case, how could we design the mutation and crossover operations so that the permutation constraint (each task appears exactly once) is maintained during evolution?

+

The genetic algorithm should produce the three best chromosomes or order of tasks.

+",31274,,2444,,12/8/2020 15:52,12/8/2020 15:52,How can we design the mutation and crossover operations when the order of the genes in the chromosomes matters?,,2,1,,,,CC BY-SA 4.0 +17156,2,,17108,12/18/2019 13:06,,2,,"

My understanding of your question is, you have 2 designs:

+ +
    +
  1. A deterministic policy that outputs 2 scalars, for x and y respectively.
  2. +
  3. A value function that outputs the probability of each pixel in the 2D grid.
  4. +
+ +

If you choose the max of softmax on (2.), you'll get the same deterministic policy as (1.), assuming there are some tie-breaking designs. So I don't think choosing the max of softmax is a great policy.

+ +

So, the question becomes ""deterministic policy vs. value function"". Since the value function has more information, we can design exploration more deliberately.

+ +

In (1.), if you use $\epsilon$-greedy on the discrete output, it'll take a random move with $\epsilon$ probability. When exploring (instead of exploiting), all actions except the best action will have the same probability.

+ +

However, in (2.), the probability can act like a value function (the value of $Q(s, a) \forall a$ in a certain state $s$), and you can use something like upper confidence bound (UCB) instead of $\epsilon$-greedy. If you are using neural networks to output this probability distribution, you can add an entropy term in the loss function like in A3C to encourage exploration.
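To make the contrast concrete, here is a rough sketch (my own illustration, not part of the original designs; the array names are made up) of $\epsilon$-greedy versus a softmax-style, value-aware selection over the per-pixel scores:

import numpy as np

def epsilon_greedy(values, epsilon=0.1):
    """"""values: 2D array of scores, one per pixel of the grid.""""""
    if np.random.rand() < epsilon:
        # explore: every pixel is equally likely, regardless of its score
        flat_idx = np.random.randint(values.size)
    else:
        flat_idx = np.argmax(values)
    return np.unravel_index(flat_idx, values.shape)

def softmax_sample(values, temperature=1.0):
    """"""Explore in proportion to the scores themselves (value-aware exploration).""""""
    logits = values.flatten() / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    flat_idx = np.random.choice(probs.size, p=probs)
    return np.unravel_index(flat_idx, values.shape)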

+",32173,,32173,,12/18/2019 18:46,12/18/2019 18:46,,,,3,,,,CC BY-SA 4.0 +17157,2,,17151,12/18/2019 13:50,,3,,"

This partly answers question 1. There is no general rule concerning the accuracy or size of the model; it depends on the training data and the processed data. The lighter your model is compared to the full-accuracy model, the less accurate it will be. I would run the lite model on test data and compare its accuracy to that of the full model to get an exact measure of the difference.

+ +

TensorFlow has different options to save the ""lite"" model (optimized for size, latency, none and default).
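As a rough sketch of that conversion (assuming a trained Keras model named model and TensorFlow 2.x; this is only an illustration, not code from the linked tutorial):

import tensorflow as tf

# Convert the trained Keras model to a TensorFlow Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default size/latency optimization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

# The lite model can then be evaluated on the same test data via the interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()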

+ +

The following mostly answers question 2.

+ +

TensorFlow Lite is intended to load the model and use it for online prediction only, not to train the model.

+ +

On the other hand, TensorFlow is used to build (train) the model offline.

+ +

If your edge platform supports any of the binding languages provided for TensorFlow (JavaScript, Java/Kotlin, C++, Python), you can use TensorFlow for prediction. The accuracy or speed options you might have selected when creating the model will not be affected whether you use TensorFlow or TensorFlow Lite. Typically, TensorFlow Lite is used on mobile devices (iOS, Android). There are other supported targets, see this link

+",30392,,30392,,1/9/2020 13:38,1/9/2020 13:38,,,,0,,,,CC BY-SA 4.0 +17158,1,,,12/18/2019 14:00,,1,16,"

We are currently using a RL network with the following simple structure to train a model which helps to solve a transformation task:

+ +

Environment (a binary file) + reward ---> LSTM (embedding) --> FC layer --> FC layer --> FC layer --> decision (to select and apply a kind of transformation toward the environment from a pool of transformations)

+ +

The model will receive a simple reward and also take the input to make the decision. And we have a condition to stop each episode.

+ +

The current workflow, although simple, seems to have learned something: over multiple episodes of training, we can observe that the accumulated reward per episode increases. So right now, what we are thinking of doing is interpreting the model - well, a fancy term.

+ +

So basically we would like the model to tell us from which components of the environment (the input file) it makes the decision to select a transformation to apply. I have read a bunch of interpretability articles, which basically use an activation map (e.g., link) to highlight certain components of the input.

+ +

However, the problem is that we don't have any sort of CNN layer in our simple RL model, so the aforementioned method cannot be applied, right? I also learned a number of techniques from this book, but still, I don't see any specific techniques applicable to RL models.

+ +

So here is my question: for our simple RL model, how can we do some ""interpretability"" analysis and thereby get a better idea of which part of the ""environment"" leads to each decision step? Thank you very much.

+",25973,,,,,12/18/2019 14:00,How to perform Interpretability analysis toward a simple reinforcement learning network,,0,0,,,,CC BY-SA 4.0 +17159,1,17165,,12/18/2019 15:24,,4,376,"

Artificial neural networks are composed of multiple neurons that are connected to each other. When the output of an artificial neuron is zero, it does not have any effect on neighboring neurons. When the output is positive, the neuron has an effect on neighboring neurons.

+ +

What does it mean when the output of a neuron is negative (which can e.g. occur when the activation function of a neuron is the hyperbolic tangent)? What effect would this output have on neighboring neurons?

+ +

Do biological neural networks also have this property?

+",31143,,2444,,12/18/2019 20:54,12/18/2019 22:50,What effect does a negative output of a neuron have on neighbouring neurons?,,1,3,,,,CC BY-SA 4.0 +17160,1,,,12/18/2019 16:05,,2,807,"

I am currently working on a problem and now got stuck to implement one of it's steps. This is a simple attempt to explain what I am currently facing, which is something that I am aiming to implement in my regression simulation in python.

+ +

Let's say that I fit a non-linear model to my data. Now, I want to find the combination of inputs within a specified range that returns the highest outcome. When I am using a quadratic function or only a few inputs, this task is quite simple. However, the problem comes when trying to apply the same logic to more complex models. Supposing that I have 9 variables as inputs, I would have to test all possible combinations, and that would be computationally unfeasible with a meshgrid if you want to cover a range with several intervals in between.

+ +

So, here comes my question: is there a way to avoid this computationally costly process and still find the combination of inputs, within a given range, that returns the optimal output?

+",31880,,,,,12/18/2019 17:15,Finding the optimal combination of inputs which return maximal output,,1,0,,,,CC BY-SA 4.0 +17161,2,,17160,12/18/2019 17:10,,4,,"

If your model is gradient-based, such as a neural network, then you may also be able to use gradient methods to drive virtual inputs:

+ +
    +
  • Freeze all network weights to the trained version

  • +
  • Define a loss function that describes how you want the output - or any internal measure - to behave. E.g. to maximise the output, the loss function can simply be the negative of the output, assuming you will perform gradient descent later. Some libraries will also support gradient ascent to maximise a function.

  • +
  • Define your input as a variable that can be optimised and instantiate an optimiser (details will vary depending on your library)

  • +
  • Start with a random or best-guess input, and iterate normal training routine (feed forward then backpropagate) to get better and better inputs

  • +
+ +

This is basically how Deep Dream and Style Transfer algorithms work - the detail that is different is definition of the loss functions. It is also a way to make adversarial attacks against known models, for example taking a picture of a car, modifying it such that a classifier returns that it is an ostrich (whilst it still looks like a car to a human).
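Putting those steps together, a minimal sketch in PyTorch (assuming a trained model model and box constraints lo/hi on the inputs; all names here are illustrative, not a specific library API beyond standard PyTorch):

import torch

def maximise_output(model, x0, lo, hi, steps=200, lr=0.05):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)                   # freeze the trained weights

    x = x0.clone().detach().requires_grad_(True)  # the input is what we optimise
    opt = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x).sum()                    # negative output, so descent maximises it
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(lo, hi)                      # keep the input inside the allowed range
    return x.detach()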

+ +

It is not guaranteed to find the absolute best inputs for a given range, but should find good approximations of local minima or maxima far faster than a meshgrid when there are many dimensions. You could combine the idea with a coarse meshgrid or random search for start points for a better chance of finding the best results within input range constraints.

+ +

You should bear in mind that the ideal inputs you discover will only be as accurate as your trained model will let them be. If you discover maximising inputs in the model, that use inputs that are far away from any training examples, then in reality those inputs might not get anything like the result that the model predicts. Statistical models with many degrees of freedom are typically ok at interpolating between data points, but very bad at extrapolating beyond data that has been observed.

+",1847,,1847,,12/18/2019 17:15,12/18/2019 17:15,,,,0,,,,CC BY-SA 4.0 +17162,1,,,12/18/2019 17:52,,1,37,"

I am writing a checkers move generation function in C that will be extended to Python. It is much easier to handle the possible boards in a fixed size array to pass back to Python.

+ +

Basically, I build out possible boards and add them to this array:

+ +
uint32_t boards[x][3];
+
+
+ +

Therefore, the optimal value for x should be the maximum single ply branching factor out of all possible legal board states.

+ +

I am not sure if I am being very clear, so here is an example:

+ +

For tic-tac-toe this value would be 9, as the first move has the greatest number of possible directly resulting board states, out of all of the legal boards.

+ +

Has this value been calculated for checkers? Has a program like Chinook derived a reasonably close number?

+ +

Thank you for your help!

+",31894,,,,,12/18/2019 17:52,Maximum Single Ply Branching Factor for Legal Checkers Boards,,0,0,,,,CC BY-SA 4.0 +17163,2,,17154,12/18/2019 18:01,,3,,"

Intuitively I kind of doubt expecting a search depth of 10 in half a second is reasonable, especially for the initial game state where there's a rather large branching factor and no immediately-winning moves that help to prune some branches quickly.

+ +

I've never implemented any Alpha-Beta agents for Gomoku specifically, but I can provide some numbers for our Alpha-Beta implementation in the Ludii General Game System. Note that this is a general game system that implements a wide variety of games in a single game description language. Due to its generality, it's unlikely that any single game runs as efficiently as it would in a highly-optimised game-specific implementation. Therefore, you should consider these numbers to be lower bounds on what you can achieve in a Gomoku-only game-specific implementation.

+ +
+ +
    +
  • We can reach a search depth of 3 for the initial game state in half a second.
  • +
  • Increasing this to 10 seconds is still not enough for a depth of 4. I don't know how much I'd have to increase it to reach a depth of 4.
  • +
  • At a max search time of 1 second, it actually seems to play quite well against a few different MCTS-based baselines. So I'm not sure if you really need a depth of 10 before you consider it ""acceptable"".
  • +
  • I'm not keeping track of the number of visited nodes, so can't provide those.
  • +
+ +
+ +

Note that it's very important to take into account the computational cost of your heuristic evaluation function. We're using a somewhat expensive heuristic which computes all potential lines of length 5 (because this appears in the end rules of Gomoku) through all pieces that have been placed so far, as described on pages 82-84 of Automatic generation and evaluation of recombination games (but with a simpler scoring rule than the union of probabilities as described there).
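Just to illustrate the idea (this is not the Ludii code), a naive version of such a heuristic scans every horizontal, vertical and diagonal window of length 5 and rewards windows that the given player can still complete:

def line_score(board, player, opponent, size=19):
    """"""Naive scan over all length-5 windows; real implementations cache or update this incrementally.""""""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    total = 0
    for r in range(size):
        for c in range(size):
            for dr, dc in directions:
                cells = [(r + i * dr, c + i * dc) for i in range(5)]
                if all(0 <= x < size and 0 <= y < size for x, y in cells):
                    values = [board[x][y] for x, y in cells]
                    if opponent not in values:          # no opponent stone yet, so the line is still completable
                        total += values.count(player)   # weight by how full the line already is
    return total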

+ +
+ +

CPU: Intel Core i5-6500 CPU @ 3.20GHz

+ +

Game implementation details:

+ +
    +
  • General game system (so not optimised for this specific game).
  • +
  • I updated the board size to 19x19 to match your test, but as far as I'm aware 15x15 is more common (and the default in Ludii).
  • +
  • Implemented in Java
  • +
+ +

Alpha-Beta implementation details:

+ +
    +
  • Iterative deepening (so when I write that we reach depth 3, I mean that we completed searches of depth 1, followed by 2, followed by 3, and were probably in progress with a depth-4 search when we ran out of time).
  • +
  • Ludii does not provide undo operations for moves, only apply operations. This means we have to create lots of copies of states, because we cannot undo moves after exiting out of a recursive call.
  • +
  • No negamax (because we also want to support games in which players may have multiple moves in a row before control switches over to another player)
  • +
  • No transposition tables yet (we do already compute the Zobrist hashes, just didn't get around to implementing the TT yet).
  • +
  • No move ordering (other than in between the iterations of iterative deepening).
  • +
  • No smaller search windows than just the regular alpha-beta windows.
  • +
  • Also built-in support to handle games with more than 2 players (with Paranoid search), which probably adds a little bit of overhead to the algorithm.
  • +
  • No smart tricks like the ones you mentioned about only looking at windows around placed stones; we need to support general games.
  • +
+ +
+",1641,,1641,,12/18/2019 20:10,12/18/2019 20:10,,,,1,,,,CC BY-SA 4.0 +17165,2,,17159,12/18/2019 22:13,,3,,"

In the case of artificial neural networks, your question can be (partially) answered by looking at the definition of the operation that an artificial neuron performs. An artificial neuron is usually defined as a linear combination of its inputs, followed by the application of a non-linear activation function (e.g. the hyperbolic tangent or ReLU). More formally, a neuron $i$ in layer $l$ performs the following operation

+ +

\begin{align} o_i^l = \sigma \left(\sum_{j=1}^N w_j o_j^{l-1} \right) \tag{1}\label{1}, \end{align}

+ +

where $o_j^{l-1}$ is the output from neuron $j$ in layer $l-1$ (the previous layer), $w_j$ the corresponding weight, $\sigma$ an activation function and $N$ the number of neurons from layer $l-1$ connected to neuron $i$ in layer $l$.

+ +

Let's assume that $\sigma$ is the ReLU, which is defined as follows

+ +

$$\sigma(x)=\max(0, x)$$

+ +

which means that all negative numbers become $0$ and all non-negative numbers become themselves.

+ +

In equation \ref{1}, if $w_j$ and $o_j^{l-1}$ have the same sign, then the product $w_j o_j^{l-1}$ is non-negative (positive or zero), else it is negative (or zero). Therefore, the sign of the output of neuron $j$ in layer $l-1$ alone does not fully determine the effect on $o_i^l$, but the sign of the $w_j$ is also required.

+ +

Let's suppose that the product $w_j o_j^{l-1}$ is negative; then, of course, this will negatively contribute to the sum in equation \ref{1}. In any case, even if the sum $\sum_{j=1}^N w_j o_j^{l-1}$ is negative, if $\sigma$ is the ReLU, no matter the magnitude of the negative number, $o_i^l$ will always be zero. However, if the activation function is the hyperbolic tangent, the magnitude of a negative $\sum_{j=1}^N w_j o_j^{l-1}$ affects the magnitude of $o_i^l$. More precisely, the more negative the sum is, the closer $o_i^l$ is to $-1$.
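As a small numerical illustration of this last point (the numbers are arbitrary):

import numpy as np

w = np.array([0.5, -1.2, 0.3])
o_prev = np.array([0.2, 0.9, -0.4])   # outputs of the previous layer
z = np.dot(w, o_prev)                 # weighted sum: 0.1 - 1.08 - 0.12 = -1.1

print(max(0.0, z))                    # ReLU: 0.0, the magnitude of the negative sum is lost
print(np.tanh(z))                     # tanh: about -0.80, the magnitude still matters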

+ +

To conclude, in general, the effect of the sign of an output of an artificial neuron on neighboring neurons depends on the activation function and the learned weights, which depend on the error the neural network is making (assuming the neural network is trained with gradient descent combined with back-propagation), which in turn depends on the training dataset, the loss function, the architecture of the neural network, etc.

+ +

Biological neurons and synapses are more complex than artificial ones. Nevertheless, biological synapses are usually classified as either excitatory or inhibitory, so they can have an excitatory or inhibitory effect on connected neurons.

+",2444,,2444,,12/18/2019 22:50,12/18/2019 22:50,,,,4,,,,CC BY-SA 4.0 +17166,1,17167,,12/19/2019 9:11,,1,153,"

My goal is to train an agent to play MarioKart on the Nintendo DS. My first approach (in theory) was to set up an emulator on my PC and let the agent play for ages. But then a colleague suggested training the agent first on pre-recorded video data of human play, to achieve some sort of base level, and then, for further improvement, letting the agent play on its own in the emulator.

+

But I have no clue how training with video data works. E.g. I wonder how to calculate a loss since there is no reward. Or am I getting the intuition wrong?

+

I would appreciate it if someone could explain this technique to me.

+",30431,,2444,,4/18/2022 9:30,4/18/2022 9:30,How does reinforcement learning with video data work?,,1,0,,,,CC BY-SA 4.0 +17167,2,,17166,12/19/2019 9:51,,2,,"

In reinforcement learning, to learn off policy control, you need data on the states, actions and rewards at each time step. If, in addition to a recorded video, you had a recording of controller inputs, and could add reward data by hand, then you could use a standard reinforcement learning method, e.g. DQN. Simply run the DQN training loop as normal, but skip the parts where it acts in the environment, and only train on batches of recorded experience.
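Sketched very roughly (this assumes you have already turned the recordings into (state, action, reward, next_state, done) tuples, and that q_net and target_net are PyTorch networks; none of this is from a specific library, it is just an illustration of training only on recorded experience):

import random
import torch
import torch.nn.functional as F

def offline_dqn_step(q_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
    s, a, r, s2, done = zip(*random.sample(replay, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    done = torch.tensor(done, dtype=torch.float32)

    q = q_net(s).gather(1, a).squeeze(1)          # Q(s, a) for the recorded actions
    with torch.no_grad():                         # bootstrap target from the recorded next state
        target = r + gamma * (1.0 - done) * target_net(s2).max(1).values
    loss = F.mse_loss(q, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()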

+ +

With only video data, your options are limited. However, it might still be useful, because a significant part of the challenge is a machine vision task. A DQN agent will need to convert frames from the video (e.g. the last 4 frames) into a prediction of the different rewards that it could get depending on which controller buttons are pressed. If you can teach a separate neural network to perform a vision task on relevant video data, it may help. You could use the learned weights from the first layers of this network as the starting point for your Q-value network, and it will likely speed up the DQN figuring out the relationship to its predictions. This sort of switching tasks after learning is called transfer learning, and is often used in computer vision tasks.

+ +

A possibly useful starting task if you have a video, but no controller or reward data, would be to predict the next frame(s) of the video, given say four starting frames (you need more than one so that the neural network can use velocity information). It should be possible to generate the training data using opencv or ffmpeg from your recordings.

+",1847,,1847,,12/19/2019 10:26,12/19/2019 10:26,,,,2,,,,CC BY-SA 4.0 +17168,1,17192,,12/19/2019 9:58,,0,998,"

I found the terms front-end and back-end in the article (or blog post) How to Develop a CNN for MNIST Handwritten Digit Classification. What do they mean here? Are these terms standard in this context?

+",16871,,16871,,1/8/2021 19:39,1/8/2021 19:39,"What do the terms ""front-end"" and ""back-end"" refer to in this article?",,2,3,,,,CC BY-SA 4.0 +17169,1,,,12/19/2019 11:56,,1,34,"

TL;DR: I can't figure out why my neural network won't give me a sensible output. I assume it's something to do with how I'm presenting the input data to it, but I have no idea how to fix it.

+ +

Background:

+ +

I am using matched pairs of speech samples to generate a model which morphs one person's voice into another. There are some standard pre-processing steps which have been done and can be reversed in order to generate a new speech file.

+ +

With these I am attempting to generate a very simple neural network that translates the input vector into the output one and then reconstructs a waveform.

+ +

I understand what I'm trying to do mathematically but that's not helping me make keras/tensorflow actually do it.

+ +

Inputs:

+ +

As inputs to my model I have vectors containing Fourier Transform values from the input speech sample matched with their counterpart target vectors.

+ +

These vectors contain the FT values from each 25 ms fragment of the utterance and are in the form $[r_1, i_1, ..., r_n, i_n]$, where $r$ is the real part of the number and $i$ is the imaginary one.

+ +

I am constructing these pairs into a dataset reshaping each input vector as I do so:

+ +
def create_dataset(filepaths):
+    """"""
+    :param filepaths: array containing the locations of the relevant files
+    :return: a tensorflow dataset constructed from the source data
+    """"""
+    examples = []
+    labels = []
+
+    for item in filepaths:
+        try:
+            source = np.load(Path(item[0]))
+            target = np.load(Path(item[1]))
+
+            # load mapping
+            with open(Path(item[2]), 'r') as f:
+                l = [int(s) for s in list(f.read()) if s.isdigit()]
+                it = iter(l)
+                mapping = zip(it, it)
+
+            for entry in mapping:
+                x, y = entry
+                ex, lab = source[x], target[y]
+                ex_ph, lab_ph = np.empty(1102), np.empty(1102)
+
+                # split the values into their real and imaginary parts and append to the appropriate array
+                for i in range(0, 1102, 2):
+                    idx = int(i / 2)
+
+                    ex_ph[i] = ex[idx].real
+                    ex_ph[i+1] = ex[idx].imag
+                    lab_ph[i] = lab[idx].real
+                    lab_ph[i+1] = lab[idx].imag
+
+                examples.append(ex_ph.reshape(1,1102))
+
+                # I'm not reshaping the labels based on a theory that doing so was messing with my loss function
+                labels.append(lab_ph)
+
+        except FileNotFoundError as e:
+            print(e)
+
+    return tf.data.Dataset.from_tensor_slices((examples, labels))
+
+ +

This is then being passed to the neural network:

+ + + +
def train(training_set, validation_set, test_set, filename):
+    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(1102,)),
+                                 tf.keras.layers.Dense(551, activation='relu'),
+                                 tf.keras.layers.Dense(1102)])
+
+    model.compile(loss=""mean_squared_error"", optimizer=""sgd"")
+
+    model.fit(training_set, epochs=1, validation_data=validation_set)
+
+    model.evaluate(test_set)
+    model.save(f'../data/models/{filename}.h5')
+    print(model.summary())
+
+ +

and I get out... crackling. Every time, no matter how much data I throw at it. I assume I'm doing something obviously and horribly wrong with the way I'm setting this up.

+",32190,,,,,12/19/2019 11:56,Can't figure out what's going wrong with my dataset construction for multivariate regression,,0,0,,,,CC BY-SA 4.0 +17172,1,17183,,12/19/2019 21:50,,0,769,"

How can I prove that all the $\alpha$-cuts of any fuzzy set A defined on $R^n$ are convex if and only if

+ +

$$\mu_A(\lambda r + (1-\lambda)s) \geq \min \{\mu_A(r), \mu_A(s)\}$$

+ +

such that $r, s \in R^n$, $\lambda \in [0, 1]$ ?

+ +

That's a fuzzy question on my assignment. Any idea on how to start?

+",32076,,2444,,12/19/2019 22:35,12/20/2019 12:12,How can I prove that all the a-cuts of any fuzzy set A defined on $R^n$ are convex?,,2,0,,,,CC BY-SA 4.0 +17173,1,,,12/19/2019 23:59,,2,65,"

In the paper Visual SLAM algorithms: a survey from 2010 to 2016 by Takafumi Taketomi, Hideaki Uchiyama and Sei Ikeda it is mentioned

+
+

It should be noted that tracking and mapping (TAM) is used instead of using localization and mapping. TAM was first used in Parallel Tracking and Mapping (PTAM) [15] because localization and mapping are not simultaneously performed in a traditional way. Tracking is performed in every frame with one thread whereas mapping is performed at a certain timing with another thread. After PTAM was proposed, most of vSLAM algorithms follows the framework of TAM. Therefore, TAM is used in this paper.

+
+

I do not quite follow the difference between localization and mapping versus tracking and mapping.

+
    +
  1. What is the difference?

    +
  2. +
  3. What are some advantages of TAM?

    +
  4. +
  5. Why is SLAM not called STAM?

    +
  6. +
+",14662,,2444,,12/13/2021 10:06,12/13/2021 10:06,What is the difference between tracking and mapping (TAM) and localization and mapping (LAM)?,,0,0,,,,CC BY-SA 4.0 +17174,2,,17172,12/20/2019 1:20,,2,,"

We can assume without loss of generality that \begin{equation} \min\{\mu_A(r), \mu_A(s)\} = \mu_A(r) = \alpha. \end{equation} $\implies$

+ +

The $\alpha$-cut of the fuzzy set $A$ on $R^n$ is convex. The $\alpha$-cut can be defined as \begin{equation} A_\alpha = \{x \in R^n \mid \mu_A(x) \geq \alpha\}. \end{equation} If we take two elements $r$ and $s$ of $A_\alpha$, by the definition of a convex set, the point $\lambda r + (1-\lambda)s$ is also an element of that set. Since it is an element of that set, that means \begin{equation} \mu_A(\lambda r + (1-\lambda)s) \geq \alpha. \end{equation}

+ +

$\impliedby$

+ +

$\mu_A(\lambda r + (1-\lambda)s) \geq \alpha$.

+ +

We know from $\min\{\mu_A(r), \mu_A(s)\} = \mu_A(r) = \alpha$ that $\mu_A(s) \geq \alpha$. We have a convex combination $\lambda r + (1-\lambda)s$ for which also $\mu_A(\lambda r + (1-\lambda)s) \geq \alpha$, so all points $\lambda r + (1-\lambda)s$ satisfy the inequality $\mu_A(\cdot) \geq \alpha$ (they belong to the same set as $r$ and $s$), which means this is a convex set, again by the definition of a convex set.

+",20339,,,,,12/20/2019 1:20,,,,2,,,,CC BY-SA 4.0 +17175,1,17187,,12/20/2019 3:17,,2,458,"

How would a probabilistic version of minimax work?

+ +

For example, we may choose a move that could result in a very bad outcome, but that outcome might just be extremely unlikely so we might think it would be worth the risk.

+",4199,,2444,,12/20/2019 15:42,12/20/2019 15:56,Is there a probabilistic version of minimax?,,1,2,,,,CC BY-SA 4.0 +17176,1,,,12/20/2019 3:43,,1,125,"

In a hypothetical conversation:

+ +
Person A - ""Repeat the word 'cat' twice"".
+Person B - ""cat cat"".
+
+ +

I'm thinking about how a human or AI could learn the concept of ""repeat twice"". In reinforcement learning, it would require that, after the first sentence, the AI goes through every random sentence until it gets it right and hence gets a reward.

+ +

Another way might be the AI or human overhearing the conversation. Then, on hearing a repetition of a word, it may trigger some neurons in the brain related to detecting repetition. Thus, by Pavlovian learning, it would associate the word ""repeat"" or ""twice"" with these neurons. When given the stimulus of the word ""repeat"", these neurons may get triggered, making the brain run some repetition algorithm. (This is my favorite theory).

+ +

I suppose a third way might be as follows:

+ +
Person A - ""Hello! Hello!""
+Person B - ""Stop repeating yourself"".
+
+ +

It might learn to associate repeating with the word ""repeating"" in this way.

+ +

I think either way the brain must have some neurons dedicated to detecting repetitions and possibly enacting them. (I don't think any standard RNN has this capability).

+ +

What do you think is the most likely way?

+",4199,,,,,12/20/2019 3:43,"How would an AI learn the concept of the words ""repeat twice""?",,0,0,,,,CC BY-SA 4.0 +17180,1,,,12/20/2019 8:43,,1,24,"

Why are the error rates in Table 3 and Table 4 different in the paper Deep Residual Learning for Image Recognition (2015)?

+ +

They are both error rates on the validation sets by a single model.

+ +
    +
  • Why are there different rates for the same architecture?
  • +
+",32203,,1671,,12/20/2019 22:43,12/20/2019 22:43,"Why the error rates in table3 and table4 are differenct in the paper ""deep residual learning for image recognition""",,0,0,,,,CC BY-SA 4.0 +17181,2,,17142,12/20/2019 9:22,,1,,"

I think that the best approach is to ""switch point of view"" away from the general, objective-oriented behaviour of Genetic Algorithms.

+ +

Usually GAs rely on individualism: the best survives. To do this you have to define what 'best' means and this is done through a fitness function that measures something objective, independent from the individual (i.e. a score, classical example).

+ +

If you want to measure novelty, collectivism becomes more important, as the whole population (present and past) must be considered. You have to think about measuring how differently individual X behaved from 'the rest' (e.g. reaching a location never reached before).

+ +

Another interesting point is that you should think in a more multidimensional way: a score is uni-dimensional, everyone wants to get higher, that is easy. A location in space is more difficult to achieve and that is why novelty becomes interesting.

+ +
+ +

To sum up: stay multidimensional, always treat the population as a multi-organism rather than focusing on the single individual. I hope this helps :)
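As a footnote: the usual way to make ""how different from the rest"" concrete (this is the standard novelty-search metric, not something specific to this question) is the average distance of an individual's behaviour to its k nearest neighbours in the current population plus an archive of past behaviours:

import numpy as np

def novelty(behaviour, others, k=15):
    """"""behaviour: e.g. the final (x, y) location reached by this individual;
    others: behaviours of the rest of the population plus the archive.""""""
    dists = np.sort([np.linalg.norm(np.subtract(behaviour, o)) for o in others])
    return dists[:k].mean()   # larger = more novel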

+",15530,,,,,12/20/2019 9:22,,,,0,,,,CC BY-SA 4.0 +17182,2,,16505,12/20/2019 9:33,,1,,"
    +
  1. Not really. Unity physics is just an approximation of an approximation. It has to look more or less real, but at the same time performance is very important, so it does not have the level of realism you would hope for in order to ""bring things to the real world"". There are some physics engines you can install that usually work a bit better. Still, don't expect ""real-world level"".

  2. +
  3. Based on 1, yes. If, once you got it to the real world you still give it the chance to optimize, it can only make things better.

  4. +
  5. This is difficult to say. You should try to use the best possible approximation in Unity, maybe testing multiple scenarios and seeing that they perform more or less in the same way. Then train. Then bring this ""approximated model"" to the real world and let it train more to adjust to the real physics.

  6. +
  7. I don't know this. Unity seems the most versatile around (together with other game engines). This versatility is its strength and weakness, as it cannot focus on solving a single problem perfectly. It rather has to aim at generalizing as much as possible. There might be ""more specialized"" programs around, but I doubt it.

  8. +
  9. Just try different algorithms/methods. Usually with complex problems neuroevolutionary techniques perform better, but the amount of code is hugely higher.

  10. +
+",15530,,,,,12/20/2019 9:33,,,,1,,,,CC BY-SA 4.0 +17183,2,,17172,12/20/2019 12:12,,0,,"
+

A fuzzy set A in $R^n$ is said to be a convex fuzzy set if its $\alpha$-cuts $A_\alpha$ are (crisp) convex sets for all $\alpha \in (0,1]$.

+
+ +

First, let A be a convex fuzzy set; we show that the inequality holds for all $r, s \in R^n$, $\lambda \in [0, 1]$.

+ +

Let $\alpha=\mu_A(r)\leq\mu_A(s)$.

+ +

Then

+ +

\begin{equation} r\in A_{\alpha}, s\in A_{\alpha} \end{equation}

+ +

and also

+ +

\begin{equation} \mu_A(\lambda r + (1-\lambda)s) \geq \alpha = \min \{\mu_A(r), \mu_A(s)\} \end{equation}

+ +

Conversely, if the membership function $\mu_A$ of the fuzzy set A satisfies the inequality of Theorem 13.1 (convex fuzzy set), then, taking $\alpha=\mu_A(r)$, $A_\alpha$ may be regarded as the set of all points $s$ for which $\mu_A(s)\geq\alpha=\mu_A(r)$. Therefore, for all $r,s \in A_\alpha$,

+ +

\begin{equation} \mu_A(\lambda r + (1-\lambda)s) \geq \min \{\mu_A(r), \mu_A(s)\} = \mu_A(r)=\alpha \end{equation}

+ +

which implies that $\lambda r + (1-\lambda)s \in A_\alpha$. Hence $A_\alpha$ is a convex set for every $\alpha \in [0,1]$.

+",32076,,,,,12/20/2019 12:12,,,,0,,,,CC BY-SA 4.0 +17184,2,,12099,12/20/2019 14:07,,0,,"

If we can do some reduction of the search space using constraint propagation, we can drastically reduce the search space or sometimes completely avoid the need for a search by reaching the solutions directly (e.g. when the variables have their domains reduced to size one). It could also happen that we reach a point where a variable's domain size becomes zero; in that case, no solution exists given the constraints, so there is no need for a search.

+ +

Constraint propagation basically involves the concept of enforcing local consistency (this is done by enforcing node-consistency, arc-consistency, path-consistency and also global constraints such as Alldiff or Atmost).

+ +

The terms nodes, arcs, paths, etc. reflect a CSP represented as a graph, with the variables as nodes and the constraints as arcs/edges. The process is simply to remove inconsistent values from the domains of the variables. Algorithms such as AC-3, PC-2, etc. exist precisely for these purposes.
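For illustration, a textbook-style sketch of AC-3 (not tied to any particular library; allowed(xi, x, xj, y) is assumed to tell us whether the pair of values satisfies the constraint between xi and xj):

from collections import deque

def revise(domains, xi, xj, allowed):
    """"""Remove values of xi that have no supporting value left in xj's domain.""""""
    removed = False
    for x in list(domains[xi]):
        if not any(allowed(xi, x, xj, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, neighbors, allowed):
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, allowed):
            if not domains[xi]:
                return False   # a domain became empty: no solution exists
            queue.extend((xk, xi) for xk in neighbors[xi] if xk != xj)
    return True                # arc consistency has been enforced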

+",31765,,,,,12/20/2019 14:07,,,,0,,,,CC BY-SA 4.0 +17185,1,,,12/20/2019 14:16,,1,264,"

I have been looking into backtracking search for CSPs, and understand that, if we just plainly do a typical depth-limited search, we get a vast tree with $n!\,d^n$ leaves, where $n$ is the number of variables and $d$ the domain size. It is also easy to see that there exist only $d^n$ complete assignments. So the reason for the tree being so large is attributed to the fact that we are ignoring the commutativity of variable assignments in a CSP. Can anyone please explain how exactly this commutative property matters?

+",31765,,40434,,7/17/2022 6:09,8/11/2023 9:08,What is exactly the role of commutative property in a Constraint Satisfaction Problem?,,1,1,,,,CC BY-SA 4.0 +17187,2,,17175,12/20/2019 15:41,,1,,"

Yes, there is at least one probabilistic version of minimax, which is called expectiminimax. In expectiminimax, in addition to min and max nodes, there are also chance nodes, which perform a weighted sum of the successors, so the probabilities associated with chance nodes must be known. Given that expectiminimax assumes the existence of random events (represented by the chance nodes), the decisions are thus based on expected values.
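A bare-bones sketch of the recursion (illustrative Python-style pseudocode; is_terminal, utility, node_type, successors and the chance-node probabilities are assumed to be supplied by the game):

def expectiminimax(state, depth):
    if is_terminal(state) or depth == 0:
        return utility(state)
    if node_type(state) == 'max':
        return max(expectiminimax(s, depth - 1) for s in successors(state))
    if node_type(state) == 'min':
        return min(expectiminimax(s, depth - 1) for s in successors(state))
    # chance node: probability-weighted sum over the random outcomes
    return sum(p * expectiminimax(s, depth - 1) for p, s in chance_successors(state))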

+ +

Section 5.5 of the book Artificial Intelligence: A Modern Approach provides a description of the expectiminimax algorithm, which was introduced by Donald Michie in Game-playing and game-learning automata (1966). The paper Optimal strategy in games with chance nodes (2007) also gives a decent description of the expectiminimax algorithm.

+",2444,,2444,,12/20/2019 15:56,12/20/2019 15:56,,,,0,,,,CC BY-SA 4.0 +17188,1,17189,,12/20/2019 17:00,,3,3967,"

I understand that the actual algorithm calls for using Depth-First Search, but is there a functionality reason for using it over another search algorithm like Breadth-First Search?

+",32215,,2444,,5/18/2021 0:09,5/18/2021 0:09,Why does the adversarial search minimax algorithm use Depth-First Search (DFS) instead of Breadth-First Search (BFS)?,,1,0,,,,CC BY-SA 4.0 +17189,2,,17188,12/20/2019 17:58,,3,,"

The primary reason is that Breadth-First Search requires much more memory (and this probably also makes it a little bit slower in practice, due to time required to allocate memory, jumping around in memory rather than working with what's still in the CPU's caches, etc.). Breadth-First Search needs memory to remember ""where it was"" in all the different branches, whereas Depth-First Search completes an entire path first before recursing back -- which doesn't really require any memory other than the stack trace. This is assuming we're using a recursive implementation for DFS -- which we normally do in the case of minimax.

+ +

You can clearly see this if you look at pseudocode for the two approaches (ignoring the minimax details here, just presenting pseudocode for straightforward searches):

+ +
BreadthFirstSearch(start):
+    Q = new queue()
+    Q.append(start)
+    while Q is not empty:
+        node = Q.pop()
+        if node is leaf:
+            do something with leaf
+        else:
+            for each child of node:
+                Q.append(child)
+
+DepthFirstSearch(start):
+    if start is leaf:
+        do something with leaf
+    for each child of start:
+        DepthFirstSearch(child)
+        // probably do something with return value from the recursive DFS call
+
+ +

You see that the BFS requires a queue object that explicitly stores a bunch of stuff in memory, whereas DFS doesn't.

+ +
+ +

There's more to the story once you get to extensions of Minimax, like Alpha-Beta pruning and Iterative Deepening... but since the question is just about Minimax, I'll leave it at that for now.

+",1641,,,,,12/20/2019 17:58,,,,0,,,,CC BY-SA 4.0 +17190,1,17191,,12/20/2019 18:26,,5,100,"

Are artificial neurons in layer $l$ only affected by those in layer $l-1$ (providing inputs) or are they also affected by neurons in layer $l$ (and maybe by neurons in other layers)?

+",30178,,2444,,12/20/2019 20:42,12/20/2019 21:04,Are neurons in layer $l$ only affected by neurons in the previous layer?,,1,0,,,,CC BY-SA 4.0 +17191,2,,17190,12/20/2019 20:54,,5,,"

It depends on the architecture of the neural network. However, in general, no, neurons at layer $l$ are not only affected by neurons at layer $l-1$.

+ +

In the case of a multi-layer perceptron (or feed-forward neural network), only neurons at layer $l-1$ directly affect the neurons at layer $l$. However, neurons at layers $l-i$, for $i=2, \dots, l$, also indirectly affect the neurons at layer $l$.

+ +

In the case of recurrent neural networks, the output of neuron $j$ at level $l$ can also affect the same neuron but at a different time step.

+ +

In the case of residual networks, the output of a neuron at a layer $l-i$, for $i=2, \dots, l$, can directly affect the neurons at layer $l$. These non-neighboring connections are called skip connections because they skip layers.
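For instance, a residual (skip) connection just adds a block's input to its output, so an earlier layer feeds a later one directly; a minimal PyTorch sketch:

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.fc2(self.act(self.fc1(x)))
        return self.act(out + x)   # the skip connection: x bypasses the two layers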

+ +

There are probably other combinations of connections between neurons at different layers or the same layer.

+",2444,,2444,,12/20/2019 21:04,12/20/2019 21:04,,,,0,,,,CC BY-SA 4.0 +17192,2,,17168,12/20/2019 22:55,,1,,"

I do not think these are formally defined.

+ +

The distinction is just to facilitate discussion of the NN architecture: e.g., you may have a few convolutional layers with pooling as a front-end, and a different architecture as a back-end (in a text-book architecture, just a fully-connected layer. But to get wild, maybe LSTM? To really get wild, BERT?).
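For example, in Keras such a split might informally look like this (a hypothetical MNIST-style model, just to show where people tend to draw the front-end/back-end line):

from tensorflow.keras import layers, models

model = models.Sequential([
    # ""front-end"": convolutional feature extraction
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    # ""back-end"": classifier on top of the extracted features
    layers.Flatten(),
    layers.Dense(100, activation='relu'),
    layers.Dense(10, activation='softmax'),
])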

+ +

In the end (no pun intended), computers do not care if a layer is seen by humans as a front-end or a back-end.

+",32218,,,,,12/20/2019 22:55,,,,0,,,,CC BY-SA 4.0 +17193,1,,,12/20/2019 23:36,,1,52,"

This question is about the Real-Time Recurrent Learning (RTRL) gradient on a recurrent neural network.

+ +

How can I write out the RTRL equations for a network?

+ +

Before presenting an example, let's introduce some notation:

+ +

Notation (figure in the original post)

+ +

So the network for which we want to write the RTRL equations is the following :

+ +

Network (figure in the original post)

+ +

A similar question can be found here, at page 561, for another network.

+",32076,,,,,12/20/2019 23:36,How can I write out the Real-TIme Recurrent Learning Gradient equations for a network?,,0,0,,,,CC BY-SA 4.0 +17194,1,,,12/21/2019 1:32,,5,67,"

What is the best way to train / do inference when the context matters highly as to what the inferred result should be?

+ +

For example in the image below all people are standing upright, but because of the perspective of the camera, their location highly affects their skeletal pose. If the 2D inferred skeleton of the person on the right were located where the middle person is in pixel space, it should not be considered upright even though it should be considered upright where it is now.

+ +

I assume the location would be fed in during both training and inference somehow, but I don't know the names of the techniques that should be used. Are there any best practices for this type of scenario?

+ +

+",32126,,,,,12/23/2019 18:25,Training and inference for highly-context-sensitive information,,0,0,,,,CC BY-SA 4.0 +17195,1,17740,,12/21/2019 6:25,,3,134,"

I keep reading about how LSTMs can't remember the ""important parts"" of a sequence which is why attention-based mechanisms are required. I was trying to use LSTMs to find people's name format.

+ +

For example, ""Millie Bobby Brown"" can be seen as first_name middle_name last_name format, which I'll denote as 0, but then there's ""Brown, Millie Bobby"" which is last_name, first_name middle_name, which I'll denote as 1.

+ +

The LSTM seems to be overfitting to one classification of format. I suspect it's because it's not paying special attention to the comma, which is a key feature of what format it could be. I'm trying to understand why an LSTM won't work for a case like this. It makes sense to me because LSTMs are better at sequence-to-sequence generation, and things such as summarization and sentiment analysis usually require attention. I suspect another reason why the LSTM is not able to infer the format is that the comma can be placed at different indexes of the sequence, so it could be losing its importance in the hidden state the longer the sequence is (not sure if that makes sense). Does anyone else have any theories? I'm trying to convince my fellow researchers that a pure LSTM won't be sufficient for this problem.

+",30885,,2444,,12/25/2019 3:26,1/29/2020 12:52,"Why can't LSTMs keep track of the ""important parts"" of a sequence?",,2,8,,,,CC BY-SA 4.0 +17196,1,17258,,12/21/2019 7:04,,4,294,"

I read an interesting essay about how far we are from AGI. There were quite a few solid points that made me re-visit the foundation of AI today. A few interesting concepts arose:

+
+

imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.

+

Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!

+
+

The concept of creativity seems like the initial thing to address when approaching a true AGI. The same type of creativity that humans have to ask the initial question or generate new radical ideas to long-lasting questions like dark matter.

+

Is there current research being done on this?

+

I've seen work with generating art and music, but it seems like a different approach.

+
+

In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

+
+

This is an interesting concept behind why reinforcement learning is not the answer. Without input from the environment, the agent has nothing to improve upon. However, an actual brain, even with no input or output, is still in a state of "thinking".

+",31978,,2444,,12/12/2021 12:50,12/12/2021 12:50,Are current AI models sufficient to achieve Artificial General Intelligence?,,2,1,,,,CC BY-SA 4.0 +17197,2,,17155,12/21/2019 12:05,,1,,"

If I understood correctly, your problem is about finding the optimal way to execute a series of tasks in order to maximize the results, using Genetic Algorithms.

+ +

In a few words, you're trying to solve the travelling salesman problem.

+ +
+ +

If I am correct, you're looking for crossover and mutation algorithms that allow you to work with ordered sets of elements. For these scenarios you usually go for the classic PMX (Partially Mapped Crossover) and Interchange Mutation. But there are plenty of other crossover algorithms you can use: OX1, OX2 (both variants of the Order Based Crossover), Shuffle Crossover, Ring Crossover, etc. Let's start with the mutation, which is easier.

+ +

For simplicity I'll represent the ordered genome like an array of integers: int[] genome = {1, 2, 3, 4, 5};

+ +

Interchange mutation

+ +

The concept is pretty basic: to mutate an ordered genome you just swap two elements. Easy.

+ +

+ + + +
    public int[] InterchangeMutation(int[] genome)
+    {
+        int i1 = random.Next(0, genome.Length);
+        int i2 = random.Next(0, genome.Length);
+
+        var copy = genome.ToArray(); //just making a copy here
+        copy[i1] = genome[i2];
+        copy[i2] = genome[i1];
+
+        return copy;
+    }
+
+ +

PMX Variation Crossover

+ +

This is a bit more complicated, as we have to take repetitions into account. From experience, I like to use this variation of the Partially Mapped Crossover. It is way easier to implement than the original one (you can find the paper online), but it costs some extra computational complexity. The longer the genome, the higher the price you will pay.

+ +
    +
  1. Start by selecting two parents to use for the crossover.
  2. +
  3. From the first parent (P1) select a random section that will be passed over.
  4. +
  5. For the remaining values:
    3A. If they are not in the copied section, take them from P2
    3B. If they are in the copied section, take them from P1
    3C. End up filling the gaps with the missing values in the order they appear in P1
  6. +
+ +

+ + + +
 public int[] PMX2Crossover(int[] P1, int[] P2)
+    {
+        //Initializing child genome
+        int[] child = new int[P1.Length];
+        for (int i = 0; i < P2.Length; i++) child[i] = -1;
+
+        //Step1: getting random section to copy over
+        int i1 = random.Next(0, P1.Length);
+        int i2 = random.Next(0, P1.Length);
+
+        //Step 2: Copying over section from P1
+        for (int i = Math.Min(i1, i2); i < Math.Max(i1, i2); i++) child[i] = P1[i];
+
+        //Step 3A: Copying values from P2
+        for (int i = 0; i < P2.Length; i++) if (child[i] ==-1 && !child.Contains(P2[i])) child[i] = P2[i];
+
+        //Step 3B: Copying values from P1
+        for (int i = 0; i < P2.Length; i++) if (child[i] == -1 && !child.Contains(P1[i])) child[i] = P1[i];
+
+        //Step 3C: Copying remaining values from P1
+        int emptyGene = child.IndexOfFirst(-1);
+        while (emptyGene != -1)
+        {
+            child[emptyGene] = FirstMissingGene(P1, child); 
+            emptyGene = child.IndexOfFirst(-1);
+        }
+
+        return child;
+    }
+
+    private int FirstMissingGene(int[] parent, int[] child)
+    {
+        foreach (var gene in parent) if (!child.Contains(gene)) return gene;
+        return -1; // should never get here
+    }
+
+ +

You can lower the complexity of the crossover to O(n) (from O(n*n)) simply by using a hashmap that keeps track of the genes already added to the child.

+ +

To get the first child call PMX2Crossover(P1, P2); and for the second just swap the parents PMX2Crossover(P2, P1);

+ +

Hope this helps you.

+ +

Source: I have been a bachelor professor of AI for a period.

+",15530,,15530,,12/21/2019 12:11,12/21/2019 12:11,,,,1,,,,CC BY-SA 4.0 +17200,1,,,12/21/2019 14:47,,2,142,"

I am making a firetruck using Arduino Uno with flame sensors and ultrasonic sensors to detect how to move and where to go. As this is a project for my university, I am asked to implement AI in it for path planning.

+ +

I am not sure whether to use something like the A* technique or an ID3 decision tree, or if there is something better than both, to implement path planning for my robot. Any suggestions?

+",32233,mushter,2444,,12/22/2019 2:09,12/26/2019 11:24,What is the most suitable AI technique to use for path planning?,,1,6,,,,CC BY-SA 4.0 +17201,1,,,12/21/2019 18:56,,1,82,"

I'm trying to re-implement Elastic Weight Consolidation (EWC) as outlined in this paper. As a reference, I am also using this Github repository (another implementation).

+ +

My model/idea is pretty straightforward. Train the network to do the bit operation AND (e.g 1 && 0 = 0), then using EWC, train it to use OR (e.g 1 || 0 = 1). I've got three inputs: bit1, bit2 and operation (0 stands for AND and 1 for OR) and one output neuron - the output of the operation. For example, if I have 0 1 0 the ground truth should be 0.

+ +

The problem, however, comes when calculating the EWC loss.

+ +
def penalty(self, model: nn.Module):
+    loss = 0
+    for n, p in model.named_parameters():
+        _loss = self._precision_matrices[n] * (p - self._means[n]) ** 2
+        loss += _loss.sum()
+    return loss
+
+ +

I've got two problems:

+ +
    +
  • The current means (p) and the old ones (self._means[n]) are always the same, resulting in multiplication by 0, which completely negates EWC.
  • +
  • As I have just one output neuron the calculation of the fisher's matrix is a bit different than the repo. The one I have written seems to be wrong. Any ideas?
  • +
+ +

I initialise the self._means[n] and self._precision_matrices (fisher's matrix) in the init method of the EWC model:

+ +
class EWC(object):
+def __init__(self, model: nn.Module, dataset: list, device='cpu'):
+
+    self.model = model
+    self.dataset = dataset
+    self.device = device
+
+    self._means = {}
+    self._precision_matrices = self._diag_fisher()
+
+    for n, p in self.model.named_parameters():
+        self._means[n] = p.data.clone()
+
+def _diag_fisher(self):
+    precision_matrices = {}
+
+    # Set it to zero
+    for n, p in self.model.named_parameters():
+        params = p.clone().data.zero_()
+        precision_matrices[n] = params
+
+    self.model.eval()
+
+    for input in self.dataset:
+        input = input.to(self.device)
+
+        self.model.zero_grad()
+
+        output = self.model(input)
+        label = torch.sigmoid(output).round()
+        loss = F.binary_cross_entropy_with_logits(output, label)
+        # loss = F.nll_loss(F.log_softmax(output, dim=1), label)
+        loss.backward()
+
+        for n, p in self.model.named_parameters():
+            precision_matrices[n].data += p.grad.data ** 2 / len(self.dataset)
+
+    precision_matrices = {n: p for n, p in precision_matrices.items()}
+    return precision_matrices
+
+ +

And this is the actual training:

+ +
# Train the model EWC
+for epoch in tqdm(range(EPOCS)):
+
+    # Get the loss
+    ls = ewc_train(model, opt, loss_func, dataloader[task], EWC(model, old_tasks), importance, device)
+
+def ewc_train(model: nn.Module, opt: torch.optim, loss_func:torch.nn, data_loader: torch.utils.data.DataLoader, ewc: EWC, importance: float, device):
+    epoch_loss = 0
+
+    for i, (inputs, labels) in enumerate(data_loader):
+        inputs = inputs.to(device).long()
+        labels = labels.to(device).float()
+
+        opt.zero_grad()
+
+        output = model(inputs)
+        loss = loss_func(output.view(-1), labels) + importance * ewc.penalty(model)
+        loss.backward()
+        opt.step()
+
+        epoch_loss += loss.item()
+
+    return loss
+
+ +

Note: the loss function that I am using is nn.BCEWithLogitsLoss() and optimisation is: SGD(params=model.parameters(), lr=0.001).

+",32235,,2444,,12/22/2019 14:20,12/22/2019 14:20,Why are the current means and the old ones the same in this implementation of Elastic Weight Consolidation?,,0,1,,,,CC BY-SA 4.0 +17202,1,17205,,12/21/2019 21:31,,2,1205,"

I am trying to implement a denoising autoencoder (DAE) to remove noise from 1024-point FFT spectra. I am using two types of spectra: (1) that contain a distinctive high amplitude spectral peak and (2) that contain only noise peaks.

+ +

If I understood correctly, I can train the DAE using the corrupted spectra (spectra + noise) and afterwards I can use it to remove noise from new datasets. The problem is that, when testing the DAE, it returns the type (1) spectrum mentioned above, regardless of the input. The same happens when I apply predict on the training data. This is the code I am using (Python/Tensorflow):

+ +
def BuildModel(nInput):
+    input_dim = Input(shape = (nInput, ))
+
+    # Encoder Layers
+    encoded1 = Dense(896, activation = 'relu')(input_dim)
+    encoded2 = Dense(768, activation = 'relu')(encoded1)
+    encoded3 = Dense(640, activation = 'relu')(encoded2)
+    encoded4 = Dense(512, activation = 'relu')(encoded3)
+    encoded5 = Dense(384, activation = 'relu')(encoded4)
+    encoded6 = Dense(256, activation = 'relu')(encoded5)
+    encoded7 = Dense(encoding_dim, activation = 'relu')(encoded6)
+
+    # Decoder Layers
+    decoded1 = Dense(256, activation = 'relu')(encoded7)
+    decoded2 = Dense(384, activation = 'relu')(decoded1)
+    decoded3 = Dense(512, activation = 'relu')(decoded2)
+    decoded4 = Dense(640, activation = 'relu')(decoded3)
+    decoded5 = Dense(768, activation = 'relu')(decoded4)
+    decoded6 = Dense(896, activation = 'relu')(decoded5)
+    decoded7 = Dense(nInput, activation = 'sigmoid')(decoded6)
+
+    # Combine Encoder and Deocoder layers
+    autoencoder = Model(inputs = input_dim, outputs = decoded7)
+
+    autoencoder.summary()
+    # Compile the Model
+    autoencoder.compile(optimizer=OPTIMIZER, loss='binary_crossentropy')
+    #autoencoder.compile(loss='mean_squared_error', optimizer = RMSprop())
+
+    return autoencoder
+
+X_train, X_test, y_train, y_test = train_test_split(spectra.iloc[:,0:spectra.shape[1]-1], spectra['Class'], test_size=testDatasetSize, stratify=spectra.Class, random_state=seedValue)
+
+X_train, y_train = shuffle(X_train, y_train, random_state=seedValue)
+X_test, y_test = shuffle(X_test, y_test, random_state=seedValue)
+
+X_unseen = X_train.to_numpy()[0:1000,:] # Data not used for training, only for testing
+y_unseen = y_train.to_numpy()[0:1000]
+X_train = X_train.iloc[1000:]
+y_train = y_train.iloc[1000:]
+
+# Scaling
+maxVal = max(X_train)
+X_train = (X_train/maxVal).to_numpy()
+X_test = (X_test/maxVal).to_numpy()
+X_unseen = (X_unseen/maxVal)#.to_numpy()
+
+# Corrupted data
+noise_factor = 0.01
+X_train_noisy = X_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_train.shape)
+X_test_noisy = X_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_test.shape)
+X_unseen_noisy = X_unseen + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_unseen.shape)
+
+ae = BuildModel(X_train.shape[1])
+PrintConsoleLine('Creating model finished')
+print('')
+
+history = ae.fit(X_train_noisy, X_train, epochs=NB_EPOCH, batch_size=BATCH_SIZE, validation_data=[X_test_noisy, X_test])
+save_model(ae, modelFile, overwrite=True)
+
+# Test
+X = X_unseen
+X_noisy = X_unseen_noisy
+X_denoised = ae.predict(X_noisy) # X_train gives the same result (spectra type (1)) !?!
+N = len(X_denoised[0,:])
+index = 6
+PlotDataSimple(3, np.linspace(0,N-1,N), X[index,:], 'Frequency domain', 'Index', 'Amplitude', None)
+PlotDataSimple(4, np.linspace(0,N-1,N), X_noisy[index,:], 'Frequency domain', 'Index', 'Amplitude', None)
+PlotDataSimple(5, np.linspace(0,N-1,N), X_denoised[index, :], 'Frequency domain', 'Index', 'Amplitude', None)
+
+Dataset shape: (17000, 65, 65, 1) (files, samples X axis, samples Y axis, class)
+Train on 12600 samples, validate on 3400 samples
+Epoch 1/3
+12600/12600 [==============================] - 26s 2ms/sample - loss: 0.6813 - val_loss: 0.4913
+Epoch 2/3
+12600/12600 [==============================] - 14s 1ms/sample - loss: 0.1621 - val_loss: 0.0578
+Epoch 3/3
+12600/12600 [==============================] - 16s 1ms/sample - loss: 0.0230 - val_loss: 0.0169
+
+ +

The results I am getting (Column 1 - Initial signal, Column 2 - Corrupted signal, Column 3 - Denoised signal):

+ +

+ +

So why does the DAE output the same spectra regardless of the inputs? Am I misunderstanding the DAE principle or is there a problem in my implementation?

+",32237,,,,,4/11/2021 18:28,Why does the denoising autoencoder always returns the same output?,,2,0,,,,CC BY-SA 4.0 +17203,1,,,12/22/2019 6:08,,4,993,"

I'm trying to implement the VQ-VAE model. In there, a continuous variable $x$ is encoded in an array $z$ of discrete latent variables $z_i$ that are mapped each to an embedding vector $e_i$. These vectors can be used to generate an $\hat{x}$ that approximates $x$.

+ +

In order to obtain a reasonable generative model $p_\theta(x)=\int p_\theta(x|z)p(z)$, one needs to learn the prior distribution of the code $z$. However, it is not clear in this paper, or its second version, what should be the input of the network that learns the prior. Is it $z=[z_i]$ or $e=[e_i]$? The paper seems to indicate that it is $z$, but if that's the case, I don't understand how I should encode $z$ properly. For example, a sample of $z$ might be an $n\times n$ matrix with discrete values between $0$ and $511$. It is not reasonable to me to use a one-hot encoding, nor to simply use the discrete numbers as if they were continuous, given that there is no defined order for them. On the other hand, using $e$ doesn't have this problem since it represents a matrix with continuous entries, but then the required network would be much bigger.
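
+ +

To make the two candidate representations concrete, here is a minimal sketch of what I have in mind (PyTorch; the sizes $n=8$, 512 codes and 64-dimensional embeddings are just made-up example numbers):

+ +
import torch
+import torch.nn.functional as F
+
+n, num_codes, emb_dim = 8, 512, 64            # made-up sizes for illustration
+z = torch.randint(0, num_codes, (n, n))       # discrete code map z
+
+# Option 1: one-hot encode z  ->  shape (n, n, 512)
+z_onehot = F.one_hot(z, num_classes=num_codes).float()
+
+# Option 2: look up the embedding e_i of each z_i  ->  shape (n, n, 64)
+codebook = torch.nn.Embedding(num_codes, emb_dim)
+e = codebook(z)
+print(z_onehot.shape, e.shape)
+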

+ +

So, what should be the input for the prior model? $z$ or $e$? If it is $z$, how should I represent it? If it is $e$, how should I implement the network?

+",30983,,,,,2/1/2023 14:48,What is the input for the prior model of VQ-VAE?,,1,0,,,,CC BY-SA 4.0 +17204,2,,16023,12/22/2019 8:53,,0,,"

I imagine that using the MaxNFFC as a stop criterion only happens in very particular implementations. And this is its main disadvantage.

+

Normally, you'd evaluate each individual, each generation. So, NFFC will always be the same as the size of the population times the number of epochs (+1 to consider the initial population).

+

$$N*(E+1)$$

+

As the population size is generally constant, MaxNFFC looks a lot like an MaxEpoch stop.

+

$$ \frac{\text{NFFC}}{N} = E + 1 $$

+

So, it seems that it might be used in scenarios where:

+
    +
  • the population size is not constant (in this case you want to make sure that at least K individuals have been evaluated before stopping).
  • +
  • not all individuals are evaluated each epoch. There might be a high abortive rate (individuals with invalid genomes that are kept in the population but not considered). In this case, as above, you want to perform a minimum amount of evaluations.
  • +
+

I wouldn't see the usage of MaxNFFC in other cases.

+",15530,,2444,,1/30/2021 2:51,1/30/2021 2:51,,,,0,,,,CC BY-SA 4.0 +17205,2,,17202,12/22/2019 11:00,,0,,"

You are using Dense layers; try 1D convolutions instead. Have you tried a different activation function, such as softmax, and MSE loss instead of binary cross-entropy? Are all your inputs between 0 and 1? Also, I think your noise amplitude is too large in the 2nd and 3rd cases compared to the actual signal. Can you try training on the different types of spectra separately and check the result, to see if the DAE is learning anything? Also, try training for more epochs.

+ +

I am not sure and would have commented this but I can't due to low rep, but I am interested in the solution.

+",27875,,,,,12/22/2019 11:00,,,,2,,,,CC BY-SA 4.0 +17206,2,,4748,12/22/2019 13:19,,0,,"

One more idea - I recall learning about Neyman-Pearson task in my studies. It is a statistical learning method for binary classification problem where overlooked danger (false negative error) is much unwanted.

+ +

You set a desired threshold for false negative error rate and then minimize false positive error. You just need to measure conditional probabilities of each class. It may be expressed as a linear program and solved to get the optimal strategy for the threshold of your choice.
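
+ +

As a very rough illustration of the idea (not the actual linear-programming formulation), one could simply pick the decision threshold on a classifier's scores so that the false-negative rate stays under the chosen level; everything below is made-up toy data:

+ +
import numpy as np
+
+def pick_threshold(scores, labels, max_fnr=0.01):
+    # Illustrative only: largest threshold whose false-negative rate stays below max_fnr,
+    # which in turn keeps the false-positive rate as low as possible.
+    best = scores.min()
+    for t in np.sort(np.unique(scores))[::-1]:
+        pred_pos = scores >= t
+        fn = np.sum((labels == 1) & ~pred_pos)
+        if fn / max(np.sum(labels == 1), 1) <= max_fnr:
+            best = t
+            break
+    return best
+
+rng = np.random.default_rng(0)
+labels = (rng.random(1000) < 0.1).astype(int)
+scores = 0.7 * labels + 0.3 * rng.random(1000)   # toy classifier scores
+print(pick_threshold(scores, labels, max_fnr=0.05))
+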

+",32245,,,,,12/22/2019 13:19,,,,0,,,,CC BY-SA 4.0 +17208,2,,17196,12/22/2019 14:08,,1,,"

I am focusing on what you have posted here without going through (or having read) the whole essay you linked:

+
+

In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

+
+

The classic brain in a vat experiment does not disconnect the brain from all inputs and outputs but replaces "real" connections with its environment with "fake" connections, e.g. by connecting the brain to a computer. This is what Wikipedia says:

+
+

In philosophy, the brain in a vat (BIV; alternately known as brain in a jar) is a scenario used in a variety of thought experiments intended to draw out certain features of human conceptions of knowledge, reality, truth, mind, consciousness, and meaning. It is an updated version of René Descartes's evil demon thought experiment originated by Gilbert Harman.1 Common to many science fiction stories, it outlines a scenario in which a mad scientist, machine, or other entity might remove a person's brain from the body, suspend it in a vat of life-sustaining liquid, and connect its neurons by wires to a supercomputer which would provide it with electrical impulses identical to those the brain normally receives.2 According to such stories, the computer would then be simulating reality (including appropriate responses to the brain's own output) and the "disembodied" brain would continue to have perfectly normal conscious experiences, such as those of a person with an embodied brain, without these being related to objects or events in the real world.

+
+

Following that, the argument of the author referring to the brain in a vat scenario does not hold.

+

If you ignore that problem for a moment the next inconsistency arises: The author assumed that because a brain would act "so and so" an AGI would need to act "so and so" as well. That however is not in line with how artificial intelligence is usually defined. Even if you consider the broad range of definitions:

+

+

(source: "Artificial Intelligence: A Modern Approach"; Russell, Norvig, 3rd Ed, 2010)

+

None of these definitions directly relates AI to the brain. Therefore, the assumption that an AGI (as a subset of AI) would need to act like a brain in a vat is, without further assumptions, generally flawed.

+

Another problem with the author's argumentation lies here:

+
+

to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.

+

Such a program would presumably be an AGI (and then some).

+
+

The assumption that "creativity" (in quotation marks since we actually need to precisely define that in the first place) requires an AGI does not hold either. Let's stick to the author's example related to dark matter:

+
    +
  1. This is per definition a specialized field of application of an AI. Accordingly, there is not necessarily a need for an AGI since "specialization" is exactly what an AGI does not have.
  2. +
  3. Moreover, you could make an argument that coming up with new explanations related to dark matter is somewhat related to automated theorem proving. Accordingly, we might already have AI which is, in principle, capable of "solving" this task as of today.
  4. +
+",30789,,-1,,6/17/2020 9:57,12/22/2019 14:13,,,,0,,,,CC BY-SA 4.0 +17210,5,,,12/22/2019 14:25,,0,,,2444,,2444,,12/22/2019 14:25,12/22/2019 14:25,,,,0,,,,CC BY-SA 4.0 +17211,4,,,12/22/2019 14:25,,0,,"For questions related to the elastic weight consolidation (EWC) algorithm introduced in the paper ""Overcoming catastrophic forgetting in neural networks"" (2017) by James Kirkpatrick et al.",2444,,2444,,12/22/2019 14:25,12/22/2019 14:25,,,,0,,,,CC BY-SA 4.0 +17212,1,17213,,12/22/2019 15:38,,2,593,"

In machine learning, I understand that linear regression assumes that the model should be linear in its parameters or weights. For example:

+ +

$$y = w_1x_1 + w_2x_2$$

+ +

is a linear equation where $x_1$ and $x_2$ are feature variables and $w_1$ and $w_2$ are parameters.

+ +

Also

+ +

$$y = w_1(x_1)^2 + w_2(x_2)^2$$

+ +

is also linear as parameters $w_1$ and $w_2$ are linear with respect to $y$.

+ +

Now, I read some articles stating that in the equation like

+ +

$$y = \log(w_1)x_1 + \log(w_2)x_2$$

+ +

can also be made linear by considering other variables $v_1$ and $v_2$ as:

+ +

\begin{align} +v_1 &= \log(w_1)\\ +v_2 &= \log(w_2) +\end{align}

+ +

Thus,

+ +

$$y = v_1x_1 + v_2x_2$$

+ +

So, in this sense, any non-linear equation can be made linear, then what is non-linear regression here? I think I am missing something important here. I am a beginner in the field of Machine Learning. Can somebody help me?

+",32247,,2444,,12/22/2019 16:08,12/22/2019 17:29,What is the difference between linear and non-linear regression?,,1,1,,,,CC BY-SA 4.0 +17213,2,,17212,12/22/2019 17:15,,2,,"

The difference is simply that non-linear regression learns parameters that in some way control the non-linearity - e.g. any weight or bias that is applied before a non-linear function.

+ +

For instance:

+ +

$$y = (w_1 x_1 + w_2 x_2)^2 + w_3$$

+ +

With such a function to learn, you cannot separate out transformed values of $w_1$ and $w_2$ and turn this into a linear function of just $x_1$ and $x_2$.

+ +

What you are describing as non-linearities in your examples are instead all applied by the machine learning engineer to create new candidate features for linear regression. This is not usually described as non-linear regression, but feature transformation or feature engineering.
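
+ +

As a small sketch of that distinction (scikit-learn, made-up data): the model below is still plain linear regression, it is only the input features that have been transformed by hand:

+ +
import numpy as np
+from sklearn.linear_model import LinearRegression
+
+rng = np.random.default_rng(0)
+X = rng.uniform(-2, 2, size=(200, 2))          # original features x1, x2
+y = 3 * X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2      # target happens to depend on the squares
+
+X_transformed = X ** 2                         # engineered features x1^2, x2^2
+model = LinearRegression().fit(X_transformed, y)
+print(model.coef_)                             # approximately [3.0, 0.5]
+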

+ +

There is also a kind of middle ground where a central linear algorithm e.g. linear regression, is trained on many variations of the original features, by automated generation and filtering of transformed features. The most general variants of this approach are not hugely popular because they suffer from same risks of overfitting as non-linear models whilst not offering much in the way of improved performance. However, if you narrow down the types of feature and transformation combinations based on some knowledge of how you expect the target function to behave, it leads to many useful variants of linear regression - e.g. regression on fourier transforms, radial basis functions etc.

+",1847,,1847,,12/22/2019 17:29,12/22/2019 17:29,,,,0,,,,CC BY-SA 4.0 +17214,1,,,12/22/2019 22:00,,1,166,"

Given thousands of images, where some of the images contain target objects and others do not, is there an easy way of drawing bounding boxes on these target objects rather than relying on manual annotation? Wouldn't drawing 4 orientations of an object and their respective bounding boxes and randomly inserting them into the images be a viable option?

+

It becomes painful to manually annotate thousands of images by yourself.

+",22233,,2444,,9/17/2020 16:06,10/12/2021 18:06,Is there a way of automatically drawing bounding boxes around interested objects?,,1,0,,,,CC BY-SA 4.0 +17215,2,,13086,12/22/2019 23:26,,3,,"

Yes it has been tried. In fact there is a whole field, dubbed Genetic Programming.

+ +

There is an annual competition to obtain ""Human-Competitive"" algorithms, and many instances of those have been found over the years.

+",16363,,,,,12/22/2019 23:26,,,,0,,,,CC BY-SA 4.0 +17216,1,17246,,12/23/2019 5:17,,11,1166,"

Can someone explain the mathematical intuition behind the forget layer of an LSTM?

+ +

So as far as I understand it, the cell state is essentially long term memory embedding (correct me if I'm wrong), but I'm also assuming it's a matrix. Then the forget vector is calculated by concatenating the previous hidden state and the current input and adding the bias to it, then putting that through a sigmoid function that outputs a vector then that gets multiplied by the cell state matrix.

+ +

How does a concatenation of the hidden state of the previous input and the current input with the bias help with what to forget?

+ +

Why is the previous hidden state, current input and the bias put into a sigmoid function? Is there some special characteristic of a sigmoid that creates a vector of important embeddings?

+ +

I'd really like to understand the theory behind calculating the cell states and hidden states. Most people just tell me to treat it like a black box, but I think that, in order to have a successful application of LSTMs to a problem, I need to know what's going on under the hood. If anyone has any resources that are good for learning the theory behind why cell state and hidden state calculation extract key features in short and long term memory I'd love to read it.

+",30885,,2444,,12/25/2019 3:29,2/6/2022 21:21,How does the forget layer of an LSTM work?,,1,0,,,,CC BY-SA 4.0 +17217,1,,,12/23/2019 8:55,,3,97,"

I try to solve some easy functions with a neuronal network (aforge-lib):

+ +

This is how I generate the dataset:

+ +
const int GesamtAnzahl = 200;
+float[,] tempData = new float[GesamtAnzahl, 2];
+float minX = float.MaxValue;
+float maxX = float.MinValue;
+
+Random rnd = new Random();
+var granzen = new List<int>() 
+{
+    rnd.Next(1, GesamtAnzahl-1),
+    rnd.Next(1, GesamtAnzahl-1),
+    rnd.Next(1, GesamtAnzahl-1),
+    rnd.Next(1, GesamtAnzahl-1),
+};
+granzen.Sort();
+
+for (int i = 0; i < GesamtAnzahl; i++)
+{
+
+    var x = i;
+    var y = -1;
+    if ((i > granzen[0] && i < granzen[1]) ||
+        (i > granzen[2] && i < granzen[3]))
+    {
+        y = 1;
+    }
+    tempData[i, 0] = x;
+    tempData[i, 1] = y;
+}
+
+ +

So this is quite easy: the output is 1 if the input is between the 2 lower randomly generated ""borders"" or between the 2 higher numbers. Otherwise, the output is -1.

+ +

The input values are standardized to fit between -1 and 1. So 0 becomes -1 and 200 becomes 1.

+ +

As a network I used a BackPropagationLearning with a BipolarSigmoidFunction and several configurations like:

+ +
Learning Rate: 0,1
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+
+
+Learning Rate: 0,1
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+Hidden Layer 3: 2 neurons
+
+
+Learning Rate: 0,2
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+Hidden Layer 3: 2 neurons
+
+ +

and so on. None of them worked. As described here: https://towardsdatascience.com/beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks-51466afa0d3e it should be enough to have 2 hidden layers. The first one with 4 neurons and the second one with 2.

+ +

The configurations which worked best were:

+ +
Learning Rate: 0,01
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 4 neurons
+Hidden Layer 3: 4 neurons
+
+Learning Rate: 0,02
+Momentum: 0
+Sigmoids alpha value: 2
+Hidden Layer 1: 4 neurons
+Hidden Layer 2: 2 neurons
+
+ +

This solves the problem about 50 % of the times.

+ +

As this is a quite simple problem I wonder if I am doing something wrong. I think there has to be a configuration which has better results.

+ +

What is the best configuration for this problem and why?

+ +

Additionally I tried:

+ +
    +
  • Having more data does not help. I created a dataset of 5000 points (GesamtAnzahl = 5000). Then the networks have an even worse success rate.
  • +
  • I tried to add an extra constant input (always 1) to the dataset, but this also lowered the success rate.
  • +
+",32255,,,,,5/22/2023 0:08,"What is a working configuration of a neuronal network (number of layers, lerning rate and so on) for a specific dataset?",,1,0,,,,CC BY-SA 4.0 +17218,1,,,12/23/2019 12:58,,2,346,"

Assuming we have big $m \times n$ input dataset, with $m \times 1$ output vector. It's a classification problem with only two possible values: either $1$ or $0$.

+

Now, the problem is that almost all elements of the output vector are $0$s with very few $1$s (i.e. it's a sparse vector), such that if the neural network "learned" to always give 0 as output, this would produce high accuracy, while I'm also interested in learning when the 1s occur.

+

I thought one possible approach could be to write a custom loss function giving more weight to the 1s, but I'm not completely sure if this would be a good solution.
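
+

For instance, I had something along these lines in mind (Keras/TensorFlow sketch; the weight of 50 is an arbitrary number I picked, and Keras's built-in class_weight argument of fit might be a simpler alternative):

+
import tensorflow as tf
+
+def weighted_bce(pos_weight=50.0):
+    # Hypothetical sketch: weight the rare positive class more heavily; 50.0 is arbitrary.
+    def loss(y_true, y_pred):
+        y_true = tf.cast(y_true, y_pred.dtype)
+        bce = tf.keras.backend.binary_crossentropy(y_true, y_pred)  # element-wise
+        weights = 1.0 + (pos_weight - 1.0) * y_true                 # 1 for 0s, pos_weight for 1s
+        return tf.reduce_mean(weights * bce)
+    return loss
+
+# model.compile(optimizer='adam', loss=weighted_bce(50.0))
+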

+

What kind of strategy can be applied to detect such outliers?

+",32260,,2444,,10/28/2020 11:32,7/16/2023 3:46,How to perform binary classification when one class is more predominant than the other?,,2,1,,,,CC BY-SA 4.0 +17219,2,,7589,12/23/2019 14:38,,1,,"

Welcome to the mine-field of semantic definitions within AI! According to Encyclopedia Britannica ML is a “discipline concerned with the implementation of computer software that can learn autonomously.” There are a bunch of other definitions for ML but generally they are all this vague, saying something about “learning”, “experience”, “autonomous”, etc. in varying order. There is no well-known benchmark definition that most people use, so unless one wants to propose one, whatever one posts on this needs to be backed up by references.

+ +

According to Encyclopedia Britannica’s definition the case for calling MCTS part of ML is pretty strong (Chaslot, Coulom’s et al. work from 2006-8 is used for the MCTS reference). There are two policies used in MCTS, a tree-policy and a simulation-policy. At decision time the tree-policy updates action-values by expanding the tree structure and backing up values from whatever it finds from search. There is no hard-coding on which nodes should be selected/expanded; it all comes from maximizing rewards from statistics. The nodes closer to the root appear more and more intelligent as they “learn” to mimic distributions/state and/or action-values from the corresponding ones from reality. Whether this can be called “autonomous” is an equally difficult question because in the end it’s humans who wrote the formulas/theory MCTS uses. 50 years from now it may not be called autonomous, or ML, but today it would probably at least ""qualify"".

+",29670,,29670,,12/23/2019 15:01,12/23/2019 15:01,,,,0,,,,CC BY-SA 4.0 +17221,1,17222,,12/23/2019 15:01,,1,354,"

I have trained a multi-class CNN model using fastai. The model outputs probabilities for each of the three classes, which, of course, sum up to 1. The class with the highest probability becomes the predicted class.

+ +

Is there any way I can convert them into 0 to 1 scale, where near to 0 value would mean class 1, near to 0.5 would mean class 2 and near to 1 would mean class 3?

+",25688,,2444,,12/24/2019 0:51,12/24/2019 0:51,How can I convert the probability score between 0 to 1 to another format?,,2,0,,,,CC BY-SA 4.0 +17222,2,,17221,12/23/2019 15:25,,1,,"

You could maybe do something like this, it's a bit hackish +\begin{equation} +y = C_1\cdot 1 + C_2 \cdot 0.5 + C_3 \cdot 0 +\end{equation} +$y$ represents the output and its bounded $\in [0, 1]$. $C_i$ is probability for class $i$. This way when $C_1 \approx 1, C_2 \approx 0, C_3 \approx 0$ you have +\begin{equation} + y \approx 1\cdot 1 + 0.5 \cdot 0 + 0 \cdot 0 \approx 1 +\end{equation} +when $C_1 \approx 0, C_2 \approx 1, C_3 \approx 0 $ you have +\begin{equation} + y \approx 1\cdot 0 + 0.5 \cdot 1 + 0 \cdot 0 \approx 0.5 +\end{equation} +and when $C_1 \approx 0, C_2 \approx 0, C_3 \approx 1 $ you have +\begin{equation} + y \approx 1\cdot 0 + 0.5 \cdot 0 + 0 \cdot 1 \approx 0 +\end{equation}
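
+ +

A tiny numerical sketch of the same mapping (numpy, with made-up probabilities):

+ +
import numpy as np
+
+anchors = np.array([1.0, 0.5, 0.0])   # value assigned to class 1, class 2, class 3
+probs = np.array([0.1, 0.7, 0.2])     # example softmax output for one sample
+y = probs @ anchors                   # = 0.45, i.e. close to the class-2 anchor of 0.5
+print(y)
+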

+",20339,,,,,12/23/2019 15:25,,,,8,,,,CC BY-SA 4.0 +17223,2,,17221,12/23/2019 16:09,,0,,"

In such cases, you can have just 1 final neuron and treat the problem as a regression problem where the output distance from all 3 classes is calculated and the class with least distance becomes the predicted class.

+ +

If you want independent values for 3 classes (such as [0.8, 0.5, 0.3]) which don't add up to 1, (something like multilabel/multiclass classification), you can use sigmoid in such cases( you won't get the probability ).

+",27875,,,,,12/23/2019 16:09,,,,1,,,,CC BY-SA 4.0 +17224,2,,17168,12/23/2019 16:56,,0,,"

I think that front end refers to a high level API for a CNN framework (c++ front end, Python front end).

+ +

The back end can be understood as a more specific (low-level) interface to particular libraries.

+ +

You can use different back ends but still manipulate the training data and the model-building process the same way using the front end (use Keras with TensorFlow, Caffe with PyTorch, or the other way round, use Theano, TensorFlow, etc. with Keras!).

+ +

You can find some more material at the following links :

+ + + +

I don't think it refers to the neural network's layer structure. The terms shallow or deep layers are usually preferred for that.

+",30392,,30392,,12/30/2019 19:26,12/30/2019 19:26,,,,0,,,,CC BY-SA 4.0 +17225,1,,,12/23/2019 18:41,,2,424,"

I’m experimenting with reinforcement learning for a 2D pixel plotting task, and am running into an issue that (I think) has to do with the big action space. It goes like this:

+ +

The Agent gets two vector inputs each step. +Each describes an (n x n) 2d matrix composed of zeros and ones.

+ +

One is the (n x n) target matrix, containing a certain shape of zeros. The other is an (n x n) state matrix, containing another shape.

+ +

Every step, I want my agent to pick an (x, y) coordinate: x (picks one of n) and y (picks one of n).

+ +

This will turn a zero into one, or one into zero.

+ +

Every step, if the choice is correct, I give a small reward, and the agent gets punished when it is incorrect.

+ +

I’m training the agent (a network with 3 layers with 256 hidden units) with PPO, and curiosity in the loss, and for a 12 x 12 matrix it works quite well, not 100% but okay. (see image). Note that the agent doesn't get enough steps here to fully delete the initial shape when the target shape is empty, that's why it doesn't fully make it. Takes about 800K steps to converge though. +

+ +

But the agent starts struggling in local minima when I increase beyond 32 x 32.

+ +

This one is at 32 x 32:

+ +

+ +

Is this even scalable to bigger matrices even? I was hoping to go 3D eventually, by reaching 100x100x100 .

+ +

I do realize that I have a huge input and action space when working with such a grid. Is something like that even possible with an RL paradigm? I've tried increasing the network size and decreasing the learning rate, but I'm not satisfied. Any ideas or alternative approaches to plot pixels like this?

+ +

Any input is very much appreciated! +Thanks!

+",31180,,,,,12/23/2019 18:41,Reinforcement learning possible with big action space?,,0,2,,,,CC BY-SA 4.0 +17226,1,,,12/23/2019 19:06,,1,64,"

I have the following kind of data frame. These are just example:

+ +
A 1 Normal
+A 2 Normal
+A 3 Stress
+B 1 Normal
+B 2 Stress
+B 3 Stress
+C 1 Normal
+C 2 Normal
+C 3 Normal
+
+ +

I want to do 5-fold cross-validation and split the data using

+ +
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
+
+data = (ImageList.from_folder(PATH)
+        .split_by_rand_pct(valid_pct=0.2)
+        .label_from_folder()
+        .transform(get_transforms(do_flip=True, flip_vert= True,max_zoom=1.1, max_rotate=10, max_lighting=0.5),size=224)
+        .databunch()
+        .normalize() )
+
+ +

It works great. It splits the data randomly, which is expected. However, I want to keep data points that have the same value in column 1 together in either the training or the validation set. So, all the A's would be in either the training or validation dataset, all the B's would be in the training or validation dataset, and so on.

+ +

More info on my data: I have cell assay images which are labelled in three classes. Now, these images are big in size, so I split one image into 16 small, non-overlapping tiles, to bring down the size to 224 (optimal enough to feed into a CNN). All these tiles have the same label as the original image. These tiles are the final input to the CNN. To perform cross-validation, I need to keep the tiles of the same image in the same fold and set.
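
+ +

For illustration, something like scikit-learn's GroupKFold seems to express the kind of split I am after (sketch with made-up toy arrays; I am not sure how to plug this into the fastai pipeline above):

+ +
import numpy as np
+from sklearn.model_selection import GroupKFold
+
+# Hypothetical toy data: 3 original images, 16 tiles each
+tiles = np.zeros((48, 224, 224, 3))
+labels = np.repeat(['Normal', 'Stress', 'Normal'], 16)
+groups = np.repeat(['A', 'B', 'C'], 16)        # original-image id of each tile
+
+gkf = GroupKFold(n_splits=3)
+for train_idx, valid_idx in gkf.split(tiles, labels, groups):
+    print(sorted(set(groups[valid_idx])))      # each fold holds out whole images
+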

+",25688,,25688,,12/24/2019 15:36,12/24/2019 15:36,How can I split the data into training and validation sets such that entries with a certain value are kept together?,,0,4,,,,CC BY-SA 4.0 +17227,1,18112,,12/23/2019 20:12,,6,163,"

SGs are a generalization of MDPs to multiple agents. Like this previous question on MDPs, are there any interesting examples of zero-sum, discrete SGs—preferably with small state and action spaces? I'm hoping to use such examples as benchmarks, but couldn't find much in the literature. One example I can think of is a pursuit-evasion game on a graph.

+",3373,,3373,,12/26/2019 6:18,2/19/2020 3:21,Interesting examples of discrete stochastic games,,1,0,,,,CC BY-SA 4.0 +17228,1,,,12/23/2019 22:52,,4,1201,"

I understood that we normalize the input features in order to bring them onto the same scale so that the weights won't be learned in an arbitrary fashion and training would be faster.

+ +

Then I studied about batch-normalization and observed that we can do the normalization for outputs of the hidden layers in following way:

+ +

Step 1: normalize the output of the hidden layer in order to have zero mean and unit variance a.k.a. standard normal (i.e. subtract the mean and divide by the std dev of that minibatch).

+ +

Step 2: rescale this normalized vector to a new vector with new distribution having $\beta$ mean and $\gamma$ standard deviation, where both $\beta$ and $\gamma$ are trainable.

+ +

I did not understand the purpose of the second step. Why can't we just do the first step, make the vector standard normal, and then move forward? Why do we need to rescale the input of each hidden neuron to an arbitrary distribution which is learned (through beta and gamma parameters)?

+",32269,,,,,10/26/2022 21:36,How does a batch normalization layer work?,,3,0,,,,CC BY-SA 4.0 +17231,1,,,12/24/2019 8:11,,5,1038,"

I am new to self-supervised learning and it all seems a little magical at the moment.

+

The only way I can get an intuitive understanding is to assume that, for real-world problems, features are still embedded at a per-object level.

+

For example, to detect cats in unseen images, my self-supervised network would still have to be composed exclusively of cats.

+

So, if I had 100 images of cats and 100 images of dogs, then I thought self-supervised approaches would learn the features of the images. For example, if an image is rotated 90 degrees, it learns what was in the image that was rotated 90 degrees. However, if I wanted to classify just cats using this representation, then I wouldn't be able to do so without separating out what makes a cat a cat and a dog a dog.

+

Is my assumption correct?

+",32275,,2444,,11/20/2020 2:49,11/20/2020 2:49,How to understand the concept of self-supervised learning in AI?,,2,1,,,,CC BY-SA 4.0 +17232,2,,17228,12/24/2019 10:47,,1,,"

Definition and Explanation

+ +

For how Batch Normalization works exactly, I suggest you read the following papers:

+ + + +

The recent interpretation of how BN works is that it can reduce the high-order effects, as mentioned in Ian Goodfellow's lecture. So it's not really about reducing the internal covariate shift.

+ +

Intuition

+ +

For how it works intuitively, you can think that we want to normalize the intermediate outputs (zero mean and unit variance) if the normalization won't remove too much useful information.

+ +

However, normalization may not be suitable for all intermediate outputs. So $\beta$ and $\gamma$ are introduced to provide additional flexibility: if normalization removes too much useful information, then $\beta$ and $\gamma$ will learn to become the original mean and variance, making the BN layer an identity transformation, as if it didn't exist.

+ +

In practice, $\beta$ and $\gamma$ won't become the original mean and variance, since all intermediate outputs can be normalized in some certain way without losing too much useful information. So you can think of it to be a customized normalization for each BN layer.
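
+ +

As a rough numerical sketch of the transformation for a single feature (numpy, simplified; a real BN layer also keeps running statistics for inference), note how choosing $\gamma$ and $\beta$ equal to the original std and mean essentially undoes the normalization:

+ +
import numpy as np
+
+def batch_norm_forward(x, gamma, beta, eps=1e-5):
+    # x: activations of one feature over the mini-batch; gamma/beta: learned scale and shift
+    x_hat = (x - x.mean()) / np.sqrt(x.var() + eps)   # Step 1: standardize
+    return gamma * x_hat + beta                       # Step 2: rescale and shift
+
+x = np.array([2.0, 4.0, 6.0, 8.0])
+print(batch_norm_forward(x, gamma=1.0, beta=0.0))            # plain normalization
+print(batch_norm_forward(x, gamma=x.std(), beta=x.mean()))   # approximately recovers x
+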

+ +

tl;dr

+ +

The BN layer normalizes the intermediate outputs by default; however, if the neural network finds out that these intermediate outputs should not be normalized, it undoes the normalization or provides more flexibility to it.

+",32173,,,,,12/24/2019 10:47,,,,3,,,,CC BY-SA 4.0 +17233,2,,17231,12/24/2019 11:00,,4,,"

I don't think your interpretation is correct. Take images as an example.

+ +
    +
  • Supervised Learning

    + +

    e.g. classification (maybe use CNN with a L2 loss function)

    + +

    Assume you have many images with different labels. You wish to find a function to approximate the function $y=f(x)$ given a lot of $(\hat x, \hat y)$ sample pairs.

  • +
  • Unsupervised Learning

    + +

    e.g. clustering (maybe use k-means)

    + +

    Assume you have many images, but we don't have the labels or we just want to see if there's a way to categorize them into different categories. So we cluster the images by some characteristic that isn't pre-defined.

  • +
  • Self-Supervised Learning

    + +

    e.g. super resolution (maybe use CNN with a L2 loss function)

    + +

    You have many high resolution images without labels, but, your target is to train a model to up sample a low resolution image. So you can have the high resolution images as target, and down size the image to be the input, and try to train the image pairs. So the target is not some manually tagged labels, but generated directly from the data.

  • +
+",32173,,,,,12/24/2019 11:00,,,,1,,,,CC BY-SA 4.0 +17234,1,,,12/24/2019 12:25,,1,68,"

This is more of a general question of how to model/preprocess 'visual' state-observations to an Agent in Reinforcement Learning that I'll illustrate with an example.

+ +

Say you have a reinforcement learning problem where the agent has to draw pixels in an n * n 2D state-matrix of 0's and 1's. Say n = 100. The agent can move one step (up, down, left, right) and on its location can additionally switch 0's into 1's or the other way around.

+ +

Each step, it needs to take action so that the state-matrix resembles an n * n target-matrix (that has a certain shape). It is rewarded accordingly each step.

+ +

The agent will know its location from an x and y position that are given in addition to the state- and target-matrix each step.

+ +

Now I'm curious to the question what the best way is to represent the state to the agent. Using a visual 'prior', or not. Here's two ways:

+ +
    +
  1. Based on that you want to give only the essential information to the agent: The agent is presented with a matrix (with target subtracted from state), that will be flattened into one array of n^2. Additionally it'll know its current location as an additional (x, y) vector observation.

  2. +
  3. Based on that (1) would be more difficult to solve for a human, because you'll have to learn from a flattened array how different points are connected (think about how hard a flattened game of chess would be), you can also use a convolutional neural network to encode the current scene. In this case the agent will be e.g. a red dot. Given that it's such a visual task, it seems to me that using this would give the agent a better model of how the environment works, since the spatial relations are kept intact. Also it feels that keeping the 2D shape intact with a CNN would mean that it'd form better representations that generalize to other shapes, but I can't really say why.

  4. +
+ +

On the other hand one could say that it's arrogant to assume that our 'human' spatial way of interpreting visual information is the best way for this case. Maybe there's a mathematical solution?

+ +

Any ideas?

+",31180,,,,,12/24/2019 12:25,Flattened vector observation or convolutional neural network input?,,0,0,,,,CC BY-SA 4.0 +17235,1,17237,,12/24/2019 12:56,,1,138,"

I am training a simple convolutional neural network to recognize two types of 1024-point frequency spectra (FFT). This is the model I'm using:

+ +
cnn = Sequential()
+cnn.add(Conv1D(filters=64, kernel_size=3, activation=LeakyReLU(), input_shape=(nInput,1)))
+cnn.add(Conv1D(filters=64, kernel_size=3, activation=LeakyReLU()))
+cnn.add(MaxPooling1D(pool_size=2))
+cnn.add(Flatten())
+cnn.add(Dense(nFinalDense, activation=LeakyReLU()))
+cnn.add(Dense(nOutput, activation='sigmoid'))
+
+ +

However I get the following accuracy and loss during training: +

+ +

Why do I get the large peak in both plots? How can it be explained? Is there a problem with the data I'm using (I mention that I obtain a similar peak when training an autoencoder for denoising using the same data)?

+",32237,,,,,12/24/2019 18:49,How to explain peak in training history of a convolutional neural network?,,1,2,,,,CC BY-SA 4.0 +17237,2,,17235,12/24/2019 18:49,,1,,"

I found that the peak was caused by the data I am using. Specifically, the MinMaxScaler changed the data shape, and I resolved the issue by simply dividing by the max value.

+",32237,,,,,12/24/2019 18:49,,,,0,,,,CC BY-SA 4.0 +17238,1,,,12/24/2019 19:26,,4,59,"

Suppose we're training two agents to play an asymmetric game from scratch using self-play (like Zerg vs. Protoss in Starcraft). During training, one of the agents can become stronger (discover a good broad strategy, for example) and start winning most of the time, which causes a big portion of the state values (or Q(s,a) values) to become very high for this agent and low for the other, just because the first is generally stronger and receives most of the rewards. Some training time later, the other one finds a weakness in the first's play (in many states too) and starts dominating, and the reward stream shifts the other way.

+ +

The problem is, we have to retrain the function approximator (a deep neural net) to wildly different value/Q states, which slows down and destabilizes learning. For each of the agents, this is similar to a highly non-stationary environment (the opponent), which can be harsh or easy at times.

+ +

What do people usually do in such a case? I think what is needed is some kind of slowly changing baseline (similar to advantage in A2C), but applied to the reward values themselves.

+",32286,,,,,12/24/2019 19:26,How to deal with nonstationary rewards in asymmetric self-play reinforcement learning?,,0,0,,,,CC BY-SA 4.0 +17239,1,,,12/24/2019 20:55,,1,28,"

I have 10000 tuples of numbers (x1, x2, y) generated from the equation: y = np.cos(0.583 * x1) + np.exp(0.112 * x2). I want to use a neural network, trained with gradient descent, in PyTorch, to find the 2 parameters, i.e. 0.583 and 0.112

+ +

Here is my code:

+ +
class NN_test(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.a = torch.nn.Parameter(torch.tensor(0.7))
+        self.b = torch.nn.Parameter(torch.tensor(0.02))
+
+    def forward(self, x):
+        y = torch.cos(self.a*x[:,0])+torch.exp(self.b*x[:,1])
+        return y
+
+model = NN_test().cuda()
+
+lrs = 1e-4
+optimizer = optim.SGD(model.parameters(), lr = lrs)
+loss = nn.MSELoss()
+
+epochs = 30
+for epoch in range(epochs):
+    model.train()
+    for i, dtt in enumerate(my_dataloader):
+        optimizer.zero_grad()
+
+        inp = dtt[0].float().cuda()
+        output = dtt[1].float().cuda()
+
+        ls = loss(model(inp),output)
+
+        ls.backward()
+        optimizer.step()
+    if epoch%1==0:
+        print(""Epoch: "" + str(epoch), ""Loss Training: "" + str(ls.data.cpu().numpy()))
+
+ +

where x contains the 2 numbers x1 and x2. In theory, it should work easily, but the loss doesn't go down. What am I doing wrong?

+",22839,,2444,,12/25/2019 3:40,12/25/2019 3:40,How can I train a neural network to find the hyper-parameters with which the data was generated?,,0,0,,,,CC BY-SA 4.0 +17240,1,17301,,12/25/2019 3:54,,4,146,"

I know that autoencoders are one type of deep neural networks that can learn the latent representation of data. I guess there should be several other models like autoencoders.

+ +

What are some new deep learning models for learning latent representation of data?

+",32290,,2444,,12/26/2019 15:20,1/7/2020 4:12,What are some new deep learning models for learning latent representation of data?,,1,0,,,,CC BY-SA 4.0 +17241,1,,,12/25/2019 8:26,,1,47,"

I have a video dataset as follows.

+ +

Dataset size: 1k videos

+ +

Frames per video: 4k (average) and 8k (maximum)

+ +

Labels: Each video has one label.

+ +

So the size of my input will be (N, 8000, 64, 64, 3), where 64 is the height and width of the video frames. I use Keras. I am not really sure how to do end-to-end training with this kind of dataset. I was thinking of dividing each input into blocks of frames (N, 80, 100, 64, 64, 3) for training, but it still won't work for end-to-end network training.
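
+ +

Something like the following sketch is what I had in mind for the block idea (numpy; a block length of 100 is just an example):

+ +
import numpy as np
+
+def frame_blocks(video, block_len=100):
+    # Split one video of shape (frames, 64, 64, 3) into fixed-length blocks, zero-padding the tail.
+    n_frames = video.shape[0]
+    n_blocks = int(np.ceil(n_frames / block_len))
+    padded = np.zeros((n_blocks * block_len,) + video.shape[1:], dtype=video.dtype)
+    padded[:n_frames] = video
+    return padded.reshape((n_blocks, block_len) + video.shape[1:])
+
+video = np.random.rand(4000, 64, 64, 3).astype(np.float32)
+print(frame_blocks(video).shape)   # (40, 100, 64, 64, 3)
+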

+ +

I am not in favor of dropping the frames. That might be my last choice.

+ +

Any help will be appreciated. Thanks in advance.

+",32297,,32297,,12/26/2019 2:48,12/26/2019 2:48,How to handle a high dimensional video (large number of frames per video) data for training a video classification network,,0,0,,,,CC BY-SA 4.0 +17242,1,,,12/25/2019 9:56,,6,157,"

I'm quite new to machine learning (I followed the Coursera course of Andrew Ng and now starting deeplearning.ai courses).

+ +

I want to classify human actions real-time like:

+ +
    +
  • Left-arm bended
  • +
  • Arm above shoulder
  • +
  • ...
  • +
+ +

I first did some research for pre-trained models, but I didn't find any. Because I'm still quite new, I want advice about how I should solve this.

+ +
    +
  1. I thought maybe I need to create for every action enough pictures and from there on I can do image classification.

  2. +
  3. Or I use PoseNet from TensorFlow so that I have the pose estimation points. And from there on I create videos of a couple of seconds with every pose I want to track and I save the estimation points. From there on, I use a classification algorithm (neural network) to classify those points.

  4. +
+ +

What is the most efficient option or are they both bad and is there a better way to do this?

+",32300,,2444,,12/26/2019 15:19,5/10/2023 19:01,How to classify human actions?,,1,0,,,,CC BY-SA 4.0 +17243,1,,,12/25/2019 12:16,,3,76,"

In neural Turing machine (NTM), reading memory is represented as

+ +

\begin{align} +r_t \leftarrow \sum\limits_i^R w_t(i) \mathcal{M}_t(i) \tag{2} +\end{align}

+ +

and writing to memory is represented as

+ +

Step1: Erase

+ +

\begin{align} +\mathcal{M}_t^{erased}(i) \leftarrow \mathcal{M}_{t-1}(i)[\mathbf{1} - w_t(i) e_t ] \tag{3} +\end{align}

+ +

Step2: Add

+ +

\begin{align} +\mathcal{M}_t(i) \leftarrow \mathcal{M}_t^{erased}(i) + w_t(i) a_t \tag{4} +\end{align}

+ +

In the reading mechanism, if we take these example values and apply them to the above formula, instead of a vector, we get a scalar of value 2.

+ +
M_t =[[1,0,1,0],
+      [0,1,0,0],
+      [1,1,1,0]]
+
+w_t = [1,1,1]
+
+ +

The same thing happens in writing as well; here we take the dot product of two vectors, $w_t(i) e_t$, with a scalar value as output. According to paper, unless $w_t$ or $e_t$ are zeros, it will erase all values in the memory matrix.

+ +

My own idea about NTM memory was that it uses the weights to find the indices or rows inside the memory matrix corresponding to a certain task.

+ +

How does the memory in NTM work?

+ +

How is the memory for a particular task stored? That is, is it stored row-wise or in the whole matrix?

+",39,,39,,12/27/2019 12:13,12/27/2019 12:13,How does the memory mechanism (reading and writing) work in a neural Turing machine?,,0,0,0,,,CC BY-SA 4.0 +17244,1,,,12/25/2019 14:50,,2,41,"

In this paper, YOLO has three features compared to YOLO v1. This question is about Better and Faster.

+ +

In the Better section, there are many techniques, such as Batch Norm, Anchor Boxes and so on. In the Faster section, there is only Darknet. Darknet has 19 conv layers, but it doesn't use Layer Norm or a passthrough layer. So, I think that Darknet doesn't use the Better section techniques.

+ +

Is the Better section model different from the Faster section model? In my understanding, there are three models named YOLO v2: the first is Better YOLO v2, the second is Faster YOLO v2, and the third is Stronger YOLO v2. Is this right?

+",32303,,2444,,12/26/2019 5:39,12/26/2019 5:39,YOLO 9000 about Better Stronger,,0,0,,,,CC BY-SA 4.0 +17245,1,,,12/25/2019 16:39,,1,49,"

I have n-tuple based tic tac toe. I already have a perfect minimax player and a perfectly trained table-based player. My n-tuple network consists of the 8 different rows of 3 of the board as triplets, each cell being possibly empty, X or O, plus one bit defining whose move it is now, so in total 2 * 3^3 = 54 states per tuple. I train and update the weights with the idea of the pseudo code from ""Learning to Play Othello with N-Tuple Systems"" by Simon Lucas:

+ +
public void inGameUpdate(double[] prev, double[] next) {
+  double op = tanh(net.forward(prev));
+  double tg = tanh(net.forward(next));
+  double delta = alpha * (tg - op) * (1 - op * op);
+  net.updateWeights(prev, delta);
+}
+
+public void terminalUpdate(double[] prev, double tg) {
+  double op = tanh(net.forward(prev));
+  double delta = alpha * (tg - op) * (1 - op * op);
+  net.updateWeights(prev, delta);
+}
+
+ +

And the score is the sum of the weights of those rows of 3. The temporal difference training generally works for n-tuple based tic tac toe, and after several thousand games it mostly plays perfectly. But after a while it diverges from perfection and oscillates between perfect and near perfect. I realized it was in situations like this:

+ +
OXO
+X-O
+-XX
+
+ +

I suspect this is because a row that prevents the opponent from winning has a big value, and having two of such rows seems to be better than losing later.

+ +

I know I can have a perfect player based on this particular n-tuple network. I could just stop training after I reach perfection, but in bigger games I can't do that. I fiddled with different alphas in the range 0.1-0.0001 and e-greedy epsilons of 1%-50%, or adaptive. Increasing epsilon to about 50% somewhat mitigates this effect, but this value is usually too big to use in other games.

+ +

Here are a couple of questions:

+ +
    +
  1. Does this effect have a name in the machine learning world? It values preventing opponent from winning. But if opponent has more opportunities to win, its value will be bigger, so that it will exceed the (negative of) losing value.

  2. +
  3. Aside from probably using different n-tuple networks and tweaking hyper-parameters, what can I do to mitigate or eliminate this effect?

  4. +
  5. In bigger games, this learning and n-tuple system gives fairly good results, but I see big oscillations after certain points. I.e., in the breakthrough game against 1-ply minimax, after it reaches about a 60% winrate (testing 10000 games after training every 10000 games against itself), its winrate goes slightly up, but in testing its winrate oscillates between 45-65%. Can this effect be caused by the problem I mentioned in 1.?

  6. +
+",16663,,,,,12/25/2019 16:39,N-tuple based tic tac toe diverges in temporal difference learning,,0,0,,,,CC BY-SA 4.0 +17246,2,,17216,12/25/2019 19:28,,4,,"

TL;DR Here is a beautiful explanation with diagrams: source.

+

To address:

+
+

the cell state is essentially long term memory embedding (correct me if I'm wrong)

+
+

The embedding can be long or short term and it is a vector.

+

To answer:

+
+

Why is the previous hidden state, current input and the bias put into a sigmoid function? Is there some special characteristic of a sigmoid that creates a vector of important embeddings?

+
+

Excerpt from source:

+
+

The sigmoid layer outputs numbers between zero and one, describing how much of each component should be let through. A value of zero means “let nothing through,” while a value of one means “let everything through!”

+
+

Formally the forget vector is:

+

$$f_t = \sigma(W_f\cdot[h_{t-1},x_t]+b_f)$$

+

So we see that it is actually a linear operation followed by a non-linear operation, which restricts the values to be between 0 and 1. This is followed by an element-wise operation on the previous cell state. That is we "gate"/"filter" the previous cell state doing:

+

$$C_{t-1}\odot f_t,$$

+

where $\odot$ is element-wise multiplication.

+

Thus we see that the forget gate really is acting like a gate: it either lets values pass through or it pushes them toward zero. So, its purpose is to decide what values to forget/remember.
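
+

As a small numerical sketch of just this gate (numpy, with random weights standing in for learned ones):

+
import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+hidden_size, input_size = 4, 3
+rng = np.random.default_rng(0)
+W_f = rng.normal(size=(hidden_size, hidden_size + input_size))  # stands in for learned weights
+b_f = np.zeros(hidden_size)
+
+h_prev = rng.normal(size=hidden_size)   # previous hidden state
+x_t = rng.normal(size=input_size)       # current input
+C_prev = rng.normal(size=hidden_size)   # previous cell state
+
+f_t = sigmoid(W_f @ np.concatenate([h_prev, x_t]) + b_f)  # forget gate, entries in (0, 1)
+C_gated = C_prev * f_t                                    # element-wise forgetting
+print(f_t)
+print(C_gated)
+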

+

To answer:

+
+

How does a concatenation of the hidden state of the previous input and the current input with the bias help with what to forget?

+
+

This concatenation takes into account the "rolling" hidden state and the current input. That is, the linear operation results in a new vector which can be seen as a "consideration embedding" of the current input and a "summary" of past inputs and cell states. The sigmoid then converts this "consideration embedding" into the forget vector.

+

In summary, the forget gate has the primary purpose of forming a vector of values between zero and one. This vector results by considering the current input and the previous hidden state. The vector is used to forget/remember parts of the previous cell state via element-wise multiplication. So, no, it is not a black box, it is a highly considerate non-linear operation through time.

+

Additional Intuition - More Mathematical

+

Recall that multiplying a matrix on the right by a column vector produces a vector, which is a linear combination of the columns of the matrix. So each column of $W_f$ can be seen as a column of "toggles" that push up to 1 or pull down to 0 (once we apply sigmoid). That is, the columns of $W_f$ form a sort of "basis" for a "toggle space."

+

When we concatenate to form $[h_{t-1},x_t]$ and then multiply we see that the term $W_f\cdot[h_{t-1},x_t]$ gives consideration to the hidden state and the input. That is, both $h_{t-1}$ and $x_t$ have their hand in toggling the gate.

+

The LSTM has learned from the data how to make this toggling "meaningful." Below is a simple pictographic example.

+

+

That is, $W_f\cdot[h_{t-1},x_t]$ will be a vector in the column space of $W_f$ (ie the "toggle space"). Thus, the concatenated vector will "point" to the best "toggle" in the column ("toggle") space of $W_f$. This "toggle" is then transformed into a "gate" in the "gate space."

+

So intuitively, the range of the forget component is a sort of "gate space" where the domain is all vectors of form $[h_{t-1},x_t]$. Formally:

+

$$f_t:[h_{t-1},x_t]\mapsto g_t$$

+

That is to say that the forget gate learns the best mapping from the domain of concatenated hidden/input's to the range of possible gates.

+

Important Note

+

The LSTM is only one of many types of forget-gate mappings. The main takeaway is that the LSTM works empirically for many applications.

+

That is a forget gate is an instance of a general "gate mapping":

+

$$f_t:v_t \mapsto g_t,$$

+

where $v_t$ could be any number of vectors resulting from any number of concatenations.

+

Conclusion

+

Some data from the past is irrelevant to the current time step. We need a way to correctly "forget" irrelevant information. One way to "forget" is to use a forget gate mapping (like the one in LSTMs). Then we need to optimize the parameters of that forget gate. One implementation is used in the common LSTM architecture. Finally, the concatenation of the previous hidden state provides contextual information to the current input which is very helpful with forgetting.

+

More Mathematical Intuition

+

Here is another view of what is happening in the LSTM from a dynamic point of view: video.

+",28343,,28343,,2/6/2022 21:21,2/6/2022 21:21,,,,1,,,,CC BY-SA 4.0 +17248,1,,,12/26/2019 10:17,,2,622,"

In Decision Tree Regression, we can use 'Reduction in Variance' or MSE (Mean Squared Errors) as splitting methods. There are methods like Gini Index, Information Gain, Chi-Square for splitting on classification trees. Now, I read somewhere that we cannot use Information gain (with impurity function as entropy) as a splitting method for regression trees. Why is it so, and what other methods are there which we can and cannot use, and why?

+ +

EDITS:

+ +

Please suggest me a reference to understand maths behind it.

+ +

The references I used are :

+ +

https://www.analyticsvidhya.com/blog/2016/04/complete-tutorial-tree-based-modeling-scratch-in-python/

+ +

https://www.python-course.eu/Regression_Trees.php

+ +

https://towardsdatascience.com/https-medium-com-lorrli-classification-and-regression-analysis-with-decision-trees-c43cdbc58054

+ +

In the first article, it is mentioned that:

+ +
+

Gini Index, Chi-Square and Information gain (impurity function as entropy) algorithms are used for Classification trees while Reduction in Variance is used for Regression Trees.

+
+ +

In the second article, it is mentioned that:

+ +
+

Since our target feature is continuously scaled, the IGs of the categorically scaled descriptive features are no longer appropriate splitting criteria.

+ +

As stated above, the task during growing a Regression Tree is in principle the same as during the creation of Classification Trees. Though, since the IG turned out to be no longer an appropriate splitting criteria (neither is the Gini Index) due to the continuous character of the target feature we must have a new splitting criteria. + Therefore we use the variance which we will introduce now.

+
+ +

In the third article, it is mentioned that:

+ +
+

""Entropy as a measure of impurity is a useful criterion for classification. To use a decision tree for regression, however, we need an impurity metric that is suitable for continuous variables, so we define the impurity measure using the weighted mean squared error (MSE) of the children nodes instead""

+
+ +

Thank You!

+",32247,,2444,,1/9/2020 0:02,1/9/2020 0:02,Why information gain with entropy as impurity function can't be used as a splitting method for Decision Tree Regression?,,0,1,,,,CC BY-SA 4.0 +17249,1,,,12/26/2019 11:04,,1,866,"

Objective: to find the nearest object (the object at the closest distance) in a single camera image. However, the image contains multiple objects, as shown below:

+ +

+ +

I searched in the net and found this formula to calculate the distance of object from camera

+ +

F = (P x D) / W. For further detail, click on the link.

+ +

Is there any other, better approach to find the nearest object in an image?

+ +

Thanks in Advance!!!

+",9863,,,,,12/26/2019 11:04,Find the nearest object in a image which is captured from camera?,,0,2,,,,CC BY-SA 4.0 +17250,2,,17200,12/26/2019 11:24,,2,,"

If I had to implement a path exploration/finding algorithm on a robot, I would follow these steps:

+ +
    +
  1. Make sure you can detect your position. You need to be able to record your position otherwise you have no reference for the exploration. You don't need a global positioning system (like GPS), a local one is more than enough in your case. This means that the robot must know that it has been switched on in position (0,0). If you go straight for 5 meters, you'll update it to (5,0) or something like that.
    +Knowing how much you moved and towards where is the difficult thing.

  2. +
  3. Once you can know where you are, it is time to record it. As you want to explore the environment, you might want to create a tree with states on the nodes. The node can be open if you can still explore around it, closed if the exploration has been done.
    +To get the path from a node to another, A* works more than enough.

  4. +
  5. To know what is around you, you can use the sensors to explore the surroundings and know the position of the obstacles.

  6. +
+ +

This is the general idea:

+ +
Tree tree = new Tree()
+Node first = new Node(Here)
+tree.Add(first)
+
+Node current = first
+
+while(true) {
+    ExploreAround(current)
+    var nodes = CreateSurroundingNodes()
+    tree.Add(nodes)
+    current.State = closed
+    var next = PickNearestOpenNode()
+    MoveTo(next)
+    WaitUntilRobotIsOn(next)
+    current = next
+}
+
+ +

In my bachelor thesis I was implementing a very similar algorithm on a drone on Unity3D. You can find the package here. You can get an idea out of it.

+ +

Here is a video of how it worked: https://youtu.be/Xrh9-4Bfcew

+ +

+",15530,,,,,12/26/2019 11:24,,,,1,,,,CC BY-SA 4.0 +17251,1,,,12/26/2019 11:29,,1,31,"

I want to create a small dataset (about 10 classes and 20-30 images each). Should I add some noise (wrongly labelled samples) to the training, validation and test datasets, and why?

+",32320,,2444,,12/26/2019 13:10,12/26/2019 13:10,Should I add some noise when the dataset is small?,,0,2,,,,CC BY-SA 4.0 +17253,1,,,12/26/2019 15:16,,2,191,"

This is my first variational autoencoder. Background info: I am using the MNIST digits dataset. The model is created and trained in PyTorch. The model is able to get a reasonably low loss, but the images that it generates are just random noise. Here are my script and the images that were generated by the model: https://github.com/jweir136/PyTorch-Variational-Autoencoder-Latent-Space-Visualization.

+ +

Any advice or answers on how to solve this problem are greatly appreciated.

+",32323,,2444,,12/26/2019 15:22,12/26/2019 15:22,Why is my variational auto-encoder generating random noise?,,0,1,,,,CC BY-SA 4.0 +17254,2,,9692,12/26/2019 16:32,,2,,"

I believe that the idea is to have a ratio of fraud to ""normal transaction"" similar to the one that banks encounter in real life.

+ +

If you balance it, you will probably have a lot of false positives once you apply your solution to real-world data and, while that may be fine for you to play with, it's not what a bank would like, as they can't block too many of the ""normal"" transactions or the client will change banks. According to this post (https://www.quora.com/How-many-transactions-do-typical-banks-process-everyday), Mastercard alone represents 3.4 billion transactions/day; just imagine if 1% of the daily transactions were blocked every day, it would represent 34 million transactions blocked without any valid reason.

+ +

It's different from a lot of classification problems where you want to have a balanced dataset; here you try to detect anomalies and, by definition, they are rare, so they should be just as rare in your dataset.

+",26961,,,,,12/26/2019 16:32,,,,0,,,,CC BY-SA 4.0 +17255,1,17257,,12/26/2019 19:18,,1,1356,"

Okay so here's my CNN (simple example from a tutorial) along with some arithmetic to get the total number of free parameters.

+ +

We've got a dataset of 28*28 grayscale images (MNIST).

+ +
    +
  1. First layer is a 2D convolution using 32 3x3 kernels. Dimensionality of the output is 26x26x32 (kernel stride length was 1 and we have 32 feature maps of 26x26). Running parameter count: 288
  2. +
  3. Second layer is 2x2 MaxPool with a 2x2. Dimensionality of the output is 13x13x32 but then we flatten so we got a vector of length 5408. No extra parameters here.
  4. +
  5. Third layer is Dense. A 5408x100 matrix. Dimensionality of the output is 100. Running Parameter count: 540988
  6. +
  7. Fourth layer is Dense also. A 100x10 matrix. Dimensionality of the output is 10. Running Parameter count: 541988
  8. +
+ +

Then we're supposed to do stochastic gradient descent on a 541988 parameter space!

+ +

That feels like a ridiculously big number to me. And this is meant to be the hello world problem of CNNs. Am I missing something fundamental in my understanding of how this is meant to work? Or maybe the number is correct but it's not actually a big deal for a computer to crunch?

+ +

In case it helps. Here is how the model was built in Keras:

+ +
def define_model():
+    model = Sequential()
+    model.add(Conv2D(32, (3,3), activation = 'relu', kernel_initializer = 'he_uniform', input_shape=(28,28,1)))
+    model.add(MaxPooling2D((2,2)))
+    model.add(Flatten())
+    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
+    model.add(Dense(10, activation='softmax'))
+    opt = SGD(lr=0.01, momentum=0.9)
+    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
+    return model
+
+",16871,,16871,,12/26/2019 19:26,12/26/2019 20:32,How many parameters are being optimised over in a simple CNN?,,1,4,,,,CC BY-SA 4.0 +17256,1,17355,,12/26/2019 19:27,,2,744,"

A single neuron with 2 weights and identity activation can learn addition/subtraction as the 2 weights will converge to 1 and 1 (addition), or 1 and -1 (subtraction).

+ +

However, for multiplication and division, it's not that easy. Can a single neuron learn multiplication or division? If not, how many layers does a DNN need to learn these?

+",2844,,,,,1/4/2020 20:15,How to make DNN learn multiplication/division?,,1,0,,,,CC BY-SA 4.0 +17257,2,,17255,12/26/2019 20:27,,2,,"

Neural networks can have a lot of different structures. CNNs can have a number of parameters ranging from a few thousand to several million.

+ +

In general you aim to increase the number of filters and reduce the first 2 dimensions, as you go deeper in the network.

+ +

So if you had Conv -> pool -> Conv -> pool -> ..., you could, for example, use a first conv with kernel size = 5 and 8 filters and a second conv with kernel size = 5 and 16 filters, with both pools being (2,2). But this is just an example.

+ +

In your network you start with a 28*28 image and use 32 3*3 filters, so the number of parameters is (3*3 + 1) * 32 = 320.

+ +

In the dense layer you have a 13*13*32 = 5408 input and use a 100-unit FC layer, so the number of parameters is (13*13*32 + 1)*100, which is 540900.

+ +

Then you get the (100+1) * 10 FC layer, which is 1010 more.

+ +

total = 320 + 540900 + 1010, which is 542230, as expected.

+ +

The +1 that shows up in every layer is the bias neuron. Basically, you add a bias per output in a connection between 2 layers. In the FC100 to FC10 connection it is easy to understand: you have a bias per output neuron. In the convolutional layers you have a bias term per filter applied, so for each filter you have the filter's weights plus 1 for the bias.

+ +

Apart from that, you also had a small math mistake when adding the 288 at the beginning: you effectively added only 188 (540800 + 188 = 540988). So your missing 242 parameters were: 32 biases from the conv layer, 100 from the first dense, 10 from the second dense, and 100 from the addition mistake. 32 + 100 + 10 + 100 = 242.
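
+ +

As a quick sanity check (a minimal sketch, assuming the same layers as in your snippet), you can also let Keras count the parameters for you:

+ +
from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
+
+model = Sequential([
+    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # (3*3*1 + 1) * 32 = 320
+    MaxPooling2D((2, 2)),                                            # no parameters
+    Flatten(),                                                       # no parameters
+    Dense(100, activation='relu'),                                   # (5408 + 1) * 100 = 540900
+    Dense(10, activation='softmax'),                                 # (100 + 1) * 10 = 1010
+])
+model.summary()  # total: 320 + 540900 + 1010 = 542230 parameters
+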

+",24054,,24054,,12/26/2019 20:32,12/26/2019 20:32,,,,0,,,,CC BY-SA 4.0 +17258,2,,17196,12/26/2019 21:45,,1,,"

Task Specification

+ +

It's been proposed that novelty search may circumvent this problem. See: Abandoning Objectives: Evolution Through +the Search for Novelty Alone. In this model, the agent has no goal or objective, but just messes around with the data to see what results. (This could be regarded as finding/forming patterns. Here's a recent popular article on the subject: Computers Evolve a New Path Toward Human Intelligence).

+ +

A form of procedural generation may also be useful, specifically the capability of creating novel models/environments and processes/algorithms to analyze them. (See: AI-GAs: AI-generating algorithms).

+ +

In terms of programmers communicating a task to the AGI, that's a natural language problem if the task relates to mundane human activity or art and craft, and a math problem if the subject is physics. (In the former case, humans are describing the problem in natural language, in the latter, presumably feeding all of the data that suggests dark matter into the algorithm. Natural language is challenging for computers, but math, along with logic, is one of their two core functions.)

+ +

Re: dark matter, it may be a matter of asking the algorithm to find patterns in the data, and build models based on the data. The patterns and models would be the output, which humans could then consider. The output would be mathematical.

+ +

(Converting that mathematical output into metaphors, as in common on science programs like Nova and Cosmos, would be another goal of AGI.)

+ +

Brain in a Box

+ +

There needs to be stimulus/input to initiate the ""thought"" process/computation. In a brain in a box, the brain provides its own internal stimulus. I'd argue that an RL algorithm engaged in self-play is not dependent on external stimulus but on internally generated inputs, so the process of model-based reinforcement learning is often a brain in a box, considering a subject or problem.

+",1671,,1671,,12/26/2019 22:09,12/26/2019 22:09,,,,0,,,,CC BY-SA 4.0 +17259,1,,,12/27/2019 2:14,,2,20,"

I have been using a network to generate graphs. The architecture that I have been using is the following:

+ +

+ +

In this figure, $D_1$ is the signal generator and $D_2$ is the graph topology generator, which outputs a square, symmetric matrix indicating which node is connected to which. In this network, $l$ denotes linear layers and $a$ denotes activation functions; here we are using the leaky ReLU activation function.

+ +

The problem that I am experiencing is that, after training the network, my output is only a chain of nodes, meaning that only the subdiagonal and superdiagonal elements have non-zero values, and it is very rare to get other forms of graphs. I was wondering if anyone has a suggestion for improving the output. Note that my training data is diverse and contains every kind of graph.

+",31990,,,,,12/27/2019 2:14,Improving graph decoder network,,0,0,0,,,CC BY-SA 4.0 +17260,2,,17195,12/27/2019 6:34,,0,,"

You can do custom POS tagging and use the POS tags as additional input features in a multi-featured sequence-to-sequence model.

+",30690,,,,,12/27/2019 6:34,,,,1,,,,CC BY-SA 4.0 +17261,1,,,12/27/2019 6:42,,2,24,"

I am currently writing my thesis about human pose estimation and wanted to use Google's inception network, modify it for my needs and use transfer learning to detect human key joints. I wanted to ask if that could be done in that way?

+ +

Assuming I have n keypoints, the idea is to generate n feature maps: use transfer learning, cut off the final classification layers and replace them with an FCN that predicts the key joints. I am asking myself if this is possible.

+ +

However, these feature maps should output heatmaps with the highest probability as well. Is this assumption valid?

+",32337,,,,,12/27/2019 6:42,Can an image recognition model used for human pose estimation?,,0,0,,,,CC BY-SA 4.0 +17263,2,,16714,12/27/2019 8:20,,0,,"

Look at open-source projects like Arxiv Sanity by Andrej Karpathy, which finds ML-related papers from arxiv.org. You can find similar open-source applications.

+",27875,,,,,12/27/2019 8:20,,,,0,,,,CC BY-SA 4.0 +17264,1,17279,,12/27/2019 9:14,,1,70,"

I'm using three pre-trained deep learning models to detect vehicles and count from an image data set. The vehicles belong to one of these classes ['car', 'truck', 'motorcycle', 'bus']. So, for a sample I have manually counted number of vehicles in each image. Also, I employed the three deep learning models and obtained the vehicle counts. For example:

+ +
    Actual        | model 1 count| model 2 count  | model 3 count 
+------------------------------------------------------------------
+    4 cars, 1 bus | 2 cars       | 2 cars, 1 truck| 4 cars
+    2 cars        | 0            | 1 truck        | 1 car, 1 bus
+
+ +

In this case, how can I measure accuracy scores such as precision and recall?

+",32343,,2444,,1/6/2022 13:45,1/6/2022 13:45,How to calculate the precision and recall given the predictions and targets in this case?,,1,0,,,,CC BY-SA 4.0 +17265,1,,,12/27/2019 9:24,,1,362,"

Where could I get a dataset for classifying ambulances? I have searched everywhere but couldn't seem to get hold of a set of annotated images of ambulances.

+",32249,,,,,12/27/2019 9:46,Ambulance dataset needed,,1,0,,10/28/2021 15:44,,CC BY-SA 4.0 +17267,2,,17265,12/27/2019 9:46,,1,,"

Answering my own question here.

+ +

Looked at the Open Image Dataset by Google @ https://storage.googleapis.com/openimages/web/index.html

+ +

They provide image-level labels, object bounding boxes, object segmentation masks, and visual relationships.

+",32249,,,,,12/27/2019 9:46,,,,0,,,,CC BY-SA 4.0 +17268,2,,8258,12/27/2019 9:54,,3,,"

Look at Google's Open Image Dataset @ https://storage.googleapis.com/openimages/web/index.html

+ +

They provide image-level labels, object bounding boxes, object segmentation masks, and visual relationships.

+ +

Here is the link for the traffic signs dataset.

+",32249,,,,,12/27/2019 9:54,,,,0,,,,CC BY-SA 4.0 +17270,1,17275,,12/27/2019 10:23,,5,164,"

I'm working on an example of CNN with the MNIST hand-written numbers dataset. Currently I've got convolution -> pool -> dense -> dense, and for the optimiser I'm using Mini-Batch Gradient Descent with a batch size of 32.

+ +

Now this concept of batch normalization is being introduced. We are supposed to take a ""batch"" after or before a layer, and normalize it by subtracting its mean, and dividing by its standard deviation.

+ +

So what is a ""batch""? If I feed a sample into a 32 kernel conv layer, I get 32 feature maps.

+ +
    +
  • Is each feature map a ""batch""?
  • +
  • Are the 32 feature maps the ""batch""?
  • +
+ +

Or, if I'm doing Mini-Batch Gradient Descent with a batch size of 64,

+ +
    +
  • Are 64 sets of 32 feature maps the ""batch""? So in other words, the batch from Mini-Batch Gradient Descent, is the same as the ""batch"" from batch-optimization?
  • +
+ +

Or is a ""batch"" something else that I've missed?

+",16871,,25496,,12/27/2019 16:20,12/27/2019 16:22,"What is a ""batch"" in batch normalization?",,1,0,,,,CC BY-SA 4.0 +17271,1,17277,,12/27/2019 10:41,,3,95,"

How would you describe a machine learning model in a scientific report? It should be detailed, but I have just listed the hyperparameters... Do you know of more important properties to include?

+",32344,,1671,,12/30/2019 19:19,12/30/2019 19:19,How to describe an keras Model in a scientific report,,1,1,,,,CC BY-SA 4.0 +17272,2,,17242,12/27/2019 11:30,,0,,"

My suggestion is to go with the 1st option. The reason is that you will get to know much more about the data; at the initial stage you will face some challenges in developing the model, but over a period of time you will get better results after hyperparameter tuning. Please go through the article (ignore this if you have already read it).

+",32346,,,,,12/27/2019 11:30,,,,3,,,,CC BY-SA 4.0 +17273,1,17276,,12/27/2019 12:14,,8,3910,"

Feature extraction is a concept concerning the translation of raw data into the inputs that a particular machine learning algorithm requires: features derived from the raw data that are actually relevant for tackling the underlying problem. On the other hand, word embeddings are basically distributed representations of text in an n-dimensional space.

+ +

As far as I understand, word embedding is somehow a feature extraction technique. Am I wrong? I had an argument with a friend who believes the two topics are totally separate. Is he right? What are the similarities and dissimilarities between word embeddings and feature extraction?

+",32347,,2444,,12/27/2019 14:47,12/27/2019 20:43,Is word embedding a form of feature extraction?,,2,0,,,,CC BY-SA 4.0 +17274,2,,17273,12/27/2019 16:18,,2,,"

I think you guys are playing on semantics.

+ +

If you consider feature extraction to be an unlearned preprocessing step to get inputs for your model, then no, word embeddings are not a feature extraction technique (examples here would be BoW counts, n-gram features, etc)

+ +

If you consider feature extraction to be any form of conversion from text to a set of features, then yes, word embeddings should be considered a form of feature extraction, given that they are learned in the process (or borrowed from another model's training). Note, though, that if you do include this, you would probably include most pretrained models as a whole as feature extraction techniques (like BERT).

+ +

So the whole conversation you had can go either way depending on the definitions you set.

+",25496,,2444,,12/27/2019 20:43,12/27/2019 20:43,,,,0,,,,CC BY-SA 4.0 +17275,2,,17270,12/27/2019 16:22,,5,,"

The ""batch"" is same as in mini-batch gradient descent. The mean in batch-norm here would be the average of each feature map in your batch (in your case either 32 or 64 depending on which you use)

+ +

Generally, ""batch"" is used quite consistently in ML right now, where it refers to the inputs you send in together for a forward/backward pass.
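
+ +

As a rough illustration (a NumPy sketch of just the statistics, ignoring batch norm's learnable scale/shift parameters), for a conv layer the mean/std are computed per feature map, over the whole batch and the spatial dimensions:

+ +
import numpy as np
+
+# activations for one mini-batch: (batch_size, height, width, feature_maps)
+x = np.random.randn(32, 26, 26, 32)
+
+# one mean/std per feature map, averaged over the batch and spatial dims
+mean = x.mean(axis=(0, 1, 2), keepdims=True)   # shape (1, 1, 1, 32)
+std = x.std(axis=(0, 1, 2), keepdims=True)
+
+x_norm = (x - mean) / (std + 1e-5)             # normalized activations
+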

+",25496,,,,,12/27/2019 16:22,,,,1,,,,CC BY-SA 4.0 +17276,2,,17273,12/27/2019 17:14,,5,,"

Though word embedding is primarily a language modeling tool, it also acts as a feature extraction method because it helps transform raw data (characters in text documents) into a meaningful alignment of word vectors in the embedding space that the model can work with more effectively (than other traditional methods such as TF-IDF, Bag of Words, etc., on a large corpus). Word embedding techniques help extract information from the pattern and occurrence of words and go further than other traditional token representation methods to decode/identify the meaning/context of the words, thereby providing more relevant and important features to the model for tackling the underlying problem.

+ +

However, from another standpoint, word-embedding models were not developed aiming to solve a particular feature extraction problem, but rather, to generalize and model the language used in a corpus to gain a semantic understanding of the words and the relationships between them. Such that, all the various corpus-specific tasks can then employ the same ""library"" of information which was collectively & exhaustively learnt by the embedding model. Meaning, the word embedding model learns a language model that is task-agnostic for all tasks on that corpus unlike feature extraction methods which are specifically task-oriented.

+ +

Hence, the similarity is - word-embeddings can effectively aid in feature extraction; the dissimilarity is - they're not primarily meant to extract features more than they are for modeling a language which might be an ""overkill"" for a particular feature extraction task on a dataset.
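
+ +

As a small illustration of the feature-extraction view (a toy sketch with a hypothetical hand-made embedding table rather than a trained model), a document can be turned into a fixed-length feature vector by averaging the embeddings of its words:

+ +
import numpy as np
+
+# hypothetical pre-trained word embeddings (word -> 3-dimensional vector)
+embeddings = {
+    'good':  np.array([0.8, 0.1, 0.3]),
+    'movie': np.array([0.2, 0.9, 0.5]),
+    'bad':   np.array([-0.7, 0.2, 0.1]),
+}
+
+def document_features(tokens, emb, dim=3):
+    # average the vectors of the known tokens to get one feature vector
+    vectors = [emb[t] for t in tokens if t in emb]
+    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
+
+print(document_features(['good', 'movie'], embeddings))  # features fed to a downstream classifier
+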

+",30644,,30644,,12/27/2019 17:19,12/27/2019 17:19,,,,1,,,,CC BY-SA 4.0 +17277,2,,17271,12/27/2019 17:49,,3,,"

Some other details you could mention are:

+ +
    +
  • total number of model parameters (e.g. 1.2M or 0.15M) & depth of the network (e.g. 38-layered network)

  • +
  • family/style of the network architecture (e.g. encoder-decoder arch., LSTM)

  • +
  • specifics of connections between network layers (e.g. residual-, dense-, skip-connections)
  • +
  • specifics of individual components of the network structure (e.g. dilated-convs. (CNNs), attention (LSTMs))
  • +
  • description/reasoning of why you chose a particular structure/sequence of connections in your deep learning model
  • +
  • specifics of training/validation/testing procedures (e.g. augmented training data, cross-validation, test-time-augmentation (TTA), frozen network weights)
  • +
  • other specific details/caveats that allow the results of your deep learning model be easily reproduced from the scientific report
  • +
+ +

For more info on the best kinds of details to be included in the report, refer to ""Methodology""/ ""Training""/ ""Implementation""/ ""Proposed Architecture"" sections of the deep learning research papers in your relevant area.

+",30644,,30644,,12/27/2019 18:33,12/27/2019 18:33,,,,0,,,,CC BY-SA 4.0 +17279,2,,17264,12/27/2019 20:11,,2,,"

Precision is the number of true positives (TP) over the number of predicted positives (PP), and recall is the number of true positives over the number of actual positives (AP). I used the initials just to make it easier ahead.

+ +

A true positive is when you predict a car in a place and there is a car in that place.

+ +

A predicted positive is every car you predict, being right or wrong does not matter.

+ +

An actual positive is every car that actually is in the picture.

+ +

You should calculate these values separately for each category, and then sum over the examples you sampled, if I am not mistaken.

+ +

So for the CAR category you have (assuming the predictions do match with the target, i.e., you are not predicting a truck as a car for example) :

+ +
model 1
+line 1  -> 2 TP, 2 PP, 4 AP 
+line 2  -> 0 TP, 0 PP, 2 AP
+
+ +

So in total precision is 2/2 = 1 and recall is 2/6 = 0.3(3).

+ +

You can then do the same for the other categories, and for the other models. This way you can check if a model is predicting one category better than the other. For example, model 1 can be better at finding cars in a picture whilst model 3 can be better at finding buses.
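
+ +

A small sketch of this bookkeeping (using the model 1 car totals from the example above, plus the bus counts from the first line of your table):

+ +
# per-class totals accumulated over the sampled images: TP, PP, AP
+counts = {
+    'car': {'TP': 2, 'PP': 2, 'AP': 6},
+    'bus': {'TP': 0, 'PP': 0, 'AP': 1},
+}
+
+for cls, c in counts.items():
+    precision = c['TP'] / c['PP'] if c['PP'] else 0.0   # 0 when nothing was predicted
+    recall = c['TP'] / c['AP'] if c['AP'] else 0.0
+    print(cls, 'precision =', round(precision, 2), 'recall =', round(recall, 2))
+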

+ +

The important part is that you know if the objects the model predicted actually correspond to what is in the picture. A very unlikely example would be a picture with 1 car and 1 truck where the algorithm recognizes the car as a truck and the truck as a car. From the info that is in the table I cannot be sure if the 2 cars you predict are the actual cars in the picture, or in other words, if they are actually True Positives or are actually False Positives.

+",24054,,,,,12/27/2019 20:11,,,,2,,,,CC BY-SA 4.0 +17281,1,,,12/28/2019 4:53,,1,39,"

The capsule neural networks have been formally introduced in the paper Dynamic Routing Between Capsules.

+ +

Much ado has been made about how the capsules output a vector (magnitude = probability that an entity is present, orientation space = the instantiated parameters), which can then allow it to maintain more information than a max-pooled operation which outputs only a scalar.

+ +

Within these vector representations, the dimensions of this space turn out to be the parameters by which a written digit could vary: scale, stroke thickness, skew, width, etc. There are 16 neurons in each capsule to represent 16 dimensions.

+ +

It is unclear to me, from reading the paper, if these parameters emerged through training, or if they were hand-coded a priori. If these parameters were not hand-coded, why do such ""clean"" dimensions emerge? Why don't mixed-selective neurons emerge within the 16?

+",32369,,2444,,6/9/2020 11:36,6/9/2020 11:36,Is the number of neurons in each capsule in a capsule neural network hardcoded?,,0,1,,,,CC BY-SA 4.0 +17282,1,,,12/28/2019 7:17,,2,121,"

I am practicing with Resnet50 fine-tuning for a binary classification task. Here is my code snippet.

+
base_model = ResNet50(weights='imagenet', include_top=False)
+x = base_model.output
+x = keras.layers.GlobalAveragePooling2D(name='avg_pool')(x)
+x = Dropout(0.8)(x)
+model_prediction = keras.layers.Dense(1, activation='sigmoid', name='predictions')(x)
+model = keras.models.Model(inputs=base_model.input, outputs=model_prediction)
+opt = SGD(lr = 0.01, momentum = 0.9, nesterov = False)
+ 
+model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])  #
+   
+train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=False)
+  
+test_datagen = ImageDataGenerator(rescale=1./255)
+train_generator = train_datagen.flow_from_directory(
+        './project_01/train',
+        target_size=(input_size, input_size),  
+        batch_size=batch_size,
+        class_mode='binary')    
+
+validation_generator = test_datagen.flow_from_directory(
+        './project_01/val',
+        target_size=(input_size, input_size),
+        batch_size=batch_size,
+        class_mode='binary')
+
+hist = model.fit_generator(
+        train_generator,
+        steps_per_epoch= 1523 // batch_size, # 759 + 764 NON = 1523
+        epochs=epochs,
+        validation_data=validation_generator,
+        validation_steps= 269 // batch_size)  # 134 + 135NON = 269
+
+

I plotted a figure of the model after training for 50 epochs:

+

+

You may have noticed that train_acc and val_acc fluctuate heavily, and train_acc barely reaches 52%, which means that the network isn't learning, let alone over-fitting the data.

+

As for the losses, I haven't got any insights.

+

Before training starts, network outputs:

+
Found 1523 images belonging to 2 classes.
+Found 269 images belonging to 2 classes.
+
+

Is my fine-tuned model learning anything at all?

+

I'd appreciate if someone can guide me to solve this issue.

+",31870,,2444,,5/23/2021 13:16,6/12/2023 21:02,Is my fine-tuned model learning anything at all?,,1,2,,,,CC BY-SA 4.0 +17283,2,,17282,12/28/2019 10:45,,0,,"

It's difficult to say without knowing what your data looks like, but from the numbers the dataset seems too small, and the images might be too similar to one another or too different. In any case, I'd check other networks like Inception and decrease the learning rate even further (say 0.0001) so as not to mess with the ImageNet weights, if your data is not very different from the ImageNet classes.

+",20494,,,,,12/28/2019 10:45,,,,3,,,,CC BY-SA 4.0 +17284,1,,,12/28/2019 11:16,,2,208,"

I wish to compile a (somewhat) comprehensive list of AGI systems that have actually been created and tested (to whatever degrees of success) instead of those that simply advertise they are going to 'do' something about it or have patented theoretical concepts.

+ +

For the purposes of this question, we can use the following definition of AGI:

+ +
+

Artificial general intelligence (AGI) is the intelligence of a machine that can understand or learn any intellectual task that a human being can

+
+",32373,,2444,,1/31/2021 18:30,1/31/2021 18:30,Which AGI systems have already been implemented and tested?,,1,0,,,,CC BY-SA 4.0 +17286,2,,17284,12/28/2019 14:35,,5,,"

As far as I know, no "true" (i.e. as intellectual and physically capable as a human) artificial general intelligent system (AGI) has been implemented or is practically useful (this is confirmed by Ben Goertzel, who is one of the leading researchers in AGI [1, 2]).

+

The closest to a practical AGI might be Sophia (or similar robots), which may look like an AGI, but it lacks several capabilities that we humans have and its ability to adapt to new circumstances is limited. Sophia uses OpenCog [1], which is supposed to be a software framework to develop AGI. Sophia is used for the loving AI project.

+

There are also theoretical frameworks for AGI, such as AIXI, which, in any case, have several flaws, such as incomputability (in the case of AIXI). There are approximations of AIXI, but these approximations can only be used to solve toy problems (such as tic-tac-toe), so they aren't really useful to solve complex real-world problems. However, it is possible that better approximations to AIXI and more theoretical frameworks for AGI will be developed that can deal with more complex problems.

+

It's also important to note that, despite their success, AlphaGo and AlphaStar are narrow AI systems, as they can only solve one specific problem (although the same approach can be adapted to solve very similar problems too, e.g. AlphaZero).

+

If you want to know more about AGI, the Scholarpedia's article Artificial General Intelligence, curated by Ben Goertzel, provides a good overview of the AGI field, including but not restricted to definitions of AGI and approaches to the development of AGI systems, such as

+
    +
  • universal (AIXI was created based on this approach),

    +
  • +
  • symbolic (Soar was created based on this approach), which is based on the physical symbol system hypothesis,

    +
  • +
  • emergentist (or sub-symbolic), which is based on the idea that general intelligence is expected to emerge from sub-symbolic dynamics (where a sub-symbolic system refers e.g. to an artificial neural network),

    +
  • +
  • hybrid (e.g. CLARION), which is a combination of the universal, symbolic or sub-symbolic approaches.

    +
  • +
+

You probably also want to take a look at this question, which has a few answers that mention a paper and book that you can read to know more about the AGI field. You may also be interested in this related question too.

+",2444,,2444,,1/31/2021 18:15,1/31/2021 18:15,,,,0,,,,CC BY-SA 4.0 +17287,1,,,12/28/2019 15:39,,5,174,"

I've seen some comments in online articles/tutorials or Stack Overflow questions which suggest that increasing the number of epochs can result in overfitting. But my intuition tells me that there should be no direct relationship at all between the number of epochs and overfitting. So I'm looking for an answer which explains if I'm right or wrong (or whatever's in between).

+

Here's my reasoning though. To overfit, you need to have enough free parameters (I think this is called "capacity" in neural networks) in your model to generate a function that can replicate the sample data points. If you don't have enough free parameters, you'll never overfit. You might just underfit.

+

So really, if you don't have too many free parameters, you could run infinite epochs and never overfit. If you have too many free parameters, then yes, the more epochs you have the more likely it is that you get to a place where you're overfitting. But that's just because running more epochs revealed the root cause: too many free parameters. The real loss function doesn't care about how many epochs you run. It existed the moment you defined your model structure before you ever even tried to do gradient descent on it.

+

In fact, I'd venture as far as to say: assuming you have the computational resources and time, you should always aim to run as many epochs as possible because that will tell you whether your model is prone to overfitting. Your best model will be the one that provides great training and validation accuracy, no matter how many epochs you run it for.

+

EDIT: While reading more into this, I realise I forgot to take into account that you can arbitrarily vary the sample size as well. Given a fixed model, a smaller sample size is more prone to overfitting. And then that kind of makes me doubt my intuition above. Still happy to get an answer though!

+",16871,,18758,,5/29/2022 4:56,5/29/2022 4:56,Is running more epochs really a direct cause of overfitting?,,1,6,,,,CC BY-SA 4.0 +17288,1,,,12/28/2019 18:17,,1,719,"

I have written this code to classify cats and dogs using ResNet50. While studying, I came to the conclusion that transfer learning gives very good accuracy for deep learning models, but I ended up getting a far worse result and I don't understand the cause. Any explanation with reasoning would be very helpful. The dataset contains 2000 images of cats and dogs for training and 1000 images as the validation set.

+ +

The following summarises my model

+ + + +
from tensorflow.keras.applications import ResNet50
+from tensorflow.keras.models import Sequential
+from tensorflow.keras.layers import Dense, InputLayer, Flatten, GlobalAveragePooling2D
+num_classes = 2
+IMG_SIZE = 224
+IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
+my_new_model=tf.keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=IMG_SHAPE, pooling='avg', classes=2)
+my_new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
+
+
+from tensorflow.keras.preprocessing.image import ImageDataGenerator
+from tensorflow.keras.applications.resnet50 import preprocess_input
+train_datagen = ImageDataGenerator(
+ preprocessing_function=preprocess_input,
+ rotation_range=40,
+ width_shift_range=0.2,
+ height_shift_range=0.2,
+ shear_range=0.2,
+ zoom_range=0.2,
+ horizontal_flip=True,)
+
+# Note that the validation data should not be augmented!
+test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
+
+train_generator = train_datagen.flow_from_directory(
+     train_dir,  # This is the source directory for training images
+     target_size=(224,224),  # All images will be resized to 224x224
+     batch_size=20,
+     class_mode='binary')
+
+validation_generator = test_datagen.flow_from_directory(
+     validation_dir,
+     target_size=(224, 224),
+     class_mode='binary')
+
+my_new_model.fit_generator(
+     train_generator,
+     epochs = 8,
+     steps_per_epoch=100,
+     validation_data=validation_generator)
+
+ +

For this I get the training logs as,

+ +
Train for 100 steps, validate for 32 steps
+Epoch 1/8
+100/100 - 49s - loss: 7889.4051 - accuracy: 0.0000e+00 - val_loss: 7834.5318 - val_accuracy: 0.0000e+00
+Epoch 2/8
+100/100 - 35s - loss: 7809.7583 - accuracy: 0.0000e+00 - val_loss: 7775.1556 - val_accuracy: 0.0000e+00
+Epoch 3/8
+100/100 - 35s - loss: 7808.4858 - accuracy: 0.0000e+00 - val_loss: 7765.3964 - val_accuracy: 0.0000e+00
+Epoch 4/8
+100/100 - 35s - loss: 7808.0520 - accuracy: 0.0000e+00 - val_loss: 7764.0735 - val_accuracy: 0.0000e+00
+Epoch 5/8
+100/100 - 35s - loss: 7807.7891 - accuracy: 0.0000e+00 - val_loss: 7762.4891 - val_accuracy: 0.0000e+00
+Epoch 6/8
+100/100 - 35s - loss: 7807.6872 - accuracy: 0.0000e+00 - val_loss: 7762.1766 - val_accuracy: 0.0000e+00
+Epoch 7/8
+100/100 - 35s - loss: 7807.6633 - accuracy: 0.0000e+00 - val_loss: 7761.9766 - val_accuracy: 0.0000e+00
+Epoch 8/8
+100/100 - 35s - loss: 7807.6514 - accuracy: 0.0000e+00 - val_loss: 7761.9346 - val_accuracy: 0.0000e+00
+<tensorflow.python.keras.callbacks.History at 0x7f5adff722b0>
+
+ +

If I change to class_mode='categorical', it gives this error:
+Incompatible shapes: [20,2] vs. [20,2048].

+",32378,,32378,,12/28/2019 18:47,12/9/2021 16:48,Reasoning behind $Zero$ validation accuracy in the following ResNet50 model for classification,,0,0,,,,CC BY-SA 4.0 +17289,1,,,12/28/2019 20:39,,1,36,"

I'm looking for a supervised system/approach that could learn how to categorize incoming texts/documents, where new categories can be added over time and the training set will be small. The trained model should not be static and should be able to evolve as new categories are added or new documents are evaluated.

+ +

For each document, it should first give its suggestion, which can then be corrected.

+",32382,,,,,12/30/2019 5:34,Categorizing text into dynamic amount of categories,,1,0,,,,CC BY-SA 4.0 +17290,1,,,12/29/2019 0:43,,1,31,"

There seems to be a severe problem with the taxonomy of neural network topologies. What I'd like to know is the term I should use to search for the most general topology: the completely connected directed cyclic graph (henceforth CCDCGRNN). This is because all other topologies degenerate by constraint from the CCDCGRNN. This includes topologies that are often confused with the CCDCGRNN, such as Elman and Jordan networks*, and more legitimately so than, say, LSTMs.

+ +

I know there are claims such as this question at stats.stackexchange.com (including cites) that unqualified ""RNN"" refers to CCDCGRNN but this is not true if one looks a little deeper. Examples include not only the Wikipedia article on ""RNN"" (who trusts WP anyway, right?), but a ""mostly complete"" catalog of neural network topologies.

+ +

There must have been, at some point in the ancient past, research into the methods by which one can, in a principled manner, degenerate CCDCGRNNs, or at least into why they aren't worth studying in their own right.

+ +

*RNNs containing feed-through time delays are a degenerate case of CCDCGRNNs where a time delay of N out of a node is accomplished by allocating N neurons constrained to have only one input with weight of 1 (and a linear transfer function with slope 1).

+",26053,,26053,,12/29/2019 0:49,12/29/2019 0:49,What is the term for an RNN that is a completely connected directed graph?,,0,0,,,,CC BY-SA 4.0 +17291,1,,,12/29/2019 7:19,,7,656,"

Are PAC learning and VC dimension relevant to machine learning in practice? If yes, what is their practical value?

+ +

To my understanding, there are two hits against these theories. The first is that the results are all conditioned on knowing the appropriate models to use, for example, the degree of complexity. The second is that the bounds are very loose, such that a deep learning network would take an astronomical amount of data to reach them.

+",32390,,2444,,12/29/2019 12:15,3/11/2020 4:40,Are PAC learning and VC dimension relevant to machine learning in practice?,,1,2,,,,CC BY-SA 4.0 +17293,1,,,12/29/2019 16:41,,1,66,"

I have created a CNN for use on the MNIST dataset for now (so I have 10 classes). I have trained SVMs on the sublayers of this trained CNN and wish to combine them into a combined SVM as to give a combined score.

+ +

So far, I have trained two individual SVMs on two of the sublayers of my neural network. What is the best way to combine the two SVMs, and what are the different options available to me? Is it simply a case of taking the maximum/average of each SVM's prediction for a class and using that as the score for the combined SVM's class prediction?

+ +

Thanks

+",29877,,,,,12/29/2019 16:41,Is it possible to combine multiple SVMs that were trained on sublayers of a CNN into one combined SVM?,,0,1,,,,CC BY-SA 4.0 +17294,1,,,12/29/2019 16:45,,2,49,"

I created a multi-label classification CNN to classify chest X-ray images into zero or more possible lung diseases. I've been doing some configuration tests on it and analyzing its results and I'm having a hard time understanding some things about it.

+ +

First of all, these are the graphs that I got for different configurations:

+ +

Results of CNN with different configurations

+ +

Note 1: I've only changed the dataset size and the number of color channels in each configuration.
+Note 2: In case you're wondering why I tested the network with both 1 and 3 color channels, it's because the images are technically grayscale, but I am using the AlexNet architecture, which was made to take 224 x 224 images with 3 channels as input, so I wanted to see if the network somehow performed better with 3 channels instead of just the one

+ +

These are the things about it I don't understand:

+ +
    +
  1. Why does the sensitivity and specificity of the network vary so much between different epochs?
  2. +
  3. Is it normal for the validation loss of the network barely ever change as the number of epochs increase?
  4. +
  5. Looking at the results I got, it looks like 2 epochs is where there tends to be the best results. Does that make sense? I've heard of people training their networks with dozens of epochs sometimes.
  6. +
  7. Why is it that, many times, when the sensitivity of the network increases between epochs, the specificity tends to decrease, and vice-versa?
  8. +
+ +

Sorry if some of these questions are dumb, I'm still a newbie at this. Also, my total dataset is drastically larger than what I present in these results (~110,000 images). I just haven't done tests with more images due to the time the network takes to train.

+ +

Network Architecture:

+ +
    +
  • Base Architecture: AlexNet
  • +
  • Loss Function: Sigmoid Cross-Entropy Loss
  • +
  • Optimizer: Adam Optimization Algorithm with learning rate of 0.001
  • +
+ +

EDIT: I forgot to mention that the number of diseases to predict is 15, and that the network sees 0's much more than 1's due to the imbalance of classes. I've considered changing the loss function to a weighted version of sigmoid cross-entropy because of that, but I'm not sure if that would help the network much.

+",32395,,,,,12/29/2019 16:45,How to understand my CNN's training results?,,0,0,,,,CC BY-SA 4.0 +17295,1,17297,,12/29/2019 17:45,,0,193,"

The gradient descent step is the following

+ +

\begin{align} +\mathbf{W}_i = \mathbf{W}_{i-1} - \alpha * \nabla L(\mathbf{W}_{i-1}) +\end{align}

+ +

where $L(\mathbf{W}_{i-1})$ is the loss value, $\alpha$ the learning rate and $\nabla L(\mathbf{W}_{i-1})$ the gradient of the loss.

+ +

So, how do we get $L(\mathbf{W}_{i-1})$ in order to calculate its gradient $\nabla L(\mathbf{W}_{i-1})$? As an example, we can initialize the set of weights $\mathbf{W}$ to 0.5. Can you explain this to me?

+",31143,,2444,,12/29/2019 18:52,12/29/2019 18:57,How is the loss value calculated in order to compute the gradient?,,1,2,,,,CC BY-SA 4.0 +17296,2,,11992,12/29/2019 17:50,,0,,"

The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games. +https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/viewPaper/12389

+",32396,,,,,12/29/2019 17:50,,,,0,,,,CC BY-SA 4.0 +17297,2,,17295,12/29/2019 18:52,,2,,"

In your case, $L$ is the loss (or cost) function, which can be, for example, the mean squared error (MSE) or the cross-entropy, depending on the problem you want to solve. Given one training example $(\mathbf{x}_i, y_i) \in D$, where $\mathbf{x}_i \in \mathbb{R}^d$ is the input (for example, an image) and $y_i \in \mathbb{R}$ can either be a label (aka class) or a numerical value, and $D$ is your training dataset, then the MSE is defined as follows

+ +

$$L(\mathbf{W}) = \frac{1}{2} \left(f(\mathbf{x}_i) - y_i \right)^2,$$

+ +

where $f(\mathbf{x}_i) \in \mathbb{R}$ is the output of the neural network $f$ given the input $\mathbf{x}_i$.

+ +

If you have a mini-batch of $M$ training examples $\{(\mathbf{x}_i, y_i) \}_{i=1}^M$, then the loss will be an average of the MSE over the training examples. For more info, have a look at this answer: https://ai.stackexchange.com/a/11675/2444. The answer https://ai.stackexchange.com/a/8985/2444 may also be useful.
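
+ +

As a minimal sketch (using NumPy and a simple linear model $f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x}$ instead of a full neural network), the mini-batch loss and its gradient with respect to the weights can be computed like this:

+ +
import numpy as np
+
+X = np.random.randn(8, 3)              # mini-batch of M = 8 inputs with 3 features each
+y = np.random.randn(8)                 # targets
+w = np.zeros(3)                        # current weights
+
+pred = X @ w                           # f(x_i) for every example in the batch
+loss = 0.5 * np.mean((pred - y) ** 2)  # average MSE over the mini-batch
+grad = X.T @ (pred - y) / len(y)       # gradient of the loss w.r.t. w
+w = w - 0.1 * grad                     # one gradient descent step
+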

+ +

See the article Loss and Loss Functions for Training Deep Learning Neural Networks for more info regarding different losses used in deep learning and how to choose the appropriate loss for your problem.

+",2444,,2444,,12/29/2019 18:57,12/29/2019 18:57,,,,4,,,,CC BY-SA 4.0 +17298,1,17299,,12/30/2019 0:06,,7,2372,"

I'm new to machine learning (so excuse my nomenclature), and not being a python developer, I decided to jump in at the deep (no pun intended) end writing my own framework in C++.

+

In my current design, I have given each neuron/cell the possibility to have a different activation function. Is this a plausible design for a neural network? A lot of the examples I see use the same activation function for all neurons in a given layer.

+

Is there a model which may require this, or should all neurons in a layer use the same activation function? Would I be correct in using different activation functions for different layers in the same model, or would all layers have the same activation function within a model?

+",32400,,18758,,5/30/2022 23:52,5/30/2022 23:52,Do all neurons in a layer have the same activation function?,,1,1,,,,CC BY-SA 4.0 +17299,2,,17298,12/30/2019 0:27,,2,,"

From here:

+ +
+

Using other activation functions don’t provide significant improvement in performance and tweaking them doesn’t provide any big improvement. So as per simplicity we use same activation function for most of the case in Deep Neural Networks.

+
+",26726,,,,,12/30/2019 0:27,,,,4,,,,CC BY-SA 4.0 +17300,1,,,12/30/2019 1:29,,1,68,"

I am optimising hyperparameters for my deep reinforcement learning project (using PPO2, DQN and A2C) and was wondering:

+ +

Should I find the optimum hyperparameters to get maximum reward from training over my entire range of training (e.g. 50 million steps) or can I optimise over less time (e.g. 1 million steps)?

+ +

What is the conventional approach and why?

+",32401,,,,,12/30/2019 1:29,Hyperparameter optimisation over entire range or shorter range of training episodes in Deep Reinforcement Learning,,0,1,,,,CC BY-SA 4.0 +17301,2,,17240,12/30/2019 4:28,,1,,"

Here's a link to my answer on CV Stack Exchange, where I have mentioned about latent spaces and some deep learning models that learn these representations: https://stats.stackexchange.com/questions/442352/what-is-a-latent-space/442360#442360

+ +

In short, deep learning models for Domain Adaptation, Computer Vision, Natural Language Processing, Recommendation Systems, Music/Speech/Audio processing, Adversarial models, etc., all learn some form of latent representation of data.

+ +

In fact, any place we're learning a function to map input and output spaces of a dataset, the model essentially learns a latent representation of data irrespective of whether the model is based on deep neural networks or a stochastic method or any other.

+",30644,,30644,,1/7/2020 4:12,1/7/2020 4:12,,,,0,,,,CC BY-SA 4.0 +17302,2,,17289,12/30/2019 5:34,,2,,"

Transfer Learning allows you to add new categories to be predicted to the output layer without needing to re-train the entire model every time a new category needs to be classified. Rather, the weights of all initial layers up to the last few layers of the network can be frozen and only the last one or two layers can be made trainable for fine-tuning the classification rather than training from scratch. This approach would be relatively fast than other methods since the dataset in your case is small.

+ +

In order for the model to flag the need for a new output class when inferring on a test document, you could include a class ""Unknown"" in addition to the existing N classes (hence, the output layer now contains N+1 classes). If the model predicts ""Unknown"" with the highest probability, you can add a new class to the output layer after examining the test data.
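
+ +

A rough Keras sketch of this setup (the small encoder below is just a hypothetical stand-in for your pre-trained model, and the layer sizes are illustrative):

+ +
from tensorflow.keras import layers, models
+
+# stand-in for a pre-trained document encoder (in practice, load your saved model)
+base_model = models.Sequential([
+    layers.Dense(256, activation='relu', input_shape=(300,)),  # e.g. 300-dim document vectors
+    layers.Dense(128, activation='relu'),
+])
+
+num_classes = 5 + 1                      # N known categories + 1 'Unknown' class
+
+for layer in base_model.layers[:-1]:
+    layer.trainable = False              # freeze all but the last encoder layer
+
+outputs = layers.Dense(num_classes, activation='softmax')(base_model.output)
+model = models.Model(inputs=base_model.input, outputs=outputs)
+model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
+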

+",30644,,,,,12/30/2019 5:34,,,,1,,,,CC BY-SA 4.0 +17304,1,17356,,12/30/2019 21:32,,3,2889,"

I'm reading this paper and it says:

+ +
+

In this paper, we present a multi-class embedded feature selection method called as sparse optimal scoring with adjustment (SOSA), which is capable of addressing the data heterogeneity issue. We propose to perform feature selection on the adjusted data obtained by estimating and removing the unknown data heterogeneity from original data. Our feature selection is formulated as a sparse optimal scoring problem by imposing $\ell_{2, 1}$-norm regularization on the coefficient matrix which hence can be solved effectively by proximal gradient algorithm. This allows our method can well handle the multi-class feature selection and classification simultaneously for heterogenous data

+
+ +

What is the $\ell_{2, 1}$ norm regularization? Is it L1 regularization or L2 regularization?

+",32416,,32416,,1/1/2020 13:17,3/10/2022 17:57,"What is the $\ell_{2, 1}$ norm?",,1,1,,,,CC BY-SA 4.0 +17305,1,,,12/30/2019 22:15,,1,40,"

I'm evaluating the performance and accuracy in detecting objects for my data set using three deep learning algorithms. In total there are 24,085 images. I measure the performance in terms of time taken to detect the objects. To measure the accuracy, I manually count the number of objects in each image and then calculate recall and precision values for three algorithms.

+ +

However, since I'm manually counting to get the actual object count, I selected only 30 images. Will that sample be enough to conclude that algorithm 1 is better than the others in terms of performance and accuracy?

+",32343,,32343,,12/30/2019 22:27,12/30/2019 22:27,Sample size for the evaluation of Deep Learning Models,,0,0,,,,CC BY-SA 4.0 +17306,1,,,12/30/2019 22:16,,1,132,"

Is there an AI technology out there or being developed that can predict human behaviour, given that we as humans are irrational decision-makers?

+

I'm looking at this from an economic standpoint - the issue with current economic models is that they assume that humans are perfectly rational, but obviously this isn't the case. Could AI develop better models and therefore produce better models of recessions?

+",32419,,2444,,9/11/2020 14:40,10/11/2020 18:51,Is there an AI technology that can predict human behaviour?,,2,3,,,,CC BY-SA 4.0 +17307,1,17308,,12/31/2019 5:05,,3,211,"

I am trying to replace the last fully connected layer of size 4096/2048 with a matrix of size 100x300, where the previous FC layer's output is of size 2048.

+ +

+ +

I've tried

+ +
    +
  1. 2D convolution - to map from 2048 --> 100x300 (Which is not realizable)
  2. +
  3. Intermediate projections :
    + 2048 --> 100
    + [100x1] X [1x300] --> [100x300] (possible but complicated)
  4. +
+ +

I am looking for a simple and effective solution with least linear transformations.

+ +

+",25676,,25676,,1/3/2020 4:17,1/3/2020 4:17,How to create a fully connected(matrix) layer with vector input,,1,0,,,,CC BY-SA 4.0 +17308,2,,17307,12/31/2019 7:12,,4,,"

A (2048)-dimensional tensor cannot be reshaped directly into (100, 300), since the number of elements doesn't match (2048 vs 100*300 = 30000). You can first project it to 30000 values with a dense (linear) layer and then use the tf.reshape() method (tensorflow doc). Here's one way to do this:

+ +
input1 = tf.keras.layers.Dense(100 * 300)(input1)  # linear projection: 2048 -> 30000
+input1 = tf.reshape(input1, [-1, 100, 300], name=""reshaped_tensor"")
+
+ +

If you're not using TensorFlow but NumPy, here's an equivalent implementation (again, the 2048-dimensional vector is first projected to 30000 values):

+ +
W = np.random.randn(2048, 100 * 300)      # projection matrix (in practice, learned weights)
+input1 = np.asarray(input1) @ W           # 2048 -> 30000
+input1 = np.reshape(input1, (100, 300))
+
+ +

Note: You might want to follow up this layer with tf.nn.conv2d layers to ""densify"" the sparse matrix/values obtained from the above step.

+",30644,,,,,12/31/2019 7:12,,,,3,,,,CC BY-SA 4.0 +17310,1,,,12/31/2019 13:22,,5,331,"

I'm running some distributed trainings in Tensorflow with Horovod. It runs training separately on multiple workers, each of which uses the same weights and does forward pass on unique data. Computed gradients are averaged within the communicator (worker group) before applying them in weight updates. I'm wondering - why not average the loss function across the workers? What's the difference (and the potential benefits) of averaging gradients?

+",32431,,,,,2/2/2020 22:01,Why do we average gradients and not loss in distributed training?,,1,2,,,,CC BY-SA 4.0 +17311,1,17312,,12/31/2019 14:31,,4,284,"

In chapter 10 of Sutton and Barto's book (2nd edition) is given the equation for TD(0) error with average reward (equation 10.10):

+ +

$$\delta_t = R_{t+1} - \bar{R} + \hat{v}(S_{t+1}, \mathbf{w}) - \hat{v}(S_{t}, \mathbf{w})$$

+ +

What is the intuition behind this equation? And how exactly is it derived?

+ +

Also, in chapter 13, section 6, is given the Actor-Critic algorithm, which uses the TD error. How can you use 1 error to update 3 distinct things - like the average reward, value function estimator (critic), and the policy function estimator (actor)?

+ +

Average Reward update rule: $\bar{R} \leftarrow \bar{R} + \alpha^{\bar{R}}\delta$

+ +

Critic weight update rule: $\mathbf{w} \leftarrow \mathbf{w} + \alpha^{\mathbf{w}}\delta\nabla \hat{v}(s,\mathbf{w})$

+ +

Actor weight update rule: $\mathbf{\theta} \leftarrow \mathbf{\theta} + \alpha^{\mathbf{\theta}}\delta\nabla ln \pi(A|S,\mathbf{\theta})$

+",27947,,2444,,1/1/2020 14:03,1/1/2020 14:03,"What is the intuition behind the TD(0) equation with average reward, and how is it derived?",,1,0,,,,CC BY-SA 4.0 +17312,2,,17311,12/31/2019 15:25,,4,,"

This is simply from definition of return in average reward setting (look at equation $10.9$). The ""standard"" TD error is defined as +\begin{equation} +TD_{\text{error}} = R_{t+1} + V(S_{t+1}) - V(S_t) +\end{equation} +In average reward setting, average reward $r(\pi)$ is subtracted from reward at $t$, $R_t$, so TD error in this case is +\begin{equation} +TD_{\text{error}} = R_{t+1} - \bar R_{t+1} + V(S_{t+1}) - V(S_t) +\end{equation} +where $\bar R_{t+1}$ is estimate of $r(\pi)$.

+ +

You can use $\delta_t$ in all 3 updates because none of these updates depends on the others. For example, if you update $\mathbf w$, you don't then use it to update $\mathbf \theta$; and if you update $\bar R$, you don't use the updated version to update $\mathbf w$ or $\mathbf \theta$, so you're not introducing additional bias. In each separate update you also don't have $\delta_t$ present multiple times, so you don't require multiple samples per timestep to get an unbiased update.
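
+ +

A rough sketch of one timestep with linear function approximation (the feature function x(s), the score function grad_log_pi and the step sizes are just placeholders here):

+ +
import numpy as np
+
+def actor_critic_step(R, s, s_next, a, w, theta, avg_R, x, grad_log_pi,
+                      alpha_R=0.01, alpha_w=0.1, alpha_theta=0.1):
+    v_s = np.dot(w, x(s))                       # \hat v(S_t, w)
+    v_s_next = np.dot(w, x(s_next))             # \hat v(S_{t+1}, w)
+
+    delta = R - avg_R + v_s_next - v_s          # the TD error above
+
+    avg_R = avg_R + alpha_R * delta             # update the average-reward estimate
+    w = w + alpha_w * delta * x(s)              # semi-gradient critic update
+    theta = theta + alpha_theta * delta * grad_log_pi(s, a, theta)  # actor update
+    return w, theta, avg_R
+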

+ +

Additionally, this is a semi-gradient algorithm: it uses the bootstrapped estimate $V_{t+1}$, but it doesn't calculate the full derivative with respect to it as well, only with respect to $V_t$, so the algorithm is biased by default, but it works well enough in practice in the linear case.

+",20339,,20339,,1/1/2020 10:36,1/1/2020 10:36,,,,0,,,,CC BY-SA 4.0 +17313,1,,,12/31/2019 16:57,,3,186,"

I want a microphone to pick up sounds around me (let's say beyond a 3 foot radius), but ignore sounds made at my desk, such as the rustling of paper, clicking a mouse and typing, my hands brushing up on the table, putting a pen down, etc.

+ +

How hard would it be for AI to be able to distinguish these sounds from surrounding sounds, such as someone knocking on my door or a random loud sound from further away? How would you implement this? Is it possible that a pre-trained model could accomplish this, and work reliably for most people at their desk? I don't have any experience in AI.

+",32433,,,,,6/16/2023 17:01,How difficult is this sound classification?,,2,5,,,,CC BY-SA 4.0 +17314,1,,,12/31/2019 17:29,,2,69,"

Which hyper-parameters of a convolutional neural network are likely to be the most sensitive to whether the training (and test and inference) data involves only accurately centered images versus off-centered images?

+ +

More convolutional layers, wider convolution kernels, more dense layers, wider dense layers, more or less pooling, or ???

+ +

e.g. If I can preprocess the data to include only accurately centered images, which hyper-parameters should I experiment with changing to create a smaller CNN model (for a power and memory constrained inference engine)? Or conversely, if I have a minimized model trained on centered data, which hyper-parameters would I most likely need to increase to get similar loss and accuracy on uncentered (shifted in XY) data?

+",2918,,,,,12/31/2019 17:29,Which CNN hyper-parameters are most sensitive to centered versus off centered data?,,0,2,,,,CC BY-SA 4.0 +17316,1,,,12/31/2019 21:27,,3,102,"

This question was inspired by watching AlphaStar play Starcraft 2, but I'm also interested in the concept in general.

+ +

How does the AI decide what build order to start with? In Starcraft, and many other games, the player must decide what strategy or class of strategies to follow as soon as the game begins. To use a Starcraft-specific example, one must decide to 6-pool Zerg Rush before any scouting information has been gathered. Delaying the rush to wait for info means the opponent will be stronger when the rush arrives; the opponent may even discover the rush and prepare a dedicated counter.

+ +

This is not limited to deciding between a risky early all-or-nothing attack. Some long-term strategies also preclude others. Terran players must decide early on how heavily they will invest in mech units. They can focus on biological units like marines, or vehicular units like siege tanks and hellions. Going equally into both, however, often means a weaker army overall, because you must spend resources on the overhead costs of both tech trees. You must upgrade your vehicle weapons as well as your infantry weapons for instance, meaning less resources can be spent on more units. Suffice to say, Terran players usually must decide very early on what they will focus on.

+ +

How can AI make these kinds of choices given incomplete and often uncertain information?

+",32435,,,,,12/31/2019 21:27,How do AI that play games of incomplete information decide their opening strategy?,,0,1,,,,CC BY-SA 4.0 +17317,1,,,12/31/2019 23:12,,10,850,"

Neural networks are incredibly good at learning functions. We know by the universal approximation theorem that, theoretically, they can take the form of almost any function - and in practice, they seem particularly apt at learning the right parameters. However, something we often have to combat when training neural networks is overfitting - reproducing the training data and not generalizing to a validation set. The solution to overfitting is usually to simply add more data, with the rationalization that at a certain point the neural network pretty much has no choice but to learn the correct function.

+ +

But this never made much sense to me. There is no reason, in terms of loss, that a neural network should prefer a function that generalizes well (i.e. the function you are looking for) over a function that does incredibly well on the training data and fails miserably everywhere else. In fact, there is usually a loss advantage to overfitting. Equally, there is an infinite number of functions that fit the training data and have no success on anything else.

+ +

So why is it that neural networks almost always (especially for simpler data) stumble upon the function we want, as opposed to one of the infinite other options? Why is it that neural networks are good at generalizing, when there is no incentive for them to?

+",25415,,2444,,12/31/2019 23:35,12/13/2021 18:59,Why can neural networks generalize at all?,,4,3,,,,CC BY-SA 4.0