diff --git "a/stack_exchange/AI/Comments.csv" "b/stack_exchange/AI/Comments.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/AI/Comments.csv" @@ -0,0 +1,19863 @@ +Id,PostId,Score,Text,CreationDate,UserDisplayName,UserId,ContentLicense +7320,2351,0,Checkout https://github.com/tensorflow/magenta/tree/master/magenta/models/sketch_rnn,1/1/2018 7:51,,3208,CC BY-SA 3.0 +7321,4892,3,Yes. Feedback ANN and Recurrent ANN are types of what you want.,1/1/2018 8:11,,4446,CC BY-SA 3.0 +7322,4892,1,"Maybe make your question clearer with a diagram? I was not imagining RNNs as an answer, but ResNets with skip connections. Both RNNs and skip connections are useful, in very different ways, and could be explained in an answer.",1/1/2018 13:15,,1847,CC BY-SA 3.0 +7324,4879,0,"@TinaJ you should first analyse the architecture of the hardware(android device),very well.I see there are some questions to ask more in android community,as well.",1/1/2018 18:40,,1581,CC BY-SA 3.0 +7332,4892,1,"Thanks for adding the diagram. This is closer to RNN in design as it involves loops, and thus necessarily some kind of time step parameter in order to decide which value to feed. I'm not sure how much that kind of feedback connection has been studied, and what name(s) it goes under though.",1/2/2018 10:54,,1847,CC BY-SA 3.0 +7333,4892,1,"@OmG It looks like recurrent ANNs are the answer the question author was looking for, thanks! Would you like to expand your comment into an answer with some background/explanation?",1/2/2018 18:53,,75,CC BY-SA 3.0 +7338,4886,0,"Great question. Very smart people are thinking about knots! +https://en.wikipedia.org/wiki/Conway_notation_(knot_theory)",1/2/2018 22:44,,1671,CC BY-SA 3.0 +7340,4883,0,"Nice point. I'm looking to use prior AI victory vs. Human as a switch for trying less-optimal seeming in the subsequent game. It seems to me that for game AI, my ultimate goal is human engagement. 
For a small subset of players, analogous to Chess masters, engagement may be a function of an unbeatable AI, but for the vast majority of the human player base, losing every time is no kind of fun! In [Mbrane](http://mbranegame.com/), we made a point of designing weak, classical AI's that reason like humans.",1/2/2018 22:54,,1671,CC BY-SA 3.0 +7345,4914,2,"Welcome to AI! Could you please clarify your terms? When you say ""para"" , are you meaning ""paragraph"" or ""paraphrase"" or something else entirely? I'm assuming the ""diff"" is intended, to connote line-oriented comparison function. Apologies if my questions stem from ignorance of terms of art being utilized.)",1/3/2018 17:27,,1671,CC BY-SA 3.0 +7346,4910,0,"Welcome to AI! Very good questions, imo. *(I've taken the liberty of adding a couple of tags to improve searchability.)*",1/3/2018 17:32,,1671,CC BY-SA 3.0 +7357,4886,0,"@DuttaA it's a good point for certain questions, but what if this is just a high-level, theoretical question from a curious mind?",1/5/2018 20:43,,1671,CC BY-SA 3.0 +7360,4886,0,@DukeZhou for me any1 can ask a hi lvl question but only the geniuses can solve it,1/6/2018 4:26,user9947,,CC BY-SA 3.0 +7364,1936,0,This is why I believe more work needs to be done to address the control problem.,1/7/2018 6:02,,10913,CC BY-SA 3.0 +7366,1848,0,Politics and government point you made was an excellent example. My perspective however is that AI only needs to be able to understand emotion and not necessarily experience it.,1/7/2018 6:16,,10913,CC BY-SA 3.0 +7367,2762,1,You can do some research on [Adaptive Neuro-Fuzzy Inference System](https://in.mathworks.com/help/fuzzy/neuro-adaptive-learning-and-anfis.html?requestedDomain=www.mathworks.com)!,1/7/2018 7:00,,1807,CC BY-SA 3.0 +7369,4908,0,Can you please create paragraphs in your question body?,1/7/2018 12:21,,1581,CC BY-SA 3.0 +7380,4886,0,"@DuttaA :) One think I love about Conway's notation is that he uses it to teach children at math camps. 
Apparently, there are a fair few kids who are interested in knots, and the field is surprisingly complex. But as ""theorics"" are on topic here, I do quite like this question!",1/7/2018 21:07,,1671,CC BY-SA 3.0 +7399,4946,0,"I'm not proposing that a neural network be represented as a network of individual transistors. I'm proposing that it's represented with a massively parallel architecture based on linking together hundreds of thousands of ulta-light-weight chips capable of universal computation, so that they can be repurposed to simulate any type of neuron or any connection architecture that one might desire.",1/9/2018 0:31,,3323,CC BY-SA 3.0 +7404,4946,1,"@JorgePerez it's pretty much what you get with CUDA cores in modern GPU - highly parallel individual computational units. It differs from a full CPU core, as a full core has overhead features. A processor would have individual registers and a way to address the whole memory, while with CUDA you it's assumed you execute the same commutation on all cores, thus the memory is shared.",1/9/2018 7:10,,2997,CC BY-SA 3.0 +7407,250,2,The problem is - How DNN gives such high confidence ~ `99% !` that these some-object-pattern-like pictures are representing true objects ? Unless DNN is mentally ill and thinks it takes [Rorschach test](https://en.wikipedia.org/wiki/Rorschach_test) in some hospital :-),1/9/2018 17:55,,12028,CC BY-SA 3.0 +7410,4956,0,[This discussion](https://stackoverflow.com/q/46659525/712995) might be helpful,1/9/2018 21:09,,9647,CC BY-SA 3.0 +7411,4953,0,"Welcome to AI! Very interesting question that I'd also be interested in knowing the answer to. *(My guess would be that there is some overlap, but I know next to nothing about this subject!)*",1/9/2018 22:22,,1671,CC BY-SA 3.0 +7414,250,2,"Sorry, I couldn't resist :-) ""Bagel"" is round, but so it is sun,eyeball,etc. +""Projector"" could be an atom model, laser beam cross section, etc. And so on and so on. 
So why the hell, bagel is preferred over sun, projector is preferred over atom schema, etc. with `99%` confidence ? For me - the main difference between DNN and human recognition is the fact that humans are not forced to recognize something, while NN seems are !",1/10/2018 8:55,,12028,CC BY-SA 3.0 +7422,4965,0,You may be able to use sentence vectors and compare them.,1/11/2018 2:26,,4631,CC BY-SA 3.0 +7424,4970,0,"Thanks for your reply; I am aware of statistical models and their properties, but for this question I was interested in CDT only!",1/11/2018 9:11,,2193,CC BY-SA 3.0 +7425,4970,0,"The topic was a joy to research and answer. I introduced statistical models at the end for comparison, however I totally get your point.",1/11/2018 9:40,,10913,CC BY-SA 3.0 +7428,4744,2,Looks like your answer bore fruit re: https://ai.stackexchange.com/questions/4975/how-do-i-write-a-good-evaluation-function Thanks for contributing!,1/11/2018 20:06,,1671,CC BY-SA 3.0 +7430,4978,0,The fuel gets calculated as following: 1 + 2 + 3 + 4 = 10 for 4 fields and so on.. (the little gauss) so going it costs more to go more fields at once. Would this change anything in your function? I think if I have time I'll add the whole game instructions to the question later,1/12/2018 6:09,,11585,CC BY-SA 3.0 +7431,4978,0,I meant for **advancing** 4 fields.. e.g. advancing 6 fields whereever you are costs 1+2+3+4+5+6 = 21 fuel,1/12/2018 6:54,,11585,CC BY-SA 3.0 +7432,4978,0,"@po0l Then you may need `-max(f - 10 - (MAX_FIELD_INDEX - i) * (MAX_FIELD_INDEX - i + 1) / 2, 0)` instead or in addition to the above expression.",1/12/2018 12:31,,12053,CC BY-SA 3.0 +7433,4910,2,Update: I've come across [this](https://karpathy.github.io/2016/05/31/rl/) blog post which runs through Reinforcement Learning and tackles almost all of the questions listed in the original post. 
I'm unsure if encoding the game in pixels would still work given the complexity with pieces and their unique movements but it's a start.,1/12/2018 15:40,,11933,CC BY-SA 3.0 +7434,4977,1,"Great point. In [M] games, which are played on Sudoku, the constraints make many positions (coordinates+value) illegal after the first placement. There is no value in considering these illegal positions from the standpoint of placement, *but*, an important strategic layer is recognizing which placements minimize value of remaining, unplayed positions. (i.e. if I place an 8 here, it blocks my opponent from placing an 8 in that row, column or region. Essentially, ""how many strategic positions does this placement remove from the gameboard?"")",1/12/2018 18:08,,1671,CC BY-SA 3.0 +7443,4986,0,"welcome to AI SE. Your question is ok, but the title could be more descriptive to your problem. Could you please clarify it a little bit?",1/13/2018 14:09,,11810,CC BY-SA 3.0 +7451,4992,0,"My system tries to find license plate area. In order to do that, it gets input of rectangles that possibly contain digits. Those rectangles form cluster by distance of each other. And it counts how many rectangles containing letter of number are there in each cluster to finally get clusters as part of license plate area. Here's where neural network needed. So there are no information for regional number or place because input is not complete license plate.",1/14/2018 0:36,,12090,CC BY-SA 3.0 +7453,4998,0,Interesting. Even though the resources may be limited. Won't self-recursive algorithms be repeatedly able to improve themselves infinitely with the higher intelligence being able to find more avenues for improvement? 
There are also theories of [manipulating the radiation from black holes](https://www.scientificamerican.com/article/black-hole-computers-2007-04/) resulting in the most effective computation possible within the limits of current human intelligence.,1/14/2018 13:20,,12089,CC BY-SA 3.0 +7454,4998,1,"@Jeevan: Intelligence is still bound by the laws of physics. Yes it is *possible* that a higher intelligence than humans discovers that our knowledge about those upper bounds is in fact wrong. But no recursing can ""beat"" the true laws of physics. Your link about manipulating the radiation from black holes I think leads to the same upper bound as considering the maximum computation before forming a black hole (the two seem linked, this is part of the holographic principle in physics that constrains activity within a volume to be equivalent to a transformed version on the surface of same).",1/14/2018 15:50,,1847,CC BY-SA 3.0 +7457,5007,0,"Thank you so much for your answer, Yes, I agree with you .. now, I think that it might be better to convert all images to B/W (that would decrease the number of input neurons) ... And focus only on shape, since abusive and sexual content can be recognized without colors. + +If I train my Network with B/W image dataset, do I have to convert input images to B/W if I were to check them?",1/14/2018 23:28,,12113,CC BY-SA 3.0 +7462,2113,1,@VeenG I feel your definition relies too much on biology. There is also an information-centric definition of life where living things are viewed as thermodynamic systems that can reproduce and evolve as survival dictates.,1/15/2018 9:22,,10913,CC BY-SA 3.0 +7463,2239,3,Your conclusion is correct. 
It mostly depends with our definition of life.,1/15/2018 9:24,,10913,CC BY-SA 3.0 +7467,5007,1,"Yes, since that's the type of image it was trained on",1/15/2018 16:46,,6779,CC BY-SA 3.0 +7468,5007,1,This is relevant: https://github.com/yahoo/open_nsfw/blob/master/README.md,1/15/2018 16:47,,6779,CC BY-SA 3.0 +7479,5016,2,"You confuse runtime and trainings hardware. Alpha Go Zero ran on 4 TPUs, but it was trained on 5000 TPUs.",1/16/2018 8:51,,2227,CC BY-SA 3.0 +7480,4034,0,This answer makes an important point. Error correction on prediction models would provide a great incentive for an intelligent AI to learn and act in curious manner.,1/16/2018 9:21,,10913,CC BY-SA 3.0 +7482,2113,1,"Hey Seth, thank you for your comment. I was not aware of this information-centric definition. I will look into it and update my response. Thank you.",1/16/2018 13:45,,37,CC BY-SA 3.0 +7483,4998,0,"Have you considered the possibility of manipulating the laws of physics, say for eg: to network billions of black holes by clumping together all available matter via E-R bridges to compensate for the distance problem. A higher intelligence might think of a more efficient solution than that. But won't rewriting and editing the source code(laws of physics) of the universe be possible with a higher intelligence?",1/16/2018 14:56,,12089,CC BY-SA 3.0 +7484,4998,1,"@TransPlanetaryInjection: The idea of super intelligences ""manipulating the laws of physics"" is outside science. It is science fiction, and more suitable for worldbuilding.stackexchange.com than here. We cannot say whether it is possible or not any more than we can claim to have proof of god's existence or non-existence.",1/16/2018 15:16,,1847,CC BY-SA 3.0 +7486,4999,0,Thanks for providing valuable details for the problem. I had seen the example of gensim but I have a question will it be able to solve the problem that I mentioned in question. 
Although the solution I created is working fine in finding the similarity between sentences but it is getting stuck when order of words is jumbled.,1/16/2018 17:32,,9428,CC BY-SA 3.0 +7490,5019,0,You should revise your question a little bit.or else you transfer it to cross validated community.,1/16/2018 20:13,,1581,CC BY-SA 3.0 +7495,5003,0,"Good point. ""Given a sufficient number of questions..."" *(Of course, on the [Voight-Kampff test](http://www.alaricstephen.com/main-featured/2017/11/27/the-voight-kampff-test) seems quite effective with a limited number of questions;)*",1/17/2018 0:37,,1671,CC BY-SA 3.0 +7496,5005,1,"I'm not aware of what a heavy node is. I remember watching this movie and that term didn't catch my attention. If you're interested in some of the inner workings of AlphaGo, you could check out their latest paper - +http://www.mooc360.com/agz_unformatted_nature.pdf (note this is AlphaGo Zero - the successor to AlphaGo featured in Netflix)",1/15/2018 19:45,,1720,CC BY-SA 3.0 +7498,5029,1,"I seem to recall that ""human crowdsourcing"" is/was a solution in captcha identification, in the sense that a single human is not reliable, but a large sample yields correct results. (Makes me wonder if the same technique can be applied to algorithms, where any one algorithm may not be reliable, but an expanding group of unique algorithms might become more reliable.)",1/17/2018 0:55,,1671,CC BY-SA 3.0 +7508,2967,0,"Yeah, it's possible. If we can abstract past state into current state then MDP will still holds good.",1/17/2018 5:51,,2527,CC BY-SA 3.0 +7515,5046,0,"Welcome to AI! Thanks for bringing up information density in the context of black holes. 
Could I ask you to provide some reliable links, so those not familiar with these concepts can read more about them?",1/17/2018 21:48,,1671,CC BY-SA 3.0 +7523,5051,2,"If the agent goes right as opposed to donny nothing, does it get a different reward?",1/18/2018 12:53,,4398,CC BY-SA 3.0 +7524,5051,1,"'Do Nothing' action has 0 reward on all states. In this situation go-right will also have a 0 reward, since no damage is done.",1/18/2018 13:44,,5030,CC BY-SA 3.0 +7530,5063,1,That's a real good question man !! btw in the second last paragraph it will be AI :-D,1/19/2018 5:57,user9947,,CC BY-SA 3.0 +7531,1886,0,"Excellent review, C++ is a just as competent for AI as any other language. The only problem is that it doesn't have as many libraries for scientific computing as lets say Python, Java or Lua.",1/19/2018 7:36,,10913,CC BY-SA 3.0 +7535,5055,0,"Thanks for the answer! +Quick follow up, I don't understand how it solves the vanishing gradient problem since there are still infinitly many points where the derivative of ReLU is zero.",1/19/2018 8:38,,12026,CC BY-SA 3.0 +7536,5055,1,https://stats.stackexchange.com/q/176794,1/19/2018 8:58,,5104,CC BY-SA 3.0 +7537,1886,0,"@Soheyl Mainly yes, but I also explained, how should you use .net if you really want to.",1/19/2018 9:43,,2255,CC BY-SA 3.0 +7538,5067,2,"I encourage you to use some high level framework like Keras to get the work done easily. It uses Tensorflow (or Theano) as backend but it is almost transparent. The type of layer depend on you representation of the features. You should concern not only about the type of layers but also about the entire network architecture. Anyway as a baseline, I'll try with a sequential model of full conected layers combining layers with relu and sigmoid activations.",1/19/2018 16:25,,12121,CC BY-SA 3.0 +7547,5063,1,"Welcome to AI! (I edited slightly, assuming ""IA"" was a typo... is this correct?) Have you looked at Erik Demaine's work? 
He's a major origami force, and wrote a nice paper on Algorithmic Combinatorial Game Theory. You can [find his published papers here](http://erikdemaine.org/papers/). This paper, [Geometric Folding Algorithms: Linkages, Origami, Polyhedra](http://erikdemaine.org/papers/GFALOP/) looks like it might apply.",1/19/2018 20:47,,1671,CC BY-SA 3.0 +7559,5067,0,"Hey, thanks a lot for the feedback! I'll check Keras and I'll try to implement that idea and I'll try to reply soon.",1/20/2018 16:31,,12217,CC BY-SA 3.0 +7567,5083,0,After reading the Quora post I do not feel that it is a scam.,1/22/2018 3:37,,5763,CC BY-SA 3.0 +7569,5083,0,"@sage yeah. It's not a sure thing. The review could have been posted by anyone. There's nothing to show it's legit, but I don't think there's strong proof for it being a scam either. The QA thing isn't sure proof either.",1/22/2018 5:09,,12254,CC BY-SA 3.0 +7570,5085,1,won't simple logic work better? like if the string contains a certain substring then it must refer to a dog...coz i think its impossible for a machine to learn what words might correspond to a dog without seeing it before..like if someone says pug in some other language then human brain has no way of identifying its a dog...its my opinion although...and logical operations will be much faster,1/22/2018 11:24,user9947,,CC BY-SA 3.0 +7571,5080,0,can you be more precise with some examples and their computations?,1/22/2018 11:26,user9947,,CC BY-SA 3.0 +7572,5085,1,"The problem with that is that because I am trying to analyze twitter posts, with hashtags like #follow4follow which are totally unrelated with dogs, I would need a neural network to carry out the decisions. Besides its not going to be only dogs, cats, cars, mountains, calendars, you name it. 
+Basically it is supposed to work like a translator neural network, translating between ""strings"" and possible #hashtags +@DuttaA",1/22/2018 11:45,,12264,CC BY-SA 3.0 +7573,5085,1,That's a NLP problem according to me..and no satisfactory solution exists to NLP..but you might have a case specific solution from experts..lets see,1/22/2018 11:55,user9947,,CC BY-SA 3.0 +7574,5083,1,"@harsh99 The fact that his (solo developed) super-AI, that is ""completely unscripted,"" answered 2 complicated social questions with answers that were both found on Quora is pretty damning evidence. Add the ""open source"" (but not anywhere) aspect of the project, the failed kickstarter, the patreon, the only tech demos being youtube videos, and you have what should be an obvious scam. It's sad that anyone would fall for it and donate money to that scam artist. He claims to have an AI more advanced than anything at Google/Facebook/Microsoft and can't prove it with a simple web demo.",1/22/2018 12:16,,12258,CC BY-SA 3.0 +7575,5083,1,"@sage The fact that the open source code isn't available anywhere troubled me too. And yes, I also noticed the Kickstarter didn't even get $150. So yeah, I guess it's a scam. Thanks!",1/22/2018 14:25,,12254,CC BY-SA 3.0 +7576,5083,1,"However, I'm tempted to add it might be programmed to fetch answers from the web.",1/22/2018 14:34,,12254,CC BY-SA 3.0 +7577,5088,0,Could you expand on SVD and describe on a high level how it is used? This is so that the answers on the stack can be self-contained.,1/22/2018 18:14,,4398,CC BY-SA 3.0 +7583,5080,0,"@DuttaA you know the connection and the combination of neurons of DNN, for example, like ReLu or some other finite set of neurons, I ask whether there is a finite set of neurons which can implement such a DNN that given a c.e. function or Turing Machine ,we can simulate the c.e. 
function or Turing Machine by the DNN?",1/23/2018 0:43,,9237,CC BY-SA 3.0 +7584,5080,0,https://web.archive.org/web/20130502094857/http://www.math.rutgers.edu/~sontag/FTP_DIR/aml-turing.pdf,1/23/2018 1:55,,9237,CC BY-SA 3.0 +7585,5080,0,http://people.cs.georgetown.edu/~cnewport/teaching/cosc844-spring17/pubs/nn-tm.pdf,1/23/2018 2:03,,9237,CC BY-SA 3.0 +7586,5080,0,https://www.cs.montana.edu/~elser/turing%20papers/Schoolboy%20Ideas.pdf,1/23/2018 2:05,,9237,CC BY-SA 3.0 +7588,5080,0,https://www.dartmouth.edu/~gvc/Cybenko_MCSS.pdf,1/23/2018 2:44,,9237,CC BY-SA 3.0 +7589,5080,0,"It seems that the question has been settled by the works above, but I have not read them through",1/23/2018 2:44,,9237,CC BY-SA 3.0 +7590,5093,0,It's multiplied by 0 as 1-y,1/23/2018 3:50,user9947,,CC BY-SA 3.0 +7591,5093,0,what if 1-y is not zero and 1-(h(x) is zero?,1/23/2018 3:53,,12273,CC BY-SA 3.0 +7592,5093,0,This particular cost function is for sigmoid cost function...And sigmoids do 1 or 0 as per convention ....It is based on convention that y must be 1-0 and anything can't be put,1/23/2018 5:40,user9947,,CC BY-SA 3.0 +7596,5091,0,"Thanks for the reply! I had a typo saying the ""state composed of a tensor with cards in the players hand"", I meant in the player's own hand. The last layer could indeed be a softmax activation layer. Once i'm able to test it, I'll reply (i'm still having problems with the rest of the architecture). 
I tried 2 fully connected layers with relu and sigmoid activation and curiously got a worse result than with a simple weights matrix.",1/23/2018 14:33,,12217,CC BY-SA 3.0 +7599,5063,0,"thanks @DukeZhou, there is a lot of info to take in there and it seems to have a more theoretical and geometrical approach, but I thank you for the link and I will take a closer look as soon as I have some spare time",1/24/2018 15:58,,12210,CC BY-SA 3.0 +7600,5115,1,This is similar to the way that [example-based machine translation](https://en.wikipedia.org/wiki/Example-based_machine_translation) systems are designed.,1/25/2018 8:23,,2050,CC BY-SA 3.0 +7601,3272,0,[Natural language understanding](https://en.wikipedia.org/wiki/Natural_language_understanding) systems are often implemented using [discourse representation theories](https://en.wikipedia.org/wiki/Discourse_representation_theory) instead of qualia.,1/25/2018 8:28,,2050,CC BY-SA 3.0 +7602,5118,0,So the difference between the two is basically whether the range of data is between <0; 1> or <-1; 1>?,1/25/2018 9:46,,12327,CC BY-SA 3.0 +7603,5115,1,These are advanced levels of NLP and a lot of research is going on this matter...No researcher has able to come up with a flawless theory which describe how humans develop language among themselves....So writing such programs still remain a distant dream.... According to me all the nlp programs are kind of superficial in nature,1/25/2018 11:14,user9947,,CC BY-SA 3.0 +7607,5125,0,say whaaaaat? anyways nn's are modeled after our brain..can we say a no. is even just by seeing it? no..hence nn cant perform it,1/25/2018 17:49,user9947,,CC BY-SA 3.0 +7608,5125,0,"Your reasoning just doesn't convince me. I just am trying to deduce the reasons on why this problem can(t) be modelled with neural nets. In the lines of, say the virtue of the modulo function being non continuous etc.",1/25/2018 18:24,,11835,CC BY-SA 3.0 +7609,5115,1,"Welcome to AI! 
My sense is that language development is partly creative, so it may require algorithms that can think abstractly in a general semantic context. Combinatorially speaking, I think you'd also need to include [""grass eats"", ""grass eats deer"", ""deer eats man""] and give a distinct value to ""eats"" as a verb, to distinguish from nouns (possibly this could be approached logarithmically.) You'll also have to contend with poetic constructions ""Deer, man eats"", etc., nonsense such as ""deer eats man"", and abstractions ""grass eats man"" (in the sense of ""dust to dust"", as fertilizer.)",1/25/2018 18:33,,1671,CC BY-SA 3.0 +7610,5125,0,Because its random for one..NN's are for pattern recognition it NN certainly doesn't work on maths like that..it models a function by looking at the pattern not because it is continuous or something else,1/25/2018 18:33,user9947,,CC BY-SA 3.0 +7611,5125,0,Welcome to AI! Can you post some links referencing the problem. I found this [Odd-even distinction](http://www.psych.mcgill.ca/perpg/fac/shultz/StudentProjects/Odd-evenDistinction.htm) but there's no real detail regarding why it's so difficult.,1/25/2018 18:39,,1671,CC BY-SA 3.0 +7612,5125,1,@DukeZhou Actually I'm not sure on whether it is a difficult problem in the first place. I couldn't really come up with a solution nor references where it is discussed. Hence the question,1/25/2018 19:03,,11835,CC BY-SA 3.0 +7617,2732,0,Can you tell me how to deal with the vanishing gradient problem?,1/26/2018 16:30,user9947,,CC BY-SA 3.0 +7618,5138,0,"Welcome to AI! *(That is very elegant output, imo, nonsensicality aside;)*",1/26/2018 20:41,,1671,CC BY-SA 3.0 +7619,5119,1,"You make a good point about holding off on hardware purchases b/c newer, more powerful products continuously emerge, which has the effect of reducing the price of previous products. The longer you wait = more bang for your buck!",1/26/2018 21:29,,1671,CC BY-SA 3.0 +7622,5153,0,could you give an example? 
i dont think this does anything,1/27/2018 0:45,,6779,CC BY-SA 3.0 +7623,5138,0,"How come you pick one at ""random""? It seems like that be less accurate than just picking the one with the highest decimal",1/26/2018 21:10,,12026,CC BY-SA 3.0 +7624,2732,0,if you change the activation function to relu or elu that would fix the issue,1/27/2018 7:41,,5054,CC BY-SA 3.0 +7629,3873,1,it would also be quite diminishing the potential power of AI,1/27/2018 20:00,,6779,CC BY-SA 3.0 +7632,3873,1,@k.c.sayz'k.csayz': The problem is the consequences for it going runaway are way worse than any loss of performance.,1/27/2018 21:27,,38,CC BY-SA 3.0 +7634,5115,0,"Yes, we can add all those metaphorical sentences. Basically every sentence that is syntatically correct and not total nonsense. But for the sake of simplicity I left them out. Program would work that way anyway that it would count frequency of sentences used in large amounts of texts. Your examples would have very small frequency and so connections between those words wouldn't have that much meaning for the definition of word as would frequent sentences.",1/27/2018 21:56,,12251,CC BY-SA 3.0 +7637,5120,0,Never heard of it. I'll read about it.,1/27/2018 22:07,,12251,CC BY-SA 3.0 +7641,3873,0,i added an answer that is a response to the above,1/28/2018 0:42,,6779,CC BY-SA 3.0 +7644,3873,0,"Hannu Rajaniemi wrote an interesting story on this subject, called ""[Deus Ex Homine](https://www.e-reading.club/chapter.php/71247/44/Dozois_-_The_Years_Best_Science_Fiction_23rd_Annual_Collection_%282006%29.html)"" re: benevolent superintelligences, described as recursively self-optimizing algorithms. In this story, the superintelligences continually improve and fix things, but are ungovernable and run wild. It's an interesting take because it looks at the scenario from the opposite angle (i.e. not superintelligence that want to destroy humanity, but improve it. 
:)",1/28/2018 22:22,,1671,CC BY-SA 3.0 +7650,5190,0,Actually I was asking how to back calculate in a nn to represent the memory of the object it is classifying... Image is just a specific case,1/30/2018 4:48,user9947,,CC BY-SA 3.0 +7654,3885,2,Several algorithms are mentioned in this slide of class of M.Kochenderfer http://adl.stanford.edu/aa222/Lecture_Notes_files/chapter6_gradfree.pdf (p.s. The problem is more general then Deep Learning),1/30/2018 13:53,,12437,CC BY-SA 3.0 +7658,5174,0,"Welcome to AI! Sounds like an interesting problem. (I've take the liberty of adding combinatoric and combinatorial-games tags.) Can I ask, do you have a sense of the size of the gametree? What are your restriction on number of plies the algorithm can search in a reasonable time? I mention it because, if the game tree expands sufficiently, depth search may not be fruitful until the game tree starts to become tractable.",1/30/2018 23:50,,1671,CC BY-SA 3.0 +7665,4707,0,This is what I have come up with so far: https://github.com/josefondrej/Symmetric-Layers,1/31/2018 14:02,,11359,CC BY-SA 3.0 +7674,5209,0,"Yeah the problem is very basic, I'm just trying to predict the quadrants because i'm still trying to understand how to build a network with more than one neuron. Do you any ideas for an easy and useful application of a network I could try?",2/2/2018 13:19,,11795,CC BY-SA 3.0 +7675,5209,0,@W22 if you re using perceptron learning....build your own 2d data-set by randomly initializing nos...don't make it linearly separable...and see what happens..don't use a neural net for now...just a simple one node classifier no hidden network..then plot the decision boundary...see if the decision boundary is visually appealing...if not why? I actually asked a question about this https://ai.stackexchange.com/questions/4399/perceptron-learning-algorithm-different-accuracies-for-different-training-metho,2/2/2018 13:43,user9947,,CC BY-SA 3.0 +7678,5211,0,"Welcome to AI! Great practical question. 
I don't have practical experience with this re: AI, but from years of recording experience, I can tell you that microphone quality can greatly affect the quality of the input. As an analogy, at my office we have a PDF-to-Word function on the copier for scanned documents. Comparing the output to Adobe's also poor conversion capability, I noticed that the scanning introduced significantly more errors to the same document, no doubt a function of the extra noise and loss of fidelity in the scan.",2/2/2018 19:35,,1671,CC BY-SA 3.0 +7679,5211,0,Yes it should. I would guess that if you normalize the entire data set then it should alleviate things somewhat but I have no evidence for that belief.,2/2/2018 16:05,,6779,CC BY-SA 3.0 +7681,5136,0,Is your inference that output layer does not contain any special properties correct?,2/3/2018 6:54,user9947,,CC BY-SA 3.0 +7682,5136,0,@DukeZhou I think Neural Nets without any hidden layer can perform some basic classification tasks..am i wrong?,2/3/2018 6:55,user9947,,CC BY-SA 3.0 +7683,5221,1,"The description of your environment seems incomplete. I cannot make sense of it. For example you say ""Once a column is chosen, **that** row of the column is filled from bottom up like in a tetris fashion."" [emphasis mine]. But at no point has a row been described or selected by the agent or the environment . . . some visualised steps of the game might help, but also review whether you have given a complete and valid description. You definitely have some kind of implementation issue, since for an episodic problem it should be fine to set $\gamma = 1$ and still expect to find optimal solution",2/3/2018 9:53,,1847,CC BY-SA 3.0 +7686,5221,1,"Sorry for being unclear, indeed my description of action is incomplete. I will try to explain it again. + +Action: one of the m columns chosen to be filled. Or more precisely. +If column j is chosen, S_ij will be set to 1, where i is the min row i such that S_ij != 0. 
(Here S is the boolean matrix representation of the states)",2/3/2018 15:07,,12505,CC BY-SA 3.0 +7688,5169,0,Define fair. :),2/3/2018 15:51,,12517,CC BY-SA 3.0 +7690,5221,1,"Regarding: ""This very simple environment is trained using DQN with a single ff layer."" If I understand this correctly, you have a linear regression here, there is no hidden layer. Could you confirm the NN architecture? I am imagining it is 15/3 linear network, with 15 inputs -> 3 outputs predicting reward for each of the 3 actions?",2/3/2018 17:28,,1847,CC BY-SA 3.0 +7691,3387,0,"Hi, did you find an answer? I'm interested as well",2/3/2018 17:54,,12518,CC BY-SA 3.0 +7692,5221,1,"Yes state matrix is flatten, the network has 15 input and 3 output. Hidden layer has 10 nodes, learning rate is 1e-3. In the first environment, after 20+ episodes the agent is able to obtain the maximum score for any chosen m and n. But not after the modification.",2/4/2018 2:31,,12505,CC BY-SA 3.0 +7695,5226,0,Yes I figured it would be a very natural and straight-forward thing to try. I'll do the experiment some time soon,2/4/2018 21:50,,2522,CC BY-SA 3.0 +7697,5222,0,"Thanks for clarifications (re: Chess experts) and the link to the paper. I didn't have time to really research, but the question popped into my head so I figured I'd ask it. As a followup, since your Chess and Chess AI knowledge greatly exceeds mine, can I ask why were the Chess experts surprised?",2/5/2018 22:52,,1671,CC BY-SA 3.0 +7698,5222,0,"@DukeZhou If you start a new question, we can discuss further.",2/5/2018 22:53,,6014,CC BY-SA 3.0 +7709,5237,1,"Thanks so much for your reply! I have tried more hidden nodes, output layer is linear and have waited more than 10k episodes for convergence. I think it might be a mistake in my implementation. Something that I have suspected is due to my implementation of illegal moves. 
During the choose action step: If the entire column is already filled (S_ij = 1, forall i), I enforced that the column is not allowed to be chosen. And I would choose the next highest action value as the best action. Will this be the main cause of the problem?",2/7/2018 1:22,,12505,CC BY-SA 3.0 +7712,5157,0,"@Bobs: Potentially. However, that is *already* a statement from ""some unproven theory of consciousness"". There is absolutely no evidence that consciousness cannot be achieved physically via evolution. In that regards, all we have is a big question: *How did life on Earth result in conscious systems?* The answer to date: We don't know.",2/7/2018 8:15,,1847,CC BY-SA 3.0 +7713,5237,1,"@terenceflow: I don't think that would be a direct cause of a problem - although you should look how you handle the data around that change. I would actually recommend changing the problem so that you terminate the episode on an illegal move, meaning the agent scores whatever it got so far (this is what would happen in Tetris). Note that filling the last column is not necessary for optimal behaviour, the agent scores 0 whatever it does, so it doesn't matter and you should not expect an agent to fill all columns.",2/7/2018 8:20,,1847,CC BY-SA 3.0 +7719,5247,2,"Thanks again. I have another question. You say ""_finding a ratio of how likely a sample is to be generated by the target policy_"". How do we decide this, given that we know only the behaviour policy? Isn't the target policy something we have to find?",2/7/2018 16:59,,12574,CC BY-SA 3.0 +7720,5247,1,"We can get an estimate of this readily by finding the ratio of the target policy, pi, taking that action verses the behaviour policy, mu. Thus the ratio is P= pi(s,a)/mu(s,a) where a and s are the action chosen by mu and the state, respectively.",2/7/2018 19:06,,4398,CC BY-SA 3.0 +7721,5248,0,"Nice answer. 
Your point about comparison between AI's re Chess is interesting in terms of the limitation based on Chess' loopiness and the Win/Loss/Draw triad. (Possibly, in the future, we will need finite, intractable games that allow more granular analysis in terms of results.) I am familiar with the history of Chess engines, and the massive amount of effort and human knowledge that went into them, but the context of the lack of success re: the much more complex 19x19 Go had an opposite ramification to me.",2/7/2018 22:14,,1671,CC BY-SA 3.0 +7722,5248,0,"Specifically, my assumption was that if AlphaGo could beat the top humans in the significantly more complex game, it seemed reasonable that it would beat not only the top humans, but the top previous AI's in any other game.",2/7/2018 22:17,,1671,CC BY-SA 3.0 +7723,5250,0,Aaah. So it stemmed from lack of confidence in the new AI method. That makes sense.,2/7/2018 22:17,,1671,CC BY-SA 3.0 +7724,5136,0,Yes it is correct. It works on the same basis as any other layer except it it usually uses a special non-linear function that 'sorts' the input into classification categories.,2/7/2018 22:40,,9271,CC BY-SA 3.0 +7725,5136,0,"Also, you're right on your assumption, a NN without any hidden layers can preform classification. This network is actually the simple logistic classification algorithm.",2/7/2018 22:42,,9271,CC BY-SA 3.0 +7727,5247,1,"My question was, where do we obtain pi(s,a) from, while we only have mu(s,a) ? That is, where do we get the target policy from, while it's our goal to find it?",2/8/2018 5:19,,12574,CC BY-SA 3.0 +7728,5247,1,"Your target policy is initialized to random, it’s just a matter of updating it.",2/8/2018 11:40,,4398,CC BY-SA 3.0 +7737,5264,0,"You didn't get my question, I asked about brain's power not how the brain functions (i know we dont understand it clearly). Your 1st point, normal transistor is much older nowadays we use FET's which is smaller. 
A human neuron is a few centimeters long; millions of FETs can be fitted in those dimensions, easily replicating a neuron. 2nd point: it is advantageous to have distortionless memory, and in much greater amount, which an animal doesn't possess. And third, I am only asking about the power, that is, the i9/i7/in processor, not how you are using the processor. But overall I get what you are trying to say",2/9/2018 20:19,user9947,,CC BY-SA 3.0
Neural networks only use analogies or simple similarity but ignore logic; that is their problem, I think.",2/10/2018 11:05,,12251,CC BY-SA 3.0
Any good model outputting a translation of sorts will almost certainly have to account for this heterogeneous word context to some degree. For a great book that covers some interesting ideas around differences in mechanics between languages, check out Through The Language Glass by Guy Deutscher.",2/10/2018 16:11,,5210,CC BY-SA 3.0
These emotions are instinctive - they are very important in defining social behavior, which enhances the survivability and fitness of the ""group"". I can't see how human emotion is qualitatively any different from that of other mammals. In contrast, consciousness is several orders of magnitude more developed in humans than in the nearest other species. Where human emotion does seem to differ from that of other species is only in the ability to control emotion through consciousness.",2/12/2018 3:06,,12670,CC BY-SA 3.0
Heuristically my impression is that neural networks would perform rather badly.",2/12/2018 8:00,,7459,CC BY-SA 3.0 +7772,5278,0,This answer makes no sense without a definition of 'your'.,2/12/2018 9:22,,12509,CC BY-SA 3.0 +7778,5260,0,I had this review task given to me for your answer as a new user. I would like to point out that in this community it is desirable to give a detailed answer along with explaining why or how you came to the conclusion. Also if you are speculating or have part knowledge of the answer you can write it in comments.,2/12/2018 16:01,user9947,,CC BY-SA 3.0 +7780,5281,1,"About the first part: My old AI prof used to say that asking if computers can think is like asking if submarines swim. The answer depends on how you want to define ""swim"" more than it does on what the machine is actually doing.",2/12/2018 16:40,,12699,CC BY-SA 3.0 +7794,3138,0,Possible duplicate of [How could emotional intelligence be implemented?](https://ai.stackexchange.com/questions/26/how-could-emotional-intelligence-be-implemented),2/13/2018 2:32,,2050,CC BY-SA 3.0 +7798,5290,0,This is quite a complex problems requiring very thorough data if you want to predict a new customers behavior..No satisfactory algorithm exists and it is a very hot research field. There are a few approaches but most of them I think you have to manually write the code.,2/13/2018 16:56,user9947,,CC BY-SA 3.0 +7799,5290,0,"@DuttaA: the concrete question ""find items that are particularly frequently purchased through online stores by paying shipping fee"" doesn't seem complex, eve doesn't seems to need applied AI. About prediction of customer preferences, all big online shops has it. 
See, for example, Amazon's ""customer who buy this item frequently ...""",2/13/2018 17:01,,12630,CC BY-SA 3.0
However, it turns out that the rate at which the error goes to 0 as the number of primes I guess increases is exceptionally fast.",2/14/2018 7:34,,12732,CC BY-SA 3.0 +7810,5290,0,"Your could use clustering techniques such as DBSCAN to identify tendencies (clusters) in buying frequency. I'm not familiar with R, but [there's a package for that](https://cran.r-project.org/web/packages/dbscan/README.html).",2/14/2018 13:48,,10176,CC BY-SA 3.0 +7811,3923,0,"Just to clarify, you are asking for an algorithm that takes in a new case, estimates it’s caseload on the average worker, and assigns it to a worker such that no one is over burdened? Is caseload measured in difficulty or in estimates time to complete?",2/14/2018 13:50,,4398,CC BY-SA 3.0 +7813,3654,0,"I like that you start with a keyboard. The first thing I thought of, over and above the pressure sensitivity was the ""Mood Organ"" musical device at the beginning of Do Androids Dream of Electric Sheep. *(Don't know if this was intended, but the impact of music on emotions is undeniable, and it occurred to me recently that music may be thought of as emotions translated into algorithms. Your connection with physical interaction of the human with the keyboard is quite useful information. I have only a little music theory, and rudimentary piano, but know exactly what you're talking about!)",2/14/2018 18:46,,1671,CC BY-SA 3.0 +7818,5162,0,"I personally have no doubt human reactions are largely mechanistic, and would surely be deterministic if quantum theory didn't open the door to true randomness in nature (validity of free will tbd.) But thank you for recognizing my reduction of the question. 
This is a complex problem, so starting small is likely the way to go.",2/14/2018 20:34,,1671,CC BY-SA 3.0 +7821,5222,0,https://ai.stackexchange.com/questions/5234/why-were-chess-experts-surprised-by-the-alphazeros-victory-against-stockfish,2/14/2018 21:35,,1671,CC BY-SA 3.0 +7824,5309,0,Don't you think the facial recognition problem might be more of features such as eyes which are black not getting too prominent against darker skin tone rather than any racism...And the company producing the same algorithm asks the offender a questionnaire which has no question related to race...So it is based on facts...Such speculative answers based on newspaper journalists knowledge is not appreciable,2/15/2018 4:34,user9947,,CC BY-SA 3.0 +7828,5309,1,You may want to include more information from your links in this answer; it's always better to include information in the answer itself rather than depending on links.,2/15/2018 11:02,,145,CC BY-SA 3.0 +7829,5290,0,@pasabaporaqui i think OP did not frame the question correctly and he is talking basically of a recommend-er system. Statistical approach for prediction will not extend to new users easily.,2/15/2018 11:40,user9947,,CC BY-SA 3.0 +7837,5322,3,"Learning in a NN is iterative, thus, based in prior knowledge. In particular, initial value of NN parameters is obviously a prior knowledge.",2/15/2018 16:20,,12630,CC BY-SA 3.0 +7838,5293,0,"so `x(t) = w(t) + s(t - 1)` is in fact concatenation of 2 vectors, rather then addition?",2/15/2018 16:26,,12691,CC BY-SA 3.0 +7839,5293,0,"yes, it is. ""w"" and ""s"" have very different sizes, no addition (sum componentwise) possible.",2/15/2018 16:27,,12630,CC BY-SA 3.0 +7840,5322,0,"@pasabaporaqui thanks a lot for the feedback. What my question is driving at is, in most real life problems we have additional information about the domain. 
Is there a way to inject this prior domain knowledge to improve the success of the model?",2/15/2018 17:21,,10913,CC BY-SA 3.0 +7841,5325,1,"Specifically, theta0 is the bias and theta1 is 'slope' of the regression line.",2/15/2018 18:20,,9271,CC BY-SA 3.0 +7843,5322,3,"Strictly talking, a neural net doesn't learn. The process of learning is done outside the net (i.e. backpropagation algorithm). This is one of the reason is not correct to say that a NN simulates brain (in the brain, learning is done inside the net). If it doesn't learn, it doesn't use prior knowledge. Prior knowledge can be used to optimize the learning algorithm, including the initial net parameters, not the net behavior itself. If one rule is included in the net (as output and using it in the error function), it is not prior knowledge but a problem constrain that the net will try to fulfill",2/15/2018 19:50,,12630,CC BY-SA 3.0 +7844,5322,1,Lookup LSTM in relation to neural networks-- it's a method to take advantage of prior knowledge,2/15/2018 19:55,,2329,CC BY-SA 3.0 +7845,5328,3,I think the questioner was looking for something like LSTM rather than ways the network designer could incorporate their own prior knowledge.,2/15/2018 19:56,,2329,CC BY-SA 3.0 +7852,5320,1,"Great question! In [Accelerando](https://en.wikipedia.org/wiki/Accelerando), Stross imagines rogue superintelligences that naturally arise out of the type of cost reduction motivation Jaden mentions. Regardless of whether such rogue superintelligences emerge, it's a nice commentary on pure economics *sans* humanity. 
Rajaniemi refined and extended this concept beautifully in [The Causal Angel](https://en.wikipedia.org/wiki/The_Causal_Angel), and broke down the ""why"" from a game theory perspective quite convincingly, being a mathematician originally.",2/15/2018 21:47,,1671,CC BY-SA 3.0
I probably need to sit down and code it to see but on a theoretical basis I am wondering if this is the case.",2/16/2018 20:30,,12788,CC BY-SA 3.0 +7870,5345,1,[Tower of Hanoi](https://en.wikipedia.org/wiki/Tower_of_Hanoi) is a great suggestion. Props!,2/16/2018 20:37,,1671,CC BY-SA 3.0 +7872,5344,0,@MikeAI actually I have very less knowledge of mathematics to comment on your question so I just gave the intuition what might be happening..also experts say backprop is notoriously difficult to understand why and how it is doing what it is doing. Maybe ths will help https://ai.stackexchange.com/questions/1479/do-scientists-know-what-is-happening-inside-artificial-neural-networks,2/16/2018 20:42,user9947,,CC BY-SA 3.0 +7873,5043,0,Welcome to AI! I would suggest searching this Stack for questions related to ethics. You may also be interested in this recent question: [Few implementations in Machine Ethics?](https://ai.stackexchange.com/q/4532/1671) See also: https://ai.stackexchange.com/questions/tagged/ethics,2/16/2018 22:31,,1671,CC BY-SA 3.0 +7874,5347,1,Hello all. I have been researching this evening and found that both approaches can be combined to create an optimal solution. I'm still unclear on 'safe mutation' so am not answering my own question but an leaving a paper here that describes using novelty of both structure and behavior as a fitness function: https://www.researchgate.net/publication/262373008_Encouraging_Creative_Thinking_in_Robots_Improves_Their_Ability_to_Solve_Challenging_Problems,2/17/2018 5:43,,12788,CC BY-SA 3.0 +7886,5054,0,"Your premise doesn't follow your conclusion. It seems you're implying that we use ReLU on the model's weights. We multiply the weights by the input which might not necessarily yield values [0, 1].",2/17/2018 23:37,,9271,CC BY-SA 3.0 +7892,4389,0,"Is addition of the vectors so they sum to the knowledge vector the only criterion? 
If so, your problem seems to be a multi-dimensional case of the coin problem https://en.wikipedia.org/wiki/Coin_problem https://msp.org/involve/2011/4-2/involve-v4-n2-p07-p.pdf",2/18/2018 0:20,,9271,CC BY-SA 3.0
The GA otherwise proceeds as normal, substituting novelty for fitness (reward)."" - taken from here: https://arxiv.org/pdf/1712.06567.pdf",2/19/2018 22:19,,12788,CC BY-SA 3.0 +7922,5383,1,"""Nothing can beat the brute force approach""? Beat in which sense?",2/20/2018 16:33,,12867,CC BY-SA 3.0 +7926,5383,0,"Welcome to AI I might supplement this answer by noting that some problems are intractable, and can't be solved by brute force. But I certainly agree that brute force algorithms are a fundamental form and function of AI. In a combinatorial game theory, it seems that a game or puzzle, such as Sudoku, may only be said to be solved through brute force (exhaustion).",2/20/2018 21:54,,1671,CC BY-SA 3.0 +7927,5385,2,All kinds of phrenology are discredit by itself.,2/21/2018 8:34,,12630,CC BY-SA 3.0 +7928,5355,0,"@MolnárIstván I have talke with my professor and he said me that there isn't enough data to train the network. With 12 inputs and only 718 cases, it only covers the 17% of the combinations. I don't understand why I have to use JavaNNS. The train data is part of a Kaggle competition and it works with linear regression.",2/21/2018 9:46,,4920,CC BY-SA 3.0 +7929,5385,1,"Male and females can be differentiated by their.skull size,etc...Even a person's ethnicity can be determined...It is a very well known field in forensics...So since.huamns can do it by measuring skull feature sizes of course a nn can do it....I don't see the disconcerting part though",2/21/2018 9:49,user9947,,CC BY-SA 3.0 +7930,5386,2,"So the high accuracy of the system has only been validated on some dating sites . . . there was a similarly flawed ""criminal detector"" in the news last year that used police photos for positive class, and social media photos for negative class. It learned to detect the style of police photos . . . very worrying since the process for running the algorithm could well end up being ""catch someone we think is suspicious, take their photo, pass it to the algorithm . . . 
aha, computer thinks we may be right!""",2/21/2018 16:59,,1847,CC BY-SA 3.0 +7941,5404,0,I think you are talking about a very specific are of applied AI.,2/23/2018 9:14,,12630,CC BY-SA 3.0 +7942,5404,0,"""The word embeddings mimic the real human brain much better"" : it is difficult to prove true or false of this statement, taken into account that what we known about logic of brain is ... nothing.",2/23/2018 9:15,,12630,CC BY-SA 3.0 +7943,2337,1,"In reference to the first option: why can't we make now a piece of relatively simple software that is able to improve its own code, so it would spiral out from current constraint of waiting for people to create machine slightly better than themselves?",2/23/2018 15:05,,12935,CC BY-SA 3.0 +7944,5404,0,"@pasabaporaqui it based on research done by the neuroscientist Jeff Hawking, and based on our current research it is much more similar to the human brain than conventional neural networks. Also, this area of AI is pretty much basic Natural Languages Processing.",2/23/2018 15:23,,4631,CC BY-SA 3.0 +7949,1418,0,"Completely agree, and would like to add that most of those ideas were generated before 1990s, before most of modern IT technologies came to existence. Currently it is not unthinkable to imaging nice virtual environment with extensively detailed surroundings, so that AI would not required to be in physical body. Even if it will not be contained in closed virtual world, it will be possible to create such a massive stream of information via multiple channels like web-cams, microphones and numerous APIs, so AI would have enough information to build world model.",2/23/2018 18:56,,12935,CC BY-SA 3.0 +7965,5416,0,"The last paragraph has it ;today,systems lack a sufficient scope of memory or processing power. 
In addition, the level of complexity that connects to the process of AGI research has also limited the progress of AGI research.",2/24/2018 20:53,,1581,CC BY-SA 3.0
The big ones that may help would be sparsity (only some small percentage of inputs and outputs are populated, or different from some mean distribution), or smoothness (the 10,000 x 10,000 output could be approximated reasonably by e.g. a 1,000 x 1,000 output, and maybe has similar properties to a real-world image)",2/28/2018 10:23,,1847,CC BY-SA 3.0 +8011,5461,0,Welcome to ai...I suggest you specify you want the captions based on what more specifically with some examples.. Since it is a silent film it's not based on audio.,2/28/2018 13:02,user9947,,CC BY-SA 3.0 +8013,5448,1,"@NeilSlater thanks for your questions. My friend responded, and said that he has millions of training examples. In addition, the output/truth is 49,995,000 values and it is symmetrical matrix. It could be approximated, as he states, perhaps by a factor of 1000 and that would be still reasonable",2/28/2018 16:38,,2655,CC BY-SA 3.0 +8014,5448,1,"I understand from that the matrix is smooth, perhaps representing a grid of evaluations over a complicated but smooth function of the inputs? That's a lot of data that your friend has. Millions of records, each of which contains ~ 25M numerical values. We're taking about a dataset size getting up to a Petabyte here . . . not something I have any experience of. If an approximation with downsampled outputs (and maybe inputs) might have value, I suggest start with that as your model. Reduce the grid sizes as much as you can to start, see if that model would be useful.",2/28/2018 17:35,,1847,CC BY-SA 3.0 +8017,5461,0,"This is a really cool question. My guess, based on current limitations, is that there is no AI in existence that could come close to providing this function... today. But even as AI advances, this particular function will be challenging. The AI has to understand the context AND subtext of the silent scenes. 
*(Many humans seem not to get subtext;)* Both are far beyond current capabilities, but I'm sure facial recognition related to emotions will be a major part of the puzzle.",2/28/2018 18:56,,1671,CC BY-SA 3.0 +8026,5461,1,What does a subtitle mean for a silent film? Silent films don't have spoken dialogue. If the actors are not speaking what is there to put in a subtitle? Are you asking how to summarize the actions of video segments into words? If that is the case you may find the following video of interest although it does not directly address your question: https://www.youtube.com/watch?time_continue=1&v=AR3hY9iB5-I,3/1/2018 4:07,,5763,CC BY-SA 3.0 +8027,5461,0,"Check out Amazon's Rekognition which combines algorithms like object detection, tracking,facial recognition, activity detection, and celebrity recognition: https://aws.amazon.com/rekognition/ and https://www.youtube.com/watch?v=SNONL4IecHE",3/1/2018 4:26,,5763,CC BY-SA 3.0 +8028,5469,0,I don't get it....As far as I know for simple NN's input is not manipulated? What am i missing?,3/1/2018 5:01,user9947,,CC BY-SA 3.0 +8033,5461,0,Added a clarification,3/1/2018 22:17,,13030,CC BY-SA 3.0 +8034,5468,1,"I'm not sure of the optimal ML method for this, but can I ask have you looked at the usage of Latin squares in scheduling? *(PS- Welcome to AI!)*",3/1/2018 22:18,,1671,CC BY-SA 3.0 +8039,5415,0,"What are some differences between the CNN generated solution and the ""perfect"" solution. Can these differences be grouped into categories? Can you post an example of the input xml and the ""perfect"" xml?",3/2/2018 8:21,,10287,CC BY-SA 3.0 +8042,5418,0,"Thanks! I suppose I'll have to do it by hand then, for my date.",3/2/2018 17:01,,8385,CC BY-SA 3.0 +8047,5415,0,"I edited the question, hopefully it will be helpful",3/2/2018 17:27,user9890,,CC BY-SA 3.0 +8049,5415,0,I added a suggestion below - have you also posted this on the cross-validated stack exchange? 
It seems to have more activity,3/2/2018 18:36,,10287,CC BY-SA 3.0 +8053,5468,0,thanks!! ill see if I can make sense of Latin square,3/2/2018 21:56,,13042,CC BY-SA 3.0 +8054,5468,0,"Some links: [Latin Squares & Their Applications](https://books.google.com/books?id=hsxLCgAAQBAJ&pg=PA319&lpg=PA319&dq=latin+squares+scheduling&source=bl&ots=POxR1zGEDT&sig=vMeMKQrMYoLYCMxeSFPFjI0irmk&hl=en&sa=X&ved=0ahUKEwjjl-Hy2M7ZAhVGGt8KHdqZDA04ChDoAQgxMAQ#v=onepage&q=latin%20squares%20scheduling&f=false), [Multiple Access Scheduling Based on Latin Squares](http://ieeexplore.ieee.org/document/1493287/?reload=true), [Diagonal and Pandiagonal Tournament Latin Squares](https://www.sciencedirect.com/science/article/pii/S0195669885800056)",3/2/2018 22:36,,1671,CC BY-SA 3.0 +8055,5468,0,"Also: [Handbook of Scheduling: Algorithms, Models, and Performance Analysis ](https://books.google.com/books?id=0UV5AgAAQBAJ&pg=SA52-PA4&lpg=SA52-PA4&dq=latin+squares+scheduling&source=bl&ots=028AXOa1Jz&sig=ozfJryaWTssRfwTJD-ztZFuj2oE&hl=en&sa=X&ved=0ahUKEwjjl-Hy2M7ZAhVGGt8KHdqZDA04ChDoAQgmMAA#v=onepage&q=latin%20squares%20scheduling&f=false). May not be the approach you're looking for, but there should at least be some useful insights :)",3/2/2018 22:38,,1671,CC BY-SA 3.0 +8068,5492,0,it is not possible an algorithm that works only over the paths (automatic and manual) without take into account the maze description. It will generate invalid paths.,3/3/2018 12:38,,12630,CC BY-SA 3.0 +8069,5498,1,"""This gene can be re-enabled by an enable mutate"". I found no mention of this mutation. Source? NEAT paper says ""Each mutation expands the size of the genome by adding gene(s)."" Such 'enable' mutation would not add gene.",3/3/2018 13:20,,13087,CC BY-SA 3.0 +8070,5498,1,"Sorry, I had forgotten this was not included in the original publication. 
Many other publications attempting to improve the NEAT method add this mutation type (See my edit).",3/3/2018 17:42,,13088,CC BY-SA 3.0 +8071,5492,0,"@pasaba so how does machine translation work without being taught the grammar? The goal is NOT to translate the whole path at once but parts of the path - for example, the CNN could be generating three rights which can be replaced by one left. Perhaps a picture by OP will help. I am trying to work within the constraints of OP and you are proposing a completely new approach which is likely better but not what OP is constrained to.",3/3/2018 19:45,,10287,CC BY-SA 3.0 +8072,5492,0,"A natural language has a single grammar, the net has a lot (millions) of valid sentences to infer it. However, in this case, for a single maze there are only a single optimal path. Imposible to infer the maze from the path.",3/3/2018 19:53,,12630,CC BY-SA 3.0 +8074,5492,0,@pasaba - sure OK :-) I have suggested a specific algorithm to OP as he asked.,3/3/2018 20:05,,10287,CC BY-SA 3.0 +8076,5488,1,"To clarify, is there only one state of the game? Can you only select 1 card?",3/3/2018 22:54,,4398,CC BY-SA 3.0 +8096,2732,0,"Yet another common solution is residual network, i.e., adding fixed weight connections skipping layers. I like it most as it works without modifying the structure / activation function.",3/5/2018 7:52,,12053,CC BY-SA 3.0 +8097,5019,1,"I did revise. Also, what is cross validated community?",3/5/2018 8:02,,11800,CC BY-SA 3.0 +8100,5424,0,"Hi, thanks for the explanation this help me so much!",3/5/2018 10:47,,12958,CC BY-SA 3.0 +8101,5423,1,"Hi, thanks for the suggestion and the answer",3/5/2018 10:48,,12958,CC BY-SA 3.0 +8102,2848,0,"Agree. There are multiple steams of research in human-machine interfaces and they are the closest one to answer the question of how will we interact with AI. I would only emphasis one direction of research: direct brain-computer interface. 
This field seems the most promising to me, especially due to recent interest from giants forming Neuralink.",3/5/2018 11:45,,12935,CC BY-SA 3.0 +8103,5517,0,"Simple method: teacher opens a ssh connection to the student, and transfers its conceptual database to it. Use wifi if you want something wireless. 100% of quality on knowledge transference.",3/5/2018 14:47,,12630,CC BY-SA 3.0 +8105,5522,0,"Even though I agree, yet there are some scenarios with non-AI singularity.",3/5/2018 14:58,,12935,CC BY-SA 3.0 +8106,5522,1,"@Alex: Of course alternative definitions can be done and answers will differ. My answer wants more to present the need of define the concepts before to discuss about them that advocate for an opinion about ""singularities"". Talk without concept definition is done on TV shows.",3/5/2018 15:02,,12630,CC BY-SA 3.0 +8109,5519,0,"Charles Stross, in *Accelerando* posits that, even in a post superintelligence future, recognizing the advent or nature of the singularity may be fuzzier than people assume.",3/5/2018 16:34,,1671,CC BY-SA 3.0 +8112,5517,1,That is maybe not the thing in OPs mind.,3/5/2018 18:11,,11810,CC BY-SA 3.0 +8113,5517,0,"@mico, exactly. This is in fact what made me to search for references. If this approach has been seriously studied, then there must be a more non-trivial method. If not, then it is really a new approach and the benefit is not known yet.",3/5/2018 18:18,,13118,CC BY-SA 3.0 +8114,5019,0,"Human,you should paragraph your question body! Query through the stack exchange sites,you will find it there.",3/5/2018 18:29,,1581,CC BY-SA 3.0 +8121,5473,0,"I been researching this, I found some great material on Google. Thanks so much for the direction. Constraint programming looks very close! 
https://developers.google.com/optimization/cp/",3/5/2018 22:39,,13042,CC BY-SA 3.0 +8122,5531,1,"Very unimportant detail, may vary from person to person, layout to layout.",3/6/2018 7:27,user9947,,CC BY-SA 3.0 +8123,5531,1,@DuttaA: it should be always welcome student who thinks about things and arise questions to himself or publicly.,3/6/2018 8:30,,12630,CC BY-SA 3.0 +8124,5531,1,"@pasabaporaqui Humans,lets get serious here.someone in Microsoft is working on ai project,however if it requires him/her to query through SE knowledge base,then we shouldn't bring here students course work but rather world real problems to solve, out of school.Humans,hope this can save your planetary civilisation.",3/6/2018 9:24,,1581,CC BY-SA 3.0 +8125,5531,0,"@quintumnia: If I must choice help students or Mycrosoft ... . But, being serious, taken into account that first comment has been similar to ""it is a personal choice"" ', I think the question is on-topic and deserves an answer explaining the motivation under usual notation.",3/6/2018 10:03,,12630,CC BY-SA 3.0 +8126,5531,1,"@pasabaporaqui If you analyze and comprehend this question very well,then it's right fit in cross validated community for effective feedback.Try to analyze it !",3/6/2018 10:08,,1581,CC BY-SA 3.0 +8128,5531,0,@pasabaporaqui this is a simple google-able question and does not require a full detailed mathematical or philosophical answer as this site justifies....and to justify my comment the answer you have provided is a personal preference and is generally not the standards used in practice,3/6/2018 11:38,user9947,,CC BY-SA 3.0 +8132,5531,0,"@DuttaA: What is, according to you, a personal preference in my answer? Say that matrix notation is used in NN design ? The math notation for matrix elements ?",3/6/2018 12:19,,12630,CC BY-SA 3.0 +8133,5488,0,"Yep, habitual game mechanics: you have received $n$ random cards from an stack of $N$, you can play one. In some cases, (eg bridge) perhaps even less than $n$. 
Seems contextual, but very particular.",3/6/2018 12:30,,13080,CC BY-SA 3.0 +8134,5531,1,"Guys, students are the ones going to spread false info in the professional world if concepts and standards are not clear. I am a seasoned StackExchange user and a working professional doing a part time course. A proper answer to this question will ensure that anyone Googling this question in future wont end up with a nonsensical concept. I wanted to know if it is a standard notation uniformly used and recognized across the world.",3/6/2018 12:50,,9268,CC BY-SA 3.0 +8135,5531,0,"@Nav: yes, there are a standard/common/usual/recommended notation, the one that your teachers are using, for the reason I've explain in my answer.",3/6/2018 12:59,,12630,CC BY-SA 3.0 +8136,5531,0,@pasabaporaqui h=Wx is not the standard notation,3/6/2018 15:43,user9947,,CC BY-SA 3.0 +8137,5531,0,@Nav like i said its personal preference,3/6/2018 15:44,user9947,,CC BY-SA 3.0 +8138,5019,1,"Yes, fellow human. I'll move it. Although I'm not a pro here like you, I think you should avoid using sarcasm when someone is trying to learn. No offence.",3/6/2018 16:44,,11800,CC BY-SA 3.0 +8150,5522,2,"Solid answer. I'm not entirely convinced ""singularity"" is the right word for the definitions given, which is not to impugn Von Neumann, Goode or anyone else. Just that the after reading the Causal Angel by mathematical phycist Rajaniemi, who uses game theory to relate intelligence acceleration to the topology of blackholes, I feel like the term ""singularity"" should be more specific than mere ""runaway superintelligence."" i.e. 
why this term in particular?",3/6/2018 20:35,,1671,CC BY-SA 3.0 +8155,5551,1,"Please clarify what algorithm you are using, the answer will vary significantly.",3/7/2018 4:34,,13088,CC BY-SA 3.0 +8156,5551,1,You can add a bias to your step function,3/7/2018 5:43,,9413,CC BY-SA 3.0 +8158,5558,0,"There are a few choices of re-representation or different architectures, but the best choice will depend on details of the data. Please describe more about your input data, to help readers understand what the options might be. For instance, is the data from a sequence (text or signal processing), does it contain a variable number of equivalent items? A few simplified examples of the the data may help.",3/7/2018 8:20,,1847,CC BY-SA 3.0 +8159,5558,0,@NeilSlater I've updated the question with additional info,3/7/2018 8:44,,12931,CC BY-SA 3.0 +8162,5019,0,"Don't take it personal,we are a community of geniuses who are eager enough to solve world problems.So let's be nice and try to keep our questions clear so as someone out there can benefit enough.Imagine if aliens cyber attack our community knowledge base! They can benefit as well.",3/7/2018 10:07,,1581,CC BY-SA 3.0 +8163,5543,0,"For example X1 * X2, this function can be approximated by feedforward NN but can't be efficiently represented by splitting space with linear hyperplanes, also NN poor with fitures combinations like x1*x2, x1/x2, 1/x1 and so on, because neuron creates only linear hyperplane and it isn't efficient.",3/7/2018 12:15,,13067,CC BY-SA 3.0 +8164,5539,0,I want clarify. For example lets consider feedforfard NN. Yes it can represent anything but for some tasks it could demand too many data or too many computational power or incredible count of neurons and layers. for example X1*X2. 
It can be represented and this function is very simple and we expect that our cool approximator can do it easy but it actualy can only inefficient proximate it by creation a banch of hypeplane spliting space.,3/7/2018 12:27,,13067,CC BY-SA 3.0 +8166,5534,0,Indeed there seems to be a naming convention but a more elaborate and clear explanation would be appreciated. Even the author here has not explained the reason properly http://neuralnetworksanddeeplearning.com/chap2.html,3/7/2018 15:49,,9268,CC BY-SA 3.0 +8167,5534,0,Which part of current explanation do you think must be expanded?,3/7/2018 16:03,,12630,CC BY-SA 3.0 +8168,5534,0,Depicting the weights of the ANN in my question as matrices could make it more intuitive. Thanks.,3/7/2018 16:26,,9268,CC BY-SA 3.0 +8169,5534,0,@Nav: answer edited,3/7/2018 16:36,,12630,CC BY-SA 3.0 +8170,5551,0,"You could add some sort of ""Experience replay"" algorithm to pass the same data several times.",3/7/2018 18:28,,9288,CC BY-SA 3.0 +8172,5565,0,"Thanks. I've thought about this question because I'm using genetic algorithm to optimize a function. I know which is the desirable output, 0, and I have tested a lot without successful. Maybe I can use a neural network to find the input to the genetic algorithm to get 0 as output. Thanks again.",3/7/2018 20:02,,4920,CC BY-SA 3.0 +8174,5538,0,Your goal is so vague and unclear. The title and introduction is about Google Analytics data but you end with a goal about predicting whether or not a person will convert or not. What is 'result'? What does 'extract as much information as possible' mean? Is your page a site to convert a person's ideology or religion? If so what are the people doing on your site? Are they filling out forms or taking surveys? Is the data dynamic?,3/7/2018 22:48,,5763,CC BY-SA 3.0 +8176,2574,0,"@dshin Very interesting! 
Concerning ""computational savings versus applying independent evaluations"" from the discussion: There are heuristics in game engines giving preference to moves which were shown to be good in other parts of the search tree. This is like to ""go through the kingside defensive lines again [... but ...] a lot faster"" (again from the discussion). No idea if Alpha Go uses / could use such heuristics. I guess, my comment is only loosely related to your question, but aims at the same goal.",3/8/2018 7:12,,12053,CC BY-SA 3.0 +8178,5534,0,Lovely! That's a lot clearer now. Thanks Pasaba.,3/8/2018 8:32,,9268,CC BY-SA 3.0 +8179,2319,0,According to wikipedia most parts of natural languages are context free: https://en.wikipedia.org/wiki/Context-free_grammar#Linguistic_applications,3/8/2018 13:00,,13192,CC BY-SA 3.0 +8180,2322,0,"I agree that understanding the meaning of the text is the hard part. Just a simple sentence: ""Peter went to the cinema."" contains a lot of hidden information: Peter is male, he most probably went there to watch a film with his girlfriend, his location changed, etc... Building a model for example a graph based on the text is not enough, because it is not something static, it can describe multiple timelines, events and contexts, plus there is hidden information in every sentence you can infer and use to understand the previous or following sentences.",3/8/2018 13:25,,13192,CC BY-SA 3.0 +8181,5579,0,"Thank you for your reply. I've done some digging after I posted this question and found one video where Joseph Redmon(author of the paper) gives a presentation about YOLO: https://youtu.be/NM6lrxy0bxs . Somewhere about 8 min he starts to explain how they train the network and it seems like my initial assumption is rights: 1^{obj}_{ij}=1 only when the center of the ground truth box falls into i cell. 
Also, it seems like 1^{noobj}_{ij}=1 for every cell where there is no center of ground truth box.",3/8/2018 15:47,,13102,CC BY-SA 3.0 +8182,5579,0,"yep, looking at the video you are totally right. Still it was nice to clarify how YOLO objective works:)",3/8/2018 15:58,,11417,CC BY-SA 3.0 +8197,5602,2,You have stumbled on reinforcement learning. I’ll give a better answer later but searching for ‘gridworld reinforcement learning’ will head you in the right direction.,3/9/2018 13:27,,4398,CC BY-SA 3.0 +8198,5601,3,"This is a guess; The relu's ability to approxoimate non-linear functions can be a result of its discontinuity property ie `max(0,x)` acting in deep layers of neural network. There is an openai research in which they computed non-linear functions using a deep linear networks here is the link https://blog.openai.com/nonlinear-computation-in-linear-networks/",3/9/2018 13:56,,39,CC BY-SA 3.0 +8199,5560,0,"could I ask you to give this answer an edit? I would do it, but I'm not sure of the meaning of the first sentence...",3/9/2018 18:04,,1671,CC BY-SA 3.0 +8205,5577,0,"Welcome to AI! I took the liberty of editing for readability, and added a javascript tag (very happy to see a js/ML question:) I'd suggest including the name of js library you're using because someone may have direct experience with it.",3/9/2018 18:44,,1671,CC BY-SA 3.0 +8208,5591,1,"This answer might be improved with some links, but it looks like good, general advice regardless, so thanks for contributing!",3/9/2018 19:07,,1671,CC BY-SA 3.0 +8211,5561,0,"Thanks for the links, I'm using a DQN I know there are many algos and many that are better than a DQN but I'm pretty new to this and it was easier to implement, because of more existing examples. 
+ +I guess I'll have to do a lot more research until I really get all this stuff, but again thanks for the links any resource is good at the moment :)",3/9/2018 19:26,,13141,CC BY-SA 3.0 +8212,5593,2,"Two answers can be found here, [1](https://ai.stackexchange.com/questions/2236/why-is-lisp-such-a-good-language-for-ai) and [2](https://ai.stackexchange.com/questions/3494/why-is-python-the-most-popular-language-in-the-ai-field)",3/9/2018 19:54,,6798,CC BY-SA 3.0 +8226,5614,1,"Welcome to Ai.se...to clarify your question, are you using certain software?",3/10/2018 8:39,user9947,,CC BY-SA 3.0 +8228,5577,0,"Definitions of ""stack string format"", ""stack parser"", ... should be clarified.",3/10/2018 9:36,,12630,CC BY-SA 3.0 +8229,5614,1,No. I do not want to use any tool or software. I intend to hardcode using python 3x. I am open to exploring new libraries. I am using scikit learn at the moment.,3/10/2018 9:47,,8215,CC BY-SA 3.0 +8233,5577,0,@pasabaporaqui updated,3/10/2018 12:32,,13192,CC BY-SA 3.0 +8234,5577,1,"@DukeZhou I haven't used any ML library yet. I need to decide first what tool I could use to complete this task. I am afraid that using some kind of hardcoded approach for the parser will be too rigid, and won't work in some of the environments, that's why I am thinking on using ML, but I have zero experience with it. What I need is some pattern recognition algorithm, which will be able to recognize the variable and constant regions of the stack string, and if the pattern does not match by a new stack string or probably by a frame string, then it will adapt to it.",3/10/2018 12:35,,13192,CC BY-SA 3.0 +8240,5620,0,Thanks I got it `w7` and `w5` are constants whereas `h1` and `o1` are functions. 
`w7` and `w5` only changes during updation whereas `h1` and `o1` changes in forward passing aswell,3/10/2018 16:13,,39,CC BY-SA 3.0 +8241,5620,1,@Eka good..I missed that part but you got it,3/10/2018 16:16,user9947,,CC BY-SA 3.0 +8243,5620,0,One other question; so when we are finding gradients with respect to a weight. We keep that weight as variable and the rest of weights as constant. in my above case `w1` is variable and the other two constant.. right?,3/10/2018 16:28,,39,CC BY-SA 3.0 +8244,5620,1,@Eka yes...that is pretty much the concept of how partial derivatives are calculated,3/10/2018 16:38,user9947,,CC BY-SA 3.0 +8245,5577,1,"Interesting question. One approach is ""grammar induction"". All strings you present they have the form "" Header Frame ... Frame "" where ""Frame"" contains common parts / structures as ""Line N"", ""of F"". Grammar induction will find how to extract these items from the string.",3/10/2018 16:47,,12630,CC BY-SA 3.0 +8246,5577,1,"@pasabaporaqui Yes, there are rules we can assume, for example every stack has a possibly multiline header (or footer) with at least the message. Every frame contains at least a path. Frame string formats can be different depending on the variables, e.g. in example B. if the called function is not set, then there are no parentheses around the location variables. I think the hardest part is separating a different frame string template from an unknown variable.",3/10/2018 18:43,,13192,CC BY-SA 3.0 +8249,5622,0,Question is how to calculate the trajectory of an object?,3/11/2018 12:35,,12630,CC BY-SA 3.0 +8253,3698,0,Do you have summaries for a chunk of dataset? i.e. Is it a labelled dataset?,3/11/2018 14:57,,12957,CC BY-SA 3.0 +8255,5622,0,"No, powered flight is involved. 
Also the approach was NOT to calculate the trajectory of ONE (powered)object, but to have a learning algorithm take a good guess at the equation for time as a function of twr for ALL objects.",3/11/2018 17:33,,13243,CC BY-SA 3.0 +8257,5622,0,"Meaning of ""twr"" ? Not in my lost of acronyms",3/11/2018 18:21,,12630,CC BY-SA 3.0 +8259,5622,0,thrust to weight ratio.,3/11/2018 18:27,,13243,CC BY-SA 3.0 +8260,5622,0,Could you post involved equations?,3/11/2018 18:29,,12630,CC BY-SA 3.0 +8264,5622,0,"That's what I want to know. Usually you have dependent variables, an equation and you solve to get independent variables. + +What I'm asking is if I have the dependent variables, and experimental independent variables, what is the best way to learn the equation?",3/11/2018 22:29,,13243,CC BY-SA 3.0 +8267,5622,1,"What you mean by ""learning"" is not clear, are you trying to optimize a process or find an optimum point of data which you can reuse? or are you seeking a way to feed data into an AI object (ie NN) which will give more precision based on how much data it is given?",3/12/2018 1:15,,9413,CC BY-SA 3.0 +8278,5638,0,"My review concludes that your analysis is correct. Congratulations also for the presentation. Just a minor editorial: replace ""b"" by b1 and b2; must be b1 and b2 also evaluated ?; clarify that sigma function is sigmoid; and better write w_2*h that h*w_2 (in this way, most of your equations are applicable to h vector and W matrix).",3/12/2018 9:09,,12630,CC BY-SA 3.0 +8285,5638,0,"@pasabaporaqui Thank you for reviewing my calcultions and yes sigma is sigmoid function. My biggest doubt is when I calculated `dnet_2/dh=w_2`. It was a surprise for me, I never thought we use weights value to do back prop calculation. 
I didnt understand this part `better write w_2*h that h*w_2 (in this way, most of your equations are applicable to h vector and W matrix)` ?",3/12/2018 15:32,,39,CC BY-SA 3.0 +8288,5638,0,"@Eka: if the value of h is increased in 1 unit, the value of net2 is increased by w2. This is the meaning of this partial derivate.",3/12/2018 16:00,,12630,CC BY-SA 3.0 +8289,5638,0,"@Eka: do not worry about the comnent of order, it is just that is more practical/traditional write w*h than h*w. In some problems, W will be a matrix and h a vector, Wh is ok, hW is not.",3/12/2018 16:02,,12630,CC BY-SA 3.0 +8297,5652,2,Welcome to AI.SE.....I am not an expert.....but legal issues always have a lot of context and human emotions attached to it.....and that is exactly where any nlp program or an expert linguist will fail to provide a satisfactory solution...,3/12/2018 18:59,user9947,,CC BY-SA 3.0 +8302,5652,0,"Context I think is absolutely true, but I'm less concerned with emotion. Lawyers are focused on getting the best result for -their- client. But the lawyers for the others involved may not be as diligent, or may be more diligent. The goal should be to highlight glaring deficiencies, loopholes or other likely issues for further human review.",3/12/2018 19:42,,13043,CC BY-SA 3.0 +8303,1335,0,@NietzscheanAI I think that's a really good question. I suggest you post that as a proper question in ai.stackexchange,3/12/2018 20:48,,6734,CC BY-SA 3.0 +8304,3183,0,"@DanielG From what I know about Q learning, you can start with an empty Q matrix and just grow it as the bot learns.",3/12/2018 23:23,,13138,CC BY-SA 3.0 +8306,5652,1,Have you seen the work done by Ross Intelligence using IBM's Watson? 
https://www.ibm.com/blogs/watson/2016/01/ross-and-watson-tackle-the-law/,3/13/2018 3:50,,5763,CC BY-SA 3.0 +8308,3558,0,"I don't know why you require an ML algorithm for this task...at-least in the pics given the intensity and colour change is sudden, so simple program would do the job",3/13/2018 10:15,user9947,,CC BY-SA 3.0 +8314,5652,0,"@BrianO'Donnell I have seen that post previously, yes. While Watson may understand law to some extent, the solution is controlled by IBM and can change without my control. My goal is similar though: Translate legalese to common English and highlight problematic legal language for review. I anticipate running this effort in two areas: for-profit to review contracts and other legal documents; non-profit to review legislation for issues detrimental to the public-good. Control over the system is necessary to ensure the unencumbered operation of the platform.",3/13/2018 14:03,,13043,CC BY-SA 3.0 +8320,3521,0,"I was going to post examples from Neataptic, but then I saw your answer sir, and I am not worthy :D Nice to e-meet you!",3/13/2018 22:07,,13138,CC BY-SA 3.0 +8322,5670,0,"Welcome to AI.se...The question requires an answer of conceptual nature which can be too lengthy, i have provided a few links in my answer, but I suggest you to explore the internet for an even better understanding of why gradient descent actually works.",3/14/2018 8:12,user9947,,CC BY-SA 3.0 +8323,5672,1,Welcome to ai.se...Great question....Can you link some works in the question?,3/14/2018 8:46,user9947,,CC BY-SA 3.0 +8325,5588,1,"Thanks for the information, particularly about autoencoding. We'll investigate further although we're now more focused on other tasks. I'll get back to you eventually whenever we investigate this and give you some feedback on our successes and failure :) Cheers",3/14/2018 9:20,,12070,CC BY-SA 3.0 +8327,5594,0,"We are aware of Force layouts and other such algorithms. 
In fact, we've been improving on them, but we wanted to try a different approach, as we will never get rid of some of their limitations. Our own rendering software is, imo, already way better than Graphviz so that doesn't help either. +Thank you for your second suggestions though!",3/14/2018 9:30,,12070,CC BY-SA 3.0 +8330,5683,0,"If we consider artificial selection or ""selective breeding"" then it'll fall in which part of the evolution?",3/14/2018 13:59,,12806,CC BY-SA 3.0 +8331,5683,0,"@GermaVinsmoke I'm not very familiar with the fields of biology and natural history, but I would say that ""artificial selection"" falls under a general biological/natural definition of ""fitness"" (e.g., a strong deer fighting a weak deer for mating rights with a female is more likely than his opponent to win the fight and breed, as he is ""fitter"").",3/14/2018 15:17,,12857,CC BY-SA 3.0 +8332,2994,1,"The maths and pseudocode do not match here. Softmax over the legal move probabilities will adjust the relative probabilities. E.g. (0.3, 0.4, 0.2, 0.1) filtered with first and third item removed would be (0.0, 0.8, 0.0, 0.2) with your formula, but would be (0.0, 0.57, 0.0, 0.42) using the pseudocode. The pseudocode needs to take the logits, prior to action probability calculations.",3/14/2018 16:09,,1847,CC BY-SA 3.0 +8341,5683,0,Great answer...I think you missed one point about death..In bio evolution we can't go back to the original but in computer we can go back to previous versions...Am I correct?,3/15/2018 12:36,user9947,,CC BY-SA 3.0 +8343,2359,0,I can see clear room on the side of fiction novels for portraying this circumstance well in advance of its eventual evolution.,3/15/2018 14:30,,13043,CC BY-SA 3.0 +8344,5683,0,"@DuttaA Quite the contrary in my opinion. In biological evolution an organism could in theory transform back to something close to a previous form, if the selection pressure pushed it that way (a changing ecosystem means a changing definition of ""fitness""). 
In contrast, with artificial evolution we are solving a specific problem so the ""ecosystem"" (the problem) is static, as is its objective. This means that since our solutions are getting fitter and fitter in relation to this static objective, it is unlikely that our method will drive us towards past solutions which we have already explored.",3/15/2018 16:19,,12857,CC BY-SA 3.0 +8345,5683,0,"@PhilippeOlivier I really don't think that happens, since in genetics they are always looking for a better form, we can't go back to chimpanzees however we try, since chimpanzee is not the best form, like entropy always increases and by carnot cycle you can never reach the initial form, I think the same goes for bio evolution...though I maybe wrong",3/15/2018 16:37,user9947,,CC BY-SA 3.0 +8348,5683,0,"@DuttaA I think your hypothesis is somewhat flawed. Take the Mexican tetra, for example. This fish has two forms: the ""normal"" form which lives where light is abundant and which has eyes, and the ""cave"" form which lives in total darkness and whose eyes have been removed over time by selection pressure. Having eyes is not necessarily better, it really all depends on the environment. So the cave form of the Mexican tetra at one point was some kind of multicellular organism without eyes, then evolved into a fish with eyes, then evolved into a fish without eyes.",3/15/2018 18:06,,12857,CC BY-SA 3.0 +8349,5683,0,"@PhilippeOlivier I said you can't go back to your previous form, evolution is one of the most misunderstood principles...https://en.wikipedia.org/wiki/Dollo%27s_law_of_irreversibility",3/15/2018 18:30,user9947,,CC BY-SA 3.0 +8351,1626,0,"That ""Essentials of Metaheuristics"" looks very interesting! This is something that's actually on the roadmap for the M-automata, as pure MCTS is never optimal in M games. 
From the [metaheuristic wiki](https://en.wikipedia.org/wiki/Metaheuristic): *""In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.""*",3/15/2018 21:32,,1671,CC BY-SA 3.0 +8352,5698,1,Great answer. *(Jealous I didn't think of this angle;)* This is a massive issue as data mining becomes ubiquitous and a major source of revenue for many companies. Welcome to AI!,3/15/2018 21:47,,1671,CC BY-SA 3.0 +8356,5671,0,@pasabaporaqui activation function is not the same as loss function..the equation of mse is equation of a paraboloid,3/16/2018 9:02,user9947,,CC BY-SA 3.0 +8357,5671,0,"MSE is a parabola if the output of the net is taken as MSE argument, E(y), something useless in optimization of a NN. When working on optimization, the arguments to be taken into account are the weights and offsets, E( w_0, w_1, ... ), that is not a parabola, even if measured as MSE. 
By example, when activation function is sigmoid, It is more near to a superposition of continuous approximations of Heaviside functions.",3/16/2018 9:29,,12630,CC BY-SA 3.0 +8358,5671,0,"@pasabaporaqui i plotted the log cost function used in sigmoids and for 1 part I got 1 part of parabola on +ve x-axis and the other part gave me 2nd side of parabola on the negative x-axis....well not exactly parabola, but looked close to it....so I thought to say that cost functions are generally parabolid",3/16/2018 9:39,user9947,,CC BY-SA 3.0 +8359,5671,0,Added an answer with graphics below,3/16/2018 10:17,,12630,CC BY-SA 3.0 +8360,5706,0,"Uh huh I never said mse for sigmoid is convex....I said the log error for sigmoid is convex, kind of paraboloid looking",3/16/2018 10:22,user9947,,CC BY-SA 3.0 +8362,4965,0,I highly suggest you to use Gensim (https://radimrehurek.com/gensim/) for this task. Especially the models LSI and/or word2vec and fasttext,3/16/2018 12:39,,7783,CC BY-SA 3.0 +8365,5709,0,Can you please correct up your errors in the question body?,3/16/2018 18:33,,1581,CC BY-SA 3.0 +8371,5709,1,I updated to csv-semic-sep.,3/16/2018 19:46,,13352,CC BY-SA 3.0 +8372,5709,0,"Is this an astronomy or quantum phy problem? As far as I know astronomy and quantum phy are not as flexible as engineering and even minute errors are unacceptable, so I think you'll have to increase precision to a very very high value",3/17/2018 9:01,user9947,,CC BY-SA 3.0 +8373,5715,0,"Really you want use an Adlines NN instead of perceptrons ? Sorry for the a few off-topic comment, but surprises me.",3/17/2018 9:39,,12630,CC BY-SA 3.0 +8374,5715,0,"LMS and RLS are not learning rules, but cost functions. 
RLS th same than LMS after apply a filter to the error measures, usually a forget factor.",3/17/2018 9:42,,12630,CC BY-SA 3.0 +8375,5715,0,"Please feel free to comment, i'm new so honestly i don't know, if there is something you want to suggest i'm listening, thank you",3/17/2018 9:44,,13361,CC BY-SA 3.0 +8376,5016,0,@Seth Simba so do you think alpha go zero would be faster and easier to train?,3/17/2018 10:41,,13367,CC BY-SA 3.0 +8377,5715,1,"If there are not a non-linear element we are more in the area of linear filtering than in the one of AI. Moreover, as combination of linear elements can be flatten, multi-layer concept disappears. Non-linear activation functions as Heaviside (perceptron), sigmoid, ... are the center of the NN.",3/17/2018 10:42,,12630,CC BY-SA 3.0 +8378,5709,1,Thank you for the input DuttA. It is none of the problems you have mentioned. What do you mean by increasing the precision in regards of the questions?\,3/17/2018 11:13,,13352,CC BY-SA 3.0 +8379,5719,0,But we sure can place a conditional logic that if something like that happens the ai programs next execution step will be an infinite loop...And this piece of code was not yet explored by the ai by then...Like if somehow we remove self preservation hormones...Bam...Self destruction,3/17/2018 15:19,user9947,,CC BY-SA 3.0 +8380,5719,0,"The question is how can conditional logic know that something like that is happening? It’s hard or to know what a sufficiently complex network is “thinking” so this conditional logic must be fairly advanced, perhaps it requires its own network but then we have 2 problems instead of 1.",3/17/2018 15:22,,4398,CC BY-SA 3.0 +8381,5719,0,"Like a checkpoint I think...Like we don't know how brain functions they will also not know that...But if they hit that checkpoint... 
BOOM..Self destructing mentality...Like our needs change over the years..This will make the robots aims change , slowly and subtly...Though the major assumption is the robot is completely human like which may not be the case",3/17/2018 16:01,user9947,,CC BY-SA 3.0 +8389,5725,0,Can you provide more details so that it becomes more easier to implement?,3/18/2018 8:31,user9947,,CC BY-SA 3.0 +8393,5722,0,"so in DP, return function must use data from existing trials only to calculate value for a state signal? the value function can estimate rewards for actions not yet taken on explored states?",3/18/2018 10:45,,12726,CC BY-SA 3.0 +8394,5725,0,"""straightforward"" ? too optimistic",3/18/2018 11:17,,12630,CC BY-SA 3.0 +8395,5713,1,store it in a ROM memory,3/18/2018 11:27,,12630,CC BY-SA 3.0 +8396,5722,0,"@user3168961: Yes, sort of. There are individual returns, which you can experience or sample. In practice, you may use some kind of function for that, which may blur the line between expected and actual return when bootstrapping as in value iteration or TD learning etc. Then there are expected returns, which are learned by the value functions.",3/18/2018 12:39,,1847,CC BY-SA 3.0 +8397,5729,0,https://en.m.wikipedia.org/wiki/C4.5_algorithm,3/18/2018 15:16,,12630,CC BY-SA 3.0 +8399,5729,0,"I kindly request you to go through the community question guidelines,hope in near future you will contribute to the human civilisation.",3/18/2018 17:39,,1581,CC BY-SA 3.0 +8400,4093,0,"@Bobs it's being a while since our last discussion, I keep myself busy with projects. I wish you are not spending a lot of time practicing abstract thinking as it drain energy and diverge the limited human sight! In my believe, no human powerful enough to grasp perfect abstract knowledge of anything at all. By accomplishing tasks, keep learning our awareness increases and most of the times yields to some sort of happiness/growth. 
If you have zero trust in everything, you would never accept 'truth' as concept. Before considering a subject as truth provider, do examine it periodically.",3/18/2018 18:00,,9685,CC BY-SA 3.0 +8402,5729,0,"Read Ross Quinlan's book ""C4.5: Programs for Machine Learning (Morgan Kaufmann Series in Machine Learning)"".",3/18/2018 18:13,,5763,CC BY-SA 3.0 +8404,5729,0,What language are you planning to use?,3/18/2018 18:28,,5763,CC BY-SA 3.0 +8409,5725,0,"Optimistic perhaps, but all is relative. :-) I'll try to expand a bit more on high level steps. As a note to beginners who want to dig deeper into implementation, Andrew Ng's Coursera course might be a good starting point.",3/19/2018 2:57,,13360,CC BY-SA 3.0 +8416,5622,0,I would like the program find an equation or at least the coefficients of an equation that would correspond to the general case of the time to get to orbit.,3/19/2018 16:32,,13243,CC BY-SA 3.0 +8417,5622,0,"I'm not sure that I'll get an answer to this one, but no one commenting seems to understand what I'm getting at. I am more interested in a general solution at this point, but in terms of the variables already mentioned: + +You launch 5 spacecraft into space. Each has different parameters. You can control the ascent profile generally. Each has a different time because there are different designs. + +The question I'd like to ask a program is what is the equation that best matches the various launches?",3/19/2018 16:36,,13243,CC BY-SA 3.0 +8418,5622,0,"Another way to put it is how much weight do each of the variables we know about matter. For instance, how much does twr matter? does it scale linearly? Is it directly proportional to some other variable? Another part of this is the unknown variables. What is the nature of the unknown variables? It is kind of a solver. Once again, I know the variables, some experimental, some, like thrust is easily knowable. 
How can i develop a system to analyze the remaining parts of the equation?",3/19/2018 16:42,,13243,CC BY-SA 3.0 +8423,5748,0,From the above answer don't you think itdepends on more factors?,3/20/2018 11:00,user9947,,CC BY-SA 3.0 +8424,5748,1,"@DuttaA No. There's a constant amount of work per weight, which gets repeated `e` times for each of `m` examples. I didn't bother to compute the number of weights, I guess, that's the difference.",3/20/2018 13:50,,12053,CC BY-SA 3.0 +8428,5756,2,"Well done on train and test split and scaling. However, @stain also asked about validation set. Some packages (eg most of Keras supervised learning algorithms) can consider a portion (by with absolute or relative size) of training set as the validation set. Otherwise, one can perform `train_test_split` twice (one on whole data and once on the train set). See https://datascience.stackexchange.com/a/15136/46505",3/21/2018 2:09,,12853,CC BY-SA 3.0 +8431,5756,1,I also thought of mentioning cross-validation using k-splits (kfolds) generalizing the concept of training. As it is mentioned in your link @O_o answer seems also relevant here.,3/21/2018 10:46,,11069,CC BY-SA 3.0 +8432,1338,0,"""D-Wave (which has just made a 2,000+ qubit system around 2015)"" This statement is misleading at best. Be aware that D-Wave has claimed to create a computer using _adiabetic quantum annealing_. This computation model is significantly different than other quantum computing models. For example, I'm not aware whether Shor and Grover work on this model! 
So, talking about ""2,000+ qubits"" is a bit misleading: computers in the model where we _care_ about the qubit count have something around 50 qubits as the current frontier.",3/21/2018 13:28,,14451,CC BY-SA 3.0 +8433,1338,0,Also note that there are experts that do not believe _adiabetic quantum annealing_ can give significant improvements on the classical computing technique of _simulated annealing_.,3/21/2018 13:29,,14451,CC BY-SA 3.0 +8435,5764,0,"This answer is great for me, thank you. I labeled deep learning only bc other labels were not also available. Other approaches are relevant just as well for me at this stage.",3/21/2018 15:47,,14450,CC BY-SA 3.0 +8440,5764,1,"Convolution does not contract and extend surfaces, so that can't work, and pixel morphing is not mentioned. The noise that needs to be removed is not identified. Discussion of training data possibilities is not mentioned.",3/21/2018 18:35,,4302,CC BY-SA 3.0 +8444,5764,0,"@FauChristian I'm in a preliminary stage at my work. I want to know what has been done, or at least what to look for",3/21/2018 21:15,,14450,CC BY-SA 3.0 +8445,5082,0,"I do not think you have proven anything here. In the equation second from bottom m and dy/dx cancel each out, right? What remains is x(n+h)-x(n)=h, which is kind of trivial, i.e. follows from notations you’ve chosen in (I) and (II).",3/21/2018 21:38,,13318,CC BY-SA 3.0 +8449,2994,4,"How does one compute the gradient of the filtered version of Softmax? Seems like this would be necessary for backpropagation to work successfuly, yes?",3/22/2018 14:26,,14479,CC BY-SA 3.0 +8451,5764,0,"@proton, Exactly. The beginning of an investigation is the time where a project is most sensitive to taking wrong turns, and others may read any given post and be steered down less than optimal paths. 
SO is set up so that one can down vote misleading responses and provide an explanation for both authors (to modify or improve the response) and readers who are seeking the right direction for work. ... Doing a web search or a scholarly article search for ""three-dimensional feature extraction"" will give you some sense of work done and options for initial direction.",3/22/2018 15:18,,4302,CC BY-SA 3.0 +8457,5764,0,"@FauChristian The reason why I suggested autoencoders is because we are using them for image enhancement and although we weren't expecting it, a few of the generated results had severely distorted perspective. Perhaps, the reason for this is the hybrid architecture and the semi supervised nature of training during which, several shots of the same scene has been provided. I can't provide further technical details as this is still a work in progress, but I can definitely tweak this architecture to produce something similar to what OP asked.",3/22/2018 17:55,,12672,CC BY-SA 3.0 +8465,5778,3,"this is a good explanation, but more specifically the question I'm trying to understand is whether the filters that operate on each input channel are copies of the same weights, or completely different weights. This isn't actually shown in the image and in fact to me that image kind of suggests that it's the same weights applied to each channel (since their the same color)... Per @neil slater 's answer, it sounds like each filter actually has number of `input_channels` versions with __different__ weights. If this is also your understanding, is there an ""official"" source confirming this?",3/22/2018 23:25,,14389,CC BY-SA 3.0 +8466,5771,0,"thanks for this explanation. It sounds like each filter actually has number of `input_channels` versions with __different weights__. Do you have an ""official"" source that confirms this understanding?",3/22/2018 23:28,,14389,CC BY-SA 3.0 +8468,5771,0,@RyanChase: Yes that is correct. 
I would just point you at Andrew Ng's course on CNNs - starting here with how colour image would be processed: https://www.coursera.org/learn/convolutional-neural-networks/lecture/ctQZz/convolutions-over-volume,3/23/2018 10:26,,1847,CC BY-SA 3.0 +8469,5778,2,"Yes, indeed, that's also my understanding. For me, that was clear when I tried to think of that **grey** cube to be composed of 27 different weight values. This means that there are 3 different 2D filters rather the same 2D filter applied to each input layer.",3/23/2018 10:59,,12957,CC BY-SA 3.0 +8470,5778,1,"I could not find any official source for confirming this. However, when I was trying to wrap my head around this same concept, I created a dummy input and weight filter in Tensorflow and observed the output. I was content with that. If I find any **official** explanation. I will edit my answer above.",3/23/2018 11:01,,12957,CC BY-SA 3.0 +8471,5778,0,If you follow the Tensorflow path. You can print your weight filter after showing your dummy CNN layer an input sample.,3/23/2018 11:03,,12957,CC BY-SA 3.0 +8480,5748,2,I think the answers are same. in my answer I can assume number of weights `w = ij + jk + kl`. basically sum of `n * n_i` between layers as you noted.,3/24/2018 10:01,,14381,CC BY-SA 3.0 +8488,5778,0,@Moshsin Bukhari I will definitely try to explore the filters within TensorFlow. Would you be willing to share your code for how you went about about exploring what's contained in the filters? Are you able to print the values of the filter at each step in the network for example?,3/25/2018 0:58,,14389,CC BY-SA 3.0 +8489,5795,1,"Very interesting. What precisely do you mean by ""these capabilities... are so innately tied to thinking."" Do we have any sources/reasons to believe that these 4 capabilities are a breakdown of the process of thinking?",3/25/2018 2:09,,2897,CC BY-SA 3.0 +8490,5794,0,Do you have data (preferably lots) regarding human play choices? 
Do you have some humans available to help testing?,3/25/2018 8:36,,1847,CC BY-SA 3.0 +8492,5795,1,"When we talk about thinking we often use words like remember, learn, and reason as components of thinking.",3/25/2018 16:10,,13088,CC BY-SA 3.0 +8494,5794,0,"@NeilSlater I don't have human play data. However, I have some humans availlable to help me in testing. Is it good approach to take 15 players (including 1 AI bot) with initial Elo rating and let them play against each other?",3/25/2018 21:19,,14534,CC BY-SA 3.0 +8495,5796,0,Can you explain how to select optimal number of total games and how many people to engage for the experiment?,3/25/2018 21:23,,14534,CC BY-SA 3.0 +8499,5796,0,I have tried to explain the optimal number of total games in the edit of the answer above.,3/26/2018 5:55,,12957,CC BY-SA 3.0 +8500,5796,0,I am not sure how you can determine the optimal number of people to choose.,3/26/2018 5:55,,12957,CC BY-SA 3.0 +8501,5796,0,Isn't this the relative strength of AI to a particular player?,3/26/2018 6:16,user9947,,CC BY-SA 3.0 +8502,5800,1,Somehow it has been seen if the mean of output's are 0 then an activation func' gives better results https://stats.stackexchange.com/questions/101560/tanh-activation-function-vs-sigmoid-activation-function,3/26/2018 10:09,user9947,,CC BY-SA 3.0 +8507,5796,0,"@DuttaA That's true, most probably. It very well looks like it as well.",3/26/2018 17:36,,12957,CC BY-SA 3.0 +8508,5796,0,This experiment can be repeated against multiple people who are aware of how to play the game. Then average strength can be reported as the final one.,3/26/2018 17:38,,12957,CC BY-SA 3.0 +8509,5796,0,The experiment can also be repeated with only one top tier human player and an average can be reported,3/26/2018 17:39,,12957,CC BY-SA 3.0 +8510,5803,0,"By 'in polynomials' you mean 'in polynomial time', right? Have you got a reference to support that?",3/26/2018 17:56,,42,CC BY-SA 3.0 +8511,3097,0,Interesting. 
So how does this differ from Kanerva's 'Sparse Distributed Memory'?,3/26/2018 17:58,,42,CC BY-SA 3.0 +8512,5804,0,"To put it more in more general terms, I basically try to determine if something was ""A or B"" or ""neither A or B"" (2 classes). There's no need to determine whether an input was A or B. I just thought if assigning both A-objects to the same class as B-objects (the ""A or B"" class), the CNN may ""diffuse"" the features of A and B.",3/26/2018 18:05,,13068,CC BY-SA 3.0 +8513,5794,0,"If you are only concerned with win/loss outcomes, you can use the ELO system.",3/26/2018 18:50,,6779,CC BY-SA 3.0 +8517,1734,1,"An important point is that the TPU uses 8 bit multiplication, which can be implemented much more efficiently than wider multiplication offered by the CPU. Such a low precision is sufficient and allows to pack many thousands of such multipliers on a single chip.",3/27/2018 1:10,,12053,CC BY-SA 3.0 +8519,5803,0,"Yes, that's exactly what I mean. Sure, It can be proved in a lot of situations...I will start with the Simplest possible example, Just training a Network with three Nodes, and two layers is NP-Complete problem as shown here.(http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.6662&rep=rep1&type=pdf) . Remember that this paper is very old, and now we have more ideas on how to improve in practice, with some heuristics, but still, theoretically, there're no improved results.",3/27/2018 8:23,,14483,CC BY-SA 3.0 +8520,5803,0,"Other nice article on the same issue, which also describes some tricks to improve training time in practice. (https://pdfs.semanticscholar.org/9499/140ffba43fd87d84bebc2fc328fa147ac236.pdf)",3/27/2018 8:28,,14483,CC BY-SA 3.0 +8521,5803,0,"Let's say, we want to predict the price for something. Simple Linear Regression with least square fit will have a Polynomial time, whereas solving the same issue with Neural Networks (even the simplest of them) will result in NP complete problem. This is a very big difference. 
Eventually, you have to carefully select an algorithm for a specific task. For example, Least Square fit has specific assumptions, which includes , ""The ideal function which the algorithm is learning, can be learned as linear combination of features"". If that assumption is not valid, so are results achieved.",3/27/2018 8:36,,14483,CC BY-SA 3.0 +8526,5798,1,"(Part 1 Comment) Thanks for your response. Before I learned more about NN I though they would be more generalist. The game is not too complicated, there can be an N number of players, to which all cards in the deck are distributed in the beginning. For example 4 players mean each will start with 13 cards, and the winner is the first to have no cards in hand. To get rid of the cards they will play them in turn order and can choose to play cards or not play. The only rules are that the players always have to play higher cards than previously played.",3/27/2018 12:47,,12217,CC BY-SA 3.0 +8529,5798,1,"(Part 2 of the Comment) Once no one wants to play a card (or can't), the last player to play a card gets to play any card he wants and start a new cycle. +The game is highly stochastic because the players can start with low level cards and lose the game even if they played better than the opposition, which complicates the use of greedy epsilon exploration.",3/27/2018 12:54,,12217,CC BY-SA 3.0 +8530,5803,0,"Of course,simply because a problem (in this case, finding optimal weights) is NP-complete does not in itself mean that there aren't efficient practical methods for finding _good_ weights...",3/27/2018 15:46,,42,CC BY-SA 3.0 +8533,5192,0,"Has this been applied to NLP purposes, after POS tagging?",3/27/2018 19:22,,10001,CC BY-SA 3.0 +8538,3097,0,Both are developed by Pentti Kanerva. Look up Hyper dimensional computing to see the difference. Too long to answer here.,3/28/2018 4:09,,6444,CC BY-SA 3.0 +8548,5810,0,Welcome to AI! 
Sounds like an interesting project.,3/28/2018 18:43,,1671,CC BY-SA 3.0 +8551,5821,0,"Welcome to AI! I did a slight edit to highlight the questions, and added the ""combinatorics"" tag, which seemed appropriate for this problem. Feel free to re-edit or update as you see fit.",3/28/2018 21:16,,1671,CC BY-SA 3.0 +8553,5819,0,"Yes, one way is to reuse the network with 80 classes, but it will be very expensive since i just want to care about the presence of human only. In general, if i want to detect human only, i have also to give it data of non-human right?",3/29/2018 1:57,,14613,CC BY-SA 3.0 +8555,5819,0,"As I say in the answer, yes it would be expensive, but about the same as your approach. If you want to train it yourself, you need to give it human and non-human data.",3/29/2018 7:51,,14612,CC BY-SA 3.0 +8557,5819,0,What do you think about the ratio between human/non-human data?,3/29/2018 8:43,,14613,CC BY-SA 3.0 +8558,5819,0,Hard to say. Depends on the data. Use all of what you have :),3/29/2018 8:53,,14612,CC BY-SA 3.0 +8560,5192,0,"Grammar induction has also been [used in some machine-translation systems](https://www.google.com/search?q=%22grammar+induction%22+machine+translation),",3/29/2018 22:24,,2050,CC BY-SA 3.0 +8563,5838,0,I dunno but this seems like a simple algorithmic problem...Teaching this to a MN might be tougher and more time consuming,3/30/2018 15:05,user9947,,CC BY-SA 3.0 +8564,5192,0,Do you happen to have any idea how successful these methods were compared to the typical approaches?,3/30/2018 15:17,,10001,CC BY-SA 3.0 +8565,5838,0,"It is indeed an algorithm problem. However, the 'learning' process of knowing which subsequence goes well together is , in my opinion, a problem where NN are particulary useful.",3/30/2018 16:22,,13038,CC BY-SA 3.0 +8569,5838,1,"Is this sequence like a class label for some other related sequence that you are trying to model? If that's the case, add some detail to you example and maybe we can help you. 
If not, you could easily implement what you ask using regex in your language of choice.",3/31/2018 0:37,,14664,CC BY-SA 3.0 +8572,5835,3,"Could you clarify: ""In a stochastic environment, where the action doesn't influence state"" - do you mean that the state is not 100% predictable, and the action doesn't *fully determine* the state? E.g. deciding to move forward, and a game then moves you 1d6 steps forward. Or do you really mean that the action choice has absolutely no influence on the resulting state - and the state will evolve by rules independent of the action choice? E.g. deciding to buy or sell, getting some reward, then the state just changes randomly. The answer you need is very different depending on which case you mean.",3/31/2018 12:20,,1847,CC BY-SA 3.0 +8573,5841,0,"Why is `out` calculated later than `h10`? Technically, `out` and `h10` is on the same layer, just like `h00` is to `h01`. What is the algorithm for the calculation order?",3/31/2018 13:23,,14650,CC BY-SA 3.0 +8574,5029,0,"That's all really interesting. I once heard a story of people guessing the number of objects in a glass jar, and the average of all the guesses was very close to the actual number. This might be another example of what you're talking about.",3/31/2018 13:27,,12145,CC BY-SA 3.0 +8585,5857,2,Welcome to ai.se.. Don't you think you should narrow down some algorithms and then ask which will be the best for your problem.,4/1/2018 13:35,user9947,,CC BY-SA 3.0 +8595,5858,0,Thank you. Is there any way I can know what a minimum number of samples should be? I used to use a rule of thumb that the examples should be at least 10 times the degree of freedom.,4/1/2018 17:28,,14701,CC BY-SA 3.0 +8604,5858,2,(@Mhmd) I use a slightly different method that I was taught in school. Train your model on 80% of your training set and see how much this hurts your performance. If you have enough data you won't see much of a difference between 80% and 100% of your training set. 
If you don't have enough data you performance will suffer.,4/1/2018 18:55,,13088,CC BY-SA 3.0 +8605,5858,0,Actually we know that generalizing and creating random samples is bad..I was looking for the particular answer for the correlation op has presented,4/1/2018 19:00,user9947,,CC BY-SA 3.0 +8607,5858,0,"Yes, we know it is bad, but we don't know if it will have a negative effect on the resulting networks. It could happen by chance that we create a set of perfectly correct examples.",4/1/2018 20:17,,13088,CC BY-SA 3.0 +8608,5858,1,"OP has a correlation that says the points are increasing as a and b increase. Sure we could generate data that fit the rules provided and this would bring the error rate down, but error rate isn't the end all be all. Say we want to detect fraudulent transactions and assume 99% of transactions are non-fraudulent. A model could get 99% precision on predicting non-fraudulent transactions by always predicting transactions to be non-fraudulent. This is essentially what we would be doing by generating values. We'd be lowering the error rate by bounding our output, but lose a lot of predictive power.",4/1/2018 20:17,,13088,CC BY-SA 3.0 +8611,5838,0,"Yes it is ! We could replace the example above with , let’s say , sentences without spaces. + +ex : If i feed into the neural netwok : +‘ILikePizza’ , a correctly trained neural network would be able to ouput or classify ‘I like pizza’ , recognizing those 3 different words (sub sentences) in the character flow. Whether it’s classification or even a regression of some sort would be valid.",4/2/2018 0:12,,13038,CC BY-SA 3.0 +8616,5865,2,"There is a similar concept called [Resnet](https://www.kaggle.com/keras/resnet50), that uses ""skip connections"" to achieve a similar result. The difference is that instead of direct deep connections, the skip connections are summed into the existings the next but one layer. 
Mathematically this creates a ""short-circuit"" for gradient flow to earlier layers when training, and also makes it very easy for later layers to learn the identity function, which intuitively means they can learn *improvements* over identity function.",4/2/2018 11:07,,1847,CC BY-SA 3.0 +8617,5857,1,please elaborate on the type of content your social media website allows? do you need your 'algorithm' to look into only text data or do you need to look into image data as well? are videos allowed on your social media website?,4/2/2018 12:38,,12957,CC BY-SA 3.0 +8619,5867,0,"This is not an answer to my question; you are talking about pooling, my question is about simple convolution in backpropagation algorithm.",4/2/2018 14:52,,2189,CC BY-SA 3.0 +8620,5867,0,"Well, the point is that strides introduce pooling kind of phenomenom and otherwise it does not change CNN performance and if I read my source right, also correctness.",4/2/2018 14:55,,11810,CC BY-SA 3.0 +8621,5867,0,"Probably more direct professional expertice you could get on datascience.SE site. I belong to both of these, thus I knew sth of the issue.",4/2/2018 15:01,,11810,CC BY-SA 3.0 +8624,5857,0,Machine learning-Supervised learning algorithm can do better.,4/2/2018 18:06,,1581,CC BY-SA 3.0 +8627,5835,1,"@NeilSlater, Thanks for your response. In this case the action has no influence at all on the next state. The state will evolve by rules independent of the action choice.",4/2/2018 19:00,,14638,CC BY-SA 3.0 +8628,5835,3,"In which case, although Q learning should work without errors, you would be expending resources for the agent to learn the fact that the action does not influence state. Your situation is more commonly known as ""contextual bandit"" and there a variety of solvers out there. The positive news is that good old supervised learning should be fine to learn from the historical data. 
Then it's just a matter of how online your algorithm needs to be, and how risk averse whilst making online decisions.",4/2/2018 19:11,,1847,CC BY-SA 3.0 +8633,5873,0,Do you have any links to arguments on either side? I'd be interested to read some opinions.,4/3/2018 14:26,,13088,CC BY-SA 3.0 +8635,5873,2,"I can't say much more than what I have written. In a conversation that I followed, I understood that removing IDs from connections simplifies code. The one drawback that was mentioned is what I already said here, about not being able to track connections between the same nodes at different times. Colin Green is responsible for SharpNEAT and knows A LOT about this. He is currently refactoring all the code and brought up this topic, so maybe you can reach him if you have more doubts! https://github.com/colgreen/sharpneat-refactor",4/3/2018 14:50,,14744,CC BY-SA 3.0 +8636,3957,0,"This kind of control is what I've been studying and building for many years. This question of the cloning of a human brain is often confused with works of fiction. We have a good base of information with current social networks and specific communities. We can collect problems, doubts, desires, happy moments, fears, hates, etc. I believe we can raise more issues on the subject, we are on the right track. Generating subjects.",4/3/2018 14:52,,7800,CC BY-SA 3.0 +8638,3961,1,"he big question is what is the next step in Artificial Intelligence. What would be the next step? How will control be done? The great pioneers are making low-signed to prohibit AI's creation of killers. What we know about AI today is not enough to determine anything. Remembering that Google made things easier by launching TensorFlow and free content on how to use it to create Artificial Intelligence models, that theme spread absurdly across the planet. It's a new story that I believe will be in the next generation's schools. 
Something essential.",4/3/2018 15:13,,7800,CC BY-SA 3.0 +8642,5838,1,"You example in the question is rather useless: Splitting on character change does the job. You can learn it, but there's no reason to. Your second example is better, but `ILikePizza` is equally trivial, just split before each an uppercase character. Still nothing useful to learn. Should it handle `ilikepizza` as well? Provide a few more inputs. provide something to learn from. `+++` Can you split `strcprstskrzkrk` (I can; it's an actual sentence)? In case you can't, would you expect the network to be able to? What other input should it get?",4/3/2018 17:08,,12053,CC BY-SA 3.0 +8644,5851,0,"I have edited the question...due to lack of clarity you may have misunderstood the question..but basically I am asking is when to choose normal programming, hardiwred calculation over NN/Ai and vice versa",4/3/2018 17:21,user9947,,CC BY-SA 3.0 +8648,5870,0,Thank you very much for your detailed answer. I will try your hints a let you know if I have some achievements.,4/3/2018 18:59,,14587,CC BY-SA 3.0 +8653,5838,0,"Let's take a global approach to be able to understand the examples better : given an entire book where there is NO spaces anywhere, only characters (including special characters, like commas), in what way can we make the network learn the 'word boudaries' of this book . + Ex : **Offmythrone,jester.Thekingsitsthere.** = **Off my throne, jester. The king sits there.** The whole purpose of this network would be to learn **boundaries** between concatenated inputs, in other words, when to seperate between a constant input flow, no matter what is the input context.",4/3/2018 20:52,,13038,CC BY-SA 3.0 +8654,5838,0,"Other short examples : ` **Walktall...myfriends | +WhatcanIsay?Youguys...arethebest. | +I'vecomeupwithanewricepee | +KingsofLucis,cometome! 
| +MAGITEKENGINE!IT'SCLOSE!** `",4/3/2018 20:58,,13038,CC BY-SA 3.0 +8661,5851,1,"Hmm, the edits do not change my understanding of the situation, so my answer remains the same. I'm sure you will get other answers more suitable to your purpose and these will be voted up accordingly. Good luck!",4/4/2018 5:19,,4994,CC BY-SA 3.0 +8666,5893,1,Thank you very much for your fast and detailed answer. I will give a try to those hints.,4/4/2018 14:20,,14587,CC BY-SA 3.0 +8676,5891,3,"Interesting question. I find it interesting that machine learning is performing so well with perfect information games, where indeterminacy is purely a function of complexity (intractability). Uncertainty arising from imperfect or incomplete information is distinct, and minimax is the hedge. CGT initially was focused only on perfect information games, until Fergusson (UCLA) and other started analyzing poker, so it may just be a matter of time before we see strong, narrow reinforcement learning producing the same results in this broader class of games...",4/4/2018 17:58,,1671,CC BY-SA 3.0 +8679,5893,0,"Richard Garfield defines luck as ""uncertainty in outcomes"". Athough I don't necessarily love this definition for the term (speaking as a deterministic game designer) his point is well taken when he applies this to non-trivial, non-chance, perfect information games such as Chess. (Essentially it's an extension of the ""infinite monkey theorem"" applied to unsolved games. Because perfect play cannot be validated, there is always a chance, however minuscule, of being beaten by an opponent making purely random choices.)",4/4/2018 18:32,,1671,CC BY-SA 3.0 +8682,5874,1,"I think a big problem with algorithms and NLP is not just context, but *subtext*. The meaning is not always the literal meaning, and simple statements can often have layers of meaning. Regardless, I think your endeavor is worthwhile, even if it doesn't produce strong results initially. 
This seems like a problem that will have to be ""chipped away"" over many iterations.",4/4/2018 21:43,,1671,CC BY-SA 3.0 +8690,5904,2,"The number of hidden units does not only depend on the complexity of the function, but also the number of samples you have. Consult [this great reference](ftp://ftp.sas.com/pub/neural/FAQ3.html#A_hu).",4/5/2018 12:49,,4880,CC BY-SA 3.0 +8691,5904,1,"@BartoszKP: Thanks a lot for the reference. It looks incredibly useful in general! In this case, I am not interested in a heuristic for choosing the optimal number of hidden units. I know the problem is solvable with 2 and overfitting/underfitting is a problem so the number of data points shouldn't be relevant. My goal is more to get an intuition into why having a network with redundant capacity seems beneficial here.",4/5/2018 13:14,,14789,CC BY-SA 3.0 +8693,5906,0,"To run the simple AI, please type a vocab into the text area, click `load`, and run in the console `init()`. Then, you can click the ""test"" button.",4/5/2018 13:35,,14723,CC BY-SA 3.0 +8694,5906,0,"Oh, and Raphiel is currently running under my account.",4/5/2018 13:42,,14723,CC BY-SA 3.0 +8718,5900,0,"Great info. Thanks for posting! PS- is there a formal term for this concept? (in some sense, I was more thinking about integrating neural networks as components in expert systems, but also tried to make the question fairly general.)",4/5/2018 18:48,,1671,CC BY-SA 3.0 +8719,5874,1,Thank you. I found a library in Python that does a similar form (by words). But it takes human labor to add words and their values.,4/5/2018 18:54,,7800,CC BY-SA 3.0 +8721,5883,0,Yes. Maybe with an operation screen. But the idea is to be unsupervised.,4/5/2018 18:56,,7800,CC BY-SA 3.0 +8723,5874,1,"There are a number of papers on facial recognition techniques (and other biometrics) to gauge emotion. 
As a former poet and dramatist, I can tell you that's a much less complex approach than trying to intuit the meaning of subtle text (which might turn out to require something akin to AGI--many humans seem not to get subtext!) But if you can correlate facial expression with text or spoken language, it would probably be fairly accurate in determining if the literal meaning is correct, or if sarcasm or irony is involved.",4/5/2018 19:01,,1671,CC BY-SA 3.0 +8724,5888,0,"Thanks for the material. I fully agree. Therefore, I believe that the best way to obtain real and useful data for this type of analysis needs to come from communities, polls, etc. Something like, ""What does this word mean to you?"" That is, a long and laborious search. Remembering that the search results vary from culture, parents, needs, etc.",4/5/2018 19:02,,7800,CC BY-SA 3.0 +8725,5874,0,"""I'm so happy because today I've found my friends"" is a great, simple example of the underlying problem. Without the melody, the interpretation is in opposition to what was almost certainly intended.",4/5/2018 19:04,,1671,CC BY-SA 3.0 +8726,5874,0,"Makes sense. Maybe the text or video is not enough for a precise value, but the two together improve accuracy. Thank you.",4/5/2018 19:04,,7800,CC BY-SA 3.0 +8727,5874,0,"Following the example of the repository I added, you can join forces of compound words or single words. In case ""happy"" with value 3 and ""so happy"" the value 4 and ""so much happy"" the maximum value 5.",4/5/2018 19:07,,7800,CC BY-SA 3.0 +8728,5874,0,"Exactly. Curt Cobain was many things, but happy was not one of them. :)",4/5/2018 19:09,,1671,CC BY-SA 3.0 +8732,5900,1,I don't know if there is a formal term; I called it the Swiss Army knife strategy. 
It is possible that neural networks could be components in a future cognitive system but you can't build one using NN alone.,4/5/2018 19:54,,12118,CC BY-SA 3.0 +8736,5919,0,"Welcome to ai.se...great question, low gradient means your error is also low...why do u need a high accuracy? Also you can increase the floating point representation if you want high accuracy, or maybe increase the learning rate",4/6/2018 6:55,user9947,,CC BY-SA 3.0 +8744,5926,0,"But local minima will always exist. Arguably, with 4 hidden neurons there will be many more local minims than with 2, correct? So why then does it become less likely to get stuck in one?",4/6/2018 11:28,,14789,CC BY-SA 3.0 +8745,5917,0,"You did not misunderstand the question, but I am using JS, and Synaptic.js at that. The documentation is [here](https://github.com/cazala/synaptic/wiki). I am unable to use Keras, for I am not using Python.",4/6/2018 11:49,,14723,CC BY-SA 3.0 +8747,5917,0,Wait.... That image looks familiar: https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2017/09/General-Text-Summarization-Model-in-Keras.png,4/6/2018 12:03,,14723,CC BY-SA 3.0 +8748,5917,0,"I think I have it down. Just one more question: Does the embedding layer mean that it turns text into something the NN can understand, such as floating point numbers, while the output layer turns that back into text?",4/6/2018 12:05,,14723,CC BY-SA 3.0 +8749,5926,0,"The local minima don't necessarily increase with more neurons (although they might!). Despite that, they're harder to find, because you have more dimensions, and it must be a minimum on all of those. So, a local minimum with XY just needs to be a local minimum for XY, while with 100 neurons, you'd need it to be a minimum on all 100 dimensions for backprop to settle there.",4/6/2018 12:07,,7496,CC BY-SA 3.0 +8751,5926,0,"okay if the number of local minima grows slow enough with the number of hidden neurons this makes sense. Thanks for your answer! 
Do you know if there are any good materials out there discussing these things. That is, what the optimisation ""landscape"" looks like and how it is likely to change with the complexity of the network?",4/6/2018 13:29,,14789,CC BY-SA 3.0 +8752,5926,0,"IIRC David Silver mentions the robustness of neural nets in [this](https://www.youtube.com/watch?v=2pWv7GOvuf0) course, but I couldn't find the precise moment. He basically describes that the net has so many parameters it makes it robust to local minima. On visualizing the landscape, it's impossible with enough inputs. You could do it with your 2 input neurons, but more than that cannot be represented visually for humans. I made a workshop and mention some displays [here](https://www.dropbox.com/s/eoaj9wfxifmgnbf/NN%20Workshop.pptx)",4/6/2018 13:46,,7496,CC BY-SA 3.0 +8754,5917,0,"@Pheo, yes you are correct, it turns each word into a stream of numbers, often times a vector. This is what Word2Vec and GloVe does, which you can load into your model in python. And then the last layer looks at these and can choose the output in a variety of ways, these are known as decoders. I apologize I'm not very familiar with JS, but if you have any conceptual questions I should be able to help out.",4/6/2018 15:52,,14809,CC BY-SA 3.0 +8757,5893,0,@Neil Slater and do you have some good advices if there are more than two players or even a player in your team?,4/6/2018 17:22,,14587,CC BY-SA 3.0 +8758,5893,0,"@murthy10: I am less sure about multi-player scenarios, and do not have much advice. If the team objective is to win as a unit, and players can share information freely, then it could make sense to model the team as an agent (as opposed to each player separately). 
However, if you want to code a game agent that could play as a team member alongside a human player, that would not work.",4/6/2018 17:29,,1847,CC BY-SA 3.0 +8760,5917,0,Conceptual question: Is the vector an array of indexes or an array representation of the Unicode values?,4/6/2018 17:47,,14723,CC BY-SA 3.0 +8771,5917,0,"Neither, it actually is a vector representing the symantic meaning of a word. That's why many people choose to make their own embedding through training or use Google's Word2Vec or GloVe since they've already been trained over a large set",4/7/2018 19:42,,14809,CC BY-SA 3.0 +8777,5950,0,"Correct! However, the Elman-RNN (vanilla) can in theory learn long term dependencies as well as an LSTM, as Bengio also has a paper on. The problem is that it is very sensitive to the hyper parameters and thus they are very hard to tune.",4/8/2018 21:12,,14612,CC BY-SA 3.0 +8778,5950,0,"No, wait what? What do you mean by ""augmented feature space""?",4/8/2018 21:13,,14612,CC BY-SA 3.0 +8779,5521,8,I would argue that ReLU is actually more common in NNs today than sigmoid :),4/8/2018 21:28,,14612,CC BY-SA 3.0 +8780,5955,2,This question was also asked here [What is the difference between biological and artificial neural networks?](https://psychology.stackexchange.com/questions/7880/what-is-the-difference-between-biological-and-artificial-neural-networks).,4/8/2018 21:42,,13088,CC BY-SA 3.0 +8782,5950,0,Vector concatenation: ..,4/8/2018 21:49,,11566,CC BY-SA 3.0 +8792,5340,0,I figured out that it isn't binary classification and I try currently to understand NN but I will not warm up with the predictions...,4/9/2018 16:14,,12367,CC BY-SA 3.0 +8793,5919,0,Did you tried Backprogation Through Time (BPTT) ? Need more information about your net and task to help you,4/9/2018 16:28,,13295,CC BY-SA 3.0 +8799,5965,0,I don't think this answers the question.,4/9/2018 19:20,,11566,CC BY-SA 3.0 +8802,5971,0,Thanks for the response! 
So an observation is a subset from the state?,4/9/2018 21:58,,11566,CC BY-SA 3.0 +8803,5971,2,"@echo: Not necessarily. You could also observe things that are irrelevant to the state. So it is more accurate to say that state and observation may overlap. Generally, when they overlap a lot, RL is easier to apply, so you try and make this happen - most toy problems in RL are designed so that observation and state overlap perfectly, for convenience. But sometimes interesting problems require you to deal with missing/unobserved state data.",4/9/2018 22:03,,1847,CC BY-SA 3.0 +8804,5964,0,"In this day and age, you can expect someone has already done what you want to do.",4/9/2018 22:57,,14723,CC BY-SA 3.0 +8806,3387,0,"I'd bet that you can gain nothing as number theory is rather abstract and you'd need a system working with formulas in order to gain any useful knowledge. However, your integer divisor problem can definitely be learned as it's a final optimization problem. What's unclear is the number of neurons needed - you probably don't want to use 2**128 of them. AFAIK no CPU has a single-step 64:64 bit divider, so you most probably have to use some feedback network, too.",4/10/2018 1:10,,12053,CC BY-SA 3.0 +8807,5872,0,"You nicely showed us that a most animals are much smarter than most supercomputers, but I'd like them see playing chess against my oldish android device. *So, why aren't they doing it?* I guess, the answer is the same as to your question: That can't (yet). `+++` I don't know if a cognitive system is more than what AI will ever be able to achieve, but I'm sure, we're not that far yet. `+++` Humans vs. computers boasting with exaFLOPs is like birds vs. fish boasting with flying. 
Or do you claim you can multiply ten orders of magnitude faster than my pocket calculator?",4/10/2018 1:29,,12053,CC BY-SA 3.0 +8809,5972,0,Well I know all that...I am asking how do I convince people of my approach..How to find the pros and cons...How do I know one approach is better in this case over other in a strict mathematical sense and not intuitive sense,4/10/2018 2:23,user9947,,CC BY-SA 3.0 +8814,5965,1,This answer first distinguish and relates the two terms LSTM and feature space augmentation. For a more detailed answer please give more information (for instance a paper to each term).,4/10/2018 7:16,,13295,CC BY-SA 3.0 +8818,5941,0,I feel the need to add my own snake game... https://www.w3schools.com/code/tryit.asp?filename=FOVMY741OC7V,4/10/2018 12:04,,14723,CC BY-SA 3.0 +8822,158,1,"Architectures like NTM and DNC can be used to solve problems, like the shortest path, because they do have the ability to execute the iterative process by keeping track of what was being done (no catastrophic forgetting). But for me using just supervised learning is simply wrong as mentioned in the second answer.",4/10/2018 13:46,,14909,CC BY-SA 3.0 +8828,5981,2,"Short answer: yes, state of the art with many many layers probably not but enough to tinker and learn and make a project.",4/10/2018 16:09,,4398,CC BY-SA 3.0 +8830,5977,0,"That would be a good idea, but what about variable length repetitive sub-sequences ? If i follow the 'word' example , how would the network be able to learn between 'scholar' / 'scholarship' / 'scholar' & 'ship' ?",4/10/2018 16:44,,13038,CC BY-SA 3.0 +8831,5974,0,"Mmmm...never thought of using CNN for that kind of things, although, it kinda makes sense. I'm wondering, however, if a K-points detector could overcome the variable length problem ('scholar' / 'scholarship' / 'scholar' & 'ship' ). 
Maybe using a standard classifier outputting probability of the key points found by the CNN ?",4/10/2018 16:54,,13038,CC BY-SA 3.0 +8836,5982,1,Sounds like a statistics problem. But sounds like you _could_ use recurrent neural networks,4/10/2018 19:04,,14612,CC BY-SA 3.0 +8846,5982,1,Is the block of text variable in size? Can you add an example of how the input would look like?,4/10/2018 19:53,,14612,CC BY-SA 3.0 +8847,5982,1,"Yeah the block of text would be variable in size. An example would be something like: + +**hello my name is richard cheese. +XXX12345 +12345 Fake Street +Faketown, FakeState USA** + +or + +**XXX12345 is my handle. my interests are posting on stackoverflow and drinking myself into a coma. look forward to hearing from you!** + +...basically want a neural network to pick out the ""handle"" from any block of text that is given to it.",4/10/2018 19:58,,14918,CC BY-SA 3.0 +8848,5982,1,"Sounds like you want to parse it, and not learn it. You could take a look at regular expressions. [Here is an example](https://regex101.com/r/dAVd8E/1).",4/10/2018 20:06,,14612,CC BY-SA 3.0 +8849,5982,1,"Yes, that's what I've been using for this kind of stuff, but I was hoping that using a neural network would make it so that the handle could be identified even if the handle was misspelled or omitted entirely from the block of text as long as it had enough training. Do you think what I'm asking is feasible?",4/10/2018 20:09,,14918,CC BY-SA 3.0 +8851,5982,2,"Honestly, I don't know. I do suspect that there are better ways of doing this with more accuracy. It would heavily depend on the training data, and what misspellings you teach the algorithm. If you really want to use DL, I still stand by RNNs being your best choice here, since they can pick up ""context"". 
Check out [colah's blog](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) for more info on RNNs.",4/10/2018 20:12,,14612,CC BY-SA 3.0 +8852,5964,1,"The underlying issue seems to be that integrated circuits aren't nearly as powerful as the human brain, and we're not even sure how the brain can store and correlate all of that data. I usually recommend taking a quick look at computational complexity theory, to get a sense of the problems sizes for problems where all of the parameters are known and information is perfect and complete. Compare to nature, where this is not the case. It's a very challenging problem. Good question, though. Welcome to AI!",4/10/2018 20:18,,1671,CC BY-SA 3.0 +8857,5987,2,"As usually the main goal of training an NN is to generalise a function from example data, could you clarify what you mean by ""forgetting the first data point""? Are you wanting to purposefully overfit so that predictions against the first data point are no better than random? If not, do you have any criteria for when the training for any specific model e.g. ""model 2 trained on 1-100"" is complete, or whether its results are acceptable to you? It is clearly possible to take model 1 and re-train it, but what is missing in the question is the goal for doing so . . . and that affects the answer here",4/10/2018 22:00,,1847,CC BY-SA 3.0 +8858,5987,0,"The idea is that the first data point is not relevant any more. I dont want overfit, so imagine continuous addition of data points. For a given situation, only the 100 most recent data points are relevant. 
When a data point is added, the most recent model is trained on a data point that is (in theory) not relevant any more.",4/10/2018 22:15,,14923,CC BY-SA 3.0 +8859,5987,0,"Adding to my comment, essentially Im asking if it is possible for a model to only consider the most recent data points, without retraining the model from the bottom on the most recent data - only partially traning it in some way, making it ""forget"" the no longer relevant data form the previous training set.",4/10/2018 23:16,,14923,CC BY-SA 3.0 +8861,5950,0,Does that make sense?,4/11/2018 1:27,,11566,CC BY-SA 3.0 +8862,5919,0,Thanks for your comment and suggestion. I changed the layer from tf.contrib.rnn.LSTMBlockCell to tf.contrib.rnn.LayerNormBasicLSTMCell. Then the gradients become large enough to influence the network.,4/11/2018 5:29,,14816,CC BY-SA 3.0 +8863,5989,0,You need to add details about what exactly does that mean for those who don't know tensorflow,4/11/2018 5:50,user9947,,CC BY-SA 3.0 +8864,5950,1,No. Maybe. I don't understand how you can compare concatenating vectors and LSTM?,4/11/2018 6:02,,14612,CC BY-SA 3.0 +8866,5965,1,Everything @user3352632 says is correct and makes sense though :p,4/11/2018 7:01,,14612,CC BY-SA 3.0 +8867,5950,1,"By LSTM you mean long short term memory recurrent neural network, right?",4/11/2018 7:02,,14612,CC BY-SA 3.0 +8868,5987,0,"If you're afraid of overfitting, there are much better ways, like dropout, l2 regularisation and just stochastic batching.",4/11/2018 7:03,,14612,CC BY-SA 3.0 +8869,5987,0,"Actually, your method looks a lot like [cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) (_not_ the SE site, but the actual algorithm) that excludes some of the data for each iteration.",4/11/2018 7:06,,14612,CC BY-SA 3.0 +8870,5987,0,"@norflow: Maybe you just want online training, but this is still not clear from your question or comments. How are you testing your network? How will its predictions be used? 
For instance, how will you know training for items 0-99 is complete and useful for its task? After you have trained it on items 0-99, what is the task you will use it for - predicting item 100?",4/11/2018 7:23,,1847,CC BY-SA 3.0 +8873,5987,0,"Thank you so much for all of your time. I should clarify by describing the problem I want to solve instead of wasting your time, sorry. +I want to predict some future event. My hypothesis is that only the last 6 hours are relevant,so when time passes, the model will be trained with ""old"" data. The idea is that there will be some pattern present in the data that will help predict the next event. +So I need an adaptive model that ""forgets"" the old (non-relevant) data without retraining it from the bottom every time a time unit passes, because of real-time (every 5th min)",4/11/2018 8:08,,14923,CC BY-SA 3.0 +8877,5987,0,"@norflow: It is *still* not clear from your comment. Are you saying items 0-99 are being used together to predict something about item 100? As you describe the training data, I am expecting each item (e.g. item 25) to be a separate valid input to the network, with a fixed training target. So input data about item 25 into the network, what is the output? Is the output associated only with item 25, and the rules for association changing over time? Or would it be more accurate to say the the output depends on items 0-24 as well, so this is more about predicting outcome of a *sequence*?",4/11/2018 8:21,,1847,CC BY-SA 3.0 +8878,5987,0,"I think some simplified examples will help. Please make it clear how older items become less relevant. In a practical sense, there is a difference between a non-stationary problem (where the rules of induction keep changing) or a sequence prediction problem (where the history of the data has a predictable consequence and is part of those rules).",4/11/2018 8:23,,1847,CC BY-SA 3.0 +8879,5872,1,"@maaartinus You don't understand. Artificial intelligence has zero comprehension. 
It doesn't have actual learning since learning is impossible without comprehension or understanding. Yes, AI and robotic enthusiasts want to pretend that these things don't matter, but they do. The real goal of AI is AGI and this hasn't changed since 1956. If you think your calculator can do AGI, well ... good luck.",4/11/2018 8:30,,12118,CC BY-SA 3.0 +8880,5977,0,"Therefore, you have to annotate your data that scholar is 1111111 and scholarship is 11111112222. Is this your real use case or just an example? Then I can come up with a better idea ...",4/11/2018 8:40,,13295,CC BY-SA 3.0 +8884,5986,0,"Hmmm..that's a great answer, so you are saying to use simple algorithms for devices having a low processing power resulting in a trade-off between results vs performance?",4/11/2018 11:00,user9947,,CC BY-SA 3.0 +8890,5982,0,"My initial plan was to modify this [demo](http://caza.la/synaptic/#/wikipedia) to fit my needs, but I'm unsure how to do so.",4/11/2018 13:37,,14918,CC BY-SA 3.0 +8891,5982,0,This makes more sense now! I'll answer when I have time (if no one else did so first :) ),4/11/2018 13:41,,14612,CC BY-SA 3.0 +8894,5539,0,"The fact that you cannot find a paper about limitations... I don't know how that would even happen. It seems almost impossible to get something setup, and not read the warning page.",4/11/2018 13:46,,14723,CC BY-SA 3.0 +8910,5950,0,"Yes, that's what I mean. I'm just concatenating them temporally. 
We have a stream of input vectors and I maintain a sliding window of them.",4/12/2018 2:39,,11566,CC BY-SA 3.0 +8911,5979,0,See [Learning Descriptor Networks for 3D Shape Synthesis and Analysis](https://arxiv.org/abs/1804.00586).,4/12/2018 3:54,,14907,CC BY-SA 3.0 +8913,5950,0,Are you wondering of the differences in effect of the LSTM when input vectors are concatenated along the feature axis?,4/12/2018 6:58,,14612,CC BY-SA 3.0 +8917,6018,0,"Welcome to ai.se..totally yes....unless you have some prior knowledge that the curve .like gaussian or Maxwell, then you may use weights which reflect that although I have no idea how :)",4/12/2018 10:29,user9947,,CC BY-SA 3.0 +8918,5981,1,"How do you plan to implement it? using some language from scratch, using python and some of the deep learning packages like Tensorflow? I am not an expert at all, but my standard PC seems not good enough to install and run Tensroflow, for instance.",4/12/2018 11:53,,30433,CC BY-SA 3.0 +8920,5981,1,"I believe I will be using Python along with a few packages. However, as it's my first time trying these algorithms I have no advice to give you. After trying it out I'll try and remember to get back to you.",4/12/2018 13:56,,14913,CC BY-SA 3.0 +8921,5950,0,"No, the difference in how to model the temporal nature of the stream. Through an LSTM or through vector concatenation.",4/12/2018 14:29,,11566,CC BY-SA 3.0 +8924,5950,0,"But these are two wildly different things? How are LSTM-RNN networks an alternative to concatenate vectors?! Vector concatenation is like a mathematical operation and an LSTM is a machine learning structure, that uses time series to learn and predict. It's is like asking ""Should I use addition or a neural net to figure out if a number is big enough?"" It just doesn't make sense.",4/12/2018 16:41,,14612,CC BY-SA 3.0 +8925,5950,0,They are an alternative for temporal modelling. 
Vector concatenation is a widely used technique for sequence modelling.,4/12/2018 16:58,,11566,CC BY-SA 3.0 +8926,5950,0,Either you don’t mean what I think you mean or you don’t know what you mean. I give up x),4/12/2018 17:20,,14612,CC BY-SA 3.0 +8929,5986,0,"@DuttaA That's definitely a big part of it. But in the case of a problem set like M, every time you increase the matrix base or add dimensions, the problem size expands exponentially *and* factorially, and M can't be reduced to the degree that Sudoku can because the elements have weights (i.e. it's an ACGT/minimax problem, not exclusively combinatorics.) My assumption is that the efficacy of MCTS will diminish profoundly under that degree of game-tree expansion.",4/13/2018 3:37,,1671,CC BY-SA 3.0 +8931,5972,0,@DuttA There are no pros on using less accurate but harder approach. If your question concerns convincing people you might want to try IPS SE.,4/13/2018 10:31,,4880,CC BY-SA 3.0 +8932,5972,0,Well say I am writing a research paper...should I write just because i think so that is why I am using the method...I atleast have to show some form of evidence or negative logic or anything for that matter,4/13/2018 10:33,user9947,,CC BY-SA 3.0 +8933,5972,0,Your answer is actually the best among all....you should include some parts of @DukeZhou's answer so i can accept and upvote,4/13/2018 10:40,user9947,,CC BY-SA 3.0 +8936,6029,0,"I know about this one, and it not what I have asked (it does not label images in any way)Thanks for your answer, though!",4/13/2018 13:26,,30433,CC BY-SA 3.0 +8937,6019,0,"Thanks, I thought this would be the case, but this is not an answer, rather an opinion",4/13/2018 13:28,,30433,CC BY-SA 3.0 +8938,6019,0,@HughMungus will reword it.,4/13/2018 13:55,,14723,CC BY-SA 3.0 +8939,6019,0,"Perhaps I should reword the question: if there is any specific reason of why this cannot be implemented, but specific to this idea, rather than what can go wrong with deep learning in general",4/13/2018 
15:33,,30433,CC BY-SA 3.0 +8940,6019,0,"@HughMungus you giving up, not being able to feed it a video, etc. Mostly data conversion.",4/13/2018 15:47,,14723,CC BY-SA 3.0 +8941,6019,0,"well , if you see it that way, I guess you should answer it, I have no problem in giving you the bounty if it gets two positive votes (and nobody else gives a better answer)",4/13/2018 16:48,,30433,CC BY-SA 3.0 +8942,5989,0,It means that I normalize the input for the network layers. I hope this explains.,4/13/2018 17:45,,14816,CC BY-SA 3.0 +8943,5994,0,"You generally don't use genetic algorithms for things that have to be done real time. Like in Phillip's answer you would have to set up and run through the maze tens of thousands of times before seeing any relevant results. This is expensive both in terms of both time and money. Generally, this is why you don't see GA in robotics.",4/13/2018 18:17,,13088,CC BY-SA 3.0 +8945,6029,0,"Ah, sorry, I jumped the gun and answered the title question!",4/13/2018 20:27,,6252,CC BY-SA 3.0 +8947,5964,0,you should edit your question if your phrasing is not the way you intend it to be,4/14/2018 0:50,,6779,CC BY-SA 3.0 +8948,2833,0,check a book about algorithms heuristics using python. good luck!,4/13/2018 23:56,,6707,CC BY-SA 3.0 +8951,5672,0,"@mico I deleted your comment, but that's a really cool link!",4/14/2018 1:20,,1671,CC BY-SA 3.0 +8952,5672,0,"I did a quick search for Escher and AI, but aside from Hofstadter (who I have found to be extensively cited in an array of subfields) I'm not coming up with all that much. This is a very interesting question, so, if you can add some links or excerpts, that would be greatly appreciated. 
(PS- [Monument Valley](https://en.wikipedia.org/wiki/Monument_Valley_(video_game))!!!)",4/14/2018 1:23,,1671,CC BY-SA 3.0 +8958,6040,0,Welcome to ai.se....This is a great question..This link might help https://ai.stackexchange.com/questions/92/how-is-it-possible-that-deep-neural-networks-are-so-easily-fooled,4/14/2018 4:46,user9947,,CC BY-SA 3.0 +8964,6047,2,"If my memory is correct, I have seen objects larger than 70% of the image be detected. Have you tried to test on your own?",4/14/2018 13:56,,5763,CC BY-SA 3.0 +8965,6050,0,Welcome to ai.se...it indeed is too broad a question...choice of evaluation metrics solely depend on the problem and requirements at hand..also it depends whether you are using it to train the network...F-score is a measurement metric not used to train network..so I suggest you to narrow down the scope by editing the question a little bit and giving it some context,4/14/2018 14:01,user9947,,CC BY-SA 3.0 +8966,6050,0,"Thanks for that. Sorry about that - I will try to explain it more. I am actually after deciding when to use specifically correlation coefficient, RMSE, MAE and others for numeric data (e.g. Willmott's Index of Agreement, Nash-Sutcliffe coefficient, etc.)",4/14/2018 14:04,,15011,CC BY-SA 3.0 +8967,6043,0,Also Microsoft has much development on this area: look slide https://youtu.be/E3kFkzeaynw at 2:01 minutes in video.,4/14/2018 14:11,,11810,CC BY-SA 3.0 +8968,6050,0,"Since I actually need to explore all ways performance evaluation is used in training, validation and testing, I would be happy to know how it used to train the network. Thank you",4/14/2018 14:11,,15011,CC BY-SA 3.0 +8969,6043,0,Thanks! Were these systems trained in an unsupervised way?,4/14/2018 15:49,,30433,CC BY-SA 3.0 +8970,5964,0,"@k.c.sayz'k.csayz' Feel free to edit it, I will reverse it only in case the edited meaning disagrees with the intended one",4/14/2018 15:51,,30433,CC BY-SA 3.0 +8971,6043,0,"Well, I found another project that does! 
Look my edit.",4/14/2018 16:53,,11810,CC BY-SA 3.0 +8972,6043,0,".. and yet another. There are many aspects, that can be learned from video and speech.",4/14/2018 17:07,,11810,CC BY-SA 3.0 +8973,151,0,@Lovecraft please put this as an answer.,4/14/2018 17:48,,14723,CC BY-SA 3.0 +8974,6046,2,Could you explain what you mean by the last question a little more?,4/14/2018 22:25,,13088,CC BY-SA 3.0 +8979,6056,0,"""Do you know"" what was the inspiration for ANNs? How observations of neuron structure has given us ideas for our ANNs? ANNs do a good enough job at what they are designed for, but when we look for true AI, strong AI, I don't think it fits the bill. And I think others see this too and are looking back to the brain for answers to apply into our computer models.",4/15/2018 3:25,,15023,CC BY-SA 3.0 +8980,4281,2,"Right, I think the issue is he's multiplying -ln(p) by a potentially negative number (his reward). This loss function expects only positive rewards.",4/15/2018 3:32,,15028,CC BY-SA 3.0 +8981,6046,1,"@AndrewButler We tend to think of intelligence as applied. An AI is an applied algorithm that makes decisions with the goal of achieving more optimal results. But if the program is not running, say it's just the paper printout of the algorithm, is it still an intelligence? *(Admittedly a bit of a curve ball, but this is squarely a philosophical question about what we regard as intelligence.)*",4/15/2018 3:59,,1671,CC BY-SA 3.0 +8982,6052,0,"Computers do not have to be digital. Conway made one based on toilet flush hydraulics, just to be cheeky. The medium is integrated circuits today because that's the best medium available today. 
But Quantum computing is looking more and more viable every year.",4/15/2018 4:08,,1671,CC BY-SA 3.0 +8986,6050,0,Actually there is so many number of optimizations you can't possibly track them all...like i said any1 can invent a new metric according to his choice...so it is quite difficult to answer your question unless you provide some more info..also you are asking some very mathematical questions whose details might not fit in a single answer,4/15/2018 9:32,user9947,,CC BY-SA 3.0 +8987,6039,0,Could you give an example of a fitness function in your example? Choosing the fitness functions is the part that boggles my mind the most.,4/15/2018 10:05,,14863,CC BY-SA 3.0 +8988,5994,1,"@AndrewButler so what if I simulated the environment (to my best knowledge) a million times and then threw my robot in the real world problem, would it end up failing the task miserably? I imagine just like in neural networks, the model would be saved for the robot to use in real world.",4/15/2018 10:08,,14863,CC BY-SA 3.0 +8989,6039,0,"Additionally, what should the robot do if it steps into a space that has a 1 in it?",4/15/2018 10:13,,14863,CC BY-SA 3.0 +8990,6050,0,"@DuttaA Thank you for your reply and sorry that this is not clear. Actually in my course I am doing it is all about using ANN to model data. The question I have been asked to answer is about discussing which stages out of training, validation and testing should the performance metrics correlation coefficient, RMSE, MAE, Willmott's Index, Nash-Sutcliffe coefficient, Legate and McCabe's Index and Percentage Peak Deviation be used in? We haven't been given any more information or data. I think what I would like to know is probably when do we use error metrics and when to use non-error metrics?",4/15/2018 11:56,,15011,CC BY-SA 3.0 +8991,5994,0,"No, you've got it. This is exactly how it is done. 
Notice, however, that your simulated environment may not be perfect and so your resulting model won't be accounting for certain things.",4/15/2018 16:24,,13088,CC BY-SA 3.0 +8992,6039,1,"In this answer he says ""As we want it to get to an exit from a start, we have a natural fitness function: closeness to the exit door in the smallest number of moves""",4/15/2018 16:26,,13088,CC BY-SA 3.0 +8993,5302,0,"1. What formally qualifies as requiring human interaction? As I'd've thought that the water jug problem doesn't require human interaction; to me the reason you gave seems like just a precondition for doing the problem in a real life (as opposed to simulated) setting. +2. If a given solution is composed of multiple steps, and you could therefore break down the search into a search from the solution to the start combined with a search from the start to the solution, is the problem not decomposable? Again, my quabble is with what formally qualifies. +3. Why is the solution not a state? ...",4/15/2018 17:19,,47,CC BY-SA 3.0 +8994,5302,0,"... The state of the solved bucket is the solution so to me it would seem like the solution is finding a path to the **state**, with the path in service to the state rather than vice versa; if the state was merely in service to **the path that is the solution** then I'd think the solution was the path rather than the state.",4/15/2018 17:24,,47,CC BY-SA 3.0 +8995,5302,1,"Additionally, your answer doesn't seem to fully answer the question posed: ""Is there a generally accepted relationship between placement of a problem along these dimensions and suitable algorithms/approaches to its solution?""",4/15/2018 17:29,,47,CC BY-SA 3.0 +8996,6055,1,"Thanks for the explanation. Indeed, there's a feature map for every class. But is it useful to have images with no bounding boxes at all in the training set? 
Suppose there's only one class, does YOLO treat the images with no bounding boxes as negatives?",4/15/2018 17:40,,13068,CC BY-SA 3.0 +8997,6047,0,"@BrianO'Donnell Yes I have, but I'm just getting low f1 score for large objects.",4/15/2018 19:23,,13068,CC BY-SA 3.0 +8998,6071,1,"Actually AlphaGo uses 1 + 2 (plus a look-ahead search to help refine them), and this is relatively common too (search ""Actor-Critic"").",4/15/2018 19:59,,1847,CC BY-SA 3.0 +8999,5972,0,@DuttaA In case of a paper you either cite someone who has already tested numerous approaches and had shown which one seems the best or you propose the simplest method that leads to acceptable results. Occams' razor.,4/15/2018 20:10,,4880,CC BY-SA 3.0 +9000,6071,0,"Ah, I see! I don't think AlphaZero uses Actor-Critic though?",4/15/2018 21:57,,4199,CC BY-SA 3.0 +9005,6067,0,"Do you mean translation equivariant? pooling layers are responsible for the invariance, right? and I didn't use pooling in my methods.",4/16/2018 6:12,,5030,CC BY-SA 3.0 +9006,6071,0,"Yes you are right, it is not Actor-Critic (as far as I understand it). It does however both generate policies directly and evaluate board positions. The tree search (MCTS) is what links the two.",4/16/2018 7:03,,1847,CC BY-SA 3.0 +9010,136,0,@GuidoJorg I believe that should be an answer.,4/16/2018 16:59,,14723,CC BY-SA 3.0 +9012,5326,0,"Honestly I cant see it being to much of a sophisticated algorithm, its probably something simple like the above with a touch of normalisation, ie square rooting by the s.d. or something silly and unnecessary like that. 
Best algorithms I find are based on the simplest of statistical principles.",4/16/2018 23:39,,9413,CC BY-SA 3.0 +9014,6046,0,"@AndrewButler but admittedly, the ultimate question is in the realm of *""am I a man dreaming of being a butterfly, or a butterfly dreaming of being a man?""* One of the most salient features of Dick's work is that nearly all his books are about the nature of self in relation perception, and how perception governs subjective reality. It's a profound comment on the informational conception of the universe, which does have some traction in serious circles. A recurrent aspect of Singluarity mythology is the interchangeability of matter and information.",4/17/2018 2:23,,1671,CC BY-SA 3.0 +9016,5974,0,"you can pad the sequences, so it will have fixed sizes. Moreover, conv. layers usually can deal with variable size vectors, as long as it is all-convolutional network (no dense layers).",4/17/2018 3:35,,3250,CC BY-SA 3.0 +9033,6090,0,It looks like my question may be a bit vague. I’ll try to add more constraints this afternoon if I have time.,4/17/2018 15:41,,6252,CC BY-SA 3.0 +9038,6056,0,"@Tyler possibly SC is intimating that even with NN structures, we're sort of guessing, trying to build an analog without full understanding of the mechanics of the template (organic brain). But, by the same token, Von Neumann and many others believed it not unreasonable that organic brains will ultimately be found to be a type of ""machine"".",4/17/2018 20:51,,1671,CC BY-SA 3.0 +9039,6073,1,"Rajaniemi had some interesting thoughts on the unique power of human brains, with the idea that those structures would be worth maintaining, even after ""transmigration"" to a pure information medium (post-singularity). 
Your point about lifespan is salient.",4/17/2018 20:54,,1671,CC BY-SA 3.0 +9044,6091,0,"Without looking too deeply: The first paper's network is naive about the system it is trying to model, while the second one has more assumptions about the system it is trying to model.",4/17/2018 22:49,,6779,CC BY-SA 3.0 +9045,6092,1,Can you add the code as text? The file link (from my side) is broken.,4/18/2018 2:48,,9413,CC BY-SA 3.0 +9047,6039,2,"@Gabriele The fitness function here could be pretty simple: Just sum number of moves taken and number of squares away from exit (with a zero for this half of the score if they make it there), lowest scores are fittest. Robots that hit a space with a 1 in would be scored with a highly punitive 'death score' (say +1000 or more in this example) that puts them straight to the bottom of the fitness scoring every generation.",4/18/2018 8:32,,14997,CC BY-SA 3.0 +9049,6055,0,I edited my answer. I hope it answers your questions.,4/18/2018 11:21,,9111,CC BY-SA 3.0 +9051,6055,0,"Thanks again. Just for comparison, do you happen to know what the specificity was without throwing in images without bounding boxes?",4/18/2018 11:43,,13068,CC BY-SA 3.0 +9053,6102,0,"I don't think you would directly feed it the image and have it find the distance... Rather you would feed it a set of numbers that represent distance, vertical distance, horizontal distance, etc.",4/18/2018 14:19,,14723,CC BY-SA 3.0 +9054,6104,0,It is not a step function...Rather a Delta Dirac function..Slight miswording there,4/18/2018 15:01,user9947,,CC BY-SA 3.0 +9055,6097,1,"I fully understand what you're saying and agree. I'm not saying that the way I explained here is correct. I am looking for new logics, for new steps. For example, in your sentence: ""I would be happy if you stopped making such a noise."". The ""If you"" turns the sentence into a condition. I can create scoring rules for conditions. As a programmer, I can turn logic into an algorithm. 
the point is, I am here trying to raise this issue because I believe there are millions of logics to be implemented in this basic that I created. I thank you for your participation, I'll study your directions!",4/18/2018 17:10,,7800,CC BY-SA 3.0 +9056,6110,0,"He was not asking *how* you would do it, but rather *can* you do it.",4/18/2018 17:53,,14723,CC BY-SA 3.0 +9057,6110,0,"It is possible to do it in several ways. I passed the way I would take to create the template. It is not a theory, it is a process that can encompass other processes according to the evolution of AI.",4/18/2018 18:12,,7800,CC BY-SA 3.0 +9058,6113,1,"In addition to this, do you find the stated ""statement (Open-source reimplementations of deep learning algorithms)"" demanding for reimplementation of some Machine Learning Model's based on some paper or the reimplementation of the very learning techniques themselves (e.g. back-propagation)?",4/18/2018 19:13,,15116,CC BY-SA 3.0 +9059,6113,0,"@ShoaibAnwar probably the former; a re-implementation of BP is not of any particular value, apart from a purely educational perspective, and such efforts abound",4/18/2018 19:17,,11539,CC BY-SA 3.0 +9062,5055,0,"Ege is correct. The dying relu problem only occurs in very specific instances(dying relu is where a neuron always outputs the same value). For example, when using gradient descent and backpropagation the forward pass can clamp the weights to zero. Karpathy has some good info about that in his backprop [walkthrough](https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b)",4/19/2018 5:13,,9608,CC BY-SA 3.0 +9063,6115,4,Welcome to ai.se..This is a special class of problem called anomaly detection problem..Just do a quick Google search Andrew ng Coursera anomaly detection.,4/19/2018 6:24,user9947,,CC BY-SA 3.0 +9066,6116,1,Typical. Fell for the same thing when I made my first one.,4/19/2018 12:13,,14723,CC BY-SA 3.0 +9067,6116,1,"Uh... Which language are you using? 
Reminds me, we need to make tags for those.",4/19/2018 12:17,,14723,CC BY-SA 3.0 +9068,6116,1,Python...Can you make suggestions please?Is there anything I need to change?,4/19/2018 12:31,,15128,CC BY-SA 3.0 +9069,6116,1,Yes. **Why are you starting from scratch?** You should rethink about making your own AI library... Very hard.,4/19/2018 12:33,,14723,CC BY-SA 3.0 +9070,6116,2,Just want to understand how exactly it works..,4/19/2018 12:34,,15128,CC BY-SA 3.0 +9071,6116,1,How familiar are you with JS?,4/19/2018 12:35,,14723,CC BY-SA 3.0 +9072,6116,1,"I am not very confident about JS. Python would be better or if you could just tell me if there's anything wrong in the logic of my code, that would be great.",4/19/2018 12:36,,15128,CC BY-SA 3.0 +9073,6116,1,"I tried implementing a shallow network with two layers and it worked well. However, when I try generalizing for deep networks, I face this problem.",4/19/2018 12:37,,15128,CC BY-SA 3.0 +9094,6116,1,You know implementing backprop by one himself is quite tough and you are asking someone to debug it for u....Search for how to know my backprop implementation is correct in the SE and check for urself,4/19/2018 17:46,user9947,,CC BY-SA 3.0 +9095,6120,0,Are you sure about that? Because the less the number of samples poorer the generalization,4/19/2018 19:31,user9947,,CC BY-SA 3.0 +9097,6120,0,"Ive used it before for a skewed movie ratings dataset without issues. Keep in mind ypu still use the same amount of samples. You just change their relative frequency, such that both classes are equally likely to appear",4/19/2018 23:21,,7496,CC BY-SA 3.0 +9100,6104,2,"It's a step function if the x-axis is the input, but a Delta Dirac if the x-axis is time, you mean?",4/20/2018 6:55,,15107,CC BY-SA 3.0 +9101,6116,0,@user165213 I think what you need to do is split the image up into pieces... Not feed the network a whole.,4/20/2018 12:00,,14723,CC BY-SA 3.0 +9102,6121,1,"Thanks for your answer! 
I didn't use the original Q learning since the environment constantly changes, and I don't see how a Q-table can work with that, but I'll try your suggestions!",4/20/2018 13:16,,15133,CC BY-SA 3.0 +9103,6121,1,"What do you mean, the environment constantly changes? Are you processing image input? Do you mean there is an excessively large state space? You can either pre process the images, or extract features first and use those as input. This is a very common ML step, improving your algorithms input.",4/20/2018 16:12,,7496,CC BY-SA 3.0 +9104,6055,0,If my memory is correct: the specifucity was 1-2% lower but the sensitivity was 0.5% higher.,4/20/2018 17:14,,9111,CC BY-SA 3.0 +9105,6121,1,"I mean that in the game players spawn in random places and as they move, they leave a permanent trail behind them that kills other players if they touch it. My thinking was that Q-tables wouldn't work well with this. Also yes, my input is downized frames in real time. TimeDistributed Conv2D in Keras using 4 frames in 3 channels (different player colors), if that says anything to you.",4/20/2018 19:25,,15133,CC BY-SA 3.0 +9106,6127,0,"Thanks for the answer, also i think one of my mistakes was i assumed that alpha beta knows that we are in the right most subtree, but the default implementation of alpha beta does not know whether we are in the last tree or not correct? because if we are not in the right most tree, the value of that sub tree is important because we might need to compare it with other trees so in this case we need to check L and K because we might need to compare that value with some other sub tree which we have not checked yet, am i correct?",4/21/2018 7:30,,12782,CC BY-SA 3.0 +9111,5577,0,"What I am looking for appears to be ""sequential pattern discovery"". There are several algorithms so I still need to read a lot more. 
It is an interesting topic, because you can use the same to find temporal patterns, which means I will be able to use it for medical research too.",4/21/2018 14:39,,13192,CC BY-SA 3.0 +9113,6127,0,You are correct in both cases.,4/21/2018 16:19,,13088,CC BY-SA 3.0 +9114,6102,0,"@Pheo yes, but you would have to feed it different values for every type of ""game"". Whereas what I'm saying is, could we have some global type of value that is high when pixels are grouped together and low when pixels are spaced apart?",4/21/2018 17:46,,4199,CC BY-SA 3.0 +9115,6115,0,"@DuttaA This should probably be upgraded to an answer, since the OP's problem is exactly what Anomaly Detection is for.",4/22/2018 8:32,,12509,CC BY-SA 3.0 +9116,6130,1,"Welcome to ai.se...It is highly unclear what you are asking..There have been a few questions of this type in this SE, but all of them has been given a specific scenario..I suggest you go through the questions and add more details to your question",4/22/2018 12:50,user9947,,CC BY-SA 3.0 +9117,6130,0,How much is your free will affected by police looking at your face to see if you're a wanted criminal?,4/23/2018 2:50,,9413,CC BY-SA 3.0 +9118,6139,1,Welcome to ai.se...why do you want to optimize inputs? Isn't supposed to be constant?,4/23/2018 13:09,user9947,,CC BY-SA 3.0 +9120,6139,1,"Because that is essentially the problem I want to solve, to get the input values such as the output is maximum.",4/23/2018 13:54,,15196,CC BY-SA 3.0 +9121,6139,1,Ok this is recommender system probably...try this https://www.coursera.org/learn/machine-learning/lecture/2WoBV/collaborative-filtering maybe explore the course,4/23/2018 13:57,user9947,,CC BY-SA 3.0 +9123,6139,1,"In his recommender system he is using linear regression (he predicts values using a simple matrix multiplication) but, for this problem, the inputs and outputs are not correlated linearly. 
+ +I've edited the question to express better my problem.",4/23/2018 15:46,,15196,CC BY-SA 3.0 +9124,6139,0,Are you asking for a system that trains neural networks or are you asking for any method that can predict values?,4/23/2018 16:41,,13088,CC BY-SA 3.0 +9125,6130,0,"Definitely needs an edit, but a solid question. A. Free will has not been verified, so you should avoid this term. I suspect what you might be indicating is stronger influence over human decisions by sophisticated algorithms. B. Privacy, this is an issue already, regardless of AI. AI can certainly be used to exacerbate the problem. *Again, legit question, just needs to be reformulated.*",4/23/2018 17:16,,1671,CC BY-SA 3.0 +9129,6124,0,"I tried under-sampling. It performs quite unstable between each run, some times better and most of the time it performs worse.",4/24/2018 9:45,,15126,CC BY-SA 3.0 +9130,6124,0,"What I would do is not try one of the techniques randomly, but reduce your data to 2D and plot it before and after the transformation. Then check for plausibility. Try also for example the Smote-Tomek technique, which combines Over- and Undersampling. If both methods do not work to your satisfaction simply use class weights.",4/24/2018 11:54,,7495,CC BY-SA 3.0 +9133,1361,0,"So basically what monte carlo does in alphago is to create long term strategies , by considering different move combinations , instead of the other way around ( pick an strategy and then the moves to achieve it ) ?",4/24/2018 4:13,,15207,CC BY-SA 3.0 +9158,4773,0,"Yup. Even with electric vehicles, assuming some miracle energy source like fusion power, you still have wear and tear on the vehicles themselves, need to replace batteries, etc. Unclear that ads could cover this, but who knows. We live in a consumption-obsessed society *(at least here in the US;)*",4/24/2018 21:19,,1671,CC BY-SA 3.0 +9159,4772,0,"Excellent answer. 
*(Personally, I'd rather pay, if I have the means, than be aggressively advertised at, but I'm not confident this is the majority viewpoint.)* I do think it's worth looking at the sci-fi worlds where advertising has run amok--highly dystopian. *[File under: ""the best minds of our generation are spending all of their time trying to figure out how to sell ads""]* George Saunders wrote [a great short story on this subject.](https://www.newyorker.com/magazine/2002/01/28/my-flamboyant-grandson)",4/24/2018 21:26,,1671,CC BY-SA 3.0 +9160,6161,1,"Quite salient. I keep seeing serious scholars making the point that it's not superintelligence we need to be worrying about atm, but ""strong narrow AI"" replacing humans in the laborforce in the very future. Numerous other scholars have mentioned human ""short term thinking"" as the source of many of our current woes. You see it manifest concretely in computing in the form of [technical debt](https://en.wikipedia.org/wiki/Technical_debt), but that's just a minor symptom.",4/24/2018 21:38,,1671,CC BY-SA 3.0 +9161,6121,0,"Yeah, usig raw image pixels as you were probably using works with convolutional layers, but not so well with QTables (too large of a state space). However, you can preprocess the image and obtain simpler states. I.e, extract features from the image and feed a simpler local-perspective grid to QLearning",4/24/2018 21:59,,7496,CC BY-SA 3.0 +9162,6161,0,"a bit of a random comment, but measures of rationality and measures of intelligence are not super well co-related, which upshots that the two are not the same.",4/25/2018 5:10,,6779,CC BY-SA 3.0 +9163,6166,1,Welcome to ai.se....What parameters are you trying to find out exactly? 
Is it discrete in nature or continuous...I think you did not formulate the problem properly...You need some more info...And if your primary aim is fault detection it his highly unlikely a nn will help yoi,4/25/2018 11:33,user9947,,CC BY-SA 3.0 +9164,6166,1,"@DuttaA For example, if I have LTI, then I want to recover (at least partialy) the matrix A, as in y = A.x + b.u (x - state vector, u - input). By the way, why is fault detection a problem ? How about those journal papers I have provided (?), they detect/classify faults from the behavior/output of the system.",4/25/2018 13:10,,15231,CC BY-SA 3.0 +9166,6166,1,Fault detection is a different subclass of problem assuming it is a skewed class..That is number of faults<