diff --git "a/stack_exchange/SE/SE 2017.csv" "b/stack_exchange/SE/SE 2017.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2017.csv" @@ -0,0 +1,114791 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense, +339230,1,339234,,1/1/2017 12:14,,106,11111,"
The following commentator writes:
+
+Microservices shift your organizational dysfunction from a compile time problem to a run time problem.
+
This commentator expands on the issue saying:
+
+Feature not bug. Run time problem => prod issues => stronger, faster feedback about dysfunction to those responsible.
+
Now I get that with microservices you:
+ +My question is: What does it mean that shifting to microservices creates a run-time problem?
+",13382,,60357,,42736.73611,42949.83889,How does shifting to microservices create a run-time problem?,I'm working on an automation project in C# and it has 2 wrappers: DesktopAutomation
and BrowsersAutomation
. The first has a dependency on UIAutomation.dll
s (access to the MS desktop elements) and the latter on Selenium. Their role is understood I hope ;)
Now, there are user actions on the browsers that require a dependency on UIAutomation
(or DesktopAutomation
for that matter), since Selenium gives you access to the DOM and not the extensions buttons in Chrome for example.
So my question is, what would be the correct way / best practice, software construction wise:
+ +BrowsersAutomation
on DesktopAutomation
which has the advantage of a working project that has existing methods I can use.BrowsersAutomation
on UIAutomation.dll
s which makes this project more generic, e.g. using it in other projects won't require another dependency. Or, perhaps, some other configuration I haven't thought of...?
+",97472,,97472,,42736.55069,42736.625,Separations of concerns and dependency management in automation project,I have that self-hosted RESTful messaging service with authorization, SSL and more good stuff that goes with it. Now, I would like to consume that service, so I need an UI. Usually (for cross-platforms sake) I tend to develop ASP.NET MVC web application, but this time I'm not sure how to proceed.
+ +However, I have some ideas:
If there are two decoupled applications - a web application and a REST service - CORS will have to be enabled on the REST service (the browser enforces it on the client side).
If there is a web application that somehow uses some proxy (forwarder?) to get to the REST service, I don't need CORS. But I don't know how exactly that should be done in MVC.
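+
+For concreteness, this is roughly the kind of pass-through I have in mind (a sketch only: the controller name, route and backend URL are made up, and it assumes Web API 2 with attribute routing enabled and the usual usings):
+
+public class ProxyController : ApiController
+{
+    private static readonly HttpClient client = new HttpClient();
+
+    // Relay GETs to the internal REST service; the browser only ever
+    // talks to this origin, so CORS never comes into play.
+    [HttpGet]
+    [Route(""api/proxy/{*path}"")]
+    public async Task<HttpResponseMessage> Forward(string path)
+    {
+        var backend = new Uri(""https://internal-service.example/"" + path);
+        return await client.GetAsync(backend);
+    }
+}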
Another thing - I would really prefer this being decoupled, so I don't want a third option to be stuffing both things into one sack.
+ +I am a bit disappointed with what Google says on the topic. There should obviously be more information, or I don't know how to look for it.
+ +My questions are:
+ +I know GA questions are often almost impossible to answer exactly, but I'm looking for some general advice (although specific advice would be great too!).
+ +I've just written my second GA, which tries to find a phrase (say ""i like bananas""), which it does by generating binary strings of 5 times the length of the target string (as I have allowed 32 = 2^5 characters in my strings, the lowercase alphabet, space and five punctuation characters) and breeding and mutating them.
+ +This is all based on an example in Practical Genetic Algorithms by Randy and Sue Ellen Haupt (not sure if I'm allowed to link to Amazon, so I didn't). Other sources show similar outlines, so I don't think there is anything specific about that book; I was just reading it, and so tried their example.
+ +I tried my GA on ""colorado"" which was the one they used in the book. It found the right answer in around 200-800 generations, which compared to the 1E12 possible combinations of the allowed characters is not bad. However, the authors of the book said that their GA found the answer in just 17 generations, which makes my algorithm look incredibly slow.
+ +If theirs had managed it in (say) 100-300, I could have put my poorer performance down to a lack of experience, but 17 is a huge difference from my results. I want to know how to improve my GA to get anywhere near that.
+ +I'll post some code below. This is C#, but anyone familiar with any of the C-family of languages should be able to understand it. I don't really use much C#-specific stuff here. I won't include some of the utility functions, as they have been tested, so I know they work, and this will help keep the amount of code down. If you think I've missed out anything important, please let me know and I'll add it.
+ +First, here's my simple Chromosome class...
+ +public class Chromosome {
+ public Chromosome(string genes) {
+ Genes = genes;
+ }
+
+ public string Genes { get; set; }
+ public double Fitness { get; set; }
+}
+
+
+Here is the main routine...
+ +void Main() {
+ // We are assuming that each character is mapped to a number between 0 (a) and 25 (z),
+ // with space . , ! ? and - taking up the numbers from 26 to 31.
+ // Thus, each character can be encoded in a binary string of length 5 (ie ""00000""
+ // is a, ""11001"" is z and so on), and so any string can be encoded as a sequence
+ // of 1s and 0s, with the encoded length being five times the original string length
+ int len = target.Length * 5; // Length of gene string in each chromosome
+ int totalChromosomes = 32; // Number of chromosomes in the population
+ double crossover = 0.5;
+ // The gene number at which crossover will take place
+ int crossoverGene = (int)(len * crossover);
+ double mutationRate = 0.04;
+ // Generate the initial (random) population
+ List<Chromosome> population = Initial(totalChromosomes, len);
+ int generations = 10000;
+ int genNumber = 0;
+ Chromosome best;
+ do {
+ // get the next generation
+ population = Breed(population, crossoverGene, mutationRate);
+ // Find the best chromosome
+ best = population.OrderBy(c => c.Fitness).First();
+ genNumber++;
+ }
+ while (genNumber < generations && best.Fitness > 0);
+ Console.WriteLine(""Best fitness: "" + best.Fitness.ToString(""F3"") + ""\tGenes: ""
+ + Decode(best.Genes) + ""\t@ generation "" + genNumber + ""/"" + generations);
+}
+
+
+Here is the fitness function, which returns the number of incorrect characters...
+ +private static int Fitness(Chromosome c) {
+ int fitness = 0;
+ for (int i = 0; i < target.Length; i++) {
+ int cTarget = (int)target[i];
+ string genes = c.Genes.Substring(i * 5, 5);
+ char cChromosome = BinaryToChar(genes);
+ if (cTarget != (int)cChromosome) {
+ // Add 1 to the fitness for every incorrect character
+ fitness++;
+ }
+ }
+ return fitness;
+}
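+
+(So the example is self-contained: a minimal sketch of what BinaryToChar could look like, given the encoding described in the comments above - 0-25 mapping to a-z and 26-31 to space . , ! ? - ; my actual version is tested and may differ in detail.)
+
+private static char BinaryToChar(string genes) {
+    // Convert the 5-bit gene string (e.g. ""00010"") to its character
+    int value = Convert.ToInt32(genes, 2);
+    const string extras = "" .,!?-"";
+    return value < 26 ? (char)('a' + value) : extras[value - 26];
+}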
+
+
+The Breed function takes our current population, breeds chromosomes together and returns a new (hopefully better) population. Say we have a population of n chromosomes, we generate n/2 new chromosomes, then add on the best n/2 from the current population.
+ +The Roulette function used here is a straightforward implementation of a roulette wheel selection. I didn't include the code as I tested a lot on the previous GA, and it seems to work fine...
+ +private static List<Chromosome> Breed(List<Chromosome> population, int crossoverGene,
+ double mutationRate) {
+ List<Chromosome> nextGeneration = new List<Chromosome>();
+ for (int nChromosome = 0; nChromosome < population.Count() / 2; nChromosome++) {
+ Chromosome daddy = Roulette(population);
+ Chromosome mummy = Roulette(population);
+ string babyGenes = daddy.Genes.Substring(0, crossoverGene)
+ + mummy.Genes.Substring(crossoverGene);
+ string mutatedGenes = """";
+ foreach (char gene in babyGenes) {
+ // P() returns a random number between 0 and 1
+ mutatedGenes += P() < mutationRate ? (gene == '1' ? '0' : '1') : gene;
+ }
+ Chromosome baby = new Chromosome(mutatedGenes);
+ baby.Fitness = Fitness(baby);
+ nextGeneration.Add(baby);
+ }
+ // Add on the best of the previous generation to make up the numbers in the next gen
+ nextGeneration = nextGeneration // the new chromosomes we just bred
+ // join with the previous generation, order by fitness, best first
+ .Union(population.OrderBy(p => p.Fitness)
+ // Only take the first n chromosomes, discarding the rest
+ .Take(population.Count() - nextGeneration.Count())).ToList();
+ return nextGeneration;
+}
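+
+Since Roulette is referenced but not shown, here is a sketch of the shape mine has (a minimisation-friendly wheel, so lower fitness gets a bigger slice; P() is the same 0..1 random helper as above, and System.Linq is assumed):
+
+private static Chromosome Roulette(List<Chromosome> population) {
+    // Invert the scores so that the best (lowest) fitness gets the most weight
+    double worst = population.Max(c => c.Fitness);
+    double total = population.Sum(c => worst - c.Fitness + 1);
+    double spin = P() * total;
+    foreach (Chromosome c in population) {
+        spin -= worst - c.Fitness + 1;
+        if (spin <= 0) return c;
+    }
+    return population[population.Count - 1]; // guard against rounding error
+}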
+
+
+I hope that's enough of the code to see what I'm doing. I don't think any of the omitted functions have any significant code.
+ +I have tried this on various strings, varying the population size, crossover and mutation, but other than the fact that it just fails to find an answer on longer strings, nothing seems to have made any noticeable difference.
+ +Anyone able to give me any idea how I can improve my algorithm?
+ +The book authors mentioned that they used a population of 16. I tried varying the population, and found that values around 16 took significantly longer to converge (100 generations or more), whereas once I got up to about 50, it settled at around 200-800.
+ +Edit: Following a suggestion by amon, I tried a fitness function that compares the current and target strings on a bit-by-bit basis. I encoded the target string into a binary string and used the following fitness function...
+ +private static int Fitness(Chromosome c) {
+ int fitness = 0;
+ for (int i = 0; i < encodedTarget.Length; i++) {
+ if (c.Genes[i] != encodedTarget[i]) {
+ fitness++;
+ }
+ }
+ return fitness;
+}
+
+
+However, this didn't make any difference. I'm including it here in case anyone can make any suggestions as to how to improve it.
+",123358,,123358,,42736.76736,42736.76736,Why do my GAs take so long to converge?,I'm working on an Agile program and we are debating on how to deal with what we call ""stabilization sprints"". We have to build our team and decide on several key items but it seems there aren't really a well defined guideline to help us decide about them (or we can't find them) so I was hoping to pick your brain on this.
+ +Our first release is due in June, we have three months of stabilization but in parallel we need to build a team and start working on next release due for October and then a 3rd release for next June.
+ +Here are the items we want to decide on:
Do we build two separate teams to deal with the next release and the stabilization tasks? On one hand, having a single team (several pods) to deal with both helps us to load-balance our resources better and assign developers with deeper knowledge of the issues that require fixing to them. On the other hand, not having a dedicated team for the next release makes it difficult to plan it.
Do we size issues identified (bugs to be fixed during stabilization, technical debts) or we deal with them by assigning a percentage of the pod's velocity to bug fixing as we used to do for our normal development sprints? Sizing them helps to plan better but creates a need for debates and meetings we want to avoid.
Do we combine our stabilization tasks with next release story cards or keep them separate? This is kind of continuation of the first question. If we decide to have a single team to deal with both stabilization and new release then do we really need two backlogs or just a single one?
I've been looking for a good book/article that describes the best practices to deal with an Agile project with multiple releases planned specifically to explain the team structure and estimation model but can't find anything good.
+",258265,,9113,,42736.69722,42739.59236,Agile stabilization and release management,I'm creating an HTML5 game using javascript and have got some problems during the first instantiation of the objects of the scene.
When I instantiate the game scene I load the data of the scene from the local storage at first, and then I proceed to instantiate its objects. The problem is that the type of the game object (sprite, text...) is declared in the model, not in the object (which has a reference to the model). In this way I have to fetch the model of the object in order to know what type of game object I need to instantiate, and I really don't like it.
+I could save the type of the object as a property of the object, but it would be logically wrong: I should not be able to redefine the type of the object decided in the model, because it would easily break the implementation of the game object declared in the model. So it makes no sense to save the type of the object in the object itself, given that the model already has the job of declaring it.
+Hence, maybe I need a new architecture...
+How can I avoid fetching the model without breaking the logic of 'this is where I should save this property'?
+If the question is not clear, please provide me some feedback: I'd be happy to improve it.
I teach software engineering at undergraduate level and I have a question to UML practitioners.
+ +Most software engineering textbooks take a serious effort in covering UML diagrams. But on the other hand, I heard from many graduates that UML does not seem to be used in the trenches anymore.
+ +What UML diagrams are still being widely used in professional practice, and why ? Are there diagrams that are no longer used and why ?
+ +N.B: In order to avoid opinion based debates and discussions, please illustrate your answer with factual and objective elements (if possible, verifiable) or neutral observations on personal experience
+",257536,,209774,,42737.81042,42737.85278,Which UML diagrams are still being widely used?,I want to understand how EF track ID when the primary key is identity. (DB first)
+ +For example
+ +class User
+{
+ int Id; //auto generated via SQL identity, also the primary key of the table Users.
+ string Name;
+}
+
+
+//adding a new user-
+User user = new User () {Name=""TestUser""};//Id will be 0;
+DB.Users.Add(user);
+DB.SaveChanges();
+user.Id; //Will have value;
+
+
+Moreover, if I have a navigation property with a foreign key to the user, its Id will be updated as well.
+ +I believe that EF tracks tables by the primary key, but in this case it is determined by SQL (or whatever) on creation, so how does EF get the actual ID after creation?
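+
+Spelled out, the sequence I understand to happen is roughly this (the comments are my reading of the typical SQL Server provider behaviour, not guaranteed specifics):
+
+User user = new User() { Name = ""TestUser"" };
+DB.Users.Add(user);   // user.Id is still 0; EF only marks the entity as Added
+DB.SaveChanges();     // EF sends the INSERT, reads the generated identity value
+                      // back in the same round trip, and writes it into user.Id
+Console.WriteLine(user.Id); // now holds the database-generated value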
+",244843,,,,,42737.44306,How does Entity Framework track object with identity as key,I've always thought that a ""common library"" was a good idea. By that I mean a library that contains the common functionality that is often needed by a few different applications. It results in less code duplication/redundancy.
+ +I recently read an article (can't find it now) that said this is actually a bad idea and went as far as to say it was an ""anti-pattern"".
+ +While there are upsides to this approach, versioning and managing change mean regression testing the whole suite of apps that use the library.
+ +I'm kind of stuck in a rut for my new (Golang) project. Code deduplication has been hammered into me over the years but I feel like I should try it this time around.
+ +While writing this, I am beginning to think that this ""common lib"" approach is the result of skimping on architecture? Perhaps my design needs more thought?
+ +Interested to hear thoughts.
+",57613,,,,,43495.32778,Is a common library a good idea?,I like working in languages with static types, because I like using types as a tool for designing an API before I start coding it.
+ +I also like TDD, because it helps me concentrate on working in small steps to ensure I get consistent results.
+ +But when I combine the two approaches, I often have this problem: I design the type of an API, but before I write unit tests for part of the functionality I find I must implement it because otherwise the compiler complains about the methods being incorrectly typed. For example, in a Java project, I have the following class:
+ + public class TransformedModelObserver<O,S>
+ {
+ private O sourceModel;
+ private Function<O,S> transform;
+ // note: a ChangeNotification<S> is a class that can only be constructed with a non-null instance of S
+ private Consumer<ChangeNotification<S>> receiver;
+
+ // ....
+
+ /** Should call the receiver if and only if the source model change
+ * is visible in the transformed model.
+ */
+ public void notifySourceModelChanged ()
+ {
+
+ }
+ }
+
+
+I can simplify the test by using an identity function for the transform, which would allow for an easy first step, but the compiler complains if I don't call it anyway. So how would I work to implement this method in small test-driven steps in this scenario?
+",153823,,,,,42737.62569,TDD with predesigned static types,I have a piece of code parsing a text file line by line. I have to goals: Testing the syntax of the text and extracting information of it. It is quite likely that syntax errors occur so I want to provice helpful information about where and what.
+ +To give an idea of the problem, I have a text file like the following (simplified example):
+ +1=\tSTRING\tDevice name
+15=\tFLOAT\tSpeed
+17=\tINTEGER\tMax Speed
+18=INTEGER\tMax Speed
+
+
+As you can guess, the syntax of each line is: <Parameter ID>=\t<Data Type>\t<Description>
My goal is to
+ +My general structure is:
+ +std::vector<ParameterDAO> ParseText (std::string text)
ParameterDAO ParseTextLine (std::string text)
ParseTextLine
is called by ParseText
for each line. FYI: the strings/substrings themselves I parse with regular expressions and some standard string operations (compare, ...). But this is not the main point of my question.
+ +OK, now some more details of my implementation:
+ +std::invalid_argument(""my error message"")
Code snippet for 3:
+ +try
+{
+ Parse_HPA_CATEGORY_SingleLine_THROWS;
+}
+catch ( const std::exception& e ) // catch by const reference to avoid slicing
+{
+ std::string l_ErrorMessage = ""Error in Line x: "";
+ l_ErrorMessage.append ( e.what () );
+ throw std::invalid_argument ( l_ErrorMessage.c_str() );
+}
+
+
+This structure works and has the following benefit:
+ +But there may be some drawbacks / things I am not sure about, too:
+ +I am in the process of creating my own concatenative language, heavily based on Forth.
+ +I am having a little trouble understanding how the compiling words CREATE
and DOES>
work, and how they are implemented (How the state of Forth's run time environment changes exactly when they are executed).
I have read the following resources that give a general view, but only of how to use them, and not of how a system implements them:
+ +The following things about the behaviour of these two words are unclear to me:
+ +CREATE
takes the next (space-delimited) word from the input stream, and creates a new dictionary item for it.
+
+CREATE
fill in anything in the new dictionary item, or not?CREATE
return (on the stack?)? CREATE
and DOES>
?DOES>
'fills in' the run time behaviour of the created word.
+
+DOES>
consume as input? 17 CREATE SEVENTEEN ,
, no DOES>
is used. Is there some kind of 'default behaviour' that DOES>
overrides? These different points of confusion all arise from the core problem: I have trouble understanding what is going on, and how these concepts, which seem rather complex, can be/are implemented in a simple manner in a low-level language like Assembly.
+ +How do CREATE
and DOES>
work exactly?
Occasionally, the most logical name for something (e.g. a variable) is a reserved keyword in the language or environment of choice. When there is no equally appropriate synonym, how does one name it?
+ +I imagine there are best practice heuristics for this problem. These could be provided by the creators or governors of programming languages and environments. For example, if python.org (or Guido van Rossum) says how to deal with it in Python, that would be a good guideline in my book. An MSDN link on how to deal with it in C# would be good too.
+Alternatively, guidelines provided by major influencers in software engineering should also be valuable. Perhaps Google/Alphabet has a nice style guide that teaches us how to deal with it?
Here's just an example: in the C# language, ""default"" is a reserved keyword. When I use an enum, I might like to name the default value ""default"" (analogous to ""switch"" statements), but can't.
+(C# is case-sensitive, and enum constants should be capitalized, so ""Default"" is the obvious choice here, but let's assume our current style guide dictates all enum constants are to be lower-case.)
+We could consider the word ""defaultus"", but this does not adhere to the Principle of Least Astonishment. We should also consider ""standard"" and ""initial"", but unfortunately ""default"" is the word that exactly conveys its purpose in this situation.
This is more of a theoretical question which I hope is okay!? I want to code my own drag and drop jQuery plugin, but i'm wondering the best way to go about structuring my code and actually doing it.
+ +Note: This MAY be opinion-orientated as well, but I don't mind; I just need suggestions on a good way to go about this and structure the code, etc.
+ +Current plan:
+ +Item Class: I want to use Object Orientated Programming here, so was going to store the reference to each draggable item in an instance of a class, which are in turn stored within an array. Then within this class I can do calculations based on that draggable item.
+ +When to capture mousemove: To save on computing power incase of other JavaScript intensive scripts I was thinking about only capturing any mouse movements when the mouse is down on an item.
+ +Actually moving items: Well, I will just change the items position to fixed/absolute and adjust the top and left values, what happens on release I cover below.
+ +Moving items out of the way to place selected, and actually dropping an item in a new location: That's a long bolded one! But yeah, i'm not too sure on how I would go about this! Basically I would have a list of li's
which I want to re-order, but as I move an item to a new position I want the other items to slide out of the way smoothly, preferably using transforms as to pass the animation over to the GPU.
Moving items into other stacks: Some of my li's
may have ul's
within then, I need the ability to detect when the draggable item is being attempted to be dropped into another item. If this makes sense! :(
Kinda example structure of my items?
+ +<ul id=""sortable"">
+ <li id=""1"">Home</li>
+ <li id=""2"">Showroom
+ <ul>
+ <li>Stoke</li>
+ <li>Macclesfield</li>
+ </ul>
+ </li>
+ <li id=""3"">Finance</li>
+ <li id=""4"">Servicing</li>
+ <li id=""5"">About Us</li>
+ <li id=""6"">Contact</li>
+</ul>
+
+
+I have sort of started some of the code, however I don't want to do too much if i'm going about it all wrong!
+ +/* Global Javascript */
+(function($) {
+ $.fn.draggable = function(options) {
+ // Establish our default settings
+ var settings = $.extend({ }, options);
+
+ class Item {
+ constructor(obj) {
+ this.elm = obj;
+ }
+
+ // Return the item's dimensions (the jQuery element is stored in this.elm)
+ getDimensions() {
+ return { width: this.elm.outerWidth(), height: this.elm.outerHeight() };
+ }
+ }
+ // Store draggable items
+ var $items = new Array();
+ this.each(function() {
+ if ($(this).is('li')) {
+ $items.push(new Item($(this)));
+ }
+ });
+ console.log($items);
+
+
+ var $dragObject = null;
+ function makeClickable(object) {
+ object.onmousedown = function() {
+ $dragObject = this;
+ }
+ }
+ function mouseUp(ev) {
+ $dragObject = null;
+ }
+ }
+}(jQuery));
+$('#sortable').draggable();
+
+",196345,,,,,42737.54653,Drag and Drop with animations,I'm working on a website project for a software engineering course. This site will be using a database to store data, currently through JDBC and MySQL.
+ +Now, the first thing I would want to do is use the Bridge pattern to decouple JDBC/MySQL from the implementation of the website, so that if in the future we decide to switch to another vendor (like Microsoft SQL Server), it will be easier: ""just"" change the reference to the implementor class in the abstraction class.
+ +At the same time, many of my classes use very similar functions on the database. For example, I have three classes, TripControl, RouteControl, and LocationControl, and they each have a class they use to speak to the database (TripDB, RouteDB, LocationDB). So I was thinking, let's use the Strategy pattern, and have it so that TripControl, RouteControl, and LocationControl all talk to a Context class (using the book terminology here), and then use a Policy object to select which behaviour to use (TripPolicy for TripDB, RoutePolicy for RouteDB, LocationPolicy for LocationDB); this way it should make using the DB easy for the other devs (just choose the policy and forget about the rest).
+ +Ok, so let's say I use the Strategy pattern, without the Bridge, and I switch from MySQL to MS SQL Server (or I use both). I would need to have the following policy objects: TripPolicyMySQL, TripPolicyMS, RoutePolicyMySQL, RoutePolicyMS, LocationPolicyMySQL, LocationPolicyMS, to be able to choose which kind of database I'm working on. This makes it harder for the developers to implement their classes, and it looks (to me, at this moment at least) not really well suited to change.
+ +If I were to use the Strategy in conjunction with the Bridge, I should have something like this:
+ +The developers have just three policy objects (LocationPolicy, RoutePolicy, TripPolicy), and they just use those. Then, on a lower level, the Strategy pattern will use the Bridge's interface(For example TripDB would be a bridge for TripDBMySQL and TripDBMS), which will hide the implementation of the database, which could be MS Server or MySQL.
+ +Would doing this make any sense? I guess it's slower because of all the indirection, but it should make it easier on the developers and in theory it should make the system easier to expand.
+",235622,,209774,,42737.64722,42737.74514,"Using Bridge and Strategy together, is my idea correct/useful?",I'm trying to design a N-Tier Solution for my existing WebAPI project.
+ +I have a WebAPI project where, as of now, all the business logic is written in the controllers and the data validation is done by annotations.
+ +This is now leading to duplicate code across controllers when I'm trying to implement the same logic.
+ +So I thought of moving my business logic to a Business Layer. But I'm mostly facing challenges in returning business validations to the controller.
+ +For example, I have a code portion in the controller like
+ +//Check if User is adding himself
+ if (RequestUser.Email == model.Email)
+ {
+ return BadRequest(""Email"", ""You cannot add yourself as an User"");
+ }
+
+
+Now how do I return BadRequest from Business Class Methods?
+ +And it's getting tough when the next line of the controller is
+ +IdentityResult result = await UserManager.CreateAsync(user);
+
+ if (!result.Succeeded)
+ {
+ return result;
+ }
+
+
+ +So I cannot return both BadRequest & IdentityResult from the same method. Also, BadRequest and ModelState are not accessible outside controllers. Of course I can add System.Web.Mvc there in the BLL, but would that be a good idea?
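+
+To make the dilemma concrete, the kind of workaround I'm considering looks like this (a sketch; ServiceResult and ValidateNewUser are made-up names):
+
+public class ServiceResult
+{
+    public bool Succeeded { get; set; }
+    public string Field { get; set; }
+    public string Error { get; set; }
+}
+
+// In the BLL - no reference to System.Web needed:
+public ServiceResult ValidateNewUser(string requestUserEmail, string modelEmail)
+{
+    if (requestUserEmail == modelEmail)
+        return new ServiceResult { Succeeded = false, Field = ""Email"",
+                                   Error = ""You cannot add yourself as an User"" };
+    return new ServiceResult { Succeeded = true };
+}
+
+// Back in the controller, translate the result into HTTP:
+// if (!result.Succeeded) return BadRequest(result.Field + "": "" + result.Error);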
+ +Another thing that I'd like to know: I'm just creating methods inside the BLL which take the ViewModels that I receive in controllers. Is that a good idea for an existing project? Or should I create DTOs (much like the Models) in the BLL and use AutoMapper to map the properties, letting the BLL operate on DTOs instead of passing ViewModels?
+ +I think the latter would be more extendable, but would require more time.
+ +Lastly, if you do suggest me to go with DTO's, then I have to change at BLL DTO's as well as in Model when introducing new properties, isn't that a bad idea? Code is then duplicating here too. On other side, as of now, I change all the related ViewModels too (sometimes 2-4) (which I think is not the right approach) when adding a new property to Models.
+ +So what's the right approach?
+",156393,,,,,42737.79931,How To Design BLL in ASP.NET MVC,Let's say I want to program a parallelized web crawler which has a shared FIFO queue (multi consumer/multi producer). The queue only contains URLs. How do I detect the end of the queue?
+ +A worker process is always consumer and producer at the same time because it takes an URL from the queue, crawls it and adds any found URLs to the queue. I think there is no way to have separate processes for consumer and producer tasks in this scenario.
+ +Since the amount of input data is unknown but not infinite it's impossible to use a 'poison pill' as sentinel in the queue, right?
+ +Also, the queue size is not a reliable way to find out if the queue is empty (because of multiple consumers/producers).
+ +Please enlighten me :-)
+",258176,,,,,42737.81806,How to detect end of queue in a parallelized web crawler?,I'm not really sure if that is right ""stack"" to ask that question, well two questions actually.
+ +What is the best way to find the duplicates in a list of a list of integers (no matter what position thay are in)? +I don't necessary need code just the best way to go about this problem.
+ +eg:
+ +List<List<int>> TestData = new List<List<int>>
+{
+ new List<int> { 1, 2, 3 },
+ new List<int> { 2, 1, 3 },
+ new List<int> { 6, 8, 3 },
+ new List<int> { 9, 2, 4 },
+};
+
+
+The idea is that this will return
+ +2x) 1,2,3
+1x) 6,8,3
+1x) 9,2,4
+
+
+I've been breaking my head over this seemingly very simple question but for some reason I can't figure it out. +Hope someone is able to help, Like I said code not necessary but greatly appreciated.
+",258368,,258368,,42737.99444,42739.87708,Find duplicate in a list of a list of integers,problem:
+ +Making a video-game has the following challenges on variable storage:
+ +solutions considered so far:
+ +However, each of these solutions using the ""player id"" as an index doesn't work well when the concurrency is unstable- with players coming and going. There are three approaches (to the actual structure of the data sent):
+ +What would be the best solution to this scenario?
+",258203,,258203,,42738.09861,42738.10764,Most efficient way to store multiplayer player data?,I'm looking at the upcoming Visual Studio 2017.
+ +Under the section titled Boosted Productivity there is an image of Visual Studio being used to replace all occurrences of var with the explicit type.
+ + + +The code apparently has several problems that Visual Studio has identified as 'needs fixing'.
+ +I wanted to double-check my understanding of the use of var in C# so I read an article from 2011 by Eric Lippert called Uses and misuses of implicit typing.
+ +Eric says:
+ +++ ++
+- Use var when you have to; when you are using anonymous types.
+- Use var when the type of the declaration is obvious from the initializer, especially if it is an object creation. This eliminates redundancy.
+- Consider using var if the code emphasizes the semantic “business purpose” of the variable and downplays the “mechanical” details of its storage.
+- Use explicit types if doing so is necessary for the code to be correctly understood and maintained.
+- Use descriptive variable names regardless of whether you use “var”. Variable names should represent the semantics of the variable, not details of its storage; “decimalRate” is bad; “interestRate” is good.
+
I think most of the var usage in the code is probably ok. I think it would be ok to not use var for the bit that reads ...
+ +var tweetReady = workouts [ ... ]
+
+
+... because maybe it's not 100% immediate what type it is but even then I know pretty quickly that it's a boolean
.
The var usage for this part ...
+ +var listOfTweets = new List<string>();
+
+
+... looks to me exactly like good usage of var because I think it's redundant to do the following:
+ +List<string> listOfTweets = new List<string>();
+
+
+Although based on what Eric says the variable should probably be tweets rather than listOfTweets.
+ +What would be the reason for changing the all of the var
use here? Is there something wrong with this code that I'm missing?
I'm exploring composite pattern to write a file system, one of my requirements is to create a unique root element in this case a directory, similar to Linux System ('/'), I have seen many examples of creating this in the client like this:
+ +class CompositeDemo
+{
+ public static StringBuffer g_indent = new StringBuffer();
+
+ public static void main(String[] args)
+ {
+ Directory one = new Directory(""dir111"");
+ Directory two = new Directory(""dir222"");
+ Directory thr = new Directory(""dir333"");
+ File a = new File(""a"");
+ File b = new File(""b"");
+ File c = new File(""c"");
+ File d = new File(""d"");
+ File e = new File(""e"");
+ one.add(a);
+ one.add(two);
+ one.add(b);
+ two.add(c);
+ two.add(d);
+ two.add(thr);
+ thr.add(e);
+ one.ls();
+ }
+}
+
+
+Source: https://sourcemaking.com/design_patterns/composite/java/1
+ +Since my requirement is to create a unique root node is it best practice to create a new Class that has only one root element? Can I use a Singleton design pattern?
+",122385,,122385,,42739.20764,42739.20764,Does my file system implemented using the Composite pattern require a singleton?,I'am little confused about how business logic should be implemented using web services. For example, think about an education management application. There are simply students, teachers and courses. Now, the server side of the application may provide getStudents
operation via a WSDL interface. This operation returns a list of Student
elements.
According to the object-oriented paradigm, a class should have a certain responsibility. It should hide its internal state, and one can reach its data only through its operations. But at the client side a Student
class is only a data bag. There is no logic so no responsibility here.
Another problem is that there is no reference semantics at the client side. Normally, a student is associated with
some courses. But in the implementation a Student
object has a
list of Course
objects or it may hold some identifier for courses.
Finally, using web services (via WSDLs) seems convenient for accessing remote data but not for executing business logic remotely. Am I right, or am I missing something important about web services?
+ +Edit:
+ +My intent is to implement business logic at the server side. For example, suppose that I have classes on the server side like this:
+ +class Student
+{
+ //some properties like name, courses, etc.
+ double calculateGPA(); //calculates average grade using course credits.
+ //other operations like getName()
+}
+
+class SchoolRepository
+{
+ List<Student> getStudents();
+ List<Course> getCourses();
+ //other operations
+}
+
+
+Now, I can create a WSDL which provides the SchoolRepository
interface. So, clients get a list of students. But they cannot reach the business logic implemented in calculateGPA()
directly. I may provide another WSDL interface for that. But it breaks data and behavior encapsulation.
It may not be clear so I'll develop the idea:
+ +The point is to develop a web interface to write down notes (and anything you like), and track its evolution. My question is then: how can I store such information such that I can also keep track of modification history?
+ +On the relational database world it would ideally look like:
+Table document: | docId | authId | content | <meta> |
+Table documentHist: | docId | editDate | <data> |
The question is about what to store as the documentHist.<data>
. Should I store here all the revisions (easy but huge replication)? Or should I store only differences? (smarter, but no see how I could do this (without implementing a kind of versioning system myself).
That's why I previously mentionned Git, and even more Github which precisely do it: you can edit files and commit. We could then use here Git ""under the hood"" for our versioning. I'm just not sure how difficult this would be. (Select/Update &co looks easier to me that handling files and git command from web server, I'm maybe wrong)
+ +Thanks for any comment, clue or idea. +I maybe have minsunderstanding or misconception, do not hesitate to point it out. (same for language mistake, I'm not EN native as you may have noticed)
+ +pltrdy
Edit notes:
+ +History isn't backup: My point isn't to create database backups but instead to be able to query/work with edit history (e.g. tell the +user when/what was the last modification, when was this line added +etc...
Documents: By document I do not (necessarily) talk about file on a file system. It could just be record in a database (we could imagine 1 table for the current content of ""document"" and 1 for its history)
Volume & Goals: I aim to develop such a system for personnal need, but with a scalable design. I would otherwise just use Git. The point is to give a wbe interface to write down notes and keep track of evolutions (among other features)
The application will continuously (approximately every second) collect the location of users and store them.
+ +This data is structured. In a relational database, it would be stored as:
+| user | timestamp | latitude | longitude |
However, there is too much data. There will be 60 × 60 × 24 = 86,400 records per user, daily. Even with 1000 users, this means 86,400,000 records daily.
+ +And it is not only 86,400,000 records daily. Because these records will be processed and the processed versions of them will be stored as well. So, multiply that number with approximately 2.
+ +Essentially, I plan to make coarser grained versions of location data for easier consumption. That is:
+ +What should I use to store this data? Should I use a relational database or a NoSQL solution? What other things should I consider when designing this application?
+",158474,,158474,,42738.62222,42738.62222,How to store large amounts of _structured_ data?,I'm not sure if I am in the right place to ask this question. Please tell me if I'm not. I have the following problem:
+ +I have a production process, where a product first has to be produced, it is then stored, and packed afterwards. Hence, it can be seen as that the product has to go through two machines in series. However, there are several producing machines, and several packing machines (thus, in parallel). Also, not all products are compatible with all machines, and they have different producing and packing times on the machines.
+ +Now, I'm trying to implement the shortest/longest processing time first scheduling rule. However, I'm not sure what rules should be applied. For now, I have it like this: +A product with the shortest/longest processing time on a machine can go first. However, this does not take into account the packing time. It can be that the product with the shortest producing time, has a very high packing time, and hence, it causes that the other products may have to wait very long before they can be assigned to a packing machine. Since there are so many assignments possible, because of the parallel AND series machines, different processing times on machines etc., I'm not sure how to implement these rules in my case. Any suggestions?
+",258414,,,,,43159.06944,"How to ""model"" shortest/longest processing time first on machines in parallel and series",We are moving to Self-Hosted TFS and I am having a difficult time with setting things up properly. What we want to do is:
+ +1) Have some user accounts be testers on Project 1 and thus they can create and manage work items, but not have access to the code on the project. We got this done by setting those user accounts up as Stakeholder and it works no problems.
+ +2) Have those same users that only have access to work items on Project 1 be developers on Project 2 and have access to work items and code. This we cannot set up or have not been able to. Despite being made Admins on the project, given full Allow access as Team Members on the project, etc.
+ +Any suggestions?
+ +Thanks, +Josh
+",258446,,,,,42803.99167,Different TFS Permissioning For User on Different Projects,This question is about refactoring existing database design.
+ +My data flow is
+ +Current design has 3 tables: data_a
, data_b
, data_c
, where each table shares some columns that are identical (in name) and some that are unique to that product line.
For example, same-name columns in each table are weight
, unit_system
and a few others. The differently-named columns have values that represent physical quantities of the particular product line. Those are named using various alphanumeric identifiers, like a
, b5
, e2
, and there is a different set of them for different product line. Those sets can share elements, i.e. b5
can be in more than one table, but then something like t1
can be in one table but not the others.
Problem
+ +Currently when there is a need to add some value say x9
to product line a, I would update the database schema for data_a
to have column x9
. I make the values of x9 as 0 for existing column rows, and new records will begin to populate with the actual x9
values. Then I update the code in relevant places to insert x9
into the table or retrieve it from the table.
Existing design
+ +data_a(id, item_id, shared, different_a)
+data_b(id, item_id, shared, different_b)
+data_c(id, item_id, shared, different_c)
+
+
+
+
+where shared
columns is a group of columns that is identical in each table, while different
are columns that are disjointed in theory, as they represent 3 different product lines, but actually may share some similarly-named elements, as some variable names are the same for different product lines.
Proposed design
+ +This is where I'm struggling. Because I don't see a good clean design that is also efficient. I wanted to get rid of the need to alter database schema every time there is a new variable added to a product line. And I believe I can do that, but I also want to make an efficient design, and I don't see one.
+ +But this is my try:
+ +Keep primary key, foreign key and shared column names in a single table:
+ +data(id, item_id, shared)
+
+
+Create a single table for variables only (variables are ones found in different
sets):
data_variables(id, item_id, data_id, variable, value)
+
+
+
+
+I am not sure if this design will be worth the trouble, because ... I will actually be storing more data - all the extra data_id
or all the extra item_id
values for each variable name. There are 15 to 30 variable names for each product line. I will be storing 15 to 30 item_id
(or data_id
) fields in the new design data_variables
table, where in the old design there was only one item_id
value per table row.
Question:
+ +Is there a more efficient design that also does not require changes in schema design for every addition/deletion/modification of variable name in a product line? Might it be best to stick with existing design despite the trouble of altering schema when needing to add new variables?
+ +Using JSON for variable ""different"" fields
+ +one_data_table(id, item_id, product_line, shared, json_encoded_value_pairs);
+
+
+Decision to not use EAV (Entity–attribute–value) Model
+ +In my case Entities change very rarely if at all (on the order of years), and attributes change rarely as well, on the order of months or more. As such, reworking the database design to use EAV is probably not a good fit for my case.
+ +That aside, I am still debating on my JSON Design.
+",119333,,119333,,42739.73681,42739.73681,What is an efficient design to store variables for different product lines in ER database?,I have a client who insisted that we keep our new development separate from the main branches for the entirety of 2016. They had 3-4 other teams working on the application in various capacities. Numerous large changes have been made (switching how dependency injection is done, cleaning up code with ReSharper, etc). It has now fallen on me to merge main into our new dev branch to prepare to push our changes up the chain.
+ +On my initial merge pull, TFS reported ~6500 files with conflict resolution. Some of these will be easy, but some of them will be much more difficult (specifically some of the javascript, api controllers, and services supporting these controllers).
+ +Is there an approach I can take that will make this easier for me?
+ +To clarify, I expressed much concern with this approach multiple times along the way. The client was and is aware of the difficulties with this. Because they chose to short on QA staff (1 tester for 4 devs, no automated testing, little regression testing), they insisted that we keep our branch isolated from the changes in the main branch under the pretense that this would reduce the need for our tester to know about changes being made elsewhere.
+ +One of the bigger issues here is an upgrade to the angular version and some of the other third party softwares --unfortunately we have no come up with a good way to build this solution until all the pieces are put back into place.
+",258451,,1204,,42743.98958,42743.98958,Strategies for merging 1 year of development in Visual Studio,I still remember good old days of repositories. But repositories used to grow ugly with time. Then CQRS got mainstream. They were nice, they were a breath of fresh air. But recently I've been asking myself again and again why don't I keep the logic right in a Controller's Action method (especially in Web Api where action is some kind of command/query handler in itself).
+ +Previously I had a clear answer for that: I do it for testing as it's hard to test Controller with all those unmockable singletons and overall ugly ASP.NET infrastructure. But times have changed and ASP.NET infrastructure classes are much more unit tests friendly nowadays (especially in ASP.NET Core).
+ +Here's a typical WebApi call: command is added and SignalR clients are notified about it:
+ +public void AddClient(string clientName)
+{
+ using (var dataContext = new DataContext())
+ {
+ var client = new Client() { Name = clientName };
+
+ dataContext.Clients.Add(client);
+
+ dataContext.SaveChanges();
+
+ GlobalHost.ConnectionManager.GetHubContext<ClientsHub>().ClientWasAdded(client);
+ }
+}
+
+
+I can easily unit test/mock it. More over, thanks to OWIN I can setup local WebApi and SignalR servers and make an integration test (and pretty fast by the way).
+ +Recently I felt less and less motivation to create cumbersome Commands/Queries handlers and I tend to keep code in Web Api actions. I make an exception only if logic is repeated or it's really complicated and I want to isolate it. But I'm not sure if I'm doing the right thing here.
+ +What is the most reasonable approach for managing logic in a typical modern ASP.NET application? When is it reasonable to move your code to Commands and Queries handlers? Are there any better patterns?
+ +Update. I found this article about DDD-lite approach. So it seems like my approach of moving complicated parts of code to commands/queries handlers could be called CQRS-lite.
+",7369,,23622,,42739.53125,42739.54028,Isn't CQRS overengineering?,<.net>Let's say I have a users
resource, with two properties: name
and email
as specified by a users
JSON Schema document, which right now looks like this:
{
+ ""$schema"": ""http://json-schema.org/draft-04/schema#"",
+ ""type"": ""object"",
+ ""additionalProperties"": false,
+ ""properties"": {
+ ""name"": {
+ ""type"": ""string""
+ },
+ ""email"": {
+ ""type"": ""string""
+ }
+ },
+ ""required"": [
+ ""name"",
+ ""email""
+ ]
+}
+
+
+My requirements state that we need to be able to change the schema, e.g. to add a property such as phoneNumber
, and do so via HTTP in a RESTful way. That is, I need to be able to update the JSON Schema definition of the users
resource to look like this:
{
+ ""$schema"": ""http://json-schema.org/draft-04/schema#"",
+ ""type"": ""object"",
+ ""additionalProperties"": false,
+ ""properties"": {
+ ""name"": {
+ ""type"": ""string""
+ },
+ ""email"": {
+ ""type"": ""string""
+ },
+ ""phoneNumber"": {
+ ""type"": ""string""
+ }
+ },
+ ""required"": [
+ ""name"",
+ ""email"",
+ ""phoneNumber""
+ ]
+}
+
+
+Now, clients of the API can create new users that have the additional phoneNumber
property (where previously I would have gotten a schema validation error).
I am puzzling over how to do this. One way I can imagine doing it is by creating a ""meta-resource"" called resources
. This resource might have some properties, for example: path
and schema
. The schema
property would be a full JSON Schema object. To update the users
resource, then, I could maybe POST to resources
with an HTTP request body like:
{
+ ""path"": ""users"",
+ ""schema"": { ...JSON Schema object goes here... }
+}
+
+
+Is this a reasonable implementation? If not, why not? Alternative ideas? Any pitfalls I should watch out for? Any articles/blogs on this topic that I should read? (I haven't been able to Google successfully for this).
+",,user92338,,user92338,42738.91736,42741.62847,Define a RESTful API for creating/updating other resource definitions?,My company employs people who add and edit records in a PostgreSQL database using a web interface. These updates then need to be compiled into a mobile app (Android, iOS) via SQLite and released as a new version every few months. We haven't quite gotten around to 'hot patching' the SQLite database; that is to say, downloading updates from the server instead of recompiling the app with new data downloaded during the build process.
+ +I'm wondering what the typical process is here - how to get from server to client. My initial thought is to write a script to:
+ +It seems like a reasonable approach, but I'm wondering if there is a better way, or if there is some process that's more standard. Are there caveats to this approach?
+ +And I know that I could expose this to clients via a REST API. That's basically what I'm doing for the 'downloading' aspect. However, that's not what the boss wants to do, so this is the way it has to be. I'm asking if my approach (downloading a JSON export, importing that data, etc.) is a decent approach, or if e.g. dumping through psql and doing some magic with that data) would be better for what I need to accomplish: getting PostgreSQL data from the web into a local SQLite database.
+",109112,,109112,,42738.97222,42739.13472,From web database (PostgreSQL) to mobile (SQLite),After picking up some Swift skills with Java as my strongest language, one feature of Swift that I really like is the ability to add extensions to a class. In Java, a pattern I see very often is Utils
or Helper
classes, in which you add your methods to simplify something you're trying to accomplish. This might be a silly question, but is there any good reason not to subclass the original class in Java and just import your own with the same name?
A swift example of a Date extension would be something like this
+ +extension Date {
+ func someUniqueValue() -> Int {
+ return self.something * self.somethingElse
+ }
+}
+
+
+Then an implementation would look like this:
+ +let date = Date()
+let myThing = date.someUniqueValue()
+
+
+In Java you could have a DateHelper class, but this now seems archaic to me. Why not create a class with the same name, and extend the class you want to add a method to?
+ +class Date extends java.util.Date {
+ int someUniqueValue() {
+ return this.something * this.somethingElse;
+ }
+}
+
+
+Then the implementation would look like this:
+ +import com.me.extensions.Date
+
+...
+
+Date date = new Date();
+int myThing = date.someUniqueValue();
+
+
+Then, just import your own Date class which now acts like a class with Swift extensions.
+ +Has anyone had any success with doing this, or see any reasons to stay away from a pattern like this?
+",32455,,,,,42739.59722,Swift-like extensions in Java using inheritance,For example, to keep a CPU on in Android, I can use code like this:
+ +PowerManager powerManager = (PowerManager)getSystemService(POWER_SERVICE);
+WakeLock wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, ""abc"");
+wakeLock.acquire();
+
+
+but I think the local variables powerManager
and wakeLock
can be eliminated:
((PowerManager)getSystemService(POWER_SERVICE))
+ .newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, ""MyWakelockTag"")
+ .acquire();
+
+
+similar scene appears in iOS alert view, eg: from
+ +UIAlertView *alert = [[UIAlertView alloc]
+ initWithTitle:@""my title""
+ message:@""my message""
+ delegate:nil
+ cancelButtonTitle:@""ok""
+ otherButtonTitles:nil];
+[alert show];
+
+-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
+ [alertView release];
+}
+
+
+to:
+ +[[[UIAlertView alloc]
+ initWithTitle:@""my title""
+ message:@""my message""
+ delegate:nil
+ cancelButtonTitle:@""ok""
+ otherButtonTitles:nil] show];
+
+-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
+ [alertView release];
+}
+
+
+Is it a good practice to eliminate a local variable if it is just used once in the scope?
+",196142,,25199,,42740.98681,43566.42917,Should we eliminate local variables if we can?,I'm using Java Eclipse EMF to model my Composite Pattern. What would be the right UML representation to model aa new class (Root) which implements a unique root directory. This is the original Composite pattern.
+ + + +This is my representation:
+ + + +Target representation would be:
+ +root
+ |___ dir1
+ |___ dir2
+ |___ dir3
+ | |___ fileA
+ | |___ dir4
+ | |__ fileB
+ |
+ |___ file1
+
+",258519,,,,,42739.37778,Modeling Composite Design Pattern,How is software architecture decided in a scrum/agile project environment, if everyone is focused on just one small piece of the problem how is over all system design decided upon.
+ +There doesn't seem to be a role where one person take ownership over the technical execution of the project so you could possibly end up in a situation where by everyone individually has done their job but the over all quality of the project isn't very good.
+",94888,,,,,42739.49097,How is software architecture decided in a scrum/agile project environment?,I'm looking for some assistance and ideas for developing an algorithm for choosing random soccer teams based on the skill levels of the participating players.
+What I have so far is a list of particpating players with arbitrary skill levels between 1 and 100. e.g.
PlayerA: 30,
+PlayerB: 45,
+PlayerC: 50,
+PlayerD: 55,
+PlayerE: 30,
+PlayerF: 20,
+PlayerG: 75,
+PlayerH: 75
+
+
+I'd like the option of being able to choose random teams, but efficiently running it again to offer different results if the teams just don't look fair (even if on paper, the assigned skill levels match).
+ +I've already coded an example which creates all possible combinations of teams and orders them by fairest first, so that efficiently creates the functionality that allows the person to hit the ""randomise"" button again and instantly display new teams. However, this is only suitable for teams of 7 per side (maybe 8) as there are just too many combinations after that and it takes too long to process.
+ +Can anyone suggest a better way of doing this for more than 8 players? I've tried the option which mimics the old school yard method of picking players by taking the two best players as the captains and then each player taking the best of who is left until everyone is picked, but then i was stumped if those teams weren't acceptable and they wanted an option to randomise again.
+ +Many thanks - i'm coding this in C# but the language is probably less important than the theory.
+",93096,,,,,42741.07083,Algorithm for generating 2 random teams from player list based on skill level,I have a Google App Engine app which is used by a small amount of users of a certain niche website. The app's only function is to get data about the user from that website's API, use that data to produce a CSS file, and deliver that CSS to the user. There are a few apps (made by others) like mine for this website; mine is the newest, so my amount of traffic is small compared to the others'.
+ +However, one of the other apps (which served a large portion of the available users) just crashed due to it exceeding its GAE quotas. As a result, a large amount of users are starting to migrate to my service. Since the service is by nature not practically monetizable, I'd like to be able to continue my service without enabling billing on GAE.
+ +My question is this: The only quota that I am likely to exceed using the free limits is the bandwidth quota (specifically incoming, due to the API calls). Would it be feasible to create a new free GAE app just like the first and have the first one redirect to the second one when the first one runs out of bandwidth? What obstacles would I run into using this approach? Are there any better solutions?
+",258536,,,,,42741.11875,Using a second GAE app as backup,I am currently designing a system to support our employee staffing process. I collect weekly preferences from employees, and feed that into a system for staffing. Availabilities can be defined as absolute dates (e.g. Employee John is available on 01/01/2017 from 14:00 to 16:00). Or by recurrence, where one can define a weekly recurring availability (e.g. Employee John is available Weekly on Sundays from 14:00 to 16:00).
+ +Weekly Recurring Availabilities: +start time/end time - most of the queries would be by local time, but we might also need the UTC time in the queries. we keep the timezone as well to resolve DST issues. +employee_id
+ +Absolute Availability: +start time/end time - absolute datetime +employee_id +is_available - can represent a time an employee isn’t available at all
+ +Employee: +id +zone_id +is_active
+ +The following queries should be supported: +- Get availabilities by employee +- Get availabilities by zone and date range - Should filter out inactive employees, should return also recurring availabilities, and should also convert the recurring dates to absolute datetime records, for example: +Given employee John has a single weekly recurring availability, every Sunday from 14:00 to 16:00, an API consumer might request all employees that are available between the upcoming Sun-Sat, the output should be: +employee_id, start_datetime, end_datetime
    + +

    We need to build an ETL on the data - transfer it to a data warehouse DB. We are required to transfer recurring availabilities as their real-time availability - for instance, if I have a recurring availability on Sunday, I should transfer the future availabilities matching it: a row for each week with the actual date, for a range of a month or two into the future. Is there any approach to handle that? Should it be saved in the data warehouse with the specific dates, or should the reports over it do that?

    
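    + +

    To make the recurring-to-absolute expansion concrete, this is the transformation I mean - an illustrative C# sketch (names invented; time-zone/DST handling omitted):

    +
    // Expand one weekly recurring availability into absolute rows for N weeks.
    +static IEnumerable<(DateTime Start, DateTime End)> Expand(
    +    DayOfWeek day, TimeSpan startTime, TimeSpan endTime,
    +    DateTime rangeStart, int weeks)
    +{
    +    var first = rangeStart.Date;
    +    while (first.DayOfWeek != day) first = first.AddDays(1); // next matching weekday
    +    for (int w = 0; w < weeks; w++)
    +    {
    +        var d = first.AddDays(7 * w);
    +        yield return (d + startTime, d + endTime); // one concrete row per week
    +    }
    +}
    +
    +
    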
    + +

    We are currently thinking of saving the availabilities in Mongo in one collection for availabilities (recurring and absolute), and copying the is_employee_active and zone_id fields to each availability. We also thought of assigning start_time/end_time an absolute date for recurring availabilities, with the proximate date matching the selected day.

    
    + +

    I would like to get feedback on what's the best approach, thanks.

    
+",258540,,,,,42739.55764,DB design for a scheduling system,This is something I see all over Cocoa:
+ +func someAction(_ sender: Any)
+
+
+which is called like:
+ +someAction(someObject)
+
+
+This can be very confusing to me. The infamous example is in NSView
subclasses:
print(""Hello, World!"")
+
+
+Despite this being standard Swift syntax for printing to the console, in an NSView
, this will open the printer dialog, claiming the sender is the String ""Hello, World!""
. So, in my code I started doing this:
func someAction(sender: Any)
+
+
+but I fear that the fact I see none of this in Cocoa means it's an anti-pattern. Is that the case, or am I in the right?
+",104052,,,,,42739.65417,Is it an anti-pattern for Swift functions that take in a sender to have a label for that parameter?,I have an application which works with pure JDBC. I have a dilemma where should transaction handling go, in Service or DAO layer. I have found that in most cases it should be implemented in Service layer since DAO must be as simple as possible and exists solely to provide a connection to the Database. I like this approach, but all the solutions and examples that I've found works with Spring or other frameworks and use annotations to mark it as @Transactional
.
In pure JDBC in my DAO layer I have a DaoFactory
, I take a connection object from it in each DAO class (UserDao
, CarDao
    ), which implements connection pooling, and I use this object to connect to the database and perform CRUD operations. In the Service layer, I create an instance of the specific DAO that I need and do actions/calculations on top of it.
    
Where do I implement transaction handling here?
+",247225,,-1,,42838.53125,43418.66181,Transaction handling in DAO or Service layer in pure JDBC without frameworks,I am currently working with a system which has been upgraded piecemeal from an original Visual FoxPro solution, to a system that now has the following parts:
+ +Local FoxPro installation (this is a Point Of Sale system, so designed to be used on touchscreens in stores / salons)
Local windows service which syncs data from the local Foxpro database into a remote PostgreSQL DB over a series of REST APIs. This loop both pushes data and also checks for new data (which can come from online booking systems for example)
An online SaaS style portal which is backed by the central PostgreSQL DB and allows for a suite of additional functionality over and above the local install - dashboards, detailed reporting, marketing, online bookings amongst any others.
    +

    The final stage of this project is to replace the Foxpro system itself with a hosted solution. Ideally this POS would not fall over if the internet dropped out and would also support multiple terminals, and after a lot of research and testing of frameworks I've settled on Meteor as an ideal approach for this. It handles the reactive updating between terminals, minimongo seems to provide sufficient resilience against temporary internet outages, and overall it fits the bill.

    
+ +The architectural decision I am struggling with is with the remote database. I have knocked up a PoC project with Angular2 and Mongo, with a hosted Mongo remote DB and it works. I now need the data to be 2 way synced into the PostgreSQL DB which leaves me with 2 options:
+ +Sync the remote Mongo DB with PostgreSQL (either over the existing REST APIs or similar)
Work with one of the experimental packages and try to use my existing PostgreSQL DB as the backend, removing the need for the 'interim' Mongo DB.
My instinct is to take the first approach, it feels more robust and I already have the APIs in place, however without having a lot (almost zero) Mongo experience, am I going about this the wrong way? And if this is the right way, is there a best-paractice approach to syncing Mongo like this?
+",8425,,,,,42739.59306,Relational DB sync to Mongo DB,I have an intern and he writes code fast.
+ +However, I have difficulty making him understand the importance of writing classes and follow the OOP paradigm.
+ +We recently had a discussion that went like something this:
+ +""Instead of having this long function that extracts data from two different queries and then combine the data into a new data structure as a standalone function, why not start by putting it in a class?
+ +I understand that it's not much differences for now, but I can foresee that this class will grow to have more functions and the next guy who takes over will naturally refactor the giant function into more functions within the same class.""
+ +When he objected, I told him, ""Okay, I gave you my criteria (write the function within a class) and my reason (we will likely have it as a class in the future, might as well start now no matter how imperfect the start). If you have a better criteria and a better reason, why don't you suggest it?""
+ +One day later his reply was, ""python is an object oriented programming language so when codes are organised inside a file, it is somewhat oop alr""
+ +How do I make him understand the importance or better yet appreciate the importance of software craftsmanship?
+ +In case, I made some bad assumptions myself, I am willing to stand corrected and I understand the dangers of asking this question and having it closed down. So if there was a better place to pose this question, I am willing to try it.
+",16777,,,,,42739.68125,How do you explain the importance of writing classes over writing procedural functions to a programmer?,Let's suppose we have a nullable variable.
+ +int? myVar = null;
+
+
+If we wish to use it's value, there are at least two options:
+ +Explicit casting
+ +DoSomething((int)myVar);
+
+
+Calling Nullable<T>
's .Value
property
DoSomething(myVar.Value);
+
+
+I've noticed that when the conversion fails, in both cases the same exception is thrown (System.InvalidOperationException: Nullable object must have a value
), which makes me think that they are both implemented in the same way (but I have found not evidence for that claim).
My question is:
+ +Update:
+ +It might be obvious, but just in case you stumble into this question, you might consider that an advantage of .Value
versus explicit casting is that it prevents from unintendedly trying to cast to an incorrect type.
On the other hand, explicit casting might be easier to read in some scenarios. For instance, having var value = (int)myVar;
allows an easier identification of the type than var value = myVar.Value;
Let's say I have a reverse proxy set up getting traffic at http://gluten-free-snacks.example.com. It serves different URLs by sub-directories, not sub-domains, for a better web UX.
+ +Its default behavior is to route all requests to a WordPress site which I handed over to the marketing team which they will definitely use to create some social media buzz and generate leads. No questions there. The reverse proxy's additional behavior is that all requests to http://gluten-free-snacks.example.com/my-account/*
get routed to a separate server running a small CRUD app. It's running express.js, or not if you'd prefer.
Should I write this app to serve requests from /
(or ./
, in another sense) and have the proxy hide from it the fact it's publicly available at /my-account/
?
From /
(agnostic about its URL and directory), the code seems more self-contained and easy to refactor, and we've separated out what seems to be a networking detail. However, all its HTML links to static assets like /stylesheets/main.css
are now broken, because they're actually available at /my-account/stylesheets.main.css
. In fact, all its links need to become relative, which hurts refactorability.
Should I:
+ +/
and use relative paths for links?/my-account/
and use absolute paths for links?Multiple answers may apply.
+",171407,,-1,,42814.43681,42739.90347,"Should a web application be aware of its URL, including its sub-directory?",Android developers probably are familiar with Ceja's Clean Architecture, where use cases are classes that implement the Command Pattern.
+ +Shvets defines the pattern intents as follows:
+ +I use that approach in order to improve code readability and testability. But, after reading Shvets's Anti-Patterns course, I got confused with his Functional Decomposition Anti-Pattern definition:
+ +How may I am figure out if I am using Functional Decomposition Anti-Pattern instead of Command Pattern?
+",212528,,212528,,42739.83125,42741.37986,Command Pattern vs Functional Decomposition,When we call the same function on a list of things, we call that ""map"". What do we call it when we call a list of functions on the same data? I don't mean pipe - not feeding the output of each function in turn into the next function - but simply iterating over a list of functions, passing each the same input?
+",8120,,1204,,42739.79306,42740.62083,What is a term for iterating over many functions with the same input?,Suppose you set up a Redis cluster with one master and two slaves. Two clients are connected to each of the slaves. Both clients make conflicting changes at the same time:
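    + +

    To make the shape concrete (illustrative C#):

    +
    // ""map"": one function applied over many values.
    +var squares = new[] { 1, 2, 3 }.Select(x => x * x);   // 1, 4, 9
    +
    +// What I'm asking about: many functions applied to one value.
    +var fs = new List<Func<int, int>> { x => x + 1, x => x * 2 };
    +var results = fs.Select(f => f(10));                  // 11, 20
    +
    +
    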
+ + + +What happens if these changes are replicated to Master at around the same time? Are they just applied to Master in the order they are received, then replicated back down?
+ +What if transactions are used? Is the result eventually consistent, i.e. does Master resolve the conflict by applying the transactions in some order, then replicate the resolution down?
+ +I don't expect perfect consistency from a distributed cache, but I do want to understand the fine points so that I use caching well. The application I'm working on uses the distributed cache for coordination among worker threads/processes. For example, when one worker processes an item, it puts a key in the cache with an expiration of 1 minute telling other workers not to process the same item. It's acceptable if two or three workers end up processing the same item, but this mechanism prevents infinite reprocessing.
+",3650,,,,,42830.11944,How does Redis (or any typical distributed cache) handle replication conflicts?,I need to send messages from a Windows Service to a Azure Service Fabric Stateful service. The network connection is not very reliable, and there must not be lost data. I was hoping I could use NServiceBus with a store & forward pattern to send the messages. Is my thinking fundamentally flawed?
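    + +

    For reference, the coordination trick I described looks roughly like this with the StackExchange.Redis client (a sketch; the key naming is mine) - the important part is the atomic set-if-not-exists combined with an expiry:

    +
    // Claim an item for one minute; only one worker's set-if-not-exists wins.
    +IDatabase cache = connection.GetDatabase(); // connection: a ConnectionMultiplexer
    +bool claimed = cache.StringSet(
    +    ""processing:"" + itemId,     // one key per item (naming is illustrative)
    +    Environment.MachineName,     // the value is arbitrary for this purpose
    +    TimeSpan.FromMinutes(1),     // expiry, so a crashed worker can't block forever
    +    When.NotExists);             // maps to Redis SET ... NX EX
    +if (claimed) { /* process the item */ }
    +
    +
    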
+",258591,,258591,,42740.88333,42740.88333,Durable messaging over HTTP,According to this clean-code guide you should encapsulate conditionals:
+ +function shouldShowSpinner() {
+ return fsm.state === 'fetching' && isEmpty(listNode);
+}
+
+if (shouldShowSpinner()) {
+ // ...
+}
+
+
+Why not just write:
+ +const shouldShowSpinner = fsm.state === 'fetching' && isEmpty(listNode)
+
+if (shouldShowSpinner) {
+ // ...
+}
+
+",227145,,,,,42740.10556,What are the benefits of encapsulating conditionals (in functions)?,I am developing a website where client needs that any notification should reach as soon as it is created. so i am using setinterval function of jquery and using ajax requests to get the notifications. the time interval I set is 2 seconds. and its not the only ajax request which is going this way. +there are following ajax request being done within interval of 2 sec
+ +I am worried because i think sending this much request at very short time period may disturb the system. and worse if the number of users increase. +Please tell me your opinions and solutions to this if this is wrong aproach
+",258616,,,,,42741.78472,sending ajax request with setinterval . is it good?,For the first I would like to mention that I'm newbie in real-time systems programming +That's why I'm not sure if my questions are correct. Sorry for that +But I need some help
+ +Question in short: +How to implement hard real-time software to be sure it meets hard deadlines? It is necessary to use some QNX features? Or it is just enough to write it for linux, port to QNX and it will be real-time by default?
+ +Full question: +We have implemented some complex cross-platform multiprocess software with inter-process communcation for Linux, Windows, Android and QNX. +Programming language is C++, we use Boost and planty of other libs. +Our software does it's job well and quickly but it is still prototype. +For production purposes we need to do it real-time +Some of our features have to be real-time and very robust because they are very important and safety of people that use our software may depend on them. +They work pretty quickly - up to hundreds of milliseconds. But I'm not sure that our system is really real-time because of this fact (am I right?).
+ +So there is a main question: how to modify our software to be real-time? +I've googled a lot but I still have no idea how to do it.
+ +Some additional information about our platforms: +Linux and Windows we currently use only for testing purposes. +Android - we still haven't decided whether we need it. +QNX - is our target OS for production. +I guess that answer for my next question is ""NO"" :) +But is it possible at all to implenet cross-platform real-time software (for real-time OSes (RTOS) as well as for general purpose OSes (GPOS) )?
+ +Possibly we need to make our efforts to implement all real-time features only for QNX? +But I still don't understand how to do it. Could somebody shed a light on this question?
+",258632,,198652,,42740.63819,42740.63819,How to modify software to become real-time?,A little bit of confusion over here.
+ +I am trying to reproduce Git's behavior regarding pagers and editors (as I think Git developers already done good (maybe the best) design choices in this scope).
+ +While trying to break it down I found that Git uses the pager/editor set to the environment variable $PAGER/$EDITOR
. However even if $PAGER/$EDITOR
is not set, git still opens a pager/editor.
For example, on my system when I run.
+ +$ PAGER=cat git log
+
+
+Git works as expected and uses cat
to print the data.
But I (obviously) don't have to do that. And even if $PAGER
is not set, which is the case by default on my system according to the following command.
$ echo $PAGER
+
+$
+
+
+Git still can open a nice, well chosen pager (less
in my case) to print data properly.
This looks neat! This is (to a certain extent) the behavior I am looking for.
+ +But I am not able to find out how this is implemented. Is the default pager/editor is chosen at build time? If so how can I do the same knowing that I am using autotools
as my build system. And by how I mean how should the option for choosing the default pager/editor look like? And is there any specific autoconf
/automake
macro(s) dedicated to this.
Is this a dynamic configuration (Can be changed after the build in a configuration file)? And if so, I'd like to take a look at this configuration file. Where can I find it?
+ +Maybe this is more complicated than that and Git is able to guess and automatically choose the pager/editor by it self. And if this is the case, I'd like to know how it does that.
+ +Any advice or pointers will be helpful. Not necessarily about how Git is implementing the stuff. Therefore I'd like to point out that the package I am building is intended to be cross-platform, easily compilable/cross-compilable to non linux-like platforms. Which may or may not have a convenient command line editor/pager (BTW. can I support GUI editors?) ie. a binary provider might have to include the editor/pager to the deployment package. I want to make that process as easy as possible (the binary provider should not look at the code).
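    + +

    From what I've pieced together so far, my working assumption is that it's a plain fallback chain ending in a compile-time default - roughly this sketch (expressed in C#; the exact order and the final ""less"" default are my guesses, not verified Git behaviour):

    +
    // Assumed lookup chain for a Git-like pager.
    +string configuredPager = null; // stand-in for a core.pager-style config value
    +string pager = Environment.GetEnvironmentVariable(""GIT_PAGER"")
    +    ?? configuredPager
    +    ?? Environment.GetEnvironmentVariable(""PAGER"")
    +    ?? ""less""; // baked-in default that the build system could substitute
    +
    +
    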
+ +Basically I want to make design choices as best as I can afford. With a little boost from you guys I can do even better.
+ +Thanks.
+",257538,,257538,,42741.96528,42741.96528,Git-like pager/editor management,Is it a best practice to initialize class dependencies in a constructor or should a class be initialized in the method where it is used. Let's say we have the following situation, and PriceCalcService is used only in a couple of methods. Order also takes other parameters, and sets its state.
    +

    public class PriceCalcService { .......... }
    
+
    +public class Order {
    
+ public string SomeOrderProperty;
+ private PriceCalcService priceCalcService;
+
    +    public Order(string someOrderProperty) {
    
    +        priceCalcService = new PriceCalcService();
    
+ SomeOrderProperty = someOrderProperty;
+ }
+
+ public void Method1() {
    +        priceCalcService.MethodX();
    
+ }
+ .......
+}
+
+
+Or should the PriceCalcService be initialized only in the methods that use it?
    +

    public class PriceCalcService { .......... }
    
+
    +public class Order {
    
+
+ public string SomeOrderProperty;
    +    public Order(string someOrderProperty) {
    
    +        SomeOrderProperty = someOrderProperty;
    
+ }
+
+ public void Method1() {
+ new PriceCalcService().MethodX();
+ }
+ .......
+}
+
+
+First example: Advantage: We can see the dependencies + Disadvantage: We instantiate a class even if it is not used. Which method should I choose?
+",257950,,257950,,42740.53889,42741.01458,Constructor containing class dependencies,I'm trying to rewrite some code I wrote time ago using some C++ good practices. To achieve such a goal I'm using as reference Effective C++ (1) and google coding convention (2). According to (2) a function should be declared inline if it is 10 lines or less, according to (1) furthermore the compiler could ignore the inline directive, for example when there are loops or recursion (just some example is provided so I don't know all the cases that would be ignored by the compiler).
+ +Say I then have 2 functions, the first one is 10 lines, and there's no call to any other function and no external reference in general. The second one assume is still 10 lines but at some point there's a call to the first one
+ +Something like
+ +Type1 f(Type2 arg) {
+ //10 lines of self contained code
+}
+
+Type3 g(Type4 arg) {
+ //0 <= n <= 8 lines of code
    +  //f(x); // call to the first function
    
+ //9 - n lines of code
+}
+
+
+I would declare the f
inline, because of the suggestion given by google (fully justified) But I would be puzzled about g
what would be a good practice here? Would declaring g
     as inline be ignored by the compiler? If not, can I still have the benefits of the inline directive?

    
Let's suppose I have a backend with API-only Rails. There is also a Javascript single-page application (Aurelia, but could be something else) talking to this API.
+ +Should I keep these together, in the same Git repository, integrating Rails with Aurelia to some extent, maybe with the Rails asset pipeline building/bundling Aurelia somehow? Can this even be done with reasonable effort? Or should I keep them totally separate, because in reality they are two separate things?
+ +What are the pros/cons of having the Aurelia project set up inside the Rails project or totally separate?
+ +Also I suspect this will be different during development and in production. In prod, the Aurelia app will be about two .js files anyway, which will be served by the web server as usual. I think it's better to use Aurelia tooling separately to build this.
+ +How should this be done properly?
+",247013,,,,,43128.8875,How should Rails be set up with an SPA client like Aurelia?,A recent bug fix required me to go over code written by other team members, where I found this (it's C#):
+ +return (decimal)CostIn > 0 && CostOut > 0 ? (((decimal)CostOut - (decimal)CostIn) / (decimal)CostOut) * 100 : 0;
+
+
+Now, allowing there's a good reason for all those casts, this still seems very difficult to follow. There was a minor bug in the calculation and I had to untangle it to fix the issue.
+ +I know this person's coding style from code review, and his approach is that shorter is almost always better. And of course there's value there: we've all seen unnecessarily complex chains of conditional logic that could be tidied with a few well-placed operators. But he's clearly more adept than me at following chains of operators crammed into a single statement.
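    + +

    For comparison, an untangled but equivalent version of that line might look like this (assuming, as the note below explains, that CostIn and CostOut are nullable decimal? properties):

    +
    // Same semantics, spelled out step by step.
    +bool hasValidCosts = CostIn > 0 && CostOut > 0; // false when either is null
    +if (!hasValidCosts)
    +    return 0;
    +
    +decimal costIn = (decimal)CostIn;
    +decimal costOut = (decimal)CostOut;
    +return (costOut - costIn) / costOut * 100;
    +
    +
    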
+ +This is, of course, ultimately a matter of style. But has anything been written or researched on recognizing the point where striving for code brevity stops being useful and becomes a barrier to comprehension?
+ +The reason for the casts is Entity Framework. The db needs to store these as nullable types. Decimal? is not equivalent to Decimal in C# and needs to be cast.
+",22742,,155513,,43631.09792,43631.97222,At what point is brevity no longer a virtue?,I have been spending quite a lot of time trying to decide if I should use apache** or nginx. I am very biased towards nginx due to the simple configuration, better scalability and it just feels more secure overall.
+ +However, AJAX is a must have on my list of requirements, so if nginx prohibits the implementation of AJAX, or if it is just not worth the effort, then I wouldn't mind using apache.
+ +So the question is, does the choice of the web server (in my case nginx vs. Apache) makes a difference when one wants to implement AJAX? Are there any additional components/installations required?
+ +**For the purpose of answering this, I suggest to treat httpd
and tomcat
as one and the same.
I think I've got the hang of writing a GA when you know the number of genes in a chromosome. For example, if you're searching for a string, and you know the length, you can just generate your initial population of random strings, then breed and mutate until you (hopefully) converge on a solution. Similarly, in the travelling salesman probelm, you know how many cities there are, and so are only involved with changing the order.
+ +However, you don't always know the number of inputs. Say you want to solve the problem of given the digits 0 through 9 and the operators +, -, * and /, find a sequence that will represent a given target number. The operators will be applied sequentially from left to right as you read (taken from this page). You don't know in advance how long the answer will be, it could be as simple as a single digit, or a complex string of additions, multiplications, etc. For that matter, any given target will have multiple representations (eg 8 * 3 is the same as 2 * 2 * 6 which is the same as 2 * 4 + 4 * 2 and so on).
+ +How would you write a GA for this? You don't know how long to make the gene string in the initial population. You could generate strings of varying length, but then how do you know how far to go? Maybe a good solution would be just one character longer?
+ +I wondered about introducing an extra character into the vocabulary to represent a null place, so the solution ""8 * 3"" would be represented as ""8 * 3 null null null..."" but at least two immediate problems with this are a) you still need to pick a maximum length, and b) You would penalise shorter solutions, as they would only be found if you hit a long string of nulls at the end.
+ +Please can anyone explain how you would approach such a probelm.
+",123358,,123358,,42740.85417,42743.65486,How do you encode the genes when you don't know the length?,I'm designing a configurable api and I've seen a few ways to accept options from the caller.
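    + +

    One direction I've been sketching is simply to allow chromosomes of different lengths and let crossover pick an independent cut point in each parent, so lengths can drift up and down across generations - an illustrative C# sketch, not a known-good answer:

    +
    // Variable-length crossover: the child's length can differ from both parents'.
    +static readonly Random Rng = new Random();
    +static List<char> Crossover(List<char> a, List<char> b)
    +{
    +    int cutA = Rng.Next(a.Count + 1); // cut points are chosen independently,
    +    int cutB = Rng.Next(b.Count + 1); // so length itself is under selection
    +    return a.Take(cutA).Concat(b.Skip(cutB)).ToList();
    +}
    +
    +
    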
+ +One is using an options object like so:
+ +var options = new MyApiConfigurationOptions {
+ Option1 = true,
+ Option2 = false
+};
+
+var api1 = MyApiFactory.Create(options);
+
+
+Another is using a configuration function:
+ +var api2 = MyApiFactory.Create(o => {
+ o.Option1 = true;
+ o.Option2 = false;
+});
+
+
+Is one approach any better/worse/different than the other? Is there any real difference or would it be nice to support both so the caller can use whatever syntax they prefer?
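    + +

    For what it's worth, my assumption (sketch below, not verified against any particular library, and I'm inventing a MyApi return type) is that the callback flavour is usually just a thin wrapper over the options object, which would make supporting both cheap:

    +
    // The Action<T> overload can delegate to the options-object overload.
    +public static MyApi Create(Action<MyApiConfigurationOptions> configure)
    +{
    +    var options = new MyApiConfigurationOptions(); // start from defaults
    +    configure(options);                            // let the caller mutate them
    +    return Create(options);                        // reuse the other overload
    +}
    +
    +
    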
+",73165,Eric B,,,,42740.99514,Configuration object vs function,Say I'm writing a GA to solve the travelling salesman problem. I don't know in advance what the shortest path is, so how does my GA know when to stop?
+ +If I wait until the best fitness doesn't reduce for a few generations, how do I know I'm not temporarily stuck in a local minimum, which some mutation in the next generation may help? If the best fitness goes up, how do I know this isn't just a temporary thing that will again be solved in a future generation?
+",123358,,,,,42741.00556,How does the genetic algorithm know when to stop if the global minimum isn't known?,I am working on a very basic driving simulation. I am trying to decide the relationship between the following objects: Freeway, Vehicle, Driver, ProximitySensors.
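    + +

    The only mechanical scheme I've come up with so far is a ""patience"" counter plus a hard cap - a C# sketch with invented helper names is below - but it doesn't resolve the local-minimum worry, which is the heart of my question:

    +
    // Stop after `patience` generations without improvement, or at a hard cap.
    +int stale = 0, patience = 50, maxGenerations = 10000;
    +double best = double.MaxValue, epsilon = 1e-9;
    +for (int gen = 0; gen < maxGenerations && stale < patience; gen++)
    +{
    +    double genBest = EvolveOneGeneration(); // hypothetical: best tour length found
    +    if (genBest < best - epsilon) { best = genBest; stale = 0; }
    +    else stale++;
    +}
    +
    +
    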
+ +My real-world analysis suggests the following relationships:
+ +A. Freeway has a vehicle: because a freeway can have multiple cars and a car can only have one freeway
+ +B. Vehicle has a driver: because a vehicle can (usually) only have one driver and a driver can (usually) only have on vehicle
+ +C. Vehicle has proximity sensors: only a vehicle can have apparatus for detecting nearby vehicles
+ +However, when beginning to code this up, I've noticed a few oddities I want to straighten out. Here are the constructors I have come up with:
+ +public Freeway(Vehicle car)
+public Vehicle(Freeway freeway, Driver driver)
+public ProximitySensors(Car car) // In order for it directly access the particular car's position
+public Driver()
A lot of these are based on convenience / ease, so I am sure that I'm taking the shorter/incorrect approach. Here are a few questions I have encountered:
+ +First of all, I feel like the Driver should be controlling the vehicle, but as you can see from my other questions, I may be asking the Vehicle to ask the Driver to change lanes instead of the other way around.
Often, I want the proximity sensors to access the freeway based on the vehicle's position (to detect other nearby vehicles), however with this structure, a Freeway has a vehicle and so I'm not sure how the proximity sensors (through the vehicle) will access the freeway unless I pass it to the vehicle as well.
Does the car request permission from the Freeway to change positions? I wanted the car to be independent of the freeway and for it to have an accident if not programmed/performed correctly.
What function should the Driver play exactly? They have a name, age, etc. but should they be the ones to call the proximity sensors on behalf of the car? Should the car do it directly?
Should the Driver have its own method changeLanes(), which calls changeLanes() from the Vehicle which then calls its own proximity sensor function checkSide() which then operates on the Freeway?
When I started to code this, the relationships became murky without every object having access to just about every other object.
+",160860,,,,,42740.92431,Relationship Between Driving Simulation Objects,Say that I have a C++ class with some fields with static storage duration, call it class A.
+ +Is there some way to use inheritance to ""inject"" these static fields into classes which derive from class A? That is to say, if class B and class C derive from A, B and C will have the same static fields as the base class A, shared with all other instances of B and C, but operations on these fields within instances of B and C will be distinct to their respective subclasses, and not affect each other.
+",257221,,,,,42740.975,Static field injection into subclasses,I wrote abstract base for some turn-based games (like chess or tic-tac-toe) and wrote game based on this base. And I stuck with choosing how to design class hierarchy. Here is two variants for wich I came up:
+
+And here is second screenshot (it is too long to post here as image)
In first variant all classes in different namespaces (or I can move them to one namespace). In second variant all classes separated with static classes and they all in one namespace. First variant's diagram looks better, but I think that second is more correctly. How better to design this structure?
+",258694,,,,,42745.56181,Turn based game class design,I am designing a SaaS application where thousands of users will be using this application. This application will do lots of data crunching and analytics. I am considering creating a main database which will hold all user credentials and configurations, then I will create individual databases for each customer for storing their data. +Main database will be used by services to determine what services should run at what time on which user's database.
+ +I am using postgres as my database.
+ +How efficient is this design, or is there any better design which I should follow.
+",88222,,,,,42741.125,Database Architecture for SaaS application,I have few async REST services which are not dependent on each other. That is while ""awaiting"" a response from Service1, I can call Service2, Service3 and so on.
+ +For example, refer below code:
+ +var service1Response = await HttpService1Async();
+var service2Response = await HttpService2Async();
+
+// Use service1Response and service2Response
+
+
+Now, service2Response
is not dependent on service1Response
and they can be fetched independently. Hence, there is no need for me to await response of first service to call the second service.
I do not think I can use Parallel.ForEach
here since it is not CPU bound operation.
In order to call these two operations in parallel, can I call use Task.WhenAll
? One issue I see using Task.WhenAll
is that it does not return results. To fetch the result can I call task.Result
after calling Task.WhenAll
, since all tasks are already completed and all I need to fetch us response?
Sample Code:
+ +var task1 = HttpService1Async();
+var task2 = HttpService2Async();
+
+await Task.WhenAll(task1, task2)
+
+var result1 = task1.Result;
+var result2 = task2.Result;
+
+// Use result1 and result2
+
+
+Is this code better than the first one in terms of performance? Any other approach I can use?
+",245170,,258807,,42742.57778,43395.77917,Calling multiple async services in parallel,We all may have seen applications like JIRA, or many CRM or other applications that allow its users to define their own custom fields to an entity, and do a variety of stuff with it, like making them mandatory, validate their values and so on.
+ +I want to do just that in the Product we are creating.
+ +Let's assume our product allows a user to create his/her own Project. A project has pre-defined attributes such as
+ +Now, as a user, I would like to add the following custom field to my project
+ +Ideally he should be able to create a custom field in my product which would capture the following details:
+ +Similarly, I would like to allow this feature of adding custom attributes not only to a project, but to a few other entities as well.
+ +This is the technology stack we're using and so far we're pretty ok with it.
+ +How do I approach this requirement? I would like to be educated on the following:
+ +I'm extremely sorry if my question is not framed or worded properly. I'm relatively new to these technologies, and would like to learn with each challenge.
+ +Thanks, +Sriram
+",258702,,258702,,42741.24306,42741.24306,Allowing users to add their own custom fields in a Spring MVC Hibernate application - What's an ideal approach?,I'm a newer developer who has worked on some personal projects as well as non-profit/charity projects. However, I seem to be the most ""senior"" developer in my circle, meaning, most guys come to me for help and when I need some help, they can't help me.
+ +As I don't have a full-time programming job, I'm sort of lost in terms of how to get some professional-grade code review. In other words, before I go full-hog into applying for full-time jobs, I want the opinions of a few reliable/reputable people on how my current code looks, where I could improve, how they would rate me as a programmer, etc... Because currently I have no clue in the slightest and even if I did, it's not just my opinion that matters anyway. The other issue is I have no idea whether my portfolio projects are ""good enough"" or are just little joke projects in terms of what an employer is looking for. I keep thinking I have to work on bigger projects, but that could go on forever. The thing is, I'd rather do this with someone local in person and not a random stranger on the internet as there is no way to judge whether that person's advice is credible or is in line with where I am trying to go if that makes sense.
+ +Is this type of service offered by programming consultants? I can't be the only one facing this issue. As a self-taught programmer, this is very difficult because people often say put portfolios and projects up, but I have no way to judge whether my code is ""good"" or not other than my own perception off what I read from books such as Clean Code by Uncle Bob and Code Complete by Steve McConnell. Of course some of this is subjective, but that doesn't mean there isn't some sort of professional standard that can't be attained. Thanks for your advice.
+ +PS: I also hear a lot about ""mentoring"" yet I've not seen how one would go about getting a mentor at all. I would love a mentor, is this a paid service or is this some type of relationship someone typically has with a more senior co-worker in the context of an office? I'm talking about a real-life person, not a YouTube Channel.
+",237893,,,,,42744.52847,Is Professional Code Review/Mentoring Offered?,NULL
is the billion-dollar mistake but there is nothing in the type system of C++ to prevent it. However, C++ already has const-correctness so implementing NULL
-correctness seems trivial:
__nonnull
, as a specifier in the same class as const
, which can only be placed after the *
in a declaration.&
operator is __nonnull
.__nonnull
pointer value or const
reference can be automatically converted to a normal pointer, but not vice-versa without a cast. (T *__nonnull
can be converted to T *
, and T *__nonnull &
can be converted to T *const &
)Writable references of pointers cannot be automatically converted between normal and __nonnull
(T *__nonnull &
CANNOT be converted to T *&
), i.e.
int x;
+int *__nonnull p = &x;
+int *q = p; // OK
+int *const &r = p; // OK
+int *const *s = &p; // OK
+int *&t = p; // ERROR, don't want to assign NULL to t
+int **u = &p; // ERROR, don't want to assign NULL to *u
+
const_cast
can be used to cast a normal pointer to a __nonnull
pointer, in which if the pointer is really NULL
, the behaviour is undefined.
0
, NULL
and nullptr
to a __nonnull
pointer variable is an error.__nonnull
pointers cannot be default initialised, like a reference.__nonnull
pointer, which throws an exception on NULL
.NULL
check, which the practice is going to be deprecated.Is the above proposal viable? Are the above things enough for a NULL
-safe type system?
My model objects are generated by the library using hard-wired new operators, which makes dependencies injection using the constructor impossible. However, they also have methods, which are called by the library (i.e. adding service objects as parameter is not an option), using external service objects for the logic.
+ +Is using the service locator anti-pattern the only option here?
+",88201,,,,,42741.41597,How can I load service dependencies into model classes?,I'm planning to add a feature to my application where you can switch to the ""Translation"" locale and then see the names of the translation placeholders in the application instead of the actual translations. Another nice thing are ""context descriptions"" where you see explanations in plain english what the placeholder actually is for.
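    + +

    For clarity, this is the shape I mean by ""service locator"" here - a minimal C# sketch (the type names are placeholders for my actual services):

    +
    // Model objects pull dependencies from a well-known registry at call time.
    +public static class ServiceLocator
    +{
    +    static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();
    +    public static void Register<T>(T service) => services[typeof(T)] = service;
    +    public static T Resolve<T>() => (T)services[typeof(T)];
    +}
    +
    +public class ModelObject // created by the library via a hard-wired ""new""
    +{
    +    public void DoLogic()
    +    {
    +        var pricing = ServiceLocator.Resolve<IPricingService>(); // placeholder interface
    +        // ... use the service for the actual logic ...
    +    }
    +}
    +
    +
    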
+ +My question is: Are there any standardized language/locale codes (e.g. defined by ISO 639-3 or ISO 15897) for these use cases?
+ +If not, I'll probably use a character sequence like qqq
or xx_XX
.
Due to some issues with other shorteners like goo.gl (disabling my links for example) I want to create my own URL shortener.
+ +I am looking to have a single table that will contain the following columns :-
+ +links_id - autoincrement id
+url - the actual full URL
+abbreviation - the shortened version
+
+
+In a nutshell, when a new link is added to the table, I will insert the URL into the table and give this a unique abbreviated value, obviously if an existing URL is found it won't need to re-add the URL.
+ +My question is what is the best way to generate such abbreviations that a) are fast to produce and are as unique as possible and not simple to guess. In addition how many number of characters would people recommend, for instance if I had an abbreviation of 6 characters how many unique combinations would this provide me based I am using the standard characters as used by other URL shorteners.
+ +I will be using PHP/MySQL, any advice would be appreciated.
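    + +

    (For scale, my own back-of-the-envelope figure: with the usual 62-character alphabet of a-z, A-Z and 0-9, a 6-character code gives 62^6 = 56,800,235,584 - roughly 56.8 billion - possible combinations.)

    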
+",127940,,9113,,42741.54792,42741.66181,Need advice on making my own custom URL shortener?,I've heard both about use cases (I'm talking about the description, not the diagram) and user stories being used to gather requirements and organize them better.
+ +I work alone, so I'm just trying to find the best way to organize requirements, to understand what has to be done in the development. I don't have, nor need, any formal methodologies with huge documents and so forth.
+ +User stories I've seem being used to build the product backlog, which contains everything that needs to be done in the development.
+ +Use cases, on the other hand, provide a description of how things are done in the system, the flow of interaction between external actors and the system.
+ +It seems to me that for one use case there are several user stories.
+ +This leads me to the following question: when discovering requirements, what should I do first? Find and write user stories or find and write use cases? Or they should be done ""at the same time"" somehow?
+ +I'm actually quite confused. Regarding use cases and user stories, for a developer who works alone, what is a good workflow, to use these methodologies correctly in order to have a better development?
+",82383,,,,,42789.44861,Which should be done first: use cases or user stories?,I recently asked a question about design and got suggestion about how to structure my code. I'm still working on design so I only have pseudo code, but this is what I had in mind.
    +

    class TableManager:
    +    def __init__(self, manager, name):
    +        self.name = name
    +        self.manager = manager
    +
    +    def add_thing(self, thing):
    +        # Returns 1 on success, -1 on failure.
    +        try:
    +            # A table name cannot be bound as a query parameter, so it is
    +            # interpolated; the value itself is passed as a bound parameter.
    +            self.manager.cursor.execute(
    +                ""INSERT INTO %s VALUES (%%s)"" % self.name, (thing,))
    +            return 1
    +        except Exception:
    +            return -1
    
+
+
+Initially I figured that you would unittest this by initializing TableManager in the unittest setup by passing it a DBManager connected to localhost and ""TEST_TABLE"" as the name argument.
+ +Then you would call add_thing with various table states. For example, the first test would call add_thing with an initially empty table. The unittest would then check the status of the TEST_TABLE to make sure the added thing is in the table.
+ +Is this considered integration testing or unit testing?
+ +Someone mentioned using a MockDatabase to unit test the table manager. I don't see what that would do? You could create a MockDatabase which just returns true when execute is called, but I don't see how that would test the functionality of add_thing without actually having a database to make sure the element was added successfully.
+",258744,,325277,,43859.39792,43859.39792,Unit Test or Integration Test,I have a situation where I need to make a decision between choosing multiple environments or sticking to one. The Business wants to use multiple, but it is simple glossary (list of terms and definitions) which we link to our development tools. Considering the fact that it is simple glossary and not code development, I don't see any reason, why we need to have multiple environments. Also, another drawback with multiple environments is some migration processes between the environments is not automated and must be done manually with every release. Can anyone point me to relevant resources or explain me one convincing reason to explain to the Business?. I appreciate your time and help.
+ +Thank you
+",258746,,,,,42741.64236,Are multiple environments required for Business Glossary?,Given a system with static permissions (1 permission for every action that can be made: create a resource, update a resource, etc), and dynamic roles (can be created and assign permissions to it dynamically).
+ +The system have a preconfigured set of roles with the purpose of initial setup and/or testing. These can be deleted or modified after the initial setup, hence ""dynamic"".
+ +When acting as a user with one of these preconfigured roles on a [functional/acceptance] test to assert a use case works properly, do tests that assert a user with a role that does not have the permission to execute that use case have any value?
+",136188,,,,,42742.93542,Do tests that asserts a user can't do an action have any value?,Both classes below implement the same interface and are in fact intended to be interchangeable one for the other. Why is the second one not referred to as a ""client"" in the literature?
+ +There are many references to service layers, repositories, etc.:
+ +How essential is it to make a service layer?
+ +https://www.asp.net/mvc/overview/older-versions-1/models-data/validating-with-a-service-layer-cs
+ + + +This is a WebAPI client. We see the same pattern with WCF client, etc.
+ +namespace Application.WebAPIClient
+{
+ public class UsersClient : BaseClient, IUsersService
+ {
+ public async Task<int> SaveUser(User user)
+ {
+ string json = JsonConvert.SerializeObject(user);
+ StringContent content = new StringContent(json, System.Text.Encoding.UTF8, ""application/json"");
+ HttpResponseMessage msg = await httpClient.PostAsync(""users/saveuser"", content);
+ return Convert.ToInt32(await msg.Content.ReadAsStringAsync());
+ }
+ }
+}
+
+
+Why is the following not a ""LAN client"" or some other kind of client? Often called Repository or Service but never client although it wraps a call to sqlClient just as the code above wraps a call to HttpClient.
+ +namespace Application.Repository
+{
+ public class UsersRepository : BaseService, IUsersService
+ {
+ // ...
+
+ public async Task<int> SaveUser(User user)
+ {
+ db.Users.Add(user);
+ await db.SaveChangesAsync();
+ return user.ID;
+ }
+ }
+}
+
+",201007,,-1,,42878.52778,42741.98681,Why is code that wraps a call to a database or DAL not referred to as a client?,For an example, +In a testing phase if i got a defect which is due to some delayed job restarting,can I raise it as a bug? +In our project,devteam merges and deploy their codes into test site. +usually there occurs some issues which is due to not restarting a delayed job. +We used to log it as a bug. +but development team prefer to rectify the issue with out raising it as a bug .In fact they are some what disturbed when QA team raise a bug which is not due to code issue :)
+ +So i need to know whether its a good practice to raise a bug which is not due to code issue
+",258755,,,,,42783.17986,"In a testing phase,can I raise a defect which has occured due to deployment issues?",As I understand in the 3-tier architecture, the presentation layer talks to business logic layer, which talks to data access layer. And, ideally, business layer knows nothing about presentation, and data access layer knows nothing about business layer. I want to write classes to do CRUD database work that are separate from the domain classes. For example, Foo is a domain class in business layer, and I want to write a PersistFoo class that takes Foo objects and CRUDs. My question is (somewhat theoretical?) which layer does PersistFoo go in? Logically, it belongs in the data layer to me. However, PersistFoo depends on Foo (e.g. it reads database and converts data to Foo objects and returns them). So, if PersistFoo is in the data layer, then it depends on the business layer, which violates that lower layers should not depend on higher layers.
+",258769,,,,,42741.79653,3-tier data access layer usage,I'm currently writing an application and I'm struggling with the decision of how to correctly design a class to connect to a database. I came up with something like this:
+ +public class DatabaseConnector {
+ private Connection databaseConnection = null;
+
+ public DatabaseConnector(String url, String user, String password) {
+ databaseConnection = DriverManager.getConnection(url, user, password);
+ }
+
+ public void close() throws SQLException {
+ databaseConnection.close();
+ }
+}
+
+
+Additionally, in this class I have methods to pull something from database or insert and so on, and for each method a create a separate PrepareStatement
and ResultSet
and other objects.
My question is if this approach is correct, somehow wrong, or terribly wrong. I will be glad for every tip on designing a good communication class and how to correctly work with databases.
+ +I use a MySQL database and JDBC for communication.
+",258786,Piter _OS,10422,,42742.57778,43346.18264,How to write a proper class to connect to database in Java,How do we compile analytics from millions of rows in a PostgreSQL table?
+ +We pull order data from multiple CRM's and need to compile the data for reporting and each CRM has it's own orders table. We compile these tables into a compiled_orders table in 24 hour increments.
+ +Our current implementation uses SQL Views to aggregate results and SUM the columns
+ +CREATE OR REPLACE VIEW crm1_sql_views AS
+ SELECT
+ account_id
+ , name
+ , COUNT(*) AS order_count
+ , SUM(CASE WHEN
+ status = 0
+ THEN 1 ELSE 0 END) AS approved_count
+ , SUM(CASE WHEN
+ status = 0
+ THEN total ELSE 0 END) AS approved_total
+ FROM crm1_orders
+ WHERE
+ AND is_test = false
+ GROUP BY
+ account_id
+ , name
+ ;
+
+
+We select the data we want from this view. The issue that we are running into is that a query like this pulls all the order data for a client into memory. If a client has 20M orders, it becomes extremely slow, and sometimes the query results are larger than the available memory/cache.
+ +How do we incrementally/consistently/quickly take 20M records in a table and compile it into another table?
+ +Increasing hardware is one solution, but we feel that is not the correct solution right now. We looked at materialized views, but since each CRM has it's own tables, it would have major maintenance implications every time we added a new CRM to our offering.
+ +The goal is for our end users to answer questions like: +- How many orders did we receive last week/month/year? +- What weekday do I receive the most orders?
+ +What technologies/methodologies/terms do we need to look at and research?
+ +I was reading through a Java book by author Herbert Schildt and he writes how the advantage of Java over C++ in portabilaty is that while C++ can be run anywhere, it still requires each program to be compiled with a compiler that was created for that CPU, and creating compilers is difficult, while Java doesn't need to be compiled for each CPU as long as there is a JVM for that processor.
+ +My question is how is this an improvement? Doesn't the JVM need to be compiled for each architecture anyway, so you still require a individual compiler for each type of CPU? So what is this advantage?
+",258776,,247375,,42742.06111,42744.60903,How does Java improve over C++ in the area of portability?,I have 2 JVMs on the same machine that I want to pass about 1Mb of (serializable) data between ideally in under 5 ms.
+ +Under load, using HTTP to localhost takes about 70ms average.
+ +I tried hazelcast, passing the data via a distributed queue - about 50ms average.
+ +Is there a faster way?
+ +I'm using spring boot.
+",31101,,,,,43667.15764,What the fastest way to pass large data between JVMs?,I am planning to write some financial modeling software targeting enterprises. It will be based on an existing open-source project that already has a BSD-3 license. I do not own the copyright to the original project but will be using it to create my derivative work. I would like to keep my project open-source as well but I can imagine a situation where a company wants to hire me to make additional modifications or request development of special features specifically for their business. They would likely require such modifications to be closed and proprietary especially if it pertains specifically to their business.
+ +When people talk about MapReduce you think about Google and Hadoop. But what is MapReduce itself? How does it work? I came across this blog post that tries to explain just MapReduce without Hadoop, but I still have some questions.
+ +Does MapReduce really have an intermediate phase called