diff --git "a/stack_exchange/SE/SE 2017.csv" "b/stack_exchange/SE/SE 2017.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2017.csv" @@ -0,0 +1,114791 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense, +339230,1,339234,,1/1/2017 12:14,,106,11111,"

The following commentator writes:

+ +
+

Microservices shift your organizational dysfunction from a compile time problem to a run time problem.

+
+ +

This commentator expands on the issue saying:

+ +
+

Feature not bug. Run time problem => prod issues => stronger, faster feedback about dysfunction to those responsible

+
+ +

Now I get that with microservices you:

+ + + +

My question is: What does it mean that shifting to microservices creates a run-time problem?

+",13382,,60357,,42736.73611,42949.83889,How does shifting to microservices create a run-time problem?,,5,4,46,,,CC BY-SA 3.0, +339235,1,,,1/1/2017 12:58,,2,76,"

I'm working on an automation project in C# and it has 2 wrappers: DesktopAutomation and BrowsersAutomation. The first has a dependency on UIAutomation.dlls (access to the MS desktop elements) and the latter on Selenium. Their role is understood I hope ;)

+ +

Now, There are user actions on the browsers that require a dependency on UIAutomation(or DesktopAutomation for that matter), since Selenium gives you access to the DOM and not the extensions buttons in Chrome for example.

+ +

So my question is, what would be the correct way / best practice, software construction wise:

+ +
    +
  1. Adding a dependency to BrowsersAutomation on DesktopAutomation which has the advantage of a working project that has existing methods I can use.
  2. +
  3. Adding a direct dependency to BrowsersAutomation on UIAutomation.dlls which makes this project more generic. e.g. using it in other projects won't require another dependency.
  4. +
+ +

Or, perhaps, some other configuration I haven't thought of...?

+",97472,,97472,,42736.55069,42736.625,Separations of concerns and dependency management in automation project,,1,2,,,,CC BY-SA 3.0, +339240,1,339257,,1/1/2017 14:14,,2,209,"

I have that self-hosted RESTful messaging service with authorization, SSL and more good stuff that goes with it. Now, I would like to consume that service, so I need an UI. Usually (for cross-platforms sake) I tend to develop ASP.NET MVC web application, but this time I'm not sure how to proceed.

+ +

However, I have some ideas:

+ +
    +
  1. If there are two decoupled applications - a web application and a REST service - there will have to be CORS enabled on the client.

  2. +
  3. If there is a web application that somehow uses some proxy (forwarder?) to get to the REST service, I don't need CORS. But I don't know how exactly that should be done in MVC.

  4. +
+ +

Another thing - I would really prefer this being decoupled, so I don't want a third option to be stuffing both things into one sack.

+ +

I am a bit disappointed with what google says on the topic. There should obviously be more information, or I don't know how to look for it.

+ +

My questions are:

+ +
    +
  1. Should I go with choice #1 or #2?
  2. +
  3. About choice #2, is that practical and is there a best practice solution?
  4. +
  5. Is there a third choice I'm missing here?
  6. +
+",204790,,1204,,42736.62708,42736.89444,Is CORS required to integrate a REST service with a web application?,,2,2,,,,CC BY-SA 3.0, +339242,1,,,1/1/2017 16:03,,0,663,"

I know GA questions are often almost impossible to answer exactly, but I'm looking for some general advice (although specific advice would be great too!).

+ +

I've just written my second GA, which tries to find a phrase (say ""i like bananas""), which it does by generating binary strings of 5 times the length of the target string (as I have allowed 32 = 2^5 characters in my strings, the lowercase alphabet, space and five punctuation characters) and breeding and mutating them.

+ +

This is all based on an example in Practical Genetic Algorithms by Randy and Sue Ellen Haupt (not sure if I'm allowed to link to Amazon, so I didn't). other sources show similar outlines, so I don't think there is anything specific about that bok, I was just reading it, and so tried their example.

+ +

I tried my GA on ""colorado"" which was the one they used in the book. It found the right answer in around 200-800 generations, which compared to the 1E12 possible combinations of the allowed characters is not bad. However, the authors of the book said that their GA found the answer in just 17 generations, which makes my algorithm look incredibly slow.

+ +

If there's had managed it in (say) 100-300, I could have written my poorer performance down to a lack of experience, but 17 is a huge difference from my results. I want to know how to improve my GA to get anywhere near that.

+ +

I'll post some code below. This is C#, but anyone familiar with any of the C-family of languages should be able to understand it. I don't really use much C#-specific stuff here. I won't include some of the utility functions, as they have been tested, so I know they work, and this will help keep the amount of code down. If you think I've missed out anything important, please let me know and I'll add it.

+ +

First, here's my simple Chromosome class...

+ +
public class Chromosome {
+  public Chromosome(string genes) {
+    Genes = genes;
+  }
+
+  public string Genes { get; set; }
+  public double Fitness { get; set; }
+}
+
+ +

Here is the main routine...

+ +
void Main() {
+  // We are assuming that each character is mapped to a number between 0 (a) and 25 (z),
+  // with space . , ! ? and - taking up the numbers from 26 to 31.
+  // Thus, each character can be encoded in a binary string of length 5 (ie ""00000""
+  // is a, ""11001"" is z and so on), and so any string can be encoded as a sequence
+  // of 1s and 0s, with the encoded length being five times the original string length
+  int len = target.Length * 5; // Length of gene string in each chromosome
+  int totalChromosomes = 32; // Number of chromosomes in the population
+  double crossover = 0.5;
+  // The gene number at which crossover will take place
+  int crossoverGene = (int)(len * crossover);
+  double mutationRate = 0.04;
+  // Generate the initial (random) population
+  List<Chromosome> population = Initial(totalChromosomes, len);
+  int generations = 10000;
+  int genNumber = 0;
+  Chromosome best;
+  do {
+    // get the next generation
+    population = Breed(population, crossoverGene, mutationRate);
+    // Find the best chromosome
+    best = population.OrderBy(c => c.Fitness).First();
+    genNumber++;
+  }
+  while (genNumber < generations && best.Fitness > 0);
+  Console.WriteLine(""Best fitness: "" + best.Fitness.ToString(""F3"") + ""\tGenes: "" 
+                + Decode(best.Genes) + ""\t@ generation "" + genNumber + ""/"" + generations);
+}
+
+ +

Here is the fitness function, which returns the number of incorrect characters...

+ +
private static int Fitness(Chromosome c) {
+  int fitness = 0;
+  for (int i = 0; i < target.Length; i++) {
+    int cTarget = (int)target[i];
+    string genes = c.Genes.Substring(i * 5, 5);
+    char cChromosome = BinaryToChar(genes);
+    if (cTarget != (int)cChromosome) {
+      // Add 1 to the fitness for every incorrect character
+      fitness++;
+    }
+  }
+  return fitness;
+}
+
+ +

The Breed function takes our current population, breeds chromosomes together and returns a new (hopefully better) population. Say we have a population of n chromosomes, we generate n/2 new chromosomes, then add on the best n/2 from the current population.

+ +

The Roulette function used here is a straightforward implementation of a roulette wheel selection. I didn't include the code as I tested a lot on the previous GA, and it seems to work fine...

+ +
private static List<Chromosome> Breed(List<Chromosome> population, int crossoverGene,
+                                               double mutationRate) {
+  List<Chromosome> nextGeneration = new List<Chromosome>();
+  for (int nChromosome = 0; nChromosome < population.Count() / 2; nChromosome++) {
+    Chromosome daddy = Roulette(population);
+    Chromosome mummy = Roulette(population);
+    string babyGenes = daddy.Genes.Substring(0, crossoverGene)
+                       + mummy.Genes.Substring(crossoverGene);
+    string mutatedGenes = """";
+    foreach (char gene in babyGenes) {
+      // P() returns a random number between 0 and 1
+      mutatedGenes += P() < mutationRate ? (gene == '1' ? '0' : '1') : gene;
+    }
+    Chromosome baby = new Chromosome(mutatedGenes);
+    baby.Fitness = Fitness(baby);
+    nextGeneration.Add(baby);
+  }
+  // Add on the best of the previous generation to make up the numbers in the next gen
+  nextGeneration = nextGeneration // the new chromosomes we just bread
+                    // join with the previous generation, order by fitness,  best first
+                    .Union(population.OrderBy(p => p.Fitness) 
+                    // Only take the  first n chromosomes, discarding the rest
+                    .Take(population.Count() - nextGeneration.Count())).ToList();
+  return nextGeneration;
+}
+
+ +

I hope that's enough of the code to see what I'm doing. I don't think any of the omitted functions have any significant code.

+ +

I have tried this on various strings, varying the population size, crossover and mutation, but other than the fact that it just fails to find an answer on longer strings, nothing seems to have made any noticeable difference.

+ +

Anyone able to give me any idea how I can improve my algorithm?

+ +

The book authors mentioned that they used a population of 16. I tried varying the population, and found that values around 16 took significantly longer to converge (100 generations or more), whereas once I got up to about 50, it settled at around 200-800.

+ +

Edit Following a suggestion by amon, I tried a fitness function that compares the current and target strings on a bit-by-bit basis. I encoded the target string into a binary string, and used the following fitness function...

+ +
private static int Fitness(Chromosome c) {
+  int fitness = 0;
+  for (int i = 0; i < encodedTarget.Length; i++) {
+    if (c.Genes[i] != encodedTarget[i]) {
+      fitness++;
+    }
+  }
+  return fitness;
+}
+
+ +

However, this didn't make any difference. I'm including it here in case anyone can make any suggestions as to how to improve it.

+",123358,,123358,,42736.76736,42736.76736,Why do my GAs take so long to converge?,,0,16,,,,CC BY-SA 3.0, +339244,1,,,1/1/2017 16:25,,3,2756,"

I'm working on an Agile program and we are debating on how to deal with what we call ""stabilization sprints"". We have to build our team and decide on several key items but it seems there aren't really a well defined guideline to help us decide about them (or we can't find them) so I was hoping to pick your brain on this.

+ +

Our first release is due in June, we have three months of stabilization but in parallel we need to build a team and start working on next release due for October and then a 3rd release for next June.

+ +

Here are the items we want to decide on:

+ +
    +
  • Do we build two separate teams to deal with next release and stabilization tasks? On one hand having a single team (several pods) to deal with both helps us to load balance our resources better and assign developers with deeper knowledge of the issues require fixing to them. On the other hand not having a dedicated team for next release makes it deficult to plan our next release.

  • +
  • Do we size issues identified (bugs to be fixed during stabilization, technical debts) or we deal with them by assigning a percentage of the pod's velocity to bug fixing as we used to do for our normal development sprints? Sizing them helps to plan better but creates a need for debates and meetings we want to avoid.

  • +
  • Do we combine our stabilization tasks with next release story cards or keep them separate? This is kind of continuation of the first question. If we decide to have a single team to deal with both stabilization and new release then do we really need two backlogs or just a single one?

  • +
+ +

I've been looking for a good book/article that describes the best practices to deal with an Agile project with multiple releases planned specifically to explain the team structure and estimation model but can't find anything good.

+",258265,,9113,,42736.69722,42739.59236,Agile stabilization and release management,,4,2,,,,CC BY-SA 3.0, +339251,1,339258,,1/1/2017 19:31,,1,51,"

I'm creating an HTML5 game using javascript and have got some problems during the first instantiation of the objects of the scene.

+ +

Scenario

+ +
    +
  • Self-written 2d game engine that supports multiple types of objects.
  • +
+ +

'Glossary'

+ +
    +
  • An object is a scene-related entity, and is always an extension of a model, which is abstract.
  • +
  • A scene contains a collection of objects.
  • +
+ +

Problem

+ +

When I instantiate the game scene I load the data of the scene from the local storage at first, and then I proceed to instantiate its objects. The problem is that the type of the game object (sprite, text...) is declared in the model, not in the object (that has a reference to the model). In this way I have to fetch the model of the object in order to know what type of game object I need to instatiate, and I really don't like it.
+I could save the type of the object as a property of the object, but it would be logically wrong: I should not be able to redifine the type of the object decided in the model because it would easily break the implementation of the game object declared in the model. So, it has no sense saving the type of the object in the object itself given the fact that the model already has the job to declare it.
+Hence, maybe I need a new architecture...

+How can I avoid to fetch the model without breaking the logic of 'this is where I should save this property'?

+If the question is not clear, please provide me some feedback: I'd be happy to improve it.

+",,user242937,,,,42736.91111,How to avoid fetching additional informations when instantiating objects,,1,0,,,,CC BY-SA 3.0, +339262,1,339267,,1/2/2017 0:15,,20,10226,"

I teach software engineering at undergraduate level and I have a question to UML practitioners.

+ +

Most software engineering textbooks take a serious effort in covering UML diagrams. But on the other hand, I heard from many graduates that UML does not seem to be used in the trenches anymore.

+ +

What UML diagrams are still being widely used in professional practice, and why ? Are there diagrams that are no longer used and why ?

+ +

N.B: In order to avoid opinion based debates and discussions, please illustrate your answer with factual and objective elements (if possible, verifiable) or neutral observations on personal experience

+",257536,,209774,,42737.81042,42737.85278,Which UML diagrams are still being widely used?,,4,9,8,42737.41042,,CC BY-SA 3.0, +339274,1,,,1/2/2017 9:20,,1,3913,"

I want to understand how EF track ID when the primary key is identity. (DB first)

+ +

For example

+ +
class User
+{
+    int Id; //auto generated via SQL identity, also the primary key of the table Users.
+    string Name;
+}
+
+
+//adding a new user-
+User user = new User () {Name=""TestUser""};//Id will be 0;
+DB.Users.Add(user);
+DB.SaveChanges();
+user.Id; //Will have value;
+
+ +

More over if I have navigation property with foreign key to the user.Id will be updated as well.

+ +

I believe that EF track tables with the primary key, but in this case it is determined by SQL (or w/e) on creation, so how does the EF gets the actual ID after creation?

+",244843,,,,,42737.44306,How does Entity Framework track object with identity as key,,1,0,1,,,CC BY-SA 3.0, +339276,1,,,1/2/2017 10:32,,17,4363,"

I've always thought that a ""common library"" was a good idea. By that I mean a library that contains the common functionality that is often needed by a few different applications. It results in less code duplication/redundancy.

+ +

I recently read an article (can't find now) that said this is actually a bad idea and went as far to say it was an ""anti-pattern""

+ +

While there are upsides to this approach. Versioning and managing change means regression test for the suite of apps that use this library.

+ +

I'm kind of stuck in a rut for my new (Golang) project. Code deduplication has been hammered into me over the years but I feel like I should try it this time around.

+ +

While writing this, I am beginning to think that this ""common lib"" approach is the result of skimming on architecture? Perhaps my design needs more thought?

+ +

Interested to hear thoughts.

+",57613,,,,,43495.32778,Is a common library a good idea?,,5,4,1,,,CC BY-SA 3.0, +339278,1,339280,,1/2/2017 11:14,,2,145,"

I like working in languages with static types, because I like using types as a tool for designing an API before I start coding it.

+ +

I also like TDD, because it helps me concentrate on working in small steps to ensure I get consistent results.

+ +

But when I combine the two approaches, I often have this problem: I design the type of an API, but before I write unit tests for part of the functionality I find I must implement it because otherwise the compiler complains about the methods being incorrectly typed. For example, in a Java project, I have the following class:

+ +
 public class TransformedModelObserver<O,S>
+ {
+       private O sourceModel;
+       private Function<O,S> transform;
+       // note: a ChangeNotification<S> is a class that can only be constructed with a non-null instance of S
+       private Consumer<ChangeNotification<S>> receiver;
+
+       // ....
+
+       /** Should call the receiver if and only if the source model change
+        *  is visible in the transformed model.
+        */
+       public void notifySourceModelChanged ()
+       {
+
+       }
+ }
+
+ +

I can simplify the test by using an identity function for the transform, which would allow for an easy first step, but the compiler complains if I don't call it anyway. So how would I work to implement this method in small test-driven steps in this scenario?

+",153823,,,,,42737.62569,TDD with predesigned static types,,3,6,,,,CC BY-SA 3.0, +339279,1,339281,,1/2/2017 11:39,,3,964,"

I have a piece of code parsing a text file line by line. I have to goals: Testing the syntax of the text and extracting information of it. It is quite likely that syntax errors occur so I want to provice helpful information about where and what.

+ +

To give an idea of the problem, I have a text file like the following (simplified example):

+ +
1=\tSTRING\tDevice name
+15=\tFLOAT\tSpeed
+17=\tINTEGER\tMax Speed
+18=INTEGER\tMax Speed
+
+ +

As you can guess, the syntax of each line is: <Parameter ID>=\t<Data Type>\t<Description>

+ +

My goal is to

+ +
    +
  • Return a vector of structs for every parameter.
  • +
  • If there is an error, give an error message + +
      +
    • for example: ""Error in Line 2: data type of INTEGER is not allowed""
    • +
    • for example: ""Error in Line 3: missing tab""
    • +
  • +
+ +

My general structure is:

+ +
    +
  • A ""function"": std::vector<ParameterDAO> ParseText (std::string text)
  • +
  • A ""sub function"" ParameterDAO ParseTextLine (std::string text)
  • +
  • As you can guess, ParseTextLine is called by ParseText for each line.
  • +
  • Some ""subsub functions"" used by ParseTextLine (checking spaces in the text, checking elements for validity/range/...
  • +
+ +

FYI: The strings/substrings itself I parse with regular expressions and some standard string operations (compare, ...). But this is not the main point in my question.

+ +

OK, now some more details of my implementation:

+ +
    +
  1. Any of my functions (ParseText, ParseTextLine, ...) can throw an exception.
  2. +
  3. I always throw the standard exception std::invalid_argument(""my error message"")
  4. +
  5. The function ""ParseText"" always checks for exceptions thrown in one of the sub functions to add the ""Error in Line x"" message. This is done by getting the what-message of the exception thrown, creating a new string with this message and the line info an rethrow the message:
  6. +
  7. The code calling ""ParseText"" also checks for exceptions. If an exceptions has occured it will show the error message (for example ""Error in Line 3: missing tab"" to the user
  8. +
+ +

Code snippet for 3:

+ +
try
+{
+    Parse_HPA_CATEGORY_SingleLine_THROWS;
+}
+catch ( std::exception e )
+{
+    std::string l_ErrorMessage = ""Error in Line x: "";
+    l_ErrorMessage.append ( e.what () );
+    throw std::invalid_argument ( l_ErrorMessage.c_str() );
+}
+
+ +

This structure works and has the following benefit:

+ +
    +
  • The error message is close to the location where the error occurs (for example close to a string compare or the regular expression).
  • +
+ +

But there may be some drawbacks / things I am not sure about, too:

+ +
    +
  • In the unit test, I have to repeat the string literally ( I don't know if this is actually bad).
  • +
  • I read (unfortunately I can't remember where) that the ""what message"" is usually not used to directly create error messages. Do I misuse the ""what message""? Should I maybe deviate an special exception error class from std::exception for every exception case?
  • +
  • The ParseText function makes a kind of rethrow. Is there a way to avoid this?
  • +
+",179812,,902,,42738.79028,42738.79028,Exception Hierarchy and Use of What Message for Parsing Strings,,1,0,,,,CC BY-SA 3.0, +339283,1,370785,,1/2/2017 12:46,,3,3129,"

I am in the process of creating my own concatenative language, heavily based on Forth.

+ +

I am having a little trouble understanding how the compiling words CREATE and DOES> work, and how they are implemented (How the state of Forth's run time environment changes exactly when they are executed).

+ +

I have read the following resources that give a general view, but only of how to use them, and not of how a system implements them:

+ + + +

The following things about the behaviour of these two words are unclear to me:

+ +
    +
  • CREATE takes the next (space-delimited) word from the input stream, and creates a new dictionary item for it. + +
      +
    • What happens then?
    • +
    • Does CREATE fill in anything in the new dictionary item, or not?
    • +
    • What does CREATE return (on the stack?)?
    • +
    • Is there anything special that happens to the words between CREATE and DOES>?
    • +
  • +
  • DOES> 'fills in' the run time behaviour of the created word. + +
      +
    • What does DOES> consume as input?
    • +
    • How does it alter the dictionary entry of the CREATE'd word?
    • +
    • In code snippets like 17 CREATE SEVENTEEN ,, no DOES> is used. Is there some kind of 'default behaviour' that DOES> overrides?
    • +
  • +
+ +

These different unclarities of course all arise from the core problem, that I have trouble understanding what is going on, and how these concepts, that seem rather complex, can be/are implemented in a simple manner in a low-level language like Assembly.

+ +

How do CREATE and DOES> work exactly?

+",41643,,,,,43402.09653,Forth: How do CREATE and DOES> work exactly?,,3,1,2,,,CC BY-SA 3.0, +339285,1,339291,,1/2/2017 12:58,,65,8621,"

Occasionally, the most logical name for something (e.g. a variable) is a reserved keyword in the language or environment of choice. When there is no equally appropriate synonym, how does one name it?

+ +

I imagine there are best practice heuristics for this problem. These could be provided by the creators or governors of programming languages and environments. For example, if python.org (or Guido van Rossum) says how to deal with it in Python, that would be a good guideline in my book. An MSDN link on how to deal with it in C# would be good too.
+Alternatively, guidelines provided by major influencers in software engineering should also be valuable. Perhaps Google/Alphabet has a nice style guide that teaches us how to deal with it?

+ +

Here's just an example: in the C# language, ""default"" is a reserved keyword. When I use an enum, I might like to name the default value ""default"" (analogous to ""switch"" statements), but can't.
+(C# is case-sensitive, and enum constants should be capitalized, so ""Default"" is the obvious choice here, but let's assume our current style guide dictates all enum constants are to be lower-case.)
+We could consider the word ""defaultus"", but this does not adhere to the Principle of Least Astonishment. We should also consider ""standard"" and ""initial"", but unfortunately ""default"" is the word that exactly conveys its purpose in this situation.

+",39958,,39958,,42739.55069,42739.66389,How to name something when the logical option is a reserved keyword?,,7,42,5,42738.92431,,CC BY-SA 3.0, +339287,1,,,1/2/2017 13:07,,2,1006,"

This is more of a theoretical question which I hope is okay!? I want to code my own drag and drop jQuery plugin, but i'm wondering the best way to go about structuring my code and actually doing it.

+ +

Note: This MAY be opinion orientated as well but I don't mind, I just need suggestions on a good way to go about this and structuring the code etc.

+ +

Current plan:

+ +

Item Class: I want to use Object Orientated Programming here, so was going to store the reference to each draggable item in an instance of a class, which are in turn stored within an array. Then within this class I can do calculations based on that draggable item.

+ +

When to capture mousemove: To save on computing power incase of other JavaScript intensive scripts I was thinking about only capturing any mouse movements when the mouse is down on an item.

+ +

Actually moving items: Well, I will just change the items position to fixed/absolute and adjust the top and left values, what happens on release I cover below.

+ +

Moving items out of the way to place selected, and actually dropping an item in a new location: That's a long bolded one! But yeah, i'm not too sure on how I would go about this! Basically I would have a list of li's which I want to re-order, but as I move an item to a new position I want the other items to slide out of the way smoothly, preferably using transforms as to pass the animation over to the GPU.

+ +

Moving items into other stacks: Some of my li's may have ul's within then, I need the ability to detect when the draggable item is being attempted to be dropped into another item. If this makes sense! :(

+ +
+ +

Kinda example structure of my items?

+ +
<ul id=""sortable"">
+  <li id=""1"">Home</li>
+  <li id=""2"">Showroom
+    <ul>
+      <li>Stoke</li>
+      <li>Macclesfield</li>
+    </ul>
+  </li>
+  <li id=""3"">Finance</li>
+  <li id=""4"">Servicing</li>
+  <li id=""5"">About Us</li>
+  <li id=""6"">Contact</li>
+</ul>
+
+ +

I have sort of started some of the code, however I don't want to do too much if i'm going about it all wrong!

+ +
/* Global Javascript */
+(function($) {
+  $.fn.draggable = function() {
+    // Establish our default settings
+    var settings = $.extend({ }, options);
+
+    class Item {
+      constructor(obj) {
+        this.elm = obj;
+      }
+
+      // Return the items dimentions
+      getDimentions() {
+        return new array((this.obj).outerWidth(), (this.obj).outerHeight());
+      }
+    }
+    // Store draggable items
+    var $items = new Array();
+    $(this).each() {
+      if ($(this).is('li')) {
+        $items[] = new Item($(this));
+      }
+    }
+    console.log($items);
+
+
+    var $dragObject = null;
+    function makeClickable(object) {
+      object.onmousedown = function() {
+        $dragObject = this;
+      }
+    }
+    function mouseUp(ev) {
+      $dragObject = null;
+    }
+  }
+}(jQuery));
+$('sortable').draggable();
+
+",196345,,,,,42737.54653,Drag and Drop with animations,,0,2,,,,CC BY-SA 3.0, +339294,1,,,1/2/2017 15:25,,3,257,"

I'm working on a website project for a software engineering course. This site will be using a database to store data, currently through JDBC and MySQL.

+ +

Now the first thing I would want to do, is use the bridge pattern in order to decouple JCBC/MySQL from the implementation of the website, so that if in the future we decide to switch to another vendor (like Microsoft Server), it will be easier, ""just"" change the reference to the implementor class in the abstraction class.

+ +

At the same time, many of my classes use very similar functions on the databae. For example, I have three classes, TripControl, RouteControl, and LocationControl, and they each have a class they use to speak to the database(TripDB, RouteDB, LocationDB). So I was thinking, let's use the Strategy pattern, and have it so that TripControl, RouteControl, and LocationControl all talk to a Context class (using the book terminology here), and then use a Policy object to select which behaviour to use (TripPolicy for TripDB, RoutePolicy for RouteDB, LocationPolicy for LocationDB), this way it should make using the DB easy for the other devs (just choose the policy and forget about the rest).

+ +

Ok, so let's say I use the Strategy pattern, without the Bridge, and I switch from MySQL to MS Server (or I use both), I would need to have the following policies objects: TripPolicyMySQL, TripPolicyMS, RoutePolicyMySQL, RoutePolicyMS, RoutePolictyMySQL, RoutePolicyMS, to be able to choose which kind of database I'm working on. This makes it harder for the developers to implement their classes, and it looks (to me, at this moment at least), not really well suited to a change.

+ +

If I were to use the Strategy in conjunction with the Bridge, I should have something like this:

+ +

The developers have just three policy objects (LocationPolicy, RoutePolicy, TripPolicy), and they just use those. Then, on a lower level, the Strategy pattern will use the Bridge's interface(For example TripDB would be a bridge for TripDBMySQL and TripDBMS), which will hide the implementation of the database, which could be MS Server or MySQL.

+ +

Would doing this make any sense? I guess it's slower because of all the indirection, but it should make it easier on the developers and in theory it should make the system easier to exapnd.

+",235622,,209774,,42737.64722,42737.74514,"Using Bridge and Strategy together, is my idea correct/useful?",,3,4,,,,CC BY-SA 3.0, +339298,1,,,1/2/2017 16:28,,0,1504,"

I'm trying to design a N-Tier Solution for my existing WebAPI project.

+ +

I have a WebAPI project where, as of now, all the business logic are written in the controllers and the data validation are done by annotations.

+ +

This is now leading to duplicate code across controllers when I'm trying to implement same logic.

+ +

So I thought of moving my Business Logics to Business Layer. But I'm mostly facing challenges in returning Business Validations to controller.

+ +

For E.g I have a code portion in controller like

+ +
//Check if User is adding himself
+            if (RequestUser.Email == model.Email)
+            {
+                return BadRequest(""Email"", ""You cannot add yourself as an User"");
+            }
+
+ +

Now how do I return BadRequest from Business Class Methods?

+ +

And it's getting tough when the next line of the controller is

+ +
IdentityResult result = await UserManager.CreateAsync(user);
+
+                if (!result.Succeeded)
+                {
+                    return result;
+                }
+
+ +

So I cannot return both BadRequest & IdentityResult from same method. Also BadRequest, ModelState is not accessible in controllers. Ofcourse I can add System.Web.Mvc there in BLL, but would that be a good idea?

+ +

Another thing that I'd like to know is, I'm just creating Methods inside BLL's which are taking ViewModels that I receive in controllers. Is that a good idea for existing project? Or should I create DTO's (same as like Models) in BLL and use AutoMapper to map the properties and let BLL operate on DTO's instead of passing ViewModels.

+ +

I think the latter would be more extendable, but would require more time.

+ +

Lastly, if you do suggest me to go with DTO's, then I have to change at BLL DTO's as well as in Model when introducing new properties, isn't that a bad idea? Code is then duplicating here too. On other side, as of now, I change all the related ViewModels too (sometimes 2-4) (which I think is not the right approach) when adding a new property to Models.

+ +

So what's the right approach?

+",156393,,,,,42737.79931,How To Design BLL in ASP.NET MVC,,2,0,,,,CC BY-SA 3.0, +339310,1,,,1/2/2017 18:38,,2,245,"

Let's say I want to program a parallelized web crawler which has a shared FIFO queue (multi consumer/multi producer). The queue only contains URLs. How do I detect the end of the queue?

+ +

A worker process is always consumer and producer at the same time because it takes an URL from the queue, crawls it and adds any found URLs to the queue. I think there is no way to have separate processes for consumer and producer tasks in this scenario.

+ +

Since the amount of input data is unknown but not infinite it's impossible to use a 'poison pill' as sentinel in the queue, right?

+ +

Also, the queue size is not a reliable way to find out if the queue is empty (because of multiple consumers/producers).

+ +

Please enlighten me :-)

+",258176,,,,,42737.81806,How to detect end of queue in a parallelized web crawler?,,1,1,1,,,CC BY-SA 3.0, +339315,1,340348,,1/2/2017 21:06,,0,602,"

I'm not really sure if that is right ""stack"" to ask that question, well two questions actually.

+ +
    +
  1. What's the potential use for capped collections? (besides logging)
  2. +
  3. Capped collections cannot be sharded, but they can be replicated. In case of network partition and after rejoin, how is capped collection synchronized/merged?
  4. +
+",258362,,209774,,43999.89306,43999.89306,MongoDB capped collections,,1,1,,,,CC BY-SA 3.0, +339317,1,,,1/2/2017 23:48,,2,7544,"

What is the best way to find the duplicates in a list of a list of integers (no matter what position thay are in)? +I don't necessary need code just the best way to go about this problem.

+ +

eg:

+ +
List<List<int>> TestData = new List<List<int>>
+{
+     new List<int> { 1, 2, 3 },
+     new List<int> { 2, 1, 3 },
+     new List<int> { 6, 8, 3 },
+     new List<int> { 9, 2, 4 },
+};
+
+ +

The idea is that this will return

+ +
2x) 1,2,3
+1x) 6,8,3
+1x) 9,2,4
+
+ +

I've been breaking my head over this seemingly very simple question but for some reason I can't figure it out. +Hope someone is able to help, Like I said code not necessary but greatly appreciated.

+",258368,,258368,,42737.99444,42739.87708,Find duplicate in a list of a list of integers,,5,14,,,,CC BY-SA 3.0, +339319,1,339326,,1/3/2017 0:33,,0,420,"

problem:

+ +

Making a video-game has the following challenges on variable storage:

+ +
    +
  • send player states every 50-200ms, so store position/ rotation as efficiently as possible.
  • +
  • store large blocks of data for each player regarding their cooldowns, abilities, and loadout
  • +
  • all variables have to revolve around the connection id assigned when they connect to the server, not who they say they are
  • +
+ +

solutions considered so far:

+ +
    +
  • use an integer as a ""player id"" which looks up data on dozens of arrays. Inform client of this number regarding each player for them to use in the backend API. Drawback- ugly and unwieldy on the server side.
  • +
  • map each ""player id"" to a class. Drawback- have to iterate just to get the positions / rotations every 50-200ms.
  • +
  • make an array of structures again using the ""player id"" as the index. Drawback- same as above.
  • +
+ +

However, each of these solutions using the ""player id"" as an index doesn't work well when the concurrency is unstable- with players coming and going. There are three approaches (to the actual structure of the data sent):

+ +
    +
  • mirror the list/array on the clients, then notify the clients when there needs to be a change made (ex: player x left, all player's >x down iterate). This has a tenancy to break things between the lag.
  • +
  • attach the player id to each positional update as opposed to inferring the clients can deduct it from the array index.
  • +
  • send a static array with an unchanging number of slots, all unoccupied positions get sent anyway. +There's supposed to be a crossover point where this is more efficient than including the ids if you're utilizing a certain amount of it.
  • +
+ +

What would be the best solution to this scenario?

+",258203,,258203,,42738.09861,42738.10764,Most efficient way to store multiplayer player data?,,1,3,,,,CC BY-SA 3.0, +339321,1,339337,,1/3/2017 1:15,,17,4763,"

I'm looking at the upcoming Visual Studio 2017.

+ +

Under the section titled Boosted Productivity there is an image of Visual Studio being used to replace all occurrences of var with the explicit type.

+ +

+ +

The code apparently has several problems that Visual Studio has identified as 'needs fixing'.

+ +

I wanted to double-check my understanding of the use of var in C# so I read an article from 2011 by Eric Lippert called Uses and misuses of implicit typing.

+ +

Eric says:

+ +
+
    +
  • Use var when you have to; when you are using anonymous types.
  • +
  • Use var when the type of the declaration is obvious from the initializer, especially if it is an object creation. This eliminates redundancy.
  • +
  • Consider using var if the code emphasizes the semantic “business purpose” of the variable and downplays the “mechanical” details of its storage.
  • +
  • Use explicit types if doing so is necessary for the code to be correctly understood and maintained.
  • +
  • Use descriptive variable names regardless of whether you use “var”. Variable names should represent the semantics of the variable, not details of its storage; “decimalRate” is bad; “interestRate” is good.
  • +
+
+ +

I think most of the var usage in the code is probably ok. I think it would be ok to not use var for the bit that reads ...

+ +
var tweetReady = workouts [ ... ]
+
+ +

... because maybe it's not 100% immediate what type it is but even then I know pretty quickly that it's a boolean.

+ +

The var usage for this part ...

+ +
var listOfTweets = new List<string>();
+
+ +

... looks to me exactly like good usage of var because I think it's redundant to do the following:

+ +
List<string> listOfTweets = new List<string>();
+
+ +

Although based on what Eric says the variable should probably be tweets rather than listOfTweets.

+ +

What would be the reason for changing the all of the var use here? Is there something wrong with this code that I'm missing?

+",81480,,114716,,43269.19444,43269.19444,Is Microsoft discouraging the use of 'var' in C#? (VS2017),,2,6,2,,,CC BY-SA 4.0, +339329,1,339330,,1/3/2017 6:38,,1,910,"

I'm exploring composite pattern to write a file system, one of my requirements is to create a unique root element in this case a directory, similar to Linux System ('/'), I have seen many examples of creating this in the client like this:

+ +
class CompositeDemo
+{
+    public static StringBuffer g_indent = new StringBuffer();
+
+    public static void main(String[] args)
+    {
+        Directory one = new Directory(""dir111"");
+        Directory two = new Directory(""dir222"");
+        Directory thr = new Directory(""dir333"");
+        File a = new File(""a"");
+        File b = new File(""b"");
+        File c = new File(""c"");
+        File d = new File(""d"");
+        File e = new File(""e"");
+        one.add(a);
+        one.add(two);
+        one.add(b);
+        two.add(c);
+        two.add(d);
+        two.add(thr);
+        thr.add(e);
+        one.ls();
+    }
+}
+
+ +

Source: https://sourcemaking.com/design_patterns/composite/java/1

+ +

Since my requirement is to create a unique root node is it best practice to create a new Class that has only one root element? Can I use a Singleton design pattern?

+",122385,,122385,,42739.20764,42739.20764,Does my file system implemented using the Composite pattern require a singleton?,,1,6,,,,CC BY-SA 3.0, +339335,1,,,1/3/2017 9:32,,1,3049,"

I'am little confused about how business logic should be implemented using web services. For example, think about an education management application. There are simply students, teachers and courses. Now, the server side of the application may provide getStudents operation via a WSDL interface. This operation returns list of Student elements.

+ +

According to object oriented paradigm a class should have a certain responsibility. It should hide its internal state and one can reach its data only using its operations. But at the client side a Student class is only a data bag. There is no logic so no responsibility here.

+ +

Another problem is that there is no reference semantics at the client side. Normally, a student is associated with some courses. But in the implementation a Student object has list of Course objects or it may hold some identifier for courses.

+ +

Finally, using web services (by WSDLs) seem to convenient for access remote data but not for execute business logic remotely. Am I right, or do I miss something important about web services?

+ +

Edit:

+ +

My intent is implement business logic at server side. For example suppose that a have classes in server side like that:

+ +
class Student
+{
+  //some properties like name, courses, etc.
+  double calculateGPA(); //calculates average grade using course credits.
+  //other operations like getName()
+}
+
+class SchoolRepository
+{
+  List<Student> getStudents();
+  List<Course> getCourses();
+  //other operations
+}
+
+ +

Now, I can create WSDL which provide SchoolRepository interface. So, client get list of students. But they cannot reach business logic implemented in calculateGPA() directly. I may provide another WSDL interface for that. But it breaks data and behavior encapsulation.

+",161367,,161367,,42738.61944,43309.99722,How to implement business logic with Web Services?,,3,6,1,,,CC BY-SA 3.0, +339336,1,,,1/3/2017 9:34,,3,368,"

It may not be clear so I'll develop the idea:

+ +

The point is to develop a web interface to write down notes (and anything you like), and track its evolution. My question is then: how can I store such information such that I can also keep track of modification history?

+ +

On the relational database world it would ideally look like:
+Table document: | docId | authId | content | <meta> |
+Table documentHist: | docId | editDate | <data> |

+ +

The question is about what to store as the documentHist.<data>. Should I store here all the revisions (easy but huge replication)? Or should I store only differences? (smarter, but no see how I could do this (without implementing a kind of versioning system myself).

+ +

That's why I previously mentionned Git, and even more Github which precisely do it: you can edit files and commit. We could then use here Git ""under the hood"" for our versioning. I'm just not sure how difficult this would be. (Select/Update &co looks easier to me that handling files and git command from web server, I'm maybe wrong)

+ +

Thanks for any comment, clue or idea. +I maybe have minsunderstanding or misconception, do not hesitate to point it out. (same for language mistake, I'm not EN native as you may have noticed)

+ +

pltrdy

+ +
+ +

Edit notes:

+ +
    +
  • History isn't backup: My point isn't to create database backups but instead to be able to query/work with edit history (e.g. tell the +user when/what was the last modification, when was this line added +etc...

  • +
  • Documents: By document I do not (necessarily) talk about file on a file system. It could just be record in a database (we could imagine 1 table for the current content of ""document"" and 1 for its history)

  • +
  • Volume & Goals: I aim to develop such a system for personnal need, but with a scalable design. I would otherwise just use Git. The point is to give a wbe interface to write down notes and keep track of evolutions (among other features)

  • +
+",258398,,7422,,42738.47639,42738.61458,Work with user content edits history: storing differences vs data duplication,,2,9,,,,CC BY-SA 3.0, +339338,1,339343,,1/3/2017 10:16,,9,13237,"

The application will continuously (approximately every second) collect the location of users and store them.

+ +

This data is structured. In a relational database, it would be stored as: +| user | timestamp | latitude | longitude |

+ +

However, there is too much data. There will be 60 × 60 × 24 = 86,400 records per user, daily. Even with 1000 users, this means 86,400,000 records daily.

+ +

And it is not only 86,400,000 records daily. Because these records will be processed and the processed versions of them will be stored as well. So, multiply that number with approximately 2.

+ +

How I plan to use the data

+ +

Essentially, I plan to make coarser grained versions of location data for easier consumption. That is:

+ +
    +
  1. Sort the received data w.r.t timestamps.
  2. +
  3. Iteating on this list in order, determine if the location has changed significantly (by checking out how much the latitude and longitude changed)
  4. +
  5. Represent the non significant location changes as a single entry in the output (hence, output is a coarser grained version of the location data).
  6. +
  7. Iterate this process on the output, by requiring an even larger latitude and longitude change for a significant change. Hence, the output to be produced from the previous output will be even more coarse grained.
  8. +
  9. Iterate the whole process as much as needed.
  10. +
  11. Aggregate a range of resolutions and send them to users. Also, store all resolutions of the data for later consumption.
  12. +
+ +

What should I use to store this data? Should I use a relational database or a NoSQL solution? What other things should I consider when designing this application?

+",158474,,158474,,42738.62222,42738.62222,How to store large amounts of _structured_ data?,,3,12,4,,,CC BY-SA 3.0, +339341,1,,,1/3/2017 12:19,,3,640,"

I'm not sure if I am in the right place to ask this question. Please tell me if I'm not. I have the following problem:

+ +

I have a production process, where a product first has to be produced, it is then stored, and packed afterwards. Hence, it can be seen as that the product has to go through two machines in series. However, there are several producing machines, and several packing machines (thus, in parallel). Also, not all products are compatible with all machines, and they have different producing and packing times on the machines.

+ +

Now, I'm trying to implement the shortest/longest processing time first scheduling rule. However, I'm not sure what rules should be applied. For now, I have it like this: +A product with the shortest/longest processing time on a machine can go first. However, this does not take into account the packing time. It can be that the product with the shortest producing time, has a very high packing time, and hence, it causes that the other products may have to wait very long before they can be assigned to a packing machine. Since there are so many assignments possible, because of the parallel AND series machines, different processing times on machines etc., I'm not sure how to implement these rules in my case. Any suggestions?

+",258414,,,,,43159.06944,"How to ""model"" shortest/longest processing time first on machines in parallel and series",,1,6,1,,,CC BY-SA 3.0, +339353,1,,,1/3/2017 16:33,,1,35,"

We are moving to Self-Hosted TFS and I am having a difficult time with setting things up properly. What we want to do is:

+ +

1) Have some user accounts be testers on Project 1 and thus they can create and manage work items, but not have access to the code on the project. We got this done by setting those user accounts up as Stakeholder and it works no problems.

+ +

2) Have those same users that only have access to work items on Project 1 be developers on Project 2 and have access to work items and code. This we cannot set up or have not been able to. Despite being made Admins on the project, given full Allow access as Team Members on the project, etc.

+ +

Any suggestions?

+ +

Thanks, +Josh

+",258446,,,,,42803.99167,Different TFS Permissioning For User on Different Projects,,1,0,,,,CC BY-SA 3.0, +339356,1,,,1/3/2017 17:14,,2,1012,"

This question is about refactoring existing database design.

+ +

My data flow is

+ +
    +
  1. User generates some data for product lines A, B, C
  2. +
  3. Data is saved into the database once
  4. +
  5. Data is later retrieved multiple times
  6. +
+ +

Current design has 3 tables: data_a, data_b, data_c, where each table shares some columns that are identical (in name) and some that are unique to that product line.

+ +

For example, same-name columns in each table are weight, unit_system and a few others. The differently-named columns have values that represent physical quantities of the particular product line. Those are named using various alphanumeric identifiers, like a, b5, e2, and there is a different set of them for different product line. Those sets can share elements, i.e. b5 can be in more than one table, but then something like t1 can be in one table but not the others.

+ +

Problem

+ +

Currently when there is a need to add some value say x9 to product line a, I would update the database schema for data_a to have column x9. I make the values of x9 as 0 for existing column rows, and new records will begin to populate with the actual x9 values. Then I update the code in relevant places to insert x9 into the table or retrieve it from the table.

+ +

Existing design

+ +
data_a(id, item_id, shared, different_a)
+data_b(id, item_id, shared, different_b)
+data_c(id, item_id, shared, different_c)
+
+ +

+ +

where shared columns is a group of columns that is identical in each table, while different are columns that are disjointed in theory, as they represent 3 different product lines, but actually may share some similarly-named elements, as some variable names are the same for different product lines.

+ +

Proposed design

+ +

This is where I'm struggling. Because I don't see a good clean design that is also efficient. I wanted to get rid of the need to alter database schema every time there is a new variable added to a product line. And I believe I can do that, but I also want to make an efficient design, and I don't see one.

+ +

But this is my try:

+ +

Keep primary key, foreign key and shared column names in a single table:

+ +
data(id, item_id, shared)
+
+ +

Create a single table for variables only (variables are ones found in different sets):

+ +
data_variables(id, item_id, data_id, variable, value)
+
+ +

+ +

I am not sure if this design will be worth the trouble, because ... I will actually be storing more data - all the extra data_id or all the extra item_id values for each variable name. There are 15 to 30 variable names for each product line. I will be storing 15 to 30 item_id (or data_id) fields in the new design data_variables table, where in the old design there was only one item_id value per table row.

+ +

Question:

+ +

Is there a more efficient design that also does not require changes in schema design for every addition/deletion/modification of variable name in a product line? Might it be best to stick with existing design despite the trouble of altering schema when needing to add new variables?

+ +

Using JSON for variable ""different"" fields

+ +
one_data_table(id, item_id, product_line, shared, json_encoded_value_pairs);
+
+ +

Decision to not use EAV (Entity–attribute–value) Model

+ +

In my case Entities change very rarely if at all (on the order of years), and attributes change rarely as well, on the order of months or more. As such, reworking the database design to use EAV is probably not a good fit for my case.

+ +

That aside, I am still debating on my JSON Design.

+",119333,,119333,,42739.73681,42739.73681,What is an efficient design to store variables for different product lines in ER database?,,1,5,0,,,CC BY-SA 3.0, +339358,1,339370,,1/3/2017 17:31,,31,1866,"

I have a client who insisted that we keep our new development separate from the main branches for the entirety of 2016. They had 3-4 other teams working on the application in various capacities. Numerous large changes have been made (switching how dependency injection is done, cleaning up code with ReSharper, etc). It has now fallen on me to merge main into our new dev branch to prepare to push our changes up the chain.

+ +

On my initial merge pull, TFS reported ~6500 files with conflict resolution. Some of these will be easy, but some of them will be much more difficult (specifically some of the javascript, api controllers, and services supporting these controllers).

+ +

Is there an approach I can take that will make this easier for me?

+ +

To clarify, I expressed much concern with this approach multiple times along the way. The client was and is aware of the difficulties with this. Because they chose to short on QA staff (1 tester for 4 devs, no automated testing, little regression testing), they insisted that we keep our branch isolated from the changes in the main branch under the pretense that this would reduce the need for our tester to know about changes being made elsewhere.

+ +

One of the bigger issues here is an upgrade to the angular version and some of the other third party softwares --unfortunately we have no come up with a good way to build this solution until all the pieces are put back into place.

+",258451,,1204,,42743.98958,42743.98958,Strategies for merging 1 year of development in Visual Studio,,4,8,2,,,CC BY-SA 3.0, +339359,1,339362,,1/3/2017 17:40,,19,8325,"

I still remember good old days of repositories. But repositories used to grow ugly with time. Then CQRS got mainstream. They were nice, they were a breath of fresh air. But recently I've been asking myself again and again why don't I keep the logic right in a Controller's Action method (especially in Web Api where action is some kind of command/query handler in itself).

+ +

Previously I had a clear answer for that: I do it for testing as it's hard to test Controller with all those unmockable singletons and overall ugly ASP.NET infrastructure. But times have changed and ASP.NET infrastructure classes are much more unit tests friendly nowadays (especially in ASP.NET Core).

+ +

Here's a typical WebApi call: command is added and SignalR clients are notified about it:

+ +
public void AddClient(string clientName)
+{
+    using (var dataContext = new DataContext())
+    {
+        var client = new Client() { Name = clientName };
+
+        dataContext.Clients.Add(client);
+
+        dataContext.SaveChanges();
+
+        GlobalHost.ConnectionManager.GetHubContext<ClientsHub>().ClientWasAdded(client);
+    }
+}
+
+ +

I can easily unit test/mock it. More over, thanks to OWIN I can setup local WebApi and SignalR servers and make an integration test (and pretty fast by the way).

+ +

Recently I felt less and less motivation to create cumbersome Commands/Queries handlers and I tend to keep code in Web Api actions. I make an exception only if logic is repeated or it's really complicated and I want to isolate it. But I'm not sure if I'm doing the right thing here.

+ +

What is the most reasonable approach for managing logic in a typical modern ASP.NET application? When is it reasonable to move your code to Commands and Queries handlers? Are there any better patterns?

+ +

Update. I found this article about DDD-lite approach. So it seems like my approach of moving complicated parts of code to commands/queries handlers could be called CQRS-lite.

+",7369,,23622,,42739.53125,42739.54028,Isn't CQRS overengineering?,<.net>,2,8,3,,,CC BY-SA 3.0, +339363,1,,,1/3/2017 18:36,,1,926,"

Let's say I have a users resource, with two properties: name and email as specified by a users JSON Schema document, which right now looks like this:

+ +
{
+  ""$schema"": ""http://json-schema.org/draft-04/schema#"",
+  ""type"": ""object"",
+  ""additionalProperties"": false,
+  ""properties"": {
+    ""name"": {
+      ""type"": ""string""
+    },
+    ""email"": {
+      ""type"": ""string""
+    }
+  },
+  ""required"": [
+    ""name"",
+    ""email""
+  ]
+}
+
+ +

My requirements state that we need to be able to change the schema, e.g. to add a property such as phoneNumber, and do so via HTTP in a RESTful way. That is, I need to be able to update the JSON Schema definition of the users resource to look like this:

+ +
{
+  ""$schema"": ""http://json-schema.org/draft-04/schema#"",
+  ""type"": ""object"",
+  ""additionalProperties"": false,
+  ""properties"": {
+    ""name"": {
+      ""type"": ""string""
+    },
+    ""email"": {
+      ""type"": ""string""
+    },
+    ""phoneNumber"": {
+      ""type"": ""string""
+    }
+  },
+  ""required"": [
+    ""name"",
+    ""email"",
+    ""phoneNumber""
+  ]
+}
+
+ +

Now, clients of the API can create new users that have the additional phoneNumber property (where previously I would have gotten a schema validation error).

+ +

I am puzzling over how to do this. One way I can imagine doing it is by creating a ""meta-resource"" called resources. This resource might have some properties, for example: path and schema. The schema property would be a full JSON Schema object. To update the users resource, then, I could maybe POST to resources with an HTTP request body like:

+ +
{
+    ""path"": ""users"",
+    ""schema"": { ...JSON Schema object goes here... }
+}
+
+ +

Is this a reasonable implementation? If not, why not? Alternative ideas? Any pitfalls I should watch out for? Any articles/blogs on this topic that I should read? (I haven't been able to Google successfully for this).

+",,user92338,,user92338,42738.91736,42741.62847,Define a RESTful API for creating/updating other resource definitions?,,5,3,,,,CC BY-SA 3.0, +339365,1,,,1/3/2017 19:18,,1,863,"

My company employs people who add and edit records in a PostgreSQL database using a web interface. These updates then need to be compiled into a mobile app (Android, iOS) via SQLite and released as a new version every few months. We haven't quite gotten around to 'hot patching' the SQLite database; that is to say, downloading updates from the server instead of recompiling the app with new data downloaded during the build process.

+ +

I'm wondering what the typical process is here - how to get from server to client. My initial thought is to write a script to:

+ +
    +
  1. download the data (as JSON) that needs to be compiled into the app
  2. +
  3. use the SQLite library to construct the database and import the data
  4. +
  5. compress/encrypt the database
  6. +
  7. put the database in the correct asset folder for iOS and Android
  8. +
+ +

It seems like a reasonable approach, but I'm wondering if there is a better way, or if there is some process that's more standard. Are there caveats to this approach?

+ +

And I know that I could expose this to clients via a REST API. That's basically what I'm doing for the 'downloading' aspect. However, that's not what the boss wants to do, so this is the way it has to be. I'm asking if my approach (downloading a JSON export, importing that data, etc.) is a decent approach, or if e.g. dumping through psql and doing some magic with that data) would be better for what I need to accomplish: getting PostgreSQL data from the web into a local SQLite database.

+",109112,,109112,,42738.97222,42739.13472,From web database (PostgreSQL) to mobile (SQLite),,1,4,,,,CC BY-SA 3.0, +339372,1,339418,,1/3/2017 21:42,,5,1732,"

After picking up some Swift skills with Java as my strongest language, one feature of Swift that I really like is the ability to add extensions to a class. In Java, a pattern I see very often is Utils or Helper classes, in which you add your methods to simplify something you're trying to accomplish. This might be a silly question, but is there any good reason not to subclass the original class in Java and just import your own with the same name?

+ +

A swift example of a Date extension would be something like this

+ +
extension Date {
+    func someUniqueValue() -> Int {
+        return self.something * self.somethingElse
+    }
+}
+
+ +

Then an implementation would look like this:

+ +
let date = Date()
+let myThing = date.someUniqueValue()
+
+ +

In Java you could have a DateHelper class, but this now seems archaic to me. Why not create a class with the same name, and extend the class you want to add a method to?

+ +
class Date extends java.util.Date {
+    int someUniqueValue() {
+        return this.something * this.somethingElse;
+    }
+}
+
+ +

Then the implementation would look like this:

+ +
import com.me.extensions.Date
+
+...
+
+Date date = new Date()
+int myThing = date.someUniqueValue()
+
+ +

Then, just import your own Date class which now acts like a class with Swift extensions.

+ +

Has anyone had any success with doing this, or see any reasons to stay away from a pattern like this?

+",32455,,,,,42739.59722,Swift-like extensions in Java using inheritance,,1,7,2,,,CC BY-SA 3.0, +339384,1,339389,,1/4/2017 3:35,,101,20789,"

For example, to keep a CPU on in Android, I can use code like this:

+ +
PowerManager powerManager = (PowerManager)getSystemService(POWER_SERVICE);
+WakeLock wakeLock = powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, ""abc"");
+wakeLock.acquire();
+
+ +

but I think the local variables powerManager and wakeLock can be eliminated:

+ +
((PowerManager)getSystemService(POWER_SERVICE))
+    .newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, ""MyWakelockTag"")
+    .acquire();
+
+ +

similar scene appears in iOS alert view, eg: from

+ +
UIAlertView *alert = [[UIAlertView alloc]
+    initWithTitle:@""my title""
+    message:@""my message""
+    delegate:nil
+    cancelButtonTitle:@""ok""
+    otherButtonTitles:nil];
+[alert show];
+
+-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
+    [alertView release];
+}
+
+ +

to:

+ +
[[[UIAlertView alloc]
+    initWithTitle:@""my title""
+    message:@""my message""
+    delegate:nil
+    cancelButtonTitle:@""ok""
+    otherButtonTitles:nil] show];
+
+-(void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex{
+    [alertView release];
+}
+
+ +

Is it a good practice to eliminate a local variable if it is just used once in the scope?

+",196142,,25199,,42740.98681,43566.42917,Should we eliminate local variables if we can?,,13,21,19,,,CC BY-SA 3.0, +339390,1,,,1/4/2017 7:53,,2,1062,"

I'm using Java Eclipse EMF to model my Composite Pattern. What would be the right UML representation to model aa new class (Root) which implements a unique root directory. This is the original Composite pattern.

+ +

+ +

This is my representation:

+ +

+ +

Target representation would be:

+ +
root
+  |___ dir1
+  |___ dir2
+  |___ dir3
+  |      |___ fileA
+  |      |___ dir4
+  |             |__ fileB
+  |    
+  |___ file1
+
+",258519,,,,,42739.37778,Modeling Composite Design Pattern,,2,3,1,,,CC BY-SA 3.0, +339391,1,,,1/4/2017 8:24,,1,2062,"

How is software architecture decided in a scrum/agile project environment, if everyone is focused on just one small piece of the problem how is over all system design decided upon.

+ +

There doesn't seem to be a role where one person take ownership over the technical execution of the project so you could possibly end up in a situation where by everyone individually has done their job but the over all quality of the project isn't very good.

+",94888,,,,,42739.49097,How is software architecture decided in a scrum/agile project environment?,,2,1,3,42739.51319,,CC BY-SA 3.0, +339405,1,,,1/4/2017 12:36,,0,1107,"

I'm looking for some assistance and ideas for developing an algorithm for choosing random soccer teams based on the skill levels of the participating players.
+What I have so far is a list of particpating players with arbitrary skill levels between 1 and 100. e.g.

+ +
PlayerA: 30,
+PlayerB: 45,
+PlayerC: 50,
+PlayerD: 55,
+PlayerE: 30,
+PlayerF: 20,
+PlayerG: 75,
+PlayerH: 75
+
+ +

I'd like the option of being able to choose random teams, but efficiently running it again to offer different results if the teams just don't look fair (even if on paper, the assigned skill levels match).

+ +

I've already coded an example which creates all possible combinations of teams and orders them by fairest first, so that efficiently creates the functionality that allows the person to hit the ""randomise"" button again and instantly display new teams. However, this is only suitable for teams of 7 per side (maybe 8) as there are just too many combinations after that and it takes too long to process.

+ +

Can anyone suggest a better way of doing this for more than 8 players? I've tried the option which mimics the old school yard method of picking players by taking the two best players as the captains and then each player taking the best of who is left until everyone is picked, but then i was stumped if those teams weren't acceptable and they wanted an option to randomise again.

+ +

Many thanks - i'm coding this in C# but the language is probably less important than the theory.

+",93096,,,,,42741.07083,Algorithm for generating 2 random teams from player list based on skill level,,2,16,1,,,CC BY-SA 3.0, +339407,1,339541,,1/4/2017 13:19,,1,37,"

I have a Google App Engine app which is used by a small amount of users of a certain niche website. The app's only function is to get data about the user from that website's API, use that data to produce a CSS file, and deliver that CSS to the user. There are a few apps (made by others) like mine for this website; mine is the newest, so my amount of traffic is small compared to the others'.

+ +

However, one of the other apps (which served a large portion of the available users) just crashed due to it exceeding its GAE quotas. As a result, a large amount of users are starting to migrate to my service. Since the service is by nature not practically monetizable, I'd like to be able to continue my service without enabling billing on GAE.

+ +

My question is this: The only quota that I am likely to exceed using the free limits is the bandwidth quota (specifically incoming, due to the API calls). Would it be feasible to create a new free GAE app just like the first and have the first one redirect to the second one when the first one runs out of bandwidth? What obstacles would I run into using this approach? Are there any better solutions?

+",258536,,,,,42741.11875,Using a second GAE app as backup,,1,0,,,,CC BY-SA 3.0, +339408,1,,,1/4/2017 13:23,,1,706,"

I am currently designing a system to support our employee staffing process. I collect weekly preferences from employees, and feed that into a system for staffing. Availabilities can be defined as absolute dates (e.g. Employee John is available on 01/01/2017 from 14:00 to 16:00). Or by recurrence, where one can define a weekly recurring availability (e.g. Employee John is available Weekly on Sundays from 14:00 to 16:00).

+ +

Weekly Recurring Availabilities: +start time/end time - most of the queries would be by local time, but we might also need the UTC time in the queries. we keep the timezone as well to resolve DST issues. +employee_id

+ +

Absolute Availability: +start time/end time - absolute datetime +employee_id +is_available - can represent a time an employee isn’t available at all

+ +

Employee: +id +zone_id +is_active

+ +

The following queries should be supported: +- Get availabilities by employee +- Get availabilities by zone and date range - Should filter out inactive employees, should return also recurring availabilities, and should also convert the recurring dates to absolute datetime records, for example: +Given employee John has a single weekly recurring availability, every Sunday from 14:00 to 16:00, an API consumer might request all employees that are available between the upcoming Sun-Sat, the output should be: +employee_id, start_datetime, end_datetime

+ +

We need to build an ETL on the data - transfer it to a data warehouse db. +We are required to transfer recurring availabilities as their real time availability - for instance +if I have a recurring availability on Sunday - I should transfer future availabilities matching to this - +row for each week with the actual date for a range of month or two in the future. +Is there any approach to handle that? Should we do it be saved in the data warehouse with the specific dates? or should the reports over it do that?

+ +

We currently thought on saving the availabilities in mongo in one collection for availabilities (recurring and absolute), and copy fields of is_employee_active and zone_id to each availability. +We also thought of assigning start_time/end_time an absolute date for recurring availabilities with proximate date matching the selected day.

+ +

I would like to get a feedback on what's the best approach, thanks.

+",258540,,,,,42739.55764,DB design for a scheduling system,,0,2,,,,CC BY-SA 3.0, +339411,1,339422,,1/4/2017 13:42,,3,277,"

This is something I see all over Cocoa:

+ +
func someAction(_ sender: Any)
+
+ +

which is called like:

+ +
someAction(someObject)
+
+ +

This can be very confusing to me. The infamous example is in NSView subclasses:

+ +
print(""Hello, World!"")
+
+ +

Despite this being standard Swift syntax for printing to the console, in an NSView, this will open the printer dialog, claiming the sender is the String ""Hello, World!"". So, in my code I started doing this:

+ +
func someAction(sender: Any)
+
+ +

but I fear that the fact I see none of this in Cocoa means it's an anti-pattern. Is that the case, or am I in the right?

+",104052,,,,,42739.65417,Is it an anti-pattern for Swift functions that take in a sender to have a label for that parameter?,,1,0,,,,CC BY-SA 3.0, +339413,1,,,1/4/2017 14:06,,5,12803,"

I have an application which works with pure JDBC. I have a dilemma where should transaction handling go, in Service or DAO layer. I have found that in most cases it should be implemented in Service layer since DAO must be as simple as possible and exists solely to provide a connection to the Database. I like this approach, but all the solutions and examples that I've found works with Spring or other frameworks and use annotations to mark it as @Transactional.

+ +

In pure JDBC in my DAO layer I have a DaoFactory, I take a connection object from it in each DAO class (UserDao, CarDao), which implements a connection pooling, and use this object to connect to the database and perform CRUD operations. In Service layer, I create an instance of specific DAO that I need and do actions/calculations on top of it.

+ +

Where do I implement transaction handling here?

+",247225,,-1,,42838.53125,43418.66181,Transaction handling in DAO or Service layer in pure JDBC without frameworks,,3,1,2,,,CC BY-SA 3.0, +339416,1,,,1/4/2017 14:14,,1,220,"

I am currently working with a system which has been upgraded piecemeal from an original Visual FoxPro solution, to a system that now has the following parts:

+ +
    +
  1. Local FoxPro installation (this is a Point Of Sale system, so designed to be used on touchscreens in stores / salons)

  2. +
  3. Local windows service which syncs data from the local Foxpro database into a remote PostgreSQL DB over a series of REST APIs. This loop both pushes data and also checks for new data (which can come from online booking systems for example)

  4. +
  5. An online SaaS style portal which is backed by the central PostgreSQL DB and allows for a suite of additional functionality over and above the local install - dashboards, detailed reporting, marketing, online bookings amongst any others.

  6. +
+ +

The final stage of this project is replace the Foxpro system itself with a hosted solution. Ideally this POS would not fall over if the internet dropped out and would also support multiple terminals, and after a lot of research and testing of frameworks I've settled on Meteor as an ideal approach for this. It handles the reactive updating between terminals, minimongo seems to provide sufficient resilience against temporary internet outages, and overall fits the bill.

+ +

The architectural decision I am struggling with is with the remote database. I have knocked up a PoC project with Angular2 and Mongo, with a hosted Mongo remote DB and it works. I now need the data to be 2 way synced into the PostgreSQL DB which leaves me with 2 options:

+ +
    +
  1. Sync the remote Mongo DB with PostgreSQL (either over the existing REST APIs or similar)

  2. +
  3. Work with one of the experimental packages and try to use my existing PostgreSQL DB as the backend, removing the need for the 'interim' Mongo DB.

  4. +
+ +

My instinct is to take the first approach, it feels more robust and I already have the APIs in place, however without having a lot (almost zero) Mongo experience, am I going about this the wrong way? And if this is the right way, is there a best-paractice approach to syncing Mongo like this?

+",8425,,,,,42739.59306,Relational DB sync to Mongo DB,,0,2,1,,,CC BY-SA 3.0, +339417,1,,,1/4/2017 14:14,,1,389,"

I have an intern and he writes code fast.

+ +

However, I have difficulty making him understand the importance of writing classes and follow the OOP paradigm.

+ +

We recently had a discussion that went like something this:

+ +

""Instead of having this long function that extracts data from two different queries and then combine the data into a new data structure as a standalone function, why not start by putting it in a class?

+ +

I understand that it's not much differences for now, but I can foresee that this class will grow to have more functions and the next guy who takes over will naturally refactor the giant function into more functions within the same class.""

+ +

When he objected, I told him, ""Okay, I gave you my criteria (write the function within a class) and my reason (we will likely have it as a class in the future, might as well start now no matter how imperfect the start). If you have a better criteria and a better reason, why don't you suggest it?""

+ +

One day later his reply was, ""python is an object oriented programming language so when codes are organised inside a file, it is somewhat oop alr""

+ +

How do I make him understand the importance or better yet appreciate the importance of software craftsmanship?

+ +

In case, I made some bad assumptions myself, I am willing to stand corrected and I understand the dangers of asking this question and having it closed down. So if there was a better place to pose this question, I am willing to try it.

+",16777,,,,,42739.68125,How do you explain the importance of writing classes over writing procedural functions to a programmer?,,4,7,1,42739.89583,,CC BY-SA 3.0, +339429,1,339432,,1/4/2017 15:26,,6,1207,"

Let's suppose we have a nullable variable.

+ +
int? myVar = null;
+
+ +

If we wish to use it's value, there are at least two options:

+ +

Explicit casting

+ +
DoSomething((int)myVar);
+
+ +

Calling Nullable<T>'s .Value property

+ +
DoSomething(myVar.Value);
+
+ +

I've noticed that when the conversion fails, in both cases the same exception is thrown (System.InvalidOperationException: Nullable object must have a value), which makes me think that they are both implemented in the same way (but I have found not evidence for that claim).

+ +

My question is:

+ +
    +
  • Is there any difference between these two options? (i.e., is there some scenario where they would have a different behavior?)
  • +
  • Style-wise, is one of these two options preferred over the other? Why? According to which style guide?
  • +
+ +

Update:

+ +

It might be obvious, but just in case you stumble into this question, you might consider that an advantage of .Value versus explicit casting is that it prevents from unintendedly trying to cast to an incorrect type.

+ +

On the other hand, explicit casting might be easier to read in some scenarios. For instance, having var value = (int)myVar; allows an easier identification of the type than var value = myVar.Value;

+",75694,,75694,,42739.73056,42739.73056,Is there any difference between casting and calling .Value in Nullable objects?,<.net>,1,1,,,,CC BY-SA 3.0, +339435,1,,,1/4/2017 16:03,,4,229,"

Let's say I have a reverse proxy set up getting traffic at http://gluten-free-snacks.example.com. It serves different URLs by sub-directories, not sub-domains, for a better web UX.

+ +

Its default behavior is to route all requests to a WordPress site which I handed over to the marketing team which they will definitely use to create some social media buzz and generate leads. No questions there. The reverse proxy's additional behavior is that all requests to http://gluten-free-snacks.example.com/my-account/* get routed to a separate server running a small CRUD app. It's running express.js, or not if you'd prefer.

+ +

Should I write this app to serve requests from / (or ./, in another sense) and have the proxy hide from it the fact it's publicly available at /my-account/?

+ +

From / (agnostic about its URL and directory), the code seems more self-contained and easy to refactor, and we've separated out what seems to be a networking detail. However, all its HTML links to static assets like /stylesheets/main.css are now broken, because they're actually available at /my-account/stylesheets.main.css. In fact, all its links need to become relative, which hurts refactorability.

+ +

Should I:

+ +
    +
  • Make the app serve from / and use relative paths for links?
  • +
  • Host static assets elsewhere?
  • +
  • Make the app serve from /my-account/ and use absolute paths for links?
  • +
  • Do something different because this is an XY problem?
  • +
+ +

Multiple answers may apply.

+",171407,,-1,,42814.43681,42739.90347,"Should a web application be aware of its URL, including its sub-directory?",,2,3,0,,,CC BY-SA 3.0, +339438,1,339472,,1/4/2017 16:29,,5,2535,"

Android developers probably are familiar with Ceja's Clean Architecture, where use cases are classes that implement the Command Pattern.

+ +

Shvets defines the pattern intents as follows:

+ +
    +
  • Encapsulate a request as an object, thereby letting you parametrize clients with different requests, queue or log requests, and support undoable operations.
  • +
  • Promote ""invocation of a method on an object"" to full object status
  • +
  • An object-oriented callback
  • +
+ +

I use that approach in order to improve code readability and testability. But, after reading Shvets's Anti-Patterns course, I got confused with his Functional Decomposition Anti-Pattern definition:

+ +
    +
  • Classes with ""function"" names such as Calculate_Interest or Display_Table may indicate the existence of this AntiPattern.
  • +
  • All class attributes are private and used only inside the class.
  • +
  • Classes with a single action such as a function.
  • +
  • An incredibly degenerate architecture that completely misses the point of object-oriented architecture.
  • +
  • Absolutely no leveraging of object-oriented principles such as inheritance and polymorphism. This can be extremely expensive to maintain (if it ever worked in the first place; but never underestimate the ingenuity of an old programmer who's slowly losing the race to technology).
  • +
  • No way to clearly document (or even explain) how the system works. Class models make absolutely no sense.
  • +
  • No hope of ever obtaining software reuse.
  • +
  • Frustration and hopelessness on the part of testers.
  • +
+ +

How may I am figure out if I am using Functional Decomposition Anti-Pattern instead of Command Pattern?

+",212528,,212528,,42739.83125,42741.37986,Command Pattern vs Functional Decomposition,,6,11,,,,CC BY-SA 3.0, +339441,1,339445,,1/4/2017 18:33,,4,257,"

When we call the same function on a list of things, we call that ""map"". What do we call it when we call a list of functions on the same data? I don't mean pipe - not feeding the output of each function in turn into the next function - but simply iterating over a list of functions, passing each the same input?

+",8120,,1204,,42739.79306,42740.62083,What is a term for iterating over many functions with the same input?,,1,4,1,,,CC BY-SA 3.0, +339449,1,,,1/4/2017 21:11,,3,1209,"

Suppose you set up a Redis cluster with one master and two slaves. Two clients are connected to each of the slaves. Both clients make conflicting changes at the same time:

+ +

+ +

What happens if these changes are replicated to Master at around the same time? Are they just applied to Master in the order they are received, then replicated back down?

+ +

What if transactions are used? Is the result eventually consistent, i.e. does Master resolve the conflict by applying the transactions in some order, then replicate the resolution down?

+ +

I don't expect perfect consistency from a distributed cache, but I do want to understand the fine points so that I use caching well. The application I'm working on uses the distributed cache for coordination among worker threads/processes. For example, when one worker processes an item, it puts a key in the cache with an expiration of 1 minute telling other workers not to process the same item. It's acceptable if two or three workers end up processing the same item, but this mechanism prevents infinite reprocessing.

+",3650,,,,,42830.11944,How does Redis (or any typical distributed cache) handle replication conflicts?,,1,4,,,,CC BY-SA 3.0, +339451,1,,,1/4/2017 22:22,,1,406,"

I need to send messages from a Windows Service to a Azure Service Fabric Stateful service. The network connection is not very reliable, and there must not be lost data. I was hoping I could use NServiceBus with a store & forward pattern to send the messages. Is my thinking fundamentally flawed?

+",258591,,258591,,42740.88333,42740.88333,Durable messaging over HTTP,,3,3,,,,CC BY-SA 3.0, +339460,1,339461,,1/5/2017 2:12,,4,537,"

According to this clean-code guide you should encapsulate conditionals:

+ +
function shouldShowSpinner() {
+  return fsm.state === 'fetching' && isEmpty(listNode);
+}
+
+if (shouldShowSpinner()) {
+  // ...
+}
+
+ +

Why not just write:

+ +
const shouldShowSpinner = fsm.state === 'fetching' && isEmpty(listNode)
+
+if (shouldShowSpinner) {
+  // ...
+}
+
+",227145,,,,,42740.10556,What are the benefits of encapsulating conditionals (in functions)?,,1,0,2,,,CC BY-SA 3.0, +339468,1,,,1/5/2017 8:09,,1,2577,"

I am developing a website where client needs that any notification should reach as soon as it is created. so i am using setinterval function of jquery and using ajax requests to get the notifications. the time interval I set is 2 seconds. and its not the only ajax request which is going this way. +there are following ajax request being done within interval of 2 sec

+ +
    +
  1. get notifications
  2. +
  3. get messages.
  4. +
  5. get counts.
  6. +
  7. some other checks
  8. +
+ +

I am worried because i think sending this much request at very short time period may disturb the system. and worse if the number of users increase. +Please tell me your opinions and solutions to this if this is wrong aproach

+",258616,,,,,42741.78472,sending ajax request with setinterval . is it good?,,1,5,,,,CC BY-SA 3.0, +339473,1,,,1/5/2017 10:16,,9,7095,"

For the first I would like to mention that I'm newbie in real-time systems programming +That's why I'm not sure if my questions are correct. Sorry for that +But I need some help

+ +

Question in short: +How to implement hard real-time software to be sure it meets hard deadlines? It is necessary to use some QNX features? Or it is just enough to write it for linux, port to QNX and it will be real-time by default?

+ +

Full question: +We have implemented some complex cross-platform multiprocess software with inter-process communcation for Linux, Windows, Android and QNX. +Programming language is C++, we use Boost and planty of other libs. +Our software does it's job well and quickly but it is still prototype. +For production purposes we need to do it real-time +Some of our features have to be real-time and very robust because they are very important and safety of people that use our software may depend on them. +They work pretty quickly - up to hundreds of milliseconds. But I'm not sure that our system is really real-time because of this fact (am I right?).

+ +

So there is a main question: how to modify our software to be real-time? +I've googled a lot but I still have no idea how to do it.

+ +

Some additional information about our platforms: +Linux and Windows we currently use only for testing purposes. +Android - we still haven't decided whether we need it. +QNX - is our target OS for production. +I guess that answer for my next question is ""NO"" :) +But is it possible at all to implenet cross-platform real-time software (for real-time OSes (RTOS) as well as for general purpose OSes (GPOS) )?

+ +

Possibly we need to make our efforts to implement all real-time features only for QNX? +But I still don't understand how to do it. Could somebody shed a light on this question?

+",258632,,198652,,42740.63819,42740.63819,How to modify software to become real-time?,,3,10,3,42740.71181,,CC BY-SA 3.0, +339475,1,339621,,1/5/2017 10:31,,-1,151,"

A little bit of confusion over here.

+ +

I am trying to reproduce Git's behavior regarding pagers and editors (as I think Git developers already done good (maybe the best) design choices in this scope).

+ +

While trying to break it down I found that Git uses the pager/editor set to the environment variable $PAGER/$EDITOR. However even if $PAGER/$EDITOR is not set, git still opens a pager/editor.

+ +

For example, on my system when I run.

+ +
$ PAGER=cat git log
+
+ +

Git works as expected and uses cat to print the data.

+ +

But I (obviously) don't have to do that. And even if $PAGER is not set, which is the case by default on my system according to the following command.

+ +
$ echo $PAGER
+
+$
+
+ +

Git still can open a nice, well chosen pager (less in my case) to print data properly.

+ +

This looks neat! This is (to a certain extent) the behavior I am looking for.

+ +

But I am not able to find out how this is implemented. Is the default pager/editor is chosen at build time? If so how can I do the same knowing that I am using autotools as my build system. And by how I mean how should the option for choosing the default pager/editor look like? And is there any specific autoconf/automake macro(s) dedicated to this.

+ +

Is this a dynamic configuration (Can be changed after the build in a configuration file)? And if so, I'd like to take a look at this configuration file. Where can I find it?

+ +

Maybe this is more complicated than that and Git is able to guess and automatically choose the pager/editor by it self. And if this is the case, I'd like to know how it does that.

+ +

Any advice or pointers will be helpful. Not necessarily about how Git is implementing the stuff. Therefore I'd like to point out that the package I am building is intended to be cross-platform, easily compilable/cross-compilable to non linux-like platforms. Which may or may not have a convenient command line editor/pager (BTW. can I support GUI editors?) ie. a binary provider might have to include the editor/pager to the deployment package. I want to make that process as easy as possible (the binary provider should not look at the code).

+ +

Basically I want to make design choices as best as I can afford. With a little boost from you guys I can do even better.

+ +

Thanks.

+",257538,,257538,,42741.96528,42741.96528,Git-like pager/editor management,,2,2,,,,CC BY-SA 3.0, +339480,1,,,1/5/2017 12:02,,0,134,"

Is it a best practice to initialize class dependencies in a constructor or should a class be initialized in the method where it is used. Let's say we have the following situation, and PriceCalcService is used only in a couple of methods. Order also takes other parameters, and sets its state.

+ +
public class ShippingService() { .......... }
+
+public class Order() {
+    public string SomeOrderProperty;
+    private PriceCalcService priceCalcService;
+
+    public Order(string someOrderproperty) {
+         priceCalcService= new PriceCalcService();
+         SomeOrderProperty = someOrderProperty;
+    }
+
+    public void Method1() {
+         PriceCalcService.MethodX();
+    }
+    .......
+}
+
+ +

Or should the PriceCalcService be initialized only in the methods that use it?

+ +
public class PriceCalcService() { .......... }  
+
+public class Order() {
+
+    public string SomeOrderProperty;
+    public Order(string someOrderproperty) {
+         SomeOrderProperty = someOrderproperty;
+    }
+
+    public void Method1() {
+         new PriceCalcService().MethodX();
+    }
+    .......
+}
+
+ +

First example: Advantage: We can see the dependencies + Disadvantage: We instantiate a class even if it is not used. Which method should I choose?

+",257950,,257950,,42740.53889,42741.01458,Constructor containing class dependencies,,3,1,,,,CC BY-SA 3.0, +339486,1,,,1/5/2017 12:57,,4,19481,"

I'm trying to rewrite some code I wrote time ago using some C++ good practices. To achieve such a goal I'm using as reference Effective C++ (1) and google coding convention (2). According to (2) a function should be declared inline if it is 10 lines or less, according to (1) furthermore the compiler could ignore the inline directive, for example when there are loops or recursion (just some example is provided so I don't know all the cases that would be ignored by the compiler).

+ +

Say I then have 2 functions, the first one is 10 lines, and there's no call to any other function and no external reference in general. The second one assume is still 10 lines but at some point there's a call to the first one

+ +

Something like

+ +
Type1 f(Type2 arg) {
+   //10 lines of self contained code
+}
+
+Type3 g(Type4 arg) {
+   //0 <= n <= 8 lines of code
+   //g(x);
+   //9 - n lines of code
+}
+
+ +

I would declare the f inline, because of the suggestion given by google (fully justified) But I would be puzzled about g what would be a good practice here? Would declaring g as inline ignored by the compiler? If not can I still have the benefits of the inline directive?

+",239439,,,,,42740.58542,When a function should be declared inline in C++,,1,8,6,,,CC BY-SA 3.0, +339488,1,364837,,1/5/2017 13:10,,5,302,"

Let's suppose I have a backend with API-only Rails. There is also a Javascript single-page application (Aurelia, but could be something else) talking to this API.

+ +

Should I keep these together, in the same Git repository, integrating Rails with Aurelia to some extent, maybe with the Rails asset pipeline building/bundling Aurelia somehow? Can this even be done with reasonable effort? Or should I keep them totally separate, because in reality they are two separate things?

+ +

What are the pros/cons of having the Aurelia project set up inside the Rails project or totally separate?

+ +

Also I suspect this will be different during development and in production. In prod, the Aurelia app will be about two .js files anyway, which will be served by the web server as usual. I think it's better to use Aurelia tooling separately to build this.

+ +

How should this be done properly?

+",247013,,,,,43128.8875,How should Rails be set up with an SPA client like Aurelia?,,1,0,0,,,CC BY-SA 3.0, +339495,1,339500,,1/5/2017 16:17,,104,12186,"

A recent bug fix required me to go over code written by other team members, where I found this (it's C#):

+ +
return (decimal)CostIn > 0 && CostOut > 0 ? (((decimal)CostOut - (decimal)CostIn) / (decimal)CostOut) * 100 : 0;
+
+ +

Now, allowing there's a good reason for all those casts, this still seems very difficult to follow. There was a minor bug in the calculation and I had to untangle it to fix the issue.

+ +

I know this person's coding style from code review, and his approach is that shorter is almost always better. And of course there's value there: we've all seen unnecessarily complex chains of conditional logic that could be tidied with a few well-placed operators. But he's clearly more adept than me at following chains of operators crammed into a single statement.

+ +

This is, of course, ultimately a matter of style. But has anything been written or researched on recognizing the point where striving for code brevity stops being useful and becomes a barrier to comprehension?

+ +

The reason for the casts is Entity Framework. The db needs to store these as nullable types. Decimal? is not equivalent to Decimal in C# and needs to be cast.

+",22742,,155513,,43631.09792,43631.97222,At what point is brevity no longer a virtue?,,14,21,25,,,CC BY-SA 4.0, +339503,1,,,1/5/2017 16:51,,-1,2242,"

I have been spending quite a lot of time trying to decide if I should use apache** or nginx. I am very biased towards nginx due to the simple configuration, better scalability and it just feels more secure overall.

+ +

However, AJAX is a must have on my list of requirements, so if nginx prohibits the implementation of AJAX, or if it is just not worth the effort, then I wouldn't mind using apache.

+ +

So the question is, does the choice of the web server (in my case nginx vs. Apache) makes a difference when one wants to implement AJAX? Are there any additional components/installations required?

+ +

**For the purpose of answering this, I suggest to treat httpd and tomcat as one and the same.

+",258659,,9113,,42740.95486,42744.80486,"If I want to implement ajax, does the choice of the web server make a difference?",,1,6,,,,CC BY-SA 3.0, +339509,1,339703,,1/5/2017 18:07,,6,846,"

I think I've got the hang of writing a GA when you know the number of genes in a chromosome. For example, if you're searching for a string, and you know the length, you can just generate your initial population of random strings, then breed and mutate until you (hopefully) converge on a solution. Similarly, in the travelling salesman probelm, you know how many cities there are, and so are only involved with changing the order.

+ +

However, you don't always know the number of inputs. Say you want to solve the problem of given the digits 0 through 9 and the operators +, -, * and /, find a sequence that will represent a given target number. The operators will be applied sequentially from left to right as you read (taken from this page). You don't know in advance how long the answer will be, it could be as simple as a single digit, or a complex string of additions, multiplications, etc. For that matter, any given target will have multiple representations (eg 8 * 3 is the same as 2 * 2 * 6 which is the same as 2 * 4 + 4 * 2 and so on).

+ +

How would you write a GA for this? You don't know how long to make the gene string in the initial population. You could generate strings of varying length, but then how do you know how far to go? Maybe a good solution would be just one character longer?

+ +

I wondered about introducing an extra character into the vocabulary to represent a null place, so the solution ""8 * 3"" would be represented as ""8 * 3 null null null..."" but at least two immediate problems with this are a) you still need to pick a maximum length, and b) You would penalise shorter solutions, as they would only be found if you hit a long string of nulls at the end.

+ +

Please can anyone explain how you would approach such a probelm.

+",123358,,123358,,42740.85417,42743.65486,How do you encode the genes when you don't know the length?,,1,12,0,,,CC BY-SA 3.0, +339512,1,339519,,1/5/2017 18:34,,2,283,"

I'm designing a configurable api and I've seen a few ways to accept options from the caller.

+ +

One is using an options object like so:

+ +
var options = new MyApiConfigurationOptions {
+    Option1 = true,
+    Option2 = false
+};
+
+var api1 = MyApiFactory.Create(options);
+
+ +

Another is using a configuration function:

+ +
var api2 = MyApiFactory.Create(o => {
+    o.Option1 = true;
+    o.Option2 = false;
+});
+
+ +

Is one approach any better/worse/different than the other? Is there any real difference or would it be nice to support both so the caller can use whatever syntax they prefer?

+",73165,Eric B,,,,42740.99514,Configuration object vs function,,6,1,,,,CC BY-SA 3.0, +339523,1,,,1/5/2017 21:33,,3,171,"

Say I'm writing a GA to solve the travelling salesman problem. I don't know in advance what the shortest path is, so how does my GA know when to stop?

+ +

If I wait until the best fitness doesn't reduce for a few generations, how do I know I'm not temporarily stuck in a local minimum, which some mutation in the next generation may help? If the best fitness goes up, how do I know this isn't just a temporary thing that will again be solved in a future generation?

+",123358,,,,,42741.00556,How does the genetic algorithm know when to stop if the global minimum isn't known?,,2,1,,,,CC BY-SA 3.0, +339527,1,,,1/5/2017 22:11,,0,44,"

I am working on a very basic driving simulation. I am trying to decide the relationship between the following objects: Freeway, Vehicle, Driver, ProximitySensors.

+ +

My real-world analysis suggests the following relationships:

+ +

A. Freeway has a vehicle: because a freeway can have multiple cars and a car can only have one freeway

+ +

B. Vehicle has a driver: because a vehicle can (usually) only have one driver and a driver can (usually) only have on vehicle

+ +

C. Vehicle has proximity sensors: only a vehicle can have apparatus for detecting nearby vehicles

+ +

However, when beginning to code this up, I've noticed a few oddities I want to straighten out. Here are the constructors I have come up with:

+ +

public Freeway(Vehicle car)
+public Vehicle(Freeway freeway, Driver driver)
+public ProximitySensors(Car car) // In order for it directly access the particular car's position
+public Driver()

+ +

A lot of these are based on convenience / ease, so I am sure that I'm taking the shorter/incorrect approach. Here are a few questions I have encountered:

+ +
    +
  1. First of all, I feel like the Driver should be controlling the vehicle, but as you can see from my other questions, I may be asking the Vehicle to ask the Driver to change lanes instead of the other way around.

  2. +
  3. Often, I want the proximity sensors to access the freeway based on the vehicle's position (to detect other nearby vehicles), however with this structure, a Freeway has a vehicle and so I'm not sure how the proximity sensors (through the vehicle) will access the freeway unless I pass it to the vehicle as well.

  4. +
  5. Does the car request permission from the Freeway to change positions? I wanted the car to be independent of the freeway and for it to have an accident if not programmed/performed correctly.

  6. +
  7. What function should the Driver play exactly? They have a name, age, etc. but should they be the ones to call the proximity sensors on behalf of the car? Should the car do it directly?

  8. +
  9. Should the Driver have its own method changeLanes(), which calls changeLanes() from the Vehicle which then calls its own proximity sensor function checkSide() which then operates on the Freeway?

  10. +
+ +

When I started to code this, the relationships became murky without every object having access to just about every other object.

+",160860,,,,,42740.92431,Relationship Between Driving Simulation Objects,,0,3,,,,CC BY-SA 3.0, +339528,1,339529,,1/5/2017 23:17,,0,94,"

Say that I have a C++ class with some fields with static storage duration, call it class A.

+ +

Is there some way to use inheritance to ""inject"" these static fields into classes which derive from class A? That is to say, if class B and class C derive from A, B and C will have the same static fields as the base class A, shared with all other instances of B and C, but operations on these fields within instances of B and C will be distinct to their respective subclasses, and not affect each other.

+",257221,,,,,42740.975,Static field injection into subclasses,,1,0,,,,CC BY-SA 3.0, +339537,1,339874,,1/6/2017 2:09,,-3,1495,"

I wrote abstract base for some turn-based games (like chess or tic-tac-toe) and wrote game based on this base. And I stuck with choosing how to design class hierarchy. Here is two variants for wich I came up: + +And here is second screenshot (it is too long to post here as image)

+ +

In first variant all classes in different namespaces (or I can move them to one namespace). In second variant all classes separated with static classes and they all in one namespace. First variant's diagram looks better, but I think that second is more correctly. How better to design this structure?

+",258694,,,,,42745.56181,Turn based game class design,,1,7,,,,CC BY-SA 3.0, +339539,1,339542,,1/6/2017 2:35,,0,1551,"

I am designing a SaaS application where thousands of users will be using this application. This application will do lots of data crunching and analytics. I am considering creating a main database which will hold all user credentials and configurations, then I will create individual databases for each customer for storing their data. +Main database will be used by services to determine what services should run at what time on which user's database.

+ +

I am using postgres as my database.

+ +

How efficient is this design, or is there any better design which I should follow.

+",88222,,,,,42741.125,Database Architecture for SaaS application,,1,0,,42752.86528,,CC BY-SA 3.0, +339540,1,,,1/6/2017 2:46,,19,39783,"

I have few async REST services which are not dependent on each other. That is while ""awaiting"" a response from Service1, I can call Service2, Service3 and so on.

+ +

For example, refer below code:

+ +
var service1Response = await HttpService1Async();
+var service2Response = await HttpService2Async();
+
+// Use service1Response and service2Response
+
+ +

Now, service2Response is not dependent on service1Response and they can be fetched independently. Hence, there is no need for me to await response of first service to call the second service.

+ +

I do not think I can use Parallel.ForEach here since it is not CPU bound operation.

+ +

In order to call these two operations in parallel, can I call use Task.WhenAll? One issue I see using Task.WhenAll is that it does not return results. To fetch the result can I call task.Result after calling Task.WhenAll, since all tasks are already completed and all I need to fetch us response?

+ +

Sample Code:

+ +
var task1 = HttpService1Async();
+var task2 = HttpService2Async();
+
+await Task.WhenAll(task1, task2)
+
+var result1 = task1.Result;
+var result2 = task2.Result;
+
+// Use result1 and result2
+
+ +

Is this code better than the first one in terms of performance? Any other approach I can use?

+",245170,,258807,,42742.57778,43395.77917,Calling multiple async services in parallel,,3,11,3,,,CC BY-SA 3.0, +339547,1,,,1/6/2017 5:43,,4,1126,"

We all may have seen applications like JIRA, or many CRM or other applications that allow its users to define their own custom fields to an entity, and do a variety of stuff with it, like making them mandatory, validate their values and so on.

+ +

I want to do just that in the Product we are creating.

+ +

Let's assume our product allows a user to create his/her own Project. A project has pre-defined attributes such as

+ +
    +
  • Name (String)
  • +
  • Description (CLOB)
  • +
  • Type (String)
  • +
  • Owner (String)
  • +
  • Status (String)
  • +
+ +

Now, as a user, I would like to add the following custom field to my project

+ +
    +
  • Due Date (Date)
  • +
+ +

Ideally he should be able to create a custom field in my product which would capture the following details:

+ +
    +
  • Name of the field
  • +
  • Type of the field
  • +
  • Default Value
  • +
  • List of values (if the field is to be a drop down list)
  • +
  • Mandatory or not
  • +
+ +

Similarly, I would like to allow this feature of adding custom attributes not only to a project, but to a few other entities as well.

+ +

This is the technology stack we're using and so far we're pretty ok with it.

+ +
    +
  • Spring MVC, JSP and jQuery as the Web Framework and for the Views
  • +
  • JPA with Hibernate for persistence
  • +
  • Oracle, MS SQL, MySQL - Currently our product works on these +databases.
  • +
+ +

How do I approach this requirement? I would like to be educated on the following:

+ +
    +
  • How to I decide the best data model for this? Do I add a separate table for custom field definitions, and another one for their values, and associate them to my entity by means of a foreign key?
  • +
  • What should I do in my JSP/JS Layer to dynamically paint a screen with whatever custom fields that are defined?
  • +
  • How do I let Spring MVC and Hibernate handle all this data model and the views?
  • +
+ +

I'm extremely sorry if my question is not framed or worded properly. I'm relatively new to these technologies, and would like to learn with each challenge.

+ +

Thanks, +Sriram

+",258702,,258702,,42741.24306,42741.24306,Allowing users to add their own custom fields in a Spring MVC Hibernate application - What's an ideal approach?,,0,1,2,,,CC BY-SA 3.0, +339548,1,339644,,1/6/2017 6:40,,2,796,"

I'm a newer developer who has worked on some personal projects as well as non-profit/charity projects. However, I seem to be the most ""senior"" developer in my circle, meaning, most guys come to me for help and when I need some help, they can't help me.

+ +

As I don't have a full-time programming job, I'm sort of lost in terms of how to get some professional-grade code review. In other words, before I go full-hog into applying for full-time jobs, I want the opinions of a few reliable/reputable people on how my current code looks, where I could improve, how they would rate me as a programmer, etc... Because currently I have no clue in the slightest and even if I did, it's not just my opinion that matters anyway. The other issue is I have no idea whether my portfolio projects are ""good enough"" or are just little joke projects in terms of what an employer is looking for. I keep thinking I have to work on bigger projects, but that could go on forever. The thing is, I'd rather do this with someone local in person and not a random stranger on the internet as there is no way to judge whether that person's advice is credible or is in line with where I am trying to go if that makes sense.

+ +

Is this type of service offered by programming consultants? I can't be the only one facing this issue. As a self-taught programmer, this is very difficult because people often say put portfolios and projects up, but I have no way to judge whether my code is ""good"" or not other than my own perception off what I read from books such as Clean Code by Uncle Bob and Code Complete by Steve McConnell. Of course some of this is subjective, but that doesn't mean there isn't some sort of professional standard that can't be attained. Thanks for your advice.

+ +

PS: I also hear a lot about ""mentoring"" yet I've not seen how one would go about getting a mentor at all. I would love a mentor, is this a paid service or is this some type of relationship someone typically has with a more senior co-worker in the context of an office? I'm talking about a real-life person, not a YouTube Channel.

+",237893,,,,,42744.52847,Is Professional Code Review/Mentoring Offered?,,3,8,2,42742.56944,,CC BY-SA 3.0, +339549,1,,,1/6/2017 7:08,,5,3025,"

NULL is the billion-dollar mistake but there is nothing in the type system of C++ to prevent it. However, C++ already has const-correctness so implementing NULL-correctness seems trivial:

+ +
    +
  • introduce a keyword, __nonnull, as a specifier in the same class as const, which can only be placed after the * in a declaration.
  • +
  • All pointers obtained by & operator is __nonnull.
  • +
  • A __nonnull pointer value or const reference can be automatically converted to a normal pointer, but not vice-versa without a cast. (T *__nonnull can be converted to T *, and T *__nonnull & can be converted to T *const &)
  • +
  • Writable references of pointers cannot be automatically converted between normal and __nonnull (T *__nonnull & CANNOT be converted to T *&), i.e.

    + +
    int x;
    +int *__nonnull p = &x;
    +int *q = p; // OK
    +int *const &r = p; // OK
    +int *const *s = &p; // OK
    +int *&t = p; // ERROR, don't want to assign NULL to t
    +int **u = &p; // ERROR, don't want to assign NULL to *u
    +
  • +
  • const_cast can be used to cast a normal pointer to a __nonnull pointer, in which if the pointer is really NULL, the behaviour is undefined.

  • +
  • Assigning a null-pointer constant 0, NULL and nullptr to a __nonnull pointer variable is an error.
  • +
  • __nonnull pointers cannot be default initialised, like a reference.
  • +
  • Have a standard library function to convert a normal pointer to a __nonnull pointer, which throws an exception on NULL.
  • +
  • An optional warning will be given by compiler for dereferencing normal pointers without NULL check, which the practice is going to be deprecated.
  • +
+ +

Is the above proposal viable? Are the above things enough for a NULL-safe type system?

+",88201,,28374,,42741.62639,42741.62639,Is it possible to make nonnull become part of C++ type system?,,1,7,1,,,CC BY-SA 3.0, +339556,1,,,1/6/2017 9:59,,1,85,"

My model objects are generated by the library using hard-wired new operators, which makes dependencies injection using the constructor impossible. However, they also have methods, which are called by the library (i.e. adding service objects as parameter is not an option), using external service objects for the logic.

+ +

Is using the service locator anti-pattern the only option here?

+",88201,,,,,42741.41597,How can I load service dependencies into model classes?,,0,3,,,,CC BY-SA 3.0, +339559,1,339564,,1/6/2017 11:40,,0,74,"

I'm planning to add a feature to my application where you can switch to the ""Translation"" locale and then see the names of the translation placeholders in the application instead of the actual translations. Another nice thing are ""context descriptions"" where you see explanations in plain english what the placeholder actually is for.

+ +

My question is: Are there any standardized language/locale codes (e.g. defined by ISO 639-3 or ISO 15897) for these use cases?

+ +

If not, I'll probably use a character sequence like qqq or xx_XX.

+",31126,,,,,42741.51806,Language code for translation placeholders and translation context?,,1,1,,,,CC BY-SA 3.0, +339563,1,,,1/6/2017 12:06,,1,1079,"

Due to some issues with other shorteners like goo.gl (disabling my links for example) I want to create my own URL shortener.

+ +

I am looking to have a single table that will contain the following columns :-

+ +
links_id - autoincrement id
+url - the actual full URL
+abbreviation - the shortened version 
+
+ +

In a nutshell, when a new link is added to the table, I will insert the URL into the table and give this a unique abbreviated value, obviously if an existing URL is found it won't need to re-add the URL.

+ +

My question is what is the best way to generate such abbreviations that a) are fast to produce and are as unique as possible and not simple to guess. In addition how many number of characters would people recommend, for instance if I had an abbreviation of 6 characters how many unique combinations would this provide me based I am using the standard characters as used by other URL shorteners.

+ +

I will be using PHP/MySQL, any advice would be appreciated.

+",127940,,9113,,42741.54792,42741.66181,Need advice on making my own custom URL shortener?,,2,7,,,,CC BY-SA 3.0, +339568,1,,,1/6/2017 13:15,,10,3627,"

I've heard both about use cases (I'm talking about the description, not the diagram) and user stories being used to gather requirements and organize them better.

+ +

I work alone, so I'm just trying to find the best way to organize requirements, to understand what has to be done in the development. I don't have, nor need, any formal methodologies with huge documents and so forth.

+ +

User stories I've seem being used to build the product backlog, which contains everything that needs to be done in the development.

+ +

Use cases, on the other hand, provide a description of how things are done in the system, the flow of interaction between external actors and the system.

+ +

It seems to me that for one use case there are several user stories.

+ +

This leads me to the following question: when discovering requirements, what should I do first? Find and write user stories or find and write use cases? Or they should be done ""at the same time"" somehow?

+ +

I'm actually quite confused. Regarding use cases and user stories, for a developer who works alone, what is a good workflow, to use these methodologies correctly in order to have a better development?

+",82383,,,,,42789.44861,Which should be done first: use cases or user stories?,,7,2,2,,,CC BY-SA 3.0, +339572,1,339578,,1/6/2017 14:40,,0,293,"

I recently asked a question about design and got suggestion about how to structure my code. I'm still working on design so I only have pseudo code, but this is what I had in mind.

+ +
class TableManager()
+{
+    int init(DBManager manager, String name)
+    {
+        this.name = name
+        this.manager = manager
+    }
+
+    int add_thing(Thing thing) 
+    {
+        try {
+            manager.cursor.execute(""INSERT INTO %s, (%s)) % (this.name, thing)
+            return 1
+        } catch {
+            return -1; 
+        }
+    }
+
+ +

Initially I figured that you would unittest this by initializing TableManager in the unittest setup by passing it a DBManager connected to localhost and ""TEST_TABLE"" as the name argument.

+ +

Then you would call add_thing with various table states. For example, the first test would call add_thing with an initially empty table. The unittest would then check the status of the TEST_TABLE to make sure the added thing is in the table.

+ +

Is this considered integration testing or unit testing?

+ +

Someone mentioned using a MockDatabase to unit test the table manager. I don't see what that would do? You could create a MockDatabase which just returns true when execute is called, but I don't see how that would test the functionality of add_thing without actually having a database to make sure the element was added successfully.

+",258744,,325277,,43859.39792,43859.39792,Unit Test or Integration Test,,2,0,,,,CC BY-SA 3.0, +339574,1,,,1/6/2017 14:51,,0,40,"

I have a situation where I need to make a decision between choosing multiple environments or sticking to one. The Business wants to use multiple, but it is simple glossary (list of terms and definitions) which we link to our development tools. Considering the fact that it is simple glossary and not code development, I don't see any reason, why we need to have multiple environments. Also, another drawback with multiple environments is some migration processes between the environments is not automated and must be done manually with every release. Can anyone point me to relevant resources or explain me one convincing reason to explain to the Business?. I appreciate your time and help.

+ +

Thank you

+",258746,,,,,42741.64236,Are multiple environments required for Business Glossary?,,2,1,,,,CC BY-SA 3.0, +339582,1,339584,,1/6/2017 15:37,,3,206,"

Given a system with static permissions (1 permission for every action that can be made: create a resource, update a resource, etc), and dynamic roles (can be created and assign permissions to it dynamically).

+ +

The system have a preconfigured set of roles with the purpose of initial setup and/or testing. These can be deleted or modified after the initial setup, hence ""dynamic"".

+ +

When acting as a user with one of these preconfigured roles on a [functional/acceptance] test to assert a use case works properly, do tests that assert a user with a role that does not have the permission to execute that use case have any value?

+",136188,,,,,42742.93542,Do tests that asserts a user can't do an action have any value?,,3,3,,,,CC BY-SA 3.0, +339586,1,,,1/6/2017 16:00,,2,147,"

Both classes below implement the same interface and are in fact intended to be interchangeable one for the other. Why is the second one not referred to as a ""client"" in the literature?

+ +

There are many references to service layers, repositories, etc.:

+ +

How essential is it to make a service layer?

+ +

https://www.asp.net/mvc/overview/older-versions-1/models-data/validating-with-a-service-layer-cs

+ +

https://stackoverflow.com/questions/133350/whats-the-difference-between-a-data-service-layer-and-a-data-access-layer

+ +

This is a WebAPI client. We see the same pattern with WCF client, etc.

+ +
namespace Application.WebAPIClient
+{
+    public class UsersClient : BaseClient, IUsersService
+    {
+        public async Task<int> SaveUser(User user)
+        {
+            string json = JsonConvert.SerializeObject(user);
+            StringContent content = new StringContent(json, System.Text.Encoding.UTF8, ""application/json"");
+            HttpResponseMessage msg = await httpClient.PostAsync(""users/saveuser"", content);
+            return Convert.ToInt32(await msg.Content.ReadAsStringAsync());
+        }
+    }
+}
+
+ +

Why is the following not a ""LAN client"" or some other kind of client? Often called Repository or Service but never client although it wraps a call to sqlClient just as the code above wraps a call to HttpClient.

+ +
namespace Application.Repository
+{
+    public class UsersRepository : BaseService, IUsersService
+    {
+        // ...
+
+        public async Task<int> SaveUser(User user)
+        {
+            db.Users.Add(user);
+            await db.SaveChangesAsync();
+            return user.ID;
+        }
+    }
+}
+
+",201007,,-1,,42878.52778,42741.98681,Why is code that wraps a call to a database or DAL not referred to as a client?,,4,2,,,,CC BY-SA 3.0, +339588,1,,,1/6/2017 16:24,,3,579,"

For an example, +In a testing phase if i got a defect which is due to some delayed job restarting,can I raise it as a bug? +In our project,devteam merges and deploy their codes into test site. +usually there occurs some issues which is due to not restarting a delayed job. +We used to log it as a bug. +but development team prefer to rectify the issue with out raising it as a bug .In fact they are some what disturbed when QA team raise a bug which is not due to code issue :)

+ +

So i need to know whether its a good practice to raise a bug which is not due to code issue

+",258755,,,,,42783.17986,"In a testing phase,can I raise a defect which has occured due to deployment issues?",,4,1,,,,CC BY-SA 3.0, +339597,1,339603,,1/6/2017 18:36,,0,712,"

As I understand in the 3-tier architecture, the presentation layer talks to business logic layer, which talks to data access layer. And, ideally, business layer knows nothing about presentation, and data access layer knows nothing about business layer. I want to write classes to do CRUD database work that are separate from the domain classes. For example, Foo is a domain class in business layer, and I want to write a PersistFoo class that takes Foo objects and CRUDs. My question is (somewhat theoretical?) which layer does PersistFoo go in? Logically, it belongs in the data layer to me. However, PersistFoo depends on Foo (e.g. it reads database and converts data to Foo objects and returns them). So, if PersistFoo is in the data layer, then it depends on the business layer, which violates that lower layers should not depend on higher layers.

+",258769,,,,,42741.79653,3-tier data access layer usage,,1,0,,,,CC BY-SA 3.0, +339598,1,340032,,1/6/2017 17:48,,4,30551,"

I'm currently writing an application and I'm struggling with the decision of how to correctly design a class to connect to a database. I came up with something like this:

+ +
public class DatabaseConnector {
+    private Connection databaseConnection = null;
+
+    public DatabaseConnector(String url, String user, String password) {
+        databaseConnection = DriverManager.getConnection(url, user, password);
+    }
+
+    public void close() throws SQLException {
+        databaseConnection.close();
+    }
+}
+
+ +

Additionally, in this class I have methods to pull something from database or insert and so on, and for each method a create a separate PrepareStatement and ResultSet and other objects.

+ +

My question is if this approach is correct, somehow wrong, or terribly wrong. I will be glad for every tip on designing a good communication class and how to correctly work with databases.

+ +

I use a MySQL database and JDBC for communication.

+",258786,Piter _OS,10422,,42742.57778,43346.18264,How to write a proper class to connect to database in Java,,4,13,1,,,CC BY-SA 3.0, +339600,1,339602,,1/6/2017 18:39,,0,145,"

How do we compile analytics from millions of rows in a PostgreSQL table?

+ +

We pull order data from multiple CRM's and need to compile the data for reporting and each CRM has it's own orders table. We compile these tables into a compiled_orders table in 24 hour increments.

+ +

Our current implementation uses SQL Views to aggregate results and SUM the columns

+ +
CREATE OR REPLACE VIEW crm1_sql_views AS
+  SELECT
+      account_id
+    , name
+    , COUNT(*) AS order_count
+    , SUM(CASE WHEN
+        status = 0
+        THEN 1 ELSE 0 END) AS approved_count
+    , SUM(CASE WHEN
+        status = 0
+        THEN total ELSE 0 END) AS approved_total
+  FROM crm1_orders
+  WHERE
+    AND is_test = false
+  GROUP BY
+    account_id
+    , name
+  ;
+
+ +

We select the data we want from this view. The issue that we are running into is that a query like this pulls all the order data for a client into memory. If a client has 20M orders, it becomes extremely slow, and sometimes the query results are larger than the available memory/cache.

+ +

How do we incrementally/consistently/quickly take 20M records in a table and compile it into another table?

+ +

Increasing hardware is one solution, but we feel that is not the correct solution right now. We looked at materialized views, but since each CRM has it's own tables, it would have major maintenance implications every time we added a new CRM to our offering.

+ +

The goal is for our end users to answer questions like: +- How many orders did we receive last week/month/year? +- What weekday do I receive the most orders?

+ +

What technologies/methodologies/terms do we need to look at and research?

+ +
    +
  • Sharding
  • +
  • ETL
  • +
  • Data Pipelines
  • +
  • ""Big Data"" tools
  • +
  • NoSQL
  • +
+",258771,,,,,42969.42639,Compile analytics from millions of rows in PostgreSQL,,2,1,,,,CC BY-SA 3.0, +339604,1,339605,,1/6/2017 19:10,,6,915,"

I was reading through a Java book by author Herbert Schildt and he writes how the advantage of Java over C++ in portabilaty is that while C++ can be run anywhere, it still requires each program to be compiled with a compiler that was created for that CPU, and creating compilers is difficult, while Java doesn't need to be compiled for each CPU as long as there is a JVM for that processor.

+ +

My question is how is this an improvement? Doesn't the JVM need to be compiled for each architecture anyway, so you still require a individual compiler for each type of CPU? So what is this advantage?

+",258776,,247375,,42742.06111,42744.60903,How does Java improve over C++ in the area of portability?,,4,8,2,,,CC BY-SA 3.0, +339614,1,,,1/6/2017 21:48,,7,2056,"

I have 2 JVMs on the same machine that I want to pass about 1Mb of (serializable) data between ideally in under 5 ms.

+ +

Under load, using HTTP to localhost takes about 70ms average.

+ +

I tried hazelcast, passing the data via a distributed queue - about 50ms average.

+ +

Is there a faster way?

+ +
+ +

I'm using spring boot.

+",31101,,,,,43667.15764,What the fastest way to pass large data between JVMs?,,1,4,1,,,CC BY-SA 3.0, +339623,1,,,1/6/2017 23:09,,1,87,"

I am planning to write some financial modeling software targeting enterprises. It will be based on an existing open-source project that already has a BSD-3 license. I do not own the copyright to the original project but will be using it to create my derivative work. I would like to keep my project open-source as well but I can imagine a situation where a company wants to hire me to make additional modifications or request development of special features specifically for their business. They would likely require such modifications to be closed and proprietary especially if it pertains specifically to their business.

+ +
    +
  1. In that situation, am I allowed to issue a separate license to the client since I am the copyright holder for the derivative work? Is it possible to keep that version of the software closed source?
  2. +
  3. If I accept contributions from the open-source community, I assume there would then be other copyright holders to different parts of the software. Do I need to acquire copyright transfer from contributors? Is that even possible with the existing BSD-3 on the original work?
  4. +
+",258798,,258798,,42743.09444,42743.09444,Is BSD-3 compatible with dual licensing?,,1,2,,,,CC BY-SA 3.0, +339626,1,,,1/7/2017 0:28,,6,1302,"

When people talk about MapReduce you think about Google and Hadoop. But what is MapReduce itself? How does it work? I came across this blog post that tries to explain just MapReduce without Hadoop, but I still have some questions.

+ +
    +
  • Does MapReduce really have an intermediate phase called grouping as the article describes?

  • +
  • Can the grouping phase also be done in parallel or only the map and reduce phases?

  • +
  • Does the map and reduce operations described in the article make sense for the problem proposed (indexing web pages by keywords)? They look too simple to me.

  • +
  • Is the main purpose of MapReduce really just parallelization when indexing large amounts of data?

  • +
  • Do you think too many people know Hadoop without understanding the fundamentals of MapReduce? Is it a problem?

  • +
+",258801,,,,,42800.35833,Can someone explain the technicalities of MapReduce in layman's terms?,,4,0,1,,,CC BY-SA 3.0, +339627,1,,,1/7/2017 0:34,,5,1681,"

I'd like to follow the RAII(resource acquisition is initialization) idiom throughout my code but I'm also doing the template pattern where I'm developing generic versions of my classes and using them to build a common codebase for certain things. Sometimes I need to enforce an initialization sequence where I would need to call the specialized object's virtual functions in the constructor but that's not possible in C++. The only solution I can think of is a two step initialization by calling an init function after the object is created but that breaks the RAII idiom. Is there any solution to this?

+ +
#include <memory>
+
+class A {
+public:
+    A() {
+        // I want to call B's foo() here
+    }
+    virtual void foo() = 0;
+};
+
+class B : public A {
+public:
+    B() {};
+    virtual void foo() {};
+};
+
+void main() {
+    std::unique_ptr<A> a(static_cast<A*>(new B));
+
+    // Use b polymorphically from here...
+}
+
+",258800,,,,,43488.61111,How to avoid two step initialization (C++)?,,3,3,1,,,CC BY-SA 3.0, +339635,1,339639,,1/7/2017 5:45,,1,137,"

Below is the working List abstraction design,

+

+
+

List is a generic abstraction holding any type.

+

Below is the code directory structure. Currently symbol table(ST) and file api fileIO is using List abstraction.

+
./Code$
+.:
+fileIO  list  ST tinyTale type.h frequencyCounter.c frequencyCounter.exe
+
+./fileIO:
+file.h  fileReading.c
+
+./list:
+arrayImpl.c   config.c virtualImplLayer.c linkedListImpl.c  list.h listHandler.h  listInterface.h  
+
+./ST:
+implWithArray.c  ST.h
+
+
+

Below is the relevant code(for improvement),

+

list.h

+
#ifndef LIST_H /* Header guard */
+#define LIST_H
+#include"type.h"
+
+  typedef struct List List;
+
+
+ typedef int (*compareTo)(const void *, const void *);
+ typedef bool (*isLess)(const void *, const void *);
+ typedef bool (*isEqual)(const void *, const void *);
+
+#endif
+
+
+

listHandler.h

+
/***********listHandler.h ***********/
+#ifndef LISTHANDLER_H
+#define LISTHANDLER_H
+
+#include"list/list.h"
+typedef struct {
+
+         bool(*canHandle)(char*);
+        List*(*createList)(void);
+         void(*freeList)(List*);
+         void(*swim)(List*, int, isLess);
+         void(*sink)(List*, int, isLess);
+        void*(*listDeleteMaxElement)(List*, isLess);
+        void*(*sortedListDeleteMaxElement)(List*);
+          int(*listGetSize)(List*);
+  const void*(*listGetItem)(List*, const int);
+        List*(*sortedListInsertItem)(List*, void*, compareTo);
+         void(*listInsertItem)(List*, void*);
+        void*(*listDeleteItem)(List*, int);
+        void*(*listDeleteLastItem)(List*);
+        void*(*listDeleteFirstItem)(List*);
+         int(*linearSearch)(const void*, List*, size_t, compareTo);
+        void*(*binarySearch)(const void*, List*, size_t, compareTo);
+         void(*insertionSort)(List*, size_t, isLess);
+         void(*mergeSort)(List*, size_t, isLess);
+         void(*swap)(List*, int, int);
+
+}ListHandler;
+
+/*
+  "config.c" lookup below 2 global symbols created in impl handlers,
+   before linking time, so "extern" keyword
+*/
+extern ListHandler arrayImplHandler;
+extern ListHandler linkedListImplHandler;
+
+/*
+  "viml.c" lookup below global symbol created in "config.c",
+   before linking time, so "extern" keyword
+*/
+extern ListHandler *listHandlers[];
+
+/* Prototypes for definitions in viml.c - start ********/
+        List* vCreateList(char *);
+         void vFreeList(List*, char *);
+         void vSwim(List*, int, isLess, char *);
+         void vSink(List*, int, isLess, char *);
+        void* vListDeleteMaxElement(List*, isLess, char *);
+        void* vSortedListDeleteMaxElement(List*, char *);
+          int vListGetSize(List*, char *);
+  const void* vListGetItem(List*, const int, char *);
+        List* vSortedListInsertItem(List*, void*, compareTo, char *);
+         void vListInsertItem(List*, void*, char *);
+        void* vListDeleteItem(List*, int, char *);
+        void* vListDeleteLastItem(List*, char *);
+        void* vListDeleteFirstItem(List*, char *);
+          int vLinearSearch(const void*, List*, size_t, compareTo, char *);
+        void* vBinarySearch(const void*, List*, size_t, compareTo, char *);
+         void vInsertionSort(List*, size_t, isLess, char *);
+         void vMergeSort(List*, size_t, isLess, char *);
+         void vSwap(List*, int, int, char *);
+/*****End ***********************************************/
+
+#endif
+
+
+

listInterface.h

+
#include"list/listHandler.h"
+
+#ifndef LISTINTERFACE_H
+#define LISTINTERFACE_H
+
+/*********** User Interface - start *****************/
+#define createList()                   vCreateList(argv[1])
+#define freeList(a)                    vFreeList(a, argv[1])
+#define swim(a, b, c)                  vSwim(a, b, c, argv[1])
+#define sink(a, b, c)                  vSink(a, b, c, argv[1])
+#define deleteMax(a, b)                vListDeleteMaxElement(a, b, argv[1])
+#define sortDeleteMax(a)               vSortedListDeleteMaxElement(a, argv[1])
+#define getSize(a)                     vListGetSize(a, argv[1])
+#define getItem(a, b)                  vListGetItem(a, b, argv[1])
+#define sortInsertItem(a, b, c)        vSortedListInsertItem(a, argv[1])
+#define insertItem(a, b)               vListInsertItem(a, b, argv[1])
+#define deleteItem(a, b)               vListDeleteItem(a, b, argv[1])
+#define deleteLastItem(a)              vListDeleteLastItem(a, argv[1])
+#define deleteFirstItem(a)             vListDeleteFirstItem(a, argv[1])
+#define lSearch(a, b, c, d)            vLinearSearch(a, b, c, d, argv[1])
+#define bSearch(a, b ,c, d)            vBinarySearch(a, b, c, d, argv[1])
+#define callInsertionSort(a, b, c)     vInsertionSort(a, b, c, argv[1])
+#define callMergeSort(a, b, c)         vMergeSort(a, b, c, argv[1])
+#define swap(a, b, c)                  vSwap(a, b, c, argv[1])
+
+/*********** User Interface - end *****************/
+#endif
+
+
+

where, listInterface.h is an interface for a user of List abstraction.

+

All api wth prefix(v) are defined in virtualImplLayer.c.

+

But,

+

1)

+

listInterface.h is not a readable code for user, because it does not possess List typedef which is actually available in list.h and indirectly included via listHandler.h.

+

2) +listInterface.h is not easy to use because user(say fileReading.c) need to pass argv argument to List public api. Public api is given in listInterface.h. User has to run its application passing argv[1] as,

+

$ ./userapp.exe ARRAY

+

or

+

$ ./userapp.exe LINKEDLIST

+

Question:

+

Can listInterface.h get more readable & easy to use?

+",131582,,-1,,43998.41736,42742.42083,Refactoring List abstraction - C,,1,8,,,,CC BY-SA 3.0, +339650,1,339654,,1/7/2017 14:21,,0,858,"

After 50 years of software engineering, why are computer systems still insecure? I don't get it.

+ +

Two questions: (i) What's so hard about just denying or restricting networked access to bad actors who lack passwords? It's not like these bad actors arrive with crowbars and dynamite; they only have bits and bytes, right? (ii) Once a bad actor has achieved networked access, why haven't operating-system kernels been re-engineered to make privilege escalation unfeasible?

+ +

I am not looking for a book-length answer but merely for a missing concept. If you can see the flaw in my thinking and can shed a little light on the flaw, that will be answer enough.

+ +

Is there some specific reason top scholars have not yet been able to solve the problem? Is there a sound reason we still have, say, bootstrapped compilers and unauditable microprocessor designs, despite the long-known security risks?

+ +

Is there some central observation, answerable at StackExchange length, that ties all this together? Why are computer systems still insecure?

+ +

Update: Commenters have added some interesting links, especially ""Is Ken Thompson's compiler hack still a threat?""

+",53118,,-1,,42838.53125,42743.00486,Why are computer systems still insecure?,,4,11,,42743.21597,,CC BY-SA 3.0, +339652,1,,,1/7/2017 14:44,,3,140,"

Let's say I have a complex software product about which the information or knowledge is scattered all over the organization that built it. There are requirements and features about which even the Quality Assurance/Testing department is not very sure.

+ +

There are also many facets of the same generic software product and the product gets continuously customized as per the customer's business requirements. +Moreover, when you try to explore the software hands on, then often tend to get lost due to its complexity.

+ +

What I precisely want to know is how can one document the end user guides as per the aforementioned scenario.

+",41586,,41586,,42742.63264,42742.68403,How to document a Software whose requirements are poorly managed?,,1,4,,42744.80764,,CC BY-SA 3.0, +339656,1,,,1/7/2017 16:26,,6,2802,"

In Domain-Driven Design much is said about the domain experts. They are the ones who knows about the domain, and which should be in contact with the developer in order to build the ubiquitous language and hence the domain model.

+ +

The only problem I have is that from a practical point of view I don't know who the domain experts are: Are they the end users? Are they the people who asks for the development of the software? Are they random people that deal with the problem being solved by the software?

+ +

From a practical point of view, who are the domain experts and where should I look for them when building a software?

+",82383,,209774,,42742.79236,43701.80486,Who are the domain experts?,,3,2,1,,,CC BY-SA 3.0, +339661,1,340005,,1/7/2017 17:50,,4,1033,"

Scenario: An open-source PHP project that has existed for several years has too many classes in the main namespace that we want to refactor some of them into another namespace, but we have a widely used and diverged plugin API, and refactoring may cause a major backwards incompatibility. This plugin API is basically loading external PHP files that interact with the classes and functions in the core code.

+ +

This project is not a Composer project (don't ask why), and it uses an class autoloading library maintained by ourselves.

+ +

I recently discovered the function class_alias, which might help with this problem.

+ +

I am considering refactoring the classes in the main namespace into a sub-namespace, and update references to them in the project. For the PHP files that originally contained the refactored classes (now they have been moved), I want to put something like this:

+ +
<?php
+class_alias(NewClassName::class, OldClassName::class);
+
+ +

This will be loaded by the autoloading library when a plugin requires the use of OldClassName. I have tested this method and primary tests show that it works well.

+ +

Next step is to clean up the main namespace directory to fulfill the real motive of our refactoring - there are too many files in the main namespace directory. Hence, I am creating a new directory next to the source directory called als (alias directory), and making als a secondary source folder where alias class files are located.

+ +

Before, the project looked like this:

+ +
src/
+    src/Foo/Bar.php
+
+ +

Now, the project looks like this:

+ +
src/
+    src/Foo/Bar/Bar.php
+
+als/
+    als/Foo/Bar.php
+
+ +

Tests show that this method is working well so far.

+ +

However, are there any possible side-effects, from technical/performance perspective, documentation perspective or code structure perspective? We have never tried doing something like this and have never seen anyone doing this. Many developers are watching the repository for reference of the API, so we want to be more careful before trying these changes.

+",234322,,234322,,42742.74792,42746.84167,Impacts of using class_alias() to maintain backwards compatibility in PHP,,1,2,2,,,CC BY-SA 3.0, +339666,1,339680,,1/7/2017 20:41,,6,1322,"

Garbage collection happens during runtime, real time, while managed code is running. In C++ however, we need to write destruct statements into the code. So we could say that GC is built into the code (by us). So why can't managed languages behave like this? The compiler would analyse the code and insert the destruct methods into the appropriate places in the object code. +If the answer to this is that eligibility for GC for some objects become clear only during running, it might still make sense to build destruct statements into the compiled code in some cases, no? Maybe in the majority of cases?

+",258861,,,,,42743.40417,Managed code: could GC be taken care of during compile time?,,5,5,3,,,CC BY-SA 3.0, +339672,1,339673,,1/7/2017 22:14,,1,1309,"

The point of redux is to decouple ""what happened"" from ""how the state changes"" according to Dan, anyway, but I'm having trouble figuring out how to handle side effects without having getters and setters.

+ +

Here's where I am now:

+ +
    +
  • User enters email + password and taps ""log in""
  • +
  • App fires off { type: ""USER/LOGIN"", payload: { email, password } }
  • +
  • Middleware sees ""USER/LOGIN"" and makes an HTTP request
  • +
  • Reducer sees ""USER/LOGIN"" and updates state to fetching
  • +
+ +

THIS IS WHERE I GET STUCK

+ +

Here's how I'm handing it now, but it feels like I'm doing something wrong:

+ +
    +
  • Response comes back
  • +
  • If we get a user, dispatch { type: ""USER/LOAD"", payload: { ... } }
  • +
  • If we didn't, dispatch { type: ""USER/ERROR"", payload: { ... } }
  • +
+ +

Is this right?

+",100670,,8553,,42742.92917,42742.93333,How to avoid get/set actions in redux?,,1,1,,,,CC BY-SA 3.0, +339677,1,,,1/7/2017 23:09,,6,473,"

I'm developing a Ruby on Rails app. The app contains a service wrapping an external REST API, called from a controller, with several possible error states. The current implementation returns the response body on success and raises a service specific catch-all exception otherwise.

+ +

I want to refactor this to be able to distinguish between network errors et al, and authorization errors which are caused by the client supplying invalid parameters. What is the most idiomatic way to do this? What has resulted in the most maintainable code in your experience?

+ +

Below are some alternatives I've considered.

+ +

Exceptions throughout

+ +
+

It is recommended that a library should have one subclass of StandardError or RuntimeError and have specific exception types inherit from it. This allows the user to rescue a generic exception type to catch all exceptions the library may raise even if future versions of the library add new exception subclasses.

+
+ +

From the official Ruby documentation.

+ +

The complete code for my service is currently 39 lines and can't be considered a library, but this strategy might still be applicable. A possible implementation is listed below.

+ +
class MyService
+  def self.call(input)
+    res = do_http_call input
+
+    if res and res.code == 200
+      res.body
+    elsif res and res.code == 401
+      fail MyServiceAuthenticationError
+    else
+      fail MyServiceError
+    end
+  end
+end
+
+class MyServiceError < StandardError
+end
+
+class MyServiceAuthenticationError < MyServiceError
+end
+
+ +

Coming from other languages this approach doesn't feel right. I've often heard the mantra ""reserve exceptions for exceptional cases"", for instance in Code Complete (Steve McConnell, 2nd edition, p. 199):

+ +
+

Throw an exception on for conditions that are truly exceptional

+ +

Exceptions should be reserved for conditions that are truly + exceptional – in other words, for conditions that cannot be addressed by + other coding practices. Exceptions are used in similar circumstances + to assertions – for events that are not just infrequent but for events + that should never occur.

+
+ +

Are exceptions really for exceptional errors? is a discussion of this topic. The answers provide varying advice, and S.Lott's answer explicitly states ""Don't use exceptions to validate user input,"" which I think is roughly what the strategy outlined above amounts to.

+ +

Symbols for ""non exceptional"" errors

+ +

My first intuition is to use exceptions for errors I want to bubble up the stack and symbols for results the caller can expect and wants to handle.

+ +
class MyService
+  def self.call(input)
+    res = do_http_call input
+
+    if res.code == 200
+      res.body
+    elsif res.code == 401
+      :invalid_authentication
+    else
+      fail MyServiceError
+    end
+  end
+end
+
+class MyServiceError < StandardError
+end
+
+ +

Just like Exceptions throughout, this is easy to extend with additional errors.

+ +

It could however lead to maintainability problems. If a new symbol return value is added and a caller isn't modified the error symbol could silently be interpreted as successful return since the success return value is a string. I don't know how realistic this is in practice though.

+ +

Additionally, this approach can be considered stronger coupled to its caller. Whether an error should bubble up the call stack or be handled by the immediate caller is arguably not something the callee should concern itself with.

+ +

False on error

+ +

An example of this approach is ActiveRecord::Base#save.

+ +
    +
  • If the operation is successful it returns the result, or true in the case of #save.
  • +
  • If validations fail it returns false.
  • +
  • If some type of unexpected errors occur, like encoding fields with UTF-8 in #save, an exception is thrown.
  • +
+ + + +
class MyService
+  def self.call(input)
+    res = do_http_call input
+
+    if res.code == 200
+      res.body
+    elsif res.code == 401
+      false
+    else
+      fail MyServiceError
+    end
+  end
+end
+
+class MyServiceError < StandardError
+end
+
+ +

I generally dislike this strategy as false doesn't carry any semantic meaning and it's impossible to distinguish between errors.

+ +

Another way

+ +

Is there another superior way?

+",167729,,-1,,42878.48125,42785.47222,How do I create idiomatic error interfaces in Ruby?,,2,0,,,,CC BY-SA 3.0, +339685,1,339737,,1/8/2017 7:14,,-3,3297,"

As many other questions and answers already stated, there is no syntax in C++ which allows you to declare and fill a dynamic-sized array with non-default constructible objects.

+ +
Obj* array = new Obj[size];
+
+ +

Here, if Obj has a default constructor, it will be used to fill array with size default-constructed instances of Obj, which is a problem.
+The most recurrent answers to this question mention vectors (here), or the mechanism used by vectors, the placement new (here). However, the former is not an option in my case, and I would like to avoid the latter because it looks plain dirty and messy to me (being used by the STL perhaps makes it a nice way to do things, but it really does look messy).
+Edit: why vectors are not an option: This project is a challenge I want to test myself against, and I want to get my hands in the dirt as much as I can. If not using vectors means using placement news as stated below, then that is what I will do.
+I do realise that I complained about placement news being messy, and I also do realise that getting hands in the dirt does imply having to handle such mess.

+Another recurrent answer is to use the curly braces to fill the array with non-default constructed objects:

+ +
Obj* array = new Obj[2] {Obj(foo), Obj(bar)};
+
+ +

That makes it a half-dynamic-sized array, if I may say. Being allocated on the heap makes it not static, but the size has to be a compile-time constant, so I would not consider it fully dynamic (or at least not as much as I want it to be).

+ +

The most obvious solution to me would be to declare the array as above, let it be filled with junk, default-constructed Objects, and then re-fill it up with the correct objects, as follows:

+ +
Obj* array = new Obj[size];
+for (int i = 0; i < size; i++)
+{
+    array[i] = Obj(whatever);
+}
+
+ +

However, performance is a great deal in my program, and I am quite concerned about the performance impact of such methods. If size is 10000, the array would be filled with 10000 junk objects, which could be very time-expensive. Then, the cost of the replacement afterwards could be even worse (while still possible to minimise with efficient use of the copy-and-swap idiom), and that too is a concern.
+Instead, I was thinking of using a double malloc.

+ +
Obj** array = malloc(sizeof(Obj*) * size);
+for (int i = 0; i < size; i++)
+{
+    array[i] = malloc(sizeof(Obj));
+    array[i] = new Obj(whatever);
+}
+//...
+for (int i = 0; i < size; i++)
+{
+    delete array[i];
+    free(array[i]);
+}
+free(array);
+
+ +

But that looks totally not cache-friendly. And it just does not feel right having a pair of malloc (having only one already feels not okay).

+ +

So here is my question: is there a nice and clean way that allows you to allocate uninitialised memory, and then fill it with custom-constructed objects, that does not degrade performance?
+Edit: my real question is: are there other ways than the ones I mentioned above?

+ +

(P.S.: I know that ""uninitialised memory"" does not go well with RAII, and thus does not go well with ""nice and clean"")

+",153704,,-1,,42878.52778,42744.22569,"Proper way to implement a ""dynamic array of non-default-constructible objects""",,2,8,,,,CC BY-SA 3.0, +339695,1,,,1/8/2017 11:35,,0,237,"

I'm trying to learn proper object-oriented design, with class relations and avoiding anemic domain models[1]. I'm creating an application to store and retrieve information about ""cyberattacks"". There are five relevant classes to this question:

+ +
    +
  • Directory: the class representing the collection of the data indicated below
  • +
  • Group: a group of attackers known to be related to each other
  • +
  • Hacker: a person who makes an attack
  • +
  • Attack: a hacking attack, including information such as severity and cost to repair the damage
  • +
  • Type: attacks are classified into these types, i.e. denial of service, information leak and so on. These types are entered by the user when they add an attack.
  • +
+ +

The kinds of questions users of the application will ask are:

+ +
    +
  • Which Attacks have been performed by a specific Hacker? The response includes information about the damage of each Attack, as well as which Type it has.

  • +
  • For a specific Group, which attacks have been made? By whom, and which was caused the highest damage cost?

  • +
  • For a specific Type, which attacks are made? Who did them? What is the total damage cost?

  • +
+ +

The naive way to solve this would include a lot of circular dependencies between these classes (i.e. a Hacker has Attacks, but Attacks also have Hackers, and a Type has Attacks, but an Attack also has a Type, and so on.) As circular dependencies are unwanted, how would I solve this with proper OO design?

+",258894,,,,,42743.91806,"How to handle ""reverse dependencies"" between classes with proper object-oriented design?",,2,6,1,,,CC BY-SA 3.0, +339696,1,339700,,1/8/2017 11:40,,2,257,"

I would like to modelize some nucleotid chains : ADN & ARN. +ADN is a list of nucleotids : A,T,G,C. +ARN is a list of nucleotids : A,U,G,C.

+ +

ideally, I would like to define e.g. ADN as a list of data types A,T,G,C.
+I have this code, which works but doesn't suffice:

+ +
data NtADN = Td | Ad | Cd | Gd deriving (Eq)
+    data NtARN = Ur | Ar | Cr | Gr deriving (Eq)
+    data ADN = ADN [NtADN] deriving (Eq)
+    data ARN = ARN [NtARN] deriving (Eq)
+
+
+    class NucleotidChain a where
+        valid :: a -> Bool
+        countACGX :: a -> (Int,Int,Int,Int)
+
+ +

but I'm not satisfied with it : the nucleotids are declared 2 times with arbitrary suffixes (Ad,Ar...)

+ +

moreover, countACGX, which counts the number of each nucleotid in ADNs & ARNs, must be declared 2 times, one for ADN and one for ARN:

+ +
instance NucleotidChain ADN where
+        valid (ADN s) = all (\t->(t==Ad)|| (t==Td)||(t==Cd)||(t==Gd)) s 
+        countACGX (ADN s) = 
+            let     a= length $ elemIndices Ad s 
+                    c= length $ elemIndices Cd s 
+                    g= length $ elemIndices Gd s 
+                    t= length $ elemIndices Td s 
+            in (a,c,g,t) 
+
+    instance NucleotidChain ARN where
+        valid (ARN s) = all (\t-> (t==Ur) || (t==Ar) || (t==Cr) || (t==Gr)) s
+        countACGX (ARN s) = 
+            let     a= length $ elemIndices Ar s 
+                    c= length $ elemIndices Cr s 
+                    g= length $ elemIndices Gr s 
+                    u= length $ elemIndices Ur s 
+            in (a,c,g,u) 
+
+ +

is there a way to get rid of this duplication?to declare only 5 nucleotids (A,T,G,C,U) and especially why not to success in declaring a data ADN (and ARN) as an array of elements taken in the result of a function which is different for ADN (& ARN)? +such as it:

+ +
data Nt = A | T | G | C | U
+data ADN = ADN [nts]
+data ARN = ARN [nts]
+
+class NtChain a where
+  nts :: [Nt]
+
+",233240,,,,,42743.93542,how to model ADN & ARN in haskell?,,2,2,,,,CC BY-SA 3.0, +339698,1,339699,,1/8/2017 12:47,,0,168,"

I was wondering if a value that is defined by the user at the start of a program, and not modified by the program, is considered a constant or a variable. I know that a constant is a word/letter that holds a value that is not changed during the execution of a program (EG: pi or e), and a variable is also a word/letter that holds a value during the execution of a program, but it's value can be changed.

+ +

For example, in the code below, would the identifier interest be considered a variable or a constant?

+ +
#Python 3
+#Program to calculate the interest gained in a bank account after 1 year
+
+interest=float(input(""Interest rate: ""))
+while True:
+    value=float(input(""Account value: ""))
+    value=(value * interest)-value
+    print(value)
+    print()
+
+",225674,,7422,,42743.62083,42743.68611,Is this a constant or a variable?,,2,0,1,,,CC BY-SA 3.0, +339702,1,343835,,1/8/2017 14:28,,1,92,"

I have a data model based around users. User owns records on several derived entities, e.g. A user have tasks and each task have documents. Documents don't have the User ID in their properties, only the task ID.

+ +

I want to allow users to read and update only their own documents via the /documents endpoints.

+ +

What is the security/permissions/access control model I should use for this scenario?

+",258908,,,,,42803.84444,How to create access control on a record level on derived entities?,,2,5,,,,CC BY-SA 3.0, +339705,1,339707,,1/8/2017 16:27,,3,2524,"

When encoding our chromosome's characteristics (for want of a better word), binary seems to be the favoured method. I understand that this gives the maximum possibilities for crossover and mutation, but it also seems to have a serious limitation.

+ +

For example, suppose I am trying to solve the problem described here, given the digits 0 through 9 and the operators +, -, * and /, find a sequence that will represent a given target number. The operators will be applied sequentially from left to right as you read. This requires the digits 1 to 9, as well as the four operators, giving 13 characters to be encoded. Thus, I need to use a binary representation with a length of 4, with a total of 16 possible binary strings.

+ +

Now, for a sequence to be valid in that problem, it would need to be of the form...

+ +
d o d o d ... o d
+
+ +

...where d means a digit and o means an operator. Suppose you are looking at a sequence of length 5 (eg 1 + 2 * 3). There are 9 binary representations that are valid for digits (ie probability 0.5625) and 4 that are valid for operators (probability 0.25). Thus, there is only a probability of 0.5625 * 0.25 * 0.5625 * 0.25 * 0.5625 = 0.011124 of a random binary string being a valid sequence. In other words, only about 1% of the strings will be valid.

+ +

This seems hugely inefficient. Crossover and mutation are going to invalidate any existing valid strings, so I don't see how the GA would ever converge.

+ +

Related to this is the question of how to handle invalid binary strings. Suppose you've crossed and mutated, and you end up with an invalid string. Do you just assign a huge fitness value, so it will be discarded as soon as possible, or do you throw it away and try and find a valid child chromosome? The former option sounds inefficient as you would have very few valid chromosomes in your population, and the latter sounds just as inefficient, as you would spend ages trying to find valid binary strings.

+ +

Forgive me if this is a dumb question, but I'm still quite new to GAs, and am struggling to understand what you would do in a case like this.

+",123358,,,,,42745.03958,Why do we use binary encoding when it seems so inefficient?,,4,4,,,,CC BY-SA 3.0, +339710,1,339746,,1/8/2017 17:13,,1,218,"

I create Application for my client. I use some libraries released on GitHub under MIT, BSD and Apache license. I create also documentation (PDF file) where I would like to point what libraries and components I've used.

+ +

What details about libraries should I place beside the name/source of library to satisfy MIT, BSD and Apache License conditions?

+ +

Is it enough to give only the name and licence of the resource? Or should I put also the Author name and the full text of specific license?

+",246065,,,,,42744.32639,"MIT, BSD, Apache License: Create application for client",,1,0,,,,CC BY-SA 3.0, +339714,1,339715,,1/8/2017 18:23,,1,135,"

I was hoping you could give some feedback on an idea I had for designing functions.

+ +

I am trying to think of a unifying principle for choosing what functions should return. The specific project is mostly data access classes.

+ +

So the principle is this: ""When deciding what value to return as a status code, either True or False, opt to return True if the desired state is achieved.""

+ +

For example, if you made a call to remove_email('email') and the email that you passed as an argument was not in the list, return True because the desired state, one in which the email is not in the database, now exists. An alternative principle might be, always return False if the exact functionality is not executed. Like removing when the email doesn't exist or the table does not exist.

+ +

I think I unifying principle like that would be helpful in creating a shared mindset in the code where we can all use it as a guiding principle.

+ +

So first, can you tell me if the principle itself is a good idea? Should a function return True if it doesn't actually do what it claims to do, like remove an item? And if this is a bad principle, is there any other principle or set of principles that are accepted as a good standard? Always throw an error code if the exact behavior does not match? Always return False? etc.

+ +

And second, is it common to have common design philosophies like this in code bases? Is so, could you provide me some examples? The closest thing that comes to mind is the Unix philosophy that a program should do one thing and one thing only. But that is more of a higher level design principle than an implementation principle.

+ +

I apologize if this is not a good question, but I am trying to learn and develop a strong fundamental understanding and I want to run these ideas by more experienced programmers to get feedback.

+",258744,,,,,42743.79236,Critique on design principle and validity of such in general,,2,0,,,,CC BY-SA 3.0, +339718,1,339721,,1/8/2017 19:48,,12,2970,"

The Scrum Team

+ +
    +
  • 3 x Developers
  • +
  • 2 x Testers
  • +
  • 1 x Automation Test Analyst
  • +
+ +

We are not a multi-functional team in that the developers don't test and the testers don't develop. I believe this is the root cause of the issue.

+ +

We currently do two-week sprints.

+ +

At the start of the sprint everyone is busy, the developers are making a start on the development work and the testers are doing their test preparation (writing test cases, etc.)

+ +

Once the testers have finished their preparation they are now waiting for the development work to be complete OR the development work is complete and the developers are waiting for feedback/bugs.

+ +

The developers get itchy feet here and start to work on items in the backlog which are outside of the current sprint. This has created a strange affect whereby we are always developing next sprints work in the current sprint. To me this doesn't feel right.

+ +

From managements point of view, they would rather the developers do work than sit at their desks doing nothing but at the same time I feel like the scrum team's goal and focus should solely be on the current sprint. I wish our team was multi-functional but unfortunately it isn't achievable. The testers don't have the necessary skills to do development work and the majority of developers have the opinion that testing is beneath them.

+ +

Is this considered a problem in scrum? +Is there a solution to this? +Does scrum only work with multifunctional teams?

+ +

I'd like to know other peoples experiences with this if possible :)

+",220731,,217956,,42744.53472,44062.22639,Scrum - Developers Working Outside of Sprint,,7,11,2,,,CC BY-SA 3.0, +339725,1,,,1/8/2017 22:19,,3,995,"

I'm in a bit of a tricky situation where I need to use the Observer pattern but I don't really know the best way to go about it.

+ +

Here's a quick briefing on my application:

+ +

I'm implementing a GUI application that allows users to create flowcharts and mindmaps by dropping shapes onto a canvas and then manipulating the shapes by clicking, dragging, and holding. My shapes have the base class type of MindMapComponentInstance, and my canvas has type Canvas. The Canvas object acts as a kind of controller, creating and storing references to all the MindMapComponentInstance objects when a user drops onto the canvas.

+ +

Now I need to implement mouse event functionality. Specifically, I need the Canvas object to ""watch"" or ""listen"" to all the mouse events that are registered on the MindMapComponentInstance objects. So, basically I have an undefined amount of publishers (the MindMapComponentInstance objects), and one subscriber (the Canvas object).

+ +

To make things slightly more complicated, I need to be able to distinguish between different type of mouse events. A click, drag, hold, etc must all be distinguished by the Canvas object, as it needs to act differently depending on the type of mouse event that is registered by the MindMapComponentInstance objects.

+ +

What is the best way to implement what I need? Will the standard observer pattern suffice? Or should I do things a little differently to result in better code design?

+ +

EDIT: I thought I'd mentioned that my application is a web app and as such is being written in javascript, if that helps.

+",244416,,244416,,42744.2,42744.20208,Implementation of observer pattern with one observer/multiple publishers and multiple events?,,1,0,1,,,CC BY-SA 3.0, +339727,1,339781,,1/8/2017 22:28,,29,1817,"

In Test Driven Development (TDD) you start with a suboptimal solution and then iteratively produce better ones by adding test cases and by refactoring. The steps are supposed to be small, meaning that each new solution will somehow be in the neighborhood of the previous one.

+ +

This resembles mathematical local optimization methods like gradient descent or local search. A well-known limitation of such methods is that they do not guarantee to find the global optimum, or even an acceptable local optimum. If your starting point is separated from all acceptable solutions by a large region of bad solutions, it is impossible to get there and the method will fail.

+ +

To be more specific: I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a competely different approach. You will have to throw away your previous work and start over again.

+ +

This thought can actually be applied to all agile methods that proceed in small steps, not only to TDD. Does this proposed analogy between TDD and local optimization have any serious flaws?

+",217956,,110531,,42743.96667,43565.65694,Is this limitation of Test Driven Development (and Agile in general) practically relevant?,,8,7,5,,,CC BY-SA 3.0, +339732,1,339736,,1/9/2017 3:33,,0,169,"

Below is the design, that is implemented similar to design used in Linux/net/socket.c.

+ +

Below design provide List abstraction,

+ +

+ +

where, list.h provides List interface, show here

+ +
+

Background:

+ +

Reason to implement List abstraction in this approach is to consider as a prototype inspired from Linux/net/socket.c design, where any one of the multiple protocol family implementations(like net/ipv4/af_inet.c or net/unix/af_unix.c/..) is available on invoking socket(AF_INET | AF_UNIX | AF_XYZ,,) api.

+ +

This prototype would help understand implementing snmp library that talks to multiple network elements(Router/Switch/Server) using snmp protocol.

+
+ +

Above design(shown in above image) is an analogy to Linux/net/socket.cas shown in this code here with a slight difference in linking time(linker phase) unlike linking implementations in Linux/net/socket.c happens at loading phase by overriding _init() run-time code in implementation(say af_inet.c) and invoking sock_register()

+ +

To further improve this analogy,

+ +

Am thinking on improving design(shown in above image), that can allow createList(ARRAY_IMPL) get called from fileIO/fileReading.c(for its own purpose) and createList(LINKED_LIST_IMPL) get called from ST/implUsingList.c(for its own purpose).

+ +

With current design(shown in above image), it breaks the purpose, as it works with any one implementation(say arrayImpl.c) linked at linker phase.

+ +

Reason: ListHandler *handler = NULL; is global variable defined in virtualImplLayer.cand gets over-ridden on every call to createList(ImplType), as shown in this code here

+ +

My question:

+ +

How to enhance this prototype design to pick multiple implementations for client scenario(shown in image)? Does it require multi-threading at virtual implementation layer??

+",131582,,131582,,42744.20486,42744.22153,Design improvement - C,,1,6,,,,CC BY-SA 3.0, +339734,1,339784,,1/9/2017 4:57,,42,6518,"

In JS you can return a Boolean having custom properties. Eg. when Modernizr tests for video support it returns true or false but the returned Boolean (Bool is first class object in JS) has properties specifying what formats are supported. At first it surprised me a bit but then I began to like the idea and started to wonder why it seems to be used rather sparingly?

+ +

It looks like an elegant way of dealing with all those scenarios where you basically want to know if something is true or false but you may be interested in some additional info that you can define without defining a custom return object or using a callback function prepared to accept more parameters. This way you retain a very universal function signature without compromising capacity for returning more complex data.

+ +

There are 3 arguments against it that I can imagine:

+ +
    +
  1. It's a bit uncommon/unexpected when it's probably better for any interface to be clear and not tricky.
  2. +
  3. This may be a straw man argument, but with it being a bit of an edge case I can imagine it quietly backfires in some JS optimizer, uglifier, VM or after a minor clean up language specification change etc.
  4. +
  5. There is better - concise, clear and common - way of doing exactly the same.
  6. +
+ +

So my question is are there any strong reasons to avoid using Booleans with additional properties? Are they a trick or a treat?

+ +
+ +

Plot twists warning.

+ +

Above is the original question in full glory. As Matthew Crumley and senevoldsen both pointed it is based on a false (falsy?) premise. In fine JS tradition what Modernizr does is a language trick and a dirty one. It boils down to JS having a primitive bool which if set to false will remain false even after TRYING to add props (which fails silently) and a Boolean object which can have custom props but being an object is always truthy. Modernizr returns either boolean false or a truthy Boolean object.

+ +

My original question assumed the trick works differently and so most popular answers deal with (perfectly valid) coding standards aspect. However I find the answers debunking the whole trick most helpful (and also the ultimate arguments against using the method) so I'm accepting one of them. Thanks to all the participants!

+",196173,,25373,,42745.76042,42755.09514,Is a JS Boolean having custom properties a bad practice?,,9,8,2,,,CC BY-SA 3.0, +339743,1,339744,,1/9/2017 7:17,,-6,382,"

I have been a developer for past 3 years and I have been seeing interface in most of the places as a contract for the developers to write their own implementation or a marker (eg. Serializable). But I quite dont understand how this concept is named as INTERFACE in literal meaning. Am I missing something pretty basic?

+",207328,,61852,,42744.60694,42745.37431,Why are interfaces in Java called that way?,,2,11,1,,,CC BY-SA 3.0, +339748,1,339750,,1/9/2017 8:19,,0,112,"

I have a general question and unfortunately no background in programming apps, so I'm sorry if this is a stupid question:

+ +

If my app has to process difficult calculations for which the power of the phone or tablet would not be enough, would it be possible to outsource these calculations to an external server? +And if this is already done in some apps, could you give me examples?

+ +

I have done some research but did not find anything... probably I have searched with the wrong terms.

+",258957,,31260,,42744.38819,42744.38819,Is it possible to outsource calculations of an app to an external server?,,1,2,0,,,CC BY-SA 3.0, +339749,1,339780,,1/9/2017 8:49,,-1,147,"

I'll try word this as succinctly as possible.

+ +

I currently have an object called a StoresAdaptor which inherits from a base class ERPAdaptor, that takes a Stores object and pushes it into a 3rd party ERP system via an adaptor design pattern.

+ +

Now, there are multiple types of these Stores movements. Each has their own implementation, but the creation is nigh on identical. The issue arises from the fact that this Stores object has the ""To"" location, and the Line object has the ""From"" location - however, when making a Movement, we need to know both the ""To"" and ""From"".

+ +
public class StoresAdaptor : ERPAdaptor
+{
+    protected Stores storesRequest
+    {
+        get { return (Stores) this.GenericObject; }
+    }
+    public StoresAdaptor(Data_Controller DCO, GenericObject storesRequest, ERPConnection conn)
+        : base(DCO, storesRequest, conn)
+    { }
+    private class ReturnMiscMaterialAdaptor
+    {
+        //This here feels wrong...
+        private StoresAdaptor _storesAdaptor;
+        private Stores.Line _storesLine;
+
+        public ReturnMiscMaterialAdaptor(StoresAdaptor storesAdaptorRef, Stores.Line storesLine)
+        {
+            _storesAdaptor= storesAdaptorRef;
+            _storesLine= storesLine;
+        }
+     // task to bay
+        public Update()
+        {
+            ERPSystem.ToLocation = _storesAdaptor.storesRequest.ToLocation;
+            ERPSystem.FromLocation = _storesLine.FromLocation;
+            //Return specific functionality
+        }
+    }
+    private class IssueMiscMaterialAdaptor
+    {
+        //As well as here... there has to be a better way to do this...
+        private StoresAdaptor _storesAdaptor;
+        private Stores.Line _storesLine;
+
+        public IssueMiscMaterialAdaptor(StoresAdaptor storesAdaptorRef, Stores.Line storesLine)
+        {
+            _storesAdaptor= storesAdaptorRef;
+            _storesLine= storesLine;
+        }
+        public Update()
+        {
+            ERPSystem.ToLocation = _storesAdaptor.storesRequest.ToLocation;
+            ERPSystem.FromLocation = _storesLine.FromLocation;
+            //Issue specific functionality
+        }
+        // bay to task
+    }
+
+ +

In essence, we need to keep the header object (Stores) exposed to all the sub-classes, whilst passing in the line (Line) to the separate sub-classes to be actioned appropriately.

+ +

What I've done so far is passed in a reference to the object in the sub-class constructor, but I find I'm repeating the exact same fields and ctor declaration (changing the name to match of course) which after the 3rd time of hitting ""Paste"" made me stop, and take a step back and realise I'm doing something wrong.

+ +

I've also considered making these 'sub-classes' : StoresAdaptor but the base StoresAdaptor inherits from an abstract class with no parameterless constructor - and as such I would need to duplicate the constructor of the base class over and over.

+ +

Can anyone suggest which pattern would work best in this scenario? I will keep working on it but if I feel something is wrong with the pattern, there must be something VERY wrong with the design.

+",258959,,,,,42744.69028,Design - Sub-classes with similar implementations,,2,1,,,,CC BY-SA 3.0, +339751,1,339772,,1/9/2017 10:05,,4,539,"

I'm looking at implementing a new RESTFul call where a User can follow/unfollow a generic 'thing' item, but I need to know the best or common approach below, 1 or 2?

+ +

1) A GET or POST on the following URL's:

+ +
user/{thing_to_follow_id)/follow
+user/{thing_to_follow_id)/unfollow
+
+ +

Or

+ +

2) A POST containing thing_to_follow_id

+ +
POST
+user/follow
+
+ +

A DELETE on

+ +
user/follow/{thing_to_follow_id}
+
+ +

The first seems the most restful, but it feels wrong to post an empty payload where the second allow me to do a GET and return all things a user is following.

+ +

What is the best approach here?

+",258966,,,,,42744.63611,Options for the RESTFul approach for follow and unfollowing,,3,0,1,,,CC BY-SA 3.0, +339756,1,,,1/9/2017 11:12,,3,615,"

I am wondering if it is a good idea to do the following:

+ +

I have a Django model (which is related to a migration, therefore it has a database entry) with a bunch of properties. Accessing these are obviously hitting the database every time.

+ +

I also have many methods using these properties, like:

+ +
def is_good(self):
+  return not self.bad and self.good > self.threshold
+
+ +

And these methods are used pretty frequently. So I am trying to reduce the database queries as much as possible, for performance's sake.

+ +

Maybe I can take advantage of the fact that my models are very ""static"", meaning that most of these attributes will never actually change their values, so self.bad or self.good and self.threshold will always hold the same values and it will never change in the database. Perhaps I can use this to my advantage and cache is_good() to reduce the database work?

+ +
def is_good(self):
+  try:
+    return self._is_good
+  except AttributeError:
+    self._is_good = not self.bad and self.good > self.threshold
+  return self._is_good
+
+ +

Is this a common and recommended practice?

+",223143,,,,,43164.7875,Django: caching properties for non-changing entries,,1,0,1,,,CC BY-SA 3.0, +339765,1,,,1/9/2017 13:39,,7,454,"

I am facing problems structuring projects and libraries.

+ +

In the company I am working for I often see, that things would be more maintainable and less error prone, if we could extract common code and build libraries with that code.

+ +

A simple example may be the usage of a custom logging library instead of copying code from other projects or, even worse, writing everything again from scratch.

+ +

So wanting to expose things into libraries there comes up the question how to do that? My idea is to put every lib(or related set of libraries) into its own repository. But this would mean, that to build the project, you definetly need to check out at least two repositories. Because I have been workong on a project with dependencies to a lot of other repositories I became quite careful with creating dependencies to other projects.

+ +

So my question is: is there another nice solution for this? What do you guys do?

+ +
+ +

There are more than 150 repositories, which are not all related of course. Most of the projects have their own repository. To provide a scenario, lets assume the following:

+ +
    +
  • Application A
  • +
  • Application B (already references 30 other in-house repositories)
  • +
  • LibXYZ
  • +
+ +

Both applications need to use LibXYZ.

+ +

Versioning:

+ +

Versioning depends on the project. The older projects just took the revision number from SVN. We are now moving to versioning with fix numbers like 1.2 or 1.2.5, so let's assume this as versioning approach. Every release is tagged in SVN.

+ +

Plattform:

+ +

We are using Qt targetting mainly Windows. But I appreaciate a cross-plattform approach very much.

+ +

What prevents me from just creating a repository for common code?

+ +

The problem I see is that it adds a lot of complexity to a project. To be able to compile and run an application I need to:

+ +
    +
  • check out multiple repositories
  • +
  • compile all depencies
  • +
  • copy all compiled DLL's into the application's build directory
  • +
+ +

All this again has to be documented and maybe also to be included in the build scrips. Moreover this forces the other team members to have the exact same setup as I do.

+ +

I am not saying that this is impossible, I am just searching for a better approach.

+",258989,,1204,,42744.6875,42745.63125,How to treat in-house libraries,,2,13,1,,,CC BY-SA 3.0, +339782,1,339792,,1/9/2017 16:19,,7,2281,"

For developing a C++ dynamically-linked library (i.e. shared object) which may interface with C programs, what is the best practice to save program state across exported function calls?

+ +

I as a not-experienced C++ programmer can think of the following methods:

+ +
    +
  • Compilation unit level static variables.
  • +
  • Instantiate a struct at heap which holds the state and passing back and forth its address in each API call (somewhat like JNI).
  • +
+ +

The problem with the first approach is that my state variables need some data to be initialized and these data are provided by calling init API (one of exported functions). On the other side, when using module's level static variables, those data aren't available yet when those variables are getting initialized.

+ +

Also my problem with the second method is that each API function should be supplied with that pointer and this is a bit cumbersome.

+ +

Note that there is another option that static variables are pointers to those state variables and are assigned in that init function (actually state variables are instantiated in init and their address are saved in those static variables). This option is fine, but I would like to not use pointers where possible.

+",96931,,96931,,42744.69236,42744.76042,Best Practice for saving state in a C++ shared object,,1,0,1,,,CC BY-SA 3.0, +339783,1,339788,,1/9/2017 16:21,,7,14149,"

After looking at a lot of session/state debates with regard to REST and finding nothing concrete, I'm just going to cut to the chase and ask myself.

+ +

Developing a RESTful API as a backend for a mobile app, I (think I) want to keep track of all users (even unregistered ones) with guest sessions. This allows me to customise content and do statistics on my users.

+ +

Implementation-wise I would have my app identify itself, as if to an /authenticate end-point, just without e-mail/password, more like UUID. But this effectively makes my whole API expect a ""session token"" with each request.

+ +

Is this a bad approach, or would you do the same?

+",259006,,,,,43620.46597,RESTful API with session tokens.. ehh?,,2,4,4,,,CC BY-SA 3.0, +339794,1,,,1/9/2017 18:34,,24,2132,"

I am a budding software engineer (now a sophomore, major in CS) and I really struggle to understand other people's programs. I want to know if this skill (or lack of it) can be a handicap for me, and if yes, then how can I develop it?

+",258188,,5570,,42745.30139,42752.40139,"As a software engineer, how important is it to read other's code?",,7,5,4,,,CC BY-SA 3.0, +339802,1,339818,,1/9/2017 19:52,,2,4744,"

When using bpmn to specify a process model. And using UML to specify a use case diagram. Don't they both describe processes? What's the difference between the two?

+ +

I'm reading a course which states:

+ +
+

A process model makes the processes in which the system is used + readily understandable, but does not hold enough detail to develop a + system

+ +

A use case diagram denotes the interaction between a system and its + users and the hierarchical relation between functionalities of the + system

+
+",21435,,209774,,43564.85417,44000.93542,What is the difference between a process model and a use case diagram?,,1,6,,,,CC BY-SA 3.0, +339807,1,339810,,1/9/2017 20:44,,69,10421,"

How do you collaboratively develop software in a team of 4-5 developers without acceptance criteria, without knowing what the testers will be testing for and with multiple(2-3) people acting as product owner.

+ +

All we have is a sketchy 'spec' with some screen shots and a few bullet points.

+ +

We've been told that it will be easy so these things are not required.

+ +

I'm at a loss on how to proceed.

+ +

Additional Info

+ +

We have been given a hard deadline.

+ +

The customer is internal, we have a product owner in theory but at least 3 people testing the software could fail a work item simply because it doesn't work how they think it should work and there is little to no transparency of what they expect or what they are actually testing for until it has failed.

+ +

product owner(s) are not readily available to answer questions or give feedback. There are no regular scheduled meetings or calls with them and feedback can take days.

+ +

I can understand that we cannot have a perfect spec but i thought it would be 'normal' to have acceptance criteria for the things we are actually working on in each sprint.

+",94888,,209774,,43968.81875,43968.81875,How do you develop software without acceptance criteria?,,9,15,11,,,CC BY-SA 3.0, +339817,1,,,1/9/2017 22:23,,12,16687,"

I have a number of web services that form a web application. Clients can access these services through REST APIs calls.

+ +

Should these services be able to talk directly to each other? If so wouldn't that make them couple which goes against the concept on microservices?

+ +

Should the client call them directly one after another to get the data it needs to load up a web page on the client?

+ +

Or should I have another layer on top of my services, that handles a request from the client, fetches the data for that request and then sends it back to the client?

+",259036,,,,,43111.46528,Should services talk directly to each other in a microservice architecture?,,7,1,10,,,CC BY-SA 3.0, +339821,1,340120,,1/9/2017 22:48,,2,505,"

I am going to build raspberry pi based device which reads data from sensor bus (serial port), analyzes it and present it on touch screen. +Touch screen will be also used for configuration. +Device needs to have web interface with equal functionality.

+ +

What I am planing to do is to split application for 2 or 3 separate (python) applications with database as medium for exchanging data. Flask based for web interface and kivy based for touch screen. In order not to choose which one will be ""master"" that reads sensor bus I am thinking about introducing third one (headless) that is doing main job - getting data, analysing, producing events and storing them into database (MongoDB or MySQL).

+ +

Theoretically all frameworks could work in one program in separate threads with common data space, but it seems to be less elastic, and more difficult to maintain and adding another interfaces (e.g. another serial bus to talk to other devices) not mentioning integration with nginx.

+ +

Is that right approach?

+",258924,,,,,42748.43611,"Architecture of app with two independedt interfaces (web, touch screen)",,1,2,0,,,CC BY-SA 3.0, +339825,1,339831,,1/10/2017 1:13,,1,2250,"

I am currently perusing through Jdk 8 and I came across the feature where you have multiple interfaces that overlap with the same method signature for a default method, the compiler will throw an error if you try to have one class implement both interfaces, to prevent the problem of ambigious methods with multiple inheritance.

+ +

My question is, why can't java also allow multiple inheritance with classes like it does with interfaces, and solve the ambigious field and method problem that results from multiple inheritances by simply throwing an exception like it does when implementing multiple interfaces that have identical default method signatures?

+",258776,,,,,42745.76528,Why doesn't java allow multiple inheritance of classes when it allows multiple inheritance of interfaces?,,4,3,,42746.79861,,CC BY-SA 3.0, +339829,1,339834,,1/10/2017 3:24,,6,489,"

I want to keep a dependency decoupled, but at the same time, once it's passed to the constructor, I want to allow changes only through Whatever (in the following example) because changing the dependency from the outside would invalidate the state of the Whatever object that depends on it. So I keep a copy instead:

+ +
class Whatever{
+    private Dependency d;
+    public constructor(Dependency d){
+        this.d = d.clone();
+    }
+    // ...
+}
+
+ +

However the dependency is huge so I've decided to avoid the copy; I've removed the clone() and made clear through documentation that once the dependency is passed to this class, it must not be modified from the outside.

+ +

Question is: is there a way to avoid the copy and at the same time maintaining proper encapsulation? And with proper encapsulation I mean either immediately error on access attempts from the outside or avoiding the possibility of the access from the outside entirely.

+ +

My guess is that I cannot. Not using the Java/C#/etc's OOP interpretation.

+ +

So I also ask, are there languages that cover such a case? How do they work?

+ +

An option I could have is the following, assuming a language doing reference counting:

+ +
class Whatever{
+    private Dependency d;
+    public constructor(Dependency d){
+        this.d = d.isTheOnlyReference() ? d : d.clone();
+    }
+    // ...
+}
+
+new Whatever(new Dependency()); // no clone performed
+
+Dependency d = new Dependency();
+new Whatever(d); // clone performed, as Dependency has >= 1 references 
+
+",165064,,165064,,42745.15486,43085.98125,Strategy for avoiding defensive copies while keeping correct encapsulation,,4,4,1,,,CC BY-SA 3.0, +339841,1,339856,,1/10/2017 9:28,,8,5252,"

Is every object in C++ mutable if not stated otherwise?

+ +

In Python and Javascript I can't change strings, tuple, unicodes. I was wondering if there is something in C++ that is immutable or every object is mutable and I have to use const type qualifier to make it immutable.

+",104099,,104099,,42745.45417,42746.04861,Are all the objects in C++ mutable if not stated otherwise?,,3,5,5,,,CC BY-SA 3.0, +339843,1,339848,,1/10/2017 9:58,,1,186,"

A Tech Lead in my team said:

+ +
+

We're going to use sonar on our (500KLoc) codebase so that everytime you do a commit, it will check the classes you've touched against the coverage goals. If you don't meet the coverage goals for your commit, the build will break.

+
+ +

Another developer responded:

+ +
+

No that has no value. The classes were written against a specification with particular test scenarios in mind. It's not possible to discover what those test scenarios were from the code. You can't write a useful JUnit test on legacy code because you don't understand the original intent.

+
+ +

The first tech lead responded:

+ +
+

You can infer the requirements from the codebase, and the user experience of the software. All the JUnit test does, even if it does capture the original test scenarios, is demonstrate that a working path exists in the code. You can't say that JUnit tests represent a proof, but providing coverage of the code is extremely valuable.

+
+ +

My question is: When writing a JUnit test for non-covered legacy code - how important is it to understand the original scenarios?

+ +

EDIT: +Note that this is different to the linked question because it is about intent or inferred requirements as input to the JUnit tests on the legacy code.

+",13382,,13382,,42745.46319,42745.46319,When writing a JUnit test for non-covered legacy code - how important is it to understand the original scenarios?,,2,2,,,,CC BY-SA 3.0, +339855,1,339892,,1/10/2017 10:55,,10,1527,"

I one failed at an algorithmic test with Codility because I tried to find a better solution, and in the end I had nothing.

+ +

So it made me think if I could use an approach similar to TDD? +I.e. If I can usually develop a solution gradually in similar fashion?

+ +

If I were writing a sorting algorithm I could move from a standard Bubblesort to a 2-way bubblesort, but then something more advanced like Quicksort would be a ""quantum leap"", but at least I would have testdata I can easily validate.

+ +

Other tips for such tests? One thing I would do the next time is to use more methods/functions than inner loops. For example, in sorting, you often need a swap. If it were a method I would just need to modify the calling code. +I could even have more advanced solutions as derived classes.

+ +

With ""Algorithmic"" vs ""normal"" problems I mean problems where Time-complexity is important. So instead of passing more tests as in TDD, you would make it ""behave better"".

+ +

With ""similar to TDD"" I mean:

+ +
    +
  1. Write relatively automatic test to save time on manual tests pr increment.
  2. +
  3. Incremental development.
  4. +
  5. Regression testing, ability to detect if code breaks or at least if functionality changes between increments.
  6. +
+ +

I think this should be quie easy to understand if you compare

+ +
    +
  1. Writing a shell-sort directly
  2. +
  3. Jumping from bubblesort to quicksort (Total rewrite)
  4. +
  5. Moving incrementally from a one-way bubble-sort to shell sort (If possible).
  6. +
+",48489,,48489,,42745.83681,42745.86806,TDD like approach to Algorithmic problems,,3,13,4,,,CC BY-SA 3.0, +339858,1,339923,,1/10/2017 11:08,,4,3047,"

Having built a RESTful API (using Laravel) for a project at work, I followed what seemed (from lots of reading) to be a the majority in terms of my approach to nested resources - by defining them in the path:

+ +
https://myapi.com/clients/{clientId}/tasks
+
+ +

rather than:

+ +
https://myapi.com/tasks?client={clientId}
+
+ +

Filtering results further than this, plus ordering etc was done using query parameters:

+ +
https://myapi.com/clients/{clientId}/tasks?orderby=title
+
+ +

Whilst initially this seemed like a good approach, it did become more difficult to maintain in terms of the controllers and routes. For example, for the above, I created a ClientsController, a TasksController AND a ClientsTasksController.

+ +

TasksController

+ +
public function store() {}
+public function updated() {}
+public function destroy() {}
+
+ +

TasksClientsController

+ +
public function index($client_id) {}
+
+ +

And the routes:

+ +
Route::resource('tasks', 'TasksController', ['except' => [
+    'edit', 'index'
+]]);
+Route::resource('clients.tasks', 'ClientsTasksController', ['only' => [
+    'index'
+]]);
+
+ +

This application did not need many more nested resources such as this, it was relatively straightforward. However, I am now looking to create an API for existing application which has many more complex relationships. It is a league management app, EER diagram below:

+ +

+ +

Where I'm concerned is that I am going to need to query resources by a lot of different data - for example:

+ +
    +
  1. teams by a specific division, season and round
  2. +
  3. statistics by both team id and season id, and possibly round id also
  4. +
  5. matches by season, round and division
  6. +
+ +

and combinations of the above.

+ +

It therefore seems to me to make more sense to have a controller and route for each final resource, but to use query parameters rather than path variables.

+ +

For example:

+ +
https://myapi.com/matches?team_id={teamId}&season_id={seasonId}&roundId={roundId}
+
+ +

rather than:

+ +
https://myapi.com/seasons/{seasonId}/teams/{teamId}/matches
+
+ +

and the various extra controllers this would create.

+ +

However I feel this may be going against the grain in terms of what most people seem to suggest - see here for example. The second URL also seems neater to me.

+ +

Any help / recommendations would be very much appreciated.

+",259086,,-1,,42878.52778,42759.53194,RESTful api and nested resources,,1,1,4,,,CC BY-SA 3.0, +339864,1,339889,,1/10/2017 12:14,,5,7093,"

I'm working on my CS Masters thesis at a company which does user interfaces in the field of embedded devices. As part of that I am developing a library for integrating a certain device. My C++ library wraps the device drivers and integrates the device's features in the Qt framework so that it can be used with a variety of Qt/QML-based applications.

+ +

So far I have written many notes and detailed API documentation (using Doxygen). But I have no idea how to properly format all the information in order to provide a good overview of my work in the thesis. Simply using the API documentation doesn't make sense since 1) it's way too detailed and 2) doesn't exactly give a formal overview of how things are structured and work together.

+ +

From what I know I have to describe at least the following aspects of the software product I'm developing:

+ +
    +
  • Technology(ies): which technologies I have used. I have split my project into:

    + +
      +
    • a library which provides device management, data conversion, and Qt integration;
    • +
    • tests; and
    • +
    • a collection of ready-made Qt widgets which use the library.
    • +
  • +
  • Patterns: what software patterns I have used (MVC, Singleton, Factory, Observer etc.). The problem here is that there are many, many patterns out there and there is a known inconsistency when it comes to which pattern is what exactly (alone the MVC pattern is very famous for it - pick three books from different authors and there is a very high chance that the descriptions of this pattern vary greatly!). Personally I work following the principle ""XYZ makes sense to me hence I will use it"", which probably many of you will find too broad. :D

  • +
  • Information flow: mostly sequence and flow diagrams here: if user does action X how does the device and the rest of the system respond to it.

  • +
  • Data management: for storing and processing the data from the device, my library uses data containers such as arrays, vectors, hash tables etc..

  • +
+ +

I doubt that this is enough or even correct that's why I'm asking for help. I was unable to find a step-by-step tutorial that I can follow. I also don't know how deep I have to go.

+ +

I have asked my professor about the hardware-level stuff and he said that he doesn't want to see that in the thesis and that I should concentrate on what I am developing.

+ +

However this restriction only helps a little bit since Qt framework alone has a huge and complex structure. For example if I use a QVector3D do I have to actually describe in detail what this is and how it's managed or can I assume that a simple ""A container for 3D vectors"" would suffice?

+ +

Frankly, we have studied various software architecture related things like patterns etc, but we were never shown how to formally describe a software system. All I did previously was short lab reports and API documentation + some evaluation statistics about performance.

+",129156,,60357,,42752.56111,42752.56111,How to describe the architecture of a software product?,,3,7,6,,,CC BY-SA 3.0, +339866,1,339871,,1/10/2017 12:34,,60,13309,"

I'm comparing two technologies in order to reach a recommendation for which one should be used by a company. Technology A's code is interpreted while technology B's code is compiled to machine code. In my comparison I state that tech B in general would have better performance since it doesn't have the additional overhead of the interpretation process. I also state that since a program could be written in many ways it is still possible a program written in tech A could outperform one written in tech B.

+ +

When I submitted this report for review, the reviewer stated that I offered no clear reason why in general the overhead of the interpretation process would be large enough that we could conclude that tech B's performance would be better.

+ +

So my question is can we ever say anything about the performance of compiled/interpreted technologies? If we can say compiled is generally faster then interpreted, how could I convince the reviewer of my point?

+",248193,,3329,,42747.64236,42749.52014,Can we make general statements about the performance of interpreted code vs compiled code?,,14,23,13,,,CC BY-SA 3.0, +339882,1,,,1/10/2017 15:18,,6,9383,"

Suppose you were asked to design a scalable chat server with the following requirements:

+ +
    +
  1. The main use case is: player A sees B online, A sends a message to B, B receives it.
  2. +
  3. The secondary use case is: player A sees B offline, A sends a message to B. When B comes back online, B receives the message. (No push notifications).
  4. +
  5. The goal is to minimize latency. Speed matters.
  6. +
  7. The messages should arrive in order. We cannot lose messages but receiving duplicates once in a while is fine.
  8. +
  9. Just text data, no binary data.
  10. +
  11. No need to store chat history. Once sent, the messages can be destroyed.
  12. +
+ +

I've been reading this article: How League Of Legends Scaled Chat To 70 Million Players and I think I missed the core architecture they used in the game. But anyway here is my ""thought process"". Can someone have a look at it?

+ +
    +
  • If the secondary use case didn't exist, I wouldn't have to store anything. I think I could use a p2p network, wherein a user regularly sends a ping message ""i'm online"" to all his friends to notify of presence.
  • +
  • But since I have to store messages to be able to deliver them later, I need my own servers that store user presence, user friendship lists, and messages.
  • +
  • The goal of minimized latency can be achieved by placing servers close to the users. This means that there will be more than one server so they need to stay in sync. Also, we need to load balance them so that one server does not store everything.
  • +
  • I've read somewhere on the Internet that a way to load balance the servers is to assign a server to each user. So for example server 1 gets assigned everything related to user A, and server 2 gets assigned everything related to user B. We could decide this by proximity.
  • +
  • When A sends something to B, there has to be a way to dispatch the message to server 2. Maybe use a service bus to communicate servers.
  • +
  • So the flow would be something like this:

    + +
      +
    1. A writes message ""Hi B!""
    2. +
    3. Server 1 receives message and B. Since it does not find B in his user base, he forwards the message to the service bus. He stores a copy of the message.
    4. +
    5. The service bus requests all servers to look for user B.
    6. +
    7. Server 2 replies that he has B in his user base.
    8. +
    9. Server 2 receives the message and stores it.
    10. +
    11. Server 2 sends message to user B.
    12. +
    13. Server 2 signals to the service bus that the message was sent. He destroys the message.
    14. +
    15. Server 1 destroys his copy of the message.
    16. +
  • +
  • If B were offline, everything up to step 5 would stay the same. The difference is that server 1 can destroy his copy of the message but server 2 cannot.

  • +
  • Now, storage... My guess is that each server should have their own persistent storage, but I've no idea what should be optimized here (speed of reads? speed of writes?). Also I'm not sure if a MySQL store or a NoSQL store would be better. Since NoSQL is optimized to be partitioned and there's no need here I guess MySQL would be enough.
  • +
  • If a server crashes we need a way to failover quickly. I suppose we could place like a ""primary"" and ""secondary"" server in each location, the primary would be connected to primary storage and the secondary to replicated data.
  • +
+ +

So the overall architecture would look like this:

+ +

+ +

I realize I am missing many many things here, did I miss something obvious? Is there any part of my thought process just plain wrong?

+",109252,,187812,,42745.79444,43647.54792,System Design: Scalable Chat Server,,2,5,4,,,CC BY-SA 3.0, +339884,1,339886,,1/10/2017 15:27,,17,4205,"

When creating time estimates for tickets should the time taken for testers (QAs) be included in a tickets estimate? We have previously always estimated without the testers time but we are talking about always including it. It makes sense for our current sprint, the last before a release, as we need to know the total time tickets will take with one week to go.

+ +

I always understood estimation was just for developer time as that tends to be the limiting resource in teams. A colleague is saying that wherever they have worked before tester time has also been included.

+ +

To be clear, this is for a process where developers are writing unit, integration and UI tests with good coverage.

+",259112,,,,,42746.40486,Should tester's time be included when estimating tickets?,,9,4,1,,,CC BY-SA 3.0, +339885,1,384205,,1/10/2017 15:29,,1,710,"

I am designing an application file format which will store chunks of user data, ranging from a few bytes to a few gigabytes - median size probably in the 10MB - 30MB range.

+ +

I have the option of storing this data in a sequence of fixed-size blocks, each block having some lightweight structure to it. This structure would provide some minor benefits (such as storing a checksum).

+ +

The alternative is to store the data in a contiguous sequence of raw bytes. I am imagining some benefits to this approach, such as being able to read large extents of data without having to parse the block structure. But I can't quite put my finger on whether this is a real benefit or not.

+ +

Are there other implications of the two approaches that I should be considering?

+",38914,,,,,43452.55694,What are the benefits of storing data contiguously?,,6,13,,,,CC BY-SA 3.0, +339899,1,339903,,1/10/2017 18:14,,0,175,"

High-level languages often expose stream-based I/O abstraction to the programmer, where blocking or non-blocking streams offer select/read/write operations. (AFAIK, message-based I/O seems is usually an even higher-level abstraction built on top of streams.)

+ +

But at the low level, the communication between the CPU and the hardware devices uses interrupts, programmed (polled) I/O, or DMA.

+ +

How can all of them map well into the same stream I/O abstraction that (as far as I can tell) completely hides the difference between these very different low-level models? Are there any cases where a high-level developer who uses stream I/O abstraction should actually be concerned about the difference between interrupt-driven and programmed I/O?

+",4485,,,,,42745.78194,How are low-level I/O models mapped into stream I/O abstraction?,,1,4,,,,CC BY-SA 3.0, +339901,1,,,1/10/2017 18:42,,0,432,"

In some MVC frameworks I've used the Model has the method and SQL in the Model so that if you call the controller, it invokes a method on the model class (say Products), and it returns the data. In ASP.NET MVC Core, from what I have seen so far, there is a separate file besides the model to do the logic. Do I use two classes? They appear (the ones I read) to be using the Repository Pattern.

+ +
public class Product
+{
+    public int ProductId { get; set; }
+    public string Name { get; set; }
+    public int Quantity { get; set; }
+    public double Price { get; set; }
+}
+
+ +

That's one class. Where do the methods to get the data go? Is the Repository Pattern necessary or it is a ""Best Practice""?

+ +
public interface IStoreRepository
+{
+    //CRUD signatures
+}
+
+public ProductRepository : IStoreRepository
+{
+
+    //CRUD implementation
+}
+....
+
+public IActionResult Products()
+{
+//A controller
+//Call method in ProductRepository.GetAll or the like
+}
+
+ +

If it matters I am attempting to use Dapper, not EF.

+",8802,,,,,42745.87708,In ASP.NET MVC Core do CRUD operations follow the Repository Pattern?,,1,0,,,,CC BY-SA 3.0, +339911,1,,,1/10/2017 19:50,,0,576,"

I know you can configure NServiceBus to automatically retry to send messages (FLR: First Level Retries) and wait before retrying again (SLR: Second Level Retries), but, using the default configuration (5 FLR + 5 SLR) it'll take about one minute before seeing a message into the <error> queue.

+ +

I understand the value of automatic retries, but isn't it better to fail early, configuring zero FLR pus zero SLR and actually coding expecting errors to occur ?

+ +

I mean, automatic retries goes against Fail-Fast paradigm, doesn't it ?

+",9506,,,,,42781.89861,NServiceBus: What are the advantages of not using retries?,,2,4,,,,CC BY-SA 3.0, +339922,1,339924,,1/10/2017 22:55,,1,395,"

I implemented some CSV reading functionality to my Java program, for which I required adding a Maven dependency.

+ +

Should changes to pom.xml be part of the commit for the CSV reading functionality, or should it be a separate commit?

+ +

I'm leaning towards the dependency being part of a logical unit of code, and thus should be in the same commit. I am however new to using Maven, so I'm not aware if there are any unwritten rules regarding changes to pom.xml.

+",259163,,,user22815,42759.68194,42759.68194,Should adding Maven dependencies be a separate git commit?,,1,1,,,,CC BY-SA 3.0, +339926,1,,,1/11/2017 0:06,,1,406,"

It is very, very common to see code like this:

+ +
for (int i = 0; i < array.Length; i++)
+{
+    DoSomething(array[i]);
+}
+
+ +

The above code makes certain assumptions about arrays (which apparently hold true most of the time, but not all the time). Wouldn't it be more explicit and more forward-compatible to use something like this instead?

+ +
for (int i = array.GetLowerBound(0); i <= array.GetUpperBound(0); i++)
+{
+    DoSomething(array[i]);
+}
+
+ +

Why is the former format so widely accepted and used?

+ +

(I know we could use foreach, but let's assume that there is some reason that would not work for us in this specific case).

+",115084,,115084,,42746.02361,42746.02847,Is using 0 and length-1 a bad practice for iterating through an array?,<.net>,1,7,,,,CC BY-SA 3.0, +339931,1,339952,,1/11/2017 3:05,,11,1419,"

This question has an excellent answer by Eric Lippert describing what the stack is used for. For year's I've known - generally speaking - what the stack is and how it's used, but parts of his answers make me wonder if this stack structure is less-used today where async programming is the norm.

+ +

From his answer:

+ +
+

the stack is part of the reification of continuation in a language without coroutines.

+
+ +

Specifically, the without coroutines portion of this has me wondering.

+ +

He explains a bit more here:

+ +
+

Coroutines are functions that can remember where they were, yield control to another coroutine for a while, and then resume where they left off later, but not necessarily immediately after the just-called coroutine yields. Think of ""yield return"" or ""await"" in C#, which must remember where they were when the next item is requested or the asynchronous operation completes. Languages with coroutines or similar language features require more advanced data structures than a stack in order to implement continuation.

+
+ +

This is excellent in regards to the stack, but leaves me with an unanswered question about what structure is used when a stack is too simple to handle these language features that require more advanced data structures?

+ +

Is the stack going away as technology progresses? What replaces it? Is it a hybrid type of thing? (e.g., does my .NET program use a stack until it hits an async call then switches over to some other structure until completed, at which point the stack is unwound back to a state where it can be sure of the next items, etc?)

+ +

That these scenarios are too advanced for a stack makes perfect sense, but what replaces the stack? When I had learned about this years ago, the stack was there because it was lightning fast and lightweight, a piece of memory allocated at application away from the heap because it supported highly efficient management for the task at hand (pun intended?). What's changed?

+",204829,,-1,,42838.53125,42748.49375,Is a stack structure used for async processes?,,2,0,7,,,CC BY-SA 3.0, +339941,1,339944,,1/11/2017 6:32,,2,264,"

I am creating an object oriented design for a cab-calling app like Uber. I have some of the classes listed. I am having trouble in designing behavior between the classes. For example, I have these two classes - +Customer and Driver

+ +

A customer can call drivers and a driver can accept a ride request from customers.

+ +

So my thought is Customer class will have a function contactNearByDrivers(). And Driver class will have a function like getRideRequest()

+ +

I have two questions here -

+ +
    +
  1. Where should the actual implementation of contacting nearby drivers and confirming driver's response should be? My thought is that there can a utility class, lets say CustomerDriverInteraction. This class will actually have functions for contacting near by drivers and recording their responses. contactNearByDrivers() function in Customer class would instantiate CustomerDriverInteraction and call its functions . Is this a good way of going about it?

  2. +
  3. Once a driver accepts a request, the customer and driver need to be linked together for a ride. Should there be another class called Ride? Ride class will have a Customer and Driver objects as their members, among other fields. Is this approach correct? What are are the better ways of designing it?

  4. +
+",259195,,,,,42746.31667,Designing interactions in Object Oriented Design,,1,0,,,,CC BY-SA 3.0, +339945,1,339969,,1/11/2017 7:32,,2,254,"

This is the title of a user story:

+ +
+

As APPLICATION I require a new SOAP element for OTHER_APPLICATION

+
+ +

Lately I read a lot of entries like this in our tool of choice. Here is a write-down of the description. Obviously I had to remove a couple details, but I hope it is still understandable.

+ +
+

A new SOAP element XXX must be defined in APPLICATION. APPLICATION will send XXX to OTHER_APPLICATION.

+ +

OTHER_APPLICATION shall return

+ +
    +
  • some information
  • +
  • more information
  • +
  • another piece of data
  • +
+ +

The received information shall be stored.

+ +

The information shall be retrieved, when

+ +
    +
  • condition A (result of a user interaction with APPLICATION)
  • +
  • condition B (result of system event on OTHER_APPLICATION)
  • +
  • condition C (user interaction with APPLICATION)
  • +
+
+ +

APPLICATION is a web app for technicians/customer service. OTHER_APPLICATION is a hardware appliance deployed remotely without direct access to the user.

+ +

To me this doesn't feel like a (good) user story. Here are my thoughts about it:

+ +
    +
  • ✅ Description is concise and tries to avoid hard implementation details like SOAP XML definitions or database table definitions.
  • +
  • ✅ The list of conditions can be used to demo and prepare test scenarios.
  • +
  • ❎ There is no information why this change to the SOAP API is needed - this should be the actual title or first sentence IMHO, something like ""As technician/customer service, I need a way to get information about the state of OTHER_APPLICATION in case of a service failure."" There can then be a note about the existing webservice used to communicate with OTHER_APPLICATION from the PO, just in case all developers have suddenly forgotten about it... 😕
  • +
  • ❎ There is a follow-up story to actually display the information to the user - it should be included in this story, because storing it in the database has no value to technicians/customer service. I understand that the asynchronous nature of the communication with OTHER_APPLICATION is the reason to split the story, but that is an implementation detail and the user story should be about a piece of functionality/a workflow which has value to the user.
  • +
+ +

Can anyone give me some pointers how to rewrite this story? Or am I wrong with my point of view? Do you see a lot of those stories in your projects and how do you handle them (as a developer/product owner/other roles)?

+",3801,,,,,42746.7,Is this a good user story?,,2,0,1,,,CC BY-SA 3.0, +339947,1,339950,,1/11/2017 8:19,,-1,2764,"

We know some of this syntax won't compile, specifically the line where I'm explicitly referring to the class name for a variable.

+ +
class Main {
+
+    static String Main.s1 =""output""; //won't compile
+    static String s2 = ""output"";
+
+    public static void main(String... s1) {
+        System.out.println(""output"");
+    }
+}
+
+ +

The Main.s1 =""output""; gives a compiler error. But we do know that the following is ok:

+ +
class Main {
+
+    static String s2 = ""output"";
+
+    public static void main(String... s1) {
+        System.out.println(Main.s2);
+    }
+}
+
+ +

Similarly, the following is also ok.

+ +
class Main {
+
+    static String s2 = ""output"";
+
+    public static void main(String... s1) {
+        Main.s2 = ""baz"";
+        System.out.println(Main.s2);
+    }
+}
+
+ +

It's just at declaration that the compilers complains. I read that there are these rules for identifiers in Java

+ +
    +
  • reserved words cannot be used. they cannot start with a digit but
  • +
  • digits can be used after the first character (e.g., name1, n2ame are +valid). they can start with a letter, an underscore (i.e., ""_"") or a +dollar sign (i.e., ""$"").
  • +
  • you cannot use other symbols or spaces +(e.g., ""%"",""^"",""&"",""#"").
  • +
+ +

So apparently it is not legal to use a dot in an identifier. I agree is it unnecessary, but there is a lot of other unnecessary legal code.

+ +

Any reason why the dot is illegal in a Java identifier?

+",12893,,7422,,42746.35972,42746.36111,Why is the dot illegal in a Java identifier?,,2,4,,,,CC BY-SA 3.0, +339951,1,387612,,1/11/2017 8:51,,1,1305,"

I'm learning Composite and Observer design patterns and I have created a FileSystem class where I define Node, Folder and File as a composite relationship. Now I want to implement Observer pattern so when there is a FileBrowser/Finder/File Explorer Observers they can get updated when I add a File to a Folder. +This is my relationship and code:

+ +

+ +

Im implementing the Subject Interface for Node (Component) and in Directory itself I implemented the Subject methods: attach, dettach and notifyObservers. +Is this a good design? What If I want to get notifications for folder and subfolders? do I need to pass each Directory to my FileBrowser observer? +If a File change we may need to duplicate the code meaning add attach, dettach and notifyObservers to my File class?.

+ +
public class SolutionMain {
+
+    public static StringBuffer g_indent = new StringBuffer();
+
+    public static void main(String[] args) {
+
+        Node root = FileSystem.getFileSystem();
+        Node one = new Directory(""dir1"");
+        Node two = new Directory(""dir2"");
+        Node thr = new Directory(""dir3"");
+        Node a = new File(""a"", 100);
+        Node b = new File(""b"", 200); 
+        Node c = new File(""c"", 200); 
+        Node d = new File(""d"", 400);
+        Node e = new File(""e"", 10);
+
+        new FileBrowser(root);
+        root.add(one);
+        root.add(two);
+        one.add(a);
+        one.add(two);
+        one.add(b);
+        two.add(c);
+        two.add(d);
+        two.add(thr);
+        thr.add(e);
+    }
+
+ +

}

+ +

FileBrowser

+ +
public class FileBrowser extends Observer {
+
+    private static int observerIDTracker = 0;
+    private int observerID;
+    private Subject subject;
+
+    public FileBrowser (Subject subject) {
+        this.subject = subject;
+        this.observerID = ++observerIDTracker;
+        System.out.println(""New observer ""  + this.observerID);
+        // Attach observer in this case FileBrowser
+        this.subject.attach(this);
+    }
+
+    public void update() {
+        Node d = (Directory) subject;
+        d.display();
+    }
+
+
+}
+
+ +

Directory

+ +
public class Directory extends Node{
+
+    private String _name;
+    private ArrayList<Node> _children = new ArrayList<Node>();
+    private ArrayList<Observer> _observers = new ArrayList<Observer>();
+
+    public Directory(String name) { _name = name; }
+
+    public void name(String name) { _name = name; }
+
+    public String name() { return _name; }
+
+    public void add(Node obj) { 
+        _children.add(obj); 
+        notifyAllObservers();
+    }
+
+
+    public void display() { System.out.println(""Directory: "" + _name + "" changed""); }
+
+    public void attach(Observer observer){
+          _observers.add(observer);     
+    }
+
+    public void detattach(Observer observer){
+          _observers.remove(observer);      
+    }
+
+    public void notifyAllObservers(){
+          for (Observer observer : _observers) {
+             observer.update();
+          }
+    } 
+}
+
+ +

The notification works, but wondering if this is a good implementation of observer.

+ +

I found an example online where there is a use of Mediator pattern and there is an implementation of this class, but not sure what are the advantages.?

+ +

+",122385,,122385,,42747.21806,43519.55903,Composite and Observer pattern implementation,,1,0,,,,CC BY-SA 3.0, +339954,1,,,1/11/2017 9:36,,1,162,"

my company currently develops software without ever having releases. This makes all our customers live on the bleeding edge, and they know that.

+ +

Now we want to do releases, and intend to regulary fork from the branch, let that branch be testing and after some weeks of testing that branch becomes the new stable.

+ +

But how to I effectively fix bugs in the branch now? I could switch to the branch, refresh stuff in Eclipse ( 10 Minutes gone ), now fix the bug in the testing branch, test and commit it. Then I manually create a patch, switch to trunk, submit the patch and continue ""normal"" development ( Another 20 minutes gone).

+ +

I am concerned that the fixed I will do to the testing branch are too time consuming.

+ +

Are there better ways to do it?

+",6619,,,,,42747.69306,"How do you handle releases effectively without wasting too much time? Work environment is Eclipse, Java, Subversion",,2,6,1,,,CC BY-SA 3.0, +339956,1,339962,,1/11/2017 9:42,,0,617,"

Assuming I'm using a class from third party library that does not implement an interface, for example:

+ +
class ThirdPartyLibClass {
+    void DoThis() { ... }
+    void DoThat() { ... }
+}
+
+ +

I want to create a very thin wrapper around it, directly reflecting class' interface and delegating to the ThirdPartyLibClass. The purpose of this is to stub ThirdPartyLibClass in my unit tests. Example:

+ +
interface IThirdPartyLibClass {
+    void DoThis();
+    void DoThat();
+}
+
+class DefaultImplementation : IThirdPartyLibClass {
+    private ThirdPartyLibClass realImplementation = new ThirdPartyLibClass ();
+
+    void DoThis() {
+        realImplementation.DoThis();
+    }
+
+    void DoThat() {
+        realImplementation.DoThat();
+    }      
+}
+
+ +

Is there a name for this pattern? Wrapper or Adapter seem to differ slightly, and I don't intend to ever swap the implementation in production code, so the interface is exactly the same as that of ThirdPartyLibClass. Also, how would I call the DefaultImplementation to make the pattern usage clear to the reader?

+ +

Thanks in advance.

+",212458,,,,,42747.31528,Design pattern name for thin wrapper for unit testing purpose,,2,2,,,,CC BY-SA 3.0, +339959,1,339995,,1/11/2017 9:59,,1,147,"

Assume I have the following code.

+ +

class D { static Integer i1 = 42; }

+ +

Is it true that D has an Integer? Or is it only for instance variable that we can have a has-a relation?

+ +

I also wonder about very similar, if a primitive variable also can make a has-a relation or is it strictly for classes? e.g.

+ +

class D { int i1 = 42; }

+ +

It seems that D has an int but it is a primitive and I'm not sure that primitives are legal for a has-a relationship.

+ +

To construct an example that might be awkward, I assume that it is not a has-a relation that the String s1 = ""s1""; has a char because it is not that kind of relation even though s and 1 are variables types of char in this case.

+ +

Similarly, I assume it's false that a class or an object ""has"" something that it only has conceptually or as internal implementation rather than the plain way of how we understand a has-a relation.

+ +

AFAIK, Java has no class Digit, it has no class Mantissa, it has no class Coefficient and no class Fraction when I browse the javadocs for Java 8, while Java has many classes for many different purposes. I suppose the reason for omitting certain classes that are fairly common concepts could be that the object would not be useful or that the class hierarchy would become unnecessary complex.

+ +

Did I understand the usage and definition of the has-a relation? Did I misunderstand something? Are there corner cases that I didn't think of?

+",12893,,,,,42746.95486,Can the has-a relation in OOP become ambiguous or difficult to know?,,1,2,0,,,CC BY-SA 3.0, +339963,1,339975,,1/11/2017 10:17,,3,231,"

I always see the discussion when there's an API version in the URL where to point the versionless one.

+ +

I mean look at these three URLs.

+ +

http://host/api/customers/1
+http://host/v1/api/customers/1
+http://host/v2/api/customers/1

+ +

In every blog post I read, they are talking about where to point the first URL. I would first of all disable that URL, what good can come out of it? And even if I have that URL I would obviously point that to the oldest supported version, and hopefully retire it with that! (Because only reason me having that URL would be that before publishing I would've forgotten there could be versions)

+ +

It's not just that I can't see the point in binding it to the newest API but it's like the worst thing to do, even stupid!

+ +

What are your thoughts on this? Why do even people write about that pardon me stupid URL?

+",80717,,,,,42746.55486,"API versioning, where to point unversioned API",,1,0,1,,,CC BY-SA 3.0, +339966,1,339974,,1/11/2017 11:45,,6,1987,"

I have a question similar to this other question

+ +

Why aren't design patterns added to the languages constructs?

+ +

Why isn't there java.util.Singleton and then we inherit it? The boilerplate code seems to be always the same.

+ +
class Singleton {
+    private static final Singleton s = new Singleton();
+
+    public static Singleton getInstance() {
+        return s;
+    }
+
+    protected Singleton() {
+    }
+}
+
+class XSingleton extends Singleton {
+
+}
+
+ +

Now if there was a Singleton built-in to Java then we wouldn't have to include the same boiler-plate over and over in projects. We could just inherit the code that makes the Singleton and just code our specific in our XSingleton that extends Singleton.

+ +

I suppose the same goes for other design patterns e.g. MVC and similar. Why aren't more design pattern built into the standard libraries?

+",12893,,-1,,42838.53125,43452.36944,Why aren't OOP design patterns included in the standard libraries?,,4,15,1,,,CC BY-SA 3.0, +339981,1,339996,,1/11/2017 14:27,,2,635,"

I find it difficult allying CQRS/ES with the ""Out of tar pit"" paper architecture.

+ +

This architecture implies 4 layers:

+ +
    +
  • State (state of the application)
  • +
  • Business Domain (purely functional)
  • +
  • I/O
  • +
  • Control ( all the dirty stuff for making the layers work together)
  • +
+ +

In my case, the state depends heavily on a database.

+ +

With the CQRS/ES in mind, I have a decision engine. +The decision engine for producing an event from a command, is the Business Domain, which has to be purely functional.

+ +

When a command asks for an Item to be created, the decision engine chooses to accept or not to create an event CreateItem only if the Item is not already existing (simplified for the example).

+ +

In theory, I should pass the state of the application as a argument to the Business Domain, so as to decide accordingly.

+ +

But in the case of a database, I cannot pass the database as a side-effect free parameter. I feel like I have to perform the query checking the existence of the item beforehand inside the Control layer, and then pass the result of the query on the database to the Business Domain, which will then return an event or not.

+ +

The consequence is that I have some business code (the one related to the query and the business logic to perform the checkings) that will be inside the Control layer.

+ +

That feels wrong for me as far as I understood the ""Out of the tar pit"" paper and also CQRS/ES.

+ +

The main issue seems for me that the state is huge: it's the whole database.

+ +

What is wrong in my reasoning ?

+",14378,,,,,42746.89236,"CQRS/ES in haskell, using ""Out of the tar pit"" paper architecture",,1,2,1,,,CC BY-SA 3.0, +339982,1,,,1/11/2017 14:47,,3,391,"

How does one even approach testing an abstract data type?

+ +

I mean, besides the normal (supposed) way of doing unit tests, where you mock the collaborators, feed them in the class to test, together with sample data, and call the public methods, verifying that the output is the expected one, how do you ensure that the internals are what you want?

+ +

Let's take an example : let's say I want to implement my own PriorityQueue, but I want to use a Heap as internal representation and even further, an array for the Heap part.

+ +

The normal way to test would be to check the public methods in different scenarios, the methods being : push, pop, peek.

+ +

This does not give any guarantees on the performance of the algorithm used internally. I could and should bother make some ""scenarios"" to check the performance, but these are useful after I have implemented my thing and they are mostly for collecting metrics.

+ +

So, how do I test the internal parts? Or better said, how do I ensure that the internal representation uses the algorithm I want.

+ +

I know that I will have several levels on internals if I go with the ""heap"" implementation (these are everywhere on the Internet for implementations of PriorityQueues) :

+ +

1. calculating the left-child, right-child, parent for a node ; I could extract these in a separate class and test that. Or I could just make them protected and test them in the PriorityQueue class, but this break encapsulation because the tests look into the state of the class to test

+ +

2. shiftUp and shiftDown ; same issues as for the one in point 1., except now I can make them receive the object that represents the internal state, or use the private field direct, in case of an Object-oriented language. So, protected or in another entity?

+ +

3. the internal representation is a array, so I could have a toArray() method that is public and test that. Testing the output of this can even ""save"" me from testing the previous two points, but again, the internal state is exposed to the outside world. Do I really want to do that?

+ +

The questions here are :

+ +
    +
  • do you separate the code into more pieces? when do you stop with the granularity?
  • +
  • how much do I sacrifice from KISS in order to have some unit tests? I want the things simple and could have most of the things in the same class and even methods to use the internal fields directly (in case of OO)
  • +
  • are there other ""suggestions"", other ways to ensure both the functionality of the data structure and the algorithm used ?
  • +
+",2657,,2657,,42746.85764,43148.42222,How to test abstract data types that I implement myself?,,2,9,,,,CC BY-SA 3.0, +339984,1,,,1/11/2017 14:54,,2,1300,"

I want to implement a backup solution in Python where a Backup-Server initiates backups on a number of virtual and physical servers (one server = one backup task). Disregarding the details of the actual backups tasks I am concerned with the scheduling/multiprocessing part for now.

+ +

The constraints I have are:

+ +
    +
  1. Only backup two servers at once (e.g. have at maximum two backup threads running at once)
  2. +
  3. Don't backup two servers on the same physical machine (oftentimes multiple virtual servers share a common hardware machine) at once.
  4. +
+ +

Since I am not too experienced in multiprocessing in Python I am wondering what an optimal Python solution would be. The following came to my mind:

+ +
    +
  • Have a thread for each backup-job (e.g. for each server) and use a threading.BoundedSemaphore to ensure only two are running at once. Use more semaphores/conditions to ensure that multiple threads are not backupping two servers on the same physical machine.
  • +
  • Have exactly two threads that are running all the time and retrieve their tasks from a queue. Simultaneously the queue would have to make sure no tasks on the same physical machine are handed out at once (e.g. skipping/reordering tasks at times) . I would probably do this by subclassing Queue.PriorityQueue to add the additional constraints.
  • +
+ +

I am leaning towards the second option but I am not sure whether or not a queue is the right data structure to hand out the tasks to multiple working threads. I don't need to add tasks to the queue at runtime (which a queue allows) and I need a bit of logic to hand out the tasks rather than just process them in a linear order. Is there is a better (standard) data structure for this?

+ +

I would be thankful to hear some thoughts from more experienced programmers.

+",257548,,,,,43198.05833,Scheduling of parallel I/O-bound tasks (Backup solution),,2,0,,,,CC BY-SA 3.0, +339985,1,339988,,1/11/2017 15:20,,0,367,"

I'm currently working on an online app that includes many fairly new (for the company) features and they fall outside my expertise area by far, which means the edges may not be really well rounded yet. My boss wants it to be released in a somehow continuous manner, publishing features one at a time. All of these features are new, the interface might not be definitive, and the style is definitely not, so everything is somehow left hanging. That's why we are only telling about this software to a bunch of users (although it's publicly available to everyone).

+ +

This continuous delivery method allowed him to decide to release quickly and without a deep testing or understanding on what other issues could be raised by imaginative users.

+ +

I decided to be honest and tell the users we're on a Beta phase, ensuring people understand what that means and allowing them to contact us through email in case they find any issue.

+ +

However, my boss told me to remove the Beta warning and test while it's online already, even removing the access to the contact email.

+ +

I guess this is to hide premature hurry (well... according to his words, to hide hesitation), but isn't it a backfire, if a blocking problem lets people out for too long, or the interface changes from one day to another? Wouldn't this make the software look unstable and untrustable?

+ +

How to convince my boss it would be better to show users the software they use is far from finished and may present problems?

+",23015,,23015,,42746.66944,42746.66944,Is it a bad idea to use a Beta warning on an onworking web application?,,1,8,,,,CC BY-SA 3.0, +339987,1,339998,,1/11/2017 15:47,,3,239,"

I'm using swagger to prototype a RESTful API and I got to a situation where one property is part of a resource but not always should be filled.

+ +

Let's say my resource is stores.

+ +

Basic endpoints would be:

+ +

GET: /stores - returns a list of store

+ +

GET: /stores/{storeId} - returns a single store

+ +

Say store is defined along the lines of:

+ +
Store {
+  id: integer,
+  name: string,
+  pictures: array[]
+}
+
+ +

But when returning the list of stores, also returning every store's list of pictures is overkill. Pictures should be only returned for a single store request.

+ +

I'm confused on how to model that situation. On swagger, both methods responses are associated with a store object.

+ +

Should I split store into two objects and definitions so that each method return a different type even though only one property is different?

+ +

Should I use a query string parameter so that the consumer can choose whether or not pictures should be filled? Something along the lines of:

+ +

GET: /stores?fillPictures=false or maybe

+ +

GET: /stores?detailed=false

+ +

When choosing the second option, the definition of a single store would be the same no matter which endpoint is being accessed. That would mean an empty property would be transmitted to the consumer for every non detailed (with pictures) request. Should that be a concern?

+ +

Can someone shed some light on how to handle this scenario in a RESTful way? Maybe you know some API with a similar operation?

+ +

Thanks in advance.

+",201289,,,,,42746.75,Is it legal for a RESTful API to provide different structures for a given resource? How should that be modeled?,,2,1,,,,CC BY-SA 3.0, +339999,1,,,1/11/2017 18:28,,7,5169,"

Having read others posts, it already seems to me that the concept ""repository"" and ""database"" do not go well hand in hand, as they are meant to be completely separate concepts.... but I'll ask the question anyway.

+ +

I currently have to import different types of data (one type of data may consist of several thousand records) from several databases that happen to be all different (oracle, sybase, sql server), process that data (depending on what kind of data set is) and then write that processed data into a different database. The language I am using now is C#.

+ +

I have been told that using the repository pattern in my situation might come in handy, but I am unsure how to engineer it, and more importantly, where to place all the different parametrized SQL queries in this context. Having so many different products and different database sources only contributes to increase my confusion.

+ +

For the reason mentioned in my first paragraph, I have the feeling my SQL queries should be part of my data access layer, while my repositories actually live in layers above. Am I getting all this wrong? Is the repository pattern actually a terrible way of solving my problem?

+",257581,,257581,,42747.38958,42747.38958,Repository Pattern and database queries,,1,0,2,,,CC BY-SA 3.0, +340000,1,362244,,1/11/2017 19:25,,2,1118,"

I'm working on a Entity Component System for learning purpose. I made major changes with my design so I was wondering if I could pick a better design for my event system.

+ +
+

The purpose of the event system (if you don't know ECS), is to allow + communication between process. For example, in a video game (that's + always more easy to understand with video games) your collision system + may want to tell to other system when there is a collision and with + which entities.

+
+ +
+ +

With this little context, my current design for the event system was more :

+ +
    +
  • When an event is emit -> save it into event manager (shared ptr)
  • +
  • Then call every subscribers and send them a shared ptr of the data (thanks to Bart van Ingen Schenau)
  • +
  • Inside the callback, save the event into a queue to consume it on the +next update call
  • +
+ +
+ +

But events are read only, so I wonder if something like that won't be better:

+ +
    +
  • When an event is emit -> call every subscribers and send them a copy of the event
  • +
  • Inside the callback, move the event into a queue to consume it on the +next update call
  • +
+ +
+ +

The systems will be multithreading (which mean at least than events can be emit on any thread).

+ +

One of the most important part of my new design, is that system are ""split"" in blocs (you can see that like several update function). But events are for the whole system. So copies will mean that I'll need some garbage collector function for each system to free events which where consume. Well in fact I still need that with shared ptr.

+ +

There is probably a lot of other designs. So if you think of something good for an ECS generic (not only for video games), where events subscriber tend to regroup, give me your thought please.

+",224869,,855,,42747.77986,43081.74514,How chose a design for an event system,,2,0,,,,CC BY-SA 3.0, +340002,1,,,1/11/2017 19:35,,4,196,"

I am working on an application that is essentially a calculator, not the handheld calculator but more of a spreadsheet. It will take many inputs across different views and show the outputs all over the place.

+ +

The application will be running many calculations. These calculations depend mostly on user given inputs but it can also depend on the results of other calculations.

+ +

Another requirement is that the calculations offered to the user can be edited in a CMS style. This means that when the application starts, it will load the calculations and its necessary inputs from a file called calculations content.

+ +

The outputs should always be up to date, meaning that if a user updates a value, then the calculations that depend on this input should run again, sending the output to its dependent calculations and so on.

+ +

So far, I've conceived a directed graph of calculations in which parent-children relationship represent the input and its dependent calculations. This way, the process that executes a calculation will be able to check if it has any dependants and run them.

+ +

The problem with this pattern is that it can lead to duplicated calculations. If an input A has two dependants B and C and a calculation D depends on both B and C, then when A is updated, D and its dependants will run two times.

+ +

It might be worth knowing that the application is built with a Redux architecture on Javascript.

+",157681,,29020,,42792.3375,42792.3375,How do I manage dependant values without running the same computation twice?,,3,4,2,,,CC BY-SA 3.0, +340003,1,,,1/11/2017 19:59,,0,1243,"

In my web application I need to get data from Wikidata, for example to show item's details. I thought about using ajax for this, but was't sure from where I should to call it, so I asked this question.

+ +

But after some thinking, why should I use ajax? I can make http request from my server side code (C#). I can put it into controller, get information from Wikidata, fill out ViewModel and then call View(ViewModel).

+ +

What can be some downsides of this? Will it affect performance?

+ +

So, question is: ajax or C#?

+ +

UPDATE: Before development starting I was thinking if I should develop client part (HTML, JavaScript) and Web API separately. In this case I would use Ajax of course. But I decided at this stage I'll do one MVC application. Because it has server side code anyway (even in View), so I think that http request from C# looks more naturally than JavaScript request.

+",57952,,-1,,42878.52778,42747.56458,making http request by ajax or c#,<.net-core>,1,9,,,,CC BY-SA 3.0, +340006,1,342048,,1/11/2017 20:42,,1,111,"

I'm currently part of a team developing a solution intended for small business. One of it's components is kind of a ""dynamic business rules engine"" (note the quotes) and because of it's nature we would like it to be highly configurable.

+ +

My idea was to implement/use an external DSL to write the rules with a simple configuration utility to create and edit them. We could also keep a set of ""common"" rules and distribute them with the application. An external DSL + the ""editor"" will allow us to fix or modify rules as needed, and our clients need that agility in their process.

+ +

My team members think that the best approach here is to provide the rules as plugins to the application and redeploy or upgrade those plugins, they argue that, in that way, we'll have better quality control given that testing the plugins will be easier than testing a DSL script. But in my opinion, maintaining a group of plugins, most of them in a ""per client"" basis will eventually become a living nightmare.

+ +

Bottomline my question is: Which solution do you think will work better with a small team of ""jack of all trades"" freelancers who have to develop/deploy/maintain/upgrade around four or five clients?

+",126657,,,,,42777.46389,Plugin versus External DSL,,2,4,,,,CC BY-SA 3.0, +340012,1,340472,,1/11/2017 22:11,,0,73,"

I am a student worker currently close to the finish line of a project I have been working on. Right now I'm implementing the .net Auto generated Api help pages. The nuget package WebApiTestClient does most of this for me. In the test calls however my manager wants certain fields to have options to choose from. So I want a dropdown with populated values vs the default text input.

+ +

This package creates the test dialog using knockout, and I'm okay with creating some knockout templates to get the knockout to work, or completely writing my own java script to run the test dialog.

+ +

I have attempted several ways to solve the problem. One using attributes and codes to reach out to service and extend the Model used to generate the pages. This worked but had caching, and dependency issues, plus the added headache of mapping an enum to data-calls, which I don't like the idea of anyway.

+ +

The option I'm exploring is to modify the HelpPageSampleGenerator the nuget package gives you to handle this. There is a config that runs on start that adds things like custom samples for Type to the HelpPageSampleGenerator. Hopefully I could create the custom inputs here.

+ +

Another options I have thought of but haven't explored yet is using attributes to map to custom input templates and calling APIs to get the options from there and creating the input. I would have to do more code in a view than I would like to but if I call the API itself I won't have to worry about dependency issues, and this seems very flexible for the UI.

+ +

If anyone has done something similar or has any knowledge of the framework that would make this easier that would be very appreciated. I may be making this harder than it needs to be this is new territory for me. Thank you to anyone who took the time to read this.

+ +

EDIT: Purged and cleaned the question since it was too long and there were no responses to the old questions that could make ongoing conversation confusing to new viewers.

+",254660,,254660,,42747.71528,42753.9875,Auto-generated Api help pages test dialog using parameter options from a data service,,1,0,,,,CC BY-SA 3.0, +340018,1,,,1/12/2017 0:08,,2,224,"

Question

+

When training and using an OCR algorithm for handwriting recognition, is it helpful to indicate the author of the handwriting?

+

Use Case

+

Have a warehouse full of documents that need to be transcribed into digital format.

+

We'd like to feed the documents into an OCR algorithm first. If the OCR algorithm reports a low confidence score, then we will pass the documents off to a real person for transcription - and of course use the results for additional algorithm training - so that future documents, especially by the same author, will have a higher chance of being transcribed at a satisfactory confidence level.

+

For each document, it is feasible but non-trivial, to determine the author of the document and give that to the OCR algorithm as well. We anticipate there should be on the order of 100 authors for 8 million documents.

+

Intuitively, I assumed that knowing the author would increase the effectiveness of the algorithm, but on further reflection, I am unsure if this is the case. When I read handwriting I don't usually think about the author, but instead intuit how to decipher the handwriting based on the style.

+

Note: By effectiveness I mean primarily higher accuracy, and secondarily lower resource usage.

+",71905,,-1,,43998.41736,42747.00903,Is handwriting OCR more accurate if the author of the handwriting is indicated?,,1,6,,,,CC BY-SA 3.0, +340019,1,,,1/12/2017 0:11,,8,1827,"

I'm a developer that works alone, and recently, searching for what kinds of documents I still could benefit, even though I don't require anything formal, I've read that one document that still is highly recomended to write is the project vision document.

+ +

I found it described as a short document (a page and a half or two) containing what problem has to be solved, how it will be solved and how the end user will benefit from the solution.

+ +

Reading more, it seems that this document is one extremely important input to discover the requirements and finaly the user stories.

+ +

In that case, considering that this is, as I understood, the first step when starting to build a software, what is the importance of this document, and how is it used so that it helps the development?

+ +

I still don't fully get how such a simple and short document can be that important and help that much.

+ +

How is this document important and how it is used to help the development process? In particular, what is the role of this document in the requirements gathering process?

+",82383,,,,,42755.65556,What is the importance of the vision document and how it helps development?,,5,2,3,,,CC BY-SA 3.0, +340024,1,340132,,1/12/2017 1:22,,6,216,"

JavaScript maps (the data type, not the array method) seem set up to accept data (key/value pairs) but not necessarily methods. At least they're not advertised that way. However, we can put methods onto a map instance. Intriguingly, the keyword this works in such methods, and returns the map itself. For example:

+ +
const m = new Map();
+m.set('key1', 'value1');
+m.get('key1'); // returns 'value1', i.e. standard map usage
+m.methodA = function(x) {console.log(x + ' to you too');};
+m.methodA('hello'); // shows 'hello to you too'
+m.methodB = function() {console.log(this.get('key1'));};
+m.methodB(); // shows 'value1'
+
+ +

Is this a proper use of maps and/or methods within maps and/or this within methods within maps? Or am I corrupting something somehow or breaking some rules by doing this? It seems fairly straight forward and reasonable, making me think it should be OK, but I've never seen or heard anything about this before which makes me nervous.

+ +

I can't create a map with a constructor the way I can create an object with a constructor. However, I can create a map factory to produce maps of a given ""type"". For example, I can use a factory to create maps of the ""car type"". I can thus also attach methods to each map of this ""type"" by including them in the factory:

+ +
const createCarMap = function(features) {
+  const carMap = new Map(features);
+  carMap.set('# tires', 'four (assumed)');
+  carMap.speakersAreFeatured = function() {
+    return this.has('speakers');
+  };
+  return carMap;
+};
+const yourCar = createCarMap([
+  ['# cylinders', 'twelve'],
+  ['speakers', 'awesome']
+]);
+const myCar = createCarMap([
+  ['exterior', 'pearly white']
+]);
+yourCar.speakersAreFeatured(); // returns true
+myCar.speakersAreFeatured();   // returns false
+
+ +

However, such a method will be attached repeatedly for every map produced. This is in contrast to how methods can be added to an object's prototype, allowing method re-usability. So, can methods be attached to a map in a way that allows method re-usability?

+ +

My more general question is: Should we be using methods on maps. If so, how? Should we think of them and use them essentially the same way we do with objects, or are there ""new rules"" for using them with maps? (I suppose one could ask similar questions about other new-ish data types too, e.g. sets, weakMaps, etc., but I'm limiting myself here to maps.)

+",232250,,232250,,42747.11944,42748.65625,"Are we ""allowed"" to use methods on JavaScript maps (the data type) and if so are there any new rules?",,3,7,,,,CC BY-SA 3.0, +340030,1,,,1/11/2017 1:08,,0,117,"

Do prototypical languages provide a remedy from Liskov's problem?

+ +

So the way I see this is: a subclass is very tightly coupled with it's superclass and this creates subtle side effects when polymorphic types are used.

+ +

So can this be remedied by prototypical languages in the sense that classes inherits from a copy of the superclass instead of all pointing to the same one superclass?

+ +

To give an example, suppose I have this class:

+ +
class Rectangle
+    constructor(width, length)
+         this.width = width
+         this.height = height
+
+    function double_width()
+         this.width = this.width * 2
+
+ +

And now the program uses the rectangles to do business or whatever it needs to do:

+ +
function Main
+    Rectangles[] all_rectangles_in_program = new Rectangles[]
+    all_rectangles_in_program.append(new Rectanlge(5, 10))
+    do_some_buisness_logic(all_rectangles_in_program)
+
+ +

And then someone in the future needs to add a new feature:

+ +
class Square inherits Rectangle
+    constructor(side_length)
+       super(side_length, side_length)
+
+    function double_width()
+        this.width = this.width * 2
+        this.length = this.length * 2
+
+
+// MAIN HAS BEEN CHANGED BY OUR COLLEAGUE IN THE FUTURE
+
+function Main
+    Rectangles[] all_rectangles_in_program = new Rectangles[]
+    all_rectangles_in_program.append(new Rectangle(5, 10))
+
+    Square s = new Square(10)                         <--- new code
+    all_rectangles_in_program.append(s)               <--- new code
+
+    do_some_business_logic(all_rectangles_in_program)
+
+ +

The problem is that square overrides the function double_width but introduces a side effect that rectangle didn't have namely that when you change the width now the length changes too. This is a problem because the original program do_some_business_logic may have relied somewhere on the fact that the length of a rectangle doesn't change when you change it's width. Remember that the array all_rectangles_in_program is polymorphic type and the original code do_some_business_logic doesn't know if these are rectangles or squares in it. The run-time system decides which version of double_width to call.

+ +

So my question is, does prototypical inheritance alleviate this problem because you don't inherit one superclass everywhere in the program but instead inherit your own copy of the superclass?

+ +

If not, then what's the advantage of prototypical languages over class based languages?

+",285402,Jenia Ivanov,31260,,42747.37639,42747.40486,Liskov's substitution principle and prototypical languages,,2,4,,,,CC BY-SA 3.0, +340034,1,,,1/12/2017 5:36,,5,388,"

I'm working on an application in the browser and I would like to make sure that my code does not conflict with code from other libraries or with possible calls added by browser manufacturers in the future.

+ +

In my environment I would avoid collisions by using a unique application name and placing all my classes and code in my domain like so:

+ +
package {
+    import com.mydomain.controls.*; // my domain
+
+    public class MyApplication {
+        var button:Button = new com.mydomain.Button();
+        button.text = ""Hello World"";
+        addChild(button);
+    }
+
+}
+
+ +

Or when I'm using a declarative language I would define my namespace URI and prefix like so:

+ +
<s:Application xmlns:s=""http://www.default.com"" xmlns:abc=""www.mydomain.com"">
+   <abc:Button text=""Hello world"" />
+</s:Application>
+
+ +

But how would you do that in the browser when you're pulling in code from different libraries? Where do I define global objects and how do I make them unique? Do I do something like:

+ +
window[""com.mydomain.controls.MyApplication""];
+
+ +

I've seen libraries do something like:

+ +
GalleryWidget = function() { 
+      var version = 1.2.3;
+      var getGallery = function() { //do stuff };
+}
+
+ +

But what if there is another GalleryWidget some where? +Sorry if this is a beginner question.

+ +

Addendum
+Maybe if I paste some code it will clear things up. Is there any problems with the following:

+ +
window.VideoPlayer = {}; // write my class, etc
+window.myVideoPlayer = new VideoPlayer();
+window.submitForm = function() {}; //etc
+window.parse = function() {}
+JSON.parseXML = function(zml) {};
+document.write = function() {};
+
+ +

UPDATE 2:
+I found a web page that is using Yahoo Global Objects:

+ +
var $D  =  YAHOO.util.Dom;
+var $E  =  YAHOO.util.Event;
+var $A  =  YAHOO.util.Anim;
+var $M  =  YAHOO.util.Motion;
+var $EA =  YAHOO.util.Easing;
+var $DD =  YAHOO.util.DD;
+var $C  =  YAHOO.util.Connect;
+var $   =  $D.get;
+
+YAHOO.namespace (""Smb.Asteroids.Logger"");
+YAHOO.Smb.Asteroids.Logger = {
+    Log : function(e) {
+        if (typeof console !== 'undefined') {
+            console.log(e);
+        }
+    }
+}
+var $LOG = YAHOO.Smb.Asteroids.Logger.Log;
+    YAHOO.namespace('Smb.Asteroids');
+var YSA = YAHOO.Smb.Asteroids;
+
+YSA.Nav = {
+    isNavNorth : false,
+
+    init : function() {
+        // For the first visit, subscribe to the layout(template) change event
+        // When user changes template from the ribbon, we need to re-init this JS, based on the new templates settings. 
+        if (YSA.Nav.isFirstVisit) {
+            YSA.Nav.isFirstVisit = false;
+            if (YSA.UiMgr) {
+                YSA.UiMgr.Layout.onChange.eventObj.subscribe(
+                    function() { YSA.Nav.init() });
+            }
+        } else {
+            YSA.Nav.clearSubNavStyles();
+        }
+
+        YSA.Nav.initNavSettings();
+        var navDiv = $('navigation');
+        if (! $D.hasClass(navDiv, 'sub_dynamic')) {
+            return;
+        }
+        YSA.Nav.initNavSettings();
+        var triggers = $D.getElementsByClassName('trigger', '', navDiv);
+        $E.on(triggers, 'mouseover', this.mouseOverTrigger);
+        $E.on(triggers, 'mouseout', this.mouseOutTrigger);
+        var toggles = $D.getElementsByClassName('toggle', 'a', navDiv);
+        $E.on(toggles, 'click', this.toggleClicked);
+        var triggers = $D.getElementsByClassName('mainNav', '', navDiv);
+        $E.on(triggers, 'mouseover', this.mouseOverMainNav);
+    }
+};
+
+$E.on(window, 'load', YSA.Nav.init, YSA.Nav, true); 
+
+ +

I've truncated a lot of the code. It is based on Yahoo YUI framework here but it looks like the page is down. Way back machine should show it.

+ +

Anyway, it answers some questions I had. But I noticed that this is based on framework 2. They have framework 3 that seems to get rid of namespaces. So that leaves more questions.

+",48061,,48061,,42776.90347,42776.90347,How to create a safe namespace for my application in JavaScript,,2,10,,,,CC BY-SA 3.0, +340036,1,340049,,1/12/2017 6:08,,1,171,"

http://www.cs.unc.edu/~stotts/GOF/hires/pat3cfso.htm

+ +

CreateMaze is the function which instantiates the objects. IMO, according to factory pattern we are not supposed to overload or modify or re-write the function which instantiates objects.

+ +

But in the example, the CreateMaze function returns a Maze*. So, now if we have to write an EnchangedMaze class, will we have to re-write CreateMaze function to return a pointer of EnchantedMaze?

+ +

From: http://www.cs.unc.edu/~stotts/GOF/hires/chap3fso.htm

+ +
+

Changing the layout means changing this member function, either by overriding it—which means reimplementing the whole thing—or by changing parts of it—which is error-prone and doesn't promote reuse.

+
+ +

Isn't this what factory pattern wants to avoid?

+ +

What point am I missing?

+",23355,,23355,,42748.20903,42748.20903,Aim of Factory pattern is to stop us from over-riding or re-writing the functions which instantiate?,,2,0,,,,CC BY-SA 3.0, +340047,1,340056,,1/12/2017 9:40,,35,12809,"

I recently started working with GitFlow model as implemented by bitbucket. And there is one thing that is not completely clear to me.

+ +

We try to regularly address our technical debt by backlogging, planning, and implementing the refactoring tasks. Such refactoring branches end with pull-requests that are merged in develop. My question is where do the refactoring branches belong in GitFlow?

+ +
    +
  • Using feature prefix seems the most logical, however it does not feel entirely right, because refactoring does not add any new functionality.
  • +
  • However using bugfix prefix seems not right as well as there is no actual bug refactoring fixes.
  • +
  • Creating a custom prefix on the other hand seems like complicating if not over-engineering the things.
  • +
+ +

Did you have such situation? Which practice do you use to address this? Please explain why.

+",259309,,259309,,42747.40625,42747.42847,Where does refactoring belong in GitFlow branch naming model?,,1,6,8,,,CC BY-SA 3.0, +340053,1,340054,,1/12/2017 9:57,,0,96,"

It often makes sense to ""fetch only what you need"" for example if I should display only 10 rows of data then I should not fetch the entire data set because it would waste resources for a large data set.

+ +

A practical example is the SQL Limit keyword. select * from users order by added limit 10

+ +

I wonder if we can connect that to a software principle. There is the keep-it-simple principle but maybe it is a case of ""rule of the least power"" but for data instead of programs?

+",12893,,,,,42747.42569,Which principle is it to fetch only needed data?,,1,3,,,,CC BY-SA 3.0, +340061,1,340064,,1/12/2017 11:46,,5,2030,"

I have a software for small delivery stores (pizzeria, Japanese, etc) here in Brazil running on a few dozen customers and have the possibility to expand it to many more customers after evolving it to a full POS (Point of sale) software.

+ +

However, there is a critical feature we've been struggling to develop: an offline mode. Internet in Brazil is not reliable for commercial use so its usual in some regions to have problems from a few disconnections a week up to a disconnections for 0-90 minutes every other day.

+ +

The software is made in C#/Windows Forms and the database is MySQL hosted on Azure. The customers connect to the same database and every table has a CompanyId. It is strategic to have a database online rather than a MySQL server in the customers local network.

+ +

I've done a good amount of research but didn't find a solution or a direction so far.

+ +

We took one approach: consider the installation would also install a MySQL and download (if any) the company data, the software would connect to the local database and every a few seconds would make the changes from the local to the cloud database. But it proved challenging and had impact in the code design (can't do from the scratch due to current customers).

+ +

Right now we would consider make this mode from the scratch and the migrate previous.

+",259328,,31260,,42747.49375,43648.36875,What is a good software architecture for POS with offline mode?,,1,0,1,,,CC BY-SA 3.0, +340062,1,,,1/12/2017 12:49,,0,2148,"

I'm trying to decide who is right in the following argument:

+ +

How to effectively process data in MSSQL? Which one is faster?

+ +

Opinion 1: Data requests (this is especially true for complex ones) should be handled by SQL queries and the result be returned to the requesting C# code.

+ +

Opinion 2: First, raw data should be requested by simplistic SQL queries (SELECT * involving all concerned (joined) tables) and selection by conditions should be handled by C# LINQ or set operations involving lists, maps, arrays and whatnot, combined with iterations and local variables. This way, the heavy lifting happens in memory, making it faster than the disk-intensive operations of SQL server.

+ +

Opinion 3: A smart mix of the above.

+ +

(Personally I think Opinion 2 just explains how Opinion 1 works under the hood, of course we have to replace the terms C#, sets, loops etc. with low level constructs that SQL uses.)

+",258861,,,,,42747.57778,Querying data with SQL vs. C#,,3,3,,42747.56181,,CC BY-SA 3.0, +340071,1,,,1/12/2017 13:56,,6,2158,"

We have a mobile app that accepts input into some fields, formalises them as a JSON document and then sends it to the back-end for processing.

+ +

We want to agree on a schema for this document that can be validated and referenced indirectly in both the back-end and the front-end.

+ +

One of the motivations is that the input can change depending on the language, so in a different language, although the +structure will be the same, the JSON entries will have different values and so we cannot have those hardcoded at either end (but especially at the back-end).

+ +

I'm primarily concerned with how such a schema can be represented and how it can be validated at the back-end. +Shall I define an interface for it? If so, is there something standardised already that accomplishes this painlessly?

+ +

The target back-end language is Python on Django, we're happy to go with any package as long as it does the job.

+",92272,,,,,43019.35347,How to define and share a JSON schema between the front-end and back-end of an application?,,2,3,2,,,CC BY-SA 3.0, +340076,1,340173,,1/12/2017 16:24,,2,442,"

We have a system that is largely configurable, that can be organized in different architectures, and I'm struggling to write its requirement specification. I'll give an example

+ +
    +
  • Module 1 does A
  • +
  • Module 2 does B
  • +
  • Module 3 does C
  • +
+ +

The system can be configured to have any combination or all of the three modules. How do I write the requirement?

+ +

1) The system shall be configurable by Engineering to do A.

+ +

2) The system shall be configurable by Engineering to do B.

+ +

3) The system shall be configurable by Engineering to do C.

+ +

4) The system shall be able to do A.

+ +

5) The system shall be able to do B.

+ +

6) The system shall be able to do C.

+ +

I like the first 3. The next 3 however will not necessarily always be true (if system is configured with modules A and B, requirement 6 will be false).

+ +

Shall I complete requirements 4 to 6 with ""if the system is configured to do so""? Are 4 to 6 necesssary at all?

+ +

Thanks for your help!

+ +

EDIT: So basically the question is, how do you write a requirement when the system should be able to something only under a certain configuration?

+",259359,,259359,,42748.32014,42749.55,How to write System requirements - not all architectures fulfill all requirements,,2,5,,,,CC BY-SA 3.0, +340080,1,,,1/12/2017 16:59,,17,911,"

The deadline for a release is tomorrow, your collegue finally finished his task that's crucial for this release, project manager is standing over your shoulder and presses you to finally make a build and you notice a flaw in your collegue's code during review. Not a critical one, but something you wouldn't let go if it weren't for the release tomorrow. And to make things worse, you have your own work you need to finish ASAP. So, what do you do? Do you raise your objection despite the pressure or do you just let this one slip?

+ +

One way I found is to temporarily merge this commit on a different branch and leave review for later. It works if the issue is just a cosmetic one and if it's the only one still waiting for code review. However, is there a more efficient way to handle this? For example, would you recommend commiting one person to only code review and tests?

+",145259,,145259,,42748.43681,42751.50278,How to do efficient code review during release fever?,,5,6,3,,,CC BY-SA 3.0, +340086,1,,,1/12/2017 18:45,,10,2105,"

I have a class hierarchy for which I would like to separate the interface from the implementation. My solution is to have two hierarchies: a handle class hierarchy for the interface and a non-public class hierarchy for the implementation. The base handle class has a pointer-to-implementation which the derived handle classes cast to a pointer of the derived type (see function getPimpl()).

+ +

Here's a sketch of my solution for a base class with two derived classes. Is there a better solution?

+ +

File ""Base.h"":

+ +
#include <memory>
+
+class Base {
+protected:
+    class Impl;
+    std::shared_ptr<Impl> pImpl;
+    Base(Impl* pImpl) : pImpl{pImpl} {};
+    ...
+};
+
+class Derived_1 final : public Base {
+protected:
+    class Impl;
+    inline Derived_1* getPimpl() const noexcept {
+        return reinterpret_cast<Impl*>(pImpl.get());
+    }
+public:
+    Derived_1(...);
+    void func_1(...) const;
+    ...
+};
+
+class Derived_2 final : public Base {
+protected:
+    class Impl;
+    inline Derived_2* getPimpl() const noexcept {
+        return reinterpret_cast<Impl*>(pImpl.get());
+    }
+public:
+    Derived_2(...);
+    void func_2(...) const;
+    ...
+};
+
+ +

File ""Base.cpp"":

+ +
class Base::Impl {
+public:
+    Impl(...) {...}
+    ...
+};
+
+class Derived_1::Impl final : public Base::Impl {
+public:
+    Impl(...) : Base::Impl(...) {...}
+    void func_1(...) {...}
+    ...
+};
+
+class Derived_2::Impl final : public Base::Impl {
+public:
+    Impl(...) : Base::Impl(...) {...}
+    void func_2(...) {...}
+    ...
+};
+
+Derived_1::Derived_1(...) : Base(new Derived_1::Impl(...)) {...}
+Derived_1::func_1(...) const { getPimpl()->func_1(...); }
+
+Derived_2::Derived_2(...) : Base(new Derived_2::Impl(...)) {...}
+Derived_2::func_2(...) const { getPimpl()->func_2(...); }
+
+",1142,,209774,,42747.97986,42888.68264,"Is this a good approach for a ""pImpl""-based class hierarchy in C++?",,2,10,0,,,CC BY-SA 3.0, +340090,1,340093,,1/12/2017 18:57,,0,114,"

So I working on a project and want to profile my code. I have been using KCachegrind to get general idea of what functions cost the most. But now I want to get the exact time spent on those particular functions. So I decided to measure them manually using clock_gettime using object oriented approach. ie. wrap clock_gettime function inside a class.

+ +

Lets say I want to create a class that handles the measurement of time

+ +
class measure_time{
+
+    inline int start(){...}; // return ts_start 
+    inline int end(){...};   // return ts_end
+
+};
+
+ +

Then I use this class to measure time across the projects. When I am about to measure I have to create instance of measure_time per each classes of my project. ie: Lets say I have class A, class B, class C etc in my project.

+ +
// A.h
+class A{
+   void f();
+   void f_1();
+   measure_time mt;
+}
+// A.cpp
+void A::f(){
+    // does some work 
+}
+
+void A::f_1(){
+
+    // measure time start
+    s = mt.start();
+    f()
+    // measure time end
+    e = mt.end();
+    //record time
+    time = s-e;
+
+}
+
+// B.h
+class B{
+   void g();
+   void g_1();
+   measure_time mt;
+}
+// B.cpp
+void B::g(){
+    // does some work
+}
+
+void B::g_1(){
+
+     // measure time start
+    s = mt.start();
+    g()
+    // measure time end
+    e = mt.end();
+    //record time
+    time = s-e;
+
+}
+
+// C.h
+class C{
+   void h();
+   void h_1();  
+   measure_time mt;
+}
+//C.cpp
+void C::h(){
+    // does some work
+}
+
+void C::h_1(){
+
+     // measure time start
+    s = mt.start();
+    h()
+    // measure time end
+    e = mt.end();
+    //record time
+    time = s-e;
+
+}
+
+ +

By this approach I have to define 'measure_time' class in each of the class. So what I wanted was to define the measure_time class only once and use it across the class A, B and C.

+",101222,,101222,,42747.79931,42747.86667,How to profile my code using my own class that measure time?,,1,13,,,,CC BY-SA 3.0, +340094,1,340095,,1/12/2017 21:15,,4,313,"

I had an interesting discussion with a coworker that revolved around how people interpret the use of properties and methods on an interface. For example, let's say we have a blog with posts in various statuses: Working and Published.

+ +

When in ""Working"" the author is still making changes and shouldn't be visible to readers. When ""Published"", well... it's published and end users can read it.

+ +

Let's also say we have an interface for the Blog, which I defined as:

+ +
public interface IBlog
+{
+    IEnumerable<Post> Posts { get; }
+    IEnumerable<Post> PublishedPosts { get; }
+}
+
+ +

My coworker was worried that users of this interface might interpret these two properties as separate collections of objects. I figured since you have a ""Posts"" property, and another property called ""Published Posts"" that returns the same kind of object that people would make the correct assumption that Posts is one collection, and PublishedPosts is a filtered view of the Posts collection.

+ +

His suggestion was:

+ +
public interface IBlog
+{
+    IEnumerable<Post> Posts { get; }
+    IEnumerable<Post> GetPublishedPosts();
+}
+
+ +

Basically, replace the PublishedPosts property with a GetPublishPosts() method. He said this was more idiomatic for C#, because a method communicates to people using the interface that you are performing an operation on the ""Posts"" collection (filtering it by status). I haven't really seen any formal documentation for this, but that doesn't stop the C# community from leaning one direction or the other.

+ +

Is having one collection property, and then methods to filter the collection, or having two collections where the second filters the first idiomatic for C#? If so, is there formal documentation anywhere?

+",118878,,,,,42751.1375,"Is an interface with two collection properties, where the second filters the first collection, idiomatic for C#?",,3,4,1,,,CC BY-SA 3.0, +340096,1,340098,,1/12/2017 21:26,,1,913,"

I just want to check that my current understanding of Java interfaces is correct.

+ +

If an interface says it must include public void increase(int amount), then does that just mean the class that implements that interface must have a matching method?

+",227066,,88774,,42747.91458,42747.91667,"Is it correct to think of a Java Interface as a ""contract"" that a class must implement?",,1,2,,,,CC BY-SA 3.0, +340099,1,,,1/12/2017 22:38,,10,6297,"

Similarities and differences between the two:

+ +

Template Method

+ +
    +
  • Relies on inheritance.
  • +
  • Defines the steps of an algorithm, and leaves the task of implementing them to subclasses.
  • +
+ +

Factory Method

+ +
    +
  • Relies on inheritance.
  • +
  • A superclass defines an interface to create an object. Subclasses decide which concrete class to instantiate.
  • +
+ +

The two side by side:

+ +

+ +

I'm not sure what the phrase ""Factory Method is a specialization of Template Method"" means (it's in the Head First Design Patterns book). In Beverage we have the method prepare which is final and defines a series of steps. In PizzaStore we have a method which is abstract, and subclasses redefine it. How is the latter a specialization of the former?

+",109252,,,,,42748.59792,"""Factory Method is a specialization of Template Method"". How?",,3,1,3,,,CC BY-SA 3.0, +340100,1,,,1/12/2017 22:23,,3,453,"

So I had this idea to map my framework's folder structure to namespaces with a dynamic build process.

+ +

To give you an idea how this would work here an example structure:

+ +
src/
+    FS/
+       File/
+            open.php (function)
+    Math/
+         add.php (function)
+         test.php (function)
+         MAGIC.php (constant)
+
+ +

Would be translated to:

+ +
namespace FS\File {
+    function open() { /* ... */ }
+}
+
+namespace Math {
+    function add() { /* ... */ }
+    function test() { /* ... */ }
+    const MAGIC = /* ... */;
+}
+
+ +

There would be no support for classes.

+ +

A function would be defined as follows:

+ +
<?php
+/**
+ * src/Math/add.php
+ *
+ * This is the Math\add module.
+ */
+return function($require) {
+    // Require Math\test module.
+    $test = $require('Math\test');
+
+    return function($a, $b) use ($test) {
+        // Call Math\test function
+        $test($a, $b);
+
+        return $a + $b;
+    };
+};
+?>
+
+ +

The outer most function is for providing a new scope (so all previous defined variables are non existing).

+ +

Then the $require variable is for including other modules.

+ +

(The idea to require all other dependencies came from node. I think this would make mocking and testing a lot easier because one could test the module with an overwritten $require variable.)

+ +

And constants could be defined as follows:

+ +
<?php
+/**
+ * src/Math/MAGIC.php
+ */
+return 0xDEADBEEF;
+?>
+
+ +

The whole framework would then have a require method to load a module from filesystem:

+ +
/**
+ * Function that is responsible to load a module.
+ */
+function require($module) {
+    static $loadedModules = [];
+
+    $modFileName = str_replace('\\', '/', $module);
+
+    // Check if already loaded or load from file system
+    // ...
+    // ...
+
+    return $loadedModule;
+}
+
+ +

So using Math\add would look like this:

+ +
require('Math\add')('require')(1, 2)
+                   // ^- dependency loader function
+
+ +

Of course this does not look very nice so the build process would generate the namespaces and automatically fill in the require calls:

+ +
namespace Math {
+    function add(...$args) {
+        return require(__FUNCTION__)('require')(...$args);
+    }
+
+    /* Or constants ... */
+    const MAGIC = 0xDEADBEEF;
+}
+
+ +

Constants could be require'd in the same fashion as functions:

+ +
require('Math\MAGIC') // 0xDEADBEEF
+// or through the generated namespace
+Math\MAGIC // 0xDEADBEEF
+
+ +

The benefits I see:

+ +
    +
  • Flexible structure (namespaces are generated automatically).
  • +
  • Easily mockable and testable.
  • +
  • Clear dependencies (dependencies are always require'd)
  • +
+ +

But I can also see the danger of having too many variables because of the way dependencies are managed. Also I'm not too sure about the performance.

+ +

I have never seen such an architecture and I'm curious what others think of my concept.

+",131669,d3l,,,,42750.43333,Procedural PHP Framework Concept,,1,5,1,,,CC BY-SA 3.0, +340103,1,,,1/13/2017 0:59,,5,241,"

+ +

So I know this diagram is really wrong. But I honestly don't know how to express my ideas using UML notation and it is hard to find resources to this very specific case. basically I have these questions

+ +
    +
  1. After the customer entered his or her details, the Arrangement class will calculate the cost and the distance based on the info. (Using that information as the input) In this case, is it ok to use calCostAndDis (Customer) : (int) for the function bit?

  2. +
  3. When I use an arrow and say ""has access to"", by that I want the variables in one class to be accessible by the other class, meaning the other class contains the those variables. For example, after a customer makes an order, the info will be saved to the CarsRecord class. So that later we can check the info in the record. How can I express it in UML notation? A class stores the info of another class?

  4. +
  5. If I want the SystemRecord to contain both Carsrecord and ChauffeursRecord, (meaning it has access to both of them as they are part of the system record). Does my graph express this idea correctly?

  6. +
+ +

I am really sorry but I am desperate. I have spent days trying to understand class diagrams but I cannot find anything that explains this kind of situation.

+ +

Many thanks in advance.

+",259396,,257720,,42748.45556,42748.45556,UML diagram question: Creating a system for booking and saving,,1,2,,,,CC BY-SA 3.0, +340105,1,,,1/13/2017 2:53,,5,1386,"

I have a set of ORM models that are shared between the main business application and a couple minor side applications (such as an administrative web interface).

+ +

I don't want to put the object's business logic for the main application inside the ORM model classes because the web interface doesn't use any of it (and it would be too bloated).

+ +

That leaves me with the problem of having two classes for every real ""object"" (the business layer class and the ORM model class), and wondering how I should link the two. Either composition or inheritance would work, but both feel wrong. For example, I have a User class and a DBUser ORM model class. User ""is not"" a DBUser and a User ""does not have"" a DBUser.

+ +

Is there a standard solution or best practice to address this predicament? Or is this a case where there is no great answer and I just have to pick the one that makes me the least uneasy?

+ +

Here is a code example, just in case the above wasn't clear:

+ +
class DBUser(SQAlchemyBase):
+
+    __tablename__ = 'users'
+
+   user_id = Column(Integer, primary_key=True)
+   username = Column(String, nullable=False)
+   # ...
+
+
+class User(object):
+
+    def __init__(self, user_id):
+        self.dbuser = db.query(DBUser).filter(DBUser.user_id == user_id).first()
+
+    @property
+    def username(self):
+        return self.dbuser.username
+
+    @username.setter
+    def username(self, username):
+        self.dbuser.username = username
+
+    def connect_to_server(self, server):
+        ...
+
+    def save(self):
+        db.add(self.dbuser)
+        db.commit()
+        db.detach(self.dbuser)
+
+    def disconnect_from_server(self):
+        ...
+
+    def handle_incoming_action(self, action):
+        ...
+
+",259399,,,,,42750.45556,Business logic outside shared ORM models,,2,0,1,,,CC BY-SA 3.0, +340122,1,,,1/13/2017 10:50,,5,708,"

A number N and a range a to b will be input by the user, with a < b < N.

+ +

The program purpose is to generate random sets of positive integers that sum up to N, with each positive integers within the range a and b.

+ +

For example,

+ +
N = 26
+a = 1
+b = 10
+
+ +

And here are some possible output of the program:

+ +
1,1,10,1,1,1,1,10
+3,2,1,10,5,5
+10,10,6
+1,2,3,4,5,6,5
+
+ +

One way to do that is:

+ +
    +
  1. Generate 2 y[0],y[1] within the range

  2. +
  3. If the y[0]+y[1] > N, start over again.

  4. +
  5. If the y[0]+y[1]-N < a, start over again

  6. +
  7. If the y[0]+y[1]-N < b, return the set y[0], y[1], +N-y[0]-y[1]

  8. +
  9. Else, generate y[i] within the range

  10. +
  11. If the y[0]+y[1]+...+y[i]-N < b, return the set y[0], y[1], ..., +N-y[0]-y[1]-...-y[i]

  12. +
  13. Else, repeat 5-6

  14. +
+ +

The problem is that the random function is slow. Is there a more efficient way to do this with less number of random?

+",188791,,188791,,42751.08264,42754.48542,How to generate a random bag of positive integers that sum up to an input number?,,4,13,,,,CC BY-SA 3.0, +340123,1,,,1/13/2017 11:38,,5,2046,"

Some background: +I develop web services for internal departments of a large organisation that are used in public facing websites. There is a geographic differences between myself and my colleagues in these departments so most communication is by email, phone, skype etc.

+ +

My process for development is develop in test environment where only my team and I have access to it. This is fine and works well.

+ +

Once I and other members of my team are happy with the service I would upload and publish to a UAT/test environment which is accessible to the local domain (wider LAN area) for lets say eCommerce colleagues in another office to test their front end websites/applications against.

+ +

This is where the problem occurs. Testing would commence and then some time later eCommerce colleagues update their production/live environments to use this UAT service without my or my teams knowledge (obvious communication problem I know). I only find out when something has changed in UAT which breaks the service and eCommerce complain.

+ +

The services are clearly labelled with UAT in the titles/domain name. I have clearly specified when supplying them the UAT service not to use it in a live environment, keep me informed of testing etc. Then when all parties are happy with the service I would go through the relevant change control process and upload to the live production environment.

+ +

Are there any processes, methods, tips, advice I should be using to ensure UAT services are not used in a live environment that I have little control over?

+",259427,,,,,42749.92014,How to stop UAT/QA/Test services being used in a production environment,,7,12,1,,,CC BY-SA 3.0, +340125,1,340131,,1/13/2017 13:41,,6,488,"

I'm currently trying to refactor some rather complicated object graph to use dependency injection. I'm using C# and StructureMap, but the problem isn't really technology specific. I understand the basic principle, but drawing blanks on how would I resolve a dependency like this:

+ +
public class Food
+{    
+    public Food(IIngredient ingredient)
+    {       
+        this.foodProcessor = new FoodProcessor(this);
+    }
+
+    // ... Lots of things to customize food
+
+    public void BuildFood()
+    {
+        this.foodProcessor.Process();
+    }
+}
+
+public class FoodProcessor
+{
+    public FoodProcessor(Food f)
+    {
+        this.food = f;
+    }
+
+    public void Process(){
+        // yummy
+    }
+}
+
+ +

In that case, a FoodProcessor on its own doesn't make sense. I can't process null or empty food. On the other hand, new in ctors are not good (right?), and should be injected as a dependency.

+ +

Is the new okay in this case, or is it possible to refactor this without major changes to my application?

+",80352,,60357,,42750.42847,42752.04375,How do you deal with dependencies that require the object you inject into?,,4,1,1,,,CC BY-SA 3.0, +340127,1,,,1/13/2017 14:38,,6,486,"

I've been brainstorming on a specific problem for a while and today I've thought of a solution. But am not too sure about it. Hence this question for feedback and suggestions.

+ +

I'll use the simple example of a product T-Shirt.

+ +

The T-Shirt has multiple options:

+ +

Color: White, Black

+ +

Size: Small, Medium, Large

+ +

Now in the case of a White T-shirt, there is no Large and Medium. So Large and Medium option should not be available when selecting White.

+ +

This means that if you first select Large or Medium. Then White should not be available.

+ +

Previous implementation was done as a tree structure. So you always have to select Color then Size. But it's not really a tree the way I see it.

+ +

My idea was to create a list of rules.

+ +

pseudo code:

+ +
rule1: if color is white, sizes not allowed are [large, medium]
+
+//then generate the opposite rules based on rule1.
+rule2: if size is medium, color not allowed are [white]
+rule3: if size is large, color not allowed are [white]
+
+store rules in database
+
+ +

When you are dealing with products that have many options this could get complicated, that's why I thought generating the other rules based on the first rule can reduce the complexity.

+ +

Thoughts anyone?

+ +

Update:

+ +

Someone remarked below and I realised I used the wrong example. It's not a product which has a SKU and stock level. It's a service. A better example would be a configurable computer. Many different CPU, RAM, GPU, etc combinations. Which all produce different price and depending on specific motherboard or some specific selection, not all CPUs and/or RAM etc are selectable.

+ +

Update2:

+ +

The products/services each have around 7 options. Each option can have between 2 - 7 values. A matrix structure as suggested, would become complex IMO.

+ +

Also we've moved away from having a price for each single variation (which was ridiculous to manage) to having formula's to generate prices dynamically.

+ +

There was always an issue with the DB load because of the tree structure. Each time an option is selected it has to fetch the values of the subsequent options. Each time you add a new value to an option you also duplicate a lot of the subsequent options. So it gets out of hand really quickly.

+ +

To go into more details my solution was to use a document based database (NoSQL) +You would have a ""Products"" or ""Services"" collection.

+ +

Each product/service would look something like this:

+ +
{
+  ""product"": ""T-Shirt"",
+  ""options"": {
+    ""size"": [],
+    ""color"": [],
+    ""pattern"": [],
+    ... about 4 more
+  },
+  ""rules"": [....],
+}
+
+ +

Initially you just load all the options in the interface. Then as you make selections you run the rules to disable the specified option values.

+ +

Using such a structure seems to me that it would have less overhead by having the rules embedded in each product/service instead of having a large relational table with all the options (which is already massive).

+ +

The client side benefits because it doesn't have to query the DB each time an option is changed.

+",259437,,259437,,42751.84653,42753.26736,Modeling complex product options,,4,12,1,,,CC BY-SA 3.0, +340135,1,,,1/13/2017 16:03,,12,710,"

Below is an example image, if I have a point of the white dot in the middle and I want to find the nearest possible location for the blue circle (which is obviously at the location where I placed it) if all the red circles already exist. How can I find that location?

+ +

Performance for me is not a major concern for this application.

+ +

+",259448,,1204,,42749.02847,42753.99792,Find nearest best fit for circle,,5,15,2,,,CC BY-SA 3.0, +340140,1,,,1/13/2017 17:06,,2,180,"

Let's say I am using a RSA keypair to encrypt and decrypt a large amount of traffic over a public network. Assume all traffic is padded and the key is 2048 bits, how often would you recommend renewing the private key?

+ +

Is there a mathematical solution to how much encrypted data is needed in bytes that would allow a hacker to calculate the private key?

+ +

A real life example of this might be a messaging service which encrypts all traffic with one key.

+",258552,,,,,42759.41875,RSA Private key renewal - How often?,,0,9,1,,,CC BY-SA 3.0, +340145,1,,,1/13/2017 18:39,,6,1564,"

We currently have a store/shopping cart system that uses a single database. We have products with a field for the number we have in inventory (say 100 widgets). We have a customer table. When someone adds a widget to their cart, we insert a record in a join table between the customer and the product which represents intent to purchase. That customer_product record has a status indicating that it's either in the cart or that the purchase has been completed ('Pending','Purchased').

+ +

When a customer request hits the system to add a product to their cart, we count the number of purchased and pending customer_product records for that product and disallow it if the number is equal to the total (100). This way, we ensure that we don't allow 101 people to have 100 items.

+ +

The database is our system bottleneck, and the join table gets hit a lot. I suspect row and page locks affect performance under load. I would guess systems like Amazon's/eBay's must have a distributed db architecture, and yet somehow manage the problem of 2 people wanting to put the last item in their cart at the same time. I'd like to rearchitect our store/cart to alleviate the db constraint.

+ +

With a single database, we can do something in our join record insert WHERE clause to include a subquery count so that if two db transactions are trying to do the ""last widget"" insert concurrently that whichever tries to commit second will fail because the count will prevent it after the 2nd-to-last transaction takes the last widget and changes the count. But in a distributed database, I'm guessing that trick won't work.

+ +

What general system architecture guiding principles or patterns apply when addressing such concurrency and shared resource challenges in a distributed system?

+ +

Note: I'm aware of similar questions (like Best-practice to manage concurrency into a basket in a e-commerce website). This question is specifically about how to handle it in a distributed architecture where every db instance has a copy of the tables and changes in one propogate to the others only every so often (at least that's how I imagine it - I haven't actually set up a distributed db system before).

+",115964,,-1,,42838.53125,43384.62569,How to architect a store to avoid overselling inventory (distributed database scenario),,2,4,3,,,CC BY-SA 3.0, +340150,1,340191,,1/13/2017 20:05,,1,527,"

Suppose that there are multiple classes (let's call them Container-s) that have somewhat similar structure. They are smart containers for some other Foo-s classes.

+ +

These Container-s have:

+ +

[1] A single private STL container (vector, set, map) for objects of some other class Foo

+ +

[2] Standard public operations to work with the container in terms of Foo class (Add(FooObject), Remove(FooObject), IsMember(FooObject), iter Begin(), Clear() etc.)

+ +

[3] Public operations with standard names and arguments specific to the Foo class. For example, Add(int id, int profile) may actually create a Foo object inside Container and add it to its private STL container

+ +

[4] Public operations with non-standard names, for example GetNumberOfRedFooObjects() that exist only for the specific class Foo

+ +
class SomeContainer {
+
+private:
+
+    std::vector<SomeFoo> __someFoos; [1]
+
+public: 
+
+    (about 5-30 functions)
+
+    void Add(const SomeFoo &foo);    [2]
+
+    void Add(int id, int profile);   [3]
+
+    int GetNumberOfRedSomeFoos();    [4]
+
+}
+
+ +

Question 1: What good design choices/guidelines are available for such Container-s given that there are many of them (tens, hundreds)? Should I write each Container class from scratch? Should I implement some templated base class(es) for such Container-s?

+ +

Question 2: Is it fine to have methods of both [2] and [3] types or I should stick with just [2]?

+ +

The language is C++, but I think it's a more or less language agnostic question. Thank you for your help in advance!

+",204301,,204301,,42748.84861,42749.89583,Design patterns for smart containers,,2,0,0,,,CC BY-SA 3.0, +340159,1,,,1/14/2017 3:39,,1,690,"

This is probably a naive question but I'm trying to figure out the industry best practice for working with magic numbers and their corresponding display texts. For example, whether a transaction is debit or credit might be stored as a bit field in the database, but the true/false/0/1 need to be displayed as ""Debit"" or ""Credit"" etc. Typically, there would be an enum as well that needs to be kept in sync with the magic numbers and their meanings.

+ +

There are two specific cases that I'm trying to resolve -

+ +
    +
  1. We want to translate them to human readable text - UI needs to know that when it sees zero, it needs to display ""Debit"" etc.

  2. +
  3. We want to work with the magic numbers within the source code - Here we usually translate them as enums. Enums get rid of the magic numbers but are hardcoded so difficult to keep in sync with the database values.

  4. +
+ +

In a moderately sized application we can end up with hundreds to thousands of such translations, for example -

+ +

Context - {Value, Translation}

+ +

TransactionType - {0, Debit}, {1, Credit}

+ +

FileType - {0, CSV}, {1, XML}, {2, Excel}

+ +

ItemType - {0, Manual}, {1, Automated}, {2, Writeoff}, {3, System}

+ +

Status - {0, Active}, {1, InActive}, {2, Pending}, {3, Delivered} etc etc

+ +

I can't figure out a solution that allows enums to be synced/loaded dynamically from the database AND allows translations of database values to texts without losing referential integrity.

+ +

I see the below options -

+ +
    +
  1. Hardcoding in the UI (Views or via GetDisplayTextFor(int) methods etc).
  2. +
  3. Some external Text/XML files.
  4. +
  5. A database table with columns (Context, Value, Translation) - But now we've created new magic numbers in the ""Context"" column.
  6. +
  7. Database tables for each set of such mappings with values as referencing foreign keys - This will allow referential integrity but will mean addition of potentially numerous tables.
  8. +
+ +

What are the industry practices for this problem? Can enums be generated dynamically from the database and can they be converted to texts or should enums be avoided altogether for such cases? Any other standard solutions/patterns that solve at least as much of the problem as possible?

+",242348,,,,,42752.75278,Where to store translations for values to their corresponding display texts?,<.net>,2,2,1,,,CC BY-SA 3.0, +340160,1,,,1/14/2017 5:00,,-1,201,"

I would like to create a multiplayer mobile game where players play together over an ad-hoc network, with no internet connection required. Players should be able to join and leave mid-game, and the game should not rely on a ""host"" device being online.

+ +

How can I coordinate game state across multiple devices in an ad-hoc network and allow any player to join and leave mid-game?

+",171407,,171407,,42753.86875,42753.86875,How can a multiplayer game manage state over a local network?,,1,5,,42752.81111,,CC BY-SA 3.0, +340165,1,340235,,1/14/2017 10:59,,0,2913,"

This is a specific question here, but I'm interested in the general ""best practice"" for similar situations as I'm new to Java.

+ +

Suppose I have Java code that needs to open a file (see below for code). I first have a function that checks for the files existence. If the file exists, we call functions to open it and process it. Otherwise we return a message to the user stating the file could not be found.

+ +

Now in the functions that open the file, we still need to have a try/catch statement for the possible IOException because it's a checked exception. The function openSpecifiedFile has to return a FileInputStream. The fact that our file was proven to exist several milliseconds ago is not enough to guarantee the catch statement will never be executed (though it's unlikely) so I'd rather not return a null here.

+ +

Is there away to return a default object instead, or just avoid the null return statement all together and exit the program with some kind of runtime exception? The only way things could go bad here is if something very bad had happened I feel...

+ +

I suppose the general question is ""When running checks to ensure certain checked exceptions shouldn't occur, what is a good way to deal with the necessary try/catch blocks?""

+ +
 public static void main(String[] args) {
+    String filename = args[0];
+    if (specifiedFileExists(filename)) {
+        FileInputStream specifiedFile = openSpecifiedFile(filename);
+        processFile(specifiedFile);
+    } else
+        System.out.println(""The specified file does not exist"");
+}
+
+
+private static boolean specifiedFileExists(String filename) {
+    File currentFile = new File(filename);
+    return currentFile.exists();
+}
+
+private static FileInputStream openSpecifiedFile(String filename) {
+    try {
+        return new FileInputStream(filename);
+    } catch (IOException e) {}
+    return null;
+}
+
+private static void processFile(FileInputStream currentFile) {
+    ByteBuffer filledBuffer = fillBufferFromFile(currentFile);
+    String messageFromFile = processBufferToString(filledBuffer);
+    System.out.println(messageFromFile);
+}
+
+private static ByteBuffer fillBufferFromFile(FileInputStream currentFile) {
+    try {
+        FileChannel currentChannel = currentFile.getChannel();
+        ByteBuffer textBuffer = ByteBuffer.allocate(1024);
+        currentChannel.read(textBuffer);
+        textBuffer.flip();
+        return textBuffer;
+    } catch (IOException e) {}
+    return ByteBuffer.allocate(0);
+}
+
+private static String processBufferToString(ByteBuffer filledBuffer) {
+    StringBuilder characterBuilderFromFile = new StringBuilder();
+    while (filledBuffer.hasRemaining())
+        characterBuilderFromFile.append((char) filledBuffer.get());
+    return characterBuilderFromFile.toString();
+}
+
+",259504,,,,,42750.8875,How to deal with IOException when file to be opened already checked for existence?,,4,3,,,,CC BY-SA 3.0, +340177,1,340179,,1/14/2017 15:07,,11,801,"

When I'm trying to create an interface for a specific program I'm generally trying to avoid throwing exceptions that depend on non-validated input.

+ +

So what often happens is that I've thought of a piece of code like this (this is just an example for the sake of an example, don't mind the function it performs, example in Java):

+ +
public static String padToEvenOriginal(int evenSize, String string) {
+    if (evenSize % 2 == 1) {
+        throw new IllegalArgumentException(""evenSize argument is not even"");
+    }
+
+    if (string.length() >= evenSize) {
+        return string;
+    }
+
+    StringBuilder sb = new StringBuilder(evenSize);
+    sb.append(string);
+    for (int i = string.length(); i < evenSize; i++) {
+        sb.append(' ');
+    }
+    return sb.toString();
+}
+
+ +

OK, so say that evenSize is actually derived from user input. So I'm not sure that it is even. But I don't want to call this method with the possibility that an exception is thrown. So I make the following function:

+ +
public static boolean isEven(int evenSize) {
+    return evenSize % 2 == 0;
+}
+
+ +

but now I've got two checks that perform the same input validation: the expression in the if statement and the explicit check in isEven. Duplicate code, not nice, so let's refactor:

+ +
public static String padToEvenWithIsEven(int evenSize, String string) {
+    if (!isEven(evenSize)) { // to avoid duplicate code
+        throw new IllegalArgumentException(""evenSize argument is not even"");
+    }
+
+    if (string.length() >= evenSize) {
+        return string;
+    }
+
+    StringBuilder sb = new StringBuilder(evenSize);
+    sb.append(string);
+    for (int i = string.length(); i < evenSize; i++) {
+        sb.append(' ');
+    }
+    return sb.toString();
+}
+
+ +

OK, that solved it, but now we get into the following situation:

+ +
String test = ""123"";
+int size;
+do {
+    size = getSizeFromInput();
+} while (!isEven(size)); // checks if it is even
+String evenTest = padToEvenWithIsEven(size, test);
+System.out.println(evenTest); // checks if it is even (redundant)
+
+ +

now we've got a redundant check: we already know that the value is even, but padToEvenWithIsEven still performs the parameter check, which will always return true, as we already called this function.

+ +

Now for isEven of course doesn't pose a problem, but if the parameter check is more cumbersome then this may incur too much cost. Besides that, performing a redundant call simply doesn't feel right.

+ +

Sometimes we can work around this by introducing a ""validated type"" or by creating a function where this issue cannot occur:

+ +
public static String padToEvenSmarter(int numberOfBigrams, String string) {
+    int size = numberOfBigrams * 2;
+    if (string.length() >= size) {
+        return string;
+    }
+
+    StringBuilder sb = new StringBuilder(size);
+    sb.append(string);
+    for (int i = string.length(); i < size; i++) {
+        sb.append('x');
+    }
+    return sb.toString();
+}
+
+ +

but this requires some smart thinking and quite a large refactor.

+ +

Is there a (more) generic way in which we can avoid the redundant calls to isEven and performing double parameter checking? I'd like the solution not to actually call padToEven with an invalid parameter, triggering the exception.

+ +
+ +

With no exceptions I do not mean exception-free programming, I mean that user input doesn't trigger an exception by design, while the generic function itself still contains the parameter check (if just to protect against programming errors).

+",46582,,1204,,42749.77083,43231.88125,How to perform input validation without exceptions or redundancy,,3,11,4,,,CC BY-SA 3.0, +340187,1,,,1/14/2017 19:59,,2,151,"

Title says it all. Should I increment my API version if I add, say, an image property to each instance of my JSON-represented 'Restaurant' resource? or should API versioning change only when implementation changes?

+",157760,,,,,42749.89583,Should API version change when data is added?,,3,2,,,,CC BY-SA 3.0, +340198,1,340952,,1/15/2017 0:53,,5,15079,"

I am currently in middle of designing a backup application in JavaFX written in pure Java (meaning without Fxml) .

+ +

I am having trouble implementing the MVC pattern for the following reason. The way I understand it is that the view has to be separate from the controller which processes events and updates the model to reflect the changes that the event causes. My problem is linking the events to the controller class, this is due to the fact that the events are required to be added in the view class where I declare the components. Which makes part of the controller in the view class, which as far as I understand is exactly what the MVC pattern is supposed to prevent.

+ +

So my question is how do I link the events from the view class to the controller class without putting part of the controller in the view class?

+",258776,,61852,,42761.47153,42762.65208,How to implement the MVC design pattern with JavaFX written in pure Java,,1,2,4,,,CC BY-SA 3.0, +340199,1,340201,,1/15/2017 1:24,,16,6379,"

Many times I find myself null checking when fetching a value from some data hierarchy to avoid NullPointerExceptions, which I find to be prone to errors and a needs a lot of boilerplate.

+ +

I've written a very simple routine which allows me to skip null checking when fetching an object...

+ +
public final class NoNPE {
+
+    public static <T> T get(NoNPEInterface<T> in) {
+        try {
+            return in.get();
+        } catch (NullPointerException e) {
+            return null;
+        }
+    }
+
+    public interface NoNPEInterface<T> {
+        T get();
+    }
+}
+
+ +

I use it a bit like this...

+ +
Room room = NoNPE.get(() -> country.getTown().getHouses().get(0).getLivingRoom());
+
+ +

The above resulting in me getting a Room object or a null, without having to null check all parent levels.

+ +

What do you think of the above? Am I creating a problematic pattern? Is there a better way to do this in your opinion?

+",148394,,61852,,42768.98194,42769.47222,Fetching a value without having to null check in Java,,4,4,2,,,CC BY-SA 3.0, +340207,1,,,1/15/2017 10:13,,3,229,"

There are cases where I don't know how to use exception handling. To make it clear, let me divide exceptions into two types:

+ +
    +
  1. exceptional cases that may happen occasionally, such as when you try to open a non-existent file.

  2. +
  3. exceptions that you wouldn't expect to happen if you have written your program correctly, such as out-of-range indexing.

  4. +
+ +

In the first case I'd prompt the user and/or return the program to a normal state, but I don't know what to do for the second type. If I don't catch them, the runtime will show its own message. If I wanted to handle it myself, I would have no idea of how to handle a case that was not supposed to happen whatsoever.

+",259560,,110531,,42750.50208,42750.72569,What to do with exceptions that arise from bugs?,,6,4,,,,CC BY-SA 3.0, +340209,1,340211,,1/15/2017 11:16,,4,1157,"

I've seen a few answers on this already but nothing that really applies to my particular situation.

+ +

I'm going to be building a mobile application, so primarily phones, tablets etc. Probably using Amazon Web Services, and I plan on having large amount of user data among other things, like statistics, images and so on.

+ +

So I'm curious what would work better in terms of what will scale better, and will be more efficient in its queries, separating all the different types of data into databases or just have one massive one. +The way I'm thinking now if I were to do them separately I would have about 3 databases based on the cloud.

+ +

Any thoughts on this would be appreciated, thanks

+",259566,,,,,42750.48611,Multiple databases or one single database,,1,3,1,,,CC BY-SA 3.0, +340217,1,,,1/15/2017 13:56,,1,80,"

I am currently in a dilemma. I am thinking about downloading a JSON file from a GitHub repo to replace local files. The local files are stored in a folder named lang, which is stored in the project folder. The repo is stored on GitHub and can be accessed using a URL.

+ +
+ +

The GitHub solution, in my opinion, would be good so that I could push automatic updates (our clients want strings) to local projects. Only the language needed would be downloaded as well, and would download every 30 minutes, on startup, and via a custom function. Plus, I could code in a manual override so you can supply your own ones as well. Here's the code for the GitHub solution:

+ +

+ +
def get_jsonparsed_data(url):
+    response = urllib.request.urlopen(url)
+    data = response.read().decode(""utf-8"")
+    return json.loads(data)
+
+if MESSAGE_LANGUAGE in get_jsonparsed_data(""https://raw.githubusercontent.com/user/repo/master/languages.json"")['languages']:
+    url = (""https://raw.githubusercontent.com/user/repo/master/"" + MESSAGE_LANGUAGE + "".json"")
+else:
+    url = (""https://raw.githubusercontent.com/user/repo/master/lang.json"")
+
+lang = get_jsonparsed_data(url)
+
+ +

On the other hand, locally storing it will make it easier to edit while harder to update.

+ +

+ +

Here's the code for this as well:

+ +

+ +
if os.path.isfile('./lang/' + MESSAGE_LANGUAGE + '.json'):
+    with open('lang/' + MESSAGE_LANGUAGE + '.json') as data_file:
+        lang = json.load(data_file)
+else:
+    with open('lang/en.json') as data_file:
+        lang = json.load(data_file)
+
+ +

Which one would be better to use, and why?

+",228074,,228074,,42751.62083,42751.62083,Should I pull the language data files of a project from a GitHub repository?,,0,6,,,,CC BY-SA 3.0, +340220,1,340227,,1/15/2017 15:09,,3,2851,"

I'm developing a Java software according to the object-oriented Layers architectural pattern. Every layer should be clearly separated from the rest, and provide a well-defined interface to use it's services (maybe more than one).

+ +

A common example for these layers could be an architecture consisting of a request processing layer, a business logic layer and a persistence layer.

+ +

However, I'm not sure how to use Java interfaces correctly to implement this structure. I guess that each layer should have it's own Java package. Should every layer contain one Java interface that defines methods to access it? Which classes implement these interfaces? Classes from the layer or classes from outside the layer? Which classes methods does an outside object use if it wants to use a layer?

+",259577,,,,,43537.62361,Java Interfaces in Layers pattern,,3,0,,,,CC BY-SA 3.0, +340223,1,340243,,1/15/2017 16:42,,4,3652,"

When you are working with an external library in git, should you add it to Git or should it be in gitignore? If you put it in gitignore, you run into the problem that if someone (or you yourself on another PC) wants to work on your code, they will first have to download and link the library. If you do include it, the graph of code activity will show 22k lines of code being changed on the initial commit and then only 100 on each following commit (at least that happened to me).

+ +

So is one of these solutions right or is there another one that I am missing?

+ +

PS: The same also applies to Makefiles and similar files.

+",226868,,226868,,42750.7,42750.88611,Should you include libraries and code-unrelated files in your git project and upload them to Github?,,1,8,2,,,CC BY-SA 3.0, +340231,1,,,1/15/2017 17:50,,4,117,"

I come from a background where using a configuration file for every, if not every, constant is the best solution for maintainability and flexibility of the program. By this I mean, every hard coded string, integer, table/array, boolean, color, formatting expression, etc. are all put in one file or class (often called the ""Configuration"" class).

+ +

A big benefit to this solution is that is future proofs ""accidental"" code changes that can introduce new ""features"" when a main class or code block is modified. I find that it gets developers out of code areas that should not be touched without really thinking about it. It also makes a code in a main class or section much more readable and coherent (see below):

+ +
if (x == 5)
+{
+    // Logic here
+}
+
+ +

Versus

+ +
if (x == Config.MaxNumberOfResults)
+{
+    // Logic here
+}
+
+ +

the later being MUCH more readable and coherent (especially in the future) and requires little to no comments to maintain as well as no logic needs rewriting if we decide we want a higher max of results.

+ +

The issue with the former code is that some other developer would eventually figure out that 5 is the max result limit and then think I want something different than that so (a few attempts later) they include that number (because god-forbid changing the number) thus putting >= now so the entire code block changes...which may introduce a ""feature"" somewhere else later on down the line. Using the configuration method would result in most developers never even touching the main logic (usually).

+ +

The problem is that my boss likes the idea, but wants to leave some constants in main code files even if they are only used once (i.e. that 5 or max result is only used once (right there), so does it really make sense to put a reference to it from another class/file?) I, of course, think so because of the aforementioned reasons as well as it is just cleaner and more coherent code.

+ +

Is my thinking more inline with correct coding conventions/methodologies or is my boss'?

+ +

I am fine with doing it his way, but it just feels like it will come back to haunt me later on down the road.

+",184167,,184167,,42750.75764,42750.94097,Using A Configuration Class (For All Constants/Magic's),,2,0,1,,,CC BY-SA 3.0, +340244,1,,,1/15/2017 21:32,,1,113,"

I have an OpenGL application that plots data in real time. I would like to have a background TCP server that will accept data from a client without blocking on a call. The data is originally an array of 2048 doubles. Once this array of data arrives, I need to place it in a circular array buffer that is read from the foreground OpenGL program.

+ +

I initially tried the async TCP server code from Microsoft but this code blocks on the listener.BeginAccept.

+ +

I thought that this code would operate in the background but obviously not. So I need to run the server in a background thread and not block on a read. I could use a much simpler server since I only have one client at a time to deal with.

+ +

Now the circular array buffer is a static class. To write to it it is a simple CircularBuffer.Write(double buffer); The class takes care of all indexes.

+ +

So what I need is a thread safe TCP server running in the background which can write to this circular buffer. I assume I might need to have some locking mechanism?

+ +

Can you suggest an over approach to this please?

+",259600,,,user22815,42754.73958,42754.73958,Background TCP server to collect data,,0,5,,,,CC BY-SA 3.0, +340248,1,340264,,1/15/2017 22:55,,4,409,"

With almost every software there are errors and those must be given levels. Grave errors may simply stop your program while simple notices can be resolved with a click. I've always proceeded by giving them a certain numeric degree of importance. But is there a ""general rule"" between programmers on how to chose these degrees?

+ +
    +
  • Should a higher degree of importance be represented by a larger number (e.g. 500) or a smaller one (e.g. 5)? Is there a reason why?
  • +
  • Should error levels be widely spaced (100, 200, 300, ...) or closer to each other (100, 101, 102)? And again, are there any advantages to this technique?
  • +
+",257422,,110531,,42750.9625,42752.22569,How should errors be given levels?,,3,1,1,,,CC BY-SA 3.0, +340249,1,340250,,1/15/2017 22:55,,4,238,"

I'm building a web site and I want to use a big background image. Because of speed considerations, I thought that it would make sense to send a low-res version of the image the user first (for fast page loading and smooth user interface experience), and when the page finish loading, use JavaScript to load the higher-res version of that same image.

+ +

Do the server and the client's web browser know that it's the same image, and load just the ""rest"" of the data (delta of the low-res and the high-res)? Or does the server send the bigger image unrelated to the smaller image sent before?

+",259603,,7422,,42751.34931,42751.34931,Is loading the same .jpg in different qualities a waste of data?,,1,2,,,,CC BY-SA 3.0, +340260,1,,,1/16/2017 7:43,,1,717,"

I am currently implementing WYSIWYG editor that will be available on the web, what are the main security issues I should tackle? The editor currently works that when user is done typing, the text gets saved to folder with editor text on same domain, and the iframe gets refreshed with the contents of it.

+ +

I know that when it comes to JS, someone could scale up the DOM from the parent windows, but how could that affect security of my website?

+ +

The editor instances are not share-able between users and never will be. Only admins can view all instances.

+",259626,,235135,,42751.33611,42781.33819,What are security risks of WYSIWYG HTML/CSS/JS editors running as web service?,,1,0,,,,CC BY-SA 3.0, +340267,1,,,1/16/2017 10:17,,2,84,"

Consider an alphabet of k symbols and a requirement to optimally encode a series of values of known frequency. The obvious choice for this is to use Huffman coding, which is known to be optimal for this problem. Consider now the extra requirement that when the coded values are received it will be unknown whether or not the symbols that represent them have been reversed, so for example if the coding suggests that ""value 1"" is encoded as ""aab"", it may be received at the receiving end as either ""aab"" or ""baa"". Therefore each encoding used must not have a valid encoding that contains the same symbols in reverse order.

+ +

When k > 2, one possible implementation would be to reserve one of the symbols for a 'start bit' and ensure that it is never used as the terminal symbol of any code. But are there any better approaches?

+ +

Update

+ +

Just so anyone reading this can get more of an idea what I was talking about, you can see the final implementation I wrote (using the algorithm I was suggesting above, except reversed -- I reserve a colour for the end marker and don't use it in the first symbol, as that's much easier to implement due to the way the Huffman algorithm prepends symbols to the code as it grows) here: http://periata.co.uk/shb/colourcoder.html

+ +

I'm still interested in any better ideas, if anyone can come up with one.

+",153823,,153823,,42768.375,42768.375,Direction-free optimal encoding,,0,6,,,,CC BY-SA 3.0, +340268,1,340270,,1/12/2017 14:37,,3,2659,"

The moment of working on a system that gives you statistics based on some data gathered from the database has arrived in my company.

+ +

How do you efficiently gather statistics from a database in such a way that does not add too much overhead in loading a page, or too much complexity in cache management?

+ +

Currently, statistics are calculated at run-time, no data is saved or cached, the problem is that as you add new statistics, and those as well get calculated at run-time, I'll reach a point where the website is going to be reaaally slow, which is not acceptable.

+ +

The only idea that came to my mind to solve this issue is caching data that has date filters past the day of when they are calculated.

+ +

For example, let's say that I'd like to know if a user has visited a specific page between 2017-01-01 and 2017-01-08. Since today it's 2017-01-12, it's implied that this result could never change in the future, since the dates selected are old.

+ +

This is an example of how I calculate statistics in Laravel (4.x):

+ + + +

+ +

namespace App\Composers\Users;
+
+use Illuminate\Support\Collection;
+use User;
+
+class ShowComposer
+{
+    public function compose($view)
+    {
+        $viewData = $view->getData();
+
+        $view->with([
+            'sellings'    => $this->getSellingStatistics($viewData['user'])
+        ]);
+    }
+
+    public function getSellingStatistics(User $user)
+    {
+        $sellings = [];
+
+        $getSellingsOf = function (User $user, $months) {
+            $startOfMonth = \Carbon::now()->subMonths($months)->startOfMonth();
+            $endOfMonth   = \Carbon::now()->subMonths($months)->endOfMonth();
+
+             return $user
+                ->mavs()
+                ->whereHas('buyerProposal', function ($proposal) use ($startOfMonth, $endOfMonth) {
+                    $proposal->whereBetween('sold_at', [
+                        $startOfMonth, $endOfMonth
+                    ]);
+                })
+                ->count();
+        };
+
+        $sellings['best'] = value(function () use ($getSellingsOf) {
+            $months = [];
+
+            for ($month = 0; $month < 12; $month++) {
+                $startOfMonth = \Carbon::now()->subMonths($month)->startOfMonth();
+                $endOfMonth   = \Carbon::now()->subMonths($month)->endOfMonth();
+
+                $query = <<<SQL
+            SELECT
+                id, (SELECT COUNT(*)
+                    FROM `mav`
+                    INNER JOIN `mav_proposals` ON `mav`.`mav_proposal_id` = `mav_proposals`.`id`
+                    WHERE sold_at BETWEEN ? AND ?
+                    AND mav.user_id = users.id) AS sellings
+            FROM users
+            ORDER BY sellings DESC
+            LIMIT 1
+SQL;
+
+                $response = \DB::select($query, [
+                    $startOfMonth->toDateTimeString(),
+                    $endOfMonth->toDateTimeString()
+                ]);
+
+                $user = User::find($response[0]->id);
+
+                $months[] = $getSellingsOf($user, $month);
+            }
+
+            $months = array_reverse($months);
+
+            return $months;
+        });
+
+        $sellings['personal'] = value(function () use ($user, $getSellingsOf) {
+            $months = [];
+
+            for ($month = 0; $month < 12; $month++) {
+                $months[] = $getSellingsOf($user, $month);
+            }
+
+            $months = array_reverse($months);
+
+            return $months;
+        });
+
+        $sellings['global'] = value(function () use ($user) {
+            $months = [];
+
+            for ($month = 0; $month < 12; $month++) {
+                $startOfMonth = \Carbon::now()->subMonths($month)->startOfMonth();
+                $endOfMonth   = \Carbon::now()->subMonths($month)->endOfMonth();
+
+                $companySoldMavs = \App\Models\MAV::whereHas('buyerProposal',
+                    function ($proposal) use ($startOfMonth, $endOfMonth) {
+                        $proposal->whereBetween('sold_at', [
+                            $startOfMonth, $endOfMonth
+                        ]);
+                    })->count();
+
+                $usersWithSoldMavs = \User::whereHas('mavs', function ($mav) use ($startOfMonth, $endOfMonth) {
+                    $mav->whereHas('buyerProposal', function ($proposal) use ($startOfMonth, $endOfMonth) {
+                        $proposal->whereBetween('sold_at', [
+                            $startOfMonth, $endOfMonth
+                        ]);
+                    });
+                })->count();
+
+                $months[] = ($usersWithSoldMavs > 0)
+                    ? round($companySoldMavs / $usersWithSoldMavs)
+                    : 0;
+            }
+
+            $months = array_reverse($months);
+
+            return $months;
+        });
+
+        return $sellings;
+    }
+}
+
+ +

Now, here are the only two options I have thought of:

+ +
    +
  • Calculate statistics every 24 hours and save them in a database.
  • +
  • Cache data based on the parameters used to gather the statistics.
  • +
+ +

The first option is quite complicated and it takes a lot of time to be developed proplerly.

+ +

The second option could be interesting, however I am afraid that cache is going to give me headaches sooner or later.

+ +

Is there an other way to do it efficiently? How do enterprises move themselves towards data mining? Are languages like R always used in these cases, or PHP can be just fine if used properly?

+ +

It's a new world for me, please be kind.

+",109956,GiamPy,109956,,42751.48194,42751.50903,How to perform data mining efficiently (in PHP)?,,1,5,,,,CC BY-SA 3.0, +340271,1,,,1/16/2017 12:21,,1,144,"

I need to implement a site with a real-time graph.

+ +

I'm currently using WebSockets and chartjs.org for displaying the values.

+ +

Now I'm not sure whether I should send all data points in every message or just only the new data points and save the remaining data points on a client-side ring buffer.

+ +

In the current setup I need to send about 200-3200 data points per second.

+ +

I'm currently favoring sending all data points at once to keep the U/I stateless. But I have the fear that performance will degrade because of the higher needed data throughput.

+",131669,,,,,43119.11875,Save realtime data on client vs on server,,1,1,1,,,CC BY-SA 3.0, +340272,1,,,1/16/2017 13:21,,1,98,"

Is it the number of iterations or recursive calls made? Or is it the number of times the conditional check has been applied?

+ +

Ex: if there are 10 elements in an array in descending order. +The time complexity will be 10 log 10(base 2).

+ +

The answer is 33.2192.

+ +

Here is the java implementation of the code. +I have added a count in the while loops where the array is updated.

+ +
public class MyMergeSort {
+
+    private int[] array;
+    private int[] tempMergArr;
+    private int length;
+    private static int count;
+
+    public static void main(String a[]){
+        count = 0;
+        int[] inputArrNew = {10,9,8,7,6,5,4,3,2,1};
+        MyMergeSort mms = new MyMergeSort();
+        mms.sort(inputArrNew);
+        System.out.println(""Condition Checks are "" + count);
+    }
+
+    public void sort(int inputArr[]) {
+        this.array = inputArr;
+        this.length = inputArr.length;
+        this.tempMergArr = new int[length];
+        doMergeSort(0, length - 1);
+    }
+
+    private void doMergeSort(int lowerIndex, int higherIndex) {
+
+        if (lowerIndex < higherIndex) {
+
+            int middle = lowerIndex + (higherIndex - lowerIndex) / 2;
+
+            doMergeSort(lowerIndex, middle);
+
+            doMergeSort(middle + 1, higherIndex);
+
+            mergeParts(lowerIndex, middle, higherIndex);
+        }
+    }
+
+    private void mergeParts(int lowerIndex, int middle, int higherIndex) {
+
+        for (int i = lowerIndex; i <= higherIndex; i++) {
+            tempMergArr[i] = array[i];
+        }
+        int i = lowerIndex;
+        int j = middle + 1;
+        int k = lowerIndex;
+        while (i <= middle && j <= higherIndex) {
+            if (tempMergArr[i] <= tempMergArr[j]) {
+                array[k] = tempMergArr[i];
+                i++;
+            } else {
+                array[k] = tempMergArr[j];
+                j++;
+            }
+            k++;
+            count++;
+        }
+        while (i <= middle) {
+            array[k] = tempMergArr[i];
+            k++;
+            i++;
+            count++;
+        }
+
+    }
+
+}
+
+ +

here is the output:

+ +
    Condition Checks are 34
+
+ +

Is my understanding correct?

+ +

Why is the method doMergeSort() not taken into consideration while calculating the time complexity. We know that recursive calls affect performance of the code.

+ +

I tried to read the content in below link, but it is slightly difficult to understand at a high level.

+ +

For bubble sort it is easy to understand that the for loop runs twice creating a n X n structure resulting into O(n2) complexity.

+ +

https://cs.stackexchange.com/questions/23593/is-there-a-system-behind-the-magic-of-algorithm-analysis

+",259652,,-1,,42838.53333,42751.56736,What does time complexity of n log n signify?,,0,5,,42751.66736,,CC BY-SA 3.0, +340278,1,,,1/16/2017 14:45,,2,222,"

I have a class that models LogicalExpressions. The leaves are classes that implement an interface IEvaluable, that has a method called Evaluate which returns a boolean as the result.

+ +
public class MyEvaluable : IEvaluable
+{
+    public bool Evaluate(Environment env)
+    {
+
+    }
+}
+
+ +

Some of these evaluable objects need to do some heavy stuff to produce the result, like calling a web service for instance. And since a logical expression may have multiple such objects that are related, I would like to evaluate them all at once, doing one web service call for all of them instead of separate calls for each parameter.

+ +

So I've been thinking about a good way to design such a system and came up with 2 solutions:

+ +

1) Make my evaluable objects mutable.

+ +
public interface IBatchEvaluable
+{
+    void BatchEvaluate(object[] siblings, Common.Environment env);
+    bool IsEvaluated { get; }
+    bool EvaluationResult { get; }
+}
+public interface IBatchEvaluable<T> : IBatchEvaluable
+{
+    void BatchEvaluate<T>(T[] siblings, Common.Environment env);
+}
+
+ +

So every object that is IBatchEvaluable will have a state. When I need to evaluate it I check if it's already been evaluated and do the batch evaluation if it's needed. +The only con is that my objects will be mutable, and that's not really desirable.

+ +

2) Store evaluation data in the environment

+ +

I could keep the objects immutable, and move the IsEvaluated and EvaluationResult data in the environment. So each object would look in the environment first to see if it has already been evaluated and if so get the result from the environment, otherwise evaluate all the siblings in one go and put the data in the Environment.

+ +

This is also not very attractive, since the implementation of my objects will depend on outside data, doesn't seem to abide by OOP principles.

+ +

How should I go about this from an OOP perspective ? I am open to hearing other potential solutions to this.

+",191693,,,,,42751.6625,Immutable vs Mutable objects - Design,,2,9,,,,CC BY-SA 3.0, +340279,1,340325,,1/16/2017 14:53,,3,1402,"

A classical problem: read the words from a text file and list the occurence of each unique word in the file.

+ +

I solved the problem using a hash map, but how could the performance be improved? I tried reading multiple lines of that file using threads, but even that would be like a bottle neck and there are chances of race condition in a hashmap. Using concurrent HashMap would cause a bottleneck. What would be an ideal multithreaded approach?

+",258000,,1204,,42751.85486,42752.4375,A multi-processing approach to listing the occurrences of words in a text file,,2,8,1,,,CC BY-SA 3.0, +340283,1,,,1/16/2017 15:21,,2,178,"

I have this two types

+ +
open FSharpx
+open FSharpx.Reader
+
+type First =
+    { Name: string
+      Items: string list }
+
+type Second =
+    { Name: string
+      Numbers: int list }
+
+ +

Using a Reader monad from the FSharpx library I can do this

+ +
let map f =
+        fun n xs ->
+            { Name = n; Items = xs } // : Second
+    <!> Reader.ask (fun (o:First) -> o.Name)
+    <*> (List.map f <!> Reader.asks (fun o -> o.Items))
+
+ +

and I execute this without problem

+ +
> let first = {Name=""stuff"";Items=[""1"";""2"";""3""]}
+> map int first;;
+val it : Second = {Name = ""stuff"";
+                   Numbers = [1; 2; 3];}
+
+ +

the problem is that the function int is not safe, so I have to protect it into a Choice type.

+ +
> let safeInt = Choice.protect int
+val safeInt : (int -> Choice<int,exn>)
+
+ +

Now, how do I use this function in my map function from First to Second?

+ +

my attempt is using pattern matching by creating a wrapper function with the knowledge of the Choice type

+ +
let second n = function
+    | Choice1Of2 xs -> Choice1Of2 {Name=n;Numbers=xs}
+    | Choice2Of2 err -> Choice2Of2 err
+
+let map' f =
+    mapping
+    <!> Reader.ask (fun (o:First) -> o.Name)
+    <*> (Choice.mapM f <!> Reader.ask (fun o -> o.Items))
+
+ +

but this is not good because it might seem from outside the module that Second might not have an Number property

+ +

Is there a better way to apply the result of a choice type to my example?

+",147330,,209774,,43072.775,43072.775,combine reader monad with choice,,0,1,,,,CC BY-SA 3.0, +340284,1,362533,,1/16/2017 15:25,,7,2956,"

Recently I had to implement a Semaphore using a Mutex and a Conditional Variable (this combination is also known as a Monitor) for an exercise at the university:

+
+

the Semaphore's decrement operation blocks until its counter is more than zero before decrementing,

+

and the increment operation increments the counter and then notifies one waiting thread.

+
+

However, I also learned that:

+
+

a Mutex is a binary semaphore with the extra restriction that only the thread decremented it can increment it later.

+
+

As these two definitions are clearly mutually recursive, I am wondering how Semaphores and Mutexes can be implemented (in pseudocode) directly, without using the other data type in their implementations.

+",41643,,-1,,43998.41736,43993.65694,Mutex vs Semaphore: How to implement them _not_ in terms of the other?,,2,3,4,,,CC BY-SA 3.0, +340285,1,340291,,1/16/2017 15:35,,2,2151,"

I would consider myself an intermediate Python programmer. One of my recent challenges was creating a list of all possible solutions to a given +Countdown problem. +Without getting into too much detail, I have approached the problem through:

+ +
    +
  • first generating a list of all possible Number-Operator arrangements using RPN

  • +
  • and then bruteforcing all possible permutations numbers/operators for all possible arrangements, recording the patterns that give me the answer.

  • +
+ +

The full code listing is further below.

+ +

I am aware that this is utterly inefficient and my program takes on the scale of 5-10 minutes to complete.

+ +

I have come across an alternative approach here, which uses recursion and generators and finishes considerably faster - on the scale of 30 seconds. My level of understanding of Python does not allow me to just read through the code I found and fully understand the nuances.

+ +

I understand that it recursively creates branched expressions with all possible permutations and evaluates them until the correct result is reached, which is essentially another take of what I am doing. I do not understand why that code is orders of magnitude faster than mine.

+ +

Operations-wise, the faster code makes on the scale of 5 million attempts, mine makes 15 million attempts, but that still does not match up to the difference in time of execution.

+ +

My question: I would be very grateful for a pointer as to what exactly about the class/recursion approach makes it this much more efficient than my rather naive approach to basically the same method.

+ +
+ +

After tinkering with switching off various modules in the nested loop, I think I narrowed it down. I think, quite disappointingly, that the slowest part is the way I evaluate RPN expressions.

+ +

What I did:

+ +
    +
  • Replaced the line result = RPN_eval(...) with result = [0]. This completes the program in under 9 seconds.

  • +
  • I then restored the line back to call the RPN_eval(...) function. Instead, I got rid of the attempt string generation and replaced it with a fixed 2 2 + - this version terminated in under 69 seconds...

  • +
  • Finally, fixing attempt to be 2 2 + 2 + increased the running time to 120 seconds.

  • +
+ +

Extrapolating (roughly) this finding that each additional number and operator in the expression increases the program time by a factor of around 1.7 - I get total run time of 10-11 minutes, which is what my program shows under normal conditions.

+ +

My new question: Therefore, what is the part of the RPN_eval function that seems to be so awkward and slow? Will do more research and formalise this into an actual separate question, not relevant here as such

+ +
+ +

I think I am onto something - I am trying to dynamically convert RPN pattern expressions into a (horrendous) lambda function, that I can then feed individual number permutations to and yield outcomes, without having to remake the lambda function until the next pattern kicks in. Will add code here once it cooperates...

+ +

My code listing:

+ +
import itertools as it
+import random
+import time
+operators = [""+"", ""-"", ""/"", ""*""]
+count = 0
+
+def RPN_eval(expression, answer): #a standard stack approach to evaluating RPN expressions
+    explist = expression.split("" "")
+    explist.pop(-1)
+    stack = []
+
+    for char in explist:
+
+        if not char in operators:
+            stack.append(int(char))
+        else:
+            if char == ""+"":
+                num1 = stack.pop()
+                num2 = stack.pop()
+
+                if num1 > num2:
+                    return[-1]
+
+                result = num1 + num2
+                stack.append(result)
+
+            if char == ""-"":
+                num1 = stack.pop()
+                num2 = stack.pop()
+                result = -num1 + num2
+                stack.append(result)
+
+            if char == ""*"":
+                num1 = stack.pop()
+                num2 = stack.pop()
+
+                if num1 > num2:
+                    return [-1]
+
+                result = num1 * num2
+                stack.append(result)
+
+            if char == ""/"":
+                divisor = stack.pop()
+                divident = stack.pop()
+
+                try:
+                    result = divident / divisor
+                except:
+                    return [-1]
+
+                stack.append(result)
+
+            if result<=0 or result != int(result):
+                return [-1]
+
+    return stack
+
+################### This part runs once and generates 37 possible RPN patterns for 6 numbers and 5 operators
+def generate_patterns(number_of_numbers): 
+#generates RPN patterns in the form NNoNNoo where N is number and o is operator
+
+    patterns = [""N ""]
+
+    for pattern1 in patterns:
+        for pattern2 in patterns:
+            new_pattern = pattern1 + pattern2 + ""o ""
+            if new_pattern.count(""N"")<=number_of_numbers and new_pattern not in patterns:
+                patterns.append(new_pattern)
+
+    return patterns
+#######################################
+
+
+######### Slowest part of program ################
+def calculate_solutions(numbers, answer):
+    global count
+    patterns = generate_patterns(len(numbers)) #RPN symbolic patterns for a given number pool, runs once, takes less than 1 second
+    random.shuffle(patterns) #not necessary, but yields answers to look at faster on average
+    print(patterns)
+    solutions = [] #this list will store answer strings of good solutions. This particular input produces 56 answers.
+
+    for pattern in patterns:
+        nn = pattern.count(""N"") #counts the number of numbers in a symbolic pattern to produce corresponding number group permutations
+        no = pattern.count(""o"") #same for operators
+        numpermut = it.permutations(numbers,nn) #all possible permutations of input numbers, is an itertools.permutations object, not a list. Takes 0 seconds to define.
+
+        print(pattern)
+
+        for np in numpermut:
+            oppermut = it.product([""+"",""-"",""*"",""/""],repeat=no) #all possible permutations of operator order for a given pattern, itertools object, not a list. Takes 0 seconds to define
+            for op in oppermut:
+                attempt = """"
+                ni = 0
+                oi = 0
+                for sym in pattern:
+                    if ""N"" in sym:
+                        attempt+=str(np[ni])+"" "" #replace Ns in pattern with corresponding numbers from permutations
+                        ni+=1
+                    if ""o"" in sym:
+                        attempt+=str(op[oi])+"" "" #replace os in pattern with corresponding operators from permutations
+                        oi+=1
+
+                count+=1
+                result = RPN_eval(attempt, answer) #evaluate attempt
+
+                if result[0] == answer:
+                    solutions.append(attempt) #if correct, append to list
+
+                    print(solutions)
+    return solutions
+#####################################    
+
+
+
+
+solns = calculate_solutions([50 , 8 , 3 , 7 , 2 , 10],556)
+print(len(solns), count)
+
+ +

And faster code listing:

+ +
class InvalidExpressionError(ValueError):
+    pass
+
+subtract = lambda x,y: x-y
+def add(x,y):
+    if x<=y: return x+y
+    raise InvalidExpressionError
+def multiply(x,y):
+    if x<=y or x==1 or y==1: return x*y
+    raise InvalidExpressionError
+def divide(x,y):
+    if not y or x%y or y==1:
+        raise InvalidExpressionError
+    return x/y
+
+count = 0
+add.display_string = '+'
+multiply.display_string = '*'
+subtract.display_string = '-'
+divide.display_string = '/'
+
+standard_operators = [ add, subtract, multiply, divide ]
+
+class Expression(object): pass
+
+class TerminalExpression(Expression):
+    def __init__(self,value,remaining_sources):
+        self.value = value
+        self.remaining_sources = remaining_sources
+    def __str__(self):
+        return str(self.value)
+    def __repr__(self):
+        return str(self.value)
+
+class BranchedExpression(Expression):
+    def __init__(self,operator,lhs,rhs,remaining_sources):
+        self.operator = operator
+        self.lhs = lhs
+        self.rhs = rhs
+        self.value = operator(lhs.value,rhs.value)
+        self.remaining_sources = remaining_sources
+    def __str__(self):
+        return '('+str(self.lhs)+self.operator.display_string+str(self.rhs)+')'
+    def __repr__(self):
+        return self.__str__()
+
+def ValidExpressions(sources,operators=standard_operators,minimal_remaining_sources=0):
+    global count
+    for value, i in zip(sources,range(len(sources))):
+        yield TerminalExpression(value=value, remaining_sources=sources[:i]+sources[i+1:])
+    if len(sources)>=2+minimal_remaining_sources:
+        for lhs in ValidExpressions(sources,operators,minimal_remaining_sources+1):
+            for rhs in ValidExpressions(lhs.remaining_sources, operators, minimal_remaining_sources):
+                for f in operators:
+                    try:
+                        count+=1
+                        yield BranchedExpression(operator=f, lhs=lhs, rhs=rhs, remaining_sources=rhs.remaining_sources)
+                    except InvalidExpressionError: pass
+
+def TargetExpressions(target,sources,operators=standard_operators):
+    for expression in ValidExpressions(sources,operators):
+        if expression.value==target:
+            yield expression
+
+def FindFirstTarget(target,sources,operators=standard_operators):
+    for expression in ValidExpressions(sources,operators):
+        if expression.value==target:
+            return expression
+    raise (IndexError, ""No matching expressions found"")
+
+if __name__=='__main__':
+    import time
+    start_time = time.time()
+    target_expressions = list(TargetExpressions(556,[50,8,3,7,2,10]))
+    #target_expressions.sort(lambda x,y:len(str(x))-len(str(y)))
+    print (""Found"",len(target_expressions),""solutions, minimal string length was:"")
+    print (target_expressions[0],'=',target_expressions[0].value)
+    print()
+    print (""Took"",time.time()-start_time,""seconds."")
+    print(target_expressions)
+    print(count)
+
+",259665,,1204,,42752.94167,42753.01736,Efficiency considerations: nested loop vs recursion,,2,3,1,,,CC BY-SA 3.0, +340299,1,,,1/16/2017 20:43,,4,105,"

Should you format numbers in the view or in the services layer?

+ +

If you are going to round a number to 2 decimal places. Does it make sense if that number is rounded in the view or in the services layer? What about if that number is a nullable type like in C#? I don't want to write if(number.HasValue) { number.ToString() }

+ +

It is definitely more testable if this is done in the services layer for more complex formatting.

+",117486,,13156,,42751.89028,42751.89028,Does formatting belong in the view or in the services layer?,,1,2,,,,CC BY-SA 3.0, +340300,1,,,1/16/2017 20:59,,-6,1173,"

doing an lscpu cmd this is what i get

+ +
$ lscpu
+Architecture:          x86_64
+CPU op-mode(s):        32-bit, 64-bit
+Byte Order:            Little Endian
+CPU(s):                2
+On-line CPU(s) list:   0,1
+Thread(s) per core:    1
+Core(s) per socket:    1
+Socket(s):             2
+NUMA node(s):          1
+Vendor ID:             GenuineIntel
+CPU family:            6
+Model:                 63
+Model name:            Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
+Stepping:              0
+CPU MHz:               2494.224
+BogoMIPS:              4988.44
+Hypervisor vendor:     VMware
+Virtualization type:   full
+L1d cache:             32K
+L1i cache:             32K
+L2 cache:              256K
+L3 cache:              30720K
+NUMA node0 CPU(s):     0,1
+
+ +

doing the math tells me it should be able to run 2 threads at a given time

+ +

Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s): 2

+ +

however the tomcat configuration sets the max threads at 150

+ +
redirectPort=""8443"" maxThreads=""150"" acceptCount=""15"" enableLookups=""true""
+
+ +

how does the web application support 150 max threads when the cpu only supports 2?

+",259668,,1204,,42751.87639,42752.66597,How does the web application support 150 max threads when the cpu only supports 2?,,2,5,,42752.43333,,CC BY-SA 3.0, +340304,1,,,1/16/2017 23:40,,5,640,"

I wrote a parser for a certain type of binary file with recursive structure. I made its API to be similar to SAX, that is:

+ +
    +
  • the parser accepts an object of a specific interface,
  • +
  • this interface has several methods called as the parsing happens: startFoo(type, name), endFoo(), datum(type, name, value), badEntry(errorMsg), etc.
  • +
  • there are certain promises regarding how these callbacks are called: e.g. for each startFoo there will be an endFoo, with appropriate nesting, etc.
  • +
+ +

I used to think this is a variant of a Visitor pattern. However, the visitor pattern doesn't have multiple callbacks, these callbacks have different arguments, and the Visitor pattern doesn't talk about promises like in the last point. Also, strictly speaking, it's not an in-memory data structure being iterated over, but I guess this part is less important…

+ +

I also think it's not strictly an Observer pattern: there is no state to observe; no late registration for receiving events (ie. either you get all parsing events or none, you can't start in the middle); only a single object is accepted.

+ +

Is there a more proper name for this design pattern?

+ +

EDIT: As I understand it, design patterns exist to quickly and precisely communicate common code structures. However, if all I can say about the above code structure is that it is a ""Strategy"" pattern, or an ""Observer"" pattern, the communication is inefficient. Not all ""Strategy"" pattern implementations have interfaces with multiple methods with promises regarding the order in which they are called, etc.

+ +

I am looking for a name or a phrase that would directly communicate the set of conditions mentioned above, or at least some close approximation of them.

+",1411,,1411,,42756.8125,42756.8125,SAX-like parser: what is this pattern called?,,2,0,1,,,CC BY-SA 3.0, +340311,1,,,1/17/2017 4:47,,12,506,"

In any software development project that involves distributed systems with multiple developers, having Logical and Physical Architecture diagrams is best practice but in my experience these diagrams always start off being well maintained at the start of a project but do not get updated as the project get released and the maintenance phases kick in.

+ +

For complex projects with a lot of distributed processes, the diagrams tend to get outdated or inaccurate really quickly even before the initial release since no one person has all the knowledge.

+ +

Given this background, I want to ask the following questions to the community:

+ +
    +
  1. How important are having accurate and up to date Logical and Physical Architecture diagrams?
  2. +
  3. Are there any tools and processes that can help keep them up to date?
  4. +
  5. Who should be responsible for keeping them up to date? How can sys admins, developers, and QA teams contribute?
  6. +
+",255048,,,,,42752.50764,Keeping Logical and Physical Architecture diagrams updated,,1,2,1,,,CC BY-SA 3.0, +340315,1,340317,,1/17/2017 5:46,,0,3055,"

In cloud computing these two terms really confuses me, block-level virtualization and file-level virtualization .

+ +

As of my knowledge, in file-level virtualization compute systems are not allocated partitions and just deals with the storage systems APIs to retrieve or upload a file.

+ +

Block-level virtualization is allocating a space as partition for compute systems, that compute systems are responsible for setting the file systems, writing and reading processes.

+ +

is that correct and can block have different meaning ?

+ +

N.B : I am not sure if software engineering is the place to ask, if not just tell me and I will remove it.

+",112489,,,,,42752.27986,What's the difference between block-level virtualization and file-level virtualization?,,1,0,,,,CC BY-SA 3.0, +340316,1,340319,,1/17/2017 6:07,,11,1807,"

Let's say that we have the following interface -

+ +
interface IDatabase { 
+    string ConnectionString{get;set;}
+    void ExecuteNoQuery(string sql);
+    void ExecuteNoQuery(string[] sql);
+    //Various other methods all requiring ConnectionString to be set
+}
+
+ +

The precondition is that ConnectionString must be set/intialized before any of the methods can be run.

+ +

This precondition can be somewhat achieved by passing a connectionString via a constructor if IDatabase were an abstract or concrete class -

+ +
abstract class Database { 
+    public string ConnectionString{get;set;}
+    public Database(string connectionString){ ConnectionString = connectionString;}
+
+    public void ExecuteNoQuery(string sql);
+    public void ExecuteNoQuery(string[] sql);
+    //Various other methods all requiring ConnectionString to be set
+}
+
+ +

Alternatively, we can create connectionString a parameter for each method, but it looks worse than just creating an abstract class -

+ +
interface IDatabase { 
+    void ExecuteNoQuery(string connectionString, string sql);
+    void ExecuteNoQuery(string connectionString, string[] sql);
+    //Various other methods all with the connectionString parameter
+}
+
+ +

Questions -

+ +
    +
  1. Is there a way to specify this precondition within the interface itself? It is a valid ""contract"" so I'm wondering if there is a language feature or pattern for this (the abstract class solution is more of a hack imo besides the need of creating two types - an interface and an abstract class - every time this is needed)
  2. +
  3. This is more of a theoretical curiosity - Does this precondition actually fall into the definition of a precondition as in the context of LSP?
  4. +
+",242348,,31260,,42753.47986,42753.69028,How to specify a precondition (LSP) in an interface in C#?,,4,3,1,,,CC BY-SA 3.0, +340331,1,,,1/17/2017 10:46,,0,797,"

I'm trying to wrap a RESTful API around an existing implementation of a game.

+ +

Here is a possible state diagram of a simple API design that comes to mind:

+ +

+ +

I'm having troubles here because the existing domain implementation does not expose a list of moves. Instead it exposes the state of the whole board. A list of moves is not needed because the game does not support undos, or replays of games etc.

+ +

I can think of a few options:

+ +
    +
  1. I drill open the existing domain implementation and have it expose a list of moves for each game. So that I can implement the design from the diagram above.

    + +

    I don't like this because having to change the existing domain model only to shoehorn it into an API has a bad smell.

  2. +
  3. I design a state transition from /api/games/{id} with the extension relation newMove (or sth. similar) that allows a POST on the /api/games/{id}/moves resource. I will provide an API doc under /docs/rels/newMove. There won't be an implementation for GET on the list of moves.

    + +

    Is this still RESTful? Must every RESTful resource have a representation and allow GET?

    + +

    I find this appealing, though, because it means almost no additional server side logic.

  4. +
  5. Of course there are more options. I could design the board as a sub-resource and implement a PUT state transition (+ link relation and docs) to update it.

    + +

    I don't like this either. It means that I have to transfer the whole board, including the new move. Putting the whole board back to the server and have the server find out, which field has changed and also validate that only one field has changed, and what not, is needlessly complex. And it would result in some amount of additional logic on the server. Which again smells bad. So maybe PATCH instead?

  6. +
  7. Finally I could design the board as a sub-resource and every field of the board as a sub-resource of the board (e.g. /api/games/{id}/board/5/11) and then implement a PUT to make the move.

    + +

    This doesn't look bad. But it would be stupid to have the client get the state of the board by having it make multiple requests to get each field state. So there should probably be a representation of the complete board somewhere (e.g. /api/games/{id}/board or as a value of the game itself). In this case the question arises if it makes sense to implement GET for every field of the board at all because you ain't gonna need it.

    + +

    So here again, must every RESTful resource have a representation?

  8. +
+ +

What is the simplest solution that still adheres to the REST constraints?

+ +

One of the options above or maybe something completely different?

+",120304,,120304,,42752.80833,42933.40694,RESTful API design for an existing domain implementation of a simple game?,,2,2,,,,CC BY-SA 3.0, +340334,1,340920,,1/17/2017 11:09,,3,1075,"

In my current project I'm refactoring the code to get a DBAL. I have a class Entity that is the base class for all classes that model a database table. So there are several classes that inherit from Entity like Document, Article and so on.

+ +
abstract class Entity {
+    /** @var DatabaseRequest $dbReuest */
+    protected $dbRequest;
+
+    /** @var Login $login */
+    protected $login;
+
+    /* some methods like insert(), update(), JsonSerialize(), etc.
+
+}
+
+ +

Since all these classes have the same constructor __construct( DatabaseRequest $dbRequest, Login $login ) and I don't want to throw around those two paramenters, I also made this:

+ +
class EntityFactory {
+
+  public function __construct( DatabaseRequest $dbRequest, Login $login )
+  {
+      $this->dbRequest = $dbRequest;
+      $this->login = $login;
+  }
+
+  public function makeEntity( string $class )
+  {
+
+    if ( $this->extendsEntity( $class ) ) {
+
+        $reflection = new \ReflectionClass( $class );
+        $construct = $reflection->getConstructor();
+        $params = $construct->getParameters();
+
+
+        return new $class( clone $this->dbRequest, clone $this->login );
+    }
+
+    throw new APIException( ""Class $class does not extend "" . Entity::class, JsonResponse::DEBUG );
+  }
+
+}
+
+ +

You call the method like this: $factory->makeEntity( Document::class ) and this will give you an object of that class.

+ +

This way a change in in the Entity constructor reduced the refactoring effort to a minimum. However, in class that extend EntityI also defined some methods for the relationships between their tables. E.g.:

+ +
class DocumentAddressee extends Entity {
+
+    /* ... */
+
+    public static function createFromCustomer( Address $address )
+    {
+        $self = new DocumentAddressee( clone $address->dbRequest, clone $address->login );
+
+        /* transferring data from Address to $self */
+
+        return $self;
+    }
+
+}
+
+ +

(According to verraes.net this is a legit use of static methods as named constructory ).

+ +

And methods like these happen quite some times (roughly 1-2 methods per foreign key in a table). Now I'd like to keep those methods, because I can easily access dependend data this way. But I'd also like to keep those constructors to the factory so I don't have to refactor all 100+ Entity-classes when the Entity construcor changes (this might happen if we'll decide to use a QueryBuilder in the future.

+ +

Is there already some kind of best practice to handle these methods? Should I propably handle those relationships within the factory or model those relationships in extra classes?

+",238992,,,,,42765.25347,Handle Named constructors with factory pattern,,1,1,2,,,CC BY-SA 3.0, +340340,1,,,1/17/2017 12:41,,1,301,"

I am inheriting an api decision in an SDK I am writing where I am required to fetch domain objects (entries) from the server like this:

+ +
blogEntries = client.content_type('blog').entries
+
+ +

As you can see, the setter for the content_type property here is parameterised. To implement this design my Client class has a method like this that sets the @content_type instance variable before passing it on to other objects:

+ +
def content_type(content_type_uid)
+  @content_type = content_type_uid
+  // do something with @content_type
+end
+
+ +

Now elsewhere in the class when I need to fetch back the content_type I can no longer call an attr_reader method like configuration.content_type because it conflicts with the method above. Now that forces me to have a separate getter method called get_content_type which is really non-idiomatic ruby.

+ +

How do I work myself out of this conflicting situation where instead of a conventional setter like content_type= I have a setter with a different signature? What kind of trade-off would make most sense?

+",17918,,,,,42752.64306,Conflict in getter and setter method names in ruby api design,,2,0,1,,,CC BY-SA 3.0, +340347,1,340375,,1/17/2017 13:26,,23,7814,"

I have to write unit tests and integration tests for a project.

+ +
    +
  • Should all tests be put into a single tests folder?
  • +
  • Or should unit tests and integration tests each be in a separate tests folder?
  • +
  • Or should I even put them into separate projects?
  • +
+ +

If I keep them together, are there any advantages or drawbacks with this approach?

+",222705,,60357,,42752.57847,42966.62569,Should I separate unit tests and integration tests?,,1,2,3,,,CC BY-SA 3.0, +340351,1,340354,,1/17/2017 13:45,,3,12590,"

I've a domain class named Campaign.

+ +
Class Campaign {
+    public long CampaignID { get; set; }
+    public string CampaignName { get; set; }
+    public DateTime StartTime { get; set; }
+    public DateTime EndTime { get; set; }
+}
+
+ +

In addition to that, i'm using Asp.net MVC, and I defined a model class.

+ +
[DataContract]
+public class CampaignModel
+{
+    [DataMember(Name=""id"")]
+    public long Id { get; set; }
+
+    [DataMember(Name = ""name"")]
+    public string CampaignName { get; set; }
+
+    [DataMember(Name = ""isDummy"")]
+    public bool IsDummy { get; set; }
+} 
+
+ +

My goal is to convert from one class to another and vice versa.

+ +

I'm using the ASP.NET for REST calls from my Angular 2 app.

+ +

My app has two use cases, one is to create Campaign, therefore I need to convert from CampaignModel to Campaign class and store the campaign. +The second use-case is to load existing Campaign into the UI, therefore convert from Campaign to CamapignModel.

+ +
    +
  • If that's matter, the conversation between the two isn't trivial and involves using complex data structures.
  • +
+ +

I've thought of several options:

+ +
    +
  1. Create two methods in the CampaignModel.

    + +
    public Campaign Convert();
    +public CampaignModel Convert(Campaign);
    +
  2. +
  3. Create separate class e.g. CampaignConverter

  4. +
  5. Use dedicated known library that facilitates the conversion between the two, or known design pattern?
  6. +
  7. any other ideas?
  8. +
+ +

Thanks

+",259750,,,,,42752.59583,Convert class to another class and vice versa,,1,2,,,,CC BY-SA 3.0, +340353,1,,,1/17/2017 13:58,,3,2004,"

We have an application that allows users to enter conditionals in the form bound op x op bound2, we store this as a string, and then parse it at runtime to evaluate it.

+ +

It is a decent amount of work, for a very limited conditional statement.

+ +

We are looking for a way to serialize and then evaluate complex conditions, with at least a switch statement level of complexity.

+ +

I saw in this question How to serialize and deserialize lambda expression in F#? that in f# you can just serialize a lambda (and I assume that there is some way to get a lambda from a text string), but:

+ +
    +
  1. We need to do this in Java
  2. +
  3. I know that we can compile java code on the fly, and make it ""safe"" by stripping out keywords, and disallowing things like System. but even if its not a security nightmare, it is prohibitively computationally expensive to do thousands of times.
  4. +
+ +

Does anyone know of a small language out there whose interpreter can do just the basics (conditionals, loops, variable assignment) and be run in/as java, or any way to run those types of expression as they are defined at runtime and need to be persisted.

+ +

Update: To be clear, the minimum functionality I need is an if else chain. I don't just need to evaluate one condition. The one condition is what I have now already.

+",43057,,-1,,42838.53125,42752.84653,Store conditional expression in database,,2,10,,,,CC BY-SA 3.0, +340355,1,340526,,1/17/2017 14:21,,8,7945,"

An example would be if you had multiple inputs, each of which effect a completely different process, (perhaps one changes a div's width, while another sends ajax request for predictive search), would you bind a single event use a switch statement to call the process functions, or set multiple eventListeners bound to each input element?

+ +

Part of the reason for me asking is I'm not sure of the performance implications of multiple event listeners, but even if those are negligible, readability and code maintenance considerations could still impact on the answer.

+ +

Some considerations:

+ +

a) Would it depend on the expected frequency of use?

+ +

b) would a common event such as mousemove be better with one approach, while input be better with another?

+ +

c) if the switch approach is deemed better, what is the maximum size/complexity that the switch statement should be allowed to grow to.

+",,user232573,,,,42754.58264,Should I reduce event listeners by making functions more complex?,,3,0,5,,,CC BY-SA 3.0, +340356,1,,,1/17/2017 15:14,,26,1536,"

I work in a Data Warehouse that sources multiple systems via many streams and layers with maze-like dependencies linking various artifacts. Pretty much every day I run into situations like this: I run something, it doesn't work, I go through loads of code but hours later I realise I've managed to conceptualise the process map of a tiny portion of what I now know later in the day is required, so I ask someone and they tell me that this other stream has to be run first and that if I checked here (indicating some seemingly arbitrary portion of an enormous stack of other coded dependencies), then I would have seen this. It's incredibly frustrating.

+ +

If I were able to suggest to the team that perhaps it'd be a good idea if we did more to make the dependencies between objects more visible and obvious, rather than embedding them deeply in recursive levels of code, or even in the data that has to be present due to it being populated by another stream, perhaps by referring to a well known, tried and tested software paradigm — then I might be able to make my job and everyone else's a lot simpler.

+ +

It's kind of difficult to explain the benefits of this to my team. They tend to just accept things the way they are and do not 'think big' in terms of seeing the benefits of being able to conceptualise the entire system in a new way — they don't really see that if you can model a huge system efficiently then it makes it less likely you'll encounter memory inefficiencies, stream-stopping unique constraints and duplicate keys, nonsense data because it's much easier to design it in keeping with the original vision and you won't later run into all these problems that we are now experiencing, which I know to be unusual from past jobs, but which they seem to think of as inevitable.

+ +

So, does anyone know of a software paradigm that emphasises dependencies and also promotes a common conceptual model of a system with a view to ensuring long term adherence to an ideal? At the moment we pretty much have a giant mess and the solution every sprint seems to be ""just add on this thing here, and here and here"" and I'm the only one that's concerned that things are really beginning to fall apart.

+",259758,,158187,,42764.24236,42764.24236,Is there a programming paradigm that promotes making dependencies extremely obvious to other programmers?,,8,8,4,,,CC BY-SA 3.0, +340364,1,340368,,1/17/2017 16:40,,2,151,"

At my work, we use Git as our version control system. We have a master branch, which I have direct commit access to. Sometimes I have to make a trivial fix, such as fixing a typo in documentation, and the unwritten standard in our office is that getting reviewers for that is a waste of time (please don't argue this point, changing company culture is not a battle I want to fight today). So I have two options for how to go about making my trivial change:

+

Create a Pull Request

+
    +
  1. Create a branch off of master
  2. +
  3. Make my trivial change
  4. +
  5. Push the change to the branch remotely
  6. +
  7. Go the Web UI, which is the only way to make pull requests on our system
  8. +
  9. Create a pull request with zero reviewers*
  10. +
  11. Merge it
  12. +
+

Direct Commit

+
    +
  1. Make my trivial change
  2. +
  3. Commit directly to master
  4. +
+

The end result for the two is the same; my change makes it into master. But the first method takes much more time than the second does. Assuming my team is not interested in reviewing the change, is there any reason, technical or organizational, that I would want to do the pull request rigmarole?

+",81973,,-1,,43998.41736,42752.72292,"If I have direct commit access, is there any reason to create a pull request for a small change requiring no feedback?",,1,7,1,,,CC BY-SA 3.0, +340366,1,,,1/17/2017 17:10,,0,173,"

We are retrofitting an architecture on a fragmented landscape of about 9 software products. These products are all related to a social/community platform. These software products, consisting of webapps, webservices and smartphone apps, were developed by students with no adherence to any architecture. The apps themselves were developed with a certain architecture, but no system-wide architectures have been considered or adhered to. It is therefore my group's responsibility to glue everything together so these software products can communicate together.

+ +

I've been thinking of a service-oriented architecture because I like the idea of being able to plug in services and because a lot of the software products themselves can be called services in their own respect. However, the subject that really bugs me is the service discovery I see in every SOA text. I don't understand why I would need it when:

+ +
    +
  1. The information and service needs of every software product are clearly defined. They know which other products they need and what information they want.
  2. +
  3. There is no external/3rd-party service catalogue and there are no plans to use one.
  4. +
+ +

To put it bluntly, why can't I hardcode needed services within the respective software products? Or even just make small database of servers and use a broker to orchestrate API calls and whatnot? Of course the issue of scalability comes to mind: ""What if a new service is added? Are you going to edit all the source code?"" No, just the ones that need the new service. ""What if all products need the new service?"" Well, then we add it in a small development effort.

+ +

Transparency might also be affected: ""What if we want to know the full service description of a service, and the calls we can make against its API?"" We just look at the documentation then.

+ +

I'm having trouble with understanding the necessity in this context. I'm also nervous that I can't coin it as a true SOA if there is no such thing as service discovery or descriptions. Can someone explain the potential of service discovery in this project?

+",210153,,,,,42752.73194,What is the added value of Service Discovery in SOA for this project?,,1,0,,,,CC BY-SA 3.0, +340377,1,340389,,1/17/2017 19:29,,10,1388,"

From the interview with Kent Beck in a recent Java Magazine issue:

+
+

Binstock: Let’s discuss microservices. It seems to me that test-first on microservices would become complicated in the sense that some services, in order to function, will need the presence of a whole bunch of other services. Do you agree?

+

Beck: It seems like the same set of trade-ofs about having one big class or lots of little classes.

+

Binstock: Right, except I guess, here you have to use an awful lot of mocks in order to be able to set up a system by which you can test a given service.

+

Beck: I disagree. If it is in an imperative style, you do have +to use a lot of mocks. In a functional style where external dependencies are collected together high up in the call chain, then I don’t think that’s necessary. I think you can get a lot of coverage out of unit tests.

+
+

What does he mean? How can functional style liberate you from mocking external dependencies?

+",35257,,-1,,43998.41736,42755.50486,How does functional style helps with mocking dependencies?,,1,3,5,,,CC BY-SA 3.0, +340379,1,,,1/17/2017 20:01,,0,50,"

I was thinking it should hold pointers:

+ +
struct Expr
+{
+    string sym;
+    Expr*[] sub;
+
+    this(self, string sym) {
+        this.sym = sym;
+    }
+
+    @property auto dup() const {
+        auto e = new Expr(sym);
+
+        foreach (s; sub) {
+            e.sub ~= s.dup;
+        }
+
+        return e;
+    }
+}
+
+ +

But then that .dup function will duplicate nodes; N node copies when originally there was only 1; so it needs to be much more complicated than that.

+ +

On the other hand, with Expr values, there may be a larger proliferation of Expr objects, say if I were pooling them.

+ +

So which way works better in symbolic computation projects?

+",212122,,,,,42753.45208,When should an expression tree hold pointers and when should it hold values of subexpressions?,,1,2,,,,CC BY-SA 3.0, +340381,1,,,1/17/2017 20:39,,10,367,"

I work with a team of developers who are given choices as to what hardware and software they run. Our feeling is that this scenario lets us see a wide variety of target systems before ever hitting test. Our experience is that we find a number of strange problems in different browsers and operating systems soon after the introduction of the problem. But that is just one group's experience.

+ +

This variety of systems is difficult for our infrastructure and security teams, so it comes up often as a pain point.

+ +

Is it more beneficial to have homogeneous or heterogeneous development environments on a team of developers?

+",71380,,,user22815,42759.67639,42759.67639,Is there an advantage to heterogenous development environments?,,1,8,2,,,CC BY-SA 3.0, +340383,1,340466,,1/17/2017 21:19,,1,14232,"

If I need to have a function do some processing in order to initialize multiple of the object's variable (I'm having a hard time coming up with a simple example that doesn't seem weird).

+ +

Which of the following patterns is preferable?

+ +
class foo( object ):
+    def __init__(self, bar, bar2):
+        self.bar = bar
+        self.baz, self.square_baz = self.func(bar2)
+
+    def func(self, bar2):
+        return self.bar + bar2, bar2 * ba2
+
+ +

Or

+ +
class foo( object ):
+    def __init__(self, bar, bar2):
+        self.bar = bar
+        self.func(bar2)
+
+    def func(self, bar2):
+        self.baz = self.bar + bar2
+        self.square_baz = bar2 * bar2
+
+ +

I feel like in the first pattern, the __init__ constructor and the processing func are nicely decoupled. And it's easy to tell what variables each instance of foo will contain. On the other hand, having to return multiple variables from a function to assign them to an object seems... ugly, yet this seems to be a consequence of the library I'm using.

+",251322,,,,,43937.48194,Assigning instance variables in function called by __init__ vs. function called from __init__,,2,1,1,,,CC BY-SA 3.0, +340386,1,,,1/17/2017 22:40,,2,504,"

First, I am not a programmer (yet) and I can only understand basic algorithms written in pseudocode (+Dijkstra, which is a little harder than others, for me). I have been trough logic, set theory, relations, combinatorics. Currently, I am studying graph theory.

+ +

Can you give me a simple explanation on how Lyndon words are constructed with Duval's algorithm? And how is that related to de Bruijn sequwnce and what pseudocode is used to construct that sequence? Simple, because I am not so math proficient in understanding some of the notation and concepts, and also because I haven't study algorithms and programming. This problem was in my graph theory lessons ----> Eulerian and Hamiltonian cycles.

+ +

I tried understanding it from the wikipedia, but I only understood it in parts. Also, pseudocode from GitHub is not understandable to me, and I couldn't find another. Here it is:

+ +
def LyndonWords(s,n):
+  """"""Generate nonempty Lyndon words of length <= n over an s-symbol alphabet.
+  The words are generated in lexicographic order, using an algorithm from
+  J.-P. Duval, Theor. Comput. Sci. 1988, doi:10.1016/0304-3975(88)90113-2.
+  As shown by Berstel and Pocchiola, it takes constant average time
+  per generated word.""""""
+
+  w = [-1] # set up for first increment
+  while w:
+    w[-1] += 1 # increment the last non-z symbol
+    yield w
+    m = len(w)
+    while len(w) < n: # repeat word to fill exactly n syms
+        w.append(w[-m])
+    while w and w[-1] == s - 1: # delete trailing z's
+        w.pop() 
+
+ +

I would be thankful if you could show me by example, with some letters or numbers, so that I can intuitively comprehend it, and with more understandable pseudocode, heavy commented if possible. Thanks.

+",259802,,188153,,42752.99792,42873.12431,"Duval's algorithm, Lyndon words and de Bruijn sequence",,1,1,,,,CC BY-SA 3.0, +340392,1,340397,,1/18/2017 1:21,,0,136,"

I am wondering how the apps that allow +a user to choose an item and, once the user has selected an item and checked out, give the retailer information about order that has been placed.

+ +

For example, say a takeaway has an iOS app and customer has chosen fish and chips and placed an order.

+ +
    +
  1. How does the takeaway know an order has been placed? If it's TCP IP then I guess we need to start a server on the takeaway's computer? Is that right?

  2. +
  3. How can modify the menu without making any changes in app from developer side?

  4. +
+ +

I am looking for an answer about how things work in real world. Once I have the idea then developing it is a piece of cake.

+",148295,,110531,,42753.33403,42753.62569,How does an app send an order to a retailer? What happens under the hood?,,2,1,,,,CC BY-SA 3.0, +340393,1,,,1/18/2017 1:25,,2,642,"

Imagine we have an integer amount (e.g. integer cents) to be allocated across a weighted set of items where the total allocated amount must sum to the original amount. For example:

+ +
Amount: $1.00
+
+Item      Weight     Allocated Amount
+a         1          $0.33
+b         1          $0.33
+c         1          $0.33
+
+ +

Which could be brought to satisfaction by adjusting item c to be $0.34.

+ +

Is there an algorithm that results in even distribution of rounding error with only earlier weights and the total weight being known?

+",129895,,129895,,42754.02153,42814.21875,Allocating an integer sum proportionally to a set of reals,,3,1,,,,CC BY-SA 3.0, +340395,1,340407,,1/18/2017 1:31,,2,623,"

Von neumann architecture allows sequential processing of instructions. So, a single core within a CPU executes instructions sequentially.

+

Consider, OS providing 1-1 threading model(here) in a multi-core processor system,

+
+

Properties of concurrent system

+
    +
  • Multiple actors(say, each thread assigned to a different core)
  • +
  • Shared resource(heaps, global variables, devices)
  • +
  • Rules for access(Atomic/Conditional synchronization)
  • +
+

With atomic synchronization mechanism(Lock) or conditional synchronization mechanism(say Semaphore), messages are indirectly passed between actors, that help compliance with rules for accessing shared resource.

+
+

In this answer, it says, The actor model helps to force you to program concurrent portions of your code as self contained nuggets that can be performed in parallel and without depending on another piece of code.

+

Question:

+

To help understand the difference between concurrent vs actor model,

+

Using actor model, How can one program concurrent portion(critical section) of code as self contained nuggets?

+",131582,,-1,,43998.41736,42753.33819,"Using actor model, how can one program concurrent portion (critical section) of code as self contained nuggets?",,2,0,1,,,CC BY-SA 3.0, +340408,1,340463,,1/18/2017 8:36,,0,60,"

Let's say I want to model a phonebook.

+ +

I would like a phonebook to be maintained in sorted order by name of the person that the phonebook entry is created for. However, since it is quite possible that there are two people with the same name, I cannot use any of the sorted collections, because they require the use of keys (which, by definition, must be unique). On the other hand, I must have some sort of unique identifier for each entry, should I desire to implement the phonebook as a service, otherwise, I would have no way to identify the entry that I would like to update or delete.

+ +

I would like to hear suggestions and ideas on how to organize the data and the process of adding and updating it, so it is ordered in a desired way, and yet maintain the possibility of quick lookup of a single entry based on a unique id.

+ +

The idea that is closest to me goes something like this.

+ +

Phonebooks are organized alphabetically, so one of the approaches might be something like this:

+ +
SortedDictionary<char, List<PhonebookEntry>> phonebook = new SortedDictionary<char, List<PhonebookEntry>>();
+
+ +

The key would be the first character of the PhonebookEntry primary identifier (name).

+ +

The list would be maintained in sorted order on insert and update. The insert would search for the index in the list where the entry should be added, and then insert the entry at that index. Alternative would be a brute force approach: to simply sort the list after each insert. Update would check whether the name of the entry was updated. If so, it would sort the list accordingly if the entry remains in the same list, or it would remove it from that list and insert it in the appropriate list of the dictionary.

+ +

Now, the problem is how to find the entry based on some sort of unique identifier that would be assigned to each entry (some GUID, for instance). I am closest to the idea to have another dictionary, like this:

+ +
Dictionary<Guid, Tuple<char, long>> phonebookEntryMap = new Dictionary<Guid, Tuple<char, long>>();
+
+ +

The key would be the unique ID of the entry, the value would be a Tuple that contains the first character of the entry (to identify it in the main structure) and the index in the list. The obvious downside to this approach is that this structure needs to be carefully maintained. The upside is that it is a minimal memory overhead, while providing all the information required for a quick access to the desired entry.

+ +

An alternative would be to keep a reference to PhonebookEntry in the mapping dictionary instead of the Tuple. It would require more memory, but no mapping overhead.

+ +

Are there any other suggestions on how to elegantly resolve this problem?

+",235135,,,,,42753.83403,Organizing the same set of data based on two different criteria simultaneously,,1,2,,,,CC BY-SA 3.0, +340414,1,,,1/18/2017 11:01,,4,4559,"

In a multi-threaded environment, we must consider concurrent access to writable resources. A common approach is to use Monitor or its shorthand form lock.

+ +

Task is at a different abstraction level than Thread. A task may run on a thread of its own (and according to the logs, they do so in our application), but that is not guaranteed. See e.g. What is the difference between task and thread?:

+ +
+

If the value you are waiting for comes from the filesystem or a + database or the network, then there is no need for a thread to sit + around and wait for the data when it can be servicing other requests. + Instead, the Task might register a callback to receive the value(s) + when they're ready.

+
+ +

That is, that kind of Task somehow shares a Thread with other running code (I must admit that I do not understand how that works in detail, currently it looks to me like a specialization of the ""famous"" DoEvents).

+ +

Consequently, Monitor won't be able to distinguish between them, and - because Monitor can be re-entrant - allow both of them access the resource. That is, Monitor ""fails.""

+ +

Examples with Threads typically use Monitor nonetheless. So I want to ask how I can be sure that Monitor is safe with a Task (or: how can I be sure that a Task is running on a Thread of its own).

+",214847,,-1,,42878.52778,42759.66181,When is it safe to use Monitor (lock) with Task?,,1,3,1,,,CC BY-SA 3.0, +340416,1,,,1/18/2017 11:31,,1,509,"

I'm currently working on a native mobile solution where I need to design a modular application.

+ +

Imagine a scenario where you have your core application and 3 supporting modules.

+ +

You can either sell your entire application or your client can negotiate and select what modules are more important. Your client might also want to buy the module separately.

+ +

+ +

From a developer point of view, each module represents a different project and the core application must be designed to support them one-by-one.

+ +

What are the best design solutions that I should look for when developing this with Android Studio and XCode?

+ +

Should I integrate each module in the project as libraries and make the Main Project independent?

+",202620,,,,,42753.47986,How should one design a modular mobile application?,,0,3,,,,CC BY-SA 3.0, +340417,1,340421,,1/18/2017 11:37,,1,395,"

I have added this method to a C# class:

+ +
public bool CanAddLeave(Leave newLeave, out AddLeaveResult result)
+{
+    result = _repository.CreateSqlQuery(""CanSendLeaveRequest "")
+            .SetParameter(""employeePositionId"", newLeave.EmployeePosition.EmployeePositionId)
+            .UniqueResult<AddLeaveResult> ();
+
+    return (result == AddLeaveResult.Success);
+}
+
+ +

The method returns a bool which the client will use to determine quickly if a leave (holiday/time off etc) request can be sent to the employee's manager by email.

+ +

The AddLeaveResult output parameter is an enum with following values Success =0, NoManagersWithEmailAddress =1, NoManagersToDeliverTo = 2

+ +

When writing this I had the TryParse pattern in mind, but this code smells. My gut is telling me there's a better way, but I can't think of one. Can you suggest a better, cleaner pattern to use?

+",132092,,,,,42753.55972,smelly C# pattern - refactoring advice please,,2,0,0,,,CC BY-SA 3.0, +340418,1,,,1/18/2017 11:49,,1,577,"

I have a few operations to make over many similar elements. +I would like to collect data from each element firstly and next bind all the data to an object (binding is expensive operation so I need to do it once).

+ +

Is it consistent with the Visitor pattern?

+ +

Example of my problem:

+ +
class Element {
+    public $name;
+
+    public function accept(VisitorInterface $visitor) {
+        $visitor->visitElement($this);
+    }
+}
+
+class SimpleVisitor implements VisitorInterface {
+    private $data = [];
+
+    public function visitElement(Element $element) {
+        $this->data[] = $element->name;
+    }
+
+    public function bindData(Object $object) {
+        $object->setNames($this->data);
+    }
+}
+
+$visitor = new SimpleVisitor();
+$object = new Object();
+
+$elementA = new Element();
+$elementA->name = 'test1';
+$elementA->accept($visitor);
+
+$elementB = new Element();
+$elementB->name = 'test2';
+$elementB->accept($visitor);
+
+$visitor->bindData($object);
+
+",52939,,,,,42753.54167,Visitor pattern and collecting visited data,,1,4,,,,CC BY-SA 3.0, +340420,1,,,1/18/2017 11:52,,1,608,"

I am working on problem where a call to reduce a counter will come to a service and if counter is greater zero then call should be able to reduce it else fail.

+ +

Pretty straightforward? huh!

+ +

For a request get the counter value, reduce and put it back

+ +

Well it becomes interesting with below Constraints:

+ +
    +
  1. Request is sandboxed: So request can come to any host, each request creates a new thread and dies after returning the response. (So no batch update possible out of the box, in other words can't update counter with -10 on behalf of 10 requests if each request wanted to do -1)
  2. +
  3. Maximize success rate of parallel requests for same counter update
  4. +
  5. Minimize latency impact due to your solution (<700ms)
  6. +
  7. Counter is stored in some data store (lets say DynamoDB, may not be the right data store for accessing same key with high rate as it causes hot partition and increasing the throughput just to support this weird call pattern is not acceptable)
  8. +
+ +

Whats the problem exactly, you ask?

+ +
    +
  1. Accessing same record many times tend to create hot partition scenario where most of underneath data store starts throttling you as your requests/access pattern seems sort of attack. (don't suggest keeping high throughput to support the pattern, not acceptable!)

  2. +
  3. Directly processing a request used to work when there is no contention or say not many parallel request updating the same counter. Now most of the requests (99%) will fail due to lock/conditional fail cases and retries will take hell lot of time for all of them to succeed. (I am ok few requests fail ~10%)

  4. +
+ +

About failures: +""failure due to counter reaching 0"" is not retryable while ""failure due to lock/conditional fail cases"" is retryable.

+ +

Aim is to maximize the success rate of parallel request as much as possible.

+ +

Side Note:

+ +

I am not limited or restricted with particular data model or store. That means you can come up with any data model which help you crack the problem is efficient way and choose any data store you believe is right for such use-case.

+ +

I have a fairly good solution(using randomness) which I can talk about later. (Not putting it upfront in order to keep problem open and interesting to be solved rather than discussing a single solution)

+ +

Wanted to collect thoughts here, how you will approach it!

+",43267,,43267,,42753.64167,42753.64167,Decrement counter with high concurrency in distributed system,,2,3,,,,CC BY-SA 3.0, +340425,1,,,1/18/2017 12:44,,3,617,"

When you browse for the phrase ""constructors must not do work"", then in various blog posts you will find the advice to not let the constructor do work. Despite this, I am having some trouble understanding why this is the case. Additionally, this popular post suggests to take such advice with a grain of salt.

+ +

I have an example of two implementations of the same situation. In the situation, a AFactory has a method createA, using a B. A needs a query result, that B produces. There are two ways to implement this:

+ +

Example 1:

+ +
class AFactory {
+    public function createA(B $b): A {
+        return new A($b->getQueryResult());
+    }
+}
+
+class A {
+    private $query_result;
+
+    public function __construct(array $query_result) {
+        $this->query_result = $query_result;
+    }
+
+    public function doFooWithQueryResult() {
+        // Do something with query result
+    }
+
+    public function doBarWithQueryResult() {
+        // Do something with query result
+    }
+}
+
+ +

In the first example, the factory fetches the query result and passes it to A's constructor. A then merely assigns the query result to the corresponding class property. However, there is one problem here: A does not verify if the query result is a valid data structure, i.e. an actual query result suited for A. It does not know where it came from. The responsibility for this validation has now leaked to the AFactory, and A has become very tightly coupled to AFactory. The other implementation resolves this issue, but then the constructor performs work. And apparently that is bad.

+ +

Example 2:

+ +
class AFactory {
+    public function createA(B $b): A {
+        return new A($b);
+    }
+}
+
+class A {
+    private $query_result;
+
+    public function __construct(B $b) {
+        $this->query_result = $b->getQueryResult();
+    }
+
+    public function doFooWithQueryResult() {
+        // Do something with query result
+    }
+
+    public function doBarWithQueryResult() {
+        // Do something with query result
+    }
+}
+
+",233992,,-1,,42878.52778,42753.82639,Tell don't ask vs constructor doing work,,3,4,1,,,CC BY-SA 3.0, +340433,1,340436,,1/18/2017 14:45,,30,27592,"

I'm building libraries with various small utility functions in C#, and trying to decide on a namespace and class naming convention. My current organization is like this:

+ +
Company
+Company.TextUtils
+    public class TextUtils {...}
+Company.MathsUtils
+    public class MathsUtils {...}
+    public class ArbitraryPrecisionNumber {...}
+    public class MathsNotation {...}
+Company.SIUnits
+    public enum SISuffixes {...}
+    public class SIUnits {...}
+
+ +

Is this a good way to organize the namespace and classes, or is there a better way? In particular it seems like having the same name in the namespace and class name (e.g. Company.TextUtils namespace and TextUtils class) is just duplication and suggests that the scheme could be better.

+",259883,,,,,42754.95208,C# namespace and class naming convention for libraries,,4,4,5,,,CC BY-SA 3.0, +340444,1,340497,,1/18/2017 15:57,,14,17827,"

I was cleaning unused variables warnings one day, and I started to ponder, what exactly is the problem about them?

+

In fact, some of them even help in debugging (e.g. inspect exception details, or check the return-value before returned).

+

I couldn't find real actual risk in having them..

+

Examples

+

I do not mean lines that takes time of other programmers' attention, such as:

+
int abc = 7;
+
+

That is an obvious redundancy and distraction. I mean stuff like:

+
try {
+    SomeMethod();
+} catch (SomeException e) {
+    // here e is unused, but during debug we can inspect exception details
+    DoSomethingAboutIt();
+}
+
+",119774,,-1,,43998.41736,42754.90417,Why unused variables is such an issue?,,7,16,2,,,CC BY-SA 3.0, +340450,1,375867,,1/18/2017 16:17,,8,3647,"

Visual studio projects, as opposed to makefiles or other projects I know, have some quirks:

+ +
    +
  • The directory structure of the project has no real connection to actual directory structure - all directories are purely virtual - this makes it harder to re-add mass of files while keeping directory structure
  • +
  • The project consists of rather complex XML, that file contains everything from files to compiler settings
  • +
+ +

Now whenever I merge branches, I have conflict on the project files, because everyone inadvertently changes them as they operate over the project. Often the changes happen in such ways that the merge tool does not even recognize conflict properly. In those cases, files end up missing in the project, outdated settings reappear and so on.

+ +

Our project is 2010 C++ solution consisting of five separate sub-projects.

+ +

Are there any strategies that would have low impact on the development process, but would alleviate problems caused by merging project files?

+",80416,,80416,,42759.41389,43306.66944,How to deal with merging of Visual Studio projects,,2,5,1,,,CC BY-SA 3.0, +340469,1,340474,,1/18/2017 21:24,,17,11932,"

We are designing a RESTful API that is mainly intended to meet the needs of a single client. Because of its very particular circumstances, this client has to make as few requests as possible.

+ +

The API handles i18n through an Accept-Language header in the requests. This works for all things that the client needs to do except for one feature, in which client needs to store the responses of a request to a single endpoint in all available locales.

+ +

Can we somehow design the API in a way that allows the client to grab all this information with a single request and without breaking a consistent, well-structured RESTful API design?

+ +

Options we have considered so far:

+ +
    +
  • Allowing the inclusion of multiple locales in the Accept-Language header and adding localized versions for all requested locales in the response, each one identified by its ISO 639-1 language code as the key.
  • +
  • Creating something like an ""?all_languages=true"" parameter to that endpoint and returning localized versions for all available locales in the response if that parameter is present.
  • +
  • (If none of the above works for us) making multiple requests to grab all localized versions from client.
  • +
+ +

Which one is the best alternative?

+",259922,,55314,,42759.66458,43728.58889,RESTful API and i18n: how to design the response?,,1,0,5,,,CC BY-SA 3.0, +340473,1,,,1/18/2017 22:18,,0,360,"

I'm developing a Cordova app and I've got the UI ready but I need data to my app from a database. For example I want my cordova app to include user authentication when the user opens the app which means, I have to access my database in some way to check the user input. I would also like to get data from the database and show in the app such as the members information, ranking and so on. How do I do this in a easy and proper way? Am I right when I say that there is not a good thing to access a database directly from the javascript in the cordova app, for security reasons?

+ +

Is the solution through a architecture that looks like this?

+ +

Cordova app(In Google play) --> Application Server --> Database

+",259924,,,user22815,42754.75833,42754.75833,Send data between database and Apache Cordova app in a secure way,,1,0,1,,,CC BY-SA 3.0, +340479,1,,,1/19/2017 0:50,,0,57,"

I am working on a project that is using Thrift and as a result has auto generated code.

+ +

Firstly, what are the pros and cons of placing this generated code in SCM vs not and insisting developers generate themselves if the wish the build the dependent software package.

+ +

I feel like auto generated code shouldn't be placed in SCM but then again there are shackles attached to having to generate the code via Thrift.

+",57613,,,,,42754.05347,Where to place Thrift generated code,,1,0,,,,CC BY-SA 3.0, +340481,1,,,1/19/2017 1:33,,1,94,"

Should every single class in my system have an interface?

+ +

I understand that interfaces provide an abstraction from the implementation of a class and so changes to the implementation do not affect classes that are using the interface, i.e. cross platform implementations may differ but the interface can stay the same.

+ +

But what about when testing? I want to test every class and function I write, and sometimes I want to mock classes that may use heavy resources like databases or third party web apis.

+ +

If I create an interface for every single class in my system I can easily create mocks and unit tests against the interfaces, and this will allow me to test all classes and functions in the system, but I feel like this is overkill.

+ +

Is there a more efficient way to get 100% coverage when testing a system, using unit tests and mocks, without creating interfaces for every single class?

+",153617,,,,,42754.06458,Should every class in my system have an interface?,,0,2,,42754.32986,,CC BY-SA 3.0, +340483,1,340484,,1/19/2017 2:53,,-2,291,"

I recently did a phone interview with a company. The interviewer told me that he didn't like like to get people to write code on the spot without access to stack overflow or documentation because that's not how programming works in real life.

+ +

He emailed me a small programming task to complete in the next couple of days and asked me to email my response or share it on github or something.

+ +

I've completed the task in a private repo on Github, but I'm not sure if I should share the full, ""un-curated"" commit history with the interviewer.

+ +

Should I email him the code, or should I make the repo public with all my changes?

+",41989,,,,,42754.16875,Should I show my commit history on an Interview Question?,,1,2,,42760.99444,,CC BY-SA 3.0, +340487,1,,,1/19/2017 6:15,,5,176,"

We are currently building a framework (closed source static library) that will communicate with some Smart home devices via Wi-Fi. This framework will be used by 3rd party developer to build their own applications (mainly mobile application) in order to communicate with those devices.

+ +

Currently, we have split opinions on if the framework should generate any visible logs (say a log file, or event logger) on a release version (we will supply a debug and a release version to 3rd party developers).

+ +

Reasons to have logs:

+ +
    +
  • Logs are always helpful if we need to find out the root cause of unexpected error.
  • +
  • Any form of logs is always good
  • +
  • Getting the logs information from mobile phone should be easy. (By app user or support desk technician)
  • +
  • We can prove it is application developers' fault if they blame it to us.
  • +
  • Some issues may only happen in production that cannot be reproduce on test environment.
  • +
  • Log files are not big anyway. A small log file that use some of the device storage shouldn't be an issue.
  • +
  • All servers application/API always have logs.
  • +
  • Reverse engineer is always possible via decompile, so it shouldn't really matters.
  • +
+ +

Reasons to not have logs:

+ +
    +
  • Framework doesn't own the application.
  • +
  • Most users have no idea how to get the log files out of application's storage (hence less likely to be able to get it), so don't add something that will not be used
  • +
  • Application developer should be able to pinpoint the problem by their own logs/debug method before coming to us
  • +
  • Risk of exposing too much information to end users.
  • +
  • No other frameworks seems to do it (e.g. Facebook SDK/Google SDK)
  • +
  • Taking up device storage. Every byte count.
  • +
  • It is the responsible of the framework user (developer) to have their own logs/crash reporting if they want to.
  • +
  • Debug version with console/debugger logs should be enough for developers.
  • +
+ +

So basically we are not able to get into an agreement. Just wondering what does the wider community think about this if you are the developer that use a closed source static library on a client application.

+",259943,,259943,,42754.26875,42759.84931,Framework logs on client application,,1,4,,,,CC BY-SA 3.0, +340494,1,340495,,1/19/2017 7:54,,28,5094,"

We have here a large legacy code base with bad code you can't imagine.

+ +

We defined now some quality standards and want to get those fulfilled in either completely new codebase, but also if you touch the legacy code.

+ +

And we enforce those with Sonar (code analysing tool), which has some thousands violations already.

+ +

Now the discussion came up to lower those violations for the legacy. Because it's legacy.

+ +

The discussion is about rules of readability. Like how many nested ifs/for.. it can have.

+ +

Now how can I argue against lowering our coding quality on legacy code?

+",19772,,19681,,42754.41111,42754.7125,How to argue against lowering quality standards for legacy codebase?,,8,13,5,42754.8125,,CC BY-SA 3.0, +340509,1,,,1/19/2017 10:50,,2,562,"

I'm building a multi-tenant cloud application and actually I need a bit of help to solve a situation about the login.

+ +

My app is a webscheduler, this allow to each customer to have a certain location where store the appointment, the location is the database of my customer (buyer).

+ +

Each buyer can have multiples locations, so I'll create for each location a database (1 location = 1 license). Until here no problem I can handle the situation correctly.

+ +

What I'm trying to do is create a login panel for each (buyer), noticed that the buyer have also operators, secretaries and his customers. So In the location database will be stored all the credentials of all workers and customers of this location.

+ +

Now the first problem's require the database connection for each tenant, so imagine that my buyer insert his credentials in my app, a practice example is better:

+ +
USERNAME: Foo
+PASSWORD: bar
+
+ +

I need to recover the correct database connection for this tenant. My idea is insert in a XML file an access token (license for exmple), so imagine this structure:

+ +
<licenses>
+    <license>
+        <token>#dfpeFHTd93GHa9x$3d+Asòd3</token>
+        <connection>
+            <host>localhost</host>
+            <username>foo</host>
+            <password>foo</password>
+            <dbname>appname_buyerid_locationid</dbname>
+        </connection>
+    </license>
+    <license>
+        <token>3dòsA+d3$x9aHG39dTHFepfd#</token>
+        <connection>
+            <host>localhost</host>
+            <username>foo</host>
+            <password>foo</password>
+            <dbname>appname_buyerid_locationid</dbname>
+        </connection>
+    </license>
+</licenses>
+
+ +

so how you can see I've a list of license, when the user put his credentials in my system, my app need to retrieve the db connection associated to this user, so start's to iterate through each license in my XML file, and get the connection associated to the token.

+ +

Now the main problem in this logic is the token, 'cause I've no idea how to assign this token to my buyer and his workers and customers.

+ +

So essentially as a rest-api request, the trace (in this case the login) should be associated to a token with the credentials of the user, the token is a license or something like that recognized the location.

+ +

I need to assign this token somewhere in the endpoint to recognize my buyer, but I've no idea where, so I need an help here, maybe someone see something that I can't see; or have maybe another and powerfull logic better than my.

+ +

For any questions or details, please don't esitate to ask.

+ +

Thanks.

+",256642,,,,,42754.67847,Multi-tenant cloud application and user credentials,,1,0,,,,CC BY-SA 3.0, +340511,1,,,1/19/2017 10:56,,1,66,"

I was wondering if there were a set of standards or best practices for displaying information in lists for mobile applications (specifically Android applications that utilize RecyclerViews and custom RecyclerAdapters). I would be especially interested in hearing about this from Android developers who currently work or have worked at professional development agencies. I thought I would post this here as it seems a little to general for SO.

+ +

For example, is there a list of quality standards that one goes through before deciding that yes, the way I retrieve and display information in my app's main data feed is satisfactory enough for a production build?

+ +

In particular, it would beneficial if your answer addressed the following with examples:

+ +
    +
  • Is data cached in the local datastore for offline viewing?
  • +
  • Are server queries sufficiently separated from business logic?
  • +
  • Do we extend our layout with a custom LinearLayout Manager so that images are loaded before they are seen by the user?
  • +
  • Have we implemented an OnScrollListener of some kind to load additional information once the user has scrolled through say 20 adapter items so that we aren't immediately bogging down performance by loading everything all at once?
  • +
  • Etc, etc.
  • +
+ +

The question is fairly open-ended but at the same time, I suspect there is a lot of convergence in terms of what a good feed looks like. It would be nice to know which methods, libraries, etc. you use for each of the above considerations, and when you have decided that your feed is sufficiently fast and stable for use in production.

+ +

As an example, I've included a fragment and associated adapter to show you just how probably bad the feed is in one of my open-source apps. Given that I have not worked at an Android agency, I'm really not sure how to compare what I've produced with what is typically seen in professional production-grade applications:

+ +

FeedFragment: https://github.com/santafebound/yeetclub-android/blob/master/app/src/main/java/com/yeetclub/android/fragment/FeedFragment.java +FeedAdapter: https://github.com/santafebound/yeetclub-android/blob/master/app/src/main/java/com/yeetclub/android/adapter/FeedAdapter.java

+ +

You don't have to look at these, but a general list of things to be aware of and to optimize for RecyclerViews and their adapters would probably be of great benefit to Android developers in general. Thanks in advance for your input. Cheers!

+",227990,,227990,,42754.46042,42754.46042,How should I optimize my RecyclerAdapter (Android) to maximize efficiency of information retrieval in my app's data feed?,,0,0,,,,CC BY-SA 3.0, +340517,1,,,1/19/2017 12:36,,7,1879,"

I worked on a project and found a working solution, but now I got some questions on how I solved some problems in the code. First of all, I am no expert in design patterns, but I know the (anti-)pattern of singletons and normally I avoid them. But what I use quite often, are static helper / utility methods.

+ +

So the project I've been working on is based on the Atlassian Plugin SDK. I implemented servlets, accessed some data via Atlassian components, all pretty straight forward. When it comes to the point of rendering pages, the Atlassian platforms use Apache Velocity. So I build my context map and access the objects in Velocity, everything is fine. At some point, I want to generate URLs to link to other servlets or pages. So, I create a class with static url generation methods. Sadly, I cannot access those methods in Velocity, because there is no instance I could pass to the Velocity context (which defines the scope). But Velocity allows me to use static methods via instances of the class. The resulting class looks like this (java code):

+ +
public class Urls {
+    private static Urls Singleton = new Urls();
+
+    public static Urls getInstance() {
+        return Singleton;
+    }
+
+    private Urls() { }
+
+    public static String getBaseUrl() { ... }
+
+    public static String forUserProfile(ApplicationUser user) { ... }
+
+    ...
+}
+
+ +

Now, in my regular java code, I can simple use the static method:

+ +
String myUrl = Urls.forUserProfile(myUser);
+
+ +

But I can also pass my singleton to the velocity context ...

+ +
context.put(""urls"", Urls.getInstance());
+
+ +

... to use it in my Velocity template:

+ +
<a href=""$urls.forUserProfile($myUser)"">User profile</a>
+
+ +

I used this 'pattern' or similar ones several times in my project. Is there any name for something like this? I assume this is kinda rare, because normally one could simply access the static methods. Do you think there are any big disadvantages I forgot? Any reason not to use this way? Any better ways?

+",259973,,,,,42754.80139,Singleton without any state,,4,4,,,,CC BY-SA 3.0, +340524,1,340546,,1/19/2017 13:28,,3,1037,"

I was wondering what is the best way to represent a function that is passed as a parameter in UML. Because I want to create a sequence diagram of my current networking code in my Swift project and some functions pass a function as parameter to other functions. So I was wondering is their a standard way to describe it in UML, or can I use swift syntax in my UML.

+",259978,,,,,42754.80417,A function as a parameter in UML,,2,1,0,,,CC BY-SA 3.0, +340531,1,340532,,1/19/2017 15:20,,32,26196,"

Is a multi-tenant database:

+ +
    +
  • A DB server that has a different (identical) database/schema for each customer/tenant?; or
  • +
  • A DB server that has a database/schema where customers/tenants share records inside of the same tables?
  • +
+ +

For instance, under Option #1 above, I might have a MySQL server at, say, mydb01.example.com, and it might have a customer1 database inside of it. This customer1 database might have, say, 10 tables that power my application for that particular customer (Customer #1). It might also have a customer2 database with the exact same 10 tables in it, but only containing data for Customer #2. It might have a customer3 database, a customer4 database, and so on.

+ +

In Option #2 above, there would only be a single database/schema, say, myapp_db, again with 10 tables in it (same ones as above). But here, the data for all the customers exists inside those 10 tables, and they therefore ""share"" the tables. And at the application layer, logic and security control which customers have access to which records in those 10 tables, and great care is taken to ensure that Customer #1 never logs into the app and sees Customer #3's data, etc.

+ +

Which of these paradigms constitutes a traditional ""multi-tenant"" DB? And if neither, then can someone provide me an example (using the scenarios described above) of what a multi-tenant DB is?

+",154753,,,,,43710.58403,Do multi-tenant DBs have multiple databases or shared tables?,,2,3,10,,,CC BY-SA 3.0, +340538,1,,,1/19/2017 17:21,,9,1136,"

Here I am in the process of scoping and estimating a relatively small new software development project. I have been through the user stories suggested by the customer and placed tasks against each, with an estimate and some brief notes on how the task will be accomplished. There are acceptance criteria. All ought to be good with the world.

+ +

+ +

When looking at the work I'd planned, I realised there was something missing. There is going to be initial outlay in simply setting up things into which we can bolt functionality. Things that belong to all user stories, not one particular user story.

+ +

For example, part of this application is a service that parses XML. From the user's point of view there are specific stories where different things will need to be done depending on the content of the XML. Actually writing an XML parser - the bits that look for a file, read it and pull out the relevant data before deciding what to do with the contents - is part of all those stories. As is wrapping it in a windows service with an installer etc. It is a developer-centric task with no direct relevance to a user.

+ +

Another relevant example from this particular application is taking and rewriting a block of poor legacy code which is useful to the functions of this app. Again, this has no immediate outcomes for the user but it's necessary work. Where does the planning and execution of this work ""live"" in a project plan focused on user stories?

+ +

I have seen people solve this by writing user stories ""As a developer, I want to ..."" but as has been discussed elsewhere this isn't a user story. It's a developer one.

+ +

I am seeking a concrete answer to this, to help me (and others) planning projects using strict management frameworks like TFS online. These do not tend to have the function to make ""stakeholder stories"" or other vague meta-solutions mentioned in the answers to How does a Scrum team account for infrastructure tasks in the planning meeting?

+",22742,,-1,,42838.53125,42758.56667,"In agile, how are basic infrastructure tasks at the start of a project planned and allocated using strict management frameworks like TFS online?",,3,15,3,,,CC BY-SA 3.0, +340547,1,,,1/19/2017 19:20,,2,551,"

I went for an interview, and got a workload problem:

+ +
Problem: write a function to tell whether a series of workloads will exceed 
+         the maximum workload or not
+
+Input: MaxWorkLoad: example 10
+       Timeslot and workload: example [(2, 6, 3), (3, 8, 2), ... ]
+       The (2, 6, 3) is begin time, end time, and workload
+       And it means from time 2 to time 6, the workload is 3
+       You can treat the 2, 6 as the UNIX epoch time.
+       The time may not be integers, so instead of 2, it can be 2.2
+       The input can be in any time order. For example: [(20, 60, 3), (3, 8, 2)]
+       The workload will ""add up"", so a 3 and 2 will add up to 5
+
+Output: a boolean indicating whether the series of workload can fit in without
+        exceeding MaxWorkLoad 
+
+ +

The short question is: does this workload problem belong to a class of algorithm, and that when the array is empty, but data keep on coming in M times and we need to tell possible or not M times, is there a better solution than O(M * M)?

+ +
+ +

Details:

+ +

If I focused on how to determine whether the time ranges will overlap with each other, it turns out it is not an easy solution.

+ +

So I am not sure whether this is suitable as an interview question, as you may either know how to solve it or you don't. If you have seen it before, you will solve it like a breeze. If you haven't seen it before, I don't think 20 minutes may be enough time for you to get unstuck.

+ +

You may want to think about how you may solve it, if you want to have some fun.

+ +

The simple solution, which I could come up with, but after 15 minutes later, actually, can be: simply use a dictionary, and use the time boundary as the key, and if it is (2, 6, 3), then just mark it as dict[2] = 3 and dict[6] = -3.

+ +

Likewise, for (3, 8, 2), then dict[3] = 2 and dict[8] = -2

+ +

(and actually, if we treat the time endpoint as inclusive, then we won't have dict[8] = -2 but have dict[9] = -2, treating it as dropping some workload at time 9 instead of at 8)

+ +

And then, once you have the whole dictionary, now loop through each key in the dictionary, in sorted order, and keep a CurrentWorkLoad number as the work load. So when you see dict[2] as 3, add 3 to CurrentWorkLoad, and when you see dict[3], add the 2 to CurrentWorkLoad, and when you see dict[6], add the -3 to CurrentWorkLoad.

+ +

So as soon as CurrentWorkLoad is greater than MaxWorkLoad, then you can return false right away. Otherwise, at the end of the loop, simply return true.

+ +

And what if there is (2, 6, 3) and (6, 8, 1), meaning that the endpoint can ""overlap"" at the time 6? So I came up with, either use an array to remember all the values when it collides at 6, or, simply add up the values. So the first time you see (2, 6, 3), then dict[6] is -3, and when you see (6, 8, 1), then dict[6] += 1 and becomes -2.

+ +

So if in JavaScript, it is like

+ +
dict[beginTime] ||= 0;    // if not defined, then set it to 0
+dict[beginTime] += workload;
+
+dict[endTime] ||= 0; 
+dict[endTime] -= workload;
+
+ +

and the rest of the algorithm will stay the same.

+ +

So the time complexity for the array size N is O(N log N), because we need to sort the keys.

+ +

The interviewer then asked me, what if this operation is repeated M times?

+ +

So for example, if the initial array is empty, but data keep on coming in, for M times, and M can be a million or ten million. Then what is the time complexity? I initially said then it is O(M * M log M), but later on found out it could be O(M * M), because we don't need to sort the keys every time. We can just ""insert"" the key in an already sorted list.

+ +

Is there a class of algorithm or problem solving that is related to this and have a solution better than O(M * M)?

+",5487,,5487,,42756.49167,42829.82431,Does the workload problem belong to a class of computer science problem?,,6,3,,,,CC BY-SA 3.0, +340550,1,340555,,1/19/2017 20:18,,82,18604,"

I am doing database programming using Java with SQLite.

+ +

I have found that only one connection at a time to the database has write capabilities, while many connections at once have read capability.

+ +

Why was the architecture of SQLite designed like this? As long as the two things that are being written are not being written to the same place in the database, why can't two writes occur at once?

+",258776,,258776,,43153.83819,43153.83819,Why are concurrent writes not allowed on an SQLite database?,,2,2,7,,,CC BY-SA 3.0, +340553,1,340556,,1/19/2017 20:43,,2,830,"

I am creating object oriented design for a simple app through which users can order food from restaurants. User can browse nearby restaurants, explore menu, add items to cart, and finally checkout.

+ +

For now I am concentrating on two main classes User and Restaurant and the interaction that a user can browse nearby restaurant. Lets say that there is a function called getNearByRestaurants(Location location). Which is the best place for the function to be? Some options that I thought -

+ +
    +
  1. In User class. My confusion with this is should User class just have all the functions related only to a user, like changing email, credit card, etc. or should it have functions that can interact with other entities?
  2. +
  3. A new class called UserActions where all user interactions with other entities can be listed.
  4. +
  5. A class called RestaurantRegister, which can be a singleton. Any new restaurant would register itself using functions of this class. getNearByRestaurants(Location location) can be in that function.
  6. +
+",259195,,,,,43215.53542,Separating functionalities in a food delivery app,,1,0,1,,,CC BY-SA 3.0, +340564,1,,,1/20/2017 0:32,,1,92,"

So I'm writing a game and using the Google Play Services framework to send and receive data.

+ +

I have to implement the IRealTimeEventListener interface and override the functions below to receive incoming network data.

+ +
class EventHandler : public gpg::IRealTimeEventListener
+private:
+
+  //IRealTimeEventListener
+  void OnRoomStatusChanged(gpg::RealTimeRoom const &room) override;
+  void OnConnectedSetChanged(gpg::RealTimeRoom const &room) override;
+  void OnP2PConnected(gpg::RealTimeRoom const &room, gpg::MultiplayerParticipant const &participant) override;
+  void OnP2PDisconnected(gpg::RealTimeRoom const &room, gpg::MultiplayerParticipant const &participant) override;
+  void OnParticipantStatusChanged(gpg::RealTimeRoom const &room, gpg::MultiplayerParticipant const &participant) override;
+  void OnDataReceived(gpg::RealTimeRoom const &room, gpg::MultiplayerParticipant const &from_participant, std::vector<uint8_t> data, bool is_reliable)
+      override;
+
+ +

My question is how is this functionality implemented?

+ +

I have used the Observer pattern before, the Subject would call a collection of registered Observers and pass an event BUT I have to register the Observer first with the Subject.

+ +

In the case of the Google Play Services library there is NO Registration I simply implement the interface and start receiving data.

+ +

gpg.framework/Headers/i_real_time_event_listener.h

+ +
#ifndef GPG_I_REAL_TIME_EVENT_LISTENER_H_
+#define GPG_I_REAL_TIME_EVENT_LISTENER_H_
+
+#include <vector>
+#include ""gpg/multiplayer_participant.h""
+#include ""gpg/real_time_room.h""
+
+namespace gpg {
+
+class GPG_EXPORT IRealTimeEventListener {
+ public:
+  virtual ~IRealTimeEventListener() {}
+  virtual void OnRoomStatusChanged(RealTimeRoom const &room) = 0;
+  virtual void OnConnectedSetChanged(RealTimeRoom const &room) = 0;
+  virtual void OnP2PConnected(RealTimeRoom const &room, MultiplayerParticipant const &participant) = 0;
+  virtual void OnP2PDisconnected(RealTimeRoom const &room, MultiplayerParticipant const &participant) = 0;
+  virtual void OnParticipantStatusChanged(RealTimeRoom const &room, MultiplayerParticipant const &participant) = 0;
+  virtual void OnDataReceived(RealTimeRoom const &room, MultiplayerParticipant const &from_participant, std::vector<uint8_t> data, bool is_reliable) = 0;
+};
+
+}  // namespace gpg
+
+#endif  // GPG_I_REAL_TIME_EVENT_LISTENER_H_
+
+ +

How can the functions of the interface be called by the library when my event handler does not register with a subject inside the library?

+",153617,,153617,,42755.52708,42755.52708,Receiving events through an interface,,0,3,,,,CC BY-SA 3.0, +340565,1,,,1/20/2017 1:17,,0,60,"

I'm writing a trading framework in MQL and I'm confused how I should organise my class hierarchy.

+ +
|-- Terminal (Log)
+|   |-- Market
+|   |   |-- Chart
+|   |   |   |-- Draw
+|   |-- Account
+|   |   |   |-- Trade (Orders?)
+|   |   |   |   |-- Orders?
+|   |   |   |   |   |-- Order
+
+ +

In the brackets are the class variables within that class. Question marks are unclear.

+ +

Here is my previous attempt, but the compiler basically gave up and was too confused.

+ +

Below is brief sample of implementation of these classes to give you an idea what the class is about with key methods to show their purpose (method bodies are omitted if not relevant).

+ +

To be clear, the language syntax doesn't allow to extend one class by multiple, also doesn't support abstract classes. However instance of the class can be passed into another in the constructor and assigned to a class variable.

+ +
+ +

More detailed explanation of the classes and their purpose. I hope this is clear.

+ +
    +
  • Terminal

    + +

    Defines trading terminal methods and error handling.

    + +
    class Terminal {
    +  Log *logger;
    +  // Methods.
    +  static bool IsTradingAllowed(); // Check if terminal is allowed to trade.
    +  static string CodeToError(int code); // Translate error code into text.
    +}
    +
    + +
      +
    • Market

      + +

      Defines class to access the market properties for the given symbol.

      + +
      class Market : public Terminal {
      +  string symbol; // Trading symbol pair.
      +  // Methods.
      +  double Ask(); // Get ask price (it's using symbol).
      +  static bool SymbolExists(string _symbol); // Log an error on fail (use logger).
      +}
      +
      + +
        +
      • Chart

        + +

        Chart and timeframe operations. One symbol pair in the market potentially could have multiple timeframes, so it's more specific way to access market data.

        + +
        class Chart : public Market {
        +  ENUM_TIMEFRAMES tf; // Class variable for timeframe (e.g. M1, M30).
        +  // Methods.
        +  void Chart(_tf) { tf = _tf; } // Constructor.
        +  bool IsPeak(); // Uses symbol and tf to check price peak.
        +}
        +
        + +

        There is also another possibility that Market could be a sub-class of Chart. However there can be multiple chart timeframes for the same market. Before I had separate Timeframe class, but having just Chart class is more obvious. To avoid confusion with Market, Chart consist bars on the chart.

        + +
          +
        • Draw

          + +

          Interacts with objects on the chart (e.g. draw a line on the chart).

          + +
          class Draw : public Chart {
          +  long chart_id;
          +  // Methods.
          +  bool DrawVLine(name, time);
          +  bool ObjectAdd(...);
          +  bool ObjectDelete(name);
          +}
          +
        • +
      • +
    • +
    • Account

      + +

      Class to access the main account details. User is logged into account via terminal, so it's logical that Account extends Terminal.

      + +
      class Account : public Terminal {
      +  double init_balance, current_balance;
      +  // Methods.
      +  double GetBalance(); // Current account balance.
      +  double GetProfit(); // Account profit.
      +  double GetMarginFree(); // Use Terminal logger to report any error.
      +}
      +
      + +
        +
      • Trade

        + +

        Class to take user account into market action. Trade is extending Account class, because logically its actions affect the user balance. If user has no balance, trades are not possible.

        + +
        class Trade : public Account {
        +  struct trade_params { uint slippage; }
        +  Orders *orders; // ???
        +  Chart *chart; // ???
        +  // Constructor.
        +  void Trade(Chart *_chart) { chart = _chart; } /// ??? Chart or Market?
        +  // Methods.
        +  double CalcLotSize(); // Problem: Lack of access to Market.
        +  double CalcMaxLotSize(); // Problem: Lack of access to Market.
        +  bool NewOrder() { // Create a new Order instance.
        +    if (IsTradingAllowed()) { orders[] = new Order(); };
        +  }
        +  double OptimizeLotSize() { chart.market.GetSymbol(); }
        +  Orders *Orders(); // Getter to return access to Order class.
        +}
        +
        + +

        Concerns:

        + +
          +
        • I'm not sure whether it's better for Orders be defined as a class variable, or another sub-class.
        • +
        • I need to access to Market as well to Account class variables.
        • +
        • Not sure about passing another sub-class from different tree into constructor to access its values (whether it's Market or Chart).
        • +
        • Whether passing another class in constructor and calling chart.market.GetSymbol(); is a valid approach.
        • +
        • Passing Chart instance in Trade, it's not clear how I should initialize Draw class when I need to call some drawing methods on the current chart. On the other hand, passing the deepest class to access all of its features, doesn't make much sense.
        • +
      • +
      • Orders

        + +

        Class to deal with list of orders/deals on the market (as a whole). The Orders class deal with pool of orders and it extends Trade, because each trade is result of trading action.

        + +
        class Orders : public Trade {
        +  Order *orders[]; // Current orders.
        +  Order *history[]; // Orders from history.
        +  // Methods.
        +  Order *SelectOrder(int _ticket); // Selects Order instance by a ticket.
        +  Order *SelectMostProfitable(); // Returns Order instance.
        +  int TotalOrders(); // Returns number of all orders from the main pool.
        +  bool CloseAll(); // Traverse orders and invoke Close()
        +}
        +
        + +
          +
        • Order

          + +

          Class to deal with a single specific order. Once trading action takes place, new order (instance) is being placed.

          + +
          class Order : public Orders {
          +  struct params { int ticket; double price, profit; }
          +  Order(); // Constructor to open new order and update params.
          +  // Methods.
          +  double GetTicket(); // Returns params.ticket.
          +  double GetProfit(); // Returns order current profit.
          +  bool Close(); // Closes the order.
          +}
          +
          + +

          It extends Orders, because it's part of the pool, however the Order doesn't benefit much from its parent (no common variables, it's more like other way round). It doesn't make much sense to give Order access to methods to deal with all orders, but on the other hand, which class should be a parent instead?

        • +
      • +
    • +
  • +
+ +
+ +

I would like to understand what is the problem in above class hierarchy, ideally by following some good OOP practices.

+ +

Question:

+ +

How above hierarchy should look like in the ideal world and what would be your suggestion to address my concerns?

+ +
+ +

My thoughts are:

+ +
    +
  • At first, it seems to be logical that Order is extended by Orders class, but on the other hand having orders[] in Trade class seems illogical to have separate instances of Orders (each time when it's created), so it could be that extending Order to Market would be a better approach, but I'm not sure. Or to not extend Order class at all.
  • +
  • I think Orders class doesn't fit right. Ideally dropping it and merging into Trade would be great, but I think having separate class to deal with orders as a whole would be more logical. On the other hand, dropping Orders class could solve a lot of problems.
  • +
  • I'm not sure at what point Trade and Chart should interact with account. Should Market/Chart class be used in Trade class as a variable and assigned from the constructor, should the interaction happen from the common parent class or the hierarchy should be completely different.
  • +
  • The hierarchy is not fixed, and I'd like to add further more classes on top of it, so the hierarchy should be fairly flexible.
  • +
+ +

Any thoughts?

+",60532,,-1,,42878.52778,42755.46736,How to organise tree of sub-classes which should interact?,,1,3,,,,CC BY-SA 3.0, +340573,1,,,1/20/2017 7:47,,2,857,"

We have customers that often have varying requests. Currently, we do things such as if customer_code = ? then ... and do something different just for that customer.

+ +

I want to employ something similar to how Wordpress/Magento works where you can subscribe to events, receive the data, process it how you want and then return control back to the application.

+ +

I want to store this customer specific functionality inside of an assembly specific to that customer. Note, I don't want to completely override the method, just points within the method to do things (in most instances anyway).

+ +

Examples are: +- Before order placed +- After order placed +- Before allocation +- After allocation +- After status change

+",58286,,,,,42756.03125,How do I implement hooks/filters/actions within my application?,,2,1,,,,CC BY-SA 3.0, +340577,1,,,1/20/2017 8:49,,7,7029,"

We have an Android app written as a Cordova AngularJS SPA which has now grown so that we need to add some work to it meaning we have to improve the synchronization part of the app. The app is mainly used offline. I can't give details of the business use but data structure and constraints can be illustrated with an over simplified car mechanic's business!

+ +

Imagine the data entities are similar to this : +

+ +

Claims are at Company, Garages and Car level, so in other words Jack may be able to see all garages and all cars in company A, Jill may be able to see all garages in Company A and only some cars in each garage, Jo may be able to see only one garage and only some of the cars in that garage. - you get the idea.

+ +
    +
  • The app downloads information that the mechanic can see about their jobs.  They are able to see all jobs assigned to them and any other jobs that the claims they have allow them to see, so that they can pick those jobs up.  A job is a grouping of Checkpoints that need to be completed on a car.
  • +
  • The garage workshop has no data connection, the mechanic can only sync when they go back to the office.  This may need to be user initiated, not entirely sure yet though.
  • +
  • More than one mechanic can work on a job, if the same data item gets updated by both mechanics the last item synchronised wins.
  • +
  • If a data item has been updated on the server since the mechanic downloaded it to the tablet app, they then updated it and uploaded it, the mechanic's change wins.
  • +
  • Jobs may only be partially complete at time of synchronisation.
  • +
  • When the app synchronises it needs to apply the changes it holds and update what data is held locally with the most up to date information from the SQL database.  It may need to remove data from the app that is no longer relevant to the jobs being worked on currently or in the future.
  • +
+ +

The backend is hosted on Azure, data stored in an Azure SQL database.  There is an OData API we're phasing out and a Web API that can be used / added to.  The complex authorisation happens within a shared module in the APIs, the only data taken down to an app would be data the user is allowed to see.

+ +

The data will get fairly large over time.

+ +

This needs to be transactional – ie. If there is a problem with the sync we want a whole job to have been synchronized at a time, not entity type at a time.

+ +

The app has been written as a SPA Cordova application so, unless there is a compelling reason not to, it has to remain this way.

+ +

Ideas we've considered so far :

+ +

BreezeJS has been used to track changes. We could send just those changes up to the server, wait for a process on the server to apply those changes to the database and then start the download of any changes and new data required. However this could mean that the sync will take a long time to complete if lots of people are syncing at the same time and things get clogged up in a queue of some sort.

+ +

Using CouchDb/PouchDB - I know very little about this, would this work with the authorisation model we have?  And how would it work getting the changes to/from the master SQL database?

+ +

Azure offline data sync (https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-offline-data-sync) - As far as I can see this is pretty similar to using CouchDb, or have I misunderstood?

+ +

The required authorisation model is the part that seems to be the blocker to several solutions being viable, we need different users to have different sets of common data at a given time, this is imperative.

+",260060,,260060,,42758.33958,42950.70139,What's the best implementation for offline mobile app synchronization?,,2,0,9,,,CC BY-SA 3.0, +340585,1,340630,,1/20/2017 11:30,,2,2741,"

+ +

I have read the guide and they say

+ +
+

Query Router Sharding is transparent to applications; whether there is + one or one hundred shards, the application code for querying MongoDB + is the same. Applications issue queries to a query router that + dispatches the query to the appropriate shards.link to guide

+
+ +

Does it mean i dont have to care about the router? or is this a specific machine? i dont get it.

+",256336,,209774,,43999.88056,43999.88056,What is the query router in mongodb architecture of sharding?,,1,0,,,,CC BY-SA 3.0, +340586,1,340587,,12/11/2016 15:57,,4,8882,"

In Chen notation multi-valued attributes are denoted by double eclipse(oval) and derived attributes are denoted by dotted eclipse(oval), but how they are denoted in crow foot notation?

+ +

Also how associative entities are denoted in crow foot?

+",,Saif Ullah,,,,43531.99931,How multi-valued and derived attributes are denoted in crow foot's notation of ERD?,,2,0,,,,CC BY-SA 3.0, +340588,1,340591,,1/20/2017 11:43,,5,1381,"

All articles (example) I read till now about regular expressions and NFAs explain three operations:

+ +
    +
  • sequence
  • +
  • alternation (union)
  • +
  • repetition (Kleene star)
  • +
+ +

No one talks about negation. Is the negation not regular?

+ +

Update:

+ +

@RemcoGerlich What does it mean ""swap the states"". Can you explain it with the Kleene closure?

+ +

+ +

How would the Kleene closure look with swapped states?

+",96131,,96131,,42755.51944,42755.60278,Is the negation of a regular expression still regular?,,2,1,1,,,CC BY-SA 3.0, +340592,1,340601,,1/20/2017 12:20,,1,2813,"

Context:

+ +

I'm creating a WPF application using MVVM. I have a Page which displays a status informing what task the app is performing on Background.

+ +

I have a container, and bind its Content to an property on the ViewModel.

+ +

For an illustration, take a look at the following code:

+ +
<StackPanel x:Key=""Status_Success"" Orientation=""Horizontal"">
+    <iconPacks:PackIconMaterial Kind=""Check"" />
+    <TextBlock>Success!</TextBlock>
+</StackPanel>
+
+<StackPanel x:Key=""Status_Error"" Orientation=""Horizontal"">
+    <iconPacks:PackIconMaterial Kind=""Exclamation"" />
+    <TextBlock>Error!</TextBlock>
+</StackPanel>
+
+ +

If the background task succeeds, then I'd set the content property to Status_Success StackPanel. Otherwise, I'd set the content to Status_Error.

+ +

Here's the binding:

+ +
<Controls:TransitioningContentControl [...] Content=""{Binding CurrentStatusElement}"">
+
+ +

Problem:

+ +

Well, I firstly created all the StackPanels as Resources in my Page. But as I said, I'm using MVVM, so I don't have direct access to page resources from the ViewModel.

+ +

Approaches:

+ +

Here's some possible approaches (these are not the only possibilities, I'm taking sugestions):

+ +

1. Create the StackPanels on the ViewModel:

+ +
StackPanel _StatusSuccessElement = new StackPanel();
+
+_StatusSuccessElement.Children.Add([...];
+
+[...]
+
+ +

2. Create a new Resource Dictionary and import it in the ViewModel:

+ +
var resourceDictionary = new ResourceDictionary()
+{
+    Source = new Uri(""SymbolTemplates.xaml"", UriKind.Relative)
+};
+
+StackPanel _StatusSuccessElement = resourceDictionary[""Status_Success""] as StackPanel;
+
+ +

3. Create an element (Page/UserControl/whatever) and create a new instance of it on the View Model

+ +
var _StatusSuccessElement = new StatusSuccessElement();
+
+ +

Question:

+ +
    +
  1. Which, if any, of these approaches fit better with MVVM and why?

  2. +
  3. If none, what's the best approach to prevent pattern violations?

  4. +
+",220480,,220480,,42755.55764,42755.60208,Preserve MVVM while using XAML resources,,2,2,1,,,CC BY-SA 3.0, +340595,1,,,1/20/2017 13:42,,1,107,"

There is a current project which has many lines of code and written for Java 6-7. I want to migrate it to Java 8 and use lambda expressions wherever I can.

+ +

How can i scan the project and report that an expression can be replaced with a lambda equivalent?

+",11786,,1204,,42755.575,42755.575,Migrating to Java 8: How can I scan the project and report that an expression can be replaced with a lambda equivalent?,,0,8,,,,CC BY-SA 3.0, +340603,1,340616,,1/20/2017 14:29,,-3,89,"

I am creating a simple application where users will be shown random quizzes one at a time and they have to answer them. Each quiz has a category and many tags, right now I have stored quizzes in a single flat file but it seems that I may need many quizzes in near future and hence my solution isn't scalable at all.

+ +

What I thought is to keep quizzes in separate folders which would be named after category but the issue is then how to keep the sequence number of the quizzes in order.

+ +

Is there a better solution?

+",198298,,,,,42755.75972,Managing quiz collection,,2,2,,,,CC BY-SA 3.0, +340608,1,340618,,1/20/2017 15:02,,3,136,"

I am working on writing a code for Hilbert algorithm to solve a Traveling Salesmen Problem. Although there are several efficient methods out there, I am just curious about the implementation of Hilbert Space filling curve.

+ +

+ +
    +
  1. First we create a Hilbert curve and divide the entire area into a number of squares.

  2. +
  3. Then using the sequence of squares we connect all the points though which the salesmen has to travel.

  4. +
+ +

My problem is on the second part. How can I find the squares which contains a point? Or how can I find the empty squares?

+",260098,,,user22815,42758.70764,42758.70764,Mapping points to squares,,1,1,,,,CC BY-SA 3.0, +340609,1,342122,,1/20/2017 15:17,,2,143,"

First, I am not sure if I'm asking this within the correct Stack Exchange community, so if this question belongs somewhere else, please let me know. It didn't seem appropriate for Stack Overflow.

+ +

I am developing an audiobook app for iOS that uses a navigation controller to navigate between two TableViewControllers. The first view controller holds a list of books to choose from, and the second holds the list of chapters of those books. When a user clicks the play button of a chapter row, it starts to play the audio for that chapter.

+ +

At the bottom of each view controller I've created a custom view that holds the components of a standard audio player (play/pause button, track slider, duration, labels for the current track playing):

+ +

+ +

Each view controller has its own instance of the audio player view; there isn't a shared, global audio player view (I don't think you can even do this in iOS?).

+ +

I would like the state of the audio player at the bottom of both controllers to maintain a shared state as a user navigates around the app. For example, a user plays the Chapter 1 row. I need the audio player view at the bottom of each controller to display that the Chapter 1 entry is playing.

+ +

How do I achieve this? I'm fairly new to iOS development, but I believe I need to define a delegate protocol for the audio player view and have each view controller implement it? Or is there a better way of implementing my audio player; maybe creating a custom UIToolbar?

+",260097,,,,,42778.90139,"Correct Design Choice for Maintaining ""static"" View Across Multiple Controllers?",,1,2,1,,,CC BY-SA 3.0, +340612,1,340614,,1/20/2017 15:56,,2,662,"

I have an enum with > 10 items each having 8 static properties. Contrived example:

+ +
enum JavaTypes {
+    INTEGER,
+    BOOLEAN,
+    STRING,
+    ...;
+
+    boolean isPrimitive() {
+    }
+
+    boolean isNumeric() {
+    }
+    ...
+}
+
+ +

I am trying to find the most readable and maintainable (easy to add properties) style.

+ +
    +
  1. The standard override getters doesn't work very well, as each enum body gets quite long and it's difficult to compare the same property across items.

  2. +
  3. Setting all the properties in the constructor. Plagued by the problem of which argument is which. I tried to align the arguments into a table, but defeated by the hard 120-char limit we have.

  4. +
  5. Non-final fields and instance initialiser: e.g.

    + +
    INTEGER {{ primitive = true; numeric = true; }},
    +
    +boolean primitive, numeric;
    +
    +boolean isPrimitive() {
    +    return primitive;
    +}
    +
    + +

    This feels a bit dirty though.

  6. +
+ +

Is 3 acceptable? Is there any better option.

+ +

Edit:

+ +

Just to clarify, not all of the properties are booleans.

+",44660,,220461,,42757.64722,42757.64722,"Static per-enum data: constructor, set in initialiser or override getter?",,2,2,1,,,CC BY-SA 3.0, +340613,1,,,1/20/2017 16:29,,0,39,"

My client has a nodejs SDK that fetches entries using a client that makes http requests. Their api looks like this:

+ +
    var Query = Client.ContentType('blog').Query();
+    Query
+      .where(""title"", ""welcome"")
+      .includeSchema()
+      .includeCount()
+      .toJSON()
+      .find().then((response) => resolvePromise etc...))
+
+ +

I have been tasked with mirroring this api but in an idiomatic way in ruby.

+ +

My earlier attemps at doing entries = client.entries({content_type: 'blog'}) were rejected. They now want me to now write an api that reads like this:

+ +
query = client.content_type('blog').query;
+entries = query
+              .where(""title"", ""welcome"")
+              .include_schema
+              .include_count
+              .to_json
+              .find;
+
+ +

Somehow this doesn't make sense to me (perhaps due to the level of misdirections involved) and I don't exactly know why.

+ +

If I write a method like content_type('content_uid') on my client class, I am writing a parameterized setter that already breaks rules.

+ +

If you are a rubyist, does this look like good api design to you? How can I improve on this?

+",17918,,17918,,42755.70694,42755.86319,Creating a better translation for a node.js api to ruby,,1,0,,,,CC BY-SA 3.0, +340621,1,,,1/20/2017 20:19,,1,296,"

How would one implement a multi tenancy application structure with the following technologies:

+ +
    +
  • Multiple SQL Server Databases (One per tenant)
  • +
  • Asp.Net
  • +
  • Entity Framework
  • +
  • Active Directory and possibly custom role providers for authorization.
  • +
+",161141,,161141,,42758.71875,42758.71875,How to decide level of Single Tenancy vs Multi Tenancy for application,<.net>,1,1,,,,CC BY-SA 3.0, +340622,1,340709,,1/20/2017 20:24,,2,366,"

What are the steps taken by the CPU to sum 2 numbers(2+2) from the keyboard input to the display in the screen?

+ +

example: reading the ascii code .... convert the typed number to binary ... send to cpu .... printing on screen?

+",79894,,,,,42758.12986,How does a simple (2+2) math operation work in a CPU?,,1,26,,42758.48194,,CC BY-SA 3.0, +340636,1,,,1/21/2017 0:38,,1,47,"

I have a pattern that's something like this:

+ +
    +
  1. Create a database model UsagePlan which stores how many usage tokens a paid customer receives in exchange for its respective monthly fee. Through my ORM (Django), I override the save method to not allow any writes to this table from my code.

  2. +
  3. I create a fixture that specifies the usage plans as my client specified them. Every time I deploy, this fixture gets loaded into the database (idempotently).

  4. +
  5. I create another model LiveUsagePlan. When a user turns into a paid customer, an instance of this model is related to the user and stores a copy of the UsagePlanthat they picked.

  6. +
  7. My client updates their usage plans, so I update the fixture and deploy. Current customers aren't affected since their LiveUsagePlans don't change (""grandfathered""), however all new customers' LiveUsagePlans will copy the new UsagePlans

  8. +
+ +

I am skeptical of this pattern, as I made it up and have never seen it anywhere else. I've also never seen fixtures used outside of the context of tests. Are there any glaring problems that I'm not seeing? What's the best way to handle this kind of mostly-static state?

+",260136,,,,,42756.02639,Using fixtures to maintain static database assets,,0,4,0,,,CC BY-SA 3.0, +340637,1,,,1/21/2017 0:59,,0,458,"

Our organization has been clamoring for a more organized spec-making process. At the moment, we use a combination of UX Specs (created in a wireframing tool and published as PDFs), and Functional Specs (created in a writing tool, either a client such as Word or a hosted tool such as an internal Wiki). The two documents are created by separate teams on a project (the former by the UX Designer, the latter by one or more of the developers).

+ +

The key audience for this is the QA team. For now, the forms of these documents are working fine, but the fact that they are created, delivered and maintained separately is the problem. In practice, they refer to these two documents in parallel, switching back and forth to develop a full understanding of how the product they are testing is supposed to work. I am exploring ways that they can be managed as an integrated set of documents. In an ideal world, the functional spec would be able to refer to parts of the UX spec, and show those parts inline.

+ +

I am about to do a deep dive to evaluate SharePoint as a possible platform for this, as I believe it supports linking between managed documents. (And there appears to be a way to link to a particular page within a given PDF.) This is still not as integrated as I would like, but may be the best available option. Has anyone here dealt with this issue before, and if so, can you describe how you ultimately addressed it?

+ +

NOTE: I have also asked this question over on https://ux.stackexchange.com/, but thought this community would also have some ideas to share.

+",260134,,-1,,42838.52222,42759.7,How can I create a combined UX and Functional Spec?,,1,4,,,,CC BY-SA 3.0, +340638,1,340639,,1/21/2017 1:56,,2,1666,"

The question is in the title. Here is the context:

+ +

Some people think that the null pointer is a big mistake. Tony Hoare famously apologized for inventing it. Since version 2.0 C# has had nullable value types (int? foo;), which introduces more of the null badness into the language. To confuse the issue, the C# team is considering adding non-nullable reference types too (MyClass! myClass).

+ +

So are we trying to increase or decrease null pointers? Why are null pointers bad? If they are bad, why did the C# team expand the language's ability to use them? If they are good, why might the C# team expand the language's ability to prevent them? When we are reviewing code, how do we decide whether a variable or property should be allowed to be null or not?

+ +

In short: when do the benefits of a nullable value type outweigh the cost of a null pointer?

+",116650,,116650,,42756.92292,42757.73056,When do the benefits of nullable value types outweigh the cost of null pointers?,,6,3,1,,,CC BY-SA 3.0, +340642,1,,,1/21/2017 5:40,,-1,288,"

I'm designing a tool in my organisation to help me with release management. The organisation is composed of several small teams that manage their own repositories in git. The release manager is responsible for cutting branches across all development lines in each of the repositories and then hand it off to QA processes.

+ +

The tool asks each team to define a simple YAML spec detailing the steps used to compile and package the team's source into binary artifacts. These could be a maven instruction or a python setup. Different languages (and build tools) are used for each of these projects. The YAML spec and instructions to build/package are stored within individual team repositories.

+ +

During a release, my release tool will have to cut branches from all the development branches of these repositories. What I'm struggling with is -- where to store this list of development branches per repository?

+ +

It doesn't belong within my tool because I don't want teams to define that list in my tool's source code.

+ +

It can't be stored in the repos of the teams because my tool wouldn't know which branch of the team's repo to look at for the development branch information. That's a strange self-reference.

+",254067,,,,,42876.75764,development branch information in release management tool,,1,4,0,,,CC BY-SA 3.0, +340648,1,,,1/21/2017 8:16,,5,460,"

An internal API I've built will soon be consumed by a third party.

+ +

Should I open the current internal API to the public, or should I create a new API endpoints for external access?

+",232733,,6605,,42756.38333,42756.95694,What should I think of when making an internal API public?,,2,1,1,42763.92639,,CC BY-SA 3.0, +340649,1,340652,,1/21/2017 8:47,,1,375,"

Map (or HashMap) takes Constant time for Insert, Remove and Retrieve. While all other data structures which I know so far, do not take constant time and their time for above operations depends on the size of input.

+ +

So, why do we need all other data structures ever ? Isn't HashMap is universal data structure ?

+",196162,,,,,42756.40139,Why do we need datastructures other than HashMap,,2,2,,,,CC BY-SA 3.0, +340664,1,340697,,1/21/2017 16:54,,3,359,"

I am working at translating a github repo, which I do not own, from Python to Java. The logic will remain the same, which is significant, as this is a Neural Network application, but I need to be able to deploy and run this in a more platform agnostic manner, and Python simply doesn't offer that.

+ +

My question is this: should I fork the original repo to demonstrate that my work is clearly derivative? There would never be any potential for pull requests or merges, as there will be no code in common between the two projects...

+",218173,,164151,,42757.73958,42757.75347,Proper Etiquette for Porting a Github Project to a new Technology,,2,1,1,,,CC BY-SA 3.0, +340665,1,,,1/21/2017 16:57,,2,115,"

When designing an application I usually stumble upon a problem I've never quite managed to handle properly.

+ +

Suppose you have Products and Orders. Usually in my data-access layer I have repositories that can return back lists of these object already filtered based, for example, the logged user that made the request. When I say ""already filtered"" I mean that the security-layer of the application (which is a cross-cutting concern) constructed the necessary criteria so that the data-access layer can actually query the db asking for object the logged user can actually see - no other filters are applied in memory.

+ +

Suppose that the actual Product class is something like this:

+ +
public class Product
+{
+    public virtual IList<Order> Orders {get; set;}
+    // other stuff 
+}
+
+ +

where the Orders collection contains all orders (from every user) placed on the product.

+ +

Now, this presents a problem because by accessing this property I can potentially show a user orders placed by other users.

+ +

I usually use NHibernate as an ORM an I cannot find a proper solution to this problem. I'm aware of ""filters"" but they are just string and not really useful when you need to make something more complex than the contrived example given here for brevity. Unless there is a way to use the criteria api with filters but, as far as I know, it's not possible

+",,user260171,,,,42817.64444,How to make associations security proof,,1,4,,,,CC BY-SA 3.0, +340666,1,340668,,1/21/2017 19:17,,4,843,"

I come from an object oriented programming background, therefore, I am pretty curious about the core philosophy of functional programming? Why does it exist at the first place, and what types problems it is trying to solve?

+",256784,,,,,42757.59653,What is the core philosophy of functional programming?,,3,2,4,42756.91042,,CC BY-SA 3.0, +340670,1,340671,,1/21/2017 22:35,,2,2895,"

In functional languages without type checking, is there any substantial disadvantage (apart from readability) to limiting all functions to take exactly one argument - that is, replacing multi-argument functions with functions that take a tuple as an argument?

+ +

(I saw some comments about it being somewhat more difficult to design the type system when functions accept tuples. I am interested in considerations unrelated to this issue.)

+",4485,,,,,42757.80278,Multiple arguments vs a tuple argument,,2,1,,,,CC BY-SA 3.0, +340673,1,340674,,1/22/2017 0:42,,1,404,"

I have been doing some research in to java servlets and I am having trouble understanding why it more efficient then a cgi based solution.

+ +

The reason for my lack of understanding is that java servlets run on the thread per request model, meaning that a new thread is spawned or taken from a pool of threads each time a request is made. While a cgi based solution would create a new process per request.

+ +

My question is, why would creating a process per request be less efficient then a thread, after all in each process is a a thread doing the work, So why is cgi less effecient?

+",258776,,61852,,42757.09167,42757.09167,Why is a servlet more efficient than CGI?,,1,2,,,,CC BY-SA 3.0, +340682,1,,,1/22/2017 13:16,,0,1177,"

I have a python module dataProcessor.py which initialises a large amount of data into memory (approximately 3GB) I want to use this module in different processes which are running simultaneously.

+ +

But the problem is there is not enough Memory on machine to run everything at same time due to dataProcessor.py loading data into memory for every process (3GB for each process, so for 3 processes a total of 9GB Memory).

+ +

I tried using server-client model to initialise data only once and and serve all processes but this model is too slow. Is there any method to load data only once and have other processes access the methods in module dataProcessor.py

+ +

The module I am talking about is Spacy which is written in Cython. The data can be any Python object, and won't change once written. +It is OK if the solution is a C extensions to Python.

+ +

Is there any alternative to server-client or subprocess model which shares memory.

+",219303,,60357,,42757.77083,42979.50972,Best way to import a large module to use in different modules,,2,4,,,,CC BY-SA 3.0, +340687,1,,,1/22/2017 15:47,,3,848,"

I have a method for reading data from file. The problem is how to handle files that are too big for a simple read and save to database? I was thinking about reading a chunk of it and saving it to database, but I don't know if having an asynchronous method with callback is a good idea.

+ +

Basically I think that a reader class shouldn't be aware of any database interface, so in order to notify of successfully reading a chunk of data it has to have a callback. I don't know if this is a good approach or not.

+ +
private const int Buffer = 100000;
+
+    public Task ReadAsync(Action<Tuple<DataTable, int>> statusCallback) {
+        DataTable data;
+        return await Task.Run(() => {
+            var totalRows = GetRowsCount(); // iterates file to calculate total number of rows
+            var progress = 0;
+            if(totalRows < Buffer) {
+                /**Read whole file...*/
+              progress = 100;
+            }
+            else {
+              while(/**loop until end of file*/) {
+                   for(var rowIndex = 0; rowIndex < Buffer; rowIndex++) { 
+                      var row = reader.Read();
+                      /**Split, parse, etc...*/
+                      data.Add(row);
+                   }
+                progress += Buffer/totalRows * 100; // Add read rows to total result %
+                statusCallback(new Tuple(data, progress));
+                }
+            }
+        }
+
+    }
+
+ +

And then save it

+ +
public void Start() {
+    _reader.ReadAsync(ReadingProgress);
+}
+
+private void ReadingProgress(DataTable data, int progress) {
+     _loadingBar.Update(progress);
+     using(var tran = _database.BeginTrans()) {
+         foreach(var row in data.Rows) 
+         {
+             _database.Insert(row);
+         }
+         tran.Commit();
+     }
+}
+
+ +

For some reason it seems wrong to me, but I don't know why. Any ideas how I could improve this?

+ +

EDIT: +I would like to notify users of how much the program read of the file, so I need to iterate through the whole file once and read how many lines it has. This bothers me, because it means I have to iterate a file two times.

+ +

One approach I thought about was get the byte size of first line and then divide the size of the file by that size. It would give me estimated count of lines in a file, but I'm not really sure if the approximation error wouldn't be too big.

+",255682,,255682,,42759.3625,42759.3625,Reading and saving big data to db,,0,9,,,,CC BY-SA 3.0, +340691,1,340693,,1/22/2017 16:43,,1,871,"

Consider this example of polymorphism. I have two different API, IWrite and IRead, and then a single implementation of these.

+ +
interface IRead
+{
+    Entity Find(int id);
+}
+
+interface IWrite
+{
+   void Persist(Entity);
+}
+
+class SomeRDBMSRepository : IRead, IWrite
+{
+   public void Persist(Entity entity)
+   {
+      ...persist it
+   }
+
+   public Entity Find(int id)
+   {
+      ...return an entity
+   }
+}
+
+ +

Because the implementation has all the details of how to implement the interfaces, then we could easily use the same type, SomeRDBMSRepository. We could have several clients of the APIs, maybe like these:

+ +
 class FindUserQueryHandler
+    {
+        public FindUserQueryHandler(IRead readEntities)
+        {
+        }
+
+        public User Handle(int id)
+        {
+            return _readEntities.Find(id);
+        }
+    }
+class RegisterUserCommandHandler
+{
+    public RegisterUserCommandHandler(IWrite writeEntities)
+    {
+    }
+
+    public void Handle(User user)
+    {
+        _writeEntities.Persist(user);
+    }
+}
+
+ +

When we want to stress the idea of polymorphism, then we simply create new types that provide different functionality. We give this example when we have, let's say, a requirement that the users must be queried from a different repository, then we could create a new type, SomeNoSQLRepository : IRead and in this way FindUserQueryHandler can use an instance of this implementation without caring about its actual details, as long as the type implements the IRead API. Also FindUserQueryHandler doesn't have to do anything special (ie to change)

+ +

Someone asked me a while ago for an example of polymorphism (cough / interview) and I gave him the example of SomeRDBMSRepository because I think this type has different roles or responsibilities for different clients. For a query type is the reader, while for the command is the writer. So my reason was this type is poly because it has the IRead form and the IWrite form in the same time. It seems I was wrong.

+ +

While reading the polymorphism definition from Wikipedia, I could adapt it to the meaning that if we can use any type in the place where some API is expected, as long as that type implements the clearly defined API, then that code is polymorphic. The mechanics don't really mater. C# or Java requires to specify that a class implements an interface, but for a language like Go there is enough to be a match of the method signatures between the type and interface.

+ +

Wikipedia also says that a polymorphic type is a type that has operations that can accept objects of different types. In my example the query and command types are polymorphic types by this definition.

+ +

Basically my question is: is there a taxonomy for SomeRDBMSRepository? How wrong I was?

+",50548,,50548,,42757.70347,42757.71736,"If polymorphism is the ability of different types to share the same interface, is there a name for a single type that fulfills different interfaces?",,1,4,,,,CC BY-SA 3.0, +340696,1,340812,,1/22/2017 17:46,,3,3926,"

SFML, in this case, has a Git repo and a download page. Until now I have always downloaded from the download page, which came (at least for the compiler I use) with .a and .hpp files and could easily be used with MinGW with -l and -i tags.

+ +

When I wanted to use this on Github, though, a couple of problems arised.

+ +
    +
  1. When copying it into the repo on the first commit, the Github graph would show 22k changes on the first commit and only a couple hundred on the following commits. I got around this by putting the download into a private repo and using it as a submodule. This isn't very elegant and still has the problem of being OS/compiler-specific.

  2. +
  3. Using the official Github repo as a submodule. But how do you use external raw cpp and hpp files? Do you put the necessary cpp files into the Makefile? If so, what are the downloads on the download page for?

  4. +
+",226868,,,,,42759.63194,How do you use external libraries in git?,,1,3,,,,CC BY-SA 3.0, +340701,1,340714,,1/22/2017 21:59,,3,239,"

I've just finished up a project in which I created a visual simulation of the life cycle of an ARM instruction in a single cycle processor. I used the MVC pattern in this project and ran into a design crossroads when I thought about how I was going to handle a specific part of the data being passed between the model and the view.

+

In my diagram in the view, I have a set of lines drawn over the data paths of the processor, which I will highlight based on the step in the ARM life cycle I'm talking about (for example, if the current step is referencing two sources being passed to the ALU, the lines corresponding to those two sources will light up). For each step there is a textual explanation of what is going on, as well as a set of lines being highlighted.

+

Here is a gif of the view in action: +

+

It is simple to pass the textual data between the model and the view without violating encapsulation, however I was not sure how to handle the graphical data (the lines). I've worked with a similar design before in a course I took on OOP, and we were told to ignore the violation of encapsulation in this instance - what we did was we created a set of states which were passed from the view to the model, updated, and sent back to the view.

+

It seems to me that this solution could be detrimental in a real-life situation due to arising bandwidth issues between the server and the client (passing relatively large data objects back and forth for any change of state in the view). Also, if I used this solution I would also be violating a basic tenant of the MVC pattern - keeping the view and the model entirely separate. If the view were to pass its line objects to the model, the model would know exactly how it's data was being implemented.

+

For my project I decided to stick with accepted programming practices and preserve encapsulation by making the concrete classes in the model send string objects to the controller that represented a certain set of lines. The controller would translate this string into a pre-initialized set of line segments and activate them (it would also deactivate line segments not mentioned). This solution, however, seems to me to be messy and didn't seem to scale very well because the controller would need to create a new set of lines and map it to any new possible string being sent from the model.

+

Here is a snippet of my controller code:

+
private Map<String, HashSet<Line>> lineSet;    
+
+public void initController(){
+    //groups of line objects initialized...
+    HashSet<Line> ctrlIn = new HashSet<>();//a set of lines that will be mapped to a string
+    ctrlIn.add(instrH);
+    ctrlIn.add(cond);
+    ctrlIn.add(op);
+    ctrlIn.add(funct);
+    ctrlIn.add(rd);
+    ctrlIn.add(instrV1);
+
+    lineSets.put("CtrlIn", ctrlIn);//maps the string expected from the model to the set of lines in the view
+    
+    ...
+}
+
+public void updateLines(ArrayList<String> s){//called by the model to update states of line objects
+  activateLines(s);
+ }
+
+//helper function for handling lines, called by the updateLines function above
+ public void activateLines(ArrayList<String> s){
+   for(Line l: currLine)//currline is the current set of displayed line objects
+       l.setVisible(false);
+   for(String set : s){
+     HashSet<Line> c = lineSet.get(set);
+     for(Line l : c){
+       currLine.add(l);
+       l.setVisible(true);
+     }
+   }
+ }
+
+

Here is some code from my implementing classes in the model:

+
//`step` is a ArrayList<String> that contains the set of steps in order for the instruction
+step.add("Step 1: Step description here...");
+
+//creates an ArrayList<String> object to hold all strings representing lines that should be displayed in the view (translated by the controller)
+ArrayList<String> step1 = new ArrayList<>();
+step1.add("CtrlIn");
+//more strings added to `step1`...
+
+lines.add(step1);//an array of strings is added to an outer array, corresponding to step 1
+
+

An index keeps track of the current step and returns two pieces of data to the controller: the description of the step from the step arraylist object, and an arraylist of string objects (corresponding to a set of line objects in the view) from the lines arraylist object. The controller displays the textual data to a textbox, and then uses lineSets.get(String s) to return the set of line objects mapped to the string returned by the model, and activates all lines in the set.

+

Violating encapsulation here seems like it might work better for my case because it would scale better. Neither the controller nor the concrete classes in my model would need to do much heavy lifting if they were both on board with what was going on in the view. I could inject references of the line objects in the view to the concrete classes in my model. When the view requests a change of state (next step), the model could just pass the set of lines to be displayed - the controller could skip translating the response from the model and just update the state of the lines in the view.

+

Was I correct in opting to preserve encapsulation in my design, or could an exception be made to handle the data alternatively, for example in the way I described above? Ultimately, what is the most correct way to handle the line object data in this situation, in accordance with accepted programming practices?

+

I tried to limit the amount of raw code to save space - let me know if there is any pertinent code related to the question that I should include.

+

Thank you for any advice.

+",260247,,-1,,43998.41736,42759.14861,Is it always optimal to preserve encapsulation in MVC?,,1,1,,,,CC BY-SA 3.0, +340705,1,340742,,1/23/2017 1:39,,35,6724,"

We had two major dependency-related crises with two different code bases (Android, and a Node.js web app). The Android repo needed to migrate from Flurry to Firebase, which required updating the Google Play Services library four major versions. A similar thing happened with our Heroku-hosted Node app where our production stack (cedar) was deprecated and needed to be upgraded to cedar-14. Our PostgreSQL database also needed to update from 9.2 to 9.6.

+ +

Each of these apps' dependencies sat stale for almost two years, and when some were deprecated and we reached the 'sunset' period, it has been a major headache to update them, or replace them. I've spent over 30 hours over the past month or two slowly resolving all of the conflicts and broken code.

+ +

Obviously letting things sit for two years is far too long. Technology moves quickly, especially when you're using a platform provider like Heroku. Let's assume that we have a full-fledged test suite, and a CI process like Travis CI, which takes a lot of the guesswork out of updating. E.g. if a function was removed after an upgrade, and you were using it, your tests would fail.

+ +

How often should dependencies be updated, or when should dependencies be updated? We updated because we were forced to, but it seems that some kind of pre-emptive approach would be better. Should we update when minor versions are released? Major versions? Every month if updates are available? I want to avoid a situation like what I just experienced at all costs.

+ +

PS - for one of my personal Rails projects, I use a service called Gemnasium which tracks your dependencies so that you can be notified of e.g. security vulnerabilities. It's a great service, but we would have to manually check dependencies for the projects I mentioned.

+",109112,,,,,42765.65208,When should dependencies be updated?,,6,0,8,,,CC BY-SA 3.0, +340712,1,,,1/23/2017 4:28,,1,761,"

Background

+ +

I'm a huge fan/believer of Jeff Patton's user story map. I'm currently reading his book..

+ +

I find using story maps a very effective way in convincing clients to use the lean start-up principles, by forcing them to think long and hard about what features are mvp and which should be released first (ie by visualizing the releases and the backlog etc).

+ +

Problem

+ +

My problem is that I'm currently working on a very technical solution. It's more about getting a user (B2C) application (that has a lot of UIs) and creating a cloud version of it (B2B) that will be handled by a handful of admins. As part of the estimation we figured out that on the mvp phase we should mostly use command line and not bother too much with UI.

+ +

My question is: how can user story maps be used to visualize a project like this where there isn't too much of a user story going on. it's mostly stuff going on at the backend to scale operations of which a UI has already been implemented for the individual consumer.

+ +

Example

+ +

The following is a list of tasks that I would like to put on a user map, and I'm struggling on how to lay them out:

+ +
Backend/API-Basic-Setup
+Backend/DB-Model/Setup
+Backend/DB-Model/Credit-Cards
+Backend/DB-Model/User-Data
+Backend/DB-Model/Task
+Backend/DB-Model/Task-Machine-Assignment
+Backend/DB-Model/Cloud-State
+Backend/DB-Migration
+Backend/Pubsub-Setup
+Backend/Provider/Abstraction
+Backend/Provider/Abstraction-Min-Implementation
+Backend/API/User-Data-CRUD
+Backend/API/Task-CRUD
+Backend/API/Task-Machine-Assignment
+Backend/API/Bot-Facing-APIs
+Backend/Coordination/Task-Scheduling
+Dashboard/Login
+Dashboard/User-CRUD
+Dashboard/Task-List-Management
+Dashboard/Task-Create-View
+Dashboard/Task-Provisioning
+Dashboard/Machines-Overview
+Bot/Web-Server-Interface-Setup
+Bot/Refactor-Existing-Tasks
+Bot/Connect-API-To-Web-Routes
+Bot/DB-State-Setup
+Bot/Deploy-updates
+Bot/Instance-image
+
+",82182,,,,,42758.75625,how can user story maps be used for projects with heavy backend work and little UI,,1,3,,,,CC BY-SA 3.0, +340715,1,,,1/23/2017 7:43,,3,482,"

I've come across a few cases lately where a package on NuGet has a name that starts with ""Microsoft"" but is actually uploaded by someone else. Take Microsoft.TestApi for example. Ostensibly this is a NuGet wrapper around the TestAPI project on CodePlex. The CodePlex project is from Microsoft as there are blog posts hosted on a Microsoft domain talking in great detail about the project and containing direct links to the CodePlex site. So I'm comfortable that downloading the package from there is safe.

+ +

However, the team who made the CodePlex project have not created a NuGet wrapper. Someone else has gone ahead and created one, which has gained some traction - 15k downloads at the time of writing. The owner is a personal account, in contrast to the clearly official Microsoft one used for other Microsoft packages. So far the only evidence I have for the provenance of the TestAPI package is a conversation on CodePlex where the owner of the original package looks like they are an acquaintance of the person who uploaded the NuGet one.

+ +

I feel like the above security credential is weak, and I am therefore minded to obtain the source directly from CodePlex. I would rather get it from NuGet though as then it is consistent with all my other packages. Have I missed something in NuGet's validation process? Maybe there is a signing process where it does not matter who uploads it because the package is securely signed and versioned previously?

+",75084,,4,,42761.47014,42761.47014,How can I be sure that an unofficially uploaded NuGet package is genuine?,,1,4,,,,CC BY-SA 3.0, +340716,1,340732,,1/23/2017 7:45,,3,753,"

I have a requirement where user selects a ReportType from a dropdown and hits download button. Based on his type chosen, the system should generate a report. Right now I have only report type that is QuoteReport. In future I will have other report types like PolicyReport,ClaimReport. Right now I have no idea what will be data-fields in these reports too, But ""They all will have at least some common properties"" such as ID, and Address

+ +
 public class QuoteReport
+ {
+  public String DeviceType { get; set; }
+  public String ProductName { get; set; }
+  public String Description { get; set; }
+  public String ID { get; set; }
+  public String Address { get; set; }     
+ }
+
+ +

Now what I am doing is I send reporttype and parameters to fill the report and I have created a switch case to catch type of report being selected.

+ +
public string PrepareReport(string selectedReport, List<int> Ids)
+{
+string response = string.Empty;
+try
+{
+    ReportTypeEnum reportTypeEnum;
+    if (Enum.TryParse(selectedReport, out reportTypeEnum))
+    {
+        switch (reportTypeEnum)
+        {
+            case ReportTypeEnum.QuoteReport:
+                response = CreateReportData(Ids,response);
+                break;
+            default:
+                break;
+        }
+    }
+}
+catch (Exception exc)
+{
+    handleException(DOWNLOAD_REPORT, exc);
+}
+return response;
+}
+
+ +

My method CreateReportData fills the fields of QuoteReport class from wcf.

+ +
 public string CreateReportData(List<int> Ids, string response)
+{
+List<QuoteReport> quoteReportList = new List<QuoteReport>();            
+foreach (var Id in Ids)
+{
+    dynamic dynamicEntity;
+    List<string> devices = proxy.GetData(Id);
+    for (int i = 0; i < devices.Count; i++)
+    {
+        QuoteReport quoteReport = new QuoteReport();
+        dynamicEntity = JObject.Parse(devices[i]);
+        quoteReport.Type = dynamicEntity.DeviceTypeString;
+        quoteReport.ProductName = dynamicEntity.ProductName;
+        quoteReport.Description = dynamicEntity.Desc;
+        quoteReport.ID = dynamicEntity.ID;
+        assetReport.Address = dynamicEntity.Address;
+         quoteReportList.Add(quoteReport );
+
+    }
+}
+response = JsonConvert.SerializeObject(quoteReportList );
+return response;
+}
+
+ +

Now I am perplexed how can I make my code more generic. Or should I use some design patterns like Factory to make code adaptable for future needs? How can I make CreateReportData method generic so that it accepts any class type and fills it up?

+",260274,,13156,,42758.63472,42759.1125,how to create generic class structure,,2,1,,,,CC BY-SA 3.0, +340724,1,340727,,1/23/2017 9:58,,15,6430,"

Assuming an IReader interface, an implementation of the IReader interface ReaderImplementation, and a class ReaderConsumer that consumes and processes data from the reader.

+ +
public interface IReader
+{
+     object Read()
+}
+
+ +

Implementation

+ +
public class ReaderImplementation
+{
+    ...
+    public object Read()
+    {
+        ...
+    }
+}
+
+ +

Consumer:

+ +
public class ReaderConsumer()
+{
+    public string location
+
+    // constructor
+    public ReaderConsumer()
+    {
+        ...
+    }
+
+    // read some data
+    public object ReadData()
+    {
+        IReader reader = new ReaderImplementation(this.location)
+        data = reader.Read()
+        ...
+        return processedData    
+    }
+}
+
+ +

For testing ReaderConsumer and the processing I use a mock of IReader. So ReaderConsumer becomes:

+ +
public class ReaderConsumer()
+{
+    private IReader reader = null
+
+    public string location
+
+    // constructor
+    public ReaderConsumer()
+    {
+        ...
+    }
+
+    // mock constructor
+    public ReaderConsumer(IReader reader)
+    {
+        this.reader = reader
+    }
+
+    // read some data
+    public object ReadData()
+    {
+        try
+        {
+            if(this.reader == null)
+            {
+                 this.reader = new ReaderImplementation(this.location)
+            }
+
+            data = reader.Read()
+            ...
+            return processedData    
+        }
+        finally
+        {
+            this.reader = null
+        }
+    }
+}
+
+ +

In this solution mocking introduces an if sentence for the production code since only the mocking constructor supplies an instances of the interface.

+ +

During writing this I realise that the try-finally block is somewhat unrelated since it is there to handle the user changing the location during application run time.

+ +

Overall it feels smelly, how might it be handled better?

+",260284,,1204,,42758.80972,42758.80972,Mocking introduces handling in production code,,3,4,2,,,CC BY-SA 3.0, +340737,1,340739,,1/23/2017 13:31,,3,687,"

I have an endpoint returning single-element collection (I didn't return just object-instance to keep consistent with resource-as-collection convention, so only get-by-id returns single instance)

+ +
GET /devices?serialNumber=12345
+[
+    {
+        ""id"": 1, 
+        ""serialNumber"": ""12345""
+    }
+]
+
+ +

And the new requirement showed up: some devices are connected into pairs and while searching for device 12345 which is paired with device 78901 I need to retrieve both of those (preferably in single HTTP call). What are my best options? I tried with:

+ +
GET /devices?serialNumber=12345
+[
+    {
+        ""id"": 1, 
+        ""serialNumber"": ""12345""
+    },
+     {
+        ""id"": 2, 
+        ""serialNumber"": ""78901""
+    }
+]
+
+ +

But this breaks the semantics (I filter devices rresoure ""list"" for one S/N and suddenly another device with different S/N pops up)

+ +

Then I tried this:

+ +
GET /devices?serialNumber=12345
+[
+    {
+        ""id"": 1, 
+        ""serialNumber"": ""12345""
+        ""connected"": 
+        {
+            ""id"": 2, 
+            ""serialNumber"": ""78901""
+        }
+    }
+]
+
+ +

But still recursively-nested resource doesn't feel right since this is totally different from how the domain model is expressed. +Is there any better way to design this endpoint without exposing the details of how they are connected? There is some business logic which is pretty complicated and irrelevant to the endpoint consumer, she only needs to know whether there is a paired device or not.

+",260307,,,,,42759.43472,"How to design HTTP endpoint which search for single resource instance and returns it with another, connected instance",,4,2,,,,CC BY-SA 3.0, +340750,1,340758,,1/23/2017 19:28,,5,2908,"

I've been bothered by this line of code I've written and I've been a bit confused into what should be written instead.

+ +
class SomeClass
+{
+    IBeneficiary _latestBeneficiary => new Beneficiary(Iban, Name);
+}
+
+ +

In context, the field represents the latest version of a beneficiary object that is about to be created, and I want this variable to represent its latest possible version considering whatever is inside those public properties.

+ +

Here are my assumptions and thought process, I'm thinking there is something wrong in there otherwise I wouldn't have an issue.

+ +

So, clearly, this is a field. It's a field because it is a private variable I'm keeping inside my class, and to recognize it, I add an underscore as a prefix.

+ +

That field always returns the latest IBeneficiary possible consideringIban and Name (irrelevant here). Those properties are public and are classic MyProperty SomeProperty { get; set; } defined in the class.

+ +

I've defined a field with a property getter, the =>. This is confusing because it's not the expected behaviour of a field to always return something new. Or is it?

+ +

I feel like this then should be a function, something like

+ +
IBeneficiary CreateLatestBeneficiary (MyProperty param1, MyOtherProperty param2)
+{ 
+      return new Beneficiary(param1, param2);
+} 
+
+ +

Or even name it GetLatestBeneficiary, but in both cases this looks and feels like a really simple getter, so I'd rather have a property with a single getter, that does exactly this, like the following. Right?

+ +
IBeneficiary LatestBeneficiary
+{
+   get
+   {
+       return new Beneficiary(Iban, Name);
+   }
+}
+
+ +

But a private property is pretty much a field. Isn't it?

+ +

And with that in mind we're back to square one, using a field.

+ +

I feel like somewhere in there, one of my statements is wrong.

+ +

Are private properties okay ? Or is a property by definition something public, with at least a getter ? Or is it okay for a field to not be a plain old variable ?

+ +

Ultimately, how would you write this line of code yourself?

+",151303,,,user22815,42758.88889,42759.4375,Correct usage of Property vs Field vs Function in C#,,2,5,,,,CC BY-SA 3.0, +340761,1,,,1/23/2017 23:25,,1,182,"

I have an application that involves processing packets of IDs from a variety of sources. Some of these sources contain the information I want, some of them constitute, effectively, noise. Currently, whenever my application receives a data packet, it checks the database to verify that the received IDs match internal data before doing more work. I would like to eliminate this step and/or minimize the processing due to noise.

+ +

One idea I had was to make some portion of my IDs non random. E.g. instead of using a completely random UUID, I might replace the last four characters with a fixed string - thus my application can perform a simple check that will be able to easily filter out the noise 99.9% of the time.

+ +

However, this seems... dirty...

+ +

Another idea would be to create hashes from some bit of random string and some other constant. Hence when un-hashed, I could locally detect if the constant matched. However, this just seems like adding a layer of complexity on top of the dirty solution.

+ +

How should I best deal with this situation? Am I right in my instinct that my proposed solution is a potentially bad idea?

+ +
+ +

Each time my application (a mobile app) receives a packet (bluetooth low energy beacon data), it sends a request to a server (AWS lambda function), which then queries the database (DynamoDB) to find the ID and responds with the result of the query. Servers, database throughput, and API calls cost money. Hence, reducing the number of times I have to perform this operation reduces my costs. Furthermore, I feel that minimizing the total amount of bandwidth I'm requiring from my users mobile phones is just a nice thing to do.

+",238456,,61852,,42759.72222,42759.72222,"Embedding data inside IDs, good idea?",,3,5,,,,CC BY-SA 3.0, +340762,1,340997,,1/23/2017 23:30,,2,160,"

I've spent the last few days working with tensorflow for the first time as part of a natural language processing assignment for my degree. It's been interesting (fun isn't the right word) trying to get it to run on a GPU but it got me thinking.

+ +

The recent advances in deep learning have come about as GPGPU technologies have matured and the frameworks have arrived to make doing massive amounts of linear algebra on your computer much quicker and easier. Nvidia now sell chips that are designed specifically for this task and from what I understand, papers like the one featuring AlexNet would not have been possible without GPU acceleration. This point is nicely articulated by the authors:

+ +
+

All of our experiments suggest that our results can be improved + simply by waiting for faster GPUs and bigger datasets to become + available.

+
+ +

Given this, my question is then why haven't we seen more adoption of GPUs for traditional HPC tasks (simulation, rendering etc.)? These workloads have been around for years yet it seems like only recently that GPGPU has taken off as an approach. It appears to me that the requirements are pretty similar, namely 'read a load of data, do a load of floating point transforms on the data, save some data, repeat' but a look at the TOP500 reveals that many of the systems on there are still using CPUs (although an increasing number using 'manycore' processors like the Intel Phi which seem to straddle the CPU/GPU divide).

+ +

Are there actually less similarities between traditional HPC and large-scale ML workloads than I imagine? Maybe it's that GPUs are less efficient in terms of flops/w which is what really matters when it comes to running a huge compute cluster? Is it just down to a very effective marketing effort by Nvidia in the machine learning space?

+",211229,,251821,,42761.99931,42762.03056,Large Scale Machine Learning vs Traditional HPC Hardware,,1,0,,,,CC BY-SA 3.0, +340769,1,,,1/24/2017 1:08,,1,611,"

This question may be a bit subjective, but I have tried three different solutions and none of them has felt right. I will provide some context and the solutions I have tried. The issue I am facing seems to boil down to whether or not I should split an interface into multiple interfaces / sub-interfaces, or maybe make it generic instead. Three different solutions (that I am aware of).

+ +

I am working on an inventory system for a game. Items, which can be placed in the inventory, can have an action associated with them which will be performed when the item is clicked. This is accomplished through the use of an interface, IAction, which has a void Perform(IActor actor) method declaration. Not all items implement IAction, and different items may have different actions to be performed, causing the IActor interface to contain a lot of method declarations.

+ +

The first solution I used had all these methods on IActor and I did not like it since not all actors needed all the methods on IActor. I decided to create sub-interfaces of IActor, such as IHealthUser, as a way to segregate the methods. The reason I kept IActoras a base interface was because of the void Perform(IActor actor)method declaration on IAction. The solution ended up looking like this:

+ +
public interface IActor { ... }
+public interface IHealthUser : IActor { void AddHealth(int amount); }
+public interface IAction { bool Perform(IActor actor); }
+public class HealthPotion : IAction
+{
+    public bool Perform(IActor actor)
+    {
+        var performedAction = false;
+        var healthUser = actor as IHealthUser;
+        if (healthUser != null)
+        {
+            healthUser.AddHealth(Amount);
+            performedAction = true;
+        }
+        return performedAction;
+    }
+}
+
+ +

and is called from the event handler method in the inventory like this

+ +
var action = Item as IAction;
+if (action != null)
+{
+    var performedAction = action.Perform(Owner); // Where 'Owner' is IActor
+    ...
+}
+
+ +

I had to change the return type of Perform() from void to bool to indicate whether or not the action was actually performed, since I have to cast actor inside the method before being able to do perform the action. This works, but I am not sure it is a good / ""standard"" solution.

+ +

I also tried making IAction generic:

+ +
public interface IAction<T> where T : IActor { void Perform(T actor); }
+
+ +

which makes the implementation of IAction<T> quite clean and also gets rid of the bool return value:

+ +
public class HealthPotion : IAction<IHealthUser>
+{
+    public void Perform(IHealthUser healthUser)
+    {
+        healthUser.AddHealth(Amount);
+    }
+}
+
+ +

This solution, however, puts a lot of burden on the callers, which I do not have a good solution for:

+ +
// Do we have to cast and test for every possible IActor interface?
+var action = Item as IAction<IHealthUser>;
+var action = Item as IAction<IManaUser>;
+...
+
+ +

The easiest solution is to just let all methods stay on IActor, but that is ugly. The current solution, casting inside Perform() works ok. The generic solution feels most right of the three, if it were not for the issue I have with figuring out what to cast Item to before calling Perform(). My question, therefore, is, is the generic solution the way to go and if so, how should I go about figuring out what to cast Item to?

+",97450,,,,,42759.19236,How to figure out what interface to cast to?,,1,6,1,,,CC BY-SA 3.0, +340773,1,,,1/24/2017 1:38,,1,53,"

For example, say I have a service that kicks off some operation, such as running a cron-job. Then, it returns whether it not it was successful.

+ +

The code might look something like

+ +
 var service = new MetricSyncWakeJob(...);
+ var jobStarted = service.performSyncJob();
+
+ if(jobStarted) {
+     notificationService.notifySuccess(""Operation was successful."");
+ } else {
+     notificationService.notifySuccess(""Operation was NOT a success."");   
+ }
+
+ +

Is it ever worth wrapping up such code or this something akin to eager-abstraction? If so, how would you do this this? What a Facade work here?

+ +

i.e: UserInteractiveMetricSyncWakeJob which can kick off the job and fire the message automatically?

+ +

It's not the job of the service to kick off this message -- otherwise, it could just be put there. I guess the class could look like this...

+ +
class UserInteractiveMetricSyncWakeJob
+  UserInteractiveMetricSyncWakeJob(INotificationService notifcationService, MetricSyncWakeJob wakeJob)
+
+ +

It seems overkill to do this -- yet, copy & pasting the same ""handle this response"" code seems bad, too.

+",96062,,96062,,42759.14167,42759.57361,Is it worth encapsulating messages shown to a user from a repeated operation?,,1,0,,,,CC BY-SA 3.0, +340775,1,,,1/24/2017 2:29,,0,337,"

As a common practice followed by agile team members using agile methods(scrum, kanban etc), they volunteers/sign-up/pick-up/self-select tasks from the backlog using Jira/Trello/etc.

+ +

What I am interested in is knowing is that what are the challenges associated with this approach of self-selection.

+ +

I'm sure agile practitioners encountering situations where multiple people wanting same tasks, all tasks looking repetitive, similarly, if some critical issue comes up, then you see a task assigned to you directly. I'm sure in a real world there would be many cases.

+ +

Can you please share some of your experiences to support or against this?

+",260340,,1204,,42759.19306,42760.65625,Are there any challenges with self-selecting/volunteering/signing up for agile tickets/tasks?,,4,2,1,42759.58264,,CC BY-SA 3.0, +340780,1,340785,,1/24/2017 3:36,,1,306,"

When I want to implement axis aligned 2d rectangles I always go with {x, y, w, h}, because that is the natural approach to it. With 3d axis aligned rectangles you need {x, y, z, w, h, d(depth)}. For a 2d triangle I need {x1, y1, x2, y2, x3, y3}. But what do I need for axis aligned right triangles. How would you store them?

+ +

I can imagine going with the same data as a rectangle and then a number 0 to 3 indicating which point is opposite to the hypotenuse. I can also imagine going with {x, y, w, h}, where w and h is allowed to be negative (opposite to the normal rects).

+ +

Which is the common approach to implementing right triangles?

+ +

EDIT:

+ +

Well I finally decided to go with {x, y, w, h, r}, where w, h >= 0 and r is the radians. +So at first I can concentrate r = {0, pi/2, pi, 3pi/2} and if I wanna go crazy later on, I can do just that without breaking my interface.

+",217255,,217255,,42759.70556,42759.70556,How do you usually implement right triangles in programming,,3,9,,,,CC BY-SA 3.0, +340788,1,340796,,1/24/2017 8:28,,10,1182,"

Background

+ +

I m reading the ""Clean Code book"", and, in paralel, I m working on calisthenic objects Kata like the banker account, and I m stuck on that rule :

+ +

The 9th rule of calisthenic objects is that we don't use getter or setters.

+ +

It seems pretty fun, and I agree with this principle. Moreover, at page 98-99 of Clean Code, the author explains that getters / setters break abstraction, and that we don't have to ask our object, but we have to tell our object.

+ +

This makes perfect sense in my mind, and I fully agree with this principle. The problem comes in practice.

+ +

Context

+ +

For example, I m having an application in which I have to list some users, and to display the user details.

+ +

My user is composed of :

+ +
-> Name
+   --> Firstname --> String
+   --> Lastname --> String
+-> PostalAddress
+   --> Street --> String
+   --> PostalCode --> String
+
+ +

Problem

+ +

How can I do, or what can I do to avoid getters when I only need to display a simple information (and I have to confirm that I don't need extra operation on that particular field) to display the Firstname value in a simple (random) output support ?

+ +

What comes up in my mind

+ +

One solution is to make :

+ +
user.getName().getFirstName().getStringValue()
+
+ +

Which is totatally terrible, breaking many rules of calisthenic objects, and breaking the Demeter Law.

+ +

Another one would be something like :

+ +
String firstName = user.provideFirstnameForOutput();
+// That would have called in the user object =>
+String firstName = name.provideFirstnameForOutput();
+// That would have called in the name object =>
+String firstName = firstname.provideFirstnameForOutput();
+
+ +

But I don't feel comfortable with this solution, that only seem to be a ""higher order accessor"" like bypassing standard getter/setter with a method that only aims to match Demeter law...

+ +

Any idea ?

+",160326,,,,,42759.48403,"Avoid getters and setters, displaying user informations",,2,0,3,,,CC BY-SA 3.0, +340802,1,,,1/24/2017 13:20,,-1,3271,"

One of the client for whom I had worked has a MVVM architecture for web application.I dont know why they incorporated instead of MVC. is this a feasible idea because as far as I have gone through all MVVM tutorials if they are concerned with XAML or WPF ,they are going for MVVM.Can anybody explain in detail ?.

+",259208,,,,,42759.68194,Can MVVM architecture be used in designing web applications?,<.net>,1,1,,,,CC BY-SA 3.0, +340803,1,340809,,1/24/2017 13:28,,20,5442,"

I'm developing a physics simulation, and as I'm rather new to programming, I keep running into problems when producing large programs (memory issues mainly). I know about dynamic memory allocation and deletion (new / delete, etc), but I need a better approach to how I structure the program.

+ +

Let's say I'm simulating an experiment which is running for a few days, with a very large sampling rate. I'd need to simulate a billion samples, and run over them.

+ +

As a super-simplified version, we'll say a program takes voltages V[i], and sums them in fives:

+ +

i.e. NewV[0] = V[0] + V[1] + V[2] + V[3] + V[4]

+ +

then NewV[1] = V[1] + V[2] + V[3] + V[4] + V[5]

+ +

then NewV[2] = V[2] + V[3] + V[4] + V[5] + V[6] +...and this goes on for a billion samples.

+ +

In the end, I'd have V[0], V[1], ..., V[1000000000], when instead the only ones I'd need to store for the next step are the last 5 V[i]s.

+ +

How would I delete / deallocate part of the array so that the memory is free to use again (say V[0] after the first part of the example where it is no longer needed)? Are there alternatives to how to structure such a program?

+ +

I've heard about malloc / free, but heard that they should not be used in C++ and that there are better alternatives.

+ +

Thanks very much!

+ +

tldr; what to do with parts of arrays (individual elements) I don't need anymore that are taking up a huge amount of memory?

+",260412,,,,,42762.51389,"Professional way to produce a large problem without filling up huge arrays: C++, free memory from part of an array",,3,13,4,,,CC BY-SA 3.0, +340813,1,,,1/24/2017 15:19,,3,1440,"

I'm just discovering the Go programming language.

+ +

(FWIW, I am fluent in C++, Ocaml, C, Common Lisp, Scheme, I know well Linux, and I have designed & implemented GCC MELT; I am considering a rewrite of some MELT monitor in Go, but have not decided it yet)

+ +

I am a bit confused about the required layout of some Go workspace. I still am unease with the notion of packages in Go (they seems similar to Ocaml's modules) and the interfaces in Go (they look like Ocaml's signatures or module types).

+ +

The How To Write Go Code document mentions both a pkg/ and a src/ mandatory subdirectory, but Michael Maccanis' Oh shell example don't have any src/ and I was still able to build it using the

+ +
 go get github.com/michaelmacinnis/oh
+
+ +

command mentionned in its README.md. Why (and how) does that work (without any src/) ?

+ +
+ +

If that is important, I'm using go1.8rc2 on Linux/amd-64 (Debian/Sid). I'm trying Go right now (and did not look into it before) because Go 1.8 (scheduled before spring 2017) should have plugins and these are a feature essential to me.

+ +

PS. I've read (and liked) the Go for C++ programmer wiki; but I might miss the Go for Ocaml programmer equivalent.

+",40065,,40065,,42759.74653,42761.90069,directory layout of a Go-lang project?,,1,0,,,,CC BY-SA 3.0, +340816,1,,,1/24/2017 16:05,,1,325,"

I'm reading about Angular on their website here: https://angular.io/features.html, and I see the following:

+ +
+

Cross Platform

+ +

Progressive web apps - Use modern web platform capabilities to deliver app-like experiences. + High performance, offline, and zero-step installation.

+ +

Native - Build native mobile apps with strategies from Ionic Framework, + NativeScript, and React Native.

+ +

Desktop - Create desktop-installed apps across Mac, Windows, and Linux using the + same Angular methods you've learned for the web plus the ability to + access native OS APIs.

+
+ +

What part of Angular is this page talking about when it says that you can ""create desktop-installed apps across Mac, Windows, and Linux""? Does Angular have some built-in ability to generate desktop apps? Or are they talking about using some 3rd-party framework like e.g. Electron?

+",27492,,,,,43151.83819,Angular and desktop,,1,1,,,,CC BY-SA 3.0, +340818,1,340819,,1/24/2017 16:45,,4,290,"

I inherited a project that includes a database (MS SQL Server) that was filled to the brim with bad data. I've manually cleaned it up, which was a painstakingly tedious process and am working on modernizing the application (ASP.NET MVC w/ EF6).

+ +

One of the tables is for tracking an item as various processes are performed on it through the organization's facility. There are a lot of rules that go into this table to make sure an item's state is valid. Some of the many rules include:

+ +
    +
  • An item only has a process performed on it if that process is valid for the job the item is assigned. (simple look up to check)
  • +
  • Process B cannot be performed until Process A was marked complete (look up that there's a record in the database that Employee X completed Process A, keeping in mind that an employee can have a record that they worked on a process, but didn't complete it)
  • +
  • Employee 1 cannot mark a process complete on an item if they were working on it with Employee 2, but Employee 2 is still checked into the item for that process (multiple employees can be working on an item for the same process at the same time).
  • +
  • There should never be a start time for Process B that occurs before the finish time for Process A.
  • +
+ +

There are a few other rules, but those should give anyone a feel for the type of constraints that are needed. I was able to create CHECK CONSTRAINTS for a few things, such as a process being valid for a particular item, start times are always before finish times, etc. However, some of the more complicated rules that require look-ups before insertion, updates, or deletions (as described above) I'm not sure if they should be business rules within the application or a trigger / stored procedure that prevents data from being inserted/updated/removed that could cause an invalid state for an item.

+ +

I guess I'm just not sure if it's OK to allow for the potential for an invalid state if someone attempts to perform a C_UD query directly on the database. If this is the case it's pretty clear that these would be best as business rules that I can have within an entity, but it might be faster to have the database handle these operations, as read speed is way more important than write speed for this application. What's the consensus of where within an application's stack should these types of constraints reside?

+",102059,,61852,,42759.70903,42759.71597,Complicated Data Constraints -- Business Rules or Database Constraints?,,1,0,1,,,CC BY-SA 3.0, +340820,1,,,1/24/2017 17:16,,1,985,"

I've been writing a lot of time related code recently, and in various files, I always end up re-defining things like

+ +

var SECOND = 1000; + var MINUTE = 60 * SECOND; +

+ +

This gets frustrating quickly. So I'm considering two possibilities:

+ +
    +
  1. Getting rid of the constants and instead letting things be inferred from code like 60 * 1000 + +
      +
    • I dislike this option because it's not as human readable
    • +
  2. +
  3. Attaching the constants to a global so they only have to be written in one place + +
      +
    • I think this is the best way to go, but I'm unsure about the potential consequences of this
    • +
  4. +
  5. I could use a package and import from it + +
      +
    • This has the same problem that I'm already doing, which would be defining it everywhere over and over
    • +
  6. +
+ +

How do you handle this issue or is it something we have to live with?

+ +

Side note 1: I am writing this in JavaScript, which is why globals are an option, but I feel like this might still be applicable to other languages

+ +

Side note 2: Specifically for JavaScript, why are these constants not already attached to the global objects browser/globals?

+",123606,,,,,42759.75972,"How do you handle time unit constants (second, minute, etc)?",