diff --git "a/stack_exchange/SE/SE 2019.csv" "b/stack_exchange/SE/SE 2019.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2019.csv" @@ -0,0 +1,95286 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense,,,,, +384775,1,,,1/1/2019 10:56,,-3,427,"

If a file foo.cpp already includes foo.h, and foo.cpp requires some types from a standard library header (for example, string.h), which is better: to include string.h in foo.cpp, or in foo.h?

+ +

For example, Guideline #9 in this tutorial recommends including it in the .cpp, if possible, but I don't understand exactly why.

+",324699,,9113,,1/2/2019 10:49,1/2/2019 10:49,Is it a bad practice to include stdlib header file from a header file corresponding to the source file that needs that stdlib header?,,2,12,1,,,CC BY-SA 4.0,,,,, +384780,1,384796,,1/1/2019 13:07,,2,87,"

I'm designing an API for a Python library. The user will create objects with several parameters. In most cases, the user will either leave these at their default values or will set them globally, for all objects. However, it should be possible also to set them individually on a per-object basis.

+ +

The most obvious way to do this is to do something like this:

+ +
# myModule.py
+
+contrafibularity_threshold = 10.7
+pericombobulation_index = 9
+compunctuous_mode = False
+
+class Thing:
+    def __init__(self):
+        self.contrafibularity_threshold = None
+        self.pericombobulation_index = None
+        self.compunctuous_mode = None
+
+    def get_contrafibularity_threshold(self):
+        if self.contrafibularity_threshold is not None:
+            return self.contrafibularity_threshold
+        else:
+            return contrafibularity_threshold
+
+    def get_pericombobulation_index(self):
+        if self.pericombobulation_index is not None:
+            return self.pericombobulation_index
+        else:
+            return pericombobulation_index
+
+    def get_compunctuous_mode(self):
+        if self.compunctuous_mode is not None:
+            return self.compunctuous_mode
+        else:
+            return compunctuous_mode
+
+ +

This works as I would like: it allows the user to do myModule.contrafibularity_threshold = 10.9 to set the global value while also being able to do someThing.contrafibularity_threshold = 11.1 to set it for a particular object. The default may be changed at any time and will affect only those objects to which a specific value has not been assigned.

+ +
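
To illustrate the behaviour I am after, this is the usage I have in mind (my own example, not part of any existing library):

+ +
import myModule
+
+thing_a = myModule.Thing()
+thing_b = myModule.Thing()
+
+myModule.contrafibularity_threshold = 10.9   # change the global default
+thing_b.contrafibularity_threshold = 11.1    # override it for this object only
+
+thing_a.get_contrafibularity_threshold()     # 10.9 - follows the global default
+thing_b.get_contrafibularity_threshold()     # 11.1 - uses its own value
+
+ +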

However, the code above contains a lot of repetition, and seems prone to hard-to-notice bugs if I make a mistake copy-pasting the code. Is there a better (less repetitive, less error-prone, more Pythonic) way to achieve these goals? I don't mind changing the API, as long as the user can change the defaults at both the global and per-object level.

+ +

(One could arguably improve the above code by using @property, but that wouldn't resolve the repetitive code issue.)

+",91273,,91273,,1/1/2019 13:13,1/1/2019 19:25,Designing a Python API with defaults,,2,0,1,,,CC BY-SA 4.0,,,,, +384787,1,384788,,1/1/2019 15:59,,6,629,"

In Chapter 10 of Clean Architecture, Martin gives an example for the Interface Segregation Principle. I have some trouble understanding that example and his explanations.

+ +

In this example we have three separate Users (Classes) that use a Class called OPS. OPS has three methods, op1, op2, and op3. Each of these is only used by one user (op1 only by User1 and so on).

+ +

Martin now tells us that any change in OPS would result in a recompilation for the other classes since they all depend on OPS, even if the change was performed in a method that is of no interest to them. (So a change in op2 would require a recompilation of User1.)

+ +

He argues that there should therefore be three separate interfaces, one for each method. The OPS class then implements all of them, while each user depends only on the interface it actually needs. So you have User1 depending only on Interface1, and so on.

+ +

According to Martin, this would stop the otherwise necessary redeployment of, say, User1 if the implementation of op2 in OPS was changed (since User1 does not use the interface that describes op2).

+ +

I had my doubts and did some testing. (Martin explicitly used Java for his example, so I did as well.) Even without any interfaces any change in OPS does not cause any user to be recompiled.

+ +

And even if it did (which I thought it would), using three interfaces and then having the same class implement all three of them makes no sense to me either. Wouldn't any change in that class require all of the users to be recompiled, interface or no? Is the compiler smart enough to separate where I did my changes and then only recompile those users that rely on the interface describing the method I changed? I kind of doubt that.

+ +

The only way how this principle makes sense to me is if we were to split the OPS class into three different classes, interfaces or no. That I could understand, but that's explicitly not the answer Martin gives.

+ +

Any help would be greatly appreciated.

+",324715,,,,,1/1/2019 17:55,Interface Segregation Principle in Clean Architecture,,2,1,5,,,CC BY-SA 4.0,,,,, +384800,1,384805,,1/2/2019 3:59,,4,347,"

I've made an engine that plays Connect Four using the standard search algorithms (minimax, alpha-beta pruning, iterative deepening, etc). I've implemented a transposition table so that the engine can immediately evaluate a same position reached via a different move order, and also to allow for move-ordering.

+ +

The problem with the TT is that on each step of the iterative deepening process, the number of bytes it takes up at least doubles. The TT stores objects that represent important info for a given position. Each of these objects is 288 bytes. Below, the depth limit is how far the engine searches on each step of iterative deepening:

+ +

depth limit = 1 - TT size = 288 bytes (since just one node/position looked at).

+ +

depth limit = 2 - TT size = 972 bytes.

+ +

depth limit = 3 - TT size = 3708 bytes.

+ +

depth limit = 4 - TT size = 11664 bytes

+ +

depth limit = 5 - TT size = 28476 bytes.

+ +

....

+ +

depth limit = 12 - TT size = 11,010,960 bytes.

+ +

depth limit = 13 - TT size = 22,645,728 bytes.

+ +

And now at this point the .exe file crashes.

+ +

I'm wondering what can be done about this problem. In Connect Four the branching factor is 7 (since there are usually 7 possible moves that can be made, at least in the beginning of the game). The reason the TT isn't growing by a factor of 7 on each step is due to pruning methods I've implemented.

+ +

This problem isn't a big deal if the engine only searches up to depth 8-9, but my goal is to get it to solve Connect Four by going all the way to the end (so depth limit = 42).

+",287384,,,,,1/3/2019 0:23,Game Playing AI - Strategy to overcome the transposition table taking up too much memory?,,2,3,,,,CC BY-SA 4.0,,,,, +384817,1,384822,,1/2/2019 12:55,,0,1988,"

I am using Kafka. I am developing a simple e-commerce solution. I have a non-scalable catalog admin portal where products, categories, attributes, variants of products, channels, etc are updated. For each update, an event is fired which is sent to Kafka.

+ +

There can be multiple consumers deployed on different machines and they can scale up or down as per load. The consumers consume and process the events and save changes in a scalable and efficient database.
+The order of events is important for me. For example, I get a product-create event: a product P is created and lies in category C. It is important that the event for the creation of category C is processed before the product-create event for product P. Now if there are two consumers, and one consumer picks up the product-create event for product P while the other consumer picks up the event for the creation of category C, the product-create event may be processed first, which will lead to data inconsistency.
+There can be multiple such dependencies. How do I ensure ordered processing, or find some alternative that ensures data consistency?

+ +

Two solutions that are right now in my mind:

+ +
    +
  1. We can re-queue an event until its dependent event is successfully processed.
  2. +
  3. We can wait for the dependent event to get processed and try processing the event at some intervals say 1 second with some maximum retries.
  4. +
+ +

Requeuing has the issue that the event may by then be stale and no longer required. Example:

+ +
    +
  • Initial Order = Create-Event(Dependent on event X), Event X, Delete-Event .
  • +
  • After Requeuing, Order = Event X, Delete-Event, Create-Event (dependent on event X).
    +The create event is processed after the delete event, again leading to inconsistent data.
  • +
+ +

The same issue is applicable to the second solution (waiting and retrying).

+ +

The above issues can be solved by maintaining versions for events and ignoring an event if the targeted object (the one the event is going to modify) already has a higher version than that of the event.
+But I am very unsure of the pitfalls and challenges of the above solutions that might not be very obvious right now.

+ +
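
To sketch what I mean by the version check (the event and store shapes below are just illustrative, not from any particular framework):

+ +
def apply_event(store, event):
+    current = store.get(event.target_id)
+    # ignore stale events: a newer change to this object has already been applied
+    if current is not None and current.version >= event.version:
+        return
+    store.save(event.target_id, event.new_state, event.version)
+
+ +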

PS: Stale data works for me but there should be no inconsistencies.

+",257969,,257969,,1/2/2019 13:41,1/2/2019 14:53,Maintaining order of events with multiple consumers in a typical pub sub scenario,,2,3,,,,CC BY-SA 4.0,,,,, +384819,1,,,1/2/2019 13:24,,0,446,"

I am trying to get my head around ""event driven"" microservices. I understand there are several techniques and patterns, like event notification, event sourcing, CQRS, etc., that can help us achieve that. Very simply put, it boils down to this: some kind of command is sent, which leads to a change of the system's state. If the change was applied, the system emits an event. Other services can listen to these events.

+ +

But what about querying a microservice for data? Let's say we have an API gateway and some services behind that gateway. Now we want to get a list of all users, which are stored in the user-service. The API gateway could simply send an HTTP GET request to the user-service to receive the list of users. In a way this might lead to tight coupling, but it seems like the most plausible way.

+ +

Can you share your knowledge and experience on when someone should not use HTTP requests for querying a microservice, and what alternatives there are?

+",324705,,,,,11/7/2020 20:02,Querying in event driven microservices,,1,1,,,,CC BY-SA 4.0,,,,, +384820,1,,,1/2/2019 13:27,,1,126,"

When specifying a period, is there ever a case where passing the number of the month as an integer is preferred over passing two datetimes? For example, GetTotalSumByMonth(int month) vs GetTotalSum(DateTime begin, DateTime end).

+ +

It seems to me that the second option has clear advantages since it is more generic and less ambiguous. You wouldn't be able to pass a month of last year with the first option, since the year is never given. And some people might think the number of the month starts at 0 instead of 1, like in JavaScript or C, which could lead to confusion.

+ +

Are there any more pros or cons which might tip the scales?

+",277345,,,,,1/2/2019 20:28,Passing a period as datetimes vs as integer,,2,8,,,,CC BY-SA 4.0,,,,, +384825,1,384828,,1/2/2019 15:20,,1,378,"

In my understanding of Uncle Bob's Clean Architecture, a Use Case (Interactor) is responsible for orchestrating the Entities. At the same time, it represents Application-specific rules. The business rules live in Entities.

+ +

I have a need to orchestrate the Entities to achieve a Business-specific rule. What is the best approach for it?

+ +
    +
  1. Is it allowed to have a Use Case in the Domain layer (rather than in the Application layer) to indicate that this use case is business rules, not application rules?
  2. +
  3. Or should I simply create another Entity that will do the orchestration?
  4. +
+",208210,,4,,2/15/2019 11:05,2/15/2019 11:05,Domain Use Case,,1,1,,,,CC BY-SA 4.0,,,,, +384826,1,,,1/2/2019 15:21,,1,59,"

I am trying to do the architecture of a new application we are building. We wanted something quite modular. Considering we are open source and we want other people to be able to easily add new features, we opted for a component based architecture.

+ +

We have a back end that is made in Java, a front end that will be made with JavaFX, and another front end which is a website. We want the back end to be the same for users that use our application through the website or through a mobile/desktop device (JavaFX). All of my team, including me, are students and we do not have a lot of experience designing new software.

+ +

The problem I am facing is that I want to split the main features into packages, so they form components that work independently of the front end and can easily be modified. The thing is that sometimes, different components will interact with the same model. Let me illustrate my problem so you can understand better.

+ +

This would be a very partial overview of my IntelliJ project structure:

+ +
-/src/main/java/com/ourDomainName/ourAppName
+    -/ApplicationCore
+        -/DriversCore
+        -/LoggingCore
+            -/Signal.Java
+        -/ParsersCore
+            -/Signal.Java
+
+ +

ApplicationCore would be the package that holds all the back end. DriversCore, LoggingCore and ParsersCore are packages that each represent a feature. They are components. I want ParsersCore and LoggingCore to both use a certain model class, Signal. My question is: where should I put that file? This situation doesn't just apply to one file; there are many files in my model that I want different components to use. I know many will just say that I should have a package called model and put all my model classes there, but from what I've seen, I should keep all the model classes related to a component in the same package as that component. So, what exactly is the procedure when you have many model classes that you want to share across different components?

+",324786,,,,,1/2/2019 15:21,How can I have many application components partly share the same model?,,0,2,1,,,CC BY-SA 4.0,,,,, +384837,1,,,1/2/2019 20:08,,0,1159,"

I'm looking for some kind of better compilation of principles which takes the old basic concepts (DRY, KISS, etc...) and applies them to OOP-related concepts like abstract classes, interfaces, etc...

+ +
+ +

Some reasoning behind this quest:

+ +

I find some interpretations of the SOLID principles very compelling, but I've found such diverse interpretations of these principles on the web that I find them close to useless. The idea of a software design principle, IMHO, is to provide a base framework for developers to discuss software development best practices. But when the advocates can't even agree on what their principles mean, it's time to look for alternatives.

+ +

I have also found that people trying to follow these principles create extremely over-modularized architectures, mostly because they decompose simple implementations into even smaller modules, dispersed over the project, which makes it close to impossible to discern the purpose of these micro-modules in the context of the whole project.

+ +
+ +

Summarizing, I just want to know if there is any other well known name for a different group of OOP principles that are more tied to the old basic KISS, DRY, etc...

+ +
+ +

The question has been considered too broad by the community, so let's see how this goes for clarification:

+ +

Let A be a set of renowned names of sets of OOP design principles (call them ""High Level Principles""). I know SOLID to be an element of A. I found that GRASP seems to be another element of A, which I found out thanks to a comment from user949300 on this question.

+ +

Let B be the set of the General Principles listed here which includes these and only these: ML, KISS, MIMC, DRY, GP and RoE. Let's call them ""Low Level Principles""

+ +

Let's say that there is a function T that measures how much a High Level Principle from A is tied to the Low Level Principles from B (as a whole). Eg: T(e) = Tieness of e where e is an element from A.

+ +

I am asking if anyone can name any x such that T(x) >> T(SOLID). Where "">>"" means ""considerably higher than"".

+ +

Such an answer should explain how T(x) is being estimated. I understand the estimation of T will be highly subjective, but with a good explanation, subjective can be useful.

+ +

How can I tell if an answer is better than another answer? I'll consider the explanation and the number of new elements of A that are provided in the answer, but any answer mentioning at least one element from A other than ""SOLID"", and explaining how T(x) is higher for this element, shall be considered correct.

+ +

I hope that makes the question clear enough for the community...

+",314241,,314241,,1/6/2019 4:34,1/6/2019 4:34,Are there any well known alternatives to the SOLID principles for OO programming?,,3,14,,43468.60486,,CC BY-SA 4.0,,,,, +384846,1,384929,,1/2/2019 21:20,,16,6191,"

A lot of the DDD tutorials I have studied mostly cover theory. They all have rudimentary code examples (Pluralsight and similar).

+ +

On the web there are also attempts by a few people to create tutorials covering DDD with EF. +If you study them even briefly, you quickly notice they differ a lot from one another. Some people recommend keeping the app minimal and avoiding additional layers, e.g. a repository on top of EF; others decidedly generate extra layers, often even violating SRP by injecting DbContext into Aggregate Roots.

+ +

My sincere apologies if I'm asking an opinion-based question, but...

+ +

When it comes to practice - Entity Framework is one of the most powerful and widely-used ORMs. You will not find a comprehensive course covering DDD with it, unfortunately.

+ +
+ +

Important aspects:

+ +
    +
  • Entity Framework brings UoW & Repository (DbSet) out of the box

  • +
  • with EF your models have navigation properties

  • +
  • with EF all of the models are always available off DbContext (they are represented as a DbSet)

  • +
+ +

Pitfalls:

+ +
    +
  • you cannot guarantee your child models are only affected via Aggregate Root - your models have navigation properties and it's possible to modify them and call dbContext.SaveChanges()

  • +
  • with DbContext you can access your every model, thus circumventing Aggregate Root

  • +
  • you can restrict access to the root object's children via ModelBuilder in OnModelCreating method by marking them as fields - I still don't believe it's the right way to go about DDD plus it's hard to evaluate what kind of adventures this may lead to in future (quite skeptical)

  • +
+ +

Conflicts:

+ +
    +
  • without implementing another layer of repository which returns Aggregate we cannot even partly resolve the abovementioned pitfalls

  • +
  • by implementing an extra layer of repository we are ignoring the built-in features of EF (every DbSet is already a repo) and over-complicating the app

  • +
+ +
+ +

My conclusion:

+ +

Please pardon my ignorance, but based on the above info, either Entity Framework isn't adequate for Domain-Driven Design, or Domain-Driven Design is an imperfect and obsolete approach.

+ +

I suspect each of the approaches has its merits, but I'm completely lost now and don't have the slightest idea of how to reconcile EF with DDD.

+ +
+ +

If I'm wrong - could anyone at least detail a simple set of instructions (or even provide decent code examples) of how to go about DDD with EF, please?

+",175145,,208831,,5/25/2019 13:25,5/25/2019 13:25,Pitfalls of Domain Driven Design with Entity Framework,,5,1,4,,,CC BY-SA 4.0,,,,, +384852,1,,,1/2/2019 22:36,,1,185,"

While setting up a nodejs server with a mariadb database, I found this:

+ +
+

While the recommended method is to use the question mark placeholder, you can alternatively allow named placeholders by setting this query option. Values given in the query must contain keys corresponding to the placeholder names.

+
+ +

This seems odd to me as the named placeholders seem more readable and the ability to use each instance multiple times makes it more flexible. For example, consider this with the ? method:

+ +
connection.query(
+  ""INSERT INTO t VALUES (?, ?, ?)"",
+  [1,""Mike"",""5/12/1945""]
+)
+
+ +

A named version could look like

+ +
connection.query(
+  { namedPlaceholders: true,
+    sql: ""INSERT INTO t VALUES (:id, :name, :dob)"" },
+  { id: 1, name: ""Mike"", dob: ""5/12/1945"" }
+)
+
+ +

It also seems much more likely for the data to be in an object format over an array anyway. So why recommend a less readable option? Why not make namedPlaceholders default to true instead?

+",100213,,,,,1/3/2019 15:01,Why are unnamed placeholders recommended over named ones?,,1,3,1,,,CC BY-SA 4.0,,,,, +384861,1,,,1/3/2019 0:06,,53,6592,"

I have recently graduated from university and started work as a programmer. I don't find it that hard to solve ""technical"" issues or to debug things that I would say have one solution.

+ +

But there seems to be a class of problems that don't have one obvious solution -- things like software architecture. These things befuddle me and cause me great distress.

+ +

I spend hours and hours trying to decide how to ""architect"" my programs and systems. For example - do I split this logic up into 1 or 2 classes, how do I name the classes, should I make this private or public, etc. These kinds of questions take up so much of my time, and it greatly frustrates me. I just want to create the program - architecture be damned.

+ +

How can I get through the architecture phase more quickly and onto the coding and debugging phase which I enjoy?

+",278692,,73508,,1/3/2019 15:08,1/8/2019 17:18,How to stop wasting time designing architechture,,13,15,14,43468.88681,,CC BY-SA 4.0,,,,, +384866,1,,,1/3/2019 1:00,,1,110,"

This is a very broad question, but maybe someone has a worthwhile response.

+ +

There is a general synchronization issue that often has to be solved, but always seems to be difficult. Here's an example:

+ +

I was working on a remote system and had an ssh-connection and a remote desktop open at the same time for some reason. I happened to create a file on the desktop in shell, and of course it also appeared on the remote desktop view.

+ +

For this to happen one of two things must take place:

+ +

1) the desktop session must be constantly polling the filesystem for changes. Costly, ugly, and of course unlikely.

+ +

2) The system knows that this change made by the ssh-session requires action on the remote desktop side, and updates the view. This is neat and elegant in a sense, but maintaining an accurate capability to decide when any action performed by any process in the system should cause this update is horrendously complex.

+ +

In this case the culprit is the linux kernel (or Desktop environment?) and I presume what it does is the option 2). It's also very common to encounter small bugs and issues that are clearly the result of this kind of issue not being taken care of.

+ +

This kind of problem, where any of multiple changes to a common resource can have an effect on other instances but determining when is very tedious, pops up in many places. +Is there a general approach to this? +Do we form separate trackers that know how an instance is sensitive to changes, so that object can be interrogated? +Does every change to the resource (the filesystem in this case) include a stage of making sure this kind of thing takes place? If so, that too must compound into a massive ordeal. +Does someone happen to know how Linux handles this specific example case?

+",324822,,,,,1/22/2021 9:07,How state updates to existing instances/sessions are generally done?,,1,2,,,,CC BY-SA 4.0,,,,, +384867,1,384871,,1/3/2019 1:13,,0,184,"

I was hired to program a basic, plain-text site for a local business that, amongst other things, provides basic pricing quotes through a JavaScript applet. For obvious reasons, it seemed unnecessary to me to encrypt the traffic to and from the site in any way. However, the person who hired me strongly requested that I set up HTTPS on the site ""for security reasons"". Assuming I provide minimal upkeep, is there any further risk associated with setting up the SSL certificate?

+",324824,,,,,1/3/2019 17:49,Is there any risk in creating a SSL certified site?,,3,3,1,,,CC BY-SA 4.0,,,,, +384869,1,,,1/2/2019 17:21,,3,158,"

A water user can submit an Application for a water right with the hope of getting a Permit to use water, which might later become a BonaFideWaterRight. The right holder may apply to Transfer any of the above items (or others not listed for brevity) by changing ownership, moving it to new ground, splitting it in half and selling the remainder to another individual, etc...

+ +

The above-emboldened states of being for a water right (and other non-water-right things as well) have come to be known here as Processes. All of the above processes have lots of individual work items (sub-processes? But confusingly they're still referred to as Processes) in common, but the only one we need concern ourselves with here is the PointOfDiversion.

+ +

I'm in the midst of an effort to refactor code that I inherited regarding these processes.

+ +

First the abstract parent classes I've created (omitting a fair amount of ISomethingProcess interfaces being inherited along the way) . . .

+ +
public abstract class WREditProcess : IWREditProcess { }
+
+public abstract class WaterRightsProcess : WREditProcess 
+{ 
+    public IWaterRightLookupRepository QueryWaterRights { get; }
+    protected ILocationQueries LocationRepository { get; }
+
+    protected WaterRightsProcess(IWaterRightLookupRepository queryWaterRights, ILocationQueries locationRepository)
+    {
+        QueryWaterRights = queryWaterRights;
+        LocationRepository = locationRepository;
+    }
+    /* Work performed in virtual methods using those repositories */
+}
+
+public abstract class PointOfDiversionProcess : WaterRightsProcess, IPointOfDiversionProcess  
+{
+    protected IPODLocationRepository PODLocationRepository { get; }
+    protected IPointOfDiversionRepository PODRepository { get; }
+
+    protected PointOfDiversionProcess(IWaterRightLookupRepository queryWaterRights, IPODLocationRepository locationRepository, IPointOfDiversionRepository pointOfDiversionRepository)
+        : base(queryWaterRights, (ILocationQueries)locationRepository)
+    {
+        PODLocationRepository = locationRepository;
+        PODRepository = pointOfDiversionRepository;
+    }
+    /* Work performed in virtual methods using those repositories */
+}
+
+ +

There's a large amount of concrete work done in those abstract classes using the repositories passed in from their child classes' constructors. This continues to the concrete classes (the one for transfers shown here in its entirety) . . .

+ +
public class TransferPointOfDiversionProcess : PointOfDiversionProcess
+{
+    protected override ILog Log => LogManager.GetLogger(typeof(TransferPointOfDiversionProcess));
+
+    /// <summary>
+    /// Constructor for a TransferPointOfDiversionProcess
+    /// </summary>
+    /// <param name=""baseWaterRightRepository"">Repository for base water right information</param>
+    /// <param name=""locationRepository"">Repository that abstracts the locPODTransfer table (if such a thing existed, but instead it's a clump of XML)</param>
+    /// <param name=""pointOfDiversionRepository"">Repository that abstracts the PointOfDiversion table</param>
+    [SuppressMessage(""ReSharper"", ""SuggestBaseTypeForParameter"")]
+    public TransferPointOfDiversionProcess(ITransferRepository baseWaterRightRepository,
+        IPODLocationRepository locationRepository,
+        TransferPointOfDiversionRepository pointOfDiversionRepository)
+        : base(
+            baseWaterRightRepository,
+            locationRepository,
+            pointOfDiversionRepository)
+    {
+    }
+
+    /// <inheritdoc />
+    public override string DisplayName => ""Transfer"";
+
+    /// <inheritdoc />
+    public override string ConfigLayerID => ""locPODWRTransfer"";
+
+    /// <inheritdoc />
+    public override string Name => ""locPODWRTransfer"";
+
+    /// <inheritdoc />
+    public override string CorrelateProcessName => ""Transfer"";
+}
+
+ +

Note that the constructor for TransferPointOfDiversionProcess asks for a concrete TransferPointOfDiversionRepository class rather than the IPointOfDiversionRepository interface that its parent specifies. This is critical -- especially for transfers because the TransferPointOfDiversionRepository overrides all sorts of things from its parent because transfers are stored in a wholly different way from everything else. For the same reason, I'm planning a similar TransferPointOfDiversionLocationRepository class to take the place of the IPODLocationRepository parameter as well but haven't gotten there yet.

+ +

ReSharper tickles me with the ""Parameter can be declared with base type"" warning on this parameter, suggesting the IPointOfDiversionRepository type be used instead. I disabled this warning for each constructor, but now I can't shake the feeling that I'm getting this warning because of design flaws--failing to abstract something away or the need for some other pattern to indicate clearly the need for a specific implementation of an interface or something like that--but I can't figure out what. Can anyone suggest improvements (or, even better, tell me not to put so much faith in ReSharper)?

+",317092,id est laborum,317092,,1/3/2019 17:41,1/3/2019 17:41,"Does ReSharper's warning ""SuggestBaseTypeForParameter"" suggest design problems?",,3,0,,,,CC BY-SA 4.0,,,,, +384876,1,,,1/3/2019 7:19,,1,483,"

My question is: is there any reason for the Thread class to implement the Runnable interface itself? Are there any specific use cases where extending Thread makes more sense, by design, than implementing Runnable?

+",277489,,277489,,1/3/2019 17:35,2/4/2019 11:50,Why does the Thread Class implement Runnable interface,,2,2,2,,,CC BY-SA 4.0,,,,, +384878,1,384882,,1/3/2019 8:11,,0,606,"

New to DDD, I have a simple case I would like to model using a DDD approach.

+ +

2 entities Student and Course

+ +

Relevant properties for Student are StudentId and Budget

+ +

Relevant properties for Course are CourseId and Price

+ +

Student and Course are entities that can exist on their own and have their own life cycles

+ +

Business requirements:

+ +

1) Student can book one course (CourseId is fk for Student table)

+ +

2) A student can book the course only if the student's budget is higher than or equal to the course price.

+ +

3) Changes to the course price don't affect students who have already booked the course.

+ +

4) When the student books the course, his budget remains unchanged (it may change later, at the end of the course)

+ +

5) The student's budget can be modified by setting a different amount, but the new amount has to be higher than or equal to the price of the course the student booked. +Setting a lower amount should throw a runtime error.

+ +

What is the way to model this simple case following domain-driven design? Where should I enforce the two business rules (points 2 and 5)?

+ +

As a Course can exist without a Student, I can't define an aggregate where Student is the root entity and Course its child entity. Can I?

+ +

But at the same time, the business rule defined at point 5 seems to me to be an invariant. Is it?

+ +

So where and how should I apply these rules?

+ +

I tried a service approach; it can work for the first simple rule (point 2) but fails for the rule described at point 5:

+ +
var student = studentRepository.Get(studentId);
+var course = courseRepository.Get(courseId)
+
+var studentService = new StudentService();
+
+studentService.SubScribeStudentToCourse(student, course);
+
+studentRepository.Update(student);
+
+
+studentService.ChangeStudentBudget(student, 100000);
+
+studentRepository.Update(student);  
+
+ +

When I update the student with the new budget, someone else can change the course price, making the student budget inconsistent.

+ +
public class StudentService
+{
+    public void SubScribeStudentToCourse(Student student, Course course)
+    {
+        if (student.Budget >= course.Price)
+        {
+            student.CourseId = course.CourseId
+        }
+    }
+
+    public void ChangeStudentBudget(Student student, decimal budgetAmount)
+    {
+        if (student.CourseId != null)
+        {
+            var studentCourse = courseRepository.Get(student.CourseId);
+            if ( studentCourse.Price <= budgetAmount)
+            {
+                student.Budget = budgetAmount;
+            }
+            else
+            {
+                throw new Exception(""Budget should be higher than studentCourse.Price"");
+            }
+        }
+    }
+}
+
+",261565,,,,,1/3/2019 19:53,DDD enforcing business rules,,2,0,,,,CC BY-SA 4.0,,,,, +384883,1,,,1/3/2019 9:37,,0,133,"

Can I draw mutual dependencies between two artifacts in a deployment diagram as a dashed line with two arrow heads? Or is this a no-go in UML?

+",324848,,,,,1/4/2019 8:39,UML dependency in a UML deployment diagram with two arrow heads,,3,0,,,,CC BY-SA 4.0,,,,, +384887,1,384891,,1/3/2019 9:50,,0,1719,"

As in the code below, class Foo1 implements interface IFoo, which has a property of type IData.

+ +
public interface IFoo
+{
+    public IData Data { get; set; }
+}
+
+public interface IData { ... }
+
+public class DataA : IData {...}
+public class DataB : IData {...}
+
+public class Foo1 : IFoo
+{
+    private DataB _data;
+    public IData Data
+    {
+        get { return _data; }
+        set { _data = new DataB(value); }
+    }
+}
+
+ +

If the user assigns the Data property of Foo1 an object of DataA and then gets the property value back later, he will get an object of DataB instead of DataA. Does this violate any OO principles? Thanks.

+",154886,,78230,,1/3/2019 14:01,1/3/2019 22:42,Interface properties implementation,,3,3,,,,CC BY-SA 4.0,,,,, +384890,1,,,1/3/2019 9:59,,0,96,"

I am trying to figure out the best way to decorate HTML. What I mean is replacing specific syntax strings with the actual content.

+ +

Kind of like, razor syntax in Asp.net MVC using <%= %>.

+ +

Currently, I have a designed HTML page, and I just need to replace tags (for example <%HISTORICTABLE%>) with the actual content.

+ +

I have 5-6 tags in the HTML that need to be replaced with the actual HTML content.

+ +

I might add new tags to, or remove tags ('behaviour') from, the HTML.

+ +

I think the decorator pattern should do the trick, or do you think it's overkill?

+",264551,,,,,1/3/2019 12:40,decorator pattern for generating complete html,,1,0,,,,CC BY-SA 4.0,,,,, +384900,1,384912,,1/3/2019 11:33,,1,682,"

Emacs starts up as an editor (which probably has m functions that take n inputs) and an Elisp interpreter running in the background (which can be used to change the behavior of the program - probably so much so that it is no longer emacs :-)).

+ +

Why do programs need an extension that is an interpreter? Is there any theory behind this? What fundamental feature does it provide, so that you can make a similar decision for your own project?

+ +

Assuming that this is how a (linux) program is in memory,

+ +

+ +

is it because without an interpreter (lying in the text segment), your program is just a finite machine that can execute a finite set of instructions in the text segment (real code a.k.a machine instructions) present in the program layout? However, when you add something like an interpreter, you can add new instructions (probably in the heap, because data and instruction, both are just bits?) and make it behave like an infinite machine?

+ +
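
To make that idea concrete, here is a toy sketch in Python (purely illustrative, and nothing to do with how Emacs actually implements it) of a host program whose behaviour is extended at run time through an embedded interpreter:

+ +
# host program with a fixed set of built-in commands
+commands = {'save': lambda text: print('saving', len(text), 'chars')}
+
+def load_user_script(path):
+    # the embedded interpreter: user code can register new commands,
+    # so the host keeps gaining behaviour without being recompiled
+    with open(path) as f:
+        exec(f.read(), {'commands': commands})
+
+load_user_script('user_config.py')   # hypothetical user file adding e.g. a word-count command
+
+ +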

I think it is the same as asking why you need an interpreter in the first place(!), but my question actually came from this specific scenario in Emacs-like editors. So I would like to understand this from both perspectives.

+",7686,,7686,,1/31/2019 5:05,1/31/2019 5:05,Why do we need to embed an interpreter in a program?,,5,4,,,,CC BY-SA 4.0,,,,, +384927,1,384930,,1/3/2019 18:17,,1,393,"

I have a main window and I am getting data from an HTTP client service during form load.

+ +
public class MainWindow: Window
+{
+    private IClientService service;     
+    public MainWindow(IClientService service)
+    {
+        this.service = service;                 
+        GetClient();            
+    }       
+    public async Task<Client> GetClient()
+    {
+        try
+        {
+            IsDownloading = true;    
+            var client = await service.GetClient();             
+            if(client!=null)
+            {
+                if (client.Status)
+                    Visibility = Visibility.Collapsed;
+                else
+                    Visibility = Visibility.Visible;                        
+                ShowClientRegistrationForm();               
+            }else{
+                HideClientRegistrationForm();
+            }
+            return client;
+        }
+        catch
+        {
+            MessageBox.Show(""Access error."", ""Error"", MessageBoxButton.OK, MessageBoxImage.Error);    
+            throw;
+        }
+        finally
+        {
+            IsDownloading = false;
+        }
+    }
+}
+
+ +

My GetClient() method does 3 operations.

+ +
    +
  1. Gets the client using the service
  2. +
  3. Changes the visibility depending on whether the client status is true or false
  4. +
  5. Shows the registration form to the user, if the client object is null.
  6. +
+ +

I think this is an antipattern and violates the single responsibility principle. How can I get rid of it?

+",160523,,2722,,6/5/2019 16:20,6/5/2019 16:20,How can I get rid of this antipattern,<.net>,1,11,,,,CC BY-SA 4.0,,,,, +384934,1,,,1/3/2019 21:29,,0,581,"

I was given a more or less complex task. The goal is to interpret a SQL check constraint inside my C# .NET library. In our case we have a simple UI that displays what is inside the database. We do not want our UI components to allow any values that wouldn't even be possible because there is a check constraint. Since everything has to be dynamic (the database can change), I cannot just hardcode the UI components.

+ +

I have managed to retrieve data about every check constraint inside my SQL Server database (Northwind) with the following query:

+ +
SELECT 
+    [cck].[name] AS [CONSTRAINT_NAME],
+    [s].[name] AS [SCHEMA],
+    [o].[name] AS [TABLE_NAME],
+    [cstcol].[name] AS [COLUMN_NAME],
+    [cck].[definition] AS [DEFINITION],
+    [cck].[is_disabled] [IS_DISABLED]
+FROM sys.check_constraints cck
+    JOIN sys.schemas s ON cck.schema_id = s.schema_id
+    JOIN sys.objects o ON cck.parent_object_id = o.object_id
+    JOIN sys.columns cstcol ON cck.parent_object_id = cstcol.object_id AND cck.parent_column_id = cstcol.column_id
+
+ +

This query gives me the following result:

+ +

+ +

As you can see, there is a column 'DEFINITION', which pretty much shows what the CC does in a human-readable form. Here comes my problem: how can my .NET library understand this check constraint so that I can adjust my UI components to not allow any values that violate the CC?

+ +

I've thought about those two possible solutions:

+ +
    +
  1. Using Expressions to 'express' what the CC is doing
  2. +
  3. Returning every single possible value of the check constraint.
  4. +
+ +

Number 1 is probably the fastest if done right, but very complex (at least for me since I do not have any experience with expressions). Number 2 would be slower but the easiest way to do it, if possible.

+ +

Sadly, I couldn't find any good help for either of my solutions.

+ +

Also: At least for now I will only care about CC on the column-level. Handling table-constraints will be another challenge

+ +

Now my question is: what is an ""easy"" way to do something like this? It definitely does not have to be the fastest solution.

+",308504,,308504,,1/3/2019 21:36,2/4/2019 16:01,How can I interpret a SQL Check Constraint inside my C# .NET class libary?,<.net>,3,6,,,,CC BY-SA 4.0,,,,, +384936,1,,,1/3/2019 23:21,,0,505,"

I have a class Person.

+ +
Person {
+ String firstName;
+ String lastName;
 Date dob;
+ String email;
+ String mobileNumber;
+ String address;
+}
+
+ +

To add a person, I have the following REST APIs:

+ +
    +
  1. POST /person

    + +
    {
    +""firstName"":""Donald"",
    +""lastName"":""Trump"",
    +""dob"":""01/01/1990""
    +}
    +
  2. +
  3. PUT /person/{id}/addContact

    + +
    {
    +""email"":""donald.trump@us.com"",
    +""mobileNumber"":""+1-123-456-789""
    +}
    +
  4. +
  5. PUT /person/{id}/addAddress

    + +
    {
    +""address"":""white house""
    +}
    +
  6. +
+ +

Now there are two ways to do that -

+ +
    +
  1. Use the same Person class and keep adding new information to the same object from APIs 1, 2 and 3.

  2. +
  3. Create separate models for all three APIs.

    + +
    PersonMain {
    + String firstName;
    + String lastName;
    + Date dob;
    +} 
    +
    +PersonContact {
    + String email;
    + String mobileNumber;
    +}
    +
    +PersonAddress {
    + String address;
    +}
    +
  4. +
+ +

Finally, we also need our main Person class, because all that information is going into a single Person table and this whole Person object will be used everywhere.

+ +

Which approach do you think is good, and why?

+",324909,,,,,1/4/2019 15:13,Which is better solution - having separate model class against each REST API or keep adding info in single object?,,1,4,,,,CC BY-SA 4.0,,,,, +384939,1,384963,,1/4/2019 0:24,,2,967,"

I am writing a potentially large web application using Angular 7, and I have come across a design problem. My Angular applications until now have been relatively small, so there was no problem keeping the whole code base in one project (divided into modules with lazy loading). However, now that the application can grow in size, I find it hard to keep all the code in the same project, as it makes the project hard to navigate.

+ +

My thought was that I could divide my application into multiple Angular libraries by functionality, which poses the following questions: do I really gain an advantage with such an approach, or do I just create overhead with managing dependencies, making development harder because of having to link in all the dependencies? If this option is viable, what would be a good way to split the code into multiple libraries? I have looked around for some articles about large Angular apps but haven't found any describing my solution - all were just one project - are there any good articles on this matter?

+",274265,,,,,1/4/2019 13:02,Split large Angular codebase to libraries,,1,0,1,,,CC BY-SA 4.0,,,,, +384940,1,,,1/4/2019 1:23,,2,195,"

Currently, my thoughts are that GET requests would be feasible by using the concept of screen scraping combined with a cron job that runs at a set interval to scrape data from the GUI and sync to my own database.

+ +
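
To make the read side concrete, the sync job I picture is roughly the following (the table layout is my own placeholder, and scraped_rows stands for whatever the scraping step manages to pull out of the GUI):

+ +
# periodic job (run from cron); GET requests are then served from this mirror database
+import sqlite3
+
+def sync(scraped_rows):
+    db = sqlite3.connect('mirror.db')
+    db.execute('CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, payload TEXT)')
+    for row in scraped_rows:
+        db.execute('INSERT OR REPLACE INTO items VALUES (?, ?)', (row['id'], row['payload']))
+    db.commit()
+
+ +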

However, I'm not quite sure how I would handle actions that seek to mutate the database that sits behind the GUI. I am quite certain I would need to interface directly with the GUI, but what tools are available that could help automate this by programmatically controlling the GUI?

+ +

Also, since an overall architecture such as this is far from conventional, I'm curious what strategies might be utilized to help scale a system such as this.

+ +

Note: It is acceptable for data returned from a GET request to be stale for at least as long as the cron job interval, and for POSTs and PUTs and the like to complete sometime in the future, let's say half an hour.

+ +

Note: Maybe my train of thought is completely idiotic and there's a better angle. I'd love to know.

+",280324,,280324,,1/4/2019 20:51,1/29/2020 23:01,"Is it possible to layer an API (REST, GraphQL, etc.) in front of data that is currently only accessible via an enterprise desktop GUI?",,1,12,,,,CC BY-SA 4.0,,,,, +384942,1,384944,,1/4/2019 2:31,,0,788,"

In layman's terms, what is the difference between opcodes and operands in assembly language programs? I understand that one involves where to get the data and the other involves what is to be performed on the data, but is there a way to understand it in more detail?

+",313211,,,,,1/4/2019 15:36,Opcodes vs Operands,,1,1,,,,CC BY-SA 4.0,,,,, +384947,1,384971,,1/4/2019 8:07,,0,44,"

Yay or nay? I have several related but separate services that are to be run in different processes. They execute a particular task unique to the service. Their call signature is similar, but the name of the service changes. For example.

+ +
Service 1:
+:5000/Invoice/<id>
+:5000/Customer/<id>
+
+Service 2:
+:5001/Invoice/<id>
+:5001/Customer/<id>
+
+ +

Each of the calls has e.g. GET and POST methods associated with it. I'd like to refactor this to be:

+ +
:5000/Invoice/<id>/service1
+:5000/Customer/<id>/service1
+:5000/Invoice/<id>/service2
+:5000/Customer/<id>/service2
+
+ +

These calls would then delegate to the services themselves. Notice there is only one port or address to call the entire service instead of a port for each service on its own. So I'm thinking that adding a layer that calls the relevant service locally would be the way to go.

+ +

Is this a good approach? Is it more intuitive? It does add a layer of calling things again, so it might introduce some delay to requests, but maybe the trade off is worth it. Are there other ways of doing it? I'm rather new to web development, so I don't know much about common practices. If it makes a difference, I'm using Python and Flask.

+ +
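
For what it's worth, the delegating layer I have in mind would look roughly like this in Flask (the service URLs, the use of the requests library and the route shape are just my illustration, not a settled design; a Customer route would follow the same pattern):

+ +
import requests
+from flask import Flask, jsonify
+
+app = Flask(__name__)
+SERVICES = {'service1': 'http://localhost:5001', 'service2': 'http://localhost:5002'}
+
+@app.route('/Invoice/<invoice_id>/<service>')
+def invoice(invoice_id, service):
+    # forward the call to the selected backing service and relay its response
+    resp = requests.get('{}/Invoice/{}'.format(SERVICES[service], invoice_id))
+    return jsonify(resp.json()), resp.status_code
+
+ +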
+ +

There is one service that is used more often than the others and is more critical. Perhaps the other requests could be routed through that service.

+",301321,,301321,,1/4/2019 10:51,1/4/2019 14:39,Abstracting a set of services behind a common interface,,1,5,,,,CC BY-SA 4.0,,,,, +384951,1,384955,,1/4/2019 9:29,,1,292,"

Let's pretend I have a 'Book' entity that can contain many 'Chapter' entities, both having their own unique IDs. A chapter must belong to a book; it cannot exist on its own (i.e. there is a required foreign key in the chapter's table).

+ +

There is a screen where the chapter's content can be edited. So far, we considered books as aggregate roots, and everything that was done on chapters was done through aggregates that had the parent book as root. All was great.

+ +

Suddenly, we get a requirement for the chapter editing screen, in which we need to add a dropdown list to be able to change in which book we want the chapter to appear (from a list of account-owned books), which breaks our current way of doing things.

+ +

How should I approach this? The application is SQL based, so the relationship being one-to-many, the operation is essentially changing the FK value in the chapter's table... but DDD-wise, I believe it is more complex, since there may be situations in which we need to update the book information (number of chapters, etc.). We work in a transactional fashion; we cannot use eventual consistency for this.

+ +
    +
  • Making a chapter an aggregate root itself?

  • +
  • Making an aggregate with a composite root between the current book and the one to which I want to assign the chapter?

  • +
+ +

Thanks.

+",324934,,,,,1/4/2019 10:25,DDD: Re-assign an entity from one aggregate to other,,1,4,1,,,CC BY-SA 4.0,,,,, +384954,1,,,1/4/2019 10:14,,0,50,"

I am in search of information on how to manage code in git flow, and a methodology to test and work with it:

+ +
    +
  • I have an API in place to build and manage a product catalog and it was designed around the business needs of my first client project (let's call this the master branch)
  • +
  • For a second project with another client, I initially delivered the same API, but after some exchanges with this new client, the interface of my API needs to evolve and introduce a new version of the API.
  • +
  • I'd like to have a main base API as a base for my other future projects, so I am wondering how I could merge and evolve my API by incorporating the various contract-breaking changes in a single master branch...
  • +
+ +

Not sure if my question is clear, but the fact is that as of now, for every new project that I am dealing with, I first analyse the needs of my client and choose to start from the branch that seems closest to their needs: in the end I have as many branches and specialisations as clients... It's not maintainable anymore... So please give advice and strategies on modeling and git flow branching...

+",285517,,,,,1/4/2019 10:36,how to build a stable API product and allow specification per project?,,1,2,,43499.77708,,CC BY-SA 4.0,,,,, +384960,1,384962,,1/4/2019 11:46,,-3,165,"

I have just created a function which checks whether an IPv4 address is public or not. I have not heavily tested it yet, since it is practically impossible to do so (because I do not know where to start).

+ +

My algorithm is based on an article from this website.

+ +
+

Public IP addresses will be issued by an Internet Service Provider and + will have number ranges from 1 to 191 in the first octet, with the + exception of the private address ranges that start at 10.0.0 for Class + A private networks and 172.16.0 for the Class B private addresses.

+
+ +

Is this algorithm correct (implemented here in C++) ?

+ +
struct IpAddress {
+    uint8_t oct1;
+    uint8_t oct2;
+    uint8_t oct3;
+    uint8_t oct4;
+};
+
+bool isIpPublic(const IpAddress &ip){
+    if (ip.oct1 >= 1 && ip.oct1 <= 191){            // not class C
+        if (ip.oct1 != 10){                         // not class A (all of class A is private)
+            if (ip.oct1 != 172 && ip.oct2 != 16){   // not class B (172.16.x.x is private)
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+
+",324765,,324765,,1/4/2019 13:58,1/4/2019 13:58,Is my algorithm for determining whether a ipv4 is public or private correct?,,1,7,1,43469.68056,,CC BY-SA 4.0,,,,, +384968,1,384972,,1/4/2019 14:06,,2,277,"

Can anyone tell me what ""machine"" means in compiler theory? Does it mean the computer in general, or the operating system? Actually, the problem is I understand the definition of machine language as ""the language understood by the computer"". But does machine here refer to anything specific other than the computer?

+ +

I was reading the dragon book, Compilers: Principles, Techniques, and Tools. In class, the professor said that Java is both a compiled and an interpreted language. I didn't understand the definition, so I referred to the book. I still don't get the following paragraph:

+ +
+

Java language processors combine compilation and interpretation, + as shown in Fig. 1.4. A Java source program may first be compiled into + an intermediate form called bytecodes. The bytecodes are then interpreted by a virtual machine. A benefit of this arrangement is that bytecodes compiled on one machine can be interpreted on another machine, perhaps across a network.

+
+",324958,,5099,,1/4/2019 19:25,1/4/2019 22:49,Meaning of Machine in Compiler Theory,,4,8,,,,CC BY-SA 4.0,,,,, +384974,1,384993,,1/4/2019 16:16,,1,267,"

I'm not sure what the correct procedure is when you have a question based off an answer you read, but it is a separate question that arose because of the answer provided.

+ +

the answer in question Which HTTP verb should I use to trigger an action in a REST web service? +

+ +

Walkthrough of my method, where this is relevant

+ +
[HtpPut(""StartDate/{id}"")]
+public async Task<IActionResult> StartDate(int id)
+{
+    //do checks to see if resource exists, and authorisation .
+    //start backend task
+    //if task successfully starts update 'isStarted' field for the entity with the inputted id
+    //return status code 200 if there is no errors 
+}
+
+ +

Question +When designing an API that adheres to REST as much as possible, is it okay practice in a situation like the above to use an 'HttpPut' or 'HttpPatch' verb and allow the API method not to check for a patch doc or resource? I.e. the user sends a request with whatever resource or patch doc they wish, and the server does not care as long as the request id is valid and the user is authorized.

+ +

Secondary question: whether this adheres to REST (or even if it deviates from REST), is what I am doing an acceptable solution, or is there a cleaner design I should be implementing for a situation like this?

+",277313,,,,,1/4/2019 21:33,Is it okay to use a 'HttpPut' or 'HttpPatch' verb and allow the API method not to check for a Patch doc or resource?,,2,0,,,,CC BY-SA 4.0,,,,, +384975,1,,,1/4/2019 16:31,,1,101,"

I have a base type of Entity, and multiple implementations, Enemy, Bunker, Projectile

+ +

I have separated these entities into their own containers so I can pass them to different classes to perform different actions on them. However it is becoming clear now this may not have been the best approach. I am currently writing the collisions between the Projectile and Enemy/Bunker. As they have their own separate lists I'm having to write multiple functions to handle the collisions.

+ +

The enemies are stored in a 2d grid using std::vector<std::vector<std::unique_ptr<Enemy>>>

+ +

The bunkers are stored in a vector std::vector<std::unique_ptr<Bunker>>

+ +

The projectiles are stored in a vector std::vector<std::unique_ptr<Projectile>>

+ +

Here are the collision functions so far

+ +

Projectile -> Enemy collisions

+ +
void ProjectileEnemyCollisions()
+{
+    auto projectileIterator = projectiles.begin();
+
+    while (projectileIterator != projectiles.end()) {
+        auto enemyRowIterator = enemies.begin();
+        while (enemyRowIterator != enemies.end()) {
+            std::vector<std::unique_ptr<Enemy>>const& column = *enemyRowIterator;
+            auto enemyColumnIterator = column.begin();
+
+            while (enemyColumnIterator != column.end()) {
+                if (projectiles.size() == 0) {
+                    break;
+                }
+
+                std::unique_ptr<Projectile>const& projectile = *projectileIterator;
+                std::unique_ptr<Enemy>const& enemy = *enemyColumnIterator;
+
+                if (m_collisionManager->Collision(projectile->GetBoundingBox(), enemy->GetBoundingBox())) {
+
+                    //collision
+
+                }
+                else {
+                    ++enemyColumnIterator;
+                }
+            }
+            ++enemyRowIterator;
+        }
+
+        if (projectiles.size() != 0) {
+            if (projectileIterator != projectiles.end())
+                ++projectileIterator;
+        }
+
+    }
+
+}
+
+ +

Projectile -> Bunker collisions

+ +
void ProjectileBunkerCollisions()
+{
+    auto projectileIterator = projectiles.begin();
+
+    while (projectileIterator != projectiles.end()) {
+
+        std::unique_ptr<Projectile> const& projectile = *projectileIterator;
+
+        auto bunkerIterator = bunkers.begin();
+
+        while (bunkerIterator != bunkers.end()) {
+
+            if (projectiles.size() == 0) {
+                break;
+            }
+
+            std::unique_ptr<Bunker> const& bunker = *bunkerIterator;
+
+            if (m_collisionManager->Collision(projectile->GetBoundingBox(), bunker->GetBoundingBox())) {
+
+                //collision
+
+            }
+            else {
+                ++bunkerIterator;
+            }
+
+
+        }
+
+        if (projectiles.size() != 0) {
+            if (projectileIterator != projectiles.end()) {
+                ++projectileIterator;
+            }
+        }
+    }
+}
+
+ +

All of these types are of Entity, so is there a more efficient way to iterate over them? I feel like having three loops for the enemies, and then having another two loops to check the bunkers seems counter-intuitive. I'm unsure which approach is better, grouping all the entities into a single container and then iterating over them once, or separating them out into different containers like I have now, but having to iterate over them multiple times.

+ +

I have also split up the entities so that I don't have to pass around data that isn't required, i.e. for the enemy specific logic, it only requires Enemy objects.

+ +
+ +

Entity.h

+ +
    class Entity {
+
+    friend class MovementManager;
+
+    public:
+        Entity(std::unique_ptr<Sprite> sprite) : m_sprite(std::move(sprite)) {
+
+        };
+
+        virtual void Update(DX::StepTimer const& timer) = 0;
+        virtual void DealDamage(int damage) = 0;
+
+        bool IsDead() {
+            return m_health == 0;
+        }
+
+        Sprite& GetSprite() const {
+            return *m_sprite;
+        }
+
+        XMFLOAT3 GetPosition() const {
+            return m_position;
+        }
+
+        BoundingBox const& GetBoundingBox() {
+            return *m_boundingBox;
+        }
+
+
+    protected:
+        std::unique_ptr<Sprite> m_sprite;
+        std::unique_ptr<BoundingBox> m_boundingBox;
+
+        XMFLOAT3 m_position;
+        XMFLOAT3 m_scale;
+        XMFLOAT3 m_rotation;
+
+        int32_t m_health;
+
+        XMFLOAT3 m_velocity;
+        XMFLOAT3 m_maxVelocity;
+        XMFLOAT3 m_slowdownForce;
+        float m_movementSpeed;
+        float m_movementStep;
+
+
+    };
+
+ +
+ +

Most recent implementation using the idea from the comments

+ +
void HandleCollisions()
+{
+    std::vector<std::shared_ptr<Projectile>>const& projectiles = m_projectileManager->GetProjectiles();
+    std::vector<std::vector<std::shared_ptr<Enemy>>>const& enemies = m_enemyManager->GetEnemies();
+    std::vector<std::shared_ptr<Bunker>>const& bunkers = m_bunkerManager->GetBunkers();
+
+    std::vector<std::unique_ptr<EntityBoundingBox>> boundingBoxes;
+    //projectiles
+    for (std::shared_ptr<Projectile>const& projectile : projectiles) {
+        std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(projectile->GetBoundingBox(), std::weak_ptr<Entity>(projectile));
+        boundingBoxes.push_back(std::move(boundingBox));
+    }
+
+    //enemies
+    for (unsigned int i = 0; i < enemies.size(); ++i) {
+        for (unsigned int j = 0; j < enemies[i].size(); ++j) {
+            std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(enemies[i][j]->GetBoundingBox(), std::weak_ptr<Entity>(enemies[i][j]));
+            boundingBoxes.push_back(std::move(boundingBox));
+        }
+    }
+
+    //bunkers
+    for (std::shared_ptr<Bunker>const& bunker : bunkers) {
+        std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(bunker->GetBoundingBox(), std::weak_ptr<Entity>(bunker));
+        boundingBoxes.push_back(std::move(boundingBox));
+    }
+
+    CheckEntityCollisions(boundingBoxes);
+}
+
+void CheckEntityCollisions(std::vector<std::unique_ptr<EntityBoundingBox>>& boundingBoxes) {
+
+    for (std::unique_ptr<EntityBoundingBox>& entity1 : boundingBoxes) {
+        for (std::unique_ptr<EntityBoundingBox>& entity2 : boundingBoxes) {
+            if (entity1 == entity2) continue;
+
+            //if the entity has already been removed, continue
+            auto tmp = entity1->GetEntity().lock();
+            auto tmp2 = entity2->GetEntity().lock();
+            if (!tmp || !tmp2) {
+                continue;
+            }
+
+            if (m_collisionManager->Collision(entity1->GetBoundingBox(), entity2->GetBoundingBox())) {
+                m_eventManager->Fire(Events::EventTopic::COLLISIONS_ENTITY_HIT, { { (void*)&entity1 }, { (void*)&entity2 } });
+            }
+
+        }
+    }
+}
+
+",324963,,324963,,1/5/2019 11:51,1/5/2019 11:51,Architecture of iterating over polymorphic types,,1,10,1,,,CC BY-SA 4.0,,,,, +384980,1,384982,,1/4/2019 16:58,,16,2219,"

I currently have two derived classes, A and B, that both have a field in common and I'm trying to determine if it should go up into the base class.

+ +

It is never referenced from the base class. And say, at some point down the road, another class C is derived that doesn't have a _field1; wouldn't the principle of ""least privilege"" (or something like it) be violated if the field were moved up?

+ +
public abstract class Base
+{
+    // Should _field1 be brought up to Base?
+    //protected int Field1 { get; set; }
+}
+
+public class A : Base
+{
+    private int _field1;
+}
+
+public class B : Base
+{
+    private int _field1;
+}
+
+public class C : Base
+{
+    // Doesn't have/reference _field1
+}
+
+",100503,,100503,,1/4/2019 17:02,1/4/2019 22:02,When to move a common field into a base class?,,4,4,,,,CC BY-SA 4.0,,,,, +384983,1,384992,,1/4/2019 17:13,,2,149,"

In my application, I have a finite number of question types, but the order in which they're asked and whether they're asked at all is not known up-front.

+ +

An example analogy is a hotel booking process, during the process you may be asked a number of questions, like whether you want late check-out, rent-a-car, breakfast-selection.

+ +
interface IAncillary
+{
+    string FormType { get; }
+    object GetViewData();
+    void SaveResponse(object response);
+    void Skip();
+}
+
+class LateCheckOutAncillary : IAncillary
+{
+    public FormType { get; } = ""late-check-out"";
+
+    public object GetViewData()
+    {
+        return new LateCheckOutOption[] 
+        {
+            new LateCheckOutOption(""2pm"", 50m),
+            new LateCheckOutOption(""4pm"", 75m)
+        };
+    }
+
+    public void SaveResponse(object response)
+    {
+        // record in database (string response).
+        // potentially add another ancillary
+    }
+
+    public void Skip()
+    {
+        // record in database.
+        // potentially add a different ancillary or 
+        // remove other ancillaries
+    }
+}
+
+ +

My initial thought is that the State design pattern is the most applicable; however, the problem for me is that the view data format and response format are different per ancillary. It'll most likely be presented as a wizard to the end-user, but I haven't found any design pattern that solves this.

+ +

All ancillaries have a Skip option which is to be used if the client does not understand the FormType.

+ +

The ancillaries use object for view data and object for response data, so if there's something that can account for that too it would be nice.
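
+ +

For clarity, a strongly-typed variant of the interface is roughly what I imagine when I say ""account for that"" (purely illustrative; the generic type names are my own assumption):

+ +
interface IAncillary<TViewData, TResponse>
+{
+    // same contract as above, but the view/response formats are visible in the type
+    string FormType { get; }
+    TViewData GetViewData();
+    void SaveResponse(TResponse response);
+    void Skip();
+}
+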

+ +

Ultimately this will need to be represented as an HTTP interface, however, I'm still wrapping my head around how I would express it with an object oriented language first.

+ +

What design pattern would be best used for representing a set of sequential questions where each question is in a different format?

+",61302,,,,,6/3/2019 23:02,Design pattern for an indeterminate number and format of questions,,1,3,,43620.08194,,CC BY-SA 4.0,,,,, +384986,1,,,1/4/2019 19:31,,1,84,"

I know these sites are not geared towards recommendations, so I am hoping to pose this question in a way that doesn't ask for them. Questions/comments are welcome.

+ +

I am just getting involved in wanting to create a .net Core API for one of the projects we are working on. I have read a little on the topic and what I kind of have a hard time understanding is the authentication piece of it.

+ +

Maybe I am just making a big deal out of nothing and it is as simple as:

+ +

https://stackoverflow.com/questions/38977088/asp-net-core-web-api-authentication

+ +

But I wanted to know: since this is an API, I assume I need to worry about authentication, and that if the authentication fails I should just send the caller a ""not authenticated"" response, otherwise allow usage/entry. How does this authentication piece really work (specific to .NET Core)?
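
+ +

For concreteness, my rough (possibly wrong) understanding is that a minimal token-based setup in ASP.NET Core looks something like the sketch below, using JWT bearer tokens from the Microsoft.AspNetCore.Authentication.JwtBearer package; all of the values are placeholders:

+ +
// usings assumed: Microsoft.AspNetCore.Authentication.JwtBearer,
+// Microsoft.IdentityModel.Tokens, System.Text
+
+// Startup.ConfigureServices
+services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+    .AddJwtBearer(options =>
+    {
+        options.TokenValidationParameters = new TokenValidationParameters
+        {
+            ValidIssuer = ""my-issuer"",       // placeholder
+            ValidAudience = ""my-audience"",   // placeholder
+            IssuerSigningKey = new SymmetricSecurityKey(
+                Encoding.UTF8.GetBytes(""signing-key-from-configuration""))
+        };
+    });
+
+// Startup.Configure
+app.UseAuthentication();
+
+// actions marked [Authorize] then return 401/403 instead of redirecting to a page
+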

+ +

Is the post I made above the recommended practice for a basic scheme to authenticate folks to my API or should I be using some other mechanism? What resources (books, videos) are there (I know there are a ton via the internet but a lot just seem to be glossing over this topic)?

+",13870,,,,,1/4/2019 20:11,Am I making API creation difficult when it comes to authentication?,<.net-core>,1,1,,,,CC BY-SA 4.0,,,,, +384995,1,,,1/5/2019 0:29,,-5,72,"

I may have a tough one for you.

+ +

I have a machine in the wild that is and will probably continue to be compromised. The machine is owned by a user who will be unable to keep it secure.

+ +

I must have this machine pull from git. It must also automatically install all pulls without restart (no startup solutions).

+ +

I would prefer a platform agnostic solution.

+ +

I have a few objectives:
+1) Email the remote admin with logs of all pulls, making sure this process cannot be subverted or altered.
+2) Authenticate all git pulls in some manner, without the auth being able to be cracked by an adversary.

+ +

I hope you all can help.

+",324988,,,,,1/5/2019 17:19,Authenticate Git Pulls on a Compromised Machine,,2,2,,43472.76528,,CC BY-SA 4.0,,,,, +384998,1,385002,,1/5/2019 2:36,,5,863,"

I'm developing a Python library, and I'm also developing some code that uses it. Currently they are in the same git repository, but I want to separate out the library part into a separate repo, in preparation for eventually* releasing it.

+ +

However, I'm unsure of the right way to work with two repositories. The library has a complex API that's likely to change a lot as I develop it, and at this stage of the project I'm usually developing new features simultaneously with code that uses them. If I need to restore things to a previous working state, I will need to roll both projects back to previous commits - not just the last working state for each project individually, but the last pair of states that worked together.

+ +

I am not a very advanced git user. Up to now I have used it only as an undo history, meaning that I don't use branches and merges, I just do some work, commit it, do some more work, commit that, and so on. I am wondering what's the minimal change to this workflow that will allow me to keep both projects in sync without worrying.

+ +

I'd like to keep things as simple as possible given that I'm a single developer, while bearing in mind that this project is likely to become quite complex over time. The idea is to make small changes in order to minimise disruption at each step.

+ +

*A note on what I meant here: the project is in nowhere near a releasable state, and I'm not planning to release it in anywhere near its current state. (""Repository"" in this context means ""folder on my hard drive with a .git in it"", not a public repository on github.) I asked this question because I thought that putting it in a separate repository would be something I needed to do early on for the sake of my own management of my code, but the answer from Bernhard convinces me this is not the case.

+",91273,,91273,,1/8/2019 23:06,1/8/2019 23:06,How do I keep two git projects in sync with each other?,,5,3,,,,CC BY-SA 4.0,,,,, +385001,1,,,1/5/2019 5:20,,0,284,"

We are building a new application for a client to manage their cases. They are already using their existing system in which they are storing files associated to the cases in an FTP folder. There is an attachment table (~1M rows) which maps the cases with the ftp file location. As part of the migration the client also wants to move away from FTP to a cloud storage provider (Azure). Currently there is roughly 1TB of files in the FTP folder which we need to move to Azure.

+ +

Current architecture:

+ +

+ +

In the FTP folder there is no folder structure; they are just dumping the files and storing the links in the Attachment table. But in Azure we would need to create a folder structure. Because of this we cannot just copy-paste the same files into Azure.

+ +

There are a couple of approaches:

+ +

Option 1:

+ +
    +
  1. Write a script in Node.js which will read the case table and get all the associated rows from the attachment table for one case.
  2. Get the links from the attachment table.
  3. Make an FTP connection and get the actual files using the links fetched in the previous step.
  4. Generate the folder structure on the local system.
  5. Once all the files are retrieved, push the files into Azure.
  6. Delete the folder structure.
  7. Repeat the steps for the next case.
+ +

Option 2:

+ +
    +
  1. In this option we will run through the same steps as before, up to step 5.
  2. But we will not delete the folder structure; instead we will build the folder structure for all the cases on the local machine.
  3. Deploy the files all at once into Azure.
+ +

It would really be helpful to understand what is the best approach we can take. Are there any other approaches apart from the above?

+ +

Also, Option 1 could be run in parallel (multiple cases in one shot). What could be the limitations of this? Option 2 would require at least 1.2 TB of local space, which is a little hard to get considering the current logistical limitations in the company.

+",91791,,,,,1/13/2019 13:35,Architecture for File migration from FTP to cloud service,,1,3,,,,CC BY-SA 4.0,,,,, +385008,1,,,1/5/2019 13:40,,1,76,"

Hopefully, this is the right forum for this type of question..

+ +

We have a set of common entities which are 'shared' throughout the company - much like Master Data Services (MDS) data. Everyone has differing ways of maintaining said data...most of which are painful and/or lacking.

+ +

So...I created a working 'demo' using the SQL Service Broker (SSB) to show how we can easily & seamlessly propagate the 'shared' data. Of course, this data is centrally managed & spoke-applications (themselves) do not change said data.

+ +

Another person wants to use SignalR to propagate the 'shared' data to application databases. And, I love SignalR. However, to me, SignalR is ""real-time"" front-end ""componentry""...not a data transfer service solution for MDS-styled data.

+ +

I see the broker as the right tool for this job. And frankly, to me...just because you CAN do something...doesn't mean you SHOULD. But I am open to being wrong.

+ +

(1) Am I wrong or right? (2) If so, why or why not?

+ +

Thanks for the help.

+",23450,,,,,9/19/2019 5:20,Propagating MDS Data - SQL Service Broker or SignalR?,,1,0,,,,CC BY-SA 4.0,,,,, +385010,1,385012,,1/5/2019 14:58,,0,232,"

Clean architecture decouples an app's core from the presentation/UI layer. The UI is just a plugin, replaceable (eg, web-based to desktop) without impacting the core.

+

Many data science apps mix code, user inputs, text, graphics and other outputs in one notebook, eg, Jupyter. Everything seems coupled: the domain, UI, presentation, persistence.

+

Q: How do I design such an app cleanly, with the notebook maximally decoupled? Or are notebooks inherently incompatible with clean architecture?

+

Perhaps I could have an independent module with core functionality. The notebook would call this module, without defining any non-trivial functionality. Would this, however, allow enough decoupling or even fit with a notebook?
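
+

A toy example of what I mean (module and function names invented):

+
# core_model.py -- pure domain logic, importable and testable on its own
+def expected_cost(price_per_treatment, n_treatments):
+    # trivial stand-in for the real MCMC/regression logic
+    return price_per_treatment * n_treatments
+
+# notebook cell -- a thin, presentation-only client of the core module
+import core_model
+print(core_model.expected_cost(120.0, 14))
+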

+

Why:

+

I'll be developing an app for a client who's only used Excel. The app will predict cost effectiveness of medical treatments and will need MCMC simulations, regression and other stats.

+

I plan to implement it in Python with Jupyter or the nteract notebook, pushed by Netflix https://medium.com/netflix-techblog/tagged/nteract. However, this may eventually prove unsuitable for the client, as Jupyter is mainly used by those who program it themselves. There are other potential pitfalls, eg, https://docs.google.com/presentation/d/1n2RlMdmv1p25Xy5thJUhkKGvjtV-dkAIsUXP-AL4ffI/edit#slide=id.g362da58057_0_1. Ideally, I could easily swap between notebook types or change over to a desktop GUI.

+",294434,,-1,,6/16/2020 10:01,1/6/2019 1:16,How compatible are data science notebooks with clean architecture?,,2,0,,,,CC BY-SA 4.0,,,,, +385020,1,385031,,1/6/2019 0:50,,-2,183,"

I would like your tips about implementing a command line interface to interact with a running Java application.

+ +

Example:

+ +

The Java Application is a webserver and a cli-client should interact with it:

+ +

1) Start the server application: java -jar webserver.jar

+ +

2) Get the status of the running application: java -jar webserver.jar --status, or run other commands like java -jar webserver.jar --add-user Paule --password 1234, i.e. adding an entry to a hashmap in the running application.

+ +

Does anyone know a Best-Practice tutorial about this?

+ +

Implementing an HTTP/TCP/UDP/UNIX socket would be one solution for the interaction (see the sketch below).
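
+ +

To show what I mean by the socket option, here is a very rough sketch of a listener thread inside the running server (the port number and the one-line protocol are made up); the --status invocation of the jar would then simply connect to this port, send a line and print the reply:

+ +
import java.io.*;
+import java.net.*;
+
+class AdminListener implements Runnable {
+    public void run() {
+        try (ServerSocket server = new ServerSocket(9999)) {      // admin port (made up)
+            while (true) {
+                try (Socket client = server.accept();
+                     BufferedReader in = new BufferedReader(
+                             new InputStreamReader(client.getInputStream()));
+                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
+                    String command = in.readLine();               // e.g. ""status""
+                    out.println(handle(command));
+                }
+            }
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+    }
+
+    private String handle(String command) {
+        return ""OK: "" + command;   // dispatch into the real application logic here
+    }
+}
+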

+ +

Another solution would be to read external resources, for example placing commands in a file that the application reads.

+ +

What is your way to implement this?

+ +

Is there a technical term for interaction with a running thread?

+ +

Thanks in Advance

+",325045,,,,,1/6/2019 11:05,How to implement a CLI interaction with running java programm?,,1,3,,43472.35486,,CC BY-SA 4.0,,,,, +385033,1,,,1/6/2019 13:44,,2,195,"

I have a question about architecture in .NET.

+ +

My architecture is like this :

+ +

Project:
+ - DAL (Data Access Layer)
+ - BLL (Business Logic Layer)
+ - DTO (Data Transfer Object)
+ - IHM (man/machine interface)

+ +

DAL: access to the database (CRUD); it references DTO.
+BLL: logic layer; it does all the logic processing and makes the connection between IHM and DAL. This layer references DAL and DTO.
+IHM: presentation layer (ASP.NET MVC); this layer has a reference to BLL and DTO.
+DTO: I put the EDMX (Entity Data Model) in this layer (cross-cutting).

+ +

My question is about the EDMX. I put it in the DTO layer in order to make the objects accessible to all the other layers. In my IHM layer I map the DTO objects to ViewModels so that only the fields needed are sent to the view.

+ +

I see that in other projects they put the EDMX in the DAL, but then they create objects in each layer and map them. It's unpleasant and it's code duplication.

+ +

Is it bad to put the EDMX in the DTO layer, and why?

+ +

Regards

+",325076,,325076,,1/6/2019 15:24,1/6/2019 17:02,Is my Architecture correct?,<.net>,1,2,,,,CC BY-SA 4.0,,,,, +385036,1,385040,,1/6/2019 15:31,,1,121,"

When I develop apps I frequently run into situations like this, but I have never found a best practice for solving them.

+ +

Imagine:

+ +
    +
  • We have chats; each chat can have many messages.

  • +
  • We have tickets; each ticket can have many messages too.

  • +
+ +

Solution 1:

+ +

We create 3 tables: chats, tickets, and messages, and we link each chat or ticket to its messages using a polymorphic relationship (sketched below).
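
+ +

For illustration, Solution 1 would look roughly like this (the column names are just an assumption on my part):

+ +
-- polymorphic messages table (sketch)
+CREATE TABLE messages (
+    id               INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
+    messageable_type VARCHAR(20) NOT NULL,   -- 'chat' or 'ticket'
+    messageable_id   INT UNSIGNED NOT NULL,  -- id in chats or in tickets
+    body             TEXT NOT NULL,
+    created_at       DATETIME NOT NULL
+    -- no FOREIGN KEY is possible here, hence no ON DELETE CASCADE
+);
+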

+ +

In this solution:

+ +
    +
  • We have a clean database, and we don't use two tables for the same kind of data.
  • +
  • We won't be able to use relational database features like cascade delete, so we have to remove the related messages programmatically when we remove any chat or ticket; alternatively we can use triggers.
  • +
+ +

Solution 2:

+ +

We create 4 tables: chats, tickets, chat_messages, ticket_messages, and we link each chat or ticket to its messages using foreign keys.

+ +

In this solution:

+ +
    +
  • Our database is ugly and we use two tables for the same kind of data.
  • +
  • We can use features like cascade delete etc., which is really good and clean.
  • +
+ +

Solution 3:

+ +

Please you tell...

+",278040,,278040,,1/6/2019 15:48,1/6/2019 17:18,Database relation design,,3,5,,,,CC BY-SA 4.0,,,,, +385041,1,,,1/6/2019 16:33,,2,90,"

Apologies if the title is incorrect, I couldn't think of better wording. I have the following code

+ +
template <class E>
+void ResolveEntityHit(E& entity, Projectile& projectile) {
+    static_assert(std::is_base_of<Entity, E>::value, ""entity must be of type Entity"");
+
+    entity.DealDamage(projectile.GetProjectileDamage());
+
+    if (entity.IsDead()) {
+        DestroyEntity(entity); //here is the problem
+    }
+
+    m_projectileManager->DestroyProjectile(projectile);
+}
+
+void DestroyEntity(Enemy & enemy)
+{
+    m_enemyManager->DestroyEnemy(enemy);
+}
+
+void DestroyEntity(Bunker & bunker)
+{
+    m_bunkerManager->DestroyBunker(bunker);
+}
+
+ +

I'm trying to avoid using dynamic_cast, as I have read that this isn't good practice. I'm trying to keep ResolveEntityHit as generic as possible so that it can accept multiple types, but then I would like to branch off and do different things depending on which type the entity is. For example, I have my entities separated into different classes, and each class is responsible for removing/adding entities, so I would need to call the function on the correct manager to remove the entity.

+ +

The code above doesn't compile and I get error C2664: 'void DestroyEntity(Bunker &)': cannot convert argument 1 from 'E' to 'Enemy &'

+ +

Hopefully it's clear what I'm trying to achieve, but I'm asking is there a better way to do this in terms of design/architecture and without the use of dynamic_cast? Possibly through using templates?

+",324963,,,,,1/6/2019 21:15,Convert template type to derived type when calling a function?,,2,3,,,,CC BY-SA 4.0,,,,, +385049,1,389214,,1/6/2019 23:04,,0,163,"

I'm designing a system that acts as a master data service for what I shall here call boxes. The system is to be implemented in Java with a relational database (SQL) as the main storage. Each box has a ~dozen different top-level properties: ranging from simple primitives (booleans, integers, etc) to other objects and lists of other objects.

+ +

The main issue is that most of the box properties may change over time, and one needs to be able to schedule those changes in advance. Furthermore, one needs to be able to schedule any number of upcoming changes to any of the attributes, in any chronological order.

+ +

For example, in October we might schedule new set of derps for a box for December, to be returned back to normal on January 1. If, in mid-December, we find out the box gets a new foobar_id in February, we need to be able to schedule that change for February without affecting the upcoming derp change on January 1 -- and without accidentally reverting the derps of December back when the time comes to apply the foobar_id update.

+ +

My idea is to create some sort of a queue of upcoming change events. Each queue item would only change the values of the properties given in that exact event. New events could be added into any position of the queue and existing events could be removed from it. When an event would occur, it would record the old value of the property it changed.

+ +

Now, the keywords of the previous paragraph are some sort of a queue. I'm unsure how to actually implement this in Java + a relational database! It seems that a language with strong static typing doesn't lend itself well to this kind of an exercise in generic attribute changes.

+ +

I'm considering a relatively simple database table with a timestamp (the date of the change), the name of the property that's going to change (an enumeration), and a serialized (JSON) representation of the new data (sketched below). Each property would then basically need its own handler/deserializer. Another way would be to copy the box database structure for upcoming changes and just store a bunch of ""boxes"" with no other properties than the ones that are going to change. This seems like it might be easier Java-wise, but the database would become quite complex, since almost all tables would need to be duplicated.
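
+ +

For concreteness, the table I have in mind would look roughly like this (all names are purely illustrative):

+ +
-- sketch of the change-event queue table
+CREATE TABLE box_change_event (
+    id             BIGINT PRIMARY KEY,
+    box_id         BIGINT NOT NULL REFERENCES box(id),
+    effective_date DATE NOT NULL,          -- when the change should take effect
+    property_name  VARCHAR(50) NOT NULL,   -- enum-like discriminator
+    new_value      TEXT NOT NULL,          -- serialized (JSON) payload
+    old_value      TEXT NULL               -- recorded when the event is applied
+);
+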

+ +

I need the system to be robust so that it's not too easy to break it when new properties inevitably are added, or some old properties are changed. As such, I'm not too fond of the idea of using reflection on this. New changes can only come in to the system as fast as human beings can type, so I don't need the solution to be optimized for speed. But I do plan to keep one complete and up to date version of each box object in the database, so that I don't need to reconstruct the object from a number of changes every time I need it. Also, for what it's worth, the database queries that the system is going to be handling are going to be pretty simple and the number of boxes is unlikely to exceed ten thousand. I'm not concerned about the actual scheduling part, i.e. triggering the changes at the correct time.

+ +

So I guess that basically my questions are these, starting from the most important one:

+ +
    +
  1. Does the queue pattern that I just described have some name by which I could find some more material on it? If it does not have a name, can you point me to something similar?
  2. Can you point me to some good resources specific to Java, SQL and this kind of a pattern?
  3. Any other thoughts? Anecdotes? Am I missing something obvious, and should I do it some other way? (Java and SQL are going to stay anyway.)
+",52158,,52158,,1/6/2019 23:18,3/26/2019 11:54,Implementing a pattern for maintaining changes of an entity over time?,,2,1,,,,CC BY-SA 4.0,,,,, +385052,1,385060,,1/7/2019 4:12,,2,781,"

I have read about several methods of securing an API key, like gitignore or placing it in another file if using an application, but given enough time anyone can get the key at some point, even when the API key is in use or being called, right? Other methods explain using a proxy, which is well beyond my league. I only understand the foundations of C# and JavaScript, and the thought of securing an API key is mind-boggling as I try to work out the most secure method. Recently I wanted to start working on a portfolio for a better occupation, so I thought of doing something with the Steam API, but couldn't find a concrete method to store the key elsewhere and call it without anyone getting the idea of stealing or digging up this info. Even if I used it within JavaScript, how would I call it during unattended events, if I were to make a public website accessed by thousands of people?

+ +

Edit 1: Honestly, I have thought about storing the key encrypted elsewhere, but then I would need a decryption method, as well as a key, which could still be a vulnerable approach.

+ +

Edit 2: I understand that the key has to be available in clear text, as it is a cryptographic key itself. If stored on the server, is the API key kept in a path on the server that only the web application is pointed to?

+",325119,,,,,1/7/2019 9:31,Methods to Securing APIKeys,,1,3,,,,CC BY-SA 4.0,,,,, +385053,1,,,1/7/2019 6:24,,7,490,"

I have to define the new way of working for a development team which goes from a one-man unit to a distributed team with programmers all over the world. The team will work with svn; this is non-negotiable. I recommended that they switch from svn to git, but that is not going to happen. This is the first time I have done something like this. At the moment I am thinking about something like:

+ +

+White text are things that are done manually. Blue text are things that are done automatically.

+ +
    +
  • Every developer has his own branch and does his development on this branch. (This is my preferred way and in my research I saw this also recommended. But I also saw often that svn users did not like to do this. Especially on the long term I think this would bear more fruit. Or am I overestimating the difference?)
  • +
  • At least every morning before a developer starts working he merges trunk into his branch.
  • +
  • Normal work should be checked in the same day as it is started. For work that takes longer, a special branch should be created. This branch should also have trunk merged into it at least at the start of each workday.
  • +
  • Every time the developer has added something that can be tested, he should run the unit tests. Before running the unit tests an automatic merge is done. If this results in conflicts, those have to be resolved.
  • +
  • When the developer thinks he has something that can be committed he calls the unit tests with a commit flag. When everything is OK the commit and follow-up actions are executed.
  • +
  • A pre-commit hook is defined that checks that the merge and unit tests were successful.
  • +
  • After the commit is done successfully, a post-commit hook will create an integration server where the above tests are run again and integration tests are performed. When it is not a special branch: on success an acceptance server is created with this branch and the branch is merged (--reintegrate) into trunk. The developer is always notified of the result.
  • +
  • A developer is only allowed to go home when his version is successfully committed. Ideally it should also pass the integration tests. (This sounds a bit harsh, but I added this because I have seen developers not committing for weeks because they had not changed much yet. With all the merge problems this created.)
  • +
+ +

Because it is important to minimise things that can go wrong (the people who could help when there is a problem are probably asleep at the moment they are needed, so this should be minimised at 'all costs'), I am thinking about locking svn trunk before the commit and releasing the lock after the automatic steps are done. In this way it should be nearly impossible for the 'merge back into trunk' to go wrong. The idea is that the tests are reasonably fast, and it is better to wait a little before the commits are done than to risk the automatic part going wrong.

+ +

Is this an acceptable way of working?
+If so: can this be done with svn?

+ +

More about the way of working I am thinking about.

+",324072,,324072,,1/7/2019 7:00,1/14/2020 14:41,Is it a good idea to lock svn,,3,10,1,,,CC BY-SA 4.0,,,,, +385055,1,,,1/7/2019 7:16,,-1,45,"

My requirement is that I want to delete an object A.

+ +

A -> B -> C -> ...

+ +

Here, if you want to delete A you have to delete B, which is dependent on A; then if you want to delete B you have to delete C, which is dependent on B, and the chain goes on like this.

+ +

I'm planning to solve it using the Chain of Responsibility design pattern, or are there any other design patterns or principles that fit this scenario better?

+",325137,,,,,1/7/2019 7:30,Deleting a list of dependent OPbject using chain of Responsibility design pattern,,1,5,1,,,CC BY-SA 4.0,,,,, +385058,1,,,1/7/2019 9:27,,-1,899,"

I'm currently testing a web service and I have noticed that there is only one error code ever returned: 400.

+ +

However, the error message returned isn't always the same. Here are some examples of the error messages I got:

+ +
    +
  • XXX must contain only digits.
  • +
  • YYY: This value is not valid.
  • +
  • XXX: This value should not be blank.
  • +
  • Etc...
  • +
+ +

So I was wondering if we should use a different error code for each message (keeping the HTTP status at 400 but adding another code inside the message body, like 4001, 4002, 4003, etc., as in the example below). Why would it be a bad idea to do that, and why would it be a good one?
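
+ +

To make the idea concrete, the response body could look something like this (the field names are only an example):

+ +
{
+    ""status"": 400,
+    ""code"": 4001,
+    ""message"": ""XXX must contain only digits.""
+}
+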

+ +

Could using only one single error code make life harder for the front-end devs (assuming they have to translate the error message before printing it for clients)? Wouldn't it be simpler for them to have multiple error codes? And what would be the drawbacks of having multiple error codes instead of one?

+",317653,,317653,,1/7/2019 11:33,1/8/2019 9:04,good practice: error message and error code,,1,8,,,,CC BY-SA 4.0,,,,, +385063,1,385071,,1/7/2019 10:13,,3,437,"

I need to produce some documentation to be compliant with IEC 62304 and, while reading about all of the processes that need to be documented, I'm having a couple of doubts about how to structure the whole body of documentation.

+

My concern is about how to divide all of the documentation in separate documents and what should be included.

+

The whole software system can be considered composed of 3 main subsystems:

+
    +
  • Firmware on embedded device (#1)
  • +
  • A companion Android Application (#2)
  • +
  • An application backend used to ingest, process and save data from the devices (#3)
  • +
+

I'm especially in charge of the last one, which is a fairly streamlined, streaming-oriented application that processes and saves data in a DB (a SOUP, in the context of IEC 62304 compliance).

+

Now, the data saved in the DB is visualized in a Grafana dashboard: in which document should this component be considered? What should be the limit of the scope regarding the #3 application and its interaction with the other components? Since Grafana would be a SOUP, I was thinking about covering it in the appropriate document where all configurations and SOUP management are described. Should I mention/reference, inside the SRS of the #3 application, the requirements for the needed visualizations? Which is the appropriate document where I should put this information?

+

I'm using this blog as a template reference for all of the needed documentation, since I'm new to software development under ISO/standard regulations, but any additional resources on how to structure the whole set of docs in this context are highly appreciated.

+

Thank you

+",325146,,-1,,6/16/2020 10:01,1/7/2019 12:36,How to structure SW documentation with SOUP components,,1,0,,,,CC BY-SA 4.0,,,,, +385064,1,,,1/7/2019 10:19,,-1,91,"

I have a few strategy classes that calculate ranking. Those classes implement an interface with a method scoreUpdates. The method scoreUpdates takes two parameters (winners and losers). Now I need to add new strategies, and some need more parameters. Should I add methods to the base interface for these new strategies? What is the best solution for this type of case? (A sketch of the current setup is below.)
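
+ +

To make the current setup concrete, this is roughly the existing contract as I understand it (a sketch; the names are assumed and the player type is left generic):

+ +
import java.util.List;
+
+public interface RankingStrategy<T> {
+    void scoreUpdates(List<T> winners, List<T> losers);
+}
+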

+ +

Also, I use a RankingSelector service that finds the right strategy and returns the interface. DI is based on this interface, so I can't add a new interface.

+",325149,,,,,1/7/2019 13:05,add new class that implement base interface but need one more parameter,,1,6,1,,,CC BY-SA 4.0,,,,, +385066,1,,,1/7/2019 11:06,,0,55,"

I recently came across a set of possibilities for creating rows in a table in my database. The scenario is that I am trying to populate a notifications table with different types of notification data based on different tables.

+ +

Adding to the notifications table is done immediately after rows are added to other tables (like the invoices table).

+ +

Since the code adding to the other tables is at a higher level (PHP), the question is: should I add new rows to the notifications table with a PHP SQL query, or should I implement a trigger that would do that automatically (sketched below)?
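
+ +

For concreteness, the trigger variant would be something along these lines (table and column names assumed):

+ +
CREATE TRIGGER invoices_after_insert
+AFTER INSERT ON invoices
+FOR EACH ROW
+    INSERT INTO notifications (source_table, source_id, created_at)
+    VALUES ('invoices', NEW.id, NOW());
+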

+",301866,,,,,1/8/2019 3:31,Using sql triggers over higher level scripts,,1,1,,,,CC BY-SA 4.0,,,,, +385069,1,385072,,1/7/2019 12:14,,-4,81,"

I've noticed this style of code a lot in frameworks like Symfony and Magento 2 (which is based on Symfony):

+ +
<?php
+    class Foo
+    {
+        protected $foo;
+
+        # construct function - not needed for question
+
+        public function getFoo()
+        {
+            return $this->foo;
+        }
+    }
+
+ +

This makes things easier to pick up in terms of get/set/unset, but is there any actual advantage over using public vars?

+ +
<?php
+    class Foo
+    {
+        public $foo;
+    }
+
+ +

It seems the latter has fewer lines but less obvious flexibility.

+ +

What are the advantages/disadvantages to each method and when should I use one over the other?

+",303264,,,,,1/7/2019 12:43,Public var vs protected var and get function,,1,4,,43473.58472,,CC BY-SA 4.0,,,,, +385074,1,385093,,1/7/2019 13:43,,1,47,"

I want users to be able to dynamically add 'columns' from the front-end of the website. I understand that it is probably not best practice to actually add columns to a table from the front-end, so I was looking for a better way to handle this.

+ +

The use case:

+ +

I am making an app with a determination table. The user can fill out details of the animal/plant (for example leaf shape) and is supposed to end at the right species.

+ +

I want to make it future proof, so that if someone fills out all details and the species they have is not the one the table comes up with, the user can add both their species, and the detail that would tell both species apart.

+ +

For example: if the user found a daisy but the table comes up with dandelion, the user could add the daisy and add 'petal colour' as a distinguishing feature.

+ +

Users should then be able to fill out the petal colour for all plants that were already in the database.

+ +

My database at the moment:

+ +

I have one table where all details (like species name, leaf shape etc.) are stored in columns.

+ +

My website: I use Angular 7 for the front-end, PHP on the server and a MySQL database, but general answers are also very welcome.

+",325162,,,,,1/7/2019 18:33,What is a good way to add extra info to data-entries from a website front-end?,,1,3,,,,CC BY-SA 4.0,,,,, +385075,1,,,1/7/2019 14:35,,0,39,"

Is there a protocol or a convention that supports REST (ok, maybe we should use HTTP here instead) processing chain and some neat features to help with that? Let me explain what I mean.

+ +

Let's assume I have some public REST service available. Using HTTP GET, I have multiple static pictures, GIFs and movie clips available. Generally, I would like to take this data and send it to another REST endpoint, along with additional data about the recognized visual elements in the content. For example, if the image contains Steve Ballmer drinking tea, a description ""Steve Ballmer drinking tea"" is normally expected at the endpoint.

+ +

However, I don't have an image processing and recognition service available, but if there are some such services available somewhere on the internet, I'm happy. Even if one works exclusively with static images and another one with movies.

+ +

So, my application (let's call it MyApp) will do the following:

+ +
    +
  1. Call service (let's call it Src) to retrieve the picture/GIF/video or whatever
  2. If that resource is a picture, send it to the picture recognition service, let's call it PRS, and retrieve a picture description
  3. If that resource is a video, send it to the video recognition service, let's call it VRS, and retrieve a description of the video
  4. Combine the picture/video content with description, pack it inside an archive and send it to the endpoint that expects the result, let's call it End
+ +

This means the data flow is:

+ +
+

MyApp -> Src -> MyApp -> PRS (or VRS) -> MyApp -> End -> MyApp + (confirmation)

+
+ +

I am looking for the solution where data flow is this:

+ +
+

MyApp -> Src -> PRS (or VRS) -> End -> MyApp

+
+ +

This means that I only have to say to Src: ""Get whatever resource I want and forward it to PRS or VRS depending on the content; after that forward it to End"". Then Src takes the picture, sends it to PRS and says ""process this; after that forward it (along with the result of processing) to End"". You see, I don't want MyApp to be the orchestrator of everything, additionally creating extra network traffic along the way.

+ +

Oh, btw, since I want it to be neatly archived, I need a zipping service in the chain, so the solution should look more like this:

+ +
+

MyApp -> Src -> PRS (or VRS) -> Zip -> End -> MyApp

+
+ +

One more thing is that I want MyApp to be informed about the processing progress and any errors. I expect some asynchronous processing somewhere along the path (e.g. VRS is a good candidate), and everything should work correctly under that condition as well.

+ +

Does something like this exist? Something perhaps most similar to Unix/Linux piping; a kind of ""web-pipe"", or something. If it does, I can't find it.

+ +

EDIT

+ +

I am looking for a protocol, convention, or whatever fits my need that is neither tied to an existing framework (e.g. Spring, .NET MVC/WebApi, ...) nor a ""proprietary"" part of some existing technology (Java, .NET etc.). It should be something that just works with (or via) existing HTTP, so any technology can use it. Maybe ""concept"" is the proper term here, if it isn't something widespread already.

+ +

For example, there is basic authentication. It just works with any technology. It has its rules, do's and dont's. There are WebSockets, working just the same. I need something in that context.

+",204790,,204790,,1/7/2019 16:33,1/7/2019 16:33,HTTP/REST and chained processing protocol/convention,,0,4,,,,CC BY-SA 4.0,,,,, +385078,1,385096,,1/7/2019 15:23,,0,799,"

Let's say I use a Pair in this way:

+ +
Pair<Long, Date> signup = getSignup();
+System.out.println(""User with ID "" + signup.getLeft() + "" signed up on "" + signup.getRight());
+
+ +

Is it a form of Primitive Obsession?

+ +

I could have something like

+ +
Signup signup = getSignup();
+System.out.println(""User with ID "" + signup.getUsrId() + "" signed up on "" + signup.getSignupDate());
+
+ +

If it's not a form of primitive obsession, why is that?
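
+ +

For reference, the Signup class I have in mind would be nothing more than a small value object along these lines (field names assumed):

+ +
import java.util.Date;
+
+public final class Signup {
+    private final long userId;
+    private final Date signupDate;
+
+    public Signup(long userId, Date signupDate) {
+        this.userId = userId;
+        this.signupDate = signupDate;
+    }
+
+    public long getUserId() { return userId; }
+    public Date getSignupDate() { return signupDate; }
+}
+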

+",325169,,,,,1/8/2019 18:35,Is using the Pair class a sign of primitive obsession code smell?,,4,3,,,,CC BY-SA 4.0,,,,, +385084,1,,,1/7/2019 16:23,,2,151,"

What is good practice for setting up a database in a potentially large project - creating tables and updating them? Should it be done in the code of the app, or should it be done with external database tools like phpMyAdmin? I mean I have two ways: create the tables and set them up when the app starts, or do this by hand with phpMyAdmin, independently of the codebase.

+",325177,,,,,1/7/2019 18:18,Database management good practice - should it be in code or with database tools?,,4,2,,,,CC BY-SA 4.0,,,,, +385087,1,,,1/7/2019 16:51,,0,142,"

This is a best practices question for release management of an app. But this scenario is a bit different than what I've been able to find myself.

+ +

Essentially my company maintains a fork of its own app. There are two versions of the app that will have different configurations of bug fixes / features. These fixes and features come from a common pool of what's completed. The reason for the two configurations is that there are two main testing environments with different goals.

+ +

Let me explain that a bit more with a scenario:

+ +
    +
  1. We have features 1, 2, & 3 and bug fixes 1, 2, & 3 being worked on at the same time by different devs.
  2. For the upcoming releases to QC, Config-1 of the app wants to include feature 1 and bug fixes 1 & 2, and Config-2 wants to include features 2 & 4.
  3. For subsequent releases to QC, certain features may get rejected as being incomplete, buggy, no longer needed, etc. And the same with the bug fixes.
+ +

#3 is important because not all features get removed or synced between the two configurations. This means that the two configurations diverge slightly over time. But only in the short term for what's in active development. Over the long term, the code base is in sync with what's in production.

+ +

So, as a diagram, the builds could look like this over time (with some added features / bugs from the bulleted scenario above):

+ +

+ +

Basically a normal development life cycle, but with twin timelines. There's the main app, and a fork of it that's derived merely by a different combination of the available patches. Patch queues would work well but we use a build server to produce the builds which requires us to publicly push committed changes (as far as I can figure out) to a remote repo.

+ +

My question is really about what the easiest way to manage this is, at the actual source control level. What we've done in the past is (using Mercurial) maintain two repositories (one per configuration) and all features / bugs would get imported as needed as patches. Removing items would be done using a variety of ways, backouts probably being the most common. The problem with this is that the two repos ended up wildly different from each other with different items being applied at different times. So the entire changeset stack would be a different order.

+ +

What we're thinking about doing, is still maintain two separate repos, but every effort (features, bugs) would be developed as a branch and that branch gets pushed to the repo it's needed in. Within a given repo, if the branch is wanted in the upcoming build, it gets merged in the build branch which is monitored by our Jenkins server and produces the builds.

+ +

Is there a better way? An ironed out best practice that prevents messy build branches as a result of backing out items, and possibly even other issues that we don't know about yet?

+",16275,,16275,,1/9/2019 1:52,9/30/2020 4:05,Source Control Release Management: Simultaneous Releases with Different Configurations,,1,4,,,,CC BY-SA 4.0,,,,, +385088,1,385090,,1/7/2019 16:54,,-5,44,"

Should I run my own webserver? If so, how do I do that? I'm running on Windows 10 with VS2017, IIS Express and MS SQL Server.

+ +

I don't need a domain name. Just providing access via IP-address is fine. I'm just looking for a cheap and easy way of enabling other people to help me beta test my apps.

+ +

Can Azure be used for this?

+",320898,,,,,1/7/2019 17:21,How should I make my Asp.Net Core web apps available online for beta testing?,,1,0,,,,CC BY-SA 4.0,,,,, +385099,1,,,1/7/2019 20:48,,0,400,"

Consider the pattern used to support a CQRS message bus; examples are buslane (Python) or MessageBus (PHP).

+ +

It uses commands to change the domain model, and publishes domain events

+ +

This looks great, providing separation and encapsulating each domain write operation in its own class, but doesn't that make for an anemic domain model? Can a domain model be thought of as a collection of services and objects?

+ +

Even so, doesn't that result in a domain model that is just an entity, or a data container, with all the business logic implemented in the command handlers (see the sketch below)?
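
+ +

To make the distinction concrete, here is a tiny sketch (names invented, not tied to buslane or MessageBus) of a handler that only orchestrates while the business rule itself stays on the model:

+ +
class Account:                        # domain model keeps the behaviour
+    def __init__(self, balance=0):
+        self.balance = balance
+
+    def withdraw(self, amount):
+        if amount > self.balance:
+            raise ValueError(""insufficient funds"")
+        self.balance -= amount
+
+
+class WithdrawHandler:                # command handler only orchestrates
+    def __init__(self, accounts):
+        self.accounts = accounts      # repository (interface assumed)
+
+    def handle(self, command):
+        account = self.accounts.get(command.account_id)
+        account.withdraw(command.amount)   # delegate, don't re-implement the rule
+        self.accounts.save(account)
+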

+ +

On the contrary, if all the commands which change the model are implemented in one class, isn't that a kind of god class? Or doesn't it violate the SRP?

+",85286,,85286,,1/7/2019 20:57,1/8/2019 5:39,"Anemic Domain Model, CQRS, command bus",,2,0,1,,,CC BY-SA 4.0,,,,, +385102,1,385113,,1/7/2019 21:18,,0,52,"

I just started a new job and one of my first tasks is to create local nuget packages from the existing libraries, to help with versioning, maintenance, etc. This task had already been started by another engineer. However, he chose to grab many libraries that relate, create a project holding all these libraries, and publish it as one package (specifically a nuget package).

+ +
Example:
+LibraryA_v1 + LibraryB_v2 + LibraryC_v3 = PackageA_v1
+LibraryB_v1 + LibraryC_v3 = PackageB_v2
+
+ +

Then, PackageA_v1 and/or PackageB_v2 would be referenced by whatever project that needs them. However, I see a lot of different problems with this approach.

+ +
    +
  1. PackageA_v1 and PackageB_v2 are extremely unstable. Anytime a library changes, the package would need to update.
  2. Since the packages are unstable, it is highly likely that the principle ""Depend upon packages whose I metric is lower than yours"" would be broken.
  3. I can't seem to access the libraries within the packages (in a simple C# test application), which was the original intent.
  4. The last problem I see is that libraries of different versions could be imported into the same project, possibly causing problems (ex. LibraryB_v1 and LibraryB_v2 would be in the same project, if PackageA_v1 and PackageB_v2 are both referenced).
+ +

From my studies in software engineering and the principle previously mentioned, I think each library should be kept separate in their own nuget packages. However, my co-worker had obviously thought differently. So, should libraries be packaged together based on similar traits?

+",314489,,132397,,1/8/2019 4:48,1/8/2019 5:42,Should libraries be packaged together based on similar traits?,,1,1,,,,CC BY-SA 4.0,,,,, +385105,1,,,1/7/2019 22:42,,2,173,"

I'm really struggling with the overhead of context switching. When I need to continue work on some part of the code after a break, it takes up to an hour to recall all the context of the problem I was working on and get back into it. How do you deal with that issue? Maybe you leave some prompts in the code describing the context and the next action, keep some kind of lists, or use other management tricks?

+",325205,,,,,1/8/2019 6:35,How do you manage context switching overhead in software development when getting back to work over different parts of your project?,,2,4,,43473.29236,,CC BY-SA 4.0,,,,, +385116,1,385122,,1/8/2019 6:40,,-2,173,"

I am writing drivers for different devices to work with my embedded system. Currently I write any new driver manually but for the future I would like to automate this using a settings file. I figure I have 2 options:

+ +
    +
  1. write a single universal driver that reads the settings file and behaves accordingly;

  2. write a code generator that reads the settings file and generates code from it with the appropriate behavior.
+ +

Which one of these is the better option and why? Are there any better options still?

+",320971,,209665,,1/9/2019 8:30,1/9/2019 8:30,generate code or write generic code,,1,5,,43473.69583,,CC BY-SA 4.0,,,,, +385127,1,385159,,1/8/2019 9:54,,4,3392,"

Context: I have an open source project which uses JNI. While it's possible to build it yourself, I expect most users will use standard Java tools like Maven, Gradle, or SBT to download the JAR, which contains the pre-compiled binary image. This is no problem on Windows, but on Linux it gets complex.

+ +

I'm wondering how much to statically link when creating these packages so it's compatible with most Linux distributions. I know that, for example, Alpine Linux does not come with libstdc++, meaning that it would fail when in a small docker container.

+ +

There's also the possibility of older versions. For example, a quick look at nm suggests it's linking _ZNSt11logic_errorC1EPKc@@GLIBCXX_3.4.21 and __vsnprintf_chk@@GLIBC_2.3.4. What if the host has versions older than 3.4.21 and 2.3.4?
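
+ +

For reference, this is roughly how the version requirements can be inspected and how static linking of the runtimes is switched on with GCC (the file names are placeholders):

+ +
# list the glibc / libstdc++ symbol versions the shared library requires
+objdump -T libmyjni.so | grep -oE 'GLIBC(XX)?_[0-9.]+' | sort -u
+
+# link the C++ runtime and libgcc statically instead (GCC flags)
+g++ -shared -fPIC -o libmyjni.so myjni.o -static-libstdc++ -static-libgcc
+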

+ +

However, most literature I've seen tells me not to link against libgcc. Is that still true? Is it the same if I switch to clang (which has its own standard libs?)

+",180,,,,,1/9/2019 6:30,Is it good practice to statically link libstdc++ and/or libgcc when creating distributable binaries for Linux?,,2,3,,,,CC BY-SA 4.0,,,,, +385139,1,,,1/8/2019 17:04,,0,65,"

I am working on an ASP.NET Core application that grabs a model from a database via Entity Framework, and will pass a ""subset"" of that model to our Angular front end. For example:

+ +

I have a list of Users. On the user-list page, I would like to grab a list of User objects from the API and display them on the page. For each User, it should only show Name and maybe a few other fields.

+ +

I would also like to be able to click on the user's name to redirect to their profile page. On this page, we will get more fields from the User table - perhaps more in-depth information like Nickname or Middle Name, etc.

+ +

My question is, what is the ""correct"" way to structure this, on the front-end side and the server side?

+ +

On the front end, is it best to have one class with nullable values that either get filled out or left null based on which page they are on? Like this:

+ +
export class User{
+    firstName: string;
+    middleName?: string;
+    lastName: string;
+}
+
+ +

Or would I have ""UserListUser"" and ""UserProfileUser"" classes that are completely separate? Or, would it be a parent class called ""User"" with a subclass with more information like ""UserFull""?

+ +

And, on the back-end, is it best to do the same thing? Would you create separate classes for each page that has access to that database model? Would you just use the ORM object that is created from Entity Framework? Or would you always map that to a smaller object with only a subset of the fields on the database table?
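
+ +

For illustration, the ""separate class per view"" option on the server side would be something like this (names are just an example), with the controller mapping the EF entity to whichever DTO the page needs:

+ +
public class UserListItemDto
+{
+    public int Id { get; set; }
+    public string FirstName { get; set; }
+    public string LastName { get; set; }
+}
+
+public class UserDetailDto
+{
+    public int Id { get; set; }
+    public string FirstName { get; set; }
+    public string MiddleName { get; set; }
+    public string LastName { get; set; }
+    public string Nickname { get; set; }
+}
+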

+",176600,,,,,1/12/2019 14:32,How to handle different pages of a web application having different levels of access to a database model,,1,0,0,,,CC BY-SA 4.0,,,,, +385142,1,385147,,1/8/2019 18:05,,0,98,"

Given the formula to calculate instability...

+ +

I = (Ce / (Ca + Ce)) with Ce = outgoing dependencies, Ca = incoming dependencies, and I = Instability,
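
+ +

As a made-up example of why the counting choice matters: if a package references System, System.Data and System.Xml plus two project assemblies, and is referenced by three other packages, then counting every framework reference separately gives I = 5 / (3 + 5) ≈ 0.63, while counting the framework as a single dependency gives I = 3 / (3 + 3) = 0.5.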

+ +

...should I include system dependencies (such as System, System.Data, System.XML, etc.) when counting outgoing dependencies (Ce)? Or, do I just count it as one outgoing dependency?

+ +

Background Info

+ +

I have been studying this topic in an academic environment. I'm starting to apply what I've learned, which is where this question comes from. More info on the topic can be found at this link.

+",314489,,,,,1/9/2019 8:03,Do I include system dependencies when calculating Instability?,,1,0,1,,,CC BY-SA 4.0,,,,, +385149,1,,,1/8/2019 22:59,,113,22329,"

I'm a junior developer that is given the ability to help shape my team's processes if I can justify the change, and if it helps the team get work done. This is new for me as my past companies more or less had rigidly defined processes that came from management.

+

My team is fairly small and somewhat new (<3 years old). They lack:

+
    +
  • a well defined software development/work management framework (like scrum)
  • +
  • strong product ownership
  • +
  • well defined roles ( e.g. business staff will do manual testing)
  • +
  • regular standup meetings
  • +
  • a consolidated issue tracking process (we have a tool, the process is still being developed)
  • +
  • a unit, system, regression, or manual testing suite or list
  • +
  • documentation on business logic and processes
  • +
  • a knowledge base to document internal and customer facing tips
  • +
+

And the list goes on. Management is open to the implementation of improvements so long as the value is justified and it helps the most important work (namely the development) get done. The underlying assumption however is that you have to take ownership in the implementation, as no one is going to do it for you. And it goes without saying some of the above projects are non-trivial, without a doubt time consuming, and are clearly not development work.

+

Is it worth a (junior) developer's effort to try and push for the above as time goes on? Or is it best to "stay in your lane" and focus on the development, and leave the bulk of the process definition, and optimization to management?

+",,user321981,155513,,1/4/2021 23:20,1/5/2021 14:19,Should a (junior) developer try to push for better processes and practices in their development/IT team?,,13,19,38,,,CC BY-SA 4.0,,,,, +385161,1,,,1/9/2019 7:59,,-1,670,"

According to the book, the domain layer should be isolated. In a domain entity, you should avoid adding a property that represents the database PK (usually an identity surrogate column called ID).

+ +

There is no problem in identifying a domain entity because, by definition, it includes a natural key. If this key is the same as the PK, then the repository will have no problem persisting the domain entity using the PK. Otherwise, the repository will need to construct a SQL command that finds the entity based on some column(s) instead of the PK.

+ +

Keeping the PK out of the domain entity is the perfect approach by the book; however, I cannot see the risky practical issues in breaking that rule. On the other hand, without the PK, the saving process for an aggregate might lead to a performance problem.

+ +

I can see only one practical problem which is ""the wrong guidance for other developers"". Do you know other practical problems for this approach?

+",247564,,,,,1/9/2019 14:45,DDD Including DB Id in domain entity,,1,4,,,,CC BY-SA 4.0,,,,, +385168,1,385182,,1/9/2019 9:13,,1,72,"

I have in mind to develop a Ruby on Rails app with two different databases.

+ +

In the primary SQL db - let's say MySQL, for instance - I'd keep my app items, e.g. user profiling, user-user interactions and, in general, everything that's bound to a model, anything that I already know how is made.

+ +

Now the best part: I'd like to add a secondary No-SQL db - let's say MongoDB, for instance - where I want to put other documents that I don't know which fileds they may contain, not bound to any model. End-users, while interacting with the app, should be able to add their own custom documents and create their collections, making queries and also creating views - I mean views inside the db, I'm not talking about web pages - to aggregate not-so-well-formatted records.

+ +

What do I mean by not-so-well-formatted records? For example, let's say that a user inserts a record like this one:

+ +
{ ""name"":""Bill"", ""surname"":""Ball"" }
+
+ +

and then another record like this one:

+ +
{ ""firstname"":""Tim"", ""lastname"":""Tam"" }
+
+ +

As you can see, the fields name and firstname are meant to be the same field, while they're actually different fields; at the same time, surname and lastname are meant to be the same, but they're different because the user did a sloppy job while inserting those records.

+ +

I'd like the app to be able to notify the user of the discrepancy so he can choose whether to aggregate those two fields or keep them separated; if he chooses to aggregate them, he should be able to define - in a very simple and friendly way - a view inside the db, maybe applying some kind of alias to every field. Maybe even defining the type of each field, e.g. strings, dates, integers, etc. So, after the user makes a few clicks, the view could look like:

+ +
{ ""firstname"":""Bill"", ""lastname"":""Ball"" }
+{ ""firstname"":""Tim"",  ""lastname"":""Tam"" }
+
+ +

while preserving the original/raw data inside the collection. I already know how to do this with MongoDB, by the way, but I still don't know if this is the right approach.

+ +

I don't want the user to be obliged to create any model for his data; I'd simply want to let him throw raw documents into the db and then autonomously ""fix"" the discrepancies afterwards, so he can continue querying his collections without worrying about those discrepancies in naming conventions.

+ +

So here's my question: is this a good approach to solve my problem? I already know that I can have multiple dbs attached to my Rails app, but is this structure convenient? Or is there something better?

+",325321,,325321,,1/9/2019 9:18,1/9/2019 12:46,Ruby on Rails: primary SQL db and secondary No-SQL db without models,,1,0,,,,CC BY-SA 4.0,,,,, +385170,1,,,1/9/2019 9:50,,1,55,"

I am wondering how best to slice up a Java Mustache web app which has:

+ +
    +
  • Data layer (JPA, Repos, Entities, etc)
  • Service layer getting data from other company web services outside the package
  • Web/Controller layer serving up Mustache template pages
+ +

Important to note this is all in the same Maven module (I know).

+ +

What I really dislike about this is that a lot of the Mustache logic is wrapped up with non-UI code, in some cases all the way down to the Service layer.

+ +

What I am thinking of doing is extract all the Mustache logic together with all the controllers and Spring web security stuff into a new module called project-ui or similar.

+ +

Leave the Data and Service logic inside a project-api module.

+ +

At this point would you either

+ +
    +
  1. Deploy project-api on its own and expose endpoints for all data? This seems a little over-engineered to me.
  2. Add the API package as a project-ui dependency and leave it to the UI controllers to retrieve the data?
+ +

Is there a clear way of doing this that I am not aware of? Ideally, I'd rather let a NodeJS/ReactJS app serve the UI layer, but the decisions have already been made. Think big corp environment.

+",120144,,,,,1/9/2019 9:50,Spring Boot and Mustache app separation of concerns,,0,0,,,,CC BY-SA 4.0,,,,, +385177,1,,,1/9/2019 11:14,,0,558,"

In short, is instanceof a bad thing?

+ +

I had a code something like

+ +
Converted convert(Object o) {
+    if (o instanceof ClassA) {
+        return convert((ClassA) o);
+    }
+    if (o instanceof ClassB) {
+        return convert((ClassB) o);
+    }
+    throw new IllegalArgumentException(o.getClass() + "" not supported"");
+}
+
+ +

I didn't realize it can be in fact refactored to

+ +
Converted convert(Object o) {
+    throw new IllegalArgumentException(o.getClass() + "" not supported"");
+}
+Converted convert(ClassA a) {
+    // do the conversion for ClassA
+}
+Converted convert(ClassB b) {
+    // do the conversion for ClassB
+}
+
+ +

ClassA and ClassB are some generated classes without a common interface, so I cannot do the conversion in one method (without reflection, as the method names I'm interested in are the same in ClassA and ClassB).

+ +

On the other hand I do not really see a benefit of implementing it that way.

+ +

Additionally (when the same principle is applied), I'm working with JSF and I have several implementations of javax.faces.convert.Converter, for example

+ +
@Component
+public class CountryConverter implements Converter {
+
+    @Autowired
+    private CountryServiceImpl countryService;
+
+    @Override
+    public CountryDto getAsObject(FacesContext context, UIComponent component, String value) {
+        return countryService.findById(Long.parseLong(value));
+    }
+
+    @Override
+    public String getAsString(FacesContext context, UIComponent component, Object value) {
+        if (value instanceof CountryDto) {
+            Long id = ((CountryDto) value).getId();
+            return Long.toString(id);
+        }
+        return null;
+    }
+
+}
+
+ +

...I can have very similarly

+ +
    @Override
+    public String getAsString(FacesContext context, UIComponent component, Object value) {
+        return null;
+    }
+
+    @Override
+    public String getAsString(FacesContext context, UIComponent component, CountryDto country) {
+        return Long.toString(country.getId());
+    }
+
+ +

where the overloaded version with Object can be in some common parent, but it all seems like overengineering to me; as I mentioned, I see no benefit of doing it that way, just because I can. KISS is a principle I like, and the implementation with instanceof is straightforward...

+",71371,,,,,1/10/2019 10:50,Method overriding as a substitution for instanceof in Java?,,2,4,,,,CC BY-SA 4.0,,,,, +385181,1,,,1/9/2019 11:54,,0,94,"

I have data objects, let's say PersonDto (fields name, surname) and OrganizationDto (fields name, type).

+ +

Then I have a common screen showing such data, but the screen title is something related to the type - Person/Organization.

+ +

The easiest way to implement that functionality is to have a field/constant in that class. Another approach would be to use some instanceof checks (related to my previous question). The most complex (from my point of view) is to use the Visitor pattern, so at some point one of these methods is called:

+ +
class Visitor {
+    String accept(PersonDto dto) {
+        return ""Person"";
+    }
+    String accept(OrganizationDto dto) {
+        return ""Organization"";
+    }
+}
+
+ +

Just now I realized some magic with the class name could be done (for example, if the DTO name is not one word I'd need to add a space or something), but I do not like that approach at all.

+ +

The approach with an additional field seems the most straightforward to me, especially if I have a common interface for the DTOs, but it breaks SRP in the sense that the class not only holds data, but also knows something about the screen/UI. I just prefer KISS more.
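
For clarity, the field/constant approach I have in mind would look roughly like this (the interface name is made up):

+ +
interface TitledDto {
+    String screenTitle();
+}
+
+class PersonDto implements TitledDto {
+    String name, surname;
+
+    public String screenTitle() {
+        return ""Person"";   // the DTO knows its own screen title
+    }
+}
+
+class OrganizationDto implements TitledDto {
+    String name, type;
+
+    public String screenTitle() {
+        return ""Organization"";
+    }
+}
+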

+",71371,,,,,1/9/2019 11:54,Is this violating SRP? Data object with some additional info,,0,3,,,,CC BY-SA 4.0,,,,, +385183,1,,,1/9/2019 12:38,,1,212,"

I have been trying to figure out an ACL solution for my application, which should manage API endpoints' access rights dynamically. Some said that Spring Security ACL is an option. I checked it, but the lack of documentation frightened me a bit, so I started to design my own ACL implementation; since I have not started the implementation I cannot provide a code example, but at least I can provide the flow and the components planned to be used.

+ +
    +
  1. A security configuration service, which will interact with the required services to provide DAO access between the application and the database (of course this will have a reference to the DAO class as well).
  2. A new annotation, used together with an aspect, to tag/label all endpoints so they can be intercepted by the aspect.
  3. An aspect service to intercept requests and check authorization.
+ +

So the definitions above are a very high-level overall idea, together with their required helpers.

+ +

The aspect will be triggered per endpoint access request. Since I know the label for the endpoint (written in the annotation), I can simply cross-check the access rule against the user's roles (which I will access through Spring's Authentication object).

+ +
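
To make the intent concrete, a rough sketch of points 2 and 3 (AccessRuleService and the annotation name are made up, and imports are omitted):

+ +
@Target(ElementType.METHOD)
+@Retention(RetentionPolicy.RUNTIME)
+public @interface AclLabel {
+    String value();   // the label the access rules are stored under
+}
+
+@Aspect
+@Component
+public class AclAspect {
+
+    @Autowired
+    private AccessRuleService accessRuleService;   // backed by the configuration service from point 1
+
+    @Around(""@annotation(aclLabel)"")
+    public Object check(ProceedingJoinPoint pjp, AclLabel aclLabel) throws Throwable {
+        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
+        if (!accessRuleService.isAllowed(aclLabel.value(), auth.getAuthorities())) {
+            throw new AccessDeniedException(""Not allowed: "" + aclLabel.value());
+        }
+        return pjp.proceed();
+    }
+}
+
+ +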

Any suggestions, or a flaw I am missing here?

+",325349,,353068,,2/10/2020 10:56,11/6/2020 12:03,Custom ACL Implementation,,1,2,,,,CC BY-SA 4.0,,,,, +385185,1,385190,,1/9/2019 13:33,,11,1176,"

Hopefully not too academic...

+ +

Let's say I need real and complex numbers in my SW library.

+ +

Based on the is-a relationship (or here), a real number is a complex number where b, the imaginary part of the complex number, is simply 0.

+ +

On the other hand, my implementation would be that the child extends the parent, so the parent RealNumber would have the real part and the child ComplexNumber would add the imaginary part.

+ +
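
Just to illustrate what I mean, a minimal sketch of that hierarchy (names and method choices are only illustrative):

+ +
class RealNumber {
+    protected final double re;
+
+    RealNumber(double re) {
+        this.re = re;
+    }
+
+    double abs() {
+        return Math.abs(re);
+    }
+}
+
+class ComplexNumber extends RealNumber {
+    private final double im;   // the child adds the imaginary part
+
+    ComplexNumber(double re, double im) {
+        super(re);
+        this.im = im;
+    }
+
+    @Override
+    double abs() {
+        return Math.sqrt(re * re + im * im);   // calculated differently, as my professor pointed out
+    }
+}
+
+ +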

Also, there is an opinion that inheritance is evil.

+ +

I recall it like it was yesterday: when I was learning OOP at university, my professor said this is not a good example of inheritance, as the absolute value of those two is calculated differently (but for that we have method overriding/polymorphism, right?)...

+ +

My experience is that we often use inheritance to solve DRY; as a result we often have artificial abstract classes in the hierarchy (we often have a problem finding names for them, as they do not represent objects from the real world).

+",71371,,316049,,1/9/2019 20:14,1/9/2019 22:35,How to implement RealNumber and ComplexNumber inheritance?,,5,16,,,,CC BY-SA 4.0,,,,, +385186,1,385195,,1/9/2019 13:38,,2,317,"

I am writing in C#, but this question may apply to other languages as well.

+ +
public class Test
+{
+    int a = 10; // I created 'a' here
+    public void M()
+    {
+        int a = 20; // I forgot that I already have 'a' in the class and I initialize it again;
+
+        // do other stuff with ""a""... 
+    }
+}
+
+ +

This isn't against the declaration space rules, so the compiler will have no problem with it. I know I could use this.a if I want to access the a declared outside M(); that one is actually a field, not a local variable.

+ +

My question is:

+ +

Could this confuse some people, by allowing the same name to be declared in a sub-scope? Will it make debugging or code reviews harder?

+",277346,,,,,1/9/2019 19:51,How to avoid repeating variable initialization?,,3,7,,,,CC BY-SA 4.0,,,,, +385188,1,385191,,1/9/2019 13:56,,-2,109,"

I have a specific problem in git which I haven't found an answer to yet. In GitLab I have 3 separate repos. For my school project the teacher wants me to copy everything into a repo of his and wants to see all my git history.

+ +

So I would need to fork my 3 repos into his main repo, but I just couldn't figure out how.

+",325358,,,,,1/9/2019 14:21,Fork 3 repo's into 1 main one,,1,2,0,,,CC BY-SA 4.0,,,,, +385196,1,385204,,1/9/2019 15:17,,1,435,"

Assume there is a program that is supposed to be tested and you would like to perform an equivalence class analysis on it. Let's say you identified six valid and four invalid equivalence classes. How many test cases need to be created, at a minimum?

+ +

I'm not sure about that, but I think that because every equivalence class of inputs needs to be covered by at least one test case, you will need at least one test case for each equivalence class? Or maybe even fewer, because it might be possible to skip the invalid equivalence classes..? :S

+",320477,,,,,1/9/2019 17:04,How many test-cases need to be created at least (valid and invalid equivalence class)?,,1,2,,,,CC BY-SA 4.0,,,,, +385197,1,,,1/9/2019 15:19,,0,90,"

I'm currently building a React Native application and wondering if storing device information such as:

+ +
    +
  • whether my app has been granted location permission
  • whether the location service is turned on
  • the last known user location
+ +

in the Redux store can be a good idea.
+I have different components that need to know this information, and storing it in Redux can give me a predictable state. In that case, maybe I can store it like this:

+ +
{
+  user: {
+    id: '123'
+    name: 'Markus'
+    ...
+  },
+  device: {
+    locationPermission: 'denied'
+    locationActive: false
+    lastKnownLocation: {
+      lat: 44.123,
+      lng: 32.123
+    }
+  },
+  ...
+}
+
+ +

Are there any cons to this approach?

+",325366,,,,,10/13/2020 15:07,Correctly store device info with Redux in React Native app,,1,0,,,,CC BY-SA 4.0,,,,, +385200,1,,,1/9/2019 16:07,,1,121,"

We're a small team of 3 senior and 1 junior developers and I've been tasked with introducing BDD within our development process.

+ +

To say there's a lot of confusion about BDD is an understatement, and it's appearing within the team now that I've created some scenarios for user-based behaviour.

+ +

My understanding of BDD is that it's a way of abstracting requirements in a way that everyone can understand, and so far it seems to help the team visualise some of the behaviour that's required. The problem is now that the rest of the team has run away with the idea and want all behaviour written in Gherkin, including non-user based things such as what should happen in the database (e.g. auditing, error logging, sessions etc) and interaction between web services.

+ +

I know BDD isn't about testing, which is why it was invented by Dan North, but the few user-centric scenarios I've created can nicely have user acceptance tests derived from them, so now the rest of the team would like this applied to all layers of the system - even though we won't produce UATs for the behaviour, instead integration and unit tests.

+ +

The BDD work is my responsibility, but now I'm not sure how to proceed. I'm wary that we'll get bogged down with a huge number of scenarios and waste precious time if we continue with the wishes of everyone else.

+ +

I'd like to know how other teams who use BDD/TDD actually use their scenarios, as everything I've seen online only seems to refer to user interaction.

+ +

I understand that scenarios are best used as part of the ""living documentation"", so does this mean all behaviour?

+ +

For instance, how useful would the following be? Especially since the users won't care about this; it's just that we require auditing as standard when creating systems:

+ +
Feature: The audit service logs all requests made
+Scenario: A request is made and logged to the audit database
+
+Given a request is made to *the service*
+When *the service* receives the request
+Then *the service* calls the *audit service*
+And *the audit* service logs the request to *the database*
+
+ +

I can understand that this flow helps us know what we should be programming, but it seems like it's shoe-horning something that doesn't fit into BDD. We already have sequence diagrams detailing the above scenario.

+",146235,,146235,,1/9/2019 16:33,1/10/2019 16:05,Should BDD/Gherkin be used only for user visible behaviour?,,1,14,,,,CC BY-SA 4.0,,,,, +385201,1,,,1/9/2019 16:23,,1,247,"

I am a member of the Apache PLC4X (incubating) project, where we are currently implementing multiple industrial PLC protocols. While we initially focused on creating Java versions of these, we are now starting to work on also providing C++ and other languages.

+ +

Instead of manually syncing and maintaining these, we would rather define the message structures of these protocols in a generic way and have the model, parsers and serializers generated from these definitions.

+ +

I have looked at several options: 1) Protobuf 2) Thrift 3) DFDL

+ +

The problems with these are the following:

+ +

1) Protobuf seems to be ideal to design a model and have the model, serializers and parsers generated from that. With Protobuf it is easy to define a model and ensure I can serialize an object and deserialize it with any language. However, I don't have full control over the transport format. For example, if I had to encode the constant byte value 0xFF, this would be a problem.

+ +

2) Thrift seems to be more focussed on the services and the models used by these services. The same limitations seem to apply as for Protobuf: I have no full control over the transport format

+ +

3) DFDL seems to be exactly what I'm looking for, as I want a language to describe my data format ... unfortunately I could only find projects like Daffodil, which seem to use DFDL definitions to parse any data format into some XML-like DOM structure. For performance and memory reasons we would rather not do that. Other than that I couldn't find any usable tooling.

+ +

I also had a look at Avro and Kaitai Struct, but Avro seems to have the same issues for my use case as Protobuf, and the guys from Kaitai told me serialization was still experimental.

+ +

My ideal workflow would be (Using Maven):

+ +

1) For every protocol I define the DFDL documents describing the different types of messages for a given protocol

+ +

2) I define multiple protocol implementation modules (one for each language)

+ +

3) I use a maven plugin in each of these to generate the code for that particular language from those central DFDL definitions

+",325378,,,,,11/1/2020 22:05,"Options for having model, parsers and serializers for a given data-format generated in multiple languages?",,2,4,2,,,CC BY-SA 4.0,,,,, +385202,1,,,1/9/2019 16:34,,1,32,"

I'm developing a system, and I've had a question that might help other people.

+ +

The system would be written in PHP, with a good chance of it also turning into a mobile app later. Normally, I would create it with a controller that would make the connection to the database, and others responsible for inserts, updates, etc.

+ +

For the mobile application, I would then build an API with endpoints for the same functions.

+ +

Thinking about it, I imagined that the system itself could also work through the API. In this way, it would be the same as the mobile application: it would not connect directly to the database, but would connect to the API, which would take care of the operations.

+ +

I see that, this way, the system would be slower than connecting directly to the database, but perhaps it would be easier to maintain, because there would be just one place to change: the API.

+",325379,,,,,1/9/2019 16:34,API as system controller,,0,1,,,,CC BY-SA 4.0,,,,, +385205,1,,,1/9/2019 17:14,,0,914,"

I am trying to modify an open source project (a JSON serialization one: Gson; I want to let it serialize/deserialize objects with circular references, which is not allowed now). To do it I have to change a widely extended abstract class.

+ +

I want to change the behaviour of that class with a strategy object so every child class will use it without needing to know about it, so I have added the strategy object to the abstract class. As a draft, I have done this:

+ +
...
+public abstract class TypeAdapter<T> {
+
+  CircularReferenceStrategy<T> circularStrategy = (new CircularStrategyFactory()).create();
+
+  public void write(JsonWriter out, T value) throws IOException{
+    circularStrategy.write(this, out, value);
+  }
+/**
+   * Writes one JSON value (an array, object, string, number, boolean or null)
+   * for {@code value}.
+   *
+   * @param value the Java object to write. May be null.
+   */
+ public abstract void doWrite(JsonWriter out, T value) throws IOException;
+... 
+
+ +

That circularStrategy can, e.g., let it fail by throwing a StackOverflowError (the actual behaviour), or substitute circular references with 'null'/NullObjects, or, as is done in Jackson (another serialization library), mark each object with an 'id' and add a reference to that id in the serialized JSON so the serialized/deserialized objects will have the circular references... whatever, the point is that there can be many strategies and that one of them must be selected before starting the serialization.

+ +

So my question is:

+ +

How should I tell the factory which strategy must be used?

+ +

I would set the info about which strategy must be used in the GsonBuilder* and inject it into the abstract CLASS TypeAdapter (as a static field, to the class, not to the instances), but there is something that stops me from doing that, a spider-sense alert... Is it OK to inject things into a class? From where?

+ +
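
What I mean is roughly this (purely hypothetical code, not the real Gson API; the setter, the builder method and CircularStrategies.FAIL are things I would have to add, and the generics are simplified):

+ +
public abstract class TypeAdapter<T> {
+    // class-level, shared by every adapter instance
+    private static CircularReferenceStrategy<?> strategy = CircularStrategies.FAIL;
+
+    static void setCircularReferenceStrategy(CircularReferenceStrategy<?> s) {
+        strategy = s;
+    }
+    // ...
+}
+
+public final class GsonBuilder {
+    // ...
+    public GsonBuilder circularReferenceStrategy(CircularReferenceStrategy<?> s) {
+        TypeAdapter.setCircularReferenceStrategy(s);   // this is the part that feels wrong to me
+        return this;
+    }
+}
+
+ +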

How would you do this?

+ +

* That is the main builder of the library; it builds a Gson object whose toJson and fromJson methods are what you use.

+",110507,,,,,1/10/2019 8:53,How to inject behaviours to an abstract class?,,1,0,,,,CC BY-SA 4.0,,,,, +385206,1,,,1/9/2019 17:47,,-3,525,"

I'm trying to accomplish this scenario :

+ +

There are 2 types of users, let's say Admin and Worker, and they have different roles.

+ +

Admin can do CRUD on questions, and can also create a room that users can join to play together (this is just a name), but maybe it is a good idea to create more attributes inside of it, like WHO is playing in this room and POINTS for everyone; but let's talk about it afterwards when I show you the design.

+ +

Worker can play solo or multiplayer.

+ +

Ok the thing is, on my design I have :

+ +

Collection named User which contains :

+ +
    +
  1. _id
  2. username
  3. password
+ +

This is a default one, but I'm wondering how I should define the Role, i.e. whether it's an Admin or a Worker; something like isAdmin: true, and then I check this Bool? Also I'd like to have a reference to those questions the user has failed most, I mean something like wrongQuestionNumber, which contains the _id of the question and the number of times he/she failed it.

+ +

Then I'd like to have the Question collection, which contains:

+ +
    +
  1. _id
  2. question_name
  3. answers[1,2,3,4]
  4. correctAnswer or answers, because it can be multiple choice
  5. topic
  6. isExamQuestion
+ +

Then the Room collection should contain:

+ +
    +
  1. _id
  2. name
  3. capacity
  4. type (the game can be played as a group or solo, that's why this attribute exists)
  5. exam (this is an object created by Admin; when he creates questions he can select many and create an exam with those)
  6. ranking (this is the top X, from 1 to X)
  7. I don't think I have to add the winner here, because if I get position 0 from the ranking I get the winner...
+ +

There's a collection named Topic as well; if my question has a topic then I can select questions by Topic. An example of a Topic would be Math, so the user can do exams or tests with only math questions.

+ +
    +
  1. _id
  2. Name
  3. Questions[...]
+ +

Then I have to store some kind of history of which questions a Worker has answered correctly and which not, to produce some statistics, and I also need to store some history for the Admin, so he can see things like: in this topic, the question Workers have failed most is Question23 (for instance), something like that.

+ +

Any tip is welcome, and improvement as well.

+ +

EDIT

+ +

@uokesita recommended that I use PostgreSQL, so maybe it's a good idea to do it this way; what could the schema look like?

+",303627,,303627,,1/14/2019 20:39,1/14/2019 20:39,Database Schema for a multiplayer quiz game,,2,0,2,,,CC BY-SA 4.0,,,,, +385218,1,,,1/9/2019 20:50,,-1,151,"

Immutable objects

+ +
+

In object-oriented and functional programming, an immutable object (unchangeable[1] object) is an object whose state cannot be modified after it is created

+ +

Wikipedia (https://en.wikipedia.org/wiki/Immutable_object)

+
+ +

Code example in PHP (see question below...)

+ +
    class ImmutablePaymentMethodManager 
+    {
+        private $paymentMethods = [];
+
+        public function __construct(array $paymentMethods)
+        {
+            $this->paymentMethods = $paymentMethods;
+        }
+
+        public function enabledPaymentMethods() : iterable 
+        {
+            $result = [];
+            foreach($this->paymentMethods as $paymentMethod) {
+                if($paymentMethod->enabled()) {
+                    $result[] = $paymentMethod;
+                }
+            }
+            return $result;
+        }
+    }
+
+    class InMemoryPaymentMethod implements PaymentMethodInterface 
+    {
+        private $name, $costs, $enabled;
+
+        public function __construct(string $name, float $costs, bool $enabled)
+        {
+            $this->name = $name;
+            $this->costs = $costs;
+            $this->enabled = $enabled;
+        }
+
+        public function name()
+        {
+            return $this->name;
+        }
+
+        public function costs() : float 
+        {
+            return $this->costs;
+        }
+
+        public function enabled() : bool 
+        {
+            return $this->enabled;
+        }
+    }
+
+    class DbAwarePaymentMethod implements PaymentMethodInterface 
+    {
+        private $dao;
+
+        public function __construct(PaymentMethodDao $dao)
+        {
+            $this->dao = $dao;
+        }
+
+        public function name()
+        {
+            return 'My db aware payment method';
+        }
+
+        public function costs() : float 
+        {
+            return $this->dao->getCosts($this->name);
+        }
+
+        public function enabled() : bool 
+        {
+            return $this->dao->isEnabled($this->name);
+        }
+    }
+
+    class TimeAwarePaymentMethod implements PaymentMethodInterface 
+    {
+        public function name()
+        {
+            return 'My time aware payment method';
+        }
+
+        public function costs() : float 
+        {
+            return 33;
+        }
+
+        //only enabled at even 2,4,6,8,10,12,14,16,... hours
+        //is this considered a state change? 
+        public function enabled() : bool 
+        {
+            $hour = date('h');
+            return $hour % 2 === 0;
+        }
+    }
+
+    //Immutable (enabledPaymentMethods) we can expect the same results 
+    $paymentMethodManager = new ImmutablePaymentMethodManager([
+        new InMemoryPaymentMethod('in memory', 10.0, true)
+    ]);
+
+    //Not immutable (enabledPaymentMethods) we cannot expect the same result 
+    $paymentMethodManagerWithDbAwarePaymentMethod = new ImmutablePaymentMethodManager([
+        new InMemoryPaymentMethod('in memory', 10.0, true),
+        new DbAwarePaymentMethod(new PaymentMethodDao())
+    ]);
+
+    //Not immutable (enabledPaymentMethods) we cannot expect the same result each time
+    $paymentMethodManagerWithTimeAwarePaymentMethod = new ImmutablePaymentMethodManager([
+        new InMemoryPaymentMethod('in memory', 10.0, true),
+        new TimeAwarePaymentMethod() 
+    ]);
+
+ +

Immutability

+ +

In the example above, encapsulation is a great way to hide the database details. But hiding the database logic in this DbAwarePaymentMethod now makes ImmutablePaymentMethodManager mutable, since its result can vary each time it is accessed.

+ +

I ask these questions, because I really like immutability, but I also like encapsulation like in the example above.

+ +

Assumption

+ +

We can say $paymentMethodManager is immutable. I will assume there is no debate about this.

+ +

Question 1: can we say $paymentMethodManagerWithDbAwarePaymentMethod is immutable?

+ +

Is accessing the database seen as a state change? Even though the state of the object does not change, the state it communicates outwards does...

+ +

Question 2: can we say $paymentMethodManagerWithTimeAwarePaymentMethod is immutable?

+ +

Is the time-dependent behavior added to paymentMethod.enabled() seen as a state change?

+ +

Question 3: Does immutability break encapsulation (as shown in this example...) sometimes?

+ +

If all objects should be immutable, we must find a way of hiding the enabled-logic in another structure.

+ +

Question 4: Is there any OO language that deals with this issue?

+ +

Or are there any patterns (known to PHP) or other languages that make all payment method managers immutable? Those would include moving the behavior out of the payment method implementations and using InMemoryPaymentMethod for each payment method?

+",265076,,,,,11/10/2019 0:02,Immutability and encapsulating state changing behavior,,2,4,,,,CC BY-SA 4.0,,,,, +385219,1,,,1/9/2019 20:55,,-2,94,"

I wrote a small INI file parser as a library which I want to use in a bigger project. Following good practice I decided I should write test cases, too. But I am failing to find a good starting point.

+ +
    +
  • The library is extremely small (1 source file for the implementation, 1 header)
  • The library has few public methods (parse(), get_sections(), get_value(section, key))
  • Most logic is private to the implementation, and therefore not trivially accessible for tests.
  • The input for a test would be an INI file, and I don't want to write multiple test INI files.
+ +

So the first test I'd write is:

+ +
    +
  • Call the parse method on a provided test input INI file and check that it was successful.
  • Hardcode expected sections and key-value pairs in the test itself and check that all of them are there.
  • Also check that no other sections or key-value pairs were extracted.
+ +

I am unhappy with that. Maintaining the expected output and the input files is quite a burden. The tests will fail when you update the input but not the expectations, which I think is a bad design decision for tests.

+ +

There are many guidelines for writing good tests out there, but I always find it hard to apply them. I guess experience is the key. So maybe you can guide me to some good example code or share your personal experiences? Much appreciated.

+",321010,,,,,1/9/2019 22:20,How to identify test cases?,,1,1,,,,CC BY-SA 4.0,,,,, +385226,1,,,1/9/2019 23:41,,-3,80,"

Is it possible to scale a low-resolution image to a higher resolution, up to a point, with minimal effect on quality, sharpness and other notable attributes of the image?

+",324609,,,,,1/9/2019 23:51,Low resolution Image to High resolution,,1,3,,,,CC BY-SA 4.0,,,,, +385230,1,385232,,1/10/2019 2:04,,9,833,"

I'm reading Clean Code by Robert C. Martin, and the phrase TILT inexplicably appears in some code samples. Example (it's in Java, by the way):

+ +
    ...
+    public String errorMessage() {
+      switch (status) {
+        case ErrorCode.OK:
+          // TILT - Should not get here.
+          return """";
+        case ErrorCode.UNEXPECTED_ARGUMENT:
+          return ""Unexpected argument"";
+        case ErrorCode.MISSING_ARGUMENT:
+          return ""Missing argument"";
+        ...
+    }
+    ...
+
+ +

From the context, I'm guessing TILT designates a state that is unreachable and only included to satisfy the compiler (for example, in the above code, TILT appears in the ErrorCode.OK case because there shouldn't be an error message if the state is OK), but I'm not sure.

+ +

Does anybody know what TILT stands for / means?

+",324028,,4,,2/15/2019 11:05,2/15/2019 11:05,"What does ""TILT"" mean in a comment?",,1,2,,,,CC BY-SA 4.0,,,,, +385233,1,,,1/10/2019 4:49,,0,319,"

I am trying to improve the architecture of a system where data flows between 3 systems. Every minute, there are thousands of items generated on App 1, which are sent to Midpoint in the following way:

+ +
    +
  1. App 1 sends items (through a REST API) to Midpoint
      • This happens on an ""as-and-when"" basis: App 1 will keep sending items when needed
  2. Midpoint receives the items and saves them into its database
      • Every minute, a cron job runs that takes items from this database, performs business logic and sends them (through a REST API) to App 2
  3. App 2 performs business logic and sends back a response to Midpoint, which uses ActiveMQ to send the response back to App 1
+ +

There are usually 1000+ items being sent from App 1 to Midpoint every minute. As a result, Midpoint is under too much load and very slow. Once an item gets sent from App 1 to Midpoint, its response can take up to 30 minutes to travel through the flow and come back to App 1 (Midpoint -> App 2 -> Midpoint -> App 1).

+ +

Are there better solutions or design philosophies out there for this kind of task? There must be a way to make this task more efficient and faster. How do all the big companies handle billions of transactions?

+",241284,,,,,1/10/2019 6:15,Efficiently processing millions of records every minute,,1,5,,,,CC BY-SA 4.0,,,,, +385234,1,385236,,1/10/2019 6:07,,-1,66,"

I would like to ask if there is any powerful method we can use to combine the IDs into a single field for faster search and retrieval.

+ +

My case is like this: I have a table that stores the IDs of companies.

+ +
+ +

ID | CompanyName
1  | Company A
2  | Company B
3  | Company C

+ +

I have another table with data that needs to be processed by these companies, so I am looking for a better method to find the data that has not been processed by a specific company.

+ +
+ +

Data | CompaniesProcessed
A    | 1,2
B    | 3,2
C    | 3

+ +

One easy way I can think of is to store the company IDs as a separated value and search within it. The data set to search can be huge, so is there any other way that would be faster to search than the string operation?

+ +
+ +

EDIT: I need to read through the data and check whether it has already been processed for a specific company. So if this is a linked table, I have to do a read operation on the data and then search the linked table to know if it has been processed. However, if it is in a single table, I can just filter out the records that are already processed while I read.

+",78231,,78231,,1/10/2019 6:52,1/11/2019 20:54,Combine Ids to a single field,,1,6,,,,CC BY-SA 4.0,,,,, +385237,1,,,1/10/2019 7:07,,2,251,"

I just started looking into programming language design and came across the term intrinsic, but could not find any good definition of it anywhere. I hope that someone here can help me with a good definition to broaden my understanding.

+ +

The description of the word in the Cambridge dictionary says:
+Intrinsic (adjective)
+""being an extremely important and basic characteristic of a person or thing""

+ +

There are also intrinsic functions, which I think relate to mapping to assembler instructions in the codegen phase, i.e. the back end of a compiler.

+ +

In particular I want to understand the word in the context of the LLVM internal representation described in the LLVM Language Reference Manual.

+",49167,,,,,1/10/2019 8:31,Definition of intrinsic in language design,,2,0,,,,CC BY-SA 4.0,,,,, +385244,1,385292,,1/10/2019 8:55,,0,2182,"

I know that the @RolesAllowed annotation can be used to provide role-based access control for REST endpoints, and I am currently using that with RESTEasy.

+ +

I need to know how it works behind the scenes. Can anybody please explain to me how Java validates the roles mentioned in the annotation?

+ +

So far I managed to figure out that the roles are stored in the UserPrincipal of the HttpServletRequest.
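
As far as I understand, the container registers a security filter for each annotated resource method that does roughly the following (a simplified sketch based on my understanding, not the actual RESTEasy source; imports omitted):

+ +
public class RolesAllowedFilter implements ContainerRequestFilter {
+
+    private final String[] rolesAllowed;   // read from the @RolesAllowed annotation on the matched method
+
+    public RolesAllowedFilter(RolesAllowed annotation) {
+        this.rolesAllowed = annotation.value();
+    }
+
+    @Override
+    public void filter(ContainerRequestContext requestContext) {
+        SecurityContext securityContext = requestContext.getSecurityContext();
+        for (String role : rolesAllowed) {
+            if (securityContext.isUserInRole(role)) {
+                return;   // at least one required role matches, let the request through
+            }
+        }
+        requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
+    }
+}
+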

+",78509,,,,,1/10/2019 20:02,How @RolesAllowed annotation is workin in Java,,1,0,,43475.87361,,CC BY-SA 4.0,,,,, +385245,1,,,1/10/2019 9:08,,-2,133,"

When writing unit tests, I want to reduce the cognitive load on the reader as much as possible. One thing I've noticed that bothers me is that the variable name of the thing being tested often varies quite a bit in my and my team's code. The variable is often named similarly to the component in question. As an example:

+ +
class Car:
+  def get_tires(self):
+    return 4
+
+class TestCar:
+  def test_get_tires(self):
+    car = Car()
+    assert car.get_tires() == 4
+
+ +

This is totally OK when talking about small and clearly readable tests. But when moving to large test suites, I was wondering if it's a good idea to give the object being tested a uniform name. So:

+ +
class TestCar:
+  def test_get_tires(self):
+    testee = Car()
+    assert testee.get_tires() == 4
+
+ +

However, I've not found such a pattern being used very much on SO and its sibling sites like SWE. What are the pros and cons of moving to such a convention?

+",325439,,,,,1/10/2019 9:19,What is a common name for a module being tested?,,1,4,,,,CC BY-SA 4.0,,,,, +385255,1,385270,,1/10/2019 11:06,,1,677,"

I've been asked to design a folder-like structure (in Java, but I expect the solution to be language agnostic). There is one root, just like / on Linux. Then it can have almost arbitrarily deeply nested folders and files inside. Now, here's the catch: I have to be able to track some file details, like size, on every level. For example, a user may want to know how many files are in Folder 2 and what their total size is. To do that, the system needs to take every file in Folder 2 (so in this example that would be File 5 and File 6), take every folder in it (Folder 4), and every file in Folder 4. Folder 4 can have its own nested folders, and they can have more nested folders and files in them as well. Realistically, the maximum level of nested folders will be around 32.

+ +

My question is: how do I design a system like that? I know I cannot calculate this on the fly. There is just too much data, and the user would have to wait too long. So I think it is necessary to store additional information in the folders. Then it has to be updated every single time a file is changed, and the information would have to be propagated up to the root folder. If that changes anything, I'll be using Cassandra as database storage.

+ +
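
To make the propagation idea concrete, an in-memory sketch of what I have in mind (ignoring Cassandra and concurrency for now; names are only illustrative):

+ +
class FolderNode {
+    private final FolderNode parent;   // null for the root
+    private long totalSizeBytes;
+    private long totalFileCount;
+
+    FolderNode(FolderNode parent) {
+        this.parent = parent;
+    }
+
+    // Called when a file inside this folder is added, removed or resized.
+    void onFileChanged(long sizeDelta, long countDelta) {
+        totalSizeBytes += sizeDelta;
+        totalFileCount += countDelta;
+        if (parent != null) {
+            parent.onFileChanged(sizeDelta, countDelta);   // propagate up, at most ~32 levels
+        }
+    }
+
+    long totalSizeBytes() { return totalSizeBytes; }
+    long totalFileCount() { return totalFileCount; }
+}
+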

+",258309,,,,,1/10/2019 16:01,How to design deeply nested data structure?,,3,2,1,,,CC BY-SA 4.0,,,,, +385256,1,,,1/10/2019 11:26,,-2,195,"

I'm wondering about unit tests. Let's say I've got some code (in C#, but the language is not important here):

+ +
public class SOT: ISOT
+{
+    List<string> _internalCollection = new List<string>();
+    public string CurrentCollectionName { get; }
+
+    public void AddItem(string item)
+    {
+        _internalCollection.Add(item);
+    }
+
+    public void ChangeCollection()
+    {
+        _internalCollection.Clear();
+    }
+
+    public List<string> GetCollectionItems()
+    {
+        return new List<string>(_internalCollection);
+    }
+}
+
+ +

and we want to test it:

+ +
public class SOT_Test
+{
+    [Fact]
+    public void GetCollectionItem_ReturnAllAddedItem()
+    {
+        var onTest = new SOT();
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+
+        var result = onTest.GetCollectionItems();
+        Assert.Collection(result, e=>e.Equals(""first""), e=>e.Equals(""second""));
+
+    }
+
+    [Fact]
+    public void GetCollectionItem_CollectionChanged_ReturnOnlyFromNewCollection()
+    {
+        var onTest = new SOT();
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+        onTest.ChangeCollection();
+        onTest.AddItem(""3th"");
+        onTest.AddItem(""4th"");
+
+        var result = onTest.GetCollectionItems();
+        Assert.Collection(result, e => e.Equals(""3th""), e => e.Equals(""4th""));
+    }
+}
+
+ +

But after some time, a few more features and refactorings, our SOT looks more like this:

+ +

public interface ICollectionManager
+{
+    string GetNewCollection();
+}

+ +
public class ColectionManager: ICollectionManager
+{
+    public string GetNewCollection()
+    {
+        return Guid.NewGuid().ToString();
+    }
+}
+
+public interface IItemManager
+{
+    void Add(string item);
+    void Clear();
+    IEnumerable<string> GetItems();
+}
+
+public class ItemManager: IItemManager
+{
+    List<string> _items = new List<string>();
+    public void Add(string item)
+    {
+        _items.Add(item);
+    }
+
+    public void Clear()
+    {
+        _items.Clear();
+    }
+
+    public IEnumerable<string> GetItems()
+    {
+        return new List<string>(_items);
+    }
+}
+
+public class SOT2: ISOT
+{
+    private IItemManager _itemManager;
+    private ICollectionManager _collectionManager;
+
+    public SOT2(IItemManager itemManager, ICollectionManager collectionManager)
+    {
+        _itemManager = itemManager;
+        _collectionManager = collectionManager;
+    }
+
+    public void AddItem(string item)
+    {
+        _itemManager.Add(item);
+    }
+
+    public void ChangeCollection()
+    {
+        var collection = _collectionManager.GetNewCollection();
+        _itemManager.Clear();
+        //... Do something with collection 
+    }
+
+    public List<string> GetCollectionItems()
+    {
+        return new List<string>(_itemManager.GetItems());
+    }
+}
+
+ +

So, in our tests, should we mock both dependencies to focus on testing the logic in this particular unit, or rather keep the tests as they are with only a small adjustment:

+ +
public class SOT_Tests
+{
+    [Fact]
+    public void GetCollectionItem_ReturnAllAddedItem()
+    {
+        var onTest = new SOT2(new ItemManager(), new ColectionManager());
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+
+        var result = onTest.GetCollectionItems();
+        Assert.Collection(result, e => e.Equals(""first""), e => e.Equals(""second""));
+
+    }
+
+    [Fact]
+    public void GetCollectionItem_CollectionChanged_ReturnOnlyFromNewCollection()
+    {
+        var onTest = new SOT2(new ItemManager(), new ColectionManager());
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+        onTest.ChangeCollection();
+        onTest.AddItem(""3th"");
+        onTest.AddItem(""4th"");
+
+        var result = onTest.GetCollectionItems();
+        Assert.Collection(result, e => e.Equals(""3th""), e => e.Equals(""4th""));
+    }
+}
+
+ +

The tests are still working and still correct, but now they penetrate two layers of code, so they are more like integration tests. But if we just mock the dependencies:

+ +
public class SOT_Tests_Alt
+{
+    [Fact]
+    public void GetCollectionItem_ReturnAllAddedItem()
+    {
+        var itemManagerMock = new Mock<IItemManager>();
+        var collectionManagerMock = new Mock<ICollectionManager>();
+
+        var onTest = new SOT2(itemManagerMock.Object, collectionManagerMock.Object);
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+        itemManagerMock.Setup(e => e.GetItems()).Returns(new List<string>() {""first"", ""second""});
+
+        var result  = onTest.GetCollectionItems();
+
+        Assert.Collection(result, e => e.Equals(""first""), e => e.Equals(""second""));
+
+    }
+
+    [Fact]
+    public void GetCollectionItem_ShouldAllAddedItemBePassedToItemManager()
+    {
+        var itemManagerMock = new Mock<IItemManager>();
+        var collectionManagerMock = new Mock<ICollectionManager>();
+
+        var onTest = new SOT2(itemManagerMock.Object, collectionManagerMock.Object);
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+
+        onTest.GetCollectionItems();
+
+        itemManagerMock.Verify(e => e.Add(It.IsAny<string>()), Times.Exactly(2));
+
+    }
+
+
+
+    [Fact]
+    public void GetCollectionItem_CollectionChanged_ShouldCollectionManagerBeCalled()
+    {
+        var itemManagerMock = new Mock<IItemManager>();
+        var collectionManagerMock = new Mock<ICollectionManager>();
+        var onTest = new SOT2(itemManagerMock.Object, collectionManagerMock.Object);
+        onTest.AddItem(""first"");
+        onTest.AddItem(""second"");
+        onTest.ChangeCollection();
+        onTest.AddItem(""3th"");
+        onTest.AddItem(""4th"");
+
+        onTest.GetCollectionItems();
+        collectionManagerMock.Verify(e=>e.GetNewCollection(),Times.Once);
+    }
+}
+
+ +

We end up testing method internals, which is, well, not as good as testing system output, or we just test logic that we have mocked anyway, which is also pretty dumb I think.

+ +

So, which approach is the correct one?

+",48334,,,,,1/10/2019 12:32,Unit Tests - correct approach to test system with multiple layers,,1,2,,,,CC BY-SA 4.0,,,,, +385257,1,,,1/10/2019 11:42,,-1,95,"

Imagine the application I am building as a normal media playlist (video / music). On the client side, I select the files I want to play (the files are located on the server), and I send their paths to the server as an array of objects. Converted to JSON, the object would look like:

+ +
{
+   ""mediaFiles"":[
+      {
+         ""type"":""photo"",
+         ""path"":""/../photo.jpg""
+      },
+      {
+         ""type"":""video"",
+         ""path"":""/../video.mp4""
+      }
+   ]
+}
+
+ +

On the server side I will be launching a video player or image viewer based on the file type. So far everything is simple and clear.

+ +

The part where it gets a bit tricky is that I will have a remote control in my client application. Therefore I need to keep track of which file is currently playing, and also the array index of that file, to be able to integrate all remote play features (play / stop / prev / next).

+ +

My current plan was to make my server multi-threaded. One thread would be in charge of playing the playlist and keeping track of it, and another thread would be in charge of remote play.

+ +

I normally use Spring and NodeJS for building RESTful APIs. I consider Spring overkill for only this feature, and the problem with NodeJS is that it is JavaScript and it doesn't support multi-threading.

+ +

Therefore, I decided to switch to Python (Flask) for this problem. Keep in mind that this will be my first contact with Python.

+ +

Possible problems with this plan:

+ +

Like I said, I will have one thread which will be listening for the mediaFiles array (sent from the client).

+ +
+
    +
  1. Once the array is received, start the while loop and play the files inside it.
  2. In order to stop it, I call a stop endpoint, and the while loop will break.
  3. How would I be able to use the prev, next, and pause features while the while loop is running? This is the part why I think this approach isn't good.
+
+ +

I would really like to hear some ideas on how to improve this approach.

+",325459,,186050,,1/14/2019 14:52,1/14/2019 14:52,Making the REST API to act as a playlist,,1,3,,,,CC BY-SA 4.0,,,,, +385258,1,385278,,1/10/2019 11:53,,1,460,"

Are there any algorithms related to the following problem that could be useful for solving it?

+ +

I have a convex hull built on some point set. I would like to simplify it (reduce the number of points) while keeping its perimeter (or area) as small as possible. The new simplified polygon should not intersect the original hull.

+ +

The basic idea I am trying to implement is to calculate, for each point of the polygon, the perimeter added by removing that point, and then remove the cheapest point (the one whose removal adds the minimum value to the perimeter).

+ +

So we keep iterating and removing points while the added perimeter or area value is acceptable and passes some criteria.

+ +

Here comes the problem:

+ +

When removing point p1 we introduce a new edge formed by the previous point p0 and the next point p2. This new edge can be non-optimal or invalid (intersecting the original hull). So I would like to adjust points p0 and p2 along their edges to keep the perimeter valid and as small as possible.

+ +

How can I find these adjusted positions of p0 and p2 ?

+ +

+ +

UPDATE:

+ +

I think my current problem is finding the optimal slope of the new (green) edge. But I am looking forward to any related suggestions and algorithms.

+",265914,,265914,,1/10/2019 14:12,8/7/2019 5:40,Algorithm to simplify 2D convex hull at the cost of extra area,,4,2,,,,CC BY-SA 4.0,,,,, +385264,1,,,1/10/2019 13:03,,0,280,"

I am reading the following link to learn about the system design of various systems. (This is a paid link, so I am attaching all the explanations below.) In an attempt to explain the system design of Instagram, the above link gives us the following requirements:

+
    +
  1. Users should be able to upload/download/view photos.
  2. Users can perform searches based on photo/video titles.
  3. Users can follow other users.
  4. The system should be able to generate and display a user’s News Feed consisting of top photos from all the people the user follows.
+

It suggests having 3 schemas:

+

These schemas suit an RDBMS well.

+

The link says that if we want to scale up we need to tap into the benefits of NoSQL, and for this we will need to create another table, UserPhoto, in which the ‘key’ would be ‘UserID’ and the ‘value’ would be the list of ‘PhotoIDs’ the user owns, stored in different columns.

+

In the link, they have suggested using the Cassandra NoSQL database for this use case.

+

To fulfill the requirement of generating a user's news feed by aggregating the top 100 photos of all the users followed by the current user, the link suggests having our PhotoID comprise a number and the timestamp of the photo upload (epoch time), because we have a primary index on PhotoID in the Photo table. This is the point where it got confusing for me.

+

How will we use the above schema to get the 100 latest photos of all the users followed by the current user in a NoSQL database?

+",325461,,173647,,11/20/2020 20:52,12/20/2020 21:05,Regarding data modeling in a NoSQL use case,,1,1,1,,,CC BY-SA 4.0,,,,, +385271,1,,,1/10/2019 14:17,,-2,115,"

As far as we know, User Stories are the way a requirement is defined in a bounded context with acceptance criteria. On the other hand, the Product Backlog lists all requirements, i.e. new features, enhancements, and existing production issues.

+ +

It is not clear whether or not the Product Backlog consists only of User Stories. So, the question here is as follows.

+ +

Does the Product Backlog consist only of User Stories in Agile development?

+",183455,,118878,,1/10/2019 14:52,1/10/2019 14:52,Does the Product Backlog just consists out of User Stories in Agile Development?,,1,4,,43475.62014,,CC BY-SA 4.0,,,,, +385275,1,,,1/10/2019 14:47,,-3,1002,"

I was hoping that someone could explain to me, in the simplest way possible and with an example, what abstraction is with regard to OOP. I've read articles online and I just don't get it. I'm hoping a simple coding example would help it sink in. I understand it's the concept of hiding complexity from the user, and I understand real-life examples like a coffee machine: a user doesn't need to know the complexity of how it makes the coffee, just that they need to insert a coffee pod and press a button.

+ +
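
To show the kind of example I'm hoping for, here is my own rough attempt at turning the coffee machine into code (I'm not even sure it is a correct illustration of abstraction):

+ +
interface CoffeeMachine {
+    Coffee brew();   // all the caller needs to know about
+}
+
+class PodCoffeeMachine implements CoffeeMachine {
+    @Override
+    public Coffee brew() {
+        heatWater();       // the complexity is hidden from the caller
+        pressurize();
+        return new Coffee();
+    }
+
+    private void heatWater() { /* ... */ }
+    private void pressurize() { /* ... */ }
+}
+
+class Coffee { }
+
+ +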

I'd really appreciate any help,

+ +

Thanks :)

+",325489,,,,,1/10/2019 18:33,Object Oriented Programming what is abstraction?,,3,0,,43475.85278,,CC BY-SA 4.0,,,,, +385280,1,,,1/10/2019 15:41,,0,330,"

I am developing a ""platform"", I have an MVC site that will hold all the main data, as well as our generic API, uses Microsoft authentication to create an account, then our employee MVC will add the data to the database that will allow the user to start consuming our API. I have a custom API for one of our customers, using a different MVC (just handles the API). I will have other custom APIs as well. I also am working on an employee MVC that uses active directory as its authentication. I am able to add this all to IIS using ports, but I am wondering if I am going about this all wrong. My ultimate goal was to have multiple projects, so if I need to make changes to any one project, it would not affect another project. Now I am thinking of trying to ditch the port method and instead set up the MVCs to have postfixes.

+ +
example.com(Main MVC with Microsoft Authentication)
+example.com/api/{key}/{token} (Main MVC using Attribute Routing)
+example.com/employee (MVC with AD Login)
+example.com/custom/api/{additionStaticDefiner}/{key}/{token} 
+
+ +

This additional definer would be something like the company name put in the initial route template:

+ +
config.Routes.MapHttpRoute(
+            name: ""DefaultApi"",
+            routeTemplate: ""custom/api/BobsBurger/{controller}/{id}"",
+            defaults: new { id = RouteParameter.Optional }
+        );
+
+ +

The main two questions, am I splitting these up to much, (they are all in the same solution, and I have shared library applications to not duplicate coding), and is it possible to do this without the dynamic code areas running into each other? I tried once and had some issues and am not sure where to start, I will figure it out, but if someone has experience in this and suggestions I am all ears.

+",325492,,,,,1/10/2019 15:41,Running multiple MVC projects: One site,,0,2,,,,CC BY-SA 4.0,,,,, +385284,1,,,1/10/2019 16:55,,0,30,"

One of the database systems I work with (I'll call it database A) was essentially sharded into 3 schema-identical copies. This was easy to source control, and when a change was made to any of the copies, it would go to all the copies.

+ +

A few months ago a change was made. A new database was created (not sharded, I'll call it database B), and a view was created in database A to reference database B.

+ +

The problem is that the view is not the same between the three instances of database A. Database B has 3 schemas: one to match each of the 3 instances of database A. The difference in the view between the three instances is only to point to the different schemas.

+ +

We're using TFS 2015 for source control. It would be too big of an undertaking to change this DB structure, but I'm not sure how to source control the view that has to be different between the 3 instances of database A.

+ +

Is there a way to source control this without having three copies in TFS?

+",313164,,,,,1/10/2019 16:55,Source control Sql Server multiple shards with minimal differences,,0,5,,,,CC BY-SA 4.0,,,,, +385285,1,,,1/10/2019 17:55,,1,155,"

I am trying to learn CQRS in my spare time. Say I have a class like this:

+ +
public class Person
+{
+  private Guid Id {get; set;}
+  private string Name {get; set;}
+  private List<Order> orders;
+
+  //Methods and constructor go here
+
+}
+
+ +

I am thinking of a scenario where it would be beneficial to persist the Person class to the write database (SQL Server) and then the Person and Order classes to the read database (MongoDB).

+ +

The only reason I need to persist the Person class on the write side is to benefit from change tracking.

+ +

Is it normal for the write database to have different fields from the read database, i.e. in this case orders are not persisted on the write side? Or should the same classes and fields be persisted on both sides?

+ +

I realise that relational databases are different from NoSQL databases and both have their benefits and limitations. I am specifically asking if it is normal for information to be available in the read database that is not available in the write database.

+ +

I realise the answer to my question is yes from a technical point of view; I am asking with the principle of least astonishment in mind.

+ +

Update 20/01/19

+ +

Here is a sample of code from the write side domain model:

+ +
 public class Product
+    {
+        public Guid Id { get; set; }
+        public string Code { get; set; }
+        public string Description { get; set; }
+
+        public void LookupDescriptionByCode(List<ProductDescriptionLookup> productCodes)
+        {
+            Description = productCodes.Find(x => x.Code == Code).Description;
+        }
+    }
+
+    public class ProductDescriptionLookup
+    {
+        public Guid Id { get; set; }
+        public string Code { get; set; }
+        public string Description { get; set; }
+    }
+
+ +

The database (lookup tables) looks like this:

+ +
create table ProductDescriptionLookup(Id uniqueidentifier, Code varchar(100), Description varchar(100))
+insert into ProductDescriptionLookup (Id,Code,[Description])
+values (newid(), 'prod1','Product1')
+
+ +

Here is some test code for the write side:

+ +
[Fact]
+        public void Test()
+        {
+            List<ProductDescriptionLookup> list = new List<ProductDescriptionLookup>();
+            list.Add(new ProductDescriptionLookup { Id = Guid.NewGuid(), Code=  ""prod1"", Description=""Product1"" });
+            Product p1 = new Product { Id = Guid.NewGuid(), Code = ""prod1"", Description = """" };
+            p1.LookupDescriptionByCode(list);
+        }
+
+ +

Here is the read model (which is mapped to a MongoDB database):

+ +
public class Product
+    {
+        public Guid Id { get; set; }
+        public string Description { get; set; }
+     }
+
+ +

Notice that the product Code is not even persisted to the read side. Therefore there is no need for the lookup table on the read side (because it will never have to look up the description by product code), so I am planning to exclude the ProductDescriptionLookup table from the read side. This works perfectly from a technical point of view, however I am wondering if it is frowned upon, as the read database is now different from the write database.

+",65549,,65549,,1/20/2019 16:42,1/20/2019 16:42,Can the write database have different fields to the read database?,,1,0,,,,CC BY-SA 4.0,,,,, +385287,1,385458,,1/10/2019 18:55,,-5,91,"

I'm looking for flaws in this ""token based login"" design, besides the UML ""syntax"" errors. Theoretically speaking, it should do the trick when implemented in a small project.

+ +

If my English is off somewhere, sorry about that.

+ +

Thanks in advance.

+ +

EDIT: 1) Is this model useful and clear? 2) Will it work when implemented? 3) Implementation aside, is it secure?

+ +

+",325505,,325505,,1/12/2019 16:52,1/14/2019 2:15,How this diagram can be improved,,1,3,,43488.58333,,CC BY-SA 4.0,,,,, +385288,1,385302,,1/10/2019 19:08,,0,100,"

Let's say we have some configurations stored in a database for how we should acquire some data. One of these fields is an AcquisitionStrategy, which denotes how we will acquire the data.

+ +

I want to add a field that corresponds only to a subset of the rows in this table that have an acquisition strategy of HTTP. Should I make a new table that maps the CrawlerType to a Table 1 ID?

+ +
Design 1
+----------------------
+Table 1 -> ID | AcquisitionStrategy | CrawlerType ...
+           1    HTTP                  GenericWebCrawler
+           2    FTPS                  NONE
+           3    SFTP                  NONE
+
+Design 2
+Table 1 -> ID | AcquisitionStrategy | ...
+           1    HTTP                  
+           2    FTPS                  
+           3    SFTP                  
+
+Table 2 -> ID | CrawlerType
+           1    GenericWebCrawler
+
+",244069,,,,,2/11/2019 15:03,Should I make a new table when a column only is populated when another column has a certain value,,2,0,1,,,CC BY-SA 4.0,,,,, +385290,1,,,1/10/2019 19:52,,1,112,"

I have a data access layer, which currently communicates with a database.

+ +
public interface IDao<T>  // T is my DTO
+{
+   void Write(IEnumerable<T> dtosToPersist);
+}
+
+public class Dao<T> : IDao<T>
+{
+    private readonly IBulkCopySaver<T> _bulkSaver;
+    private IEnumerable<T> _buffer;
+
+    public Dao(IBulkCopySaver<T> bulkSaver)
+    {
+       _bulkSaver = bulkSaver;
+       _buffer = new List<T>();
+    }
+
+    public void Write(IEnumerable<T> dtosToPersist)
+    {
+     // implementation logic.
+    }
+
+}
+
+ +

I would like to also add persistence to XML, for example. My goal is to replace the Dao with a new object implementing IDao, changing calling code as little as possible (Dao is used extensively). I'm using a dependency injector, and the ideal outcome would be to switch from database to XML persistence just by changing the injection interface/concrete type bindings.

+ +

The problem for me is that writing in the XML case needs more information; I would need the folder path, for example. That folder path is determined at runtime from the program arguments and the type T. The interface contract of my data access object is not sufficient, as it only takes an IEnumerable.

+ +

What are my options here ?

+ +

My first idea is to make the dependency injector aware of the file path parameter and inject it as a constructor argument. Supposing the folderPath is in a context-local field of the injector, using Ninject:

+ +
Bind<IDao<DtoConcreteType>>().To<Dao<DtoConcreteType>>();
+Bind<IDao<DtoConcreteType>>().To<XmlDao<DtoConcreteType>>().WithConstructorArgument(""filePath"", context.folderPath);
+
+ +

However, this feels a bit wrong to me. The injection framework will do reflection under the hood, and I feel there should be an OOP way to solve this.

+ +

Another idea, as I was writing this, is that the XmlDao.Write method could call a provider object (injected) that gives it the details it lacks (an IFilePathProvider that knows the program arguments and makes the decision based on the type T). This seems much simpler.

+ +
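
To make that second idea concrete, here is a sketch of what I have in mind (the interface and member names are made up, nothing here exists yet):

+ +
public interface IFilePathProvider<T>
+{
+    // decides the target folder/file from the program arguments and the DTO type T
+    string GetFilePath();
+}
+
+public class XmlDao<T> : IDao<T>
+{
+    private readonly IFilePathProvider<T> _pathProvider;
+
+    public XmlDao(IFilePathProvider<T> pathProvider)
+    {
+        _pathProvider = pathProvider;
+    }
+
+    public void Write(IEnumerable<T> dtosToPersist)
+    {
+        var path = _pathProvider.GetFilePath();
+        // serialize dtosToPersist as XML to 'path'
+    }
+}
+
+ +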

What is your opinion? Do you have other ideas for this? What do you think about the ideas I had?

+",325509,,,,,1/11/2019 6:38,Add behaviours without changing existing code,,2,0,,,,CC BY-SA 4.0,,,,, +385291,1,385303,,1/10/2019 19:53,,0,52,"

I am implementing an asynchronous remote (to AWS S3) logging handler (in Python, but it doesn't matter) supporting 4 modes:

+ +
    +
  1. Immediate: the messages get written immediately to S3 - I am trying to avoid this mode
  2. +
  3. By queue size: the messages get stored into a linked queue, and after reaching a certain size (say 500 KB), the whole content is persisted all at once to a file in S3
  4. +
  5. By delimiters: all the messages contained between 2 string delimiters are treated as one and sent all at once, e.g. here message \n message2 \n will be sent at once

    + +
    logger.info(""#start#"")
    +logger.info(""message"")
    +logger.info(""message2#end#)
    +
  6. +
  7. By timer: every 10 seconds or so, the accumulated logs get sent to a file in S3.
  8. +
+ +
+ +

The reason I decided to implement remote logging is that the cluster could crash at any time or get terminated after finishing its processing, and I would then lose the locally stored logs. It would be helpful to know ""why"" it crashed, of course, but also, very importantly, ""when"" it crashed, especially for long-running operations, in which case the processing could be resumed at that stage.

+ +

An idea would be to have a queue (with ActiveMQ or Kafka etc.) on which messages get published in real-time, then probably aggregated before going to S3, but I thought it would probably be overkill to drag a whole broker infrastructure for this use-case.

+ +
+ +

My implementation works, but my questions are more conceptual and best-practice oriented:

+ +
    +
  • When using modes ""2"" (by queue size) and ""4"" (by timer), how can I be notified of the end of the program's execution, so that I can flush the content of the local queue and stop the timer thread? I am currently marking the logging thread as a daemon, but then I obviously miss the last messages. I am looking for a better way: avoiding daemon threads, yet still getting notified that the main thread finished, so the termination of these threads is delayed until everything gets pushed (see the sketch after this list).

  • +
  • Does my approach make sense, or am I badly reinventing the wheel in a hacky way?

  • +
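
+ +

Regarding the first bullet: would overriding close() on the handler be the right mechanism? Here is a minimal sketch of what I mean (the class name and sizes are made up). It relies on logging.shutdown(), which the logging module registers with atexit and which calls flush() and close() on every handler at normal interpreter exit, so no daemon thread is needed:

+ +
import logging
+
+class S3BufferingHandler(logging.Handler):
+    def __init__(self, max_buffer_size=500 * 1024):
+        super().__init__()
+        self.max_buffer_size = max_buffer_size  # approximate size, in characters
+        self.buffer = []
+        self.buffered_size = 0
+
+    def emit(self, record):
+        msg = self.format(record)
+        self.buffer.append(msg)
+        self.buffered_size += len(msg)
+        if self.buffered_size >= self.max_buffer_size:
+            self.flush()
+
+    def flush(self):
+        if self.buffer:
+            payload = '\n'.join(self.buffer)
+            self.buffer, self.buffered_size = [], 0
+            self._upload_to_s3(payload)  # hypothetical helper, e.g. a boto3 put_object call
+
+    def close(self):
+        # called by logging.shutdown() at interpreter exit, so the remaining
+        # buffered messages get pushed without marking anything as a daemon
+        self.flush()
+        super().close()
+
+    def _upload_to_s3(self, payload):
+        pass  # the actual S3 write is elided in this sketch
+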
+",325303,,325303,,1/10/2019 22:30,1/10/2019 23:19,Custom remote logging strategies,,1,0,,,,CC BY-SA 4.0,,,,, +385294,1,385296,,1/10/2019 20:30,,-3,254,"

Although it is a fundamental concept, I don't understand the statement ""every CLASS in Java is a subclass of the class Object,"" which is often quoted in Java tutorials, usually in the inheritance section.

+ +

I thought this statement would be true if stated in reverse: ""every class OBJECT is a subclass of a class."" Here is why: in OO design, we use our class template to stamp out our designated objects, so that would make the object subordinate to the class, because we create the class before creating our objects.

+ +

Yet, since this is an often-quoted, gold-standard statement in object-oriented design, I know my logic is incorrect. But why?

+",325511,,,,,1/10/2019 20:59,Java - OO - Understanding Subclass of Class Object,,3,1,,,,CC BY-SA 4.0,,,,, +385304,1,,,1/10/2019 23:24,,0,433,"

So at a high level, we have multiple projects/solutions in the company and we need to keep them in one source control system. Since the number of projects is around 50, we are not creating individual repositories for each of them; instead they are all in the same repository. Our branches reflect the environments, which are Dev, QCT, STG, PRD.

+ +

Having said that, which of the below 2 options would be better for us:

+ +
    +
  1. Each project has its own folder inside which different branches are created
  2. +
  3. The core repository starts with branches, inside which are folders for all projects
  4. +
+ +

Any experience-based input on these 2 strategies would be greatly appreciated. Even a pointer to a blog or some other place where this discussion has happened would be nice.

+ +

EDIT: Just to be clear, I am not asking whether to create branches or not, or whether to create them for each feature or not. We have already decided we are going to work with 4 branches representing our environments. The question is the location of those branches: should those branches be inside each project, or at a global scale?

+ +

The one difference for me is that per-project branches have more management burden, but also allow better control over branching and scripting for just that project.

+",318901,,318901,,1/11/2019 20:54,7/6/2019 10:01,What are branching strategy pros and cons for 2 main types,,1,13,,,,CC BY-SA 4.0,,,,, +385305,1,,,1/10/2019 23:43,,0,154,"

I am implementing Authentication from scratch (php).

+ +

I reached a point where I have different types of users (admin, author, editor etc.)

+ +

After dealing with this, I realized I can allow multiple account login at the same time (the login pages/paths are different for each user type).

+ +
    +
  1. Would it be a good idea to implement multiple account login (with different user types)?

  2. +
  3. How about allowing multiple accounts of the same user type - is it a good feature, or just a waste of time?

  4. +
+ +

Number 2 is similar to what Gmail has (multiple accounts, the ability to switch between accounts, and even an ""All mail"" page across all accounts).

+",320458,,320458,,1/11/2019 22:47,1/14/2019 1:42,Is it a good idea to allow multiple account login (in the same browser),,1,8,,,,CC BY-SA 4.0,,,,, +385309,1,,,1/11/2019 2:55,,0,159,"

I am new to unit testing, but finally getting started.

+ +

I have been running into a situation where my unit test names grow too long to be readable, due to the multiple parameters and combinations of them. For example, consider these hypothetical test case names, resembling my actual test case names:

+ +
GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsFinanceAndDesignationIsManager_AccessAllowed
+GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsFinanceAndDesignationIsAnalyst_AccessDenied
+GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsFinanceAndDesignationIsTemp_AccessDenied
+
+GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsHrAndDesignationIsManager_AccessAllowed
+GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsHrAndCorpConselor_AccessDenied
+GetAllEmployeeRecords_UserBelongsToHeadQuarterAndDeptIsHrAndDesignationIsTemp_AccessDenied
+
+GetAllEmployeeRecords_UserDoesNotBelongToHeadQuarter_AccessDenied
+
+ +

I feel that the UserBelongsToHeadQuarter phrase in the names is redundant for the 6 out of 7 test cases but it is also important to distinguish the 6 tests from the 7th test.

+ +

Using [TestCategory] might work, but my team members are concerned that a failing test case might not clearly communicate which exact scenario failed on test execution reports, and/or that [TestCategory] is more suitable for breaking the tests down by component or feature.

+ +

Is there a better way to organize or name these for more readability?

+ +
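
For example, would something along these lines (just a sketch with MSTest attributes and hypothetical names) be considered more readable, with the shared context moved into the test class name?

+ +
using Microsoft.VisualStudio.TestTools.UnitTesting;
+
+[TestClass]
+public class GetAllEmployeeRecords_UserInHeadQuarterFinanceDept
+{
+    [TestMethod]
+    public void ManagerIsAllowedAccess() { /* arrange, act, assert */ }
+
+    [TestMethod]
+    public void AnalystIsDeniedAccess() { /* arrange, act, assert */ }
+
+    [TestMethod]
+    public void TempIsDeniedAccess() { /* arrange, act, assert */ }
+}
+
+ +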

I looked up unit test practices on the Microsoft pages and in the NodaTime code base, but I didn't find names anywhere near this long.

+",267521,,,,,1/11/2019 6:34,How to name and organize unit tests with combinations of multiple parameters?,,2,2,,,,CC BY-SA 4.0,,,,, +385313,1,385331,,1/11/2019 5:03,,0,929,"

I am reading Eric Evans's Domain-Driven Design, and I encountered this concept on p. 108. I am having a hard time grasping the concept, in spite of the explanations on pages 107 and 108.

+ +

Here is an excerpt of the topic from the book:

+ +
+

Medium-grained, stateless SERVICES can be easier to reuse in large + systems because they encapsulate significant functionality behind a + simple interface. Also, fine-grained objects can lead to inefficient + messaging in a distributed system.

+ +

As previously discussed, fine grained domain objects can contribute to + knowledge leaks from the domain into the application layer, where the + domain object's behavior is coordinated.

+
+ +

Can somebody explain to me what granularity is, so I can better understand what is being described in the excerpt above?

+",267828,,,,,1/11/2019 9:03,What is Granularity?,,2,2,,,,CC BY-SA 4.0,,,,, +385317,1,,,1/11/2019 5:56,,-3,72,"

Why do so many startups and big companies use WordPress? What is the benefit of using it (fast management, ease of use, customization) instead of creating a site from scratch with HTML, CSS, JS and a PHP back-end?

+",325552,,,,,1/11/2019 6:12,Why it is the good to use WordPress instead of creating one from scratch?,,1,1,,43476.41458,,CC BY-SA 4.0,,,,, +385322,1,385328,,1/11/2019 7:05,,0,87,"

I created a CSV export that works like the code below. There is a LinkedHashMap where the keys are the column titles and the values are functions that read certain properties.

+ +

By reordering the lines where entries are added to the map, one also reorders the CSV column representation. Column header and data are connected, so that one can't move one without the other.

+ +

Are there any downsides to my code? Is there a better approach? (I omitted escaping characters and so on to reduce the amount of code.)

+ +
import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Function;
+
+public class Test {
+
+    private final static Map<String, Function<Bean, Object>> DEF_MAP = new LinkedHashMap<>();
+
+    public static void main(String[] args) {
+        DEF_MAP.put(""Prop B"", bean -> bean.getB());
+        DEF_MAP.put(""Prop A"", bean -> bean.getA());
+
+        List<Bean> beans = new ArrayList<>();
+        Bean a = new Bean();
+        a.setA(""a1"");
+        a.setB(""b1"");
+        beans.add(a);
+
+        Bean b = new Bean();
+        b.setA(""a2"");
+        b.setB(""b2"");
+        beans.add(b);
+
+        DEF_MAP.keySet().forEach(k -> {
+            System.out.print(k + "";"");
+        });
+        System.out.println();
+
+        beans.forEach(bean -> {
+            DEF_MAP.values().forEach(v -> {
+                System.out.print(v.apply(bean) + "";"");
+            });
+            System.out.println();
+        });
+    }
+
+    private static class Bean {
+        private String a;
+        private String b;
+
+        public String getA() {
+            return a;
+        }
+
+        public void setA(String a) {
+            this.a = a;
+        }
+
+        public String getB() {
+            return b;
+        }
+
+        public void setB(String b) {
+            this.b = b;
+        }
+
+    }
+}
+
+",272203,,,,,1/11/2019 8:29,Generating CSV export,,1,11,,,,CC BY-SA 4.0,,,,, +385323,1,,,1/11/2019 7:18,,2,200,"

I work on several projects and sometimes they share a common base. How do you work with version control?

+ +

Here's an example:

+ +

I've got a boilerplate WordPress plugin that I reuse. On each new WordPress plugin project I create code that I want to add to my boilerplate plugin. Currently I'm manually extracting libraries & fixes and adding them to my boilerplate.

+ +

Is there a better way I could do this? So when I create a new library or fix a specific bug, can I somehow say 'this should also be added to my boilerplate repository'?

+ +

Update:

+ +

I've read the link, answers and comments. It gave me insight into whether it's a good idea to make a 'mudball' library. Reading all of it, I think a common library can sometimes make sense. In my example, a 'common base', it makes sense to me.

+ +

But I still have my question: how do I do it? Say I have this:

+ +
    +
  • boilerplate plugin
  • +
  • plugin 1
  • +
  • plugin 2
  • +
  • plugin 3
  • +
+ +

I decide to work on plugin 1 and write some code that would be useful for the boilerplate plugin. Or I fix a bug in plugin 3 that also needs to be fixed in the boilerplate.

+ +

Is there a way in version control (I use NetBeans / Bitbucket) to say: commit this code to plugin 1 and also add this part to the boilerplate plugin?

+",325556,,325556,,1/23/2019 12:23,3/4/2019 10:05,How do I work on a new project and simultaneously add generic code to a base library with version control?,,1,3,,,,CC BY-SA 4.0,,,,, +385324,1,,,1/11/2019 7:20,,5,2183,"

Edited: Update is at the bottom

+ +

There could be a common or best practice for this scenario, but I am unfamiliar with it. However, it could very easily be a matter of subjective opinion on how one wants to implement their classes. Either way, I am hoping to get some opinions from the spectrum of class designers here.

+ +

I am currently working on a project that allows users to generate files for data visualization.

+ +

The library will support two file types that are formatted differently (binary and XML). Due to this, I am left with a dilemma on how I want to control class instantiation and API access:

+ +
    +
  1. Create a separate class for each file type and visualization type
  2. +
  3. Create a separate class for each visualization type and load with methods for each file type
  4. +
  5. (Not demonstrated) The inverse of Option 2
  6. +
+ +

Option 1:

+ +
class Base:
+    # Stuff
+class F1V1(Base):
+    # Methods specific to file and visualization type one
+class F1V2(Base):
+    # Methods specific to file type one and visualization type two
+class F1V3(Base):
+    # Methods specific to file type one and visualization type three
+class F2V1(Base):
+    # Same logic as before but for file type two
+class F2V2(Base):
+    # ...
+class F2V3(Base):
+    # ...
+
+ +

Here a user of the library would call their direct class to perform operations specific to it without the need of setting keyword parameters to determine what it does (e.g. fv = F1V2())

+ +

What is great about this way is that it's explicit: a user knows exactly what file and visualization type they are working with. But I think it's cumbersome and not extensible in the event I want to add more file or visualization types, forcing me to write a new class for each possible combination.

+ +

Option 2:

+ +
class Base:
+    # Stuff
+class V1(Base):
+    def __init__(self, ftype_1=True):
+        self.ftype_1 = ftype_1
+    def write(self):
+        if self.ftype_1:
+            # Write method specific to file type one
+        # Write method specific to file type two
+    # Assume 3 methods for each operation
+    # One method such as `write()`
+    # One method for each of the file types
+class V2(Base):
+    # Same logic as before
+class V3(Base):
+    # ...
+
+ +

What I don't like about this method is that I now have to define multiple methods for each class that execute based upon keywords provided at construction (i.e. fv = V2(ftype_1=False)). Ideally, what I would like is to supply a keyword argument that then determines which of the methods should belong to that class. For example:

+ +
fv = V2() # Only contains methods to file type one operations
+fv = V2(ftype_1=False) # Only contains methods to file type two operations
+
+ +

As shown, there is nothing that prevents the following:

+ +
fv = V2() # Set for file type one
+fv.write_ftype_2() # Will produce an invalid file type formatting
+
+ +

I am not sure of a way to dynamically bind/remove methods based upon keyword arguments. It would be great if I could simply write all the methods for each file type within each visualization type, then remove the methods that are no longer relevant to the class. I'm not sure if this is even advisable; I can already think of a scenario like so:

+ +
def write(self):
+    if self.ftype_1:
+        # Write method specific to file type one
+    elif self.type_2:
+        # ...
+    else:
+        # ...
+
+ +

If I dynamically removed methods from a class based upon keyword arguments, what would the point of the conditions be if say the first one held?

+ +

Summary:

+ +

So which is a common or best practice? Which can be improved? Or am I missing another way?

+ +

An ideal example would be (in my mind):

+ +
fv = Hexagons(ftype='.abc', flat=True, area=3)
+fv.insert(data)
+fv.write(data) # Specifically writes for file types of '.abc'
+
+ +

I suppose I could make Hexagons() return a subclass via __new__, but I think that might be unclear as to what is happening. To call Hexagons() but receive an ABCHexagons object could lead to confusion when users inspect the code base.

+ +

A factory method is ideal for this, but that is simply class instantiation. But each visualization type may have a variety of different keyword parameters that may not apply to the others. Rather, my issue lies with how to define them in the code base which ultimately leads to solving how to expose them to users.

+ +

Update:

+ +

After @Mihai and @Ewan gave suggestions, it is clear that having a writer class for each file type is the best way to go and inheritance is not. Now I need to examine whether composition is a better strategy, and I would like to clarify some details.

+ +
    +
  1. Each file type contains data that is formatted to represent a grid of shapes
  2. +
  3. A visualizer class is not used to display any data a user inputs
  4. +
  5. A visualizer class is used solely to determine how the write class writes
  6. +
+ +

For example:

+ +
class binaryWriter:
+    # Writes only binary files
+class xmlWriter:
+    # Writes only XML files
+class Hexagons:
+    # Contains methods for determining geometry
+class Triangles:
+    # Same as Hexagons
+
+ +

Suppose I wanted to write a binary file containing an array of hexagon tiles. Once the visualizer (i.e. the hexagonal grid) is selected, operations within the writer and visualizer classes work together to determine how the writer class writes to a file on disk. Let's say I want to insert a point in my grid and figure out which hexagon it belongs to; I would do:

+ +
w = binaryWriter(shape='hexagon')
+w.insert(1, 2) # Some point of x, y
+# Repeat insertions
+w.save() # Write to file on disk
+
+",325526,,325526,,1/12/2019 0:24,1/12/2019 0:24,Design pattern for similar classes that require different implementations,,2,0,,,,CC BY-SA 4.0,,,,, +385330,1,385332,,1/11/2019 8:48,,-1,314,"

I was wondering if there is a way to get the values of every case in a switch statement. When an unimplemented case is provided, I would like to throw an exception and list the available case values.

+ +
switch (partName.Trim().ToLower())
+{
+    case ""engine"":
+        //something
+        break;
+    case ""door"":
+        //something
+        break;
+    case ""wheel"":
+        //something
+        break;
+    default:
+        throw new NotImplementedException($""Available parts are {????}."");
+}
+
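
+ +

One idea would be to replace the switch with a dictionary so that the keys double as the list of available parts. Is something like the following (just a sketch, with hypothetical handler bodies) the usual way to do this?

+ +
var handlers = new Dictionary<string, Action>
+{
+    [""engine""] = () => { /* something */ },
+    [""door""] = () => { /* something */ },
+    [""wheel""] = () => { /* something */ }
+};
+
+if (handlers.TryGetValue(partName.Trim().ToLower(), out var handler))
+{
+    handler();
+}
+else
+{
+    // the dictionary keys are the single source of truth for the message
+    throw new NotImplementedException($""Available parts are {string.Join("", "", handlers.Keys)}."");
+}
+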
+",325563,,,,,1/11/2019 12:37,How to get switch case values,,2,2,,43476.54583,,CC BY-SA 4.0,,,,, +385334,1,385344,,1/11/2019 9:28,,12,4445,"

Still trying to wrap my head around microservice architecture since I'm used to a monolithic approach

+ +

Suppose we try to build an extremely simplified Uber-style booking system. To simplify things, let's say we have 3 services and a gateway API for the client (Booking, Drivers, Notification), and we have the following workflow:

+ +

When creating new booking:

+ +
    +
  1. Check if the existing user already has a booking
  2. +
  3. Get list of available drivers
  4. +
  5. Send notification to the drivers to pick up the booking
  6. +
  7. Driver picks up the booking
  8. +
+ +

Let's say all messaging is done through an http call rather than a messaging bus like kafka to keep things simple.

+ +

So in this case, I thought that the Booking service could do the check for an existing booking. But then who should get the list of available drivers and send the notification? I'm thinking of doing it at the gateway level, but then the logic is kind of split into two places:

+ +
    +
  • Gateway - get list of available drivers + send notifications
  • +
  • Booking - check for existing booking
  • +
+ +

And I'm pretty sure the gateway is not the right place to do it, but I feel like if we do it in the Booking service, it becomes tightly coupled.

+ +

To make it more complicated, what happens if we have another project that wants to reuse the booking system but with its own business logic on top of it? That's why I thought of doing it at the gateway level, so the new project's gateway can have its own business logic separate from the existing one.

+ +

Another way of doing it, I suppose, is for each project to have its own booking service that talks to the core booking service, but I'm not sure what the best approach is here :-)

+",37685,,209331,,1/11/2019 14:34,1/11/2019 19:20,Where should business logic sit in microservice architecture?,,3,1,10,,,CC BY-SA 4.0,,,,, +385342,1,,,1/11/2019 12:57,,1,92,"

I'm using the MVVM design pattern in my application which is comprised of,

+ +
    +
  • A Xamarin.IOs project (View Layer)
  • +
  • A Net Standard project (Common Layer)
  • +
  • A Xamarin.Android project (in the future) (View Layer)
  • +
+ +

In the Xamarin.iOS project, there's a delegate class (NotificationCenterDelegate, extending UNUserNotificationCenterDelegate) which triggers a method (WillPresentNotification) on receiving a notification. This method should inform a ViewModel (BasePageViewModel) that the method was invoked. The BasePageViewModel is not injected into the delegate class.

+ +

There are a couple of approaches I can use here:

+ +
    +
  1. Inject a ViewModel (ControlViewModel). Then invoke a method in the ViewModel from the WillPresentNotification method.

    + +
    public void WillPresentNotification(/* parameters */)
    +{
    +    ControlViewModel.UpdateCount();
    +}
    +
    + +

    In ControlViewModel I invoke an event that is captured and handled in BasePageViewModel

  2. +
  3. Implement an Interface (IDelegateViewService) in the Delegate class that has an Event Handler. Trigger the event in the WillPresentNotification method.

    + +
    public class NotificationCenterDelegate : UNUserNotificationCenterDelegate, IDelegateViewService
+{
+    public event EventHandler CountUpdated;
+
+    public void WillPresentNotification(/* parameters */)
    +    {
    +        CountUpdated?.Invoke(this, null);
    +    }
    +}
    +
    +public interface IDelegateViewService
    +{
    +    event EventHandler CountUpdated;
    +}
    +
    + +

    Then capture and handle the event in the BasePageViewModel.

  4. +
+ +

I prefer the first approach, as it doesn't use an interface to invoke something in the ViewModel. I feel that using ViewServices reduces the ability to share code among different platforms, and that ViewServices should only be used if no other approach is available.

+ +

Can anyone comment on these two design approaches? I would like to get some theory involving why one is better than the other or why a completely different design is much better!

+",279673,,,,,1/11/2019 12:57,When not to use View Services in MVVM design pattern?,,0,0,,,,CC BY-SA 4.0,,,,, +385346,1,385348,,1/11/2019 16:10,,2,1086,"

I currently build applications that are fairly monolithic. I have one or many code bases that compile into a single binary/package and are deployed on a cluster of Docker containers. All of the data is stored in a single MySQL database, a Redis cluster and possibly a NoSQL database for some of the data.

+ +

In this case, the bulk of my data is stored in a MariaDB RDS instance on Amazon Web Services. This works fairly well because RDS handles automated backups and provides other benefits.

+ +

However, let's say that I want to split a service into its own ""microservice"". Where would it store its data? If I have 5 microservices, spinning up 5 RDS instances and 5 Redis clusters doesn't seem to be the most cost-effective option and seems to be a lot of management overhead.

+ +

It seems to me that a cluster of docker containers would be more manageable for a single microservice. For example using something like docker-compose, you can spin up several docker images into as a single unit very easily. AWS has a similar concept called ""Task Definitions"" (to my knowledge) which you can launch on AWS Fargate or ECS. However, Fargate does not allow persistent storage, so you are basically forced to launch your database in something like RDS.

+ +

I suppose this is a fairly open ended question, and might pertain to DevOps more than actual Software engineering. How would someone design a microservice to be easily deployed on the cloud whilst being easily maintained as a separate but packaged unit of sub-services (app server, databases, cache, etc)? Using docker compose and amazon Task definitions seems to be the best way to keep consistency between development/staging/production environments, however it does have some limitations such as not having persistent storage on Fargate for example.

+ +

Just looking for examples on how someone might achieve this to help my understanding.

+",303638,,,,,1/11/2019 17:03,Where to store data for Microservices Architecture?,,1,0,1,,,CC BY-SA 4.0,,,,, +385350,1,385355,,1/11/2019 17:47,,-1,172,"

Using a website with JavaScript as an example.

+ +

Let's say I have script A which only performs a specific function on page Foo. For example something like sorting elements in a list.

+ +

Script A is only added to pages that have a list that needs to be sorted via JavaScript.

+ +

In the JavaScript we access the list with document.querySelector('#list'). Because this script is only added to pages that I know have the list, I feel it would be safe to just immediately start accessing the children of the list to sort them. Additionally, even if the list is removed for whatever reason, the only issue is an error in the console.

+ +

Thus my question is: should a null check be performed even if I know the code will only run on pages where the query will not return null?

+",325609,,,,,1/11/2019 19:25,Null checks good practice if code will only be ran when object is never null,,2,1,,,,CC BY-SA 4.0,,,,, +385354,1,385356,,1/11/2019 19:11,,0,179,"

My parent class has a vector field

+ +

I want to force child classes to push into that vector as many items as they have - at least one item.

+ +

example:

+ +
#include <string>
+#include <vector>
+
+using namespace std;
+
+class Options {
+    protected:
+        vector<string> optionItems;
+    public:
+        void printItemsInConsole(){
+            // prints items on console
+        }
+};
+
+class MainOptions : public Options {
+    //have to pushback {Open game, Open highScores, Exit} options for example
+
+};
+
+class SideOptions : public Options {
+    // have to pushback {Clear game database, Uninstall game, Back to main menu} options for example
+};
+
+ +

How to force child classes to push back their items?

+ +

I don't want to pass the items in the constructor and then put them into the vector right there, because sometimes there would be a lot of items, so it's not a clean way.
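
+ +

Would something along these lines be an acceptable way to do it - a pure virtual method that each child must implement in order to provide its items (only a sketch; the item values are made up)?

+ +
#include <iostream>
+#include <string>
+#include <vector>
+
+class Options {
+    protected:
+        // every child is forced to implement this; by convention it returns at least one item
+        virtual std::vector<std::string> createItems() const = 0;
+    public:
+        virtual ~Options() = default;
+        void printItemsInConsole() {
+            for (const std::string& item : createItems()) {
+                std::cout << item << ""\n"";
+            }
+        }
+};
+
+class MainOptions : public Options {
+    protected:
+        std::vector<std::string> createItems() const override {
+            return {""Open game"", ""Open highScores"", ""Exit""};
+        }
+};
+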

+",324139,,324139,,1/14/2019 14:42,1/14/2019 14:42,Force field initialize not by constructor in c++,,2,0,,,,CC BY-SA 4.0,,,,, +385364,1,385366,,1/11/2019 23:43,,1,343,"

I would like to create a finite-state machine for the text given below. While solving this, I came across several problems, which I listed at the bottom. (This example is in terms of testing, where you later derive tests from a specification which is in the form of a state machine.)

+ +
+

The following specification of an elevator is given: The elevator is inside a house with access to the top/roof and can be requested from every storey (e.g. from outside). The top/roof of the house can only be approached if there is a key inside.

+ +

If the elevator is requested to any storey, then the elevator goes right there after having closed its doors. When the elevator arrives, the doors open. The doors close if the event timeout is triggered. If the elevator reaches the desired storey, then the event storeyReached (or topReached for reaching the top/roof) is triggered, as well as the event openElevatorDoors.

+
+ +
    +
  • I'm not sure how to choose the initial state. Roughly speaking, it's all about an elevator, so it could start there (my second option would be roof AND storey, but then we would have two initial states, which seems wrong - we can only have one). But then the elevator itself is inside a house, so would I need to use house as a state too...?

  • +
  • At some states, for example elevator, I used two events/edges to illustrate a decision, so only one of the two events can be chosen, not both at once. Is that fine?

  • +
  • At the states storey and elevator, I put multiple events on one edge, separated by commas - is this possible too? Also, I'm not sure about these self-edges :(
  • +
+ +

+",320473,,,,,1/12/2019 1:51,Problem in creating a correct finite-state-machine for a given short text,,1,1,,,,CC BY-SA 4.0,,,,, +385365,1,385371,,1/12/2019 1:00,,2,1234,"

I'm building a personal project as an introduction to DDD. I'm doing a little bit of analysis and can't get my head around it.

+ +

My ERD looks as follows:

+ +

+ +

To go over it: as an admin you can set up a restaurant, you can add multiple tables to the restaurant, and you can set up workdays (the days the restaurant is open), each of which can contain X services (breakfast, lunch, dinner, e.g.). This is the ""admin"" part. There is another part, the ""customer"" part, where a customer can create an account and make a reservation at a given date for a given service; the app will determine the best table for the customer and a reservation is added (which is basically a service, table and customer linked).

+ +

Now my problem is that I can split it up in different bounded context, like this:

+ +

Restaurant BC: Will contain the restaurant information and the table setup.

+ +

Reservation BC: Will contain the workdays, service & reservation part of the restaurant.

+ +

Customer BC: Will contain the customer part of the app.

+ +

What's not clear to me is how the reservation will access the table & customer, which are defined in another BC, as DDD says that a domain doesn't have references to the exterior, so I can't just add the Table, Service & Customer classes into my Reservation class. What should I do then - only keep the IDs of those classes in Reservation?

+ +

Maybe my split isn't correct, as a Reservation should perhaps be bound to the customer bounded context? If you have a better idea for the BC setup, feel free to shoot.

+ +

Thanks !

+ +

EDIT 1:

+ +

Maybe I shouldn't treat the reservation as an Aggregate Root, but as a Value Object containing a Service ID, Customer ID & Table ID? The problem is that a reservation could and will be updated, so that contradicts the VO being immutable.

+",264017,,35900,,5/2/2019 21:40,5/2/2019 21:40,DDD - Referencing Aggregate Root of other bounded context?,,2,0,,,,CC BY-SA 4.0,,,,, +385367,1,385380,,1/12/2019 2:27,,1,340,"

It is pretty obvious that any interpreted language CAN also be compiled. For a long time I thought that it was not necessarily the other way around. Then I discovered Ch which is an interpreter that can interpret the whole C language. It also supports parts of C++, Java, Matlab, Fortran and C-shell.

+ +

This made me draw the conclusion that whether a language is a compiled or interpreted language is not a property of the language itself, but rather a convention. Is this correct?

+",283695,,9113,,1/12/2019 9:58,1/12/2019 11:54,Is it really correct to talk about compiled and interpreted languages?,,3,5,,43477.53194,,CC BY-SA 4.0,,,,, +385368,1,,,1/12/2019 2:40,,2,368,"

I'm storing data which logs whether or not a user has logged their attendance for a given day. Some days are unimportant (holiday, weekend), so those are also stored.

+ +

The two requirements are that:

+ +
    +
  1. Calculating the number of logs and missed logs can be done quickly, and
  2. +
  3. The structure is scalable for whenever new users are added.
  4. +
+ +

Right now it seems like I'm faced with two options for how the data should be stored, each with their own advantages/disadvantages:

+ +

Option 1: Two Tables

+ +

Table calendar - Tracks days to be not counted

+ +
date       | log |
+-----------+-----|
+2019-01-10 | DNL | // ""Do Not Log"" - holiday etc.
+2019-01-12 | NB  | // ""Non-business day""
+2019-01-13 | NB  |
+
+ +

Table logs - Tracks successful attendance logs

+ +
user_id | date       |
+--------+------------|
+      1 | 2019-01-08 |
+      1 | 2019-01-09 |
+      2 | 2019-01-09 |
+
+// It's implied that user #2 missed their log on Jan. 8
+
+ +

Advantages:

+ +
    +
  • Data is efficiently stored.
  • +
  • Tallying user logs and non-counting days is trivial.
  • +
+ +

Challenges:

+ +
    +
  • Knowing how many days were missed is not obvious.
  • +
+ +

Option 2: One Table (What I've tried)

+ +

Table calendar - Tracks logs and days to be counted and not counted

+ +
date       | user_id | log  |
+2018-01-09 |       1 |    1 | // Counted, logged
+2019-01-10 |       1 |  DNL | // Not counted
+2019-01-11 |       1 |   NB | // Not counted
+2019-01-09 |       2 | NULL | // Counted, missed log
+
+ +

Advantages:

+ +
    +
  • A tally of days missed vs. days logged is trivial (used to calculate an overall percentage). The number of days in the calendar is explicit.
  • +
+ +

Challenges:

+ +
    +
  • Adding new entries to the calendar is tricky, in the event that: + +
      +
    • The calendar grows in length.
    • +
    • New users are added.
    • +
  • +
  • Table has gaps (wherever log == NULL), making traversal slower than Option 1.
  • +
+ +

My question is this: Is there a way to either use Option 1 and somehow encode the number of missed logs, or is there some other way of storing the data that meets both requirements? I've tried using Option 2, although scaling has become quite a challenge. Thanks in advance for any advice.

+",319488,,,,,2/12/2019 8:01,How to structure database for daily events?,,2,2,,,,CC BY-SA 4.0,,,,, +385374,1,385376,,1/12/2019 8:04,,-1,2080,"

Which one of the following ways is recommended, and why?

+ +
Date d = Date.from(curr);
+Date d = new Date(curr);
+
+ +

Can you also provide some examples behind the reasoning?

+",287375,,,,,1/12/2019 17:29,Static Factory Methods vs Constructors,,4,4,,,,CC BY-SA 4.0,,,,, +385375,1,385381,,1/12/2019 8:20,,19,7554,"

I'm studying domain-driven design (DDD) and came across the terms Event Driven and Event Sourcing. I know they are about publishing events from a producer to a consumer and storing a log, so my question is:

+ +

What is the difference between Event Driven and Event sourcing?

+",325643,,,,,1/12/2019 9:50,What is the difference between Event Driven and Event sourcing?,,1,0,7,,,CC BY-SA 4.0,,,,, +385379,1,,,1/12/2019 9:28,,2,464,"

We had an interesting debate today over what our REST API should do by default when validating a request body in which unexpected fields are present. I think we ended the conversation in a good place, but it brings up somewhat of a meta point of whether or how to apply Postel's law, which I'd like to explore further.

+ +

Our application is essentially a large and complicated order management, matching, and dispatching system (similar to a ridesharing service). Most of our endpoints are related to actions at various stages in the order input, matching, and management flows, in addition to entity management (CRUD). There's no plans to open this up this API publicly, so the only clients are our mobile and web apps (and potentially scripts we write).

+ +

I've written systems in the past where we took the liberal approach to accepting fields; the situation there was that we were the receiver of an event stream that we had to handle. In that case the API must be liberal in accepting arbitrary fields, as otherwise the upstream producer can't effectively add features or change their system.

+ +

However, it seems to me that ours is a case where we'd want to be strict in what we accept. All of the users here will be internal developers, so it's hard to imagine many common situations where being permissive buys you reliability over a strict API. We've agreed that there are large improvements in developing against a strict API and many bugs that we can avoid by strictly validating during development; however, there are arguments that a permissive API is more reliable in production by not bombing out on otherwise acceptable requests. Since we control all the legitimate clients, it seems like we shouldn't end up in this situation, except in certain rare cases of client releases that use some fields before the API is ready to consume them. So if we start seeing unexpected fields, we'd at least want to know, and would likely want to error most of the time. Is there a common case I'm missing where we'd be slowed down by a restrictive API?

+ +

There are some differences in background on the team, so some of this might be due to different frames of mind. I'd like to hear and understand others' opinions here, as I don't fully buy the arguments for permissive being more reliable and for wanting to serve clients who send unexpected fields.

+",325639,,325639,,1/12/2019 17:30,1/12/2019 17:30,Permissive vs. Strict API Message validation,,3,0,,,,CC BY-SA 4.0,,,,, +385390,1,385399,,1/12/2019 7:11,,0,219,"

I'm studying domain-driven design (DDD) and reading many articles, but I have never found a simple explanation. Please help. Let's say we have this design:

+ +

+ +

My bunch of question is:

+ +
    +
  1. What is the domain (where is the domain depicted) on this pic?
  2. +
  3. Where is the aggregate on the pic? (Is it many objects or only one?)
  4. +
  5. Is it right that Order is the aggregate root?
  6. +
  7. Why is LineItem not an entity? (I'm asking because here: https://softwareengineering.stackexchange.com/questions/351853/should-internal-entities-in-an-aggregate-respond-to-domain-events-directly people directly say that OrderLine is an entity.)
  8. +
  9. Can a DDD schema exist without CQRS? Should CQRS be applied to every part (domain? aggregate?) of the architecture?
  10. +
  11. Is Event Driven == Event sourcing?
  12. +
  13. Is compensation required in DDD with Event Driven/Event Sourcing instead of distributed transactions, or can it somehow exist without compensation (and without distributed transactions)?
  14. +
+ +

I really googled and read articles, and these questions are raised in my head...

+",325643,J.J. Beam,,,,1/24/2019 1:04,"Where are: Domain, Aggregate, Entity on the picture?",,1,0,,,,CC BY-SA 4.0,,,,, +385404,1,,,1/12/2019 21:57,,0,146,"

context and background:

+ +

I prefer OOP for the most part and find it, largely, more intuitive -- this is my bias.

+ +

When I read that functional language x is better than OOP language y I think to myself: sure, sounds great, if you're computing integrals or performing other mathematical operations. Outside of that domain, I don't appreciate the appeal.

+ +

That being said, I do note that HTTP is stateless and so, as with OOP and RDBMS or SQL, there's a mismatch. Why? Because objects have state; that's the intention of the paradigm. At least, this is my perspective.

+ +

I find myself dipping my toes into XML processing, which can use such oddities as ""out parameters"" for performance. It just seems a mismatch, and not just for that reason. Alternately, potentially huge classes can be autogenerated for JAXB, and there are some, perhaps, better approaches as well -- but none of them ""feel like"" a good fit for XSLT.

+ +

Here's the question in two parts:

+ +
    +
  • Are OOP languages, such as Java, inherently at odds with XSL?
  • +
  • If so, would a functional language better fit XSL transforms?
  • +
+ +

Yes, I'm aware that XSLT is a functional language itself, or at least influenced by functional languages. As a practical matter, the XSLT processor is used by a more general-purpose language.

+ +

+ +

Perhaps I'm misunderstanding or not appreciating the XSLT expressions and how they relate to objects. I say that because the above graphic looks very, well, object-oriented, for lack of a better term...

+",102335,,102335,,1/12/2019 22:18,1/13/2019 5:12,Is there a mismatch between XSL and OOP?,,4,3,,43479.61528,,CC BY-SA 4.0,,,,, +385405,1,,,1/12/2019 22:24,,3,112,"

In the case that a type is specified, it could be on the left (before) or the right (after) of the variable name.

+ +

For example, C, C# and Java have the type specified before the variable:

+ +
int num = 5;
+
+ +

TypeScript, Rust and Haxe (can) have the type specified after the variable:

+ +
let num: number = 5;
+let num: u32 = 5;
+var num: Int = 5;
+
+ +

Is there a term that denotes the way a language's syntax works with types? E.g. ""The _____ language uses (left/before or right/after) type declaration"".

+",307701,,307701,,1/13/2019 4:24,1/13/2019 4:24,What is the term for the side on which a variable type is written in a given language?,,2,1,,,,CC BY-SA 4.0,,,,, +385409,1,,,1/13/2019 0:19,,2,397,"

I'm using MVVM and I have an app with a UITabBarController, the Main tab of which is a list of publications and the other is a Search screen where they can search for publications. Searching returns a list of publications.

+ +

Imagine that Publication A is the first item in the list on the Main tab. Now let's say the user goes to the Search tab, performs a search, and Publication A is in the search results. If the user taps Publication A, it will be marked as read and shaded certain color in the search results list. If the user goes back to the Main tab, Publication A should also be shaded to indicate it has been read.

+ +

There are plenty of MVVM tutorials on the subject but they all seem to assume one Model, one View, and one ViewModel. I believe what I need is to understand how to wire up a single Model to two Views, and two ViewModels. I have found several threads here in Software Engineering (like this one) but no real guidance or answers.

+ +

Can somebody please point me in the right direction? Thanks.

+",277686,,,,,7/18/2019 8:26,How do I architect an iOS app when multiple view models must know when the model has changed?,,3,0,1,,,CC BY-SA 4.0,,,,, +385413,1,,,1/13/2019 2:02,,-1,117,"

I am using ADO.NET to read a bunch of data from the database into in-memory objects.

+ +

This is my domain model:

+ +
// Question.cs
+public class Question
+{
+    public int ID { get; set; }
+    public string Title { get; set; }
+    public string Description { get; set; }
+    public IEnumerable<Tag> Tags { get; set; }
+}
+
+// Tag.cs
+public class Tag 
+{
+    public int ID { get; set; }
+    public string Name { get; set; }
+}
+
+ +

On retrieving the list of Questions, I would like to fetch the related tags for each question. I am able to do this as follows:

+ +
// QuestionRepository.cs
+
+public IList<Question> FindAll()
+{
+    var questions = new List<Question>();
+
+    using (SqlConnection conn = DB.GetSqlConnection())
+    {
+        using (SqlCommand cmd = conn.CreateCommand())
+        {
+            cmd.CommandText = ""select * from questions"";
+
+            SqlDataReader reader = cmd.ExecuteReader();
+
+            while (reader.Read())
+            {
+                Question question = new Question();
+                // Populate the question object using reader
+                question.Load(reader);
+
+                questions.Add(question);
+            }
+            reader.Close();
+        }
+     }
+    return questions;
+}
+
+
+// Question.cs
+public void Load(SqlDataReader reader)
+{
+    ID = int.Parse(reader[""ID""].ToString());
+    Title = reader[""Title""].ToString();
+    Description = reader[""Description""].ToString();
+
+    // Use Tag Repository to find all the tags for a particular question
+    Tags = tagRepository.GetAllTagsForQuestionById(ID); 
+}
+
+
+// TagRepository.cs
+public List<Tag> GetAllTagsForQuestionById(int id)
+{
+    List<Tag> tags = new List<Tag> ();
+    // Build sql query to retrive the tags
+    // Build the in-memory list of tags 
+    return tags;
+}
+
+ +

My question is, are there any best practices/patterns for fetching related objects from the database?

+ +

Most of the SO questions I came across for loading related data provide the solution for entity framework. There is no answer for this duplicate question.

+ +

Even though my code works, I would like to know other ways to do the same thing. The closest explanation I came across that targets my particular problem was Martin Fowler's Lazy Load pattern, which, I believe, would result in the following implementation:

+ +
public class Question
+{
+    private TagRepository tagRepo = new TagRepository();
+    private IList<Tag> tags;
+
+    public int ID { get; set; }
+    public string Title { get; set; }
+    public string Description { get; set; }
+    public IEnumerable<Tag> Tags {
+        get
+        {
+            if (tags == null)
+            {
+                tags = tagRepo.GetAllTagsForQuestionById(ID);
+            }
+            return tags;
+        }
+    }  
+}
+
+ +

Are there any other alternatives?

+",217385,,,,,1/14/2019 13:12,Patterns for loading related objects in memory (without an ORM),,1,1,,,,CC BY-SA 4.0,,,,, +385414,1,385466,,1/13/2019 2:24,,3,1015,"

Long story short, I wanted to use C as a scripting language and in a portable manner so I created a register-based, JITless VM to accomplish this.

+ +

I've formalized the VM's ISA, script file format and calling conventions, and even created an assembler that assembles the human-readable bytecode into numeric bytecode. Now I need a compiler that can target my VM.

+ +

I have considered whether to create a GCC or an LLVM backend to accomplish this, but I've never undertaken such a project before. Should I create a GCC backend, an LLVM backend, or are there additional options I could choose?

+ +

link to vm/runtime env

+",228783,,228783,,2/4/2019 6:34,2/8/2019 7:34,GCC or Clang to output bytecode for a VM?,,3,3,1,,,CC BY-SA 4.0,,,,, +385431,1,385434,,1/13/2019 16:18,,1,53,"

I have the following dictionary:

+ +
{ ""Drama"": 8, ""Adventure"": 8, ""Action"": 4, ""Comedy"": 3, ""Thriller"": 1 }
+
+ +

This dictionary is a representation of a user's preferences in movies (in the past he selected 8 drama movies, 8 adventure ones, etc.).

+ +

Then I have another movie with genres:

+ +
[""Action"",""Adventure"",""Fantasy""]
+
+ +

Is there a way to have a measure saying that the movie is quite recommendable to the user (given the fact that adventure and action movies are the ones he likes the most)?

+ +
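
For instance, would a simple weighted overlap along these lines (a Python sketch; the function name is made up) count as such a measure?

+ +
def preference_score(preferences, movie_genres):
+    # share of the user's total genre weight that this movie's genres cover
+    total = sum(preferences.values())
+    if total == 0:
+        return 0.0
+    return sum(preferences.get(genre, 0) for genre in movie_genres) / total
+
+ +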

I've tried creating not a dictionary but an array with all the genres combined, and then comparing the two arrays (this doesn't take into account the fact that the user liked multiple drama movies, for example).

+ +

I've tried to create two strings from those arrays and apply the Jaccard method, but this is just like an intersection and it doesn't take into account how many times a genre appeared in the first string. I've also tried the Levenshtein method, but I'm not really looking to see how many changes I need to make in order to have 2 equal strings.

+ +

I'm not sure how to approach this.

+",118817,,,,,1/13/2019 17:04,Compare two arrays by the number of occurances,,1,1,,,,CC BY-SA 4.0,,,,, +385435,1,,,1/13/2019 17:25,,-1,78,"

I am currently implementing a TCP Proxy Server. The huge problem I have right now is that, based on the clients' TCP data, I am trying to determine whether the client is making an HTTP, FTP or SMTP request.

+ +

My idea for a solution is that I have to read the first line of the TCP data. For example, an HTTP request would contain an HTTP method such as GET, POST, DELETE, etc.

+ +

My question is: is this solution of mine correct? And either way, how should I go about it (as well as determining FTP and SMTP)?

+",324765,,,,,1/13/2019 20:45,How to determine the application used on top of TCP?,,1,10,,,,CC BY-SA 4.0,,,,, +385437,1,,,1/13/2019 18:25,,3,93,"

For this question, suppose the following:

+ +
    +
  • I'm in full control of this project
  • +
  • I'm writing a media player
  • +
+ +

Obviously, a media player will allow a user to adjust the volume, so I might have a class that looks like this:

+ +
public class Audio {
+
+     private int level;
+     // Constructor and other attributes left out
+
+    public int getCurrentVolume()
+    public void turnUp(int amount)
+    public void turnDown(int amount)
+}
+
+ +

My media player will also allow you to take screenshots of current video, so I might have a class that looks like this:

+ +
public class Video {
+
+     private String screenshotsDirectory;
+     // Constructor and other attributes left out
+
+     public String getCurrentScreenshotDirectory()
+     public void updateScreenshotDirectory(String newScreenshotPath)
+}
+
+ +

Problem:

+ +

At some point, you'll write and read the data from a file. The problem is, you have to create a stream for each type.

+ +
FileWriter volumeWriter = new FileWriter(""settings.txt"");
+volumeWriter.write(String.valueOf(audioObj.getCurrentVolume()));
+
+FileWriter videoWriter = new FileWriter(""settings.txt"")
+videoWriter.write(videoObj.getCurrentScreenshotDirectory());
+
+ +

It would be nice to pass the FileWriter object an abstract type, which means I could make an abstract class or interface called Settings, but as far as I can see the settings don't share common behavior. Sure, the settings can change, but in different ways, for example, the screenshot path is a String while volume is an int.

+ +

Question:

+ +

What is the clean OOP way to structure classes that are of the same kind (configuration/user settings), but can change and behave in different ways?

+ +

I can turn up the volume and change the screenshot path, but I cannot ""turn up"" the screenshot path or update the volume with a String (representing the path).

+",,user325735,,user325735,1/14/2019 23:09,1/14/2019 23:09,How are settings structured when they can be configured in diffferent ways?,,2,0,,,,CC BY-SA 4.0,,,,, +385445,1,,,1/13/2019 20:57,,3,352,"

You've probably been in the same situation: there's a class of objects which have a specific date before* which they should not be considered ""active."" But how should you represent ""active since forever?"" PostgreSQL or Python, for example, have very definite but different earliest possible datetime values.

+ +

Let's say I have an alarm limit which applies to historical readings since year 2000:

+ +
{
+    'limit': 5,
+    'active_since': datetime.datetime(2000, 1, 1),
+}
+
+ +

This is very clear and easy to work with. But how would you create a limit which is active at all times before year 2000? Neither PostgreSQL nor Python has any way to create a datetime value arbitrarily far in the past. For example, in Python:

+ +
>>> import datetime
+>>> datetime.datetime(-1, 1, 1)
+Traceback (most recent call last):
+  File ""<stdin>"", line 1, in <module>
+ValueError: year -1 is out of range
+
+ +

Since I can't have a datetime object representing infinitely in the past, the start of the universe, or even 1 BCE, what should I do instead?

+ +

Some possibilities:

+ +
    +
  1. Overload the language's equivalent of NULL.
  2. +
  3. Just use the earliest possible datetime which can be represented in all parts of the system.
  4. +
  5. Add a boolean field.
  6. +
  7. Add an ""active before"" field.
  8. +
+ +

However, all of these have their problems:

+ +

NULL:

+ +
{
+    'limit': 5,
+    'active_since': None,
+}
+
+ +
    +
  • Risks semantic overloading. It's typically used to represent ""unknown,"" ""undefined"" or ""not applicable,"" and is already problematic for being unclear. Taking it to mean specifically ""earliest possible datetime"" means that you might need some other way to represent other special meanings. And you'd have to be careful that this special datetime value is documented, since it's likely that you will have other nullable ones which do not mean ""earliest possible datetime.""
  • +
  • Complicates checks such as having to order with NULLs first or having to if value is None: ….
  • +
+ +

Earliest representable datetime:

+ +
{
+    'limit': 5,
+    'active_since': datetime.datetime(1, 1, 1),
+}
+
+ +
    +
  • Should make for the simplest naive implementation. for example SELECT statements can just check against the current datetime or other columns without worrying about any special values.
  • +
  • Makes testing awkward, especially when combining languages where the earliest representable datetime in the languages are not the same. For example, I insert into PostgreSQL using Django, and my Django REST Framework serializer translates None into '-4713-01-01' when inserting. datetime.datetime(-4713, 1, 1) is out of range, so presumably Django translates that back into datetime.datetime(1, 1, 1). Now my test specifies None, my database says 4713 BCE, and the response says 1-01-01. Very confusing. Instead, min/maxing the datetime across all the languages and frameworks in use (which might also include JavaScript and more) should give a more sensible pipeline of what humans would consider identical values, but it's still arbitrary.
  • +
  • It could change between versions of the same software. This is not purely academic - PostgreSQL 11's max timestamp is less than PostgreSQL 8's.
  • +
  • It might not be far enough in the past. For example, astronomical or historical data may go back to long before 1 CE, never mind 1970 CE. However, at this point a datetime field is unusable anyway.
  • +
  • There might not be an easy way to find out what the actual earliest value is. Python 3 for example has no constant for this value. It does have datetime.MINYEAR, but I don't know if it can represent datetime.datetime(datetime.MINYEAR, 1, 1) for every time zone.
  • +
+ +

Separate boolean field:

+ +
{
+    'limit': 5,
+    'active_since': None,
+    'active_since_forever': True,
+}
+
+ +
    +
  • At least as complicated to implement checks as with NULL.
  • +
  • Have to avoid or define the meaning of having a non-NULL datetime but the boolean field set to indicate that it's the earliest possible datetime.
  • +
  • Reading the schema or code it may not be obvious whether or how this field is related to the datetime field if they are part of a larger object.
  • +
  • Normalizing the above feels really awkward - there'll just be a table with {ID: 1, since_forever: False}, and{ID: 2, since_forever: True}.
  • +
+ +

""Active before"" field:

+ +
{
+    'limit': 5,
+    'active_before': datetime.datetime(2000, 1, 1),
+    'active_since': None,
+}
+
+ +
  • Has to be kept equal to the minimum ""active after"" value.
  • Have to avoid or define what it means to have more than one of these being non-NULL.
  • Have to avoid or define what it means if both ""active before"" and ""active after"" are specified.
+ +

Are there any other options which do not have big drawbacks?

+ +

* This discussion should apply equally to upper limits, but it's easier to write it without constantly duplicating this detail.

+",13162,,13162,,1/13/2019 21:55,1/14/2019 12:42,"Which value to use to represent ""since forever""?",,4,4,,,,CC BY-SA 4.0,,,,, +385449,1,385452,,1/13/2019 22:34,,-2,51,"

I have a Dell Edge box that is stored in another location. The box has 4G SIM-card internet access with a dynamic IP address that changes frequently. I usually SSH into it with PuTTY using the 4G IP address, which is IPv6. Now, if the IP address changes, I will not be able to know the updated IP address of the box. Any ideas how I can keep track of the IP address with a microservice using Node.js? +Thanks in advance. +Seham.
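For concreteness, a minimal sketch of the kind of script being asked about (not an existing implementation; both URLs and the box name are placeholder assumptions - the lookup could be any public what-is-my-IP service, and the report endpoint would be your own microservice):

// report-ip.js - runs on the edge box (Node 18+, which ships a global fetch)
const LOOKUP_URL = 'https://api64.ipify.org?format=json'; // placeholder public IP lookup
const REPORT_URL = 'https://example.com/edge-boxes/ip';   // placeholder endpoint you control

async function reportIp() {
  try {
    const res = await fetch(LOOKUP_URL);
    const { ip } = await res.json();                      // current public (IPv6) address
    await fetch(REPORT_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ box: 'dell-edge-1', ip, at: new Date().toISOString() }),
    });
  } catch (err) {
    console.error('failed to report IP', err);
  }
}

reportIp();
setInterval(reportIp, 5 * 60 * 1000); // re-check every 5 minutes

A dynamic-DNS client on the box would achieve much the same thing without custom code.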

+",325755,,,,,1/13/2019 22:55,Node.js app to keep track of dynamic IP Address of dell edge box?,,1,1,,,,CC BY-SA 4.0,,,,, +385451,1,,,1/13/2019 22:47,,0,40,"

I have implemented an application using a microservices architecture. I take a user story to implement. This involves the front end/UI and two microservices, so I have three components involved. Should one team implement the user story, or should the user story be split among three teams? +It is possible that the three components use different technologies. The front end is in JavaScript, one microservice is in Java, and the other microservice is implemented in C#/.NET. Somehow I feel that, ideally, one team should implement the complete user story. This is more efficient and reduces communication overhead. But since the three components are in different technologies, it doesn't seem feasible, and I guess the user story has to be split into three parts. What should the approach be? Any help is appreciated.

+",261365,,,,,1/13/2019 23:48,Should complete user story be implemented by one team in context of microservices archicture,,1,0,,,,CC BY-SA 4.0,,,,, +385465,1,,,1/14/2019 7:34,,2,1080,"

I'm currently planning an architecture for a system that should consist of multiple microservices. We definitely want to make use of a message broker (probably RabbitMQ).

+ +

A simplified diagram of my current approach looks like this:

+ +

+ +

As you can see in the diagram I added a Response Queue, which is used by the services to send their responses back to the REST Server.

+ +

For some reason I have a bad feeling about exposing the message broker to the frontend. That's why I'm planning to combine the access to the different exchanges in a single REST API instead of having the clients to communicate directly with the message broker.

+ +

I would then have a mapping like this:

+ +
REST                     Exchange            Routing Key
+--------------------------------------------------------
+DELETE /users/{id}       UsersExchange       cmd.delete
+POST   /users            UsersExchange       cmd.create 
+
+ +
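As a hedged illustration of that mapping (Python with the pika client assumed; only the exchange and routing key come from the table above, everything else is invented):

import json
import pika  # RabbitMQ client

connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbitmq'))
channel = connection.channel()

def handle_delete_user(user_id):
    # Called by the REST server when it receives DELETE /users/{id};
    # it forwards the command to the broker instead of calling the service directly.
    channel.basic_publish(
        exchange='UsersExchange',
        routing_key='cmd.delete',
        body=json.dumps({'id': user_id}),
    )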

So my first question is: Is that a common way of working with a message broker or is it better to address it directly from my client applications?

+ +

And my second question: How would I handle a GET request? If I follow the pattern above, I should use the following mapping:

+ +
REST                     Exchange            Routing Key
+--------------------------------------------------------
+GET /users              UsersExchange       cmd.get
+
+ +

I just put a JSON with all users into the payload of my response message. So far, so good. But in a talk about RabbitMQ they said that you should keep your messages as lightweight as possible. I wonder what ""lightweight"" means in this context. Are we talking about a kilobyte? A megabyte? ...? Is there even any alternative? As soon as my REST server addresses the database or one of the services (e.g. via HTTP) directly, I lose any advantage of the message broker.

+",249918,,,,,10/7/2020 22:07,Message Broker in Client-Server-Applications,,3,0,1,,,CC BY-SA 4.0,,,,, +385467,1,390555,,1/14/2019 7:53,,0,958,"

I have or am going to have a database with countries and their emissions. Countries in the API have a 3-letter ISO Code. Should I use this for identifying the countries in database, or just a plain numeric id?

+ +
    CREATE TABLE country
+        (
+            countryid integer PRIMARY KEY NOT NULL
+        );
+
+    CREATE TABLE country_emissions
+        (
+            id integer PRIMARY KEY NOT NULL,
+            countryid integer NOT NULL references country(countryid),
+            year smallint NOT NULL, 
+            emissions integer 
+        );
+
+ +

So, could countryid in this case be replaced with the countries' respective ISO codes?

+ +
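For comparison, a hedged sketch of what the natural-key variant might look like (assuming ISO 3166-1 alpha-3 codes; the CHECK constraint is just one way of addressing the lower-case worry mentioned below):

CREATE TABLE country
    (
        iso_code char(3) PRIMARY KEY NOT NULL CHECK (iso_code = upper(iso_code))
    );

CREATE TABLE country_emissions
    (
        id integer PRIMARY KEY NOT NULL,
        iso_code char(3) NOT NULL references country(iso_code),
        year smallint NOT NULL,
        emissions integer
    );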

The reason I'm not sure is that numbers seem more logical, as you can't really typo them, whereas the ISO code can be written with lower-case letters, etc. +But then again, with the ISO code you wouldn't have to fetch the numeric ID of a country before adding emissions; you could just add them straight away using the country's ISO code.

+",325776,,325776,,1/14/2019 23:49,4/18/2019 4:00,"Should I use ISO country code for primary key, or integer ID?",,3,4,,,,CC BY-SA 4.0,,,,, +385473,1,385476,,1/14/2019 11:44,,7,2858,"

In Clean Architecture by Robert C. Martin the dependency rule points strictly from the outermost layer/ring to the innermost.

+ +

As an example a Dependency Injection Framework should lie on the outermost layer (frameworks) to allow for it to be replaced.

+ +

However, a DI framework which relies on attributes would clearly break this, as any class which would require those attributes would have a dependency on the framework. Therefore such a library could not be used following the dependency rule strictly.

+ +

I am running into the same problem with utility libraries, e.g. a Math Library or some Rx library providing IObservables/Subjects.

+ +

The math library could be wrapped by an adapter to keep it replacable - which makes sense, but for example a Entity providing the framework for both entities (inner most layer) as well as systems (business rules) and maybe even UI (presenters) simply does not go well with this design.

+ +

However, even for the math library the cost of adding the interfaces for Dependency Inversion+Adapter sounds pretty insane.

+ +
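To make that cost concrete, a minimal sketch (C# assumed, since the post mentions attributes and IObservables; all names are invented) of what wrapping even a single math call behind Dependency Inversion plus an adapter entails:

// Owned by an inner layer: the abstraction the business rules depend on.
public interface IVectorMath
{
    double Dot(Vector3 a, Vector3 b);
}

// Lives in the outer frameworks layer: forwards to the concrete library.
public sealed class ThirdPartyVectorMathAdapter : IVectorMath
{
    public double Dot(Vector3 a, Vector3 b) => ThirdPartyMath.Dot(a, b);
}

Multiply that by every utility type the inner layers touch and the boilerplate concern above becomes clear.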

Am I missing something, or is this more or less a rule which is commonly broken when trying to implement Clean Architecture?

+",282953,,9113,,1/14/2019 12:12,1/14/2019 13:05,Clean Architecture: Dependency Rule and Libraries/Frameworks,,2,0,3,,,CC BY-SA 4.0,,,,, +385479,1,385489,,1/14/2019 13:31,,2,164,"

If I have a SQL Server fact table with four dimensions (OrderDate, Customer, Product, Region), my understanding is that it's best to create a non-clustered index per foreign key (dim key column in the fact table).

+ +

Assuming that is correct, is it optimal to combine OrderDate with each dimension in each of the non-clustered indexes as follows - because date is almost always included in a fact table query?

+ +
  • NC index 1: (OrderDate, Customer)
  • NC index 2: (OrderDate, Product)
  • NC index 3: (OrderDate, Region)
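As a hedged illustration only (the fact table and key column names below are assumed, not taken from a real schema), those three composite indexes would be created roughly like this:

CREATE NONCLUSTERED INDEX IX_FactOrders_OrderDate_Customer
    ON dbo.FactOrders (OrderDateKey, CustomerKey);

CREATE NONCLUSTERED INDEX IX_FactOrders_OrderDate_Product
    ON dbo.FactOrders (OrderDateKey, ProductKey);

CREATE NONCLUSTERED INDEX IX_FactOrders_OrderDate_Region
    ON dbo.FactOrders (OrderDateKey, RegionKey);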
+",325797,,,,,1/14/2019 15:41,Indexes on a SQL Server fact table,,1,0,,,,CC BY-SA 4.0,,,,, +385480,1,,,1/14/2019 14:11,,-1,164,"

A few years ago I wrote a Python script for reading CSV files, handling the headers, filtering data, renaming stuff via regex... basically to do various ETL tasks. +This was done using an exhaustive configuration file that defined all the sources and all the targets. It worked quite well - but the configuration was anything but intuitive.

+ +

I was now wondering what would be the best approach to get the same done using some kind of ""visual configuration"". A first step would be to visualize the mapping between source and target columns and use a drag-and-drop approach, with a visual representation of the mappings as arrows between two listboxes.

+ +

I was thinking about using tkinter for that, but as far as I understand tkinter is not supported by web browsers and that would be another goal: +Having an easy to deploy webapp.

+ +

So I thought about just HTML5 - but I have no idea if that would be the best approach and if it is possible to recycle the already existing python material and how to connect both parts (python logic + displaying in HTML5).

+ +

What would be your suggestion how to deal with that kind of situation - do you know of any projects where something similar was done?

+ +

Thanks a lot!

+",325800,,,,,6/10/2020 17:01,WebApp for ETL with visual mapping - read csv and map it to data model,,1,4,,,,CC BY-SA 4.0,,,,, +385481,1,,,1/14/2019 14:19,,2,310,"

When implementing interfaces, as a general rule, Impl is evil. Ok, but is it evil in the following case?

+ +

I've a service that has (and probably will have) only one implementation. In such a case, normally I don't need any interface. But the ""contract"" must be shared with other modules because the service is exposed using HttpInvokerServiceExporter. And the client needs the interface to dynamically build a proxy to do the remote invocation:

+ +
<bean id=""stockService"" class=""org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter"">
+    <property name=""serviceUrl"" value=""rmi://HOST:1199/StockService""/>
+    <property name=""serviceInterface"" value=""example.StockService""/> <!-- Interface needed here -->
+</bean>
+
+ +

This is the only reason why the api is shared. To expose the service. The client should not have the implementation classes as dependency.

+ +

In such scenario I'm thinking in a project layout like:

+ +
- stock-service        (project)
+|- stock-service-api   (shared interfaces module)
+\- stock-service-impl  (implementation module)
+
+ +

so that, other-service can have stock-service-api as dependency.

+ +

Should, ""-impl"" naming also be avoided in this case? +If yes, what would be a good alternative?

+",161357,,161357,,1/14/2019 15:15,1/14/2019 15:15,"Should ""Impl"" naming always be avoided?",,1,14,,,,CC BY-SA 4.0,,,,, +385482,1,385690,,1/14/2019 14:33,,13,1885,"

I'm reading the book Dependency Injection Principles, Practices, and Patterns and I read about the concept of leaky abstraction which is well described in the book.

+ +

These days I'm refactoring a C# code base using dependency injection so that async calls are used instead of blocking ones. Doing so I'm considering some interfaces which represent abstractions in my code base and which needs to be redesigned so that async calls can be used.

+ +

As an example, consider the following interface representing a repository for application users:

+ +
public interface IUserRepository 
+{
+  Task<IEnumerable<User>> GetAllAsync();
+}
+
+ +

According to the book definition a leaky abstraction is an abstraction designed with a specific implementation in mind, so that some implementation details ""leak"" through the abstraction itself.

+ +

My question is the following: can we consider an interface designed with async in mind, such as IUserRepository, as an example of a Leaky Abstraction ?

+ +

Of course not all possible implementations have something to do with asynchrony: only the out-of-process implementations (such as a SQL implementation) do, but an in-memory repository does not require asynchrony (actually, implementing an in-memory version of the interface is probably more difficult if the interface exposes async methods; for instance, you probably have to return something like Task.CompletedTask or Task.FromResult(users) in the method implementations).

+ +
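To illustrate that last point, a minimal sketch (class and field names invented) of what such an in-memory implementation ends up looking like:

public sealed class InMemoryUserRepository : IUserRepository
{
    private readonly List<User> users = new List<User>();

    // Nothing here is actually asynchronous; the Task wrapper exists only
    // because the abstraction was shaped around out-of-process implementations.
    public Task<IEnumerable<User>> GetAllAsync() =>
        Task.FromResult<IEnumerable<User>>(users);
}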

What do you think about that ?

+",310179,,310179,,1/17/2019 8:24,4/19/2019 16:27,Is an interface exposing async functions a leaky abstraction?,,8,7,4,,,CC BY-SA 4.0,,,,, +385488,1,,,1/14/2019 15:37,,2,138,"

Some time ago in a code review (C++) I suggested changing an input argument from a Path type to Optional<Path>, where the function has specific logic for an unset path. That seems intuitively better to me, but the author argued that an empty path (the Path::empty() method) should semantically mean the same as an unset Optional.

+ +

My sole rational argument is that empty path may also be interpreted as the current working directory. But then I also thought that . may be used as CWD as well.

+ +
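For reference, the two shapes being compared - a minimal sketch using the standard C++17 types (the function name is invented):

#include <filesystem>
#include <optional>

// Variant A: the absence of a path is a distinct state of the parameter itself.
void process(std::optional<std::filesystem::path> maybe_path);

// Variant B: the absence of a path is encoded as an empty path value.
void process(const std::filesystem::path& path);  // path.empty() means unset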

What is a good default semantic for a cross-platform API path emptiness? E.g. . may be not so common outside *nix OSes, or there are already some common semantics in any popular programming language.

+",119169,,28374,,1/15/2019 14:41,1/18/2019 15:06,filesystem::path vs. optional as argument to function,,3,8,,,,CC BY-SA 4.0,,,,, +385491,1,385505,,1/14/2019 15:58,,4,217,"

I have a question about versioning when I depend on a third-party (TP) project's versioning. Our current process is to release a new version every time TP creates a release with security fixes. The problem is when a TP release contains new functionality that we want to use in some but not all of the projects we maintain.

+ +

Scenario

+ +

Current project structure

+ +
Third party project (version 5.5)
+    My base project (version 1.2 (inherits v5.x))
+        My project 1 
+        My project 2 
+        My project 3 
+
+ +

New project structure

+ +
Third party project (version 5.6)
+    My base project (version 1.3 (inherits v5.x))
+        My project 1        
+        My project 3 
+Third party project (version 6.0)
+    My base project (version ?? (inherits v6.x))        
+        My project 2 
+
+ +

Key points

+ +
  • They are Maven projects and inherit from the parent project pom.xml.
  • I cannot use the import scope because both projects (the TP and Base projects) modify the lifecycle of the application.
  • In order to move from TP version 5.6 to 6.0, a costly migration process is necessary.
  • Both new versions (5.6 & 6.0) have security fixes, so it is advisable to update all the projects.
  • Version 6.0 has new features that we need in My Project 2.
  • TP might release version 7.0, and then I would have 3 active versions of My base.
+ +

Question

+ +

I would like to have some advice about how to version My Base Project to keep more than one active version at the same time.

+ +

My current options are:

+ +
  • Match the version number of TP by updating the version number for MyBase to 5.6 and 6.0 respectively. This approach might generate gaps if I decide not to upgrade a specific version.
  • Use semantic versioning and increase the minor and micro version numbers to keep both versions (thank you Berin Loritsch). As a variant, I could increase the major version number if the changes affect end users (thank you Ryathal).
  • Use the semantic versioning plus sign to mark one of the versions. How to Name Different Branches with Identical Functionality in Semantic Versioning
  • Get rid of My base project and add its features to each of My projects. This option will have maintainability issues, so I don't think it would be the best way.
+",111866,,111866,,1/16/2019 9:47,1/17/2019 20:15,Versioning depending on Third parties,,3,8,1,,,CC BY-SA 4.0,,,,, +385493,1,,,1/14/2019 16:58,,0,146,"

Are there any published standards of archive files to hold webassembly (multiple files) with a metadata file that defines the entry point? I'm thinking of something similar to JAR or APK.

+",115049,,115049,,1/14/2019 17:04,2/14/2019 15:02,equivalent of a JAR file for webassembly?,,1,0,,,,CC BY-SA 4.0,,,,, +385497,1,385499,,1/14/2019 18:12,,52,11103,"

Lets say you are coding a function that takes input from an external API MyAPI.

+ +

That external API MyAPI has a contract that states it will return a string or a number.

+ +

Is it recommended to guard against things like null, undefined, boolean, etc. even though it's not part of the API of MyAPI? In particular, since you have no control over that API you cannot make the guarantee through something like static type analysis so it's better to be safe than sorry?

+ +
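For instance, a minimal sketch (TypeScript assumed; the client object and function names are invented) of the kind of guard being asked about, placed at the boundary with MyAPI:

// MyAPI's contract says it returns a string or a number, but verify anyway.
async function fetchValue(): Promise<string | number> {
  const raw: unknown = await myApiClient.getValue(); // hypothetical external call

  if (typeof raw === 'string' || typeof raw === 'number') {
    return raw;
  }
  throw new Error('MyAPI returned an unexpected value: ' + String(raw));
}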

I'm thinking in relation to the Robustness Principle.

+",285284,,92517,,1/16/2019 3:49,1/17/2019 21:33,Should you guard against unexpected values from external APIs?,,9,13,8,,,CC BY-SA 4.0,,,,, +385507,1,385514,,1/14/2019 19:25,,0,75,"

I've my front end written using HTML/CSS/Javascript. Front end is communicating with the backend(Oracle database) using Java Webservices (Spring boot app).

+ +

Situation #1: +User clicks on the Download button, using Ajax call, I call my Java webservice, java webservice returns the data in JSON format, I display this data in the UI in tabular format.

+ +

Situation #2: +Since the amount of data to be returned is going to be huge, when a user clicks on the download button,I am expecting that the query could take hours or maybe a day to finish. Once the query is finished,I would like to upload the data returned from the webservice in a file at some location on the (RHEL)server so that when user comes back, he/she can click the Download button and download the file with huge data in whatever format it was saved (CSV, Excel etc)

+ +

Questions:

+ +

1) I was reading in another Stack overflow post here that saving file using FileSystemAPI on Firefox isn't supported. I am wondering if this is even a good solution based on the situation #2 described above?

+ +

2)If a webservice keeps on running for a day, isn't it going to time out in few hours? I've a feeling that the approach I've mentioned in Situation #2 above isn't an efficient one.

+ +

3) Is there something else that can be done in a more efficient manner to accomplish my task? Probably, some scheduling at database point of view?

+ +

4) In Situation #2, since the query is going to take long time, I am also planning to change the display of Download button to something ""Come Back again"". In this scenario, how would I determine that a particular query is going to take long time?

+",198234,,198234,,1/14/2019 22:21,1/14/2019 22:21,Uploading file to the server for download,,1,0,,,,CC BY-SA 4.0,,,,, +385511,1,385569,,1/14/2019 19:51,,-1,212,"

I have an object that acts as nothing more complicated than a data store for a collection of items. I do this because it lets me bind the data to a single object, which I can store in the Unity (game engine) editor and assign to a bunch of other objects so they all operate on the same list of data.

+ +

I'm not really sure what to call the class though:

+ +
class NameNeeded<T> : // Unity stuff
+{
+    public List<T> items { get; }
+}
+
+ +

I can't inherit this object from anything but a special Unity object, so I can't mask it and pretend that it's a collection itself. There's some other bookkeeping methods, but it's basically a collection container.

+ +

If I treat it like a regular collection, I end up with this...

+ +
class Lobby
+{
+    NameNeeded<User> users;
+
+    void DoSomething()
+    {
+        users.items.Whatever();
+    }
+}
+
+ +

... which I find unattractive from the double plural implying users is a collection itself.

+",314083,,,,,1/15/2019 21:10,What to name a class that acts as a container for a collection but isn't a collection itself?,,2,2,,43634.51667,,CC BY-SA 4.0,,,,, +385517,1,,,1/14/2019 23:39,,0,111,"

I have a user table that has 4 rows with id = 1, 2, 3, 4. I delete the row with id = 2 and insert another user and the user will get id = 5. What will happen to the id = 2? Will it be used again or do I rearrange the ids? Rearranging ids doesn't make sense if I have a large amount for every delete operation.

+",325838,,338865,,6/18/2019 10:08,6/18/2019 10:08,What to do with the unused identity after deleting a row?,,1,6,,,,CC BY-SA 4.0,,,,, +385519,1,385529,,1/15/2019 0:47,,1,273,"

I recently had a discussion with a friend about code maintainability with regards to modifying an iterator inside of the body of a loop (C# syntax):

+ +
List<int> test = new List<int>();
+for (int i = 0; i < 30;)
+    test.AddRange(new int[] { i++, i++, i++ });
+
+ +

The code works fine, and as expected adds 0 - 29 into the list. However, he pointed out that the execution does look odd (and I agree), and told me about using Enumerable.Range(start, end). I have since switched to using the method and it works as needed.

+ +

During our discussion he stated that things like this cause issues with maintainability because it forces other developers to pause and examine what the intent is along with what is actually happening prior to making changes (we should all be doing this anyways in my opinion). He stated that things of this nature, aren't truly needed and should be refactored to a simpler version for that very reason. I do agree with this statement but he gave an example of obfuscation that we both agreed would not compile in C# and is undefined behavior in C++. I posted a question on StackOverflow but it was poorly received thus I believe it may be a better fit for here.

+ +

The code he wrote was:

+ +
int i = 2;
+int c = i++ + ++((i++)++);
+
+ +

I tried to solve it using order of operations but I receive different results if I write it out by hand, and if I try to write it proceduraly in code:

+ +
int i = 2;
+int p1 = i++;
+int p2 = p1++;
+int p3 = ++p2;
+int c = i++ + p3;
+
+ +

The code above compiles in C# and presume it would have no issues in most languages. This produces a result of c = 6 and i = 4, however trying to solve by hand gives me c = 10 and i = 6.

+ +
+ +

Can these issues be solved so the line i++ + ++((i++)++) will compile and produce a sensible result? If so, what is the result of execution? Maybe there is already a major language (with C-style increment operators) where this works, which I am not aware of?

+ +

I've tried the following languages:

+ +
    +
  • C#
  • +
  • C++
  • +
  • Java
  • +
  • JavaScript
  • +
+",319749,,9113,,1/16/2019 7:25,1/16/2019 7:25,Solving issues in using post and pre increment operators as part of expressions,,2,7,,,,CC BY-SA 4.0,,,,, +385520,1,385527,,1/15/2019 0:47,,1,158,"

My team and I have this Desktop client developed in JavaFX. It basically has a ""Remember my password"" checkbox. If the user ticks this checkbox, reboots, and reopens the app, the user should be automatically logged in. I'm using a cheap trick just saving a temp file to the temp directory. It's encrypted and it contains the information required for this feature. It just checks if certain info is present - if yes, login and go to dashboard.

+ +

Now what they want is if the user logs out (different from just closing the app) but ticked the checkbox beforehand, then the next time he opens the app the username and password will already be there. While I was thinking of a solution, my teammate suddenly pushes a fix to this problem. If the user ticks the checkbox, it saves the info (rewrites the file) to the temp file. If the user un-ticks the checkbox, the file is deleted.

+ +

Is this a safe practice?

+ +

Edit: +Thanks Abigail for the comment. But disregarding that it's being saved to temp dir, let's say I'm saving it somewhere else, is the actual function of saving and deleting a file upon the toggle of a checkbox an ""okay"" practice?

+",311909,,105684,,1/15/2019 9:14,1/15/2019 10:52,Does saving a file to temp and deleting whenever a checkbox is toggled considered a wise choice?,,3,1,,,,CC BY-SA 4.0,,,,, +385521,1,385522,,1/15/2019 0:54,,7,1000,"

I'm actually studying about web development. +I was just asking why a lot of web apps and chats(Whatsapp, Telegram, Discord, and a lot, a lot more!) are using cache. +I mean, after learning cache systems like Redis and Memcached, I was asking how could I use them practically. +For example, Redis can cache entire pages for limited time(that will expire). Or it can store login sessions.

+ +

But how so many chat (web, desktop and mobile) apps do use caching? +I know that they store messages in cache... but wouldn't just a common database be enough? +Also, how are messages ""sorted"" in cache? +It seems too difficult to me to understand cache.

+",325843,,,,,1/15/2019 1:53,How is caching used within messaging apps?,,1,0,,,,CC BY-SA 4.0,,,,, +385524,1,,,1/15/2019 2:26,,-1,2317,"

From general observation I've come across the standard to be 36. I was looking to incorporate a uuid in my urls but didn't want it that long. Is there a minimum where I can still keep the uuid unique?

+",121757,,121757,,1/15/2019 2:32,1/15/2019 11:25,What is the minimum length for a UUID?,,2,2,,,,CC BY-SA 4.0,,,,, +385528,1,385532,,1/15/2019 2:52,,2,2379,"

I'm designing a relatively simple web application using .net core. I've mostly done desktop development in my career so far, so I'm a bit new to the nuances between desktop and web development.

+ +

In this application there is a business logic layer and a presentation logic layer which takes the business logic and transforms the properties into marked up output server-side and then returns the data to the client through a web API.

+ +

Due to the controls I'm using and the structure of the application, it makes sense to have this ""presentation logic"" layer on the server side, since some aspects of the presentation are actual business logic requirements (most of the presentation is handled in views, partial views and view components).

+ +

Currently the way that I am handling this is by injecting the business logic classes into presentation logic classes, then having the api controller return an interface to the presentation logic class.

+ +

A simplified example of the approach:

+ +
public class BusinessLogic
+{
+  public string PropertyA { get; set; }
+
+  public string PropertyB { get; set; }
+
+  public void DoSomeLogic() { // some code here }
+}
+
+public class PresentationLogic : IPresented
+{
+  private BusinessLogic businessLogic;
+
+  public PresentationLogic(BusinessLogic businessLogic)
+  {
+    this.businessLogic = businessLogic;
+  }
+
+  public string PresentationPropertyA
+  {
+    get
+    {
+      return ""<span class='businesslogicspecificclass'>"" + this.businessLogic.PropertyA + ""</span>"";
+    }
+  }
+}
+
+public interface IPresented
+{
+  string PresentationPropertyA { get; }
+}
+
+[Route(""api/[controller]"")]
+public class MyController
+{
+  [HttpGet]
+  public IPresented Get()
+  {
+    var businessLogic = new BusinessLogic();
+    // manipulate businessLogic
+    return new PresentationLogic(businessLogic);
+  }
+}
+
+ +

The API exposes interfaces which are implemented by the PresentationLogic classes. As I understand it, these interfaces are then serialised into JSON and returned to the page to use within its controls.

+ +

A different approach to solving the same problem would be to create a DTO, and have the PresentationLogic class take the business logic and spit out the DTO, adding the extra markup to the properties during the creation of the DTO. For example:

+ +
public class PresentationLogic
+{
+  public Dto GetDtoFromBusinessLogic(BusinessLogic businessLogic)
+  {
+    return new Dto { PresentationPropertyA = ""<span class='businesslogicspecificclass'>"" + businessLogic.PropertyA + ""</span>"" };
+  }
+}
+
+public class Dto
+{
+  public string PresentationPropertyA { get; set; }
+}
+
+[Route(""api/[controller]"")]
+public class MyController
+{
+  [HttpGet]
+  public Dto Get()
+  {
+    var businessLogic = new BusinessLogic();
+    // manipulate businessLogic
+    var presentationLogic = new PresentationLogic();
+    return presentationLogic.GetDtoFromBusinessLogic(businessLogic);
+  }
+}
+
+ +

What I want to know is what the advantages or disadvantages to each approach is.

+ +

As I understand it, both controller methods will return effectively the same JSON to the calling page.

+ +

The first approach feels more natural to me for the following reasons:

+ +
  • I do not like DTOs unless they're absolutely necessary since I believe that they tend to encourage anaemic domain models
  • The presentation logic class becomes an adapter class that sits between the business logic and the view. It has a clearly defined responsibility.
  • I have not created any classes purely for the purpose of acting as a return type - it feels like less wasted code.
  • I could potentially add new properties to the PresentationLogic class and implement a new interface if version 2 called for changes.
  • Interfaces feel like the natural tool for abstraction for C# code.
+ +

I was discussing this with other developers and they were suggesting that returning a DTO was the more standard way of approaching this problem. I have come up with a couple of reasons why this might be a better approach:

+ +
  • DTOs are clearly marked as such and nobody is tempted to add breaking logic to them.
  • If it's the standard way then it will help new developers to get up to speed.
  • Adding a new version forces the use of a new DTO class, which means that there's less potential to introduce breaking changes (though this could also be done with the other approach if needed)
+ +

Note that this question is generally about layered web architecture than specifically about my needs on this project. If no presentation logic needed to be added server-side, this question could easily be about business logic and persistence logic classes.

+ +

So which is better - using DTOs in a web API or using interfaces on complex objects in a web API?

+",97122,,,,,1/15/2019 6:33,C# .Net Core API design - interfaces vs DTOs,,1,2,,,,CC BY-SA 4.0,,,,, +385534,1,,,1/15/2019 8:07,,-3,97,"

Although I understand the privacy concerns, the measure has been imposed by politicians and I want to know what the proper way to put it into place would have been.

+ +

Right now, each site has to implement GDPR compliance which is kind of redundant with no promise that your wishes are actually respected.

+ +

Wouldn't it make more sense to have a standard and control over settings on the browser side? Or some other way which is closer to the user?

+ +

Of course, this question has to do with the actual design or implementation and not the politics.

+",3170,,3170,,1/15/2019 9:58,1/15/2019 10:39,Does the implementation of GDPR compliance per site make sense in terms of good practises?,,3,9,,,,CC BY-SA 4.0,,,,, +385542,1,,,1/15/2019 11:20,,1,70,"
  • In Notepad++ * stands for an arbitrary string and . for one arbitrary character (optionally including newlines).
  • In the Linux console * stands for an arbitrary string and ? for one arbitrary character.
  • In SQL % stands for an arbitrary string and _ for one arbitrary character. But only in string matching, in selection of columns * matches everything.
+ +

I've heard about even more different systems like this and I didn't even start on more complex selections. I can't even use the same regular expressions that work in Notepad++ in its Linux clone, NotepadQQ.

+ +

My question: How did regex develop to be like this? Were there multiple separate, parallel developments that just turned out to be similar, except using different characters? Or did it start with one standard and then in different situations different commonly used characters got avoided (like . in the console being used for file extension separators)?

+",287884,,,,,1/15/2019 11:20,How and why did so many different systems of Regex develop differently?,,0,5,0,43480.54097,,CC BY-SA 4.0,,,,, +385550,1,,,1/15/2019 14:15,,1,673,"

While designing a complex system, some of my colleagues came back with the idea of having two separate APIs: one that will perform the writes into the databases and another one that will only do the reads.

+ +

I fail to deeply understand why separating those two concerns can be useful for the whole system. It can be useful when the system is for example suffering from heavy traffic when trying to read or viceversa but more than that...

+ +

Separating those two concerns also implies more complexity in the system, since the logic must be separated; but on the other hand it can be useful when distributing and scaling the system, e.g. having a read-only database...

+ +

Any light into this darkness is more than welcome.

+ +

Thanks,

+ +

Javi.

+",311800,,,,,1/15/2019 17:47,Why read and write API are good or why not?,,1,3,1,,,CC BY-SA 4.0,,,,, +385554,1,385558,,1/15/2019 15:00,,0,556,"

Relatively new to Domain Driven Design i decided to try it out in an saas app currently under development/refactoring. I've refactored the identity part out to it's own context (class library in .net) and came up with the following Domain models:

+ +
  • Organization. Is an Entity and Aggregate Root (all users in the system should be part of an organization), and Users can only be added through the Organization AR.
  • User. Is an Entity referenced (List of User) by Organization (the user that is part of an Organization).
+ +

So one of the most important constraints is that a User should always be part of an Organization. At this point there aren't many business rules, so the Domain Entities are really simple. +I have multiple designs but cannot figure out which design is best in terms of encapsulation/performance (I know, don't optimize prematurely).

+ +

Design 1:

+ +
  • Organization implements Users as child entities (object references). In this way Users can only be added through the Organization AR, giving full encapsulation (a sketch of this shape follows below). This design also makes it explicit that Users in this context should always be deleted when an Organization is deleted. Exactly what needs to be done. But now, when I want to add a User, I have to load the whole Organization AR (keeping invariants), including all Users within the Organization, so I can check that a User doesn't already exist within the Organization. I don't see any problems with this approach, but for large Organizations that can have hundreds of Users this can be resource intensive. Another approach within this design to ensure uniqueness is to update a read model (UniqueUser) when a User is added to an Organization. When the API tries to re-register a User, then in the AfterLoginEnsureOrganization UseCase the User is first validated against the UniqueUser read model repository.
+ +
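A minimal sketch of roughly what Design 1 implies (C# assumed; member names are invented and the invariant check is reduced to the uniqueness rule discussed above):

public class Organization
{
    private readonly List<User> users = new List<User>();

    public IReadOnlyCollection<User> Users => users;

    // Adding a User goes through the aggregate root, which is what lets it
    // enforce the no-duplicate-User invariant...
    public void AddUser(User user)
    {
        if (users.Exists(u => u.Id == user.Id))
            throw new InvalidOperationException();

        users.Add(user);
    }
    // ...but enforcing it this way means the whole Users collection has to be
    // loaded before any AddUser call.
}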

Design 2:

+ +
  • Organization and User are both ARs (User referencing Organization by OrganizationId) and the UseCase implements the constraints, leaving us with no encapsulation whatsoever. Currently I'm the only developer for this app, so no biggie, but how do I prevent other developers from adding orphan Users by misusing the IUserRepository instead of the specific UseCase?
+ +

I'm not sure where to make the trade-off between encapsulation, complexity and performance.

+",325893,,,,,1/16/2019 15:06,Domain Driven Design Modelling Organization -> User,,2,0,,,,CC BY-SA 4.0,,,,, +385555,1,385556,,1/15/2019 15:27,,3,164,"

I have a collection with millions of items of Generic type T. Assume this list never changes. +I want to perform many types of searches with subsets of fields of type T. Some with only 1 field and others with 2 or more fields. +Each search almost always returns more than 1 result.

+ +

This answer is very close to what I want but it only works for searches with only one field:

+ +

https://codereview.stackexchange.com/questions/40811/multiple-indexes-over-an-in-memory-collection-for-faster-search
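For context, a hedged sketch (C# assumed; the Item shape and its fields are invented stand-ins for T, and usings are omitted) of how that per-field-index idea might extend to multi-field searches by intersecting the candidate sets:

// One lookup per searchable field, built once since the collection never changes.
ILookup<int, Item> byCity = items.ToLookup(i => i.CityId);
ILookup<int, Item> byAge  = items.ToLookup(i => i.Age);

// Single-field search: one dictionary-style lookup.
IEnumerable<Item> inCity = byCity[75];

// Two-field search: look up each field and intersect the candidate sets
// instead of scanning all the items.
IEnumerable<Item> inCityAged30 = byCity[75].Intersect(byAge[30]);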

+",119906,,,,,1/15/2019 15:43,Performing complex searchs using C# collections,,1,0,,,,CC BY-SA 4.0,,,,, +385557,1,,,1/15/2019 15:38,,1,267,"

I'm working on a problem in which I have some real-time weather information for different cities throughout the world. I'm exposing a subscribe function to interested people/clients, with inputs: location (the name of the city) and time (HH:MM:SS).

+ +

Now, let's suppose N people subscribe for location => (Paris) and time => (20:10:17). Then, how can I publish/notify the N subscribers at exactly 20:10:17 for the Paris location? I mean, if I iterate over the list of subscribers, then this list processing itself takes some delta of time, and some subscribers would receive the information with a delay.

+ +

What data structure should I use (e.g. list, map or vector, etc.), and how should I process it - I mean, what threading model - so that I could publish information to N subscribers at the same time without any delay?

+ +

edit: +Resolution is upto seconds, i mean, mili seconds delay is ok. +And, +server hardware capable of running Millions of threads.

+",325903,,,,,1/23/2019 8:42,Notify Millions of subscribers at the same time(or with min. delay) in C++?,,2,14,0,,,CC BY-SA 4.0,,,,, +385563,1,385587,,1/15/2019 17:49,,1,72,"

UML Diagrams says:

+ +
+

A use case is a kind of behaviored classifier that specifies a + [complete] unit of [useful] functionality performed by [one or more] + subjects to which the use case applies in collaboration with one or + more actors, and which [for complete use cases] yields an observable + result that is of some value to those actors [or other stakeholders] + of each subject.

+
+ +

But it is not clear for me in specific small situation. For example in a mobile application I have a list and user can do

+ +
  • ''item click'' for
    1. selecting item
    2. deselecting item
  • ''long item click'' for changing selection mode (''multiple'' or ''single'')
+ +

Now, are the use cases ''selecting item'', ''deselecting item'' and ''changing selection mode'', or are they ''item click'' and ''item long click''?

+ +

I think ''item click'' and ''item long click'' are not UC because

+ +
  1. Although ''item click'' and ''item long click'' are behaviors of the list, I did not create the application to provide a way for the user to do ''click'' and ''long click'' (those are not useful independently).
  2. ''item click'' does not provide a complete unit of useful functionality (it can lead to different useful behaviors with observable output: ''selecting/deselecting item'').
+ +

Am I right? I'm in doubt.

+",174635,,,,,1/17/2019 14:54,Detecting UML usecase correctly,,2,0,,,,CC BY-SA 4.0,,,,, +385570,1,,,1/15/2019 22:12,,0,254,"

While almost every company has its own standard for APIs, this question came up after one of my colleagues stated that we must use different object models for creating objects and for getting objects from the API.

+ +

To be clear, he states that when you want to get the object from the API you will receive an object like this:

+ +
{
+ id: '1111',
+ name: 'SomeName',
+ createdBy: '',
+ createdDate: '',
+ title: 'oh my title',
+ details: 'Some Dets',
+ persons: 'whom may concern',
+ products: [{p1}, {p2}]
+}
+
+ +

But when you want to create an object of the same entity you must send the following object to API:

+ +
{
+ name: 'blahbla',
+ title: 'valhvalh'
+}
+
+ +

So you shouldn't send the properties that are not needed as empty. Because I'm using C#, it's easier for me to work with one class per known entity, so I think that when I want to create the object instance I can send the following object:

+ +
{
+  id: '',
+  name: 'SomeName',
+  createdBy: '',
+  createdDate: '',
+  title: 'oh my title',
+  details: '',
+  persons: '',
+  products: ''
+}
+
+ +

Given that we don't have high traffic and bandwidth is not an issue, I wondered why we should do extra work to gain nothing but more complex code. He added that this is a standard used by social media too, which I myself couldn't agree with and couldn't find any evidence for.

+ +

The question is: is this a standard in APIs? And if yes, what is the gain here and what is it for?

+",281866,,281866,,1/16/2019 8:01,1/16/2019 11:00,Is It good practice on API design to use different object model for getting object and creating,,1,7,,,,CC BY-SA 4.0,,,,, +385572,1,385576,,1/15/2019 23:20,,1,602,"

I am trying to figure out the optimal branching strategy for my organization. And I have few doubts.

+ +

We have 3 main environments: Live (Production), UAT (pre-release) and Staging. Similarly, in Git we have 2 major branches: Master (the state of Prod) and Dev.

+ +

Developers start with checking out Git Dev and creating a feature branch from it. Suppose they developed feature F1, F2 and F3. And once they are happy with it they merged those feature back in DEV. A CI job deploys DEV code to our Staging environment.

+ +

Staging environment is where QA Tests. Lets assume QA Approves F1 and F3 and rejects F2. Now I want to move only F1 and F3 to the Release Branch. Release branch is auto deployed to UAT where end user can have sanity tests or user training for new release.

+ +

Now my question is should I create Release Branch from Master and Merge F1 and F3 into it? Or Stick with creating Release Branch from Dev? I am more inclined about creating release branch from Master as I dont have to revert F2 from Dev.

+ +

Can anyone help me out with this dilemma?

+",308928,,,,,1/16/2019 2:17,Effective GIT Branching Strategy,,1,0,1,,,CC BY-SA 4.0,,,,, +385575,1,385578,,1/16/2019 1:17,,-1,231,"

I'm looking for a design pattern that might work for this class I am working with. This main class is an entity using Domain Driven Design.

+ +
   public class TimeCard : ITimeCardHeader
+    {
+        public int TimeCardHeaderID { get; private set; }
+        public int ContractorID { get; internal set; }
+        public System.DateTime Date { get; internal set; }
+        public StateEnum State { get; protected internal set; }
+        public System.DateTime CreatedDate { get; internal set; }
+
+        public void Update(ITimeCardHeader header)
+        {
+         //validation logic.
+         //assign values  e.g. this.ContractorID = header.ContractorID
+         // create and send domain event ""Time card change"" 
+         }
+    }
+
+ +

I would like to ask Questions(functions) of the timeCard object to determine if it could be edited. However, the business rule of if a timecard can be edited really is based on who you are.

+ +

So my Idea is to ask questions via an interface

+ +
interface ITimeCardCreater{
+ bool CanEditTimeCards{get;}
+ int? ContractorId {get;}
+ }
+
+ +

Then I could have higher level classes Create a user then add methods to my TimeCard like.

+ +
    timecard.CanEdit(ITimeCardCreator);
+
+ +

My Question is where should that type of logic live. I had it directly on the Timecard object but I now think that it should be a class unto itself as it's getting large.

+ +

Edit as suggested by king-side-slide Rename interface to ITimeCardCreater.

+ +

Question: is there a design pattern that removes logic from a domain model that is exclusively used by that model to answer questions against it.

+",293429,,293429,,1/22/2019 17:56,1/22/2019 18:12,Design Pattern for object that asks questions of another object,,5,6,,,,CC BY-SA 4.0,,,,, +385581,1,,,1/16/2019 3:53,,0,205,"

I have one request interface IRequest and two classes ClientAddress and ClientOrder are implementing it. The same design is followed for Response with inteface IResponse and classes ClientAddressResponse and ClientOrderResponse.

+ +

I have tried to do it 2 ways but I am not sure which is better way

+ +
    +
  1. With Explicit type casting by assigning class object to interface.
  2. +
  3. By using classes object and variable
  4. +
+ +

See here:

+ +
public interface IRequest
+{
+    string ClientId { get; set; }
+}
+
+public class ClientAddress : IRequest
+{
+    public string ClientId { get; set; }
+    public string AppId { get; set; }
+}
+
+public class ClientOrder : IRequest
+{
+    public string ClientId { get; set; }
+    public string FromDate { get; set; }
+    public string ToDate { get; set; }
+    //other properties
+}
+
+public interface IResponse
+{
+    string ClientName { get; set; }
+}
+
+public class ClientAddressResponse : IResponse
+{
+    public string ClientName { get; set; }
+    public string HouseNumber { get; set; }
+    public string Area { get; set; }
+    public string City { get; set; }
+}
+
+public class ClientOrderResponse : IResponse
+{
+    public string ClientName { get; set; }
+    public List<Orders> OrderList { get; set; }
+    //other properties
+}
+
+public class DAL
+{
+    public IResponse GetAddress(IRequest req)
+    {
+        //return client address 
+    }
+    public IResponse GetOrderList(IRequest req)
+    {
+        //return order list 
+    }
+}
+
+ +

Variant 1

+ +
class Program
+{
+    static void Main(string[] args)
+    {
+        IRequest req1 = new ClientAddress();
+        req1.ClientId = ""1"";
+        ((ClientAddress)req1).AppId = ""abcd"";
+        DAL d = new DAL();
+        IResponse res1 = d.GetAddress(req1);
+
+        IRequest req2 = new ClientOrder();
+        req2.ClientId = ""1"";
+        ((ClientOrder)req2).FromDate = ""2019/01/01"";
+        ((ClientOrder)req2).ToDate = ""2019/01/15"";
+        IResponse res2 = d.GetOrderList(req2);
+    }
+}
+
+ +

Variant 2

+ +
class Program
+{
+    static void Main(string[] args)
+    {
+        ClientAddress req1 = new ClientAddress();
+        req1.ClientId = ""1"";
+        req1.AppId = ""abcd"";
+        DAL d = new DAL();
+        IResponse res1 = d.GetAddress(req1);
+
+        ClientOrder req2 = new ClientOrder();
+        req2.ClientId = ""1"";
+        req2.FromDate = ""2019/01/01"";
+        req2.ToDate = ""2019/01/15"";
+        IResponse res2 = d.GetOrderList(req2);
+    }
+}
+
+ +

Please guide me which design is good/bad and why we should use/avoid it or is there any other way to achieve the same.

+",223353,,154896,,1/17/2019 1:35,1/17/2019 1:35,Which option is good in terms of software design?,,2,3,,,,CC BY-SA 4.0,,,,, +385592,1,,,1/16/2019 8:26,,1,284,"

I want to have two methods, one which returns all items and another, which returns specific Item by name (assume Item has name field):

+ +
Item getItem(String itemName)
+List<Item> getItems()
+
+ +

How should I name these methods? I'm not sure about getItem and getItems because these methods just look too similar (it's hard to notice the extra s at the end).

+ +

I can imaging many options:

+ +

For first method I imagine these names:

+ +
getItem
+getItemByName
+getSingleItem
+
+ +

For second method I imaging these names:

+ +
getItems
+getAllItems
+getItemList
+getItemsList
+
+ +

I have 3*4 = 12 combinations in total already, which one is better to use?

+",168989,,,,,1/16/2019 13:13,how to name two methods returning collection/single item,,3,0,,,,CC BY-SA 4.0,,,,, +385598,1,385635,,1/16/2019 10:30,,0,45,"

An application I'm considering writing needs to show a text that a user has edited at a particular time.

+ +

So consider if we have a text the user is editing from Time A to Time X, and I want to show what it looked like at Time L.

+ +

This text can be for a long-running process, and it can theoretically be big, so the brute-force solution of saving the full text at every keystroke is not going to work - or it might work, but not very well.

+ +

But I'm betting there is probably some algorithm for how to handle this problem that I'm unfamiliar with such that basically I take Time L, take all the diffs from start of the text as an empty string at Time A and give me the actual resulting text.

+ +

I guess another way to think about it would be that each Time in our time series has a keystroke associated with it, and then Time L just replays everything from Time A to L to get what the value of the Text should be at Time L.

+ +
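To make that replay idea concrete, here is a minimal sketch (Python assumed; the event shape and snapshot interval are invented) of an append-only keystroke log plus periodic snapshots, so reconstructing the text at Time L does not have to replay everything from Time A:

import bisect

SNAPSHOT_EVERY = 1000          # take a snapshot every N events (arbitrary choice)

events = []                    # (timestamp, apply_keystroke) entries, in time order
snapshots = []                 # (timestamp, index_into_events, full_text)

def text_at(time_l):
    # Start from the latest snapshot at or before time_l (or from empty text)...
    text, start = '', 0
    for ts, idx, snap in reversed(snapshots):
        if ts <= time_l:
            text, start = snap, idx
            break
    # ...then replay only the keystrokes between that snapshot and time_l.
    end = bisect.bisect_right([ts for ts, _ in events], time_l)
    for _, apply_keystroke in events[start:end]:
        text = apply_keystroke(text)
    return text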

I hope this has been clear because often people seem to think I am unclear about these kinds of things.

+",162573,,,,,1/16/2019 15:38,storage of text history and algorithm to return text at particular time,,1,2,,,,CC BY-SA 4.0,,,,, +385600,1,385601,,1/16/2019 10:38,,2,139,"

I recently started working as a programmer and I am currently going through some codes (in C#) written by my colleagues. I have noticed that they tend to include logic within function calls within some conditional statements. For example:

+ +
if (MoveSpace(currentPos, distance * time))
+{ doSomething }
+
+ +

They would also include logic when determining array positions. For example:

+ +
Space currentSpace = array[a*b, x*y]
+
+ +

Is this a good/bad practice? Would it be better if I create a new variable for these logic results and pass in the variable instead? Thank you.

+",325975,,7422,,1/16/2019 10:53,1/16/2019 10:53,Should I use complex expressions as function arguments?,,1,0,,,,CC BY-SA 4.0,,,,, +385610,1,,,1/16/2019 13:23,,3,782,"

I am developing a financial system and want to have a defined policy for rounding monetary values.

+ +

Given the following layers:

+ +
  • View
  • API
  • Entity Model
  • Persistence
+ +

If I am passing a monetary value through these layers, and maybe using it in calculations in the model, at what layer (if any) should I be applying my ""default"" rounding policy?

+ +

My instinct is in the View, as rounding the number is a ""view"" concern, like formatting a date. I also feel it could cause problems if my entities round money values as this could affect the accuracy of calculations. However if my API specifies a value as ""money"" it would seem incorrect to provide consumers with values that are not rounded e.g. £109.563393939939 rather than £109.56.

+ +

I have been unable to find any ""best practice"" around this.

+ +

Aside from this old question: https://stackoverflow.com/a/3840903/470183

+",278811,,,,,1/22/2019 15:50,Which layer should have responsibility for rounding numbers?,,5,4,,,,CC BY-SA 4.0,,,,, +385611,1,,,1/16/2019 13:28,,1,76,"

Scenario

+ +

I have an application where some operations require the authentication of an admin.

+ +

Example Steps:

+ +
    +
  1. I need to validate the admin's username & password
  2. +
  3. insert a row into a MySQL table (if credentials are valid).
  4. +
+ +

One Trip Approach

+ +

If I use a single Connection to the database, with multiple PreparedStatements, I'm able to achieve my goals.

+ +

However, the abstraction of the application is damaged. Using this approach, in my DAO (database access object) method, I have to accept the arguments for the row insertion + the arguments for the admin validation.

+ +

It does save another connection to the database, but on the other side, the DAO that handles insertion shouldn't know or care about user validation.

+ +

An Example signature of the method would be this:

+ +
insertData(String data,String metadata,String username,String password)
+
+ +

Two Trip Approach

+ +

Using this approach, I am using 2 connections to the database. First one validates the users username & password, and if they are valid, opens another connection, and inserts the row into the database.

+ +

This approach uses an extra connection, but the DAO method now accepts only arguments that are relevant for the actual data insertion.

+ +

An Example signature of the method would be this:

+ +
 insertData(String data,String metadata)
+
+ +

Question

+ +

So my question is - which way is better? Should I sacrifice abstraction for database trips, or do several database trips for the sake of abstraction?

+",325990,,,,,1/16/2019 17:01,Database Authentication - Are 2 database trips better than 1?,,1,4,,,,CC BY-SA 4.0,,,,, +385614,1,,,1/16/2019 14:05,,1,78,"

I'm trying to encapsulate permissions logic for a particular view model in a way that the permission logic has access to the view model object, but is also exposed inside of it

+ +

Trivial Implementation:

+ +
public class ClientViewModel
+{
+    public Client Client { get; set; }
+
+    /* permissions section */
+    public bool CanVote => Client.Age > 18;
+    public bool CanDrink => Client.Age > 21;
+}
+
+ +

The implementation is pretty clean and simple. The view will need to make lots of decisions based on the set of properties available within the permissions. But there are going to be a lot of permissions so ideally I'd like to contain that logic somewhere else.

+ +

Right now I can access like this:

+ +
var vm= new ClientViewModel() { Client = myClient };
+vm.CanVote
+
+ +

But I'd like to contain all the logic inside a single class and access like this:

+ +
vm.Permissions.CanVote
+ +

Circular Implementation

+ +

So I can put a property of type ClientViewModelPermissions on the ViewModel itself. It needs to have access to data objects on the ViewModel it's describing so I can pass in the instance of the model into the Permissions constructor and new it up during the construction of the model itself, like this:

+ +
public class ClientViewModel
+{
+    public ClientViewModel()
+    {
+        // create instance of permissions for current object
+        Permissions = new ClientViewModelPermissions(this);
+    }
+
+    public Client Client { get; set; }
+
+    public ClientViewModelPermissions Permissions { get; set; }
+}
+
+public class ClientViewModelPermissions
+{
+    public ClientViewModelPermissions(ClientViewModel clientVm)
+    {
+        // permissions must describe a particular view model
+        ClientModel = clientVm;
+    }
+
+    private ClientViewModel ClientModel { get; set; }
+
+    public bool CanVote => ClientModel.Client.Age > 18;
+    public bool CanDrink => ClientModel.Client.Age > 21;
+}
+
+ +

So each class contains a reference to the other as a property. Should this be avoided for any reason? Is there a cleaner way to evaluate properties for a given class, but keep that logic separate than actual class itself by decorating it somehow?

+ +

Here's an image with the above code showing the flow of dependencies across classes.

+",87466,,,,,1/16/2019 15:23,"Encapsulating Permissions Logic, but looking to Avoid Circular Dependency",<.net>,2,0,,,,CC BY-SA 4.0,,,,, +385617,1,,,1/16/2019 14:26,,0,219,"

Why do I define my Queries, Data, and Mutation as Singleton when using GraphQL in .NET Core?

+ +

From the doc's dependency injection page:

+ +
public void ConfigureServices(IServiceCollection services)
+{
+  services.AddSingleton<IDependencyResolver>(s => new FuncDependencyResolver(s.GetRequiredService));
+
+  services.AddSingleton<IDocumentExecuter, DocumentExecuter>();
+  services.AddSingleton<IDocumentWriter, DocumentWriter>();
+
+  services.AddSingleton<StarWarsData>();
+  services.AddSingleton<StarWarsQuery>();
+  services.AddSingleton<StarWarsMutation>();
+  services.AddSingleton<HumanType>();
+  services.AddSingleton<HumanInputType>();
+  services.AddSingleton<DroidType>();
+  services.AddSingleton<CharacterInterface>();
+  services.AddSingleton<EpisodeEnum>();
+  services.AddSingleton<ISchema, StarWarsSchema>();
+}
+
+ +

At the beginning of the docs:

+ +
+

The library resolves a GraphType only once and caches that type for + the lifetime of the Schema.

+
+ +

While I understand that these are more like DTOs in which they hold values or their class content doesn't change at all... Why do I specify them as singleton instead of just letting them get instantiated?

+",151252,,,,,1/16/2019 14:26,"Why do I define my Queries, Data, and Mutation as Singleton when using GraphQL in .NET Core?",,0,3,1,,,CC BY-SA 4.0,,,,, +385620,1,385651,,1/16/2019 14:51,,1,3015,"

A language agnostic approach since I see this problem in both compiled and interpreted languages with the builder pattern.

+ +

Let's say I have a Model that has 10 required fields and 5 optional fields. Of course, adding all these fields to the constructor would be a mess, but the constructor would allow us to easily check for failure because it can verify the types of the fields and that all the fields are provided.

+ +

Using the Builder pattern, we can make this code much cleaner to read and write, but as far as I see, it'd be hard for the compiler or IDE to know that a required field hasn't been provided.

+ +

For instance, let's say email is required:

+ +
instance = new Model(firstName, lastName, phoneNumber);
+
+ +

The compiler, or other forms of checks, can see email is not provided so it can fail since the constructor defines email as a required parameter.

+ +
instance = new ModelBuilder()
+            ->withName(firstName, lastName)
+            ->withPhoneNumber(phoneNumber)
+            ->build();
+
+ +

Here, the compiler, as far as I know, cannot tell that withEmail() should have been called in order to define the email which can lead to a runtime exception if you have one instance of the Builder that is missing a required field.

+ +

Is this unavoidable? Is there some pattern that can be used to solve this problem?

+ +

Beyond making sure every instance that uses Builder has test coverage, I haven't been able to come up with a solution to the runtime exceptions. This problem seems to present itself more when the model has a new required field added after the builder instances have been implemented across the application.

+",108383,,108383,,1/16/2019 14:59,1/16/2019 17:53,Builder pattern: How to verify required fields before runtime,,3,7,1,,,CC BY-SA 4.0,,,,, +385623,1,385630,,1/16/2019 15:07,,11,6997,"

I would like to be able to debug the building of a binary format. Right now I am basically printing out the input data to the binary parser, then going deep into the code and printing out the mapping of the input to the output, then taking the output mapping (integers) and using that to locate the corresponding integer in the binary. Pretty clunky, and it requires that I modify the source code deeply to get at the mapping between input and output.

+ +

It seems like you could view the binary in different variants (in my case I'd like to view it in 8-bit chunks as decimal numbers, because that's pretty close to the input). Actually, some numbers are 16 bit, some 8, some 32, etc. So maybe there would be a way to view the binary with each of these different numbers highlighted in memory in some way.

+ +

The only way I could see that being possible is if you actually build a visualizer specific to the actual binary format/layout. So it knows where in the sequence the 32 bit numbers should be, and where the 8 bit numbers should be, etc. This is a lot of work and kind of tricky in some situations. So wondering if there's a general way to do it.

+ +
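
For what it's worth, even before building a full visualizer, a small throwaway ""annotated dump"" tool can go a long way; here is a rough C# sketch that assumes a made-up layout (a 32-bit little-endian count followed by one 8-bit value per record) and prints every field together with its byte offset, so the output can be lined up against the input data:

+ +
using System;
+using System.IO;
+
+class AnnotatedDump
+{
+    static void Main(string[] args)
+    {
+        using (var r = new BinaryReader(File.OpenRead(args[0])))
+        {
+            long offset = r.BaseStream.Position;
+            uint count = r.ReadUInt32();                       // 32-bit field
+            Console.WriteLine($""{offset,6}: count    = {count}"");
+
+            for (uint i = 0; i < count; i++)
+            {
+                offset = r.BaseStream.Position;
+                byte value = r.ReadByte();                     // 8-bit field
+                Console.WriteLine($""{offset,6}: value[{i}] = {value}"");
+            }
+        }
+    }
+}
+
+ +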

I am also wondering what the general way of debugging this type of thing currently is, so maybe I can get some ideas on what to try from that.

+",73722,,591,,1/28/2019 18:26,1/28/2019 18:26,How do you debug a binary format?,,5,11,3,,,CC BY-SA 4.0,,,,, +385624,1,385627,,1/16/2019 15:09,,1,458,"

For example, I'm parsing a whole Excel file with many rows, which has a column that contains a date.

+ +

I'm not sure how to handle errors when it comes to parsing a string to a DateTime.

+ +

Here's some sample code in C#:

+ +
for (int i = 1; i < sheet.RowsCount; i++)
+{
+    // cells are strings
+    var cells = sheet.GetRow(i).Cells;
+
+    (...)
+
+    if (!DateTime.TryParse(cells[1], out DateTime date))
+    {
+        Console.WriteLine($""Unable to parse DateTime for row {i}"");
+        continue; // <--- here, basically skip current on fail and go to next.
+    }
+}
+
+ +
+ +
for (int i = 1; i < sheet.RowsCount; i++)
+{
+    // cells are strings
+    var cells = sheet.GetRow(i).Cells;
+
+    (...)
+
+    if (!DateTime.TryParse(cells[1], out DateTime date))
+    {
+        throw new Exception($""Problem with parsing DateTime at {i} row""); // <--- here
+    }
+}
+
+ +
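
A possible middle ground, sketched here by reusing the loop from the question, is to keep going but collect the failing row numbers and report them once at the end, so nothing is skipped silently and no single bad row aborts the whole import:

+ +
var badRows = new List<int>();
+
+for (int i = 1; i < sheet.RowsCount; i++)
+{
+    var cells = sheet.GetRow(i).Cells;
+
+    (...)
+
+    if (!DateTime.TryParse(cells[1], out DateTime date))
+    {
+        badRows.Add(i);   // remember the row and keep going
+        continue;
+    }
+}
+
+if (badRows.Count > 0)
+{
+    // surface the problem once, e.g. ""270 of 300 rows imported, 30 skipped""
+    Console.WriteLine($""Unable to parse DateTime for rows: {string.Join("", "", badRows)}"");
+}
+
+ +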

Is 1st approach ok to do that?

+ +

Isn't it something like ""hidden behaviour""? It may be confusing that, e.g., the file contains 300 rows but my function returned only 270.

+ +

Should my program yell loudly when it fails (with an exception) or just perform its job ""properly"", with ""silent"" Console Logs?

+",321277,,,,,1/17/2019 8:48,Is avoiding throwing an exceptions OK?,,3,2,,,,CC BY-SA 4.0,,,,, +385625,1,,,1/16/2019 15:10,,0,26,"

Most of the React applications I've seen are organized with components that, in my opinion, do too much.

+ +

They may follow this pattern:

+ +
class MyComponent extends Component {
+
+    constructor(props) {
+        super(props);
+        this.prop1 = props.prop1;
+        this.prop2 = props.prop2;
+    }
+
+    render() {
+        //....
+    }
+
+    onEvent(e) {
+        this.setState({
+            prop1: getProp1(),
+            prop2: this.prop2() 
+        });
+    }
+}
+
+ +

Personally in most cases I would prefer to avoid managing the state through React, and to have an object that does this. A possible solution would be something similar to:

+ +
class MyState {
+
+    update(params) {
+        this.prop1 = getProp1();
+    }
+
+}
+
+
+class MyComponent extends Component {
+
+    constructor(props) {
+        super(props);
+        this.state = { obj: props.obj }; // obj has type MyState
+    }
+
+    render() {
+        //....
+    }
+
+    onEvent(e) {
+        this.obj.update(e.something);
+        this.setState(this.obj);
+    }
+}
+
+ +

Since I've never seen this pattern used, and I'm not sure whether using the same object both as a prop and for the state is a good idea, I'm wondering if I am missing something here.

+ +

Is this idea bad for some reason? Is there a better solution to manage the component state (as a whole, or just a subset) in a dedicated object?

+",93229,,,,,1/16/2019 15:10,How can I manage the state of my application using a dedicated object instead of what React offers?,,0,3,,,,CC BY-SA 4.0,,,,, +385629,1,385650,,1/16/2019 15:24,,0,83,"

Currently I'm working on a small platform with a simple client-server model, and it will soon go into a closed beta with a launching customer.

+ +

In essence it's an Electron application which is mostly used for logging in, and fetching files/information from a simple PHP+MySQL backend with some other related functionality regarding those files.

+ +

One of the requirements is that it needs to be able to update. Now I have been able to update the application itself through Amazon S3, which actually works quite nicely. But here is where I run into some problems.

+ +

Some clients in the future might object to the backend that we provide, and want a 'copy' of our backend to use for their own users (hosting accounts, the files that they use, etc., even though we are the ones providing those files and end users have no direct access to them).

+ +

This objection will come from a security and control standpoint. I, however, will (obviously) have no direct access to these servers.

+ +

What is an acceptable manner of releasing updates for these backends (which will be PHP files and changes in the form of queries to the database)? Also, I might not want to release a new 'client' version while some users still use a deprecated on-premise backend.

+ +

Any helpful information is very welcome, since I'm a junior software engineer and a 'one-man army' in a small company. Please ask questions if it helps in giving answers.

+",325997,,,,,1/16/2019 17:36,Updating a distributed backend and keep track of compatible releases for Electron app + Webserver (PHP + MySQL ),,1,0,,,,CC BY-SA 4.0,,,,, +385632,1,385692,,1/16/2019 15:36,,5,2777,"

Let's take the classical example of a function that may return a number or not.

+ +

In typescript this can be represented like this:

+ +
function f(): number | undefined {}
+
+ +

A more elaborate way would be to build a maybe type and use that for the typing:

+ +
type Maybe<T> = T | undefined;
+
+function f(): Maybe<number> {}
+
+ +

To check the type of the returned value we could use an if:

+ +
const x = f();
+if (x === undefined) { } else {}
+
+ +

Is there a conventional way to express the optional type in typescript?

+ +

Is there a better way to check the type of the return object?

+",93229,,,,,10/12/2019 21:00,Is there a convention for the Optional/Maybe monad in typescript?,,1,3,1,,,CC BY-SA 4.0,,,,, +385636,1,385640,,1/16/2019 15:47,,0,145,"

I am working on a project where logs of all kinds (e.g. errors, warnings, etc.) are stored in a database table, in the same transactional database used for the other business purposes. It is a MSSQL database.

+ +

A good practice is to use a NoSQL database to store logs, stream them and query them (e.g. using the ELK stack). However, my team doesn't have that infrastructure available; only MSSQL.

+ +

Are there any benefits to placing the log table in a database of its own? That is, the resulting database would have only one table, specifically to store the logs. The log table by itself is about 10GB.

+",224256,,,,,1/16/2019 16:22,Benefits of placing error logs in a database by itself,,1,1,,,,CC BY-SA 4.0,,,,, +385637,1,385642,,1/16/2019 15:55,,0,339,"

Generally, I read that large methods benefit from some sort of inlining, and that the C# compiler/JIT does this sort of micro-optimization automatically. I understand that if a method is called just one time in an enclosing method, performance might improve when it is inlined even if the method itself is large. However, if a large method is called in many places, performance could decrease if it is inlined, because it reduces the locality of reference. I know that all method calls have a cost, such as pushing onto the evaluation stack, etc. So, my question is: how do we find the reduction in instruction count when inlined, and the performance impact, to determine whether a method can benefit from being inlined manually? The idea is to inline method calls selectively and manually for performance improvements. Any ideas and thoughts on this subject will be appreciated.
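
+ +

One rough way to measure the impact empirically (just a sketch; a dedicated harness such as BenchmarkDotNet would give more trustworthy numbers) is to force both behaviours with MethodImplOptions and time them:

+ +
using System;
+using System.Diagnostics;
+using System.Runtime.CompilerServices;
+
+static class InliningProbe
+{
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    static int AddInlined(int a, int b) => a + b;
+
+    [MethodImpl(MethodImplOptions.NoInlining)]
+    static int AddNotInlined(int a, int b) => a + b;
+
+    static void Main()
+    {
+        const int n = 100_000_000;
+        long sum = 0;
+
+        var sw = Stopwatch.StartNew();
+        for (int i = 0; i < n; i++) sum += AddInlined(i, 1);
+        Console.WriteLine($""inlined:     {sw.ElapsedMilliseconds} ms"");
+
+        sw.Restart();
+        for (int i = 0; i < n; i++) sum += AddNotInlined(i, 1);
+        Console.WriteLine($""not inlined: {sw.ElapsedMilliseconds} ms ({sum})"");
+    }
+}
+
+ +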

+",317218,,,,,1/16/2019 16:33,Method Inlining Considerations,,2,2,,,,CC BY-SA 4.0,,,,, +385646,1,,,1/16/2019 17:02,,4,106,"

Kubernetes provides a very elegant mechanism for managing configuration for pods using ConfigMaps. What's not clear from the documentation is what the recommended practice is for using ConfigMaps to manage different configurations for different environments, and also to deploy configuration changes when they occur.

+ +

Assume I'm using a ConfigMap for my pod to set various environment variables or to inject configuration files into my container. Evidently some (or all) of those variables or files need to be different depending on which environment the pod is deployed to.

+ +

In an ideal world I can make configuration changes and deploy those to the pod without re-building or re-deploying the container image. The implication is that those configuration settings, and the ConfigMap, should probably be stored in a separate source code repository (otherwise a build of the image would be triggered every time configuration changes).

+ +

What are some recommended practices for:

+ +
    +
  1. maintaining different configuration settings per environment (e.g. a separate branch per environment)

  2. +
  3. automatically deploying configuration changes when they change under source control, but only to the respective environment

  4. +
+",326007,,4,,3/27/2019 12:45,3/27/2019 12:45,What are best practices for deploying different configurations per environment in Kubernetes/OpenShift?,,0,0,1,,,CC BY-SA 4.0,,,,, +385653,1,,,1/16/2019 18:24,,1,51,"

I have a very large monolithic legacy application that I am tasked with breaking into many context-bounded applications on a different architecture. My management is pushing for the old and new applications to work in tandem until all of the legacy functionality has been migrated to the current architecture.

+ +

Unfortunately, as is the case with many monolithic applications, this one maintains a very large set of state data for each user interaction and it must be maintained as the user progresses through the functionality.

+ +

My question is: what are some ways that I can responsibly support a hybrid legacy/non-legacy architecture, so that in the future state the new individual applications are not hopelessly dependent on this shared state model?

+ +

My initial thought is to write the state data to a cache of some sort that is accessible to both the legacy application and the new applications so that they may work in harmony until the new applications have the infrastructure necessary to operate independently. I'm very skeptical about this approach so I'd love some feedback or new ways of looking at the problem.

+ +

Edit: Per comments, the cache data comprises ~170 attributes about the user and account, so it is not super large. I can add more specifics if necessary.

+ +

To detail my ideas around caching, we are an AWS shop and I was considering using dynamo db as the cache and setting a TTL for each cache entry as the data will be useless after the user's interaction. I was thinking of having every application (whether in the monolith or a new app) write its state to the cache upon completion or when sending the user to another application. When the user hits a new application the state will be retrieved for use and updated when appropriate.

+ +

I believe this could be a good solution to the problem because it will allow the user's interaction to easily traverse between functionality in new and old applications. The risk is that there will be dependency on the old system and it may require substantial efforts to wean new apps off of the reliance on the cache in the future.

+",215235,,215235,,1/17/2019 16:13,1/17/2019 16:13,dealing with state data in an incremental migration from large legacy application,,0,9,0,,,CC BY-SA 4.0,,,,, +385654,1,385658,,1/16/2019 18:33,,6,363,"

I am trying to design the notification part of the system. I have 2 parts, in-app notification and email notification, so I used the strategy pattern, where I have an interface NotificationSender with one single method, send:

+ +
NotificationSender{
+    public void send(A,B,C);
+}
+
+ +

then I have 2 implementations:

+ +
InAppNotificationSender{
+    public void send(A,B,C){}
+}
+EmailNotificationSender{
+    public void send(A,B,C){}
+}
+
+ +

Later on, I had to add parameters to the InAppNotificationSender, so its send method should take D and E as well, which may change the design. I thought of using one single parameter object for both methods, built with a builder pattern, something like:

+ +
NotificationSender{
+        public void send(NotificationTemplateBuilder);
+    }
+
+ +
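
A minimal sketch of that idea, written in C# with hypothetical names: one request object carries the common fields plus the extras, and each sender simply ignores what it does not need:

+ +
public class NotificationRequest
+{
+    public string Recipient { get; set; }   // A
+    public string Subject   { get; set; }   // B
+    public string Body      { get; set; }   // C
+    public string Icon      { get; set; }   // D, only meaningful in-app
+    public string DeepLink  { get; set; }   // E, only meaningful in-app
+}
+
+public interface INotificationSender
+{
+    void Send(NotificationRequest request);
+}
+
+public class EmailNotificationSender : INotificationSender
+{
+    public void Send(NotificationRequest request)
+    {
+        // uses Recipient, Subject and Body; ignores Icon and DeepLink
+    }
+}
+
+ +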

What would be a good design for such a case?

+",92003,,,,,1/17/2019 20:41,Design pattern for 2 methods one has 70% arguments of other one,,5,3,1,,,CC BY-SA 4.0,,,,, +385656,1,,,1/16/2019 18:44,,2,241,"

We have two teams, each using git, and would like to share a small project between them. Git submodules sounded like an obvious answer until I started searching and found lots of ""submodules will bring you pain!"" opinions out there. An answer to a related question here suggests git subtree, which seems to be baked in to some of our git clients but not others. I'm looking for a path forward.

+ +

More specifically: we have a dev team, a doc team, and a desire to add doc's examples to dev's test suite. We don't want to require doc to check out the whole dev tree (and there's a technical barrier anyway). We want both groups to be able to update the examples; for example, if a dev change is not backward-compatible, we want fixing the example to be part of the dev task and not technical debt. Both the doc build and the dev tests require the presence of the examples.

+ +

Members of the dev team are fluent in git. The doc team includes git beginners (though at least we have gotten them onto branches and off of master, finally). Dev is working on Linux (Ubuntu and RHEL) and doc is working on Windows using Tortoise Git (or in some cases the command line). It's ok if setting up a solution requires some work (I'm one of the more git-fluent doc-team members and this will be my responsibility), but we want using it to be straightforward for both groups. (If it's not, all those support requests will come to me.)

+ +

From what I've read, git subtree sounds like a viable option, but I can't tell if a repository can be a sub to two parents (doc and dev). How should I approach my sharing problem? Does git subtree do what we need, or is there something else we should do instead?

+ +

In case it matters, we're using our own server (with Bitbucket), not GitHub.

+",124448,,,,,1/16/2019 20:53,Sharing a sub-project and some users are git beginners?,,1,1,,,,CC BY-SA 4.0,,,,, +385664,1,385671,,1/16/2019 22:18,,0,178,"

I'm trying to create a website as a forum similar to Reddit, but on a smaller scale (no comments also). The website will allow users to post a link to a forum. Then they can view a list of New or Trending links based on upvotes (gathered from an external API).

+ +

So basically all I need is a REST API to post links to one of many long lists, which users can then view with GET requests. There also needs to be a quick way to get the last 7 days of links so that the most popular ones can be computed. I'm accommodating a maximum of about 100,000 links posted daily (about 1 per second) and maybe 10 requests to Trending and New posts.

+ +

My current architecture would be:

+ +
    +
  • Stores past n (likely n=7) days of links in memory, then periodically empties that list into a long-term storage file.
  • +
  • Then to get New links, generally just queries memory but if a subforum is not very active gets older links from file.
  • +
  • Periodically updates the list of Trending links (short list in memory) for each subforum, then to get these Trending links queries that list in memory.
  • +
+ +

This will be mostly built for learning and it will not reach the 100,000 links daily, but I want to build something that can scale to that level so that I will be prepared if I have to in the future.

+ +

Would my above architecture be scalable, is there a better way of doing this, any other suggestions? Thanks for your help ahead of time.

+",326034,,,,,1/17/2019 2:30,Architecture for a Reddit-esque website,,1,0,1,,,CC BY-SA 4.0,,,,, +385665,1,,,1/16/2019 23:27,,0,72,"

Consider an abstract example, just to illustrate: you have a service for loans, which gets requests to borrow X USD and checks if there's enough USD to lend. If yes, it marks that amount as reserved for a loan, sends a message to another service which actually processes the loan, and responds with success; if no, it responds with an error.

+ +

Given there's X USD available in total, if 2 users concurrently check for the ability to borrow X, they'll both get a response saying that there's enough USD for them, while actually only one of them can be served.

+ +

Assuming we want to be able to scale, and the service needs to respond (we can't just put requests on some queue for asynchronous ordered processing), what are the design and implementation best practices to ensure consistency in such cases, so the second user gets a response saying that the loan is not available? The first thing which comes to my mind is locking the DB table when the read operation to check availability happens, but that doesn't sound too nice...

+ +
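
For illustration only, one commonly used building block is an atomic conditional update: the availability check and the reservation happen in a single statement, and the affected-row count tells the caller whether the loan can be served. A C# sketch against a hypothetical funds(currency, available, reserved) table:

+ +
using System.Data.SqlClient;
+
+static class Reservations
+{
+    public static bool TryReserve(SqlConnection conn, string currency, decimal amount)
+    {
+        const string sql = @""
+            UPDATE funds
+            SET    reserved = reserved + @amount
+            WHERE  currency = @currency
+              AND  available - reserved >= @amount;"";
+
+        using (var cmd = new SqlCommand(sql, conn))
+        {
+            cmd.Parameters.AddWithValue(""@amount"", amount);
+            cmd.Parameters.AddWithValue(""@currency"", currency);
+            return cmd.ExecuteNonQuery() == 1;   // 0 rows updated => respond ""not available""
+        }
+    }
+}
+
+ +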

P.S. I’m sorry if the question title is not optimal, was wondering how to word it properly... If you know how it can be improved - please tell.

+",305593,,305593,,1/16/2019 23:44,2/16/2019 1:01,how to ensure consistency for concurrent requests for same mutable data?,,1,0,,,,CC BY-SA 4.0,,,,, +385669,1,,,1/17/2019 1:30,,11,917,"

If passwords are stored hashed, how would a computer know that your password is similar to the last one if you try resetting your password? Wouldn't the two passwords be totally different since one is hashed, and unable to be reversed?
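
+ +

For context, this is usually possible because at reset time the new password is still available in plain text, so the system can hash a few simple variations of it and compare each against the stored hash of the previous password. A rough C# sketch (the verification delegate stands in for whatever password hashing the site actually uses, e.g. a bcrypt verify):

+ +
using System;
+using System.Collections.Generic;
+using System.Linq;
+
+static class PasswordSimilarity
+{
+    public static bool ResemblesOldPassword(string newPassword, Func<string, bool> matchesStoredOldHash)
+    {
+        var candidates = new List<string>
+        {
+            newPassword,
+            newPassword.ToLowerInvariant(),
+            newPassword.ToUpperInvariant(),
+        };
+
+        // e.g. old ""Hunter2018"" vs new ""Hunter2019"": try nudging a trailing digit
+        if (newPassword.Length > 0 && char.IsDigit(newPassword[newPassword.Length - 1]))
+        {
+            int d = newPassword[newPassword.Length - 1] - '0';
+            string stem = newPassword.Substring(0, newPassword.Length - 1);
+            candidates.Add(stem + (d + 9) % 10);   // trailing digit minus one
+            candidates.Add(stem + (d + 1) % 10);   // trailing digit plus one
+        }
+
+        return candidates.Any(matchesStoredOldHash);
+    }
+}
+
+ +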

+",326041,,,,,1/22/2019 19:35,"If passwords are stored hashed, how would a computer know that your password is similar to the last one if you try resetting your password?",,5,3,0,,,CC BY-SA 4.0,,,,, +385678,1,,,1/17/2019 6:56,,1,440,"

We're building a new web-based industrial application, and one of the questions that has been hammering our heads for the last few days is about the integration between the different ""microservices"" in this architecture.

+ +

I'm using ""microservices"" with just a pinch of salt because we're not totally embracing the concepts that define real microservices. One (and I think the biggest) difference lies in the fact that we're using the same shared database across the different modules (which I'm calling ""microservices""). A sort-of logical view of our system could be drawn as:

+ +
                  ╔══════════════╗
+                  ║    Client    ║ ══╗
+                  ╚══════════════╝   ║ (2)
+                                     ║
+                                     ▼        
+        ╔══════════════╗  (1) ╔══════════════╗
+        ║  Serv. Reg.  ║ <==> ║  API Gatew.  ║
+        ╚══════════════╝      ╚══════════════╝
+            █       █   █████████████     (4)
+           █         █              ████
+╔══════════════╗  ╔══════════════╗  ╔══════════════╗
+║   Module A   ║  ║   Module B   ║  ║   Module C   ║  <===== ""Microservices""
+╚══════════════╝  ╚══════════════╝  ╚══════════════╝
+        ║║ (3)           ║║ (3)            ║║ (3)
+        ║║               ║║                ║║
+╔══════════════════════════════════════════════════╗
+║                Database Server                   ║
+╚══════════════════════════════════════════════════╝
+
+ +

Some things that we've already figured out:

+ +
    +
  • The Clients (External Systems, Frontend Applications) will access the different Backend Modules using the Discovery/Routing pattern. We're considering the mix of Netflix OSS Eureka and Zuul to provide this. Services (Modules A,B,C) registers themselves (4) on the Service Registration Module and the API Gateway coordinates (1) with the Register to find Service Instances to fullfill the requests (2).
  • +
  • All the different Modules use the same Database. (3) This is more of a client's request than an architecture decision.
  • +
+ +

The point where we (or I, personally) are stuck is how to do the communication between the different modules. I've read a ton of different patterns and anti-patterns for this, and almost every single one recommends API integration via RestTemplate or some specialized client like Feign or Ribbon.

+ +

I tend to dislike this approach for some reasons, mainly the synchronous and stateless nature of HTTP requests. The stateless nature of HTTP is my biggest issue, as the service layers of different modules can have some strong bindings. For example, an action that is fired on Module A can have ramifications on Modules B and C, and everything needs to be coordinated from a ""Transaction"" standpoint. I really don't think HTTP would be the best way to control this!

+ +

The Java EE part inside of me screams to use some kind of Service Integration like EJB or RMI or anything that does not use HTTP in the end. For me, it would be much more ""natural"" to wire a certain Service from Module B inside Module A and be sure that they participate together on a transaction.

+ +

Another thing that needs to be emphasized is that paradigms like eventual consistency in the database are not enough for our client, as they're dealing with some serious kind of data. So, the ""I promise to do my best with the data"" approach does not fit very well here.

+ +

Time for the question:

+ +

Is this ""Service Integration"" really a thing when dealing with ""Microservices""? Or the ""Resource Integration"" wins over it?

+ +

It seems that Spring, for example, provides Spring Integration to enable messaging between services as would a tech like EJB do. Is this the best way to integrate those services? Am I missing something?

+ +

PS: You may call my ""Microservices"" as ""Microliths"", how we usually name them around here. :)

+",326056,,,,,1/18/2019 6:17,"How to integrate different ""microservices"" into a transaction?",,3,2,,,,CC BY-SA 4.0,,,,, +385684,1,,,1/17/2019 10:05,,1,165,"

We're trying to re-think the installation process of our software suite and I'm trying to find out what specific pitfalls we're facing without using my/our limited lens of the Windows software landscape today.

+ +

We have gathered a set of issues that an installation of our software potentially has to handle, and I would like to gather whether these points do make sense to consider in the context of a Windows installation procedure, or whether some of those points are better left outside the context of the installation procedure.

+ +

We face the issue, for at least a subset of the application suite, of having to do at least the following:

+ +
    +
  • Require a running 3rd party DBMS on the machine
  • +
  • Pre-Install .NET Framework or vcredist or other system-wide resources
  • +
  • Put Files into SystemDrive:\Program Files
  • +
  • (Optional) Automatic Update (of Program Files, ...) when launching some applications
  • +
  • Put an Icon into the Start Menu / Desktop
  • +
  • Install and maintain Windows Services
  • +
  • Edit registry keys pertaining to our software (i.e. HKLM\SOFTWARE\OurShpo\...)
  • +
  • ""Installation"" should work on Windows 7 and all versions, including server and LTSB, upwards
  • +
  • ""installation"" should be optionally automated / no user interaction.
  • +
+ +

Are there any best practices on what to put into the installation process / installation tools, and how? Which parts should we keep our hands off completely, and which parts are better implemented inside the application? (E.g. file-extension association is one point where I am very unsure whose job it is to fix this up.)

+ +

Disclaimer: I'm not interested in the toolchain per se here. I know there's InnoSetup, WiX, NSIS, ... and all of these can achieve some - or all - of these points somehow. The problem is deciding which of these points raise which (conceptual) problems on different Windows versions or which points don't actually make sense to put into our installation process.

+",6559,,,,,1/17/2019 22:08,Installation process for a modern complex Windows Software Installation?,,1,0,1,,,CC BY-SA 4.0,,,,, +385686,1,,,1/17/2019 10:55,,-2,45,"

I have many models in my project that are unrelated to each other. I wanted to group them but I wonder what's better:

+ +

Folder/namespace per group

+ +
    +
  • Group1.Constants, Group2.Constants
  • +
  • Group1.Models, Group2.Models
  • +
+ +
Group1/
+├── Models/
+│   ├── Class1.cs
+│   ├── Class2.cs
+├── Constants/
+│   ├── Constants1.cs
+│   ├── Constants2.cs
+
+
+ +

Folder/namespace per type

+ +
    +
  • Constants.Group1, Constants.Group2
  • +
  • Models.Group1, Models.Group2
  • +
+ +
Models/
+├── Group1/
+│   ├── Class1.cs
+│   ├── Class2.cs
+
+Constants/
+├── Group1/
+│   ├── Constants1.cs
+│   ├── Constants2.cs
+
+ +

Which is better and why?

+ +

Note: I don't want to put them in separate projects, as those types will be used only by 1 project.

+",269824,,269824,,1/17/2019 11:04,1/17/2019 11:04,Grouping types in a single project,,1,0,1,43482.52431,,CC BY-SA 4.0,,,,, +385693,1,,,1/17/2019 13:34,,0,43,"

So the basic scenario is I have a class that starts/stops service objects with the methods start(String serviceid)/stop(String serviceid). It is designed to then forward various execution tasks to the appropriate service.

+ +

What I am trying to achieve is for the start/stop methods to be non-blocking. Hence if you start you may stop immediately afterwards.

+ +

However your services may not have been initialised yet if you try to stop them after you request a start - so you want to cancel the startup process reliably. I also have methods to start/stop all defined services. The startup of a service is also bound to a timeout parameter so if it fails to start in that time it is again stopped.

+ +

Currently I have an implementation mostly based around the Executor classes. When starting a new initialisation task is created in a single thread executor. Within this is another timeout single thread executor. The initialisation task is submitted there. So in essence:

+ +
Runnable failedTask;
+Runnable initialisedTask;
+Callable<Exception> serviceInitialisationTask;
+ExecutorService initialisationExecutor = Executors.newSingleThreadExecutor();
+ExecutorService timeoutExecutor = Executors.newSingleThreadExecutor();
+timeoutExecutor.execute(() -> {
+    Future<Exception> future = initialisationExecutor.submit(serviceInitialisationTask);
+    Exception futureException;
+    try {
+        futureException = future.get(timeout, timeunit);
+    } catch (Exception e) {
+        futureException = e;
+    } finally {
+        initialisationExecutor.shutdownNow();
+    }
+    // Call back to the class that started the service initialisation
+    // This tells it whether or not the task failed and take action
+    if (futureException != null) {
+
+        failedTask.run();
+    } else {
+        initialisedTask.run();
+    }
+
+    timeoutExecutor.shutdownNow();
+});
+
+ +

The initialisation task essentially creates a thread pool for the Service to execute tasks on. Failure attempts to use the same stop method as called externally. A concurrent map stores references to the servers by id. Sometimes the thread pools for the Service remain - presumably because the new service was created before the reference to the old one was used.

+ +

If that is clear does anyone know of anything that uses this sort of pattern? I have not been able to find anything that matches this but I cannot be the first person to attempt a system like this. I'm not tied to any particular way of doing this - the only thing I would like is to not have to block starting and stopping to the caller.

+",317277,,,,,1/17/2019 14:03,Service lifecycle with non blocking start and stop,,1,0,,,,CC BY-SA 4.0,,,,, +385698,1,,,1/17/2019 14:29,,0,757,"

We are trying to move from a monolithic application to a microservice architecture fronted by an SPA application. One of the reasons is that we want to expose some services of our business to partners, and another reason is to build a better user experience via an SPA application.

+ +

In the old application I had a web application where I could register an employee (name, surname, empno, address...), choose his company from a dropdown list, and at the same time create an account for this employee in a backend ERP; think of this application as a back-office one. All data was on a single form; when the user submitted the form, the backend server saved the employee record and, if successful, created an account in the ERP system.

+ +

In order to manage authorization for this application, the URL to register an employee in the web application was accessible only if the logged-in user had a role able to handle the ""RegisterEmployee function"".

+ +

Now in my spa application, I need to call :

+ +
    +
  • a microservice to verify that the employee is not already registered,
  • +
  • a microservice to have the list of the known companies
  • +
  • a microservice to create the employee account in our erp
  • +
+ +

I was first trying to define the roles allowed in each microservice, but it seems weird because all my microservices could be reused in different scenarios, and they don't necessarily share the same list of roles...

+ +

In fact I am in the same situation as in this other question: SOA/Microservices: How to handle authorization in inter-services communications? But, like the author of that question, I have not found any solution yet.

+ +

I have read about API gateways lately, and perhaps that is the way to go, but I am not sure how. Does this mean that my microservices do not have to be aware of any authorization management?

+",285517,,285517,,1/20/2019 11:04,1/20/2019 11:04,How to handle role based access in my microservice architecture,,0,13,,,,CC BY-SA 4.0,,,,, +385700,1,,,1/17/2019 15:31,,-1,52,"

The pain that I've often felt when creating database migration files, is best described in this Speakerdeck: Sane Database Change Management with Sqitch.

+ +
+
    +
  • Paste entire function to new ""up"" script
  • +
  • Edit the new file
  • +
  • Copy the function to the new ""down"" script
  • +
  • Three copies of the function!
  • +
+
+ +

And I end up with no clear diff of the function that I can easily git-blame to understand the change later in time.

+ +

I feel too that

+ +
+

sync-based approach to schema migrations is much better.

+
+ +

I stand before this new greenfield project (as the only developer). The stack I've chosen is Postgres (on AWS RDS), Node.js, Express, Apollo (GraphQL) and React (on Heroku).

+ +

I have read about sqitch and migra, but never used them. Are those the cure to the pain I've felt? Are they compatible with the stack I'm using for this new project or what other sync-based migration tool is best compatible with this stack?

+ +

My current workflow is like this. Dev and production database models are the same. A new story arises. An existing database function needs to be altered. I create a new migration file. I copy the function that needs to be altered into the ""up"" and again into the ""down"" part of the new migration. I commit. I alter the ""up"" part. I commit (creating a diff).

+ +

This all feels very verbose just when only a few lines in the function needed to be changed. Ideally, I have the whole schema in code in git. I alter the schema. I commit (creating a diff in git history). A tool then helps to generate the required SQL statements and makes a new up and down migration file for me. I don't mind that I need to verify the generated migration file.

+",160843,,160843,,1/17/2019 16:01,1/17/2019 20:10,How to ease the pain of lack of diffs when using database migrations?,,1,1,,,,CC BY-SA 4.0,,,,, +385705,1,385706,,1/17/2019 16:05,,0,94,"

Please let me illustrate with a simple example. Suppose we have a Weather object:

+ +
class Weather:
+
+    def get_forecast(self, day, place):
+        forecast = ""code that calculates forecast""
+        return forecast
+
+ +

On the other end we've got a weather forecast web app that accesses the Weather.get_forecast method directly and renders the output on the browser.

+ +

However, if I create a JSON RESTFUL wrapper which accesses the Weather.get_forecast method and serves the output as a JSON string, that means I can then make requests to the JSON RESTFUL layer from my web app instead of accessing the Weather.get_forecast method directly, which is good for decoupling I suppose.

+ +

And my question is: is this JSON RESTFUL wrapper what is referred to as an adapter in software design patterns? To put it in another way, am I using the adapter design pattern here?

+",322501,,,,,1/17/2019 16:20,Is a JSON wrapper an adapter?,,1,1,,,,CC BY-SA 4.0,,,,, +385711,1,385712,,1/17/2019 18:01,,2,184,"

I have <5 years experience in the software industry and this is my first time doing QA.

+ +

Before the stories in a sprint move to UAT, I am supposed to retest every single story in ~1 day in an environment between QA and UAT.

+ +

The purpose is that multiple teams have their own QA environment where they exclusively test their features. The extra environment is to make sure features developed by separate teams don't interfere with each other.

+ +

This expectation seems surprising to me. I suppose this isn't technically an entire sprints worth of work, since test cases have already been written, but it still seems like a lot for one or two days.

+ +

Is this normal/reasonable/typical for an agile project?

+",326117,,,,,1/17/2019 18:55,Is it normal to retest the entire sprint before pushing to UAT,,2,3,,,,CC BY-SA 4.0,,,,, +385714,1,390720,,1/17/2019 19:23,,0,393,"

I'm creating a proof-of-concept for a Go-app for my organization. I've read all of the intro docs on setting up a Go workspace, packages, etc. However, I am still unclear about the relationship amongst the recommended directory structures, packages, AND the fully-compiled applications that will eventually be deployed.

+ +

My team needs to be able to support many small, decoupled, applications --I am not sure how I can achieve this with the single-workspace-multiple-package approach, and would greatly appreciate clarification.

+",158245,,90149,,4/22/2019 13:41,4/22/2019 13:41,Golang: Directory structure for Multiple Applications,,1,0,,,,CC BY-SA 4.0,,,,, +385722,1,385739,,1/17/2019 20:22,,-3,533,"

I am currently writing a thesis (mémoire) on the evolution of development tools. Among them are, of course, programming languages. I did some research, and a lot of the most popular languages are scripting languages like JavaScript, Python or Ruby. But when comparing performance, they are not as efficient as VM languages or compiled languages.

+ +

I would like to be able to explain why, but all the research I find on the net returns answers about JavaScript only, while I want an explanation for scripting languages in general; I don't think it is a coincidence. I thought it was because they have a simpler syntax, but some other languages also have concise syntax.

+ +

So I am wondering: why scripting languages?

+",326130,,,,,1/19/2019 15:11,Why are script languages so popular?,,3,8,,43484.04931,,CC BY-SA 4.0,,,,, +385728,1,385762,,1/17/2019 22:14,,-1,146,"

I have a script that migrates data in the database.

+ +

It copies property X to property Y. If I want the script to be idempotent, what should it do on a subsequent call if X changed?

+ +

For example:

+ +
X is 'a'   
+==> I run the script, then Y is 'a'.    
+now X is 'b'   
+
+ +
    +
  • What would a subsequent call to the script do? Set Y to 'b', or leave it untouched?
  • +
+ +

For those who wanted me to clarify my question - I am trying to clarify the meaning of ""idempotent"" when a resource it references changes.

+ +

I was asked to write an idempotent script, and one of the comments I got on it was that I did not handle the scenario where Y changes. I had to know if I misunderstood something or if it was not specified in the requirement.

+ +

Please do not close this question, as I think future readers may benefit from it.

+",95463,,95463,,1/18/2019 23:05,1/18/2019 23:05,What should idempotent script do if resource changed?,,1,3,,,,CC BY-SA 4.0,,,,, +385729,1,,,1/17/2019 23:22,,0,80,"

My application has a grid with a list of Car objects from a third party system

+ +
[{id: 1, make: ""Ford"", model: ""Focus"", ...},...]
+
+ +

The user can select a Car and use its data (after some transformation) to create a Car specific to my application. The app does this by filling in a form on another HTML page with the transformed 3rd party Car data and allowing the user to modify data as needed.

+ +

When the app makes the request to retrieve the transformed 3rd party car data, should that request be a HTTP GET (even though it transformed the data (in a repeatable way))? What should the URL be using REST standard formats? I don't think it should be /third-party/car/{id} since that is for returning the third party car as-is.

+ +

Update

+ +

Here's an overview of the architecture: +

+ +

The diagram shows the URLs supported. The flow is:

+ +
    +
  1. Client requests HTTP GET /third-party/cars/{id}/transform, HTTP GET /cars/third-party-cars/{id} or some other URL. This is the URL in question.
  2. +
  3. My application receives the request and does the following: + +
      +
    • Gets the 3rd party version of the car from the 3rd party DB
    • +
    • Transforms the 3rd party car in the CarService or ThirdPartyCarService to my application's version of a car (i.e. Car)
    • +
    • Populates and returns an HTTP form with Car data
    • +
  4. +
  5. The user then modifies the Car data as needed
  6. +
  7. The client either requests the Car data is saved via HTTP POST /cars or cancels.
  8. +
+",90340,,90340,,1/18/2019 21:40,1/18/2019 21:40,HTTP Method and REST URL for Returning Data that May Be Saved,,0,12,,,,CC BY-SA 4.0,,,,, +385732,1,385735,,1/18/2019 4:19,,0,150,"

I have a class that uses constructor DI for IEventAggregator

+ +
 public SomeViewModel(IEventAggregator eventAggregator)
+        {           
+            this.eventAggregator = eventAggregator;
+            eventAggregator.GetEvent<SomethingChanged>().Subscribe(UpdateResults);         
+        }  
+
+ +

I have another method in the same class that executes some logic that is not using eventAggregator.

+ +
public void SomeMethod()
+{
+   //Necessary logic
+}
+
+ +

Now I need to create a unit test for this method. I have a different test project of type class library.

+ +

I created a mock object using Moq

+ +
var someObject = new SomeViewModel(new Mock<IEventAggregator>())
+someObject.SomeMethod();
+
+ +

This requires adding the prism reference to my Test Project. Is this the right way? or can I create an object for SomeViewModel without using Moq and IEventAggregator?

+",184809,,,,,1/21/2019 10:16,Unit testing for a method in a class which uses constructor DI (prism),,1,1,,,,CC BY-SA 4.0,,,,, +385740,1,,,1/18/2019 10:31,,0,23,"

I want to setup a dedicated test computer that can be restored to a specific system state. The reason for that is that the software tests to be executed on that machine include the installation process. Software can certainly be uninstalled, but that often does not revert the whole system back to its original state.

+ +

I often run into problems like software failing when being installed on a freshly setup operating system or reinstalling software giving different results than newly installing it.

+ +

In short, I want to test the software deployment.

+ +
+ +

I don't always want to start from scratch by formatting the drive and installing the operating system. The used operating systems are Windows. I want to be able to easily restore the state of just having installed the operating system, for example.

+ +

I found 3 ways of achieving that:

+ +
    +
  1. Use some cloning/mirroring software to ""capture"" the state of the hard drive.
  2. +
  3. Use the system restore point.
  4. +
  5. Use virtual machines.
  6. +
+ +

How do these compare in terms of reproducibility and convenience for software tests?

+ +
    +
  1. Cloning drives seems to be the most straight forward approach.
  2. +
  3. With the system restore points, I am not sure if they revert the entire system. Will this also remove windows updates etc? It seems to be a rather convenient option to be able to restore the system this way from within the system itself.
  4. +
  5. Virtual machines seem to give the best out of both worlds. However, the tests involve connected hardware and adding the virtualisation as another layer might be another source for change of behaviour of the software.
  6. +
+",284985,,,,,1/18/2019 11:32,"How do cloning drives, system recovery points and virtual machines compare for system state recovery?",,1,0,,,,CC BY-SA 4.0,,,,, +385743,1,385749,,1/18/2019 11:05,,0,139,"

In most definitions of negative testing, the idea is that we test outside what is specified/expected and it is highly related to robustness.

+ +

So basically, if the behavior for such conditions is defined, I would be doing positive testing, as we verify behavior that is specified; isn't that so?

+ +

An example requirement:

+ +
+

REQ1: Name is a text field with maximal length of 32 chars.

+
+ +

In this case, I could try negative test cases with numbers, special characters, etc.

+ +

However, what about this:

+ +
+

REQ1: Name is a text field with maximal length of 32 chars. + If a non-alphabetical character is entered, except for a space, a message + ""Incorrect character entered"" is shown.

+
+ +

In this case, I would say that now those tests are positive as I verify the spec.

+ +

I ask because I have read the following, in a book Fuzzing for Software security testing:

+ +
+

in a login feature..a positive tests would consist of trying a valid + user name and a valid password. Everything else is negative testing.

+
+ +

That does not seem to be right to me. Usually the behavior for wrong login is pretty well defined and specified so strictly speaking, again I verify described behavior.

+ +

Is my understanding correct?

+ +

To support the definition I mean, I quote a few books below:

+ +
+

Positive testing is done to verify known test conditions and negative + testing is done to break the product with unknowns.

+ +

Another one:

+ +

Most systems are designed with explicit and implicit restrictions and + constraints. Negative test cases can be derived to test conditions + outside of those restrictions and constraints.

+
+",60327,,60327,,1/18/2019 13:07,1/18/2019 13:07,"If ""negative"" conditions are mentioned in the spec, are such tests still negative?",,2,1,,,,CC BY-SA 4.0,,,,, +385745,1,,,1/18/2019 11:09,,4,3080,"

I am new to ef and liking it since it reduces the overhead of writing common queries by replacing it with simple add, remove functions. Agreed.

+ +

Today I got into the argument with my colleague who has been using it for a while and approached him for advice on when to use Stored Procedures and when to use EF and for what?

+ +

He replied;

+ +
+

Look, the simple thing is that you can use both but what's the point of using an ORM if you are doing it all in database i.e. stored procedures. So, how would you figure out what to do where and why? A simple formula that I have learned that use ORM for all CRUD operation and queries that require 3-4 joins but anything beyond that you better use stored procedures.

+
+ +

I thought, analyzed and replied;

+ +
+

Well, this isn't the case, I have seen blogs and examples where people are doing massive things with EF and didn't need to write any procedure.

+
+ +

But he's stubborn and calling it performance overhead which is yet beyond my understanding since I am relatively new as compared to him.

+ +

So, my question is whether you should only handle CRUD in EF, or whether you should do a lot more in EF as a replacement for stored procedures.

+",326180,,326180,,1/18/2019 11:18,8/30/2019 20:27,Should I use entity framework for CRUD and let the database handle the complexity that comes with high end queries?,,7,9,1,,,CC BY-SA 4.0,,,,, +385746,1,,,1/18/2019 11:43,,0,406,"

I'm working on a server, which you can pass some form of authentication as input (like connection string) and it will connect you to your database. So the DB connection is going to be dynamic. There can be multiple users at the same time, connecting to different databases.

+

What I'm wondering is, is there a preferred way of managing the connections? Should the DB Client be stored in memory after authentication, so each user can immediately retrieve it using their session data / and execute queries against it? Or should I close / reopen the connection every time the user wanted to do something. I can use JS to figure out if the user is active on the page / or left and get rid of the connection object using users' state as well.

+

Approach 1

+
    +
  • User signs in to our web application.
  • +
  • User enters the credentials to database (like the connection string)
  • +
  • Server authenticates against the DB and we now have the connection client object. We keep it in a dictionary mapped to user id.
  • +
  • User wants to run a query. We determine the user id from the request, fetch the client from the memory and run the query.
  • +
  • When user leaves the page, we detect it through JS (unload event) send a request / or socket packet to server and close the client + remove it from the dictionary.
  • +
+

Approach 2

+
    +
  • User signs in to our web application.
  • +
  • User enters the credentials to database (like the connection string)
  • +
  • Server authenticates against the DB and we just confirm that connection worked. We don't keep the client object in memory.
  • +
  • User wants to run a query. We re-connect to the database, run the query and close the connection. No dictionaries are kept in memory, we reconnect every time the user wants to do something.
  • +
+

Further design clarifications:

+
    +
  • This is a single page application.
  • +
  • Although we don't have load balancing at the moment, depending on the user load we might end up adding it.
  • +
  • We can assume only 1 user is going to connect to a particular database through this system.
  • +
  • Sessions are managed through cookies and server side code.
  • +
  • We don't really care about the distance between the SQL and our server. Given that it should be able to connect to any-given SQL - there isn't a favourable location unless it's fully distributed and we're not doing that at this stage.
  • +
+

What do you think?

+",322602,,-1,,6/16/2020 10:01,12/9/2020 0:04,Managing multiple dynamic database connections,,4,1,,,,CC BY-SA 4.0,,,,, +385752,1,,,1/18/2019 13:22,,-2,51,"

I am planning some app as a part of our product. One part of the app will be to receive push messages.

+ +

Sometimes it's months between uses of the app and, by experience with other apps I've used, it seems like there is some timeout. If you don't use the app in a while, it will stop receiving push notifications.

+ +

So, do iOS/Android push notification subscriptions expire if the app is not used for a while? Or is this something the app developer has done in their own code?

+ +

If there are limitations in iOS/Android, what are they, and is there some way to avoid them? It's important that push notifications arrive even when the app hasn't been used for months.

+",292082,,90149,,1/21/2019 14:00,1/21/2019 14:00,IOS/Android Push notifications,,1,1,,,,CC BY-SA 4.0,,,,, +385763,1,,,1/18/2019 17:06,,3,64,"

I'm trying to modernize a very old PHP application for a customer to keep its code base up with modern coding standards. I am permitted to restructure things a bit, but dumping the entire app and switching to an existing framework is not yet an option. I just need some general tips on how best to restructure the application classes.

+ +

Right now, the parent script calls the base application, which extends the database class. Smarty is being used as the template rendering engine, but it is not integrated with the base application. The parent script calls on Smarty separately. This customer's app includes a lot of complicated processes that are often coded repeatedly throughout various parent scripts, but really need to be in their own custom classes as part of the app structure.

+ +

The whole thing seems backwards from how I understand PHP applications to be coded today. What would be an effective way of restructuring the classes neatly, keeping the app extendable and a bit more ""DRY""?

+ +

+",326215,,326215,,1/18/2019 17:17,1/18/2019 17:17,Restructuring PHP application classes,,0,0,,,,CC BY-SA 4.0,,,,, +385765,1,,,1/18/2019 18:06,,1,391,"

Let's say you're loading a denormalized flat file of purchase transactions that looks like this:

+ +
| location_name | location_zip | product | product_price |
+|---------------|--------------|---------|---------------|
+|  downtown     |    90001     | fries   |    2.99       |
+|  west side    |    90048     | burger  |    5.99       |
+etc....
+
+ +

into a SQL database. In a normalized star schema DB, you would have tables for locations where the zip fact is stored, and for products where the price is stored.

+ +

So what you should be loading into the purchases table is this:

+ +
| location_id | product_id |
+|-------------|------------|
+|     01      |     01     |
+|     02      |     02     |
+etc....
+
+ +

My question is, how can we normalize the data like this during the ETL process, before it enters the database? The process is complicated by the fact that some locations may already exist in the database with assigned IDs, and some do not. It would be very inefficient to query the DB before inserting each purchase row to determine (or insert a new) location and product ID.

+ +
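
One common shape for this, sketched below under the assumption that the dimension tables are small enough to keep in memory, is to preload a name-to-id map once, assign ids to unseen values while streaming the file, and batch-insert the new dimension rows (and then the fact rows) afterwards:

+ +
using System.Collections.Generic;
+
+class DimensionCache
+{
+    private readonly Dictionary<string, int> ids;
+    private int nextId;
+
+    // new rows discovered in the file, to be inserted in one batch later
+    public readonly List<KeyValuePair<int, string>> NewRows = new List<KeyValuePair<int, string>>();
+
+    // 'existing' is loaded once, e.g. with SELECT name, id FROM locations
+    public DimensionCache(Dictionary<string, int> existing, int nextId)
+    {
+        ids = existing;
+        this.nextId = nextId;
+    }
+
+    public int GetOrAdd(string name)
+    {
+        if (!ids.TryGetValue(name, out int id))
+        {
+            id = nextId++;
+            ids[name] = id;
+            NewRows.Add(new KeyValuePair<int, string>(id, name));
+        }
+        return id;
+    }
+}
+
+// per file row:  int locationId = locations.GetOrAdd(row.location_name);
+//                int productId  = products.GetOrAdd(row.product);
+
+ +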

Any general advice on how to approach this problem would be greatly appreciated!

+",326218,,,,,1/18/2019 21:19,How/when to normalize during ETL?,,1,1,,,,CC BY-SA 4.0,,,,, +385768,1,385776,,1/18/2019 18:51,,-2,133,"

Does anyone know how many users are needed to validly test the prototype of the software that we have made? I have done research with reference to the book ""Software Engineering: A Practitioner's Approach"" by Roger S. Pressman, and there is no mention of how many users are valid for software testing.

+",324994,,,,,1/18/2019 21:29,How many valid users to test the software that has been developed?,,2,4,,,,CC BY-SA 4.0,,,,, +385769,1,385774,,1/18/2019 19:12,,-1,333,"

Is it a bad design pattern / anti-pattern to create a whole bunch of specific middleware functions to replace logic in-route? So instead of doing this:

+ +
router.post('/myRoute', (req, res, next) => {
+    checkParams(req.body).then(paramsResult => {
+        if(paramsResult.status === 'failed') return res.send(paramsResult);
+        return checkUserEmail(req.body.email);
+    }).then(emailResult => {
+        if(emailResult.status === 'failed') return res.send(emailResult);
+        return checkPasswordLength(req.body.password);
+    }).then(passResult => {
+        if(passResult.status === 'failed') return res.send(passResult);
+        return ...
+    }).then(...).catch(err => {
+        return res.send({ status:'failed', message:'An error occurred.' });
+    });
+});
+
+ +

I do this

+ +
router.post('/myRoute', [checkParams, checkUserEmail, checkPasswordLength, ...], (req, res, next) => {
+   // I'd set req.user = {...} in the last middleware before this route.
+   createUser(req.user).then(createResult => {
+        if(createResult.status === 'failed') return res.send(createResult);
+        return res.send({ status:'success', message:'Your account has been created.' });
+   }).catch(err => {
+        return res.send({ status:'failed', message:'An error occurred.' });
+   });
+});
+
+ +

From the middleware I call in myRoute I'll be able to return res.send(errMsgObj) if I determine that the operation should fail (e.g. if the user's password is too short), and in so doing, the request won't proceed to the next middleware function (thus ""failing gracefully""). Similarly, if it meets my criteria, I can call next() at the end of the middleware function.

+ +

What I'd like to know:

+ +
    +
  1. Would it be considered bad practice to use the multiple-middleware method over the normal promise-chain-inside-the-route way?
  2. +
  3. If not, is it better than or on par with the promise-chain-inside-the-route way?
  4. +
+ +
+ +

TL;DR: How this question came to me
+Seriously, feel free to skip the following paragraphs, I only add it for context and any answers shouldn't have to depend on it.

+ +

I've been using Node and Express to develop my personal projects for a while now, but because I come from a Groovy/Grails background, Javascript's asynchronous nature still trips me up quite a bit. So much, I think, that it'd be fair to say that I'm probably still a beginner.

+ +

I've been working on a new project, trying to write ""the perfect"" User register route. Because I've gotten the hang of JS promises and promise chains a while back, implementing my route as a series of promises seemed the natural way to go. The problem I found with this approach, is that my route requires 13 steps (most of which are broken up into separate functions containing their own up to 3-long promise chains), that all have to execute successfully, one after another, for the route to return a success response. That's a lot of promises to chain together, but I set out to do it anyway. However, when I was nearly done with this monstrosity, I realised that there's no way (that I've found) to gracefully exit a promise chain without throwing an error, so if execution fails in a predictable way (e.g. the email address a user wants to register with is already taken), I'd have to throw an error and handle it in the chain's catch block. But the catch block is where I prefer to handle unpredictable errors (e.g. the database server went unexpectedly offline), by logging the request params and error info, and return a generic ""An error has occurred"" message to the user. So obviously I don't want it clogging up my logs with ""your password has to be at least 276 characters long"" messages.

+ +

I then set out to learn async/await and I love it. I feel like I'm right at home, like I'm using Groovy/Grails, because I can now assign the return value of a function to a variable, and if a function returns a known-error message (""your password's already been taken""), I can immediately return res.send() said message to the client and it won't continue executing the code for all the following steps in the route. The next problem I ran into was that I couldn't figure out how to implement a post-route middleware that writes the request and response to the database (for debugging purposes) without the following steps in my route attempting execution, after next()ing to the middleware.

+ +

I don't know how I came up with it, but I finally had the idea of breaking all the steps involved in the route into small chunks and assigning them to their own middleware functions that would all run before the route and if they fail, they create an Event database record, containing the info that I want stored and return res.send() with an appropriate message to the client. I realise that I could also do this with a promise chain in-route, and that multiple middleware won't solve this problem, but it's just a different way of doing it that I at first thought could solve the problem.

+ +

Finally, I realise that the code example I gave in above also doesn't really demonstrate my problem that well. I gave an oversimplified example, simply to demonstrate my question, not the actual real world problem I'm facing. If you'd like me to elaborate the code example to fully understand my real world problem, let me know and I'll make an update.

+",224521,,,,,1/18/2019 21:10,Node.js / Express.js - Route consisting almost entirely of middleware,,1,0,,,,CC BY-SA 4.0,,,,, +385770,1,,,1/18/2019 19:23,,-6,38,"

What data structure can I use to look up the distance, given a fromCode and a toCode, apart from a hash map, which results in a larger number of entries in memory? We are OK with log(n) efficiency as well. Example data:

+ +
fromCode    toCode  distance
+100          200     10
+100          300     20
+-----      ----     ----
+
+ +

Assume fromCode and toCode are integer values, and that you might receive the data already sorted.
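
For illustration, here is a minimal sketch (in C#; all names are made up) of one log(n) option: keep one record per (fromCode, toCode) pair in an array sorted by that pair and binary-search it, so there is no per-bucket hash-map overhead.

using System;
using System.Collections.Generic;

struct DistanceEntry : IComparable<DistanceEntry>
{
    public int From, To, Distance;

    // Order by From first, then To, so (From, To) pairs can be binary-searched.
    public int CompareTo(DistanceEntry other)
    {
        int byFrom = From.CompareTo(other.From);
        return byFrom != 0 ? byFrom : To.CompareTo(other.To);
    }
}

class DistanceTable
{
    private readonly DistanceEntry[] _entries;   // kept sorted by (From, To)

    public DistanceTable(List<DistanceEntry> entries)
    {
        entries.Sort();                          // cheap if the input is already sorted
        _entries = entries.ToArray();
    }

    public bool TryGetDistance(int from, int to, out int distance)
    {
        int index = Array.BinarySearch(_entries, new DistanceEntry { From = from, To = to });
        distance = index >= 0 ? _entries[index].Distance : 0;
        return index >= 0;                       // O(log n) lookup
    }
}

If memory is the main constraint, the same idea also works with three parallel int arrays (from, to, distance) sorted the same way, which is even more compact.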

+",326229,,,,,1/18/2019 22:01,"Give a list of a from area code and to area code and a distance between them,which data structure?",,1,1,,,,CC BY-SA 4.0,,,,, +385771,1,,,1/18/2019 19:26,,1,1150,"

Currently: ASP.NET Core 2.2.

+ +

I've been doing quite extensive research on this topic (Domain Driven Design used together with Clean Architecture):

+ +

DDD: Where to place domain event handlers?

+ +

And I've seen a couple of DDD and Clean Architecture Repos to better understand:

+ + + +

Some other topics:

+ + + +

(plus many other resources)

+ +

And I've even been skimming through Vaughn Vernon's book..

+ +

I'm trying to keep things as simple as possible. But there is something I'm not understanding... If the Aggregates themselves are the ones supposed to raise the domain events, how are we supposed to resolve the dependencies that the event handlers need in order to execute successfully?

+ +

Let me explain (this is my complete understanding and you are more than welcome to correct me):

+ +

Aggregates are the ones in charge of raising the events. You would have an Event Bus that subscribes the handlers and publishes the events to them accordingly. So far so good. The problem starts when you're going to resolve the handlers' dependencies.

+ +

From what I understand, Aggregates should have zero external dependencies (that should be the Application Layer's responsibility), therefore you're forced to use a static method that raises the event inside the Aggregate. Once raised, you dispatch the events to their respective handlers. But what happens if a handler itself has dependencies, such as a Repository/Service/Processor, that need to be instantiated?

+ +

Wouldn't I have to create some sort of factory that creates these dependencies? That means that I wouldn't be able to use the IoC library to make the dependencies for me, right?

+ +

I thought about using MediatR to handle same-tier, same-domain dispatches because it wires into my Dependency Resolver and automatically injects all the dependencies. For this to be achieved, MediatR would need to be injected into the Aggregate, breaking the principle of having no external dependencies.
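
For reference, here is a minimal sketch (my own illustration, not taken from any of the linked repos; Order/OrderPlaced/PlaceOrderCommandHandler are hypothetical names) of the middle ground that is often described: the aggregate only records events in an in-memory list, and the application layer, which can use constructor injection, publishes them through MediatR after the work is done, so the aggregate itself never needs MediatR or any other dependency.

using System.Collections.Generic;
using System.Threading.Tasks;
using MediatR;

public interface IDomainEvent : INotification { }

public abstract class AggregateRoot
{
    private readonly List<IDomainEvent> _events = new List<IDomainEvent>();
    public IReadOnlyCollection<IDomainEvent> PendingEvents => _events;
    public void ClearPendingEvents() => _events.Clear();
    protected void Raise(IDomainEvent domainEvent) => _events.Add(domainEvent);   // no external dependency needed here
}

public class OrderPlaced : IDomainEvent { }

public class Order : AggregateRoot
{
    public void Place() => Raise(new OrderPlaced());   // the raising criteria stay inside the aggregate
}

// Application layer: MediatR is injected here, never into the aggregate.
public class PlaceOrderCommandHandler
{
    private readonly IMediator _mediator;

    public PlaceOrderCommandHandler(IMediator mediator) => _mediator = mediator;

    public async Task Handle(Order order)
    {
        order.Place();
        // ... persist the aggregate here ...
        foreach (var domainEvent in order.PendingEvents)
            await _mediator.Publish(domainEvent);      // handlers get their dependencies from the container
        order.ClearPendingEvents();
    }
}

This keeps the events and the conditions for raising them inside the aggregate (avoiding duplication), while leaving dependency resolution entirely to the container.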

+ +

From one of the GitHub links above, the Ametista project has the events raised in the Application Layer (Command Stack), and not inside the Aggregates.

+ +

While that would violate the Aggregate principle, it would solve the dependency issue. Yet it would introduce another problem, which is domain event duplication: you would have to reimplement the same event everywhere the criteria for raising it are met.

+ +

That got me thinking deeply, and I came up with the following (untested) solution (which I don't have high hopes will work):

+ +

We implement the Service Locator pattern to wrap MediatR in a static class. We would inject the Startup.cs Service Provider via a static method and store it in a static property. I don't know if a static property shared across the entire application is a good idea. I also don't know if the Service Provider would get disposed or become inaccessible in certain parts of the app. I also don't know if I'm creating a consistent MediatR instance that will be able to publish all the events transaction-wise from the bounded context (that is, all the events that were raised in the domain would work as a single transaction within the same bounded context).

+ +

In Startup.cs:

+ +
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
+public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
+{
+
+     StaticMediatR.LoadServiceProvider(app.ApplicationServices);
+}
+
+
+    public class StaticMediatR
+    {
+        private static IServiceProvider _serviceProvider;
+
+        public static void LoadServiceProvider(IServiceProvider serviceProvider)
+        {
+            _serviceProvider = serviceProvider;
+        }
+
+        public static IMediator Mediator()
+        {
+            var serviceScope = _serviceProvider.GetRequiredService<IServiceScopeFactory>().CreateScope();
+
+                return serviceScope.ServiceProvider.GetRequiredService<IMediator>();
+
+        }
+
+    }
+
+ +

I haven't tried the code above yet... That's what I have in mind.

+ +

Edit: I just tried the code above... and indeed, the service provider gets disposed.

+ +

Edit x2: Wait, removing the using statement does not dispose the object. I've made the changes above.

+",151252,,151252,,1/18/2019 21:27,1/20/2019 17:33,"Domain Events, CQRS, and dependency resolution in handlers",,3,1,1,,,CC BY-SA 4.0,,,,, +385773,1,385791,,1/18/2019 21:07,,1,82,"

I am working on a web application which allows users to review pdf documents. These documents are submitted from another public facing website. A typical workflow involves:

+ +
    +
  1. A document is uploaded on the public-facing website and stored in a document management system (DMS).
  2. The internal web application grabs the document from the shared DMS and shows it to the user.
  3. The user of the web application downloads this pdf to his local file system.
  4. He adds some markups/annotations/comments on the pdf.
  5. The reviewed document is submitted to the application, which processes the document and presents the corrections to the user who submitted the document on the website.
+ +

We were using a cloud service for manipulating the pdfs until now, so there was no problem accessing the reviewed pdf, as the API handled that. But some of the customers are having issues storing the pdfs on the cloud, so I need to implement a local solution.

+ +

My question is: how can my web application know when the user has finished reviewing the document (which will probably be a save event), so it can fetch the document for processing? As far as I know, the server doesn't know anything about, and cannot access, the local file system.

+ +

I can probably force the user to upload the reviewed document to the application, and that would make my life easy. But from a usability point of view, and for a user experience consistent with the cloud solution (which allowed communication on the ""save"" event), I would like to have something similar, i.e. when the user hits save after reviewing, my application should grab the document for further processing.

+ +

I am not sure if there is a way around the server trying to access the local file system.

+ +

Note: It's an on-premise, enterprise web-application.

+ +

TL;DR: My web-application server would like to know when a document is saved locally, and grab that document.

+",217385,,217385,,1/18/2019 21:13,1/19/2019 9:43,How to access a file stored locally on server?,,1,0,,,,CC BY-SA 4.0,,,,, +385781,1,,,1/19/2019 1:16,,3,40,"

I am trying to add a new template to an existing CodeIgniter 3.0 project where the original developer had not planned for additional templates/themes.

+ +

The primary objectives are to:

+ +

a) make a usable copy of all existing views and css files and leave the originals unchanged.

+ +

b) allow for core updates as they are released by the original developer that do no overwrite my changes.

+ +

My question is: To meet the previous objectives and to avoid having to manually update everywhere the developer originally referenced a view file/directory...

+ +

Should I use a CodeIgniter hook? Should I make changes to CodeIgniter configuration files?

+ +

Project: CodeIgniter 3.0

+ +

Existing directory structure:

+ +
.
+├── application
+|   └── views
+|     └── back
+|       └── index.php
+|     └── front
+|       └── index.php
+├── template
+|   └── back
+|     └── style.css
+|   └── front
+|     └── style.css
+
+ +

Proposed directory structure:

+ +
.
+├── application
+|   └── views
+|     ├── back
+|     ├── front
+|     └── *new_template_view
+|       ├── *copy_of_back
+|       └── *copy_of_front
+├── template
+|   └── back
+|   ├── front
+|   └── *new_template_css
+|     ├── *copy_of_back
+|     └── *copy_of_front
+
+",326246,,,,,1/19/2019 1:16,How to best approach adding a new template to an existing CodeIgniter project?,,0,0,,,,CC BY-SA 4.0,,,,, +385782,1,,,1/19/2019 1:17,,-1,262,"

My question is how I can achieve more encapsulation in TypeScript.

+ +

I have a class Item, with a public setter isOwned, but I only want to call this method in specific situations: if the item is picked up or dropped by a Player.

+ +
/** @package A */
+class Item {
+    private _is_owned: boolean = false
+
+    // ...
+
+    /** @returns - Has this item been picked up by a player? */
+    get isOwned(): boolean {
+        return this._is_owned
+    }
+
+    /** @param owned - Is a player picking up this item? */
+    set isOwned(owned: boolean) {
+        this._is_owned = owned
+    }
+}
+
+/** @package B */
+class Player {
+    private _items: Set<Item> = new Set()
+
+    // ...
+
+    /**
+     * @param item - the item in question
+     * @returns - Does this player own the item?
+     */
+    owns(item: Item): boolean {
+        return this._items.has(item)
+    }
+
+    /**
+     * @param item - the item to pick up
+     * @throws - if the item is already owned
+     */
+    pickUp(item: Item): void {
+        if (item.isOwned) throw new Error('cannot pick up item that is already owned')
+        this._items.add(item)
+        item.isOwned = true // WARNING public method
+    }
+
+    /**
+     * @param item - the item to drop
+     * @throws - if this player does not own the item
+     */
+    drop(item: Item): void {
+        if (!this.owns(item)) throw new Error('cannot drop item that player does not own')
+        this._items.delete(item)
+        item.isOwned = false // WARNING public method
+    }
+}
+
+ +

The methods Player#pickUp and Player#drop need to check the ownership of an item to determine whether the player is allowed to pick up or drop the item. So in those methods, the setter Item#isOwned is called, and therefore it needs to be public. (In TypeScript, there is no such thing as package-private, and even if there were, Player and Item are not in the same package.)

+ +

I want that setter to be called only in those methods though, because there are security implications: Since Item#isOwned is public, a “hacker” could bypass the checks before the #pickUp and #drop methods, like so:

+ +
/**
+ * Allow a player to pick up an item,
+ * even if that item is owned by another player.
+ */
+function hack(player1: Player, player2: Player, item: Item): void {
+    player1.pickUp(item)
+    try {
+        player2.pickUp(item)
+    } catch (e) {
+        item.isOwned = false // <!-- this is very bad
+        player2.pickUp(item)
+    }
+}
+
+ +

I’ve tried moving the methods to the Item class, but that won’t work because the methods need to access the private field Player#_items.

+ +

Then I tried separating each method into two: one in class Player where the item is added/deleted to the player, and another in class Item where ownership is tested, but that gives the same problem (just with different method names).

+ +

It seems like no matter what I do, getting/setting the ownership of an item needs to be public, so any program that calls Player#pickUp and Player#drop will also be able to get/set the item’s ownership status.

+ +

So how do I solve this problem? Is there a different data structure I need to use, or do I need to rethink my entire strategy?

+",42643,,,,,10/12/2020 2:31,restricting access to a public setter,,3,2,,,,CC BY-SA 4.0,,,,, +385783,1,385786,,1/19/2019 5:49,,5,2476,"

I have recently been studying computer science and was introduced to boolean algebra. It seems that boolean algebra is used to simplify logic gates in hardware in order to make the circuit design minimal and thus cheaper. Is there any similar way to use it to reduce the number of lines of code in software written in higher-level languages like C++, C#, or any other language?
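
For concreteness, here is a small made-up C# example of the kind of reduction I am imagining; the same identities used to minimise gates collapse case-by-case conditionals:

static class ShippingRules
{
    // Written out case by case:
    //   if (paid && inStock) return true;
    //   if (paid && !inStock && backordered) return true;
    //   return false;
    //
    // Boolean algebra:  A·B + A·¬B·C  =  A·(B + ¬B·C)  =  A·(B + C)
    public static bool CanShip(bool paid, bool inStock, bool backordered)
        => paid && (inStock || backordered);
}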

+",99479,,9113,,1/19/2019 7:47,5/4/2020 17:56,Can I use boolean algebra to reduce the number of lines in my code?,,7,2,2,,,CC BY-SA 4.0,,,,, +385796,1,385797,,1/19/2019 13:25,,2,995,"

I have a C function that performs a very long calculation on a microcontroller, and I am currently trying to optimize it for speed. The function body is generated automatically using Mathematica. It contains hundreds of calculations and looks like this:

+ +
void calcResult(float *result, float arg1, float arg2, ... float argN){
+   float tmp1 = arg1 * 2 + arg2;
+   float tmp2 = tmp1/arg3 + tmp1;
+   ...
+   ...
+   float tmpN = tmp320 + tmp15 * 2;
+   *result = tmp2 + tmpN;
+}
+
+ +

I asked myself whether it could be faster not to use individual ""tmp"" variables but an array with the same size as the number of ""tmp"" variables, or whether there is some other way to speed up such a function (under the assumption that the calculations needed to obtain the ""result"" are already ""optimized"" in terms of the necessary calculation time).

+ +

Edit:

+ +

In my opinion Is micro-optimisation important when coding? doesn't answer my specific question, even if the goal is to optimise the code as well.

+",326277,,326277,,1/22/2019 6:50,1/22/2019 6:50,What's faster.. Multiple variables or single array?,,3,9,,43487.42153,,CC BY-SA 4.0,,,,, +385804,1,,,1/19/2019 16:39,,8,8959,"

I like to write classes with two constructors: a primary constructor used in production code, and a default constructor just for unit tests. I do this because the primary constructor creates other concrete dependencies that consume external resources. I can't do all that in a unit test.

+ +

So my classes look like this:

+ +
public class DoesSomething : BaseClass
+{
+    private Foo _thingINeed;
+    private Bar _otherThingINeed;
+    private ExternalResource _another;
+
+    public DoesSomething()
+    {
+        // Empty constructor for unit tests
+    }
+
+    public DoesSomething(string someUrl, string someThingElse, string blarg)
+    {
+         _thingINeed = new Foo(someUrl);
+         _otherThingINeed = Foo.CreateBar(blarg);
+         _another = BlargFactory.MakeBlarg(_thingINeed, _otherThingINeed.GetConfigurationValue(""important""));
+    }
+}
+
+ +

This is a pattern I follow with many of my classes so that I can write unit tests for them. I always include the comment in the default constructor so others can tell what it's for. Is this a good way to write testable classes or is there a better approach?

+ +
+ +

It was suggested that this might be a duplicate of ""Should I have one constructor with injected dependencies and another that instantiates them."" It's not. That question is about a different way to create the dependencies. When I use the default unit test constructor the dependencies don't get created at all. Then, I make most methods public and they don't use the dependencies. Those are the ones I test.

+",219616,,219616,,1/20/2019 12:25,1/22/2019 20:05,Should my classes have separate constructors just for unit testing?,,2,6,4,,,CC BY-SA 4.0,,,,, +385806,1,,,1/19/2019 17:24,,1,133,"

I would like to save the current ""value"" property of several components (e.g. a Slider) as a configuration profile when the user clicks the Save button in my application. However, the Save button is in a different file than the rest of the components. How should I deal with this? Some solutions I have thought of:

+ +
    +
  1. do a two-way binding between the ""value"" property of the component and a role in my model (by using Binding)
  2. reference every component by ID, even if they are in different QML files, in the onClicked handler in my Save button and save the data to the model
  3. make onClicked in my Save button emit a signal that will be ""caught"" in every QML file, making the components commit their value to my model
+ +

Am I on the right track here?

+",63690,,,,,1/19/2019 17:24,Save data from multiple Qt components scattered around multiple QML files,,0,1,,,,CC BY-SA 4.0,,,,, +385808,1,385810,,1/19/2019 18:36,,7,2668,"

I'm working on a .NET Core REST API and I'm writing a service class to create new user accounts. I have the following code:

+ +
    public async Task<UserDto> RegisterNewUserAccount(CreateAccountDto userInfo)
+    {
+        await EnsureUserDoesNotAlreadyExist(userInfo);
+        await EnsureRfidCardIsNotAlreadyClaimed(userInfo);
+
+        var user = new UserDto()
+        {
+            Email = userInfo.EmailAddress.ToLower(),
+            FirstAndLastName = userInfo.FirstAndLastName,
+            Rank = 1200,
+            JoinedTimestamp = DateTime.UtcNow.ToString()
+        };
+
+        await _usersRepository.CreateUser(user);
+
+        var passwordHasher = new PasswordHasher<AccountCredentials>();
+        var credentials = new AccountCredentials()
+        {
+            Email = userInfo.EmailAddress.ToLower(),
+            HashedPassword = passwordHasher.HashPassword(null, userInfo.Password)
+        };
+
+        await _accountCredentialsRepository.InsertNewUserCredentials(credentials);
+        await _emailSender.SendNewUserWelcomeEmail(userInfo);
+
+        return await _usersRepository.GetUserWithEmail(user.Email);
+    }
+
+    private async Task EnsureUserDoesNotAlreadyExist(CreateAccountDto userInfo)
+    {
+        if (await _usersRepository.GetUserWithEmail(userInfo.EmailAddress) != null)
+        {
+            throw new ResourceAlreadyExistsException(""Email already in use."");
+        }
+    }
+
+    private async Task EnsureRfidCardIsNotAlreadyClaimed(CreateAccountDto userInfo)
+    {
+        if (await _usersRepository.GetUserWithRfid(userInfo.RfidNumber) != null)
+        {
+            throw new ResourceAlreadyExistsException(""RFID card already in use."");
+        }
+    }
+
+ +

To me, this code is clean, readable, and obvious in what it does. However, it seems like a lot of people are strictly against using exceptions for this type of logic. I know exceptions shouldn't be used for normal control flow, but for something like this, it seems so natural. The one thing I've considered changing is renaming the two helper methods to ThrowIfUserAlreadyExists and ThrowIfRfidCardIsAlreadyClaimed, to make it more clear that these private helper methods serve no purpose other than to throw an exception if a condition is met.

+ +

Obviously, I wouldn't want to use exceptions for validating general user input, such as making sure their password meets the requirements, etc., but in cases like this the line seems blurred.

+ +

If you believe this is bad practice, how would you write code such as this instead? Why is it bad to do it the way I have?

+ +

For instance, if I tried to do so using a DoesUserWithEmailAlreadyExist() check before I called my method, what happens if that user goes away between that check and now? Furthermore, how would I even go about returning some type of return code here instead without having a giant mess? I can only have one return type on the function, of course. Are you saying I seriously have to wrap every response from the method in some kind of CreateUserResult object or something? That seems obnoxious and like it would leave the code a mess.
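
For reference, this is roughly what I imagine that wrapper would look like (just a sketch of the hypothetical CreateUserResult mentioned above, not something I have actually written):

public class CreateUserResult
{
    public bool Succeeded { get; private set; }
    public string Error { get; private set; }
    public UserDto User { get; private set; }

    public static CreateUserResult Success(UserDto user) =>
        new CreateUserResult { Succeeded = true, User = user };

    public static CreateUserResult Failure(string error) =>
        new CreateUserResult { Succeeded = false, Error = error };
}

// Every early exit in RegisterNewUserAccount would then become a return instead of a throw, e.g.
//   if (await _usersRepository.GetUserWithEmail(userInfo.EmailAddress) != null)
//       return CreateUserResult.Failure(""Email already in use."");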

+",145002,,145002,,1/19/2019 19:06,1/21/2019 9:59,Why is it bad to use exceptions for handling this type of validation? It seems like it makes the code so much cleaner,,6,4,,,,CC BY-SA 4.0,,,,, +385812,1,385816,,1/19/2019 22:59,,1,756,"

I'm using a lot of async Task<IEnumerable<T>> methods, and I want to stop doing this every time I need the items as a list:

+ +
var items = await AsyncMethodThatReturnsEnumerable();
+var enumeratedItems = items.ToList();
+
+ +

So, if I do something like this

+ +
var items = (await AsyncMethodThatReturnsEnumerable()).ToList()
+
+ +

Will it run asynchronously?

+",314186,,314186,,1/19/2019 23:10,1/20/2019 0:31,Will (await method).ToList() block the thread?,<.net-core>,1,0,,43499.77708,,CC BY-SA 4.0,,,,, +385813,1,,,1/19/2019 23:07,,1,332,"

Could it also have made sense to call it a ""Form"", as in the Platonic sense of the ideal form that represents the thing which earthly objects strive to emulate?

+",128805,,128805,,1/20/2019 0:11,1/20/2019 10:19,"Why are classes named ""class""?",,2,5,,43486.51528,,CC BY-SA 4.0,,,,, +385819,1,,,1/20/2019 0:48,,21,15112,"

Why is argv declared as ""a pointer to pointer to the first index of the array"", rather than just being ""a pointer to the first index of array"" (char* argv)?

+ +

Why is the notion of ""pointer to pointer"" required here?

+",326297,,591,,1/21/2019 11:26,1/22/2019 9:54,"Why is C/C++ main argv declared as ""char* argv[]"" rather than just ""char* argv""?",,6,12,9,,,CC BY-SA 4.0,,,,, +385829,1,,,1/20/2019 10:42,,0,60,"

My question is about the construction of main loops that run in the background listening for commands and signals, and how they are constructed to be efficient. For instance, in live music synthesis programming languages like SuperCollider or PureData, you have somewhere a sound server waiting for changes in your source code and applying the changes immediately to your program. Are these just simple while loops running forever, waiting for updates in the environment? Running a simple while loop in Python will consume more than half of the CPU, so surely it cannot be done that way. Can anyone give me some hints?
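
For concreteness, a tiny sketch (in C#, purely illustrative; the idea is language-independent) of the difference in question: instead of spinning in a while loop and burning CPU, the loop blocks on a queue and wakes up only when a new command arrives.

using System;
using System.Collections.Concurrent;

class CommandLoop
{
    private readonly BlockingCollection<string> _commands = new BlockingCollection<string>();

    // Called from the editor / network thread whenever the source code changes.
    public void Post(string command) => _commands.Add(command);

    // The ""server"" loop: GetConsumingEnumerable blocks while the queue is empty,
    // so the thread sleeps instead of busy-waiting.
    public void Run()
    {
        foreach (var command in _commands.GetConsumingEnumerable())
        {
            Console.WriteLine(""applying: "" + command);
        }
    }
}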

+",,user230703,,,,1/20/2019 17:53,Main Loops and listeners in live coding systems,,2,1,,,,CC BY-SA 4.0,,,,, +385833,1,,,1/20/2019 11:49,,1,740,"

Consider the following architecture:

+ +
    +
  • Application A
  • Application B
  • Commons-Util
+ +

A and B share a lot of functionality. That's why we plan to extract the shared code into a commons library.

+ +

I am aware of the advantages and disadvantages of multi-module projects vs. separate repositories. However, I'm not sure which to go for in this specific case.

+ +
    +
  • A and B should be releasable and deployable separately (+ separate repos)
  • A and B should always depend on the same Commons-Util library (+ multi module). Therefore, dependency management should be at a central place (+ multi module).
  • Team A and Team B should be independent (+ separate repos)
  • No other projects besides A and B are using Commons-Util (+ multi module)
+ +

What's the best solution in this case?

+",204598,,,,,1/20/2019 11:49,Multi module Maven project or separate repositories?,,0,3,,,,CC BY-SA 4.0,,,,, +385836,1,,,1/20/2019 15:34,,0,1213,"

I know that the following function is a virtual function and needs to be overridden when extended by another class:

+ +
virtual int getAge()=0;
+
+ +

What I don't understand is the following syntax I have seen in places:

+ +
double getAge(int id) override;
+
+ +

Is this also a virtual function? I thought the word ""override"" is only used when overriding a virtual function inherited from a base class.

+",326330,,90149,,1/21/2019 13:57,1/24/2019 14:45,"Usage of the word ""override"" in C++ and it's virtual functions",,2,0,,,,CC BY-SA 4.0,,,,, +385838,1,385842,,1/20/2019 16:20,,1,147,"

In Javascript, converting [] to a number (e.g. +[]) gives 0, while doing the same to {} gives NaN. This leads to entertaining wats like this:

+ +
> 2 / []
+Infinity
+> 2 / {}
+NaN
+
+ +

Is there a historical reason for this, the same way that there is a historical reason for typeof null === 'undefined'? http://2ality.com/2013/10/typeof-null.html

+ +

Or is there some other reason?

+",197372,,,,,1/20/2019 16:56,Why (historically) do `+[] === 0` and `+{} === NaN` in Javascript?,,1,0,,,,CC BY-SA 4.0,,,,, +385843,1,385847,,1/20/2019 17:18,,3,128,"

My project relies on a specialized component which is stuck (by its configuration) to an older version of the framework my project is using. This makes updating to the latest version of the framework impossible.

+ +

What are the best practices when dealing with this type of issue?

+",326335,,278015,,1/21/2019 9:19,1/21/2019 9:19,Critical dependency is preventing me from updating my app to latest framework version,,2,1,,,,CC BY-SA 4.0,,,,, +385851,1,385870,,1/20/2019 22:19,,1,289,"

I'm having trouble grasping the value proposition of the ""command"" half of the CQS principle in a system that doesn't implement command query responsibility segregation.

+ +

The CQS principle states that a method that returns a value is a query, and should never affect state. A method that affects state, should never return a value.

+ +

The principle that a query should not affect state makes perfect sense to me. If you have queries in your system that change state it's hard to reason about the logic in your code, and if you forget that your code relies on the state change provided by the ""query"" call then it's easy to introduce bugs.

+ +

Even if your system has nothing to do with CQRS, you should strive to never change state in your methods that return a value.

+ +

I don't understand the value proposition of not returning a value from your commands.

+ +

What are the pitfalls of having methods that let's say return a ""result"" that contains information about the success of the command call?

+ +

What benefits does following that principle bring to your application?
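
For concreteness, the two styles being contrasted might look like this (a sketch; the names are made up):

using System;

public interface IAccountService
{
    // Strict CQS: the command returns nothing; failure surfaces as an exception (or an event).
    void Debit(Guid accountId, decimal amount);

    // The relaxed style being asked about: the command reports its outcome to the caller.
    DebitResult DebitWithResult(Guid accountId, decimal amount);
}

public class DebitResult
{
    public bool Succeeded { get; set; }
    public string FailureReason { get; set; }
}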

+",326350,,,,,1/21/2019 10:56,What is the full value of the CQS principle in a system that doesn't implement the CQRS pattern?,,1,3,,,,CC BY-SA 4.0,,,,, +385852,1,385853,,1/20/2019 22:48,,0,751,"

I'm writing a program, part of which consists of determining whether a given file is a PNG. Knowing that a file doesn't have to be named with its respective filename extension to be of a certain type, I decided to check its header. However, after not being able to think of anything better, I wrote this if statement with loads of &&...

+ +
uint8_t buffer[sizeof(uint8_t) * size];
+int fileType;
+
+if (fread(buffer, sizeof(uint8_t) * size, 1, fp) == 1) {
+
+    if (buffer[0] == 137 && buffer[1] == 80  &&
+            buffer[2] == 78  && buffer[3] == 71  &&
+            buffer[4] == 13  && buffer[5] == 10  &&
+            buffer[6] == 26  && buffer[7] == 10) 
+                fileType = PNG;
+
+}
+
+ +

size has been determined before and PNG is from an enum type, in case there's any confusion.

+ +

I tried having the header bits of the PNG file format as a macro, but it seemed much too complicated (because of the way they are read onto a buffer) to check if the given file's bits are the same as that macro. I also tried using strings, but it also seemed much more complicated than this.

+ +

I was wondering, how would you go about refactoring this? Or, since there probably is a better way, how else could I check if the PNG header is present in a given file?

+",326351,,,,,1/20/2019 23:15,Getting file format by checking file header,,1,3,,43487.94097,,CC BY-SA 4.0,,,,, +385855,1,,,1/21/2019 3:42,,1,78,"

I'm working on a web project that executes CLI commands to perform long-running tasks, to spare users from having to wait, since the PHP/Apache combo cannot (to my understanding) create a separate process for such long-running work.

+ +
  • Is this the best approach for running long processes that do not force web users to wait?
  • Are there any time bombs one needs to look out for when taking this approach (running many CLI processes from PHP/Apache)?
+",11908,,,,,8/12/2019 15:11,Considerations for delegating long running to CLI from PHP/Apache,,2,1,,,,CC BY-SA 4.0,,,,, +385856,1,385878,,1/21/2019 6:04,,3,643,"

I'm in the process of designing a new desktop application which is very different from other stuff I did before, and so I'll be happy if I could be pointed towards the right direction regarding the basic building blocks of it.

+ +

The application should read a binary file, process it ""line by line"", and after some chunk of data has been read and processed, write it back to disk. The raw data, i.e. the original binary files, are usually too large to load into memory, so I have to process them bit by bit. The second phase (the processing) isn't too computationally intensive, and from previous experience I'm sure that the writing-back-to-disk part will take the most time.

+ +

What I currently have in mind are three threads (and not processes) - one in charge of reading chunks of data from disk, another doing the processing, and the last writing the results back to disk. The main application (Python or Rust, not sure yet) will allocate the memory buffer for the first thread and will be in charge of scheduling the three threads in general.

+ +

Does this make sense? I'm aware that my requirements are very similar to that of standard async web apps, so I might be missing some important tools here that could help me avoid writing all of this from scratch.

+",326363,,,,,1/21/2019 14:18,Design of file I/O -> processing -> file I/O system,,1,4,,,,CC BY-SA 4.0,,,,, +385859,1,,,1/21/2019 7:17,,2,408,"

I have a problem to solve that's very much like a thread pool, and I was hoping to hear some strategies or find some resources with information on managing the size of the pool.

+ +

Let's say I have the following:

+ +
+ +
public interface IWorkerBee : IDisposable
+{
+    Task Configure(WorkerBeeConfiguration config);
+    Task<WorkItemResult> DoWork(WorkItemData workData);
+}
+
+public class WorkerBeeConfiguration 
+    : Equ.MemberwiseEquatable<WorkerBeeConfiguration>
+{ 
+    /* initialization stuff */ 
+}
+
+public class WorkItemData
+{
+    public string CacheKey { get; set; }
+    public TimeSpan ExpireAfter { get; set; }
+    /* other per-work-item stuff */
+}
+
+public class WorkItemResult { }
+
+ +
+ +

Here are some facts about the system:

+ +
    +
  • 50 to 100 WorkItemData's per second are submitted for processing.
  • Those are spread across a few hundred unique WorkerBeeConfiguration's.
  • IWorkerBee.Configure() is expensive - it takes several seconds and creates a new System.Diagnostics.Process.
  • If the result of IWorkerBee.DoWork() is not in the cache, it takes 200-300ms to create and involves inter-process communication with the IWorkerBee's child Process.
  • IWorkerBee can be re-used after it is configured, but DoWork is single-threaded and can only process one work item at a time.
  • It is ok to be aggressive with expanding a pool and keeping worker instances for a long time. System memory is the main constraint. Low latency and high throughput are necessary.
  • WorkItemData is provided by a large system that can take data from dozens of different sources. This code runs within the same process and can consume 10-20GB of memory for cached data. The servers should have at least twice as much memory as what the cache consumes for data.
  • The distribution of WorkItemData's to WorkerBeeConfiguration's varies over time with end-user usage and the addition and removal of available WorkerBeeConfiguration's. It's not uncommon for a large user to come online and noticeably change the distribution. I might know from historical logs that WorkerBeeConfiguration-A accounts for 3-4% of WorkItemData's and WorkerBeeConfiguration-B, C, D & E each account for 1-2%, but I don't want to rely on that information for sizing - I want it to be self-adjusting.
+ +
+ +

An IWorkerBee implementation would look like this:

+ +
public class WorkerBee : IWorkerBee
+{
+    private readonly AsyncLock Lock = new AsyncLock();
+    private readonly ICache Cache;
+    private System.Diagnostics.Process WorkerProcess; // each process commits 20-30MB of private, unshared working set memory
+
+    // This is expensive - both creating and configuring the process - about a second each
+    public async Task Configure(WorkerBeeConfiguration config)
+    {
+        using (await Lock.LockAsync()) {
+            if (WorkerProcess == null)
+                WorkerProcess = await CreateProcess(config);
+
+            await ConfigureProcess(WorkerProcess, config);
+        }
+    }
+
+    private Task<System.Diagnostics.Process> CreateProcess(WorkerBeeConfiguration config) 
+        => TaskConstants<System.Diagnostics.Process>.Default;
+
+    private Task ConfigureProcess(System.Diagnostics.Process process, WorkerBeeConfiguration config)
+        => Task.CompletedTask;
+
+    public void Dispose() { WorkerProcess.Dispose(); }
+
+    // Only one item can be processed at a time. Each item takes 200-300 ms.
+    public Task<WorkItemResult> DoWork(WorkItemData workData)
+        => Cache.GetOrSetAsync(
+            workData.CacheKey,
+            workData.ExpireAfter,
+            async () => {
+                using (await Lock.LockAsync())
+                    return await DoWorkInner(workData);
+            });
+
+    private Task<WorkItemResult> DoWorkInner(WorkItemData workData)
+        => TaskConstants<WorkItemResult>.Default; // inter-process communication to perform work
+}
+
+ +
+ +

I want to keep a pool hot for each unique WorkerBeeConfiguration in use (I doubt it will be necessary to completely destroy pools that fall out of use, but a more complete implementation would do that):

+ +
public class WorkerBeeProvider
+{
+    private readonly ConcurrentDictionary<WorkerBeeConfiguration, Task<IWorkerBee>> Colony
+        = new ConcurrentDictionary<WorkerBeeConfiguration, Task<IWorkerBee>>();
+
+    public Task<IWorkerBee> GetWorkerBee(WorkerBeeConfiguration config)
+        => Colony.GetOrAdd(config, CreateAndConfigureWorkerBeePool);
+
+    static private async Task<IWorkerBee> CreateAndConfigureWorkerBeePool(WorkerBeeConfiguration config)
+    {
+        var hive = new WorkerBeePool();
+        await hive.Configure(config);
+        return hive;
+    }
+}
+
+ +
+ +

Here's a simple pool implementation, but I haven't addressed resizing in response to demand:

+ +
public interface INextBeeStrategy
+{
+    Task<IWorkerBee> GetNextBee(List<IWorkerBee> hive);
+}
+
+public class WorkerBeePool : IWorkerBee
+{
+    private List<IWorkerBee> Hive;
+    private INextBeeStrategy NextBee;
+
+    public WorkerBeePool(int initialSize = 1, INextBeeStrategy nextBeeStrategy = null)
+    {
+        NextBee = nextBeeStrategy ?? new RoundRobin();
+        Hive = new List<IWorkerBee>(initialSize);
+        for (var i = 0; i < initialSize; i++)
+            Hive.Add(new WorkerBee());
+    }
+
+    public Task Configure(WorkerBeeConfiguration config)
+        => Task.WhenAll(Hive.Select(h => h.Configure(config)));
+
+    public void Dispose()
+    {
+        foreach (var bee in Hive)
+            bee.Dispose();
+    }
+
+    public async Task<WorkItemResult> DoWork(WorkItemData workData)
+        => await (await NextBee.GetNextBee(Hive)).DoWork(workData);
+
+
+    private class RoundRobin : INextBeeStrategy
+    {
+        private int _counter = -1;
+
+        public Task<IWorkerBee> GetNextBee(List<IWorkerBee> hive)
+            => Task.FromResult(hive[Interlocked.Increment(ref _counter) % hive.Count]);
+    }
+}
+
+ +
+ +

So, how might I go about making WorkerBeePool expand with demand, rather than queue up requests too far, without exhausting system resources and staying in balance with a few hundred other pools?

+",6735,,6735,,1/21/2019 7:27,1/21/2019 7:53,Strategies for managing a dynamically sized pool of worker processes?,<.net>,1,0,1,,,CC BY-SA 4.0,,,,, +385862,1,,,1/21/2019 8:26,,4,1222,"

I have made a lot of forms for my desktop application, and some forms use the same method, so I have had to copy and paste the code (not OOP).

+ +

Let's say I have a method called FirstDayOfWeek() and this method is needed in 5 forms, that means I need to copy and paste the code 4 times to the other forms.

+ +

What I want to ask is:

+ +
    +
  1. If I want to put this FirstDayOfWeek() in a new class, should I make it a default (instance) method or a static one? That way, if there's an update to this method, I don't need to tweak it in 5 forms. (A small sketch of the static option follows below.)
  2. Which has better performance: Instance, Static, or just Procedural code (copy and paste)?
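
For option 1, a minimal sketch of what I mean (the week-start rule is just an assumption):

using System;

public static class DateHelper
{
    // One shared implementation; every form calls DateHelper.FirstDayOfWeek(...),
    // so a fix or change only has to be made here.
    public static DateTime FirstDayOfWeek(DateTime date, DayOfWeek weekStart = DayOfWeek.Monday)
    {
        int diff = (7 + (date.DayOfWeek - weekStart)) % 7;
        return date.Date.AddDays(-diff);
    }
}

// Usage from any form:
//   var start = DateHelper.FirstDayOfWeek(DateTime.Today);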
+",326371,,173647,,1/21/2019 13:42,1/21/2019 17:05,Using Instance or Static Method for reusable method,,4,2,,,,CC BY-SA 4.0,,,,, +385863,1,,,1/21/2019 8:37,,1,70,"

Issue: I provide a small web portal for customers with partial personal data like name, address, etc., which is stored in the database in plain text. Now I need a safe concept for encrypting the personal data in the database. The encryption is no problem, but how do I handle the decryption of the data? 1.) Server-side: the problem is how to avoid a man in the middle (key exchange, always at login??). 2.) Client-side: how to implement it, given there is no opportunity for long-term storage of a key on the client side... Thanks for any responses. Greets

+",326373,,,,,2/20/2019 13:03,concept for de- and encrypting personal data in web portal,,1,7,,,,CC BY-SA 4.0,,,,, +385873,1,385881,,1/21/2019 12:51,,1,430,"

When a caller makes a call to a callee, exceptions are used to inform the caller that something different has happened so that it can change the flow of the program if required. In that sense, an exception is not an error but a predictable outcome at the layer of abstraction it was raised from. Most programming languages stop execution if the caller has no handler for that situation.

+ +

Bjarne Stroustrup puts it like this in his article,

+ +
+

The author of a library can detect errors, but does not in general + have any idea what to do about them. The user of a library may know + how to cope with such errors, but cannot detect them or else they + would have been handled in the user’s code and not left for the + library to find.

+
+ +

But what about things that can't be detected, either because they were not reported or because the detection mechanism was weak, probably in the hardware?

+ +

I have seen at least two types of behaviors:

+ +
    +
  1. The entire system just hangs. Hardware malfunction, a faulty BIOS, etc. are examples of such failures. Would it be fair to say that the error happened at a level where there was no recovery mechanism (like redundant hardware), so the entire system failed?
  2. The system recovers for the next operation, reporting that something unknown happened, like ""An unknown exception occurred!"". For example, a web server that runs in an infinite try-except loop is an example of such a scenario. In this case, the problem happened at a level where the recovery mechanism's job was to just report and continue?
+ +

I know this is a bit vague, but my main doubt is what happens when something completely unknown happens! I am not talking about FileNotFound or PermissionDenied etc., but something nasty, probably at the hardware level, that is yet to be discovered or documented.

+",7686,,7686,,1/23/2019 7:15,1/23/2019 7:15,How to deal with an unknown situation in a system?,,3,8,,,,CC BY-SA 4.0,,,,, +385877,1,,,1/21/2019 13:58,,2,51,"

My application stores two different types of json data in s3.

+ +

For example: Schema-Foo and Schema-Bar.

+ +

Up to now I used the content-type application/json for both.

+ +

I would like to make a distinction between both types in the s3 storage.

+ +

I am unsure what the best way would be.

+ +

Is it possible to do something like application/json-schema-foo application/json-schema-bar?
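
For illustration (not saying this is the only or best option): a custom ""vendor"" media type with a +json suffix is one conventional way to tell the two schemas apart, and S3 accepts any content type per object. This is sketched with the AWS SDK for .NET purely as an assumption; the bucket, key, and vnd.example type names are made up.

using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class SchemaAwareUploader
{
    private readonly IAmazonS3 _s3 = new AmazonS3Client();

    public Task UploadFooAsync(string json) =>
        _s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = ""my-bucket"",
            Key = ""foo/123.json"",
            ContentBody = json,
            ContentType = ""application/vnd.example.schema-foo+json""
        });
}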

+",129077,,,,,1/22/2019 6:27,Content Types of JSON in particular schema,,1,4,,,,CC BY-SA 4.0,,,,, +385880,1,,,1/21/2019 14:43,,1,472,"

I am investigating how to test a project. Some information about the project:

+ +
    +
  • Microservice architecture, with roughly 20 services. About 10 of them with a separate database.
  • We use ServiceFabric.
  • There is a fair bit of service-to-service communication.
  • Each service is a separate repo, with no references to each other.
  • There is one common NuGet package with some helpers, like date-to-string conversion. This repo does not have any references to any other repos.
+ +

I would like to be able to write both unit and integration tests, but I am focusing on unit tests for now. The problem is better illustrated by an example.

+ +

Let's say I want to test AggrigateAllOrdersForUsers in the OrderMs. The code looks like this:

+ +
List<AggrigatedOrderTDO> AggrigateAllOrdersForUsers (List<Guid> userIds)
+{
+var internalUserIds= userService.GetUsers(userIds); -> calls User ms via REST API
+var orders = orderHistoryServer.GetOrdersForUsers(internalUserIds); -> calls OrderHistory ms via REST API
+
+var orderTypes = constantsServer.GetOrderTypes(); -> calls Constants ms via REST API
+var validTypes = GetValidOrderTypesForOrderAggrigation(orderTypes) -> internal function
+
+return Aggrigate(internalUserIds, orders , validTypes); -> internal function
+}
+
+ +

Orders in this case is a complicated object. It has references to multiple other classes, where ids need to match. It is not trivial to create, and I would like to reuse the code for creating the fake data.

+ +

How would you go about mocking/stubbing/faking the external calls here? It is easy to create a fake for userService, but where do I put it in my source tree? Everything is separate. Do I duplicate the fakes? Do I create a new userServiceFake repo? Do I go about it in a completely different way?

+ +

Kind regards

+",307789,,307789,,1/21/2019 15:33,1/21/2019 16:22,Unit and integration testing of microservice architecture,,2,0,,,,CC BY-SA 4.0,,,,, +385888,1,,,1/21/2019 16:30,,1,245,"

So I'm running into an issue with us trying to test multiple features in our QA environment at the same time. Essentially, changes are taking too long to get promoted in our CI/CD pipeline.

+ +

The way we have it set up now, we push feature branches directly into our QA environment if the build passes. We're using docker so we push the containers into that environment. With several changes in a single service being ready it's hard to manage them all being promoted.

+ +

Let's say someone promotes feature A to QA and they run into a problem. Then feature B gets blocked because A is in QA. Now, I have ideas about how to solve this but there are problems with them.

+ +

The solution I commonly see is to use a common branch that has both A and B in it. The issue with this is that feature A still has to be fixed before it can get released. So even if feature B works in QA and can be promoted, the build contains A and B. Based on the immutability of containers, I can't promote this build into production. I would have to redeploy something with only feature B to get it into production, and thus I have removed A and they can no longer debug their issue in QA.

+ +

Another solution I have is that we could just hold off on releasing and push all of them to production together. The issue with that is that if there's an issue in production, we have to revert several changes together. We effectively go back to the last stable build. Since we have a microservice architecture that might involve us reverting several services to effectively rollback. More frequent production deployments seems like a better strategy but I can't come up with a good solution to this problem. Sure, you can revert a change and test out a build in a reasonable amount of time, but the rollback still has to happen in the meantime.

+ +

I'm really not sure what approaches people have to this issue. I can't think of a good solution but I'm sure others have encountered this issue before.

+",326412,,,,,1/21/2019 16:48,How to deal with multiple features in a single git repository in QA,,2,0,,,,CC BY-SA 4.0,,,,, +385891,1,385917,,1/21/2019 17:03,,-2,141,"

I have been learning about deep neural networks and how increasing the number of hidden layers gives better results. But the problem I found is that we usually get rid of loops in calculations by using matrices. Is there any way to remove the for loop that loops through the layers in forward and backward propagation?

+",304026,,,,,1/22/2019 7:44,Is there a way to get rid of loop for layers in a neural network,,1,10,,,,CC BY-SA 4.0,,,,, +385895,1,385899,,1/21/2019 20:48,,1,988,"

I have two models in my current design, student and group. Student and group are both aggregate roots.

+ +

A student can be added to a group (a method on the group aggregate root), and it can be active for either the whole duration of the group or part of it. The group has an invariant which says that ""all students added to a given group have to either be active for the full duration of the group, or have start and end dates that are inside the group's start and end dates"". These connections (group.student) are saved as child entities (0..*) that belong to the group's aggregate. So far so good.

+ +

However, Group also has a property called ""course-code"", which is a value object containing a specific code along with additional (irrelevant for this example) data.

+ +

The part I'm struggling with: one business rule of the domain and system is that a student can only have one active group-connection per course-code at any given date. By that I mean that if a student is added to a group, the Group.AddStudent(student, daterange) method has to check whether the student has an active connection to another group with the same course-code in the supplied daterange. This is data that doesn't belong to a single group aggregate, nor to the student, as you currently can't query a student for its active groups via (e.g.) student.groups. I've intentionally modeled it like this to avoid a bi-directional relationship.

+ +

What I've thought about

+ +
    +
  1. Put the whole action (add to group) in a domain service that can ask repositories for the data and enforce/hold the invariants.
  2. Pass all groups the student is currently connected to to the AddStudentToGroup method (or possibly just the groups with the same course-code).
  3. Add a property to a student which would hold all courses it's currently studying along with their respective dateranges. This would allow the method to ask the passed-in student whether the supplied date overlaps any existing range. This also has the upside of enabling a student to stand more on its own, instead of having a list of group-ids that needs to be loaded each time you require some data about them.
+ +

Has anyone else encountered a similar situation? How would/did you solve it?

+",282452,,,,,1/21/2019 22:39,DDD - how to model validation and enforce invariants that possibly resides in different aggregates,,1,6,1,,,CC BY-SA 4.0,,,,, +385896,1,,,1/21/2019 21:32,,0,185,"

A company I work with uses a ""skeleton"" project as base scaffolding for each project they have. I am working on several edits to the skeleton. Some of them are a few lines in huge source files. What's a good way to document all the edits I'm making?

+",310736,,,,,3/12/2019 13:02,How to document changes to a project,,1,1,,,,CC BY-SA 4.0,,,,, +385897,1,,,1/21/2019 21:54,,2,853,"

I have a simple C++ program like the following:

+ +
#include <iostream>
+
+int main() {
+    int a = 5;
+    int *b = (int *) malloc(40);
+    return 0;
+}
+
+ +

Setting a breakpoint using GDB on line 5 and running disas will output the following:

+ +
Dump of assembler code for function main():
+   0x0000000100000f60 <+0>: push   %rbp
+   0x0000000100000f61 <+1>: mov    %rsp,%rbp
+   0x0000000100000f64 <+4>: sub    $0x10,%rsp
+   0x0000000100000f68 <+8>: mov    $0x28,%eax
+   0x0000000100000f6d <+13>:    mov    %eax,%edi
+   0x0000000100000f6f <+15>:    movl   $0x0,-0x4(%rbp)
+   0x0000000100000f76 <+22>:    movl   $0x5,-0x8(%rbp)
+=> 0x0000000100000f7d <+29>:    callq  0x100000f90
+   0x0000000100000f82 <+34>:    xor    %ecx,%ecx
+   0x0000000100000f84 <+36>:    mov    %rax,-0x10(%rbp)
+   0x0000000100000f88 <+40>:    mov    %ecx,%eax
+   0x0000000100000f8a <+42>:    add    $0x10,%rsp
+   0x0000000100000f8e <+46>:    pop    %rbp
+   0x0000000100000f8f <+47>:    retq  
+
+ +

Here are my questions:

+ +

1- Is all the stack needed for my app within this output? If I'm not mistaken, the stack is pointed to by rbp and grows downward?

+ +

2- b is on the heap, correct? I can see its address using x b.

+ +

3- Why does x a result in the error Cannot access memory at address 0x5? Is it because 5 isn't actually anywhere in memory? I can see it's part of the instruction movl $0x5,-0x8(%rbp)

+ +

Edit (more questions):

+ +

4- I have read that int a = 5; means variable a is created on the stack (in memory) with the value 5; is this correct? When I look at the generated assembly, the value 5 is directly within the instruction (movl $0x5,-0x8(%rbp)); there is no reference to a memory location. If it IS in memory, then how can I see its memory address? If it is NOT, then why do I read that a is on the stack (memory)?

+ +

5- I know the heap is also in memory; is it visible from the above assembly?

+ +

6- I guess my biggest confusion and question is the relation between memory management and generated assembly: can I always point out what/where the heap/stack are, given assembly code? If not, what else is needed?

+",326431,,326431,,1/21/2019 23:56,1/22/2019 8:22,Analyzing stack and heap using GDB and C++,,2,0,,,,CC BY-SA 4.0,,,,, +385900,1,,,1/21/2019 22:54,,1,65,"

This is a general question seeking guidance for the best practice(s) on implementing a Single Sign On (SSO) across many various installations of the same application.

+ +

The hypothetical example I would like to pose - imagine you have 200 Wordpress installations / websites that you manage. In order to log into each environment, there's a unique username/password entry for each environment's database. When signing in, the authentication is against an environment-specific database where the query runs against your users table. What would be the appropriate technology(ies) to allow:

+ +
    +
  1. Me, as an administrator, the ability to log in one time, and hop from environment to environment. Let's assume each environment runs on different domains, such as client1.com/admin and client2.com/admin

  2. My client, client #1, should be able to sign into their environment, but not automatically gain access to environments 2 and 3...
+ +

More context: I am building an application for an agency of several hundred employees. Each installation of the application will have a single-tenant database. I need a solution to allow each employee to log in once and jump from environment to environment, while still allowing the application to maintain its own ""local"" user database for the specific clients (non-employees of the agency) who will use the application. I am also trying to wrap my head around how an ACL works in such an implementation.

+ +

Any leads, suggestions, or feedback are greatly appreciated!

+",326437,,,,,1/21/2019 22:54,Single Sign On implementation for CMS,,0,3,0,,,CC BY-SA 4.0,,,,, +385901,1,385904,,1/21/2019 22:54,,47,19740,"

I have a switch structure that has several cases to handle. The switch operates over an enum which poses the issue of duplicate code through combined values:

+ +
// All possible combinations of One - Eight.
+public enum ExampleEnum {
+    One,
+    Two, TwoOne,
+    Three, ThreeOne, ThreeTwo, ThreeOneTwo,
+    Four, FourOne, FourTwo, FourThree, FourOneTwo, FourOneThree,
+          FourTwoThree, FourOneTwoThree
+    // ETC.
+}
+
+ +

Currently the switch structure handles each value separately:

+ +
// All possible combinations of One - Eight.
+switch (enumValue) {
+    case One: DrawOne; break;
+    case Two: DrawTwo; break;
+    case TwoOne:
+        DrawOne;
+        DrawTwo;
+        break;
+     case Three: DrawThree; break;
+     ...
+}
+
+ +

You get the idea there. I currently have this broken down into a stacked if structure to handle combinations with a single line instead:

+ +
// All possible combinations of One - Eight.
+if (One || TwoOne || ThreeOne || ThreeOneTwo)
+    DrawOne;
+if (Two || TwoOne || ThreeTwo || ThreeOneTwo)
+    DrawTwo;
+if (Three || ThreeOne || ThreeTwo || ThreeOneTwo)
+    DrawThree;
+
+ +

This poses the issue of incredibly long logical evaluations that are confusing to read and difficult to maintain. After refactoring this out I began to think about alternatives and thought of the idea of a switch structure with fall-through between cases.

+ +

I have to use a goto in that case since C# doesn't allow fall-through. However, it does prevent the incredibly long logic chains even though it jumps around in the switch structure, and it still brings in code duplication.

+ +
switch (enumVal) {
+    case ThreeOneTwo: DrawThree; goto case TwoOne;
+    case ThreeTwo: DrawThree; goto case Two;
+    case ThreeOne: DrawThree; goto default;
+    case TwoOne: DrawTwo; goto default;
+    case Two: DrawTwo; break;
+    default: DrawOne; break;
+}
+
+ +

This still isn't a clean enough solution and there is a stigma associated with the goto keyword that I would like to avoid. I'm sure there has to be a better way to clean this up.

+ +
+ +

My Question

+ +

Is there a better way to handle this specific case without effecting readability and maintainability?

+",319749,,319749,,8/7/2019 18:52,8/7/2019 18:52,Avoiding the `goto` voodoo?,,12,29,12,,,CC BY-SA 4.0,,,,, +385905,1,,,1/21/2019 23:08,,0,215,"

For my application logs for a REST server, I'd like to log some details about each HTTP request. I'm using SLF4J. Should I use logger.debug or logger.info?

+ +

More generally, what sort of things should be logged at logger.info?

+",295991,,,,,2/21/2019 11:02,When to use logger.info in sl4j,,1,1,,,,CC BY-SA 4.0,,,,, +385910,1,385912,,1/22/2019 2:35,,1,64,"

While researching RESTful API frameworks I've come across (generally speaking) two types of frameworks:

+ +
    +
  1. The first type will (more or less) directly map an ORM/ODM schema of your database to the resources. For example, tbone.

     It does give you options for which fields to expose, as well as the ability to create and expose faux fields (e.g. being able to convert an obscure DB enum value to a human-readable phrase).

  2. The second type provides the ability to easily define your resources, but you have to fill in the details of what a GET to /users actually does. For example, Flask-RESTful.
+ +
+ +

The first approach does offer the ability to get up and running faster; however, I have several concerns:

+ +
    +
  1. It seems like you're more locked in. With robust enough tests you'd likely be able to replace the framework, but it seems risky.
  2. Are there any security concerns with giving near-direct access to the database or the names of fields?
+ +

Is it safe to use these sorts of frameworks? Or is it a better idea long term to write your APIs from scratch?

+ +

Thanks!

+",167139,,,,,1/22/2019 5:06,Using frameworks that map database to RESTful APIs?,,1,1,,,,CC BY-SA 4.0,,,,, +385911,1,385913,,1/22/2019 4:35,,1,108,"

The issue I care about here is high throughput: there are a lot of sensors (monitoring devices) that send data to the server at high frequency.

+ +

It really looks like we have to use the UDP protocol for this kind of data transfer. However, I've never used UDP for high-level logic/API programming; a Web API using HTTP would be easier.

+ +

Could you please suggest anything suitable for this?

+",251824,,326536,,1/31/2019 6:53,1/31/2019 6:53,Is Web API suitable as services for IoT?,,1,8,,,,CC BY-SA 4.0,,,,, +385916,1,,,1/22/2019 7:23,,-1,114,"

Working on a huge application with lots of legacy code, I often encounter environment ""checks"":

+ +
db_name = Rails.env.production? ? 'staging_db' : 'production_db'
+...
+desired_processes_count = Rails.env.production? ? 6 : 1
+...
+Rails.env.production? ? ""index,follow"" : ""noindex,nofollow""
+
+ +

and so on. It's Ruby on Rails, but I guess the question is applicable for a wider range of projects, despite the technology.

+ +

The app does use .dotenv, and in the .env file, there are:

+ +
    +
  • credentials
  • +
  • hostnames
  • +
  • secrets
  • +
+ +

and that's about it.

+ +

In my opinion, readability increases significantly if all of the aforementioned examples:

+ +
    +
  • DATABASE_NAME
  • +
  • PROCESSES_COUNT (not really readable though, could be more descriptive)
  • +
  • SEARCH_ENGINES_INDEXING_DIRECTIVE
  • +
+ +

are stored as environment variables - it does not have to be .env, obviously, just a single configuration file with all of them.

+ +

I am not an expert though.

+ +

How can I evaluate whether a variable qualifies as an environment variable?

+ +

Is there any rule of thumb or set of best practices? Or is it a completely opinion-based question?

+",262572,,,,,1/22/2019 7:46,What goes into .env file?,,1,0,,,,CC BY-SA 4.0,,,,, +385920,1,385924,,1/22/2019 9:21,,-2,984,"

I am trying to follow the Scrum framework as much as I can, but I am facing some confusion. I would like to know what the standard guidelines for a sprint are. I am designing the sprint, but I am not sure how to get to a concrete value in terms of hours/day. What is the standard practice? Should I also lock meeting hours and deployment time in a ticket? What happens if an urgent hotfix comes up from a client? It does not happen very often, but what should we do in such circumstances? Thanks

+",270730,,,,,1/22/2019 13:33,How many hours a day in a sprint in scrum framework?,,2,2,,,,CC BY-SA 4.0,,,,, +385921,1,385922,,1/22/2019 9:35,,3,1133,"

I'm rather new at programming so I'm still getting a grip on things.

+ +

I'm creating an offline login system in C# that will have the ability to add/remove users. The computer will not be connected to the internet.

+ +

This was the approach that I was going to take:

+ +

Have the username and a salted + hashed password in an XML file. The XML file will be encrypted with a unique key stored on something worn around the user's neck (something like an ID card).

+ +

So the process should go like this:

+ +
    +
  1. User scans their ID card and enters their username.
  2. +
  3. Application finds user's encrypted XML file and decrypts it.
  4. +
  5. User enters their password in.
  6. +
  7. Application checks it and grants/denies access (a sketch of this check follows the list).
  8. +
+ +
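For step 4, the check I have in mind looks roughly like this (a sketch using PBKDF2 via Rfc2898DeriveBytes; the iteration count and class name are just placeholders):

using System.Linq;
using System.Security.Cryptography;

static class PasswordChecker
{
    // Compares the entered password against the salt + hash stored in the decrypted XML.
    public static bool Verify(string enteredPassword, byte[] salt, byte[] storedHash)
    {
        using (var kdf = new Rfc2898DeriveBytes(enteredPassword, salt, 100000))
        {
            byte[] candidate = kdf.GetBytes(storedHash.Length);
            return candidate.SequenceEqual(storedHash); // a constant-time compare would be better
        }
    }
}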

EDIT: To everyone reading, don't do this - It's a bad idea. The assembly can be reverse engineered.

+",,user326468,,user326468,1/23/2019 6:38,1/23/2019 6:38,Best approach to creating a secure offline login system in C#?,,3,3,,,,CC BY-SA 4.0,,,,, +385926,1,385930,,1/22/2019 10:34,,29,6637,"

I'm reading the Scrum - A Pocket Guide by Gunther Verheyen and it says:

+ +
+

The Chaos report of 2011 by the Standish Group marks a turning point. Extensive research was done in comparing traditional projects with projects that used Agile methods. The report shows that an Agile approach to software development results in a much higher yield, even against the old expectations that software must be delivered on time, on budget and with all the promised scope. The report shows that the Agile projects were three times as successful, and there were three times fewer failed Agile projects compared with traditional projects.

+
+ +

So I have an argument with one of my colleagues, who says that for some projects (like medicine/military, where the requirements don't change), Agile (and, particularly, Scrum) is overhead with all of the meetings etc., and it's more logical to use waterfall, for example.

+ +

My point of view is that Scrum should be adopted in such projects because it will make the process more transparent and increase the productivity of a team. I also think that Scrum events won't take much time if it's not needed because we don't need to sit the whole 8 hours in Sprint Planning for 1 month sprint. We can spare 5 minutes just to be sure that we are all on the same page and start working.

+ +

So, will Scrum create additional overhead for a project where requirements don't change?

+",326472,,4,,1/22/2019 14:19,1/23/2019 10:05,Does Scrum create additional overhead for projects where requirements don't change?,,6,20,4,,,CC BY-SA 4.0,,,,, +385927,1,,,1/22/2019 10:43,,4,642,"

Quoting the well-known Mike Cohn:

+ +
+

The primary reason for estimating product backlog items is so that predictions can be made about how much functionality can be delivered by what date. If we want to estimate what can be delivered by when, we’re talking about time. We need to estimate time. More specifically, we need to estimate effort, which is essentially the person-days (or hours) required to do something.

+
+ +

What is the benefit then? If the team just as well estimated all the tasks in man-days (MDs) instead of story points (SPs), they would be able to make an exact prediction of what can be delivered by a certain date.

+",60327,,,,,1/22/2019 20:18,"If story points are about time, what is truly the benefit of using them?",,4,1,,,,CC BY-SA 4.0,,,,, +385934,1,,,1/22/2019 11:45,,0,85,"

I'm looking to do some simple data mining that consists of going once per day to a single page and collecting the following information:

+ +
    +
  • List of movie theaters
  • +
  • Movies today on each theater
  • +
  • Session times for each movie
  • +
  • Availability of tickets for each session (boolean yes/no)
  • +
+ +

I want to catalog this information and after 1 year see an overlay graph showing for each movie (if the same movie exists on 2 theaters I want 2 distinct instances):

+ +
    +
  • x axis: time (discrete dates)
  • +
  • y axis: number of sold out sessions
  • +
+ +

Additionally, it would be useful to get another heatmap graph letting me know, for each movie, which times were the most sold out.

+ +

What would be a lightweight solution to go about this? I'm not familiar with data scraping tools or simple lightweight databases for this. I'm inexperienced in this line of work and looking to do the least coding possible. Excel would be fine for me. I've had past experience with Java and C# but would like to avoid killing a fly with a bazooka.

+ +

I'm currently thinking about doing this with ParseHub but not sure how to merge the information from 1 csv file per day to obtain the information I'm looking for.

+ +

PS: I found the JSON source that populates the web page, so I could fetch that once per day; I just need to grab the important information from that file and structure it (in an Excel sheet). I just don't know what tools I could use to accomplish such a task.
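In case it helps clarify, the sort of thing I imagine (a rough sketch in Python; the URL and field names are completely made up, not the real schema) is just a daily fetch-and-append:

import csv
import datetime
import requests

data = requests.get("https://example.com/sessions.json").json()  # placeholder URL

with open("sessions.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for session in data:  # field names below are illustrative guesses
        writer.writerow([datetime.date.today(), session["theater"],
                         session["movie"], session["time"], session["sold_out"]])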

+ +

Thank you

+",326477,,326477,,1/24/2019 4:19,6/23/2019 6:02,Lightweight data mining + organization & visualization,,1,3,,,,CC BY-SA 4.0,,,,, +385936,1,385937,,1/22/2019 12:00,,3,29361,"

I want to integrate Python code (a hierarchical clustering algorithm) with C# code.

+ +

(The idea of the project is to group similar people into classes using the algorithm. We use C# (ASP.NET) and want a way to call the algorithm from the code.)
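One option we are considering is simply running the Python script as a separate process from C# and reading its output; a rough sketch (the script name, input file and output format are placeholders, not an actual implementation):

using System.Diagnostics;

public static class ClusteringRunner
{
    public static string Run()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "python",
            Arguments = "cluster.py people.csv",   // hypothetical script and input file
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd(); // e.g. JSON with cluster labels
            process.WaitForExit();
            return output;
        }
    }
}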

+",297768,,,,,1/22/2019 12:38,How can I integrate Python code with c# code?,,2,3,,43487.56319,,CC BY-SA 4.0,,,,, +385948,1,,,1/22/2019 15:20,,0,1027,"

I have an unresolvable dependency issue in a Maven project; different pieces of code depend on different versions of dependency A (i.e. most code needs A:0.15; some needs A:0.18). Fortunately, the code is completely separable; that is, the parts that depend on the two different versions can be split into two different projects with two different pom files.

+ +

But I'd like to take this opportunity to learn about Maven modules. I could split my project into two modules and compile them separately. But, in six months or so, I hope that this dependency issue will be resolved, and the project will go back to being unitary with no modules. So, what I would like to do is to have a parent project that is packaged as a jar, and then a child module that also compiles into a separate jar. The child project will simply swap out the dependencies.

+ +

I really don't know how to make this work, or if it is even good practice to do so. All the examples of Maven modules I have found have the parent project packaged as pom instead of jar.

+ +

How can I set up my parent and child pom file so that both of them can be packaged into separate jars, and the child pom is identical to the parent except for a single dependency version? Should I be doing this?

+",317577,,317577,,1/22/2019 16:14,3/6/2019 8:29,How to have parent and child modules in Maven that both package as jar files?,,1,2,,,,CC BY-SA 4.0,,,,, +385949,1,,,1/22/2019 15:29,,2,218,"

Our team has decided to develop using BDD/TDD in an effort to become the Agile team we're supposed to be. Vertical slicing appears to be an important part of agile working and gaining the quick feedback we require, although I think we're going to hit a problem when we start dividing our PBI tasks between us.

+ +

We are creating a distributed system with multiple web services and a single page app. Take these two user story examples, each which represent a PBI:

+ +
    +
  1. As a user I want to be prompted to enter my smartcard PIN so that I can login
  2. +
  3. As a user when I enter my smartcard PIN correctly I am taken back to the system so that I can use the system's modules
  4. +
+ +

The problem comes from the tasks that are required for each of these PBIs/user stories to be completed, as some are shared.

+ +

For example, a business requirement of our organisation is that all requests must be logged. So the acceptance criteria for these PBIs (and just about any other we will have) must contain a call to an audit service.

+ +

So with the way we plan on using vertical slicing I can imagine two devs both writing the same part of the audit service that does the logging, including the tests that will drive the implementation, effectively wasting one developer's time.

+ +

Also there's the element of remembering which PBI included some functionality that will be required in others.

+ +

How do you organise in order to avoid this? Is this an inherent problem with vertical slicing? It seems this would be less of a problem with horizontal slicing.

+",146235,,,,,1/22/2019 20:43,TDD how to avoid test duplication across team,,2,2,,,,CC BY-SA 4.0,,,,, +385954,1,385955,,1/22/2019 16:08,,3,582,"

I've been looking into dependency injection, what it is, how it works, how it's being used. It's a neat system and to understand it a bit better, I'm going to implement a small demo app using this system.

+ +

I primarily write in JavaScript, so it comes naturally to look for inspiration in the same ecosystem. Unfortunately, there aren't a whole lot of resources about DI in JavaScript. DI doesn't appear to be a thing in this language.

+ +

But if you look at other languages:

+ +
    +
  • In Java, you can see DI in Spring.
  • +
  • In PHP, you can see DI in Symfony, Laravel, Drupal.
  • +
  • In TypeScript, you can see DI in Angular (2.x+).
  • +
+ +

There's great emphasis on DI in these popular frameworks and tools. When you dive into them, you'll immediately be greeted with things like Services and Components and Plugins and Annotations, all that fun stuff - but not in JS.

+ +

The closest you have to DI in JS is AngularJS (1.x), but that's about it. Other mainstream frameworks don't seem to use it. And if any tools or libraries exist, they're not mainstream either.
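To be clear about what I mean by DI, this is the kind of plain constructor injection I have in mind (no framework; the names are made up):

class MailService {
  send(to, body) { /* talk to some mail backend */ }
}

class SignupHandler {
  constructor(mailService) {   // the dependency is passed in, not created here
    this.mailService = mailService;
  }
  register(user) {
    this.mailService.send(user.email, 'Welcome!');
  }
}

// "composition root": all the wiring happens in one place
const handler = new SignupHandler(new MailService());
handler.register({ email: 'someone@example.com' });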

+ +

Is there something in a language that makes DI practical/not practical to implement? Is it something in the language's features that need to be there in order to do DI? Is it the type of app being built (i.e. data-ish vs UI-ish)? Or are JS frameworks and libraries actually using DI, just not in a mode I'm familiar with?

+",56020,,,,,4/25/2019 22:49,Do language features affect the use of dependency injection?,,3,6,,,,CC BY-SA 4.0,,,,, +385958,1,,,1/22/2019 17:42,,0,1462,"

I have a requirement in my project that a Node.js server has to send a notification (a kind of push notification) to our mobile devices; the user (or device holder) then has to act on that notification, for example by performing some action, and post back to the Node.js server. As I am very new to Android development, I want to know:

1. Is it possible to post a notification to a specific Android phone from Node.js, using the device ID or a similar unique ID?
2. Is it possible to send a response from Android back to the Node.js server, as a response to the push notification?
3. I came across the terms Google Cloud Messaging and Firebase Cloud Messaging. Do they help with my requirement?
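From what I have read so far, Firebase Cloud Messaging seems aimed at exactly this; on the Node.js side I believe it would look roughly like the sketch below, using the firebase-admin package (untested; the device token is a placeholder obtained from the Android app):

const admin = require('firebase-admin');

admin.initializeApp();  // assumes service-account credentials are configured

const deviceRegistrationToken = '<token sent up by the Android app>';  // placeholder

const message = {
  token: deviceRegistrationToken,
  notification: { title: 'Action required', body: 'Please confirm the task' },
  data: { taskId: '42' }   // custom payload the app can act on and post back about
};

admin.messaging().send(message)
  .then(id => console.log('Sent message:', id))
  .catch(err => console.error('Send failed', err));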

+",326515,,,,,1/22/2019 17:42,Sending push notifications to android by nodejs server,,0,3,,,,CC BY-SA 4.0,,,,, +385973,1,,,1/22/2019 21:21,,0,1129,"

I'm in a discussion with a co-worker concerning the use of structs. I have a couple of structs that contain several static properties that are used throughout our website. The values of those properties are all strings. They're not very long; the longest value has 29 characters.

+

His argument: "I am saying there is no performance gain because there are strings inside of them. For value types yes you gain memory/gc benefits. With strings they are ref types so allocate to the heap and won't give any benefit."

+

My argument: "...I'm simply treating the string values as value types by using the struct, therefore saving time and gaining performance by not having to instantiate it every time."

+

Here is an example of one of the structs so that you can see how I'm using them:

+
public struct Hero
+{
+    public static string Image          = "Hero Image";
+    public static string Eyebrow        = "Hero Eyebrow";
+    public static string Heading        = "Hero Heading";
+    public static string Subheading     = "Hero Subheading";
+    public static string YoutubeLink    = "Youtube Hero Link";
+    public static string PardotForm     = "Pardot Form Hero Link";
+    public static string PardotDirect   = "Pardot Direct Hero Link";
+    public static string DirectLink     = "Direct Hero Link";
+    public static string FacebookLink   = "Hero Facebook Link";
+    public static string TwitterLink    = "Hero Twitter Link";
+    public static string LinkedInLink   = "Hero LinkedIn Link";
+    public static string LinkClassNames = "Class Names";
+}
+
+

Let me know if I'm completely wrong and should just use classes, or if there is a better way of using the structs for my values (e.g. readonly instead of static, etc.).
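For comparison, the alternative I would weigh this against is a static class with constants, along these lines (just a sketch of the same values):

public static class Hero
{
    public const string Image   = "Hero Image";
    public const string Eyebrow = "Hero Eyebrow";
    // ...the remaining keys stay exactly the same...
}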

+",326531,,1204,,12/21/2020 22:11,12/21/2020 22:11,Is it a good idea to use strings in a struct as values to static properties?,,3,5,,,,CC BY-SA 4.0,,,,, +385974,1,,,1/22/2019 21:38,,-2,112,"

I'm not a software engineer, but I like coding for myself. Thus, I don't know what goes on in IT companies.

+ +

Considering the amount of time I spend on GitHub, I have never heard the term QA/QC in regards to open source. The most closely related term I've heard is unit testing.

+ +

Is my assumption correct?

+",80796,,,,,1/23/2019 12:43,Is QA/QC more related to proprietary softwares?,,2,3,,,,CC BY-SA 4.0,,,,, +385977,1,385992,,1/22/2019 22:07,,2,146,"

I currently have an object that is composed of several simple objects. Each simple object is used to render a particular portion of the larger object as a whole. Think of it as building a 3D structure such as a house for rendering to the screen:

+ +
    +
  • House + +
      +
    • Walls
    • +
    • Roof
    • +
    • Windows
    • +
    • Doors
    • +
  • +
+ +

Now, in most cases, we can just render a model and the work is done, but in this case I am essentially building the model from scratch. Each piece can be built using simpler objects already present, such as Mesh. This, however, requires constructing the vertices and indices of each fundamental piece of the house.

+ +

The way the code is currently set up, I build each part of the house within the House class, which prevents reusability (not really too big of a concern here) and definitely makes the House class very hard to understand and maintain without spending half the day analyzing it. As a basic example:

+ +
public class House : RenderObject {
+    private Mesh[] walls;
+    private Mesh[] roof;
+    private Mesh[] windows;
+    private Mesh[] doors;
+
+    public void Initialize() {
+        // Build walls.
+        // Build roof.
+        // Build windows.
+        // Build doors.
+    }
+}
+
+ +

My first thought is that the initialize method can be refactored into separate methods to handle building each piece; but this conflicts with a second thought that occurs to me at the same time.

+ +
+ +

Should each piece of the House object be its own class?

+ +
    +
  • This would improve: + +
      +
    • Maintainability
    • +
    • Readability
    • +
    • Reusability
    • +
  • +
+ +
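Concretely, the version I'm weighing would look roughly like this (just a sketch reusing the RenderObject base above; only Wall is shown, the other pieces would follow the same pattern):

public class Wall : RenderObject {
    public override void Initialize() { /* build the wall vertices/indices into a Mesh */ }
    public override void Update() { }
    public override void Render() { /* draw the wall mesh */ }
}

public class House : RenderObject {
    private readonly List<RenderObject> pieces = new List<RenderObject>();

    public override void Initialize() {
        pieces.Add(new Wall());            // Roof, Window, Door would be added the same way
        foreach (var piece in pieces) piece.Initialize();
    }

    public override void Update() { foreach (var piece in pieces) piece.Update(); }
    public override void Render() { foreach (var piece in pieces) piece.Render(); }
}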

My concern is that perhaps moving each piece to its own class wouldn't really be worth the effort if it is never reused, but then I start arguing with myself that perhaps it could be - this is a public API, after all.

+",319749,,319749,,1/23/2019 14:43,1/23/2019 14:43,Should I break functionality out into new classes?,,1,2,,,,CC BY-SA 4.0,,,,, +385989,1,385994,,1/23/2019 7:34,,2,62,"

I keep my Ansible-based deployment module in the same repository/directory as my application code. I use Mercurial for my VCS.

+ +

The inventory file is ephemeral, as the IPs of the cloud hosts keep changing.

+ +

The inventory file also defines groupings of hosts, which is key information for Ansible roles. So I just can't keep it separate, as it is tightly coupled with the roles.
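For context, the inventory looks roughly like this (hosts and group names changed; the addresses are placeholders):

# inventory
[webservers]
203.0.113.10
203.0.113.11

[dbservers]
203.0.113.20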

+ +

Any addition/deletion of servers on our cloud hosts leads to a change in the inventory file, which gets recorded in the VCS and pollutes the repository.

+ +

So my question is: what are the best practices regarding keeping Ansible inventories in a VCS? Should I just accept this as part of using Ansible?

+",326569,,,,,1/23/2019 10:48,Where to keep Ansible Inventory file,,1,0,,,,CC BY-SA 4.0,,,,, +386003,1,,,1/23/2019 14:44,,0,450,"

To provide a very blunt example (as I am at work and can't currently think of a more sensible one): if I write a Groovy class like this

+ +
class Wendy{
+    byte[] frank
+
+    String doSomethingWithFrank(){
+       frank = someServiceThatReturnsByteArray()
+    }
+} 
+
+ +

And then groovy makes me getters and setters under the hood

+ +

how is this any different to:

+ +
class Wendy{
+        public byte[] frank
+
+        String doSomethingWithFrank(){
+            frank = someServiceThatReturnsByteArray()
+        }
+    } 
+
+ +

so if you wrote

+ +
class Jenny{
+    Wendy wendy = new Wendy()
+
+    void pointless(){
+
+       wendy.doSomethingWithFrank()
+       functionThatProcessByteArray(wendy.frank)
+
+    }
+
+    void functionThatProcessByteArray(byte[] var1){
+
+      // do something
+    }
+}
+
+ +

and then a month later the someServiceThatReturnsByteArray now returns a Stream

+ +
class Wendy{
+        Stream susan
+
+        String doSomethingWithFrank(){
+           susan = someServiceThatReturnsByteArray() // well, it used to return byte[]
+        }
+    } 
+
+ +

Groovy now automatically creates getters that return Stream rather than byte[].

+ +

The class Jenny would no longer compile, as it is trying to use a byte[]; it is coded against the variable declaration, not the desired interface of the class.

+ +

This means that the two classes are very highly coupled, which is BAD.

+ +

I believe the point of making all variables private and controlling access is that the developer of the class can change any implementation detail inside without affecting anybody on the outside, but using this system, no detail of that variable is hidden from a user of the class.
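By contrast, what I would have expected encapsulation to allow is something like a hand-written getter that keeps the old contract; a rough Groovy sketch of what I mean (streamToBytes is a made-up helper):

class Wendy {
    private Stream susan

    void doSomethingWithFrank() {
        susan = someServiceThatReturnsByteArray()   // internally this is now a Stream
    }

    // explicit getter preserves the byte[] contract, so Jenny never notices the change
    byte[] getFrank() {
        return streamToBytes(susan)                 // hypothetical conversion helper
    }
}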

+",310910,,,,,1/23/2019 14:44,Are groovy automatic getters and setter effectively any different to public variables?,,0,3,,,,CC BY-SA 4.0,,,,, +386006,1,386011,,1/23/2019 15:29,,0,714,"

Consider a simple web application. I'll use a Python(ish) example, but I think the question is relevant for other languages as well.

+ +

The user is trying to fetch a page, and in order to render that page, the application has to make an external call to a remote service. In the example, I'll try to separate concerns: collecting input parameters, and calling the actual remote service.

+ +
def view(request):
+  foo = float(request.POST['foo'])
+  return Response(process_foo(foo))
+
+def process_foo(foo):
+  return remote_service.get('/bar', data={'foo': foo})
+
+ +

In the example, view is a function that's responsible for transforming the incoming request to a Response, and process_foo is responsible for performing some business logic.

+ +

What kind of unit tests make sense here?

+ +

A few preferences:

+ +
    +
  • In view, I'd expect process_foo to be a black-box, so I'd like to replace it with something during testing, so I can refactor process_foo without breaking unit tests for view.
  • +
  • In process_foo, I'd like to replace the call remote_service.get, as it's an expensive operation.
  • +
+ +

Considering the restrictions above, I'm not sure what kinds of unit tests make sense here. In view, I could assert that process_foo was called with foo, but when I make changes to process_foo, this test will not break. The same is true for testing process_foo: if I assert that remote_service.get was called with the right URL and the right parameters, that will not break when the remote URL changes (or it no longer accepts the same parameters).
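Concretely, the only kind of test for view I can come up with looks roughly like this (a sketch using unittest.mock; the module path myapp.views is made up):

from unittest import mock

from myapp.views import view   # hypothetical module containing view and process_foo

def test_view_passes_foo_to_process_foo():
    request = mock.Mock()
    request.POST = {'foo': '1.5'}

    with mock.patch('myapp.views.process_foo', return_value='result') as fake:
        response = view(request)

    fake.assert_called_once_with(1.5)   # foo was extracted and converted to float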

+ +

It feels like I should somehow test that foo was extracted from request.POST, but there seems to be no reasonable way to do this.

+ +

I'm aware that integration tests can solve this problem; I'm looking for a possible solution for how the problem above could be solved with unit tests (if there's a solution at all).

+",326599,,,,,1/24/2019 14:07,Unit testing about API endpoints and parameters,,2,1,,,,CC BY-SA 4.0,,,,, +386007,1,386009,,1/23/2019 15:36,,8,400,"

One of my projects on GitHub has received a vulnerability alert, in this case of moderate severity.

+ +

The vulnerability has been detected in a dependency of an old version of the code. Current versions do not use this dependency anymore. Nevertheless, old commits may potentially be checked out and run, and open up the application to exploits of the vulnerability.

+ +

From a software engineering perspective, is it advisable to go back and change the old commits, i.e., update the now unused dependency to a newer version containing the fix for the vulnerability? Or is it better to keep the commit history intact?

+",70086,,,,,1/28/2019 7:42,Should detected vulnerabilities in old commits be remedied?,,4,0,,,,CC BY-SA 4.0,,,,, +386010,1,386014,,1/23/2019 15:48,,0,1630,"

Given the short Java function below, I would like to create a control flow graph for it, but I'm not sure whether mine is correct, because I left out some things, such as the variables that are created together with the function (int[] A, boolean[] boo).

+ +
boolean func(int[] A, boolean[] boo){
+    boolean res;
+    int n, leng;
+    leng = A.length;
+    n = 0;
+    res = true;
+    while(n < leng){
+        if(A[n] <= 0 && !boo[n]){
+            res = false;
+        }
+        n++;
+    }
+    return res;
+}
+
+ +

+ +

Link to the chart

+ +

Is it fine like that? Because that is how I would do it in the test I will be writing soon :/

+",320475,,,,,4/29/2020 12:45,Creating a control flow graph for a given function,,2,0,,,,CC BY-SA 4.0,,,,, +386012,1,386030,,1/23/2019 16:15,,2,1762,"

Is it normal to have a bounded context spread across multiple APIs, or should there really be one API per bounded context?

+ +

I am trying to understand if I can use the Scatter Gatherer pattern (https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html) in a hobby application I am developing to improve my knowledge of DDD.

+ +

Multiple APIs - Use scatter gatherer

+ +

The example I have posted is for a mortgage application where a quote request is broadcast to multiple vendors; once a suitable number of quotes is received, the aggregator chooses the best quote.

+ +

In this scenario it appears that there is a bounded context spread across multiple APIs.

+ +

The solution structure will look something like this:

+ +
Offers.API //Contains the aggregator
+Offer1.API
+Offer2.API
+Offer3.API
+
+ +

The domain model in the Offers.API will look something like this:

+ +
public class Customer
+{
+   public string Name {get; set; }
+   public DateTime DateOfBirth {get; set; }
+   public List<Offer> Offers {get; set; }
+
+   public void AssignOffer(Offer offer)
+   {
+    Offers.Add(offer);
+   }
+}
+
+ +

The Offers are passed to the AssignOffer by the aggregator. The Offer class will be anemic in the Offers.API because the domain logic is contained in the other APIs.

+ +

I believe the benefit of this approach is that it is more configurable. Every time a new offer is added, a new API can be created.

+ +

Single API

+ +

Instead I could just have one API and map offers from the database to classes using Table Per Hierarchy mappings. Every new offer added will require the application to be compiled and published.

+",65549,,,,,1/23/2019 21:45,Multiple APIs v One API,,1,9,,,,CC BY-SA 4.0,,,,, +386016,1,386070,,1/23/2019 16:48,,4,229,"

In my work we use the Scrum board for registering activities; however, there is always a dilemma about what to register there. Some colleagues want to register answering email on the Scrum board (even creating a new user story for it), arguing that both the reading and the answer cost them approximately one hour of work; others think this is not necessary. Another example of the problem occurs with the analysis of requirements (spikes) that remain on the Scrum board for several weeks without moving, because it depends a lot on the availability of the users. Some consider it necessary to have a record of this activity on the sprint board, and others consider it unnecessary because it is an activity that takes several sprints (weeks or months) to finish.

+ +

Searching the web and forums, I find a lot of discrepancy. Some justify their answer using a measure of time (if the activity costs more than an hour, register it) and others justify it using a measure of complexity (if the activity requires analysis or development work, register it, no matter how much time it costs you), but the answer is still not clear to me.

+ +

This leads me to the question: when is it considered reasonable to record an activity on the board, and why?

+ +

We have user stories that are selected to enter the sprint, and we break them into different tasks. The question is focused on registering tasks, whether new tasks within those same stories or tasks that do not belong to any story in the sprint (like answering the email).

+ +

Our board is separated into:
+BackLog
+Analysis
+       Ready for analysis
+       Analysis in Progress
+       Completed analysis
+Development
+      Ready for development
+      Work in progress
+      Code revision
+      Completed development
+QA
+      Ready for internal Release
+      Ready for Testing
+      Testing in progress
+      Completed testing
+Business acceptance
+Done

+",326609,,326609,,1/23/2019 17:27,1/25/2019 7:20,When is it reasonable to register an activity on the SCRUM board and when it is not necessary?,,3,4,2,,,CC BY-SA 4.0,,,,, +386017,1,,,1/23/2019 16:59,,0,198,"

Say I want to (try to) read through and understand a fairly complex piece of code (for example the free software Coreboot firmware code, which can be found here). How can I figure out where the code starts? As in, what is the first line of code that will be executed when the program runs?

+ +

I have some basic familiarity with C and I know that C programs usually start with an int main (void) function. So, should I just search through the source code files to find that declaration? Or, is there an easier way to figure this out? Perhaps some software development convention that I'm not aware of?

+ +

I'm a Mechanical Engineer by background. I have some familiarity with coding, but not how complex projects like this are structured.

+ +

Edit:

+ +

This question has been flagged as a duplicate of this question:

+ +

How do you dive into large code bases?

+ +

I don't agree that it is a duplicate. That question is much more general, about general techniques one can use to familiarize oneself with a large, unknown codebase. My question is much more focused on simply 'How can I find the starting point for the code.' It doesn't look like any of the existing answers to that question directly address that.

+ +

Besides, that question was flagged for its poor quality, as being not good or on-topic for the site.

+",294872,,294872,,1/23/2019 17:56,1/24/2019 13:37,How can I figure out where the code starts for a complex software project?,,2,8,1,43491.66458,,CC BY-SA 4.0,,,,, +386021,1,,,1/23/2019 18:21,,1,112,"

A coworker of mine recently set up testing in a new project (a JS library) where a transform step hooks into the Babel config for Webpack in the production config.

+ +

For reference, this is the setting used with Jest: https://jestjs.io/docs/en/configuration.html#transform-object-string-string

+ +

The production build targets ES5, while our CI is on Node 10 and up. This means that, for all of our tests, the source code is getting transformed by all the unnecessary Babel transforms. Mind you, our source code is regular ES2016 Javascript, nothing too fancy. The only transform required might be the ES6 import syntax.

+ +

My gut reaction was that this was quite wasteful and unnecessarily couples the tests to the production build config. But my coworker's justification was that he wants to make sure that the tests run against the same artifacts that users will be using.

+ +

That makes a lot of sense to me, but I am not sure what the right answer is. What are the pros and cons of each approach? What are the dangers of running your tests against the production build transforms?

+",277750,,,,,1/23/2019 19:21,Should I run my tests against production build transforms (i.e. Babel)?,,1,2,,,,CC BY-SA 4.0,,,,, +386023,1,386029,,1/23/2019 18:52,,3,316,"

I have a static class called RenderingUtilities that houses several useful methods and constants. Some of these constants are related to the Earth as an object such as the Earth's radius. I believe the constants related to the Earth should be contained in the Earth class, but have conflicting thoughts on it since the RenderingUtilities class shouldn't really depend on other objects.

+ +

The reason for this is that the Earth class is an object that can be rendered to the screen. Thus it has a base class of RenderObject and has properties and methods of its own. It will also rely on the RenderingUtilities class to retrieve a geometric primitive in order to render. Thus this seems like a circular dependency to me.

+ +

However, my counter argument to my own counter argument is that since these are const values and are accessed without creating an instance of the Earth class, that it should be okay.

+ +
+ +

Rendering Utilities Class

+ +

The RenderingUtilities class is a collection of helpful methods with the following categories:

+ +
    +
  • Conversions + +
      +
    • Simple conversions such as degrees to radians.
    • +
    • Computer coordinate conversions such as world to screen.
    • +
    • Coordinate conversions such as ECR to ECI.
    • +
    • Earth projections such as Polar, Mercator, etc.
    • +
  • +
  • Assistive Rendering + +
      +
    • Simplifying the process of rendering text.
    • +
    • Simplifying the process of rendering 2D polygons.
    • +
  • +
+ +

A very simple example method is the conversion from degrees to radians:

+ +
public static double ToRadians(this double degrees) => degrees * Math.PI / 180.0;
+
+ +

This allows quick access through the double type, along with accessing it through the static class:

+ +
double radians = RenderingUtilities.ToRadians(45);
+double degrees = 90;
+radians = degrees.ToRadians();
+
+ +

There are more, but this should help add some clarity as to what the RenderingUtilities class is helping with. There is no need to create individual classes for these helpful methods, and they are used quite often, such as rendering text that displays the position of the camera for debugging, or frame rate, or error messages, etc. There are practical applications such as rendering detailed information about a specific point on the Earth. However, the methods are generic enough that you supply a screen position, some text, and a color, and it displays it.

+ +
+ +

Render Object Class

+ +

The RenderObject class is an abstract base class that is home to the fundamentals of each object rendered to the screen; such as:

+ +
public abstract class RenderObject {
+
+    #region Fields
+
+    private List<RenderObject> children = new List<RenderObject>();
+
+    #endregion
+
+    #region Properties
+
+    public bool Active { get; set; } = false;
+    public string Name { get; set; } = string.Empty;
+    public RenderObject Parent { get; protected set; } = null;
+    public RenderObject[] Children => children.ToArray();
+
+    #endregion
+
+    #region Public Methods
+
+    public abstract void Initialize();
+    public abstract void Update();
+    public abstract void Render();
+
+    #endregion
+
+}
+
+ +
+ +

Earth Class

+ +

This class is derived from the class RenderObject (see above). It is responsible for, well, rendering the Earth. This includes local instances of classes (also deriving from RenderObject) such as:

+ +
    +
  • Gridlines
  • +
  • Terrain
  • +
  • Water
  • +
  • etc.
  • +
+ +

The flow of the code there is very object oriented and that is what caused me to come here and pose this question.

+ +
+ +

The Current Placement of the Constants

+ +

The reason these constants are kept within the RenderingUtilities class is due to the extensive use of those constants in conversion methods mentioned above. However, since they are related to the Earth per se, I believe it may make more sense to move them there. For example:

+ +
// Doesn't make much sense.
+RenderingUtilities.EARTH_RADIUS_IN_METERS
+
+// Makes more sense.
+Earth.RADIUS_IN_METERS
+
+ +
+ +

The Question

+ +

Is there an issue with regards to OOP practices that would prevent me from putting these constants in the Earth class and using them in the static RenderingUtilities class?

+",319749,,319749,,1/23/2019 21:00,1/25/2019 7:40,I feel like these constants should be in a different class?,,5,4,,,,CC BY-SA 4.0,,,,, +386024,1,,,1/23/2019 19:01,,1,39,"

At the moment, I'm working on a product that's being broken down from a monolith to a bunch of microservices, and it seems to be going well enough.

+ +

However, if a user is abusing the service somehow, we're not sure how to leverage that information to block the user. For security reasons, we don't pass (for example) IP address from client-facing services to internal services. For request tracing purposes, we do pass the request ID and generate span IDs, so we can identify the request.

+ +

So, we have a service deep in the stack that has identified an incoming request as abusive (or potentially abusive). We have the caller service way up the stack who can resolve the request ID to surrounding context. And we have a bunch of internet-facing service nodes that we need to propagate information out to, along the lines of ""requests that come in with the following context should be blackholed / tarpitted / errored out"".

+ +

There are, of course, a lot of different ways we could solve this problem. We could have something triggered in the monitoring / log aggregation which pushes abuse handling configuration out to internet-facing hosts, we could have a ""UserAbuse"" error type that propagates up the RPC stack, or a separate service that- as soon as an issue is detected- gets called with the abuse info, and that service also somehow resolves request IDs to context information.

+ +

My question is just ""what have large, successful companies done to solve this problem?"". What are some design patterns we can try to apply?

+",144395,,,,,1/23/2019 19:01,How should propagation of service abuse information work within a microservice architecture?,,0,1,,,,CC BY-SA 4.0,,,,, +386026,1,386050,,1/23/2019 20:33,,3,162,"

I am currently working on a project that has very little documentation overall. The team is working to change that. I am doing my part, by adding xml comments to the methods I make and the ones I edit, so that the automatic documentation generation tools can use them.

+ +

For some methods, complexity is either obvious or irrelevant.

+ +

But for some other methods, the way the code is built right now, they can seem like they are pretty light, but actually be very slow (talking about n³ and above here).

+ +

Of course, I am aware that there is an underlying problem and that refactoring is probably needed. But we have neither the time nor the budget to refactor everything within a reasonable timeframe, so in the meantime, I would like to communicate this information as a warning.

+ +

example:

+ +
/// <summary>
+/// Tests if a thing is in a valid state. WARNING: This runs in O(n³)
+/// </summary>
+/// <returns>The validity of the thing</returns>
+public virtual bool IsSomethingValid()
+{
+    return DoCodeThatIsMoreComplexThanItLooks();
+}
+
+ +

I usually find those cases when doing some performance improvements, but at this point this happens pretty often, and I think adding such big warnings too often will make them lose their impact.

+ +

So my idea would be to simply add it to the standard XML comments, stating the complexity regardless of what it is, as a bit of information that may be useful or interesting.
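For example, at the moment I'm leaning towards just putting it in the standard remarks tag, along these lines:

/// <summary>
/// Tests if a thing is in a valid state.
/// </summary>
/// <remarks>Runs in O(n³) over the number of things.</remarks>
/// <returns>The validity of the thing</returns>
public virtual bool IsSomethingValid()
{
    return DoCodeThatIsMoreComplexThanItLooks();
}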

+ +

Is there a standard way to communicate the complexity of a method in XML comments? Like a tag that Visual Studio will understand (but doesn't add by default)?

+",326632,,9113,,1/24/2019 6:46,1/24/2019 12:22,How should I document a method's computational complexity in the code?,,1,5,,,,CC BY-SA 4.0,,,,, +386031,1,,,1/23/2019 21:47,,0,192,"

I have a set of tasks I perform to complete some larger operation. These tasks must be executed in linear order, and you cannot proceed onto the next task until the previous completes. For the most part, these tasks chain to each other and have no dependencies on state outside of the ""chain"", other than perhaps transferring state between the tasks as needed.

+ +

As an example, what I'm doing right now involves communication to a remote server to provision an encryption key onto a device. This device is responsible for generating data in a secure fashion, but the key it uses to encrypt that data is not initially known. So the server provisioning is to get that key and inject it into the device. The basic flow is this:

+ +
    +
  1. Software checks if a key is present on the device, if not, it initiates the provisioning process.
  2. +
  3. The first task is Authorization: + +
      +
    1. Obtain an authorization token from the device
    2. +
    3. Pass that authorization token to a server asynchronously
    4. +
    5. Receive the response to that request from the server, and pass that response on to Task #2
    6. +
  4. +
  5. The second task is Key Injection: + +
      +
    1. Build data required to request a key by passing the response from step 1 to the device.
    2. +
    3. Device gives us a token which is used to call back to the server to request a new key. This is asynchronous
    4. +
    5. Receive the response, which contains the key, and inject the key into the device.
    6. +
  6. +
  7. The third and final task is to verify the injection: + +
      +
    1. Request key information from the device
    2. +
    3. Send a request to the server, with this information, and the server verifies the key injection is valid. This request is asynchronous.
    4. +
    5. Server responds, saying key injection is good, and the operation is completed.
    6. +
  8. +
+ +

So far, I have implemented this as a state machine like so:

+ +
Idle -> Authorize -> Inject -> Verify -> Done
+
+ +

And internally, the states transition to each other. For example, when the asynchronous response is received from the server while in the Authorize state, it transitions to the Inject state.

+ +

I'm doing all of this in C++, and as far as state machine libraries go, there aren't many good ones to pick from. At the moment I'm using Boost.Statechart, which is really weird to use, especially since there's a period of time when it isn't valid to transition from one state to the next. As an example, until the response from the server is received, and while in the Authorize state, you can't transition at all yet.

+ +

So from a software design perspective, am I choosing the right design pattern here (Finite State Machine)? If yes, should I opt for more granular states? If no, what would be the ideal pattern here?

+ +

There's no real good way to pass information from one state to the next, so I end up having to store temporary state in the state machine object itself. My nitpick about this is that that state is not useful globally; it's only useful for a moment when you begin the next task. Could be a Boost.Statechart limitation, or maybe an indication I'm choosing the wrong pattern for the job. Just not sure.

+",31950,,,,,1/23/2019 21:47,"Linear ""steps"" of operations considered a state machine?",,0,3,,,,CC BY-SA 4.0,,,,, +386032,1,386033,,1/23/2019 22:43,,0,104,"

I have a car object. The car cannot be driven unless it is turned on. When should I check to see if the car is on before I try to drive it?

+ +

In the main program?

+ +
class Program
+{
+    static void Main(string[] args)
+    {
+
+        Car myCar = new Car();
+
+        if (myCar.isOn == false)  //check here if car is on
+        {
+            myCar.Start();
+        }
+
+        myCar.Drive();
+    }
+}
+
+class Car
+{
+    public bool isOn;
+
+    public void Drive()
+    {
+        Console.WriteLine(""Car is driving"");
+    }
+
+    public void Start()
+    {
+        Console.WriteLine(""Car is starting"");
+        isOn = true;
+    }
+}
+
+ +

Or in the drive method of the class itself?

+ +
class Program
+{
+    static void Main(string[] args)
+    {
+        Car myCar = new Car();
+        myCar.Drive();
+    }
+}
+
+class Car
+{
+    public bool isOn;
+
+    public void Drive()
+    {
+        if (this.isOn == false) //Check here if the car is on
+        {
+            this.Start();
+        }
+        Console.WriteLine(""Car is driving"");
+    }
+
+    public void Start()
+    {
+        Console.WriteLine(""Car is starting"");
+        isOn = true;
+    }
+}
+
+ +

This is an extremely simplified example, but when you have a large, complex object with complicated logic, the need for organizing and controlling the object's behavior becomes apparent. Where is the proper place to determine and correct an object's readiness before calling one of its methods? Should the method itself be responsible, or the subroutine calling that method?

+",176197,,,,,1/24/2019 1:46,Where should the logic concerning a class's behavior reside? In the class itself or in the calling subroutine?,,2,2,,,,CC BY-SA 4.0,,,,, +386037,1,386049,,1/24/2019 3:05,,2,1643,"

Simplified question with a working example: I want to reuse a std::unordered_map (let's call it umap) multiple times, similar to the following dummy code (which does not do anything meaningful). How can I make this code run faster?

+ +
#include <iostream>
+#include <unordered_map>
+#include <time.h>
+
+unsigned size = 1000000;
+
+void foo(){
+    std::unordered_map<int, double> umap;
+    umap.reserve(size);
+    for (int i = 0; i < size; i++) {
+        // in my real program: umap gets filled with meaningful data here
+        umap.emplace(i, i * 0.1);
+    }
+    // ... some code here which does something meaningful with umap
+}
+
+int main() {
+
+    clock_t t = clock();
+
+    for(int i = 0; i < 50; i++){
+        foo();
+    }
+
+    t = clock() - t;
+    printf (""%f s\n"",((float)t)/CLOCKS_PER_SEC);
+
+    return 0;
+}
+
+ +

In my original code, I want to store matrix entries in umap. In each call to foo, the key values start from 0 up to N, and N can be different in each call to foo, but there is an upper limit of 10M for indices. Also, values can be different (contrary to the dummy code here which is always i*0.1).

+ +

I tried to make umap a non-local variable, to avoid the repeated memory allocation of umap.reserve() in each call. This requires calling umap.clear() at the end of foo, but that turned out to be actually slower than using a local variable (I measured it).
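For reference, the non-local variant I measured looks roughly like this (same includes and size as the example above):

std::unordered_map<int, double> umap;   // moved out of foo, reused across calls

void foo(){
    umap.reserve(size);
    for (int i = 0; i < size; i++) {
        umap.emplace(i, i * 0.1);
    }
    // ... some code here which does something meaningful with umap
    umap.clear();   // keeps the bucket array, but this was slower overall for me
}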

+",326652,,326652,,1/25/2019 7:44,1/25/2019 7:44,How to optimize reusing a large std::unordered_map as a temporary in a frequently called function?,,1,11,,,,CC BY-SA 4.0,,,,, +386042,1,386045,,1/24/2019 5:58,,42,12003,"

According to When is primitive obsession not a code smell?, I should create a ZipCode object to represent a zip code instead of a String object.

+ +

However, in my experience, I prefer to see

+ +
public class Address{
+    public String zipCode;
+}
+
+ +

instead of

+ +
public class Address{
+    public ZipCode zipCode;
+}
+
+ +

because I think the latter one requires me to move to the ZipCode class to understand the program.

+ +

And I believe I would need to move between many classes to see the definitions if every primitive data field were replaced by a class, which feels like suffering from the yo-yo problem (an anti-pattern).

+ +

So I would like to move the ZipCode methods into a new class, for example:

+ +

Old:

+ +
public class ZipCode{
+    public boolean validate(String zipCode){
+    }
+}
+
+ +

New:

+ +
public class ZipCodeHelper{
+    public static boolean validate(String zipCode){
+    }
+}
+
+ +

so that only the code that needs to validate the zip code would depend on the ZipCodeHelper class. And I found another ""benefit"" of keeping the primitive obsession: it keeps the class looking like its serialized form, if any - for example, an address table with a string column zipCode.

+ +

My question is: is ""avoiding the yo-yo problem"" (moving between class definitions) a valid reason to allow the ""primitive obsession""?

+",248528,,248528,,2/20/2019 10:14,2/20/2019 10:14,"Is ""avoid the yo-yo problem"" a valid reason to allow the ""primitive obsession""?",,9,13,11,,,CC BY-SA 4.0,,,,, +386043,1,,,1/24/2019 6:26,,4,345,"

I’m new to React/Redux tools and still figuring out some idiomatic design patterns.

+ +

Here's a scenario:

+ +

A user adding a place marker 📍 (that has some associated metadata) to an interactive map (Leaflet), which should be held in state.

+ +

I’ve just come from MeteorJS; in that situation it would go something like this:

+ +
    +
  1. User defines marker location on map
  2. +
  3. Leaflet callback with marker position
  4. +
  5. Add marker properties to state object in MiniMongo
  6. +
  7. A state observer see’s the state change
  8. +
  9. Calls custom module to add marker to map, init some event handlers etc.
  10. +
+ +

However, the subscribe hook in Redux doesn’t really give much to work with (compared to Meteor) - so far I’ve avoided it completely, and have been using componentDidMount/componentDidUpdate for initialisation and for responding to non-trivial state changes.

+ +

React's component lifecycle hooks work swimmingly when the ‘thing’ to be updated is a react component, but what about the above example - there is no React component to speak of?

+ +
+ +

So now I have:

+ +
    +
  1. User defines marker location on map
  2. +
  3. Leaflet callback with marker position
  4. +
  5. Dispatch action
  6. +
  7. Reducer updates state
  8. +
  9. ?
  10. +
+ +

Naive options

+ +
    +
  • Wrap redux subscribe with some kind of diff logic so I know what exactly needs to be updated in leaflet when state changes? This would not handle restore state gracefully though (see note below).
  • +
  • Use renderless components, i.e. React components that don't render any UI (return null), but exist within the virtual DOM and are still plugged into the component life-cycle? Others seem to be using this pattern (eg1, eg2) - which would work nicely, but it smells.
  • +
+ +

Notes

+ +
    +
  • I'd like the component structured in a way such that the state (and map marker) can be restored without any explicit intervention (which my meteor example doesn't really cover) - e.g. onComponentDidMount approach does this (if app is initialised with existing state the UI automatically reflects this).
  • +
  • The map marker is just an example to illustrate the problem; I typically work on SPAs with many 3rd-party components that fit this pattern, so a general solution would be great.
  • +
+ +

Happy to provide more detail if required - thanks!

+",83407,,83407,,1/24/2019 23:37,1/24/2019 23:37,How should a non-React component subscribe and respond to Redux state changes?,,0,5,1,,,CC BY-SA 4.0,,,,, +386051,1,,,1/24/2019 11:25,,2,141,"

I'd like to hear some pros and cons about where it's best to put Haskell typeclass instances. I have identified 2 possible cases and cannot decide for myself which one is best:

+ +
    +
  1. Put the instances together with the typeclass definition;
  2. +
  3. Put them together with the types that implement it.
  4. +
+ +

For the sake of this example, let's say we have a ToJSON typeclass. As the name says, it has a function that converts an a to JSON:

+ +
class ToJSON a where
+  toJSON :: a -> JSON
+
+ +

Pros for the first case. Putting all ToJSON instances in the actual typeclass file results in a separation of concerns. For example, the Color type should only care about its constructors and the functions that manipulate the color, not about JSON serialization. Let the JSON type worry about that.

+ +

Pros for the second case. Assuming we export only the Color type and not its constructors, putting the ToJSON instance in Color.hs allows the use of pattern matching. Also, having all of the instances near the type shows the developer all the things the type can do.
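A tiny sketch of the second option (the JSON type here is just a stand-in so the snippet compiles on its own):

-- Color.hs: the instance lives next to the type
data JSON = JString String deriving Show

class ToJSON a where
  toJSON :: a -> JSON

data Color = RGB Int Int Int   -- constructors not exported from the module

instance ToJSON Color where
  toJSON (RGB r g b) = JString (show (r, g, b))  -- pattern matching on the hidden constructors works here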

+ +

Keep in mind that this is a personal preference list based on experience. Feel free to add to or disagree about it.

+",293899,,,,,5/10/2019 18:04,Placing Haskell typeclass instances,,1,3,,,,CC BY-SA 4.0,,,,, +386053,1,,,1/24/2019 11:31,,1,58,"

I'm having a discussion at work about the claim that event-based architecture should not be used in a scenario where we have multiple producers and a single consumer.

+ +

In our company, we have many external integrations that in my view, would be allowed to send an event to our queue. My proposal is the following:

+ +
    +
  1. An integration receives info from a 3rd party. We fire a foo-discovered event;
  2. +
  3. Consumer on our main system is listening to our foo-discovered event and is processing it, ingesting it to the database;
  4. +
+ +

There are several integrations of this kind, let's say, up to 10.

+ +

My colleague's proposal is that each integration fires a dedicated event, like integration1-foo-discovered, and we process it in the same way as above.

+ +

Basically, he wants the producers to ""own"" the streams.

+ +

I don't feel that's necessary, since each integration does its work normalizing the payload and sending a standard event to the consumer.

+ +

Are there any problems with my approach? Can I have some references to enrich our discussion?

+",30867,,,,,1/24/2019 11:31,Event based architecture - multiple producers single consumer,,0,1,,,,CC BY-SA 4.0,,,,, +386059,1,,,1/24/2019 13:04,,1,169,"

I´m using a framework for javascript to display beautiful alert boxes. This framework uses another framework to actually display the boxes. So it´s something like that:

+ +
 let showAlert = function (type, message, title) {
+    opts = {};
+    opts.title = title;
+    opts.type = type;
+    opts.confirmButtonText = 'Confirm';
+    opts.text = message;
+
+    return swal(opts); // here it uses the concrete component sweet alert
+};
+
+messageFramework.success = function (message, title) {
+    return showAlert ('success', message, title);
+};
+
+messageFramework.error = function (message, title) {
+    return showAlert ('error', message, title);
+};
+
+ +

So the client uses the messageFramework abstraction, but the messageFramework uses the swal function of the sweet alert implementation. The message framework is a wrapper or an adapter?

+ +

Thanks.

+",125312,,,,,1/24/2019 13:21,Is this code a wrapper or an adapter?,,1,1,,,,CC BY-SA 4.0,,,,, +386062,1,,,1/24/2019 13:24,,3,40,"

My team and I are working on a large web application, and I've noticed that we're working in a way which I believe is an incorrect usage of flux (probably from lack of understanding).
+We're using Angular 6 and angular-redux, but we use angular-redux more as an event manager than application state management i.e. dispatching changed data from a certain event from one component to other components, using the reducers as the observables which all other components subscribe to.
+So basically, there isn't a real state.

+ +

So I've been wondering - if I'm correct and we're using it wrong, how should we manage this kind of events and data transfers? (I believe it might be angular services).
+But if I'm wrong, then I'm asking why would anyone use this kind of event management, since it's terribly challenging for debugging and testing

+ +

Thanks a lot, waiting to hear your opinions!

+",326697,,,,,1/24/2019 13:24,Correct event managment architecture in web application,,0,0,,,,CC BY-SA 4.0,,,,, +386066,1,386110,,1/24/2019 13:51,,16,10489,"

I am assigned to a project where we have about 20 micro-services. Each of them is in a separate repository without any references to any other, apart from one Nuget package where we maintain some generic code like math functions. Each service reference the others by endpoints.

+ +

The advantage of this is:

+ +
    +
  • Each service is highly independent. (In reality this point is up for discussion, as a change to the API of one service is likely to effect multiple others)

  • +
  • Best practice – according to people I have talked to

  • +
+ +

The disadvantage are:

+ +
    +
  • No code re-use.

  • +
  • The some DTO objects are defined multiple times (maybe up to 10ish)

  • +
  • Each ServiceCommunication helper class that wraps the endpoints of a service for ease of use are duplicated multiple times, once for each repo.

  • +
  • API changes are hard to keep track of, often we see the failure in Test/Production
  • +
+ +

I think the following is a better way to structure the project: +One repo. +Each micro-service provides a Server.Communication helper class that wraps the endpoinds and a selection Server.Dto types which the Server.Communication class returns from its API calls. If an other service whishes to use it, it will include this.

+ +

I hope I explained the problem well enough. Is this a better solution that will address some of my issues or will I end up creating unforeseen problems?

+",307789,,,,,1/25/2019 12:55,How to structure microservices in your repository,,1,3,8,,,CC BY-SA 4.0,,,,, +386067,1,386068,,1/24/2019 13:55,,0,120,"

I have this method

+ +
arr // input
+
+
+new_ seq = []
+
+for i in arr:
+  new_seq.append(i)
+  __new_seq = [x for i, x in enumerate(arr) if x not in new_seq]
+  for j in __new_seq:
+      new_seq.append(j)
+      __new_seq = [x for i, x in enumerate(arr) if x not in new_seq]
+      for k in __new_seq:
+          new_seq.append(k)
+
+ +

How to calculate the time complexity for this method +Please note that each loop has a smaller length than the one before

+",326707,,326707,,5/15/2019 11:47,5/15/2019 11:47,time complexity for 3 foor loops different leangth,,1,3,,43490.0875,,CC BY-SA 4.0,,,,, +386072,1,,,1/24/2019 15:02,,1,30,"

We are developing a system whereby documents/files will be stored on a specialized Content Server and uploaded via a client.

+ +

However we want to be able to develop this so if we need to, we can swap the Content server out with a generic network folder. This is so we can test the other parts of the system in isolation and treat the file system as a sort of mock Content server. There is also a possibility with some installations we might actually use this strategy too.

+ +

We could store the file on the network renamed to a Guid, and then store that Guid in the client database for easy access.

+ +

However I have some questions:

+ +
    +
  1. How do we spread the files out so it is not just one folder with +1000s of Guids in one folder? Do I split into separate folders based +on the first few digits of the filename/date etc.?

  2. +
  3. If the file is renamed to a Guid, is there a way I can easily store +the filename and other meta-data or should this be the +responsibility of the client?

  4. +
+",326717,,209774,,6/17/2020 21:20,6/17/2020 21:20,Best way to spread/shard file location on a network UNC,,0,0,,,,CC BY-SA 4.0,,,,, +386074,1,,,1/24/2019 16:22,,1,275,"

I have the following branches in TFS

+ +

Dev

+ +
    +
  • 2.3.0-Printing
  • +
+ +

Main

+ +

Rel

+ +
    +
  • 2.3.0
  • +
+ +

Dev\2.3.0-Printing was branched off of Main and after completing and testing was merged back up into Main, which was then branched into Rel\2.3.0.

+ +

A bug has been found in 2.3.0 so I'm trying to figure out a branching strategy for 2.3.1 which will include the bug fix.

+ +

I was told by a previous co-worker who seemed fluent in TFS to branch from Rel\2.3.0 into Dev (ie, don't use the existing/old/previous 2.3.0-Printing branch), so I'm thinking of calling it Dev\2.3.1-Printing.

+ +

Or, should I first branch Rel\2.3.0 to Rel\2.3.1, then fork a Dev\2.3.1 branch that can later be merged back into Rel\2.3.1, effectively making Rel\2.3.0 the ""Main"" of all 2.3 revisions?

+ +

Branching off of Main is no longer an option since there have been additional releases since 2.3.0, which would result in a branch that included these newer changes.

+ +

The bug fix needs to make its way back up into Main and I'm thinking it's probably a good idea to avoid circular branch relationships.

+",100503,,100503,,1/24/2019 16:37,1/24/2019 16:37,TFS branching strategy for a patch (bug fix),,0,7,,,,CC BY-SA 4.0,,,,, +386075,1,,,1/24/2019 16:54,,1,168,"

Months ago I began a new web project which, in the beginning, seemed like a small application with very few users. I began the project by using the awesome Hackathon Starter WebApp Boilerplate by Sahat because the initial requirements seemed to fit into that boilerplate. As you may expect, the project requirements have evolved since then and right now I have a large-scale SaaS application. The front end code was completely decoupled from the initial project in order to use React, and I stripped out everything from the original project to leave it only as a back end REST API service. Right now, I need to implement a bunch of features and the project's codebase seems to have become full Bolognese spaghetti code which I need to refactor ASAP.

+ +

For the past 2 months I've been diving into architecture design books, posts and videos, but my app keeps growing without me being able to make a decision and refactor the back end code. Right now, it is just plain MVC with mongoose on the Models, express on the server and everything from business logic, to authentication, validation and data presentation inside the Controllers.

+ +

I've tried (literally) thousands of different back end boilerplates in multiple languages and specs, but I would like to keep coding in NodeJS (TypeScript welcome). My question is, for a newbie with no experience with large-scale apps, how do I refactor/rewrite my code into a more robust architecture using NodeJS? Should I use ORMs? Should I use DDD, Service/Repository? Should authentication with JWT be a middleware? Is using RBAC acceptable? Also, the database design on MongoDB became so difficult that I (literally yesterday) switched to PostgreSQL, as my data began to become more relational than I anticipated. I didn't expect this project to grow this big and I am panicking, as I can't afford to rewrite/refactor the code more than once for at least the next 8 months, and I will be the only developer involved on the back end side.

+ +

Thanks in advance and sorry for my awful English.

+",326734,,,,,1/24/2019 16:54,Refactor MVC to more scalable architecture?,,0,7,,,,CC BY-SA 4.0,,,,, +386077,1,,,1/24/2019 17:26,,1,453,"

While studying the visitor design pattern i found this phrase:

+
+

You can use Visitor along with Iterator to traverse a complex data structure and execute some operation over its elements, even if they all have different classes.

+
+

Visitor design pattern

+

While searching on the internet if it was a good idea I found different opinions. Some said that you can combine the two patterns:

+
+

You can perfectly combine both: use an iterator to move over each item of a data structure and pass a visitor to each item so that an external responsible performs some operation of the item.

+
+

iterator vs visitor

+

Others said that you can but it depends on the complexity of the operation:

+
+

The Visitor is not as such "unneeded" if you have an Iterator. It depends on the complexity of the operation(s) you want to apply to the items you are iterating over.

+
+

iterator and visitor

+

I am wondering in which situations it is useful to have such a combination of patterns. What kind of operation is considered complex?
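For reference, a minimal sketch of the combination being discussed (C#; Shape, Circle, Square and AreaVisitor are made-up example types): the iterator only walks the heterogeneous collection, while the visitor carries the operation and double dispatch picks the right overload per element.

using System;
using System.Collections.Generic;

interface IShapeVisitor { void Visit(Circle c); void Visit(Square s); }

abstract class Shape { public abstract void Accept(IShapeVisitor v); }
class Circle : Shape { public double Radius; public override void Accept(IShapeVisitor v) => v.Visit(this); }
class Square : Shape { public double Side;   public override void Accept(IShapeVisitor v) => v.Visit(this); }

// The operation (here: summing areas) lives entirely in the visitor.
class AreaVisitor : IShapeVisitor
{
    public double Total { get; private set; }
    public void Visit(Circle c) => Total += Math.PI * c.Radius * c.Radius;
    public void Visit(Square s) => Total += s.Side * s.Side;
}

class Program
{
    static void Main()
    {
        IEnumerable<Shape> shapes = new List<Shape> { new Circle { Radius = 1 }, new Square { Side = 2 } };
        var visitor = new AreaVisitor();
        foreach (var shape in shapes)     // the iterator walks the structure
            shape.Accept(visitor);        // the visitor performs the operation
        Console.WriteLine(visitor.Total);
    }
}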

+",278757,,-1,,6/16/2020 10:01,1/24/2019 17:26,Combination of visitor and iterator pattern,,0,3,1,,,CC BY-SA 4.0,,,,, +386084,1,,,1/24/2019 21:39,,1,561,"

Please see the below diagram. +

+ +

There are two apps that each have a different set of functions, User A is a user of App1 and User B is a user of App2. They should not be able to log in directly to the other app.

+ +

App1 and App2 both call each other to share some of their functions, so although a user cannot log in to the other system, they should still have enough access for a call to be made to a specific function of the other app.

+ +

User A can access the green functions and User B can access the red functions.

+ +

There is actually a third user that is an admin and should be allowed to log into both apps and access an enhanced level of functions.

+ +

Ideally the login/authentication system should be standalone and the app's can call it to check a users access to functions.

+ +

Is there a standard way of achieving this, like with Oauth2 or JWT?

+ +

Is it unrealistic to think that each time a function is called, the user's level is checked by the external auth service?

+",326754,,,,,1/25/2019 4:05,"Multiple app authentication, universal user login, best practice",,1,0,1,,,CC BY-SA 4.0,,,,, +386086,1,386112,,1/24/2019 21:55,,6,2138,"

I'm relatively new to AWS. I'm working with what I think is a common pattern:

+ +
    +
  1. Put file in S3 bucket
  2. +
  3. Do something with said file in Lambda function
  4. +
+ +

I see two options for making this link (ignoring SNS):

+ +
    +
  • invoke the lambda when an S3 event occurs
  • +
  • send the S3 event to an SQS queue, which in turn triggers the lambda
  • +
+ +

It won't be handling a huge number of events to begin with, but the hope is to hook up a lot more buckets to this lambda in the future. Immediate invocation, message ordering, and speed/time is not of critical importance. However retries, capturing files that ""fail"", DLQs, and all that good stuff is important.

+ +

I'm leaning towards the SQS route. I think it fits better with my requirements, it's the one I've managed to make a working terraform module for, and I don't think it will add to my bill in any significant way.

+ +

Is this a matter of opinion or is there an objectively better option here?

+",153014,,,,,1/25/2019 13:20,Invoking lambda directly from S3 vs going via SQS,,1,4,3,,,CC BY-SA 4.0,,,,, +386090,1,,,1/25/2019 7:26,,1,129,"

Suppose there are 5 pages to a website. The user flow will be 1->2->3->4->5->FINISH. What I want to implement is that whenever a user enters page 3 a timer starts, and if he doesn't finish a task within that time he is redirected to page 2.

+ +

I am thinking of implementing it using a Higher Order Component (HOC) which starts the timer on mount, and wrapping the components of pages 3-4-5 (which are part of the session) with the Higher Order Component created earlier.

+ +

Are there any flaws in my approach, considering a user can open multiple tabs? Any new suggestions?

+",326778,,,,,1/25/2019 7:26,How to Implement Certain timeout session on multiple pages of a website - REACTJS,,0,2,1,,,CC BY-SA 4.0,,,,, +386094,1,386171,,1/25/2019 8:35,,1,25,"

I'm developing a small web-app to help users manage shopping lists.
+ One of the required features is the ability of the application to notify the user if a shop of the same category is near him/her.
+To do so I'm using the Foursquare API.

+ +
    +
  • Said API needs a KEY and CLIENT_ID to make the request.
  • +
  • Requests are made by the client via some javascript.
  • +
+ +

Should I have the KEY and CLIENT_ID in the client-side javascript or should the script use my web-app as a proxy for requests?

+ +

If the former is the better approach how can I safely save the KEY and CLIENT_ID in the client-side script?

+",326784,,326536,,1/27/2019 9:15,1/27/2019 9:15,Geolocation client side requests: Key and ID storage,,1,0,,,,CC BY-SA 4.0,,,,, +386097,1,,,1/25/2019 9:00,,1,52,"

I'm building a system where I need to measure certain algorithms, which are written by the end users. Obviously running external code is a huge security risk, therefore it needs to be isolated. The current solution is to start up docker containers for each submission, run the code inside the container, then terminate it. In terms of scalability, this solution seems rather limited, it already stutters under moderate stress tests. What I had in mind for improvements:

+ +
    +
  • Container pooling. Having a pool of containers started, kept alive and assigned to a session when it's needed. I don't like the idea of sharing containers between users, so once the session is done, the container should be replaced by a new instance in the pool.
  • +
+ +

Would this solve anything or I'm just increasing the complexity? Should I approach this issue from a completely different angle?

+",321831,,,,,1/25/2019 9:00,Dynamically start containers to isolate code compilation / running,,0,0,1,,,CC BY-SA 4.0,,,,, +386098,1,,,1/25/2019 9:07,,0,130,"

I used to call functions which return an int error code or 0 on success like this:

+ +
int tmp = function_a() ?:
+          function_b() ?:
+          function_c();
+
+if (tmp)
+        handle_error();
+
+ +

Now I'm working on a project which uses -std=c90 -Wpedantic, and I get:

+ +
warning: ISO C forbids omitting the middle term of a ?: expression [-Wpedantic]
+
+ +

Is there any good ISO C approach for this? I want to avoid this:

+ +
int tmp;
+
+tmp = function_a();
+
+if (tmp)
+        handle_error();
+
+tmp = function_b();
+
+if (tmp)
+        handle_error();
+
+ +

And this:

+ +
int tmp;
+
+if ((tmp = function_a()))
+       handle_error();
+
+ +

And mangling code with #define macros.

+",326785,,,,,1/25/2019 9:29,Chain function calls which return error codes or 0 on success in C,,1,7,,43493.73264,,CC BY-SA 4.0,,,,, +386100,1,,,1/25/2019 9:44,,1,97,"

I am building a mobile app which lets users search for POIs around them on a map. I am curious to know what would be the best way to ""group/paginate"" these results in order to avoid downloading hundreds of search results at once from the server?

+ +

I looked at Google Maps as an example, and the way they seem to do it is they only return a fixed number of results, spread across the map, and once you start zooming in somewhere, you start to see more results in that area. In other words, they show one search result per certain size of area. This seems like a good approach, but I don't know how I would implement such behaviour on the server side.

+ +

Any input is highly appreciated!

+",326787,,,,,1/25/2019 11:59,Grouping search results on a map,,2,0,,,,CC BY-SA 4.0,,,,, +386102,1,,,1/25/2019 10:29,,2,714,"

DDD/""hexagonal architecture"" insist on separating the domain, aka model, from infrastructure requirements. This looks clean and logical until you realize that storing your domain object in memory might hurt performance if the domain object happens to be too large, and even more so fetching such domain object from persistent storage/network.

+ +

The proposed ""cure"" which I have seen, is centered around defining the aggregates to be ""small"" and around ""eventual consistency"". Both seem to delegate the business rules to stateless (""domain"") services and not the aggregates (domain entities) behaviour. So this appears to favour the creation of ""anemic domain objects""; however ""rich domain objects"" seems to me the main purpose of DDD (together with the creation of the ubiquitous domain language).

+ +

What do you, as a DDD practitioner, think of this concern? (I am not currently a DDD practitioner, but given the hype around ""microservices"", I might perhaps become one).

+",39034,,39034,,1/25/2019 10:39,1/26/2019 17:02,"Aren't Domain Driven Design/""hexagonal architecture"" with real world constraints and the insistence on ""non anemic model"" contradictory?",,3,1,,,,CC BY-SA 4.0,,,,, +386106,1,,,1/25/2019 11:38,,1,595,"

I'm trying to apply DDD principles to an application that has a REST API in front and is backed by an SQL storage.

+ +

Here's the entity structure I have come up with so far:

+ +
Client: 1 ---- * Contract: 1 ----- * Contract-AddOn: 1 ----- * Feature
+
+ +

The REST endpoints will basically represent the same structure. From

+ +
/api/clients
+
+ +

up to

+ +
/api/clients/:clientId/contracts/:contractsId/addons/:addonId/features/:featureId
+
+ +

I'll also have some business rules like:

+ +
    +
  • A client can have only one active contract
  • +
  • A contract's addon cannot start or end outside the start-end period of the contract
  • +
  • ....
  • +
+ +

With these 2 rules in mind it sounds to me like the Client should ensure that there's no more than one active contract => Client should be the aggregate (root) of Contracts.
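A minimal sketch of how that invariant could sit inside a Client aggregate (C#; the names and the bare exception are illustrative only):

using System;
using System.Collections.Generic;
using System.Linq;

public class Contract
{
    public Guid Id { get; } = Guid.NewGuid();
    public DateTime Start { get; set; }
    public DateTime? End { get; set; }
    public bool IsActive => End == null || End > DateTime.UtcNow;
}

public class Client   // aggregate root
{
    private readonly List<Contract> _contracts = new List<Contract>();
    public IReadOnlyCollection<Contract> Contracts => _contracts;

    public void AddContract(Contract contract)
    {
        // the one-active-contract rule lives inside the aggregate
        if (contract.IsActive && _contracts.Any(c => c.IsActive))
            throw new InvalidOperationException("Client already has an active contract");
        _contracts.Add(contract);
    }
}

The second rule (add-on dates inside the contract period) would sit one level down, on Contract, in the same style.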

+ +

And then addon fields are restricted by Contract fields => Addons are entities in a Contract aggregate.

+ +

So it seems like everything should go under Contract, but this feels clumsy. And then comes the REST layer which should access the Client aggregate and get the Contract for a client, get its addons to remove a feature from an addon.

+ +

I guess I'm missing something in that model. I was considering if the two mentioned business rules are not just policies, but I can't quite grasp the difference between the two and which is used when.

+ +

What would be another way to model such a relationship? Or maybe DDD is just too much for a simple model such as this one?

+",160451,,160451,,1/25/2019 12:53,1/25/2019 14:13,"DDD aggregates, entities, REST and how they all fit together",,2,0,,,,CC BY-SA 4.0,,,,, +386111,1,386124,,1/25/2019 12:57,,2,216,"

I'm writing an importer, it should fetch some data from the database and put that data into appropriate places.

+ +

Now the question is, should the importer itself fetch that data, or should the data (to import) be passed to the importer and the database fetching be done outside?

+ +

What's a better way to design this?
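For comparison, the two shapes being weighed could look roughly like this (C#; ImportRow and the interface names are made up for the sketch):

using System.Collections.Generic;

public record ImportRow(int Id, string Payload);

// Option A: the data is fetched elsewhere and handed in; the importer only imports.
public interface IImporter
{
    void Import(IEnumerable<ImportRow> rows);
}

// Option B: the importer also owns the fetching.
public interface ISelfLoadingImporter
{
    void ImportFromDatabase(string connectionString);
}

Option A tends to be easier to unit test and keeps the importer closer to a single responsibility, at the cost of one more collaborator that does the fetching.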

+",269824,,,,,2/7/2019 14:32,Single responsibility principle - importer,,2,0,,,,CC BY-SA 4.0,,,,, +386115,1,,,1/25/2019 13:58,,3,191,"

We're evaluating CQRS/ES for a high-volume subsystem in our app in order to take advantage of distributed systems and ensure uptime. This is my team's first time implementing this architecture, and I'm struggling to finalize the aggregate design because the core entity changes during the transaction.

+ +

This subsystem consists of various endpoints (e.g. a web form, a physical kiosk device, and SMS) that authenticate users and allow them to log an activity. In this context, we have a concept of a Session, which tracks the things the user executes during the interaction. At some point in the interaction, the User is identified and is allowed to log an Activity.

+ +

Here are the challenges I'm having:

+ +
    +
  • The session doesn't ""belong"" to a user until and unless a valid user is authenticated during the process. We are interested in the sessions a user has, but for troubleshooting we're also interested in sessions that don't result in successful user authentication. It doesn't feel right to have a session be on its own for part of the transaction then get moved to a user after authentication.
  • +
  • The user's activity is the important thing we're getting out of the transaction. That activity is ""owned"" by the user, but it's also referenced by the session. I'm unclear whether the activity should be an aggregate root, whether the user should be, or whether we should keep focus on the session in this context and use event listeners to build relationships in a read model.
  • +
  • We are interested in these interactions and activities differently in other contexts. Sometimes we want to view them from the perspective of the endpoint. Other times we want to view them from the perspective of the user. Often for troubleshooting we have to infer the user based on metadata available to the session (Caller ID on an SMS, for example, lets us view if a user is using incorrect syntax in their messages, even if they're not authenticated).
  • +
+ +

I get that in CQRS/ES we can handle much of this with read models, but it's still unclear which entity/entities should serve as the aggregate root(s), and how commands should be constructed.

+ +

On one hand, it makes sense that the Session be the aggregate root, and commands should exist to log the interactions a user has with the endpoint, bind the user to the session once authenticated, and log an activity. Event listeners will then construct read models with the projections we are interested in (sessions by user, sessions by endpoint, activities by user, etc.)

+ +

On the other hand, it makes sense to have the Session be an aggregate root interested in the interactions between the user and the endpoint, and have the authenticated User be the aggregate root interested in the logging of an activity, with some sort of command in place on one or the other to connect the session and the user.

+ +

Hoping to get some outside perspective, perhaps from others who've gone down a similar road before.

+",39078,,,,,1/27/2019 17:11,"DDD - Aggregate that changes ""owner"" mid-process",,1,3,1,,,CC BY-SA 4.0,,,,, +386119,1,386128,,1/25/2019 14:28,,1,264,"

There's a DSL format for creating and distributing dictionaries. Every dictionary article in such a format looks like this:

+ +
algorithm
+    [m0][b]al·go·rithm[/b] {{id=000001018}} [c rosybrown]\[[/c][c darkslategray][b]algorithm[/b][/c] [c darkslategray][b]algorithms[/b][/c][c rosybrown]\][/c] [p]BrE[/p] [c darkgray] [/c][c darkcyan]\[ˈælɡərɪðəm\][/c] [s]z_algorithm__gb_1.wav[/s] [p]NAmE[/p] [c darkgray] [/c][c darkcyan]\[ˈælɡərɪðəm\][/c] [s]z_algorithm__us_1.wav[/s] [c orange] noun[/c] [c darkgray] ([/c][c green]computing[/c][c darkgray])[/c]
+    [m1]{{d}}a set of rules that must be followed when solving a particular problem{{/d}} [m3] 
+    {{Word Origin}}[m3][c darkslategray][u]Word Origin:[/u][/c]
+    [m3][c darkgray] [/c]{{d}}late 17th cent.{{/d}} [c dimgray]{{etymology}} (denoting the Arabic or decimal notation of numbers): variant (influenced by {{/etymology}} [/c][c darkslategray]{{lang}}Greek{{/lang}} [/c][c darkgray] [/c][c darkcyan]{{ff}}arithmos{{/ff}} [/c][c darkgray] [/c][c darkslateblue][b]{{etym_tr}}‘number’{{/etym_tr}}[/b][/c][c dimgray]{{etymology}}) of {{/etymology}} [/c][c darkslategray]{{lang}}Middle English{{/lang}} [/c][c darkgray] [/c][c darkslategray]{{etym_i}}algorism{{/etym_i}}[/c][c dimgray]{{etymology}}, via {{/etymology}} [/c][c darkslategray]{{lang}}Old French{{/lang}} [/c][c dimgray]{{etymology}} from {{/etymology}} [/c][c darkslategray]{{lang}}medieval Latin{{/lang}} [/c][c darkgray] [/c][c darkcyan]{{ff}}algorismus{{/ff}}[/c][c dimgray]{{etymology}}. The {{/etymology}} [/c][c darkslategray]{{lang}}Arabic{{/lang}} [/c][c dimgray]{{etymology}} source, {{/etymology}} [/c][c darkcyan]{{ff}}al-K̲wārizmī{{/ff}} [/c][c darkgray] [/c][c darkslateblue][b]{{etym_tr}}‘the man of K̲wārizm’{{/etym_tr}} [/b][/c][c dimgray]{{etymology}} (now Khiva), was a name given to the 9th-cent. mathematician Abū Ja‘far Muhammad ibn Mūsa, author of widely translated works on algebra and arithmetic.{{/etymology}} [/c]
+
+ +

I need to parse it to HTML in a Java application.
My question is how to do it. I have thought about two options:

+ +
    +
  • write multiple regex expressions which will cover all cases
  • +
  • parse it to something like a semantic tree by dividing to nodes, and each node parse on its own
  • +
+ +

I have absolutely no experience with this kind of task, so I am asking for advice and possible pitfalls. Any help will be appreciated!

+",161072,,,,,1/25/2019 19:24,How to parse DSL file to HTML?,,2,5,1,,,CC BY-SA 4.0,,,,, +386123,1,,,1/25/2019 14:55,,0,124,"

Let's say I have a class with a property that returns an array of strings.

+ +
public static string[] MyStrings
+{
+    get { return new string[] { ""Foo"", ""Bar"" }; }
+}
+
+ +

Will this create multiple instances of MyStrings every time it is referenced? Are there compiler optimizations that effectively make it a singleton, or do I have to explicitly do something like this:

+ +
private static string[] _myStrings = new string[] { ""Foo"", ""Bar"" };
+public static string[] MyStrings
+{
+    get { return _myStrings; }
+}
+
+",246716,,,,,1/25/2019 19:31,Does returning an instance directly in a property create duplicate instances?,,1,8,,,,CC BY-SA 4.0,,,,, +386126,1,,,1/25/2019 15:17,,0,136,"

I have an ASP.NET Core 2.2 MVC site, that uses Facebook as an identity provider. Users can click the login button, they are redirected to Facebook to enter their credentials, and are then redirected back to the site. At that time they are authenticated, and I have a number of claims like name idenfifier, email etc. This works fine.

+ +

But now I also want a separate web API, which will be consumed by the MVC site. 

+ +

But this web API should of course also be protected, as I want to handle authorization in the web API; and for that I need to know the identity of the caller.

+ +

So my question is: how is this kind of security normally implemented? I guess I have to use a bearer token, which is sent with each call to the web API, but how do I generate this token? What is the architecture that is normally used for this kind of scenario?

+ +

Thanks for any hints!

+",36108,,,,,1/25/2019 15:17,Authenticated ASP.NET Core MVC site consuming web API,,0,2,,,,CC BY-SA 4.0,,,,, +386129,1,,,1/25/2019 16:22,,0,65,"

So, I wasn't sure if this was the place, but I have faith the mods will move it if necessary.

+ +

I'm about to leave on a trip to another country with the management team for a new project. It's my first introduction to a new industry and we will be attending a conference on the industry of the new product we're building and also meeting with our new clients.

+ +

So let me share my concerns: I'm generally new to this. I've been promoted recently, and so I'm just trying to wrap my head around a process for being ready for this entire experience.

+ +

So, what are key things I should be focusing on? What are things I should be keeping an eye out for?

+ +

I'm going to be doing some analysis and I'm generally comfortable with that process, however, it seems we will be doing a LOT of things in very little time. So I'm just trying to prepare myself to absorb as much information as possible (I'm open to tool suggestions for this. For a phone or tablet)

+ +

What I would like is a sort of checklist of things to do or prep for or look for. Also any advice outside of that checklist would be nice. I've never done a trip like this so I'm not sure what to expect.

+ +

Thank you all in advance.

+",315685,,,,,1/25/2019 20:27,Requirements Gathering in a Foreign Country,,1,1,,43490.9875,,CC BY-SA 4.0,,,,, +386130,1,,,1/25/2019 16:26,,0,112,"

I usually use IoC frameworks to inject dependencies that are services. Is it ok to mark classes that are data objects as IoC components?

+ +

To be more clear I will give an example.

+ +

I have an abstract data class A in one assembly.

+ +
  [IoCBase]
+  [Serializable]
+  public abstract class A
+  {
+     //possibly some method signatures that should be overriden in derived classes
+  }
+
+ +

I expect some assemblies that reference my assembly to define classes derived from A. The CustomIoCConcrete attribute is used to mark all the class types that I want to inject later; its parameters are a name and a version of the implementation (there can be more than one with the same name, but in a different namespace).

+ +
  [CustomIoCConcrete(""name"", 2)]
+  [Serializable]
+  public class B : A
+  {
+     //possibly some overriden methods and fields defined by client.
+  }
+
+ +

Then I want to gather all the types of these data classes in my Holder class using IoC. I want my Holder class to create an instance of one of the classes that it is holding types for. The injected collection is of the IMetadata type - a type binding the IoCBase type with an object that is marked with CustomIoCConcrete. This type has a method CreateInstance() that will create an instance of the desired type.

+ +
[IoCConcrete]
+class Holder : IHolder
+{
+    readonly IEnumerable<IMetadata<A, CustomIoCConcrete>> versionedClasses;
+
+    public Holder(
+        IEnumerable<IMetadata<A, CustomIoCConcrete>> versionedClasses)
+    {
+        this.versionedClasses = versionedClasses;
+    }
+
+    public A GetObject(string name, int version)
+    {
+        return versionedClasses.SingleOrDefault(o => o.Metadata.Name == name
+            && o.Metadata.Version == version)?.CreateInstance();
+    }
+}
+
+ +

So to sum up, I am using IoC to gather types that inherit from a certain base class and are marked with a certain attribute. Then I use the class holding these types to create a concrete instance of one of the types that it is holding. The definitions will be created by the user of the library. Another way would be to use factories, but then the user would have to create both factories and data classes.

+",326823,,326823,,1/25/2019 19:07,1/25/2019 19:07,Is using IoC with data classes a good practice,,0,6,,,,CC BY-SA 4.0,,,,, +386131,1,386133,,1/25/2019 16:26,,10,811,"

Java and C# provide memory safety by checking array bounds and pointer dereferences.

+ +

What mechanisms could be implemented into a programming language to prevent the possibility of race conditions and deadlocks?

+",33996,,,,,1/26/2019 0:51,How could thread safety be provided by a programming language similar to the way memory safety is provided by Java and C#?,,2,6,3,,,CC BY-SA 4.0,,,,, +386132,1,,,1/25/2019 17:13,,1,97,"

Background

+ +

I am currently building this project with VBA, just to keep in the back of your mind when thinking about my question.

+ +

Imagine 2 adjacent blocks, in Excel. The first block is made up of columns A, B, and C. The second block is made of columns D, E, and F.

+ +

The first block is filled with data from a file published online as a text file. I have written a number of subroutines that will import the data I want into columns A, B, and C based on an active status. The most important data in block 1 would be in column A, it is an ID number that will never change. The subsequent data in columns B and C may change at some point in the future. When the imported data is refreshed, all of the cells in columns A, B, and C are deleted and repopulated. This data can include more or less entries on refreshing, than the previous refresh.

+ +

In the second, adjacent block, the data in columns D, E, and F will be filled out by a user with information about the object in the first block. I say object referring to the ID in column A, as B and C can change.

+ +

Please follow along with my imaginary time frame as it will help me ask my question later.

+ +

Week 1 - See Image 1: The user opens the workbook; Block 1 is imported with the ""Update button"" and Block 2 is filled out by the user for the first time.

+ +

+ +

Week 2 - See Image 2: The user opens the workbook; Block 1 is refreshed with the ""Update button"" and now differs from week one. Before the user changes anything in block 2, there is an issue with the data.

+ +

+ +

As the Week 2 data is brought in, the week 1 data is deleted. In week 2 only 3 id's are imported rather than the initial 5 id's.

+ +

Question

+ +

My question revolves around how to handle the issue presented in week 2. Block 1 now holds values in 3 rows, even though the data initially held 5 rows. Block 2's data is still in the order of week 1, so the data is now misaligned and there are two extra rows of data that do not correspond to the ID's currently imported.

+ +

What are my options to deal with the excess data in block 2 and then realign blocks 1 and 2 correctly to look like this? Also to potentially store the data for what was removed...

+ +

+ +

My thoughts

+ +

I am not a developer of any sort, just a young engineer and I thought this would be a good tool for my company so I am trying to work my way through it.

+ +

My first thought would be to ID the block 2 data with the ID from block 1 in Week 1, as this piece of data will never change (even if the name, status, or favorite items change). On refreshing the data there would be some sort of subroutine to then match the block 2 data to the new block 1 data, remove the excess, and possibly store it somewhere, so the block 2 data could be retrieved if an ID was reintroduced into block 1.

+ +

This seems convoluted to me; I am sure there is a better way to do this (maybe even in something outside of Excel). The only reason I chose Excel was because the users will need to be able to work on block 2 in Excel.

+ +

If you have any ideas regarding better potential software, a different language with an interface I would have to build, or how to continue building it in Excel (which would be preferred at this point, even if it would require rearranging the worksheet, such as holding block 1 in a separate worksheet from block 2 and then merging the data later), I would like to hear what you have to say!

+ +

Thank you.

+ +
+",326828,,,,,10/16/2020 21:06,Data Matching In VBA - Best way to deal with dynamic data and user entry?,,1,0,,,,CC BY-SA 4.0,,,,, +386136,1,386179,,1/25/2019 19:02,,0,1230,"

My apps commonly have one or more prop holders. These holders contain config data used in the app such as:

+ +
@Component
+@ConfigurationProperties(prefix=""app.orders"")
+@Data //lombok
+public class OrderProps {
+
+     private int maxItems;
+     // other order props
+}
+
+ +

Various components that use one or more of these props get injected with the prop holder:

+ +
@Service
+public class OrderService {
+
+    private int maxItems;
+
+    public OrderService(OrderProps props) {
+         maxItems = props.getMaxItems();
+    }
+
+ }
+
+ +

In this example, OrderService is only dependent on maxItems. Other classes may also need a varying number of properties from OrderProps.

+ +

Doesn't injecting @ConfigurationProperties classes violate the Law of Demeter? Specifically, doesn't it mask the real dependency (in this case an int) as described here?

+ +

Is there a way to design this without violating it? Littering code with @Value(""${app.orders.maxItems}"") or the perhaps less problematic @Value(""#{orderProps.maxItems}"") seems to have its own set of problems, especially when refactoring / maintaining.

+",90340,,,,,2/7/2020 2:27,Using Spring Boot's @ConfigurationProperties without violating Law of Demeter,,2,0,,,,CC BY-SA 4.0,,,,, +386138,1,,,1/25/2019 19:45,,3,148,"

I'm at the point where results need to be cached to make the application more responsive.

+ +

From experience in a previous project, countless (to say the least) bugs occurred because there was a lot of cached data which was never updated correctly, because no one had the mental capacity to remember where everything was, until someone (the end users...) noticed it and reported the bug. Now that I'm in charge this time around, I want to avoid such mistakes as much as possible. I also have to deal with some very inexperienced people and would like to safeguard against mistakes happening as much as I reasonably can.

+ +

Are there any strategies or design patterns I can use to mitigate the risk of someone forgetting to update cached values?

+ +
+

As a hypothetical example, suppose we have a list of clients with a first and last name. Now suppose there is a suffix tree to help look up names quickly. If someone updates a client's name, they may forget to update the suffix tree.

+
+ +
// A simple example
+class Person {
+    String firstName;
+    String lastName;
+}
+
+ +

At first I thought ""unit tests will cover it"" but with everything being very modular, such things are beyond the scope of unit tests. Then I thought about it being covered at the integration test level but then someone might not update the integration tests to have the changes, especially if they forgot about the cached values in the first place.

+ +

The next thought was to wrap everything under an encapsulating class which only exposes mutation methods, but then the setter methods might become verbose. Example:

+ +
class PersonTracker {
+    SuffixTree lastNameTree;
+    List<Person> people;
+
+    void addPerson(first, last);
+    void updatePerson(first, last, newFirst, newLast);
+    void removePerson(first, last);
+}
+
+ +

However, the downsides are the amount of code repetition, making it so only the specific class above can mutate the underlying person, exposing all the methods yet again (repeating myself in code), and combining multiple cached values into this one class, where it may grow towards some kind of monstrous 'God class', since for the issue I will be dealing with there won't just be one set of cached values as in the example above.

+ +

I've thought of making each data an 'observable' class, so for example:

+ +
class Person {
+    Observable<String> firstName;
+    Observable<String> lastName;
+}
+
+ +

Is this a viable solution? Or am I going down an ugly rabbit hole?

+ +

Does there exist a better way of doing this?

+ +

Each method has its own pros and cons, which one is best for a long enduring code base?

+",168795,,,,,1/26/2019 17:14,Preventive measures for stopping developers from forgetting to update cached values,,2,9,,,,CC BY-SA 4.0,,,,, +386139,1,,,1/25/2019 20:25,,3,646,"

I'm building a microservice-based application (services according to DDD) and am about to implement an authorization service. There are API gateways and UI applications that access backend servers, and they all need to query the authorization service.

+ +

Consider an ebay-like app with staff, sellers, and buyers. In my system there are users, and each user can have multiple roles, e.g. be both a seller and a buyer. A seller can have multiple users associated with it (in case of multiple employees.) Each employee would have its own user id, but be associated with the same role (the seller id or buyer id.) Staff also have sub-roles: seller managers, buyer managers, and admins.

+ +

I am planning on a 3-tiered, permission-based authorization scheme:

+ +
    +
  1. If UI application: is the user logged in, and does the user id have permission to view this page?
  2. +
  3. If API gateway: is the API key valid? (No other checks here)
  4. +
  5. Actions (backend services) -- does the user id have permission to access this command or query on the backend server? (Application-level auth in DDD terms)
  6. +
  7. Scopes -- Command: can the user modify this subset of data? Query: return the subset of data allowed for this user id.
  8. +
+ +

Although I'd like feedback on all three, my question is more about the 3rd. Consider the following scenarios:

+ +
    +
  1. An endpoint for retrieving all the sellers in the database. A user with a ""staff"" role and sub-role ""admin"" can view a list of all sellers. A staff with sub-role ""seller manager"" can view a list of all the sellers he manages.
  2. +
  3. Same as above, but modifying a seller. The seller manager can only modify sellers he manages.
  4. +
  5. An endpoint for retrieving seller information by seller ID. Admins have access to all IDs. Seller managers have access to all seller ids they manage. Sellers have access to only their own seller ID.
  6. +
+ +

My questions:

+ +
    +
  1. In the above scenario, what is the best way to structure the API endpoints? Should I have a single endpoint for returning all sellers, for example? Or should I have separate endpoints based on the sub-role types? For scenario 3, should I have separate endpoints for each of ""seller"", ""seller manager"", and ""admin""? There would be duplication but perhaps more clarity.
  2. +
  3. How should scoping be done? I can have each permission in the authorization service carry an is_scoped boolean, e.g. the ""list_sellers"" permission with 'scoped' = true. If scoped, the domain service will somehow detect this and limit the results. But how should this be accomplished?
  4. +
+ +

Any thoughts and suggestions would be appreciated!
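As a point of reference, a minimal sketch of the is_scoped idea from question 2 (C#; Permission, SellerQuery and the parameter names are invented for the example, and the managed-seller list is assumed to come from the authorization service):

using System.Collections.Generic;
using System.Linq;

public record Permission(string Name, bool IsScoped);

public class SellerQuery
{
    // For an admin the permission is unscoped and everything is returned;
    // for a seller manager it is scoped and filtered to the sellers they manage.
    public IEnumerable<int> ListSellers(
        IEnumerable<int> allSellerIds,
        Permission permission,
        IReadOnlyCollection<int> managedSellerIds)
    {
        return permission.IsScoped
            ? allSellerIds.Where(id => managedSellerIds.Contains(id))
            : allSellerIds;
    }
}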

+",93779,,,,,1/25/2019 20:25,"Authorization, permissions, and scoping resources in a microservice/DDD architecture",,0,0,2,,,CC BY-SA 4.0,,,,, +386145,1,386348,,1/26/2019 1:35,,2,70,"

I am developing a new feature for a well-established memory package. The feature I'm implementing is about loading/copying/moving resources in and out of different types of structures like jars, libraries and so on.

+ +

I started by writing tests first. I felt I needed a test artifact, like a sample file/DLL/txt, whatever; however, I do not know where to put it. What is the best practice to follow in this type of situation?

+",301914,,,,,1/30/2019 14:37,Where to put an artifact which will be used only by test,,1,4,,,,CC BY-SA 4.0,,,,, +386147,1,,,1/26/2019 4:00,,2,240,"

I have a problem deciding on an algorithm for color quantization. The image that I want to do color quantization on is an RGB image with a resolution of 512 x 512. I want to reduce the number of color values in the pixels to reduce the image size.

+ +

I don't want to use the popular k-means algorithm, and I found this mean shift algorithm.

+ +

Mean shift is a clustering algorithm (like k-means) that has centroids, each with its own window; it looks for the densest part within each window and then moves the centroid towards that densest part. It keeps going like that until the centroids converge with one another and stop moving.
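To make the description above concrete, here is a compact, unoptimised sketch over RGB colors (C#; it treats every pixel as a starting centroid and, as written, is far too slow for a full 512 x 512 image, so take it only as an illustration of the steps, not a usable implementation):

using System.Collections.Generic;
using System.Linq;

static class MeanShiftSketch
{
    static double Dist2(double[] a, double[] b) =>
        (a[0] - b[0]) * (a[0] - b[0]) + (a[1] - b[1]) * (a[1] - b[1]) + (a[2] - b[2]) * (a[2] - b[2]);

    // One shift step: move the centroid to the mean of the colors inside its window.
    static double[] Shift(double[] centroid, IList<double[]> colors, double radius)
    {
        var inWindow = colors.Where(c => Dist2(c, centroid) <= radius * radius).ToList();
        if (inWindow.Count == 0) return centroid;
        return new[] { inWindow.Average(c => c[0]), inWindow.Average(c => c[1]), inWindow.Average(c => c[2]) };
    }

    // Converged centroids become the reduced palette for quantization.
    public static List<double[]> Cluster(IList<double[]> colors, double radius)
    {
        var palette = new List<double[]>();
        foreach (var color in colors)
        {
            var c = color;
            for (int i = 0; i < 50; i++)                  // iteration cap
            {
                var next = Shift(c, colors, radius);
                bool converged = Dist2(next, c) < 1e-3;
                c = next;
                if (converged) break;
            }
            // merge centroids that converged to (almost) the same point
            if (!palette.Any(p => Dist2(p, c) < radius * radius))
                palette.Add(c);
        }
        return palette;
    }
}

Each original pixel would then be mapped to its nearest palette entry, exactly as in k-means-based quantization; the difference is that the number of clusters falls out of the window radius instead of being fixed up front.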

+ +

Is it possible to use mean shift for color quantization? Both k-means and mean shift are clustering algorithms, but all I found was image segmentation for mean shift and k-means for color quantization.

+",326866,,9113,,1/28/2019 7:25,11/18/2020 12:04,Can Mean Shift Algorithm used in color quantization?,,1,1,,,,CC BY-SA 4.0,,,,, +386148,1,,,1/26/2019 5:32,,1,356,"

Recently I have been working on a rather large system with Vue.js for a single page app (SPA) and an API for the backend. The customer is concerned with the security, performance and maintainability.

+ +

So, I'm thinking along the lines of having the API broken into 3 separate pieces:

+ +
    +
  1. the security api which authenticates/authorizes, issues/revokes token with user roles/permissions and accounts. It would have a separate db
  2. +
  3. the business API that has the business functionality, accessible only thru the token issued by the security api. It would have a separate db
  4. +
  5. notification api that sends real time notifications and email or text alerts. Again a separate db.
  6. +
+ +

On the front end there would be separate apps: one for the security API, for managing roles, user accounts, monitoring logs, etc., and another one, the business app, for the complex business functionality.

+ +

I would like to know:

+ +
    +
  1. What are the advantages and disadvantages of this architecture (2 frontend apps and 3 web APIs) compared to a single frontend and a single web API?

  2. +
  3. Which one would be the recommended one, in view of the triple constraint of security, performance and maintainability?

  4. +
+",67385,,209774,,1/26/2019 11:07,1/26/2019 11:47,Breaking 3-tier architecture into multi-tier architecture,,1,1,,,,CC BY-SA 4.0,,,,, +386151,1,,,1/26/2019 10:53,,5,226,"

I'm working in this scenario

+ +

Post entity has many Image entities.

+ +

I also have a repository for each of these entities:

+ +
    +
  • PostRepository
  • +
  • ImageRepository
  • +
+ +

Since these entities are tightly related, when I get a Post I want to return the list of images as well.

+ +

I have two possible ways of doing this:

+ +
  1. The orchestrator (in this case a PostService) will call each one of the repositories independently to fetch the data.

     PROs:

     • For sure doesn't violate SRP

     CONs:

     • Creates extra dependencies: PostService might not need to have a dependency on ImageRepository.
     • Unnecessary calls to the database

  2. The PostRepository will return the whole graph, Post with Images (both options are sketched after this list).

     PROs:

     • Reduces dependencies
     • Simplifies the database query

     CONs:

     • Might violate SRP.
+ +
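The two options could be sketched roughly like this (C#; the record and interface names are illustrative only):

using System.Collections.Generic;

public record Image(int Id, string Url);
public record Post(int Id, string Title);
public record PostWithImages(Post Post, IReadOnlyList<Image> Images);

// Option 1: two narrow repositories; the service composes the graph.
public interface IPostRepository  { Post GetPost(int postId); }
public interface IImageRepository { IReadOnlyList<Image> GetByPost(int postId); }

public class PostService
{
    private readonly IPostRepository _posts;
    private readonly IImageRepository _images;

    public PostService(IPostRepository posts, IImageRepository images)
    {
        _posts = posts;
        _images = images;
    }

    public PostWithImages Get(int id) =>
        new PostWithImages(_posts.GetPost(id), _images.GetByPost(id));
}

// Option 2: one read-side repository returns the whole graph in a single query.
public interface IPostReadRepository { PostWithImages GetPostWithImages(int postId); }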

In the case of a writing scenario I clearly prefer the 1st option, since I want to keep validation of entities completely segregated. But for the reading scenario I think the 2nd option could be more optimal; however, I couldn't find any good argument or reading for this.

+ +

So which option is better for the reading scenario?

+",284828,,,,,1/28/2019 12:09,Is a repository return a graph of entities violating SRP?,,3,2,,,,CC BY-SA 4.0,,,,, +386158,1,,,1/26/2019 13:55,,-6,84,"

I want to know why maintainability is important.

+ +

All I know is that software requires updates and patches to either add new features or fix bugs and errors that occur.

+ +

But is there more to it than what I just said?

+ +

I also know that to make software maintainable, there are certain things to follow and do, such as good technical documentation and well-written code, which can be achieved by using proper naming conventions, comments where appropriate, spacing and indentation.

+ +

Are those the only ways to produce maintainable software, or is there something else?

+",,user326886,,,,1/27/2019 2:40,Software Development Life Cycle,,1,1,,,,CC BY-SA 4.0,,,,, +386159,1,386164,,1/26/2019 14:22,,-1,305,"

I'm currently writing a library in C++ and was wondering if I should log from within it. Googling the issue I came across this question but it makes reference to a logging facade. Is there anything equivalent for C++?

+",13236,,,,,1/26/2019 15:49,What is the C++ equivalent of a logging facade in Java?,,1,5,,,,CC BY-SA 4.0,,,,, +386160,1,,,1/26/2019 11:27,,0,2061,"

What is the difference between an operating system API and the system-call interface? I have read in many places that both act as an interface between a program and the kernel. So what is the actual difference between them?

+",,Vedant Pareek,,,,1/26/2019 15:04,System Call Interface and operating system API,,1,1,0,,,CC BY-SA 4.0,,,,, +386161,1,,,1/26/2019 15:02,,2,366,"

When we talk about sourcing events, we have a simple dual-write architecture where we write to the database and we write the events to a queue like Kafka. Other downstream systems can read those events and act on them accordingly.

+ +

But the problem occurs when trying to keep both the DB and the events in sync, as the ordering of these events is required to make sense out of them.

+ +

To solve this problem, people encourage using database commit logs as the source of events, and there are tools built around this, like Airbnb's SpinalTap, Red Hat's Debezium, Oracle's GoldenGate, etc. It solves the problems of consistency, ordering guarantees and so on.

+ +

But the problem with using the database commit log as the event source is that we are tightly coupling to the DB schema. The DB schema of a micro-service is exposed, and any breaking change in the DB schema, like a datatype change or a column name change, can actually break the downstream systems.

+ +

So is using DB CDC as an event source a good idea?

+ +

A talk on this problem and on using Debezium for event sourcing

+",265260,,265260,,1/26/2019 15:14,1/26/2019 19:19,is Event sourcing using Database CDC considered good architecture?,,1,0,1,,,CC BY-SA 4.0,,,,, +386163,1,386188,,1/26/2019 15:24,,0,581,"

Suppose I have 3 UML class diagrams. One is University, another is College, and the other is Department. Each University can have multiple colleges, and each college multiple departments. I have the UML class diagrams for these, where I assume there is an association relationship between University and College, and between College and Department.

+ +

Could you please help with these questions:

+ +
    +
  • Is association the correct type of relationship, or should it be aggregation or perhaps composition?

  • +
  • Is there a methodology to change UML class diagrams such as above to Java code?

  • +
+",325783,,326536,,1/28/2019 18:25,1/28/2019 22:37,Convert UML Class to Java Class,,2,1,1,,,CC BY-SA 4.0,,,,, +386166,1,,,1/26/2019 16:18,,2,180,"

I recently worked on a file explorer within a console window (like Midnight Commander). I want to use ncurses as the library. I have encountered some fundamental problems with my design and searched on the Internet for solutions and better practices.

+ +

However, I ended up scrutinising my whole programming work and my approach to problems. I have always stuck to Singletons and started to develop a pattern that uses different Managers to complete different tasks that should be separated (e.g. Database, GUI, Networking). I read on the Internet and heard from colleagues that Singletons are bad and that I should stop using them. I was able to use static references to access the managers.

+ +

So the basic structure of my problem is like this:

+ +


DirectoryController
 One instance (or tab) that contains files and a path. It is like the logical representation of a file view.

+ +

DirectoryManager
+ Manages all the directory controllers

+ +

GUIManager
 Manages the GUI and somehow needs access to the current directory controller (most likely via the DirectoryManager).

+ +

KeyboardManager
 Gets input from the user. Uses the dir controller to update the path and move the currently selected files.

+ +


+I want to separate these functionalities, because I think it is better to separate different tasks into task groups.

+ +

Maybe I am doing something wrong, but I just can't think of a different solution for how to split up these functionalities and still be able to use the functionality of another group :/

+ +

Edit:
+I have the same problem with libraries like Flask for Python. How am I supposed to store and access shared data when all of the Flask functions and all of the REST endpoints are static functions? I think that I somehow have to have an object that is accessible by all of these functions. But I cannot think of a solution for this either.

+",326893,,278015,,1/27/2019 12:23,3/1/2019 4:04,Design - What is the best way to separate functionalities?,,3,0,,,,CC BY-SA 4.0,,,,, +386173,1,386186,,1/26/2019 21:40,,2,205,"

I am designing a few different systems that revolve around a core system used to manage users, groups, associations between users, group memberships, user profiles and some other things.

+ +

System A is a task management system with tasks, assignments, attachments, etc...

+ +

System B is the user system.

+ +

System C is associated with some business workflow type of things (still figuring this one out)

+ +

Essentially I’m trying to determine how best to handle the fact that a table in database A will need to reference a table in database B.

+ +

For example: in the task system, a task can be assigned to a user, or a group defined in the user system.

+ +

One idea I read about was to have some processes replicate the required data to tables in each system. So, a table in the task system that holds the group ID and its description would be updated when necessary from the user system.

+ +

Is this approach viable? Has anyone had success with something similar or different?

+",326908,,326536,,1/28/2019 18:26,1/28/2019 18:26,Multiple database system design,,2,0,,,,CC BY-SA 4.0,,,,, +386174,1,387997,,1/26/2019 22:45,,1,215,"

In my web app I have a long running operation that is processing some entity in the background. The state of this process should be visible to the clients. During the processing the UI should show something like ""entity is being processed"" and when it's finished ""entity has been processed"". My first idea was to model the entity with a state enum. Something like:

+ +
public class MyEntity {
+    public ProcessingState State { get; set; }
+    ... other stuff ...
+}
+
+ +

The service method that triggers the processing would have to do something like:

+ +
public void Process(int entityId) {
+    var entity = _repo.GetEntity(entityId);
+    entity.State = ProcessingState.Processing;
+    // here we need to store the entity so that its state is known externally
+
+    ... do the heavy lifting that takes some time ....
+    entity.State = ProcessingState.Processed;
+}
+
+ +

This typically is called by a controller that also takes care of the unit-of-work:

+ +
 public void Process(int entityId) {
+     _service.Process(entityId);
+     _unitOfWork.Commit();
+ }
+
+ +

However, the storing of the entity just to make its state known to other callers seems to me problematic and it seems to break the unit-of-work pattern.

+ +

How do you tackle this?

+ +

Ideas:

+ +
    +
  • Model it as two distinct units of work: 1: update state; commit; 2: do the actual work; commit. But where to put this logic? If it's on the controller level, it needs to be repeated. Somehow it feels like part of the responsibility of the service to update the state.
  • +
  • The service is allowed to ""bypass"" the unit-of-work by directly storing the change on the DB. This probably would work but it feels hackish.
  • +
+ +

P.S. The controller is simplified here. The processing happens in a background task but conceptually it doesn't make a difference.
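For what it's worth, a minimal sketch of the first idea - two distinct unit-of-work commits - reusing the _repo/_unitOfWork shapes from the snippets above; the Failed state and the DoTheHeavyLifting call are assumptions added only for the sketch:

public void Process(int entityId) {
    // Unit of work 1: make the intermediate state visible to other callers right away.
    var entity = _repo.GetEntity(entityId);
    entity.State = ProcessingState.Processing;
    _unitOfWork.Commit();

    try {
        DoTheHeavyLifting(entity);                    // the long-running part
        entity.State = ProcessingState.Processed;
    } catch {
        entity.State = ProcessingState.Failed;        // assumed extra enum member
        throw;
    } finally {
        // Unit of work 2: persist the final state.
        _unitOfWork.Commit();
    }
}

Whether this lives in the service or in a thin application-layer wrapper is exactly the open question, but at least the state transitions stay in one place.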

+",112963,,112963,,3/4/2019 19:51,3/4/2019 19:51,Tracking state on entity and unit-of-work pattern,,3,0,,,,CC BY-SA 4.0,,,,, +386175,1,,,1/26/2019 23:35,,15,2748,"

As you may know, we can use GDB and set breakpoints on our code to pause execution for debugging.

+ +

My question is, how does GDB pause a process and let you view the content of registers, using i r for example? Aren't those registers being used by other OS processes constantly? How do they not get overwritten?

+ +

Is it only a snapshot of the content and not live data?

+",326912,,280490,,1/27/2019 18:58,1/28/2019 0:27,How does GDB pause an execution,,3,2,4,,,CC BY-SA 4.0,,,,, +386181,1,,,1/27/2019 4:45,,1,139,"

I have compiled the following simple c++ code:

+ +
#include <iostream>
+
+int main(){
+ int a = 5;
+ int b = 6;
+ long c = 7;
+ int d = 8; 
+ return 0;
+}
+
+ +

and here is the assembly:

+ +
    pushq   %rbp
+    .cfi_def_cfa_offset 16
+    .cfi_offset %rbp, -16
+    movq    %rsp, %rbp
+    .cfi_def_cfa_register %rbp
+    xorl    %eax, %eax
+    movl    $0, -4(%rbp)
+    movl    $5, -8(%rbp)
+    movl    $6, -12(%rbp)
+    movq    $7, -24(%rbp)
+    movl    $8, -28(%rbp)
+    popq    %rbp
+    retq
+
+ +

All the ints have an allocation of 4 bytes, which is normal. The long variable, movq $7, -24(%rbp), is getting 12 bytes allocated to it (instead of 8). Why is that?

+",326926,,,,,1/27/2019 5:56,How many bytes is a long in a 64 bit machine,,1,1,,43492.44444,,CC BY-SA 4.0,,,,, +386184,1,,,1/27/2019 8:21,,-1,229,"

Next semester we'll learn the MVC pattern in web development, but since it looks pretty interesting to me I decided to learn it now, but... there is a problem. Surfing through the pages I found two approaches to the MVC pattern (in PHP):

+ +
ONE
+
+ +

+ +
TWO
+
+ +

+ +

Now my question is which one is the right approach? (And is there any guide that you can point me to?)

+ +

Thank you!:D

+",326935,,326935,,1/27/2019 8:32,2/15/2019 14:49,PHP MVC - which approach is the right one,,1,2,,,,CC BY-SA 4.0,,,,, +386187,1,,,1/27/2019 9:56,,2,76,"

There's a complex model which is represented by complex JSON with multiple fields and nested objects. Recently we have discussed in which way to indicate to the backend that the object needs to be saved as new. There are two main proposals:

+ +
    +
  • remove the id field from the model to indicate to the backend that a new object needs to be created
  • +
  • explicitly tell it by creating a new nested object saveAsNew with the fields which need to be changed
  • +
+",161072,,,,,1/27/2019 16:30,How to indicate to backend that model need be updated or created new?,,1,0,,,,CC BY-SA 4.0,,,,, +386191,1,,,1/27/2019 11:17,,-1,107,"

An application provides a REST interface to offer status and statistics information.

+ +
localhost:1111/stats -> return JSON encoded statistics
+localhost:1111/status -> return JSON encoded system status
+
+ +

Now, the user also wants to see the results nicely presented in an HTML page, for example under

+ +
localhost:1111
+localhost:1111/index.html
+
+ +

It would also be possible that the user sets a field in the request header to request HTML instead of JSON, so localhost:1111/status serves either JSON or HTML based on the request header. This solution should not be part of the discussion.

+ +

I'd like to know, if, from a software engineering perspective, it is a good design to

+ +
    +
  • a) Keep the endpoints the way they are
  • +
  • b) Serve JSON under localhost:1111/json/{stats,status,...} and serve the HTML frontend under root
  • +
  • c) Spin up a whole new webserver for the HTML page (as in localhost:2222) and use the REST interface from a)
  • +
+ +

Edit: I guess it does not matter for the overall decision, but I am using embedded web servers.

+",321010,,321010,,1/27/2019 13:56,1/31/2019 9:36,Use different webservers for REST interface and HTML?,,2,4,,,,CC BY-SA 4.0,,,,, +386194,1,,,1/27/2019 15:44,,47,12043,"

When compiling C code and looking at assembly, it all has the stack grow backwards like this:

+ +
_main:
+    pushq   %rbp
+    movl    $5, -4(%rbp)
+     popq    %rbp
+    ret
+
+ +

-4(%rbp) - does this mean the base pointer or the stack pointer are actually moving down the memory addresses instead of going up? Why is that?

+ +

I changed $5, -4(%rbp) to $5, +4(%rbp), compiled and ran the code and there were no errors. So why do we have to still go backwards on the memory stack?

+",326949,,591,,1/31/2019 23:44,1/31/2019 23:44,Why do we still grow the stack backwards?,,3,6,5,,,CC BY-SA 4.0,,,,, +386203,1,,,1/27/2019 19:00,,2,548,"

I am developing a distributed application built as a collection of separate services which are served from multiple load balanced instances and trying to wrap my head around the distributed configuration problem.

+ +

As my background is embedded systems, and not so much about large scale SaaS systems, I'd appreciate some feedback on my ideas, and maybe even get some more ideas about implementing configuration the right way.

+ +

Some more background:

+ +

When I write ""configuration"" I mean user profile, which can be changed during run-time by an authenticated user using an API and be applied in a timely fashion to all instances (think email filter rules, as opposed to DB connection strings which are loaded and kept static for the lifetime of the instance and hidden from users).

+ +

Each service is implemented as a pre-forked HTTP server (for security and performance reasons) and separate services use separate configuration (meaning services do not generally know about other services' configuration structure/schema). Users change configuration relatively infrequently, and the requirement is that users will not experience service downtime due to configuration change (note that some deployments have exactly one instance of each service, so a gradual restart strategy is not really an option). +Currently configuration is loaded from local INI files that are reloaded on signal, incurring a minor delay, which is acceptable.

+ +

So to my thoughts:

+ +

The plan is to use a central store (a DB or KV store such as Consul) and have an agent running alongside each service instance to poll for/receive changes, then call a local endpoint that will validate the new configuration, rewrite the INI file and refresh the in-memory copy (meaning each instance manages the local configuration store for itself).

+ +

Why? This allows the services not to be dependent on any specific configuration provider/protocol/client; in addition there's no dependency on the central store being up, which in my case is mandatory, and also no invalid configuration is ever applied to any single instance.

+ +

Strategy to check that the new configuration has trickled through: attach a unique value that is saved as part of the configuration, and have each instance export this value in a /health endpoint that a monitoring system can watch (that does not necessarily mean that every pre-forked process has updated its own in-memory copy, but let's leave this problem for another time).

+ +

How the distributed configuration store is updated: a separate configuration service with an API that receives configurations for multiple services, has each of the configurations validated by an arbitrary instance of the configured service, and only then persists them in the central store.

+ +

Thanks in advance for any feedback or advice.

+",326954,,,,,2/27/2019 10:00,Configuration in a Distributed Application,,1,0,,,,CC BY-SA 4.0,,,,, +386205,1,386217,,1/27/2019 22:11,,1,966,"

So I've done some searching but can't seem to find a whole lot of suggestions on this topic. My question is what are some opinions on the best way to receive messages from an Amazon SQS queue on a .NET Core based WebAPI microservice ? The whole project consists of around 5 microservices and is utilizing AWS for infrastructure. I've implemented a basic pub/sub style event bus in which services will publish events to an Amazon SNS topic and then those messages will be delivered to whatever SQS queues are subscribed to it. Since SQS queues must be polled to receive messages, I've been able to come up with a couple solutions.

+ +
    +
  • Use long polling to continuously poll the queue for messages.
  • +
  • Trigger a lambda function when a message is delivered to SQS, which will then +send an HTTP request to the subscribing microservice and notify it that a message has arrived.
  • +
+ +

The first option seems inefficient to me as it could cause unnecessary network traffic and tie up resources. The latter option seems like a decent solution to me but may require some extra effort to deal with the competing consumer scenario since the services themselves will be hosted on ECS and may have multiple instances running and consuming messages.
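
+ +

To make option 1 more concrete, this is roughly what I had in mind - a minimal long-polling sketch using the AWS SDK for .NET inside a hosted BackgroundService (the queue URL and the numbers are placeholders, not my real setup):

+ +

using System.Threading;
+using System.Threading.Tasks;
+using Amazon.SQS;
+using Amazon.SQS.Model;
+using Microsoft.Extensions.Hosting;
+
+public class QueueListener : BackgroundService
+{
+    private const string QueueUrl = ""https://sqs.us-east-1.amazonaws.com/123456789012/my-service-queue""; // placeholder
+    private readonly IAmazonSQS _sqs;
+
+    public QueueListener(IAmazonSQS sqs) { _sqs = sqs; }
+
+    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+    {
+        while (!stoppingToken.IsCancellationRequested)
+        {
+            // WaitTimeSeconds = 20 makes this a long poll: the call blocks server-side
+            // until a message arrives or 20 seconds pass, so the loop is mostly idle.
+            var response = await _sqs.ReceiveMessageAsync(new ReceiveMessageRequest
+            {
+                QueueUrl = QueueUrl,
+                WaitTimeSeconds = 20,
+                MaxNumberOfMessages = 10
+            }, stoppingToken);
+
+            foreach (var message in response.Messages)
+            {
+                // handle the event here, then delete it so it is not redelivered
+                await _sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle, stoppingToken);
+            }
+        }
+    }
+}
+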

+ +

So, I guess my question is does it seem like I'm on the right track here ? Or maybe there are some other solutions that I haven't thought of that someone could propose ? Appreciate any feedback. This is my first post here, so I apologize if I didn't give enough background info. Thanks.

+",326965,,,,,6/27/2019 12:01,.NET Core Microservice messaging with Amazon SNS and SQS,<.net>,1,1,1,,,CC BY-SA 4.0,,,,, +386209,1,,,1/28/2019 5:48,,1,49,"

I am suspecting cache latency issues due to constant data (or same data across multiprocessor caches) being churned up in the cache coherence protocols. Is there a way to specify specific data as constant so as to avoid snooping or such messages being passed around caches for that data?

+",181516,,,,,1/28/2019 5:48,How to specify constant data wrt cache coherence?,,0,2,,,,CC BY-SA 4.0,,,,, +386212,1,,,1/28/2019 8:25,,2,176,"

We are developing complex application on the top of the ROS framework in C++ and recently ran into discussion how to provide parameters to the parts of code far away from the starting main().

+ +

The application is a kind of server. It has a listener for incoming requests that then finds a suitable factory for the worker capable of handling the request. The factory then creates the worker (not always the same class), and at this point the worker internals (various methods inside) need parameters that may differ for each worker. The parameters initially live in a configuration file on the file system (the same file for all objects, with different sections), but they are automatically loaded and parsed as the main node starts.

+ +

The parameters are easily accessible in main() via the ""node handle"", which allows accessing them by name, not unlike how system properties are usually accessed. Each parameter has one of a few supported primitive types, and the type is differentiated by the name of the method used to access it. Here is the tutorial explaining how the parameters are accessed.

+ +

Different developers have so far taken five approaches:

+ +
    +
  1. Pass the ""parameter handle"" as a parameter to about any constructor, making sure each object has access to it. However this limits testability - a ""live"" handle is a complex interface to the whole ROS infrastructure, which must be running. Hence any object that relies on it just to be created is not testable with simple unit tests.
  2. +
  3. Read all required parameters within main() after startup and store them either as global variables in a class-specific namespace or alternatively as static fields of these classes. This contradicts the general understanding that ""global variables are bad"". While the variables are not modified after being set on startup, it looks like they cannot be declared const at the language level due to the need to set them once inside main.
  4. +
  5. Create structures containing the parameters and pass these around until they reach the constructor where they are used. Different classes need different parameters, so we would need to define multiple helper structures and write the code to populate them from the node handle, and it still looks annoying to pass them around.
  6. +
  7. Create the ""parameter provider"" class that can be either mock (returning the agreed values) or production implementation (wrapping node handle). Most of the developers are used to access node handle directly so this is promising the long way to persuade them to use that mediating wrapper instead.
  8. +
  9. Some proposed using a dependency injection framework. I understand Spring would do for Java, but does such a thing exist, or is it even possible, in C++?
  10. +
+ +

Which of these five approaches (or perhaps some other) would be the most suitable for providing parameters from the node handle far away from where the node handle is easily accessible?

+",81278,,81278,,1/30/2019 8:14,1/30/2019 8:14,Best way to provide configuration parameters for objects far away from the starting point,,2,1,,,,CC BY-SA 4.0,,,,, +386213,1,386218,,1/28/2019 8:27,,2,269,"

I'm tasked with the redevelopment of an existing application and ran into the following problem.

+ +

The application includes batch processing, e.g. scheduled processes that run on data without user input. Now I have to write the requirements for said processes in an Excel table. As these processes are quite long the requirements become quite unreadable when there are many ways for the process to go. So my question is:

+ +

Does anyone have a tip for writing requirements for batch processes?

+",326995,,173647,,6/3/2019 15:51,6/3/2019 15:51,Writing Requirements for Batch Processes,,1,5,,,,CC BY-SA 4.0,,,,, +386215,1,386219,,1/28/2019 9:48,,1,3105,"

I am writing a method in C# (SharePoint Services) which is supposed to return a SharePoint list name based on three conditions (Client, Country, and Location). There are multiple clients, countries and location, and the number is going to increase in the future.

+ +

The method I wrote uses a series of ""if"" statements nested into each other. It works fine, but I would like to think about the maintainability of the method in the future, and would really like to refactor it into something cleaner.

+ +

Can you give advice on any design patterns which would improve the code?

+ +

Here is the code:

+ +
if(client = ""Client1"")
+{
+    if(country = ""Country1""
+    {
+        if(location = ""Location1""
+        {
+           return ListMatchingTheCriteria;
+        }
+        if(location = ""Location2""
+        {
+           return ListMatchingTheCriteria;
+        }
+        ...
+    }
+    if(country = ""Country2""
+    {
+        if(location = ""Location1""
+        {
+           return ListMatchingTheCriteria;
+        }
+        if(location = ""Location2""
+        {
+           return ListMatchingTheCriteria;
+        }
+        ...
+    }
+
+}
+if(client = ""Client2"")
+{
+    if(country = ""Country1""
+    {
+        if(location = ""Location1""
+        {
+           return ListMatchingTheCriteria;
+        }
+        if(location = ""Location2""
+        {
+           return ListMatchingTheCriteria;
+        }
+        ...
+    }
+    if(country = ""Country2""
+    {
+        if(location = ""Location1""
+        {
+           return ListMatchingTheCriteria;
+        }
+        if(location = ""Location2""
+        {
+           return ListMatchingTheCriteria;
+        }
+        ...
+    }
+
+}
+...
+
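
+ +

One direction I have been considering myself is to replace the nesting with a single lookup table keyed by the three values. A rough sketch (all names are made up, and it assumes C# 7 value tuples and System.Collections.Generic are available):

+ +

private static readonly Dictionary<(string Client, string Country, string Location), string> ListNames =
+    new Dictionary<(string, string, string), string>
+    {
+        { (""Client1"", ""Country1"", ""Location1""), ""ListA"" },
+        { (""Client1"", ""Country1"", ""Location2""), ""ListB"" },
+        // ... one entry per combination
+    };
+
+public string GetListName(string client, string country, string location)
+{
+    if (ListNames.TryGetValue((client, country, location), out var listName))
+    {
+        return listName;
+    }
+    throw new ArgumentException(""No list is configured for this client/country/location combination."");
+}
+

+ +

But I am not sure whether a plain dictionary like this is considered a good pattern here, or whether something like a strategy or factory approach would scale better as new clients, countries and locations are added.
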
+",278534,,326536,,1/28/2019 17:34,1/28/2019 23:59,"Refactor multiple ""if"" statements in C#",,1,2,,43493.7375,,CC BY-SA 4.0,,,,, +386216,1,,,1/28/2019 9:54,,3,484,"

I come from a desktop (Winforms/WPF) background were I was the sole developer in the company, and have recently changed jobs to become a part of a team doing web development. I am very much in learning mode as I've only been here a month. We use C#, Entity Framework, ASP MVC, Razor Views.

+ +

I've noticed that the code is very repetitive, for example we will have:

+ +
public partial class Customer
+{
+   public int CustomerID { get; set; }
+   public string Name { get; set; }
+   public string WebsiteURL { get; set; }
+   ...
+}
+
+public class CustomerViewModel
+{
+   public string Name { get; set; }
+   public string WebsiteURL { get; set; }
+   ...
+}
+
+public class CustomerModel
+{
+   public string Name {get; set; }
+   public string WebsiteURL { get; set; }
+}
+
+ +

In fact there is often a ListModel class as well, and the attributes are all very similar. And there are quite a few attributes; the ones above are just examples. In my previous job I tried to reduce duplication in the code using the DRY principle (don't repeat yourself). I designed the solution using inheritance and composition so that if something had to be changed, it usually only had to be changed in one place.

+ +

Is this sort of duplication common in web development, and is it considered acceptable practice?

+",327002,,,,,1/31/2019 17:54,"Is web development always repetitious, and why isn't this a problem?",,3,4,,,,CC BY-SA 4.0,,,,, +386224,1,,,1/28/2019 12:16,,0,76,"

I currently have a file processing task that runs on an on-prem VM, it's a .NET executable that also calls other services such as ffmpeg (for videos). As is, the task runs fine, but it's really only possible to process one file at a time due to computational resources.

+ +

I'm looking to change the architecture in such a way as to allow multiple files to be processed at once, theoretically well beyond the number that could be processed even by giving the VM additional resources (which isn't an option anyway).

+ +

Based on my research I think that what I'm looking for is probably some kind of container based solution, but I'm new to the concepts of containers and so I'm hoping that someone can help me ensure I'm on the right lines. Because I'm new to containers I think that Azure Container Instances is the way to go - it gives me the least to manage.

+ +

I think the rough idea I'm trying to map out is as follows:

+ +
    +
  1. Create a new container instance from an image
  2. +
  3. Start the instance
  4. +
  5. Download the file to the container from Azure Blob Storage
  6. +
  7. Process the file using the existing executables
  8. +
  9. Stop and delete the instance
  10. +
+ +

In order to do that, I'm going to need to do the following: create a container image that includes all of my executables; find some way to trigger the creation of the instance when a new file is added to Blob Storage; and figure out how to make the task to download and process the files start automatically once the container has started.
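
+ +

For the trigger part, my current thinking is a small blob-triggered Azure Function that would kick off the container work. A sketch of what I mean (the container path and connection names are placeholders, and I have deliberately left the actual container-creation call as a comment because I have not worked out that part yet):

+ +

using System.IO;
+using System.Threading.Tasks;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+
+public static class OnFileUploaded
+{
+    [FunctionName(""OnFileUploaded"")]
+    public static async Task Run(
+        [BlobTrigger(""uploads/{name}"", Connection = ""StorageConnection"")] Stream blob,
+        string name,
+        ILogger log)
+    {
+        log.LogInformation($""New file {name} uploaded, {blob.Length} bytes."");
+
+        // TODO: call the Azure management API (or a Logic App / queue) from here to create
+        // and start a container instance that downloads and processes this file.
+        await Task.CompletedTask;
+    }
+}
+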

+ +

Now I believe that all of this is possible, and I'm hoping it seems sensible, but if anyone can spot a flaw in this plan then please could you highlight it for me.

+ +

Likewise if anyone has any tips on how to achieve my goals then please share, or share a source where I can find out more about what I need to learn.

+",327005,,,,,1/29/2019 22:44,How would I use Azure Container Instances to parellelise a task?,<.net>,1,5,,,,CC BY-SA 4.0,,,,, +386229,1,386245,,1/28/2019 15:11,,5,302,"

I'm designing a C API which will have about a dozen getter functions for various values. Something like:

+ +
bool getSomeBool();
+bool getSomeOtherBool();
+...
+int getSomeNumber();
+int getSomeOtherNumber();
+...
+
+ +

This seems to me like a straightforward way to do it, easy to implement, easy to document, hard to use incorrectly.

+ +

However, in APIs like OpenGL, we encounter something like the following (simplified):

+ +
typedef unsigned int GLenum;
+
+bool glGetBool(GLenum pname);
+int glGetInteger(GLenum pname);
+
+#define GL_BLEND           0x0BE2  /* may be passed to glGetBool    */
+#define GL_ACTIVE_TEXTURE  0x84E0  /* may be passed to glGetInteger */
+
+ +

This has the obvious drawbacks that the enum value must match the function that it's being passed to, and that the enum value must be a valid enum value to begin with.

+ +

So surely, this approach must have some advantage too?

+ +

Extensibility comes to mind – we can now add new enum values without having to add new functions – but I don't see how that's any better than adding functions.

+",32928,,32928,,1/28/2019 15:12,1/28/2019 23:18,"Why use multi-purpose getters that take an enum value, rather than separate getters?",,2,2,1,,,CC BY-SA 4.0,,,,, +386233,1,,,1/28/2019 18:07,,5,1314,"

As an exercise, I am trying to design a simple calendar booking system for multiple meeting rooms. I have got my head around some requirements such as finding available rooms for a given time range and looking up room bookings. However, I am stuck a bit on one booking scenario: in a race condition, 2 users try to book room A at the exact same time, and the time ranges they pick can be identical or overlapping. For example, user 1 is trying to book room A from 10AM to 12PM, and user 2 is trying to book room A from 9AM to 11AM. At the time of the availability lookup this room showed up for both users as available in the requested timeframe. In this case, since the ranges overlap, I can only accept one booking and fail the other; for simplicity I won't give any user preference, just first come, first served. How would I resolve this part in an efficient manner? +I was thinking of approaching this in several ways:

+ +
    +
  1. Let the booking request through and post-process the bookings using a queue: every time I dequeue a booking I perform the availability check before actually inserting into the DB and confirming the booking. In the proposed scenario only one booking can get into the queue first, hence the second one would fail. Then the system turns around and notifies the users whether their booking failed or passed. (I realized this is similar to how Outlook handles meeting room setup.) But this has a shortcoming: it forces me to process the queue in a single thread so I can maintain the ordering; if I have two threads dequeueing, I am back in the same circle, because two threads can see the same result when doing the recheck, and I plan to distribute this process.

  2. +
  3. Try to put a lock on the time slot. But this only works when I have pre-defined time slots for the room (for example, fixed 9-10, 10-11, ...), which defeats my purpose of keeping this booking system open for any time range. And in that case, would a read-before-write be acceptable in such a system when talking to the database? Because if I am putting an optimistic lock on the record, I would need to read the row and compare versions before writing to the DB.

  4. +
  5. Another way I could see this working is to just let the request go through, and then have another process re-check the booking calendar to detect overlaps and only keep the first valid booking. But I feel this is not efficient, as I have to do this for every booking, and if the room is popular and many users would like to book it, it would take a lot of time to turn around.

  6. +
+ +

If you had to design such a system, how would you go about it? Is there any way we can confirm the booking in real time? How would a hotel booking system work, where the booking unit is a day, which would face the same problem in the hypothetical situation that there is only one room that users are trying to book?
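
+ +

For reference, the overlap test I would apply in any of the options above is just the standard interval check (a small sketch, treating bookings as half-open intervals):

+ +

// Two bookings on the same room clash when each one starts before the other ends.
+// With half-open intervals, a booking ending at 11:00 does not clash with one starting at 11:00.
+static bool Overlaps(DateTime aStart, DateTime aEnd, DateTime bStart, DateTime bEnd)
+{
+    return aStart < bEnd && bStart < aEnd;
+}
+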

+",327039,,,,,2/28/2019 13:02,How to prevent overlap booking on a calendar booking system,,2,9,1,,,CC BY-SA 4.0,,,,, +386239,1,,,1/28/2019 21:52,,-1,161,"

I have two synchronous web APIs that perform the same work but one needs to be prioritized over the other (the former is called from a client, whereas the latter is a caching optimization for before a client calls)

+ +

Constraints:

+ +
    +
  1. Both APIs have access to the same resources (such as VMs and storage) and I can't add more resources
  2. +
  3. Both APIs must remain synchronous
  4. +
  5. I want to ensure the priority API takes precedence over the optimization API, but preferably not starve either
  6. +
  7. APIs are stateless
  8. +
+ +

Question: +What is the appropriate pattern to ensure the priority API always gets cycles without starving either API? Would a throttling mechanism, where the priority API gets much more tokens than the optimization API, be appropriate?
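
+ +

To make the throttling idea concrete, this is roughly what I am picturing - a static split of the available concurrency with made-up numbers, so the priority API always has reserved capacity and the optimization API is capped but never completely shut out:

+ +

using System.Threading;
+using System.Threading.Tasks;
+
+// Shared by both controllers; the 8/2 split is illustrative only.
+// Note: this only gates a single process - across several instances it would
+// need to be backed by a shared limiter instead of in-memory semaphores.
+public static class WorkGate
+{
+    private static readonly SemaphoreSlim PrioritySlots = new SemaphoreSlim(8, 8);   // used only by the priority API
+    private static readonly SemaphoreSlim BackgroundSlots = new SemaphoreSlim(2, 2); // used only by the optimization API
+
+    public static Task EnterPriorityAsync(CancellationToken ct) => PrioritySlots.WaitAsync(ct);
+    public static void ExitPriority() => PrioritySlots.Release();
+
+    public static Task EnterOptimizationAsync(CancellationToken ct) => BackgroundSlots.WaitAsync(ct);
+    public static void ExitOptimization() => BackgroundSlots.Release();
+}
+

+ +

I am not sure whether a fixed split like this is good enough, or whether a proper weighted token bucket (or something that works across instances) would be the more standard answer.
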

+",84907,,84907,,1/28/2019 23:48,6/22/2020 2:06,What is the optimal pattern to ensure a priority API gets cycles over its related (non-priority) API?,,1,4,,,,CC BY-SA 4.0,,,,, +386240,1,,,1/28/2019 21:53,,7,218,"

If a change compiled/built but the semantics were different would it be considered a major change?

+ +

For example, suppose a returned time string kept the same format but was now CET rather than, say, IST, so that interpreting it as IST produced erroneous results; would that be classified as MAJOR?

+ +

Note that I assume there is no change to the build: everything compiles and builds as before, and everything runs without errors being detected, just as before; it's just that the 'answer' is now wrong, not right. I can't seem to find a consistent view.

+",327051,,326536,,1/29/2019 9:47,1/29/2019 9:47,Semantic Versioning,,2,0,,,,CC BY-SA 4.0,,,,, +386250,1,386255,,1/29/2019 4:18,,3,351,"

Imagine you have a system where a program is running and somehow an abnormality occurs. (it can be a crash, or an abnormal screen or any other thing)

+ +

Imagine reproducing the problem is next to impossible but you have some logs files that record what has happened till that point.

+ +

Usually you debug by reproducing the problem and checking where does it go wrong but if you just try to find the causes of the abnormality by going through the log files, is there a specific name for this?

+",296531,,,,,1/29/2019 12:53,Is there a specific name for this kind of debugging?,,2,3,,,,CC BY-SA 4.0,,,,, +386253,1,386256,,1/29/2019 5:13,,4,1664,"

My app uses Cosmos DB to store data. I have a requirement to support full-text search, which Cosmos does not provide out of the box. One of the recommendations was to use Elasticsearch, and I have been trying that out. The more I try it out, the more I feel like I am duplicating data. I literally push the same documents to Cosmos DB and Elasticsearch.

+ +

Am I heading down a bad path here? The way I see it, I should do one of these things:

+ +
    +
  • use only Cosmos DB, simulating full-text by tokenizing+stemming content myself
  • +
  • use only Elasticsearch
  • +
  • use Cosmos DB in conjunction with Azure Search, which feels to me like it's not doubling-up on database responsibilities
  • +
+ +

Does anyone have any thoughts on my direction here?

+",121649,,,,,1/29/2019 7:02,Does it make sense to use both Cosmos DB and Elasticsearch?,,1,1,2,,,CC BY-SA 4.0,,,,, +386260,1,392107,,1/29/2019 10:04,,-3,183,"

I am working in an iOS project which have two schemes enabled from configurations,

+ +
    +
  • Release
  • +
  • Debug
  • +
+ +

As you know, the Debug scheme is used by developers while developing features, testing, etc. The Release scheme, however, is used for generating the official final artefact to upload to the App Store. Currently my application quality is not good and many exceptions are not properly handled by developers (most of them are ignored). In the current codebase most of the methods are written in such a way that when exceptions happen they are caught and printed to the log. Here is some pseudocode:

+ +
func doSomething() {
+    do {
+        //code that might generate exception
+    } catch let error {
+        print(""Error: \(error)"")
+    }
+} 
+
+ +

However, I feel many exceptions should be thrown in the development phase, and developers should invest more time to analyse and fix them. For that reason I want to log exceptions only in Release mode, but not in Debug mode. My intention is to refactor the code above roughly as below:

+ +
func doSomething() throws  {
+
+    do {
+        //code that might generate exception e
+    } catch let error {
+
+        #if Release
+            print(""Error: \(error)"")
+        #else
+            throw error
+        #endif
+    }
+} 
+
+ +

Are there any other issues that might arise from this approach? Is it a bad idea anyway?

+",208831,,208831,,5/19/2019 8:59,5/19/2019 15:05,Is throwing exceptions in Debug mode a bad idea?,,1,8,,43604.74306,,CC BY-SA 4.0,,,,, +386261,1,,,1/29/2019 10:06,,0,404,"

I have recently started diving deeper into Angular 7 (with Ionic 3) and I have written a lot of code so far, and I have child & parent component relationships - but never like this before. I am trying to write ""good"" Angular code.

+ +

The problem I am now facing is, the Providers...

+ +

My Child Component needs access to the SettingsProvider, but so does the Parent Component. So my question is: should I inject the SettingsProvider in both the child's and the parent's constructors, or should I only inject it into the parent and then pass it into the child via an @Input?

+",324765,,,,,1/29/2019 20:26,Angular 2+ Providers/Service on Parent or Child component?,,1,0,,,,CC BY-SA 4.0,,,,, +386262,1,386318,,1/29/2019 10:21,,1,163,"

I have an array of ranges with start and end timestamps. I want to display those ranges on a graph.

+ +

Right now, my naive algorithm gets an array of ranges sorted by their start timestamp and

+ +
    +
  • iterate over each range in the sorted array + +
      +
    • find the first bucket where the last range of the bucket has an end timestamp lower than the start timestamp of the current sample + +
        +
      • If no such bucket exists, add a new bucket
      • +
    • +
    • add current sample to found bucket
    • +
  • +
+ +

I then use these buckets to fill my graph. Each bucket represents a line on the graph.

+ +

+ +

It's an O(n²) algorithm. This works fine for my needs, if a bit slow when there are 50,000+ ranges.

+ +

One issue with this algorithm is when a range's end timestamp changes, the algorithm needs to be run for all samples.

+ +

For whatever reason, I now want to make a change to how the data is fetched from the database, and due to technical constraints, the data cannot be sorted efficiently.

+ +

Please help me think of an efficient algorithm where data does not have to be sorted.

+",153156,,153156,,1/30/2019 3:16,1/30/2019 3:16,Sorting an array of ranges for display,,1,5,,,,CC BY-SA 4.0,,,,, +386266,1,386267,,1/29/2019 11:20,,93,16952,"

There's a debate going on in our team at the moment as to whether modifying code design to allow unit testing is a code smell, or to what extent it can be done without being a code smell. This has come about because we're only just starting to put practices in place that are present in just about every other software dev company.

+ +

Specifically, we will have a Web API service that will be very thin. Its main responsibility will be marshalling web requests/responses and calling an underlying API that contains the business logic.

+ +

One example is that we plan on creating a factory that will return an authentication method type. We have no need for it to inherit an interface as we don't anticipate it ever being anything other than the concrete type it will be. However, to unit test the Web API service we will need to mock this factory.

+ +

This essentially means we either design the Web API controller class to accept DI (through its constructor or setter), which means we're designing part of the controller just to allow DI and implementing an interface we don't otherwise need, or we use a third party framework like Ninject to avoid having to design the controller in this way, but we'll still have to create an interface.

+ +

Some on the team seem reluctant to design code just for the sake of testing. It seems to me that there has to be some compromise if you hope to unit test, but I'm unsure how allay their concerns.

+ +

Just to be clear, this is a brand new project, so it's not really about modifying code to enable unit testing; it's about designing the code we're going to write to be unit testable.

+",146235,,97259,,2/3/2019 9:43,2/4/2019 7:50,Should we design our code from the beginning to enable unit testing?,,15,37,34,,,CC BY-SA 4.0,,,,, +386269,1,386298,,1/29/2019 11:53,,6,1822,"

I stumbled upon a question in Codereview, and in one answer the feedback was to avoid std::endl because it flushes the stream. The full quote is:

+
+

I'd advise avoiding std::endl in general. Along with writing a new-line to the stream, it flushes the stream. You want the new-line, but almost never want to flush the stream, so it's generally better to just write a \n. On the rare occasion that you actually want the flush, do it explicitly: std::cout << '\n' << std::flush;.

+
+

The poster did not explain this, neither in the post or comments. So my question is simply this:

+

Why do you want to avoid flushing?

+

What made me even more curious was that the poster says that it's very rare that you want to flush. I have no problem imagining situations where you want to avoid flushing, but I still thought that you in general would want to flush when you print a newline. After all, isn't that the reason why std::endl is flushing in the first place?

+
+

Just to comment the close votes in advance:

+

I do not consider this opinion-based. Which you should prefer may be opinion-based, but there are objective reasons to take into account. The answers so far prove this. Flushing affects performance.

+
+",283695,,-1,,6/16/2020 10:01,1/30/2019 6:06,Why do you want to avoid flushing stdout?,,3,7,,,,CC BY-SA 4.0,,,,, +386273,1,386277,,1/29/2019 12:47,,0,250,"

I am getting into SonarQube and everything looks quite simple so far, but I am not sure what the final purpose of CCQ is overall.

+ +

Yes, it gives you a lot of tips about what's going on inside your code, and no, it does not seem to be perfect every time.

+ +

Are CCQ tools expected to deny a branch merge if there is, for example, one critical error? Should a product like SonarQube be integrated with CI/CD or VCS systems? Or, to keep it simpler, should CCQ tools be part of the automation process?

+ +

For now I see it only as a personal convenience tool for developers, but I would not let tools like this influence the approval logic.

+",320543,,,,,1/31/2019 17:29,Are CCQ (Continuous Code Quality) tools like SonarQube expected to deny version control changes?,,2,0,,43499.77222,,CC BY-SA 4.0,,,,, +386280,1,386281,,1/29/2019 13:52,,1,101,"

I'm designing a solution in which users perform tasks based on a task queue. The point is to create a system where a user goes to a specific URL and the server serves them a listing with, for example, items to assess and put into a specific predefined category - like StackExchange's queues (First Posts, Late Answers and Triage).

+ +

What I'm struggling with is making sure that users get different tasks to do and that no user gets the same task another user is currently working on.

+ +

The obvious solution would be to store a datetime when a given task is served to a user, and have another service monitor those tasks and release them to be available again if a timeout occurs.

+ +

This solution however feels kind of ""dirty"". Like it's the most straightforward but might not cover all edge-cases.

+ +

I also struggle to formulate proper query for Google, because all results I get are about event buses and asynchronous tasks.

+ +

Is there any more clever way to solve this kind of problem?

+",248825,,,,,1/29/2019 14:07,Serving tasks for multiple users from one queue in parallel way,,1,2,,,,CC BY-SA 4.0,,,,, +386284,1,,,1/29/2019 14:44,,3,172,"

I'm aware of the general distinction between data and content, namely context, but here I have a case that is clearly causing a lot of confusion and debate within the company I work for.

+ +

There is a front-end team which manages an application which calls our data API. The confusion is that they want the content structured the way they are presenting it, because according to them the structure implies domain knowledge (business logic) which they don't know anything about. The thing is, they came up with the structure in the first place... together with business. So is it business logic or GUI logic?

+ +

To illustrate, here's an example:

+ +

Our REST endpoint /car/123/repair/status returns:

+ +
{
+   ""status"": ""being_painted"" /* actually an enumeration */
+}
+
+ +

One customer journey scenario conjured up by the UX guy from the front-end team includes the concept of 'stages' into which the various repair statuses can be categorized. This was discussed and agreed upon with our business product owner. The idea is that it provides a better visual overview to the end-user (who is not business).

+ +

So instead of applying GUI logic to display 'stages' correlating to the 'repair status', they want us to provide it as data. Data that has no bearing on any IT process using a concept we don't officially know.

+ +

What they want:

+ +
{
+   ""currentStage"": {
+       ""stage"": ""stage_recovery""
+       ""status"": ""being_painted""
+    }
+}
+
+ +

Is this business logic that should be part of the data API or is it GUI logic that should be part of the front-end application? Rephrased: should we provide such a granular API that it is completely specced for the highly specific presentational need or should we stick with just the statuses and create an API that only deals with data known to us internally?

+ +

One argument for providing statuses but not stages is the fact that statuses are based on the source data from our domain, while stages are based on the result of this (so not on source data). But I'm not sure how much of a real argument that is.

+",176541,,176541,,1/29/2019 16:12,1/29/2019 19:54,Is this data as a result of business logic or is this content as a result of GUI logic?,,2,6,,,,CC BY-SA 4.0,,,,, +386285,1,,,1/29/2019 14:47,,0,65,"

I've recently started my first job as a software developer at a small startup company.

+ +

I do not have a degree in a software engineering related field, although, I have very recently completed an A-Level in Computer Science.

+ +

There are 2 developers (myself and one other) and the other developer has worked here for some time and so, is particularly knowledgeable of our products and the code base.

+ +

I, however, am struggling to create new projects or maintain existing projects as I simply don't know enough about them in order to proceed.

+ +
    +
  • The code for the projects is completely undocumented (no comments or guides etc)
  • +
  • The databases we use do not have primary keys or referential integrity
  • +
  • The databases do not have meaningful table names and column names
  • +
  • I am unfamiliar with their data model
  • +
+ +

My boss is not a software engineer, and so when I do ask him for help, the information he is able to give me is very limited.

+ +

The language we use (C#) is one that I have plenty of (mostly self-taught) experience in, however, I'm wondering if it is indeed my abilities (or lack thereof) that are preventing me from being as productive as I'd like to be.

+ +

Am I just too inexperienced for this role, or is there something I can do to help me in this job?

+",327124,,,,,1/29/2019 14:47,What should I do if I am unfamiliar with my organisation's codebase?,,0,5,,43494.63819,,CC BY-SA 4.0,,,,, +386296,1,386303,,1/29/2019 17:55,,32,7780,"

I was wondering why there are (in all programming languages I have learned, such as C++, Java, Python) standard libraries like stdlib, instead of similar ""functions"" being primitives of the language itself.

+",,user327143,28374,,1/30/2019 1:43,2/1/2019 0:08,Why are standard libraries not programming language primitives?,,10,12,6,43495.77014,,CC BY-SA 4.0,,,,, +386310,1,,,1/29/2019 23:18,,0,365,"

Assume we have two files

+ +

a.cc

+ +
#include <iostream>
+
+int timesTwo(int in);
+
+int main(){
+ std::cout << timesTwo(5) << std::endl;
+ return 0;
+}
+
+ +

b.cc

+ +
int timesTwo(int in){
+ return in*2;
+}
+
+ +

is it better to include b.cc in a.cc with #include ""b.cc"", or is it better to leave it as is and let the linker combine them by compiling with g++ a.cc b.cc? What are the circumstances where one is preferred over the other?

+",327161,,,,,1/30/2019 12:12,Is it better to use the linker to compile multiple C++ files or better to include them as a header file,,4,0,,,,CC BY-SA 4.0,,,,, +386317,1,,,1/30/2019 2:42,,0,352,"

I have a AWS pipeline connecting multiple resources to do 1 task which looks like as follows:

+ +

+ +

The data is received from an SNS topic and then moved to a Lambda where it is processed. From that Lambda, a Step Function gets triggered, and then one more Lambda pushes the result to an SNS topic.

+ +

So, should the integration test cases for this kind of flow cover all the resources at once, or one resource at a time?

+ +

If we cover all the resources in one integration test, the way I am thinking of writing the test case is by sending a message to the first SNS topic and reading from the last one.

+ +

But I am not sure whether we should do it this way or not, because then every test case might take a long time and we might have to put a poller on the last SNS topic.

+ +

What do you recommend on writing integration test cases for such flow?

+",287375,,,,,11/19/2020 3:42,Integration Testing on AWS,,0,0,1,,,CC BY-SA 4.0,,,,, +386323,1,,,1/30/2019 6:25,,3,707,"

Quoting the definition of interface injection from Wikipedia :

+ +
+

The advantage of interface injection is that dependencies can be completely ignorant of their clients yet can still receive a reference to a new client and, using it, send a reference-to-self back to the client. In this way, the dependencies become injectors

+
+ +

I wish to understand each part of what is said there.

+ +

Let me put the example here from the mentioned source:

+ +
// Service setter interface.
+public interface ServiceSetter {
+    public void setService(Service service);
+}
+
+// Client class
+public class Client implements ServiceSetter {
+    // Internal reference to the service used by this client.
+    private Service service;
+
+    // Set the service that this client is to use.
+    @Override
+    public void setService(Service service) {
+        this.service = service;
+    }
+}
+
+ +

Now let me split the quotation in parts and try to explain them :

+ +

part1 : ""..dependencies can be completely ignorant of their clients.."" - quite understandable

+ +

part2: ""..dependencies can be completely ignorant of their clients yet can still receive a reference to a new client and, using .. "" - bold part is obscure. Q1) Client gets a reference to the dependency not the vice versa, right ?

+ +

part3: ""..can still receive a reference to a new client and, using it, send a reference-to-self back to the client..."" - quite understandable

+ +

part4: "".. In this way, the dependencies become injectors..""- bold part is obscure. Q2) Dependencies are being injected. So how can they themselves be injectors ?

+ +

Q3) Dependency is obviously injected in the example. Can we say that interface is also being injected only to enable the injection of the dependency in the example ?

+",296171,,5099,,1/30/2019 10:45,3/5/2019 8:36,Understanding interface injection,,1,1,,,,CC BY-SA 4.0,,,,, +386328,1,,,1/30/2019 8:47,,3,148,"

My team and I had a discussion about our future architecture, since we've always just written the code that was needed in each separate project, which have now led to extremely redundant code across multiple systems. Therefore, we wish to create a local NuGet feed to share functionality across our new systems.

+ +

Some of us really like this idea, whereas some do not. I wanted to share this with you, to perhaps get some experienced eyes on the matter.

+ +

Problem
+Redundant code across multiple systems gives us loads of challenges when adding new functionality or updates to the backend database. Every time we make changes that should replicate to more than one application, we need to go through all these applications and write the exact same lines of code.

+ +

Possible solution
+As mentioned, some of us thought this through and came up with a solution using TFS to automatically create a NuGet package in our local NuGet feed, and then we can simply update said NuGet package to include the new functionality across multiple systems.

+ +

Here's a simple illustration to show our ideas:

+ +

+ +

Of course we would write tests for our shared libraries, but we can be sure that they're working and we do not need to write the same functionality into multiple systems, since they'd just use the library we created for this specific scenario.

+ +

Should it occur that we add breaking functionality, the existing systems would not need to update their NuGet package, unless they need that new functionality. It would be okay for System A and System B to still use Library A v1.0.0 though we just added new functionality to Library A v.1.2.4. Should it occur that the existing systems would need new functionality based on one of the libraries, and we did add breaking functionality, it's just a matter of rewriting the old code, to accept the new changes.

+ +

Ideas
+As mentioned in the very beginning, some of us really dig this idea, while others are not fans of this architecture. I'm writing this question to get experienced eyes on this discussion, and perhaps some better ideas, or suggestions for better architecture on the existing idea I've laid out for you here.

+",327153,,9506,,1/30/2019 12:08,1/30/2019 12:08,".NET shared libraries across team, using TFS and auto-NuGet publishing",<.net>,1,4,1,,,CC BY-SA 4.0,,,,, +386331,1,,,1/30/2019 9:50,,0,95,"

In my code I have 2 separate login types. I have a factory that decides which one to create based on an enum.

+ +

Each login type has a different type of credential. +Currently my factory method takes just 1 type of credential. +How can I generalise my method to accept an enum for the login type together with different credential types?

+ +
class LoginType1 
+{
+    var credentials:Credentials
+}
+
+class LoginType2 
+{
+    var credentials:Credentials2
+}
+enum LoginType 
+{
+   case type1
+   case type2
+
+}
+
+struct Credentials
+{
+  var username : String
+  var pwd : String
+}
struct Credentials2 
+{
+  var key : String
+  var id : String
+}
+
+public func setupProvider(type: LoginType, credentials: Credentials, ) {
+        switch type {
+        case .type1:
+            provider = LoginType1(credentials: credentials, container: view)
+        case .type2:
+            provider = LoginType2(credentials: credentials)
+        }
+
+    }
+
+",5720,,173647,,1/31/2019 23:43,1/31/2019 23:43,What design pattern would help me make my factory more generic,,1,4,,,,CC BY-SA 4.0,,,,, +386332,1,,,1/30/2019 10:04,,4,502,"

Let's say we have a code base covered with big enough amount of unit tests. We make small change to the code and want to check if tests are still passing. Wouldn't it be great to be able to rerun just tests affected by the change? It seems like every bit of data required to build the list of affected tests is in place: set of changed lines could be taken from VCS, and mapping from source code line to corresponding tests might be determined from coverage statistics.

+ +

Do any existing unit testing frameworks support such a technique?

+ +

Update. +The point is to get rapid feedback. CI could and should still run the full suite, obviously.

+",327209,,9113,,1/31/2019 7:24,1/31/2019 7:24,Rerun unit tests affected by change,,7,4,,43499.77222,,CC BY-SA 4.0,,,,, +386340,1,386342,,1/30/2019 12:41,,2,954,"

We have a REST API that returns user generated content from a database. Before inserting the data into the database the data is sanitized. +But when returning the data we do not escape / decode the data, so in theory it would be possible to insert some user content that will execute an attack, once it is consumed from the client.

+ +

Right now the only client is an internal one, that displays the data in some HTML frontend.

+ +

Now my question is, should the API return json escaped JSON? Or should the client make sure to html escape the received content?

+ +

I am split here in my personal opinion: in general I would return data as-is from any API and let the client take care of escaping (as there could be several different clients and the API should be agnostic to its content). +On the other hand it's an internal one and we know how we use it, so it could return escaped data.

+ +

Any inputs?

+",123404,,,,,1/30/2019 16:22,Should REST API return escaped user generated content,,2,0,,,,CC BY-SA 4.0,,,,, +386341,1,,,1/30/2019 12:50,,0,59,"

I regularily come across code that stores some custom properties in JS Objects from external libraries, for example:

+ +
/** @param {OpenLayers.Feature} feature */
+
+function doSomething(feature) {
+    feature.customProperty = ""anything"";
+ }
+
+ +

I disapprove of that, but I am not sure about the ideal solution to bypass this shortcut. Any ideas?

+",326191,,326191,,1/30/2019 13:17,1/31/2019 16:52,Adding custom properties to Javascript Objects of external libraries - is that considered bad practice?,,1,2,,,,CC BY-SA 4.0,,,,, +386345,1,,,1/30/2019 14:07,,2,202,"

I have trouble understanding if I should pass an argument to a method as a primitive value or as an already-obtained object.

+ +

Consider this simple example

+ +
interface Channel{
+
+    String getId();
+
+    String getName();
+}
+
+interface ChannelRepository{
+    Channel findBy( String id);
+}
+
+public class DummyChannelRepository implements ChannelRepository{
+    @Override
+    public Channel findBy(String id) {
+        return null;
+    }
+}
+
+interface Customer{
+
+    String getId();
+
+    String getChannelId();
+
+    String getName();
+}
+
+interface CustomerRepository{
+    Customer findBy( String id);
+}
+
+public class DummyCustomerRepository implements CustomerRepository{
+    @Override
+    public Customer findBy(String id) {
+        return null;
+    }
+}
+
+ +

Now I want to create a common calculator interface that receives a channel and a customer. For me it's not clear if the calculation method of my interface should use primitive parameters (in that case Ids) or it should use object instances. +So I came up with two solutions.

+ +

Variant #1 using object instances

+ +
interface CalculatorThisWay{
+    double calculate( Channel channel, Customer customer  );
+}
+
+public static  class DummyCalculatorThisWay implements CalculatorThisWay{
+    public double calculate( Channel channel, Customer customer  ){
+        // do whatever
+        return 1.5d;
+    }
+}
+
+public static void main(String[] args) {
+    String channelId  = ""Channel-1"";
+    String customerId = ""Customer-999"";
+
+    ChannelRepository channelRepo = new DummyChannelRepository();
+    CustomerRepository customerRepo = new DummyCustomerRepository();
+
+    Channel channel = channelRepo.findBy( channelId );
+    Objects.requireNonNull( channel, ""Channel must not be null"");
+
+    Customer customer = customerRepo.findBy( customerId );
+    Objects.requireNonNull( customer, ""Customer must not be null"");
+
+    DummyCalculatorThisWay calc = new DummyCalculatorThisWay();
+    calc.calculate(channel, customer);
+
+
+}
+
+ +

Variant #2 using primitive identifiers

+ +
interface CalculatorThatWay{
+    double calculate( String channelId, String customerId );
+}
+
+public class DummyCalculatorThatWay implements CalculatorThatWay{
+
+    private final ChannelRepository channelRepo;
+
+    private final CustomerRepository customerRepo;
+
+    public DummyCalculatorThatWay(  ChannelRepository channelRepo,  CustomerRepository customerRepo ){
+        this.channelRepo = channelRepo;
+        this.customerRepo = customerRepo;
+    }
+
+    public double calculate( String channelId, String customerId ){
+
+        Channel channel = channelRepo.findBy( channelId );
+        Objects.requireNonNull( channel, ""Channel must not be null"");
+
+        Customer customer = customerRepo.findBy( customerId );
+        Objects.requireNonNull( customer, ""Customer must not be null"");
+
+        // do whatever
+
+        return 1.5d;
+    }
+}
+
+public static void main2(String[] args) {
+    String channelId  = ""Channel-1"";
+    String customerId = ""Customer-999"";
+
+    DummyCalculatorThatWay calc = new DummyCalculatorThatWay(
+            new DummyChannelRepository(),
+            new DummyCustomerRepository()
+    );
+
+    calc.calculate( channelId,  customerId);
+}
+
+ +

So for me both ways feel correct, but consider reusing the logic of how to get the channel and the customer object in different places. In Variant 1 I would have to duplicate that logic or put it in some helper class, whereas in Variant 2 the logic is in the class itself and can be reused.

+ +

So is there any guideline on when to use which approach, or any best practice?

+",327224,,326536,,1/30/2019 14:16,1/31/2019 8:21,Passing Information to a method using primitives vs and object instance,,4,0,,,,CC BY-SA 4.0,,,,, +386357,1,,,1/30/2019 16:19,,-1,33,"

We have a 3-tier architecture: Web, Business, Models using the MVC pattern. Models are Code-First using EF6. Currently we access the dbContext directly in our Controllers to query and save changes. We define all the relationship constraints on the Models. However, we have many additional constraints on the models that are spread about in the controllers, or the Business layer as part of other processing.

+ +

Is there a cleaner way of accomplishing this whereby all the business constraints on the Models can be enforced? We have used the repository layer in the past on other projects, but it becomes inflexible and tedious to go through it for everything. Is there a best of both worlds approach whereby we can use the dbContext directly to query things, but when saving to db, we can be ensured that all the business constraints will be enforced with any violations passed back through to the caller?

+ +

Edit: this explains succinctly why I don't want to use the Repository pattern again.

+",55613,,55613,,2/1/2019 16:49,2/1/2019 16:49,where to put business constraints,,1,1,,,,CC BY-SA 4.0,,,,, +386359,1,,,1/30/2019 16:50,,1,2105,"

Is it a good practice at all?

+ +

E.g. we have an API layer which has to call some service when it receives a request, and the work which that service has to do must not be done concurrently. If there were no API layer, one would just use a queue and dispatch tasks there, and the service would consume and process them one by one.

+ +

But what if, because we need to respond from the API, we can't use such an async approach? What comes to my mind is that one could still use a queue, but this looks messy - the API layer would need to dispatch the task to the queue, hold the request, poll the task processing status (in the db or queue or whatever) and eventually respond when it's processed.

+ +

Is such solution really a bad design? What would be a better one?

+",305593,,,,,2/26/2020 15:32,Queueing API requests,,2,1,,,,CC BY-SA 4.0,,,,, +386367,1,,,1/30/2019 18:33,,1,244,"

In my organisation for one of the project we follow Agile Scrum methodology and following is distribution of the man power for the project:

+ +
Scrum Team: 
+SM
+ProxyDev1(Internal): 50% in the project
+ProxyDev2(Internal): 50% in the project
+Dev3(Internal): 100% in the project
+Dev4(Internal): 100% in the project
+Dev5(External): 100% in the project
+Dev6(External): 100% in the project
+Dev7(External): 100% in the project
+Dev8(External): 100% in the project
+
+ +

As you can see above, we have developers from an external firm, and internally there are 2 developers who are only 50% involved in the project; these are the most experienced ones on the project, and the others are relatively new to it.

+ +

Sprint duration is 4 weeks and we have 3 refinements(2 refinement with PO,ProxyDev and SM, and 1 internal refinement with proxy dev,devs and SM), 2 planning(1 planning with Proxydevs,SM and PO and other with PD,SM,PO,Devs), 1 estimation(all participate) and retro and dailys as usual where all participate.

+ +

As you can see, the proxy devs attend all the meetings as they are more experienced, and we save a lot of time by not spending all developers' time in these meetings. Additionally, as the proxy devs are only 50% involved, the sprint duration is 4 weeks instead of 2 or 3 weeks, so we could reduce the time used for scrum events. The PO is from the other firm.

+ +

Now the problem I am facing as SM is that the external developers have less experience with the project, we now have a lot of overhead explaining things to them, and the expectations are not being understood correctly. So things like concept work have to be done by us, and they just do the implementation. How can we reduce the overhead with the external developers?

+ +

What are my options here to make the process run smoothly? Should I restructure, or use another framework to work with the external devs? Additionally, can I improve the output by restructuring the internal team? We have to work with the external firm at any cost.

+ +

Thank you very much

+",327259,,327259,,1/30/2019 19:53,1/31/2019 11:47,How to efficiently handle a scrum project when part of development team are from external firm?,,4,2,1,,,CC BY-SA 4.0,,,,, +386370,1,,,1/30/2019 18:48,,3,955,"

Developing Big Data processing pipelines and storage, you probably come across software which is more or less a part of the Hadoop ecosystem. Be it Hadoop itself, Spark/Flink, HBase, Kafka, Accumulo, etc.

+ +

Now all of these have been very well implemented, offering fast and high-quality solutions to the developers needs. Still, especially with the Big Data usage patterns in mind, a huge amount of object allocations and deallocations happen. It is probably worthwhile to use a non-garbage collected language, like C++.

+ +

Another reason I could find for myself, why Java applications are so popular in this domain, is the distributed deployment. One key characteristic of Big Data applications is the size, they don't fit on a single machine. The JVM allows really simple deployment (just copy the bytecode around). But is this really an argument? Looking at our own cluster, the hardware is quite similar and I would assume that this holds true for most companies. So even compiled machine code should be easy to move around to all machines.

+ +

For me personally, the biggest reason would probably be DRY (don't repeat yourself). It started in Java and libraries and frameworks grew around it. They work very well and nobody is willing to invest in rewriting the whole stack in a different programming language for (if at all) marginal gain.

+ +

Maybe someone of you has a deeper insight than me?

+",321010,,,,,10/8/2020 9:06,Why is the whole Hadoop ecosystem written in Java?,,4,7,3,,,CC BY-SA 4.0,,,,, +386372,1,386374,,1/30/2019 18:56,,4,263,"

I was reading on SO and SESE about exceptions and control flow, but I can't seem to determine or figure out if using exceptions to validate parameters is a violation of that guideline.

+ +

Suppose I had a method that wrote a message, obviously, I don't want the recipient of the message to be blank. For this example, recipient is a string, but in a more sophisticated program, it might be a class with more parameters.

+ +
public void writeMessageto(string recipient, string message){
+        // validate: a blank recipient or message is rejected up front
+        if (string.IsNullOrWhiteSpace(recipient) || string.IsNullOrWhiteSpace(message))
+            throw new ArgumentException(""recipient and message must not be blank"");
+       ... code to format and write message below....
+}
+
+ +

In this particular instance is using exceptions controlling the flow of the program? I don't want to move on to sending a message if both parameters are blank.

+ +

Suppose I had a Message class: in the constructor, to build a valid object, I would need to check its parameters to make sure they aren't blank. If one of the parameters is null or empty, an exception is thrown and the object isn't created. Why is it okay in the Message constructor but not in the method?

+ +

If using if statements is better, what happens when validation leads to deeply nested if statements, but throwing an exception would make the code more readable?

+",,user327264,,,,1/30/2019 20:36,Is using exceptions to validate parameters a violation of using exceptions for control flow?,,3,1,,,,CC BY-SA 4.0,,,,, +386377,1,,,1/30/2019 19:43,,-2,679,"

I'm really looking for one good example how to PUT operations should be implemented correctly.

+ +

What I understood until now:

+ +
    +
  • The operation must be idempotent
  • +
  • When the resource doesn't exists it will be created new one, API returns status code 201 - Created
  • +
  • When the resource exists and has changed, resource will be updated, API returns status code 200 - Ok
  • +
  • When the resource data hasn't been changed, no update will happen and API returns status code 204 - No content
  • +
+ +

So I created reference implementation in .NET Core.

+ +

At first I made all fields required because of idempotency.

+ +
/// <summary>
+///     To-Do model.
+/// </summary>
+public class ToDoInsertUpdateModel
+{
+    /// <summary>
+    ///     Value indicating whether To-Do is completed.
+    /// </summary>
+    /// <value><c>true</c> if this To-Do is completed; otherwise, <c>false</c>.</value>
+    [Required]
+    [DisplayName(""Completed"")]
+    [JsonProperty]
+    public bool? IsCompleted { get; set; }
+
+    /// <summary>
+    ///     To-Do description.
+    /// </summary>
+    /// <value>To-Do description.</value>
+    [Required]
+    [JsonProperty]
+    public string Description { get; set; }
+
+    /// <summary>
+    ///     Date when To-Do must begin.
+    /// </summary>
+    /// <value>Start date when To-Do must begin.</value>
+    [Required]
+    [JsonProperty]
+    [DisplayName(""Start date"")]
+    public DateTime? StartDate { get; set; }
+
+    /// <summary>
+    ///     Date when To-Do must be finished.
+    /// </summary>
+    /// <value>Due date when To-Do must be finished.</value>
+    [Required]
+    [JsonProperty]
+    [DisplayName(""Due date"")]
+    public DateTime? DueDate { get; set; }
+}
+
+ +

I commented the code so that folks who don't use .NET can understand it, as my question doesn't target any specific platform.

+ +
    /// <summary>
+    ///     Create new or update existing To-Do.
+    /// </summary>
+    /// <param name=""todoToInsertOrUpdate"">To-Do to be updated or created.</param>
+    /// <param name=""id"">To-Do identifier.</param>
+    /// <response code=""201"">To-Do was created. Returns created To-Do.</response>
+    /// <response code=""200"">To-Do was updated. Returns updated To-Do.</response>
+    /// <response code=""204"">To-Do wasn't updated as data hasn't changed.</response>
+    /// <response code=""500"">Returns on unexpected server error.</response>
+    [ProducesResponseType(201)]
+    [ProducesResponseType(200)]
+    [ProducesResponseType(204)]
+    [ProducesResponseType(500)]
+    [Route(""{id:int}"")]
+    [HttpPut]
+    public async Task<ActionResult<ToDoModel>> CreateUpdate(int id,
+        [FromBody] ToDoInsertUpdateModel todoToInsertOrUpdate)
+    {
+        if (!ModelState.IsValid) { return BadRequest(); }
+
+        // Try to get To-Do from database based on id
+        ToDoModel foundToDo = await _toDoAppDbContext.ToDos.SingleOrDefaultAsync(toDo => toDo.Id == id);
+
+        if (foundToDo == null) // If To-Do wasn't found than new one will be created
+        {
+            var toDo = _mapper.Map<ToDoModel>(todoToInsertOrUpdate);
+            await _toDoAppDbContext.ToDos.AddAsync(toDo);
+            await _toDoAppDbContext.SaveChangesAsync();
+            // Return status code 201 with the new created resource in a response body and location in the header
+            return CreatedAtAction(nameof(Get), new {id = toDo.Id}, toDo);
+        }
+
+        _toDoAppDbContext.Entry(foundToDo).CurrentValues.SetValues(todoToInsertOrUpdate);
+        // Get count of updated rows
+        int changedRows = await _toDoAppDbContext.SaveChangesAsync();
+
+        // If more than zero rows than resource was updated, this case it returns status code 200,
+        // otherwise resource wasn't updated in this case it returns status code 204
+        return changedRows != 0 ? foundToDo : (ActionResult<ToDoModel>) NoContent();
+    }        
+
+ +

I have two questions:

+ +
    +
  • Is what I have understood about the PUT method specification (described above) correct?
  • +
  • When I call PUT http://api/todos/8 on a non-existing resource, it will create a new one. But because the last segment is in this case a primary key generated by the database engine, the API will return the new resource with a different key. Is this correct behavior?
  • +
+ +

Note: Please don't post any RFC specification snippets as an answer. I have really looked into this, and I would welcome a very concrete answer based on best practice or some behavior accepted by the community.

+",132561,,132561,,1/30/2019 22:15,1/30/2019 22:15,How to implement HTTP PUT correctly,<.net>,1,3,,,,CC BY-SA 4.0,,,,, +386378,1,,,1/30/2019 20:09,,0,226,"

I need to design a system that will take jobs (basically calling some web api) and run them at a future time (15 - 45 minutes depending on the job).

+ +

The first idea I had was to store the job and the timestamp of when it needs to run in some DB, and then have workers running each minute looking for jobs to run (select ... where timestamp <= now() limit 1). But then I'd have to store a state (created, in-progress, finished), have another cron job checking for jobs that got stuck in-progress because the worker died, and have some other cron job deleting finished jobs.
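
To make that concrete, here is a rough sketch of the polling idea (plain Java, with an in-memory map standing in for the database table; all names are invented for illustration):

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the polling approach described above. The in-memory map stands in for the
// job table; in a real setup the state transitions would be UPDATE statements guarded by the
// current state so two workers cannot claim the same job.
class PollingScheduler {
    enum JobState { CREATED, IN_PROGRESS, FINISHED }

    static final class Job {
        final String apiUrl;
        final Instant runAt;
        volatile JobState state = JobState.CREATED;
        Job(String apiUrl, Instant runAt) { this.apiUrl = apiUrl; this.runAt = runAt; }
    }

    private final Map<String, Job> jobs = new ConcurrentHashMap<>();
    private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();

    void schedule(String id, String apiUrl, Instant runAt) {
        jobs.put(id, new Job(apiUrl, runAt));
    }

    void start() {
        // the worker that runs each minute and picks up jobs whose run time has passed
        poller.scheduleAtFixedRate(this::runDueJobs, 0, 1, TimeUnit.MINUTES);
    }

    private void runDueJobs() {
        Instant now = Instant.now();
        for (Job job : jobs.values()) {
            if (job.state == JobState.CREATED && !job.runAt.isAfter(now)) {
                job.state = JobState.IN_PROGRESS; // a separate sweep would reset jobs stuck here
                callApi(job.apiUrl);              // the actual work: call the web API
                job.state = JobState.FINISHED;    // yet another sweep would delete finished jobs
            }
        }
    }

    private void callApi(String url) { /* HTTP call omitted */ }
}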

+ +

Is there a better way to do this? Is there some other software built for this kind of thing? The time of execution doesn't need to be exact, just some time after the wait expires, and it should keep the jobs persisted in case of a restart. There's not much of an issue if a job is called twice, but it should be prevented if possible.

+ +

Edit: the solution doesn't necessarily need to use a SQL database and I would actually prefer if I could avoid a SQL database altogether.

+",260962,,260962,,1/31/2019 16:39,1/31/2019 16:39,Scheduling jobs to run at a future time,,2,3,,,,CC BY-SA 4.0,,,,, +386392,1,,,1/30/2019 22:08,,2,330,"

I'm quite new to the DDD world and I'm just trying to figure out all the basics, so please bear with me!

+ +

I have the following entities:

  • Datamodel
  • Object Type
  • Object Field

+ +

A datamodel can contain 1..* object types, and each object type has a unique name and can contain 1..* object fields. A field has a certain type which is either string, int, date, OR it is a relational type:

+ +
type User {
+  name: String
+  age: Int
+  articles: Article
+}
+
+type Article {
+  name: String
+  author: User
+}
+
+ +

Now I have the following use cases:

+ +
    +
  • Add an object-type to the datamodel
  • +
  • Remove an object-type from the datamodel
  • +
  • Add the object-field to the object-type
  • +
  • Remove the object-field from the object-type
  • +
+ +

As far as I understand, all of my entities form the aggregate and the Datamodel entity is the root entity. Is that correct?

+ +

I came along with the following implementation approach:

+ +

The Datamodel entity has the ability to add object-types and holds a list of object-types:

+ +
public class Datamodel {
+    private final List<ObjectType> objectTypes = new ArrayList<>();
+
+    public void addObjectType(String name) {
+        ObjectType objectType = new ObjectType(name);
+        this.objectTypes.add(objectType);
+    }
+}
+
+ +

The Object-Type entity has the ability to add object-fields and holds a list of object-fields.

+ +
public class ObjectType {
+  private final String name;
+  private final List<Field> fields = new ArrayList<>();
+
+  public ObjectType(String name) {
+    this.name = name;
+  }
+
+  public void addField(Field field) {
+    fields.add(Objects.requireNonNull(field));
+  }
+}
+
+ +

If I now want to add a relational field to my object type, I need to ensure that the datamodel contains an object type with the respective name, and thus I need access to the list of object types within the Datamodel entity. How would I model this scenario?
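
One sketch of what I have in mind (an assumption on my part, not necessarily the correct DDD answer) is a method on the Datamodel root that checks the invariant before delegating; isRelational(), getTargetTypeName() and getName() are hypothetical helpers that do not exist in the code above:

// Hypothetical method on the Datamodel root; the helpers used here are assumed, not part of
// the code shown above. The root validates that a relational field only references an object
// type that exists in the same datamodel, then delegates to the object type.
public void addFieldToObjectType(String objectTypeName, Field field) {
    if (field.isRelational()
            && objectTypes.stream().noneMatch(t -> t.getName().equals(field.getTargetTypeName()))) {
        throw new IllegalArgumentException(); // unknown relational target type
    }
    ObjectType target = objectTypes.stream()
            .filter(t -> t.getName().equals(objectTypeName))
            .findFirst()
            .orElseThrow(IllegalArgumentException::new); // unknown object type
    target.addField(field);
}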

+ +

Is that approach correct or should the Datamodel entity be responsible for adding both object-types and object-fields?

+",327292,,,,,10/22/2020 12:02,Applying Domain Driven Design - Model/Implementation,,1,7,,,,CC BY-SA 4.0,,,,, +386398,1,,,1/31/2019 2:32,,-1,33,"

I have a site where users log in. I need to calculate a log-in activity score based on how recently they logged in. For example:

+ +
    +
  1. If a user logs in today, they will have a higher score than a user who logged in yesterday or some time in the past.

  2. +
  3. Similarly, a user who logged in this week will have a higher score than a user who logged in last week.

  4. +
  5. Another use case: a user who logged in on more days in a week should be considered more active.

  6. +
+ +

I am running out of ideas on how to calculate an activity score based on the above kind of rules. Is there any solution available on Google for this (I did not find any with my search)?
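
For illustration only, one possible way to turn such rules into a number (an assumption on my part, not an established formula) is a recency-weighted sum over login days, so that today counts fully, older days count less, and more login days always add more:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// Illustrative only: each login day contributes between 0 and 1, halving every 7 days.
// The 7-day half-life is an arbitrary choice, not something from the rules above.
// More login days => higher score; recent days dominate.
class ActivityScore {
    static double score(List<LocalDate> loginDays, LocalDate today) {
        double halfLifeDays = 7.0;
        double total = 0.0;
        for (LocalDate day : loginDays) {
            long ageInDays = ChronoUnit.DAYS.between(day, today);
            total += Math.pow(0.5, ageInDays / halfLifeDays);
        }
        return total;
    }
}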

+",260829,,,,,1/31/2019 9:16,Calculating activity score based on log in activity?,,1,2,,,,CC BY-SA 4.0,,,,, +386406,1,386409,,1/31/2019 9:11,,1,300,"

I'll take out the ellipsis to make this easier/quicker to read:

+ +

Large group project in school, stakeholders gave a fuzzy explanation as to what they want. I suggested that we:

+ +
    +
  1. sketch the GUIs so we all understand what is requested, find problems, ask each other questions, and clarify in general so we all think of the same thing
  2. +
  3. make use cases
  4. +
  5. present it all to the stakeholders for feedback
  6. +
  7. derive requirements from that
  8. +
  9. get feedback on the list
  10. +
  11. make a backlog
  12. +
  13. and then we discuss & pick tools, languages, frameworks, standards etc.
  14. +
+ +

1-2 members agree with me; the vast majority votes against it. The majority wants to:

+ +
    +
  1. discuss capabilities of several kinds of tools
  2. +
  3. then make use cases
  4. +
  5. then derive requirements
  6. +
  7. then backlog
  8. +
  9. then GUI design
  10. +
+ +

Pretty much the same thing, except that I want design first and tools last, but they want tools first and design last. Also, I want more feedback than they want (though I think I can make a case for this to them; the main issue is GUI vs. choice of tools).

+ +

They weren't convinced by my rationale (you find problems first, then pick/make solutions), and I wasn't convinced by their rationale (you can't find problems without choosing how to work, and you can't know how to design without first defining the overall back-end functionality/API). I think we will waste time preparing to use some tool and then find that we didn't need it at all, but that we need something else which we haven't spent time reading about or preparing for. But I don't know if I'm missing something out of inexperience.

+ +

What is the right way of doing this, and why?

+",293931,,,,,1/31/2019 20:42,At what stage should the (G)UI be designed?,,3,6,,,,CC BY-SA 4.0,,,,, +386413,1,,,1/31/2019 10:42,,-1,88,"

I've worked with a lot of off-the-shelf systems, and generally speaking they all, at a database level and above, have metadata such as CreatedBy/On, ModifiedBy/On, etc. against records/entities to give at least some visibility into who did what and when.

+ +

Obviously, though, these fields are fairly shallow (you'll never know who did the second-to-last edit, for example), but they can be reasonably useful.

+ +

If you are building a system from scratch that requires some level of auditing, whether it's basic Created/Modified fields, tracking every interaction with records, or a full-blown audit trail that tracks not only who did what but what the fields were before and after... is there a standard, accepted way to do each of these?

+ +

One of the issues I'm running into, especially with Created/Modified fields, is that your object model shouldn't really know or care what's happening at the application level, so trying to separate those concerns poses challenges.
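
For illustration, this is the kind of separation I have in mind (a sketch with invented names): the entity only carries the audit fields, and a hook invoked by the persistence layer fills them in, so the object model never deals with users or requests.

import java.time.Instant;

// Sketch with assumed names: entities only carry the audit fields; a hook called by the
// persistence layer fills them in, so the domain model stays unaware of who is logged in.
abstract class AuditableEntity {
    Instant createdOn;
    String createdBy;
    Instant modifiedOn;
    String modifiedBy;
}

interface CurrentUserProvider {
    String currentUser();
}

class AuditingSaveHook {
    private final CurrentUserProvider users;

    AuditingSaveHook(CurrentUserProvider users) { this.users = users; }

    // Called by the persistence layer just before an insert or update.
    void beforeSave(AuditableEntity entity, boolean isNew) {
        Instant now = Instant.now();
        if (isNew) {
            entity.createdOn = now;
            entity.createdBy = users.currentUser();
        }
        entity.modifiedOn = now;
        entity.modifiedBy = users.currentUser();
    }
}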

+ +

So I guess I'm asking, is this a solved problem? Because googling I can't seem to find a standard, universally accepted way of doing it.

+",188362,,,,,1/31/2019 15:10,"Is there an accepted way to track record metadata, CreatedOn/By etc?",,2,1,1,,,CC BY-SA 4.0,,,,, +386414,1,,,1/31/2019 12:01,,0,203,"

Scenario

+ +

I'm a new developer, using MSTest and I've encountered the following issue:

+ +
SomeClassTest // Uses a Fake Widget Controller.
+
+Test Initialize
+{
+   Many lines of code to initialize Fake Widget Controller
+}
+
+Test Method
+{
+   Uses fake widget controller
+}
+
+ +
+ +
AnotherClassTest // Uses same Fake Widget Controller.
+
+Test Initialize
+{
+   Duplicate many lines of code to initialize Fake Widget Controller
+}
+
+Test Method
+{
+   Uses fake widget controller
+}
+
+ +
+ +

Should I copy and paste the initialization code into the new class, or should I create a separate configuration class and instantiate it in each test class that requires it?
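
To make the second option concrete, here is a rough, language-agnostic sketch (plain Java rather than MSTest/C#, with invented names) of the separate configuration class idea, where the setup lines live in one place and each test class calls it from its own initialize method:

// Rough sketch (not MSTest/C#): the repeated setup lives in one factory that every
// test class can call from its own initialize method.
class FakeWidgetControllerFactory {

    static FakeWidgetController createConfigured() {
        FakeWidgetController controller = new FakeWidgetController();
        // ... the many lines of code to initialize the Fake Widget Controller, written once ...
        return controller;
    }
}

class FakeWidgetController {
    // stand-in for the fake used by the tests
}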

+ +

Or am I looking at this the wrong way?

+",327346,,73508,,2/4/2019 9:29,2/4/2019 9:29,Should I create a shared Test Initilization object to initialize multiple test classes?,,3,1,,,,CC BY-SA 4.0,,,,, +386419,1,386424,,1/31/2019 13:47,,2,464,"

I was reading here about OOP and methods, and the accepted answer states that method names should be verbs. However, that doesn't really answer my question.

+ +

Suppose I had a Character class with a private List inventory.

+ +
public class Character {
+
+     private List<GameItem> inventory;
+
+     // constructor and other methods left out.
+
+     public boolean checkInventoryfor(GameItem item)
+}
+
+ +

Now, suppose that at various points in my game the player wants to check their inventory for a specific item; checkInventoryfor(GameItem item) can be called.

+ +

I've been told that the name is a code smell, because it gives away the fact that the class has a collection (inventory) that needs to be checked. A better name would be has(GameItem item). has() flows better in terms of language, because most likely you'll have an if statement that reads if(character.has(sword)) { // rest of code here }.

+ +

Either way, you know that you're querying a collection. Of what type? We don't know, and we don't care. Neither method tells you; all either returns is a boolean.

+ +

Can method names leak implementation details and break encapsulation?

+",,user327264,,user327264,2/1/2019 14:25,2/1/2019 14:25,Can method names give any implementation details and break encapsulation?,,4,0,,,,CC BY-SA 4.0,,,,, +386420,1,,,1/31/2019 13:50,,0,63,"

I have 3 software components of a Web Application:

  1. JS-client application 1 (JSApp1)
  2. Java Spring (REST and Websockets) server app (SRV)
  3. JS-client application 2 (JSApp2)

+ +

My current task is to send some data from JSApp1, save it in the SRV DB, and respond to JSApp1 indicating whether the operation was successful or not. I'm using a usual HTTP POST request for that. But in order to save this data, SRV needs to ask JSApp2 for permission; I use websockets for that purpose. Having done that, I would lose the connection to JSApp1, because websockets are not synchronous at all.

+ +

Is there any workaround for this issue? I mean keeping the HTTP POST request open while doing a send/receive operation over websockets, or anything else.

+ +

Thanks in advance.

+",292078,,,,,1/31/2019 13:50,Two Web clients and Java server (HTTP and Websockets),,0,2,,,,CC BY-SA 4.0,,,,, +386421,1,,,1/31/2019 14:00,,2,120,"

Would you consider

+ +
WHERE SomeDate <= '2018-01-01 23:59:59'
+
+ +

instead of

+ +
WHERE SomeDate < '2018-01-02'
+
+ +

When the intention is that SomeDate goes no further than 2018-01-01

+ +

a code smell, and if yes, then why?

+",71811,,71811,,1/31/2019 14:02,2/2/2019 16:01,Date comparison with the last second of a day,,4,5,,,,CC BY-SA 4.0,,,,, +386428,1,,,1/31/2019 15:12,,0,115,"

I am wondering about the Gherkin syntax for some scenarios. Suppose I have the following events: A, B{1}, B{2}, C, D1, D2, G, where uppercase{number} events like B1, D2 are parallel (simultaneous) events and plain uppercase events like A, G are normal events. The symbol || means OR and && means AND.

+ +

Now, how could I write unit specifications for the following scenarios?

+ +
 1. Arrange Event: P Act Event: Q Assert Event: C
+ 2. Arrange Event: P Act Event: B1, B2, B3 Assert Event: C && D
+ 3. Arrange Event: P || X || Y Act Event: B1, B2, B3 Assert Event: C || D
+ 4. Arrange Event: P || X && Y Act Event: B1, B2, C Assert Event: D {C only happens after B1 and B2}
+ 5. Arrange Event: P && X Act Event: B, C||D Assert Event: E {Either C or D have to happen one after another}
+ 6. Arrange Event: P Act Event: B, C&&D Assert Event: E {{Both C and D have to happen one after another}
+ 7. Arrange Event: P Act Event: B1||B2, C1&&C2, E Assert Event: F {Either of the events B1 or B2 happens simultaneously; afterward both C1 and C2 have to happen simultaneously}
+ 8. Arrange Event: P Act Event: B1||B2, C&&D, E Assert Event: F {Either of the event B1 or B2 happens simultaneously, afterward, both C and D have to happen one after another}
+
+",260628,,260628,,1/31/2019 15:26,1/31/2019 15:46,Gherkin Syntax and Unit Specification,,1,2,,,,CC BY-SA 4.0,,,,, +386432,1,,,1/31/2019 16:24,,2,166,"

I am looking into improving my overall application architecture and (I think) I understand the issues my Anemic Models are causing.

+ +

Here is my current architecture:

+ +
    +
  • Controller with injected Service and Controller Action accepting a DTO
  • +
  • Service with injected repository. The service function takes the DTO, calls the repository, does whatever it needs to do with the entity (let's say update properties on the entity) and saves it.
  • +
+ +

What's wrong with above:

+ +

In this setup, all entity properties have public getters and setters; there is no encapsulation, no OOP, and no code reusability. When I need to update this entity in another service (or any other method, for that matter), I have to write the same code again.

+ +

How to fix this:

+ +

Obviously this is not ideal. I've decided it would make sense to make the Entity Rich, by adding an Update method to allow me to reuse my code throughout all my services.
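
As a rough sketch of what I mean (names invented for illustration), the update rules would live on the entity itself, so every service reuses the same path:

// Illustrative sketch only: the entity exposes behaviour instead of public setters,
// so the validation/update rules are written once and reused by every service.
class Customer {
    private String email;
    private boolean active = true;

    // Callers say what happened; the entity enforces the rules.
    void updateContactDetails(String newEmail) {
        if (newEmail == null || newEmail.indexOf('@') < 0) {
            throw new IllegalArgumentException(); // invalid email address
        }
        this.email = newEmail;
    }

    void deactivate() {
        this.active = false;
    }
}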

+ +

Confusion:

+ +

Where I am confused is,

+ +
    +
  • How does my Rich Entity communicate with the Service?
  • +
  • Should my Update method in the Entity take the IEntityRepository as parameter so I can call repository.Save() in the Entity?
  • +
  • Should repository.Save() be called in the Service?
  • +
  • What if I need to call an Http API from the service because of some specific action happening in the Update in the Entity? Should I call the Http API from the Entity Update() method?
  • +
+ +

Thanks tons for any directions on how to deal with this.

+",41076,,,,,1/31/2019 18:11,Migration from Anemic Models to Rich Models,,4,4,1,,,CC BY-SA 4.0,,,,, +386461,1,,,2/1/2019 6:12,,0,358,"

I need some guidance in designing an API wrapper for my backend APIs. I have tried to keep it as specific as possible.

+ +

Context: We have a project which supports certain file operations like edit, create, merge, etc. All the services are exposed as REST APIs. Now I have to create an API wrapper over this (a client library) in Java. I've been reading about DDD and trying to approach the problem using that.

+ +

As per my thinking, the core object in my project would be File, along with some minor DTOs for talking to the backend. Edit, create, and merge will be the verbs here acting on my domain object. I want to make it as easy as possible for the external developer to integrate the API. I would like the design to be something like this:

+ +

For creating a file: File.create()

+ +

For editing: File.edit()

+ +

Same for other operations

+ +

Also, I want to have the capability of chaining operations (along the lines of fluent interfaces) for readability

+ +

For example, if you want to create a file and then convert it, it should be something like: File.create().convert(required params)

+ +

My problem is that each of the operations is bulky and async. I don't want to write all the async error handling logic in the File class. Chaining the methods like above won't be easy either if they return CompletableFuture objects, and this will be harder to maintain.
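
To show the tension, here is a rough sketch (all names invented) of what the chain looks like once every operation returns a CompletableFuture:

import java.util.concurrent.CompletableFuture;

// Rough sketch with invented names: once operations return CompletableFuture,
// the fluent chain turns into thenCompose calls, and error handling ends up
// in the chain rather than inside the File class.
class FileApiSketch {

    static class RemoteFile {
        final String name;
        RemoteFile(String name) { this.name = name; }

        CompletableFuture<RemoteFile> convert(String format) {
            // placeholder for the real async REST call
            return CompletableFuture.completedFuture(this);
        }
    }

    static CompletableFuture<RemoteFile> create(String name) {
        // placeholder for the real async REST call
        return CompletableFuture.completedFuture(new RemoteFile(name));
    }

    // What the fluent File.create().convert(...) turns into:
    static CompletableFuture<RemoteFile> createAndConvert(String name, String format) {
        return create(name)
                .thenCompose(file -> file.convert(format)) // chaining requires thenCompose
                .exceptionally(error -> {
                    // central place for async error handling, outside the domain object
                    return null;
                });
    }
}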

+ +

Question: What is a better way of solving this problem?

+ +

I am not looking for a spoon-fed design. I just want to be guided to a design approach which fits the scenario. Feel free to point it out if I am understanding DDD wrong.

+",327425,,327425,,2/1/2019 10:49,10/23/2020 20:07,How to design an API wrapper with bulky operations on domain object? (Need guidance),,2,0,,,,CC BY-SA 4.0,,,,, +386463,1,,,2/1/2019 6:21,,1,94,"

We have an already running MQTT setup for communication between smart home devices and a remote server, for remotely controlling the devices. Now we want to integrate our devices with Google Home and Alexa. These two use HTTP for communication with third-party device clouds.

+ +

I have implemented this for Google Home: after the device cloud receives the request, the request is converted to MQTT. This MQTT request is then sent to the smart home device. The device cloud waits a few seconds to receive a reply from the smart home device. If no reply is received within the predefined time, it sends a failure HTTP response to Google Home; otherwise it sends the received reply.
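
For reference, the bridging described above boils down to something like this sketch (plain Java, no real MQTT client, names are illustrative): park a future per request, complete it from the MQTT reply handler, and turn a timeout into the failure response.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch of the request/reply bridging described above.
class HttpToMqttBridge {
    private final Map<String, CompletableFuture<String>> pendingReplies = new ConcurrentHashMap<>();

    // Called for each incoming HTTP request from Google Home / Alexa.
    String handleHttpRequest(String deviceCommand) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pendingReplies.put(correlationId, reply);
        publishToMqtt(correlationId, deviceCommand);   // forward the request over MQTT
        try {
            return reply.get(5, TimeUnit.SECONDS);     // wait a few seconds for the device reply
        } catch (TimeoutException timedOut) {
            return null;                               // map this to the failure HTTP response
        } finally {
            pendingReplies.remove(correlationId);
        }
    }

    // Called by the MQTT subscription when the device reply arrives.
    void onMqttReply(String correlationId, String payload) {
        CompletableFuture<String> reply = pendingReplies.get(correlationId);
        if (reply != null) {
            reply.complete(payload);
        }
    }

    private void publishToMqtt(String correlationId, String payload) {
        // actual MQTT publish omitted
    }
}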

+ +

Is there a better way to handle this? Since this is a commercial project I want to get this implemented in the correct way.

+",327352,,134647,,2/3/2019 5:12,2/3/2019 5:12,Converting HTTP requests to MQTT and back again for smart home integration,,1,0,,,,CC BY-SA 4.0,,,,, +386469,1,,,2/1/2019 9:58,,1,207,"

I'm refactoring the framework of our company, trying to fix the issues we had in the past.

+ +

We're a team of 6 developers, and we have various needs and issues in regards to tidying up our framework.

+ +

Right now, we want it to be the solution we import in all our projects, because it has all the code we reuse every time. The solution would contain several projects, one for each feature. For example, we don't want to have our social network classes in all our projects, so those are in a separate project. But all our string extensions or authentication classes can be in the main project, since they're pretty much needed every time.

+ +

So in the end this solution will contain a lot of code, a lot of projects, and a lot of namespaces. That is good, tidy, and organized, but stuff becomes hard to find.

+ +

My question is: as a developer, how do I know if the code I need already exists in that framework? Let's say I need to resize an image, for the sake of the example. I'll maybe search the code for Image and find about a thousand results. I'll search for the word size and face the same problem, so I'll try resize or resizing and see that the code does not exist in the few results I've got. Right? No.

+ +

In the end, it already existed, but was called changeSize(), and now I've written the same code twice, under two different names. I've placed it in a namespace/folder called extensions, and let's say the other one was in graphics.

+ +

Now the problem is, I didn't know it existed; I tried to find it and couldn't. Where I looked manually it wasn't there, so in all good faith I made a mistake. How can this be avoided? Should everyone know the complete framework? Could we ask around? Could we write complete documentation? That seems somewhat unreasonable, because let's face it, we won't have the time or motivation/dedication to just read code for the sake of knowing it exists, asking every time I need something is quickly going to get out of hand and annoying, and the documentation has to be both maintained and checked, which comes back to the first problem: not enough time, not enough discipline.

+ +

This is pretty bad, but I must admit that is the reality of where we are. We've come up with some solutions, but it's not enough:

+ +
    +
  • Every time you want to add code to the framework, it has to be through pull requests, which everyone has to read and approve. You don't have to review, just browse, so you might remember it. That way, learning the framework will come slowly, but it will come eventually. That only helps for future changes, but it's something.

  • +
  • We add a series of tags for each class which we consider valuable keywords, like on Stack Exchange, to help people find the classes they need. Those tags will be extracted because they're documented, and so you should be able (haven't tested yet) to find classes by tags in the docs, or just use Find solution-wide and see the results for the various tags you could type. Example in the following image:

  • +
+ +

+ +

This is not perfect, it has flaws, but that's where we're at.

+ +

Finally, my question: how do we improve on this? How do you guys do it? We keep writing the same code twice or even forgetting stuff exists. And even if we know it exists, we sometimes struggle to find what we wrote ourselves, just because there are so many classes and different names one thing could be called.

+",151303,,,,,2/1/2019 15:49,How to find already existing code | How to arrange code in a way it can be found again,,2,5,,43497.7375,,CC BY-SA 4.0,,,,, +386474,1,386476,,2/1/2019 11:07,,0,380,"

Which approach should be taken when adding entities to an aggregate?

+ +

I could add a constructed entity to the aggregate:

+ +
$book = new Book($title, $releaseDate, ...);
+$library->addBook($book)
+
+ +

or I could pass the parameters to the addBook method and let the method construct the instance:

+ +
$library->addBook($title, $releaseDate, ...);
+
+",303514,,,,,2/1/2019 11:19,DDD add entity to aggregate: formed entity vs parameters,,1,0,,,,CC BY-SA 4.0,,,,, +386478,1,386487,,2/1/2019 12:38,,0,779,"

I am trying to understand the difference between an event log and event store in the context of event sourcing.

+ +

A really nice explanation I found is that an event store is the single source of truth for the write model and an event log is the single source of truth for the read model.

+ +

Therefore, say I wanted to use CQRS without Event Sourcing; the way I understand it is that I would have an event log write model (domain objects serialised to an SQL database) and a NoSQL (MongoDB) read model. The two databases could be in separate API projects communicating using RabbitMQ.

+ +

On that basis, it is not a bad idea to use an EventLog when using CQRS without Event Sourcing, i.e. when the business is not interested in replaying historic events (in my case the system only has one event).

+ +

Is it acceptable for the EventLog to be the write database? i.e. it would contain serialized domain objects.

+ +

Have I understood this correctly?

+",65549,,65549,,2/1/2019 12:58,2/1/2019 15:10,CQRS with an Event Log and without Event Sourcing,,1,3,1,,,CC BY-SA 4.0,,,,, +386479,1,,,2/1/2019 13:06,,2,404,"

It's been a long time since I was first introduced to Event Storming in a DDD workshop. More recently we decided to apply it in practice and we have planned our first sessions with a facilitator (someone from the company who has experience).

+ +

The only thing I'm wondering now is how the process AFTER event storming takes place. How does one take the domain knowledge gathered during an event storming session and make sure that it's well documented for other people to consult?

+ +

Are there modelling techniques to document the outcome? Do I just take pictures? Should we start the software architecture process straight away and let the code speak for itself?

+",2478,,,,,11/1/2019 13:01,How to document the outcome of an event storming session,,1,1,,,,CC BY-SA 4.0,,,,, +386481,1,,,2/1/2019 13:30,,0,149,"

I am building a CLI tool which will potentially support many commands. Ideally, I want to abstract out each command to implement an interface that demands a ""run"" method. From there on, it would be a simple regex match that delegates to the appropriate command class.

+ +

The issue is, not all of these commands are stateless. One command may modify program state in a way that a future command relies on. For example, a command will build a search index that a subsequent command uses.

+ +

I have a main CLI class handling the regex matching, and independent ""Command"" classes implementing the RunnableCommand interface.
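
To make the structure concrete, here is a simplified sketch (names invented; the shared state is pulled out into its own object purely for illustration):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Simplified sketch with illustrative names: the CLI owns the regex dispatch and the mutable
// program state (e.g. the search index) that one command builds and a later command reads.
class CliSketch {

    static class SessionState {
        Object searchIndex; // built by one command, used by another
    }

    interface RunnableCommand {
        void run(SessionState state, String input);
    }

    private final SessionState state = new SessionState();
    private final Map<Pattern, RunnableCommand> commands = new LinkedHashMap<>();

    void register(String regex, RunnableCommand command) {
        commands.put(Pattern.compile(regex), command);
    }

    void dispatch(String input) {
        for (Map.Entry<Pattern, RunnableCommand> entry : commands.entrySet()) {
            if (entry.getKey().matcher(input).matches()) {
                entry.getValue().run(state, input);
                return;
            }
        }
    }
}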

+ +

Is it reasonable to pass the instance of the CLI class to individual ""Command"" instances in order to mutate state? (whether by getters/setters or by making CLI fields public). In C++ I would use a friend class for the commands, but that is not available here.

+",327468,,,,,2/3/2019 2:14,Designing a modular CLI tool,,2,1,,,,CC BY-SA 4.0,,,,, +386485,1,386486,,2/1/2019 14:38,,2,2282,"

motivation and context

+ +

I am designing and implementing the web interface of my bismon server program (a research prototype; in a few words: an orthogonally persistent, reflexive, homoiconic, dynamically-typed, multi-threaded, domain-specific language for static source code analysis; it runs on Linux only). It is free software (on http://github.com/bstarynk/bismon ...), GPLv3+ licensed, still officially unreleased (so 𝛂 stage). A draft report (a draft deliverable for some H2020 research project) describing it in some details is available on http://starynkevitch.net/Basile/bismon-chariot-doc.pdf (whose section §4.1 describes the web interface internals).

+ +

The web interface is for some kind of syntactic editor (grossly speaking, imagine ""emacs"" thru HTTP). I want it to be a single-page web application. I am not very familiar with web technologies (even if I read dozen of books about them). I am using the HTTP server library libonion in bismon, so the bismon process becomes a specialized web server.

+ +

My bismon is not a usual web application: it is expected to be used, by a small team (3 to 10 colleagues working on the same project) of trusted people. So there are only a dozen of browsers connected -on a trusted local area network- to a given bismon process. During several months, I would be the only user of that bismon thru the http://localhost:8086/ URL (but I might open that URL in two tabs of the same firefox browser window). There is already a login mechanism which sets a BISMONCOOKIE cookie identifying the web session. And a websocket is used. The web browser is Firefox 65 on Linux, if that matters.

+ +

question

+ +

how can such a Web application distinguish two tabs on the same browser?

+ +

My understanding is that both tabs will share the same URL (e.g. http://localhost:8086/) and the same cookie (e.g. BISMONCOOKIE being the same string, such as n000041R970099188t330716425o_6IHYL1fOROi_58xJPnBLCTe and that string corresponds to some web session object inside the bismon server). The two tabs could show two different views into the global state of my bismon.

+ +

Is there some programmatic way, browser side, to uniquely identify a tab? What exactly happens at the level of the HTTP protocol itself? I don't know of any HTTP request header field uniquely identifying the tab. Or is it known in the DOM? How?

+ +

Let's suppose, for simplicity, that there is some equivalent of <a href='http://localhost:8086/'>link</a> in the DOM (of the dynamic page served by the bookmarked http://localhost:8086/ URL). How can the bismon server separate the action ""click to follow the link"" from the menu ""open link in new tab"", assuming a recent Firefox browser? My incomplete understanding is that the same HTTP exchange happens in both cases, and I really want to separate them.

+ +

It is well known that a modern browser (like Firefox) makes several TCP connections simultaneously for the same tab, so I cannot use that reliably.

+ +

As a simpler example, this very question has some ""edit"" link. What is happening server-side or protocol-side when I ""open that link in other tab"" two times?

+ +

(Perhaps I need to use the websocket, which would be reopened in every different tab and might not be shared between tabs? This could be an answer.)

+ +

PS. Queinnec's paper Inverting back the inversion of control or, Continuations versus page-centric programming could be related to my question, but I don't yet understand how.

+",40065,,40065,,2/1/2019 15:44,2/1/2019 16:53,can two web browser tabs be distinguished in a single-page application?,,3,6,,,,CC BY-SA 4.0,,,,, +386492,1,386496,,2/1/2019 16:25,,1,105,"

Suppose the client wants to build an online shopping system. If we think of a use case scenario, a database or a bank would be secondary actors to this system.

+ +

Is it valid to have a user story like

+ +
 As a customer I want to be able to access the database so that I can view catalog items
+
+ +

Or

+ +
As a customer, I want to access the bank so that I can pay by credit card.
+
+ +

I think the above does not seem valid, but please advise, as I am very new to this field. Thanks

+",302389,,,,,2/1/2019 16:55,User story - Discussing the secondary actors of a use case,,2,0,,,,CC BY-SA 4.0,,,,, +386497,1,,,2/1/2019 17:19,,4,263,"

Suppose I have a use case, like this example.

+ +
Normal Flow:
+The user will indicate that she wants to order the items that have already 
+been selected.
+The system will present the billing and shipping information that the user 
+previously stored.
+The user will confirm that the existing billing and shipping information 
+should be used for this order.
+The system will present the amount that the order will cost, including 
+applicable taxes and shipping charges.
+The user will confirm that the order information is accurate.
+The system will provide the user with a tracking ID for the order.
+The system will submit the order to the fulfillment system for evaluation.
+The fulfillment system will provide the system with an estimated delivery 
+date.
+The system will present the estimated delivery date to the user.
+The user will indicate that the order should be placed.
+The system will request that the billing system should charge the user for 
+the order.
+The billing system will confirm that the charge has been placed for the 
+order.
+The system will submit the order to the fulfillment system for processing.
+The fulfillment system will confirm that the order is being processed.
+The system will indicate to the user that the user has been charged for the 
+order.
+The system will indicate to the user that the order has been placed.
+The user will exit the system.    
+
+ +

I would be grateful if you could let me know of a procedure or guideline that helps in converting this to a use case diagram, more specifically the use case bubbles.

+ +

Going back to the example, I think I would have one use case bubble named place order, and I would probably have another bubble named payment, and there is an ""include"" relation from payment to place order. But I hope there is something more formal than intuition to help me come up with a more accurate diagram. Thanks!!

+",302389,,209774,,11/9/2019 20:19,11/9/2019 20:19,How to convert a use case to a use case diagram?,,2,2,2,,,CC BY-SA 4.0,,,,, +386498,1,,,2/1/2019 18:50,,3,298,"

I have a website with basic CRUD operations that involve data and photos. I also need to extract the metadata for the photos that are being uploaded. My original implementation did not have var puts = new List<Action>(); so the following code

+ +
puts.Add(async () => await _s3Client.PutObjectAsync(BucketName, key, file.OpenReadStream(), file.Length, file.ContentType));
+
+ +

was originally written as

+ +
await _s3Client.PutObjectAsync(BucketName, key, file.OpenReadStream(), file.Length, file.ContentType);
+
+ +

Note: I'm using AspNetBoilerplate, so a few things aren't shown in the code.

+ +
    +
  • SaveChanges() is called as long as the method is completed and no exception is thrown.
  • +
  • If an exception is thrown, the database does not get hit.
  • +
  • I am NOT handling exceptions thrown by ImageMetadataReader as this is handled via AspNetBoilerplate and using this as a way to know if all photos had metadata.
  • +
+ +

Consider having 3 files: 2 with metadata, the last without.

+ +

With my previous implementation, the first 2 files would be placed in my bucket, the 3rd would fail, I would have no entries in my DB table, and an error would be shown to the user.

+ +

With the current implementation, I queue up the PutObjectAsync() calls into a List<Action>, and once all the files are processed, I process the puts.

+ +

Everything seems to work, but I wonder if it's a good design or if there's a better way.

+ +
    private async Task Create(CreateOrEditSnowEntriesDto input)
+    {
+        var snowEntries = ObjectMapper.Map<SnowEntries>(input);
+        var entryId = await _snowEntriesRepository.InsertAndGetIdAsync(snowEntries);
+
+        await _s3Client.EnsureBucketExists(BucketName);
+
+        var puts = new List<Action>();
+        foreach (var file in input.Files)
+        {
+            if (file.Length <= 0) continue;
+
+            var fileName = Path.GetFileName(file.FileName);
+
+
+            DateTime? date = null;
+            GeoLocation location = null;
+
+            var metadata = ImageMetadataReader.ReadMetadata(file.OpenReadStream());
+            if (metadata.Any(a => a.GetType() == typeof(ExifSubIfdDirectory)))
+            {
+                var dateDirectory = metadata.OfType<ExifSubIfdDirectory>().FirstOrDefault();
+
+                if (dateDirectory?.ContainsTag(ExifDirectoryBase.TagDateTimeOriginal) ?? false)
+                    date = dateDirectory?.GetDateTime(ExifDirectoryBase.TagDateTimeOriginal);
+            }
+
+            if (metadata.Any(a => a.GetType() == typeof(GpsDirectory)))
+            {
+                var gpsDirectory = metadata.OfType<GpsDirectory>().FirstOrDefault();
+                location = gpsDirectory?.GetGeoLocation();
+            }
+
+            // EntryId/File.ext
+            var key = $""{entryId}/{fileName}"";
+            puts.Add(async () => await _s3Client.PutObjectAsync(BucketName, key, file.OpenReadStream(), file.Length, file.ContentType));
+
+            await _snowEntries_PhotosRepository.InsertAsync(new SnowEntries_Photos()
+            {
+                EntryId = entryId,
+                FileName = fileName,
+                Key = key,
+                DateTaken = date,
+                LocationTaken = location?.CreateCoordinates()
+            });
+        }
+
+        foreach (var put in puts)
+        {
+            put.Invoke();
+        }
+    }
+
+",308929,,308929,,2/1/2019 18:56,10/25/2020 1:07,Queueing async code to execute later,,2,1,,,,CC BY-SA 4.0,,,,, +386502,1,,,2/1/2019 20:55,,2,43,"

I recently came across a demand to create a web system (which will be done in PHP, but the question applies to any language) that has the same functionality as an existing mobile application. This app has a complete and well-written API in NodeJS.

+ +

Given this situation, I wondered about the possibility of using the API in the system too, rather than connecting the system directly to the database and rewriting all the functions already in the API.

+ +

To better illustrate what I mean, here are the two methods:

+ +

More common:

+ +

System -> Database

+ +

APP -> API -> Database

+ +

That is, a system in PHP connects directly to the database, while an app (for security reasons and in order to avoid reverse engineering) has the API as an intermediary. However, considering that I have a fully functional API here, I thought of this new method:

+ +

System -> Api -> Database

+ +

APP -> API -> Database

+ +

The major drawback I see in doing this is losing performance, since a direct connection would be much faster than making a request and handling the response. However, the advantage of having just one area to maintain (the API) catches my attention.

+ +

What other disadvantages would I have in this method?

+",327507,,327507,,2/1/2019 21:03,2/1/2019 21:03,"Creating backend system, over API",,1,0,,,,CC BY-SA 4.0,,,,, +386504,1,386506,,2/1/2019 21:21,,5,108,"

I'm looking for some sort of standardisation, in a similar vein to POSIX for compatibility and familiarity between different command-line interfaces, but for error reporting. Notably, I'm looking for:

+ +
    +
  • formatting rules
  • +
  • standardisation of logging practices
  • +
  • rules for error code choice
  • +
  • good habits on how much, and what to include in your error report to the user
  • +
  • design decisions on providing additional metadata on the state of the software, should the user wish to see it or report it.
  • +
  • multi-language support (i18n)
  • +
+ +

etc...

+",301801,,,,,2/1/2019 22:39,Is there some sort of standardisation for error reporting?,,2,1,,43499.775,,CC BY-SA 4.0,,,,, +386509,1,386538,,2/2/2019 0:33,,1,150,"

Consider a ""large-ish"" data set (~2-5M rows) that goes through multiple stages of cleaning/processing:

+ +
library(dplyr)
+largedat %>%
+  mutate(
+    # overwrite v1 based on others
+    v1    = somefunc(v1,v2,v3,v4),
+    errv2 = anotherfunc(v2,v5)
+) %>%
+  group_by(v5) %>%
+  mutate(
+    v6    = otherfunc(v7,v8,v9),
+    errv7 = fourthfunc(v7,v9)
+  ) %>%
+  ungroup() %>%
+  mutate(
+    v2 = if_else(errv2, NA, v2),
+    v7 = if_else(errv7, NA, v7)
+  )
+
+ +

With some hand-waving, assume that there is sufficient need to keep things broken out like this (and that some portions might be faster if done manually in base R). The two functions here are clearly ""functional"" in that they have no side effects, are given explicit vectors of arguments, and output a vector of the same length (or 1). In a sense, clean. There is also the potential for lots of copying of the data (depending).

+ +

Using data.table, where in-place operations are standard, the side effect is by design and an intentional decision that provides considerable improvements in memory and speed.

+ +

A more ""functional"" approach is still quite possible:

+ +
library(data.table)
+setDT(largedat)
+largedat[, newv1 := somefunc(v1, v2, v3, v4)]
+errv2 <- largedat[, anotherfunc(v2,v5)]
+largedat[, v6 := otherfunc(v7,v8,v9)]
+# ...
+# eventually using the changes
+largedat[, c(""v2"", ""v7"") := list(ifelse(errv2, NA, v2), ifelse(errv7, NA, v7)) ]
+
+ +

This still preserves the functional and side-effectless use of the functions, but can be slightly cumbersome. If we understand that at least one of these functions outputs a full data.table instead of just a vector, it gets a little more complicated, especially when we're grouping with by=""..."" (which does not preserve order in the functional output) (ref: https://stackoverflow.com/q/11680579/3358272).

+ +

Another attempt might be to adapt the functions to be in-place operators, something like:

+ +
somefunc(largedat)    # replaces v1
+anotherfunc(largedat) # optionally nullifies v2
+# ...
+
+ +

or perhaps

+ +
out <- largedat[, somefunc(.SD)
+         ][, anotherfunc(.SD)
+           ][, otherfunc(.SD), by = ""v5""
+             ][, fourthfunc(.SD), by = ""v5"" ]
+
+ +

For simple projects, whatever works (reliably) is often best, but for longer-living packages where flexibility and reliability are required, are there distinct (dis)advantages to the in-place side-effect-based functions as used in the last two code samples?

+",147602,,,,,2/2/2019 20:31,"best practice for data.table use in ""formal"" code",,1,0,1,,,CC BY-SA 4.0,,,,, +386510,1,,,2/2/2019 3:14,,1,112,"

I'm working on a library that lets me write operation on an input ""stream"" of data (I don't call them that, but it's a potentially unbounded input regardless, think data coming from a socket).

+ +

I might have one operation that eg: applies a frequency shift to the incoming data, and then another that applies a frequency-selective filter to that result, I'm writing in C++ so my syntax might look something like this:

+ +
input >> tune >> filter >> output;
+
+ +

My problem is that different operations might require an unknown number of data points to compute the output. So eg: tune perhaps (and can) work with an arbitrary number of inputs at a time, but filter requires some minimum number of samples before it can produce output.

+ +

The easiest answer is to run each filter in a thread, and connect them with some sort of thread-safe pipe or equivalent. If possible though, I'd like to avoid threading if I can.

+ +

Is anyone aware of an alternative pattern or research on composing streaming/batch operations on a stream without resorting to threads and blocking I/O?

+",327529,,,,,2/4/2019 13:46,Pattern for composing streaming operations without threads?,,2,8,,,,CC BY-SA 4.0,,,,, +386511,1,,,2/2/2019 4:08,,1,1082,"

+ +

I have created the above picture to illustrate my question.

+ +

Is there a section within memory (let's say from address 0x1 to 0x15) that all processes use to place their text segment in (left figure), or does each process get a random location in memory to use for its combination of heap, stack, text and data (right figure)?

+",327533,,,,,2/2/2019 18:59,"Does each process have it's own section of data, text , stack and heap in the memory?",,3,1,,,,CC BY-SA 4.0,,,,, +386519,1,,,2/2/2019 11:53,,0,43,"

Assume a Parent -> Child relationship in an RDBMS: a questionnaire table and a question table. The question table has a foreign key questionnaire_id to store which questionnaire it belongs to.

+ +

A Questionnaire can have n Questions, say max 500.

+ +

When a user creates/updates a questionnaire, the API call sends the big JSON document to the server.

+ +
questionnaire: {
+  id: 'xx'
+  name:'xxxx',
+  questions: [
+     {
+       id: 'xx',
+       txt:'xxxxxx'
+       .......
+     },
+     {
+       id: 'xx',
+       txt:'xxxxxx'
+        .......
+    },
+    {
+       id: 'xx',
+       txt:'xxxxxx'
+       .......
+    }
+
+  ]
+}
+
+ +

When we read the questionnaire, we are expected to retrieve the child questions in the same order as they came in. So the order of the questions has to be persisted somewhere.

+ +

Approach 1: Put an order column in question table and use that to order by. Problem with this approach is that updates are too slow. All child elements have to be updated with new positions when a question shuffle happens in the client.

+ +

Approach 2: Put a JSON column in the parent that stores an array of child ids in the incoming order. So now a client-side question shuffle only requires a single update query to the parent questionnaire table. The sort happens in the code. Also note that the number of questions will not exceed, say, 500.

+ +

Which approach do you think is the best and why? Also is there any better alternative approach?

+",136702,,,,,2/2/2019 15:28,"In a parent table - Child table relation , where to store order of children",,1,2,,,,CC BY-SA 4.0,,,,, +386520,1,386523,,2/2/2019 12:00,,-1,32,"

I am writing a test in pytest for a software. This test relies a lot on console output generated by the software.

+ +

The flow is something like:

  • start the program
  • send some commands to the program
  • wait for some output from the program
  • send some other commands, wait for further output

+ +

I also need to write the entire output generated to stdout, and to a file. I tried pexpect, but it doesn't seem to have the capability to write both to the console and to a file.

+ +

Any other choice?

+",327543,,,,,2/2/2019 13:18,python library for stdin stdout managing,,1,0,,43499.775,,CC BY-SA 4.0,,,,, +386526,1,386527,,2/2/2019 14:22,,-1,138,"
+

A software company develops software packages for commercial animal farming. A special function in C calculates the daily amount of feed for different kinds of animals depending on their body weight. This function has already run flawlessly for years in a software package for farms. Now it is integrated into a package for zoological gardens and sometimes causes wrong outputs. What mistakes were made and what are their consequences? Is it possible to detect them statically/dynamically?

+
+ +
typedef enum {COW, HORSE, BEAR, MONKEY, PIG} Animal_A;
+float feedquantity(Animal_A animalkind, float weight){
+    float amount, factor, woodFlourAmount;
+
+    switch(animalkind){
+        case COW:
+            factor = 0.05;
+            break;
+        case HORSE:
+            factor = 0.1;
+            break;
+        case MONKEY:
+            factor = 0.2;
+        case BEAR:
+            factor = 0.02;
+            break;
+    }
+    amount = factor * weight;
+    return amount;
+}
+
+ +

There are some mistakes I found, but I'm not sure if some of them are actually mistakes. The first problem is that in total we have 5 animal kinds, but we don't have a case for each animal kind, since PIG is missing. So the typedef value PIG is not used and seems useless. Another variable that is useless is woodFlourAmount; it is only declared, but nothing is done with it in the function. Another mistake, which is worse for the function and will cause problems, is the missing break in the case for MONKEY: once we hit the MONKEY case, it sets factor = 0.2 and then executes the next (and last) case BEAR right after, which will cause a wrong output. So in case we have a MONKEY, it's mistakenly treated as a BEAR.

+ +

Is it fine like that, or have I maybe found something which is not really a 'mistake'? I'm also not sure how you could find these mistakes statically/dynamically, or whether that is possible at all.

+",320470,,,,,2/2/2019 14:59,"static, dynamic analysis - what mistakes were made in the code?",,1,0,,,,CC BY-SA 4.0,,,,, +386536,1,,,2/2/2019 20:05,,-1,198,"

I am working on a multi-contributor project with ~2500 commits. It has multiple components, i.e. Frontend, Backend and Data Processor/DB Ingestion.

+ +

Now that we are expecting more contributors, a versioning system should be implemented, and it should be displayed in the footer of the frontend as vA.B.C, where A = Big Change, B = New Features and C = Patch Fix. But the issue is that we have a very complex multi-component piece of software with many commits.

+ +

How should I approach this?

+",327574,,,,,2/2/2019 23:08,How to use Semantic Versioning on Existing Multi Component Codebase?,,1,2,1,,,CC BY-SA 4.0,,,,, +386540,1,,,2/2/2019 22:20,,2,78,"

Here's a class with a method that calls different functions based on a parameter set in the constructor:

+ +
functions = {
+    ""arg1"": f1,
+    ""arg2"": f2,
+    ""arg3"": f3
+}
+
+class C:
+    def __init__(self, arg):
+        self.arg = arg
+    def _util_method(self, v):
+        return functions[self.arg](v)
+
+    def method1(self, v):
+        self._util_method(v)
+    def method2(self, v):
+        self._util_method(v)
+
+ +

I'm thinking of assigning util_method in the constructor depending on arg, to avoid the dictionary lookup at each call:

+ +
class C:
+    def __init__(self, arg):
+        self._util_method = functions[arg]
+
+    def method1(self, v):
+        self._util_method(v)
+    def method2(self, v):
+        self._util_method(v)
+
+ +

The class would lose readability, since the ""method"" hasn't been declared as a regular method, and finding its declaration is harder for the reader. I remember losing time trying to read code like that from a uni teacher.

+ +

Is assigning a method to a class in the constructor good practice? If not, what would be the right way of doing this without unnecessary dictionary lookup?

+",201622,,,,,2/2/2019 22:20,Is assigning a method in the constructor good practice?,,0,2,,43499.77431,,CC BY-SA 4.0,,,,, +386548,1,386592,,2/3/2019 10:57,,0,83,"

Why should someone use a stream processing engine like Apache Spark, Flink, Hadoop instead of just a normal backend worker which works on something and returns the results as soon as it's done?

+ +

A credit card fraud checking example is often given when we talk about these solutions, so what is the problem with just writing a program, putting it up as a backend service which does this for us, and returning the result?

+",252239,,,,,2/4/2019 10:03,Why use apache spark instead of just a normal worker?,,1,0,,,,CC BY-SA 4.0,,,,, +386550,1,386587,,2/3/2019 12:32,,0,955,"

I guess the goal of separating UI and business logic goes way back. I found a Martin Fowler article, from nearly 20 years ago, which is pretty clear about the benefits of the separation: https://www.martinfowler.com/ieeeSoftware/separation.pdf

+ +

I'm undertaking a development where I believe the domain layer I develop will likely live a lot longer than the interface I create, so I'd like to design the domain in such a way that a future UI developer can easily ""plug into it"".

+ +

I have a couple of important constraints:

+ +
    +
  • The UI never needs to be used outside our local network.
  • +
  • We are a ""Microsoft shop"" i.e. the technology used by developers in our company is very much focused on Microsoft technology. This is unlikely to change in the medium term.
  • +
+ +

Bearing in mind the constraints what is the simplest approach available to me, use Microsoft projects and solutions, to achieve this separation?

+ +
+ +

Edit

+ +

Some extra information: +I'll likely implement the UI using WPF or WinForms. Future implementations may use ASP web form, or whatever MS are offering for web development. +The application will have a couple of smallish (400 row) datagridviews that need updating every few seconds - I'm unsure if this makes it compute or data heavy.

+",75031,,75031,,2/4/2019 16:41,2/4/2019 21:33,Simplest Architecture to separate UI and business logic on Windows,,1,0,,,,CC BY-SA 4.0,,,,, +386554,1,386562,,2/3/2019 13:19,,3,3276,"

Onion architecture has a core which is composed by domain model, domain services and application services:

+ +

+ +

I'm in doubt about those two service layers, domain services and application services.

+ +

I've been reading that they're related to DDD but I'm not familiar with DDD myself.

+ +

I'm not asking for the relationship, just an explanation of what those two layers do, and if possible a simple example in Java.

+ +

I've read that Domain Services are services used by the domain model and Application Services are services made accessible to the outer layers. Is this correct?

+ +

So a Repository would be a Domain Service and Application Services are related to the Use Cases of the application.

+ +

All of that is still unclear to me.

+",93338,,93338,,2/3/2019 17:29,2/3/2019 18:42,What are application and domain services in onion architecture?,,2,1,,,,CC BY-SA 4.0,,,,, +386557,1,,,2/3/2019 16:28,,0,1224,"

In Javascript, there seems to be or have been an idea that undefined represents a missing primitive OR object value, while null represents just a missing object value.

+ +

See, for example, this section in Speaking JS.

+ +

The use of null in JSON, however, does not seem to obey this principle. If the value of my key is missing, JSON represents this situation as ""the key is present with the value null"". But the value could have been either a primitive or an object, so wouldn't it be more correct to those semantics to use undefined in this case?

+ +

I know JSON has no particular reason to be faithful to this aspect of Javascript now, but I'm curious about the considerations that went into this decision at the time that it was made.

+",197372,,,,,2/3/2019 16:45,What is the original reason JSON used `null` and not `undefined` to represent missing values?,,1,0,,,,CC BY-SA 4.0,,,,, +386561,1,386563,,2/3/2019 18:07,,1,135,"

I was doing some reading here and one of the suggested answers states:

+ +
+

In short: Don't try to decide how an object might react to some action + from the outside, tell it what happened, and let it deal with it.

+
+ +

This other answer suggests:

+ +
+

But in general, I recommend trying to achieve orthogonality, your code + stays more maintainable if enums do not have implicit, hidden + dependencies to something like a class hierarchy, that will become + error prone sooner or later.

+
+ +

The OP was attempting to use enums to replace instanceof.

+ +

Suppose if I have the following GameWeapon class:

+ +
class GameWeapon(ABC):
+
+    # imports and other methods left out. 
+
+    def __init__(self, name, required_strength, damage, attributes):
+        self._name = name
+        self._required_strength = required_strength
+        self._damage = damage
+        self._attributes : List[Attributes] = attributes
+
+    def contains_attribute(self, attribute):
+        return attribute in self._attributes 
+
+ +

and Character class:

+ +
class Character(ABC):
+
+    def __init__(self, name, strength, attributes):
+        self._name = name
+        self._strength = strength
+        self._attributes = attributes
+        self._inventory = []
+        self._found_attributes = []
+        self._cannot_equip = False
+        self._equipped = True
+
+    def add(self, game_object):
+        self._inventory.append(game_object)
+
+    # I may obviously want to do more checks, such as required strength,
+    # if I currently have another weapon equipped, etc..
+    def try_equip(self, game_object):
+        if self._check_for_conflicting_attributes(game_object):
+            return self._cannot_equip
+        return self._equipped
+
+    def _check_for_conflicting_attributes(self, game_object):
+        for attrib in self._attributes:
+            if game_object.contains_attribute(attrib):
+                self._found_attributes.append(attrib)
+        return len(self._found_attributes) > 0
+
+ +

and Main

+ +
def main():
+
+    wizard_attributes : List[Attributes] = []
+    wizard_attributes.append(Attributes.LongBlade)
+    wizard_attributes.append(Attributes.Reloadable)
+
+    wiz = Wizard(""Gandalf"", 100, wizard_attributes)
+
+    sword_attributes : List[Attributes] = []
+    sword_attributes.append(Attributes.LongBlade)
+    sword = Sword(""Standard Sword"", 5, 10, sword_attributes)
+
+    # Will print false
+    print(wiz.try_equip(sword))
+
+
+if __name__ == ""__main__"":
+    main()
+
+ +

Suppose I don't want my wiz to use a sword; I filter on the enum Attributes.LongBlade. I don't decide what to do outside of the Wizard class: I try to equip it and let the class decide whether it can or can't, and I don't use an enum to model my inheritance hierarchy.

+ +

Given the quotes above, is using an enum in this way acceptable?

+",,user327264,,,,2/3/2019 18:34,Is using enums to filter attributes considered a code smell?,,1,0,,,,CC BY-SA 4.0,,,,, +386566,1,,,2/3/2019 22:04,,1,67,"

So I was discussing coding with an associate of mine at work, and mentioned how I was working on a project where I'd need to transform the data that was provided into a standardized format before processing it (applying business rules, validation, etc.). He suggested that the standardization of data should be a completely separate step, or even a separate program, where the information is processed into a standardized format and then either saved or streamed to the next step.

+ +

While I'm looking at it as an unneeded separation, it may make sense to separate it if I followed a microservice architecture. I wanted to see what's considered best practice.

+",10885,,,,,2/4/2019 12:25,Should data be pre-processed before being handled by an ETL framework?,,2,0,,,,CC BY-SA 4.0,,,,, +386570,1,386573,,2/4/2019 0:47,,13,2220,"

I've heard the phrase being thrown around, and to me the arguments sound completely insane (sorry if I'm strawmanning here, it's not my intention). Generally it goes something along the lines of:

+ +
+

You don't want to create an abstraction before you know what the general case is, otherwise (1) you might be putting things in your abstractions that don't belong, or (2) omitting things of importance.

+
+ +

(1) To me this sounds like the programmer isn't being pragmatic enough: they have made assumptions that things would exist in the final program that don't, so they are working with too low a level of abstraction. The problem isn't premature abstraction, it's premature concretion.

+ +

(2) Omitting things of importance is one thing, it's entirely possible something is omitted from the spec that later turns out to be important, the solution to this isn't to come up with your own concretion and waste resources when you find out you guessed wrong, it's to get more information from the client.

+ +

We should always be working from abstractions down to concretions as this is the most pragmatic way of doing things, and not the other way around.

+ +

If we don't do so then we risk misunderstanding clients and creating things that need to be changed, but if we only build the abstractions the clients have defined in their own language we never hit this risk (at least nowhere near as likely as taking a shot in the dark with some concretion), yes it's possible clients change their minds about the details, but the abstractions they used to originally communicate what they want tend to still be valid.

+ +

Here is an example: let's say a client wishes you to create an item-bagging robot:

+ +
public abstract class BaggingRobot {
+    private Collection<Item> items;
+
+    public abstract void bag(Item item);
+}
+
+ +

We are building something from the abstractions the client used, without going into more detail with things we don't know. This is extremely flexible. I've seen this being called ""premature abstraction"" when in reality it would be more premature to assume how the bagging was implemented; let's say after discussing with the client they want more than one item to be bagged at once. In order to update my class all I need to do is change the signature, but for someone who started bottom up that might involve a large system overhaul.

+ +

There is no such thing as premature abstraction, only premature concretion. What is wrong with this statement? Where are the flaws in my reasoning? Thanks.

+",327654,,,,,1/15/2021 15:32,"What is ""premature abstraction""?",,6,1,,,,CC BY-SA 4.0,,,,, +386571,1,386604,,2/4/2019 0:54,,0,385,"

I'd like to highlight that this question is about a System Sequence Diagram (SSD) and not a simple Sequence Diagram, and of course any help would be appreciated!

+ +

I'm reading Craig Larman's book Applying UML and Patterns and I was wondering if it's possible to represent an if statement in an SSD to exit the system. For example, let's say we have a search bar in a blog site; when the user clicks on search and there is no result, I want the system to return an error message ""No results matching your criteria"" and then to exit the system. How can I represent this in a System Sequence Diagram?

+",327593,,9113,,2/4/2019 6:28,3/12/2019 11:16,Is it possible to represent an if statement in a system sequence diagram?,,2,2,,,,CC BY-SA 4.0,,,,, +386577,1,386580,,2/4/2019 6:26,,-2,200,"

I need to create a requirements model for my software (the first step of software engineering), but I don't know which UML diagram or notation can help me to build the requirements model.

+",254148,,,,,2/4/2019 7:18,requirements model and UML,,1,2,,,,CC BY-SA 4.0,,,,, +386581,1,386589,,2/4/2019 7:18,,-1,39,"

Currently we are integrating with a 3rd party system. This includes data sent to this 3rd party where the customer is able to see it. My current task is to provide visual documentation (PDF) about which values from our tool land where in the target system. What I am struggling with is the fact that screenshots of both systems require the full screen size, otherwise it is hard to read the field names.

+",326190,,,,,2/4/2019 8:41,How to write documentation about field correlations?,,1,0,,,,CC BY-SA 4.0,,,,, +386582,1,386584,,2/4/2019 7:23,,49,5641,"

Some years ago I wrote and released some software under the MIT license.

+ +

Recently I noticed that one (or some?) of the forks have altered the leading copyright notice at the top of the license, i.e.

+ +
Copyright (c) 2014 <my name>
+
+MIT License
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software...
+
+ +

to

+ +
Copyright (c) 2019 <new author>
+
+MIT License
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software...
+
+ +

It's just a small tool, but it does kind of feel bad having my name stripped from what was mostly my work.

+ +
  • Is this something that should be covered by the MIT license?
    • I.e. is removing a name violating the license? It's unclear to me if the MIT ""must remain whole"" statement includes the copyright part or just the ""MIT license"" part.
  • Did I pick the wrong license?
    • Which should I have chosen to ensure my name remains attached to my work?
  • At what point (if ever?) is it appropriate to strip an original author's name from a license?
    • I would assume never, barring what would be considered a full rewrite?
+",326769,,,,,2/4/2019 16:55,Altering author names in MIT license,,1,1,5,43500.70347,,CC BY-SA 4.0,,,,, +386594,1,,,2/4/2019 12:10,,0,1646,"

In our domain we have a concept of billable activities that get logged by a user. The modeling is pretty straightforward -- the User is an aggregate root, with the Activities being a collection under the user (an activity doesn't make any sense outside a user, and we have rules to enforce at that level).

+ +

We're in the process of a redesign and are evaluating a move to CQRS/ES. The modeling above makes sense, but there's another action that happens on the activities that is a challenge: those activities are ""sequenced"" (just assigned an auto-incrementing integer SequenceId) and eventually get batched for billing, usually based on a date range or as ""all activities since SequenceId X"".

+ +

Due to the number of transactions (10K+ users, 20K+ activities per batch) and because we're batching based on properties intrinsic to the activity itself, it doesn't make sense to batch them through the user AR. However, the act of batching an activity does need to be recorded on the activity as it impacts the business rules for a user's actions (e.g. an activity can't be edited if it's been batched).

+ +

Now, without CQRS/ES, we just query the activities and then batch-flip a status property on them. That isn't possible now with the User AR managing access to those activities. This seems like a candidate for a separate bounded context, but I'm still not sure of the right way to share the activities between the users and the batches.

+ +

This can't be an uncommon use case, so I'm hoping someone out there has tackled it before and can offer some pointers.

+",39078,,,,,2/4/2019 14:51,DDD - Modifying multiple entities under multiple aggregate roots,,2,7,1,,,CC BY-SA 4.0,,,,, +386597,1,,,2/4/2019 12:51,,0,210,"

I have 1 long running process wrapped inside a method and it is for 2 different types like below:

+ +
    +
  • Type1
  • +
  • Type2
  • +
+ +

Code:

+ +
public interface IBaseType
+    {
+        MyResult LongRunningProcess(int jobId,int noOfTimes); //doesn't save long running process data in the database; just returns the results to the consumer
+        void LongRunningProcess(int noOfTimes); //saves results of the long running process in the database; background job, on-demand as well as scheduled
+    }
+
+public class Type1 : IBaseType
+{
+    public void LongRunningProcess(int jobId,int noOfTimes)
+    {
+        try
+        {
+           //Step1 : 
+           var type1Manager =  new Type1Manager(params);
+           for (int i = 0; i < noOfTimes; i++)
+               {
+                 var con = ConnectionFactory.OpenConnection();
+                 type1Manager.Start(con);
+                //Save results of those processing
+            }
+
+          //Step2:
+          IVersioning versioning = new Versioning();
+          string version = versioning.GetVersion();
+          using (connection = new SqlConnection(connectionString))
+          {
+               connection.Open();
+               using (var transaction = connection.BeginTransaction())
+               {
+                  try
+                  {
+                     Repository.UpdateVariantVersioning(connection, transaction,jobId, version);
+                     Repository.UpdateCategoryWithVersion(connection, transaction,versioning.Category,version);
+                     transaction.Commit();
+                  }
+                  catch(Exception ex)
+                  {
+                       transaction.Rollback();
+                       //code to delete everything that has been performed in step1
+                       throw ex;
+                  }
+                }
+            }
+        }
+        catch (Exception ex)
+        {
+            Repository.UpdateErrorDetails(connectionString,jobId,ex.Message);
+        }
+
+        //Step3 : if step1 and step2 successfull than mark this job as succeeded else failed
+        // Updating time of whole process in table
+    }
+ }
+
+
+
+
+
+public class Type2 : IBaseType
+{
+    public void LongRunningProcess(int jobId,int noOfTimes)
+    {
+        try
+        {
+           //Step1 : 
+            var type2Manager =  new Type2Manager(params);
+            for (int i = 0; i < noOfTimes; i++)
+               {
+                 var con = ConnectionFactory.OpenConnection();
+                 type2Manager.Start(con);
+                //Save results of those processing
+            }
+
+          //Step2:
+          IVersioning versioning = new Versioning();
+          string version = versioning.GetVersion();
+          using (connection = new SqlConnection(connectionString))
+          {
+               connection.Open();
+               using (var transaction = connection.BeginTransaction())
+               {
+                  try
+                  {
+                     Repository.UpdateVariantVersioning(connection, transaction,jobId, version);
+                     Repository.UpdateCategoryWithVersion(connection, transaction,versioning.Category,version);
+                     transaction.Commit();
+                  }
+                  catch(Exception ex)
+                  {
+                       transaction.Rollback();
+                       //code to delete everything that has been performed in step1
+                       throw ex;
+                  }
+                }
+            }
+        }
+        catch (Exception ex)
+        {
+            Repository.UpdateErrorDetails(connectionString,jobId,ex.Message);
+        }
+
+        //Step3 : if step1 and step2 successfull than mark this job as succeeded else failed
+        // Updating time of whole process in table
+    }
+ }
+
+ +

So as you can see, the step2 and step3 code is repeated for both types, and I want to remove this code repetition.

+ +

Secondly, I want to keep step1 and step2 in sync, so that when step2 fails I roll back whatever has been done inside the entire step1 process.

+ +

I am a bit confused about moving versioning into a base abstract class, because that would probably be tightly coupled with this long running process. I want to design it in a way that, if tomorrow I decide to remove versioning, it does not hamper my current design and code.

+ +

Can anybody please help me with this?

+ +

Update : Added versioning code

+ +
interface IVersion
+{
+    string CreateVersion();
+}
+
+public class Version : IVersion
+{
+     public string Category { get; private set; }
+}
+
+",324467,,324467,,2/6/2019 8:00,11/3/2019 18:00,Keeping steps in sync of long running process and creating common layer for code repetition,,1,0,,,,CC BY-SA 4.0,,,,, +386599,1,386601,,2/4/2019 13:03,,15,3947,"

I have a method that creates a data file after talking to a digital board:

+ +
CreateDataFile(IFileAccess boardFileAccess, IMeasurer boardMeasurer)
+
+ +

Here boardFileAccess and boardMeasurer are the same instance of a Board object that implements both IFileAccess and IMeasurer. IMeasurer is used in this case for a single method that will set one pin on the board active to make a simple measurement. The data from this measurement is then stored locally on the board using IFileAccess. Board is located in a separate project.

+ +

I've come to the conclusion that CreateDataFile is doing one thing by making a quick measurement and then storing the data, and doing both in the same method is more intuitive for someone else using this code than having to make a measurement and write to a file as separate method calls.

+ +

To me, it seems awkward to pass the same object to a method twice. I've considered making a local interface IDataFileCreator that will extend IFileAccess and IMeasurer and then have an implementation containing a Board instance that will just call the required Board methods. Considering that the same board object would always be used for measurement and file writing, is it a bad practice to pass the same object to a method twice? If so, is using a local interface and implementation an appropriate solution?

+",307087,,307087,,2/4/2019 16:40,5/5/2019 23:04,Pass object twice to same method or consolidate with combined interface?,,5,3,3,,,CC BY-SA 4.0,,,,, +386602,1,,,2/4/2019 13:48,,-1,149,"

Tag dispatching is used to:

+ +
+

dispatch based on properties of a type

+
+ +

Is there any reason to make tag values constexpr or even const?

+ +

There are code samples demonstrating both constexpr and non-const tag values. For example, the selector tag value in the wiki is not declared constexpr. OTOH, in the C++ standard, section 31.4.4 Locks declares several tag variables as constexpr. One example is:

+ +
namespace std {
+  struct defer_lock_t {};
+  ...
+  inline constexpr defer_lock_t defer_lock {};
+  ...
+}
+
+ +

But why is there any need for defer_lock to be a constexpr? Doesn't the following demo code:

+ +
  struct tag_t { constexpr tag_t(){} };
+  tag_t tag_nc; //tag_nonconst
+  constexpr tag_t tag_ce; //tag_constexpr
+//#define F_VL
+#ifdef F_VL
+  template<typename T>
+  constexpr int f(T){ return -1;}
+  constexpr int result_vl=f(tag_t{});
+#endif
+#define F_CN
+#ifdef F_CN
+  template<typename T>
+  constexpr int f(T const&){ return 1;}
+  constexpr int result_ce=f(tag_ce);
+#endif
+#define F_NC
+#ifdef F_NC
+  template<typename T>
+  constexpr int f(T&){ return 0;}
+  constexpr int result_nc=f(tag_nc);
+#endif
+
+
+  #include <iostream>
+  int main()
+  {
+  #ifdef F_VL
+    std::cout<<""result_vl=""<<result_vl<<""\n"";
+  #endif
+  #ifdef F_CN
+    std::cout<<""result_ce=""<<result_ce<<""\n"";
+  #endif
+  #ifdef F_NC
+    std::cout<<""result_nc=""<<result_nc<<""\n"";
+  #endif
+    return 0;
+  }
+
+ +

if compiled without error, demonstrate that a tag value, such as tag_nc in the demo code, does not need to be declared as a constexpr or even a const value to be usable in a constexpr expression, such as the f(tag_nc) in the demo code?

+ +

Furthermore, if an existing tag type, such as the above std::defer_lock_t, had constexpr CTOR's, and the existing tag value were changed to non-const, then would there be any need to change any existing user-code? I would guess it's highly unlikely since (using symbols from the demo code):

+ +
  1. a tag_type& actual argument is implicitly converted to a const tag_type& in any function taking a const tag_type& formal argument.

  2. the only intention of a tag type is to:

       dispatch based on properties of a type

     and as such, the only user-code using such a type with that intention would be overloads of one of these possible forms:

       f(tag_t, ...);
       f(tag_t&, ...);
       f(tag_t const&, ...);

     and all of those possible overloads would compile and produce the **same** results when called with:

       f(tag_nc, ...):

     where `tag_nc` was declared just as in the demo code.

Hence, isn't:

+ +
  constexpr tag_t tag;
+
+ +

where tag_t is from the demo code, overspecifying the tag value, tag?

+",327697,,,,,3/6/2019 23:01,Any need for constexpr in tag values?,,1,0,,,,CC BY-SA 4.0,,,,, +386607,1,,,2/4/2019 15:31,,0,80,"
import pandas as pd
+
+class DataFrameAnnotation:
+
+  def __init__(self, df: pd.DataFrame):
+    self.df = df
+
+  def transformation_1(self):
+    self.df = self.df + 1
+
+  def transformation_2(self):
+    self.df = self.df + 1
+
+  def main(self):
+    self.transformation_1()
+    self.transformation_2()
+    ....
+    return self.df
+
+ +

My question is whether there are any issues with keeping a large dataframe in an object's state like the above, compared to passing the dataframe around as a parameter:

+ +
   def main(self, df):
+        df = self.transformation_1(df)
+        df = self.transformation_2(df)
+        ....
+        return df
+
+",144356,,144356,,2/6/2019 15:58,2/6/2019 15:58,Are there any issues with having having the main data in an object's state?,,2,1,,,,CC BY-SA 4.0,,,,, +386608,1,386613,,2/4/2019 15:39,,0,665,"

I understand that there are major technical differences between how a strongly typed language is compiled and how a type annotated language is compiled/transpiled.

+ +

But as a developer, writing in a strongly typed language and writing in a weakly typed language with type annotations feel pretty similar to me. I can't really articulate any differences.

+ +

What are the differences between working in a strongly typed language and a weakly typed language with type annotations from the developer's point of view?

+ +

I am primarily thinking about the examples C++ and JavaScript with TypeScript annotations.

+",186151,,,,,2/4/2019 16:30,Difference between a strongly typed language and a weakly typed language with type annotations from the developer's point of view?,,1,0,,,,CC BY-SA 4.0,,,,, +386610,1,,,2/4/2019 16:06,,0,217,"

I'm struggling with logic that is duplicated in the front end code and the database. Right now, I just put a comment.

+ +

Here is a small example (the current system has a lot of much more complicated formulas).

+ +

In the front end, I have a single property that everything uses to see the result.

+ +
Class CartItem
+
+    ' The same formula is found in the database view V_CART_ITEM
+    Public Readonly Property Cost As Decimal
+        Get
+            Return Price * Amount
+        End Get
+    End Property
+
+End Class
+
+ +

In the database, I have a view that all queries and reports use.

+ +
CREATE OR REPLACE VIEW V_CART_ITEM AS
+-- The cost formula is also found in the class CartItem
+select cart_item_id, price * amount as cost
+from cart_item;
+
+ +

I think it's not too bad since the formula is only duplicated once in the whole system, and a new developer would know what to do if there are any changes to be made. I'm wondering if there's a better way.

+",37975,,327416,,2/5/2019 20:15,2/5/2019 23:13,Duplicated formula in front and back end,,3,3,,,,CC BY-SA 4.0,,,,, +386614,1,386648,,2/4/2019 16:31,,13,3450,"

I am studying for an exam and I have a question which I am struggling to give an answer for.

+ +

Why does no iterator base class exist that all other iterators inherit from?

+ +

My guess is that my teacher is referring to the hierarchical structure from the cpp reference (""http://prntscr.com/mgj542"") and we have to provide a reason other than ""why should they?"".

+ +

I know what iterators are (sort of) and that they are used to work on containers. From what I understand, because of the different possible underlying data structures, different containers have different iterators: you can randomly access an array, for example, but not a linked list, and different containers require different ways of moving through them.

+ +

They are probably specialized templates depending on the container, right?

+",325809,,102438,,2/13/2019 3:41,2/13/2019 3:41,"C++ Iterator, Why is there no Iterator base class all iterators inherit from",,4,2,1,,,CC BY-SA 4.0,,,,, +386619,1,386621,,2/4/2019 17:04,,3,285,"

I have a method login() which essentially sends a login request to a remote server. As I do not want to waste server resources on processing invalid data, I decided to consider sanity checking the input before I send it.

+ +

The code looks something like this:

+ +
//Method to log user in
+public void login(String username, String password) throws IOException{
+    //If statement to sanity check credentials
+    if (credentialsSanityCheck(username, password)) {
+        //Perform login ...
+    }
+    else {
+        throw new IllegalArgumentException(""Invalid username/password"");
+    }
+}
+
+ +

However, after learning more about the Single Responsibility Principle, I am now wondering if it is more appropriate to sanity check the method arguments from the calling code or within the method itself, as it's quite fuzzy (at least for me) to determine whether or not data validation falls under the method's responsibility.

+",327671,,,,,2/5/2019 7:57,Does method argument self-sanity checking violate the SRP?,,2,3,,,,CC BY-SA 4.0,,,,, +386622,1,,,2/4/2019 17:36,,1,85,"

From what I have learned, in Perspective based reading, the document/code/etc., is inspected from different perspectives. We consider roles like software designer, tester or user.

+ +

Each role follows the scenario assigned to them, and reads the code/document, etc. Each role also creates an artifact; for example the person in role of tester creates a test plan, etc

+ +

My question is with regard to that artifact. What exactly is it based on? Is it based on the portion of the code/document that the scenario specifies? So they have the original document, and they create a new partial document?

+ +

Also, can a person in the role of tester create an artifact such as a user manual or a design, or, based on his role, does he have to create only the artifact related to his role? Many thanks for any help.

+",302389,,,,,2/4/2019 17:36,Perspective based reading - Artifact created,,0,0,0,,,CC BY-SA 4.0,,,,, +386624,1,386627,,2/4/2019 18:04,,0,80,"

Background: I'm working on a project with a self-driving machine with a tank-like control, something like:

+ +
    +
  • forward()
  • +
  • left()
  • +
  • right()
  • +
  • stop()
  • +
+ +

The code is running on a Raspberry Pi. The GPIO outputs are inside my class Machine. Currently the development process is very simple: we program, put the code on the Raspberry Pi, and test it on the real machine. This process slows down the development very much. Are there common design patterns to implement a simulation? The goal has to be to keep the model untouched, so that the same code is used for the real machine as for the simulation.
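
For illustration, a minimal sketch of the direction I'm considering (all names are made up; the real GPIO calls are only hinted at in comments):

+ +
from abc import ABC, abstractmethod
+
+class Drive(ABC):
+    # the interface the Machine model depends on
+    @abstractmethod
+    def forward(self): ...
+    @abstractmethod
+    def left(self): ...
+    @abstractmethod
+    def right(self): ...
+    @abstractmethod
+    def stop(self): ...
+
+class GpioDrive(Drive):
+    # real implementation: would drive the GPIO pins on the Raspberry Pi
+    def forward(self): pass   # set motor pins here
+    def left(self): pass
+    def right(self): pass
+    def stop(self): pass
+
+class SimulatedDrive(Drive):
+    # simulation: just records what the machine would do
+    def forward(self): print('forward')
+    def left(self): print('left')
+    def right(self): print('right')
+    def stop(self): print('stop')
+
+class Machine:
+    # the model stays untouched; it only talks to the Drive interface
+    def __init__(self, drive: Drive):
+        self.drive = drive
+
+machine = Machine(SimulatedDrive())   # on the desktop / in tests
+# machine = Machine(GpioDrive())      # on the real hardware
+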

+",327727,,,,,2/4/2019 21:32,"How to design classes of a self-driving machine, if I need a simulation?",,2,2,,,,CC BY-SA 4.0,,,,, +386626,1,,,2/4/2019 18:41,,2,1612,"

No code to show (and not really a code issue), but I have an IoT-ish application running that is using Pi Zeroes as clients, and they are slow. A single POST takes about 10 seconds round trip; the delay seems to be almost entirely within the client-side Python script, possibly related to slow connections, but I would like to POST/log more than 1 entity every 10 seconds.

+ +

I am thinking about POSTing an array or other collection of values, which should be more efficient, but I am not sure that is very ""RESTful"". What do I return? A single status code across all entities? Is there a different HTTP verb that I should consider? The client will never use the new IDs on the POSTed entities or requery them, so that isn't an issue.

+ +

Bottom line: is it reasonable to POST multiple models and then return a collection of the multiple resulting entities, each with their own IDs? Is it better to POST arrays or to embed them within a single parent container object?

+ +

Lastly my API call is somewhat asymmetric. The data being sent to the API does not model 1:1 the data being stored in my database further lowering the value of any returned data from my POST.

+",5670,,5670,,2/4/2019 19:14,7/5/2019 12:12,Web API POST: single item vs collection,,2,1,,,,CC BY-SA 4.0,,,,, +386631,1,386833,,2/4/2019 21:35,,2,195,"

What are the best practices to organize your unit tests in classes?

+ +

I see different possibilities:

+ +

1) One would be to write one ""container"" class for each function you want test and then subclass that to group in each subclass all tests that are similar in some way - e.g. one subclass contains all tests where we make make sure the functions throws errors correctly and one class that contains all tests where we make sure that the function does what it's supposed to.

+ +

2) A different one would be to group tests for different functions in one class: Suppose we want to test the functions myprint and mysearch. Instead of having, as previously, 2 (sub)classes per function (i.e. 4 in total: Test_myprint_Error, Test_myprint_OK, Test_mysearch_Error, Test_mysearch_OK), now we have only 2 classes in total (TestAllErrors and TestAllOK), one containing all tests that check for each function that errors are thrown correctly and one containing all tests that everything works as intended, again for each function. (So we have the same tests as in 1), but just arranged differently in classes.)

+ +

3) I could come up with even more ways to group tests in classes, but I'll stop here.
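
To make the first two layouts concrete, a rough sketch (class names are invented, and myprint/mysearch stand in for the real functions):

+ +
# layout 1): one class hierarchy per tested function
+class TestMyprint:
+    pass
+
+class TestMyprintErrors(TestMyprint):
+    def test_raises_on_none(self):
+        ...
+
+class TestMyprintOk(TestMyprint):
+    def test_prints_string(self):
+        ...
+
+# layout 2): one class per kind of test, covering all functions
+class TestAllErrors:
+    def test_myprint_raises_on_none(self):
+        ...
+    def test_mysearch_raises_on_none(self):
+        ...
+
+class TestAllOk:
+    def test_myprint_prints_string(self):
+        ...
+    def test_mysearch_finds_item(self):
+        ...
+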

+ +

(In part 1 of my question I received general advice on how to organize my suite of testing functions, but now I'm interested in this more specific query.)

+",293237,,,,,2/7/2019 18:51,How to organize my test functions? (Part II: Keepin' it classy),,2,7,3,,,CC BY-SA 4.0,,,,, +386637,1,386642,,2/4/2019 23:16,,0,167,"

We have a requirement for a security audit that our password policy must disallow the re-use of a previous password from the last 4 used passwords.

+ +

We can accomplish this fairly easily by making a call to the server to allow it to compare the password candidate against the database's previous password hashes, but that introduces some lag time in transit. Our managers have asked us to investigate if we can eliminate the delay by implementing a pure client-side solution.

+ +

Therefore, my question focuses purely on the feasibility of a safe client-side solution.

+ +

Assume that a MITM attacker has found some means to gain access to the payload via HTTPS or client-side application vulnerability.

+ +

The Process

+ +

The only process I have been able to devise would be to transmit the previous password hashes with their server-side salts to the client ahead of time.
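
To illustrate the check itself (ignoring for a moment that the real client would be JavaScript), this is roughly what I mean; with Bcrypt the salt is embedded in each stored hash, so sending the hashes is enough:

+ +
import bcrypt
+
+def collides_with_previous(candidate: str, previous_hashes: list) -> bool:
+    # previous_hashes: the stored bcrypt hashes (bytes) of the last 4 passwords,
+    # transmitted to the client ahead of time
+    return any(bcrypt.checkpw(candidate.encode('utf-8'), old_hash)
+               for old_hash in previous_hashes)
+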

+ +

The Risk

+ +

The requirement of needing to send the server-side salts along with the password hashes is what presents the inherent risk, in my estimation.

+ +

Without the salt, the client would be unable to test the candidate password against the password hashes. However, sending the server-side salt to the client-side exposes it to a potential MITM attacker.

+ +

If the attacker was successfully able to bruteforce the current password hash among the previous password hashes, then they could change the password under the nose of the current user.

+ +

This exploit could be exacerbated if a user requested to change their password, was sent the previous password hashes + salt, and then chose to abandon the changing of the password. This would give a MITM attacker plenty of time to find a suitable entry for the current password hash + salt, and subsequently change the password.

+ +

Even if the attacker wasn't able to crack the current password in time, they'd still have plenty of time to crack and use these passwords on other sites where they may not have been updated.

+ +

The Question

+ +

I've been advised by peers that this might be overly cautious - that hashing functions like Bcrypt would take so long to bruteforce that it's practically impossible even with the salt exposed to the attacker...

+ +

In my estimation, this is equivalent to the result of a SQL injection attack. Nonetheless, I'm having trouble convincing my peers...

+ +

Is this method of checking if a password candidate would collide with a previous password unsafe? If so, is there another mechanism that would be safe, but which does not require remote calls to a server?

+",115156,,,,,2/5/2019 14:39,Is there a secure way to check previous passwords purely on the client-side?,,3,8,,,,CC BY-SA 4.0,,,,, +386638,1,,,2/4/2019 23:31,,-4,163,"

In computer science courses at university, assignments written in OO languages such as Java had file structures similar to this:

+ +
    +
  • TreeNode.java
  • +
  • BinaryTree.java
  • +
  • Assignment1.java
  • +
  • etc. ...
  • +
+ +

In writing some of my own projects, it seems like splitting up each class into its own file is very trivial, especially when classes are very small. Are there other design patterns that circumvent having many files for lots of small helper classes, or is this pretty much the only standard?

+",319488,,134647,,2/5/2019 20:15,6/11/2019 15:05,File structure of object-oriented projects seems cluttered,,1,1,,43501.74028,,CC BY-SA 4.0,,,,, +386649,1,,,2/5/2019 5:50,,0,111,"

I am trying to design a geometric intersection API. Below is the code to represent geometric elements.

+ +
#include <iostream>
+#include <memory>
+
+// Since I did not write the Shape class I cannot edit or change the geometric elements class. 
+class Shape {
+  //
+public:
+  virtual ~Shape() {}
+};
+
+class Segment : public Shape {
+public:
+  ~Segment() {}
+};
+
+class Triangle : public Shape {
+public:
+  ~Triangle() {}
+};
+
+class Quad : public Shape {
+public:
+  ~Quad() {}
+};
+
+// Below is the function that handles the intersection of the geometric elements.
+
+
+bool intersect(const std::shared_ptr<Segment> &s1,
+               const std::shared_ptr<Segment> &s2) {
+  std::cout << ""Seg Seg "" << std::endl;
+  // Algorithm goes here
+  return true;
+}
+
+bool intersect(const std::shared_ptr<Segment> &s1,
+               const std::shared_ptr<Triangle> &t1) {
+  std::cout << ""Seg Tri "" << std::endl;
+  // Algorithm goes here
+  return true;
+}
+
+bool intersect(const std::shared_ptr<Triangle> &t1, 
+               const std::shared_ptr<Segment> &s1
+               ) {
+  return intersect(s1, t1);
+}
+
+bool intersect(const std::shared_ptr<Shape> &s1,
+               const std::shared_ptr<Shape> &s2) {
+  // Segment Segment
+  { 
+    const std::shared_ptr<Segment> segment1 =
+      std::dynamic_pointer_cast<Segment>(s1);
+    const std::shared_ptr<Segment> segment2 =
+      std::dynamic_pointer_cast<Segment>(s2);
+
+    if (segment1 && segment2) {
+     return intersect(segment1, segment2);
+    }
+  }
+  // Segment Triangle
+  {
+    const std::shared_ptr<Segment> segment =
+      std::dynamic_pointer_cast<Segment>(s1);
+    const std::shared_ptr<Triangle> triangle =
+      std::dynamic_pointer_cast<Triangle>(s2);
+
+    if (segment && triangle) {
+      return intersect(segment, triangle);
+    }
+  }
+
+ // Triangle Segment
+  {
+    const std::shared_ptr<Triangle> triangle =
+      std::dynamic_pointer_cast<Triangle>(s1);
+    const std::shared_ptr<Segment> segment =
+      std::dynamic_pointer_cast<Segment>(s2);
+
+    if (segment && triangle) {
+      return intersect(triangle, segment);
+    }
+  }
+    // Handle other types appropriately .. 
+  return false;
+}
+
+int main() {
+  std::shared_ptr<Shape> s1(std::make_shared<Segment>());
+  std::shared_ptr<Shape> s2(std::make_shared<Segment>());
+  std::shared_ptr<Shape> t1(std::make_shared<Triangle>());
+
+  bool ret_val_seg_seg = intersect(s1, s2);
+  bool ret_val_seg_tri = intersect(s1, t1);
+  bool ret_val_tri_seg = intersect(t1, s1);
+}
+
+ +

My main concern is with the code that is inside the function bool intersect(const std::shared_ptr<Shape> &s1, const std::shared_ptr<Shape> &s2) and the use of dynamic_pointer_cast. Is there any better way to handle the geometric elements? I am open to any suggestion.

+ +

My worry in that function is :

+ +

To do a Triangle-Segment intersection it has to go through six dynamic casts and three if statements. The last combination of intersections gets hit the most.

+",101222,,101222,,2/5/2019 12:03,2/5/2019 12:03,Avoid numerous dynamic_cast_ptr in the API design of polymorphic types,,1,3,,,,CC BY-SA 4.0,,,,, +386655,1,386675,,2/5/2019 9:16,,0,147,"

Various analytics tools will track the number of handled and unhandled exceptions (crashes) that happen in an app. This obviously helps us find problems we didn't know existed so we can fix them.

+ +

Quite often, exceptions happen in try-catch blocks and are handled, but in many cases they still are situations that should not arise.

+ +

Should I keep tracking these exceptions in my analytics?

+ +

On one side, I think yes, of course. The problem still persists; it's handled (or sometimes even silenced...), but the reason it's happening is still very much there and should be dealt with.

+ +

On the other side, I think no. We've already added the safety net we deemed sufficient and we know the client will never allow time to actually fix bugs. So this data is just cluttering our crash logs more than anything now.

+ +

My question is :

+ +
    +
  • Should someone track handled exceptions in analytics in general? Even if it's handled, I can see reasons both to track them and not to track them. What is best practice? Or is there something else to do entirely?
  • +
+",151303,,,,,2/5/2019 13:53,Should I track my handled exceptions?,,2,0,,,,CC BY-SA 4.0,,,,, +386658,1,,,2/5/2019 9:31,,1,2250,"

I have an app that uses JWT tokens for user authorization. Now, I need to be capable of deactivating users (users won't be allowed to use the system but still exist in the database), but as a requirement for that, I need to know if the user is not logged in. Is there some kind of best practice for handling this kind of scenario?

+ +

Example: admin gets a list of users -> admin selects one user to deactivate -> the server checks if the user to be deactivated is not logged in -> if it is, don't deactivate and return an error to admin, else, deactivate and return success to admin.

+ +

The way I thought about doing this was to use a smaller expiration time for the token (to be refreshed) and to store the last one generated in the user's table, so that I can check (with the expiration) if there has been recent user activity.
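
A rough sketch of that idea (field names and the TOKEN_TTL value are made up):

+ +
from dataclasses import dataclass
+from datetime import datetime, timedelta
+from typing import Optional
+
+TOKEN_TTL = timedelta(minutes=15)   # short-lived token, refreshed while the user is active
+
+@dataclass
+class User:
+    active: bool = True
+    last_token_issued_at: Optional[datetime] = None   # updated on every login/refresh
+
+def is_probably_logged_in(user: User) -> bool:
+    if user.last_token_issued_at is None:
+        return False
+    return datetime.utcnow() - user.last_token_issued_at < TOKEN_TTL
+
+def deactivate(user: User) -> None:
+    if is_probably_logged_in(user):
+        raise ValueError('user appears to be logged in, cannot deactivate')
+    user.active = False
+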

+ +

Update: Every time a user logs in, a new token is generated. There is a logout option; it blacklists the user's token so that it can't be used again.

+",255672,,255672,,2/7/2019 12:00,11/28/2020 23:06,Check if user is logged in when using JWT,,2,3,,,,CC BY-SA 4.0,,,,, +386666,1,,,2/4/2019 9:50,,1,112,"

Question: Given that two services operate on a common complex object, what are possible design patterns and supporting technologies for exchanging the object and (possibly) sharing the implementation?

+ +

How about aspects like

+ +
    +
  • deployment (independent release cycles on the two services)
  • +
  • testing (using different implementation makes testing much harder)?
  • +
+ +

If you name a possible solution, please give a reference. If possible, also name design patterns and technologies enabling the solution.

+ +

Sample Use Case

+ +

The question may require a small introduction (tl;dr warning), apologies for this. In the following, I construct a “use case” related to “fitting” (microservice 1) and “using” (microservice 2) a “curve” (complex object). This is just to illustrate the question.

+ +

By a “curve” I mean an object representing a function t -> value(t) mapping floating point numbers to floating point numbers. Such a curve can be constructed, for example, from a set of sample points (t_i, v_i) and the specification of a (possibly complex) interpolation method. However, there may also be other implementations, like parametric curves.

+ +

For such a curve we may have an interface, e.g.

+ +
public interface Curve {
+   double getValue(double time);
+}
+
+ +

and an implementation

+ +

The implementation may provide different interpolation methods.

+ +

Now consider two “services” (or say modules, to get rid of the word “service”) which perform some actions on a “curve”.

+ +

The first service performs a fitting (or calibration), that is, given

+ +
    +
  • times t_{i}
  • +
  • constraints F_j( curve ) -> min (j=1,…,m)
  • +
  • specification of an interpolation method or parametrization
  • +
+ +

it determines the best fitting values v_{i} to minimize the constraints F_{j}

+ +

The service then provides the fitted curve(s) which are described by a POJO. For example, for interpolating curves

+ +
Double[] times; // t_{i}
+Double[] value; // v_{i}
+String interpolationMethod; // interpolation method
+
+ +

or for parametric curves

+ +
Double[] value; // v_{i}
+String parametrizationMethod; // specification of the function used
+
+ +

Note that the result of this fitting – of course – depends on the specific interpolation method or parametrization used for the curve. (A best fit linear interpolation is different from a best fit spline interpolation).

+ +

The second service (or, in general, other services) uses given curves (POJOs) and provides some derived quantities (for example an integral). You may pass a curve to this service and it calculates something for you, for example some function G(curve).

+ +

This requires that we need to pass the curve to the second service, e.g. represented by our POJO.

+ +

Note also that the result of this valuation depends on the specific interpolation method used for the curve.

+ +

Both services agree on a common specification

+ +

In order to get consistent results, the two services have to agree on a defined specification of “the curve”. This is a specification of how the result object of the first service has to be interpreted and of how the input object of the second service has to be interpreted. In other words, this is the specification of how the POJO is used to implement the Curve interface.

+ +

If the first service allows for a new or modified version of the interpolation method, this has to be part of a new specification (some versioning of the interpolation method, like SPLINE_V1 or SPLINE_V2).

+ +

Sharing a common implementation - or not?

+ +

In my view, it is an advantage to provide a common implementation of the Curve interface to both services. This would imply that “errors” in the implementation exist on both sides, which is sometimes an advantage in terms of consistency (I can explain this in more detail, if required).

+ +

Having two independent implementations also feels like a violation of DRY.

+ +

On the other hand, the philosophy of a microservice suggests to keep both services completely (?) independent, possibly suggesting to use independent implementations (versions?) of the complex object (e.g. service 1 provides curves “up to” SPLINE_V2, but service 2 can operate only on curves “up to” SPLINE_V1).

+ +

(Sorry for the tl;dr ).

+ +

Remark: The motivation for the use of a microservice in the first place is that the Curve object is used by different other components/clients/etc. and the fitting - as a time consuming process - should be cached or centralized.

+ +

Remark: The question may already arise for a much simpler component, like a common method for de-serialization and serialization of some shared data structures.

+ +

Related questions and topics:

+ + +",91009,Christian Fries,91009,,2/5/2019 12:50,2/5/2019 12:50,What are the design patterns and supporting technologies for sharing complex objects between microservices?,,0,4,4,,,CC BY-SA 4.0,,,,, +386668,1,386674,,2/5/2019 12:44,,1,353,"

I have been programming my classes in the following way. I create a public function that calls a private function that has all the logic and functionality. Something like:

+ +
public class MyClass
+{
+    public string DoStuff()
+    {
+        return ActuallyDoStuff();
+    }
+
+    private string ActuallyDoStuff()
+    {
+        string result = """";
+        //Do stuff
+        return result;
+    }
+}
+
+ +

This is something I inherited from control events in Winforms. If I made a control event, I used to create a new function separately or, if the event was replicating functionality already in use, I just called the needed function.

+ +

This way if I had to change some functionality I didn't need to change it everywhere, just in one place.

+ +

Now I'm instinctively doing the same with public functions in my classes but the more I think about it the more I think this is unnecessary.

+ +

Is there any standard or good practice for this case in particular?

+ +

Thank you.

+",327797,,,,,2/7/2019 19:41,Calling full functionality private functions from public functions,,3,0,,,,CC BY-SA 4.0,,,,, +386671,1,,,2/5/2019 13:32,,8,1585,"

In a hypothetical system that handles adding users, there are several business rules. Some of the rules can easily be checked in the model. For example a user registration can only be saved if they entered a 10 digit phone number.

+ +

But what if we want this phone number to be unique?

+ +

In most databases it's fairly easy to add a constraint that generates an error when trying to store a duplicate value. But when following that approach, the model doesn't explicitly make clear that a phone number should be unique. If the model is reused or the database is changed, these business rules could be overlooked.

+ +

If we want to encapsulate this knowledge in the domain, we could create a Domain Service (since Entities are not supposed to communicate with the outside world like the database). This UserService could use the UserRepository to check if the phone number already exists.
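
For example, a minimal sketch of that second approach (class and method names are invented here):

+ +
class InMemoryUserRepository:
+    def __init__(self):
+        self._by_phone = {}
+
+    def exists_with_phone_number(self, phone_number):
+        return phone_number in self._by_phone
+
+    def add(self, user):
+        self._by_phone[user.phone_number] = user
+
+class UserService:
+    # domain service: owns the uniqueness rule, delegates the lookup to the repository
+    def __init__(self, user_repository):
+        self._users = user_repository
+
+    def register(self, user):
+        if self._users.exists_with_phone_number(user.phone_number):
+            raise ValueError('phone number must be unique')
+        self._users.add(user)
+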

+ +

The second approach requires more code and an additional round trip to the database, but it improves the domain model.

+ +

Are there (better) alternatives? Which approach would you choose and why?

+",320517,,340885,,11/11/2019 11:29,11/12/2019 14:34,"How to handle business rules that are ""uniqueness"" constraints?",,8,7,2,,,CC BY-SA 4.0,,,,, +386672,1,392048,,2/5/2019 13:34,,3,274,"

I am required to analyze and design the architecture of an application. While analyzing the requirements I find that, in my system, user personally identifiable information (PII) confidentiality is a very sensitive quality requirement, and it must be taken to the NFR (non-functional requirement) section. An NFR requires that all the responses have a measurable equation which is used by the tester in the verification phase.

+ +

I am new in this area and facing a hurdle in preparing data-security-related measures and their allowance limits. I prepared the NFR table as below, and I am afraid that might not be the proper way because, in the verification phase, it will be tough for the tester to verify this requirement.

+ +

+ +

So, the question is: how are data confidentiality and security attribute measures and their allowance limits usually set by architects, so that testers can verify them in the software verification phase?

+",208831,,208831,,5/18/2019 16:15,5/18/2019 16:15,How security metrics are verified in testing phase?,,2,4,0,,,CC BY-SA 4.0,,,,, +386676,1,386701,,2/5/2019 14:09,,2,333,"

Backstory (You can skip)

+ +

I am building an API for managing Files and Directories in a consistent manner across a project. This is for deduplication and consistency when performing a task, and in this particular case, I want to ensure that a file is closed and the mutex is unlocked. Now this is a straightforward task to solve if I simply create a new QByteArray object to hold the value while I clean up, but I would like to know if it is actually possible to forgo this, and return with the cleanup code happening regardless.

+ +

Problem:

+ +

Take the following FUNCTIONAL code:

+ +
QByteArray  Foo::getFileContents(QCD::FileSystem fileSystem, QString fileName) 
+{
+    formatDirectoryPath(fileSystem, fileName);
+    QString key(d_ROOT.absoluteFilePath(fileName));
+    m_Mutex.lock();
+    ct_Check(!m_Files.contains(key));       // QMap<QString,QFile*> m_Files
+    ct_Check(!m_Files[key]->exists());
+    ct_Check(!m_Files[key]->open(QIODevice::ReadOnly));
+    QByteArray ba(m_Files[key]->readAll()); // I dont want to create a new object
+    m_Files[key]->close();                  // Needs to run before return
+    m_Mutex.unlock();                       // Need to free the mutex
+    return ba;
+}
+
+ +

As you can see, I had to create a QByteArray object to hold my value while I closed the file. It would be nice if I could just do this instead:

+ +
QByteArray  Foo::getFileContents(QCD::FileSystem fileSystem, QString fileName) 
+{
+    formatDirectoryPath(fileSystem, fileName);
+    QString key(d_ROOT.absoluteFilePath(fileName));
+    m_Mutex.lock();
+    ct_Check(!m_Files.contains(key));
+    ct_Check(!m_Files[key]->exists());
+    ct_Check(!m_Files[key]->open(QIODevice::ReadOnly));
+    return m_Files[key]->readAll();
+    && m_Files[key]->close(); // Illegal, but you get the idea
+    && m_Mutex.unlock();     
+}
+
+ +

where I return the readAll() but still manage somehow to close the file inside the function block as well as unlock the mutex.

+ +

Is this possible without having to create any more objects?

+",136084,,136084,,2/5/2019 14:15,2/5/2019 20:28,"In c++, is there a way inside a function block, to execute cleanup code after the value has been returned?",,3,10,,,,CC BY-SA 4.0,,,,, +386677,1,,,2/5/2019 14:13,,1,427,"

We have a few tables with a large amount of data and with indexes on those tables to help in faster retrieval. We are also using Spring Data JPA JpaRepository for adding data to those tables using the .save(Iterable entities) method.

+

Now, I know that the index is going to slow down the inserts in the DB. But my question is: what is the impact of the .save() method on the indexes? Is it exactly the same as performing a normal insert into the table? Or is JPA smarter/better? I could not find any concrete answer on Google for this.

+

FYI: we are using a Postgres 9 database and a JBoss application server. Please do let me know if any more information is needed.

+",273966,,-1,,6/16/2020 10:01,2/5/2019 14:13,Performance impact of JPARepository save() on a large database table with index,,0,0,,,,CC BY-SA 4.0,,,,, +386687,1,386697,,2/5/2019 15:45,,3,273,"

Some background

+ +

I am the newest member of a small team of 3 developers. For the past two years I have been working with them on an application the two of them made roughly 5-6 years ago. This application has no documentation and no design or analytical methodology has ever been applied to the development process. The latest medium-scale change to the system dragged on for over a year as a consequence of the convoluted mess that is our current codebase.

+ +

As a result of my post-mortem analysis on this change, we are moving towards a full rewrite of the whole application since the market is moving faster than we can extend the system. This time, however, I have urged that we do so methodologically, rather than rush forward. This was met with enthusiasm, though it has since become clear that my coworkers are not experienced in software design on any level.

+ +

To the question

+ +

I am the only person in my team with any experience in terms of software design, and that experience is purely academical, which means that it will fall on me to do the heavy lifting at least in the beginning, but I am at a loss as to how to approach this in a way that both elevates my team members understanding of how to reason around software while also not breaking my own back in the process. In short, I am looking for any kind of workshop ideas, or learning resources, that I could share with them on topics like GRASP and SOLID, or object oriented design in general, and how to structure such workshops in a productive way.

+ +

Edit: clarification and further background

+ +

In an attempt to keep the question short, I think I may have neglected some key points in our current situation. The rewrite I mention is more akin to a ""refactor into a new platform"", spurred on chiefly by being built on and with deprecated technology that doesn't cut the mustard any more in terms of security and performance. The application is a browser-based web application, so we can switch routes in our current solution to point to our new solution with ""minimum effort"" and gradually replace parts of the old with the new a little at a time until it is all new. The change I mentioned above was not the only factor, but it contributes to the larger problem. Originally, the software was used in-house only, but since it fills a niche in the market, we are now selling the solution to customers. The original product owner's workflow does not match the workflow of any of our other customers and their previously hands-on management approach lead to a system where everything is connected to everything else for the sake of shortcuts that are no longer necessary or workable. Before I started working here, issues would bounce between acceptance test and development for weeks due to miscommunication. Testing meant, and to a large extent still does mean, that the support staff push buttons and see if they get the expected result. This has been a concern, as features that were considered stable and complete years ago fail on deviation from expected inputs.

+ +

This is where I see the real problem. Our development process is not conducive to quality code, and we need to do something to shake up how we work. With a new version on the horizon, I see an opportunity to do just that. I want to introduce methodological software design not as a solution to all our problems, but to slow down the pace a bit so that we may better understand not only the actual requirements on our application, but of the application itself, and how its different parts fit together to form a whole. It is not an application full of complicated business logic, it is a large collection of different problem domains which almost never overlap except for interacting with a common core-entity. There is a modular architecture in there which would facilitate development, sales, and support, but it is not going to appear on its own, and if we don't do something, we will face the same problems we always have because it is too easy to look at what we have and just do the same thing in a different language.

+ +

And this is why I am looking for advice or resources on how I can introduce the rest of the team to software design, at least enough that I can apply what knowledge I have. Especially since we are dividing the project into two phases, one phase which is a small subset of entirely new features, with a new UI and general usability goal, before taking on the task of moving current features into the new system. The first phase will be done all in house, by the three of us, and the second phase will be implemented mostly by external consultants working remotely under our review, with the three of us producing the backlog and design documentation for them as they work. I believe that phase one provides a perfect opportunity for us to cut our teeth on a new development process, provided we can get the ball rolling.

+",317677,,317677,,2/23/2019 17:35,2/23/2019 17:35,How to handle a software design project when I am the only member of the team with any design experience,,1,7,,,,CC BY-SA 4.0,,,,, +386688,1,,,2/5/2019 16:10,,0,176,"

I am part of a new dev team that is assigned to work on a legacy app. The app currently has no regression or automated unit, integration and system tests.

+

Due to technical debt and convoluted architecture, automated unit tests would be difficult without refactoring the code. However, refactoring code is not something management wants to invest in.

+

I want to reduce my chance of breaking the system by creating a comprehensive list of manual regression tests for the suite of use cases that the system can perform. This way I can determine if any major functionality is impacted by a change.

+

However making this list would be time consuming. Is doing this worth the time investment?

+

The alternative is to try to develop carefully, functionally test, pass it on to QA, and hope for the best. I am not quite comfortable with the second option.

+",,user321981,155513,,1/4/2021 23:19,1/4/2021 23:19,Is is worth the time to create a list of manual regression tests for a legacy application?,,3,4,,,,CC BY-SA 4.0,,,,, +386689,1,,,2/5/2019 16:31,,12,6437,"

At my work we have a typical microservice architecture, but one of the issues we are running into is sharing data across multiple services. We have certain data that is used by multiple services and so we have to share the data across services. Our databases are separated out by service, for reference.

+ +

To give an example, we have a user service, communication service, and reward service. The user service stores all the user data including language preference and reward tier. The communication service and reward service need that data. The data is stored in the user service because it's fundamentally a property of the user model. This causes us to have to access the data outside of the microservice it's stored in. Even if it were stored in the appropriate service, running a GET to the user model would need to return this data and so we'd run into the same problem.

+ +

Right now the data is shared by making GET queries to the services for that data. However, this seems really naive, and it was largely done as an ""easiest solution"" answer.

+ +

On top of this, we have certain requirements on some of the data that require us to maintain consistency. We have a legal obligation to communicate to users in their preferred language. Similarly, rewards being applied from the wrong tier sounds like a problem (though perhaps that's not a critical problem).

+ +

How do we properly handle sharing this data?

+ +

One thought process is to share a read connection to the database. Reads scale very well and only the user service needs to update the data. It would be faster than running a GET between services that's for sure. I'm not overly sure I'm a fan of sharing database connections, but perhaps just for reads it's really not that bad.

+ +

Another is to make a shared service that stores the data and to publish to it, but having this data incur propagation delay seems problematic. We have obligations to keep some data in sync, and any problems with the propagation can be huge issues. It's possible that if our data is only out of sync for a few seconds then it's likely not a big deal. For example, if someone happens to get the wrong reward because of a timing issue it may not be that bad, or the communication is in the wrong language even though it was basically right as it was sent out. I'm not sure if there is any data so critical that it absolutely has to be kept in sync.

+ +

A third is to use a cache service that's shared. We have redis for our caching so we could potentially store data in there and only query services on cache misses. This causes us to have significantly faster lookup for data we're querying multiple times, but I don't know if we need that much data in our cache, and if we don't store enough we may miss too often.

+ +

Another is to store the data in the microservice where said data is critical. The problem with this is that the truth store is implied to be the user model. So if the data gets updated by an API call, it would update the microservice but then GETs to the data would temporarily return outdated data.

+ +

I'm sure this isn't a new problem but we're not sure about the best way to approach this. I'm fairly certain our current solution isn't ideal and won't scale, but I'm not sure which approach is the best.

+",326412,,,,,10/15/2019 17:01,How to handle shared data across microservices?,,1,11,5,,,CC BY-SA 4.0,,,,, +386690,1,386788,,2/5/2019 16:35,,6,1340,"

So, I've fallen into the fad trap, and started replacing a large number of LINQ queries with extension methods.

+ +

For example:

+ +

orders.Where(o => o.Status == ShippedStatus.Shipped).Select(o => o.ID)

+ +

has become :

+ +

orders.ShippedOrderIds

+ +

Extension methods are static, with all the cons implied, and I believe the 'most correct' OO way to handle this kind of refactor would be to wrap it in an 'Orders' object and expose properties that do this instead.

+ +

A couple questions:

+ +
    +
  1. Is there a third (or more) alternative that makes more sense than either of these approaches?
  2. +
  3. How much worse is the extension approach than the 'true' OO approach?
  4. +
+ +

Quick clarification of the refactoring contexts - none of these refactors operate on single objects, just collections of objects.

+",325394,,325394,,2/5/2019 18:50,2/7/2019 22:41,Replacing Linq Methods with Extension Methods,,4,6,2,,,CC BY-SA 4.0,,,,, +386702,1,386703,,2/5/2019 20:03,,65,10648,"

I recently started working at a place with some much older developers (around 50+ years old). They have worked on critical applications dealing with aviation where the system could not go down. As a result the older programmer tends to code this way.

+ +

He tends to put a boolean in the objects to indicate if an exception should be thrown or not.

+ +

Example

+ +
public class AreaCalculator
+{
+    AreaCalculator(bool shouldThrowExceptions) { ... }
+    public int CalculateArea(int x, int y)
+    {
+        if(x < 0 || y < 0)
+        {
+            if(shouldThrowExceptions)
+                throw new ArgumentException();
+            else
+                return 0;
+        }
+        return x * y; // normal calculation when the inputs are valid
+    }
+}
+
+ +

(In our project the method can fail because we are trying to use a network device that can not be present at the time. The area example is just an example of the exception flag)

+ +

To me this seems like a code smell. Writing unit tests becomes slightly more complex since you have to test for the exception flag each time. Also, if something goes wrong, wouldn't you want to know right away? Shouldn't it be the caller's responsibility to determine how to continue?

+ +

�His logic/reasoning is that our program needs to do one thing: show data to the user. Any other exception that doesn't stop us from doing so should be ignored. I think they shouldn't be ignored, but should bubble up and be handled by the appropriate code, without flags controlling that.

+ +

Is this a good way of handling exceptions?

+ +

�Edit: Just to give more context on the design decision, I suspect that it is because if this component fails, the program can still operate and do its main task. Thus we wouldn't want to throw an exception (and not handle it?) and have it take down the program when, for the user, it is working fine.

+ +

�Edit 2: To give even more context, in our case the method is called to reset a network card. The issue arises when the network card is disconnected and reconnected: it is assigned a different IP address, so Reset will throw an exception because we would be trying to reset the hardware with the old IP.

+",269955,,269955,,2/7/2019 3:45,2/20/2019 23:44,Having a flag to indicate if we should throw errors,,12,22,12,,,CC BY-SA 4.0,,,,, +386710,1,,,2/5/2019 23:42,,3,2095,"

I'm a little confused about

+ +
    +
  • commands role in event sourcing
  • +
  • distinction between domain and external events
  • +
+ +

If my understanding is right

+ +
    +
  • a command represents an action initiated by an actor in terms of the domain
  • +
  • a domain event is an event can be consumed and produced by aggregate roots
  • +
  • an external event is just like a DTO - a data contract and it needs to be translated to either a domain event or a command
  • +
+ +

Example

+ +

�I have a Product aggregate root. A product can have multiple active special offers. In order to manage its SpecialOffers, the product accepts 2 domain events:

+ +
    +
  • SpecialOfferActivated
  • +
  • SpecialOfferDeactivated
  • +
+ +

So it's public interface is just 2 overloaded Apply methods:

+ +
class Product{
+    ...
+    Apply(SpecialOfferActivated){...}
+    Apply(SpecialOfferDeactivated){...}
+}
+
+ +

�1st case: The request comes from the front-end to the API

+ +

So a Controller is first in the line. It basically translates caller intention from data contract (DTO) to the domain language (command):

+ +
class ProductController{
+    Post(SpecialOfferDto dto){
+        ActivateSpecialOfferCommand command = Map(dto)
+        _commandBus.Send(command)
+    }
+}
+
+ +

Command sent, now we need a command handler

+ +
class ActivateSpecialOfferCommandHandler{
+    Handle(ActivateSpecialOfferCommand command){
+        SpecialOfferActivated domainEvent = Map(command)
+        _eventBus.Publish(domainEvent)
+    }
+}
+
+ +

Event published, now time for the event handler

+ +
class SpecialOfferActivatedDomainEventHandler{
+    Handle(SpecialOfferActivated domainEvent){
+        var product = GetFromDatabase()
+        product.Apply(domainEvent)
+        Save(product)
+    }
+}
+
+ +

Done.

+ +

2nd case: The process is initiated by an external event published to the service bus.

+ +

This time NewPromotionExternalEvent is the data contract (ExternalEvent) and again we need to translate it to the domain language (Command)

+ +
class NewPromotionExternalEventHandler{
+    Handle(NewPromotionExternalEvent extenalEvent){
+        ActivateSpecialOfferCommand command = Map(extenalEvent)
+        _commandBus.Send(command)
+    }
+}
+
+ +

�And then it falls back to the ActivateSpecialOfferCommandHandler from the first case. So it's basically the same as the first case.

+ +

3rd case: Skip the domain events layer (variation of either the 1st or the 2nd case)

+ +

So either by an api or an external event a command was produced. We simply create a domain event in order to apply it to the aggregate root. We do not publish the event to the service bus.

+ +
class ActivateSpecialOfferCommandHandler{
+    Handle(ActivateSpecialOfferCommand command){
+        SpecialOfferActivated domainEvent = Map(command)
+
+        var product = GetFromDatabase()
+        product.Apply(domainEvent )
+        Save(product)
+    }
+}
+
+ +

Done.

+ +

4th case: Skip the commands layer (variation of the 1st case)

+ +

We can easily skip the commands layer

+ +
class ProductController{
+    Post(SpecialOfferDto dto){
+        SpecialOfferActivated domainEvent = Map(dto)
+        _eventBus.Publish(domainEvent)
+    }
+}
+
+ +

and fallback to the SpecialOfferActivatedDomainEventHandler

+ +

5th case: Aggregate root creation.

+ +

So either by an api or an external event a command CreateNewProductCommand was produced. And we need another handler:

+ +
CreateNewProductCommandHandler{
+    Handle(CreateNewProductCommand command){
+        var product = Map(command)
+        SaveToDatabase(product)
+
+        NewProductCreated domainEvent = Map(product)
+        _eventBus.Publish(domainEvent) // in case somebody is interested
+    }
+}
+
+ +

In this case there's really no place to stick the domain events layer.

+ +

6th case: Domain event produced by Product (aggregate root)

+ +
class Product{
+    Apply(SpecialOfferActivated domainEvent){
+        var specialOffer = Map(domainEvent)
+        _specialOffers.Add(specialOffer)
+        if(...){
+            // For simplicity sake, assume the aggregate root can access _eventBus
+            _eventBus.Publish(new ProductReceivedTooManySpromotionsDomainEvent(this.Id))
+        }
+    }
+}
+
+ +

Questions

+ +
    +
  1. The events layer is cool: it allows us to distribute jobs across multiple instances or other microservices. However, what's the point of the command layer? I could easily produce domain events right away (in a controller or an external event handler - 4th case).
  2. +
  3. Is 3rd case legit (create a domain event just to apply it to the aggregate root, without publishing it)?
  4. +
  5. Does command layer only make sense in 5th case where it gives us the benefit of delegating product creation to another microservice while domain events layer is not applicable?
  6. +
  7. Where is the line between external and domain events? Is NewPromotionExternalEvent from the 2nd case really an external event, or is it rather a domain event?
  8. +
  9. Who can produce domain events? Aggregate root? Command handler? Domain event handler? External event handler? Another microservice? All of them?
  10. +
  11. Can domain events be dispatched to another micro-service or would it become an external event then?
  12. +
  13. What is the proper way of handling product creation and special offer activation when the request comes from a controller or an external event?
  14. +
+",105003,,,,,7/6/2019 22:01,"Confused about commands, domain events and external events in event sourcing",,1,0,1,,,CC BY-SA 4.0,,,,, +386713,1,,,2/6/2019 0:54,,1,114,"

I'm building a mobile application using Expo & React Native. As the user interacts with the mobile app I need to store their interactions in Google BigQuery, so they can later be used to train machine learning models. Think 'user swiped left', 'user clicked image' etc.

+ +

I have a few options in mind:

+ +
    +
  1. Host the app on Google App Engine. Use GCP's logging API ('sinks') to stream specific log entries into BigQuery.

  2. +
  3. Use client libraries to directly send data from the app to BigQuery.

  4. +
  5. Add Firebase Analytics to my app, using real time export to BigQuery.

  6. +
+ +

Which approach (if any) is best? I need BigQuery to be updated at low latency as users generate data, to allow for machine learning models that respond to real time user behavior.

+",327867,,173647,,2/6/2019 14:51,2/6/2019 14:51,Solution architecture for storing machine learning data,,0,0,,,,CC BY-SA 4.0,,,,, +386716,1,,,2/6/2019 1:55,,0,20,"

�Say I have a database with 100 tables. Scattered throughout the tables are some string columns for short things that could be considered usernames/tags/categories/hashtags/addresses/placenames/other short names. Also scattered throughout are integer columns ranging from super tiny numbers like 10-50 to super BigInts like 10^50. Each of these tables is mapped to a ""class"". So you might have some tables like this:

+ +
Table A:
+Col1 Example: foobar@domain.com
+Col2 Example: 15918591859185998519818595918851959158195819581951851958
+Col3 Example: hello,world
+
+Table B:
+Col1 Example: 123
+Col2 Example: 1235981958
+Col3 Example: foo,bar,baz
+Col4 Example: Hello there
+Col5 Example: 958151985195818959185893852459
+
+Table C:
+Col1 Example: ABC
+Col2 Example: foo,world
+...
+
+Table D:
+Col1 Example: bar,world
+Col2 Example: XYZ
+Col3 Example: 19519581985888872578275481951958198588887257827548195195819858888725782754819519581985888872578275481951958198588887257827548
+
+Table N:
+...
+
+ +

What I'm wondering is how to use a trie, or some sort of data structure, to efficiently find items matching the following types of queries....

+ +

Say you first of all just create a trie for all the possible values in one global trie. So 1235981958 maps to its binary and into a trie position, and likewise foobar@domain.com maps into the same trie. These trie nodes then point to collections of items: all the records across all the tables. This would allow for searching things like ""show me all records containing foo anywhere"", or ""show me something that looks like an email matching the \w+@\w+ pattern"", or ""show me all records containing integers in the 100 - 10000000000 range"".

+ +

�From here, you then want to filter the result set down. So say we queried for all records with foo somewhere. Now we want only the records from tables B, C, or D. If we had this global trie concept implemented, then we would end up with a collection of records from tables A, B, and C, and would then naively just iterate through all of them to keep only the ones from the allowed tables.

+ +

�But if you had some sort of additional trie, then perhaps you could do another trie search and avoid the filtering explosion. Wondering if anything like this is possible, or if any design patterns exist around performing complex querying and filtering like this. Maybe this falls into the category of Decision Trees and maybe there is a way to do it like that instead of tries; I would like to know. Or maybe it is considered ""Tries with multiple attributes"", but I haven't found anything related to that.
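For concreteness, here is a small sketch of the kind of secondary structure that could avoid the per-record filtering pass: a simplified inverted index keyed by token, with postings grouped by table. This is an illustration only, not a full trie, and the table/token names are made up.

using System;
using System.Collections.Generic;

// Toy secondary index: token -> table -> record ids.
// Grouping postings by table lets a query like contains 'foo' AND table in {B, C, D}
// be answered by unioning only the relevant per-table sets instead of scanning every match.
public class TokenIndex
{
    private readonly Dictionary<string, Dictionary<string, HashSet<long>>> _postings =
        new Dictionary<string, Dictionary<string, HashSet<long>>>(StringComparer.OrdinalIgnoreCase);

    public void Add(string token, string table, long recordId)
    {
        if (!_postings.TryGetValue(token, out var byTable))
            _postings[token] = byTable = new Dictionary<string, HashSet<long>>();
        if (!byTable.TryGetValue(table, out var ids))
            byTable[table] = ids = new HashSet<long>();
        ids.Add(recordId);
    }

    public IEnumerable<(string Table, long Id)> Find(string token, ISet<string> allowedTables)
    {
        if (!_postings.TryGetValue(token, out var byTable))
            yield break;
        foreach (var tableEntry in byTable)
            if (allowedTables == null || allowedTables.Contains(tableEntry.Key))
                foreach (var id in tableEntry.Value)
                    yield return (tableEntry.Key, id);
    }
}

// Usage sketch:
//   index.Add(""foo"", ""B"", 42);
//   var hits = index.Find(""foo"", new HashSet<string> { ""B"", ""C"", ""D"" });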

+",326030,,173647,,2/6/2019 9:47,2/6/2019 9:47,How to combine multiple Tries for complex searching and filtering across tables?,,0,2,,,,CC BY-SA 4.0,,,,, +386718,1,386735,,2/6/2019 3:06,,1,2525,"

�I am new to object oriented design and learning about interfaces and design patterns. In this example, I am trying to create a class for cars. My question:

+ +

�Is it good practice to use a base class and then inherit from it, and then use interfaces for implementing different functions? If not, what would be the best way to implement this?

+ +
Base Class:
+
+    class BaseCar
+        {
+            public string color { get; set; }
+            public double price { get; set; }
+
+            public string carType { get; set; }
+        }
+
+Interface: 
+interface ICarFunctions
+    {
+        void brakeSystem();
+
+        void drivingModes(string mode);
+
+        void entertainmentSystem();
+
+    }
+
+ +

Now I am trying to create concrete classes

+ +
 class BmwCar : BaseCar, ICarFunctions
+    {
+
+        public void brakeSystem()
+        {
+            Console.WriteLine(""Brake in less than 0.02ms"");
+        }
+
+        public void drivingModes(string mode)
+        {
+            switch(mode) 
+
+            {
+                case ""mountain"":
+                    {
+                        Console.WriteLine(""Switching to 4x4 mode"");
+                        break;
+                    }
+
+                default:
+                    {
+                        Console.WriteLine(""Normal Mode"");
+                        break;
+                    }
+
+            }
+
+        }
+
+        public void entertainmentSystem()
+        {
+            Console.WriteLine(""Music, Navigation"");
+        }
+    }
+
+ +

Now my question is: Is it good practice to use a base class and then inherit from it? And then use interfaces for implementing different functions?

+ +

My second question is if interface should have only one function (Single responsibility principle) or can it can have multiple functions which need to be implemented?

+",327869,,173647,,2/6/2019 14:51,2/6/2019 14:51,Should we inherit from base class and implement interface in this scenarios?,,3,0,,,,CC BY-SA 4.0,,,,, +386719,1,,,2/6/2019 3:25,,1,93,"

It seems pretty common knowledge that code is read far more than it is written. Does this mean that if a tool produces code that's checked in, then it's a net negative (saves the author some time in writing it, but more than makes up for it in time spent by everyone else reading it)?

+ +

If not, why not?
+If yes, is there still a reason to use code scaffolding tools (e.g. yeoman)?

+",13621,,173647,,2/6/2019 14:51,2/6/2019 14:51,Why use code bootstrapping tools?,,0,3,,,,CC BY-SA 4.0,,,,, +386721,1,386722,,2/6/2019 4:29,,-1,406,"

�I have come across a claim about recursion which says that when loops are compiled or interpreted they get converted to recursive functions. If that is true, how does it take place?
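To make the claim concrete, here is a small C# sketch of a loop and the recursive formulation such a conversion would produce. This only illustrates the correspondence between the two forms; it does not claim that any particular compiler or interpreter actually works this way.

public static class LoopVsRecursion
{
    // Iterative version: sums 1..n with a loop.
    public static int SumTo(int n)
    {
        int total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }

    // The same computation written as tail recursion: each loop iteration becomes
    // one recursive call that carries the loop state (i, total) as parameters.
    public static int SumToRecursive(int n, int i = 1, int total = 0)
        => i > n ? total : SumToRecursive(n, i + 1, total + i);
}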

+",327876,,,,,2/6/2019 15:41,What happens when loops are compiled or interpreted?,,1,3,,,,CC BY-SA 4.0,,,,, +386724,1,386748,,2/6/2019 5:07,,2,324,"

Before I ask my question, I'm aware of Eric Lippert's Wizard and Warrior series.

+ +

I'm trying to understand GRASP, but I'm having a hard time determining if this class violates GRASP.

+ +

Suppose if I had the following Character class:

+ +
class Character(ABC):
+
+        def __init__(self, name, strength, attributes):
+            self._name = name
+            self._strength = strength
+            self._attributes = attributes
+            self._inventory = []
+            self._found_attributes = []
+            self._cannot_equip = False
+            self._equipped = True
+
+        def add(self, game_object):
+            self._inventory.append(game_object)
+
+        # I may obviously want to do more checks, such as required strength,
+        # if I currently have another weapon equipped, etc..
+        def try_equip(self, game_object):
+            if self._check_for_conflicting_attributes(game_object):
+                return self._cannot_equip
+            return self._equipped
+
+        def _check_for_conflicting_attributes(self, game_object):
+            for attrib in self._attributes:
+                if game_object.contains_attribute(attrib):
+                    self._found_attributes.append(attrib)
+            return len(self._found_attributes) > 0
+
+ +

Weapon class:

+ +
class GameWeapon(ABC):
+
+    # imports and other methods left out. 
+
+    def __init__(self, name, required_strength, damage, attributes):
+        self._name = name
+        self._required_strength = required_strength
+        self._damage = damage
+        self._attributes : List[Attributes] = attributes
+
+    def contains_attribute(self, attribute):
+        return attribute in self._attributes 
+
+ +

and Main:

+ +
def main():
+
+    wizard_attributes : List[Attributes] = []
+    wizard_attributes.append(Attributes.LongBlade)
+    wizard_attributes.append(Attributes.Reloadable)
+
+    wiz = Wizard(""Gandalf"", 100, wizard_attributes)
+
+    sword_attributes : List[Attributes] = []
+    sword_attributes.append(Attributes.LongBlade)
+    sword = Sword(""Standard Sword"", 5, 10, sword_attributes)
+
+    # Will print false
+    print(wiz.try_equip(sword))
+
+
+if __name__ == ""__main__"":
+    main()
+
+ +

Explanation:

+ +

�Both Wizard and Sword have attributes. For example, Wizard might have an attribute called mana-drinker and Sword might have LongBlade as an attribute. Before adding a LongBlade weapon to my character inventory, I check for specific attributes; for example, a Wizard can't use a Sword, so I check for LongBlade, and if the weapon has that attribute it will prevent that weapon from being added to the character inventory.

+ +

One topic that's always mentioned is class responsibility, if the Character class does violate GRASP is it because it has the responsibility of checking and verifying if it could add a weapon to the player inventory?

+ +

�My larger question is: how do you know when a class may be violating GRASP? What if I want the class or subclass to be responsible for the check before adding a weapon or doing another task? Is it still a violation?

+",,user327264,,user327264,2/7/2019 2:23,2/7/2019 2:23,Does it violate GRASP if a Character class checks if it can carry a weapon?,,1,4,1,,,CC BY-SA 4.0,,,,, +386725,1,386731,,2/6/2019 6:25,,0,86,"

Our project is in its early stages and we are currently using Uncle Bob's Clean Architecture (also known as the Onion Architecture).

+ +

An overview of our project is as follows:

+ +
    +
  • domain package

    + +
      +
    1. Contains our business entities
    2. +
  • +
  • usecases package

    + +
      +
    1. Contains operators that use our domain entities to perform use cases
    2. +
    3. Contains repository interfaces for entities that need to interact with an unspecified database
    4. +
  • +
  • interfaces package

    + +
      +
    1. Contains handlers for triggering the correct usecases operators and use cases
    2. +
    3. Implements our usecases repository interfaces (tailored to LDAP)
    4. +
    5. Contains a database interface for performing database operations (using LDAP)
    6. +
  • +
  • infrastructure package

    + +
      +
    1. Implements our interfaces database interface (using LDAP)
    2. +
  • +
+ +

Our business entities are pure data structures whose only business logic is being linked to each other via UUIDs.

+ +

While the above approach decouples us from any database technology and lets us test our code independently of our database, any change in LDAP (such as adding/renaming/deleting fields) requires code changes to be made before they can be used.

+ +

Because LDAP is basically a map of maps (you can think of it as a HashMap<String, HashMap<String, Object>>), our project manager recently got the idea of taking a ""stateless"" approach where our application only needs to keep track of what field names client apps are using and the corresponding field names in LDAP.

+ +

His proposed solution aims to avoid having to change lots of code in our project and only need to work with a configuration file that keeps track of said name pairs.

+ +

e.g. Proposed JSON Configuration File Format (using a Map of Maps)

+ +
{
+    ""entity1Fields"": {
+        ""uiFieldName1"": ""ldapFieldName1"",
+        ""uiFieldName2"": ""ldapFieldName2"",
+        ""uiFieldName3"": ""ldapFieldName3""
+    },
+    ""entity2Fields"": {
+        ""uiFieldName1"": ""ldapFieldName1"",
+        ""uiFieldName2"": ""ldapFieldName2""            
+    }
+}    
+
+ +

Our questions are:

+ +
    +
  1. Are there any possible development/testing issues with this approach?

  2. +
  3. Can a ""stateless"" approach remove the need for pure data structure domain entities, or will completely relying on the configuration file cause development/testing complexities, especially when working with said entities and their fields?

  4. +
  5. Can the Clean/Onion Architecture be done with a ""stateless"" approach?

  6. +
  7. Is there a better way to implement a ""stateless"" approach?

  8. +
  9. Is taking a ""stateless"" approach (even when just working with pure data structures) generally a bad idea?

  10. +
+ +

Any advice would be greatly appreciated.

+",322362,,322362,,2/6/2019 8:37,2/6/2019 8:37,Moving from a Coded Approach to a Stateless Approach,,1,1,,,,CC BY-SA 4.0,,,,, +386726,1,,,2/6/2019 6:42,,1,114,"

I'm working with firestore and have encountered several situations where a document will contain stale data which is a duplicate of the source of truth located elsewhere in the database.

+ +

For instance I have a chats collection with documents that contain:

+ +
users: [{
+    name: 'foo',
+    companyName: 'bar'
+}]
+
+ +

The source of truth for both name and companyName are located in documents contained in the userProfile and companyProfile collections respectively.

+ +

On App startup I start database listeners that fill redux state and then continue to update in the background. I do this for the chats documents that the user is a part of so that there is no loading presented to the user after initial startup. But of course I can't listen to all data in the database, such as other users userProfile documents. So data like this can become stale if it is a duplicate of the source of truth.

+ +

Up to this point I had assumed that I can write a cloud function to update all this duplicate data upon any change but I'm thinking that's a bad idea if even feasible.

+ +

I'd like to not slow the down the user experience if I can, so I need advice here. Currently I see three options:

+ +
    +
  1. Always store the document Id of the source of truth instead of duplicating data, and fetch the up to date data. The user would experience an obtrusive load before seeing this data, or potentially ghost elements with animations.

  2. +
  3. Keep the data duplication pattern and serve the user potentially stale data, but also show them an unobtrusive load indicator while fetching the source of truth. If there are updates then update the duplicated data in the database.

  4. +
  5. Write a potentially expensive/complex cloud function that watches the source of truth and updates the duplicates all over the database when changes are detected. A users name/companyName is duplicated in several places, so this may prove difficult to target everything, if possible.

  6. +
+ +

Am I missing anything here? Are there better ways to approach this?

+ +

I realize this a trade-off between up to date data and user experience, but I'm hoping there is an eloquent or generally accepted design pattern to combat this.

+ +

Thanks!

+",305070,,305070,,2/6/2019 21:15,2/6/2019 21:15,Strategies for dealing with Stale data and Fetching,,0,0,,,,CC BY-SA 4.0,,,,, +386728,1,,,2/6/2019 7:45,,0,50,"

I've been working on a library at work which provides a simplified API for a few underlying libraries (face recognition, text to speech, etc.).

+ +

�My boss asked me the best way to describe this concept in English, and I couldn't find something that felt succinct and adequately conveyed what the library does.

+ +

The best I got was ""an ease-of-use API for X"".

+ +

Is there a better description for something like this?

+",224382,,,,,2/6/2019 7:53,Name for a library that provides an API for other libraries,,1,1,,,,CC BY-SA 4.0,,,,, +386730,1,386756,,2/6/2019 7:56,,0,108,"

�I want to create an application that has some optional steps, but I cannot decide how to design the step plans. It looks like a workflow.

+ +

I have a Work entity. This work includes specific business steps.

+ +
    +
  • Step-1 : Demand of work. ( DemanderName, DemandDate, DemandFiles )
  • +
  • Step-2 : ...
  • +
  • Step-3 : ...
  • +
  • Step-4 : Work investigation starts. ( InvestigatorName, StartDate, EndDate, Files )
  • +
  • Step-5 : After investigation, a decision is made: this work is accepted or cancelled. ( Result, Date )
  • +
+ +

�After a work item is created, the steps are done in order, but some steps can be skipped. For example, I may work on a work item that does not include step-2 and step-3.

+ +
    +
  1. Do I need to create a database table for every step? (I also need to mark each step of a work item as completed or not completed.)
  2. +
  3. I need to show a work completion percentage (Work-1 20%, Work-2 60%). If I use tables, how can I compute the percentages? (See the sketch at the end of the question.)
  4. +
+ +

�I cannot decide how to design this.
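To make the percentage question concrete, one possible shape is to store one row per applicable step per work item and compute the percentage from completed vs. applicable steps. This is only a sketch; the entity and property names are assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

// One row per step that applies to a given work item.
// Steps that are skipped for this work item are either not added or flagged as not applicable.
public class WorkStep
{
    public int WorkId { get; set; }
    public int StepNumber { get; set; }
    public bool IsApplicable { get; set; } = true;
    public DateTime? CompletedAt { get; set; }   // null = not completed yet
}

public static class WorkProgress
{
    public static int CompletionPercentage(IEnumerable<WorkStep> stepsOfOneWork)
    {
        var applicable = stepsOfOneWork.Where(s => s.IsApplicable).ToList();
        if (applicable.Count == 0) return 0;
        var done = applicable.Count(s => s.CompletedAt != null);
        return (int)Math.Round(100.0 * done / applicable.Count);
    }
}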

+",160523,,173647,,2/6/2019 8:03,7/7/2019 3:03,How can I do a step based design form my applicaiton?,,3,1,,,,CC BY-SA 4.0,,,,, +386739,1,,,2/6/2019 10:38,,-3,492,"

�I have gone through Ken Thompson's compiler hack paper. Can't we just go through the compiler's source code and check for any backdoor? What was the article's point?

+ +

https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

+ +

�Can we be sure that there are no backdoors if we check the source code of a modern language implementation like Python or PHP?

+",327678,,,,,2/6/2019 14:00,Ken thompson's compiler hack,,3,2,,43503.02431,,CC BY-SA 4.0,,,,, +386741,1,386744,,2/6/2019 10:42,,0,437,"

�I have a situation where, in the Repository class, I have a dictionary:

+ +
�Dictionary<TableName, Dictionary<EntityColumnName, SourceColumnName>> map1 = new Dictionary<TableName, Dictionary<EntityColumnName, SourceColumnName>>();
+
+ +

with methods for manipulating data (Get,Save,Add etc).

+ +

Now there is a need to search by value from the inner map, not only by key (in this example SourceColumnName).

+ +

�My question is: what would be the best approach here? I want to add another dictionary but with reversed data:

+ +
�Dictionary<TableName, Dictionary<SourceColumnName, EntityColumnName>> map2 = new Dictionary<TableName, Dictionary<SourceColumnName, EntityColumnName>>();
+
+ +

�But the problem is memory usage: is that a waste of memory? On the other hand, if I implement lookup by value on the existing dictionary, I'll lose the main advantage of a dictionary.
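For reference, one way to keep the two directions from drifting apart is to hide both dictionaries behind one small wrapper (a sketch with plain string keys and values; the real TableName/column types are assumed). Since strings are reference types, the second dictionary mostly duplicates references rather than the string data itself.

using System.Collections.Generic;

// Keeps entity->source and source->entity lookups for one table in sync.
// Both lookups stay O(1); the extra memory is mainly the second set of dictionary entries,
// not a second copy of the strings.
public class ColumnNameMap
{
    private readonly Dictionary<string, string> _entityToSource = new Dictionary<string, string>();
    private readonly Dictionary<string, string> _sourceToEntity = new Dictionary<string, string>();

    public void Add(string entityColumn, string sourceColumn)
    {
        _entityToSource.Add(entityColumn, sourceColumn);
        _sourceToEntity.Add(sourceColumn, entityColumn);
    }

    public bool TryGetSource(string entityColumn, out string sourceColumn) =>
        _entityToSource.TryGetValue(entityColumn, out sourceColumn);

    public bool TryGetEntity(string sourceColumn, out string entityColumn) =>
        _sourceToEntity.TryGetValue(sourceColumn, out entityColumn);
}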

+ +

�The data stored in the dictionary is around 20 outer entries with roughly 5k entries in each inner map. +Any suggestion on how to design/refactor this class would be appreciated.

+ +

�EDIT1: +The micro-optimisation post didn't resolve my problem. I work on a project where optimisation is very important.

+",306616,,306616,,2/6/2019 10:57,2/6/2019 11:22,Two reversed dictionaries or one dictionary with key and value lookup?,,1,1,,,,CC BY-SA 4.0,,,,, +386755,1,386758,,2/6/2019 15:38,,4,1069,"

�Assume we have different classes with methods that have the exact same description but execute slightly different code for the same return type.

+ +
class Foo:
+    """"""This is the Foo class Docstring. It is a type of Bar source.""""""
+    def getBar(self,pos):
+        """"""
+        This is the Bar method. 
+        It calculates and returns the Bar field effect generated 
+        by this source based on `pos`
+        """"""   
+        effect = pos + 1 
+        return effect
+
+class Waldo:
+    """"""This is the Waldo class Docstring. It's also a type of Bar source.""""""
+    def getBar(self,pos):
+        """"""
+        This is the Bar method. 
+        It calculates and returns the Bar field effect generated 
+        by this source based on `pos`
+        """"""   
+        effect = pos + 2 
+        return effect
+
+ +

This is a problem because, assuming there are many Bar sources, if one changes the description for getBar() they'll have to repeat the change for all the sources.

+ +

How could one make it so there's a single Docstring shared between both or more of these, having it so a change in Foo.getBar() would change the description of Waldo.getBar() for tooltips?

+ +

A way of restructuring or overriding in order to achieve the same effect would also be welcome.

+ +
+ +

I've tried the __doc__+=Foo.getBar().__doc__ approach in this question but it seems unreliable (using an environment like Spyder for instance, it recognizes it as a local variable).

+",327923,,327923,,2/6/2019 15:46,11/21/2019 15:58,Sharing Docstrings between similar functions?,,2,0,1,,,CC BY-SA 4.0,,,,, +386757,1,386759,,2/6/2019 15:53,,-1,2305,"

�I know it is a lot of code, but I am trying to understand the Factory pattern with interfaces and a base class. I have a base class Car that implements an interface which will be implemented by every derived class.

+ +

�Then I have another interface that will only be implemented by a few derived classes: OnlyImplementedByBmw.

+ +

�Then I have derived classes that inherit from the base class and also implement an interface. Now I want to use the factory pattern to create an instance of Bmw. +CarFactoryPattern.cs works fine and returns a Bmw instance, but I am only able to call functions from the base class on it. How can I call BmwFunction1 on the c1 object created by the factory?

+ +

Here is my code

+ +

Base Class:

+ +
 abstract class Car : ICarFunctions
+    {
+        string color { get; set; }
+        double price { get; set; }
+        string Model { get; set; }
+        string carType { get; set; }
+
+        public virtual void brakeSystem(){ }
+
+        public virtual void drivingModes(string mode) {}
+
+        public virtual void entertainmentSystem() {}
+
+    }
+
+ +

Interface:

+ +
 interface ICarFunctions
+    {
+        void brakeSystem();
+
+        void drivingModes(string mode);
+
+        void entertainmentSystem();
+    }
+
+ +

Derived Class:

+ +
    class Bmw : Car, OnlyImplementedByBmw
+    {
+        public string carType{ get;set;}
+
+        public string color { get; set; }
+
+        public string Model { get; set; }
+        public double price { get; set; }
+
+        public Bmw(string CarColor,string CarModel,double CarPrice, string CarType)
+        {
+            carType = CarType;
+            color = CarColor;
+            Model = CarModel;
+            price = CarPrice;
+        }
+
+        public override void brakeSystem()
+        {
+            Console.WriteLine(""Brake in less than 0.02ms"");
+        }
+
+        public override void drivingModes(string mode)
+        {
+            switch (mode)
+
+            {
+                case ""mountain"":
+                    {
+                        Console.WriteLine(""Switching to 4x4 mode"");
+                        break;
+                    }
+
+                default:
+                    {
+                        Console.WriteLine(""Normal Mode"");
+                        break;
+                    }
+            }
+
+        }
+
+        public override void entertainmentSystem()
+        {
+            Console.WriteLine(""Music, Navigation"");
+        }
+
+        public void BmwFunction1()
+        {
+            Console.WriteLine(""New Function"");
+        }
+    }
+
+ +

New Interface:

+ +
 interface OnlyImplementedByBmw
+    {
+         void BmwFunction1();
+    }
+
+ +

Factory Pattern Class:

+ +
 class CarFactoryPattern
+    {
+        private string CarColor;
+        private string CarModel;
+        private string CarType;
+        private double CarPrice;
+        public Car GetCarInstance(int id)
+        {           
+
+            switch (id)
+            {
+                case 1:
+                    {
+
+                        return new Bmw( CarColor,  CarModel,  CarPrice,  CarType);
+                    }
+
+                case 2:
+                    {
+                        return new Mercedes();
+                    }
+                default:
+                    {  return null;}
+
+            }
+        }     
+    }
+
+ +

Main program:

+ +
 class Program
+    {
+        static void Main(string[] args)
+        {
+            CarFactoryPattern cf = new CarFactoryPattern();
+            Car c1 = cf.GetCarInstance(1);
+            c1.brakeSystem();
+            c1.BmwFunction1() // This does not work
+
+        }
+}
+
+ +

�How can I call BmwFunction1 on the c1 object created by the factory? The c1 object is a Bmw, so shouldn't it have all the functions defined in the Bmw class, including the interface function from ""OnlyImplementedByBmw""?

+",327869,,13156,,2/6/2019 16:16,2/6/2019 16:16,Factory Design Pattern Implementation with multiple interfaces and base class,,1,0,,,,CC BY-SA 4.0,,,,, +386761,1,386762,,2/6/2019 16:12,,2,1273,"

�Suppose I have one C++ process, and I want this process to run eight threads in parallel.

+ +

And suppose that:

+ +
    +
  • I have a computer with two (2) physical CPUs.
  • +
  • Each CPU has four (4) cores, so that's 4x2 = (8) cores total.
  • +
  • Each core only allows one (1) logical thread, so (8) logical threads total.
  • +
  • I am using an x86-64-bit operating system.
  • +
+ +

My question is this:

+ +
    +
  • Can I run eight(8) threads in parallel on the above system using one process?
  • +
  • Does this depend on which x64 operating system I am using? If so, what's the difference between Ubuntu Linux x64 and Windows 10 x64?
  • +
  • Does this depend on which compiler I am using? If so, how do GCC, VC++, clang, and Intel compare in this regard?
  • +
+ +

Note: When I say ""in parallel"", I do mean on completely separate logical threads.

+",327928,,327928,,2/6/2019 16:45,2/6/2019 17:30,"One process using std::thread, 2 physical CPUs, 4 cores each, parallelism level?",,1,0,,,,CC BY-SA 4.0,,,,, +386767,1,386769,,2/6/2019 16:49,,0,315,"

I currently have a Web App using ASP.NET Core 2.2, Domain Driven Design, Clean Architecture, and CQRS. I'm using MongoDB as persistence.

+ +

I have developed a Repository pattern to abstract the MongoDB implementation. But I've been doing some research on how to prepare the application against outages, timeouts, and discrepancies in the network (such as latency). I've found the circuit breaker and retry patterns to work well for this scenario. I'm planning on using the Polly library to abstract the implementation details and implement an exponential back-off strategy.
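For illustration, here is a minimal sketch of what the repository-level wiring could look like with Polly. The collection/document names are assumptions and the policy values are arbitrary; only the Polly and MongoDB driver calls themselves are real.

using System;
using System.Threading.Tasks;
using MongoDB.Driver;
using Polly;

public class OrderDocument { public string Id { get; set; } }

// Wraps repository calls in retry-with-exponential-back-off plus a circuit breaker.
public class ResilientOrderRepository
{
    private readonly IMongoCollection<OrderDocument> _collection;
    private readonly IAsyncPolicy _policy;

    public ResilientOrderRepository(IMongoCollection<OrderDocument> collection)
    {
        _collection = collection;

        var retry = Policy
            .Handle<MongoException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))); // 2s, 4s, 8s

        var breaker = Policy
            .Handle<MongoException>()
            .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)); // open after 5 failures, stay open 30s

        _policy = Policy.WrapAsync(retry, breaker); // retry on the outside, breaker on the inside
    }

    public Task<OrderDocument> GetByIdAsync(string id) =>
        _policy.ExecuteAsync(() => _collection.Find(d => d.Id == id).FirstOrDefaultAsync());
}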

+ +

My question is: where should I implement the patterns (Circuit Breaker+ Retry using Polly): at the application layer (Command and Query stack) or at the Repository directly?

+ +

�I've been thinking of the Repository because: +1) The application becomes cleaner and my command/query stack isn't polluted by the library. + In case of exceptions, I can just catch and rethrow them (in the case of a MongoException).

+ +

�2) The Repository is in fact the one that makes the connection to the database, not the application layer (command/query stack). Again, I would just need to handle the exception and not the implementation details.

+",151252,,,,,2/6/2019 17:00,Circuit Breaker + Retry - Repository or Application Layer,,1,0,,,,CC BY-SA 4.0,,,,, +386772,1,,,2/6/2019 17:09,,1,397,"

At my company, we use a distributed pool of virtual machines to run our UI and API tests. These machines are all connected to an onsite server which the pool uses for publishing the test results and outputs.

+ +

The problem is our storage on this server is limited, and each day we are producing 500MB - 5GB of reports (csvs, screenshots, txt logs, etc). We would like to preserve these reports for assisting QA in identifying issues, but we end up having to routinely delete large amounts of reports due to the need to free up space.

+ +

Recently, we have moved our test scripts and inputs to a Git repo on VSTS. This not only frees up some space on our test server, but also allows for source control.

+ +

We want to do the same with the test outputs. The only issue is that the repo for this would be MASSIVE, larger than the tiny local storage allotted to each test machine. And since everything I've found online seems to suggest that each machine would need to have a full copy of the repo in order to push to it, this solution is unworkable.

+ +

My question is, how can I go about making this work? Is there a way to push an individual file or collection of files to a VSTS repo without cloning it locally first? I've looked at Git Submodules but I'm unsure at how reliable or stable that would be, since, in order to get this repo to a reasonable size, we would need about 1,500 submodules. Is there a better solution for storing large amounts of test output data?

+",327936,,,,,2/6/2019 18:44,Storing test results in a Git repo?,,1,4,,,,CC BY-SA 4.0,,,,, +386778,1,386779,,2/6/2019 19:32,,1,81,"

This is a basic question, but I don't have any sense of what other developers do in this scenario.

+

The Situation

+

I am creating an interface to allow end users to insert and update data in a table stored in an Oracle database. The interface will ask for an Excel worksheet and then generate a MERGE script with SQL to insert the data into the database table. I want the users to have to log into the database before they can perform these types of transactions.

+

I can think of a few ways to accomplish this:

+
    +
  1. Ask my DBA to create a new schema in the database, store this table there, and give all end users permission to edit tables in this schema.

    +
  2. +
  3. Ask my DBA to create a new schema (aka "user" in the case of Oracle) and have end users all use this one userId and password. So maybe the userID is named "Accounting" and the password is "hunter2" and everyone in Accounting is given this user ID and password. Then I store the table in the "Accounting" schema.

    +
  4. +
+

Number 1 seems like a better option than number 2, but I'm not confident that either of these are good ideas.

+

My questions

+

Could anyone recommend a general strategy or some best practices on how to deal with this type of situation? Perhaps I'm thinking about this all the wrong way?

+",316187,,-1,,6/16/2020 10:01,2/16/2019 5:47,How to safely allow different end users to insert data into a table stored in an Oracle database?,,1,0,,,,CC BY-SA 4.0,,,,, +386780,1,386786,,2/6/2019 20:05,,1,96,"

Apologies in advance if this is considered opinion based, but I wasn't sure where to ask it and was interested in learning if there were any definitive best practices.

+ +

Say an application calls a service from one location, but a requirement is that a == 1. You would want to perform the a == 1 check in the service to be able to handle it gracefully because you should assume the calling code will perform no validations. Should you also perform the a == 1 check in the application before making the call? The downside being you've now duplicated code, but the upside being you could prevent a service call from being made that didn't need to be.

+ +

�Does the answer change if the service is called from 10 locations in the application? Does it depend on how much code is duplicated compared to how much network traffic, or user-perceived load time, is added?

+",13211,,13211,,2/6/2019 20:19,2/7/2019 1:44,How to balance code duplication with being able to prevent unnecessary service calls,,1,1,,,,CC BY-SA 4.0,,,,, +386782,1,386797,,2/6/2019 20:39,,-1,138,"

To give a bit of context first, I have been tasked to mentor an internship on creating a UI for a certain business need:

+ +
    +
  • Already 4 months were spent on analyzing, creating the business use cases, some context diagrams, and BPMN diagrams, but the analysis is not complete yet.
  • +
  • It was foreseen that the API would be already defined and more details would be available. But due to unexpected circumstances, this was not the case.
  • +
  • The backend itself is not available either.
  • +
+ +

�The intern starts next week, but the analysis has been shared with me only today and it seems quite a few things are still too vague or incomplete. It was suggested that we already start on the UI with the intern, probably by making several mockups first, and do our own analysis by talking to the analyst, the business, key users and operators.

+ +

From this, we can probably create some kind of prototype. And maybe even define a suggestion of how the API can look like. But we are missing quite some pieces of the puzzle, so I'm afraid this will just blow up in our faces afterwards...

+ +

�How can I make sure this is beneficial for the intern, his thesis and the company? And how can we prevent a huge rework afterwards? We have around 50 man-days to spend on this. What would be your suggestion? How would you tackle this problem? And what is wrong with this way of working?

+",327948,,209774,,2/7/2019 20:15,2/7/2019 20:15,"Start frontend development with incomplete analysis, delayed backend and no API",,1,3,1,,,CC BY-SA 4.0,,,,, +386784,1,387703,,2/6/2019 22:18,,1,831,"

I've seen this pattern pop up in a couple of different teams. They have a server with a REST API and a frontend web project.

+ +

�What commonly happens is the frontend developer finds their requests are being blocked by CORS, and they create a REST API for the frontend that just passes requests through to the main backend. This API eventually accumulates some business logic and turns into its own server, handling half of the backend responsibility separately from the true backend server.
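For context, the kind of pass-through layer being described usually starts out as little more than the following (an ASP.NET Core sketch; the route, client name and backend base address are assumptions). Because it already sits between the frontend and the backend, it is very easy for the next change to add just a little logic here.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// A thin ""frontend API"" that only forwards GET requests to the real backend.
// Note it does not even forward status codes or headers yet; filling such gaps is
// usually how business logic starts to accumulate in this layer.
[ApiController]
[Route(""api/proxy"")]
public class ProxyController : ControllerBase
{
    private readonly HttpClient _backend;

    public ProxyController(IHttpClientFactory factory)
    {
        _backend = factory.CreateClient(""backend""); // base address configured at startup (assumption)
    }

    [HttpGet(""{*path}"")]
    public async Task<ContentResult> Get(string path)
    {
        var response = await _backend.GetAsync(path);
        var body = await response.Content.ReadAsStringAsync();
        return Content(body, ""application/json"");
    }
}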

+ +

How can this pattern be prevented? It feels wrong to spread business logic across two API projects and to have to run two APIs to use the frontend. Is there a proper way to make a ""set-and-forget"" CORS proxy for the frontend, without creating a dumping ground?

+",288984,,,,,2/25/2019 20:02,The Frontend-Backend Vs. Backend-Backend,,1,8,,,,CC BY-SA 4.0,,,,, +386790,1,386827,,2/7/2019 4:06,,1,1103,"

�Although many people understand that even if we have the same setup for TEST and UAT environments as we have for PROD, there are still a lot of reasons why it makes sense to run smoke tests in production:

+ +
    +
  • Ensure stability of the core functionality
  • +
  • Identify potential bugs and fix them right after deployment
  • +
+ +

�We'd like to run automated test cases (not all of them, just those that check core business features) every time a deployment completes.

+ +

�Everything looks fine except for one fact: any tests (manual and automated) leave some rubbish data behind, and we cannot leave it there because it might impact business features (for instance reporting). From my point of view there are the following strategies:

+ +
    +
  1. Clean up the data created by your tests. This doesn't look perfect, because any mistake in the process can delete real production data.
  2. +
  3. Switch to another database schema to execute tests and switch back to real production schema. This approach requires additional DevOps effort and might introduce additional downtime(SLA will be lower)
  4. +
+ +

�Could you please advise on any best practices for running tests in production?

+",327970,,,,,2/7/2019 18:04,Running automation tests in production,,2,1,,,,CC BY-SA 4.0,,,,, +386801,1,,,2/7/2019 9:09,,-2,17,"

�I have an application which is currently used by clients. Based on client feedback, we normally fix bugs directly in the live application. In parallel, I am adding new modules to a demo project. My plan is to migrate the new modules into the live version after testing.

+ +

�But the issue is that we have fixed many bugs in the live version for the old modules, so when I try to migrate between demo and live there are conflicts and a chance of losing code.

+ +

�Technology: .NET Framework MVC, +Agile methodology, +Team Foundation Server

+",327983,,,,,2/7/2019 9:20,Migration Of Demo Version Projects to Live Version in .Net MVC Applications,<.net>,1,1,,,,CC BY-SA 4.0,,,,, +386803,1,,,2/7/2019 9:46,,1,80,"

In GitLab, an issue is usually closed once a Merge Request is approved and merged into master.

+ +

However, what should happen with issues that are closed because of other reasons? +There could be various reasons for this:

+ +
    +
  • It turns out the issue is not needed.
  • +
  • The issue turns out to be not feasible, would break too much stuff etc.
  • +
  • ...
  • +
+ +

Should these issues be closed also? How can you then differentiate later between these and other closed issues, that have actually been done?

+ +

I would also be interested to know what should happen to merge requests related to such discarded issues.
+I get the generally accepted approach is to close them, it is then easily possible to see that they were not in fact merged (closed VS merged is different in MRs).
+However, in time, this puts a lot of closed merge requests into your system, even if they have no chance of ever being re-considered again. Also, what to do with the associated branches?

+ +

I would be interested in hearing recommendations or being pointed to in-depth discussions about the topic.

+",226111,,226111,,2/7/2019 15:37,2/7/2019 15:37,How to differentiate between done and closed-for-other-reaons issues in GitLab?,,0,0,0,,,CC BY-SA 4.0,,,,, +386807,1,386810,,2/7/2019 12:05,,1,110,"

�The frames are sent in a multiplexed fashion and have a stream id. The receiver rearranges frames with the same stream id, but what happens if an older frame arrives first?

+ +

Is there a concept of sequence number in http2.0 stream frames?

+",43804,,,,,2/7/2019 13:54,How does a party receiving frames under HTTP 2.0 know the order?,,1,2,,,,CC BY-SA 4.0,,,,, +386808,1,,,2/7/2019 12:13,,1,128,"

Our website needs to send transactional emails to customers each time an event happens on the site such as:

+ +
    +
  • User registration
  • +
  • Email verification
  • +
  • Password resets
  • +
  • Order confirmations
  • +
  • Despatch confirmations
  • +
  • Comment notifications
  • +
+ +

And so on.

+ +

At the moment I am storing the email templates within an application page (email.cfc which could also be email.php or whatever language you use). Whenever an event happens, I pass in parameters like the user's email address and name to the email template and it fires off an email.

+ +

What would be the best way to manage email templates that need to be sent out for regular transactions? Should we:

+ +
    +
  • Store the templates in a DB and use string replacement to inject dynamic variables like the name, order id and email address (see the sketch at the end of the question)
  • +
  • Keep them within the application only
  • +
  • Some other method
  • +
+ +

I would really like to know how large companies do it so I can start off in a best-practice way that is scalable.
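To illustrate the string-replacement option above, a DB-stored template plus token substitution can be as small as this. It is only a sketch; the {{token}} syntax and helper name are assumptions, not an existing library.

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Template text lives in the database, e.g.:
//   ""Hi {{name}}, your order {{orderId}} has been despatched.""
// The application substitutes the tokens at send time.
public static class EmailTemplateRenderer
{
    private static readonly Regex Token = new Regex(@""\{\{(\w+)\}\}"");

    public static string Render(string template, IDictionary<string, string> values) =>
        Token.Replace(template, m =>
            values.TryGetValue(m.Groups[1].Value, out var v) ? v : m.Value);
}

// Usage:
//   var body = EmailTemplateRenderer.Render(templateFromDb,
//       new Dictionary<string, string> { [""name""] = ""Ann"", [""orderId""] = ""12345"" });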

+",136435,,,,,2/10/2019 7:51,Where to store automated transactional email templates,,1,0,,,,CC BY-SA 4.0,,,,, +386813,1,,,2/7/2019 15:05,,1,85,"

We have git installed on our webserver (via cpanel), but unfortunately no CI- / Deployment-Tools.

+ +

�Would it be good practice to just initialise our repository in /public_html/ and push our local dev branches into the master branch there, instead of having a separate repository directory on the server and 'deploying' the project manually?

+ +

Thank you in advance for your replies.

+",327997,,,,,2/7/2019 15:26,Do I need to 'deploy' my web project when I can just keep a recent clone of the Master-branch in public_html?,,3,2,,,,CC BY-SA 4.0,,,,, +386816,1,388694,,2/7/2019 15:11,,1,1473,"

I have the following ""software reality in working code"" (also no access to it):

+ +
    +
  • There is a class Class1 with different attributes + +
      +
    • AttributeA - sums up all ""ordinary attributes""
    • +
    • EnumType
    • +
    • DataType
    • +
    • AttributeValue
    • +
  • +
+ +

It is realized that there are some objects John... of type Class1, so they all have these 4 attributes. They can be set via GUI. The tricky part is: First EnumType has to be chosen. Depending on this choice the possible choices for DataType might be reduced. And the ""attribute type"" of AttributeValue might differ:

+ +
    +
  • EnumType = None => DataType: int | char && AttributeValue: Field
  • +
  • EnumType = List => DataType: int | char | double | String && AttributeValue: []-Array
  • +
  • EnumType = Diction => DataType: int | char | double | String | XML && AttributeValue: DictCollection
  • +
+ +

[update 2019-02-14]:

+ +
    +
  • Shortened AttributeValue to Value.
  • +
  • ValueType 'String' switched to 'Field' for less confusion.
  • +
  • Changed from [][]-Array to DictCollection to make clear it is a collection of key-value-pairs where the type for all keys is the same chosen from DataType, while the one for value might be a different from DataType [/update]
  • +
+ +

Example: in case of Diction it is possible to have many entries, each entry consisting of two fields with (each independently) entries in the possible data types int | char | double | String | XML. First field could be XML, second could be double.

+ +

[update 2019-02-14]

+ +
e.g. JohnA: None=>Field | => char ='a' | AttributeA
+JohnB: None=>Field | => int = 23 | AttributeA
+JohnC: List=>[]Array | => String =[""hello"",""world"",""list""] | ...
+JohnD: Diction=>DictCollection | => {XML, double} = {(<length>, 2.34); (<width>, 5.43)} | ...
+
+ +

�[image updated 2019-02-14] +Tried to show it that way:

+ +

My question: how to model Class1 in UML?

+ +

My research reading:

+ + + +

�I tried to adapt it to my case. Either it is not mentioned or I don't understand it. It sounded like a derived property is what I'm looking for ... except it is not a simple concatenation or calculation. Does it still apply?

+ +

�Additional info: Class1 also needs to be used in another class as the data type of attributes there - just in case that is relevant, to ensure that while using it as a data type the class structure and inner dependencies still come into effect.

+ +

I tried this approach:

+ +
    +
  • top row is ""plain"" without any dependies, my ""blank page to start from""
  • +
  • Class1 with all common attributes AttributeA
  • +
  • Specialization regarding EnumType
  • +
  • three enumerations for DataType and visually linking them (to sort my brain, I know that in final draft there is none)
  • +
+ +

This approach has some issues:

+ +
    +
  • EnumType is only set to a default and not ""fixated""; there is no dependency expressing that this subclass requires this setting
  • +
  • AttributeValue the same
  • +
  • there are multiple enumerations with overlapping content
  • +
+ +

[image updated 2019-02-14]

+ +

+",327992,,327992,,2/14/2019 14:29,3/15/2019 15:06,"How to model attribute dependency ""inside one class"" in UML class diagram?",,2,0,1,,,CC BY-SA 4.0,,,,, +386830,1,,,2/7/2019 18:13,,1,378,"

The cover of the book The Practice of Programming lists programming principles:

+ +
    +
  • Simplicity
  • +
  • Clarity
  • +
  • Generality
  • +
+ +

Intuitively I understand generality as preferring to solve the general problem rather than the specific form in which the problem presents itself. For example a sort function should be able to sort anything that is comparable instead of just ints. However, that seems like a contrived example.

+ +

What are some better examples of generality?

+",3349,,,,,2/7/2019 18:13,What are some examples of generality?,,0,8,,,,CC BY-SA 4.0,,,,, +386839,1,386841,,2/7/2019 20:18,,1,1224,"

I know redis is a very robust caching solution and scales great, but when it comes to simpler non-enterprise websites I feel as if it's a bit too expensive (Azure Standard/C1: $100/m).

+ +

I'm considering just creating a simple API that utilizes the Dotnet Core In-Memory Caching.
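As a sketch of that idea (the endpoint, key handling and the LoadFromSourceAsync fallback are assumptions; the IMemoryCache calls are the real built-in API):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

// A tiny ""cache API"" built on the in-process IMemoryCache
// (registered in Startup with services.AddMemoryCache()).
[ApiController]
[Route(""cache"")]
public class CacheController : ControllerBase
{
    private readonly IMemoryCache _cache;

    public CacheController(IMemoryCache cache) => _cache = cache;

    [HttpGet(""{key}"")]
    public async Task<string> Get(string key) =>
        await _cache.GetOrCreateAsync(key, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await LoadFromSourceAsync(key);   // hypothetical expensive lookup on a miss
        });

    private Task<string> LoadFromSourceAsync(string key) => Task.FromResult($""value-for-{key}"");
}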

+ +

One benefit of this would be cost, as I could host it in Azure on a Linux app service for less than half the cost of Redis (Azure Linux/Basic/B1: $38.69/m).

+ +

�With the caching API separate from the main app, deployments/reboots of the main app wouldn't wipe the cache either.

+ +

Would I run into issues with this model? Is there anything with Redis that I might miss? At what point would I strongly need to consider switching to something like Redis?

+",328059,,,,,2/7/2019 21:49,Pros and Cons of using ASPNET.Core In-Memory Caching instead of Redis?,,1,1,,43515.46944,,CC BY-SA 4.0,,,,, +386842,1,,,2/7/2019 21:59,,4,483,"

I have a web application that has some specific differences between production and test environments. +I.e. test email config, test payment config, the word Test written on the home page (useful to be sure you're in test!!)

+ +

My previous developers taught me the basics of git and I can use github desktop pro to branch and merge changes.

+ +

�Now that I have a test environment, I don't understand how I can merge changes developers make in their own branches into test, and then onwards into production, without losing my test-environment-specific configs.

+ +

�I feel like I'm missing something really fundamental, but as a non-developer without any current developers to ask, I'm struggling!

+ +

I needed to get some changes live recently so I used Sourcetree and the cherry pick functionality which is great but I'm probably making a bigger headache for myself down the line.

+ +

Any answers greatly appreciated. +Thanks

+",328065,,58415,,2/8/2019 10:19,2/8/2019 10:53,How do I merge branches between Develop - Test and Production without moving across 'test' specific code?,,2,0,,,,CC BY-SA 4.0,,,,, +386843,1,,,2/7/2019 22:12,,2,134,"

I’ve got me a design and implementation problem extracting and formulating the results I need.
+What I’m trying to do is build some reporting on forecasted stock commitments in an inventory system to help drive JIT stock procurement and manufacturing. I’m really sorry, there is a fair bit to this so if you stick with me on this explanation, I really appreciate it.

+ +

�The underlying database for the inventory system I’m using is a 3rd-party proprietary database that I cannot alter. Using C#, I execute a number of simple SQL queries via an OleDB connection to extract the data into object collections, then manipulate the results via a series of LINQ queries and lambda functions. +The data model I’m working with is as follows: Please note, this is somewhat simplified +

+ +

For any one product, what I need to show:

+ +
    +
  1. The Stock on hand
  2. +
  3. The list of jobs that make up forecasted demand (Needs to be separated by Job manufacture attribute)
  4. +
  5. Calculate the forecasted commitments for products based on business rules
  6. +
  7. Calculate the lead time of manufactured products.
  8. +
+ +

Some other notes

+ +
    +
  1. A Kitset is a product that has components (End product is manufactured).
  2. +
  3. A component can itself be a kitset.
  4. +
  5. A component is a product used in the manufacture of kitsets.
  6. +
  7. Any product can be supplied direct to end consumer regardless of being either a kitset, a component or otherwise.
  8. +
+ +

�Rules to calculate product Lead Times: +If the product is not a Kitset, use the lead time from the database; no further change is required. +Otherwise, it needs to be calculated from the lead times of the component products as follows:

+ +

�If it is a Kitset product: +Kitset Lead Time = DB LeadTime + maximum component LeadTime where NonDiminishing = false + sum of component LeadTime where NonDiminishing = true
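Expressed as code, that lead-time rule is naturally recursive over the component structure. This is only a sketch: the class and property names are assumptions, NonDiminishing is placed on the component product for simplicity, and it assumes the component graph has no cycles.

using System.Collections.Generic;
using System.Linq;

// A product optionally has components (making it a kitset); components can themselves be kitsets.
public class Product
{
    public string Code { get; set; }
    public int DbLeadTime { get; set; }
    public bool NonDiminishing { get; set; }
    public List<Product> Components { get; } = new List<Product>();

    public bool IsKitset => Components.Count > 0;

    // Kitset lead time = own DB lead time
    //   + max component lead time where NonDiminishing == false
    //   + sum of component lead times where NonDiminishing == true
    public int LeadTime()
    {
        if (!IsKitset) return DbLeadTime;

        var diminishing = Components.Where(c => !c.NonDiminishing).Select(c => c.LeadTime()).ToList();
        var nonDiminishing = Components.Where(c => c.NonDiminishing).Select(c => c.LeadTime());

        return DbLeadTime
             + (diminishing.Count > 0 ? diminishing.Max() : 0)
             + nonDiminishing.Sum();
    }
}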

+ +

To calculate the future commitments of products +Consider this example. +

+ +

Easiest to start at the bottom and work my way up.

+ +
    +
  • Kitset Product 3, I’ve got 22 in stock, and future commitment for 30. This means I need to manufacture a further 8 units to satisfy demand.
  • +
  • Kitset Product 2, I’ve got 18 in stock and a supply commitment 16. In addition to this I also need 8 units to satisfy the manufacture requirements of Kitset Product 3 giving me a total commitment of 24 meaning I have to manufacture 6 units.
  • +
  • Kitset Product 1 follows the same sort of path. I’ve got 4 in stock, I need 6 units to satisfy the requirements of Kitset Product 2 so I need to manufacture a further 2 units.
  • +
+ +

This leads me to where I’m getting hung up. Effectively I’ve got a collection of products, each instance of a product may have a collection of products and so on. I’m really not sure how to implement this cyclic relationship.

+ +

Really appreciate any feedback you can give me.

+",328066,,,,,7/6/2020 1:05,Reporting with cyclic or relationships,,2,2,,,,CC BY-SA 4.0,,,,, +386845,1,386852,,2/8/2019 3:59,,0,2639,"

�I have a list of functions which need to be tested against a list of inputs to measure their relative performance. +I have already created a test function like the one below:

+ +
public static String testFunction(Function<int[], int[]> function , int[] input) {}
+
+ +

�I have also generated a list of inputs to feed to each function. The code is still very repetitive, as I have to call each function with Class::functionName, and each time I add a new function it gets even worse! I was wondering if there is a way to create a list of functions so I can use a nested for loop to test all functions against all inputs. Thanks!

+",324342,,,,,2/8/2019 10:50,Creating a list of functions in java,,2,0,,43507.94167,,CC BY-SA 4.0,,,,, +386847,1,386849,,2/8/2019 4:48,,0,134,"

I'm reading the book ""The Go Programming Language"" and this sentence in the preface section ""The Origins of Go"" has me puzzled:

+ +
+

One major stream of influence comes from languages by Niklaus Wirth, beginning with Pascal. Modula-2 inspired the package concept. Oberon eliminated the distinction between module interface files and module implementation files. Oberon-2 influenced the syntax for packages, imports, and declarations, particularly method declarations.

+
+ +

As with several other concepts which are mentioned in passing only to point out that Go doesn't have them, I'm looking around to get a high-level idea of what this means. I have searched the web a bit for ""module interface vs module implementation,"" but found nothing promising. I skimmed through the Oberon Wikipedia article, but as it's mentioned that this distinction was eliminated in Oberon, I wasn't too hopeful about that resource in the first place.

+ +

I understand modules to some extent — I've already read many chapters of ""The Go Programming Language"" and I've previously done some fooling around in Python — but I am not sure what is meant by ""module interface files"" and ""module implementation files"" or what code might have looked like before this distinction was eliminated.

+ +

Can someone please fill me in on the background of this concept so I can understand this sentence more fully?

+ +

(I can guess—but it's complete speculation—that maybe in previous languages (which?), the API for a module would have to be declared separately from the actual code which made it work, something like the rule in some languages that variables must be declared before they can be initialized. But I don't like inventing details when I really don't know.)
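
To make that guess slightly more concrete, here is the kind of split I am imagining, using C's header/source convention purely as an analogy (I don't know how faithful this is to the Wirth languages):

/* counter.h - the interface file: importers only see these declarations */
#ifndef COUNTER_H
#define COUNTER_H
void counter_increment(void);
int  counter_value(void);
#endif

/* counter.c - the implementation file: the definitions live here */
#include ""counter.h""
static int count = 0;
void counter_increment(void) { count++; }
int  counter_value(void)     { return count; }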

+",200593,,,,,2/8/2019 7:12,What was the distinction between module interface files and module implementation files before Oberon?,,1,0,,,,CC BY-SA 4.0,,,,, +386850,1,,,2/8/2019 7:29,,1,55,"

I believe the consensus on unit tests is that each individual test should interact with the smallest surface possible (?).

+ +

I have a function I want to test, but it depends on some setup performed by another function.

+ +

Should I call this other function in my test, or manually set up the state that the tested function expects?

+ +

I.e.

+ +
test function_remove_item:
+  // setup the test
+  state = clone(initial_state)
+  item = new Item('my-id')
+  // !!! Call another function to setup the test state
+  add_item(state, item) // blindly assume that add_item is functioning correctly (test it in a different test)
+
+  // perform the test
+  remove_item(state, item)
+  expect(state.items).notToInclude(item)
+  expect(state.ordered).NotToInclude('my-id')
+  expect(state.some_other_complex_state).toPass()
+
+ +

vs

+ +
test function_remove_item
+  state = clone(initial_state)
+  // manually perform all the state setup without hitting add_item
+  item = new Item('my-id')
+  state.items.push(item)
+  state.ordered.push('my-id')
+  state.some_other_complex_state = true
+  // imagine remove_item may call other function that rely on
+  // some other state
+  state.nested_flag_for_function_check_item_status = true
+
+  // perform the test
+  remove_item(state, item)
+  expect(state.items).notToInclude(item)
+  expect(state.ordered).NotToInclude('my-id')
+  expect(state.some_other_complex_state).toPass()
+
+ +

It seems that calling add_item in my test makes the test more maintainable, since it's harder to misconfigure the state (and the requirements on that state may grow quite large), but it also makes the test feel somehow ""un-pure""?

+",326769,,,,,2/8/2019 14:44,Calling functions to setup state before testing a related function,,3,0,,,,CC BY-SA 4.0,,,,, +386854,1,,,2/8/2019 9:12,,0,50,"

I have a WCF service to create that will have to pilot a desktop process for the remote client. My limitations are:
 - the WCF service cannot handle more than 100 simultaneous desktop processes concurrently;
 - the WCF service has to be protected so that a problem with one desktop process does not stop the server from responding to other clients.

+ +

Here is the scenario I thought of putting in place:
 - the WCF service has a pool of 100 operators that can take on the work for a remote client;
 - first, the remote client asks for a transactionId and, if possible, an operator is assigned from the pool with this transactionId;
 - then the client calls various operations with its transactionId on the WCF service, which delegates the work to the assigned operator and sends the result of the operation back to the client;
 - once assigned, the operator has to survive between the different remote client calls (or for a certain time limit, in order to protect the server from resource exhaustion);
 - finally, the client calls the WCF service to notify that the work is done, and the operator goes back to the operator pool.

+ +

As of now I have created the operator pool and managed to make the assigned operator do its work on a dedicated thread. But I have some performance issues, and sometimes the WCF service crashes because of the work done for a remote client... Is there any strategy/pattern or common way to put this kind of architecture in place?

+",285517,,,,,2/8/2019 9:12,Handling a pool of dedicated long lived process behind a wcf service,,0,3,,,,CC BY-SA 4.0,,,,, +386855,1,,,2/8/2019 9:21,,8,257,"

I am trying to make a user story for a basic Sudoku game, using the agile software development approach.

+ +

I get the concept behind user stories, but I was just wondering if it was possible to get an example to further my understanding?

+ +

Would saying

+ +
+
    +
  • As an avid Sudoku player, I want to have multiple levels at different difficulties.
  • As a new player I want an introduction level to the game to teach me the basics.
+
+ +

count as a user story?

+",328102,,90149,,2/8/2019 14:03,2/13/2019 23:22,Does this count as a user story for a basic Sudoku game?,,2,0,1,,,CC BY-SA 4.0,,,,, +386858,1,,,2/8/2019 9:45,,0,402,"

I'm trying to implement an interface for a message queue. This interface should allow different queue implementations to be plugged in, e.g. AWS SQS, Azure Queue Service.

+ +

So lets say that I have an interface for the Message Queue:

+ +
type Queue interface {
+   AddMessageToQueue(msg QueueMessage)
+}
+
+ +

I also have an interface for the message:

+ +
type QueueMessage interface {
+   ToSQSFormat() ...
+   ToAzureFormat() ...
+   ...
+}
+
+ +

The QueueMessage interface will allow the implementation of the Queue to convert it into the required format.

+ +
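
For example, a hypothetical SQS-backed implementation would presumably look something like this (sketch only, assuming ToSQSFormat() returns the payload to send):

// Sketch of a hypothetical SQS-backed queue. It only needs ToSQSFormat,
// yet it can also see ToAzureFormat and every other To...Format method.
type SQSQueue struct {
    // SQS client, queue URL, etc. would live here.
}

func (q *SQSQueue) AddMessageToQueue(msg QueueMessage) {
    sqsBody := msg.ToSQSFormat() // the only conversion this implementation cares about
    _ = sqsBody                  // ...hand sqsBody to the SQS client here
}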

Is this the proper way to achieve this? Is the QueueMessage interface violating the interface segregation principle, since the queue implementation will have access to all of the ""To...Format"" methods in addition to the one it requires?

+",153993,,,,,5/22/2019 0:10,Generic Message Queue Interface,,1,2,,,,CC BY-SA 4.0,,,,, +386860,1,,,2/8/2019 9:47,,1,97,"

I need help with a problem which I have been working for the last month.

+ +

I have a group of documents; each document has a set of unique words (if a word appears more than once in a document, I count it only once). For a given number of documents, I want to find the optimum group of that size which contains the smallest number of different words.

+ +

For example, if I have a set of five documents, each of them containing a set of words:

+ +
d1 = [ a , b, c, d, e ]
+d2 = [ b , c, f ]
+d3 = [ c , e, g ]
+d4 = [ a , c, d ]
+d5 = [ c , d, e ]
+
+ +

The set of three documents with the least amount of words would be (d1,d4,d5). This group of three documents would contain only a, b, c, d and e.

+ +

So far what I have tried is the ""nearest neighbor"" approach. Take the document with the least amount of new words. I extended it with a recursive limited brute force: take the next n documents with the least amount of new words.

+ +
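
For reference, the plain greedy version of what I am doing looks roughly like this (illustrative sketch only):

def greedy_pick(documents, k):
    # documents: list of sets of words; k: how many documents to pick
    chosen, covered = [], set()
    remaining = list(documents)
    for _ in range(k):
        # pick the document that adds the fewest new words to the running set
        best = min(remaining, key=lambda doc: len(doc - covered))
        chosen.append(best)
        covered |= best
        remaining.remove(best)
    return chosen, covered

docs = [{'a', 'b', 'c', 'd', 'e'}, {'b', 'c', 'f'}, {'c', 'e', 'g'},
        {'a', 'c', 'd'}, {'c', 'd', 'e'}]
print(greedy_pick(docs, 3))

On the example above this greedy pick ends up with 6 distinct words instead of the optimal 5, which is exactly the kind of behaviour described in the edit below.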

Is there any better algorithm for finding a good set? I know the optimum set can only be solved by brute force, but that is obviously not doable here.

+ +

EDIT: Why I have the impression that ""nearest neighbor"" is a poor solution: by extending the set of documents I sometimes get a solution which is much worse than with fewer documents. Theoretically, the same set of documents could always be chosen independently of how many more new documents I add.

+",328103,,328103,,2/8/2019 10:09,4/14/2019 20:24,Algorithm. Find the group of documents with the least amount of words,,2,3,,,,CC BY-SA 4.0,,,,, +386863,1,386864,,2/8/2019 10:46,,0,94,"

I'm designing an API, and I need to authenticate each request with a user. The simplest way to do that is to provide an API key to each user on their ""My account"" page, that they could regenerate at any time.

+ +

Then, our users can either include it when they design their consumer apps, or the apps they use can ask the user for their API key.

+ +

The API would only be accessible via HTTPS requests that include a header with the user's API key as value.

+ +
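
Concretely, I picture each request looking something like this (the header name and path are just placeholders):

GET /v1/orders HTTP/1.1
Host: api.example.com
X-Api-Key: 9f8e7d6c5b4a3f2e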

Is this a bad idea?

+",325892,,228759,,2/8/2019 13:03,2/8/2019 13:03,API User identification,,1,1,,,,CC BY-SA 4.0,,,,, +386870,1,,,2/8/2019 13:21,,4,1523,"

Our team is starting to refine a set of stories that cover components and UI as part of an upgrade to a newer version of Angular. These components will then be used to recreate screens in an existing application. We are considering this work technical debt.

+ +

Is there a recommended format for writing technical debt stories?

+ +

We use the ""As a ... I need to ...so I can ..."", as well as Gherkin's ""Given > When > Then"" acceptance criteria, for our standard stories that are more directly customer focused.

+ +

Is there something similar that should be used for these tech debt stories? +Or should we just list technical requirements?

+",241964,,209774,,2/9/2019 14:11,9/16/2019 11:00,Is there a recommended format for writing a technical debt story?,,2,6,1,,,CC BY-SA 4.0,,,,, +386872,1,,,2/8/2019 14:32,,3,69,"

Imagine the following situation:

+ +
    +
  1. I'm working on a python project, and I install the library antigravity with pip.
  2. I add the function fly() which uses the library, and I commit and push the changes.
  3. Some time after, my partner(s) run git pull.
+ +

How do they know that they should install the new library? By failing to run the program? Because I told them so? Should the library install automatically? Is there any easy way of doing that?

+ +

For context, in case it's relevant, I'm using python 3 and the project is stored at github. Thanks.

+",325819,,,,,2/8/2019 14:32,Can/Should I make an automatic installation of new python libraries after a git pull?,,0,6,,,,CC BY-SA 4.0,,,,, +386879,1,,,2/8/2019 16:41,,0,1028,"

I keep running into the problem of how to properly design static entity instances and have them attached to Entity Framework's DbContext. For example, we have the following:

+ +
using Microsoft.EntityFrameworkCore;
+using System;
+using System.Linq;
+using Xunit;
+
+namespace ClassLibrary1
+{
+    public class PersonType
+    {
+        public static readonly PersonType Student = new PersonType(1, ""Student"");
+        public static readonly PersonType Teacher = new PersonType(2, ""Teacher"");
+        public PersonType(int personTypeId, string description)
+        {
+            PersonTypeId = personTypeId;
+            Description = description;
+        }
+        public int PersonTypeId { get; set; }
+        public string Description { get; set; }
+    }
+    public class MyContext : DbContext
+    {
+        public DbSet<PersonType> PersonTypes { get; set; }
+        public MyContext(DbContextOptions<MyContext> options) : base(options) { }
+        protected override void OnModelCreating(ModelBuilder modelBuilder)
+        {
+            modelBuilder.Entity<PersonType>(builder =>
+            {
+                builder.HasData(
+                    PersonType.Student,
+                    PersonType.Teacher);
+            });
+        }
+    }
+    public class Tests
+    {
+        [Fact]
+        public void This_test_fails()
+        {
+            var options = new DbContextOptionsBuilder<MyContext>()
+                .UseInMemoryDatabase(Guid.NewGuid().ToString())
+                .Options;
+
+            using (var ctx = new MyContext(options))
+                ctx.Database.EnsureCreated();
+
+            using (var ctx = new MyContext(options))
+            {
+                //call Attach to pass
+                //ctx.Attach(PersonType.Student);
+
+                Assert.Equal(
+                    PersonType.Student,
+                    ctx.PersonTypes.First(p => p.Description == ""Student""));
+            }
+        }
+    }
+}
+
+ +

The reason I need the static entity classes is for functions with logic based on PersonType (e.g. if (personType == PersonType.Student)). To make this work, I need to call DbContext.Attach (which is easy to forget to call) so the context's PersonType will be equal to its corresponding static instance.

+ +

Is calling Attach the only way to solve this? Or is there something wrong with my design choices?

+ +

I'm primarily using EF Core but I believe this scenario is also present to EF6.

+ +

There is this open feature request issue in EF Core which would solve this problem but it seems not many people encounter this scenario so I'm thinking there's a different way to design this.

+ +

EDIT: There is a PersonTypes table in the database which is related to other tables.

+",245598,,58415,,2/8/2019 19:16,2/8/2019 19:16,Entity Framework and static entity instances,<.net>,1,0,,,,CC BY-SA 4.0,,,,, +386888,1,386889,,2/8/2019 20:49,,2,3090,"

I currently have a small list (< 20) of constants that can be segmented into the following three parts:

+ +
- main script (tokens, log file location)
+- database setup (username, passwords)
+- API (API key, various external IDs)
+
+ +

I felt that it cluttered the top of the files during declaration, so I transferred them into a constants file. However, when importing them back to the three files, for instance say the database setup, it would become

+ +
import constants
+
+...
+
+db = connect(constants.RDS_HOSTNAME, constants.RDS_PASSWORD)
+
+ +

compared to if I had declared them in the file

+ +
RDS_HOSTNAME = 'xxx'
+RDS_PASSWORD = 'yyy'
+
+...
+
+db = connect(RDS_HOSTNAME, RDS_PASSWORD)
+
+ +

which I feel is a bit more readable when using the constants.

+ +

If I combine the both, I would have

+ +
from constants import *
+
+...
+
+db = connect(RDS_HOSTNAME, RDS_PASSWORD)
+
+ +

but all posts I've read suggest very heavily against this as it would increase confusion due to not knowing where the constants come from.

+ +

My question is this: given that I have only one constants file with no functions, and all constants are upper case, would using from constants import * nonetheless be poor practice? Due to the uppercase notation, I would know immediately it's a constant and comes from the constants file, thus reducing confusion.

+",,user146112,,,,2/8/2019 21:06,Declaring constants: one file or separate?,,1,0,,,,CC BY-SA 4.0,,,,, +386890,1,,,2/8/2019 21:11,,-2,92,"

First of all, I hope im in the right stack exchange.

+ +

So I'm trying to build a facial recognition system, e.g. one that recognizes a face and compares it to a database of known faces. For the first part, there are a ton of resources available, thats not the problem.

+ +

The problem I have is designing the database. How do you make that efficient? Assuming you have to check the recognized face against every known face, even possibly multiple variations of the same face (think 2 versions from different angles), how do you make sure that it's still efficient and doesn't take an hour to do per run, apart from obvious parallelization?

+ +

How do you design such a database?

+",328164,,,,,2/8/2019 21:32,Designing a facial recognition systems database,,1,1,,,,CC BY-SA 4.0,,,,, +386895,1,386898,,2/9/2019 0:18,,2,168,"

We are starting to have a few projects with a design where a web server needs access to services/devices (think database connections and specialized hardware) that are on a local network. Instead of having users open up ports to the public internet, we use a small program that connects to those devices from inside the network and then establishes a persistent connection that the server can use to request information from that program.

+ +

EDIT: Diagram

+ +

+ +

I've thought of a few terms, but I wasn't sure if there is a generally accepted term for such a program

+ +
    +
  • Agent: This one seems kind of intuitive, but it seems like it means something else.
  • Proxy: Maybe, I typically think of a client connecting to a proxy, not the other way around.
  • Service: This indicates it's a long running process that does something in the background, but nothing specific about communication.
+",182339,,182339,,2/13/2019 20:56,2/13/2019 23:00,What is the term for a client that does things on behalf of the server?,,2,6,,,,CC BY-SA 4.0,,,,, +386896,1,,,2/9/2019 0:49,,-2,86,"

I have just started a new job in which I will be overhauling and updating a web application written in Django. I have a loose familiarity with Django (and have been reading up on the documentation for Django and external libraries like Django REST Framework). But I am hoping someone with lots of experience in quickly understanding existing web applications so they can update them would have some suggestions on how best to do so, both in general and with respect to Django.

+ +

What to look for in the code, tutorials to look at, perhaps there are some defined methodologies for understanding old code I can implement.

+ +

I realize this is a subjective question, but would appreciate any suggestions, including where to ask this question that might yield the best responses.

+",242231,,,,,2/9/2019 2:13,Ways to get up to speed understanding an existing Django program,,1,0,,,,CC BY-SA 4.0,,,,, +386901,1,,,2/9/2019 9:38,,13,2063,"

A typical implementation of a DDD repository doesn't look very OO, for example a save() method:

+ +
package com.example.domain;
+
+public class Product {  /* public attributes for brevity */
+    public String name;
+    public Double price;
+}
+
+public interface ProductRepo {
+    void save(Product product);
+} 
+
+ +

Infrastructure part:

+ +
package com.example.infrastructure;
+// imports...
+
+public class JdbcProductRepo implements ProductRepo {
+    private JdbcTemplate jdbcTemplate = ...
+
+    public void save(Product product) {
+        JdbcTemplate.update(""INSERT INTO product (name, price) VALUES (?, ?)"", 
+            product.name, product.price);
+    }
+} 
+
+ +

Such an interface expects a Product to be an anemic model, at least with getters.

+ +

On the other hand, OOP says a Product object should know how to save itself.

+ +
package com.example.domain;
+
+public class Product {
+    private String name;
+    private Double price;
+
+    void save() {
+        // save the product
+        // ???
+    }
+}
+
+ +

The thing is, when the Product knows how to save itself, it means the infrastructure code is not separated from the domain code.

+ +

Maybe we can delegate the saving to another object:

+ +
package com.example.domain;
+
+public class Product {
+    private String name;
+    private Double price;
+
+    void save(Storage storage) {
+        storage
+            .with(""name"", this.name)
+            .with(""price"", this.price)
+            .save();
+    }
+}
+
+public interface Storage {
+    Storage with(String name, Object value);
+    void save();
+}
+
+ +

Infrastructure part:

+ +
package com.example.infrastructure;
+// imports...
+
+public class JdbcProductRepo implements ProductRepo {        
+    public void save(Product product) {
+        product.save(new JdbcStorage(""product""));
+    }
+}
+
+class JdbcStorage implements Storage {
+    private final JdbcTemplate jdbcTemplate = ...
+    private final Map<String, Object> attrs = new HashMap<>();
+
+    private final String tableName;
+
+    public JdbcStorage(String tableName) {
+        this.tableName = tableName;
+    }
+
+    public Storage with(String name, Object value) {
+        attrs.put(name, value);
+        return this;
+    }
+    public void save() {
+        JdbcTemplate.update(""INSERT INTO "" + tableName + "" (name, price) VALUES (?, ?)"", 
+            attrs.get(""name""), attrs.get(""price""));
+    }
+}
+
+ +

What is the best approach to achieve this? Is it possible to implement an object-oriented repository?

+",328194,,58415,,2/11/2019 15:12,3/5/2019 11:27,DDD meets OOP: How to implement an object-oriented repository?,,6,17,3,,,CC BY-SA 4.0,,,,, +386903,1,,,2/9/2019 12:51,,-2,126,"

Whilst browsing on GitHub I came across the following project: https://github.com/infection/infection and, according to the project's website, https://infection.github.io/guide/#What-is-Mutation-Testing .

+ +

As far as I understand, mutation testing means that the code is automatically changed slightly, the tests are run against the changed code, and the result is scored with a Mutation Score Indicator (MSI).

+ +

Practically, unit tests tell us (explained childishly):

+ +
+

Hey, you developed this nice function. I, Mrs Unit Test framework, can check it for you: if you explain to me how this specific function is used (unit tests with input mocking) and what it means for it to work (assertions), then whenever you change stuff you can be sure that the code is not screwed up.

+
+ +

The integration testing tells us:

+ +
+

Hey, I can tell you whether the software as a whole works together on a given setup (e.g. whether the database-fetching code, with a given database, fetches and processes the data as expected).

+
+ +

Also fuzzing test/ black box testing tells us:

+ +
+

Hey, I can give you garbled input so you can check how the software behaves; there may be holes under harsh, unpredicted conditions that you never thought of. So you can sleep a bit more quietly, knowing that the software is safe-ish.

+
+ +

But what does mutation testing tell us in practical terms - what does the MSI practically indicate to us software engineers?

+",249660,,,,,2/9/2019 14:52,What in practical-stupid-childish terms a mutation testing indicates?,,1,2,,,,CC BY-SA 4.0,,,,, +386908,1,387195,,2/9/2019 16:52,,6,2239,"

So, for the DDD folks out there: Aggregate Roots are supposed to contain business logic only and expose only what is needed.

+ +

In the DDD Red Book by Vaughn Vernon, he used LevelDB and Hibernate as examples. LevelDB is key-value storage, and Hibernate I think uses reflection.

+ +

However, if I don't want to use any of those, how am I going to save Aggregates?

+ +

3 of the easiest solutions I can think of are listed in the title:

+ +
    +
  • Exposed public getters
  • Reflection
  • Inject repository to aggregate and have a method called save (Memento pattern?)
+ +

Let's imagine Payments:

+ +
    +
  • CardPayment with cardNumber, expiration, cardHolderName
  • CashPayment with cashAmount
  • PaypalPayment with paypalAccountId
+ +

Each of those has their own unique properties but adheres to an abstract class/interface (won't go deeper for simplicity).

+ +

In my whole life, there are cases like this that can't be avoided especially when doing Repository where you really need to know what are you going to save.

+ +

Going with public getters, you might need to do an instanceOf checks in repository so you can cast and access the unique properties.

+ +

Going with reflection, it may not be a problem but feels like a hack...

+ +

Injecting repositoryObj to a save method seems to be the next best option, at least the Aggregate knows what properties to save but this violates DDD I think. It knows about persistence too much and save is not part of ubiquitous language.

+ +

I can be pragmatic and eat a cake but I want to know how it is done the pure OOP and/or DDD way.

+ +

EDIT:

+ +

Found an article from Vaughn Vernon on how to model Aggregates with Entity Framework. The article can be applied to anything else too, it's not really specific to EF. I'll just link to it to prevent longer O.P: https://kalele.io/blog-posts/modeling-aggregates-with-ddd-and-entity-framework/

+",312801,,312801,,2/12/2019 16:16,2/14/2019 17:53,"DDD/OOP - saving Aggregates without ORM. Public getter, reflection, or injecting repository?",,5,12,6,,,CC BY-SA 4.0,,,,, +386909,1,,,2/9/2019 17:26,,0,63,"

This may be a broad question but I have not been able to find an answer.

+ +

In my program I am communicating to a device over serial communication. The data coming through is binary, and formatted with some header/length/checksum bytes. I can decode the data fine, the part I am stuck on is what is a good practice on distributing that data to parts of the program.

+ +

In this example lets say there are 10 different message types, that the serial port can receive. Module A in the program wants Messages 1 to 5, and Module B wants messages 6 to 10.

+ +

When the serial port finds a valid message, how should it notify Module A and Module B?

+ +

Right now I am implementing it by registering callbacks for any module that wants serial data, and sending each message to all registered modules, whether it's for them or not. I am curious if there is a more accepted design/architecture for this type of problem.

+ +
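
For reference, the current registration/dispatch shape is roughly this (names simplified, not the real code):

#include <stddef.h>

#define MAX_LISTENERS 8

typedef struct {
    int id;            /* message type, 1..10 */
    const void *data;  /* decoded payload */
    size_t len;
} message_t;

typedef void (*msg_callback_t)(const message_t *msg);

static msg_callback_t listeners[MAX_LISTENERS];
static size_t listener_count;

void serial_register_listener(msg_callback_t cb)
{
    if (listener_count < MAX_LISTENERS)
        listeners[listener_count++] = cb;
}

/* Called whenever a valid frame has been decoded. Every listener gets
 * every message and has to filter out the IDs it does not care about. */
static void dispatch(const message_t *msg)
{
    for (size_t i = 0; i < listener_count; i++)
        listeners[i](msg);
}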

I will have to do the same thing with CAN and ethernet devices, so wanted to try and get this implemented correctly. I am currently working in C.

+",328217,,,,,2/10/2019 21:44,Architecture/design for reading incoming data and distributing it across program,,2,1,,,,CC BY-SA 4.0,,,,, +386916,1,386998,,2/9/2019 19:33,,0,108,"

I am trying to figure out what the flow could be, since I am using JMS for the first time. Locally, I have ApacheMQ installed on my Windows machine. Using the simple Spring JMS example mentioned here, I was able to see how sending and receiving messages works in Spring JMS, and my ApacheMQ looks like the following after running the producer and consumer:

+ +

+ +

Here is my sender class :

+ +
public class Sender {
+
+  private static final Logger LOGGER =
+      LoggerFactory.getLogger(Sender.class);
+
+  @Autowired
+  private JmsTemplate jmsTemplate;
+
+  public void send(String message) {
+    LOGGER.info(""sending message='{}'"", message);
+    jmsTemplate.convertAndSend(""Testing Spring JMS"", message);
+  }
+}
+
+ +

Here is my Receiver class :

+ +
public class Receiver {
+
+  private static final Logger LOGGER =
+      LoggerFactory.getLogger(Receiver.class);
+
+  private CountDownLatch latch = new CountDownLatch(1);
+
+  public CountDownLatch getLatch() {
+    return latch;
+  }
+
+  @JmsListener(destination = ""Testing Spring JMS"")
+  public void receive(String message) {
+    LOGGER.info(""received message='{}'"", message);
+    latch.countDown();
+  }
+}
+
+ +

So, now in my User Interface, I have a ""Download"" button somewhere and when a user clicks on it, it is supposed to call a stored procedure which is going to take time.

+ +
    +
  1. I am trying to understand how I would call this JMS application when a user clicks the download button, so that I can send the stored-procedure call to the Sender. I mean, there is no endpoint defined in the code above, unlike with REST, where the request would hit a Controller.

  2. And how would the Sender send it to the destination queue - as a string message, just like in the example shown above?

  3. I guess that if I had a clear idea about how the Sender processes this, I might get some idea about how the Receiver is going to handle it.
+",198234,,326536,,2/11/2019 12:10,2/11/2019 14:16,Figuring out sql /stored procedure processing using JMS queue,,1,0,,,,CC BY-SA 4.0,,,,, +386918,1,386923,,2/9/2019 20:52,,5,215,"

I'm developing a single page application where the first page contains the subjects and the next one contains items of the chosen subjects. The items are stored in a JSON object fetched from the server and have only two properties - name and id.

+ +

My questions is - generally speaking - should I divide my fetch from the server to two separate requests - first one for the subject, and second for the items of the selected subject, or might it be good to fetch everything in once?

+ +

Obviously it depends on the quantity of items in each subject. And I know that a larger object will take more time to fetch and more memory for the client to store. But what would be a good rule of thumb? If I have, in total, 200 items (names and ids), is that considered a lot already? Or 400? How should I decide?

+",328229,,326536,,2/9/2019 21:30,2/10/2019 0:06,When does a JSON object become a burden on memory?,,1,1,,43508.56944,,CC BY-SA 4.0,,,,, +386925,1,,,2/10/2019 2:43,,-1,155,"

I'm currently developing an SaaS application in PHP, with Laravel, using its own DB class.

+ +

Let's pretend we've got 2 classes under 2 namespaces, plus Laravel's own DB - so

+ +
Illuminate\Support\Facades\DB;
+Developer\App\Core\Queries;
+Developer\App\Section;
+
+ +

They're our 3 classes.

+ +

With our custom classes, we've got:

File 1: Developer\App\Core\Queries\Section.php

+ +

which contains

+ +
public static function fetchInfoFromTable(string $id) 
+{   
+    //Pretend there is a bit to check if we've already run the query
+    return DB::Query('blablabla');
+}
+
+ +

File 2: Developer\App\Section\XXXXX.php

+ +
private function showInfo()
+{
+       $data = Section::fetchInfoFromTable(1);
+       $data = dostufftomakeitnice($data);
+       $data = doMorestufftomakeitnice($data);
+       return $data;
+}
+
+ +

Is this worth the amount of effort I've put into typing this question? Or is it worth just putting even the queries into XXXXX.php?

+",282118,,252416,,3/12/2019 18:57,8/3/2020 22:03,Database abstraction layer,,1,2,,,,CC BY-SA 4.0,,,,, +386931,1,,,2/10/2019 5:20,,1,70,"

I am designing some software for transportation companies (air, ground, water, etc). In these industries, you have rest rules where you need to get a certain amount of rest in between activities.

+ +

So let's say a trucker/pilot/sailor has an activity to go from A to B that takes 4 hours, then has a 30-minute break, then wants to go from C to D, which takes 3 hours. Then he/she needs 10 hours of rest before starting another activity.

+ +

Now I want to figure out when the next activity can be scheduled. Of course in the very simple example above, I could just take the time that D ended, and add 10 hours, then look for an activity at that point. However, there are more variables in play so I need a broader solution where I can incorporate the other elements. For instance, the trucker has to have 34 hours off in a rolling 168 hour window... so I would be checking for that as well.

+ +

My initial thinking was to put all the activity into an array, where each key/value in the array is a minute, then go through the array to check that all the rules are complied with. However, this will be a huge array and very inefficient.

+ +
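
To illustrate that idea (very rough sketch, one array slot per minute of the rolling window, no bounds checking):

WINDOW_MINUTES = 168 * 60          # rolling 168-hour window

# 0 = resting, 1 = on duty; one entry per minute, oldest first
minutes = [0] * WINDOW_MINUTES

def mark_activity(start_minute, duration_minutes):
    for m in range(start_minute, start_minute + duration_minutes):
        minutes[m] = 1

def longest_rest(window):
    # length of the longest consecutive run of rest minutes
    best = run = 0
    for v in window:
        run = run + 1 if v == 0 else 0
        best = max(best, run)
    return best

# the 34-hours-off-in-168 rule then becomes something like:
# ok = longest_rest(minutes) >= 34 * 60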

I appreciate any thoughts!!

+",116683,,,,,2/10/2019 7:15,Help me with concept - rolling time window,,2,1,,,,CC BY-SA 4.0,,,,, +386932,1,386950,,2/10/2019 6:37,,0,639,"

Let's take this unit test. Unit testing guidelines state that I should only have 1 assert per test, unless I'm testing the state of an object. In this case, Muxer.Muxe is a wrapper around FFMPEG that generates the right command line and executes it. Having the function return ""Success"" really tells me nothing about whether the FFMPEG commandline was correctly generated for a variety of scenarios, which is what requires more testing. (For example an extra tag needs to be added for AAC files).

+ +

So, based on unit testing correct practices, should I include the last part of the test, which tests the internal work being done within the method, or not?

+ +
[Theory]
+[InlineData(""video.mkv"", ""audio.aac"", ""dest.mp4"")]
+[InlineData(""video.MKV"", ""audio.AAC"", ""Dest.MKV"")]
+[InlineData(""video"", ""audio"", ""dest"")]
+public void Muxe_AudioVideo_Success(string videoFile, string audioFile, string destination) {
+    var Muxer = SetupMuxer();
+
+    var Result = Muxer.Muxe(videoFile, audioFile, destination);
+
+    Assert.Equal(CompletionStatus.Success, Result);
+
+    Assert.Single(factory.Instances);
+    IProcessManagerFFmpeg Manager = factory.Instances.FirstOrDefault() as IProcessManagerFFmpeg;
+    Assert.NotNull(Manager);
+    output.WriteLine(Manager.CommandWithArgs);
+    Assert.Contains(audioFile, Manager.CommandWithArgs);
+    Assert.Contains(videoFile, Manager.CommandWithArgs);
+}
+
+",328246,,328246,,2/10/2019 6:49,2/11/2019 17:24,xUnit Should I Test Method Internal Work Or Only Result?,,3,10,,,,CC BY-SA 4.0,,,,, +386944,1,386949,,2/10/2019 10:41,,-2,125,"

I'm trying to develop a way to represent something that has an URI inside of it, let's say:

+ +

{someVariable}{separator}{URI}{separator}{anotherVariable}

+ +

where someVariable is alphanumeric.

+ +

I just can't figure out what the separator should be, having in mind that it should:

+ +
    +
  1. Unambiguously represent a separator
  2. Be small (2 characters, 3 at most)
  3. Be human readable
+ +

I've thought of ""::"", ""__"", ""--"" etc, but all are legal in URI (both URNs and URLs are legal in this DSL).

+ +

Basically, is there a way of unambiguously determining what is the separator from string containing an URI?

+ +

Considered:

+ +
    +
  1. ::
  2. __
  3. --
  4. ""##"" (breaks if URL ends with #)
  5. Slashes, backslashes
  6. Enclosing with tags, such as <>, {}, works, but then it becomes hard to type; one of the reasons it has to be a string. JSON is not an option either.
+",156590,,156590,,2/10/2019 10:53,2/10/2019 16:11,Unambiguously represent separator in a string containing an URI?,,2,4,,,,CC BY-SA 4.0,,,,, +386945,1,,,2/10/2019 11:10,,0,103,"

I am designing job-scheduling software. Users of the software will define the job definition in an XML-like format (https://github.com/etingof/apacheconfig/):

+ +

e.g.

+ +
<job>
+    job_name myjob
+    <command>
+        command_name command1
+        command_exec_string  'echo from command1'
+    </command>
+    <command>
+        command_name command2
+        command_exec_string  'echo from command2'
+    </command>
+    <crontab> 5 4 */2 * * </crontab>
+</job> 
+
+ +

The above definition means: run this job ""at 04:05 on every 2nd day-of-month"". After writing such a job definition file, users will submit it to our install-job service, which will take care of scheduling the job with Unix cron.

+ +

Unix crontab entry will look something like below:

+ +
5 4 */2 * * /job_executor.py job-definition.xml
+
+ +

The Unix cron daemon will launch job_executor.py at 04:05 on the scheduled days.

+ +

I am using layered architecture for architecting this software. This means job_executor will go through multiple layers (script, business, persistence etc.).

+ +

Business layer again have multiple pluggable components. For example:

+ +
    +
  1. Check for dependencies before executing a command in a job - mark state of command as ""failed_on_dependency"" and state of job as ""failed_on_dependency"" if the dependency is not satisfied.

  2. If command execution fails, then check if the configurations related to rescheduling a job are specified in job_definition, if yes then sleep till reschedule time interval - mark state of a job as ""rescheduled"" and state of command as ""failed"" etc.

  3. And many more..
+ +

Each of these pluggable components can update the state of the job only, the state of the command only, or the state of both the command and the job.

+ +

My question is: should I keep the logic to update these states somewhere central, or should it be taken care of by the individual pluggable business-layer components?

+ +

I am even considering having an immutable global Job object with actions defined to update it (something like a Redux architecture), though I don't need any publish/subscribe type of logic here.

+",,user317612,,user317612,2/11/2019 5:07,2/11/2019 5:07,Job Scheduling Software,,0,5,,,,CC BY-SA 4.0,,,,, +386946,1,,,2/10/2019 11:18,,11,502,"

I have just started reading the book Applying UML and Patterns by Craig Larman. I find it very interesting because it challenges many of what I have been told at work. I read that requirements aren't fully collected in one go in agile and it takes many iterations to complete requirements gathering. If that is the case, is putting up a hard set deadline, which is what I'm forced to do at work, very un-agile, considering there could be some new ground breaking requirement (or change request masquerading as requirement) tomorrow?

+",328261,,,,,2/12/2019 1:29,Is deciding the release date before collecting all requirements un-agile?,,4,0,,,,CC BY-SA 4.0,,,,, +386948,1,,,2/10/2019 13:39,,2,217,"

I am learning HSMs and I can see a problem that I don't know how to address. The problem is shown in the image below.

+ +

Inside S1, substates S11 and S12 are continuously changing in reaction to TIMERTICK event. At the same time the superstate S1 needs also to keep track of TIMERTICK event in order to change to S2 after an amount of time.

+ +

As far as I understand, the TIMERTICK event will always be processed in one of the substates first, so it can't be used by superstate S1. What is the way to deal with this situation?

+",328270,,,,,2/11/2019 1:27,Hierarchical State Machine (HSM) superstate and substate sharing event,,2,0,,,,CC BY-SA 4.0,,,,, +386956,1,,,2/10/2019 19:12,,0,461,"

At the moment I have a frontend client calling several backend REST APIs. For example a call may be to get information about a certain vehicle. Then the client will call REST API A to get some performance data for the vehicle and REST API B to get some model or other static data.

+ +

This is becoming tedious to handle in the frontend client, so I'm currently building a BFF which can call these different APIs and aggregate them depending on the type of call from the client.

+ +

The problem I'm having is that in order to deserialize a json response from the APIs I need to deserialize them using a model that I have created in my project to map to the fields I get from each backend. Then I take the multiple deserialized objects from different APIs and create a new model that I then serve to the client and which may only makes sense for that client.

+ +

How do I separate these two types of models? Right now I have the API models in a Models folder and the client specific models in a ViewModels folder but I don't like the ViewModel name as I'm not really doing MVC.

+ +

There is no database so the BFF doesn't really own any entities/models. In general just having doubts on how to structure this project.

+ +

Edit: There is a hard requirement that .NET must be used so I'm doing the BFF in Web Api.

+",79854,,79854,,2/10/2019 19:34,2/10/2019 19:34,Separating models in a Backend-For-Frontend (BFF) API,,0,5,,,,CC BY-SA 4.0,,,,, +386958,1,,,2/10/2019 19:25,,0,119,"

I approached our architect with a solution in which all of our microservices verify the auth of the user invoking the call. Basically, what I've been doing for years, on different projects. He basically said that I'm digging my own grave and proposed an infrastructural layer, a proxy, which will authorize the requests instead and if request is allowed, will forward the request to the microservice, without passing on the data about the user to the microservice (has service-to-service authentication).

+ +

Has anyone done this? What are the benefits and is it worth the effort?

+",156590,,,,,2/10/2019 19:25,Microservice auth layer vs each service authorizing requests?,,0,3,1,,,CC BY-SA 4.0,,,,, +386962,1,,,2/10/2019 20:14,,-2,559,"

I want to create the following distributed system: Spring back-end microservices containing the domain logic, a UAA (authentication) service, a Eureka service registry/discovery, a Spring Cloud Config service, and a Zuul API gateway.

+ +

I am wondering what is the proper way of serving an Angular front-end application. At the moment I am serving it as a standalone application calling the API gateway, but I have seen other approaches like serving Angular from a Spring Boot service registered in Eureka, or serving it from the API gateway itself*. I am leaning towards the latter, as it feels more structured, since the front-end is intended to only consume the API gateway, and nothing else.

+ +

*In fact, generating a front-end app with JHipster implements this approach: it creates a webapp folder in the src/main directory of the Zuul application.

+",289455,,,,,2/10/2019 21:14,How to serve the front-end in a Spring microservice architecture?,,1,0,,,,CC BY-SA 4.0,,,,, +386967,1,386983,,2/10/2019 21:16,,1,81,"

I'm writing the back-end for a chat application. The database being used is MongoDB, which uses objects that have their own unique IDs.

+ +

I've run into a problem; I have two document types, channels and users, and said types need to reference each other.

+ +

The user object needs a list of channels it can participate in, and the channel object needs a list of users that are participating. Without these two objects becoming accidentally desynced (a channel object having a user in its list whose own channel list does not include said channel, vice versa), how should I go about structuring the database without sorting through every user and channel every single time I need to get a user list or channel list?

+ +

I've previously tried the below model: +

+ +

...however, every time a user joins or leaves a channel, the channel must be removed from the user's channel list and the user must be removed from the channel's user list. If any error were to occur on either side and not the other, the server would have to backtrack to prevent any inconsistencies later on.

+",328298,,,,,2/11/2019 7:13,MongoDB: Two documents of separate types that reference each other,,1,1,,,,CC BY-SA 4.0,,,,, +386969,1,,,2/10/2019 22:13,,0,151,"

Suppose I have a server and a client. When I send a message to the client, would I be sending a ServerMessage, or a ClientMessage? The other way to think about it - when I receive a message, it is a ClientMessage or a ServerMessage? Or is it ClientMessage in both cases?!

+",318196,,,,,3/19/2019 2:21,"Does a server send a client a ServerMessage, or a ClientMessage",,5,3,,,,CC BY-SA 4.0,,,,, +386978,1,,,2/11/2019 4:24,,0,284,"

I am making a program that tracks whether an employee has checked in on a software app that day. The employee can check in at any time, so long as they check in before their shift ends. If they don't check in, then a notification gets sent to their supervisor (where they might get in some trouble :) ). My question is how to best implement this considering that user shifts may change and some days they may be absent, so notifications shouldn't be sent.

+ +

Right now I'm thinking about making a Shift table and assigning each user their own shift. In other words, each user has one shift, and each shift has one user. I could include it in the user table, but it's getting way too crowded; with a separate table it can be edited easily. Maybe a second table where supervisors can save and load presets. But in this case, what type of data structure can I use to store excused absences, and how can I store the days they work (Mon, Tues, Wed...)? Does there need to be another table just for approved absences? Can I just store the days in an array? I'm using PostgreSQL, so this is possible.

+ +
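
To make that idea concrete, I am imagining something roughly like this (table and column names are placeholders, and it assumes a users table with an integer id):

CREATE TABLE shifts (
    user_id     integer PRIMARY KEY REFERENCES users(id),
    work_days   text[]  NOT NULL,   -- e.g. '{Mon,Tue,Wed,Thu,Fri}'
    shift_start time    NOT NULL,
    shift_end   time    NOT NULL
);

CREATE TABLE excused_absences (
    user_id     integer REFERENCES users(id),
    absent_on   date    NOT NULL,
    PRIMARY KEY (user_id, absent_on)
);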

Anyway, those are my ideas. Do they seem reasonable to you, or can you think of a better way to do this?

+",328314,,,,,2/11/2019 12:56,How to best design a software feature when work shifts are considered,,3,4,,,,CC BY-SA 4.0,,,,, +386980,1,,,2/11/2019 5:24,,2,1041,"

I'm working on a plugin for some drafting software. The plugin takes the form of a dynamically loaded mach-o bundle.

+ +

The software vendor provides a template plugin in the form of an XCode project.

+ +

The whole thing is written in C++, but XCode doesn't seem to support C++ in its testing bundle targets.

+ +

Tests could be written in Objective-C that include the C++ code, but that's not a great solution.

+ +

Another option would be to create another executable target that runs its own bunch of tests.

+ +

What have others done in this situation?

+ +

What's the best way to write unit tests in XCode when using C++?

+",248366,,,,,2/11/2019 5:24,Unit testing C++ in XCode,,0,6,1,,,CC BY-SA 4.0,,,,, +386995,1,,,2/11/2019 13:22,,5,650,"

This question is related but doesn't directly answer my question.

+ +

I can imagine that understanding category theory helps someone who is designing a programming language, apparently in particular functional programming languages.

+ +

But what I find hard to understand is how a software engineer who has no interest in designing a new language, but just wants to solve practical problems, would benefit from a knowledge of category theory.

+ +

Could you explain any benefit it has? Let's assume the software engineer who writes in functional programming languages. How would he/she benefit from a knowledge of category theory?

+ +

I would like to get a more illustrative answer than merely ""it helps you to understand functional programming better"", because I've heard multiple lectures on category theory make a claim like that, but it has not become clear to me how, practically, it actually helps you.

+",293724,,,,,2/12/2019 13:30,How does understanding category theory help a software engineer?,,2,0,3,,,CC BY-SA 4.0,,,,, +386996,1,387001,,2/11/2019 13:25,,0,126,"

I would like to write a program in C that forks then execs three different processes.

+ +

Afterwards, two of the programs will be suspended and only one of the three will be outputting to stdout. Is it possible to later suspend the process currently writing to stdout, resume one of the other suspended processes, and have that one displayed on stdout instead?

+",328361,,,,,2/11/2019 15:08,Is it possible to swap out programs displayed in stdout?,,1,2,,,,CC BY-SA 4.0,,,,, +387000,1,,,2/11/2019 14:39,,2,199,"

Use case:

+ +

We have a product which ties a set of resources to a concrete user. +Now we would like to provide the customers with an API so that an automated client application can have access to a subset of a concrete user's resources. A single user's resources can be accessed by multiple client applications (potentially with different scopes)

+ +

Implementation and architecture details:

+ +

We have a server which acts as a resource server and as a web application server at the same time. +The Amazon Cognito service is being used as an identity provider and an authorization server.

+ +

Challenge:

+ +

What is a viable and secure approach to:
 - associate a user with multiple clients;
 - handle the authorization logic.

+ +

What we are currently considering:

+ +

We are thinking of mapping a registered (in Cognito) client application to a user in our database (the users are one to one mapped to users in a Cognito user pool). From then on we have to put authorization logic somewhere and we think that there are two possible locations:

+ +

- Option 1: Use the ""Pre Token Generation"" lambda trigger (provided by Cognito) to request from our application server the user which is mapped to the client application that is to be authorized then attach claims to the token which the resource server will use to identify which user's resources to serve to the client application - hence the client application will have successfully accessed it's mapped user's resources.

+ +

- Option 2: Move the lambda logic to the resource server, a.k.a. extract the registered client application from the database via the token claims and then find out which is the corresponding user and do some authorization logic there.

+ +

The difference between the two is that ""Option 1"" sends the mapping to a Cognito lambda and the token already contains the authorization while ""Option 2"" does the authorization on the resource server.

+ +

Question:

+ +

What is a secure and scalable way to tie a client application to a user and where (and how) is the client to be authorized?

+",328358,,,,,2/11/2019 14:39,Viable ways to handle an access token on the resource/web application server,,0,0,,,,CC BY-SA 4.0,,,,, +387003,1,387006,,2/11/2019 15:49,,0,207,"

I have a backend that does not support emoji characters in all of its fields, so I want to block them directly in the frontend application. I'm in the register section, and I want to limit the possible characters for the email field. I know that RFC 5322 specifies that many particularities can be found in those addresses, including special characters. Even emoji can be put there (link).

+ +

I'm using a whitelist to implement this block.

+ +

What characters should I whitelist to support common email addresses, without falling into whitelisting every character supported by email addresses?

+",239147,,,,,2/12/2019 7:53,What characters to limit when user enters an email?,,1,0,,,,CC BY-SA 4.0,,,,, +387007,1,387012,,2/11/2019 16:05,,0,162,"

So, I'm having some value objects in my domain, and when I'm using them in one of my builders it looks like this:

+ +
.withSomething(Id.of(123), Specifiers.of(MySpecifier.of(""233""), MySpecifier.of(""23423"")));
+
+ +

and in method I have:

+ +
withSomething(Id id, Specifiers specifiers){
+  //id.value()...
+}
+
+ +

This seems too wordy for my taste, as it can be written as:

+ +
.withSomething(123, ""233"", ""23423"");
+
+ +

and then in method I would have:

+ +
withSomething(int id, String... specifiers){
+    Id.of(id);
+    Specifiers.of(specifiers);
+}
+
+ +

First one seems more descriptive but today IDEs can also mark variable names - so it would be shown like this:

+ +
.withSomething(id: 123, specifiers: ""233"", ""23423"");
+
+ +

Which of these two approaches do you use, and why?

+",257381,,,,,2/12/2019 12:58,ValueObject - too wordy?,,2,1,,,,CC BY-SA 4.0,,,,, +387017,1,,,2/11/2019 18:32,,0,123,"

Recently I stood up a Redis server and tested the idea of storing application 'states' that only existed in our application and were never stored in our database. I don't know if we will ultimately do this, but for the first time this information was available external to the application. What I did notice above all else was the speed of doing this through Redis.

+ +

Now I have a crazy idea and I wonder if anyone has ever toyed with this before (or can point to any experience or papers related to the idea.) If my system requires message passing (say CQRS for example) can I just store the message data as a record in Redis and pass only the key? I realize that physical boundaries can affect this but in my situation everything (both front, middle and backend) will have access to the same information. Has anyone ever tried just passing keys for messages?

+ +

I can think of all the normal objections like writes failing, failed deliveries and so on but those problems also exist if I pass fully fleshed out messages. Why am I crazy?

+",328388,,,,,7/11/2019 22:01,Passing Keys instead of Messages,,1,4,,,,CC BY-SA 4.0,,,,, +387021,1,,,2/11/2019 20:57,,0,696,"

I am developing a RESTful API designed primarily (but not exclusively) for consumption by a web application. For the purposes of this question, the API is a set of GET endpoints. The main endpoint is /people/ and understands a large amount of query parameters to refine the result set.

+ +

The web application's splash page initiates this request:

+ +
GET http://my.api/people/
+Accept: application/json
+
+ +

And this responds with a body of:

+ +
{
+  ""results"": [
+    {
+      ""name"": ""john"",
+      ""age"": ""21"",
+      ""_links"": {
+        ""self"": ""/people/john/""
+      }
+    },
+    ... more people ...
+  ],
+  ""_links"": {
+    ""self"": ""/people/""
+  }
+}
+
+ +

The results array is used by the web application to populate a list view of the people, with their name and age, and also an image of the person.

+ +

The image can be retrieved with such a request:

+ +
GET http://my.api/people/john
+Accept: image/png
+
+ +

From the perspective of the web application, this presents a number of difficulties.

+ +
    +
  • The _links.self of an individual person can be used in a HTML <img> element as the src, however the request generated by this has a header of Accept: */*, which does not sit well with the API as the global conventional default type is application/json.

  • +
  • The image/png request could be manually sent via Javascript but this can result in a large number of concurrent XHR requests, possibly beyond the concurrency limit of some browsers

  • +
+ +

As I also have final say in the design of the API other possibilities are on the table:

+ +
    +
  • The /people/ endpoint could be modified to return the image representation as Base64 (or some other text format) as part of the application/json response. This could also be excluded by default (and included manually with a include=images query parameter), so that requests are not forced to download image data for each response. This increases the response size massively, leading to slower response times for clients (especially mobile users) and potentially a cost increase due to increased outbound data from the hosting platform of the API.

  • +
  • The /people/john endpoint could be updated to default to an image/png response, however this goes against the grain of the rest of the API and is a change aimed exclusively at one client of the API.

  • +
  • The /people/ endpoint could return, in each result's _links dictionary, a direct link to the image e.g. my.api/images/people/john.png. This is looking like the best option with few drawbacks, however I do not know how well this incorporates into REST / HATEOAS.

  • +
+ +

What is the most appropriate solution, from a REST / HATEOAS architectural point of view, for retrieving the image representation of a resource as well as the JSON representation

+",185177,,,,,2/12/2019 3:14,Retrieving JSON and image representation of a resource,,1,0,,,,CC BY-SA 4.0,,,,, +387022,1,387023,,2/11/2019 21:10,,0,77,"

Suppose I have a REST endpoint. The UI sends some parameters to this REST endpoint which are required for a stored procedure to run properly. Since this stored procedure is going to take a long time to run, I am planning to use JMS and put this step on a JMS queue.

+ +

Now, I am not very clear about what to put in the JMS queue here. In the hello-world JMS examples I have seen, the Sender sends a small message and the Receiver consumes that message.

+ +

But in the case of a stored procedure call, I am wondering what exactly should be forwarded to ApacheMQ? If I understood correctly, the Sender in this case would send the call to the stored procedure to the JMS queue in the form of a String, and then the Receiver would grab this string message (which is a call to the stored procedure) from the queue immediately (provided I have only one item in the queue) and start processing it? By receiver processing, I mean the receiver will communicate with the database and run the query. Is this basically how it works? Thanks

+",198234,,,,,2/11/2019 21:21,Understanding the flow of sending stored procedure to JMS Queue,,1,0,,,,CC BY-SA 4.0,,,,, +387027,1,387031,,2/11/2019 23:54,,6,241,"

I am currently attempting to create a gravitational n-body simulation using a modified Barnes-Hut algorithm, to be more amenable to GPU computation. This is primarily as a learning project. My goal is to simulate a number of stars comparable to that in real galaxies, meaning on the order of hundreds of billions to tens of trillions, but even a few million would be useful. It is very unlikely that I will be able to compute this at a speed amenable to display, meaning that I must pre-compute the data and look at it after the computation finishes. To do this, my first idea for how to store the data is to create a file that has the locations of all the stars concatenated together for each moment of discretized time, and then the next moment concatenated to that, to make something like the following, where the data in each bracket represents a single frame:

+ +
{x₁, y₁, z₁, x₂, y₂, z₂, …. xₙ, yₙ, zₙ}, {x₁, y₁, z₁, x₂, y₂, z₂, …. xₙ, yₙ, zₙ}, {x₁, y₁, z₁, x₂, y₂, z₂, …. xₙ, yₙ, zₙ}
+
+ +

Alternatively, this format can be described in C++ pseudo-code (portability between C++ implementations is not important):

+ +
void writeData(std::vector<Frame> frames, std::ostream &out){
+    //decoding knows how many points there are in each frame as a property of the file format, so it    can read an entire frame at a time until EOF
+    for(const Frame &frame : frames){
+        for(const Point &point : frame.points()){
+            float x = point.x();
+            float y = point.y();
+            float z = point.z();
+            out.write(reinterpret_cast<const char*>(&x), sizeof(float));
+            out.write(reinterpret_cast<const char*>(&y), sizeof(float));
+            out.write(reinterpret_cast<const char*>(&z), sizeof(float));
+        }
+    }
+}
+
+ +
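
For symmetry, reading a single frame back would presumably be the mirror image of this (again ignoring portability, and assuming Point can be constructed from three floats):

std::vector<Point> readFrame(std::istream &in, std::size_t pointsPerFrame){
    std::vector<Point> points;
    points.reserve(pointsPerFrame);
    for(std::size_t i = 0; i < pointsPerFrame; ++i){
        float x, y, z;
        in.read(reinterpret_cast<char*>(&x), sizeof(float));
        in.read(reinterpret_cast<char*>(&y), sizeof(float));
        in.read(reinterpret_cast<char*>(&z), sizeof(float));
        points.emplace_back(x, y, z);
    }
    return points;
}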

The size of the data given by this format in bytes is n*3*2*60*25 for n particles, one minute of data, 25 frames per second, and half precision floats. For one billion particles, this works out to 16 terabytes for one minute of video with one billion particles, something that I physically don’t have even close to enough hard drives to store (and I have a lot of hard drives). I also doubt that this data will compress well with standard lossless compression algorithms such as zlib, as it is fairly unstructured from a binary perspective. I also can’t think of any reasonably simple compression algorithms that would work well for this data.

+ +

Of course, I can encode a rendered frame of the particles in a 2-dimensional image and create a video with any of the many modern compression algorithms, but this sacrifices the ability to change the camera’s location while observing the data, something essential to gaining a good understanding of the three dimensional layout of the simulated particles (ie. consider that constellations appear to be completely different as Earth moves through the galaxy, over geologic time scales). Encoding multiple videos from different perspectives somewhat mitigates this, but nothing compares to the ability to control the camera’s location and angle during playback.

+ +

How can I enable camera movement with pre-computed data and a large number of particles without having thousands of dollars to drop on massive hard drives? I think that this is possible because regular, two dimensional, video is a similar problem that is now solved enough to be practically useful, and because other people have done n-body simulations (as I have seen videos of what can only be n-body simulations in news publications).

+",127478,,,,,2/18/2019 16:04,How can I reduce the amount of storage needed for a gravitational n-body simulation?,,1,1,1,,,CC BY-SA 4.0,,,,, +387033,1,,,2/12/2019 3:28,,5,298,"

I'm all on board with functional programming in Javascript - particularly within the context of using React and Redux.

+ +

Something that I've been running into again and again, is how easy it is to accidentally mutate objects and create odd bugs.

+ +

That is, while the const keyword prevents the variable from being reassigned, it doesn't prevent the object's own properties from being reassigned.

+ +

For example:

+ +

my-module.js

+ +
export const DEFAULT_HEADERS = {  //We are going to be using this const in other modules too. 
+   foo: ""foo""
+}; 
+
+export function someFunction(someCond) {
+   const headers = DEFAULT_HEADERS; 
+   if (someCond) {
+       headers.foo = ""bar""; //Don't do this!
+   }
+}; 
+
+ +

In this example - if we have imported DEFAULT_HEADERS to another module, we've mutated the value of foo, and screwed that up.

+ +

Now of course - the answer is to not reassign object properties like I have here.

+ +

But I don't have a way to prevent people from mutating objects like this.

+ +

Where this gets more important is when I have a should-be-immutable object that has a nested structure.

+ +

To do it correctly, we need to spread the object at each level, as the ...{} spread syntax only clones shallowly.

+ +

For example:

+ + + +
const props = {
+  alpha: {
+    innerAlpha: {
+      a: ""aaa"",
+      b: ""bbb"",
+    },
+
+    innerAlphete: {
+      b: ""bbb"",
+      c: ""ccc"",
+    }
+  },
+
+  beta: {
+    foo: ""foo"",
+    bar: ""bar"",
+  }
+}
+
+//Now we want to change the value of props.alpha.innerAlpha.b to ""BBB"", but without mutating the original object. 
+
+const props2 = { ...props,
+  ...{
+    alpha: {
+      ...props.alpha,
+      ...{
+        innerAlpha: {
+          ...props.alpha.innerAlpha,
+          ...{
+            b: ""BBB""
+          }
+        }
+      }
+    }
+  }
+};
+
+
+console.log(props);
+console.log(props2);
+
+ + + +

And it's quite a pain to write code that way, so it's easy to see that someone would do something like:

+ +
   const props2 = {...props}; 
+   props.alpha.innerAlpha.b = ""BBB""; 
+
+ +

So my main question is - are there any proposals to make truly immutable objects in Javascript?

+ +

And also - is there a more convenient way to do immutable programming in Javascript?
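
+ +

The closest built-in I'm aware of is Object.freeze, which is shallow; the recursive version I've been experimenting with looks roughly like this (just a sketch):

+ +

const deepFreeze = (obj) => {
+  Object.getOwnPropertyNames(obj).forEach((name) => {
+    const value = obj[name];
+    if (value && typeof value === ""object"") {
+      deepFreeze(value);
+    }
+  });
+  return Object.freeze(obj);
+};
+
+const frozen = deepFreeze({ alpha: { innerAlpha: { b: ""bbb"" } } });
+frozen.alpha.innerAlpha.b = ""BBB""; // silently ignored, or a TypeError in strict mode
+

+ +

But that only fails at runtime, which is part of why I'm asking whether anything better is on the horizon.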

+",109776,,,,,5/19/2020 10:01,What is the state of immutability in Javascript in 2019?,,1,4,,,,CC BY-SA 4.0,,,,, +387034,1,,,2/12/2019 3:36,,3,328,"

I am a backend developer and was having this argument yesterday with a frontend dev on my team about whether or not I should let him fetch the displayable text message about the result of an operation from the backend.

+ +

My argument was that since it is a presentation layer decision, it should remain with the UI. They should switch on the response codes agreed upon between the backend and frontend and show the text messages accordingly. That way, the backend can also stay unaware of the interaction between the UI and the user.
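
+ +

To make the argument concrete, the contract I have in mind looks roughly like this (the code, message and showMessage function below are made up for illustration):

+ +

// backend response: machine-readable code only, no display text
+{ ""status"": 422, ""errorCode"": ""ORDER_LIMIT_EXCEEDED"" }
+
+// frontend: maps the agreed codes to presentation-layer text
+const messages = { ORDER_LIMIT_EXCEEDED: ""You have reached your order limit for today."" };
+showMessage(messages[response.errorCode]); // showMessage is whatever the UI uses
+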

+ +

Actually, since my company is extremely backend-heavy, the practice here is to keep the UI extremely lightweight.

+ +

But I want to take suggestions from the community here to understand whether it is a good practice to retrieve the display messages from the backend or not.

+ +

In my small career as an Android fullstack developer (before this company), I always believed that the interaction between UI and backend should be strictly about the data and not the presentation.

+",328417,,,,,2/19/2019 23:24,Where should the display messages on the UI be stored?,,1,6,,,,CC BY-SA 4.0,,,,, +387038,1,,,2/12/2019 5:41,,-1,4370,"

I am looking for an approach / algorithm for using OCR (like Tesseract) to extract only bold text from an image. The Python code I wrote can already identify small letters and numbers, but it cannot distinguish between bold and non-bold text.

+ +

Does someone have an idea, for example, for some preprocessing or postprocessing of the image to make this work? I am not looking for coding or implementation help, only for an algorithmic idea. As another tool, I could use OpenCV.

+ +

For illustration purposes, this is my current code:

+ +
import cv2
+import sys
+import numpy as np
+from PIL import Image
+import pytesseract
+
+if __name__ == '__main__':
+
+  if len(sys.argv) < 2:
+    print('Usage: test.py image.jpg')
+    sys.exit(1)
+
+  # Read image path from command line
+  imPath = sys.argv[1]
+
+
+  # Define config parameters.
+  # '-l sin'  for using the Sinhala language
+  # '--oem 1' for using LSTM OCR Engine
+  config = ('-l sin --oem 1 --psm 3')
+
+  # Read image from disk
+
+  im = cv2.imread(imPath, cv2.IMREAD_COLOR)
+
+  #im = cv2.imread(imPath)
+  #im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
+
+
+
+
+  # Run tesseract OCR on image
+  text = pytesseract.image_to_string(im, config=config)
+  file = open(""testfile.txt"",""w"") 
+  file.write(text)
+  file.close()
+  #Print recognized text
+  #print(text)
+
+",328424,,9113,,2/12/2019 7:01,2/12/2019 21:01,How do I extract only bold text from this image?,,2,1,2,43508.58958,,CC BY-SA 4.0,,,,, +387044,1,,,2/12/2019 9:11,,1,84,"

Consider a token issuer/validator. Every token is PKI signed.

+ +

Is it more secure to use a new key-pair (performance can be ignored for now) per request, or does it make no difference to have only one key-pair for all?

+ +

It's a theoretical question; an implementation could, for example, generate a key-pair with an expiration/usage time, etc.

+ +

The question looks obvious, but I, as a newbie in security, have concerns about it.

+",328194,,,,,3/14/2019 14:01,Is it more secure to use more PKI keys?,,1,1,2,,,CC BY-SA 4.0,,,,, +387053,1,387071,,2/12/2019 12:58,,5,1556,"

I'm just learning about clean architecture and I'm trying to design a proof of concept for an application I want to build soon.

+ +

In the Clean Architecture the Presentation layer and the Domain Model layer are separated by the Application layer. I understand how this makes sense for stateless applications, like a web-based MVC application or a ""record based"" desktop application where most operations are CRUD. Also, my understanding is that the Presentation layer should not directly use the domain objects, but should have its own ViewModels that are mapped to/from models from the application layer. Is this assumption correct?

+ +

The application I plan to develop will allow the user to input chess games one move at a time. It seems to me that the rules for chess should be in the Domain Model layer. If this is the case, how does the Presentation layer validate each user input (to verify each move's legality)? Does it need to go through the Application layer every time, or does it make more sense to let the Presentation layer somehow manipulate the Domain objects directly to build the model according to the user input and then send it to the Application layer when it's time to save it to the database?

+ +

I tried to find resources online that would talk about this, but it seems all the examples/courses/tutorials I found talk about a web application, or at least a stateless application of the CRUD type where the business rules are applied once before saving the data or after loading it. In my application the chess rules need to be applied every time the user edits the ViewModel, to give immediate feedback.

+ +

(As I wrote this, the rubber duck effect kicked in and I now think that maybe I should always go through the application layer. I would still like to know what more experienced people think)

+",3887,,131624,,2/12/2019 19:11,2/13/2019 19:32,How to adapt Clean Architecture to a rich desktop application,,4,1,4,,,CC BY-SA 4.0,,,,, +387060,1,,,2/12/2019 14:37,,2,669,"

What I'm looking for is a pattern for the client triggering a server process, then the server process signalling the client when its finished.

+ +

preferably with examples available in .NET or dotnetcore

+ +

My application is accessed through a REST api. I have a full-featured windows client (WPF) and a smaller web client.
+Sometimes a server process is requested that takes too long for C# await HttpClient.GetAsync(REST Url) - the server process continues, but the client times out and throws an exception.

+ +

So my temporary fix is to catch the exception, and inform the client ""process taking longer than expected, results may be available if you refresh in a few minutes"" - but this is far from satisfactory.

+ +

I don't want to go so far as creating a microservice with messaging and queues.

+ +

And I'd rather not have the client polling the server to check for updates.

+ +

Ideally the solution could then also be used to refresh the client with percentage-complete or expected-time-left data.

+",328476,,,,,2/14/2019 16:37,Long running server process. How to update client,,1,5,,,,CC BY-SA 4.0,,,,, +387064,1,,,2/12/2019 15:47,,0,238,"

Let's say there exists some service at some organization that exposes information on a company's assets, from employees and company-issued devices (laptops and the like) to the large Xerox printers on each floor and the large servers in the many different server rooms. Each of these objects (laptops, enterprise printers, servers) has its own set of attributes.

+ +

ex. /api/v1/assets

+ +

This service is standing in front of many different asset management databases. You basically send this large JSON object to ask for what you want, whether it be information on users personal laptops or information on servers.

+ +

A request may look something like this:

+ +
{ ""asset_type"" : ""laptop"", ""attributes"" : [""assignee"", ""os"", ""physical_address"", ""manufacturer""]}
+
+ +

A response will look something like this:

+ +
[{""assignee"" : ""238947"", ""os"":""Win7Prem"", ""physical_address"" : ""3C:BF:12:90:0A:X2"", ""manufacturer"":""Dell""}]
+
+ +

And just imagine that each of these objects had 20-30+ attributes, and with each request you could pass a filterList that allows you to filter the responses based on the values of one or more attributes. For example, pulling all laptops where Manufacturer=""Dell"".

+ +

How would you design an API wrapper for this to be used in another application?

+ +

Would you just keep these pre-built queries in a file on the server and grab them when you need to? Maybe a separate server for queries, and then just make the API calls?

+ +

OR...

+ +

Would you write an AssetsAPI class and create methods? How would you organize your calls? Keep track of queries and attributes? Would you create classes for each of the asset types?

+ +

Let me mention that the asset data is fetched by the app, goes through some enrichment process, and is then served by the calling API as another API response.

+",314221,,278015,,4/28/2019 10:00,10/20/2020 11:02,"How to design a wrapper for a large, multi-response API?",,2,0,,,,CC BY-SA 4.0,,,,, +387065,1,387255,,2/12/2019 16:03,,0,165,"

I want to to follow a clean architecture rules (with domain and architecture layers). I have a problem with properties that an entity should or should not have.

+ +

Let's say that I have a User domain. It has all the business related properties (like first name, last name, email, mobile phone number, etc.). But there are also some properties that are specific to each user, yet have only an indirect business relation. For example, the business may want me to send an email that is customized based on the OS the user uses to interact with our service. Can I make the ""OS version"" a property of the User entity? Or should I create an application service that will retrieve the OS version based on a provided User entity?

+ +

Update: I have a supplementary use case: what if we are using an external provider who assigns its own unique ID to each user? For example Stripe - in order to update a user's credit card info I need to provide Stripe with the unique ID which I got when the user registered his card. Does the stripe_id become a part of the User entity?

+",305310,,305310,,2/13/2019 7:04,2/15/2019 17:23,"Can an entity include ""technical"" (not business related) information?",,3,6,1,,,CC BY-SA 4.0,,,,, +387072,1,,,2/12/2019 18:20,,4,1946,"

I have a Controller in ASP.NET Core MVC. I'm trying to trim down the dependency-injected services in the constructor so I can start building unit tests more easily. However, I have some services being injected that are only used in one or two controller actions. For example, I inject ILocationService because in a couple of my actions I need to look up a country Id number and get an ISO Alpha-2 country code using a database (e.g. mapping ID number 1 to ""CA"", mapping 2 to ""US"", etc.)

+ +

ASP.NET Core supports the [FromServices] attribute, so I have the option to inject ILocationService directly into two of my actions instead of injecting it in the controller constructor. The advantage of this is that I don't always need to mock/inject ILocationService into my controller from every unit test, and it's clearer when writing unit tests which services each function depends on.
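
+ +

For concreteness, this is roughly what the two options look like (the controller, action and GetIsoAlpha2 method here are made up for illustration):

+ +

public class OrdersController : Controller
+{
+    // Option A: constructor injection - every test must supply ILocationService,
+    // even tests for actions that never use it
+    // public OrdersController(ILocationService locationService) { ... }
+
+    // Option B: action injection - only the actions that need it declare it
+    public IActionResult CountryCode(int countryId, [FromServices] ILocationService locationService)
+    {
+        var isoCode = locationService.GetIsoAlpha2(countryId); // e.g. 1 -> ""CA"", 2 -> ""US""
+        return Ok(isoCode);
+    }
+}
+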

+ +

The obvious disadvantage is that now it's not completely clear what services my controller depends on, since they are not all grouped in the constructor.

+ +

Is using [FromServices] bad practice or a strong indication of code smell?

+",303638,,,,,2/12/2019 18:52,ASP.NET Core - Is using [FromServices] attribute bad practice?,,1,0,1,,,CC BY-SA 4.0,,,,, +387076,1,,,2/12/2019 19:10,,0,208,"

A few months back, I was working on designing a client API (FooManager) for adding/removing/fetching a list of objects (Bar).

+ +

The requirements were simple-
+1. Fetch operation is CPU-intensive and should be asynchronous.
+2. API should be clear about when what data is available.

+ +

The options I thought of were-

+ +

Option #1 (using LiveData)-

+ +
class FooManager {
+  private LiveData<List<Bar>> barList;
+
+  public void initialize() {
+    barList = new MutableLiveData();
+    // do something
+  }
+
+  public LiveData<List<Bar>> fetchBars() {
+    return barList;    
+  }
+
+  public boolean add(Bar bar) {
+    // barList isn't guaranteed to be initialized by this point
+    barList.getValue().add(bar);
+    return true;  
+  }
+
+  public boolean remove(Bar bar) {
+    // barList isn't guaranteed to be initialized by this point
+    barList.getValue().remove(bar);
+    return true;
+  }
+}
+
+ +

Pros: UI can directly observe on these elements, fairly less verbose, lifecycle aware.
+Cons: No distinction between when what data is available for the client. Default state is confusing, when the Bar list hasn't been initialized yet.

+ +

Option #2 (using Futures and chaining Futures)-

+ +
class FooManager {
+  private ListenableFuture<List<Bar>> barFuture;
+
+  public void initialize() {
+    // do something
+    barFuture = SettableFuture.create();
+  }
+
+  public ListenableFuture<List<Bar>> fetchBars() {
+    // Doesn't seem right to return a chained future when add/remove operation is also in progress with this transformed future?
+    return barFuture;    
+  }
+
+  public ListenableFuture<Boolean> add(Bar bar) {
+    barFuture = Futures.transform(barFuture, list -> { list.add(bar); //... });
+  }
+
+  public ListenableFuture<Boolean> remove(Bar bar) {
+    barFuture = Futures.transform(barFuture, list -> { list.remove(bar); //... });
+  }
+}
+
+ +

Pros: Less verbose, no global initialized state.
+Cons: Chaining Futures seems unnecessary/overhead for add/remove operations.
+Not sure if this is the right use case for a Future. Any ways to improve this option?

+ +

Option #3 (Using global initialized state with a listener)-

+ +
class FooManager {
+  interface InitializationCompleteListener {
+    void onInitializationComplete();
+  }
+
+  private InitializationCompleteListener listener;
+  private boolean initialized;
+  private List<Bar> barList;
+
+  public void initialize() {
+    barList = new ArrayList<>();
+    // do something, barList.add(bar); ...
+    initialized = true;
+    if (listener != null) {
+      listener.onInitializationComplete();
+    }
+  }
+
+  public boolean isInitialized() {
+    return initialized;
+  }
+
+  public registerInitializationCompleteListener(InitializationCompleteListener listener) {
+    this.listener = listener;
+  }
+
+  // Only allowed to be called after initialization.
+  public List<Bar> fetchBars() {
+    return barList;    
+  }
+
+  public boolean add(Bar bar) {
+    barList.add(bar)
+    return true;
+  }
+
+  public boolean remove(Bar bar) {
+    barList.remove(bar)
+    return true;
+  }
+}
+
+ +

Pros: Very clear for clients as to when the data will be available.
+Cons: Too verbose, clients need to hold-off until the global initialization state is set.

+ +

This is probably a basic question but I'm looking for the best practices in the industry. +Given the pros and cons, which of these approaches make the most sense, or if there's a separate pattern we could have followed here?

+ +

Thanks!

+",136961,,136961,,2/12/2019 20:08,2/12/2019 20:08,Best practice for Asynchronous CRUD operations in Android/Java,,0,2,,,,CC BY-SA 4.0,,,,, +387077,1,387146,,2/12/2019 19:33,,1,408,"

One of the architectural challenges we are facing on a project is ensuring data consistency over our microservice domains. We have two rules that we are trying hard to enforce: 1. Services cannot directly communicate with one another (primarily to reduce latency and prevent deadlocks) and 2. Each service only has direct access to its own database. The challenge is that there's a lot of data that we need from service to service.

+ +

For example, Users are associated with Customers, which live in the Customer Domain. However, our Jobs domain service needs to know what customers a user has access to. Ensuring that an update to the Customer Association in the Customer Domain flows into the Jobs domain is a key need.

+ +

Our current design has these updates flowing on a message queue. Basically, when Customer Domain updates a Customer Association, it drops a message on the queue and anything that cares about that change can read off that queue and update its database where relevant. This feels like a lot of stuff to maintain, though, as each domain now has to have code to listen to the MQ and process data where appropriate (and also code to push messages into the MQ).

+ +

An earlier design provided by a contractor included ""Read Only Copies"" of each relevant domain's database (so Jobs would have a readonly copy of the Customer Domain database), but because we're on MS SQL Server, we could not figure out a good way to create readonly slaves for those services that would be updated as the master was updated.

+ +

Are we missing something obvious here?

+",127766,,,,,2/13/2019 21:19,Data Replication Across Microservice Domains,,2,2,1,,,CC BY-SA 4.0,,,,, +387079,1,,,2/12/2019 20:19,,2,64,"

I currently capture well structured forecast data in a SQL Server. An example of this data is below:

+ +

+ +

What this tells me is that each day, I receive a forecast for the next three days. Note that in the real world, I may receive a forecast each hour (DateOfForecast) for the next 48 hours ('DateInQuestion') at 15 minute granularity.

+ +

How do I use this data? There are several use-cases, all with subtle but important differences. I'll go through them below.

+ +

Best (Latest) Data

+ +

If I want best data, this is the most recent forecast for each 'DateInQuestion'. So my best data for January would look like:

+ +

+ +

Note that in the real world, the DateOfForecast may not be regular, and there may be more than one per day.
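
+ +

In SQL terms this use case is roughly the following (assuming a table Forecast(DatasetId, DateOfForecast, DateInQuestion, Value) - the table and column names are only for illustration, and @DatasetId is a parameter):

+ +

SELECT f.DateInQuestion, f.Value
+FROM Forecast f
+JOIN (
+    SELECT DateInQuestion, MAX(DateOfForecast) AS LatestForecast
+    FROM Forecast
+    WHERE DatasetId = @DatasetId
+    GROUP BY DateInQuestion
+) latest
+    ON latest.DateInQuestion = f.DateInQuestion
+   AND latest.LatestForecast = f.DateOfForecast
+WHERE f.DatasetId = @DatasetId
+  AND f.DateInQuestion >= '2019-01-01' AND f.DateInQuestion < '2019-02-01';
+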

+ +

At A Point in Time

+ +

This is where I want to see what the data looked like at a specific point in time. In the example below, I'm showing what the data looked like on the 2nd of January.

+ +

+ +

Specific Forecast

+ +

I will skip the screenshot here - this would show the forecast created, for example, on the 2nd of January and would show all records where the 'DateOfForecast' is 2nd January.

+ +

Rolling Window

+ +

In this case, I present 'what was the best data available 2 or more days before each 'DateInQuestion', giving:

+ +

+ +

That covers the majority of scenarios. Some volumes:

+ +
  • If we think of the example forecast above as one data set (let's say it's temperature forecasts for London), I have several hundred thousand data sets. Each data set has a unique (artificial) key
  • The description (meta) for each data set is stored in a separate system and need not be repeated in any replacement.
  • The frequency of publication (number of forecasts per day) can be between 1 and approximately 48.
  • I may have 10 years or more history for any given data set, none of which may be archived
  • The horizon (how far out being forecasted for) can vary between one hour and one year.
  • The granularity (length of period in DateInQuestion) can be anything from 1 minute to one month. It is, however, consistent within a dataset.
  • In my current RDBMS database, I have around 2TB of data.
  • Horizon of data retrieval by query, for any example above, can be from one period (DateInQuestion) to several years.
+ +

Problems with the current system:

+ +

  • Slowing down due to vertical scaling constraints
  • Several of the query use cases I have presented lead to rather inefficient query executions

+ +

So, leading on to my question! I have been looking into NoSQL databases generally (i.e. not focusing on a single implementation). However, most case studies I have come across follow a more '[date,value]' structure and do not address the query use cases I have mentioned.

+ +
  1. Does this data lend itself well to NoSQL?
  2. Can you suggest an object design for NoSQL?
+ +

Please comment if any of the above is unclear and I will try to clarify.

+ +

Update 1: +Any future database can be either 'on-prem', AWS or Azure. +It must be usable from a variety of technologies such as .NET, Excel and Python. Obviously I can provide an API layer to address this if needed.

+",226255,,278015,,4/28/2019 9:12,4/28/2019 9:35,Alternatives to RDBMS For Forecast Data,,1,3,,,,CC BY-SA 4.0,,,,, +387080,1,,,2/12/2019 20:53,,0,585,"

I'm creating a survey app and I don't know how to design the database. I need surveys with multiple questions and multiple types of questions. There are 3 ways that I could think of:

+ +

a) Create a database table for surveys and one for every type of question, and connect them with a Survey_id column (a rough sketch of what I mean is below).
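
+ +

Roughly something like this (table and column names are only illustrative):

+ +

surveys(id, heading, date)
+multiple_choice_questions(id, survey_id, question_text)
+multiple_choice_options(id, question_id, label)
+text_questions(id, survey_id, question_text)
+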

+ +

b) Create a database table for questions and one for surveys, and have the questions stored in a JSON-like text field. Those text fields could look like this:

+ +
{
+    question: 'Who is the best student?'
+    type: 'multiple-choices',
+    choices: [
+         'Bob',
+         'Alice',
+         'Alex',
+    ],
+}
+
+ +

c) The last option is to create a database table just for surveys and store them in a JSON-like format, like this:

+ +
{
+    heading: 'My survey'
+    date: '2019-01-01',
+    questions: [
+         {...},
+         {...},
+         {...},
+    ],
+}
+
+ +

Thanks for any suggestions.

+",328517,,,,,2/12/2019 23:07,Database design for a survey app,,1,0,,43536.48403,,CC BY-SA 4.0,,,,, +387081,1,,,2/12/2019 20:54,,0,252,"

As far as I have seen, async/await, callbacks and promises are used, and can only be used, to achieve asynchronous programming. Correct?

+ +

So my questions are:

+ +

1) Is it correct that the former three are used for asynchronous programming only?

+ +

2) If yes, then which of them is the best and why?

+ +

3) If no, then how do they differ?

+ +

I was trying the mssql module for Node.js and I tried different ways:

+ +

Using Async/Await:

+ +
app.get('/app/users', async (req, res) => {
+    try {
+
+            const config=
+            {
+                server: 'localhost',
+                database: 'HimHer',
+                user: 'sa',
+                password: 'Jessejames01',
+                port: 1433
+            }
+
+            let pool = await sql.connect(config)
+            let result1 = await pool.request().query('select top 1 * from dbo.Users');
+
+            console.dir(result1);
+            res.send(JSON.stringify(result1));
+
+
+        } catch (err) {
+            // ... error checks
+        }
+})
+
+ +

Using Promises:

+ +
sql.connect(config).then(pool => {
+    // Query
+
+    return pool.request().query('select * from dbo.Users')
+}).then(result => {
+    console.dir(result);
+    res.send(JSON.stringify(result));
+
+}).catch(err => {
+    console.log('Exception:+ '+err);
+    sql.close(); 
+})
+
+ +

Using Callbacks:

+ +
 mssql.connect(configuration, error => 
+            {
+                new mssql.Request().query('Select * from Users', (err, dataset) => 
+                {   
+                      if(err)
+                      {
+                          console.log(err);
+                          res.send(err);  
+                          return;
+                      }
+                      else
+                      {
+                          console.dir(dataset);  
+                          res.send(JSON.stringify(dataset));  
+                          return;           
+                      }
+
+
+             });
+        });
+
+        mssql.close();
+
+ +

Since all of them achieve the same thing.

+",327985,,327985,,2/12/2019 21:00,2/13/2019 11:26,"If callback function, promises and async/await patterns all can be used to achieve asynchronous behaviour then why don't we stick to one?",,3,5,,,,CC BY-SA 4.0,,,,, +387083,1,,,2/12/2019 21:07,,2,793,"

I've been doing some research on multi-tenant systems and micro-services and I'm a bit confused by this.

+ +

If each micro-service has its own database, and implements support for multi-tenancy in its API - what service is responsible for telling the micro-service which account Ids map to which customers?

+ +

Is there a single source of truth for the account Ids? How is this information propagated out to all the various micro-services if a new customer is added?

+ +

Initially I thought that perhaps the micro-services don't need to be aware of this at all, they just accept an account Id and store the data without caring what customer it came from. But how do you avoid the issue of duplicate/wrong/updated account Ids being sent to these services? Doesn't this seem like a big security and regulatory risk?

+ +

This isn't a problem in a single database design, because Account Id is constrained in all tables that reference it and is easier to manage.

+",74674,,,,,2/12/2019 23:35,"In a multi-tenant micro-service architecture, who stores the account IDs for each tenant?",,1,2,,,,CC BY-SA 4.0,,,,, +387086,1,,,2/12/2019 23:07,,0,316,"

Say you have a bunch of microservices each accessing dedicated databases. Due to business logic reasons, data from some DBs need to be replicated to other DBs (e.g. to serve as local cache).

+ +

One way to handle replication is to stream changes made to DB 1 (e.g. using Debezium) into a queue (e.g. Kafka). A queue consumer then applies those changes to DB 2.

+ +

Problem is: what if queue fails temporarily? Or if DB 2 needs to be recreated from scratch due to a catastrophic failure?

+ +

The only solution I can think of is to make periodic snapshots of DB 1 and publish these snapshots in the same queue. DB 2 can then be rebuilt from that snapshot plus the events that happened after it. Does that make sense?

+",179167,,,,,2/13/2019 6:01,How to synchronize databases using event streams?,,1,0,,,,CC BY-SA 4.0,,,,, +387094,1,,,2/13/2019 2:34,,8,233,"

Often in my own personal Python libraries, I do something like this:

+ +
class MyClass:
+
+    # ...
+
+    def plot(self):
+        import someGraphicsLibrary as graphicslib
+        graphicslib.plot(self.data)
+
+ +

The reason is that initialising someGraphicsLibrary takes some time, up to a few seconds for one of the libraries I use. I don't always need to plot my results when I use this class, so it makes sense not to import it until the time it's actually used, if at all.

+ +

This seems to work fine, but I don't think I've seen it in anyone else's code. So my question is simply whether this is considered a good practice. Are there any hidden pitfalls to be expected when doing things this way?

+",91273,,,,,2/13/2019 4:47,Importing Python modules at the time of use,,1,4,,,,CC BY-SA 4.0,,,,, +387103,1,,,2/13/2019 6:27,,0,105,"

I am using a bool flag to decide whether to call a base class function, to imitate a virtual constructor. I am doing it this way for inheritance purposes. I have a Base class constructor taking an int and a bool. The bool part is there to check whether to call fun() or not. By default I make the base class flag true.

+

Then for the Derived class I make it false, so that the base class's fun() does not get called but the derived class's fun() gets called.

+
#include <iostream>
+
+class A{
+public:
+  A(int valx, bool flag=true):x(valx){
+    // initialize x with proper setting value. Called by both Base and Derived.
+    init();
+    if(flag){
+      fun();
+    }
+  }
+  // base way of setting value_set_based_on_obj_type.
+  void fun(){
+   std::cout << "base fun\n";
+   value_set_based_on_obj_type = x + 2;
+  }
+protected:
+  void init(){
+   std::cout << "init" << std::endl;
+   x = x*2;
+  }
+int x;
+int value_set_based_on_obj_type;
+};
+
+class B : public A{
+public:
+  B(int val):A(val, false){
+   fun();
+  }
+  // derive way of setting value_set_based_on_obj_type.
+  void fun(){
+   std::cout << "derived fun\n";
+   value_set_based_on_obj_type = x/3;
+  }
+};
+
+int main(){
+  B a(1);
+}
+
+

Output :

+
+

init

+

derived fun

+
+

Is this approach a proper way to imitate a virtual constructor (because virtual constructors are not possible in C++)? Are there other ways which are more elegant than this?

+

Also, I read somewhere that you should eliminate boolean arguments where possible.

+",212685,,-1,,6/16/2020 10:01,2/13/2019 7:23,Base class with bool flag to imitate virtual constructor,,0,8,,,,CC BY-SA 4.0,,,,, +387113,1,387119,,2/13/2019 11:36,,0,634,"

Umbrella activities are defined as ""the non SDLC activities that span across the entire software development life cycle"".

+ +

Considering this definition, can we say that project planning is an umbrella activity, as the plan continuously changes throughout the process? Are there any umbrella activities in software project management?

+",328569,,4,,2/13/2019 13:26,2/14/2019 4:33,Is project planning considered an 'umbrella activity'?,,3,2,,,,CC BY-SA 4.0,,,,, +387114,1,,,2/13/2019 11:58,,0,282,"

I have to represent a certain data structure in which I have nodes that are related to each other. Each node has a numeric id, a list containing the next directly related nodes, and another list containing the previous directly related ones.

+ +

So my code now would be:

+ +
class Node {
+    int id;
+    ArrayList<Node> previousNodes;
+    ArrayList<Node> nextNodes;
+}
+
+ +

Now the question is: taking into account that this data structure may be very big, does the use of lists inside each node significantly affect performance?

+ +

Each list is supposed to contain references to the related nodes. Would it be a significant improvement if, instead of lists, I stored an array containing the ids of the related nodes?

+ +

So, in summary, the question is:

+ +

Is there any significant improvement in performance between my code and the following code for a big data structure?:

+ +
class Node {
+    int id;
+    int[] previousNodes;
+    int[] nextNodes;
+}
+
+ +

Note: The data structure would be represented as an array of nodes, where each cell of the array contains a node whose id corresponds to the index of the cell, so getting a node by its id would be of O(1)

+",266643,,266643,,2/13/2019 12:01,2/13/2019 16:15,Efficiency in Java: Object reference vs id reference,,1,3,,,,CC BY-SA 4.0,,,,, +387117,1,387124,,2/13/2019 13:35,,-2,80,"

I design and work on a lot of projects involving REST APIs. But one question always occurs to me: is this an acceptable way to do REST?

+ +

So according to REST manuals online, REST is built upon two major HTTP concepts: HTTP verbs and HTTP status codes (there are other things, but I am focusing on these two concepts).

+ +

So if I need to make User CRUD APIs, then I would make them like this:

+ +
Add -> POST /api/user
+GET ALL -> GET /api/user
+GET ONE -> GET /api/user/:id
+UPDATE ONE -> PUT /api/user/:id
+DELETE ONE -> DELETE /api/user/:id
+
+ +

Responses would have HTTP codes like this:

+ +
Success : 200
+Wrong Input Data Format: 400 
+Authentication Error: 401
+Authorization Error: 403
+Validation Error: 422
+Internal Server Error: 500
+
+ +

But these concepts are not followed during API design in any of my projects.

+ +

So instead of the above, concepts like the ones below are used.

+ +
Add -> POST /api/add_user
+GET ALL -> GET /api/getusers
+GET ONE -> GET /api/getuser?user_id=12345
+UPDATE ONE -> POST /api/update_user
+DELETE ONE -> POST /api/delete_user
+
+ +

With responses not based upon HTTP status codes at all, but given HTTP code 200 in all cases with a custom code and message, something like this:

+ +
Success : HTTP 200
+{
+    status: true,
+    errorCode: 0,
+    message: """",
+    data: {} 
+}  
+
+Failure : HTTP 200
+{
+    status: false,
+    errorCode: 1233,
+    message: """",
+    data: {} 
+}  
+
+ +

My question is whether the bottom approach is acceptable according to minimal REST API design principles, and also whether the first approach is overkill.

+",121671,,,,,2/13/2019 15:15,REST API acceptable design flexibility,,2,0,1,,,CC BY-SA 4.0,,,,, +387133,1,,,2/13/2019 18:05,,1,373,"

Is it considered to be a good practice to convert all types of exceptions (exceptions from internal logic of application + exceptions from application's external dependencies - for example: File System) to application specific exceptions?

+ +

For example,

+ +

I am developing Job Scheduling software in Python (using a layered architecture). One of the layers in this architecture is the Persistence layer. This layer is responsible for storing/retrieving the state of a Job to/from the persistence store (the file system).

+ +

I have defined two application specific exception classes ""PersistenceReadError"" and ""PersistenceWriteError"" for exceptions raised from persistence layer APIs (read_jobs, write_jobs etc).

+ +

I am not sure if this is considered to be a good practice, i.e. is it right to even catch exceptions like FileNotFoundError, PermissionError, etc. and wrap them in PersistenceRead/PersistenceWrite exceptions? Also, how far should I go with creating exception classes vs using a limited set of exception classes (to group similar exceptions together) with error codes/messages to distinguish subtypes of exceptions? What I mean by wrapping is shown in the sketch below.
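
+ +

(Sketch only; the method body is simplified:)

+ +

class PersistenceReadError(Exception):
+    pass
+
+class JobStore:
+    def read_jobs(self, path):
+        try:
+            with open(path) as f:
+                return f.read()
+        except OSError as e:  # FileNotFoundError, PermissionError, ...
+            raise PersistenceReadError(f""could not read jobs from {path}"") from e
+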

+",,user317612,,user317612,2/14/2019 4:29,3/1/2019 19:04,Best practices for handling application specific exceptions?,,1,2,,,,CC BY-SA 4.0,,,,, +387134,1,387140,,2/13/2019 18:02,,1,547,"

I'm looking for perspectives from other web/ui developers.

+ +

I'm a UI developer and my company requires that I use components built by another team within the company.

That team has developed some patterns over time that I would consider bad practice, but I thought I'd see what others think before I raise any awareness to the other team.

So, for example, this other team has created a drawer that slides into view when a button is pressed, and goes back out of view when it is closed. The problem I have with this is that the entire component is in the dom, but only set to hidden. So I can't see it visibly, but it is still there, all the time. +
Another component they have built is a datepicker, which when you click on the input field, it opens up the calendar with the current month displayed. When you select a day or press the close button in goes out of view. This issue with this is that when it is closed, it again is only hidden. So if you look in the dom, you'll see the elements for the calendar container, the title bar with the day names, all of the days in that month, and nested inside of each day is the span. +

These are only 2 examples of components they have made, but they follow this practice of only hiding elements with almost all of their components. +
My worry with this is that when an entire page is finished, the dom is full of unnecessary elements and could potentially bog down the performance of the app. I currently write in React and try my hardest not to render anything unless it needs to be seen. But with the other team's components it is usually not possible.

+Would anyone else consider this bad practice? Or have any suggestion that I could bring to attention? +

+Ps. There is no code because I'd be violating my security agreement

+",328610,Nate Thompson,328610,,2/14/2019 3:35,11/7/2019 4:43,Is it bad practice to leave hidden elements in the dom?,,2,8,,,,CC BY-SA 4.0,,,,, +387135,1,387178,,2/13/2019 18:38,,8,3751,"

So I am asking this after reading the following: Why shouldn't I use the repository pattern with Entity Framework?.

+ +

It seems there is a large split between people who say yea and those that say nay. What seems to be missing from some of the answers is concrete examples, whether that be code, good reasoning, or whatever.

+ +

The issue is I keep reading people responding with ""well, EF is already an abstraction"". Well, that's great, and it's probably true, but then how would you use it without the repository pattern?

+ +

For those that say otherwise: why would you say otherwise? What have you personally run into that made it necessary?

+",327284,,58415,,2/14/2019 16:40,2/14/2019 16:40,Should Entity Framework 6 not be used with repository pattern?,,2,2,8,,,CC BY-SA 4.0,,,,, +387136,1,,,2/13/2019 18:40,,1,81,"

I have recently come across the terms migration pattern and refactoring on the topic of migrating monoliths to microservices. Is there any real difference between the two terms, or can they be used interchangeably?

+",242927,,,,,3/15/2019 19:02,What is the difference between a migration pattern and a refactoring?,,1,0,,,,CC BY-SA 4.0,,,,, +387144,1,387161,,2/13/2019 21:02,,3,502,"

I'm designing a database for name statistics (how many people were given that name). The data consists of names, and numbers for how many men and women were given that name within a time period (primarily 15 year time periods, but this varies). The data is rather simple, but I still keep getting stuck on the schema.

+ +

So here are the two (very similar) options I am considering:

+ +

1) Just one large table: +(Name, CountMen, CountWomen, Timeperiod). +The time period would probably be split into start and end columns for easier querying. As for the primary key, I could either have an auto-incremented ID or just use the combination of name and start of the time period.

+ +

2) I'll have names in a separate table (where they'll be the primary keys) and the other table will contain the actual statistics (and thus look like the table in option 1). I've read that having a single-column table is not particularly bad design, but I don't know if it makes any sense here or adds any value. A rough sketch of this option is below.
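
+ +

Roughly what I mean by option 2 (column names are just illustrative):

+ +

names(name PRIMARY KEY)
+name_counts(name REFERENCES names, period_start, period_end, count_men, count_women,
+            PRIMARY KEY (name, period_start))
+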

+ +

The options I have ruled out are:

+ +

1) Having a column for each time period because then I would eventually have to update the schema. This just seems like terrible design.

+ +

2) Having separate tables for each time period. Because time periods aren't that short, I wouldn't end up with that many

+ +

So how would you all recommend I approach this? Is there an approach I have not considered? I know it's a simple thing and I should probably just stop overthinking and pick one approach. Still, I'd like a second opinion first because I'm quite new to database stuff.

+",328626,,,,,2/14/2019 3:10,How to design a multi-year statistics database?,,2,0,,,,CC BY-SA 4.0,,,,, +387148,1,387150,,2/13/2019 21:56,,1,455,"

I am somewhat new to writing tests and I want to build that habit into my workflow. So, for example, I might write a test that a user can create a blog post; however, I'm not sure how to do that effectively.

+ +

Should I have a test case to verify that every field is validated?

+ +

By that I mean I could either only write one test case like :

+ +
  • testThatUsersCanCreateBlog
+ +

or...

+ +
  • testThatUserCanCreateBlog
  • testThatUserCannotCreateBlogWithoutTitle
  • testThatUserCannotCreateBlogWithoutTags
  • testThatUserCannotCreateBlogWithoutImage
  • etc....
+ +

Now the first approach seems kind of useless, because I'm just going through the ""happy path"" so I'm not really testing anything, but the second approach feels a bit invasive, because I feel like my tests become a burden instead of an asset.

+ +

I've heard that, in theory, a well-written test should not need to be changed every time the tested item is changed, but in the second case, if I add or remove fields from the Blog model, then I am required to make the same adjustments in my tests to keep them consistent so they don't throw up errors.

+ +

With extensive forms of 20+ fields on ~50 pages this picture looks wrong somehow. Am I missing something?

+",263949,,,,,2/14/2019 2:11,Integration Testing: Should a test check every validation?,,2,0,,,,CC BY-SA 4.0,,,,, +387155,1,387171,,2/14/2019 0:10,,10,1059,"

I know that triggers can be used to validate stored data to keep the database consistent. However, why not perform validation of the data on the application side before storing it in the database?

+ +

For example, we store clients, and we want to perform some validation that cannot easily be done at the DDL level (see, for instance: +https://severalnines.com/blog/postgresql-triggers-and-stored-function-basics). A trivial example of what I mean is below.
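
+ +

A minimal sketch (the clients table and the rule itself are invented just to illustrate the kind of validation I mean):

+ +

CREATE FUNCTION check_client() RETURNS trigger AS $$
+BEGIN
+    IF NEW.birth_date > now() THEN
+        RAISE EXCEPTION 'birth_date cannot be in the future';
+    END IF;
+    RETURN NEW;
+END;
+$$ LANGUAGE plpgsql;
+
+CREATE TRIGGER clients_check
+    BEFORE INSERT OR UPDATE ON clients
+    FOR EACH ROW EXECUTE PROCEDURE check_client();
+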

+ +

Another example is audit.

+ +

Update

+ +

How triggers and database transaction work together. +For example, if I would like to perform validation of data being inserted. It is done inside of a transaction. What happens earlier: transaction is committed or trigger is executed?

+",236741,,236741,,2/14/2019 10:11,2/21/2019 7:22,"Do I really need triggers for relational database, for example, PostgreSQL?",,5,6,,,,CC BY-SA 4.0,,,,, +387160,1,,,2/13/2019 22:09,,0,92,"

Generally, we are looking to create a logging framework that can target human readable output as well as various structured data formats. So, a goal is minimizing code duplication in packaging the format functions. We created a library of functions that can be used to format log entries.

+ +

The client can then iterate a list of these as it processes each log record, in order to build the output string based on an arbitrary recipe.

+ +
for (LogFormatSteps logFormatStep : logFormatList) {    
+
+      sb = logFormatStep.getRecipeStep().f(sb, record, altContent);
+    }
+
+ +

In discussions of Lambdas, they seem to be declared each in their own class extending their functional interface. This is a lot of overhead, javadocs notwithstanding. Is it going to be a problem if the lambdas are instead collected into an enumeration, as a way of reducing the number of classes that must ultimately be defined?

+ +
import java.util.Date;
+import java.util.logging.LogRecord;
+
+@FunctionalInterface
+/** Appends stringbuilder with log information. */
+interface LogFormatLambda {
+
+  /**
+   * Adds information to log entry text.
+   *
+   * @param logFragmentParam starting log fragment
+   * @param logRecordParam which may be consulted by lambda
+   * @param altContentParam which may be converted via toString() for use by lambda.
+   */
+  public abstract StringBuilder apply(
+      StringBuilder logFragmentParam, LogRecord logRecordParam, Object altContentParam);
+}
+
+/** Formatter functions, in lambda notation, used by this formatter */
+enum LogFormatFunctions {
+
+  /** Append current date and time. */
+  DATE((s, r, a) -> s.append(new Date(r.getMillis())).append("" "")),
+
+  /** Append log level. */
+  LEVEL((s, r, a) -> s.append(r.getLevel().getLocalizedName())),
+
+  /** Append blank line. */
+  LINE((s, r, a) -> s.append(System.getProperty(""line.separator""))),
+
+  /** Append class name, or, lacking that, log name. */
+  CLASS(
+      (s, r, a) ->
+          (r.getSourceClassName() != null)
+              ? s.append(""     "").append(r.getSourceClassName())
+              : (r.getLoggerName() != null) ? s.append(""     "").append(r.getLoggerName()) : s),
+
+  /** Append name of method generating the log entry. */
+  METHOD(
+      (s, r, a) ->
+          (r.getSourceMethodName() != null)
+              ? s.append(""     "").append(r.getSourceMethodName())
+              : s),
+
+  /**
+   * Append message text for the log entry, adding an indent.
+   */
+  MESSAGE(
+      (s, r, a) ->
+          (a != null) ? s.append(""     "").append(a.toString().replace(""\n"", ""\n     "")) : s);
+
+
+  /** Lambda field. */
+  LogFormatLambda f;
+
+  /**
+   * Constructor -loads lambdas.
+   *
+   * @param functionParam lambda for this process step
+   */
+  LogFormatFunctions(LogFormatLambda functionParam) {
+    this.f = functionParam;
+  }
+
+  /**
+   * Appends log information.
+   *
+   * @param stringBuilderParam current log entry fragment
+   * @param logRecordParam log record (model) 
+   * @param altContentParam object, provides toString() info to process steps 
+   */
+  StringBuilder f(
+      StringBuilder stringBuilderParam, LogRecord logRecordParam, Object altContentParam) {
+
+    return f.apply(stringBuilderParam, logRecordParam, altContentParam);
+  }
+}
+
+ +

That said, are there other ways of organizing lambdas, e.g., via naming conventions, package organization, etc., to avoid verbosity?

+ +

I did see posts about using lambdas to process enumerations, and enumerations to control dynamic linking of lambdas as strategy objects. But so far I haven't seen one that addressed the viability of using an enumeration for purposes of reducing clutter.

+",227861,John Meyer,,,,2/14/2019 3:41,Make lambdas concise using enumerations?,,2,3,,,,CC BY-SA 4.0,,,,, +387165,1,,,2/14/2019 4:11,,4,2631,"

In my C# solution, I have a Tests project containing unit tests (xUnit) that can run on every build. So far so good.

+ +

I also want to add integration tests, which won't run on every build but can run on request. Where do you put those, and how do you separate unit tests from integration tests? If it's in the same solution with the same [Fact] attributes, it will run in the exact same way.

+ +

What's the preferred approach? A second separate test project for integration tests?

+",328246,,,,,2/14/2019 19:40,How to Differentiate Unit Tests from Integration Tests?,,5,1,1,,,CC BY-SA 4.0,,,,, +387166,1,,,2/14/2019 4:37,,1,1113,"

We are trying to move from a monolithic architecture to a microservice architecture. We thought of what would be the best way to segregate our services and started doing so one by one. Now we have a question as to how we should make dependent calls. Let me explain in detail.

+ +

Let's say we have different microservices. One of them has details about a product. Other microservices revolve around the product, so there will be a service for transactions, orders, offers, etc. All microservices communicate using gRPC.

+ +

All these services will be referencing the items microservice that has details of the items (referencing will be done via IDs). So each of the other services will only be having the ID of the Item.

+ +

Now the issue (or maybe not) is that whenever we want to see a list of transactions done by a user, we also need details of the items. Similarly, for a list of orders placed, again we need details of the items (not all the details, but some of them).

+ +

There are two options that we can think of for dealing with the issue.

+ +
  • One is to make two subsequent calls, once to the transaction or order microservice and then to the item microservice to get the partial details needed. Here we have our own gateway which is extremely efficient in terms of performance and network.
  • The other is to copy, using pub/sub, the partial data required by the transaction and order microservice into the service itself. So basically something like a new table in the order microservice and doing a join in the service to serve data, thus removing the need to make dependent calls.
+ +

So, first of all, is the segregation of the services proper?

+ +

Second, which of the two methods is the better design?

+ +

Note: we have around 10 services that would depend on the items database. +Also, on a page there are usually calls to 5-6 microservices. The good thing is that we have our own gateway which makes all calls in parallel, so there will be at most 2 sequential calls if we use the first method.

+",281750,,281750,,2/15/2019 4:22,4/28/2019 9:05,Data replication in microservices,,1,2,,,,CC BY-SA 4.0,,,,, +387167,1,387169,,2/14/2019 4:41,,0,93,"

I have been reading into possible design patterns and have found the use of singletons always referred to as an anti-pattern.

+ +

I am currently using a singleton for the sole purpose of gathering configuration details based on my current build environment from a .ini file.

+ +

My singleton only has one public static endpoint that is a variadic function that takes in keys to reach a value:

+ +
$username = Config::Get(""smpt"", ""username"");
+
+ +

Internal to my Config class I maintain a protected instance that is set up on first use and reused for further Get calls.

+ +

The only internal state of the class is the internal re-used instance, whose setup does a unit of work to find which .ini file to target, so creating a new instance every time feels unwise. Roughly, the class looks like the sketch below.
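
+ +

(Simplified sketch - the .ini parsing and environment detection here are only illustrative:)

+ +

class Config {
+    protected static $instance = null; // the re-used, lazily initialised data
+
+    public static function Get(...$keys) {
+        if (self::$instance === null) {
+            self::$instance = parse_ini_file(self::resolveIniPath(), true);
+        }
+        $value = self::$instance;
+        foreach ($keys as $key) {
+            $value = $value[$key];
+        }
+        return $value;
+    }
+
+    protected static function resolveIniPath() {
+        // picks the .ini file matching the current build environment
+        return __DIR__ . ""/config."" . getenv(""APP_ENV"") . "".ini"";
+    }
+}
+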

+ +

I have been finding this design very fast and useful, but everything I read externally seems to warn against it. Is what I'm doing considered bad? And is what I'm doing really called a singleton?

+",304557,,,,,2/14/2019 5:28,Is singleton use acceptable for static single responsibility?,,1,1,0,,,CC BY-SA 4.0,,,,, +387168,1,,,2/14/2019 5:17,,-3,191,"

I see some services which take a requestId from the client as a mandatory attribute, while some services don't. I feel it is a good idea to have a request Id from the clients because it helps when debugging through logs, i.e. seeing what happened with this request.

+ +

Can someone provide good reasoning or the best practice for whether the API should take a requestId?

+ +

Should the requestId be unique from clients?

+ +

Also, is there any benefit to storing this id in the database, or is it just for logging purposes? Currently, I don't see any benefit to storing it in the database, but I still wanted to ask.

+",287375,,,,,2/14/2019 13:34,Is it a good idea to ask for requestId in each API call?,,1,1,,,,CC BY-SA 4.0,,,,, +387172,1,387224,,2/14/2019 8:05,,4,207,"

For example, suppose my input data and UI are not in a 1 to 1 relationship: +

+ +

html:

+ +
<script>
+aChanged=function(){
+};
+
+bChanged=function(){
+};
+
+cChanged=function(){
+};
+</script>
+a:<input id=""a"" onchange=""aChanged()""/>,b:<input id=""b"" onchange=""bChanged()""/>,c:<input id=""c"" onchange=""cChanged()""/><br/>
+a+b=<span id=""a+b""></span><br/>b+c=<span id=""b+c""></span><br/>a+c=<span id=""a+c""></span>
+
+ +

in which changing ""a"" requires refreshing ""a+b"" and ""a+c"". My question is, how should I write the UI update functions?

+ +

Style 1: define more methods, but update only the necessary UI:

+ +
aChanged=function(){
+  updateAB();
+  updateAC();
+};
+
+bChanged=function(){
+  updateAB();
+  updateBC();
+};
+
+cChanged=function(){
+  updateBC();
+  updateAC();
+};
+
+updateAB=function(){
+  document.getElementById(""a+b"").innerHTML=Number(document.getElementById(""a"").value)+Number(document.getElementById(""b"").value);
+};
+
+updateBC=function(){
+  document.getElementById(""b+c"").innerHTML=Number(document.getElementById(""b"").value)+Number(document.getElementById(""c"").value);
+};
+
+updateAC=function(){
+  document.getElementById(""a+c"").innerHTML=Number(document.getElementById(""a"").value)+Number(document.getElementById(""c"").value);
+};
+
+ +

Style 2: define fewer methods, but it may update some UI unnecessarily (e.g. changing a also refreshes b+c):

+ +
aChanged=function(){
+  updateABC();
+};
+
+bChanged=function(){
+  updateABC();
+};
+
+cChanged=function(){
+  updateABC();
+};
+
+updateABC=function(){
+  document.getElementById(""a+b"").innerHTML=Number(document.getElementById(""a"").value)+Number(document.getElementById(""b"").value);
+  document.getElementById(""b+c"").innerHTML=Number(document.getElementById(""b"").value)+Number(document.getElementById(""c"").value);
+  document.getElementById(""a+c"").innerHTML=Number(document.getElementById(""a"").value)+Number(document.getElementById(""c"").value);
+};
+
+ +

Which style should I use?

+",248528,,248528,,2/20/2019 10:13,2/20/2019 10:13,"Should I define more methods to update necessary UI only, or less methods but update some UI unnecessarily?",,2,0,,,,CC BY-SA 4.0,,,,, +387173,1,,,2/14/2019 8:18,,-2,157,"

I'm using MongoDB for our web application.

+ +

Recently I've been thinking of adding Redis for caching objects, which are usually settings/configuration. But MongoDB already caches frequently accessed items.

+ +

So, do I need Redis for this use-case, and why?

+",328343,,90149,,2/14/2019 14:09,2/14/2019 17:46,Do I need a redis for caching object while MongoDB already cached frequently accessed items,,1,3,,,,CC BY-SA 4.0,,,,, +387175,1,,,2/14/2019 8:30,,1,68,"

I am working on a project which requires me to execute standard HTTP calls with session tokens. I am building a custom HTTP client, with a custom authenticator, something like this:

+ +
Client client = Client.Builder().withConfig().withAuthenticator(Authenticator);
+
+ +

and my authenticator is an interface

+ +
public interface Authenticator{
+  SessionToken getSessionToken(); // so that different authentication methods can be supported
+}
+
+ +

Now, for executing a request, I need to get a session token, which I obtain by calling authenticator.getSessionToken(). However, the authenticator has to call the backend to get the token, for which it requires a client, leading to a circular dependency. How do I solve this? One approach is to create a separate authentication client, which I would pass in the constructor of the Authenticator implementation, but the package imports would still show a circular dependency. Is there a better way to design this?

+",327425,,,,,2/14/2019 8:30,How to solve circular dependency scenario while executing http calls which require authentication?,,0,0,,,,CC BY-SA 4.0,,,,, +387177,1,387312,,2/14/2019 9:24,,-2,182,"

Considering a Python Project structure such as the following, where there are ""empty"" packages with __init__ files simply pulling code from the lib folder:

+ +
.
+├── foo
+│   ├── fred
+│   │   └── __init__.py
+│   ├── henk
+│   │   └── __init__.py
+│   ├── _lib
+│   │   ├── classes
+│   │   │   ├── corge.py
+│   │   │   ├── _stuff.py
+│   │   │   └── thud.py
+│   │   └── utils
+│   │       ├── bar.py
+│   │       └── _stuff.py
+│   └── __init__.py
+└── script.py
+
+
+ +
+ +
# foo/__init__.py:
+from foo import henk, fred
+
+ +
# foo/fred/__init__.py:
+from foo._lib.classes.corge import Corge
+
+ +
# foo/_lib/classes/corge.py:
+from foo._lib.classes._stuff import *
+class Corge():
+    pass
+
+ +
+ +
+

The reason for this is that, while a bit unorthodox looking, it seems to help with code autocompletion. Irrelevant/internal modules from a package don't show up in tooltips, like so in the Spyder IDE: +

+
+ +

For context, I didn't want _stuff or thud showing up at that level; this file structure achieved this.

+ +
+ +

It's been working so far, and so I've been wondering if there are any potential side effects since I've never seen this structure before? Could this be unfriendly to contributing developers or their tools?

+ +

Any other ways to achieve a similar goal (not cluttering the namespace for users) would be very welcome.

+",327923,,327923,,2/15/2019 7:46,2/17/2019 16:17,Packages with only __init__.py - Possible issues?,,1,0,,,,CC BY-SA 4.0,,,,, +387179,1,387232,,2/14/2019 10:00,,-1,68,"

I have created an application to import CSV files. Each CSV file that the application imports can contain different data, with new formats of CSV file being added in the future. Rather than hard-code each import, I have created a definition in JSON that describes each file. I have written a basic import class that imports each line from the files.

+ +

I would like to be able to perform small transformations / modifications to some of the data being imported. I store that transformation as

+ +
right(%data%,5)
+
+ +

Where %data% is the column of the CSV file I'm processing.

+ +

I started to write transformers and would process each column using the processor. Here is an example of the right transformer

+ +
public function transformRight($string,$numChars) {
+   return substr($string,(-1 * abs($numChars)));
+}
+
+ +

The issue I have now is that I want to perform more complex transformations - I want to be able to use multiple functions, to concatenate values, and to take the last x characters of a string.

+ +

I could write a transform method for each unique type of transformation I want to make, but that doesn't seem like good practice or very efficient. I have been looking at creating a lexer/parser, but everything I look at seems way too complex for the simple thing I want to create. I would store the transformation as

+ +
trim(concat('ABC',right(%data%,5)))
+
+ +
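
To show the scale of thing I am imagining, here is a rough sketch of such an evaluator (written in Python purely for illustration - my real code is PHP, and the function set and names here are made up):

+ +
import re
+
+# toy evaluator for expressions like trim(concat('ABC', right(%data%, 5)))
+FUNCS = {
+    'trim': lambda s: s.strip(),
+    'concat': lambda a, b: a + b,
+    'right': lambda s, n: s[-int(n):],
+}
+
+def evaluate(expr, data):
+    expr = expr.strip()
+    call = re.match(r'^(\w+)\((.*)\)$', expr)        # func(arguments)
+    if call:
+        args = [evaluate(a, data) for a in split_args(call.group(2))]
+        return FUNCS[call.group(1)](*args)
+    if expr == '%data%':                             # the current CSV column
+        return data
+    if expr.startswith('\'') and expr.endswith('\''):
+        return expr[1:-1]                            # quoted string literal
+    return expr                                      # bare value
+
+def split_args(body):
+    # split on commas that are not nested inside parentheses or quotes
+    args, depth, quoted, cur = [], 0, False, ''
+    for ch in body:
+        if ch == '\'':
+            quoted = not quoted
+        if not quoted:
+            depth += (ch == '(') - (ch == ')')
+        if ch == ',' and depth == 0 and not quoted:
+            args.append(cur)
+            cur = ''
+        else:
+            cur += ch
+    return args + [cur] if cur else args
+
+ +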

Would using a regex be a better way? Or a lexer/parser? Or something I haven't looked at?

+",73485,,73485,,2/14/2019 10:47,2/15/2019 9:06,Formula Parsing,,1,3,,,,CC BY-SA 4.0,,,,, +387180,1,,,2/14/2019 11:11,,0,166,"

This is an object-oriented design question that is specific to Spring Boot. I'm extending a Spring Boot application that has an interface that is being extended and used inside another service. The interface uses dependency injection to choose the implementation. I don't want to change that. I think the inheritance design is nice for Spring Boot, but the interface does not give me the necessary methods for my implementation. So I am thinking of the Decorator pattern, mixing in another interface. The interface to be extended is...

+ +
public interface DataStrategyService {
+
+    DataRecord getEntityByPK(DataRecord dataRecord);
+
+    DataRecord getEntityByBK(DataRecord dataRecord);
+
+    void overwriteEntityByPK(DataRecord dataRecord);
+
+    void saveEntity(DataRecord dataRecord);
+
+}
+
+ +

There is an implementation already that works already and makes sense with that interface like so...

+ +
@Service
+@ConditionalOnProperty(name=""data.strategy"", havingValue=""DataBase"")
+public class DataBaseDataStrategyService implements DataStrategyService {
+        DataRecord getEntityByPK(DataRecord dataRecord) {
+          //implementation stuff
+        }
+
+        DataRecord getEntityByBK(DataRecord dataRecord){
+          //implementation stuff
+        }
+
+        void overwriteEntityByPK(DataRecord dataRecord){
+          //implementation stuff
+        }
+
+        void saveEntity(DataRecord dataRecord){
+          //implementation stuff
+        }
+}
+
+ +

But that interface, it doesn't give me what I need for the implementation I'm doing, so Im going to mix in another Decorator like so...

+ +
@Component
+@ConditionalOnProperty(name=""data.strategy"", havingValue=""CASSANDRA"")
+public interface ColumnFamilyDataStrategyDecorator {
+    Map<String, Object> insertEntity(DataRecord dataRecord);
+    Map<String, Object> deleteEntity(DataRecord dataRecord);
+}
+
+ +

And the concrete class that would be the class that would use the decorator is...

+ +
@Service
+@ConditionalOnProperty(name=""data.strategy"", havingValue=""CASSANDRA"")
+public class ColumnFamilyDataStrategyService implements DataStrategyService, ColumnFamilyDataStrategyDecorator {
+  Map<String, Object> insertEntity(DataRecord dataRecord){
+      //implementation
+  }
+
+  Map<String, Object> deleteEntity(DataRecord dataRecord){
+      //implementation
+  }
+
+  DataRecord getEntityByPK(DataRecord dataRecord) {
+      throw new UnsupportedOperationException(""ColumnFamilyDataStrategyService.getEntityByPK() currently not supported."");
+  }
+
+  DataRecord getEntityByBK(DataRecord dataRecord){
+      throw new UnsupportedOperationException(""ColumnFamilyDataStrategyService.getEntityByBK() currently not supported."");
+  }
+
+  void overwriteEntityByPK(DataRecord dataRecord){
+      throw new UnsupportedOperationException(""ColumnFamilyDataStrategyService.overwriteEntityByPK() currently not supported."");
+  }
+
+  void saveEntity(DataRecord dataRecord){
+      throw new UnsupportedOperationException(""ColumnFamilyDataStrategyService.saveEntity() currently not supported."");
+  }
+}
+
+ +

As you can see, there are some UnsupportedOperationExceptions being thrown. It looks a little messy. However, I don't really want to use the Adapter pattern with delegation, because then I wouldn't be able to use the Spring dependency injection as the author intended. But on the other hand, the exceptions look pretty messy. Any recommendations?

+ +

Also: my intention is to use ColumnFamilyDataStrategyDecorator to add some new methods that could effectively replace the other interface's methods. But I don't want to touch the calling code if possible. I don't want to implement a design pattern if it is the wrong thing to do :). But this way I can at least leave the calling code alone, as I do not own it. The thing is, the other interface's methods are still needed for the DataBaseDataStrategyService implementation. DataBaseDataStrategyService is using DataStrategyService and will continue to do so. And the service that has a property of DataStrategyService - the calling code, which will also be calling ColumnFamilyDataStrategyService - I also don't want to touch if I don't have to. With this decorator approach I basically touch as little code as possible.

+",288674,,288674,,2/14/2019 20:32,2/14/2019 20:32,Decorator Pattern Java,,0,3,,,,CC BY-SA 4.0,,,,, +387189,1,387201,,2/14/2019 16:19,,1,977,"

I'm working on a project that involves online purchases. I have my Web API in C# and my client in React (JavaScript). Server and client are connected with SignalR. I want that, when an order is changed, all the clients that are watching the details of that order receive a notification to refresh the page.

+ +

My question is: what is the best practice to do this? Is it better to send a notification with the ID of the order to all clients? Or is it better to track on the server which page each client is watching, and then send the notification only to the clients that are watching that specific order?

+ +

In the production environment there will be a lot of clients and a lot of simultaneous orders (I hope :D), and I don't know which would be the best practice.

+",328702,,326536,,2/14/2019 22:12,2/14/2019 22:12,Best practice with SignalR communication,,1,2,1,,,CC BY-SA 4.0,,,,, +387191,1,,,2/14/2019 16:53,,0,190,"

Payload:

+ +
{
+    ""selection"": {
+        ""ids"": [1,2,3,4,5]
+    },
+    ""image"": {
+        ""backgroundColor"": ""#FFFFFF"",
+        ""headlineColor"": ""#000000"",
+        ""format"": ""PNG""
+    },
+    ""processors"": [
+        {
+            ""repository"": ""org.my.world.app.repositories.RepoX"",    // fetch data from this Repository
+            ""builder"": ""org.my.world.app.builders.ModelBuilderA""    // use this ModelBuilder with the fetched data
+        },
+        {
+            ""repository"": ""org.my.world.app.repositories.RepoX"",
+            ""builder"": ""org.my.world.app.builders.ModelBuilderB""
+        },
+        {
+            ""repository"": ""org.my.world.app.repositories.RepoY"",
+            ""builder"": ""org.my.world.app.builders.ModelBuilderA""
+        },
+        {
+            ""repository"": ""org.my.world.app.repositories.RepoY"",
+            ""builder"": ""org.my.world.app.builders.ModelBuilderB""
+        }
+    ]
+}
+
+ +

This payload generates a result based on the 5 provided ids [1,2,3,4,5]. The 5 ids are fed through each defined processor. Each processor object is a definition of how each individual id is processed. The payload is sent as a POST request to a specific REST endpoint, e.g.: http://org.my.world/generate/result

+ +

Currently in the application I try to instantiate the class from the provided class path, or check for an existing class with the given class path.

+ +
public interface Repo {
+    Model readId( int id );
+}
+
+@Inject RepoX repoX;
+@Inject RepoY repoY;
+
+public Model fetchModel(id) {
+
+    Model model = new Model();
+
+    if ( repository.equalsIgnoreCase( RepoX.class.toString() ) ) {
+        model = repoX.readId( id );
+    }
+    else if ( repository.equalsIgnoreCase( RepoY.class.toString() ) ) {
+        model = repoY.readId( id );
+    }
+
+    return model;
+}
+
+ +

As more repositories are added to the app, the bigger the if/switch statement gets. Is there a better way to do this? Please share your thoughts.

+",269162,,,,,2/14/2019 16:53,Frontend JSON payload that defines which classes to use in the backend,,0,2,,,,CC BY-SA 4.0,,,,, +387192,1,387196,,2/14/2019 16:54,,1,164,"

Currently I have something like the following

+ +
def writeThis(fileHandle, name):
+    fileHandle.write('Hello ' + name)
+
+writeThis(fh, 'Alice')
+
+ +

Something about this doesn't feel right however, it feels like I should try to minimize side effects and have something like the following

+ +
def foo(name):
+     return 'Hello ' + name
+
+fh.write(foo('Alice'))
+
+ +

Is the second option considered better practice? Does it matter?

+",279597,,,,,2/14/2019 22:45,Python - Should a function write to a file or return text to be written?,,2,1,,,,CC BY-SA 4.0,,,,, +387193,1,,,2/14/2019 16:57,,0,39,"

I’m trying to make a AMQP integration layer for my app. This layer would only consume messages from my customer’s queues and the result would be a specific action in my application. +My application is a type of CRM. It manages recipients and it can send emails to these recipients. +What I would like my customers to be able to do is the following:

+ +
    +
  1. Through my application admin panel, a user would be able to input their AMQP connection information (host, port, username, etc.)

  2. +
  3. Once they have setup a connection, they would be able to add queues to this connection.

  4. +
  5. To add a queue, they would input the queue name and choose the desired action on the application. Example of these would be “Create a contact” or “Send X email”.

  6. +
  7. The application would then attempt to connect to the queue and grab a single message.

  8. +
  9. Upon successfully grabbing a message, it would map out the message fields and the user would be able to map these against the application data field.

  10. +
  11. Then they would choose the synchronisation setting. This would be either live data or a pre-configured interval such as “Every 5/10/15/30 minutes”.

  12. +
+ +

This process would basically create a consumer for the customer’s queue that would feed my application with data. The possible callback for the consumers would be the different available ‘Actions’ the user can choose at step 3.

+ +

The necessary information would be stored in a database for my application to use whenever it needs to consume a message.

+ +

Does this sound right and / or do you see any issue with the above?

+ +

My main concern would be how to handle all the consumers this can potentially create. In the past, whenever I had to integrate a customer's queue I would make a microservice that would consume a single queue and have the appropriate callback function. That is pretty straightforward, but it would have been hard to maintain in the future as I add more queues and other customers, since it clearly goes against the DRY principle.

+ +

Would it be best to handle this as a plugin or is it plausible to handle all these consumers within a single app?

+",328707,,326536,,2/14/2019 19:33,2/14/2019 19:33,AMQP integration,,0,2,,,,CC BY-SA 4.0,,,,, +387199,1,387200,,2/14/2019 18:55,,-1,43,"

On the occasions that I've been offering SaaS, I've been asked to develop apps that are pretty much a clone of regular e-commerce webpages, but with a few features like notifications and, well, being an app in itself. The thing is that each app is localized to wherever my client operates, so if a guy asks me for an app to sell clothes within a 5 km radius, wouldn't it be viewed as useless and bad from Apple's or Google's point of view, in terms of their quality demands for apps? It would only be useful for people in that radius, and completely useless for the rest of the world.

+ +

So, do they discourage this type of app? If not, do they not care at all? Or do they dislike it?

+ +

I sense that, since apps only ever show up in search results when people purposefully search for them, neither Apple nor Google should have a problem with this kind of app, but that's just intuition.

+",328716,,58415,,2/14/2019 22:12,2/14/2019 22:12,Is a localized Mobile App considered bad and prone to removal from Apple/Google's perspective?,,1,2,,,,CC BY-SA 4.0,,,,, +387203,1,387209,,2/14/2019 19:48,,5,4225,"

Our team is in the planning stages of creating an enterprise solution for our back office. Our goal is to have one singular entry point for common tasks, such as changing an address or reprojecting a loan (we're an FI). We want several front-ends that are categorized by domain, but could pull from any number of these tasks. Does this imply a microservice architecture?

+ +

Being a relatively young and small team (three people at six years each), we have rather limited development resources to spend. However, we're very interested in implementing Uncle Bob's Clean Architecture principles across this enterprise solution.

+ +

We're in a .NET stack. How does Clean Architecture ""play"" with a microservice architecture? I see in the original diagram that the entities layer contains enterprise-wide business rules.

+ +

+ +

Does this mean that we can abstract out the above entities layer into its own microservice layer that has its own sort of Clean Architecture built in?

+ +

I'm having trouble abstracting the microservices out from the different contexts and domains that these several applications would use them, and how the .NET solution(s) could be structured. We're trying to map out an interconnected city, not just the floor plans for a house.

+ +

This question has helped in some regard; however, I'm looking for a little more detail. Hopefully a team of our size isn't biting off far more than we can chew.

+ +

Organizing Visual Studio solution for microservices?

+",307409,,,,,2/14/2019 22:14,Clean Architecture and Microservices,<.net>,1,2,3,,,CC BY-SA 4.0,,,,, +387206,1,,,2/14/2019 21:29,,2,217,"

Looking at Stack Exchange API docs, I see methods like answers/{id}/downvote answers/{id}/downvote/undo comments/{id}/delete.

+ +

Should this be considered a RESTful API? (The docs do not mention the word REST, but it at least resembles a RESTful design, and here and there it's assumed to be so.)

+ +

I'm asking because, while those methods look self-explanatory and ""natural"", I'm not sure if they comply with the REST philosophy. For one thing, they write the intended verb (the action to be done with the resource) as part of the URL. Instead, REST proponents argue (here and here and here), we should restrict ourselves to the four HTTP verbs. If you think you need ""more verbs"", then you are not enlightened yet. When you want to perform multiple actions on a resource, then you should use (they say) a single URL... and embed that ""extra data"" (what you want to do!) in the POST body... (which smells hideous to me on several levels, but anyway, I know little about REST).

+ +

Furthermore, the SE API, in my last example, even uses the verb ""delete"" in the URL, instead of using the (for once) available method DELETE...

+ +

Then, am I right in assuming that this is really not RESTful, or at least it's an ""impure"" (pragmatic and nice, I'd say) design, from the REST point of view?

+ +

If so, what would be the RESTful way? Instead of answers/{id}/downvote one should do something like POST answers/{id} and in the body place something like { action: ""downvote"" }?

+",12468,,12468,,2/14/2019 21:47,2/15/2019 17:12,"Is something like ""answers/{id}/downvote"" considered RESTful?",,3,4,1,,,CC BY-SA 4.0,,,,, +387217,1,387219,,2/15/2019 2:57,,2,605,"

I'm new to DDD and I'm trying to figure out the aggregate root. I'm sure this question has been asked a million times.

+ +

So I have :

+ +
    +
  • Products (thousands)
  • +
  • Catalogs (hundreds)
  • +
  • CatalogEntries that associate a product to a catalog and a price (prices are different depending on the catalog)
  • +
+ +

A catalog can exist without a product and a product can exist without a catalog so I have at least 2 aggregates:

+ +
    +
  • Product
  • +
  • Catalog
  • +
+ +

But where do CatalogEntries go? If I delete a catalog I need to delete all its entries, but if I delete a product I need to delete all the entries for that product too.

+ +

If Product is the aggregate and I delete a catalog, I need to load all my products to remove it, and if Catalog is the aggregate and I delete a product, I need to load all my catalogs to remove it. That doesn't make sense.

+ +

So is it a good idea to have CatalogEntry as an aggregate root? But it can't really live without a product or a catalog, which is what I thought was the point of an aggregate.

+",311680,,,,,2/15/2019 3:30,DDD Aggregate with Catalog Product,,1,2,,,,CC BY-SA 4.0,,,,, +387220,1,387252,,2/15/2019 4:09,,0,555,"

Let's say I am building a large application (a multi-page app) using Laravel. Laravel will allow me to make an API and a website in the same application.

+ +

Since the website and the API will communicate with the same database, I was wondering if it is better to consume the same API for the website using some javaScript framework like Vue.js.

+ +

This means I would have a single entry point to the database for all the clients (web, mobile, etc.) that call this API.

+ +

And my plan is to make:

+ +
    +
  • ApiControllers ( communicate with the database and return data )
  • +
  • WebControllers ( return blade views which will have vue.js components inside to consume the API ). There is no communication with the database in these controllers.
  • +
+ +
+ +
    +
  • What do you suggest?
  • +
  • What will you do in this case?
  • +
  • Is this a good idea?
  • +
+",328743,,218552,,2/15/2019 14:46,2/15/2019 14:46,Use same API for both website and other clients or not?,,1,3,,,,CC BY-SA 4.0,,,,, +387225,1,387227,,2/15/2019 7:42,,0,103,"

For the most part, I am able to distinguish between functional and non-functional requirements, but at times it is not clear for me.

+ +

For example, the following are non-functional but they seem functional to me:

+ +
The software must use SSL encryption for transmissions
+
+ +

Or

+ +
The software must store the configurations in an XML file
+
+ +

Could you please help me with a rule of thumb that helps me make the distinction? Many thanks!

+",325783,,,,,2/17/2019 10:29,Identification of Nonfunctional Requirements,,2,3,,,,CC BY-SA 4.0,,,,, +387231,1,,,2/15/2019 8:55,,5,549,"

In the standard C++ library, all containers and all input/output streams have their own constructors and destructors that handle all the relevant resource acquisition and release. So for most tasks that would require destructors (e.g. memory and file management), the modern developer does not have to define the destructors him/herself.

+ +

One case when a destructor should be defined explicitly is handling database connections. But this is quite rare - it can be handled by at most a single class in an application.

+ +

My question is: how often does a C++ programmer today actually need to write a destructor for his/her own classes? And what are the main use cases for defining a destructor that are not already handled by the standard libraries?

+",50606,,,,,2/15/2019 14:37,How common are destructors in modern c++ code?,,1,4,,,,CC BY-SA 4.0,,,,, +387234,1,,,2/15/2019 9:26,,-2,258,"

Is it a good practice to use static member methods to check whether an object of a class is NULL or not? The object would be passed through the parameters, of course.

+ +

Something like,

+ +
#include <iostream>
+using namespace std;
+
+
+class Box {
+    public:
+      static void checkNull(Box* b) {
+         if (b != NULL)
+            cout << ""present\n"";
+        else
+            cout << ""absent\n"";
+      }
+
+};
+
+int main() {
+
+    Box *b1, *b2;
+    Box b;
+    b1 = b2 = NULL;
+    b1 = &b;
+    Box::checkNull(b1);
+    Box::checkNull(b2);
+
+    return 0;
+}
+
+",148205,,,,,2/15/2019 9:33,Using static member methods to check for object being NULL,,1,2,,,,CC BY-SA 4.0,,,,, +387236,1,387238,,2/15/2019 9:37,,2,112,"

Let's say I have a User model defined. It makes sense that methods for retrieving certain fields of this model lives in the model file.

+ +

My question is where something that generates a unique UUID, or username should go. If I have a method for generating a unique username and it isn't specific to a particular instance of a User, does it still belong in the model file? Or a utils file? It seems wasteful to stash in a utils file, but it doesn't seem like it should go in the Model file either.

+",52359,,,,,2/15/2019 23:58,"Design: Where should methods specific to a model, but not an instance go?",,3,1,,,,CC BY-SA 4.0,,,,, +387237,1,,,2/15/2019 9:50,,2,237,"

When writing a flow chart, I can understand that it is a best practice to generally read left-to-right or right-to-left per locale, and/or top-to-bottom and generally for the directional flow of the chart to be consistent.

+ +

But, when it comes to a diamond (""decision"") element and its yes/no or true/false result arrows, is there a convention as to which one ought to be pointing down and which one to the side?

+",233051,,1204,,5/12/2020 16:07,5/12/2020 16:07,"In a flow chart, is there a convention on directionality of true/false conditions from a decision?",,2,2,,,,CC BY-SA 4.0,,,,, +387243,1,,,2/15/2019 10:44,,1,51,"

First let me state the asynchronous approaches:-

+ +
    +
  1. The user enters a character into the username form field. I create a connection to the database, use a prepared statement to confirm whether the username already exists, close the connection, and pass results back to initial page. AJAX is at work. All of this happens for every character the user enters or removes from the input field.

  2. +
  3. I pre-fetch the list of usernames from the database and store them locally. AJAX does username uniqueness checking from this locally stored pool on each character input, rather than from the database.

  4. +
+ +

Now, the synchronous approaches:-

+ +
    +
  1. The user enters a username in input field. Clicks submit. I create a connection to the database, use a prepared statement to confirm whether the username already exists, close the connection, and pass results back to initial page. All of this happens every time the user clicks submit.

  2. +
  3. Basically the same as second approach in asynchronous section, but checking happens on clicking submit, rather than on inputting each character.

  4. +
+ +

Now the second approaches work faster in both cases I believe, but they can lead to concurrency anomalies. Am I correct?

+ +

Also, while using a prepared statement to check username uniqueness from database in 1st approach in both categories, should I rely on the exception which would occur if I try to insert the same username (because of unique constraint) or should I use a select query to confirm the uniqueness? Basically is letting MySQL run into an exception considered bad practice?

+ +

If there is another better way of doing this, I'm all ears. Thanks!

+",328773,,,,,2/15/2019 11:31,Best practice to confirm unique username for user creation in JSP and JDBC?,,1,2,,,,CC BY-SA 4.0,,,,, +387244,1,,,2/15/2019 10:50,,1,101,"

In an event sourcing architecture, what is the typical pattern for passing information about related objects (aggregates)?

+ +

For example, in a order processing system, should OrderCreated event (published by the OrderService) contain productId or a Product?

+ +

Assuming there's also a ProductService: with the first option, receivers of the event can call the ProductService to get the Product, and with the second option all the information they need is already in the event. But I'm not quite clear on the pros and cons of the two approaches. Can someone shed some light?

+",324440,,,,,11/6/2020 15:07,Event sourcing access by reference,,1,0,,,,CC BY-SA 4.0,,,,, +387248,1,,,2/15/2019 12:27,,3,469,"

I've just read the book called Clean Code. The author (Robert C. Martin) talks about a single responsibility that a function should have in a program. It should only do one thing.

+ +

Now, I would like to understand how it is possible to reuse code that does multiple things. Let's say I have a method called runTrafficAndCheckIfItPassed and it calls two methods inside of it: runTraffic and checkIfTrafficPassed.

+ +

Now let's say that I need to run traffic and check its result in a lot of places in my software. Sometimes I need to check that traffic has failed, and sometimes I need to check that it passed.

+ +

Why wouldn't it be right to call the runTrafficAndCheckIfItPassed function and why is it way better to call the functions inside separately?

+ +

As far as I can see, if there is a change in the runTraffic function, for example to receive another parameter, the change only has to be implemented in one place, inside runTrafficAndCheckIfItPassed, which seems easy to maintain. But if I use the functions separately, I will need to change every place that calls them. Yet the author says this is wrong. So do you have any examples or tips on why it is considered wrong?

+ +

Here's how I used it:

+ +
def runTrafficAndCheckIfItPassed(TCPTraffic):
+   trafficResults=runTraffic(TCPTraffic)
+   hasAllTrafficPassed=checkIfTrafficPassed(trafficResults)
+   return hasAllTrafficPassed
+
+",328778,,173647,,2/16/2019 17:23,11/26/2019 13:48,How to use single responsibility with functions when I need to reuse procedures?,,2,8,,,,CC BY-SA 4.0,,,,, +387251,1,,,2/15/2019 14:12,,0,32,"

I'm building a hybrid mobile app that has slightly different versions but share 99.9% of the code. There is a paid and free version, which differ in one function only. Also there is a slight difference between the iPhone and Android versions.

+ +

I have the code in a git repository but I don't want to replicate the repo 4 times and work on changes to the code 4 times, makes no sense.

+ +

So I'm looking for a way of having ""sub-repos"" that could share most of the code and could be easily switched to have the custom parts overridden. Any ideas what I should use?

+",239695,,58415,,2/15/2019 14:23,2/15/2019 14:23,Code base architecture for one app that has multiple versions with small changes,,0,2,,43511.60347,,CC BY-SA 4.0,,,,, +387267,1,,,2/15/2019 19:56,,-4,113,"

I decided to make a networking application. I don't know exactly what it has to do yet, but mainly it should be a cross-client, framework-like networking application/library.

+ +

Since networking is basically IO (as far as I understand, I = getting data from the client and O = sending data to the client), should I use async, especially for sending?

+ +

Also if I should use async, would there be any downsides / lag caused by using async for everything? Somebody told me once making everything async couldn't hurt but I'm not entirely sure and I want my application/library to be as stable and fast and neat as possible.

+",328815,,,,,2/15/2019 23:22,Should I use async for networking application/library,,2,3,,43516.89792,,CC BY-SA 4.0,,,,, +387269,1,387278,,2/15/2019 22:35,,2,4769,"

I want to test an abstract class with:

+ +
    +
  1. Pure virtual methods that should be overridden in sub-classes
  2. +
  3. Non-pure virtual methods that use the pure virtual methods (as opposed to this question)
  4. +
+ +
class Fu
+{
+  public:
+    virtual void func(int count, std::string data)
+    {
+        for(int i = 0; i < count; i++)
+        {
+            pureFunc(data);
+        }
+
+    }
+    virtual void pureFunc(std::string data) = 0;
+};
+
+ +

Now to test func, I want to make sure that it calls pureFunc count times with argument data. What I have done is to subclass Fu and also create a Mock:

+ +
class TestFu : public Fu
+{
+  public:
+    virtual void pureFunc(std::string data) override {};
+};
+
+class MockFu : public TestFu
+{
+  public:
+    MOCK_METHOD1(pureFunc, void(std::string));
+};
+
+ +

And to test I do the following:

+ +
MockFu mock_fu;
+EXPECT_CALL(mock_fu, pureFunc(""test""))
+.Times(10);
+
+mock_fu.func(10, ""test"");
+
+ +

However, I was wondering if the above is a valid pattern. My worry is that I am testing the mock class as opposed to the class or its non-abstract sub-class.

+ +

To sum up:

+ +
    +
  1. Is the above pattern valid? If yes is it ok that I am testing a mock object? If no how do I achieve my goal?
  2. +
  3. In general is it valid practice to mock some parts of a class to unit test other parts? If so what is the common practice to do so?
  4. +
+",324635,,,,,2/16/2019 7:19,Unit testing abstract classes with Google mock (gmock/gtest) (C++),,1,0,1,,,CC BY-SA 4.0,,,,, +387270,1,,,2/15/2019 23:17,,1,196,"

Let's imagine that we have this Page component:

+ +
const Page = () =>
+  <>
+    <Topbar />
+    <Drawer />
+    <Content />
+  </>
+
+ +

I'd like to test some interaction within the Drawer and the Content components in an integration test, mounting our Page component, so I'd write a mock for the Topbar component, because I don't need it, with:

+ +
jest.mock('./Topbar', () => {
+  const TopbarMock = () => null
+  return TopbarMock
+})
+
+ +

Then our tests won't render this component, which speeds up execution and makes the test less likely to fail due to some bug introduced there, isolating our feature.

+ +

The problem I have is that, every time I need to add a new component in the Page component, I have to do the same I did for the Topbar.

+ +

My question is if there is any way to specify the components you're going to need for this integration test instead of the ones that you won't need (exactly the other way around). Something like, for this feature that I'm currently testing, I'll just need the Drawer and the Content components, so don't render anything else.

+ +

Or, is there a better way to write integration tests without needing to mock?

+",320272,,,,,2/15/2019 23:17,Write integration tests with React using Jest,,0,1,,,,CC BY-SA 4.0,,,,, +387276,1,,,2/16/2019 6:12,,-1,42,"

(This is a conceptual question but as reference, I'm using Android Studio (Java) and Firebase Firestore...)

+ +

My app currently has a structure where the user can follow authors and favorite their works. Each user on the backend has a set of follows and favorites which updates according to this activity. This seems like a fairly straightforward task -- on each tap of a ""Follow"" or ""Favorite"" button, run a request to update that user's follows/favorites set (respectively).

+ +

However, an issue continues to haunt me with this -- the user could very easily tap said button rapidly, sending a multitude of requests: for one, possibly overlapping with pending prior requests, and two, generally flooding the database with requests. It does not feel wise to give the user that much power.

+ +

So, my question is, what is the best way to handle updating data of this nature on the backend? Secondarily, is this even an issue in the first place? Is it okay to give the user that much control?

+",328838,,,,,2/16/2019 7:10,"How should I update back-end data (e.g. follows, likes, etc.) which changes at an inconsistent rate?",,1,2,,,,CC BY-SA 4.0,,,,, +387280,1,387281,,2/16/2019 11:11,,1,138,"

Say your making a library Foo that depends on a 3rd-party library Bar.

+ +

Bar throws a custom exception \OtherVendor\Bar\CustomException.

+ +

Is it recommended to just throw that exact exception to your clients (devs using your lib) or should you catch it then convert it to your own exception? E.g.,

+ +
try {
+    $bar->stuff();
+} catch (\OtherVendor\Bar\CustomException $ex) {
+    throw new \MyLib\Foo\MyCustomException();
+}
+
+ +

To explain further, is it better for your clients to know about that 3rd-party exception in your documentation? e.g.,

+ +
+

You can catch \OtherVendor\Bar\CustomException in case of x...

+
+ +

or should you ""rebrand"" the exception so clients don't need to deal with another lib's namespace? E.g.,

+ +
+

You can catch \MyLib\Foo\MyCustomException in case of x...

+
+",319317,,319317,,2/16/2019 11:35,2/16/2019 15:54,"Should you ""rebrand"" the exception of the library you're using?",,2,4,,,,CC BY-SA 4.0,,,,, +387282,1,,,2/16/2019 11:26,,-1,135,"

The more declarative code is, the less explicit technical details it contains and the closer it gets to requirements expressed in domain language.

+ +

In the extreme case, there is no more difference between requirements and code. My question is not about whether this is possible or not. But surely declarative programming makes the gap between requirements and code smaller.

+ +

I believe that this is quite obvious, but on the other hand I have not been able to find any material on this relationship between requirements and declarative code.

+ +

Therefore I am wondering:

+ +
    +
  1. Is my assumption flawed?

  2. +
  3. Is it just too obvious and trivial to be mentioned?

  4. +
  5. Did I not search hard enough?

  6. +
+",217956,,,,,2/17/2019 8:23,Are executable requirements the most advanced form of declarative code?,,2,3,,,,CC BY-SA 4.0,,,,, +387283,1,387306,,2/16/2019 11:38,,0,160,"

I'm working on a system that stores details of customer purchases for several stores. One statistic that they would like is to know how many unique customers they have had over a specified day range, for a particular store, or all stores.

+ +

One way I can think to do this is storing data in a relational database (SQL), like so:

+ +
CREATE TABLE TransactionCustomers
+(
+    ShopId int,
+    TransactionDay datetime2,
+    CustomerId int
+)
+
+ +

And then to query how many customers between two dates:

+ +
SELECT COUNT(DISTINCT(CustomerId))
+FROM TransactionCustomers
+WHERE TransactionDay BETWEEN '2019-02-01' AND '2019-02-14'
+AND ShopId = 3
+
+ +

I'm wondering if anyone can think of a way to do this that shifts the processing workload onto the application that writes the transactions - basically pre-computing the unique customer count? Or is there a technology other than a relational database that is better suited for this calculation?
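
+ +

To clarify what I mean by pre-computing, I was picturing something roughly like this (a Python-style sketch, not real code; the storage layout and names are invented): the writer keeps a set of customer ids per shop per day, and a range query just unions the daily sets.

+ +
from collections import defaultdict
+
+# (shop_id, day) -> set of customer ids, maintained by the transaction writer
+daily_customers = defaultdict(set)
+
+def record_transaction(shop_id, day, customer_id):
+    daily_customers[(shop_id, day)].add(customer_id)
+
+def unique_customers(shop_id, days):
+    seen = set()
+    for day in days:                  # the days in the requested range
+        seen |= daily_customers[(shop_id, day)]
+    return len(seen)
+
+ +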

+",328851,,,,,2/17/2019 11:45,Most efficient way to get unique customer count,,2,4,,,,CC BY-SA 4.0,,,,, +387285,1,,,2/16/2019 12:02,,1,296,"

Background

+ +

We have a team of 8 devs and 1 QA (tester), and we're struggling with dependencies between tickets, which is causing a lot of merge headaches and/or people waiting around for the next bit of work to pick up.

+ +

Our current GIT flow model (branches):

+ +
Release 
+ |  > Epic (Feature Branch) 
+ |     |  > Ticket : This is where the dev does their work and QA test on
+ |     |     |       then we move to the epic branch once passed. 
+
+ +

Our company creates APIs using Web API (.NET, C#); we also have an Angular site and an old admin site in MVC using mostly jQuery.

+ +

Issues

+ +

On a lot of recent projects we had to create APIs for insert, update, delete, and get. We usually code the ""insert"" ticket first and make the other tickets dependent on that one, because of things like coding the controller, DB tables and classes which the other APIs will need/use.

+ +

But then the ""update"" will need to check whether what is being updated exists in the DB, so it is dependent on the ""get"", and we end up with 3 layers of branching.

+ +
...
+ |   > Insert
+ |      |  > Update 
+ |      |     |   >   Get. 
+
+ +

This can get a lot worse when there are more epics involved. It causes a lot of merging issues, and confusion for QAs and devs about which branches they need.

+ +

How do people cut down on this? Pair programming? Create empty stub methods?

+ +

A big issue we have is a lack of QA resource, as we're struggling to hire people. A lot of tickets just stack up in the QA pile, causing us to keep fixing conflicts in the ticket branches.

+ +

I was thinking of create an new git flow:

+ +
Release 
+ |   > Epic (Feature Branch) 
+ |      |    > Epic (Dev)
+ |      |       |   > Ticket
+
+ +

So with this new Epic (Dev) branch, once the ticket has been coded we merge it straight into this branch, the QAs test on that branch, and once it has passed we move it into the Epic (Feature Branch).

+ +

Expected Benefits:

+ +
    +
  • QA don't have to keep switching branches
  • +
  • Can test full flows instead of just the individual tickets.
  • +
+ +

Can this, as I hope, result in fewer conflicts? Does anyone think this is better than our current setup? Does anyone have any better suggestions?

+ +

Note: At the moment we can't have a qa test environment (this is out of my control)

+",328852,,209774,,2/16/2019 13:03,4/26/2019 15:12,Reducing dependencies between stories & which branch (GIT) should QA's test on?,,2,4,2,,,CC BY-SA 4.0,,,,, +387288,1,387289,,2/16/2019 14:55,,-2,531,"

I'm implementing an app for fun. It is a soccer round-robin tournament simulator in which all teams play each other. A Team is a simple object:

+ +
class Team{
+  String _teamName;
+  double _defensePower; // 30..90
+  double _attackPower;  // 30..90
+}
+
+ +

and Match is:

+ +
class Match {
+  String _info;
+  Team _homeTeam;
+  Team _awayTeam;
+  int _homeScore;
+  int _awayScore;
+}
+
+ +

So I would like to use an algorithm to simulate each one of these matches.

+ +

Each match simulation will be sliced in a loop with 90 steps (each step represents a minute in the game in which one of the teams will have the chance to score the goal).

+ +

The simulation must take into account the level of attack and defense of the teams, but with a good deal of unpredictability.

+ +
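
To make the idea concrete, this is roughly the per-minute loop I have in mind (a rough Python sketch, not my actual code; the 0.05 factor and the attribute names are placeholders):

+ +
import random
+
+def simulate(home, away):
+    home_score = away_score = 0
+    for minute in range(90):
+        # one of the teams gets the chance this minute
+        attacker, defender = random.choice([(home, away), (away, home)])
+        # chance of a goal, driven by attack vs defense (placeholder formula)
+        chance = 0.05 * attacker.attack_power / (attacker.attack_power + defender.defense_power)
+        if random.random() < chance:
+            if attacker is home:
+                home_score += 1
+            else:
+                away_score += 1
+    return home_score, away_score
+
+ +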

The result of the simulation should be a match with scores that are normal for soccer matches, such as:

+ +

home 1x0 away
+home 0x0 away
+home 2x3 away
+home 4x0 away

+ +

Is there an algorithm that I could use to make this simulation?

+",175424,,326536,,2/16/2019 16:28,2/16/2019 16:28,Algorithm - Simulation of a soccer match,,1,1,,,,CC BY-SA 4.0,,,,, +387296,1,387308,,2/16/2019 23:47,,3,1623,"

I know the compilation process goes with this flow:

+ +
source -> parse -> AST -> intermediate code -> assembly -> machine code
+
+ +

and in the case of Java you will have bytecode which is translated by JVM.

+ +

In languages such as C/C++ and Go, we parse the source code and output machine code directly into a compiled file - easy enough.

+ +

But what about other languages, such as Java, JavaScript, Python etc?

+ +

Let's say I want to construct my own simple language and write its just-in-time compiler in C. My language is going to be similar to JavaScript; here is a code snippet:

+ +

my language:

+ +
var a, b = 5;
+print(a+b)
+
+ +

My compiler (which is written in C), after it has constructed an AST will try to add these 2 tree nodes together:

+ +
int compilerDoAdd(node a, node b){
+ return a + b;
+}
+
+ +
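
In other words, I picture it doing something like the tree walk below (a rough sketch in Python, only because it is shorter to write than C; the node classes are made up):

+ +
class Num:
+    def __init__(self, value):
+        self.value = value
+
+class Add:
+    def __init__(self, left, right):
+        self.left = left
+        self.right = right
+
+def evaluate(node):
+    if isinstance(node, Num):
+        return node.value
+    if isinstance(node, Add):
+        # the addition itself is done by the host language, I emit no machine code
+        return evaluate(node.left) + evaluate(node.right)
+    raise TypeError('unknown node')
+
+print(evaluate(Add(Num(5), Num(5))))   # prints 10
+
+ +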

Here is my question:

+ +

Is the above method the normal way to create a compiler? I'm not creating a heap, I'm not creating a stack, I'm not creating assembly, and I'm not creating machine code or instructions directly anywhere; I'm just letting the C language do all that work for me. My compiler acts just as a parser.

+ +

Is this how other languages like Python and JavaScript work? or do they create their own machine instructions (and basically redo whatever C is designed to do)?

+ +

Edit: is what I just explained how an interpreter works? I have read in many places that an interpreter steps through the source code line by line, figuring out what it's doing and doing it, but that never explains how the ""doing it"" part works. Does it get turned into machine code? Or does it work the way I explained?

+",328883,,326536,,2/17/2019 8:06,2/17/2019 12:31,How does a compiler work when it's not directly compiling to machine code,,3,3,2,,,CC BY-SA 4.0,,,,, +387298,1,,,2/17/2019 1:39,,-1,810,"

As mentioned here:

+ +
+

The main aim of CI is to prevent integration problems, referred to as ""integration hell""

+
+ +
+ +

Our project is a 3-tier web application, with a frontend (Angular 6), a backend (Spring Boot) and a database layer.

+ +

For front end code(Angular 6), we have one source code repository(.git)

+ +

For back end(SpringBoot), we have two source code repositories(.git)

+ +

For database layer(Java & MySQL), we have three source code repositories(.git)

+ +

Currently code has unit test cases.

+ +
+ +

As per the CI work flow mentioned in wiki,

+ +
+

When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added as well as new libraries, and other resources that create dependencies, and potential conflicts.

+ +

The longer a branch of code remains checked out, the greater the risk of multiple integration conflicts and failures when the developer branch is reintegrated into the main line. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.

+ +

Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as ""merge hell"", or ""integration hell"",[4] where the time it takes to integrate exceeds the time it took to make their original changes.

+ +

Continuous integration involves integrating early and often, so as to avoid the pitfalls of ""integration hell"". The practice aims to reduce rework and thus reduce cost and time.

+
+ +
+ +

1) To perform integration tests, is it recommended to have the source code (angular_frontend/java_back_end/database) in a single repo (.git)?

+ +

2) Is it a good practice to maintain a single source code repository for the complete code (full stack) for running the CI/CD pipeline?

+",131582,,131582,,2/17/2019 3:05,2/17/2019 21:30,Best practice - Single or Multiple source code repository,,1,2,,,,CC BY-SA 4.0,,,,, +387305,1,387310,,2/17/2019 11:41,,0,105,"

I have an interface with only one title field:

+ +
protocol Artist {
+    var title: String { get }
+}
+
+ +

(A) Should I pass the whole object as I did here:

+ +
class Album {
+    func setArtist(_ artist: Artist) {
+        /// Handle
+    }
+}
+
+album.setArtist(artist)
+
+ +

(B) Or set that only field:

+ +
class Album {
+    func setArtistTitle(_ artistTitle: String) {
+        /// Handle
+    }
+}
+
+album.setArtistTitle(artist.title)
+
+ +

What is the recommended way from a software-design point of view? Solution (A) looks safer, as it is restricted to the type. But the latter (B) is much more flexible: there is no need to create a concrete object, e.g. if I want to unit-test my class, and the code is less coupled, which is what we are striving for.

+ +

EDIT:
+All of possible duplicates consider cases with >1 parameters. For me those cases are quite obvious, since multiple parameters form a single type. In my example there is one and only one primitive parameter. The idea of the interface is guarantee existence of title, and that's it. The dilemma is where to access the title

+",267467,,267467,,2/17/2019 12:10,2/17/2019 14:34,One-field interfaces,,1,3,,,,CC BY-SA 4.0,,,,, +387309,1,387328,,2/17/2019 12:55,,-2,74,"

I have recently begun studying UML. All is going fine so far until I saw the following:

+ +

+ +

This is a class called Point2D. It has 2 attributes, x of type float and y of type float. It has 3 methods:

+ +
    +
  1. __init__ (takes parameters 'x' and 'y', both of which are float) returns nothing
  2. +
  3. distance (takes parameter 'target' which has the type class ???) returns a float
  4. +
  5. display() returns nothing
  6. +
+ +

I have no problem reading these, but my issue arose with the method 'distance', especially how to interpret the parameter 'target' being of type Point2D.

+ +

How should this be interpreted?

+ +

Python reconstruction:

+ +
from math import sqrt
+
+class Point2D:
+    x = float
+    y = float
+
+    def display(self):
+        print('coordinate x ' + str(self.x) + ', coordinate y ' + str(self.y))
+
+    def distance(self, target):
+        gap = sqrt(abs(target.x - self.x)**2 + abs(target.y - self.y)**2)
+        # target is still an enigma, but the code above should describe what
+        # distance is supposed to do
+        return gap
+
+    def __init__(self, x, y):
+        self.x = float(x)
+        self.y = float(y)
+",286580,,9113,,2/17/2019 16:40,2/17/2019 22:27,method taking a class parameter,,1,10,,,,CC BY-SA 4.0,,,,, +387311,1,387317,,2/17/2019 15:50,,-3,150,"

Given the following class:

+ +
class S
+{
+     ...
+ private:
+     void Action1();
+     void Action2();
+     .
+     .
+     .
+     void ActionN();
+}
+
+ +

The Action functions are related and are called in the constructor. For logical reasons, I want to separate these methods:

+ +
class SActions
+{
+     public:
+         SActions(Smthg* s);
+     private:
+         void Action1();
+         void Action2();
+         .
+         .
+         .
+         void ActionN();
+}
+
+ +

In this case, the SActions s(this); line would have to be placed somewhere in S. This two-way dependency seems a little bit wrong to me. Is there a good pattern or solution for this? What kind of workaround should I follow?

+",,user328912,,,,2/18/2019 20:52,Which is the best pattern or solution for this problem?,,1,3,,,,CC BY-SA 4.0,,,,, +387313,1,,,2/17/2019 16:18,,1,340,"

As briefly introduced in the question title, I am trying to design and implement a server application able to let clients share audio contents between themselves. In order to achieve that, I decided to let the server have the following features:

+ +
    +
  1. it should listen for incoming TCP connections using a dedicated thread.
  2. +
  3. it should use a defined number of threads to manage each client TCP request independently. These latter can mainly be of two different types: + +
      +
    1. entry requests (a client would start to share audio with others)
    2. +
    3. exit requests (a client would stop to share audio with others)
    4. +
  4. +
  5. it should dedicate a defined number of threads in order to send and receive contents respectively to and from connected clients. Audio content should be shared using the UDP protocol.
  6. +
  7. it should have memory of the clients connected to the network. In this way, server should be able to redirect a received audio stream to all the other connected clients.
  8. +
+ +

Taking these features into account, the code at the actual state has been arranged in the following way:

+ +
    +
  1. an AudioStreamServer class has been implemented. It represent the whole server entity.
  2. +
  3. AudioStreamServer has a TCPListener class which represent the main listening thread, a template class ThreadManager which is able to manage thread activities. These latter can be TCPConnections or UDPConnections.
  4. +
  5. TCPListener, TCPConnection and UDPConnection are like/derive from Thread which represent the basic thread features. It is nothing but a little wrapper of the std::thread class.
  6. +
+ +

But here are some issues and doubts I am facing:

+ +
    +
  1. is it a good strategy to let TCPListener, TCPConnection and UDPConnection classes inherit from Thread? If not, would it be better to let the Thread class be a template inside which TCPListener, TCPConnection, etc. are started as new thread using their () operator (treating them as functions)?
    +Here is a brief example of the second option:

    + +
    template <typename T>
    +class Thread {
    +public:
    +  Thread(): should(false), isRunning(false), t() {}
    +  ~Thread() {}
    +  void start() {
    +    if (!(isRunning)) {
    +      assert(!(should));
    +      should = true;
    +      t = std::thread(T(), std::ref(*this));
    +      isRunning = true;
    +    }
    +  }
    +  void stop() { // Well, should be uncallable from inside the thread...
    +    if (isRunning) {
    +      assert(should);
    +      should = false;
    +      if (t.joinable()) {
    +        t.join();
    +      }
    +    }
    +    isRunning = false;
    +  }
    +  void requestStop() {
    +    if (isRunning) {
    +      assert(should);
    +      should = false;
    +    }
    +  }
    +  bool shouldRun() const {
    +    return should;
    +  }
    +private:
    +  bool should;
    +  bool isRunning;
    +  std::thread t;
    +};
    +
    +class TCPListener {
    +public:
    +  TCPListener() {}
    +  ~TCPListener() {}
+  void operator()(Thread<TCPListener>& myThread) {
    +    while (myThread.shouldRun()) {
    +      print();  // Do something...
    +      sleep(1);
    +    }
    +  }
    +  virtual void print() {
    +    std::cout << ""I'm listening!"" << std::endl;
    +  }
    +};
    +
    +int main(int argc, char const *argv[]) {
    +  Thread<TCPListener> a;
    +  a.start();
    +  sleep(10);
    +  a.stop();
    +  return 0;
    +}
    +
  2. +
  3. since, for instance, TCPListener should communicate with ThreadManager<TCPConnection> and TCPConnection, in the current state of the code this kind of communication has been made possible by constructing the object with a reference to the others. Is this a good way to let classes communicate between themselves? Am I actually coupling them, making all the code more difficult to maintain? Interface classes could be a better solution? (for example, TCPListener - derived from ITCPListener - communicates with IThreadManager<ITCPConnection> and ITCPConnection)

  4. +
+",328875,,328875,,2/18/2019 10:37,2/18/2019 10:37,Designing a multithreaded TCP/UDP server for audio sharing,,0,3,1,,,CC BY-SA 4.0,,,,, +387314,1,387344,,2/17/2019 16:37,,0,135,"

I have a method that returns the percentage change of some data in a certain period. After calling that method, I need to know whether in the current period the data increased, decreased or stayed the same compared with the previous period.

+ +

The problem is that in some cases the data of the previous period is equal to zero. In those cases there's no way to determine the percentage change, because I would have a division by zero. But I need to indicate it anyway, because the data increased even though it's not possible to calculate the percentage change.

+ +

One of my approaches was to return a Number when the previous data was different from zero, and return a String when the previous data was zero, indicating that the percentage change could not be calculated, but the data increased.

+ +
if ($gender[$last_period_key] == 0) {
+    $perc_change = 'There were no records in the previous period';
+} else {
+    $perc_change = (($gender[$current_period_key] - $gender[$last_period_key])/$gender[$last_period_key]) * 100;
+}
+
+ +

Another approach could be to return a specific number indicating the error, like 999999999, but in my opinion it's not a good one.

+ +

Is there a pattern or best practice for returning data that can assume different data types? Or how would I indicate the error using only numbers, if the percentage change can be any of them?

+ +

I would like to apologise if here is the wrong place to ask this kind of question, and if it is, i would like indications of the appropriate place.

+ +

Thanks in advance.

+",328913,,328913,,2/17/2019 17:41,2/18/2019 10:56,Is there a pattern or best practice for returning dynamically typed data?,,4,4,,,,CC BY-SA 4.0,,,,, +387315,1,,,2/17/2019 16:40,,1,104,"

In my database (MongoDB) I have a model called exam, and each instance of the model has a somewhat large JSON object (500k).

+ +

I'm using strapi cms to make the query, using graphql plugin.

+ +

However, when fetching all the ids of the model, the fetching is extremely slow, even though I'm only interested in the ids and not the JSON object inside each instance (using GraphQL):

+ +
query {
+                    exams {
+                      name,
+                      _id,
+                    }
+                  } 
+
+ +

This is in comparison with fetching of other models which is very fast.

+ +

Does the database read through the whole content of each instance of a model? Is there a way to change this behaviour?

+ +

This is not a bug, just asking for help.

+",328229,,90149,,2/19/2019 16:37,2/19/2019 16:37,On fetching an item does the database read its whole content?,,0,7,,,,CC BY-SA 4.0,,,,, +387318,1,,,2/17/2019 18:32,,1,420,"

I am doing some work to refactor a class. It is currently a 'God class' and contains all the different logic/operations solely in that class. One of my solutions is to extract all the different parts of logic into their own classes. This will result in an 'Orchestrator' class calling all the separate classes to execute logic. Example:

+ +
public class TripOrchestrator {
+
+  public Trip build(TripInformation information) {
+
+    final TripFee tripFee = new TripFeeBuilder(information.getFee());
+
+    final TripTiming tripTiming = new TripTimingBuilder(information.getTripTiming());
+
+    final TripSettings tripSettings = new TripSettingsBuilder(information.getSettings());
+
+    final TripNotification tripNotification = new TripNotification(information.getNotification());
+
+    return new Trip(tripFee, tripTiming, tripSettings, tripNotification);
+  }
+}
+
+ +

Before this refactoring, all of this logic was placed into one 'God class' without any builder classes.

+ +

My Question

+ +

Firstly is this a reasonable approach to go with for a problem such as this? Secondly, is it bad that I am violating DIP? For example,

+ +
final TripFee tripFee = new TripFeeBuilder(information.getFee());
+
+ +

TripFee object is a concrete class, so is TripFeeBuilder. All of these implementations are 'concrete' and not 'abstracted' to their interface. My thought on why this MAY be okay is that all of these builders and classes are very stable. It is rare that logic/functionality will change and if there is a change, it will be a very minor one.

+ +

DIP states that the above line should be:

+ +
final ITripFee tripFee = new TripFeeBuilder(information.getFee());
+
+ +

With 'ITripFee' being an interface and having TripFeeBuilder be one of the possible implementations. But like I stated, there most likely won't be another implementation of TripFee.

+ +

Would love thoughts/opinions about this. I have experience in Java but still a novice in design. Thanks.

+",328925,,,,,5/31/2019 8:34,Is it sometimes okay to intentionally violate the Dependency Inversion Principle?,,3,5,,,,CC BY-SA 4.0,,,,, +387320,1,387392,,2/17/2019 19:32,,0,94,"

I am looking at blockchain and trying to see how Merkle trees (or perhaps Merkle DAGs) can be applied to a graph data structure with circular references.

+ +

For example, say I have this data model:

+ +
var profile = { setting1: 123, email: 'foo@bar.com' }
+var user2 = {}
+var user3 = {}
+var msg1 = { body: 'foo' }
+var msg2 = { body: 'bar' }
+var group1 = { name: 'Hello' }
+var group2 = { name: 'World' }
+var user1 = { messages: [ msg1, msg2, ... ] }
+
+profile.user = user1
+user1.profile = profile
+
+msg1.sender = user1
+msg1.recipient = user2
+msg2.sender = user1
+msg2.recipient = user3
+
+group1.members = [ user1, user3 ]
+group2.members = [ user1, user2, user3 ]
+
+user1.groups = [ group1, group2 ]
+user2.groups = [ group2 ]
+user3.groups = [ group1, group2 ]
+
+ +

This is highly cyclic, but the cycles are really only 2 or 3 levels deep. In reality there could be extremely large and complicated cycles, but I don't want to overcomplicate it.

+ +

With the Merkle tree, you essentially have a bunch of blobs of data which you make the leaves of a tree, and compute hashes for each pair all the way up to the top of the tree. But if one of your blobs changes, then you have to recalculate the whole path up to the top again. If you insert a node, you might have to recalculate even more. And this is just for basic blobs of data.

+ +
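To make sure I am describing the same thing, here is a minimal Python sketch of that plain-blob case, just hashing pairs up to a root (not meant as a real implementation):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blobs):
    level = [h(b) for b in blobs]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b'blob1', b'blob2', b'blob3']).hex())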

But if you have graph data like I've outlined above, I'm not sure what to do, or if there is a better data structure. So for example, if you change profile.email, then you would assume ""profile has changed"". But then I am unsure if we also say ""user"" has changed, because it is once removed. And likewise, perhaps even ""msg1"" has changed, or ""group"" has changed, since user1 is related to those through a few links. So basically, the idea of what is a ""single entity"" seems to break down in a graph, and I'm not sure what to do. Wondering if

+ +
  1. Merkle trees just aren't good for this type of data structure.
  2. If there is a way to make Merkle trees work here.
  3. Or, if there is a better one or two data structures for this sort of data model that can accomplish what the Merkle tree does best (mainly validating that the data hasn't changed).
+",326030,,,,,2/18/2019 21:34,"How a Merkle tree can work with Graph data, or a better data structure",,1,0,,,,CC BY-SA 4.0,,,,, +387329,1,,,2/17/2019 22:33,,-1,108,"

I had a class hierarchy with several classes that interact with each other. After introducing a new feature that is optional (but depends on external libraries), I have the following code (simplified):

+ +
#ifdef FEATURE1
+    #include ""feature1.h""
+#endif
+
+class Engine
+{
+public:
+    void useFeature(bool value)
+    {
+                ...
+    }
+
+    void run()
+    {
+        // statements
+#ifdef FEATURE1
+        // feature-specific statements
+#endif
+        // statements
+        action.makeSomething1();
+        // statements
+    }
+private:
+    Action action;
+};
+
+class Action
+{
+public:
+    void makeSomething1()
+    {
+        // statements
+#ifdef FEATURE1
+        // feature-specific statements
+#endif
+        // statements
+    }
+
+    void makeSomething2()
+    {
+        // statements
+#ifdef FEATURE1
+        // feature-specific statements
+#endif
+        // statements
+#ifdef FEATURE1
+        // feature-specific statements
+#endif
+        // statements
+    }
+    private:
+        ...
+};
+
+ +

From my point of view it looks bad and unreadable, and now I'm thinking about how to refactor this code the right way. What's the best design-pattern approach for introducing an optional feature into the current code base? What about having Engine and EngineWithFeature, Action and ActionWithFeature? It seems that would cause a lot of code duplication.
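To show what I mean by Engine and EngineWithFeature, here is a rough sketch of how I picture that variant (written in Python just for brevity; the names mirror the C++ above). This is where the duplication worry comes from, because run() ends up copied almost verbatim:

class Engine:
    def run(self):
        # statements
        self.action.make_something1()
        # statements

class EngineWithFeature(Engine):
    def run(self):
        # statements
        # feature-specific statements
        self.action.make_something1()
        # statements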

+",328944,,328944,,2/17/2019 22:42,2/17/2019 23:55,What's the best design pattern approach for introducing optional feature to current code base?,,1,6,,,,CC BY-SA 4.0,,,,, +387333,1,,,2/18/2019 3:07,,0,83,"

I'm writing an API for a JavaScript library to let my users offset an object from another.

+ +

I have to provide my user with a way to describe a ""vertical"" and ""horizontal"" offset.

+ +

My problem is that I don't have any point of reference except the position and orientation of the objects themselves.

+ +

+ +

In the above example, I have two blocks that face each other, and the green one is offset from the yellow one by 60 on the Y axis, and 40 on X.

+ +

I can't describe this as:

+ +
{
+  x: 40,
+  y: 60,
+}
+
+ +

Because if the blocks are rotated together by 90 degrees:

+ +

+ +

I would have to invert the axes, and my users really just want to express the ""distance between them"" and the ""distance between their bottom edge"" (bottom in this example, it may actually be ""right"" or ""left"" or ""top"" if rotated differently).

+ +

I need a way to describe the distances so that they don't need to be changed if the boxes rotate.

+ +
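One idea I am toying with is to store the two numbers relative to the blocks' facing direction and convert them to world x/y only when needed; here is a rough Python sketch of what I mean (angle in radians, names made up):

import math

def local_to_world(offset_between, offset_along_edge, facing_angle):
    # offset_between: distance separating the blocks (60 in the example, along Y when unrotated)
    # offset_along_edge: slide along their shared edge (40 in the example, along X when unrotated)
    cos_a, sin_a = math.cos(facing_angle), math.sin(facing_angle)
    dx = offset_along_edge * cos_a - offset_between * sin_a
    dy = offset_along_edge * sin_a + offset_between * cos_a
    return dx, dy

print(local_to_world(60, 40, 0))            # blocks as in the first picture
print(local_to_world(60, 40, math.pi / 2))  # same blocks rotated by 90 degrees

But I am not sure that storing them in a local frame like this is the idiomatic way to expose it in an API.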

What's a good way to describe these two distances?

+",109278,,109278,,2/18/2019 17:31,2/19/2019 17:44,Way to describe a coordinate system without external points of reference?,,1,7,,,,CC BY-SA 4.0,,,,, +387335,1,387397,,2/18/2019 3:45,,1,389,"

How do you design a website that allows users to query a large amount of user data, more specifically:

+ +
  • there are ~100 million users with ~100TB of data; data is stored in HDFS (not a database)
  • the number of (concurrent) queries is not important, but each query should be as fast as possible
  • support some simple queries such as: get user info by id, get accumulated data like monthly logins and monthly online time
  • the query result is small (1 number, or a few hundred rows) so frontend performance doesn't matter
+ +

I'm more interested in the thought process on how to approach this requirement. For example:

+ +
  • at 100 users, what is the design?
  • at 1,000,000 users, what needs to be changed?
  • at 100,000,000 users, what is the design now?
+ +

I've searched around and seen a lot of people talking about caching, load balancing, etc. Of course, those techniques are useful, but how do you know they can help handle N users? Nobody seems to explain this point.

+",287608,,287608,,2/19/2019 4:15,2/19/2019 14:39,Designing a big data web app,,2,6,,,,CC BY-SA 4.0,,,,, +387343,1,,,2/18/2019 7:06,,0,108,"

I am writing unit/component tests using an in-memory DB. While writing them, I came across the following question.

+ +

I have the following two BL methods.

+ +
  1. ToCreate
  2. ToGet
+ +

So when I write Unit/Component test, I need to create some sample data for the unit test.

+ +

When I write a unit/component test for the ""ToGet"" method, can I use ToCreate (the BL method) to create sample data? Or, when I write a unit/component test for ""ToCreate"", can I use the ""ToGet"" method to check the created data? Is that a correct choice?
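For context, the alternative I am weighing is to seed the in-memory DB directly instead of going through the other BL method. A rough Python/sqlite sketch of that arrangement step (table, column and method names are made up):

import sqlite3
import unittest

class ToGetTest(unittest.TestCase):
    def setUp(self):
        # Seed the in-memory DB directly rather than calling ToCreate
        self.db = sqlite3.connect(':memory:')
        self.db.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)')
        self.db.execute('INSERT INTO items (id, name) VALUES (?, ?)', (1, 'sample'))

    def test_get_returns_seeded_row(self):
        # In the real test this read would go through the ToGet BL method
        row = self.db.execute('SELECT name FROM items WHERE id = ?', (1,)).fetchone()
        self.assertEqual(row[0], 'sample')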

+",110296,,110296,,2/18/2019 10:14,3/20/2019 11:02,Unit/Component testing using In Memory DB,,1,4,,,,CC BY-SA 4.0,,,,, +387349,1,,,2/18/2019 10:24,,0,125,"

We are designing the software based on FDD (feature-driven development). Our architecture is microservices. I want to know whether microservices are delineated by feature sets. In other words, is each feature set considered a microservice? As far as I know, a feature set in FDD is the same as an epic in Agile, so if that is true, is each epic considered a microservice in Agile?

+ +

I have seen this question: Should I consider a microservice as an epic or a project in TFS?

+",328968,,328968,,2/18/2019 10:56,3/20/2019 13:01,should we consider a feature set as a microservice in FDD?,,1,3,,,,CC BY-SA 4.0,,,,, +387350,1,387364,,2/18/2019 10:38,,2,114,"

This is basically an extension to my previous question. That time our internal discussions didn't end up anywhere and the whole issue was forgotten for the time being.

+ +

Now we've touched upon it again, this time with username/password credentials for external systems. So, a more generic/slightly different version of the same question:

+ +

When your system interfaces with multiple 3rd party external systems, there are some sort of endpoint configurations involved. Typically there will be a URL and either a username/password combo, an API key, or a client certificate. More often than not there will even be two sets of these - one for a testing/development environment, another for production.

+ +

So, after these have been communicated, what is the best practice what to do with them?

+ +
  • My opinion is that they should be documented somewhere. If you want them to be secure, you can encrypt them somehow (KeePass, ZIP file with a password, whatever). But there should be a logical place where they are kept against future need. Perhaps it can be in the project documentation. Perhaps there's central storage for these. Whatever. Just - somewhere else where people can quickly find them when needed.
  • My colleague's opinion is that they should NOT be documented anywhere. Well, OK, the test credentials can be documented, but the production credentials should only exist in a config file on the production server(s) and nowhere else. His reasoning is that nobody should need to access these credentials outside that server and storing them elsewhere compromises security. If the production server suffers a crash - that's what backups are for and those backups include the config files too. And if the whole system is so broken that even the backups are corrupted - then you have a bigger problem anyway.
+",7279,,,,,2/18/2019 12:44,Should usernames and passwords to external systems be documented?,,1,0,,,,CC BY-SA 4.0,,,,, +387357,1,,,2/18/2019 11:35,,-2,481,"

I know there are a lot of threads regarding this topic, but I can't find the answer for this precise topic:

+ +

First of all, with the ""first assembler"" I mean the program that translates, let's say, the instruction ""mov"" to the specific machine code the ALU understands, 1100111 or whatever other binary number. +There's some gap between those two steps that I can't find answers for.

+ +

I understand the process is something like this: you have a CPU chip built with a specific microarchitecture that implements N instructions. Each instruction is accessed internally in the ALU with a binary number or opcode (000 mov, 001 add, etc.). At some point in history, instructions were loaded into the CPU using punched cards, tapes, etc.

+ +

But then you want to raise the level of abstraction and need an assembler, so you can program in a higher-level language instead of opcodes, and this is exactly where I'm missing something.

+ +

At this point, I guess some bootstrapping is used to go from opcodes to assembler, but how? How do you write the assembler v0.00 for a given brand new cpu? Is there any chip hardcoding those instructions, maybe the first assembler is hardware based?

+ +

In ""Assembler and Loaders"", it seems the first assembler was created using a ROM, hardlinking telephone selectors to memory addresses.

+ +
+

""One of the first stored program computers was the EDSAC (Electronic Delay Storage Automatic Calculator) developed at Cambridge University in 1949 by Maurice Wilkes and W. Renwick. From its very first days the EDSAC had an assembler, called Initial Orders. It was implemented in a read-only memory formed from a set of rotary telephone selectors, and it accepted symbolic instructions. Each instruction consisted of a one letter mnemonic, a decimal address, and a third field that was a letter. The third field caused one of 12 constants preset by the programmer to be added to the address at assembly time.""

+
+",328977,,,,,2/20/2019 10:38,How is a first assembler assembled? (without cross-compiling),,5,4,,,,CC BY-SA 4.0,,,,, +387358,1,387389,,2/18/2019 11:41,,1,118,"

I have a question concerning the ""Uncommitted Read"" Isolation Level under DB2 DBMS for z/OS.

+ +

In this article

+ +

https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/perf/src/tpc/db2z_isolationissues.html

+ +

in the IBM Knowledge Center it is stated that when using isolation levels other than Repeatable Read, the following phenomena can occur:

+ +
  • Phantom rows
  • Dirty read
  • Non-repeatable read
+ +

Also, the ANSI SQL-92 standard only defines these three phenomena as possible and says that no updates can be lost:

+ +

http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

+ +

(see page 68, chapter 4.28 ""SQL-transactions"").

+ +

quote:

+ +
+

""The four isolation levels guarantee that each SQL-transaction will be + executed completely or not at all, and that no updates will be lost.""

+
+ +

So far I have not found a definitive answer on how DB2 (for z/OS) prevents lost updates when SQL statements in programs are coded with the ""WITH UR"" clause.

+ +

Let's assume the classic example of the ""Lost Update"" phenomenon:

+ +
  1. Transaction T1 reads the value ""X"" from table ""R"" into memory and increments it by 1.
  2. Transaction T2 reads the value ""X"" from table ""R"" into memory and increments it by 2.
  3. Transaction T1 commits its changes.
  4. Transaction T2 commits its changes - the value X+2 is stored in the database, although X+3 would be the correct result if T1 and T2 had been serialized.
+ +

Let's assume that the transactions do not read the data via a ""for update"" cursor, which internally leads to the ""cursor stability"" isolation level in DB2.

+ +

As I understand it, to prevent this anomaly, transaction T1 would need to take an exclusive lock on the qualifying row, to prevent T2 from reading the same value before T1 has committed.

+ +
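For comparison, the only application-level way I know to avoid the anomaly without such a lock is an optimistic check in the UPDATE itself. A rough Python/DB-API sketch of that idea (not DB2-specific; table and column names are invented):

def increment_with_optimistic_check(conn, row_id, delta):
    # Retry until the UPDATE hits the row in exactly the state we read it in.
    while True:
        cur = conn.cursor()
        cur.execute('SELECT x FROM r WHERE id = ?', (row_id,))
        old_x = cur.fetchone()[0]
        cur.execute('UPDATE r SET x = ? WHERE id = ? AND x = ?',
                    (old_x + delta, row_id, old_x))
        if cur.rowcount == 1:    # nobody changed x in between, so no update is lost
            conn.commit()
            return
        conn.rollback()          # someone else won the race; read again and retry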

Yet, according to the resources cited above, the Lost Update anomaly is not possible - but how is this achieved?

+",328979,,,,,2/18/2019 20:32,Avoidance of Lost Update in DB2 for z/OS,,1,1,1,,,CC BY-SA 4.0,,,,, +387359,1,387363,,2/18/2019 12:04,,-2,365,"

Below is the GitFlow workflow we are using for a project.

+ +

+ +
+ +

So, in our project, the master branch currently has only one git commit, with just a Readme file. The develop branch is forked from that commit. As of now, there is no other develop branch.

+ +

We are currently at the point of running the Dev (build) stage of the pipeline (Jenkins) and the QA stage of the pipeline (Jenkins).

+ +

We are yet to understand which git commit launches the release stage of the pipeline that generates the binary artifact to be deployed in production: c1 from the release branch, or c2 from the master branch?

+ +

The release stage of the pipeline is supposed to generate the release artifact that goes to production.

+ +

As per the above workflow, my understanding is that develop, release and master are long-running branches.

+ +

develop branch will never be merged to master branch.

+ +
+ +

Question:

+ +

1) Which commit (c1 or c2) launches the release stage of the pipeline to deploy the release binary artifact to production?

+ +

2) If it is the c1 commit, do we then require another pipeline for c2, just to verify the consistency of the code in the master branch?

+",131582,,131582,,2/18/2019 13:52,2/18/2019 13:52,Release pipeline - Which artifact goes in production?,,1,0,,,,CC BY-SA 4.0,,,,, +387360,1,387376,,2/18/2019 12:06,,2,915,"

I've stumbled upon a problem: ""I can't split my domain models into aggregate roots"".

+ +

I'm a junior developer and novice at DDD. I really want to understand it, but sometimes it's really confusing.

+ +

From this point I want to describe my domain briefly.

+ +

My project aims to give users the opportunity to create any kind of document by themselves. Users can create a new type of document. Each new type consists of its attributes. Then a user of this application can create a concrete document based on its type. The user can also send the document for approval. The approval flow is different for each type.

+ +

So, we have the following models:

+ +
  1. DocumentType / DocumentTemplate - acts as a template based on which concrete documents are created. It has a one-to-many relationship with Document.
  2. DocumentsAttribute - represents an attribute of a document. It has a many-to-many relationship with DocumentType.
  3. AttributeValue - when a concrete document is created, it looks at its type and creates values for the attributes which that type has. Many-to-many relationship with Document and Attribute.
  4. Document - represents a concrete document that is created by users.
+ +

There are other models, but I don't think they matter here.

+ +

As you can see, here I apply the Entity-Attribute-Value (EAV) data model pattern. You can see a diagram that shows the relationships in the database.

+ +

And my problems are:

+ +

I have a lot of entities in my model besides the ones I have described.

+ +

I think that Document is definitely an aggregate root in my domain, because things such as ApprovalProcess, which is an aggregate, cannot live outside of it.

+ +

Here is the first question:

+ +

ApprovalProcess consists of its steps. Each step is an entity since it is mutable. A step has a state that can be changed. ApprovalProcess's state depends on its steps. Here we have a business invariant: ""ApprovalProcess can be approved only if all its steps are approved"".

+ +

I think that it is an aggregate root because it has the business invariant and contains entities that cannot live outside of it. And we don't want to allow direct access to its steps, in order to keep ApprovalProcess consistent.

+ +

Am I mistaken that ApprovalProcess is an aggregate root? Maybe it is just an aggregate? Can one aggregate root exist within another one as its part? Does that mean that ApprovalProcess is just an aggregate because Document is responsible for access to its parts? But when an ApprovalProcess step is approved, Document delegates the operation to ApprovalProcess.

+ +

For example:

+ +
Document doc = new Document(...);
+doc.SendForApproval(); //ApprovalProcess is created.
+
+doc.ApproveStep(stepId); // Inside the method, Document delegates responsibility for approval to ApprovalProcess.
+
+ +

Or should I keep Document and ApprovalProcess separate? Then Document would refer to ApprovalProcess by identity, and we have the following scenario:

+ +
Document doc = documentRepository.Get(docId);
+doc.SendForApproval();// A domain event ""DocumentCreatedEvent"" is raised.
+
+ +

DocumentCreatedEventHandler:

+ +
ApprovalProcess approvalProcess = new ApprovalProcess(event.DocId); // ApprovalProcessCreatedEvent is raised
+
+approvalProcessRepository.Add(approvalProcess);
+approvalProcessRepositroy.UnitOfWork.Save(); //commit 
+
+ +

But if ApprovalProcess's state changes, Document's state also changes. If the ApprovalProcess is approved, then the Document is also approved. In other words, ApprovalProcess is kind of a part of Document's state. Only thanks to it can we know that the Document is approved.

+ +

And the biggest problem that I'm experiencing:

+ +

DocumentType is also an aggregate root. It consists of its attributes and an ApprovalScheme. I haven't mentioned ApprovalScheme yet on purpose, to keep my explanation as simple as possible. ApprovalScheme also consists of some entities. It's just the approval flow for a DocumentType. An ApprovalProcess is created according to the ApprovalScheme of the DocumentType that the Document has. ApprovalScheme cannot exist without DocumentType. One-to-one relationship.

+ +

Document refers to its DocumentType by identity. Is that correct?

+ +

At the beginning of this task I thought that DocumentType should be a part of Document.

+ +

DocumentType has many Documents, but in my domain that doesn't make sense: it doesn't represent the state of DocumentType. DocumentType can be marked as deleted but can't be deleted.

+ +

Document and DocumentType are two different aggregate roots. Am I right?

+ +

Thank you so much if you read all of it. Thanks a lot for your attention and help! Sorry for my terrible English.

+ +

==========================================================================================================================================================

+ +

Problem №2

+ +

Now I want to provide you more details about my Domain.

+ +

There are: users, documents, types of documents, attributes of documents.

+ +

The documents in my application have dynamic fields, so I use the EAV pattern. A document type defines a set of attributes and an approval flow that consists of steps. When a document is created, values are created for all the attributes that its type contains.

+ +

When a document is sent for approval, a state is created for each step of the approval flow kept by the document type. This set of states belongs to the document's approval process.

+ +

There are two types of users: Employee and Admin. An Employee can have different roles like manager, accountant and so on.

+ +

Each employee can:

+ +
  • Create a new document.
  • Send a document for approval.
  • Edit a document if it is not on approval.
  • Approve exactly one step of the approval process that belongs to the document and that the user is responsible for.
  • Reject exactly one step of the approval process that belongs to the document and that the user is responsible for.
  • Print the document if it is approved.
+ +

Each admin can:

+ +
  • Create a new type of document.
  • Create a new attribute, or assign an existing one, for a document type.
  • Create a new approval flow, or edit an existing one, for a document type.
+ +

If the approval process is rejected, the user who created the document must make some corrections to the document and send it for approval again.

+ +

I have two options for how to implement it. The first option: I think that I have the following entities and aggregates (I want to show their interfaces):

+ +
interface User {
+  Document CreateNewDocument(int docType);
+  Document SendDocumentForApproval(int docId);
+  Document EditDocument(document, List<AttributesValues> attributesValues); // list of only attributes that must be changed.
+  Document Approve(document, stepId);
+  Document Reject(document, stepId);
+  Document Print(document);
+}
+
+ +

And I'm going to use it somehow like this:

+ +

CreateDocumentCommandHandler:

+ +
User user = userRepository.GetUser(userId);
+
+Document doc = user.CreateNewDocument(docTypeId);
+
+documentRepository.Add(doc);
+documentRepository.UnitOfWork.Save();
+
+ +

SendDocumentForApprovalCommandHandler:

+ +
User user = userRepository.GetUser(userId);
+Document doc = documentRepository.GetDocument(docId)
+
+doc = user.SendDocumentForApproval(doc );
+
+documentRepository.Update(doc);
+documentRepository.UnitOfWork.Save();
+
+ +

But what is really tricky for me is what happens inside the SendDocumentForApproval method of the user:

+ +
Document SendDocumentForApproval(document)
+{
+   if(document == null) throw new ArgumentNullException(nameof(document));
+
+   // When document is sent for approval, it needs to initialize a new
+   // approval process and all states for the approval scheme's steps.
+   // So we need to get a type of the document and ask it to provide us
+   // ApprovalProcess that is based on ApprovalScheme.
+   document.SubmitApprove(); // DocumentSendForApprovalDomainEvent is raised
+   return document;
+}
+
+ +

DocumentSendForApprovalDomainEventHandler:

+ +
Handle(dto)
+{
+   // It seems very stupid because either I need to get the document one more
+   // time from the repository, or I need to keep a reference to the instance of
+   // the document as a field in the dto argument of this handler.
+   Document doc = dto.Document;// looks terrible
+   // or
+   Document doc = documentRepository.GetDocument(dto.DocId);//looks also terrible because we have already used the repository for this purpose in SendDocumentForApprovalCommandHandler
+
+    DocType docType = docTypeRepositoty.GetDocType(doc.TypeId);
+
+    ApprovalProcess approvalProcess = docType.InitNewApprovalProcessForDocument(doc.Id);
+
+
+    doc.AssignApprovalProcess(approvalProcess)
+}
+
+ +

Could someone please correct the ApprovalProcess that I've modeled above? Or please tell me whether I'm on the right track or not.

+",328981,,328981,,2/19/2019 8:49,2/19/2019 8:49,A problem with understanding aggregates and aggregate roots in Domain Driven Design (DDD),,1,3,1,,,CC BY-SA 4.0,,,,, +387365,1,387442,,2/18/2019 13:34,,0,228,"

My goal is to convert this class diagram into Java code. How should I approach this, given that I want the constraints to hold at all times? It creates a chicken-and-egg problem where the first Course or Student created will always be alone in the world, thus violating the minimum multiplicity constraint of either ""teaches"" or ""takes"". Any suggestions? Thanks.

+ +

+",328987,,7422,,2/18/2019 14:18,2/19/2019 20:37,2 way multiplicity constraints in code,,2,4,,,,CC BY-SA 4.0,,,,, +387367,1,,,2/18/2019 13:48,,1,847,"

Below is the GitFlow workflow we follow, where the master branch has the commit history (git tags) of the different releases.

+ +

From a release-management perspective, we delete the release branch after merging it into the master and develop branches.

+ +

+ +

To apply a hotfix, we first git checkout a specific release commit on the master branch, then create a hotfix branch from that commit (shown in orange, below) and make changes.

+ +

+ +

Questions:

+ +

1) Does git merge allow the hotfix branch to be merged into an intermediate commit (as shown below) on the master branch, instead of merging to the tip of master? The hotfix is for the 0.1 release.

+ +

2) If not, how do we apply a hotfix (for a specific release) through the master branch, given that the release branch is deleted?

+",131582,,131582,,2/18/2019 14:12,2/19/2019 12:42,Applying hotfix to intermediate commit on master,,2,6,0,,,CC BY-SA 4.0,,,,, +387371,1,,,2/18/2019 14:35,,-1,102,"

The AMD64 specification talks about /0 with regards to instruction encoding but I don't have a clue what is meant by that. For example, in Volume 3 the ADD instruction has three forms:

+ +

ADD reg/mem16, imm16 81 /0 iw Add imm16 to reg/mem16

+ +

ADD reg/mem32, imm32 81 /0 id Add imm32 to reg/mem32.

+ +

ADD reg/mem64, imm32 81 /0 id Add sign-extended imm32 to reg/mem64.

+ +

These all use the opcode 81 followed by /0 which I presume distinguishes between the three followed by the immediate value iw or id. But what does /0 mean?

+",328998,,,,,2/18/2019 16:47,What is meant by /0 in AMD64 specification?,<64-bit>,1,0,,,,CC BY-SA 4.0,,,,, +387375,1,,,2/18/2019 16:23,,-1,133,"

While applying for a job interview I found this line in requirements.

+ +
+

Experience with clean code writing practices like avoiding callback hell like promises, async

+
+ +

Does this line make any sense ? If yes, can we actually stop using promises and async ?

+",326941,,,,,2/19/2019 8:18,Does avoiding Promises and Async leads to clean code?,,1,4,,,,CC BY-SA 4.0,,,,, +387379,1,387381,,2/18/2019 17:15,,0,187,"

I'm new to APIs and still doing some research to gain knowledge for my presentation in the next few days.

+ +

Different motherboards have different CPU sockets. So I was just wondering: do APIs affect which kind of CPU is compatible with which motherboard?

+ +

Like for example, (I don't know if this is true or not) AM4 motherboard only compatible with AMD's CPU because of the API integrated within the motherboard.

+ +

I hope my question get through.

+ +

Thanks

+",329011,,,,,2/18/2019 17:52,API and motherboard,,2,1,,,,CC BY-SA 4.0,,,,, +387385,1,,,2/18/2019 18:52,,6,2735,"

Should AutoMapper be used to take data from ViewModel and save back into a database model?

+ +

I know the opposite is considered good practice: to have AutoMapper extract database models and place them into ViewModels for front-end web applications.

+ +

I was reading this general discussion here: https://stackoverflow.com/questions/35959968/best-way-to-project-viewmodel-back-into-model . It was not specifically about AutoMapper, but I want to validate whether the opinion below is true.

+ +

""I believe best way to map from ViewModel to Entity is not to use AutoMapper for this. AutoMapper is a great tool to use for mapping objects without using any other classes other than static. Otherwise, code gets messier and messier with each added service, and at some point you won't be able to track what caused your field update, collection update, etc.""

+",329020,,,,,12/30/2019 11:02,Should AutoMapper be used to Map from ViewModel back into Model?,<.net-core>,2,0,1,,,CC BY-SA 4.0,,,,, +387387,1,,,2/18/2019 19:40,,1,245,"

Note: This is a follow-up to this question on StackOverflow.

+ +

I have to write a wrapper in Python to an external API, accessible through HTTP. The code is supposed to be publicly available on GitHub.

+ +

For this reason, I thought, it would be nice if a person cloning this repository wouldn't see tons of warnings. I opened my own code in PyCharm just to see if that was the case. It wasn't.

+ +

However, since I'm afraid what these warnings are pointing at is a design issue, please let me start by showing (in a simplified way) the design I came with:

+ +

Firstly: There are two methods to authenticate the HTTP connection. For this reason, I have a ConnectionBase abstract class, that is implemented by two concrete classes, each using one of the two available authentication methods. So far so good.

+ +

But here problems start. There is an ApiClient class that is supposed to contain wrappers of all those API routes. This ApiClient class, of course, has to make use of an instance of ConnectionBase. In addition, since there are so many API calls to provide wrappers for, I thought I'd break them into categories.

+ +

Finally, this is how (roughly) the definition of ApiClient looks like:

+ +
class ApiClient:
+    def __init__(self, connection):
+        self._connection = connection
+
+    # forwarding method
+    async def _make_call(self, *args, **kwargs):
+        return await self._connection._make_call(*args, **kwargs)
+
+    class _Category:
+        def __init__(self, client):
+            self._client = client
+
+        # another forwarding method
+        async def _make_call(self, *args, **kwargs):
+            return await self._client._make_call(*args, **kwargs)
+
+    class _SomeCategory(_Category):
+        async def some_api_call(self, some_arg, other_arg):
+            return await self._make_call(blah blah)
+
+        async def other_api_call(self, some_arg, other_arg):
+            if some_arg.some_condition():
+                other_arg.whatever_logic_here()
+            return await self._make_call(yadda yadda)
+
+    class _OtherCategory(_Category):
+        async def yet_another_api_call(self, some_arg, other_arg, yet_another_arg):
+            #...
+            return await self._make_call # etc etc
+
+    @property
+    def some_category(self):
+        return ApiClient._SomeCategory(self)
+
+    @property
+    def other_category(self):
+        return ApiClient._OtherCategory(self)
+
+ +

In this fashion, assuming that the user of my lib wants to make a call to this external API and that client is an instance of ApiClient, they would type:

+ +
client.some_category.some_api_call(some_arg, other_arg)
+
+ +

I believe that my use of underscores is clear: I'm preceding with an underscore all names that are not meant to be called by the end user of my lib. I thought this was the most important distinction: far more important than the distinction between variables private to a class: because the latter is nothing but an aesthetic issue, while the former is a usability issue: after all, exposing a (hopefully) clean, well-defined and intuitive public API is the very purpose of writing libraries!

+ +

Yet, PyCharm frequently complains that I'm accessing protected members of classes from outside of these classes. Apparently, and according to the linked SO question, the Pythonic understanding of a name preceded by an underscore is: This member is internal to this class and NOT the way I was understanding it, that is: This member is internal to this package.

+ +

So, all such lines are producing warnings:

+ +
await self._connection._make_call
+
+await self._client._make_call
+
+ +

etc etc in other places of my code.

+ +

But then: Am I supposed to clearly distinguish between methods supposed to be called by the user of my lib and methods supposed to be called only from within my lib (or users who know the internal workings of my package and know what they're doing)?

+ +

If yes, then how if not by an underscore, which apparently means a different thing?

+ +

Or...

+ +

Well I don't know, because as it is clear from my questions on this site, my knowledge of design patterns is underwhelming... But a hypothesis of how this situation should be understood rises up in my mind... A pretty radical hypothesis...

+ +

Maybe the correct interpretation is that there should be no methods that are intended to be called from outside of a class but not from outside of the package and/or by the end user?

+ +

I mean, I'm constantly seeing talks about the need to break dependencies, about how bad it is to have code that is so closely intermingled and tightly coupled... Perhaps such methods are examples of this unwanted coupling? And in all cases where I'd like to put an underscore in the beginning of a method name but PyCharm complains, this simply means that the existence of such a method constitutes a coupling in my code that should not be there and therefore there should not be such a method at all?

+ +

And I was so proud of myself that I made use of dependency injection :P

+ +

On a more serious note: I have strong doubts if the above, dire interpretation is correct. In Uni I had a basic course on OO design. The instructor said something along the lines of, IIRC:

+ +
+

Please do remember that inheritance should be used in case of an is-a relationship, not a has-a relationship. An example of an extremely bad inheritance is a car inheriting from a wheel or from a gas pedal. Such cases should be handled by composition instead.

+
+ +

So, since an API client is not a connection used by this API client and since a category of an API call is not a client, I thought that it would be wrong to make API client inherit from a connection or to make a category inherit from the client - even though doing this would make PyCharm stop complaining and issuing warnings. Instead I used composition. However, if composition is used, I can't see how can I get rid of methods that are intended to be called from within my lib, but not from within the class they're defined in and not by the end user.

+ +

I suppose there must be a basic design principle I'm ignoring out of ignorance.

+ +

Could you enlighten me please?

+",212639,,,,,12/27/2020 13:00,"Should there not be methods intended to be only called from inside of the package, but from the outside of the class they're defined in?",,2,4,,,,CC BY-SA 4.0,,,,, +387390,1,,,2/18/2019 21:07,,1,173,"

I would like to make a simple web application (a static website where all computation happens on the client) that generates a mesh and displays it. I have a working prototype in Unity and now I'm wondering what a good framework / language is for the task.

+ +

The problem: I would like to use Typescript or Javascript, but neither support operator overloading.

+ +

A line like this in C#

+ +
a = Vector3.forward * 3 + direction * length * 0.5;
+
+ +

would look horrible without operator overloading:

+ +
a = Vector3.forward.times(3).add(direction.times(length * 0.5));
+
+ +

What is the most elegant solution to this?

+",251093,,251093,,2/20/2019 0:54,2/20/2019 0:54,Good way to do 3D vector math in language without operator overloading,<3d>,1,16,,,,CC BY-SA 4.0,,,,, +387394,1,,,2/18/2019 22:17,,0,42,"

I want to know which practice is more convenient given the following example (my app is a Rails app, but this can be applied to any framework). I am trying to create an endpoint which returns an author's posts filtered by year, using a REST API.

+ +

Having the models:

+ +
class Author < ActiveRecord::Base
+  has_many :posts
+end
+
+ +
class Post < ActiveRecord::Base
+  belongs_to :author
+
+  # attributes:
+  # - id
+  # - published_at (datetime)
+  # - author_id
+end
+
+ +

Controllers:

+ +

Solution A) (member route)

+ +
class AuthorsController < ApiController
+  def posts_by_year
+    @posts = Post.where(author_id: params['author_id']).by_year(params['year'])
+
+    render json: @posts
+  end
+end
+
+ +

Solution B) Filter search on Posts index

+ +
class PostsController < ApiController
+  def index
+    @posts = posts_search_scope # this is an illustrative example, I will use a Finder service to do this
+
+    render json: @posts
+  end
+
+ private
+ def posts_search_scope
+   query = Post
+   unless params
+     query.all
+   else
+     query = query.by_year(params['year']) if params['year']
+     query = query.where(author_id: params['author_id']) if params['author_id']
+     query
+   end
+ end
+end
+
+ +

I like solution B more, but I'm concerned that, as the Post model grows, I will need to support more and more search params and business rules.

+",329029,,329029,,2/19/2019 16:22,2/19/2019 16:22,Good practices: using member routes or filter on search scopes?,,0,2,1,,,CC BY-SA 4.0,,,,, +387395,1,387396,,2/18/2019 22:49,,0,226,"

I have multiple methods calling each other, so that changes only need to be made in one place and I avoid copy-pasting and fixing the same errors repeatedly. It looks like this:

+ +

+ +
  1. Is this a bad practice?
  2. In your experience, does it cause too much overhead? (I don't have any performance problems, but I try to do my best in every case I can.)
  3. Is it a good solution to have private inline methods along with the public ones, and call them when required?
+ +

Example Code

+ +
#if UNITY_EDITOR
+public static class ScriptableObjectUtility
+{
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    private static T CreateAssetInline<T>(string path, string tagName, T asset)
+        where T : ScriptableObject
+    {
+        if (path == string.Empty)
+            path = PathUtility.ASSETS_PATH_NAME;
+        else if (!string.IsNullOrEmpty(Path.GetExtension(path)))
+        {
+            path = path.Replace(Path.GetFileName(AssetDatabase.GetAssetPath(Selection.activeObject)), string.Empty);
+        }
+
+        string fullGeneratedPath = AssetDatabase.GenerateUniqueAssetPath(Path.Combine(path, $""[{tagName}] name.asset""));
+
+        AssetDatabase.CreateAsset(asset, fullGeneratedPath);
+
+        AssetDatabase.SaveAssets();
+        AssetDatabase.Refresh();
+        EditorUtility.FocusProjectWindow();
+
+        Selection.activeObject = asset;
+
+        return asset;
+    }
+
+    public static T CreateAsset<T>(string path, string tagName, T asset)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline(path, tagName, asset);
+    }
+}
+#endif
+
+ +

Final Version

+ +

This is a version where all public methods make a call to their corresponding inlined methods.

+ +
#if UNITY_EDITOR
+public static class ScriptableObjectUtility
+{
+    /// <summary>
+    /// 
+    /// </summary>
+    /// <typeparam name=""T""></typeparam>
+    /// <param name=""path""></param>
+    /// <param name=""tagName""></param>
+    /// <param name=""asset""></param>
+    /// <returns></returns>
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    private static T CreateAssetInline<T>(string path, string tagName, T asset)
+        where T : ScriptableObject
+    {
+        if (path == string.Empty)
+            path = PathUtility.ASSETS_PATH_NAME;
+        else if (!string.IsNullOrEmpty(Path.GetExtension(path)))
+        {
+            path = path.Replace(Path.GetFileName(AssetDatabase.GetAssetPath(Selection.activeObject)), string.Empty);
+        }
+
+        Directory.CreateDirectory(path);
+
+        string fullGeneratedPath = AssetDatabase.GenerateUniqueAssetPath(Path.Combine(path, $""{tagName}.asset""));
+
+        AssetDatabase.CreateAsset(asset, fullGeneratedPath);
+
+        AssetDatabase.SaveAssets();
+        AssetDatabase.Refresh();
+
+        EditorUtility.FocusProjectWindow();
+
+        Selection.activeObject = asset;
+
+        return asset;
+    }
+
+    public static T CreateAsset<T>(string path, string tagName, T asset)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline(path, tagName, asset);
+    }
+
+    /// <summary>
+    /// Makes a copy of instance T and creates its asset at path.
+    /// </summary>
+    /// <typeparam name=""T""></typeparam>
+    /// <param name=""instance""></param>
+    /// <param name=""path""></param>
+    /// <param name=""tagName""></param>
+    /// <returns></returns>
+    public static T CreateAsset<T>(T instance, string path, string tagName)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(path, tagName, Object.Instantiate(instance));
+    }
+
+    // (string path, string tagName)
+
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    private static T CreateAssetInline<T>(string path, string tagName)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(path, tagName, ScriptableObject.CreateInstance<T>());
+    }
+
+    public static T CreateAsset<T>(string path, string tagName)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(path, tagName);
+    }
+
+    // (string path)
+
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    private static T CreateAssetInline<T>(string path)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(path, typeof(T).ToString());
+    }
+
+    public static T CreateAsset<T>(string path)
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(path);
+    }
+
+    // 
+
+    [MethodImpl(MethodImplOptions.AggressiveInlining)]
+    private static T CreateAssetInline<T>()
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>(AssetDatabase.GetAssetPath(Selection.activeObject));
+    }
+
+    public static T CreateAsset<T>()
+        where T : ScriptableObject
+    {
+        return ScriptableObjectUtility.CreateAssetInline<T>();
+    }
+}
+#endif
+
+",263948,,263948,,2/19/2019 0:36,2/19/2019 0:36,Methods linking bad/good practices,,1,8,,,,CC BY-SA 4.0,,,,, +387399,1,,,2/19/2019 2:38,,2,882,"

For example, I have a function which generates an array of random numbers.

+ +
int[] generateNum(int n) {
+   int[] result = new int[n];
+    /* Logic to generate random number */
+    ...............
+    return result;
+}
+
+ +

Should the space complexity of the above program be O(n) or O(1)?

+ +

This question is really about space complexity, and specifically about whether to count the data structure holding the result towards the space complexity.

+",329042,,329042,,2/19/2019 14:26,2/27/2019 16:29,Do we include output space in space complexity?,,1,8,1,,,CC BY-SA 4.0,,,,, +387400,1,387414,,2/19/2019 3:52,,2,249,"

In Scala, declaring a val as lazy means that its value won't be evaluated until it's used for the first time. This is often explained/demonstrated as being useful for optimization, in case a value might be expensive to compute but not needed at all.

+ +

It's also possible to use lazy in a way that it's necessary for the code to work correctly, rather than just for efficiency. For example, consider a lazy val like this:

+ +
lazy val foo = someObject.getList().find(pred) // don't use this until after someObject filled its list!
+
+ +

If foo weren't lazy, then it would always contain None, since its value would be evaluated immediately, before the list contained anything. Since it is lazy, it contains the right thing as long as it isn't evaluated before the list is filled.

+ +

My question: is it considered okay to use lazy in places like this where its absence would make the code incorrect, or should it be reserved for optimization only?

+ +

(Here is the real-world code snippet that inspired this question.)

+",275017,,,,,6/7/2019 15:38,"Is it a good idea to use ""lazy val"" for correctness?",,2,1,,,,CC BY-SA 4.0,,,,, +387402,1,387405,,2/19/2019 6:17,,0,52,"

I am fairly new to Docker and Kubernetes and I have a question I could not figure out the answer to myself. I am working on an application that does string matching on data extracted from multiple sources, but the problem is it's painfully slow. The bottleneck is two nested for loops: for each row of data frame X, it scans all rows of data frame Y and does a series of checks inside an if-elif construct to determine if the match is acceptable. The result is also supplied in the form of a data frame.

+ +

It's written in Python and mainly uses Pandas data frames. As far as I know it could be sped up only by a faster processor, which I currently do not have access to. I tried parallel processing but the overhead between the loops was too big and it resulted in even longer execution times (or my implementation was not good).

+ +

My question is, could I speed it up if I containerize with Docker and deploy to a Kubernetes cluster with 2-3 nodes on my LAN?

+ +

Keep in mind that it is a linear application, I did not write any code to make it ready for parallel processing, and we are talking about a single process, not more processes initiated by more users. In this case, does Kubernetes know to distribute the workload (maybe the processing of the various iterations on the rows of data frame X ?) on the nodes, and then put it all together to deliver the answer?

+ +

Thank you!

+",329054,,329054,,2/19/2019 6:27,2/19/2019 8:29,Can Kubernetes help with providing more processing power for the same request?,,1,0,,,,CC BY-SA 4.0,,,,, +387403,1,387406,,2/19/2019 6:17,,0,131,"

I'm building a user license portal, store, SSO, and mini CRM and admin for internal people to manage those users for a niche 3D software company, similar to Autodesk. Licenses are sold to both companies and single users. What is the best way to avoid company data duplication

+ +
  1. Not all companies have a tax ID and we do not collect tax. The product is international.
  2. Website was an option, but most single users do not have websites, only some* companies.
  3. A user might buy a product in ""France"" while another one in the ""UK"" at the same time and they don't know about each other, but they are from the same company.
  4. Some very large companies have branches in different countries or cities.
  5. User login is done with email address + password only.
  6. The majority of customers are single users without any company.
+",309026,,309026,,2/24/2019 2:53,2/24/2019 2:53,What is the best method to avoid company data duplication in mixed B2C and B2B product in account creation?,,2,4,,,,CC BY-SA 4.0,,,,, +387415,1,387416,,2/19/2019 9:56,,0,119,"

I have a situation where 3 different instances with the same method signature are doing their job repeatedly.

+ +
interface IArgs{
+    //args stuff
+}
+
+interface IExample{
+    void Populate(IArgs);
+}
+
+class ExampleA : IExample
+{
+    void Populate(IArgs a){
+        //todo
+    }
+}
+
+class ExampleB : IExample
+{
+    void Populate(IArgs b){
+        //todo
+    }
+}
+
+class ExampleC : IExample
+{
+    void Populate(IArgs c){
+        //todo
+    }
+}
+
+
+class MainExampleClass{
+    ExampleA classA;
+    ExampleB classB;
+    ExampleC classC;
+    //instantiation and other class stuff
+    void PopulateAll(IEnumerable<Data> dataCollection){
+        foreach(Data data in dataCollection){
+            classA.Populate(data);
+            classB.Populate(data);
+            classC.Populate(data);
+        }
+    }
+}
+
+ +

It's worth mentioning that the classes work with different args, but I solve that with one data-holder class containing all the arguments and implementing an interface (IArgs). So my question is: is there any good way, practice or pattern to solve this repeated calling of methods? The Factory pattern, as I understand it, returns one instance, but here I need all 3. Any suggestions would be appreciated.
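To illustrate the kind of thing I am imagining, here is a rough sketch of a composite that fans one call out to all registered instances (Python used just for brevity; I am not claiming this is the right pattern):

class CompositeExample:
    # Holds any number of objects exposing populate() and forwards a single call to all of them.
    def __init__(self, *children):
        self._children = list(children)

    def populate(self, args):
        for child in self._children:
            child.populate(args)

# composite = CompositeExample(ExampleA(), ExampleB(), ExampleC())
# for data in data_collection:
#     composite.populate(data)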

+",306616,,173647,,2/19/2019 10:07,2/19/2019 10:07,Calling same method on different instance (Polymorphism),,1,1,,,,CC BY-SA 4.0,,,,, +387419,1,387421,,2/19/2019 11:20,,0,237,"

Considering a class method that takes a ""vector"" (Tuple or List of either int or float) of defined values such as the following:

+ +
import sys
+from numpy import isnan, array, float64
+
+class Shape:
+    """"""
+    This is the Shape class.
+    Create 3D shapes described by a length 3 vector dimension input.
+    Example:
+    >>x=y=z=2.5
+    >>Shape([x,y,z])
+    """"""
+    def __init__(self, dimension):
+        self.dimension = array(dimension, dtype=float64, copy=False)
+        if any(isnan(self.dimension)) or len(self.dimension) != 3:
+            sys.exit('Bad dimension input')
+        ### more dimension sensitive code
+
+ +
+ +

How could one Type Hint this so it is communicated that I can have valid inputs such as the following?

+ +
Shape([1,2,3])
+Shape([1.0,2.0,3.0])
+Shape((1.0,2.0,3.0))
+
+ +
+ +

For now, I have the following hack which I think looks jarring but people like it:

+ +
import ...
+x = y = z = None  ## Placeholder type hinting(????)
+
+class Shape:
+    """"""
+    Lorem Ipsum
+    """"""
+    def __init__(self, dimension=[x, y, z]):
+        self.dimension = array(dimension, dtype=float64, copy=False)
+        if any(isnan(self.dimension)) or len(self.dimension) != 3:
+            sys.exit('Bad dimension input')
+        ### more dimension sensitive code
+
+ +

Of course this suggests nothing about the type of the vector data, but at least it shows my users a 3-element vector with known symbols (x, y, z).
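For comparison, this is the sort of annotation I have been experimenting with instead (just a sketch; I am not sure it is the idiomatic choice, and the length-3 requirement still has to be checked at runtime):

from typing import Sequence

class Shape:
    # Sequence[float] accepts lists and tuples of ints or floats;
    # Tuple[float, float, float] would be the stricter alternative if callers always pass tuples.
    def __init__(self, dimension: Sequence[float]) -> None:
        if len(dimension) != 3:
            raise ValueError('dimension must have exactly 3 components')
        self.dimension = tuple(float(c) for c in dimension)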

+",327923,,327923,,2/19/2019 11:28,2/19/2019 12:22,Python - Type Hinting specific sized Vectors,,1,4,,,,CC BY-SA 4.0,,,,, +387429,1,,,2/19/2019 13:39,,-3,138,"

I have been looking at BDD but there is something that keeps confusing me. Consider this user story:

+ +
Given a user has placed their order
+And the payment was accepted
+Then a confirmation email should be sent.
+
+ +

Based on this, I see different things being done:

+ +
  • Some sort of functional test to see whether an actual email was sent with a certain subject/content.
  • Something that is closer to a unit test, testing the domain object behaviour, i.e. asserting the value of a property like sendConfirmationEmail.
+ +

I guess I am confused about what exactly one is supposed to test in BDD, because ""business expectations"" is a bit vague. In the above user story, which of the two mentioned approaches is the one that is ""appropriate""?
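For reference, the second approach is roughly what I mean by this (a Python/unittest.mock sketch; OrderService and its methods are invented names, not a real API):

from unittest.mock import MagicMock

class OrderService:                          # hypothetical stand-in for the real domain service
    def __init__(self, mailer):
        self.mailer = mailer

    def place_order(self, order_id):
        pass                                 # persistence omitted for the sketch

    def accept_payment(self, order_id):
        self.mailer.send_confirmation(order_id=order_id)

def test_confirmation_email_sent_when_payment_accepted():
    mailer = MagicMock()
    service = OrderService(mailer=mailer)
    service.place_order(order_id=42)
    service.accept_payment(order_id=42)
    mailer.send_confirmation.assert_called_once_with(order_id=42)

test_confirmation_email_sent_when_payment_accepted()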

+",312892,,110531,,2/19/2019 17:51,2/19/2019 17:51,How exactly am I supposed to test business expectations?,,1,8,,,,CC BY-SA 4.0,,,,, +387435,1,,,2/19/2019 15:25,,2,52,"

Consider this example command line session...

+ +
$ git --version
+git version 2.19.0.windows.1    
+$ echo ""header"" > ancestor.txt
+$ cp ancestor.txt left.txt
+$ cp ancestor.txt right.txt
+
+ +

At this point, I've made an ancestor file and started two ""branches"". Now I'll make a change to each one.

+ +
$ echo ""on the left"" >> left.txt
+$ echo ""on the right"" >> right.txt
+
+ +

And another, with coincidentally identical lines.

+ +
$ echo ""footer"" >> left.txt
+$ echo ""footer"" >> right.txt
+
+ +

Now I want to merge my two branches together using git merge-file.

+ +
$ git merge-file --union left.txt ancestor.txt right.txt
+$ cat left.txt
+header
+on the left
+on the right
+footer
+
+ +

There is only one ""footer"" line in the merge output. My expectation is that because each branch file has footer separately and the ancestor doesn't, I should get two ""footer"" lines in the merge, one from the left and one from the right. (I am only expecting one ""header"", because it is in the ancestor.)

+ +

If I repeat the experiment up to the git call, but replacing ancestor.txt with an empty file, I get exactly the same output. Even if I fill ancestor.txt with junk, I still get the same output.

+ +

Am I doing this wrong? Have I found a bug in git? Is the ancestor file redundant? What's going on?

+",3912,,3912,,2/19/2019 15:30,2/19/2019 15:30,What impact does the ancestor file have with git merge-file?,,0,1,,,,CC BY-SA 4.0,,,,, +387436,1,,,2/19/2019 15:36,,12,1756,"

Should I define an interface for every public behavior class (excluding data classes)?

+ +

I've spent many hours searching and reading to find a clear answer. If I search ""Do you define an interface for every public class"", 90% of the answers say you should not. However, I have yet to find any such comment explain how to do unit testing otherwise.

+ +

Some say Moq can mock concrete classes, but only for members that are declared virtual, and since the constructors cannot be marked virtual, there is no way for Moq to prevent code from running in the constructors (AFAIK). I have no interest in marking every member of every class as virtual.

+ +

It seems the answers to this question fall into 3 categories:

+ +
  • those who test everything
  • those who test only part of the code
  • those who don't bother with unit testing
+ +

I've seen all the arguments on both sides already, but I still haven't seen any other good way of doing extensive unit testing of the code. Why then, are 90% of the people advocating against this?

+ +

The way I'm doing it for now is to place the interface at the top of the same file within a #region, so there is no increase in files, I can easily navigate to the implementation, and it doesn't clutter the code view. If some interface needs to be implemented several times, nothing prevents me from moving it into separate files later.

+ +

One of the main reasons for creating such interface is because of limitations of mocking frameworks. Let's say the next version of .NET allowed mocking frameworks to mock non-virtual methods, should I still create those interfaces?

+ +

Taking a simple example, I have class A, B and C. A depends on IB and IC for testing. Even if not needed for mocking, A still needs instances of B and C injected via dependency injection. Using interfaces is optional for dependency injection but I have yet to see good examples recommending to not use interfaces. So in this hypothetical scenario of not being needed for mocking, should I still create those interfaces or not?

+ +

And finally, if creating such interfaces is a good approach (which many disagree with), is there any tool that can auto-generate those interfaces at compile-time so I don't have to copy the method signatures and comments all the time?

+",328246,,,,,2/20/2019 17:58,Should I Have One Interface Per Class For Unit Testing?,<.net>,5,0,5,,,CC BY-SA 4.0,,,,, +387437,1,,,2/19/2019 15:55,,1,202,"

I would like some guidance on setting up my document structure in Elasticsearch. The company I work for has an app that stores around 20,000 new phone records each day in a SQL database. And we feel we could benefit from the features that Elasticsearch provides (fast searching). So we are going to start storing these records in an Elasticsearch database, and provide some search tools that will leverage this as well.

+ +

I'm new to NoSQL and Elasticsearch (my background is SQL). So I've read some tutorials on Elasticsearch (and NoSQL in general) to get me started on this project. And I think that I understand (from a general perspective) how to go about this. The basic structure I want to set up is this:

+ +
--Owner (ownerId)
+  --Call (callId, to, from, location)
+  --Call (callId, to, from, location)
+  --Call (callId, to, from, location)
+--Owner (ownerId)
+  --Call (callId, to, from, location)
+  --Call (callId, to, from, location)
+  --Call (callId, to, from, location)
+
+ +

It's really straightforward. Each business (business=owner) has many calls that we store. And each call also has some additional info (that will be search-able), such as caller names, phone numbers, locations. Also, each owner has a unique identifier, and each call has a unique identifier. Also, calls from different owners never cross-pollinate. We never have a reason to join them.

+ +

Upon reading some docs and watching some tutorials, I see 3 possible ways I could implement this structure. I would like some guidance (from seasoned Elasticsearch developers) on these 3 methods. I feel confident that each method would work. But I'm not sure what the long-term implications might be. I need to make sure the method I use supports a high volume of adds and updates (20,000 each day), and still provides fast searching. It also needs to support the initial import of our existing data (we have around 4 million call records already in the database).

+ +

Here are the 3 methods:

+ +
  1. Parent-Child Relationship (some info found here, among others).
  2. Utilize one call store for all owners, and store the ownerId in each Call document. When creating the search interface, I would add the ownerId as a hidden search param to make sure the user only retrieves calls for his/her business; see the sketch after this list. (Note: this resembles a one-to-many relationship created by a FK in traditional SQL.)
  3. Create a separate document store for each owner. For this one, I would create a separate call store for each owner, appending the ownerId to the name of the document store. So I would have a store called calls_8j934jok83 and another one called calls_98ged34h2, and so on (8j934jok83 and 98ged34h2 are separate ownerIds).
+ +
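To illustrate what I have in mind for method 2, here is a rough sketch using the official Python client; the field values are made up, and the exact request bodies differ between Elasticsearch/client versions:

from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

# Method 2: one index for all owners; ownerId is a keyword field used as a filter.
es.indices.create(index='calls', body={
    'mappings': {
        'properties': {
            'ownerId':  {'type': 'keyword'},
            'to':       {'type': 'keyword'},
            'from':     {'type': 'keyword'},
            'location': {'type': 'text'},
        }
    }
})

# Indexing one call document.
es.index(index='calls', id='call-1', body={
    'ownerId': '8j934jok83',
    'to': '555-0100',
    'from': '555-0199',
    'location': 'Some City',
})

# Every search is filtered by ownerId, so one owner never sees another owner's calls.
results = es.search(index='calls', body={
    'query': {
        'bool': {
            'filter': [{'term': {'ownerId': '8j934jok83'}}],
            'must': [{'match': {'location': 'Some City'}}],
        }
    }
})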

Having no experience in NoSQL or Elasticsearch, I feel like I am playing with fire here. Like I said, I feel that any of these three would ""work"", but I want to make sure I use best practices and that the structure I set up will be scalable (we have around 20,000 calls coming in each day now, but we are growing).

+",306017,,306017,,2/19/2019 16:07,2/20/2019 18:28,Some Guidance on Parent-Child Relationships in Elasticsearch,,1,0,,,,CC BY-SA 4.0,,,,, +387439,1,,,2/19/2019 16:12,,2,218,"

I am working somewhere where programming is an important part of the job, but where code review is something nobody has ever heard of.

+ +

Being kind of enthusiastic about programming, I've seen a lot of questions about code reviews, here and mostly on Stack Overflow, but I have never experienced it myself in a professional context.

+ +

For a little more context, I work in epidemiology research. There is a team of data managers who set up databases (Oracle SQL) based on raw data. Then every researcher independently writes code (in R or SAS) to query the database and perform their analysis. Code written by one researcher is usually not used by another, though. A study is judged on its results, so as long as the results look plausible, small errors can sneak through.

+ +

There is no code review to catch errors, neither in the data managers' team nor in the researchers' team. I think it could be very beneficial for both teams, and I'd like to convince the boss to consider it.

+ +

Unfortunately, googling ""manual code reviews guidelines"" doesn't give any useful insight into setting up a review process from scratch, only into improving an existing one.

+ +

How is code reviewing usually introduced in naive teams (teams that have never practised it)? Where could I find resources to show my boss the actual benefits and a methodology for setting up code review?

+",239460,,110531,,2/19/2019 17:51,2/19/2019 17:58,How to set up code reviewing in a naive team?,,1,6,,43516.89514,,CC BY-SA 4.0,,,,, +387446,1,387449,,2/19/2019 18:33,,0,80,"

I'm hoping someone can give some guidance on an issue I'm having. I have:

+ +
    +
  • A WebSocket service, where I have a single method on the server that handles all traffic.
  • +
  • Lots of different kinds of messages; each message has a ""type"", and we select which handler class to use based on that value.
  • +
+ +

In the HTTP world I would have a different method for each message type by creating a ""message routing"" configuration, i.e. the URL itself would contain the route to the correct method for the message being sent, e.g. GET service.com/{serviceName}/{method}.

+ +

In the WebSocket world I don't have that luxury: we can only afford a single WebSocket per client, and all messages from that client are sent to a single method, in our case onPost. I still want to be able to specify which method/class should handle each message, but I don't know what pattern I should be using to achieve that.

+ +

The best I've come up with so far is to use the Service Locator pattern, whereby I would locate the correct handler on the fly. Something similar to this pseudo-code (off the top of my head here):

+ +
interface IHandler
{
    string forMessageType { get; }
    void ProcessMessage(Message message);
}

class HandlerNumberOne : IHandler
{
    public string forMessageType => ""messageTypeOne"";

    public void ProcessMessage(Message message){ ... }
}

class HandlerLocator
{
    // Every IHandler implementation is resolved once from the DI container.
    static List<IHandler> handlers = Container.ResolveAll<IHandler>().ToList();

    public static IHandler getByMessageType(string messageType)
    {
        return handlers.First(h => h.forMessageType == messageType);
    }
}
+
+ +

I would then use that locator in the onPost method like this:

+ +
class Service{
+
+    void onPost(Message msg){
+
+        IHandler handler = HandlerLocator.getByMessageType(msg.MessageType);
+        handler.ProcessMessage(msg);
+    }
+}
+
+ +

My fellow comrades have thrown this to the fire, as they consider Service Locator a serious anti-pattern because ""it hides dependencies"". However, the desire to remove the (currently) 16 handler class dependencies we have injected into our Service constructor is leading me to grasp at pretty much anything to solve the issue.

+ +
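For contrast, the non-locator variant would presumably be to let the container inject every IHandler into the service as one collection and build a lookup from it; a rough sketch (usings omitted), not what we currently have:

class Service
{
    private readonly Dictionary<string, IHandler> handlers;

    // One collection dependency instead of sixteen separate constructor parameters.
    public Service(IEnumerable<IHandler> allHandlers)
    {
        handlers = allHandlers.ToDictionary(h => h.forMessageType);
    }

    public void onPost(Message msg)
    {
        handlers[msg.MessageType].ProcessMessage(msg);
    }
}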
    +
  • Are there any plugins for message routing with WebSockets that I'm missing? I would love to be able to send a message to a URL path and have it handled by a unique method automagically.
  • +
  • (My main question:) Are there any well known patterns I've missed that would solve our woes?
  • +
  • Why is the Service Locator pattern any worse than message routing? It seems to me that routing a message to a method in a service based on a path value is exactly what the Service Locator pattern does; am I wrong about that?
  • +
+ +

EDIT: I believe this question is different to this one because I'm asking about a specific scenario and for a preferable pattern to use, whereas the linked question is asking ""how to make a decision about which pattern to use"" in a much, much broader sense.

+",165319,,165319,,2/19/2019 21:47,2/19/2019 21:47,What pattern should I use to implement a Message Routing mechanism?,,1,3,,,,CC BY-SA 4.0,,,,, +387451,1,,,2/19/2019 22:17,,3,282,"

I have a web page that loads a very long list of custom web components, each with its own shadow DOM and a stylesheet shared by all instances.

+ +

Originally, I included the stylesheet as a CSS file reference in a <link> tag to keep the code more readable:

+ +
(function () {
  const template = document.createElement('template');
  template.innerHTML = `
    <link rel='stylesheet' href='elements/myelement.css' />
    <div> ... </div>
  `;
  customElements.define('my-element', class extends HTMLElement {
    constructor() {...}
    connectedCallback() {...}
  });
})();
+
+ +

This has the unfortunate effect (I think) of the style loading after the content, and the element briefly appearing unstyled.

+ +

The problem is solved by pasting the styling between <style> tags within the innerHTML string, but the result looks unwieldy and is difficult to read for a large stylesheet. One thing I tried was wrapping the value in an IIFE so the IDE could collapse it:

+ +
const style = (function () { return `...`; })();
template.innerHTML = `
  <style>${style}</style>
  ...
+
+ +

With the <style>-tag approach, you then get a duplicate <style> string in each shadow DOM, and I'm not sure whether there's a performance penalty.

+ +
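One approach I have not tried yet is constructable stylesheets, where a single CSSStyleSheet object is shared by every shadow root. Browser support is limited at the time of writing; in the rough sketch below, myStyles stands in for the CSS text and template holds just the markup (no <link> or <style> inside it):

const sheet = new CSSStyleSheet();   // created once, shared by every instance
sheet.replaceSync(myStyles);

customElements.define('my-element', class extends HTMLElement {
  constructor() {
    super();
    const root = this.attachShadow({ mode: 'open' });
    root.adoptedStyleSheets = [sheet];                   // no per-instance <style> text
    root.appendChild(template.content.cloneNode(true));  // markup-only template
  }
});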

Is there an efficient and readable way to style custom web components?

+",307701,,,,,2/20/2019 3:00,Efficient and readable method for styling of javascript webcomponents,,1,2,,,,CC BY-SA 4.0,,,,, +387458,1,,,2/20/2019 3:28,,-3,1530,"

I'm using TDD for my MongoDB data access layer, and I don't know what I should test.

+ +

I think that I shouldn't test whether the queries return what they are supposed to return, because that is MongoDB's concern.

+ +
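For example, the kind of test I am unsure about would look roughly like this; a sketch in Python, with mongomock standing in for a real server and every name made up:

import mongomock

def find_active_users(db):
    # The data access function under test: it builds the query and shapes the result.
    return [u['name'] for u in db.users.find({'active': True})]

def test_find_active_users():
    db = mongomock.MongoClient().mydb
    db.users.insert_many([
        {'name': 'Ana', 'active': True},
        {'name': 'Bob', 'active': False},
    ])
    assert find_active_users(db) == ['Ana']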

Some help please!

+",329135,,328741,,2/20/2019 21:23,2/20/2019 21:23,What should I test with unit tests for data access layer? (MongoDB),,1,2,,,,CC BY-SA 4.0,,,,, +387470,1,,,2/20/2019 11:31,,4,859,"

I have a medium-scale e-commerce application. Over time, our monolithic project has become quite heavy in terms of code base. I explored solutions for this and found that microservices are one option, but they look like a very premature optimization that involves too many risks and longer development cycles. So I thought of splitting the monolithic app into multiple apps along business lines, and I came up with the following three:

+ +
    +
  1. Storefront (Website)
  2. Admin Backend (CRM, Reporting etc.)
  3. Seller Platform
+ +

By creating these three separate projects, I see the following benefits:

+ +
    +
  1. We can develop them independently, and wherever a common code base is required, we can add it as a library to the projects that need it.
  2. These applications can be deployed independently.
  3. Wherever we need the applications to communicate, we can write limited APIs to transfer data.
  4. Having a common database will enable us to keep the data completely consistent by retaining all foreign keys.
+ +

+ +

Having said that, I am concerned about the following points:

+ +
    +
  1. Firstly, is this the right approach? Are there any caveats in it?
  2. Will having a common database accessed by three apps, all writing to the same tables, cause any issues later on as we scale?
+",52534,,52534,,2/20/2019 11:49,2/20/2019 19:29,Splitting application into multiple but keeping database same,,4,4,1,,,CC BY-SA 4.0,,,,, +387471,1,,,2/20/2019 11:48,,0,187,"

I am designing an interface for reading and writing video frames from and to various inputs and outputs. Stream operators seem to me a superb alternative to named functions for the task. This is the gist of it:

+ +
struct FrameSource
+{
+    virtual FrameSource & operator>>( cv::Mat & frame ) = 0;
+    virtual ~FrameSource() = default;
+};
+
+struct FrameSink
+{
+    virtual FrameSink & operator<<( const cv::Mat & frame ) = 0;
+    virtual ~FrameSink() = default;
+};
+
+ +

Now, supposing this is an OK design, how should I signal end of stream (end of video; last picture in the folder; deinitialized camera)?

+ +

The options I have considered:

+ +
    +
  • An end-of-iteration exception, like Python's StopIteration. Sounds slow, dangerous and not idiomatic. No way to indicate this behaviour in the header.
  • +
  • Return an empty cv::Mat{}. Sounds slow, easy to miss (leading to infinite loops), breaks the invariant that any frame can be returned, and is not idiomatic.
  • +
  • cv::Mat f; while( stream.get(f) ) { ... } is idiomatic, but it involves named functions and the return status is easy to miss.
  • +
  • A variation of the above via the conversion operator operator bool() const; (a rough sketch of this option follows below the list).
  • +
  • Derive from std::basic_i/ostream. But those are character based.
  • +
  • Derive from iterator and provide begin(), end().
  • +
+ +
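To make the conversion-operator option concrete, here is a minimal sketch of what I have in mind, mirroring how std::istream is used in extraction loops (not something I have battle-tested):

#include <opencv2/core.hpp>

struct FrameSource
{
    virtual FrameSource & operator>>( cv::Mat & frame ) = 0;
    // Becomes false once an extraction has failed, so the loop below terminates.
    virtual explicit operator bool() const = 0;
    virtual ~FrameSource() = default;
};

// Usage:
//   cv::Mat frame;
//   while( source >> frame )
//   {
//       process( frame );
//   }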

My application doesn't mandate streams; I am using them because of the subjective advantages of:

+ +
    +
  • ease of use
  • +
  • not having to hold a bunch of large files in working memory simultaneously.
  • +
+",54268,,54268,,2/21/2019 10:18,2/21/2019 10:18,How to signal end of stream?,