diff --git "a/stack_exchange/SE/SE 2019.csv" "b/stack_exchange/SE/SE 2019.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2019.csv" @@ -0,0 +1,95286 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense,,,,, +384775,1,,,1/1/2019 10:56,,-3,427,"
If a file foo.cpp already includes foo.h, and foo.cpp requires some types from a standard library header (for example, string.h), which is better: to include string.h in foo.cpp, or in foo.h?
+ +For example, Guideline #9 in this tutorial recommends including it in the .cpp, if possible, but I don't understand exactly why.
+",324699,,9113,,1/2/2019 10:49,1/2/2019 10:49,Is it a bad practice to include stdlib header file from a header file corresponding to the source file that needs that stdlib header?,I'm designing an API for a Python library. The user will create objects with several parameters. In most cases, the user will either leave these at their default values or will set them globally, for all objects. However, it should be possible also to set them individually on a per-object basis.
+ +The most obvious way to do this is to do something like this:
+ +# myModule.py
+
+contrafibularity_threshold = 10.7
+pericombobulation_index = 9
+compunctuous_mode = False
+
+class Thing:
+ def __init__(self):
+ self.contrafibularity_threshold = None
+ self.pericombobulation_index = None
+ self.compunctuous_mode = None
+
+ def get_contrafibularity_threshold(self):
+ if self.contrafibularity_threshold is not None:
+ return self.contrafibularity_threshold
+ else:
+ return contrafibularity_threshold
+
+ def get_pericombobulation_index(self):
+ if self.pericombobulation_index is not None:
+ return self.pericombobulation_index
+ else:
+ return pericombobulation_index
+
+ def get_compunctuous_mode(self):
+ if self.compunctuous_mode is not None:
+ return self.compunctuous_mode
+ else:
+ return compunctuous_mode
+
+
+This works as I would like: it allows the user to do myModule.contrafibularity_threshold = 10.9
to set the global value while also being able to do someThing.contrafibularity_threshold = 11.1
to set it for a particular object. The default may be changed at any time and will affect only those objects to which a specific value has not been assigned.
However, the code above contains a lot of repetition, and seems prone to hard-to-notice bugs if I make a mistake copy-pasting the code. Is there a better (less repetitive, less error-prone, more Pythonic) way to achieve these goals? I don't mind changing the API, as long as the user can change the defaults at both the global and per-object level.
+ +(One could arguably improve the above code by using @property
, but that wouldn't resolve the repetitive code issue.)
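+ +(For concreteness, a minimal, hypothetical sketch of that @property variant for a single attribute, falling back to the module-level default; each attribute would still need its own near-identical block, so the repetition remains:)
+
+class Thing:
+    def __init__(self):
+        self._contrafibularity_threshold = None  # per-object override, if any
+
+    @property
+    def contrafibularity_threshold(self):
+        # fall back to the module-level default when no override is set
+        if self._contrafibularity_threshold is not None:
+            return self._contrafibularity_threshold
+        return contrafibularity_threshold
+
+    @contrafibularity_threshold.setter
+    def contrafibularity_threshold(self, value):
+        self._contrafibularity_threshold = value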
In Chapter 10 of Clean Architecture, Martin gives an example for the Interface Segregation Principle. I have some trouble understanding that example and his explanations.
+ +In this example we have three separate Users (Classes) that use a Class called OPS. OPS has three methods, op1, op2, and op3. Each of these is only used by one user (op1 only by User1 and so on).
+ +Martin now tells us that any change in OPS would result in a recompilation for the other classes since they all depend on OPS, even if the change was performed in a method that is of no interest to them. (So a change in op2 would require a recompilation of User1.)
+ +He argues that thus there should be three separate interfaces, one for each method. The OPS class then implements all of them. The users only use the interface they use. So you have User1 implementing only Interface1 and so on.
+ +According to Martin, this would stop the otherwise necessary redeployment of, say, User1 if the implementation of ops2 in OPS was changed (since User1 does not use the interface that describes op2).
+ +I had my doubts and did some testing. (Martin explicitly used Java for his example, so I did as well.) Even without any interfaces any change in OPS does not cause any user to be recompiled.
+ +And even if it did (which I thought it would), using three interfaces and then having the same class implement all three of them makes no sense to me either. Wouldn't any change in that class require all of the users to be recompiled, interface or no? Is the compiler smart enough to separate where I did my changes and then only recompile those users that rely on the interface describing the method I changed? I kind of doubt that.
+ +The only way how this principle makes sense to me is if we were to split the OPS class into three different classes, interfaces or no. That I could understand, but that's explicitly not the answer Martin gives.
+ +Any help would be greatly appreciated.
+",324715,,,,,1/1/2019 17:55,Interface Segregation Principle in Clean Architecture,I've made an engine that plays Connect Four using the standard search algorithms (minimax, alpha-beta pruning, iterative deepening, etc). I've implemented a transposition table so that the engine can immediately evaluate a same position reached via a different move order, and also to allow for move-ordering.
+ +The problem with the TT is that on each step of the iterative deepening process, the amount of bytes it takes up grows by at least double. The TT stores objects that represent important info for a given position. Each of these objects is 288 bytes. Below, depth limit is how far the engine searches on each step of iterative deepening:
+ +depth limit = 1 - TT size = 288 bytes (since just one node/position looked at).
+ +depth limit = 2 - TT size = 972 bytes.
+ +depth limit = 3 - TT size = 3708 bytes.
+ +depth limit = 4 - TT size = 11664 bytes
+ +depth limit = 5 - TT size = 28476 bytes.
+ +....
+ +depth limit = 12 - TT size = 11,010,960 bytes.
+ +depth limit = 13 - TT size = 22,645,728 bytes.
+ +And now at this point the .exe file crashes.
+ +I'm wondering what can be done about this problem. In Connect Four the branching factor is 7 (since there are usually 7 possible moves that can be made, at least in the beginning of the game). The reason the TT isn't growing by a factor of 7 on each step is due to pruning methods I've implemented.
+ +This problem isn't a big deal if the engine only searches up to depth 8-9, but my goal is to get it to refute Connect Four by going all the way to the end (so depth limit = 42).
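+ +(Aside, a hedged illustration of one common mitigation, sketched in Python for brevity: cap the table at a fixed number of entries and use a replacement scheme, so memory stays bounded no matter how deep the iterative deepening goes. The names and the depth-preferred policy here are assumptions, not a prescription:)
+
+TT_CAPACITY = 1_000_000  # tune to available RAM
+table = {}
+
+def tt_store(key, depth, score, flag, best_move):
+    old = table.get(key)
+    if old is not None and old[0] > depth:
+        return  # keep the deeper, more valuable entry
+    if old is None and len(table) >= TT_CAPACITY:
+        table.pop(next(iter(table)))  # crude eviction; real engines use fixed buckets
+    table[key] = (depth, score, flag, best_move)
+
+def tt_probe(key):
+    return table.get(key)
+
+ +(Packing each entry into a small tuple or even a single integer, instead of a 288-byte object, attacks the same problem from the other side.)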
+",287384,,,,,1/3/2019 0:23,Game Playing AI - Strategy to overcome the transposition table taking up too much memory?,I am using Kafka. I am developing a simple e-commerce solution. I have a non-scalable catalog admin portal where products, categories, attributes, variants of products, channels, etc are updated. For each update, an event is fired which is sent to Kafka.
+ +There can be multiple consumers deployed on different machines and they can scale up or down as per load. The consumers consume and process the events and save changes in a scalable and efficient database.
+Order of events is important for me. For example, I get a product-create event. A product P is created and lies in category C. It is important that the event for the creation of category C is processed before the product-create event for product P. Now if there are two consumers, and one consumer picks up the product-create event for product P while the other consumer picks up the event for the creation of category C, it may happen that the product-create event is processed first, which will lead to data inconsistency.
+There can be multiple such dependencies. How do I ensure ordered processing, or find some alternative that ensures data consistency?
Two solutions are in my mind right now:
+ +Requeuing has the issue that the event may by then be stale and no longer required. Example:
+ +The same issue is applicable to the second solution (waiting and retrying).
+ +The above issues can be solved by maintaining versions for events and ignoring an event if the targeted object (which is going to be modified by the event) has a higher version than that of the event.
+But I am very unsure of the pitfalls and the challenges of the above solutions that might not be very obvious right now.
PS: Stale data works for me but there should be no inconsistencies.
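+ +(A hedged sketch of one standard approach, assuming the kafka-python package: give every event that must stay ordered relative to the others, e.g. a category and the products under it, the same message key. Kafka preserves order within a partition, and a keyed message always lands on the same partition, so one consumer in the group processes that chain sequentially:)
+
+from kafka import KafkaProducer
+import json
+
+producer = KafkaProducer(
+    bootstrap_servers='localhost:9092',
+    key_serializer=str.encode,
+    value_serializer=lambda v: json.dumps(v).encode(),
+)
+
+# hypothetical events; the key is the root of the dependency chain
+producer.send('catalog-events', key='category-C', value={'type': 'category-create', 'id': 'C'})
+producer.send('catalog-events', key='category-C', value={'type': 'product-create', 'id': 'P', 'category': 'C'})
+producer.flush()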
+",257969,,257969,,1/2/2019 13:41,1/2/2019 14:53,Maintaining order of events with multiple consumers in a typical pub sub scenario,I am trying to get my head around ""event driven"" microservices. I understood, there are several techniques and patterns, like event notification, event sourcing, CQRS, etc that can help us to achive that. Very simply said, it boils down to some kind of a command has been sent, which leads to a change of the systems state. If the change was applied, the system emits an event. Other services can listen to this events.
+ +But what about querying a microservice for data? Let's say we have an API gateway and some services behind that gateway. Now we want to get a list of all users, which are stored in the user-service. The API gateway could simply send an HTTP GET request to the user-service to receive the list of users. In a way this might lead to tight coupling, but it seems like the most plausible approach.
Can you share your knowledge and experience: when should someone not use HTTP requests for querying a microservice, and what alternatives are there?
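+ +(For illustration, a minimal sketch of the main alternative, a locally materialized read model kept up to date from the events the user-service already publishes, CQRS-style; all names here are hypothetical:)
+
+users_read_model = {}
+
+def on_event(event):
+    # event is a dict such as {'type': 'user-created', 'id': 1, 'name': 'Ann'}
+    if event['type'] == 'user-created':
+        users_read_model[event['id']] = {'name': event['name']}
+    elif event['type'] == 'user-deleted':
+        users_read_model.pop(event['id'], None)
+
+def list_users():
+    # served locally: possibly stale, but no runtime coupling to user-service
+    return list(users_read_model.values())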
+",324705,,,,,11/7/2020 20:02,Querying in event driven microservices,When specifying a period is there ever a case where passing the number of the month as integer is preferred over passing two datetimes? For example GetTotalSumByMonth(int month)
vs GetTotalSum(DateTime begin, DateTime end)
.
It seems to me that the second option has clear advantages since it is more generic and less ambiguous. You wouldn't be able to pass a month of last year in the first option since it's never given. And some people might think the number of the month starts with 0 instead of 1, like in Javascript or C, which could lead to confusion.
+ +Are there any more pros or cons which might tip the scales?
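+ +(A hedged sketch, in Python for brevity since the snippets above are C#-flavoured: a month is just a special case of a (begin, end) pair, so the generic signature can subsume the specific one via a thin wrapper:)
+
+import calendar
+from datetime import datetime
+
+def month_range(year, month):
+    last_day = calendar.monthrange(year, month)[1]
+    return datetime(year, month, 1), datetime(year, month, last_day, 23, 59, 59)
+
+def get_total_sum(begin, end):
+    ...  # the generic range query
+
+def get_total_sum_by_month(year, month):
+    return get_total_sum(*month_range(year, month))  # convenience wrapper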
+",277345,,,,,1/2/2019 20:28,Passing a period as datetimes vs as integer,In my understanding of Uncle's Bob Clean Architecture, a Use Case (Interactor) is responsible for orchestrating the Entities. At the same time, it represents Application-specific rules. The business rules live in Entities.
+ +I have a need to orchestrate the Entities to achieve a Business-specific rule. What is the best approach for it?
+ +I am designing the architecture of a new application we are building. We wanted something quite modular. Considering we are open source and we want other people to be able to easily add new features, we opted for a component-based architecture.
+ +We have a back end that is made in Java, and a front end that will be made with JavaFX and an other front end which is a website. We want the back end to be the same for users that use our application through a website or through a mobile/desktop device (JavaFX). All of my team, including me, are students and do not have a lot of experience designing new software.
+ +The problem I am facing is that I want to split main features in packages, so they form components that work independently of the front end and can be easily be modified. The thing is that sometimes, different components will interact with the same model. Let me illustrate my problem so you can understand better.
+ +This would be a very partial overview of my IntelliJ project structure:
+ +-/src/main/java/com/ourDomainName/ourAppName
+ -/ApplicationCore
+ -/DriversCore
+ -/LoggingCore
+ -/Signal.Java
+ -/ParsersCore
+ -/Signal.Java
+
+
+ApplicationCore
would be the package that holds all the back end. DriversCore
, LoggingCore
and ParsersCore
are packages that all represent a feature. They are components. I want ParsersCore
and LoggingCore
to both use a certain model
class, Signal
. My question is where should I put said file? This situation doesn't just apply to one file, there are many files in my model
that I want different components to use. I know many will just say that I should have a package called model
and put all my model
there, but from what I've seen, I should keep all the model relative to a component in the same package as that component. So, what exactly is the procedure when you have many model
classes that you want to be shared across different components?
I'm looking for some kind of better compilation of principles which takes the old basic concepts (DRY, KISS, etc.) and applies them to OOP-related concepts like abstract classes, interfaces, etc.
+ +Some reasoning behind this quest:
+ +I find some interpretations of the SOLID principles very compelling, but I've found so diverse interpretations of these principles on the web that I find them close to useless. The idea of a software design principle IMHO, is to provide a base framework for developers to discuss software development best practices. But when the advocators can't even agree on what their principles mean, it's time to look for alternatives.
+ +I have also found that people trying to follow these principles create extremely over-modularized architectures, mostly because they decompose simple implementations into ever smaller modules dispersed over the project, which makes it close to impossible to discern the purpose of these micro-modules in the context of the whole project.
+ +Summarizing, I just want to know if there is any other well known name for a different group of OOP principles that are more tied to the old basic KISS, DRY, etc...
+ +The question has been considered too broad by the community, so let's see how this goes for clarification:
+ +Let A be a set of renown names of sets of OOP design principles (call them ""High Level Principles""). I know SOLID to be an element of A. I found GRASP seems to be another element of A, which I found out thanks to a comment from user949300 on this question.
+ +Let B be the set of the General Principles listed here which includes these and only these: ML, KISS, MIMC, DRY, GP and RoE. Let's call them ""Low Level Principles""
+ +Let's say that there is a function T that measures how much a High Level Principle from A is tied to the Low Level Principles from B (as a whole). Eg: T(e) = Tieness of e
where e is an element from A.
I am asking if anyone can name any x such that T(x) >> T(SOLID)
. Where "">>"" means ""considerably higher than"".
Such an an answer should explain how T(x) is being estimated. I understand the estimation of T will be highly subjective, but with a good explanation, subjective can be useful.
+ +How can I tell if an answer is better than another answer? I'll consider the explanation and the number of new elements of A that is provided in the answer, but any answer mentioning at least an element from A different than ""SOLID"", and explaining how T(x) is higher for this element shall be considered as correct.
+ +I hope that makes the question clear enough for the community...
+",314241,,314241,,1/6/2019 4:34,1/6/2019 4:34,Are there any well known alternatives to the SOLID principles for OO programming?,A lot of tutorials on DDD I studied are mostly covering theory. They all have rudimentary code examples (Pluralsight and similar).
+ +On the web there are also attempts by a few people to create tutorials covering DDD with EF.
+If you study them even briefly, you quickly notice they differ a lot from one another. Some people recommend keeping the app minimal and avoiding additional layers (e.g. a repository on top of EF); others decidedly generate extra layers, often even violating SRP by injecting DbContext
into Aggregate Roots.
I sincerely apologize if I'm asking an opinion-based question, but...
+ +When it comes to practice - Entity Framework is one of the most powerful and widely-used ORMs. You will not find a comprehensive course covering DDD with it, unfortunately.
+ +Important aspects:
+ +Entity Framework brings UoW & Repository (DbSet
) out of the box
with EF your models have navigation properties
with EF all of the models are always available off DbContext
(they are represented as a DbSet
)
Pitfalls:
+ +you cannot guarantee your child models are only affected via Aggregate Root - your models have navigation properties and it's possible to modify them and call dbContext.SaveChanges()
with DbContext
you can access your every model, thus circumventing Aggregate Root
you can restrict access to the root object's children via ModelBuilder
in OnModelCreating
method by marking them as fields - I still don't believe it's the right way to go about DDD plus it's hard to evaluate what kind of adventures this may lead to in future (quite skeptical)
Conflicts:
+ +without implementing another layer of repository which returns Aggregate we cannot even partly resolve the abovementioned pitfalls
by implementing an extra layer of repository we are ignoring the built-in features of EF (every DbSet
is already a repo) and over-complicating the app
Please pardon my ignorance, but based on the above info, either Entity Framework isn't adequate for Domain-Driven Design, or Domain-Driven Design is an imperfect and obsolete approach.
+ +I suspect each of the approaches has its merits, but I'm completely lost now and don't have the slightest idea of how to reconcile EF with DDD.
+ +If I'm wrong - could anyone at least detail a simple set of instructions (or even provide decent code examples) of how to go about DDD with EF, please?
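+ +(For what it's worth, the pitfalls above boil down to encapsulation; a toy sketch of what a repository-returned aggregate root is supposed to guarantee, in Python for brevity (EF's backing-field mapping is the C# analogue), with all names hypothetical:)
+
+class OrderLine:
+    def __init__(self, sku, qty):
+        self.sku, self.qty = sku, qty
+
+class Order:  # aggregate root
+    def __init__(self):
+        self._lines = []  # private: no public navigation property
+
+    def add_line(self, sku, qty):
+        if qty <= 0:
+            raise ValueError('quantity must be positive')  # invariant lives here
+        self._lines.append(OrderLine(sku, qty))
+
+    @property
+    def lines(self):
+        return tuple(self._lines)  # read-only view for queries/persistence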
+",175145,,208831,,5/25/2019 13:25,5/25/2019 13:25,Pitfalls of Domain Driven Design with Entity Framework,While setting up a nodejs server with a mariadb database, I found this:
+ +++ +While the recommended method is to use the question mark placeholder, you can alternatively allow named placeholders by setting this query option. Values given in the query must contain keys corresponding to the placeholder names.
+
This seems odd to me as the named placeholders seem more readable and the ability to use each instance multiple times makes it more flexible. For example, consider this with the ?
method:
connection.query(
+ ""INSERT INTO t VALUES (?, ?, ?)"",
+ [1,""Mike"",""5/12/1945""]
+)
+
+
+A named version could look like
+ +connection.query(
+ { namedPlaceholders: true,
+ ""INSERT INTO t VALUES (:id, :name, :dob)"" },
+ { id: 1, name: ""Mike"", dob: ""5/12/1945"" }
+)
+
+
+It also seems much more likely for the data to be in an object format over an array anyway. So why recommend a less readable option? Why not make namedPlaceholders
default to true instead?
I have recently graduated from university and started work as a programmer. I don't find it that hard to solve ""technical"" issues or do debugging with things that I would say have 1 solution.
+ +But there seems to be a class of problems that don't have one obvious solution -- things like software architecture. These things befuddle me and cause me great distress.
+ +I spend hours and hours trying to decide how to ""architect"" my programs and systems. For example - do I split this logic up into 1 or 2 classes, how do I name the classes, should I make this private or public, etc. These kinds of questions take up so much of my time, and it greatly frustrates me. I just want to create the program - architecture be damned.
+ +How can I get through the architecture phase more quickly and onto the coding and debugging phase which I enjoy?
+",278692,,73508,,1/3/2019 15:08,1/8/2019 17:18,How to stop wasting time designing architechture,This is a very broad question, but maybe someone has a worthwhile response.
+ +There is a general synchronization issue that often has to be solved, but always seems to be difficult. Here's an example:
+ +I was working on a remote system and had an ssh-connection and a remote desktop open at the same time for some reason. I happened to create a file on the desktop in shell, and of course it also appeared on the remote desktop view.
+ +For this to happen one of two things must take place:
+ +1) the desktop session must be constantly polling the filesystem for changes. Costly, ugly, and of course unlikely.
+ +2) The system knows that this change made by the ssh-session requires action on the remote desktop side, and updates the view. This is neat and elegant in a sense, but maintaining an accurate capability to decide when any action performed by any process in the system should cause this update is horrendously complex.
+ +In this case the culprit is the linux kernel (or Desktop environment?) and I presume what it does is the option 2). It's also very common to encounter small bugs and issues that are clearly the result of this kind of issue not being taken care of.
+ +This kind of a problem where any of multiple changes to a common resource can have an effect on other instances, but determining when is very tedious pops up in many places. +Is there a general approach to this? +Do we form separate trackers that know how the instance is sensitive to changes and that object can be interrogated? +Does every change to the resource (filesystem in this case) include a stage of making sure this kind of stuff takes place? If so, that too must compound to be a massive ordeal. +Does someone happen to know how linux handles this specific example case?
+",324822,,,,,1/22/2021 9:07,How state updates to existing instances/sessions are generally done?,I was hired to program a basic, plain text site for a local business that amongst other things, provides basic pricing quotes through a Javascript Applet. For obvious reasons, it seemed unnecessary to me to in anyway encrypt the traffic to and from the site. However, the person who hired me strongly requested that I set up HTTPS on the site ""for security reasons"". Assuming I provide minimal upkeep, is there any further risk associated with setting up the SSL certification?
+",324824,,,,,1/3/2019 17:49,Is there any risk in creating a SSL certified site?,A water user can submit an Application for a water right with the hope of getting a Permit to use water, which might later become a BonaFideWaterRight. The right holder may apply to Transfer any of the above items (or others not listed for brevity) by changing ownership, moving it to new ground, splitting it in half and selling the remainder to another individual, etc...
+ +The above-emboldened states of being for a water right (and other non-water-right things as well) have come to be known here as Processes. All of the above processes have lots of individual work items (sub-processes? But confusingly they're still referred to as Processes) in common, but the only one we need concern ourselves with here is the PointOfDiversion.
+ +I'm in the midst of an effort to refactor code that I inherited regarding these processes.
+ +First the abstract parent classes I've created (omitting a fair amount of ISomethingProcess
interfaces being inherited along the way) . . .
public abstract class WREditProcess : IWREditProcess { }
+
+public abstract class WaterRightsProcess : WREditProcess
+{
+ public IWaterRightLookupRepository QueryWaterRights { get; }
+ protected ILocationQueries LocationRepository { get; }
+
+ protected WaterRightsProcess(IWaterRightLookupRepository queryWaterRights, ILocationQueries locationRepository)
+ {
+ QueryWaterRights = queryWaterRights;
+ LocationRepository = locationRepository;
+ }
+ /* Work performed in virtual methods using those repositories */
+}
+
+public abstract class PointOfDiversionProcess : WaterRightsProcess, IPointOfDiversionProcess
+{
+ protected IPODLocationRepository PODLocationRepository { get; }
+ protected IPointOfDiversionRepository PODRepository { get; }
+
+ protected PointOfDiversionProcess(IWaterRightLookupRepository queryWaterRights, IPODLocationRepository locationRepository, IPointOfDiversionRepository pointOfDiversionRepository)
+ : base(queryWaterRights, (ILocationQueries)locationRepository)
+ {
+ PODLocationRepository = locationRepository;
+ PODRepository = pointOfDiversionRepository;
+ }
+ /* Work performed in virtual methods using those repositories */
+}
+
+
+There's a large amount of concrete work done in those abstract classes using the repositories passed in from their child classes' constructors. This continues to the concrete classes (the one for transfers shown here in its entirety) . . .
+ +public class TransferPointOfDiversionProcess : PointOfDiversionProcess
+{
+ protected override ILog Log => LogManager.GetLogger(typeof(TransferPointOfDiversionProcess));
+
+ /// <summary>
+ /// Constructor for a TransferPointOfDiversionProcess
+ /// </summary>
+ /// <param name=""baseWaterRightRepository"">Repository for base water right information</param>
+ /// <param name=""locationRepository"">Repository that abstracts the locPODTransfer table (if such a thing existed, but instead it's a clump of XML)</param>
+ /// <param name=""pointOfDiversionRepository"">Repository that abstracts the PointOfDiversion table</param>
+ [SuppressMessage(""ReSharper"", ""SuggestBaseTypeForParameter"")]
+ public TransferPointOfDiversionProcess(ITransferRepository baseWaterRightRepository,
+ IPODLocationRepository locationRepository,
+ TransferPointOfDiversionRepository pointOfDiversionRepository)
+ : base(
+ baseWaterRightRepository,
+ locationRepository,
+ pointOfDiversionRepository)
+ {
+ }
+
+ /// <inheritdoc />
+ public override string DisplayName => ""Transfer"";
+
+ /// <inheritdoc />
+ public override string ConfigLayerID => ""locPODWRTransfer"";
+
+ /// <inheritdoc />
+ public override string Name => ""locPODWRTransfer"";
+
+ /// <inheritdoc />
+ public override string CorrelateProcessName => ""Transfer"";
+}
+
+
+Note that the constructor for TransferPointOfDiversionProcess
asks for a concrete TransferPointOfDiversionRepository
class rather than the IPointOfDiversionRepository
interface that its parent specifies. This is critical -- especially for transfers because the TransferPointOfDiversionRepository
overrides all sorts of things from its parent because transfers are stored in a wholly different way from everything else. For the same reason, I'm planning a similar TransferPointOfDiversionLocationRepository
class to take the place of the IPODLocationRepository
parameter as well but haven't gotten there yet.
ReSharper tickles me with the ""Parameter can be declared with base type"" warning on this parameter, suggesting the IPointOfDiversionRepository
type be used instead. I disabled this warning for each constructor, but now I can't shake the feeling that I'm getting this warning because of design flaws--failing to abstract something away or the need for some other pattern to indicate clearly the need for a specific implementation of an interface or something like that--but I can't figure out what. Can anyone suggest improvements (or, even better, tell me not to put so much faith in ReSharper)?
My question is that is there any reason for Thread class to implement Runnable interface by itself. Are there any specific use cases where overriding Thread makes more sense than implementing Runnable by design
+",277489,,277489,,1/3/2019 17:35,2/4/2019 11:50,Why does the Thread Class implement Runnable interface,New to DDD I have a simple case a I would like to model using DDD approach
+ +2 entities Student and Course
+ +Relevant property for Student are StudentId and Budget
+ +Relevant property for Course are CourseId and Price
+ +Student and Course are entities that can exists on its own and have their own life cycle
+ +Business requirements:
+ +1) Student can book one course (CourseId is fk for Student table)
+ +2) Student can book the course only if the user's budget is higher or equal to the course price.
+ +3) Changes of course price doesn’t affect the students have already booked the course.
+ +4) When the student book the course the his budget remains unchanged (maybe changes later at the end of the course)
+ +5) Student budget can be modified setting a different amount but new amount have to be higher or equal to the price of the course the user booked. +Setting a lower amount should throw a runtime error.
+ +What the way to model this simple case following domain driven design? Where to enforce the two busines rules (points 2 and 5)?
+ +As a Course can exist without a Student I can’t define the aggregate where Student is the root entity and Course its child entity. Can I?
+ +But at the same time the business rule defined at point 5 seems to me be an invariants. Is it?
+ +So where and how to apply this rules?
+ +I tried a service approach, can work for the first simple rule (point 2) but fail for the rule described at point 5
+ +var student = studentRepository.Get(srtudentId);
+var course = courseRepository.Get(courseId)
+
+var studentService = new StudentService();
+
+studentService.SubscribeStudentToCourse(student, course);
+
+studentRepository.Update(student);
+
+
+studentService.ChangeStudentBudget(student, 100000);
+
+studentRepository.Update(student);
+
+
+when I update the student with the new budget someone else can change the course price making the student budget inconsistent
+ +public class StudentService
+{
+ public void SubscribeStudentToCourse(Student student, Course course)
+ {
+ if (student.Budget >= course.Price)
+ {
+ student.CourseId = course.CourseId;
+ }
+ }
+
+ public void ChangeStudentBudget(Student student, decimal budgetAmount)
+ {
+ if (student.CourseId != null)
+ {
+ var studentCourse = courseRepository.Get(student.CourseId);
+ if ( studentCourse.Price <= budgetAmount)
+ {
+ student.Budget = budgetAmount;
+ }
+ else
+ {
+ throw new Exception(""Budget should be higher than studentCourse.Price"");
+ }
+ }
+ }
+}
+
+",261565,,,,,1/3/2019 19:53,DDD enforcing business rules,Can I draw mutual dependencies between two artifacts in a deployment diagram as a dashed line with two arrow heads? Or is this a no-go in UML?
+",324848,,,,,1/4/2019 8:39,UML dependency in a UML deployment diagram with two arrow heads,As the code below, class Foo1 implements interface IFoo, which has a property of IData.
+ +public interface IFoo
+{
+ public IData Data { get; set; }
+}
+
+public interface IData { ... }
+
+public class DataA : IData {...}
+public class DataB : IData {...}
+
+public class Foo1 : IFoo
+{
+ private DataB _data;
+ public IData Data
+ {
+ get { return _data; }
+ set { _data = new DataB(value); }
+ }
+}
+
+
+If the user assigns the Data property of Foo1 with an object of DataA, and then gets the property value back later. He will get an object of DataB instead of DataA. Does this violate any OO principles? Thanks.
+",154886,,78230,,1/3/2019 14:01,1/3/2019 22:42,Interface properties implementation,I am trying to figure out the best way to decorate html. What I mean is replacing specific syntax string with the actual content.
+ +Kind of like, razor syntax in Asp.net MVC using <%= %>.
+ +Currently, I have an HTML page with design and I just need to replace tags (for ex: <%HISTORICTABLE%>) with actual content.
+ +I have 5-6 tags in HTML that needs to be replaced with the original html.
+ +I might add new/remove tags ('behaviour') from html.
+ +I think decorator pattern should do the trick or would you think its an overkill?
+",264551,,,,,1/3/2019 12:40,decorator pattern for generating complete html,Emacs
starts up as an editor (which probably has m
functions that takes n
inputs) and an Elisp
interpreter running in the background (which can be used to change the behavior of the program - probably so much so that it is no longer emacs :-)).
Why do programs need an extension that is an interpreter? Is there any theory behind this? What fundamental feature does it provide, so that you can make a similar decision for your own project?
+ +Assuming that this is how a (linux) program is in memory,
+ + + +is it because without an interpreter (lying in the text segment
), your program is just a finite machine that can execute a finite set of instructions in the text segment
(real code a.k.a machine instructions) present in the program layout
? However, when you add something like an interpreter, you can add new instructions (probably in the heap
, because data and instruction, both are just bits?) and make it behave like an infinite machine?
I think it is the same as asking why do you need interpreter in the first place(!), but my question actually came from this specific scenario in Emacs
like editors. So I would like to understand this from both perspectives.
I have a main window and I amgetting data from http client service while form load.
+ +public class MainWindow: Window
+{
+ private IClientService service;
+ public MainWindow(IClientService service)
+ {
+ this.service = service;
+ GetClient();
+ }
+ public async Task GetClient()
+ {
+ try
+ {
+ IsDownloading = true;
+ var client = await service.GetClient();
+ if(client!=null)
+ {
+ if (client.Status)
+ Visibility = Visibility.Collapsed;
+ else
+ Visibility = Visibility.Visible;
+ ShowClientRegistrationForm();
+ }else{
+ HideClientRegistrationForm();
+ }
+ }
+ catch
+ {
+ MessageBox.Show(""Access error."", ""Error"", MessageBoxButton.OK, MessageBoxImage.Error);
+ throw;
+ }
+ finally
+ {
+ IsDownloading = false;
+ }
+ }
+}
+
+
+My GetClient()
method does 3 operations.
I think this is an antipattern. And violates the single responsibility principle. How can I get rid of it?
+",160523,,2722,,6/5/2019 16:20,6/5/2019 16:20,How can I get rid of this antipattern,I was given a more or less complex task. The goal is to interpret a SQL Check Constraint inside my C# .NET libary. In our case we have a simple UI that displays what is inside the database. We do not want out UI-Components to allow any values that wouldnt even be possible because there is a check constraint. Since everything has to be dynamic (the database can change), I cannot just hardcode the UI components.
+ +I have managed to retrieve data about every check constraint inside my SQL Server database (Northwind) with the following query:
+ +SELECT
+ [cck].[name] AS [CONSTRAINT_NAME],
+ [s].[name] AS [SCHEMA],
+ [o].[name] AS [TABLE_NAME],
+ [cstcol].[name] AS [COLUMN_NAME],
+ [cck].[definition] AS [DEFINITION],
+ [cck].[is_disabled] [IS_DISABLED]
+FROM sys.check_constraints cck
+ JOIN sys.schemas s ON cck.schema_id = s.schema_id
+ JOIN sys.objects o ON cck.parent_object_id = o.object_id
+ JOIN sys.columns cstcol ON cck.parent_object_id = cstcol.object_id AND cck.parent_column_id = cstcol.column_id
+
+
+This query gives me the following result:
+ + + +As you can see, there is a column 'DEFINITION', which pretty much shows what the CC does in a human-readable medium. Here comes my problem: How can my .NET libary understand this check constraint so that I can adjust my UI components to now allow any values that violate the CC?
+ +I've thought about those two possible solutions:
+ +Number 1 is probably the fastest if done right, but very complex (at least for me since I do not have any experience with expressions). Number 2 would be slower but the easiest way to do it, if possible.
+ +Sadly I couldnt find any good help for both of my solutions.
+ +Also: At least for now I will only care about CC on the column-level. Handling table-constraints will be another challenge
+ +Now my quesion is: What is an ""easy"" way to do something like this. It definetly does not have to be the fastest solution.
+",308504,,308504,,1/3/2019 21:36,2/4/2019 16:01,How can I interpret a SQL Check Constraint inside my C# .NET class libary?,I have a class Person.
+ +Person {
+ String firstName;
+ String lastName;
+ String Date dob;
+ String email;
+ String mobileNumber;
+ String address;
+}
+
+
+To add a person, I have following REST APIs:
+ +POST /person
+ +{
+""firstName"":""Donald"",
+""lastName"":""Trump"",
+""dob"":""01/01/1990""
+}
+
PUT /person/{id}/addContact
+ +{
+""email"":""donald.trump@us.com"",
+""mobileNumber"":""+1-123-456-789""
+}
+
PUT /person/{id}/addAddress
+ +{
+""address"":""white house""
+}
+
Now there are two ways to do that -
+ +Use same Person class and keep adding new information in the same object from API 1, 2 and 3.
Create separate models for all three APIs.
+ +PersonMain {
+ String firstName;
+ String lastName;
+ String Date dob;
+}
+
+PersonContact {
+ String email;
+ String mobileNumber;
+}
+
+PersonAddress {
+ String address;
+}
+
Finally, we also need our main Person class because all that information is going into single Person table and finally this whole Person object will be used at every place.
+ +What do you think which approach is good and why?
+",324909,,,,,1/4/2019 15:13,Which is better solution - having separate model class against each REST API or keep adding info in single object?,I am writing a potentially large web application using Angular 7, where I came across a design problem. My angular applications until now have been relatively small, so there was no problem keeping whole code in one project (divided in modules with lazy loading). However now that an application can grow in size I find it hard to keep all code in the same project as it makes project hard to navigate.
+ +My thoughts were that I could divide my application into multiple angular libraries by functionalities, which poses the following questions: do I really gain some advantage with such approach or do I just create overhead with managing dependencies making development harder because of having to link in all dependencies? If this option is viable, what would be good way to split code into multiple libraries? I have looked around for some articles about large angular apps but haven't found any with my solution - all were just one project - are there any good articles on such matter?
+",274265,,,,,1/4/2019 13:02,Split large Angular codebase to libraries,Currently, my thoughts are that GET requests would be feasible by using the concept of screen scraping combined with a cron job that runs at a set interval to scrape data from the GUI and sync to my own database.
+ +However, I'm not quite sure how I would handle actions that seek to mutate the database that sits behind the GUI. I am quite certain I would need to interface directly with the GUI, but what tools are available that could help automate this by programmatically controlling the GUI?
+ +Also, since an overall architecture such as this is far from conventional, I'm curious what strategies might be utilized to help scale a system such as this.
+ +Note: It is acceptable for data returned from a GET request to be stale for at least as long as the cron job interval, and for POSTs and PUTs and the like to complete sometime in the future, let's say half an hour.
+ +Note: Maybe my train of thought is completely idiotic and there's a better angle. I'd love to know.
+",280324,,280324,,1/4/2019 20:51,1/29/2020 23:01,"Is it possible to layer an API (REST, GraphQL, etc.) in front of data that is currently only accessible via an enterprise desktop GUI?",In laymen's terms, what is the difference between opcodes and operands in assembly language programs? I understand that one involves where to get the data and one involves what is to be performed on the data, but is there a way to understand it in more detail?
+",313211,,,,,1/4/2019 15:36,Opcodes vs Operands,Yay or nay? I have several related but separate services that are to be run in different processes. They execute a particular task unique to the service. Their call signature is similar, but the name of the service changes. For example.
+ +Service 1:
+:5000/Invoice/<id>
+:5000/Customer/<id>
+
+Service 2:
+:5001/Invoice/<id>
+:5001/Customer/<id>
+
+
+Each of the calls has e.g. GET and POST methods associated with it. I'd like to refactor this to be:
+ +:5000/Invoice/<id>/service1
+:5000/Customer/<id>/service1
+:5000/Invoice/<id>/service2
+:5000/Customer/<id>/service2
+
+
+These calls would then delegate to the services themselves. Notice there is only one port or address to call the entire service instead of a port for each service on its own. So I'm thinking that adding a layer that calls the relevant service locally would be the way to go.
+ +Is this a good approach? Is it more intuitive? It does add a layer of calling things again, so it might introduce some delay to requests, but maybe the trade off is worth it. Are there other ways of doing it? I'm rather new to web development, so I don't know much about common practices. If it makes a difference, I'm using Python and Flask.
+ +There is one service that is used more often and the others and is more critical. Perhaps the other requests could be routed through that service.
+",301321,,301321,,1/4/2019 10:51,1/4/2019 14:39,Abstracting a set of services behind a common interface,Let's pretend I have an 'Book' entity, that can contain many 'Chapter' entities, having both their own unique IDs. A chapter must belong to a book, it cannot exists on its own (ie: there is a required foreign key in the chapter's table).
+ +There is a screen where the chapter's content can be edited. So far, we considered books as aggregate roots, and everthing that was done on chapters was done through aggregates that had the parent book as root. All was great.
+ +Suddenly, we get a requirement for the chapter editing screen, in which we need to add a dropdown list to be able of changing in which book we want the chapter to appear (from a list of account owned books), which breaks our current way of doing things.
+ +How should I approach this? The application is SQL based, so being the relationship one-2-many, the operation is essentially changing the FK value in the chapter's table... but DDD wise, I believe it is more complex, since there may situations in which we need to update the book information (number of chapters, etc ... ). We work in transactional fashion, we cannot use eventual consistency for this.
+ +Making a chapter an aggregate root itself?
Making a aggregate with a composite root between the current document and the one to where I want to assign the chapter to?
Thanks.
+",324934,,,,,1/4/2019 10:25,DDD: Re-assign an entity from one aggregate to other,I am in search of information on how I to manage code in git flow and methodology to test and work with it :
+ +Not sure if I am clear in my question : but the fact is as now, for every new projects that I am dealing with, I first start to analyse the need of my client and chose to start from the branch that seem the closer to their needs : in the end I have as many branches and specialisation as clients... It's not maintenable anymore... So please give advice and strategy on modeling and git flow branching...
+",285517,,,,,1/4/2019 10:36,how to build a stable API product and allow specification per project?,I have just created a function which checks whether a ipv4 is public or not. I have not heavily tested it yet since it is kind of practically impossible to do so (because I do not know where to start).
+ +My algorithm is based on an article from this website.
+ +++ +Public IP addresses will be issued by an Internet Service Provider and + will have number ranges from 1 to 191 in the first octet, with the + exception of the private address ranges that start at 10.0.0 for Class + A private networks and 172.16.0 for the Class B private addresses.
+
Is this algorithm correct (implemented here in C++) ?
+ +struct IpAddress {
+ uint8_t oct1;
+ uint8_t oct2;
+ uint8_t oct3;
+ uint8_t oct4;
+};
+
+bool isIpPublic(const IpAddress &ip){
+ if (ip.oct1 >= 1 && ip.oct1 <= 191){ // not class C
+ if (ip.oct1 != 10){ // not class A (all of class A is private)
+ if (ip.oct1 != 172 && ip.oct2 != 16){ // not class B (172.16.x.x is private)
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
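+ +(On the 'practically impossible to test' point, a hedged sketch: Python's standard ipaddress module can serve as a reference oracle, and since the function only inspects the first two octets, exhaustive comparison over them is cheap. Note that is_private follows the IANA special-purpose registry rather than the classful article quoted above, so any mismatches it prints are exactly the cases worth a second look:)
+
+import ipaddress
+
+def is_public_like_above(o1, o2):  # mirrors the C++ logic literally
+    return 1 <= o1 <= 191 and o1 != 10 and (o1 != 172 and o2 != 16)
+
+for o1 in range(256):
+    for o2 in range(256):
+        addr = ipaddress.ip_address(f'{o1}.{o2}.0.1')
+        if is_public_like_above(o1, o2) != (not addr.is_private):
+            print('mismatch at', addr)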
+",324765,,324765,,1/4/2019 13:58,1/4/2019 13:58,Is my algorithm for determining whether a ipv4 is public or private correct?,Can anyone tell me what does ""machine"" means in Compiler Theory? Does it mean computer in general or operating system? Actually, the problem is I understand the definition of machine language as ""the language understand by the computer"". But does machine here refers to anything specific other than computer.
+ +I was reading dragon book Compilers: Principles, Techniques, and Tools. In the class professor told that Java is both compiled and interpreted language. I didn't understand the definition so I referred to the book. I still don't get the following paragraph:
+ +++",324958,,5099,,1/4/2019 19:25,1/4/2019 22:49,Meaning of Machine in Compiler Theory,Java language processors combine compilation and interpretation, + as shown in Fig. 1.4. A Java source program may first be compiled into + an intermediate form called bytecodes. The bytecodes are then interpreted by a virtual machine. A benefit of this arrangement is that bytecodes compiled on one machine can be interpreted on another machine, perhaps across a network.
+
I'm not sure what the correct procedure is, when you have a question based off an answer you read but it is a seperate question that arose because of the answer provided.
+ +the answer in question Which HTTP verb should I use to trigger an action in a REST web service? +
+ +Walkthrough of my method, where this is relevant
+ +[HtpPut(""StartDate/{id}"")]
+public async Task<IActionResult> StartDate(int id)
+{
+ //do checks to see if resource exists, and authorisation .
+ //start backend task
+ //if task successfully starts update 'isStarted' field for the entity with the inputted id
+ //return status code 200 if there is no errors
+}
+
+
+Question +when designing an API that adheres to REST as much as possible, is it okay practice in a situation like above to use a 'HttpPut' or 'HttpPatch' verb and allow the API method not to check for a Patch doc or resource? ie: the user sends a request, with whatever resource or patch doc they wish and the server does not care as long as the request id is valid and the user is authorized.
+ +secondary question if this adheres to REST(or even if it deviates from REST), is what I am doing a good solution that is acceptable, or is there a cleaner design I should be implementing for a situation like this?
+",277313,,,,,1/4/2019 21:33,Is it okay to use a 'HttpPut' or 'HttpPatch' verb and allow the API method not to check for a Patch doc or resource?,I have a base type of Entity
, and multiple implementations, Enemy
, Bunker
, Projectile
I have separated these entities into their own containers so I can pass them to different classes to perform different actions on them. However it is becoming clear now this may not have been the best approach. I am currently writing the collisions between the Projectile
and Enemy
/Bunker
. As they have their own separate lists I'm having to write multiple functions to handle the collisions.
The enemies are stored in a 2d grid using std::vector<std::vector<std::unique_ptr<Enemy>>>
The bunkers are stored in a vector std::vector<std::unique_ptr<Bunker>>
The projectiles are stored in a vector std::vector<std::unique_ptr<Projectile>>
Here are the collision functions so far
+ +Projectile -> Enemy collisions
+ +void ProjectileEnemyCollisions()
+{
+ auto projectileIterator = projectiles.begin();
+
+ while (projectileIterator != projectiles.end()) {
+ auto enemyRowIterator = enemies.begin();
+ while (enemyRowIterator != enemies.end()) {
+ std::vector<std::unique_ptr<Enemy>>const& column = *enemyRowIterator;
+ auto enemyColumnIterator = column.begin();
+
+ while (enemyColumnIterator != column.end()) {
+ if (projectiles.size() == 0) {
+ break;
+ }
+
+ std::unique_ptr<Projectile>const& projectile = *projectileIterator;
+ std::unique_ptr<Enemy>const& enemy = *enemyColumnIterator;
+
+ if (m_collisionManager->Collision(projectile->GetBoundingBox(), enemy->GetBoundingBox())) {
+
+ //collision
+
+ }
+ else {
+ ++enemyColumnIterator;
+ }
+ }
+ ++enemyRowIterator;
+ }
+
+ if (projectiles.size() != 0) {
+ if (projectileIterator != projectiles.end())
+ ++projectileIterator;
+ }
+
+ }
+
+}
+
+
+Projectile -> Bunker collisions
+ +void ProjectileBunkerCollisions()
+{
+ auto projectileIterator = projectiles.begin();
+
+ while (projectileIterator != projectiles.end()) {
+
+ std::unique_ptr<Projectile> const& projectile = *projectileIterator;
+
+ auto bunkerIterator = bunkers.begin();
+
+ while (bunkerIterator != bunkers.end()) {
+
+ if (projectiles.size() == 0) {
+ break;
+ }
+
+ std::unique_ptr<Bunker> const& bunker = *bunkerIterator;
+
+ if (m_collisionManager->Collision(projectile->GetBoundingBox(), bunker->GetBoundingBox())) {
+
+ //collision
+
+ }
+ else {
+ ++bunkerIterator;
+ }
+
+
+ }
+
+ if (projectiles.size() != 0) {
+ if (projectileIterator != projectiles.end()) {
+ ++projectileIterator;
+ }
+ }
+ }
+}
+
+
+All of these types are of Entity
, so is there a more efficient way to iterate over them? I feel like having three loops for the enemies, and then having another two loops to check the bunkers seems counter-intuitive. I'm unsure which approach is better, grouping all the entities into a single container and then iterating over them once, or separating them out into different containers like I have now, but having to iterate over them multiple times.
I have also split up the entities so that I don't have to pass around data that isn't required, i.e. for the enemy specific logic, it only requires Enemy
objects.
Entity.h
+ + class Entity {
+
+ friend class MovementManager;
+
+ public:
+ Entity(std::unique_ptr<Sprite> sprite) : m_sprite(std::move(sprite)) {
+
+ };
+
+ virtual void Update(DX::StepTimer const& timer) = 0;
+ virtual void DealDamage(int damage) = 0;
+
+ bool IsDead() {
+ return m_health == 0;
+ }
+
+ Sprite& GetSprite() const {
+ return *m_sprite;
+ }
+
+ XMFLOAT3 GetPosition() const {
+ return m_position;
+ }
+
+ BoundingBox const& GetBoundingBox() {
+ return *m_boundingBox;
+ }
+
+
+ protected:
+ std::unique_ptr<Sprite> m_sprite;
+ std::unique_ptr<BoundingBox> m_boundingBox;
+
+ XMFLOAT3 m_position;
+ XMFLOAT3 m_scale;
+ XMFLOAT3 m_rotation;
+
+ int32_t m_health;
+
+ XMFLOAT3 m_velocity;
+ XMFLOAT3 m_maxVelocity;
+ XMFLOAT3 m_slowdownForce;
+ float m_movementSpeed;
+ float m_movementStep;
+
+
+ };
+
+
+Most recent implementation using the idea from the comments
+ +void HandleCollisions()
+{
+ std::vector<std::shared_ptr<Projectile>>const& projectiles = m_projectileManager->GetProjectiles();
+ std::vector<std::vector<std::shared_ptr<Enemy>>>const& enemies = m_enemyManager->GetEnemies();
+ std::vector<std::shared_ptr<Bunker>>const& bunkers = m_bunkerManager->GetBunkers();
+
+ std::vector<std::unique_ptr<EntityBoundingBox>> boundingBoxes;
+ //projectiles
+ for (std::shared_ptr<Projectile>const& projectile : projectiles) {
+ std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(projectile->GetBoundingBox(), std::weak_ptr<Entity>(projectile));
+ boundingBoxes.push_back(std::move(boundingBox));
+ }
+
+ //enemies
+ for (unsigned int i = 0; i < enemies.size(); ++i) {
+ for (unsigned int j = 0; j < enemies[i].size(); ++j) {
+ std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(enemies[i][j]->GetBoundingBox(), std::weak_ptr<Entity>(enemies[i][j]));
+ boundingBoxes.push_back(std::move(boundingBox));
+ }
+ }
+
+ //bunkers
+ for (std::shared_ptr<Bunker>const& bunker : bunkers) {
+ std::unique_ptr<EntityBoundingBox> boundingBox = std::make_unique<EntityBoundingBox>(bunker->GetBoundingBox(), std::weak_ptr<Entity>(bunker));
+ boundingBoxes.push_back(std::move(boundingBox));
+ }
+
+ CheckEntityCollisions(boundingBoxes);
+}
+
+void CheckEntityCollisions(std::vector<std::unique_ptr<EntityBoundingBox>>& boundingBoxes) {
+
+ for (std::unique_ptr<EntityBoundingBox>& entity1 : boundingBoxes) {
+ for (std::unique_ptr<EntityBoundingBox>& entity2 : boundingBoxes) {
+ if (entity1 == entity2) continue;
+
+ //if the entity has already been removed, continue
+ auto tmp = entity1->GetEntity().lock();
+ auto tmp2 = entity2->GetEntity().lock();
+ if (!tmp || !tmp2) {
+ continue;
+ }
+
+ if (m_collisionManager->Collision(entity1->GetBoundingBox(), entity2->GetBoundingBox())) {
+ m_eventManager->Fire(Events::EventTopic::COLLISIONS_ENTITY_HIT, { { (void*)&entity1 }, { (void*)&entity2 } });
+ }
+
+ }
+ }
+}
+
+",324963,,324963,,1/5/2019 11:51,1/5/2019 11:51,Architecture of iterating over polymorphic types,I currently have two derived classes, A
and B
, that both have a field in common and I'm trying to determine if it should go up into the base class.
It is never referenced from the base class, and say if at some point down the road another class is derived, C
, that doesn't have a _field1
, then wouldn't the principle of ""least privilege"" (or something) be violated if it was?
public abstract class Base
+{
+ // Should _field1 be brought up to Base?
+ //protected int Field1 { get; set; }
+}
+
+public class A : Base
+{
+ private int _field1;
+}
+
+public class B : Base
+{
+ private int _field1;
+}
+
+public class C : Base
+{
+ // Doesn't have/reference _field1
+}
+
+",100503,,100503,,1/4/2019 17:02,1/4/2019 22:02,When to move a common field into a base class?,In my application, I have a finite number of question types, but the order in which they're asked and whether they're asked at all is not known up-front.
+ +An example analogy is a hotel booking process, during the process you may be asked a number of questions, like whether you want late check-out, rent-a-car, breakfast-selection.
+ +interface IAncillary
+{
+ string FormType { get; }
+ object GetViewData();
+ void SaveResponse(object response);
+ void Skip();
+}
+
+class LateCheckOutAncillary : IAncillary
+{
+ public FormType { get; } = ""late-check-out"";
+
+ public object GetViewData()
+ {
+ return new LateCheckOutOption[]
+ {
+ new LateCheckOutOption(""2pm"", 50m),
+ new LateCheckOutOption(""4pm"", 75m)
+ };
+ }
+
+ public void SaveResponse(object response)
+ {
+ // record in database (string response).
+ // potentially add another ancillary
+ }
+
+ public void Skip()
+ {
+ // record in database.
+ // potentially add a different ancillary or
+ // remote other ancillaries
+ }
+}
+
+
+My initial thought is that the State Design Pattern is most applicable, however, the problem for me is that the view data format and response format is different per ancillary. It'll most likely be represented as a Wizard to the end-user, but I haven't found any design pattern that solves this.
+ +All ancillaries have a Skip
option which is to be used if the client does not understand the FormType
.
The ancillaries use object
for view data and object
for response data, so if there's something that can account for that too it would be nice.
Ultimately this will need to be represented as an HTTP interface, however, I'm still wrapping my head around how I would express it with an object oriented language first.
+ +What design pattern would be best used for representing a set of sequential questions where each question is in a different format?
+",61302,,,,,6/3/2019 23:02,Design pattern for an indeterminate number and format of questions,I know the sites are not geared for recommendations so I am hoping to pose this question in a way that doesn't ask for recommendations. Questions comments are welcome.
+ +I am just getting involved in wanting to create a .net Core API for one of the projects we are working on. I have read a little on the topic and what I kind of have a hard time understanding is the authentication piece of it.
+ +Maybe I am just making a big deal out of nothing and it is as simple as:
+ +https://stackoverflow.com/questions/38977088/asp-net-core-web-api-authentication
+ +But I wanted to know since this is an API I assume I need to worry about authentication and that in some way if the authentication fails just send them along to a not authenticated page otherwise allow for usage/entry. How does this authentication piece really work (specific to .net core).
+ +Is the post I made above the recommended practice for a basic scheme to authenticate folks to my API or should I be using some other mechanism? What resources (books, videos) are there (I know there are a ton via the internet but a lot just seem to be glossing over this topic)?
+",13870,,,,,1/4/2019 20:11,Am I making API creation difficult when it comes to authentication?,I may have a tough one for you.
+ +I have a machine in the wild that is and will probably continue to be compromised. The machine is owned by a user who will be unable to keep it secure.
+ +I must have this machine pull from git. It must also automatically install all pulls without restart (no startup solutions).
+ +I would prefer a platform agnostic solution.
+ +I have a few objectives: +1). Email remote admin with logs of all pulls, making sure this process cannot be subverted or altered +2). Authenticate all git pulls in some manner without the auth being able to be cracked by an adversary
+ +I hope you all can help.
+",324988,,,,,1/5/2019 17:19,Authenticate Git Pulls on a Compromised Machine,I'm developing a Python library, and I'm also developing some code that uses it. Currently they are in the same git repository, but I want to separate out the library part into a separate repo, in preparation for eventually* releasing it.
+ +However, I'm unsure of the right way to work with two repositories. The library has a complex API that's likely to change a lot as I develop it, and at this stage of the project I'm usually developing new features simultaneously with code that uses them. If I need to restore things to a previous working state, I will need to roll both projects back to previous commits - not just the last working state for each project individually, but the last pair of states that worked together.
+ +I am not a very advanced git user. Up to now I have used it only as an undo history, meaning that I don't use branches and merges, I just do some work, commit it, do some more work, commit that, and so on. I am wondering what's the minimal change to this workflow that will allow me to keep both projects in sync without worrying.
+ +I'd like to keep things as simple as possible given that I'm a single developer, while bearing in mind that this project is likely to become quite complex over time. The idea is to make small changes in order to minimise disruption at each step.
+ +*A note on what I meant here: the project is nowhere near a releasable state, and I'm not planning to release it in anything like its current state. (""Repository"" in this context means ""folder on my hard drive with a .git
in it"", not a public repository on github.) I asked this question because I thought that putting it in a separate repository would be something I needed to do early on for the sake of my own management of my code, but the answer from Bernhard convinces me this is not the case.
We are building a new application for a client to manage their cases. They are already using their existing system in which they are storing files associated to the cases in an FTP folder. There is an attachment table (~1M rows) which maps the cases with the ftp file location. As part of the migration the client also wants to move away from FTP to a cloud storage provider (Azure). Currently there is roughly 1TB of files in the FTP folder which we need to move to Azure.
+ +Current architecture:
+ +In the FTP there is no folder structure; they are just dumping the files and storing the links in the Attachment table. In Azure, however, we would need to create a folder structure, so we cannot simply copy-paste the same files into Azure.
+ +There are a couple of approaches:
+ +Option 1:
+ +Option 2:
+ +It would really be helpful to understand which is the best approach we can take. Are there any other approaches apart from the above?
+ +Also, Option 1 could be run in parallel (multiple cases in one shot) - what could the limitations of this be? Option 2 would require at least 1.2 TB of local space, which is a little hard to get considering the company's current logistical limitations.
+",91791,,,,,1/13/2019 13:35,Architecture for File migration from FTP to cloud service,Hopefully, this is the right forum for this type of question..
+ +We have a set of common entities which are 'shared' throughout the company - much like Master Data Services (MDS) data. Everyone has differing ways of maintaining said data...most of which are painful and/or lacking.
+ +So...I created a working 'demo' using the SQL Service Broker (SSB) to show how we can easily & seamlessly propagate the 'shared' data. Of course, this data is centrally managed & spoke-applications (themselves) do not change said data.
+ +Another person wants to use SignalR to propagate the 'shared' data to application databases. And, I love SignalR. However, to me, SignalR is ""real-time"" front-end ""componentry""...not a data transfer service solution for MDS-styled data.
+ +I see the broker as the right tool for this job. And frankly, to me...just because you CAN do something...doesn't mean you SHOULD. But I am open to being wrong.
+ +(1) Am I wrong or right? (2) Why or why not?
+ +Thanks for the help.
+",23450,,,,,9/19/2019 5:20,Propagating MDS Data - SQL Service Broker or SignalR?,Clean architecture decouples an app's core from the presentation/UI layer. The UI is just a plugin, replaceable (eg, web-based to desktop) without impacting the core.
+Many data science apps mix code, user inputs, text, graphics and other outputs in one notebook, eg, Jupyter. Everything seems coupled: the domain, UI, presentation, persistence.
Q: How to design such an app cleanly, with the notebook maximally decoupled? Or are notebooks inherently incompatible with clean architecture?
+Perhaps I could have an independent module with core functionality. The notebook would call this module, without defining any non-trivial functionality. Would this, however, allow enough decoupling or even fit with a notebook?
+Why:
+I'll be developing an app for a client who's only used Excel. The app will predict cost effectiveness of medical treatments and will need MCMC simulations, regression and other stats.
+I plan to implement it in Python with Jupyter or the nteract notebook, pushed by Netflix: https://medium.com/netflix-techblog/tagged/nteract. However, this may eventually prove unsuitable for the client, as Jupyter is mainly used by those who program it themselves. There're other potential pitfalls, eg, https://docs.google.com/presentation/d/1n2RlMdmv1p25Xy5thJUhkKGvjtV-dkAIsUXP-AL4ffI/edit#slide=id.g362da58057_0_1.
+Ideally, I could easily swap between notebook types or change over to a desktop GUI.
I would like your tips about implementing a command line interface to interact with a running Java application.
+ +Example:
+ +The Java Application is a webserver and a cli-client should interact with it:
+ +1) Start the server application: java -jar webserver.jar
+ +2) Get the status of the running application: java -jar webserver.jar --status, or run other commands like java -jar webserver.jar --add-user Paule --password 1234, which would add an entry to a hashmap in the running application.
+ +Does anyone know a best-practice tutorial about this?
+ +Implementing a HTTP/TCP/UDP/UNIX-Socket would be one solution for interaction.
+ +Another solution would be reading external resources, placing commands in a file for example.
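+ +For the socket option, a minimal sketch of what the in-process side could look like (this is only an illustration - the port number and command names are made up):
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.PrintWriter;
+import java.net.ServerSocket;
+import java.net.Socket;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+public class AdminPort implements Runnable {
+    private final Map<String, String> users = new ConcurrentHashMap<>();
+
+    @Override
+    public void run() {
+        try (ServerSocket server = new ServerSocket(9999)) { // assumed admin port
+            while (true) {
+                try (Socket client = server.accept();
+                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
+                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
+                    String line = in.readLine();
+                    if (line == null) continue;
+                    String[] cmd = line.split("" "");
+                    switch (cmd[0]) {
+                        case ""status"" -> out.println(""OK, "" + users.size() + "" users"");
+                        case ""add-user"" -> { users.put(cmd[1], cmd[2]); out.println(""added "" + cmd[1]); }
+                        default -> out.println(""unknown command"");
+                    }
+                }
+            }
+        } catch (IOException e) {
+            e.printStackTrace();
+        }
+    }
+}
+
+The webserver would start this on a background thread; java -jar webserver.jar --status then becomes a thin client that connects to the port, sends one line, and prints the reply.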
+ +What is your way to implement this?
+ +Is there a technical term for interaction with a running thread?
+ +Thanks in Advance
+",325045,,,,,1/6/2019 11:05,How to implement a CLI interaction with running java programm?,I have a question about architecture in .NET.
+ +My architecture is like this :
+ +Projet :
+ - DAL (Data Access Layer)
+ - BLL (Business Logic Layer)
+ - DTO (Data Transfer Object)
+ - IHM (man/machine interface)
DAL : Access to the database (CRUD). It references DTO.
+BLL : Logic layer; does all the logic processing and makes the connection between IHM and DAL. This layer references DAL and DTO.
+IHM : Presentation layer (ASP MVC); this layer has a reference to BLL and DTO.
+DTO : I put the EDMX (Entity Data Model) in this layer (cross-cutting).
My question is about the EDMX. I put it in the DTO layer in order to make the objects accessible to all the other layers. In my IHM layer I map DTO objects to ViewModels, to send the view only the fields it needs.
+ +I see that in other projects they put the EDMX in the DAL, but then they create objects in each layer and map between them. It's unpleasant and it's code duplication.
+ +Is it bad to put the EDMX in DTO, and why?
+ +Regards
+",325076,,325076,,1/6/2019 15:24,1/6/2019 17:02,Is my Architecture correct?,<.net>When I develop apps I reach situations like this frequently but I never found a best practice to solve it.
+ +Imagine:
+ +We have chats; each chat can have many messages. We have tickets; each ticket can have many messages too.
Solution 1:
+ +We create 3 tables - chats, tickets, messages - and we link each chat or ticket to its messages using a polymorphic relationship.
In this solution:
+ +Solution 2:
+ +We create 4 tables - chats, tickets, chat_messages, ticket_messages - and we link each chat or ticket to its messages using foreign keys.
In this solution:
+ +Solution 3:
+ +Please you tell...
+",278040,,278040,,1/6/2019 15:48,1/6/2019 17:18,Database relation design,Apologies if the title is incorrect, I couldn't think of better wording. I have the following code
+ +template <class E>
+void ResolveEntityHit(E& entity, Projectile& projectile) {
+ static_assert(std::is_base_of<Entity, E>::value, ""entity must be of type Entity"");
+
+ entity.DealDamage(projectile.GetProjectileDamage());
+
+ if (entity.IsDead()) {
+ DestroyEntity(entity); //here is the problem
+ }
+
+ m_projectileManager->DestroyProjectile(projectile);
+}
+
+void DestroyEntity(Enemy & enemy)
+{
+ m_enemyManager->DestroyEnemy(enemy);
+}
+
+void DestroyEntity(Bunker & bunker)
+{
+ m_bunkerManager->DestroyBunker(bunker);
+}
+
+
+I'm trying to avoid using dynamic_cast, as I have read that this isn't good practice. I'm trying to keep ResolveEntityHit as basic as possible so it can accept multiple types, but then I would like to branch off and do different things depending on which type the entity is.
+For example, I have my entities separated into different classes, and each class is responsible for removing/adding entities, so I would need to call the function on the correct manager to remove the entity.
The code above doesn't compile and I get error C2664: 'void DestroyEntity(Bunker &)': cannot convert argument 1 from 'E' to 'Enemy &'. Hopefully it's clear what I'm trying to achieve, but I'm asking: is there a better way to do this in terms of design/architecture, without the use of dynamic_cast? Possibly through using templates?
I'm designing a system that acts as a master data service for what I shall here call boxes. The system is to be implemented in Java with a relational database (SQL) as the main storage. Each box has a ~dozen different top-level properties: ranging from simple primitives (booleans, integers, etc) to other objects and lists of other objects.
+ +The main issue is that most of the box properties may change over time, and one needs to be able to schedule those changes in advance. Furthermore, one needs to be able to schedule any number of upcoming changes to any of the attributes, in any chronological order.
+ +For example, in October we might schedule new set of derps for a box for December, to be returned back to normal on January 1. If, in mid-December, we find out the box gets a new foobar_id in February, we need to be able to schedule that change for February without affecting the upcoming derp change on January 1 -- and without accidentally reverting the derps of December back when the time comes to apply the foobar_id update.
+ +My idea is to create some sort of a queue of upcoming change events. Each queue item would only change the values of the properties given in that exact event. New events could be added into any position of the queue and existing events could be removed from it. When an event would occur, it would record the old value of the property it changed.
+ +Now, the keywords of the previous paragraph are some sort of a queue. I'm unsure how to actually implement this in Java + a relational database! It seems that a language with strong static typing doesn't lend itself well to this kind of an exercise in generic attribute changes.
+ +I'm considering a relatively simple database table with a timestamp (date of the change), the name of the property that's going to change (an enumeration), and a serialized (JSON) representation of the new data. Then each property would basically need their own handler/deserializer. Another way would be to copy the box database structure for upcoming changes and just store a bunch of ""boxes"" with no other properties than the ones that are going to change. This seems like it might be easier Java-wise but the database would become quite complex, when almost all tables would need to be duplicated.
+ +I need the system to be robust so that it's not too easy to break it when new properties inevitably are added, or some old properties are changed. As such, I'm not too fond of the idea of using reflection on this. New changes can only come in to the system as fast as human beings can type, so I don't need the solution to be optimized for speed. But I do plan to keep one complete and up to date version of each box object in the database, so that I don't need to reconstruct the object from a number of changes every time I need it. Also, for what it's worth, the database queries that the system is going to be handling are going to be pretty simple and the number of boxes is unlikely to exceed ten thousand. I'm not concerned about the actual scheduling part, i.e. triggering the changes at the correct time.
+ +So I guess that basically my questions are these, starting from the most important one:
+ +I have read on several methods to securing an API key like gitignore or placing in another file if using an application, but at some point if taken the time, anyone can get the key, even when apikey is in use or called, right? Other methods explain using a proxy which is well beyond my league. I am only aware of understanding the foundation of C# and JavaScript, and the thought of securing an apikey is mind boggling, as I think what is the most secure method. Recently, I wanted to start working on a portfolio for a better occupation, so I had thought of doing something with the Steam API, but couldn't find a concrete method to store this elsewhere and call it without anyone taking the idea of stealing or digging up this info. Even if I used it within JavaScript, how would I call this during unattended events, if I were to make a public website that was accessed by thousands of people.
+ +Edit 1: Honestly, I have though about storing the key encrypted elsewhere, but then I would need a decryption method, as well as a key, which could still be a vulnerable method.
+ +Edit 2: I understand that the key should be in clear text, as it is a cryptographic key itself. If stored on the server, is the api key stored in a path on the server, where only the web application is directed?
+",325119,,,,,1/7/2019 9:31,Methods to Securing APIKeys,I have to define the new way of working for a development team which goes from a one man unit, to a distributed team with programmers al over the world. The team will work with svn. This is a non-negotiable thing. I recommended that they switch from svn to git, but that is not going to happen. This is the first time I do something like this. At the moment I think about something like:
+ +
+White text are things that are done manually. Blue text are things that are done automatically.
Because it is important to minimise things that can go wrong (people that would give help when there is a problem probably sleep at the moment it is needed, so it should be minimised at 'all costs'), I am thinking about locking svn trunk before the commit and releasing after the automatic steps are done. In this way it should be nearly impossible that the 'Merge Back Into Trunk' goes wrong. The idea is that the tests are reasonable fast and it is better to wait a little before the commits are done, then that there is a chance that the automatic part goes wrong.
+ +Is this an acceptable way of working?
+If so: can this be done with svn?
More about the way of working I am thinking about.
+",324072,,324072,,1/7/2019 7:00,1/14/2020 14:41,Is it a good idea to lock svn,my requirement is i want to delete a Object A
+ +A-> B-> C-
+ +here if you want to delete A you have to delete B which is dependent on B , then If you Want to Delete B you Have to Delete C which is dependent on B and The Chain goes like this
+ +i'm planning to Solve it using chain of responsibility design pattern , or is there any design patterns or principles that fit this scenario
+",325137,,,,,1/7/2019 7:30,Deleting a list of dependent OPbject using chain of Responsibility design pattern,I'm currently testing a web service and I have noticed that there is only one error code ever return: 400.
+ +However, the error message return isn't always the same. Here are some examples of the error messages I got:
+ +So I was wondering if we should use different error code for each message (keeping the HTTP error to 400 but using another code inside the message like 4001, 4002, 4003, etc...). Why would it be a bad idea to do that and why would it be a good one?
+ +Is using only one single error code could make life harder for the front-dev (assuming they have to translate the error message before printing it for clients)? Wouldn't it be simpler for them to have multiple error code? And what would be the drawbacks of having multiple error code instead of one?
+",317653,,317653,,1/7/2019 11:33,1/8/2019 9:04,good practice: error message and error code,I need to produce some documentation to be compliant with IEC 62304 and, while reading all of the processes needed to be documented, I'm having a couple doubts about how to structure the whole lot of documentation.
+My concern is about how to divide all of the documentation in separate documents and what should be included.
+The whole software system can be considered composed of 3 main subsystems:
+I'm especially in charge of the latest, which is a fairly streamlined streaming-oriented application which processes and saves data on a DB (a SOUP, in the case of IEC 62304 compliance).
+Now, the data saved in the DB is visualized in a Grafana dashboard: in which document should this component be considered? What should be the limit of the scope regarding the #3 application and its interaction with the other components? +Since Grafana would be a SOUP, I was thinking about writing about it in the appropriate document where all configurations and SOUP management is. +Should I mention/reference inside the SRS of #3 application the requirements for the needed visualizations? +Which is the appropriate document where I should put this information?
+I'm using as a template reference for all of the documentation needed this blog, since I'm new to software development with ISO/Standard Regulations, but any additional resources as to how structure the whole docs in this context is highly appreciated.
+Thank you
+",325146,,-1,,6/16/2020 10:01,1/7/2019 12:36,How to structure SW documentation with SOUP components,I have few strategy class that calculate ranking. Those class implements interface with method scoreUpdates. Method scoreUpdates take two parameters( winners and lossers). Now i need add new strategies and some need more parameters. Should i add methods to base interface for this new strategy? +What is best solution for this type of case?
+ +Also I use RankingSelector service that find right strategy and returning interface. Also DI is based on this interface so i can't add new interface.
+",325149,,,,,1/7/2019 13:05,add new class that implement base interface but need one more parameter,I recently came across a set of possibilities for creating rows in a table in my database. the scenario is that I am trying to populate a notifications table by different types of notifications data based on different tables.
+ +Adding to the notifications table is done instantly after the adding of rows in other tables (like adding in invoices table).
+ +Since the code adding to the other tables is on a higher level (php) the question is : should I add new rows to notification table with a php sql query or should I implement a trigger that would do that automatically ?
+",301866,,,,,1/8/2019 3:31,Using sql triggers over higher level scripts,I've noticed this style of code a lot in frameworks like Symfony and Magento 2 (which is based on Symfony):
+ +<?php
+ class Foo
+ {
+ protected $foo;
+
+ # construct function - not needed for question
+
+ public function getFoo()
+ {
+ return $this->foo;
+ }
+ }
+
+
+Which makes things easier to pick up in terms of get/set/unset but is there any actual advantage over using public vars?
+ +<?php
+ class Foo
+ {
+ public $foo;
+ }
+
+
+It seems the latter has fewer lines but less obvious flexibility.
+ +What are the advantages/disadvantages to each method and when should I use one over the other?
+",303264,,,,,1/7/2019 12:43,Public var vs protected var and get function,I want users to be able to dynamically add 'columns' from the front-end of the website. I understand that it is probably not best practice to actually add columns to a table from the front-end, so I was looking for a better way to handle this.
+ +The use case:
+ +I am making an app with a determination table. The user can fill out details of the animal/plant (for example leaf shape) and is supposed to end at the right species.
+ +I want to make it future proof, so that if someone fills out all details and the species they have is not the one the table comes up with, the user can add both their species, and the detail that would tell both species apart.
+ +For example: if the user found a daisy but the table comes up with dandelion, the user could add the daisy and add 'petal colour' as a distinquishing feature.
+ +Users should then be able to fill out the petal colour for all plants that were already in the database.
+ +My database at the moment:
+ +I have one table where all details (like species name, leaf shape etc.) are stored in columns.
+ +My webiste: +I use angular 7 for the front-end, PHP on the server and a MySQL database, but general answers are also very welcome.
+",325162,,,,,1/7/2019 18:33,What is a good way to add extra info to data-entries from a website front-end?,Is there a protocol or a convention that supports REST (ok, maybe we should use HTTP here instead) processing chain and some neat features to help with that? Let me explain what I mean.
+ +Let's assume I have some public REST service available. Using HTTP GET, I have multiple static pictures, GIFs and movie clips available. Generally, I would like to take this data and send it to another REST endpoint, along with additional data about the recognized visual elements in the content. For example, if the image contains Steve Ballmer drinking tea, a description ""Steve Ballmer drinking tea"" is normally expected at the endpoint.
+ +However, I don't have an image processing and recognition service available, but if there are some such services available somewhere on the internet, I'm happy. Even if one works exclusively with static images and another one with movies.
+ +So, my application (let's call it MyApp) will do the following:
+ +This means the data flow is:
+ +++ +MyApp -> Src -> MyApp -> PRS (or VRS) -> MyApp_> End -> MyApp + (confirmation)
+
I am looking for the solution where data flow is this:
+ +++ +MyApp -> Src -> PRS (or VRS) -> End -> MyApp
+
This means that I only have to say to the Src: ""Get the whatever video resource I want and forward it to PRS or VRS depending on the content; after that forward it to the End"". Then Src takes the picture, sends it to PRS and says ""process this, after that forward it (along with result of processing) to the End"". +You see, I don't want MyApp to be orchestrator of everything, additionally creating extra network traffic along the way.
+ +Oh, btw, since I want it to be neatly archived, I need a zipping service in the chain, so the solution should look more like this:
+ +++ +MyApp -> Src -> PRS (or VRS) -> Zip -> End -> MyApp
+
One more thing is that I want MyApp to be informed about the percentage of processing, errors. I expect some asynchronous processing somewhere along the path (e.g. VRS is a good candidate) and everything to work correctly in that condition as well.
+ +Does something like this exist? Something maybe most similar to the Unix/Linux piping. Like ""web-pipe"". Or something. +If it does, I can't find it.
+ +EDIT
+ +I am looking for a protocol, convention, whatever fits my need and is neither tied for an existing framework (e.g. Spring, .NET MVC/WebApi, ...) nor a ""proprietary"" part of some existing technology (Java, .NET etc.) It should be something that just-works with (or via) existing HTTP. So any technology can use it. Maybe ""concept"" should be the proper term here. If it isn't something widespread already.
+ +For example, there is basic authentication. It just works with any technology. It has its rules, do's and dont's. There are WebSockets, working just the same. I need something in that context.
+",204790,,204790,,1/7/2019 16:33,1/7/2019 16:33,HTTP/REST and chained processing protocol/convention,Let's say I use a Pair in this way:
+ +Pair<Long, Date> signup = getSignup();
+System.out.println(""User with ID "" + signup.getLeft() + "" signed up on "" + signup.getRight());
+
+
+Is it a form of Primitive Obsession?
+ +I could have something like
+ +Signup signup = getSignup();
+System.out.println(""User with ID "" + signup.getUsrId() + "" signed up on "" + signup.getSignupDate());
+
+
+If it's not a form of primitive obsession, why is that?
+",325169,,,,,1/8/2019 18:35,Is using the Pair class a sign of primitive obsession code smell?,What is good practice of settings up a database in potentially large project - creating tables and updating them - should it be done in code of the app or should it be done by external database related tools like phpmyadmin ? I mean I have two ways - create tables and set them up when the app starts, or I can do this stuff with phpmyadmin by hand independently of codebase.
+",325177,,,,,1/7/2019 18:18,Database management good practice - should it be in code or with database tools?,This is a best practices question for release management of an app. But this scenario is a bit different than what I've been able to find myself.
+ +Essentially my company maintains a fork of its own app. There are two versions of the app that will have different configurations of bug fixes / features. These fixes and features come from a common pool of what's completed. The reason for the two configurations is that there are two main testing environments with different goals.
+ +Let me explain that a bit more with a scenario:
+ +#3 is important because not all features get removed or synced between the two configurations. This means that the two configurations diverge slightly over time. But only in the short term for what's in active development. Over the long term, the code base is in sync with what's in production.
+ +So, as a diagram, the builds could look like this over time (with some added features / bugs from the bulleted scenario above):
+ + + +Basically a normal development life cycle, but with twin timelines. There's the main app, and a fork of it that's derived merely by a different combination of the available patches. Patch queues would work well but we use a build server to produce the builds which requires us to publicly push committed changes (as far as I can figure out) to a remote repo.
+ +My question is really about what the easiest way to manage this is, at the actual source control level. What we've done in the past is (using Mercurial) maintain two repositories (one per configuration) and all features / bugs would get imported as needed as patches. Removing items would be done using a variety of ways, backouts probably being the most common. The problem with this is that the two repos ended up wildly different from each other with different items being applied at different times. So the entire changeset stack would be a different order.
+ +What we're thinking about doing, is still maintain two separate repos, but every effort (features, bugs) would be developed as a branch and that branch gets pushed to the repo it's needed in. Within a given repo, if the branch is wanted in the upcoming build, it gets merged in the build branch which is monitored by our Jenkins server and produces the builds.
+ +Is there a better way? An ironed out best practice that prevents messy build branches as a result of backing out items, and possibly even other issues that we don't know about yet?
+",16275,,16275,,1/9/2019 1:52,9/30/2020 4:05,Source Control Release Management: Simultaneous Releases with Different Configurations,Should I run my own webserver? If so, how do I do that? I'm running on Windows 10 with VS2017, IIS Express and MS SQL Server.
+ +I don't need a domain name. Just providing access via IP-address is fine. I'm just looking for a cheap and easy way of enabling other people to help me beta test my apps.
+ +Can Azure be used for this?
+",320898,,,,,1/7/2019 17:21,How should I make my Asp.Net Core web apps available online for beta testing?,Considering this pattern is used to support CQRS message bus, examples are buslane Python or MessageBus PHP
+ +It uses commands to change the domain model, and publishes domain events
+ +This looks great providing the separation, and encapsulating each domain write operation on its own classes, but doesn't that make an anemic domain model ? Can a domain model be thought of a collection of services, and objects ?
+ +Even if, doesn't that results in a domain model that is just an entity, or a data container, and all the business logic is implemented in their own command handlers.
+ +On the contrary, if all the commands which changes the model are implemented in a class, isn't that a kind of a god class ? Or doesn't it violate SRP ?
+",85286,,85286,,1/7/2019 20:57,1/8/2019 5:39,"Anemic Domain Model, CQRS, command bus",I just started a new job and one of my first tasks is to create local nuget packages from the existing libraries, to help with versioning, maintenance, etc. This task had already been started by another engineer. However, he chose to grab many libraries that relate, create a project holding all these libraries, and publish it as one package (specifically a nuget package).
+ +Example:
+LibraryA_v1 + LibraryB_v2 + LibraryC_v3 = PackageA_v1
+LibraryB_v1 + LibraryC_v3 = PackageB_v2
+
+
+Then, PackageA_v1
and/or PackageB_v2
would be referenced by whatever project that needs them. However, I see a lot of different problems with this approach.
PackageA_v1
and PackageB_v2
are extremely unstable. Anytime a library changes, the package would need to update.LibraryB_v1
and LibraryB_v2
would be in the same project, if PackageA_v1
and PackageB_v2
are both referenced)From my studies in software engineering and the principle previously mentioned, I think each library should be kept separate in their own nuget packages. However, my co-worker had obviously thought differently. So, should libraries be packaged together based on similar traits?
+",314489,,132397,,1/8/2019 4:48,1/8/2019 5:42,Should libraries be packaged together based on similar traits?,I'm really struggling with overheads of context switching. When I need to continue work on some part of the code after a break, it takes up to an hour to recall all the context of the problem I working on and tune up to work. How do you deal with that issue? Maybe you leave some prompts in the code describing context and next action, or keeping some kind of lists, or using any other management tricks?
+",325205,,,,,1/8/2019 6:35,How do you manage context switching overhead in software development when getting back to work over different parts of your project?,I am writing drivers for different devices to work with my embedded system. Currently I write any new driver manually but for the future I would like to automate this using a settings file. I figure I have 2 options:
+ +write a single universal driver that reads the setting file and behaves accordingly;
write a code generator that reads the setting file and generates code from that with appropriate behavior.
Which one of these is the better option and why? Are there any better options still?
+",320971,,209665,,1/9/2019 8:30,1/9/2019 8:30,generate code or write generic code,Context: I have an open source project which uses JNI. While it's possible to build it yourself, I expect most users will use standard Java tools like Maven, Gradle, or SBT to download the JAR, which contains the pre-compiled binary image. This is no problem on Windows, but on Linux it gets complex.
+ +I'm wondering how much to statically link when creating these packages so it's compatible with most Linux distributions. I know that, for example, Alpine Linux does not come with libstdc++, meaning that it would fail when in a small docker container.
+ +There's also the possibility of older versions. For example, a quick look at nm
suggests it's linking _ZNSt11logic_errorC1EPKc@@GLIBCXX_3.4.21
and __vsnprintf_chk@@GLIBC_2.3.4
. What if the host has versions older than 3.4.21 and 2.3.4?
However, most literature I've seen tells me not to link against libgcc. Is that still true? Is it the same if I switch to clang (which has its own standard libs?)
+",180,,,,,1/9/2019 6:30,Is it good practice to statically link libstdc++ and/or libgcc when creating distributable binaries for Linux?,I am working on an ASP.NET Core application that grabs a model from a database via Entity Framework, and will pass a ""subset"" of that model to our Angular front end. For example:
+ +I have a list of Users. On the user-list page, I would like to grab a list of User objects from the API and display them on the page. For each User, it should only show Name and maybe a few other fields.
+ +I would also like to be able to click on the user's name to redirect to their profile page. On this page, we will get more fields from the User table - perhaps more in-depth information like Nickname or Middle Name, etc.
+ +My question is, what is the ""correct"" way to structure this, on the front-end side and the server side?
+ +On the front end, is it best to have one class with nullable values that either get filled out or left null based on which page they are on? Like this:
+ +export class User{
+ firstName: string;
+ middleName?: string;
+ lastName: string;
+}
+
+
+Or would I have ""UserListUser"" and ""UserProfileUser"" classes that are completely separate? Or, would it be a parent class called ""User"" with a subclass with more information like ""UserFull""?
+ +And, on the back-end, is it best to do the same thing? Would you create separate classes for each page that has access to that database model? Would you just use the ORM object that is created from Entity Framework? Or would you always map that to a smaller object with only a subset of the fields on the database table?
+",176600,,,,,1/12/2019 14:32,How to handle different pages of a web application having different levels of access to a database model,Given the formula to calculate instability...
+ +I = (Ce / (Ca + Ce))
with Ce = outgoing dependencies
, Ca = incoming dependencies
, and I = Instability
,
...should I include system dependencies (such as System
, System.Data
, System.XML
, etc.) when counting outgoing dependencies (Ce
)? Or, do I just count it as one outgoing dependency?
Background Info
+ +I have been studying this topic in an academia environment. I'm starting to apply what I've learned, thus where this question derived. More info on the topic can be found at this link.
+",314489,,,,,1/9/2019 8:03,Do I include system dependencies when calculating Instability?,I'm a junior developer that is given the ability to help shape my team's processes if I can justify the change, and if it helps the team get work done. This is new for me as my past companies more or less had rigidly defined processes that came from management.
+My team is fairly small and somewhat new (<3 years old). They lack:
+And the list goes on. Management is open to the implementation of improvements so long as the value is justified and it helps the most important work (namely the development) get done. The underlying assumption however is that you have to take ownership in the implementation, as no one is going to do it for you. And it goes without saying some of the above projects are non-trivial, without a doubt time consuming, and are clearly not development work.
+Is it worth a (junior) developer's effort to try and push for the above as time goes on? Or is it best to "stay in your lane" and focus on the development, and leave the bulk of the process definition, and optimization to management?
+",,user321981,155513,,1/4/2021 23:20,1/5/2021 14:19,Should a (junior) developer try to push for better processes and practices in their development/IT team?,According to the book, The domain layer should be isolated. In domain entity, you should avoid adding a property represents database PK (usually identity surrogate column called ID).
+ +There is no problem in identifying a domain entity because by definition it includes a natural key. If this key is the same as PK, then repository will have no problem in persisting the domain entity using PK. Otherwise, the repository will need to construct a SQL command that find the entity based on some column(s) instead of PK.
+ +Allowing PK to be in domain layer is the perfect approach by book, however I cannot see risky practical issues. On the other hand, without this approach, the saving process for an aggregate might lead to a performance issue in saving.
+ +I can see only one practical problem which is ""the wrong guidance for other developers"". Do you know other practical problems for this approach?
+",247564,,,,,1/9/2019 14:45,DDD Including DB Id in domain entity,I have in mind to develop a Ruby on Rails app with two different databases.
+ +In the primary SQL db - let's say MySQL, for instance - I'd keep my app items, e.g. user profiling, user-user interactions and, in general, everything that's bound to a model, anything that I already know how is made.
+ +Now the best part: I'd like to add a secondary No-SQL db - let's say MongoDB, for instance - where I want to put other documents that I don't know which fileds they may contain, not bound to any model. End-users, while interacting with the app, should be able to add their own custom documents and create their collections, making queries and also creating views - I mean views inside the db, I'm not talking about web pages - to aggregate not-so-well-formatted records.
+ +What do I mean for not-so-well-formatted records? For example, let's say that a user inserts a record like this one:
+ +{ ""name"":""Bill"", ""surname"":""Ball"" }
+
+
+and then another record like this one:
+ +{ ""firstname"":""Tim"", ""lastname"":""Tam"" }
+
+
+As you can see, the fields name and firstname are meant to be the same field, while they're actually different feilds; at the same time, surname and lastname are meant to be the same, but they're different because the user did a sloppy job while inserting those records.
+ +I'd like the app could notify the discrepancy to the user so he can choose wether to aggregate those two fields or keep them separated; if he chooses to aggregate them, he should be able to define - in a very simple and friendly way - a view into the db, maybe applying some kind of alias to every field. Maybe even defining the type of each field, e.g. strings, dates, integers, etc. So, after the user makes few clicks, the view could look like:
+ +{ ""firstname"":""Bill"", ""lastname"":""Ball"" }
+{ ""firstname"":""Tim"", ""lastname"":""Tam"" }
+
+
+while preserving the original/raw data inside the collection. +I already know how to do this with MongoDB, by the way, but still I don't know if this could be the right approach.
+ +I don't want the user to be obliged to create any model for his data, I'd simply want to let him throw raw documents into the db and eventually autonomously ""fixing"" the discrepancies after, so he can continue querying his collections without worrying about those discrepancies in naming convention.
+ +So here's my question: is this a good approach to solve my problem? I already know that I can have multiple dbs attached to my Rails app, but is this structure convenient? Or is there something better?
+",325321,,325321,,1/9/2019 9:18,1/9/2019 12:46,Ruby on Rails: primary SQL db and secondary No-SQL db without models,I am wondering how best to slice up a Java Mustache web app which has:
+ +Important to note this is all in the same Maven module (I know).
+ +What I really dislike about this is that a lot of the Mustache logic is wrapped up with non-UI code, in case all the way to the Service layer.
+ +What I am thinking of doing is extract all the Mustache logic together with all the controllers and Spring web security stuff into a new module called project-ui or similar.
+ +Leave the Data and Service logic inside a project-api module.
+ +At this point would you either
+ +Any clear way of doing this that I am not aware of? Ideally, I'd rather let a NodeJS/ReactJS app serve the UI layer but the decisions have already been made. Think big corp environment.
+",120144,,,,,1/9/2019 9:50,Spring Boot and Mustache app separation of concerns,In short, is instanceof
a bad thing?
I had a code something like
+ +Converted convert(Object o) {
+ if (o instanceof ClassA) {
+ convert((ClassA) o);
+ }
+ if (o instanceof ClassB) {
+ convert((ClassB) o);
+ }
+ throw new IllegalArgumentException(o.class + "" not suported"");
+}
+
+
+I didn't realize it can be in fact refactored to
+ +Converted convert(Object o) {
+ throw new IllegalArgumentException(o.class + "" not suported"");
+}
+Converted convert(ClassA a) {
+ // do the conversion for ClassA
+}
+Converted convert(ClassB b) {
+ // do the conversion for ClassB
+}
+
+
+ClassA and ClassB are some generated classes without common interface so I cannot do a conversion in one method (without reflection as the method names I'm interested in are the same in ClassA and ClassB).
+ +On the other hand I do not really see a benefit of implementing it that way.
+ +Additionally (when same principe is applied), I'm working with JSF and I have several implementations of javax.faces.convert.Converter
, for example
@Component
+public class CountryConverter implements Converter {
+
+ @Autowired
+ private CountryServiceImpl countryService;
+
+ @Override
+ public CountryDto getAsObject(FacesContext context, UIComponent component, String value) {
+ return countryService.findById(Long.parseLong(value));
+ }
+
+ @Override
+ public String getAsString(FacesContext context, UIComponent component, Object value) {
+ if (value instanceof CountryDto) {
+ Long id = ((CountryDto) value).getId();
+ return Long.toString(id);
+ }
+ return null;
+ }
+
+}
+
+
+...I can have very similarly
+ + @Override
+ public String getAsString(FacesContext context, UIComponent component, Object value) {
+ return null;
+ }
+
+ @Override
+ public String getAsString(FacesContext context, UIComponent component, CountryDto country) {
+ return country.getId();
+ }
+
+
+where overloaded version with Object can be in some common parent, but it all seems to me as overengineering = as I mentioned I see no benefit of doing it that way, just because I can. KISS is a principle I like and implementation with instanceof
is straightforward...
I have data object's let say PersonDto (fields name, surname) and OrganizationDto (field name, type).
+ +Then I have some common screen, showing such data, but screen title is something related to type of - Person/Organization.
+ +The easiest way how to implement that functionality is to have field/constant in that class. Another approach would be to use some instanceof
checks (related to my previous question), the most complex (from my point of view) is to use Visitor pattern, so at some point, there is one of methods called:
Visitor {
+ accept(PersonDto dto) {
+ return ""Person"";
+ }
+ accept(OrganizationDto dto) {
+ return ""Organization"";
+ }
+}
+
+
+Just now I realized, some magic with class name can be done (for example if DTO name is not one word I'd need to add space or something), but I do not like that approach at all.
+ +Approach with additional field seems the most straighforward to me especially if I have common interface for DTOs, but it is breaking SRP in a sense, that class not only holds data, but knows something about screen/UI. I just prefer KISS more.
+",71371,,,,,1/9/2019 11:54,Is this violating SRP? Data object with some additional info,I have been trying to figure out an ACL solution for my application which should manage API endpoint's access rights dynamically. Some said that I have an option of Spring Security ACL. I checked it but lack of documentation frightened me a bit. So that I started to design my own ACL implementation; since I did not start implementation can not provide code example but at least I can provide the flow and components planned to be used.
+ +So this a highly overall idea above definitions with their required helpers.
+ +I will trigger aspect per end point access request. Since I know the label for the endpoint (written in the annotation) I can simply cross-check the access rule and user's roles ( I will access it through the Authentication object of Spring).
+ +Any suggestion or a flaw I am missing here ?
+",325349,,353068,,2/10/2020 10:56,11/6/2020 12:03,Custom ACL Implementation,Hopefully not too academic...
+ +Let's say I need real and complex numbers in my SW library.
+ +Based on is-a (or here) relationship, real number is a complex number, where b in imaginary part of complex number is simply 0.
+ +On the other hand, my implementation would be, that child extends parent, so in parent RealNumber I'd have real part and child ComplexNumber would add imaginary art.
+ +Also there is an opinion, that inheritance is evil.
+ +I recall like yesterday, when I was learning OOP at university, my professor said, this is not a good example of inheritance as absolute value of those two is calculated differently (but for that we have method overloading/polymorfism, right?)...
+ +My experience is, that we often use inheritance to solve DRY, as a result we have often artificial abstract classes in hierarchy (we often have problem to find names for as they do not represent objects from a real world).
+",71371,,316049,,1/9/2019 20:14,1/9/2019 22:35,How to implement RealNumber and ComplexNumber inheritance?,I am writing in C#, but this question may apply to other languages as well.
+ +public class Test
+{
+ int a = 10; // I created 'a' here
+ public void M()
+ {
+ int a = 20; // I forgot that I already have 'a' in the class and I initial it again ;
+
+ // do other stuff with ""a""...
+ }
+}
+
+
+This way isn't against the declaring space rule, so the compiler will have no problem with it. I know I could use this.a
if I want to access the a
outside M()
and they are not a variable actually.
My question is:
+ +Is this way could make some people confused by allowing declare the same name in a sub scope? Will it be harder to debug or doing code review?
+",277346,,,,,1/9/2019 19:51,How to avoid repeating variable initialization?,I have a specific problem in git which I havent found an answer to yet. +In gitlab I have 3 seperate repos. For my school project the teacher wants me to copy everything into a repo of his and wants to see all my git history.
+ +So I would need to fork my 3 repos into his main repo, but I just couldnt figure out an answer.
+",325358,,,,,1/9/2019 14:21,Fork 3 repo's into 1 main one,Assume there is any program that is supposed to be tested and you like to perform an equivalence class analysis on it. Let's say you identified six valid and four invalid equivalence classes. Then, how many test-cases need to be created in each case at least?
+ +I'm not sure about that but I think because every equivalence class of input files needs to be considered in at least one test-case, so you will need at least one test-case for each equivalence class? Or maybe even less because it might be possible to skip the invalid equivalence classes..? :S
+",320477,,,,,1/9/2019 17:04,How many test-cases need to be created at least (valid and invalid equivalence class)?,I'm currently building a React Native application and wondering if storing device information such:
+ +in the Redux store can be a good idea.
+I have different components that needs to know this informations and storing in Redux can grant me a predictable state.
+In the case, maybe I can store them like this:
{
+ user: {
+ id: '123'
+ name: 'Markus'
+ ...
+ },
+ device: {
+ locationPermission: 'denied'
+ locationActive: false
+ lastKnownLocation: {
+ lat: 44.123,
+ lng: 32.123
+ }
+ },
+ ...
+}
+
+
+Are there any cons about this approach?
+",325366,,,,,10/13/2020 15:07,Correctly store device info with Redux in React Native app,We're a small team of 3 senior and 1 junior developers and I've been tasked with introducing BDD within our development process.
+ +To say there's a lot of confusion about BDD is an understatement and it's appearing within the team after I created some scenarios for user based behaviour.
+ +My understanding of BDD is that it's a way of abstracting requirements in a way that everyone can understand, and so far it seems to help the team visualise some of the behaviour that's required. The problem is now that the rest of the team has run away with the idea and want all behaviour written in Gherkin, including non-user based things such as what should happen in the database (e.g. auditing, error logging, sessions etc) and interaction between web services.
+ +I know BDD isn't about testing, which is why it was invented by Dan North, but the few user-centric scenarios I've created can nicely have user acceptance tests derived from them, so now the rest of the team would like this applied to all layers of the system - even though we won't produce UATs for the behaviour, instead integration and unit tests.
+ +The BDD work is under my responsibility but now I'm not sure how to proceed. I'm weary that we'll get bogged down with a huge number of scenarios and waste precious time if we continue with the wishes of everyone else.
+ +I'd like to know how other teams who use BDD/TDD actually use their secnarios, as everything I've seen online only seems to refer to user interaction.
+ +I understand that scenarios are best used as part of the ""living documentation"", so does this mean all behaviour?
+ +For instance how useful would the following be? Especially since the users won't care about this, it's that we require auditing as standard when creating systems:
+ +Feature: The audit service logs all requests made
+Scenario: A request is made and logged to the audit database
+
+Given a request is made to *the service*
+When *the service* receives the request
+Then *the service* calls the *audit service*
+And *the audit* service logs the request to *the database*
+
+
+I can understand that this flow helps us know what we should be programming but this seems like its shoe-horning something that doesn't fit into BDD. We already have sequence diagrams detailing the above scenario.
+",146235,,146235,,1/9/2019 16:33,1/10/2019 16:05,Should BDD/Gherkin be used only for user visible behaviour?,I am member of the Apache PLC4X (incubating) project. Here we are currently implementing multiple industry PLC protocols. While we initially focussed on creating Java versions of these, we are currently starting to work on also providing C++ and other languages.
+ +Instead of manually syncing and maintaining these, we would rather define the message structures of these protocols in a generic way and have the model, parsers and serializers generated from these definitions.
+ +I have looked at several options: 1) Protobuf 2) Thrift 3) DFDL
+ +The problems with these are the following:
+ +1) Protobuf seems to be ideal do design a model and have model, serializers and parsers generated from that. With Protobuf it is easy to define a model and ensure I can serialize an object and deserialize it with any language. However I don't have full control over the transport format. For example if I was to encode the constant byte value of 0xFF, this would be a problem.
+ +2) Thrift seems to be more focussed on the services and the models used by these services. The same limitations seem to apply as for Protobuf: I have no full control over the transport format
+ +3) DFDL seems to be exactly what I'm looking for as I want a language to describe my data-format ... unfortunately I could find projects like Daffodil, which seem to be able to use DFDL definitions to parse any data format into some XML like Dom structure. For performance and memory reasons we would rather not do that. Other than that I couldn't find any usable tooling.
+ +Also had a look at Avro and Kaitai Struct but Avro seems to have the same issues for my usecase as Protobuf and the guys from Kaitai told me serialization was still experimental
+ +My ideal workflow would be (Using Maven):
+ +1) For every protocol I define the DFDL documents describing the different types of messages for a given protocol
+ +2) I define multiple protocol implementation modules (one for each language)
+ +3) I use a maven plugin in each of these to generate the code for that particular language from those central DFDL definitions
+",325378,,,,,11/1/2020 22:05,"Options for having model, parsers and serializers for a given data-format generated in multiple languages?",I'm developing a system, and I've had a question that might help other people.
+ +The system would be written in PHP, with a lot of chance of turning a mobile app later, too. Normally, I would create it with a controller that would make the connection to the database, and others responsible for inserts, updates, etc.
+ +Considering the application, you would then do an API with endpoints for the same functions.
+ +Thinking about it, I imagined then that the system itself could also work with the API. In this way, it would be the same as the mobile application: It would not connect directly to the database, but would connect to the API and it would take care of operations.
+ +I see that, this way, the system would be slower than connecting directly to the database, but, perhaps, it would be more useful in maintenance, because it would be just a place to change, the API.
+",325379,,,,,1/9/2019 16:34,API as system controller,I am trying to modify an opensource project (json serialization one: gson, I want to let it serialize/deserialize objects with circular references, which is not allowed now. +) to do it I have to change an abstract class widely extended.
+ +I want to change the behaviour of that class with a strategy object so every child classes will be using it without needing to know it, so I have add the strategy object to the abstract class. As draft, I have done this:
+ +...
+public abstract class TypeAdapter<T> {
+
+ CircularReferenceStrategy<T> circularStrategy = (new CircularStrategyFactory()).create();
+
+ public void write(JsonWriter out, T value) throws IOException{
+ circularStrategy.write(this, out, value);
+ }
+/**
+ * Writes one JSON value (an array, object, string, number, boolean or null)
+ * for {@code value}.
+ *
+ * @param value the Java object to write. May be null.
+ */
+ public abstract void doWrite(JsonWriter out, T value) throws IOException;
+...
+
+
+That circularStrategy can be, p.e: let it fail by throwing a StackOverflowException (actual behaviour), or substitute circular references with 'null'/NullObjects or, as is done in Jackson (another serialization library), mark each object with an 'id' and add a reference to that id in the serialized json so the serialized/deserialized objects will have the circular references... whatever, the point is that they can be many strategies and that one of them must be selected before starting the serialization.
+ +So my question is:
+ +How should I tell the factory which strategy must be used?
+ +I would set the info about which strategy must be used up in the GsonBuilder* and inject it to the abstract CLASS TypeAdapter (as a static field, to the class, not to the instances) but there is something that stops me from doing that, a spider-alert... Is it ok to inject things to a class? from where?
+ +How would you do this?
+ +* That is the main builder of the library, it builds a Gson object that has toJson and fromJson methods that are what you use
+",110507,,,,,1/10/2019 8:53,How to inject behaviours to an abstract class?,I'm trying to accomplish this scenario :
+ +There are 2 types of users, let's say Admin
and Worker
, and there have different roles.
Admin can do a CRUD
of questions, and also can create a room where the users can join to play together (this is just a name) but maybe is a good idea to create more attributes inside of it like, WHO is playing in this room, POINTS for everyone but let's talk it afterwards when I show you the design.
Worker can play solo or multiplayer.
+ +Ok the thing is, on my design I have :
+ +Collection named User
which contains :
This is a default one, but then I'm wondering how do I define the Role
if it's an Admin
or a Worker
, something like isAdmin:true
and then I check this Bool
? Also I'd like to have the reference for those questions where the user has failed more, I mean like a wrongQuestionNumber which contains the _id of the question and the times he/she failed
Then I'd like to have the Question
collection where contains :
Then the Room
collection should contains :
There's a collection named Topic as well: if my questions have a topic, then I can select questions by topic. An example of a Topic would be Math, so a user can take exams or tests with only math questions.
Then I have to store a history of which questions a Worker has answered correctly and which not, in order to compute some statistics; I also need to store history for the Admin, so that they can see things like "in this topic, the question Workers fail most often is Question23" (for instance).
+ +Any tip or improvement is welcome.
+ +@uokesita recommended that I use PostgreSQL, so maybe it's a good idea to go that way; what could the schema look like?
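+ +To make this concrete, here is a rough sketch of what the documents could look like in MongoDB (written as Python/PyMongo-style dicts; every field name here is my own guess based on the description above, not a finished design):
+# Hypothetical document shapes for the collections described above.
+user = {
+    "_id": "u1",
+    "name": "Alice",
+    "role": "worker",  # or "admin"; a string leaves room for more roles than isAdmin:true
+    "wrongQuestionNumber": [  # questions this user has failed, with a fail count
+        {"question_id": "q23", "times_failed": 4},
+    ],
+}
+
+question = {
+    "_id": "q23",
+    "topic_id": "t1",  # reference into the Topic collection (e.g. Math)
+    "text": "2 + 2 = ?",
+}
+
+room = {
+    "_id": "r1",
+    "created_by": "u1",  # the Admin who created the room
+    "players": ["u2", "u3"],  # WHO is playing in this room
+    "points": {"u2": 10, "u3": 7},  # POINTS for everyone
+}
+
+ +The per-Worker answer history could then be its own collection, with one document per answered question, from which the Admin statistics can be aggregated.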
++ +In object-oriented and functional programming, an immutable object (unchangeable[1] object) is an object whose state cannot be modified after it is created
+ +Wikipedia (https://en.wikipedia.org/wiki/Immutable_object)
+
+ // The interface the payment methods below implement.
+ interface PaymentMethodInterface
+ {
+     public function name();
+     public function costs() : float;
+     public function enabled() : bool;
+ }
+
+ class ImmutablePaymentMethodManager
+ {
+ private $paymentMethods = [];
+
+ public function __construct(array $paymentMethods)
+ {
+ $this->paymentMethods = $paymentMethods;
+ }
+
+ public function enabledPaymentMethods() : iterable
+ {
+ $result = [];
+ foreach($this->paymentMethods as $paymentMethod) {
+ if($paymentMethod->enabled()) {
+ $result[] = $paymentMethod;
+ }
+ }
+ return $result;
+ }
+ }
+
+ class InMemoryPaymentMethod implements PaymentMethodInterface
+ {
+ private $name, $costs, $enabled;
+
+ public function __construct(string $name, float $costs, bool $enabled)
+ {
+ $this->name = $name;
+ $this->costs = $costs;
+ $this->enabled = $enabled;
+ }
+
+ public function name()
+ {
+ return $this->name;
+ }
+
+ public function costs() : float
+ {
+ return $this->costs;
+ }
+
+ public function enabled() : bool
+ {
+ return $this->enabled;
+ }
+ }
+
+ class DbAwarePaymentMethod implements PaymentMethodInterface
+ {
+ private $dao;
+
+ public function __construct(PaymentMethodDao $dao)
+ {
+ $this->dao = $dao;
+ }
+
+ public function name()
+ {
+ return 'My db aware payment method';
+ }
+
+ public function costs() : float
+ {
+ return $this->dao->getCosts($this->name());
+ }
+
+ public function enabled() : bool
+ {
+ return $this->dao->isEnabled($this->name());
+ }
+ }
+
+ class TimeAwarePaymentMethod implements PaymentMethodInterface
+ {
+ public function name()
+ {
+ return 'My time aware payment method';
+ }
+
+ public function costs() : float
+ {
+ return 33;
+ }
+
+ //only enabled at even 2,4,6,8,10,12,14,16,... hours
+ //is this considered a state change?
+ public function enabled() : bool
+ {
+ $hour = (int) date('G'); // 24-hour clock, so the even-hours check also works after noon
+ return $hour % 2 === 0;
+ }
+ }
+
+ //Immutable (enabledPaymentMethods) we can expect the same results
+ $paymentMethodManager = new ImmutablePaymentMethodManager([
+ new InMemoryPaymentMethod('in-memory method', 10.0, true)
+ ]);
+
+ //Not immutable (enabledPaymentMethods) we cannot expect the same result
+ $paymentMethodManagerWithDbAwarePaymentMethod = new ImmutablePaymentMethodManager([
+ new InMemoryPaymentMethod('in-memory method', 10.0, true),
+ new DbAwarePaymentMethod(new PaymentMethodDao())
+ ]);
+
+ //Not immutable (enabledPaymentMethods) we cannot expect the same result each time
+ $paymentMethodManagerWithTimeAwarePaymentMethod = new ImmutablePaymentMethodManager([
+ new InMemoryPaymentMethod('in-memory method', 10.0, true),
+ new TimeAwarePaymentMethod()
+ ]);
+
+
+In the example above, encapsulation is a great way to hide the database details. But hiding the database logic inside DbAwarePaymentMethod now makes ImmutablePaymentMethodManager effectively mutable, since its result can vary each time it is accessed. I ask these questions because I really like immutability, but I also like encapsulation as in the example above.
+ +Assumption: we can say $paymentMethodManager is immutable. I will assume there is no debate about this.
+ +Is $paymentMethodManagerWithDbAwarePaymentMethod immutable? Is accessing the database seen as a state change? Even though the state of the object itself does not change, the state it communicates outwards does...
+ +Is $paymentMethodManagerWithTimeAwarePaymentMethod immutable? Is the behavior added to paymentmethod.enabled() seen as a state change?
If all objects should be immutable, we must find a way of hiding the enabled-logic in another structure. Or are there any patterns (known to PHP, or in other languages) that would make all of the payment method managers immutable? Those would involve moving the behavior out of the payment method implementations and using an InMemoryPaymentMethod for each payment method?
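+ +For example, the "snapshot" idea I keep circling back to would look roughly like this (just a sketch; PaymentMethodSnapshot is my own name, and it reuses the classes defined above):
+ // Evaluate the volatile parts (database, clock) exactly once, up front,
+ // and build the manager from plain value objects only.
+ class PaymentMethodSnapshot
+ {
+     public static function of(PaymentMethodInterface $method) : InMemoryPaymentMethod
+     {
+         // name(), costs() and enabled() are each called exactly once here,
+         // so the resulting object can never change afterwards.
+         return new InMemoryPaymentMethod($method->name(), $method->costs(), $method->enabled());
+     }
+ }
+
+ $frozenManager = new ImmutablePaymentMethodManager([
+     PaymentMethodSnapshot::of(new DbAwarePaymentMethod(new PaymentMethodDao())),
+     PaymentMethodSnapshot::of(new TimeAwarePaymentMethod()),
+ ]);
+
+ +But that trades freshness for immutability: the snapshot reflects the database and the clock at construction time only, which is exactly the trade-off I am asking about.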
I wrote a small INI file parser as a library which I want to use in a bigger project. Following good practice, I decided I should write test cases too, but I am failing to find a good start.
+ +The parser's API is small (parse(), get_sections(), get_value(section, key)). So the first test I'd write is: parse a sample input file and compare the parsed sections and values against an expected output.
+ +I am unhappy with that. Maintaining the expected output and the input files is quite a burden. The tests will fail when you update the input but not the expectations which I think is a bad design decision for tests.
+ +There are many guidelines for writing good tests out there, but I always find it hard to apply them. I guess experience is the key. So maybe you can guide me to some good example codes or share your personal experiences? Much appreciated.
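+ +To make it concrete, the kind of test I could write instead of file-based fixtures might look like this (a Python/pytest-style sketch; the API names are the ones listed above, while the module name and parsing from a string are my own guesses):
+# Hypothetical test: input and expectations live side by side in the test,
+# so updating one without the other is hard to miss.
+from ini_parser import IniParser  # made-up module/class name
+
+def test_parse_section_and_value():
+    parser = IniParser()
+    parser.parse("[general]\nname = demo\n")  # inline input instead of a file
+    assert parser.get_sections() == ["general"]
+    assert parser.get_value("general", "name") == "demo"
+
+ +Keeping each test's input and expected output together like this would remove the burden of maintaining separate files, but I am not sure whether it identifies the right cases.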
+",321010,,,,,1/9/2019 22:20,How to identify test cases?,Is it possible to scale a low resolution image to a highier resolution upto the point with minimum effect on quality, sharpness and other notable attributes of an image.
+",324609,,,,,1/9/2019 23:51,Low resolution Image to High resolution,