diff --git "a/stack_exchange/SE/SE 2021.csv" "b/stack_exchange/SE/SE 2021.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/SE/SE 2021.csv" @@ -0,0 +1,54664 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense +420543,1,,,1/1/2021 13:45,,3,139,"

Using SPAs as the UI in front of an API has become standard practice, with OAuth 2.0 / OIDC being the common authz mechanism for this. This approach generally entails the SPA receiving its access via the authorization code flow or the implicit flow, by first redirecting the user to an authorization server where they can log in and consent to the SPA's access.

+

I am having trouble determining if this approach can be used for the authorization server itself: in other words, use an SPA to generate the UI for a login form and have the user log in securely.

+

Naively, one could assume the resource owner password flow fits these criteria since it accepts a username and password. But it requires the client to be able to use a secret, and an SPA can't do this.

+

Of course, I could design my own API to accept a username and password and respond with enough information for the SPA to work. But I am wondering if this portion is specified in any of the OAuth 2.0 or OIDC RFCs/specs, or if there is an alternative, highly-specified API for logging in users securely via an SPA that should be considered before rolling one's own.

+",167734,,,,,1/1/2021 13:45,Is it possible to use an SPA as the UI for an OAuth 2.0 authorization server?,,0,4,,,,CC BY-SA 4.0 +420560,1,420561,,1/2/2021 7:45,,-6,1534,"

An API is often analogized to a waiter in a restaurant, but the analogy usually stops there without being developed further. To help me understand this analogy, I want to match each of the words in "API" to something about the restaurant waiter, like body parts or the waiter's functions. So what exactly is the Application? The Programming? The Interface?

+

+

Source of picture. Understanding APIs: What They Are Why They Matter Rigor

+
+

Think of an API as a waiter at a restaurant:

+

+

+

At a restaurant, you are handed a menu of items that you can order. You can then add specific instructions about how you want those items prepared.

+

Once you’ve made your decisions, you give your order to the waiter, saying, “I want the filet mignon, medium-rare, with a side of creamed spinach and a fully-loaded baked potato with no chives.”

+

The waiter writes down your order and delivers it to the kitchen team, who prepares your meal to your exact specifications.

+

Once your meal is prepared, the waiter picks it up from the kitchen and serves it to you at your table. You review it to make sure that you have received everything you requested – and then you eat.

+

The waiter’s role in this scenario closely mirrors the role of an API in the delivery of data on the modern web. Like a waiter, an API:

+
    +
  1. Receives a set of instructions (a request) from a source (such as an application or engineer)
  2. Takes that request to the database
  3. Fetches the requested data or facilitates a set of actions
  4. Returns a response to the source
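
To make those four steps concrete, here is a minimal sketch of one such exchange as an HTTP request (the URL, endpoint and query parameter are invented purely for illustration):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class MenuClient
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Steps 1-2: hand the "order" (the request) to the "waiter" (the API endpoint).
        var response = await http.GetAsync("https://api.example.com/dishes/42?doneness=medium-rare");

        // Steps 3-4: the waiter brings back the prepared "meal" (the response body).
        string body = await response.Content.ReadAsStringAsync();
        Console.WriteLine(body);
    }
}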
+
+",,user382297,,,,1/2/2021 13:03,"If an API is like a restaurant waiter, what exactly are the Application, Programming, Interface?",,2,3,,,,CC BY-SA 4.0 +420563,1,420568,,1/2/2021 12:55,,1,70,"

I've been practicing DDD and refactoring an app to understand its principles and applications better. However, I can't fully grasp some of the ideas and how to implement them in my business domain.

+

Let me start by stating a business requirement: a user can place an order, and this order needs to be verified by a sales manager who is responsible for the orders of that specific product. The user can be a registered user or an anonymous user. In both cases the order comes from a request form that contains all the information required for the order to be placed. Users can purchase their orders with different supported payment methods.

+

Now, this requirement only belongs to the "placing an order" context. Other parts of the app include how to process this order after it's verified and, when the processing is complete, how to eventually deliver the order.

+

For this requirement I'm trying to think of my domain entities. First, I think it'd be logical to have a Customer aggregate, because from the original requirement it's clear that a Customer can place an order, so my code could be something like the following:

+
class Customer {
+   Key Id
+   void PlaceOrder(OrderRequest request) {...}
+}
+
+

However, the business requirement states that this user can be an anonymous user; in that case I have no "Customer" instance that I can map the incoming request to. So I change my model and come up with the following aggregate:

+
class OrderContext {
+   Key OrderId;
+   void PlaceOrder(OrderRequest request, Nullable<Customer> customer) {...}
+}
+
+

But then DDD examples and books state that domain entities should correspond to their use cases. In this context, it's not very clean when you read this code and out loud state "an Order can place itself" - errm what?

+

How do you model your domain for a business requirement like the above? Who is actually responsible for creating this order? On top of that, an order can also be placed by a sales manager if, for example, we want to gift an order to a customer as compensation. In that case, a "SalesManager" entity would also have a "placeOrder" method in it. I'm totally confused and lost about how to translate this kind of business logic into my domain models.

+

What would the relation between orders, customers and sales managers be then? I could have an "orders" collection in all of those entities, but then it'd be a nightmare to keep all of them in a consistent state, considering this is a web app and aggregates are loaded, consumed and then destroyed on a per-request basis.
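
A third shape I could imagine, purely as a sketch (every type name below is a placeholder, not real code from my app), is a separate domain service that owns the use case and treats the customer as optional:

using System;

// Sketch only: all types here are placeholders mirroring the entities above.
record OrderRequest(string ProductCode, decimal Amount);
record Customer(Guid Id);
record SalesManager(Guid Id);
record Order(Guid Id, OrderRequest Request, Customer? PlacedBy);

class OrderPlacementService
{
    // The use case owns order creation; an anonymous user simply means no customer is passed.
    public Order PlaceOrder(OrderRequest request, Customer? customer = null)
        => new Order(Guid.NewGuid(), request, customer);

    // The "gift as compensation" case initiated by a sales manager.
    public Order PlaceGiftOrder(OrderRequest request, SalesManager manager, Customer recipient)
        => new Order(Guid.NewGuid(), request, recipient);
}

But I am not sure whether such a service is the idiomatic DDD answer either, which is exactly my question.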

+",120404,,,,,1/2/2021 15:20,Defining domain logic and finding the correct place to put it,,1,2,1,,,CC BY-SA 4.0 +420565,1,420581,,1/2/2021 13:13,,5,260,"

Context: Recently I was discussing the design of a small programme with a new team member (I am the product manager). The programme is essentially a protocol converter that reads data from a number of sockets (as a client) and inserts that data into the database. We discussed the idea of having one thread per socket to allow the programme to scale to more sockets without blocking itself. The same team member then discussed this (in general terms) with a senior dev who gave him the advice "threads should only ever block on one thing" i.e. if the same thread can block in two places, it should be two threads.

+

In the context of my programme, this would mean two threads per socket, or possibly one thread per socket plus one that does all the database insertions.
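
To make that second option concrete, here is a rough sketch of the shape we discussed: one reader thread per socket, all of them handing work to a single database-writer thread through a queue. The types below are placeholders, not my actual code:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Placeholder for whatever client wrapper actually reads from a socket.
interface IProtocolClient
{
    string ReadMessage();   // blocks until a message arrives
}

class Converter
{
    private readonly BlockingCollection<string> _queue = new();

    public void Run(IEnumerable<IProtocolClient> clients)
    {
        foreach (var client in clients)
        {
            // One thread per socket: it only ever blocks on its own read.
            Task.Run(() =>
            {
                while (true)
                    _queue.Add(client.ReadMessage());
            });
        }

        // One thread for the database: it only ever blocks on the queue (and the insert itself).
        Task.Run(() =>
        {
            foreach (var message in _queue.GetConsumingEnumerable())
                InsertIntoDatabase(message);
        });
    }

    private static void InsertIntoDatabase(string message) { /* elided */ }
}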

+

However, my question is not about my programme specifically but about the advice "threads should only ever block on one thing" - is this good advice to give to someone? Is it a well-known best practice? Or perhaps it derives from a particular school of thought or programming paradigm.

+",353683,,,,,1/5/2021 11:46,"""Threads should only block on one thing"" - is this a common best practice?",,2,4,3,,,CC BY-SA 4.0 +420571,1,420572,,1/2/2021 16:28,,-2,98,"

This code is from https://www.geeksforgeeks.org/segregate-even-and-odd-numbers/

+

It takes an array of numbers & segregates them in place without bothering about the order.

+
void segregateEvenOdd(int arr[], int size)
+{
+    /* Initialize left and right indexes */
+    int left = 0, right = size-1;
+    while (left < right)
+    {
+        /* Increment left index while we see an even number at left */
+        while (arr[left] % 2 == 0 && left < right)
+            left++;
+
+        /* Decrement right index while we see an odd number at right */
+        while (arr[right] % 2 == 1 && left < right)
+            right--;
+
+        if (left < right)
+        {
+            /* Swap arr[left] and arr[right]*/
+            swap(&arr[left], &arr[right]);
+            left++;
+            right--;
+        }
+    }
+}
+
+

The site mentions that this algorithm is O(n).

+

However, since there are inner loops inside an outer loop, I don't think this is O(n). Since the inner loops don't run through the whole array on each iteration of the outer loop, it's not O(n^2) either. How does one analyse the complexity of this algorithm? I think it's an O(n log n) algorithm, but I am not sure how to arrive at that.

+",69796,,,,,1/2/2021 17:33,What is the time complexity of this algorithm,,1,2,,,,CC BY-SA 4.0 +420578,1,420579,,1/2/2021 20:40,,3,111,"

I'm trying to learn more about the fundamentals of containerization.

+

I came across the term "OS-Level Virtualization" as the partitioning of the user space to further increase process isolation. However, I have two questions about this

+
    +
  • What is the benefit of this? Aren't processes already fairly isolated via separate virtual address spaces?
  • How is this related to different processes seeing different filesystems? Does OS-level virtualization enable processes to 'see' different filesystems? If so, how?
+",357949,,,,,1/2/2021 21:16,Trying to understand OS-Level Virtualization,,1,1,,,,CC BY-SA 4.0 +420585,1,420589,,1/3/2021 7:19,,-1,160,"

As we all know, /usr/local/sbin, /usr/local/bin, /usr/bin, /sbin, /bin, /usr/games, /usr/local/games and /snap/bin are the directories where Linux commands live (except the ones built into the shell and custom aliases). So, some applications don't need the root password to put their program launcher command on the PATH. I made a launcher for my program, and I wonder: can I put it in a local path directory so that my user can install my program without root permissions?

+

TL;DR: Is there a local path directory? Something like $HOME/.bin maybe.

+",381927,,,,,1/3/2021 10:50,Local Path on Linux,,1,5,0,1/4/2021 17:53,,CC BY-SA 4.0 +420587,1,420646,,1/3/2021 7:43,,-3,164,"

We are building a software application for a client with a particular naming convention for REST services.

+

For example, if you use a POJO for your request or response in a REST service, something like this:

+
public class Document {
+    private String codeDocument;
+    private String codeParentDocument;
+    private Double amountPayed;
+    private String nameOwner;
+    private String descriptionDocument;
+}
+
+

Your REST contract would then have to look like this:

+
{
+  "codDocument": "123",
+  "codParentDocument": "123",
+  "amtPayed": 25.5,
+  "desDocument": "owner"
+}
+
+

[EDIT]

+

As you can see, some attribute names use acronyms ("cod", "amt", "des"). There are many others that are used in this project.

+

I would like to know whether the use of these "acronyms" can be considered a bad practice or whether, on the contrary, it doesn't break any good practice.

+

Thanks in advance for your opinions!

+",92024,,92024,,1/3/2021 20:21,1/4/2021 17:02,Standarized prefixes for naming REST POJO (Bad or good practice)?,,1,4,,1/4/2021 17:53,,CC BY-SA 4.0 +420591,1,420601,,1/3/2021 11:24,,7,611,"

What is the difference between the average length of codes and the average codeword length in the Huffman algorithm? Do both mean the same thing? I am stuck on some facts:

+

I see a fact that is marked as false:

+
+

For a text whose characters come from an alphabet of n symbols, the average length of codes is O(log n).

+
+

I ran into a bigger challenge when I saw these two facts:

+
+

The average codeword length in Huffman's algorithm is Omega(log n).

+

The average codeword length in Huffman's algorithm is O(log n).

+
+

I need a very small example to make the difference between these concepts clear.
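
To show where my confusion comes from, here is the kind of tiny example I have in mind, using one possible reading of the two terms (please correct it if this reading is wrong). Take the alphabet {a, b, c, d} with probabilities 1/2, 1/4, 1/8, 1/8. Huffman coding gives codewords such as a = 0, b = 10, c = 110, d = 111, i.e. codeword lengths 1, 2, 3, 3.

  • The plain (unweighted) average of the codeword lengths is (1 + 2 + 3 + 3) / 4 = 2.25.
  • The probability-weighted average codeword length is 1/2 × 1 + 1/4 × 2 + 1/8 × 3 + 1/8 × 3 = 1.75.

Are "average length of codes" and "average codeword length" referring to these two different quantities, or to the same one?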

+

Update:

+

This is the answer to why this fact is false (i.e., why "the average length of codes is O(log n)" is false):

+

+",382358,,382358,,1/3/2021 15:52,1/6/2021 15:18,some misunderstanding in concept of Huffman algorithm,,2,3,3,,,CC BY-SA 4.0 +420596,1,,,1/3/2021 13:22,,2,81,"

Pandoc is a command-line tool and Haskell library for converting between many different markup and document formats. One of the ways Pandoc's behavior can be customized is via filters: Pandoc serializes its representation of the document into JSON and passes the JSON into the standard input of the filter; the filter can return modified JSON to Pandoc via its standard output, which Pandoc will then use to create the output document. Filters can be written in any language which can process standard input/output.

+

I've written a library for writing such filters in .NET. In order to ensure the library produces the proper JSON, I have the following test:

+
    +
  • For a given document, call Pandoc on the document to produce the JSON equivalent; pass the JSON into a dummy filter which doesn't do anything; and ensure the JSON output is semantically equivalent (source). This ensures the library itself doesn't introduce any unwanted changes to the JSON.
  • +
+

I have four dummy filters -- different variants on the base filter classes in the library -- which I am using for the tests.

+

I run this test against all the (relevant) documents in the Pandoc test documents folder. All the tests pass (save for those documents which Pandoc for one reason or another cannot parse).

+

But my problem is that running these tests on my machine takes almost 40 minutes, which feels far too long.

+

I don't want to store the generated JSON for each document (instead of having Pandoc produce it each time), as other versions of Pandoc might produce a different JSON.

+

Is this length of time a valid concern? What might I do to improve the test pipeline?

+",100120,,100120,,1/3/2021 13:58,1/3/2021 14:31,Improvements to test architecture for faster testing,,1,1,,,,CC BY-SA 4.0 +420598,1,,,1/3/2021 13:45,,5,120,"

We have a few backend services that our frontend SPAs fetch data from. Right now, the SPAs use JS libraries to authenticate with the Auth server (Azure AD) which returns a JWT which is validated by my backend services before responding to the requests. We also have a couple of native mobile apps and they too are using platform specific libraries for auth. This works fine for now.

+

But slowly the number of our SPAs is increasing, and it is becoming a pain to write and maintain the same auth code in all the applications. Moreover, we are also looking to deploy our apps on-premise for some of our clients who might have separate auth needs (say Auth0 or Okta). This is also true for our native mobile apps.

+

As such, I was thinking of removing authentication handling from our SPAs and proxy all requests through a reverse proxy like NGINX which can also authenticate requests by redirecting them to a sign-in page.

+

But I don't know if this will help us do something similar in a native mobile app. As far as I understand, since the client is not requesting a page every time it loads (like an SPA does), I am not sure where exactly the popup (or maybe redirection?) should happen in a mobile app. Or is that even possible? Is using platform-specific auth SDKs the only way in a mobile app? If so, is there a way (or a library) that is not auth-provider specific, so that I can switch out auth easily?

+",310325,,,,,1/3/2021 13:45,Is it possible to use a reverse proxy authentication in a native mobile app,,0,1,1,,,CC BY-SA 4.0 +420602,1,420614,,1/3/2021 16:22,,6,904,"

Goal:

+

I am trying to create a UML class diagram for a Java Spring application. Spring uses a lot of annotations, and I couldn't find any resources online on how to properly model them in UML.

+

I know that there is this question and it gives a good answer on how to model class annotations but it does not talk about method or variable annotations.

+

Examples:

+

Example class with annotations:

+
@RestController
+@RequestMapping("/someRoute")
+public class BaseController {
+
+  @Autowired
+  protected BaseService service;
+  
+  @GetMapping(BaseEntity.ID_MAPPING)
+  public ResponseEntity<BaseEntity> findById(
+          @PathVariable(value = BaseEntity.ID) long id
+          ) throws ResourceNotFoundException {
+      
+      BaseEntity entity = service.findById(id);
+      return ResponseEntity.ok().body(entity);
+      
+  }
+
+}
+
+

For the class annotations I would use this:

+

+

For method and attribute annotations I tried using this:

+

+

But as you can see, this gets very long and hard to read very quickly. This also would not work if something had multiple tagged annotations.

+

Question:

+

So I would like to know if there is a correct or better way to show annotations in a UML class diagram, or whether Java annotations should even be in a UML diagram at all.

+",382368,,,,,1/4/2021 0:20,Modelling java annotations in an UML class diagram,,1,4,,,,CC BY-SA 4.0 +420603,1,420604,,1/3/2021 17:53,,-2,93,"

I have a WPF MVVM application: WPF is the UI front end, and I have another project in the same solution that does DB access. I do this by starting a Task from the front end that executes the backend DB access method.

+

I'm wondering what the best practice is for handling exceptions in this case. I'm thinking the backend should handle exceptions that it can do something about and maybe fix, while unfixable problems like "Can't connect to the database" should be shown to the user by checking whether the Task object's "Exception" property is not null?
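
For concreteness, this is roughly the shape I have in mind on the front-end side if I rely on await instead of inspecting Task.Exception. All type and member names below are placeholders, not my real code:

using System;
using System.Threading.Tasks;

// Placeholder types standing in for the real backend project.
public class DataAccessException : Exception
{
    public DataAccessException(string message) : base(message) { }
}

public interface IDataService
{
    string[] GetCustomers();   // stands in for the real DB access method
}

public class CustomersViewModel
{
    private readonly IDataService _dataService;
    public CustomersViewModel(IDataService dataService) => _dataService = dataService;

    public string[] Customers { get; private set; } = Array.Empty<string>();
    public string ErrorMessage { get; private set; } = "";

    public async Task LoadCustomersAsync()
    {
        try
        {
            // Awaiting the Task rethrows any exception it captured,
            // so the ViewModel never needs to poll Task.Exception.
            Customers = await Task.Run(() => _dataService.GetCustomers());
        }
        catch (DataAccessException ex)   // e.g. "can't connect to the database"
        {
            ErrorMessage = "Could not load data: " + ex.Message;
        }
    }
}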

+",372966,,,,,1/3/2021 19:40,WPF MVVM using TPL - should I handle exceptions in the back or front end?,,1,6,,,,CC BY-SA 4.0 +420605,1,,,1/3/2021 19:48,,1,159,"

The Common Lisp Cookbook discusses how to use ftype to declare the inputs and outputs of functions. In compilers with a lot of type inferencing like SBCL, this would seem to offer a lot of support during development at both compile time and run time for error checking and generating efficient compiled code. (The extent of run time checks seem easily adjusted by changing compiler policy.)

+

However, much of the professional code I've looked at doesn't use such declarations. For example, Edi Weitz's popular cl-ppcre doesn't (although he does declaim the types of parameters). Special efficiency considerations obviously call for declarations in hot spots, but wouldn't program-wide ftype declamations also be a significant help during development? Or is this overkill?

+",381128,,,,,1/3/2021 19:48,Using ftype to Declare Functions in Common Lisp,,0,0,,,,CC BY-SA 4.0 +420610,1,,,1/3/2021 22:12,,1,497,"

The title almost sums it up but let me be example-led and clearer.

+

Assuming we have a class:

+
class T:
+  def goto(self, value):
+    print('go')
+
+

And a second inheriting class:

+
class A(T):
+  def __init__(self):
+    pass
+
+

Now, of course, instances of A can call the goto method.

+

My question is: Does this break immutability? Do we say: A always inherited T, and T always contained the goto method, so it was not mutated. Or do we say: A precedes T and inherits from T. A is therefore mutated to include the goto method and immutability is broken.

+

Does that make sense? I tend to think the latter because T could change over time. Therefore, A is not "immutable," so to speak, by virtue of having inheritance.

+

So it leaves me wondering: Is inheritance incompatible with immutability?

+",138502,,,,,1/4/2021 10:31,How is inheritance possible with immutability?,,3,6,,,,CC BY-SA 4.0 +420616,1,,,1/4/2021 6:52,,1,261,"

I have a function which returns either true or false; each return plays nicely with the function name isOnline. However, there are cases in which I want to throw an error inside of it; maybe the status server isn't available or what-not. But the problem is, errors in PHP are not that well-supported, nor is the community too keen on them. A try/catch is a foreign concept to most, so I need to return a custom ErrorObject. So, your code ends up looking something like this:

+
$online = isOnline();
+
+if( !$online ) {
+  return False;
+}
+
+if( gotError( $online ) ) {
+  //Return the error or do something, execution stops here.
+}
+
+

I personally like it. While verbose, I genuinely never found this level of error checking to be exhausting or to slow down development, but a function that has more than 2 return possibilities just feels wrong.

+

Is there any literature/thoughts about returning errors?

+",382392,,,,,1/5/2021 11:15,"Is having 3 return types for a function, in order to facilitate error handling a bad idea?",,4,7,,,,CC BY-SA 4.0 +420618,1,,,1/4/2021 8:38,,1,62,"

Context:

+

Creating a "middleware" between two services, I have to get data from the Source service and load it into the ERP service. There are multiple types of data: A, B, C. An integration process follows the following script:

+
    +
  • Get the pending IDs for A
  • Get an A for each of those IDs
  • Convert each of those A into an ERP A
  • Submit the A to the ERP
  • If any problem occurred, cancel this A
  • Else, validate this A
+

With a little logging and error handling around each of those steps. The process stays the same for A, B and C.
I would like to refactor to avoid the repetition, and to avoid having to modify multiple methods for a simple change in the overall behavior.

+
class Program
+{
+    static IServiceMock_Source ServiceSource;
+    static IServiceMock_ERP ServiceERP;
+    
+    static void Main(string[] args)
+    {
+        Integrator_A();
+
+        //Integrator_B();
+        //Integrator_C();
+    }
+
+    public static bool Integrator_A()
+    {
+        var As = GetPending_A();
+
+        if (!As.Any())
+        {
+            Logger.Log(typeof(Program),
+                Level.Debug, "There is no As to integrate", null);
+            return false;
+        }
+        Creator_A(As.ToArray());
+        return true;
+    }
+    private static List<A_Entity> GetPending_A()
+    {
+        var As = new List<A_Entity>();
+
+        int[] AsIds = null;
+        try
+        {
+            AsIds = ServiceSource.GetA_Pending();
+        }
+        catch (Exception e)
+        {
+            Logger.Log(typeof(Program),
+               Level.Error,
+                $"ERR ServiceSource.GetA_Pending : " + e
+               , e);
+        }
+
+        if (AsIds.Any())
+        {
+            Logger.Log(typeof(Program),
+                Level.Info,
+                $"AsIds : [{string.Join(", ", AsIds)}]"
+                , null);
+        }
+
+        foreach (var id in AsIds)
+        {
+            A_Entity tempA;
+            try
+            {
+                tempA = ServiceSource.GetA(id);
+                As.Add(tempA);
+            }
+            catch (Exception e)
+            {
+                Logger.Log(typeof(Program),
+                 Level.Error,
+                  $"ERR ServiceSource.GetA, Impossible to get l'ID[{id}] : " + e
+                 , e);
+            }
+        }
+        return As;
+    }
+    private static void Creator_A(params A_Entity[] a_Entities)
+    {
+        foreach (var entity in a_Entities)
+        {
+            var isACreated = CreateAInERP(entity, out string error);
+
+            if (!isACreated)
+            {
+                var err = $"Failed A creation" +
+                    $"[{entity.A_EntityEntityDbId}, entity.otherId, {entity.ProcessableEntityDbId}] [..]" +
+                    $"\nError : \n{error}";
+                Logger.Log(typeof(Program), Level.Error, err, null);
+
+                ServiceSource.CancelA(entity.A_EntityEntityDbId, true);
+
+                var source = $"MachineName:{System.Environment.MachineName}" +
+                    $", App:{System.AppDomain.CurrentDomain.FriendlyName}" +
+                    $", Path:{Environment.GetCommandLineArgs()[0]}"
+                    ;
+
+                //ServiceSource.CreateErrorMessage(new ErrorMessageDTO
+                //{
+                //    ProcessID = entity.ProcessInformation.ProcessInformationId,
+                //    Source = source.Truncate(500),
+                //    Category = this.GetType().FullName,
+                //    Query = "CreateAInERP()",
+                //    Message = err,
+                //});
+            }
+            ServiceSource.ValideA(entity.A_EntityEntityDbId, isACreated);
+        }
+    } 
+    private static bool CreateAInERP(A_Entity entity, out string error)
+    {
+        error = "";
+        A_In erpItem = null;
+        try
+        {
+            erpItem = Converter.ToERP(entity);
+        }
+        catch (Exception e)
+        {
+            error = "Erreur projection: CreateAInERP." +
+                 $" DB_id = {entity.A_EntityEntityDbId}." + e;
+            return false;
+        }
+
+        A_Response result;
+        try
+        {
+            result = ServiceERP.Submit_A(
+               new A_Request
+               {
+                   Context = new Context { },
+                   A_In = erpItem,
+               });
+        }
+        catch (Exception e)
+        {// Timeout and service exception
+            error = "Error Integration ERP: Submit_A." +
+               $" DB_id = {entity.A_EntityEntityDbId}." + e;
+            return false;
+        }
+
+        if (result.ErrorCode != "OK")
+        {// bizness Error
+            error = "Error Integration: " +
+                $"DB_id = {entity.A_EntityEntityDbId}. " +
+                $"[{result.ErrorCode}] : result.errorMsg";
+            return false;
+        }
+        return true;
+    }
+}
+
+public class Converter
+{
+    public static A_In ToERP(A_Entity entity)
+    { // Complexe mapping of ERP entities
+        return new A_In();
+    }
+    public static B_In ToERP(B_Entity entity)=> new B_In();
+    // internal static C_In ToERP(C_Entity entity)=> new C_In();
+}
+
+

This code has huge repetition between the A and B processes.
You will notice that the following code is 100% the same as A, with only a type difference.
In fact, the real code for B, C and D is written by copy-pasting the A block and using Ctrl+R+R to rename a few times.

+
public static bool Integrator_B()
+{
+    var Bs = GetPending_B();
+
+    if (!Bs.Any())
+    {
+        Logger.Log(typeof(Program),
+            Level.Debug, "There is no Bs to integrate", null);
+        return false;
+    }
+    Creator_B(Bs.ToArray());
+    return true;
+}
+private static List<B_Entity> GetPending_B()
+{
+    var Bs = new List<B_Entity>();
+
+    int[] BsIds = null;
+    try
+    {
+        BsIds = ServiceSource.GetB_Pending();
+    }
+    catch (Exception e)
+    {
+        Logger.Log(typeof(Program),
+            Level.Error,
+            $"ERR ServiceSource.GetB_Pending : " + e
+            , e);
+    }
+
+    if (BsIds.Any())
+    {
+        Logger.Log(typeof(Program),
+            Level.Info,
+            $"BsIds : [{string.Join(", ", BsIds)}]"
+            , null);
+    }
+
+    foreach (var id in BsIds)
+    {
+        B_Entity tempB;
+        try
+        {
+            tempB = ServiceSource.GetB(id);
+            Bs.Add(tempB);
+        }
+        catch (Exception e)
+        {
+            Logger.Log(typeof(Program),
+                Level.Error,
+                $"ERR ServiceSource.GetB, Impossible to get ID[{id}] : " + e
+                , e);
+        }
+    }
+    return Bs;
+}
+private static void Creator_B(params B_Entity[] b_Entities)
+{
+    foreach (var entity in b_Entities)
+    {
+        var isBCreated = CreateBInERP(entity, out string error);
+
+        if (!isBCreated)
+        {
+            var err = $"Failed B creation" +
+                $"[{entity.B_EntityEntityDbId}, entity.otherId, {entity.ProcessableEntityDbId}] [..]" +
+                $"\nError : \n{error}";
+            Logger.Log(typeof(Program), Level.Error, err, null);
+
+            ServiceSource.CancelB(entity.B_EntityEntityDbId, true);
+
+            var source = $"MachineName:{System.Environment.MachineName}" +
+                $", App:{System.AppDomain.CurrentDomain.FriendlyName}" +
+                $", Path:{Environment.GetCommandLineArgs()[0]}"
+                ;
+
+            //ServiceSource.CreateErrorMessage(new ErrorMessageDTO
+            //{
+            //    ProcessID = entity.ProcessInformation.ProcessInformationId,
+            //    Source = source.Truncate(500),
+            //    Category = this.GetType().FullName,
+            //    Query = "CreateBInERP()",
+            //    Message = err,
+            //});
+        }
+        ServiceSource.ValideB(entity.B_EntityEntityDbId, isBCreated);
+    }
+}
+private static bool CreateBInERP(B_Entity entity, out string error)
+{
+    error = "";
+    B_In erpItem = null;
+    try
+    {
+        erpItem = Converter.ToERP(entity);
+    }
+    catch (Exception e)
+    {
+        error = "Erreur projection: CreateBInERP." +
+                $" DB_id = {entity.B_EntityEntityDbId}." + e;
+        return false;
+    }
+
+    B_Response result;
+    try
+    {
+        result = ServiceERP.Submit_B(
+            new B_Request
+            {
+                Context = new Context { },
+                B_In = erpItem,
+            });
+    }
+    catch (Exception e)
+    {// Timeout and service exception
+        error = "Error Integration ERP: Submit_B." +
+            $" DB_id = {entity.B_EntityEntityDbId}." + e;
+        return false;
+    }
+
+    if (result.ErrorCode != "OK")
+    {// bizness Error
+        error = "Error Integration: " +
+            $"DB_id = {entity.B_EntityEntityDbId}. " +
+            $"[{result.ErrorCode}] : result.errorMsg";
+        return false;
+    }
+
+    return true;    
+}
+
+

Here is the code used for the mock-up. It's out of the modification scope,
but it's needed so that this MRE has no compilation errors.

+
public interface IServiceMock_ERP
+{
+    public A_Response Submit_A(A_Request request);
+    public B_Response Submit_B(B_Request request);
+}
+
+public class A_Request
+{
+    public Context Context { get; set; }
+    public A_In A_In { get; set; }
+}
+public class B_Request
+{
+    public Context Context { get; set; }
+    public B_In B_In { get; set; }
+}
+public class Context { }
+public class A_In
+{
+    public string RealDataHere { get; set; }
+}
+public class B_In
+{
+    public string RealDataHere { get; set; }
+}
+public class A_Response
+{
+    public string ErrorCode { get; set; }
+    public A_Out A_Out { get; set; }
+}
+public class B_Response
+{
+    public string ErrorCode { get; set; }
+    public B_Out B_Out { get; set; }
+}
+public class A_Out
+{
+    public string Error { get; set; }
+}
+public class B_Out
+{
+    public string Error { get; set; }
+}
+
+public interface IServiceMock_Source
+{
+    public int[] GetA_Pending();
+    public A_Entity GetA(int id_A);
+    public bool CancelA(int id_A, bool value);
+    public bool ValideA(int id_A, bool value);
+
+    public int[] GetB_Pending();
+    public B_Entity GetB(int id_B);
+    public bool CancelB(int id_B, bool value);
+    public bool ValideB(int id_B, bool value);
+
+    // etc.. 
+    //public int[] GetC_Pending();
+    //public C_Entity GetC(int id_C);
+    //public bool CancelC(int id_C, bool value);
+    //public bool ValideC(int id_C, bool value);
+
+}
+public class A_Entity : ProcessableEntity
+{
+    public int A_EntityEntityDbId { get; set; }
+    public string RealDataHere { get; set; }
+}
+public class B_Entity : ProcessableEntity
+{
+    public int B_EntityEntityDbId { get; set; }
+    public string Rename { get; set; }
+}
+
+

My question is: how can I refactor this to avoid repeating the same process in Integrator_B, Integrator_C, etc.?

+

What have I tried: I went down the road of Func, Action, and delegate,
giving code like the following.

+
public bool Integration_Generic<T>(Func<List<T>> GetItems, Action<T[]> Integrator, string NoElementErreurMessage)
+{
+    var items = GetItems();
+    if (!items.Any())
+    {
+        Logger.Log(this.GetType(), Level.Debug, $"Aucun {NoElementErreurMessage} à integrer.", null);
+        return false;
+    }
+    Integrator(items.ToArray());
+    return true;
+}
+private List<T> GetPending<T>(Func<int[]> PendingIds, Func<int, T> TGetter)
+{
+    var items = new List<T>();
+
+    int[] itemsIds = null;
+    try
+    {
+        itemsIds = PendingIds();
+    }
+    catch (Exception e)
+    {
+        Logger.Log(typeof(Integrateur),
+            Level.Error,
+            $"ERR {this.GetType().Namespace}.{PendingIds.Method.Name} : " + e
+            , e);
+    }
+
+    if (itemsIds.Any())
+    {
+        Logger.Log(typeof(Integrateur),
+            Level.Debug,
+            $"{typeof(T).Name} itemsIds : [{string.Join(", ", itemsIds)}]"
+            , null);
+    }
+
+    foreach (var id in itemsIds)
+    {
+        T tempT;
+        try
+        {
+            tempT = TGetter(id);
+            items.Add(tempT);
+        }
+        catch (Exception e)
+        {
+            Logger.Log(typeof(Integrateur),
+                Level.Error,
+                $"ERR {this.GetType().Namespace}.{TGetter.Method.Name}, Impossible de recuperer l'ID[{id}] : " + e
+                , e);
+        }
+    }
+    return items;
+}
+
+delegate V Creator_Delegate<T, U, V>(T input, out U output);
+
+

But there are too many compilation errors for this code to be functional. Getting a fresh start may be better than fixing my attempt to tinker with things I don't fully understand.
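
For reference, the direction I am aiming for looks roughly like this, as a sketch only (the interface and method names are invented, and the logging / error-message plumbing is left out): one small adapter object per entity type, driven by a single generic pipeline.

using System;

// Sketch: each adapter wraps the type-specific service calls,
// so one generic pipeline replaces Integrator_A / Integrator_B / ...
public interface IIntegrationAdapter<TEntity>
{
    string Name { get; }                                   // used in log messages
    int[] GetPendingIds();                                 // wraps ServiceSource.GetX_Pending()
    TEntity GetById(int id);                               // wraps ServiceSource.GetX(id)
    bool CreateInErp(TEntity entity, out string error);    // Converter.ToERP + ServiceERP.Submit_X
    void Cancel(TEntity entity);                           // wraps ServiceSource.CancelX(...)
    void Validate(TEntity entity, bool created);           // wraps ServiceSource.ValideX(...)
}

public static class GenericIntegrator
{
    public static bool Run<TEntity>(IIntegrationAdapter<TEntity> adapter)
    {
        var ids = adapter.GetPendingIds();
        if (ids.Length == 0)
            return false;                                  // "there is no X to integrate"

        foreach (var id in ids)
        {
            var entity = adapter.GetById(id);
            var created = adapter.CreateInErp(entity, out var error);
            if (!created)
            {
                Console.Error.WriteLine(error);            // real code would log and create the error message
                adapter.Cancel(entity);
            }
            adapter.Validate(entity, created);
        }
        return true;
    }
}

Each of A, B and C would then only need its own small adapter implementation, but I am not sure this is the cleanest way to get there.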

+",382394,,397719,,1/6/2022 7:16,1/6/2022 7:16,Refactoring similar integration service code block,,1,6,,,,CC BY-SA 4.0 +420620,1,420621,,1/4/2021 9:31,,4,53,"

In the UML specification 2.5.1 (Link) on page 117 it is specified that the notation of operations (methods) should look like the following:

+
[<visibility>] <name> ‘(‘ [<parameter-list>] ‘)’ [‘:’ [<return-type>] [‘[‘ <multiplicity-range> ‘]’]
+ [‘{‘ <oper-property> [‘,’ <oper-property>]* ‘}’]]
+
+

What irritates me are the blanks. If I set them as described in the specification above, then they are unfortunately not consistent with the example found in the same chapter on page 119. Here the example looks like the following:

+
+createWindow (location: Coordinates, container: Container [0..1]): Window
+
+

See for instance: In the example, there is no blank between the <visibility> and the <name> but in the specification, there is a blank between them.

+

Can someone help me understand this inconsistency? Why are the blanks set so strangely anyway? If one wants to make it 100% correct, how would the blanks be set?

+

Kind Regards and Thanks, +Raphael

+",382396,,,,,1/4/2021 23:36,UML v2.5.1 correct notation of blanks in operations (methods)?,,1,0,,,,CC BY-SA 4.0 +420623,1,,,1/4/2021 9:48,,0,131,"

I have a method GetReportAsync that takes one XML and generates another:

+
public async Task<string> GetReportAsync(string id)
+{
+    //  Get Order.xml from a file database
+    var order = await _dataService.GetOrderAsync(id);
+
+    var report = _reportGenerator.WriteReport(order);
+
+    var xml = SerializeToXml(report);
+    
+    return xml;
+}
+
+

Using XMLUnit.NET, I write snapshot tests as an automated check that a generated report will match a certain reference XML document, a stored snapshot.

+
[TestMethod]
+[DataRow("12345")]
+[DataRow("67890")]
+public async Task Snapshots_Should_Match(string id)
+{
+    var actual = await _testClass.GetReportAsync(id);
+
+    var expected = Input.FromFile($@"Snapshots\__snapshots__\{id}.xml");
+
+    var diff = DiffBuilder
+        .Compare(expected)
+        .WithTest(actual)
+        .Build();
+
+    Assert.IsFalse(diff.HasDifferences());
+}
+
+

Now, I realize that the Order.xml file can be retrieved in two ways:

+
    +
  • Alternative A: Store a number of Order.xml files in the project and implement _dataService.GetOrderAsync to read from these files.
  • Alternative B: Get the actual Order.xml from a test (or production) database, just like in the real implementation.
+

Alternative A will assert that the method works given the order files in their exact state from when this snapshot test was written. However, I struggle to see the reason for such a guarantee.

+

Alternative B will give me tests that fail if the data service for some reason changes its response, possibly because of a non-backwards compatible change they introduce. To me this seems to give me more value than alternative A. However I do see that testing against a real database would possibly break some fundamental rules.

+

Furthermore, since the method is not writing to the database, then why not test against the production database instead of a test database?

+",339678,,339678,,1/4/2021 15:13,2/4/2022 21:08,Should snapshot tests compare against stored test data or data from a database?,,1,4,,,,CC BY-SA 4.0 +420625,1,,,1/4/2021 10:39,,-3,391,"

In many discussions I learnt that it is undesirable (forbidden) to expose IQueryable from the repository pattern.

+

What is the best practice then for server-side filtering and paging?
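
To make the question concrete, here is a rough sketch of the kind of repository signature I imagine when IQueryable is not exposed (the entity and filter types are made up):

using System.Collections.Generic;

// Placeholder domain types.
public class Order { public int Id { get; set; } }
public class OrderFilter { public string CustomerName { get; set; } }

public class PagedResult<T>
{
    public IReadOnlyList<T> Items { get; set; } = new List<T>();
    public int TotalCount { get; set; }
}

public interface IOrderRepository
{
    // Filtering and paging happen inside the repository (translated to SQL there),
    // so no IQueryable ever leaves the data layer.
    PagedResult<Order> GetOrders(OrderFilter filter, int page, int pageSize);
}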

+",110839,,,,,1/4/2021 11:44,"How to filter and add paging, if we must not expose IQueryable at Repository pattern?",,1,5,,,,CC BY-SA 4.0 +420629,1,,,1/4/2021 11:47,,1,325,"

An Aggregate Root should always have a unique ID within the bounded context. Typically the examples one finds use a GUID for this to ensure global uniqueness.

+

However consider a bounded context for a Chat. In this case I deem messages and chats as their own individual aggregate roots. One may consider Message an entity of Chat, however if messages are to grow without bounds, this is infeasible.

+

Therefore a Message would hold the reference to the Chat to which it belongs, by ID. In this case I would need a large enough message Id to ensure that it is unique w.r.t. all other messages independent of Chat.

+

I am wondering if it is bad practice to instead make a composite key for Message of the form (ChatId, MessageId). This would ensure uniqueness, and at the same time I do not need MessageId to be as large as mentioned above, thereby saving some space.
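
In code, the composite key I have in mind would look roughly like this (C# record syntax; the type names are invented):

using System;

public readonly record struct ChatId(Guid Value);

// Unique only in combination: the sequence number can stay small because it is scoped to one chat.
public readonly record struct MessageId(ChatId Chat, long Sequence);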

+",382375,,,,,1/4/2021 15:00,Composite Id based on another Aggregate root?,,2,12,,,,CC BY-SA 4.0 +420630,1,,,1/4/2021 12:09,,1,121,"

Situation

+

Right now, I have come to realize that at my present position I am not completing requests at a regular interval; my request completions are spaced out. To approach my true ability, I want to try, as a first attempt, to push out 3 moderate-sized commits in the span of 1 workday (8 hours).

+

Scope

+

It is fair to mention that I work with vanilla PHP, MySQL and Oracle, with no framework other than jQuery, and that at present I have taken ample research time that I can hone in on.

+

Definition

+

One commit corresponds to an incoming support request received by email, some of which are outstanding but not ready for commit. Right now I am including uncommitted, finished support requests that will be pushed through first.

+
+

I am asking for feedback from the community here as to overall effectiveness of my process in a span of 2 hours maximum with the above information provided

+
+

Proposed 13 Step Solution (1 Hour 30 Minutes - Max 2 Hours)

+

Proposed Solution to enhance my productivity (check back immediately by email with manager or requester if I spend more than 10 minutes on a portion each step without comfortable progress)

+

Preparatory Work for process of steps I have identified, with bullet points specified at first

+

Identify 4 support requests in inbox and email back log to resolve, considering priority and time to completion. Check with Manager or requester throughout.

+

For each support request

+

Capture problem and verify understanding by email to requester (20 Minutes)

+

10 Minutes +0) Read the email one or two times and write in my own words the problem requested. Open web page of target request to verify problem encountered. Write down questions and clarification needed in step 1...

+

5 Minutes

+
    +
  1. Create bulleted points of the request to capture the scope of the current problem (when I do this... this happens), along with the bulleted requirement points that I created (this should happen instead...)
  2. +
+

5 Minutes

+
    +
  1. Once received, begin formulating a phrased, bulleted requirements list; break down each specific requirement and ask the requester whether the phrased, bulleted requirements capture the scope of the solution

    +
  2. +
  3. Email the requester with a brief email addressing 0), 1) and 2), asking whether I understand both the problem and the solution correctly and completely; wait for a response and make the necessary revisions before proceeding.

    +
  4. +
+

Test Case Formulation (10 minutes)

+

5 Minutes

+
    +
  1. Write test case list for proposed solution
  2. +
+

5 Minutes

+
    +
  1. Test plan adhere to each test case (directly driven by requirements)
  2. +
+

Formulate solution (20 minutes)

+

10 minutes +6) Write pseudocode and draw flow chart (half page, one page respectively)

+

10 minutes +7) Write out language specific (PHP primarily, and SQL) lines of code to match step 5

+

Implementation/Test and Validation (40 minutes)

+

30 Minutes

+
    +
  1. Perform code entry for list of bulleted feature requirements points and incrementally verify inserted code lines into proper code region with testing against test cases
  2. +
+

10 minutes

+
    +
  1. Once all cases have passed required functionality... +Validate by self the complete implemented solution against principal desired function to check if the +understood and desired functionality was met

    +

    Once validated, share workable solution with requester or manager independently validate that solution functions as intended

    +
  2. +
+

Revise solution (10 minutes)

+
    +
  1. If not, then get clarification and revise solution starting from step 6
  2. +
+

Commit # (5 minutes)

+
    +
  1. If yes, then begin commit process to appropriate branches
  2. +
+

Move to Next Support Request

+
    +
  1. Move to next support request
  2. +
+

Total planned time for uniform task (1HR 45 MIN up to 2 HR)

+",207922,,207922,,1/4/2021 17:12,1/4/2021 17:12,Productivity - Pushing towards 3 (at max 4) completed requests followed by code commits in a span of 8 hour interval on novel support requests,,2,2,,,,CC BY-SA 4.0 +420643,1,420729,,1/4/2021 16:27,,1,144,"

I have written a piece of software as a student. All functionality was in the software and it was a 30-day, fully functional version. After some years there were cracks around and my income dropped to 10% of what it had been before. This was when I decided to switch the paradigm to a full version vs. a demo version with limited functionality (the functions weren't even in the EXE, thanks to compiler switches). In my case this ended all piracy and worked very well.

+

Now you could ask what if a customer uploads his full version to a crack website? For this case I compiled the customer's full address visibly and invisibly into the EXE file so that I could see which customer was a bad boy...

+

After some more years I had a new problem: anti virus software. Since my software can set keyboard shortcuts, the heuristic algorithms of some anti virus apps started complaining. So I sent the demo version EXE to the anti virus companies to mark it as "safe". This worked very well. But only for the demo version which is fixed in bytesize. When I compile the customer's personal data into the EXE file, the filesize varies a bit and so the checksum differs and the EXE file isn't marked as "safe" by the anti virus software anymore and the complaining starts again for the customers.

+

Does anyone have an idea how I could solve this? I can't add a separate file because this could be deleted by the customer, of course.

+

Thanks in advance.

+",382430,,,,,1/7/2021 5:55,"Copy protection for Windows software for the case ""demo version vs. full version""",,3,10,,,,CC BY-SA 4.0 +420647,1,420705,,1/4/2021 17:45,,1,52,"

To simplify, I'll say I'm working with Lists (each containing an array of Movies) and Movies.

+

Problem

+

I have two options to update both the note in the list, and the global note of the Movie.

+

Single Calls

+
POST www.example.com/api/lists/:id/ // Creates a new movie entry in the list, updates Movie's note
+
+PUT www.example.com/api/lists/:id/movies/:movieId // Updates entry, updates the note in Movie 
+
+DELETE www.example.com/api/lists/:id/movies/:movieId // Delete the entry, remove the note from Movie
+
+

Multiple Calls

+
POST www.example.com/api/lists/:id // Create a new list
++
+PUT www.example.com/api/movies/:id // Updates the note (total =+ note, nbNotes++)
+
+
+PUT www.example.com/api/lists/:id/movies/:movieId // Updates the Movie's note in the list
++
+PUT www.example.com/api/movies/:id // Updates the Movie's notes (total =+ (new-old))
+
+DELETE www.example.com/api/lists/:id/movies/:movieId // Remove the entry
++
+PUT www.example.com/api/movies/:id // Updates the Movie's notes (total =- note, nbNotes--)
+
+

I would tend to use the Single Calls but have no idea if modifying another resource is correct. (Considering that the Movie's note will probably never be modified by its PUT API anywhere else.)

+

I tried to take into account other StackExchange's answers but they seem more focused on the retrieval of information in the frontend whereas mine is more focused on updating data.

+
+

It feels like the Single Calls introduced some concurrency problems on implementation (or I coded it very poorly) as sometimes the note updates negatively (as an update with note: null, decreases the total note because an entry in a list can not have a note)

+
+",382433,,342873,,1/5/2021 21:24,1/5/2021 21:24,Use multiple or a single endpoint to modify different related ressources,,1,0,,,,CC BY-SA 4.0 +420650,1,,,1/4/2021 19:05,,1,82,"

I'm designing a new language and the package-management system for it (something like NPM, Cargo, Pip, Gem, Cpan, Cabal, NuGet or the like).

+

I'm trying to decide what's a good way to handle the versioning of a package when only part of it is updated. I'm also curious to learn what's the best way to solve this problem with existing package managers.

+

Example of the problem

+

Let's suppose a package exports some types and functions that many other packages use:

+
# package: string 1.0
+
+type String {
+  length: Number,
+  data: byte*
+}
+
+function from_NUL_terminated( data: byte[] ) -> String { ... }
+function to_NUL_terminated( string: String ) -> byte[] { ... }
+function from_repeated_char( length: Number, char: byte ) -> String { ... }
+function get_UTF8_length( string: String ) -> Number { ... }
+# ...
+
+

At some point, something in the "string" package needs to be changed. For instance a bug in from_NUL_terminated() gets fixed, or a new function get_UTF16_length() is added.

+

Because of this change, a new version of the "string" package is released: string 1.1.
+This makes everything incompatible between the two versions: the language considers String from "string 1.0" a different type from String from "string 1.1". Even though the String type didn't change: only other stuff in that package did.

+

A consequence is that packages that depend on a different version of "string" cannot pass Strings to each other. A little function that is only used internally by a few packages causes a major split in the ecosystem of the language.

+

Obviously this problem must be avoided. How?

+

Further thoughts about the problem

+

From the example above it sounds like only types need to be protected from this issue. That's practical in most cases, but it's not technically correct. Ideally functions that don't change shouldn't be re-released in the new version either.

+

It sounds like every item in a package should have its own versioning, rather than package itself: every type, every function, every piece of data or metada.

+

What's the solution?

+

How do existing languages and their package managers handle this situation?

+

And how could I handle it for my new language and its new package manager? Can the package manager fix this problem on its own for any language, or would it need support from the language to do things properly?

+

Addendum: minimal examples showing the problem in Rust and Node.js

+

I created a project in Rust where two different versions of the crate "mystr" are used by the "main" crate and the "printerlib" crate: https://github.com/BlueNebulaDev/rust-version-test . (a crate is a package in Rust terminology)

+

The "mystr" crate exposes a type MyStr. This type is defined identically in both version. "mystr 2.0.0" also exposes a function that wasn't available in the previous version: mystr::from_slice().

+

The program can't compile, because the "main" package creates an mystr(1.0.0)::MyStr object and tries to pass it to a function that expects a mystr(2.0.0)::MyStr object. However MyStr never changes in the two versions: it's only other functions in the same crate that change.

+

I also created a Node.js project showing the same issue in that environment: https://github.com/BlueNebulaDev/node-version-test/ .

+",382435,,382435,,1/4/2021 22:48,1/4/2021 22:48,How to manage versioning for changes that only affect some pieces of a package?,,1,2,,,,CC BY-SA 4.0 +420655,1,,,1/4/2021 20:26,,1,407,"

I want to develop an end-to-end machine learning application where the data will live in GPU memory and the computations will run on the GPU. A stateless RESTful service with a database is not desirable, since the traffic between GPU memory and the database would destroy the "purpose" of it being fast.

+

The way I see it, I need a way to "serve" the class (let's call it the experiment class) which has the data and the methods, and then call them using REST APIs.

+

Right now I am using FastApi and initialize the experiment class in it which I believe is not optimal. My class (as well as the data) lives in FastAPI runtime. Kinda like,

+
import experiment_class            # module holding the experiment class (placeholder)
+from fastapi import FastAPI
+
+app = FastAPI()
+my_experiment = experiment_class.Experiment()   # "Experiment" is a placeholder name; the object lives in the FastAPI process
+
+@app.get("/load_csv")
+def load_csv():
+    my_experiment.load_csv("some_file_path")
+
+# do some more on the data
+...
+
+

There are two problems I am having a hard time with,

+

One of them is the terminology:

+
    +
  • Is this really a stateful application?
  • Is there a word to describe what I am doing? Is this a "Model, View, Controller" design, can it be a simple "Server-Client" or is it something completely different?
  • Do I need a "Web-server", a "Web-framework" or a "Web-service" for this?
+

Another one is what technology I can use for this :

+
    +
  • Is it okay to use FastAPI like this?
  • Do I set up an RPC (Remote Procedure Call) server and call it using a REST API?
  • Is a WSGI or an ASGI server suitable for this task?
  • Are Django, Flask and Tornado-like web frameworks only used for stateless apps? Because nearly all of the examples are.
  • Do I stick to bare-bones Python, where I use threads or BaseManager servers?
+

P.S. What I meant with end-to-end machine learning is that I should be able to load data, process it, and give it to the model for training all the while without leaving the GPU-memory. You can think of a Jupyter-notebook, but we call the cells with rest API.

+",382445,,,,,1/4/2021 22:44,Best approach for developing a stateful computation-heavy application with a rest-api interface using python?,,1,0,,,,CC BY-SA 4.0 +420662,1,,,1/5/2021 0:17,,2,599,"

I've read a number of conflicting articles as to whether microservices should share a database. How and when should microservices communicate?

+

Someone posed the example of 2 microservices:

+
    +
  1. Employee
  2. Department
+

Suppose the Employee microservice needs information about a department.

+
    +
  • Should these microservices share a database?
  • Should they communicate over REST?
  • Should they duplicate data in each of their own databases?
+",380999,,319783,,1/5/2021 4:07,1/5/2021 12:24,"In a micro service architecture, how should two services communicate with each other? Shared database? REST calls?",,3,4,,,,CC BY-SA 4.0 +420663,1,420683,,1/5/2021 1:25,,2,162,"

Let's say a process (P1) is asking for 100 MB of memory, and the RAM looks like this:

+
[[50 MB free] [USED] [60 MB free] [USED]]
+
+

Since there is technically enough memory available (110 MB free), what would happen? According to some sources I saw online, the OS will just refuse to allocate the memory, but then again, isn't Linux only supposed to throw a memory error when there isn't enough memory?

+

Thanks

+",375680,,,,,1/6/2021 19:50,"Can the operating system ""break up"" a memory allocation (Linux)?",,2,3,1,,,CC BY-SA 4.0 +420665,1,420667,,1/5/2021 1:41,,5,822,"

I wrote this valid piece code, which made me wonder if there was a name for it:

+
public class GenericObject<T> {
+    public T Obj { get; set; }
+}
+public class DerivedClass: GenericObject<DerivedClass> { }
+
+

This leads to the capability of:

+
var x = new DerivedClass();
+x.Obj = x;
+x.Obj.Obj = x;
+x.Obj.Obj.Obj = x;
+// ...
+x.Obj.Obj.Obj.Obj.Obj.Obj.Obj.Obj.Obj.Obj...Obj = x;
+
+

Which is sure to raise a lot of eyebrows depending on the use case.

+
+

Is there a name for this? If so, what is it called, and what is a practical application?

+",319749,,319749,,1/5/2021 18:56,1/5/2021 18:56,Is there a name for this construct with generics?,,2,5,,,,CC BY-SA 4.0 +420666,1,,,1/5/2021 1:49,,1,147,"

I am working on a project which connects to different data sources and fetches data. The problem is each of these data source needs different parameters to fetch the data

+
s3 = S3(ACCESS_KEY, SECRET_KEY, BUCKET, HOST)
+db = DB(HOST, USERNAME, PASSWORD, DB_NAME, SCHEMA)
+sftp = SFTP(HOST, USERNAME, PASSWORD)
+     
+
+

The fetch_data function also has a different signature for each source:

+
s3.fetch_data(folder_path, filename)
+db.fetch_data(table_name, filter_args)
+sftp.fetch_data(file_path)
+
+

How do I design a common interface that can stream data from and to any of the above data sources (defined dynamically via a config)? Is there a design pattern that addresses this problem?

+

I have looked into strategy pattern but I assume that it applies to cases where the behavior changes but the is-a relationship prevails.

+

In the case of the repository pattern, there needs to be a common object across multiple storages.

+

Neither case applies here.

+",285945,,285945,,1/5/2021 2:29,1/6/2021 14:49,Creating an interface that connects to different data sources,,2,3,,,,CC BY-SA 4.0 +420669,1,,,1/5/2021 3:13,,2,1145,"

We have a service (let's say FileService) that provides an API to store/download files to various stores (S3, local file, etc.).

+

We have a scenario where one microservice (Service A) writes a file to S3 via FileService and the same file needs to be read by another microservice (Service B). In an ideal scenario, Service A will have an API exposed to read/download this file; Service B can leverage this API and be able to read it.

+

However, due to the file(s) being huge, we wanted to see if its OK to have Service B read/download the file directly via FileService. (Given that Service A agrees that its fine to provide read access to Service B)

+

In this case, as the data being shared across microservices consists of files, is this an acceptable pattern? Do we foresee any issues with this approach?

+",382462,,,,,1/16/2021 13:57,Share files between microservices,,2,0,1,,,CC BY-SA 4.0 +420673,1,,,1/5/2021 6:22,,2,40,"

Tech Stack: I am using MySQL 8 with InnoDB engine. Application is built using Java with Spring & Hibernate. Database is hosted in AWS RDS and web apps in EC2.

+

I have two primary tables for handling orders:

+
    +
  • Orders (Rows Count = 1,294,361)
  • +
  • Orders_Item (Rows Count = 2,028,424)
  • +
+

On peak days, my storefront generates orders at a rate of 30 orders per minute and each order information is written primarily in above mentioned tables.

+

I have a separate project for OMS (Order Manager System) which lookup in same tables to provide me list of pending orders, changing their status, fulfillment etc. With this order generation rate, it usually causes slowdown of OMS order list page. Also I have a CMS (Customer Management System) which also lookup in same tables for handling customer queries related to their orders.

+

Hence, these two tables are used at a very high rate that causes slowdown on one or other place. I am using best possible indexes in these tables.

+

I am thinking of below mentioned solution:

+
    +
  • Maintain duplicate order data: one copy will serve the CMS and new order creation as the master, and the duplicate will serve the OMS
  • +
+

But I am not sure if this is the right approach. Please share your inputs.

+",52534,,52534,,1/5/2021 6:34,1/5/2021 6:34,Scaling Order Management System for ecommerce,,0,1,,,,CC BY-SA 4.0 +420676,1,420687,,1/5/2021 9:30,,-4,663,"

The Command Query Responsibility Segregation (CQRS) and Model–View–Controller (MVC) patterns look pretty similar to me.

+

Are they comparable? Do they act at the same layer of abstraction? How do they differ? Can they be used together, or does one replace the other? What am I missing?

+",382469,,,,,1/5/2021 13:47,Comparison of CQRS and MVC,,1,4,,,,CC BY-SA 4.0 +420682,1,,,1/5/2021 12:25,,-1,45,"

Assume that I am implementing a method that takes a data source in a system. Assume that it's a multi-tenant system, so a data source belongs to an organization (as do other relevant entities, like users, datasets, etc.). There are two ways to go about it:

+
    +
  1. getDataSource(dataSourceName, organizationId)
  2. +
  3. getDataSource(organizationId, dataSourceName)
  4. +
+

Which one is better, and why?

+",22047,,,,,1/5/2021 12:38,Method parameter ordering,,1,0,,1/5/2021 13:02,,CC BY-SA 4.0 +420685,1,,,1/5/2021 13:32,,0,35,"

I have been searching for an answer on this topic, but I haven’t been able to find a satisfactory one, unlike other topics where the consensus is solid.

+

The situation

+

To keep things simple: I am implementing a custom Dependency Injection Container in one of my current projects (I know, I should use an already-built one, but I’m doing it for learning purposes, so answers like ‘use this feature of that container…’ are not useful) and I’ve stumbled upon a problem with the instantiation of new elements inside a collection.

+

The problem

+

Imagine that I have a complex object, for example a car. This car has several dependencies (engine, axis, seats, airbags…) that have, at the same time, their own dependencies, and so on. It is not a big issue to make the DiC (via autowiring or using a config file) build the object graph and inject all the dependencies with a simple line of code like:

+
$car = $container->get('car');
+
+

The problem arrives when I build a CarCollection, which is a simple class that wraps an array of cars. The issue comes when I try to use a method that populates the collection with all the cars that exist in the database. It’s obvious that the collection should be able to create the Car objects on the fly when we call the “getAll” method on the database. The code would be something like this:

+
public function populate(array $filters) {
+    $all_data = $this->dao->getAll($filters); // Call the data access object to query all cars.
+    foreach($all_data as $data) {
+        $new_object = $this->container->get('car'); // create a template object via the container
+        $new_object->setData($data); // set the info.
+        $this->items[] = $new_object; // Add to the collection.
+     }
+}
+
+

If Car were not such a complex object it would be easier, because I could pass the car's FQCN as a parameter to CarCollection and use it in every iteration. But that’s not practical for a very complex object (or if I want to instantiate different subtypes of the object - for example lorry, pick-up, van… - depending on information from the database).

+

The question.

+

Regarding the collection being aware of the container: doesn't it defeat the purpose of the DIC philosophy?

+

On the one hand I guess not, because I am using PSR\Container to type-hint the container I pass to the collection (which loosens the coupling). But it still breaks the idea that the container should not be coupled to the domain model at all.

+

The only alternative I have thought about is replacing the creation of one new object per iteration with cloning a prototype object that lives in the collection as a property. But we all know cloning in PHP can get really tricky and very difficult to debug (or worse: very difficult to even know that there is a problem going on).

+

Similar issue.

+

PS: I have the same problem when I try to do lazy loading using Proxy objects: I need the proxy objects to have access to the container if I want to instantiate the full object later, which also breaks the principles of a DIC.

+

Thank you all.

+",382485,,,,,1/5/2021 13:32,Dependency Injector and Collections,,0,3,,,,CC BY-SA 4.0 +420686,1,,,1/5/2021 13:44,,2,1874,"

In a typical Java Spring Web APP:

+

we have the following layers:

+

Model [DB Models]

+

Repositories [where you have queries to DB]

+

Services [Business service where you have the @Transactional annotation]

+

Controllers [Rest endpoints]

+

So for a simple model e.g Car

+
@Entity
+Car {
+   Long id;
+   String name;
+   @ManyToOne(fetch = FetchType.LAZY) // Notice lazy here
+   Engine engine;
+}
+
+interface CarRepo extends JpaRepository<Car, Long> {....}
+
+@Transactional
+CarService {
+ ....
+}
+
+@RestController
+CarController{
+  @GET
+  public CarDto getCar(Long id) {
+      ???
+  }
+}
+
+

??? : Here is the big dilemma. I use MapStruct to convert objects to other formats, and whenever I use it as in the first scenario below I get a LazyInitializationException.

+

Scenario #1: Get the model in the controller (which is not so good to do, especially since models should be encapsulated from the view layer) and convert it to CarDto:

+
CarController{
+      @GET
+      public CarDto getCar(Long id) {
+          Car car= carService.getCar(id);
+          return carMapper.toCarDto(car); // BAM `LazyInitializationException`, on `Engine` field!!! 
+      }
+    }
+
+

But here is the problem: when the mapper starts to convert Engine, it gets a LazyInitializationException, since the transaction was already committed and closed in the service and Engine is lazily initialized.

+

That moves us to Scenario #2: do the conversion in the service while you still have the transaction open, i.e. update the getCar method to return a CarDto instead:

+
  @Transactional
+  CarService {
+       CarDto getCar(Long id) {.... return mapper.toCarDto(car);} // Hurrah!!, no lazy exceptions since there is a transnational context wrapping that method
+    }
+
+

But here is another problem: other services also use Car. Suppose we have a FactoryService and we want to get a car by id so that we can assign it to a factory model; then we definitely need the Car model, not the DTO:

+
FactoryService {
+    void createFactory() { 
+      Factory factory = ....;
+      Car car = carService.getCarModel...
+      factory.addCar(car);
+    }
+}
+
+

So a simple solution is to add another method to CarService with a different name that returns the model this time:

+
@Transactional
+    CarService {
+       CarDto getCar(Long id) {.... return mapper.toCarDto(car);}
+       Car getCarModel(Long id) {.... return car;}
+    }
+
+

But as you can see, it is now ugly to have the same function twice with the same logic and only two different return types; it will also lead to a lot of duplicated methods of this kind across the services.

+

Finally we have Scenario #3, which is simply to use Scenario #1 but move the @Transactional annotation to the controller. Now we won't get the lazy exception when we use MapStruct in the controller (but this is not a recommended thing to do, since we are taking transaction control out of the service (business) layer):

+
@Transactional
+CarController{
+      @GET
+      public CarDto getCar(Long id) {
+          ???
+      }
+    
+    }
+
+

So what would be the best approach to follow here?

+",92003,,92003,,1/8/2021 14:48,1/11/2021 14:16,Best way to handle lazy models with mapstruct and spring transnational scope,,1,17,2,,,CC BY-SA 4.0 +420689,1,,,1/5/2021 13:58,,1,506,"

In my application, users can perform actions on a few thousand aggregate root instances with a single click. The problem is that the UI is blocked for several seconds (~3), which feels too slow. So I'm looking for ways to improve the database operation.

+

The respective entity class looks (simplified) like this:

+
class InspectionPoint {
+  val id: InspectionPointId
+  val version: Short
+  val description: String
+  val maintenanceLevels: Set<MaintenanceLevelId>
+}
+
+

The application uses JPA/Hibernate for persistence. The current behavior is that all aggregate roots are fetched from the database, updated by the application and then written back to the database with lots of update, delete, and insert statements. The flow is as follows:

+
    +
  1. Fetch all entities (aggregate roots) from the DB
  2. +
  3. n * update entity (increment version in this case)
  4. +
  5. n * delete maintenanceLevels from collection table
  6. +
  7. n * insert maintenanceLevels into collection table
  8. +
+

As you can see, there are lots of database statements.

+

The question is how to speed it up. Since every aggregate root carries a version attribute for optimistic concurrency control, it wouldn't be possible to just manipulate the collection table. But maybe this flow would work:

+

Performing updates without loading entities

+
    +
  1. update all InspectionPoint rows with the given IDs directly in the database (increment version)
  2. +
  • insert or delete rows in the collection table for maintenanceLevels, which would require distinguishing the two operations in the client, the public (HTTP) API, and the application service.
  4. +
+

The main disadvantages are:

+
    +
  • client, HTTP service, and application service need to be modified
  • +
  • domain logic in the entities gets completely bypassed
  • +
  • custom SQL is required, which takes some work and makes maintenance harder
  • +
+

Although the performance should be pretty good, there are some severe disadvantages, too.

+

Do you have any other suggestions for how to handle bulk updates of aggregate roots?

+",63946,,,,,3/6/2021 21:01,Bulk Update of DDD Aggregate Roots,,1,2,1,,,CC BY-SA 4.0 +420692,1,420741,,1/5/2021 14:20,,7,322,"

I have implemented idempotent order placement (mostly to avoid accidental double submissions) but I am not sure how to handle incomplete operations. Example scenario:

+
    +
  1. User tries to place an order.
  2. +
  3. An order instance with status PENDING_PAYMENT is created in the DB.
  4. +
  5. Order payment succeeds (3rd party processor, supporting idempotence keys, e.g. Stripe).
  6. +
  7. My DB fails to update order status to PAID (e.g. it suddenly went down for a minute) and user receives some error.
  8. +
+

Since the whole operation is idempotent, it is safe to retry the operation, and some (most?) users would choose to do that.

+

But what if the user abandons the operation?

+
    +
  • I could implement a Completer process, which would push all incomplete operations through to completion. However, this might come as a surprise to the user.
  • +
  • I could combine the Completer with the assumption that it will eventually be able to successfully place the order, in which case I wouldn't even have to alert the user. However, in the odd case of a failure, I'd have an even more surprising outcome - the once-successful order would now appear to have failed.
  • +
+

Questions:

+
    +
  1. What are some ways of dealing with this situation?
  2. +
  3. What would the user typically expect?
  4. +
+
    +
  • 2.1. Should I let the user know exactly what happened (i.e. payment ok, status not ok), inform the user of a generic failure (something went wrong, please retry), or let them know nothing at all?
  • +
  • 2.2. If I inform the user of a generic error, they might decide to update their basket and then resubmit the order. I was thinking that the way to deal with this is to simply generate a fresh idempotence key and create a second order. What are the alternatives?
  • +
+

Additional details:

+
    +
  • I don't expect a high rate of failures, but I want to be prepared.
  • +
  • I am not dealing with big money or sensitive data - consider this a simple e-shop.
  • +
+

Update

+

I actually followed this article from Brandur Leach whilst implementing my idempotent operations, in case you're interested: https://brandur.org/idempotency-keys.

+

I contacted Brandur directly regarding my problem and you can see what he had to say for yourselves: https://github.com/brandur/sorg/issues/268. The gist is that I should always push all operations to completion, which agrees with the answers here. I can then decide what to do with the result. There may be multiple ways of informing the user too.
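
+

To make the "push to completion" idea concrete, this is roughly the kind of background sweep I am now considering (plain Python sketch; every name, field and method here is made up, not a real API):

+
from datetime import datetime, timedelta
+
+
+def complete_abandoned_orders(db, payments, notify, stale_after=timedelta(minutes=30)):
+    """Drive orders stuck in PENDING_PAYMENT to a final state (PAID or FAILED)."""
+    cutoff = datetime.utcnow() - stale_after
+    for order in db.find_orders(status="PENDING_PAYMENT", updated_before=cutoff):
+        # Retrying the charge is safe because the stored idempotency key is reused.
+        result = payments.charge(order["amount"], idempotency_key=order["idempotency_key"])
+        new_status = "PAID" if result["succeeded"] else "FAILED"
+        db.update_order(order["id"], status=new_status)
+        notify(order["id"], new_status)  # however we decide to inform the user
+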

+",315196,,315196,,1/7/2021 15:44,1/8/2021 3:50,How to deal with abandoned idempotent operations?,,3,3,1,,,CC BY-SA 4.0 +420701,1,420702,,1/5/2021 18:28,,0,69,"

I'm practicing a use-case exercise, but it's hard for me to know when it's OK or acceptable. The exercise is about a system for the sacraments of Catholic churches, and one part says:

+

The archbishopric requires that:

+

1) All baptisms and weddings performed in Catholic churches in the province must be registered in a central database which can be accessed from any Catholic church (to perform the online documentation checks at the moment a turn is given)

+

2) The PCs of each church have to have an internet connection. Each parish priest needs to have a user account with a password to access the system

+

3) Baptismal certificates and proofs of marriage must be able to be printed according to the official format of each type of certificate and handed over independently of the church where the sacrament was celebrated

+

4) Know if there was a previous marriage and its current state. Clarification: if the bride or groom had a previous marriage but it was annulled, or they are widowed, then a new marriage is authorized

+

5) Know the reasons for which a marriage was annulled

+

From these points I derived the following use cases:

+
    +
  1. Register Baptism, Consult Baptism, Register Marriages, Consult Marriages, Register Turn, Consult Turns, Modify Turns
  2. +
  3. Generate Sacrament Certificate
  4. +
  5. and 5) Consult Marriages
  6. +
+

and the actor Parish Priest, so my diagram is:

+

+

But I'm not sure how to model the user accounts of the parish priests from point 2) or the printing of the certificates from point 3), and I would like to know your opinion of the use cases derived so far. Any advice would be greatly appreciated.

+",382089,,382089,,1/5/2021 19:20,1/6/2021 6:16,Doubts for modeling with use cases,,2,0,,,,CC BY-SA 4.0 +420707,1,,,1/5/2021 19:58,,7,358,"

I started at this new company a few weeks ago; this is the CTO's CI strategy:

+

+

Current: The developer team has the repo prod/master and they merge everything into master (no branching strategy). Once the code is ready in prod/master, they ask the Infrastructure team to start the deployment process, which uses Jenkins.

+

The Infrastructure team executes a job in Jenkins that performs these actions:

+
    +
  1. Clone the whole prod/master into build/master (so they don't mess with the developers)
  2. +
  3. Execute scripts to build the binary(ies)
  4. +
  5. Generate a .txt file with the version of the build
  6. +
  7. Commit and push these changes into build/master (reason: prepare the deployment)
  8. +
  9. Apply environment-specific settings and push configurations, binaries and code to distro/master
  10. +
+
+

At the end of the day we end up with three repos for each application; that means if we have 10 applications we would have 30 repositories.

+
+

The CTO's reasons for this:

+
    +
  1. prod/master: For developers and their code (no branching, only master)
  2. +
  3. build/master: For Infra team to generate versions (to prepare the deployment)
  4. +
  5. distro/master: Binaries + code + environment-specific configurations (to perform rollbacks, keep traceability and have a backup)
  6. +
+
+

Cons:

+
+
    +
  • Really complex process
  • +
  • Unnecessarily large amounts of data in the repositories and slower processing when performing deployments
  • +
  • Only works for file-system deployments (databases are not considered in this scenario and those kinds of changes are performed manually)
  • +
  • No instant feedback for developers
  • +
  • Complexity when patches/fixes and deployments overlap
  • +
  • Developers are involved in the production deployment (quite often, in order to test and apply changes live)
  • +
  • Most of the deployments are performed directly into production
  • +
+
+

Pros:

+
+
    +
  • There's a backup and the possibility to roll back
  • +
  • Easy traceability (for rollbacks, not for development)
  • +
  • Specific configurations per environment are stored in the repos with the code and binaries
  • +
+

And this is my approach:

+

+
    +
  1. Developers create a JIRA ticket, which will be used as tag for the build and to create the branch
  2. +
  3. Developers will deploy and test in a Q.A/PRE-PROD environment
  4. +
  5. Once the code works, it will be integrated to master
  6. +
  7. Once integrated with master, the binary goes to a "binary repo like artifactory or other"
  8. +
+
+

Pros:

+
+
    +
  1. Traceability: The deployed code is easy to find through the tag (JIRA-XXX) for a specific build.
  2. +
  3. Rollback: Taking the binary from the repo (Artifactory)
  4. +
  5. One Repository per project, it means 10 projects are 10 repos, not 30.
  6. +
  7. Instant feedback to developers: if the deployment is not successful they can change their code
  8. +
  9. This design contemplates db scripts as hooks
  10. +
  11. The configurations per environment will be handled with Ansible + GIT, generating templates with placeholders and a backup of each configuration.
  12. +
+
+

Cons:

+
+
    +
  • Re-educate developers to work in branches
  • +
  • Force developers to integrate code only when it really works
  • +
  • Changing the CTO's mindset will only happen through examples (working on it)
  • +
  • We must create new infra (new environments to create deployments and not going to production directly)
  • +
  • Lots of hours spent automating through hooks and REST APIs
  • +
  • Need to implement new technologies
  • +
+

I'd like to know the opinion of people with expertise in these Git strategies and the balance between development and operations.

+

Regards.

+

H.

+",382505,,382505,,1/6/2021 17:16,1/6/2021 17:16,"The ""real and effective"" GIT CI/CD strategy",,2,2,2,,,CC BY-SA 4.0 +420710,1,,,1/5/2021 20:40,,1,167,"

I recently found out about Domain Driven Design and I liked it. However, it is quite overwhelming and requires quite of expertise to get it right. For this reason, I wanted to try to model a simple domain using DDD and event storming. +The domain is the following:

+

The application allows publishers to publish small articles or books. Both publishers and users can browse, read, "pin" articles and "follow" more than one publisher to get notification about their new articles. Lets say that publishers are the same as users, with the additional functionality that they can publish. +Users have free access. Publishers have only a subscription based access. Users have an hard limit on the number of articles they can pin. However, this limit can be increased by buying a subscription. Publishers have an hard limit on the number of articles they can publish. However, this limit can be increased by buying an advanced subscription.

+

This is what I modelled till now and it is only a small part of it:

+

+

The Article Bounded Context contains a single aggregate Portfolio. The Portfolio holds the owner of the Portfolio, the created Articles entities and the ArticleQuotas ValueObject. To create an Article the Publisher has to go to the Portfolio, so that we can regulate the creation of new Articles. The Publisher can publish an Article of its Portfolio and the published Article will be visible in the PublishedArticle ReadModel. Finally, the PortfolioQuotas are regulated via events generated by the Subscription BoundedContext, by incrementing the PortfolioQuotas. +At first I was tempted to separate the concept of Article and Quotas, but then there is the problem of the eventual consistency between the creation of an Article and the Exceed of Quotas.

+

What I'm asking here is whether I'm going in the right direction and, otherwise, if you have some suggestions in modeling using Domain Driven Design.

+

Thank you very much

+",382476,,,,,1/5/2021 20:40,Domain Driven Design Exercise,,0,0,,,,CC BY-SA 4.0 +420716,1,420773,,1/6/2021 2:02,,5,171,"

I'm designing a microservices structure, and I'm facing some problems with how to place data across the different microservices.

+

For example, I have users that subscribe to plans and the plans have different features. For example, a user can download 10 items per month.

+

So I'm thinking of building the following microservices:

+
    +
  • User microservice: Maintain users data and the downloads
  • +
  • Plans microservice: Maintain plans data and the features each plan enables
  • +
  • Other microservices: other microservices that may use the previous two to check permissions
  • +
+

So when a user requests a new download, I need to get the user's current number of downloads and the user's plan, and then check whether the current number of downloads allows a new download based on the plan's download limit. If it's allowed, I then need to reach the users microservice to update the number of downloads.
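
+

A rough sketch of that check (plain Python using requests; the service names, endpoints and field names are all made up, not an existing API):

+
import requests
+
+
+def can_download(user_id: str) -> bool:
+    user = requests.get(f"http://users-service/users/{user_id}").json()
+    plan = requests.get(f"http://plans-service/plans/{user['plan_id']}").json()
+    return user["downloads_this_month"] < plan["monthly_download_limit"]
+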

When I'm thinking about this design, I'm not sure of the following:

+

Where should I store which plan a user has (Users vs Plans microservice)? Should the microservices communicate over HTTP?

+

Thanks

+",382480,,379622,,1/6/2021 2:29,1/7/2021 22:49,Microservices shared data,,2,0,1,,,CC BY-SA 4.0 +420717,1,,,1/6/2021 2:54,,4,235,"

I've been finding that for a lot of code I've been writing recently, it naively might look like this:

+

Array approach:

+
const options = [
+   {
+      id: 'red', 
+      label: 'Red', 
+      data: '#f00'
+   }, 
+   {
+      id: 'blue', 
+      label: 'Blue',
+      data: '#00f' 
+   }
+]; 
+
+
+

And then something like, in say React context (but my question relates to programming generally):

+
return <select> 
    {options.map((v) => <option key={v.id} value={v.id}>{v.label}</option>)}
+</select>; 
+
+

Now the problem with storing the list of options as an array is that if you ever need to look up an options object by just an id, you have to do a full scan over the array like:

+
function findOptionById(id) {
+    return options.find((v) => v.id === id); 
+}
+
+

Which doesn't seem particularly efficient (if this function is being called every render for example) and becomes particularly problematic when you have nested objects.

+

So the alternative:

+

Map approach:

+
const options = {
+   red: {
+      id: 'red', 
+      label: 'Red', 
+      data: '#f00'
+   }, 
+   blue: {
+      id: 'blue', 
+      label: 'Blue',
+      data: '#00f' 
+   }
+};
+
+

Mapping over it:

+
Object.values(options).map((v) => <option key={v.id} value={v.id}>{v.label}</option>)
+
+

Finding an item in the list:

+
function findOptionById(id) {
+    return options[id]
+}
+
+

Faster lookup (I believe? Or am I wrong in the context of JavaScript specifically?), and it has the added advantage of enforcing some kind of ID uniqueness, which in my scenarios is always necessary.

+

My question(s)

+

It seems to me that in a scenario where 'You have a list of items, and they have some kind of unique key' then a map is always (or usually) advantageous to use.

+

However, from a code readability and 'the data structures make sense' POV, using arrays seems more intuitive.

+

That is, for example, if I am creating a RadioList component and I'm saying 'it has an options property which is a list of items containing id, label, and data', then by declaring this type as an array it's a lot more obvious to the user what the meaning of this property is.

+

Is there some kind of term or concept in software engineering that considers when an array should be used vs a map?

+

Edit: Although I've mentioned performance, it's not really my main concern. My main concern is around the ease of use of this list object, inserting, removing, looking up items etc.

+",109776,,54480,,1/10/2021 19:22,1/10/2021 19:22,Arrays vs Maps for listing items that have a unique id,,2,4,,,,CC BY-SA 4.0 +420723,1,420724,,1/6/2021 7:19,,3,93,"

I'd like to build a simple privnote-type clone for fun. The idea is this:

+
    +
  1. User A writes a note in their browser, browser encrypts it client-side
  2. +
  3. Server saves the pre-encrypted note without knowing the decryption key
  4. +
  5. User A then sends a link like abc.hidden/mynoteid#mydecryptionkey to user B
  6. +
  7. User B decrypts the message on a local browser
  8. +
+

The question I'm struggling with is this - should the server allow anyone to fetch abc.hidden/mynoteid? Server being able to decrypt messages (I'd like this to be entirely immune to logging of any sort and all encryption/decryption happening clientside) defeats the entire purpose.

+

Because the notes are one-time-use only, fetching the note must destroy it. But how can I know that the correct decryption key was supplied without decrypting the message server-side and exposing it to logging?

+

Lastly, would a React app and a generic REST server with Redis to store the messages suffice for this task? (Given that messages have a TTL, Redis seems an ideal choice.) What happens if a malicious actor somehow gains access (without knowing the decryption keys, which should be generated on the spot and only once)?

+

What encryption algorithm is best suited for this task? I don't think we need 10 seconds of bcrypt "work".

+

I understand that sending sensitive info over the internet is yucky but it happens a lot and if it does happen in a proverbial "marketing department", having a tool like that could ease some worries about PII.

+

Plus, I think it's a fun project either way.

+",238360,,238360,,1/6/2021 7:26,1/6/2021 11:06,Designing a privnote clone - security considerations,,1,1,2,,,CC BY-SA 4.0 +420725,1,420733,,1/6/2021 9:11,,-2,62,"

I have tried this piece of code:

+
        String[] latitudesArray = latitudes.split(",");
+        String[] longitudesArray = longitudes.split(",");
+
+        Double startLat = StringUtils.isNotEmpty(latitudes) ?
+                Double.valueOf(latitudes.split(",", 2)[0]) :
+                null;
+
+        Double endLat = StringUtils.isNotEmpty(latitudes) ?
+                Double.valueOf(latitudes.split(",", 2)[1]) :
+                null;
+
+        Double startLong =  StringUtils.isNotEmpty(longitudes) ?
+                Double.valueOf(longitudes.split(",", 2)[0]) :
+                null;
+
+        Double endLong =  StringUtils.isNotEmpty(longitudes) ?
+                Double.valueOf(longitudes.split(",", 2)[1]) :
+                null;
+
+        Coordinate coordinate;
+        if (latitudesArray.length == 1 && longitudesArray.length == 1 ) {
+            coordinate = Coordinate.of(startLat, startLong);
+        }   else {
+            coordinate = centerOfRectangle(startLat, startLong, endLat, endLong);
+        }
+
+

latitudes or longitudes can look like this:

+
String latitudes = "35.6701669,35.6968372"
+String longitudes = "139.6891322,139.7003097" 
+
+

It can also be just a single latitude and longitude.

+

My question is: can I improve my implementation? Can I write it more elegantly or efficiently?

+",382546,,379622,,1/6/2021 13:53,1/6/2021 14:43,"Initialize some values from an array, write more elegant or efficient",,1,2,,,,CC BY-SA 4.0 +420726,1,,,1/6/2021 9:16,,3,364,"

My question is regarding caching and ViewModels in ASP.NET (Core) MVC.

+

I have a service which injects a Repository<T> that is used to fetch domain models from the database. The service layer transforms the domain models into ViewModels via AutoMapper and then caches them server-side.

+

Example controller:

+
public FooController(IFooService service)
+{
+    this.Service = service;
+}
+
+public IActionResult Index()
+{
+    var vm = this.Service.GetSomeModel();
+    return View(vm);
+}
+
+

Example service:

+
public FooService(IRepository<FooDomainModel> repository, IMapper mapper, ICacheService cache)
+{
+      this.repository = repository;
+      this.mapper = mapper;
+      this.cache = cache;
+}
+
+public FooViewModel GetSomeModel()
+{
+   var viewModel = this.cache.Get(some id);
+
+   if (viewModel == null)
+   {
+       var domainModel = this.repository.GetDomainModel();
+       viewModel = this.mapper.Map<FooViewModel>(domainModel);
+
+       this.cache.Add(viewModel);
+   }
+
+   return viewModel;
+}
+
+

Is it considered bad practice for the service to transform the ViewModel and cache it?

+

Should we cache the domain model only and leave the responsibility of transforming it to a ViewModel up to the controller instead?

+

I feel like I'm saving a tiny bit of processing by caching the mapped ViewModel in the service layer, but there's some code smell here... mapping seems like it ought to be the job of the controller.

+

What is the consensus? Or is there a different pattern entirely that I should consider looking at?

+",62007,,,,,1/6/2021 9:41,ASP.NET MVC Caching of ViewModels,,1,0,,,,CC BY-SA 4.0 +420731,1,,,1/6/2021 13:20,,4,98,"

I'm trying to think of a scalable solution for my current system. The current system is:

+

3 microscopes, 1 processing machine

+
 1. 60-100GB Files come from 2-3 microscopes every 30 minutes
+ 2. That data is transferred to a (local) network mount of the processing machine
+ 3. The processing machine runs and contains the ETL(airflow)
+
+

Scaling issue

+

Right now it works well. I am concerned that in the future, as the demand and load (file sizes, processing times, etc.) increase, we may face bottlenecks. I was thinking of using a cluster of machines (via cloud computing, or buying a couple more machines), but our network is not the fastest, transferring maybe around 100-200 Mbps. I worry that with distributed computing the transfer speed would nullify the benefit of multiple machines.

+

Current thinking

+

I'm considering an idea where a group of machines is kept in a queue: if the machine at the top of the queue is not busy, then the microscope can transfer the initial file to that machine and the rest of the process (steps 2-3) can run as normal (a rough sketch of this is below). I'm just wondering if this is a sane approach or if there is anything I can improve on.
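
+

A very rough sketch of the dispatch idea (plain Python; the hostnames and the is_busy check are made up):

+
from collections import deque
+
+machines = deque(["proc-01", "proc-02", "proc-03"])  # hypothetical processing machines
+
+
+def pick_machine(is_busy) -> str:
+    """Return the first free machine, rotating the queue as we go."""
+    for _ in range(len(machines)):
+        machine = machines[0]
+        machines.rotate(-1)  # move it to the back either way
+        if not is_busy(machine):
+            return machine
+    raise RuntimeError("all processing machines are busy")
+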

+",382556,,382556,,1/6/2021 14:41,1/7/2021 18:23,Designing an ETL with where there are a few points of entry,,2,2,,,,CC BY-SA 4.0 +420735,1,,,1/6/2021 14:49,,2,103,"

QA here, relatively new to (manual) API testing. I thought I'd turn to the experts to try and figure out if my expectations around how much functionality should sit in the API are more or less valid. More specifically, I'm testing dependencies, i.e.:

+

if input_field_1 = No then input field 2 should be read-only

+

if input_field_1 = Yes then input field 2 should be enabled

+

I'd expect that if I didn't change the value from No to Yes in the API for input_field_1, then trying to manipulate the value of input_field_2 would have no effect and should not be updated when the call is executed. This is not the case.

+

My devs say that this logic is not necessary in the API because it is handled by the frontend and if an input field is read-only (dependent on another field), it would essentially never pass through a value in the first place. They also say that the API is basically impenetrable and no one would ever be carrying out this sort of manipulation (but I am). I understand the first half of this sentiment, but I'm still reluctant to omit these edge cases in the API. Essentially, I am able to update fields that should be read-only in the API and those changes are pulling through to the frontend. This doesn't seem technically sound or correct to me.

+",382573,Jasmine,,,,1/6/2021 15:22,How much of the frontend functionality should be mirrored in the API?,,1,2,,,,CC BY-SA 4.0 +420743,1,420745,,1/6/2021 21:09,,6,173,"

According to Robert C. Martin's "Clean Architecture" you should try to structure your system in such a way as to separate low-level concerns from high-level domain concepts.

+

Following this logic, Martin proposes that

+
+

the Web is a delivery mechanism—an IO device—and [...]

+
+

therefore should be seen as a low-level concern. As a consequence, Web apps should only present data they receive from the domain.

+

In principle, a PWA could be seen as its own enclosed system with appropriate architectural boundaries, so that the app wouldn't need to have a backend which contains the domain logic. In that case, implementing the Clean Architecture would be a fairly trivial act.

+

But imagine your domain logic already exists on, say, a server. How would you then avoid duplicating your domain logic into the PWA if UX concepts such as offline availability are key to your product? What if your domain logic is quite sensitive and mustn't be exposed to end users under any circumstance (e.g. by implementing it inside a PWA)?

+

Another way to phrase the question would be: How do you build PWAs with rich UX, such as offline capability, without exposing your potentially sensitive domain logic by embedding it into the frontend?

+",382592,,379622,,1/10/2021 19:27,1/10/2021 19:27,"How does Robert Martin's ""Clean Architecture"" deal with Progressive Web Apps (PWAs)?",,1,7,1,,,CC BY-SA 4.0 +420748,1,,,1/7/2021 3:32,,4,864,"

If each cloud function or lambda function is hosted and scaled independently, would it be considered a microservice?

+",274828,,,,,12/31/2021 17:32,Is a cloud / lambda function a microservice?,,5,2,,,,CC BY-SA 4.0 +420752,1,420760,,1/7/2021 7:27,,3,609,"

As the title says, why do developers (especially, but not only, new developers) habitually underestimate the work involved in 'greenfield' projects or 'total rewrites'?

+

We all know that software estimation is not a science, but most problems to be solved are not new, and many elementary problems have been getting solved again and again for several decades, so collectively there is a fair amount of accumulated experience of how long things take (certainly to a granularity of man-years or man-decades).

+

Very often a developer will look at something that has taken a 3-man team 5 years to deliver, and insist that it is a total mess. But that's 15 man-years to do it badly. Redoing it well may take even more, especially if it hopes to achieve even more in terms of complexity or integration, but even if it takes less, one would be braced for it to still take on the order of several years to redo from scratch.

+

The obvious answer to the question is ignorance, but I'm looking for a more structural explanation for why ignorance in this area of estimation appears so often to crop up. We have academic courses and we have professional forums where developers exchange knowledge and experience.

+

Why do we fail to systematically reproduce at least gross rules-of-thumb about how much time various kinds of project tend to consume?

+",292095,,1204,,1/7/2021 19:01,1/8/2021 9:45,Why do developers habitually underestimate work?,,5,12,,1/7/2021 14:11,,CC BY-SA 4.0 +420754,1,,,1/7/2021 8:42,,2,123,"

I'm sometimes torn between two naming conventions defining the order of words that make up a function name. The first one is choosing the words in the same order we would natural use in a sentence, for example:

+
getFilters(...)
+getPairedFilters(...)
+getUniquePairedFilters(...)
+
+

This reads rather naturally but it's not immediately obvious that all of those functions return filters. Only the last part of the function name indicates that. If the name is long, that's not as convenient as the following:

+
getFilters(...)
+getFiltersPaired(...)
+getFiltersPairedUnique(...)
+
+

This, however, does not read as naturally, but the names immediately suggest that they all return filters. Is there some sort of consensus on which name ordering is "better"?

+",293899,,,,,1/7/2021 8:42,Function naming: choosing the order of nouns / verbs in the name,,0,2,,,,CC BY-SA 4.0 +420765,1,420767,,1/7/2021 13:21,,0,84,"

I'd like to build detailed analytics of my computer usage to try and detect patterns and improvement opportunities. So I'm building a piece of software to record every mouse click, keyboard event and window activation event with a timestamp.

+

This is potentially a huge amount of data streaming continuously to disk, and I'm somewhat afraid that it'll shorten the disk's lifespan. For now I'm sequentially appending the events to an unencrypted file (roughly as sketched below), but I'd like to do better.
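
+

This is roughly what I am doing now, simplified (one JSON line per event appended to a plain file; the file name and field names are just illustrative):

+
import json
+import time
+
+
+def record_event(kind: str, detail: dict) -> None:
+    event = {"ts": time.time(), "kind": kind, **detail}
+    with open("events.log", "a") as f:  # unencrypted, append-only
+        f.write(json.dumps(event) + "\n")
+
+
+record_event("mouse_click", {"button": "left", "x": 100, "y": 200})
+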

+

What is an appropriate database or datastore? And is there a lightweight way to protect/encrypt this data?

+

EDIT: about the number of daily events: I'm a software developer so I type and do "normal" stuff on the computer all day.

+",164491,,164491,,1/7/2021 13:30,1/7/2021 13:58,Appropriate storage to store all keyboard and mouse events on my computer?,,2,1,,,,CC BY-SA 4.0 +420783,1,420794,,1/7/2021 20:10,,1,104,"

In my limited understanding, one-way data binding could happen like the following:

+

On the back-end, I have a Node backend server. In that, I have a layer that communicates with the database (Model). I have a controller that exposes URIs that the frontend server can connect to (Controller).

+

On the front-end, I have a React frontend server (View). The frontend server sends requests to the URIs exposed by the backend server.

+

If the frontend server wants to get data from the backend, it sends GET requests to the URIs exposed by the server (Controller), the server fetches the data (Model), and sends it to the frontend server (View).

+

If the frontend server wants to post some data so it will be stored on the server, it sends some POST requests to the server, which in turn handles it and saves the corresponding data.

+

I read that one-way data binding is basically an observer relationship between the view and the model. The view registers a callback with the model, and when something changes on the backend, the model will call the callback, and it will update things in the view. But why is this necessary, if you can just send and receive data using the URIs exposed by the controller? Is this forced by React? Obviously I'm wrong about something, but to me it seems counter-productive because it creates tight coupling without using the Controller, which would behave as an interface between the View and the Model.

+",382645,,,,,1/8/2021 9:10,How does one-way data binding and MVC achieve loose coupling?,,1,1,,,,CC BY-SA 4.0 +420784,1,,,1/7/2021 20:57,,3,298,"

While decomposing a monolithic web application into smaller services following the Strangler Fig pattern, I'm in the middle of a problem for which I can't find a practical solution. There is a web app written in PHP and I'm taking steps to:

+
    +
Move highly cohesive code into its own service (more precisely into a Docker container later; for now I don't want to refactor a lot or rewrite significant parts of the application, which is huge).

    +
  2. +
  3. Move business logic from controllers and other places to a service / repository layer under a known namespace (this is a clean up step)

    +
  4. +
  5. Modify all old code to invoke new code (it will be like $this->newService->method(...$args))

    +
  6. +
  7. Then bring up a docker container in the same network which handles the requests from step 3. (those $this->newService->... calls will be remote calls but the code is not remote. I just added an extra network call here.)

    +
  8. +
+

Now the problem I'm facing is that I see different pieces of code that change the model state (by model I mean M as in MVC) and then pass it to some other methods; those make their own changes to the model object, and then the changes are finally committed and written to the database.

+

But since I'm delegating all jobs to the new service, it would be like this:

+
$this->newService->methodA(123); // this selects a record from a MySQL table, performs some work and commits those changes
+$this->newService->methodB(123); // this does the same thing
+
+

So with this new service I will have two selects and two updates, but in the monolithic application, there is no such repetition:

+
$model = Model::getByID(123);
+// these are local calls
+$this->serviceOne->methodA($model);
+$this->serviceTwo->methodB($model);
+$model->save();
+
+

My question is: what should be done in this case to prevent multiple database reads and writes? I am considering building a caching layer in order to persist model state, and in that way avoid multiple individual reads and writes to the database, and then later commit all accumulated model changes to the database at once, but I can't figure out if that is the right solution or not. I'm literally lost.

+",105822,,379622,,1/14/2021 21:58,1/19/2021 19:42,Caching model objects to avoid multiple SQL commits,,3,2,1,,,CC BY-SA 4.0 +420792,1,,,1/8/2021 6:34,,1,47,"

As a hobby / to learn I am building an app in JavaScript using Node.js where a component of it will take input from a client, send it to a server, and then broadcast it to other clients. For simplicity let's say that the data looks like: {"x_pos":0.4, "y_pos":0.2}, and specifies an avatar's (x,y) position on a map in a game. I want each user to have an avatar, and each avatar's (x,y) position shared.

+

Currently I am using Websocket (socket.io) to do this. I figured Websocket would be ideal because it is TCP, and will include an identifier of who each user is. However, the fact that communication is bidirectional seems to be sub-optimal. Additionally, I am emitting position data from all clients 30 times a second to the server, which then broadcasts it to all users. This works well for one user, but I do not know how it would scale.

+

However, I have also heard that UDP is ideal for games, but I understand that UDP is connectionless and doesn't track user connections, etc. So would this mean that I would not be able to keep track of whom the incoming (x,y) data belongs to? (I suppose I could change the data to be something like {"user":"id", "x_pos":0.4, "y_pos":0.2} and handle updates on the client side that way.) There is also WebRTC, which uses UDP, but I doubt peer-to-peer connections would scale well.

+

So I am curious what people think is the best protocol here. Am I on the right track by using Websocket to broadcast player position? Or should I be using something else?

+

I would like to note I am not building a commercial app in any way, and I anticipate the load to be no more than 6 people at once. But 6 people * 30 emits a second to the server + 6 * 30 emits to all clients means 360 socket.io emit() events a second, which seems like maybe not what socket.io was built for? That said, I hear that WebSockets establish a data stream, whereas UDP does not, so maybe that means that UDP has more overhead? I honestly do not know and cannot find this information readily online.

+",382678,,379622,,1/10/2021 19:26,1/10/2021 19:26,"advice for web communication protocol for ""streaming"" multiple JSON objects to multiple clients",,0,3,,,,CC BY-SA 4.0 +420795,1,,,1/8/2021 9:10,,-2,87,"

I have been trying to learn neural networks from scratch (in Python). Wherever neural networks are discussed, I don't get the meaning of the activation of a neuron, and I need to understand the basic meaning of activation. Is it like an activated / not-activated boolean?

+

The definition of bias says it helps to compensate the activation of the neuron. So do we need an activation function at all? For context, the sketch below is the kind of code I keep running into.
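
+

A simplified example of what I mean (my own sketch): the activation function (sigmoid here) is applied to the weighted sum of the inputs plus the bias:

+
import math
+
+
+def sigmoid(z):
+    return 1.0 / (1.0 + math.exp(-z))
+
+
+def neuron_output(inputs, weights, bias):
+    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum plus bias
+    return sigmoid(z)  # this output is the neuron's "activation"
+
+
+print(neuron_output([0.5, 0.2], [0.4, -0.6], 0.1))  # a value between 0 and 1, not a strict boolean
+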

+",382684,,379622,,1/8/2021 16:34,1/8/2021 16:34,Can't understand activation function,,1,0,,,,CC BY-SA 4.0 +420799,1,,,1/8/2021 10:37,,1,85,"

For my final-year project I'm looking to build a distributed version of a popular benchmarking client (this has already been done using various methods involving some form of existing framework), and I have been advised by my dissertation supervisor to consider implementing a RESTful service. Essentially, a program will be started from one client (provided with the details of the benchmark), which initiates a master program/server that then decides how many network nodes to run and on how many clients to run them. These will all perform various actions on the database and have their individual results returned to them and subsequently to the original program/server that initiated them, where the results will be aggregated.

+

I saw a few documents online stating that SOAs are useful where there will be many stateless processing units (the network nodes in my diagram), and that an SOA includes a main storage (the database cluster in my diagram) as well as an application that combines and composes them together (the results from the clients on the network nodes will be aggregated).

+

On the other hand, I have read up on ROAs and RESTful services, and it looks like they could fit this design also.

+

So my question is, which of these design patterns shall I implement:

+

SOA or ROA?

+

And if ROA, I'm struggling to wrap my head around the CRUD operations in a situation other than a web service.

+

So for example, my network nodes will each be running a client interacting with the database individually (not via REST).

+

How would I include a GET / PUT / DELETE / POST to the client?

+

Because all I'm imagining in my head is GET in the context of GET databaseX/1 and retrieving an individual record or something similar.

+

Thank you in advance.

+

+",382689,,,,,1/8/2021 12:39,Should I use ROA or SOA for a distributed application and how could I implement REST If using ROA,,1,0,,,,CC BY-SA 4.0 +420803,1,,,1/8/2021 13:40,,-2,57,"

Here is a description of the app I'm working on. On the client side (index.html) a user can interact with data. When they need to call a server operation, for example reading or writing a file on the server, they must authenticate themselves and send an async JS request. The index.html and the API are hosted on the same domain.

+

The first way I see is to pass with every request an identifier (it may be a username, email or phone number), an authenticator (i.e. a password), an operation name (e.g. read or write) and arguments (i.e. a file path, plus file content if writing). Since we are talking about async requests without reloading the page in the browser, the user can type the credentials only once and JS will keep them in memory until the user reloads the page (which will mean signing out). I am thinking of storing the user data in JSON files above the public directory.

+

When a user sends an async call with the credentials, the server operation name and the argument(s), the server verifies the user credentials, retrieves the user's role, checks whether the role has permission to run the passed server operation with the passed arguments and, if so, performs the operation. I mean that one role cannot run the write operation at all and can only read files, another role can write files but only in a specific directory (i.e. with a specific argument), and so on. That is how I see the scenario.

+

I don't know authentication solutions and their issues very well, so I need your advice. Maybe it would be better to use a modern OAuth-based scheme in my case, with a redirect to an authentication service and JWTs? If so, is there a fully self-hosted solution built along those lines (i.e. instantiating an authentication client signIn() object in JS, then using its status and tokens, etc.) so that I can use it on my server without an additional third-party server, but can switch to a similar solution with an external authentication service at any time? Or maybe you see another, better scenario for my case?

+

I would be thankful for any ideas, advice and information.

+",382704,,,,,1/8/2021 14:12,Authentication solution for custom PHP-based API,,1,0,,,,CC BY-SA 4.0 +420806,1,420807,,1/8/2021 14:14,,1,65,"

Imagine an app like Instagram/Reddit with a feed of posts.

+

Problem: We want to show users posts they have not seen before.

+

When the user first opens the app, we retrieve 30 latest posts from the backend and show them to the user.

+

The user sees 10 posts and leaves the app.

+

At this point, the user has seen posts with ids 30-21 and not seen posts with ids 20-1.

+

When the user comes back the next day, the database in total has 40 posts.

+

This time, when the API retrieves the latest posts, it also fetches 10 of the same old posts that the user has already seen.

+

We want to be able to skip the posts that the user has already seen.

+

Probable solution:

+

Divide the entire feed into SEEN and UNSEEN ranges. These ranges can be identified by their start and end markers, AKA post IDs.

+

For example, when the user leaves the app for the first time, we store what they have seen as a range: (30, 20).

+

This range identifies posts already seen by the user.

+

The next time the user opens the app, we send these ranges to the API, and the API then filters posts such that posts on either side of each range are returned, while posts inside the ranges are not.

+

Sample SQL:

+
SELECT * from post where Id > 30 OR Id < 20
+
+

If we have multiple ranges, for example (50, 40) and (30, 20) the SQL becomes:

+
SELECT * from post where (Id > 50) OR (Id < 40 AND Id > 30) OR (Id < 20)
+
+

Essentially, we are dividing the feed into black/white or seen/unseen markers.

+

However, while this seems plausible, there are cases where the ranges become overlapping. For example, when the user opens the app with 50 posts, he/she might actually go down to post id = 7 and then leave the app. The correct range for this user should now only be (50, 7). How would these be efficiently merged then? (A rough merging sketch is below.)
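
+

For what it's worth, here is a rough sketch (plain Python, just to illustrate the idea) of how I imagine overlapping ranges could be merged; ranges are stored as (newest_id, oldest_id) as above:

+
def merge_ranges(ranges):
+    # Normalize each (newest, oldest) pair to (low, high) and sort by the lower bound.
+    intervals = sorted((min(r), max(r)) for r in ranges)
+    merged = []
+    for low, high in intervals:
+        if merged and low <= merged[-1][1] + 1:  # overlapping or adjacent
+            merged[-1] = (merged[-1][0], max(merged[-1][1], high))
+        else:
+            merged.append((low, high))
+    # Back to the (newest, oldest) convention used above.
+    return [(high, low) for low, high in merged]
+
+
+print(merge_ranges([(50, 40), (30, 20), (50, 7)]))  # -> [(50, 7)]
+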

+

Are there any other/better solutions to this problem?

+",292434,,,,,1/8/2021 14:26,How to segregate blog posts into seen and unseen?,,1,0,1,,,CC BY-SA 4.0 +420814,1,420817,,1/8/2021 20:26,,0,186,"

Recently I came across an article on an OOP design for chess. The following is a snippet from it:

+
public class Chess {
+
+    ChessBoard chessBoard;
+    Player[] player;
+    Player currentPlayer;
+    List<Move> movesList;
+    GameStatus gameStatus;
+
+    public boolean playerMove(CellPosition fromPosition, CellPosition toPosition, Piece piece); 
+    public boolean endGame();
+    private void changeTurn();
+
+}
+public abstract class Piece {
+
+    Color color;
+
+    public boolean move(CellPosition fromPosition, CellPosition toPosition);
+    public List<CellPosition> possibleMoves(CellPosition fromPosition);
+    public boolean validate(CellPosition fromPosition, CellPosition toPosition);
+}
+
+public class Knight extends Piece {
+
+    public boolean move(CellPosition fromPosition, CellPosition toPosition);
+    public List<CellPosition> possibleMoves(CellPosition fromPosition);
+    public boolean validate(CellPosition fromPosition, CellPosition toPosition);
+
+}
+
+

I pretty much liked the way the classes are constructed, but there is something that is confusing me. The code here is mostly self-explanatory. As per the author, the way it is designed, the "possibleMoves" function in the Piece class gives the list of possible moves; these moves are shown to the user and the user can select one of them, which makes sense to me.

+

Now my question is: let us say we get the possible moves by calling the possibleMoves function of the Piece class. While actually making a move, there is a possibility that a piece crosses another piece in its way, which is not allowed for any piece except the knight. So where should we check that? I questioned the design, and the author says it should be done in the Chess class or in some rule engine, but my suggestion is to pass the Board as a parameter and let the piece decide, as it knows how it moves. Otherwise, in the Chess class or in the rule engine, we have to put logic for each piece to check that while making a move it does not cross any other piece, and for that we would need to replicate the logic of how a piece moves both in the Piece class and in the Chess/rule-engine class. What would be the correct way here?

+",370444,,,,,1/8/2021 22:06,Object Oriented Design for chess,,2,0,,,,CC BY-SA 4.0 +420815,1,420840,,1/8/2021 20:57,,1,98,"

Currently, I have an application generating time-series spatial data. The data is weather data with coordinates and a time of reading.

+

I would like to receive chunks of the data in a time-series way, and I am currently using a generated CSV as the response to input requests:

+
temperature,latitude,longitude,timestamp,
+10,50,4,11,1610138555,
+...
+...
+
+

I am used to working with GeoJSON and would like the coordinates to be 4-dimensional, i.e. 3 spatial dimensions plus time.

+

+{ "type": "FeatureCollection",
+  "features": [
+    { "type": "Feature",
+      "geometry": {"type": "Point", "coordinates": [102.0, 0.5]},
+      "properties": {"prop0": "value0"}
+      },
+    { "type": "Feature",
+      "geometry": {
+        "type": "LineString",
+        "coordinates": [
+          [102.0, 0.0, 100, 123456], [103.0, 1.0, 105, 123456], [104.0, 0.0, 106, 123456], [105.0, 1.0, 107, 123456]
+          ]
+        },
+      "properties": {
+        "prop0": "value0",
+        "prop1": 0.0
+        }
+      }
+    ]
+  }
+
+

The purpose is to represent mobile sensors that run on a trajectory and I would like to have the data clearly represented.

+

Any suggestions for a good JSON format? Something based on a common standard, similar to GeoJSON, would be ideal.

+",382728,,,,,1/9/2021 15:29,How to represent Spacial Time Series data in json clearly?,,1,2,,,,CC BY-SA 4.0 +420820,1,420832,,1/8/2021 22:55,,-2,54,"

I have a website that was written using .NET stack technologies. It is accessible via the internet. Some of my potential (enterprise) customers want me to install the whole website on their own VM, and we agreed with them that the website will live on their VM for only one month (which is enough time to test its functionality). They want to install the website locally because it processes specific files that the customers don't want to upload via the internet. My worry is to protect a few years of development from being stolen and reproduced. If the potential customers become enterprise customers, then we'll sign a EULA and I'll remove the licensing from the website on their server entirely.

+

The first thing I decided to do is to protect the source code by applying obfuscation. The second thing is to modify the website to work somehow with a license file (the customer VM will probably be offline, without access to the internet). The main restrictions of the license are: it is not allowed to move the website to other VMs (I need to somehow bind it to a machine ID), and it should stop responding to all incoming requests after the trial period (it could be any period), maybe just responding with a specific HttpStatusCode, like Service Unavailable.

+

I'm a skilled programmer, but I haven't done such things before, although I can implement my own licensing option. I want to be sure about security and performance.

+

Could anyone share the best practices they have encountered for implementing what I need? Thanks.

+",382732,,,,,1/9/2021 10:19,How to deliver a website to the customer VM (real server) with a trial period?,,1,1,,,,CC BY-SA 4.0 +420823,1,420830,,1/9/2021 0:34,,-1,195,"

My question to the community is this:

+

What makes C#'s LINQ unique compared to query languages in other languages and frameworks, or does it not have anything to make it unique at this point?

+

Specifically with regard to Django or Laravel, but that doesn't matter all that much. I'm not looking for an opinion on why someone might like it more, but rather whether there is any concrete difference between them that would incline a general developer to choose one over the other.

+

Thoughts

+

Before I posted this I found this SE post (Python Unique Characteristics), which says it may be rather hard to find something that is actually unique. That is a valid point; however, the reason I ask is that I know a lot of .NET developers who use LINQ as a main selling point of .NET (myself included). However, when thinking about it more in depth, I don't know if it is a valid argument anymore, which is what I'm really trying to settle with this question.

+

What follows is context

+

My coworker and I were debating the reason why LINQ (in the context of C#) could be an argument to use C# over another language, especially when it is compared to other DSLs like it (at least I believe it's a kind of DSL; I could be wrong though). We could not actually think of any concrete reason why LINQ would have any distinct advantage, feature or unique oddity over any other query syntax in a modern language or framework. The closest I could come up with was that you could write an underlying driver for LINQ to allow it to theoretically work with just about any data system out there, but the argument could be made that other languages support things like that as well.

+

Part of this question is due to an ongoing internal discussion among our devs about the merits of C# vs Django, but that is an entirely different area outside the scope of this question.

+

General Information

+

The following are some locations where I have been looking to try and find information at least from more official sources.

+ +",382739,,,,,1/9/2021 9:20,What makes LINQ (C#) unique compared to another DSL such as Django query syntax?,,2,0,,,,CC BY-SA 4.0 +420825,1,420831,,1/9/2021 1:36,,-4,57,"

As a software architect/product lead/project manager etc., should you run your database model by your customer? Is it common practice to show that model to have their opinion, or should they trust that the model you chose will fit their requirements?

+

What is common practice on the subject if they don't explicitly ask for it?

+

EDIT: I wanted to keep the question open as I was hoping for a generic answer as well, as that is what stack exchange is for, but seeing all the downvotes, here is the more specific case I had in mind, which I received the answer for:

+

We are a software development company developing an application for an external customer. The application uses a database, as it is necessary for their business requirements (data persistence and sharing across the company). As the software developers, we are designing and implementing that database. Doing so corresponds to many hours of work that are difficult to show to the customer (hours of work only to be able to close the app, reopen it, and see that the changes are still there). As I am the person presenting our progress to the customer and I had little to really show, I was wondering if it was common practice to go deeper into the database model.

+",375251,,375251,,1/11/2021 19:39,1/11/2021 19:39,Should you show your Database Model to the customer?,,1,2,,1/9/2021 11:43,,CC BY-SA 4.0 +420834,1,420858,,1/9/2021 10:47,,5,389,"

I'm currently a developer in a team of size 3-4 and I am concerned our team is not at all resilient to me taking days off or saying goodbye.

+

Two years ago, when I arrived at the company, I made the choice to take over and learn the work of someone departing. We also had the team downsized due to the start of another product branch with a team built from ours. That, together with other refactoring I carried out alone, made me the sole expert on the parts of the product that are currently undergoing most, if not all, current feature requests.

+

The team dynamics in terms of task assignment consist roughly of me developing features, someone else fixing legacy bugs, and someone else refactoring legacy code. We sit on a lot of code, so that work is not without value as well - but some tasks, such as refactoring, have been added to the sprint as a courtesy, not as providing customer value. The other developers so far seem unwilling to learn from my expertise, although I remember always being welcoming of questions and explanations, sometimes perhaps even overwhelmingly so.

+

I suspect part of the reason they are not coming to me is that a kind of vicious circle has started, where I'm busy working and they are afraid to interrupt me to learn from me because I'm busy, which doesn't actually help at all.

+

Within the company things are going well, but I'm concerned I did not manage to empower them to be more autonomous. We're currently in a state where they can't handle 50% of the tickets we have without taking a massive delay, risking regressions, or both.

+

I'm aware that if they don't want to, I may just have to be patient, but how do I best prepare the way for them to take over the things I know, knowing there is a lot to learn, and thus improve the truck factor (or bus factor)? Documentation? Pair programming? Tests? How do I apply these to focus on the most efficient transfer?

+",125615,,125615,,1/9/2021 22:34,1/10/2021 14:59,"How to improve the ""truck factor"" in my development team?",,5,6,,,,CC BY-SA 4.0 +420836,1,420841,,1/9/2021 13:45,,-3,115,"

With the introduction of Apple M1 processor, ARM has stood up to be a capable competitor and an alternative to x86 processors. We can foresee a future where ARM captures considerable market share of x86 in the server space. That means we will be writing software that works and is optimised for ARM.

+

Such a change would definitely affect developers who deal with low-level code (device drivers, compilers, OSes and OS kernels, etc.) that requires knowledge and expertise of the underlying CPU architecture.

+

But would such a technology shift affect "general" developers too? I mean developers who are mostly involved in implementing business logic using high level languages such as Python, Javascript, Java, C#, etc. Those languages and their implementations usually take care of running the same piece of code on different os and cpu architectures and developers just have to focus on implementing the requirements.

+

If yes, how would it affect the "general" devs and what would change in the development work? How can we prepare for such a change? Will some programming languages become preferable over others? Should we consider cross-platform frameworks?

+",179656,,209774,,1/9/2021 18:29,1/9/2021 18:29,How to anticipate a software future where ARM (potentially) replaces x86 in server and PCs?,,1,12,,1/9/2021 17:29,,CC BY-SA 4.0 +420837,1,,,1/9/2021 13:51,,1,108,"

My goal is to collect/publish different types of information from the application. We use Kafka for the event bus. Consider the following sample code.

+
class UserService {
+
+public User userUpdateService(String username) {
+
+    User savedUser;
+    try {
+        savedUser = userRepo.save(new User(username));
+    } catch (Exception e) {
+
+        // this is a direct method call to publish an error event in case of failure
+        publishEvent.asyncPublishUserUpdateErrorEvent(username, e);
+
+        // keep the original exception as the cause instead of discarding it
+        throw new RuntimeException(e);
+    }
+
+    // Reaching the last statements means the user update succeeded
+    publishEvent.asyncPublishUserUpdateSuccessEvent(username);
+    return savedUser;
+}
+
+}
+
+

As you can see, I am currently calling a method in the UserService class to publish the error or success event. I call this method, directly from the code, every time a new event needs to be collected. I know that the ELK stack can be useful in my use case, but that's not an option for me.

+

One benefit I've seen in this direct method call is that I have more control over what kind of information is to be published. But by using this method I have to change the class, which I think violates many OO principles.

+

So, in practice, how can this type of task be accomplished? I'm using the Spring Boot framework.
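
+

For comparison, here is a minimal sketch (my own illustration, not code from this project) of how Spring's built-in ApplicationEventPublisher and @EventListener could decouple the service from the publishing code; the UserUpdatedEvent and UserEventForwarder names are hypothetical, and User/UserRepo are assumed to be the types from the snippet above:

+
import org.springframework.context.ApplicationEventPublisher;
+import org.springframework.context.event.EventListener;
+import org.springframework.scheduling.annotation.Async;
+import org.springframework.stereotype.Component;
+import org.springframework.stereotype.Service;
+
+// Plain event object carrying the data to publish (hypothetical name).
+class UserUpdatedEvent {
+    private final String username;
+    UserUpdatedEvent(String username) { this.username = username; }
+    String getUsername() { return username; }
+}
+
+@Service
+class UserService {
+    private final UserRepo userRepo;
+    private final ApplicationEventPublisher events;
+
+    UserService(UserRepo userRepo, ApplicationEventPublisher events) {
+        this.userRepo = userRepo;
+        this.events = events;
+    }
+
+    public User userUpdateService(String username) {
+        User saved = userRepo.save(new User(username));
+        // The service only announces what happened; it knows nothing about Kafka.
+        events.publishEvent(new UserUpdatedEvent(username));
+        return saved;
+    }
+}
+
+// A separate component reacts to the event and forwards it to Kafka,
+// so the publishing concern no longer lives inside UserService.
+@Component
+class UserEventForwarder {
+    @Async // requires @EnableAsync somewhere in the configuration
+    @EventListener
+    public void on(UserUpdatedEvent event) {
+        // kafkaTemplate.send(...) or the existing asyncPublish... call would go here.
+    }
+}
+
+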

+",102780,,379622,,1/11/2021 2:00,1/11/2021 2:00,What are the best way to publish application event in a spring boot application?,,1,2,,,,CC BY-SA 4.0 +420839,1,420848,,1/9/2021 14:10,,4,70,"

I have legacy code with a function called initialize; this function calls N methods of the same object, and these methods are responsible for validating the identity of personas.

+

At each method call, the function checks whether that method approved or disapproved the persona. If the method returned 0, it assumes the method disapproved the persona and returns an error.

+

I want to refactor this in an elegant way. I found the Chain of Responsibility pattern, but from the examples I found it seems that this pattern is for multiple instances of an object rather than a single instance.

+

Can I use this pattern for a single instance as well?
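
+

For illustration, a rough Java sketch (names like Persona and IdentityCheck are hypothetical, not from the legacy code) of how the checks could be modelled as a chain even though today they are methods of a single instance:

+
import java.util.List;
+
+class Persona { /* stand-in for the real persona type */ }
+
+// Each legacy validation method becomes (or is wrapped by) one small handler.
+interface IdentityCheck {
+    boolean approves(Persona persona);
+}
+
+class IdentityValidator {
+    private final List<IdentityCheck> checks;
+
+    IdentityValidator(List<IdentityCheck> checks) {
+        this.checks = checks;
+    }
+
+    // Equivalent of the legacy initialize(): stop at the first disapproval.
+    boolean validate(Persona persona) {
+        for (IdentityCheck check : checks) {
+            if (!check.approves(persona)) {
+                return false; // the legacy code returned an error here
+            }
+        }
+        return true;
+    }
+}
+
+// The existing methods of the single legacy object could be adapted without
+// rewriting them, e.g. by passing method references that match IdentityCheck.
+
+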

+",382769,,,,,1/9/2021 18:01,Chain of responsibility for a single instance?,,1,0,,,,CC BY-SA 4.0 +420842,1,420868,,1/9/2021 17:24,,8,482,"

Consider typical gym trainings tracker app.

+

User has account related attributes:

+
User {
+  id
+  login
+  password
+  email
+  fname, lname
+  isBlocked
+}
+
+

However, the requirements are that an application's user manages his trainings, trainings history, achievements, profile, etc. All of those entities should be somehow linked with user account.

+

How do I link it with an account? What is the common way to do it and its pros/cons?

+
+

I can imagine two scenarios:

+

Possibility 1: Making User a large 'god' object:

+
User {
+  id
+  login
+  password
+  email
+  fname, lname
+  isBlocked
+
+  trainings        # one to many
+  training_history # one to one
+  achievements     # one to many
+  /** possibly many more relations */
+}
+
+

Possibility 2: Link User with UserProfile, and then UserProfile holds all the relations.

+
User {
+  id
+  login
+  password
+  email
+  fname, lname
+  isBlocked
+
+  user_profile     # one to one
+}
+
+UserProfile {
+  user_id          # one to one
+
+  trainings        # one to many
+  training_history # one to one
+  achievements     # one to many
+  /** possibly many more relations */
+}
+
+

Is the second option really better than the first one? Can I do better?

+",366489,,,,,1/11/2021 3:16,How to avoid making User a god object?,,5,0,2,,,CC BY-SA 4.0 +420860,1,420862,,1/10/2021 2:03,,2,98,"

I am working on implementing a database of sorts and am stuck wanting to make it perfect from the get go because I realize I don't know how to migrate the database engine from one data structure to another (as the data structure implementations evolve). I am afraid that if I pick a database data structure, then I won't be able to adjust it down the road.

+

Take a hash table, for the sake of this question. Say I implemented a database using a hash table with hashing algorithm h1(). That distributes my records all over the place. Now I want to use hashing algorithm h2(). I can't just boot up my database with the new hash algorithm; it won't know how to read the locations of the existing database records. So I need to somehow migrate from one hash to the other, and I don't see how to do that. Not only that, but this migration needs to happen for every client that upgrades to v2 of my database, presumably when they first start the new version.

+

My question is, how do database implementors generally manage this problem? How do they effectively migrate their database implementation? Take for example migrating from a hash-table to a b+tree, or b+tree-1 to b+tree-2. How do they serialize the old data into the new form in practice? How do they get everyone off of the old data structures and onto the new ones?
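
+

For concreteness, here is a minimal sketch in Java (my own illustration with hypothetical interfaces, not how any particular engine does it) of the common "rebuild and swap" idea: stream every record out of the old structure, write it into a new structure built with the new algorithm, then atomically switch over - typically triggered once, on first startup of the new version.

+
import java.io.IOException;
+import java.util.Iterator;
+
+// Hypothetical, minimal interfaces purely for the sake of illustration.
+interface StorageEngine {
+    Iterator<Record> scanAll() throws IOException; // full scan, independent of hash placement
+    void put(Record record) throws IOException;
+    void close() throws IOException;
+}
+
+final class Record {
+    final byte[] key;
+    final byte[] value;
+    Record(byte[] key, byte[] value) { this.key = key; this.value = value; }
+}
+
+final class Migration {
+    // Rebuild the data in the new on-disk format, then swap it in. The old files stay
+    // untouched until the swap, so a crash mid-migration leaves the old version usable.
+    static void migrate(StorageEngine oldEngine, StorageEngine newEngine, Runnable atomicSwap) throws IOException {
+        Iterator<Record> records = oldEngine.scanAll();
+        while (records.hasNext()) {
+            newEngine.put(records.next()); // re-hashed / re-indexed by the new structure
+        }
+        newEngine.close();
+        oldEngine.close();
+        atomicSwap.run(); // e.g. an atomic rename of the new data directory over the old one
+    }
+}
+
+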

+",73722,,,,,1/10/2021 8:08,How does a database implementor migrate their database engine to a new data structure?,,2,7,,,,CC BY-SA 4.0 +420863,1,,,1/10/2021 9:16,,2,87,"

I have been developing a software driver for an analog-to-digital converter in C++. The A/D converter is primarily intended for conversion of temperature signals. The temperature signals are then used by the temperature protection algorithms. My goal is to design the driver interface in such a manner that the driver can be used in an RTOS-based application and also in a bare-metal application.

+

Based on the above mentioned requirements I have designed the interface of the driver in this manner

+

+

So my intention is to use a sort of buffering of the analog signal samples in the internal array analog_inputs. The idea is that the client software calls the initialize method to initialize the ADC peripheral and then calls the update method from within an RTOS task or from a timer interrupt service routine. The periodic call of the update method (which basically calls the startConversion method) results in periodic invocation of the endOfConversionCallback "behind the scenes". Here the analog_inputs array is filled with the samples. In case the isReady method returns true, the analog_inputs array contains the first samples of all the analog inputs and the client can start to access them via the getRawValue method call.

+

My question is whether you think that the approach which I have suggested above is suitable for my requirements or whether you see any potential problems in this approach?

+",379411,,,,,1/10/2021 9:16,How to design software driver for the analog to digital converter?,,0,5,,,,CC BY-SA 4.0 +420865,1,,,1/10/2021 12:37,,0,69,"

For learning purposes, I'm trying to build an Expense Tracker web-app from scratch. I'm in the process of designing it before coding it. I decided to attempt to make it a Headless Web-app: The main server will only have an API and the client will query it to render content. This means I can use the same API for mobile apps and websites.

+

Now I've come across an interesting dilemma:

+
    +
  • How do I authenticate this API? I shouldn't ask for username and password every time I request something.
  • +
  • Do I use API keys? If so, how often do I renew them? Are they temporary or permanent?
  • +
  • If I want to make a public API, how do I distinguish the regular user from someone using the API?
  • +
+

Also please let me know if my idea is flawed, I'm open to suggestions and changes.

+

Thank you!

+",248635,,,,,1/10/2021 12:37,How to authenticate an API in a headless web-app?,,0,2,,,,CC BY-SA 4.0 +420866,1,,,1/10/2021 14:22,,1,155,"

The following is an example of "composition":

+
public class Car
+{
+    Engine engine;  // Engine is a class
+}
+
+

But is it still called "composition" if we are using primitive data types? for example:

+
public class Car
+{
+    int x;  // int is a primitive data type
+}
+
+",247763,,209774,,1/10/2021 15:38,1/11/2021 15:31,"Is it called ""composition"" if we are using primitive data types?",,3,1,,,,CC BY-SA 4.0 +420872,1,,,1/10/2021 17:46,,76,15133,"

How does a functional programming language, such as Elm, achieve "No runtime exceptions"?

+

Coming from an OOP background, runtime exceptions have been part of every OOP-based framework I have used, both browser-based frameworks based on JavaScript and also Java ones (e.g., Google Web Toolkit, TeaVM, etc. - correct me if I'm wrong though), so learning that the functional programming paradigm eliminates this is big.

+

Here's a screen grab from NoRedInk's presentation on Elm, showing the runtime exceptions from their previous JavaScript-based code to the new Elm codebase:

+

+
    +
  • How does the functional paradigm or programming approach eliminate runtime exceptions?
  • +
  • Are runtime exceptions a great disadvantage of OOP over functional programming?
  • +
  • If it is such a disadvantage, why have the OOP paradigm, programming approach, and frameworks been the industry standard? What is the technical reason? And what's the history behind this?
  • +
+",382819,,591,,1/17/2021 9:17,1/17/2021 9:17,"How functional programming achieves ""No runtime exceptions""",,10,20,20,,,CC BY-SA 4.0 +420877,1,,,1/10/2021 21:46,,2,170,"

I work for an organization that heavily leverages AWS. There is a strong push that every team move from containers deployed on ECS to leverage AWS Lambda and step functions for (almost) every project. I know that there are workflows for which lambdas are the best solution, for example if you are running infrequent, short duration processes or processing S3 uploads for example. However I feel like my project isn't a great use case for them because:

+
    +
  1. We have many calls to a database and I don't want to have to worry about having to re-establish connections because the container a lambda was running in isn't available anymore.

    +
  2. +
  3. We have many independent flows which would require too many lambdas to manage efficiently. With each new lambda you create you have to maintain an independent deployment pipeline and all the bureaucratic processes and items that go with owning a deploy-able component. By limiting the number of these the team can focus on delivering value vs maintenance.

    +
  4. +
  5. We run a service that needs to be available 24/7 with Transactions Per Second around 10 to 30 around the clock. The runtime for each invocation is generally under 10 seconds with total transactions for a day in the 10's of thousands.

    +
  6. +
+

Also generally, I'm not bought into the serverless ecosystem because of a few pain points:

+
    +
  1. Local development. I know the tooling for developing AWS Lambdas on a developer machine has gotten much better, but having to start all these different lambdas locally with a step function to test an application locally seems like a huge hassle. I think it makes much more sense to have a single Java Spring Boot application with a click of a button you can test end to end and debug if necessary.

    +
  2. +
  3. Reduced Isolation. If you have two ECS clusters and one is experiencing a huge throughput spike, the other ECS cluster will not be impacted because they are independent. Not so for lambda. We've seen that if other lambdas are using all the excess provisioned concurrency and we have to go over our reserved concurrency limit, then we are out of luck and we'll be rate limited heavily leading to errors. I know this should be a niche scenario, but why risk this at all? I think the fact that lambdas are not independent is one of things I like least about this ecosystem.

    +
  4. +
+

Am I thinking about lambdas/ serverless wrong? I am surrounded by developers who think that Java and Spring are dead and virtually every project must be built as a go/python lambda going forward.

+

@Mods if there are any ways that I can make this question more appropriate for the software engineering stack exchange community or rephrase it, I'm happy to make changes here as well.

+

Here's some links to research I've done so far on the topic:

+
    +
  1. https://stackoverflow.com/questions/52275235/fargate-vs-lambda-when-to-use-which
  2. +
  3. https://clouductivity.com/amazon-web-services/aws-lambda-vs-ecs/
  4. +
  5. https://www.youtube.com/watch?v=-L6g9J9_zB8
  6. +
+",257995,,,,,1/10/2021 21:46,Determining when to use Serverless vs Containerized application (AWS Lambda vs ECS) - Is Java Spring dead?,,0,0,,,,CC BY-SA 4.0 +420880,1,420899,,1/11/2021 0:38,,5,492,"

Let's assume I want to use an open-source software, the developer says that the software is open-source and provides the source code.

+

Now my question is, how can I be 100% sure that the given binary files are compiled from the given source code?

+

Of course, I could always compile the source code of every open source project I want to make use of, but that is quite time-consuming if I want to use more than just one program, or even impossible if, for example, I want to use an iPhone and do not have a MacBook.

+

So do I have to trust the developer that the binary files are truly from that source code or is there another way?

+

For example: Let's assume I want to use the messenger app Signal. How can I be sure that there is not a built-in backdoor in the binary files which is not in the provided source code?

+",382831,,382831,,1/11/2021 14:39,1/11/2021 14:39,How to prove that given binary files are compiled from provided source code?,,2,6,,,,CC BY-SA 4.0 +420881,1,,,1/11/2021 0:41,,0,94,"

From one side, the customer can demand: "I need a products list on /products and conversion statistics on /statistics/conversion". In this case, we need to obey and write something like:

+
const RoutingData: { [routeID: string]: Route } = {
+  products: {
+    URN: "/products",
+    queryParameters: {
+      category: "CATEGORY",
+      tag: "TAG"
+    }
+  },
+  conversion: {
+    URN: "/conversion"
+  }
+}
+
+

I suppose in this case the routing is part of the Business Rules, because the customer wants it and it will bring income to the customer (at least, the customer thinks so).

+

From the other side, routing is just a Web application feature, and the Business Rules must not know about implementation methods like Web or Native.

+

Just in case, here is a reminder of the Clean Architecture terminology:

+

+",375105,,209774,,1/11/2021 8:07,1/31/2021 4:18,Is web application routing Enterprise or Application Business Rules from the viewpoint of Clean Architecture?,,4,0,,,,CC BY-SA 4.0 +420887,1,420892,,1/11/2021 3:47,,8,788,"

I'm learning how to design a Microservice architecture. For example, here is a simple Microservice architecture:

+

+

I'm kind of confused by the Account Service.

+

As we know, for a web service we normally need to perform a login first, and then we can get access to some services. For example, we perform a login, and then we can post a blog or buy some products. As shown in the above image, services such as "post a blog" and "buy some products" should be independent of each other, because this is how a microservice architecture works. But for these services, we must perform a login, which means that when a client sends a request to some service, the service must check the authorization of the client. To my understanding, this will cause a lot of duplicated code.

+

Let's say I log in to a web server, and its Account Service sends a token back to me. Then I write a blog and send it to the Blog Service. When the Blog Service receives my request, it will first validate the token. After that, I buy some products, so I send a request to the relevant service. Again, this service will validate the token too. Besides, to validate a token, the server has to store some data too, right?

+

So does a microservice architecture mean that I have to design a database to which all services have access, in order to store the data used to validate a user token? And does a microservice architecture mean that we have to duplicate some code in each service to validate the token?
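
+

To make the second question more concrete: my understanding is that with signed tokens (for example JWTs) no shared database is needed at all, because each service only needs the signing key (or the corresponding public key) to verify a token locally. A rough sketch of just that idea in plain Java (illustration only - it checks only the HS256 signature and ignores expiry and claims, which a real service must also validate, usually via a JWT library):

+
import java.nio.charset.StandardCharsets;
+import java.security.MessageDigest;
+import java.util.Base64;
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+
+// Every service can hold the same secret (or a public key for RS256) and verify
+// tokens on its own, so no shared "token table" is required - only this small shared library.
+final class TokenVerifier {
+    private final byte[] secret;
+
+    TokenVerifier(byte[] secret) { this.secret = secret; }
+
+    boolean hasValidSignature(String token) {
+        try {
+            String[] parts = token.split("\\."); // "header.payload.signature"
+            if (parts.length != 3) return false;
+
+            Mac mac = Mac.getInstance("HmacSHA256");
+            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
+            byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
+
+            byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
+            return MessageDigest.isEqual(expected, provided); // constant-time comparison
+        } catch (Exception e) {
+            return false;
+        }
+    }
+}
+
+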

+",265685,,265685,,1/12/2021 9:36,1/12/2021 9:36,How does user authorization work in a Microservice architecture,,2,0,4,,,CC BY-SA 4.0 +420888,1,420891,,1/11/2021 3:52,,35,7860,"

I have this code in some part of an application:

+
long sum1 = new Multiples().ofAny(new long[] { 3, 5 }).until(32768).sum();
+long sum2 = new Multiples().ofAll(new long[] { 3, 5 }).until(32768).sum();
+long sum3 = new Multiples().of(32).until(4096).sum();
+
+

I created it so readers have a clear vision of what's happening, but each method call returns a different object of a different type (Multiples -> MultiplesCalculator -> MultiplesCalculationResult -> long).

+

In other words, I am doing A -> B -> C -> D, while Law of Demeter (LoD) recommends only A -> B

+

Is this a valid use case to break LoD?

+",,user218158,,user218158,1/18/2021 12:17,1/18/2021 12:17,Is this a good scenario to violate the Law of Demeter?,,1,10,9,,,CC BY-SA 4.0 +420898,1,,,1/11/2021 9:26,,60,7019,"

Java has "checked exceptions", which force the caller of the method to either handle an exception or to rethrow it, e.g.

+
// requires ParseException to be handled or rethrown
+int i = NumberFormat.getIntegerInstance().parse("42").intValue();
+
+

Other, more recent languages such as Go and Rust use multiple return values instead:

+
i, err := strconv.Atoi("42")    // Go
+
+match "42".parse::<i32>() {     // Rust
+  Ok(n) => do_something_with(n),
+  Err(e) => ...,
+}
+
+

The underlying concept is similar: The caller of the method has to do something about potential errors and can't just let them "bubble up the stack trace" by default (as would be the case with non-checked exceptions). From some points of view, checked exceptions can be seen as syntactic sugar for alternative return values.

+

However, checked exceptions are widely disliked. The C# designers made the deliberate decision not to have them. On the other hand, Go and Rust are extremely popular.

+
+

Why did this concept (see the bolded sentence above) fail in Java but succeed in Go and Rust? What mistakes did the Java designers make that the Go and Rust designers didn't? And what can we learn about programming language design from that?

+",33843,,109329,,1/29/2021 13:56,1/29/2021 13:56,"Why do ""checked exceptions"", i.e., ""value-or-error return values"", work well in Rust and Go but not in Java?",,9,40,7,,,CC BY-SA 4.0 +420905,1,,,1/11/2021 14:16,,2,47,"

I have a 3rd party application. Basically, I need to run one instance of that application per user. For 10 users I have to run 10 instances. From my API +I want to communicate with a specific client instance based on API parameters. Please check the attached image.

+

+

I have two methods and each have some issues.

+

1. using multiple ports

+

I can record the port number in a database and forward API data to the specific instance, for example localhost/instances:4501.
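
+

To illustrate option 1 (a sketch of my own, with an in-memory map standing in for the database lookup; the names are hypothetical), the API would simply look up the instance address recorded for the user and forward the call:

+
import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.Map;
+
+final class InstanceRouter {
+    private final HttpClient http = HttpClient.newHttpClient();
+    private final Map<String, String> baseUrlByUser; // e.g. "alice" -> "http://localhost:4501"
+
+    InstanceRouter(Map<String, String> baseUrlByUser) {
+        this.baseUrlByUser = baseUrlByUser;
+    }
+
+    // Forwards a GET request to the instance that belongs to this user.
+    String forward(String user, String path) throws Exception {
+        String baseUrl = baseUrlByUser.get(user);
+        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + path)).GET().build();
+        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
+    }
+}
+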

+

The issue is that it's not scalable, as there is a limit on port addresses. Also, identifying usable ports without clashing with other applications is another challenge.

+

2. Using docker network

+

With this approach I don't need to assign ports; I can just name the containers instance1, instance2, and forward traffic to a specific instance. However, with this approach I cannot run the application without Docker, and I want to be able to run my application without Docker as well.

+

Which approach is better? And is there any other, better way to achieve this? Thank you.

+",144953,,144953,,1/12/2021 6:32,1/12/2021 6:32,Communicate with multiclient applications,,0,6,1,,,CC BY-SA 4.0 +420907,1,420910,,1/11/2021 15:01,,0,199,"

I'm reading through Mickens' OS notes, and I came across the following depiction of a virtual address space.

+

I conceptually understand "user mode" of a process' virtual address space. It contains program instructions, stack, heap, static data, etc.

+

But what about 'kernel mode'? I always thought of kernel instructions as elsewhere... I thought of kernel as a separate process. And when a system call happens that kernel process gets loaded and a handler gets executed.

+
    +
  • Is this incorrect? Part of the kernel is co-located with the process? Which part?
  • +
+",357949,,379622,,1/11/2021 15:06,1/12/2021 0:19,What's contained in 'kernel mode' in virtual address space of a process?,,1,1,1,,,CC BY-SA 4.0 +420911,1,,,1/11/2021 16:25,,1,121,"

I am working on a REST API which calls another service, fetches the data, and returns it to the UI, so it does not have any direct DB interactions. Recently we added an exception handling feature which uses Controller Advice to handle application-level exceptions. A sample response looks like this:

+
{
+    "timestamp":"2021-11-01T12:14:45.624+0000",
+    "status":500,
+    "error":"Internal Server Error",
+    "message":"No message available",
+    "path":"/api/book/1"
+}
+
+

Whenever there is an error, the UI logs the message to Splunk, which already logs a timestamp. So is there really an advantage to adding a timestamp to the response? Or what other advantages do I get from using this timestamp field in my response?

+",380794,,,,,1/11/2021 16:53,What are the advantages of sending timestamp in the response?,,1,17,,,,CC BY-SA 4.0 +420912,1,420915,,1/11/2021 16:47,,0,131,"

I have written a Command Line Interface where the user has to construct an object, basically by providing input to a bunch of questions. I have a hard time testing these functions as there is too much happening in there. Basically, for every input there is some validation, and it will loop forever, printing an error message and asking again until the user enters a correct input.

+

A quite simplified case might look something like this:

+
// CommandLineInterface.h
+void createPerson(DatabaseClass& database, std::ostream& ostream, std::istream& istream);
+
+
+
// CommandLineInterface.cpp
+namespace {
+std::string getPersonNameInput(std::ostream& ostream, std::istream& istream) {
+   while(true) {
+     ostream << "Enter Person Name";
+     std::string name;
+     istream >> name; 
+
+     if(someOtherFunctionToValidateName(name)) 
+        return name; 
+
+     ostream << "Some error message";
+   }
+}
+}
+
+void createPerson(DatabaseClass& database, std::ostream& ostream, std::istream& istream) {
+    auto name = getPersonNameInput(ostream, istream); 
+    auto age = getPersonAgeInput(); 
+    database.addPerson(Person { name, age }); 
+}
+
+

So there is one function that is part of the public interface, which delegates input, error handling and validation to helper functions in an anonymous namespace.

+

I've learnt that you shouldn't test Implementation Details (such as functions in an anonymous namespace or private functions), but only test the public Interface, which will call these directly. But I also learnt to test only one noticeable end result per unit (the big end result here is the successful call of some function with the constructed object ... but there are loads of other noticeable results such as the error messages). +This might be an indicator that my function does too many things and does not separate concerns.

+

One "fix" would be to put getPersonNameInput in the header as well and make it part of the public interface and then unit test separately. I could then test createPerson by mocking this function. +But that seems wrong to me as well. Making helper functions part of the public interface.

+

Is my design just bad here? If yes, what would be ways to improve the design, separate concerns and make it more testable? +If not, how would I best test it? (Btw: I know that it's sometimes possible to test private functions or functions in anonymous namespaces, but as said above you usually would not want to test these)

+

Thanks for help!

+",382802,,379622,,1/11/2021 16:55,1/11/2021 17:26,How to Unit test / design differently a complicated free function,,1,0,,,,CC BY-SA 4.0 +420917,1,420919,,1/11/2021 18:27,,1,594,"

I am using the AWS SDK for Java V2, and I want to delete a SageMaker endpoint and a SageMaker endpoint config. What is the best way to do this?

+

For example, I am currently first using the describeEndpoint method to see if the endpoint already exists. I do this by calling describeEndpoint and if I get an exception that includes the phrase Could not find endpoint, then I know the endpoint does not exist.

+

That way, I can only call the deleteEndpoint method if the endpoint already exists, as it will otherwise throw an exception. This feels like a very inelegant solution. Can we do better?
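
+

One alternative I am considering (an untested sketch, based on the assumption that the SDK v2 exposes the service's ListEndpoints operation and its NameContains filter with the usual builder-style names - please verify against your SDK version) is to list endpoints filtered by name instead of probing with describeEndpoint and parsing the exception message:

+
import software.amazon.awssdk.services.sagemaker.SageMakerClient;
+import software.amazon.awssdk.services.sagemaker.model.EndpointSummary;
+import software.amazon.awssdk.services.sagemaker.model.ListEndpointsRequest;
+import software.amazon.awssdk.services.sagemaker.model.ListEndpointsResponse;
+
+final class EndpointChecker {
+
+    // Returns true if an endpoint with exactly this name exists.
+    static boolean endpointExists(SageMakerClient sageMaker, String endpointName) {
+        ListEndpointsResponse response = sageMaker.listEndpoints(
+                ListEndpointsRequest.builder()
+                        .nameContains(endpointName) // server-side substring filter
+                        .build());
+
+        return response.endpoints().stream()
+                .map(EndpointSummary::endpointName)
+                .anyMatch(endpointName::equals); // NameContains matches substrings, so compare exactly
+    }
+}
+
+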

+",382896,,379622,,1/11/2021 18:33,1/12/2021 22:35,How to check if an AWS SageMaker endpoint already exists?,,1,1,1,,,CC BY-SA 4.0 +420921,1,420934,,1/11/2021 19:06,,2,174,"

As the question states, is this bad practice? +I have a User aggregate root in the bounded context of Identity for authenticating the user. In this bounded context I have fields for the User related to identification of the User, e.g. email, salted password and so on.

+

I also have a generic subdomain for handling notifications. In this context a User is a Notificant. In this context, the Notificant has fields for e.g. the number of unread notifications, lastRead etc.

+

Is it good to reuse the User id in this case, as I know there is a 1-to-1 correspondence between a User and Notificant? Or should I have a field in the Notificant root referencing the User? It feels redundant, because then I have to make a lookup to map between them when I know their relationship is symmetric.

+",382375,,397719,,1/6/2022 7:16,1/6/2022 7:16,Reusing aggregate root key across bounded contexts?,,1,1,,,,CC BY-SA 4.0 +420922,1,,,1/11/2021 19:32,,0,35,"

I am enhancing an existing API to provide shipping rates for a b2b web service. This specific endpoint returns a single rate based on matching request parameters such as the service level and package information to choose the best option. We currently support allowing the client pass in a carrier on the request, and it will then only select a rate from that carrier. However, in some cases the requested carrier does not offer a matching service, and the business has requested that we now allow this service to substitute the requested carrier for a different carrier that can satisfy the request, but only in certain circumstances.

+

I am planning to add a field to the request that allows the client to specify whether their requested carrier is absolutely required or just preferred, so that they know whether to expect an error or a substitution if that carrier cannot be used. However, I am not sure if this is the right way to do it, as it is not a common pattern, and I don't really know what to call such a field. Which of these options would seem most correct?

+
    +
  1. A new field on the request which has an enum of [required, preferred] as possible values. What would this field be called?
  2. +
  3. Two separate fields requiredCarrier and preferredCarrier in something like an OpenAPI oneOf so that only one of the two can be passed in from the client. This may require a new major version of the API.
  4. +
  5. A structural change to the requested carrier field to make it an object with multiple properties, such as requestedCarrier: { carrier: 'FedEx', isRequired: true }. This would definitely require a new major version of the API.
  6. +
+",292377,,,,,1/11/2021 21:11,Allowing a client to specify whether parameter is Required or Preferred,,1,0,,,,CC BY-SA 4.0 +420926,1,420930,,1/11/2021 20:04,,6,846,"

I'm designing a RESTful API and have come across a problem when it comes to designing my routes, specifically the admin routes. My application currently has 2 types of users: regular users and administrators, and as of right now I have the following routes:

+

Users:

+
    +
  • GET /users/current - gets the user object for the currently logged-in user that contains stuff like date of birth etc.
  • +
  • GET /users/{id} - admin route
  • +
+

Appointments (this is where I have ran into a problem):

+
    +
  • GET /appointments - gets the appointments for the currently logged in user
  • +
  • GET /appointments/{id} - gets the appointment by ID, fails for regular users if the appointment doesn't belong to them with 403 Forbidden, succeeds for any id for admins
  • +
  • What should the route to get all appointments be for admins? GET /appointments/all?
  • +
  • similarly what about POST? POST /appointments creates an appointment for a currently logged in user, what about the admin route that just creates an appointment for a given user?
  • +
+

How would you tidy up this design? I've tried to look what the go-to way for this is but was unable to find guidance.

+",382902,,379622,,1/11/2021 20:38,1/11/2021 21:14,Designing routes for my REST API,,1,0,,,,CC BY-SA 4.0 +420937,1,,,1/11/2021 23:40,,0,92,"

I keep reading articles analyzing Monitoring and Observability, with lots and lots of text about how the latter is an extension of the former, or how they are complementary, or how tracing is the next step in APM (Application Performance Monitoring/Management), or any other random opinion on the subject. I honestly can't see anything left after you remove the marketing hype and fluff.

+

I appreciate monitoring and alerting with a combination of Prometheus and Grafana (or any equivalent stack), and how you can track and visualize metrics collected from application logs, or how the logs themselves can be indexed/labeled for retrospective analysis, and how one can integrate threshold/prediction alerting in that regard.

+

But at the end of the day, "tracing" sounds like simply adding additional labels (host/container name, service/endpoint, timestamp), and APMs look like a metadata store for service exceptions, RESTful requests/transactions and call stacks, or something high level like that.

+

Am I a confused dinosaur, or am I missing something? Is there anything more to the above?

+

OK the above was overly dramatic, but the question remains: How can we do something more than monitoring in a shared application where the only thing we can analyze are application logs? How can we track a transaction through various application levels in a non-distributed, non-containerized deployment (if at all)?
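
+

To make that last question concrete, the most basic home-grown step beyond plain metrics that I can picture is a correlation id. Here is a sketch of my own (assuming SLF4J MDC and a Servlet 4+ Filter, where init/destroy have default implementations) that stamps every log line of a request with one id, so a single transaction can be followed through all layers of the application logs:

+
import java.io.IOException;
+import java.util.UUID;
+import javax.servlet.Filter;
+import javax.servlet.FilterChain;
+import javax.servlet.ServletException;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.HttpServletRequest;
+import org.slf4j.MDC;
+
+// With %X{correlationId} added to the log pattern, every log statement issued while this
+// request is processed carries the same id, which is what lets the logs be stitched
+// back into one transaction when they are indexed or grepped later.
+public class CorrelationIdFilter implements Filter {
+
+    @Override
+    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
+            throws IOException, ServletException {
+        String incoming = (request instanceof HttpServletRequest)
+                ? ((HttpServletRequest) request).getHeader("X-Correlation-Id")
+                : null;
+        String correlationId = (incoming != null) ? incoming : UUID.randomUUID().toString();
+
+        MDC.put("correlationId", correlationId);
+        try {
+            chain.doFilter(request, response);
+        } finally {
+            MDC.remove("correlationId"); // don't leak ids across pooled threads
+        }
+    }
+}
+
+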

+",56253,,,,,1/12/2021 11:46,How to add APM/observability/tracing in a monolithic architecture?,,1,0,,,,CC BY-SA 4.0 +420940,1,,,1/12/2021 2:08,,3,166,"

TL;DR: How are distributed open-source apps like Scuttlebutt secured from DoS attacks and from hackers who can make custom versions of the application?

+

I'm struggling with designing an open-source distributed application architecture. I want to create an application consisting of an open-source server, client, and provider. The client sends requests to one random provider instance, which has a list of all instances of both clients and servers, and forwards the request to one random server, which, after processing the information, sends a result back to the client. Every part of this distributed app is open source, so everyone can create their own instance of the client, provider, or server, and everything seems fine - but what if some programmers have bad intentions and change the client code so that it sends millions of requests (a DoS attack) to one specific, not random, provider, or change the provider code so it sends all requests to one specific server? They could also change the server code, so that if a client expects to get a specific picture from the server database, the hacker sends inappropriate pictures to all available clients.

+

If I hardcode some kind of verification, like hashing important functions of the API, then a hacker will just remove it in their own fork. Therefore, I cannot solve this problem in any way except by making the code of one of the parts private. For example, I can make the provider application closed source, so it checks the hash of both client and server, and if this check fails the provider deletes that instance from the list of instances. This solution sounds good, but in this case the whole project will no longer be open source.

+

To summarize: I want to create an open-source distributed application, so everyone can make their own instance, improve it, and add new functionality - but this ability to create custom versions should not be misused for DoS, sniffing, or information corruption when many different versions work together.

+

I'm not very familiar with this topic, so I'll be glad if you can give me advice, a link to an article on a similar topic, or a book.

+",382921,,9113,,1/12/2021 6:51,2/19/2021 9:19,How to protect an open-source distributed application consisting of clients and servers from forks made by hackers?,,1,5,,,,CC BY-SA 4.0 +420943,1,,,1/12/2021 7:52,,4,104,"
    +
  • In Entity Framework 6 and/or Entity Framework Core 3+, the code-first types generated by the scaffolding (or other code-generation tools, my preference is this T4 script) are mutable classes that do not expose their Entry<T> state.

    +
  • +
  • In web-applications today, current practice is to load the required entities inside the Controller's Action and compose a View Model object, then pass the View Model off to the View. When the Controller's Action method returns the DbContext is disposed, so the View Model must be fully-populated and thus cannot depend on any lazy-loading to render the View.

    +
      +
    • (btw, this question is not concerned with Model Binding or anything like that)
    • +
    +
  • +
  • The problem is that if the View Model class simply embeds Entity Types (which is fine when the View Model is not being used in a <form>!) the View has no way of knowing what data is "loaded" and what it isn't - and there's also no easy way in C# to use the type-system to express these kinds of invariants about an object's state.

    +
  • +
+

For example, consider these mutable entity types (in .NET 4.8 - so we can't use nullable-reference-types):

+
namespace Contoso.Data
+{
+    class Customer
+    {
+        public Int32  CustomerId { get; set; }
+        public String Name       { get; set; }
+
+        public ICollection<Order> Orders { get; set; }
+    }
+
+    class Order
+    {
+        public Int32    OrderId    { get; set; }
+        public Int32    CustomerId { get; set; }
+        public Customer Customer   { get; set; }
+    }
+}
+
+

And this immutable View Model for a page which is a read-only view of a Customer's Orders:

+
namespace Contoso.Web
+{
+    class CustomerOrderPageViewModel : IPageViewModel
+    {
+        String IPageViewModel.PageTitle => this.CustomerSubject.Name + "'s Orders";
+
+        public CustomerOrderPageViewModel( Customer c )
+        {
+            this.CustomerSubject = c ?? throw new ArgumentNullException(nameof(c));
+            if( c.Orders is null ) throw new ArgumentException( "Orders is null." );
+        }
+
+        public Customer CustomerSubject { get; }
+    }
+}
+
+

And this straightforward Razor view's fragment:

+
<table>
+    <tbody>
+@foreach( Order order in this.Model.CustomerSubject.Orders ) {
+        <tr>
+            <td>@order.OrderId</td>
+        </tr>
+}
+    </tbody>
+</table>
+
+

The view expects that CustomerOrderPageViewModel.CustomerSubject contains a Customer with a loaded Orders collection (not to mention not-null too). It is not possible for the View to statically express its expectation that the collection is loaded (this would be possible with C# Code Contracts, but that's dead in the water right now - I'd love to see it come back, however).

+

The "solution" is to move the Orders collection to the CustomerOrderPageViewModel so it at least that expresses the expectation that Orders is available - and as CustomerOrderPageViewModel is immutable and the .Orders collection is only set by the constructor that makes it "safe" as far as I'm concerned - but this introduces a lot of time wasting by essentially copying the data's structural definition from the Entity type to the View-Model type - which doesn't "scale" when an application could have hundreds of different views all with similar requirements. My job is in the business of automating away tedious things - so I'd rather not have tedious things to do myself!

+

Another problem is that the View (and View-Model) has no way of knowing if an empty CustomerSubject.Orders collection represents a either a collection that hasn't been loaded yet (e.g. due to a bug/missing-step in the entity loading code) - or if it did actually load the collection but the Customer has not made any orders.

+
+

So I've been thinking about how we could use immutable types in C#/.NET to express invariants, and how as I'm already using a very customizable T4 template to scaffold entity types I can extend that T4 to generate state invariant types to represent the shape of loaded data.

+

An example based on the scenario above would involve a new type struct CustomerWithLoadedOrders, like so (it's a struct because invariants shouldn't participate in an inheritance hierarchy, they're immutable, and we should avoid GC heap allocations anyway):

+
struct CustomerWithLoadedOrders
+{
+    public static implicit operator CustomerWithLoadedOrders( ( Customer c, IReadOnlyList<Order> o ) t ) => new CustomerWithLoadedOrders( t.c, t.o );
+    public static implicit operator Customer( CustomerWithLoadedOrders self ) => self.Customer;
+    public static implicit operator IReadOnlyList<Order>( CustomerWithLoadedOrders self ) => self.LoadedOrders;
+
+    public CustomerWithLoadedOrders( Customer c, IReadOnlyList<Order> loadedOrders )
+    {
+        this.Customer     = c            ?? throw new ArgumentNullException(nameof(c));
+        this.LoadedOrders = loadedOrders ?? throw new ArgumentNullException(nameof(loadedOrders));
+    }
+
+    public Customer             Customer     { get; }
+    public IReadOnlyList<Order> LoadedOrders { get; }
+}
+
+

So then I'd have an extension-method on the DbContext:

+
public static async Task<CustomerWithLoadedOrders> LoadCustomerWithOrdersAsync( this MyDbContext db, Int32 customerId )
+{
+    Customer cus = await db.Customers.SingleAsync( c => c.CustomerId == customerId ).ConfigureAwait(false);
+
+    await db.Entry( cus ).Collection( c => c.Orders ).LoadAsync().ConfigureAwait(false);
+
+    return ( cus, cus.Orders ); // or just `new CustomerWithLoadedOrders( cus, cus.Orders )`
+}
+
+

This looks tedious to write by hand, but with T4, invariant types can be easily generated for each loadable member of every entity type, and T4 can also be used to generate combinations of invariants together - think of it as a worse-than-poor-man's implementation of an ADT product type (with most of the fiddly syntax pain removed and compatibility with entity types maintained via implicit conversion).

+

...and this works for the simpler cases involving single types or collections-of, but eventually it results in massive code-bloat (even if it is generated code) when needing to define invariants for the state of a any moderately-sized object-graph: for example, a Customer, all their Orders, as well as all Products in all of those Orders - and it quickly spirals from there. An additional pain-point is that for batch loading or customized Linq queries we can't use extension methods like LoadCustomerWithOrdersAsync because those load single entities - so we need to rely on runtime assertions which defeats the point of using a static-type system to encode invariants in our program.

+

Have you run into this problem, and how did you come to a solution?

+",91916,,91916,,1/12/2021 9:04,2/14/2021 12:29,"In the absence of code-contracts, what techniques exist to represent state invariants (e.g. ""Customer with Orders loaded"" with Entity Framework)?",,2,0,,,,CC BY-SA 4.0 +420948,1,420951,,1/12/2021 12:19,,3,232,"

I learnt the concept of abstraction as:

+
+

Reducing complexity by hiding unnecessary details.

+
+

Does this have a relationship with the abstract keyword in java? +I see that the abstract keyword is being used in methods in classes like this:

+
public abstract void printName();
+
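
For context, here is a small self-contained Java example of how such an abstract method is typically used (my own minimal illustration):

+
// The abstract class exposes *what* can be done while hiding *how* it is done.
+abstract class Shape {
+    public abstract double area(); // no body here: the detail is hidden from callers
+
+    public void describe() {
+        System.out.println("Area: " + area()); // uses area() without knowing how it is computed
+    }
+}
+
+class Circle extends Shape {
+    private final double radius;
+    Circle(double radius) { this.radius = radius; }
+
+    @Override
+    public double area() { return Math.PI * radius * radius; } // the hidden detail
+}
+
+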
+",382763,,379622,,1/12/2021 13:22,1/14/2021 5:30,Does abstraction have a relationship with the abstract key word in java?,,5,2,,,,CC BY-SA 4.0 +420954,1,,,1/12/2021 13:57,,1,182,"

I used to have a base object with subtypes behaving in almost the same way -- the difference being in their render methods. This base class defined a default render method, overridden by some subtypes that have their dependencies wired in.

+
<?php
+
+    class SubType1 extends BaseType {
+        
+        function __construct(string $userData) {
+
+            $this->userData = $userData;
+        }
+
+        function render (ParserHelper $parser) {
+
+            return $parser->render();
+        }
+    }
+
+    class SubType2 extends BaseType {
+        
+        function __construct(string $userData, array $settings) {
+
+            $this->userData = $userData;
+
+            $this->settings = $settings;
+        }
+
+        function render (XmlHelper $parser) { // doesn't implement same interface as ParserHelper
+
+            return $parser->getResponse($this->userData);
+        }
+    }
+
+    class BaseType {
+
+        function render () {
+
+            return "I'm in A";
+        }
+    }
+
+

Keep in mind that the following classes are utilized in this way

+
    $renderer = $typeFinder->search([ // define critical objects in user facing client. this definition is frequently performed and would be an inconvenience to constantly pull dedicated $parser before definition
+        new SubType1("John"),
+
+        new SubType2("Ann", ["dark_mode" => true])
+    ]);
+
+

Then, internally, where the actual consumption is being done, we had

+
    $dependencies = $app->autoWire($renderer, "render"); // the violation here
+
+    $renderer->render(...$dependencies);
+
+

So, in trying to maintain the same interface, BaseType was made abstract with all the consistent behavior retaining their default positions. The user facing client still looks the same. However, the underlying classes were refactored to

+
<?php
+
+    class SubType1 extends BaseType {
+        
+        function __construct(string $userData) {
+
+            $this->userData = $userData;
+        }
+
+        function render () {
+
+            return $this->regularParser->render();
+        }
+    }
+
+    class SubType2 extends BaseType {
+        
+        function __construct(string $userData, array $settings) {
+
+            $this->userData = $userData;
+
+            $this->settings = $settings;
+        }
+
+        function render () {
+
+            return $this->xmlParser->getResponse($this->userData);
+        }
+    }
+
+    abstract class BaseType {
+
+        abstract function render ();
+
+        function initialize (ParserHelper $parser, XmlHelper $xmlParser):void {
+
+            $this->regularParser = $parser;
+
+            $this->xmlParser = $xmlParser;
+        }
+    }
+
+

Then consumed with

+
    $renderer->initialize($app->get(ParserHelper::class), $app->get(XmlHelper::class));
+
+    $renderer->render();
+
+

Is it ideal to plug in those dependencies to even subtypes who won't be using them? Does this obey the open/closed principle as well? It doesn't look like it to me. The subtypes are safe as far as creating new ones is concerned. But there's the caveat that their parent must receive that parser first. There goes the "closed" part of the principle.

+

I realize a decorator object that works with the contents of $decorated->userData should do the trick, but I can't wrap my head around how to decouple the decorated user-facing subtype constructors from the decorator, while still allowing the decorated class the autonomy to pick which parser best suits it. MAYBE that client-facing subtype just has to swallow the bitter pill of defining the parser each time.

+

I have an idea on how to use the decorator pattern, retain the subtype signature, leave them closed. It looks like this

+
<?php
+
+    class SubType1 extends BaseType {
+        
+        function __construct(string $userData) {
+
+            $this->userData = $userData;
+        }
+    }
+
+    class SubType2 extends BaseType {
+        
+        function __construct(string $userData, array $settings) {
+
+            $this->userData = $userData;
+
+            $this->settings = $settings;
+        }
+    }
+
+    abstract class BaseType {
+
+        // define common functionality
+    }
+
+    interface Renderable {
+
+        public function render();
+
+        public function setApp(App $app); // parameter declared here so implementations stay signature-compatible
+    }
+
+    class SubType1Decorator extends SubType1 implements Renderable {
+
+        function __construct (string $userData) {
+
+            parent::__construct($userData);
+        }
+
+        function render () {
+
+            return $this->app->get(ParserHelper::class)->render();
+        }
+
+        public function setApp(App $app) {
+
+            $this->app = $app;
+        }
+    }
+
+    class SubType2Decorator extends SubType2 implements Renderable {
+
+        function __construct (string $userData, array $settings) {
+
+            parent::__construct($userData, $settings);
+        }
+
+        function render () {
+
+            return $this->app->get(XmlHelper::class)->getResponse($this->userData);
+        }
+
+        public function setApp(App $app) {
+
+            $this->app = $app;
+        }
+    }
+
+

I don't know if this ticks all the boxes. The definition will be refactored to

+
    $renderer = $typeFinder->search([
+        new SubType1Decorator("John"),
+
+        new SubType2Decorator("Ann", ["dark_mode" => true])
+    ]);
+
+

Renderer internal consumption:

+
    $rendererDecorator->setApp($app);
+
+    $rendererDecorator->render();
+
+",379153,,379153,,1/12/2021 14:43,1/13/2021 10:14,Is this correct adherence to Liskov Substitution Principle?,,1,3,,,,CC BY-SA 4.0 +420959,1,420962,,1/12/2021 15:37,,1,1101,"

In the Redis docs, it is stated that the KEYS command should not be used in production, since it blocks other processes while executing; it is better to use SCAN iteration over all keys with some batch size. +I've also read in the docs that Redis uses a hash index, so I assume it can't use it for range-style queries like SCAN and KEYS.

+

But our system is done in such a way that we need to use scans extensively. Is it ok, could it decrease the performance of hash queries significantly?
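
+

For reference, here is a minimal sketch of the batched iteration being discussed (assuming the Jedis 3.x client, where scan(cursor, params) returns a ScanResult; COUNT is only a hint for the batch size):

+
import java.util.List;
+import java.util.function.Consumer;
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.ScanParams;
+import redis.clients.jedis.ScanResult;
+
+final class ScanAllKeys {
+
+    // Iterates the keyspace in small batches; each SCAN call is short, so other
+    // commands are not blocked the way a single KEYS call would block them.
+    static void forEachKey(Jedis jedis, Consumer<String> action) {
+        ScanParams params = new ScanParams().match("*").count(1000); // COUNT is a hint, not a guarantee
+        String cursor = ScanParams.SCAN_POINTER_START;               // "0"
+        do {
+            ScanResult<String> page = jedis.scan(cursor, params);
+            List<String> keys = page.getResult();
+            keys.forEach(action);
+            cursor = page.getCursor();
+        } while (!"0".equals(cursor)); // the iteration is finished when the cursor returns to 0
+    }
+}
+
+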

+",339249,,,,,1/12/2021 16:12,Is it ok to use redis scan extensively?,,1,0,,,,CC BY-SA 4.0 +420965,1,420966,,1/12/2021 17:04,,-1,195,"

Given is the following package/class diagram of an implementation of java.util.List:

+

+

1. Which design principle is (most likely) hurt by the ArrayBasedList class?

+

The design principle that is hurt by the ArrayBasedList class is the "Use inheritance as a Specialization" because the ArrayBasedList is (extends) a List that uses the ArrayUtilities. This principle is hurt, as inheritance should not be used for "usage" properties.

+

In addition, multiple inheritance is not supported by Java.

+

2. What would be the preferred way to model this situation in a correct way (as UML)?

+

The preferred way to model this situation correctly in UML would be to make the ArrayBasedList implement the List interface and use the ArrayUtilities.

+

In code this would be translated in:

+
package app.list;
+import app.util;
+
+public class ArrayBasedList implements List {
+    
+    private ArrayUtilities util = new ArrayUtilities ();
+    public ArrayBasedList(){
+    
+    }
+}
+
+

And in UML in this:

+

+

My solution was based on How should I have explained the difference between an Interface and an Abstract class?

+

What do you think about my approach?

+

Edit 13/01/2021: This question was part of an examination sheet in a software-related master's program. I can understand that the question is perhaps a bit abstract, but at the same time, after reading the comment/response carefully, I do realize that my solution approach seems to be heading in the right direction.

+

Edit 31/01/2021: After a discussion with a Professor who teaches System Design at a software-related master's program, I realised that the optimal solution to this problem wouldn't arbitrarily "convert" the ArrayUtilities from a class to an interface. As a result, I changed the relationship between the ArrayBasedList and the ArrayUtilities as shown above.

+",381747,,381747,,1/31/2021 12:05,1/31/2021 17:39,"Is ""Inheritance as Specialization"" hurt? And how can it be fixed?",,1,2,,1/12/2021 19:27,,CC BY-SA 4.0 +420967,1,,,1/12/2021 19:29,,3,719,"

What I've learned so far as a programmer has led me to think that the easiest way to write large-scale software is to use a lot of interfaces - this makes things very easy to isolate and test. In high-performance code, the performance loss due to v-table look-ups is often stressed. I often see this mentioned when discussing game engines, databases, etc.

+

My question is, since virtual functions are so convenient - at what point should we stop using them in order to gain performance? I have seen much divided opinion online on the matter. Can anybody share some wisdom?

+

PS: I'm sorry if the question is too vague - from my perspective this is unknown-unknown territory, so it's hard to ask the right questions.

+",367844,,,,,1/13/2021 20:15,True cost of virtual dispatch in C++ - when stop using it?,,3,5,1,,,CC BY-SA 4.0 +420969,1,,,1/12/2021 21:48,,-4,841,"

An often quoted disadvantage of the strategy pattern is:

+
+

The application must be aware of all the strategies to select the +right one for the right situation

+
+

Why is this a disadvantage and what can be done to overcome it?
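
+

For reference, one commonly suggested mitigation (a sketch of my own, not tied to any particular framework) is to concentrate the knowledge of "which strategies exist" in a single registry or factory, so the rest of the application only selects a strategy by key:

+
import java.util.HashMap;
+import java.util.Map;
+
+interface ShippingStrategy {
+    double cost(double weightKg);
+}
+
+// The registry is the only place that knows the full set of strategies;
+// adding a new strategy means registering it here, not touching every caller.
+final class ShippingStrategies {
+    private final Map<String, ShippingStrategy> byName = new HashMap<>();
+
+    void register(String name, ShippingStrategy strategy) {
+        byName.put(name, strategy);
+    }
+
+    ShippingStrategy forName(String name) {
+        ShippingStrategy strategy = byName.get(name);
+        if (strategy == null) {
+            throw new IllegalArgumentException("Unknown shipping strategy: " + name);
+        }
+        return strategy;
+    }
+}
+
+// Client code no longer enumerates strategies; it only asks for one by key, e.g.
+//   strategies.forName(order.getShippingOption()).cost(order.getWeight());
+
+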

+",24685,,209774,,1/12/2021 23:38,1/15/2021 17:48,Disadvantage of the strategy pattern and how to overcome it,,4,4,,,,CC BY-SA 4.0 +420971,1,,,1/12/2021 22:57,,0,122,"

I'm working on a web application with JWT authentication (with token stored in cookie) and an SQL database. Upon loading, the frontend makes a request to the backend to determine whether the user is authenticated and who he is.

+

In order to make the app more convenient to use, I want to allow users to use most of the features without signing up (i.e. submitting email + password).

+
    +
  1. My currently preferred approach is to automatically create a proper account whenever the frontend makes that initial request without a valid authentication token and the user is immediately signed in, though the frontend pretends otherwise. When the user performs a sign in, the created account would be deleted and user would be signed in to their own account. Upon sign up, the user would become the owner of the guest account by setting an email and a password. The big advantage of this approach is, that, since the user always has a proper account with user id, all the APIs can keep the same logic for both guests and signed up users. The only downside I see, is that the email field in the users table would have to contain a random UUID to satisfy the unique constraint on the email field while the user is a guest.
  2. +
  3. The other approach would be to create a separate table "sessions" and return a session id stored in a cookie to guest users. Upon sign up, the entry in the users table would be created and all the information attached to the session copied into tables attached to the users table. This would require creating a separate logic for guests and signed up users for each API.
  4. +
+

Neither approach is completely clean: the first one essentially creates fake user accounts for guest users, while the second one complicates the logic in every API function, which has to decide where to store information depending on whether the user is authenticated.

+

Is there an alternative approach that's better?
+What are the issues that could arise if I go with the first approach?

+

Edit: A third approach would be to add a user_proxys table and associate all information with it, regardless of whether the user is authenticated or not. It would have an optional foreign key to the users table which would be set upon signing up. The JWT token would need to reference an id from this table. This seems clean, but maybe unnecessarily complicated compared to approach 1.

+

Edit 2: This question is about a similar problem but in the context of an e-commerce application. The accepted answer suggests approach 2 but in that case they only need to worry about orders and addresses while I have about a dozen interrelated tables that need to be filled in by both guest and signed up users.

+",352890,,352890,,1/13/2021 9:07,1/13/2021 9:07,Providing guest users with full accounts vs only sessions (a.k.a. guest checkout),,0,6,,,,CC BY-SA 4.0 +420981,1,420993,,1/13/2021 2:46,,-4,124,"

I've been doing technical studies in web design for a while. I have developed several web pages; however, I have not published any of them.

+

This year I plan to publish my first website. I would like to know how I should organize the folders and their various files (index.html, scss, js, multimedia, child web pages) in the best and most optimized way.

+

Also, I would like you to give me some recommendations on the structure of folders and/or other topics.

+

Please and thank you.

+",344382,,344382,,1/14/2021 16:35,1/14/2021 16:35,What is the best way to organize the folders and files in my web project?,,1,0,,1/13/2021 11:43,,CC BY-SA 4.0 +420985,1,,,1/13/2021 5:23,,0,17,"

I am building an app wherein a user will be requesting one of three engagement types:

+
    +
  • Appointment (no SLA)
  • +
  • Consult (2hr SLA)
  • +
  • Emergency (10mins SLA)
  • +
+

We are considering two different scenarios for submission of the desired request:

+
    +
  • Option 1 - At the beginning
  • +
  • Option 2 - In the middle
  • +
+

Option 1:

+

User opens app with three buttons:

+

Engagement Types:

+
    +
  • 1-Appointment
  • +
  • 2-Consult
  • +
  • 3-Emergency
  • +
+

Next screen requires inputs: Enter Patient Details (Name, ID, Reason for Request)

+
    +
  • Click NEXT
  • +
+

User will then see the next screen with the option to add images of injury or chart -

+
    +
  • If user selected Engagement Type 1 - Click NEXT (only option beyond image captures)
  • +
  • If user selected Engagement Type 2 - Text Box will be available to enter consult request detail - Click SUBMIT CONSULT REQUEST
  • +
  • If user selected Engagement Type 3 - Click SUBMIT EMERGENCY CALLBACK (will auto-send sms to on-call individual with Requestor Name, Patient Name, Reason for Request)
  • +
+

Engagement Type 1 Continued:

+

User will see a screen with an auto-generated Appointment Location/Date/Time

+

-----END OF REQUEST-----

+

Option 2:

+

User opens app with one button: Start Request

+

Next screen requires inputs: Enter Patient Details (Name, ID, Reason for Request); Select Engagement Type:

+
    +
  • 1- Appointment
  • +
  • 2- Consult
  • +
  • 3- EMERGENCY
  • +
  • Click NEXT
  • +
+

Next screen has option to add images of injury or chart and different option depending on Engagement Type:

+
    +
  • If Engagement Type 1 screen: Click NEXT (only option beyond image captures)

    +
  • +
  • If Engagement Type 2 screen: Text Box will be available to enter consult request detail - Click SUBMIT CONSULT REQUEST (auto-sends SMS with Requestor Name, Patient Name, Reason for Consult)

    +
  • +
  • If Engagement Type 3 screen: Click SUBMIT EMERGENCY CALLBACK (will auto-send sms to on-call individual with Requestor Name, Patient Name, Reason for Request)

    +
  • +
+

Engagement Type 1 Continued:

+

1 - Appointment: User will see a screen with an auto-generated Appointment Location/Date/Time

+

-----END OF REQUEST-----

+

Basically, is it better practice to have the user select the type of request up front, or to simplify with one button initiation and multiple radio buttons after the required inputs are completed?

+",382996,,63202,,1/13/2021 17:26,1/13/2021 17:26,"App Design - Engagement Type selected at beginning, or Radio buttons mid-request?",,0,2,,,,CC BY-SA 4.0 +420986,1,420989,,1/13/2021 5:54,,10,6417,"

I see this word in many places but don't get it.

+

From Wikipedia:

+
+

In message-oriented middleware solutions, fan-out is a messaging pattern used to model an information exchange that implies the delivery (or spreading) of a message to one or multiple destinations possibly in parallel, and not halting the process that executes the messaging to wait for any response to that message.

+
+
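
As far as I can tell, in that messaging sense it boils down to something like this rough Python sketch (my own illustration, not from the article):

+
import queue
+
+def fan_out(message, destinations):
+    """Deliver one message to several destinations without waiting for any response."""
+    for destination in destinations:
+        destination.put_nowait(message)  # fire-and-forget delivery
+
+consumers = [queue.Queue() for _ in range(3)]
+fan_out("hello", consumers)  # each of the three queues now holds a copy of the message
+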

but I don't know what it means under other contexts, for example,

+ +

Can you please explain this term to me? Are there other places using this term? Thank you.

+",380846,,379622,,1/13/2021 6:00,1/13/2021 6:27,What is the meaning of fan-out,,2,1,1,,,CC BY-SA 4.0 +420994,1,,,1/13/2021 12:25,,0,276,"

I am currently developing a huge project at my company with an N-tier architecture using Spring Boot, but I am not sure about the package and class naming. In our project we have a complex structure: because of the requirements, we have to call Oracle PL/SQL procedures/functions, pure SQL, JPA with basic JPA repositories, and web services (SOAP/REST). For example:

+
    +
  • I have to call procedure/function/pure sql with RestTemplate
  • +
  • I have to call rest web services with again RestTemplate
  • +
+

What is the best approach for naming the returned result classes?

+

When a client calls a web service in our project, we decided to name the request XRequest; later we map the request object to an XInput object (it contains headers etc.) with the necessary parameters and pass it to the service layer.

+

In the service layers, we call necessary other layers.

+

In these layers, after calling the other layers (for example the procedure call) and getting the results back, we map them to an XOutput in the service layer with the required parameters, pass the XOutput to the controller layer, and the controller layer maps the results to the XResponse objects with the required parameters.

+

+

In the above sample structure, what is the best approach to naming the objects returned by the web service/custom repositories? What is the best naming for the classes themselves (such as custom repositories, DaoFactory, etc.)? And what should the objects returned from the web service calls be named?

+

To sum up, in this kind of unstructured project, what is the best approach to the naming conventions for the classes and packages?

+",383018,,383018,,1/13/2021 13:52,1/13/2021 13:52,"Spring boot, n-tier layer structure naming conventions",,0,2,,,,CC BY-SA 4.0 +421003,1,,,1/13/2021 18:08,,5,352,"

I have a system with two applications interacting via a message queue. Let's call them Producer and Consumer. Some key context is that this is a multi-tenancy scenario.

+

Producer produces events based on various inputs (user interactions, API, etc.) and Consumer does downstream processing on these. One of our key constraints is that Consumer can only process events one-at-a-time-per-tenant.

+

Our current solution (a bit naive) is that multiple worker threads pull from the queue and process events, and if a tenant already has an event in progress, any later worker thread that picks up an event for that tenant just waits. This has been fine for a couple of years given our thread pools and typical event production patterns, but we had a scenario where thousands of events for a single tenant were generated in Producer, and all of Consumer's worker threads except one were stuck waiting. Consumer was therefore processing events from the queue one at a time, and our "eventual consistency" lag time became suboptimal.

+
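
For reference, a rough Python sketch of what our worker threads effectively do today (heavily simplified; process, events_queue and the event fields are placeholders):

+
import threading
+from collections import defaultdict
+
+tenant_locks = defaultdict(threading.Lock)  # one lock per tenant
+
+def worker(events_queue):
+    while True:
+        event = events_queue.get()           # any worker can pick up any tenant's event
+        with tenant_locks[event.tenant_id]:  # blocks while that tenant already has an event in flight
+            process(event)                   # the "one at a time per tenant" constraint
+        events_queue.task_done()
+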

We've got some candidate ideas for managing this:

+
    +
  1. Load balancing across queues - new messages go to the most empty queue, but tenants are locked to a single queue (how we achieve this exactly TBD)
  2. +
  3. Create a "slow lane" queue - if during processing of an event, the tenant is already in use, move the events to the "slow lane". This will drain the primary queue quickly but has implications for event processing idempotency I'm not sure will be valid for our scenario.
  4. +
+

Before we start digging on these options and looking for others, I'm curious if anyone here has experience with patterns for dealing with this type of situation.

+

Appreciate any info/advice/guidance. Thanks!!

+",350136,,350136,,1/13/2021 18:57,1/15/2021 5:55,Design question for handling large volumes of messages in multi-tenant queue,,2,2,,,,CC BY-SA 4.0 +421005,1,421032,,1/13/2021 19:39,,1,325,"

I'm new to the team (< 3 weeks), but I believe I am experienced enough as a Xamarin dev to know what I'm doing, and also experienced enough with the Prism library, to give some criticism on how things should be done in an existing code base.

+

In our next sprint, I am planning to give a dev talk and raise some concerns about the following:

+
    +
  1. Image files are located on the project root folder (App/App.PlatformName/**.png_or_jpg) which I believe should be put in another folder, specifically App/App.UWP/Images/*.png_or_jpg
  2. +
  3. Image names are hardcoded as string literal everywhere. I think they should be put in a single file/class which contains all the image names as constant string and this should be generated by a script that triggers every pre-build.
  4. +
  5. Lazy loading a Command, more specifically a DelegateCommand, is over-engineering. The type is not resource-intensive enough to be a candidate for lazy loading.
  6. +
  7. Consider the code below:
  8. +
+
IsBusy = true; // I believe this would show up some UI blockers
+// some synchronous stuffs
+IsBusy = false; // hide the blockers
+
+

I believe the code doesn't really achieve its intended purpose. If the synchronous stuff takes more time, say > 5 seconds, it would cause an ANR (application not responding) instead of showing the UI blockers. This would only work if the operation in the middle is an awaited Task, and if the process is a CPU-bound operation. Wrapping it with the IsBusy property does nothing.

+
    +
  1. String literals are stored in .cs files (notice the S) instead of a single .resx, which would make localization a nightmare. There's really no localization task yet, but I came from a mobile development background and this is always kept in mind during development.
  2. +
+

There are more concerns from me, but I'll cut it off here.

+

My questions are: Should I proceed with doing this? Would I appear cocky in front of the team? Would you do the same, or let it go since the clients are not really concerned about these things?

+

Thank you and appreciate your time on reading this.

+",168528,,,,,1/15/2021 17:43,Raising concerns on the codebase as a new hire,,4,3,,,,CC BY-SA 4.0 +421006,1,,,1/13/2021 19:56,,1,120,"

When someone comments on my question/answer/comment I will see the notification from the inbox, and it also shows under my response tab. Later when the comment is edited, the comment shown under the response tab changes; when the comment is deleted, the comment disappears from response tab as well. And the inbox message updates correspondingly during the process.

+

I think the system sends the notification through a queue service, and at the same time pushes the comment to my responses tab. After that, my account polls the system periodically to pick up any changes (edit/delete). Is my understanding correct? Will it be a bottleneck for the system if so many user accounts poll at the same time?

+",379783,,342873,,1/13/2021 20:54,1/13/2021 20:54,How does Stackoverflow comment system work,,0,3,,,,CC BY-SA 4.0 +421008,1,421015,,1/13/2021 20:45,,3,583,"

I am writing a functional specification for a project and I wish to express the expected system behaviour in Use Cases.

+

I have read a bunch of opposing opinions on whether "logging in" should be considered a Use Case or not and my conclusion is that there is no consensus, and that it also depends on the context.

+

My question is this: assuming that authentication is not a standalone Use Case, how should I describe the authentication requirements and interactions if the functional requirements of the system in question are otherwise expressed in Use Cases?

+

I assume that in this case a login flow should be an integral part of each and every Use Case. That seems like a lot of repetition, so to avoid that I could extract the login part of the flows to some "Common Flow" and reference that in my Use Cases. Is this a valid approach?

+

Regardless, the approach seems a little awkward to me (just a subjective opinion). Things would seem more natural if being logged in was a precondition to other Use Cases. How/where should I describe the authentication flows in this case?

+",315196,,315196,,1/13/2021 22:31,3/17/2022 16:07,"If logging in is not a Use Case, then what is it and where do I describe it?",,4,3,,,,CC BY-SA 4.0 +421010,1,421011,,1/13/2021 21:52,,2,126,"

Simple summary of a real problem:

+
    +
  1. I'm making a chess game
  2. +
  3. The engine that makes the chess do the magic is its own independent code (by design, for easy implementation)
  4. +
  5. I am now implementing the chess game into a game (Minecraft, but could be any mod-able game)
  6. +
+

Problem:

+
    +
  1. The code that runs the chess has classes like "Board", "Game", "Piece", "Square", etc.
  2. +
  3. The code that runs the implementation, in an optimal world, would have classes with the same name, but that would make everything I'm doing in my IDE into a jungle
  4. +
+

Question:

+
    +
  • What's the best way to solve this? I'm thinking about renaming the core chess game classes to something like "EBoard", "EPiece", etc., "E" being short for "Engine". Edit: I do not mean for this to be an opinion-based question; "best way to solve it" as in "most logical for the IDE to digest what I'm doing".
  • +
+

Note:

+
    +
  • This is not a "name that thing" question. It is partially, but I am also asking because I'm looking for naming conventions that IDEs can handle well. In prior projects I have occasionally made errors like naming classes in all upper-case letters, and the IDE had problems recognizing that I was actually looking for a class. You get the idea.
  • +
+",383060,,379622,,1/13/2021 22:15,1/14/2021 17:09,Naming conventions for classes that technically could/should have the same name,,1,3,,,,CC BY-SA 4.0 +421014,1,,,1/13/2021 22:30,,1,76,"

In an ASP.NET MVC 4 REST API application, we have SQL errors which are occurring during the request.

+

However, since the requests are big, we are streaming the data back to the client, which means we have already returned the 200 OK response and headers.

+

So midway in the stream, something bad happens. How can we indicate to the downstream client that something happened? With XML, we can at least leave the returned document incomplete. With CSV, though, I cannot come up with a clean way to indicate errors.

+

What's the best practice for indicating that an error occurred in the middle of a REST GET request? Google has somehow failed me on this one.

+

Thank you

+",383064,,,,,1/14/2021 9:52,How to provide error details back to REST request?,,2,2,,,,CC BY-SA 4.0 +421016,1,421025,,1/13/2021 23:40,,4,310,"

This came up at work and left me thinking about the best way to model this:

+

In Python, we have the built-in list container, which is a mutable sequence. Equality between two lists is defined as equality of all items in the list, in their respective positions.

+

Now a colleague felt the need to define a type that's a list for all practical purposes, except that two lists should be considered equal if they contain the same elements, in the same quantities, but in arbitrary order. Basically,

+
unordered_list_1 = UnorderedList([1,2,3])
+unordered_list_2 = UnorderedList([3,2,1])
+
+unordered_list_1 == unordered_list_2 # True!
+
+

The colleague solved this by inheriting from list and overriding the __eq__ special method:

+
class UnorderedList(list):
+  def __eq__(self, other):
+    if isinstance(other, UnorderedList):
+      return sorted(self) == sorted(other)
+    else:
+      return NotImplemented
+
+

In this form it runs into a gotcha, because the builtin python types such as list take some shortcuts with their special methods; the not-equal __ne__ special method does not just fall back onto the __eq__ special method, so you get the funny scenario where two of these unordered lists can both be equal and not equal.

+

I suggested inheriting from UserList instead, which is meant to be subclassed, or maybe from one of the collections.abc abstract base classes. Another colleague chimed in with the familiar "Favor composition over inheritance" advice.

+
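
For reference, the UserList variant I suggested would be roughly this (just a sketch):

+
from collections import UserList
+
+class UnorderedList(UserList):
+    def __eq__(self, other):
+        if isinstance(other, UnorderedList):
+            return sorted(self.data) == sorted(other.data)
+        return NotImplemented
+    # __ne__ is not overridden, so it falls back to the inverse of __eq__,
+    # avoiding the equal-and-not-equal gotcha of the C-implemented list
+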

I feel that composition in this case would lead to a lot of boilerplate delegation code:

+
class UnorderedListUsingDelegation:
+  def __init__(self):
+    self._list = list()
+
+  def __eq__(self, other):
+    if isinstance(other, UnorderedListUsingDelegation):
+      return sorted(self._list) == sorted(other._list)
+    else:
+      return NotImplemented
+
+  def append(self, item):
+    self._list.append(item)
+
+  # Basically def every method implemented by class list and write delegation code for it:
+  # pop, insert, extend, __getitem__, __setitem__ and so on
+
+

So from that consideration, I feel like inheritance is exactly right here: A teeny tiny specialization of behavior.

+

But on the other hand: Is the UnorderedList actually substitutable for the list class? Not so sure here. If you do "normal" list operations, then you shouldn't notice whether you are using an instance of the list class or of the UnorderedList class. Inserting and retrieving of elements works just fine. On the other hand, you might get unexpected behavior when comparing lists:

+
list1 = UnorderedList()
+list2 = UnorderedList()
+
+list1.append(1)
+list2.append(3)
+
+list1 == list2 # False
+
+list1.append(2)
+list2.append(2)
+
+list1 == list2 # False
+
+list1.append(3)
+list2.append(1)
+
+list1 == list2 # True!
+
+

I guess what I'm after is some clarity on how broadly or narrowly the Liskov substitution principle should be applied. Or maybe the solution is something altogether different. Maybe we shouldn't put such a "hack" into the __eq__ special method and rather be explicit about what we're doing, by writing a function like

+
def sorted_equal(a, b):
+  return sorted(a) == sorted(b)
+
+

I assume the colleague is working with some framework that expects to be working with list objects but wants to inject this special way of comparing lists.

+",161522,,161522,,1/14/2021 16:20,1/14/2021 16:20,"Inheritance vs composition: How would you implement an ""unordered list""? Subclass of list, or composition?",,3,10,,,,CC BY-SA 4.0 +421018,1,421029,,1/14/2021 1:19,,2,67,"

I'm studying UML and I have nobody to verify my solutions, so I would greatly appreciate it if anyone gave me some feedback. I'm doing exercises about use cases and sometimes it's hard for me to know if I'm on the right track, because these exercises are not mathematics in which you have an exact result. These models can vary from one analyst to another.

+

The exercise statement is a little long; here it is:

+

Exchange of Points

+
+

A supermarket chain with branches in much of the provincial territory has decided to implement a customer loyalty program. This program is based on the accumulation of points through purchases made at the different branches, which can then be exchanged for different prizes.

+
+
+

This company has requested the design and implementation of an information system in a web environment, with an Oracle database, which allows it to manage the processes related to the new loyalty program. After conducting a survey, the following information and considerations were obtained to build the system: +The client must register in the loyalty program, with which their personal data (type and number of document, first and last name, date of birth, current address), the registration number, the date of registration in the program and you are given a card –this card has a unique fifteen-digit number and a barcode–, and a unique security code (independent of the card) that is informed to the customer. The purpose of this code is to act as a security question that the customer will be asked to report, to certify his identity when requesting a trade-in. When a customer makes a purchase, he must present the card to the cashier, and he will be awarded a certain amount of points. For each movement to obtain points, the date on which they were accumulated, the amount of points obtained and the associated ticket number are recorded. +Currently one point is accumulated for every 2 dollars. Points do not have an expiration date. The business adopted the position of recording the points obtained for each customer, independently of the card; This benefits the customer as they keep the points despite losing or changing their card for any reason. When the client wishes to obtain a prize, on the supermarket's website he can consult the catalog of prizes that he can access by exchanging points. Said catalog is valid for a certain period of time, and contains a set of prizes of which a code is defined, the name and brief description of the product, the quantity available and the points necessary to access the prize. A certain product - for example, an electric oven of a certain brand, 20 liter capacity, digital - may appear in more than one catalog; for each catalog the same product may have a different code. Those products for which there is no quantity in stock will be shown with a watermark with the legend "Not available".

+
+
+

The customer must contact the Customer Service Call Center, where they will be attended to start the exchange process. The first step will be to request the client's document to access his data; then the security code communicated when enrolling in the program will be required. The client must inform the prize/s to which they wish to access (said prizes must belong to the current prize catalog). The client must confirm whether the delivery address for the awards will be the registered address or another. In case of having the necessary points, the exchange of the points will be recorded with the data of the awards exchanged and a certificate of exchange will be issued, on A5 size paper, with QR code and Arial font, which will contain the client's data, address and award/s obtained, which will be delivered to the logistics area for the subsequent delivery of the award/s.

+
+
+

Weekly, reports will be issued regarding

+
    +
  • Enrollments made to the program, with number of clients per location
  • +
  • Exchanges made, with total number of exchanges, number of products exchanged, products most required in exchanges.
  • +
+
+

And my model is:

+

+

I would like to know if that model is acceptable, if it has some minor issues, if it contains serious mistakes that mean I have to re-study key concepts, if the diagram could have more or fewer use cases, if some relationship or actor is wrong, if any name is not very representative of the functionality it models, and things like that.

+",382089,,379622,,1/14/2021 16:59,1/14/2021 16:59,Feedback for Use Cases Model,,1,0,1,,,CC BY-SA 4.0 +421022,1,,,1/14/2021 3:27,,9,1413,"

I am new to Golang and am now researching how to do testing in Go. I see that there are popular mocking libraries like gomock. However, at the same time, I see that large Golang open source projects don't use any mocking library at all. For example, geth (ETH miner), wire (dependency injection library) and the sendgrid Go client (for sending email) all choose not to use any mocking library.

+

Considering the fact that there are no classes in Golang, is it common practice not to mock interfaces (which is what most mocking libraries do)? Should I instead make the function to be tested a package-level variable and overwrite it in the test?

+",188791,,,,,1/25/2021 0:59,Why Golang projects seldom use mocking library in testing?,,1,2,0,,,CC BY-SA 4.0 +421036,1,421099,,1/14/2021 12:31,,2,97,"

I am currently studying replication and I was wondering:

+

In Passive replication, we have an agreement stage where the Primary replica manager (RM) waits for an ack from the backup RM; wouldn't it be possible to handle Byzantine failures there by doing some sort of consensus algorithm? (We could survive f+1 failures in a system of 2f+1.)

+

It would make sense that this is true, since in an Active replication system the Frontend of the system would be performing this.

+

I can find sources stating that Byzantine failures can be handled in an Active replication system, but the same sources say that Passive replication systems can't handle Byzantine failures.

+

But by the example I have given, it seems like Passive replication can handle the failures.

+",383099,,,,,1/15/2021 18:06,Can passive and active replication handle byzantine failures?,,1,0,1,,,CC BY-SA 4.0 +421039,1,,,1/14/2021 14:23,,1,59,"

We have a React website that, as part of its process, loads a dynamically generated JavaScript file from a third party. It uses some of the scripts in this JavaScript file to generate values, which the React website then passes to our API in an API call.

+

Our API then does its own processing and calls the third party's API, passing back some of the values generated from their JavaScript.

+

These values are only valid for a short time frame.

+

I'm looking to create some integration tests with the third party's API, but I cannot hard code any values as they are only temporarily valid.

+

We would only need to perform these integration tests when we modify the client class that calls the third party, so our current solution is to manually set the values but I'm wondering if there is an existing solution to handle this?

+

Our API is a c# webservice written in .net core.

+",253640,,253640,,1/14/2021 15:05,1/16/2021 9:54,How can I write integration tests if I need dynamically generated values from a javscript file?,<.net-core>,1,1,,,,CC BY-SA 4.0 +421040,1,421089,,1/14/2021 15:20,,0,1673,"

When providing JavaScript's parseInt with a non-parsable string it returns NaN. I'm trying to understand the reasons for designing a parsing function this way.

+

When I write a parsing function I usually return null or throw an exception in case the input is not parsable. It seems to me that NaN is unsafe because it allows the code to keep running even when there is no value to work with.

+

For example, this will not throw any runtime error:

+
parseInt('a') + 1
+
+

But this could lead to unexpected behavior when you expect something to be an actual number while it's actually NaN. But perhaps there is a benefit to doing this that I'm not seeing.

+
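
For comparison, Python's built-in int() takes the exception route, and a null-style variant is trivial to wrap around it (just a sketch):

+
def parse_int_or_none(text):
+    """Return the parsed integer, or None if the text is not a valid integer."""
+    try:
+        return int(text)
+    except ValueError:
+        return None
+
+parse_int_or_none("42")  # 42
+parse_int_or_none("a")   # None
+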

So my question is: Are there significant dis-/advantages to the solutions below?

+
    +
  1. Returning NaN
  2. +
  3. Returning null
  4. +
  5. Throwing an exception
  6. +
  7. Something else
  8. +
+",257235,,257235,,1/19/2021 15:21,1/19/2021 15:21,What should the result be of a failing parseInt function?,,5,18,,,,CC BY-SA 4.0 +421041,1,,,1/14/2021 16:05,,0,240,"

Currently I am a student learning Machine Learning, and so my observation is from an academic context. It may be different in a business environment.

+

One thing I find very odd when I see Python code for machine learning is that when you call a typical network class, you send it lots of parameters, which strikes me as a risky thing to do. +Surely it would be better practice to put all of your hyperparameters and configuration values in a special class for hyperparameters and configuration, and then when you want to change the values they are easy to find?

+

If the class is in a separate file, you can simply copy it across for the next project.

+

Would anyone want to comment on why this is not the obvious thing to do?

+

Here is an example of what I mean:

+
    agent = agent_(gamma=args.gamma,
+              epsilon=args.eps,
+              lr=args.lr,
+              input_dims=env.observation_space.shape,
+              n_actions=env.action_space.n,
+              mem_size=args.max_mem,
+              eps_min=args.eps_min,
+              batch_size=args.bs,
+              replace=args.replace,
+              eps_dec=args.eps_dec,
+              chkpt_dir=args.path,
+              algo=args.algo,
+              env_name=args.env)
+
+

Apologies I imagined you might have seen code like this before. As you can see, a lot of hyperparameters and configuration values. I see this a lot in books and courses. Instead of passing all these parameters around, just put them in a class.
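
+

What I have in mind is roughly the following (a minimal sketch using a dataclass; the field names and defaults are only examples, and agent_ accepting a config object is hypothetical):

+
from dataclasses import dataclass
+
+@dataclass
+class AgentConfig:
+    """All hyperparameters and configuration values in one place."""
+    gamma: float = 0.99
+    epsilon: float = 1.0
+    eps_min: float = 0.01
+    eps_dec: float = 1e-5
+    lr: float = 0.0001
+    batch_size: int = 32
+    max_mem: int = 50000
+    replace: int = 1000
+    algo: str = "DQNAgent"
+    env_name: str = "PongNoFrameskip-v4"
+    path: str = "models/"
+
+config = AgentConfig(lr=0.001)  # override only what differs for this experiment
+agent = agent_(config)          # hypothetical: the agent class would take the config object
+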

+",27197,,1204,,1/14/2021 22:01,1/16/2021 6:46,Is it really good practice in Python code for machine learning to use so many parameters?,,3,10,,,,CC BY-SA 4.0 +421047,1,,,1/14/2021 18:33,,0,54,"

In my country, there is a high number of fraudulent doctors' sick letters, as they are manually written on paper. I'm designing a web application to combat this issue and make the whole process electronic.

+

My current thinking is that when the doctor issues the sick letter in the application, the app generates a unique identifier, like a UUID but human-friendly and readable, which is stored in the database table alongside the sick letter's metadata (leave_start_date, leave_end_date, etc.), and then emails the generated letter to the patient, who will submit it to their employer.

+

The employer will verify the legitimacy of the sick letter by providing the leave_start_date, leave_end_date and the unique identifier to query the database; if the query finds a record, the application will confirm that it's a valid sick letter.

+
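
In other words, the verification step I have in mind is essentially just a lookup like this (a Python sketch, with an in-memory dict standing in for the database table):

+
# issued_letters maps identifier -> (leave_start_date, leave_end_date); in the real
+# system this would be the sick letter table, not an in-memory dict
+issued_letters = {}
+
+def verify_sick_letter(identifier, leave_start_date, leave_end_date):
+    """Return True only if a letter with exactly these details was issued by the system."""
+    return issued_letters.get(identifier) == (leave_start_date, leave_end_date)
+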

My questions are as follows:

+
    +
  • Are there any issues you can think with this design?
  • +
  • Performance concerns?
  • +
  • Security concerns?
  • +
  • Data privacy concerns?
  • +
+

I will be using Spring Boot (REST API), PostgreSQL and Keycloak as my stack. It will have a JavaScript front end and a mobile client.

+

+",94766,,94766,,1/14/2021 19:07,1/14/2021 19:07,How can I design a secure content verification web application?,,0,3,,,,CC BY-SA 4.0 +421052,1,,,1/14/2021 20:54,,0,110,"

I’m trying to model a system with Users, Permissions and Products. The main goal is to have a way of checking if a User has a specific Permission in order to allow or deny other system operations.

+

So Users will have a list of Permissions, and these Permissions will be given to the User when the User buys a specific Product.

+

This way the Users will have Permissions but the Products must have something like a “template” of the Permission that the User that buys the product will get.

+

This structure could be simplified to users having products and products having permissions, but one of the requirements is the possibility of assigning a specific permission to a specific user without needing a product. Also, each permission has its own context, so each permission has its own relation with a user and some other variables that mutate over time and can change the permission's validity.

+
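
To make it more concrete, this is roughly the shape of the domain as I currently picture it (a Python sketch, not the final model; names are illustrative):

+
from dataclasses import dataclass, field
+from typing import List
+
+@dataclass
+class PermissionTemplate:          # attached to a product: what a buyer will be granted
+    name: str
+
+@dataclass
+class Product:
+    name: str
+    grants: List[PermissionTemplate] = field(default_factory=list)
+
+@dataclass
+class Permission:                  # a concrete grant owned by one user, with its own state
+    template: PermissionTemplate
+    valid: bool = True
+
+@dataclass
+class User:
+    name: str
+    permissions: List[Permission] = field(default_factory=list)
+
+    def has_permission(self, name: str) -> bool:
+        return any(p.valid and p.template.name == name for p in self.permissions)
+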

I’m trying to map this domain to a database structure but I’m stuck with the relation between permissions and products.

+

Thanks

+",382480,,382480,,1/15/2021 12:57,4/1/2022 18:32,Design an extensible permissions model in database,,1,8,,,,CC BY-SA 4.0 +421053,1,,,1/14/2021 21:07,,4,156,"

The DDD literature is quite clear that when a word/term has a different meaning for different users, a Bounded Context should be created to be able to separate the domain models.

+

I'm facing the situation where two different terms are being used to describe the same concept and I'm having a hard time figuring out how to handle this situation. Should I hold a popular vote among the domain experts to pick the most widely used term? Should we discuss it further and come up with a third name that would satisfy everyone (not sure it's possible)? Any other suggestions?

+

Note that in the UI it's possible to dynamically display the preferred term for the current user(group). I'm mainly talking about how to call the concept in the model and source code.

+

Some context:

+

Organisation has two existing off the shelf applications. Application A calls a concept Foo and application B calls the same concept Bar. Both applications have API's and I'm creating an integration application that allows users to get and manipulate data from both applications.

+

Application A is the source, when a new Foo is created, my application will create a new thing. Application B also reacts to the creation of Foo and my application will use data from B as well. Various users will work with the thing, until the process is complete and the thing will dissapear. My application will be used by users who work with Application A or Application B.

+

Update with more concrete context:

+

The domain is about a distribution center.

+

Application A turns customer orders into shipments, a single shipment can contain multiple orders from the same customer and large orders are split into multiple shipments. A shipment in this context is basically the container that will be shipped to the customer.

+

Application B takes those shipments, but calls them distribution orders. This application controls various processes to make sure the products end up in the correct container.

+

My application will integrate with both contexts and will contain anti-corruption layers, but that still leaves me with the main question of how to call the concept in my application. I'm leaning towards just picking a term (in collaboration with the experts) and describing it in the Published Language.

+",320517,,320517,,1/15/2021 8:58,1/15/2021 8:58,How to name a domain concept when experts use different terms?,,1,8,,,,CC BY-SA 4.0 +421057,1,421072,,1/14/2021 23:05,,-2,566,"

What Every Computer Scientist Should Know About Floating-point Arithmetic is widely considered absolutely mandatory reading for every programmer. Why is this the case? What aspects of the article make it stand out as still being important to this day?

+

Upon my reading of it, I found that a lot of it was only concerned with mathematical proofs and memory-level implementations of floating-point arithmetic. Aside from the general points that floating-point arithmetic is neither precise nor associative - a pair of facts that could fit on a single page - I see little reason why the article is of significance to anyone who is programming in a language where memory management is largely not done by hand. Why would, say, a Java programmer care?
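
+

To be clear, the two general points I mean are easy to demonstrate in a couple of lines of Python:

+
0.1 + 0.2 == 0.3                         # False: results are not exact
+(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # False: addition is not associative
+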

+",373159,,,,,1/15/2021 11:29,Why is What Every Computer Scientist Should Know About Floating-point Arithmetic considered mandatory reading?,,5,11,1,1/18/2021 0:48,,CC BY-SA 4.0 +421060,1,,,1/15/2021 1:42,,4,145,"

Does a domain object have to be persisted, or does this violate some convention about domain objects?

+

For example, let's say I'm using an object called AuthenticationState to represent authentication state in the application. This object has a boolean field isLoggedIn. I want to force the user to re-login each time so when I start the application again, I just create a new instance of the model with isLoggedIn set to false, instead of attempting to load one from local device/browser storage.

+

I feel like I may be overthinking things, but every example I've found online always has its domain objects use some sort of persistence. Is using domain objects in this way still acceptable?

+",383133,,379622,,1/15/2021 1:46,1/15/2021 9:05,Do Domain Objects Have To Be Persisted?,,1,1,,,,CC BY-SA 4.0 +421071,1,421087,,1/15/2021 11:17,,0,85,"

Consider the following pseudo-code:

+
cont = 0
+for g = 1,...,m
+        for h = g,...,m
+                cont = cont + 1
+        end for
+end for
+
+

I'm searching for the explicit map that returns cont as a function of g and h. I've tried with

+

cont = m*(g - 1) + [h - (g - 1)]

+

but this formula works only in the case m = 2.

+
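
For checking candidate formulas I compare them against the loop itself with a small Python script (my own helper, not part of the problem):

+
def loop_mapping(m):
+    """Map (g, h) -> cont by literally running the triangular loop."""
+    mapping, cont = {}, 0
+    for g in range(1, m + 1):
+        for h in range(g, m + 1):
+            cont += 1
+            mapping[(g, h)] = cont
+    return mapping
+
+def candidate(m, g, h):
+    return m * (g - 1) + (h - (g - 1))   # the formula above
+
+m = 3
+print(all(candidate(m, g, h) == c for (g, h), c in loop_mapping(m).items()))  # False for m = 3
+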
+

Observation: for the following cycle

+
cont = 0
+for g = 1,...,m
+        for h = 1,...,m
+                cont = cont + 1
+        end for
+end for
+
+

the value of cont as a function of g and h is given by cont = m*(g - 1) + h

+",383157,,383157,,1/15/2021 14:40,1/15/2021 21:55,Explicit expression of a counter,,1,5,,,,CC BY-SA 4.0 +421073,1,421078,,1/15/2021 11:50,,3,374,"

A commonly repeated best practice is to not reuse local variables. However, when doing multiple small operations on the same variable, I struggle both with coming up with good names for all the variables, and I find that the multiple similar names hurt the readability.

+

Alternative 1: always create new variables

+
def clean_text(self, text: str) -> str:
+    text_without_double_quotes = self._replace_double_quotes_with_single_quotes(text)
+    text_without_double_quotes_and_foo = self._replace_foo_with_bar(text_without_double_quotes)
+    text_without_double_quotes_and_foo_with_formatted_html = self._format_html(text_without_double_quotes_and_foo)
+    return text_without_double_quotes_and_foo_with_formatted_html
+
+

Alternative 2: reuse the variable

+
def clean_text(self, text: str) -> str:
+    text = self._replace_double_quotes_with_single_quotes(text)
+    text = self._replace_foo_with_bar(text)
+    text = self._format_html(text)
+    return text
+
+

Alternative 1 does have the advantage that I can set a breakpoint at the end of the method and inspect how the text was transformed in each step, but I am not sure if the tradeoff in readability is worth it or not.

+",132835,,,,,1/16/2021 9:13,"When doing multiple operations on a variable, is it considered bad practice to reuse the same variable name?",,3,3,,,,CC BY-SA 4.0 +421075,1,421211,,1/15/2021 11:55,,2,193,"

Sometimes there are functions that return complicated data and cannot be divided any further, e.g. in the area of signal processing or when reading and decoding a bytestring into another format. How am I supposed to create stubs (e.g. the bytestring) so that I can assert equality of the expected data with the returned data, without getting into trouble with complicated stub generation?

+

In the following example I want to test two functions. One writes my_package-objects to disk and the other reads them from disk into an object dictionary. The open-dependency is mocked away. How can I define the stub_binary()-function?

+
def read(filename):
+    """Read any compatible my_package objects from disk."""
+    with open(filename, 'rb') as f:
+        return _decode(f.read())
+
+
+def write(filename, **objs):
+    """Write any compatible my_package objects to disk."""
+    with open(filename, 'wb') as f:
+        f.write(_encode(objs))
+
+
import my_package as mp
+
+@patch('my_package.open', new_callable=mock_open)
+def test_read(m_open):
+    # arrange
+    m_enter = m_open.return_value.__enter__.return_value
+    m_enter.read.side_effect = lambda: stub_binary()
+    # act
+    obj_dict = mp.read('./obj_A_and_B')
+    # assert
+    assert obj_dict == {'A': A(), 'B': B()}
+
+
+@patch('my_package.open', new_callable=mock_open)
+def test_write(m_open):
+    # arrange
+    m_enter = m_open.return_value.__enter__.return_value
+    # act
+    mp.write('./obj_A_and_B', A=A(), B=B())
+    # assert
+    m_enter.write.assert_called_with(stub_binary())
+
+
    +
  1. Stub_binary could return a hard-coded bytestring, but that gets easily messy:
  2. +
+
def stub_binary():
+    return (
+        b'\x93NUMPY\x01\x00v\x00{"descr": "<f8", "fortran_order": False, '
+        b'"shape": (1, 3), }                                             '
+        b'             \n\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00'
+        b'\x00\x00@\x00\x00\x00\x00\x00\x00\x08@')
+
+
    +
  1. Or reading the above byte string from a file that was generated with mp.write:
  2. +
+
def stub_binary(): 
+    with open(gen_data, 'rb') as f:
+        return f.read()
+
+
    +
  1. Or just replace stub_binary() with the following:
  2. +
+
def stub_binary():
+    # assumes mp.write could be pointed at a file-like object; with the current
+    # signature it takes a filename, so this is only a rough idea
+    buffer = io.BytesIO()
+    mp.write(buffer, A=A(), B=B())
+    return buffer.getvalue()
+
+

I am tempted to create the data with my production code mp.write(). But this doesn't seem right to me. How would you approach this?

+",319368,,319368,,1/15/2021 17:21,1/18/2021 15:38,How can I avoid chasing my own tail when testing against complicated return values?,,3,8,1,,,CC BY-SA 4.0 +421076,1,421080,,1/15/2021 12:21,,0,68,"

I am not sure if what I'm asking is even possible (or desirable, for that matter), but we were wondering what would be the best way to handle SQL changes to a Database schema, when this schema is shared across multiple apps/teams.

+

We work for a large corporation where database schemas are shared between different software applications (and therefore between independent, unrelated development teams that don't communicate). This means that certain changes to the schema (new views, procedures, constraints, tables, etc.) can come from a variety of apps. Having version control of the DB objects is ideal, but depending on the strategy:

+
    +
  • either a DB snapshot has to be composed from the changes produced by different apps (who knows which?), by traversing all possible apps that touch that schema,
  • +
  • or to see the latest changes in a specific application's version, two (or more) repositories have to be checked (in addition, there is no hard-link between commits, releases, etc.).
  • +
+

i.e: Attempting to version control the schema objects (Oracle) puts us in the dilemma of how to do so:

+
    +
  • Do we store the changes to the different schema objects inside the application's repository (thus distributing the schema snapshot between an undefined nº of repos)?
  • +
  • Or do we have separate repositories for the DB schemas, and make two (or more) commits to different repositories when a new version is uploaded (thus difficulting the compilation of a new release changeset)?
  • +
+

I was wondering if it was possible to specify in Git:

+

Whenever you make a push on the Application repo, send the files in the /SchemaA SQL/ folder to SchemaA repository, the files in /Schema B/ folder to the Schema B repository, and finally the rest to the Application repository. Thus, distributing the contents of the commit between repos in a single operation. Maybe .gitattributes? If using Github/Gitlab, maybe through a webhook on the remote?

+",383163,,,,,1/15/2021 13:59,Distribute commit files between different repositories,,1,0,,,,CC BY-SA 4.0 +421079,1,421083,,1/15/2021 13:44,,32,6012,"

I am developing code mainly using Bash, C, Python and Fortran and recently also HTML/CSS+JavaScript. My OS is Ubuntu.

+

Maybe I am exaggerating, but I figured that I kind of spend more time getting software (Debian and Python packages mainly, sometimes also from source) to be installed properly than actually developing code. And I am not talking about coding vs. debugging, debugging is part of coding for me.

+

It happens so often to me that I update my Linux packages and then my Python packages and my software does not work anymore, because some .so files have another name now and Python does not find them anymore. Or I set up a totally clean Ubuntu VM, install a package with pip and get two screens of error messages, because some Debian package was not installed. I am not a system administrator; I enjoy developing software. But this just annoys me. I do not want to inform myself about all the 157 Python packages and thousands of Debian packages I have on my system and know what their dependencies are. I want to write code and implement new functionality in my code.

+

What am I doing wrong?

+",383168,,,,,8/27/2021 4:30,I am spending more time installing software than coding. Why?,,4,7,1,,,CC BY-SA 4.0 +421081,1,,,1/15/2021 14:05,,-1,89,"

Most of the answers I see that discuss what the model layer is comprised of only address stateless MVC, particularly ASP.NET's implementation of it. When working with desktop MVC frameworks such as Cocoa, is application state considered part of the "Model" layer?

+",383133,,,,,2/15/2021 20:06,Is State Considered Part of Model In Desktop MVC?,,1,2,,,,CC BY-SA 4.0 +421082,1,,,1/15/2021 14:23,,0,54,"

Introduction

+

A customer of ours has embedded products with sensors and actuators. Now they would like to connect this device to the cloud so they can remotely monitor and configure it. It should support:

+
    +
  1. Periodic data updates (LwM2M read or notify? Depends on how we implement it)
  2. +
  3. Alerts (e.g. data above threshold) (LwM2M notifies)
  4. +
  5. Configuration updates (LwM2M writes)
  6. +
  7. LwM2M Execute triggers
  8. +
  9. Optionally Cloud-to-Device data requests (that gets the most recent reading)
  10. +
+

They want us to provide them with a simple module to accomplish this.

+

Our company is already using LwM2M quite extensively. More specifically, we have devices running Zephyr RTOS that use the built-in LwM2M engine. This engine requires to know the layout of the OMA/IPSO objects (which resources, what are their properties, etc.). Also, we register data pointers to the resources, and register read callbacks so that a read from the cloud triggers an update of those data before responding. Lastly, we also register write callbacks for e.g. configuration settings, so that they can trigger the required actions.

+

The LwM2M object .c files use an Observer pattern to observe the sensors/status/data in the appropriate other software modules, and the write callbacks call "SetConfig" type of functions directly at the target modules. This has led to quite tight coupling.

+

Now I'm investigating how to best integrate this functionality in an easy-to-use and generic module

+

I think the first step is to get rid of the tight coupling I described above, and thus maybe move all the Observers to a single module that handles all communication between the LwM2M objects/engine and the rest of the system? (Mediator / Facade pattern?)

+

I made this analysis of what that module would need to support the wished functionalities:

+
    +
  1. Periodic data updates -> An update interval must be set per resource. Alternatively, the LwM2M server could set observe attributes pmin, pmax to trigger periodic notifies and thus periodic reads, these reads would then trigger read callbacks that update the info before responding to the server.
  2. +
  3. Data threshold alerts -> Alert trigger that trigger a notify to the server.
  4. +
  5. Configuration updates -> Write callbacks that take appropriate action (e.g. configure interrupt threshold).
  6. +
  7. LwM2M Execute triggers -> Execute callback that takes appropriate action (e.g. reboot device).
  8. +
  9. Optionally Cloud-triggered data updates -> Read callbacks that update the info before responding to the server.
  10. +
+

What I find difficult

+
    +
  1. As you can see, functionalities 2 and 5 are sort of opposites (push vs. pull), but functionality 1 can be implemented in one of two ways, either push or pull. This is part of what I find challenging: there seems to be no generic solution for all three.
  2. +
  3. If we were to choose server-triggered periodic updates (via the observe attributes), I see two problems: if the customer requires that no periodic sensor data is lost on network failure, this can't work. Also, I think all resources in an object with different settings would require separate observes?
  4. +
  5. The other thing I find challenging is the module interface design. From the above ideas, it would require aliases for each resource to abstract away the LwM2M internal object/resource structure, and the following data per alias:
      +
    • Update interval (if the device is the one with the update schedule)
    • +
    • Alert trigger
    • +
    • Read callback
    • +
    • Write callback
    • +
    • Execute callback
    • +
    +
  6. +
  7. If going with the idea from challenge 3: this is quite an extensive list, and if the embedded firmware changes at the LwM2M module boundary, or if the LwM2M cloud API changes, this would require our intervention, as we would have to update the internal resource mapping.
  8. +
+

I'm very curious for your opinions, remarks and ideas on this! I would very much appreciate your advice.

+",350205,,350205,,1/15/2021 14:36,1/15/2021 14:36,How to best design this communication module/library?,,0,8,,,,CC BY-SA 4.0 +421084,1,,,1/15/2021 15:24,,2,125,"

I'm developing an app with a user management system. There is a database table named user with the following columns:

+
| Column Name     | Column Type |
+|-----------------|-------------|
+| userId          | BIGINT      |
+| email           | TEXT        |
+| firstName       | TEXT        |
+| lastName        | TEXT        |
+| passwordDigest  | TEXT        |
+| birthday        | DATE        |
+| address         | TEXT        |
+
+

The most straightforward design is to create a Hibernate entity called User:

+
@Entity
+@Table(name = "user")
+public class User {
+    @Id
+    private long userId;
+    private String email;
+    private String firstName;
+    private String lastName;
+    private String passwordDigest;
+    private LocalDate birthday;
+    private String address;
+}
+
+

However, the general user info (i.e. email, first name, last name, birthday, address) and password are getting updated in separate pages in my app. Specifically, there is a "Edit Your General Info" page for users to update their general info, and there is a separate "Update Your Password" page for users to update their passwords. Just like lots of other apps.

+

Therefore, if I'm using the User entity above, the code for each pages will be:

+
void updateUserGeneralInfo (User newUser) {
+    User oldUser = userDao.getExistingUser(newUser.getUserId());
+    newUser.setPasswordDigest(oldUser.getPasswordDigest());
+    userDao.updateUser(newUser);
+}
+
+void updateUserPassword (long userId, String newPassword) {
+    User oldUser = userDao.getExistingUser(userId);
+    oldUser.setPasswordDigest(calculateDigest(newPassword)); // auto save to DB by Hibernate
+}
+
+

The problem lies in updateUserGeneralInfo(). Since newUser is passed from GUI, it doesn't contain a passwordDigest. Therefore, directly calling updateUser(newUser) would wipe out the user's passwordDigest in the DB. To avoid that, it's necessary to retrieve the user's existing entity just to fill in the password digest, so that updateUser(newUser) won't affect the user's passwordDigest. This is kind of clunky and hard to maintain.

+

To solve the problem, I'm thinking about creating 2 Hibernate entities that maps to the same database table user:

+
@Entity
+@Table(name = "user")
+public class UserGeneralInfo {
+    @Id
+    private long userId;
+    private String email;
+    private String firstName;
+    private String lastName;
+    private LocalDate birthday;
+    private String address;
+}
+
+@Entity
+@Table(name = "user")
+public class UserCredential {
+    @Id
+    private long userId;
+    private String passwordDigest;
+}
+
+

The code for the above-mentioned pages will then be:

+
void updateUserGeneralInfo (UserGeneralInfo newUserGeneralInfo) {
+    userDao.updateUserGeneralInfo(newUserGeneralInfo);
+}
+
+void updateUserPassword (long userId, String newPassword) {
+    UserCredential newUserCredential = new UserCredential(userId, calculateDigest(newPassword));
+    userDao.updateUserCredential(newUserCredential);
+}
+
+

With this new design, the code is much clearer because the responsibility for each method is isolated. updateUserGeneralInfo() won't worry about wiping out user's passwordDigest anymore, and updateUserPassword() won't touch any part of the user table other than userId and passwordDigest.

+

My question is: is the new design (i.e. separating UserGeneralInfo and UserCredential rather than a single User) a really good design? Is there any disadvantage that I'm not aware of? Furthermore, is it a common design pattern for generic user management systems? Thanks!

+",336452,,336452,,1/18/2021 8:24,2/12/2022 19:03,Is it a good design to have separate Hibernate entities for general user info and user password digest?,,1,1,,,,CC BY-SA 4.0 +421088,1,,,1/15/2021 15:42,,-1,100,"

In Visual Studio, the ASP.NET MVC project template is designed for the MVC pattern, but what about the ASP.NET Web API project template?
+I know that we can create an API from an MVC project, and we can also build an MVC app from a Web API project.

+

But what if I use a Web API project and return only data, not views? What is the design pattern behind it? It is not considered MVC anymore, because it doesn't have a View. Is it just an N-tier architecture?

+",383180,,,,,1/15/2021 17:33,ASP.NET Web API - what is the design pattern?,,1,3,,,,CC BY-SA 4.0 +421093,1,,,1/15/2021 17:16,,5,427,"

I work in web development and the team I work on is growing so I see the need to hammer out a more formal Git workflow. We have a process down that is working for now but it's starting to cause more problems than it solves.

+

Currently, we work off of a dev and master branch. The dev branch is checked out to our dev environment, master to our test environment, and our prod environment is checked out to a tag from master.

+

I've been reading up on Gitflow and a common theme is checking out feature branches from the dev branch.

+

Our team tends to have a number of irons in the fire at one time, each with different timelines and can't necessarily follow a regular release schedule at this time. If Developer A checked out a new feature branch from dev, made a quick fix, received approval, and pushed up to master, then deployed to prod there's a good chance they'd unintentionally deploy code from Developer B that was being reviewed on the dev environment.

+

To avoid this, our general practice has been to create feature branches off of master as it's always reflective of what's on production. Then, merge them into dev for review, then up to master once approved.

+

A frequent thorn in my side is that over time the dev environment gets "messy" due to its sandbox nature and will occasionally have abandoned features or tests that are left there and not necessarily cleaned up.

+

How can we improve this process? Is there a better workflow or process we should look into?

+",219887,,,,,1/15/2021 21:24,Confused on Git workflow and role of dev environment/branch,,1,0,,,,CC BY-SA 4.0 +421096,1,,,1/15/2021 17:38,,2,54,"

A Makefile is a representation of a dependency graph. The files are the vertices; for example somefunctions.h, somefunctions.c and myprogram.c are the "input" vertices (is there a formal word for this?) and somefunctions.o and myprogram are the "output" vertices (again, correct word?).

+

Then the edges of that graph are somehow related to invocation of the compiler and linker in this example. But not one-to-one, since myprogram would have three edges connected to it (somefunctions.o, somefunctions.h and myprogram.c), but only one call to the compiler to create myprogram. What would be the correct way to describe that relation?
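
+

To make the question concrete, the relation I am trying to name looks like this when I write the Makefile out as plain data (a rough Python sketch of my mental model):

+
# one recipe (compiler/linker invocation) per output vertex,
+# but several incoming edges from its prerequisites
+rules = {
+    "somefunctions.o": (["somefunctions.c", "somefunctions.h"],
+                        "cc -c somefunctions.c"),
+    "myprogram": (["myprogram.c", "somefunctions.o", "somefunctions.h"],
+                  "cc -o myprogram myprogram.c somefunctions.o"),
+}
+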

+",252349,,209774,,1/15/2021 20:07,1/15/2021 20:07,What is the relation between edges in a dependency graph and the program call to create a vertex?,,1,0,,,,CC BY-SA 4.0 +421101,1,,,1/15/2021 18:31,,0,87,"

I have a situation as follows: I want to get the relative path of a directory. The directory structure is as follows:

+

Windows Folder Structure

+

C:\FileFolder\LowerLevel\ThirdLevel\script.py

+

C:\FileFolder\FolderOfInterest\filesStuff.txt

+

Linux Structure

+

PathAbove/LowerLevel/ThirdLevel/script.py

+

PathAbove/FileFolder/FolderOfInterest/filesStuff.txt

+
+

Top Level

+
args = parse_args(sys.argv[1:])
+main(args.filepath, args.repository)
+
+

Function Definitions

+
def main(filepath, repopath):
+    # do stuff with filepath, repopath
+    do_stuff()
+
+
+def do_stuff():
+    path_to_repo = rel_path()
+    # use path_to_repo
+
+
+def rel_path():
+    """
+    Gets the relative path two directory levels up where the FileFolder folder lives
+    """
+    return os.path.abspath(os.path.join(os.path.dirname(__file__), '../../', 'FileFolder/FolderOfInterest'))
+
+

I have been asked to make this more general so I don't have to rely on the FileFolder name being 'FileFolder' in case somebody has it named differently.

+

I pass in the path directly to repopath at the start, so I could use that since it's validated before use. My usual solution is this

+

My Usual Solution

+

Pass repository_path from the top level.

+
import os
+
+def main(filepath, repository_path):
+    do_stuff(repository_path)
+
+def do_stuff(repository_path):
+    rel_path(repository_path)
+
+def rel_path(repository_path):
+    """
+    Gets the path to FolderOfInterest relative to the repository path passed down from the top level
+    """
+    return os.path.abspath(os.path.join(os.path.dirname(repository_path), 'FolderOfInterest'))
+
+

It comes up often that I need to pass in information from a higher level function to a lower level function, usually after I realize that info is needed and I want to refactor something to use it for whatever reason. This requires adding an extra argument to multiple functions and changing functionality slightly. There is actually a third function in-between this in my real code, but this illustrates the issue.

+

Here is my question

+

My question is this: is there a best practice for passing information to a lower-level function that is used only inside another function and isn't called by the main function directly? Or am I way off base here? Is there an easier way to get the relative path of FolderOfInterest that I'm after? I have historically programmed in procedural languages and this has come up plenty of times, but it also comes up in the OOP programming I've done.

+

This comes up often enough that I thought it was worth asking here. How do I pass information around without requiring rework of multiple parts of my code when that information is known at a higher level than where it needs to be used? I'm trying to make this as agnostic as possible, using relative paths and function arguments, so it's easier for others to use/modify later, and so people using it don't have to make changes depending on whether they run it on Linux or Windows.

+

I hope I've written this somewhat clearly and that this is a useful enough question to be here. I originally had this on Stack Overflow, but since it's more about best practices in software engineering, I put it here.

+

I've marked this as python because that's what I'm writing in, but this comes up just as much in c, matlab, c++, and other languages I've written in, so I'm not necessarily tied to the answer being specific to python syntax.

+",383187,,1204,,1/15/2021 18:43,1/15/2021 23:43,What is best practice for getting a variable passed into a function several layers deep in a local function call?,,2,0,,,,CC BY-SA 4.0 +421104,1,,,1/15/2021 19:25,,5,171,"

I work on small to medium, database driven, line of business applications. What I usually find, especially in older systems, is most of the data lives in the database. By data I mean stuff like a CountryTable, a ZipCodeTable, lots of/one big table for dropdowns.

+

Recently I inherited a project in which the client has a complicated nested hierarchical data structure of modules, categories, submodules, settings, configurations etc. But the data is completely static. We just need it so we can ... put everything where it belongs. There are now like 7 complicated ef core entities with relationships. The former team and the client made up an entire mini markup language and wrote a parser for it.

+

This is static(ish), essential data. It is used to describe the system.

+

If somewhere in my system something has a priority level, I need to describe that somewhere. I need to say "low, medium and high exist". The applications I'm used to seeing do that in the database.

+

In general I am drifting more and more away from a focus on the database. I would simply define it in code. Why have a PriorityEntity and a PrioritiesTable which needs to get seeded to only ever end up saying "please give me a list of all the priorities"? I want to just define the list in code.

+
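
Concretely, by "define it in code" I mean something as plain as this (sketched in Python just to keep it short; the shape would be the same in C#):

+

from enum import Enum
+
+class Priority(Enum):
+    LOW = 1
+    MEDIUM = 2
+    HIGH = 3
+
+# "give me a list of all the priorities" without a table or any seeding
+all_priorities = list(Priority)
+
+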

For more complex static data (like my modules) you're probably gonna end up with additional types and mapping code, because of the object relational mismatch.

+

"But what if it does change? U never know, zip codes vanish all the time"

+

People have little problem describing something like their menu structure in either markup or in code. If there is need for change a developer can just adjust the code and push an update.

+

I'm arguing with myself, but I actually do have a question. Are there any reasons against it I'm not thinking of? Is there a name for what I describe? I feel like there should be.

+",383188,,,,,1/16/2021 16:48,Static data - database or code,,4,1,,,,CC BY-SA 4.0 +421108,1,,,1/15/2021 19:43,,0,167,"

So I just started studying Software Engineering because I am really interested in it and my professor in London asked us to create an app which is like Instagram (only theoretically, without the actual implementation of code) and I would like some help on something.

+

I have started with writing and studying different parts, like risk analysis and everything and now I just created a class diagram with the classes that are in my opinion essential for the project. Like content, photos, users, display, etc. What I really haven't understood is that we needed to draw the class diagram based on the problem domain of my case.

+

Maybe it's because of my lower English level, but problem sounds literal to me and while some websites mention that the problem domain relates to the risks and the problems of your project, others say that the problem domain is whatever is essential for the creation of the app.

+

Can someone please elaborate on that? You will really help me get deeper into the project because it's very confusing and they haven't explained it so much at uni yet..

+

Thanks :)

+",,user383189,,,,1/15/2021 19:43,Could you help me understand what a problem domain is and how can I build my class diagram based on it?,,0,10,,,,CC BY-SA 4.0 +421112,1,421122,,1/15/2021 21:26,,1,294,"

I haven't been able to find a definitive answer online, so I'm hoping that someone with experience can help answer this.

+

Many MVC tutorials I find online end up using the MVC architectural pattern as the architecture for the entire application. But, I have read conflicting statements from those on this site and other sites who say that MVC is just an architecture for the presentation layer of a layered architecture.

+

At this point I'm leaning towards the idea that it is perfectly valid as a pattern for an entire application, especially because it seems like overkill to have to design 3 separate layers for a small to medium sized application.

+

Which is it? Is MVC a perfectly good architectural pattern for an entire application, or is it just meant to be used as the presentation layer of a layered architecture?

+",383133,,,,,1/16/2021 8:38,Difference Between MVC and MVC + 3 Layered Architecture?,,2,0,,,,CC BY-SA 4.0 +421128,1,421349,,1/16/2021 13:54,,1,173,"

I have two audio clips:

+
    +
  1. Source of truth
  2. +
  3. Recording of user
  4. +
+

I want to compare the two, testing if they are similar enough, removing accents, etc. Any idea how I could do this on Android?

+

To add more detail, I want to record the user reciting some Arabic and then compare it to the correct pronunciation. The idea would be to test their pronunciation and give them feedback on where they need to improve. I'm thinking of doing this offline (vs online) for faster response times to the user.

+",8679,,,,,1/21/2021 5:23,Offline audio comparison for Android,,1,7,,,,CC BY-SA 4.0 +421129,1,,,1/16/2021 14:28,,0,47,"

I want to implement a 2-3 tree in the C# language, where every leaf has a unique key. The keys are from a given class that implements the IComparable interface (the specific class is unknown, and it should work for every class which implements the interface).

+

I want to use two sentinel nodes to represent plus and minus infinity, but I can't figure out the best way to do it that won't be too complex.

+

I thought about creating a class which will implement the IComparable interface, and use it as two leaves in the tree, but I am not sure if that's the best thing to do or not.

+",340359,,,,,1/16/2021 14:28,Sentinel nodes in a 2-3 tree,,0,7,,,,CC BY-SA 4.0 +421132,1,,,1/16/2021 15:45,,0,1234,"

I am given the following system description :

+
+

Consider a hotel management system to manage a group of 5-star hotels. If this system is modelled using OOP methodology, and the classes are identified as Hotel, Trip, Room, VIP Room, Regular Room, Suite, Customer, Payment, Cash_Payment, Credit_Debit_Payment, Reservation.

+
+

Now I need to draw its UML class diagram; here is my solution:

+

I have related the Customer class indirectly to Hotel through the Reservation class, which is related to the Room class and the Trip class, and both of those are related to the Hotel class. Is this a valid solution? Also, did I miss any composition or aggregation relationships? I am really confused about the relationship between the Reservation and Payment classes: we can say that a payment is part of a reservation, so would it be right to make this a composition? Likewise, is the relationship between Reservation and Room a 'part of' relationship, given that if we delete the Room class we should also delete the Reservation class? Finally, are these multiplicities right? I am just working through some class diagram problems to understand them, so please help me.

+",331812,,331812,,1/17/2021 8:20,1/17/2021 8:20,Class diagram of an Hotel Management System,,0,1,,,,CC BY-SA 4.0 +421134,1,421141,,1/16/2021 16:20,,6,567,"

Context of the problem:

+
    +
  1. I have made a chess GUI (Java)
  2. +
  3. The GUI is capable of loading chess puzzles/problems to solve
  4. +
  5. Of said puzzles, I have gotten my hands on a database which is just shy of a million entries
  6. +
+

The problem/question: How does one most efficiently go about getting a random puzzle from this database?

+

Obviously keeping the database in memory is not an option, despite it already being compressed. I stripped all the data that isn't needed from it and in the end converted it into byte arrays, all to no avail. The full database always ends up taking somewhere between 100 and 200 MB of memory. A tenth of that would be acceptable (for my purposes).

+

Even worse: when processing the entire database (in an attempt to keep it all in memory), the file->memory process took upwards of 700 MB of memory.

+

Let's say I have the following:

+
    +
  • The database as either a txt or csv file
  • +
  • The amount of lines in said file (which is equal to the amount of puzzles)
  • +
+

With that, am I capable, in some way, of grabbing either a random or a specific puzzle from the file (albeit asynchronously; that doesn't matter) without having to load the entire database into memory?

+

/edit:

+

Some additional context: The chess GUI I have created runs in a Bukkit/Spigot adaptation of the Minecraft server software. This means that players are able to interact with 3D chess boards and start/play chess games to their hearts' content.

+

The puzzles come in as an additional feature that's supposed to give players the ability to practice finding the best moves.

+

The amount of memory consumed in the process I originally described is a problem, because I intend to make this chess game available to all Minecraft servers that want it - and each server will have an unknown amount of RAM attached to it. It may be that they're running on a low total of 1-2 GB, which isn't too uncommon, or that they're equipped with 16-32 GB.

+

Of course, I could just attach an instructional "help" file that explains that in order to launch this chess game without causing OutOfMemory exceptions, they need to have a certain amount of RAM at boot time, but that just seems like lazy & bad practice; "In order to start this plugin you must have ~1GB spare RAM at boot time, but after that you don't need it anymore".

+

As for the randomness, it really just needs to be a (targeted) random line from the database. The puzzles come with thematic tags attached to them, but I have generated additional TXT files that sort these puzzles by theme. For instance:

+
    +
  • Player requests a random mate-in-2-theme puzzle
  • +
  • File "MATE_IN_2.txt" contains all puzzle line numbers that are mate-in-2 puzzles and returns one of those lines randomly
  • +
  • The puzzle is retrieved from the txt file that contains all the puzzle data via the line number
  • +
  • Mind you, puzzles may have multiple themes, so sorting them like this is necessary
  • +
+

/edit: Solution. The marked reply is the way to go. Here's how I solved it:

+
int length = line.getBytes().length;
+System.out.println("Line offset/length: " + byteOffset + "/" + length);
+indices.add(byteOffset);
+byteOffset += length;
+byteOffset++;
+
+

^ This code is run while initially iterating through the database. "indices" may be a collection or list. "byteOffset" is initialized with "0", because the first line starts at 0.

+
randomAccessFile.seek(offset);
+StringBuilder stringBuilder = new StringBuilder();
+while (true) {
+    int b = randomAccessFile.read();
+    if (b == -1) break; // end of file reached
+    char c = (char) ((byte) b);
+    if (c == '\n') {
+        break;
+    } else {
+        stringBuilder.append(c);
+    }
+}
+System.out.println("Line at offset " + offset + ": \"" + stringBuilder.toString() + "\"");
+
+

^ This retrieves the line using RandomAccessFile, "offset" being a value from the prior "indices"

+

final edit: For those stumbling over this in the future: I benchmarked this and can confirm that reading through files like this is extremely RAM friendly.

+",383060,,383060,,1/17/2021 16:59,1/17/2021 16:59,How do I efficiently read random lines from a TXT or CSV file?,,2,5,1,,,CC BY-SA 4.0 +421143,1,421149,,1/17/2021 2:08,,0,52,"

I have a project on Azure DevOps that uses an appconfig.

+

The appconfig holds sensitive data like usernames and passwords and is committed empty to the repo.

+

I have to deploy the project on two different environments that need two different appconfigs to run.

+

What would be the best practice in this case?
I'm conflicted between saving the appconfigs for the different environments and selecting the appropriate one at deploy time, versus not wanting to store that sensitive data anywhere at all.

+",383249,,,,,1/17/2021 9:19,Deploying appconfigs to different environments,,1,0,1,,,CC BY-SA 4.0 +421145,1,421210,,1/17/2021 7:03,,0,89,"

I want to set up my objects to be composed of components that can be added and removed, so I have more flexibility in how I set them up.

+

A simple example would be some object that can have components attached such as:

+
MoveComponent //lets the object move around
+HealthComponent //allows the object to take damage and be destroyed
+
+

So say I have some object like Player and I want to inflict damage on it: how do I first check that it can take damage (by checking it has a HealthComponent) and then call a method on the HealthComponent to apply the damage?

+

My first thought was to have each component register with the Player object and store them in a HashSet<IComponent>, but if I use an interface purely for the polymorphic benefit of adding them to a collection, there's not much else they have in common, and I would have to loop through the hash set every time to check whether, for example, the HealthComponent exists and, if it does, cast it from the interface to the required type and call the Damage() function.

+

Such as:

+
public bool CanTakeDamage()
+{
+   foreach(var component in components)
+   {
+      if(component is HealthComponent) return true;
+   }
+   return false; // no HealthComponent registered
+}
+public void ApplyDamage(float damageValue)
+{
+   if(!CanTakeDamage()) return;
+   foreach(var component in components)
+   {
+      if(component is HealthComponent)
+      {
+         ((HealthComponent)component).ApplyDamage(damageValue);
+         return;
+      }
+   }
+}
+
+

This does not seem like a smart approach to me, lots of loops every single time, as well as type checking and casting. Is this actually how people do component based systems to allow for a more decoupled setup?

+

Or is there a smarter way more streamlined way to do this?

+",303640,,,,,2/17/2021 16:08,How do you structure components to objects so they are more decoupled and non dependant?,,3,4,,,,CC BY-SA 4.0 +421146,1,421147,,1/17/2021 7:43,,7,127,"

I'm designing an e-commerce application. The main flow is pretty straightforward: the customer adds items to a basket, checks out the basket (places an order) and waits for delivery.

+

There are the following requirements:

+
    +
  • basket is kept on backend
  • +
  • system should handle +5M baskets/orders per day
  • +
+

I know there is no single answer, but I'm looking for some comments/inspiration on how to design the basket & order module(s). I see the following options:

+
    +
  1. basket and order are separate services; basket sends order the ID of the basket to check out, then order calls basket for the basket details
  2. +
+

1A) API call based communication

+

1B) async message based communication

+
    +
  2. basket and order are separate services, and basket sends order the full basket details
  2. +
+

2A) API call based communication

+

2B) async message based communication

+
    +
  3. basket and order are the same service, and a checked-out basket becomes an order (it is the same entity, just presented to the user as a basket or an order, depending on its state)
  2. +
+",343027,,345158,,1/17/2021 13:40,1/23/2021 17:24,Architecting basket in high scale distributed application,,1,1,1,,,CC BY-SA 4.0 +421152,1,,,1/17/2021 10:11,,0,122,"

I've heard often that Subtyping breaks some important and useful properties: many nice innovations developed by pure programming language researchers can't be brought to Java or C++ because of subtyping. They say that the language Rust avoided Subtyping for this reason.

+

Is such a claim correct?

+

What are some cool things that cannot be applied to languages with subtyping?

+

Is any language offering Subtyping completely cursed and incompatible with a lot of cool features? Or only the pieces of code that use subtyping are incompatible?

+

Could you try to explain what it means to someone coming from C++ with little theoretical knowledge?

+

I searched for explanations and found:

+ +",373524,,,,,1/18/2021 2:34,What are the problems of subtyping?,,1,2,,,,CC BY-SA 4.0 +421155,1,421167,,1/17/2021 12:12,,3,91,"

I'm working on a multi-threaded program that interfaces with external USB/serial devices via user-space device drivers.

+

Early in the design stage, I made the decision to split the program into three components: A, B and C.

+
    +
  • Component A would possess full responsibility of communication with external devices (this is where the user-space device drivers would run). It would run on a dedicated thread.
  • +
  • Component B would serve an API off of a TCP/IP socket to third-party clients that needed access to the external devices. This component would also run on a dedicated thread.
  • +
  • Component C would provide a GUI for the user, allowing them to view and manipulate data from the external devices. Again, this would be on a dedicated thread.
  • +
+

So components B and C both require access to the external devices, which component A would provide.

+

I needed a way for the three components to interface with each other, and at the time I thought making the program event-driven would be appropriate. With this approach, components would emit an event and other components would handle those events. I always knew that some events would require responses, in the form of subsequent events. For example, if component C wanted to pull some data from the external device, to present to the user in the GUI, it would emit an event to request the data from component A, and component A would handle that event, and return the requested data in a subsequent event, which component C would be waiting for.

+

Since defining the above approach, I've realised that, in some cases, a component may require a specific response to a specific event. So building off of the example above, if component C and component B requested different data from component A, component C should be able to wait for the correct response event (that is, the response to the event emitted by component C).

+

So I'm considering implementing the ability for a component to wait for an event that is a direct response to the previously emitted event. I would do this via event IDs, where each event would carry an ID, and an optional response ID. The response ID would be the ID of the event that the current event is in response to.

+
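
To make that concrete, here is a rough sketch of the event shape I have in mind (written in Python for brevity; the topic names and payloads are made up):

+

import uuid
+from dataclasses import dataclass, field
+from typing import Any, Optional
+
+@dataclass
+class Event:
+    topic: str                         # e.g. "device.data.requested"
+    payload: Any = None
+    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)
+    response_to: Optional[str] = None  # ID of the event this one answers
+
+# Component C emits a request event...
+request = Event(topic="device.data.requested", payload={"channel": 3})
+
+# ...component A handles it and emits a response carrying the request's ID.
+response = Event(topic="device.data.provided", payload={"value": 42},
+                 response_to=request.event_id)
+
+# Component C only accepts the response whose response_to matches its request.
+assert response.response_to == request.event_id
+
+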

But I feel like I'm on the wrong path. Is this really an appropriate use of event-driven design? Is it OK for events to serve as requests for data, and subsequent events as responses? Would you do it differently? If so, how?

+

EDIT/UPDATE:

+

I've just come across this video by Mark Richards, which seems to describe my approach with event IDs and response IDs (which he calls "correlation IDs"). So maybe I'm not on the wrong path - he seems to think using events for request/response is fine. Would still appreciate your thoughts.

+",330584,,330584,,1/17/2021 12:34,1/17/2021 15:25,"When components of an event-driven program require specific responses to specific events, is event-driven still the correct approach?",,1,0,1,,,CC BY-SA 4.0 +421156,1,421158,,1/17/2021 12:33,,4,167,"

Consider animals as REST resources. A user has animals assigned to them.

+

The endpoint /api/animals/{animalId}/feed is used to feed a given animal by the authenticated user.

+

User should not be able to feed animals he does not own. What HTTP status code should be emitted in such a scenario?

+

400, 401, 403, 404, something else?

+
+

Also, should the situation where the passed animalId does not exist (e.g. 123456789) be distinguished from the situation where the animalId does not belong to the logged-in user?

+

I personally feel like I should return 404 in all cases.

+
+

This seems like a typical REST design situation, so I am wondering how experienced devs would solve it.

+",366489,,,,,1/17/2021 13:45,"Accessing Animal not belonging to User: 400, 401, 403, 404, other?",,1,5,1,,,CC BY-SA 4.0 +421171,1,,,1/17/2021 19:17,,0,41,"

Let's say that there is a simple sequential algorithm that performs a linear search of a unique key in an array:

+
public class SearchSeq {
+
+  public static int search(int[] a, int key) {
+    for (int i = 0; i < a.length; i++) {
+      if(a[i] == key)
+        return i;
+    }
+    return -1;
+ }
+
+
+public static void main(String[] args) {
+
+   int[] a = …; // an array with n elements
+   int key = …; // the key to be searched within a
+
+   long start = System.currentTimeMillis();
+
+   int pos = search(a, key);
+
+   long stop = System.currentTimeMillis();
+   long duration = stop - start;
+
+   if(pos >= 0)
+     System.out.print("found " + key + " at " + pos); 
+   else 
+     System.out.print(key + " not found"); 
+
+  System.out.println(" within " + duration + "ms");
+ }
+}
+
+

What will be the most fitting thread model in order to redesign the algorithm to run in parallel?

+
+

In my opinion the most fitting thread model would be Master/Worker, because in this way we would divide the array into segments and search each segment in parallel for the key. Smaller array segments -> faster results.

+
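
To illustrate the Master/Worker structure I have in mind, here is a minimal sketch (written in Python for brevity; my actual code is Java, and the worker count is arbitrary):

+

from concurrent.futures import ThreadPoolExecutor
+
+def search_segment(a, key, lo, hi):
+    # each worker scans only its own segment
+    for i in range(lo, hi):
+        if a[i] == key:
+            return i
+    return -1
+
+def parallel_search(a, key, workers=4):
+    chunk = max(1, (len(a) + workers - 1) // workers)
+    with ThreadPoolExecutor(max_workers=workers) as pool:
+        futures = [pool.submit(search_segment, a, key, lo, min(lo + chunk, len(a)))
+                   for lo in range(0, len(a), chunk)]
+    # the master collects the partial results
+    for f in futures:
+        pos = f.result()
+        if pos != -1:
+            return pos
+    return -1
+
+print(parallel_search([7, 3, 42, 19], 42))  # prints 2
+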
+

Edit 17/01/2021: The thread models that I have in mind are:

+
    +
  • Master/Worker
  • +
  • Producer-Consumer
  • +
  • Pipes & Filters
  • +
  • Peer to Peer
  • +
+

What do you think?

+",381747,,381747,,1/17/2021 21:46,1/17/2021 21:46,What is the most fitting thread model to redesign a sequential linear search algorithm?,,0,5,,,,CC BY-SA 4.0 +421173,1,,,1/17/2021 20:40,,0,35,"

I am trying to process sets of items (also, different sets have different items) in specific RabbitMQ consumers (one for each set of items) that would be created on-demand and are dispatched to the relevant instance (i.e., custom load-balancing depending on different heuristics).

+

To ensure that a set of items is processed exclusively by the same consumer, I know there is an exclusive flag. But putting that aside, I have no idea how to tackle this sort of architectural requirement.

+

I am wondering if there are already existing solutions for this kind of problem using RabbitMQ?

+

To some extent, I feel it all boils down to one single thing -- how can the right service instance say "hey, I'm the one with enough load (or matching whatever heuristic) to be the right exclusive consumer"? I feel I might need each instance to be capable of taking sneak peeks at all the queues available in RabbitMQ to decide what to process based on its own CPU. But that doesn't seem really possible.

+

Resources:

+ +

+",171752,,379622,,1/17/2021 22:49,1/18/2021 4:29,How to dispatch sets of items to the relevant (LB) exclusive consumers with RabbitMQ?,,1,0,,,,CC BY-SA 4.0 +421178,1,421179,,1/17/2021 23:01,,1,63,"

Sorry if this is a basic question, I'm studying for my operating systems class and compiler theory class at the same time and this is confusing me. From what I do understand, virtual memory is larger than RAM and the virtual memory of a process looks like this:

+
[stack][heap][uninitialized data segment][initialized data][text segment]
+
+

where the text segment basically contains the code that needs to be run. Anything that the CPU needs from the virtual address space will be loaded into RAM when needed.

+

And relocatable machine code is code that can be run from any address. Does this mean it can be pretty much anywhere in the virtual address space (as long as that address range is not already used by another section)?

+

Thanks

+",375680,,,,,1/18/2021 0:01,is the relocatable machine code essentially the text segment of the virtual address?,,1,0,,,,CC BY-SA 4.0 +421181,1,,,1/18/2021 3:10,,2,235,"

I am trying to write an algorithm to accurately calculate exponents (antilogs) for a variable precision floating point library I am working on. The base is not relevant since I can convert between them.

+

I was able to manually calculate log10() using repetitive application of x^10. This is a digit by digit calculation and requires 4 multiplies per digit. I can reverse the algorithm to calculate exp10(), but this requires repeated application of a 10th root. Calculating the 10th root is significantly more CPU costly than 10th power.

+

I searched the web and a lot of people suggested using a Taylor Series to calculate exp_e(). I did that and found that it requires about 2 iterations per digit for accurate results. Only two multiplies and one divide per iteration. This is still a bit steep in terms of CPU cycles especially when some FP numbers can be 100 digits long.

+

Now, I also found the algorithm that was used to calculate EXP in the old Sinclair ZX81. The author claimed that it was Chebyshev polynomials. I mention this because when I tested it, the algorithm was calculating accurately to one digit per iteration - much better than the Taylor Series.

+

I would use the algorithm as-is if it weren't for the fact that the floating point library has to be accurate to an arbitrary number of digits. The ZX81 EXP code is only accurate to 8 digits. There is no explanation as to how to extend the number of iterations to get more accuracy.

+

So does anyone know how to calculate EXP() using Chebyshev Polynomials? Can they be expanded like the Taylor Series for more accuracy? Anything better than either?

+

[Please no long math proofs. That's over my head. I just want the algorithms.]

+

UPDATE: The test results for the Taylor Series are as follows - LOG(25):

+
EXP10(1.397940) Round   1: = Taylor=19.16290731874155394000     
+EXP10(1.397940) Round   2: = Taylor=23.36085084533393060000     
+EXP10(1.397940) Round   3: = Taylor=24.64302976078316452000      
+EXP10(1.397940) Round   4: = Taylor=24.93674192499081185000     
+EXP10(1.397940) Round   5: = Taylor=24.99056707177124531000     
+EXP10(1.397940) Round   6: = Taylor=24.99878698562735818000     
+EXP10(1.397940) Round   7: = Taylor=24.99986296146780963000     
+EXP10(1.397940) Round   8: = Taylor=24.99998619980410040000     
+EXP10(1.397940) Round   9: = Taylor=24.99999874670913982000     
+EXP10(1.397940) Round  10: = Taylor=24.99999989637041996000     
+EXP10(1.397940) Round  11: = Taylor=24.99999999213623594000     
+EXP10(1.397940) Round  12: = Taylor=24.99999999944868008000     
+EXP10(1.397940) Round  13: = Taylor=24.99999999996408967000     
+EXP10(1.397940) Round  14: = Taylor=24.99999999999782289000     
+EXP10(1.397940) Round  15: = Taylor=24.99999999999988352000     
+EXP10(1.397940) Round  16: = Taylor=25.00000000000000153000     
+EXP10(1.397940) Round  17: = Taylor=25.00000000000000789000     
+EXP10(1.397940) Round  18: = Taylor=25.00000000000000821000     
+EXP10(1.397940) Round  19: = Taylor=25.00000000000000822000     
+
+

Taylor Series 16 digits of accuracy with long double and 19 iterations. Note that the optimal iterations in this example is 16 since those that follow are actually farther from the mark.

+
Sinclair EXP(1.397940)= 25.00000001907205060000 
+
+

Sinclair 9 digits of accuracy with double and 8 iterations

+

Here are the actual functions used: +Taylor Series (iterates until no change):

+
// Taylor series to figure exp10^x
+
+void TaylorEx(long double x)
+{
+    int i, j, intpart;
+    long double a, frac;
+    long double factorial;
+    long double power, inp, old, out;
+
+    // separate the int part from the frac part
+    intpart = (int) x;
+    frac = x - intpart;
+
+    // The Taylor series operates on base E.
+    // To convert base e to base 10, multiply input by ln(10)
+    inp = frac * 2.3025850929940456840179914546844;
+
+    factorial = 1;
+    power = inp;
+    a = 1;
+    for (i = 1; i < 50; i++)
+    {
+        factorial *= i;
+
+        old = a;
+        a += (power / factorial);
+        if (a == old) break;
+
+        // for display, add base 10 exponent to A
+        out = a;
+        if (intpart > 0)
+            for (j = 0; j < intpart; j++) out *= 10;
+        else if (intpart < 0)
+            for (j = 0; j > intpart; j--) out /= 10;
+
+        printf("EXP10(%Lf) Round %3d: = Taylor=%.20Lf\tFPU=%.20Lf\n", x, i, out, powl(10,x));
+
+        power *= inp;
+    }
+}
+
+

Sinclair (fixed at 8 iterations):

+
double Sinclair_Exp(double C)
+{
+    int N;
+    double T, D, Z, BERG, M0, M1, M2, I, U;
+    union {unsigned ui[2]; double f; } u;
+    double A[8] =
+    {
+        0.000000001,        // A1   1 / 1000000000.0
+        0.000000053,        // A2   1 / 18867924.528301886792452830188679
+        0.000001851,        // A3   1 / 540248.51431658562938951917882226
+        0.000053453,        // A4   1 / 18708.023871438459955474903185976
+        0.001235714,        // A5   1 / 809.24874202283052550994809478569
+        0.021446556,        // A6   1 / 46.627533110677537223225957584985
+        0.248762434,        // A7   1 / 4.0198995640957589279738274308733
+        1.456999875         // A8   1 / 0.68634185709864937359723520909705
+    };
+
+    // DEMONSTRATION FOR EXP X
+
+    //D = C * 1.4426950408889634073599246810019;    // Log2(e) - uncomment for input base E
+    D = C * 3.3219280948873623478703194294894;    // log2(10) - uncomment for inout base 10
+    N = (int) D;
+    Z = D - N;
+    Z = 2 * Z - 1;
+
+    // USE "SERIES CALCULATOR"
+    // SERIES CALCULATOR
+    // FIRST VALUE IN Z
+
+    M0 = 2 * Z;
+    M2 = 0;
+    T = 0;
+    for (I = 0; I < 8; I++)
+    {
+        M1 = M2;
+        U = T * M0 - M2 + A[I];
+        M2 = T;
+        T = U;
+    }
+    T = T - M1;
+    // LAST VALUE IN T
+
+    // get original exponent of T
+    u.f = T;
+    u.ui[1] >>= 20;         // shift out mantissa
+    u.ui[1] &= 0x7FF;       // mask off sign
+
+    // Add correction
+    N += u.ui[1];
+    if (N > 2048) printf("Exponent Overflow!\n");
+
+    if (N < 0.0) T = 0.0;
+    else
+    {
+        // modify exponent
+        u.f = T;
+        u.ui[1] &= 0x800FFFFF;  // clear old exponent
+        N <<= 20;               // shift new exponent into place
+        u.ui[1] |= N;           // replace exponent
+        T = u.f;
+    }
+
+    printf("Sinclair EXP(%lf)= %.20lf\tFPU EXP(%lf)=%.20lf\n", C, T, C, pow(10,C));
+
+
+    return(T);
+}
+
+

So am I stuck with the Taylor Series, or is there a way to extend the Sinclair Chebyshev algorithm to arbitrary (n-digit) precision?

+",264480,,264480,,1/18/2021 15:23,2/14/2021 1:07,How to use chebyshev polynomials to calculate exponents (antilogs)?,,2,3,,,,CC BY-SA 4.0 +421182,1,,,1/18/2021 4:00,,1,50,"

I need to run arbitrary code snippets in Python and Javascript on a server. It cannot be run in the browser.

+

I'm thinking of sandboxing the code in an AWS Lambda serveless function. However, I'm unsure of the best ways to disable networking (outside of the AWS returning a response over the network) and other potential threats.

+

How do sites like HackerRank sanitize user submitted code? Is there anything else I should think about?

+

I've seen answers such as this, but the top rated answers tend to be out of date.

+",383289,,,,,1/18/2021 4:00,Securing Arbitrary Code,,0,1,,,,CC BY-SA 4.0 +421183,1,,,1/18/2021 4:04,,0,425,"

I'm building a real time chat application like Whatsapp. I have a websocket server with node+express, but I'm a bit confused on which flow I should use.

+

I'm considering sending the image as binary data through the websocket to the server, processing it and storing it in AWS S3, and then sending the URL back to the user.

+

Another idea I have thought about is having an endpoint that accepts a PUT request, stores that image in S3, then looks up the specific chatroom id in MongoDB and sends the URL through the websocket.

+

Can someone point me to a better solution than what I currently have?

+",293058,,,,,10/20/2021 22:07,Better solution instead of sending an image as binary through websocket for real time chat app,,1,2,,,,CC BY-SA 4.0 +421185,1,,,1/18/2021 4:30,,0,439,"

Why can't you just use strongly consistent reads for all your DB reads, with retries on 500 responses? According to CAP theorem increasing consistency should probably lower availability, but can't the decreased availability (increased 500 responses) be handled fairly easily using retries? (assuming you are fine with a small percentage of queries taking a bit longer due to retries)

+
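
To be explicit about what I mean by "strongly consistent reads with retries", this is roughly the pattern I have in mind (boto3 sketch; the table name and key are hypothetical and the retry condition is deliberately naive):

+

import boto3
+from botocore.exceptions import ClientError
+
+table = boto3.resource("dynamodb").Table("orders")  # hypothetical table
+
+def strongly_consistent_get(key, attempts=3):
+    for attempt in range(attempts):
+        try:
+            return table.get_item(Key=key, ConsistentRead=True).get("Item")
+        except ClientError:
+            # naive retry: try again on any client/server error, give up after `attempts`
+            if attempt == attempts - 1:
+                raise
+
+item = strongly_consistent_get({"order_id": "123"})
+
+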

I'm using DynamoDB as an example, but this can be generalized to any noSQL cloud offering - it also seems like DDB with on-demand scaling will simply increase the read capacity units (RCU) used if you turn on strong consistency, incurring a higher cost ($) but keeping the same latency on db queries, so it seems like the only negative is higher cost. It seems like you can just keep vertically scaling the DB's processing power to meet your needs. Is it actually plausible that with a noSQL cloud database under a high traffic level, you cannot just throw enough money at it, and it could hit some scaling limit that makes strongly consistent reads slower?

+

And then, generalizing the question to distributed systems: what does it actually mean for a distributed system to be 'strongly consistent'? I've heard this used to describe systems before, but I don't actually know what it means beyond 'all DB interactions being strongly consistent.'

+
+

My second question might be more basic, but it's necessary for understanding the cost of providing consistency: why do consistent-read issues actually occur, i.e. why does stale data occur (in single queries like a read or a write, not transactions with multiple reads/writes per transaction)?

+

From what I understand, with decreasing probability any time after a write occurs, it's possible for one reader to read correct data, then a second reader to read stale data AFTER the first reader reads correct data (correct me if this isn't actually true, but my understanding was it is). Why does this happen? Doesn't a read just involve a read from some location on the disk?

+",383288,,,,,1/18/2021 23:06,Why is it considered hard to maintain strong consistency in a distributed system?,,2,0,1,,,CC BY-SA 4.0 +421186,1,,,1/18/2021 5:15,,28,5827,"

One of the most common things I see when discussing pros/cons of microservice vs monolithic architecture is that monolithic applications have, or always trend toward, 'tight coupling.'

+

To be honest, I'm not seeing why this is true if your developers know how to leverage any standard OO language. If I need a certain part of my application to handle, say, payments, and multiple other parts to interact with the payments system, can I not leverage the abstraction features of the language I'm using (classes, access control, interfaces) to create modularization between different application functions?

+

For example, if I'm using Java, I could create a 'PaymentsDAO' (data access object), or maybe 'PaymentsClient', which can expose functions that the rest of the code can use to interact with the payments database, etc. If one sub-team in my team wants to work on payments, they can continue to write code in the PaymentsDAO, publish that code to the central repo, etc, while I simply use the DAO's function signatures, which would not change, and continue to write code wherever I need it, right? Where's the coupling? If payments code changes, I don't need to change anything in my code, or understand the changes, to account for that.

+

Is the only drawback of this 'coupling' that I need to git pull more often, since the payments code would need to be in the same deployment as my change, as opposed to a separate deployment, and then consumed over the network through an API call?

+

To be honest, I'm not seeing a strong case for the 'tight coupling', and I want someone to change my view here because my current team at work is using a microservice architecture :D I'm more certain about the other pros of MSA, like scalability, flexibility of technology stacks across microservices, fault tolerance of a dist system, and less deployment complexity, but I'm still uncertain on coupling.

+",383288,,,,,3/11/2021 15:31,I'm not seeing 'tightly coupled code' as one of the drawbacks of a monolithic application architecture,,5,13,4,,,CC BY-SA 4.0 +421189,1,421192,,1/18/2021 6:59,,1,100,"

My question relates to semantic versioning (specifically as specified here).

+

Say I have some feature I introduced in version 1.10.0, and then some time later (let's say the project has advanced to version 1.50.0 for the sake of argument) I discovered a bug in the 1.10.0 version. My understanding is that we fix that, and the fix should then be version 1.10.1. So far so good. How does that look in practice, though? Do I check out version 1.10.0 again, do the fix there (and in the worst case potentially use internal (not public API) features that may have changed from 1.10.0 to 1.50.0) and then rebase my changes onto version 1.50.0? (This could cause a lot of merge conflicts while we replay master onto the latest commit.) Would cherry-picking perhaps be better here? (Though I have not used that feature myself, so I am not sure.) Or could I apply this bugfix to version 1.50.0 and then make that version 1.50.1, even though the bug fix may not have anything to do with the feature introduced in 1.50.0? The website mentioned above does not seem to have an answer to that ...

+

There are two additional scenarios I would like to consider, which may result in a different approach to the problem (not sure, but maybe someone could shed some light here, maybe the approach will be exactly as before though ...):

+
    +
  1. What happens if this happens instead between version 0.10.0 and 0.50.0? (i.e. we do not guarantee a stable API)?

    +
  2. +
  3. What happens if this happens instead between version 1.10.0 and, say, 2.0.0 or later (i.e. we guarantee a public API change)?

    +
  4. +
+",364273,,,,,1/18/2021 7:39,How do deal with bugs in semantic versioning introduced in old branches,,1,0,,,,CC BY-SA 4.0 +421191,1,421193,,1/18/2021 7:12,,11,1774,"

Below, I define an IInstantNotification Interface. TextNotification Class and EmailNotification Class inherit from this interface.

+
public interface IInstantNotification<T> {
+        List<string> Addresses { get; set; }
+        string NotificationContent { get; set; }
+        T NotificationArguments { get; set; }
+        bool SendNotification();
+}
+
+public class TextNotification : IInstantNotification<TextArguments>
+{
+        public List<string> Addresses { get; set; }
+        public string NotificationContent { get; set; }
+        public TextArguments NotificationArguments { get; set; }
+        public bool SendNotification(){
+            //send the message and confirm
+            return true;
+        }
+
+        public TextNotification(
+            List<string> p_Addresses,
+            string p_NotificationContent,
+            TextArguments p_NotificationArguments)
+        {
+            Addresses = p_Addresses;
+            NotificationContent = p_NotificationContent;
+            NotificationArguments = p_NotificationArguments;
+        }
+}
+
+
+public class EmailNotification : IInstantNotification<MailArguments>
+{
+    ...
+}
+
+

I can then instantiate a class, pass the args, and send the message. Pretty straightforward.

+
TextNotification TextObj = new TextNotification(myAddresses,myNotifContent, myArgs);
+bool success = TextObj.SendNotification();
+
+

Instead, I always end up doing something like the following: scrapping the interfaces and putting everything in a static class for organization purposes only.

+
public static class TextNotification
+{
+        public static bool SendNotification(
+            List<string> p_Addresses,
+            string p_NotificationContent,
+            TextArguments p_NotificationArguments)
+        {
+           //send the message and confirm
+            return true;
+        }   
+}
+
+
+public static class EmailNotification
+{
+    ...
+}
+
+

It seems like the steps to take the action of sending a notification now have a lot less overhead (and are just as easy to understand).

+
bool success = TextNotification.SendNotification(myAddresses, myNotifContent, myArgs);
+
+

To my ignorant more functional programming oriented mindset, the latter is pretty simple and it seems like the best way to implement things. I have really been struggling wrapping my mind around the reason for doing it the more "OOP" way (like at the top of my post). I realize this might be too trivial of an example, but it is the best one I could come up with.

+

This is all coming from a middle-tier application code perspective. Oftentimes, my code intercepts an http request, invokes static functions to execute business-layer actions, and those actions call my data layer (which are just a bunch of static functions) that then call stored procedures and things bubble back up until the response is eventually returned to the client.

+

Representing business-layer actions with fancy OO design patterns seems pointless.

+

I humbly ask for someone to help me understand where my thinking is flawed. I really want to embrace the world of OOP and its fancy design patterns, as well as being able to fully leverage C# and its potential as an OOP language. I feel like I am doing a disservice to myself writing functional-first C# code...

+",314224,,314224,,1/22/2021 3:41,1/22/2021 3:41,The role of OOP in the business layer,,4,10,3,,,CC BY-SA 4.0 +421196,1,421197,,1/18/2021 9:41,,2,343,"

Our situation

+

At first, our company had 1 product. Custom hardware with firmware we wrote ourselves.

+

Now more projects are starting to be added. Many can reuse most of the components of our first product, but of course the business logic is different. Also the hardware could change, and the remote device monitoring interfaces, as the sensors and available data could change.

+

Now we are looking at how to structure and manage our codebase. Currently we are leaning towards making a repository that will include all the non-project-specific firmware code. This includes battery management, remote device management skeleton, hardware drivers, etc. Everything that the different projects may share. This way, fixes and new features for these modules only need to be committed once.

+

Furthermore, we would create repositories per project, where the project-specific code is stored.

+

I think this is called multi-repo.

+

My thoughts

+
    +
  1. Project setup and management becomes harder (it would e.g. perhaps need a script to get the right version of the non-project-specific repo)
  2. +
  3. Each project can have its own rules (branching strategy
  4. +
  5. We would have to setup CI for each extra repo (build validation, code style, policies)
  6. +
  7. Because of 1-3, would monorepo be better? Won't build validation and such become a lot harder because not all code is meant to be together (e.g. different projects)? How do we keep our freelancers out of the code they don't need?
  8. +
  9. Are there other (better) alternatives in our case?
  10. +
+",350205,,350205,,1/18/2021 10:04,1/18/2021 11:15,How to setup our codebase for efficient code sharing and development?,,1,2,,,,CC BY-SA 4.0 +421202,1,421244,,1/18/2021 14:10,,1,116,"

I'm building an ecommerce app based on microservices, and almost every article recommends using async communication. But I'm facing a situation where I think sync communication is better. So, how do I control inventory using async communication?

+

Let's say I have a bus with 1k messages for decreasing the quantity of a product (and I have only 1k pieces of it). While not all messages are processed yet, I still show those quantities, and meanwhile another 500 orders are being placed. How do I handle that using a service bus? I mean, those 500 orders should be refused...

+

Just for clarification, I'm starting with microservices.

+",383320,,,,,1/20/2021 1:14,Microservices asynchronous communication do handle inventory,,3,4,1,,,CC BY-SA 4.0 +421209,1,,,1/18/2021 15:11,,0,77,"

I'm working on a system with a user-facing frontend and with 1-n backend services which I'm trying to design according to the principles of the Twelve-Factor App.

+

I'm now facing the task of sending emails to the user that contain a link to the frontend. The question is, how do I generate the URLs, or more specifically where do I take the URL domain/host from?

+

I'm seeing the following approaches:

+

1. Put the domain in the configuration

+

I'm putting the domain in an environment variable and exposing it to the service using the framework's configuration API.

+
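
In other words, something along these lines (sketched in Python; the variable and path names are made up):

+

import os
+
+# fail fast at startup if the deployment forgot to configure it
+FRONTEND_BASE_URL = os.environ["FRONTEND_BASE_URL"].rstrip("/")
+
+def build_confirmation_link(language: str, token: str) -> str:
+    return f"{FRONTEND_BASE_URL}/{language}/confirm?token={token}"
+
+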

Here, I see a potential problem if in the future the application needs to be available from multiple domains (e.g. one domain per country or multi tenancy). Currently, this is not the case, but who knows what new requirements might come.

+

2. Extract the host from the request

+

When the user enables email notifications, I extract the domain from the request headers. As the email(s) will be sent at a later time, I'll need to persist the domain in the database where the rest of the per-user settings live.

+

This will work in a scenario where the application has more than one domain but until this is the case, putting the same value (domain = "https://my-domain.com") in every database entry feels redundant.

+

3. Have the frontend generate the whole URL

+

Currently, the system only serves one domain but it already supports localization using subpaths (think /{language}/login). To generate the correct URL for each user, I need to persist the locale that will be part of the URL in the DB anyway.

+

This begs the question, why not have the frontend generate the complete URL including domain, language and path and store it in the database? This would kill two birds with one stone since now the backend doesn't need to know the URL structure of the frontend. However this would potentially open up a possibility for a malicious client to mess with the generated URLs. The redundancy argument from 2. also applies.

+

Are there any more arguments for/against the given approaches or even alternative approaches or does it come down to taste?

+",142818,,,,,1/18/2021 16:09,"How to generate frontend URLs in a ""12 factor app"" service?",,2,0,,,,CC BY-SA 4.0 +421217,1,421541,,1/18/2021 16:27,,7,691,"

I work on a number of code projects, some open source and some not. Many of these projects are intended to be cross-platform, most often running on either Linux (my natural habitat) or Windows and generally relying on CMake to build. Recently, I noticed that a Windows developer on one of the projects checked in .sln files and .vcxproj and .vcxproj.filters files strewn about in nearly every directory.

+

Some of these files appear to contain things like paths that seem likely to be unique to that particular person's particular computer, which prompted the question about whether these should be added to the projects .gitignore file or more generally excluded from version control.

+

My criteria

+

The criteria I typically use for deciding whether things belong in version control:

+
    +
  1. It is required to build one or more artifacts (including docs, source code, graphics)
  2. +
  3. It is required to be there for other reasons (e.g. README and LICENSE files)
  4. +
  5. If it's a built artifact, it should NOT go into version control
  6. +
+

What I've looked at

+

There is this question which asks about which Visual C++ file types should be checked in for a Visual C++ project. (My emphasis.) This isn't really that; the intent is actually to use CMake to create the build system, one of which could be Visual Studio. That question and most of the answers seem to assume that everyone will be using VS, which is not the case here.

+

I've also consulted Microsoft's docs on using CMake in Visual Studio, which seems to indicate that for a CMake project, the .sln files, and others will either not be needed or will be regenerated if they are. For that reason, they seem to fail under criteron 1 above. On the other hand, it's common for autotools-based projects to include things in their repositories that autotools creates so that those who rebuild from source don't need autotools.

+

Finally, of course, I actually spoke to the other developer who, like me, could see arguments either way. Since both of us are apparently too annoyingly collaborative to make the definitive decision in this case, I thought I'd inquire here.

+

My questions

+
    +
  1. Should .sln and .vcxproj and .vcxproj.filters files be checked in to version control for multi-platform projects?
  2. +
  3. If so, is there a way that non-VS developers can easily omit those files to reduce clutter and distraction?
  4. +
  5. If not, should VS developers be given any particular guidance on how to use CMake?
  6. +
+

To be clear, I'm looking primarily for a logical rationale that we might be able to apply as policy for future projects and NOT unsupported opinion.

+",252267,,,,,2/1/2021 9:42,Should Visual Studio specific files be excluded from version control?,,2,6,2,,,CC BY-SA 4.0 +421220,1,,,1/18/2021 16:56,,4,471,"

Let's say I have a class like this:

+
    public class Validator  {
+
+        private HashSet<byte> _validFlags;
+
+        public Validator()
+        { 
+            _validFlags = new HashSet<byte>
+            { 
+                1, 3, 4, 7, 19, 30, 47//These numbers are chosen for whatever reason and there does 
+                                      //not have to be any logic here
+            };
+        }
+
+        public void Validate(byte inputToValidate)
+        {
+            if(!_validFlags.Contains(inputToValidate))
+            {
+                throw new ArgumentException("Invalid argument value");
+            }
+        }
+    }
+
+

Now, I want to write unit tests for this class.

+

Positive unit tests are easy for this. However, how about negative unit tests?

+

Basically, I want to make sure that if I pass anything but the specified seven values to this method, I get an ArgumentException. Would passing all the possible byte values other than these seven be overkill?

+

I understand I cannot do that if the type of the argument is long or int or float, but what about byte? It would be fairly easy to write a piece of code to populate a structure with all invalid values and just run the test that would expect ArgumentException for all of them. But, again, is that overkill, both in terms of the time needed to write such a test and the time required for that test to be executed (there could be tens or even hundreds of tests like these)? Should I just pick a few invalid values here? I am worried about allowing possible invalid values to pass through the validation because I did not cover them in my negative unit test.

+

I would appreciate any opinion on this topic.

+",235135,,9113,,1/20/2021 6:03,1/23/2021 13:30,Negative unit testing,,7,5,,,,CC BY-SA 4.0 +421223,1,421237,,1/18/2021 17:58,,0,110,"

Hi, I'm building an ML pipeline with PyTorch to support various tasks and am looking for some advice on efficient ways to store processed data.

+

The main framework is 3 layers: [data prep] -> [data loading] -> [training/inference]. The data prep module is responsible for taking raw data (medical data in this case) and storing it in an organized and efficient way to be handled subsequently by the dataloaders. Data prep is ideally only done once initially for each dataset, while data loading/training may be done many times.

+

The main insights I'm looking for are:

+
    +
  • Video files: is storing them as raw pixels (uint8) in .npy files an okay method, or am I really missing out on optimizations in standard video formats (mp4/avi...)? For further info, most videos are around 100-200 frames.
  • +
  • Segmentations: A segmentation can be a contour of 20-40 x/y points. My plan is to save all the contours to a single json file which can be loaded directly into RAM in the constructor of a PyTorch dataloader (see the sketch after this list). Then every call to getitem would take the next contour and call something like skimage.draw.polygon to convert it into a binary mask. I'm wondering first if loading all the segmentations in the constructor is naive, and if I should store them as npy files, or directly as binary masks from the start, then load them individually with the dataloader on each call to getitem
  • +
  • Images: I think just using png is fine, loading them during calls to getitem
  • +
+
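
For the segmentation option, this is roughly the dataset class I am picturing (the JSON layout, one list of x points and one list of y points per contour, and the mask shape are just assumptions for the sake of the example):

+

import json
+import numpy as np
+import torch
+from skimage.draw import polygon
+from torch.utils.data import Dataset
+
+class ContourDataset(Dataset):
+    """Loads every contour into RAM once, rasterizes one mask per __getitem__."""
+
+    def __init__(self, contours_json, mask_shape=(256, 256)):
+        with open(contours_json) as f:
+            self.contours = json.load(f)  # assumed: list of {"x": [...], "y": [...]}
+        self.mask_shape = mask_shape
+
+    def __len__(self):
+        return len(self.contours)
+
+    def __getitem__(self, idx):
+        c = self.contours[idx]
+        mask = np.zeros(self.mask_shape, dtype=np.uint8)
+        rr, cc = polygon(c["y"], c["x"], shape=self.mask_shape)
+        mask[rr, cc] = 1
+        return torch.from_numpy(mask)
+
+

The open question for me is whether holding self.contours in RAM like this scales, or whether pre-rasterized masks on disk would be the saner default.

+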

Any insights on these would be appreciated; it's difficult to find agreed-upon best practices for these kinds of things on google.

+

Thanks

+",383335,,,,,9/9/2021 7:02,Best file formats for ML training,