qid: int64 (1 to 74.7M)
question: string (17 to 39.2k characters)
date: string (10 characters)
metadata: sequence
response_j: string (2 to 41.1k characters)
response_k: string (2 to 47.9k characters)
28,025,633
I am trying to hash the password when I create a new user. This is what I have when a user is added: ``` $username = $_POST['username']; $password = $_POST['password']; $options = [ 'cost' => 11, 'salt' => hash('sha256', uniqid(mt_rand(), true) . 'dsdsajJDSK&&^*^%FKLD876' . strtolower($username)), ]; $hash = password_hash($password, PASSWORD_BCRYPT, $options); $sql = "INSERT INTO users ( username, password) VALUES (:username, :password)"; $q = $pdo->prepare($sql); $q->execute(array( ':username' => $username, ':password' => $hash )); ``` And in the database something like this is stored: > $2y$11$fbd730bf81fe115d43283uAjC849wT.rD1F7CuBHEJHCVIVNn Then in my login file I have this: ``` <?php session_start(); include 'misc/database.inc.php'; if(isSet($_POST['submit'])) { $pdo = Database::connect(); $username=$_POST['username']; $password=$_POST['password']; $stmt = $pdo->prepare("SELECT * FROM users WHERE username = :username LIMIT 1"); $stmt->bindParam(':username', $username); $stmt->execute(); $res = $stmt -> fetch(); if(password_verify($_POST['password'], $res["password"])) { if ($res['level'] == 1) { $_SESSION['username'] = $username; header( "location: admin/main.php"); } elseif ( $res['level'] >= 4 ) { $_SESSION['user_id'] = $res['user_id']; $_SESSION['username'] = $username; header('Location: users/main.php'); } else { header("location: index.php"); } $pdo = null; } } else { ?> // html <?php } ?> ``` The problem is that I have a user with an old `sha1` password and I'm still able to log in with this user. I think it should not be possible to log in with it? UPDATE: ``` $username=$_POST['username']; $password=$_POST['password']; $stmt = $pdo->prepare("SELECT username,password FROM users WHERE username = :username LIMIT 1"); $stmt->bindParam(':username', $username); $stmt->execute(); $res = $stmt -> fetch(); if(password_verify($password, $res["password"])){ //if(password_verify($_POST['password'], $res["password"])){ ... } else { ... } ```
2015/01/19
[ "https://Stackoverflow.com/questions/28025633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1378364/" ]
All [`password_verify()`](http://php.net/manual/en/function.password-verify.php) needs is the user's password and the hash from the database. You don't need to hash anything yourself again. Instead, you should just use: ``` if(password_verify($password, $res["password"])){ // or if(password_verify($_POST['password'], $res["password"])){ ```
The `password_verify()` function also works with other hash algorithms. The first part, `$2y` (in this case BCrypt), tells the function which algorithm was used to generate the hash, so it can use the same algorithm for verification. ``` $2y$11$fbd730bf81fe11... ``` Some tips to improve your code: Do not create your own salt, especially not one which is derived from other parameters. Just let the function create a safe salt: ``` $options = array("cost" => 11); ``` Put an exit() after each header(...), otherwise the script continues anyway. ``` if ($res['level'] == 1) { $_SESSION['username'] = $username; header( "location: admin/main.php"); exit(); } ```
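The same pattern, letting the library generate the salt, storing the whole hash string, and verifying with nothing but the submitted password plus the stored hash, can be sketched outside PHP as well. A minimal Python illustration, assuming the third-party `bcrypt` package (an assumption for illustration only, not part of the original answer):

```
import bcrypt

password = b"correct horse battery staple"

# Registration: gensalt() picks a random salt; cost 11, as in the PHP snippet.
stored_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=11))
print(stored_hash)  # e.g. b'$2b$11$...' -- the prefix records algorithm, cost and salt

# Login: no manual re-hashing, just compare against what was stored.
print(bcrypt.checkpw(password, stored_hash))            # True
print(bcrypt.checkpw(b"wrong password", stored_hash))   # False
```

The key point carries over directly: because the salt and cost live inside the stored hash string, verification never needs them as separate inputs.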
9,223,738
Being new to tablet programming, this occurred to me: I could see that there exist different sets of APIs for iPhone and iPad. Is there a similar pattern for Android and BlackBerry tablets as well? What I mean is, is there a different set of APIs for Android tablets than for Android phones? If yes, where can I download them, and how can I run a tablet emulator on a Mac?
2012/02/10
[ "https://Stackoverflow.com/questions/9223738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077152/" ]
Previously, Android tablets used to run Android 3.0, while phones were still running 2.x. 3.0 is now deprecated and should no longer be used. Ice Cream Sandwich (4.0) was recently released, and it's the API that all future phones and tablets should be using (so to answer your question, it's the same SDK). It's the first API that is "shared" in this manner. When you develop an app, it should be flexible enough to run on any Android platform. The GUI stretches to fit any screen size. Note that there are ways to programmatically detect if the device running your app is a tablet, and you could use that to adapt your app if you wanted to. See [the official guide to supporting tablets and handhelds](http://developer.android.com/guide/practices/tablets-and-handsets.html) for in-depth information.
From Android 3.0, API level 11 (Honeycomb), Android tablets are supported; download the SDK and code against your tablet's version. Yes, there is a different set of APIs for Android tablets and phones: 3.x onwards supports tablets, while 2.x and 4.x (ICS) support both tablets and phones.
9,223,738
Being new to tablet programming, this occurred to me: I could see that there exist different sets of APIs for iPhone and iPad. Is there a similar pattern for Android and BlackBerry tablets as well? What I mean is, is there a different set of APIs for Android tablets than for Android phones? If yes, where can I download them, and how can I run a tablet emulator on a Mac?
2012/02/10
[ "https://Stackoverflow.com/questions/9223738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1077152/" ]
Previously, Android tablets used to run Android 3.0, while phones were still running 2.x. 3.0 is now deprecated and should no longer be used. Ice Cream Sandwich (4.0) was recently released, and it's the API that all future phones and tablets should be using (so to answer your question, it's the same SDK). It's the first API that is "shared" in this manner. When you develop an app, it should be flexible enough to run on any Android platform. The GUI stretches to fit any screen size. Note that there are ways to programmatically detect if the device running your app is a tablet, and you could use that to adapt your app if you wanted to. See [the official guide to supporting tablets and handhelds](http://developer.android.com/guide/practices/tablets-and-handsets.html) for in-depth information.
The newer Android SDK versions are for both phones and tablets. You use the same SDK for both. The SDK is designed to handle both, but you have to perform significant duplication in your resources to handle all platforms. It is a bit of a pain. For example they don't support vector images for icons, so you normally have to make raster duplications for low, medium, high, and extra high dpi devices.
41,687,908
I keep getting errors when I tried to solve a system of three equations using the following code in python3: --- ``` import sympy from sympy import Symbol, solve, nsolve x = Symbol('x') y = Symbol('y') z = Symbol('z') eq1 = x - y + 3 eq2 = x + y eq3 = z - y print(nsolve( (eq1, eq2, eq3), (x,y,z), (-50,50))) ``` --- Here is the error message: > > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 928, in findroot > fx = f(\*x0) > TypeError: () missing 1 required positional argument: > '\_Dummy\_15' > > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File "", line 1, in > File "", line 12, in File > "/usr/lib/python3/dist-packages/sympy/solvers/solvers.py", line 2498, > in nsolve > x = findroot(f, x0, J=J, \*\*kwargs) > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 931, in findroot > fx = f(x0[0]) > TypeError: () missing 2 required positional arguments: > '\_Dummy\_14' and '\_Dummy\_15' > > > --- The strange thing is, the error message goes away if I only solve the first two equation --- by changing the last line of the code to ``` print(nsolve( (eq1, eq2), (x,y), (-50,50))) ``` output: ``` exec(open('bug444.py').read()) [-1.5] [ 1.5] ``` I'm baffled; your help is most appreciated! A few pieces of additional info: * I'm using python3.4.0 + sympy 0.7.6-3 on ubuntu 14.04. I got the same error in python2 * I could solve this system using solve( [eq1,eq2,eq3], [x,y,z] ) but this system is just a toy example; in the actual applications the system is non-linear and I need higher precision, and I don't see how to adjust the precision for solve, whereas for nsolve I could use `nsolve(... , prec=100)` THANKS!
2017/01/17
[ "https://Stackoverflow.com/questions/41687908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428078/" ]
In your print statement, you are missing your guess for `z`: ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (-50, 50))) ``` Try this (in most cases, using `1` for all the guesses is fine): ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (1, 1, 1))) ``` **Output:** ``` [-1.5] [ 1.5] [ 1.5] ```
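Putting the fix into the question's toy script gives the following runnable sketch (just the question's code with the three-element guess, nothing else changed):

```
from sympy import Symbol, nsolve

x = Symbol('x')
y = Symbol('y')
z = Symbol('z')

eq1 = x - y + 3
eq2 = x + y
eq3 = z - y

# Three unknowns, therefore three starting guesses.
print(nsolve((eq1, eq2, eq3), (x, y, z), (1, 1, 1)))
# solution vector: x = -1.5, y = 1.5, z = 1.5
```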
You can discard the initial guesses/dummies if you use [`linsolve`](http://docs.sympy.org/dev/modules/solvers/solveset.html#sympy.solvers.solveset.linsolve): ``` >>> from sympy import linsolve >>> print(linsolve((eq1, eq2, eq3), x,y,z)) {(-3/2, 3/2, 3/2)} ``` And then you can use [`nonlinsolve`](http://docs.sympy.org/dev/modules/solvers/solveset.html#sympy.solvers.solveset.nonlinsolve) for your nonlinear problem set.
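As a small illustration of the `nonlinsolve` suggestion, here is a sketch on a made-up two-equation nonlinear system (not the question's actual application; requires a reasonably recent SymPy):

```
from sympy import symbols, nonlinsolve

x, y = symbols('x y')

# A toy nonlinear system: x**2 = 4 and x*y = 2.
system = [x**2 - 4, x*y - 2]

print(nonlinsolve(system, [x, y]))
# the two solution pairs: (-2, -1) and (2, 1)
```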
41,687,908
I keep getting errors when I tried to solve a system of three equations using the following code in python3: --- ``` import sympy from sympy import Symbol, solve, nsolve x = Symbol('x') y = Symbol('y') z = Symbol('z') eq1 = x - y + 3 eq2 = x + y eq3 = z - y print(nsolve( (eq1, eq2, eq3), (x,y,z), (-50,50))) ``` --- Here is the error message: > > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 928, in findroot > fx = f(\*x0) > TypeError: () missing 1 required positional argument: > '\_Dummy\_15' > > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File "", line 1, in > File "", line 12, in File > "/usr/lib/python3/dist-packages/sympy/solvers/solvers.py", line 2498, > in nsolve > x = findroot(f, x0, J=J, \*\*kwargs) > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 931, in findroot > fx = f(x0[0]) > TypeError: () missing 2 required positional arguments: > '\_Dummy\_14' and '\_Dummy\_15' > > > --- The strange thing is, the error message goes away if I only solve the first two equation --- by changing the last line of the code to ``` print(nsolve( (eq1, eq2), (x,y), (-50,50))) ``` output: ``` exec(open('bug444.py').read()) [-1.5] [ 1.5] ``` I'm baffled; your help is most appreciated! A few pieces of additional info: * I'm using python3.4.0 + sympy 0.7.6-3 on ubuntu 14.04. I got the same error in python2 * I could solve this system using solve( [eq1,eq2,eq3], [x,y,z] ) but this system is just a toy example; in the actual applications the system is non-linear and I need higher precision, and I don't see how to adjust the precision for solve, whereas for nsolve I could use `nsolve(... , prec=100)` THANKS!
2017/01/17
[ "https://Stackoverflow.com/questions/41687908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428078/" ]
In your print statement, you are missing your guess for `z`: ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (-50, 50))) ``` Try this (in most cases, using `1` for all the guesses is fine): ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (1, 1, 1))) ``` **Output:** ``` [-1.5] [ 1.5] [ 1.5] ```
The problem is that the number of initial guesses should be equal to the number of variables: `print(nsolve((eq1, eq2, eq3), (x,y,z), (-50,50,50)))` If you're using a numerical solver on a multidimensional problem, it wants to start from somewhere and follow a gradient to the solution; the guess vector is where you start. If there are multiple local minima/maxima in the space, different guess vectors can lead to different outputs, or an unfortunate guess vector may not converge at all. For a one-dimensional problem the guess vector is just x0. For most functions you can easily write down, almost any vector will converge to the one global solution, so a guess of (1,1,1) here is as good as (-50,50,50). Just don't leave any of the guesses out.
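To make the role of the guess concrete, here is a small sketch on a toy equation of my own (not from the question), showing that different starting points select different roots, and how the `prec` keyword mentioned in the question raises the output precision:

```
from sympy import Symbol, nsolve

x = Symbol('x')
f = x**2 - 2          # roots at +sqrt(2) and -sqrt(2)

print(nsolve(f, x, 1))            # guess near +1 converges to  1.41421356237310
print(nsolve(f, x, -1))           # guess near -1 converges to -1.41421356237310
print(nsolve(f, x, 1, prec=50))   # the same root, computed to 50 digits
```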
41,687,908
I keep getting errors when I tried to solve a system of three equations using the following code in python3: --- ``` import sympy from sympy import Symbol, solve, nsolve x = Symbol('x') y = Symbol('y') z = Symbol('z') eq1 = x - y + 3 eq2 = x + y eq3 = z - y print(nsolve( (eq1, eq2, eq3), (x,y,z), (-50,50))) ``` --- Here is the error message: > > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 928, in findroot > fx = f(\*x0) > TypeError: () missing 1 required positional argument: > '\_Dummy\_15' > > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File "", line 1, in > File "", line 12, in File > "/usr/lib/python3/dist-packages/sympy/solvers/solvers.py", line 2498, > in nsolve > x = findroot(f, x0, J=J, \*\*kwargs) > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 931, in findroot > fx = f(x0[0]) > TypeError: () missing 2 required positional arguments: > '\_Dummy\_14' and '\_Dummy\_15' > > > --- The strange thing is, the error message goes away if I only solve the first two equation --- by changing the last line of the code to ``` print(nsolve( (eq1, eq2), (x,y), (-50,50))) ``` output: ``` exec(open('bug444.py').read()) [-1.5] [ 1.5] ``` I'm baffled; your help is most appreciated! A few pieces of additional info: * I'm using python3.4.0 + sympy 0.7.6-3 on ubuntu 14.04. I got the same error in python2 * I could solve this system using solve( [eq1,eq2,eq3], [x,y,z] ) but this system is just a toy example; in the actual applications the system is non-linear and I need higher precision, and I don't see how to adjust the precision for solve, whereas for nsolve I could use `nsolve(... , prec=100)` THANKS!
2017/01/17
[ "https://Stackoverflow.com/questions/41687908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428078/" ]
In your print statement, you are missing your guess for `z`: ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (-50, 50))) ``` Try this (in most cases, using `1` for all the guesses is fine): ``` print(nsolve((eq1, eq2, eq3), (x, y, z), (1, 1, 1))) ``` **Output:** ``` [-1.5] [ 1.5] [ 1.5] ```
Your code should be: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1,1]) ``` Your code was, in effect: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1]) ``` You were missing one guess value in the last argument. The point is: if you are solving for `n` unknown terms, you provide a guess for each unknown term (`n` guesses in the last argument).
41,687,908
I keep getting errors when I tried to solve a system of three equations using the following code in python3: --- ``` import sympy from sympy import Symbol, solve, nsolve x = Symbol('x') y = Symbol('y') z = Symbol('z') eq1 = x - y + 3 eq2 = x + y eq3 = z - y print(nsolve( (eq1, eq2, eq3), (x,y,z), (-50,50))) ``` --- Here is the error message: > > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 928, in findroot > fx = f(\*x0) > TypeError: () missing 1 required positional argument: > '\_Dummy\_15' > > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File "", line 1, in > File "", line 12, in File > "/usr/lib/python3/dist-packages/sympy/solvers/solvers.py", line 2498, > in nsolve > x = findroot(f, x0, J=J, \*\*kwargs) > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 931, in findroot > fx = f(x0[0]) > TypeError: () missing 2 required positional arguments: > '\_Dummy\_14' and '\_Dummy\_15' > > > --- The strange thing is, the error message goes away if I only solve the first two equation --- by changing the last line of the code to ``` print(nsolve( (eq1, eq2), (x,y), (-50,50))) ``` output: ``` exec(open('bug444.py').read()) [-1.5] [ 1.5] ``` I'm baffled; your help is most appreciated! A few pieces of additional info: * I'm using python3.4.0 + sympy 0.7.6-3 on ubuntu 14.04. I got the same error in python2 * I could solve this system using solve( [eq1,eq2,eq3], [x,y,z] ) but this system is just a toy example; in the actual applications the system is non-linear and I need higher precision, and I don't see how to adjust the precision for solve, whereas for nsolve I could use `nsolve(... , prec=100)` THANKS!
2017/01/17
[ "https://Stackoverflow.com/questions/41687908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428078/" ]
You can discard the initial guesses/dummies if you use [`linsolve`](http://docs.sympy.org/dev/modules/solvers/solveset.html#sympy.solvers.solveset.linsolve): ``` >>> from sympy import linsolve >>> print(linsolve((eq1, eq2, eq3), x,y,z)) {(-3/2, 3/2, 3/2)} ``` And then you can use [`nonlinsolve`](http://docs.sympy.org/dev/modules/solvers/solveset.html#sympy.solvers.solveset.nonlinsolve) for your nonlinear problem set.
Your code should be: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1,1]) ``` Your code was, in effect: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1]) ``` You were missing one guess value in the last argument. The point is: if you are solving for `n` unknown terms, you provide a guess for each unknown term (`n` guesses in the last argument).
41,687,908
I keep getting errors when I tried to solve a system of three equations using the following code in python3: --- ``` import sympy from sympy import Symbol, solve, nsolve x = Symbol('x') y = Symbol('y') z = Symbol('z') eq1 = x - y + 3 eq2 = x + y eq3 = z - y print(nsolve( (eq1, eq2, eq3), (x,y,z), (-50,50))) ``` --- Here is the error message: > > Traceback (most recent call last): > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 928, in findroot > fx = f(\*x0) > TypeError: () missing 1 required positional argument: > '\_Dummy\_15' > > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): File "", line 1, in > File "", line 12, in File > "/usr/lib/python3/dist-packages/sympy/solvers/solvers.py", line 2498, > in nsolve > x = findroot(f, x0, J=J, \*\*kwargs) > File > "/usr/lib/python3/dist-packages/mpmath/calculus/optimization.py", line > 931, in findroot > fx = f(x0[0]) > TypeError: () missing 2 required positional arguments: > '\_Dummy\_14' and '\_Dummy\_15' > > > --- The strange thing is, the error message goes away if I only solve the first two equation --- by changing the last line of the code to ``` print(nsolve( (eq1, eq2), (x,y), (-50,50))) ``` output: ``` exec(open('bug444.py').read()) [-1.5] [ 1.5] ``` I'm baffled; your help is most appreciated! A few pieces of additional info: * I'm using python3.4.0 + sympy 0.7.6-3 on ubuntu 14.04. I got the same error in python2 * I could solve this system using solve( [eq1,eq2,eq3], [x,y,z] ) but this system is just a toy example; in the actual applications the system is non-linear and I need higher precision, and I don't see how to adjust the precision for solve, whereas for nsolve I could use `nsolve(... , prec=100)` THANKS!
2017/01/17
[ "https://Stackoverflow.com/questions/41687908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7428078/" ]
The problem is that the number of initial guesses should be equal to the number of variables: `print(nsolve((eq1, eq2, eq3), (x,y,z), (-50,50,50)))` If you're using a numerical solver on a multidimensional problem, it wants to start from somewhere and follow a gradient to the solution; the guess vector is where you start. If there are multiple local minima/maxima in the space, different guess vectors can lead to different outputs, or an unfortunate guess vector may not converge at all. For a one-dimensional problem the guess vector is just x0. For most functions you can easily write down, almost any vector will converge to the one global solution, so a guess of (1,1,1) here is as good as (-50,50,50). Just don't leave any of the guesses out.
Your code should be: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1,1]) ``` Your code was, in effect: ``` nsolve([eq1, eq2, eq3], [x,y,z], [1,1]) ``` You were missing one guess value in the last argument. The point is: if you are solving for `n` unknown terms, you provide a guess for each unknown term (`n` guesses in the last argument).
62,073,240
The Kata: [link](https://www.codewars.com/kata/550498447451fbbd7600041c/train/javascript) My solution: ``` function comp(array1, array2) { let result; if (Array.isArray(array1) && Array.isArray(array2) && array1.length && array2.length) { result = true; const squares = array2.map(e => Math.sqrt(e)); squares.forEach((e) => { if (array1.includes(e)) return; result = false; }); } else { result = false } return result; } ``` **I don't want another solution.** I want to figure out why mine doesn't pass in all of the tests. (Fails on two tests but I can't see which) I suspect the test expects `true` if both arrays are `[]`. But the Kata's description says otherwise: > > If a or b are nil (or null or None), the problem doesn't make sense so return false. > > > Help would be appreciated. --- **Working Solution Based off of the answers:** ``` function comp(array1, array2) { let result; if (Array.isArray(array1) && Array.isArray(array2)) { result = true; const sortedArray1 = array1.sort((a, b) => a - b); const squares = array2.map(e => Math.sqrt(e)).sort((a, b) => a - b); squares.forEach((e, i) => { if (sortedArray1[i] === e) return; result = false; }); } else { result = false } return result; } ```
2020/05/28
[ "https://Stackoverflow.com/questions/62073240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10088643/" ]
Okay, it took some cheating workarounds to see, but your code fails here: [![enter image description here](https://i.stack.imgur.com/j7mlb.png)](https://i.stack.imgur.com/j7mlb.png) That's why, after removing the two `.length` checks, you pass one more test. The next one is [![enter image description here](https://i.stack.imgur.com/MqEOE.png)](https://i.stack.imgur.com/MqEOE.png) After `Math.sqrt` on array2 you get `2, 3, 3`, which is why you return true, but the expected answer is false. That should be enough to help you solve this kata. By the way, you wrote: > I suspect the test expects true if both arrays are []. But the Kata's description says otherwise. The Kata's description actually says: > a or b might be [] (all languages except R, Shell). :-D P.S. I know that for some people inspecting the test arguments may be treated as cheating; I just do it for educational purposes. Black boxes are not always enough to figure out what's wrong.
If the same number appears more than once, but with different counts in the two arrays, your code will still return true, since `includes` only checks that the value exists somewhere in the array. You should sort and compare by index instead, in order to make sure each item in the array is used only once. Here is what works for me: ``` function comp(array1, array2){ if(!array1 || !array2) return false; array1 = array1.map(t => t**2).sort((a,b)=>a-b); array2 = array2.sort((a,b)=>a-b); for(let i=0;i<array1.length;i++){if(array1[i] !== array2[i])return false} return true; } ``` I'm sure you can understand the idea and just make some tweaks to your existing code.
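For illustration only, the sort-and-compare idea can also be sketched in a few lines of Python (my own port, not part of the original answer):

```
def comp(a, b):
    # True if b holds exactly the squares of a, counting multiplicity.
    if a is None or b is None:
        return False
    # Squaring a avoids float issues from sqrt(b); sorting makes the
    # comparison respect how many times each value occurs.
    return sorted(x * x for x in a) == sorted(b)

print(comp([121, 144, 19, 161, 19, 144, 19, 11],
           [14641, 20736, 361, 25921, 361, 20736, 361, 121]))  # True
print(comp([2, 2, 3], [4, 9, 9]))  # False: the counts differ, the kind of case discussed above
```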
4,059,389
i want to display many images like thumbnails in a scroll view and and i want the images displayed dynamically we scrolls down or left like a table view cells can u please tell how to that... Thanks with the following code..when we scroll the scroll view im calling this code and able to display the images dynamically (which r only visible) but the problem is.. while scrolling with scroll bars im getting the two images..vertically and horizontally..its only happening when i scroll.. can any body help me out please..? ``` int tileSize; int imgSize; if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad){ tileSize = 255; imgSize = 247; }else{ tileSize = 120; imgSize = 116; } CGRect visibleBounds = [songsContainer bounds]; for (UIView *tile in [songsContainer subviews]) { CGRect scaledTileFrame =[tile frame]; if (! CGRectIntersectsRect(scaledTileFrame, visibleBounds)) { for(UIView *view in [tile subviews]) [view removeFromSuperview]; [recycledCells addObject:tile]; [tile removeFromSuperview]; } } int maxRow =[songsDict count]-1; // this is the maximum possible row int maxCol = noOfRowsInCell-1; // and the maximum possible column int firstNeededRow = MAX(0, floorf(visibleBounds.origin.y / tileSize)); int firstNeededCol = MAX(0, floorf(visibleBounds.origin.x / tileSize)); int lastNeededRow = MIN(maxRow, floorf(CGRectGetMaxY(visibleBounds) / tileSize)); int lastNeededCol = MIN(maxCol, floorf(CGRectGetMaxX(visibleBounds) / tileSize)); NSLog(@".........MaxRow-%d,MaxCol-%d,firstNeddedRow-%d,firstNeededcol-%d,lNR-%d,lNC%d",maxRow, maxCol, firstNeededRow,firstNeededCol,lastNeededRow,lastNeededCol); // iterate through needed rows and columns, adding any tiles that are missing for (int row = firstNeededRow; row <= lastNeededRow; row++) { NSMutableArray *tempArray = (NSMutableArray *)[songsDict objectAtIndex:row]; for (int col = firstNeededCol; col <= lastNeededCol ; col++) { BOOL tileIsMissing = (firstVisibleRow > row || firstVisibleColumn > col || lastVisibleRow < row || lastVisibleColumn < col); if (tileIsMissing) { UIView *tile = (UIView *)[self dequeueReusableTile]; if (!tile) { // the scroll view will handle setting the tile's frame, so we don't have to worry about it tile = [[[UIView alloc] initWithFrame:CGRectZero] autorelease]; tile.backgroundColor = [UIColor clearColor]; } //tile.image = image for row and col; // set the tile's frame so we insert it at the correct position CGRect frame = CGRectMake(tileSize * col, tileSize * row, imgSize, imgSize); tile.frame = frame; if(col<[tempArray count]) [self addContentForTile:tile:row:col]; else tile.backgroundColor = [UIColor clearColor]; [songsContainer addSubview:tile]; } } } firstVisibleRow = firstNeededRow+1; firstVisibleColumn = firstNeededCol+1; lastVisibleRow = lastNeededRow; lastVisibleColumn = lastNeededCol; ```
2010/10/30
[ "https://Stackoverflow.com/questions/4059389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372510/" ]
Not that I know of. The official docs are located at <http://www.php.net/manual/en/>. There is a search option, [quick references](http://www.php.net/tips.php) and [various shortcuts available](http://www.php.net/urlhowto.php), but it does not have a drilldown search like visualjquery.com. For [SPL, there are the helly pages in addition to the manual pages](http://www.php.net/~helly/php/ext/spl/).
There's no visual documentation like that (as far as I know), but there is a very good reference which is commonly used: [**php.net's manual**](http://www.php.net/manual/en/). Because it is so complete, there is no need for many other references. By the way, it's very easy to search for specific functions on php.net. Just browse to `http://php.net/[insert function name here]` and it will redirect you to the page you want.
4,059,389
i want to display many images like thumbnails in a scroll view and and i want the images displayed dynamically we scrolls down or left like a table view cells can u please tell how to that... Thanks with the following code..when we scroll the scroll view im calling this code and able to display the images dynamically (which r only visible) but the problem is.. while scrolling with scroll bars im getting the two images..vertically and horizontally..its only happening when i scroll.. can any body help me out please..? ``` int tileSize; int imgSize; if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad){ tileSize = 255; imgSize = 247; }else{ tileSize = 120; imgSize = 116; } CGRect visibleBounds = [songsContainer bounds]; for (UIView *tile in [songsContainer subviews]) { CGRect scaledTileFrame =[tile frame]; if (! CGRectIntersectsRect(scaledTileFrame, visibleBounds)) { for(UIView *view in [tile subviews]) [view removeFromSuperview]; [recycledCells addObject:tile]; [tile removeFromSuperview]; } } int maxRow =[songsDict count]-1; // this is the maximum possible row int maxCol = noOfRowsInCell-1; // and the maximum possible column int firstNeededRow = MAX(0, floorf(visibleBounds.origin.y / tileSize)); int firstNeededCol = MAX(0, floorf(visibleBounds.origin.x / tileSize)); int lastNeededRow = MIN(maxRow, floorf(CGRectGetMaxY(visibleBounds) / tileSize)); int lastNeededCol = MIN(maxCol, floorf(CGRectGetMaxX(visibleBounds) / tileSize)); NSLog(@".........MaxRow-%d,MaxCol-%d,firstNeddedRow-%d,firstNeededcol-%d,lNR-%d,lNC%d",maxRow, maxCol, firstNeededRow,firstNeededCol,lastNeededRow,lastNeededCol); // iterate through needed rows and columns, adding any tiles that are missing for (int row = firstNeededRow; row <= lastNeededRow; row++) { NSMutableArray *tempArray = (NSMutableArray *)[songsDict objectAtIndex:row]; for (int col = firstNeededCol; col <= lastNeededCol ; col++) { BOOL tileIsMissing = (firstVisibleRow > row || firstVisibleColumn > col || lastVisibleRow < row || lastVisibleColumn < col); if (tileIsMissing) { UIView *tile = (UIView *)[self dequeueReusableTile]; if (!tile) { // the scroll view will handle setting the tile's frame, so we don't have to worry about it tile = [[[UIView alloc] initWithFrame:CGRectZero] autorelease]; tile.backgroundColor = [UIColor clearColor]; } //tile.image = image for row and col; // set the tile's frame so we insert it at the correct position CGRect frame = CGRectMake(tileSize * col, tileSize * row, imgSize, imgSize); tile.frame = frame; if(col<[tempArray count]) [self addContentForTile:tile:row:col]; else tile.backgroundColor = [UIColor clearColor]; [songsContainer addSubview:tile]; } } } firstVisibleRow = firstNeededRow+1; firstVisibleColumn = firstNeededCol+1; lastVisibleRow = lastNeededRow; lastVisibleColumn = lastNeededCol; ```
2010/10/30
[ "https://Stackoverflow.com/questions/4059389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372510/" ]
It looks like you simply want a different interface to the normal documentation. The online PHP manual, and several others are built using a [PHP-based Docbook renderer](http://doc.php.net/phd/docs/) which could be extended (via additional packages and formats) to provide whatever kind of output you desire: including a fancy Visual Jquery-like one. --- Alternatively, you could take an existing output format (for example, PhD can render XML/JSON/PHP) and add a simple visual UI on top. For example, the JSON output for the [`file()`](http://php.net/file) function is ``` { "name": "file", "purpose": "Reads entire file into an array", "manualid": "function.file", "version": "PHP 4, PHP 5", "params": [ { "name": "filename", "type": "string", "optional": "false" }, { "name": "flags", "type": "int", "optional": "true" }, { "name": "context", "type": "resource", "optional": "true" } ], "return": { "type": "array", "description": "Returns the file in an array. Each element of the array corresponds to a\n line in the file, with the newline still attached. Upon failure,\n file returns FALSE.Each line in the resulting array will include the line ending, unless\n FILE_IGNORE_NEW_LINES is used, so you still need to\n use rtrim if you do not want the line ending\n present.If PHP is not properly recognizing\nthe line endings when reading files either on or created by a Macintosh\ncomputer, enabling the\nauto_detect_line_endings\nrun-time configuration option may help resolve the problem." }, "errors": null, "notes": [ { "type": "warning", "description": "When using SSL, Microsoft IIS\nwill violate the protocol by closing the connection without sending a\nclose_notify indicator. PHP will report this as \"SSL: Fatal\nProtocol Error\" when you reach the end of the data. To work around this, the\nvalue of error_reporting should be\nlowered to a level that does not include warnings.\nPHP 4.3.7 and higher can detect buggy IIS server software when you open\nthe stream using the https:\/\/ wrapper and will suppress the\nwarning. When using fsockopen to create an\nssl:\/\/ socket, the developer is responsible for detecting\nand suppressing this warning." } ], "changelog": [ { "version": "5.0.0", "change": "The context parameter was added" }, { "version": "5.0.0", "change": "Prior to PHP 5.0.0 the flags parameter only\n covered include_path and was\n enabled with 1" }, { "version": "4.3.0", "change": "file became binary safe" } ], "seealso": [ { "type": "function", "name": "readfile" }, { "type": "function", "name": "fopen" }, { "type": "function", "name": "fsockopen" }, { "type": "function", "name": "popen" }, { "type": "function", "name": "file_get_contents" }, { "type": "function", "name": "include" }, { "type": "function", "name": "stream_context_create" } ] } ```
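As a minimal illustration of the "simple UI on top of an existing output format" idea, a few lines are enough to turn that JSON into a compact signature line. A sketch only; `file.json` is a hypothetical local copy of the PhD JSON output shown above:

```
import json

# Hypothetical local copy of the PhD JSON output shown above.
with open("file.json") as fh:
    doc = json.load(fh)

# Render each parameter, wrapping optional ones in brackets.
parts = []
for p in doc["params"]:
    piece = f'{p["type"]} ${p["name"]}'
    parts.append(f"[{piece}]" if p["optional"] == "true" else piece)

print(f'{doc["return"]["type"]} {doc["name"]}({", ".join(parts)})')
print(doc["purpose"])
# array file(string $filename, [int $flags], [resource $context])
# Reads entire file into an array
```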
Not that I know of. The official docs are located at <http://www.php.net/manual/en/>. There is a search option, [quick references](http://www.php.net/tips.php) and [various shortcuts available](http://www.php.net/urlhowto.php), but it does not have a drilldown search like visualjquery.com. For [SPL, there are the helly pages in addition to the manual pages](http://www.php.net/~helly/php/ext/spl/).
4,059,389
i want to display many images like thumbnails in a scroll view and and i want the images displayed dynamically we scrolls down or left like a table view cells can u please tell how to that... Thanks with the following code..when we scroll the scroll view im calling this code and able to display the images dynamically (which r only visible) but the problem is.. while scrolling with scroll bars im getting the two images..vertically and horizontally..its only happening when i scroll.. can any body help me out please..? ``` int tileSize; int imgSize; if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad){ tileSize = 255; imgSize = 247; }else{ tileSize = 120; imgSize = 116; } CGRect visibleBounds = [songsContainer bounds]; for (UIView *tile in [songsContainer subviews]) { CGRect scaledTileFrame =[tile frame]; if (! CGRectIntersectsRect(scaledTileFrame, visibleBounds)) { for(UIView *view in [tile subviews]) [view removeFromSuperview]; [recycledCells addObject:tile]; [tile removeFromSuperview]; } } int maxRow =[songsDict count]-1; // this is the maximum possible row int maxCol = noOfRowsInCell-1; // and the maximum possible column int firstNeededRow = MAX(0, floorf(visibleBounds.origin.y / tileSize)); int firstNeededCol = MAX(0, floorf(visibleBounds.origin.x / tileSize)); int lastNeededRow = MIN(maxRow, floorf(CGRectGetMaxY(visibleBounds) / tileSize)); int lastNeededCol = MIN(maxCol, floorf(CGRectGetMaxX(visibleBounds) / tileSize)); NSLog(@".........MaxRow-%d,MaxCol-%d,firstNeddedRow-%d,firstNeededcol-%d,lNR-%d,lNC%d",maxRow, maxCol, firstNeededRow,firstNeededCol,lastNeededRow,lastNeededCol); // iterate through needed rows and columns, adding any tiles that are missing for (int row = firstNeededRow; row <= lastNeededRow; row++) { NSMutableArray *tempArray = (NSMutableArray *)[songsDict objectAtIndex:row]; for (int col = firstNeededCol; col <= lastNeededCol ; col++) { BOOL tileIsMissing = (firstVisibleRow > row || firstVisibleColumn > col || lastVisibleRow < row || lastVisibleColumn < col); if (tileIsMissing) { UIView *tile = (UIView *)[self dequeueReusableTile]; if (!tile) { // the scroll view will handle setting the tile's frame, so we don't have to worry about it tile = [[[UIView alloc] initWithFrame:CGRectZero] autorelease]; tile.backgroundColor = [UIColor clearColor]; } //tile.image = image for row and col; // set the tile's frame so we insert it at the correct position CGRect frame = CGRectMake(tileSize * col, tileSize * row, imgSize, imgSize); tile.frame = frame; if(col<[tempArray count]) [self addContentForTile:tile:row:col]; else tile.backgroundColor = [UIColor clearColor]; [songsContainer addSubview:tile]; } } } firstVisibleRow = firstNeededRow+1; firstVisibleColumn = firstNeededCol+1; lastVisibleRow = lastNeededRow; lastVisibleColumn = lastNeededCol; ```
2010/10/30
[ "https://Stackoverflow.com/questions/4059389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372510/" ]
It looks like you simply want a different interface to the normal documentation. The online PHP manual, and several others are built using a [PHP-based Docbook renderer](http://doc.php.net/phd/docs/) which could be extended (via additional packages and formats) to provide whatever kind of output you desire: including a fancy Visual Jquery-like one. --- Alternatively, you could take an existing output format (for example, PhD can render XML/JSON/PHP) and add a simple visual UI on top. For example, the JSON output for the [`file()`](http://php.net/file) function is ``` { "name": "file", "purpose": "Reads entire file into an array", "manualid": "function.file", "version": "PHP 4, PHP 5", "params": [ { "name": "filename", "type": "string", "optional": "false" }, { "name": "flags", "type": "int", "optional": "true" }, { "name": "context", "type": "resource", "optional": "true" } ], "return": { "type": "array", "description": "Returns the file in an array. Each element of the array corresponds to a\n line in the file, with the newline still attached. Upon failure,\n file returns FALSE.Each line in the resulting array will include the line ending, unless\n FILE_IGNORE_NEW_LINES is used, so you still need to\n use rtrim if you do not want the line ending\n present.If PHP is not properly recognizing\nthe line endings when reading files either on or created by a Macintosh\ncomputer, enabling the\nauto_detect_line_endings\nrun-time configuration option may help resolve the problem." }, "errors": null, "notes": [ { "type": "warning", "description": "When using SSL, Microsoft IIS\nwill violate the protocol by closing the connection without sending a\nclose_notify indicator. PHP will report this as \"SSL: Fatal\nProtocol Error\" when you reach the end of the data. To work around this, the\nvalue of error_reporting should be\nlowered to a level that does not include warnings.\nPHP 4.3.7 and higher can detect buggy IIS server software when you open\nthe stream using the https:\/\/ wrapper and will suppress the\nwarning. When using fsockopen to create an\nssl:\/\/ socket, the developer is responsible for detecting\nand suppressing this warning." } ], "changelog": [ { "version": "5.0.0", "change": "The context parameter was added" }, { "version": "5.0.0", "change": "Prior to PHP 5.0.0 the flags parameter only\n covered include_path and was\n enabled with 1" }, { "version": "4.3.0", "change": "file became binary safe" } ], "seealso": [ { "type": "function", "name": "readfile" }, { "type": "function", "name": "fopen" }, { "type": "function", "name": "fsockopen" }, { "type": "function", "name": "popen" }, { "type": "function", "name": "file_get_contents" }, { "type": "function", "name": "include" }, { "type": "function", "name": "stream_context_create" } ] } ```
There's no visual documentation like that (as far as I know), but there is a very good reference which is commonly used: [**php.net's manual**](http://www.php.net/manual/en/). Because it is so complete, there is no need for many other references. By the way, it's very easy to search for specific functions on php.net. Just browse to `http://php.net/[insert function name here]` and it will redirect you to the page you want.
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
You need to initialize the replica set: ``` $ mongo --port 27017 > rs.initiate() ``` Then, to deploy additional members: ``` $ mongod --port 27018 --dbpath "C:\Program Files\MongoDB\Server\5.0\data2" --replSet replicaSet1 ``` Make sure the `port` and `dbpath` are different for each new node. Add each new node to the replica set from the shell: ``` $ mongo --port 27017 PRIMARY> rs.add("localhost:27018") ```
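If you would rather script the initiation than type it into the shell, the same commands can be issued from a driver. A rough sketch with PyMongo (the driver choice is my assumption; the answer above only uses the mongo shell):

```
from pymongo import MongoClient

# Connect directly to the single node; it is not a replica set member yet,
# so replica-set server discovery must be bypassed.
client = MongoClient("localhost", 27017, directConnection=True)

# Equivalent of rs.initiate() in the shell, with an explicit config document.
client.admin.command("replSetInitiate", {
    "_id": "replicaSet1",
    "members": [{"_id": 0, "host": "localhost:27017"}],
})

# Equivalent of rs.status(); myState should become 1 (PRIMARY) shortly.
print(client.admin.command("replSetGetStatus")["myState"])
```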
This is expected. You would have to open a new terminal while the mongod daemon keeps running in the other terminal, and carry on with the "mongo" command to open the mongo shell.
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
Well, first, I must say that Matt's answer (<https://stackoverflow.com/a/71179703/1264663>) gave some clues about the solution, but it was missing a fuller explanation. The point is that the replica set itself must be initiated: the commands starting the MongoDB daemons will keep showing this error until that happens, or will eventually stop on a timeout. This worked for me on Windows; I was following a tutorial where the instructor did the procedure on Linux, and the error did not appear there. Here are all the steps to solve the problem. As has already been said, the port and the data folder must be different for each node.

1. In separate command shells, run each `mongod` command (e.g. `mongod --replSet cookingSet --dbpath=c:\temp\mongodb\data\rs1 --port 27018`) and leave them running.
2. In another command shell, connect to one of the daemons started in step 1 with the MongoDB shell, e.g. `mongosh --port 27018`. The port must be specified and must match one of the ports used to start the nodes.
3. In this shell, create a variable like

   ```
   rsconfig = {
     _id: "cookingSet",
     members: [
       {_id: 0, host: "localhost:27018"},
       {_id: 1, host: "localhost:27019"},
       {_id: 2, host: "localhost:27020"}
     ]
   }
   ```

   where `_id` must be equal to the value of the `--replSet` parameter used to start the nodes. In this case, all nodes are running on my localhost. This variable represents the configuration of the replica set.
4. Run the command `rs.initiate(rsconfig)`, where `rsconfig` is the variable created before.
5. Verify the replica set by running `rs.status()`; you should see the members listed in the result.
6. Check that the error is no longer thrown: go back to the shells where the nodes were started and confirm the message has stopped appearing.
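For completeness, a small sketch of the verification part (steps 5–6) inside the same mongosh session; the member names come from the hypothetical `rsconfig` above, and the fields used are the standard ones from the `rs.status()` output:

```
// still inside mongosh --port 27018, after rs.initiate(rsconfig)
rs.status().members.forEach(m => print(m.name, m.stateStr))
// once the election has finished, one member shows PRIMARY and the others SECONDARY, e.g.:
//   localhost:27018 PRIMARY
//   localhost:27019 SECONDARY
//   localhost:27020 SECONDARY

rs.conf()   // shows the stored configuration document (same shape as rsconfig)
```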
This is expected. You have to open a new terminal while the mongod daemon keeps running in the first one, and then run the "mongo" command in that new terminal to open the mongo shell.
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
You need to initialize the replica set:

```
$ mongo --port 27017
> rs.initiate()
```

Then, to deploy additional members:

```
$ mongod --port 27018 --dbpath "C:\Program Files\MongoDB\Server\5.0\data2" --replSet replicaSet1
```

Make sure the `port` and `dbpath` are different for each new node. Finally, add each new node to the replica set from the shell:

```
$ mongo --port 27017
PRIMARY> rs.add("localhost:27018")
```
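As a side note, a minimal sketch of how the same replica set name could instead go into the `mongod.conf` shown in the question, rather than being passed with `--replSet` on the command line; the set name `replicaSet1` is taken from the question, and the rest of the config file stays as it is:

```
# mongod.conf (excerpt): uncomment/add the replication section
replication:
  replSetName: replicaSet1

# then restart mongod with this config file and, in the mongo shell, run:
#   rs.initiate()
```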
I was facing this issue on Windows 10. It happened because the MongoDB daemon was already running as a Windows service by default. Follow these steps:

* Go to Task Manager and click on the Services tab
* Find the MongoDB service
* Right-click it and select Stop

Hopefully, this will fix the issue.
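If you prefer the command line over Task Manager, a minimal sketch of the same fix from an elevated command prompt; it assumes the default service name `MongoDB` created by the MSI installer (check `services.msc` if yours differs), and reuses the port/dbpath from the question:

```
:: run as Administrator
net stop MongoDB

:: now start your own instance manually with the replica set options
mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1
```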
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
Well, first, I must say that Matt's answer (<https://stackoverflow.com/a/71179703/1264663>) gave some clues about the solution, but it was missing a fuller explanation. The point is that the replica set itself must be initiated: the commands starting the MongoDB daemons will keep showing this error until that happens, or will eventually stop on a timeout. This worked for me on Windows; I was following a tutorial where the instructor did the procedure on Linux, and the error did not appear there. Here are all the steps to solve the problem. As has already been said, the port and the data folder must be different for each node.

1. In separate command shells, run each `mongod` command (e.g. `mongod --replSet cookingSet --dbpath=c:\temp\mongodb\data\rs1 --port 27018`) and leave them running.
2. In another command shell, connect to one of the daemons started in step 1 with the MongoDB shell, e.g. `mongosh --port 27018`. The port must be specified and must match one of the ports used to start the nodes.
3. In this shell, create a variable like

   ```
   rsconfig = {
     _id: "cookingSet",
     members: [
       {_id: 0, host: "localhost:27018"},
       {_id: 1, host: "localhost:27019"},
       {_id: 2, host: "localhost:27020"}
     ]
   }
   ```

   where `_id` must be equal to the value of the `--replSet` parameter used to start the nodes. In this case, all nodes are running on my localhost. This variable represents the configuration of the replica set.
4. Run the command `rs.initiate(rsconfig)`, where `rsconfig` is the variable created before.
5. Verify the replica set by running `rs.status()`; you should see the members listed in the result.
6. Check that the error is no longer thrown: go back to the shells where the nodes were started and confirm the message has stopped appearing.
I was facing this issue on Windows 10. It happened because the MongoDB daemon was already running as a Windows service by default. Follow these steps:

* Go to Task Manager and click on the Services tab
* Find the MongoDB service
* Right-click it and select Stop

Hopefully, this will fix the issue.
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
You need to initialize the replicaSet ``` $ mongo --port 27017 > rs.initiate() ``` Then to deploy additional members ``` $ mongod --port 27018 --dbpath "C:\Program Files\MongoDB\Server\5.0\data2" --replSet replicaSet1 ``` Make sure the `port` & `dbpath` are different for each new node. Add each new node to the replicaSet from the shell ``` $ mongo --port 27017 PRIMARY> rs.add("localhost:27018") ```
Well, first, I must say that Matt's answer (<https://stackoverflow.com/a/71179703/1264663>) gave some clues about the solution. What was missing there was a better explanation. The thing is that the replica set itself must be initiated. The commands starting the MongoDB daemons will keep showing the error until that happens, or will eventually stop (on a timeout). I want to make clear that this worked for me on Windows. I was following a tutorial where the instructor did the procedure on Linux, and the error didn't appear. I'm going to describe all the steps here to make it clearer how to solve the problem. As has already been said, the port and the data folder must be different for each node. 1. In separate command shells run every `mongod` command (e.g. `mongod --replSet cookingSet --dbpath=c:\temp\mongodb\data\rs1 --port 27018`) and let them run. 2. In another command shell, connect to one of the daemons started in step 1, using the MongoDB shell, like `mongosh --port 27018`. Note that the port must be specified, and must match one of the ports used for starting the nodes. 3. In this shell, create a variable like > > > ``` > rsconfig = { > _id: "cookingSet", > members: [ > {_id: 0, host: "localhost:27018"}, > {_id: 1, host: "localhost:27019"}, > {_id: 2, host: "localhost:27020"} > ] > } > > ``` > > where \_id must be equal to the value of the 'replSet' parameter used for starting the nodes. In this case, all nodes are running on my localhost. This variable represents the configuration of the replica set. 4. Run the command `rs.initiate(rsconfig)`, where 'rsconfig' is the name of the variable created before. 5. Verify the replica set by running `rs.status()`. You should see the members listed in the result. 6. Verify that the error is no longer thrown: go back to the shells where the nodes were started and confirm that the message does not appear anymore.
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
You need to initialize the replicaSet ``` $ mongo --port 27017 > rs.initiate() ``` Then to deploy additional members ``` $ mongod --port 27018 --dbpath "C:\Program Files\MongoDB\Server\5.0\data2" --replSet replicaSet1 ``` Make sure the `port` & `dbpath` are different for each new node. Add each new node to the replicaSet from the shell ``` $ mongo --port 27017 PRIMARY> rs.add("localhost:27018") ```
I was facing the same problem but was using docker-compose. Solved: This is my *docker-compose.yaml* file: ```yaml version: "3" services: mongo1: hostname: mongo1 container_name: localmongo1 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongo2: hostname: mongo2 container_name: localmongo2 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongo3: hostname: mongo3 container_name: localmongo3 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongosetup: image: mongo depends_on: - mongo1 - mongo2 - mongo3 volumes: - .:/scripts restart: "no" entrypoint: [ "bash", "/scripts/mongo_setup.sh"] ``` And here is the script for replicaSet initialization (file name *mongo\_setup.sh*): ```bash #!/bin/bash echo "sleeping for 10 seconds" sleep 10 echo mongo_setup.sh time now: `date +"%T" ` mongosh --host mongo1:27017 <<EOF var cfg = { "_id": "rs0", "version": 1, "members": [ { "_id": 0, "host": "mongo1:27017", "priority": 2 }, { "_id": 1, "host": "mongo2:27017", "priority": 0 }, { "_id": 2, "host": "mongo3:27017", "priority": 0 } ] }; rs.initiate(cfg); EOF ```
70,081,140
I tried to create a replica set following instruction such as : <https://hevodata.com/learn/mongodb-replica-set-3-easy-methods/> Sadly, I have a problem at the first step : **Problem** The command : ``` mongod --port 27017 --dbpath "C:\Program Files\MongoDB\Server\5.0\data" --replSet replicaSet1 ``` **Log file** ``` {"t":{"$date":"2021-11-23T13:24:04.506+01:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"} {"t":{"$date":"2021-11-23T13:24:05.247+01:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.250+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.251+01:00"},"s":"I", "c":"NETWORK", "id":4648602, "ctx":"main","msg":"Implicit TCP FastOpen in use."} {"t":{"$date":"2021-11-23T13:24:05.255+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.258+01:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}} {"t":{"$date":"2021-11-23T13:24:05.261+01:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}} {"t":{"$date":"2021-11-23T13:24:05.270+01:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":2852,"port":27017,"dbPath":"C:/Program Files/MongoDB/Server/5.0/data","architecture":"64-bit","host":"DQFQNH2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23398, "ctx":"initandlisten","msg":"Target operating system minimum version","attr":{"targetMinOS":"Windows 7/Windows Server 2008 R2"}} {"t":{"$date":"2021-11-23T13:24:05.272+01:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.3","gitVersion":"657fea5a61a74d7a79df7aff8e4bcf0bc742b748","modules":[],"allocator":"tcmalloc","environment":{"distmod":"windows","distarch":"x86_64","target_arch":"x86_64"}}}} {"t":{"$date":"2021-11-23T13:24:05.281+01:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Microsoft Windows 10","version":"10.0 (build 18363)"}}} {"t":{"$date":"2021-11-23T13:24:05.292+01:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"port":27017},"replication":{"replSet":"replicaSet1"},"storage":{"dbPath":"C:\\Program Files\\MongoDB\\Server\\5.0\\data"}}}} {"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"C:/Program Files/MongoDB/Server/5.0/data","storageEngine":"wiredTiger"}} 
{"t":{"$date":"2021-11-23T13:24:05.298+01:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=3525M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}} {"t":{"$date":"2021-11-23T13:24:05.330+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:330481][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.406+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:405280][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.500+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:500025][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 55/55552 to 56/256"}} {"t":{"$date":"2021-11-23T13:24:05.656+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:656606][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 55 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.749+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:748359][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 56 through 56"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.821+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:821165][2852:140718885658272], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}} {"t":{"$date":"2021-11-23T13:24:05.826+01:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1637670245:826155][2852:140718885658272], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 68329"}} {"t":{"$date":"2021-11-23T13:24:05.840+01:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":542}} {"t":{"$date":"2021-11-23T13:24:05.841+01:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}} {"t":{"$date":"2021-11-23T13:24:05.854+01:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":false}} 
{"t":{"$date":"2021-11-23T13:24:05.858+01:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"} {"t":{"$date":"2021-11-23T13:24:05.865+01:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.866+01:00"},"s":"W", "c":"CONTROL", "id":22140, "ctx":"initandlisten","msg":"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning","tags":["startupWarnings"]} {"t":{"$date":"2021-11-23T13:24:05.879+01:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":13,"maxWireVersion":13},"outgoing":{"minWireVersion":13,"maxWireVersion":13},"isInternalClient":true}}} {"t":{"$date":"2021-11-23T13:24:05.883+01:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"} {"t":{"$date":"2021-11-23T13:24:05.891+01:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"} {"t":{"$date":"2021-11-23T13:24:05.901+01:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"initandlisten","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"W", "c":"FTDC", "id":23718, "ctx":"initandlisten","msg":"Failed to initialize Performance Counters for FTDC","attr":{"error":{"code":179,"codeName":"WindowsPdhError","errmsg":"PdhAddEnglishCounterW failed with 'L’objet spécifié n’a pas été trouvé sur l’ordinateur.'"}}} {"t":{"$date":"2021-11-23T13:24:06.182+01:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"C:/Program Files/MongoDB/Server/5.0/data/diagnostic.data"}} {"t":{"$date":"2021-11-23T13:24:06.192+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.","nextWakeupMillis":200}} {"t":{"$date":"2021-11-23T13:24:06.197+01:00"},"s":"I", "c":"REPL", "id":21311, "ctx":"initandlisten","msg":"Did not find local initialized voted for document at startup"} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21529, "ctx":"initandlisten","msg":"Initializing rollback ID","attr":{"rbid":1}} {"t":{"$date":"2021-11-23T13:24:06.200+01:00"},"s":"I", "c":"REPL", "id":21313, "ctx":"initandlisten","msg":"Did not find local replica set configuration document at startup","attr":{"error":{"code":47,"codeName":"NoMatchingDocument","errmsg":"Did not find replica set configuration document in local.system.replset"}}} {"t":{"$date":"2021-11-23T13:24:06.208+01:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to 
refresh session cache, will try again at the next refresh interval","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"REPL", "id":40440, "ctx":"initandlisten","msg":"Starting the TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.209+01:00"},"s":"I", "c":"CONTROL", "id":20711, "ctx":"LogicalSessionCacheReap","msg":"Failed to reap transaction table","attr":{"error":"NotYetInitialized: Replication has not yet been configured"}} {"t":{"$date":"2021-11-23T13:24:06.218+01:00"},"s":"I", "c":"REPL", "id":40445, "ctx":"TopologyVersionObserver","msg":"Started TopologyVersionObserver"} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}} {"t":{"$date":"2021-11-23T13:24:06.219+01:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}} {"t":{"$date":"2021-11-23T13:24:06.413+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":400}} {"t":{"$date":"2021-11-23T13:24:06.844+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":600}} {"t":{"$date":"2021-11-23T13:24:07.445+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":800}} {"t":{"$date":"2021-11-23T13:24:08.245+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1000}} {"t":{"$date":"2021-11-23T13:24:09.247+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1200}} {"t":{"$date":"2021-11-23T13:24:10.448+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1400}} {"t":{"$date":"2021-11-23T13:24:11.849+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1600}} {"t":{"$date":"2021-11-23T13:24:13.451+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":1800}} {"t":{"$date":"2021-11-23T13:24:15.252+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is 
finished initializing.","nextWakeupMillis":2000}} {"t":{"$date":"2021-11-23T13:24:17.253+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2200}} {"t":{"$date":"2021-11-23T13:24:19.454+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2400}} {"t":{"$date":"2021-11-23T13:24:21.856+01:00"},"s":"I", "c":"-", "id":4939300, "ctx":"monitoring-keys-for-HMAC","msg":"Failed to refresh key cache","attr":{"error":"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.","nextWakeupMillis":2600}} ``` I think the problem is with the sentence : ``` "NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing" ``` Any ideas on what to do here? I am on windows 10. **More info** ``` # mongod.conf # for documentation of all options, see: # http://docs.mongodb.org/manual/reference/configuration-options/ # Where and how to store data. storage: dbPath: C:\Program Files\MongoDB\Server\5.0\data journal: enabled: true # engine: # wiredTiger: # where to write logging data. systemLog: destination: file logAppend: true path: C:\Program Files\MongoDB\Server\5.0\log\mongod.log # network interfaces net: port: 27017 bindIp: 127.0.0.1 #processManagement: #security: #operationProfiling: #replication: #sharding: ## Enterprise-Only Options: #auditLog: #snmp: ```
2021/11/23
[ "https://Stackoverflow.com/questions/70081140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17488374/" ]
Well, first, I must say that Matt's answer (<https://stackoverflow.com/a/71179703/1264663>) gave some clues about the solution. What was missing there was a better explanation. The thing is that the replica set itself must be initiated. The commands starting the MongoDB daemons will keep showing the error until that happens, or will eventually stop (on a timeout). I want to make clear that this worked for me on Windows. I was following a tutorial where the instructor did the procedure on Linux, and the error didn't appear. I'm going to describe all the steps here to make it clearer how to solve the problem. As has already been said, the port and the data folder must be different for each node. 1. In separate command shells run every `mongod` command (e.g. `mongod --replSet cookingSet --dbpath=c:\temp\mongodb\data\rs1 --port 27018`) and let them run. 2. In another command shell, connect to one of the daemons started in step 1, using the MongoDB shell, like `mongosh --port 27018`. Note that the port must be specified, and must match one of the ports used for starting the nodes. 3. In this shell, create a variable like > > > ``` > rsconfig = { > _id: "cookingSet", > members: [ > {_id: 0, host: "localhost:27018"}, > {_id: 1, host: "localhost:27019"}, > {_id: 2, host: "localhost:27020"} > ] > } > > ``` > > where \_id must be equal to the value of the 'replSet' parameter used for starting the nodes. In this case, all nodes are running on my localhost. This variable represents the configuration of the replica set. 4. Run the command `rs.initiate(rsconfig)`, where 'rsconfig' is the name of the variable created before. 5. Verify the replica set by running `rs.status()`. You should see the members listed in the result. 6. Verify that the error is no longer thrown: go back to the shells where the nodes were started and confirm that the message does not appear anymore.
I was facing the same problem but was using docker-compose. Solved: This is my *docker-compose.yaml* file: ```yaml version: "3" services: mongo1: hostname: mongo1 container_name: localmongo1 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongo2: hostname: mongo2 container_name: localmongo2 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongo3: hostname: mongo3 container_name: localmongo3 image: mongo expose: - 27017 restart: always entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ] mongosetup: image: mongo depends_on: - mongo1 - mongo2 - mongo3 volumes: - .:/scripts restart: "no" entrypoint: [ "bash", "/scripts/mongo_setup.sh"] ``` And here is the script for replicaSet initialization (file name *mongo\_setup.sh*): ```bash #!/bin/bash echo "sleeping for 10 seconds" sleep 10 echo mongo_setup.sh time now: `date +"%T" ` mongosh --host mongo1:27017 <<EOF var cfg = { "_id": "rs0", "version": 1, "members": [ { "_id": 0, "host": "mongo1:27017", "priority": 2 }, { "_id": 1, "host": "mongo2:27017", "priority": 0 }, { "_id": 2, "host": "mongo3:27017", "priority": 0 } ] }; rs.initiate(cfg); EOF ```
41,544,560
I want to validate the email in iOS. I've written a category class and here's my code, but in some cases, like **abc@gmail.com.com**, it is not working. If the user types .com or .in twice in a row, it is not detected. I tried some solutions, but they also did not work. That's why I am asking here. ``` - (BOOL)isValidEmail { NSString *emailString = [self stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; BOOL isValid = YES; BOOL stricterFilter = YES; NSString *stricterFilterString = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSString *laxString = @".+@.+\\.[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if (![emailTest evaluateWithObject:emailString]) { isValid = NO; } return isValid; } ```
2017/01/09
[ "https://Stackoverflow.com/questions/41544560", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try re-ordering your args so that ACCESS\_KEY is the 1st param and SECRET\_KEY is the second: ``` creds := credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, "") ``` Try adding the region as well: ``` sess, err := session.NewSession(&aws.Config{ Region: aws.String("us-west-2"), Credentials: credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, ""), }) ```
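A small, self-contained sketch of the suggestion above, with placeholder values standing in for `conf.AWS_ACCESS_KEY_ID` and `conf.AWS_SECRET_ACCESS_KEY` (those names come from the answer's context, not from this sketch). It forces the credential provider to resolve once at startup, which surfaces an empty or missing key immediately instead of on the first API call.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Placeholder key pair; access key ID first, secret second.
	sess, err := session.NewSession(&aws.Config{
		Region:      aws.String("us-west-2"),
		Credentials: credentials.NewStaticCredentials("AKIDEXAMPLE", "SECRETEXAMPLE", ""),
	})
	if err != nil {
		log.Fatalln("failed to create session:", err)
	}

	// Get() resolves the provider now; with static credentials it fails
	// only when the key ID or the secret is empty.
	val, err := sess.Config.Credentials.Get()
	if err != nil {
		log.Fatalln("credentials did not resolve:", err)
	}
	fmt.Println("credentials resolved from provider:", val.ProviderName)
}
```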
Additionally, in case you didn't know, the SDK allows the use of the shared config under `.aws/config`. You can put your values in there and then set the environment variable `AWS_SDK_LOAD_CONFIG` to a truthy value to load the shared config. An example shared config would look like this: ``` [default] aws_access_key_id = AKID aws_secret_access_key = SECRET ``` Then run: ``` AWS_SDK_LOAD_CONFIG=true go run main.go ```
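For completeness, the same shared-config behaviour can also be switched on from code instead of the `AWS_SDK_LOAD_CONFIG` environment variable. This is a minimal sketch assuming the v1 `aws-sdk-go` used in these answers and the default profile:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// SharedConfigEnable has the same effect as AWS_SDK_LOAD_CONFIG=true:
	// both the shared config file and the shared credentials file are read.
	sess, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
		Profile:           "default",
	})
	if err != nil {
		log.Fatalln("failed to create session:", err)
	}

	// If the profile defines a region, it is now set on the session.
	fmt.Println("region from shared config:", aws.StringValue(sess.Config.Region))
}
```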
41,544,560
I want to validate the email in iOS. I've written a category class and here's my code, but in some cases, like **abc@gmail.com.com**, it is not working. If the user types .com or .in twice in a row, it is not detected. I tried some solutions, but they also did not work. That's why I am asking here. ``` - (BOOL)isValidEmail { NSString *emailString = [self stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; BOOL isValid = YES; BOOL stricterFilter = YES; NSString *stricterFilterString = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSString *laxString = @".+@.+\\.[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if (![emailTest evaluateWithObject:emailString]) { isValid = NO; } return isValid; } ```
2017/01/09
[ "https://Stackoverflow.com/questions/41544560", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try re-ordering your args so that ACCESS\_KEY is the 1st param and SECRET\_KEY is the second: ``` creds := credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, "") ``` Try adding the region as well: ``` sess, err := session.NewSession(&aws.Config{ Region: aws.String("us-west-2"), Credentials: credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, ""), }) ```
Or you can just temporarily set environment variables. ``` package main import ( "fmt" "os" "path/filepath" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/s3/s3manager" ) const ( AccessKeyId = "XXXXXXXXXXXXXXXXXX" SecretAccessKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" Region = "eu-west-1" Bucket = "XXXXX-XXXX-XXX" ) func main() { os.Setenv("AWS_ACCESS_KEY_ID", AccessKeyId) os.Setenv("AWS_SECRET_ACCESS_KEY", SecretAccessKey) filename := os.Args[1] file, err := os.Open(filename) if err != nil { fmt.Println("Failed to open file", filename, err) os.Exit(1) } defer file.Close() conf := aws.Config{Region: aws.String(Region)} sess := session.New(&conf) svc := s3manager.NewUploader(sess) fmt.Println("Uploading file to S3...") result, err := svc.Upload(&s3manager.UploadInput{ Bucket: aws.String(Bucket), Key: aws.String(filepath.Base(filename)), Body: file, }) if err != nil { fmt.Println("error", err) os.Exit(1) } fmt.Println("Successfully uploaded", filename, "to", result.Location) } ```
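If the opposite direction is needed as well, a matching download sketch works with the same environment-variable setup; the bucket name, object key, local file name, and region below are placeholders, not values from the question.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are already set,
	// for example via os.Setenv as in the upload example above.
	sess, err := session.NewSession(&aws.Config{Region: aws.String("eu-west-1")})
	if err != nil {
		log.Fatalln("failed to create session:", err)
	}

	// Local file that will receive the object's contents.
	file, err := os.Create("downloaded.dat")
	if err != nil {
		log.Fatalln("failed to create local file:", err)
	}
	defer file.Close()

	downloader := s3manager.NewDownloader(sess)
	n, err := downloader.Download(file, &s3.GetObjectInput{
		Bucket: aws.String("XXXXX-XXXX-XXX"), // placeholder bucket
		Key:    aws.String("some/key.dat"),   // placeholder object key
	})
	if err != nil {
		log.Fatalln("download failed:", err)
	}
	fmt.Printf("downloaded %d bytes\n", n)
}
```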
41,544,560
I want to validate the email in iOS. I've written a category class and here's my code, but in some cases, like **abc@gmail.com.com**, it is not working. If the user types .com or .in twice in a row, it is not detected. I tried some solutions, but they also did not work. That's why I am asking here. ``` - (BOOL)isValidEmail { NSString *emailString = [self stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; BOOL isValid = YES; BOOL stricterFilter = YES; NSString *stricterFilterString = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSString *laxString = @".+@.+\\.[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if (![emailTest evaluateWithObject:emailString]) { isValid = NO; } return isValid; } ```
2017/01/09
[ "https://Stackoverflow.com/questions/41544560", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try re-ordering your args so that ACCESS\_KEY is the 1st param and SECRET\_KEY is the second: ``` creds := credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, "") ``` Try adding the region as well: ``` sess, err := session.NewSession(&aws.Config{ Region: aws.String("us-west-2"), Credentials: credentials.NewStaticCredentials(conf.AWS_ACCESS_KEY_ID, conf.AWS_SECRET_ACCESS_KEY, ""), }) ```
Connect your SDK client using this generic service: ``` package main import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/session" ) var awsSession *session.Session func init() { initializeAwsSession() } func initializeAwsSession() { awsSession = session.Must(session.NewSession(&aws.Config{ Region: aws.String("ap-southeast-1"), Credentials: credentials.NewStaticCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY", ""), })) } ```
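A short usage sketch for that shared session, assuming it lives in the same package as the snippet above so `awsSession` has already been populated by `init()`; the S3 call is only an illustration of handing the session to a service client.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// awsSession comes from the snippet above (same package) and was
	// initialised by init() before main runs.
	svc := s3.New(awsSession)

	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Fatalln("ListBuckets failed:", err)
	}
	for _, b := range out.Buckets {
		fmt.Println("bucket:", *b.Name)
	}
}
```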
41,544,560
I want to validate the email in iOS. I've written a category class and here's my code, but in some cases, like **abc@gmail.com.com**, it is not working. If the user types .com or .in twice in a row, it is not detected. I tried some solutions, but they also did not work. That's why I am asking here. ``` - (BOOL)isValidEmail { NSString *emailString = [self stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; BOOL isValid = YES; BOOL stricterFilter = YES; NSString *stricterFilterString = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSString *laxString = @".+@.+\\.[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if (![emailTest evaluateWithObject:emailString]) { isValid = NO; } return isValid; } ```
2017/01/09
[ "https://Stackoverflow.com/questions/41544560", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Additionally, in case you didn't know, the SDK allows the use of the shared config under `.aws/config`. You can put your values in there and then set the environment variable `AWS_SDK_LOAD_CONFIG` to a truthy value to load the shared config. An example shared config would look like this: ``` [default] aws_access_key_id = AKID aws_secret_access_key = SECRET ``` Then run: ``` AWS_SDK_LOAD_CONFIG=true go run main.go ```
Connect your SDK client using this generic service: ``` package main import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/session" ) var awsSession *session.Session func init() { initializeAwsSession() } func initializeAwsSession() { awsSession = session.Must(session.NewSession(&aws.Config{ Region: aws.String("ap-southeast-1"), Credentials: credentials.NewStaticCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY", ""), })) } ```
41,544,560
I want to validate the email in iOS. I've written a category class and here's my code, but in some cases, like **abc@gmail.com.com**, it is not working. If the user types .com or .in twice in a row, it is not detected. I tried some solutions, but they also did not work. That's why I am asking here. ``` - (BOOL)isValidEmail { NSString *emailString = [self stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; BOOL isValid = YES; BOOL stricterFilter = YES; NSString *stricterFilterString = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSString *laxString = @".+@.+\\.[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if (![emailTest evaluateWithObject:emailString]) { isValid = NO; } return isValid; } ```
2017/01/09
[ "https://Stackoverflow.com/questions/41544560", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Or you can just temporarily set environment variables. ``` package main import ( "fmt" "os" "path/filepath" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/s3/s3manager" ) const ( AccessKeyId = "XXXXXXXXXXXXXXXXXX" SecretAccessKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" Region = "eu-west-1" Bucket = "XXXXX-XXXX-XXX" ) func main() { os.Setenv("AWS_ACCESS_KEY_ID", AccessKeyId) os.Setenv("AWS_SECRET_ACCESS_KEY", SecretAccessKey) filename := os.Args[1] file, err := os.Open(filename) if err != nil { fmt.Println("Failed to open file", filename, err) os.Exit(1) } defer file.Close() conf := aws.Config{Region: aws.String(Region)} sess := session.New(&conf) svc := s3manager.NewUploader(sess) fmt.Println("Uploading file to S3...") result, err := svc.Upload(&s3manager.UploadInput{ Bucket: aws.String(Bucket), Key: aws.String(filepath.Base(filename)), Body: file, }) if err != nil { fmt.Println("error", err) os.Exit(1) } fmt.Println("Successfully uploaded", filename, "to", result.Location) } ```
Connect your SDK client using this generic service: ``` package main import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/session" ) var awsSession *session.Session func init() { initializeAwsSession() } func initializeAwsSession() { awsSession = session.Must(session.NewSession(&aws.Config{ Region: aws.String("ap-southeast-1"), Credentials: credentials.NewStaticCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY", ""), })) } ```
58,386
For my Ph.D. work I used a large volume of data which cannot be put into my thesis. So I am planning to put all this data online, so that others could download it freely and cross-check my work. It would also help future scholars to do the same work conveniently in the future. Providing the data used for research online is not commonly practiced in my country. I just want to know: is it right (and sensible) to do this? Won't it lead to any complications in the future? Can you also please tell me what is practiced in other countries like the USA and UK regarding data? [Note: all data was downloaded from free sources, so it does not involve any copyright issues]
2015/11/18
[ "https://academia.stackexchange.com/questions/58386", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/38848/" ]
In general I think this is a good idea, but it might depend on your area. This way others can verify your method and compare their own approaches to yours. At least this is how it is done in, for example, Computer Vision and Machine Learning. Some examples: * [Online image database (University of Edinburgh)](http://homepages.inf.ed.ac.uk/rbf/CVonline/Imagedbase.htm) * [Different data sets of the University of Texas](http://www.cs.utexas.edu/~grauman/courses/spring2008/datasets.htm) * [Computer Vision test images (Carnegie Mellon University)](https://www.cs.cmu.edu/~cil/v-images.html) * [Microsoft Research Image understanding](http://research.microsoft.com/en-us/projects/objectclassrecognition/) Some theses with a dataset published: * [Doug Downey's PhD thesis](http://turing.cs.washington.edu/papers/downey_thesis_data/) * [Soo Ling Lim's PhD thesis](https://soolinglim.wordpress.com/datasets/) * [Toine Borgers' PhD thesis](http://ilk.uvt.nl/~toine/phd-thesis/) [Professor S. Rüger](http://people.kmi.open.ac.uk/stefan/) from the Open University UK, wrote in [How to write a good PhD thesis and survive the viva](http://people.kmi.open.ac.uk/stefan/thesis-writing.pdf) (for a PhD in the computing subject): > > Some value in a PhD thesis is drawn from careful > experimental design. It is best practice to only change one parameter at a time; **to use datasets that are publicly available or at least make datasets available**; to describe experiments in a way so that they are reproducible; and, particularly in Computing, to set up experiments in > an automated batch fashion. > > > You do, however, have to make sure that you are allowed to (re)publish the data on your website without violating any copyrights (even if it can be downloaded for free); otherwise you can just link to the external data set. *Edit:* also see [this question](https://datascience.stackexchange.com/questions/155/publicly-available-datasets) about [Publicly Available Datasets](https://datascience.stackexchange.com/questions/155/publicly-available-datasets).
Some things people should consider before making their data freely downloadable. Many of these may not apply to you, but they're general considerations. In some cases these are things that should be decided before data are ever collected. 1. **Are the data truly yours?** Does your institution have some claim to the raw data, or to the analysis? Do you have co-authors who also have a claim to the data? Do you have permission from all parties to make this available? Can you document this? 2. Is there anything **confidential** in the data? 3. Is there anything **copyrighted** in the data? Are there other potential legal concerns about making it available? If you have modified or analyzed the data using software, is the download compatible with the software license? 4. If humans were involved in any way, is their information **anonymized**? Did they give permission to make their data available, even in aggregate? Did your institutional review board approve this part of the project? Do you have clear documentation showing this? 5. Are you willing and able to **maintain the data**? Will it be on a site that you control and will control for a period of time? If not, who controls it, and are they willing and able to continue to make it available? What is a reasonable time for the data to remain available -- two years? Five? Twenty? For many studies, 1-4 may not apply, but 5 is something people don't seem to think about very much. Far too often, individual researchers stick their data, or supplementary information or whatever, up on their institutional web sites, and then two years later their IT people do some reorganization and all the links are broken; or the people move to another institution and their pages are all deleted; or a bug hits and no one notices. Or they put the data up on their personal pages, and then GeoCities is bought by Yahoo! and gets shut down in their country. The web five years ago was a very different place from today, and it will be very different in another five years. One option is [Dryad](http://datadryad.org), which promises to store researcher data; see their [claims](http://datadryad.org/pages/repository) here. I have no experience with them other than downloading data, but the concept seems good.
48,687,656
I have added a custom radio button, but the radio button is not showing. It works when arranged like this:

```
<label>
  <input type="radio" name="gender" />
  <span>Female</span>
</label>
```

Why is it not working? Is there anything wrong with this code?

My HTML code:

```
<div class="radio_btns">
  <label><span>Gender</span></label>
  <label>
    <span>Female</span>
    <input type="radio" name="gender" />
  </label>
  <label>
    <span>Male</span>
    <input type="radio" name="gender" />
  </label>
</div>
```

CSS code:

```
.radio_btns {width:100%; float:left;}
.radio_btns [type="radio"] {border: 0; clip: rect(0 0 0 0); height: 1px; margin: -1px; overflow: hidden; padding: 0; position: absolute; width: 1px;}
.radio_btns label {display: block; cursor: pointer; line-height: 2.5;}
.radio_btns [type="radio"] + span {display: block;}
.radio_btns [type="radio"] + span:before {content: ''; display: inline-block; width: 1em; height: 1em; vertical-align: -0.25em; border-radius: 1em; border: 0.125em solid #fff; box-shadow: 0 0 0 0.15em #000; margin-right: 0.75em; transition: 0.5s ease all;}
.radio_btns [type="radio"]:checked + span:before {background: #07eb07; box-shadow: 0 0 0 0.25em #000;}
.radio_btns label {float:left; width:150px;}
```

[What I get with this code](https://jsfiddle.net/2w72ets2/)
2018/02/08
[ "https://Stackoverflow.com/questions/48687656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7963454/" ]
The plus is an [adjacent sibling selector](https://developer.mozilla.org/en-US/docs/Web/CSS/Adjacent_sibling_selectors) - it means the element directly following so in your case, you need a `span` after your `input` for the styles to work: ```css .radio_btns { width: 100%; float: left; } .radio_btns [type="radio"] { border: 0; clip: rect(0 0 0 0); height: 1px; margin: -1px; overflow: hidden; padding: 0; position: absolute; width: 1px; } .radio_btns label { display: block; cursor: pointer; line-height: 2.5; } .radio_btns [type="radio"]+span { display: inline-block; } .radio_btns [type="radio"]+span:before { content: ''; display: inline-block; width: 1em; height: 1em; vertical-align: -0.25em; border-radius: 1em; border: 0.125em solid #fff; box-shadow: 0 0 0 0.15em #000; margin-right: 0.75em; transition: 0.5s ease all; } .radio_btns [type="radio"]:checked+span:before { background: #07eb07; box-shadow: 0 0 0 0.25em #000; } .radio_btns label { float: left; width: 150px; } ``` ```html <div class="radio_btns"> <label><span>Gender</span></label> <label> <span>Female</span> <input type="radio" name="gender" /> <span></span> </label> <label> <span>Male</span> <input type="radio" name="gender" /> <span></span> </label> </div> ```
The problem is with the CSS style: for the class `.radio_btns [type="radio"]` you are setting `height: 1px;`, `width: 1px;` and `position: absolute;`, which are applied to the radio button, so it cannot be displayed properly. Remove those CSS properties to see the radio buttons.
55,811,491
I try to load data into my DataTable from my Firestore database but it doesn't work. Is there any other way to push data from Firestore to an array?

```js
$(document).ready(function() {
  $('#pageName').html('Dashboard');
  $('#pageName-li').html('Dashboard');

  var dataSet = [];
  x = 1
  let db = firebase.firestore();
  db.collection("warehouses").where("useremail", "==", firebase.auth().currentUser.email).get().then(function(querySnapshot) {
    querySnapshot.forEach(function(doc) {
      dataSet[x][1] = doc.data().name;
      dataSet[x][2] = doc.data().useremail;
      dataSet[x][3] = doc.data().address;
      x++
    });
  });

  $('#example').DataTable({
    data: dataSet,
    columns: [
      { title: "Name" },
      { title: "Address" },
      { title: "User email" }
    ]
  });
});
```
2019/04/23
[ "https://Stackoverflow.com/questions/55811491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4086818/" ]
Data is loaded from Firestore asynchronously. You can best see how this affects your program with some `console.log` statements:

```
console.log("Before running query");
db.collection("warehouses").where("useremail","==",firebase.auth().currentUser.email).get().then(function(querySnapshot) {
  console.log("Got data");
});
console.log("After running query");
```

When you run this code you get the following output:

> Before running query
>
> After running query
>
> Got data

This is probably not the order that you expected the output in. But it completely explains why your code doesn't work: by the time you call `$('#example').DataTable( { data: dataSet, ...` the data hasn't been loaded yet.

---

Since data is loaded from Firebase asynchronously, any code that needs access to the data must be inside the `then` callback, or be called from there. So the simplest fix is to populate the data table from within the callback:

```
db.collection("warehouses").where("useremail","==",firebase.auth().currentUser.email).get().then(function(querySnapshot) {
  querySnapshot.forEach(function(doc) {
    // Build each row as an array, in the same order as the column titles below.
    dataSet.push([doc.data().name, doc.data().address, doc.data().useremail]);
  });
  $('#example').DataTable( {
    data: dataSet,
    columns: [
      { title: "Name" },
      { title: "Address" },
      { title: "User email" }
    ]
  });
});
```
Thank you Frank. Here is the working code (it doesn't work without a button, so I added one).

```
<button type="button" id="warehousesGetDataBTN">Get Warehouses</button>

<table id="example" class="display" width="100%"></table>

<script>
function getDatainTable() {
  let db = firebase.firestore();
  var dataSet = new Array();
  var i = 1;
  db.collection("warehouses").where("useremail","==",firebase.auth().currentUser.email).get().then(function(querySnapshot) {
    querySnapshot.forEach(function(doc) {
      dataSet.push([doc.data().name, doc.data().useremail]);
      i = i + 1;
    });
    $('#example').DataTable( {
      data: dataSet,
      columns: [
        { title: "Name" },
        { title: "Email" }
      ]
    } );
  });
}

$(document).ready(function(){
  $( '#warehousesGetDataBTN' ).click(function(){ getDatainTable() });
});
</script>
```
31,102,047
I have a control in my app that can be resized by the user; it has some buttons anchored to the top-right side and also a scrollbar. The problem is that when the control is resized, the controls anchored to the right also change position, and only after a few ms do they move into the right place. So it looks like the child controls "shake" while the parent control is resized. I already tried all kinds of things, like using `SuspendLayout` and `ResumeLayout` on the parent control, setting double buffering and other styles on each control to true, and setting the `WS_EX_COMPOSITED` bit, but nothing seems to make this issue go away. This issue is present in other apps too, and is pretty annoying. So is there any way to fix that in .NET? Maybe making it render everything to a backbuffer, and then rendering it to the screen only when *everything* is finished?
2015/06/28
[ "https://Stackoverflow.com/questions/31102047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4103818/" ]
I would create a new event that fires after resizing is done. With a little timer magic (stopping and restarting a timer with an interval of about 50 ms on each resize event) you can create this fake ResizeEnd kind of event. On the first resize event I would stop the drawing of the UI using the DllImport call (don't recall which it was) that stops drawing the contents of your window or control. Then, when the resize is done, enable drawing again using the same DllImport call. The effect would be that it only redraws itself after the resize is done, or every 50 ms if you pause while resizing.

ResizeEnd: [WinForms - action after resize event](https://stackoverflow.com/questions/3083146/winforms-action-after-resize-event/30398501#30398501)

SuspendDrawing: [How do I suspend painting for a control and its children?](https://stackoverflow.com/questions/487661/how-do-i-suspend-painting-for-a-control-and-its-children)
Override the virtual method below (`Point` comes from the **System.Drawing** namespace):

```
protected override Point ScrollToControl(Control activeControl)
{
    return AutoScrollPosition;
}
```

This should solve the problem!
69,008,705
**This is the code in python for reading the particular file.** class Display2(Screen): ``` def data_even(self): w_f = "abc.docs" try: with open(f"{w_f}") as w: f = w.readlines() self.ids.display.text = str(f) except FileNotFoundError: self.ids.display.text = "Not found, Sorry, the user has no data entered yet." ``` **In KIVY** KV= ''' : ``` name: "Display2" GridLayout: cols: 1 Label: id: display text: "I will be displaying your data" color: "#1e272e" MDFloatLayout: MDFillRoundFlatButton: pos_hint: {'center_x': 0.1, 'center_y': 0.1} id:data text: "Show Data" on_press: root.data_even() MDFillRoundFlatButton: pos_hint: {'center_x': 0.9, 'center_y': 0.1} text: "Close" on_release: app.root.current = "Evening" root.manager.transition.direction = "left" ``` ''' When I am trying to run this above code snippet I am not able to see the text which is present in the file instead only a black colored label is coming. I hope I explained myself clear. Thanks!! **Please note this is not the full code only some snippets of the program**
2021/09/01
[ "https://Stackoverflow.com/questions/69008705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16426039/" ]
Something simple like this should do the trick. ```js const callAPI = (value) =>{ if(value.length === 11){ console.log("Call API") } } ``` ```html <input type="text" oninput="callAPI(value)"/> ```
You could make the component be a [controlled component](https://reactjs.org/docs/forms.html#controlled-components), and perform the check to trigger the API call in the input's change handler. Here is a code sample. ``` import { useState } from 'react'; const MyComponent = () => { const [inputValue, setInputValue] = useState(0); const handleChange = (e) => { setInputValue(e.target.value); if (inputValue.length === 11) { // code to trigger API call } } return ( <input value={inputValue} onChange={(e) => handleChange(e)} /> ); } ``` **Explanation:** A controlled component controls the value of the input element using the React state itself. 1. Import the state hook. `import { useState } from 'react';` 2. Use a state hook for the input's value. `const [inputValue, setInputValue] = useState(0);` 3. Set the input's value attribute to equal to the state. `<input value={inputValue} />` 4. Add an `onChange` event handler to the input element. `<input value={inputValue} onChange={handleChange} />` 5. Create the event handler. `const handleChange = (e) => { //code for event handler }` Whenever you type in the input field, this will trigger the `onChange` event and run the event handler `handleChange`. 6. In the event handler, first update the state using the user input. `setInputValue(e.target.value);` 7. Then, check the length of the input value, and trigger the call accordingly. `if (inputValue.length === 11) { // code to trigger API call }`
69,008,705
**This is the code in python for reading the particular file.** class Display2(Screen): ``` def data_even(self): w_f = "abc.docs" try: with open(f"{w_f}") as w: f = w.readlines() self.ids.display.text = str(f) except FileNotFoundError: self.ids.display.text = "Not found, Sorry, the user has no data entered yet." ``` **In KIVY** KV= ''' : ``` name: "Display2" GridLayout: cols: 1 Label: id: display text: "I will be displaying your data" color: "#1e272e" MDFloatLayout: MDFillRoundFlatButton: pos_hint: {'center_x': 0.1, 'center_y': 0.1} id:data text: "Show Data" on_press: root.data_even() MDFillRoundFlatButton: pos_hint: {'center_x': 0.9, 'center_y': 0.1} text: "Close" on_release: app.root.current = "Evening" root.manager.transition.direction = "left" ``` ''' When I am trying to run this above code snippet I am not able to see the text which is present in the file instead only a black colored label is coming. I hope I explained myself clear. Thanks!! **Please note this is not the full code only some snippets of the program**
2021/09/01
[ "https://Stackoverflow.com/questions/69008705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16426039/" ]
Something simple like this should do the trick. ```js const callAPI = (value) =>{ if(value.length === 11){ console.log("Call API") } } ``` ```html <input type="text" oninput="callAPI(value)"/> ```
If you are making a controlled form it is easy to check when you update the value. ``` import { useState } from "react"; export default function Form() { const [inputValue, setInputValue] = useState(""); function handelChange(event) { setInputValue(event.target.value); // If the length is 11 characters do something if (inputValue.length === 11) { document.body.style.backgroundColor = "black"; } else { // Otherwise do something else document.body.style.backgroundColor = "white"; } } return ( <input type="text" onChange={(e) => { handelChange(e); }} value={inputValue} /> ); } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
Use this class to hide and show keyboard at runtime. Try to call the method on your onTabChangedListener. Hope it helps. ``` public class KeyBoardHandler { public static void hideSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.hideSoftInputFromWindow(activity.getCurrentFocus().getWindowToken(), 0); } public static void showSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.toggleSoftInput(InputMethodManager.SHOW_FORCED, InputMethodManager.HIDE_IMPLICIT_ONLY); } } ```
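For example, a minimal sketch of calling that helper from a tab change listener (the activity name and layout are hypothetical, and it assumes the tabs are built on a `TabHost`; if the tab strip sits on top of a `ViewPager`, the same call can go into an `OnPageChangeListener` instead):

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TabHost;

// Hypothetical host activity: hides the soft keyboard every time the selected tab changes.
public class TabsActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_tabs); // assumed layout containing the TabHost

        // Assumes the TabHost is already set up and its tabs have been added elsewhere.
        TabHost tabHost = (TabHost) findViewById(android.R.id.tabhost);
        tabHost.setOnTabChangedListener(new TabHost.OnTabChangeListener() {
            @Override
            public void onTabChanged(String tabId) {
                // Note: hideSoftKeyboard() uses getCurrentFocus(), which can be null
                // when no view has focus, so guard the call.
                if (TabsActivity.this.getCurrentFocus() != null) {
                    KeyBoardHandler.hideSoftKeyboard(TabsActivity.this);
                }
            }
        });
    }
}
```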
Try with below code Interface: ICallBacks ``` public interface ICallBacks { public void isChanged(); } ``` In your activity define on variable like ``` public ICallBacks mCallbacks; ``` In OnPageChangeListener ``` @Override public void onPageScrolled(int arg0, float arg1, int arg2) { if (mCallbacks != null) mCallbacks.isChanged(); } ``` In your fragment you need to implement with ICallBacks interface ``` @Override public void onAttach(Activity activity) { // TODO Auto-generated method stub super.onAttach(activity); if (activity != null) { ((PagerActivity) getActivity()).mCallbacks = this; } } @Override public void isChanged() { if (isVisible()) hideKeyboard(); } private void hideKeyboard() { InputMethodManager inputManager = (InputMethodManager) this.getSystemService(Context.INPUT_METHOD_SERVICE); View view = this.getCurrentFocus(); if (view != null) { inputManager.hideSoftInputFromWindow(view.getWindowToken(), InputMethodManager.HIDE_NOT_ALWAYS); } } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
Put this code in onDestroy method of fragment. ``` try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(mView.getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { } ```
Try with below code Interface: ICallBacks ``` public interface ICallBacks { public void isChanged(); } ``` In your activity define on variable like ``` public ICallBacks mCallbacks; ``` In OnPageChangeListener ``` @Override public void onPageScrolled(int arg0, float arg1, int arg2) { if (mCallbacks != null) mCallbacks.isChanged(); } ``` In your fragment you need to implement with ICallBacks interface ``` @Override public void onAttach(Activity activity) { // TODO Auto-generated method stub super.onAttach(activity); if (activity != null) { ((PagerActivity) getActivity()).mCallbacks = this; } } @Override public void isChanged() { if (isVisible()) hideKeyboard(); } private void hideKeyboard() { InputMethodManager inputManager = (InputMethodManager) this.getSystemService(Context.INPUT_METHOD_SERVICE); View view = this.getCurrentFocus(); if (view != null) { inputManager.hideSoftInputFromWindow(view.getWindowToken(), InputMethodManager.HIDE_NOT_ALWAYS); } } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
The problem with using that code in `onCreateView()` is the fragment initialized within the tabs are created whenever the tabs are created in the parent fragment / activity. I did some investigating with the behavior of fragments within tabs and realised you'd have the same problem overriding many of the lifecycle methods such as `onViewCreated()`, `onResume()`, etc. I found that the best solution to this problem is to override `setUserVisibleHint(boolean isVisibleToUser)` in the fragment where you want the keyboard to be hidden. This method is called any time the visibility of a fragment changes. ``` @Override public void setUserVisibleHint(boolean isVisibleToUser) { super.setUserVisibleHint(isVisibleToUser); if (isVisibleToUser) { try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(getView().getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { Log.e(TAG, "setUserVisibleHint: ", e); } } } ```
Try with below code Interface: ICallBacks ``` public interface ICallBacks { public void isChanged(); } ``` In your activity define on variable like ``` public ICallBacks mCallbacks; ``` In OnPageChangeListener ``` @Override public void onPageScrolled(int arg0, float arg1, int arg2) { if (mCallbacks != null) mCallbacks.isChanged(); } ``` In your fragment you need to implement with ICallBacks interface ``` @Override public void onAttach(Activity activity) { // TODO Auto-generated method stub super.onAttach(activity); if (activity != null) { ((PagerActivity) getActivity()).mCallbacks = this; } } @Override public void isChanged() { if (isVisible()) hideKeyboard(); } private void hideKeyboard() { InputMethodManager inputManager = (InputMethodManager) this.getSystemService(Context.INPUT_METHOD_SERVICE); View view = this.getCurrentFocus(); if (view != null) { inputManager.hideSoftInputFromWindow(view.getWindowToken(), InputMethodManager.HIDE_NOT_ALWAYS); } } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
Use this class to hide and show keyboard at runtime. Try to call the method on your onTabChangedListener. Hope it helps. ``` public class KeyBoardHandler { public static void hideSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.hideSoftInputFromWindow(activity.getCurrentFocus().getWindowToken(), 0); } public static void showSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.toggleSoftInput(InputMethodManager.SHOW_FORCED, InputMethodManager.HIDE_IMPLICIT_ONLY); } } ```
Put this code in onDestroy method of fragment. ``` try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(mView.getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
The problem with using that code in `onCreateView()` is the fragment initialized within the tabs are created whenever the tabs are created in the parent fragment / activity. I did some investigating with the behavior of fragments within tabs and realised you'd have the same problem overriding many of the lifecycle methods such as `onViewCreated()`, `onResume()`, etc. I found that the best solution to this problem is to override `setUserVisibleHint(boolean isVisibleToUser)` in the fragment where you want the keyboard to be hidden. This method is called any time the visibility of a fragment changes. ``` @Override public void setUserVisibleHint(boolean isVisibleToUser) { super.setUserVisibleHint(isVisibleToUser); if (isVisibleToUser) { try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(getView().getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { Log.e(TAG, "setUserVisibleHint: ", e); } } } ```
Use this class to hide and show keyboard at runtime. Try to call the method on your onTabChangedListener. Hope it helps. ``` public class KeyBoardHandler { public static void hideSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.hideSoftInputFromWindow(activity.getCurrentFocus().getWindowToken(), 0); } public static void showSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.toggleSoftInput(InputMethodManager.SHOW_FORCED, InputMethodManager.HIDE_IMPLICIT_ONLY); } } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
Use this class to hide and show keyboard at runtime. Try to call the method on your onTabChangedListener. Hope it helps. ``` public class KeyBoardHandler { public static void hideSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.hideSoftInputFromWindow(activity.getCurrentFocus().getWindowToken(), 0); } public static void showSoftKeyboard(Activity activity) { InputMethodManager inputMethodManager = (InputMethodManager) activity.getSystemService(Activity.INPUT_METHOD_SERVICE); inputMethodManager.toggleSoftInput(InputMethodManager.SHOW_FORCED, InputMethodManager.HIDE_IMPLICIT_ONLY); } } ```
If you come here like me in 2022 you should be using `ViewPager2` and `onResume` and `onPause` methods to know which fragment is currently visible. I found this answer [here](https://stackoverflow.com/questions/57885849/in-androidx-fragment-app-fragment-setuservisiblehint-is-deprecated-and-not-exec) and I got to it thanks to the IDE telling me a method used in [this](https://stackoverflow.com/a/43890114/9921564) answer is deprecated, so thanks for the answer! I am still trying to figure out how to not show it in the first place, but I guess that's because I selected something on the other tab which for some reason gets loaded first...
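A minimal sketch of that `ViewPager2` approach (the fragment class and layout names are hypothetical; the keyboard-hiding call itself is the same `InputMethodManager` call used in the other answers):

```java
import android.content.Context;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.view.inputmethod.InputMethodManager;

import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.fragment.app.Fragment;

// Hypothetical page fragment: with ViewPager2 (FragmentStateAdapter), only the visible page
// reaches RESUMED, so onResume()/onPause() tell you when this tab is actually shown.
public class PageFragment extends Fragment {

    @Nullable
    @Override
    public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container,
                             @Nullable Bundle savedInstanceState) {
        return inflater.inflate(R.layout.fragment_page, container, false); // assumed layout
    }

    @Override
    public void onResume() {
        super.onResume();
        hideKeyboard(); // hide the keyboard as soon as this tab becomes the visible one
    }

    private void hideKeyboard() {
        View focused = requireActivity().getCurrentFocus();
        View target = focused != null ? focused : getView();
        if (target != null) {
            InputMethodManager imm = (InputMethodManager)
                    requireActivity().getSystemService(Context.INPUT_METHOD_SERVICE);
            if (imm != null) {
                imm.hideSoftInputFromWindow(target.getWindowToken(), 0);
            }
        }
    }
}
```

The nice part of this over `setUserVisibleHint()` is that the fragment's normal lifecycle does the visibility bookkeeping for you.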
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
The problem with using that code in `onCreateView()` is the fragment initialized within the tabs are created whenever the tabs are created in the parent fragment / activity. I did some investigating with the behavior of fragments within tabs and realised you'd have the same problem overriding many of the lifecycle methods such as `onViewCreated()`, `onResume()`, etc. I found that the best solution to this problem is to override `setUserVisibleHint(boolean isVisibleToUser)` in the fragment where you want the keyboard to be hidden. This method is called any time the visibility of a fragment changes. ``` @Override public void setUserVisibleHint(boolean isVisibleToUser) { super.setUserVisibleHint(isVisibleToUser); if (isVisibleToUser) { try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(getView().getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { Log.e(TAG, "setUserVisibleHint: ", e); } } } ```
Put this code in onDestroy method of fragment. ``` try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(mView.getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { } ```
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
Put this code in onDestroy method of fragment. ``` try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(mView.getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { } ```
If you come here like me in 2022 you should be using `ViewPager2` and `onResume` and `onPause` methods to know which fragment is currently visible. I found this answer [here](https://stackoverflow.com/questions/57885849/in-androidx-fragment-app-fragment-setuservisiblehint-is-deprecated-and-not-exec) and I got to it thanks to the IDE telling me a method used in [this](https://stackoverflow.com/a/43890114/9921564) answer is deprecated, so thanks for the answer! I am still trying to figure out how to not show it in the first place, but I guess that's because I selected something on the other tab which for some reason gets loaded first...
28,358,881
I have created tab strip with custom classes and I am displaying one fragment in each tab. When the keyboard is open and I switch to tab then second fragment is getting called but the keyboard is not hiding. I am using the code below in onCreateView() in both fragment but it's not working: ``` //To Hide Soft getActivity().getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN); ```
2015/02/06
[ "https://Stackoverflow.com/questions/28358881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3138859/" ]
The problem with using that code in `onCreateView()` is the fragment initialized within the tabs are created whenever the tabs are created in the parent fragment / activity. I did some investigating with the behavior of fragments within tabs and realised you'd have the same problem overriding many of the lifecycle methods such as `onViewCreated()`, `onResume()`, etc. I found that the best solution to this problem is to override `setUserVisibleHint(boolean isVisibleToUser)` in the fragment where you want the keyboard to be hidden. This method is called any time the visibility of a fragment changes. ``` @Override public void setUserVisibleHint(boolean isVisibleToUser) { super.setUserVisibleHint(isVisibleToUser); if (isVisibleToUser) { try { InputMethodManager mImm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE); mImm.hideSoftInputFromWindow(getView().getWindowToken(), 0); mImm.hideSoftInputFromWindow(getActivity().getCurrentFocus().getWindowToken(), 0); } catch (Exception e) { Log.e(TAG, "setUserVisibleHint: ", e); } } } ```
If you come here like me in 2022 you should be using `ViewPager2` and `onResume` and `onPause` methods to know which fragment is currently visible. I found this answer [here](https://stackoverflow.com/questions/57885849/in-androidx-fragment-app-fragment-setuservisiblehint-is-deprecated-and-not-exec) and I got to it thanks to the IDE telling me a method used in [this](https://stackoverflow.com/a/43890114/9921564) answer is deprecated, so thanks for the answer! I am still trying to figure out how to not show it in the first place, but I guess that's because I selected something on the other tab which for some reason gets loaded first...
73,952,309
``` this.employeeForm = this.fb.group({ fullName: [ '', [ Validators.required, Validators.minLength(2), Validators.maxLength(10), ], ], email: [''], skills: this.fb.group({ skillName: [''], experienceInYears: [''], proficiency: [''], }), }); ``` I am using the reactive form (angular) for validation and error showcase. But to show error message that, input entered by user is not between min and max criteria, I am facing problem. ``` <div class="col-sm-8"> <input id="fullName" type="text" class="form-control" formControlName="fullName" /> <p *ngIf=" employeeForm.get('fullName')?.invalid && employeeForm.get('fullName')?.touched " > please enter the valid full name </p> <p *ngIf="employeeForm.get('fullName')?.errors"> <-- I am not able to access the minLength and maxLength after errors , therefore not able to show case the error message also Full name should be under min and max </p> </div> </div> ``` how to showcase the error in case of error for min and max length. As after `employeeForm.get('fullName')?.errors. no minLength / maxLenth` nothing is coming. thanks in advance
2022/10/04
[ "https://Stackoverflow.com/questions/73952309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18936446/" ]
the Toof\_LD's answer is fine but if you would like to think about implementing more reusable code then please have a look at the custom validators. They are very easy to use, and once created, you can use anywhere in your application. Of course, this is a solution recommended when there is a lot of code checking the validation (such as in HTML) or when you need to perform more advanced validation. Overall the syntax is very simple. Syntax for a custom validator with no parameters: ``` function customValidator(control: AbstractControl): {[key: string]: boolean} | null { if(sthWrong) { return { yourValidatorName: true}; } return null; } ``` Syntax for a validator with parameters: ``` function customValidatorWithParameters(parameters: any): ValidatorFn { return(control: AbstractControl): {[key: string]: boolean} | null { if(sthWrong) { return { yourValidatorName: true}; } return null; } } ``` The best part is that it all comes down to checking in HTML that the control is valid (eg by a true or false value). Below is an example with a simple use of a custom validator with parameters. <https://stackblitz.com/edit/angular-ivy-4tfvfd?file=src/app/app.component.ts>
Try this:

* In the .ts file:

```
get formControl(){
  return this.employeeForm.controls;
}
```

* In the form template (note that the error keys are lowercase: `minlength` and `maxlength`):

```html
<div class="col-sm-8">
  <input id="fullName" type="text"
         [ngClass]="{'is-invalid': isSubmitted && formControl.fullName.errors}"
         class="form-control"
         formControlName="fullName" />
  <div class="invalid-feedback" *ngIf="isSubmitted && formControl.fullName.errors">
    <ng-container *ngIf="formControl.fullName.errors.minlength || formControl.fullName.errors.maxlength">
      please enter the valid full name
    </ng-container>
  </div>
</div>
```
73,952,309
``` this.employeeForm = this.fb.group({ fullName: [ '', [ Validators.required, Validators.minLength(2), Validators.maxLength(10), ], ], email: [''], skills: this.fb.group({ skillName: [''], experienceInYears: [''], proficiency: [''], }), }); ``` I am using the reactive form (angular) for validation and error showcase. But to show error message that, input entered by user is not between min and max criteria, I am facing problem. ``` <div class="col-sm-8"> <input id="fullName" type="text" class="form-control" formControlName="fullName" /> <p *ngIf=" employeeForm.get('fullName')?.invalid && employeeForm.get('fullName')?.touched " > please enter the valid full name </p> <p *ngIf="employeeForm.get('fullName')?.errors"> <-- I am not able to access the minLength and maxLength after errors , therefore not able to show case the error message also Full name should be under min and max </p> </div> </div> ``` how to showcase the error in case of error for min and max length. As after `employeeForm.get('fullName')?.errors. no minLength / maxLenth` nothing is coming. thanks in advance
2022/10/04
[ "https://Stackoverflow.com/questions/73952309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18936446/" ]
the Toof\_LD's answer is fine but if you would like to think about implementing more reusable code then please have a look at the custom validators. They are very easy to use, and once created, you can use anywhere in your application. Of course, this is a solution recommended when there is a lot of code checking the validation (such as in HTML) or when you need to perform more advanced validation. Overall the syntax is very simple. Syntax for a custom validator with no parameters: ``` function customValidator(control: AbstractControl): {[key: string]: boolean} | null { if(sthWrong) { return { yourValidatorName: true}; } return null; } ``` Syntax for a validator with parameters: ``` function customValidatorWithParameters(parameters: any): ValidatorFn { return(control: AbstractControl): {[key: string]: boolean} | null { if(sthWrong) { return { yourValidatorName: true}; } return null; } } ``` The best part is that it all comes down to checking in HTML that the control is valid (eg by a true or false value). Below is an example with a simple use of a custom validator with parameters. <https://stackblitz.com/edit/angular-ivy-4tfvfd?file=src/app/app.component.ts>
You should use `maxlength` instead of `maxLength`.
69,340
Have a bit of a tough question, which I realize there may not be an easy solution to. Figured what better place to ask. I'm attempting to transform a polygon into a series of arbitrary points with arbitrary radii (circles!) that best represent the area covered by the polygon. Minor over- and underflow is acceptable, as is circle-overlap, in hopes of achieving an efficient solution (i.e. fewest possible points, no gaps). Ideally, a given poly would be represented by a few large circles, and several smaller circle on the perimeters. Essentially, the problem is that I have a dynamic number of polys that get hit on geo-spatial queries given a specific gps coordinate, however, we are required to move to a system wherein I will not be able to utilize a point-within-poly query, but will have to rely on point-within-distance queries. Hopefully someone has at least attempted something similar, and, if not, hopefully someone is willing to throw some ideas around! Open to most languages, but this needs to be done programmatically! --- Update per clarification in comments: My point is that I feed indexes to the system and I get alerted when they are hit. I don't perform any queries on this system myself (black box), so I wouldn't have control enough to negate a query. That's the reason I need to transform the poly to a representation of points.
2013/08/21
[ "https://gis.stackexchange.com/questions/69340", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/21298/" ]
If you are serious about sampling **uniformly** within a distance of a particular point then you need account for the curvature of the ellipsoid. This is pretty easy to do using a rejection technique. The following Matlab code implements this. ``` function [lat, lon] = geosample(lat0, lon0, r0, n) % [lat, lon] = geosample(lat0, lon0, r0, n) % % Return n points on the WGS84 ellipsoid within a distance r0 of % (lat0,lon0) and uniformly distributed on the surface. The returned % lat and lon are n x 1 vectors. % % Requires Matlab package % http://www.mathworks.com/matlabcentral/fileexchange/39108 todo = true(n,1); lat = zeros(n,1); lon = lat; while any(todo) n1 = sum(todo); r = r0 * max(rand(n1,2), [], 2); % r = r0*sqrt(U) using cheap sqrt azi = 180 * (2 * rand(n1,1) - 1); % sample azi uniformly [lat(todo), lon(todo), ~, ~, m, ~, ~, sig] = ... geodreckon(lat0, lon0, r, azi); % Only count points with sig <= 180 (otherwise it's not a shortest % path). Also because of the curvature of the ellipsoid, large r % are sampled too frequently, by a factor r/m. This following % accounts for this... todo(todo) = ~(sig <= 180 & r .* rand(n1,1) <= m); end end ``` This code samples uniformly within a circle on the azimuthal equidistant projection centered at *lat0*, *lon0*. The radial, resp. azimuthal, scale for this projection is 1, resp. *r/m*. Hence the areal distortion is *r/m* and this is accounted for by accepting such points with a probability *m/r*. This code also accounts for the situation where *r0* is about half the circumference of the earth and avoids double sampling nearly antipodal points.
You can use this formula, with a fixed distance d and random value of tc: <http://williams.best.vwh.net/avform.htm#LL> All parameters are radians. edit: This link is dead. Here is an archived version: <https://web.archive.org/web/20161209044600/http://williams.best.vwh.net/avform.htm#LL> For posterity, the formula is: > > **Lat/lon given radial and distance** > > > A point {lat,lon} is a distance d out on the tc radial from point 1 if: > > > > ``` > lat=asin(sin(lat1)*cos(d)+cos(lat1)*sin(d)*cos(tc)) > IF (cos(lat)=0) > lon=lon1 // endpoint a pole > ELSE > lon=mod(lon1-asin(sin(tc)*sin(d)/cos(lat))+pi,2*pi)-pi > ENDIF > > ``` > > This algorithm is limited to distances such that dlon extend around less than one quarter of the circumference of the earth in longitude. A completely general, but more complicated algorithm is necessary if greater distances are allowed: > > > > ``` > lat =asin(sin(lat1)*cos(d)+cos(lat1)*sin(d)*cos(tc)) > dlon=atan2(sin(tc)*sin(d)*cos(lat1),cos(d)-sin(lat1)*sin(lat)) > lon=mod( lon1-dlon +pi,2*pi )-pi > > ``` > >
201,029
I have a Windows workstation that has established an SSH connection to a remote server using PuTTY. From that workstation, I can tunnel stuff through the SSH tunnel, including an HTTP proxy that is running on the SSH server. I'm wondering, is it possible to enable other computers in the same subnet as the workstation to connect to that workstation, and use its SSH tunnel/HTTP proxy? I tried using the "Local ports accept connections from other hosts" option in PuTTY, but I still get a connection refused when I try to connect from other boxes to the Windows workstation. What is the proper way to configure this, so that I can share the HTTP proxy via the SSH tunnel? Cheers, Victor
2010/11/12
[ "https://serverfault.com/questions/201029", "https://serverfault.com", "https://serverfault.com/users/23018/" ]
This depends on which settings you used for the tunnels. Forget the crap which I wrote. Go with settings like these, but point the destination to the proxy server like you already did before I told you otherwise.

![tunnel settings](https://i.stack.imgur.com/U5iHf.png)

After initiating the tunnel, use `netstat -a` and check whether the port is opened correctly under the address `0.0.0.0`.

```
$ netstat -a | find "3025"
  TCP    0.0.0.0:3025      pacey-PC:0      ABHÖREN
  TCP    [::]:3025         pacey-PC:0      ABHÖREN
```

You can also check the event log in the PuTTY menu.
[corrected based on the comment below] Telnet to the forwarded port from the local machine and then from a remote machine. If you are able to connect from the local machine (where PuTTY is running) but not from the other machine, then it is a firewall issue, or the SSH client is listening only on the local port and not on 0.0.0.0 for incoming connections.
37,860,353
I am trying to replace entire dropdown - select element, with an input field, if a selected option value is, say, 777. ```js $(document).ready(function(){ $("#mylist").change(function(){ //var val = $(":selected",this).val(); if(this.val == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } }) // change-function ends }) // doc-ready ends ``` ```html <select id="mylist"> <option value="777">Nothing exists</option> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> </select> ``` But it's not working. Where I am going wrong?
2016/06/16
[ "https://Stackoverflow.com/questions/37860353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5549944/" ]
`.val` is not a valid property in javascript, it is jquery method. You should either use clean javascript `this.value` or jquery `$(this).val()` instead of `this.val` here: ```js $(document).ready(function(){ $("#mylist").change(function(){ if($(this).val() == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } }) ;// change-function ends }); // doc-ready ends ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <select id="mylist"> <option value="777">Nothing exists</option> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> </select> ```
You are using `val` which is not a `property` or `function` you need to use `.value` Try this ``` if(this.value == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } ```
37,860,353
I am trying to replace entire dropdown - select element, with an input field, if a selected option value is, say, 777. ```js $(document).ready(function(){ $("#mylist").change(function(){ //var val = $(":selected",this).val(); if(this.val == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } }) // change-function ends }) // doc-ready ends ``` ```html <select id="mylist"> <option value="777">Nothing exists</option> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> </select> ``` But it's not working. Where I am going wrong?
2016/06/16
[ "https://Stackoverflow.com/questions/37860353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5549944/" ]
try this ``` <script type="text/javascript"> $(document).ready(function(){ $("#mylist").change(function(){ var val = $("option:selected",this).val(); if(val == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } }); }); </script> ```
You are using `val` which is not a `property` or `function` you need to use `.value` Try this ``` if(this.value == "777"){ $("#mylist").replaceWith('<input type="text" name="my_name" placeholder="xxx" required="required" />'); } ```
46,542,754
my problem here is about styling some holiday events. I had fetch the eventSource with regular events and holiday events, marked like: `event.holiday = true` What I'm trying to do is show regular events and hiding those holiday events with a css rule `display : none`, and changing the color of the day number. I add a `holiday` class for being able to access from jQuery before. This is the piece of code that changes the color of the day number: ``` eventRender: function (event, element, view){ $('.fc-day-number').each(function () { var currentDate = (new Date(event.start)).toISOString().slice(0, 10); var day = $(this).parent().attr('data-date'); if (currentDate == day && event.holiday) { $(this).addClass('holiday'); } }); } ``` And it works, it changes the color, but if I click to change month, it'll disappear. Am I missing something? Is there any easier way to achieve this?
2017/10/03
[ "https://Stackoverflow.com/questions/46542754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7918738/" ]
`whereNotIn` takes an `array` of values. You also need to pass the column name to it.

```
if($city != 'Others'){
    $posts = Post::orderby('id','desc')->where(['city'=>$city])->paginate(10);
}
else{
    $posts = Post::orderby('id','desc')->whereNotIn('city',['Lahore'])->paginate(10);
}
```
This is a guess based on how your database is setup. **EDIT based on reply (see: <https://laravel.com/docs/5.5/queries#where-clauses> and [Laravel != operator in where not working](https://stackoverflow.com/questions/23260171/laravel-operator-in-where-not-working)):** ``` if($company!='Others'){ $posts = Post::orderby('id','desc')->where(['company'=>$company])->paginate(10); } else{ $posts = Post::orderby('id','desc')->where('city', '<>', 'Lahore')->paginate(10); } ``` Does that help (take a look at the links)?
46,542,754
my problem here is about styling some holiday events. I had fetch the eventSource with regular events and holiday events, marked like: `event.holiday = true` What I'm trying to do is show regular events and hiding those holiday events with a css rule `display : none`, and changing the color of the day number. I add a `holiday` class for being able to access from jQuery before. This is the piece of code that changes the color of the day number: ``` eventRender: function (event, element, view){ $('.fc-day-number').each(function () { var currentDate = (new Date(event.start)).toISOString().slice(0, 10); var day = $(this).parent().attr('data-date'); if (currentDate == day && event.holiday) { $(this).addClass('holiday'); } }); } ``` And it works, it changes the color, but if I click to change month, it'll disappear. Am I missing something? Is there any easier way to achieve this?
2017/10/03
[ "https://Stackoverflow.com/questions/46542754", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7918738/" ]
Thank you for all your help. The only thing required was to keep the whereNotIn values in an array. Like this:

```
if($city != 'Others'){
    $posts = Post::orderby('id','desc')->where(['city'=>$city])->paginate(10);
}
else{
    $posts = Post::orderby('id','desc')->whereNotIn('city', array('Lahore'))->paginate(10);
}
```
This is a guess based on how your database is setup. **EDIT based on reply (see: <https://laravel.com/docs/5.5/queries#where-clauses> and [Laravel != operator in where not working](https://stackoverflow.com/questions/23260171/laravel-operator-in-where-not-working)):** ``` if($company!='Others'){ $posts = Post::orderby('id','desc')->where(['company'=>$company])->paginate(10); } else{ $posts = Post::orderby('id','desc')->where('city', '<>', 'Lahore')->paginate(10); } ``` Does that help (take a look at the links)?
57,997,021
The sum of all odd digits of n (e.g. if n is 32677, the sum would be 3+7+7=17). Here is the code. For this question, any loop or function is acceptable, but not longer than this answer.

```
#include <stdio.h>

int main()
{
    char n[20];
    int m=0,i;
    printf("Enter integers for the variable n: ");
    for (i=0;i<20;i++)
    {
        scanf("%c",&n[i]);
        if(n[i]=='\n')
        {
            break;
        }
    }
    for (i=0;i<20;i++)// this is the part I would like to simplify
    {
        if (n[i]%2!=0)
        {
            if(n[i]==49)
                m++;
            if(n[i]==51)
                m+=3;
            if(n[i]==53)
                m+=5;
            if(n[i]==55)
                m+=7;
            else if(n[i]==57)
                m+=9;
        }
    }
    printf("The sum of odd digits of n is %d.",m);
}
```
2019/09/18
[ "https://Stackoverflow.com/questions/57997021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12086164/" ]
Here are some tools/ideas you can use: * In ctype.h is a function `isdigit()` which tells you whether or not a character represents a digit. * Assuming the characters for the digits 0..9 are in sequence, the value represented by a character digit `c` is `c-'0'`
Here you are ``` #include <stdio.h> int main( void ) { enum { N = 20 }; char value[N]; printf( "Enter an unsigned integer: " ); size_t n = 0; for ( char digit; n < N && scanf( "%c", &digit ) == 1 && digit != '\n'; ++n ) { value[n] = digit; } unsigned int sum = 0; for ( size_t i = 0; i < n; i++ ) { if ( value[i] % 2 != 0 ) sum += value[i] - '0'; } printf( "The sum of odd digits of the value is %u.\n", sum ); } ``` The program output might look like ```none Enter an unsigned integer: 0123456789 The sum of odd digits of the value is 25 ``` Or you can add a check that an entered character is a digit. For example ``` #include <stdio.h> #include <ctype.h> int main( void ) { enum { N = 20 }; char value[N]; printf( "Enter an unsigned integer: " ); size_t n = 0; for ( char digit; n < N && scanf( "%c", &digit ) == 1 && isdigit( ( unsigned char )digit ); ++n ) { value[n] = digit; } unsigned int sum = 0; for ( size_t i = 0; i < n; i++ ) { if ( value[i] % 2 != 0 ) sum += value[i] - '0'; } printf( "The sum of odd digits of the value is %u\n", sum ); } ``` As for your code then in this loop ``` for (i=0;i<20;i++) { scanf("%c",&n[i]); if(n[i]=='\n') { break; } } ``` you have to count how many digits were entered. And the new line character shall not be stored in the array. Otherwise this loop ``` for (i=0;i<20;i++) ``` can result in undefined behavior. And you should not use magic numbers like for example `49`.
366,978
I'm looking for a function to resample a raster that can consider a minimum number of valid pixels in order to compute the new pixel value. E.g. If I have to resample a raster of 300m/px to 1Km/px I would consider a window of 3x3 pixels to compute the average value. But I would like to set a control to be sure that at least 5 pixels inside my matrix have valid data. [![Example of results](https://i.stack.imgur.com/1Lo7d.jpg)](https://i.stack.imgur.com/1Lo7d.jpg) I'm trying using GDAL (in PyQGIS) but neither GDAL Translate nor GDAL Warp have this option
2020/07/06
[ "https://gis.stackexchange.com/questions/366978", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/87003/" ]
It could be done with r.neighbors in grass : 1. Sum number of cells using a 3x3 2. Count number of cells without Nan/Null value using a 3x3 3. Divide first generated raster by second one Example : Set a random region 3x3 cells `g.region rows=3 cols=3` Initialize with random cell raster `r.random.cells output=random_cells distance=0 ncells=5` [![Random raster created](https://i.stack.imgur.com/fJjV7.png)](https://i.stack.imgur.com/fJjV7.png) Sum cells (3x3 is default but you could use -c option to set a different cell research) `r.neighbors input=random_cells output=sums method=sum` Count cells `r.neighbors input=random_cells output=counts method=count` Change region resolution to output desired raster resampling : `g.region res=1` Divide sum by count `r.mapcalc "outresamp = sums / counts"` Optinoal : copy colors from random cells to newly resampling raster : `r.colors map=outresamp raster=random_cells` [![Resampling raster excluding nan values](https://i.stack.imgur.com/xCVFT.png)](https://i.stack.imgur.com/xCVFT.png) <https://grass.osgeo.org/grass78/manuals/r.neighbors.html> Alternatively, you could use interpolation with r.fillnulls or r.resamp\* methods
Thanks Sylvain for your answer. In the end I understood that there isn't a way to solve this issue in only one step. So, I wrote this PyQGIS code to do all these steps: <https://github.com/fgianoli/CopernicusGlobalLand/blob/master/CGL_resampler.py> Basically, I reclassified the input raster to 0-1 (where 1 are my valid values), then I resampled it using the mode (so 0 is where at least 5 pixels inside my kernel aren't valid), and I multiplied this result by the result of the 3x3 resampling using the average. In this way I obtained my results. [![enter image description here](https://i.stack.imgur.com/yADxr.jpg)](https://i.stack.imgur.com/yADxr.jpg)
657,653
I often use ctrl+alt+I and windows magnifier to invert the colors of my screen, to make bright windows easier to read. The problem is that, eg, some windows are naturally dark and some are naturally light, and so I find myself often toggling the inversion as I switch between windows. I would like to be able to apply preferences on a per-window basis: eg, my email client is white-based and I would like to keep it always inverted. Is there a way to accomplish this?
2013/10/11
[ "https://superuser.com/questions/657653", "https://superuser.com", "https://superuser.com/users/133501/" ]
My **workaround** is to transform standard contrast of the entire screen into lower one. This way, window with inverted colors becomes more bearable. 1. get [NegativeScreen](https://zerowidthjoiner.net/negativescreen) open source freeware 2. open configuration file 3. using copy-paste, add the following matrices: ``` # Based on Smart Inversion Low Contrast si1=win+shift+alt+F5 { 0.3333333, -0.6666667, -0.6666667, 0.0000000, 0.0000000 } { -0.6666667, 0.3333333, -0.6666667, 0.0000000, 0.0000000 } { -0.6666667, -0.6666667, 0.3333333, 0.0000000, 0.0000000 } { 0.0000000, 0.0000000, 0.0000000, 1.0000000, 0.0000000 } { 1.2000000, 1.2000000, 1.2000000, 0.0000000, 1.0000000 } # Based on Smart Inversion Alt 1: High saturation, good pure colors. Low Contrast si2 (×*60% +30%)=win+shift+alt+F6 { 0.6, -0.6, -0.6, 0.0, 0.0 } { -0.6, 0.6, -0.6, 0.0, 0.0 } { -0.6, -0.6, 0.6, 0.0, 0.0 } { 0.0, 0.0, 0.0, 1.0, 0.0 } { 0.9, 0.9, 0.9, 0.0, 1.0 } ``` 4. you can [play with them](https://msdn.microsoft.com/en-us/library/windows/desktop/ms533875(v=vs.85).aspx) until brightness and sharpness adjustment fits your needs. *UPDATE 2016-10-20:* Now you can [create and edit matrices interactively using ColorMatrix Viewer tool](https://zerowidthjoiner.net/colormatrix-viewer). Based on what you actually need, this could help you, because these adjustments can go beyond color adjustments reachable on standard LCD panels. FYI the NegativeScreen tool is using your favorite magnifier functionality, but it can supply it with more color transformations than default simple inversion. If eyesight is your main reason for asking this, you can also search for some decent [e-ink display](https://hardwarerecs.stackexchange.com/questions/2415/e-ink-monitor-display-panel) solution or check [how to reduce blue light](https://superuser.com/a/1087436/287473) which in my case helped me more than e-ink display. Blue light and flicker of LCD backlight are behind several types of eye problems.
From <https://github.com/mlaily/NegativeScreen/issues/2>: some `.exe` from the NegativeScreen's author to invert just a single monitor (not as precise as the one window, but better than the default NegativeScreen that inverts all monitors): > > Thanks for your feedback everyone. Still no progress on this feature request, but I'd like to mention I released an unofficial version a (long) while ago that got lost in the comments of my website. > > > It's based on the old v1, and handles multiple monitors with the ability to choose to enable monitors separately. > > > It's a bit crude, but probably still usable: <https://0.x2a.yt/other/private/NegativeScreen-custom-multi-monitor.exe> ([mirror](https://archive.org/details/negative-screen)) > > > To select which monitor to invert colors, click on [![enter image description here](https://i.stack.imgur.com/lAuql.png)](https://i.stack.imgur.com/lAuql.png) in the notification area of the Microsoft Windows taskbar. [![enter image description here](https://i.stack.imgur.com/jlTFc.png)](https://i.stack.imgur.com/jlTFc.png) Tested on Windows 7 SP1 x64 Ultimate with 3 monitors, 1 of which using [DisplayLink](http://www.displaylink.com/).
657,653
I often use ctrl+alt+I and windows magnifier to invert the colors of my screen, to make bright windows easier to read. The problem is that, eg, some windows are naturally dark and some are naturally light, and so I find myself often toggling the inversion as I switch between windows. I would like to be able to apply preferences on a per-window basis: eg, my email client is white-based and I would like to keep it always inverted. Is there a way to accomplish this?
2013/10/11
[ "https://superuser.com/questions/657653", "https://superuser.com", "https://superuser.com/users/133501/" ]
My **workaround** is to transform standard contrast of the entire screen into lower one. This way, window with inverted colors becomes more bearable. 1. get [NegativeScreen](https://zerowidthjoiner.net/negativescreen) open source freeware 2. open configuration file 3. using copy-paste, add the following matrices: ``` # Based on Smart Inversion Low Contrast si1=win+shift+alt+F5 { 0.3333333, -0.6666667, -0.6666667, 0.0000000, 0.0000000 } { -0.6666667, 0.3333333, -0.6666667, 0.0000000, 0.0000000 } { -0.6666667, -0.6666667, 0.3333333, 0.0000000, 0.0000000 } { 0.0000000, 0.0000000, 0.0000000, 1.0000000, 0.0000000 } { 1.2000000, 1.2000000, 1.2000000, 0.0000000, 1.0000000 } # Based on Smart Inversion Alt 1: High saturation, good pure colors. Low Contrast si2 (×*60% +30%)=win+shift+alt+F6 { 0.6, -0.6, -0.6, 0.0, 0.0 } { -0.6, 0.6, -0.6, 0.0, 0.0 } { -0.6, -0.6, 0.6, 0.0, 0.0 } { 0.0, 0.0, 0.0, 1.0, 0.0 } { 0.9, 0.9, 0.9, 0.0, 1.0 } ``` 4. you can [play with them](https://msdn.microsoft.com/en-us/library/windows/desktop/ms533875(v=vs.85).aspx) until brightness and sharpness adjustment fits your needs. *UPDATE 2016-10-20:* Now you can [create and edit matrices interactively using ColorMatrix Viewer tool](https://zerowidthjoiner.net/colormatrix-viewer). Based on what you actually need, this could help you, because these adjustments can go beyond color adjustments reachable on standard LCD panels. FYI the NegativeScreen tool is using your favorite magnifier functionality, but it can supply it with more color transformations than default simple inversion. If eyesight is your main reason for asking this, you can also search for some decent [e-ink display](https://hardwarerecs.stackexchange.com/questions/2415/e-ink-monitor-display-panel) solution or check [how to reduce blue light](https://superuser.com/a/1087436/287473) which in my case helped me more than e-ink display. Blue light and flicker of LCD backlight are behind several types of eye problems.
I think [WindowTop](https://windowtop.info/) ([Git](https://github.com/BiGilSoft/WindowTop)) might be just what you're looking for. Just tested the old v3.x version and it works fine in inverting the colors of just my LibreOffice window/task/app/program. Stumbled over it in [issue 5](https://github.com/mlaily/NegativeScreen/issues/5#issuecomment-688644813) of NegativeScreen's Git repo.
657,653
I often use ctrl+alt+I and windows magnifier to invert the colors of my screen, to make bright windows easier to read. The problem is that, eg, some windows are naturally dark and some are naturally light, and so I find myself often toggling the inversion as I switch between windows. I would like to be able to apply preferences on a per-window basis: eg, my email client is white-based and I would like to keep it always inverted. Is there a way to accomplish this?
2013/10/11
[ "https://superuser.com/questions/657653", "https://superuser.com", "https://superuser.com/users/133501/" ]
From <https://github.com/mlaily/NegativeScreen/issues/2>: some `.exe` from the NegativeScreen's author to invert just a single monitor (not as precise as the one window, but better than the default NegativeScreen that inverts all monitors): > > Thanks for your feedback everyone. Still no progress on this feature request, but I'd like to mention I released an unofficial version a (long) while ago that got lost in the comments of my website. > > > It's based on the old v1, and handles multiple monitors with the ability to choose to enable monitors separately. > > > It's a bit crude, but probably still usable: <https://0.x2a.yt/other/private/NegativeScreen-custom-multi-monitor.exe> ([mirror](https://archive.org/details/negative-screen)) > > > To select which monitor to invert colors, click on [![enter image description here](https://i.stack.imgur.com/lAuql.png)](https://i.stack.imgur.com/lAuql.png) in the notification area of the Microsoft Windows taskbar. [![enter image description here](https://i.stack.imgur.com/jlTFc.png)](https://i.stack.imgur.com/jlTFc.png) Tested on Windows 7 SP1 x64 Ultimate with 3 monitors, 1 of which using [DisplayLink](http://www.displaylink.com/).
I think [WindowTop](https://windowtop.info/) ([Git](https://github.com/BiGilSoft/WindowTop)) might be just what you're looking for. Just tested the old v3.x version and it works fine in inverting the colors of just my LibreOffice window/task/app/program. Stumbled over it in [issue 5](https://github.com/mlaily/NegativeScreen/issues/5#issuecomment-688644813) of NegativeScreen's Git repo.
68,720,911
Im trying to use a range (160:280) instead of '160', '161' and so on. How would i do that? ``` group_by(disp = fct_collapse(as.character(disp), Group1 = c(160:280), Group2 = c(281:400)) %>% summarise(meanHP = mean(hp))) Error: Problem adding computed columns in `group_by()`. x Problem with `mutate()` column `disp`. i `disp = `%>%`(...)`. x Each input to fct_recode must be a single named string. Problems at positions: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 17``` ```
2021/08/10
[ "https://Stackoverflow.com/questions/68720911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16497279/" ]
For range of values it is better to use `cut` where you can define `breaks` and `labels`. ``` library(dplyr) library(forcats) mtcars %>% group_by(disp = cut(disp, c(0, 160, 280, 400, Inf), paste0('Group', 1:4))) %>% summarise(meanHP = mean(hp)) # disp meanHP # <fct> <dbl> #1 Group1 93.1 #2 Group2 143 #3 Group3 217. #4 Group4 217. ``` So here 0-160 becomes `'Group1'`, 160-280 `'Group2'` and so on. --- With `fct_collapse` you can do - ``` mtcars %>% group_by(disp = fct_collapse(as.character(disp), Group1 = as.character(160:280), Group2 = as.character(281:400))) %>% summarise(meanHP = mean(hp)) %>% suppressWarnings() ``` However, this works only for exact values which are present so 160 would be in group1 but not 160.1.
We could also do

```
library(dplyr)
library(stringr)

mtcars %>% 
  group_by(disp = cut(disp, c(0, 160, 280, 400, Inf), str_c('Group', 1:4))) %>%
  summarise(meanHP = mean(hp))
```
37,703,484
For the last two and a half hours I have been trying to do something really simple: change the padding in Android's AutoCompleteTextView popup (the one that shows the auto-complete options). I'm trying to do this because the item in my app has the height of the text (I'm not sure why), so I want to make it easier to click on. But everything I could find didn't work at all. So I really would be glad if anyone could shed some light on this problem or give an alternative solution. And just for the record, I'm using Android Studio, and I have removed the support API (since my min API is 16), so my app is using 100% native resources only.
2016/06/08
[ "https://Stackoverflow.com/questions/37703484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6440408/" ]
I just found a way to make it work: I had to make a custom item layout with a TextView that already includes the item's padding. Then I created a custom adapter which uses this layout. The layout goes like this

```
<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:singleLine="true"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:ellipsize="marquee"
    android:layout_margin="0dp"
    android:paddingLeft="@dimen/thin_margin"
    android:paddingRight="@dimen/thin_margin"
    android:paddingTop="@dimen/list_1line_item_padding"
    android:paddingBottom="@dimen/list_1line_item_padding"/>
```

And in the custom adapter I just inflate it in the getView method

```
itemView = LayoutInflater.from(ctx).inflate(R.layout.list_1line_item, null);
```
define another relative layout wrapping only the autocomplete textview and the button. look at this link [Android layout padding is ignored by dropdown](https://stackoverflow.com/questions/19914328/android-layout-padding-is-ignored-by-dropdown)
37,703,484
For the last two and a half hours I have been trying to do something really simple: change the padding in Android's AutoCompleteTextView popup (the one that shows the auto-complete options). I'm trying to do this because the item in my app has the height of the text (I'm not sure why), so I want to make it easier to click on. But everything I could find didn't work at all. So I really would be glad if anyone could shed some light on this problem or give an alternative solution. And just for the record, I'm using Android Studio, and I have removed the support API (since my min API is 16), so my app is using 100% native resources only.
2016/06/08
[ "https://Stackoverflow.com/questions/37703484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6440408/" ]
Wrap the AutoCompleteTextView in an inner layout that carries the padding, and anchor the dropdown to that parent:

```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_marginTop="20dp" >

    <RelativeLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingTop="@dimen/dp_15"
        android:paddingBottom="@dimen/dp_15"
        android:id="@+id/parentid">

        <AutoCompleteTextView
            android:id="@+id/address_view"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@color/light_gray_bg"
            android:drawableRight="@drawable/icon_search_smaller"
            android:gravity="center_vertical"
            android:hint="Start typing location"
            android:inputType="textCapWords"
            android:popupBackground="@drawable/auto_location_popup_bg"
            android:textColor="@color/black"
            android:textColorHint="@color/dark_grey"
            android:textSize="16sp"
            android:visibility="visible"
            android:dropDownWidth="wrap_content"
            android:dropDownAnchor="@+id/parentid">

            <requestFocus />
        </AutoCompleteTextView>
    </RelativeLayout>
</RelativeLayout>
```
define another relative layout wrapping only the autocomplete textview and the button. look at this link [Android layout padding is ignored by dropdown](https://stackoverflow.com/questions/19914328/android-layout-padding-is-ignored-by-dropdown)
37,703,484
For the last two and a half hours I have been trying to do something really simple: change the padding in Android's AutoCompleteTextView popup (the one that shows the auto-complete options). I'm trying to do this because the item in my app has the height of the text (I'm not sure why), so I want to make it easier to click on. But everything I could find didn't work at all. So I really would be glad if anyone could shed some light on this problem or give an alternative solution. And just for the record, I'm using Android Studio, and I have removed the support API (since my min API is 16), so my app is using 100% native resources only.
2016/06/08
[ "https://Stackoverflow.com/questions/37703484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6440408/" ]
I just found a way to make it work: I had to make a custom item layout with a TextView that already includes the item's padding. Then I created a custom adapter which uses this layout. The layout goes like this

```
<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:singleLine="true"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:ellipsize="marquee"
    android:layout_margin="0dp"
    android:paddingLeft="@dimen/thin_margin"
    android:paddingRight="@dimen/thin_margin"
    android:paddingTop="@dimen/list_1line_item_padding"
    android:paddingBottom="@dimen/list_1line_item_padding"/>
```

And in the custom adapter I just inflate it in the getView method

```
itemView = LayoutInflater.from(ctx).inflate(R.layout.list_1line_item, null);
```
Wrap the AutoCompleteTextView in an inner layout that carries the padding, and anchor the dropdown to that parent:

```
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_marginTop="20dp" >

    <RelativeLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingTop="@dimen/dp_15"
        android:paddingBottom="@dimen/dp_15"
        android:id="@+id/parentid">

        <AutoCompleteTextView
            android:id="@+id/address_view"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@color/light_gray_bg"
            android:drawableRight="@drawable/icon_search_smaller"
            android:gravity="center_vertical"
            android:hint="Start typing location"
            android:inputType="textCapWords"
            android:popupBackground="@drawable/auto_location_popup_bg"
            android:textColor="@color/black"
            android:textColorHint="@color/dark_grey"
            android:textSize="16sp"
            android:visibility="visible"
            android:dropDownWidth="wrap_content"
            android:dropDownAnchor="@+id/parentid">

            <requestFocus />
        </AutoCompleteTextView>
    </RelativeLayout>
</RelativeLayout>
```
21,462,520
I have three collections: ``` private ICollection<FPTAssetClassAsset> wrassets; private ICollection<FPTFundAsset> wrfunds; private ICollection<FPTManagedStrategyAsset> wrstrats; ``` If a foreach loop returns 0 objects, the collections don't get set and are therefore null. When i add this icollection (Union) to another icollection it fails with: "Value cannot be null" because the icollection is null, rather than being Empty. How can i set this collection as empty instead? Loop: ``` public void GetWrapperAssets(FPT fpt) { foreach (var w in fpt.CouttsPositionSection.Wrappers .Union(fpt.StandAloneSection.Wrappers) .Union(fpt.BespokePropositionSection.Wrappers) .Union(fpt.NonCouttsPositionSection.Wrappers) ) { foreach (var a in w.UnderlyingAssets.OfType<FPTManagedStrategyAsset>()) { wrstrats.Add(a); } foreach (var a in w.UnderlyingAssets.OfType<FPTAssetClassAsset>()) { wrassets.Add(a); } foreach (var a in w.UnderlyingAssets.OfType<FPTFundAsset>()) { wrfunds.Add(a); } } } ```
2014/01/30
[ "https://Stackoverflow.com/questions/21462520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1217058/" ]
Well, you can always check for `null` before adding. Or you can turn it into a property: ``` private ICollection<FPTAssetClassAsset> wrassets { get { return _wrassets == null ? new List<FPTAssetClassAsset>() : _wrassets; } } private ICollection<FPTAssetClassAsset> _wrassets; ```
You can use `Array.Empty<T>` if you're just looking for something that won't crash upon a read until you can get around to replacing it with something you use in earnest: ``` private ICollection<FPTAssetClassAsset> wrassets = Array.Empty<FPTAssetClassAsset>(); private ICollection<FPTFundAsset> wrfunds = Array.Empty<FPTFundAsset>(); private ICollection<FPTManagedStrategyAsset> wrstrats = Array.Empty<FPTManagedStrategyAsset>(); ```
21,462,520
I have three collections: ``` private ICollection<FPTAssetClassAsset> wrassets; private ICollection<FPTFundAsset> wrfunds; private ICollection<FPTManagedStrategyAsset> wrstrats; ``` If a foreach loop returns 0 objects, the collections don't get set and are therefore null. When i add this icollection (Union) to another icollection it fails with: "Value cannot be null" because the icollection is null, rather than being Empty. How can i set this collection as empty instead? Loop: ``` public void GetWrapperAssets(FPT fpt) { foreach (var w in fpt.CouttsPositionSection.Wrappers .Union(fpt.StandAloneSection.Wrappers) .Union(fpt.BespokePropositionSection.Wrappers) .Union(fpt.NonCouttsPositionSection.Wrappers) ) { foreach (var a in w.UnderlyingAssets.OfType<FPTManagedStrategyAsset>()) { wrstrats.Add(a); } foreach (var a in w.UnderlyingAssets.OfType<FPTAssetClassAsset>()) { wrassets.Add(a); } foreach (var a in w.UnderlyingAssets.OfType<FPTFundAsset>()) { wrfunds.Add(a); } } } ```
2014/01/30
[ "https://Stackoverflow.com/questions/21462520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1217058/" ]
Initialise your collections before the foreach loop, this way they will always have a value: ``` private ICollection<FPTAssetClassAsset> wrassets = new Collection<FPTAssetClassAsset>(); private ICollection<FPTFundAsset> wrfunds = new Collection<FPTFundAsset>(); private ICollection<FPTManagedStrategyAsset> wrstrats = new Collection<FPTManagedStrategyAsset>(); ```
You can use `Array.Empty<T>` if you're just looking for something that won't crash upon a read until you can get around to replacing it with something you use in earnest: ``` private ICollection<FPTAssetClassAsset> wrassets = Array.Empty<FPTAssetClassAsset>(); private ICollection<FPTFundAsset> wrfunds = Array.Empty<FPTFundAsset>(); private ICollection<FPTManagedStrategyAsset> wrstrats = Array.Empty<FPTManagedStrategyAsset>(); ```
58,961,965
I have a range of directories from 2010 to 2017, each with sub-directories from 1 to 12. There is a file in each sub-directory, and I need to add a line to each of these files. This is part of my script:

```
#!/bin/bash
mkdir -p test/201{0..7}/{1..12}/
touch test/201{0..7}/{1..12}/file_{0..9}.txt
```
2019/11/20
[ "https://Stackoverflow.com/questions/58961965", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12406103/" ]
``` echo "42" | tee test/201{0..7}/{1..12}/file_{0..9}.txt ``` Append to files: ``` echo "42" | tee -a test/201{0..7}/{1..12}/file_{0..9}.txt ```
Your script creates a range of directories from 2010 to 2017 and a range of sub-directories from 1 to 12 inside each of them. At the end it also creates ten files in each sub-directory. So, to append a new line to each of these files, use the `echo` command to produce the line's contents and pipe it to `tee -a`, which adds it to all of the files at once, as follows:

```
echo "new line" | tee -a test/201{0..7}/{1..12}/file_{0..9}.txt
```

That should be enough.
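If you ever need to do the same append from a script rather than the shell, a minimal Python sketch over the same test/YYYY/M/file_N.txt layout could look like this (the glob pattern and the appended text are just the examples from above, not anything mandated by the question):

```python
from pathlib import Path

# Append one line to every file created by the script above.
# Pattern: test/<year 2010-2017>/<month>/file_<n>.txt
for path in sorted(Path("test").glob("201[0-7]/*/file_*.txt")):
    with path.open("a") as fh:   # "a" = append, like tee -a
        fh.write("new line\n")
```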
33,189,352
I understand that you can do the following... ``` query.orderByAscending("rowValue"); query.orderByDescending("rowValue"); ``` But what if you actually want your data to come out in random order every time your activity is opened? How might this be accomplished?
2015/10/17
[ "https://Stackoverflow.com/questions/33189352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4950598/" ]
There is no built-in function for random sort order in the [Parse API](https://parse.com/docs/android/api/com/parse/ParseQuery.html). You can randomize the list after you receive it using `Collections.shuffle()`.

Ex.

```
ParseQuery<ParseObject> query = ParseQuery.getQuery("MyClass");
query.findInBackground(new FindCallback<ParseObject>() {
    public void done(List<ParseObject> objects, ParseException e) {
        if (e == null) {
            // Shuffle the results client-side, then hand them on
            Collections.shuffle(objects);
            objectsWereRetrievedSuccessfully(objects);
        } else {
            objectRetrievalFailed();
        }
    }
});
```
Why not just randomize the data after you query?
33,189,352
I understand that you can do the following... ``` query.orderByAscending("rowValue"); query.orderByDescending("rowValue"); ``` But what if you actually want your data to come out in random order every time your activity is opened? How might this be accomplished?
2015/10/17
[ "https://Stackoverflow.com/questions/33189352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4950598/" ]
Why not just randomize the data after you query?
I guess that was pretty easy... I hope others find this helpful! ``` ParseQuery<ParseObject> query = new ParseQuery<ParseObject>( "SuggestedUser"); ob = query.find(); // Randomizes the order of the records in the query Collections.shuffle(ob); for (ParseObject author : ob) { ParseFile image = (ParseFile) author.get("brandImage"); SuggestedUser map = new SuggestedUser(); map.setRank((String) author.get("author")); map.setUsername((String) author.get("username")); map.setFlag(image.getUrl()); map.setUserID((String) author.get("userId")); worldpopulationlist.add(map); ```
33,189,352
I understand that you can do the following... ``` query.orderByAscending("rowValue"); query.orderByDescending("rowValue"); ``` But what if you actually want your data to come out in random order every time your activity is opened? How might this be accomplished?
2015/10/17
[ "https://Stackoverflow.com/questions/33189352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4950598/" ]
There is no built-in function for random sort order in the [Parse API](https://parse.com/docs/android/api/com/parse/ParseQuery.html). You can randomize the list after you receive it using `Collections.shuffle()`.

Ex.

```
ParseQuery<ParseObject> query = ParseQuery.getQuery("MyClass");
query.findInBackground(new FindCallback<ParseObject>() {
    public void done(List<ParseObject> objects, ParseException e) {
        if (e == null) {
            // Shuffle the results client-side, then hand them on
            Collections.shuffle(objects);
            objectsWereRetrievedSuccessfully(objects);
        } else {
            objectRetrievalFailed();
        }
    }
});
```
I guess that was pretty easy... I hope others find this helpful! ``` ParseQuery<ParseObject> query = new ParseQuery<ParseObject>( "SuggestedUser"); ob = query.find(); // Randomizes the order of the records in the query Collections.shuffle(ob); for (ParseObject author : ob) { ParseFile image = (ParseFile) author.get("brandImage"); SuggestedUser map = new SuggestedUser(); map.setRank((String) author.get("author")); map.setUsername((String) author.get("username")); map.setFlag(image.getUrl()); map.setUserID((String) author.get("userId")); worldpopulationlist.add(map); ```
29,470
`ssh` can be used to run remote commands.

```
ssh me@server.com 'long-script.sh'
```

I run a long script that will take a lot of time, but I want to close my computer and keep the script running on the remote server. I know how to achieve this with [GNU Screen](http://www.gnu.org/software/screen/), but I need to do it via `ssh`. Can I do that without interrupting my script?
2012/01/19
[ "https://unix.stackexchange.com/questions/29470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6219/" ]
Use "nohup" to run a command immune to hangups, with output to a non-tty: ``` nohup your_command & ``` and to run a command via ssh, without first logging into the remote machine: ``` ssh user_name@machine_address "nohup your_script.sh" & ```
``` $ ssh me@server.com screen -dm long-script.sh ```
29,470
`ssh` can be used to run remote commands.

```
ssh me@server.com 'long-script.sh'
```

I run a long script that will take a lot of time, but I want to close my computer and keep the script running on the remote server. I know how to achieve this with [GNU Screen](http://www.gnu.org/software/screen/), but I need to do it via `ssh`. Can I do that without interrupting my script?
2012/01/19
[ "https://unix.stackexchange.com/questions/29470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6219/" ]
Use "nohup" to run a command immune to hangups, with output to a non-tty: ``` nohup your_command & ``` and to run a command via ssh, without first logging into the remote machine: ``` ssh user_name@machine_address "nohup your_script.sh" & ```
`ssh user@server "nohup script.sh >/var/log/output.log 2>&1 &"` That should run the remote command without leaving a running ssh process on your client.
65,563,762
I am building a Tic Tac Toe AI. Here are the rules for the AI: 1. If there is a winning move, play it. 2. If the opponent has a winning move, block it. 3. Otherwise, play randomly. Here's the code: ``` # main.py # Prorities: # - If there is a winning move, play it # - If the opponent has a winning move, block it. # - If nothing to block, make a random move. import random import time import copy boxes = [] for i in range(3): row = [] for j in range(3): row.append(" ") boxes.append(row) def printBoard(): to_print = "" to_print += " " + boxes[0][0] + " | " + boxes[0][1] + " | " + boxes[0][2] + " \n" to_print += "---+---+---\n" to_print += " " + boxes[1][0] + " | " + boxes[1][1] + " | " + boxes[1][2] + " \n" to_print += "---+---+---\n" to_print += " " + boxes[2][0] + " | " + boxes[2][1] + " | " + boxes[2][2] + " \n" return to_print turn = random.randint(1, 2) if turn == 1: coin = "you (X)" else: coin = "the AI (O)" print("The coin flip shows", coin, "will go first!") input("Press Enter to begin! ") def checkWin(boxes): win = False who = " " for i in range(3): if boxes[i][0] == boxes[i][1] and boxes[i][1] == boxes[i][2]: who = boxes[i][0] if who != " ": win = True for i in range(3): if boxes[0][i] == boxes[1][i] and boxes[2][i] == boxes[1][i]: who = boxes[0][i] if who != " ": win = True if boxes[0][0] == boxes[1][1] and boxes[1][1] == boxes[2][2]: who = boxes[0][0] if who != " ": win = True if boxes[0][2] == boxes[1][1] and boxes[1][1] == boxes[2][0]: who = boxes[0][2] if who != " ": win = True return win, who def checkTie(boxes): for row in boxes: for box in boxes: if box != "X" and box != "O": return False return True def checkMove(boxes, player): for i in range(3): for j in range(3): if boxes[i][j] != "X" and boxes[i][j] != "O": boxCopy = copy.deepcopy(boxes) boxCopy[i][j] = player win, who = checkWin(boxCopy) if win: return True, i, j return False, 0, 0 while True: print("Check 1") win, who = checkWin(boxes) if win and who == "X": print("Player X has won.") print(" ") print(printBoard()) break elif win and who == "O": print("Player O has won.") print(" ") print(printBoard()) break elif checkTie(boxes) == True: print("It has been concluded as a tie.") break print("Check 2") if turn == 1: print("") print(printBoard()) row = (int(input("Pick a row to play: ")) -1) col = (int(input("Pick a column to play: ")) -1) if ((row < 4 and row > -1) and (col < 4 and col > -1)) and (boxes[row][col] == " "): boxes[row][col] = "X" turn = 2 else: print("Sorry, that is not allowed.") print(" ") # Prorities: # - If there is a winning move, play it # - If the opponent has a winning move, block it. # - If nothing to block, make a random move. else: print("") print(printBoard()) print("[*] AI is choosing...") time.sleep(1) row = random.randint(0, 2) col = random.randint(0, 2) winMove, winRow, winCol = checkMove(boxes, "O") lossMove, lossRow, lossCol = checkMove(boxes, "X") if winMove and (boxes[winRow][winCol] != "X" and boxes[winRow][winCol] != "O"): boxes[winRow][winCol] = "O" turn = 1 print("Statement 1: Win play") elif lossMove and (boxes[lossRow][lossCol] != "X" and boxes[lossRow][lossCol] != "O"): boxes[lossRow][lossCol] = "O" turn = 1 print("Statement 2: Block play") elif boxes[row][col] != "X" and boxes[row][col] != "O": boxes[row][col] = "O" turn = 1 print("Statement 3: Random play") else: print("Statement 4: None") print("Check 3") ``` The problem occurs when there is a tie. Either the function `checkTie()`, or the `if` statement isn't working. 
You might see a couple of `print('Check #')` statements here and there. When you run the code and it ends in a tie, all of the checks go by, which means the loop is passing through the tie check. When there is a tie, it just keeps looping and repeating its turn without making a move. What is the mistake and how can I do this correctly?
2021/01/04
[ "https://Stackoverflow.com/questions/65563762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I think your function should be

```py
def checkTie(boxes):
    for row in boxes:
        for box in row:
            if box != "X" and box != "O":
                return False
    return True
```

You mistyped `boxes` where you meant `row` (I think) in the second `for` statement.
Change your `checkTie` function to this; the rest is all good:

```py
def checkTie(boxes):
    # boxes is the 3x3 list of lists; any row still containing " " means the board is not full
    if any(" " in box for box in boxes):
        return False
    return True
```
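For comparison, both fixes boil down to the same predicate: the board is full. A compact equivalent, just a sketch using the same 3x3 list-of-lists board as the question (the win check in the main loop still has to run first):

```python
def check_tie(boxes):
    # A tie here simply means "no empty cell left"; the caller checks for a win first.
    return all(cell in ("X", "O") for row in boxes for cell in row)

full = [["X", "O", "X"], ["O", "X", "O"], ["O", "X", "O"]]
open_board = [["X", "O", "X"], ["O", " ", "O"], ["O", "X", "O"]]
print(check_tie(full), check_tie(open_board))  # True False
```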
36,527
I have a solution in C# where users are able to submit a lengthy form, which then automatically kicks off a workflow that was created in Designer. However, I am looking to add functionality to the solution which will allow a user to temporary "Save" the form. This saving function basically consists of temporarily disabling the form validation, and blocking the workflow from starting. I have a DisableValidators() function which works, however I cannot find a way to bypass the automatic start of the workflow. Any ideas? Thanks.
2012/05/18
[ "https://sharepoint.stackexchange.com/questions/36527", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/8444/" ]
Yeah, this is a frequent issue. The command prompt didn't work for me either. Just double click on the MSI and walk through the wizard. It'll work correctly then.
One user had a similar issue while installing & configuring RBS. I've described the process here [Installing RBS for Foundation](https://sharepoint.stackexchange.com/questions/34241/installing-rbs/34739#34739) which also includes some troubleshooting steps. Have a look and let us know if that helps you troubleshoot!
36,293,755
I want to apply the value of a dropdown on a website. There is a button tag in the source HTML code. After clicking the button (button.click), I select the dropdown value for all tag having class as "dropdownAvailable" from the Option tag. Although the value of the dropdown seems to change, the page doesn't change accordingly. The HTML code for the Select tag is like below. ``` <select name="dropdown_selected_size" autocomplete="off" data-a-touch-header="Size" id="selected_size" class="a-native-dropdown"> <option id="native_size_-1" data-a-id="size_-1" selected>Select</option> <option class="dropdownAvailable" id="native_size_0" data-a-id="size_0" data-a-html-content="40.5">40.5</option> <option class="dropdownUnavailable" id="native_size_1" data-a-id="size_1" data-a-html-content="40.5">40.5</option> <option class="dropdownAvailable" id="native_size_2" data-a-id="size_2" data-a-html-content="41">41</option> <option class="dropdownUnavailable" id="native_size_3" data-a-id="size_3" data-a-html-content="41">41</option> <option class="dropdownAvailable" id="native_size_4" data-a-id="size_4" data-a-html-content="42">42</option> </select> ``` I have tried all of the below options for the dropdown to be selected. 1. ie.document.getElementById("selected\_size").onchange 2. ie.document.getElementById("selected\_size").FireEvent ("onchange") 3. ie.document.getElementById("selected\_size").getElementsByTagName("option")(n).Click 4. ie.document.getElementById("selected\_size").Click 5. Application.SendKeys "{TAB}" 6. Application.SendKeys "{ENTER}" 7. Application.SendKeys "~" I'm using IE Version 11 and MS Excel 2013.
2016/03/29
[ "https://Stackoverflow.com/questions/36293755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5860809/" ]
Posting in case anyone else comes to this and is as confused as I was. There is a fundamental difference between how nearly all Amazon MWS requests work and how this particular one works. All other requests technically accept the parameters as query parameters instead of POST data. The Scratchpad even suggests this is how it works (although the MWS Scratchpad actually sends the data as POST data fields as well).
This code worked for me. Hope it will help someone. ``` <?php require_once('.config.inc.php'); // More endpoints are listed in the MWS Developer Guide // North America: $serviceUrl = "https://mws.amazonservices.com/Products/2011-10-01"; // Europe //$serviceUrl = "https://mws-eu.amazonservices.com/Products/2011-10-01"; // Japan //$serviceUrl = "https://mws.amazonservices.jp/Products/2011-10-01"; // China //$serviceUrl = "https://mws.amazonservices.com.cn/Products/2011-10-01"; $config = array ( 'ServiceURL' => $serviceUrl, 'ProxyHost' => null, 'ProxyPort' => -1, 'ProxyUsername' => null, 'ProxyPassword' => null, 'MaxErrorRetry' => 3, ); $service = new MarketplaceWebServiceProducts_Client( AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, APPLICATION_NAME, APPLICATION_VERSION, $config); // @TODO: set request. Action can be passed as MarketplaceWebServiceProducts_Model_GetLowestPricedOffersForSKU $request = new MarketplaceWebServiceProducts_Model_GetLowestPricedOffersForSKURequest(); $request->setSellerId(MERCHANT_ID); $request->setMWSAuthToken(MWSAUTH_TOKEN); $request->setMarketplaceId(MARKETPLACE_ID); $request->setSellerSKU($sellerSKU); $request->setItemCondition($ItemCondition); // object or array of parameters invokeGetLowestPricedOffersForSKU($service, $request); function invokeGetLowestPricedOffersForSKU(MarketplaceWebServiceProducts_Interface $service, $request) { try { $response = $service->GetLowestPricedOffersForSKU($request); echo ("Service Response\n"); echo ("=============================================================================\n"); $dom = new DOMDocument(); $dom->loadXML($response->toXML()); $dom->preserveWhiteSpace = false; $dom->formatOutput = true; echo $dom->saveXML(); echo("ResponseHeaderMetadata: " . $response->getResponseHeaderMetadata() . "\n"); } catch (MarketplaceWebServiceProducts_Exception $ex) { echo("Caught Exception: " . $ex->getMessage() . "\n"); echo("Response Status Code: " . $ex->getStatusCode() . "\n"); echo("Error Code: " . $ex->getErrorCode() . "\n"); echo("Error Type: " . $ex->getErrorType() . "\n"); echo("Request ID: " . $ex->getRequestId() . "\n"); echo("XML: " . $ex->getXML() . "\n"); echo("ResponseHeaderMetadata: " . $ex->getResponseHeaderMetadata() . "\n"); } } ?> ```
36,293,755
I want to apply the value of a dropdown on a website. There is a button tag in the source HTML code. After clicking the button (button.click), I select the dropdown value for all tag having class as "dropdownAvailable" from the Option tag. Although the value of the dropdown seems to change, the page doesn't change accordingly. The HTML code for the Select tag is like below. ``` <select name="dropdown_selected_size" autocomplete="off" data-a-touch-header="Size" id="selected_size" class="a-native-dropdown"> <option id="native_size_-1" data-a-id="size_-1" selected>Select</option> <option class="dropdownAvailable" id="native_size_0" data-a-id="size_0" data-a-html-content="40.5">40.5</option> <option class="dropdownUnavailable" id="native_size_1" data-a-id="size_1" data-a-html-content="40.5">40.5</option> <option class="dropdownAvailable" id="native_size_2" data-a-id="size_2" data-a-html-content="41">41</option> <option class="dropdownUnavailable" id="native_size_3" data-a-id="size_3" data-a-html-content="41">41</option> <option class="dropdownAvailable" id="native_size_4" data-a-id="size_4" data-a-html-content="42">42</option> </select> ``` I have tried all of the below options for the dropdown to be selected. 1. ie.document.getElementById("selected\_size").onchange 2. ie.document.getElementById("selected\_size").FireEvent ("onchange") 3. ie.document.getElementById("selected\_size").getElementsByTagName("option")(n).Click 4. ie.document.getElementById("selected\_size").Click 5. Application.SendKeys "{TAB}" 6. Application.SendKeys "{ENTER}" 7. Application.SendKeys "~" I'm using IE Version 11 and MS Excel 2013.
2016/03/29
[ "https://Stackoverflow.com/questions/36293755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5860809/" ]
MWS needs the POST data passed as form params instead of as a query string for some operations. Otherwise, it pukes a `Failed processing arguments of org.jboss.resteasy.spi.metadata` style `400 Bad Request` error for some operations such as this one (`GetMyFeesEstimate` is another that suffers from this). For instance, if you did the following POST request in Guzzle 6 then you'd likely get the error: ``` $response = $client->request('POST', 'https://mws.amazonservices.com/Products/2011-10-01/?AWSAccessKeyId=YOURAWSACCESSKEY&Action=GetLowestPricedOffersForASIN&SellerId=YOURSELLERID&MWSAuthToken=amzn.mws.fghsffg-4t44e-hfgh-dfgd-zgsdbfe5erg&SignatureVersion=2&Timestamp=2017-07-09T15%3A45%3A18%2B00%3A00&Version=2011-10-01&Signature=bCasdxXmYDCasdaXBhsdgse4pQ6hEbevML%2FJvzdgdsfdy2o%3D&SignatureMethod=HmacSHA256&MarketplaceId=ATVPDKIKX0DER&ASIN=B007EZK19E'); ``` To fix this you'd submit it as form data as in this [Guzzle 6](http://docs.guzzlephp.org/en/stable/request-options.html?highlight=post#form-params) example: ``` $response = $client->request('POST', 'https://mws.amazonservices.com/Products/2011-10-01', [ 'form_params' => [ 'AWSAccessKeyId' => 'YOURAWSACCESSKEY', 'Action' => 'GetLowestPricedOffersForASIN', 'SellerId' => 'YOURSELLERID', 'MWSAuthToken' => 'amzn.mws.fghsffg-4t44e-hfgh-dfgd-zgsdbfe5erg', 'SignatureVersion' => 2, 'Timestamp' => '2017-07-09T15%3A45%3A18%2B00%3A00', 'Version' => '2011-10-01', 'Signature' => 'bCasdxXmYDCasdaXBhsdgse4pQ6hEbevML%2FJvzdgdsfdy2o%3D', 'SignatureMethod' => 'HmacSHA256', 'MarketplaceId' => 'ATVPDKIKX0DER', 'ASIN' => 'B007EZK19E', ] ]); ```
This code worked for me. Hope it will help someone. ``` <?php require_once('.config.inc.php'); // More endpoints are listed in the MWS Developer Guide // North America: $serviceUrl = "https://mws.amazonservices.com/Products/2011-10-01"; // Europe //$serviceUrl = "https://mws-eu.amazonservices.com/Products/2011-10-01"; // Japan //$serviceUrl = "https://mws.amazonservices.jp/Products/2011-10-01"; // China //$serviceUrl = "https://mws.amazonservices.com.cn/Products/2011-10-01"; $config = array ( 'ServiceURL' => $serviceUrl, 'ProxyHost' => null, 'ProxyPort' => -1, 'ProxyUsername' => null, 'ProxyPassword' => null, 'MaxErrorRetry' => 3, ); $service = new MarketplaceWebServiceProducts_Client( AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, APPLICATION_NAME, APPLICATION_VERSION, $config); // @TODO: set request. Action can be passed as MarketplaceWebServiceProducts_Model_GetLowestPricedOffersForSKU $request = new MarketplaceWebServiceProducts_Model_GetLowestPricedOffersForSKURequest(); $request->setSellerId(MERCHANT_ID); $request->setMWSAuthToken(MWSAUTH_TOKEN); $request->setMarketplaceId(MARKETPLACE_ID); $request->setSellerSKU($sellerSKU); $request->setItemCondition($ItemCondition); // object or array of parameters invokeGetLowestPricedOffersForSKU($service, $request); function invokeGetLowestPricedOffersForSKU(MarketplaceWebServiceProducts_Interface $service, $request) { try { $response = $service->GetLowestPricedOffersForSKU($request); echo ("Service Response\n"); echo ("=============================================================================\n"); $dom = new DOMDocument(); $dom->loadXML($response->toXML()); $dom->preserveWhiteSpace = false; $dom->formatOutput = true; echo $dom->saveXML(); echo("ResponseHeaderMetadata: " . $response->getResponseHeaderMetadata() . "\n"); } catch (MarketplaceWebServiceProducts_Exception $ex) { echo("Caught Exception: " . $ex->getMessage() . "\n"); echo("Response Status Code: " . $ex->getStatusCode() . "\n"); echo("Error Code: " . $ex->getErrorCode() . "\n"); echo("Error Type: " . $ex->getErrorType() . "\n"); echo("Request ID: " . $ex->getRequestId() . "\n"); echo("XML: " . $ex->getXML() . "\n"); echo("ResponseHeaderMetadata: " . $ex->getResponseHeaderMetadata() . "\n"); } } ?> ```
36,293,755
I want to apply the value of a dropdown on a website. There is a button tag in the source HTML code. After clicking the button (button.click), I select the dropdown value for all tag having class as "dropdownAvailable" from the Option tag. Although the value of the dropdown seems to change, the page doesn't change accordingly. The HTML code for the Select tag is like below. ``` <select name="dropdown_selected_size" autocomplete="off" data-a-touch-header="Size" id="selected_size" class="a-native-dropdown"> <option id="native_size_-1" data-a-id="size_-1" selected>Select</option> <option class="dropdownAvailable" id="native_size_0" data-a-id="size_0" data-a-html-content="40.5">40.5</option> <option class="dropdownUnavailable" id="native_size_1" data-a-id="size_1" data-a-html-content="40.5">40.5</option> <option class="dropdownAvailable" id="native_size_2" data-a-id="size_2" data-a-html-content="41">41</option> <option class="dropdownUnavailable" id="native_size_3" data-a-id="size_3" data-a-html-content="41">41</option> <option class="dropdownAvailable" id="native_size_4" data-a-id="size_4" data-a-html-content="42">42</option> </select> ``` I have tried all of the below options for the dropdown to be selected. 1. ie.document.getElementById("selected\_size").onchange 2. ie.document.getElementById("selected\_size").FireEvent ("onchange") 3. ie.document.getElementById("selected\_size").getElementsByTagName("option")(n).Click 4. ie.document.getElementById("selected\_size").Click 5. Application.SendKeys "{TAB}" 6. Application.SendKeys "{ENTER}" 7. Application.SendKeys "~" I'm using IE Version 11 and MS Excel 2013.
2016/03/29
[ "https://Stackoverflow.com/questions/36293755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5860809/" ]
Posting in case anyone else comes to this and is as confused as I was. There is a fundamental difference with how nearly all Amazon MWS requests work except this particular one. All other requests technically accept the parameters as query parameters instead of POST data. The scratchpad even suggests this is how it is actually working (although the MWS Scratchpad actually sends the data as Post Data Fields also).
As @eComEvoFor stated, for this particular method Amazon demands that you specify the params in the body of the request. In Node.js, using the axios library, you can do this:

```
const paramsSorted = {}
Object.keys(params)
  .sort()
  .forEach((key) => {
    paramsSorted[key] = params[key]
  })

const data = new URLSearchParams(paramsSorted)

url = urljoin(this.marketplace.url, this.api, this.api.Version)

const response = await axios.post(url, data)
```
36,293,755
I want to apply the value of a dropdown on a website. There is a button tag in the source HTML code. After clicking the button (button.click), I select the dropdown value for all tag having class as "dropdownAvailable" from the Option tag. Although the value of the dropdown seems to change, the page doesn't change accordingly. The HTML code for the Select tag is like below. ``` <select name="dropdown_selected_size" autocomplete="off" data-a-touch-header="Size" id="selected_size" class="a-native-dropdown"> <option id="native_size_-1" data-a-id="size_-1" selected>Select</option> <option class="dropdownAvailable" id="native_size_0" data-a-id="size_0" data-a-html-content="40.5">40.5</option> <option class="dropdownUnavailable" id="native_size_1" data-a-id="size_1" data-a-html-content="40.5">40.5</option> <option class="dropdownAvailable" id="native_size_2" data-a-id="size_2" data-a-html-content="41">41</option> <option class="dropdownUnavailable" id="native_size_3" data-a-id="size_3" data-a-html-content="41">41</option> <option class="dropdownAvailable" id="native_size_4" data-a-id="size_4" data-a-html-content="42">42</option> </select> ``` I have tried all of the below options for the dropdown to be selected. 1. ie.document.getElementById("selected\_size").onchange 2. ie.document.getElementById("selected\_size").FireEvent ("onchange") 3. ie.document.getElementById("selected\_size").getElementsByTagName("option")(n).Click 4. ie.document.getElementById("selected\_size").Click 5. Application.SendKeys "{TAB}" 6. Application.SendKeys "{ENTER}" 7. Application.SendKeys "~" I'm using IE Version 11 and MS Excel 2013.
2016/03/29
[ "https://Stackoverflow.com/questions/36293755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5860809/" ]
MWS needs the POST data passed as form params instead of as a query string for some operations. Otherwise, it pukes a `Failed processing arguments of org.jboss.resteasy.spi.metadata` style `400 Bad Request` error for some operations such as this one (`GetMyFeesEstimate` is another that suffers from this). For instance, if you did the following POST request in Guzzle 6 then you'd likely get the error: ``` $response = $client->request('POST', 'https://mws.amazonservices.com/Products/2011-10-01/?AWSAccessKeyId=YOURAWSACCESSKEY&Action=GetLowestPricedOffersForASIN&SellerId=YOURSELLERID&MWSAuthToken=amzn.mws.fghsffg-4t44e-hfgh-dfgd-zgsdbfe5erg&SignatureVersion=2&Timestamp=2017-07-09T15%3A45%3A18%2B00%3A00&Version=2011-10-01&Signature=bCasdxXmYDCasdaXBhsdgse4pQ6hEbevML%2FJvzdgdsfdy2o%3D&SignatureMethod=HmacSHA256&MarketplaceId=ATVPDKIKX0DER&ASIN=B007EZK19E'); ``` To fix this you'd submit it as form data as in this [Guzzle 6](http://docs.guzzlephp.org/en/stable/request-options.html?highlight=post#form-params) example: ``` $response = $client->request('POST', 'https://mws.amazonservices.com/Products/2011-10-01', [ 'form_params' => [ 'AWSAccessKeyId' => 'YOURAWSACCESSKEY', 'Action' => 'GetLowestPricedOffersForASIN', 'SellerId' => 'YOURSELLERID', 'MWSAuthToken' => 'amzn.mws.fghsffg-4t44e-hfgh-dfgd-zgsdbfe5erg', 'SignatureVersion' => 2, 'Timestamp' => '2017-07-09T15%3A45%3A18%2B00%3A00', 'Version' => '2011-10-01', 'Signature' => 'bCasdxXmYDCasdaXBhsdgse4pQ6hEbevML%2FJvzdgdsfdy2o%3D', 'SignatureMethod' => 'HmacSHA256', 'MarketplaceId' => 'ATVPDKIKX0DER', 'ASIN' => 'B007EZK19E', ] ]); ```
As @eComEvoFor stated, for this particular method Amazon demands that you specify the params in the body of the request. In Node.js, using the axios library, you can do this:

```
const paramsSorted = {}
Object.keys(params)
  .sort()
  .forEach((key) => {
    paramsSorted[key] = params[key]
  })

const data = new URLSearchParams(paramsSorted)

url = urljoin(this.marketplace.url, this.api, this.api.Version)

const response = await axios.post(url, data)
```
11,428
In software engineering, there is the concept of encapsulation: hiding the details of one program from another program. The theory is that by doing this, the other program will use only details provided (the interface), not caring about the inner details of the program being used. This apparently reduces code dependency on extraneous details that can be changed at any time in the program being used. Code dependency on extraneous details is bad because one change in the extraneous details of the program being used will necessitate change in the program that uses it. That dependency of one system on another necessitates change in the dependent system if the independent one is changed is axiomatic. I was wondering if there was a formal term for this axiom, or if it could be proven?
2014/05/09
[ "https://philosophy.stackexchange.com/questions/11428", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/6672/" ]
There are a number of good books about Aristotle that will greatly, greatly aid you in your quest to read Aristotle. The best single-volume introduction to Aristotle's thought currently available in English is Christopher Shields' book [*Aristotle*](http://www.routledge.com/books/details/9780415622493/) published by Routledge. Even better, the book is arranged thematically, which will allow you to read one of the primary works, then turn automatically to the relevant chapter for Shields to tell you what has just happened. Start with the logical works. Read the *De interpretatione* and the first books of the *Prior* and *Posterior Analytics* for Aristotle's ideas on logic and method as well as some important criticisms of Plato's theory of the forms. Then move to the philosophy of nature works, namely the *Physics* and *De anima*. I'd just read the first two books of the *Physics*, and just book I of *De anima*. Then move on to the real hardcore metaphysical issues, namely books I, III, IV, VII and VIII of the *Metaphysics*. Then, for a little breather, read the *Nicomachean Ethics*, the whole thing. Finish with the *Rhetoric* and the *Politics*. If you read through all that material, together with Shields' book, you'll basically have a good beginner's grasp of Aristotelian philosophy and be ready to start looking in more detail at specific subjects.
St. Thomas Aquinas, considered one of the greatest commentators on Aristotle, only commentated on these works by him: ``` Peri Hermeneias Posteriora Analytica Physica De coelo et mundo De generatione et corruptione Super Meteora De anima De sensu et sensato De memoria et reminiscentia Ethica Tabula Ethicorum Politica Metaphysica ``` ([source](https://isidore.co/aquinas/)) And some of his commentaries are only partial (e.g., he didn't commentate on Books 13—Μ & 14—Ν of Aristotle's *Metaphysics*). He describes in his [*Sententia Ethic.*, lib. 6 l. 7 n. 17](https://isidore.co/aquinas/Ethics6.htm#7) [1211.] which subjects and in what order boys must learn: > > [T]he proper order of learning is that boys first be instructed in > things pertaining to logic because logic teaches the method of the > whole of philosophy. Next, they should be instructed in mathematics, > which does not need experience and does not exceed the imagination. > Third, in natural sciences, which, even though not exceeding sense and > imagination, nevertheless require experience. Fourth, in the moral > sciences, which require experience and a soul free from passions > … Fifth, in the sapiential and divine sciences, which exceed > imagination and require a sharp mind. > > > This is roughly the order the works are placed in the list above.
11,428
In software engineering, there is the concept of encapsulation: hiding the details of one program from another program. The theory is that by doing this, the other program will use only details provided (the interface), not caring about the inner details of the program being used. This apparently reduces code dependency on extraneous details that can be changed at any time in the program being used. Code dependency on extraneous details is bad because one change in the extraneous details of the program being used will necessitate change in the program that uses it. That dependency of one system on another necessitates change in the dependent system if the independent one is changed is axiomatic. I was wondering if there was a formal term for this axiom, or if it could be proven?
2014/05/09
[ "https://philosophy.stackexchange.com/questions/11428", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/6672/" ]
Shane's answer is great overall on what to read, but reading your title and question body again... you asked what to skip. Skip his Biology in its entirety. There are quite a few texts in there, mostly interesting only on an anecdotal level (nearly all the primary texts here <http://plato.stanford.edu/entries/aristotle-biology/>). I'm not saying that it's completely worthless -- just that it will only matter if you want to study a particular subfield in the history of philosophy of science. Some sections of the *Politics* can also be skipped, particularly the lengthy discussions of each political system in the Greek world (Bk 2, sections 8-12). Some of them are worthwhile, so consult a contemporary commentary that gives you the highlights instead of trudging through the descriptions. Also, if you will read both the *Nicomachean Ethics* and the *Politics*, you can skip the second half of *NE* Book 8.
St. Thomas Aquinas, considered one of the greatest commentators on Aristotle, only commentated on these works by him: ``` Peri Hermeneias Posteriora Analytica Physica De coelo et mundo De generatione et corruptione Super Meteora De anima De sensu et sensato De memoria et reminiscentia Ethica Tabula Ethicorum Politica Metaphysica ``` ([source](https://isidore.co/aquinas/)) And some of his commentaries are only partial (e.g., he didn't commentate on Books 13—Μ & 14—Ν of Aristotle's *Metaphysics*). He describes in his [*Sententia Ethic.*, lib. 6 l. 7 n. 17](https://isidore.co/aquinas/Ethics6.htm#7) [1211.] which subjects and in what order boys must learn: > > [T]he proper order of learning is that boys first be instructed in > things pertaining to logic because logic teaches the method of the > whole of philosophy. Next, they should be instructed in mathematics, > which does not need experience and does not exceed the imagination. > Third, in natural sciences, which, even though not exceeding sense and > imagination, nevertheless require experience. Fourth, in the moral > sciences, which require experience and a soul free from passions > … Fifth, in the sapiential and divine sciences, which exceed > imagination and require a sharp mind. > > > This is roughly the order the works are placed in the list above.
65,492,137
I was reading about callback functions [here](https://www.freecodecamp.org/news/javascript-callback-functions-what-are-callbacks-in-js-and-how-to-use-them/#:%7E:text=Callbacks%20make%20sure%20that%20a,safe%20from%20problems%20and%20errors.) (also in an online course which I am participating) and now I am stuck. The reason is that I cannot understand Why do I need to use callback functions if I can simply call them. Exemples below: 1 - Using callback functions: ``` function showArticle(id, callbackSuccess, callbackError){ if (true){ callbackSuccess("This is a callback function", "It is very utilized."); } else { callbackError("Error on data recovery."); } } var callbackSuccess = function(title, description){ document.write("<h1>" + title + "</h1>"); document.write("<hr>"); document.write("<p>" + description + "</p>"); } var callbackError = function(error){ document.write("<p><b>Erro:</b>" + error + "</p>"); } showArticle(1, callbackSuccess, callbackError); ``` 2 - Here is my code not using callback functions and having the same results: ``` function showArticle(id){ if (true){ callbackSuccess("This is a callback function", "It is very utilized."); } else { callbackError("Error on data recovery."); } } var callbackSuccess = function(title, description){ document.write("<h1>" + title + "</h1>"); document.write("<hr>"); document.write("<p>" + description + "</p>"); } var callbackError = function(error){ document.write("<p><b>Erro:</b>" + error + "</p>"); } showArticle(1); ``` Why should I use callback functions and not simply calling them in the example 2?
2020/12/29
[ "https://Stackoverflow.com/questions/65492137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11119760/" ]
You're right, there's no point to the callback functions in the example you've given, but that's not how callback functions are normally used. Typically, callbacks are used: 1. By iteration, mapping, or filtering functions that call your callback for every element in an array, list, or other container 2. By functions that perform asynchronous work that call your callback when the work is successfully completed, fails, or both (depending on the style of the API you're calling) 3. By functions that accept callbacks they'll call when or if something happens, such as a `click` event handler on a DOM element ...but there are other categories as well. The `filter` function on arrays is an example of #1: It calls the callback for each entry in the array, using the return value of the callback to decide whether the entry should be in the new, filtered array it returns: ```js const numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; const odds = numbers.filter(num => { console.log(`callback called with ${num}`); // logs 10 times return num % 2; }); console.log(odds); // [1, 3, 5, 7, 9] ``` The Promise methods `then`, `catch`, and `finally` are examples of #2. Let's assume we have a `startSomethingAsynchronous` function that returns a promise. Here's how the fulfillment and rejection handlers (callbacks) might be set up: ``` startSomethingAsynchronous() // Starts an asynchronous process .then(result => { // <−+ // ...do something with the result... // +− This is a fulfillment handler }) // <−+ .catch(error => { // <−+ // ...report or handle error... // +− This is a rejection handler }); // <−+ ``` The fulfillment handler is called if the promise from `startSomethingAsynchronous()` is fulfilled (successfully completed). The rejection handler is called if that promise is rejected (fails), or if that promise is fulfilled but the promise returned by `then` rejects (for instance, because an error occurs in the fulfillment handler). (Chaining things like this is fairly common, but there are lots of other ways to use promises, this is just one example.) The `addEventListener` function in the DOM is an example of #3: ```js document.querySelector("input[type=button]") .addEventListener("click", function() { console.log("Clicked!"); // logs as many times as you click the button }); ``` ```html <input type="button" value="Click Me"> ```
> > JavaScript runs code sequentially in top-down order. However, there are some cases that code runs (or must run) after something else happens and also not sequentially. This is called asynchronous programming. > > > Callbacks make sure that a function is not going to run before a task is completed but will run right after the task has completed. It helps us develop asynchronous JavaScript code and keeps us safe from problems and errors. > > > In JavaScript, the way to create a callback function is to pass it as a parameter to another function, and then to call it back right after something has happened or some task is completed. > > > - freecodecamp.org You can read more [here](https://www.freecodecamp.org/news/javascript-callback-functions-what-are-callbacks-in-js-and-how-to-use-them/).
65,492,137
I was reading about callback functions [here](https://www.freecodecamp.org/news/javascript-callback-functions-what-are-callbacks-in-js-and-how-to-use-them/#:%7E:text=Callbacks%20make%20sure%20that%20a,safe%20from%20problems%20and%20errors.) (also in an online course which I am participating) and now I am stuck. The reason is that I cannot understand Why do I need to use callback functions if I can simply call them. Exemples below: 1 - Using callback functions: ``` function showArticle(id, callbackSuccess, callbackError){ if (true){ callbackSuccess("This is a callback function", "It is very utilized."); } else { callbackError("Error on data recovery."); } } var callbackSuccess = function(title, description){ document.write("<h1>" + title + "</h1>"); document.write("<hr>"); document.write("<p>" + description + "</p>"); } var callbackError = function(error){ document.write("<p><b>Erro:</b>" + error + "</p>"); } showArticle(1, callbackSuccess, callbackError); ``` 2 - Here is my code not using callback functions and having the same results: ``` function showArticle(id){ if (true){ callbackSuccess("This is a callback function", "It is very utilized."); } else { callbackError("Error on data recovery."); } } var callbackSuccess = function(title, description){ document.write("<h1>" + title + "</h1>"); document.write("<hr>"); document.write("<p>" + description + "</p>"); } var callbackError = function(error){ document.write("<p><b>Erro:</b>" + error + "</p>"); } showArticle(1); ``` Why should I use callback functions and not simply calling them in the example 2?
2020/12/29
[ "https://Stackoverflow.com/questions/65492137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11119760/" ]
You're right, there's no point to the callback functions in the example you've given, but that's not how callback functions are normally used. Typically, callbacks are used: 1. By iteration, mapping, or filtering functions that call your callback for every element in an array, list, or other container 2. By functions that perform asynchronous work that call your callback when the work is successfully completed, fails, or both (depending on the style of the API you're calling) 3. By functions that accept callbacks they'll call when or if something happens, such as a `click` event handler on a DOM element ...but there are other categories as well. The `filter` function on arrays is an example of #1: It calls the callback for each entry in the array, using the return value of the callback to decide whether the entry should be in the new, filtered array it returns: ```js const numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; const odds = numbers.filter(num => { console.log(`callback called with ${num}`); // logs 10 times return num % 2; }); console.log(odds); // [1, 3, 5, 7, 9] ``` The Promise methods `then`, `catch`, and `finally` are examples of #2. Let's assume we have a `startSomethingAsynchronous` function that returns a promise. Here's how the fulfillment and rejection handlers (callbacks) might be set up: ``` startSomethingAsynchronous() // Starts an asynchronous process .then(result => { // <−+ // ...do something with the result... // +− This is a fulfillment handler }) // <−+ .catch(error => { // <−+ // ...report or handle error... // +− This is a rejection handler }); // <−+ ``` The fulfillment handler is called if the promise from `startSomethingAsynchronous()` is fulfilled (successfully completed). The rejection handler is called if that promise is rejected (fails), or if that promise is fulfilled but the promise returned by `then` rejects (for instance, because an error occurs in the fulfillment handler). (Chaining things like this is fairly common, but there are lots of other ways to use promises, this is just one example.) The `addEventListener` function in the DOM is an example of #3: ```js document.querySelector("input[type=button]") .addEventListener("click", function() { console.log("Clicked!"); // logs as many times as you click the button }); ``` ```html <input type="button" value="Click Me"> ```
*The above answers are right. Mine is just a simplification of one of the reasons (the async nature).*

Note that **NOT everything is sequential.**

* A call to a database could take 100ms, 200ms, or 1s to return data.
* Reading a file whose size you do not know could take `X` seconds.

In cases where you do not know how long an operation will take, you use the callback approach, and that is a JavaScript **feature**. Some languages block the *flow* ("I will wait for the call to the database to return") or create threads ("I will execute those operations in another 'process'"). JS goes with Promises and callbacks.
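The point about operations of unknown duration is not specific to JavaScript. As a cross-language illustration, here is a minimal Java sketch of the same idea using `CompletableFuture`; the simulated slow "database" call and all names in it are illustrative assumptions, not anything from the posts above.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class CallbackSketch {

    // Simulates a call whose duration we cannot know in advance (e.g. a database query).
    static CompletableFuture<String> fetchArticle(int id) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(200); // stand-in for network/disk latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "Article " + id;
        });
    }

    public static void main(String[] args) {
        CompletableFuture<Void> pending = fetchArticle(1)
                .thenAccept(article -> System.out.println("Success: " + article)) // success callback
                .exceptionally(error -> {                                          // error callback
                    System.out.println("Error: " + error.getMessage());
                    return null;
                });

        System.out.println("This line runs before the article arrives.");

        pending.join(); // keep the demo alive until the callback has fired
    }
}
```

The structure mirrors the promise example earlier in this question's answers: `thenAccept` plays the role of the fulfillment handler, `exceptionally` the role of the rejection handler, and neither blocks the code that registers them.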
71,017,766
I have two NVidia GPUs in the machine, but I am not using them. I have three NN training running on my machine. When I am trying to run the fourth one, the script is giving me the following error: ``` my_user@my_machine:~/my_project/training_my_project$ python3 my_project.py Traceback (most recent call last): File "my_project.py", line 211, in <module> load_data( File "my_project.py", line 132, in load_data tx = tf.convert_to_tensor(data_x, dtype=tf.float32) File "/home/my_user/.local/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/my_user/.local/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor return ops.EagerTensor(value, ctx.device_name, dtype) tensorflow.python.framework.errors_impl.FailedPreconditionError: Failed to allocate scratch buffer for device 0 my_user@my_machine:~/my_project/training_my_project$ ``` **How can I resolve this issue?** The following is my RAM usage: ``` my_user@my_machine:~/my_project/training_my_project$ free -m total used free shared buff/cache available Mem: 15947 6651 3650 20 5645 8952 Swap: 2047 338 1709 my_user@my_machine:~/my_project/training_my_project$ ``` The following is my CPU usage: ``` my_user@my_machine:~$ top -i top - 12:46:12 up 79 days, 21:14, 2 users, load average: 4,05, 3,82, 3,80 Tasks: 585 total, 2 running, 583 sleeping, 0 stopped, 0 zombie %Cpu(s): 11,7 us, 1,6 sy, 0,0 ni, 86,6 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st MiB Mem : 15947,7 total, 3638,3 free, 6662,7 used, 5646,7 buff/cache MiB Swap: 2048,0 total, 1709,4 free, 338,6 used. 8941,6 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2081821 my_user 20 0 48,9g 2,5g 471076 S 156,1 15,8 1832:54 python3 2082196 my_user 20 0 48,8g 2,6g 467708 S 148,5 16,8 1798:51 python3 2076942 my_user 20 0 47,8g 1,6g 466916 R 147,5 10,3 2797:51 python3 1594 gdm 20 0 3989336 65816 31120 S 0,7 0,4 38:03.14 gnome-shell 93 root rt 0 0 0 0 S 0,3 0,0 0:38.42 migration/13 1185 root -51 0 0 0 0 S 0,3 0,0 3925:59 irq/54-nvidia 2075861 root 20 0 0 0 0 I 0,3 0,0 1:30.17 kworker/22:0-events 2076418 root 20 0 0 0 0 I 0,3 0,0 1:38.65 kworker/1:0-events 2085325 root 20 0 0 0 0 I 0,3 0,0 1:17.15 kworker/3:1-events 2093002 root 20 0 0 0 0 I 0,3 0,0 1:00.05 kworker/23:0-events 2100000 root 20 0 0 0 0 I 0,3 0,0 0:45.78 kworker/2:2-events 2104688 root 20 0 0 0 0 I 0,3 0,0 0:33.08 kworker/9:0-events 2106767 root 20 0 0 0 0 I 0,3 0,0 0:25.16 kworker/20:0-events 2115469 root 20 0 0 0 0 I 0,3 0,0 0:01.98 kworker/11:2-events 2115470 root 20 0 0 0 0 I 0,3 0,0 0:01.96 kworker/12:2-events 2115477 root 20 0 0 0 0 I 0,3 0,0 0:01.95 kworker/30:1-events 2116059 my_user 20 0 23560 4508 3420 R 0,3 0,0 0:00.80 top ``` The following is my TF configuration: ``` import os os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" # os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # os.environ["CUDA_VISIBLE_DEVICES"] = "99" # Use both gpus for training. import sys, random import time import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.callbacks import ModelCheckpoint import numpy as np from lxml import etree, objectify # <editor-fold desc="GPU"> # resolve GPU related issues. 
try: physical_devices = tf.config.list_physical_devices('GPU') for gpu_instance in physical_devices: tf.config.experimental.set_memory_growth(gpu_instance, True) except Exception as e: pass # END of try # </editor-fold> ``` Please, take the commented lines as commented-out lines. **Relevant source code:** ``` def load_data(fname: str, class_index: int, feature_start_index: int, **selection): i = 0 file = open(fname) if "top_n_lines" in selection: lines = [next(file) for _ in range(int(selection["top_n_lines"]))] elif "random_n_lines" in selection: tmp_lines = file.readlines() lines = random.sample(tmp_lines, int(selection["random_n_lines"])) else: lines = file.readlines() data_x, data_y = [], [] for l in lines: row = l.strip().split() x = [float(ix) for ix in row[feature_start_index:]] y = encode(row[class_index]) data_x.append(x) data_y.append(y) # END for l in lines num_rows = len(data_x) given_fraction = selection.get("validation_part", 1.0) if given_fraction > 0.9999: valid_x, valid_y = data_x, data_y else: n = int(num_rows * given_fraction) data_x, data_y = data_x[n:], data_y[n:] valid_x, valid_y = data_x[:n], data_y[:n] # END of if-else block tx = tf.convert_to_tensor(data_x, np.float32) ty = tf.convert_to_tensor(data_y, np.float32) vx = tf.convert_to_tensor(valid_x, np.float32) vy = tf.convert_to_tensor(valid_y, np.float32) return tx, ty, vx, vy # END of the function ```
2022/02/07
[ "https://Stackoverflow.com/questions/71017766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/159072/" ]
**Using multiple GPUs**

If you are developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources.

```
gpus = tf.config.list_physical_devices('GPU')
if gpus:
  # Create 2 virtual GPUs with 1GB memory each
  try:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
```

NOTE: Virtual devices cannot be modified after being initialized.

Once there are multiple logical GPUs available to the runtime, you can utilize them with `tf.distribute.Strategy` or with *manual placement*.

Using `tf.distribute.Strategy` is the best practice for working with multiple GPUs; here is a simple example:

```
tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
  inputs = tf.keras.layers.Input(shape=(1,))
  predictions = tf.keras.layers.Dense(1)(inputs)
  model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
  model.compile(loss='mse',
                optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
```

This program will run a copy of your model on each GPU, splitting the input data between them, which is also known as "data parallelism".

For more information about [distribution strategies](https://www.tensorflow.org/guide/distributed_training) or [manual placement](https://www.tensorflow.org/guide/gpu#manual_placement), check out the guides at those links.
The RAM complaint isn't about your system RAM (call it CPU RAM); it's about your GPU RAM. The moment TF loads, it allocates all the GPU RAM for itself (a small fraction is left over due to page-size overhead). Your sample makes TF allocate GPU RAM dynamically, but it can still end up using all of it.

Use the code below to put a hard cap on GPU RAM per process; you'll likely want to change 1024 to 8096 or something like that. And FYI, use `nvidia-smi` to monitor your GPU RAM usage.

From the docs: <https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth>

```
gpus = tf.config.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
  try:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
you can solve this by using the `And()` method ``` BitArray ba = new BitArray(new bool[] { true, true, false, false, false, true, true, false }); BitArray ba2 = new BitArray(new bool[] { false, true, false, true, false, true, false, true }); int result = ba.And(ba2).Cast<bool>().Count(x => x); //2 ```
Assuming `a` and `b` have equal `Length`:

```
int[] a = new[] {1,0,1, ...};
int[] b = new[] {0,0,1, ...};

int c = 0;
for (int i = 0; i < a.Length; i++)
    c += a[i] == 1 && b[i] == 1 ? 1 : 0;
```

Simple. Time complexity is *O(n)*, where *n* is the number of elements in the arrays.
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
you can solve this by using the `And()` method ``` BitArray ba = new BitArray(new bool[] { true, true, false, false, false, true, true, false }); BitArray ba2 = new BitArray(new bool[] { false, true, false, true, false, true, false, true }); int result = ba.And(ba2).Cast<bool>().Count(x => x); //2 ```
A more concise answer: ``` bool[] A = ...; bool[] B = ...; var result = A.Where((val, ix)=>val && B[ix]).Count(); ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
Use `And` Method, and count `true`, I think this is faster than other answer. ``` var bit1 = new BitArray(new bool[]{true, false, ...}); var bit2 = new BitArray(new bool[]{false, false, ...}); var and = bit1.And(bit2); var result = 0; //Total count I think you want. for (int i = 0; i < and.Length; i++) { if (and[i]) { result++; } } ``` **UPDATE** I came up with a solution for performance improvement. Replace `popCount` to this: ``` private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); for (Int32 i = 0; i < ints.Length; i++) { Int32 c = ints[i]; if (c == 0) { continue; } // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` In my machine, when `simRes.Length > 16000000`, `if(c == 0){...}` block gives good performance. But when `simRes.Length < 16000000`, `if(c == 0){...}` block should be deleted.
you can solve this by using the `And()` method ``` BitArray ba = new BitArray(new bool[] { true, true, false, false, false, true, true, false }); BitArray ba2 = new BitArray(new bool[] { false, true, false, true, false, true, false, true }); int result = ba.And(ba2).Cast<bool>().Count(x => x); //2 ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
you can solve this by using the `And()` method ``` BitArray ba = new BitArray(new bool[] { true, true, false, false, false, true, true, false }); BitArray ba2 = new BitArray(new bool[] { false, true, false, true, false, true, false, true }); int result = ba.And(ba2).Cast<bool>().Count(x => x); //2 ```
``` static void Main() { var a = new BitArray(new bool[]{true, false,true}); var b = new BitArray(new bool[]{false, false,true}); int result = 0; int size = Math.Min( a.Length, b.Length); //or a.Length or 200000 for (int i = 0; i < size ; i++) { if (a[i] == true && b[i] == true ) { result++; } } Console.WriteLine("{0}",result); } ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
Use `And` Method, and count `true`, I think this is faster than other answer. ``` var bit1 = new BitArray(new bool[]{true, false, ...}); var bit2 = new BitArray(new bool[]{false, false, ...}); var and = bit1.And(bit2); var result = 0; //Total count I think you want. for (int i = 0; i < and.Length; i++) { if (and[i]) { result++; } } ``` **UPDATE** I came up with a solution for performance improvement. Replace `popCount` to this: ``` private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); for (Int32 i = 0; i < ints.Length; i++) { Int32 c = ints[i]; if (c == 0) { continue; } // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` In my machine, when `simRes.Length > 16000000`, `if(c == 0){...}` block gives good performance. But when `simRes.Length < 16000000`, `if(c == 0){...}` block should be deleted.
Assuming `a` and `b` have equal `Length`:

```
int[] a = new[] {1,0,1, ...};
int[] b = new[] {0,0,1, ...};

int c = 0;
for (int i = 0; i < a.Length; i++)
    c += a[i] == 1 && b[i] == 1 ? 1 : 0;
```

Simple. Time complexity is *O(n)*, where *n* is the number of elements in the arrays.
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
Assuming `a` and `b` have equal `Length`:

```
int[] a = new[] {1,0,1, ...};
int[] b = new[] {0,0,1, ...};

int c = 0;
for (int i = 0; i < a.Length; i++)
    c += a[i] == 1 && b[i] == 1 ? 1 : 0;
```

Simple. Time complexity is *O(n)*, where *n* is the number of elements in the arrays.
``` static void Main() { var a = new BitArray(new bool[]{true, false,true}); var b = new BitArray(new bool[]{false, false,true}); int result = 0; int size = Math.Min( a.Length, b.Length); //or a.Length or 200000 for (int i = 0; i < size ; i++) { if (a[i] == true && b[i] == true ) { result++; } } Console.WriteLine("{0}",result); } ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
Use `And` Method, and count `true`, I think this is faster than other answer. ``` var bit1 = new BitArray(new bool[]{true, false, ...}); var bit2 = new BitArray(new bool[]{false, false, ...}); var and = bit1.And(bit2); var result = 0; //Total count I think you want. for (int i = 0; i < and.Length; i++) { if (and[i]) { result++; } } ``` **UPDATE** I came up with a solution for performance improvement. Replace `popCount` to this: ``` private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); for (Int32 i = 0; i < ints.Length; i++) { Int32 c = ints[i]; if (c == 0) { continue; } // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` In my machine, when `simRes.Length > 16000000`, `if(c == 0){...}` block gives good performance. But when `simRes.Length < 16000000`, `if(c == 0){...}` block should be deleted.
A more concise answer: ``` bool[] A = ...; bool[] B = ...; var result = A.Where((val, ix)=>val && B[ix]).Count(); ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
A more concise answer: ``` bool[] A = ...; bool[] B = ...; var result = A.Where((val, ix)=>val && B[ix]).Count(); ```
``` static void Main() { var a = new BitArray(new bool[]{true, false,true}); var b = new BitArray(new bool[]{false, false,true}); int result = 0; int size = Math.Min( a.Length, b.Length); //or a.Length or 200000 for (int i = 0; i < size ; i++) { if (a[i] == true && b[i] == true ) { result++; } } Console.WriteLine("{0}",result); } ```
35,985,274
I have two bitarrays with each length of 200.000. I need to find how many 1's in each list at the same order. Let me draw it: ``` 1 0 **1 1** 0 0 0 1 0 0 **1 1** 1 0 0 1 .. .. ``` So the result should be 2. and I'm doing this comparison in -two nested for- about 20 million times :). I'm doing it now with bitarray with & operator than using a popCount method to find the result. So what do you suggest for this kind of problem. Where would you store these vectors and how would you compare them in a way that I want? I need speed. **UPDATE:** i ve done this with 760 lenght arrays and it took under 5 seconds with my method. Every method suggested in the comments took >1min (i stopped the program than) So i guess its me who has to answer it. I simplified my code. ``` for(i<761) var vector1 = matris[getvectorthing]; for(j=i+1<761) { var vector2 = matris[getvectorthing]; var similarityResult = vector1Temp.And(vector2); var similarityValuePay = popCount(similarityResult); //similarityValuePay is result that i want } } private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); var tempInt = ints.Where(k => k != 0).ToArray(); for (Int32 i = 0; i < tempInt.Length; i++) { Int32 c = tempInt[i]; // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` i asked it because may be there is much cleaver method or simple tuning to make performance better. For example: ``` var tempInt = ints.Where(k => k != 0).ToArray(); ``` this ToArray() seems to be a part that i need to fix. etc.
2016/03/14
[ "https://Stackoverflow.com/questions/35985274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068947/" ]
Use `And` Method, and count `true`, I think this is faster than other answer. ``` var bit1 = new BitArray(new bool[]{true, false, ...}); var bit2 = new BitArray(new bool[]{false, false, ...}); var and = bit1.And(bit2); var result = 0; //Total count I think you want. for (int i = 0; i < and.Length; i++) { if (and[i]) { result++; } } ``` **UPDATE** I came up with a solution for performance improvement. Replace `popCount` to this: ``` private static int popCount(BitArray simRes) { Int32[] ints = new Int32[(simRes.Count >> 5) + 1]; simRes.CopyTo(ints, 0); Int32 count = 0; // fix for not truncated bits in last integer that may have been set to true with SetAll() ints[ints.Length - 1] &= ~(-1 << (simRes.Count % 32)); for (Int32 i = 0; i < ints.Length; i++) { Int32 c = ints[i]; if (c == 0) { continue; } // magic (http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel) unchecked { c = c - ((c >> 1) & 0x55555555); c = (c & 0x33333333) + ((c >> 2) & 0x33333333); c = ((c + (c >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } count += c; } return count; } ``` In my machine, when `simRes.Length > 16000000`, `if(c == 0){...}` block gives good performance. But when `simRes.Length < 16000000`, `if(c == 0){...}` block should be deleted.
``` static void Main() { var a = new BitArray(new bool[]{true, false,true}); var b = new BitArray(new bool[]{false, false,true}); int result = 0; int size = Math.Min( a.Length, b.Length); //or a.Length or 200000 for (int i = 0; i < size ; i++) { if (a[i] == true && b[i] == true ) { result++; } } Console.WriteLine("{0}",result); } ```
23,144
A Una mala idea o algo sin sentido lo nombramos comunmente como algo "descabellado". El DLE explica su significado: > > [**descabellado, da**](http://dle.rae.es/?id=CZo4FB6) > > > Del part. de *descabellar.* > > > 1. adj. Que va fuera de orden, concierto o razón. > > > Allí se menciona que el término viene del participio de un verbo en desuso "descabellar:" > > [**descabellar**](http://dle.rae.es/?id=CZtcAjt) > > > De *des*- y *cabello*. > > > 2. tr. desus. Despeinar, desgreñar. Era u. m. c. prnl. > > > En la página [Significado y Origen de Expresiones Famosas](https://sigificadoyorigen.wordpress.com/2010/06/01/una-idea-descabellada/) encontré una posible explicación: > > **Una idea descabellada** > > > En la práctica discursiva de la población se desacredita un proyecto o una propuesta con esta frase. Al decir descabellada refiriéndose a una idea, algo inmaterial, cuesta componer la figura mental de algo así como un pensamiento calvo. El tema es que descabellada significa no tener pelos y como los cabellos están sujetos al cuero que cubre el cráneo, decir descabellado es como decir que no tiene cabeza, por eso las ideas descabelladas proviene de personas sin cerebro. > > > Mi problema con dicha explicación es que dice: "descabellada significa no tener pelos". Eso no es lo que dice el DLE. Además, el razonamiento que siguen para concluir que descabellado es como no tener cabeza no me termina de convencer. Ahora, si parto de la definición para "descabellar" en el DLE tendría que pensar en una idea despeinada o desgreñada, lo cual también se entiende metafóricamente, pero me hace preguntar si hay otra explicación más literal, o si hay alguna asociación que me estoy perdiendo ¿Es correcta la explicación de la página que referencio o hay algun otro origen?
2017/11/09
[ "https://spanish.stackexchange.com/questions/23144", "https://spanish.stackexchange.com", "https://spanish.stackexchange.com/users/17686/" ]
Resulta curioso que en el diccionario de Covarrubias de 1611, la palabra *descabellado* signifique simplemente "el que trae el cabello rebuelto [sic] y desgreñado". Y ya en el *Diccionario de Autoridades*, tomo D-F (1732), se diga: > > DESCABELLADO, DA. adj. Desgreñado. Desmelenado. > > DESCABELLADO. Se toma tambien por lo que vá fuera de orden, concierto y razón. > > DESCABELLADO. Vale tambien desproporcionado, por mui grande ò vehemente: y se toma con freqüéncia por los últimos y mui fuertes dolóres que padecen las mugéres en el parto. > > > En algún momento entre estos dos puntos debió de producirse el cambio. Veamos si encontramos algo en el CORDE. Si limitamos la búsqueda a textos hasta 1600 (la época de Covarrubias), encontramos: > > *... et con las sus vnnas se rascaua et se despedaçaua los sus muyt tiernos labros, et descabellada se messaua fuertment et se arrancaua los cabellos de la carne.* (1376-1396) > > > *... vestidas de duelo, las caras rompidas, coronas d'esparto e sogas çeñidas, > descalças e rotas e descabelladas, e tristes, amargas e desconsoladas...* (a 1435) > > > *... vieron venir a una muger desnuda y descabellada, corriendo, dando bozes...* (a 1504) > > > *... veo a las dueñas y donzellas todas descabelladas, con las caras llenas de sangre...* (1511) > > > *Aunque sea deshonrada en la tierra, descabellada, desnuda y afeada, aquél por cuyo amor yo sufro esto, tomará de ti venganza, enemigo de justicia, y te dará tu merecido.* (1583) > > > Lo primero es que me resulta curioso que prácticamente todos los casos se refieran a mujeres. Lo segundo es que, si bien es cierto que la palabra se usaba literalmente como sinónimo de *desgreñada* o *despeinada*, los ejemplos son bastante gráficos en cuanto al motivo del *despeine*. Una mujer *descabellada* no lo estaba por voluntad propia, sino por haber sido objeto de algún sufrimiento. De ahí que a partir de 1600 se comiencen a encontrar textos con la palabra como sinónimo de *dolores fuertes*, como indica el *Diccionario de Autoridades*: > > *... pues sin considerar que estoy en lo más descabellado de los dolores, no digo del aprieto de mis deudas, sino del parto de mis esperanzas...* (1613-1626) > > > *¿En las buenas nuevas risas y risa en los dolores descabellados?* (1633) > > > La causa está clara: sufres unos dolores tan fuertes que te retuerces y acabas con el pelo revuelto. El cambio hacia el sentido que nos ocupa lo puedes ver en frases como la siguiente: > > *... en espeçial cantidad de mugeres corriendo descabelladas y gritando, como suçede en lugares que se saquean...* (c 1618) > > > Al correr como locas, las mujeres del texto se despeinaban, y ese *despeine* quedaba como sinónimo de algo que se hace sin orden ni concierto. Y ese cambio que ahora podemos comprender como una evolución lógica no tardó en llegar: > > *Su descabellado enredo / en dubias inundaciones, / si hace al oro que se anegue, / hace al carmín que se ahogue.* (a 1659) > > > *Mayor era sin comparación su buena industria, en meter paces entre algunos indios que andaban en guerras, cuanto eran más descabelladas las razones en que se fundaban, que á no tenerlos bien conocidos, fuera imposible meterlos en camino.* (1676) > > >
The first definition of *descabellar* in the DLE > > 1. tr. Taurom. Matar instantáneamente al toro, hiriéndolo en la cerviz con la punta de la espada o con la puntilla. > > > This concept of severing the head from the spinal column seems to me to correspond to the meaning of *Que va fuera de orden, concierto o razón* mentioned in the question. After all if you have just lost the connection between your brain and the rest of your body ...
23,144
A Una mala idea o algo sin sentido lo nombramos comunmente como algo "descabellado". El DLE explica su significado: > > [**descabellado, da**](http://dle.rae.es/?id=CZo4FB6) > > > Del part. de *descabellar.* > > > 1. adj. Que va fuera de orden, concierto o razón. > > > Allí se menciona que el término viene del participio de un verbo en desuso "descabellar:" > > [**descabellar**](http://dle.rae.es/?id=CZtcAjt) > > > De *des*- y *cabello*. > > > 2. tr. desus. Despeinar, desgreñar. Era u. m. c. prnl. > > > En la página [Significado y Origen de Expresiones Famosas](https://sigificadoyorigen.wordpress.com/2010/06/01/una-idea-descabellada/) encontré una posible explicación: > > **Una idea descabellada** > > > En la práctica discursiva de la población se desacredita un proyecto o una propuesta con esta frase. Al decir descabellada refiriéndose a una idea, algo inmaterial, cuesta componer la figura mental de algo así como un pensamiento calvo. El tema es que descabellada significa no tener pelos y como los cabellos están sujetos al cuero que cubre el cráneo, decir descabellado es como decir que no tiene cabeza, por eso las ideas descabelladas proviene de personas sin cerebro. > > > Mi problema con dicha explicación es que dice: "descabellada significa no tener pelos". Eso no es lo que dice el DLE. Además, el razonamiento que siguen para concluir que descabellado es como no tener cabeza no me termina de convencer. Ahora, si parto de la definición para "descabellar" en el DLE tendría que pensar en una idea despeinada o desgreñada, lo cual también se entiende metafóricamente, pero me hace preguntar si hay otra explicación más literal, o si hay alguna asociación que me estoy perdiendo ¿Es correcta la explicación de la página que referencio o hay algun otro origen?
2017/11/09
[ "https://spanish.stackexchange.com/questions/23144", "https://spanish.stackexchange.com", "https://spanish.stackexchange.com/users/17686/" ]
Resulta curioso que en el diccionario de Covarrubias de 1611, la palabra *descabellado* signifique simplemente "el que trae el cabello rebuelto [sic] y desgreñado". Y ya en el *Diccionario de Autoridades*, tomo D-F (1732), se diga: > > DESCABELLADO, DA. adj. Desgreñado. Desmelenado. > > DESCABELLADO. Se toma tambien por lo que vá fuera de orden, concierto y razón. > > DESCABELLADO. Vale tambien desproporcionado, por mui grande ò vehemente: y se toma con freqüéncia por los últimos y mui fuertes dolóres que padecen las mugéres en el parto. > > > En algún momento entre estos dos puntos debió de producirse el cambio. Veamos si encontramos algo en el CORDE. Si limitamos la búsqueda a textos hasta 1600 (la época de Covarrubias), encontramos: > > *... et con las sus vnnas se rascaua et se despedaçaua los sus muyt tiernos labros, et descabellada se messaua fuertment et se arrancaua los cabellos de la carne.* (1376-1396) > > > *... vestidas de duelo, las caras rompidas, coronas d'esparto e sogas çeñidas, > descalças e rotas e descabelladas, e tristes, amargas e desconsoladas...* (a 1435) > > > *... vieron venir a una muger desnuda y descabellada, corriendo, dando bozes...* (a 1504) > > > *... veo a las dueñas y donzellas todas descabelladas, con las caras llenas de sangre...* (1511) > > > *Aunque sea deshonrada en la tierra, descabellada, desnuda y afeada, aquél por cuyo amor yo sufro esto, tomará de ti venganza, enemigo de justicia, y te dará tu merecido.* (1583) > > > Lo primero es que me resulta curioso que prácticamente todos los casos se refieran a mujeres. Lo segundo es que, si bien es cierto que la palabra se usaba literalmente como sinónimo de *desgreñada* o *despeinada*, los ejemplos son bastante gráficos en cuanto al motivo del *despeine*. Una mujer *descabellada* no lo estaba por voluntad propia, sino por haber sido objeto de algún sufrimiento. De ahí que a partir de 1600 se comiencen a encontrar textos con la palabra como sinónimo de *dolores fuertes*, como indica el *Diccionario de Autoridades*: > > *... pues sin considerar que estoy en lo más descabellado de los dolores, no digo del aprieto de mis deudas, sino del parto de mis esperanzas...* (1613-1626) > > > *¿En las buenas nuevas risas y risa en los dolores descabellados?* (1633) > > > La causa está clara: sufres unos dolores tan fuertes que te retuerces y acabas con el pelo revuelto. El cambio hacia el sentido que nos ocupa lo puedes ver en frases como la siguiente: > > *... en espeçial cantidad de mugeres corriendo descabelladas y gritando, como suçede en lugares que se saquean...* (c 1618) > > > Al correr como locas, las mujeres del texto se despeinaban, y ese *despeine* quedaba como sinónimo de algo que se hace sin orden ni concierto. Y ese cambio que ahora podemos comprender como una evolución lógica no tardó en llegar: > > *Su descabellado enredo / en dubias inundaciones, / si hace al oro que se anegue, / hace al carmín que se ahogue.* (a 1659) > > > *Mayor era sin comparación su buena industria, en meter paces entre algunos indios que andaban en guerras, cuanto eran más descabelladas las razones en que se fundaban, que á no tenerlos bien conocidos, fuera imposible meterlos en camino.* (1676) > > >
Especulación, sin fundamento que lo soporte. ¿Podría tener relación con que una mala idea haría que la gente se arrepintiera, **mesándose las barbas**, y **tirándose del pelo** (y así, quedando el pelo sin orden ni concierto?
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
I found that we can use [`Thread.holdsLock(Object obj)`](http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#holdsLock-java.lang.Object-) to check whether the current thread holds the lock on an object:

> Returns `true` if and only if the current thread holds the monitor lock on the specified object.

Note that `Thread.holdsLock()` returns `false` if the lock is held by *something else*, i.e. when the calling thread isn't the thread that holds the lock (see the short sketch below for an illustration).
I needed to find a solution to this as well, so I searched the Java concurrency API and came across [StampedLock](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html). The project is using Java 8. I am working on a heavily threaded asynchronous data service that communicates with a native library and contains long-living configuration objects, necessitating sometimes-complex concurrency logic; thankfully this turned out to be relatively simple with the StampedLock class.

StampedLock has a method called [tryOptimisticRead](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html#tryOptimisticRead--) which does not wait; it just returns the status in the form of a `long` time stamp, where zero (0) indicates that an exclusive lock is held. I then delay for up to a second, but you could just use the function without any delay.

Here's how I'm detecting whether there's an exclusive lock; this paradigm is used in multiple locations and includes error handling:

```
int delayCount = 0;

//Makes sure that if there is data being written to this field at
// this moment, wait until the operation is finished writing the
// updated data.
while (data1StampedLock.tryOptimisticRead() == 0)
{
    try
    {
        delay(WRITE_LOCK_SHORT_DELAY);
        delayCount += 1;
    } catch (InterruptedException e)
    {
        logError("Interrupted while waiting for the write lock to be released!", e);
        Thread.currentThread().interrupt();

        //There may be an issue with the JVM if this occurs, treat
        // it like we might crash and try to release the write lock.
        data1StampedLock.tryUnlockWrite();
        break;
    }

    if (delayCount * WRITE_LOCK_SHORT_DELAY > TimeUnit.SECONDS.toMillis(1))
    {
        logWarningWithAlert("Something is holding a write lock on"
                + " the data for a very, very long time (>1s). This may"
                + " indicate a problem that could cause cascading"
                + " problems in the near future."
                + " Also, the value for the data that is about to be"
                + " retrieved could potentially be invalid.");
        break;
    }
}

long nonExclusiveLockStamp = data1StampedLock.readLock();
Data data1NonVolatile = data1;
data1StampedLock.unlockRead(nonExclusiveLockStamp);

return data1NonVolatile;
```

The read locks on a StampedLock are non-exclusive and are like reading from a thread-safe Map or Hashtable, where it is multi-read/single-write.

Here is how I am using the exclusive lock to communicate to other threads that the instance data is being written to:

```
long d1LockStamp = data1StampedLock.writeLock();
this.data1 = data1;
data1StampedLock.unlockWrite(d1LockStamp);
```

So if you only want to check whether or not something is locked at any given moment, you need only something simple like the following statement to get the status:

```
boolean data1IsLocked = data1StampedLock.tryOptimisticRead() == 0;
```

Then check the value of that boolean. There are, of course, the caveats and Here Be Dragons information mentioned in other answers (namely that the information is immediately stale), but if you really need to lock something and check that lock from another thread, this seemed to me to be the most reasonable, safe, and effective way that uses the `java.util.concurrent` package with no external dependencies.
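The first answer in this pair describes `Thread.holdsLock` but does not show it in use, so here is a minimal, self-contained sketch of what it does and does not tell you; the `record` field and the printed values are illustrative, not taken from the original posts.

```java
public class HoldsLockDemo {
    private static final Object record = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Before acquiring the monitor, the current thread does not hold it.
        System.out.println(Thread.holdsLock(record)); // false

        synchronized (record) {
            // Inside the synchronized block, the calling thread holds the monitor.
            System.out.println(Thread.holdsLock(record)); // true

            // A different thread asking the same question gets false, because
            // holdsLock() only reports on the calling thread; it cannot tell
            // you that some *other* thread currently owns the lock.
            Thread other = new Thread(
                    () -> System.out.println(Thread.holdsLock(record))); // false
            other.start();
            other.join();
        }
    }
}
```

Because `holdsLock()` cannot answer "is anyone else holding this?", it works as an assertion helper rather than as a try-lock, which is why `ReentrantLock.tryLock()` is usually the better fit for the "abort if the record is locked" requirement in the question.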
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
One thing to note is that the *instant* you receive such information, it's stale. In other words, you could be told that no-one has the lock, but then when you try to acquire it, you block because another thread took out the lock between the check and you trying to acquire it. Brian is right to point at `Lock`, but I think what you really want is its [`tryLock`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#tryLock()) method: ``` Lock lock = new ReentrantLock(); ...... if (lock.tryLock()) { // Got the lock try { // Process record } finally { // Make sure to unlock so that we don't cause a deadlock lock.unlock(); } } else { // Someone else had the lock, abort } ``` You can also call `tryLock` with an amount of time to wait - so you could try to acquire it for a tenth of a second, then abort if you can't get it (for example). (I think it's a pity that the Java API doesn't - as far as I'm aware - provide the same functionality for the "built-in" locking, as the `Monitor` class does in .NET. Then again, there are plenty of other things I dislike in both platforms when it comes to threading - every object potentially having a monitor, for example!)
While the Lock answers are very good, I thought I'd post an alternative using a different data structure. Essentially, your various threads want to know which records are locked and which aren't. One way to do this is to keep track of the locked records and make sure that data structure has the right atomic operations for adding records to the locked set. I will use CopyOnWriteArrayList as an example because it's less "magic" for illustration. CopyOnWriteArraySet is a more appropriate structure. If you have lots and lots of records locked at the same time on average then there may be performance implications with these implementations. A properly synchronized HashSet would work too and locks are brief. Basically, usage code would look like this: ``` CopyOnWriteArrayList<Record> lockedRecords = .... ... if (!lockedRecords.addIfAbsent(record)) return; // didn't get the lock, record is already locked try { // Do the record stuff } finally { lockedRecords.remove(record); } ``` It keeps you from having to manage a lock per record and provides a single place should clearing all locks be necessary for some reason. On the other hand, if you ever have more than a handful of records then a real HashSet with synchronization may do better since the add/remove look-ups will be O(1) instead of linear. Just a different way of looking at things. Just depends on what your actual threading requirements are. Personally, I would use a Collections.synchronizedSet( new HashSet() ) because it will be really fast... the only implication is that threads may yield when they otherwise wouldn't have.
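For the `Collections.synchronizedSet` variant suggested at the end, here is a rough sketch (the `Record` type and method names are placeholders, not from the original post): `Set.add()` returns `false` when the element is already present, and the synchronized wrapper makes that check-and-add a single atomic step.

```
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class RecordLocks {

    // set of records that are currently being operated on
    private final Set<Record> lockedRecords =
            Collections.synchronizedSet(new HashSet<>());

    public void process(Record record) {
        // add() is a single synchronized call on the wrapped set and
        // returns false if another thread has already "locked" this record
        if (!lockedRecords.add(record)) {
            return; // record is busy - abort rather than block
        }
        try {
            performOperation(record);
        } finally {
            lockedRecords.remove(record);
        }
    }

    private void performOperation(Record record) {
        // placeholder for the real work on the record
    }
}

// placeholder for the record type from the question
class Record { }
```

This relies on the records having sensible identity semantics; the default `equals`/`hashCode` is fine as long as the same record object is shared between threads.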
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
Another workaround (in case you did not have any luck with the answers given here) is using timeouts, i.e. the call below will return `null` after hanging for 1 second:

```
ExecutorService executor = Executors.newSingleThreadExecutor();

// create a callable for the thread
Future<String> futureTask = executor.submit(new Callable<String>() {
    @Override
    public String call() throws Exception {
        return myObject.getSomething();
    }
});

try {
    return futureTask.get(1000, TimeUnit.MILLISECONDS);
} catch (InterruptedException | ExecutionException | TimeoutException e) {
    // the object is probably still locked - check the exception type
    return null;
}
```
Thanks for this, it helped me out in solving a race condition. I changed it a little to wear both belt and suspenders. **So here is my suggestion for AN IMPROVEMENT of the accepted answer:** You can ensure that you get safe access to the `tryLock()` method by doing something like this:

```java
Lock localLock = new ReentrantLock();

private void threadSafeCall() {
    boolean isUnlocked = false;

    synchronized (localLock) {
        isUnlocked = localLock.tryLock();
    }

    if (isUnlocked) {
        try {
            rawCall();
        } finally {
            localLock.unlock();
        }
    } else {
        LOGGER.log(Level.INFO, "THANKS! - SAVED FROM DOUBLE CALL!");
    }
}
```

This would avoid the situation where two threads call `tryLock()` at almost the same time, making the return value potentially doubtful. I'd like to know if I'm wrong - I might be overcautious here. But hey! My gig is stable now :-). Read more on my development issues at my [Blog](http://theengine.schwartzengine.com/#home).
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
While the Lock answers are very good, I thought I'd post an alternative using a different data structure. Essentially, your various threads want to know which records are locked and which aren't. One way to do this is to keep track of the locked records and make sure that data structure has the right atomic operations for adding records to the locked set. I will use CopyOnWriteArrayList as an example because it's less "magic" for illustration. CopyOnWriteArraySet is a more appropriate structure. If you have lots and lots of records locked at the same time on average then there may be performance implications with these implementations. A properly synchronized HashSet would work too and locks are brief. Basically, usage code would look like this: ``` CopyOnWriteArrayList<Record> lockedRecords = .... ... if (!lockedRecords.addIfAbsent(record)) return; // didn't get the lock, record is already locked try { // Do the record stuff } finally { lockedRecords.remove(record); } ``` It keeps you from having to manage a lock per record and provides a single place should clearing all locks be necessary for some reason. On the other hand, if you ever have more than a handful of records then a real HashSet with synchronization may do better since the add/remove look-ups will be O(1) instead of linear. Just a different way of looking at things. Just depends on what your actual threading requirements are. Personally, I would use a Collections.synchronizedSet( new HashSet() ) because it will be really fast... the only implication is that threads may yield when they otherwise wouldn't have.
I needed to also find a solution to this, so searched the Java Concurrency API and came across [StampedLock](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html). The project is using Java 8. I am working in a heavily-threaded asynchronous data service that communicates with a native library and contains long-living configuration objects, necessitating sometimes-complex concurrency logic; thankfully this turned out to be relatively simple with the StampedLock class. StampedLock has a method called [tryOptimisticRead](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html#tryOptimisticRead--) which does not wait, it just returns the status in the form of a long-time time stamp, where zero (0) indicates an exclusive lock is held. I then do delay for up to a second but you could just use the function without any sort of delay. Here's how I'm detecting whether or not there's an exclusive lock, this paradigm is used in multiple locations and includes error handling: ``` int delayCount = 0; //Makes sure that if there is data being written to this field at // this moment, wait until the operation is finished writing the // updated data. while (data1StampedLock.tryOptimisticRead() == 0) { try { delay(WRITE_LOCK_SHORT_DELAY); delayCount += 1; } catch (InterruptedException e) { logError("Interrupted while waiting for the write lock to be released!", e); Thread.currentThread().interrupt(); //There may be an issue with the JVM if this occurs, treat // it like we might crash and try to release the write lock. data1StampedLock.tryUnlockWrite(); break; } if (delayCount * WRITE_LOCK_SHORT_DELAY > TimeUnit.SECONDS.toMillis(1)) { logWarningWithAlert("Something is holding a write lock on" + " the data for a very, very long time (>1s). This may" + " indicate a problem that could cause cascading" + " problems in the near future." + " Also, the value for the data that is about to be" + " retrieved could potentially be invalid."); break; } } long nonExclusiveLockStamp = data1StampedLock.readLock(); Data data1NonVolatile = data1; data1StampedLock.unlockRead(nonExclusiveLockStamp); return data1NonVolatile; ``` The read locks on a StampedLock are non-exclusive and are like reading from a thread-safe Map or HashTable, where it is multi-read/single-write. Here is how I am using the exclusive lock to communicate to other threads that the instance data is being written to: ``` long d1LockStamp = data1StampedLock.writeLock(); this.data1 = data1; data1StampedLock.unlockWrite(d1LockStamp); ``` So if you wanted to only check whether or not something is locked at any given moment, you need only something simple like the following statement to get the status: ``` boolean data1IsLocked = data1StampedLock.tryOptimisticRead() == 0; ``` Then check the value of that boolean. There are, of course, the caveats and Here Be Dragons information mentioned in other answers (namely that the information is immediately stale), but if you really need to lock something and check that lock from another thread, this seemed to me to be the most reasonable, safe, and effective way that uses the java.util.concurrency package with no external dependencies.
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
One thing to note is that the *instant* you receive such information, it's stale. In other words, you could be told that no-one has the lock, but then when you try to acquire it, you block because another thread took out the lock between the check and you trying to acquire it. Brian is right to point at `Lock`, but I think what you really want is its [`tryLock`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#tryLock()) method: ``` Lock lock = new ReentrantLock(); ...... if (lock.tryLock()) { // Got the lock try { // Process record } finally { // Make sure to unlock so that we don't cause a deadlock lock.unlock(); } } else { // Someone else had the lock, abort } ``` You can also call `tryLock` with an amount of time to wait - so you could try to acquire it for a tenth of a second, then abort if you can't get it (for example). (I think it's a pity that the Java API doesn't - as far as I'm aware - provide the same functionality for the "built-in" locking, as the `Monitor` class does in .NET. Then again, there are plenty of other things I dislike in both platforms when it comes to threading - every object potentially having a monitor, for example!)
Another workaround (in case you did not have any luck with the answers given here) is using timeouts, i.e. the call below will return `null` after hanging for 1 second:

```
ExecutorService executor = Executors.newSingleThreadExecutor();

// create a callable for the thread
Future<String> futureTask = executor.submit(new Callable<String>() {
    @Override
    public String call() throws Exception {
        return myObject.getSomething();
    }
});

try {
    return futureTask.get(1000, TimeUnit.MILLISECONDS);
} catch (InterruptedException | ExecutionException | TimeoutException e) {
    // the object is probably still locked - check the exception type
    return null;
}
```
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
One thing to note is that the *instant* you receive such information, it's stale. In other words, you could be told that no-one has the lock, but then when you try to acquire it, you block because another thread took out the lock between the check and you trying to acquire it. Brian is right to point at `Lock`, but I think what you really want is its [`tryLock`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#tryLock()) method: ``` Lock lock = new ReentrantLock(); ...... if (lock.tryLock()) { // Got the lock try { // Process record } finally { // Make sure to unlock so that we don't cause a deadlock lock.unlock(); } } else { // Someone else had the lock, abort } ``` You can also call `tryLock` with an amount of time to wait - so you could try to acquire it for a tenth of a second, then abort if you can't get it (for example). (I think it's a pity that the Java API doesn't - as far as I'm aware - provide the same functionality for the "built-in" locking, as the `Monitor` class does in .NET. Then again, there are plenty of other things I dislike in both platforms when it comes to threading - every object potentially having a monitor, for example!)
Take a look at the [Lock](http://java.sun.com/docs/books/tutorial/essential/concurrency/newlocks.html) objects introduced in the Java 5 concurrency packages. e.g. ``` Lock lock = new ReentrantLock() if (lock.tryLock()) { try { // do stuff using the lock... } finally { lock.unlock(); } } ... ``` The [ReentrantLock](http://java.sun.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html) object is essentially doing the same thing as the traditional `synchronized` mechanism, but with more functionality. EDIT: As Jon has noted, the `isLocked()` method tells you at *that instant*, and thereafter that information is out of date. The [tryLock()](http://www.ensta.fr/~diam/java/online/jdk/150/api/java/util/concurrent/locks/ReentrantLock.html#tryLock%28%29) method will give more reliable operation (note you can use this with a timeout as well) EDIT #2: Example now includes `tryLock()/unlock()` for clarity.
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
Whilst the above approach using a Lock object is the best way to do it, if you have to be able to check for locking using a monitor, it can be done. However, it does come with a health warning as the technique isn't portable to non Oracle Java VMs and it may break in future VM versions as it isn't a supported public API. Here is how to do it: ``` private static sun.misc.Unsafe getUnsafe() { try { Field field = sun.misc.Unsafe.class.getDeclaredField("theUnsafe"); field.setAccessible(true); return (Unsafe) field.get(null); } catch (Exception e) { throw new RuntimeException(e); } } public void doSomething() { Object record = new Object(); sun.misc.Unsafe unsafe = getUnsafe(); if (unsafe.tryMonitorEnter(record)) { try { // record is locked - perform operations on it } finally { unsafe.monitorExit(record); } } else { // could not lock record } } ``` My advice would be to use this approach only if you cannot refactor your code to use java.util.concurrent Lock objects for this and if you are running on an Oracle VM.
I needed to also find a solution to this, so searched the Java Concurrency API and came across [StampedLock](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html). The project is using Java 8. I am working in a heavily-threaded asynchronous data service that communicates with a native library and contains long-living configuration objects, necessitating sometimes-complex concurrency logic; thankfully this turned out to be relatively simple with the StampedLock class. StampedLock has a method called [tryOptimisticRead](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html#tryOptimisticRead--) which does not wait, it just returns the status in the form of a long-time time stamp, where zero (0) indicates an exclusive lock is held. I then do delay for up to a second but you could just use the function without any sort of delay. Here's how I'm detecting whether or not there's an exclusive lock, this paradigm is used in multiple locations and includes error handling: ``` int delayCount = 0; //Makes sure that if there is data being written to this field at // this moment, wait until the operation is finished writing the // updated data. while (data1StampedLock.tryOptimisticRead() == 0) { try { delay(WRITE_LOCK_SHORT_DELAY); delayCount += 1; } catch (InterruptedException e) { logError("Interrupted while waiting for the write lock to be released!", e); Thread.currentThread().interrupt(); //There may be an issue with the JVM if this occurs, treat // it like we might crash and try to release the write lock. data1StampedLock.tryUnlockWrite(); break; } if (delayCount * WRITE_LOCK_SHORT_DELAY > TimeUnit.SECONDS.toMillis(1)) { logWarningWithAlert("Something is holding a write lock on" + " the data for a very, very long time (>1s). This may" + " indicate a problem that could cause cascading" + " problems in the near future." + " Also, the value for the data that is about to be" + " retrieved could potentially be invalid."); break; } } long nonExclusiveLockStamp = data1StampedLock.readLock(); Data data1NonVolatile = data1; data1StampedLock.unlockRead(nonExclusiveLockStamp); return data1NonVolatile; ``` The read locks on a StampedLock are non-exclusive and are like reading from a thread-safe Map or HashTable, where it is multi-read/single-write. Here is how I am using the exclusive lock to communicate to other threads that the instance data is being written to: ``` long d1LockStamp = data1StampedLock.writeLock(); this.data1 = data1; data1StampedLock.unlockWrite(d1LockStamp); ``` So if you wanted to only check whether or not something is locked at any given moment, you need only something simple like the following statement to get the status: ``` boolean data1IsLocked = data1StampedLock.tryOptimisticRead() == 0; ``` Then check the value of that boolean. There are, of course, the caveats and Here Be Dragons information mentioned in other answers (namely that the information is immediately stale), but if you really need to lock something and check that lock from another thread, this seemed to me to be the most reasonable, safe, and effective way that uses the java.util.concurrency package with no external dependencies.
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
I needed to also find a solution to this, so searched the Java Concurrency API and came across [StampedLock](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html). The project is using Java 8. I am working in a heavily-threaded asynchronous data service that communicates with a native library and contains long-living configuration objects, necessitating sometimes-complex concurrency logic; thankfully this turned out to be relatively simple with the StampedLock class. StampedLock has a method called [tryOptimisticRead](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html#tryOptimisticRead--) which does not wait, it just returns the status in the form of a long-time time stamp, where zero (0) indicates an exclusive lock is held. I then do delay for up to a second but you could just use the function without any sort of delay. Here's how I'm detecting whether or not there's an exclusive lock, this paradigm is used in multiple locations and includes error handling: ``` int delayCount = 0; //Makes sure that if there is data being written to this field at // this moment, wait until the operation is finished writing the // updated data. while (data1StampedLock.tryOptimisticRead() == 0) { try { delay(WRITE_LOCK_SHORT_DELAY); delayCount += 1; } catch (InterruptedException e) { logError("Interrupted while waiting for the write lock to be released!", e); Thread.currentThread().interrupt(); //There may be an issue with the JVM if this occurs, treat // it like we might crash and try to release the write lock. data1StampedLock.tryUnlockWrite(); break; } if (delayCount * WRITE_LOCK_SHORT_DELAY > TimeUnit.SECONDS.toMillis(1)) { logWarningWithAlert("Something is holding a write lock on" + " the data for a very, very long time (>1s). This may" + " indicate a problem that could cause cascading" + " problems in the near future." + " Also, the value for the data that is about to be" + " retrieved could potentially be invalid."); break; } } long nonExclusiveLockStamp = data1StampedLock.readLock(); Data data1NonVolatile = data1; data1StampedLock.unlockRead(nonExclusiveLockStamp); return data1NonVolatile; ``` The read locks on a StampedLock are non-exclusive and are like reading from a thread-safe Map or HashTable, where it is multi-read/single-write. Here is how I am using the exclusive lock to communicate to other threads that the instance data is being written to: ``` long d1LockStamp = data1StampedLock.writeLock(); this.data1 = data1; data1StampedLock.unlockWrite(d1LockStamp); ``` So if you wanted to only check whether or not something is locked at any given moment, you need only something simple like the following statement to get the status: ``` boolean data1IsLocked = data1StampedLock.tryOptimisticRead() == 0; ``` Then check the value of that boolean. There are, of course, the caveats and Here Be Dragons information mentioned in other answers (namely that the information is immediately stale), but if you really need to lock something and check that lock from another thread, this seemed to me to be the most reasonable, safe, and effective way that uses the java.util.concurrency package with no external dependencies.
Thanks for this, it helped me out in solving a race condition. I changed it a little to wear both belt and suspenders. **So here is my suggestion for AN IMPROVEMENT of the accepted answer:** You can ensure that you get safe access to the `tryLock()` method by doing something like this:

```java
Lock localLock = new ReentrantLock();

private void threadSafeCall() {
    boolean isUnlocked = false;

    synchronized (localLock) {
        isUnlocked = localLock.tryLock();
    }

    if (isUnlocked) {
        try {
            rawCall();
        } finally {
            localLock.unlock();
        }
    } else {
        LOGGER.log(Level.INFO, "THANKS! - SAVED FROM DOUBLE CALL!");
    }
}
```

This would avoid the situation where two threads call `tryLock()` at almost the same time, making the return value potentially doubtful. I'd like to know if I'm wrong - I might be overcautious here. But hey! My gig is stable now :-). Read more on my development issues at my [Blog](http://theengine.schwartzengine.com/#home).
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
One thing to note is that the *instant* you receive such information, it's stale. In other words, you could be told that no-one has the lock, but then when you try to acquire it, you block because another thread took out the lock between the check and you trying to acquire it. Brian is right to point at `Lock`, but I think what you really want is its [`tryLock`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#tryLock()) method: ``` Lock lock = new ReentrantLock(); ...... if (lock.tryLock()) { // Got the lock try { // Process record } finally { // Make sure to unlock so that we don't cause a deadlock lock.unlock(); } } else { // Someone else had the lock, abort } ``` You can also call `tryLock` with an amount of time to wait - so you could try to acquire it for a tenth of a second, then abort if you can't get it (for example). (I think it's a pity that the Java API doesn't - as far as I'm aware - provide the same functionality for the "built-in" locking, as the `Monitor` class does in .NET. Then again, there are plenty of other things I dislike in both platforms when it comes to threading - every object potentially having a monitor, for example!)
Whilst the above approach using a Lock object is the best way to do it, if you have to be able to check for locking using a monitor, it can be done. However, it does come with a health warning as the technique isn't portable to non Oracle Java VMs and it may break in future VM versions as it isn't a supported public API. Here is how to do it: ``` private static sun.misc.Unsafe getUnsafe() { try { Field field = sun.misc.Unsafe.class.getDeclaredField("theUnsafe"); field.setAccessible(true); return (Unsafe) field.get(null); } catch (Exception e) { throw new RuntimeException(e); } } public void doSomething() { Object record = new Object(); sun.misc.Unsafe unsafe = getUnsafe(); if (unsafe.tryMonitorEnter(record)) { try { // record is locked - perform operations on it } finally { unsafe.monitorExit(record); } } else { // could not lock record } } ``` My advice would be to use this approach only if you cannot refactor your code to use java.util.concurrent Lock objects for this and if you are running on an Oracle VM.
1,779,795
I have a process A that contains a table in memory with a set of records (recordA, recordB, etc...) Now, this process can launch many threads that affect the records, and sometimes we can have 2 threads trying to access the same record - this situation must be denied. Specifically if a record is LOCKED by one thread I want the other thread to abort (I do not want to BLOCK or WAIT). Currently I do something like this: ``` synchronized(record) { performOperation(record); } ``` But this is causing me problems ... because while Process1 is performing the operation, if Process2 comes in it blocks/waits on the synchronized statement and when Process1 is finished it performs the operation. Instead I want something like this: ``` if (record is locked) return; synchronized(record) { performOperation(record); } ``` Any clues on how this can be accomplished? Any help would be much appreciated. Thanks,
2009/11/22
[ "https://Stackoverflow.com/questions/1779795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161200/" ]
One thing to note is that the *instant* you receive such information, it's stale. In other words, you could be told that no-one has the lock, but then when you try to acquire it, you block because another thread took out the lock between the check and you trying to acquire it. Brian is right to point at `Lock`, but I think what you really want is its [`tryLock`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/locks/Lock.html#tryLock()) method: ``` Lock lock = new ReentrantLock(); ...... if (lock.tryLock()) { // Got the lock try { // Process record } finally { // Make sure to unlock so that we don't cause a deadlock lock.unlock(); } } else { // Someone else had the lock, abort } ``` You can also call `tryLock` with an amount of time to wait - so you could try to acquire it for a tenth of a second, then abort if you can't get it (for example). (I think it's a pity that the Java API doesn't - as far as I'm aware - provide the same functionality for the "built-in" locking, as the `Monitor` class does in .NET. Then again, there are plenty of other things I dislike in both platforms when it comes to threading - every object potentially having a monitor, for example!)
I needed to also find a solution to this, so searched the Java Concurrency API and came across [StampedLock](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html). The project is using Java 8. I am working in a heavily-threaded asynchronous data service that communicates with a native library and contains long-living configuration objects, necessitating sometimes-complex concurrency logic; thankfully this turned out to be relatively simple with the StampedLock class. StampedLock has a method called [tryOptimisticRead](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/StampedLock.html#tryOptimisticRead--) which does not wait, it just returns the status in the form of a long-time time stamp, where zero (0) indicates an exclusive lock is held. I then do delay for up to a second but you could just use the function without any sort of delay. Here's how I'm detecting whether or not there's an exclusive lock, this paradigm is used in multiple locations and includes error handling: ``` int delayCount = 0; //Makes sure that if there is data being written to this field at // this moment, wait until the operation is finished writing the // updated data. while (data1StampedLock.tryOptimisticRead() == 0) { try { delay(WRITE_LOCK_SHORT_DELAY); delayCount += 1; } catch (InterruptedException e) { logError("Interrupted while waiting for the write lock to be released!", e); Thread.currentThread().interrupt(); //There may be an issue with the JVM if this occurs, treat // it like we might crash and try to release the write lock. data1StampedLock.tryUnlockWrite(); break; } if (delayCount * WRITE_LOCK_SHORT_DELAY > TimeUnit.SECONDS.toMillis(1)) { logWarningWithAlert("Something is holding a write lock on" + " the data for a very, very long time (>1s). This may" + " indicate a problem that could cause cascading" + " problems in the near future." + " Also, the value for the data that is about to be" + " retrieved could potentially be invalid."); break; } } long nonExclusiveLockStamp = data1StampedLock.readLock(); Data data1NonVolatile = data1; data1StampedLock.unlockRead(nonExclusiveLockStamp); return data1NonVolatile; ``` The read locks on a StampedLock are non-exclusive and are like reading from a thread-safe Map or HashTable, where it is multi-read/single-write. Here is how I am using the exclusive lock to communicate to other threads that the instance data is being written to: ``` long d1LockStamp = data1StampedLock.writeLock(); this.data1 = data1; data1StampedLock.unlockWrite(d1LockStamp); ``` So if you wanted to only check whether or not something is locked at any given moment, you need only something simple like the following statement to get the status: ``` boolean data1IsLocked = data1StampedLock.tryOptimisticRead() == 0; ``` Then check the value of that boolean. There are, of course, the caveats and Here Be Dragons information mentioned in other answers (namely that the information is immediately stale), but if you really need to lock something and check that lock from another thread, this seemed to me to be the most reasonable, safe, and effective way that uses the java.util.concurrency package with no external dependencies.
57,432,617
I use Guzzle in my Laravel 5.8 project I'm trying to make a GET to a URL (signed cert) serve on `https` "<https://172.1.1.1:443/accounts>" ``` public static function get($url) { // dd($url); try { $client = new Client(); $options = [ 'http_errors' => true, 'connect_timeout' => 3.14, 'read_timeout' => 3.14, 'timeout' => 3.14, 'curl' => array( CURLOPT_SSL_VERIFYHOST => false, CURLOPT_SSL_VERIFYPEER => false ) ]; $headers = [ 'headers' => [ 'Keep-Alive' => 'timeout=300' ] ]; $result = $client->request('GET', $url, $headers, $options); // dd($result); } catch (ConnectException $e) { //Logging::error($e); return null; } return json_decode($result->getBody(), true); } ``` I used these 2 flags already ``` CURLOPT_SSL_VERIFYHOST => false, URLOPT_SSL_VERIFYPEER => false ``` I'm not sure why I kept getting, ![](https://i.imgur.com/rzkqsUE.png)
2019/08/09
[ "https://Stackoverflow.com/questions/57432617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4480164/" ]
The only sensible explanation (short of [cosmic rays](https://en.wikipedia.org/wiki/Soft_error#Cosmic_rays_creating_energetic_neutrons_and_protons) and RAM errors) is that Client (whatever Client is) is either overriding your custom curl settings, or ignoring them. There should be no way to get that error with `CURLOPT_SSL_VERIFYHOST` & `CURLOPT_SSL_VERIFYPEER` disabled. By the way, if that is a Guzzle\Client, have you tried adding `'verify' => false` to `$options`? That's supposedly the Guzzle way of disabling it; it wouldn't surprise me if Guzzle is overriding your custom settings because Guzzle's `verify` option is not disabled (it's [enabled by default](http://docs.guzzlephp.org/en/stable/request-options.html#verify)).
I agree with @hanshenrik; I tested it and that is exactly the problem. By default the client you create has the following configuration:

```
$defaults = [
    'allow_redirects' => RedirectMiddleware::$defaultSettings,
    'http_errors'     => true,
    'decode_content'  => true,
    'verify'          => true,
    'cookies'         => false
];
```

So you have to change it to:

```
public static function get($url)
{
    // dd($url);
    try {
        $client = new Client(['verify' => false]);

        $options = [
            'http_errors' => true,
            'connect_timeout' => 3.14,
            'read_timeout' => 3.14,
            'timeout' => 3.14
        ];

        $headers = [
            'headers' => [
                'Keep-Alive' => 'timeout=300'
            ]
        ];

        $result = $client->request('GET', $url, $headers, $options);
        // dd($result);
    } catch (ConnectException $e) {
        //Logging::error($e);
        return null;
    }

    return json_decode($result->getBody(), true);
}
```
69,606,958
I am looking for the easiest way to draw a solid line on a picture and display the length of the line drawn. Attached a picture. Any idea to do it under win7? Thank you [Example](https://i.stack.imgur.com/A3U7Y.png)
2021/10/17
[ "https://Stackoverflow.com/questions/69606958", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17176089/" ]
Easiest is to go to [ImageJ](https://ij.imjoy.io/) in your browser, and click: ``` File -> Open Sample -> Cardio ``` Then click `Line Tool` and draw your line. It'll tell you the length in the info window. You could equally install it locally, but that would be harder. You could just as easily use [Photopea](https://www.photopea.com/).
To draw a line, coordinates are needed. Knowing them, it is possible to calculate the length of the line from the equation: length = sqrt((Xend - Xstart)^2 + (Yend - Ystart)^2).

For the example mentioned in the question: you can loop over every row, store the positions of the first and last coloured pixels, calculate the length between them, and compare the result with the next row. The maximal result can be kept in an auxiliary variable (updated after every row calculation).

```
// C++ implementation for drawing lines
#include <graphics.h>

// driver code
int main()
{
    // gm is Graphics mode which is a computer display
    // mode that generates image using pixels.
    // DETECT is a macro defined in "graphics.h" header file
    int gd = DETECT, gm;

    // initgraph initializes the graphics system
    // by loading a graphics driver from disk
    initgraph(&gd, &gm, "");

    // line for x1, y1, x2, y2
    line(150, 150, 450, 150);

    // line for x1, y1, x2, y2
    line(150, 200, 450, 200);

    // line for x1, y1, x2, y2
    line(150, 250, 450, 250);

    getch();

    // closegraph function closes the graphics
    // mode and deallocates all memory allocated
    // by the graphics system
    closegraph();
}
```

You can use C++, for example with Code::Blocks, but to get started the easiest would be Processing: <https://processing.org/reference/line_.html>
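As a rough illustration of the row-scanning idea described above, here is a sketch in Java rather than the graphics.h example, assuming a `BufferedImage` input and an arbitrary darkness threshold for deciding which pixels belong to the line (both are assumptions, not part of the original answer):

```
import java.awt.image.BufferedImage;

public class LineLength {

    // Returns the length (in pixels) of the longest run of "coloured"
    // pixels found by scanning the image row by row.
    static double longestLine(BufferedImage img) {
        double best = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            int xStart = -1, xEnd = -1;
            for (int x = 0; x < img.getWidth(); x++) {
                if (isColoured(img.getRGB(x, y))) {
                    if (xStart < 0) xStart = x; // first coloured pixel in the row
                    xEnd = x;                   // last coloured pixel seen so far
                }
            }
            if (xStart >= 0) {
                // general distance formula; for a horizontal row the y-term is zero,
                // so this is simply xEnd - xStart
                double length = Math.sqrt(Math.pow(xEnd - xStart, 2));
                best = Math.max(best, length);
            }
        }
        return best;
    }

    // Assumption: the drawn line is dark; adjust the threshold for your image.
    static boolean isColoured(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r + g + b) / 3 < 80;
    }
}
```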
24,012,430
I can only get correct output for decimal less than five. the output for five is 110 when it should be 101. the output for six is 101 and the output for ten is 1100. ``` //while loop divides each digit while (decimal > 0) { //divides and digit becomes the remainder int digit = decimal % 2; //makes the digit into a string builder so it can be reversed binaryresult.append(digit); decimal = decimal / 2; display.setText(binaryresult.reverse()); } ```
2014/06/03
[ "https://Stackoverflow.com/questions/24012430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3657245/" ]
Use the code below; it may work for you:

```
import java.util.Scanner;

public class decToBinary {

    public String decToBin(int n) {
        StringBuilder result = new StringBuilder();
        int i = 0;
        // 33 slots are enough for any positive int (1-based indexing, up to 31 bits)
        int b[] = new int[33];
        while (n != 0) {
            i++;
            b[i] = n % 2;   // store the remainder (least significant bit first)
            n = n / 2;
        }
        // append the stored bits in reverse order (most significant bit first)
        for (int j = i; j > 0; j--) {
            result.append(b[j]);
        }
        return result.toString();
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter decimal no :");
        int n = sc.nextInt();
        System.out.println("binary number is :");
        decToBinary dtb = new decToBinary();
        System.out.println(dtb.decToBin(n));
    }
}
```
Something like this may be closer to what you are asking for: ``` while (decimal > 0) { result.insert(0, (char) ((decimal % 2) + '0')); decimal /= 2; } ``` It uses `insert` to avoid reversing and adds the character instead of the number. But you would be better off using a built in mechanism such as: ``` BigInteger.valueOf(decimal).toString(2) ```
24,012,430
I can only get correct output for decimal less than five. the output for five is 110 when it should be 101. the output for six is 101 and the output for ten is 1100. ``` //while loop divides each digit while (decimal > 0) { //divides and digit becomes the remainder int digit = decimal % 2; //makes the digit into a string builder so it can be reversed binaryresult.append(digit); decimal = decimal / 2; display.setText(binaryresult.reverse()); } ```
2014/06/03
[ "https://Stackoverflow.com/questions/24012430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3657245/" ]
Use the code below; it may work for you:

```
import java.util.Scanner;

public class decToBinary {

    public String decToBin(int n) {
        StringBuilder result = new StringBuilder();
        int i = 0;
        // 33 slots are enough for any positive int (1-based indexing, up to 31 bits)
        int b[] = new int[33];
        while (n != 0) {
            i++;
            b[i] = n % 2;   // store the remainder (least significant bit first)
            n = n / 2;
        }
        // append the stored bits in reverse order (most significant bit first)
        for (int j = i; j > 0; j--) {
            result.append(b[j]);
        }
        return result.toString();
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter decimal no :");
        int n = sc.nextInt();
        System.out.println("binary number is :");
        decToBinary dtb = new decToBinary();
        System.out.println(dtb.decToBin(n));
    }
}
```
I am not able to reproduce the behaviour using the following:

```
public static String decToBin(int n) {
    StringBuilder result = new StringBuilder();
    while (n > 0) {
        int dec = n % 2;
        result.append(dec);
        n = n / 2;
    }
    // reverse only once, after the loop has finished
    return result.reverse().toString();
}
```

However, consider using the built-in:

```
public static String decToBin(int n) {
    return Integer.toBinaryString(n);
}
```

(or even BigInteger as stated above)