questions | answers |
---|---|
Sympy: Equation equals zero => How to remove denominator and get factors? After some calculations on the complex number z_re and its conjugate z(bar)_re, I get the following equations:I simplify and expand and get the second or third of the three equations from the picture. This looks almost like what I want to achieve.My goal is an equation of the form: A*z_re*z(bar)_re + B*z_re + C*z(bar)_re + D = 0.How can I get rid of the denominator of the second equation (the equation is equal to zero) and extract the factors A, B, C, and D from the equation?For the example shown above the result should be: A=3, B=-1, C=-1, and D=0 | Here's some code that will do what you ask. Basically I just expressed your equation, factored it such that it is in the form <some_fraction> = 0 and got the numerator of that fraction, which is what you need.from sympy import *z_re = Symbol('z_re',Complex=True)z_re_c = conjugate(z_re)e1 = Mul(z_re,Pow(Add(z_re,Integer(-1)),Integer(-1)))e2 = Mul(z_re,z_re_c,Pow(Add(z_re,Integer(-1)),Integer(-1)),Pow(Add(z_re_c,Integer(-1)),Integer(-1)))e3 = Mul(z_re_c,Pow(Add(z_re_c,Integer(-1)),Integer(-1)))e4 = Add(e1,e2,e3)e5 =e4.factor()e6 = fraction(e5)[0] # just the numerator |
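For concreteness, here is a minimal sketch of the coefficient-extraction step. The expression below is a reconstruction of the question's example (the original picture is not available) chosen so that the expected A=3, B=-1, C=-1, D=0 comes out; the variable names are illustrative. It drops the denominator with `together`/`fraction` and reads the factors off the expanded numerator with `as_coefficients_dict`:

```python
from sympy import symbols, conjugate, together, fraction, expand

z = symbols('z_re')
zc = conjugate(z)

# reconstruction of the question's example: z/(z-1) + z*zc/((z-1)(zc-1)) + zc/(zc-1) = 0
expr = z/(z - 1) + z*zc/((z - 1)*(zc - 1)) + zc/(zc - 1)

numer, _ = fraction(together(expr))   # the equation equals zero, so only the numerator matters
numer = expand(numer)                 # -> 3*z_re*conjugate(z_re) - z_re - conjugate(z_re)

coeffs = numer.as_coefficients_dict()
A = coeffs.get(z*zc, 0)
B = coeffs.get(z, 0)
C = coeffs.get(zc, 0)
D = coeffs.get(1, 0)
print(A, B, C, D)                     # 3 -1 -1 0
```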
Unable to change value of method variable inside if statement in python def a(): no_user = False number_of_users = # gets number of users from db if number_of_users == 0: no_user = True if no_user: print("no users in db") else: print(number_of_users) When the above method is run, it never prints "no users in db", even when there are no users in db. The variable no_user in the if block warns that it is not used. I understand this is because of scope and this could have been solved if the no_user variable was global instead of a method variable. But in this case, I need it to be inside the method, and change to True when there are no users in db. | I guess you should modify the part as below: if number_of_users > 0: no_user = False since if number_of_users > 0, then there are users and no_user should be False |
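A runnable sketch of the idea; `get_user_count` is a hypothetical stand-in for the database query the question elides, and no global variable is needed, since the local flag works as long as the count really comes back as 0:

```python
# get_user_count is a hypothetical stand-in for the elided database query
def a(get_user_count):
    number_of_users = get_user_count()
    no_user = number_of_users == 0   # set the flag directly from the count
    if no_user:
        print("no users in db")
    else:
        print(number_of_users)

a(lambda: 0)   # prints: no users in db
a(lambda: 3)   # prints: 3
```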
When you run the script KeyError: 'EXIF DateTimeOriginal' I need to know the properties of an image data taken (day, time, hour, minute, second)import exifreadimport osdirectoryInput=r"C:\tekstilshiki"for filename in os.listdir(directoryInput): if filename.endswith('.jpg'): with open(r"%s\%s" % (directoryInput, "11.jpg"), 'rb') as image: # directory and name bleat exif = exifread.process_file(image) dt = str(exif['EXIF DateTimeOriginal']) # into date and time day, dtime = dt.split(" ", 1) hour, minute, second = dtime.split(":", 2)When you run the script Goes error Traceback (most recent call last): File "C:/tekstilshiki/ffd.py", line 8, in dt = str(exif['EXIF DateTimeOriginal']) KeyError: 'EXIF DateTimeOriginal'I assume that the tag name is not correctHow can I read from all EXIF properties only the key time and the capture dateng | Each instant of 'exif' can contain different keys, based on what is extracted from the image, so to avoid the "KeyError" message you need to check if 'exif' contains the key "EXIF DateTimeOriginal":import exifread, osdirectoryInput=r"C:\tekstilshiki"for filename in os.listdir(directoryInput): if filename.endswith('.jpg'): with open(os.path.join(directoryInput, filename), "rb") as image: # Change "11.jpg" to filename variable exif = exifread.process_file(image) if "DateTimeOriginal" in exif: dt = str(exif["EXIF DateTimeOriginal"]) # into date and time day, dtime = dt.split(" ", 1) hour, minute, second = dtime.split(":", 2)P.SAlthough you have used 'os.listdir' to find all the files in the selected directory, in line 6, you have hard-coded the same file "11.jpg". |
Tokenize and label text Here's a simple scanner, that tokenizes text according to certain rules, and labels the tokens.What is the best way to handle unknown characters, and label them asunknown?Is there a recommended way/library to speed things up whileaccomplishing similar results and remaining relatively simple.Example:import redef alpha(scanner,token): return token, 'a'def numeric(scanner,token): return token,'rn'def punctuation(scanner,token): return token, 'p'def superscript(scanner,token): return token, 'sn'scanner = re.Scanner([ (u"[a-zA-Z]+", alpha), (u"[.,:;!?]", punctuation), (u"[0-9]+", numeric), (u"[\xb9\u2070\xb3\xb2\u2075\u2074\u2077\u2076\u2079\u2078]", superscript), (r"[\s\n]+", None), # whitespace, newline ])tokens, _ = scanner.scan("This is a little test? With 7,9 and 6.")print tokensout:[('This', 'a'), ('is', 'a'), ('a', 'a'), ('little', 'a'), ('test', 'a'), ('?', 'p'), ('With', 'a'), ('7', 'rn'), (',', 'p'), ('9', 'rn'), ('and', 'a'), ('6', 'rn'), ('.', 'p')]ps! Defined functions will probably try to categorize the tokens further. | The re.Scanner matches patterns in the order provided. So you can provide a very general pattern at the end to catch "unknown" characters:(r".", unknown)import redef alpha(scanner,token): return token, 'a'def numeric(scanner,token): return token,'rn'def punctuation(scanner,token): return token, 'p'def superscript(scanner,token): return token, 'sn'def unknown(scanner,token): return token, 'uk'scanner = re.Scanner([ (r"[a-zA-Z]+", alpha), (r"[.,:;!?]", punctuation), (r"[0-9]+", numeric), (r"[\xb9\u2070\xb3\xb2\u2075\u2074\u2077\u2076\u2079\u2078]", superscript), (r"[\s\n]+", None), # whitespace, newline (r".", unknown) ])tokens, _ = scanner.scan("This is a little test? With 7,9 and 6. \xa0-\xaf")print tokensyields[('This', 'a'), ('is', 'a'), ('a', 'a'), ('little', 'a'), ('test', 'a'), ('?', 'p'), ('With', 'a'), ('7', 'rn'), (',', 'p'), ('9', 'rn'), ('and', 'a'), ('6', 'rn'), ('.', 'p'), ('\xa0', 'uk'), ('-', 'uk'), ('\xaf', 'uk')]Some of your patterns are unicode, and one is a str. It is true that in Python2 the pattern and the strings to be matched can be either unicode or str.However, in Python3: Unicode strings and 8-bit strings cannot be mixed: that is, you cannot match an Unicode string with a byte pattern or vice-versaIt is good practice, therefore, not to mix them, even in Python2.I think your code is wonderfully simple (except for superscript regex. Eek!). I don't know of a library which would make it any simpler. |
Return average array for each element of list of arrays with columns and rows of fixed shape I have got multiple arrays with 1000 rows and 500 columns and I want to return an array which takes each element (row i and column j) of the arrays and calculates its average. I have tried the following:listofarrays=[array1,array2,array3,array4,...,arrayx]lst1=[]newavgarray=[]n=1000m=500for i in range(0,n): for j in range(0,m): for h in range(0,len(listofarrays)): arraynumber=listofarrays[h] lst1=backgroundnumber[i,j].append avg=lst1.mean() newavgarray=avg.append()print(newavgarray) | You can review the documentation on the average of a matrix here: https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.matrix.mean.html |
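The linked page boils down to stacking the arrays and averaging over the new axis. A short sketch, assuming the arrays are (or can be converted to) NumPy arrays of identical shape; the random arrays simply stand in for array1..arrayx:

```python
import numpy as np

# stand-ins for array1, array2, ..., arrayx with the shape from the question
listofarrays = [np.random.rand(1000, 500) for _ in range(4)]

stacked = np.stack(listofarrays)     # shape (len(listofarrays), 1000, 500)
newavgarray = stacked.mean(axis=0)   # element-wise average, shape (1000, 500)
print(newavgarray.shape)
```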
How do I select corresponding column field values in a dataframe? So I have created a data frame as follows -|id | Image_name | result | classified |-------------------------------------------------|01 | 1.bmp | 0 | 10 ||02 | 2.bmp | 1 | 11 ||03 | 3.bmp | 0 | 10 ||04 | 4.bmp | 2 | 12 |Now, in my directory I have a folder called images, where I have all the .bmp files stored (1.bmp, 2.bmp, 3.bmp, 4.bmp and so on ). I am trying to write a script the automatically finds those files in the "Image_name" in the data frame and returns their result and classified values respectively. import pandas as pd import glob import os data = pd.read_csv("filename.csv") for file in glob.glob("*.bmp"): fname = os.path.basename(file)So this was my initial code, I want to find all fnames extracted and then check if the following fname exists in the dataframe and display it with its result and classified columns. | First get all the images names from the folder and store in a listall_files_names=os.listdir("#path to the dir") df.loc[df['Image_name'].isin(all_files_names)]Output (assuming all four are there) id Image_name result classified0 1 1.bmp 0 101 2 2.bmp 1 112 3 3.bmp 0 103 4 4.bmp 2 12 |
Rendering multiple HTML pages in Python Flask (Heroku) App I am trying to serve multiple HTML Pages to a single page and then serve that final single page as a PDF. I have a total of 95 pages and I have already achieved this using the following stack;Python/ FlaskWeasyPrint HTML to PDF CreatorJinja Templating using include {% include 'page1.html' %}{% include 'page2.html' %}......{% include 'page95.html' %}Heroku deploymentgunicorn and nginx along side Flask in productionMy problem is, the final page takes more than 80seconds to display as a PDF(i.e: the final html page containing 95 other html pages). And Heroku can maintain a connection only for 28-30 seconds. Is there any way I can speed up this process of serving the final PDF?Will multi-threading help this? (I may have to read up on how to do this - not an expert) I already have this in my app app.run(threaded=True)Apologies if I am using any unclear terms here. | After trying out a few things, I think the best way to reduce the time is to simply use Reportlab and make PDF out of single pages. Then I will be using pyPDF2to merge all those single pages into one single PDF file to download. I will mark this as the answer, if I am able to execute it successfully! |
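A sketch of the merge step described above, assuming an older PyPDF2 release where the merger class is named PdfFileMerger (newer releases expose it as PdfMerger); the page file names are hypothetical:

```python
from PyPDF2 import PdfFileMerger

merger = PdfFileMerger()
for i in range(1, 96):                # 95 single-page PDFs rendered separately, e.g. with ReportLab
    merger.append(f"page{i}.pdf")     # hypothetical file names
merger.write("final_report.pdf")
merger.close()
```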
Keras ValueError: Dimensions must be equal LSTM I'm creating a Bidirectional LSTM but I faced following errorValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250]I have no idea what is wrong and how to fix it!I have a text dataset with 59k row for train the model and i would divid them into 15 classes which then I would use for text similarity base on classes for the received new text.Based on the other post I played with loss but still it doesn't solve the issue.Here is the model plot:Also sequential model would be as follow:model_lstm = Sequential()model_lstm.add(InputLayer(250,))model_lstm.add(Embedding(input_dim=max_words+1, output_dim=200, weights=[embedding_matrix], mask_zero=True, trainable= True, name='corpus_embed')) enc_lstm = Bidirectional(LSTM(128, activation='sigmoid', return_sequences=True, name='LSTM_Encod'))model_lstm.add(enc_lstm)model_lstm.add(Dropout(0.25))model_lstm.add(Bidirectional(LSTM( 128, activation='sigmoid',dropout=0.25, return_sequences=True, name='LSTM_Decod')))model_lstm.add(Dropout(0.25))model_lstm.add(Dense(15, activation='softmax'))model_lstm.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['Accuracy'])## Feed the modelhistory = model_lstm.fit(x=corpus_seq_train, y=target_seq_train, batch_size=128, epochs=50, validation_data=(corpus_seq_test,target_seq_test), callbacks=[tensorboard], sample_weight= sample_wt_mat)This is the model summary:Model: "sequential"_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= corpus_embed (Embedding) (None, 250, 200) 4000200 bidirectional (Bidirectiona (None, 250, 256) 336896 l) dropout (Dropout) (None, 250, 256) 0 bidirectional_1 (Bidirectio (None, 250, 256) 394240 nal) dropout_1 (Dropout) (None, 250, 256) 0 dense (Dense) (None, 250, 15) 3855 =================================================================Total params: 4,735,191Trainable params: 4,735,191Non-trainable params: 0_________________________________and dataset shape:corpus_seq_train.shape, target_seq_train.shape((59597, 250), (59597, 5, 8205))Finally, here is the error:Epoch 1/50---------------------------------------------------------------------------ValueError Traceback (most recent call last)C:\Users\AMIRSH~1\AppData\Local\Temp/ipykernel_10004/3838451254.py in <module> 9 ## Feed the model 10 ---> 11 history = model_lstm.fit(x=corpus_seq_train, 12 y=target_seq_train, 13 batch_size=128,C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__)---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tbC:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in tf__train_function(iterator) 13 try: 14 do_return = True---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope) 16 except: 17 do_return = FalseValueError: in user code: File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1051, in train_function * return step_function(self, iterator) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1040, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File 
"C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1030, in run_step ** outputs = model.train_step(data) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 894, in train_step return self.compute_metrics(x, y, y_pred, sample_weight) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 987, in compute_metrics self.compiled_metrics.update_state(y, y_pred, sample_weight) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\compile_utils.py", line 501, in update_state metric_obj.update_state(y_t, y_p, sample_weight=mask) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 70, in decorated update_op = update_state_fn(*args, **kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 140, in update_state_fn return ag_update_state(*args, **kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\base_metric.py", line 646, in update_state ** matches = ag_fn(y_true, y_pred, **self._fn_kwargs) File "C:\ProgramData\Anaconda3\lib\site-packages\keras\metrics\metrics.py", line 3295, in categorical_accuracy return metrics_utils.sparse_categorical_matches( File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\metrics_utils.py", line 893, in sparse_categorical_matches matches = tf.cast(tf.equal(y_true, y_pred), backend.floatx()) ValueError: Dimensions must be equal, but are 5 and 250 for '{{node Equal}} = Equal[T=DT_INT64, incompatible_shape_error=true](ArgMax, ArgMax_1)' with input shapes: [?,5], [?,250]. | the problem is because of the Loss function and y-label shape.we should not pad y_label and it should fit the model directly without any other process |
Mypy loses type of the TypedDict when unpacked I have the following code when trying to spread the dictionaryfrom typing import TypedDictclass MyDict(TypedDict): foo: intdef test(inp: MyDict): m: MyDict = inp # OK n: MyDict = {**inp} # <-- ERRORI receive an error Expression of type "dict[str, object]" cannot be assigned to declared typeAny idea how can I preserve the type after spread? | Currently, neither mypy nor pyright is smart enough to infer the type of an unpacked dict. See https://github.com/python/mypy/issues/4122.As a workaround use typing.cast:from typing import cast, TypedDictclass MyDict(TypedDict): foo: intdef test(inp: MyDict): m: MyDict = inp # OK n: MyDict = cast(MyDict, {**inp}) # OK |
Encounter an issue while trying to remove unicode emojis from strings I am having a problem removing unicode emojis from my string. Here, I am providing some examples that I've seen in my data['\\\\ud83d\\\\ude0e', '\\\\ud83e\\\\udd20', '\\\\ud83e\\\\udd23', '\\\\ud83d\\\\udc4d', '\\\\ud83d\\\\ude43', '\\\\ud83d\\\\ude31', '\\\\ud83d\\\\ude14', '\\\\ud83d\\\\udcaa', '\\\\ud83d\\\\ude0e', '\\\\ud83d\\\\ude09', '\\\\ud83d\\\\ude09', '\\\\ud83d\\\\ude18','\\\\ud83d\\\\ude01' , '\\\\ud83d\\\\ude44', '\\\\ud83d\\\\ude17']I would like to remind that these are just some examples, not all of them and they are actually inside some strings in my data.Here is the function I tried to remove themdef remove_emojis(data): emoji_pattern = re.compile( u"(\\\\ud83d[\\\\ude00-\\\\ude4f])|" # emoticons u"(\\\\ud83c[\\\\udf00-\\\\uffff])|" # symbols & pictographs (1 of 2) u"(\\\\ud83d[\\\\u0000-\\\\uddff])|" # symbols & pictographs (2 of 2) u"(\\\\ud83d[\\\\ude80-\\\\udeff])|" # transport & map symbols u"(\\\\ud83c[\\\\udde0-\\\\uddff])" # flags (iOS) "+", flags=re.UNICODE) return re.sub(emoji_pattern, '', data)If I use "Naja, gegen dich ist sie ein Waisenknabe \\\\ud83d\\\\ude02\\\\ud83d\\\\ude02\\\\ud83d\\\\ude02" as an input, my output is "Naja, gegen dich ist sie ein Waisenknabe \\\\ude02\\\\ude02\\\\ude02". However my desired output should be "Naja, gegen dich ist sie ein Waisenknabe ".What is the mistake that I am doing and how can I fix that to get my desired results. | Since your text does not contain emoji chars themselves, but their representations in hexadecimal notation (\uXXXX), you can usedata = re.sub(r'\s*(?:\\+u[a-fA-F0-9]{4})+', '', data)Details:\s* - zero or more whitespaces(?:\\+u[a-fA-F0-9]{4})+ - one or more sequences of\\+ - one or more backslashesu - a u char[a-fA-F0-9]{4} - four hex chars.See the regex demo. |
Alphvantage Intraday API has not been working for last few days, API is throwing back "Key Error: "Time Series (1min)'" Have been trying to query intraday series, but the call is failing with the below error.Can someone please help me resolve this error? Code is really simple, just querying USG equity symbol for 1 min interval from APIdata, meta_data = av_ts.get_intraday(symbol='USG',interval='1min', outputsize='full')This is the error: --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-12-3e7fe1bd42d0> in <module>() ----> 1 data, meta_data = av_ts.get_intraday(symbol='USG',interval='1min', outputsize='full') c:\users\sampleuser\appdata\local\programs\python\python36\lib\site-packages\alpha_vantage\alphavantage.py in _format_wrapper(self, *args, **kwargs) 175 if 'json' in self.output_format.lower() or 'pandas' \ 176 in self.output_format.lower(): --> 177 data = call_response[data_key] 178 if meta_data_key `enter code here`is not None: 179 meta_data = call_response[meta_data_key] KeyError: 'Time Series (1min)' | USG traded on April 18th, and on the 23rd, but not since then.The API is having trouble offering 1-minute updates of an issue that doesn't trade. |
discord.py: Adding role to user throws error I need help with a bot event. Whenever I add the code, it gives me this error:Ignoring exception in on_member_joinTraceback (most recent call last): File "/home/runner/Tragic-Bot/venv/lib/python3.8/site-packages/discord/client.py", line 343, in _run_event await coro(*args, **kwargs) File "main.py", line 20, in on_member_join await client.add_roles(member, role)AttributeError: 'Bot' object has no attribute 'add_roles'Here is the code:@client.eventasync def on_member_join(member): guild = client.get_guild(999108955437551666) channel = guild.get_channel(999108956020555798) role = discord.utils.get(guild.roles, id=999356633023000586) await client.add_roles(member, role) await channel.send(f'Hello, {member.mention} :partying_face:! Welcome to Tragic Tech! Make sure to read the rules so you wont get banned right away!')If anyone could help me that would be great. This happens on replit.com, and I am new so any help is great. Thanks! | The error already tells you: 'Bot' object has no attribute 'add_roles' -> Bot.add_roles() doesn't exist. But what exists is Member.add_roles().So, your code would look like that:@client.eventasync def on_member_join(member): guild = client.get_guild(999108955437551666) channel = guild.get_channel(999108956020555798) role = discord.utils.get(guild.roles, id=999356633023000586) # add_roles() is a method of `discord.Member` await member.add_roles(role) await channel.send('your message')References:discord.MemberMember.add_roles() |
How to handle swapping variables in a pythonic way? I often have the case where I use two variables, one of them being the "current" value of something, another one a "newly retrieved" one.After checking for equality (and a relevant action taken), they are swapped. This is then repeated in a loop.import timeimport randomdef get_new(): # the new value is retrieved here, form a service or whatever vals = [x for x in range(3)] return random.choice(vals)current = Nonewhile True: # get a new value new = get_new() if new != current: print('a change!') else: print('no change :(') current = new time.sleep(1)This solution works but I feel that it is a naïve approach and I think I remember (for "write pythonic code" series of talks) that there are better ways.What is the pythonic way to handle such mechanism? | Really, all you have is a simple iteration over a sequence, and you want to detect changes from one item to the next. First, define an iterator that provides values from get_new:# Each element is a return value of get_new(), until it returns None.# You can choose a different sentinel value as necessary.sequence = iter(get_new, None)Then, get two copies of the iterator, one to use as a source for current values, the other for new values.i1, i2 = itertools.tee(sequence)Throw out the first value from one of the iterators:next(i2)Finally, iterate over the two zipped together. Putting it all together:current_source, new_source = tee(iter(get_new, None))next(new_source)for current, new in zip(current_source, new_source): if new != current: ... else: ... time.sleep(1)Using itertoolz.cons:current_source, new_source = tee(iter(get_new, None))for current, new in zip(cons(None, current_source), new_source)): ... |
How to get response parameters in Django? I want to implement login with twitter without using any library for my django app. I am sending the user to login page using a request function in views by passing the tokens which is successfully going to the twitter login page.Twitter redirects user to a url which I have configured aslogin/twitter/callbackHow do I access the parameters sent by twitter on this url using a view ? | Please note that your case requires to extract parameters from a request. Twitter redirects the user to your application, and the redirect makes a request to your serverYou can get using the following approach:assuming your URL is called as GET login/twitter/callback?token=abc123def twitter_callback(request): token = request.GET.get('token') if token is None: # do something if parameter is missing # do other thingsIf your view expects other methods, you can get them as well:POST myurl/path?param=abc123def post_view(request): param = request.POST.get('param') |
How do you add a CSS style to a HTML file with a python http.server? I have a simple http server running from python which returns an HTML file as a GET request. The HTMl file just has some input and it is sent correctly but is not styled even though it is linked to a CSS file. Here is the server.py:from http.server import BaseHTTPRequestHandler, HTTPServerimport timehostName = "localhost"serverPort = 8080class MyServer(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() h = open("main.html", "rb") self.wfile.write(h.read()) def do_POST(s): if s.path == '/': s.path = './main.html'if __name__ == "__main__": webServer = HTTPServer((hostName, serverPort), MyServer) print("Server started http://%s:%s" % (hostName, serverPort)) try: webServer.serve_forever() except KeyboardInterrupt: pass webServer.server_close() print("Server stopped.")main.html is this<html><head> <link rel="stylesheet" href="./output.css"></head><body> <h1 class="mx-auto text-center mt-8 px-0 py-8 border border-4 border-solid border-gray-600" style="width: 700!important;">CHECKBOX INPUT</h1> <div class="flex h-full mx-auto"> <form action=""> <div class="w-3/4 py-10 px-8"> <table class="table-auto"> <thead> <tr> <th class="py-10 h-4"> <div class="mr-64"> <input type="checkbox" class="form-checkbox h-8 w-8"> <label class="ml-4">test</label> </div> </th> </tr> <tr> <th class="py-10 h-4"> <div class="mr-64"> <input type="checkbox" class="form-checkbox h-8 w-8"> <label class="ml-4"></label> </div> </th> </tr> <tr> <th class="py-10 h-4"> <div class="mr-64"> <input type="checkbox" class="form-checkbox h-8 w-8"> <label class="ml-4">test</label> </div> </th> </tr> <tr> <th class="py-10 h-4"> <div class="mr-64"> <input type="checkbox" class="form-checkbox h-8 w-8"> <label class="ml-4">test</label> </div> </th> </tr> <tr> <th class="px-4 py-10 h-4"> <div class="mx-auto"> <span>TEXT INPUT:</span> <input type="text" class="form-input mt-4"> <select> <option value="value1" selected>DROPDOWN</option> <option value="valeu2">Value 2</option> <option value="valeu3">Value 3</option> </select> </div> </th> </tr> <tr> <th class="px-4 py-10 h-4"> <div class="mx-auto"> </div> </th> </tr> <tr> <th class="px-4 py-10 h-4"> <div class="mx-auto"> <input type="submit" value="Submit" class="bg-gray-600 p-4 border-0 border-solid rounded-lg"> </div> </th> </tr> </thead> </table> </div> </form> </div></body></html>Even though the HTMl file is linked to output.css when I host the server it returns the HTML file without any stylingThank you in advance | Its really complicated to do so, because you have to create new server that serve your css file.**Better you used Powerfull & popular solutions likeFlask and Django **where you can configure these files easily.for more info about Flaskhttps://pymbook.readthedocs.io/en/latest/flask.html |
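If switching frameworks is not an option, the standard library alone can serve the stylesheet. The sketch below is an alternative to the Flask/Django route suggested in the answer, not the answer author's code: it returns output.css with a text/css Content-Type and main.html for everything else.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MyServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # pick the file and MIME type based on the requested path
        if self.path == "/output.css":
            filename, content_type = "output.css", "text/css"
        else:
            filename, content_type = "main.html", "text/html"
        self.send_response(200)
        self.send_header("Content-type", content_type)
        self.end_headers()
        with open(filename, "rb") as f:
            self.wfile.write(f.read())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MyServer).serve_forever()
```

For static files in general, http.server.SimpleHTTPRequestHandler already serves files from the current directory with appropriate MIME types, which may be simpler still.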
Blank terminal screen unable to type anything in Platformio-ide-terminal in Atom I have installed platformio-ide-terminal in Atom for working on python project. But when I open the terminal it shows blank screen with no option to write anything.Blank terminal screenCan anyone please help me out with this. I have also tried terminal plus and still the same issue. | Below worked for me :Open 'Atom' -> 'File' -> 'Settings' -> 'Packages' -> 'Settings' for platformio-ide-terminal -> Scroll down to 'Shell override' and pass correct path of command prompt.e.g., |
finding angles 0-360 in arctan I need help with a math issue:I need to get the angle from 0 until 360 degrees but this code gives the angle between -90 until 90 degrees:N = math.cos(β * (math.pi / 180)) * math.tan((f + ω) * (math.pi / 180))N2 = math.atan(N) * (180 / math.pi)I want to N2 change between 0 to 360 degrees. | Use atan2 like soimport mathmath.atan2(-0.1, 0.1) + math.piThe problem is atan does not know which quadrant you are in, while atan2 does as it accepts an x and y coordinate as input.If you compute atan(y / x) you need to switch things around so you compute atan2(y, x) instead. I can't understand how that relates to your example exactly. |
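To make the 0-360 idea explicit, here is an illustrative snippet; x and y are placeholders for the cosine/tangent components the question builds from beta, f and omega, not values from the original problem:

```python
import math

y, x = -0.1, 0.1
# atan2 keeps the quadrant; the modulo folds the (-180, 180] result into [0, 360)
angle = math.degrees(math.atan2(y, x)) % 360
print(angle)   # 315.0, rather than the -45.0 that atan alone would suggest
```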
Django.db.models.deletion related_objects takes 3 positional arguments I'm upgrading my project from Django 2.2 to 3.2 and wracking my brain at what seems to be a bug in their code.I have a test that does a simple DELETE request to a resource (incidentally a DjangoRestFramework resource, DRF version is 3.12.4), and a crash happens inside django.db.models.deletion. here is the relevant part of the stack trace: response = admin_client.delete( reverse(self.url_name, args=[project.pk, category.pk]),> content_type='application/json' )test_category_views.py:609: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/rn/venvs/lib/python3.6/site-packages/django/test/client.py:795: in delete response = super().delete(path, data=data, content_type=content_type, secure=secure, **extra)/home/rn/venvs/lib/python3.6/site-packages/django/test/client.py:447: in delete secure=secure, **extra)/home/rn/venvs/lib/python3.6/site-packages/django/test/client.py:473: in generic return self.request(**r)/home/rn/venvs/lib/python3.6/site-packages/django/test/client.py:719: in request self.check_exception(response)/home/rn/venvs/lib/python3.6/site-packages/django/test/client.py:580: in check_exception raise exc_value/home/rn/venvs/lib/python3.6/site-packages/django/core/handlers/exception.py:47: in inner response = get_response(request)../../../hazardlog/platform.py:127: in _get_response response = self.process_exception_by_middleware(e, request)../../../hazardlog/platform.py:125: in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs)/usr/local/lib/python3.6/contextlib.py:52: in inner return func(*args, **kwds)/home/rn/venvs/lib/python3.6/site-packages/django/views/decorators/csrf.py:54: in wrapped_view return view_func(*args, **kwargs)/home/rn/venvs/lib/python3.6/site-packages/django/views/generic/base.py:70: in view return self.dispatch(request, *args, **kwargs)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/views.py:509: in dispatch response = self.handle_exception(exc)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/views.py:469: in handle_exception self.raise_uncaught_exception(exc)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/views.py:480: in raise_uncaught_exception raise exc/home/rn/venvs/lib/python3.6/site-packages/rest_framework/views.py:506: in dispatch response = handler(request, *args, **kwargs)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/generics.py:291: in delete return self.destroy(request, *args, **kwargs)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/mixins.py:91: in destroy self.perform_destroy(instance)/home/rn/venvs/lib/python3.6/site-packages/rest_framework/mixins.py:95: in perform_destroy instance.delete()/home/rn/venvs/lib/python3.6/site-packages/django/db/models/base.py:953: in delete collector.collect([self], keep_parents=keep_parents)_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <django.db.models.deletion.Collector object at 0x7f7870265ac8>objs = [<Category: Test category 0>], source = None, nullable = Falsecollect_related = True, source_attr = None, reverse_dependency = Falsekeep_parents = False, fail_on_restricted = True def collect(self, objs, source=None, nullable=False, collect_related=True, source_attr=None, reverse_dependency=False, keep_parents=False, fail_on_restricted=True): """ Add 'objs' to the collection of objects to be deleted as well as all parent instances. 
'objs' must be a homogeneous iterable collection of model instances (e.g. a QuerySet). If 'collect_related' is True, related objects will be handled by their respective on_delete handler. If the call is the result of a cascade, 'source' should be the model that caused it and 'nullable' should be set to True, if the relation can be null. If 'reverse_dependency' is True, 'source' will be deleted before the current model, rather than after. (Needed for cascading to parent models, the one case in which the cascade follows the forwards direction of an FK rather than the reverse direction.) If 'keep_parents' is True, data of parent model's will be not deleted. If 'fail_on_restricted' is False, error won't be raised even if it's prohibited to delete such objects due to RESTRICT, that defers restricted object checking in recursive calls where the top-level call may need to collect more objects to determine whether restricted ones can be deleted. """ if self.can_fast_delete(objs): self.fast_deletes.append(objs) return new_objs = self.add(objs, source, nullable, reverse_dependency=reverse_dependency) if not new_objs: return model = new_objs[0].__class__ if not keep_parents: # Recursively collect concrete model's parent models, but not their # related objects. These will be found by meta.get_fields() concrete_model = model._meta.concrete_model for ptr in concrete_model._meta.parents.values(): if ptr: parent_objs = [getattr(obj, ptr.name) for obj in new_objs] self.collect(parent_objs, source=model, source_attr=ptr.remote_field.related_name, collect_related=False, reverse_dependency=True, fail_on_restricted=False) if not collect_related: return if keep_parents: parents = set(model._meta.get_parent_list()) model_fast_deletes = defaultdict(list) protected_objects = defaultdict(list) for related in get_candidate_relations_to_delete(model._meta): # Preserve parent reverse relationships if keep_parents=True. if keep_parents and related.model in parents: continue field = related.field if field.remote_field.on_delete == DO_NOTHING: continue related_model = related.related_model if self.can_fast_delete(related_model, from_field=field): model_fast_deletes[related_model].append(field) continue batches = self.get_del_batches(new_objs, [field]) for batch in batches:> sub_objs = self.related_objects(related_model, [field], batch)E TypeError: related_objects() takes 3 positional arguments but 4 were given/home/rn/venvs/ramrisk/lib/python3.6/site-packages/django/db/models/deletion.py:282: TypeErrorSo, related_objects is called with 4 positional args even though it supposedly only accepts 3. Python counts self with these, so the 4 is correct. It gets called with self, related_model, [field], and batch. Great. So, let's look at the definition of self.related_objects in django.db.models.deletion: def related_objects(self, related_model, related_fields, objs): """ Get a QuerySet of the related model to objs via related fields. """ predicate = reduce(operator.or_, ( query_utils.Q(**{'%s__in' % related_field.name: objs}) for related_field in related_fields )) return related_model._base_manager.using(self.using).filter(predicate)Right, well, that quite clearly takes 4 parameters as I'd expect, so where does that TypeError come from? Keep in mind, I've only shown you Django 3.2 code and none of my own. Even if I somehow put something crazy into these variables, they should still never be able to produce that error... | Alright, found my answer. 
Actually this is probably something no-one would have been able to guess, but I just want to share what I learned.So I was right, the error does not make sense, because it doesn't fit with the function signature. It should never be able to happen. So how to debug that?Well, my first instinct was that maybe this function definition is replaced at runtime somewhere. How would I check that? I set a breakpoint at the line that calls the method, and then in my debugger, I doimport inspectinspect.getmodule(self.related_objects)This gave me my answer. Seems like we've monkey-patched that function to decorate it with extra functionality, but now that the Django version is upgraded, the expected signature is changed.This is exactly why Monkey patching is dangerous and not to be used carelessly. Lesson here: If you do monkey patching, be sure to revisit all of those whenever you upgrade the lib versions. This one failed, so I had to find out what was happening, but it could easily have been more sinister. It could have silently done something different and incorrect, because of small changes to the lib, which the monkey patch was unaware of. |
Handling assertion in python I can't understand why this code:x='aaaa'try: self.assertTrue(x==y)except: print (x)generates me this errorAssertionError: False is not TrueIt should be handle it byprint(x)EDIToriginal code is:try: self.assertTrue('aaaa'==self.compare_object_name[1])except: print ('aaa')@Space_C0wb0y I can't give you full code because it is not my code, and I don't have a permission. | You should include the code that defines the assertTrue method. From the output you get, I'd say that it actually does not throw an exception, but deals with it internally (thus the error message being printed, and not your value).You can use the built-in assert statement of Python, which works as expected:x = 'aaaa'y = 'bbb'try: assert(x == y)except: print (x)Output:>>> aaaa |
python processing a log file and stripping characters I am making a quick log parse tool:findme = 'important 'logf = file('new.txt')newlines = [] for line in logf: if findme in line: line.partition("as follows: ")[2] newlines.append(line) outfile = file('out.txt', 'w')outfile.writelines(newlines)Not sure how I should go about using something like partition to remove the text "as follows: " and everything before it on a per line basis. I get no error, but the text I am trying to strip remains in the output. | Also, I'm a little confused about the line line.partition("as follows: ")[2]. It simply does nothing. Maybe you wanted line = line.partition("as follows: ")[2]? By the way, it is better to just write each line in the for loop instead of a giant writelines at the end. Your current solution will use lots of memory for large files and not work at all with infinite files. That final version would look like this:findme = 'important 'outfile = open('out.txt', 'w')for line in open('new.txt'): if findme in line: outfile.write(line.partition('as follows: ')[2]) |
How do I install sklearn module properly? I'm trying to install the sklearn module using the pip command but after the installation is completed, all I can see is this folder C:\Users\Aditi\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\sklearn-0.0-py3.8.egg-info in my directory, and the error still says module named sklearn not found. I've tried reinstalling it many times but still I'm not able to see the main sklearn folder in the above directory. Only 1 folder is installed, i.e. sklearn-0.0-py3.8.egg-info. Can anyone please help? | Try to install using the command pip install scikit-learn, or you can use pip install sklearn, but I prefer the first one. If it still does not work for you, you can update or reinstall numpy. You can check here all the help related to installation and verifying the installation of scikit-learn |
Validation Accuracy stuck at .5073 I am trying to create a regression model but my validation accuracy stays at .5073. I am trying to train on images and have the network find the position of an object and the rough area it covers. I increased the unfrozen layers and the plateau for accuracy dropped to .4927. I would appreciate any help finding out what I am doing wrong.base = MobileNet(weights='imagenet', include_top=False, input_shape=(200,200,3), dropout=.3)location = base.outputlocation = GlobalAveragePooling2D()(location)location = Dense(16, activation='relu', name="locdense1")(location)location = Dense(32, activation='relu', name="locdense2")(location)location = Dense(64, activation='relu', name="locdense3")(location)finallocation = Dense(3, activation='sigmoid', name="finalLocation")(location)model = Model(inputs=base_model.input,outputs=finallocation)#[types, finallocation])for layer in model.layers[:91]: #freeze up to 87 if ('loc' or 'Loc') in layer.name: layer.trainable=True else: layer.trainable=Falseoptimizer = Adam(learning_rate=.001)model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=['accuracy'])history = model.fit(get_batches(type='Train'), validation_data=get_batches(type='Validation'), validation_steps=500, steps_per_epoch=1000, epochs=10)Data is generated from a tfrecord file which has image data and some labels. This is the last bit of that generator.IMG_SIZE = 200def format_position(image, positionx, positiony, width): image = tf.cast(image, tf.float32) image = (image/127.5) - 1 image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) labels = tf.stack([positionx, positiony, width]) return image, labelsGet batches:dataset is loaded from two directories with tfrecord files, one for training, and other for validationdef get_batches(type): dataset = load_dataset(type=type) if type == 'Train': databatch = dataset.repeat() databatch = dataset.batch(32) databatch = databatch.prefetch(2) return databatch```positionx positiony width``` are all normalized from 0-1 (relative position with respect to the image.Here is an example output:Epoch 1/101000/1000 [==============================] - 233s 233ms/step - loss: 0.0267 - accuracy: 0.5833 - val_loss: 0.0330 - val_accuracy: 0.5073Epoch 2/101000/1000 [==============================] - 283s 283ms/step - loss: 0.0248 - accuracy: 0.6168 - val_loss: 0.0337 - val_accuracy: 0.5073Epoch 3/101000/1000 [==============================] - 221s 221ms/step - loss: 0.0238 - accuracy: 0.6309 - val_loss: 0.0312 - val_accuracy: 0.5073 | The final activation function in your model should not be sigmoid since it will output numbers between 0 and 1 and I am assuming your labels (i.e., positionx, positiony, and width are not in this range). You could replace it with either 'linear' or 'relu'.You're doing regression, and your loss function is 'mean_squared_error'. You cannot use accuracy as the metric function. You should use 'mae' (mean absolute error) or 'mse' to check the difference between your predictions and actual target values. |
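A sketch of the two changes suggested in the answer (a linear output head and a regression metric), not the asker's full training script; weights=None keeps it runnable offline and the layer sizes are illustrative. It only helps if the targets really can fall outside [0, 1] as the answer assumes:

```python
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

base = MobileNet(weights=None, include_top=False, input_shape=(200, 200, 3))
x = GlobalAveragePooling2D()(base.output)
x = Dense(64, activation='relu')(x)
out = Dense(3, activation='linear', name='finalLocation')(x)   # no sigmoid squashing

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss='mean_squared_error',
              metrics=['mae'])          # a regression metric instead of 'accuracy'
```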
Merging dataframes with time series data + checking if values already exist in the first df I'm new at Python so please bear with me. I have a dataframe that looks like this:df1 Company 1/2020 2/2020 Apple 1 0 Google 0 2I want to be able to merge a new data frame that may look like:df2 Company 2/2020 3/2020 Apple 1 1 Google 2 0How do I join the two df's and is there a way to overwrite the value if the new value is greater?I tried using just a merge and a join function and neither worked. | I'm not sure if I fully understand the intent of the question.If the sum of df1+df2 is required, the following code can be used. import pandas as pd import io data = ''' Company 1/2020 2/2020 Apple 1 0 Google 0 2 ''' data2 = ''' Company 2/2020 3/2020 Apple 1 1 Google 2 0 ''' df1 = pd.read_csv(io.StringIO(data), sep=' ') df2 = pd.read_csv(io.StringIO(data2), sep=' ') df3 = pd.concat([df1,df2], axis=0).fillna(0).groupby('Company').agg(sum) df3 1/2020 2/2020 3/2020 Company Apple 1.0 1 1.0 Google 0.0 4 0.0 |
upload image to custom folder (fastapi) When I try to upload an image, images uploads in the main dir. how can I change the upload destination into the media [email protected]('/icon', status_code=status.HTTP_201_CREATED,)async def create_file(single_file: UploadFile = File(...)): with open(single_file.filename, "wb") as buffer: shutil.copyfileobj(single_file.file, buffer) return {"filename": single_file} | I'm not familiar with shutil module, but obviously, you should usewith open(f'my_dir/{single_file.filename}', "wb") as buffer: |
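A slightly fuller sketch that creates the media folder if needed and builds the destination path with os.path.join; the folder name and route simply mirror the question:

```python
import os
import shutil

from fastapi import FastAPI, File, UploadFile, status

app = FastAPI()
MEDIA_DIR = "media"

@app.post("/icon", status_code=status.HTTP_201_CREATED)
async def create_file(single_file: UploadFile = File(...)):
    os.makedirs(MEDIA_DIR, exist_ok=True)                      # make sure the folder exists
    destination = os.path.join(MEDIA_DIR, single_file.filename)
    with open(destination, "wb") as buffer:
        shutil.copyfileobj(single_file.file, buffer)
    return {"filename": single_file.filename}
```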
How to overlay a scatterplot on top of boxplot with sns.catplot? It is possible to combine axes-level plot functions by simply calling them successively:import seaborn as snsimport matplotlib.pyplot as plttips = sns.load_dataset("tips")sns.set_theme(style="whitegrid")ax = sns.boxplot(x="day", y="total_bill", data=tips)ax = sns.stripplot(x="day", y="total_bill", data=tips, color=".25", alpha=0.7, ax=ax)plt.show()How to achieve this for the figure-level function sns.catplot()? Successive calls to sns.catplot() creates a new figure each time, and passing a figure handle is not possible.# This creates two separate figures:sns.catplot(..., kind="box")sns.catplot(..., kind="strip") | The following works for me with seaborn v0.11:import seaborn as sns import matplotlib.pyplot as plttips = sns.load_dataset("tips")g = sns.catplot(x="sex", y="total_bill", hue="smoker", col="time", data=tips, kind="box", palette=["#FFA7A0", "#ABEAC9"], height=4, aspect=.7);g.map_dataframe(sns.stripplot, x="sex", y="total_bill", hue="smoker", palette=["#404040"], alpha=0.6, dodge=True)# g.map(sns.stripplot, "sex", "total_bill", "smoker", # palette=["#404040"], alpha=0.6, dodge=True)plt.show()Explanations: In a first pass, the box-plots are created using sns.catplot(). The function returns a sns.FacetGrid that accommodates the different axes for each value of the categorical parameter time. In a second pass, this FacetGrid is reused to overlay the scatter plot (sns.stripplot, or alternatively, sns.swarmplot). The above uses method map_dataframe() because data is a pandas DataFrame with named columns. (Alternatively, using map() is also possible.) Setting dodge=True makes sure that the scatter plots are shifted along the categorical axis for each hue category. Finally, note that by calling sns.catplot() with kind="box" and then overlaying the scatter in a second step, the problem of duplicated legend entries is implicitly circumvented.Alternative (not recommended): It is also possible to create a FacetGrid object first and then call map_dataframe() twice. While this works for this example, in other situations one has to make sure that the mapping of properties is synchronized correctly across facets (see the warning in the docs). sns.catplot() takes care of this, as well as the legend.g = sns.FacetGrid(tips, col="time", height=4, aspect=.7)g.map_dataframe(sns.boxplot, x="sex", y="total_bill", hue="smoker", palette=["#FFA7A0", "#ABEAC9"])g.map_dataframe(sns.stripplot, x="sex", y="total_bill", hue="smoker", palette=["#404040"], alpha=0.6, dodge=True)# Note: the default legend is not resulting in the correct entries.# Some fix-up step is required here...# g.add_legend()plt.show() |
Convert date string to another date string format I am trying to convert a string datetime to another string time i.e... May 4, 2021 but I am getting the following error#convert '2021-05-04T05:55:43.013-0500' ->>> May 4, 2021timing = '2021-05-04T05:55:43.013-0500'ans = timing.strftime(f'%Y-%m-%d 'f'%H:%M:%S.%f')Here is the errorAttributeError: 'str' object has no attribute 'strftime'What am I doing wrong? | You want datetime.strptime() not timing.strftime(). timing is a string that doesn't have any functions called strftime. The datetime class of the datetime moduleI know, it's confusing, OTOH, does have a function to parse a string into a datetime object. Then, that datetime object has a function strftime() that will format it into a string!from datetime import datetimetiming = '2021-05-04T05:55:43.013-0500'dtm_obj = datetime.strptime(timing, f'%Y-%m-%dT%H:%M:%S.%f%z')formatted_string = dtm_obj.strftime('%b %d, %Y')print(formatted_string)# Outputs:# May 04, 2021 |
Get coordinate(x,y) from xml file and put it into float list I want to put several coordinates (x, y) recovered from an xml file in a list that I can used with a drawcontour or polyline functionthe problem is that I don't know how to put them in a list I used liste.append but its not working :( please help me<?xml version="1.0" ?><TwoDimensionSpatialCoordinate> <coordinateIndex value="0"/> <x value="302.6215607602997"/> <y value="166.6285651861381"/> <coordinateIndex value="1"/> <x value="3.6215607602997"/> <y value="1.6285651861381"/></TwoDimensionSpatialCoordinate>import xml.dom.minidomdef main(file): doc = xml.dom.minidom.parse(file) values = doc.getElementsByTagName("coordinateIndex") coordX = doc.getElementsByTagName("x") coordY = doc.getElementsByTagName("y") d = [] for atr_x in coordX: for atr_y in coordY: x = atr_x.getAttribute('value') y = atr_y.getAttribute('value') print("x",x,"y",y) d.append(x) d.append(y) print(d)result = main('1.631791322.58809740.14.834982.40440.3641459051.955.6373933.1920.xml')print(result)Output:x 302.6215607602997 y 179.53418754193044x 317.14038591056607 y 179.53418754193044x 328.11016491298955 y 179.53418754193044x 337.6280614003864 y 179.53418754193044x 350.0497229178365 y 179.53418754193044x 363.9232669503133 y 179.53418754193044This result is when I get the x,y coordination from xml file but when I add d.append it doesn't define the d:NameError: name 'd' is not defined. | Your XML is strange (x and y are not in coordinateIndex)Indentation matters in pythonYou probably want to try ElementTree, which is considered a better alternative to minidomWorking code for minidom and your input formatdef main(file): doc = xml.dom.minidom.parse(file) coordX = doc.getElementsByTagName("x") coordY = doc.getElementsByTagName("y") d = [] for atr_x, atr_y in zip(coordX, coordY): x = atr_x.getAttribute('value') y = atr_y.getAttribute('value') print("x", x, "y", y) d.append(x) d.append(y) return d |
How to find title and list where data-id is highest number I want to click the element with highest data-id. I'm generating title like this:char_set = string.ascii_uppercase tagTitle = "AI TAG " + ''.join(random.sample(char_set * 4, 4)) driver.find_element_by_xpath("//*[@id='FolderName']").send_keys(tagTitle)Currently I'm getting all elements of UI class:driver.find_element_by_xpath("/html/body/div[2]/div[2]/div[1]/div[1]/div/div[2]/ul/li")<ul class="investorGroup ul-groups"> <li data-id="-1" class=""> <a href="javascript:void(0)" onclick="$.InvestorContact.Group.LoadGroupInvestorsOnGroupClick(-1,null, 0)">Master</a> </li> <li title="AI TAG AOAI" data-id="371"> <a href="javascript:void(0)" onclick="$.InvestorContact.Group.LoadGroupInvestorsOnGroupClick(371)">2451b 24 (<span class="contactCatCount">0</span>)</a> <a href="javascript:$.InvestorContact.Group.OpenAddGroupModal(371)" class="edit"><i class="fa fa-pencil" aria-hidden="true"></i></a> </li> <li title="AI TAG CANG" data-id="376" > <a href="javascript:void(0)" onclick="$.InvestorContact.Group.LoadGroupInvestorsOnGroupClick(376)">452352 (<span class="contactCatCount">0</span>)</a> <a href="javascript:$.InvestorContact.Group.OpenAddGroupModal(376)" class="edit"><i class="fa fa-pencil" aria-hidden="true"></i></a> </li></ul>now tried and its showing the element : $x('/html/body/div2/div2/div1/div1/div/div2/ul/li[contains(@title,"AI TAG FOVE")]')but does not click and gives error via python: TagElement = driver.find_element_by_xpath('/html/body/div2/div2/div1/div1/div/div2/ul/li[contains(@title,"AI TAG FOVE")]') TagElement.click()sorry if i skip something just a learner here. selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (122, 388). Other element would receive the click: ... (Session info: chrome=80.0.3987.132) | First you can get all the elements in a list and then you can click on the last element because that would be having the highest data-id and if you want to get the title which you are clicking then you can get it by using get_attribute() method.You can do it like:# Fetching the elements using xpathtitle_list = driver.find_elements_by_xpath("//ul[@class='investorGroup ul-groups']//li[contains(@title,'AI TAG')]") # Getting the title of the last elementtitle_list[-1].get_attribute("title") # Clicking on the last elementtitle_list[-1].click()Edited ans for picking text from a variable:value = "AI TAG"Now fetch the list by using:title_list = driver.find_elements_by_xpath("//ul[@class='investorGroup ul-groups']//li[contains(@title,"+value+")]") |
Why does my Python bot not work? (PyAutoGUI) I coded a bot in Python that should automatically play Friday Night Funkin' (press the arrows when they are meant to be pressed) but for some reason it doesn't do anything. I took screenshots of the arrows when they are meant to be pressed and I made it so if python sees that the arrow is meant to be pressed (it sees the images/screenshots) it presses the corresponding key. I tried turning greyscale off but it didn't work. Is there any way to fix this or make it in a different way? I'm really new to Python and this is my first code so sorry if it's a stupid problem and question.My FNF Version: https://poki.pl/g/friday-night-funkinMy images: https://imgur.com/a/n8LUibPMy code:from pyautogui import *import pyautoguiimport timeimport keyboardimport numpy as npimport randomimport win32api, win32contime.sleep(5)while keyboard.is_pressed('q') == False:if pyautogui.locateOnScreen('leftarrow.png', region=(1010, 50, 650, 200), grayscale=False, confidence=0.7) != None: pyautogui.keyDown('left') time.sleep(0.1) pyautogui.keyUp('left')if pyautogui.locateOnScreen('rightarrow.png', region=(1010, 50, 650, 200), grayscale=False, confidence=0.7) != None: pyautogui.keyDown('right') time.sleep(0.1) pyautogui.keyUp('right') if pyautogui.locateOnScreen('uparrow.png', region=(1010, 50, 650, 200), grayscale=False, confidence=0.7) != None: pyautogui.keyDown('up') time.sleep(0.1) pyautogui.keyUp('up')if pyautogui.locateOnScreen('downarrow.png', region=(1010, 50, 650, 200), grayscale=False, confidence=0.7) != None: pyautogui.keyDown('down') time.sleep(0.1) pyautogui.keyUp('down') | maybe it is copy-paste lag, but there is no indention before "if"use "not None" instead of "!= None"useif __name__ == "__main__": your script as it now can be run only as python file, through console command or import. |
Pygame - Mouse clicks not getting detected I'm learning Pygame to make games w/ Python. However, I'm encountering a problem. I'm trying to detect when the player is currently clicking the screen, but my code isn't working. Is my code actually screwed, or is it just the online Pygame compiler that I'm using?import pygamepygame.init()screen = pygame.display.set_mode((800, 800))while True: pygame.display.update() mouse = pygame.mouse.get_pressed() if mouse: print("You are clicking") else: print("You released")When I ran this code, the output console spammed the text "You are clicking", thousands of times in a second. Even when I'm not clicking the screen, it still says this. Even when my mouse isn't over the screen. Just the same text. Over, and over. Is Pygame executing my program correctly?To learn Pygame, I am using the official Docs from the developers. https://www.pygame.org/docs/ Is this an outdated way to learn, is this why my code continues to run errors? | The coordinates which are returned by pygame.mouse.get_pressed() are evaluated when the events are handled. You need to handle the events by either pygame.event.pump() or pygame.event.get().See pygame.event.get():For each frame of your game, you will need to make some sort of call to the event queue. This ensures your program can internally interact with the rest of the operating system.pygame.mouse.get_pressed() returns a sequence of booleans representing the state of all the mouse buttons. Hense you have to evaluate if any button is pressed (any(buttons)) or if a special button is pressed by subscription (e.g. buttons[0]).For instance:import pygamepygame.init()screen = pygame.display.set_mode((800, 800))run = Truewhile run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False buttons = pygame.mouse.get_pressed() # if buttons[0]: # for the left mouse button if any(buttons): # for any mouse button print("You are clicking") else: print("You released") pygame.display.update()If you just want to detect when the mouse button is pressed respectively released, then you have to implement the MOUSEBUTTONDOWN and MOUSEBUTTONUP (see pygame.event module):import pygamepygame.init()screen = pygame.display.set_mode((800, 800))run = Truewhile run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: print("You are clicking", event.button) if event.type == pygame.MOUSEBUTTONUP: print("You released", event.button) pygame.display.update()While pygame.mouse.get_pressed() returns the current state of the buttons, the MOUSEBUTTONDOWN and MOUSEBUTTONUP occurs only once a button is pressed. |
Compute percentage changes with next row Pandas I want to compute the percentage change with the next n row. I've tried pct_change() but I don't get the expected resultsFor example, with n=1 close return_n0 100 1.00%1 101 -0.99%2 100 -1.00%3 99 -4.04%4 95 7.37%5 102 NaNWith n=2 close return_n0 100 0.00%1 101 -1.98%2 100 -5.00%3 99 3.03%4 95 NaN5 102 NaN | You can do shift with pct_changen = 2df['new'] = df.close.pct_change(periods=n).shift(-n)dfOut[247]: close return_n new0 100 1.00% 0.0000001 101 -0.99% -0.0198022 100 -1.00% -0.0500003 99 -4.04% 0.0303034 95 7.37% NaN5 102 NaN NaN |
Pandas passing arguments to apply I'm trying to apply a function to a dataframe, creating a new column as a result, like so:def defensive_weights(DSp=None,SGp=None,FCp=None): if dfcrop['opp_goals'] == 0: DInd = (DSp*2 + SGp + FCp) else: DInd = (DSp + SGp + FCp) return DInd dfcrop['IED'] = dfcrop['opp_goals'].apply(defensive_weights, DSp=DSp,SGp=SGp,FCp=FCp)I'm getting:TypeError: defensive_weights() got multiple values for argument 'DSp'What am I missing? | It appears you're calling the entire dataframe series from within the function. I don't think you want to do this. You should allow the function to take a parameter, and pass it to the conditional:def defensive_weights(item, DSp=None,SGp=None,FCp=None): if item == 0: DInd = (DSp*2 + SGp + FCp) else: DInd = (DSp + SGp + FCp) return DIndLet's say dfcrop['opp_goals'] == [1, 2, 3, 4, 5]Right now, your function is trying to do this:if [1,2,3,4,5] == 0:It would always return false in this case.It's passing the entire column for each row, because you're calling it from within the function.I have a feeling you want it to do this:if 2 == 0orif 0 == 0So you need to provide the function with just those integers. You do this by feeding them in one by one, which is not really easily done from within the function, you need to create a function parameter (I called it "item"), and feed them in one by one using your "apply" method.Also, your apply syntax is calling the entire row and attempting to pass it to the first parameter. I'd recommend using a lambda to control which columns go to which parameters:dfcrop.apply(lambda x: defensive_weights(x.opp_goals, DSp=x.DSp,SGp=x.SGp,FCp=x.FCp), axis=1)I don't know what your data looks like so I am assuming you have several columns named after those named parameters.Edit: Here's a simple example of a function passed to the apply method that should illustrate the nuts and bolts of how to use apply:a = pd.DataFrame({'a':[1,2,3,4], 'b':[5,6,7,8]})def modder(x,y): return x**ya['c'] = a.apply(lambda x: modder(x.a, x.b), axis=1)a |
Is it possible to run regular python code on Google TPU? So I'm pretty new with Google TPU. From what I've already researched, it is optimized specifically for training machine learning models written on TensorFlow.Currently, I am trying to see how the TPU performs with other types of functions. These functions are not related to machine learning. I have been trying to adapt my code so it can run on the TPU in Google Colab, but I am not sure if it is working or if this is the best approach.This is the code I have for a O(n3) matrix multiplication algorithm:import osimport numpy as npfrom random import seedfrom random import randomimport tensorflow as tfimport time;#check that this is running on the TPUtry: tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # TPU detection print('Running on TPU ', tpu.cluster_spec().as_dict()['worker']) except ValueError: print("Running on GPU or CPU") tpu = None#TPU detailsif 'COLAB_TPU_ADDR' not in os.environ: print('ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!')else: tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR'] print ('TPU address is', tpu_address)def multiplicationComputation(): #size of matrix row_size = 128 col_size = 128 N = row_size*col_size #class for matrix class MatrixMultiplication: matrix1 = np.empty(N) #DO NOT USE np.arange(N) matrix2 = np.empty(N) product = np.empty(N) #product size is the matrix1.columns x matrix2.rows #create MatrixMultiplication object m = MatrixMultiplication() #fill objects's data structures #seed for matrix 1 seed(1) for x in range(N): value = random() m.matrix1[x] = value #seed for matrix 2 seed(7) for x in range(N): value = random() m.matrix2[x] = value #multiply matrix1 and matrix2 start = time.time() qtySaves = 0; for i in range(row_size): for j in range(col_size): i_col = i * col_size sum = 0 for k in range(row_size): k_col = k * col_size multiplication = m.matrix1[i_col + k] * m.matrix2[k_col + j] sum = sum + multiplication m.product[i_col + j] = sum #The result of the multiplication is saved on the product matrix qtySaves = qtySaves + 1 end = time.time() #print result print() print("Result O(n^3): ") for i in range(N): if i % row_size == 0 and i > 0: print() print(str(m.product[i]), end =" ") print() print("For n = " + str(N) + ", time is " + str(end - start))#rewrite computation so it can be executed on the TPU#tpuOperation = tf.contrib.tpu.rewrite(multiplicationComputation)tpuOperation = tf.contrib.tpu.batch_parallel(multiplicationComputation, [], num_shards=8)#runsession = tf.Session(tpu_address, config=tf.ConfigProto(isolate_session_state=True, log_device_placement=True)) #isolate session state = True for distributed runtimetry: session.run(tf.contrib.tpu.initialize_system()) #initializes a distributed TPU system session.run(tpuOperation)finally: #TPU sessions must be shutdown separately from closing the session session.run(tf.contrib.tpu.shutdown_system()) session.close()My fear is that this is not running on the TPU. When calling session.list_devices() I see that there is a CPU listed, and I am afraid that my code might actually be running on the CPU and not on the TPU. 
This is the output of said command:TPU devices: [_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 10448234186946304259), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 2088593175391423031), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 1681908406791603718), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 2618396797726491975), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 14243051360425930068), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 15491507241115490455), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 9239156557030772892), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 16970377907446102335), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 6145936732121669294), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 11372860691871753999), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 12653526146081894211)]For now, I'm not looking for advice on what accelerator to use. I want to test the TPU and make sure my code is running on it. Please help! | I am afraid the presence or absence of tensorflow has no effect on how np operations are executed.In your example above when you specify tpuOperation = tf.contrib.tpu.batch_parallel(multiplicationComputation, [], num_shards=8)where multiplicationComputation has no TPU specific code to be parallelized and it will run the way it would normally run when you call a multiplicationComputation - on CPU.You will have to rewrite your code using TF operation to allow it to run on GPU. Tensorflow will translate your operations into TPU specific code. |
How to create user in amazon-cognito using boto3 in python I'm trying to create a user using Python 3.x and boto3, but I'm facing some issues. I've tried using "admin_create_user", but even that didn't work for me.

import boto3

aws_client = boto3.client('cognito-idp', region_name = CONFIG["cognito"]["region"])
response = aws_client.admin_create_user(
    UserPoolId = CONFIG["cognito"]["pool_id"],
    Username = email,
    UserAttributes = [
        {"Name": "first_name", "Value": first_name},
        {"Name": "last_name", "Value": last_name},
        { "Name": "email_verified", "Value": "true" }
    ],
    DesiredDeliveryMediums = ['EMAIL'])

Error facing | I think you didn't pass the configuration. First install the AWS CLI.

pip install awscli --upgrade --user

Then type the below command in your terminal,

aws configure

Provide your details correctly,

AWS Access Key ID [****************6GOW]:
AWS Secret Access Key [****************BHOD]:
Default region name [us-east-1]:
Default output format [None]:

Try this, and you can also view your credentials in the paths below.

sudo cat ~/.aws/credentials
[default]
aws_access_key_id = ******7MVXLBPHW66GOW
aws_secret_access_key = wKtT*****UqN1sO/1Pfn+BCrvNst*****695BHOD

sudo cat ~/.aws/config
[default]
region = us-east-1

or you can view all of it in one place with the aws configure list command. |
sorting only what's in parentheses in a string s = "Kadu (b, a), Dadu, Adu (y, i)"I need this string to be sorted as follows:Adu (i, y), Dadu, Kadu (a, b)Extra explanation for those who have one more minute: As a translator, I sometimes have to translate alphabetically sorted, comma-delimited lists in which some of the items have sub-lists in parentheses, also alphabetically sorted. After the translation, the alphabetical order changes, so I have to resort everything using primitive tools like Word. I'm trying to write myself some code that can help me automatise this. I'm very new to coding, I can basically only do patchwork using what I find around here but I'm learning.Thanks to some Regex code I used in Python, I was able to split the string in the right way, sort it and then rejoin the list. As of now I am able to get this:Adu (y, i), Dadu, Kadu (b, a)As you see, the main list is nicely sorted but what's inside parentheses remains untouched.import res = "Kadu (b, a), Dadu, Adu (y, i)"unsorted = re.split(r',\s*(?![^()]*\))', s)unsorted.sort()result = ", "result = result.join(unsorted)print(result)Now I guess I need to write some Regex code that treats only what's in parentheses. I haven't been able to find a piece of code I can adapt for this. Does anyone know how I should do it? | You can use re.sub with a callback function to find the parts within (...) and replace it with a sorted version of itself.>>> ', '.join(sorted(re.sub("(?<=\().+(?=\))", lambda m: ', '.join(sorted(m .group().split(', '))), x) for x in re.split(r',\s*(?![^()]*\))', s))) 'Adu (i, y), Dadu, Kadu (a, b)' |
What am I missing when getting nouns from sentence and reversed sentence using nltk? I Have a is_noun definition using nltk:is_noun = lambda pos: pos == 'NN' or pos == 'NNP' or pos == 'NNS' or pos == 'NNPS'then I have this in a function:def test(text): tokenized = nltk.word_tokenize(text) nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(pos)] print ('Nouns:', nouns) return nounsthen I call the function:test('When will this long and tedious journey ever end? Like all good')and get:Nouns: ['journey']then call same function but with reversed sentence and get:test('good all Like end? ever journey tedious and long this will When')results: Nouns: ['end']I am expecting to get same amount of nouns but that is not the case. What am I doing wrong? | Summary: GIGO (Garbage In => Garbage Out).As the comment suggests, word order matters. English is rife with words that can act as multiple parts of speech, depending on placement within a phrase. Consider:You can cage a swallow.You cannot swallow a cage.In the second text you present, you do not have a legal sentence by any means. The best the English parser can determine is that "end" may be the direct object of the verb "like", and is therefore a noun in this case. Similarly, "journey" appears to be the main verb of the second sequence of words. |
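To make the word-order effect concrete, a small check (not part of the original question) is to tag both of the answer's swallow/cage sentences and compare the tags NLTK assigns to the same words in different positions:

import nltk

for text in ("You can cage a swallow.", "You cannot swallow a cage."):
    tokens = nltk.word_tokenize(text)
    # the tag given to "swallow"/"cage" depends on where the word sits in the sentence
    print(nltk.pos_tag(tokens))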
Separate items in a list (or dictionary or counter *not sure XD) My code:

list1 = []
for line in open('live.txt'):
    name = line.strip()
    list1.append(name)

import collections
print("Original List : ", list1)
ctr = collections.Counter(list1)
print(ctr)

Output:
Original List : ['Heart', 'Thumbs up', 'Thumbs up', 'Smile', 'Heart', 'Thumbs down', 'Smile']
Counter({'Heart': 2, 'Thumbs up': 2, 'Smile': 2, 'Thumbs down': 1})

Well my problem is that I want to separate the items in the list so the output will look like this: Heart: 2 Thumbs up: 2 Smile: 2 Thumbs down: 1 | You should just iterate through the Counter (it behaves like a dictionary):

for key in ctr:
    print(key, ': ', ctr[key]) |
how to get all possible combinations of strings/words with each word multiple times I'm trying to create all possible stochiometries of chemical compounds, which essentially is combining strings/words:Let's say I have a list of elements: els=['Ba','Ti','O']and I say the number of each element can be maximally 3 and I want all possible combinations,with always each element at least once. The desired output would be:['BaTiO','BaTiO2','BaTiO3','BaTi2O','BaTi2O2','BaTi2O3'.....]AND the input list should be of arbitrary length, e.g. if it is els=['Ba','Sr','Ti','O']i want as a result:['BaSrTiO','BaSrTiO2'....](the output could also be of the form [BaTiO,BaTiOO,BaTiOOO…] instead of the numbers)I tried to come up with something using itertools, but I can't find a way how to do it.Any suggestions? | You could use itertools.permutations for a shorter and maybe a bit more readable solution:from itertools import permutationselements = {"Ba", "Ti", "O"} # Set of elementsmaxi = 3 # Maximum occurrenceraw_output = permutations(range(1, maxi + 1), len(elements))for i in raw_output: str_output = " ".join([f"{e}{v}" for e, v in zip(elements, i)]) print(str_output)>>> O1 Ba2 Ti3 O1 Ba3 Ti2 O2 Ba1 Ti3 O2 Ba3 Ti1 O3 Ba1 Ti2 O3 Ba2 Ti1 |
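One caveat worth noting: permutations never repeats a count within a tuple, so combinations such as 'BaTiO' (every count equal to 1) are not produced. If each element's count should run independently from 1 to the maximum, itertools.product is a closer fit; a sketch under that assumption (variable names are mine), which also works for an input list of arbitrary length:

from itertools import product

els = ['Ba', 'Ti', 'O']
max_count = 3

formulas = []
for counts in product(range(1, max_count + 1), repeat=len(els)):
    # write the element alone when its count is 1, e.g. 'BaTi2O3'
    formulas.append(''.join(el if n == 1 else f'{el}{n}' for el, n in zip(els, counts)))

print(formulas[:6])  # ['BaTiO', 'BaTiO2', 'BaTiO3', 'BaTi2O', 'BaTi2O2', 'BaTi2O3']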
Is there equivalent of bash's "set -x" in python? All I 've found is something like python3 -m pdb myscript.py but it does not do what set -x does which executes the script and shows on terminal each line that gets executed with the actual values of the variables.For example:#!/bin/bashset -xecho "This is a foo message"sshpass -p $2 ssh root@$1echo "this is just argument no3 --> $3 :)"So when you run the script with arguments you see what exactly gets done.root@notebook:~# ./myscript.sh myserver.com mypassword bar+ echo 'This is a foo message'This is a foo message+ sshpass -p mypassword ssh [email protected]+ echo "this is just argument no3 --> bar :)"this is just argument no3 --> bar :) | yes hi, perhaps using python -m trace -t myscript.py will show you the trace you're interested in. |
subprocess call unable to find dot although it's installed I'm following this this sample code provided by this article. The IDE is Spyder 4.1.5 with Python 3.8, anaconda got the below exception saying "FileNotFoundError: [WinError 2] The system cannot find the file specified".I'm new to python (and Spyder), so not sure what file was missing as the exception message didn't include a filename. Any hint will be highly appreciated.I've checked the environment, dot is available in path, and package graphviz has been installed.Exception trace:runfile('C:/my/work/smlb/challenge_1/code/tree_to_image.py', wdir='C:/my/work/smlb/challenge_1/code')Traceback (most recent call last): File "C:\my\work\smlb\challenge_1\code\tree_to_image.py", line 31, in <module> call(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600']) File "C:\Users\user\anaconda3\lib\subprocess.py", line 340, in call with Popen(*popenargs, **kwargs) as p: File "C:\Users\user\anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 105, in __init__ super(SubprocessPopen, self).__init__(*args, **kwargs) File "C:\Users\user\anaconda3\lib\subprocess.py", line 854, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Users\user\anaconda3\lib\subprocess.py", line 1307, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args,FileNotFoundError: [WinError 2] The system cannot find the file specifiedThe sample code I was trying to run:I'm trying to run the below sample source code to generate an image to visualize a binary decision.from sklearn.datasets import load_irisiris = load_iris()# Model (can also use single decision tree)from sklearn.ensemble import RandomForestClassifiermodel = RandomForestClassifier(n_estimators=10)# Trainmodel.fit(iris.data, iris.target)# Extract single treeestimator = model.estimators_[5]from sklearn.tree import export_graphviz# Export as dot fileexport_graphviz(estimator, out_file='tree.dot', feature_names = iris.feature_names, class_names = iris.target_names, rounded = True, proportion = False, precision = 2, filled = True)# Convert to png using system command (requires Graphviz)from subprocess import callcall(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])## It works if directly call command `dot -Tpng tree.dot -o tree.png -Gdpi=600`# but the subprocess call here doesn't work## Display in jupyter notebookfrom IPython.display import ImageImage(filename = 'tree.png')Check python packages in Anaconda Prompt:(base) C:\Users\user>conda list graphviz# packages in environment at C:\Users\user\anaconda3:## Name Version Build Channelgraphviz 2.38 hfd603c8_2python-graphviz 0.16 pyhd3eb1b0_1(base) C:\Users\user>where dotC:\Users\user\anaconda3\Library\bin\dot.bat(base) C:\Users\user> | As you can see, the error is at the linecall(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])and comes down tohp, ht, pid, tid = _winapi.CreateProcess(executable, args,in the subprocess module. So it seems that the executable, i.e. dot cannot be found. Check if you can run where dot in a cmd. If not, then you probably need to install the graphviz library. Make sure to check "Add to path" during the installation |
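Since the question shows that `where dot` does resolve on the command line, one likely cause is that only a dot.bat wrapper is on PATH, and subprocess (via CreateProcess) only resolves bare names to .exe files, so 'dot' is never found even though the shell finds dot.bat. Two hedged workarounds to try, using the path that `where dot` printed:

from subprocess import call

# Option 1: run through the shell so cmd.exe can resolve dot.bat on PATH
call('dot -Tpng tree.dot -o tree.png -Gdpi=600', shell=True)

# Option 2: point at the file that `where dot` reported explicitly
dot_path = r'C:\Users\user\anaconda3\Library\bin\dot.bat'
call([dot_path, '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])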
Replace abitrary HTML (subtree) within HTML document with other HTML (subtree) with BS4 or regex I am trying to build a function along the following lines:import bs4def replace(html: str, selector: str, old: str, new: str) -> str: soup = bs4.BeautifulSoup(html) # likely complete HTML document old_soup = bs4.BeautifulSoup(old) # can contain HTML tags etc new_soup = bs4.BeautifulSoup(new) # can contain HTML tags etc for selected in soup.select(selector): ### pseudo-code start for match in selected.find_everything(old_soup): match.replace_with(new_soup) ### pseudo-code end return str(soup)I want to be able to replace an arbitrary HTML subtree below a CSS selector within a full HTML document with another arbitrary HTML subtree. selector, old and new are read as strings from a configuration file.My document could look as follows:before = r"""<!DOCTYPE html><html><head> <title>No target here</head></head><body> <h1>This is the target!</h1> <p class="target"> Yet another <b>target</b>. </p> <p> <!-- Comment --> Foo target Bar </p></body></html>"""This is supposed to work:after = replace( html = before, selector = 'body', # from config text file old = 'target', # from config text file new = '<span class="special">target</span>', # from config text file)assert after == r"""<!DOCTYPE html><html><head> <title>No target here</head></head><body> <h1>This is the <span class="special">target</span>!</h1> <p class="target"> Yet another <b><span class="special">target</span></b>. </p> <p> <!-- Comment --> Foo <span class="special">target</span> Bar </p></body></html>"""A plain str.replace does not work because the "target" can appear literally everywhere ... I have briefly considered to do this with a regular expression. I have to admit that I did not succeed, but I'd be happy to see this working. Currently, I think my best chance is to use beautifulsoup.I understand how to swap a specific tag. I can also replace specific text etc. However, I am really failing to replace an "arbitrary HTML subtree", as in I want to replace some HTML with some other HTML in a sane manner. In this context, I want to treat old and new really as HTML, so if old is simply a "word" that does also appear for instance in a class name, I really only want to replace it if it is content in the document, but not if it is a class name as shown above.Any ideas how to do this? 
| The solution below works in three parts:All matches of selector from html are discovered.Then, each match (as a soup object) is recursively traversed and every child is matched against old.If the child object is equivalent to old, then it is extracted and new is inserted into the original match at the same index as the child object.import bs4from bs4 import BeautifulSoup as soupdef replace(html:str, selector:str, old:str, new:str) -> str: def update_html(d:soup, old:soup) -> None: i = 0 while (c:=getattr(d, 'contents', [])[i:]): if isinstance((a:=c[0]), bs4.element.NavigableString) and str(old) in str(a): a.extract() for j, k in enumerate((l:=str(a).split(str(old)))): i += 1 d.insert(i, soup(k, 'html.parser')) if j + 1 != len(l): i += 1 d.insert(i, soup(new, 'html.parser')) elif a == old: a.extract() d.insert(i, soup(new, 'html.parser')) i += 1 else: update_html(a, old) i += 1 source, o = [soup(j, 'html.parser') for j in [html, old]] for i in source.select(selector): update_html(i, o.contents[0]) return str(source)after = replace( html = before, selector = 'body', # from config text file old = 'target', # from config text file new = '<span class="special">target</span>', # from config text file)print(after)Output:<!DOCTYPE html><html><head><title>No target here</title></head><body><h1>This is the <span class="special">target</span>!</h1><p class="target"> Yet another <b><span class="special">target</span></b>. </p><p><!-- Comment --> Foo <span class="special">target</span> Bar </p></body></html> |
Why is my VSCode trying to use cuda even though I installed directml (I'm on amd)? I have a tensor flow object detection project I want to build and read that it would be slow on cpu. Thats when someone told me to use directml because I have an AMD gpu and not a NVIDIA one.I have created an anaconda environment which I called "directml" and installed tensorflow and directml on it (see the picture). If I now try to run my test application which I found from this tutorial (https://docs.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-windows):import tensorflow.compat.v1 as tf tf.enable_eager_execution(tf.ConfigProto(log_device_placement=True)) print(tf.add([1.0, 2.0], [3.0, 4.0]))I dont get the desired output:2020-06-15 11:27:18.240065: I tensorflow/core/common_runtime/dml/dml_device_factory.cc:32] DirectML: creating device on adapter 0 (AMD Radeon VII) 2020-06-15 11:27:18.323949: I tensorflow/stream_executor/platform/default/dso_loader.cc:60] Successfully opened dynamic library DirectMLba106a7c621ea741d2159d8708ee581c11918380.dll 2020-06-15 11:27:18.337830: I tensorflow/core/common_runtime/eager/execute.cc:571] Executing op Add in device /job:localhost/replica:0/task:0/device:DML:0 tf.Tensor([4. 6.], shape=(2,), dtype=float32)But I instead get this:2021-09-16 17:15:03.700209: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found2021-09-16 17:15:03.700418: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.2021-09-16 17:15:05.192685: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found2021-09-16 17:15:05.192902: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)2021-09-16 17:15:05.197503: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-N3L36AL2021-09-16 17:15:05.197857: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-N3L36AL2021-09-16 17:15:05.198376: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX22021-09-16 17:15:05.202832: I tensorflow/core/common_runtime/eager/execute.cc:571] Executing op Add in device /job:localhost/replica:0/task:0/device:CPU:0tf.Tensor([4. 6.], shape=(2,), dtype=float32)To me it looks like tensorflow is trying to use cuda and not directml, but I have no idea why that is. My Windows, aswell as my AMD drivers are up to date. | You shouldn't install tensorflow only tensorflow-directml. Because now python is importing tensorflow not tensorflow-directml. Uninstall tensorflow and it should fix imports. |
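In concrete terms, inside the "directml" environment that would be (package name taken from the answer above):

pip uninstall tensorflow
pip install tensorflow-directml

after which re-running the test snippet should print the "DirectML: creating device on adapter ..." line from the tutorial output instead of the CUDA warnings.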
How to read picke files using pyarrow I have a bunch of code for reading multiple pickle files using Pandas:dfs = [] for filename in glob.glob(os.path.join(path,"../data/simulated-data-raw/", "*.pkl")): with open(filename, 'rb') as f: temp = pd.read_pickle(f) dfs.append(temp) df = pd.DataFrame() df = df.append(dfs)how can I read the files using pyarrow? Meanwhile, this way does not work and raises an error.dfs = []for filename in glob.glob(os.path.join(path, "../data/simulated-data-raw/", "*.pkl")): with open(filename, 'rb') as f: temp = pa.read_serialized(f) dfs.append(temp)df = pd.DataFrame()df = df.append(dfs) | FYI, pyarrow.read_serialized is deprecated and you should just use arrow ipc or python standard pickle module when willing to serialize data.Anyway I'm not sure what you are trying to achieve, saving objects with Pickle will try to deserialize them with the same exact type they had on save, so even if you don't use pandas to load back the object, you will still get back a pandas DataFrame (as that's what you pickled) and will still need pandas installed to be able to create one.For example, you can easily get rid of pandas.read_pickle and replace it with just pickle.load, but what you get back will still be a pandas.DataFrameimport pandas as pdoriginal_df = pd.DataFrame({"foo": range(5), "bar": range(5, 10)})pd.to_pickle(original_df, "./dummy.pkl")import pickleloaded_back = pickle.load(open("./dummy.pkl", "rb"))print(loaded_back) |
Len function not returning the correct value for string So I am trying to call the length of a coded message, and then divide that length by 3 (every 3 chars represents one letter). Here is the message:10311132-10710510810832-121111117114115101108102The dashs are set to be placed directly after spaces, or the letters a, b, or c (because the ord function converts these chars to only 2 digit values and the program requires 3 digits per letter). I have this code:message = (10311132-10710510810832-121111117114115101108102)message = str(message)length = len(message)print(length)The code returns 25 even those the string is 48 chars. Why is this, and how can I fix it? | Because you've initiated message as an int. You can fix this by putting quotes around it:message = ('10311132-10710510810832-121111117114115101108102')At the moment, converting message to a string afterwards converts it after it has performed the minus operations. For example:>>> message = (10311132-10710510810832-121111117114115101108102)>>> message-121111117124825601607802As a side-note, the brackets around the assignment are unnecessary. You can just do:message = '10311132-10710510810832-121111117114115101108102' |
Mismatch between janusgraph date value and gremlin query result I have some graph data with date type values. My gremlin query for the date type property is working, but the output value is not the stored date value.

Environment: Janusgraph 0.3.1, gremlinpython 3.4.3

Below is my example:

Data (JanusGraph): {"ID": "doc_1", "MY_DATE": [Tue Jan 10 00:00:00 KST 1079]}
Query: g.V().has("ID", "doc_1").valueMap("MY_DATE")
Output (gremlinpython): datetime(1079, 1, 16)

The error is 6 days (1079.1.10 -> 1079.1.16). This mismatch does not occur when the years are above 1600. Does the timestamp have some serialization/deserialization problems between janusgraph and gremlinpython? Thanks | There were some issues with Python and dates, but they should be fixed in 3.4.3, which is the version you stated you were using. The issue is described here at TINKERPOP-2264 along with the fix; basically there were some problems with timezones. From your example data, it looks like you store your date with a timezone (i.e. KST). I'm not completely sure, but I would imagine things would work as expected if the date was stored as UTC. |
Group list of objects based on close datetime attribute Say I have a list of objects. Each of these has a string representing a date (parseable by dateutil). How can I go about grouping these in a list of lists, in which each sublist contains consecutive (within 5 minutes) objects? For example:

o1.time = "2016-03-01 23:25:00-08:00"
o2.time = "2016-03-01 23:30:00-08:00"
o3.time = "2016-03-01 23:35:00-08:00"
o4.time = "2016-03-02 12:35:00-08:00"
list1 = [o1, o2, o3, o4]
list2 = group_by_time(list1)

at which point list2 would be [[o1,o2,o3],[o4]]. It seems like there should be a Python solution using lambdas or itertools along with dateutil, but my Google skills are failing me. Thanks! | Take a look at the groupby function from itertools. It takes an iterable and groups consecutive items according to a key function. Your code could look like this

from dateutil.parser import parse
from itertools import groupby

def rounded_date(item):
    d = parse(item.time)
    # round d to the nearest 5-minute bucket here
    return d

grouped_items = groupby(items, key=rounded_date)

have a look at this question to find out how to round datetimes: How to round the minute of a datetime object python |
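Since groupby only merges items whose rounded keys are exactly equal, items that straddle a bucket boundary can land in different groups. A sketch of a more literal reading of the question — walk the objects in time order and open a new sublist whenever the gap to the previous object exceeds five minutes (this assumes the input list is already chronologically sorted, as in the example):

from datetime import timedelta
from dateutil.parser import parse

def group_by_time(objs, gap=timedelta(minutes=5)):
    groups = []
    prev = None
    for obj in objs:
        t = parse(obj.time)
        if prev is None or t - prev > gap:
            groups.append([])          # start a new sublist
        groups[-1].append(obj)
        prev = t
    return groups

list2 = group_by_time(list1)           # [[o1, o2, o3], [o4]]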
Matplotlib bar plot remove internal lines I have a bar plot with high resolution. Is it possible to have only the border/frame/top line of the plot, like in the following ROOT plot, i.e. without internal lines? If I plot with facecolor='none' or 'white', the plot is slashed by both vertical and horizontal lines: The only way I can get rid of them is to make edgecolor and facecolor the same, but that's not the look I need... | I found the answer: the simplest way to achieve the desired look is to use plt.step instead of plt.bar, it's that simple. I feel a bit ashamed for asking. |
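A minimal sketch of that look with made-up data — plt.step draws only the outline of the histogram, so no vertical or horizontal lines appear inside it:

import numpy as np
import matplotlib.pyplot as plt

edges = np.linspace(0, 10, 201)                    # fine binning, as in a high-resolution plot
counts = np.exp(-0.5 * (edges[:-1] - 5.0) ** 2)    # dummy bin contents

plt.step(edges[:-1], counts, where='post', color='navy')
plt.show()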
Pygame, user input on a GUI? I need a user input for my pygame program, but I need it on my GUI(pygame.display.set_mode etc.), not just like: var = input("input something"). Does anybody have suggestions how to do this? | There are some answers already here. Anyway, use PGU (Pygame GUI Utilities), it's available on pygame's site. It turns pygame into GUI toolkit. There is an explanation on how to combine it and your game. Otherwise, program it yourself using key events. It's not hard but time consuming and boring. |
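If you go the do-it-yourself route with key events, a minimal sketch looks roughly like this — collect typed characters from KEYDOWN events and draw the string every frame (window size and font are arbitrary choices here):

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 100))
font = pygame.font.Font(None, 36)
text = ""
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_BACKSPACE:
                text = text[:-1]
            elif event.key == pygame.K_RETURN:
                print("you typed:", text)      # use the entered text here
                text = ""
            else:
                text += event.unicode          # append the typed character
    screen.fill((0, 0, 0))
    screen.blit(font.render(text or " ", True, (255, 255, 255)), (10, 30))
    pygame.display.flip()
pygame.quit()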
replace letters in python string Im writing a program for french that turns present tense verbs into past tense. The problem is that I need to replace letters but they are user inputed so I have to have it replacing the letters from the end of the line. Here's what I have so far, but it doesn't change the letters it just gives an error:word = raw_input("what words do you want to turn into past tense?")word2= wordif word2.endswith("re"): word3 = word2.replace('u', 're') print word3elif word2.endswith("ir"): word2[-2:] = "i" print word2elif word2.endswith("er"): word2[-2:] = "e" print word2else: print "nope"I tried word replace and that doesn't work either, it just gives me back the same string. If some one could just give me an example and maybe explain it a little that would be awesome. :/ | IMO there might be a problem with the way you are using replace. The syntax for replace is explained. herestring.replace(s, old, new[, maxreplace])This ipython session might be able to help you.In [1]: mystring = "whatever"In [2]: mystring.replace('er', 'u')Out[2]: 'whatevu'In [3]: mystringOut[3]: 'whatever'basically the pattern you want replaced comes first, followed by the string you want to replace with. |
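The other error in the original code comes from slice assignment: Python strings are immutable, so word2[-2:] = "i" raises an exception. Building a new string from a slice avoids it; a sketch keeping the question's Python 2 style, with the ending rules taken from the question's own code:

word = raw_input("what word do you want to turn into past tense? ")

if word.endswith("re"):
    print word[:-2] + "u"    # e.g. vendre -> vendu
elif word.endswith("ir"):
    print word[:-2] + "i"    # e.g. finir -> fini
elif word.endswith("er"):
    print word[:-2] + "e"
else:
    print "nope"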
Remove border from matplotlib 3D pane I would like to remove the borders from my 3D scene as described below. Any idea how to do that?Here the code to generate the current scene:import matplotlib.pyplot as pltfrom mpl_toolkits.mplot3d import Axes3D# Create figureplt.style.use('dark_background') # Dark themefig = plt.figure()ax = fig.add_subplot(111, projection='3d')# Make panes transparentax.xaxis.pane.fill = False # Left paneax.yaxis.pane.fill = False # Right pane# Remove grid linesax.grid(False)# Remove tick labelsax.set_xticklabels([])ax.set_yticklabels([])ax.set_zticklabels([])# Print chartfile_path = 'charts/3d.png'fig.savefig(file_path, bbox_inches='tight', pad_inches=0.05, transparent=True) | I usually set the alpha channel to 0 for spines and panes, and finally I remove the ticks: import matplotlib.pyplot as pltfrom mpl_toolkits.mplot3d import Axes3D# Create figureplt.style.use('dark_background') # Dark themefig = plt.figure()ax = fig.add_subplot(111, projection='3d')# Make panes transparentax.xaxis.pane.fill = False # Left paneax.yaxis.pane.fill = False # Right pane# Remove grid linesax.grid(False)# Remove tick labelsax.set_xticklabels([])ax.set_yticklabels([])ax.set_zticklabels([])# Transparent spinesax.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))ax.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))ax.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))# Transparent panesax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))# No ticksax.set_xticks([]) ax.set_yticks([]) ax.set_zticks([]) |
How to fix broken up text with python docx to get free text for Ebooks? I'm trying to edit a free Ebook I found online into easily readable text for Kindle, with headers and full paragraphs. I'm very new to Python and coding in general so I don't really have any progress.Each line is separated by a break with Enter, so each line is considered a separate Paragraph by python.Basically what needs to be done is delete the space and breaks between the lines so the text doesn't break when converted into MOBI or EPUB.The text looks like this:Unformatted:And should look like this: Formatted:Any help is welcome! | I used the docx library that is not installed by default, you can use pip or conda:pip install python-docxconda install python-docx --channel conda-forgeAfter install:from docx import Documentdoc = Document(r'path\to\file\pride_and_prejudice.docx')all_text=[]all_text_str=''for para in doc.paragraphs: all_text.append(para.text)all_text_str=all_text_str.join(all_text)clean_text=all_text_str.replace('\n', '') # Remove linebreaksclean_text=clean_text.replace(' ', '') # Remove even number of spaces (e.g. This usually eliminates non-spaces nicely, but you can tweak accordingly.document = Document()p = document.add_paragraph(clean_text)document.save(r'path\to\file\pride_and_prejudice_clean.docx') |
astropy.table writing problems I'm having problems to write astropy.tables, since yesterday when I updated to astropy 4.0, I cannot write tables into files.I even tried to copy the examples in the astropy web like:import numpy as npfrom astropy.table import Table, Column, MaskedColumnfrom astropy.io import asciix = np.array([1, 2, 3])y = x ** 2data = Table([x, y], names=['x', 'y'])ascii.write(data, 'values.dat')and I always get the same strange error:ValueError: Data type <class 'astropy.table.table.Table'> not allowed to init TableAnyone have an idea of what could be happening? Sorry for the vague question, but I really do not understand why even the examples of the web are failing...NOTE: I'm using python 3.7.3 on anaconda, in a Mac OS 10.14.6. UPDATE:After two downgrades and upgrades the problem resolved itself... I still don't know what happened but it's no longer | After two downgrades and upgrades the problem resolved itself... I still don't know what happened but it's no longer displaying that odd behaviour.Thanks in any case! |
Python load .txt as array Just have a colors.txt file with data: [(216, 172, 185), (222, 180, 190), (231, 191, 202), (237, 197, 206), (236, 194, 204), (227, 184, 194), (230, 188, 200), (232, 192, 203), (237, 199, 210), (245, 207, 218), (245, 207, 218)]now just try to read this in python as an arrayf = open("colors.txt", "r")data = f.read()data2 = np.append(data)f.close()now want to print first value but I have an errorprint(data2[0]) TypeError: _append_dispatcher() missing 1 required positional argument: 'values' | The problem is that you are appending in string data from your file, when you really want a list. So use literal_eval to safely evaluate the data type:import numpy as npfrom ast import literal_evalwith open('colors.txt') as fh: data = literal_eval(fh.read())# np.array can consume a listarr = np.array(data)array([[216, 172, 185], [222, 180, 190], [231, 191, 202], [237, 197, 206], [236, 194, 204], [227, 184, 194], [230, 188, 200], [232, 192, 203], [237, 199, 210], [245, 207, 218], [245, 207, 218]])You also don't want to use np.append, since this takes two arguments, the array you are appending to and the data to append. You want to construct an array out of the data you have read from the file |
I'm trying to create a sorting algorithm to find all combinations that would yield a certain result, but keep getting an error about the index the data is a numpy array (784,)here is the sorting function:while flips < max_flip: flipped_accuracy = 0 combination = [] while flipped_accuracy <= original_accuracy: i_vals = [] for i in range(flips): i_vals.append(i) index = 1 last_added = 0 while flips - index > 0: for i in i_vals: ind = indexes[i] combination.append(index_accuracies[ind]) if np.mean(combination) > original_accuracy: flip_combinations.append(combination) last_added = 0 else: last_added += 1 if i_vals[-index] < 784: if last_added > 10 or (i_vals[0] == 783 and i_vals[-1] == 783): flips += 1 break i_vals[-index] += 1 if index > 1: index -= 1 else: index += 1augemented_images = []for c in flip_combinations: z = pixel_flipper(x0_test, c) augemented_images.append(z)`and the error I keep getting is ind = indexes[i] IndexError: index 784 is out of bounds for axis 0 with size 784 | The following code seems to be the likely culprit:if i_vals[-index] < 784: # ... i_vals[-index] += 1If i_vals[-index] is 783 it will be increased to 784, so the next time that value is used as the index it will cause the error. |
An "Expecting property name:" error started coming - OAuth 2.0 Google APIs Client Library for Python I took this example from google code samples. It was working earlier, but suddenly it stopped working. I tried reseting everything. Still not luck.What is that I'm doing wrong?Here is the error log.Kshitij:oauth Kshitij$ python playlistitems.pyTraceback (most recent call last): File "playlistitems.py", line 51, in <module> scope=YOUTUBE_READONLY_SCOPE) File "/Users/Kshitij/django project/trailers_backend/oauth/oauth2client/util.py", line 132, in positional_wrapper return wrapped(*args, **kwargs) File "/Users/Kshitij/django project/trailers_backend/oauth/oauth2client/client.py", line 1343, in flow_from_clientsecrets client_type, client_info = clientsecrets.loadfile(filename, cache=cache) File "/Users/Kshitij/django project/trailers_backend/oauth/oauth2client/clientsecrets.py", line 145, in loadfile return _loadfile(filename) File "/Users/Kshitij/django project/trailers_backend/oauth/oauth2client/clientsecrets.py", line 103, in _loadfile obj = simplejson.load(fp) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 290, in load **kw) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads return _default_decoder.decode(s) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 365, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 381, in raw_decode obj, end = self.scan_once(s, idx) ValueError: Expecting property name: line 16 column 3 (char 561) | I found the problem. I had added extra parameters in the client_secrets.json which is why flow_from_clientsecrets() wasn't able to parse it. It started working after I removed them. |
Is there a way to make this Python function to pull data from a ftp site better? I've created python function to extract data from an ftp site. It works well. However, there are a lot of try/except statements. I read about using a python "with" statement to make this better but I'm not clear how that will improve the function. Here is the code:HOST = 'ftp.osuosl.org'DIRN = 'debian/tools'FILE = 'loadlin.txt'def func(HOST, DIRN, FILE): import ftplib from StringIO import StringIO import os import socket try: f = ftplib.FTP(HOST) except (socket.error, socket.gaierror), e: print 'ERROR: cannot reach "%s"' % HOST return "None" print '*** Connected to host "%s"' % HOST try: f.login() except ftplib.error_perm: print 'ERROR: cannot login anonymously' f.quit() return "None" print '*** Logged in as "anonymous"' try: f.cwd(DIRN) except ftplib.error_perm: print 'ERROR: cannot CD to "%s"' % DIRN f.quit() return "None" print '*** Changed to "%s" folder' % DIRN try: r = StringIO() f.retrbinary('RETR %s' % FILE, r.write) except ftplib.error_perm: print 'ERROR: cannot read file "%s"' % FILE return "None" else: print '*** Downloaded "%s" to CWD' % FILE f.quit() return r.getvalue()print func(HOST, DIRN, FILE) | In general this style works well. There's a try/except block per interesting section of code, so you can communicate specific details about the error to the caller.The code has f.quit() in several places. This is fine, but it's easy to lose track of which cases should have quit and which shouldn't. It's easy to miss one.Consider this style, with the finally block. This block always gets executed, if the RETR succeeds or fails. It's safer.try: r = StringIO() f.retrbinary('RETR %s' % FILE, r.write) except ftplib.error_perm: print 'ERROR: cannot read file "%s"' % FILE return "None" else: print '*** Downloaded "%s" to CWD' % FILE finally: f.quit() |
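Regarding the "with" statement mentioned in the question: in Python 2, ftplib.FTP is not a context manager itself, but contextlib.closing gives much of the same tidiness — the connection is closed no matter which step fails, so the repeated f.quit() calls disappear (at the cost of the per-step error messages). A compact sketch:

import ftplib
from contextlib import closing
from StringIO import StringIO

def fetch(host, dirn, filename):
    with closing(ftplib.FTP(host)) as f:   # close() runs even if a step below raises
        f.login()
        f.cwd(dirn)
        buf = StringIO()
        f.retrbinary('RETR %s' % filename, buf.write)
        return buf.getvalue()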
List Append Conditionals In this specific situation, how do I make a proper if conditional that appends only a price value, for example below 53, to a list?

offers_list = re.findall("<div class=\"list\"(.*?)</div>", http_response_body, re.DOTALL) # Find list of offers in HTTP Response Body
price_list = []
offers_list2 = []
for i in offers_list: # loop over the list of offers to look for the specific price values
    a = re.findall("\"price\"=(.*?)\"", i) # Find specific price value within each offer
    print a
    price_list.append(a) # Append to list only if the price is lower than X amount
    offers_list2.append(a)

The above code outputs: [u'47.00'][u'49.00'][u'49.00'][u'50.00'][u'50.00'][u'50.00'][u'50.00'][u'51.50'][u'52.50'][u'53.00'][...] However, print a outside the for loop prints only one value, obviously because it did only one search instead of looping through all the offers. | Assuming your regex works properly, something like this would probably do (use float, since the prices contain decimals):

for price in a:
    if float(price) < 53:
        price_list.append(price)
        offers_list2.append(price)

Also, DO NOT PARSE HTML WITH REGEX |
“ warnings.warn('the tensorboard callback does not support '” “ warnings.warn('the tensorboard callback does not support '” When I wanted to use the TensorBoard callback, I ran into this problem. | You didn't list the callback in the model.fit call. Try:

tb_callback = TensorBoard(...)
model.fit(..., callbacks=[tb_callback])

I didn't like naming the callback variable TensorBoard, so I changed it to tb_callback. Then I told model.fit to use that callback. |
Python - reorder pandas Dataframe rows based on column values I have a dataframe with 2 columns: id, antecedent_id. I would like a code to reorder the dataframe in the right order using antecedent_id. The first id is the one with antecedent_id empty.

Dataframe example:

id     antecedent_id
id1    id2
id4    id7
id6    id3
id7
id3    id4
id2    id6
id5    id1

The dataframe reordered should be like this:

id     antecedent_id
id7
id4    id7
id3    id4
id6    id3
id2    id6
id1    id2
id5    id1

I would like to find the fastest code to do that as I have a huge number of rows. Thanks you so much for your help ! | You basically want to sort the dataframe by values in a column:

import pandas as pd

df = pd.DataFrame({
    "id": ["id1", "id4", "id6", "id7", "id3", "id2", "id5"],
    "antecedent_id": ["id2", "id7", "id3", "", "id4", "id6", "id1"]})

sorted_df = df.sort_values("antecedent_id", ascending=False)
print(sorted_df)

This code chunk returns a sorted dataframe like so:

    id antecedent_id
1  id4           id7
5  id2           id6
4  id3           id4
2  id6           id3
0  id1           id2
6  id5           id1
3  id7 |
convert list of lists to a list of smaller tuples I need some help with converting a list of equal sized lists x to a list of tuples such that each tuple should be the length of xx = [['4', '8', '16', '32', '64', '128', '256', '512', '1,024'], ['1,200', '2,400', '4,800', '4,800', '6,200', '6,200', '6,200', '6,200', '6,200'], ['300', '600', '1,200', '2,400', '3,200', '3,200', '4,000', '4,000', '4,000']]# some functions that converts it to expected_output = [('4', '1,200', '300'), ('8', '2,400', '600'), ...]in this case len(x) is 3 but I want a function that can handle any length of x | Use zip with unpacking operator *:out = list(zip(*x))Output:[('4', '1,200', '300'), ('8', '2,400', '600'), ('16', '4,800', '1,200'), ('32', '4,800', '2,400'), ('64', '6,200', '3,200'), ('128', '6,200', '3,200'), ('256', '6,200', '4,000'), ('512', '6,200', '4,000'), ('1,024', '6,200', '4,000')] |
Use IPython Widget Button to call Keras Training Function I would like to use an ipython button to run a function that trains a deep learning model using Keras's fit.generator() and ImageDataGenerator(). I tried to use lambda to pass the arguments to the function, but it returns TypeError: expected str, bytes or os.PathLike object, not Button.Code:def trainGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "grayscale", mask_color_mode = "grayscale",image_save_prefix = "image",mask_save_prefix = "mask", flag_multi_class = False,num_class = 2,save_to_dir = None,target_size = (256,256),seed = 1): image_datagen = ImageDataGenerator(**aug_dict) mask_datagen = ImageDataGenerator(**aug_dict) image_generator = image_datagen.flow_from_directory( train_path, classes = [image_folder], class_mode = None, color_mode = image_color_mode, target_size = target_size, batch_size = batch_size, save_to_dir = save_to_dir, save_prefix = image_save_prefix, seed = seed) mask_generator = mask_datagen.flow_from_directory( train_path, classes = [mask_folder], class_mode = None, color_mode = mask_color_mode, target_size = target_size, batch_size = batch_size, save_to_dir = save_to_dir, save_prefix = mask_save_prefix, seed = seed) train_generator = zip(image_generator, mask_generator) for (img,mask) in train_generator: img,mask = adjustData(img,mask,flag_multi_class,num_class) yield (img,mask)def segmentation_training(trainfolder, modelname): data_gen_args = dict(rotation_range=0.1, width_shift_range=[0.0, 0, 0.5], height_shift_range=[0.0, 0, 0.5], zoom_range=[0.5,1], horizontal_flip=True, fill_mode='nearest') myGene = trainGenerator(2,trainfolder,'image','label',data_gen_args,save_to_dir = None) model = unet() model_checkpoint = ModelCheckpoint(os.path.join('Models',modelname+'.hdf5'), monitor='loss',verbose=1, save_best_only=True) model.fit_generator(myGene,steps_per_epoch=3,epochs=1,callbacks=[model_checkpoint])modelname = "test"trainfolder = Path('Data/Segmentation/dataset/train')btn = widgets.Button(description="Run")btn.on_click(lambda trainfolder=trainfolder, modelname=modelname : segmentation_training(trainfolder,modelname))display(btn)Error:---------------------------------------------------------------------------TypeError Traceback (most recent call last)<ipython-input-41-d4282548b872> in <lambda>(trainfolder, modelname) 46 trainfolder = Path('Data/Segmentation/dataset/train') 47 btn = widgets.Button(description="Run")---> 48 btn.on_click(lambda trainfolder=trainfolder, modelname=modelname : segmentation_training(trainfolder,modelname)) 49 display(btn)<ipython-input-41-d4282548b872> in segmentation_training(trainfolder, modelname) 40 model = unet() 41 model_checkpoint = ModelCheckpoint(os.path.join('Models',modelname+'.hdf5'), monitor='loss',verbose=1, save_best_only=True)---> 42 model.fit_generator(myGene,steps_per_epoch=3,epochs=1,callbacks=[model_checkpoint]) 43 44 ~/virtualenv/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs) 89 warnings.warn('Update your `' + object_name + 90 '` call to the Keras 2 API: ' + signature, stacklevel=2)---> 91 return func(*args, **kwargs) 92 wrapper._original_function = func 93 return wrapper~/virtualenv/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch) 1413 use_multiprocessing=use_multiprocessing, 1414 
shuffle=shuffle,-> 1415 initial_epoch=initial_epoch) 1416 1417 @interfaces.legacy_generator_methods_support~/virtualenv/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch) 175 batch_index = 0 176 while steps_done < steps_per_epoch:--> 177 generator_output = next(output_generator) 178 179 if not hasattr(generator_output, '__len__'):~/virtualenv/lib/python3.6/site-packages/keras/utils/data_utils.py in get(self) 791 success, value = self.queue.get() 792 if not success:--> 793 six.reraise(value.__class__, value, value.__traceback__)~/virtualenv/lib/python3.6/site-packages/six.py in reraise(tp, value, tb) 691 if value.__traceback__ is not tb: 692 raise value.with_traceback(tb)--> 693 raise value 694 finally: 695 value = None~/virtualenv/lib/python3.6/site-packages/keras/utils/data_utils.py in _data_generator_task(self) 656 # => Serialize calls to 657 # infinite iterator/generator's next() function--> 658 generator_output = next(self._generator) 659 self.queue.put((True, generator_output)) 660 else:<ipython-input-41-d4282548b872> in trainGenerator(batch_size, train_path, image_folder, mask_folder, aug_dict, image_color_mode, mask_color_mode, image_save_prefix, mask_save_prefix, flag_multi_class, num_class, save_to_dir, target_size, seed) 13 save_to_dir = save_to_dir, 14 save_prefix = image_save_prefix,---> 15 seed = seed) 16 mask_generator = mask_datagen.flow_from_directory( 17 train_path,~/virtualenv/lib/python3.6/site-packages/keras_preprocessing/image.py in flow_from_directory(self, directory, target_size, color_mode, classes, class_mode, batch_size, shuffle, seed, save_to_dir, save_prefix, save_format, follow_links, subset, interpolation) 962 follow_links=follow_links, 963 subset=subset,--> 964 interpolation=interpolation) 965 966 def standardize(self, x):~/virtualenv/lib/python3.6/site-packages/keras_preprocessing/image.py in __init__(self, directory, image_data_generator, target_size, color_mode, classes, class_mode, batch_size, shuffle, seed, data_format, save_to_dir, save_prefix, save_format, follow_links, subset, interpolation) 1731 self.samples = sum(pool.map(function_partial, 1732 (os.path.join(directory, subdir)-> 1733 for subdir in classes))) 1734 1735 print('Found %d images belonging to %d classes.' %/usr/lib/python3.6/multiprocessing/pool.py in map(self, func, iterable, chunksize) 264 in a list that is returned. 265 '''--> 266 return self._map_async(func, iterable, mapstar, chunksize).get() 267 268 def starmap(self, func, iterable, chunksize=None):/usr/lib/python3.6/multiprocessing/pool.py in _map_async(self, func, iterable, mapper, chunksize, callback, error_callback) 374 raise ValueError("Pool not running") 375 if not hasattr(iterable, '__len__'):--> 376 iterable = list(iterable) 377 378 if chunksize is None:~/virtualenv/lib/python3.6/site-packages/keras_preprocessing/image.py in <genexpr>(.0) 1731 self.samples = sum(pool.map(function_partial, 1732 (os.path.join(directory, subdir)-> 1733 for subdir in classes))) 1734 1735 print('Found %d images belonging to %d classes.' %/usr/lib/python3.6/posixpath.py in join(a, *p) 78 will be discarded. 
An empty last part will result in a path that 79 ends with a separator."""---> 80 a = os.fspath(a) 81 sep = _get_sep(a) 82 path = aTypeError: expected str, bytes or os.PathLike object, not ButtonWhen I run segmentation_train(trainpath,modelname) without the button implementation, it works fine. How can I call the function by pressing the button?Thanks in advance | Your lambda is bound to the Button class it was passed into, which implicitly made the first parameter the Button object itself. The result was that the trainpath parameter, was actually a renamed btn instance of Button. The functions that were trying to use trainpath as a filepath string were confused and so threw the error. If you want to keep the lambda, simply add self as the first parameter, and then ignore it:btn.on_click(lambda self, trainfolder=trainfolder, modelname=modelname : segmentation_training(trainfolder,modelname))Otherwise, there is another suggested implementation using functools and calling a function with explicit parameters:import functools def click_func(trainfolder,modelname): segmentation_training(trainfolder,modelname)btn.on_click(functools.partial(click_func,trainfolder=trainfolder,modelname=modelname)) |
How is bitmap arrays more efficient than logical arrays? So, I am new to bitmaps. Please pardon the level of my question.I am trying to make a decision on the type of datastructure I should be using to make pairwise comparisons on vectors. I was told to use bitmaps instead of the representing each vector (40k in total),v1 ={ 12,78,96,87,100,...}I would like to know how bitmaps are going to increase the efficiency of the overall operation? Doesnt it make the length of each vector longer ? Thats where I am confused.and also , if there is a good guide on using encoding formats.I use python for implementation.Thank you in advance. | Generally bitmaps are used to deal with a large number of related booleans in a memory-compact way. Such as, if you had 16 booleans, you could use a single bitmap to encode all of their states in a single 16-bit integer.It sounds like you want to compare more abstract vectors, in which case it doesn't sound like bitmaps get you anything.Unless you're referring to bitmaps as in the image format, and the vectors in question are colour channels for an image? In that case, it does make sense to use a standard image format to represent images. |
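To make the memory point in the first paragraph concrete, here is a small sketch (nothing specific to the vectors in the question) that packs several booleans into a single integer and reads them back with bit operations:

flags = 0                      # one int instead of a list of 16 booleans

def set_bit(bitmap, i):
    return bitmap | (1 << i)   # turn bit i on

def test_bit(bitmap, i):
    return (bitmap >> i) & 1 == 1

flags = set_bit(flags, 3)
flags = set_bit(flags, 7)
print(bin(flags))              # 0b10001000
print(test_bit(flags, 3))      # True
print(test_bit(flags, 5))      # False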
PySpark execute plain Python function on each DataFrame row I have Spark DataFrame DF1 with millions of rows. Each row have up to 100 columns. col1 | col2 | col3 | ... | colN--------------------------------v11 | v12 | v13 | ... | v1Nv21 | v22 | v23 | ... | v2N... | ... | ... | ... | ...Also, I have another DataFrame DF2 where I have hundreds of rows with name and body columns. Name contains function name, body contains plain Python code, the boolean function which returns true or false. These functions inside their logic, can refer to any column in the single row from DF1. func_name | func_body-----------------------------------------------func1 | col2 < col45func2 | col11.contains("London") and col32*col15 < col21funcN | .... I need to join both of these DataFrames - DF1 with DF2 and apply each function from Df2 to each row in DF1. Each function must be able to accept the parameters from DF1, let's say dictionary array with key/value pairs which represent name/value of all columns of the corresponding row from DF1.I know how to join DF1 and DF2, also, I understand that execution of Python functions will not work in destributed fashion. That's fine for now. This is a temporal solution. I just need to destribute all of the rows from DF1 over the workers nodes, and apply each Python function to each row of DF1 in different tasks of Apache Spark application. Evaluate eval() them and pass dictionary array with key/value pairs inside, as I mentioned above.In general, each Python function is a tag, that I'd like to assign to row in DF1 in case certain function returned true. For example, this is resulting DataFrame DF3:col1 | col2 | col3 | ... | colN | tags--------------------------------------v11 | v12 | v13 | ... | v1N | [func1, func76, funcN]v21 | v22 | v23 | ... | v2N | [func32]... | ... | ... | ... | ... | [..., ..., ..., ..., ...]Is it possible with PySpark and if so, could you please show an example how it can be achieved? Is UDF functions with Map from DF.columns as an input parameter is a right way to go or it can be done in some more simple fashion? Does Spark have any limitations on how much UDF functions(number) can be registered at one point of time? | You can achieve that using SQL expressions which can be evaluated using expr. However, you'll not be able to join the 2 DataFrames as SQL expressions can't be evaluated as column values (see this post), so you have to collect the functions into a list (as you have only hundreds of lines, it can fit in memory). 
Here is a working example you can adapt for your requirement:data1 = [(1, "val1", 4, 5, "A", 10), (0, "val2", 7, 8, "B", 20), (9, "val3", 8, 1, "C", 30), (10, "val4", 2, 9, "D", 30), (20, "val5", 6, 5, "E", 50), (3, "val6", 100, 2, "X", 45)]df1 = spark.createDataFrame(data1, ["col1", "col2", "col3", "col4", "col5", "col6"])data2 = [("func1", "col1 + col3 = 5 and col2 like '%al1'"), ("func2", "col6 = 30 or col1 * col4 > 20"), ("func3", "col5 in ('A', 'B', 'C') and col6 - col1 < 30"), ("func4", "col2 like 'val%' and col1 > 0")]df2 = spark.createDataFrame(data2, ["func_name", "func_body"])# get functions into a listfunctions = df2.collect()# case/when expression to evaluate the functionssatisfied_expr = [when(expr(f.func_body), lit(f.func_name)) for f in functions]# add new column tagsdf1.withColumn("tags", array(*satisfied_expr)) \ .withColumn("tags", expr("filter(tags, x -> x is not null)")) \ .show(truncate=False)After adding the array column tags, filter function is used to remove null values that correspond to unsatisfied expressions. This function is only available starting from Spark 2.4+, you'll have to use and UDF for older versions. Gives:+----+----+----+----+----+----+---------------------+|col1|col2|col3|col4|col5|col6|tags |+----+----+----+----+----+----+---------------------+|1 |val1|4 |5 |A |10 |[func1, func3, func4]||0 |val2|7 |8 |B |20 |[func3] ||9 |val3|8 |1 |C |30 |[func2, func3, func4]||10 |val4|2 |9 |D |30 |[func2, func4] ||20 |val5|6 |5 |E |50 |[func2, func4] ||3 |val6|100 |2 |X |45 |[func4] |+----+----+----+----+----+----+---------------------+ |
Translate curl post to python requests post I am trying to upload an XML file to an IIS server using Python 3.8.1 and requests. I have successfully done this many times using curl. This works:curl -v -H "Content-Type: text/xml" --data-binary @MS1481_20200204_163918_4461289.xml https://oursever.com/postHereThe XML file that I am attempting to upload is created earlier in the program. The filename is dynamically created using a series of strings concatenated together, including the current date and time in the middle.This is how I am attempting to upload using Python:postURL = 'https://oursever.com/postHere'postXML = {'file': open(xmlfilename, 'rb')}postResult = requests.post(postURL, files=postXML)print(postResult)I continue to get <Response [400]>. I can successfully upload immediately afterwards using curl. Suggestions? | A 400 error indicates that something is wrong with your client side input.Is the file getting properly opened and read? You should use either the with condition to open it,with open(path) as f: print(type(f))or make sure you read it with this open(xmlfilename, 'rb').read()Try:postURL = 'https://oursever.com/postHere'postXML = {'file': open(xmlfilename, 'rb').read()}postResult = requests.post(postURL, files=postXML)print(postResult) |
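One other difference worth checking: the curl command sends the file as the raw request body with Content-Type: text/xml, while the files= parameter in requests builds a multipart/form-data upload, which some endpoints reject with a 400. A sketch that mirrors the curl call more closely (same URL placeholder as in the question, and xmlfilename as already defined in the original code):

import requests

postURL = 'https://oursever.com/postHere'
headers = {'Content-Type': 'text/xml'}

with open(xmlfilename, 'rb') as f:
    body = f.read()

postResult = requests.post(postURL, data=body, headers=headers)
print(postResult.status_code, postResult.text)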
Why isn't the regular expression's "non-capturing" group working? In the snippet below, the non-capturing group "(?:aaa)" should be ignored in the matching result,The result should be "_bbb" only.However, I get "aaa_bbb" in the matching result; only when I specify group(2) does it show "_bbb".>>> import re>>> s = "aaa_bbb">>> print(re.match(r"(?:aaa)(_bbb)", s).group())aaa_bbb | I think you're misunderstanding the concept of a "non-capturing group". The text matched by a non-capturing group still becomes part of the overall regex match.Both the regex (?:aaa)(_bbb) and the regex (aaa)(_bbb) return aaa_bbb as the overall match. The difference is that the first regex has one capturing group which returns _bbb as its match, while the second regex has two capturing groups that return aaa and _bbb as their respective matches. In your Python code, to get _bbb, you'd need to use group(1) with the first regex, and group(2) with the second regex.The main benefit of non-capturing groups is that you can add them to a regex without upsetting the numbering of the capturing groups in the regex. They also offer (slightly) better performance as the regex engine doesn't have to keep track of the text matched by non-capturing groups.If you really want to exclude aaa from the overall regex match then you need to use lookaround. In this case, positive lookbehind does the trick: (?<=aaa)_bbb. With this regex, group() returns _bbb in Python. No capturing groups needed.My recommendation is that if you have the ability to use capturing groups to get part of the regex match, use that method instead of lookaround. |
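A quick check of both variants with the same test string makes the difference visible; note the lookbehind has to be used with re.search here, because re.match anchors at position 0, where there is nothing for the lookbehind to inspect:

    import re

    s = "aaa_bbb"

    # non-capturing group: the overall match still includes 'aaa'
    print(re.match(r"(?:aaa)(_bbb)", s).group())    # aaa_bbb
    print(re.match(r"(?:aaa)(_bbb)", s).group(1))   # _bbb

    # positive lookbehind: 'aaa' must precede the match but is not part of it
    print(re.search(r"(?<=aaa)_bbb", s).group())    # _bbb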
TypeError: 'list' object is not callable while trying to access a list I am trying to run this code where I have a list of lists. I need to add to inner lists, but I get the error TypeError: 'list' object is not callable.Can anyone tell me what I am doing wrong here?def createlists(): global maxchar global minchar global worddict global wordlists for i in range(minchar, maxchar + 1): wordlists.insert(i, list()) #add data to list now for words in worddict.keys(): print words print wordlists(len(words)) # <--- Error here. (wordlists(len(words))).append(words) # <-- Error here too print "adding word " + words + " at " + str(wordlists(len(words))) print wordlists(5) | For accessing the elements of a list you need to use square brackets ([]) and not parentheses (()).Instead of:print wordlists(len(words))you need to use:print wordlists[len(words)]And instead of:(wordlists(len(words))).append(words)you need to use:wordlists[len(words)].append(words)
How to use `apply()` or other vectorized approach when previous value matters Assume I have a DataFrame of the following form where the first column is a random number, and the other columns will be based on the value in the previous column.For ease of use, let's say I want each number to be the previous one squared. So it would look like the below.I know I can write a pretty simple loop to do this, but I also know looping is not usually the most efficient in python/pandas. How could this be done with apply() or rolling_apply()? Or, otherwise be done more efficiently?My (failed) attempts below:In [12]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})In [13]: aOut[13]: 0 1 2 30 1 0 0 01 2 0 0 02 3 0 0 03 4 0 0 04 5 0 0 0In [14]: a = a.apply(lambda x: x**2)In [15]: aOut[15]: 0 1 2 30 1 0 0 01 4 0 0 02 9 0 0 03 16 0 0 04 25 0 0 0In [16]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})In [17]: pandas.rolling_apply(a,1,lambda x: x**2)C:\WinPython64bit\python-3.5.2.amd64\lib\site-packages\spyderlib\widgets\externalshell\start_ipython_kernel.py:1: FutureWarning: pd.rolling_apply is deprecated for DataFrame and will be removed in a future version, replace with DataFrame.rolling(center=False,window=1).apply(args=<tuple>,kwargs=<dict>,func=<function>) # -*- coding: utf-8 -*-Out[17]: 0 1 2 30 1.0 0.0 0.0 0.01 4.0 0.0 0.0 0.02 9.0 0.0 0.0 0.03 16.0 0.0 0.0 0.04 25.0 0.0 0.0 0.0In [18]: a = pandas.DataFrame({0:[1,2,3,4,5],1:0,2:0,3:0})In [19]: a = a[:-1]**2In [20]: aOut[20]: 0 1 2 30 1 0 0 01 4 0 0 02 9 0 0 03 16 0 0 0In [21]: So, my issue is mostly how to refer to the previous column value in my DataFrame calculations. | What you're describing is a recurrence relation, and I don't think there is currently any non-loop way to do that. Things like apply and rolling_apply still rely on having all the needed data available before they begin, and outputting all the result data at once at the end. That is, they don't allow you to compute the next value using earlier values of the same series. See this question and this one as well as this pandas issue.In practical terms, for your example, you only have three columns you want to fill in, so doing a three-pass loop (as shown in some of the other answers) will probably not be a major performance hit. |
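For the squaring example above, the straightforward loop the answer alludes to is only one short pass per column, so it stays cheap even though it is not vectorized across columns; a minimal sketch:

    import pandas as pd

    a = pd.DataFrame({0: [1, 2, 3, 4, 5], 1: 0, 2: 0, 3: 0})

    # fill each column from the previous one: three short passes in total
    for col in range(1, 4):
        a[col] = a[col - 1] ** 2

    print(a)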
How to fix problem of "ModuleNotFoundError: No module named 'PIL'"? I tried with the solution given in 'stackoverflow', but not resolved.I am trying to extract text from images with the help of pytesseract module from python.The following are the steps I followed:code:py -m pip install --user virtualenvpy -m venv tessa #creating virtual environmentc:\Users\folder\tessa\Scripts>activate #activated virtual environment(tessa) c:\Users\folder>jupyter notebook #initiated jupyter IDEpip install opencv-pythonpip install pytesseractimport pytesseractpytesseract.pytesseract.tesseract_cmd = r'C:\\Users\\folder\\subfolder\\Local\\Programs\\Tesseract-OCR\\tesseract.exe'Now problem start as shown in the image uploaded here in.Also showing error 'ModuleNotFoundError : No module named "Image"'I am not able to fix this issue. Can anybody help on this error, to fix it?Thanks a lot. | It is saying that the module named Pillow(PIL) is missing.You can install it using pip. Enter the following in Command Line.pip install Pillow |
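After installing, a quick sanity check, run in the same environment/kernel that raised the error, is to import the package and print its version (recent Pillow releases expose PIL.__version__); if this succeeds, the pytesseract import should no longer complain about PIL/Image:

    import PIL
    from PIL import Image  # this is the import pytesseract needs

    print(PIL.__version__)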
pytesseract not idenfiying digits properly as well it is detecting dashed 0 as 8 Pytesseract unable to identify proper characters as well it is predicting slashed zero wrong.Here is my Image:from PIL import Imageimport pytesseractimport cv2import numpy as npimg = cv2.imread('dilation_1_0.png') #dilation_1.png working,eroded.png,eroded_1.pngtext = pytesseract.image_to_string(img,config="--psm 6 oem 0")print(text)cv2.imshow("image",img)cv2.waitKey(0)cv2.destroyAllWindows() | For any image, you need to preprocess it make it detect more easilysome methods are1.grayscale image2.erosion3.opening - erosion followed by dilation4.canny edge detection5.skew correction6.template matchingChoose which one works best for you, or you can do it in combinations. Check the following link for detailed explanationsPreprocessing for PytesseractNB;Make sure you installed tesseract.exe also. If it is working for normal text images, your tesseract.exe is installed(this is different from pytessearct) |
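As a concrete starting point, a grayscale-plus-Otsu-threshold pass before OCR is a common first step. The digit whitelist below is an assumption based on the digits-only image described, and note that the OCR engine flag is --oem, not a bare oem as in the original config string:

    import cv2
    import pytesseract

    img = cv2.imread('dilation_1_0.png')

    # 1. grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # 2. Otsu binarisation (erosion/opening could be added before this step)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. OCR restricted to digits
    config = "--psm 6 --oem 3 -c tessedit_char_whitelist=0123456789"
    print(pytesseract.image_to_string(thresh, config=config))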
Copying numpy array with '=' operator. Why is it working? According to this answer, B=A where A is a numpy array, B should be pointing to the same object A.import cv2import numpy as npimg = cv2.imread('rose.jpeg')print("img.shape: ", np.shape(img))img2 = imgimg = cv2.resize(img, (250,100))print("img.shape: ", img.shape)print("img2.shape:", img2.shape)Output:img.shape: (331, 500, 3)img.shape: (100, 250, 3)img2.shape: (331, 500, 3)It seems to be a very basic question, but I have been scratching my head over this. Could someone please explain what's happening behind it? | The "problem" is that your not using numpy here but opencv and while numpy array.resize() is in-place opencv img.resize() is not.So your call to img = cv2.resize(img, (250,100))creates a new object (image) with the given size. So here the img variable will point to a different object then before the call. img2 = imgadds a new name for the original object. Here img2 and img are refering to exactly the same object/piece of memory. img = cv2.resize(img, (250,100))cv2.resize(img, (250,100)) creates a new object and the name img now refers to that new object/piece of memory. print("img.shape: ", img.shape)gets you the size of the new object and print("img2.shape:", img2.shape)the size of the original object as img2 still refers to the original object.By the way in numpy the call a = a.resize(...) would be really bad - because a would then by None (return value of resize) instead of the resized array. There you would just do a.resize(...) |
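A short demonstration of the two behaviours, plus .copy(), which is what you would use if you actually wanted an independent copy before resizing:

    import cv2

    img = cv2.imread('rose.jpeg')
    img2 = img                          # second name for the same object
    print(img is img2)                  # True

    img = cv2.resize(img, (250, 100))   # new object; the name 'img' is rebound
    print(img is img2)                  # False
    print(img.shape, img2.shape)        # (100, 250, 3) vs. the original shape

    img3 = img2.copy()                  # a genuinely independent copy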
BeautifulSoup Loop Thru Items I have a page that has the following structure <div class="cloud-grid margin-bottom-40"><div class="cloud-grid__col is-6"> <a href="https://cloud.google.com/bigquery/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="bigQuery" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> BigQuery </a> <div class="cloud-product-card__sub-headline"> A fully managed, highly scalable data warehouse with built-in ML. </div> <a href="https://cloud.google.com/dataflow/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudDataflow" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Dataflow </a> <div class="cloud-product-card__sub-headline"> Real-time batch and stream data processing. </div> <a href="https://cloud.google.com/dataproc/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudDataproc" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Dataproc </a> <div class="cloud-product-card__sub-headline"> Managed Spark and Hadoop service. </div> <a href="https://cloud.google.com/datalab/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudDatalab" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Datalab </a> <div class="cloud-product-card__sub-headline"> Explore, analyze, and visualize large datasets. </div> <a href="https://cloud.google.com/dataprep/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudDataprep" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Dataprep </a> <div class="cloud-product-card__sub-headline"> Cloud data service to explore, clean, and prepare data for analysis. </div> <a href="https://cloud.google.com/pubsub/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudPubSub" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Pub/Sub </a> <div class="cloud-product-card__sub-headline"> Ingest event streams from anywhere, at any scale. </div></div><div class="cloud-grid__col is-6"> <a href="https://cloud.google.com/composer/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudComposer" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Composer </a> <div class="cloud-product-card__sub-headline"> A fully managed workflow orchestration service built on Apache Airflow. </div> <a href="https://cloud.google.com/data-fusion/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="cloudDataFusion" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Cloud Data Fusion </a> <div class="cloud-product-card__sub-headline"> Fully managed, code-free data integration. </div> <a href="https://cloud.google.com/data-catalog/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="dataCatalog" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Data Catalog </a> <div class="cloud-product-card__sub-headline"> A fully managed and highly scalable data discovery and metadata management service. 
</div> <a href="https://cloud.google.com/genomics/" track-type="navigateTo" track-name="link" track-metadata-eventdetail="genomics" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Genomics </a> <div class="cloud-product-card__sub-headline"> Power your science with Google Genomics. </div> <a href="https://marketingplatform.google.com/about/enterprise/#?modal_active=none" target="_blank" rel="noopener" track-type="navigateTo" track-name="link" track-metadata-eventdetail="googleMarketingPlatform" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Google Marketing Platform* </a> <div class="cloud-product-card__sub-headline"> Enterprise analytics for better customer experiences. </div> <a href="https://marketingplatform.google.com/about/data-studio/" target="_blank" rel="noopener" track-type="navigateTo" track-name="link" track-metadata-eventdetail="googleDataStudio" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Google Data Studio* </a> <div class="cloud-product-card__sub-headline"> Tell great data stories to support better business decisions. </div> <a href="https://firebase.google.com/products/performance/" target="_blank" rel="noopener" track-type="navigateTo" track-name="link" track-metadata-eventdetail="firebasePerformanceMonitoring" track-metadata-position="body" track-metadata-section="dataAnalytics" class="cloud-product-card__headline"> Firebase Performance Monitoring </a> <div class="cloud-product-card__sub-headline"> Gain insight into your app's performance. </div></div>I also have a python script that will get the html code and extract the following elements:a class="cloud-product-card__headline" get [href] and Textdiv class="cloud-product-card__sub-headline" get TextHere is my Code :soup = BeautifulSoup(html_elem, 'html.parser')listdt = []for dt in soup.find_all(True, {"class": ["cloud-product-card__headline", "cloud-product-card__sub-headline"]}): listdt.append(dt) for dt in listdt: prod_name = dt.find_next('a').text.strip() prod_href = dt.find_next('a')['href'] if dt.find_next('a') is not None else '----' prod_desc = dt.find_next('div').text.strip() print(prod_name + ' - ' + prod_href + ' - ' + prod_desc)I manage to get all the results back but they are very unorganized.I ma trying to get/scrape the data out of https://cloud.google.com/products/ in csv or json format | A slightly different approach: There are equal numbers of these items, and a regular structure, so you could use join the three items as a list within a list comprehension. Title and link can both come from elements with class cloud-product-card__headline, and then the description is the next_sibling.next_sibling. A little string cleaning can be done on the description before output. import requests, re, csvfrom bs4 import BeautifulSoup as bsr = requests.get('https://cloud.google.com/products/')soup = bs(r.content, 'lxml')products = [[i.text.strip(), i['href'], re.sub('\n\s+',' ',i.next_sibling.next_sibling.text.strip())] for i in soup.select('.cloud-product-card__headline')]with open("data.csv", "w", encoding="utf-8-sig", newline='') as csv_file: w = csv.writer(csv_file, delimiter = ",", quoting=csv.QUOTE_MINIMAL) w.writerow(['Title','Link','Description']) for product in products: w.writerow(product)Example output rows: |
How to count elements in an array within a given increasing interval? I have an array of time values. I want to know how many values are in each 0.05 seconds window. For example, some values of my array are: -1.9493, -1.9433, -1.911 , -1.8977, -1.8671,..In the first interval of 0.050 seconds (from -1.9493 to -1.893) I'm expecting to have 3 elements. I already created another array with the 0.050 seconds steps. a=max(array) b=min(array) ventanalinea1=np.arange(b,a,0.05) v1=np.array(ventanalinea1)In other words, I would like to compare my original array with this one. I would like to know if there is a way to ask Python to evaluate my array within a given dynamic range. | One of the variants:import numpy as np# original arraya = [-1.9493, -1.9433, -1.911 , -1.8977, -1.8671]step = 0.05bounds = np.arange(min(a), max(a) + step, step)result = [ list(filter(lambda x: bounds[i] <= x <= bounds[i+1], a)) for i in range(len(bounds)-1)]
python: checking for errors in the user's input I would like to check if a string can be a float before I attempt to convert it to a float. This way, if the string is not a float, we can print an error message and exit instead of crashing the program. So when the user inputs something, I want to check whether it is a float: if it is, print "true"; if it is not, print "false" rather than crashing. I don't want to use built-in functions for this. I need to make my own function for this.I tried:import typesdef isFloat(): x = raw_input("Enter: ") if(x) == float: print("true") if(x) == str: print("false")isFloat()I don't know whether this approach is right, but it won't work; it won't print anything either | The only reliable way to figure out whether a string represents a float is to try to convert it. You could check first and convert afterwards, but why should you? You'd do it twice, without need.Consider this code:def read_float(): """ return a floating-point number, or None """ while True: s = raw_input('Enter a float: ').strip() if not s: return None try: return float(s) except ValueError: print 'Not a valid number: %r' % snum = read_float()while num is not None: ... do something ... print 'Try again?' num = read_float()
How to enumerate possible reconstructions of a Hamiltonian cycle without DFS/BFS? I have a directed Hamiltonian cycle:[..., a, b, ... , c, d, ..., e, f, ...]Where b = next(a), d = next(c), and f = next(e).Say I delete edges (a, b), (c, d), and (e, f). Question: How do I generate all possible recombinations of the graph such that it remains a Hamiltonian cycle (keeping in mind that I may have to reverse the ordering in one of the pieces to fix direction)?What I knowI know that the number of new Hamiltonian cycles reachable by removing n-edges is double-factorial(n-1). I also know that if I'm removing two consecutive edges, I'll get duplicate solutions (which I'm fine with ... they should be minimal relative to unique cycles and it keeps the resulting code clean).What I've tried (in rough Pseudocode)One way to think about this is that any of the yielded Hamiltonian cycles must preserve the property that you have to travel along the pieces of the disconnected graph before jumping to a new piece.So, for example, thinking about the cycle above (where n = 3), there are the following 3 pieces:[b, ..., c][d, ..., e][f, ..., a]So let's say my new solution starts as follows:[..., a, d, ...]I know that e is the vertex that must come next from my list of terminal vertices.So, using that idea, the Python would be something like this:from itertools import permutationsdef hamiltonian_cycles(l, s=None, r=None): if s is None: s = [l[0]] if r is None: r = l[1:] if not r: yield s else: for permutation in permutations(r): s1 = s[:] + [permutation[0]] for cycle in hamiltonian_cycles(l, s1, permutation[1:]): yield cycle s2 = s[:] + [(permutation[0][1], permutation[0][0])] for cycle in hamiltonian_cycles(l, s2, permutation[1:]): yield cycle>>> l = [('f', 'a'), ('b', 'c'), ('d', 'e')]>>> for cycle in hamiltonian_cycles(l):... print(cycle)[('f', 'a'), ('b', 'c'), ('d', 'e')][('f', 'a'), ('b', 'c'), ('e', 'd')][('f', 'a'), ('c', 'b'), ('d', 'e')][('f', 'a'), ('c', 'b'), ('e', 'd')][('f', 'a'), ('d', 'e'), ('b', 'c')][('f', 'a'), ('d', 'e'), ('c', 'b')][('f', 'a'), ('e', 'd'), ('b', 'c')][('f', 'a'), ('e', 'd'), ('c', 'b')]This seems ugly and stops working past n=3, though, hence the question.Why I don't want to use BFS/DFSI need a generator that is linear in the number of edges deleted, not in the number of total edges + vertices. | Thanks to this helpful answer here, here's the code that does it.from itertools import chain, permutations, productdef new_edge_sets(deleted_edges): def edges_to_pieces(l): new_list = [] n = len(l) for i in xrange(-1,n-1): new_list.append((l[i%n][1], l[(i+1)%n][0])) return new_list def permute(it): return product(*(permutations(i) for i in it)) def permute2(it): return chain.from_iterable(permute(p) for p in permutations(it)) def pieces_to_edges(p): new_list = [] n = len(p) for i in xrange(n): new_list.append((p[i%n][1], p[(i+1)%n][0])) return new_list def permute3(s): return (pieces_to_edges(s[:1] + list(p)) for p in permute2(s[1:])) return permute3(edges_to_pieces(deleted_edges))Example:>>> deleted_edges = [('a', 'b'), ('c', 'd'), ('e', 'f'), ('g', 'h')]>>> l = list(new_edge_sets(deleted_edges))>>> len(l)48>>> for new_edges in l:... 
print(new_edges)[('a', 'b'), ('c', 'd'), ('e', 'f'), ('g', 'h')][('a', 'b'), ('c', 'd'), ('e', 'g'), ('f', 'h')][('a', 'b'), ('c', 'e'), ('d', 'f'), ('g', 'h')][('a', 'b'), ('c', 'e'), ('d', 'g'), ('f', 'h')][('a', 'c'), ('b', 'd'), ('e', 'f'), ('g', 'h')][('a', 'c'), ('b', 'd'), ('e', 'g'), ('f', 'h')][('a', 'c'), ('b', 'e'), ('d', 'f'), ('g', 'h')][('a', 'c'), ('b', 'e'), ('d', 'g'), ('f', 'h')][('a', 'b'), ('c', 'f'), ('g', 'd'), ('e', 'h')][('a', 'b'), ('c', 'f'), ('g', 'e'), ('d', 'h')][('a', 'b'), ('c', 'g'), ('f', 'd'), ('e', 'h')][('a', 'b'), ('c', 'g'), ('f', 'e'), ('d', 'h')][('a', 'c'), ('b', 'f'), ('g', 'd'), ('e', 'h')][('a', 'c'), ('b', 'f'), ('g', 'e'), ('d', 'h')][('a', 'c'), ('b', 'g'), ('f', 'd'), ('e', 'h')][('a', 'c'), ('b', 'g'), ('f', 'e'), ('d', 'h')][('a', 'd'), ('e', 'b'), ('c', 'f'), ('g', 'h')][('a', 'd'), ('e', 'b'), ('c', 'g'), ('f', 'h')][('a', 'd'), ('e', 'c'), ('b', 'f'), ('g', 'h')][('a', 'd'), ('e', 'c'), ('b', 'g'), ('f', 'h')][('a', 'e'), ('d', 'b'), ('c', 'f'), ('g', 'h')][('a', 'e'), ('d', 'b'), ('c', 'g'), ('f', 'h')][('a', 'e'), ('d', 'c'), ('b', 'f'), ('g', 'h')][('a', 'e'), ('d', 'c'), ('b', 'g'), ('f', 'h')][('a', 'd'), ('e', 'f'), ('g', 'b'), ('c', 'h')][('a', 'd'), ('e', 'f'), ('g', 'c'), ('b', 'h')][('a', 'd'), ('e', 'g'), ('f', 'b'), ('c', 'h')][('a', 'd'), ('e', 'g'), ('f', 'c'), ('b', 'h')][('a', 'e'), ('d', 'f'), ('g', 'b'), ('c', 'h')][('a', 'e'), ('d', 'f'), ('g', 'c'), ('b', 'h')][('a', 'e'), ('d', 'g'), ('f', 'b'), ('c', 'h')][('a', 'e'), ('d', 'g'), ('f', 'c'), ('b', 'h')][('a', 'f'), ('g', 'b'), ('c', 'd'), ('e', 'h')][('a', 'f'), ('g', 'b'), ('c', 'e'), ('d', 'h')][('a', 'f'), ('g', 'c'), ('b', 'd'), ('e', 'h')][('a', 'f'), ('g', 'c'), ('b', 'e'), ('d', 'h')][('a', 'g'), ('f', 'b'), ('c', 'd'), ('e', 'h')][('a', 'g'), ('f', 'b'), ('c', 'e'), ('d', 'h')][('a', 'g'), ('f', 'c'), ('b', 'd'), ('e', 'h')][('a', 'g'), ('f', 'c'), ('b', 'e'), ('d', 'h')][('a', 'f'), ('g', 'd'), ('e', 'b'), ('c', 'h')][('a', 'f'), ('g', 'd'), ('e', 'c'), ('b', 'h')][('a', 'f'), ('g', 'e'), ('d', 'b'), ('c', 'h')][('a', 'f'), ('g', 'e'), ('d', 'c'), ('b', 'h')][('a', 'g'), ('f', 'd'), ('e', 'b'), ('c', 'h')][('a', 'g'), ('f', 'd'), ('e', 'c'), ('b', 'h')][('a', 'g'), ('f', 'e'), ('d', 'b'), ('c', 'h')][('a', 'g'), ('f', 'e'), ('d', 'c'), ('b', 'h')] |
Why can't I make a column with extracted months from the 'dates' column in my DataFrame? I have a dataframe with dates, and I want to make a column with only the month of the corresponding date in each row. First, I converted my dates to ts objects like this:df['Date'] = pd.to_datetime(df['Date'])After that, I tried to make my new column for the month like this:df['Month'] = df['Date'].monthHowever, it gives me an error: AttributeError: 'Series' object has no attribute 'month'I do not understand why I can't do it like this. I double checked whether the conversion to ts objects actually works, and that does work. Also, if I extract 1 date using slicing, I can append .month to get the month. I technically could solve the problem by looping over all indices and then slicing for each index, but my dataframe contains 166000+ rows so that is not an option. | .month exists on a single Timestamp (which is what slicing out one element gives you), but a Series of datetimes exposes the datetime properties through the dt accessor object:df['Month'] = df['Date'].dt.month
Creating a matrix from the data inside the csv I'm reading a CSV-file (data is comma separated), appending the two columns inside this file into two different arrays named 'x_train' and 'y_train'. The problem is that I can't manage to form the data the way I wanted to. So, to summarise; I want each entry for row[0] to be appended in x_train and row[1] for y_train.import numpy as npimport csvx_train = []y_train = []with open("length_weight.csv", newline='') as csvfile: reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC) for row in reader: x_train.append(row[0]) y_train.append(row[1])x_train = np.mat(x_train)y_train = np.mat(y_train)A small portion of the CSV-file8.070000000000000284e+01,1.126768031895251987e+018.040000000000000568e+01,1.195844519276935358e+017.250000000000000000e+01,8.317461617744008606e+001.030000000000000000e+02,1.880844309373589951e+011.075999999999999943e+02,1.947419293659330108e+017.940000000000000568e+01,9.877652348817933969e+008.190000000000000568e+01,1.127064360995226977e+011.015999999999999943e+02,1.640426417487080357e+011.085999999999999943e+02,1.749193091101176378e+01Expected output: [[1.12341234], [1,43214321], ...]But actual output is: [[1.12341234, 1.12341234, ...]] | If you need every number inside a list you can do it directly while appending in to x_train and y_train:import numpy as npimport csvx_train = []y_train = []with open("length_weight.csv", newline='') as csvfile: reader = csv.reader(csvfile, quoting=csv.QUOTE_NONNUMERIC) for row in reader: x_train.append([row[0]]) y_train.append([row[1]]) |
Row wise extraction of common elements from 2 lists of list I have two lists of list with equal len in Python (let's say 3 for this example).A = [['Horse','Duck','Goat'],['Rome','New York'],['Apple','Rome','Goat','Boat']]B = [['Carrot','Duck'],['Car','Boat','Plane'],['Goat','Apple','Boat']]I would like to match elements in each row and create a new list of the common elements. The resultant output I require is:c = [['Duck'],[],['Apple','Goat','Boat']]and,d = [1,0,3] ; where d is a list with the count of common elements at each row.Note that within each row of the list of lists, elements can appear in any order. | Using list comprehension and zip:>>> A = [['Horse','Duck','Goat'],['Rome','New York'], ['Apple','Rome','Goat','Boat']]>>> B = [['Carrot','Duck'],['Car','Boat','Plane'], ['Goat','Apple','Boat']]>>> c = [[x for x in a if x in b] for a, b in zip(A, map(set, B))]>>> d = [len(x) for x in c]>>> # or d = list(map(len, c)) # you can omit `list` in python 2.x>>> c[['Duck'], [], ['Apple', 'Goat', 'Boat']]>>> d[1, 0, 3] |
Pass all arguments of a function to another function I want to have a class that I can create subclasses of that has a print function that only prints on a particular condition.Here's basically what I'm trying to do:class ClassWithPrintFunctionAndReallyBadName: ... def print(self, *args): if self.condition: print(*args)This works already except for the fact that there are arguments that have to be explicitly stated with the default print function, such as end (example: print('Hello, world!', end='')). How can I make my new class's print function accept arguments such as end='' and pass them to the default print? | The standard way to pass on all arguments is as @JohnColeman suggested in a comment:class ClassWithPrintFunctionAndReallyBadName: ... def print(self, *args, **kwargs): if self.condition: print(*args, **kwargs)As parameters, *args receives a tuple of the non-keyword (positional) arguments, and **kwargs is a dictionary of the keyword arguments.When calling a function with * and **, the former tuple is expanded as if the parameters were passed separately and the latter dictionary is expanded as if they were keyword parameters. |
Install a python package/module from github in local folder an use it IssueI would like to install with pip3 a python module from github into a local folder named local_lib/ and then use it in a script, without any virtualenv.ContextHere is my folder structure :.+-- local_lib/ // Folder where the package must be installed+-- my_script.pyHere is the command line i use to install the path.py package from github into the local_lib/ folder :pip3 install --upgrade --target local_lib git+https://github.com/jaraco/path.py.git Here is the content of the local_lib/ folder after the command line :.+-- local_lib/ // Folder where the package must be installed| +-- __pycache__| +-- importlib_metadata-0.8.dist-info| +-- path.py-11.5.1.dev20+g3684c4d.dist-info| +-- zipp-0.3.3.dist-info| +-- importlib_metadata+-- my_script.pyHere is the content of my_script.py :#!/usr/bin/env python3# -*- coding: utf-8 -*-from local_lib.path import Pathif __name__ == '__main__':: print(Path('.') / 'generated_folder')But when i execute the script with python3 my_script.py, i get the following error of import : Traceback (most recent call last): File "my_program.py", line 4, in module from local_lib.path import Path ModuleNotFoundError: No module named 'local_lib.path' Should i change the way i import the package into my_scipt.py or should i change the command line to install the package ? | You have to tell Python that it has to look in local_lib for modules. E.g. by adding it to sys.path in your script (before importing from it) or by adding it to your PYTHONPATH environment variable. |
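For example, prepending the folder inside my_script.py before the import; note this assumes the module installed by path.py is importable as path (its top-level module name), not as local_lib.path:

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    import os
    import sys

    # look in ./local_lib before the site-wide locations
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "local_lib"))

    from path import Path

    if __name__ == '__main__':
        print(Path('.') / 'generated_folder')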
How to safely truncate a quoted string? I have the following string:Customer sale 88% in urm 50Quoted with urllib.parse.quote, it becomes:Customer%20sale%2088%25%20in%20urm%2050%27Then I need to limit its length to a maximum of 30 characters and I use value[:30].The problem is that it becomes "Customer%20sale%2088%25%20in%" which is not valid:The last % is part of %20 from quoted string and makes it an invalid quoted string.I don't have control over the original string, and the final result needs to have a maximum 30 length, so I can't truncate it beforehand. What approach would be feasible? | urllib.quote uses percent-encoding as defined in RFC 3986. This means that encoded character will always be of the form "%" HEXDIG HEXDIG.So you simply can delete any trailing rest of the encoding by looking for a % sign in the last two characters.For example:>>> s=quote("Customer sale 88% in urm 50")[:30]>>> n=s.find('%', -2)>>> s if n < 0 else s[:n]'Customer%20sale%2088%25%20in' |
How to extract one of the histogram plots resulting from using pd.Dataframe.hist()? when I use the hist() from Pandas it produces a series of histograms for all the features in the dataset. I want to know how to extract/select/reference only one of the histograms returned by hist()?For example, let'say I have the following code:import pandas as pdimport numpy as npimport matplotlib.pyplot as pltdf = pd.DataFrame({'X' : np.random.rand(100), 'Y': np.random.rand(100)})dfdf.hist()array([[<matplotlib.axes._subplots.AxesSubplot object at 0x00000150DAC658C8>, <matplotlib.axes._subplots.AxesSubplot object at 0x00000150DB29AD48>]], dtype=object)I have tried slicing the array of matplotlib axes returned by the hist() method using [] (i.e. df.hist()[0]), but it does not extract only one plot but the two of them. | I believe you can pass in a column name to hist() in order to select one of the histograms.df.hist(column = column_name) |
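With the example DataFrame from the question, that looks like this; hist then returns an array holding only the selected subplot's axes:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame({'X': np.random.rand(100), 'Y': np.random.rand(100)})

    axes = df.hist(column='X')   # only the 'X' histogram is drawn
    plt.show()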
How to display the correct date century in Pandas? I have following data in one of my columns:df['DOB']0 01-01-841 31-07-852 24-08-853 30-12-934 09-12-775 08-09-906 01-06-887 04-10-898 15-11-919 01-06-68Name: DOB, dtype: objectI want to convert this to a datatype column.I tried following:print(pd.to_datetime(df1['Date.of.Birth']))0 1984-01-011 1985-07-312 1985-08-243 1993-12-304 1977-09-125 1990-08-096 1988-01-067 1989-04-108 1991-11-159 2068-01-06Name: DOB, dtype: datetime64[ns]How can I get the date as 1968-01-06 instead of 2068-01-06? | You can first convert to datetimes and if years are above or equal 2020 then subtract 100 years created by DateOffset:df['DOB'] = pd.to_datetime(df['DOB'], format='%d-%m-%y')df.loc[df['DOB'].dt.year >= 2020, 'DOB'] -= pd.DateOffset(years=100)#same like#mask = df['DOB'].dt.year >= 2020#df.loc[mask, 'DOB'] = df.loc[mask, 'DOB'] - pd.DateOffset(years=100)print (df) DOB0 1984-01-011 1985-07-312 1985-08-243 1993-12-304 1977-12-095 1990-09-086 1988-06-017 1989-10-048 1991-11-159 1968-06-01Or you can add 19 or 20 to years by Series.str.replace and set valuies by numpy.where with condition.Notice: Solution working also for years 00 for 2000, up to 2020.s1 = df['DOB'].str.replace(r'-(\d+)$', r'-19\1')s2 = df['DOB'].str.replace(r'-(\d+)$', r'-20\1')mask = df['DOB'].str[-2:].astype(int) <= 20df['DOB'] = pd.to_datetime(np.where(mask, s2, s1))print (df) DOB0 1984-01-011 1985-07-312 1985-08-243 1993-12-304 1977-09-125 1990-08-096 1988-01-067 1989-04-108 1991-11-159 1968-01-06If all years are below 2000:s1 = df['DOB'].str.replace(r'-(\d+)$', r'-19\1')df['DOB'] = pd.to_datetime(s1, format='%d-%m-%Y')print (df) DOB0 1984-01-011 1985-07-312 1985-08-243 1993-12-304 1977-12-095 1990-09-086 1988-06-017 1989-10-048 1991-11-159 1968-06-01 |
Radiobutton navigation and value storing I am trying to write a multiple choice quiz using Python Tkinter. I have a 2 part question.I have radio buttons that display the choices and collect the selected option. I also have a created a button to navigate to the next question or back to the previous question as well as another button to view the score.Part 1 - How do I keep the selected option of the radio button present for each question when navigation backwards/forwards through the quiz?Part 2 - The way I have thought out how the view score button should work is: Compare each collected option (saved in a list?) to the correct answerCalculate scoreDisplayPoints 2 and 3 are the easiest part for me. Can you indicate to me the right direction to go on Point number one? from tkinter import messageboximport tkinter as tkfrom tkinter import *# question listq = [ "question 1", "question 2", "question 3", "question 4"]# options listoptions = [ ["a","b","c","d"], ["b","c","d","a"], ["c","d","a","b"], ["d","a","b","c"],]# correct answers lista = [3,4,1,2]class Quiz: def __init__(self, master): self.opt_selected = IntVar() self.qn = 0 self.correct = 0 self.ques = self.create_q(master, self.qn) self.opts = self.create_options(master, 4) self.display_q(self.qn) self.button = Button(master, text="Previous Question", command=self.back_btn, width=16, borderwidth=3, relief=RAISED) self.button.pack(side=LEFT) self.button = Button(master, text="Next Question", command=self.next_btn, width=16, borderwidth=3, relief=RAISED) self.button.pack(side=LEFT) self.button = Button(master, text="View Score", command=self.score_viewer, width=16, borderwidth=3, relief=RAISED) self.button.pack(side=LEFT) # define questions def create_q(self, master, qn): w = Label(master, text=q[qn], anchor='w', wraplength=400, justify=LEFT) w.pack(anchor='w') return w # define multiple options def create_options(self, master, n): b_val = 0 b = [] while b_val < n: btn = Radiobutton(master, text="foo", variable=self.opt_selected, value=b_val+1) b.append(btn) btn.pack(side=TOP, anchor="w") b_val = b_val + 1 return b # define questions for display when clicking on the NEXT Question Button def display_q(self, qn): b_val = 0 self.opt_selected.set(0) self.ques['text'] = q[qn] for op in options[qn]: self.opts[b_val]['text'] = op b_val = b_val + 1 # define questions for display when clicking on the PREVIOUS Question Button def display_prev_q(self, qn): b_val = 0 self.opt_selected.set(0) self.ques['text'] = q[qn] for op in options[qn]: self.opts[b_val]['text'] = op b_val = b_val + 1 # check option selected against correct answer list def check_q(self, qn): if self.opt_selected.get() == a[qn]: self.correct += 1 else: self.correct += 0 # print results def print_results(self): print("Score: ", self.correct, "/", len(q)) # define PREVIOUS button def back_btn(self): self.qn = self.qn - 1 self.display_prev_q(self.qn) # define NEXT button def next_btn(self): # if self.check_q(self.qn): # print("Correct") # self.correct += 1 self.qn = self.qn + 1 self.display_prev_q(self.qn) # if self.qn >= len(q): # self.print_results() # else: # self.display_q(self.qn) # define SCORE view button and score results def score_viewer(self): score_viewer = messagebox.askquestion("Warning", 'Would you like to view your current score?', icon='warning') if score_viewer == 'yes': self.check_q(self.qn) corr_ans = self.correct total_quest = len(q) output = '{:.1%}'.format(self.correct / len(q)) score_text = "\nScore: %s " % output output_text = "Correctly answered %a out of %d 
questions. %s" % (corr_ans, total_quest, score_text) messagebox.showinfo("Score", output_text) else: tk.messagebox.showinfo('Return', 'Returning to quiz') | Unfortunately, I think you need change the fundamental architecture of your program and make it much more object-oriented. Specifically, instead of having a bunch of separate lists like you have:# question listq = [ "question 1", "question 2", "question 3", "question 4"]# options listoptions = [ ["a","b","c","d"], ["b","c","d","a"], ["c","d","a","b"], ["d","a","b","c"],]# correct answers lista = [3,4,1,2]I think you should define a custom class to encapsulate questions and their current state, and then create a (single) list of them during application initialization. This approach not only makes it relatively easily to switch from displaying one to another (not to mention keeping track of the current state of each), it also makes it fairly straight-forward to do all the related things you say you wish to do.Here's a complete implementation illustrating what I mean. Note it uses @Bryan Oakley's frame-switching technique similar to what's in his answer to the question Switch between two frames in tkinter to display each question. The primary difference being that the "pages" (questions) are stored in a list referenced via an index instead of in a dict accessed by a class name.Another nice aspect of this design is that the question data is completely separate from the Quiz code, which mean it could be stored externally in a file or database if desired.I also tried to make the code conform to PEP 8 - Style Guide for Python Code (which you should also do as much as possible).import tkinter as tkfrom tkinter.constants import *from tkinter import messageboxclass Question(tk.Frame): """ Frame subclass encapsulating a multiple-option question. """ def __init__(self, master, text, options, correct_ans): super(Question, self).__init__(master) self.text = text self.options = options self.correct_ans = correct_ans self.opt_selected = tk.IntVar() tk.Label(self, text=self.text, anchor=W, wraplength=400, justify=LEFT).pack(anchor=W) for b_val, option in enumerate(self.options, start=1): tk.Radiobutton(self, text=option, variable=self.opt_selected, value=b_val).pack(side=TOP, anchor=W) def check_q(self): """ Check if currently selected option is correct answer. """ return self.opt_selected.get() == self.correct_ansclass Quiz: def __init__(self, master, quiz_questions): self.master = master # The container is a stack of question Frames on top of one another. # The one we want visible will be raised above the others. container = tk.Frame(master) container.pack(side="top", fill="both", expand=True) container.grid_rowconfigure(0, weight=1) container.grid_columnconfigure(0, weight=1) # Create internal list of question Frames. self.questions = [] for args in quiz_questions: q_frame = Question(container, *args) q_frame.grid(row=0, column=0, sticky=NSEW) self.questions.append(q_frame) self.qn = 0 # Current question number. self.display_q() # Show it. # Create naviagtion Buttons. btn = tk.Button(master, width=16, borderwidth=3, relief=RAISED, text="Previous Question", command=self.display_prev_q) btn.pack(side=LEFT) btn = tk.Button(master, width=16, borderwidth=3, relief=RAISED, text="Next Question", command=self.display_next_q) btn.pack(side=LEFT) btn = tk.Button(master, width=16, borderwidth=3, relief=RAISED, text="View Score", command=self.score_viewer) btn.pack(side=LEFT) def display_q(self): """ Show the current question by lifting it to top. 
""" frame = self.questions[self.qn] frame.tkraise() def display_next_q(self): """ Increment question number, wrapping to first one at end, and display it. """ self.qn = (self.qn+1) % len(self.questions) self.display_q() def display_prev_q(self): """ Decrement question number, wrapping to last one at beginning, and display it. """ self.qn = (self.qn-1) % len(self.questions) self.display_q() def score_viewer(self): """ Score results with user consent. """ view_score = messagebox.askquestion( "Warning", 'Would you like to view your current score?', icon='warning') if view_score != 'yes': tk.messagebox.showinfo('Return', 'Returning to quiz') else: # Calculate number of correct answers and percentage correct. correct = sum(question.check_q() for question in self.questions) accuracy = correct / len(self.questions) * 100 messagebox.showinfo("Score", "You have correctly answered %d out of %d questions.\n" "Score: %.1f%%" % (correct, len(self.questions), accuracy))if __name__ == '__main__': # Note this data could also be stored separately, such as in a file. question_data = [('Question 1', ("a1", "b1", "c1", "d1"), 3), ('Question 2', ("b2", "c2", "d2", "a2"), 4), ('Question 3', ("c3", "d3", "a3"), 1), ('Question 4', ("d4", "a4", "b4", "c4"), 2)] root = tk.Tk() root.title('Quiz') quiz = Quiz(root, question_data) root.mainloop() |
Is there way to write hdf5 files row by row in Python? For CSV files we could usewriter = csv.writer(output)writer.writerow([a, b, c, d])Is there anything like that for writing Hdf5 files? | If you are not bound to a specific technology, check out HDFql as this will alleviate you from low-level details when dealing with HDF5 files.To solve your question, you need to create a dataset with two dimensions: the first is extendible and the second has a size of four (based on your code snippet, I assume you want to store four integers per row; also, if the datatype is not an integer, please check HDFql reference manual for an enumeration of all datatypes and change the code snippet below accordingly).In Python, to create such dataset execute (called dset in this example):HDFql.execute("CREATE DATASET dset AS INT(UNLIMITED, 4)")Then, for each row you want to write, execute (please replace val0, val1, val2 and val3 with proper values):HDFql.execute("INSERT INTO dset(-1:::) VALUES(%d, %d, %d, %d)" % (val0, val1, val2, val3))... finally, extend the first dimension of dataset dset by one like this:HDFql.execute("ALTER DIMENSION dset TO +1")Repeat code snippet line #2 and #3 as many times as the rows you want to write. |
Skipping duplicated when generating combinations I have this code: from collections import Counterdef groups(d, l, c = []): if l == len(c): yield c else: for i in d: if i not in c: _c = Counter([j for k in [*c, i] for j in k]) if all(j < 3 for j in _c.values()): yield from groups(d, l, c+[i])data = [(1,2),(2,3),(2,4),(2,5),(2,6),(3,1),(3,2),(3,4)]result = list(groups(data, 3))This code is generating triples of pairs like this:[[(1, 2), (2, 3), (3, 1)], [(1, 2), (2, 3), (3, 4)], [(1, 2), (2, 4), (3, 1)],1[(1, 2), (2, 4), (3, 4)], [(1, 2), (2, 5), (3, 1)], [(1, 2), (2, 5), (3, 4)] ...The problem is, that there are duplicates like this: [(1, 2), (2, 3), (3, 1)] and [(2, 3), (1, 2), (3, 1)]Is there a way how to avoid them in process of generating? | You are reinventing the wheel. Simply use itertools.combinations:from itertools import combinationsdata = [(1, 2), (2, 3), (2, 4), (2, 5), (2, 6), (3, 1), (3, 2), (3, 4)]print(list(combinations(data, 3)))# [((1, 2), (2, 3), (2, 4)), ((1, 2), (2, 3), (2, 5)), ...You can confirm that this does not have repetitions by checking the length of the returned list (which is 56), which is exactly what you would expect (8 choose 3 is 56)If you need to apply custom logic you can still do that:from itertools import combinationsdata = [(1, 2), (2, 3), (2, 4), (2, 5), (2, 6), (3, 1), (3, 2), (3, 4)]wanted_combinations = []for combination in combinations(data, 3): # apply logic if condition: wanted_combinations.append(combination) |
Get timestamps with the same time_zone from all nodes in distributed system with Python I am building a mechanism to store information with the timestamp in a distributed system. Assuming that the information from all nodes in a distributed system will be merged together and sorted according to timestamp, how do I make sure that all the timestamps from all systems refer to the same time_zone in Python?From my research, time.time() returns the time since Epoch, but it might return different results depending on the platform:Does Python's time.time() return a timestamp in UTC?Another solution that comes to my mind is to use datetime.utcnow() from the datetime package. If I use datetime.utcnow() in all nodes, from my understanding all nodes will be using the same time_zone (UTC), hence the timestamps between all the nodes will be in sync. Can anyone confirm if I am correct in my logic? | If you want to synchronize data by time, the system clock is not a good basis in a distributed system; it is simply unreliable. There are many scenarios where a simple failure skews it, for example: someone changes the local time on a machine (maliciously or accidentally); an out-of-date machine joins the cluster; "synchronized" clocks drift at slightly different rates. That can cause hard-to-trace anomalies. NTP can help: you can keep the machines' clocks synchronized with this protocol. Such strategies are known as a global clock; they are not easy to implement and they add latency. As far as I know, Cassandra relies on NTP to keep node clocks in step.
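On the narrower Python question: if the node clocks are kept in step (for example via NTP), emitting timezone-aware UTC timestamps at least removes any timezone ambiguity when the records are merged and sorted. A minimal sketch, independent of the clock-synchronization problem above:

    from datetime import datetime, timezone

    def utc_now():
        # timezone-aware UTC; identical meaning on every node regardless of local settings
        return datetime.now(timezone.utc)

    record = {"payload": "some event", "ts": utc_now().isoformat()}
    print(record["ts"])   # e.g. 2020-01-01T12:00:00.123456+00:00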
Dedupe a list of dicts where the match criteria is multiple key value pairs being identical For the given sample input list, I want to dedupe the dicts based on the values of the keys code, tc, signal, and in_force all matching.sample input:signals = [ None, None, {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 1, 'target': 0}, {'code': 'lr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 2, 'target': 1}, {'code': 'sr', 'tc': 1, 'signal': '2U-2D', 'in_force': True, 'trigger': 3, 'target': 2}, None, {'code': 'sr', 'tc': 0, 'signal': '1-2U-2D', 'in_force': True, 'trigger': 4, 'target': 3}, {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': False, 'trigger': 5, 'target': 4}, {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 6, 'target': 5}, None, {'code': 'lr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 7, 'target': 6}, {'code': 'sr', 'tc': 1, 'signal': '2U-2D', 'in_force': True, 'trigger': 8, 'target': 7}, {'code': 'sr', 'tc': 0, 'signal': '1-2U-2D', 'in_force': True, 'trigger': 9, 'target': 8}, {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': False, 'trigger': 0, 'target': 9},]expected/desired output:[ {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 1, 'target': 0}, {'code': 'lr', 'tc': 0, 'signal': '2U-2D', 'in_force': True, 'trigger': 2, 'target': 1}, {'code': 'sr', 'tc': 1, 'signal': '2U-2D', 'in_force': True, 'trigger': 3, 'target': 2}, {'code': 'sr', 'tc': 0, 'signal': '1-2U-2D', 'in_force': True, 'trigger': 4, 'target': 3}, {'code': 'sr', 'tc': 0, 'signal': '2U-2D', 'in_force': False, 'trigger': 5, 'target': 4},] The order of the list does not need to be preserved, and whether it returns the 1st or nth matching dict in the list does not matter.I could make a very verbose version of this reference code that creates each list of matching key/values, but I feel like there's got to be a better way.new_list = []for position, signal in enumerate(signals): if type(signal) == dict: if { key: value for key, value in signal.items() if signal["code"] == "sr" and signal["tc"] == 0 and signal["signal"] == "2U-2D" and signal["in_force"] == True }: new_list.append(signal) | I'd suggest something like this, with only Python's standard library:result = []seen = set()for s in signals: if not isinstance(s, dict): continue signature = (s['code'], s['tc'], s['signal'], s['in_force']) if signature in seen: continue seen.add(signature) result.append(s) |
create nested dictionary or collection counter with pandas and python I would like to create a nested dictionary or Collection in python by groupingseriesA = ["groupA", "groupA", "groupB", "groupB", "groupC"]seriesB = ["item1", "item1," "item3", "item1", "item2"]Desired output:{ 'groupA': {'item1': 2}, 'groupB': {'item3': 1}, {'item1':1}, 'groupC': {'item2': 1}}In Python, is there an easier way or would I iterate through the listed tuples, and add a collection counter?nested_dict["groupA"]["item1"] ...should return 2 occurrences. | I'd use collections.defaultdict and collections.Counter:from collections import defaultdict, Counterfrom pprint import pprintseriesA = ["groupA", "groupA", "groupB", "groupB", "groupC"]seriesB = ["item1", "item1", "item3", "item1", "item2"]nested_dict = defaultdict(Counter)for a,b in zip(seriesA, seriesB): nested_dict[a][b] += 1assert nested_dict["groupA"]["item1"] == 2 |
concat 2 dataframes by multiindex Here I have two Nx1 dataframes(ds and code are indices, not columns). My purpose is, for each day, to concat open and close by code.df1:ds code open20160101 001 1.4 002 1.3 003 1.2``` ``` ```20201231 001 12.3 003 2.4 007 3.4anddf2:ds code close20160101 001 1.5 002 1.12 003 1.21``` ``` ```20201231 001 14.5 003 2.2 007 3.3My ideal result isds code open close20160101 001 1.4 1.5 002 1.3 1.12 003 1.2 1.21``` ``` ```20201231 001 12.3 14.5 003 2.4 2.2 007 3.4 3.3I tried to use the following method but it does not workdf = pd.concat([df1,df2], axis = 0)No matter I add "keys" or "levels", I could not get the wanted result, any help would be appreciated | you can use join or merge to merge two dataframe.df = df1.join(df2, how='outer')if the index is not unique, pd.concat with axis=1 will not work. |
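A small reproducible sketch with made-up values in the shape of the question's frames; the outer join keeps (ds, code) pairs that appear in only one frame and fills the missing column with NaN:

    import pandas as pd

    idx1 = pd.MultiIndex.from_tuples(
        [("20160101", "001"), ("20160101", "002"), ("20201231", "001")],
        names=["ds", "code"])
    df1 = pd.DataFrame({"open": [1.4, 1.3, 12.3]}, index=idx1)

    idx2 = pd.MultiIndex.from_tuples(
        [("20160101", "001"), ("20160101", "002"), ("20201231", "007")],
        names=["ds", "code"])
    df2 = pd.DataFrame({"close": [1.5, 1.12, 3.3]}, index=idx2)

    df = df1.join(df2, how="outer")
    print(df)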
migrate error 'No migration to apply' also not add table in postgresql in django I'm trying to create a model for an eCommerce site but after makemigrations when I trying to migrate terminal show "No migration to apply" but when I checked database no new table was there. Please help me.`from django.db import modelsfrom django.utils import timezone# Create your models here.class Collection(models.Model): name = models.CharField(max_length=200, null=True) def __str__(self): return self.nameclass ProductCategory(models.Model): name = models.CharField(max_length=100, null=True) def __str__(self): return self.nameclass Brand(models.Model): name = models.CharField(max_length=100, null=True) def __str__(self): return self.nameclass Color(models.Model): color = models.CharField(max_length=50, null=True) def __str__(self): return self.colorclass Size(models.Model): size = models.CharField(max_length=20, null=True) def __str__(self): return self.sizeclass Products(models.Model): name = models.CharField(max_length=200, null=True) for_people = models.ForeignKey(Collection, on_delete=models.CASCADE) category = models.ForeignKey(ProductCategory, on_delete=models.CASCADE) description = models.TextField() old_price = models.FloatField(null=True, blank=True) price = models.FloatField(null=True) input_date = models.DateTimeField(auto_now=False, auto_now_add=True) update_date = models.DateTimeField(auto_now=True, auto_now_add=False) def __str__(self): return self.nameclass Image(models.Model): product_name = models.ForeignKey(Products, related_name='pro_images', on_delete=models.CASCADE) image = models.ImageField(upload_to='media/product_img/', null=True, blank=True) def __str__(self): return self.product_name.name + 'image'`The below picture of my terminal please check itTerminal show | python manage.py migrate productAfter migrate with the specific model name it's worked for me. |
Problems when implementing Keras model in Tensorflow I'm just starting off with Tensorflow.I tried implementing a model to classify digits in the MNSIT dataset.I am familiar with Keras, so I first used it to create the model.Keras code:from keras.models import Sequentialfrom keras.layers import Densefrom keras.datasets import mnistfrom os import pathimport numpy as npnetwork = Sequential()network.add(Dense(700, input_dim=784, activation='tanh'))network.add(Dense(500, activation='tanh'))network.add(Dense(500, activation='tanh'))network.add(Dense(500, activation='tanh'))network.add(Dense(10, activation='softmax'))network.compile(loss='categorical_crossentropy', optimizer='adam')(x_train, y_temp), (x_test, y_test) = mnist.load_data()y_train = vectorize(y_temp) # I defined this function to create vectors of the labels. It works without issues.x_train = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])network.fit(x_train, y_train, batch_size=100, epochs=3)x_test = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])scores = network.predict(x_test)correct_pred = 0for i in range(len(scores)): if np.argmax(scores[i]) == y_test[i]: correct_pred += 1print((correct_pred/len(scores))*100)The above code gives me an accuracy of around 92%.I tried implementing the same model in Tensorflow:import sysimport tensorflow as tffrom tensorflow.examples.tutorials.mnist import input_datadata = input_data.read_data_sets('.', one_hot=True)sess = tf.InteractiveSession()x = tf.placeholder(tf.float32, [None, 784])y = tf.placeholder(tf.float32, [None, 10])w = tf.Variable(tf.zeros([784, 700]))w2 = tf.Variable(tf.zeros([700, 500]))w3 = tf.Variable(tf.zeros([500, 500]))w4 = tf.Variable(tf.zeros([500, 500]))w5 = tf.Variable(tf.zeros([500, 10]))h1 = tf.nn.tanh(tf.matmul(x, w))h2 = tf.nn.tanh(tf.matmul(h1, w2))h3 = tf.nn.tanh(tf.matmul(h2, w3))h4 = tf.nn.tanh(tf.matmul(h3, w4))h = tf.matmul(h4, w5)loss = tf.math.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=h, labels=y))gradient_descent = tf.train.AdamOptimizer().minimize(loss)correct_mask = tf.equal(tf.argmax(h, 1), tf.argmax(y, 1))accuracy = tf.reduce_mean(tf.cast(correct_mask, tf.float32))sess.run(tf.global_variables_initializer())for i in range(3): batch_x, batch_y = data.train.next_batch(100) loss_print = tf.print(loss, output_stream=sys.stdout) sess.run([gradient_descent, loss_print], feed_dict={x: batch_x, y: batch_y})ans = sess.run(accuracy, feed_dict={x: data.test.images, y: data.test.labels})print(ans)However, this code only gave me an accuracy of around 11%.I tried increasing the number of epochs to 1000, but the result didn't change. Furthermore, the loss in every epoch was the same (2.30).Am I missing something in the Tensorflow code? | Turns out, the problem was that I initialized the weights as zeros!Simply changingw = tf.Variable(tf.zeros([784, 700]))w2 = tf.Variable(tf.zeros([700, 500]))w3 = tf.Variable(tf.zeros([500, 500]))w4 = tf.Variable(tf.zeros([500, 500]))w5 = tf.Variable(tf.zeros([500, 10]))tow = tf.Variable(tf.random_normal([784, 700], seed=42))w2 = tf.Variable(tf.random_normal([700, 500], seed=42))w3 = tf.Variable(tf.random_normal([500, 500], seed=42))w4 = tf.Variable(tf.random_normal([500, 500], seed=42))w5 = tf.Variable(tf.random_normal([500, 10], seed=42))gave significant improvements. |
When I run the code in vs code, the results seem to appear and then disappear quickly, how do I fix this? I downloaded Anaconda and VS Code and tried to link them.However, when I just test very simple code that just prints "hello world", it did not show the result in the terminal. So I tried to change the default terminal setting to one of other options (Command Prompt, Powershell, Windows Powershell), but none of them solved the problem.**I can see the result, if I debug python file. The problem is only showed in terminalTerminal shows this first:And it changed to this:How can I see the result? | After starting your application (debug mode), click View > Output (Ctrl + Alt + O) to show the output window. Stop your application and restart Visual Studio. Next time you run your application the output window should be visible automatically because Visual Studio remembers your opened windows in debug mode. |
using python read a column 'H' from csv and implement this function SUM(H16:H$280)/H$14*100 Using python read a column 'H' from a dataframe and implement this function:CDF = {SUM(H1:H$266)/G$14}*100Where:H$266 is the last element of the column, andG$14 is the total sum of the column H.In sum(), the first variable iterates (H1, H2, H3 ... H266) but the last value remains the same (H$266). So the first value of CDF is obviously 100 and then it goes on decreasing downwards.I want to implement this using dataframe. | As an example, you could do this:from pandas import Seriess = Series([1, 2, 3]) # H1:H266 datasum_of_s = s.sum() # G14def calculus(subset, total_sum): return subset.sum() / total_sum * 100result = Series([calculus(s.iloc[i:], sum_of_s) for i in range(len(s))])print(result)You should adapt it to your dataset, but basically it's the idea. Let me know if it works. |
python win32print can't set custom page size
I am trying to print a pdf file with a custom page size in python with win32print. I can change other settings like the number of copies, but setting a custom page length and width is not working; it always tries to fit the pdf content into the page by covering the whole page. This is my code:

import win32print
import win32api

printers = win32print.EnumPrinters(win32print.PRINTER_ENUM_LOCAL)
PRINTER_DEFAULTS = {"DesiredAccess": win32print.PRINTER_ALL_ACCESS}
temprint = printers[1][2]
handle = win32print.OpenPrinter(temprint, PRINTER_DEFAULTS)
level = 2
attributes = win32print.GetPrinter(handle, level)
attributes['pDevMode'].PaperWidth = 600
attributes['pDevMode'].PaperLength = 30
attributes['pDevMode'].PaperSize = 0
print(win32print.SetPrinter(handle, level, attributes, 0))
win32api.ShellExecute(0, 'printto', 'test.pdf', '"%s"' % temprint, '.', 0)
win32print.ClosePrinter(handle)

Can anyone tell me what I am doing wrong here? | I am not sure if this also applies in this case, but from the class documentation I remember that the values for the mentioned attributes are given in tenths of a millimetre.
Your values here don't correspond to that:

attributes['pDevMode'].PaperWidth = 600
attributes['pDevMode'].PaperLength = 30
attributes['pDevMode'].PaperSize = 0
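To make the units concrete: if the goal were, say, a 100 mm x 150 mm page (the target size here is an assumption for illustration), the DEVMODE fields would be set in tenths of a millimetre, for example:

# Values are in tenths of a millimetre, so 1000 == 100 mm
attributes['pDevMode'].PaperWidth = 1000    # 100 mm wide
attributes['pDevMode'].PaperLength = 1500   # 150 mm long
attributes['pDevMode'].PaperSize = 0        # left at 0 so the explicit width/length can take effect

Whether a custom size set this way is honoured can still depend on the printer driver, so treat this as a sketch of the unit conversion rather than a guaranteed fix.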
Combine two lists while adding common values
I have two lists of tuples I'd like to combine, while adding the second value when the first value matches.
Example input:

listOne = [('a', 1), ('b', 3), ('c', 2), ('d', 5)]
listTwo = [('a', 2), ('b', 1), ('c', 4)]

Desired output:

[('a', 3), ('b', 4), ('c', 6), ('d', 5)]

What would be the easiest way to do this? |

from collections import Counter

result = list((Counter(dict(listOne)) + Counter(dict(listTwo))).items())
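For illustration, running the Counter approach on the example input produces the desired pairs (the exact ordering can vary, since the intermediate structure is a dict/Counter):

from collections import Counter

listOne = [('a', 1), ('b', 3), ('c', 2), ('d', 5)]
listTwo = [('a', 2), ('b', 1), ('c', 4)]

result = list((Counter(dict(listOne)) + Counter(dict(listTwo))).items())
print(result)  # [('a', 3), ('b', 4), ('c', 6), ('d', 5)]

One caveat worth knowing: Counter addition drops keys whose combined value is zero or negative, so this trick fits counts but not arbitrary signed numbers.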
transpose the output of a sql query using pyspark
I have a sparksql select query as below:

select max(age), min(age), avg(age), max(sal), min(sal), avg(sal) from Emp;

The output dataframe is getting created as below:

max(age)  min(age)  avg(age)  max(sal)  min(sal)  avg(sal)
46        23        31        10000     2000      5000

My requirement is that the dataframe should be as below, using pyspark to transpose:

columns  max    min   avg
age      46     23    31
sal      10000  2000  5000

Thanks for the help in advance. | The easiest way would be to run two queries (one for sal and one for age) and union them:

select 'age' as column, max(age) as max, min(age) as min, avg(age) as avg from Emp;
select 'sal' as column, max(sal) as max, min(sal) as min, avg(sal) as avg from Emp;

Load those into two dataframes df_sal and df_age and union them:

final = df_sal.union(df_age)

Update: in case only a single query can be done (as commented by the OP), the stack method can help you.

from pyspark.sql import Row
import pyspark.sql.functions as F

df = spark.createDataFrame([
    Row(avg_sal=1, max_sal=2, min_sal=3, avg_age=4, max_age=5, min_age=6)
])
df.show()
+-------+-------+-------+-------+-------+-------+
|avg_sal|max_sal|min_sal|avg_age|max_age|min_age|
+-------+-------+-------+-------+-------+-------+
|      1|      2|      3|      4|      5|      6|
+-------+-------+-------+-------+-------+-------+

(
    df
    .select(F.expr("stack(2, 'sal', avg_sal, max_sal, min_sal, 'age', avg_age, max_age, min_age) as (column, avg, max, min)"))
    .show()
)
+------+---+---+---+
|column|avg|max|min|
+------+---+---+---+
|   sal|  1|  2|  3|
|   age|  4|  5|  6|
+------+---+---+---+

In the example I renamed the input columns to avoid problems with brackets in column names. This can be done directly in the SQL query.
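To make that last remark concrete, the renaming can happen in the original query itself, so the single-query version becomes (a sketch; spark is assumed to be the usual SparkSession):

df = spark.sql("""
    select max(age) as max_age, min(age) as min_age, avg(age) as avg_age,
           max(sal) as max_sal, min(sal) as min_sal, avg(sal) as avg_sal
    from Emp
""")

# stack() unpivots the single row into one row per original column group
result = df.selectExpr(
    "stack(2, 'age', max_age, min_age, avg_age, "
    "'sal', max_sal, min_sal, avg_sal) as (column, max, min, avg)"
)
result.show()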
Python Memory error solutions if permanent access is required
First, I am aware of the amount of Python memory error questions on SO, but so far, none has matched my use case.
I am currently trying to parse a bunch of text files (~6k files with ~30 GB) and store each unique word. Yes, I am building a wordlist; no, I am not planning on doing evil things with it, it is for the university.
I implemented the list of found words as a set (created with words = set([]), used with words.add(word)) and I am just adding every found word to it, considering that the set mechanics should remove all duplicates.
This means that I need permanent access to the whole set for this to work (or at least I see no alternative, since the whole list has to be checked for duplicates on every insert).
Right now, I am running into MemoryError about 25% through, when it uses about 3.4 GB of my RAM. I am on a Linux 32bit, so I know where that limitation comes from, and my PC only has 4 Gigs of RAM, so even 64 bit would not help here.
I know that the complexity is probably terrible (probably O(n) on each insert, although I don't know how Python sets are implemented (trees?)), but it is still (probably) faster and (definitely) more memory efficient than adding each word to a primitive list and removing duplicates afterwards.
Is there any way to get this to run? I expect about 6-10 GB of unique words, so using my current RAM is out of the question, and upgrading my RAM is currently not possible (and does not scale too well once I start letting this script loose on larger amounts of files).
My only idea at the moment is caching on disk (which will slow the process down even more), or writing temporary sets to disk and merging them afterwards, which will take even more time and the complexity would be horrible indeed. Is there even a solution that will not result in horrible runtimes?
For the record, this is my full source. As it was written for personal use only, it is pretty horrible, but you get the idea.

import os
import sys

words = set([])
lastperc = 0
current = 1
argl = 0

print "Searching for .txt-Files..."
for _, _, f in os.walk("."):
    for file in f:
        if file.endswith(".txt"):
            argl = argl + 1

print "Found " + str(argl) + " Files. Beginning parsing process..."
print "0% 50% 100%"
for r, _, f in os.walk("."):
    for file in f:
        if file.endswith(".txt"):
            fobj = open(os.path.join(r, file), "r")
            for line in fobj:
                line = line.strip()
                word, sep, remains = line.partition(" ")
                if word != "":
                    words.add(word)
                word, sep, remains = remains.partition(" ")
                while sep != "":
                    words.add(word)
                    word, sep, remains2 = remains.partition(" ")
                    remains = remains2
                if remains != "":
                    words.add(remains)
            newperc = int(float(current)/argl*100)
            if newperc - lastperc > 0:
                for i in range(newperc - lastperc):
                    sys.stdout.write("=")
                    sys.stdout.flush()
                lastperc = newperc
            current = current + 1

print ""
print "Done. Set contains " + str(len(words)) + " different words. Sorting..."
sorteddic = sorted(words, key=str.lower)

print "Sorted. Writing to File"
print "0% 50% 100%"
lastperc = 0
current = 1
sdicl = len(sorteddic) - 1
fobj = open(sys.argv[1], "w")
for element in sorteddic:
    fobj.write(element + "\n")
    newperc = int(float(current)/sdicl*100)
    if newperc - lastperc > 0:
        for i in range(newperc - lastperc):
            sys.stdout.write("=")
            sys.stdout.flush()
        lastperc = newperc
    current = current + 1

print ""
print "Done. Enjoy your wordlist."

Thanks for your help and ideas. | You're probably going to need to store the keys on disk. A key-value store like Redis might fit the bill.
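If pulling in an external store is overkill, the "write temporary sets to disk and merge them" idea the poster mentions is in fact the classic external-sort approach, and its cost is not horrible: each chunk is sorted once, and the merge is a single linear pass over the sorted files. A rough sketch; the chunking strategy, file names and helper names are illustrative choices, not part of the original script:

import heapq
import itertools

def write_sorted_chunk(words, path):
    # 'words' is an in-memory set small enough to fit comfortably in RAM
    with open(path, "w") as f:
        for w in sorted(words):
            f.write(w + "\n")

def merge_chunks(chunk_paths, out_path):
    files = [open(p) for p in chunk_paths]
    try:
        # heapq.merge streams the already-sorted chunks in order without
        # loading them fully; groupby drops consecutive duplicates.
        streams = [(line.rstrip("\n") for line in f) for f in files]
        with open(out_path, "w") as out:
            for word, _ in itertools.groupby(heapq.merge(*streams)):
                out.write(word + "\n")
    finally:
        for f in files:
            f.close()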
Kivy label opacity not being consistent
I am learning the basics of Kivy and going through tutorials. I noticed that when I start a Kivy app, the opacity of the labels is not consistent. Sometimes when I start the app, some labels are at full opacity while others are at half opacity. Sometimes I start the app and some labels are entirely opaque and missing.
I can't figure out why this is happening. All of the labels have the same definition and I believe they should not be behaving this way.
I have tried just closing and starting the app over and over to see if there is a pattern, and it seems that the first label in the top left is always consistent while the other 3 labels on the 3 other buttons are not.
I have also tried out some of the demo apps in Kivy, and the demo apps are showing this behavior as well.
Here are the files that I am using:

test.py

import kivy
kivy.require('1.10.1')

from kivy.app import App
from kivy.uix.gridlayout import GridLayout

class GridLayoutApp(App):
    def build(self):
        return GridLayout()

if __name__ == '__main__':
    glApp = GridLayoutApp()
    glApp.run()

gridlayout.kv

<GridLayout>:
    cols: 2
    rows: 2
    spacing: 10
    padding: 10
    Button:
        text: "1st"
        size_hint_x: None
        width: 200
    Button:
        text: "2nd"
    Button:
        text: "3rd"
        size_hint_x: None
        width: 200
    Button:
        text: "4th"

I expected all of the buttons to have the same opacity. Sometimes the program does get this right, but most of the time the opacity is off for some reason.
I have no idea how to even approach this problem, so any suggestions are very much appreciated! | It's a bug that appeared during an sdl2 version update. It's fixed in Kivy 1.11, released a couple of days ago, so make sure your Kivy is up to date.
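A quick way to confirm which version is actually being imported (useful because pip and conda environments can diverge), shown here only as a small illustration:

import kivy
print(kivy.__version__)  # should report 1.11.0 or later for the fix mentioned above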
shell start / stop for python script
I have a simple python script I need to start and stop, and I need to use a start.sh and stop.sh script to do it. I have start.sh:

#!/bin/sh
script='/path/to/my/script.py'
echo 'starting $script with nohup'
nohup /usr/bin/python $script &

and stop.sh:

#!/bin/sh
PID=$(ps aux | grep "/path/to/my/script.py" | awk '{print $2}')
echo "killing $PID"
kill -15 $PID

I'm mainly concerned with the stop.sh script. I think that's an appropriate way to find the pid, but I wouldn't bet much on it. start.sh successfully starts it. When I run stop.sh, I can no longer find the process by ps aux | grep 'myscript.py', but the console outputs:

killing 25052
25058
./stop.sh: 5: kill: No such process

So it seems like it works AND gives an error of sorts with "No such process". Is this actually an error? Am I approaching this in a sane way? Are there other things I should be paying attention to?

EDIT - I actually ended up with something like this:

start.sh

#!/bin/bash
ENVT=$1
COMPONENTS=$2
TARGETS=("/home/user/project/modules/script1.py" "/home/user/project/modules/script2.py")
for target in "${TARGETS[@]}"
do
    PID=$(ps aux | grep -v grep | grep $target | awk '{print $2}')
    echo $PID
    if [[ -z "$PID" ]]
    then
        echo "starting $target with nohup for env't: $ENVT"
        nohup python $target $ENVT $COMPONENTS &
    fi
done

stop.sh

#!/bin/bash
ENVT=$1
TARGETS=("/home/user/project/modules/script1.py" "/home/user/project/modules/script2.py")
for target in "${TARGETS[@]}"
do
    pkill -f $target
    echo "killing process $target"
done

| It is because ps aux | grep SOMETHING also finds the grep SOMETHING process, because SOMETHING matches. After the execution the grep is finished, so it cannot be found afterwards, but it still shows up in the pid list. Add a line:

ps aux | grep -v grep | grep YOURSCRIPT

where -v means exclude. More in man grep.
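A side note not in the original answer: pgrep and pkill (the latter already used in the poster's edited stop.sh) avoid the self-match problem entirely, because they never match their own process. A minimal equivalent of the stop logic, sketched under the same hypothetical path:

#!/bin/sh
# pgrep -f matches against the full command line and excludes itself
PID=$(pgrep -f "/path/to/my/script.py")
echo "killing $PID"
kill -15 $PID
# or, in one step: pkill -15 -f "/path/to/my/script.py"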
Error in importing Sequential from Keras.Models
I have installed Keras using pip install keras and tensorflow version 1.9.0 via

python -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py2-none-any.whl

I followed the directions at this post and chose a version that seemed to be able to install on my computer. I am not sure if it is because the version is too old. I am using Python 2 on a Windows computer. I am running the following import statements and get the following error message.

from keras.models import Sequential
from keras.layers import Dense

Error:

Traceback (most recent call last):
  File "C:\Downloads\keras_code.py", line 2, in <module>
    from keras.models import Sequential
  File "C:\Python27\lib\site-packages\keras\__init__.py", line 21, in <module>
    from tensorflow.python import tf2
  File "C:\Python27\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "C:\Python27\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Python27\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Python27\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Python27\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
    import _pywrap_tensorflow_internal
ImportError: No module named _pywrap_tensorflow_internal

Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

Getting a new computer will not be possible instantaneously; is there a way to get keras/tensorflow to work on an older computer? | @Jellyfish, you are using a very old Tensorflow version. Install the latest Tensorflow version, 2.6.0. The latest Tensorflow version installs the Keras library as well. Use imports as below.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
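One practical caveat worth adding (not part of the original answer): TensorFlow 2.x no longer ships Python 2 wheels, so on the Python 2.7 setup described in the question the upgrade path also means moving to Python 3. A quick way to verify what is actually installed after upgrading:

import sys
import tensorflow as tf

print(sys.version)     # needs to be a Python 3 interpreter for TensorFlow 2.x
print(tf.__version__)  # should report 2.x after the upgrade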